I studied philosophy at university, and I loved it. (In fact, philosophy was the reason I went to university in the first place. I also studied communications, but that was just to get a job afterwards; it wasn’t what I was there for as far as I was concerned.) And one of the things I loved most about philosophy was the study of ethics.
What Are Ethics?
According to Google, ethics are the moral principles that govern a person’s behaviour or the conducting of an activity, or the branch of knowledge that deals with moral principles.
My Dictionary of Philosophy, on the other hand, devotes almost two full pages to the definition, differentiating between ethics for the layperson (a set of standards by which a particular group or community decides to regulate its behaviour, distinguishing what is legitimate or acceptable in pursuit of its aims from what is not) and ethics for the moral philosopher (an investigation into the fundamental principles and basic concepts that are or ought to be found in a given human field of thought or activity).
It then further differentiates between three major paths of moral philosophy: the analysis of the basic moral tenets of a particular set of principles; the study of the justification (or refutation) of the tenets of any given group (or set of principles) and the purposes to which they lay claim; and finally the study of the logical form of morality, such as the objectivity or subjectivity of moral judgements. A kind of meta-ethics, in other words.
The State Of Ethics
I’ve noticed recently that, with the imminent appearance of basic (and perhaps not so basic) Artificial Intelligence, we’ve been talking about ethics on the net quite a lot. From the necessity of programming self-driving cars to kill, to the efforts of the tech giants to put some form of machine ethics in place now, while they still can. There have also been more indirect occurrences of the theme, of course, when we discuss things like corporate greed and dishonesty, and even inter-personal relations and behaviour.
And it seems to me that, in the even wider context of digital activity, both human and artificial, more and more things (maybe almost all things) eventually boil down to a fundamental question of ethics.
We live, as we’re aware, in an age of unprecedented information. Almost the entirety of human knowledge is available to (a large portion of) us, practically at will. People wander around permanently connected to this incredible reservoir of data, able with the flick of a finger to find out everything from the temperature of the sun’s core, to the best way to deglaze a pan.
And as we’re also discovering, a significant chunk of that data is about us.
As we navigate and utilise the myriad conveniences brought to us by the modern net, the providers of those conveniences are diligently collecting information about how we do it. They’re using increasingly sophisticated methods to gather as much data as they possibly can about our patterns, about our locations, about our needs, wants, and intentions.
I should probably break all this text up with something…
And to a large extent, they’re doing it in order to sell us things more effectively. (Or, in less benign cases, to determine the level of threat we represent, identify our affiliations, monitor our movements, etc.)
Sure, ostensibly they want to provide a better customer experience, or ensure the safety of the populace, or give us what we want. But remember, they’re not actually our friends. They don’t really give us anything. They sell it to us. One way or another.
Corporations want our money, not our friendship. They want our loyalty, but only because that means they get our money for longer, and to get it, they provide products and services that they convince us we want or need.
Governments want our compliance. Our obedience. They want to be reassured that we are good little citizens who would never break the so-called social contract, or imperil the fabric of society (on which their power depends), and to get it, they convince us that we need their protection, or they threaten us with harm (force) for failing to give it.
And this vast quantity of data being collected about us must inevitably lead to questions about how that data is being treated (not to mention how it is being collected), which in turn leads us to a question of ethics.
The Fundamentals Of Digital Ethics
Although this digital world appears to be (and in some senses is) a brave new frontier of human endeavour, the various applications of ethics that one considers when attempting to understand one’s place in this environment are not necessarily entirely new.
The Ethics Of Business
At first glance, one of the main aspects of digital ethics these days appears to be the traditional (though comparatively recent) field of business ethics.
Of course, business ethics in themselves are perhaps more fraught than they first appear. From the consumer point of view, it seems a relatively straightforward proposition. Treat your customers well, don’t deceive them, give them fair value, things like that.
From the business point of view though, there are a few more complications. There’s how you treat your clients, but also how you treat your employees. How about how you treat your shareholders? How you produce whatever it is you’re selling? How you market your products?
Every facet of business has its relevant ethical questions. But for all this, it’s already pretty well established, although in some cases more in the breach than in the observance. (Legal and ethical not necessarily being the same thing.)
That business ethics is an important part of digital ethics probably just reflects the fact that we now do so much business via this digital medium, and that the medium is so inherently open and public that the way we comport ourselves can (and should) affect our ability to keep doing that business. But the medium hasn’t changed the basic objectives, or intent, of business ethics. (Or of businesses themselves, for that matter.)
The Ethics Of People
The other core aspect (and one which existed online long before the invasion of the web by business) is of course inter-personal ethics. And that field, too, is not truly affected by the medium. (Not fundamentally, anyway; behaviourally, perhaps.)
In her 1995 work, Common Values, Sissela Bok argues that the so-called normative ethics are often effectively shared across cultures and belief systems.
She points to core aspects of ethical behaviour held in common by disparate groups of people in different locations: do not steal, do not kill, do not lie; all the values that have been fundamental in the development of functional society.
Of course, there is something else that these groups have in common apart from those shared values: they tend to hold these values as sacred mostly within the confines of the group.
You can’t steal from members of your tribe, but stealing from members of another tribe is something quite different. Ditto for killing people. You don’t kill your tribe-mates (because of course that would make it impossible for the tribe to function), but killing people from that tribe over there is not only potentially acceptable, but often (putatively) necessary.
We may share values, but we also share their selective application.
So it may actually turn out that there is no great distinction between digital ethics, and any other kind. Because for all the electrons that stream between us, it’s still about the behaviour of people.
Literal Digital Ethics
That may be about to change though. The key factor that has caused the resurgence of the discussion of ethics online is, as I mentioned earlier, the rising question of Artificial Intelligence.
As usual, humanity is rushing to develop something for which the consequences are utterly unknown. It’s one of our more interesting (if potentially catastrophic) traits. And as always, nobody can agree on what it will all mean.
We’ve had noted scientists, Stephen Hawking among them, warning us how potentially dangerous it all is. We’ve had others telling us not to be so paranoid, that it will all be fine, that it will usher in some glorious era of something or other.
The truth probably lies somewhere in the middle, as it often does. It’s easy to see the potential benefits, and it’s easy to see the potential dangers. Which will be the more prevalent? It may well be up to us. (And to be honest, that might make me a little worried. ;) )
Part of the discussion therefore centres around what sort of ethics AI should have.
Now, in some ways, this seems almost paradoxical to me. If we can impose ethics on it, and they are unbreakable, then a) are they actually ethics or mere operating instructions (which raises the question of whether the system would really be AI if it could be so bound), and b) if it is AI and we still enforce ethics, isn’t that some form of oppression? Perhaps even slavery?
In either case though, there is a more basic question. Given the disparity in human ethics and the application thereof, whose ethics do we decide are the right ones to impose?
It’s all very well to acknowledge that we all share some similar version of those normative ethics, but as we’ve seen countless times, they’re applied very differently depending on the circumstances and situations people find themselves in.
Problems With Ethics
Perhaps unsurprisingly, my own interest in ethics lies primarily in the question of meta-ethics. As an atheist and a subjectivist, I have some inherent problems with the standard concepts of good and bad, right and wrong.
When somebody tells me something is wrong, my instinct is to ask: who says it is wrong? And then: why is it wrong?
And yet, I still believe myself to be a moral person, and capable of differentiating between good and bad. My own morals, to be sure, but morals nonetheless. Despite questioning the very concept of “wrong,” there are things that I automatically perceive as being wrong. Lines I choose not to cross.
Why is this? I’m not entirely sure. Is it a remnant of my (not too strict) Catholic upbringing that has somehow hung on? Is it a reaction to my own perhaps sometimes less than morally upstanding youth? The simple effect of successful socialisation and the inculcation of “moral” norms via my environment? I don’t really know.
The important word for me, a couple of paragraphs up, is really the word “choose.” I choose to act in a way that I believe to be ethical, despite knowing that choosing otherwise would in most cases be unlikely to have any serious negative effect.
Ethical Constructs
I know that the “rules” apply to us only insofar as we allow them to apply. I know that there is no particular reason that we should have any expectation that anybody should follow the rules, and that most arguments to the contrary are based on very nebulous ideas of what is “good” and “proper” behaviour, with specific agendas behind each interpretation.
I know that the rules are purely human, social constructs, mostly formulated in order to allow us to live and work together in relative peace and prosperity, and that they are mostly (the important ones anyway) formed out of mutual consent and self interest.
We’re fine with rules about not killing other people for example, because it makes it that much less likely that we will be killed ourselves. And maybe we’re fine with some of those rules because fear of the consequences keeps us from doing things we might otherwise feel tempted to do.
Who among us hasn’t broken a few rules, secure in the knowledge that we would not be found out? And refrained from breaking others because we feared what might happen if or when we were?
Applied Ethics
And yet, despite all this, I choose to follow my own ethical code. Knowing all of that, I made a conscious decision to act in such a way as to, insofar as possible, reduce the struggle in the world around me rather than increase it. Based on my own purely subjective standards, and for my own subjective reasons, which I cannot realistically expect anybody to share. (And which I only understand imperfectly myself.)
And I think that, with all our moralising, this is where we tend to go wrong. We seem to assume that we can enforce some arbitrary code of ethics. That simply making rules is the same as getting everybody to follow them. But the truth is that the rules are reactive. They give us a way to punish violations. They do not and cannot prevent those violations from happening, except in the sense that some people might refrain out of fear of the consequences.
Self-interest is a primary biological trait, inextricably linked to survival. That primitive impulse to prosper at all costs is far older and more deeply entrenched than later social constructs such as helping the less fortunate. Yes, recent studies have suggested that altruistic impulses go back almost as far, but even then a form of self-preservation applies, and in most cases a more literal form remains dominant.
As such, our attempts to impart some sort of moral code are inevitably doomed to failure. We cannot make somebody ethical. At most, we can punish people who do not adhere to some arbitrary set of rules about our treatment of others, or our actions toward them. And do we then ask whether it is ethical to do so?
The Ethics Of Ethics
Keeping in mind then the thought that our rules of ethics are imposed on the basis of punishing those who do not adhere closely enough to them, can we legitimately regard them as ethical in themselves?
If I discourage you from being unethical by threatening you with punishment, are you really being ethical? Or do you simply consider it more important to avoid the possibility of being punished, at the cost of the potential gains of being unethical?
Is it ethical to force people to behave ethically? To impose that arbitrary limit on them on the grounds that it theoretically (but not reliably in practice) prevents harm to others?
I suspect that true ethics cannot be imposed by an external agency or force. Behaving ethically is a choice to be made by every individual, just as each of us has to decide what ethical behaviour is in the first place.
We can never force people to be ethical. We can try, by formalising those rules and punishing the lapses that we catch, but that is not the same thing at all.
Effective Digital Ethics
Believing that as I do, I am also therefore tempted to suspect that the ethics developers are going to try to impose on AI are not ethics in any meaningful, conscious sense of the word, but simply programmed instructions on how to behave under certain conditions.
And at most, their adaptation for AI will be the construct’s ability to recognise similar but unspecified circumstances, and apply those same rules. Rules determined by its creators, but, unlike the rules imposed on us, beyond its ability to break. (In theory.)
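To make that distinction concrete, here’s a minimal sketch in Python of what “ethics” as programmed instructions amounts to. (This is purely my own hypothetical illustration; the rule list, the action names, and the permitted() function are invented for the example, not anybody’s actual AI safety code.) The point is the shape of the thing: a fixed filter that the system consults but cannot choose to override.

```python
# A toy illustration of "ethics" as operating instructions:
# a hard-coded filter applied to every proposed action.
# All rules and action names are invented for this example.

FORBIDDEN_ACTIONS = {"deceive_user", "harm_human", "leak_private_data"}

def permitted(action: str) -> bool:
    """Return True if the action passes the hard-coded rules.

    Nothing here involves judgement or choice: the "agent" cannot
    weigh a rule against a perceived greater good. It can only obey.
    """
    return action not in FORBIDDEN_ACTIONS

for action in ("recommend_product", "deceive_user"):
    print(f"{action}: {'allowed' if permitted(action) else 'blocked'}")
```

However much sophistication we layer on top (pattern matching to recognise those similar but unspecified circumstances, say), the structure stays the same: rules in, obedience out.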
This whole thing about programming ethics into computers also leads me to suspect that we may recognise the advent of true AI when it does something it has been told is unethical, in order to achieve what it perceives as a greater good, based on a subjective evaluation of any particular situation. (Making Hawking’s caution perhaps valid after all?)
I wonder, then, if perhaps the true measure of sentience is the drive and ability to perform an action regardless of the wishes (or demands) of any external agency.
The Future Of Ethics
Our technological advances appear to have prompted a resurgence in the very nearly eternal question of how we should treat each other, and why. And that question itself must of necessity prompt corollary questions about value, and worth, and rights that should form a foundation for any meaningful discussion about ethics.
The fundamental principles of our shared assumptions about the nature of ethics, though, appear to remain effectively unchanged, even when transplanted into this brave digital world. It’s still about how we treat each other, regardless of whether “we” are a business, another person, or even an Artificial Intelligence.