Tuesday, January 28, 2014

Practical Morality, Part 1



(The first of two parts. Part 2 is here.)


The Social Contract

Wherever you are right now, take a quick look around. Do a quick survey of all the stuff you can see. Think about the number of things around you that other people have made. If you are in your own home, then great, the experiment works even better - the things around you probably belong to you, you make some kind of use of them, and quite possibly your life would be less satisfying without them. Some of these things may even be, if not essential for life, indispensable for a comfortable modern existence.

Now try to count up the number of these things that you could make yourself. I'm trying this now (the counting, not the actual making), and there is really very little that I could contemplate building - perhaps some of the simpler bits of wooden furniture, if really pushed. Quite possibly (and this is not meant as an insult) there is nothing among the stuff that you have around you right now that you could build yourself.

Alternatively, perhaps you are quite adept with your hands, and there are several things you could put together yourself. But presumably, you would need tools for this. And possibly even tools to make those tools. You would need materials, both for the things you want to make and for the tools to make them.

Even if you have the skill and energy to actually make some of the things you own, starting from nothing but a pile of raw materials (ignoring for the moment the complexity of acquiring that pile of raw materials), you wouldn't be very efficient. So many processes are involved in converting raw materials into desirable objects, and so many of them require expert knowledge, skill, practice, and dedicated specialization, that you could never approach the efficiency of a large community of individuals, each pretty much focused on some limited domain of expertise. Economies of scale emerge naturally from such specialization. If all I do is cut down trees, then before too long, I predict that I am going to be better at cutting down trees than somebody who just cuts down a tree whenever he needs a bit of wood. If all I do is cut down trees, then I can invest a lot in having the best possible tools, tailor-made for the job of cutting down trees - I don't need my tools to be also good for digging copper out of the ground, because I don't do that; somebody else does.

And we haven't even begun to think about anything technologically advanced. Getting your laptop to you, for the low price you paid for it, took the research and development efforts of literally thousands of engineers and scientists, practically all of whom got into a position to do that work by engaging in years of dedicated study, during which they could not have supported themselves through full-time employment. For the technology to reach this stage, it took support structures to enable those people to engage in full-time study, and it took efficient dissemination of knowledge, sufficient to make research results readily available and synthesizable in all corners of the world. It also took stable international trade to have all the rare-earth elements and other necessary materials readily available for the manufacture of this product, and it took robust legal protection of intellectual property, in order for all those R&D hours at the product development stage to be worth the investment.

These mechanisms, together with many others, all evolved to help make society function as much as possible to everybody's benefit, are known collectively as "the social contract." They are, for example, what make it reasonable for me to exchange items of real economic value for a few trivial-looking pieces of paper, or a few bits of information on some (to me unknown) computer. They make it possible for me to drive a car in confidence that another vehicle coming towards me will stay on its designated side of the road, allowing us to pass each other without injury. They make it probable that the foods I buy won't poison me, the machines I use won't kill me, and the politicians I vote for won't throw me in jail if I refuse to vote for them next time.

Biologically evolved behaviours, such as my tendency to care much more about close family members than about people I've never met, play a major part in defining our core moral objectives. These may include elements of social cooperation, such as, perhaps, a fundamental desire to live in proximity with other people, leading naturally to a desire to live peacefully with other people. Such genetically determined traits help to make us intrinsically caring about others. The social contract does not necessarily define our core moral values, but, by virtue of the colossal technological benefits it brings us, serves as an indispensable aid for achieving the things we value most. It has the profound effect of making my personal, selfish values intimately entangled with the values of more or less every other human on the planet.


Realistic Moral Relativism: A Practical Matter  

Fact (1): your core moral values are completely determined by the real physical properties of the matter out of which your mind is built. This is what I mean by moral realism. (See my earlier article, and supporting arguments.)

Fact (2): there are no principles of moral value that necessarily hold for all beings in all parts of the universe. This is what I mean by moral relativism. There is one moral meta-principle that holds absolutely, namely Fact (1), above, but it doesn't specify any value that any being must hold.

Fact (1) is the foundation of our moral science. It has the dual advantages of: (i) phenomenal empirical support, and (ii) being logically inescapable. Fact (2) follows as a trivial consequence: since values are determined by arrangements of matter, then different arrangements of matter may support different values.

Fact (2) scares the bejeezus out of people, which I'll discuss more in Part 2. Whatever the reason, however, there is extraordinary resistance to acceptance of Fact (1). When those of us who have come to recognize the potential to develop a moral science try to explain these findings, there is a tendency to present thought experiments aimed at demonstrating the possibility of measuring moral value. These typically involve some highly advanced neurological apparatus, maybe some kind of ultra-high-resolution, perfectly calibrated magnetic resonance scanner, capable of recording all relevant details of a person's evolving brain states, and of using the data to precisely quantify value. If we could do that, we explain, we would know everything about the human condition, and the moral facts would be laid out before us.

There is nothing incorrect about this (once one or two matters of interpretation have been clarified), but the argument often fails to have its desired impact. There seem to be two common reasons for this, both of which I sympathize with. Both follow from the extreme implausibility of the described measurement: recording a complete description of a person's brain state. Person A complains that such a measurement, resulting in complete knowledge of a person's private state of mind, is strictly impossible, thus invalidating the principle we hoped to illustrate. Person B doesn't have the time to philosophically investigate the limits of epistemology, but recognizes the practical impossibility of this - it ain't gonna happen in the foreseeable future - and so dismisses the whole thing as fanciful science fiction, not worthy of a second thought.

Both person A and person B have missed the point. It was supposed to be a thought experiment, illustrating the kinds of information that we might strive to access, in order to advance a moral science. But the truth of Fact (1) does not depend on the ability to attain such complete knowledge, and neither, in fact, does our ability to develop a moral science from it.

To think that a truth does not exist, simply because we are prevented, in principle, from uncertainty-free knowledge of it, is the mind-projection fallacy, pure and simple. Facts exist. Propositions about the real world are either true or false, and their truth state is independent of how confident we feel about them (we might feel that the recursive proposition, X: "I believe confidently that X is true," is a counterexample, but X is not a coherent proposition about the real world - what could it possibly mean?). No science can deliver knowledge that is completely free of uncertainty. This is why the gold standard for expressing scientific advance consists of calculating probabilities. And because we have probability theory, that exquisite invention that saves us from the misery of complete epistemological crisis, science does not need absolute certainty in order to make concrete advances: with incomplete knowledge we continue, often in baby steps, to make real advances in understanding, enabling actual technological gains.

We don't need the full blown complexity of the above thought experiment in order to establish our moral science, or indeed to produce from it incremental strategic advances for society. In fact there are only two things we need, in order to make progress in moral science:

  1. to measure our moral goals
  2. to measure the universe

"Really?" you ask, "that's all?"

Stay with me. First I'll explain why those two, then I'll say a bit about what they mean.

To work efficiently towards our goals, we must have reliable estimates of what our goals are, hence the first requirement. Without these estimates, any effort we expend relies on luck to achieve its objectives - we might just as well do nothing. As for the second requirement: to maximize the probability of attaining one's goals, one has to choose strategically between some set of possible actions. The outcomes of those actions, though, are entirely dependent on the content and behaviour of one's environment - if my goal is to boil a pan of water, then attempting to light a fire under it is not a good strategy if the pan of water happens to be currently at the bottom of a swimming pool. So we need some kind of reasonable model of reality - an estimate of the stuff that populates it, and a somewhat accurate account of the mechanisms by which that stuff interacts.
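
The two requirements can be sketched in decision-theoretic miniature: an agent needs a goal and a model of how its actions play out in the environment. The following toy code uses the pan-of-water example from above; all probabilities and action names are invented purely for illustration.

```python
def best_action(world_state, model):
    """Pick the action with the highest probability of achieving the goal,
    according to the agent's model of its environment."""
    options = model[world_state]
    return max(options, key=options.get)

# Invented numbers: lighting a fire only helps if the pan is on the stove.
# Each entry maps an action to the modelled probability that the water boils.
model = {
    "pan on stove": {"light fire": 0.95, "wait": 0.01},
    "pan in pool":  {"light fire": 0.00, "retrieve pan": 0.90},
}

best_action("pan in pool", model)   # -> "retrieve pan"
```

The point is only that the quality of the choice depends entirely on the quality of the world model: with the wrong estimate of where the pan is, even a perfectly rational agent lights a useless fire.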

Now it's time to qualify what I mean by measure, when I say for example, "measure the universe." A measurement consists of two steps: step one is collecting some set of empirical data, and step two consists of some procedure to draw inferences from that data, usually by combination with previous inferences from previous data. That's it. Notice that there is no statement in there about the quality of the data, or the degree of uncertainty in the resulting inference. Note, though, that as long as a good procedure of inference (a scientific procedure) has been used, we can always construct a model of reality according to which our uncertainty is reduced by the new data. The new data always tells us something new about the world.
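
This two-step picture of measurement - data plus inference - can be made concrete with a minimal Bayesian sketch. The numbers below are invented; the only claim being illustrated is that each new datum, combined with previous inferences, shrinks our uncertainty.

```python
def update(prior_mean, prior_var, datum, noise_var):
    """Combine a prior belief with one noisy observation.

    Standard conjugate-normal update: the posterior precision
    is the sum of the prior precision and the measurement precision.
    """
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * (prior_mean / prior_var + datum / noise_var)
    return post_mean, post_var

mean, var = 0.0, 100.0          # a vague initial belief
for datum in [9.7, 10.4, 9.9]:  # three noisy measurements
    mean, var = update(mean, var, datum, noise_var=1.0)
    # Each datum strictly decreases var: the data always tell us something.

print(round(mean, 2), round(var, 2))  # -> 9.97 0.33
```

Note that nothing here demands high-quality data: a noisier instrument just means a larger `noise_var` and a slower, but still real, reduction in uncertainty.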

Thus, I can measure the universe simply by opening my eyes. If this is my first time measuring the universe, then it's quite a good start! Every time I open my eyes, I can make new inferences, with an expected increase in confidence about the contents of my environment and the mechanisms by which those contents transform themselves.

Similarly, I can interrogate my moral goals, simply by asking myself, 'what do you actually want?' It is perhaps the crudest experiment we can imagine, but we can also easily imagine improved experimental designs. This is the principal activity of the scientist. There is no sense in which we can invalidate a set of raw data, so what we do instead is try to think of ways in which our inference procedure might have failed to capture what is really going on. And once we have thought of some possible failure modes, we can add controls to our experiments. If the machine says "24," then the machine says "24," and it only remains to discover how well the machine is calibrated - what is the correspondence between the machine saying "24," and the thing I actually want to draw inferences about?

So a better protocol for measuring a person's goals might control for the fact that a person may be mistaken about what their goals are. Luckily, psychologists have already developed such protocols, capable of investigating a subject's state of mind, even when she is probably not very well aware of it herself. We can look at other behaviours that, according to separate evidence, are more intimately connected to the aspects of interest of a person's state of mind. Pupil dilation, for example, typically happens without the person's awareness, and is extremely hard to fake. With careful observations of the pupil, one can determine, for example, that a subject was very interested in a particular stimulus, without them having had the faintest idea of it.

The range and sophistication of the protocols and controls we might apply to the problem of measuring value are open ended. Dozens of powerful methodologies exist already for the investigation of mental states (all of which can be cross-checked against one another), from the carefully worded questionnaire, right up to the heroic machines of neuroscience, such as the famed fMRI scanner, which, while often problematic to interpret (see, for example, many articles by Neuroskeptic), still provides an immense richness of data. Thus, we should not let anybody tell us that at the present time, moral value cannot be measured - the only challenge for the future is to reduce the error bars.

One might complain that the fMRI experiment only measures neural activity, whereas what we want to know is the subject's mental state, but in this regard, how is fMRI different from measuring pupil dilation? It is the same calibration problem. This problem is solved in essentially the same way in all science: by trying to think of ways that our inference procedure might be too naive, and doing experiments to test them. To say that there is no way to manage the calibration problem is to suppose that all technological advance to date is the result of pure good fortune.
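
The calibration problem itself is an ordinary piece of inference, and a toy version makes the "machine says 24" example concrete. Here we calibrate an imaginary instrument against stimuli whose true values we control, then invert the fitted relationship to interpret a new reading; every number below is invented for illustration.

```python
# Known stimulus values, and the instrument's reading for each.
true_vals = [0.0, 1.0, 2.0, 3.0, 4.0]
readings  = [3.1, 8.0, 13.2, 17.9, 23.1]  # roughly: reading = 5*true + 3

n = len(true_vals)
mean_x = sum(true_vals) / n
mean_y = sum(readings) / n

# Ordinary least-squares fit: reading ~ slope * true + intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(true_vals, readings))
         / sum((x - mean_x) ** 2 for x in true_vals))
intercept = mean_y - slope * mean_x

# Now the machine saying "24" can be mapped back to the quantity of interest.
inferred = (24.0 - intercept) / slope   # comes out close to 4.2
```

Whether the readings are pupil diameters or voxel activations, the logic is the same: establish the correspondence between reading and quantity of interest on controlled cases, then use it - with its error bars - on the cases we care about.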

In practice, we very often already act as if we know that knowledge of and progress towards our moral objectives are both attainable through rational means. Regardless of what we consciously profess, we do this because experience informs us that it works. For instance, I believe, with a high level of confidence, that my future satisfaction will be compromised if money I have now leaves my possession without me getting something of value in return, and I therefore make conscious efforts to ensure that I do not lose track of my wallet [compulsively checks pocket again ...].

In run-of-the-mill, day-to-day activity, our unconscious, not-too-rigorous adherence to and application of Fact (1), from above, serves us very well. We know this, because whatever departures exist between what really happens and what we would be inclined to describe with the phrase, 'serves us very well,' are small enough for us not to find them particularly obvious. This, of course, is no accident. If such departures were very obvious, then we would tend to modify our behaviour accordingly - this is actually exactly what has happened, and we call the process 'growing up,' though the phrase can apply equally well to the history of a given individual as to the application of selective pressures on gene populations, over time scales of thousands of millennia.

In a similar way, in run-of-the-mill, day-to-day activity, it is enough for me to know that there is some force of gravity tending to make things go down, but if I want to establish a communications satellite in orbit, I had better know about the inverse-square law that describes that force, and a few other complicated things besides. Thus, as we strive to answer ever more exacting moral questions, and as we place ever more challenging demands on our moral technology, it is obvious that we must be ever more rigorous and deliberate in the development of our moral science. This can only happen effectively, if the people who are to work on these grand problems can acknowledge the truth and practical importance of Fact (1).
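
To make the satellite analogy concrete: the inverse-square law gives exactly the kind of exacting answer that "things go down" cannot. For a circular orbit, gravity supplies the centripetal force, which leads to Kepler's relation r³ = GMT²/(4π²); the short calculation below recovers the familiar geostationary orbital radius.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
T = 86164.0     # one sidereal day in seconds (geostationary period)

# Equating gravitational and centripetal force for a circular orbit
# gives r^3 = G * M * T^2 / (4 * pi^2).
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"orbital radius: {r/1000:.0f} km")  # roughly 42,000 km from Earth's centre
```

The rough "down" intuition and the precise calculation describe the same force; the difference is only in how demanding a question we are asking of it.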

Our future flourishing, therefore, must be expected to be greatly enhanced by widespread adoption of the principles of moral science - Fact (1) and its corollaries - through ever improving estimated answers to such questions as (1) what values are primary, as opposed to secondary? (2) what secondary values do we hold in the mistaken belief that they support our primary goals? (3) what common moral values are held by almost all people? (4) how much does the perception of value differ from person to person? and (5) at what point do software engineers need to worry about possible suffering experienced by their algorithms? That is for the future (though it can start today). In the remaining two sections, in Part 2, I'll give good reasons to suppose that a broad acceptance by society of the validity of moral science should have immediate, important, and valuable consequences for almost everybody.

I'm not claiming that a precise determination of core human values is an easy measurement to take - it is fraught with difficulty - but it will never be possible until it is accepted as something we can aspire to. Understanding Fact (1) and its inevitable truth is a crucial first step, and to begin taking such steps is practically guaranteed to lead to some kind of improved understanding. Not only that, but merely recognizing that this moral science is in principle possible has substantial immediate practical consequences, as I will argue next.

Consider also this: there are, no doubt, some truths about the universe that are forever obscured from science (such as what Euclid's grandfather had for breakfast 3 days before his 10th birthday), but this can only be because these things make no significant difference to anything today. Conversely, if a thing makes a big difference, then by definition, we can detect it and measure it relatively easily. This must apply equally well to the things that affect the outcomes of our moral decisions. The prospects for an applied science of human ethics, producing practical technological benefits, are not so bleak.




Find Part 2 here.


