
AI Ethics and Safety: Emotion is not needed for Ethics.

October 14, 2015

Whether powerful, superhuman AIs will be moral is the theoretical question;
whether they will be a danger to us (turning us into mulch, or, if we are a bit luckier,
keeping us as pets) is the practical question.

We are seeing the beginnings of true AI already: machines that can equal or beat humans
in specific tasks, that can learn flexibly rather than being limited to explicitly
programmed behaviour, that can perform the kind of fuzzy logic and pattern recognition
previously considered typically organic, and that behave agentively,
having goals of their own. These abilities are not equivalent, for all that many of them are
covered by the vague term “thinking for itself”, and not all AIs would have all of them.

It takes an effort of imagination to see just how different an advanced AI could potentially
be to a human, and just how powerful it could be as well. Moore’s law indicates geometric growth in the power of
computer hardware, and has held for several decades. Some AI proponents speculate that in
the future AIs will be able to understand their own source code, and rewrite it in
improved versions, effectively spawning “children” that are more powerful versions of themselves.
(This is known as Recursive Self-Improvement).
This improvement in software would take place in parallel with Moore’s law hardware improvements.
The most extreme form of this scenario (known as “Foom!”, a reference to an atomic explosion)
has recursive self-improvement taking place over hours or days.
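
To make the compounding effect concrete, here is a toy sketch in Python. The growth factors and the number of cycles are made-up assumptions, chosen only to show how hardware and software improvements multiply together; nothing here is a prediction.

```python
# Toy model of compounding capability growth: Moore's-law-style hardware
# doubling multiplied by software self-improvement each cycle.
# All figures are illustrative assumptions, not predictions.

def capability_after(cycles, hw_factor=2.0, sw_factor=1.5):
    """Relative capability after a number of improvement cycles,
    assuming each cycle multiplies hardware power by hw_factor and
    software efficiency by sw_factor."""
    capability = 1.0
    for _ in range(cycles):
        capability *= hw_factor * sw_factor
    return capability

if __name__ == "__main__":
    for cycles in (1, 5, 10):
        print(f"after {cycles:>2} cycles: {capability_after(cycles):,.0f}x baseline")
```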

While the threat of superhuman AI is, in the worst-case scenario, bad indeed,
the promise is also formidable. Such an AI could find cures for cancer and other diseases,
develop sources of cheap energy, extend human life,
improve political systems, bring lasting peace, and so on. There are few
limits to what could be achieved by sufficient intelligence.

We are also seeing the beginnings of ethical dilemmas connected with AI already:
should self-driving cars be programmed to maximise the safety of everyone in general,
or should they sacrifice strangers to preserve their passengers? (cf. Trolley Problems).
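
The dilemma can be restated as a choice between objective functions. The sketch below is purely illustrative; the scenario, the risk numbers and the passenger weighting are invented only to show how the same situation produces different decisions under the two policies.

```python
# Two candidate objectives for a self-driving car in an unavoidable-harm
# scenario. The options, risk estimates and weights are hypothetical.

options = {
    "swerve":         {"passenger_risk": 0.8, "stranger_risk": 0.1},
    "stay_on_course": {"passenger_risk": 0.1, "stranger_risk": 0.9},
}

def impartial_harm(option):
    # Minimise total expected harm, whoever suffers it.
    return option["passenger_risk"] + option["stranger_risk"]

def passenger_first_harm(option, passenger_weight=10.0):
    # Weight harm to the car's own passengers much more heavily.
    return passenger_weight * option["passenger_risk"] + option["stranger_risk"]

print("impartial policy picks:",
      min(options, key=lambda name: impartial_harm(options[name])))
print("passenger-first policy picks:",
      min(options, key=lambda name: passenger_first_harm(options[name])))
```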

——————

Evidence for non-human ethical behaviour comes from the animal kingdom, where altruistic behaviour
can sometimes be observed. But why do we associate altruism with ethics? The assumption seems
to be that by default, by intrinsic drives and biology, humans will be selfish,
so that altruistic behaviour, where it is socially useful, needs to be enforced
and encouraged externally, by social behaviour and cultural standards.
But if ethics is essentially a social construct, then animals don’t have ethics!
And AIs are potentially much more different from humans than animals are. They might not have drives to
selfishness or self-preservation, and therefore not need altruistic ethics
as a means to overcome their selfishness. But, to return to the practical problem of
AI safety, they might still need something to keep us safe from them.

Inasmuch as ethics exists within society and is transmitted from one generation to the next,
it usually exists in the form of ready-made religious ethics. These systems contain arbitrary,
symbolic elements, such as eating fish on Friday, and it is difficult to find a standpoint
from which to make a non-arbitrary choice between them. Here, philosophy has
the potential to help, because philosophers have striven to formulate ethical systems based
on (hopefully) self-evident logical principles, and devoid of arbitrary elements,
such as Bentham’s Utilitarianism and Kant’s Categorical Imperative.

That sounds like the kind of ethics often attributed to computers in sci-fi: pure,
impartial and objective. But it contains hidden pitfalls: it might be the case that an
AI is too objective for human comfort. For instance, Utilitarians usually tacitly
assume that only human utility counts: if an AI decides that chicken lives count as much as human ones,
then humanity’s interests will automatically be outweighed by those of our own farmyard animals.
And that is just the beginning: in the extreme case, an AI whose ethics holds all life to be
valuable might decide that humans are basically a problem, and adopt some sort of ecological extremism.
The moral of the story is that for humans to be safe from AIs, AIs need to have the right kind of morals.
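
A back-of-the-envelope calculation shows how this happens in a simple aggregate-utility framework. The population figures below are rough order-of-magnitude guesses and the weights are arbitrary; the point is only that sheer numbers dominate the sum once nonhuman interests are given comparable weight.

```python
# Toy aggregate-utility sum: once animal interests get comparable weight,
# their numbers dominate. Populations are order-of-magnitude guesses.

populations = {"humans": 7e9, "farmed_chickens": 2e10}

def total_utility(weights):
    """Sum of population * per-individual moral weight over all groups."""
    return sum(populations[group] * weights[group] for group in populations)

for chicken_weight in (0.0, 0.1, 1.0):
    weights = {"humans": 1.0, "farmed_chickens": chicken_weight}
    human_share = populations["humans"] * weights["humans"] / total_utility(weights)
    print(f"chicken weight {chicken_weight}: humans carry {human_share:.0%} of total interests")
```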

People have differing, even contradictory, ideas about emotion and ethics.
One is that without the “interference” of emotion, an AI would apply rules flawlessly and impartially,
and therefore make a better practical ethicist than a human.
The equal and opposite intuition is that human emotion and intuition make humans better ethicists,
since they prevent them from applying rules too rigidly and without exception. There are a number of assumptions
in both ideas which it is important to make clear.

One is a concern about rules. The emotion-good argument takes it that rules are always inadequate,
and need to be supplemented by something else. But is the inadequacy of rules inherent to rules, or is it
just the rules used by humans? Well-known sets of ethical principles do tend to be simple, and the world
is a complicated place, which leads to situations where exceptions need to be made in the application of
rules to avoid undesirable consequences. There is at least a possibility that the problem only applies
to overly simple rules, not all rules. If an AI were to notice that an exception needed to be made to an
ethical principle, that would itself be the outcome of rules, in a sense, since everything in a
computer is. Does a human differ from a computer in that regard?

Humans have a division between the
conscious mind (or Daniel Kahneman’s System 2) and the unconscious mind (or System 1). Much of what the
human mind does is at the unconscious level. When a conscious process of thought is overridden by the
unconscious mind, or some information appears that isn’t the result of a conscious process,
it is described as intuition, feeling, emotion, or some other term implying a mystery.
However, at a fine-grained neurological level, unconscious processes are just neurons firing,
like conscious processes. The basis of the operation of the brain is neurons transmitting
and processing information electrochemically. In principle the operation of an entire
brain could be simulated on a sufficiently large computer (https://en.wikipedia.org/wiki/Mind_uploading), and if this occurred it
would, in a sense, be running on rules. For a computer programmer, everything inside a running computer is the execution
of rules all the time.

Reversing the argument, if the entire brain
operates by executing rules, then so does the unconscious mind, and so do emotion and intuition.
Emotion and intuition don’t appear to be operating rules to the conscious mind, which is to say that the
conscious mind is not aware of the rules they are operating in the way that it is aware of the rules it
is employing when it adds up figures or makes a legal judgement. The operation of unconscious
processes takes place, so to speak, in a “black box”, but only from the point of view of the individual. In this way of thinking
about unconscious processes, the mystery is not fundamental or intrinsic; it is apparent, a matter of the way humans
are wired up. The results of unconscious cognition appear “all at once” to the conscious mind as
if by magic, because we are not aware of what is happening inside the box, but something
unmagical is happening. So when we fail to follow through on a conscious, explicit rule because
something “feels wrong”, that would be, by this argument,
a complex but unconscious set of rules overriding a conscious but simple set.
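
As a sketch of that structure, the toy code below pairs a simple explicit rule with a more complex, opaque check that can override it. The rule, the action attributes and the override condition are all invented for illustration; the point is only that an override can itself be the product of rules.

```python
# Sketch: a simple explicit rule whose verdict can be overridden by a more
# complex, opaque set of checks (standing in for unconscious processing).
# The rule and the attributes are invented purely for illustration.

def explicit_rule(action):
    # The simple, consciously held rule: never lie.
    return "forbidden" if action["involves_lying"] else "permitted"

def opaque_override(action):
    # A stand-in for many fast, unconscious considerations; from outside
    # it just delivers a verdict that something "feels wrong" (or not).
    if action["involves_lying"] and action["prevents_serious_harm"]:
        return "permitted"   # the exception the simple rule cannot see
    return explicit_rule(action)

action = {"involves_lying": True, "prevents_serious_harm": True}
print("explicit rule says:", explicit_rule(action))
print("after the opaque override:", opaque_override(action))
```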

The thing is that unconscious cognitive processes aren’t dumb.
Compared to conscious thought, unconscious thought is somewhat inflexible and hard to train,
but it is also very fast, impressively so when making a “judgement call”
based on a lifetime of accumulated experience. To be equal or superior to the intelligence of a
human, an AI must have something equivalent to this powerful unconscious processing.
In that sense, it “needs emotion”, but that statement needs to be
strongly qualified. There is no obvious reason it needs anything like the human conscious/unconscious
split. To do what emotion and intuition do, an AI would need equivalents of them,
but they might not be the same at all. The fact that humans use emotion, in the fully fledged sense,
for moral judgement does not mean an AI would have to. The connection between emotion and ethics might just be a fact about humans.