Consistency and Subtyping Orthogonality.

December 6, 2015

“Gradual typing can easily be integrated into the type system of an object-oriented language that already uses the subsumption rule to allow implicit up-casts with respect to subtyping. The main idea is that consistency and subtyping are orthogonal ideas that compose nicely.”
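A minimal sketch of the idea in Python’s gradual type system (PEP 484), where Any plays the role of the dynamic type; the class and function names are illustrative:

    from typing import Any

    class Animal:
        def speak(self) -> str:
            return "..."

    class Dog(Animal):
        def speak(self) -> str:
            return "woof"

    def greet(a: Animal) -> str:
        return a.speak()

    greet(Dog())          # subtyping: Dog <: Animal, so subsumption permits the implicit up-cast

    untyped: Any = Dog()  # consistency: Any is consistent with Animal
    greet(untyped)        # the two relations compose without interfering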

Two Dimensionalism

November 16, 2015

Complex systems have aspects, such as data storage and user interface, and they also have features, such as billing and user authorisation. Features can potentially involve all aspects. Plugins are intended to implement features in a swappable way.
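A hypothetical sketch of the two dimensions: each plugin implements one feature by supplying a hook for each aspect it touches. All of the names below are invented for illustration.

    from abc import ABC, abstractmethod

    class FeaturePlugin(ABC):
        """A feature supplies one hook per aspect it touches."""

        @abstractmethod
        def storage_schema(self) -> dict: ...   # data-storage aspect

        @abstractmethod
        def ui_panel(self) -> str: ...          # user-interface aspect

    class BillingPlugin(FeaturePlugin):
        def storage_schema(self) -> dict:
            return {"invoices": ["id", "amount", "customer_id"]}

        def ui_panel(self) -> str:
            return "<billing-dashboard/>"

    # Swapping the feature means swapping one object, not editing every aspect.
    plugins: list[FeaturePlugin] = [BillingPlugin()]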

Striped Wrapping

November 2, 2015

Wrapping can be thought of as adding a filter to the input of a procedure, and another to its output.

The easy case is when the procedure is the outermost layer.
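Sketched in Python, the easy case is just building a new outermost layer by hand; the names are illustrative.

    def wrap(proc, in_filter, out_filter):
        """Return proc with a filter on its input and another on its output."""
        def wrapped(x):
            return out_filter(proc(in_filter(x)))
        return wrapped

    def double(n):
        return n * 2

    def trace_in(n):
        print("in:", n)
        return n

    def trace_out(n):
        print("out:", n)
        return n

    logged_double = wrap(double, trace_in, trace_out)
    logged_double(21)   # prints "in: 21" and "out: 42"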

The problem of striped wrapping is how to do the same to a procedure that sits at an intermediate level of an already-built programme.

That is itself part of the problem of plugins: how do you 1) add a component that 2) affects every aspect of an existing system, without 3) rewriting the existing system?

Standard modules are supposed to implement only one aspect of a system, so plugins are a cross-cutting concern in relation to them. Complex systems have aspects, such as data storage and user interface, and they also have features, such as billing and user authorisation. Features can potentially involve all aspects. Plugins are intended to implement features in a swappable way. (I call this two dimensionalism.)

Class-based object-orientation presents a bad way of freely adding features. If class C inherits from class B, which inherits from class A, you can’t interpolate (stripe) a fourth class into the chain without changing the existing code.
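A minimal sketch of the obstruction: to stripe a new class D between B and C, the declaration of C itself has to be edited.

    class A: ...
    class B(A): ...
    class C(B): ...      # the chain A <- B <- C is fixed in C's declaration

    class D(B): ...      # a new intermediate layer...
    # class C(D): ...    # ...can only be striped in by rewriting C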

The currently fashionable technique of dependency injection is much better: major changes can be made by modifying the highest level of code alone. In a sense the top level of code is a kind of configuration.
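A minimal dependency-injection sketch, with invented class names: the top level wires the concrete parts together, so a major change is a one-line edit there.

    class PostgresStore:
        def save(self, record):
            print("saved to postgres:", record)

    class InMemoryStore:
        def save(self, record):
            print("saved in memory:", record)

    class BillingService:
        def __init__(self, store):
            self.store = store      # the dependency is injected, not constructed here

        def bill(self, customer):
            self.store.save({"customer": customer, "amount": 100})

    # The top level reads like configuration:
    service = BillingService(InMemoryStore())   # swap in PostgresStore() here
    service.bill("alice")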

Dependency injection is a variation on the callback mechanism. Another common (but contrasting) pattern for extending existing code without modifying it is the wrapper. Cleanly implemented wrappers require their own syntactic form.
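Python’s decorator syntax is one example of such a form: the wrapper is applied at the point of declaration, in one place. A sketch, with illustrative names:

    import functools

    def traced(proc):
        @functools.wraps(proc)          # preserve proc's name and docstring
        def wrapper(*args, **kwargs):
            print("calling", proc.__name__, args)
            result = proc(*args, **kwargs)
            print("returned", result)
            return result
        return wrapper

    @traced                             # the wrapper's own syntactic form
    def double(n):
        return n * 2

    double(21)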

AI Ethics and Safety: Emotion is not needed for Ethics.

October 14, 2015

Whether powerful, superhuman AIs will be moral is the theoretical question;
whether they will be a danger to us, turning us into mulch or, if we are a bit luckier,
pets, is the practical question.

We are seeing the beginnings of true AI already: machines that can equal or beat humans
in specific tasks; that can learn flexibly, rather than being limited to explicitly
programmed behaviour; that can perform the kinds of fuzzy logic and pattern recognition
previously considered typically organic; and that behave agentively, having goals
of their own. These abilities are not equivalent, for all that many of them are
covered by the vague term “thinking for itself”, and not all AIs would have all of them.

It takes an effort of imagination to see just how different an advanced AI could potentially
be to a human, and just how powerful it could be as well. Moore’s law indicates geometric growth in the power of
computer hardware, and has held for several decades. Some AI proponents speculate that in
the future AIs will be able to understand their own source code, and rewrite it in
improved versions, effectively spawning “children” that are more powerful versions of themselves.
(This is known as Recursive Self-Improvement).
This improvement in software would take place in parallel with Moore’s law hardware improvements.
The most extreme form of this scenario (known as “Foom!”, a reference to an atomic explosion)
has recursive self improvement taking place over hours or days.

While the threat of superhuman AI, the worst case scenario, is bad indeed,
the promise is also formidable. Such an AI could find cures for cancer and other diseases,
develop sources of cheap energy, extend human life,
improve political systems, bring lasting peace, and so on. There are few
limits to what could be achieved by sufficient intelligence.

We are also seeing the beginnings of ethical dilemmas connected with AI already:
should self-driving cars be programmed to maximise the safety of everyone in general,
or should they sacrifice strangers to preserve their passengers? (cf. Trolley Problems).

——————

Evidence for non-human ethical behaviour comes from the animal kingdom, where altruistic behaviour
can sometimes be observed. But why do we associate altruism with ethics? The assumption seems
to be that by default, by intrinsic drives and biology, humans will be selfish,
so that altruistic behaviour, where it is socially useful, needs to be enforced
and encouraged externally, by social behaviour and cultural standards.
But if ethics is essentially a social construct, then animals don’t have ethics!
And AIs are potentially much more different to humans than animals. They might not have drives to
selfishness or self preservation, and therefore not need altruistic ethics
as a means to overcome their selfishness. But, to return to the practical problem of
AI safety, they might still need something to keep us safe from them.

Inasmuch as ethics exists within society and is transmitted from one generation to the next,
it usually exists in the form of ready made religious ethics. These systems contain arbitrary,
symbolic elements, such as eating fish on Friday, and it is difficult to find a standpoint
from which to make a non-arbitrary choice between them. Here, philosophy has
the potential to help, because philosophers have striven to formulate ethical systems based
on (hopefully) self-evident logical principles, and devoid of arbitrary elements,
such as Bentham’s Utilitarianism and Kant’s Categorical Imperative.

That sounds like the kind of ethics often attributed to computers in sci-fi: pure,
impartial and objective. But it contains hidden pitfalls: it might be the case that an
AI is too objective for human comfort. For instance, Utilitarians usually tacitly
assume that only human utility counts: if an AI decides that chicken lives count as much as human ones,
then humanity’s interests will automatically be outweighed by those of our own farmyard animals.
And that is just the beginning: in the extreme case, an AI whose ethics holds all life to be
valuable might decide that humans are basically a problem, and adopt some sort of ecological extremism.
The moral of the story is that for humans to be safe from AIs, AIs need to have the right kind of morals.

People have differing, even contradictory, ideas about emotion and ethics.
One is that without the “interference” of emotion, an AI would apply rules flawlessly and impartially,
and therefore make a better practical ethicist than a human.
The equal and opposite intuition is that human emotion and intuition make humans better ethicists,
since they prevent humans from applying rules too rigidly and without exception. There are a number of assumptions
in both ideas which it is important to make clear.

One is a concern about rules. The emotion-good argument takes it that rules are always inadequate,
and need to be supplemented by something else. But is the inadequacy of rules inherent to rules, or is it
just the rules used by humans? Well-known sets of ethical principles do tend to be simple, and the world
is a complicated place, which leads to situations where exceptions need to be made in the application of
rules to avoid undesirable consequences. There is at least a possibility that the problem only applies
to overly simple rules, not all rules. If an AI were to notice that an exception needed to be made to an
ethical principle, that would itself be the outcome of rules, in a sense, since everything in a
computer is. Does a human differ from a computer in that regard?

Humans have a division between the
conscious mind (or Daniel Kahneman’s System 2) and the unconscious mind (or System 1). Much of what the
human mind does is at the unconscious level. When a conscious process of thought is overridden by the
unconscious mind, or some information appears that isn’t the result of a conscious process,
it is described as intuition, feeling, emotion, or some other term implying a mystery.
However, at a fine-grained neurological level, unconscious processes are just neurons firing,
like conscious processes. The basis of the operation of the brain is neurons transmitting
and processing information electrochemically. In principle the operation of an entire
brain could be simulated on a sufficiently large computer (https://en.wikipedia.org/wiki/Mind_uploading), and if this occurred it
would, in a sense, be running on rules. For a computer programmer, everything inside a running computer is the execution
of rules all the time.

Reversing the argument, if the entire brain
operates by executing rules, then so does the unconscious mind, and so do emotion and intuition.
Emotion and intuition don’t appear to the conscious mind to be operating rules, which is to say that the
conscious mind is not aware of the rules they are operating in the way that it is aware of the rules it
is employing when it adds up figures or makes a legal judgement. The operation of unconscious
processes takes place in, so to speak, a “black box”, but only from the point of view of the individual. In this way of thinking
about unconscious processes, the mystery is not fundamental or intrinsic; it is apparent, a matter of the way humans
are wired up. The results of unconscious cognition appear “all at once” to the conscious mind as
if by magic, because we are not aware of what is happening inside the box, but something
unmagical is happening. So when we fail to follow through on a conscious, explicit rule because
something “feels wrong”, that would be, by this argument,
a complex but unconscious set of rules overriding a conscious but simple set.

The thing is that unconscious cognitive processes aren’t dumb.
Compared to conscious thought, unconscious thought is somewhat inflexible and hard to train,
but it is also very fast, impressively so when making a “judgement call”
based on a lifetime of accumulated experience. To be equal or superior to the intelligence of a
human, an AI must have something equivalent to this powerful unconscious processing.
In that sense, it “needs emotion”, but that statement needs to be
strongly qualified. There is no obvious reason it needs anything like the human conscious/unconscious
split. To do what emotion and intuition do, an AI would need equivalents of them,
but they might not be the same at all. Humans use emotion, in the fully fledged sense,
for moral judgement, but that connection between emotion and ethics might just be a fact about humans.

Functional Programming as the Next Step from Procedural Programming.

July 11, 2014

Procedural programming languages hide goto underneath for, if and while. Functional languages hide for, if and while underneath filter, map and reduce.
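A sketch of the claim in Python: the explicit looping machinery disappears underneath map, filter and reduce.

    from functools import reduce

    numbers = [1, 2, 3, 4, 5]

    # Procedural: for and if are visible.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n

    # Functional: the same computation with the loop hidden.
    total2 = reduce(lambda acc, n: acc + n,
                    map(lambda n: n * n,
                        filter(lambda n: n % 2 == 0, numbers)),
                    0)

    assert total == total2 == 20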

Object-oriented languages hide case/switch underneath method calls.
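The parallel claim, sketched the same way: the switch on a type tag is hidden underneath dynamic dispatch.

    class Circle:
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r ** 2

    class Square:
        def __init__(self, s):
            self.s = s
        def area(self):
            return self.s ** 2

    shapes = [Circle(1), Square(2)]
    # No switch on "what kind of shape is this?": dispatch decides.
    print(sum(shape.area() for shape in shapes))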

Hiding References…the next “Procedural Breakthrough”

July 11, 2014

The introduction of procedural programming was the largest step forward in the history of programming language design, much more so than object orientation. The basic idea is that you get rid of gotos. You do that by finding the non-pathological cases and turning them into specialised structures.

References (and particularly pointers, the raw, machine-level references) are the gotos of data. They can lead to a host of problems, including circularity and “dangling” (dereferencing a pointer whose target no longer exists).

The process of getting rid of explicit references is similar to the process of getting rid of gotos: identify the use cases, and replace them with special-purpose constructs. (Of course, raw pointers will have to be used somewhere down in the machinery, for the same reason that implicit gotos implement for, if and while in procedural languages.)

Pointers are used to implement complex data structures such as trees and linked lists in languages like C, so one tranche of references can be got rid of by using flexible data containers.
The other main use is passing data out of subroutines. The safe pattern is to pass in a reference from an outer scope for the subroutine to modify. This avoids the problems attendant on a subroutine passing out references to its local variables, which might have been destroyed, leaving the pointer dangling.
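Python has no raw pointers, but a caller-owned mutable container can stand in for the passed-in reference in a sketch of the safe pattern; the names are illustrative.

    def collect_evens(numbers, out):
        """Append the even numbers to out, which the caller owns,
        rather than returning a reference to anything local here."""
        for n in numbers:
            if n % 2 == 0:
                out.append(n)

    result = []                          # allocated in the outer scope
    collect_evens([1, 2, 3, 4], result)  # the subroutine fills it in
    print(result)                        # [2, 4]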

Both of those techniques are used in the new language ParaSail.

One might wonder why it took so long to get rid of the “second goto”.

The rise of object orientation ameliorates the use of references to pass data out of a subroutine: “this” and “self” are effectively implicit references when used in mutator methods. And the rise of dynamic languages ameliorates the use of references to build dynamic structures; dynamic languages often have references, but their programmers rarely need them.

Orthogonality V: Fixed Message, Flexible Receiver, Fixed Receiver, Flexible Message.

July 10, 2014