My view of intelligence augmentation

I follow the Licklider school of intelligence augmentation: I believe we will be able to create machines that work together with humans in a tightly coupled way, in his words, “to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs.”

We seem to have met most of his preconditions, yet we are not much closer to his vision: we are still dependent on inflexible predetermined programs.

I think his vision is achievable, but it will require a change in how we design computer systems.

AI and machine learning are a little better. We no longer have predetermined programs, but we do have predetermined problems, solved with heavily curated data sets: AlphaGo just plays Go, and Deep Blue just played chess. The same algorithms can solve a number of different problems, but only with parameter tweaks and changes to the data sets.
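A minimal sketch of that observation in Python with scikit-learn (purely illustrative, not any of the systems mentioned above): the same learning algorithm is fitted to two unrelated, curated data sets, and only the data and a couple of parameters change.

```python
from sklearn.datasets import load_digits, load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# The same algorithm "solves" two unrelated problems; only the
# curated data set and a couple of parameters change.
for loader, hidden in [(load_iris, (16,)), (load_digits, (64,))]:
    X, y = loader(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000,
                          random_state=0)
    model.fit(X_train, y_train)
    print(loader.__name__, model.score(X_test, y_test))
```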

When I’m trying to solve a problem in a complex situation, I probably won’t have a full definition of the problem or a well-curated data set. But then I don’t demand perfection: I want a computer system that attempts a partial, imperfect solution and gets better with practice. I want one that I can talk to, that can explain its thinking, and that I can correct. In short, I want one that is trainable.
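A toy sketch of what “trainable” means here, assuming an online learner and hypothetical names throughout: the system makes a partial, imperfect guess, the user supplies a correction, and the model updates with every interaction.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy trainable loop: guess, accept a correction, update online.
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])
fitted = False

def correct(x, right_answer):
    """Show the system's attempt, then train it on the correction."""
    global fitted
    x = np.asarray(x).reshape(1, -1)
    if fitted:
        print("guess:", model.predict(x)[0], "correction:", right_answer)
    model.partial_fit(x, [right_answer], classes=classes)
    fitted = True

# Each interaction is both a use of the system and a training step.
correct([0.1, 0.9], 1)
correct([0.8, 0.2], 0)
correct([0.2, 0.7], 1)
```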

A trainable computer sounds a lot like an AI, but there is an important difference. An AI, throughout fiction, is portrayed as its own self-sufficient moral agent. A trainable system would learn all of its moral judgement and goals from its user; it would have none of its own. It would be equivalent to an external lobe of our brain (maybe one with limited bandwidth and a weird feedback mechanism, depending upon the technology level). Different, but still part of us.

So there is one thing I disagree with Licklider about: I think humans will form a necessary part of the human-computer hybrid, not for capability, but for directing the evolution of the programs so that they continue to do what we want.

So what are we doing wrong? The major problem is that we are stuck with the programmer/maintainer relationship to the computer. The lobes of our brains need no programmers or maintainers; they update themselves to solve new problems and think new thoughts. So we must try to alter this relationship.

The first experiment along this path is to try to get computers to manage their own resources based on human feedback. Managing resources is a large part of maintenance: it removes old programs or viruses, and it gives good programs the resources they need to perform their tasks. It is also a precursor technology for enabling experimentation with new programs and for interpreting human speech as programs, both of which would be useful for trainable computers.
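A minimal sketch of that idea, with hypothetical names (this is an illustration, not the experiment linked below): each program carries a score driven by human feedback; resources are allocated in proportion to the score, and programs the user keeps rejecting are removed.

```python
class Program:
    """A running program plus a score that human feedback drives."""
    def __init__(self, name):
        self.name = name
        self.score = 1.0  # optimistic prior: new programs get a chance

class ResourceManager:
    def __init__(self, total_resources=100.0, cull_threshold=0.1):
        self.programs = []
        self.total = total_resources
        self.cull_threshold = cull_threshold

    def add(self, program):
        self.programs.append(program)

    def feedback(self, program, signal):
        # signal = +1 if the human approved the program's behaviour,
        # -1 if they rejected it; exponential moving average.
        program.score = max(0.0, 0.9 * program.score + 0.1 * signal)

    def allocate(self):
        # Remove programs the user has consistently rejected
        # (old programs, or misbehaving ones such as viruses)...
        self.programs = [p for p in self.programs
                         if p.score >= self.cull_threshold]
        # ...and give good programs resources in proportion to score.
        total_score = sum(p.score for p in self.programs) or 1.0
        return {p.name: self.total * p.score / total_score
                for p in self.programs}

# Usage: feedback shifts resources toward approved programs.
manager = ResourceManager()
good, bad = Program("indexer"), Program("popup_spam")
manager.add(good)
manager.add(bad)
for _ in range(30):
    manager.feedback(good, +1)
    manager.feedback(bad, -1)
print(manager.allocate())  # popup_spam decays to 0 and is culled
```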

There is a lot more to say, but that will be for another day. My experiment is here.
