This is a complicated topic. The rough concept of takeoff: if we can make intelligence with intelligence, we will have more intelligence that will help us make even more intelligence. We get exponentially more intelligence in short order, and this will magically change the world.
Now I agree with the rough concept, that there will be some feedback loop between our ability to create intelligence and our ability to create more intelligence. However, I don’t think the effect on the future of the world will be as strong as many people expect.
Mostly, though, I wish this was still being discussed. I wish we were bringing in different viewpoints to adjust our estimates. My work is partly to test another viewpoint, to try to reduce the uncertainty around this.
So my view of intelligence (IQ/g-factor etc.) is as a measure of “how well we can absorb information from other people on how to solve problems”. This is different from the commonly accepted definition from Hutter:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
My definition is connected to Hutter’s in that someone who can absorb information well, and has a capable society around them, can achieve goals in a wide range of environments. But it also presents a natural limit to takeoff: the abilities of an individual are limited by what their society knows. Increasing a system’s intelligence would just increase how quickly it hits that limit.
A question then is: how does society progress, if people can only get up to the limit? It is a mixture of luck and collecting more data. Luck, because you have to be asking the right questions or looking in the right search spaces; data, because it is needed to constrain answers or guide the search. Intelligent people are the ones who do this, because they have absorbed the information from society and can take mental models of the world to new places rather than reinventing the wheel.
Even if we accept that intelligence is absorbing information from society, the ability to create intelligence could still be massively revolutionary. Creating something that can absorb the sum of human knowledge quickly, and find connections and extrapolations within it, would change the world. So how quickly should we expect this to happen? If such a system can use its knowledge about the world to change its own algorithms and improve how it absorbs information, we should expect the first systems to hit the limits quickly. My view of intelligence augmentation is that this effect will be weak, because the systems will be messy and hard to change:
- Inventive Agoric systems are already changing the programs inside the system subconsciously. This makes conscious changes to the code of the system hard. So the system is already absorbing information, but changes are made in a localised fashion, not as sweeping architectural changes. It will therefore likely get stuck in local minima.
- There is no centralised algorithm for intelligence, just lots of different learnt skills, so individual code changes to those skills will not be powerful.
- These learnt skills will not necessarily have a fully reasoned explanation for why they are the way they are. This makes changing or removing them hard, as you might break important functionality.
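The local-minima point in the first bullet can be illustrated with a toy sketch (the fitness landscape, step size, and function names here are invented for illustration, not part of any real system): a hill climber restricted to small localised changes settles on the nearest peak and cannot cross a valley to reach a better one.

```python
import random

def fitness(x):
    # A toy landscape: a local optimum at x=2 (value 4) and a
    # better global optimum at x=8 (value 10), separated by a valley.
    if x < 5:
        return 4 - (x - 2) ** 2
    return 10 - (x - 8) ** 2

def local_hill_climb(x, step=0.1, iters=2000):
    # Only small, localised tweaks are tried, mirroring a system
    # that adjusts individual programs but never makes sweeping
    # architectural changes.
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
result = local_hill_climb(0.0)
# The climber settles near the local optimum at x=2; crossing the
# valley around x=5 would require accepting worse fitness first.
```

The analogy is loose, but it shows why purely localised self-modification can plateau well below what a redesign could reach.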
Imagine a big ball of mud that is constantly changing itself. I suspect that is what the first IA systems will look like.
We will get better over time, as we try lots of different starting sets of programs in lots of different real-world environments. But it won’t be smooth sailing to the top.
These are just my current thoughts and the philosophy behind my system. If I am right that the system is useful, I think it implies that takeoff will not be very hard.
I shall try to develop this line of argument.