This is going to be the trickiest section to argue. It would need a longer version of the argument for why IA is unlikely to lead to AI that kills us all; otherwise it is hard to argue for positive impact. This view of intelligence also makes singletons much less likely, so the case has to argue for solutions to intelligence that allow multiple actors, even though those are more complicated and harder to reason about. Multiple actors also mean we can reason less about the far future, although I would debate how much we can realistically reason about the far future anyway.
Given this view, there are three different stages of impact:
- Positive psychological impact during development – Some people (how many?) are currently depressed about the state of the world: they expect their jobs to be taken by AI, fear having to rely on state handouts, or dislike depending on powerful companies they cannot control. Quantifying how large this group is will be hard. It may be worth looking at the rise in depression and asking how much of it can be attributed to the modern world. Having a concrete thing to work on that mitigates these worries might reduce depression and provide a positive outlet for frustration.
- Benefits from IA – There will be economic benefits from IA: increased productivity and scientific output. These should be spread more evenly than if AI were giving humanity the productivity boosts.
- Benefits from autonomy – If autonomous communities are widely deployed, there would be less waste, less environmental damage, and more safety from existential risk, along with the psychological benefits for people of actually realizing more control over their environments.
The key question here is whether progress in IA would collapse into closed-source IA developed by companies for competitive purposes. There are models of open development that are currently self-sustaining, provided they come first and build momentum. There is a pathway to IA based on novel resource allocation in computers, which can be worked on as a first hypothesis and should give us more information.
What is less known is how the currently politically powerful will view these developments. If they are heavily against them, this work would not be tractable at all.
Work on safe AI, if it does not lead to singletons, appears to concentrate power in the hands of corporations and governments. Companies have little incentive to develop empowering technology, as it does not lead to repeat business or trade.
There is work from Kernel and Neuralink, but they are not providing the positive psychological benefits or building the software for IA.
People often talk about merging with technology, but little work is actually being done on it. It is easier to do science on AI (it allows repeatable experiments, without a messy human involved).