I’m trying to get back into coding agorint after spending some time working on this blog. So I thought I would write a blog post on an architectural point I am trying to decide on. Using this blog as a kind of rubber-duck architecting, but also hopefully giving people a view into how I think about these things. Stay away if you are not interested in the details of the code.
Disclaimer: I’m not an economist. I suspect there have been things written about this before. When I have some time I shall try and do some research.
So if we want autonomy to be easy, we need to develop the technologies for it. As I discussed in the safety blog post, open source should work well for a technology if it changes a lot and there are a lot of people changing it.
So it makes sense to work on technologies that lots of people can adopt, and that in turn allow them to adopt more technologies. Apart from a desire to be more free and more insulated from disruption, people adopt and maintain technologies when it becomes more economically viable to do so.
This is going to be the trickiest section to argue. There would need to be a longer version of why IA (intelligence augmentation) is unlikely to lead to AI that kills us all, else you would have a hard time arguing for positive impact. This view of intelligence also means that singletons are much less likely, so you have to argue for solutions to intelligence that allow multiple actors, even if they are complicated and harder to reason about. Multiple actors also means we can reason less about the far future, although I would debate how much we can realistically reason about the far future anyway.
There are lots of important uncertainties around intelligence that shape our response; the community calls these crucial considerations. The interlocking (non-exhaustive) set of crucial considerations I tend to think about includes:
- Will man-made intelligence be sufficiently like current machine learning systems that we can expect safety solutions for ML to be at all transferable to it?
- Will man-made intelligence be neat or messy (and how does that impact the takeoff speed)?
- Will a developing intelligence (whether a pure AI or human/machine hybrid) be able to get a decisive strategic advantage? Can they do so without war?
- Can we make a good world with intelligence augmentation or is co-ordination too hard?
- Should we expect a singleton to be stable?
These cannot be known ahead of time; we will only know once we have developed intelligence, or got pretty far down that road. We can estimate the probabilities with our current knowledge, but new information is coming in all the time.
Answering all these questions is an exercise in forecasting, making educated guesses. The better we can reduce the uncertainty around these questions, the better we can allocate resources to making sure the future of intelligence is beneficial to all.
This is a complicated topic. The rough concept of takeoff is: if we can make intelligence with intelligence, we will have more intelligence that will help us make more intelligence. We get exponentially more intelligence in short order, and this will magically change the world.
Now I agree with the rough concept, that there will be some feedback loop between our ability to create intelligence and our ability to create more intelligence. However, I don’t think the effect on the future of the world will be as strong as many people expect.
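The disagreement above can be made concrete with a toy model. This is purely illustrative (the `takeoff` function and its `returns` parameter are my own hypothetical stand-ins, not anything from the literature): the feedback loop only produces explosive growth if the returns on intelligence do not diminish.

```python
# Toy model of the takeoff feedback loop: current "intelligence"
# feeds back into the rate at which more intelligence is produced.
# The `returns` exponent is a hypothetical knob for how strongly.

def takeoff(steps, returns, rate=0.1, start=1.0):
    """Iterate i += rate * i**returns and record the trajectory."""
    i = start
    history = [i]
    for _ in range(steps):
        i += rate * i ** returns
        history.append(i)
    return history

# returns == 1.0 gives compound (exponential) growth;
# returns < 1.0 gives much slower, sub-exponential growth.
fast = takeoff(50, returns=1.0)
slow = takeoff(50, returns=0.5)
```

The whole argument about how much the world changes is hiding inside that one exponent, which is exactly the kind of uncertainty the crucial-considerations list is about.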
I really like effective altruism. It challenges you to think about how you are doing good and whether you could do it better.
As we cannot easily measure improvements in the human condition, we need proxy measurements to guide our activities.
However I think the current measures effective altruism uses are in danger of neglecting the value of freedom and autonomy.
So I previously said that you need to study resource allocation to get to intelligence augmentation. But what do we do after we have a market that can allocate resources to the programs? We are left with two problems: getting the right currency to the right programs, and somehow getting better programs into the system so it can improve what it does.
This can be seen as a partial recipe for AI as well, but you would need some function that took the part of the human in giving feedback to the agoric system. Most likely you would want something that humans could at least partially influence, so they could guide its learning, unless you plan to code the basic knowledge of the world into the system. You may also need a different set of programs, as the system could not rely on the human for things like goal setting.
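A minimal sketch of the first problem (getting currency to the right programs) might look like the following. All the names here are hypothetical, invented for illustration, and this is not agorint’s actual design: programs hold currency, bid for the chance to run, and simulated user feedback decides what the winner earns back.

```python
# Sketch of one agoric allocation loop (hypothetical names throughout):
# programs stake currency to run; user feedback funds the useful ones.

class Program:
    def __init__(self, name, skill, funds=10.0):
        self.name = name
        self.skill = skill   # stand-in for how useful the program is
        self.funds = funds

    def bid(self):
        # Naive strategy: stake a fixed fraction of current funds.
        return self.funds * 0.1

def step(programs, payout=2.0):
    """One auction round: the highest bidder runs, pays its bid,
    and earns `payout` scaled by (simulated) user feedback."""
    winner = max(programs, key=lambda p: p.bid())
    winner.funds -= winner.bid()
    feedback = winner.skill          # feedback proxies usefulness here
    winner.funds += payout * feedback
    return winner

programs = [Program("sorter", skill=0.9), Program("noise", skill=0.1)]
for _ in range(20):
    step(programs)
# Over rounds, the useful program accumulates currency and keeps winning.
```

Note this only sketches the currency problem; the second problem from above, getting better programs into the system in the first place, is not modelled at all.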
The reason why not much code has been written in it is the language. There is nothing very special about it (it is a stack- and capability-based language), but I have not written a high-level language before, and it is a bit of a pain to write and test the equivalent of untyped assembler.
I should probably explain somewhat how the project came to be.
I started to be interested in intelligence augmentation in the 2000s; I was even active on sl4 in 2002. I was pretty immature then, but I don’t think my views have changed much. The core question I started with was:
How can you build something that can modify itself inventively and still get something useful?
This will be a series of blog posts on agorint, the prototype market-based system for user-trained resource allocation in computers.
So agorint: not such a great name. It was supposed to be short for agoric intentionality, in that it is a market with a direction towards some user’s needs. I’m not such a fan of it nowadays, but I am stuck with it until such time as I get around to thinking of a better one.