Benjamin Todd
So we need to make sure the AI system shares our goals.
This, however, is not easy.
No one knows how to program moral behavior into a computer.
Within computer science, this is known as the alignment problem.
Solving the alignment problem might be hugely important, but today very few people are working on it.
We estimate the number of full-time researchers working directly on the alignment problem is around 300, making it over 10 times more neglected than biosecurity.
At the same time, there is momentum behind this work.
In the last 10 years, the field has gained support from prominent figures in academia and industry, such as Stephen Hawking; Stuart Russell, who wrote the most widely used AI textbook; and Geoffrey Hinton, a pioneer of deep learning.
If you're not a good fit for technical research yourself, you can contribute in other ways, for example by working as a research manager or assistant, or donating and raising funds for this research.
This will also be a huge issue for governments.
AI policy is fast becoming an important area, but policymakers are focused on short-term issues like how to regulate self-driving cars and job loss, rather than the key long-term issue: the future of civilization.
Of all the issues we've covered so far, reducing the risks posed by AI is among the most important, but also the most neglected.
Despite also being harder to solve, we think it's likely to be among the most high-impact problems of the coming decades.
This was a surprise to us when we first considered it, but we think it's where the arguments lead.
These days we spend more time researching machine learning than malaria nets.
Dealing with uncertainty and going meta
Our views have changed a great deal over the last 12 years, and they could easily change again.
We could commit to working on AI or biosecurity, but might we discover something even better in the coming years?
And what might this uncertainty imply about where to focus now?