A lot of people are predicting that AI will get out of hand and create a nightmare scenario. I don’t know if I believe that or not. Computers are pretty amazing, but I suspect there are limits to what they can do.
However, it’s certainly a possibility we shouldn’t ignore.
Given that, what is there to do about it?
Regulation isn’t going to stop it. For one thing, the regulators aren’t up to the task. But even if they were, no single country can regulate the whole world, and getting every country to agree to the same regulations is a fantasy.
People have proposed things like …
- Ethical guidelines for AI. Who says AI would follow them?
- Oversight bodies. That’s like the regulators. They’re not up to the task.
- AI kill switches. Who says AI won’t be able to disable the switch?
The solution in Dune was to destroy all “thinking machines” and kill anyone who tried to build one, up to and including nuking them.
Is there a better option?
Here’s a long but interesting discussion on the future of AI and humanity with Mo Gawdat, formerly Chief Business Officer of Google X.