Henry Kissinger is on a crusade to stop dangerous AI.
I think many of the fears of AI are misplaced. AI does things like make recommendations on Spotify or Netflix. It’s not ready to take over the world.
But … shall we wait until the thinking machines rule us, and then hope that a Butlerian jihad will triumph over the machine overlords? Or shall we come up with rules now, before it’s too late?
Simply stated, an ounce of prevention is worth a pound of cure.
Exactly.
It’s going to be interesting.
Just this week there was a WP story about people complaining that Facebook’s algorithmic rules against hate speech were catching too many insults about whites and men, which wasn’t the intent. The algorithm wasn’t smart enough to know that “white men, amirite? they’re all terrible” is acceptable venting about supremacist structures, while “black men, amirite? they’re all terrible” is racist hate speech, unless possibly spoken by black women.
Today’s AI tends to the autistic side. It’s pretty good at noticing things but absolutely terrible at pretending not to notice things for social reasons.
In order to rig the output, or rather to ensure algorithmic justice, they’re going to have to weight things pretty heavily.
I might have mentioned this before: I have a friend whose job at a bank was to come up with algorithms that would predict credit success. But he couldn’t do it _too_ well. Certain forbidden categories are very predictive, and everyone in the field knows it, but, well, they’re forbidden. So he had to come up with other criteria that the bank could justify and that didn’t sacrifice too much predictive power. If AIs are left to themselves, they’ll come up with equivalents of the infamous grape soda proxy and start discriminating in ways we’d prefer they didn’t. It’s a major headache, I was told.
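The mechanics of a proxy variable are easy to demonstrate. Here’s a toy sketch (all names, numbers, and correlations invented for illustration): a “grape soda”-style purchase habit that merely correlates with a forbidden group recovers most of that group’s predictive power over defaults, even though the forbidden attribute never appears in the model.

```python
import random

random.seed(0)

def simulate(n=10_000):
    """Generate synthetic credit records.

    Each record: (group, proxy, default) where
    - group:   the forbidden attribute (hypothetically predictive of default)
    - proxy:   an innocuous-looking habit that matches group 90% of the time
    - default: whether the borrower defaulted (rate depends only on group)
    """
    rows = []
    for _ in range(n):
        group = random.random() < 0.5
        proxy = group if random.random() < 0.9 else not group
        base_rate = 0.25 if group else 0.10   # invented default rates
        default = random.random() < base_rate
        rows.append((group, proxy, default))
    return rows

def default_rate(rows, selector):
    """Default rate among rows where selector(row) is true."""
    selected = [r[2] for r in rows if selector(r)]
    return sum(selected) / len(selected)

rows = simulate()
by_group = default_rate(rows, lambda r: r[0])      # using the forbidden attribute
by_proxy = default_rate(rows, lambda r: r[1])      # using only the innocent proxy
overall = default_rate(rows, lambda r: True)

print(f"overall default rate:        {overall:.3f}")
print(f"rate in forbidden group:     {by_group:.3f}")
print(f"rate among proxy purchasers: {by_proxy:.3f}")
```

The proxy alone separates borrowers almost as well as the forbidden category does, which is exactly why an unconstrained model will seize on it: nobody wrote the prejudice in, the correlation was just sitting there in the data.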
Yes, but if the AI finds something that confirms a socially unacceptable prejudice, that shows that the programmers wrote that prejudice into the software. I mean, what else could it be?