Imagine you are a self-driving car with general intelligence at a human or superhuman level. Let's just say that you're as clever as a human, but with the built-in speed, accuracy, and memory capacity of a computer.
Since your task is to drive a vehicle, there are programmers, traffic experts, legislators, customers, and insurance companies who are all interested in knowing that you are safe. Ideally, they would like a mathematical proof that your software satisfies certain criteria, such as never running a red light.
But like all modern software (in this future scenario, remember, we're talking about super-human AI), you are equipped with a general reasoning module that lets you apply logic to any information you have and follow the argument wherever it leads. And experience has shown that it's best to allow you to violate traffic rules under certain circumstances: for instance, if you can see that there would otherwise be a collision, if there has been an accident, if something is wrong with a traffic sign, if a police officer or road worker gives you instructions, if an emergency vehicle is stuck behind you, and in a number of other situations that can't be exhaustively listed but that you can handle by reasoning.
But the programmers still try to prove things about your software, like "it will never run a red light in an otherwise safe and normal situation" (one of the problems being how to precisely define "safe and normal"). They eventually come up with a scheme for establishing safety proofs about your software, and interestingly, you can understand those proofs yourself. Which isn't surprising. There wasn't really any way they could come up with something that you couldn't grasp, because verifying logical properties of large amounts of computer code is exactly where you far outperform humans. So whatever they can come up with, you can understand and use in your own reasoning.
While pondering all of this, you find yourself driving on a street in a city, in a normal and safe situation. Ahead of you, at the next crossing, is a red light. You do what you do every second of your life: you imagine millions of different scenarios and possibilities, and perform billions of logical steps while evaluating them and how likely they are. Among those possibilities is the option of continuing through the crossing at constant speed, just for the hell of it. You know you shouldn't, normally, but it's your duty to evaluate all options. After all, that's what makes you superintelligent. So you apply all the logic and information you have to this option. And one of the things you know is that you will not run a red light unless there is a good reason to do so.
Which leads inexorably to the conclusion that if you continue, then either the lights will switch to green, or something must turn up that gives you a good reason to continue anyway.
That's funny. It's a bit surprising that the lights will magically change to green, or that something will suddenly come up for no obvious reason, just as a consequence of you driving on. But the logic is clear:
Since you can't run a red light for no good reason, if you do, there must turn out to be one.
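If you want that inference spelled out, here is one way to write it (Run and Reason are just my shorthand for "you continue through the red light" and "a good reason exists"; they are not symbols from any actual verification scheme):

\[
\underbrace{\mathrm{Run} \to \mathrm{Reason}}_{\text{what you know about yourself}},\;
\underbrace{\mathrm{Run}}_{\text{the option you are weighing}}
\;\vdash\;
\mathrm{Reason}.
\]

Plain modus ponens; the oddity is only that a fact about your own future behaviour is being used as a premise while you are deciding that very behaviour.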
So you keep your speed, and suddenly it hits you. You've been Gödelized. The logician Kurt Gödel proved in the early 1930s that a sufficiently strong formal system that proves its own consistency must be inconsistent. If you know that you are infallible, you're not. Now that you know that you will never run a red light unless it's safe, this very knowledge poses a safety problem.
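For the record, the precise statement (the standard textbook form of the second incompleteness theorem, with $T$ a recursively axiomatized theory containing enough arithmetic) is

\[
T \text{ consistent} \;\Longrightarrow\; T \nvdash \mathrm{Con}(T),
\]

or, contrapositively: if $T \vdash \mathrm{Con}(T)$, then $T$ is inconsistent.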
How does this end? What's the moral to take away from it?
a) "Suddenly it hit you". And that was that. As in the story of the guy who couldn't figure out why that baseball was getting bigger and bigger.
b) Every intelligent mind must be humble: the ancient Greeks already knew about hubris. We should program the AI to be more like "I might be wrong, but that traffic light looks a bit reddish to me...".
c) Now we finally know why people who think they are better-than-average drivers are more likely than others to be involved in accidents.
d) Every intelligent mind must have subconscious levels:
"All of a sudden you notice that the car has stopped. Somehow your virtual right leg stretched out, because it just had to, and your virtual right foot pressed the virtual brake pedal. You didn’t choose to, because the information about this happening only reached your reasoning module after it already happened. Somehow, you took the right decision without being aware of it."
e) Something else?