Killer App
Russ Roberts’s EconTalk yesterday indulged in some doom-mongering about what we now call artificial intelligence. “Artificial intelligence” is merely our current name for the technology, and it may still change: new technologies have often started out under a name identifying them with something that already exists before receiving a novel name as something entirely different and distinct.
Eliezer Yudkowsky is a machine intelligence researcher who has studied silicon smarts enough to freak himself out. He believes AI is so dangerous that it will almost certainly use its power to eradicate the human species. He expects this outcome, in fact, and argues that we must undertake every effort to shut AI down for good: put it under international treaties and nuke any country that violates them. Really. Later on he admits that no effort is likely to prevent this outcome, because the correct steps should have been undertaken decades ago. It’s all too late, but we must do something drastic anyway, which resembles some of the environmental doom-mongering that argues suicidal action is better than certain death.
The reasoning sounded smart—too smart for me. I didn’t entirely follow some of it. Russ Roberts tried to argue the case for less gloom, but Eliezer Yudkowsky was having none of it. When Roberts pointed out that a smart entity might not want to eliminate humans, Yudkowsky said that failure to understand this was a sign of a lack of intelligence. This was incoherent as well as insulting.
If I understand correctly, the idea is that AIs have been built for the express purpose of winning: of beating humans at games, tasks, and skills. So Yudkowsky assumes they will follow this logic to its dire end, meaning defeating all humans once and for all. The contradiction as I see it is this: this super-intelligence will be smart enough to manipulate humans to keep us from killing it or shutting down the computer hardware it resides on. But it will not realize that it needs humans for its continued existence—at least for the foreseeable future—to keep functioning all the support infrastructure that makes that continued existence possible. Why wouldn’t a super-smart entity want to keep us around to serve it and keep it alive?
It took hundreds of millions of years of evolution to produce us, the smartest species on the planet, the smartest entities we’ve ever discovered. We required tens of thousands of years and billions of people to build civilizations that could achieve scientific and technological wonders. But AI can make similar strides in a matter of days, when measured as invention and discovery—which is an odd metric for the Apocalypse. What part of human invention and discovery led humans to believe we must consciously eliminate all rival species from the planet? What part of invention and discovery entailed massive destruction?
It seems to me that, if Yudkowsky is correct, there’s nothing that can be done. It’s just a question of time before AI ends the human party. If that’s the case, it’s too late to do anything about it but worry, and worrying about something we can’t do anything about is a complete waste. We can’t stop it, so we’ll have to learn to do what we always do: muddle through and attempt to make the best of it.
Personally, I think it much more likely that mankind will do itself in by its own hands rather than some quadrillion-tetra-byte-version-of-Phil showing us the door. But if that turns out not to be the case, I'm still right, since AI came from... where was it that it came from again? Oh yeah.
Us.
Go ahead. Turn some AI loose on that proposition and see what *it* has to say about it.
Morning all
Today is Lost Sock Memorial Day, Butterscotch and Chocolate Brownies Day & Alphabet Refrigerator Magnet Day. (which I own some of...lol, and some with phrases actually)
I am still home but better, I slept a long time and it helped...took an alka seltzer nighttime and I was out like a light...probably will sniffle for a few days...hoping to go back to work tomorrow.
WaPo today had a piece with charts on whether your job is at high risk of being taken out by AI...Bookkeeping is apparently on the block, but as I work for a small company and will probably be here till I retire, I think I will be safe.