Delayed Future
Monday, June 24, 2024
Computer scientist and MIT professor Rodney Brooks has worked in the fields of robotics and artificial intelligence for decades. His view of technology and what it means for the future is measured. He is not a technological cheerleader, and certainly not an industry chieftain overselling what tech can do or how quickly it will arrive at these abilities. He has witnessed many tech hype cycles involving robots and computers—most recently regarding artificial intelligence that supposedly will lead to the holy grail of artificial general intelligence (AGI): the point at which computers can do everything for us, making human thought and effort obsolete.
How sanguine is Brooks about the forward rush of technological advancement? He keeps track of it in an annual blog post. As he says about industry cheerleading: “Having ideas is easy. Turning them into reality is hard. Turning them into being deployed at scale is even harder.”
Most of us, I'm sure, can remember a time a decade ago when the runaway enthusiasm was about the idea of self-driving, or autonomous, vehicles (AVs). The self-interested tech booster Elon Musk proclaimed that full self-driving was months away from maturity, whereas engineers and computer science researchers were much more cautious. Rather than engaging in such reckless over-promising, Rodney Brooks described how technology actually advances.
For AVs, there was a belief that the technology was not just here, but also fully developed and ready to go. What happened next were failed attempts to produce an actual complete version of the idea that worked as initially promised. The initial promise was for fleets of driverless cars without driver controls dominating the roads—by 2014. What the technology was able to deliver were some incremental improvements to driver assistance.
There was a long retreat into the recognition that many problems that were supposedly solved at the core of the new technology were in fact incompletely understood. The dream was easy; it merely required implementing a whole bunch of technological solutions that hadn't been invented yet.
Thus, in the case of AVs, the problem is not just a matter of having sensors that “see” the roads and computers that predict road hazards, but also technology to do the complex human task of anticipating what everyone else on the road is about to do. Raw computing power combined with sensors is insufficient. Drivers are engaged with one another in subtle ways, often involving eye contact and non-verbal communication. The enthusiasts had not fully accounted for the implications of this reality.
From there, the tech evolved gradually in small increments. For AVs, this involved a whole lot more development of sensor technology, greater computing power, and refinements to computer software. Some of the incremental advancements have found their way into new vehicles. They have happened behind the rearview mirror or the brand badge on the grille: there are better and smaller sensors measuring more things than any human driver can perceive. The sensors are meant to aid drivers and improve road safety.
Yet even with all these advancements, traffic accidents and fatalities have been on the rise. This suggests the technology has unpredicted limitations—in this case, in how human behavior changes in response to enhanced safety systems. Drivers who put their faith in sensors and computers lose situational awareness. The human mind gets distracted when there is anything more than a slight amount of automation.
We are quick to adopt technologies that seem to let us stop thinking. In the long technological evolution of the Global Positioning System (GPS), for example, first there was the promise of knowing where things were on earth that industry needed to track. Decades later, map reading is a widely forgotten skill. Now we have geolocation built into our smartphones as a feature we perceive as a basic device function. We rely on it as a matter of course. Our economy and infrastructure rely on GPS to such an extent that a satellite network failure could bring civilized life to a crashing halt: the electric grid, public water systems, the banking system, and the public road networks all depend on it. The technology is embedded everywhere—and yet it cannot operate self-driving cars.
This is where we are with large language models (LLMs)—the visible forefront of artificial intelligence (AI). LLMs were adopted more widely and quickly than any computer technology before them, thanks to their ability to generate human-sounding language. The initial enthusiasm has waned as people have come to see their capabilities as incremental improvements while discovering their limitations. Yet on a smaller scale, AI capabilities are coming into wider use everywhere. They are tools that help people do their jobs involving language, even when the users are tired, stressed out, mentally blocked, or have otherwise run out of ideas to write that note, draft that text, or sketch that image required for their jobs.
The hype and associated fear rested on the assumption that LLMs meant computers would replace human workers everywhere very quickly. That outcome, along with the feared doomsday technology, now looks quite far away. The initial surprise at the human-sounding output of LLMs has dissipated; it is even regarded as somewhat humdrum. And not even two full years have elapsed since ChatGPT was rolled out publicly to great fanfare.
While LLMs caught the public imagination for a time, what they really did was to tell us more about our own natural hopes and fears for the future.

I'm planning to vacuum upstairs, but Jake is sleeping in a corner of the bedroom, so maybe I shouldn't.
A comment about a toddler with a knife (about Kevin Williamson's article today) reminded me of an incident.
When Daughter D was about 2, we were having a Cub Scouts event at the dining hall at our nearby camp. The Boy Scouts from our troop did the cooking in the kitchen there. After doing something or other, I looked around for D and couldn't find her. After a bit of searching, I went into the kitchen, and sure enough, there she was.
She had wandered into the kitchen, picked up a knife from a counter, and started waving it at the Scoutmaster's knees, that being her height. When he took the knife away, she started to cry, and Thor picked her up. When I arrived, he and the rest of his patrol were doing a sort of line dance and singing this song about Assassin's Creed, while D grinned and sang, "Dieeee, dieeee!"
https://www.youtube.com/watch?v=pfsRxTjNGvo
She never had a chance of being a normal child, really.
Good morning.