Tool Use and Abuse
Wednesday, August 6, 2025
The biggest danger to us[1] as a species lies in becoming over-reliant on technology. This was my main conclusion after hearing the short explanation here from The Economist’s technology correspondent, in a discussion of new research findings about AI use and human perception.
The video short is available on YouTube if you follow this link. What follows amounts to noodling on the themes presented in the clip.

We are given to looking for shortcuts, to seeking ways to simplify complex tasks, to conserving energy. We assign a negative value to such behavior by calling it laziness; we give it a positive label by calling it smart and thrifty.
Large language models help us with the complex tasks of manipulating abstractions, of working with ideas and concepts. They save us the hard work of learning things, whether by way of practice, rote memorization, or the fussy business of collaboration. Yet collaboration is what we have evolved to do. We learn socially: collaboration is where we innovate and invent, where we share ideas in the pursuit of problem solving. LLMs that produce language can make us feel as if we were collaborating.
LLMs remove most of the fraught bits of working together. When we get together socially, we have to sort out the terms under which we intend to collaborate, and this is where different egos and ambitions can come into conflict.
As described by Joseph Henrich in The Secret of Our Success, our evolution predisposes us to accept hierarchies in the novel form of prestige. Prestige hierarchies form a basis for us to collaborate. Without that built-in ability, we would have a much harder time collaborating, since the only thing we’d be involved in would be establishing dominance over each other to determine who gets first dibs on scarce resources.
A lack of rules for collaboration means, essentially, all-out war. A built-in propensity for accepting social rules means we can actually focus our efforts on coordinating group tasks.
We already know from experience that computer automation has made us worse at certain tasks that seem mundane and perhaps altogether unnecessary: things like correct spelling or route-planning and map-reading. Why should we learn to do those things if we don’t have to? We’ve got these big brains that we could put to better use doing things other than remembering banalities like “i before e except after c”. Is there any real purpose to learning how to use reference books, after all? Is that a skill we should spend the limited hours of our mortality fussing with?
Questions like those are easily extended to all sorts of other things we find fiddly and annoying. In America before the 20th Century, most of us would have spent most of our days doing subsistence agriculture, stowing food for the non-growing seasons. Before the 19th Century, we would also have spent a lot of time twisting fibers into thread to make the fabric for our own clothes.
More modern questions of utility include: Do we need to learn how to do math since we have calculators? Do we really need to understand foreign languages now that smartphones can perform basic, near-simultaneous interpretation for us on demand? Why learn or practice the skills of essay writing when AIs can handle that drudgery for us? And so on.
I don’t know the answers to those questions, and they’re rhetorical anyway.
Artificial intelligence in the form of language-generating machines can apparently persuade us to accept as true things that might better be met with skepticism. Relying on it too much makes it easier to fool ourselves. Working through complex issues on our own, individually or socially, is a skill we improve through practice. Letting machines do the thinking for us deprives us of acquiring the skills of critical thinking and all their related abilities.
Martin Gurri has referred to our 21st Century experience as the “Fifth Wave”—the tsunami of data and information that follows earlier information revolutions like the invention of writing or movable type. For all intents and purposes, we can look up anything we want to know at any time and anywhere we happen to be. Now AI can summarize all that data and information, distilling it into basic points terse enough to fit into a text message or a tweet.
Access to that much data and information seems to make us so much more confident in the things we think we know. It gives us a certain smug certitude that we truly do know it all. Is this why it feels like we don’t debate things anymore? Is this why it feels like we approach the world less with curiosity than with certitude, with our minds already made up?
Questions upon questions. ChatGPT or Claude would praise them and then come up with definitive answers.
One final note: I am making a few assumptions that may or may not be valid. One is that our collective reasoning is reliably useful—or more so than what a computer or machine might generate. It’s also possible that working out hierarchies consumes resources in a fundamentally inefficient way. Or maybe the effort takes us down collective cognitive blind alleys: foolish group-think and design-by-committee work products that don’t work very well. Collaboration is our species’s unique ability, but it is no guarantor of quality output.
We never really want to admit when we’re wrong. The pull of certitude combined with self-blinkering group-think may well produce tomfoolery that is worse than the status quo ante. Reasoning in groups can make us feel very, very good socially about some very, very bad ideas.
[1] This time I mean the first-person plural to refer to the collective “we/us” as the species Homo sapiens sapiens, not as a regal-sounding alternative to the first-person singular.

"Letting machines do the thinking for us deprives us of acquiring the skills of critical thinking and all their related abilities."
I agree with this. Why learn to do math calculations? It's good for the brain, like lifting weights (carefully) is good for the muscles. We have a certain amount of innate intelligence, but we also have developed intelligence. For example, it's been demonstrated that children who have heard fewer words in early childhood demonstrate lower intelligence throughout their lives. Our brains need to work.
Failure to work with our reasoning skills can leave us without much in the way of reasoning skills. On the other hand, when we look at people with elite academic credentials - Senator Elizabeth Warren, for only one example - we find that they don't demonstrate much in the way of reasoning skills, but only "emoting," "jumping to conclusions," and "tendentious blather."
Did these people get through years of high-status schooling without actually learning to build an argument using facts and logic, or did they have those skills but decide to abandon them?
Good morning. It's been raining again, but not constantly like yesterday. I'll text the stable around 7:00 to find out if D's work shift will be happening. I tried to get into the state's 529 plan website to request reimbursement for Thor's tuition, but it errored out. This keeps happening, presumably because everyone in the state is trying to do the same thing. I'll try again later.