Chattering classes
ChatGPT continues to generate considerable organically derived output, especially from people, and especially about ChatGPT and what its arrival on the scene means. Jack points us in the direction of Bret Devereaux, for instance, who, judging by what is available now, is calmly skeptical that the so-called artificial intelligence programs will ever live up to their hype, positive or negative.
In his lengthy musing, On ChatGPT, he describes the program as fairly stupid rather than intelligent. ChatGPT is a language-probability guessing machine that estimates which words are likely to come next in a sequence, based on the millions of texts it was trained on; it has no sense of what words mean, nor anything approaching an inkling of the things that exist in the world, including “the world” and “existence”.
Says Devereaux:
ChatGPT does not understand the logical correlations of these words or the actual things that the words (as symbols) signify (their ‘referents’). It does not know that water makes you wet, only that ‘water’ and ‘wet’ tend to appear together and humans sometimes say ‘water makes you wet’ (in that order) for reasons it does not and cannot understand.
In that sense, ChatGPT’s greatest limitation is that it doesn’t know anything about anything […]. ChatGPT is, in fact, incapable of knowing anything at all. The assumption so many people make is that when they ask ChatGPT a question, it ‘researches’ the answer the way we would, perhaps by checking Wikipedia for the relevant information. But ChatGPT […] has no discrete facts. To put it one way, ChatGPT does not and cannot know that “World War I started in 1914.” What it does know is that “World War I” “1914” and “start” (and its synonyms) tend to appear together in its training material, so when you ask, “when did WWI start?” it can give that answer. But it can also give absolutely nonsensical or blatantly wrong answers with exactly the same kind of confidence because the language model has no space for knowledge as we understand it; it merely has a model of the statistical relationships between how words appear in its training material.
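Devereaux’s point about “the statistical relationships between how words appear” can be made concrete with a toy sketch. The bigram counter below is a drastic simplification of the neural-network approach actual chatbots use, and the tiny corpus is invented for illustration; it exists only to show how pure co-occurrence counting can produce a correct-looking answer without anything resembling knowledge:

```python
from collections import Counter, defaultdict

# Invented miniature "training corpus" -- just word sequences, no facts.
corpus = (
    "world war one started in 1914 . "
    "world war one began in 1914 . "
    "water makes you wet . "
).split()

# Count bigram frequencies: next_counts[w] maps each word to a Counter
# of the words observed immediately after it in the corpus.
next_counts = defaultdict(Counter)
for word, following in zip(corpus, corpus[1:]):
    next_counts[word][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word` -- counting, not knowing."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("in"))     # "1914" -- looks like knowledge, is only frequency
print(predict_next("water"))  # "makes"
```

Ask it what follows “in” and it says “1914”, which resembles knowing when the war started; but the model would answer with the same mechanical confidence if the corpus happened to contain nonsense, which is exactly Devereaux’s complaint.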
From the point of view of a college professor trying to teach concepts and writing skills to students, the software represents an impediment and a distraction. It ties words together in ways that are coherent as language but not necessarily cogent or logical. As such, it can generate something like an essay, but it has no concept of the essay’s meaning. (As someone said in a podcast recently, the fact that ChatGPT can generate college-level essays that fool college teachers is less a compliment to ChatGPT than a condemnation of college-level essay expectations.)
On the other hand, ChatGPT can lead humans to anthropomorphize it, to see a sentient fellow being operating at the other end of the conversation. Tyler Cowen has a short blurb (based on a link I can’t seem to trace to a larger article):
“ChatGPT’s answers are generally considered to be more helpful than humans’ in more than half of questions, especially for finance and psychology areas.”
Most of all, ChatGPT does better in terms of concreteness. Note also that ChatGPT uses more nouns and deploys a more neutral tone than do the human experts. ChatGPT fares worst in the medical domain, but its biggest problem (from the point of view of the human evaluators) is giving too much information and not enough simple instructions.
The source material is a research project comparing how people respond to ChatGPT’s advice with how they respond to human call-center advice.
In sum, the AI chatbots are stupid rather than intelligent, but they are capable of feeding us language that offers helpful tips and comforting words. Perhaps they can leverage our psychology to make us feel better about ourselves, helping us talk to ourselves in a way that lets us make choices with confidence, whether wise or not.
This still seems a long way away from us having to hand over control of human affairs to the robots at the point of a robotic gun. That should come as a relief—for now.
So, in reading here that ChatGPT “doesn’t know anything about anything”, does not understand “logical correlations”, can “tie words together which are coherent as language, but which are not necessarily cogent or logical” and “can also give absolutely nonsensical or blatantly wrong answers with […] confidence”, I discover that this particular bot is actually nothing new, just an electrified/digitized version of other bots that have been ambling around the scene practically as long as there’s been a scene in which to amble: politicians.
Everything old is new again at some point.
Okay, this is very late but I ask this question: Are the clients at a nutritionist's office a dietribe?