Garbage out
The automatic text-generating artificial intelligence (AI) program ChatGPT has been part of the background news since at least last November, flaring up from time to time. It’s even been featured here! So it is safe to assume it’s something more than a passing fancy, and therefore a trend about which we can safely jump to conclusions, if not move along quickly to a state of near panic, as I’m confident we will once the misplaced-top-secret-document and lying-congressman scandals have faded.
In case you haven’t heard, ChatGPT is a bit of AI that is capable of generating human-looking written language, and at a quality that can fool a lot of people. Early reports have been about the potential for students to use the software to cheat on essay assignments, for instance. For writers, it could take over the tiresome task of banging text into keyboards. ChatGPT holds the promise of generating pages of original natural-language writing for you to sign your name to—created while you were having breakfast! (Sigh. Yes, of course I tried—but the connection is always overrun.)
One recent dramatic headline claimed that the internet could find itself awash in such AI generated content within just a couple years. By way of the Drudge Report, we have the Yahoo! News headline “90% of online content could be ‘generated by AI by 2025,’ expert says”.
You can easily go from imagining written text that can fool human readers to imagining AI-powered voice-faking software that can fool people into talking with it on their phones. Imagine an Alexa or Siri voice assistant that can carry on a conversation with a person rather than just setting a timer, reporting the weather, or playing a music request. For that matter, once the computer chips that handle video processing catch up, you might imagine seeing deep-fake photos and videos generated instantaneously that are capable of fooling users as they look at their screens.
Most of the readers and commenters here are old enough to remember having to pay to make phone calls outside the local telephone exchange (landline—the only kind there was), before the Bell System telephone monopoly was broken apart. Most of us remember the ensuing price wars among different long-distance providers through the ‘80s and ‘90s, as long-distance prices per minute inexorably dropped to where they are now: near nothing. And ever since, people who have phones have had to deal with the problem of unwanted telephone marketing.
Analogous to junk mail, these unwanted “junk calls” have become aggressively fraudulent—aside from being pesky and (at least since the Telephone Consumer Protection Act of 1991) illegal. Computerization has allowed the most dishonest callers to disguise their phone numbers to mislead recipients into answering, and many of them use social engineering techniques to coax bank and credit account information out of individuals for outright theft, often from overseas, beyond the reach of domestic law enforcement.
The fraudsters do this because it costs virtually nothing to place phone calls and to connect. Many of them have driven costs even lower by using automatic dialers and computer voice recognition instead of live operators for the initial call. But imagine if such computer “robocallers” were persuasive AI programs that sounded like real, live humans, and were capable of learning better, more effective ways to fool people at the other end of the line. Imagine if they could call you on the phone and sound like an acquaintance, friend, or loved one.
One proposed solution is to improve the methods of detecting fraudulent AI so as to block it from engaging with real people in the first place. Another would require biometric tests to ensure only real people can join a network, complete a call, or engage in live online conversations. Perhaps a price could be put in place for completing a call to your phone.
Beyond that, if you can no longer trust the authenticity of online interactions, wouldn’t this raise the value of in-person, face-to-face conversations with verifiably real people? If online interactions become too suspicious, many users will probably stop trusting them altogether. It seems conceivable that this lack of trust would drive legitimate businesses away from electronic communications because it would damage their reputations. If so, communicating via electronic device may go from being the common standard in society to being the disfavored, undesirable means of interacting—because you just can’t tell if you’re engaging with another human.
There will, presumably, be loud public cries for the government to act, to find fixes in laws and statutes, to protect telecommunication users from frauds and scams. This approach, however, could be even less promising than encouraging people to approach online interactions with the greatest skepticism. How likely is it that a government agency can match the pace of change, with virtual AI criminals that can’t be caught, much less punished? Is it realistic to think government could keep up with—much less keep ahead of—all the attempted fraud?
All that negative speculation aside, it’s also possible that talking AIs could become something desirable, too. They offer the promise of giving us something similar to human interaction when we want it, even if we are aware that it might not be real. In some ways, it might help people as a form of interactive, social entertainment.
There are easily imagined downsides to this too, of course. And thanks to our innate negativity bias, I’m sure we can look forward to plentiful scare stories about all of them. But if the technology were to cause us to put down our phones, tablets, and computers, so as to spend a bit more time interacting with each other directly and in person than we do now, that could well be something to look forward to.
"ChatGPT holds the promise of generating pages of original natural-language writing for you to sign your name to—created while you were having breakfast!"
Thus explaining the CSLF from 1/18/23.
😉
I, for one, welcome the inevitable Butlerian Jihad that will end the reign of the AI Overlords.