Bias Confirmed!
It’s so satisfying when what you suspected all along turns out to be true. Almost too satisfying. It overrides any potential skepticism, suspending it in favor of pristine, self-affirming goodness.
We wrote a long time ago about how the roller-coaster ride of hype regarding AI would evolve: the early bubbly reports of enthusiasm and panic would eventually subside, and boring reality would set in. But one possible outcome would be that AI-generated web content would lead to distrust in online interactions. Here’s what we said back in January of 2023:
Beyond that, if you can no longer trust the authenticity of online interactions, wouldn’t this improve the value of in-person, face-to-face conversations with verifiably real people? If online interactions become too suspicious, many users will probably stop trusting them altogether. It seems conceivable that this lack of trust would drive legitimate businesses away from electronic communications because it would damage their reputations. If so, communicating via electronic device may go from being the common standard in society to being the dispreferred, undesirable means of interacting—because you just can’t tell if you’re engaging with another human.
StudyFinds now reports just such a trend, based on a survey. In brief:
The survey of 2,000 adults by Talker Research paints a concerning picture. Americans believe only 41% of online content is accurate, factual, and made by humans. They think 23% is completely false and purposely inaccurate or misleading, while 36% falls somewhere in between. Three-quarters of respondents say they trust the internet less today than ever before.
The prediction here was that a surfeit of AI/LLM-generated online content would essentially debase the currency of online information. Will that still come true? Or will people decide the dopamine hits from vibrating devices are worth it, even if what they alert us to is junk and nonsense?
In a separate study, researchers examined whether AI doomsday predictions were distracting the public from more immediate AI harms. This study was more strictly academic, and its built-in assumption was that people should spend their time worried about some aspect of AI; it simply assumed the worry belonged in the immediate moment, not in some imagined far-future consequences. The main thing, presumably, is to remain worried.
Therefore, the problem isn’t that we’re worried, but that we’re insufficiently worried, and in the wrong ways: something else to worry about! Unless this turns out to be unreliable because it’s just more online junk that we shouldn’t believe.
Since we’re in the online world of make-believe anyway, why not just choose the most entertaining option and believe that? Or is that what we’re already doing? You decide.
Good morning. It's finally the Big Day(s) for the North Carolina Envirothon, today and tomorrow. The weather forecast at the location (Burlington) has improved, but we'll still take both of our waterproof canopies, just for luck. Also a 10-pack of single-use rain ponchos. They used to be $1.00 each at Walmart; now they're $2.00 each if you buy them one at a time but still $1.00 each in the 10-pack, and why wouldn't you buy the pack if you plan on camping and other outdoorsiness over the next several months?
The teams to support are Beautiful Butterflies at the High School level and Ladies and Gent at the Middle School level.
"Americans believe only 41% of online content is accurate, factual, and made by humans. They think 23% is completely false and purposely inaccurate or misleading, while 36% falls somewhere in between."
I'll bet the survey had one of those slider things where you have to enter your opinion with a specific number between 0% and 100%. What nonsense.
That said, it's fairly likely that this non-information is, as the pundits say, "directionally accurate," that is, indicative of a vibe.