Blog Archives
Dead Internet
There are two ways to destroy something: make it unusable, or reduce its utility to zero. The latter may be happening with the internet.
Let’s back up. I was browsing a Reddit Ask Me Anything (AMA) thread by a researcher who worked on creating “AI invisibility cloak” sweaters. The goal was to design “adversarial patterns” that essentially tricked AI-based cameras into no longer recognizing that a person was, in fact, a person. During the AMA, though, they were asked what they thought about language-model AI like GPT-3. The reply was:
I have a few major concerns about large language models.
– Language models could be used to flood the web with social media content to promote fake news. For example, they could be used to generate millions of unique twitter or reddit responses from sockpuppet accounts to promote a conspiracy theory or manipulate an election. In this respect, I think language models are far more dangerous than image-based deep fakes.
This struck me as interesting, as I would have assumed deep-faked celebrity endorsements – or even straight-up criminal framing – would have been a bigger issue for society. But… I think they are right.
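As an aside, the “adversarial pattern” trick from the sweater research is easier to picture with a toy example. The sketch below is not the researchers’ actual method (they optimized printable patches against real object detectors); it just shows the core move – nudging an image along the loss gradient of a classifier until the “person” prediction flips. Everything here, from the tiny stand-in network to the class index, is made up for illustration:

```python
# Toy FGSM-style adversarial perturbation, NOT the actual invisibility-cloak
# method. A randomly initialized CNN stands in for a real person detector.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "person detector": two classes (person / not person).
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)

image = torch.rand(1, 3, 64, 64)  # pretend photo of a person
PERSON = 0                        # hypothetical class index

# Compute the gradient that INCREASES the loss for the "person" label,
# i.e. the direction that hides the person from the detector.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(detector(image), torch.tensor([PERSON]))
loss.backward()

epsilon = 0.1  # perturbation budget; small enough to look like "noise"
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("before:", detector(image).argmax(dim=1).item())
    print("after: ", detector(adversarial).argmax(dim=1).item())
```

A single step like this may or may not flip a toy prediction, and a wearable sweater is a much harder optimization problem, but the principle is the same: the pattern is crafted against the model’s gradients, not against human eyes.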
There is a conspiracy theory that has been floating around for a number of years called “The Dead Internet Theory.” This Atlantic article explains in more detail, but the premise is that the internet “died” in 2016-2017 and almost all content since then has been generated by AI and propagated by bots. That is clearly absurd… mostly. First, I feel like articles written by AI today are pretty recognizable as being “off,” let alone what the quality would have been five years ago.
Second, in a moment of supreme irony, we’re already pretty inundated with vacuous articles written by human beings trying to trick algorithms, to the detriment of human readers. It’s called “Search Engine Optimization” and it’s everywhere. Ever wonder why cooking recipes on the internet have paragraphs of banal family history before giving you the steps? SEO. Are you annoyed when a piece of video game news that could have been summed up with two sentences takes three paragraphs to get to the point? SEO. Things have gotten so bad, though, that you pretty much have to engage in SEO defensively these days, lest you get buried on Page 27 of the search results.
And all of this is (presumably) before AI gets involved, belting out 10,000 articles a second.
A lot has already been said about polarization in US politics and misinformation in general, but I do feel like the dilution of the internet’s utility has played a part in that. People have their own confirmation biases, yes, but it is also true that when there is so much nonsense everywhere, you retreat to the familiar. Can you trust this news outlet? Can you trust this expert citing that study? After a while, it simply becomes too much to research and you end up choosing 1-2 sources that you thereafter defer to. Bam. Polarization. Well, that and certain topics – such as whether you should force a 10-year-old girl to give birth – afford no ready compromises.
In any case, I do see a potential nightmare scenario: a Cyberpunk-esque duel between warring AIs, some engaging in auto-SEO and others desperately trying to filter out the millions of posts/articles/tweets crafted to capture the attention of whatever human observers are left braving the madness between the pockets of “trusted” information. I would like to imagine saner heads would prevail before unleashing such AI, but… well… *gestures at everything in general.*
ChatGPT
Jan 26
Posted by Azuriel
Came across a Reddit post entitled “Professor catches student cheating with ChatGPT: ‘I feel abject terror’”. Among the comments was one saying “There is a person who needs to recalibrate their sense of terror.” The response to that was a comment fretting that ChatGPT-cheating students would go on to become unqualified doctors, lawyers, and surgeons.
Although I am bearish on the future of the internet in general with AI, the concerns above just sort of made me laugh.
When it comes to doctors and lawyers, what matters are results. Even if we assume ChatGPT somehow made someone pass the bar or get a medical license – and they further had no practical exam components/residency for some reason – the ultimate proof is real-world application. Does the lawyer win their cases? Do the patients have good health outcomes? It would certainly suck to be among the first few clients who prove the professional had no skills, but that can usually be avoided by sticking to those with a positive record to begin with.
And let’s not pretend that fresh graduates who did everything legit are always going to be good at their jobs. It’s like the old joke: what do you call the person who passed medical school with a C-? “Doctor.”
The other funny thing here is the implicit assumption that a given surgeon knows better than an AI chatbot which drug to administer. Sure, it’s a natural assumption to make. But surgeons, doctors, and everyone in between are constantly lobbied (read: bribed) by drug companies to use their new products instead. How many thousands of professionals started over-prescribing OxyContin after attending “all expenses paid” Purdue-funded conferences? Do you know which conferences your doctor has attended recently? Do they even attend conferences? Maybe they already use AI, eh?
Having said that, I’m not super-optimistic about ChatGPT in general. A lot of these machine-learning models get their training data from publicly available sources. Once a few of the nonsense AIs get loosed in a Dead Internet scenario, there is going to be a rather sudden Ouroboros situation where ChatGPT consumes anti-ChatGPT nonsense in an infinite loop. Maybe the programmers can whitelist a few select, trustworthy sources, but that limits the scope of what ChatGPT would be able to communicate. And even in the best-case scenario, doesn’t that mean tight, private control over the only unsullied datasets?
Which, if you are catering to just a few, federated groups of people anyway, maybe that is all you need.
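For what it’s worth, that Ouroboros worry has a name in research circles – “model collapse” – and you can watch the basic mechanism play out in a toy simulation. The sketch below is a cartoon, not a claim about how ChatGPT is actually trained: each “generation” fits a simple Gaussian model to the previous generation’s output and keeps only the most typical-looking samples (a stand-in for ranking/SEO filters), and the diversity of the data visibly shrinks:

```python
# Toy illustration of the "Ouroboros" / model-collapse idea: each model
# generation is fit only to the previous generation's (filtered) output.
import random
import statistics

random.seed(42)

# Generation 0: "human" data, mean 0, standard deviation 1.
data = [random.gauss(0, 1) for _ in range(1000)]

for generation in range(6):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")
    # The next generation trains only on the current model's samples,
    # and a filter keeps just the most "typical" half of them -- so the
    # measured spread of the data shrinks every single cycle.
    samples = [random.gauss(mu, sigma) for _ in range(1000)]
    samples.sort(key=lambda x: abs(x - mu))
    data = samples[:500]
```

Each pass looks reasonable on its own; the rot only shows up when you compare generation 5 to generation 0, by which point the “stdev” column has collapsed to a fraction of where it started.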
Posted in Commentary
Tags: AI, ChatGPT, Dead Internet Theory, Machine Learning