This AI Ain’t It
Wilhelm wrote a post called “The Folly of Believing in AI” and is otherwise predicting an eventual market crash based on the insane capital spent chasing that dragon. The thesis is simple: AI is expensive, so… who is going to pay for it? Well, expensive and garbage, which is the worst possible combination. And I pretty much agree with him entirely – when the music stops, there will be many a child left without a chair but holding a lot of bags, to mix metaphors.
The one problematic angle I want to stress the most, though, is the fundamental limitation of AI: it is dependent upon the very data it intends to replace, and that data evolves all the time.
Duh, right? Just think about it a bit more though. The best use-case I have heard for AI has been from programmers stating that they can get code snippets from ChatGPT that either work out of the box, or otherwise get them 90% of the way there. Where did ChatGPT “learn” code though? From scraping GitHub and similar repositories for human-made code. Which sounds an awful lot like what a search engine could also do, but never mind. Even in the extremely optimistic scenario in which no programmers lose their jobs to future Prompt Engineers, eventually GitHub is going to start (or continue?) to accumulate AI-derived code. Which will be scraped and reconsumed into the dataset, increasing the error rate, thereby lowering the value that the AI had in the first place.
Alternatively, let’s suppose there isn’t an issue with recycled datasets and error rates. There will be a lower need for programmers, which means less opportunity for novel code and/or new languages, as they would have to compete with much cheaper, “solved” solutions. We then get locked into existing code at current levels of function unless some hobbyists stumble upon the next big thing.
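The recycling loop above is easy to sketch as back-of-envelope arithmetic. Here is a toy simulation; every number in it (human error rate, how much a model amplifies the errors in its training data, what fraction of each new scrape is synthetic) is a made-up assumption for illustration, not a measured figure:

```python
# Toy model of dataset recycling: each "generation", some fraction of the
# newly scraped corpus is output from the previous model instead of
# human-written code. All parameters are invented for illustration.

human_error = 0.05      # assumed error rate of human-written code
amplification = 1.5     # assumed: model output is 1.5x as error-prone as its training data
synthetic_share = 0.3   # assumed fraction of each new corpus that is AI-generated

corpus_error = human_error
for generation in range(1, 6):
    model_error = corpus_error * amplification
    # Next corpus = fresh human code blended with the model's own output
    corpus_error = (1 - synthetic_share) * human_error + synthetic_share * model_error
    print(f"gen {generation}: corpus error rate ~ {corpus_error:.3f}")
```

With these particular numbers the corpus error rate climbs from 5% toward a steady state around 6.4% rather than running away forever, but the direction of travel is the point: once the scrape starts eating its own output, quality settles permanently above the human baseline, and crank the synthetic share or amplification higher and it gets much worse.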
The other use-cases for AI are bad in more obvious, albeit understandable ways. AI can write tailored cover letters for you, or if you’re feeling extra frisky, apply for hundreds of job postings a day on your behalf. Of course, HR departments around the world fired the first shots of that war when they started using algorithms to pre-screen applications, so this bit of turnabout feels like fair play. But what is the end result? AI talking to AI? No person can or will manually sort through 250 applications per job opening. Maybe the most “fair” solution will just be picking people randomly. Or consolidating all the power into recruitment agencies. Or, you know, just nepotism and networking per usual.
Then you get to the AI-written house listings, product descriptions, user reviews, or office emails. Just look at this recent Forbes article on how to use ChatGPT to save you time in an office scenario:
- Wrangle Your Inbox (Google how to use Outlook Rules/filters)
- Eliminate Redundant Communication (Ooo, Email Templates!)
- Automate Content Creation (spit out a 1st draft on a subject based on prompts)
- Get The Most Out Of Your Meetings (transcribe notes, summarize transcriptions, create agendas)
- Crunch Data And Offer Insights (get data analysis, assuming you don’t understand Excel formulas)
The article states email and meetings represent 15% and 23% of work time, respectively. Sounds accurate enough. And yet rather than address the glaring, systemic issue of unnecessary communication directly, we are to use AI to just… sort of brute force our way through it. Does it not occur to anyone that the emails you are getting AI to summarize are possibly created by AI prompts from the sender? Your supervisor is going to get AI to summarize the AI article you submitted, have AI create an agenda for a meeting they call you in for, AI is going to transcribe the meeting, which will then be emailed to their supervisor and summarized again by AI. You’ll probably still be in trouble, but no worries, just submit 5000 job applications over your lunch break.
In Cyberpunk 2077 lore, a virus infected and destroyed 78.2% of the internet. In the real world, some experts estimate 90% of online content will be synthetically generated by 2026. How’s that for a bearish case for AI?
Now, I am not a total Luddite. There are a number of applications for which AI is very welcome. Detecting lung cancer from a blood test, rapidly sifting through thousands of CT scans looking for patterns, potentially using AI to create novel molecules and designer drugs while simulating their efficacy, and so on. Those are useful applications of technology to further science.
That’s not what is getting peddled on the street these days though. And maybe that is not even the point. There is a cynical part of me that questions why these programs were dropped on the public like a free hit from the local drug dealer. There is some money exchanging hands, sure, and it’s certainly been a boon for Nvidia and other companies selling shovels during a gold rush. But OpenAI is set to take a $5 billion loss this year alone, and they aren’t the only game in town. Why spend $700,000/day running ChatGPT like a loss leader, when there doesn’t appear to be anything profitable being led to?
[Fake Edit] Totally unrelated last week news: Microsoft, Apple, and Nvidia are apparently bailing out OpenAI in another round of fundraising to keep them solvent… for another year, or whatever.
I think maybe the Dead Internet endgame is the point. The collateral damage is win-win for these AI companies. Either they succeed with the AGI moonshot – the holy grail of AI that would change the game, just like working fusion power – or fill the open internet with enough AI garbage to permanently prevent any future competition. What could a brand new AI company even train off of these days? Assuming “clean” output isn’t now locked down with licensing contracts, their new model would be facing off with ChatGPT v8.5 or whatever. The only reasonable avenue for future AI companies would be to license the existing datasets themselves into perpetuity. Rent-seeking at its finest.
I could be wrong. Perhaps all these LLMs will suddenly solve all our problems, and not just be tools of harassment and disinformation. Considering the big phone players are making deepfake software on phones standard this year, I suppose we’ll all find out pretty damn quick.
My prediction: mo’ AI, mo’ problems.
Posted on September 11, 2024, in Commentary, Philosophy and tagged AI, ChatGPT, Dead Internet Theory, Large Language Models, Luddite, What Could Possibly Go Wrong?. 2 Comments.
The problem with all the commentary on AI is that it’s just futurology and we know how accurate that is. In my lifetime, very few things have ever turned out to be much like people said they were going to be but a lot of things have turned out to be like nothing I heard anyone say they would. Personally, my feeling is that AI will just morph into another background tool, used by humans for much the same purposes they previously used other tools. If AI does the jobs better, it will replace those tools; if it doesn’t, it won’t. It will never replace people per se although peoples’ jobs may change to involve the use of the new AI tools.
As for the cost, if AI ends up doing those jobs as well but no better yet costs more, once again people will stick mostly with the old tools. Either way, the whole thing will soon enough become just another tech deal that almost no-one else thinks or cares about.
As for the idea that the internet is getting less useful and heading towards not becoming useful at all, it gets a lot of airtime these days but I have yet to see any indication of it in my own usage. Everything I use the web for works pretty much as well as it ever did and most of it works a lot better than it used to. Maybe it depends what we’re using it for?
I’m guessing how useful it remains is going to have more to do with that than with how many AI-created “facts” it acquires. I mean, it was always full of garbage that we discounted, wasn’t it? How is this different? Is it just a case of scale? Or are we suggesting that humans will intentionally stop adding factually accurate information to the web now AI has arrived?
The main problem from my perspective isn’t AI but it is cost-related. The insane growth of advertising is responsible for the most apparent degradation of the experience for me and that’s there because everyone is trying to hold the line of the internet being “free”. I do believe it’s possible that the whole thing will collapse under the weight of the unreal expectations placed upon it but I think it will be because of a final recognition that all of this “free” information and entertainment actually has to be paid for and the cost is unsustainable. We don’t need AI for that although if the megacorps are determined to pretend that’s going to be “free” as well, then it certainly will accelerate the collapse.
And that’s futurology!
What I will concede is that, technically, issues like SEO-optimized clickbait articles, fake product reviews, political misinformation, and so on have always existed. But the internet absolutely has gotten measurably and increasingly worse for me (and other people). And why wouldn’t it? The tools for creating believable spam have improved to where it takes less time and effort on the spammer’s part, but the monetary advantages have remained… for now.
For me, product reviews are already nigh useless. For years I’ve had to use a separate website to “vet” and grade Amazon product reviews, for example, but even that is almost useless these days. I used to append “+Reddit” to search queries to try and find legit human responses to products, but that too is declining in usefulness. Perhaps we can say that it has always been best to simply find a few people you trust to review something, but A) you have to find them somehow, and B) you have to maintain some level of faith in humanity that has not already been eroded by systemic corruption.
And that’s the insidious part of the Tragedy of the Internet Commons. It won’t happen all at once – people will slowly change their behavior and/or give up. We talked about it with Blaugust and the AI infiltration there. Maybe the rules will get tightened up for next time, but who’s to say that in 2025 another “blogger” doesn’t just sign up with five sites on the down-low that spit out 31 posts based on the available prompts? Even if caught, it’s going to make everyone less likely to add new sites to their rolls, or become disillusioned with participating at all.
I suppose we’ll see how it ultimately shakes out in the “futurology” department. For me though, the lose-faith-in-humanity part of that future is already here.