N(AI)hilism
Wilhelm has a post up about how society has essentially given up the future to AI at this point. One of the anecdotes in there is about how the Chicago Sun-Times ran a top-15 book list in which only 5 of the books were real. The other is about how some students at Columbia University admitted they complete all of their coursework via AI, to make more time for the true reason they enrolled in an Ivy League school: marriage and networking. Which, to be honest, is probably the only real reason to be going to college for most people. But at least “back in the day” one may have accidentally learned something.
From a concern perspective, all of this is almost old news. Back in December I had a post up about how the Project Zomboid folks went out of their way to hire a human artist who turned around and (likely) used AI to produce some or all of the work. Which you would think speaks to a profound lack of self-preservation, but apparently not. Maybe they were just ahead of the curve.
Which leads me to the one silver-lining when it comes to the way AI has washed over and eroded the foundations of our society: at least it did so in a manner that destroys its own competitive advantage.
For example, have you seen the latest coming from Google’s Veo 3 AI video generation? Among the examples of people goofing around was this pharmaceutical ad for “Puppramin,” a drug to treat depression by encouraging puppies to arrive at your doorstep.
Is it perfect? Of course not. But as the… uh, prompt engineer pointed out on Twitter, these sorts of ads used to cost $500,000 and take a team of people months to produce, but this one took a day and $500 in AI credits. Thing is, you have to ask what the eventual outcome is. If one company can reduce its ad creation costs by leveraging AI, so can all the others. You can’t even say that the $499,500 saved could be used to purchase more ad space, because everyone in the industry is going to have that extra cash, so bids on timeslots or whatever will increase accordingly.
It all reminds me of the opening salvo in the AI wars: HR departments. When companies began receiving 180 applications for every job posting, HR started utilizing algorithms to filter candidates. All of a sudden, if you knew the “tricks” and keywords to get your resume past said filter, you had a significant advantage. Now? Every applicant can use AI to construct a filter-perfect resume and tailored cover letter, and apply to 500 companies over their lunch break. No more advantage.
At my own workplace, we have been mandated to take a virtual course on AI use ahead of a deployment of Microsoft Claude. The entire time I was watching the videos, I kept thinking “what’s the use case for this?” Some of the examples in the videos were summarization of long documents, creating reports, generating emails, and the normal sort of office stuff. But, again, it all calls into question what problem is being solved. If I use Claude to generate an email and you use Claude to summarize it, what even happened? Other than a colossal waste of resources, of course.
Near as I can tell, there are only two end goals available for this level of AI. The first we can see with Musk’s Grok, where the AI owners can put their thumbs (more obviously) on the scale to direct people towards skinhead conspiracy theories. I can imagine someone with less ketamine-induced brain damage being more subtle, nudging people towards products/politicians/etc. that have bent the knee and/or paid the fee. The second end goal is presumably to actually make money someday… somehow. Currently, zero of the AI companies out there make any profit. Most of them are free to use right now, though, and that could change in the future. If the next generation of students and workers are essentially dependent on AI to function, suddenly making ChatGPT cost $1,000 to use would reintroduce the competitive advantage.
…unless the AI cat is already out of the bag, which it appears to be.
In any case, I am largely over it. Not because I foresee no negative consequences from AI, but because there is really nothing to be done at this point. If you are one of the stubborn holdouts, as I have been, then you will be run over by those who aren’t. Nobody cares about the environmental impacts, the educational impacts, the societal impacts. But what else is new?
We’re all just here treading water until it reaches boiling temperature.
Posted on May 26, 2025, in Commentary and tagged AI, ChatGPT, Google Veo 3, Nihilism. 3 Comments.
I spent the whole morning and some of the afternoon working on one new song. A lot of that, admittedly, was old-school drafting and re-drafting of the lyrics, and singing guide vocals into the Voice Recorder on my phone, but the reason for both was to have a version the AI would follow accurately. I didn’t count how many tries it took to get a final version I was happy with, but it was probably thirty at least. All the rest apart from that one will be trashed at some point.
That’s one song out of maybe fifty or sixty I’ve done so far this year. I’ll have to sort them and count them at some point. I can say how many attempts I’ve made, though, because the AI keeps track. It’s just shy of two thousand, not counting the ones I trashed immediately. For that, I’ve paid about $30 in total. It’s only because it’s so insanely cheap that I can keep at it the way I have been until I get the results I want.
The thing is, though, when the AI gets it right, the results are incredible. If they ever get it to the stage where it gets it right the first or second time, then we’ll really be getting somewhere. Until then, though, it’s all about burning down the forest to find the one tree you’re looking for. Whether that’s sustainable I wouldn’t like to speculate.
It’s happening in all fields. Research journals are getting submerged by cheap, mediocre, auto-generated submissions. Students’ written reports have become a thing of the past: now if you want to test a student, it’s in person (we always did this, so actually it changes nothing for us).
For me the scariest part is that the young “ask ChatGPT” and believe the answer, and the problem is that while the text is nice and convincing, the underlying information can often be completely wrong. I’ve had one student fail an exam because he used ChatGPT to study and revise…
Ironically, I have less of an issue with misinformation coming from ChatGPT, simply because there are millions of people already getting their “facts” from Facebook meme posts and Russian bot farms. Indeed, on average, the ChatGPT information is probably more true than where they would have heard it from otherwise.
That said, yeah, custom information is going to be the bane of any ChatGPT inquiry. How is AI going to know what “Dr. White” from University College wants as an answer to his second-quarter quiz? Unless Dr. White has been using AI to write his lesson plans, anyway…