N(AI)hilism
May 26
Posted by Azuriel
Wilhelm has a post up about how society has essentially given up the future to AI at this point. One of the anecdotes in there is about how the Chicago Sun-Times ran a top-15 book list that only included 5 real books. The other is about how some students at Columbia University admitted they complete all of their coursework via AI, to make more time for the true reason they enrolled in an Ivy League school: marriage and networking. Which, to be honest, is probably the only real reason to be going to college for most people. But at least “back in the day” one may have accidentally learned something.
From a concern perspective, all of this is almost old news. Back in December I had a post up about how the Project Zomboid folks went out of their way to hire a human artist who turned around and (likely) used AI to produce some or all of the work. Which you would think speaks to a profound lack of self-preservation, but apparently not. Maybe they were just ahead of the curve.
Which leads me to the one silver lining when it comes to the way AI has washed over and eroded the foundations of our society: at least it did so in a manner that destroys its own competitive advantage.
For example, have you seen the latest coming from Google’s Veo 3 AI video generation? Among the examples of people goofing around was this pharmaceutical ad for “Puppramin,” a drug to treat depression by encouraging puppies to arrive at your doorstep.
Is it perfect? Of course not. But as the… uh, prompt engineer pointed out on Twitter, these sorts of ads used to cost $500,000 and take a team of people months to produce, but this one took a day and $500 in AI credits. Thing is, you have to ask what the eventual outcome is. If one company can reduce its ad creation costs by leveraging AI, so can all the others. You can’t even say that the $499,500 saved could be used to purchase more ad space, because everyone in the industry is going to have that extra cash, so bids on timeslots or whatever will increase accordingly.
It all reminds me of the opening salvo in the AI wars: HR departments. When companies started receiving 180 applications for every job posting, HR began using algorithms to filter candidates. All of a sudden, if you knew the “tricks” and keywords to get your resume past said filter, you had a significant advantage. Now? Every applicant can use AI to construct a filter-perfect resume and a tailored cover letter, and apply to 500 companies over their lunch break. No more advantage.
At my own workplace, we have been mandated to take a virtual course on AI use ahead of a deployment of Microsoft Claude. The entire time I was watching the videos, I kept thinking “what’s the use case for this?” Some of the examples in the videos were summarizing long documents, creating reports, generating emails, and the normal sort of office stuff. But, again, it all calls into question what problem is being solved. If I use Claude to generate an email and you use Claude to summarize it, what even happened? Other than a colossal waste of resources, of course.
Near as I can tell, there are only two end goals available for this level of AI. The first we can see with Musk’s Grok, where the AI’s owners can put their thumbs (more obviously) on the scale to direct people towards skinhead conspiracy theories. I can imagine someone with less ketamine-induced brain damage would be more subtle, nudging people towards products/politicians/etc that have bent the knee and/or paid the fee. The second end goal is presumably to actually make money someday… somehow. Currently, none of the AI companies out there make any profit. Most of them are free to use right now, though that could possibly change in the future. If the next generation of students and workers are essentially dependent on AI to function, suddenly making ChatGPT cost $1000 to use would reintroduce the competitive advantage.
…unless the AI cat is already out of the bag, which it appears to be.
In any case, I am largely over it. Not because I foresee no negative consequences from AI, but because there is really nothing to be done at this point. If you are one of the stubborn holdouts, as I have been, then you will be run over by those who aren’t. Nobody cares about the environmental impacts, the educational impacts, the societal impacts. But what else is new?
We’re all just here treading water until it reaches boiling temperature.
Posted in Commentary
Tags: AI, ChatGPT, Google Veo 3, Nihilism