Blog Archives

Infiltration

I’ve talked a lot about AI in hypotheticals, although it is certainly having real-world effects in various fields already. Nevertheless, nothing quite hits the same as it showing up at your own place of business.

For context, general staff where I work have been using ChatGPT for more than a year now. Mostly, it has been used to summarize their own notes, check grammar, and so on. All of which had been expressly against policy, considering how the notes in question are filled with sensitive personal information that has now been consumed by OpenAI servers and possibly regurgitated into someone else’s results. There’s no fighting the tide though, so the executives worked with various regulatory agencies to bring a “walled-off” version of (I think) Microsoft’s AI offering into the fold.

Things took a decidedly different turn the week before the holidays.

See, there have been a number of initiatives and requests over the years from staff to get electronic versions of paper forms, and/or to modernize the electronic forms they already use. Our IT shop is small for an organization of our size, with just 2-3 programmers on staff for such requests. The latest form to be modernized has taken almost two years to be completed, for… various reasons I won’t go into here. So when a new request came in for a different form, it was not much of a surprise to learn that the estimate was ~2,000 programming hours. For us, that sounded about right.

And then… the senior programmer just completed the entire form in about 100 hours via AI.

During the impromptu “demo,” he outlined his methodology, which was essentially guiding the AI into building each specific section of the form separately. Interestingly, as each section was built and tweaked by the programmer, the entire codebase was (apparently) re-entered into the prompt to ensure later sections integrated with the earlier pieces. All of which took the stated 100 hours and around $50 worth of AI credits. I think this was via GitHub Copilot, on one of the lower “settings.”
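(For the curious, the workflow he described boils down to a loop like the sketch below. To be clear, this is my own illustration in Python, not his actual setup: the complete() function is a hypothetical stand-in for whatever model call, Copilot or otherwise, you have on hand.)

    from pathlib import Path

    def complete(prompt: str) -> str:
        # Stand-in for the real model call; his was GitHub Copilot, but any
        # chat/completion API slots in here. Entirely hypothetical.
        raise NotImplementedError

    def build_form(sections: list[str], src_dir: Path) -> None:
        # One section at a time, per the demo: build, tweak, move on.
        for section in sections:
            # Re-feed the ENTIRE current codebase as context, so the model
            # integrates the new section with everything built so far.
            files = [f for f in sorted(src_dir.rglob("*")) if f.is_file()]
            codebase = "\n\n".join(
                f"--- {f.name} ---\n{f.read_text()}" for f in files
            )
            prompt = (
                f"Here is the current codebase:\n{codebase}\n\n"
                f"Implement the '{section}' section of the form so it "
                f"integrates with the existing code."
            )
            draft = complete(prompt)
            # The human part: review and tweak before committing the section.
            (src_dir / f"{section}.generated").write_text(draft)

The notable part is the brute-force context management: rather than trusting the model to remember anything, every file gets shoved back into the prompt on every pass. Hence the $50 in credits.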

Brilliant, right? If this outcome is reproducible with other forms – and the form in question doesn’t self-destruct down the road – it will effectively save the organization millions of dollars over time. There is nothing but unambiguous good about it, yeah? People dream about a 20x improvement in efficiency!

So, here’s the thing. The form we’re talking about already exists as a PDF and a Word document. Having it also exist as what amounts to a company website might save some amount of staff time, but the overall return on investment by any metric was dubious from the get-go. Is going from 2,000 hours to 100 impressive? Yes. Would I expect our junior programmer to be able to reproduce the same 100-hour outcome? No. Is there a very real possibility that the next round of hirings will exclude junior programmer positions? Yes. Does anyone have grand designs for how junior programmers will become senior programmers in the future? No. Will future AI use for a similar effort also cost $50? Fuck no.

If you’re pessimistic about an AI future, you’re a Luddite, fighting an absurd battle to keep buggy whips relevant. “Get with the times, old man! The future is now!” Sure, okay. Question, though: do you feel we are well positioned, politically or economically, for the fruits of AI investments to improve the lives of the working class? Or perhaps are we just automating the creation of shit sandwiches?

Alas.

Blurry Lines

It occurs to me that in the last post, I brought up Larian Studios’ use of AI for “boring” work, but did not otherwise offer an opinion on the subject. Do they deserve the scarlet AI letter brand? Or perhaps will they receive a reprieve, on account of not being Activision-Blizzard (etc)?

It helps to level-set. Here is most of the relevant transcript from the Larian interview:

JS: Speaking of efficiency, you’ve spoken a little bit about generative AI. And I know that that’s been a point of discussion on the team, too. Do you feel like it can speed up production?

SV: In terms of generation, like white boxing, yes, there’s things, but I’m not 100% sure if you’re actually seeing speed-ups that much. You’re trying more stuff. Having tried stuff out, I don’t actually think it accelerates things. Because there’s a lot of hype out there. I haven’t really seen: oh this is really gonna replace things. I haven’t seen that yet. I’ve seen a lot of where you initially get excited, oh, this could be cool. And then you say, ah, you know, in the end it doesn’t really do the thing. Everything is human actors; we are writing everything ourselves. There’s no generated assets that you’re gonna see in the game. We are trying to use generated assets to accelerate white-boxing. But I mean to be fair, we’re talking about basic things to help the level designers.

JS: What about concept art?

SV: So that’s being used by concept artists. They use it the same like they would use photos. We have like 30 concept artists at this point or something like that. So we bought a boutique concept art firm at the moment that everybody was reducing them because they were going to AI; in our case it just went up. If there’s one thing that artists keep on asking for it’s more concept artists. But what they do is they use it for exploration.

[…]

SV: I think experimentation, white boxing, some broader white boxing, lots and lots of applications and retargeting and cleaning and editing. These are things that just really take a lot of time. So that allows you to do more. So there’s a lot of value there in terms of the creative process itself. It helps in doing things. But I haven’t seen the acceleration. So I’m really curious to see because there’s all studios that said, oh, this is gonna accelerate… If you look at the state of the art of video games today, these are still in their infancy. Will they eventually manage to do that at large scale? I don’t know how much data centers they’re gonna need to be able to do it.

So what I would say is that what Larian is doing is materially different than a company, say, using AI to generate random newspapers to place around a city. Or, you know, using AI to voice characters entirely. Copy & pasting AI-generated output directly into your game seems like a pretty clear line not to cross.

Personally though, there are other lines that are blurrier, and that sit at the top of a slippery slope.

Take the concept artists. Larian hired a bunch at a time when many were getting replaced with AI. Good Guy Larian. Even if, perhaps, they came at a bit of a discount on account of, you know, AI pressure on their field of work. Whatever, humans good. We then learn that these same concept artists use generative AI for “exploration,” either instead of or, optimistically, in tandem with more traditional photos. That’s where things start to break down for me.

Suppose a concept artist wants to draw a lion. To do so, they would like to have a bunch of photos of lions for reference material. I understand the allure of saving time by simply getting ChatGPT or whatever to spit out 30 lion photos in various states of movement, rather than manually doing Google searches, going to zoo websites, and so on. The seduction of the follow-up prompt is right there though. “Lions roaring.” “Lions pouncing on antelope.” “Lion with raven wings attacking a paladin.”

[AI generated image] My meaningless contribution to entropy just to make an internet point.

And yeah, that looks goofy as shit. The artists will redraw it in the style that fits the game and nobody will notice. But maybe the team likes the look of the dust and mountainous landscape. They incorporate that. Maybe they include an armor set that matches that style. Or the sun symbol. Over time, maybe the team takes what the people themselves came up with and starts running it through the prompts “just in case” the AI spits out something similarly interesting. And so on and so forth.

“So what? What’s the harm?”

Well, how much time do you have? I’m going to focus exclusively on the videogame angle here, rather than environmental impacts, cognitive studies, and apocalypse risk from agentic, self-improving AI.

The first concern is becoming reliant upon it. Larian is apparently hiring concept artists today, but maybe in the not so distant future, they don’t. Anyone can type in a prompt box. Over time, the entire concept artist field could disappear. And what is replacing it? An AI model that is incentivized to give you exactly what it thinks you want. This will lead to homogenization, the sort of “AI glow,” and even if you wanted to fight the current… who is still economically capable of producing the bespoke human work? And would they not just turn around and tweak AI output and call it a day (it’s happened before)?

Incidentally, the other part of AI reliance is the fact that you own none of it. Right now, these AI firms lose money every time people use them, but that is going to stop eventually. When it does, you are either going to be on the hook for tens of thousands of dollars a month for a license, or desperately trying to filter out ad placements from the output. Maybe open source LLMs (or corporate saboteurs) will save us from such a fate, but there won’t be a non-AI fallback option because the job doesn’t exist anymore.

The second concern is something that these companies definitely feel the effects of already, but apparently don’t give much thought to: we are very much in a crowded attention economy. On one end you have short-form video eating into gamer time, and on the other you have legacy games continuing to dominate playtimes. For example, Steam’s year-end report showed that just 14% of gamer time was spent playing games released in 2025. Is that figure skewed by Steam exclusives like Counter-Strike 2? Maybe. Then again, Steam is the largest, most ubiquitous PC storefront in the world and had 1.5+ million concurrent players in Counter-Strike 2 yesterday. That’s a lot of people who could be playing anything other than a game from 2012.

Now imagine that all of the promises of AI have come true for videogame devs. Six year timelines become four years or even two. Great! Who is going to be playing your game with what time? Over 19,000 games came out on Steam in 2025. Are all of them AAA titles winning awards? Of course not. But what does AAA even mean in a flowers-and-rainbow AI scenario? Maybe $50-$100+ million still makes a big difference in quality, fine. But that certainly didn’t save Black Ops 7, Borderlands 4, Concord, the dead-on-eventual-arrival Highguard, and so on.

Now imagine what happens when there are… 190,000 games released in a year.

As a player, I suppose in this hypothetical we come out ahead; there are more games specifically tailored to our exact preferences. For the game makers though, well, most of them are going to fail. Or perhaps the hobbyist ones survive, assuming a lower AI license cost. I don’t see how AAA survives with the increased competition and a reduced competitive edge (mo-cap, CGI, etc); hell, they are struggling to survive already. To say nothing of the discoverability issues. Maybe AI will fix that too, yeah?

In summation, my thoughts on the matter:

  1. Copy & pasting literal AI assets in your game is bad
  2. Using AI for inspiration leads to being trapped in an AI ecosystem
  3. AI-shortened development times lead to no one making any money

Of course, the cat genie is out of the lamp bag and never going back into the toothpaste tube. Taking a hard stance on #1 – to include slapping AI labels on Steam pages and the like – is not going to prevent #2 and #3. Hell, everyone in the industry wants shortened development times. I just don’t think anyone fully appreciates what that sort of thing would look like, until after the monkey’s paw curls.

In the meantime, as a gamer… eh, do what you want. I personally don’t want any generative AI elements in the games I play, for all the reasons I already outlined above (plus the ones I intentionally skipped). At the same time, I don’t have the bandwidth to contemplate how much GitHub Copilot use by a random programmer constitutes too much for them to qualify for a GOTY award. And if you’re not turning off DLSS 3 or FSR out of principle, what are you even doing, amirite?

“Normal People Don’t Care”

There is a minor, ongoing media kerfuffle with the internet-darling Larian Studios (makers of Baldur’s Gate 3, Original Sin 2, etc). It started with this Bloomberg article, wherein Jason Schreier writes:

Under Vincke, Larian has been pushing hard on generative AI, although the CEO says the technology hasn’t led to big gains in efficiency. He says there won’t be any AI-generated content in Divinity — “everything is human actors; we’re writing everything ourselves” — but the creators often use AI tools to explore ideas, flesh out PowerPoint presentations, develop concept art and write placeholder text.

The use of generative AI has led to some pushback at Larian, “but I think at this point everyone at the company is more or less OK with the way we’re using it,” Vincke said.

There are charitable and not-so-charitable takes on those words, and suffice it to say, many people chose the latter. Vincke responded with a “Holy fuck guys [chill out]” Twitter response, with clarifications and emphasis that they only use AI for reference material and other boring things, and not with actual content. Jason Schreier also chimed in with an original transcript of the interview, as a response to others suggesting that what Schreier wrote was itself misleading.

As a side note, this portion of the transcript was extra interesting to me:

JS: It doesn’t seem like it’s causing more efficiency, so why use it?

SV: This is a tech driven industry, so you try stuff. You can’t afford not to try things because if somebody finds the golden egg and you’re not using it, you’re dead in this industry.

I suppose I should take Vincke’s word on the matter, considering how he released a critically-acclaimed game that sold 20 million copies, and I have… not. But, dead? Larian Studios has over 500 employees at this point, so things are likely different at these larger scales. I’m just saying the folks that made, you know, Silksong or Megabonk are probably going to be fine without pushing AI into their processes.

Anyway, all of that is actually a preamble to what sent me to this keyboard in the first place. In the Reddit comments of the second Schreier piece, this exchange took place:

TheBlightDoc: How could he NOT realize how controversial the genAI comments would be? Has he been living under a rock? Or does he himself believe AI is not a big deal? :laughing:

SexyJazzCat: The strong anti AI sentiment is a very chronically online thing. Normal people don’t actually care.

do not engage… do not engage… do not engage

Guys, it’s hard out here in 2025. And I’m kinda all done. Tapped out. Because SexyJazzCat is correct.

Normal people don’t actually care. We know this because “normal” people voted the current administration back into office. Normal people don’t understand that measles can reset your immune system, erasing all your hard-fought natural immunities. Normal people don’t understand that every AI data center that springs up in your area is subsidized by increases to your own electric bill. Normal people don’t understand that tariffs are taxes that they end up paying for. Normal people don’t understand that even if they didn’t use ACA subsidies, their health insurance is going to wildly increase anyway because hospitals won’t be reimbursed for emergency care from newly uninsured people. Nevermind the, you know, general human misery this creates.

Normal people don’t actually care about AI. But they should. Or perhaps should have, past tense, because we’re far past the end of a very slippery slope and fully airborne. Normal people are just going to be confused as to why computers, phones, and/or videogame consoles are wildly more expensive in 2026 (e.g. RAM crisis). Or if AI successfully demonstrates real efficiency gains, surprised when they are out of a job. Or if AI crashes and burns, why they also still lost their job and their 401k cratered (e.g. 40% of S&P 500 value is in AI companies).

The only thing that I still wish for these days is this: that people have the kind of day they voted for.

Authentic Wirehead

Bhagpuss has a post out called “It’s Real if I Say It’s Real,” with a strong argument that while people say they desire authenticity in the face of (e.g.) AI-generated music, A) people often can’t tell the difference, and B) if you enjoyed it, what does it even matter?

It was the clearest, most positive advocacy of the wirehead future I’ve ever seen in the wild.

Now, speaking of clarity, Bhagpuss didn’t advocate for wireheading in the post. Not directly. I have no personal reason to believe Bhagpuss would agree with my characterization of his post in the first place. However. I do believe it is the natural result and consequence of accepting the two premises.

Premise one is that we have passed (and are perhaps far beyond) the point at which the average person can easily differentiate between AI-generated content and the “real thing.” Indeed, is there really anyone anywhere ready to argue the opposite? Linked in Bhagpuss’s post was this survey showing 97% of respondents unable to tell the difference between human-made and AI-generated music across three samples. GPT-4.5 has already passed the classic three-way Turing Test, being selected as the human 73% of the time. Imagine being the other person the research subject was texting with, and being so resoundingly rejected as human.

Then again, perhaps the results should not be all that surprising. We are very susceptible to suggestion, subterfuge, misdirection, and marketing. Bhagpuss brought up the old-school Pepsi vs Coke challenge, but you can also look at wine tasting studies where simply being told one type was more expensive led to it being rated more highly. Hell, the simple existence of the placebo effect at all should throw cold (triple-filtered, premium Icelandic) water on the notion that we exist in some objective reality. And us not, you know, just doing the best we can while piloting wet bags of sentient meat.

So, premise #1 is that it has become increasingly difficult to tell when something was created by AI.

Premise #2 is that we no longer care that it was artificially generated. For a lot of people, we are already well past this mile marker. Indirectly, when we no longer bother trying to verify the veracity of the source. Or directly, when we know it is AI-generated and enjoy it anyway.

I am actually kind of sympathetic on this point, philosophically. I have always been a big believer that an argument stands on its own merits. To discredit an idea based on the character of the person who made it is the definition of an ad hominem fallacy. In which case, wouldn’t casting aspersions on AI be… ad machina? If a song, or story, or argument is good, do its origins really matter? Maybe, maybe not.

Way back in my college days, I studied abroad in Japan for a semester. One thing I took was a knock-off Zune filled with LimeWired songs, and it was my proverbial sandbar while feeling adrift and alone. Some memories are so intensely entangled with certain songs, that I cannot think of one without the other. One of my favorites back then was… Last Train Home. By lostprophets. Sung by Ian Watkins.

So… yeah. It’s a little difficult for me to square the circle that is separating the art from the artist.

But suppose you really don’t care. Perhaps you are immune to “cancel culture” arguments, unmoved by allegations of a politician’s hypocrisy, and would derive indistinguishable pleasure from seeing the Mona Lisa in person and from a print thereof hanging on your wall. “It’s all the same in the wash.”

To which I would ask: what distance remains to simply activating your nucleus accumbens directly?

What is AI music if not computer-generated noises attempting to substitute for the physical wire in your brain? Same for AI video, AI games, AI companions. If the context and circumstances of the art have no meaning, bear no weight, then… the last middle-man to cut out is you. Wirehead: engage.

I acknowledge that in many respects, it is a reductive argument. “Regular music is human-generated noises attempting to substitute for the wire.” We do not exist in a Platonic universe, unmoored from biological processes. Even my own notion that human-derived art should impart greater meaning into a work is itself mental scaffolding erected to enhance the pleasure derived from experiencing it.

That said, this entire thought experiment is getting less theoretical by the day. One of the last saving graces against a wirehead future is the minor, you know, brain surgery component. But what if that was not strictly necessary? What if there was a machine capable of gauging our reactions to given stimuli, allowing it to test different combinations of outputs in the form of words, sounds, and flashing lights to remotely trigger one’s nucleus accumbens? They would need some kind of reinforcement mechanism to calculate success, and an army of volunteers against which to test. The whole thing would cost trillions!

Surely, no one would go for that…
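(If you want the shape of that “reinforcement mechanism” in concrete terms, a toy epsilon-greedy bandit with engagement as the reward signal is basically it. Everything below is made up for illustration; no real system is this simple, and all the names are invented.)

    import random

    class EngagementBandit:
        # Epsilon-greedy bandit: serve the stimulus that has historically
        # produced the most "engagement," with occasional random exploration.
        def __init__(self, stimuli: list[str], epsilon: float = 0.1):
            self.stimuli = stimuli
            self.epsilon = epsilon
            self.plays = {s: 0 for s in stimuli}
            self.reward = {s: 0.0 for s in stimuli}

        def pick(self) -> str:
            if random.random() < self.epsilon:  # explore a new combination
                return random.choice(self.stimuli)
            # Exploit: highest average engagement observed so far.
            return max(
                self.stimuli,
                key=lambda s: self.reward[s] / self.plays[s] if self.plays[s] else 0.0,
            )

        def observe(self, stimulus: str, engagement: float) -> None:
            # "Calculating success": watch time, clicks, replies -- any proxy works.
            self.plays[stimulus] += 1
            self.reward[stimulus] += engagement

    bandit = EngagementBandit(["puppy video", "outrage headline", "flashing lights"])
    for _ in range(1000):  # the army of volunteers
        s = bandit.pick()
        bandit.observe(s, engagement=random.random())  # stand-in for real human reactions

Swap “stimuli” for videos, posts, or chatbot replies, and “engagement” for watch time, and the trillion-dollar version writes itself.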

Self-Correcting

I feel there are many elements about AI that will eventually be self-correcting… in a sort of apocalyptic, crash-and-burn kind of way. For example, the AI-summarized web doesn’t leave much economic oxygen for people to create content worth summarizing. Assuming, of course, that ad-based revenue streams continue to make sense at all as we cross into over 50% of all internet traffic being bots.

On an individual level, I am experiencing some interesting changes that may also be self-correcting.

I have mentioned it a few times, but I have had a problem with watching YouTube (Shorts). As in, I would pop on over to quickly decompress from some other activity, and then 2-3 hours later, awaken from my algorithmic fugue state, having accomplished nothing that I had set out to do. It’s a problem.

…or, at least, it was. Because I am now beginning to encounter (presumed) AI-directed, curated, and/or created content. And it repulses me in an uncanny valley way. Takes me right out of whatever hypnosis I was under and immediately causes me to close the tab. Which, of course, is great for me.

I put “presumed” up there though, because sometimes I cannot really tell. For example, this video about “15 forgotten garden traditions” is probably AI generated – it features a generic voiceover on top of a stitched-together montage of other people’s (at least attributed) content. Much like the now-maligned em dash however, perhaps that style of video is now just guilty by association? Another video was on The Saver’s Paradox and my AI-dar went off immediately. Looking further into the channel and thinking about what it would take to prompt that level of video though, it seems like it’s legit.

Perhaps neither of those videos bothered you in the slightest. In which case, congratulations! You are absolutely set up for a future filled to the brim with… content. For me though, the magic is gone.

It may well be inevitable that the quality of AI generation becomes such that it is indistinguishable from human content. In which case, why would I be on YouTube at all, instead of in my own prompt?

Self-correcting! As it turns out, even black holes evaporate eventually.

AI Won’t Save Us from Ourselves

I came across a survey/experiment article the other day entitled The Hidden Penalty of Using AI at Work. The “penalty” in this case being engineers more harshly judging a peer’s code if they were told the peer used AI to help write it. The overall effect is one’s competence being judged 9% worse than when the reviewer is told no AI was used. At least the penalty was applied equally…

The competence penalty was more than twice as severe for female engineers, who faced a 13% reduction compared to 6% for male engineers. When reviewers thought a woman had used AI to write code, they questioned her fundamental abilities far more than when reviewing the same AI-assisted code from a man.

Most revealing was who imposed these penalties. Engineers who hadn’t adopted AI themselves were the harshest critics. Male non-adopters were particularly severe when evaluating female engineers who used AI, penalizing them 26% more harshly than they penalized male engineers for identical AI usage.

Welp, that’s pretty bad. Indeed one of the conclusions is:

The competence penalty also exacerbates existing workplace inequalities. It’s reasonable and perhaps tempting to assume that AI tools should level the playing field by augmenting everyone’s capabilities. Our results suggest that this is not guaranteed and in fact the opposite could be true. In our context, which is dominated by young males, making AI equally available increased bias against female engineers.

This is the sort of thing I will never understand about AI Optimists: why would you presuppose anything other than an entrenchment of the existing capitalist dystopian hellscape and cultural morass?

I don’t know if you have taken a moment to look around lately, but we are clearly in the Bad Place. If I once held hope that AI tools would accelerate breakthroughs in fusion technology, and thus perhaps help us out of the climate apocalypse we are sleepwalking into, articles like the above serve to ground me in the muck where we are. Assuming AI doesn’t outright end humanity, it certainly isn’t going to save us from ourselves. Do you imagine the same administration that is trying to cancel solar/wind energy and destroy NASA satellites monitoring CO2 is going to shepherd in a new age of equitable prosperity?

Or is it infinitely more likely the gains will be privatized and consolidated into the techno-feudal city-states these tech bros are already dreaming up? Sorta like Ready Player One, minus the escapist VR.

I could be wrong. I hope I’m wrong. We are absolutely in a transition period with AI, and as the survey pointed out, the non-adopters were more harsh than those familiar with AI. But… the undercurrent remains. I do not see what AI is going to do to solve income inequality, racism, sexism, or the literal death cult currently wielding all levers of government. I’m finding it a bit more likely that AI will be used to, you know, oppress people in horrible new ways. Or just the old ways, much more efficiently.

Wherever technology goes, there we are.

AIrtists

There’s some fresh Blizzard drama over a Diablo Immortal + Hearthstone collab artwork:

Going to need an AI editor to correct the AI mistakes…

The top comment (1700+ upvotes) is currently:

Guess $158 pets aren’t enough to pay an artist to draw the image for their colab lmao.

I’m all for piling onto Blizzard at this moment, precisely because what they are currently doing in, for example, Hearthstone is especially egregious. It’s not just the pets, though. The dev team had been advocating for reducing the power level of sets for a while – ostensibly to fight power creep – but after like the third flop set in a row, their efforts are becoming indistinguishable from incompetence. The Starcraft miniset has been nerfed like 2-3 times now, but people are still playing cards from there because they’re more powerful than the crap we got today. First week of the expansion, and the updated Quest decks all had winrates of less than 30%.

Having said that, it isn’t all that clear that the AI artwork is actually Blizzard’s fault.

Last year, there was another AI art controversy with Hearthstone regarding the pixel hero portraits. While there was no official announcement, all signs pointed towards the artist themselves being the one to submit the AI-generated product rather than Blizzard actively “commissioning” such a thing. And remember, even the small indie devs from Project Zomboid got burned when they hired the same person that made their original splash screen and said artist turned around to submit AI-smeared work.

This sort of thing used to sound insane to me. Why would an artist use a tool that specifically rips off artists and makes their very own future work less valuable? Is there no sense of self-preservation?

On the other hand, that Hearthstone hero portrait “artist” almost got paid if it weren’t for those pesky Reddit kids. Considering that Microsoft is now requiring its employees to use AI in their jobs, perhaps the artists were just ahead of the curve. In my own meatspace job, AI tools are being made available and training is being required, if only to stymie certain employees from blindly pasting sensitive, personal data into ChatGPT or Grammarly. Because of course they do.

Regardless, I am interested in seeing how it goes down and what eventually wins. AI does, obviously. But do people stop caring about AI-generated product art because so many examples eventually flood the zone that it becomes impossible to keep up? Will it be a simple generational change, with Gen Alpha (etc) being OK with it? Or will AI advance enough that we can no longer spot the little mistakes?

All three are going to happen, but I wonder which will happen first.

Human Slurry

Scrolling on my phone, I clicked into and read an article about Yaupon, which is apparently North America’s only native caffeinated plant. Since we’re speed-running the apocalypse over here in the US, the thought is that high tariffs on coffee and tea might revitalize an otherwise ultra-niche “Made in America” product. Huh, interesting.

I scroll down to the end and then see this:

The human slurry future

I’ve seen summarized reviews on Amazon, but never comments. Honestly, I just laughed.

It’s long been known that the comments on news articles are trash: filled with bots or humans indistinguishable from bots. But there is something deeply… I don’t know a strong enough word for it. Cynical? Nihilistic? Absurd? Maybe just fucking comedic about inviting your (presumably) human readers to comment on a story and then just blending them all up in a great human slurry summary so no one has to actually read any of them. At what point do you not just cut out the middle(hu)man?

If you want a summary of the future, that’s it. Wirehead, but made out of people.

N(AI)hilism

Wilhelm has a post up about how society has essentially given up the future to AI at this point. One of the anecdotes in there is about how the Chicago Sun-Times ran a top-15 book list that included only 5 real books. The other is about how some students at Columbia University admitted they complete all of their coursework via AI, to make more time for the true reason they enrolled in an Ivy League school: marriage and networking. Which, to be honest, is probably the only real reason to be going to college for most people. But at least “back in the day” one may have accidentally learned something.

From a concern perspective, all of this is almost old news. Back in December I had a post up about how the Project Zomboid folks went out of their way to hire a human artist who turned around and (likely) used AI to produce some or all of the work. Which you would think speaks to a profound lack of self-preservation, but apparently not. Maybe they were just ahead of the curve.

Which leads me to the one silver-lining when it comes to the way AI has washed over and eroded the foundations of our society: at least it did so in a manner that destroys its own competitive advantage.

For example, have you seen the latest coming from Google’s Veo 3 AI video generation? Among the examples of people goofing around was this pharmaceutical ad for “Puppramin,” a drug to treat depression by encouraging puppies to arrive at your doorstep.

Is it perfect? Of course not. But as the… uh, prompt engineer pointed out on Twitter, these sorts of ads used to cost $500,000 and take a team of people months to produce, but this one took a day and $500 in AI credits. Thing is, you have to ask what the eventual outcome is. If one company can reduce its ad creation costs by leveraging AI, so can all the others. You can’t even say that the $499,500 saved could be used to purchase more ad space, because everyone in the industry is going to have that extra cash, so bids on timeslots or whatever will increase accordingly.

It all reminds me of the opening salvo in the AI wars: HR departments. When companies began receiving 180 applications for every job posting, HR started utilizing algorithms to filter candidates. All of a sudden, if you knew the “tricks” and keywords to get your resume past said filter, you had a significant advantage. Now? Every applicant can use AI to construct a filter-perfect resume, tailored cover letter, and apply to 500 companies over their lunch break. No more advantage.
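(To put those “tricks” in concrete terms: the filters in question were, reportedly, not much more sophisticated than keyword matching. The sketch below is my own toy version, not any real applicant tracking product, and the keywords are invented.)

    # Toy sketch of the kind of resume filter being described.
    REQUIRED_KEYWORDS = {"python", "kubernetes", "agile", "stakeholders"}

    def passes_filter(resume_text: str, threshold: int = 3) -> bool:
        hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in resume_text.lower())
        return hits >= threshold

    # The old "trick": know the keywords, pass the filter.
    # The new reality: an LLM can rewrite any resume to hit every keyword,
    # so everyone passes and the filter selects for nothing.
    print(passes_filter("Agile Python dev who manages stakeholders"))  # True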

At my own workplace, we have been mandated to take a virtual course on AI use ahead of a deployment of Microsoft Claude. The entire time I was watching the videos, I kept thinking “what’s the use case for this?” Some of the examples in the videos were summarization of long documents, creating reports, generating emails, and the normal sort of office stuff. But, again, it all calls into question what problem is being solved. If I use Claude to generate an email and you use Claude to summarize it, what even happened? Other than a colossal waste of resources, of course.

Near as I can tell, there are only two end goals available for this level of AI. The first we can see with Musk’s Grok, where the AI-owners can put their thumbs (more obviously) on the scale to direct people towards skinhead conspiracy theories. I can imagine someone with less ketamine-induced brain damage being more subtle, nudging people towards products/politicians/etc that have bent the knee and/or paid the fee. The second end goal is presumably to actually make money someday… somehow. Currently, zero of the AI companies out there make any profit. Most of them are free to use right now though, and that could change in the future. If the next generation of students and workers are essentially dependent on AI to function, suddenly making ChatGPT cost $1000 to use would reintroduce the competitive advantage.

…unless the AI cat is already out of the bag, which it appears to be.

In any case, I am largely over it. Not because I foresee no negative consequences from AI, but because there is really nothing to be done at this point. If you are one of the stubborn holdouts, as I have been, then you will be run over by those who aren’t. Nobody cares about the environmental impacts, the educational impacts, the societal impacts. But what else is new?

We’re all just here treading water until it reaches boiling temperature.

(AI)Moral Hazard

There are a lot of strong feelings out there regarding the use of AI to generate artwork or other assets for videogames. Regardless of where you fall on the “training” aspect of AI, it seems clear that a game developer opting for AI art is taking away an employment possibility for a human artist.

One possibility I had not previously imagined though, is when a paid human artist themselves (allegedly) uses AI to generate the art:

Released as part of [Project Zomboid] build 42, these new images for the survival game seemingly contain some visual anomalies that may be attributable to AI generation tools. In the picture of the person using the radio, for example, the handle of the radio is misaligned with its main casing, the wire on the headphones seems to merge into the character’s hair, and there is an odd number of lines on the stand-up microphone – on one side of the microphone there are five indentations, but on the other side, which ought to be symmetrical, there are six.

It is worth noting that this is all forum speculation – AI has not been proven, although it certainly seems suspicious. Moreover, the “AAA concept artist” commissioned is not some rando, but the very one that did the still-used cover art of Project Zomboid from back in 2011. So this particular controversy is literally the worst of all possible worlds: game developer did the right thing by hiring a professional artist with proven track record for thousands of dollars, and received either AI-assisted artwork (bad), or non-AI artwork with human error that is now assumed to be because of AI (worse).

All of which is a complete distraction to another otherwise commendable game update (worst).

“Either way, they are gone for now – likely forever, as frankly after two years of hard work from our entire team in getting build 42 done, it would break my heart if discussion as to whether we’d used AI on a few loading screens that were produced externally to the company pretty recently was to completely overshadow all that effort and passion and hard work the team put into getting B42 out there.”

Truly, it is an unenviable time to be an artist. AI technology is only going to improve, and as it does, you will be increasingly competing against both “Prompt Engineers” and anonymous internet sleuths hunting for clues to “expose” you for Reddit karma. Eventually, AI-generated content will be so prevalent that none of it will matter; I could imagine ads that are dynamically drawn in, say, anime-style because it noticed you had CrunchyRoll open in another tab, or with the realistic likeness of a TV star from your most-watched Netflix show.

Right now, utilizing AI as a business is a sign of being cheap and invites controversy. Perhaps it remains so, presuming the ad-based hellscape imagined above. But at a certain point, AI will probably figure out symmetry and how many Rs are in strawberry and we will likely be none the wiser.

Or we will just assume everything is AI-generated and it won’t matter. Same difference.