Category Archives: Philosophy

Blurry Lines

It occurs to me that in the last post, I brought up Larian Studios’ use of AI for “boring” work, but did not otherwise offer an opinion on the subject. Do they deserve to get the scarlet AI letter brand? Or will they perhaps receive a reprieve, on account of not being Activision-Blizzard (etc)?

It helps to level-set. Here is most of the transcript from the Larian interview:

JS: Speaking of efficiency, you’ve spoken a little bit about generative AI. And I know that that’s been a point of discussion on the team, too. Do you feel like it can speed up production?

SV: In terms of generation, like white boxing, yes, there’s things, but I’m not 100% sure if you’re actually seeing speed-ups that much. You’re trying more stuff. Having tried stuff out, I don’t actually think it accelerates things. Because there’s a lot of hype out there. I haven’t really seen: oh this is really gonna replace things. I haven’t seen that yet. I’ve seen a lot of where you initially get excited, oh, this could be cool. And then you say, ah, you know, in the end it doesn’t really do the thing. Everything is human actors; we are writing everything ourselves. There’s no generated assets that you’re gonna see in the game. We are trying to use generated assets to accelerate white-boxing. But I mean to be fair, we’re talking about basic things to help the level designers.

JS: What about concept art?

SV: So that’s being used by concept artists. They use it the same like they would use photos. We have like 30 concept artists at this point or something like that. So we bought a boutique concept art firm at the moment that everybody was reducing them because they were going to AI; in our case it just went up. If there’s one thing that artists keep on asking for it’s more concept artists. But what they do is they use it for exploration.

[…]

SV: I think experimentation, white boxing, some broader white boxing, lots and lots of applications and retargeting and cleaning and editing. These are things that just really take a lot of time. So that allows you to do more. So there’s a lot of value there in terms of the creative process itself. It helps in doing things. But I haven’t seen the acceleration. So I’m really curious to see because there’s all studios that said, oh, this is gonna accelerate… If you look at the state of the art of video games today, these are still in their infancy. Will they eventually manage to do that at large scale? I don’t know how much data centers they’re gonna need to be able to do it.

So what I would say is that what Larian is doing is materially different from a company, say, using AI to generate random newspapers to place around a city. Or, you know, using AI to voice characters entirely. Copy & pasting AI-generated output directly into your game seems like a pretty clear line not to cross.

Personally though, there are other lines that are blurrier, sitting at the top of a slippery slope.

Take the concept artists. Larian hired a bunch at a time when many were getting replaced with AI. Good Guy Larian. Even if, perhaps, they came at a bit of a discount on account of, you know, AI pressure on their field of work. Whatever, humans good. We then learn that these same concept artists use generative AI for “exploration,” either instead of or, optimistically, in tandem with more traditional photos. That’s where things start to break down for me.

Suppose a concept artist wants to draw a lion. To do so, they would like to have a bunch of photos of lions for reference material. I understand the allure of saving time by simply getting ChatGPT or whatever to spit out 30 lion photos in various states of movement, rather than manually doing Google searches, going to zoo websites, and so on. The seduction of the follow-up prompt is right there though. “Lions roaring.” “Lions pouncing on antelope.” “Lion with raven wings attacking a paladin.”

[AI generated image] My meaningless contribution to entropy just to make an internet point.

And yeah, that looks goofy as shit. The artists will redraw it in the style that fits the game and nobody will notice. But maybe the team likes the look of the dust and mountainous landscape. They incorporate that. Maybe they include an armor set that matches that style. Or the sun symbol. Over time, maybe the team takes what the people themselves came up with and starts running it through the prompts “just in case” the AI spits out something similarly interesting. And so on and so forth.

“So what? What’s the harm?”

Well, how much time do you have? I’m going to focus exclusively on the videogame angle here, rather than environmental impacts, cognitive studies, and apocalypse risk from agentic, self-improving AI.

The first concern is becoming reliant upon it. Larian is apparently hiring concept artists today, but maybe in the not-so-distant future they won’t. Anyone can type in a prompt box. Over time, the entire concept artist field could disappear. And what is replacing it? An AI model that is incentivized to give you exactly what it thinks you want. This will lead to homogenization, the sort of “AI glow,” and even if you wanted to fight the current… who is still economically capable of producing the bespoke human work? And would they not just turn around and tweak AI output and call it a day (it’s happened before)?

Incidentally, the other part of AI reliance is the fact that you own none of it. Right now, these AI firms lose money every time people use them, but that is going to stop eventually. When it does, you are either going to be on the hook for tens of thousands of dollars a month for a license, or desperately trying to filter ad placements out of the output. Maybe open-source LLMs (or corporate saboteurs) will save us from such a fate, but there won’t be a non-AI fallback option because the job won’t exist anymore.

The second concern is something that these companies definitely feel the effects of already, but apparently don’t give much thought to: we are very much in a crowded, attention economy. On one end you have short-form video eating into gamer time, and on the other you have legacy games continuing to dominate playtimes. For example, Steam’s year-end report showed that just 14% of gamer time was spent playing games released in 2025. Is that figure skewed by Steam exclusives like Counter-Strike 2? Maybe. Then again, Steam is the largest, most ubiquitous PC storefront in the world and had 1.5+ million concurrent players in Counter-Strike 2 yesterday. That’s a lot of people who could be playing anything other than a game from 2012.

Now imagine that all of the promises of AI have come true for videogame devs. Six year timelines become four years or even two. Great! Who is going to be playing your game with what time? Over 19,000 games came out on Steam in 2025. Are all of them AAA titles winning awards? Of course not. But what does AAA even mean in a flowers-and-rainbow AI scenario? Maybe $50-$100+ million still makes a big difference in quality, fine. But that certainly didn’t save Black Ops 7, Borderlands 4, Concord, the dead-on-eventual-arrival Highguard, and so on.

Now imagine what happens when there are… 190,000 games released in a year.

As a player, I suppose in this hypothetical we come out ahead; there are more games specifically tailored to our exact preferences. For the game makers though, well, most of them are going to fail. Or perhaps the hobbyist ones survive, assuming a lower AI license cost. I don’t see how AAA survives with the increased competition and a reduced competitive edge (mo-cap, CGI, etc); hell, they are struggling to survive already. To say nothing of the discoverability issues. Maybe AI will fix that too, yeah?

In summation, my thoughts on the matter:

  1. Copy & pasting literal AI assets in your game is bad
  2. Using AI for inspiration leads to being trapped in an AI ecosystem
  3. AI-shortened development times lead to no one making any money

Of course, the cat genie is out of the lamp bag and never going back into the toothpaste tube. Taking a hard stance on #1 – to include slapping AI labels on Steam pages and the like – is not going to prevent #2 and #3. Hell, everyone in the industry wants shortened development times. I just don’t think anyone fully appreciates what that sort of thing would look like, until after the monkey’s paw curls.

In the meantime, as a gamer… eh, do what you want. I personally don’t want any generative AI elements in the games I play, for all the reasons I already outlined above (plus the ones I intentionally skipped). At the same time, I don’t have the bandwidth to contemplate how much GitHub Copilot use by a random programmer constitutes too much for them to qualify for a GOTY award. And if you’re not turning off DLSS 3 or FSR out of principle, what are you even doing, amirite?

Authentic Wirehead

Bhagpuss has a post out called “It’s Real if I Say It’s Real,” with a strong argument that while people say they desire authenticity in the face of (e.g.) AI-generated music, A) people often can’t tell the difference, and B) if you enjoyed it, what does it even matter?

It was the clearest, most positive advocacy of the wirehead future I’ve ever seen in the wild.

Now, speaking of clarity, Bhagpuss didn’t advocate for wireheading in the post. Not directly. I have no particular reason to believe Bhagpuss would agree with my characterization of his post in the first place. However, I do believe it is the natural consequence of accepting the two premises.

Premise one is that we have passed (and are perhaps far beyond) the point at which the average person can easily differentiate between AI-generated content and the “real thing.” Indeed, is there really anyone anywhere ready to argue the opposite? Linked in Bhagpuss’s post was a survey showing 97% of respondents were unable to tell the difference between human-made and AI-generated music across three samples. GPT-4.5 already passed the classical three-way Turing Test, being selected as the human 73% of the time. Imagine being the other person the research subject was texting with, and being so resoundingly rejected as human.

Then again, perhaps the results should not be all that surprising. We are very susceptible to suggestion, subterfuge, misdirection, and marketing. Bhagpuss brought up the old-school Pepsi vs Coke challenge, but you can also look at wine-tasting studies where simply being told one wine was more expensive led to it being rated more highly. Hell, the mere existence of the placebo effect should throw cold (triple-filtered, premium Icelandic) water on the notion that we perceive some objective reality, rather than, you know, just doing the best we can while piloting wet bags of sentient meat.

So, premise #1 is that it has become increasingly difficult to tell when something was created by AI.

Premise #2 is that we no longer care that it was artificially generated. For a lot of people, we are already well past this mile marker. Indirectly, when we no longer bother trying to verify the veracity of the source; or directly, when we know it is AI-generated and enjoy it anyway.

I am actually kind of sympathetic on this point, philosophically. I have always been a big believer that an argument stands on its own merits. To discredit an idea based on the character of the person who made it is the definition of an ad hominem fallacy. In which case, wouldn’t casting aspersions on AI be… ad machina? If a song, or story, or argument is good, do its origins really matter? Maybe, maybe not.

Way back in my college days, I studied abroad in Japan for a semester. One thing I took was a knock-off Zune filled with LimeWired songs, and it was my proverbial sandbar while feeling adrift and alone. Some memories are so intensely entangled with certain songs that I cannot think of one without the other. One of my favorites back then was… Last Train Home. By lostprophets. Sung by Ian Watkins.

So… yeah. It’s a little difficult for me to square the circle that is separating the art from the artist.

But suppose you really don’t care. Perhaps you are immune to “cancel culture” arguments, unmoved by allegations of a politician’s hypocrisy, and would derive indistinguishable pleasure between seeing the Mona Lisa in person and a print thereof hanging on your wall. “It’s all the same in the wash.”

To which I would ask: what distance remains to simply activating your nucleus accumbens directly?

What is AI music if not computer-generated noises attempting to substitute for the physical wire in your brain? Same for AI video, AI games, AI companions. If the context and circumstances of the art have no meaning, bear no weight, then… the last middle-man to cut out is you. Wirehead: engage.

I acknowledge that in many respects, it is a reductive argument. “Regular music is human-generated noises attempting to substitute for the wire.” We do not exist in a Platonic universe, unmoored from biological processes. Even my own notion that human-derived art should impart greater meaning into a work is itself mental scaffolding erected to enhance the pleasure derived from experiencing it.

That said, this entire thought experiment is getting less theoretical by the day. One of the last saving graces against a wirehead future is the minor, you know, brain surgery component. But what if that was not strictly necessary? What if there was a machine capable of gauging our reactions to given stimuli, allowing it to test different combinations of outputs in the form of words, sounds, and flashing lights to remotely trigger one’s nucleus accumbens? They would need some kind of reinforcement mechanism to calculate success, and an army of volunteers against which to test. The whole thing would cost trillions!

Surely, no one would go for that…

Human Slurry

Scrolling on my phone, I clicked into and read an article about Yaupon, which is apparently North America’s only native caffeinated plant. Since we’re speed-running the apocalypse over here in the US, the thought is that high tariffs on coffee and tea might revitalize an otherwise ultra-niche “Made in America” product. Huh, interesting.

I scrolled down to the end and then saw this:

The human slurry future

I’ve seen summarized reviews on Amazon, but never comments. Honestly, I just laughed.

It’s long been known that the comments on news articles are trash: filled with bots or humans indistinguishable from bots. But there is something deeply… I don’t know a strong enough word for it. Cynical? Nihilistic? Absurd? Maybe just fucking comedic about inviting your (presumably) human readers to comment on a story and then just blending them all up in a great human slurry summary so no one has to actually read any of them. At what point do you not just cut out the middle(hu)man?

If you want a summary of the future, that’s it. Wirehead, but made out of people.

I Get No Respec

The Outer Worlds 2’s game director believes implementing 90+ perks with no respec option will lead to role-playing consequences.

“There’s a lot of times where you’ll see games where they allow infinite respec, and at that point I’m not really role-playing a character, because I’m jumping between — well my guy is a really great assassin that snipes from long range, and then oh, y’know, now I’m going to be a speech person, then respec again, and it’s like–” […]

“We want to respect people’s time and for me in a role-playing game this is respecting somebody’s time,” Adler argues. “Saying your choices matter, so take that seriously – and we’re going to respect that by making sure that we give you cool reactivity for those choices that you’re making. That’s respecting your time.”

Nah, dawg, having an exit strategy for designer hubris and incompetence is respecting my time.

Imagine starting up Cyberpunk 2077 on launch day and wanting to role-play a knife-throwing guy… and then being stuck for 14 months (until patch 1.5) before the designers got around to fixing the problem of having included knife-throwing abilities with no way to retrieve the knives. As in, whatever you threw – which could have been a Legendary knife! – just evaporated into the ether. Or if you dedicated yourself to being a Tech-based weapon user, only to find out the capstone ability that allows tech-based weapons to ignore enemy armor does nothing because enemies didn’t actually have an armor attribute. Or that crafting anything in general is an insane waste of time, assuming you didn’t want to just print infinite amounts of currency to purchase better-than-you-can-craft items.

Or how about the original release of Deus Ex: Human Revolution, when you go down the hacking/sneaking route. Only… surprise! There are boss fights in which hacking/sneaking is useless. Very nice role-playing consequences there. The devs eventually fixed this two years later.

The Outer Worlds 2 will not be released in a balanced state; practically no game is, much less ones directed by apparent morons. Undoubtedly we will get the option for inane perks like +50% Explosive Damage without any information about how 99% of the endgame foes will have resistances to Explosive Damage or whatever. In the strictest (and dumbest) interpretation I suppose you could argue that “role-playing” an inept demolition man is still a meaningful choice. But is it really a meaningful choice when you have to trap players into making it? If players wanted a harder time, they could always increase the game difficulty or intentionally play poorly.

Which honestly gets to the heart of the matter: who are you doing this for? Not actual role-players, because guess what… they can (and should) just ignore the ability to respec even if it is available. Commitment is kind of their whole schtick, is it not? No, this reeks of old-school elitist game dev bullshit that was pulled from the garbage bin of history and proudly mounted over the fireplace.

“But I’ll tell you, not every game is for every single person. Sometimes you have to pick a lane.”

And yet out of all the available options, you picked the dumbass lane.

It’s funny, because normally I am one to admire a game developer sticking to their strong vision for a particular game. You would never get a Dark Souls or Death Stranding designed by a committee. But by specifically presenting the arguments he did, it is clear to me that “no respecs” is not actually a vision, it’s an absurdist pet peeve. Obsidian is going to give us “cool reactivity” for the choices we make? You mean like… what? If I choose the Bullets Cause Bleed perk my character will say “I’ll make them bleed”? Or my party members will openly worry that I will blow everyone up when I pick the Explosion Damage+ perk? You can’t see it, but I’m pressing X to Doubt.

[Fake Edit]

I just came across developer interviews on Flaws and Character Building. Flaws are bonus/penalty choices you get presented with after a specific criterion is met during gameplay. One example was Sungazer: after looking at the sun too many times, you can choose to take permanent vision damage (bloom and/or lens flare all the time) and +100% ranged damage spread, in exchange for passively healing to 50% HP when outside in the daytime. The other is Foot-In-Mouth: if the game notices you quickly breezing through dialog options, you can opt into a permanent +15% XP gain in exchange for a 15-second timer on every dialog choice, after which one is picked randomly.

While those are probably supposed to be “fun” and goofy examples, this is exactly the sort of shit I was talking about. Sungazer is obviously not something a ranged character would ever select, but suppose I was already committed to a melee build. OK… how often will I be outside? Does the healing work even in combat? How expensive/rare are healing items going to be? Will the final dungeon be, well, a dungeon? I doubt potentially ruining the visuals for the entire rest of the game will ever be worth it – and we can’t know how bad that’s going to be until we experience it! – but even if that portion were removed, I would still need more information before I could call that a meaningful choice.

“Life is full of meaningful choices with imperfect information.” Yeah, no, there’s a difference between imperfect information because the information is unknowable and when the devs know exactly how they planned the rest of the game to go. Letting players specialize in poison damage and then making all bosses immune to poison is called a Noob Trap.

The second video touches more directly on respecs and choices, and… it’s pretty bad. They do their best and everything sounds fine up until the last thirty seconds or so.

Yes, you can experiment and play with it a bit. And you may find something… ‘I try this out and I don’t really like it too much’ you know… you might load a save. You might want to do something different, you might try a different playthrough.

This was right after the other guy was suggesting that if you discover you like using Gadgets (instead of whatever you were doing previously), your now-wasted skill points are “part of your story, part of your experience that no one else had.” Oh, you mean like part of my bad experience that could have been avoided by other players warning me that X Skill is useless in the endgame or that Y Skill doesn’t work like it says it does in-game?

Ultimately, none of this is going to matter much, of course. There will be a respec mod out there on Day 1: the mEaNiNgFuL cHoIcEs crowd will get what they want, those of us who can mod will get what we want, and everyone else just kind of gets fucked by asinine developers who feel like they know better than the ones who made Baldur’s Gate 3, Cyberpunk 2077, Elden Ring, and Witcher 3.

Dollar Per Hour of Entertainment

Today I want to talk about the classical “$1 Per Hour of Entertainment” measurement of value. This has been a staple of videogame value discussions for decades. There are multiple problems with the formula though, and I think we should collectively abandon its use even as a general rule of thumb.

The first problem is foundational: what qualifies as “entertainment”? When people invoke the formula, they typically do so with the assumption that hours spent playing a game are hours spent entertained. But is that actually the case? There are dozens and dozens of examples of “grind” in games, where you must perform a repetitive, unfun task to achieve a desired result. If you actively hate the grinding part but still do it anyway because the reward is worth it, does the entire process count as entertainment? Simply because you chose to engage with the game at all? That sounds like a tautology to me. May as well add the time spent working a job to get the money used to buy the game in that case.

Which brings me to the second problem: the entertainment gradient. Regardless of where you landed with the previous paragraph, I believe we can all agree that some fun experiences are more fun than others. In which case, shouldn’t that higher tier of entertainment be more valuable than the other? If so, how does that translate into the formula? It doesn’t, basically. All of us have those examples of deeply personal, transformative gaming experiences that we still think about years (decades!) later. Are those experiences not more valuable than the routine sort of busywork we engage with, sometimes within the same game that held such highs? It is absolutely possible that a shorter, more intensely fun experience is “worth” more than a mundane, time-killing one that we do more out of habit.
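To put the gradient problem in concrete terms, here is a toy sketch – all numbers and the weighting scheme are invented – of how the naive formula diverges from an enjoyment-weighted one:

```python
# Toy illustration: the naive $/hour metric vs. an enjoyment-weighted version.
# All numbers and the weighting scheme are invented for demonstration.

def naive_rate(price, hours):
    """The classic metric: dollars spent per hour played."""
    return price / hours

def weighted_rate(price, sessions):
    """Weight each hour by how fun it actually was (0.0 = chore, 1.0 = peak).
    sessions is a list of (hours, fun) tuples."""
    weighted_hours = sum(hours * fun for hours, fun in sessions)
    return price / weighted_hours if weighted_hours else float("inf")

# A $60 game: 10 great hours up front, then 50 hours of grind.
sessions = [(10, 1.0), (50, 0.2)]
print(naive_rate(60, 60))           # 1.0 -- "a dollar an hour, great value!"
print(weighted_rate(60, sessions))  # 3.0 -- the grind was padding the metric
```

Same game, same receipt, and the “value” triples the moment you stop pretending every hour was equally fun.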

Actually, this also brings up a third problem: the timekeeping. I would argue that a game’s entertainment value does not end when you stop playing. If you are still thinking about a game days/months/years after you stopped playing, why should that not also count towards its overall value? Xenogears is one of my favorite games of all time, and yet I played through it once for maybe 80 hours back in 1998. However, I’ve thought about the game many, many times over the intervening decades, constantly comparing sci-fi and/or anime RPGs to it, and otherwise keeping the flame of its transformative (to me) memory alive. Journey is another example: I played and completed it in a single ~3-hour session, and I still think about it on occasion all these years later. Indeed, can you even say that your favorite games are the ones with the most hours played per dollar spent?

The fourth problem with the formula is that it breaks down entirely when it comes to free-to-play games. Although there are some interesting calculations you can do with cash shop purchases, the fact remains that there are dozens of high-quality games you can legitimately play for hundreds of hours at a cost of $0. By definition, these should be considered the pinnacle of entertainment value per dollar spent, and yet I doubt anyone would say Candy Crush is their favorite gaming experience of all time.

The final problem is a bit goofy, but… what about inflation? The metric has been $1 per hour of entertainment for at least 20 years, if not longer. If we look at 1997, $1 back then is as valuable as $2.01 today. Which… ouch. But suggesting that the metric should now be $2 per hour of entertainment just feels bad. And yet, $1 per two hours of entertainment also seems unreasonable. What games could hit that? This isn’t even bringing up the other aspect of the intervening decades: loss of free time. Regardless of which way inflation is taken into account, fundamentally I have less time for leisure activities than I did back in high school/college. Therefore the time I do have is more valuable to me.
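For the curious, the arithmetic behind that $2.01 figure is just compound inflation. A quick sketch, with the average rate backed out of the figure itself rather than pulled from an official CPI series:

```python
# Compound inflation: what $1 in 1997 is "worth" today.
# The ~2.5% average annual rate is backed out of the $1.00 -> $2.01 figure,
# not an official CPI series.
years = 2025 - 1997            # 28 years
avg_inflation = 0.0253         # approximate average annual rate
print(f"${(1 + avg_inflation) ** years:.2f}")   # ~$2.01
```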

At least, you’d think my scarcer time would be more valuable. Lately I’ve been playing Hearthstone Battlegrounds (for free!) instead of any of the hundreds of quality, potentially transformative game experiences I have on tap. Oh well.

Now, I get it, nobody really uses the $1 per hour of entertainment metric to guide their gaming purchases – they would otherwise be too busy playing Fortnite. But, fundamentally, calculating the per hour rate is about the worst possible justification for a discretionary purchase, the very last salve to ease the burn of cognitive dissonance. “At least I played this otherwise unremarkable game for 60+ hours.” Naw, dawg, just put it down. Not every game is going to be a winner for us individually, and that’s OK. Just take the L and move on. Everything is about to start costing $80, and you sure as shit don’t have 20 more hours per game to pretend you didn’t get bamboozled at checkout.

But you know what? You do what you want. Which is hopefully doing what you want.

This AI Ain’t It

Wilhelm wrote a post called “The Folly of Believing in AI” and is otherwise predicting an eventual market crash based on the insane capital spent chasing that dragon. The thesis is simple: AI is expensive, so… who is going to pay for it? Well, expensive and garbage, which is the worst possible combination. And I pretty much agree with him entirely – when the music stops, there will be many a child left without a chair but holding a lot of bags, to mix metaphors.

The one problematic angle I want to stress the most, though, is the fundamental limitation of AI: it is dependent upon the data it intends to replace, and yet that data evolves all the time.

Duh, right? Just think about it a bit more though. The best use-case I have heard for AI has been from programmers stating that they can get code snippets from ChatGPT that either work out of the box, or otherwise get them 90% of the way there. Where did ChatGPT “learn” code though? From scraping GitHub and similar repositories for human-made code. Which sounds an awful lot like something a search engine could also do, but never mind. Even in the extremely optimistic scenario in which no programmers lose their jobs to future Prompt Engineers, eventually GitHub is going to start (or continue?) to accumulate AI-derived code. Which will be scraped and reconsumed into the dataset, increasing the error rate, thereby lowering the value the AI had in the first place.
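Researchers have a name for this feedback loop – “model collapse” – and a toy simulation conveys the intuition. Every number and the blending rule below are invented purely for illustration:

```python
# Toy "model collapse" simulation: each generation of the model trains on a
# dataset mixing human code with the previous generation's output.
# The error rates and blending rule are invented purely for illustration.

human_error = 0.02    # baseline defect rate of human-written code (made up)
model_penalty = 0.05  # extra defects the model layers on top of its training data
ai_share = 0.5        # fraction of each new training set that is AI-generated

dataset_error = human_error
for gen in range(1, 6):
    model_error = dataset_error + model_penalty  # model mimics its data, plus noise
    dataset_error = (1 - ai_share) * human_error + ai_share * model_error
    print(f"gen {gen}: dataset defect rate ~{dataset_error:.1%}")

# The defect rate climbs each generation and settles well above the human
# baseline (~7% here, vs. the original 2%), instead of converging back down.
```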

Alternatively, let’s suppose there isn’t an issue with recycled datasets and error rates. There will be a lower need for programmers, which means less opportunity for novel code and/or new languages, as it would have to compete with much cheaper, “solved” solutions. We then get locked into existing code at current levels of function unless some hobbyist stumbles upon the next big thing.

The other use-cases for AI are bad in more obvious, albeit understandable ways. AI can write tailored cover letters for you, or if you’re feeling extra frisky, apply for hundreds of job postings a day on your behalf. Of course, HR departments around the world fired the first shots of that war when they started using algorithms to pre-screen applications, so this bit of turnabout feels like fair play. But what is the end result? AI talking to AI? No person can or will manually sort through 250 applications per job opening. Maybe the most “fair” solution will just be picking people randomly. Or consolidating all the power into recruitment agencies. Or, you know, just nepotism and networking per usual.

Then you get to the AI-written house listings, product descriptions, user reviews, or office emails. Just look at this recent Forbes article on how to use ChatGPT to save you time in an office scenario:

  1. Wrangle Your Inbox (Google how to use Outlook Rules/filters)
  2. Eliminate Redundant Communication (Ooo, Email Templates!)
  3. Automate Content Creation (spit out a 1st draft on a subject based on prompts)
  4. Get The Most Out Of Your Meetings (transcribe notes, summarize transcriptions, create agendas)
  5. Crunch Data And Offer Insights (get data analysis, assuming you don’t understand Excel formulas)

The article states email and meetings represent 15% and 23% of work time, respectively. Sounds accurate enough. And yet rather than address the glaring, systemic issue of unnecessary communication directly, we are to use AI to just… sort of brute force our way through it. Does it not occur to anyone that the emails you are getting AI to summarize are possibly created by AI prompts from the sender? Your supervisor is going to get AI to summarize the AI article you submitted, have AI create an agenda for a meeting they call you in for, AI is going to transcribe the meeting, which will then be emailed to their supervisor and summarized again by AI. You’ll probably still be in trouble, but no worries, just submit 5000 job applications over your lunch break.

In Cyberpunk 2077 lore, a virus infected and destroyed 78.2% of the internet. In the real world, some estimates have as much as 90% of online content being synthetically generated by 2026. How’s that for a bearish case for AI?

Now, I am not a total Luddite. There are a number of applications for which AI is very welcome. Detecting lung cancer from a blood test, rapidly sifting through thousands of CT scans looking for patterns, potentially using AI to create novel molecules and designer drugs while simulating their efficacy, and so on. Those are useful applications of technology to further science.

That’s not what is getting peddled on the street these days though. And maybe that is not even the point. There is a cynical part of me that questions why these programs were dropped on the public like a free hit from the local drug dealer. There is some money exchanging hands, sure, and it’s certainly been a boon for Nvidia and other companies selling shovels during a gold rush. But OpenAI is set to take a $5 billion loss this year alone, and they aren’t the only game in town. Why spend $700,000/day running ChatGPT like a loss leader, when there doesn’t appear to be anything profitable being led to?

[Fake Edit] Totally unrelated last week news: Microsoft, Apple, and Nvidia are apparently bailing out OpenAI in another round of fundraising to keep them solvent… for another year, or whatever.

I think maybe the Dead Internet endgame is the point. The collateral damage is win-win for these AI companies. Either they succeed with the AGI moonshot – the holy grail of AI that would change the game, just like working fusion power – or fill the open internet with enough AI garbage to permanently prevent any future competition. What could a brand new AI company even train off of these days? Assuming “clean” output isn’t now locked down with licensing contracts, their new model would be facing off with ChatGPT v8.5 or whatever. The only reasonable avenue for future AI companies would be to license the existing datasets themselves into perpetuity. Rent-seeking at its finest.

I could be wrong. Perhaps all these LLMs will suddenly solve all our problems, and not just be tools of harassment and disinformation. Considering the big phone players are making deepfake software on phones standard this year, I suppose we’ll all find out pretty damn quick.

My prediction: mo’ AI, mo’ problems.

Cut the Concord

Statistically, you have never heard of it, but Sony is shutting down their new Overwatch-like hero shooter called Concord. After two whole weeks. On Steam, Concord apparently never broke 700 concurrent players at launch. The writing was probably on the wall from earlier when the open beta population was worse than closed beta – plus it launching as a $40 B2P in a sea of F2P competitors – but the sheer scale of the belly flop is shocking nonetheless. It is rumored to have cost $200 million, and for sure has been in development for eight (8!) years.

And now it’s gone.

You know, there are people out there who argue games should be more expensive. Not necessarily because of the traditional inflation reasons – although that factors in too – but because games cost more to make in general. Which really just translates into longer development times. And yet, as we can see with Concord along with many other examples, long development times do not necessarily translate back into better games. There is obviously some minimum, but longer isn’t better.

And yet, we have these industry leaders who suggest MSRP should be higher than the now-standard $70. To be “priced accordingly with its quality, breadth & depth,” as if any of that is really knowable from a consumer standpoint prior to purchase. We have reviews, sure, and Concord scored a 70 from IGN. What does that tell you?

The overall way games are made is itself unsustainable, and an extra $10-$20 per copy isn’t going to fix anything. Indeed, there seems to be a blasé attitude in the industry that a rising MSRP will lift all boats instead of, you know, causing the ones on the periphery to slide down the demand curve. Suppose GTA6 is released at $80. Is the argument that a consumer’s gaming budget will just indefinitely expand by an extra $10/year? Or will they, I dunno, just spend $10 less on other titles? Higher prices are certainly not going to expand the market, so… what?

As far as I can see, the only reasonable knob to turn is development time, and nobody seems able to turn it. I’m not trying to handwave away the effort and brute labor necessary to digitally animate mo-capped models in high fidelity. Or creating and debugging millions of lines of bespoke code. But I am asking how long it really takes, how much of it is necessary, and how often these visually stunning games fall flat on their faces in the one function of their intended existence, i.e. being fun.

Throwing more money at the problem certainly doesn’t seem to be working.

What’s In A Game?

Dragon Age: The Veilguard is coming out on October 31st. How I feel about it is… complicated. Veilguard’s first trailer received a lot of flak, but there have been a few subsequent ones that show a more traditional Dragon Age vibe, as opposed to the sort of Fortnite/Borderlands-ish irreverence of the reveal. Besides, what’s not to love about it being more Dragon Age, featuring a romanceable Scout Harding, and being a BioWare game?

or is it?

I mean, yes, it’s a BioWare game. It’s also an EA game, as proven by the fact that it has a deluxe edition, pre-order bonuses, and a $150 Collector’s Edition with a physical LED dagger and other items that, hilariously, doesn’t include a copy of the game. You seriously can’t make this shit up.

But what is a “BioWare game,” or any game for that matter? Not in an existential sense, but what is meant when we say these things? When I say BioWare game, emotionally I’m referring to a nebulous sort of companion-focused, squad-based RPG with branching endings based on player dialog choices. Basically, the Mass Effect and Dragon Age series. Which I have historically enjoyed, including even (eventually) Mass Effect: Andromeda. It’s a type of game with a particular vibe to it.

Having said that, being a “BioWare Game” is really just branding and marketing. BioWare also released Anthem, which was a commercial failure; Andromeda wasn’t that hot either, considering how all DLC and follow-up expansions were canceled. Rationally, there should be no expectation that just because BioWare is releasing Veilguard, it will be of the same quality as [insert favorite Dragon Age game here], especially after the franchise’s 10-year hiatus. And that touch of skepticism should apply even if Anthem and Andromeda had been smash hits.

I have long cautioned against the sort of hero worship that game developers sometimes generate, especially when it comes to “rockstar” designers. There are people who fall to their knees at the altar of Fallout: New Vegas and Chris Avellone. To which I say: why? Even if New Vegas is your favorite game, there were a lot of cooks in that kitchen. In fact, you probably should be worshiping at the feet of John Gonzalez instead. Or, preferably, worshiping no one, including the companies themselves.

Game design is a collaborative endeavor – solo indie titles aside – and it’s a nigh-impossible task to nail down exactly who did what to make a game as compelling an experience as it was. Even for the very staff who made it. Back in the day, there was an argument that Blizzard was sending in their B-Team for the Wrath of the Lich King expansion, and that is why subscriptions started to decline for the first time (notwithstanding the 12 million sub peak). As it turns out, that wasn’t the case – most everyone from vanilla and TBC ended up working on Wrath and subsequent expansions. Hell, the most controversial addition to the game (the Looking For Dungeon tool) was something the original “rockstar” devs wanted to release in vanilla. It wasn’t the bean counters or the C-suites or whatever design boogeyman you want to blame; the calls were coming from inside the house.

There are times when one very visible person does seem to make a difference. Hideo Kojima immediately comes to mind. It is also difficult to argue against the apparent influence of, say, Yoshi-P when it comes to FF14. Or Hidetaka Miyazaki of FromSoftware fame. They could not build their games alone, of course, but leadership can and does set expectations and give direction in these endeavors. There is a level of consistency – or consistent craziness in Kojima’s case – that is pretty rare in gaming.

By and large, though? Every game is a gumbo and no one knows what went into it or why it tastes the way it does. That’s… a pretty strong nihilistic take, I admit, but riddle me this: if someone figured it all out, why is it so hard to bottle the lightning again? Boredom? Fear? Ever-changing audience mores? There are so many moving parts, between the game designers coming and going from the company, to the gaming zeitgeist of the time, to competing game releases, all of which can influence a title’s success. You can’t just say “Obsidian should just make New Vegas 2 and it will be a smash hit” because A) most everyone from the original team has left, B) none of the people who left appear to have taken the secret sauce with them, and C) New Vegas was massively outsold by Fallout 4 in any case.

So, am I still looking forward to Veilguard? Well, two words: Scout Harding.

Seriously though, I don’t want the takeaway to be that you shouldn’t look forward to anything. I have no idea what the plans are for Mass Effect 5, but I still want to find out. Just not on Day 1 (probably), and not with any sort of expectations that because Company A or Game Dev B made it that the end result will be C. If you’re going to base your hype on anything, base it on what the game is promising, not the people who made it. After all, the best games end up taking on a life of their own.

Unsustainability

Senua’s Saga: Hellblade 2 recently came out to glowing reviews and… well, not so glowing concurrent player counts on Steam. Specifically, it peaked at about 4,000 players, compared to 5,600 for the original game back in 2017, and ~6,000 for Hi-Fi Rush and Redfall. The Reddit post where I found this information has the typical excuses, e.g. it’s all Game Pass’s fault (it was a Day 1 release):

They really don’t get that gamepass is unsustainable. It works for Netflix because movies and tv shows can be made in a year or less so they can keep pumping out content each year. Games take years to make and they can’t keep the same stream of new content releasing the same way streaming services do.

Gamepass subs are already stagnating, they would make more money if they held off putting new exclusives on gamepass like movies do with putting them in theatres first before putting them on streaming. (source)

Now, it’s worth pointing out that concurrent player counts are not precisely the best way to measure the relative success of a single-player game. Unless, I suppose, you are Baldur’s Gate 3. Also, Hellblade 2 is a story-based sequel to an artistic game that, as established, only hit a peak of 5,600 concurrent players. According to Wikipedia, the original game sold about 1,000,000 copies by June 2018. Thus, one would likely presume that the sequel would sell roughly the same amount or less.

The thing that piqued my interest, though, was the reply that came next:

Yeah, even “small” games like Hellblade and Hi-Fi Rush, which are both under 10h to complete, took 5/6 years to develop. It’s impossible to justify developing games like these with gigantic budgets if you’re going to have them on your subscription service.

I mean… sure. But there’s an unspoken assumption here that these small games with gigantic, 5-6 year budgets would be justified even without being on a subscription service. See hot take:

Hellblade 2 really is the ultimate example of the flaw of Xbox’s “hands off” approach to game dev.

How has a studio been able to take 5 years making a tiny game that is basically identical to the first?

How did Rare get away with farting out trailers for Everwild despite the game literally not existing?

Reddit may constantly slag off strict management and studio control, but sometimes it’s needed to rein studios in and actually create games…

Gaming’s “sustainability problem” has long been forecast, but it does feel like things have more recently come to a head. It is easy to villainize Microsoft for closing down, say, the Hi-Fi Rush devs a year after soaking up their accolades… but good reviews don’t always equate to profit. Did the game even make back its production costs? Would it be fiduciarily responsible to bet, in 2024, that Hi-Fi Rush 2 would outperform the original in 2030?

To be clear, I’m not in favor of Microsoft shutting down the studio. Nor do I want fewer of these kinds of games. Games are commercial products, but that is not all they can be. Things like Journey can be transformative experiences, and we would all be worse off for them not existing.

Last post, I mentioned that Square Enix is shifting priorities for their entire company based on poor numbers for their mainline Final Fantasy PS5 timed-exclusive releases. But the fundamental problem is a bit deeper. For years, we’ve heard about how a Square Enix game can sell millions of copies and still be considered “underperforming.” For example, the original Tomb Raider reboot sold 3.4 million copies in the first month, but the execs thought that made it a failure. Well, there was a recent Reddit thread with an ex-Square Enix executive explaining the thought process. In short:

There’s a misunderstanding that has been repeated for nearly a decade and a half that Square Enix sets arbitrarily high sales requirements then gets upset when its arbitrarily high sales requirements fail to be met. […]

If a game costs $100m to make, and takes 5 years, then you have to beat, as an example, what the business could have returned investing $100m into the stock market over that period.

For the 5 years prior to Feb 2024, the stock market averaged a rate of return of 14.5%. Investing that $100m in the stock market would net you a return of $201m, so this is our ROI baseline. Can the game net a return higher than this after marketing, platform fees, and discounts are factored in?

That… makes sense. One might even say it’s basic economics.
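And the compounding arithmetic checks out, more or less. A quick sanity check (the quoted $201m lands closer to a 15% rate, but the conclusion is the same):

```python
# Sanity-checking the opportunity-cost baseline from the quote above.
budget = 100_000_000
years = 5
for rate in (0.145, 0.15):
    value = budget * (1 + rate) ** years
    print(f"{rate:.1%} -> ${value / 1e6:.0f}m")
# 14.5% -> $197m
# 15.0% -> $201m
# Either way: a $100m, 5-year game has to clear roughly double its budget
# before marketing, platform fees, and discounts even enter the picture.
```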

However, that heuristic also seems outrageously unsustainable in and of itself. Almost by definition, very few companies beat “the market.” Especially when the market is, by weight, Microsoft (7.16%), Apple (6.12%), Nvidia (5.79%), Amazon (3.74%), and Meta (2.31%). And 495 other companies, of course. As an investor, sure, why pick a videogame stock over SPY if the latter has the better return? But how exactly does one run a company this way?

Out of curiosity, I found a site to compare some game stocks vs SPY over the last 10 years:

I’ll be goddamned. They do usually beat the market. In case something happens to the picture:

  • Square Enix – 75.89%
  • EA – 276.53%
  • Ubisoft – 30.56%
  • Take Two – 595.14%
  • S&P 500 – 170.51%
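Converting those 10-year totals into annualized rates makes the comparison easier to eyeball. My arithmetic, from the figures above:

```python
# Annualizing the 10-year total returns listed above: (1 + total) ** (1/10) - 1.
totals = {
    "Square Enix": 0.7589,
    "EA": 2.7653,
    "Ubisoft": 0.3056,
    "Take Two": 5.9514,
    "S&P 500": 1.7051,
}
for name, total in totals.items():
    print(f"{name}: {(1 + total) ** 0.1 - 1:.1%} per year")
# Roughly: Square Enix ~5.8%, EA ~14.2%, Ubisoft ~2.7%,
# Take Two ~21.4%, S&P 500 ~10.5% per year.
```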

And it’s worth pointing out that Square Enix was beating the market in August 2023 before a big decline, followed by the even worse decline that we talked about recently. Indeed, every game company in this comparison was beating SPY, before Ubisoft started declining in 2022. Probably why they finally got around to “breaking the glass” when it comes to Assassin’s Creed: Japan.

Huh. This was not the direction I thought this post was going as I was writing it.

Fundamentally, I suppose the question remains as to how sustainable the videogame market is. The ex-Square Enix executive Reddit post I linked earlier has a lot more things to say on the topic, actually, and I absolutely recommend reading through it. One of the biggest takeaways is that major studios are struggling to adjust to the new reality that F2P juggernauts like Fortnite and Genshin Impact (etc) exist. Before, they could throw some more production value and/or marketing into their games and be relatively certain to achieve a certain amount of sales as long as a competitor wasn’t also releasing a major game the same month. Now, they have to worry about that and the fact that Fortnite and Genshin are still siphoning up both money and gamer time.

Which… feels kind of obvious when you write it out loud. There was never a time when I played fewer other games than when I was in the throes of WoW (or MMOs in general). And while MMOs are niche, things like Fortnite no longer are. So not only do they have to beat out similar titles, they have to beat out a F2P title that gets huge updates every 6 weeks and has been refined to a razor edge over almost a decade. Sorta like how Rift or Warhammer or other MMOs had to debut into WoW’s shadow.

So, is gaming – or even AAA specifically – really unsustainable? Possibly.

What I think is unsustainable are production times. I have thought about this for a while, but it’s wild hearing about some of the sausage-making reporting on game development. My go-to example is always Mass Effect: Andromeda. The game spent five years in development, but it was pretty much stitched together in 18 months, and not just because of crunch. Perhaps it is unreasonable to assume the “spaghetti against the wall” phase of development can be shortened or removed, or I am not appreciating the iteration necessary to get gameplay just right. But the Production Time lever is the only one these companies can realistically pull – raising prices just makes the F2P juggernaut comparisons worse, gamer ire notwithstanding. And are any of these games even worth $80, $90, $100 in the first place?

Perversely, even if Square Enix and others managed to achieve shorter production times, that would mean pumping out more games (assuming they don’t fire thousands of devs). Which means more competition, more overlap, and still facing down the Fortnite gun. Pivoting to live-service games to more directly counter Fortnite doesn’t seem to be working either; none of us seem to want that.

I suppose we will have to see how this plays out over time. The game industry at large is clearly profitable and growing besides. We will also probably have the AAA spectacles of Call of Duty and the like that can easily justify the production values. Similarly, the indie scene will likely always be popping, as small team/solo devs shoot their shot in a crowded market, while keeping their day jobs to get by.

But the artistic AA games? Those may be in trouble. The only path for viability I see there is, ironically, something like Game Pass. Microsoft is closing (now internal) studios, yes, but it’s clearly supporting a lot of smaller titles from independent teams and giving them visibility they may not otherwise have achieved. And Game Pass needs these sorts of games to pad out the catalog in-between major releases. There are conflicting stories about whether the Faustian Game Pass Bargain is worth it, but I imagine most of that is based on a post-hoc analysis of popularity. Curation and signal-boosting are only going to become more necessary for medium-sized studios to succeed.

Past is Prologue

Starfield has been a wild success. Like, objectively: it was the best-selling game in September and has since become the 7th best-selling game for the year. And those stats are based on actual sale figures, unmuddied by Xbox Game Pass numbers. Which is astounding to think about.

[Fake Edit] A success… except in the Game of the Year department. Yikes. It’s at least nominated for Best RPG, along with (checks notes) Lies of P? No sunlight between RPG-elements and RPGs anymore, I guess. Doesn’t matter though; Baldur’s Gate 3 is going to continue drinking that milkshake.

Starfield having so many procedurally-generated planets, though, is still a mistake. And it’s a mistake that Mass Effect: Andromeda took on the chin for all of gamedom a decade ago.

Remember Andromeda? The eagerly-anticipated BioWare follow-up to their cultural phenomenon trilogy? It ended up being a commercial flop, best remembered for terrible facial animations and effectively killing the golden goose. What happened? Procedurally-generated planets. Andromeda didn’t have them, but the (multiple) directors wanted them so badly that they wasted months and years fruitlessly chasing them until there was basically just 18 months left to pump out a game.

You can read the Jason Schreier retrospective (from 2017) for the rest of the story. And in total fairness, the majority of the production issues stemmed from EA forcing BioWare to use the Frostbite engine to create their game. But it is a fact that they spent a lot of time chasing the exploration “dream.”

Another of Lehiany’s ideas was that there should be hundreds of explorable planets. BioWare would use algorithms to procedurally generate each world in the game, allowing for near-infinite possibilities, No Man’s Sky style. (No Man’s Sky had not yet been announced—BioWare came up with this concept separately.)

[…] It was an ambitious idea that excited many people on the Mass Effect: Andromeda team. “The concept sounds awesome,” said a person who worked on the game. “No Man’s Sky with BioWare graphics and story, that sounds amazing.”

That’s how it begins. Granted, we wouldn’t see how No Man’s Sky shook out gameplay-wise until 2016.

The irony, though, is that BioWare started to see it themselves:

The Mass Effect: Andromeda team was also having trouble executing the ideas they’d found so exciting just a year ago. Combat was shaping up nicely, as were the prototypes BioWare had developed for the Nomad ground vehicle, which already felt way better to drive than Mass Effect 1’s crusty old Mako. But spaceflight and procedurally generated planets were causing some problems. “They were creating planets and they were able to drive around it, and the mechanics of it were there,” said a person who worked on the game. “I think what they were struggling with was that it was never fun. They were never able to do it in a way that’s compelling, where like, ‘OK, now imagine doing this a hundred more times or a thousand more times.’”

And there it is: “it was never fun.” It never is.

I have logged 137 hours in No Man’s Sky, so perhaps it is unfair of me to suggest procedural exploration is never fun. But I would argue that the compelling bits of games like NMS are not the exploration elements – it’s stuff like resource-gathering. Or in games like Starbound, it’s finding a good skybox for your base. No one is walking around these planets wondering what’s over the next hill in the same way one does in Skyrim or Fallout. We know what’s over there: nothing. Or rather, one of six Points of Interest seeded by an algorithm to always be within 2km walking distance of where you land.
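To illustrate the kind of seeding I mean, here is a toy sketch – the PoI table, the count, and the 2km radius are my characterization, not anything datamined from Starfield:

```python
import math
import random

# Toy sketch of seeded point-of-interest placement around a landing site.
# The PoI table, count, and 2km radius are illustrative, not datamined.
POI_TABLE = ["Abandoned Mine", "Cave", "Outpost", "Crashed Ship",
             "Science Lab", "Pirate Den"]

def seed_pois(planet_id, x, y, count=6):
    """Deterministically scatter PoIs within ~2km of wherever you land."""
    rng = random.Random(hash((planet_id, round(x), round(y))))
    pois = []
    for _ in range(count):
        angle = rng.uniform(0, 2 * math.pi)
        dist = rng.uniform(200, 2000)  # meters from the landing site
        pois.append((rng.choice(POI_TABLE),
                     x + dist * math.cos(angle),
                     y + dist * math.sin(angle)))
    return pois

# Same planet, same landing spot, same "discoveries" -- every single time.
print(seed_pois(planet_id=42, x=0.0, y=0.0))
```

Deterministic seeding is the whole point, and the whole problem: the “unknown” is computable before you ever step off the ship.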

Exploring a procedurally generated world is like reading a novel authored by ChatGPT. Yeah, there are words on a page in the correct order, but what’s the point?

Getting back to Starfield though, the arc of its design followed almost the reverse of Andromeda. In this sprawling interview with Bruce Nesmith (lead designer of Skyrim and Bethesda veteran), he talked about how the original scope was limited to the Settled Systems. But then Todd Howard basically pulled “100 star systems” out of thin air and they went with it. And I get it. If you are already committed to using procedural generation on 12 star systems, what’s another 88? A clear waste of time, obviously.

And that’s not just an idle thought. According to this article, as of the end of October, just over 3% of Xbox players have the “Boots on the Ground” achievement that you receive for landing on 100 planets. Just thinking about how many loading screens that would take exhausts me. Undoubtedly, that percentage will creep up over time, but at some point you have to ask yourself what’s the cost. Near-zero if you already have the procedural generation engine tuned, of course. But taking that design path itself excludes things like environmental storytelling and a richer, more tailored gaming experience.

Perhaps the biggest casualty is one more felt than seen: ludonarrative. I talked about this before with Starfield, but one of the main premises of the game is exploring the great unknown. Except everything is already known. To my knowledge, there is not a single planet in any system which doesn’t have Abandoned Mines or some other randomly-placed human settlement somewhere on it. So what are we “exploring” exactly? And why would anyone describe this game as “NASApunk” when it is populated with millions of pirates literally everywhere? Of course, the pirates are there so you don’t get too bored exploring the boring planets, which are only boring because they exist.

Like I said at the top, Starfield has been wildly successful in spite of its procedural nonsense. But I do sincerely hope that, at some point, these AAA studios known for storytelling and/or exploration stop trying to make procedural generation work and just stay in their goddamn lane. Who out here is going “I really liked Baldur’s Gate 3, so I hope Larian’s next game is a roguelike card-battler”? Whatever, I guess Todd Howard gets a pass to make his “dream game” after 25 years. But as we sleepwalk into the AI era, I think it behooves these designers to focus on the things that they are supposedly better at (for now).

We learn from our mistakes eventually, right? Right?