Category Archives: Philosophy
Authentic Wirehead
Bhagpuss has a post out called “It’s Real if I Say It’s Real,” with a strong argument that while people say they desire authenticity in the face of (e.g.) AI-generated music, A) people often can’t tell the difference, and B) if you enjoyed it, what does it even matter?
It was the clearest, most positive advocacy of the wirehead future I’ve ever seen in the wild.
Now, speaking of clarity, Bhagpuss didn’t advocate for wireheading in the post. Not directly. I have no personal reason to believe Bhagpuss would agree with my characterization of his post in the first place. However. I do believe it is the natural result and consequence of accepting the two premises.
Premise one is that we have passed (and are perhaps far beyond) the point at which the average person can easily differentiate between AI-generated content and the “real thing.” Indeed, is there really anyone anywhere ready to argue the opposite? Linked in Bhagpuss’s post was this survey showing 97% of respondents being unable to tell the difference between human-made and AI-generated music across three samples. ChatGPT 4.5 already passed the classical three-way Turing Test, being selected as the human 73% of the time. Imagine being that other person the research subject was texting with, and getting so resoundingly rejected as human.
Then again, perhaps the results should not be all that surprising. We are very susceptible to suggestion, subterfuge, misdirection, and marketing. Bhagpuss brought up the old-school Pepsi vs Coke challenge, but you can also look at wine tasting studies where simply being told one wine was more expensive led to it being rated more highly. Hell, the simple existence of the placebo effect at all should throw cold (triple-filtered, premium Icelandic) water on the notion that we perceive some objective reality, rather than just doing the best we can while piloting wet bags of sentient meat.
So, premise #1 is that it has become increasingly difficult to tell when something was created by AI.
Premise #2 is that we no longer care that something was artificially generated. For a lot of people, we are already well past this mile marker. Indirectly, when we no longer bother trying to verify the veracity of the source. Or directly, when we know it is AI-generated and enjoy it anyway.
I am actually kind of sympathetic on this point, philosophically. I have always been a big believer that an argument stands on its own merits. To discredit an idea based on the character of the person who made it is the definition of an ad hominem fallacy. In which case, wouldn’t casting aspersions on AI be… ad machina? If a song, or story, or argument is good, do its origins really matter? Maybe, maybe not.
Way back in my college days, I studied abroad in Japan for a semester. One thing I took was a knock-off Zune filled with LimeWired songs, and it was my proverbial sandbar while feeling adrift and alone. Some memories are so intensely entangled with certain songs, that I cannot think of one without the other. One of my favorites back then was… Last Train Home. By lostprophets. Sung by Ian Watkins.
So… yeah. It’s a little difficult for me to square the circle that is separating the art from the artist.
But suppose you really don’t care. Perhaps you are immune to “cancel culture” arguments, unmoved by allegations of a politician’s hypocrisy, and would derive indistinguishable pleasure from the Mona Lisa in person and a print thereof hanging on your wall. “It’s all the same in the wash.”
To which I would ask: what distance remains to simply activating your nucleus accumbens directly?
What is AI music if not computer-generated noises attempting to substitute for the physical wire in your brain? Same for AI video, AI games, AI companions. If the context and circumstances of the art have no meaning, bear no weight, then… the last middle-man to cut out is you. Wirehead: engage.
…
I acknowledge that in many respects, it is a reductive argument. “Regular music is human-generated noises attempting to substitute for the wire.” We do not exist in a Platonic universe, unmoored from biological processes. Even my own notion that human-derived art should impart greater meaning into a work is itself mental scaffolding erected to enhance the pleasure derived from experiencing it.
That said, this entire thought experiment is getting less theoretical by the day. One of the last saving graces against a wirehead future is the minor, you know, brain surgery component. But what if that was not strictly necessary? What if there was a machine capable of gauging our reactions to given stimuli, allowing it to test different combinations of outputs in the form of words, sounds, and flashing lights to remotely trigger one’s nucleus accumbens? They would need some kind of reinforcement mechanism to calculate success, and an army of volunteers against which to test. The whole thing would cost trillions!
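The core of such a machine is not exotic, either. Here is a toy sketch – an epsilon-greedy bandit with entirely made-up stimuli and payoff numbers – of a loop that learns which output most reliably triggers the measured reward:

```python
import random

# Entirely hypothetical stimuli and payoffs, for illustration only.
STIMULI = ["words", "sounds", "flashing_lights", "autoplay_video"]
BASE_REWARD = {"words": 0.2, "sounds": 0.3, "flashing_lights": 0.5, "autoplay_video": 0.7}

def measured_engagement(stimulus):
    """Stand-in for the reinforcement signal: clicks, dwell time, dopamine."""
    return BASE_REWARD[stimulus] + random.gauss(0, 0.1)

def run(trials=10_000, epsilon=0.1):
    totals = {s: 0.0 for s in STIMULI}
    counts = {s: 0 for s in STIMULI}
    for _ in range(trials):
        if random.random() < epsilon or not all(counts.values()):
            choice = random.choice(STIMULI)  # explore a random stimulus
        else:
            choice = max(STIMULI, key=lambda s: totals[s] / counts[s])  # exploit the best
        totals[choice] += measured_engagement(choice)
        counts[choice] += 1
    return {s: round(totals[s] / counts[s], 2) for s in STIMULI}

print(run())  # converges on whatever most reliably lights up the reward center
```

Swap in an army of volunteers and a few trillion dollars of compute, and the sketch stops being hypothetical.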
Surely, no one would go for that…
I Get No Respec
The Outer Worlds 2’s game director believes implementing 90+ perks with no respec option will lead to role-playing consequences.
“There’s a lot of times where you’ll see games where they allow infinite respec, and at that point I’m not really role-playing a character, because I’m jumping between — well my guy is a really great assassin that snipes from long range, and then oh, y’know, now I’m going to be a speech person, then respec again, and it’s like–” […]
“We want to respect people’s time and for me in a role-playing game this is respecting somebody’s time,” Adler argues. “Saying your choices matter, so take that seriously – and we’re going to respect that by making sure that we give you cool reactivity for those choices that you’re making. That’s respecting your time.”
Nah, dawg, having an exit strategy for designer hubris and incompetence is respecting my time.
Imagine starting up Cyberpunk 2077 on launch day and wanting to role-play a knife-throwing guy… and then being stuck for 14 months (until patch 1.5) before the designers got around to fixing the problem of having included knife-throwing abilities with no way to retrieve the knives. As in, whatever you threw – which could have been a Legendary knife! – just evaporated into the ether. Or if you dedicated yourself to being a Tech-based weapon user, only to find out the capstone ability that allows tech-based weapons to ignore enemy armor does nothing, because enemies didn’t actually have an armor attribute. Or that crafting anything in general is an insane waste of time, assuming you didn’t want to just print infinite amounts of currency to purchase better-than-you-can-craft items.
Or how about the original release of Deus Ex: Human Revolution, when you go down the hacking/sneaking route? Only… surprise! There are boss fights in which hacking/sneaking is useless. Very nice role-playing consequences there. The devs eventually fixed this two years later.
The Outer Worlds 2 will not be released in a balanced state; practically no game is, much less ones directed by apparent morons. Undoubtedly we will get the option for inane perks like +50% Explosive Damage without any information about how 99% of the endgame foes will have resistances to Explosive Damage or whatever. In the strictest (and dumbest) interpretation I suppose you could argue that “role-playing” an inept demolition man is still a meaningful choice. But is it really a meaningful choice when you have to trap players into making it? If players wanted a harder time, they could always increase the game difficulty or intentionally play poorly.
Which honestly gets to the heart of the matter: who are you doing this for? Not actual role-players, because guess what… they can (and should) just ignore the ability to respec even if it is available. Commitment is kind of their whole schtick, is it not? No, this reeks of old-school elitist game dev bullshit that was pulled from the garbage bin of history and proudly mounted over the fireplace.
“But I’ll tell you, not every game is for every single person. Sometimes you have to pick a lane.”
And yet out of all the available options, you picked the dumbass lane.
It’s funny, because normally I am one to admire a game developer sticking to their strong vision for a particular game. You would never get a Dark Souls or Death Stranding designed by a committee. But by specifically presenting the arguments he did, it is clear to me that “no respecs” is not actually a vision, it’s an absurdist pet peeve. Obsidian is going to give us “cool reactivity” for the choices we make? You mean like… what? If I choose the Bullets Cause Bleed perk my character will say “I’ll make them bleed”? Or my party members will openly worry that I will blow everyone up when I pick the Explosion Damage+ perk? You can’t see it, but I’m pressing X to Doubt.
[Fake Edit]
I just came across developer interviews on Flaws and Character Building. Flaws are bonus/penalty choices you get presented with after a specific criterion is met during gameplay. One example was Sungazer: after looking at the sun too many times, you can choose to take permanent vision damage (bloom and/or lens flare all the time) and +100% ranged damage spread, in exchange for passively healing to 50% HP when outside in the daytime. The other is Foot-In-Mouth, where if the game notices you quickly breezing through dialog options, you can opt to get a permanent +15% XP gain in exchange for only having a 15-second timer to make dialog choices, after which everything is picked randomly.
While those are probably supposed to be “fun” and goofy examples, this is exactly the sort of shit I was talking about. Sungazer is obviously not something a ranged character would ever select, but suppose I was already committing to a melee build. OK… how often will I be outside? Does the healing work even in combat? How expensive/rare are healing items going to be? Will the final dungeon be, well, a dungeon? I doubt potentially ruining the visuals for the entire rest of the game will ever be worth it – and we can’t know how bad that’s going to be until we experience it! – but even if that portion was removed, I would still need more information before I could call that a meaningful choice.
“Life is full of meaningful choices with imperfect information.” Yeah, no, there’s a difference between imperfect information because the information is unknowable and when the devs know exactly how they planned the rest of the game to go. Letting players specialize in poison damage and then making all bosses immune to poison is called a Noob Trap.
The second video touches more directly on respecs and choices, and… it’s pretty bad. They do their best and everything sounds fine up until the last thirty seconds or so.
Yes, you can experiment and play with it a bit. And you may find something… ‘I try this out and I don’t really like it too much’ you know… you might load a save. You might want to do something different, you might try a different playthrough.
This was right after the other guy was suggesting that if you discover you like using Gadgets (instead of whatever you were doing previously), your now-wasted skill points are “part of your story, part of your experience that no one else had.” Oh, you mean like part of my bad experience that can be avoided by seeing other players warning me that X Skill is useless in the endgame or that Y Skill doesn’t work like it says it does in-game?
Ultimately, none of this is going to matter much, of course. There will be a respec mod out there on Day 1, the mEaNiNgFuL cHoIcEs crowd will get what they want, those of us who can mod will get what we want, and everyone else just kind of gets fucked by asinine developers who feel like they know better than the ones who made Baldur’s Gate 3, Cyberpunk 2077, Elden Ring, and Witcher 3.
Dollar Per Hour of Entertainment
Today I want to talk about the classic “$1 Per Hour of Entertainment” measurement of value. This has been a staple of videogame value discussions for decades. There are multiple problems with the formula though, and I think we should collectively abandon its use even as a general rule of thumb.
The first problem is foundational: what qualifies as “entertainment”? When people invoke the formula, they typically do so with the assumption that hours spent playing a game are hours spent entertained. But is that actually the case? There are dozens and dozens of examples of “grind” in games, where you must perform a repetitive, unfun task to achieve a desired result. If you actively hate the grinding part but still do it anyway because the reward is worth it, does the entire process count as entertainment? Simply because you chose to engage with the game at all? That sounds like a tautology to me. May as well add the time spent working a job to get the money used to buy the game in that case.
Which brings me to the second problem: the entertainment gradient. Regardless of where you landed with the previous paragraph, I believe we can all agree that some fun experiences are more fun than others. In which case, shouldn’t that higher tier of entertainment be more valuable than the other? If so, how does that translate into the formula? It doesn’t, basically. All of us have those examples of deeply personal, transformative gaming experiences that we still think about years (decades!) later. Are those experiences not more valuable than the routine sort of busywork we engage with, sometimes within the same game that held such highs? It is absolutely possible that a shorter, more intensely fun experience is “worth” more than a mundane, time-killing one that we do more out of habit.
Actually, this also brings up a third problem: the timekeeping. I would argue that a game’s entertainment value does not end when you stop playing. If you are still thinking about a game days/months/years after you stopped playing, why should that not also count towards its overall value? Xenogears is one of my favorite games of all time, and yet I played through it once for maybe 80 hours back in 1998. However, I’ve thought about the game many, many times over the intervening decades, constantly comparing sci-fi and/or anime RPGs to it, and otherwise keeping the flame of its transformative (to me) memory alive. Journey is another example: I played and completed it in a single ~3 hour session, and I still think about it on occasion all these years later. Indeed, can you even say that your favorite games are the ones that score best on dollars per hour played?
The fourth problem with the formula is that it breaks down entirely when it comes to free-to-play games. Although there are some interesting calculations you can do with cash shop purchases, the fact remains that there are dozens of high-quality games you can legitimately play for hundreds of hours at a cost of $0. By definition, these should be considered the pinnacle of entertainment value per dollar spent, and yet I doubt anyone would say Candy Crush is their favorite gaming experience of all time.
The final problem is a bit goofy, but… what about inflation? The metric has been $1 per hour of entertainment for at least 20 years, if not longer. If we look at 1997, $1 back then is as valuable as $2.01 today. Which… ouch. But suggesting that the metric should now be $2 per hour of entertainment just feels bad. And yet, $1 per two hours of entertainment also seems unreasonable. What games could hit that? This isn’t even bringing up the other aspect of the intervening decades: loss of free time. Regardless of which way inflation is taken into account, fundamentally I have less time for leisure activities than I did back in high school/college. Therefore the time I do have is more valuable to me.
At least, you’d think so. Lately I’ve been playing Hearthstone Battlegrounds (for free!) instead of any of the hundreds of quality, potentially transformative game experiences I have on tap. Oh well.
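Still, to put the inflation wrinkle above in concrete terms, here is the metric as arithmetic – a minimal sketch, with the 2.01x multiplier taken from the 1997 comparison and the game prices purely illustrative:

```python
def dollars_per_hour(price, hours_played):
    """The classic heuristic: purchase price divided by hours 'entertained'."""
    return price / hours_played

INFLATION_1997_TO_NOW = 2.01  # $1 in 1997 ~= $2.01 today

# A $50 game played for 50 hours hit the old $1/hour bar exactly.
print(dollars_per_hour(50, 50))           # 1.0
# Holding the 1997 standard constant, today's bar is really ~$2/hour...
print(1 * INFLATION_1997_TO_NOW)          # 2.01
# ...so an $80 game only "needs" ~40 hours, not 80, to clear it.
print(80 / INFLATION_1997_TO_NOW)         # ~39.8 hours
```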
Now, I get it, nobody really uses the $1 per hour of entertainment metric to guide their gaming purchases – they would otherwise be too busy playing Fortnite. But, fundamentally, calculating the per hour rate is about the worst possible justification for a discretionary purchase, the very last salve to ease the burn of cognitive dissonance. “At least I played this otherwise unremarkable game for 60+ hours.” Naw, dawg, just put it down. Not every game is going to be a winner for us individually, and that’s OK. Just take the L and move on. Everything is about to start costing $80, and you sure as shit don’t have 20 more hours per game to pretend you didn’t get bamboozled at checkout.
But you know what? You do what you want. Which is hopefully doing what you want.
This AI Ain’t It
Wilhelm wrote a post called “The Folly of Believing in AI” and is otherwise predicting an eventual market crash based on the insane capital spent chasing that dragon. The thesis is simple: AI is expensive, so… who is going to pay for it? Well, expensive and garbage, which is the worst possible combination. And I pretty much agree with him entirely – when the music stops, there will be many a child left without a chair but holding a lot of bags, to mix metaphors.
The most problematic angle, though – and the one I want to stress – is the fundamental limitation of AI: it is dependent upon the data it intends to replace, and yet that data evolves all the time.
Duh, right? Just think about it a bit more though. The best use-case I have heard for AI has been from programmers stating that they can get code snippets from ChatGPT that either work out of the box, or otherwise get them 90% of the way there. Where did ChatGPT “learn” code though? From scraping GitHub and similar repositories for human-made code. Which sounds an awful lot like what a search engine could also do, but nevermind. Even in the extremely optimistic scenario in which no programmer loses their job to future Prompt Engineers, eventually GitHub is going to start (or continue?) to accumulate AI-derived code. Which will be scraped and reconsumed into the dataset, increasing the error rate, thereby lowering the value that the AI had in the first place.
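Here is a toy illustration of that recycling loop (my own sketch, not anything from Wilhelm’s post): fit a “model” to samples generated by the previous “model,” repeat, and watch the learned distribution collapse.

```python
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0  # generation 0: the human-made "data"
for generation in range(1, 51):
    samples = rng.normal(mu, sigma, size=20)   # train only on the prior model's output
    mu, sigma = samples.mean(), samples.std()  # refit the next "model"
    if generation % 10 == 0:
        print(f"gen {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")
# sigma decays toward zero over the generations: the tails (the novel, the
# weird, the human) are the first thing the recycled dataset forgets.
```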
Alternatively, let’s suppose there isn’t an issue with recycled datasets and error rates. There will be less need for programmers, which means less opportunity for novel code and/or new languages, as they would have to compete with much cheaper, “solved” solutions. We then get locked into existing code at current levels of function unless some hobbyists stumble upon the next best thing.
The other use-cases for AI are bad in more obvious, albeit understandable ways. AI can write tailored cover letters for you, or if you’re feeling extra frisky, apply for hundreds of job postings a day on your behalf. Of course, HR departments around the world fired the first shots of that war when they started using algorithms to pre-screen applications, so this bit of turnabout feels like fair play. But what is the end result? AI talking to AI? No person can or will manually sort through 250 applications per job opening. Maybe the most “fair” solution will just be picking people randomly. Or consolidating all the power into recruitment agencies. Or, you know, just nepotism and networking per usual.
Then you get to the AI-written house listings, product descriptions, user reviews, or office emails. Just look at this recent Forbes article on how to use ChatGPT to save you time in an office scenario:
- Wrangle Your Inbox (Google how to use Outlook Rules/filters)
- Eliminate Redundant Communication (Ooo, Email Templates!)
- Automate Content Creation (spit out a 1st draft on a subject based on prompts)
- Get The Most Out Of Your Meetings (transcribe notes, summarize transcriptions, create agendas)
- Crunch Data And Offer Insights (get data analysis, assuming you don’t understand Excel formulas)
The article states email and meetings represent 15% and 23% of work time, respectively. Sounds accurate enough. And yet rather than address the glaring, systemic issue of unnecessary communication directly, we are to use AI to just… sort of brute force our way through it. Does it not occur to anyone that the emails you are getting AI to summarize are possibly created by AI prompts from the sender? Your supervisor is going to get AI to summarize the AI article you submitted, have AI create an agenda for a meeting they call you in for, AI is going to transcribe the meeting, which will then be emailed to their supervisor and summarized again by AI. You’ll probably still be in trouble, but no worries, just submit 5000 job applications over your lunch break.
In Cyberpunk 2077 lore, a virus infected and destroyed 78.2% of the internet. In the real world, 90% of the internet will be synthetically generated by 2026. How’s that for a bearish case for AI?
Now, I am not a total Luddite. There are a number of applications for which AI is very welcome. Detecting lung cancer from a blood test, rapidly sifting through thousands of CT scans looking for patterns, potentially using AI to create novel molecules and designer drugs while simulating their efficacy, and so on. Those are useful applications of technology to further science.
That’s not what is getting peddled on the street these days though. And maybe that is not even the point. There is a cynical part of me that questions why these programs were dropped on the public like a free hit from the local drug dealer. There is some money changing hands, sure, and it’s certainly been a boon for Nvidia and other companies selling shovels during a gold rush. But OpenAI is set to take a $5 billion loss this year alone, and they aren’t the only game in town. Why spend $700,000/day running ChatGPT like a loss leader, when there doesn’t appear to be anything profitable being led to?
[Fake Edit] Totally unrelated last week news: Microsoft, Apple, and Nvidia are apparently bailing out OpenAI in another round of fundraising to keep them solvent… for another year, or whatever.
I think maybe the Dead Internet endgame is the point. The collateral damage is win-win for these AI companies. Either they succeed with the AGI moonshot – the holy grail of AI that would change the game, just like working fusion power – or fill the open internet with enough AI garbage to permanently prevent any future competition. What could a brand new AI company even train off of these days? Assuming “clean” output isn’t now locked down with licensing contracts, their new model would be facing off with ChatGPT v8.5 or whatever. The only reasonable avenue for future AI companies would be to license the existing datasets themselves into perpetuity. Rent-seeking at its finest.
I could be wrong. Perhaps all these LLMs will suddenly solve all our problems, and not just be tools of harassment and disinformation. Considering the big phone players are making deepfake software standard on phones this year, I suppose we’ll all find out pretty damn quick.
My prediction: mo’ AI, mo’ problems.
Cut the Concord
Statistically, you have never heard of it, but Sony is shutting down their new Overwatch-like hero shooter called Concord. After two whole weeks. On Steam, Concord apparently never broke 700 concurrent players at launch. The writing was probably on the wall from earlier, when the open beta population was worse than the closed beta’s – plus it launching as a $40 B2P in a sea of F2P competitors – but the sheer scale of the belly flop is shocking nonetheless. It is rumored to have cost $200 million, and for sure has been in development for eight (8!) years.
And now it’s gone.
You know, there are people out there that argue games should be more expensive. Not necessarily because of the traditional inflation reasons – although that factors in too – but because games cost more to make in general. Which really just translates into longer development times. And yet, as we can see with Concord along with many other examples, long development times do not necessarily translate back into better games. There is obviously some minimum, but longer isn’t better.
And yet, we have these industry leaders who suggest MSRP should be higher than the now-standard $70. To be “priced accordingly with its quality, breadth & depth,” as if any of that is really knowable from a consumer standpoint prior to purchase. We have reviews, sure, and Concord scored a 70 from IGN. What does that tell you?
The overall way games are made is itself unsustainable, and an extra $10-$20 per copy isn’t going to fix anything. Indeed, there seems to be a blasé attitude in the industry that a rising MSRP will lift all the boats instead of, you know, causing the ones on the periphery to slide down the demand curve. Suppose GTA6 is released at $80. Is the argument that a consumer’s gaming budget will just indefinitely expand by an extra $10/year? Or will they, I dunno, just spend $10 less on other titles? Higher prices are certainly not going to expand the market, so… what?
As far as I can see, the only reasonable knob to turn is development time, and nobody seems able to turn it. I’m not trying to handwave away the effort and brute labor necessary to digitally animate mo-capped models in high fidelity, or to create and debug millions of lines of bespoke code. But I am asking how long it should take, how much of it is necessary, and how often these visually stunning games fall flat on their faces in the one function of their intended existence, i.e. being fun.
Throwing more money at the problem certainly doesn’t seem to be working.
What’s In A Game?
Dragon Age: The Veilguard is coming out on October 31st. How I feel about it is… complicated. Veilguard’s first trailer received a lot of flak, but there have been a few subsequent ones that show a more traditional Dragon Age vibe, as opposed to the first trailer’s sort of Fortnite/Borderlands-ish irreverence. Besides, what’s not to love about it being more Dragon Age, featuring a romanceable Scout Harding, and the fact that it’s a BioWare game?
…or is it?
I mean, yes, it’s a BioWare game. It’s also an EA game, proven by the fact that it has a deluxe edition, pre-order bonuses, and a $150 Collector’s Edition with a physical LED dagger and other items that hilariously doesn’t come with a copy of the game. You seriously can’t make this shit up.

But what is a “BioWare game,” or any game for that matter? Not in an existential sense, but what is meant when we say these things? When I say BioWare game, emotionally I’m referring to a nebulous sort of companion-focused, squad-based RPG with branching endings based on player dialog choices. Basically, the Mass Effect and Dragon Age series. Which I have historically enjoyed, including even (eventually) Mass Effect: Andromeda. It’s a type of game with a particular vibe to it.
Having said that, being a “BioWare Game” is really just branding and marketing. BioWare also released Anthem, which was a commercial failure; Andromeda wasn’t that hot either, considering how all DLC and follow-up expansions were canceled. Rationally, there should be no expectation that just because BioWare is releasing Veilguard, it will be of the same quality as [insert favorite Dragon Age game here], especially after the franchise’s 10-year hiatus. And that touch of skepticism should remain even if Anthem and Andromeda had been smash hits.
I have long cautioned against the sort of hero worship that game developers sometimes generate, especially when it comes to “rockstar” designers. There are people who fall to their knees at the altar of Fallout: New Vegas and Chris Avellone. To which I say: why? Even if New Vegas is your favorite game, there were a lot of cooks in that kitchen. In fact, you probably should be worshiping at the feet of John Gonzalez instead. Or, preferably, worshiping no one, including the companies themselves.
Game design is a collaborative endeavor – solo indie titles aside – and it’s a nigh-impossible task, even for the very staff involved, to nail down exactly who did what to make the game as compelling an experience as it was. Back in the day, there was an argument that Blizzard was sending in their B-Team for the Wrath of the Lich King expansion, and that is why subscriptions started to decline for the first time (notwithstanding the 12 million sub peak). As it turns out, that wasn’t the case – most everyone from vanilla and TBC ended up working on Wrath and subsequent expansions. Hell, the most controversial addition to the game (the Looking for Dungeon tool) was something the original “rockstar” devs wanted to release in vanilla. It wasn’t the bean counters or the C-suites or whatever design boogeyman you want to define; the calls were coming from inside the house.
There are times when one very visible person does seem to make a difference. Hideo Kojima immediately comes to mind. It is also difficult to argue against the apparent influence of, say, Yoshi-P when it comes to FF14. Or Hidetaka Miyazaki of FromSoftware fame. They could not build their games alone, of course, but leadership can and does set expectations and give direction in these endeavors. There is a level of consistency – or consistent craziness in Kojima’s case – that is pretty rare in gaming.
By and large, though? Every game is a gumbo and no one knows what went into it or why it tastes the way it does. That’s… a pretty strong nihilistic take, I admit, but riddle me this: if someone figured it all out, why is it so hard to bottle the lightning again? Boredom? Fear? Ever-changing audience mores? There are so many moving parts, between the game designers coming and going from the company, to the gaming zeitgeist of the time, to competing game releases, all of which can influence a title’s success. You can’t just say “Obsidian should just make New Vegas 2 and it will be a smash hit” because A) most everyone from the original team has left, B) none of the people who left appear to have taken the secret sauce with them, and C) New Vegas was massively outsold by Fallout 4 in any case.
So, am I still looking forward to Veilguard? Well, two words: Scout Harding.
Seriously though, I don’t want the takeaway to be that you shouldn’t look forward to anything. I have no idea what the plans are for Mass Effect 5, but I still want to find out. Just not on Day 1 (probably), and not with any sort of expectations that because Company A or Game Dev B made it that the end result will be C. If you’re going to base your hype on anything, base it on what the game is promising, not the people who made it. After all, the best games end up taking on a life of their own.
Unsustainability
Senua’s Saga: Hellblade 2 recently came out to glowing reviews and… well, not so glowing concurrent player counts on Steam. Specifically, it peaked at about 4000 players, compared to 5600 for the original game back in 2017, and compared to ~6000 for Hi-Fi Rush and Redfall. The Reddit post where I found this information has the typical excuses, e.g. it’s all Game Pass’s fault (it was a Day 1 release):
They really don’t get that gamepass is unsustainable. It works for Netflix because movies and tv shows can be made in a year or less so they can keep pumping out content each year. Games take years to make and they can’t keep the same stream of new content releasing the same way streaming services do.
Gamepass subs are already stagnating, they would make more money if they held off putting new exclusives on gamepass like movies do with putting them in theatres first before putting them on streaming. (source)
Now, it’s worth pointing out that concurrent player count is not precisely the best way to measure the relative success of a single-player game. Unless, I suppose, you are Baldur’s Gate 3. Also, Hellblade 2 is a story-based sequel to an artistic game that, as established, only hit a peak of 5600 concurrent players. According to Wikipedia, the original game sold about 1,000,000 copies by June 2018. Thus, one would likely presume that the sequel would sell roughly the same amount or less.
The thing that piqued my interest though, was the reply that came next:
Yeah, even “small” games like Hellblade and Hi-Fi Rush, which are both under 10h to complete, took 5/6 years to develop. It’s impossible to justify developing games like these with gigantic budgets if you’re going to have them on your subscription service.
I mean… sure. But there’s an unspoken assumption here that these small games with gigantic, 5-6 year budgets would be justified even without being on a subscription service. See hot take:
Hellblade 2 really is the ultimate example of the flaw of Xbox’s “hands off” approach to game dev.
How has a studio been able to take 5 years making a tiny game that is basically identical to the first?
How did Rare get away with farting out trailers for Everwild despite the game literlaly not existing?
Reddit may constantly slag off strict management and studio control, but sometimes it’s needed to reign studios in and actually create games…
Gaming’s “sustainability problem” has long been forecast, but it does feel like things have more recently come to a head. It is easy to villainize Microsoft for closing down, say, the Hi-Fi Rush devs a year after soaking up their accolades… but good reviews don’t always equate to profit. Did the game even make back its production costs? Would it be fiduciarily responsible to make the bet in 2024, that Hi-Fi Rush 2 would outperform the original in 2030?
To be clear, I’m not in favor of Microsoft shutting down the studio. Nor do I want fewer of these kind of games. Games are commercial products, but that is not all they can be. Things like Journey can be transformative experiences, and we would all be worse off for them not existing.
Last post, I mentioned that Square Enix is shifting priorities of their entire company based on poor numbers for their mainline Final Fantasy PS5 timed-exclusive releases. But the fundamental problem is a bit deeper. At Square Enix, we’ve heard for years about how one of their games will sell millions of copies but still be considered “underperforming.” For example, the original Tomb Raider reboot sold 3.4 million copies in the first month, but the execs thought that made it a failure. Well, there was a recent Reddit thread about an ex-Square Enix executive explaining the thought process. In short:
There’s a misunderstanding that has been repeated for nearly a decade and a half that Square Enix sets arbitrarily high sales requirements then gets upset when its arbitrarily high sales requirements fail to be met. […]
If a game costs $100m to make, and takes 5 years, then you have to beat, as an example, what the business could have returned investing $100m into the stock market over that period.
For the 5 years prior to Feb 2024, the stock market averaged a rate of return of 14.5%. Investing that $100m in the stock market would net you a return of $201m, so this is our ROI baseline. Can the game net a return higher than this after marketing, platform fees, and discounts are factored in?
That… makes sense. One might even say it’s basic economics.
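A quick sanity check of the quoted baseline, assuming simple annual compounding (the exact $201m figure presumably reflects slightly different compounding assumptions):

```python
principal = 100_000_000  # the $100m development budget from the quote
rate = 0.145             # the quoted 5-year average market return
years = 5

baseline = principal * (1 + rate) ** years
print(f"${baseline / 1e6:.0f}m")  # ~$197m, in the ballpark of the quoted $201m
```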
However, that heuristic also seems outrageously unsustainable in and of itself. Almost by definition, very few companies beat “the market.” Especially when the market is, by weight, Microsoft (7.16%), Apple (6.12%), Nvidia (5.79%), Amazon (3.74%), and Meta (2.31%). And 495 other companies, of course. As an investor, sure, why pick a videogame stock over SPY if the latter has the better return? But how exactly does one run a company this way?
Out of curiosity, I found a site to compare some game stocks vs SPY over the last 10 years:

I’ll be goddamned. They do usually beat the market. In case something happens to the picture:
- Square Enix – 75.89%
- EA – 276.53%
- Ubisoft – 30.56%
- Take Two – 595.14%
- S&P 500 – 170.51%
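For what it’s worth, annualizing those 10-year totals (assuming they are cumulative price returns) puts the quoted 14.5% hurdle rate in context:

```python
totals = {  # 10-year cumulative returns from the comparison above
    "Square Enix": 0.7589,
    "EA": 2.7653,
    "Ubisoft": 0.3056,
    "Take Two": 5.9514,
    "S&P 500": 1.7051,
}

for name, total in totals.items():
    cagr = (1 + total) ** (1 / 10) - 1  # compound annual growth rate
    print(f"{name}: {cagr:.1%} per year")
# Square Enix ~5.8%, EA ~14.2%, Ubisoft ~2.7%, Take Two ~21.4%, S&P 500 ~10.5%
```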
And it’s worth pointing out that Square Enix was beating the market in August 2023 before a big decline, followed by the even worse decline that we talked about recently. Indeed, every game company in this comparison was beating SPY, before Ubisoft started declining in 2022. Probably why they finally got around to “breaking the glass” when it comes to Assassin’s Creed: Japan.
Huh. This was not the direction I thought this post was going as I was writing it.
Fundamentally, I suppose the question remains as to how sustainable the videogame market is. The ex-Square Enix executive Reddit post I linked earlier has a lot more things to say on the topic, actually, and I absolutely recommend reading through it. One of the biggest takeaways is that major studios are struggling to adjust to the new reality that F2P juggernauts like Fortnite and Genshin Impact (etc) exist. Before, they could throw some more production value and/or marketing into their games and be relatively certain to achieve a certain amount of sales as long as a competitor wasn’t also releasing a major game the same month. Now, they have to worry about that and the fact that Fortnite and Genshin are still siphoning up both money and gamer time.
Which… feels kind of obvious when you write it out loud. There was never a time when I played fewer other games than when I was in the throes of WoW (or MMOs in general). And while MMOs are niche, things like Fortnite no longer are. So not only do they have to beat out similar titles, they have to beat out a F2P title that gets huge updates every 6 weeks and has been refined to a razor edge over almost a decade. Sorta like how Rift or Warhammer or other MMOs had to debut into WoW’s shadow.
So, is gaming – or even AAA specifically – really unsustainable? Possibly.
What I think is unsustainable are production times. I have thought about this for a while, but it’s wild hearing about some of the sausage-making reporting on game development. My go-to example is always Mass Effect: Andromeda. The game spent five years in development, but it was pretty much stitched together in 18 months, and not just because of crunch. Perhaps it is unreasonable to assume the “spaghetti against the wall” phase of development can be shortened or removed, or I am not appreciating the iteration necessary to get gameplay just right. But the Production Time lever is the only one these companies can realistically pull – raising prices just makes the F2P juggernaut comparisons worse, gamer ire notwithstanding. And are any of these games even worth $80, $90, $100 in the first place?
Perversely, even if Square Enix and others were able to achieve shorter production times, that means they will be pumping out more games (assuming they don’t fire thousands of devs). Which means more competition, more overlap, and still facing down the Fortnite gun. Pivoting to live service games to more directly counter Fortnite doesn’t seem to be working either; none of us seem to want that.
I suppose we will have to see how this plays out over time. The game industry at large is clearly profitable and growing besides. We will also probably have the AAA spectacles of Call of Duty and the like that can easily justify the production values. Similarly, the indie scene will likely always be popping, as small team/solo devs shoot their shot in a crowded market, while keeping their day jobs to get by.
But the artistic AA games? Those may be in trouble. The only path to viability I see there is, ironically, something like Game Pass. Microsoft is closing (now internal) studios, yes, but it’s clearly supporting a lot of smaller titles from independent teams and giving them visibility they may not otherwise have achieved. And Game Pass needs these sorts of games to pad out the catalog in-between major releases. There are conflicting stories about whether the Faustian Game Pass Bargain is worth it, but I imagine most of that is based on a post-hoc analysis of popularity. Curation and signal-boosting are only going to become more necessary for medium-sized studios to succeed.
Past is Prologue
Starfield has been a wild success. Like, objectively: it was the best-selling game in September and has since become the 7th best-selling game for the year. And those stats are based on actual sale figures, unmuddied by Xbox Game Pass numbers. Which is astounding to think about.
[Fake Edit] A success… except in the Game of the Year department. Yikes. It’s at least nominated for Best RPG, along with (checks notes) Lies of P? No sunlight between RPG-elements and RPG anymore, I guess. Doesn’t matter though, Baldur’s Gate 3 is going to continue drinking that milkshake.
Starfield having so many procedurally-generated planets, though, is still a mistake. And it’s a mistake that Mass Effect: Andromeda took on the chin for all of gamedom years ago.
Remember Andromeda? The eagerly-anticipated BioWare follow-up to their cultural phenomenon trilogy? It ended up being a commercial flop, best remembered for terrible facial animations and effectively killing the golden goose. What happened? Procedurally-generated planets. Andromeda didn’t have them, but the (multiple) directors wanted them so badly that they wasted months and years fruitlessly chasing them until there was basically just 18 months left to pump out a game.
You can read the Jason Schreier retrospective (from 2017) for the rest of the story. And in total fairness, the majority of the production issues stemmed from EA forcing BioWare to use the Frostbite engine to create their game. But it is a fact that they spent a lot of time chasing the exploration “dream.”
Another of Lehiany’s ideas was that there should be hundreds of explorable planets. BioWare would use algorithms to procedurally generate each world in the game, allowing for near-infinite possibilities, No Man’s Sky style. (No Man’s Sky had not yet been announced—BioWare came up with this concept separately.)
[…] It was an ambitious idea that excited many people on the Mass Effect: Andromeda team. “The concept sounds awesome,” said a person who worked on the game. “No Man’s Sky with BioWare graphics and story, that sounds amazing.”
That’s how it begins. Granted, we wouldn’t see how No Man’s Sky shook out gameplay-wise until 2016.
The irony though, is that BioWare started to see it themselves:
The Mass Effect: Andromeda team was also having trouble executing the ideas they’d found so exciting just a year ago. Combat was shaping up nicely, as were the prototypes BioWare had developed for the Nomad ground vehicle, which already felt way better to drive than Mass Effect 1’s crusty old Mako. But spaceflight and procedurally generated planets were causing some problems. “They were creating planets and they were able to drive around it, and the mechanics of it were there,” said a person who worked on the game. “I think what they were struggling with was that it was never fun. They were never able to do it in a way that’s compelling, where like, ‘OK, now imagine doing this a hundred more times or a thousand more times.’”
And there it is: “it was never fun.” It never is.
I have logged 137 hours in No Man’s Sky, so perhaps it is unfair of me to suggest procedural exploration is never fun. But I would argue that the compelling bits of games like NMS are not the exploration elements – it’s stuff like resource-gathering. Or in games like Starbound, it’s finding a good skybox for your base. No one is walking around these planets wondering what’s over the next hill in the same way one does in Skyrim or Fallout. We know what’s over there: nothing. Or rather, one of six Points of Interest seeded by an algorithm to always be within 2km walking distance of where you land.
Exploring a procedurally generated world is like reading a novel authored by ChatGPT. Yeah, there are words on a page in the correct order, but what’s the point?
Getting back to Starfield though, the arc of its design followed almost the reverse of Andromeda. In this sprawling interview with Bruce Nesmith (lead designer of Skyrim and Bethesda veteran), he talked about how the original scope was limited to the Settled Systems. But then Todd Howard basically pulled “100 star systems” out of thin air and they went with it. And I get it. If you are already committed to using procedural generation on 12 star systems, what’s another 88? A clear waste of time, obviously.
And that’s not just an idle thought. According to this article, as of the end of October, just over 3% of Xbox players have the “Boots on the Ground” achievement that you receive for landing on 100 planets. Just thinking about how many loading screens that would take exhausts me. Undoubtedly, that percentage will creep up over time, but at some point you have to ask yourself what’s the cost. Near-zero if you already have the procedural generation engine tuned, of course. But taking that design path itself excludes things like environmental storytelling and a richer, more tailored gaming experience.
Perhaps the biggest casualty is one more felt than seen: ludonarrative. I talked about this before with Starfield, but one of the main premises of the game is exploring the great unknown. Except everything is already known. To my knowledge, there is not a single planet in any system which doesn’t have Abandoned Mines or some other randomly-placed human settlement somewhere on it. So what are we “exploring” exactly? And why would anyone describe this game as “NASApunk” when it is populated with millions of pirates literally everywhere? Of course, pirates are there so you don’t get too bored exploring the boring planets, which are only boring because they exist.
Like I said at the top, Starfield has been wildly successful in spite of its procedural nonsense. But I do sincerely hope that, at some point, these AAA studios known for storytelling and/or exploration stop trying to make procedural generation work and just stay in their goddamn lane. Who out here is going “I really liked Baldur’s Gate 3, so I hope Larian’s next game is a roguelike card-battler”? Whatever, I guess Todd Howard gets a pass to make his “dream game” after 25 years. But as we sleepwalk into the AI era, I think it behooves these designers to focus on the things that they are supposedly better at (for now).
We learn from our mistakes eventually, right? Right?
Incentivizing Morality
In the comments of my last post, Kring had this to say:
In an RPG I don’t think there should be a game mechanic rewarding “the correct way” to play it. The question isn’t why aren’t we all murder-hoboing through a game where you can be everything. The question is, if you can be anything, why would you choose to be a murder-hobo?
In the vast majority of games, “the good path” is incentivized by default. This usually manifests in terms of a game’s ending, which sees the hero and his/her scrappy teammates surviving and defeating the antagonist when enough altruistic flags are raised. Conversely, being selfish and/or evil typically results in a bad ending where the hero possibly dies, or becomes just as corrupt as the original antagonist, and most of the party members have abandoned you (or been killed). It’s almost a tautology that way – the good path is good, the bad path is bad.
Game designers usually layer on additional incentives for moral play though. The classical trope is when the hero saves the poor village and then refuses to accept the reward… only to be given a greater reward later (or sometimes immediately). I have often imagined a hypothetical game in which the good path is not only unrewarding, but actively punished. How betrayed do you think players would feel if doing good deeds resulted in the bad guy winning and all your efforts come to naught? It would probably be as unsatisfying in such a game as it is IRL.
Incentives are powerful things that guide player behavior. And sometimes these incentives can go awry.
Bioshock is an example of almost archetypal game morality. As you progress through the game, you are given the choice of rescuing Little Sisters or harvesting them to consume their power. While that may seem like an active tradeoff, the reality is that you end up getting goodies after rescuing three Little Sisters, putting you about on par with where you would have been had you harvested them. By the end of the game, the difference in total power (ADAM resource) is literally about 8%. Meanwhile, if you harvest even one (or 2?) Little Sister, you are locked into the bad ending.
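That ~8% figure checks out as a back-of-envelope sketch, assuming the commonly cited ADAM values (21 Little Sisters; 160 ADAM per harvest; 80 ADAM per rescue plus a ~200 ADAM gift every third rescue):

```python
SISTERS = 21

harvest = SISTERS * 160                       # 3360 ADAM, all sisters harvested
rescue = SISTERS * 80 + (SISTERS // 3) * 200  # 1680 + 1400 = 3080 ADAM

print(f"{(harvest - rescue) / harvest:.1%}")  # ~8.3% -- harvesting barely wins
```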
An example of contrary incentives comes from Deus Ex: Human Revolution. In this one, you are given the freedom of choosing several different ways to overcome challenges. For example, you can run in guns blazing, sneak through ventilation shafts, and/or hack computers. The problem is the “and/or.” When you perform a non-lethal takedown, for example, you get some XP. You also get XP for straight-up killing enemies. But what if you kill someone you already rendered unconscious with a non-lethal takedown? Believe it or not, extra XP. Even worse, the hacking minigame allows you to earn XP and resources whereas acquiring the password to unlock the device gives nothing. The end result is that the player is incentivized to knock out enemies, then kill them, search everywhere for loot but ignore passwords/keys so you can hack things instead, and otherwise be the most schizophrenic spy ever.
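The incentive stacking is easier to see with stand-in numbers (illustrative values only, not DX:HR’s actual XP awards):

```python
XP = {  # illustrative values only, not the game's real numbers
    "nonlethal_takedown": 50,
    "kill_unconscious_body": 10,
    "hack_device": 25,
    "use_found_password": 0,
}

clean_ghost = XP["nonlethal_takedown"] + XP["use_found_password"]  # 50 XP
# The "optimal" route: knock them out, kill the body, skip the password, hack anyway.
schizo_spy = XP["nonlethal_takedown"] + XP["kill_unconscious_body"] + XP["hack_device"]  # 85 XP
print(clean_ghost, schizo_spy)
```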
Does DE:HR force you to play that way? Not directly. Indeed, it has a Pacifist achievement as a reward for sticking just to non-lethal takedowns. But forgoing the extra XP means you have fewer gameplay options for infiltrating enemy bases, for longer, which can result in you pigeonholing yourself into a less fun experience. How else could you discover the joy that is throwing vending machines around with your bare augmented hands without having a few spare upgrades?
Speaking of less fun experiences, consider Dishonored. This is another freeform stealth game where you are given special powers and let loose to accomplish your objective as you choose. However, if you so happen to choose lethal takedowns too many times, the environment becomes infested with more hostile vermin and you end up with the bad ending. I don’t necessarily have an issue with the enforced morality system, but limiting oneself to non-lethal takedowns means the majority of weapons (and some abilities) in the game are straight-up useless. This leads you to tackle missions in the exact same way every time, with no hope of getting any more interesting abilities, tools, or even situations.
I bring all this up to answer Kring’s original question: why choose to be the murder hobo in Starfield? Because that’s what the game designers incentivized, unintentionally or not. Bethesda crafted a gameplay loop that:
- Makes stealth functionally impossible
- Makes non-lethal attacks functionally impossible
- Radically inflates the cost of ammo
- Severely limits inventory space
- Gates basic character functions behind the leveling system
- Has a Persuasion system run by hidden dice rolls
- Features no lasting consequences of note
Does this mean you have to steal neutral NPCs’ spaceships right from under them and pawn them lightyears away? Or pickpocket every named NPC you encounter? No, you don’t. Indeed, some people would suggest that playing that way is “optimizing the fun out of the game.”
But here’s the thing: you will end up feeling punished for most of the game, because the designer-based incentives do not align with your playstyle. In every combat encounter – which will be the primary source of all credits and XP in the game regardless of how you play¹ – you will be acutely aware of how little ammo you have left, switching to guns that you don’t like and that take longer to kill enemies with, and stuck with smaller spaceships that perform worse in the frequent space battles and lack the quality-of-life features you would enjoy having. Sinking points into the Persuasion system will make those infrequent opportunities more successful, but those very same points mean you have fewer combat or economic bonuses which, again, will leave you miserable in the rest of the game.
Can you play any way you want in Starfield in spite of that? Sure. Well… not as a pacifist. Or someone who sneaks past enemies. Or talks their way out of every combat encounter. But yes, you can avoid being a total murder hobo. You can also turn down the graphical settings to their lowest level and change the resolution to 800×600 to roleplay someone with vision problems. Totally possible.
My point is that gameplay incentives matter. Game designers don’t need to create strict moral imperatives – in fact, I would prefer they didn’t considering how Dishonored felt to play – but they should take care to avoid unnecessary friction. Imagine if Deus Ex: Human Revolution did not award extra XP for killing unconscious NPCs, and using found passwords automatically gave you all the bonus XP/resources that the hacking game offers. Would the game get worse or more prescriptive? No! If anything, it expands the roleplaying opportunities because you are no longer fighting the dissonance the system inadvertently (or sloppily) creates.
In Starfield’s case, I’m a murder hobo because the game doesn’t feel good to play any other way. But at the root of that feeling, there lies a stupidly simple solution:
- Let players craft ammo.
That’s it. Problem solved – I’d hang up my bloody hobo hat tomorrow.
Right now the outpost system is a completely pointless, tacked-on feature. If you could craft your own ammo though, suddenly everyone wants a good outpost setup, which means players are flying around and exploring planets to find these resources. Once players have secured a source of ammo, credits become less critical. This removes the incentives for looting every single gun from every single dead pirate, which means less time spent fighting the awful inventory and UI. With that, being a murder hobo is more of a lifestyle choice rather than a dissonance you have to constantly struggle against.
That’s a lot of words to essentially land on leveraging the Invisible Hand to guide player behavior. And I know that there will be those who argue that incentives are irrelevant or unnecessary, because players always have the choice to play a certain way even if it is “suboptimal.” But I would say to you: why play that game? Unless you are specifically a masochist, there are much better games to roleplay as the good guys in than Starfield. You can do it, and there are good guy choices to make, but even Bethesda’s other games are infinitely better. And that’s sad. Let’s hope that they (or mods, as always) turn it around.
¹ I have read some blogs that suggest you can utilize the outpost system to essentially farm resources, turn them into goods to vendor, which nets both credits and crafting XP. So, yeah, technically you don’t have to rely on combat encounters for credits. However, you can’t progress through the story this way, and I’m not sure that using outposts in this fashion is all that functionally different from simply stealing everything.
Human Slurry
Scrolling on my phone, I clicked into and read an article about Yaupon, which is apparently North America’s only native caffeinated plant. Since we’re speed-running the apocalypse over here in the US, the thought is that high tariffs on coffee and tea might revitalize an otherwise ultra-niche “Made in America” product. Huh, interesting.
I scrolled down to the end and then saw this:
I’ve seen summarized reviews on Amazon, but never comments. Honestly, I just laughed.
It’s long been known that the comments on news articles are trash: filled with bots or humans indistinguishable from bots. But there is something deeply… I don’t know a strong enough word for it. Cynical? Nihilistic? Absurd? Maybe just fucking comedic about inviting your (presumably) human readers to comment on a story and then just blending them all up in a great human slurry summary so no one has to actually read any of them. At what point do you not just cut out the middle(hu)man?
If you want a summary of the future, that’s it. Wirehead, but made out of people.