Category Archives: Philosophy
Pebbles are small, but if one finds its way into your shoe and you can’t get it out, it can be enough to ruin your day. Or in this case, your gaming experience.
I started playing Divinity: Original Sin 2 (DOS2) recently, and it’s been fun thus far. There are a lot of interesting new design directions this time around, and I might talk about them in a different post. In this post though, we need to talk about a pebble: inventory management.
…actually, that might not be the root of the issue. This pebble has layers.
DOS2, and the series in general, makes a big deal about the autonomy and uniqueness of each character. Characters have origin stories, personal quests, unique special abilities, and their own dialog options. Talk with one distraught woman as Ifan and she shouts “stay away from me you disgusting pig!” Talk with the same woman as Sebille, and you’ll hear her story. It’s immersive… to a point. It’s also awkward, considering you are a player controlling four unique beings, one of whom is supposed to be the “main” character.
The awkwardness extends out into the game proper too. Some of the “Civil Abilities” you can put points into are Persuasion and Bartering. The former lets you overcome conversation checks while also improving your discount with a vendor; the latter improves only the discount. That’s fine, right? It’s typical for CRPGs to essentially encourage specialization, such that you have someone really good at disarming traps, someone running interference for your wizards, and so on.
The problem is when the “main” character isn’t the one with the Persuasion skills. I had been playing for about 5 hours and wanted to offload some goods at a local vendor, only to realize that the person with the biggest discount wasn’t carrying any of the merchandise. And there was zero way to move items around except one at a time. That’s the pebble. There’s a “workaround” where you stash everything inside a backpack that you can then pass around, but that still involves manually moving one item at a time into the backpack. Why isn’t there a “move all items” option?
My characters are like level 3, and the difference in discount between the “main” character I had been controlling (and scooping up all the loot with) and the guy with the highest discount is 2%. No big deal, yeah? Also, there is apparently a magic mirror in Act 2 or whatever that allows you to freely respec all your characters any number of times, so I’ll be able to solve this Persuasion situation and make my “main” character the primary seller too.
Like I said, it’s a pebble, not some bottomless chasm.
…at the same time, this little pebble is drawing my attention to the fact I’m walking on a trail full of them. With sandals. I made Ifan a Summoner, who is apparently going to need to be the most Persuasive out of the bunch if I want to be using him to click on treasure chests and dead bodies. Or I could keep the Red Prince as the sell-bot since he’s already the best at it, but that would mean I’ll need to be using him to pick up stuff and talk to people. That would mean I’ll miss out on Ifan’s dialog options though, so I’ll need Ifan to be the sell-bot. But he’s a Summoner, not a warrior, so my carrying capacity is lower. I guess I could move crafting material around to compensate…
By the way, there’s another Civil Ability called Lucky Charm that gives you a chance of finding special loot in every container you check. Originally, this proc’d only if the character who had the skill checked the container. It’s since been patched to be party-wide, which is nice, because the original behavior was otherwise insane. Which is what it kinda feels like for the rest of these abilities.
All of the above because I noticed a 2% discount between characters. But try walking for 80+ hours with a pebble in your shoe and tell me it doesn’t become a big deal over time. And make you question why you can’t just take off your shoe for a second and get it the hell out.
Sometimes gaming progress does not happen smoothly. Instead of one thing immediately leading into another, there is a sort of gap that must be leapt across. While not insurmountable, this break in progress can become a source of resistance to continuing to play a game at all.
I am playing Oxygen Not Included (ONI) again. As I have described before, the game is deceptively easy at the start, but there are disasters looming in every detail. Some things are obvious, like your Dupes running out of Oxygen. Other things are much less so, like the fact that your Dupes just dug out a section of rock – which you told them to do – and then placed the 40°C (!!) rock in a storage container in the middle of your base, and now everything is heating up. Oops.
For the most part, it is generally easier to start a new game with a new map than it is to try and fix a disaster in progress. Plus, it’s fun seeing what goodies the RNG fairies might deliver to you. Cold biome nearby? Natural Gas Geyser ready to be tapped? Awesome.
Nevertheless, there is a specific transition gap that I inevitably reach and often quit playing rather than make the jump. In ONI, that gap is the Electrolyzer. This is a device that turns water into Oxygen and Hydrogen, and is pretty much the solution for breathable air for the rest of any ONI run.
It’s also a pain in the ass.
Up to this point, you make air by burning Algae, and it’s relatively straightforward. With the Electrolyzer, you have to worry about piping the Hydrogen somewhere else, as otherwise it will clog the ceiling of whatever room you are in.
In ONI-land, there is the mythical SPOM, or Self-Powering Oxygen Module. This is a solved design for creating an effectively infinite air source with no maintenance or upkeep aside from water: a Hydrogen Power station powers the Electrolyzer, which in turn supplies the station with fuel.
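For the curious, the “self-powering” part comes down to simple arithmetic. The sketch below uses the in-game numbers as I remember them – an Electrolyzer drinking roughly 1 kg/s of water for ~888 g/s of Oxygen and ~112 g/s of Hydrogen at 120 W, Hydrogen Generators making 800 W from 100 g/s of fuel, and 240 W gas pumps – so treat the exact figures as approximations rather than gospel:

```python
# Back-of-the-envelope check on why a SPOM can power itself.
# All figures are approximate recollections of ONI's building stats, not authoritative values.

ELECTROLYZER_POWER = 120      # W consumed by the Electrolyzer
GAS_PUMP_POWER = 240          # W consumed per gas pump (one for Oxygen, one for Hydrogen)
HYDROGEN_OUTPUT = 112         # g/s of Hydrogen produced by the Electrolyzer
GENERATOR_POWER = 800         # W produced per Hydrogen Generator
GENERATOR_FUEL = 100          # g/s of Hydrogen a generator burns

# Power available from burning all the Hydrogen the Electrolyzer makes:
power_in = HYDROGEN_OUTPUT / GENERATOR_FUEL * GENERATOR_POWER   # ~896 W

# Power needed to run the Electrolyzer plus two gas pumps:
power_out = ELECTROLYZER_POWER + 2 * GAS_PUMP_POWER             # 600 W

print(f"Surplus: {power_in - power_out:.0f} W")  # ~296 W left over, hence "self-powering"
```

In other words, the math works out comfortably in your favor; the pain is in the plumbing, the gas routing, and the automation, not the energy budget.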
Despite there being a ready-made solution to the problem, or perhaps in spite of this fact, I typically end my ONI runs here. The SPOM is not particularly intuitive, so I basically need to copy it part-by-part from a YouTube guide. Even if I don’t build the SPOM specifically, the Electrolyzer still forces your base design to account for mixed gases. Ignore the problem long enough, and it’ll be even more of a pain in the ass later.
Finally, even with a cut-and-paste SPOM, you still need a ton of water at the ready to feed the beast. Where will all that water come from? Typically, the only long-term solution is to find a Steam Geyser somewhere on the map, but that could take a while, and it might be nowhere close to you. If you set up a functioning plumbing system, you can technically harvest some additional H2O via that route. Of course, that also requires extensive planning of your base, including how you’ll handle the hot water that comes out of a Water Sieve.
Good times. Or, maybe not so much.
I have bridged the Electrolyzer gap before. It’s not an insurmountable problem, especially considering the ubiquity of the SPOM design in guides. It just takes a lot of mental headspace at a very specific moment in a hitherto casual colony management sim. Or rather, it is at this moment that Oxygen Not Included reveals itself to be a more complicated beast than you had imagined.
Many games have these transition gaps. The best designed among them either shorten the gap, or get you in the habit of hopping long before you reach the gap that matters. Otherwise, the devs risk players landing on their face. Or perhaps worse: practicing to make the leap, doing so, and then being bored on the other side.
I really wish game developers would just let us eat the damn marshmallows already.
If you have never heard of the test before:
The Stanford marshmallow experiment was a series of studies on delayed gratification in the late 1960s and early 1970s led by psychologist Walter Mischel, then a professor at Stanford University. In these studies, a child was offered a choice between one small reward provided immediately or two small rewards if they waited for a short period, approximately 15 minutes, during which the tester left the room and then returned. (Wiki)
I have been playing Prey lately, and noticed it does something similar. Over the course of gameplay, you accumulate a number of Neuromods, which are essentially skill points. At the beginning, you can only assign these points in “traditional” skills, such as hacking, increased weapon damage, more inventory space, and so on. A few more hours of gameplay later, you will be able to invest points in “alien” skills, like Kinetic Blast, short-term mind control, flame traps, etc. The game warns you though, that if you start gaining alien skills, the security system (e.g. turrets) in the space station will start registering you as an alien. It might also affect which ending you receive, although I have resisted looking at spoilers for that.
That is basically the marshmallow test. You can either be rewarded with fun new toys now… or you can abstain and be “rewarded” with a better ending later.
Prey is nowhere near the worst offender here. I have also been playing through the DLC of Dishonored off and on, and it’s a thousand times worse. In Dishonored, killing people (instead of knocking them out) increases the “chaos” of the city, which not only leads to a bad ending, it also makes the game harder by spawning swarms of rats that attack you on sight (and are immune to typical assassination skills). Which would be somewhat fine, if it were not for the fact that damn near 95% of the abilities and skills you unlock through gameplay revolve around killing people.
Life is full of delayed gratification. Most of us spend ~40 hours a week doing something we’d prefer not doing, in order to receive money weeks from now to finance the things we actually do want to do. Delaying our already-delayed gratification is some Inception-style nonsense.
Now, I do not necessarily have an issue with the best endings being difficult to achieve, or the existence of Achievements, or even just choice in general. What I have an issue with is a game that gives you a carrot and then beats you with a stick for eating it. The original Deus Ex made you choose between invisibility to humans and invisibility to robots. That’s a good choice! Note how the designers didn’t give you access to invisibility and then tell you there would be dire consequences to using it. That would be dumb.
Do not make your players choose between Fun and No Fun. Because some of them are dumb enough to choose No Fun, even when they hate marshmallows. Save us from ourselves.
I was browsing Kotaku the other day, and a paragraph struck me:
Nobody ever asks why anyone stopped playing Halo 2. No response would merit it. The game came out in 2004, and three years later, there was Halo 3. At some point, it got old. Another game came along. Friends moved on. It was just a thing you did, and then you went and did something else.
This is something I struggle with, internally. Not Halo 2, but with the general concept.
I used to play a lot of Counter-Strike back in the day. So much so that I was extremely bitter when version 1.6 came out and changed the way a lot of the guns fired (1.5 for life). I transitioned into Warcraft 3-modded Counter-Strike servers – Night Elves went invisible when they stopped moving, Undead had low gravity and regained health when dealing damage, etc – before finally moving on entirely to Battlefield 2. I played that damn near daily for like four years. Then Magic Online for a while, then World of Warcraft for a decade.
Looking back, what can I even say about any of those decades of gaming?
“I had fun playing Counter-Strike.” Maybe someone else can say “me too,” and then commiserate about X or Y change in the intervening years. But that’s it. We can’t really share our experiences in any further detail – you had to be there in that moment, else it’s just a vague sentiment, if one tries to communicate the feeling at all. WoW is different in the sense that I eventually met my guildmates in the real world – and invited each other to our weddings – but I can’t imagine meaningfully talking with some random WoW player on the street.
Contrast that with, say, any of the Final Fantasy games. Or Silent Hill. Or really any single-player, narrative experience. If someone says their favorite game is Xenogears, I could meaningfully talk with them for hours. We could discuss our favorite team compositions, how shocked we were about X revelation, how funny the mistranslations were, and so on. That means something in a way that “This one time on de_dust…” does not. We played the same game, but had different experiences.
At the same time, I don’t want to denigrate other people’s experiences. I wouldn’t suggest that someone hiking in the woods or fishing is wasting their time, despite those discrete events being equally ephemeral and unrelatable. There are people who simply enjoy wandering around virtual worlds, like there are people wandering around the real world. If that’s what you like, keep doing it.
I worry about myself though. I started Hellblade: Senua’s Sacrifice the other day, and enjoyed the play session. After that, it’s been days and days of Slay the Spire (Ascension 12 with the Silent) and 7 Days to Die. The latter is especially egregious, considering it is in an unfinished Alpha state. Why not put it down and go back to Hellblade, which is – by all accounts – a much deeper experience? Because, in that moment, these other (potentially vapid) experiences are 5% more pleasurable.
“If you’re having fun, what does it matter?” Well… wirehead. Also, having fun, in and of itself, is not relatable. Which, I suppose, betrays an underlying desire of mine to be relatable, or at least capable of conveying relatable experiences. Even if there were people who wanted to read “I had fun playing videogames today,” I wouldn’t want to write just that. There should be something more.
I dunno. It would be one thing if the dilemma were between playing videogames and completing some meaningful task IRL. It’s not. There is nothing more #firstworldproblems than angst over which of two leisure activities provides the most long-term utility. Nevertheless, the worry exists, alongside a deeper one as to whether wirehead experiences have increased my fun tolerance beyond the reach of narrative games altogether. Or perhaps I am simply playing the wrong narrative games.
My patience with enforced 50% win rates is paper-thin. Graphene-thin, even.
“A fair game is one in which you win half the time.” It’s hard to argue against such a notion. What is more fair than a coin flip? The problem is that players aren’t equal sides of a coin, nor are the thousands of potential actions reducible to two easily predictable, binary outcomes. Some approximation is required. Or a developer thumb on the scale.
I am still playing Clash Royale despite the disastrous pivot towards blood-from-a-stone squeezing, and the conceptual breakdown of all progression for long-term players. But some of their shit is driving me up a wall, and will eventually drive me from the game entirely. Specifically, Clan Wars, and even more specifically, a particular game mode with preconstructed decks.
To be sure, there are learning curves involved. Supercell basically took some “top decks” and added them to a pool, from which you are randomly assigned one for a single game. The problem is that some of these decks are just objectively terrible with no redeeming qualities, and still others are straight-up countered by some of the other matchups. For example, these two Classic Decks Battles:
In the first match (at the bottom), my Royal Hogs are immediately countered by Valkyrie, and my Mega Knight is immediately countered by Inferno Tower. Amusingly, Royal Hogs are also countered by Inferno Tower and Mega Knight by Valkyrie, assuming my opponent times it right. Meanwhile, I can counter his Goblin Barrel with Arrows, but they both cost 3 Elixir and thus end in a wash… with the slightest error on my part resulting in easily >30% tower damage. My Zappies are basically useless, my Inferno Dragon even more useless, and I can’t use Arrows to counter his Princess or Goblin Gang because then I become vulnerable to Goblin Barrel. I also can’t hope to Fireball him out because he also has Rocket, which deals way more damage than Fireball. The ONLY way anyone could possibly win with the deck I was given was if the opponent was AFK. [1]
For a WoW analogy, think Warrior (me) vs Frost Mage (opponent).
The second matchup wasn’t technically as lopsided, but still awful. Bandit is straight-up countered by pretty much every card in the opponent’s deck. Rascals + Zap took care of Minion Horde every time I threw one down, and Hog Rider/Mortar/Goblin Gang meant I could be punished immediately for dropping Elixir Collector or Three Musketeers. Which is what happened, pretty consistently. If I played better, I might have been able to distract a Mortar with my Valkyrie or Bandit in the other lane, and then split a Three Musketeers or something in the middle, followed by a split Minion Horde. Even then, if he played defensive for 20 seconds, my shit would have been countered.
Were these match-ups truly random? Or “enforced” 50% win rates? There is no direct economic incentive for Supercell to “rig” the Classic Decks Battle mode, but the RNG is opaque and it would certainly be a method to ensure that winrates do not get too lopsided.
The third clan war battle I played was Draft. In this game mode, you are given a choice of one of two cards, four times total; whatever you don’t pick goes to your opponent. I’m not sure if the card pairings are 100% random, but you can absolutely get stuck with some extremely shitty decks and/or matchups. And yet I’m fine with that. You as the player have some agency, even with imperfect information, e.g. choosing Minion Horde when the opponent might have chosen Arrows. Indeed, Minion Horde in particular is a classic risky pick because of how many cards can counter it… but if your opponent doesn’t have any of those counters, it can be an overwhelming advantage.
My feelings on enforced winrates have changed over the years. Initially, it seemed fine. Necessary, even. But it is rigging, especially in the methods that many game developers go about it: pairing you with terrible teammates, matching you against strong counters, etc. The end result is that I simply cannot trust game developers with (opaque) RNG anymore. They have no incentive to be actually fair – however fairness is defined – and every incentive to produce favorable (to the devs) results. Even if they showed me the specific game code that chooses the matches, I have no reason to believe it operates in that way. This age of monetization and consumer surplus erosion has pushed me past the Cynicism Horizon, from which no trust can escape.
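To be concrete about what a “thumb on the scale” could even look like, here is a purely hypothetical sketch of a matchmaker that steers winning players toward counters. Nothing here reflects Supercell’s actual code or any known implementation; the function names and the “counter score” idea are inventions for illustration only.

```python
import random

# Purely hypothetical sketch of an "enforced 50% winrate" matchmaker.
# It does not reflect Supercell's (or anyone's) actual code; it only shows
# how opaque matchmaking *could* bias outcomes while still looking random.

def counter_score(candidate_deck, player_deck):
    """Stand-in metric: how hard the candidate's deck counters the player's (0..1)."""
    return random.random()  # a real system would consult a deck-vs-deck table

def pick_opponent(player, candidates):
    scored = [(counter_score(c["deck"], player["deck"]), c) for c in candidates]
    if player["recent_winrate"] > 0.5:
        # Winning "too much"? Quietly prefer opponents whose decks counter yours.
        return max(scored, key=lambda t: t[0])[1]
    # On a losing streak? Hand out a favorable matchup instead.
    return min(scored, key=lambda t: t[0])[1]

player = {"deck": ["Royal Hogs", "Mega Knight", "Zappies"], "recent_winrate": 0.65}
candidates = [{"name": f"opponent_{i}", "deck": ["..."]} for i in range(10)]
print(pick_opponent(player, candidates)["name"])
```

From the outside, every match produced this way still looks like a random pairing, which is exactly the problem: you cannot distinguish bad luck from a nudge.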
The only thing that game designers can do, and the thing they should be doing, is increasing player agency in the RNG elements. Drafting feels fair, even when the results are not. Maybe it is just another psychological trick to employ, giving someone the “choice” between a rock or a hard place. But it is an important one for not appearing so nakedly rigged in favor of one particular outcome.
[1] If you can produce some videos of pros beating non-AFK people with the decks I was given, I’ll concede that I need to L2P. I typically end the season at 4800 trophies and can acknowledge mistakes, but on paper and in practice, those match-ups felt lopsided as hell.
Estebon had an interesting comment on my prior Entitlement Culture post, in defense of the experts:
There is, unfortunately, a general zeitgeist of mistrust toward expertise in the world today, which has bled over to gaming. Gamers, particularly of the self-identified variety, make for an especially fertile ground for that sort of thing, for cultural reasons.
Game devs are supposed to be the experts in their field. They’re the ones who, at least in theory, beat the hiring/funding gauntlet on their merits. That their opinion on how to make a good game ought to carry greater weight than that of the person in the street used to be… more or less self-evident, as with any other profession.
It’s difficult to imagine a set of statements that I disagree with more strongly on a fundamental level.
First, suggesting game developers are “experts in their field” because… they’re game developers… is a tautology. We might assume that these bigger game companies have some kind of hiring standards, but that never really seems to be the case. Instead, it’s often more recursive like “previously sold a popular game” or “already worked for us in QA” or “nobody else applied.”
Remember Greg Street (aka Ghostcrawler) of WoW (in)fame(y)? From his Wikipedia article:
Street graduated from McDaniel College in 1991 with Bachelor of Arts degrees in Biology and Philosophy, later earning a PhD in marine science. Between 1996 and 1998, Street worked as a Research Assistant Professor at the University of South Carolina. […]
Game Design career
Ensemble Studios, the team behind the real-time strategy series Age of Empires, employed Street as a designer in 1998. With no education or experience in the game industry, Street suspects he was accepted due to his “writing and teaching experience, historical breadth, personal hygiene, gudd speling [sic], creativity, [and] my talent at capturing live alligators”, as well as the user-created scenario for Age of Empires he submitted with his application, which later appeared in Age of Empires: The Rise of Rome. Street helped develop every Age of Empires game from Rise of Rome on, until his departure from the company. At first he designed in-game scenarios and maps, and later graduated to being the team’s lead designer.
Street was hired by Blizzard Entertainment in February 2008, and was the lead systems designer on the MMORPG World of Warcraft until November 2013.
Now, you can hate Ghostcrawler’s philosophy during his WoW tenure – I personally thought it was fine overall – but the fact remains that this marine biologist worked as a research professor for like two years, wrote an Age of Empires scenario, and then a decade later became a billion-dollar franchise game developer (or a prominent cog in the machine thereof). Twice! We either have to assume that Ghostcrawler is a hidden genius, or that there are no particular standards that apply to game designers generally.
There is a third option too: the M. Night Shyamalan effect. You know, the writer and director of the 1999 cultural touchstone, The Sixth Sense? He followed up with Unbreakable and Signs, which were whatever. After that, it was a solid decade of unremitting garbage films. Shyamalan is a supposed expert in his field, as evidenced by movie companies continuing to hire him, but clearly he lost whatever magic he had. Or perhaps more likely, the seam of magic he just happened to tap into shifted, and he wasn’t able to find another.
I bring a lot of this up because I find the hero worship of brands or developers (or anyone) to be… misguided, at best. For one thing, if these people were “experts in their field,” one would expect fewer game studios to be closing down or laying off staff. As I pointed out a few years ago, most of the same people have been working on WoW this whole time, so any declines in perceived quality can be attributed to the Shyamalan effect.
The only measure that matters for an expert (game developer) is continued, consistent results. Did they make your favorite game back in the early 2000s? Good for you… but why are you still waiting for them? It boggles my mind whenever someone talks about Bethesda and Morrowind, for example. That game came out in 2002. It can still be great, but you knew after Oblivion that something changed. How many new Shyamalan films are you going to sit through before you give up?
From the player side, Estebon pointed out:
J. Allen Brack got memed for his “you think you do but you don’t” line, and devs and customer relations reps have long been trained to pay lip service to the idea that the untutored mob knows best, but people routinely say and demand things that are not remotely reflected in their behaviour or proclivities as reflected in the internal metrics available to game developers. Elsewhere, insane fortunes have been built by paying attention to what people do, not say, and giving us things we never asked for or imagined we needed.
I actually agree with that. Players are generally bad at coming up with solutions to their problems, even when the solutions aren’t inherently contradictory. What players are exceptionally good at, though, is identifying that a problem exists in the first place. The problem might only be impacting them, specifically, but that’s all that really should matter to them or anyone.
All of this is to set up my title analogy.
Game developers are chefs. You don’t need to go to culinary school to be a good chef, and having a degree doesn’t mean you always cook tasty food. Being the best chef in the world will not stop a dish tasting like shit if there is too much salt/it’s burnt/etc. We might expect a master chef to avoid rookie mistakes, but there is another integral component to the dish: the tastes of the person eating it.
In a restaurant, we can assume the customer is choosing a dish they think they will like. If it comes out too salty to their taste, no one bats an eye at said customer complaining about it. “Entitled diners not wanting their food caked in salt!” The relationship is inherently transactional, and there is an expectation of quality. There are limits, of course; no one should expect Chick-fil-A or KFC to sell burgers, for example. It is also unreasonable for ten chefs to cater to the individual palates of ten million individuals.
Is that going to stop you from complaining when you get served a salty steak, or if the French Fries are limp at a chain restaurant? Should that stop you? No. I couldn’t cook a restaurant-quality meal, but I sure as shit can criticize one if it comes out poorly. Gaming today is no different.
Granted, it used to be different. The last bit of Estebon’s comment was:
I struggle to think of any other form of entertainment where the audience claims the right to meddle in the details of the creation process quite to the same extent, as opposed to just letting the product succeed or fail as a whole, in a binary way.
Back in the day, games were done. Cartridges were manufactured, CDs were pressed, and physical media was sent to stores. If there were still game-breaking bugs or exploits that got past QA, well, hopefully they weren’t bad enough to sink an entire $10+ million investment. Games in that era were more akin to traditional entertainment like movies or books in the sense that fans could only possibly influence future decisions. Once it was out, it was out.
As we are abundantly aware today though, games are now a service. Something like a Day 1 patch clocking in at 40 GB is not uncommon. No one expects to unwrap a PS4 on Christmas and immediately start playing anything. Moreover, game developers want us to know that development is an ongoing process. A game in maintenance mode is “dead,” and one which is no longer receiving updates is “abandoned.” We barely even have the language to describe a finished product anymore.
Gamer entitlement didn’t get us here. Game makers leveraging social media for free PR and turning “lip service” into a competitive advantage got us here.
Which is just as well, because I’m not especially convinced anyone knows what they are doing. Did Notch know he created a $2 billion game when he released Minecraft? The original dev team for WoW certainly didn’t know they would have 8 million subscriptions by the end of 2008, nor have they been able to do much to stem the bleeding over the last decade. We can’t attribute all of this to corporate malice, because that doesn’t explain why these rockstar developers can’t recapture lightning in a bottle when they move elsewhere.
If you can’t reproduce results, what does that say about your expert game development science?
I think the important thing is to not put game developers on a pedestal. They aren’t scientists (anymore) doing peer-reviewed studies changing the way we understand the world. They’re just people who have eaten food before and think they could come up with something better. Occasionally they do, and even more occasionally they do it on purpose. But can they do it again?
Well, here is the 2018 version, inspired by this section from MMOBro’s recent post:
The problem with trends is that businesses chase them to the detriment of innovation and traditional success stories. It also reinforces the entitlement culture gamers have developed over the years. Read responses to any game developer’s tweet if you don’t believe me. “I supported you for 10 years and now you RUINED Magic Turtle Kingdom by adding BLUE HAIR! READ THE LORE! You’re so stupid I uninstall and never support you again.” This is an issue with society at large, but game design continues to move in a direction that feeds player entitlement. Games tell players they earn their wins but aren’t to blame for their losses, and egos balloon as a result.
All of this creates more toxic communities, games developed for the common denominator, less creative character development, and less chances to show player skill. It’s not where I want see game development money heading, but you can’t outrun a tsunami.
The Bro’s overall post was a lament about the “MMOification” of all gaming genres. Which is a thing more commonly referred to as “adding RPG elements,” but seeing as RPGs are becoming rather scarce these days, MMOs are probably a good enough example to explain what is happening. Which, basically, is a cross-generation acknowledgment that XP and seeing meters fill up is pretty universally compelling (to a point).
But what I actually want to talk about is this part:
It also reinforces the entitlement culture gamers have developed over the years.
First, using “entitlement” as a pejorative is a thought-terminating cliche that absolves one of examining whether the implicit claims make any sense. By saying “entitled gamer” you really mean “gamer who erroneously believes their opinion has value” without bothering to explain A) why that opinion holds no value, and B) why your own opinion does.
But it’s worse than that. The (presumably hypothetical) example of a gamer tweeting criticism of an apparent lore discrepancy is meant to make the entire exchange seem ridiculous. Not just the threatening of uninstalling part, but also, implicitly, the giving so much of a shit about lore/story/world in the first place. I agree that such a tweet is bombastic and the tone counter-productive. But instead of having a conversation about whether the designers actually ignored the rules of their own game fiction, we’re talking about “entitled gamers.”
Second, there is a presupposition that gamers have changed over the years at all. Did you really not know anyone who behaved like this hypothetical entitled gamer prior to the age of MMOification? Did not see them in high school, or the Returns section of Wal-Mart, or at the sidelines of their kids’ soccer games? Did you not encounter them playing Magic: the Gathering, or in Counter-Strike lobbies, or in your D&D group? Did you perhaps only encounter them once you started playing with large groups of completely random people from across the country/globe?
What changed was access. If someone was really upset about Super Metroid, they mailed a letter to Nintendo Power or otherwise shouted into the void. You never heard it. These days, they shout in your Twitter feed, your Facebook timeline, or in your subreddit. None of which existed prior to 2004, by the way, and didn’t get really popular until years later. We’re barely a decade into this grand “give everyone a voice” experiment, and as it turns out, not everyone has something nice to say.
Even worse/better, the developers want the shouting! Probably not the death threats and general ugliness, but absolutely the feedback and passionate, free advertising that spreads by digital word-of-mouth. These companies are not handing down stone tablets from on high – they are selling a product. And when you are in sales, it literally pays to minister to your customers.
This positive attention, not generalized entitlement, is what encourages a quite literal feedback loop. Maybe this loop counts as changed behavior, but that’s a function of attention, not egos inflated by game mechanics. I still contend that we’re only more aware of the nonsense these days because the devs have Twitter accounts (etc) to conveniently compile all the nonsense in a single location, which we then encounter as we try to glean nuggets of design wisdom from the chicken entrails.
In summation: when you pool everything in the same place, of course the turds float to the top.
The irony is that, at the end of the day, we all want better games, yeah? We may disagree on what “better” consists of or how to accomplish it, but we all desire fun things to play. The one sure-fire way to not achieve that goal is to label one’s opponents “entitled,” or to claim there is an “entitlement culture,” and thereby erode the very notion that gaming can (or should) be taken seriously at all.
If the kind of games you want to pay for are no longer being made, that’s a market failure. Threatening to quit over blue-haired turtles is rather silly, but I’d rather have developers attentive to details than the opposite, and you should too. Because, eventually, it will impact your favorite game.
And then you will not be entitled to complain about it.
Official reviews are coming in regarding Fallout 76, and they are almost universally bad. Like, real bad. In reading them though, it’s very clear that Bethesda did not live up to the games these people invented in their heads:
The collision of Fallout and multiplayer sparks all sorts of exciting ideas in my mind, most of which have to do with post-apocalyptic role-playing. What if I ran a town, hosting elections and keeping the peace? What if I opened a shop, selling exotic items to other players in a desperate bid to raise enough caps to survive the harsh wasteland? What if I worked behind a bar, serving drinks to other players, passing on gossip and words of wisdom? What if I was the head honcho of a group of raiders, ordering other players to attack camps and loot the corpses of our enemies? What if I founded my own faction, something like Caesar’s Legion from Fallout New Vegas, perhaps? What if I wanted to infiltrate a player-run faction I didn’t get on with, befriending their leader before stabbing them in the back?
Unfortunately, Fallout 76 does not facilitate any of those fantasies. What it does instead is facilitate boredom, frustration and game-breaking bugs.
Like, what the shit, Eurogamer? “Bethesda didn’t make an EVE/Star Wars Galaxies mashup, 0/10 stars.”
The rest of that review is slightly less ridiculous. There are complaints about the tutorial quests that ask you to boil water and pick up bottles:
Most missions are little more than fetch quests. Go here, get the thing, bring it back, interact with a robot, job done. It’s mind-numbing in the extreme. It’s Fallout at its worst: basic, monotonous and lacking nuance.
Of course, that had me trying to reach back and remember the quests in Fallout 4, New Vegas, 3, and so on. Replace “interact with a robot” with “talk with an NPC” and… does that not describe basically everything, in any game? A lot of people post memes about how Fallout 3 was finding your daddy and Fallout 4 was about finding your son, and yet here we are lamenting about being free from such mundane burdens.
To an extent, that’s an unfair comparison. Fallout’s best stories were always side-quests, with the main narrative basically acting as a vehicle to drive you around the wasteland looking for them. While holo tapes can be poignant, they just aren’t the same when you can never affect the world.
At the same time… I don’t know that I miss any of that.
I want you to remember all the things you did in Fallout 3, New Vegas, and Fallout 4. Think about what was fun for you. Was it…
- Striking out and going wherever you wanted to go
- Exploring ruins, caves, and cities
- Collecting junk to craft gear
- Leveling up skills, getting Perks
- Shooting things in the face
- Solving moral dilemmas among various NPC groups
Hey, what do you know, Fallout 76 has five out of those six things! And arguably does those five better than any Fallout has before.
I am not trying to denigrate story and narrative here. I’m just saying that I don’t miss it in Fallout 76. In fact, the whole thing is making me question the cohesiveness of the prior games. For example, how much does the ability to strike out and roam around really improve, say, New Vegas’ narrative? Back when I played, I didn’t give two shits about finding Benny beyond the fact that I had a quest entry that wouldn’t go away otherwise. As I wrote back in 2013:
But the overarching narrative of revenge never felt personally compelling, and the coming clash between NCR and Caesar’s Legion seemed a digression. This game was Fallout when I was just wandering around, eager to scavenge what I can out of crumbling ruins I see just on the horizon. When I was the Courier just trying to make a final delivery for no particular reason? Not so much. […] I wasn’t protecting my home, my family, nor was I my own person. I was… the Courier, a stranger in familiar skin, following a past everyone knows about but me.
This is the same problem I had with Witcher 3 – the setting and the story were at complete odds with each other. Your motivation is to find Ciri before the Wild Hunt can, but oh hey, look, there are 40 hours of sidequests you can do over here first. All of which are a hundred times more interesting and immersive than the main, ostensibly race-against-time quest.
I appreciate the fact that you could kill just about anyone in New Vegas. Or kill next to no one. It is fairly uncommon in gaming to be able to resolve conflict in many different ways. But you don’t need the Fallout scaffolding to do that. By which I mean the wandering around, the looting abandoned buildings, the Power Armor, the Fast Travel ferrying of dozens of pipe rifles to sell to vendors for Caps to buy new shit. I was not “the Courier” when I was hunting for Wonderglue in a half-collapsed shack. I did that for gameplay reasons and because it physically felt good to do so.
So when I hear people say things like this:
To be fair what the hell is Fallout without the story and the player options/personalised quests/interesting world side of things beyond a clunky shooter.
…I feel like I’m going crazy. Open the map, walk over there, kill something along the way in an alternative-history post-apocalyptic 1950s. THAT’S FALLOUT (since 2008). You sure as shit aren’t playing New Vegas for 300+ hours for the storyline alone, son. You play it for that long because it’s fun walking around in that world, fun interacting with things, fun immersing yourself in the wasteland life.
Fallout 76 has systemic problems. The main one being the random server system, from which all other problems follow. All that glorious made-up shit Eurogamer was pining for could become a reality if there is a Moonguard-esque server that people specifically sought out and congregated on. Always-on PvP servers could also be a thing, with forced respawn areas and such. Pretty much everything is solved with servers, actually.
But all these people talking about the gunplay and the “emptiness” of the world? Clunky compared to what? New Vegas? Empty compared to what? Human NPCs with relatable human stories are fantastic, I agree. I just don’t need them to push me over the horizon and into the ruins – the hunt for Gears and Ballistic Fiber is motivation enough. There is still map to see, still ever-stronger enemies to face, and more guns to shoot them in the face with.
Fallout 76 is like when you finish (or ignore) the main story in a Fallout game but you just keep playing anyway. If you don’t do that sort of thing, then yeah, this game is not for you.
In addition to Hollow Knight, I have been playing a bunch of Dead Cells lately.
Because apparently I hate myself.
Dead Cells is basically a roguelike Metroidvania that has more in common with Rogue Legacy and Binding of Isaac than, say, Hollow Knight. Defeating enemies occasionally gives you a currency (Cells) that you can spend at the end of each level to unlock permanent upgrades and blueprints of items that are then seeded into the item pool of future runs. Of course, that assumes you make it to the end of the level – die before then, and you lose everything you were carrying, and have to start over at the beginning of the game.
Of course, that’s how roguelikes work. It’s expected that you start over a bunch of times. And in this regard, I definitely felt less terrible after a death in Dead Cells than I did in Hollow Knight.
…up until The Hand of the King encounter, that is.
The final boss in Dead Cells is absurdly more difficult than anything that comes before it. While its attacks are not inherently “unfair” beyond their massive power – they can be dodged just like everything else – most of them will prevent you from utilizing health potions, lest you get hit again mid-swig. Thus, you have very little opportunity to practice learning its moves, and dying here means it’ll take at least ~30 minutes of re-clearing everything else along the way to get another shot.
Well, after 26 hours /played in Dead Cells, I finally killed the last boss.
According to conventional wisdom, I should be feeling a sense of pride and accomplishment. I died to this boss at least ten times, re-clearing the entirety of the game to get another chance each time. The fight itself is difficult, and difficult = rewarding. Permadeath confers a sense of risk, and overcoming risk = rewarding. Right?
I feel none of that. And it sorta makes you question the whole “difficulty” edifice.
To be fair, I did not expect to win on the particular run that I did. The items offered on each run are random, and while you can sometimes affect the odds by resetting shop items, the best gear drops from bosses and you don’t have many shots at those. I had strolled up to the final boss several times before with what seemed to be unassailable combos, only to die embarrassing deaths. On the winning run, I made a last minute substitution that basically had no particular synergy with anything – it simply offered an extra 30% damage reduction, which apparently was enough to get me over the finish line.
I have never particularly believed that difficulty was valuable in and of itself. But the total emptiness of having beaten Dead Cells makes me question why I ever tried to debate anyone on difficulty previously. It is often taken as a given that “log in, collect epix” is bad, and defeating the game on extreme permadeath Ironman mode (or whatever) is good. But I know for a fact that I would have enjoyed Dead Cells more had I beaten the last boss two runs earlier than I did, rather than two runs later. And the disappointment and dissatisfaction I felt at losing was not made up for by eventually winning.
What makes the situation all the more absurd is that there is a lot more left to Dead Cells. Defeating the last boss unlocks “Boss Cells” which are essentially bonus modifiers you can apply to all enemies and bosses. Defeat the last boss on this new, higher difficulty and you unlock another Boss Cell slot. And so on, up to 4, which is the current limit. Ergo, the last boss could have been easier, and everyone else who craved a harder game could have been more than satisfied with four additional difficulty tiers.
I don’t know. Maybe I’m still just salty from winning when I didn’t expect to, and losing (several times) when I did. Perhaps that was the secret sauce all along – expecting to lose from the start led me to have lower anxiety levels during the fight. Or maybe I had seen the boss’s moves enough to commit them to muscle memory.
All that I know for certain is that difficulty, by itself, doesn’t particularly add anything meaningful to a game. In fact, it often can poison an entire experience. I’m not sure how you balance a game such that there are difficult moments without being frustrating, but Dead Cells ultimately did not get it right when it comes to the final boss. Which is a damn shame, because I otherwise had fun.
Novelty is a finite resource. The best we can hope for in a game is that it ends before the novelty wears off. Too soon and the game feels like it missed its full potential (which it literally did). Too late, and well… we feel relief when the credits finally roll. Assuming we can bring ourselves to crawl across the finish line at all.
As mentioned previously, I have been playing Hollow Knight. It’s a decently fun game (with some reservations) with amazing music and visuals. It also has an almost tangible sense of novelty that you can feel slipping by, as sand through your fingers. Unfortunately, the last third of the game is the “gritty fingers” part of the experience.
Metroidvanias have a delicate balancing act. The hallmarks of the genre are exploration, boss fights, character progression, and backtracking. Specifically, you explore new areas, fight new bosses, unlock new powers and/or movement abilities, and then go back to previously unreachable areas to unlock new zones. Repeat until done.
The problem is when either the new zone or new power steps become exhausted and the game just continues on. This is what happens in Hollow Knight. The base game “ended” around hour 15, but it took an additional 10 hours to finish.
Now, technically I unlocked the ability to fight the end boss and achieve an ending before the map and/or powers were totally completed. The issue is that this would have been a Bad Ending, and who the hell wants that? So I continued on, capping my movement abilities, and essentially farming harder versions of bosses I already fought for a currency to unlock the Real Ending(s). And there were technically new areas to explore too… but they weren’t the same.
A lot of games work this way, but the latter half of Hollow Knight basically transforms from what it originally was… into Super Meat Boy/Dark Souls. The White Palace area straight-up abandons any pretext of grounded world-building and populates halls with floating buzzsaws and thrusting spears. I was fine with the platforming aspects earlier in the game, because the thorns and spikes made in-universe sense. But what lumber were these subterranean bugs cutting in the stone palace, exactly?
Why are we going to such extremes to begin with? Bosses and puzzles getting harder over time is Game Design 101. But at a certain point, an ever-higher ceiling turns your living room into an auditorium into a cathedral. The entire purpose of the room changes. In an MMO, the transition is necessary as you move from the solitary leveling game into a daily/weekly set of multiplayer chores in order to keep players subscribed. Single-player games have no such need. It’s certainly disappointing when the final boss is weaker than a prior boss – be it due to greater player skill or character power – but I’m not sure erring on the side of absurdity is much better.
I did end up defeating the “true boss” in the base game a few days ago. It was an incredibly frustrating experience, because each time you fail, you have to defeat the “fake boss” all over again just to get another shot. Rather than satisfaction at finally completing a difficult task, all I felt was relief that my toil was finally over.
Those who enjoy the Super Meat Boy/Dark Souls experience will likely be happy with endgame content, and happier still with all the extended DLC that supposedly ups the ante even further. Anyone else who fell in love with Hollow Knight’s first 15 hours of gameplay, on the other hand, can presumably go fuck themselves.