Category Archives: Philosophy
EA has temporarily removed the loot boxes from Star Wars: Battlefront 2, right before the official launch of the game:
We hear you loud and clear, so we’re turning off all in-game purchases. We will now spend more time listening, adjusting, balancing and tuning. This means that the option to purchase crystals in the game is now offline, and all progression will be earned through gameplay. The ability to purchase crystals in-game will become available at a later date, only after we’ve made changes to the game. We’ll share more details as we work through this.
I am honestly quite surprised. The negative press surrounding GTA Online’s Shark Cards or Shadow of War’s single-player loot boxes effected zero change, but here we have EA, of all people, turning off the cash spigot right before the water main gets connected. Then again, EA did get mentioned in half a dozen news articles for having the most-downvoted comment in Reddit history (-676,000 at the time of this writing). Not exactly the narrative you want right before the game’s release.
It’s tempting to pat ourselves on the back, at least those of us who actually care about game design and our fellow human beings. But the victory feels… well, like EA says, “temporary.” They did the right thing… under withering criticism. It’s like a politician apologizing for a decades-old scandal – an apology is more than we can expect these days, but it would have been nice if they had apologized before it was news. Or, you know, never done the thing in the first place.
Alas, here we are.
It will be interesting indeed to see under what conditions the microtransactions return in SWBF2, and what possible new permutations they might take in other EA games. Will Battlefield Whatever’s design be impacted by this learning experience? Is this a learning experience at all, or simply an unfortunately-timed (for EA stockholders) zeitgeist?
We already know that the suits at Take-Two don’t give a shit:
It appears that the GTA Online/MyCareer model is going to be the standard for big Take-Two Games going forward. People have expected a GTA Online type environment for Red Dead Redemption 2, which launches next year, though Rockstar has not announced what its online features will be.
“One of the things we’ve learned is if we create a robust opportunity, and a robust world, in which people can play delightfully in a bigger and bigger way, that they will keep coming back,” Zelnick told investors. “They will engage. And there is an opportunity to monetize that engagement.”
And that sort of underscores the vice gamers are put in to begin with. SynCaine pointed out that anyone buying SWBF2 is complicit in its monetization scheme, even if they don’t spend cash on loot boxes. That is technically accurate. But by that same token, so is anyone who bought GTA V, given the Shark Card shenanigans. Do we really need to commit to never touching Red Dead Redemption 2 or the inevitable GTA VI?
I dunno. On the one hand, I am obviously an idealist when it comes to the purity of elegant game design. When the pieces fit together, when the various game systems synergize so perfectly… it’s orgasmic. Microtransactions have literally no place in any such gaming schema, any more than the concession stand does for the symphony performance. The symphony or game might rely on outside money in order to exist originally (artists have to eat), but once created, the art does (and should) exist independently.
Also, Consumer Surplus. It’s a thing.
On the other hand, we live in an absurd universe in which any sort of meaning or value is surprising. Thus, EA’s capitulation here, however temporary, is something to be celebrated. I certainly don’t think any of us expected it, especially given the likelihood that whales would have justified the PR hit by buying thousands of dollars of loot boxes on Day 1. And even if EA hadn’t backed down, if it’s possible for you to enjoy playing the game, what particular sense does it make to deny oneself? They’re microtransactions, not blood diamonds. Go have fun – nothing matters anyway.
All things considered though, I do think I’m giving SWBF2 a pass for now. Who is buying a game at full MSRP a literal week before Black Friday? Wait a month or two, save some cash, play your thirty other Steam games, and see how it all plays out. At least, that’s my plan. You do you.
In a recent debate with Gevlon, he replied with the following:
You still don’t realize how obsoleting content is against the defining feature of the MMO genre: persistent world, defined as “previous gaming sessions significantly affect the current”. It’s a genre. It’s not for everyone. But if you throw it away, you are competing with MOBAs and I think LoL is a better MOBAs than WOW.
Now, the topic at hand was a criticism of catch-up mechanisms. I, of course, disagree that there is anything wrong with the “End Game Content” model, but that is neither here nor there.
What I want to ponder on though, aside from the question of whether WoW has a persistent world, is whether a persistent world is actually a feature of MMOs, should be a feature, or ever really works as a feature. As I see it, there are three elements of persistence: Space, Consequences, and Advantage.
In strict, technical terms I do believe that a “persistent” world is a defining feature of MMOs. Specifically, that the world exists. The alternative to a persistent world is the lobby-based world featured in a lot of otherwise throwaway action RPGs – the world exists as little arenas, created on demand, which disappear when you exit the stage. In WoW, Goldshire exists independently of whether or not you are there to witness the shenanigans which transpire in the Inn. In fact, that shenanigans can transpire at all is because the world is persistent, e.g. because you can meet other people in virtual space.
At the same time… phasing and shard technology exists. Can we really say that Goldshire is a part of a persistent world if there exists Goldshire 1, Goldshire 2, etc? I am even conceding that Goldshire on Server 1 counts despite there being another Goldshire on Server 2. But these days, the Cross-Realm technology is almost a strict “Channel” system which (albeit seamlessly) drops you in a shared instance of the world, rather than “the” world. Does it really matter that there exists a Goldshire Prime somewhere that doesn’t turn off when you leave, considering you’ve never been there? So, arguably, we’re kinda already in a lobby-based experience, and it’s only shared insofar as other people get dropped in our lobby.
I’m not so much trying to argue against the notion that WoW’s world is persistent, but rather that the distinction is kind of moot these days. I do find that Azeroth is more overtly contiguous than many other MMOs, like FFXIV and GW2, which feature hard breaks at their borders. Cramming thousands of people into a singular space doesn’t exactly improve the gameplay experience, so I’m not sure what benefit that is supposed to provide in the first place. As long as people can naturally congregate and interact at will, I believe that’s enough to count as persistence.
Way back in 2011, I pointed out the following:
One of the hallmarks of the MMO genre is a notion of a persistent world, but that persistence is always in tension with the fact that other players exist. Players say they want a world where consequences matter, that if a town gets burned down it stays burned down. But do they really want a world in which the choice of saving the town is never given to them because some noob 4 years ago logged off in the middle of the quest to put the fire out and the town burned down?
Persistence, on a more metaphorical level, means lasting consequences and mutual exclusivity. The town cannot be both burned down to you and not burned down to me, and still be considered persistent. However, what is the desirability or utility of that persistence in the first place?
On the one hand, it can be used to good effect in games like EVE. If some Corp muscles into your star system, blows up your space station and then places their own… well, you’re out. That star system is now theirs, until the same thing happens to them at some point in the future. There are tangible consequences to game world actions, which persist beyond you switching accounts or logging off. There being finite space to fight over also underpins the gameplay loop of full-loot PvP – you care about moon goo because your ship blowing up tangibly reduces your wealth, so you need to control wealth-generating resources.
On the other hand, look at the player housing situation in FFXIV. The housing plots are finite and exclusive – if someone bought the plot you want, well, tough shit. The developers’ goals appear to be for these “neighborhoods” to feel real, and anchored into the game world. You aren’t just buying a house, but this particular house, situated in this particular location, exclusively.
And that’s dumb. Unimaginably dumb.
In FFXIV, it’s dumb because it serves no gameplay purpose. Getting a housing plot is a matter of having the money and clicking faster. After that, you simply continue paying the upkeep fee and that’s it. There are no gameplay elements to the neighborhood around you, and no homeless player is going to walk around gawking at your decorations. There is no reason to be there, specifically there, even for the homeowner themselves. Absolutely nothing would change if housing were instanced.
So, the only time persistent consequences make sense is in player-directed ways, underpinning core game mechanics. And, as the term implies, the only way for persistence to make sense is for it to be consistent. Nothing else about the FFXIV world is exclusive or provides lasting consequences. So why have it?
The final element of persistence is really an off-shoot of the previous one: persistent advantage. I’m not going to spend a lot of time on it because, conceptually, it does nothing good for any game. I mean, I guess it could be argued that in EVE it’s nice to be able to log into the game years later and fly around in a reasonable ship. But I would argue that that is not so much because one’s advantage has been maintained as because there is a low barrier to (re)entry. Sort of like, I dunno, logging into WoW and breezing your way back up to the current level cap and snagging some easy gear from a vendor.
The truest form of persistent advantage is essentially the attunement. And it’s terrible for all the same reasons it was in 2012 and earlier. It gates content arbitrarily, based not on skill or merit, but seniority. It squeezes out the middle-class gamer, who either gets into a guild that carries them through the attunement or forgoes whatever is gated behind it. In this case – and in all cases, really – the “challenge” is one of logistics. It’s difficult enough corralling 10/20/40 people into one place at the same time, much less adding pointless bureaucracy on top of it.
So, taken together, the desirability of persistence is vastly overstated, honestly. Persistence is a tool to achieve a specific effect, not some ideal or higher calling. WoW and all the rest are still MMOs by any reasonable definition of the term, in spite of allowing you to actually quest and explore locations without having it all be destroyed by a failed Deathwing raid years ago.
Often unnoticed, but never unfelt, matchmaking in multiplayer games forms the invisible core of our gaming experience. In the old days, happenstance determined the characteristics of our neighbors. Maybe one server was labeled “Recommended,” but for the most part players were left to their own devices. If you were lucky, you might discover that mythical “Good Server” which featured players with similar skill levels as yourself. If not, perhaps there was some means of at least balancing the teams occasionally, by forced shuffling or similar. Otherwise, players were left to “self-deport.”
Automated matchmaking has been around for a long time now; long enough to demonstrate both its virtues and its vices. The virtue is, of course, being intelligently matched based on a whole raft of heuristics. The vice, meanwhile… is being maliciously matched based on those same heuristics. Gevlon has long warned about overt rigging of games for monetary profit, but we truly crossed the Rubicon when Activision itself filed (in 2015) a patent for that express purpose.
And it was granted a few weeks ago. Feel free to read the whole patent yourself.
Granted, it isn’t entirely an engine of evil. The patent covers a process in which matches are made on a variety of characteristics. For example:
In another example, if a player has been performing poorly (e.g., getting killed at a rate higher than the player’s historical rate), the scoring engine may dynamically adjust one or more coefficients to match the player in a game that will improve the player’s performance. For example, the player may be matched with easier opponents, matched with better teammates, and/or placed in a game that is more tailored to the player’s preferences (e.g., players that play in games more closely aligned with their preferences tend to perform better).
This sort of balancing matchmaking is not hypothetical – Supercell, makers of Clash Royale, admitted in a Reddit AMA last month that there is indeed a “losing streak” pool in which you are placed after X number of losses. Why Supercell thinks this is a particularly good idea in 2v2, I do not know. For every person who just happened to statistically fall into a losing streak (e.g. at a 50% win rate), there are many more who are losing because they are tilted, trying out new decks they have no experience with, and so on. Grouping people this way is a sure-fire method of condemning players to ELO Hell, until and unless they happen to be paired up with truly abysmal opponents. So, in this regard, I prefer Activision’s method of “correcting” win rates.
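For illustration, the losing-streak pool Supercell described could be sketched like this. Supercell has not published the actual threshold or logic; the threshold value and function names here are invented:

```python
LOSS_STREAK_THRESHOLD = 3  # assumed value; Supercell's actual "X" is not public

def assign_pool(recent_results):
    """Return the matchmaking pool for a player.

    recent_results: list of 'W'/'L' outcomes, most recent last.
    Players whose current losing streak meets the threshold get routed
    to a separate pool instead of the general ladder.
    """
    streak = 0
    for result in reversed(recent_results):
        if result != 'L':
            break
        streak += 1
    return "losing_streak" if streak >= LOSS_STREAK_THRESHOLD else "general"
```

Note the problem the sketch makes obvious: the pool assignment only sees the streak, not *why* the player is losing – tilt, a new deck, or plain variance all look identical.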
Of course, the problem with picking winners and losers is when you are selected to be the loser. For every time you are gifted strong teammates to help you out of a losing streak, your opponents are punished by being denied the same. We all want fair fights, being matched not just on skill level but progression level too. It’s cruel to have new Hearthstone players face people with dozens of Legendary cards, even if the impartial ladder says they are equivalent players. Actively sabotaging games, though? We want fair fights, but not like this.
That is not even the most nefarious part of this engine, though. The true evil arises in plain text, in an approved US patent application:
In one implementation, the microtransaction engine may target particular players to make game-related purchases based on their interests. For example, the microtransaction engine may identify a junior player to match with a marquee player based on a player profile of the junior player. In a particular example, the junior player may wish to become an expert sniper in a game (e.g., as determined from the player profile). The microtransaction engine may match the junior player with a player that is a highly skilled sniper in the game. In this manner, the junior player may be encouraged to make game-related purchases such as a rifle or other item used by the marquee player.
“Matched” in this case largely reads as matched against. In other words, the matchmaking system will notice you choosing the sniper role, then place a more-skilled sniper opponent with a P2W rifle on the other side, for the express purpose of “encouraging” you to also purchase the rifle. It is bad enough having P2W elements in a game generally, but here we have a mechanism by which they can specifically be rubbed in your face. On purpose. To get you to buy shit.
This level of evil is not Google reading your email and popping up ads for dandruff shampoo. This is Google sending Fabio to your workplace to specifically call out the dandruff on your shirt, in front of your coworkers.
Could things get any worse with this patent? Activision is asking you to hold their beer:
In one implementation, when a player makes a game-related purchase, the microtransaction engine may encourage future purchases by matching the player (e.g., using matchmaking described herein) in a gameplay session that will utilize the game-related purchase. Doing so may enhance a level of enjoyment by the player for the game-related purchase, which may encourage future purchases. For example, if the player purchased a particular weapon, the microtransaction engine may match the player in a gameplay session in which the particular weapon is highly effective, giving the player an impression that the particular weapon was a good purchase. This may encourage the player to make future purchases to achieve similar gameplay results.
There it is, ladies and gentlemen. Activision settled the debate. Because now even in scenarios in which in-game purchases don’t directly increase one’s power (e.g. naked P2W), it’s quite likely that a matchmaking engine engineers a scenario in which you are more likely to win. For having paid. So even “purely cosmetic” purchases can end up becoming de facto P2W.
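The mechanism the patent describes – a scoring engine whose coefficients can be dynamically adjusted – is easy to sketch. This is not Activision’s code; the characteristic names and weights below are entirely invented, purely to show how one re-weighted coefficient can quietly redirect who you get matched with:

```python
def match_score(candidate, weights):
    """Score a candidate match as a weighted sum of its characteristics."""
    return sum(weights[k] * candidate.get(k, 0.0) for k in weights)

# Hypothetical coefficients; "showcases_purchase" starts out irrelevant.
weights = {"skill_parity": 1.0, "latency": 0.5, "showcases_purchase": 0.0}

# After the player buys a weapon, the engine cranks up that coefficient,
# per the patent's "encourage future purchases" logic.
weights["showcases_purchase"] = 2.0

candidates = [
    {"skill_parity": 0.9, "latency": 0.8, "showcases_purchase": 0.0},
    {"skill_parity": 0.6, "latency": 0.7, "showcases_purchase": 1.0},
]
# The worse match on skill and latency now wins, because it flatters the purchase.
best = max(candidates, key=lambda c: match_score(c, weights))
```

The point of the sketch: nothing about the player-facing experience reveals that the second candidate was chosen for monetization reasons rather than match quality.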
And much like loot box reward odds, companies will obfuscate the inner workings of their matchmaking systems such that it will be impossible to know either way. Are we to just trust their word that no matchmaking shenanigans are taking place, when they otherwise have every possible economic incentive to do so? Activision is just the first company openly patenting the process, not the first company to use these methods. Who would actually go on record to admit it?
Do you see now? Do you see it? This is precisely why you should be caring about Consumer Surplus; this is why you should be up in arms about gambling loot boxes; this is why you never act as an Apologist to a game (or any) company. There is a straight fucking line between Oblivion’s infamous horse armor and Activision (et al) literally patenting the rigging of games for cash. And that line is still going lower, and will continue to do so, until acted upon by an outside force.
We are nowhere close to bottom.
The days in which game companies made their money by selling more copies – and thus had every incentive to make the best possible game – are over. Voting with your wallet isn’t going to bring them back either; in the US, where money is speech, the voice of the guy spending $15,000 on Mass Effect 3 multiplayer loot boxes drowns out everyone else.
“You need to understand the amount of money that’s at play with microtransactions. I’m not allowed to say the number but I can tell you that when Mass Effect 3 multiplayer came out, those card packs we were selling, the amount of money we made just off those card packs was so significant that’s the reason Dragon Age has multiplayer, that’s the reason other EA products started getting multiplayer that hadn’t really had them before, because we nailed it and brought in a ton of money. It’s repeatable income versus one-time income.
“I’ve seen people literally spend $15,000 on Mass Effect multiplayer cards.”
When every economic incentive is directed towards Consumer Surplus extraction instead of, you know, improving the gameplay experience… this is what we get. Always-online multiplayer in every game, single-player game studios getting shut down, loot boxes everywhere.
Play stupid games, (pay to) win stupid prizes.
Big props to Eph for bringing my attention to a recent Gamasutra article entitled “How the Data Implosion will trigger the Great Game Dev Correction.” In it, the author put his “100% predictive accuracy” record on the line to portend the coming (Date: TBA) collapse of the F2P market.
If you want the short version of the 3100-word article, here it is: erosion of Consumer Surplus.
Really though, the author points to two primary trends that have entangled with one another in a vicious feedback loop. The principal one is that User Acquisition Cost, i.e. how much money is spent on advertising and the like, continues to increase. One of the main drivers is the simple fact that there are thousands of competing titles on the market, with more arriving all the time. While we like to imagine that more options are better, the truth is that nobody really goes past the first two pages of Google results, much less browses all 21,000 new games that came out in the last month. By “mathematical certainty,” costs go up trying to find new customers, revenue goes down as a result, and studios close their doors.
…but not before engaging in some Consumer Surplus shenanigans.
See, the second part of the feedback loop is how most F2P game companies are engaging in their data-driven quest to extract the maximum amount of Consumer Surplus from each user. Think lockboxes and timers and “special, one-time deals” that are psychologically honed to trick you into believing them to be worthwhile purchases. The very real problem though is that consumers have finite money. Shocking, I know. Since all of these F2P titles are trying to extract the same pool of dollars, all that happens is that each individual app only receives a smaller share of them.
And even worse than that is what we as gamers have come to understand intuitively: these games just have less value as a result. In every sense of the term. Studios are spending more time and development dollars on ever more novel ways of tricking you into parting with your cash than on creating content worth purchasing in the first place. But even when those two points intersect, we’re left with little to no Consumer Surplus. At a certain point, you are better off watching Netflix than spending precisely as much money as you receive in enjoyment from a game.
Now, the author is predicting a Correction at some point, with the Creative forces – as opposed to Big Data – rising up from the ashes of a devastated (F2P) game market and commanding a higher salary since we all suddenly realize we want better content again. I’m… not so sure.
For one thing, the F2P genie is out of the Cash Shop bottle. There is zero reason to believe that the surviving games of a post-Correction world will leave that extracted Consumer Surplus money on the table. Secondly, the game industry itself has proven rather resistant to the notion that content creators should be paid practically anything. Undoubtedly part of that is due to the fact that everyone wants to be an (armchair) game designer, and thus there is no market pressure to improve working conditions/pay. Hell, I wanted that job so much that I spent two years of college studying programming and Japanese so I could try to break into the industry back in the early 2000s.
Finally, there’s Minecraft. You know, that little indie game that was sold to Microsoft for $2.5 billion three years ago? While an excellent case study in why Creatives are better than Big Data, the fact remains that this “simple” game won the lottery in a way that will inspire decades of copycats and dreamers, just as WoW convinced everyone that MMOs were the next big thing. The MMO fever has mostly died down, but that’s because it costs $60 million a pop to roll the dice. Meanwhile, there are hundreds of thousands of people creating apps in their basements for free, let alone the corporate code monkeys churning out thousands of Flappy Bird derivatives. The cost of each attempt is so low, and the payout is potentially so high, that there is no reason to believe investors wouldn’t keep some pocket change flowing into basically purchasing Powerball tickets each week.
So, while I do agree there will be a Correction of some sort in the game industry, it’s ultimately not going to fix the flooding of garbage games. What I expect to see is a return to Curation: a sifting through the river of shit for those few nuggets of value. People will find the voices that they trust, and those voices will end up picking the winners and the losers. At least, up until the Curators become corrupted by studios throwing money at them, and the great cycle repeats.
Gaming has gotten pretty complicated for me these days.
The annoying part of this situation is that the complication is all by design. Clash Royale recently celebrated its 1-year anniversary, for example, which means I have been playing this mobile game off-and-on for about a year. Just the other day they teased a “one time sale” that included 100,000g and a Magical Chest for roughly $25. At the stage of development I’m at in the game, that amount of gold would effectively allow me to upgrade two units. Two. For $25.
And I was seriously considering it.
The only real thing that stopped me was that the deal wasn’t as good as the prior deals I did take advantage of. The $25 thing was only a “x4 value” whereas I dropped $25 on a different package several months ago that was a x10 value. At the time, it offered a rather significant boost of power, and allowed me to finally snag an Ice Wizard, which I have used in every deck to this day. Conversely, it is not entirely clear that upgrading two units for 100,000g would see similar returns.
In addition to Clash, I am playing three separate gacha-esque games with similar payment models. Four, technically, if you include Fire Emblem: Heroes. I haven’t spent nearly as much in those as I have in Clash, but I do boot them up every single day for the feeling of incremental progression. And all of them are offering “amazing” deals for $10, $25, even $99.
Then look what happened with WoW. There is currently a “sale” on character services, which means it “only” costs $18.75 for server transfers. Since I had over $180 in Blizzard Bux from cashing in WoW Tokens, I decided to use some of those funds to move the survivors of Auchindoun-US over to Sargeras-US. I have moved four toons thus far, and am thinking about a fifth. That’s $75 already. Not $75 from my bank account per se, but I could have nearly bought StarCraft 2: Legacy of the Void and 50 packs of Hearthstone’s latest expansion with those same funds.
All of this is why I take a somewhat adversarial stance toward game designers. If these were all B2P games, we would not be having this discussion; instead I would be lamenting that there aren’t enough hours in the day to play all these great games. Instead I’m talking about services within a game, or progression boosters, which cost more than actual, entire games. I just bought Mass Effect: Andromeda from GMG for $41 and some change. That’s roughly two character transfers in WoW, or a few unit upgrades in Clash Royale.
Now, there’s the argument that there aren’t that many games you could even play for a whole year and not tire of. Doesn’t Clash Royale deserve my money for the amount of amusement it has generated? Isn’t plopping down some cash on these games technically cheaper than paying full price for new releases every few weeks/months anyway?
I think those are the wrong questions, and intentionally engineered to take advantage of cognitive dissonance. Because we aren’t asking those questions up front – we are asking them after having “invested” dozens (or hundreds) of hours into the game. If you told me at the beginning that it took 50,000g to upgrade a unit in Clash Royale, I would have balked. But having stewed in a pot of nearly boiling water for a year, it all seems reasonable. “Of course it makes sense that I used to get upgrades every three days, and now only get one a month.” Not really, no.
(Especially not when they end up nerfing units a month later. No refunds here.)
The value of money is mostly relative. Going from making $20k to $30k is life-changing, whereas going from $100k to $110k is likely not. However, money is also fungible. Dropping $10 or $25 here and there might make sense in the context of whatever game you are currently playing long-term, but those same dollars could buy anything else.
It is important, IMO, to consider the full picture of what your gaming dollars may or may not be purchasing. A server transfer in an MMO that will save your waning interest may seem a bargain. Hell, it might actually be a bargain in the final analysis. Just be cognizant that the decision should not be “do I spend money or not,” but rather “do I give up X or not.” I decided that two unit upgrades in Clash Royale aren’t worth half a Mass Effect. Framing it this way helps me resist all the fallacies (Sunk Cost, Gambler’s, etc.) working to make the decision seem reasonable (when it is not), and gives me an answer I can live with.
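The “do I give up X or not” framing above is just arithmetic: price every microtransaction in units of a benchmark purchase. A trivial sketch, using the prices quoted in this post (the function name and benchmark choice are mine):

```python
# Benchmark: Mass Effect: Andromeda at the $41 GMG price mentioned above.
BENCHMARK_PRICE = 41.00

def opportunity_cost(purchase_price, benchmark_price=BENCHMARK_PRICE):
    """Express a purchase as a fraction of the benchmark game's price."""
    return purchase_price / benchmark_price

# The $25 "one time sale" (two unit upgrades) costs over 60% of a full game:
frac = opportunity_cost(25.00)
```

Seeing “0.61 of a Mass Effect” instead of “$25” is exactly the reframing that makes the Sunk Cost pull easier to resist.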
Maybe your gaming budget is such that you don’t mind dropping hundreds of dollars a month into whatever. In which case, feel free to Paypal some my way, chief. Otherwise, we all have to look out for each other a bit, because the game designers and the in-house psychoanalysts on their payroll certainly are not.
There have been two games I played recently that start with a cold open, i.e. no tutorial – the game just sort of throws you in. The first was The Long Dark, and the second is a space-sim called Hellion; both are in Early Access and both are survival games. So, in a sense, it’s difficult to determine whether either one intentionally set out to have a cold open, or whether this simply reflects their current, unfinished states.
There is a lot to be said regarding the power of cold opens. In an age of 24/7 information coming from every angle, it is refreshing to be thrust into an unknown environment without any sort of hand-holding. It absolutely appeals to Explorer-types, and also those looking for more difficulty in their games. Plus, many times it makes thematic sense, say, if you just woke from cryo-sleep in an otherwise abandoned life pod.
Personally, I find cold opens to be exceptionally difficult to pull off well.
The fundamental issue I have is the dissonance between what the player expects and what the designers intend. What ends up happening is that players must essentially “metagame” how the designers actually intended the game to be played.
For example, in Hellion you awake from cryo-sleep inside a life pod without functioning Life Support. While there are a few tablets on the ground which give you a general idea of steps to take, that is basically all the guidance you are given. I searched the area and did not find enough items onboard to repair the Life Support. I found a jetpack without fuel, and supposedly a charging station for said jetpack, but could not determine a way to refuel.
So… what now? Did I miss an item in the search of the ship? Am I supposed to try and space walk without a jetpack? Is it a bug that there weren’t enough items to repair the Life Support? I have mentioned before that I am fine with tough puzzles, as long as I understand where the pieces are. What I absolutely despise is not knowing whether my failures are due to not performing correctly, or because I didn’t trip some programming flag from 10 minutes ago, or some other nonsense.
I had a similar issue in The Long Dark, which I played for about an hour before turning it off. It takes 30 in-game minutes to break a stick into pieces by hand? Okay, fine. But having found a shelter and tools, I saw no particular way to locate food, or to reconcile my exhaustion meter with my temperature meter with the time of day – e.g. how was I to sleep and keep warm in the middle of the day and still survive the night? I understand that perhaps the intention is for the player to be constantly on edge in the quest for survival, but again, I’m not even sure how food works in this game yet. I have not seen any flora or fauna beyond sticks and snow.
Flailing around in the darkness is not my idea of quality game time.
I’m not saying game designers should go full Ocarina of Time and have Navi pester you for hours. Minecraft has (had?) a cold open that was relatively straightforward once you got over the intellectual hump of punching trees. Don’t Starve is a much better example of how to do a cold open – there isn’t much of an explanation of anything, but I still felt a sense of agency in being able to interact with things.
And maybe that’s just it: I might not be doing the right things, but being able to do something is important.
I dunno. I think the best compromise would be to have cold opens with a fairly robust PDA/AI Assistant/Crafting Menu. Those that want to wander around blindly can, but those who want to know what they can do… well, can.
With my dead 970 graphics card just now reaching the RMA warehouse, I am having to seriously sort through my gaming library for titles that will boot up on a 560ti. WoW runs fine, for example, but 7 Days to Die maybe pushes 20 fps if there isn’t anything going on.
Enter XCOM 2, which I purchased for 12 whole dollars in a recent Humble Monthly Bundle.
I started my first game on “normal” difficulty with Ironman enabled, as I did with the prior title almost four years ago. A few hours later, I abandoned that game and started anew without Ironman.
On the one hand, the decision was easy. XCOM 2 is filled with such crazy amounts of bullshit that I didn’t even feel bad for opening the door to save scumming. The third enemy type you face in the game, a Sectoid, has the ability to Mind Control your units through walls. And create zombie troops from dead bodies. Which is great when your squad consists of only 4 people and you lose one of them to Mind Control off the bat, and that one ends up killing another (who then turns into a zombie). Killing the Sectoid breaks the Mind Control and (re)kills the zombie troops, but that gets a little difficult when one of your guys is Mind Controlled.
Or how about that mission with the Faceless ones? Rescue six civilians… oh wait, one of them morphs into a putty creature with claws and you just ended your turn in melee range. Hey, six damage to your 6 HP dude, that’s convenient. Then you have the snake creatures that can move, then grab a sniper off the top of a train 30 feet away with their tongue, then instantly coil around them, permastunning them and dealing 2 damage per turn. I mean, I suppose I should be grateful there isn’t a chance I could shoot my own coiled guy when I shoot the snake, but I was absolutely expecting that to be a thing. Because fuck you.
None of these things are insurmountable. They just happen to be inane, “gotcha!” bullshit that artificially increases the difficulty of Ironman games. And not even permanently, as once you (the player) know about the existence of these abilities, you can play around them in the future. Which is the point, of course, but I see no reason to structure a game this way while also punishing you long-term for these same blind pitfalls.
On the other hand, after playing a few more hours in non-Ironman mode, I started to wonder about the philosophical ramifications of Save Anywhere.
Fundamentally, a Save Anywhere feature makes eventual success a foregone conclusion. Even in extremely skill-intensive or luck-intensive sections of gameplay, any incremental progress is permanent progress. Some tactical games have RNG protection, e.g. all dice rolls are determined in advance, to dissuade save scumming a 15% chance attack into a critical hit, but that doesn’t prevent you from simply coming in from a different angle or using a different ability.
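The “dice rolls determined in advance” idea can be sketched pretty simply (a hypothetical sketch, not how XCOM 2 or any specific game actually implements it): seed the RNG at mission start and save the seed plus a roll counter, so reloading replays the exact same sequence of outcomes.

```python
import random

class MissionRNG:
    """Pre-determined rolls: reloading a save replays the same sequence,
    so re-trying the exact same 15% shot produces the same result."""

    def __init__(self, seed, rolls_used=0):
        self.seed = seed
        self.rolls_used = rolls_used
        self._rng = random.Random(seed)
        for _ in range(rolls_used):     # fast-forward to the saved position
            self._rng.random()

    def attack_hits(self, hit_chance):
        self.rolls_used += 1
        return self._rng.random() < hit_chance

    def save_state(self):
        return {"seed": self.seed, "rolls_used": self.rolls_used}

# Reloading from a save restores the same future rolls:
rng = MissionRNG(seed=42)
rng.attack_hits(0.15)
state = rng.save_state()            # "save scum" point, before the next roll
outcome = rng.attack_hits(0.15)
reloaded = MissionRNG(**state)      # reload and retry the same attack
assert reloaded.attack_hits(0.15) == outcome
```

Which, as noted, only blocks the naive retry: the reloading player can still burn the “doomed” roll on a throwaway action and take the shot from a different angle.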
The other problems with Save Anywhere are the player behavior ramifications. If you can save the game at any time, there is an advantage to doing so, which means there is an incentive to do so constantly. Tapping F5 is not onerous, but I consider the mental tax of “needing” to remember to do so… well, taxing. It’s not that saving after every attack ruins the game (it does), it’s that I now have to devote constant attention to an out-of-game mechanic. Is there anything worse than thinking you hit Save before turning the corner, but realizing later on that you didn’t, and now you’re stuck with a poor outcome “unnecessarily?” It feels completely different than if the designers make that decision for you.
I feel like there is a middle way, especially in games like XCOM. Specifically: saving in between missions. This lets you avert complete disasters like the mission that eventually scuttled my Ironman attempt – a total squad wipe one square from the extraction point – while still disincentivizing save scumming inside each mission. At least then you can weigh the option of losing an elite soldier to some bullshit versus 30-40 minutes of your time.
In between my many WoW sessions, I have been working on Deus Ex: Mankind Divided. The short version is that it is pretty much Human Revolution with a new plot and new Augments; if you enjoyed the previous game, then you will enjoy this one as well.
But one of the things I have noticed over the course of 30 hours is that… well, it’s easy.
I am playing on the highest difficulty – Give Me Deus Ex – and just breezing my way through, even without Augments. Indeed, I spent half the game with 11+ Augment Points banked just waiting for a situation in which I needed to spontaneously develop wall-punching or remote hacking skills. However, this may be more a systemic issue with stealth games generally.
When we talk about stealth games, what we’re really talking about is extremely simplified, often binary, gameplay. If you are outside the cone of an enemy’s vision, you are hidden. The alarm is either raised or it is not. The enemy is fully active or they are incapacitated. This binary nature even extends to after the enemy is alerted, as almost every stealth game features the conceit that guards eventually completely forget that they watched their compatriots die, and go about their usual patrols.
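That binary nature is easy to see in how a typical detection check gets written. This is a generic sketch of the classic vision-cone test (not Mankind Divided’s actual code): the player is either inside the cone or not, with no gradient of suspicion in between.

```python
import math

def in_vision_cone(guard_pos, guard_facing_deg, player_pos,
                   view_distance=10.0, half_angle_deg=45.0):
    """Classic binary stealth check: True means detected, False means hidden.
    No partial visibility, no gradual suspicion -- just inside or outside."""
    dx = player_pos[0] - guard_pos[0]
    dy = player_pos[1] - guard_pos[1]
    if math.hypot(dx, dy) > view_distance:
        return False
    angle_to_player = math.degrees(math.atan2(dy, dx))
    # Smallest angular difference between guard facing and player direction
    diff = abs((angle_to_player - guard_facing_deg + 180) % 360 - 180)
    return diff <= half_angle_deg

# One step past the cone's edge flips the result completely:
print(in_vision_cone((0, 0), 0, (5, 0)))    # directly ahead -> True
print(in_vision_cone((0, 0), 0, (-5, 0)))   # directly behind -> False
```

A real engine layers line-of-sight and lighting checks on top, but the final answer the AI consumes is still a boolean.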
This is not a criticism of stealth gameplay, per se. There is a good reason why “more realistic” behavior is not often implemented: it is less fun. Ever play a stealth game where the enemies patrolled randomly? It’s an exercise in frustration. Without a pattern to recognize and exploit, incapacitating/avoiding guards either requires RNG (which doesn’t feel good) or just attacking them straight out. And if the guards never reset after the alarm is raised, why wouldn’t the average player simply reload their last save?
So if you are going to have “stealth mechanics” in your game, you have to make some concessions. This means it is incredibly difficult to introduce varying levels of difficulty with stealth mechanics without running afoul of annoying gameplay.
Still, I do have some suggestions. These mostly pertain to Mankind Divided – and especially at the higher difficulties – but I feel like they can be applied more generally as well.
1) Incapacitated Guards wake up.
Some stealth games have this already, but its lack seems especially egregious in Mankind Divided since you are in the same city for most of the game. Basically, once you knock out the guards, they are unconscious permanently. Since you already get more XP for using nonlethal methods, there is really no reason not to simply knock everyone out instead of using lethal methods. Yes, unconscious guards can be awoken if discovered by other guards. But considering how easily guards can be taken out – and no one cares about patrols reporting in – this is an irrelevant concern.
So… have the guards wake up, randomly. Not immediately, mind you, just over a period of a few minutes or so. This will still give you enough time to stroll through the area, but maybe not enough for a thorough search of every file cabinet. If you want safety in your looting, you will need to kill the guards instead.
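Mechanically, the randomized wake-up could be as simple as assigning each knocked-out guard a timer drawn from a range. The numbers here are hypothetical; the point is that looting now races a clock you can’t see.

```python
import random

def knock_out(guard, min_sec=120, max_sec=300):
    """Nonlethal takedown: the guard wakes at a random time a few minutes out,
    so the player can sweep the area but can't safely loot forever."""
    guard["state"] = "unconscious"
    guard["wake_at"] = guard["clock"] + random.uniform(min_sec, max_sec)

def update_guard(guard, dt):
    guard["clock"] += dt
    if guard["state"] == "unconscious" and guard["clock"] >= guard["wake_at"]:
        guard["state"] = "alerted"    # wakes up suspicious, not calm

guard = {"state": "patrolling", "clock": 0.0}
knock_out(guard)
update_guard(guard, dt=60.0)      # one minute later: still out
print(guard["state"])             # unconscious
update_guard(guard, dt=300.0)     # past the maximum window: awake and angry
print(guard["state"])             # alerted
```

Waking into an alerted state, rather than resuming the old patrol, is what makes the choice meaningful: a woken guard is a new problem, not a reset one.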
2) Nonlethal takedowns are more difficult.
In the last two Deus Ex games, you get the option of lethal and nonlethal takedowns any time you are within melee distance. In truth, there is just one rational option: nonlethal. Not only do you get more XP with nonlethal, but the actual nonlethal action is quiet, whereas the lethal one makes a lot of noise. Considering you can just shoot sleeping guards in the face with a silenced pistol afterwards, there is zero reason to go lethal initially.
How about we just reverse that? Lethal takedowns are quiet, but nonlethal makes some noise as the target struggles. This can extend to tranquilizer dart weapons as well, considering nobody really seems to care about a huge syringe poking out of their leg, even after it was fired out of a sniper rifle 50 yards away. Let them make some noise for the few seconds of consciousness they have remaining. Or, alternatively, make the tranquilizer take a random amount of time to take full effect, so that it’s possible they fall unconscious after walking to a less discreet location.
Because, honestly, the situation with the Tranq rifle in Mankind Divided is just silly broken. Headshots will instantly knock out guards, and body shots take a few seconds more. But, really? The delay is actually a boon. I can tag three guards in a row, all in the leg, and by the time the first dude hits the floor, the others start passing out before they have a chance to raise an alarm. That shit would be impossible with a regular sniper rifle, even a silenced one. Speaking of which…
3) Silencers not being silent.
This is one of those universal videogame/movie sacred cows, but silencers on guns don’t actually make them silent. As it turns out, propelling lead out the end of a metal tube by way of igniting gunpowder is still kinda loud. So let’s have those guns still be loud with silencers attached. This will shorten effective engagement range for stealth runs, thereby increasing the chance that a guard could discover you, i.e. making the gameplay a bit more difficult.
4) Guards check in with each other.
There is a level in Mankind Divided that sees you skulking about a research facility with a PA system. While I was taking out guards left and right, I got a little nervous by what I heard. “West wing reporting all clear.” “Brzezinski, please report to docking bay 12.” Did I already take out Brzezinski? Would he be missed?
Then I remembered I was playing a standard stealth game, and none of that stuff matters.
To an extent, having guards checking in (or being sought by other guards) is one of those realistic features that end up making the gameplay feel worse. After all, if you are going to be so actively punished for taking guards out, you may as well remove the ability to take guards out at all. But what if the mechanics were more nuanced than that? What if you could get some kind of guard manifest that lists which ones need to check in, or figure out when they already checked in such that you are free to take them out afterwards? What if their absence is noted and guards are sent to investigate, but they eventually disperse if they don’t find any foul play?
Basically, instead of having each guard be a puzzle individually, perhaps force the player to consider a more holistic approach to rendering a base unconscious.
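The manifest idea could work something like this (a hypothetical sketch of the design, nothing from the actual game): each guard has a scheduled check-in, and taking one out is only “safe” in the window right after they report, when nobody expects to hear from them for a while.

```python
def safe_to_remove(guard_name, schedule, current_time, window=180):
    """A guard is 'safe' to take out if their next scheduled check-in is
    comfortably far away -- nobody will miss them until then."""
    last_checkin, interval = schedule[guard_name]
    next_checkin = last_checkin + interval
    return next_checkin - current_time > window

# Hypothetical manifest: (last check-in time, check-in interval), in seconds
schedule = {
    "Brzezinski": (0, 600),   # just reported; not due again for 10 minutes
    "Novak":      (0, 120),   # reports every 2 minutes -- risky target
}
print(safe_to_remove("Brzezinski", schedule, current_time=60))   # True
print(safe_to_remove("Novak", schedule, current_time=60))        # False
```

Hand the player that manifest (via a hacked terminal, say) and the level stops being a series of isolated cone puzzles and starts being a scheduling problem.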
5) Blood stains.
Just so it doesn’t seem like all these changes make nonlethal useless compared to lethal methods of infiltration, let’s have guards react to blood stains. And, you know, have blood stains result from wetwork, assuming specific methods are not employed. Moving bodies might still be useful, especially for distraction purposes, but it shouldn’t be a Get Out of Jail Free card either.
I will be honest with you here: I’m not even sure any of the above will result in a better gameplay experience. All I do know is that my current experience with stealth games (and Deus Ex in particular) has made all of them not only play the same, but play easily. If I choose the highest difficulty in a stealth game, that difficulty has to be a function of changing stealth mechanics and not just making it easier for me to die once a firefight starts. Because a firefight will never start when I’ve knocked out every guard everywhere with impunity.
While Syncaine laments that the MMO genre hasn’t gone anywhere in 12 years, I was left pondering a different question: can the MMO genre go anywhere? Can there be another breakout success?
I would suggest the question is less straightforward than it might seem, for a few reasons.
The first reason is due to the nature of the genre itself. Even if you are a super-fan of Half-Life 2 and believe it to be the best game ever invented… you still likely bought and paid for other FPS titles in the past 12 years. The same is not necessarily true of MMOs. I’d wager that most people who stick with the MMO genre long-term generally find one game and settle in. And why wouldn’t you? Someone would move on from Half-Life 2 because they would eventually run out of content to explore. That is much less likely in MMOs, because they are updated regularly, expansions are released, other players generate content, and so on.
The above generates the curious (and fairly unique) phenomenon that a lot of MMO players – possibly even a majority – are still actually playing the most influential MMOs (Ultima Online 1997; EverQuest 1999; EVE Online 2003; Second Life 2003; World of Warcraft 2004). If the market for FPS titles is 40 million people, each new FPS has 40 million potential customers. Meanwhile, the market for MMOs is X – Y, where Y is the number of people currently satisfied with their present virtual home.
The second issue is one of definitions. While it might not seem so at first, “MMO” as traditionally defined is rather restrictive. For example, most people would suggest that Crowfall is an MMO, despite its “persistent” worlds having an expiration date. That sounds more like a long-lasting lobby to me. But why is Crowfall an MMO and Destiny not? Or PlanetSide 2, which is arguably more persistent than either? A game like Fallout 3 can be said to move both the FPS and RPG genres forward in specific ways, but MMO-ish games often fall outside the standard MMO purview, thus limiting potential genre-changing titles. In other words, experimental MMOs can innovate themselves right out of the genre.
Third, a given game can only really be considered influential if it, or its derivatives, are a success. Consider the glaring omission from the Top 50 list: Star Wars Galaxies. I would have thought that with the amount of name-drops SWG receives in just about every MMO dev design sheet, it would be a shoo-in contender for sure. But if you think about it, not only has SWG shut down, but I don’t even know if any other game claiming its mantle has survived or even been released yet. Anyone know of any? Regardless, this means a given game must both shake up the genre and be successful in a general sense to count – just the first is not enough. Which leads me to the next point.
Fourth, not to be alarmist or anything, but… I’m pretty sure the MMO genre as we know it has peaked. As recently as 3-4 years ago, over half the MMO market was just WoW, and WoW has lost half of said playerbase since then, and is still top dog by a factor of 3-4, minimum. Where did all the bodies go? Not to other MMOs, for sure.
This leads me to the question in the title: CAN there be another MMO success? FF14 has come the closest, but is there anyone out there that seriously believes we will see a second WoW-like coming ever again? I personally doubt it. There was always an element of “right time, right place” to WoW’s meteoric rise, and not only has that time passed, but there is pressure coming in from other genres co-opting the traditional MMO strengths, in the same way we see “RPG elements” everywhere today.
So, basically, I do not see that list of late 90s/early 00s-only influential titles as a deficiency of development testicular fortitude, but rather a simple systemic and semantic issue. Other genres can take greater risks because they need only make one sale, not twelve per year in a F2P environment, while also maintaining a healthy population. Even if smaller MMOs were released and did innovate, chances are they remain too small to be “Massive” or just shut down after a few years and thus no longer be influential.
It is lose-lose-lose for everyone, but there it is.