AI Won’t Save Us from Ourselves

I came across a survey/experiment article the other day entitled The Hidden Penalty of Using AI at Work. The “penalty” in this case is engineers judging a peer’s code more harshly when told the peer used AI to help write it. The overall effect: one’s competence is judged 9% worse than when the reviewer is told no AI was used. At least the penalty was applied equally…

The competence penalty was more than twice as severe for female engineers, who faced a 13% reduction compared to 6% for male engineers. When reviewers thought a woman had used AI to write code, they questioned her fundamental abilities far more than when reviewing the same AI-assisted code from a man.

Most revealing was who imposed these penalties. Engineers who hadn’t adopted AI themselves were the harshest critics. Male non-adopters were particularly severe when evaluating female engineers who used AI, penalizing them 26% more harshly than they penalized male engineers for identical AI usage.

Welp, that’s pretty bad. Indeed one of the conclusions is:

The competence penalty also exacerbates existing workplace inequalities. It’s reasonable and perhaps tempting to assume that AI tools should level the playing field by augmenting everyone’s capabilities. Our results suggest that this is not guaranteed and in fact the opposite could be true. In our context, which is dominated by young males, making AI equally available increased bias against female engineers.

This is the sort of thing I will never understand about AI Optimists: why would you presuppose anything other than an entrenchment of the existing capitalist dystopian hellscape and cultural morass?

I don’t know if you have taken a moment to look around lately, but we are clearly in the Bad Place. If I had once held a hope that AI tools would accelerate breakthroughs in fusion technology and thus perhaps help us out of the climate apocalypse we are sleepwalking into, articles like the above serve to ground me in the muck where we are. Assuming AI doesn’t outright end humanity, it certainly isn’t going to save us from ourselves. Do you imagine the same administration that is trying to cancel solar/wind energy and destroy NASA satellites monitoring CO2 is going to shepherd in a new age of equitable prosperity?

Or is it infinitely more likely the gains will be privatized and consolidated into the techno-feudal city-states these tech bros are already dreaming up? Sorta like Ready Player One, minus the escapist VR.

I could be wrong. I hope I’m wrong. We are absolutely in a transition period with AI, and as the survey pointed out, the non-adopters were harsher than those familiar with AI. But… the undercurrent remains. I do not see what AI is going to do to solve income inequality, racism, sexism, or the literal death cult currently wielding all levers of government. I’m finding it a bit more likely that AI will be used to, you know, oppress people in horrible new ways. Or just the old ways, much more efficiently.

Wherever technology goes, there we are.

Posted on August 6, 2025, in Commentary. 7 Comments.

  1. Side-stepping the will it/won’t it debate, the thing that immediately struck me about those quoted statistics was the way they apply mathematical precision to what surely has to be a subjective judgment. How, exactly, does one measure peer criticism by percentage points? Is there some accepted, industry-standard points system they’re using? I read the whole linked article and it’s never explained. They just say they gave everyone the exact same code and told them it either had or hadn’t been written with AI. How do you rate that by percentage? Doesn’t it either run or not run?

    The article is clearly in favor of AI adoption, too, as can be seen in the tone throughout but especially in the conclusion, which proposes “three targeted interventions” to mitigate or avoid the current discriminatory bias against employees who do use AI, something the authors obviously want to encourage. If it’s just a problem of the old guard not liking the new ways, it’s hardly an unfamiliar problem.

    Like

    • There is, in fact, a thing programmers do called Code Review. As for the percentages, here is a link to the paper’s abstract, which then has a link to the PDF of the study. The specific mechanism was this:

      After reviewing the identical code (see Materials and Methods for detailed instructions), participants rated: (1) the quality of the code (“Please rate the quality of the code”) and (2) the competence of the code author (“Please rate his/her coding ability”), both on an 11-point scale (0–10), with higher scores indicating more favorable evaluations. These two items allowed us to explore whether AI adoption impacts only competence evaluation or also affects perceived work quality. In AI-assisted conditions, participants also estimated the relative contribution of the engineer versus the AI to the final output (0%–100%, in 10% increments), allowing us to examine whether competence penalties varied based on perceived human contribution in human-AI collaboration.
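      For what it’s worth, a percentage penalty falls out of that kind of scale quite naturally: compare the mean rating given when reviewers were told no AI was used against the mean given when they were told AI was used. A minimal sketch of that idea (the ratings and the relative-drop formula here are my assumptions for illustration, not taken from the paper):

      ```python
      # Hypothetical illustration of how a "9% competence penalty" could be
      # computed from 0-10 competence ratings of identical code.
      # The ratings below are made up; the formula (relative drop in the
      # mean rating) is an assumption, not the paper's stated method.
      no_ai_told = [8, 7, 9, 8, 7, 8]  # reviewers told no AI was used
      ai_told    = [7, 7, 8, 7, 6, 8]  # reviewers told AI was used

      def mean(xs):
          return sum(xs) / len(xs)

      # Relative drop in the mean competence rating.
      penalty = (mean(no_ai_told) - mean(ai_told)) / mean(no_ai_told)
      print(f"{penalty:.0%}")  # -> 9%
      ```

      So nothing about the code’s “running or not” is being measured; the percentage is just the gap between two subjective averages.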

      While the old guard had the most pronounced bias, that will presumably lessen over time as AI adoption increases. What does not appear to decrease, though, is sexism in general.

      Like

      • Ah, thanks for the links. It’s interesting that what’s under consideration doesn’t appear to be effectiveness so much as style. It would also seem to put the whole process of Code Review in doubt, wouldn’t it, if literally identical code receives such varied reviews? The study sets out to explain low take-up by highlighting apprehension about these specific biases, and it does so, but presumably every review using the Code Review method must be constantly at risk of inaccuracy if, again, literally the same code doesn’t receive the same assessment. I’d hate to have my promotional prospects and pay increases rely on a colleague’s subjective impression of my personal style.

        Like

  2. I watched Babylon 5 thirty years ago. Nothing’s saving us from ourselves. Wherever we are, humans we’ll be.

    Nothing to do with whatever tech we have available; everything to do with who we are as a species. Gonna be good people, gonna be evil people, gonna be many people in between doing selfish and selfless things whenever it suits them.

    With or without AI, we’re going someplace where the tragedy of the commons is gonna hit everyone. We’re just gonna have to adapt to wherever we are, with whatever’s there, or we’ll all just die out.

    In the meantime, people keep on being people.

    Liked by 1 person

    • Indeed. However it is important to remember exactly that when you get some bright-eyed (or closed-eyed) AI accelerationist who believes the end result is going to be some kind of UBI utopia.

      Like

    • The part that concerns me isn’t the “humans gonna human” part so much as the suspected evidence of physical, neurological changes in brain development caused by some of the new technologies. If those fears turn out to be true, the new humans won’t be the same humans as the old humans, so we’re in a totally new ballpark…

      Like

    • Very much agreed with this. Obligatory SMAC, I guess:

      “Evil lurks in the datalinks as it lurked in the streets of yesteryear. But it was never the streets that were evil.”

      – Sister Miriam Godwinson, “The Blessed Struggle”

      Correct to say that AI won’t lead us to a workless post-scarcity utopia (let alone to some paradisiacal nirvana state in which senior devs are no longer dicks), but by the same token (sorry) a lot of the complaints I see leveled against it are just complaints about capitalism and human society, not specific to AI.

      It destroys jobs? So do other innovations, other automation processes, outsourcing, redundancy trims, and every mechanism by which corporations try to cut labour costs. (In fact, the corporation is a better fit for what people fear about superintelligent AI; it has personhood in many legal systems, it’s bigger than you, smarter than you, it decides cultural content, you exist on its sufferance, and it’s ruthless in pursuit of its Goodharted metrics.)

      It stultifies the commons? Yeah, so do scam charter schools, reality TV, assembly-line romantasy, by-the-numbers pop music, and endless Whedon-quipping cape-and-tights resequels. (And pandering to gamers’ ideas of frictionless fun, of course!)

      In a world where the alpha chase and circulation of capital weren’t quite so frenetic, in which we collectively owned the model weights, the algos, the robots, and the genetic sequences, AI would be just another tool. Even crypto would find genuine niche uses like secure and private identity verification. But no one’s quite ready to spit on their hands and hoist the red flag.

      Liked by 1 person