Blog Archives

“Normal People Don’t Care”

There is a minor, ongoing media kerfuffle with the internet-darling Larian Studios (makers of Baldur’s Gate 3, Original Sin 2, etc). It started with this Bloomberg article, wherein Jason Schreier writes:

Under Vincke, Larian has been pushing hard on generative AI, although the CEO says the technology hasn’t led to big gains in efficiency. He says there won’t be any AI-generated content in Divinity — “everything is human actors; we’re writing everything ourselves” — but the creators often use AI tools to explore ideas, flesh out PowerPoint presentations, develop concept art and write placeholder text.

The use of generative AI has led to some pushback at Larian, “but I think at this point everyone at the company is more or less OK with the way we’re using it,” Vincke said.

There are charitable and not-so-charitable takes on those words, and suffice it to say, many people chose the latter. Vincke responded on Twitter with a “Holy fuck guys [chill out]” post, clarifying and emphasizing that they only use AI for reference material and other boring things, not for actual content. Jason Schreier also chimed in with the original transcript of the interview, in response to suggestions that what he wrote was itself misleading.

As a side note, this portion of the transcript was extra interesting to me:

JS: It doesn’t seem like it’s causing more efficiency, so why use it?

SV: This is a tech driven industry, so you try stuff. You can’t afford not to try things because if somebody finds the golden egg and you’re not using it, you’re dead in this industry.

I suppose I should take Vincke’s word on the matter, considering he released a critically-acclaimed game that sold 20 million copies, and I have… not. But, dead? Larian Studios has over 500 employees at this point, so things are likely different at those larger scales. I’m just saying the folks who made, you know, Silksong or Megabonk are probably going to be fine without pushing AI into their processes.

Anyway, all of that is actually a preamble to what sent me to this keyboard in the first place. In the Reddit comments of the second Schreier piece, this exchange took place:

TheBlightDoc: How could he NOT realize how controversial the genAI comments would be? Has he been living under a rock? Or does he himself believe AI is not a big deal? :laughing:

SexyJazzCat: The strong anti AI sentiment is a very chronically online thing. Normal people don’t actually care.

do not engage… do not engage… do not engage

Guys, it’s hard out here in 2025. And I’m kinda all done. Tapped out. Because SexyJazzCat is correct.

Normal people don’t actually care. We know this because “normal” people voted the current administration back into office. Normal people don’t understand that measles can reset your immune system, erasing all your hard-fought natural immunities. Normal people don’t understand that every AI data center that springs up in your area is subsidized by increases to your own electric bill. Normal people don’t understand that tariffs are taxes that they end up paying for. Normal people don’t understand that even if they didn’t use ACA subsidies, their health insurance is going to wildly increase anyway because hospitals won’t be reimbursed for emergency care from newly uninsured people. Nevermind the, you know, general human misery this creates.

Normal people don’t actually care about AI. But they should. Or perhaps should have, past tense, because we’re far past the end of a very slippery slope and fully airborne. Normal people are just going to be confused as to why computers, phones, and/or videogame consoles are wildly more expensive in 2026 (e.g. the RAM crisis). Or, if AI successfully demonstrates real efficiency gains, surprised when they are out of a job. Or, if AI crashes and burns, confused as to why they still lost their job and their 401k cratered anyway (e.g. 40% of S&P 500 value is in AI companies).

The only thing I still wish for these days is this: that people have the kind of day they voted for.

Authentic Wirehead

Bhagpuss has a post out called “It’s Real if I Say It’s Real,” with a strong argument that while people say they desire authenticity in the face of (e.g.) AI-generated music, A) people often can’t tell the difference, and B) if you enjoyed it, what does it even matter?

It was the clearest, most positive advocacy of the wirehead future I’ve ever seen in the wild.

Now, speaking of clarity, Bhagpuss didn’t advocate for wireheading in the post. Not directly. I have no personal reason to believe Bhagpuss would agree with my characterization of his post in the first place. However. I do believe it is the natural result and consequence of accepting the two premises.

Premise one is that we have passed (and are perhaps far beyond) the point at which the average person can easily differentiate between AI-generated content and the “real thing.” Indeed, is there really anyone anywhere ready to argue the opposite? Linked in Bhagpuss’s post was this survey showing 97% of respondents being unable to tell the difference between human-made and AI-generated music across three samples. GPT-4.5 already passed the classic three-way Turing Test, being selected as the human 73% of the time. Imagine being the other person the research subject was texting with, and getting so resoundingly rejected as human.

Then again, perhaps the results should not be all that surprising. We are very susceptible to suggestion, subterfuge, misdirection, and marketing. Bhagpuss brought up the old-school Pepsi vs Coke challenge, but you can also look at wine tasting studies where simply being told one wine was more expensive led to it being rated more highly. Hell, the simple existence of the placebo effect at all should throw cold (triple-filtered, premium Icelandic) water on the notion that we perceive some objective reality, rather than, you know, just doing the best we can while piloting wet bags of sentient meat.

So, premise #1 is that it has become increasingly difficult to tell when something was created by AI.

Premise #2 is that we no longer care that it was artificially generated. For a lot of people, we are already well past this mile marker. Indirectly, when we no longer bother trying to verify the veracity of the source. Or directly, when we know it is AI-generated and enjoy it anyway.

I am actually kind of sympathetic on this point, philosophically. I have always been a big believer that an argument stands on its own merits. To discredit an idea based on the character of the person who made it is the definition of an ad hominem fallacy. In which case, wouldn’t casting aspersions on AI be… ad machina? If a song, or story, or argument is good, do its origins really matter? Maybe, maybe not.

Way back in my college days, I studied abroad in Japan for a semester. One thing I took was a knock-off Zune filled with LimeWired songs, and it was my proverbial sandbar while feeling adrift and alone. Some memories are so intensely entangled with certain songs, that I cannot think of one without the other. One of my favorites back then was… Last Train Home. By lostprophets. Sung by Ian Watkins.

So… yeah. It’s a little difficult for me to square the circle that is separating the art from the artist.

But suppose you really don’t care. Perhaps you are immune to “cancel culture” arguments, unmoved by allegations of a politician’s hypocrisy, and would derive indistinguishable pleasure from seeing the Mona Lisa in person and from a print thereof hanging on your wall. “It’s all the same in the wash.”

To which I would ask: what distance remains to simply activating your nucleus accumbens directly?

What is AI music if not computer-generated noises attempting to substitute for the physical wire in your brain? Same for AI video, AI games, AI companions. If the context and circumstances of the art have no meaning, bear no weight, then… the last middle-man to cut out is you. Wirehead: engage.

I acknowledge that in many respects, it is a reductive argument. “Regular music is human-generated noises attempting to substitute for the wire.” We do not exist in a Platonic universe, unmoored from biological processes. Even my own notion that human-derived art should impart greater meaning into a work is itself mental scaffolding erected to enhance the pleasure derived from experiencing it.

That said, this entire thought experiment is getting less theoretical by the day. One of the last saving graces against a wirehead future is the minor, you know, brain surgery component. But what if that was not strictly necessary? What if there was a machine capable of gauging our reactions to given stimuli, allowing it to test different combinations of outputs in the form of words, sounds, and flashing lights to remotely trigger one’s nucleus accumbens? They would need some kind of reinforcement mechanism to calculate success, and an army of volunteers against which to test. The whole thing would cost trillions!
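To make that concrete, here is a minimal sketch of the “reinforcement mechanism” described above, assuming the simplest possible version: an epsilon-greedy bandit that serves content variants and treats watch time as its reward signal. Every variant name and number below is invented for illustration.

    import random

    # Hypothetical content variants the machine can serve.
    VARIANTS = ["cute_animals", "rage_bait", "asmr", "drama_recap"]

    def simulated_watch_time(variant: str) -> float:
        """Stand-in for a real engagement signal (minutes watched)."""
        base = {"cute_animals": 2.0, "rage_bait": 3.5, "asmr": 1.5, "drama_recap": 2.5}
        return random.gauss(base[variant], 0.5)

    def run_bandit(rounds: int = 10_000, epsilon: float = 0.1) -> dict:
        totals = {v: 0.0 for v in VARIANTS}  # cumulative reward per variant
        counts = {v: 0 for v in VARIANTS}    # times each variant was served
        for _ in range(rounds):
            if random.random() < epsilon:    # explore: serve something random
                choice = random.choice(VARIANTS)
            else:                            # exploit: serve the current winner
                choice = max(VARIANTS, key=lambda v: totals[v] / counts[v]
                             if counts[v] else float("inf"))
            counts[choice] += 1
            totals[choice] += simulated_watch_time(choice)
        return counts

    print(run_bandit())  # the highest-reward variant ends up served ~90% of the time

Swap “watch time” for any measurable twitch of the nucleus accumbens and the rest of the loop stays exactly the same.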

Surely, no one would go for that…

Self-Correcting

I feel there are many elements of AI that will eventually be self-correcting… in a sort of apocalyptic, crash-and-burn kind of way. For example, the AI-summarized web doesn’t leave much economic oxygen for people to create content worth summarizing. Assuming, of course, that ad-based revenue streams continue to make sense at all now that over 50% of all internet traffic is bots.

On an individual level, I am experiencing some interesting changes that may also be self-correcting.

I have mentioned it a few times, but I have had a problem with watching YouTube (Shorts). As in, I would pop on over to quickly decompress from some other activity, and then 2-3 hours later, awaken from my algorithmic fugue state, having not accomplished anything that I had set out to do. It’s a problem.

…or, at least, it was. Because I am now beginning to encounter (presumed) AI-directed, curated, and/or created content. And it repulses me in an uncanny valley way. Takes me right out of whatever hypnosis I was under and immediately causes me to close the tab. Which, of course, is great for me.

I put “presumed” up there though, because sometimes I cannot really tell. For example, this video about “15 forgotten garden traditions” is probably AI generated – it features a generic voiceover on top of a stitched-together montage of other people’s (at least attributed) content. Much like the now-maligned em dash however, perhaps that style of video is now just guilty by association? Another video was on The Saver’s Paradox and my AI-dar went off immediately. Looking further into the channel and thinking about what it would take to prompt that level of video though, it seems like it’s legit.

Perhaps neither of those videos bothered you in the slightest. In which case, congratulations! You are absolutely set up for a future filled to the brim with… content. For me though, the magic is gone.

It may well be inevitable that the quality of AI generation becomes such that it is indistinguishable from human content. In which case, why would I be on YouTube at all, instead of in my own prompt?

Self-correcting! As it turns out, even black holes evaporate eventually.

AI Won’t Save Us from Ourselves

I came across a survey/experiment article the other day entitled The Hidden Penalty of Using AI at Work. The “penalty” in this case is engineers judging a peer’s code more harshly if they were told the peer used AI to help write it. The overall effect: one’s competence is judged 9% worse than when the reviewer is told no AI was used. At least the penalty was applied equally…

The competence penalty was more than twice as severe for female engineers, who faced a 13% reduction compared to 6% for male engineers. When reviewers thought a woman had used AI to write code, they questioned her fundamental abilities far more than when reviewing the same AI-assisted code from a man.

Most revealing was who imposed these penalties. Engineers who hadn’t adopted AI themselves were the harshest critics. Male non-adopters were particularly severe when evaluating female engineers who used AI, penalizing them 26% more harshly than they penalized male engineers for identical AI usage.

Welp, that’s pretty bad. Indeed one of the conclusions is:

The competence penalty also exacerbates existing workplace inequalities. It’s reasonable and perhaps tempting to assume that AI tools should level the playing field by augmenting everyone’s capabilities. Our results suggest that this is not guaranteed and in fact the opposite could be true. In our context, which is dominated by young males, making AI equally available increased bias against female engineers.

This is the sort of thing I will never understand about AI Optimists: why would you presuppose anything other than an entrenchment of the existing capitalist dystopian hellscape and cultural morass?

I don’t know if you have taken a moment to look around lately, but we are clearly in the Bad Place. If I had once held a hope that AI tools would accelerate breakthroughs in fusion technology and thus perhaps help us out of the climate apocalypse we are sleepwalking into, articles like the above serve to ground me in the muck where we are. Assuming AI doesn’t outright end humanity, it certainly isn’t going to save us from ourselves. Do you imagine the same administration that is trying to cancel solar/wind energy and destroy NASA satellites monitoring CO2 is going to shepherd in a new age of equitable prosperity?

Or is it infinitely more likely the gains will be privatized and consolidated into the techno-feudal city-states these tech bros are already dreaming up? Sorta like Ready Player One, minus the VR escapism.

I could be wrong. I hope I’m wrong. We are absolutely in a transition period with AI, and as the survey pointed out, the non-adopters were harsher than those familiar with AI. But… the undercurrent remains. I do not see how AI is going to solve income inequality, racism, sexism, or the literal death cult currently wielding all levers of government. I’m finding it a bit more likely that AI will be used to, you know, oppress people in horrible new ways. Or just the old ways, much more efficiently.

Wherever technology goes, there we are.

AIrtists

There’s some fresh Blizzard drama over a Diablo Immortal + Hearthstone collab artwork:

Going to need an AI editor to correct the AI mistakes…

The top comment (1700+ upvotes) is currently:

Guess $158 pets aren’t enough to pay an artist to draw the image for their colab lmao.

I’m all for piling onto Blizzard at this moment, precisely because what they are currently doing in, for example, Hearthstone is especially egregious. It’s not just the pets, though. The dev team had been advocating for reducing the power level of sets for a while – ostensibly to fight power creep – but after like the third flop set in a row, their efforts are beginning to become indistinguishable from incompetence. The Starcraft miniset has been nerfed like 2-3 times now, but people are still playing cards from there because they’re more powerful than the crap we got today. First week of the expansion, and the updated Quest decks all had win rates of less than 30%.

Having said that, it isn’t all that clear that the AI artwork is actually Blizzard’s fault.

Last year, there was another AI art controversy with Hearthstone regarding the pixel hero portraits. While there was no official announcement, all signs pointed towards the artist themselves being the one to submit the AI-generated product rather than Blizzard actively “commissioning” such a thing. And remember, even the small indie devs from Project Zomboid got burned when they hired the same person that made their original splash screen and said artist turned around to submit AI-smeared work.

This sort of thing used to sound insane to me. Why would an artist use a tool that specifically rips off artists and makes their very own future work less valuable? Is there no sense of self-preservation?

On the other hand, that Hearthstone hero portrait “artist” almost got paid, if it weren’t for those pesky Reddit kids. Considering that Microsoft is now requiring its employees to use AI in their jobs, perhaps the artists were just ahead of the curve. In my own meatspace job, AI tools are being made available and training required, if only to stymie certain employees from blindly pasting sensitive, personal data into ChatGPT or Grammarly. Because of course they do.

Regardless, I am interested in seeing how it goes down and what eventually wins. AI does, obviously. But do people stop caring about AI-generated product art because so many examples eventually flood the zone that it becomes impossible to keep up? Will it be a simple generational change, with Gen Alpha (etc) being OK with it? Or will AI advance enough that we can no longer spot the little mistakes?

All three are going to happen, but I wonder which will happen first.

Human Slurry

Scrolling on my phone, I clicked into and read an article about Yaupon, which is apparently North America’s only native caffeinated plant. Since we’re speed-running the apocalypse over here in the US, the thought is that high tariffs on coffee and tea might revitalize an otherwise ultra-niche “Made in America” product. Huh, interesting.

I scroll down to the end and then see this:

The human slurry future

I’ve seen summarized reviews on Amazon, but never comments. Honestly, I just laughed.

It’s long been known that the comments on news articles are trash: filled with bots or humans indistinguishable from bots. But there is something deeply… I don’t know a strong enough word for it. Cynical? Nihilistic? Absurd? Maybe just fucking comedic about inviting your (presumably) human readers to comment on a story and then just blending them all up in a great human slurry summary so no one has to actually read any of them. At what point do you not just cut out the middle(hu)man?

If you want a summary of the future, that’s it. Wirehead, but made out of people.

N(AI)hilism

Wilhelm has a post up about how society has essentially given up the future to AI at this point. One of the anecdotes in there is about how the Chicago Sun-Times ran a top-15 book list that only included 5 real books. The other is about how some students at Columbia University admitted they complete all of their coursework via AI, to make more time for the true reason they enrolled in an Ivy League school: marriage and networking. Which, to be honest, is probably the only real reason to be going to college for most people. But at least “back in the day” one may have accidentally learned something.

From a concern perspective, all of this is almost old news. Back in December I had a post up about how the Project Zomboid folks went out of their way to hire a human artist who turned around and (likely) used AI to produce some or all of the work. Which you would think speaks to a profound lack of self-preservation, but apparently not. Maybe they were just ahead of the curve.

Which leads me to the one silver-lining when it comes to the way AI has washed over and eroded the foundations of our society: at least it did so in a manner that destroys its own competitive advantage.

For example, have you seen the latest coming from Google’s Veo 3 AI video generation? Among the examples of people goofing around was this pharmaceutical ad for “Puppramin,” a drug to treat depression by encouraging puppies to arrive at your doorstep.

Is it perfect? Of course not. But as the… uh, prompt engineer pointed out on Twitter, these sorts of ads used to cost $500,000 and take a team of people months to produce, but this one took a day and $500 in AI credits. Thing is, you have to ask: what is the eventual outcome? If one company can reduce its ad creation costs by leveraging AI, so can all the others. You can’t even say that the $499,500 saved could be used to purchase more ad space, because everyone in the industry is going to have that extra cash, so bids on timeslots or whatever will increase accordingly.

It all reminds me of the opening salvo in the AI wars: HR departments. When companies began receiving 180 applications for every job posting, HR departments started using algorithms to filter candidates. All of a sudden, if you knew the “tricks” and keywords to get your resume past said filter, you had a significant advantage. Now? Every applicant can use AI to construct a filter-perfect resume and a tailored cover letter, and apply to 500 companies over their lunch break. No more advantage.
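A toy version of that first-generation keyword filter (keywords and resumes invented for illustration; real applicant-tracking systems are more elaborate) shows why the advantage evaporated:

    # Hypothetical keyword screen, for illustration only.
    REQUIRED_KEYWORDS = {"python", "agile", "stakeholder", "cross-functional"}

    def passes_filter(resume_text: str, threshold: int = 3) -> bool:
        """Pass if the resume mentions at least `threshold` keywords."""
        text = resume_text.lower()
        hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
        return hits >= threshold

    # Before everyone knew the trick, the filter separated candidates:
    print(passes_filter("Ten years of Python, led agile teams"))  # False (2 hits)

    # Once AI tailors every resume to every posting, everyone passes,
    # and the filter carries zero signal:
    print(passes_filter("Python agile stakeholder cross-functional delivery"))  # True (4 hits)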

At my own workplace, we have been mandated to take a virtual course on AI use ahead of a deployment of Microsoft Claude. The entire time I was watching the videos, I kept thinking “what’s the use case for this?” Some of the examples in the videos were summarization of long documents, creating reports, generating emails, and the normal sort of office stuff. But, again, it all calls into question what problem is being solved. If I use Claude to generate an email and you use Claude to summarize it, what even happened? Other than a colossal waste of resources, of course.
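To illustrate that round trip, here is a deliberately silly sketch. There is no actual LLM in it; expand() and summarize() are hypothetical stand-ins that just show the shape of the loop:

    def expand(points: list[str]) -> str:
        """Pretend-LLM: inflate terse bullet points into a polite email."""
        sentences = [f"I wanted to reach out regarding {p}." for p in points]
        return "Hi team,\n\n" + " ".join(sentences) + "\n\nBest regards"

    def summarize(email: str) -> list[str]:
        """Pretend-LLM: compress the email back down to bullet points."""
        return [s.split("regarding ", 1)[1]
                for s in email.replace("\n", " ").split(".")
                if "regarding" in s]

    points = ["the Q3 report", "the server migration"]
    email = expand(points)        # sender burns tokens inflating the bullets
    recovered = summarize(email)  # recipient burns tokens deflating them
    print(recovered == points)    # True: right back where we started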

Near as I can tell, there are only two end goals available for this level of AI. The first we can see with Musk’s Grok, where the AI-owners can put their thumbs (more obviously) on the scale to direct people towards skinhead conspiracy theories. I can imagine someone with less ketamine-induced brain damage would be more subtle, nudging people towards products/politicians/etc that have bent the knee and/or paid the fee. The second end goal is presumably to actually make money someday… somehow. Currently, zero of the AI companies out there make any profit. Most of them are free to use right now though, and that could possibly change in the future. If the next generation of students and workers are essentially dependent on AI to function, suddenly making ChatGPT cost $1000 to use would reintroduce the competitive advantage.

…unless the AI cat is already out of the bag, which it appears to be.

In any case, I am largely over it. Not because I foresee no negative consequences from AI, but because there is really nothing to be done at this point. If you are one of the stubborn holdouts, as I have been, then you will be run over by those who aren’t. Nobody cares about the environmental impacts, the educational impacts, the societal impacts. But what else is new?

We’re all just here treading water until it reaches boiling temperature.

(AI)Moral Hazard

There are a lot of strong feelings out there regarding the use of AI to generate artwork or other assets for videogames. Regardless of where you fall on the “training” aspect of AI, it seems clear that a game developer opting for AI art is taking away an employment possibility for a human artist.

One possibility I had not previously imagined though, is when a paid human artist themselves (allegedly) uses AI to generate the art:

Released as part of [Project Zomboid] build 42, these new images for the survival game seemingly contain some visual anomalies that may be attributable to AI generation tools. In the picture of the person using the radio, for example, the handle of the radio is misaligned with its main casing, the wire on the headphones seems to merge into the character’s hair, and there is an odd number of lines on the stand-up microphone – on one side of the microphone there are five indentations, but on the other side, which ought to be symmetrical, there are six.

It is worth noting that this is all forum speculation – AI has not been proven, although it certainly seems suspicious. Moreover, the “AAA concept artist” commissioned is not some rando, but the very one who did the still-used cover art of Project Zomboid from back in 2011. So this particular controversy is literally the worst of all possible worlds: the game developer did the right thing by hiring a professional artist with a proven track record for thousands of dollars, and received either AI-assisted artwork (bad), or non-AI artwork with human error that is now assumed to be because of AI (worse).

All of which is a complete distraction to another otherwise commendable game update (worst).

“Either way, they are gone for now – likely forever, as frankly after two years of hard work from our entire team in getting build 42 done, it would break my heart if discussion as to whether we’d used AI on a few loading screens that were produced externally to the company pretty recently was to completely overshadow all that effort and passion and hard work the team put into getting B42 out there.”

Truly, it is an unenviable time to be an artist. AI technology is only going to improve, and as it does, you will be increasingly competing against both “Prompt Engineers” and anonymous internet sleuths hunting for clues to “expose” you for Reddit karma. Eventually, AI-generated content will be so prevalent that none of it will matter; I could imagine ads that are dynamically drawn in, say, anime style because it noticed you had Crunchyroll open in another tab, or with the realistic likeness of a TV star from your most-watched Netflix show.

Right now, utilizing AI as a business is a sign of being cheap and invites controversy. Perhaps it remains so, presuming the ad-based hellscape imagined above. But at a certain point, AI will probably figure out symmetry and how many Rs are in strawberry and we will likely be none the wiser.

Or we will just assume everything is AI-generated and it won’t matter. Same difference.

This AI Ain’t It

Wilhelm wrote a post called “The Folly of Believing in AI” and is otherwise predicting an eventual market crash based on the insane capital spent chasing that dragon. The thesis is simple: AI is expensive, so… who is going to pay for it? Well, expensive and garbage, which is the worst possible combination. And I pretty much agree with him entirely – when the music stops, there will be many a child left without a chair but holding a lot of bags, to mix metaphors.

The one problematic angle I want to stress the most though, is the fundamental limitation of AI: it is dependent upon the data it intends to replace, and yet that data evolves all the time.

Duh, right? Just think about it a bit more though. The best use-case I have heard for AI has been from programmers stating that they can get code snippets from ChatGPT that either work out of the box, or otherwise get them 90% of the way there. Where did ChatGPT “learn” code though? From scraping GitHub and similar repositories for human-made code. Which sounds an awful lot like what a search engine could also do, but nevermind. Even in the extremely optimistic scenario in which no programmer loses their job to future Prompt Engineers, eventually GitHub is going to start (or continue?) to accumulate AI-derived code. Which will be scraped and reconsumed into the dataset, increasing the error rate, thereby lowering the value that the AI had in the first place.
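As a back-of-the-envelope sketch of that feedback loop (all numbers invented; the point is the direction, not the rate):

    # Toy model: each training generation scrapes a corpus that is part
    # human code (with a fixed error rate) and part output of the previous
    # model, which reproduces the errors it ingested, plus some of its own.
    HUMAN_ERROR_RATE = 0.05    # assume 5% of human snippets are buggy
    AI_SHARE_GROWTH = 0.15     # assume AI's corpus share grows 15% per generation
    ERROR_AMPLIFICATION = 1.5  # assume models compound the errors they ingest

    def model_error_rate(generations: int) -> float:
        model_error, ai_share = HUMAN_ERROR_RATE, 0.0  # gen 0: human code only
        for _ in range(generations):
            ai_share = min(1.0, ai_share + AI_SHARE_GROWTH)
            corpus_error = (1 - ai_share) * HUMAN_ERROR_RATE + ai_share * model_error
            model_error = min(1.0, corpus_error * ERROR_AMPLIFICATION)
        return model_error

    for gen in (1, 3, 5, 8):
        print(f"generation {gen}: ~{model_error_rate(gen):.0%} of output buggy")

Under those made-up assumptions the error rate creeps, then compounds: roughly 8% by generation one, nearly half by generation eight. Which is the “model collapse” scenario researchers keep warning about.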

Alternatively, let’s suppose there isn’t an issue with recycled datasets and error rates. There will be less need for programmers, which means less opportunity for novel code and/or new languages, as these would have to compete with much cheaper, “solved” solutions. We then get locked into existing code at current levels of function unless some hobbyists stumble upon the next best thing.

The other use-cases for AI are bad in more obvious, albeit understandable ways. AI can write tailored cover letters for you, or if you’re feeling extra frisky, apply for hundreds of job postings a day on your behalf. Of course, HR departments around the world fired the first shots of that war when they started using algorithms to pre-screen applications, so this bit of turnabout feels like fair play. But what is the end result? AI talking to AI? No person can or will manually sort through 250 applications per job opening. Maybe the most “fair” solution will just be picking people randomly. Or consolidating all the power into recruitment agencies. Or, you know, just nepotism and networking per usual.

Then you get to the AI-written house listings, product descriptions, user reviews, or office emails. Just look at this recent Forbes article on how to use ChatGPT to save you time in an office scenario:

  1. Wrangle Your Inbox (Google how to use Outlook Rules/filters)
  2. Eliminate Redundant Communication (Ooo, Email Templates!)
  3. Automate Content Creation (spit out a 1st draft on a subject based on prompts)
  4. Get The Most Out Of Your Meetings (transcribe notes, summarize transcriptions, create agendas)
  5. Crunch Data And Offer Insights (get data analysis, assuming you don’t understand Excel formulas)

The article states email and meetings represent 15% and 23% of work time, respectively. Sounds accurate enough. And yet rather than address the glaring, systemic issue of unnecessary communication directly, we are to use AI to just… sort of brute force our way through it. Does it not occur to anyone that the emails you are getting AI to summarize are possibly created by AI prompts from the sender? Your supervisor is going to get AI to summarize the AI article you submitted, have AI create an agenda for a meeting they call you in for, AI is going to transcribe the meeting, which will then be emailed to their supervisor and summarized again by AI. You’ll probably still be in trouble, but no worries, just submit 5000 job applications over your lunch break.

In Cyberpunk 2077 lore, a virus infected and destroyed 78.2% of the internet. In the real world, 90% of the internet will be synthetically generated by 2026. How’s that for a bearish case for AI?

Now, I am not a total Luddite. There are a number of applications for which AI is very welcome. Detecting lung cancer from a blood test, rapidly sifting through thousands of CT scans looking for patterns, potentially using AI to create novel molecules and designer drugs while simulating their efficacy, and so on. Those are useful applications of technology to further science.

That’s not what is getting peddled on the street these days though. And maybe that is not even the point. There is a cynical part of me that questions why these programs were dropped on the public like a free hit from the local drug dealer. There is some money changing hands, sure, and it’s certainly been a boon for Nvidia and other companies selling shovels during a gold rush. But OpenAI is set to take a $5 billion loss this year alone, and they aren’t the only game in town. Why spend $700,000/day running ChatGPT like a loss leader, when there doesn’t appear to be anything profitable being led to?

[Fake Edit] Totally unrelated last week news: Microsoft, Apple, and Nvidia are apparently bailing out OpenAI in another round of fundraising to keep them solvent… for another year, or whatever.

I think maybe the Dead Internet endgame is the point. The collateral damage is win-win for these AI companies. Either they succeed with the AGI moonshot – the holy grail of AI that would change the game, just like working fusion power – or fill the open internet with enough AI garbage to permanently prevent any future competition. What could a brand new AI company even train off of these days? Assuming “clean” output isn’t now locked down with licensing contracts, their new model would be facing off with ChatGPT v8.5 or whatever. The only reasonable avenue for future AI companies would be to license the existing datasets themselves in perpetuity. Rent-seeking at its finest.

I could be wrong. Perhaps all these LLMs will suddenly solve all our problems, and not just be tools of harassment and disinformation. Considering the big phone players are making deepfake software on phones standard this year, I suppose we’ll all find out pretty damn quick.

My prediction: mo’ AI, mo’ problems.

Blarghest

The last time I officially joined Blaugust was back in 2015. Back then, the conclusion I came to was that it wasn’t really worth the effort: posting every single day for a month did not meaningfully increase page views. I’m not trying to chase page views per se, but you can’t become a fan of something if you don’t know about it. Discoverability is a real issue, especially if you don’t want to juice SEO metrics in suspect ways. So, on a lark, I decided to rejoin Blaugust nine years later (i.e. this year) to at least throw my hat back in the ring and try to expand my (and others’) horizons.

What I’m finding is not particularly encouraging.

More specifically, I was looking at the list of participants. I’m not going to name names, but more than a few of the dozen I’ve browsed thus far appear to be almost nakedly commercial blogs (e.g. affiliate-linked), AI-based news aggregate sites, and similar nonsense. I’m not trying to be the blogging gatekeeper here, but is there no vetting process to keep out the spam? I suppose that may be a bit much when 100+ people/bots sign up, but it also seems deeply counter-productive to the mission statement of:

Posting regularly builds a community and in this era of AI-slop content, our voices are needed even more than we ever have been at any point in the past.

Ahem. The calls are coming from inside the house, my friends.

[Fake Edit] In fairness, after getting through all 76 of the original list, the number of spam blogs did not increase much. Perhaps a non-standard ordering mechanism would have left a better first impression.

Anyway, we’ll have to see how this Blaugust plays out. I have added 10-20 new blogs to my Feedly roll, and am interested to see where they go from here. Their initial stuff was good enough to pique my curiosity. The real trick, though, is who is still posting in September.