Infiltration
I’ve talked a lot about AI in hypotheticals, though it is certainly having real-world effects in various fields already. Still, nothing quite hits the same as it happening at your own place of business.
For context, general staff where I work have been using ChatGPT for more than a year now. Mostly, it has been used to summarize their own notes, check grammar, and so on. All of which had been expressly against policy, considering how the notes in question are filled with sensitive personal information that has now been consumed by OpenAI servers and possibly regurgitated into someone else’s results. There’s no fighting the tide though, so the executives worked with various regulatory agencies to bring a “walled-off” version of (I think) Microsoft’s AI into the fold.
Things took a decidedly different turn the week before the holidays.
See, there have been a number of initiatives and requests over the years from staff to get electronic versions of paper forms, and/or to modernize the electronic forms they already use. Our IT shop is small for an organization of our size, with just 2-3 programmers on staff for such requests. The latest form to be modernized has taken almost two years to be completed, for… various reasons I won’t go into here. So when a new request came in for a different form, it was not much of a surprise to learn that the estimate was ~2,000 programming hours. For us, that sounded about right.
And then… the senior programmer just completed the entire form in about 100 hours via AI.
During the impromptu “demo,” he outlined his methodology, which was essentially guiding the AI through building each specific section of the form separately. Interestingly, as each section was built and tweaked by the programmer, the entire codebase was (apparently) re-entered into the prompt to ensure later sections integrated with the earlier pieces. All of which took the stated 100 hours and around $50 worth of AI credits. I think this was via GitHub Copilot, on one of the lower “settings.”
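As I understood it from the demo, the loop looked roughly like this. A minimal sketch only: `build_section` is a stand-in for the actual Copilot calls (which I obviously don’t have), and the point is just the shape of the workflow, where the whole accumulated codebase goes back in as context each time.

```python
# Sketch of the section-by-section workflow as described in the demo.
# build_section() is a hypothetical stand-in for the real AI call.

def build_section(codebase: str, spec: str) -> str:
    # Placeholder for "prompt the AI with everything built so far,
    # plus the spec for the next section." Here it just fabricates
    # a labeled stub so the loop is runnable.
    return f"# section: {spec}\n# (generated with {len(codebase)} chars of prior context)\n"

def build_form(section_specs):
    codebase = ""
    for spec in section_specs:
        # Re-enter the entire codebase as context for each new section,
        # so later sections integrate with the earlier pieces.
        new_code = build_section(codebase, spec)
        # (In practice: the programmer reviews/tweaks new_code by hand
        # before folding it in.)
        codebase += new_code
    return codebase

result = build_form(["header", "applicant info", "signatures"])
print(result)
```

The tweak-then-refeed step is presumably where most of the 100 hours went; the AI credits only pay for the generation half of the loop.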
Brilliant, right? If this outcome is reproducible with other forms – and the form in question doesn’t self-destruct down the road – it will effectively save the organization millions of dollars over time. There is nothing but unambiguous good about it, yeah? People dream about a 20x improvement in efficiency!
So, here’s the thing. The form we’re talking about already exists as a PDF and a Word document. Having it also exist as what amounts to a company website could possibly save some amount of staff time, but the overall return on investment by any metric was dubious from the get-go. Is going from 2,000 hours to 100 impressive? Yes. Would I expect our junior programmer to be able to reproduce the same 100-hour outcome? No. Is there a very real possibility that the next round of hirings will exclude junior programmer positions? Yes. Does anyone have grand designs for how junior programmers will become senior programmers in the future? No. Will future AI use for a similar effort also cost $50? Fuck no.
If you’re pessimistic about an AI future, you’re a Luddite, fighting an absurd battle to keep buggy whips relevant. “Get with the times, old man! The future is now!” Sure, okay. Question, though: do you feel we are well positioned, politically or economically, for the fruits of AI investments to improve the lives of the working class? Or perhaps are we just automating the creation of shit sandwiches?
Alas.
Posted on January 2, 2026, in Commentary and tagged AI, Capitalist Dystopian Hellscape, Copilot, Job, Luddite. 7 Comments.
If the form does break (and it will), and the AI can’t fix it, does the programmer who used the AI know how to fix that code now? And can they do it in X hours so the time savings are maintained?
The senior programmer could definitely fix the code, and in all likelihood could have coded the form in 200-300 hours or less if he felt so inclined. I’ve never watched Silicon Valley, but from the YouTube Shorts I’ve seen, he very much strikes me as a Bertram Gilfoyle sort of guy.
The maintenance is where we would be screwed. Our senior programmer is going to retire in less than 5 years, and I don’t feel there is a viable replacement.
This works because in reality most current “software development” is adapting already existing stuff to a new case, not creating anything really “new.” There’s the joke that you study a lot of stuff in computer science, but then 95% of products are just databases and database interfaces, so you’ll never need the rest unless you go into some niche market.
The case study you show seems to fall exactly in this category, so it’s not a surprise that AI does an excellent job at it. Try to have AI create something which interfaces to an undocumented piece of lab equipment that does something more or less unique, and you’ll see that doing things yourself is actually faster, because the amount of information you need to provide is so large that writing the code yourself takes fewer keystrokes….
My experience with AI and coding is limited (I did more tests with teaching and it was useless, always regurgitating stuff I already knew), but the approach you describe works well: have the AI generate something, then fix/tweak it, either with more help from the AI or manually (because sometimes it’s just faster to correct one thing instead of explaining to the AI how to correct it).
I’m really unsurprised that in the hands of someone who knows what he’s doing, AI is a good tool, taking care of all the “cut-n-paste coding” which many projects inevitably have. But it seems to me that it’s the same story as with any tool: extremely advanced and powerful tools actually require the user to have MORE knowledge in the field to use them effectively. A newbie will be lost, since he’ll have trouble understanding what the tool is doing.
The grand irony is that the way the overall “form refresh” project was originally approved was predicated on investing a lot of time in building out “modules” in the first forms that would thereafter be used to speed up later form development. That plan did not end up working out for us internally, as each department needed tweaks to better match the workflow specific to them. Saying “take it or leave it” would often result in them not using the form at all, which is a huge loss for us.
But, yeah, you’re right about it technically being a good use-case for AI.
…until the senior programmer leaves/retires. No one in our current pipeline is good enough to replace him, and certainly no one is capable of maintaining these AI forms should they start to mess up. The worst case is the executives getting excited about the 100-hour figure, mandating us to create 20 AI-forms, and then our senior programmer leaves. It’d be a nightmare.
I agree with Helistar. Vibecoding sometimes hits the bullseye. More often, it creates a morass. The more serious and non-standard and security-sensitive the project, the greater the odds of the latter.
This might change, or the models might plateau. Unfortunately, the modus operandi of modern capitalism is to oversell. Claiming it already does the thing is the only way to gather the insane money to have a shot at making it do the thing. There is no other. The junior dev extinction event is a barometer of the success of this strategy.
In a more dirigiste economy, AI development would be a government-led project. A NASA-style budget would be assigned. The infrastructure would be built out slower, in pace with the energy demands. Clear-eyed specs would be written and specific uses in industry would be developed and tested concurrently. Slower but saner progress. If that sounds like I think China will be the country to get way more practical use out of arguably inferior LLMs on inferior fabs long after various bubbles pop, dreams of godhood die, and massive social wreckage is inflicted in the United States… yeah, put me down for that.
China could literally do nothing and end up on top based on the active looting and destruction of US science and research currently going on. It’s unfathomably stupid what is taking place over here.
Like… imagine that you’re playing a Civilization-style game, or an RTS or whatever, and you have full control over the direction of the country. Would anyone look at the available science “unlocks” and say to themselves “Nope, I reject solar/wind/tide power. Let’s cancel medical research, make citizen healthcare prohibitively expensive, and discourage any young adults from higher education.” What is even the idealized endgame? Pay off the government debt, disband the government entirely, and then… what? Techno-feudalism? Bow down to the superior advancements from other countries only made possible by their profitless, incremental research steps?
Reality being what it is, is the strongest proof of Absurdism that anyone needs.
I brought up this exact argument at work. Senior programmers can use AI effectively, but this is the very work that turns junior developers into senior developers, and yes, they are looking to replace the lower echelons of developers with AI.