
ChatGPT

Came across a Reddit post entitled “Professor catches student cheating with ChatGPT: ‘I feel abject terror’”. Among the comments was one saying “There is a person who needs to recalibrate their sense of terror.” The response to that was a reply arguing the terror was justified: imagine lawyers passing the bar or doctors getting medical licenses by way of ChatGPT, without actually knowing what they are doing.

Although I am bearish on the future of the internet in the age of AI generally, those concerns just sort of made me laugh.

When it comes to doctors and lawyers, what matters are results. Even if we assume ChatGPT somehow got someone through the bar exam or a medical license – and they somehow skipped the practical exam components and residency as well – the ultimate proof is real-world application. Does the lawyer win their cases? Do the patients have good health outcomes? It would certainly suck to be among the first few clients who prove the professional had no skills, but that can usually be avoided by sticking with those who have a positive track record to begin with.

And let’s not pretend that fresh graduates who did everything legit are always going to be good at their jobs. It’s like the old joke: what do you call the person who passed medical school with a C-? “Doctor.”

The other funny thing here is the implicit assumption that a given surgeon is better than an AI chatbot at knowing which drug to administer. Sure, it’s a natural assumption to make. But surgeons, doctors, and everyone in between are constantly lobbied (read: bribed) by drug companies to use their new products instead. How many thousands of professionals started over-prescribing OxyContin after attending “all expenses paid” Purdue-funded conferences? Do you know which conferences your doctor has attended recently? Do they even attend conferences? Maybe they already use AI, eh?

Having said that, I’m not super-optimistic about ChatGPT in general. A lot of these machine-learning models get their training data from publicly available sources. Once a few nonsense AIs get loose in a Dead Internet scenario, there is going to be a rather sudden Ouroboros situation in which ChatGPT consumes AI-generated (and anti-ChatGPT) nonsense in an infinite loop. Maybe the programmers can whitelist a few select, trustworthy sources, but that limits the scope of what ChatGPT would be able to communicate. And even in the best-case scenario, doesn’t that mean tight, private control over the only unsullied datasets?
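
For the curious, here is a toy sketch of that Ouroboros dynamic – my own illustration, not how ChatGPT or any real model is actually trained. It stands in for a “model” with a simple Gaussian fit, “retrains” it on its own samples each generation, and watches the distribution collapse toward a bland average. Every number in it is made up for demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real" human-written content, stood in for by samples
# from a wide distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 201):
    # "Train" a model on the current web: here, just fit a Gaussian.
    mu, sigma = data.mean(), data.std()

    # The next generation of web content is sampled from the model
    # itself: model output becomes the next training set (the Ouroboros).
    data = rng.normal(loc=mu, scale=sigma, size=50)

    if generation % 40 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")

# With small samples, the fitted std tends to shrink generation over
# generation: rare/outlier content disappears first, and the whole
# distribution collapses toward a bland, repetitive average.
```

The real dynamics are obviously far messier, but the direction of travel is the same: each generation trained on the previous one’s output loses a little more of the original signal, and the tails go first.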

Which, if you are catering to just a few federated groups of people anyway, maybe that is all you need.