ChatGPT

Came across a Reddit post entitled “Professor catches student cheating with ChatGPT: ‘I feel abject terror’”. Among the comments was one saying “There is a person who needs to recalibrate their sense of terror.” The response to that was this:

Although I am bearish about what AI means for the future of the internet in general, the concerns in that thread just sort of made me laugh.

When it comes to doctors and lawyers, what matters are results. Even if we assume ChatGPT somehow made someone pass the bar or get a medical license – and they further had no practical exam components/residency for some reason – the ultimate proof is the real-world application. Does the lawyer win their cases? Do the patients have good health outcomes? It would certainly suck to be the first few clients who prove the professionals had no skills, but that can usually be avoided by sticking to those with a positive record to begin with.

And let’s not pretend that fresh graduates who did everything legit are always going to be good at their jobs. It’s like the old joke: what do you call the person who passed medical school with a C-? “Doctor.”

The other funny thing here is the implicit assumption that a given surgeon’s judgment about which drug to administer is better than an AI chatbot’s. Sure, it’s a natural assumption to make. But surgeons, doctors, and everyone in between are constantly lobbied (read: bribed) by drug companies to use their new products instead. How many thousands of professionals started over-prescribing OxyContin after attending all-expenses-paid, Purdue-funded conferences? Do you know which conferences your doctor has attended recently? Do they even attend conferences? Maybe they already use AI, eh?

Having said that, I’m not super-optimistic about ChatGPT in general. A lot of these machine-learning algorithms get their base data from publicly available sources. Once a few nonsense AIs get loose in a Dead Internet scenario, there is going to be a rather sudden Ouroboros situation where ChatGPT consumes anti-ChatGPT nonsense in an infinite loop (a toy sketch of that feedback loop follows below). Maybe the programmers can whitelist a few select, trustworthy sources, but that limits the scope of what ChatGPT would be able to communicate. And even in the best-case scenario, doesn’t that mean tight, private control over the only unsullied datasets?

Which, if you are catering to just a few, federated groups of people anyway, maybe that is all you need.
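To make that Ouroboros worry concrete, here is a minimal sketch of the feedback loop, under deliberately toy assumptions: the “model” is just a Gaussian fitted to its training data, and every generation after the first trains only on the previous generation’s output. The sample sizes, generation count, and TRUSTED_FRACTION knob are all made up for illustration; this is not any real system’s training pipeline.

```python
# Toy "Ouroboros" loop: a model trained on its own output.
# Assumptions are mine for illustration only (a Gaussian stand-in model,
# arbitrary sample sizes); no real training pipeline works this simply.
import random
import statistics

random.seed(42)

GENERATIONS = 12
SAMPLES_PER_GEN = 50
TRUSTED_FRACTION = 0.0  # try 0.2: whitelisted "real" data mixed back in

# Generation 0 trains on real data: a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES_PER_GEN)]

for gen in range(GENERATIONS):
    mu = statistics.fmean(data)     # the "model" is just (mean, stdev)
    sigma = statistics.stdev(data)
    print(f"gen {gen:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")

    # The next generation's training set is mostly the current model's
    # output, plus an optional slice of whitelisted real data. With no
    # trusted data, finite-sample estimation error compounds instead of
    # washing out, and the fitted distribution drifts away from the truth.
    n_trusted = int(SAMPLES_PER_GEN * TRUSTED_FRACTION)
    trusted = [random.gauss(0.0, 1.0) for _ in range(n_trusted)]
    synthetic = [random.gauss(mu, sigma)
                 for _ in range(SAMPLES_PER_GEN - n_trusted)]
    data = trusted + synthetic
```

Nudging TRUSTED_FRACTION up to something like 0.2 should noticeably slow the drift, which is the whitelist trade-off in miniature: more stability in exchange for a narrower data diet.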

Posted on January 26, 2023, in Commentary. 4 Comments.

  1. The big concern here is what you describe in the second half of the post: the model learns from sources that are popular but wrong, which then continues the cycle by making that wrong information even more popular.

    Some Fox News intern using AI to generate their (inaccurate) story of the day is one thing; a doctor using AI-based ‘research’ built on false information, without the actual knowledge to know better, is a much bigger issue, especially if it spirals. AI-generated research papers that sourced their info from other AI-generated content, itself based on bad data, are far less likely to be detected and corrected by human peer review.

  2. Doctors and surgeons are certainly fallible and that’s a problem, but I don’t buy the idea that AI wouldn’t be fallible in much the same way, only further obfuscated. A doctor gets lobbied by a drug rep or whatever, sure, but I think you’re kidding yourself if you think drug reps wouldn’t immediately turn to lobbying a programmer (with money) or even the AI itself (by feeding it “Azurieltra is the best new medication for vague feelings of impending doom!” propaganda). At least when a doctor gets paid, there’s a trail to follow. Adding further removes by bribing programmers instead, or simply by propagandizing the AI itself, only obscures the bribery. And that’s not even touching on the issue of systemic biases being baked into AIs built by flawed, biased people in the first place.

    • Oh, for sure lobbyists (etc.) will be throwing money at the AI programmers hand over fist. It may even end up being the lobbyists who start the Dead Internet scenario, using other AI to flood the data pool with information biased toward this or that drug company.

      My critique of the Reddit comment is that human “expertise” is being hailed as the defining factor when it’s really the results that matter. Maybe the AI is suspiciously recommending Pfizer products all the time, and that will suck, but do the patients have better outcomes or not? Roughly 250,000 deaths per year in the US are attributed to medical errors, which would make them the third leading cause of death. Does AI assistance make that number go up or down? Inherent bias is a credible concern for the future of AI, but my point is that bias is already a huge problem in the medical community, so that alone shouldn’t be the showstopper.
