> I've gotten a lot of value out of reading the views of experienced engineers; overall they like the tech, but they do not think it is a sentient alien that will delete our jobs.
I normally see things the same way you do, but I did have a conversation with a podiatrist yesterday that gave me food for thought. His belief is that certain medical roles will disappear because they'll become redundant. In his case he mentioned radiology, and he presented his case as follows:
A consultant gets a report plus an X-ray from the radiologist. They read the report and confirm what they're seeing against the images; they won't take the report blindly. What changes is that machines have been learning to interpret the images and can use an LLM to generate the report. These reports tend not to miss things, but they will over-report issues. Since a consultant will verify the report for themselves before operating, they no longer need the radiologist. If the machine reports a non-existent tumour, they'll see there's no tumour.
I've seen this sort of thing a few times: "Yes, I'm sure AI can do that other job that's not mine, over there." Now maybe foot doctors work more closely with radiologists than I'm aware, but the radiologists I've talked to aren't impressed with the work AI has managed to do in their field. Apparently there are one or two incredibly easy tasks it can sort of do, but they make up a very small part of an actual radiologist's job.
> But the radiologists I've talked to aren't impressed with the work AI has managed to do in their field.
Just so I understand correctly: is it over-reporting problems that aren't there, or is it missing blindingly obvious problems? The latter is obviously a problem and, I agree, would completely invalidate it as a useful tool. The former sounded, the way it was explained to me, more like a matter of degree.
I'm afraid I don't have the details. I was reading about certain lung issues the AI was doing a good job on and thought, "oh well that's it for radiology." But the radiologist chimed in with, "yeah that's the easiest thing we do and the rates are still not acceptable, meanwhile we keep trying to get it to do anything harder and the success rates are completely unworkable."
AI luminary and computer scientist Geoffrey Hinton predicted in 2016 that AI would be able to do everything radiologists can do within five years. We're still not even close. He was full of shit, and now, almost 10 years later, he's changed his prediction while still pretending he was right by moving the goalposts. His new prediction is that radiologists will use AI to be more efficient and accurate, half suggesting he meant that all along. He didn't. He was simply bullshitting, bluffing, making an educated wish.
This is the nonsense we're living through: predictions, guesses, and promises that cannot possibly be fulfilled, which will inevitably be revised into something far less ambitious with much longer timelines, and everyone will shrug it off as if we weren't being misled by a bunch of fraudsters.
I doubt this simply because of the inertia of medicine. The industry still doesn't have a standardized method for handling automated claims the way banking does. It gets worse for services that require prior authorization; they settle those over the phone! This might sound like irrelevant ranting, but my point is that they haven't even addressed the low-hanging fruit, let alone complex ailments like cancer.
IMO prior authorization needing to be done over the phone is a feature, not a bug. It intentionally wastes a doctor's time so they are less incentivized to advocate for their patients, and that frustration saves the insurance companies money.
Heard. I do wonder why hospitals haven't automated their side though. Regardless, the recent prior auth situation is a trainwreck. If I were dictator, insurance companies would be non-profit and required to have a higher loss ratio.
2 quibbles: 1) a more ethical system would still need triage-style rationing given a finite budget, 2) medical providers are also culpable given the eye-watering prices for even trivial services.
I would love to know how much rationing is actually necessary. I have literally 0 evidence to support this but my intuition says that this is like food stamps in that there is way less frivolous use than an overly negative media ecosystem would lead people to believe.