August 15, 2024
Thursday Open Comments
Will Arty-Intel LLMs prove to be better at medicine than human physicians? That’s a trick question, because they already test better than nine in ten physicians taking their licensing exams.
Arnold Kling cites Scott Gottlieb and Peter Attia, et al., in the second and third segments of this article. The quotes are short, and I haven’t followed through to read the linked items. But the ideas are intriguing, considering that health care is now estimated to represent a sixth to a quarter of the American economy, much of it funded by government.
In more than one of the YouTube discussions I regularly watch featuring medicos in the low-carb diet community, they mention that the physician burn-out rate is so high because doctors are generally expected to do no more than gather information on a patient’s symptoms and then prescribe a medication for whatever is diagnosed. The color-by-numbers approach is frustrating, and remarkably rigid, enforced as it is by insurers and government bureaucracies who define the “standards of care.” Physicians who don’t comply are likely to lose their licenses and livelihoods. The system’s logic treats patients (as one researcher quipped) as if they are suffering from a pharmaceutical deficiency, and remedying that deficiency is the doctor’s only job.
Don’t get me wrong, the pharmaceutical industry has had fabulous achievements, not least in fighting infectious disease. But it has become a massive socio-political stakeholder with gargantuan funding, and it accounts for large parts of the public’s retirement investments, giving an even larger excuse to meddling politicians and bureaucrats. The lobbying to retain the industry’s fiefdoms is predictably fierce. We might say something similar about the processed food industry—the sugary-beverage people and the snack-food multinationals, to name a couple of examples—without going into the elaborate details of ornate conspiracy theories. Suffice it to say that there’s a happy convergence of interests between a processed food industry that might be a leading cause of long-term chronic illness and a pharmaceutical industry that needs its regular fix of long-term prescriptions for chronic conditions (regulating blood cholesterol, managing hypertension, and treating diabetes).
At any rate, if doctors are meant mainly to prescribe pills, and pills are prescribed on the basis of checklists built from patients’ vital statistics, it would make sense that smart machines and computers could do the job far more evenly and efficiently than humans, who are known to suffer from emotional imbalances, physical and mental exhaustion, and the resulting lapses in judgment. Patients, too, might be more honest and forthcoming when interacting with computers than with human doctors (or less so, for that matter, if they fear negligence with their personal information).
Maybe I’ve talked myself into it. Computers and AI ftw! Tell them where it hurts!
Is that what the doctor ordered?

I'm listening to Eliana Johnson and Matt Continetti (subbing for Chris Stirewalt) on "Ink Stained Wretches". They're discussing the media's lockstep reporting on the Harris campaign ... all the joy! They're saying they don't think it will last, and that "Harris reality" will kick in at some point.
I think there is a good chance they're wrong, that most media will remain deeply in the tank, and that being someone other than Trump or Biden may be sufficient, regardless of everything else.
Very interesting topic. I wonder if patients would be more likely to follow the guidance of a computer than that of a person, since people are conditioned to believe and trust "objective" information from a machine.