Yesterday we heard from Roger Pielke, Jr., on how scientific research can become corrupted by activism. But sometimes research becomes corrupted to such an extent that the corruption itself turns into the established dogma of a whole field. This is what science journalist Gary Taubes has been researching and writing about when it comes to human health and nutrition.
In the process, Taubes has made himself into an historian of medical science as it regards human nutrition. He recently published a new history of the study of diabetes, Rethinking Diabetes: What Science Reveals About Diet, Insulin, and Successful Treatments. I have not read it yet, but I have followed some of the interviews Taubes has given about his research.
I can attest that I’ve found Taubes very convincing in his previous writings, which mainly contradict what has become standard nutrition teaching and practice, as established by vested interests and central government. He has courted controversy by not merely criticizing the accepted health science, but by weighing in on behalf of views shunted aside by what we might call the “establishment,” for lack of a better term.
Here is Taubes in a discussion of his book with Abby Schwartz of Mt. Sinai Health in New York City. It is an interesting conversation that covers a lot of ground regarding how the scientific understanding reached its present-day conclusions.
One comment Taubes has made that I have found particularly interesting: His early work tended to upset the medical establishment, which led its inhabitants to try to discredit him with whatever they could find. He has always presented his own work in interviews with a degree of humility, often saying some version of “If my research is accurate, then standard nutrition advice is wrong and should probably be turned on its head.” He has even acknowledged occasions when he was wrong and publicly owned up to them.
As he says, he thought it was important to admit his mistakes, because doing so would demonstrate his honesty. His critics, he says, have instead taken his admissions of error as a sign that he is all the less trustworthy. They say, “You really can’t trust Taubes: he even admits he was wrong! We must therefore presume he is generally untrustworthy because he’s incompetent.”
It returns us to the general rule of thumb for life: You just can’t win for losing.
Hi. Just got finished with my copyediting deadline crunch. Next week I look at the PDF, but now I have the weekend to do weekendy things before I have to go work at the store on Monday.
Not on the mothership, but in the comments on Mr. Scoop Drucker's Tuesday column, which I liked very much, I encountered something that really seems like a troll that is a bot. Its handle is Lchrisberg, and it claims to be a Reaganite born in 1963.
I responded to him/it initially because it used the phrase "uber selfish baby boomers." But there were more insults where that one came from. I wasn't sure until today, when it answered me back again with three comments. It is hard to imagine an individual being that bullheaded when called out on a bogus argument. And I know I'm not the one who's off base, because it was the old generation-stereotype thing, and other commenters had already chimed in and told me I was right. The heart of the issue was, why stereotype and generalize when the basic point is individual responsibility?
But it continues making lame arguments interspersed with mild personal insults ("Get some help") that are just below the level I'd consider reportable. It keeps going "English much?" and taking slams at my copyediting ability. That doesn't seem accidental.

Aside from the personal insult angle, I think it is programmed AI that digs up definitions and rewords them as if to lecture me, but is unable to use logic or take opposing arguments on board. It keeps insisting that generalizations are correct and meaningful. One of the definitions is irrelevant in an especially interesting way, citing policemen, with wording that suggests something scraped up by AI. A human would back off a bit, come back with a statement that moved the goalposts just enough to save face, then drop the subject.

Basically it is just churning out sentences reiterating definitions, but it can't think about whether the ideas in those definitions actually apply to the situation. This is what AI does, right? It talks imitatively but doesn't actually know facts or work out what does and doesn't make sense.
So I finally told it it was a bot. No response to that, yet. You may want to check it out and see if you agree.
https://thedispatch.com/article/traditional-republicans-feel-unwelcome-in-trumps-gop/