When Not to Trust the Experts

By Twilight Patriot

Gain-of-function research should continue.  Such is the opinion of the Biden White House, as explained at a press conference at the end of February.  It is “important to help prevent future pandemics,” says communications man John Kirby.
 
Last fall, researchers in Boston went public about creating new variants of COVID-19, while in a recent letter, over 150 prominent virologists joined together to air their concerns that new regulations might “overly restrict the ability of scientists to generate the knowledge needed to protect ourselves from these pathogens.”
 
By now, most serious people are in agreement that the 2019 coronavirus originated in a laboratory in Wuhan.  Even federal agencies like the FBI are falling into line.  The evidence for what happened is overwhelming.
 
To begin with, the bats that host the virus’s nearest relatives don’t live near Wuhan itself, but 1,100 miles away, along the border between China and Laos, in just the place where the Wuhan researchers repeatedly went to collect wild bat viruses for their gain-of-function experiments.  Add to that the fact that, shortly after the first outbreak, the Chinese authorities deleted their viral genome archives.  Why do that if you don’t have something to hide?  And we shouldn’t forget about China’s dismal lab safety protocols.  And so forth.
 
For the first year after COVID appeared, mainstream outlets like CNN, along with platforms like Twitter, censored these things for political reasons.  After President Trump left office, the taboo was relaxed, and now even mainstream opinion is coalescing around the lab leak theory.
 
China, obviously, doesn’t come away looking good, but the United States isn’t off the hook, either.  It was the U.S. government that funded this research, and the only reason that Chinese scientists were doing it in the first place was because so many American scientists — whom the Chinese see as their superiors — had devoted their careers to making gain-of-function research look necessary, important, and safe.
 
How do people in the pure and applied sciences advance their careers?  By doing the things that the higher-ups consider necessary, important, and safe.
 
And when a lot of high-status people have clustered around a set of prestigious ideas, it’s rare for anything as mundane as evidence or results to break the prestige of those ideas.  Hence the fact that, even after three years of COVID, nearly all of the leading authorities on gain-of-function research are still in favor of gain-of-function research.
 
In the kind of country that America once was, it would be a surprise if a man like Anthony Fauci were able to escape impeachment for repeatedly lying to Congress about this research and his own role in promoting and funding it.  But as it is, Fauci not only spent the remainder of his career as the No. 1 expert authority on the crisis he helped create, but also retired as perhaps the most admired medical professional in the country.
 
To be a scientific expert is to be a self-watching watchman.  Way too many people will keep on trusting you no matter what you actually do with that trust.  There is no accountability.
 
Also, hardly anyone thinks about the fact that people who make careers out of a particular field of research are often emotionally incapable of processing evidence that what they’re doing isn’t necessary, important, and safe — and that it might actually be useless or even harmful.
 
Obviously, the problem of biased expertise is not new.  It existed in the 1950s, when lobotomies were common, and in the late 1800s, when it was normal for women to be institutionalized for “nymphomania,” and even all the way back in the late Middle Ages, when the default response to unexplained illnesses (whether physical or mental) was to find some evil-looking person in the village and try her for witchcraft.
 
The people who earned a living by performing lobotomies had a much higher opinion of lobotomy than the general population.  The people who ran late-19th-century asylums soberly insisted that all those women who were being locked up for experiencing sexual desire to the same degree that men did really had a mental disorder and really needed treatment.
 
And of course Heinrich Kramer, author of the Malleus Maleficarum (by far the number-one witch-hunting guide of all time), could give you all kinds of evidence to back up his claims that witchcraft was the top threat to the peace and order of Europe, and that if judges at witch trials carefully followed the handbook he had written for them, there would be hardly any risk of innocent people being executed by mistake.
 
In the end, their life’s work ended up in history’s dustbin because not everybody trusted the experts.
 
By now we know that you shouldn’t listen uncritically when a man who’s spent his life hunting witches tells you how to hunt witches, or when a doctor who’s earned his way to fame by performing lobotomies tells you about how lobotomies are usually beneficial and only rarely harmful.  And so forth.
 
One can only wish for this kind of levelheadedness in the medical controversies of the present day.
 
Consider the debate over childhood ADHD and the drugs (mainly stimulants like Ritalin, Adderall, etc.) that are used to treat it.  The overwhelming opinion within the psychiatric industry is that the ADHD diagnosis is scientifically sound, and that the drugs (which have of course been very well studied) are effective and safe.
 
Outside the industry, there’s a lot more suspicion.  People with common sense know that children are, by nature, more rambunctious and distractible than adults, and that half of them are more so than the median child.  They know that these traits (despite being burdensome for children in the present-day school system) are not a mental disorder.  And they suspect that it might be a bad idea to start a child on a lifetime of hard drug dependency in order to improve his performance in grade school.
 
Curiously, there is a lot of scientific research that supports this point of view.  We know, for instance, that the academic benefits of ADHD treatment usually last for only a year or two (unlike the ill effects of drug dependency, which last a lifetime).  We know that stimulants suppress children’s physical growth.  And we know that prolonged drug dependency during childhood leads to anomalies in brain development, including permanent deficiencies of the same neurotransmitters that the drug is boosting in the short term.
 
We also know that the drugs damage the dopaminergic system, which regulates reward and pleasure, so that drugged children can grow up to suffer from low motivation, erratic moods, and depression.
 
Also, we know that the diagnostic criteria for ADHD are sketchy.  (For instance, one Canadian study found that children born in December, who are constantly being compared to classmates a little older than themselves, are 47 percent more likely to be medicated for ADHD than children born in January.)
 
Do the majority of psychiatrists (and pediatricians, etc.) make serious attempts to confront these findings?
 
No.  They just produce more handbooks about how to use the drugs “properly” and more studies showing that ADHD drugs achieve their short-term goal of producing a quiet, well-behaved child.  But they avoid putting serious thought into the ethical question of whether all this damage is an acceptable price to pay.
 
It isn’t very different from how, even after three years of COVID, almost everybody involved in gain-of-function research has steadfastly refused to change his mind about gain-of-function research.
