A widely reported, peer-reviewed study concluding that the Omicron variant of SARS-CoV-2, the virus that causes Covid-19, evolved through months of circulation in West Africa has been retracted, and some scientists are now saying – wait for it – that the research would have been better served if the work had first been posted online as a non-peer-reviewed “preprint.”
Welcome to the messy world of science today.
“The paper drew criticism almost from the moment it was published, and some scientists say the problem could have been avoided if the study had been posted as a preprint first, allowing independent scientists to comment,” the online journal Science is now reporting, while quoting evolutionary virologist Aris Katzourakis’s observation that “this would have been slaughtered on Twitter within a few days of being on preprints.”
Preprints are today’s free-for-all forum where science does what science is supposed to do: Question everything.
The forum has, however, drawn considerable criticism from some in the scientific community who contend that letting contrarian scientists, not to mention the masses, in on scientific discussions can lead to all sorts of “misinformation,” and God knows many in science and the media today have a misinformation fixation. I admit to once sharing it.
Luckily, a journalist I came to consider something of an idiot-savant long ago taught me that people who cannot understand logic can still sometimes come up with excellent ideas. This is not, however, a universal view.
“Preprints…stand to disrupt traditional science publishing if researchers see them as an end product rather than a stepping stone to peer review,” Michael Mullins argued in a commentary in The Scientist last year.
“Some authors choose to stop with the preprint with no attempt at further publication. Preprints give authors a permanent digital object identifier (DOI), and preprints are immediately searchable and citable. These features create incentives for preprint authors to skip the scrutiny of peer review.
“If the preprint becomes the end product (or a ‘no-print,’ or no formal publication) for some authors, we risk a flood of publications of low quality. Peer review aims to assure the quality and veracity of published scientific work.”
Wheat and chaff
This would be all well and good if peer review were as efficient as a thresher, even an ancient one. Unfortunately, the process is fraught with human error, often linked to “groupthink” – a phenomenon that occurs when a group of well-intentioned people makes irrational or non-optimal decisions, spurred by the urge to conform or the belief that dissent is impossible – and to confirmation bias, the natural human inclination to spin information to fit what one wants to believe.
The infamous “Salem Witch Trials” in Massachusetts were a classic example of both.
“A strong belief in the devil, factions among Salem Village families and rivalry with nearby Salem Town combined with a recent smallpox epidemic and the threat of attack by warring tribes created a fertile ground for fear and suspicion,” a history of that event recounts. “Soon, prisons were filled with more than 150 men and women from towns surrounding Salem; their names had been ‘cried out’ by tormented young girls as the cause of their pain.”
The bulk of Salem residents managed to convince themselves the devil was real. The minority of non-believers decided it best to go along with the prevailing belief. Evidence was gathered pointing to a variety of people as the devil’s accomplices. And as a result, 19 people were hanged; one man was crushed to death; and several people died in prison.
Thankfully, the belief in devils and witches has faded in the years since 1692, but groupthink and confirmation bias remain problems.
The now-retracted paper on the evolution of Omicron is itself an illustration of these problems. It had 87 co-authors, learned men and women all. One would think that someone in that peer group would have caught the mistakes in the paper even before it was submitted for official peer review.
But they didn’t, and neither did the peer-review panel, which is something that happens with alarming regularity.
Scientists were talking about this problem long before SARS-CoV-2 entered the picture, and the new virus, coming at a time of pandemic and rising political tribalism in the U.S. (see the Salem example above), only makes the problem worse.
Mullins at The Scientist ignores the nature of all this and fobs recent failures off with the excuse that “peer-reviewed research is not infallible, of course. In the earliest months of the COVID-19 pandemic, leading medical journals began cranking out studies that pertained to the new pathogen, and more articles received peer review with shorter turnaround times. This allowed some questionable research to slip through and led to high-profile retractions in The Lancet and The New England Journal of Medicine, among others.”
Short turnaround times are the old, old “deadline” excuse for bad journalism: “Oh, I was working on a tight deadline and didn’t have time to check the information.”
It’s an easy excuse. It’s also usually a lame one, especially when more than one person is involved in the work. Groupthink almost always enters the picture there, and any debrief of what happened invariably leads to the discovery that there were people who had doubts about the story but didn’t voice them because of “peer pressure,” the evil twin of “peer review.”
It seems hard for many people, journalists and scientists among them, to embrace the warning of the late Gen. George S. Patton, who once observed that “if everyone is thinking alike, then somebody isn’t thinking.” Instead of welcoming critics and encouraging discussion and debate, they do the opposite and try to suppress it.
These days we have Anthony Fauci, the nation’s Covid czar, declaring that “attacks on me, quite frankly, are attacks on science” with seemingly no understanding of how science is intended to work, which, as the American Museum of Natural History outlines, is pretty simple: form a hypothesis, test it, and report the results so others can check them.
Or at least that’s the first step. The second step comes when others copy your research and reach the same conclusion or don’t. That’s when things get interesting, and everyone gets the chance to try and sort through differing opinions from dueling experts because science is seldom a black-and-white business.
This is worth repeating: science is seldom a black-and-white business.
Moses does not come down from the mount carrying the words of God. Science is not religion.
Scientists come down from the mountain with theories. Other scientists attack those theories. Over time, repeatable evidence can establish that some of those theories represent the “truth.” Others are eventually proven false. And some live in limbo for a long, long time.
Archeologists, for instance, are still arguing over how the first humans reached North America based on new and accumulating data. The argument is actually less settled now than it was decades ago, when there was a general consensus that our species first trooped east over the so-called “Bering Land Bridge” between Asia and North America and then eventually fanned out to populate the entire continent.
The latest data to question that theory comes from Cooper’s Ferry, Idaho, where spear points dating to nearly 16,000 years before present have been uncovered. The Cordilleran Ice Sheet at the time covered British Columbia and most of the U.S. Pacific Northwest.
An ice-free corridor between the Cordilleran and the Laurentide Ice Sheet covering Canada and much of the U.S. to the east as far south as St. Louis wouldn’t open up for another 3,000 years. Thus the new data adds evidence for those scientists who believe the first Americans came to the continent by sea along the coast rather than by land over the Bridge.
Furthermore, the researchers working at Cooper’s Ferry have observed, the stem points they found are most similar to those found in a “blade-point industry dating from approximately 32,000 to 20,000 calendar years before present in Hokkaido and northern Honshu (Japan)…which may reflect the arrival of different human groups with different cultural adaptations” in North America.
The arrival date for humans in North America is being steadily pushed back in time, with one group of scientists claiming evidence that some member of our species, or a closely related species, was here 130,000 years ago. The claim is much debated, but not wholly impossible.
It is now well accepted that Neanderthals, one of our ancient relatives, appeared in Europe hundreds of thousands of years before Homo sapiens arrived to displace them.
“Current evidence from both fossils and DNA (now) suggests that Neanderthal and modern human lineages separated at least 500,000 years ago,” according to the Natural History Museum. “Some genetic calibrations place their divergence at about 650,000 years ago.”
Long thought to be a separate species, Neanderthals were linked to modern humans by developing genetic research able to track ancient DNA. Scientists have now found DNA showing that ancient Neanderthals bred with both Denisovans, another extinct offshoot of our family tree, and with us.
This is science at work, stumbling as it does toward conclusions as to how the world functions. Those screaming “listen to the scientists” during the height of the pandemic might have been those with the least understanding of how the process works because, to repeat, scientists are not priests.
And if they were, Western society would be in the same predicament it was in back in the Sixteenth Century when the Protestant Reformation split the Catholic Church. Then people were forced to decide whether they wanted to follow the teachings of Martin Luther or the dictates of Pope Leo X.
The modern corollary for the pandemic would be the teachings of Sweden’s Anders Tegnell or the dictates of Fauci.
Early in the pandemic, Tegnell was vilified by many – including Fauci and his then boss, former President Donald Trump – for Sweden’s restrained response to SARS-CoV-2. While the U.S. was locking down and masking up, Swedes were going about their business largely as normal and masking only in special circumstances.
The verdict is still out on which policy was best, but at this time, according to Worldometer data, Sweden has recorded deaths at the rate of 2113 per million since the pandemic began in 2020, while the U.S. death toll stands at 3335 per million, or almost 60 percent higher than that of Sweden.
Still, it will be a long time yet before it is determined whether the Swedish or U.S. approach to the pandemic was the “right” one. Key differences that might affect Covid-19 mortality exist between Swedes and Americans, starting with obesity rates. There are approximately three times as many obese people in America as in Sweden, according to the intergovernmental Organisation for Economic Co-operation and Development.
And obesity has been linked to metabolic syndrome, which is among the major co-morbidities now helping to determine which of those who catch Covid-19 die. The interaction of co-morbidities, so-called non-pharmaceutical interventions (NPIs) such as masks and lockdowns, the age structure of various populations (another big factor in death counts) and Covid-19 mortality are destined to be argued about for years to come.
Then, too, the pandemic and the responses to it came with economic and social costs yet to be fully weighed, and the SARS-CoV-2 virus is still killing people despite vaccines that help protect the infected from developing Covid-19. The U.S. death rate could fall in the years ahead, or a new variant of the virus, which continues to mutate, could emerge to push the death rate higher here, in Sweden or elsewhere.
Scientists cannot know for sure what will happen because, again, they are not gods, no matter how much some of them might like to believe they are. And it is the latter belief that plays a big role in causing the same damage to science today that it caused to mainstream journalism in the past decade.
Loss of faith
When the Pew Research Center polled Americans in February, fewer than 30 percent expressed a great deal of faith in scientists to act in the public interest.
Scientists were, admittedly, in better shape than journalists, business leaders and politicians. Only 22 percent of Americans expressed no or little confidence in scientists. Journalists, business leaders and especially politicians racked up big numbers there, with distrust hitting 60 percent, 60 percent and 76 percent, respectively.
But then business leaders and elected officials have long been distrusted. Pew polling showed their numbers for 2021 about the same as for 2016, and journalists have been on a long slide since the start of the decade. A majority of Americans still had a fair to great deal of faith in the people who report the news in 2018, but that turned to majority distrust in 2020 and Americans have only grown more distrustful since.
Blame the great partisan divide, really more an urban-rural divide, for a big part of this. The “hicks from the sticks,” as some have called them, seem to every day grow more distrustful of the urban elites looking down on them.
With pandemic disputes raging over the value of social distancing and masking, not to mention the benefits and potential risks of vaccines and repurposed drugs, Red-Country faith in scientists has plummeted. More than a third of Republicans and those who said they lean Republican expressed distrust for the men and women of science when Pew polled.
On the other hand, Pew found a deep faith in science these days among college-educated Democrats, which is somewhat ironic given that many of them are the same people embracing now President Joe Biden’s push to recognize “Indigenous Knowledge as one of the many important bodies of knowledge that contributes to the scientific, technical, social, and economic advancements of the United States and our collective understanding of the natural world.”
Historically, the so-called “New World,” of which the U.S. is the major part, did make huge contributions to the so-called “Old World” on the opposite side of the Atlantic Ocean. Just as the latter contributed to the former.
Potatoes, sweet potatoes, maize, tomatoes, chili peppers, cacao and peanuts were just some of the crops brought back from the New World to flourish in the Old. And the Old World provided the New with horses, whether through reintroduction or supplementation of whatever small numbers of horses might still have existed on the continent.
The history of the horse in North America is another issue now in debate, but there is no argument about how horses introduced from Spain to the American West in the 1600s changed Indian culture.
“The introduction (or reintroduction) of horses into plains Native tribes changed entire cultures,” according to the University of Nebraska. “Some tribes abandoned a quiet, inactive lifestyle to become horse nomads in less than a generation. Hunting became more important for most tribes as ranges were expanded. More frequent contact with distant tribes made competition and warfare more likely. Eventually, in most tribes, a person’s wealth was measured in horses, and great honors came to those who could capture them from an enemy.
“Before horses, dogs were the only pack animals on the plains. The harnesses and equipment originally designed for dogs were easily adapted to horses. Obviously, horses could carry much larger loads than a dog.
“Horses reached Nebraska by the 1680s and the upper Missouri by the 1750s. Tribes in eastern Nebraska (Pawnee, Ponca, Omaha, and Oto) used horses for buffalo hunts, but continued to grow maize and live in earth lodge villages. In the western part of the state, the Sioux, Cheyenne, and Arapaho lived in skin tepees and roamed over most of western Nebraska as nomadic hunters. Horses allowed them to expand their traditional nomadic lifestyle across the plains.”
These nomadic hunters did not, however, develop any idea of how to manage and conserve the great buffalo herds, and later – encouraged by a U.S. government that recognized it could starve Native tribes into submission – they were led into playing a pivotal role in the disappearance of the buffalo.
“What most people don’t consider in their `Dances With Wolves’ version of history is that Indians were involved in the market,” the University of Montana’s Dan Flores told the Chicago Tribune in 1999, before such statements became politically incorrect. “They were cashing in on buffalo in the 1840s as their principal entree into the market economy, and very few (wild) species are able to survive when they become a commodity.”
The whole idea of Native Americans as the “first conservationists” is largely built on the myth that they had developed a knowledge-based conservation philosophy that somehow preceded John Muir, George Bird Grinnell, Gifford Pinchot and a handful of others who, at the end of the 19th Century and the start of the 20th, championed the idea of wild resources – fish, timber, wildlife – as renewable resources.
There were no doubt some tribes, primarily Pacific salmon fishing tribes on the North American West Coast, that practiced rudimentary forms of conservation, but the concept of scientifically limiting harvests to ensure the renewability of the resource did not exist.
“Appealing as this image of a Native American environmental ethic is, it is not accurate,” as the free-market environmentalist Terry Anderson once summarized. “The spiritual connection attributed to Native Americans frequently does not mesh with the history of Indian resource use.”
The first Homo sapiens to inhabit the North American continent were phenomenal hunters and observers. One does not survive in a wilderness using only primitive tools without being so intellectually equipped. They were living off the land for generations upon generations before white boy Chris McCandless, made famous by the largely fictional book Into the Wild, managed to starve to death in the Alaska wilderness in less than three months, only to have some define him as “competent” at wilderness survival because he lasted that long.
But the idea that Native Americans were the “original conservationists,” as Anderson writes, “obscures the fact, fully acknowledged by historians, that American Indians transformed the North American landscape. Sometimes these changes were beneficial, at other times harmful. But they were a rational response to abundance or scarcity in the context of institutions that governed resource use.”
That “rational response” accepted, among other things, the number one, fundamental reality of living off the land: Tomorrow means nothing if you don’t survive today, and personal survival trumps whatever concerns you might have about the fate of what you need to kill to survive.
Conservation of wild resources, for better or worse, is a luxury afforded societies that have found ways to free themselves from dependence on wild resources. Most of the one-time Asians who settled the continent from the west had yet to reach that state of luxury before Europeans invaded from the east, and the two groups together didn’t really figure it out until they’d laid waste to a good share of the continent’s wildlife.
By 1890, as the conservation movement was just forming, the U.S. Biological Survey estimated the country’s entire population of whitetail deer numbered 300,000. There are now an estimated 1.2 million living in New York state alone; Texas is home to 5.5 million, and the national estimated population is 34 million – more than 100 times the population from back in the good old days of 1890.
Many, if not most, Red, rural residents with dirt under their fingernails are aware of the history and success of wildlife management on the continent. Many, if not most, Blue, Ivy-League-educated, urban residents are not. Is it any wonder that trust fades when the latter, who largely embrace “Indigenous Knowledge,” start ranting at the former to “listen to the scientists”?
And when the scientists themselves are associated with the Blue, Ivy-League-educated, urban elite….