Firestein, Barwich, and How To Spot Good Science

Stuart Firestein is trying to solve the problem of spotting dodgy scientific advice – urgent today, but always tangled with another vital task: how to choose between scientific theories.

In stepping into all this he runs the risk of contradicting himself – an unexpected trap for those new to it. And if he gets it wrong but is listened to, he risks adding to the already mountainous obstacles faced by good new scientific theories.

He admits it’s an age-old challenge:

“Separating real science from pseudoscience is not a new problem.”

So how did he do?

The web editor has made “Spot the fakes by their certainty” the “jump-out” message, but the main theme (and the title) is how they get away with it, followed by what we can do about it, featuring the Certainty weapon – with which he shoots himself, amongst other errors.

But first he says how anti-sciencers copy science, starting with doubting authority. And how they dwell on fostering doubt. Of course he wants us to doubt them, yet to trust him as an authority. Can we? He says how Galileo et al. marked themselves out not just by doubting the church:

“But Galileo and those who followed him produced alternative explanations, and those alternatives were based on data that arose independently from many sources and generated a great deal of debate, and, most importantly, could be tested by experiments that could prove them wrong.”

Anti-vaxers do offer an alternative explanation: vaccination causes [whatever]; it’s short on detail but it is an account of what lies behind the actions and observations. Galileo et al. based their theory on the same astronomical observations as the supporters of the old theory, not on new evidence. Generating debate isn’t what marks out a good theory. And actually, prior to Kepler’s ellipses, the new theorists still stuck to circles; that version was “proved wrong”, since it explained the observations no better than the old theory plus epicycles; the new theory also needed its own kludges. Firestein does not look like an authority here.

“There are those who try to cloak their motives in the trappings of science by claiming they are taking the scientific posture of doubt. Science, after all, depends on doubt. Every scientist doubts every finding they make.”

We have to search hard for actual admissions of doubt in his own arguments, despite claims that good scientists must have doubts. He displays few. He does have hope though: that we accept his authority. A few scientists express doubt but many don’t, and many seem to claim certainty. Who? Every one who unquestioningly speaks of Facts; one excellent example: Richard Dawkins. Another? The author himself.

“The same is true for the bewildering variety of brain-function enhancers being pitched with their ‘scientifically proven benefits.’ The purveyors of these products ask if you seem to be more forgetful, have less energy, are less able to concentrate? Well, of course you have all those symptoms. Everyone does. Unless you track down an underlying pathology with a brain scan or sophisticated psychological test, the ‘cure’ will always work because there is no disease.”

Don’t think “Poor old Susan” here – think about how Stuart presents not just sophisticated psychological tests as a source of certain authority (their underlying statistical blunders have made them all suspect for years), but also that newer source of unreliability: brain scans for identifying subtle variations.

“A lengthy series of objective tests may lead to a diagnosis of Parkinson’s disease. This is followed by narrowly prescribed drug treatments that have been worked out over several years of controlled trials and clinical experience. Even then there are no magic bullets. The drugs must be carefully dosed to each individual, and they may only have a limited effectiveness. But in this case there is a disease with recognizable pathology and a potential treatment aimed specifically at that pathology.”

Here he acknowledges that the treatment might not work, but has No Doubt that a reliable, meaningful, valid diagnosis will be made. All Hail DSM – the magic book of perfect spells!

Now he considers what is to be done. Can a non-scientist even hope to spot fraud?

“How does one learn to spot the con without getting a Ph.D. and spending years in a laboratory?”

Well, what a good job that having a Ph.D. and years in a lab assures freedom from following bad theories, or even from propagating them oneself. Ten years ago I asked, as he does here, what can be done to help the public, journalists, and scientists ensure good understanding and practice of sound science. For relative experts I recommended the deceptively simple strategy of choosing the best explanation for the observations – though that is a strategy for experts. Also, of course, resist the temptation to claim unreal authority for science by claiming that science produces facts, proofs and truths, because this can be convincingly argued against, thus crippling science’s authority. My view is that laypeople can be helped to spot pseudoscience by a series of dodgy heuristics NOT to be used by experts when judging scientific theories. Dealing with fraud is largely a matter of morality, and of social and political strategies for making people damn well do what it is clear they ought to be doing, or, similarly, not doing.

Firestein addresses the problem of what to do when you need maths to understand something, to which I will add that experts also very often make mathematical mistakes. There was a well-publicised case of a majority of medical doctors getting the simplest Venn-diagram problem wrong, confusing likelihoods with prior probabilities. There is also the totally unappreciated problem of the mathematically proficient forcing the wrong mathematical model onto a problem, when a more complex or different model, or even a more algorithmic or qualitative approach, would be better. AI has suffered terribly from this, and so has psychology. Totally unappreciated is that genetics has too. I know how hard it is to convince a field of its systematic mathematical blunders when its members have less mathematical understanding, but also the hugely tougher challenge of showing a field that it is following the wrong maths when those leading it astray have unquestionable mathematical expertise.
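
To see how easy that particular blunder is to make, here is a minimal sketch of the classic screening-test version of the problem. The numbers are illustrative assumptions of mine, not figures from the study mentioned above; the point is only that the posterior P(disease | positive) is nothing like the likelihood P(positive | disease) when the condition is rare.

```python
# Illustrative Bayes calculation for the likelihood-vs-prior confusion.
# All numbers are assumptions chosen for illustration, not data from any study.

prevalence = 0.01            # prior: 1% of the tested population has the disease
sensitivity = 0.90           # likelihood: P(positive test | disease)
false_positive_rate = 0.09   # P(positive test | no disease)

# Law of total probability: overall chance of a positive test
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' theorem: what a positive test actually tells you
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.2f}")
# ~0.09: far from the 0.90 you get by mistaking the likelihood for the posterior.
```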

“And half the audience is gone with each equation (Stephen Hawking said that).”

Actually Hawking was quoting his publisher. Here we have an opportunity to gesture towards the multiple evils of the science publishing industry, and to repeat that we’ve had far too much of publishing gatekeepers making important decisions in science.

Finally he makes the claim that you can judge a pseudoscientist by their certainty. Admittedly here he does say “most of the time”, unlike many examples elsewhere. But unfortunately most examples of pseudoscience I’ve seen come from those currently employed as scientists – and they have no doubt at all that they’re right.

Without much delay, he once more honours his argument in the breach:

“Mystery and uncertainty may not strike you right off as desirable or strong traits, but that is precisely where one finds the creative solutions that science has historically arrived at. Yes, science accumulates factual knowledge,…”

Factual knowledge undeniably implies acceptance of Absolute Truth. And Certainty.

“But like your blood pressure medicine, the stuff we know is reliable even if incomplete.”

Unqualified use of “reliable” implies 100% certainty somewhere.

We need strategies for identifying good science that work for all the purposes for which it’s required. Relying on Certainty or Good Maths doesn’t work: too many good scientists are too certain, and too many good mathematicians do faulty science. Detecting blatant cynical fraud designed to fool the public and detecting fraud designed to fool other scientists are two different problems. There is no elegant solution to the first, and the second is a matter for society. Judging plausible theories offered in good faith by at least pretty good scientists is a third important issue, and Must Not have solutions applied to it that are supposed to work for the other two problems but probably don’t even work for them anyway. Peer review is a prime example of such a misapplied solution; good maths, and apparent certainty in the advocate, are others, alongside a host of other pseudoscientific social dodges, from the Overton Window to gut feel to personal career convenience.

He says pseudoscientists often try to imitate scientific approaches. (I am reminded of a bullshitting pseudoscientific Wikipedia editor who accused me of having all the appearance of a proper scientist and yet… he could tell I wasn’t because if it quacks and walks like a duck…!)

“Imitation may be the sincerest form of flattery. The problem, as we all know, is that flattery will get you nowhere.”

Ha! A. S. Barwich INSISTED ON FLATTERY! Rather than waste far more time on critiquing his piece than it and Firestein deserved, I had described it in a quick tweet, well short of 280 characters. Barwich responded on Twitter by calling me “an arse on the internet” and saying she was muting me. Presumably, because she had only met me via Twitter, my opinion was valueless. Of course I had only encountered her over Twitter, and I hadn’t even heard of her current university.

But trust me – if Barwich and Firestein don’t get their hands on the running and ruining of your science, it will only be for lack of opportunity.
