Sabine Hossenfelder – who works on the phenomenology of quantum gravity – has, I’m glad to say, an interest in philosophy of science… and a popular blog called Backreaction.
In her areas of interest, as in mine, there are problems, but in hers it seems it’s always philosophy’s fault, whereas in mine, I tend to blame the practitioners. I don’t think Popper is The Pooper.
First let’s comment on one of her recent postings (I’ll look at earlier ones in later posts).
“Popper is dead.” “…his philosophy, that a scientific idea needs to be falsifiable, is dead….”
Working steadily towards “falsifiable”… no, “Popper” isn’t dead, because there is still nothing to beat hypothetico-deductivism, for which “Popper/Popperism” is often used as a handy label. (I do so shamelessly.) Already in 1908, when Popper was only 6, Student’s t test was judging two theories by comparing what each predicted against the evidence. If you take the one that best predicts the evidence (‘the treatment had an effect’ vs. ‘the treatment didn’t’), that’s elementary hypothetico-deduction.
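That comparison can be sketched in a few lines – a toy illustration with made-up numbers (and using Welch’s version of the t statistic rather than Student’s original pooled one, since it’s simpler to state):

```python
import math
from statistics import mean, variance

# Two small invented samples: outcomes with and without a treatment.
treated = [5.1, 4.8, 5.6, 5.3, 4.9, 5.4]
control = [4.6, 4.4, 4.9, 4.5, 4.7, 4.3]

def welch_t(a, b):
    """Welch's t statistic: how far apart the sample means are,
    measured in units of the combined standard error."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(treated, control)
print(f"t = {t:.2f}")
```

A large |t| says the data sit where “the treatment had an effect” predicts and not where “the treatment didn’t” predicts (under which t should hover near zero) – judging two theories by what each predicted, exactly as above. With these numbers t comes out around 4, well beyond what noise alone typically produces.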
Popperism isn’t exactly the same thing as hypothetico-deductivism, and he didn’t have the last word on it any more than he had the first word. But his philosophy is still basically that of H-D:
Science is the search for theories that best explain or predict the observations.
It’s centered on predicting, and sometimes explaining. It’s not actually centered on testing and falsifying. And testing is a subtle issue, though most people think it’s a cube of concrete. He realised the necessity of rooting out “cheating theories” which can never be disproved; but these aren’t good theories because they tend to be bad at PREDICTING what won’t happen.
Why is the heart of the matter prediction? Lifeforms need to work out what to do, and when to do it. That’s why we remember, feel and think: they help us predict the best things to do at each moment. This process soon starts to involve predicting not just our best choice of actions but also simply “what will happen”. That way we can base our action on what we think will happen, not only on what has just happened. (Theories that always say “anything can happen now” are as unhelpful as those saying “anything we do now will be just as good for our survival as anything else”. It would be much clearer if the word “predict” meant that predicting one thing implied other things were predicted not to happen. Perhaps there is a better word, meaning “exclusively predicts”.)
Sabine says that according to Popper a good scientific theory has to be testable. But that’s two things: ‘Good’, and ‘Scientific’.
Before going on to her comments about what it means for a theory to be scientific, it’s vital to realise that testing a theory may only become possible with technology (or statistical or philosophical skills) developed in a thousand years’ time. I’m sure some current archaeological theories, mooted or latent, can only be tested by trawling through DNA held in the soil surrounding specimens. It was NOT unscientific to posit such theories prior to 1953, even though few could envisage such a test before then. Much more damage is done to science through unjustified “testing-related” criticism, than by people proffering “untestable” theories. That’s why I considered it essential to cover Testing thoroughly in my guidelines:
11) An untestable theory is one which is intrinsically logically untestable, not one for which no technique for testing it is yet known to some person, or indeed anyone. Deducing the scope of implications, effects, or influences of a hypothesis (via which it might be tested) can be slow and unending. ‘Untestable’ is a rare category, not to be routinely flung at everyone else’s new theory.
12) Tests a new theory can uniquely pass are best offered, and may be needed for superiority, but not for pseudo-criteria like theory status, false ‘testability’, or truth. Insisting on a mechanism for a theory is a classic error. A theory often inspires the discovery of its mechanisms and special tests.
In other words, you Do Not need to worry about testing until Later. And you shouldn’t need to worry about anyone yelling Can You Test That???!? before you’ve even finished working out what your theory is.
Sabine’s blog post:
“In practice, scientists can’t falsify theories. That’s because any theory can be amended in hindsight so that it fits new data. Don’t roll your eyes – updating your knowledge in response to new information is scientifically entirely sound procedure.”
Indeed, refutation is technically impossible in theory. But the other thing – proof positive – is so much worse. That’s why Popper stressed the importance of judging based on evidence against theories. But although you can’t criticise Popper for preferring “evidence against”, you can criticise him for stressing black-and-white, yes/no refutation judgements. In most sciences, even physics, we have to judge our favourite from a number of possible theories, sometimes none absolutely refutable, and each better at some things than their competitors:
2) The worth of a theory depends on such aspects as accuracy, generality, simplicity, and the degree to which its implications are genuine predictions (and the more surprising the better).
Indeed, a theory as originally stated might make a wrong prediction, but by twiddling the parameters it can be made to predict any required observations. Popper hated parameter-twiddling, but having seen how animals learn vital skills from their mistakes, I felt it necessary to offer the reminder:
6) A theory is refuted only when all its reasonable instantiations are refuted, not just one. Fixing faults in a theory can mar its other qualities, yet repair is still basic to knowledge development.
Remember from 2) that a more complex and less general theory, even though twiddled to be more accurate (as yet), might well lose out. But our rules for theorising must allow theory development: theory development isn’t a once-and-for-all vote, it’s a marathon.
SH herself rightly claims that in practice, theories can be judged to be inferior, rather than absolutely refuted:
“That’s because repeatedly fixed theories become hideously difficult, not to mention hideous, period. What happens instead of falsification is that scientists transition to simpler explanations.”
We judge by use of subtle qualities, and sometimes, indeed usually, using probabilities. Refutation and even falsification are not words I would have chosen. The words I use are:
1) Science is the generation, judging and honing of theories which model (i.e. explain or predict) the best.
Perhaps “refute” has a slightly different, less absolute, meaning in Popper’s native language. SH would be able to answer that better than me.
“But many physicists not only still believe in Popper, they also opportunistically misinterpret the original Popper.”
Yes. The latter is a big problem.
“Even in his worst moments Popper never said a theory is scientific just because it’s falsifiable.”
I think he did. But it was not “in bad moments” because he was trying to address the “demarcation problem”: “What is a scientific theory?” (I think this is to do with the Wittgenstein business of saying you have no right to speak of subjects about which you know nothing/little or whatever; also there were claims that you could deal only with “facts”, which makes as much sense in advanced cognitive technology as dealing with good and evil in biology. That was the context, and as we might now see it, out-of-date detritus, to be polite, that had to be dealt with at the time.)
That question – what is a scientific theory – is COMPLETELY DIFFERENT from What is a GOOD theory. I think SH confuses those two:
“It’s not hard to come up with theories that are falsifiable but not scientific. By scientific I mean the theory has a reasonable chance of accurately describing nature.”
Yup – she does confuse them. It’s a two-stage process. Check first that the theory does not embody some kind of logical barrier to its evaluation (i.e. that it’s testable – but don’t get obsessed with this, for reasons I gave above; it’s all right – we don’t have Wittgenstein or the Vienna Circle or Freddy Ayer to worry about now)… and second: see if it predicts/explains well (works as a good model – see above also).
“If the only argument that speaks for your idea is that it’s compatible with present data and makes a testable prediction, that’s not enough.”
Oh but it is, so long as its qualities like those mentioned in 2) above are met, compared with other theories.
“My idea that Trump will get shot is totally compatible with all we presently know. And it does make a testable prediction. But it will not enter the annals of science, and why is that? Because you can effortlessly produce some million similar prophecies.”
It’s a perfectly good model of the future, assigning a probability to a future event. It doesn’t deal much with scientific issues, but SH has only introduced it as a red herring, and as far as that goes, it works for her. Also, there ARE indeed infinitely many theories that explain the observations we’ve made – amongst them the theories that will one day take physics forwards beyond the current ones.
“In the foundations of physics, compatibility with existing data is a high bar to jump, or so they want you to believe. That’s because if you cook up a new theory you first have to reproduce all achievements of the already established theories.”
That goes without saying. That’s what it means to model as well as competing theories.
“This bar you will not jump unless you actually understand the present theories, which is why it’s safe to ignore the all-caps insights on my timeline.”
Just because you will probably have to understand existing theories to know what observations they already explain, it doesn’t follow that anyone trying hard to get you to learn the basics of practical philosophy of science is wrong.
SH draws a picture of her science becoming contaminated by crap theories. I can sympathise – two of my sciences have: three areas of palaeontology have been completely destroyed by it, and social psychology has been seriously contaminated. But she’s not helping much, as her use of “predictions” in the following shows:
“This overproduction of worthless predictions is the theoreticians’ version of p-value hacking.”
She criticises the easy proliferation of theories that in practice can never be tested… because nature’s deeper secrets are harder to get at. That just shows how far her science has got, not that searching for better theories is bad science.
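Her p-value-hacking analogy can be made concrete with a toy simulation (hypothetical numbers, nothing from her post): a theorist who mass-produces “predictions” that are pure noise, each of which has a 5% chance of looking confirmed by sheer luck, is almost guaranteed at least one apparent hit once enough are produced.

```python
import random

random.seed(1)

def chance_of_some_hit(n_predictions, trials=10_000):
    """Estimate the probability that at least one of n pure-noise
    'predictions' passes a test at the 5% level by chance alone."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < 0.05 for _ in range(n_predictions)):
            hits += 1
    return hits / trials

for n in (1, 20, 100):
    print(n, chance_of_some_hit(n))
```

One prediction yields a spurious “confirmation” about 5% of the time; twenty yield one about 64% of the time; a hundred, over 99% – which is the mechanism behind both p-hacking and prophecy-spamming.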
“The point is that the easier it is to come up with predictions the lower their predictive value.”
Never mind how many theories it may be theoretically possible to come up with: the theories she hates to see proliferating haven’t done the business. They have NOT explained/predicted better than other theories, and that’s all you need to say to disparage them adequately. No need to confuse theorisation with prediction, or to confuse passing the demarcation test with scientific quality, nor even to confuse the likelihood of evidence given a theory with the ‘likelihood of a theory being right’:
“In this argument you don’t want to show that the probability for one particular theory is large, but that the probability for any particular theory is small.”
And no need to throw mud at hypothetico-deductivism. H-D is the process of building the best models of the universe we can, and that’s all we can ever do.
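That last confusion – likelihood of the evidence given a theory versus likelihood of the theory given the evidence – is worth a toy Bayes calculation. The numbers below are purely illustrative: a theory can make the observed evidence very likely without thereby being at all likely itself, if it is one of many candidates and its rivals accommodate the evidence too.

```python
# Illustrative numbers only.
p_T = 0.001            # prior: one theory among roughly a thousand candidates
p_E_given_T = 0.90     # the theory predicts the evidence well
p_E_given_notT = 0.10  # chance the evidence turns up anyway under rivals

# Bayes' theorem: P(T|E) = P(E|T) * P(T) / P(E)
p_E = p_E_given_T * p_T + p_E_given_notT * (1 - p_T)
p_T_given_E = p_E_given_T * p_T / p_E

print(round(p_T_given_E, 4))  # about 0.009: P(E|T) is 0.9, P(T|E) under 1%
```

High P(E|T), tiny P(T|E) – which is SH’s point about the million prophecies, stated without mud.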
It seems she’s written a book on all this. I make the easy prediction that it will damage the widespread understanding of the philosophy of science. But I can sympathise with what she said on her earlier blog post on the book:
“The current situation in the foundation of physics is a vivid example for how science fails to self-correct. The reasons for this failure, as I lay out in this book, are unaddressed social and cognitive biases.”
I can sympathise with that, to put it mildly. As I laid out in my book.
“But this isn’t a problem specific to the foundations of physics. It’s a problem that befalls all disciplines, just that in my area the prevalence of not-so-scientific thinking is particularly obvious due to the lack of data.”
What physics needs is some huge machine built to produce terragigions of new data per second. That should fix it.
“This isn’t a nice book and sadly it’s foreseeable most of my colleagues will hate it.”
Yup, they will.
“By writing it I waived my hopes of ever getting tenure.”
Do you know, I’ve just realised that my chances of ever getting a top place in palaeo at, say, Berkeley, the AMNH or the NHM in London may have been seriously compromised by saying everyone there was an idiot.
“…But I have waited two decades for things to change and they didn’t change and I came to conclude at the very least I can point to the problems I see.”
Seriously now, you tried to do The Right Thing. But another thing you could do is find the people who know some good practical philosophy of science, and listen to them before writing a book on the subject.
I’m sorry to be mean, but she’ll never listen to my advice, and anything I did say would seem cheeky. However there were lots of comments to her post, many interesting. It would be cheap to say that not everyone agrees with her, since she said they wouldn’t at the start. But these are some I’m glad I read, starting with one by a certain ‘gowers’ – a Timothy, I’d guess, by the classiness. Soon, another talks of mathematicism, and I think he has a point. It was that that nearly killed neural nets. Panda-girl mentions the demarcation problem, and Jayarava Attwood makes my point about the logical positivists. In between them, SH says she isn’t criticising Popper (only burying him, no doubt). Finally someone says Sean Carroll says don’t worry too much about falsifiability. I don’t like Sean Carroll, but he’s right that the testability/urgent falsifiability idea has gone way past its station. And that poster’s comment about Kuhn has a point. For example, Kuhn saying what people tend to do (demand you provide an alternative theory once you’ve disproved another’s, for example) has too often been taken as advice on what scientists ought to do.
gowers said…
This discussion reminds me of the question of what makes a good mathematical conjecture. It’s not hard at all to come up with a statement that nobody has any idea how to prove or disprove, and therefore that is (i) consistent with the evidence we have so far and (ii) testable (in the sense that somebody might one day come up with a proof or disproof). But that doesn’t make it a good conjecture. A good conjecture is one that makes surprising and testable predictions.
6:55 PM, November 06, 2017
PhDstudent said…
Nice post. I am reminded of the people who post every possible prediction on Twitter for, e.g., the outcome of the 2020 presidential election, then delete all but the one that turned out to be correct after the event and try to convince people that they were psychic.
7:18 PM, November 06, 2017
bud rap said…
“That’s because repeatedly fixed theories become hideously difficult, not to mention hideous, period.”
Well, that succinctly describes particle physics and cosmology alright. Both disciplines are an unscientific compendium of mathematical fantasias that, in the aggregate, bear only a glancing resemblance to physical reality. And it is that lack of resemblance that makes them both so horrible in scientific terms. Observed reality does not contain the features that are prominent in the standard models.
Of course, to a mathematicist (like Max Tegmark for instance), this does not present a problem because the mathematics is thought to underlie and be determinate of reality. If their mathematical models require fractionally charged particles and dark matter, then such must exist. The absence of evidence is not evidence of absence, as the popular sophistry has it. Reality is deficient, not the models which are, by definition, always correct or at least always correctable.
While Tegmark may be an extreme example, it could be argued that mathematicism is the dominant paradigm in the scientific academy and has been for the better part of the past century. Empirical science no longer constitutes an open ended investigation into the nature of physical reality but is now a mere adjunct of theory, dispatched to remote realms in search of confirmation, no matter how threadbare, for the preferred standard models. For evidence of this look no further than the LHC and LIGO, where grandiose claims, of models triumphant, are spun from minuscule evidence that has been lovingly massaged from enormous piles of carefully pawed over data. Science has become a lab assistant in the department that bears its name.
Sabine, you and Lee Smolin are right to be uneasy with the current situation and it is undoubtedly brave of you to speak up, especially since it has rendered your employment situation difficult. Many Worlds, Parallel Universes and similar vapid theoretical concepts are the direct offspring of mathematicism. But mathematicism is essentially just a kind of modern day secular mysticism. Its objects of concern lie in the supernatural realm of the human imagination, far beyond the reach of proper scientific inquiry.
May I suggest that to defeat mathematicism, if that is your purpose, you need only rise to the defense of empiricism and logic as the foundational elements of science. Math will consequently reacquire its proper relevance as a branch of logic, and an essential modeling tool alongside qualitative analysis. Mathematicism will then be free to slink off to the philosophy department, if they’ll have it.
Best of luck. Your blog is a pleasure to read.
8:25 PM, November 06, 2017
panda-girl said…
I agree with richard the naivetheorist. Falsifiability alone cannot make a theory (or hypothesis) scientific. But if a theory is not falsifiable, using your example, one cannot even amend that theory after a test that is not consistent with its prediction. An unfalsifiable theory cannot contribute to scientific progress. I am sure you would agree with this.
I believe, if you abandoned the falsifiability criteria, you probably would need to replace it with something else to address the demarcation problem. Do you agree, Sabine?
12:52 AM, November 07, 2017
Sabine Hossenfelder said…
Brian,
I am not criticizing Popper. I’m saying Popper alone isn’t enough.
2:36 AM, November 07, 2017
Jayarava Attwood said…
I read this post yesterday and was too boggled by it to comment. I’m still confused today, but allowing it to sink in.
I think to be fair to Popper one must situate him historically. The falsifiability criteria was a response to the Logical Positivists. They argued that a proposition can only be true if it is verified. And Popper countered with the now famous black swan argument. Philosophical truth cannot be sought by verification; we can only show that something is not true by finding counterexamples. There may always be a black swan waiting to come along and falsify something believed to be true.
Of course Popper did not allow for retrospectively changing one’s prediction to fit new data. And maybe in retrospect, that was a mistake. And science doesn’t really seek the truth, IMO it seeks accuracy of explanation and prediction (some scientists believe that this amounts to truth, but naive realism is another story).
I think I knew that Popper was at least incomplete because of Higgs. The LHC was not made to falsify the predictions of Peter Higgs. It was made to “search for the Higgs boson”. To verify the prediction. A lot of scientists are apparently still logical positivists.
Thanks for making me *think*!
BTW the field in which this process has the largest impact is not physics, but economics. The standard economic models constantly fail to predict the real world, but are tweaked to fit the data retrospectively. Economists believe that if they can do this then they understand what is going on. They fail to predict events like the deepest and longest recession in living memory, but keep their jobs anyway. Because their models can be endlessly tweaked.
5:24 AM, November 07, 2017
Phillip Helbig said…
Sean Carroll and others have argued that we should give up falsifiability as an important criterion in science. I disagree. Any scientific theory has to be falsifiable in principle, pretty much by definition. Some things might not be falsifiable in the short term, but that is no reason not to work on them. But one shouldn’t equate these with things which are not falsifiable, even in principle.
I think Kuhn has done much more harm than Popper. 😐