Defending Baconian Induction

In conversation with Jagdish Hattiangadi

Karl Popper was never as wrong as when he spoke about Francis Bacon. And it begins at the end, with the children of Bacon’s work and the darkish hole that Popper – and Thomas Kuhn alike – saw them as crawling from. In his paper On the Sources of Knowledge and of Ignorance, Popper was unfortunately sucked in by an old mistake, something that had been knocking around the halls of philosophy departments for centuries: that Bacon’s scientific method was the precursor of John Locke’s empiricism. From here, the mistakes continue…

There are ways to wriggle out of this, as there are with anything, but the comparison is a little hard to explain coming from a man who was ordinarily so rigorous (to the point of often doing his own translations). For Locke, our human senses are the whole game: they are the source of knowledge, and completely free from error. So he is exactly the type of empiricist that Popper hated, the type who thinks of truth and knowledge as floating in the air, bombarding us with an accurate picture of the world out there. If the world doesn’t lie, and neither do our senses, then all – and any – mistakes can only possibly happen when we misinterpret our otherwise perfect sensory experiences.

Locke was wrong. But was Bacon also guilty of the same empiricist mistake? In his 1620 book Novum Organum, Bacon talks in similar-sounding terms, but he means something very different. His recommendation to budding scientists was to begin building tables of the natural world, charting the degrees to which things occurred and were sensed, as well as the degrees to which those things were not (“tables of presence and absence”).

Slowly and experimentally building up a natural history of things in this way is not the same as Locke’s blunted idea that nature comes to us clear and ready to be understood. Compile all the sensory reports you like (of the kind Locke recommends) and you will still never get close to what Bacon is asking from us – you will never get the all-important discrepancies. Bacon spoke in empiricist language – talking about the “essences” and “natures” of things – but he did so in much more nuanced ways.

When someone speaks about the nature of an object today, we tend to imagine something singular, absolute and unchanging. Bacon’s use of the term was much closer to that of a tentative nature or a nominal nature. In fact, his whole point had little to do with sensations at all; rather, he was talking about an experimental natural history where the primary things we are hoping to find, and record, are deceptive appearances.

To think like this is to start with a hard anti-empiricist attitude: if the things out there, beyond ourselves, have natures or essences waiting to be discovered, then we cannot ever fully know what they are. As things appear to us, they are illusory and misleading. So we experiment, not to explain away – or disguise – the inconsistencies and problems we find with our senses (as Locke did), but so that we can isolate those errors, learn from them, and record them in a way that emphasises the deception. The whole point is to avoid the Lockean or Aristotelian hope (and formulas) for an intelligible collection of recorded sense data. Here we can begin to see the “enigma of his [Bacon’s] kind of natural history.”

Some of the mistakes here come back to the use of the word experiment. Despite the time in which he wrote, Bacon didn’t whittle the term down to something as simple as “laboratory work, or work with instruments and measuring devices.” Already he had larger and more sophisticated ideas about observing objects and data in previously unobservable circumstances – isolating and removing aspects of the natural world, and seeing how they behave without all the background corruption. But this doesn’t quite get us to the “new Baconian sciences”, as Thomas Kuhn called them, leaving the meaning of experimentation as something focussed primarily on the source, or location, of what is being observed.

The collection of data was not the emphasis that Bacon wanted science to have. Forget the sensory impressions, forget the large and growing tables of observations, forget the isolation of phenomena, even forget the discovery of repetitions in nature; what matters most is that we “learn from our errors”. And to head off another common misconception about Bacon, this is not just “perceptual error” but also “conceptual error”. Here it all starts to sound very Popperian, and perhaps it would be more accurate to describe Bacon as a precursor to Popper rather than to Locke, but there are important differences. The most important being that, rather than “errors in hypotheses, guesses, or conjectures”, Bacon talks of “errors in appearances, including perceptual appearances” – errors which “must also exist” if we take the Popperian position that our objective experience is “theory-laden”.

We get to this happy shore by cross-examining the world around us, holding our own perceptions to account and criticism, and thereby interrogating nature itself. With a twist by Robert Boyle that made space for mechanical principles as well, Bacon’s method of “true induction” was adopted in the middle of the 17th century by the Royal Society. And with that, this slightly modified “experimental philosophy” became the centre of the scientific world.

But success isn’t enough; we are looking for truth, after all. And so fallibilism is a good place to start. The Aristotelian method, which had dominated for two thousand years before Bacon hit the scene, involved a vocabulary of first principles, of proper knowledge, and of higher forms, while denigrating the hypothetical, the conditional, and even the mathematical, as lower types of knowledge. It searched for bedrock, and yet showed no way of actually reaching it – the Socratic Method is a wonderful way to produce refutations, but it is not the affirmative foundation that Aristotle wanted.

To wriggle free from this problem, Aristotle in the Posterior Analytics took us into a new language of logic, demonstration and intuition. And it too didn’t work! No matter how useful or valid a demonstration of this logic was, it could only ever produce knowledge if the premises of the argument or statement were already known and taken for granted. A self-referential, and redundant, syllogism – something that pushes the problem onto the premises, then onto further premises, and into an infinite regress.

Circular arguments are not impressive things to stake the future of science and progress on. And if we must rely on deduction of this kind, we are doomed. So what is there left for us that can produce results and guide the way to truth? This is the question that births induction for Aristotle, as another method whereby knowledge is drawn from observations, later “extracted from our memory”, and then “followed by a mental discernment of its essence from its many remembered attributes”. Here the hard work of induction – as well as the essence of first principles – comes to us not from that endless chain of demonstrations, that infinite regress, but directly from “mental intuition”.

And Francis Bacon is having none of it. Again, in strong Popperian tones, he talks about our initial observations as always being prone to error, always letting us down, and never capable of the high task that Aristotle demands from them. Then he moves on to the imperfections of our language, our powerlessness to describe the true nature – or the essence – of anything. The only thing that is happening when language feels as important and surgical as Aristotle wants it to be is that we are imposing it onto the world, not grasping some deeper reality through it. It can, and should, be dismissed as a “childish” or “naïve” method, unable to produce results.

Instead Bacon compares the human mind to a “broken mirror”, reflecting nothing as it really is, distorting everything. And when Bacon talks about his method of true induction, he is imagining something that would bypass (or correct) all those mistaken ideas about objective reality and the human mind. Baconian induction is a way of weeding through the distortions and finding the reality hidden far behind them. In short, his unique contribution to epistemology is to “extract affirmative knowledge” via a “method of refutation”.

Or call it “error analysis by division”, as Jagdish Hattiangadi does; either way, there are more Popperian pre-echoes to be found here. Though it might be tempting to chart significant sections of reality all at once, and then decipher them at large, it is a better method to limit things to singular phenomena and singular tests, journaling individual distortions in piecemeal fashion. There are, after all, infinitely many ways to be wrong about a single observation, and infinitely many lenses to view it through.

And so we don’t just build and perform experiments to challenge our theories, but we also do so to challenge our previous experiments. This is what Bacon means when he speaks about the cross-examination of nature: not only the running of tests and the compiling of data, but observing the conditions under which that data was achieved, changing those conditions to see how reproducible the effect really is, and then changing again and again, to root out errors and avoid false conclusions.

Even when an overwhelmingly large natural history is created in this way, we should still expect the result to be absolutely “bewildering”. The slow – and always incomplete – process of whittling things down to individual truths is difficult and compromised too, because it tends always to involve some retreat to an established metaphysical theory. The stripping-back of error has no endpoint, no clear path, and no transparent indications that things are heading in the right direction; all we can ever do is “attend to the errors at hand” and then try to find more of them (through the building of more and more Baconian natural histories).

Under no illusions about the difficulty of his project, Bacon often referred to it as deciphering a near-impossibly coded message, or finding an exit to a labyrinth. Choose the word you prefer – a Kuhnian puzzle or a Popperian problem – this is a philosophy that doesn’t play the rudimentary game of induction that so many people have posthumously ascribed to it. It also elevates the role and purpose of science to a status that Kuhn and Popper would have approved of: “power over nature”.

Where things diverge is with Bacon believing the foundationalist idea – that all knowledge comes from finding its foundations and building upward from there – that first principles can be reached, known, and used to ascend the epistemological ladder. However, he is always careful to note that we might always be in error, that there might always be higher or lower rungs still to be explored, that there is never a final rung where the whole project ends, and that – with each step up or down – we carry fallibilism with us as an unavoidable passenger.

So why did Popper miss all of this happy subtlety, and falsely compare Bacon to people like Locke, when he was staring his own philosophical ancestor – his own family resemblance – so firmly in the face? Probably because everyone else did as well. For over two hundred years the academic literature surrounding epistemology and the scientific method pretended that this subtlety did not exist. Even when Isaac Newton came along – as well as thinkers from the French Enlightenment – championing a near-exact copy of Bacon’s method, the connection back to Bacon was never properly drawn. And tellingly, when the connection was occasionally made, it invariably came with the same mistake that Popper would stumble into: coupling Bacon with Locke, rather than with the scientific successes that burnt bright around them.

So what would Bacon say about Popper and his sceptical philosophy, if he could look back on it all today? He would look for discrepancies and errors, and after scanning through most of it in nodding approval, stop, scratch his head, and ask aloud just what it is Popper thinks a good scientist should do. Conjectures and refutations, sure! But how do they, and we, affirm something as being true – is it really good enough to consider an unrefuted theory as just the best available option? And if so, how is this not akin to the conventionalism (“the philosophical attitude that fundamental principles… are grounded on (explicit or implicit) agreements in society, rather than on external reality”) that Popper claimed to hate so much?

Or as Hattiangadi puts it:

On the weak fallibilist endorsement of theory, suggested by Karl Popper, we can affirm that our best hypothesis may be true, given the evidence. On the stronger fallibilist endorsement of some theories by Francis Bacon, we can affirm our best hypothesis because it must be true, given the evidence. It must be true because it alone can solve the riddle in the evidence. Its presumed uniqueness makes all the difference, even though our judgment remains fallible.

It is a glitch in his methodology that Popper was well aware of: what to say of, and where to stand on, the truth content of a theory? It is the aspect of his philosophy he was most desperate to address during his life, and it has remained the softest of underbellies to attack since his death. In chapter 10 of Conjectures and Refutations, Popper tried to clean up some of his earlier ideas, trying to explain how a change from one theory to another can appropriately be called progress, or the growth of knowledge, as opposed to just change. After all, to be worthy of those names, doesn’t something new need to be known, as opposed to something old simply being shown to be mistaken?

Popper flapped around in these deep waters, hoping that he might eventually find a raft, or at least learn to swim. This is where verisimilitude comes onto the scene: we cannot say that a new theory is definitively true, but we can say that it has more “truth-likeness” (fewer false consequences and more true consequences than the previous theory). It sounds ideal. No, better than that: it sounds like common sense. And so no wonder Popper stuck to it with such loving attention for so long. It was only in 1974, after multiple iterations of verisimilitude had come and gone, that David Miller and Pavel Tichý finally put the theory to bed. What they showed was unpleasant viewing for Popper: “verisimilitude could not be satisfied by two false theories.”
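The Miller–Tichý result can even be checked by brute force in a toy setting. The sketch below is an illustrative Python script (my own construction, not anything from Popper, Miller, or Hattiangadi): it models sentences of a two-atom propositional language as sets of possible worlds, defines truth content and falsity content in Popper’s comparative sense, and confirms that no false theory ever comes out strictly closer to the truth than another false theory.

```python
from itertools import product, combinations

ATOMS = ('p', 'q')
WORLDS = list(product([False, True], repeat=len(ATOMS)))  # the 4 possible valuations
ACTUAL = (True, True)  # the (assumed) actual world: p and q both true

# A sentence, up to logical equivalence, is just the set of worlds where it holds.
# A theory is a consistent sentence: a nonempty set of worlds (its models).
SENTENCES = [frozenset(c) for r in range(len(WORLDS) + 1)
             for c in combinations(WORLDS, r)]
THEORIES = [s for s in SENTENCES if s]  # drop the contradiction

def consequences(theory):
    """All sentences entailed by the theory (true in every one of its models)."""
    return {s for s in SENTENCES if theory <= s}

def truth_content(theory):
    return {s for s in consequences(theory) if ACTUAL in s}

def falsity_content(theory):
    return {s for s in consequences(theory) if ACTUAL not in s}

def more_verisimilar(a, b):
    """Popper's comparison: a has at least b's true consequences, at most
    b's false consequences, and differs from b in at least one of the two."""
    ta, tb = truth_content(a), truth_content(b)
    fa, fb = falsity_content(a), falsity_content(b)
    return tb <= ta and fa <= fb and (ta != tb or fa != fb)

# Miller and Tichy's result, by brute force: among false theories (those whose
# models exclude the actual world), no theory is strictly closer to the truth.
false_theories = [t for t in THEORIES if ACTUAL not in t]
violations = [(a, b) for a in false_theories for b in false_theories
              if more_verisimilar(a, b)]
print(len(violations))  # 0
```

By contrast, the definition behaves as intended when truth is available: in this toy model the complete truth, `frozenset({ACTUAL})`, does come out strictly more verisimilar than the empty tautology, `frozenset(WORLDS)` – which is exactly why the failure over pairs of false theories was such unpleasant viewing.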

So did Bacon have a point? Popper thought not! He had run out of criteria to make verisimilitude work, but not to defend his overarching theory. In its place he proposed “three new requirements” of any good theory, things that would allow for the growth of knowledge:

1. It “should proceed from some simple, new, powerful, unifying idea”.
2. It “should be independently testable (it must have new and testable consequences)”.
3. It “should be successful in some of its new predictions; secondly, we require that it is not refuted too soon – that is, before it has been strikingly successful.”

Popper would never have admitted it, but it certainly looks like he is reaching here, searching for an affirmative platform for new theories to sit on. Let’s look at his first requirement: by a “new, powerful, unifying idea” Popper has in mind something like gravitational attraction, which connects things as distant as planets and apples. Hattiangadi doesn’t see this as possible or historically accurate. So let’s stick with gravity and with Isaac Newton: every phenomenon that his new theory unified and connected was, in fact, already connected from the Copernican debate. None of those relationships were made new or unified or powerful upon the arrival of Newton’s theory, only more coherent.

The second and third of Popper’s requirements are protections against the construction of ad hoc theories. Together, they require a good theory to be independently testable, to have independent consequences, and to be able to “pass at least one such test before we abandon it.” And as far as ensuring the forward movement of science goes, they work. But they might also leave us in a state of “normative limbo”. Do we use conventionalist strategies or not? If we do use them, are they only temporary solutions that help us to gain some traction in the search for truth and reality? And how long must we hold onto a theory (not rejecting it) simply because it may pass some independent test in the distant future?

The rabbit holes keep appearing, and it is just much simpler to say that phenomena become unified after we discover discrepancies between previously unrelated phenomena, and when we build natural histories. Because they require a lot of heavy lifting, breakthroughs of Baconian induction happen rarely, but they avoid the Popperian traps of conventionalism and of never affirming the truth of theories.

There are Popperian answers to this – good ones! And of course the work is not yet final, and never will be. But the world of epistemology and the scientific method was split in two by Bacon – between the natural philosophers who followed Newton, and the people who felt that induction had no logical basis, and so could not be saved. This line has become an unpleasant one, riddled with its own set of misconceptions and errors, most of which relate back to Bacon himself: what he said, what he thought, and on whose philosophical family tree he belongs.

The largest shame is that most of these mistakes would have been avoided if more people had done their scholarly due diligence – if they had only “concluded that another source of Baconian science, surprisingly enough, is to be found in Francis Bacon’s writing.”

*** The Popperian Podcast #15 – Jagdish Hattiangadi – ‘Defending Baconian Induction’ (libsyn.com)