The Paradox of Tolerance

In conversation with James Kierstead

 

Philosophy has a tragic way of exporting only its worst elements to popular culture. For Popperian philosophy this has been the lengthy endnote to The Open Society and Its Enemies, where Karl Popper digs greedily back into the questions of tolerance and intolerance. The modern references run along similar themes and with similar enemies – Nazis, Islamists, anti-capitalists, communists – substituted into the breach; and because it is popular, catchy, and pseudo-intellectual, most people never actually read the original text, nor do they venture beyond into the wider range of Popper’s work and personal letters.

As endnotes go, even long ones, it is less than “a fully thought-out theory.” But the question itself does matter – it mattered back then, it matters now, and it will continue to matter, as well as continue to challenge the most fundamental notions that we hold about ourselves. Popper may not have dwelt too long – in terms of ink and paper and academic energy – in these waters, but he had been watching them (from near and afar) all his life, struggling with the question: “Where exactly should we set the limits of toleration in speech and action?”

Most of what Popper wrote on tolerance comes to us through the pages of The Open Society, where he builds out his arguments for pluralism and accommodation. There he uses tolerance as a tool of sorts, a “common denominator” from which we can build a world of extreme difference, and yet have it also remain peaceful and non-coercive. But quoting Voltaire, Popper also saw the harder epistemological edge to it: “What is tolerance? It is a necessary consequence of our humanity. We are all fallible, and prone to error. Let us then pardon each other’s follies” (Popper’s own translation).

Without tolerance, rationality suffers. We all make mistakes, all of the time; and we only ever see those mistakes – recognising them for what they are, and having the chance to correct them – through criticism… the criticism from other people with whom we also disagree. Sitting in this precarious landscape, the one irretrievable error that we can make, the one thing that could bring the whole project of enlightenment values and scientific progress and moral improvement down upon itself, is to silence or limit criticism. If you accept that you are fallible, then you must accept tolerance as an “important moral consequence”.

Tolerance must also be an antidote then, a protection against unpleasant worldviews (enemies of The Open Society) that corrupt what we have, and could have. The religious zealots claiming divine sanction, the nationalists claiming ownership over a people's identity, the decoders of historical inevitability claiming to know the future, the tribalists of all stripes and creeds. What they have in common is commitment – commitment to their ideas rather than to truth, and a deep belief that they couldn’t possibly be wrong.

This is where the paradox begins to build. The place where Popper stops his audience and demands something more from them, something more than what his enemies are willing to offer: “We must be tolerant”, Popper wrote in a 1971 letter to Jacques Monod, “even towards what we regard as a basically dangerous lie.” It is an easy thing to dismiss certain ideas as “outdated” or obviously “wrong”, or even to dismiss them for not reciprocating the courtesy that they are given, but we should not forget that the people holding them are largely honest; they believe the things that they say and argue for. They deserve our tolerance.

Not to do so runs its own horrible risks, which Popper warned against in a 1973 letter, this time to Paul Kurtz and Edwin H. Wilson: “we must not become dogmatic and churchlike ourselves.” To replace intolerance with a different kind of intolerance is not to replace much of anything at all. Critical rationalists like himself should want something more, something that breaks with all that prior tradition, rather than continuing it under different colours.

The trouble hits with a question of functionality and practical decision-making. Imagine that you are the captain of a large ship and you are looking to hire a crew. Just like yourself, they are all fallible, and so you don’t expect them all to be particularly good at their jobs. Some are going to be lazy, some will drink too much hard liquor, some might battle with seasickness, loneliness, or depression. But what you don’t expect is a crew member who doesn’t want the ship to float at all – who, once you are out in the ocean, begins trying to scuttle the hull and drown everyone, himself included. If you knew his plan before sailing, what would it say about your captaincy if you allowed him on board anyway? If you discovered his intentions mid-trip but before he could do too much damage, would you have any obligation to keep him on board?

Expand this out to a large and functioning democratic society, and Popper’s quick reference point is always Germany in the 1930s (the Weimar Republic) and the rise of Hitler. Popper was of course writing his original passage during the Second World War, and as an exiled Jew he could have been forgiven for thinking about what had happened – and what was happening – in harder, less measured, less philosophical, more emotional tones than those of tolerance and its natural limits.

What matters is always violence. Once you really see it, really feel its coercive shadow, have it change the world around you as well as yourself, it becomes impossible to join in with the apologetics and false analogies:

I shall never forget how often I heard it asserted, especially in 1918 and 1919, that ‘capitalism’ claims more victims of its violence on every single day than the whole social revolution will ever claim. And I shall never forget that I actually believed this myth for a number of weeks before I was 17 years old, and before I had seen some of the victims of the social revolution. This was an experience which made me forever highly critical of all such claims, and of all excuses for using violence, from whatever side. I changed my mind somewhat when Goering, after the Nazis had come to power by a majority vote, declared that he would personally back any stormtrooper who was using violence against anybody even if he made a little mistake and got the wrong person. Then came the famous ‘Night of the Long Knives’ — which is what the Nazis called it in advance. This was the night when they used their long knives and their pistols and their rifles… After these events in Germany, I gave up my absolute commitment to non-violence: I realised that there was a limit to toleration.

All those grand statements about empowerment and human rights and self-determination and everyone expressing themselves and civic responsibility go out the window here for Popper. The reason why democracy is important is simply because it is a means by which we can remove bad leaders and bad policies without having to resort to violence. Neither the intrusive antisemitism he suffered, nor the unpleasant language and speech around him, made the young Popper believe that the line of tolerance had been crossed. The bloodshed and the violence did!

But actual violence seems a high bar, and in later notes Popper begins to flesh out the details: “we must not behave aggressively towards views and towards people who have other views than we have”, he wrote in his letter to Kurtz and Wilson, “provided that they are not aggressive.” So violence includes the threat of violence – and anything short of that deserves our tolerance, regardless of how nasty, unpleasant, or irrational it appears.

Perhaps all this talk about violence and threats isn’t the most helpful – after all, in often-corrupted modern turns of language, many people will have very different ideas about what those terms mean and what they look like on the ground. A less fashionable, and so more plainly understood term like coercion might be a better fit – something that might still be hard to define when asked, but something that can be easily recognised when felt. Who deserves our tolerance? Anyone who will talk and argue for their theories; anyone who wants to convince you rather than suppress you; and anyone who can be countered by “rational argument” and kept in check by “public opinion”.

The danger now comes from two directions: 1. from the intolerant people and ideas that want to coerce us into supporting them, or into silence, and 2. from ourselves! More than just a core aspect of modern society, tolerance is what makes pluralism and our increasingly peaceful lives possible. And our moral institutions have done a very effective job of building the term into our daily lives and into our identities; to call someone intolerant in this day and age is a burdensome insult that cuts into their very personhood. So we are vulnerable targets of a kind – open to being exploited and shamed into confusion by intolerant invaders, accusing us of hypocrisy for not tolerating them, despite their violence, their aggression, their coercive ideas.

It is not a mistake we should be making. The challenging part of Popper’s brief work on tolerance is its implications, which, if you have agreed with him to this point, will appear as unavoidable as they are troubling. The final sentence of Popper’s footnote reads like this: “we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal.” If you consider intolerance to be as dangerous as Popper does, then this makes clear sense. It should also make you feel deeply uneasy!

In the late 1970s, Routledge editor Rosalind Hall wrote to Popper, seeking his permission to reprint “some” of his footnote on tolerance from The Open Society. Already seeing how his words were being misused and catch-phrased, Popper wrote back saying yes, she could, but only if she quoted the entire paragraph and did not exclude the disclaimer (as so many other people had): “I want it to be clear that this is proposed by me only incidentally and not as my main statement about tolerance.”

It is never going to be easy, or exact, trying to draw the appropriate limit of tolerance. But reciprocity is as good a starting place as any. It can help to cut through the philosophy and high-mindedness of it all, and instead offer a simple tool for analysing and labelling intolerance in the real world: is the person or the idea across from you playing by the same rules as you? Are they returning the same courtesy that they are being offered? Or are they the dangerous few that Popper had in mind when he spoke about the paradox of tolerance? If tolerance is not a mutual exercise, then one side has an unfair – and unnecessarily generous – advantage.

Later in his life, when Popper was visiting the subcontinent, he heard a slightly humorous – and clear – example of his paradox at work. Whether the story was true or just legend, it does something that Popper’s best efforts failed to do for so many years – cut through the misunderstanding in a sharp and sudden stroke. It no doubt also helped to endear him to his audience in New Delhi:

“I once read a touching story of a community which lived in the Indian jungle, and which disappeared because of its belief in the holiness of life, including that of tigers. Unfortunately the tigers did not reciprocate.”

 

*** The Popperian Podcast #21 – James Kierstead – ‘The Paradox of Tolerance’ (libsyn.com)

Memes: Rational, Irrational, Anti-Rational

In conversation with Susan Blackmore

 

We human beings are difficult little animals. In some obvious enough ways we are just like all the other species out there: fragile sacks of genes and cells and blood and lungs and brains and hearts; eating, fighting, hunting, procreating, aging, and dying. But in other – still obvious ways – we are of a different kind: talking, writing, reading, building, and theorising our way to space travel and nuclear power and quantum mechanics and cures for disease and robotics and pocket computers and the internet.

So what is it that makes us so special, and allows us to achieve such unbelievable – magic-like – progress from such a mundane biological starting point? To what do we owe our intelligence, our consciousness, and our comfortable lives? Here we have never been short of a theory or two, and never close to a satisfactory one… until now! And just like with its cousin-theory – Darwinian natural selection – this great new idea bounces between complexity and simplicity: easy enough to understand at a glance, hard to get your head around in its detail, and so profoundly unassuming that it often feels incomplete; leaving us dumbfounded as to how this small attribute could have such an enormous reach, considering all that has been done in its wake.

“What makes us different,” writes Susan Blackmore, “is our ability to imitate.” It is an ability that we share in primitive ways with other animals as well, but they don’t take to it with the vigour and intuition that we do. Try waving at your cat, and see what happens. Keep on waving for hours or days or months, and the cat is never going to wave back. You can train the cat with rewards or affection or punishment to do certain things, but you cannot teach it by demonstrating the behaviour yourself.

Take an impossibly foreign human being though, someone with no understanding or experience of a waving hand, and after seeing you do it in his direction a couple of times, he will instinctively begin to copy you. He will then also, very quickly, make the connection between the gesture and its meaning. He will then wave at other people, they will wave back, and the imitation has a life of its own. “When you imitate someone else, something is passed on”. That something is a meme… and it explains who we are as a species.

The big brain. So many of the traditional answers to our previous question – what makes us different from other animals? – are quick references to the brains we have. So much so that the question is often asked differently: why do we have such enormous brains? And before you answer, make sure that you are not making one of two mistakes. 1. Slipping into a circular argument of the kind: we have big brains because they make us more intelligent, we became intelligent because it helps us to survive, and so we developed big brains to help us survive. Nothing is being answered here other than what we already know: our brains are big and our brains are useful. 2. Forgetting that biological evolution moves in small increments through random mutation, and that it is not indulgent. That something may be beneficial on paper does not mean that it will be selected for by nature, nor that each small, incremental change on the path there carries enough individual survival benefits to make an evolutionary difference.

Take primitive hunter-gatherer societies – our ancestors, people whose survival rested upon understanding the world around them: seasons, migration patterns, basic tools, alternative food sources, cooperation… Having a larger brain would certainly have helped them with this, but did it have to be as large as the ones we have? The range of things they needed to do in order to endure and thrive was limited, and with each increase in brain size comes an intolerably large increase in survival-related problems.

Human babies are hopeless little creatures. That big brain comes with a big head, and that big head is a tremendous hassle. With a close ratio between pelvis and skull, childbirth is an unhealthy ordeal, where some mothers – and babies – don’t survive the birth; even then, human babies are born extremely premature, soft-skulled and unable to fend for themselves, becoming a heavy burden upon the species; and the brain itself does not sleep or rest the way our muscles do, so it is always burning energy, needing a disproportionate amount of calories to keep it going. Expensive to build and expensive to maintain, we have a problem to solve here: “a smaller brain would certainly save a lot of energy, and evolution does not waste energy for no reason.”

So there must have been a selection pressure for our bulbous brains, something that outweighed the trouble they cause. It also happened fast (in evolutionary terms). Somewhere around 2.5 million years ago, with our hominid ancestors, our brains began to grow dramatically into what we have today. And so the theories often begin with the archaeological record, and with toolmaking: the pressure first comes from the environment and the animals within it, and then our need to outwit prey, to evade predators, to mould the landscape in small ways; technology was needed, and for this we needed bigger brains. But surely we didn’t need ones as big and costly as what evolved – a much smaller, less cumbersome brain would still be able to make tools and still be able to form hunting strategies, as we can see in limited ways today in other species.

If it wasn’t toolmaking that made the difference, perhaps it was a matter of discovery, spatial awareness, and cognitive map-building. Foraging for food is no easy business, and environments are unpredictable, dangerous places to wander about aimlessly. Here a bigger-brained ancestor of ours would have had a competitive advantage by storing more locations of food in his mind, more dangerous places that ought to be avoided, and more landmarks for quicker navigation. The trouble is – again – that big brain of ours is overkill. Animals that make complex cognitive maps do show growth in the relevant brain structures, but not in overall size. Rats and squirrels make complex maps of their environments in order to survive, and yet they do so with fairly small brains.

How about consciousness then? Imagine an early ancestor of ours who has suddenly developed the ability to look into his own mind and contemplate his own existence. He is the first of the species with this ability, and it comes with an enormous benefit. By analysing his own mind and his own behaviour, he is able to build primitive theories about the other members of his tribe: who will be aggressive in what circumstances, how emotions of sadness or happiness or surprise or anger or confusion will affect people’s behaviour, and how to build firmer relationships with allies and sexual partners. All he has to do is think how he would respond in the same circumstances.

The problem with the consciousness answer though is fairly obvious. First, it is hard to pin down whether consciousness is an evolved – selected for – function, or an epiphenomenon of another function like language or intelligence or attention. Second, it is incredibly tenuous to say that consciousness is a uniquely human quality, and that it provided such a fierce advantage as to make large brains – with all their downsides – necessary. Finally, and most critically, we still don’t know what consciousness is, and “you cannot solve one mystery by invoking another.”

The Machiavellian Intelligence hypothesis is the most fun. Forget any notions of improved cooperation, compassion, understanding, or relationship-building; what our big brains evolved to do instead was con, outwit, and betray our fellow members of the tribe. Social life has certain rules – often unspoken – that guide how we should act, in what circumstances, to whom, and what should be reciprocated. This is all nice enough, but anyone willing to break these rules has a clear advantage over the rest, who don’t. Especially if those rules are broken in cunning, devious ways that hide your indiscretions, diminish or destroy your enemies, and craftily propel you upward in terms of status, power, resources, and survival. And all this scheming requires a lot of brain power.

This takes the social function of the brain to a whole new level. Arms races are not foreign to biology, but if intra-species competition is the endgame of our big brains, then it seems to have dramatically overshot its landing. Being larger, stronger, or faster are all ways of outcompeting your neighbour for food, shelter, safety, or sexual partners – and none of them is nearly as biologically expensive as an enlarged brain. Why the pressure was set so heavily upon social skills still needs explaining, as does the growing complexity of social life. Besides, is it really acceptable to say that our ability to do mathematics, paint cathedrals, build computers, or understand the universe, comes down to our improved social skills? This seems like a jump. It certainly leaves more questions than it answers.

Instead maybe, just maybe, the “turning point in our evolutionary history was when we began to imitate each other”. But before we get to that, we must talk about language and its origins.

One of those incomplete theories about the evolution of our big brains rests on the fact that we talk, a lot. We just don’t shut up! Think of any common-enough human activity, and then think what it is mostly about. We meet friends for lunch, have drinks after work, settle down to watch a football game, and it is all – largely – a pretext for having a conversation. Perhaps it is easier to think of it this way: picture a group of people – two, three, four, five, it doesn’t matter – sitting together somewhere, anywhere. Are they talking, or are they happy in the “companionable silence”?

Language too comes at a high cost – “if you have ever been very ill you will know how exhausting it is to speak” – and so it takes some explaining; and for many socio-biologists it is connected to our big brains. Communication matters. It helps us to form social bonds, to pass on useful information, to cooperate more effectively, and it helps us to build culture. The more detailed and precise the communication, the more effective it will be in achieving these things; and all those extra layers of nuance and complexity require more brain power, and so the two evolve together. But because they do, the same hurdle catches both. If it is all about competitive advantage, then why has language – as with our brain size – got so out of control? If we just talked a little less, we would save energy, and by using less energy than our chatty neighbours, it would be us – and not them – who had the competitive advantage. Evolution didn’t produce creatures who are capable of complex language; it produced “creatures that talk whenever they get the chance”.

So much of Blackmore’s work here is an attack on this type of socio-biology, these incomplete theories that push the problem of our intellectual evolution someplace else, and then announce loudly that because it moved it is solved. The mistake is always of the same kind: trying to explain our extraordinary – and unique – development in terms of genetic advantage. It is an understandable mistake… but still wrong!

So back to that quote from Blackmore – “When you imitate someone else, something is passed on” – and the importance of imitation. Look around yourself, and just like all those people before Charles Darwin, you are likely not to notice the most extraordinary aspect of our reality; oblivious to the driver of all life, of all evolution, of all knowledge, of everything that matters about being the creatures we are. The trouble comes about largely because we have become so damn good at it. Despite how extremely rare imitation is (drawing the hard line between ourselves and the animals), all that copying and replication tends to pass us by unnoticed – so successful and so constant, it has become almost boring.

Genes. Evolution is helpfully thought of as competition; competition between replicators. It is less helpfully thought of as something that happens for the “good of the species”. The mistake is thinking of the organism as a whole, that evolution cares about the survival of the animal in question. It doesn’t! Biological evolution happens at the level of individual genes, and those genes have only one – deeply selfish – purpose (not the right word because genes don’t have intentions in the human sense): to replicate. To be passed on to the next generation.

It is tempting to look at the development of a new wing or a larger body or better camouflage or any given Gene X as working to improve its host’s chances of survival. But genes don’t have foresight. They don’t have desires of any kind. They simply become what they are through mutation, and are either successfully passed on through sexual reproduction, or not. The ones that are not, die out; and the ones that are, live on, reproducing again and again, until they too die out through changes in the environment or competition from other genes. The crossover that trips us up, and has people using the language of intentions, desires, wills, hopes and purposes, is the connection between passengers and their hosts – “between ‘replicators’ and their ‘vehicles’”. It just so happens that if we die, our genes die with us. So we have that much in common: the human vehicle wanting to live on for a variety of reasons, and the individual genes wanting (again not the right word) the human vehicle to live on as well, or at least long enough to have sex, and so allowing the genes to replicate in new vehicles (offspring).

In this, our genes are more like indifferent and greedy parasites (getting as much as they can, as quickly as they can) than members of a team pulling in the same direction. The errors of language in the previous few paragraphs tell a story in themselves: just how hard it is to think about evolution, and how hard it is to talk about it accurately, despite it otherwise making intuitive sense. And so it can’t be stressed enough: what it comes down to is replication, replication, replication. Or with a little more elegance, and in the words of evolutionary biologist Richard Dawkins: “all life evolves by the differential survival of replicating entities”.
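For readers who like to watch a principle run rather than just read about it, here is a minimal sketch in Python – my own toy illustration, not anything from Dawkins or Blackmore, with every number invented for the example – of what “the differential survival of replicating entities” amounts to. Two variants copy themselves into a fixed number of slots; one copies itself very slightly more reliably, and that alone decides the outcome:

```python
import random

# Two replicator variants compete for a fixed-size population.
# "B" copies itself slightly more reliably than "A" - that tiny
# difference is the only asymmetry in the whole simulation.
population = ["A"] * 500 + ["B"] * 500
copy_rate = {"A": 0.50, "B": 0.51}  # invented numbers, for illustration only

for generation in range(300):
    # each replicator either manages a copy of itself this round, or doesn't
    offspring = [r for r in population if random.random() < copy_rate[r]]
    # the environment only has room for 1000, so the next generation
    # is drawn from whatever actually got copied
    population = random.choices(offspring, k=1000)

# B's one-percent edge, compounded over generations, crowds A out
print(population.count("B") / len(population))  # usually very close to 1.0
```

Nothing in the loop wants anything, plans anything, or looks ahead; the takeover falls out of the arithmetic of replication alone.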

Memes. Look around yourself now, go outside and really look at your fellow human beings. See if you can break through that background noise of normality. See if you can notice the next step in replication, the non-biological kind. Look at the clothes people wear, the music they listen to, the cars they drive, the food they eat, the gestures they make, the catch-phrases and turns of language they use, the hairstyles they sport, the movies they watch, the books they read, the ideas they profess, the tools and the technology they use… Once you slow down enough, and spend the time to re-notice the things you take for granted, you will see these habits and preferences and desires and fashions and fears for what they are, and what they are doing. Jumping from host to host, from brain to brain, they have a life of their own, and a goal of their own: replication!

This is the world of memes, and it is indistinguishable from the world of human beings. Each and every meme, just like each and every gene, evolved individually and in groups (memeplexes or genomes), with different and connected histories. They are unique, they evolved, and they make us do things. They make us speak in certain tones with certain words, drink Coke or Pepsi, wear a green shirt rather than a blue one, and eat pizza rather than a hamburger. What they all have in common is you! “Each of them,” writes Blackmore, “is using your behaviour to get itself copied.”

You heard that right! Your food choices, clothing choices, language and thoughts, are using you, not the other way around (well, at least not in the same malicious way). The next great technological invention is likely to spread around the world because it is useful and improves lives, and that makes it something worth getting a copy of. The next breakthrough in science might spread too, because it has truth on its side and makes the building of new technology possible, but it is likely to spread with less fecundity because it is harder to understand, and has fewer immediate uses for the average person. A catchy tune or song on the other hand – take Happy Birthday to You as an example – might ripple effortlessly around the world, across language barriers, copying itself again and again, to the point where just hearing the title, or thinking about a birthday party, brings it faithfully back to life in your head.

As you hum that tune and remember those lyrics, ask yourself the hard question: “where did that come from?” It is firmly locked away in your memory, just as it is locked away within the minds of millions of other people, and yet its beginnings, its history, its origin, doesn’t matter. What matters is how it came to you, why it stuck when so many other tunes didn’t, and what it makes you do (sing it at birthday celebrations). What is the cause of all that extraordinary imitation? Something under the surface of the behaviour itself (remember there is a difference between replicators and vehicles) is lodging itself within the minds of its hosts (me and you) – “some kind of information, some kind of instruction” – and causing itself to replicate when it comes into connection with other hosts. This something is the meme!

Some of these memes are helpful to us, like new technology; some are entertaining, like songs; and some can be positively harmful, like pyramid schemes, financial frauds, false medical cures, unhealthy body images, social media addictions, dangerous ideologies or bigotries. Memes are indiscriminate and uncaring; like genes they are selfish, and are only interested (the wrong word once again) in spreading as widely as possible. It is a challenging idea – striking “at our deepest assumptions about who we are and why we are here” – but one that satisfies the Popperian criteria for a good theory: 1. it makes testable predictions, and survives those tests; 2. it solves problems and explains things better than the rival theories.

Some memes succeed, whereas others fail, for obvious enough reasons. We all have limited processing capacity in our brains, as well as limited storage capacity. And so, no matter how well adapted a new meme is to our psychology, or how well-geared it is to being imitated and selected, it is always going to struggle in such a competitive landscape. The best ones are the most evolved ones, the memes that arrive in our minds through countless variations and combinations of old memes; the errors and baggage slowly carved away, with gradual improvement upon gradual improvement adding up to make the meme a ruthlessly efficient copier.

But we human beings are fallible, we make mistakes, constantly, so all this combination and selection is a tricky business. Especially if everything hinges upon our passing on something we hear or see – a song or a story or a theory or a fashion trend – faithfully enough that it can then be passed on by others, and not diminished with each replication. So the most successful memes have something about them: depth. A joke is a great example of a meme with this quality. When a joke is told for a second time, and then a third time, and then a millionth, it is rarely ever told the same way; but the joke is unmistakably replicating, and so it must be replicating for a good reason.

That reason is that the joke is humorous: it makes people laugh, and that makes people happy, which then makes them want to share the joke. The exact wording doesn’t matter; what matters is the underlying structure or pacing or punchline that makes it funny. Get your hands on a joke, change the words completely, even change the setting from a jungle to a café, and the joke might still work if you are clever enough with how you adapt it. But forget the punchline, or a core feature that makes the punchline work, and no one will laugh, and the meme will die. The most important thing about the meme is not the raw details, but the meaning behind those details.

The way a joke of any kind gets into your mind is the way in which everything else does. They might be individual memes, surviving alone and replicating under their own weight, or giant clusters of memes, all bound together, feeding off each other, surviving together, replicating together. These, for lack of a better word, are memeplexes, or as Dawkins calls them with some helpful imagery, “viruses of the mind”. Think cults, religions and other dangerous ideologies, and you have a reasonable picture of what a memeplex looks like, though they don’t necessarily have to be pernicious. Add enough memes together, and enough memeplexes, and what you have is a complete human mind.

But these memes have another evolved talent, something that makes their lives a little easier. They don’t just find a place within a given mind; once there, they begin renovating, actually working to “restructure a human brain”, according to Daniel Dennett, “in order to make it a better habitat for memes.” Take farming as an example: contrary to what we tend to think, it did not improve the lives of those early adopters, it did not reduce disease, and most counterintuitively it did not increase the quality of nutrition. When people like Colin Tudge look at the skeletons of the earliest farming communities from Egypt, all that looks back is “utter misery”: starvation, illness, deformed bones from the excess workloads, and everyone dying long before their thirtieth birthdays. So why did farming catch on?

The answer is fairly simple, and connects the two mysteries: those farmers’ lives didn’t improve for the exact same reason that farming became so popular! The more food they managed to produce, the more children they were able to have, and with more mouths to feed there was more work to do, more food to produce, more land to be bought or seized, and ever more children to feed; children who would grow up to be farmers, and who would run through the same cycle. Then, with less and less land available for their traditional way of life, hunter-gatherers would have few choices but to drop their spears and take up ploughs. Step back and what does this look like: replication, replication, replication!

Farming took off in the way it did, and spread rapidly across the globe, not because it made people happier, healthier, or more comfortable, but because it was a good meme; well-adapted to its human hosts. With a “meme’s eye view”, the world looks a very different place. Instead of asking how new ideas or technologies benefit human beings, we should be asking “how they benefit themselves.” The subjective experiences of the people through whom these memes are running, and the emotions they feel, are part of a much more complex process, triggering some things to be imitated, and others not. At this point, our memes are well and truly off the leash, living a “life of their own”, causing themselves to be evermore replicated, and manipulating our behaviour to get this done.

Sure, genes are a prerequisite for memes – the creation of brains that are capable of imitation was necessary before the first meme could ever be formed. But once that happened, once brains of that kind evolved, all the popular talk of biological advantage, and of evolutionary psychology, almost entirely misses the point. Memes, once born, are independent of their genetic origins; they are a second replicator, acting entirely in their own interests. Sometimes those interests coincide with biological and psychological health, and sometimes they can be positively harmful. So we had better begin to understand them in as much detail as possible.

To do this requires a return to the meme’s eye view. Memes look at the world – and at us – in a very singular way: as “opportunities for replication”. Every time we speak, we produce memes. The trouble is that most of these die immediately, never finding a new host, and never being spoken again even by us. If one of those memes manages to get onto a radio broadcast, a television program, or into the pages of a book, it has dramatically increased its chances of replication, and so it has a competitive advantage. Our brains and our minds and our behaviours are nothing more than opportunities from the meme’s eye view.

Think of what you are reading now, the words on this page. It started with a thought in the mind of Richard Dawkins. That thought caused Dawkins to write a brief aside – 15 pages – in his book The Selfish Gene. That aside was then read by Susan Blackmore, and it caused her to flesh out the theory over 250 pages in her book The Meme Machine. The physicist David Deutsch read that book, and added some missing details to the theory in his own book The Beginning of Infinity. I read Deutsch’s book, which caused me to go back and read Blackmore’s, which caused me to interview her and publish our discussion, which caused me to write the words you are currently reading. The theory of memes is itself a meme, though only a mildly successful one.

It might be easier to think of this in terms of the types of brains and minds we have, and the changes that memes have made to them. Try for a moment a little self-experiment: try to stop thinking! Stick at it for more than a few seconds and some thought or another will pop into your mind. Push that thought away, try again, and you will likely only last another second or two before you are bombarded by thought after thought. The whole practice of meditation is built around being able to calm our ever-thinking minds and give us a few more moments of peace.

All this thinking is extremely stressful, as we worry about the glance someone has given us, whether we turned off the lights before leaving the house, what we should eat for dinner, how we should dress for that business meeting tomorrow. Try as we may, emptying the mind is a nearly impossible achievement; and yet one that would be very beneficial to all of us from time to time. All that thinking and worrying drives unnecessary stress and anxiety and depression into our lives. What is obvious, if you pay enough attention to it – perhaps through meditation – is that we are not in control of what we think; thoughts just happen, and we cannot turn them off.

All that thinking also requires a lot of energy and calories, so what on earth is it all about? Why do our minds do this to us? The answer to this question – and so many others like it – goes back to the same starting point: you have to think in terms of brains which are capable of imitation, and “in terms of replicators trying to get copied.” A meme that isn’t paid any attention is doomed, slipping silently out of its hosts’ minds. Memes that capture and dominate our attention, on the other hand, are much more likely to get acted upon and then passed on to other minds, who will do the same. So memes evolve to capture more and more of our attention over time, and with that, the reason you can’t stop thinking begins to make sense: “millions of memes are competing for the space in ‘my’ brain.”

With far more thoughts (memes) competing for our minds than there is space to hold them, Blackmore likes to think of the competition in terms of a vegetable garden. You can try to clear the soil, and plant the seeds you want, but before long the green tips of weeds will appear. Wait a little longer and there will be more. All the clearing and pruning and de-weeding of the mind (through practices like meditation) has an effect, but the process continues: weeds fighting for sunlight and water and nutrients, competing for space against other weeds and against the vegetables you actually want to grow. Memes are “tools for thinking” and so they thrive most successfully in hosts that think a lot.

These memes also have another sinister trick up their sleeves. Not content with the brains which evolution gave us (and so gave them), our memes target our genes. They change our behaviour, and so they also change the genetic grounding for why we have sex, when we do it, what we consider sexually desirable, and how we raise children. Once the prisoners of genes, when memes arrive we are sprung from that prison, and our genes take our place behind the bars. By changing certain behaviours of ours, and by working towards making their home (our minds) more hospitable, our memes turn us toward their own light.

Memes want to spread – it is all they want! So they prefer to find themselves in a human host that is genetically well-adapted to this purpose: be it people who are inclined to religiously follow trends, people who are naturally more charismatic and so capable of influencing others, or people with better focus – and more attention to detail – who are therefore better able to accurately copy and share memes. For this, they need certain things from their hosts (us): more proficient brains with more memory capacity and processing power, better sensory organs to perceive memes and then copy them faithfully, and certain personality traits conducive to replication and imitation, like the ones mentioned above. And they can get these things done in the way they get everything done: by changing our behaviour. In this case, our sexual behaviour.

“The sale of sex in modern society is not about spreading genes” – how could it be, with all its anti-evolutionary (biological evolution, that is) qualities? Rather, “sex has been taken over by the memes”, and with it the rest of our biology.

So after a long detour around the world of memes, back to those big brains we first puzzled over, and back to a better answer. The high point in our evolution, when everything began to change for us, was when we started to imitate each other. Before this, biological evolution inched its way forward, and the type of rapid and unusual change that we see with the development of our outsized brains, was seemingly impossible. The necessary selection pressures just weren’t there, and our best socio-biological theories didn’t work as explanations. But with imitation, a second replicator (other than the genetic one) is let loose, changing the environment, changing our behaviour, and changing which genes are selected for; radically altering our evolutionary path.

The exact moment this happened is lost to history, but unlike all those socio-biological theories, the “selective (genetic) advantage of imitation is no mystery.” If your neighbour has developed some sort of good trick, something useful, or valuable, it is clearly beneficial to yourself to be able to copy him. Running things back again to our hunter-gatherer past, perhaps this neighbour has discovered a new way to find food, a new mechanism for building shelter, or a new skill for fighting. The people who saw this and ignored it, choosing instead to continue along seeking food or other improvements as if nothing had happened to the man next door, paid a price. The people who noticed these small jumps in progress, and decided to copy them, learned valuable skills, new knowledge, and were better-off for it.

But imitation is no easy thing. It just seems easy to us now because our memes have been running the show for so long, and our genes are now so fine-tuned to memetic purposes. There are three requirements, or skills: 1. deciding who or what to imitate; 2. decoding information from complex behaviours, technology, and theories, and transforming it into your own behaviours, technology, and theories; 3. accurately matching bodily actions. Very basic versions of all these skills can be found in primates today, and five million years ago our ancestors had the same latent abilities.

Two and a half million years ago, the first stone tools were made, and we had our first obvious signs of imitation. Without rehashing all the mistaken ways that big brains have been thought to evolve, memetic theory has the Popperian benefit of being able to explain the phenomenon (big brains) as well as the empirical content of all those other theories. All the cooperative and bonding social skills, the cunning and the deception of Machiavellian intelligence, the navigation and pathfinding of cognitive map-building, the leaps forward in survival that come from toolmaking, and all the spin-off benefits of consciousness, are explainable by a single development. A threshold that, when crossed by our ancestors, transformed so much, and took on such a life of its own, that it became hard even to recognise through the enormous dust cloud of change and success.

The first step is what Blackmore calls “selection for imitation”. And it’s the simplest. Somewhere in our evolutionary mess, a genetic variation for imitation happened. The people with this variation had an immediate advantage, copying the best primitive tool makers (to use that as an example). Building better spears, better baskets, better huts, they thrived and so the gene spread. The next step is where things become really interesting: “selection for imitating the imitators”. When everything around you is changing, and that change is speeding up due to imitation, it is not always so easy to know what to imitate. But a successful imitator is a much more obvious target. Instead of trying to select which spear works best, copy the spear of the most successful hunter; instead of trying to choose which shelter to imitate, copy the shelter of the healthiest family. By imitating the best imitators, the growth of memes finds a whole new gear.

The third step is where the question of genetic advantage begins to fade away, and where memes are gradually let off their leash: “selection for mating with the imitators”. Because imitation was an advantage to our ancestors who inherited the skill, they would also have been seen as genetically desirable; high-value sexual partners. They would have thrived when others did not, and so stood out from the crowd. By choosing a good imitator to mate with, you get close access to their imitation skills and all the benefits that come from them. Your children will also then benefit by inheriting these imitation skills in turn. Through generations of this selection pressure, crude and embryonic imitation becomes much more refined and effective.

The final step is a little predictable, but it is where that big brain of ours finds its explanation: “sexual selection for imitation”. Think of the peacock and its ridiculous tail feathers. These feathers are used for attracting peahens, and for nothing else – and the peahens are indeed attracted to them. The bigger and brighter the tail, the more attractive the peacock is to potential mates. So peacocks with ever larger and ever brighter feathers have more sexual opportunities, have more offspring, and those offspring (the male ones) will have similarly ridiculous tail feathers. The feathers are cumbersome, making their hosts easier targets for predators, but that one advantage of sexual selection is enough for the feathers to continue growing and to continue sparkling. This is called “runaway sexual selection”, and it should sound familiar.

As mentioned, imitation is not an easy task. It requires a lot from our biology; specifically, it requires a lot of brain power. And also as mentioned, memes are great at exploiting sexual selection. If the selection pressure for peahens was something like ‘mate with the peacock with the grandest tail feathers’, then the selection pressure for early human beings (and all human beings since) was probably something like “mate with the man with the most memes”. And just as with those tail feathers, before long this one characteristic (imitation) begins to dominate all others in terms of genetic reproduction. Our brains grow to accommodate more memes and better replication, those memes and that imitation are then sexually selected for, our brains continue to grow in order to handle more memes and better selection, and we end up with huge, cumbersome brains, as a case of “runaway sexual selection”.
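Because “runaway” can sound like hand-waving, here is a crude caricature in Python – again my own sketch, with made-up numbers, no separate sexes, and none of the realism a biologist would demand – of a display trait escalating through mate choice alone. Choosy individuals pick the showier of two candidate partners; children inherit both the display and the choosiness; and the display climbs generation after generation despite conferring no survival benefit (in Fisher’s full runaway story the preference escalates too, which this toy leaves out):

```python
import random

N = 400
# each individual is a (trait, preference) pair, both heritable with noise;
# trait is the costly display, preference is the taste for it
pop = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

for generation in range(100):
    new_pop = []
    for _ in range(N):
        chooser = random.choice(pop)
        a, b = random.choice(pop), random.choice(pop)
        if chooser[1] > 0:  # choosy: take the bigger display
            mate = max(a, b, key=lambda ind: ind[0])
        else:               # indifferent: take either
            mate = random.choice([a, b])
        # children blend their parents' values, plus mutation noise
        trait = (chooser[0] + mate[0]) / 2 + random.gauss(0, 0.3)
        pref = (chooser[1] + mate[1]) / 2 + random.gauss(0, 0.3)
        new_pop.append((trait, pref))
    pop = new_pop

print(sum(t for t, _ in pop) / N)  # mean display, far above its start of ~0
```

Swap ‘display’ for ‘memetic ability’ and the sketch is the shape of Blackmore’s argument: once mates are chosen for a trait, that trait can grow far past the point of practical usefulness.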

Back to language. Again, without rehashing the whole space of socio-biological theories about how language evolved and its impact on our intelligence and species-wide success, memetics has a better answer (without the inconsistencies) and a different approach (one that encompasses and explains all those other theories). The difference between a silent person and a talkative one, is that the talkative one is likely to be a much better spreader of memes. So from a basic starting point, language is a tool for our memes. The only thing that memes want to do (again the wrong word) is replicate, and when we start to think about language in this way, aspects of it begin to make more sense.

Imagine you have heard some juicy gossip. The choice to tell someone or not often doesn’t feel like a choice at all. Your biology seems to be firing against you, demanding that you repeat the words you just heard. If not gossip, then some current event you saw in the news. Some new movie you saw and liked. Or, the clearest example of all, that hilarious joke you just listened to. It is easy to dream up alternative theories about the origins of language, but much harder to find a theory that accounts for the fact that we are often overwhelmed by the compulsion to talk. Stick someone alone in solitary confinement, and they will soon begin having conversations with themselves. Tell someone else that you have booked them a week-long stay at a silent retreat, and watch them sink with dread before your eyes.

People who hold the meme for talking will spread more memes; it is much easier to tell someone something than to act it out silently. And people who hold the meme for talking compulsively will have more opportunities, and wider audiences, to continue spreading their memes to. And in this way, as with our big brains and imitation, the meme pool begins to change the gene pool through sexual selection. So the question ‘why do we talk so much?’ has a nice, clean, encompassing, and deeply explanatory answer: “We are driven to talk by our memes.”

The final ribbon on this theory of language sits in the details of its evolution. Look around and listen to the words we use, the phrases and sentences and paragraphs and conversations and debates and expressions, and what should hit you first is how innate it all is. There are differences, gaps, and outliers, but on the whole almost everyone you see using language, uses it as grammatically well as anyone else. And yet none of us learn language by being taught its structure, being corrected for our mistaken usage, nor even (and this is important) by “slavishly copying what they hear.” The grammar we study in school is but a tiny part of the natural structure (grammar) of language.

There was once a famous chimpanzee named Washoe, and an even more famous gorilla named Koko. They were famous because they could speak – well, they could use basic sign language. Having been taught a few key words, Washoe and Koko would build short – “three word” – sentences, requesting certain things, and even expressing themselves. Then, after the “excitement and wild claims” had faded slightly, psychologists, linguists and native deaf signers began to raise doubts. The primates weren’t actually signing anything close to language. There was no structure, no grammar, no order to things, no understanding of what they were doing. Washoe and Koko had simply learnt a few symbols (they had to be trained and coerced), and were using those symbols to request things.

Young children, on the other hand, do something extraordinary. Without too much effort they seem to absorb the language they hear, and the rules for its use. The learning is largely inexplicit. They often don’t even realise that they are learning or improving upon what they have already learnt, and yet without the need for reward or punishment they pick it up and use it, in all its complexity and depth and structure. “The human capacity for language is unique.”

How this unique ability evolved is a tricky enough question, if for no other reason than that languages don’t leave behind happy accidents in the fossil record. Archaeologists can’t go digging around in the mud for clues in the same way as they can for tools or bones. Extinct languages are lost, forever! There are clues – like the discoveries of art, burial rites, toolmaking, and trades – but they are distant and weak. The idea is that for such things to happen, language would have had to be on the scene. This is really just an argument that language would make such things so much easier, and so we are guessing that it might have been present. Not a convincing theory! Especially since language and thinking are so deeply wrapped together that it is almost impossible to speculate about what might be possible without language.

Those big brains likely had something to do with it, but this misses the biological complexity of speech. A delicate and accurate control of breathing is needed, requiring the development of specific muscles in – and around – the chest and diaphragm. And the interplay between them is vital, overriding the mechanism of one in favour of the other, at just the right moments and in appropriate ways; allowing us to talk and breathe and function. We also need a wide variety of sounds, sounds that are distinct enough to convey the clear meaning of words. For this, our larynx is considerably lower than it is in other primates. But muscles and larynxes don’t fossilise either. Digging through what we know of our deep ancestors may never take us to the origins of language, but an easier answer might come our way if instead we simply “knew what language was for.”

A good replicator needs three things: fidelity, fecundity, longevity. Let’s start with the second. In a world where genes have evolved creatures (ourselves) who are capable of passing on memes, how wide and how far those memes spread is an obvious challenge. For a meme to replicate it needs other hosts (people) to copy itself into, who can then continue spreading it. The need is always for more hosts. And language becomes an extraordinary tool in this, allowing you to pass on the meme to large crowds all at once, even if none of them are looking at you. Instead of using signs and gestures, speech allows memes to continue replicating, be it face to face, face to faces, in the dark, around corners, or over reasonable distances.

Fidelity. How does language help to improve the accuracy of what is being copied? This is fairly straightforward. Think back to Washoe and Koko, and imagine they are together in a room, one of them with a primitive sort of meme running through its mind. The work that meme has to do in order to be replicated in the other involves some heavy lifting. Signs can be ambiguous, gestures need to be deciphered, and behaviour is a mess of movement and sound: finding the one thing to copy (the meme) through the background corruption and superfluous activity is no easy process. Now add language, and everything becomes clearer. The meme can be communicated with much more accuracy, and in the event that the wrong thing is still copied, it is as easily corrected as saying: don’t copy that! Copy this!

So what about longevity? It would appear at a glance that the problem of life-extension for memes is a problem of memory capacity. Someone communicates a meme to you, it is then stored in your brain until you can communicate it to someone else. If the meme is hard to remember, it might be partially forgotten, or lost entirely. Here language comes to the rescue. If you hear a series of random numbers or words or sounds and are asked to repeat them back a few minutes later, you will find it very hard to do so. If you are read a simple sentence on the other hand, remembering it will be a much easier task. Language adds structure and meaning to the sounds we hear, and this makes it considerably more memorable.

Besides, language doesn’t need to be repeated as an exact replica to convey the same meaning. If we are hunter-gatherers and I say something useful to you (a meme) like ‘don’t go up that mountain because there are lots of dangerous bears’, the message you pass on to someone else might be ‘there are hungry bears on that hill, stay away’, or ‘scary animals live on those slopes, avoid them’. There would be countless ways to express the same meme, and for this we have language to thank. It doesn’t take much imagination: think of a group of people who tend to copy each other. Now add language. Are they better or worse at copying? The evolution of memes explains the evolution of language.
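Those three criteria are concrete enough to put into a toy simulation – once more my own sketch, with every parameter invented – in which two memes, a silent gesture and a spoken phrase, compete for a population of minds. The spoken version is copied more accurately (fidelity), reaches more listeners per day (fecundity), and is easier to remember (longevity), and that is enough for it to end up in nearly every head:

```python
import random

# invented parameters: the spoken meme beats the gestured one on all three
memes = {
    "gesture": {"fidelity": 0.70, "fecundity": 1, "longevity": 0.90},
    "spoken":  {"fidelity": 0.95, "fecundity": 3, "longevity": 0.98},
}
# 1000 minds, each starting out carrying one meme or the other
minds = [random.choice(list(memes)) for _ in range(1000)]

for day in range(100):
    # longevity: each mind may simply forget what it carries
    minds = [m if random.random() < memes[m]["longevity"] else None
             for m in minds]
    # fecundity and fidelity: each carrier makes several attempts to pass
    # its meme into an empty mind, and only faithful copies take hold
    empty = [i for i, m in enumerate(minds) if m is None]
    random.shuffle(empty)
    for meme in [m for m in minds if m is not None]:
        for _ in range(memes[meme]["fecundity"]):
            if empty and random.random() < memes[meme]["fidelity"]:
                minds[empty.pop()] = meme

carriers = [m for m in minds if m is not None]
print(carriers.count("spoken") / len(carriers))  # tends towards 1.0
```

The point of the toy is only this: nothing about the spoken meme needs to be truer, kinder, or more useful – an edge on the three replicator criteria is sufficient for it to dominate.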

So with all these memes running through us, and with all these changes that memes have made to our genetic code and our behaviour, what are we? We are all, down to the man or woman or child, gigantic memeplexes that bundle together in such a way that it all feels complete and singular. It makes us feel like a self! Blackmore calls this the selfplex – that constant, nagging feeling that ‘I’ am somewhere behind the rest of the human show. Perhaps it is better said in the title of her book: we are meme machines, and so if we ever want to understand who we are, to be happier, healthier, smarter, more productive, or more relaxed (insert whatever progress means to you), then we had better begin to understand our memes.

If you feel like you understand memes now, but are still oddly confused, that makes sense. Those early listeners to Darwin’s theory of evolution also understood the plain meaning of the words he spoke, but were confused about the weight and the inferences they carried; as was he. And make no mistake, Blackmore’s theory, with a few tweaks here and there, has the same astonishing explanatory value as Darwin’s, and the same world-shifting implications.

There are two ways to look at the path forward. We can do as Dawkins hoped, and begin to fight against our memes: “we alone on earth, can rebel against the tyranny of the selfish replicators”. Or we can follow Blackmore’s suggestion and discover that we are “truly free – not because we can rebel against the tyranny of the selfish replicators but because we know that there is no one to rebel”. Either way, it is worth seeking them out, looking into yourself, searching for the things you think and do compulsively, the things in your life that feel like they are stuck on repeat, the things that seem to have more control over you than you do over them. Not all our memes are good or valuable or worth having; many are downright harmful, and they can, by some effort, be weeded out of our memeplexes.

This is about three times as long as any article I wanted to write in this series. And I am tempted to say that this is a matter of how much this book, and how much this theory, meant to me. And I admit that I am captivated. And I really do think that Blackmore is onto something huge here. And I am sure that if we only understand our memes better, we can understand ourselves better and improve our lives. But if I am going to accept the theory, then I need to do a better job of thinking in terms of the theory. It would be more accurate to say that I have been infected by the meme of memes. Now let’s see if I am a good carrier or a good host or a good vehicle. “The driving force behind everything that happens is replicator power”, so the judgement of my success or failure will come down to whether or not I have managed to – in some small way – also infect you with this meme of memes.

 

*** The Popperian Podcast #20 – Susan Blackmore – ‘Memes - Rational, Irrational, Anti-Rational’ (libsyn.com)

Rules of the Game

In conversation with Joseph Agassi

 

Have you ever thought of yourself as a genius? Then chances are you thought you were a young genius at that. But why? The question catches in the unpleasant grooves between scholarship, success and glamour. It is an “attractive” question though, writes Joseph Agassi, and “I deem only attractive questions worthwhile”; but this doesn’t save it from difficulty and neglect. Just as with the big questions in science, attractive questions in philosophy struggle under their own weight: vague, logically ambiguous, and reliant upon too much background knowledge which is not yet available. But don’t let that stop us; boldness matters, and Agassi is bold. So we’ll ask and answer it anyway: “Are all geniuses infant prodigies?”

History is a problem here, because not only do we begin to lose our way with the shortened question – what is a genius? – but when it comes to the people who we most commonly think fit the category – “Newton, Einstein, Masaccio, Leonardo, Keats, and Schubert” – we know very little about their actual childhoods; much less still about how the people around them judged their youthful intelligence. But we can make a harder distinction about when it all ends: if, by your late teens, no one has commented – loudly and publicly – on your prodigious talent, then you have lost the right to the title forever. So let’s be Popperians about this then, and change the question once more to help us out. Negatively phrased inquiries tend to offer clearer answers than positive ones, so: “Can genius show up in individuals well past early adult life?”

Ask performing artists, scholars, media professionals, scientists, entrepreneurs, company executives, politicians, mathematicians, and musicians – as Agassi has done throughout his life – and you will find “one indisputable fact”… the very question “troubles them greatly”. But not in the way you might think. After all, there is a rich vein of evidence supporting the rise and late discovery of mature geniuses. Here is Agassi’s list: “Moses, Muhammad, and Sigmund Freud; Vincent Van Gogh, Paul Gauguin, and Henri Rousseau; Johann Heinrich Pestalozzi and Homer Lane; Ben Franklin, Michael Faraday, and Max Planck; Georg Cantor, Bertrand Russell, Kurt Gödel, and Abraham Robinson.”

By almost any metric these people are geniuses. And yet none of their school-aged peers or teachers or friends or family had anything exceptional to say about them until much, much later in life. This certainly does add some weight to our revised hypothesis, but it also opens up a challenge of a kind. Is this late blooming less a matter of emerging genius, and more that of poor judges and missed opportunities? Perhaps Van Gogh The Late-Genius was also Van Gogh The Child-Prodigy, yet without access to paint, to canvases, to encouragement, to guidance, to motivation, or to knowledgeable-enough eyes. And yet something about this doesn’t sit well with our worldly expectations. Take Einstein for example, and now imagine him as a young student scribbling away in his technical college. To make this work, you now also have to imagine that none of his teachers, at any point in his education, noticed even the slightest spark of something special about him.

Part of the problem here is clearly a matter of recognition – the poor and limited ways we are tuned-in to the abilities of the people around us. If the people who are expected to first notice, then announce, and then cultivate, and then anoint us with the title of genius, are bad at their jobs, then what else is there… other than a childhood of misunderstanding and neglect? But why should this bother us so much? The neglect of one’s talent can often be the freedom it needs to grow, unnoticed and so unbound by social expectations. Why does the question cause so much angst and so many late, worry-filled nights for the young adults that Agassi talks to? People who are: “highly concerned with this matter and in an obviously painful manner.”

Genius casts a dark shadow. The youth today – as in every age – are both “highly ambitious and highly frustrated”. They want to achieve what the people before them achieved: the wealth, the comfort, the status, the happiness, the meaning, the purpose, the career, the prospects, the accomplishments, the recognition… the genius. Listen closely enough, and you can hear under their breaths the light whisper of neurotic desire: “I will not be satisfied with my output, I will not be satisfied with my life, unless I both achieve ingenious results and am recognized for them.”

The rest of us are stuck living lives of mild chloroform: intolerable, unacceptable, small. All except for one unpleasant little group of adults, people whose role in this issue is disproportionate to who they are, but not to who they think they are. They are the one group in all of this that Agassi has open contempt for: the people who have had their genius-like moments, have exhausted them, and yet can’t summon the courage to get out of the way for fear of being overtaken:

Those senior members of our cultural and intellectual elite. (I am rather poor at expressing my pity for them.) For, as I watch the ambitious young professionals press themselves hard toward the precipice, I view their older colleagues, their senior advisers, as those who have already fallen off the cliff, who do not dare move a limb for fear of discovering that their bones are broken or even that their bodies are paralysed. Fear of paralysis, it is well known, is quite paralysing.

These people are the gatekeepers to this world, and to a talent that they largely never held; with the few who did now bearing the scars of being corrupted by the experience. For Agassi, they are “phony” in their attitude and intellect. Creatures who are afraid of their own honest advice, career counselors who abuse their positions in order to scare the young away from following in their footsteps; the fewer competitors the better. But even then they have something in common with the next flowering generation of prodigies. An opinion about themselves: if you are not in the Genius Club – and recognised as such – by your late teens or early twenties, then you will never be!

And this is what the whole confusion might be about. The vague answer to our vague question. The “myth of the young genius”, as Agassi likes to call it, is not really a question about genius at all – not about intelligence or success or ability or work ethic or knowledge or talent – but about the mud and grime of human life. It is a plea for the ambitious amongst us to just stop, to drop what we are doing and accept the ordinary undertow of existence: “frustration and futility”.

The Myth is a cautionary bedtime story, a folktale, a taboo, that reads like this: before you dare to plan for great achievement, before you start the gears of energy and sweat, just remember you are already, most likely, almost certainly, too late. That PhD you are writing, that novel or that poem, that note you are trying to catch on your violin, cello or flute, that pose, that manoeuvre, that brushstroke, that idea, that revision, that criticism, that theory, that scene, that very inkling of a thought, is doomed to failure because you are not already a raging success… a recognised genius.

So the myth of the young genius might be better called the myth of the magnanimous senior professional. The myth of an older generation that is happy to see the rise of the next – of mature professionals leaving their self-pity and career aspirations at the door of truth and progress. Ordinarily these people would not matter. Gatekeeping can only go so far, hold back the young barbarians for so long, until success begins to speak for itself and the whole game comes to an end. But all this talk of genius and its recognition matters now, because those senior professionals have managed to infect their junior colleagues with this nonsense, pre-emptively talking them out of aspiring to great deeds, with a fear of being too late to the game.

Yet the question of young geniuses troubles the old as much as it does the young. And for this they deserve some sympathy alongside our disdain. They too were raised on The Myth, and had the same hard – and sudden – judgment pushed onto their lives: early prodigy or a long life of mediocrity. This is what gatekeeping and mythmaking is for: it perpetuates what has come before and, worse, it does our thinking for us. Those senior professionals are likely as unaware of the harm they are doing now as they once were of the harm done to them. But why does the question trouble them then? To be once labeled a young genius – as many of these people were – is to have an unpleasant thought nag at you through your aging life: is that the best I’ll ever be?

Those who have never been lucky enough to feel the embrace of being a recognised genius have another unconscious reason for keeping The Myth alive. Despite all that they are and have achieved, they have endured the stigma of not being in that select club, and so the frustrations of their life tend to feel like the unavoidable pitfalls of destiny. They begin to look upon their problems in a non-Popperian light: they resent them rather than loving them. Dug in for so long to self-loathing and frustration, the thought that it was all a grand mistake – all avoidable – can feel less like relief than a compound error. One more mistake in a life of mistakes. It is just easier – and less psychologically confronting – to accept what has happened and to wish the same failure upon the next generation; their soon-to-be failure making yours more tolerable.

So perhaps it is a good thing that these geriatric gatekeepers are so bad at their jobs. You might expect people who push The Myth to at least be relatively good at recognising talent when they see it, to ensure that no actual young geniuses are left out in the cold, with their bright futures snuffed out by the disappointment of rejection. Or, failing that – to run the Popperian line a little further – you might expect, or hope, that they look upon their errors as falsifications: indications that something might be wrong with the underlying theory. Yet the phenomenon of the unrecognised genius is so common that it has slipped into cliché, an irony-rich trope which cradles one hard fact to its chest: if genius is a childhood development then, at the very least, we are all very poor at identifying it in those early stages.

Here the pseudoscientific mind is earning its dues, blinded to the self-fulfilling nature of The Myth. The error runs like this: the late development of certain geniuses does not mean that they were not extraordinary youths, but rather that they were! Sadly there were not enough adult geniuses around them at the time to notice (it takes a genius to recognise a genius). And for the infant prodigies who go on to fulfil all that expectation and promise? Well, the fact they became geniuses proves that they must have always been, even if the label is only applied retrospectively. Dig hard enough into the photo albums and family stories of adult geniuses, and you will always find some small spark of talent or precociousness; just enough to satisfy uncritical minds and keep The Myth alive.

Ask these people what genius looks like, and they say things like: I know it when I see it. Ask instead what genius does not look like, and you get no answer at all.

The problem of genius is the problem of all knowledge, of life… of us! There can never be a theory of genius, any more than there can be a theory of human beings. We can always talk about who has been seen to be a genius in the past, and even try our hand at saying what combination of talent and success should constitute the title today, but tomorrow this will always be wrong! We human beings are creative; it’s what makes us what we are, as well as what makes us completely unpredictable. And one of the few things which is uncontroversially true about geniuses is that they are highly creative, and so highly unpredictable.

Any theory about what a human being is can only ever be based on today’s best knowledge. What we all do though – for every problem or hassle or difficulty or limitation or failure – is create new knowledge to improve things about the world (and about ourselves). Our ancestors, though genetically all but identical to us, were living dramatically different – unrecognisable – lives for one reason only: their knowledge was poles apart from ours. Dream up any theory of genius that you like, any category for which the next one ought to fit, and that human problem hits back at you.

The next genius will disrupt what we currently know, including what we currently know about geniuses themselves. If the next great talent in some field follows the current trends of the field, we may appreciate their ability, their dedication, their work, but – as they simply repeat someone else’s breakthrough or follow someone else’s formula – we can never call them geniuses. The only real way we have of judging these prodigies, these masterminds, these virtuosos of our time, is by the single criterion of extreme creativity. Something not just different or difficult to understand, but something as otherworldly and strange and incomprehensible (at first) as magic. The prerequisite of all things genius is that it not be immediately – nor easily – appreciated.

This is why people struggle so much when it comes to recognising the geniuses in their midst. It is also why our education systems struggle so painfully when it comes to teaching creativity, or cultivating it, or even tolerating its existence and expression. So much of what we talk about when it comes to these questions, according to Agassi, is a surreptitious way of discouraging people from “productive careers” and steering them into “unproductive” ones. And from the beginning there is a practical type of apology for this: “Infant prodigies will not be detected in a social milieu which has no use for their talents”. But the problem of education runs deeper…

Any degree of talent or ability is a matter of knowledge, and so it is also – to some degree – socially determined. Here Agassi has an observation, a personal anecdote of sorts. Asking his colleagues and friends whether “they could remember the teachers who made a difference to them”, more often than not Agassi received the same answer: “It often turned out that it was a single teacher who had showed critical appreciation and who, quite by accident, had even helped his or her more active charges to decide the direction of their mental development.” The problem became a matter of rarity: “from the age of 6 to 26, there were only two or three seminal people who really affected them”.

Infant prodigies face the same career decisions as the rest of us, the same hassles, the same anxiety, the same limitations of chance and circumstance. Einstein dreamed of being a mathematician rather than a physicist, but based on the state of the two fields in continental Europe during his day, if he wanted to work on something grand or comprehensive (which he did) it would have to be physics. The French chemist responsible for discoveries in pasteurization, microbial fermentation, and vaccination, Louis Pasteur, always regretted his career choice of chemistry over biology. And if Max Planck had had his way, pushing back against the guiding arms of social pressure, he would have been a musician rather than the Nobel Prize winning theoretical physicist that he was.

To have your career charted out for you in some way – large or small – is inevitable. Even if that charting is nothing more than the cold, hard press of job insecurity. So why does the world of standardised education make such a mess of the talent, the creativity, the soon-to-be-geniuses that we hand to it? It’s a question that comes with another myth, a romantic one about the need for geniuses to walk their own paths, find their talent on their own, away from comfort and corrupting voices, “to wander in the desert and agonize”.

It is true that Planck was “embittered” about his genius, a prisoner of his own career. But the Einsteins of our world cut across this theory: peaceful, happy, content with themselves and with their second-choice jobs. Then Agassi talks about the violinist Yehudi Menuhin, and the more stereotypical image we tend to hold of the extremely talented amongst us – anguished, depressive, and unhappy because they pursued what they wanted, because they lived the career they wanted, and because they succeeded. We instinctively think that genius causes hardship, rather than hardship causing genius.

The pain of being an infant prodigy is obvious enough: they stand out. And though the burden of their talent might be tormenting enough, it is likely nothing compared to the problem of being noticed for who they are: the one amongst the many. Child psychologists fill textbooks with the longing to feel normal that they see in their patients; and so perhaps what we owe those infant prodigies is cold eyes. Instead of recognising their talents at all, we might help their wellbeing – as well as the development of those talents – by simply downplaying the extraordinary things they do.

The development of young geniuses is nobody’s business but theirs. Any instinct to try to help these people along should be quickly shaken to silence by the obvious truth that we don’t know how to. We might get it right and help in some way, but we might also get it wrong and cause damage; more importantly we wouldn’t know where to start, nor how to judge our success or failure. So choosing to – at a minimum – take away any unnecessary pressure is likely to be a good thing. As Agassi comments in wistful tones: “we know that quite a few very brilliant individuals suffered from pressure so much that as adults they were resolved not to use the special skills and talents they had developed under that pressure.”

Anyone willing to say that genius-level talent needs our firm hand on its shoulder, guiding and prodding it towards its potential, needs to also admit in the next breath that they have no evidence for that. And anyone who feels it is true regardless ought to be very cautious of how close that attitude is to real world tyranny. The myth of the young genius is certainly a myth, but so are all our motherly worries about “warm feelings” and appropriate “encouragement”. Skills, knowledge, and talent are their own reward: valuable in their own right, and intuitively desirable. So long as we are not actively discouraging these things – through clumsy schooling, tyrannical parenting, or jealous gatekeeping – then genius, whatever it turns out to be, will take care of itself.

 

*** The Popperian Podcast #19 – Joseph Agassi – ‘Rules of the Game’ (libsyn.com)

 

Karl Popper’s Hopeful Monsters

In conversation with Joseph Agassi

 

Talking with Joseph Agassi is an uncoordinated affair. He speaks, he stops, and he interrupts at the most improbable and surprising of places. In unnatural lurches, he jokes while being serious, is kind when talking over you, and elaborates with single word answers. The bewilderment and confusion you feel hits your mind like a panic: this is not so much a casual walk in the philosophical park as a knee-hugging collapse in a muddy trench; the bombs landing a little closer each time.

Getting him on the phone is both easy and hard. Agassi always responds to emails with the speed of a plugged-in, tech-savvy teenager, but still in the blunt, busy tones of his natural voice. Dates are made, calendars marked, and he isn’t there. New dates, new times, and still no luck. Eventually I ring him at home without prior announcement. The phone is halfway through its first yawn when a confident voice cuts in: “Agassi here!” Despite our letters, he seems unsure of who I am, but as soon as I say “I was hoping to talk to you about philosophy and about Karl Popper” something shifts. The loud Hebrew music from the next room is turned off, guests are politely ushered out the door, and the man whom Rafe Champion once anointed as an “Intellectual Irritant” is ready, leaning into his microphone and spoiling for combat.

But he starts with a lament: “I have a constant sense of failure due to my inability to sustain reasonably good relations with the person to whom I am most indebted, both intellectually and personally.” Popper understood as much as anyone that criticism is always a sign of respect, but there is something about Agassi that seemed to hit a nerve. And although they spend long hours together at Popper’s home, their relationship – and Agassi’s criticism – is always a little too close to the bloodstream. From his closing front door (long past midnight), Popper is muttering to himself as to whether his student is more trouble than he’s worth. While Agassi – walking down the cobbled path to the gate – is lost in self-reflective thoughts about why he is putting himself through the late hours and the abuse; already planning his escape.

Jerusalem. Years earlier and somewhere in the corridors of Hebrew University. Agassi is young, balancing an education around his mandatory stint in the Israeli military. An undergraduate degree in physics, a master’s degree in physics, and then a change that is more profound than it might appear. What need does science have for the philosophy of science? For outsiders who never will – nor want to – step into a laboratory, but who nonetheless find a calling in telling those insiders what their lives are all about, why they behave the way they do, what they are trying to achieve each and every day… and why they fail to do it. Or to phrase the question as Agassi did to himself: is it a role of necessity or simply of nuisance?

London. The award of an overseas scholarship gives Agassi the corrective space he needs. It also forces him to go looking for a teacher, someone with the old rabbinical school fire that Agassi is used to, but someone also “underestimated” enough to be open to new, unknown, door-stopping students. In 1950s Britain, there is only one name that meets these criteria, and although he is well known within the pages of academia and literature, the general public still walk past him – and glance over his books – with lazy, anonymous eyes. Karl Popper is not Bertrand Russell, and for this he feels “amazingly underestimated”.

When that scholarship elapses, Agassi is stuck in a strange and appealing orbit. The thought of leaving is not a thought at all. He bullies his way into a research position, and when that too runs its course, he twists into a Ph.D. despite Popper advising against it. The whole point is to not leave… at least not until the intellectual partnership is completely exhausted, and the personal relationship completely broken. He takes the nuisance part of the philosophy of science to heart and pesters his teacher-supervisor-employer-friend with the happiest/nastiest criticism and argument he can find. And Agassi is grateful for everything it gives him: “Such intellectual success as I have enjoyed is almost entirely thanks to my work under Popper’s tutelage”.

When the falling-out comes, it is easy to forget the closeness of the two men. With enough clout and job security to avoid his campus obligations, Popper spends most of his time locked-up in his house in Kent. The few students that visit him are those who have been personally invited. A rich jealousy grows over the Popperian School, everyone fighting for even the slightest of chats over the daintiest cups of tea. What they all remember of their few stopovers is this: when they arrive Agassi is already there, comfortable and fed as if it were his own home; when they leave, they leave alone, with Agassi and Popper waving goodbye in unison, preparing for their evening debate.

The jealousy and the ego come also from Popper. And he has a strange rule that seems both personal and out of place. While it is fine and appropriate to criticize other scientists and philosophers as loudly and publicly as possible, the Popperian School is different – a place where all intellectual exchanges must remain private. Why Popper chooses this is up in the clouds of psychologism, but it doesn’t make much sense for his philosophy. Break the rule and Popper breaks it too: a public feud grown from a public, good-faith expression of critical rationalism. And a feud it is, with the quality of “expressions” and “disagreement” becoming “brief, ad hominem, and worthless at best.”

Six years is all Agassi can take (this is more than most people). He gets out with his doctorate and his sanity and a richer mind. Then someone hands him a copy of Popper’s latest book, Objective Knowledge, and asks him to write a review. Unfortunately for both men “the book was very poor”, an empty philosophical statement “buried under a thicket of misconceptions.” The worst – and most shameful – tendency of Popper is there, smeared across the pages, sparking hard memories for Agassi and tragic emotions for the reader: Popper is constructing his own myth, writing his own biography, planning for historical applause.

The pen portrait. A philosopher who cares more, and thinks in higher tones, than his contemporaries. He argues deep into nights, mornings, and broken relationships because the truth matters, it reaches beyond itself, changes the world and the people it touches, and so the fight is a question of duty and honour. He is straightforward when others are not. He is interesting when others are not. And his veins run with an unnatural amount of common sense. This explains the bitterness and the jealousy that stalks the slow-moving, fast-thinking semi-recluse. It is the burden of high rationality and of bravery… and it is the narrative of Popper’s life that has won the day!

The camera portrait. A bullying old man, angry and bitter about the recognition he feels is being denied to him. Arguing with his adversaries and his friends and with his students, Popper tries to prove this tragic neglect true by proving everyone else wrong. If he can always come out on top, always batter his opponents into submission, then it stands that he must be a philosopher of extraordinary quality. And if those victories are public to the point of gossip, everyone else will begin to see that quality, along with the mistreatment. On the wrong end of grandiosity and myth-building, students like Agassi are stuck with the “bullying”, the “dogmatic”, the “cruel”, the “domineering”, the “capricious”, the “complaining”, the “disrespect”…

It starts with an unpleasant little sentence where Popper says: “I have always been interested in Goldschmidt’s theories, and I drew Goldschmidt’s ‘hopeful monsters’ to the attention of I. Lakatos, who referred to them in his ‘Proofs and Refutations’.” Agassi reads this and gets a “jolt” – he has seen this before, felt the same odd taste of trafficked recognition over recent years. Perhaps it is true that “the arrow which has left the hunter’s bow belongs to the hunter no longer”, but it certainly doesn’t then belong to the person who simply saw it fly. Having done nothing to deserve it, Popper is trying to claim credit from Lakatos for Lakatos’ use of Goldschmidt, and credit from Goldschmidt for Lakatos’ continuation of his theory.

Just who told Popper about Goldschmidt in the first place we will never know – Popper has left that link of the chain unreferenced. But here Popper has an Agassi-shaped problem: someone who quickly senses something wrong with the dates and with the philosophical claims. Popper says he wrote his original paper – Evolution and the Tree of Knowledge – in 1961, but being “no expert” on the topic, decided against publishing. In 1963 Lakatos publishes his own work, with its hard influences from Goldschmidt’s book. And in 1973 Popper circles back around to his old paper, now seeing it as worthy of publication. It is a timeline that invites its own doubts and questions: was it Popper who introduced Lakatos to Goldschmidt, or the other way around?

There is a degree of historical nit-picking to this, but Agassi is just sharpening his blade. He is a Popperian of course, an admirer of the man as well as his philosophy, and this new book – Objective Knowledge – doesn’t live up to either. Speaking in over-the-top deference to offset what is to come – calling Popper “Sir Karl” – Agassi restates Popper’s own methodology: 1. Start with a problem. 2. Pay your predecessors, and past solutions, their dues. 3. Show the error of these solutions. 4. Present your own, improved, solution (something that explains more). 5. Make sure your solution is immune to the previous level’s criticism. 6. And finally, acknowledge other valid, unfalsified, solutions. Agassi finishes by saying that Objective Knowledge fails each and every step.

The Darwinian theory of knowledge – or evolutionary epistemology – goes like this: think about knowledge for just a moment or two, and you are likely to get quickly tied up in all sorts of bad ideas. And this has a lot to do with the type of questions we ask and answer: how do we know something is true? How can we be certain of things? How do our senses produce truth? Bad questions lead to bad answers, as they did for all the predecessor theories to Darwinian evolution. Asking questions such as why do birds have wings? they were setting themselves up for error: birds have wings so that they can fly! And just like that, from a bad question, you will likely produce theories of godly design, rather than evolution by natural selection.

What allowed Darwin to make the breakthrough that he did was a new question, one which opened the space for better answers. Ask instead what kind of process would lead to a bird having wings? and suddenly the true theory becomes easier to find. And so it is also the case with knowledge creation. If instead of asking questions along the lines of how it is that we can know things, we asked how does knowledge grow? a lot of the confusion and mistakes could have been avoided. It would also have opened our eyes to some wonderful similarities between biological and epistemological processes.

Biology. In any population of any species there are genetic variations, meaning that despite living in the same environment and evolving together, every individual is different in some way. Many of those variations will be irrelevant, neither helping nor hurting the host organism; these will die out slowly over time. Some will be harmful, and will die out much more quickly as their hosts die too. And some will be improvements, giving their hosts a competitive advantage within their ecological niche. These are the survivors, the ones who hang around a little longer, who are better adapted to the dangers of their world, who are more likely to reproduce and pass those genetic improvements on to their offspring.

This explains the wide variety of life that we see in nature, the dramatic success of certain genetic changes, and the higher heritability of success rather than failure. The process is a process, and it continues. Environments change, making past genetic improvements less helpful than they once were, or simply creating a radically new ecology to which the species is poorly adapted. The other species change too. They evolve, and some of that evolution will be targeted at hunting this example species to extinction; or at expelling it from its territory. What is successful today won’t be tomorrow. Nothing lasts. And with every successful improvement comes a host of unforeseen problems. The good news is that there is always another possible variation that could fix the new problem… if only it materializes quickly enough. The species that don’t adapt, die!
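
As a rough sketch of that logic – my own illustration, assuming nothing beyond the loop just described – the whole process fits in a few lines of Python: blind variation, selection by death, inheritance. The target, population size, and mutation size are arbitrary stand-ins for an environment the organisms cannot see.

import random

TARGET = 100.0      # the environment's hidden "problem" (arbitrary)
POP_SIZE = 200
GENERATIONS = 500

def fitness(genome):
    """Closer to the environmental target = better adapted."""
    return -abs(TARGET - genome)

# An initial population, nowhere near well adapted.
population = [random.uniform(0.0, 10.0) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Variation is blind: mutations are random, mostly neutral or harmful.
    offspring = [g + random.gauss(0.0, 1.0) for g in population]
    # Selection is death: only the fitter half survives to reproduce.
    population = sorted(population + offspring, key=fitness)[-POP_SIZE:]

print(max(population, key=fitness))  # creeps toward 100.0, one death at a time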

Epistemology. Or how does knowledge grow? Just as there is constant variation in genes, knowledge too is always changing and multiplying and dying and succeeding. Most new ideas are fairly neutral, some are harmful, and only very few will ever be successful. But those select few have something on their side. They are improvements on what came before them; they explain the world in more complete, more accurate, or more vibrant terms. They give their hosts a competitive advantage. This is noticed by other people (it is only people who are capable of explanatory knowledge) and copied, spreading the new knowledge and spreading the competitive advantage. The people and societies that don’t copy these successful ideas pay a high price: stagnating, suffering, falling behind, and dying.

But knowledge comes from problems – problems with existing theories or problems with our understanding of the world (these are theories as well, but it is a helpful distinction for where we are heading). Just as with biological organisms, knowledge too lives within an ecological niche of a kind. It doesn’t build up from nothing, but answers a need or a selection pressure. An asteroid heading toward Earth makes the knowledge of how to deflect such objects vital, or the discovery of a new pandemic-causing virus makes knowledge of potential vaccines and mitigation policies suddenly important. The best theories – the ones we consider as true – are simply the best surviving ones… and by extension, the ones that help us to survive and hopefully thrive.

The real selection pressure, however, is always criticism. Criticism from us, about our best existing theories. It might be an asteroid or a virus that causes knowledge to change, grow and become relevant, but more often than not it is nothing more than one human being disagreeing with another human being: a theory that another theory is false. There is always some way to criticize even our best and surest-footed theories, and when those ways are exhausted, new ones can always be thought up. Here we have an endless landscape of variation, analogous to that of gene mutation and gene coupling. And with each new criticism comes the possibility of improvement – that the new theory is better than the incumbent, and so, through selection pressure and adaptation, takes over, survives, and reproduces, while the old one slowly dies out.
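
A similarly hedged sketch of the epistemic version – again my own illustration, not anything Popper wrote – swaps blind mutation for conjecture and death for criticism: theories are guessed rules about a hidden reality, and a single counterexample kills the guess rather than the guesser. (Real conjecture is targeted at a problem; it is random here only to keep the toy small.)

import random

def hidden_reality(x):
    """The unknown world we are guessing at (unavailable to the guesser)."""
    return 3 * x + 7

def conjecture():
    """Creative variation: propose a new linear theory y = a*x + b."""
    return (random.randint(0, 10), random.randint(0, 10))

def criticize(theory):
    """Selection pressure: hunt for a counterexample; None means survival."""
    a, b = theory
    for x in range(-50, 50):
        if a * x + b != hidden_reality(x):
            return x  # a refutation: the theory fails here
    return None

best = conjecture()
while criticize(best) is not None:
    best = conjecture()  # the refuted theory dies; its holder does not

print(best)  # (3, 7) – the theory that has so far survived criticism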

To quote myself from an earlier paragraph: “What is successful today won’t be tomorrow. Nothing lasts. And with every successful improvement comes a host of unforeseen problems. The good news is that there is always another possible variation that could fix the new problem… if only it materializes quickly enough.” As true as this is for both biological and epistemological evolution, there is a difference. Natural selection in living species is a nasty, painful, wasteful, and unbearably long process. Millions of deleterious variations are likely to happen before a single positive one occurs; it then takes thousands of years, generations upon generations, of handing those genes down to offspring for the variation to become stable within the gene pool and reasonably widespread.

Worse, for a genetic change to make a significant difference in the survival of a species, there has to be a problem on the ground for it to solve, or improve upon. Which means for those thousands, perhaps millions, of years waiting around for a successful genetic mutation, the species in question is suffering… a lot. For the evolution of camouflage to make a difference to a lizard, it can only be because it was being hunted to near extinction – without any protection – beforehand. An evolved increase in strength or size implies that the smaller, weaker species was easy prey. An increase in speed comes from a need to outrun predators more effectively. And improvements in dexterity, or other hunting abilities, point to constant starvation and food insecurity in one’s ancestors.

Any biological sensation of pain or unease or discomfort or anxiety or worry or fear or panic or misery that you feel today can only be because the human body evolved to feel it. And that it evolved to feel it can only be because the members of your distant family tree who didn’t have those feelings suffered horribly and died as a result. Every small change in our biology is thanks to an unfathomable amount of carnage and mortality; this is why Susan Blackmore calls biological evolution “design by death”.

Epistemology on the other hand evolves with more efficiency and less bloodshed. For a start, it’s not blind. Rather than waiting patiently for random variation after random variation, hoping that one of these might become helpful to the species before it’s too late, explanatory knowledge evolves within a mind after the discovery of a specific problem that needs solving. Though they might still fail, all these variations/solutions that are thought up have an advantage – they are targeted at solving the problem at hand; there is none of that blind randomness to the process, and much less waste. And whereas biological evolution is restricted to small, incremental, physical changes, epistemology is driven not only by a knowing purpose, but also by human creativity; removing all physical limits on what is possible, as well as allowing for larger jumps forward in evolution (without the need for all the smaller, intermediate steps).

It gets better still. Epistemological evolution – explanatory knowledge – is also faster than its biological competitor… much faster. All those thousands and millions and billions of years of change can happen at the speed of a neuron firing in a human brain. That’s all it takes for the world – and life on it – to change forever, in the most dramatic of ways. Rather than waiting for a problem to manifest itself, and then desperately scratching for a solution, epistemological evolution can reach beyond our current safety, imagine future problems before they manifest, and then go searching for pre-emptive solutions. A process in which no one needs to suffer at all for improvements to be found.

This is the largest – and most significant – difference between biological evolution and evolutionary epistemology: the cost of progress. An animal with bad genetic code – an error of some kind which needs replacing by Darwinian natural selection – is doomed. The error is something that can only be solved by future mutations in future offspring, and so the host will always die before seeing a positive change. And if the genetic error comes in the form of a competitive disadvantage to other members of the species, then the host will die out without even that faint possibility of having offspring, and without the fainter possibility of genetic improvements in the next generation. Either way, errors kill their hosts. In epistemology however, we can discover our errors and eliminate them without anyone having to die. All we have to do is change our minds, and they are gone, no longer harmful and no longer a problem. We can let our false theories die in our place!

Spinoffs. No sooner had evolutionary epistemology found light than philosophers were hard at work making it incomprehensible. The instructionist school was born, then the selectionist: should we be judging the growth of knowledge by the behaviour of the people who hold it, or by the underlying truth-claims? How do we get the knowledge – replicating itself across hosts – through the cloud of psychological nonsense we all have in our minds? What is the appropriate unit of study: the success of an idea in spreading, or the competitive advantage of the people who adopt it? Are the ideas stored within individuals or within an inexplicit culture? What is the replicator – what is the one-to-one analogy for the gene, the cell, the phenotype, etc.? Are our theories of epistemological evolution contingent upon our theories of knowledge (empiricism, inductivism, etc.)? Then we are off in the wilderness talking about hypothetical realism, epistemological dualism, adaptationism, perspectivism, embodied theories, disembodied organisms… And none of it can save Karl Popper.

Back to Joseph Agassi. He reads Popper’s new work, rechecks Popper’s own method for inquiry, and goes to war. Instead of starting with a problem – as he should – Popper only has a distinction, a delicate turning of slight details. What his theory solves, is already solved by Goldschmidt’s; and what his theory explains, is already explained. So on points one and two of his methodology (1. Start with a problem. 2. Pay your predecessors, and past solutions, their dues) Popper has failed. Popper’s great new twist was to look at both biological and epistemological evolution as not just similar in appearance and outcome, but also as doing the same thing: problem solving.

This is fairly uncontroversial when it comes to explanatory, human-created, knowledge. But biological evolution is also knowledge creation, of a sort. The ability and awareness of how to run faster, avoid predators, find food, and reproduce most effectively are encoded within the genes of animals (as well as human beings). It is rigid, confined, slow-moving, but it is unmistakably still knowledge. And it evolved to solve problems – problems with being hunted by other animals, problems with starvation and hunger, problems with passing on our genes within a competitive environment. Genetic knowledge is just the slower, dimmer cousin of explanatory knowledge; different in ability, but not in kind.

Agassi gives Popper his dues here – he has added something to the existing theory (“it connects the amoeba and Einstein as problem-solvers”), but not much. On the third methodological step (3. Show the error of these [previous] solutions) Popper finds a slight foothold. But it is less an error in Goldschmidt’s theory that he is pointing out than an incompleteness, a lack of emphasis. When it comes to four (4. Present your own, improved, solution (something that explains more)) Agassi is unimpressed: “here comes Popper’s claim that he has an explanatory theory. He has none that I can see.” By this same failure, point five (5. Make sure your solution is immune to the previous level’s criticism) fails too.

The worst sin that Agassi sees in the pages of Objective Knowledge, is with six (6. And finally, acknowledge other valid, unfalsified, solutions), and how it relates to two (2. Pay your predecessors and past solutions their dues). Having already stolen the credit for Lakatos’ theory by referencing Goldschmidt, and saying that he (Popper) deserves recognition for dubiously connecting the two men, Popper then goes on to diminish Goldschmidt to a single, decade-old afterthought, in a single footnote. Agassi writes: “I mean, how does Goldschmidt come into Objective Knowledge: through the back door in a 1972 Addendum to a 1961 paper”.

Having cut his way through Objective Knowledge, as well as some favourites from Popper’s back catalogue – corroboration, scientific credibility, what constitutes an explanation – Agassi turns around to stamp the final vestiges of life from the book: “looking again at Popper’s excursions into biology, I am amazed to find how much pointless though valid criticism it includes… I am amazed to see that they [his papers] start with attacks. No problems, no discussion of strength of valid solutions to be attacked.”

When Popper reads Agassi’s words, he does the unthinkable for someone who believes that “all criticism is constructive” – he ignores it! And for Agassi, this is “painful”; after all, “any criticism is better than a dismissal or an oversight”. When messages begin leaking through to him from mutual friends and colleagues, the gossip and the second-hand professionalism are too much for Agassi to tolerate. He phones Popper to talk about his review – “I was used to him shouting at me” Agassi writes, but this time all he did was “scoff at me”.

All these moments in Popperian history can ring as distantly as stories of Socrates plodding around the agora. And so it is hard to imagine that Agassi was there, beginning to end, and at 95 years old he is joyfully still here. His memory is strong and unshaken by age, his stories rich, long and wonderfully personal: the whims of Paul Feyerabend, the plagiarism of Imre Lakatos, the soldier’s honesty of John Watkins, the persistent fraud of Ludwig Wittgenstein, the regrettable weakness of Thomas Kuhn, and the intense anger of Karl Popper.

Perhaps Agassi has earnt that label of Intellectual Irritant, and that is the place he will hold in this history when people inevitably write his story. But what lingers from speaking with him is only admiration. I admire that he doesn’t back down, doesn’t retreat at any cost, and fights to blood and bone. I also admire that he drips with emotion and regret when thinking about the toll it all took, and the harm it may have produced… whether the fault is his or theirs: “No amount of justification of an action may allow us to ignore the pain it causes”. And what lingers too, despite the unpleasantness, is the gratitude he still feels for a single, chance event, which changed his life for the better:

I do not know how much I am indebted to Sir Karl Popper, except that but for my having been his student and research associate I would not be what I now am. I consider that fact my greatest fortune.

 

*** Shortly after writing this article (publication delayed) Joseph Agassi died at his home in Tel Aviv (1927–2023).

 

*** The Popperian Podcast #18 – Joseph Agassi – ‘Karl Popper’s Hopeful Monsters’ (libsyn.com)

 

Karl Popper’s Social Turn

In conversation with Rafe Champion

 

A group of unrelated and unknown people meet in a room. They don’t ask about credentials and they have nothing in common beyond their desire to be in that room. Outside on the street, in their cars, in restaurants, with their families and friends, these people are as insufferable and flawed as the rest of us: gossiping, threatening, sweet-talking, fighting – dogmatic, tribal, loyal, and full of prejudice. But inside that room everything changes: they argue and criticise and interrogate, but all their other human baggage is left at the door. If you were to accidentally walk in and watch them for a few minutes, you would find it impossible to guess who the senior members were and who the juniors, who had had more professional success and who less, or what their lives and personalities might look like when the meeting ends and everyone goes home.

These hypothetical people are scientists, and for most of us this is enough to explain the unusual behaviour of that room. By extreme good fortune we have become accustomed to the sudden shift in attitude and rigour. But what actually happens beyond that threshold takes some clearing up. And asking the scientists themselves likely won’t get you there – for the most part, they don’t know either! So how is it that these deeply individualistic people manage to leave their egos at home and work towards something impersonal and collective, without being conscious of the process themselves?

In the philosophy of science, it is what Ian Jarvie has come to call the “social turn”, Rafe Champion the “institutional turn”, and Gordon Tullock “the organization of inquiry”; and it belonged first to Karl Popper:

what we call ‘scientific objectivity’ is not a product of the individual scientist’s impartiality, but a product of the social or public character of scientific method; and the individual scientist’s impartiality is, so far as it exists, not the source but rather the result of this socially or institutionally organized objectivity of science.

Now this seems odd… counterintuitive. Most Popperian readers will have a clear-enough image in their minds of what science ought to look like. It involves as many competing theories as possible, as much criticism and as many tests as possible, and as much advocacy and argument as possible. We will never reach an upper limit on this, a place where we stop and decide that science has become drunk on its own health, and so the question of organisation doesn’t seem like much of a question at all: if you want science to succeed, simply get out of its way; let the free choice of free individuals rule, hugging as close as we can to laissez-faire principles.

Popper would agree with those words, and disagree with the implication. How those individuals fit together also plays an important role in this – without it, ‘scientific objectivity’ would be impossible. At the end of the day, science actually achieves things: it solves problems, moves forward, improves the world, and makes progress. It is more than just a collection of unconnected scientists building their own hypotheses and running their own tests. If it were only this, it would be a miracle if even a single problem were ever solved.

Perhaps a clearer way of looking at the problem is in reverse. What would you do if you wanted to cripple the progress of science and the growth of knowledge? Here is Popper:

By closing down or controlling laboratories for research, by suppressing or controlling scientific periodicals and other means of discussion, by suppressing scientific congresses and conferences, by suppressing Universities and other schools, by suppressing books, the printing press, writing, and, in the end, speaking. All these things which indeed might be suppressed (or controlled) are social institutions… Scientific method itself has social aspects.

There are two interrelated motives to everything scientific: understanding the natural world and controlling the natural world. It is the difference between “pure” and “applied”, between curiosity and a practical purpose. Recognised in nearly every lab, in every country, the line is as appropriate as it is fuzzy. Like it or not, once the step of understanding the natural world has been successful, the controlling of the natural world has already – largely – come to life, with the many immediate uses, tests, experiments, applications, and implications already present in, and explained by, the pure research discovery. There is never a hard and obvious demarcation in the scientist’s mind, which slips unavoidably between the pure and applied titles as he works.

The objective truth of things is unknown and out there to be discovered, and we only ever get there by guessing (conjectures) at what that unknown world looks like. This is the way knowledge creation happens, whether it is in the mind of a great theoretical scientist or in the dusty corner of an undergraduate laboratory (still in the mind, of course). These people might have different pressures, different funding incentives, or different motivations (creating something vs. becoming rich), but as they are both trying to create new knowledge (the knowledge of how something works or the knowledge of what to do with it), they are both scientists of the same kind; the same kind of scientist that we all are… every single one of us!

At all stages, the same game is being played and the same question asked: do my theories about the world actually correspond to reality? And there is no such thing as a theory which doesn’t try to connect with the world beyond our minds. Without a phenomenon of some kind that needs explaining, a scientific theory is never born. This was Galileo’s great crime: not discovering a new motion of the planets, but claiming that this represented a new law of nature and a better way to explain what was actually out there in the universe. The charge against him was presented by the Jesuit cardinal in this way: “act prudently” and “content yourself with speaking hypothetically and not absolutely.”

If the work of science were only to state what we see, and then revise what we see, and never draw a connection to underlying laws of nature – making our theories “simply abbreviated statements of observations” – then we have some heavy lifting before us. When the Einsteinian system replaced the Newtonian one, it wasn’t because it explained more observations; after all there are very few observations that Einstein’s theory can handle more simply, or effectively, than Newton’s. If the whole game were about data collection and accuracy, then general relativity would never have been regarded as anything more than a “minor step forward”.

Albert Einstein is supposed to have said, “if you want to know what a scientist really believes, don’t listen to what he says, but observe what he is working on.” And very few will ever be caught working on theories or experiments in a way that would indicate a scepticism about objective reality, or as if their research were only “devices for conveniently summarising experimental results.” Pick a scientist at random, controlling only for their being committed to their job, as well as being honest. Working day and night, sweating and suffering for their theories and experiments, what they probably don’t do is study their own interests and motivations; asking themselves why certain problems were chosen, why particular methods were used, or how truth relates to what they are doing.

Because they don’t spend much time on these issues, and because so much of what they do understand is inexplicit, they are liable to make some strange mistakes when pushed – grabbing hold of “various ill-conceived theories”. It is, after all, a hard thing to say publicly – as well as to oneself – that I don’t really know what I am doing! Left alone to scurry around their labs, these scientists take plenty for granted, and stumble onto unspoken answers: the possibility of truth is the only thing that brings meaning to their work, and progress to science. Without it, all their achievements would be miracles!

We (scientists and non-scientists alike) don’t begin by collecting data; we form theories, and then test those theories through experiments, against other theories, and finally against the collection of data. The point being, the accumulation of information gets us absolutely nowhere until a human mind creates a theory to make sense of it – as well as to make the accumulation possible. Without a theory, how does anyone know what to collect in the first place, or what constitutes a data point?

Straighten this out as well as you can, find your methodological groove, and what is left – according to Popper – is still only ever conditionally accepted as true. It is always open to revision and scepticism, and this also leads many scientists to make elementary mistakes about the nature of their work, thinking that the critical attitude which has been drilled into them has unhelpful implications for the scientific enterprise. It goes like this: if I am required to be sceptical about every possible theory, this surely means that they are all untrue. Or: if I am required to be sceptical about every possible theory, then I ought to also be sceptical about the very existence of truth.

And all sorts of horrible little ideas fill that opening space: science is about consensus, or science is about workability, or science is about the collection of data. The shame and the errors are magnified by the one important – and true – implication which is most commonly missed: our “theories seem not to last”; the history of science has been a history of radical change and of disproof. Nothing that we do ever seems to stand the test of time – all it takes is a little well-applied criticism, and everything we once thought to be true crumbles at our feet. We are always likely to be wrong (in one way or another), and yet never likely to see this for ourselves.

Its origins are as long and deeply carved as you could hope for, stretching back to The Logic of Scientific Discovery and to The Open Society and Its Enemies, but this social turn in Popper’s philosophy always pivots on human fallibility. As a small proof of this, Popper himself seemed not to fully understand the significance of his turn; snow-blinded to his own enormous discovery. “Popper’s consistent ability to think socially also does much to account for his originality” writes Jarvie in The Republic of Science, “since it is hard to do and its difficulty is attested by how often readers and critics of Popper do not grasp that this is what he is doing.”

So wrapped-up was Popper in defending science against the charge of being a “mere social construction” that his snow blindness was largely self-inflicted. Perhaps the word turn is not helpful here, indicating a slight or casual change, a subtle mention or glance in an otherwise ignored direction. This underplays just how essential “thinking socially” is to Popper’s work, much more so than thinking “logically or psychologically”. It is so central to everything Popper thought, and to the scientific method, that Jarvie believes it amounts to a “proto-constitution of science.”

It is what keeps science afloat and functioning, even when the best methodology is not used. Alexander Fleming’s discovery of Penicillium rubens was not the accident that most people think it to be. Thousands of other researchers had seen the same contamination on their culture plates – and millions more had seen it at the non-microscopic level – but only Fleming realised its importance. An accident may have led to the contamination, but not to the creation of the theory; that was all Fleming! Still, it was an example of bad methodology, and the reason why the theory holds today is the same reason why most others fail: community!

Not a community of like-minded colleagues, nor a community held together by a governing body or a set of laws, but a group of otherwise disconnected people committed to doing the one thing for each other that we cannot do (particularly well) for ourselves: expose our mistakes. No one gives commands, there are no hard organising principles, and none of it is consciously designed. It all hinges on the key truth that, if you desire new knowledge, then you must want to find and eliminate error. And it is this which requires a community… of a kind.

The theories that survive also owe their lives to this social world of scientists, without whom (and their best efforts at criticism and refutation) they would be indistinguishable from the crowd of false theories. A perfectly true hypothesis can never be considered as such until it moves from its host’s mind into the minds of other scientists; with all the doubt and difficulty and explanation and testing and predictions and implications that go along with that.

There are better and worse ways for this to happen though. Dissemination is always a challenge and a balancing act – trying to get new theories exposed to the scientific community as quickly and cleanly as possible, whilst also filtering out the frivolous, the nonsensical, the fictional, the non-rigorous, the fraudulent, and the fabricated. We all have a limited amount of time and attention to spare, after all. When this is done well, scientific knowledge grows sharply, benefiting us all. Done badly, the social fabric tears under a deluge of undistilled information and phony publications. The problems of science are the problems of the human condition; Tullock puts it like this:

There is no reason to believe that scientists are much more thoughtful and honest than other men. The obvious high degree of truthfulness in scientific research comes not from the superior moral probity of the individual scientists, but from the social environment in which they operate.

A scientist caught faking his experiments or fudging his results is not an existential threat to science – nor to the scientific community – but he does represent a muddying risk to the smooth functioning of things, and to the pace of progress (a non-trivial problem when your whole goal is to improve things, through knowledge creation, as quickly as possible). Matching the seriousness of the crime, the punishment is invariably excommunication: the end of a career, the thorough collapse of reputation, and the questioning of all previous work.

In the 1920s, Paul Kammerer was one of Europe’s most prominent and well-respected biologists. After years and years of success and rigour and achievement, he was then associated with a single faked experiment… and it “ruined him”. Kammerer’s downfall was sudden, dramatic, complete, and irreversible; ending, sadly, with his suicide in 1926. The fraud he took part in speaks of the tragic desperation he must have been feeling. Fabricated theories are unpredictable, fabricated experiments are unrepeatable, and fabricated discoveries have no practical applications. Kammerer was always going to be exposed, his deception sooner or later bubbling to the surface; which is why the scientific world is more truthful and honest than the non-scientific: lies are better and more seriously policed!

So if everything comes back to the question of effective dissemination – allowing scientists to check each other's work as quickly as possible, whilst also filtering out the obvious errors – then how should this be done? These days everyone has the answer, infecting their vocabulary like a pathogen escaping a laboratory: peer review! And it means absolutely nothing – no content implied! How peer review works, and how it should work to be more effective, remains a mystery beyond the plain meaning of those two words. At its bones, it is as crude as the worst aspects of human life: “scientific advances are disseminated” writes Tullock, “through the same channels of advertising, salesmanship, and public relations as other commercial products… [and] this does have some effect on the development of science.”

So, much of what is good science misses out on publication, not because it lacks merit or quality, but because it doesn’t line up with editorial guidelines or audience expectations, or because the reviewers simply don’t understand it. All journals are specialised, but this specialisation can only go so far, and sadly the “most important” new research often falls on deaf, and confused, ears. Then the reputation of the author becomes a problem: notable scientists get too much of a free pass, while the unknown are rejected on the assumption that their unknownness is a symptom of poor research. After that comes the problem of the sheer number of journals now operating (all trying to meet publication deadlines), meaning that ten rejections of an article carry no necessary implication for the future of the research. Short on content for the next quarter, the eleventh will take it. If not the eleventh, then the twelfth…

How to save science from its own institution? And from becoming a victim of diminishing returns? Choose your poison: improve the tenure process, offering more protection for researchers at all stages of their careers; do away with the résumé-importance of having a flock of grad students (encouraged to support and advance their supervisor’s work); expand access to funding and equipment; place more value on the receiving of awards (encouraging boldness) rather than the accumulation of publications; or perhaps something much smaller, much simpler, but which would have a disproportionate downstream effect…

For this we go back to Gordon Tullock and his hard look at the structure buttressing the scientific enterprise. One idea, one requirement upon the editorial boards of academic journals, would be revolutionary: make them publish a list of their rejections. From this, we could see how discerning they are – how much they reject vs. how much they publish; we would also begin to see any biases that they might have, if they were consistently rejecting papers from one viewpoint, for example; and the next time a ground-breaking paper is published, you could go back and see all the journals that rejected it before it was published, popular, and acclaimed.
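To make the proposal concrete – everything below is hypothetical, the journal name, viewpoints and numbers invented purely for illustration – a published rejection list would reduce these questions to a simple calculation:

from collections import Counter

# Hypothetical records of the kind Tullock's rule would force into the open:
# (journal, the paper's school or viewpoint, editorial decision).
decisions = [
    ("Journal A", "inductivist", "rejected"),
    ("Journal A", "inductivist", "published"),
    ("Journal A", "falsificationist", "rejected"),
    ("Journal A", "falsificationist", "rejected"),
]

def rejection_rates(records):
    # Count submissions and rejections per (journal, viewpoint) pair,
    # then return the rejection rate for each pair.
    totals, rejected = Counter(), Counter()
    for journal, viewpoint, decision in records:
        totals[(journal, viewpoint)] += 1
        if decision == "rejected":
            rejected[(journal, viewpoint)] += 1
    return {pair: rejected[pair] / totals[pair] for pair in totals}

for (journal, viewpoint), rate in sorted(rejection_rates(decisions).items()):
    print(f"{journal}: rejects {rate:.0%} of {viewpoint} submissions")

A consistently lopsided rate across viewpoints would be exactly the kind of bias Tullock wanted dragged into the open.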

Or, as Rafe Champion points out, you could simply give new scientists “a good introduction to the works of critical rationalism” and Karl Popper.

 

*** The Popperian Podcast #17 – Rafe Champion – ‘Karl Popper’s Social Turn’ (libsyn.com)

Finding Consolation in Truth

In conversation with Michael Ignatieff

I am visiting a friend who lost his wife six months ago. He is frail but unsparingly alert. The chair where she used to sit is still in its place across from his. The room remains as she arranged it. I have brought him a cake from a café that they used to visit together when they were courting. He eats a slice greedily. When I ask him how things are going, he looks out the window and says quietly, “If only I could believe that I would see her again.” There is nothing I can say, so we sit in silence. I came to console or at least comfort, but I can’t do either. To understand consolation, it is necessary to begin with the moments when it is impossible.

Michael Ignatieff has lived an interesting life, in different – but connected – worlds. A young journalist poring over questions of disputed nationalism and civil war; an award-winning novelist and non-fiction writer; an historian and Harvard professor; a politician and leader of the Liberal Party of Canada; President and Rector of Central European University in Budapest, Hungary (until the university was expelled from the country by Prime Minister Viktor Orban); then back to his roots as a professor and writer… and Popperian.

At different moments Ignatieff has returned to the theme of Karl Popper – running lecture series and conferences, editing and writing books – and particularly to The Open Society. As much as the term means anything, he has been – and remains – an academic hero of mine, someone who stood up for the right things in the hardest moments; courageous, principled and questioning.

I had been hearing the name for years before it meant anything to me, littered through the footnotes and references of the political science books I was studying in my undergraduate days. Then by happy chance I was looking to do some private reading on the war in Bosnia, and I picked up a copy of The Warrior's Honor. I was immediately stuck to my seat! Here was a professor of history from the fanciest of Ivy League schools, writing about a foreign civil war… and he was there in the blood and the mud and the horror! Instead of following the fighting from a soft bed in America, or a café in some safe neighbouring country, he camoed-up, travelled to the frontlines, slept in the trenches, dodged bullets and artillery, and spoke face-to-face with the ethnic warriors he wanted to document.

In many ways Ignatieff is now heading back to those earlier moments in his career, looking at them through an ageing lens and the slow-growing prospect of the end of his own life. Asking himself questions about the meaningfulness of existence, the value already exacted and the value still waiting to be found, and how to approach the inevitable harder moments, when everything appears lost; when consolation seems impossible. In good Popperian tradition, things don’t start with a hopeless definition of that word consolation, but with a growing thought – bright, dominating, challenging – and a very human problem: “There is no true consolation in illusion, so we must try, as Vaclav Havel said, ‘to live in truth.’”

So is it true? Is only truth capable of consoling us? It is certainly capable of some heavy lifting: outside the gates of Kresty prison in 1938, a line of women stretches along a brick wall. It is Leningrad, it is winter, and everyone is desperately cold… and with each minute the line grows longer. They are waiting to see the men inside, their men, the men they love and lived with, until the Yezhov terror and the late-night arrests took them away. The purges and the air of fear have done their job, and so the women whisper to each other, not knowing whom to trust, not wanting to draw attention to themselves, nor to their family on the other side of the wall. As Stalin’s regime swept millions from the face of the earth, these were the silent witnesses: cowed, scared, worrying, and the only proof that other people once lived.

As the frozen wind bites harder, one of the women softly exclaims, “Can you describe this?” The crowd remains still and quiet, and then a slightly louder whisper answers, “I can!” It is the poet Anna Akhmatova, in line at the prison to see her son, and as the two women catch eyes, a small, delicate smile appears on the first woman’s face. That smile did a lot! We know nothing about what happened to her or her family, only that it was likely heartbreaking, as were the times. But Akhmatova turned that small facial expression into poetry, and so she stood for a moment in time, for a faceless people, and for an inhuman tragedy; for people who refused to be forgotten by history.

Over the next twenty years Akhmatova continued to suffer with, to write about, and to immortalise the victims. These were people dragged to the limits of the human experience, driven to insanity with fear and hope. Most of them would never see their fathers, brothers, husbands and sons again; in fact, as they waited outside the gates at Kresty, the people they loved were likely already dead, or already transferred to some distant Siberian gulag. And they probably knew this. What else did they have? They could silently accept the ghostly new world they found themselves in, or they could reclaim the lost moral authority of their nation; futile as it might seem.

When everything has been taken away, and the prospects for change are so miserable, sometimes the only thing left to people is to stand as witnesses and wait – decades perhaps, lifetimes even – for vindication and for the madness to finally wash away. Life is also reclaimed in this way. Looking back on the year he spent behind the fence at Auschwitz – with his family, his people, and himself on the edge of death – Primo Levi admitted that it was also “when he felt most fiercely alive.”

But more than anything, we were their consolation. Their hope was as much political as it was moral. As they ached through the most unspeakable pain, they were thinking about us. When people like Akhmatova and Levi put pen to paper, they were consciously writing to the future with a hard epistemological idea: the truth matters, the future can always be better than the past, progress is possible in every circumstance, and even if we never actually feel the consolation that we need so badly, that doesn’t mean that it will never come, never be vicariously felt by others: “they had suffered for a faith, not a belief in paradise or salvation, but instead a resolute conviction that hell existed and that they had an obligation to chronicle it.”

There is a sad tendency to approach history with a detached sense of apathy: it is lost, it is over, and the forces working through it – and over us – are too large to bother with. What will happen, will happen! As calming as this might be for some people – and even psychologically healthy, helping them to accept the horrible things that have come their way – it is also not true. History weighs impossibly over us all, but as Vaclav Havel noted, it “is not something that takes place elsewhere: it takes place here. We all contribute to making it”.

Havel knew this as much as anyone could. A leading figure in the resistance to communist rule, he could be heard across the underground radio stations of Czechoslovakia. When this was suppressed, he moved into publishing absurdist satires and plays. When the printing houses and theatres banned him, he moved again, this time into the heart of the political opposition, choosing to become more, rather than less, prominent. He was arrested multiple times, constantly surveilled by the secret police, prosecuted, tortured… and then it all repeated. As a political prisoner he continued to write letters and push for change.

His last and longest prison sentence ended in 1983. Soon enough he was leading the Velvet Revolution which toppled the communist system, and “within seven years of leaving jail, he was president of his country.” What stuck with him most during the intermediate years was a sense of failure. That he had let too many people down, too often. Just like the long arc of history, shame of this kind is a difficult thing to deal with, but it is not helped by imagining that it belongs to a previous self, or to previous people. Optimism about the future comes from acknowledging error, not from avoiding it. From accepting the truth of your failures and living in a way that corrects them.

When you error-correct your own life in the hardest terms, external judgement mostly arrives as an old, neutral story; and the bits that don’t, as happy new visitors. Three years before leading his country to freedom and becoming president, Havel was asked by a journalist how he felt about the future: “Hope is definitely not the same thing as optimism. It is not the conviction that something will turn out well, but the certainty that something makes sense, regardless of how it turns out.”

When it comes to leading countries and suffering through darkness, Abraham Lincoln – and the consolation of war – deserves to be mentioned. A president of boundless empathy, Lincoln visited the barricades, talked with his soldiers, and carried their agony home with him; their young faces rippled by “the noise, the blood, and the terror.” And when they died, he wrote letters to their widows and to their orphans, knowing full well that nothing he said could change their heartache… he wrote anyway: “I feel how weak and fruitless must be any words of mine which should attempt to beguile you from the grief of a loss.”

The letters he received were of a different kind. Every day mothers and wives wrote to him pleading for clemency, hoping their imprisoned sons and husbands might be released, and not have to face the ultimate penalty for their desertion. Others simply begged him to end the war and allow the soldiers to return home. There was never a moment when Lincoln wasn’t aware of the horrible place he was leading his nation into, as well as the power he wielded over so many lives.

As this fog of suffering settled over him, Lincoln bit down on an unpleasant truth: “If war was to be waged…it must be waged with ferocious intensity.” He pushed hard into the back of Ulysses S. Grant and his army amassed at Richmond – the political commands and encouragement coming from the White House giving away nothing of the internal torture of the man inside. The battle would run as long as it needed to; the Union forces would scrape and crawl and continue to pay a heavy price, just so that the enemy would have to pay a higher one.

The downward spirals of the human condition and of history are always a weaker cousin of progress and improvement. One can only destroy or suppress; the other has an infinite vista of options and choices and possibilities and solutions and creativity before it. One is completely predictable; the other endlessly flexible, capable of being born anew each and every day. One fears the future; the other invents it. As the “terrible grandeur” of the Civil War reverberated inside the halls of politics, and as confidence in the idea of America weakened, suspicions, deception, confusion, revenge and retaliation found momentum. And Lincoln was reminded each and every day just how little control any one man has over history, even a president: “I claim not to have controlled events but confess plainly that events have controlled me.”

But truth is different! It reaches out between people, between nature and minds, and between the past and the future. If slavery really was the abomination that Lincoln believed it to be, then he would be able to do more than just assemble a better fighting force; he would also, eventually, be able to convince the Confederacy of its mistake. But first he would have to win, and he would have to do so with all the self-doubt and self-questioning that comes with the pursuit of truth and progress.

And so Lincoln spoke in universal terms. He could easily have defined the war – and must have been tempted to – in terms of Southern provocations and Southern slavery, and with that dumped unbearable, but satisfying, condemnation upon his enemy. Instead he declared the cause of the war to be “American slavery”, an “offense” that every man and woman, North and South, needed to own and to bear and to take responsibility for. This was a war fought not for the future of a nation and its people, but for a moral truth… and for all moral truths to come.

The consolation here must also be found in high principles. Success on the battlefield would end the war and stop the horror, but this wasn’t all that Lincoln was fighting for. He knew that “these are not the days of miracles” – those were as distant to him, even in victory, as we are to him now. Lincoln understood that the South would need help to accept its defeat, that the North would need help to forgive the South for making the war necessary, and that both sides would have to learn to look across at each other equally… as victims in the swell of history and ignorance.

In the thick mist of a Paris night, two men – strangers – knocked on the door of Marie Rose Vernet. They carried another man in their arms, weak, exhausted, sick, and a fugitive. This was the height of the Jacobin terror, everyone was under suspicion, and the price of helping an outlaw would be the death of you both. They asked Vernet to shelter their friend until he recovered, and hide him from the guillotine. She had only one question: “Is he virtuous?” When told that he was, she had only one answer: “Then let him come”.

The man being carried had a mouthful of a name, Marie-Jean-Antoine-Nicolas de Caritat, marquis de Condorcet, and a prominent résumé: secretary of the Royal Academy of Sciences, deputy of the National Convention, a politician, a scholar, a mathematician, and now a criminal of his own revolution. Months earlier Condorcet had walked proudly down the same Paris streets that he now hid from, wearing the new uniform of the National Guard, a prominent figure in the revolution.

Yet, whereas his fellow Guards carried a sword, Condorcet chose an umbrella. For him, the fire and the violence were unpleasant necessities. Built upon the new sciences of probability, calculus and economics, the French republic that he dreamed of involved an end to superstition, to tribalism, to ignorance, and to the “lackadaisical incompetence that had doomed the ancien régime.” Here was the hard-won opportunity not just for change, but for liberation. He drafted legislation that would outlaw slavery across the French colonies, he published pamphlets arguing for equal rights for women, and he wrote the draft of the French constitution in 1792.

And it cost him! Condorcet’s aristocratic friends drifted away, issue by issue. The royal scientific societies retracted his honorary memberships, and then he committed the most unpardonable of sins: voting in the National Convention to convict the King of high treason, but not to execute him. If the revolution was to matter, and to be worthy of the name, it had to be different from what came before it; it should not kill its enemies. Growing anger within the Jacobins had found its flame – Condorcet’s draft of the new constitution was voted down, and the halls of politics were stormed for a second time; the moderates and their allies arrested.

Lucky to escape the mob, Condorcet recovered in Vernet’s quiet house. And as the months ran into years, he resumed an old project to stave off his “sinking mood”. It began as a multivolume history of science, and grew into the encompassing story of progress and knowledge and growth: the “Enlightenment narrative”. He was trying to recast the revolution in its proper light, but also to explain what gave meaning to his own life, as well as what gave meaning to all humanity: problem solving and improvement.

Much too much negativity about the species had seeped into philosophy, and then into daily life. All change brings with it a new set of problems, but those problems are also soluble, just as the previous ones were, and just as future ones will be. For all the inequality and dislocation that the rise of capitalism had brought, there was also more wealth, more choice, and much less actual poverty. In the paraphrased words of Adam Smith, “an average day labourer in England lived better than many an African king.”

Robespierre and the Jacobin terror justified their violence in the opposite terms, claiming to see a “fatal pattern” to history, and so to be preventing the inevitable slide back into tyranny. Condorcet’s vision of the human condition, and of its potential, was something very different; something for which he had the best theories of science and economics on his side. Rather than being defined by the trends and predictive twists of history, humanity renewed itself every day and bent only towards happiness and truth. Neither of which can ever be suppressed for too long, always wriggling free to find the light.

The revolution may have been slipping away, but it had not been in vain. It was a call to progress, and that call would soon again find its feet, if only men like him – and people like us – were willing to work for it. Coercion and bloody terror can never win for too long, for although they may consider (falsely) history to be on their side, truth never is: “The perfectibility of man is truly indefinite, and the progress of this perfectibility, from now onwards independent of any power that might wish to halt it, has no other limit than the duration of the globe upon which nature has cast us.”

But dying is the end of something, and it is coming for us all. In the mid-twentieth century an old institution from the Middle Ages was reinvented by the English doctor Cicely Saunders. Watching physicians, nurses, and patients struggle to retain their hope and sense of purpose, she had become fixated on the same philosophical question that tied Anna Akhmatova to Vaclav Havel, to Abraham Lincoln, and to the Marquis de Condorcet: the relationship between consolation and truth.

Saunders had seen the hard reality of death steal away the consolation of her patients, filling their final moments with fear and panic and worry. Wrapped up in a foreign world of medical decisions and unnecessary procedures, these people lost hope as they died. They had more to accomplish, more to resolve, and the growing shadow of death didn’t have to diminish them. Saunders’ insight was to create an institution for palliative care and, most importantly, for consolation: the hospice.

Every patient was different, but in each and every case truth was important. Some people needed more easing and soothing than others, but “false hope was no consolation at all.” Perhaps the greatest failing of the medical establishment in this regard was the inability of doctors to deal with their own fears around death: “many of them couldn’t tell patients the truth because they couldn’t tell themselves the truth.” The hospice instead built a community around respect for patients and their individual needs, with an unflinching eye on death.

Instead of running from truth, and believing that our deaths must be lonely, unpleasant, and cold events, Saunders turned the institutional dial; returning purpose to our final days. Death rarely happens as isolated and deserted an affair as poetry likes to imagine – more often than not, it is among the most public and socially involved moments of our entire lives. Moments not just where the dying receive the consolation they need, but where they are also desperate to console the people they love and are leaving behind: “the giving of consolation was essential to the receiving of it.”

Sometimes what consolation needs most of all is simply the opportunity for truth to settle comfortably in its own space. When Michael Ignatieff writes about his parents and their deaths thirty years ago, the words hit me as only true things can – knowing instinctively in that moment how it would feel when the same desolation eventually comes my own way: “They had been the audience before whom I played out my life, and with those two seats in the theatre suddenly empty, the play itself seemed to have little point.”

Separated from him in their last moments, Ignatieff’s parents died in hospital beds, leaving him “inconsolable” with “deep scars”. And when he writes things like “I wish my parents could have had a good death”, it savages the reader with shared compassion. But those last moments that could have been shared, last conversations and meaningful words that could have been spoken, last hands that could have been held, are not as lost as they seem. There was no time and no place for them to happen, and with that comes regret and a deepening of grief; but just as there is no such thing as an insoluble problem, there is also no such thing as an inconsolable situation.

We may be crippled and disabled by the sorrows of life, but this is always a temporary condition. If we cannot consciously find the appropriate place and background for consolation, our unconscious minds will often do the work for us, digging down into the “recesses of our souls”, recovering lost hope, and restoring meaning to a meaningless circumstance: “It is the most arduous but also the most rewarding work we do, and we cannot escape it. We cannot live in hope without reckoning with death, or with loss and failure.”

As the churches, mosques and synagogues empty out, it becomes obvious enough that consolation is losing its institutional setting. This is true… and it’s not. The buildings and the shared rituals are one thing – but the tradition that Ignatieff is a part of here is something much more important. When we struggle with notions of fate, and fight back against the very human impulse of resignation, we inspire others, consoling them as well as ourselves in the process. How lucky you are to have something exceptional enough to grieve for in its absence – it could have all been much worse, and it could all be infinitely better in the future. Our misery is never just our own… and never permanent.

I was struck by how emotional I was talking with Michael Ignatieff. I could hear in his voice, and his words, that he was too. As I write this now, there are soft tears collecting in my eyes. I am emotional again… but also, unmistakably, consoled!

*** The Popperian Podcast #16 – Michael Ignatieff – ‘Finding Consolation in Truth’ (libsyn.com)

Defending Baconian Induction

In conversation with Jagdish Hattiangadi

Karl Popper was never as wrong as when he spoke about Francis Bacon. And it begins at the end, with the children of Bacon’s work and the darkish hole that Popper – and Thomas Kuhn alike – saw them as crawling from. In his paper On the Sources of Knowledge and of Ignorance, Popper was unfortunately sucked in by an old mistake, something that had been knocking around the halls of philosophy departments for centuries: that Bacon’s scientific method was the precursor of John Locke’s empiricism. From here, the mistakes continue…

There are ways to wriggle out of this, as there are with anything, but the comparison is a little hard to explain from a man who was ordinarily so rigorous (to the point of often doing his own translations). For Locke, our human senses are the whole game: they are the source of knowledge, and completely free from error. So he is also the type of empiricist that Popper hated, the type who thinks of truth and knowledge as floating in the air, bombarding us with an accurate picture of the world out there. If the world doesn’t lie, and neither do our senses, then any and all mistakes can only happen when we misinterpret our otherwise perfect sensory experiences.

Locke was wrong. But was Bacon also guilty of the same empiricist mistake? In his 1620 book Novum Organum, Bacon talks in similar-sounding terms, but he means something very different. His recommendation to budding scientists was to begin building tables of the natural world, charting the degrees to which things occurred and were sensed, as well as the degrees to which those things were not (“tables of presence and absence”).
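As a rough sketch only – the entries below loosely echo Bacon’s own investigation of heat in the Novum Organum, and the data structure is an assumption about how such tables might be represented – the idea looks something like this:

# A toy rendering of Bacon's "tables of presence and absence". The entries
# are illustrative stand-ins; the structure is the point: instances where a
# nature (here, heat) is present, similar instances where it is absent, and
# instances where it varies by degree.
presence = ["rays of the sun", "flame", "boiling water"]
absence  = ["rays of the moon", "marsh light", "standing water"]
degrees  = {"flame": "intense", "boiling water": "moderate"}

# What matters most are the discrepancies: cases that look alike to the
# senses, yet differ in whether the nature actually appears.
for there, not_there in zip(presence, absence):
    print(f"heat present in {there!r}, yet absent in the similar {not_there!r}")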

Slowly and experimentally building up a natural history of things in this way is not the same as Locke’s blunted idea that nature comes to us clear and ready to be understood. Compile all the sensory reports you like (of the kind Locke recommends) and you will still never get close to what Bacon is asking from us – you will never get the all-important discrepancies. Bacon spoke in empiricist language – talking about the “essences” and “natures” of things – but he did so in much more nuanced ways.

When someone speaks about the nature of an object today, we tend to imagine something singular, absolute and unchanging. Bacon’s use of the term was much closer to a tentative nature or a nominal nature. In fact his whole point had little to do with sensations at all; rather, he was talking about an experimental natural history where the primary things we are hoping to find, and record, are deceptive appearances.

To think like this is to start with a hard anti-empiricist attitude: if the things out there, beyond ourselves, have natures or essences waiting to be discovered, then we cannot ever fully know what they are. As things appear to us, they are illusory and misleading. So we experiment, not to explain away – or disguise – the inconsistencies and the problems we find with our senses (as Locke did), but so that we can isolate those errors, learn from them, and record them in a way that emphasises the deception. The whole point is to avoid the Lockean or Aristotelian hope (and formulas) for an intelligible collection of recorded sense data. Here we can begin to see the “enigma of his [Bacon’s] kind of natural history.”

Some of the mistakes here come back to the use of the word experiment. Despite the time in which he wrote, Bacon didn’t whittle the term down to something as simple as “laboratory work, or work with instruments and measuring devices.” Already he had larger and more sophisticated ideas about observing objects and data in previously unobservable circumstances – isolating and removing aspects of the natural world, and seeing how they behave without all the background corruption. But this doesn’t quite get us to the “new Baconian sciences”, as Thomas Kuhn called them, leaving the meaning of experimentation as something focussed primarily on the source, or location, of what is being observed.

The collection of data was not the emphasis that Bacon wanted science to have. Forget the sensory impressions, forget the large and growing tables of observations, forget the isolation of phenomena, even forget the discovery of repetitions in nature; what matters most is that we “learn from our errors”. And to head off another common misconception about Bacon, this is not just “perceptual error” but also “conceptual error”. Here it all starts to sound very Popperian, and perhaps it would be more accurate to describe Bacon as a precursor to Popper rather than to Locke, but there are important differences. The most important being that, rather than “errors in hypotheses, guesses, or conjectures”, Bacon talks of “errors in appearances, including perceptual appearances”; which “must also exist” if we take the Popperian position that our objective experience is “theory-laden”.

We get to this happy shore by cross-examining the world around us, holding our own perceptions to account and criticism, and thereby interrogating nature itself. With a twist by Robert Boyle that made space for mechanical principles as well, Bacon’s method of “true induction” was adopted in the middle of the 17th century by the Royal Society. And with that, this slightly modified “experimental philosophy” became the centre of the scientific world.

But success isn’t enough; we are looking for truth, after all. And so fallibilism is a good place to start. The Aristotelian method, which had dominated for two thousand years before Bacon hit the scene, involved a vocabulary of first principles, of proper knowledge, and of higher forms; while denigrating the hypothetical, the conditional, and even the mathematical, as lower types of knowledge. It searched for bedrock, and yet showed no way of actually reaching it – the Socratic Method is a wonderful way to produce refutations, but it is not the affirmative foundation that Aristotle wanted.

To wriggle free from this problem, Aristotle in the Prior Analytics took us into a new language of logic, demonstration and intuition. And it too didn’t work! No matter how useful or valid a demonstration of this logic was, it could only ever produce knowledge if the premises of the argument were already known and taken for granted. A self-referential, and redundant, syllogism. Something that pushes the problem onto the premises, then onto further premises, and into an infinite regress.

Circular arguments are not impressive things to stake the future of science and progress on. And if we must rely on deduction of this kind, we are doomed. So what is left for us that can produce results and guide the way to truth? This is the question that births induction for Aristotle: another method whereby knowledge is drawn from observations, later “extracted from our memory”, and then “followed by a mental discernment of its essence from its many remembered attributes”. Here the hard work of induction – as well as the essence of first principles – comes to us not from that endless chain of demonstrations, that infinite regress, but directly from “mental intuition”.

And Francis Bacon is having none of it. Again, in strong Popperian tones, he talks about our initial observations as always being prone to error, always letting us down, and never capable of the high task that Aristotle demands of them. Then he is onto the imperfections of our language, our powerlessness to describe the true nature – or the essence – of anything. The only thing that is happening when language feels as important and surgical as Aristotle wants it to be is that we are imposing it onto the world, not grasping some deeper reality through it. The method can, and should, be dismissed as “childish” or “naïve”, unable to produce results.

Instead Bacon compares the human mind to a “broken mirror”, reflecting nothing as it really is, distorting everything. And when Bacon talks about his method of true induction, he is imagining something that would bypass (or correct) all those mistaken ideas about objective reality and the human mind. Baconian induction is a way of weeding through the distortions and finding the reality hidden far behind them. In short, his unique contribution to epistemology is to “extract affirmative knowledge” via a “method of refutation”.

Or call it “error analysis by division”, as Jagdish Hattiangadi does; either way, there are more Popperian pre-echoes to be found here. Though it might be tempting to chart significant sections of reality all at once, and then decipher them at large, it is a better method to limit things to singular phenomena and singular tests, journaling individual distortions in piecemeal fashion. There are, after all, infinitely many ways to be wrong about a single observation, and infinitely many lenses to view it through.

And so we don’t just build and perform experiments to challenge our theories; we also do so to challenge our previous experiments. This is what Bacon means when he speaks about the cross-examination of nature: not only the running of tests and the compiling of data, but observing the conditions under which that data was achieved, changing those conditions to see how reproducible the effect really is, and then changing them again and again, to root out errors and avoid false conclusions.
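Read as a protocol, the cross-examination might be sketched like this – the measurement function and the conditions are hypothetical placeholders, not anything Bacon specified:

# A minimal sketch of cross-examining an experiment: re-run it under
# deliberately varied conditions and record where the effect fails to
# reproduce. All details here are invented for illustration.

def effect_appears(temperature, vessel):
    # Placeholder measurement: pretend the effect secretly depends on
    # the vessel being sealed, not on the temperature.
    return vessel == "sealed"

baseline = {"temperature": 80, "vessel": "sealed"}
variations = [{"temperature": 20}, {"vessel": "open"}]

discrepancies = []
for change in variations:
    trial = {**baseline, **change}
    if effect_appears(**trial) != effect_appears(**baseline):
        discrepancies.append(change)  # the effect did not survive this change

print("conditions the effect really depended on:", discrepancies)

Each discrepancy recorded this way is exactly the kind of deceptive appearance a Baconian natural history is meant to collect.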

Even when an overwhelmingly large natural history is created in this way, we should still expect the result to be absolutely “bewildering”. The slow – and always incomplete – process of whittling things down to individual truths is hampered and difficult too, because the process tends always to involve some retreat to an established metaphysical theory. The stripping-back of error has no endpoint, no clear path, and no transparent indications that things are heading in the right direction; all we can ever do is “attend to the errors at hand” and then try to find more of them (through the building of more and more Baconian natural histories).

Under no illusions about the difficulty of his project, Bacon often referred to it as deciphering a near-impossibly coded message, or finding an exit to a labyrinth. Choose the word you prefer – a Kuhnian puzzle or a Popperian problem – this is a philosophy that doesn’t play the rudimentary game of induction that so many people have posthumously ascribed to it. It also elevates the role and purpose of science to a status that Kuhn and Popper would have approved of: “power over nature”.

Where things diverge is with Bacon believing the foundationalist idea – that all knowledge comes from finding its foundations and building upward from there – and that first principles can be reached, known, and used to ascend the epistemological ladder. However, he is always careful to note that we might always be in error, that there might always be higher or lower rungs still to be explored, that there is never a final rung where the whole project ends, and that – with each step up or down – we carry fallibilism with us as an unavoidable passenger.

So why did Popper miss all of this happy subtlety, and falsely compare Bacon to people like Locke, when he was staring his own philosophical ancestor – his own family resemblance – so firmly in the face? Probably because everyone else did as well. For over two hundred years the academic literature on epistemology and the scientific method pretended that it did not exist. Even when Isaac Newton came along – as well as thinkers from the French Enlightenment – championing a near-exact copy of Bacon’s method, the connection back to Bacon was never properly drawn. And tellingly, when the connection was occasionally made, it invariably came with the same mistake that Popper had stumbled into: coupling Bacon with Locke, rather than with the scientific successes that burnt bright around them.

So what would Bacon say about Popper and his sceptical philosophy, if he could look back on it all today? He would look for discrepancies and errors, and after scanning through most of it in nodding approval, stop, scratch his head, and ask aloud just what it is Popper thinks a good scientist should do. Conjectures and refutations, sure! But how do they, and we, affirm something as being true – is it really good enough to consider an unrefuted theory as just the best available option? And if so, how is this not akin to the conventionalism (“the philosophical attitude that fundamental principles… are grounded on (explicit or implicit) agreements in society, rather than on external reality”) that Popper claimed to hate so much?

Or as Hattiangadi puts it:

On the weak fallibilist endorsement of theory, suggested by Karl Popper, we can affirm that our best hypothesis may be true, given the evidence. On the stronger fallibilist endorsement of some theories by Francis Bacon, we can affirm our best hypothesis because it must be true, given the evidence. It must be true because it alone can solve the riddle in the evidence. Its presumed uniqueness makes all the difference, even though our judgment remains fallible.

It is a glitch in his methodology that Popper was well aware of: what to say of, and where to stand on, the truth content of a theory? It is the aspect of his philosophy he was most desperate to address during his life, and it has remained the softest of underbellies to attack since his death. In chapter 10 of Conjectures and Refutations, Popper tried to clean up some of his earlier ideas, trying to explain how a change from one theory to another is appropriately called progress, or the growth of knowledge, as opposed to just change. After all, to be worthy of those names doesn’t something new need to be known, as opposed to something old simply being mistaken?

Popper flapped around in these deep waters, hoping that he might eventually find a raft; or at least learn to swim. This is where verisimilitude comes onto the scene: we cannot say that a new theory is definitively true, but we can say that it has more “truth-likeness” (fewer false consequences and more true consequences than the previous theory). It sounds ideal. No, better than that: it sounds like common sense. And so no wonder Popper stuck to it with such loving attention for so long. It was only in 1974, after multiple iterations of verisimilitude had come and gone, that David Miller and Pavel Tichý finally put the theory to bed. What they showed was unpleasant viewing for Popper: “verisimilitude could not be satisfied by two false theories.”
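In rough outline – this is the standard reconstruction of the definition and of the Miller–Tichý objection, not Popper’s own notation – the trouble can be stated quickly. Write $T_A$ for the set of true consequences of a theory $A$ (its truth content) and $F_A$ for its false consequences (its falsity content). Popper’s proposal:

$B$ has more verisimilitude than $A$ iff $T_A \subseteq T_B$ and $F_B \subseteq F_A$, with at least one inclusion strict.

The objection: suppose $B$ is false, entailing some falsehood $f$, and suppose $T_A \subset T_B$, so there is some truth $t$ entailed by $B$ but not by $A$. Then the conjunction $t \wedge f$ is a false consequence of $B$; and it cannot be a consequence of $A$, since $A$ would then entail $t$. So $F_B \not\subseteq F_A$, and the definition collapses: no false theory can ever come out closer to the truth than another.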

So did Bacon have a point? Popper thought not! He had run out of criteria to make verisimilitude work, but not to defend his overarching theory. In its place he proposed “three new requirements” of any good theory; things that would allow for the growth of knowledge: 1. It “should proceed from some simple, new, powerful, unifying idea”. 2. It “should be independently testable (it must have new and testable consequences)”. 3. It “should be successful in some of its new predictions; secondly, we require that it is not refuted too soon – that is, before it has been strikingly successful.”

Popper would never have admitted it, but it certainly looks like he is reaching here, searching for an affirmative platform for new theories to sit on. Let’s look at his first requirement: by a “new, powerful, unifying idea” Popper has in mind something like gravitational attraction, which connects things as distant as planets and apples. Hattiangadi doesn’t see this as possible or historically accurate. So let’s stick with gravity and with Isaac Newton: every phenomenon that his new theory unified and connected was, in fact, already connected through the Copernican debate. None of those relationships were made new or unified or powerful upon the arrival of Newton’s theory, only more coherent.

The second and third of Popper’s requirements are protections against the construction of ad hoc theories. Together, they require a good theory to be independently testable, to have independent consequences, and to be able to “pass at least one such test before we abandon it.” And as far as ensuring the forward movement of science goes, they work. But this might also leave us in a state of “normative limbo”. Do we use conventionalist strategies or not? If we do use them, are they only temporary solutions that help us to gain some traction in the search for truth and reality? And how long must we hold onto a theory (not rejecting it) simply because it may pass some independent test in the distant future?

The rabbit holes keep appearing, and it is just much simpler to say that phenomena become unified after we discover discrepancies between previously unrelated phenomena, and when we build natural histories. Requiring a lot of heavy lifting, breakthroughs of Baconian induction happen rarely, but they avoid the Popperian traps of conventionalism and of never affirming the truth of theories.

There are Popperian answers to this… good ones! And of course the work is not yet final, and never will be. But the world of epistemology and the scientific method was split in two by Bacon – between the natural philosophers who followed Newton, and the people who felt that induction had no logical basis to it, and so could not be saved. This line has become an unpleasant one, riddled with its own set of misconceptions and errors; most of which relate back to Bacon himself, what he said, what he thought, and on whose philosophical family tree he belongs.

The largest shame is that most of these mistakes could have been avoided if more people had done their scholarly due diligence. If they had only “concluded that another source of Baconian science, surprisingly enough, is to be found in Francis Bacon’s writing.”

*** The Popperian Podcast #15 – Jagdish Hattiangadi – ‘Defending Baconian Induction’ (libsyn.com)

Whiffs of Induction

In conversation with Anthony O'Hear

*Induction (definition): “the doctrine of the primacy of repetitions” – Karl Popper

If you are going to be a non-hypocritical Popperian, then you are going to have to love your enemies – those people who go out of their way to kick holes in your philosophy. Looking at what he wrote attacking Karl Popper, Anthony O'Hear admits now that perhaps he wasn’t “generous enough” all those years ago, that his “book has a certain nit-picking quality” to it. So let’s be thankful that it does!

In many ways, Popper built his reputation and career against the back of the inductive method – stabbing wilfully into the flesh, weakening the giant to its knees, and then to its death; waiting for its enormous shadow to finally clear from the landscape of philosophy and logic. The fact that this hasn’t happened is explainable firstly by the craft and intellect and rigour of modern inductivists like O'Hear, and secondly by Popperian scholars failing, almost entirely, to recognise the work that such people have done; instead continuing to argue against a much cruder – and long since dead – inductivism from philosophical history.

There are vastly too many Popperians out there in the world today who carry around the words of Popper, parroting them in back-slapping agreement – great at remembering the prose, and yet terrible at living the philosophy. Terrible at admitting to the uncomfortable paradox that, if the philosophy is true, then it also must – in many different ways – be false! That these falsehoods need to be chased and hunted and celebrated by Popperians themselves. And that when we run out of ideas in this pursuit, criticism from without should be welcomed as medicine for a sick patient.

When O'Hear says – after his brief nit-picking admission – that, “On the other hand, I don’t think I would go back on the things I have said in the book in general about induction and verification”, he deserves our thanks, our admiration, and our love. He has, after all, written the most punishing, demanding, and difficult-to-evade assault on Popper’s work that exists today. And so, if we are to be true to ourselves and to our philosophy and to the man himself, it is also the best book on Popper’s work that exists today!

Let’s start at the beginning then, with the simplest argument for induction, the crude one: “Suppose, for example we are trying to discover the cause of cancer. We examine a large number of cases and notice that they are all heavy smokers, but that they seem to have nothing else in common. We would then naturally formulate a hypothesis to the effect that it was the smoking that caused the cancer.”

This type of claim might appear reasonably profound and uncontroversial, until you begin to think about all the possible things which could have been “naturally formulate[d]”. In this case it was smoking, but only because, on the short list of things in common, it seemed to be the most likely culprit. There were infinitely many more things which all those cancer patients had in common, which were ignored or not noticed: their clothes, their language, where they lived, the things they ate, the air they breathed… However it came about that the inductivist researcher selected smoking, it couldn’t possibly have been from compiling lists of all the features of the patients’ lives, and then selecting the one feature which repeated the most (or they would still be working on those lists today).

Our impossible troubles with observing all aspects of an event or phenomenon also stretch out to the question of what constitutes a repetition: “a farmer may see his ploughing his field on two days a repeated task”, writes O'Hear, but the “mouse whose nest is ploughed up on the second day will be more impressed by the distinguishing features of the two days’ activity.” The lesson being: every time we notice a repetition of some kind, it always involves the prior adoption of one or another point of view. This is O'Hear giving Popper his theory-laden dues.

Popper’s view on this is fairly straightforward: nothing that you could learn about experiencing Phenomenon A could help you to understand – or reason about – Phenomenon B, which you have not yet experienced. And when inductivists have tried to moor their theory to firmer ground, they generally haven’t helped themselves, arguing in regressive circles: what justifies the inductive method? Past successes of the inductive method! What justifies those past successes? The successes before that, and so on.

What happens when the inductive method fails, and an inductivist is facing the disproof of his theory, even within that infinite regress? He runs a little further into the darkness, arguing only that the method or the principle has been misapplied, that relevant differences between connected events weren’t noticed, or that other relevant connections were overlooked. When the inductivist sees the sun rising and setting with regularity, he forms the theory that it rises and sets every 24 hours. He then visits Norway, sees the midnight sun and, instead of admitting that his method and his theory were wrong, he claims they were still correct, just missing a few extra details; details that he can now add after seeing them.

So it is a mistake to build a theory on the assumption that the future will resemble the past. But is it wrong to say that the accumulation of past evidence makes certain aspects of the future probable? For Popper it is, because a theory is either correct or incorrect – probably correct is a meaningless statement. And when you turn the wheels of probability theory on this, Popper appears vindicated. To run the experiment in a “universe such as ours”, the ratio would have to factor in all possible (conceivable) tests and counter-theories. This being infinitely large, the probability of any one theory being correct would be zero (“or something very close to it”).
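The back-of-the-envelope version of that argument – a schematic reconstruction, not a calculation Popper sets out in these terms – runs as follows. If a universal theory $h$ must compete, on any finite body of evidence $e$, with an unbounded number $n$ of mutually incompatible rivals that fit $e$ equally well, then treating the rivals even-handedly gives

$P(h \mid e) \le \tfrac{1}{n} \to 0 \quad \text{as } n \to \infty,$

which is why, on this view, assigning any non-zero probability to a universal theory is already a mistake.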

Regardless, in small and failed shifts like this, you can begin to see the steady refinement of the inductive method in response to Popperian criticism. And it continues. In fact the hard centre of O'Hear’s inductivism is a chase of sorts – Popper running slightly ahead, O'Hear snapping at his heels, getting closer with each step, until finally they turn down a bricked-up, blackened alley; nowhere else to run: “Popper’s attempt to dispense with induction is unsuccessful. We have found that inductive reasoning, removed from one part of the picture, crops up in another.”

All that talk of probability theory failing to help the inductivists out is turned around on Popper, and turned around on the principle of falsifiability. If it is meaningless to say that a theory is probably correct based on some criterion, then it must also be meaningless to say that it is probably wrong! When a theory fails a test of some kind, at the bones of things this means that an empirical statement has clashed with an empirical observation. But – again, back to that endlessly complex world of ours – there are infinitely many ways that this might happen, and not happen. So to talk about falsification without the help of probabilities, Popper and Popperians need a classification system: something that sizes different types of clashes (empirical statement vs. empirical observation), and designates what the consequences of each should be.

And O'Hear is happy to volunteer some of the preliminary work: “If the class of potential falsifiers of x include the classes of potential falsifiers of y and some more as well, x is more falsifiable than y.” For example, take these two statements and try to falsify them: 1. The planetary orbits are circular, 2. The planetary orbits are elliptical. The latter theory sits in a different falsifiability class from the first, as it requires “six singular statements describing a curve” in order to falsify it, whereas the first theory requires only four.
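
Put schematically – the notation is mine, not O'Hear’s – write F(x) for the class of potential falsifiers of a theory x. The criterion then reads:

\[ F(y) \subset F(x) \;\Rightarrow\; x \text{ is more falsifiable than } y \]

On this measure the circular hypothesis is the more falsifiable of the two, since only four singular statements are needed to contradict it, against six for the ellipse.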

O'Hear’s help and kindness quickly escape him as we run deeper into things. Back in that infinite human world of possible experiences and observations and statements, theory selection is a fraught and difficult place for Popperian philosophy. How is it – from that limitless space within, and beyond, us – that we ever get around to choosing a single theory and calling it true? Millions upon millions of different theories are capable of explaining any single observation, the vast majority of which are not falsified and never will be. O'Hear and his inductivists have a simple answer: you choose the theory that has been most successful to date, the one whose predictions have most consistently come true.

The Popperian answer is the least impressive part of his philosophy: you choose the theory with the highest degree of corroboration! Meaning you choose the theory that is best corroborated, having survived the most severe tests. (At this stage you should be sensing the blood in the water! But let’s continue.) Successful predictions, lots of them, are important things for a theory to have before we consider it to be true. And clearly we prefer theories that survive tests to theories which fail them. Now even if Popper says something like: all the successful corroborations in the world won’t give us reason to think that the corroborated theory will continue to be true into the future, will continue to be corroborated – he still has a problem on his hands here.

Imagine you are a commander on a battlefield, and in the next few minutes you must make one of two decisions. The enemy is attacking from an unknown side of the mountain range in front of you, so you need to send your reserve troops either to the left to reinforce the line, or to the right. With easier terrain, more cover, and better firing angles, all the attacks to date have come from the left flank. So the theory that the enemy prefers to attack from the left is a well corroborated theory. The inductivist agrees – completely! For him, the theory that the enemy prefers to attack from the left is true because it has made successful predictions time and time again. The difference? The inductivist thinks that this should inform your decision about where to send your troops, whereas Popper thinks it shouldn’t.

Well what is the point of talking about corroboration, or truth in general, if it does not help to guide our decisions, if it does not help us with theory selection? Because, after all, you do need to make a selection in moments like the one just mentioned, despite the best efforts of some Popperians to pretend that you don’t – to say obfuscating things like: I would first have to more fully understand the ‘problem situation’. When induction does its best to meet Popper’s criticism head-on, the shame belongs only with the people playacting as if nothing were actually being said.

The question remains: as the commander on the battlefield, do you take into account the past behaviour of your enemy or not? Do you make your decisions with those patterns in mind, or do they hold zero value? In other words, back to that previous question: what is the point of corroboration if its only worth is to retrodict an explanation for past events, and not to predict future ones? And if it does have some predictive value for the future – if those previously observed repetitions should be factored into your decision-making process in some way – then corroboration begins to sound a lot like induction.

It is important to head things off here, and remind any angry Popperians out there just who it is they are talking with. It’s been said once, but it’s worth echoing: O'Hear is not some unsophisticated lout from an earlier time, shouting about the inductive method through drunken breath at passers-by. That the enemy has always attacked from the left does not mean that they will always attack from the left in the future (they might choose to attack from the right to catch you off guard, or the weather might change, making the right flank easier to traverse); it means only that this past behaviour should be a very important part of your calculations.

So not a hard inductive theory, nor the primacy of repetitions; but a theory of inductive inferences, and the importance of repetitions.

Either way, it is verisimilitude to the rescue – to save Popper’s corroborations from the “whiffs of induction” and from those questions: what does it mean to say that something is true? And what does it mean to select one unfalsified theory over another unfalsified theory? To give all of this the sense of purpose and progress that our “intuitive desire” needs, verisimilitude is a way of speaking about truth in terms of our distance from it. Theory A explains a phenomenon, and so does Theory B. Neither is falsified, both produce accurate predictions, but if Theory B explains the phenomenon in question as well as others, and/or does so with more precision, we can reject Theory A not because it is false, but because it is less true (has less truth content). In this way we can see the verisimilitude of any theory or statement as being “its truth content minus its falsity content.”
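
In symbols – a sketch of the standard Popperian formulation, writing Ct_T for truth content and Ct_F for falsity content – the verisimilitude of a theory a is:

\[ Vs(a) = Ct_T(a) - Ct_F(a) \]

So of two unfalsified rivals, the one with the greater balance of truth over falsity is the one to keep.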

So we can now make Popperian sense of the act of comparing theories, and of theory selection. But not quite! If you want to appraise the verisimilitude of a theory, then that appraisal will have to rely upon how we view the tests it has passed – it will have to rely upon “inductive background assumptions”. Call it “background knowledge” if you like, but the problem remains – the outcomes of all those tests matter – and if they matter then you have a spoon in the inductive soup. Verisimilitude 2.0 drills harder into the two categories that a theory can hold (truth or falsity), talking instead of “excess truth-content” and “excess falsity-content”. 3.0 involves deriving numerical values from the accurate predictions of theories. 4.0 is a “language-dependent” version, taking apart the propositional language and its primitive sentences.
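
The comparison underneath that talk of “excess” contents can also be sketched (again from the standard literature, not O'Hear’s text): theory b is closer to the truth than theory a if and only if

\[ Ct_T(a) \subseteq Ct_T(b) \quad \text{and} \quad Ct_F(b) \subseteq Ct_F(a) \]

with at least one of the two inclusions being proper – b says more true things, or fewer false things, or both.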

Whichever way verisimilitude has been constructed over the years, it has collapsed back into the uncertainties of probability theory, into inaccuracy, and into induction – always failing to “fulfil all Popper’s original desiderata.” And as the theory chased its own tail, adding new layers of problems without solving any, Popper stuck with it, “continuing to stress the importance of the idea” and trying desperately to save it; behaving in a very un-Popperian way!

So much of this comes back to testing, severe testing. This involves not just trying to shoot down a given theory, but doing so in the places where that theory appears weakest and most likely to break: the riskiest predictions, the most unlikely consequences, and the most probable types of counterexamples; tilting the scale – as much as possible – towards falsification. The trouble is, once you have completed a severe test, it becomes less severe the second time, less so the third, and so on. You are repeating the event, and so the risks of falsification diminish. As the tests come and go, the theory in question moves further and further towards established fact (background knowledge).

So, in straightforward language: how does a theory move from risky conjecture to background knowledge? Through the repetition of severe tests… or induction! If a good Popperian doesn’t want to use corroboration as a guide to future success, but does want to claim that well-corroborated theories should become a part of our background knowledge (things that we take for granted in order to test other things), then the inductivist will nod away, saying: at least you have admitted that background knowledge is “covertly inductive”, and that inductive reasoning has its place. Still doubtful? How else, if not inductively, does the repetition of a test and its outcome make it increasingly less severe?

If not a whiff of induction, then how about a whiff of verificationism? Popper’s philosophy is always running away from direct observations and towards theoretical statements. It goes like this: the statement “this is red” might appear as a clear observation of some object in reality, but by making such an assertion – no matter how clearly red the object in question is – you are committing yourself, impractically, to that truth holding into the future; for an infinite number of statements and objects. All of which we are not in a position to verify. Will it always look red, under all conditions, from all angles, and with all the coming advances in technology? This scepticism bleeds from Popper’s “general feeling that we can never rule out every possible mistake”.

It is interesting to think in this way, and it is certainly true, but – as O'Hear points out – “this position has no practical import. We do not [and cannot] act as if we might at any time have to revise well-tested empirical judgements about everyday realities”. Popper is making two errors here: 1. Suggesting that there is always evidence for a given observation; 2. Missing the common-sense way in which we talk about things, and why their enduring nature (even if false) is necessary for the “rest of our conceptual scheme” to hold. With Popper speaking in this way, it is hard to imagine how any empirical statement could hold any meaning at all.

If you are sensing some Wittgensteinian tones to this debate, then so is O'Hear, and so is Paul Feyerabend, and so is W.W. Bartley: “Bartley’s innocent comparison of Popperian methodology with a Wittgensteinian language game possibly so enraged Popper because of its closeness to the truth.” Popper, in other words, is running fast through the open door of another house that he doesn’t want to be in.

The scientific realism that Popper defends is the claim that our theories actually give us knowledge of the world, as opposed to scientific instrumentalism, which claims that our theories are just instruments from which we can get accurate predictions (just as a screwdriver doesn’t need to mirror the screw, our theories don’t mirror the world, but simply help us to bridge the gaps to what we want – they are tools, nothing more). Popper instead values greater universality and greater depth of explanation. He believes that – by encouraging the endless probing of ever more fundamental truths – he can steer his philosophy into anti-instrumentalist seas. At this point in things, and surprised by the weakness of Popper’s argument, O'Hear has lost some of his patience: “an instrumentalist could agree that they are desirable properties [universality and depth] of a tool, because the more applications a tool has, the more useful it is.”

O'Hear also senses a whiff of – unavoidable – relativism in Popper, but this is best left for another day, and for readers of his book. It can all be brought back to a simple claim about what is reasonable and unreasonable to believe. Might it not be the case – and do we not have appropriately good reasons to believe – that it is just “our biological good fortune” to be able to notice regularities in the world? And to then be able to use those regularities to make successful predictions? It would, after all, be impossible to think and to function and to survive if those regularities were suddenly to disappear tomorrow (if you became unable to notice them). And the best arguments for the method of induction come back to just that: practical, common-sense decision making.

Picking through Popper’s work with a maliciously sharp blade, O'Hear eventually finds his way to close agreement; or rather he believes that Popper agrees with him, and not the other way around. Especially when it comes to his claim about physical regularities and their importance for knowledge creation (“at least insomuch as this is acquired by ‘physical methods’”). Saying that our assumptions about the world could always fail us, that we could always be wrong, doesn’t hurt the O'Hearian inductivist in any way.

It is impossible to walk away from this book and this man without thinking that he has a point; a point that scholars of Karl Popper are almost wilfully missing. Other than his argument about Wittgensteinian language games (which now seems uncontroversially true), there are good answers to many of O’Hear’s challenges (as he acknowledges himself); but there is no conceivable way of not being a better critical rationalist after reading the work of Anthony O'Hear – the worst enemy, and greatest friend, of Popperian thought.

*** The Popperian Podcast #14 – Anthony O'Hear – ‘Whiffs of Induction’ The Popperian Podcast: The Popperian Podcast #14 – Anthony O'Hear – ‘Whiffs of Induction’ (libsyn.com)

Karl Popper vs. Friedrich Nietzsche

In conversation with Ken Gemes

Have you not heard of that madman who lit a lantern in the bright morning hours, ran to the market place, and cried incessantly: "I seek God! I seek God!" As many of those who did not believe in God were standing around just then, he provoked much laughter. Has he got lost? asked one. Did he lose his way like a child? asked another. Or is he hiding? Is he afraid of us? Has he gone on a voyage? emigrated? Thus they yelled and laughed.

The madman jumped into their midst and pierced them with his eyes. "Whither is God?" he cried; "I will tell you. We have killed him, you and I. All of us are his murderers. But how did we do this? How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained this earth from its sun? Whither is it moving now? Whither are we moving? Away from all suns? Are we not plunging continually? Backward, sideward, forward, in all directions? Is there still any up or down? Are we not straying, as through an infinite nothing? Do we not feel the breath of empty space? Has it not become colder? Is not night continually closing in on us? Do we not need to light lanterns in the morning? Do we hear nothing as yet of the noise of the gravediggers who are burying God? Do we smell nothing as yet of the divine decomposition? Gods, too, decompose. God is dead. God remains dead. And we have killed him…

Within the walls of academia, Ken Gemes has lived a “schizophrenic” life. As a young man, he looked out at his fellow human beings – people striving for things they didn’t understand, failing at every step, and then trying again to make sense of things – and saw most clearly the “mess” of it all. As an Australian with a “strong bullshit detector”, he had read Freud and had seen quickly through the gloss and veneer to the pseudoscience underneath. He wanted a surer footing for his work, a colder place of theorems and facts, somewhere far from all that mess… he became a philosopher of science!

Flying to Yale, Gemes settled into the philosophical rigor of things by publishing on verisimilitude, Bayesianism, hypothetico-deductivism, confirmationism, verificationism… It was all parochially interesting and, as he had hoped, all very cold! Then one day he decided to write a “jokey” article, poking hard into the delicate ribs of Karl Popper. Famous for his anti-inductivism, Popper had claimed that Object A having Property F would give you no reason to assume that Object B should have Property F. Even if there were a thousand such objects with Property F, this would still give you no reason to assume anything about Object B.
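
Gemes’ own proof isn’t reproduced here, but a classical result in the same spirit gives the flavour of why probability theory balks at Popper’s claim – an illustration only, not Gemes’ argument. By Laplace’s rule of succession, if n observed objects have all had Property F, then on a uniform prior the probability that the next object has F is:

\[ P(F_{n+1} \mid F_1, \dots, F_n) = \frac{n+1}{n+2} \]

a number that climbs towards 1 as the observations accumulate – precisely the kind of evidential relevance that Popper denied.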

Gemes’ short paper (three pages long) was a mathematical proof (using probability theory) showing that this was all wrong – and from across the Atlantic Ocean “Popper went ballistic”. One of Gemes’ supervisors at the time, Clark Glymour, was strolling around an international conference and found himself in an elevator with Popper. Not knowing his relationship to Gemes, Popper turned to Glymour and launched into a tirade: “There is some idiot in America. This guy called Ken Gemes, who wrote this idiotic article. Do you know anything about it?” Backed into the corner of the shrinking elevator, Glymour was almost nose-to-nose with the Austrian, and so had nowhere else to look but directly into those hot eyes, muttering softly under his breath: “Yes, he’s my student.”

Then life intervened. Gemes was dealt a few “difficult punches”, and found that his old taste for the escapist heights of academia was lost. For personal reasons, he needed to find something more “flesh and blood”. Freud and his gang of psychoanalysts were out there, but they were also still bullshit. Luckily there was someone else, someone who had pre-echoed much of the psychoanalytic movement, just in much more interesting and insightful ways. He was also the most flesh and blood philosopher imaginable: Friedrich Nietzsche. And it all begins with the death of God… and a Madman.

Despite what we might like to think about ourselves, Nietzsche believed that we modern people have never properly appreciated what it means for God to be dead. Most of us have a rather simplistic, mechanical view of apostasy: once you have given up God, you have also escaped religion, and are free from its nightmares, its hangovers, its meaning. This is all wrong! Like it or not, we are still living in “Christian times” – it is still there, floating in the air around us, and in the values that we hug most closely: the value of compassion and the value of truth!

God was once the animator of all we were and wanted to be, including truth itself. So once he is dead and no longer providing for us, no longer telling us what to follow and what not to, all things become suddenly – and painfully – up for grabs. Why should I continue to care about my neighbour, loving him as I would myself? Why hold on to the value of truth as if it were still sacrosanct? Why should I care about it at all? It might take 200 years, Nietzsche wrote, but a deep “Dostoyevskian… nihilism of disorientation” of this kind is on its way: “incredibly prescient of him to see our current situation in this so-called post-truth era”.

The other nihilism is the “nihilism of despair”, a collapse of the human spirit, rather than a sudden map-lessness and a drifting at sea. Without God, the great meta-narratives of our lives – the ultimate values behind things – can never be fully realised (even though they still exist). And it is here, on questions of nihilism and truth, where Nietzsche and Popper would have found each other to be soldiers in the same trench, fighting the same battle, plodding on through mud and disease and injury, looking for an audience for their overlapping philosophy.

Both nihilisms have a Popperian flavour to them, and a Popperian disgust: disorientation is an appropriate thing to feel; in fact it is the natural state of things as we all doubt, search, and scrape to find a truth that we can never be certain of. But being lost should not lead us anywhere close to nihilism, with all its weak, relativistic horrors (there is no such thing as objective truth).

Still, despair is worse! It steals away the beauty of our world, and leaves us sulking about not having the final solution to our problems. Always with work to do – with knowledge to birth, and creativity to apply – to be map-less is a wonderful thing; it is what gives life its purpose, it is what keeps us going, it is how those ultimate values will be found, and it is the one thing that ought to keep those feelings of nihilism at bay.

And there is nothing wrong with embracing your critical, destructive side and announcing proudly to the world, as Nietzsche does, “I am dynamite”. Dynamite is just criticism on a larger scale, and with a larger target – it is bold, and a worthy thing to be proud of. Contrary to popular readings, all the nihilism that Popper would have hated, Nietzsche hated too – talking about these dark turns of the mind, and of group psychology, only in order to predict their coming, and to warn against them: a diagnostician, never an advocate.

The nihilist out there – and perhaps within us all – has a readymade answer to this: if your whole purpose is to go out into the world and falsify everything you see and hear, then sooner or later you will be left with nothing, or at least nothing but disorientation and despair. The Popperian counter is: if you try to falsify everything, you are indeed left with nothing… except for all the things that aren’t falsified. Whether or not Nietzsche would agree with this is another problem of bad readings.

The concept of objective knowledge – and its possibility – gets a hard rattling within the pages of Nietzsche’s books, and leaves all of us on the outside of his mind wondering what on earth he is on about: surely he can’t be making such a simple, nihilistically-flavoured mistake as to say that the pursuit of truth is nothing more than a religiously infused error. The problem runs first back to language, then more deeply to the type of philosopher that Nietzsche is, and finally on to what his fatherly hopes are for his readers.

In reverse order: when Nietzsche predicts that nihilism will become the future of Europe, he is saying this as a doctor might to a wilting patient in his surgery. Staring into the jaundiced eyes, the thinning hair, the loose teeth, the folds of obesity, the declining posture, the worrying blood tests, the horror-esque habits, Nietzsche is warning people about the ghastly future that awaits them only so that they might avoid it. He wants those people – he wants us – to fully appreciate the meaning of the death of God, to construct our own master narratives, creating and celebrating replacement values; to become gods ourselves.

The type of philosopher he is: anyone who reads Nietzsche well reads him as a psychologist – prefiguring all the best work of Sigmund Freud, without any of the scientism (the effort to apply the veneer of science to places where it does not belong). He is a loud champion of the Dionysian spirit, not because he prefers the emotional side of the human condition – the intoxicated, disordered, passionate side – but because, ever since Socrates, the Apollonian spirit has won the day, with logic, reason, and progress dominating our truth-obsessed lives.

Caring deeply about his patients, Nietzsche sees a coming clash, something that a little more Dionysian indulgence can help with: our deep psychological need to understand truth vs. our need to find meaning in the world. The God that we so ruthlessly killed did more than explain the otherwise unexplainable; he also gave significance and purpose to our small, individualistic lives. Religion both bound us together and lifted us up… the fact that it was also harmful – particularly to creative spirits – and needed to be replaced should not cause us to lose sight of why it held for so long, and why it animated so many lives.

The lament, the malaise, the depression, the disorientation, the despair, the nihilism of our times, exists largely because the world no longer appears enchanted in the way that it once did. Without our myths, all we have is truth. And an obsessive, compulsive lust for more and more of it – pushing the things that give life its meaning back into darkness and scorn. Nietzsche is here cheerleading for a return of myth, fairy tales and folklore, because this is the aspect of his readers’ psyche which then (and perhaps currently) needed the biggest champion.

Finally, back to language, and to a place where Popper and Nietzsche stand far apart: working in a time of obscurantist philosophical language, and of leading intellectuals deliberately writing so as to be misunderstood, Popper lived by a refreshing motto of a kind: anything that can be said, can and should be said simply and clearly. Nietzsche didn’t play games with his prose like those contemporaries of Popper, but he was consciously writing for an audience. And so the pages of his books are dripping with bombast, with drama, and with inspiration. He was screaming into the darkness, hoping to catch the ear of the next great creative talent, and to guide them away from the herd. To pull apart his language analytically, looking for the simple and clear meaning, is to lose sight of the philosopher and his philosophy.

But for both men truth does matter! It is never certain, it is always open to change, and yet it does exist out there, waiting to be found by us. For Popper, us literally meant all of us. His philosophy was written to finally – and scientifically – put to bed a nagging idea from history: that great men drive it, dragging the rest of us along in their wake. Again, Nietzsche was having none of this. The audience for whom he was writing, for whom he sweated through illness, psychosis and rejection, was a rare breed of characters – the self-creating, elitist Ubermensch (superman or overman).

The rest of us (whom Popper cared so much about) Nietzsche was happy to dismiss with the forgetful tone of a schoolyard bully: “Let the values of the majority rule… in the majority.” Just who made it into the ranks of Nietzsche’s Ubermensch wasn’t so clear: for a time his friend Richard Wagner (along with other composers like Beethoven) was there, and then he wasn’t; then his old “teacher” Arthur Schopenhauer was there, and then also removed. The one constant name? Nietzsche himself.

Questions of ego and grandiosity aside, all this talk of becoming supermen has its place within Popperian philosophy. Rather than a doctrine of domination and power, Nietzsche had in mind a much more internal, individualistic triumph. He was encouraging his followers to stretch the boundaries of what it was to be human, to create new and beautiful things, to ignore the disapproval of the masses, to be their own metaphorical executioners (as well as executioners of what they hold most precious) – so that they can become much, much more than their “human, all too human” origins: “all great things must bring about their own destruction through an act of self-overcoming”. Or… to be bold, to take risks, to embrace fallibility, and to enjoy the Popperian pleasure of burning your own theories to the ground.

We might all have the desire and the capability to overcome what we are, but Nietzsche wants more from us – he wants us to have the will as well. In more of those Popperian tones, the world isn’t some colourless project, but a value-laden (or theory-laden if you prefer) interactive phenomenon. There is nothing that can be said about the world which doesn’t come with a set of presupposed values attached to it. Every action and thought and observation involves a thick tapestry of values – so why not cultivate your “will to power” and make those values your own, make them worthy of their place within your mind, and within the world.

And of course, being Nietzsche, he also wants us to suffer for it… a lot! Not just to prove something to ourselves or to others, nor to achieve a known outcome, but because it is where life and meaning are to be found: “To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities—I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them”. He has no pity, and he wishes you wouldn’t either, because they are the lucky few who are capable of making their lives worthy of the name.

While the rest of us are left to wallow in the herd with our bitterness and weakness and fragility and resentment and self-loathing and fear and hatred and procrastination and spite and impotence and malevolence and vengefulness and shortcomings and instability and malice, all polished-up into an excusing slave morality. People who avoid suffering, and so will never amount to anything.

During his shortened life, Nietzsche suffered as much as anyone. He also felt horribly ignored, even having to pay for his final few books to be published, telling anyone who might listen to him that he had been born posthumously. Popper too suffered, felt underappreciated, and shared with Nietzsche an unconcealed dislike for academia and academics. But Popper and his work were wildly successful (professionally speaking) during his life. As Popper’s last years were spent on university payrolls and receiving knighthoods, Nietzsche’s were in poverty, illness, obscurity, and eventually madness.  

When Ken Gemes looks back at his own turn towards the work of Nietzsche, he sees it as “really disjoint from my philosophy of science work”. I am not so sure that it is! For me, there are more similarities between the two than people tend to think, and when both are done well they ought to be boiling red with emotion and import.

Perhaps most of the confusion is a matter of timing. Within his era, Popper had largely won the debate over the history and the future of science and knowledge creation, while Nietzsche had won nothing but a life of extreme suffering. Then slowly, bit by bit, building into an irrepressible drumbeat, Nietzsche’s posthumous birth has happened. And while the name Karl Popper might one day fade and disappear (hopefully because his ideas become so mainstream as to not need the reference), the name Friedrich Nietzsche will never die, never wilt, never again lack for an audience:

…How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it? There has never been a greater deed; and whoever is born after us, for the sake of this deed he will belong to a higher history than all history hitherto.

Here the madman fell silent and looked again at his listeners; and they, too, were silent and stared at him in astonishment. At last he threw his lantern on the ground, and it broke into pieces and went out. "I have come too early," he said then; "my time is not yet. This tremendous event is still on its way, still wandering; it has not yet reached the ears of men. Lightning and thunder require time; the light of the stars requires time; deeds, though done, still require time to be seen and heard. This deed is still more distant from them than the most distant stars, and yet they have done it themselves.

- Friedrich Nietzsche, The Parable of the Madman (1882)

*** The Popperian Podcast #13 – Ken Gemes – ‘Karl Popper vs. Friedrich Nietzsche’ The Popperian Podcast: The Popperian Podcast #13 – Ken Gemes – ‘Karl Popper vs. Friedrich Nietzsche’ (libsyn.com)

The Constitution of Knowledge

In conversation with Jonathan Rauch

 

“If you want to clear the room at a cocktail party”, writes Jonathan Rauch, “say epistemology”. It is one of those horrible words: lengthy, a mouthful of enunciation, and isolating with its polish of professional jargon. It is also, largely, redundant! That poor guy next to you at the party, speed-drinking his martini for an excuse to walk away, might otherwise pause mid-gulp, turn to face you, even lean in for a closer listen with the light returning to his eyes, if you only replaced epistemology with truth or knowledge or information…

Philosophers build careers around ‘ologies’, and with every tweak of language (every ‘ology’ of this kind) comes a little refinement, a little more accuracy, and a new corner of academia that can build upon itself. But for the unfamiliar, the uninitiated, or the simply forgetful amongst us, these words only bring frustration, a stroke-like numbness, and quickly emptying rooms.

Which is a shame, because in many ways epistemology is also the catchphrase of our day… albeit expressed a little differently, and couched in a thick layer of doom and gloom. Browse the shelves of any surviving bookstore and titles like these will stare back at you with Best Seller stickers across their covers: “The Misinformation Age”, “Truth Decay”, “Post Truth”, “The Death of Truth”. A completely new genre of publication: The Epistemic Crisis Book!

Everywhere you look, people are obsessed with the question of knowledge, and seemingly distraught at the fiasco it now finds itself in.

The first thing that must be pushed back upon here is that hinge-word “now”. It is always tempting to imagine that what we are going through is unique, a special case of pain and difficulty unlike what has come before, or what will come after. However you try and pinch things, though, there is nothing particularly new about dealing with deceptive leaders, with the lies of our fellow citizens, with the overriding interests of tribal loyalty; with discerning truth from falsity. In fact, it is the perpetual challenge of our species and of all others: either discover what is true about the world and so be able to adapt, and even thrive; or fail to discover truth, stagnate as a result, and then eventually die.

Still, it would be much too uncharitable to say that all these Epistemic Crisis authors are simply conjuring up an emergency in the hopes of making some quick cash. They are not idiots, and most of them are not callous. They are on to something – a feeling, a sense, an unpleasant tingle in the bones that things are different this time around. And just what that something is, Rauch has a well-trained ear for.

In the 1990s, decades before most people began noticing that the soft, embracing kindness of modern social justice movements was morphing into the poisonous, intolerant inquisitions that we often see today, Rauch was publishing a foreshadowing book on what was to come. He also had a painful memory, and a shared history (of a kind), with this new generation of witch-hunters.

As a young gay man of a slightly different era, Rauch grew up with the worst of things. All around him were laws and prohibitions against who he was: against marriage, against employment, against businesses, against sex, against affection, against biology… So the gay rights movement was born from this sin, this public and open discrimination. Rauch watched as people marched against this injustice, as they filed petition after petition, as they demonstrated, launched legal battles, and “confronted the psychiatric profession with the irrationality of its pathologising of homosexuality”.

Above all it took bravery. Every one of that early generation of activists suffered. As did everyone who joined later! In 1996 Rauch allied himself with the public battle, fighting for the legalisation of gay marriage and, though hoping for more, was resigned to the fact that “I might see some success in two or three generations, if ever.” Eight years later gay marriage was first legalised in a single state, Massachusetts. By 2015 it was legal across all fifty. And Rauch was left with the happy, and impossible to ignore thought that, “I should have had more confidence in liberal science. You cannot be gay in America today and doubt that.”

But writing back then, as gay rights turned a fast corner towards victory, Rauch could sense that his “confidence in liberal science” wasn’t so widely shared amongst his fellow travellers; that there were plenty of doubters in Gay America. Their liberal society had given them the freedom, and the right, to call out the prejudices against them. To loudly challenge those prejudices – and to defeat them. But it also gave their enemies the same freedoms and rights to fight back against those defeats – to try to steer society and its laws into bigotry once more.

So with their personal slice of salutary progress in the bank, many of these activists decided enough was enough. That liberalism – those freedoms, and those rights – which had been so useful to them, was suddenly a problem, a weapon that needed to be destroyed, lest it prove useful to someone else. “Today I fear that many people on my side of the gay-equality question are forgetting our debt to the system that freed us.”

It is an old and worn metaphor, but one that is useful and clear. It pushes to the heart of things, and to why those activists were making such an enormous error. That metaphor is the “marketplace of ideas”. A place of widely discordant views and opinions, all swirling around in competition. A place not of chaos (though it can often look that way) but of constant criticism. Rather than being a license for hatred (though it is also that by default), this marketplace is a mechanism for the discovery of truth. Somewhere in which ideas rise and fall on the strength of their arguments, and the quality of their explanations. A world where people talk directly to each other, and change their minds once – and only when – they are convinced to do so…

But in the years since, Rauch has begun to have his doubts. And it comes back to epistemology, a critical eye on those Epistemic Crisis authors, and a long, unpleasant gaze at the modern landscape of fake news, of misinformation, of relativism, of cancelling, of shaming, of trolling, of weaponising news, of normalising lies and falsehood, of the siloing of communities, of politicising truth, of Donald Trump:

Long before Donald Trump began his political career, he explained his attitude toward truth with characteristic brazenness. In a 2004 television interview with Chris Matthews on MSNBC, he marveled at the Republicans' successful attacks on the wartime heroism of Senator John Kerry, the Democrats' presidential candidate. "[I]t's almost coming out that [George W.] Bush is a war hero and Kerry isn't," Trump said, admiringly. "I think that could be the greatest spin I've ever seen." Matthews then asked about Vice President Dick Cheney's insinuations that Kerry's election would lead to a devastating attack on the United States. "Well," replied Trump, "it's a terrible statement unless he gets away with it." With that extraordinary declaration, Trump showed himself to be an attentive student of disinformation and its operative principle: Reality is what you can get away with. 

George Orwell imagined a shadowy and nosey government. One that branded free thought as traitorous, that made individuality impossible (to the point of death), and which slowly suffocated its citizens into passivity, compliance and adoration. Thomas Hobbes saw us all in terms of our animal origins, fighting to bloody ends over limited resources… unless restrained by a powerful and controlling state. When most people think of social suppression they have something like this in mind: a Leviathan stomping them into silence and conformity. And they assume that without such a structure, we could – and would – all flourish in new and beautiful ways; letting our full range of cognitive abilities off the leash.

What is too often missed is the internal mess of recurring errors that we carry within us: biases. We tend to overestimate our chances of success; we overestimate the probability of eye-catching (but rare) events such as terror attacks; we like to extrapolate familiar data points from our lives, believing they are therefore universal to everyone else; we tend toward conformity within the groups we belong to; we notice evidence that confirms what we already think, while ignoring evidence that might contradict us… Studies have documented well over a hundred identifiable biases and errors of these kinds, and that doesn’t take into account the whole category of meta-biases – those biases that blind us to our other biases.

All of this is a long way around to saying that reasoning is hard… very hard. And that to get anywhere with it, we first need what Charles Sanders Peirce called “network epistemology”. With truth being so elusive, we need a community around us – people who hold everything that we say to account with criticism and error-correction. Whenever knowledge creation isn’t a social behaviour, the enterprise is doomed! “It will appear”, Peirce wrote, “that individualism and falsity are one and the same.”

Science – when done well – is just such an escape from individual falsity. A process of constant trials and errors, of conjectures and refutations. An institution that doesn’t just find mistakes, but which revels in their discovery; hoping to find as many as possible, as quickly as possible, so that they can be just as quickly error-corrected.

The professional ranks that Rauch joined out of college looked a lot like this. As a young journalist hoping to tell “enlightening” and “true” stories (in a stereotypically solitary occupation), he couldn’t possibly have imagined how little space he would have to himself:

Apart from the lonely process of writing a first draft, I could do nothing on my own. Facts were gathered from interviews and sources; analysis was checked with experts; every sentence was edited, copy-edited, and often fact-checked; tipsters suggested story ideas, sources waved me off bad leads, and challenges to my claims percolated in conversations within the newsroom and outside of it. The sense of having joined something much greater than myself, and of swearing allegiance to the exacting standards of a great tradition, made the enterprise of journalism appealing and compelling to me even on the days when the practice of journalism seemed grinding and routine (which was often).

Today it is the changed and changing nature of the media environment that has Rauch doubting what he once believed: whether an open space for reporting and opinion and information gathering and data storage and publication and fact checking and second sources and third sources, of transparency, of evaluation, of interviews, of witnesses, of cross checking, of investigation, of trusted sources, and of critical feedback, is enough. Instead, we all need to be paying a lot more attention to the structure of the “knowledge-making business”.

The ‘marketplace of ideas’ metaphor implies – and needs – a lot more than a raucous, unguarded, unpoliced, unimpeded space where true theories survive and bad ones die. In much the same way as governments need constitutions and institutional arrangements to ensure their proper function, our Marketplace needs delicately tuned social settings in order to work; an agreed-upon collection of rules, a constitution of knowledge.

Some of this is merely a problem of bandwidth. Popperians can talk endlessly about the free flow of conjectures and refutations, of love-inducing problems, and of beautiful solutions, but only a very small fraction of the swirling thoughts, philosophies, notions, concepts, designs, and criticisms are ever likely to be noticed. So rather than imagining the open spaces of a Market, with all the available produce labelled and displayed for your careful inspection, a more apt metaphor might be what Rauch calls the “social funnel” – a place where, even if the persuasion of an opponent were possible, the battle to first grab his attention is near-hopeless.

The modern media landscape – with its targeted reporting and endless variety – appears to drive this social funnel ever narrower. Take a quick glance at the viewing habits of the average citizen, and you are likely to feel that all is lost; that we are all splintering into epistemic tribes, communities that talk across each other but never meet to hash out their issues. “The commercial internet was born with an epistemic defect” writes Rauch, “its business model was primarily advertisement-driven and therefore valued attention first and foremost.”

And perhaps it is here where things take their worst turn. For all its promise and undoubtable good, today’s internet appears to be accelerating untruth at dizzying speed. With such a solitary focus on attention and ad sales, outrage becomes the ugly cousin, belatedly let out of the cupboard after the party is over and the guests have left; running around the already messy living room, burning pent-up energy and making an already unpleasant scene look a whole lot worse.

When a quiet news day means a loss of profit, the temptation to play upon an audience’s impulsivity is hard to ignore. A sprinkling of fake news and disinformation might just be the way to spice things up, and keep eyes on your channel. But so might a little “troll epistemology”, whereby you poke at conspiracy theories, at desecration, at insult, and at shock value, with the single-minded hope of winding people up. Call it a “firehose of falsehood” or “flood[ing] the zone with shit”, it is the type of tactic that has no interest in creating knowledge, in settling disagreements, or building trust. It only wants people off their seats, red-hot, and ready to fight.

Way off in the distance, but still visible, is another – just as troubling – world of news media that bathes each day in “emotional safetyism”. These are often the traditional bastions of good journalism, the large shining lights of the industry that turned against pluralism, diversity, and value-rich disagreements, deciding instead that such things were suddenly too dangerous for the average listener to handle. Filtering their content through prudish self-censorship, they look down upon their readers, listeners, and watchers with a child-rearing concern: Sure, I can handle the truth of the world, but most people aren’t built like me. I am special. And they need protecting, from worry-inducing knowledge, and from themselves.

So Popper’s model needs new settings, for a new world. But what are they? What should this constitution of knowledge look like? It begins with a minimalist compromise, a balancing of simple, easily agreed-upon rules – something that ensures the dynamism that knowledge creation requires, but which also hinges heavily around stability. An accommodation that cuts through the inherent antagonisms of the current system, and which produces a much more functional institution (akin to the medical or legal establishments). A place with slightly more procedures, hierarchies and restrictions, but only insofar as better, more positive, and more reliable outcomes are achieved: a Madisonian epistemology to complement the Popperian incumbent.

But of course, as much as anything, culture matters here! This all starts with people pushing back, unmuting themselves, finding their courage, speaking out… and in doing so “remember[ing], you are never as alone as silencers want you to believe.” Still, all this talk of cultural change and institution building can be a little overwhelming, and a little too isolating – much like all that previous talk about epistemology was. So how does one go about this without clearing the room at the cocktail party? Start small, with things that are easy to follow, easy to recall, easy to understand, and easily consented to, yet which will have disproportionately large downstream effects (a lesson that many new Popperians should take to heart).

So take two stone tablets, and carve into them the following maxims:

* No one gets the final say!

* No one has personal authority!

 

*** The Popperian Podcast #12 – Jonathan Rauch – ‘The Constitution of Knowledge’ The Popperian Podcast: The Popperian Podcast #12 – Jonathan Rauch – ‘The Constitution of Knowledge’ (libsyn.com)

Wittgenstein's Poker

In conversation with David Edmonds

 

There he stood, flames at his back, weapon in his hand, yelling the small room into silence; his voice cracking with anger. Ludwig Wittgenstein was the preeminent philosopher of the day – an “atom bomb” of thought and intellect. Those watching on, trying to smuggle in a word or two through the “tornado” of noise and emotion, were only slightly less eminent in their own right. Most were household names in (and beyond) the world of philosophy: John Wisdom, C.D. Broad, Alfred Ewing, Richard Braithwaite, G.E. Moore, Margaret Masterman, Bertrand Russell, and an increasingly smug looking guest around whom all the fuss was building.

On that wet autumn night, Karl Popper had been invited (for the first, and only, time) to attend the regular meeting of Cambridge University’s Moral Sciences Club. He was asked to bring with him a philosophical “puzzle”. Instead Popper showed up with a handful of philosophical “problems” and a grudge of sorts against the club’s president: “I admit that I went to Cambridge hoping to provoke Wittgenstein into defending the view that there are no genuine philosophical problems, and to fight him on the issue.”

Following established tradition, “the guest opened the meeting”… and that is where all the courtesy, kindness, and tolerance ended. Puffed up for battle, Popper went immediately for blood and victory, attacking the wording and implication of his invitation. Wittgenstein literally sprang from his seat to challenge the “upstart” in all his “foolishness”. Back and forth they went, interrupting, berating, shouting each other down, until Wittgenstein stormed over to the fireplace and pulled out a glowing red poker. Waving it around in strong, violent strokes, he demanded that Popper provide a single “example of a moral rule”.

With the poise and delivery of a stand-up comedian, Popper replied “Not to threaten visiting lecturers with pokers.” Everyone roared with shock and laughter, while the slighted Wittgenstein dropped the poker on the ground and “stormed out of the room, banging the door behind him”.

Or did he?

The clash between Wittgenstein and Popper had been a long time coming. Both men were raised in the heady atmosphere of inter-war Vienna; both came from assimilated Jewish families. Popper grew up firmly middle class – his father was a prominent lawyer, and his home was decorated with rare luxuries: pianos and a “library of ten thousand books”. Yet even before the hyperinflation of the early 1920s wiped out the savings of the Popper family, they – along with everyone else in Austria – were being looked down upon by the Wittgensteins. Not out of contempt or animosity of any kind, but from the disinterested and escapist heights of extreme wealth.

A “business genius”, Ludwig’s father Karl had built an empire on the back of the steel trade. In the evenings prominent scientists, musicians, painters, sculptors, and all manner of people from Vienna’s cultural elite would stop by the Wittgenstein estate for dinner, drinks, and debate. With the image and riches of the American Rockefellers or Carnegies, Ludwig might not have known who Popper was, but Popper certainly was aware of Wittgenstein.

And he judged Wittgenstein accordingly, telling people that Ludwig “couldn’t tell the difference between a coffee house and a trench”, and that his book Tractatus Logico-Philosophicus “smelled of the coffee house.” On this point, Popper couldn’t have been more wrong! During the First World War, Wittgenstein volunteered for duty, and refused the safe posting that his family connections would have afforded him. Instead he asked to join the frontlines as an artillery officer, and fought until captured, quite literally in the trenches. And it was there, in that mud and fear and agony and exhaustion and death, that Wittgenstein wrote the Tractatus.

At each turn in his life, Wittgenstein continued in this way – against the grain of assumed privilege. The youngest of nine children, three of Ludwig’s older brothers committed suicide, and he once confessed to a colleague that “all his life there had hardly been a day, in which he had not at one time or another thought of suicide as a possibility.”

After his release from a prisoner of war camp, he trained as a teacher and later worked at a rural elementary school. He left the job in a hurry after beating a particularly slow-witted student unconscious. He then tried his hand at architecture. Before that he was a gardener at a monastery, and had previously studied to become an engineer at the University of Manchester. Then at the height of his philosophical fame, Wittgenstein left it all behind for the isolation of a log cabin in the arctic forests of Norway; remaining there for years, communicating with the outside world only through letters. And when his father died, Wittgenstein chose to give away the entirety of his vast inheritance.

Back as a young boy in Austria, Wittgenstein briefly attended a state school (he was home-schooled until then), K.U.K. Realschule in Linz, only a few grades apart from his fellow pupil, Adolf Hitler. Decades later, when Nazi forces were annexing Austria, all that long-denied privilege was suddenly – and understandably – too hard to ignore. Travelling back to Vienna from his naturalised British home (and then on to Berlin), Wittgenstein cut deals with influential politicians, paid bribes, and leant on his old family connections. And it worked! Wittgenstein’s sisters were allowed to live out the war years in safety, while their fellow Jews of Vienna were being dragged away into concentration camps; amongst whom – without the money or resources of the Wittgensteins – were 18 members of Karl Popper’s family.

Fleeing the collapse of mainland Europe, Popper and his wife pulled hard on the few strings they had. They applied for British citizenship, and were rejected; applied again, and were rejected again. Searching the globe for safe harbour, only a single, solitary offer ever materialised: the quiet hills of New Zealand… literally the other side of the globe. Even then, the Poppers had a hell of a time securing the necessary exit permits and visas (“our departure problems are appalling”). All this, while Wittgenstein was quickly handed British naturalisation on the back of personal recommendations from the country’s elite, and then rode out the war on a Cambridge scholarship.

A world away in the eerie silence of Christchurch, what little philosophical news managed to bounce its way across the oceans to Popper’s ears always looked and sounded the same: Wittgenstein, Wittgenstein, Wittgenstein… Ludwig walked the halls of Trinity with a flock of acolytes in tow, copying his fashion, mimicking his mannerisms, and adoring his every word. So much so that Bertrand Russell was soon saying out loud that Wittgenstein had surpassed him, becoming the teacher in their relationship. But Wittgenstein was also becoming a man famous beyond his philosophy, a mystical public figure – while no one beyond a small (and parochial) group of close colleagues even knew Popper’s name; an outsider amongst outsiders.

It wasn’t the first time that Popper felt unjustly excluded from a world dominated by Wittgenstein’s shadow. As a young academic growing up in inter-war Austria, each and every Thursday evening Popper would sit at home, stewing in isolation, painfully aware of the party he wasn’t invited to.

That rare collection of Europe’s leading scientists, mathematicians, and philosophers known as the Vienna Circle would meet each week to build upon the then-fashionable idea of logical positivism (“the view that scientific knowledge is the only kind of factual knowledge and that all traditional metaphysical doctrines are to be rejected as meaningless”) and to discuss novel breakthroughs in knowledge creation. But what they spoke about most of all was Wittgenstein – walking through his Tractatus line-by-line, revelling in its complexity and in the intellect of its author.

Invited to join the meetings countless times, and even made an honorary member of the Circle (considering how important his work was to them), Wittgenstein still never bothered to turn up, not even once. Popper on the other hand was desperate to join – publishing articles that he hoped would catch the Circle’s attention, while also chasing down its members on University campuses – and yet was never invited.

To Popper’s credit though, he had a very good – and unpopular – reason for wanting the eyes and ears of the Circle: he thought they were wrong… about everything that mattered!

Obsessed with building a criterion of meaning, the logical positivists believed that there were only two types of valid statements. First, those true by definition or logic: “statements such as ‘All bachelors are unmarried men’, equations such as ‘2+2=4’, and logical inferences such as ‘All men are mortal; Socrates is a man; therefore Socrates is mortal’. And those which were empirical and open to verification: ‘Water boils at 100 degrees Celsius’, ‘the world is flat’ (which, being open to verification, is meaningful even if false).” All other statements – those not fitting within these categories – are, by this account, literally meaningless.

‘Does God exist?’ is impossible to verify, and so is classified as meaningless. But so is the claim that ‘Murder is wrong’, which also sits beyond the scope of verification, and therefore belongs in the same intellectual rubbish bin. Even if you follow the logical positivists in this line of reasoning, and accept that ‘Murder is wrong’ is indeed unverifiable, why should that also make it meaningless? Why bundle the two together?

As you might expect, all this hinges on just what counts as verification. And it is here that the Vienna Circle found the philosophy of Wittgenstein so useful, digging into the Tractatus and embracing Wittgenstein’s ideas as their own. Ideas that Popper dismissed as “facile”.

Popper pushed back at the Circle by “polish[ing] up a two hundred year old artefact” from David Hume: the problem of induction. Restated here by Bertrand Russell: “the man who has fed the chicken every day throughout its life, at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.” Or, to move this image off the farm and into the laboratory, “no number of experiences can prove the validity of a theory.” In short, verification of this kind was impossible!

But Popper wasn’t done there. He railed against the Circle’s central project: the attempt to delineate sense from nonsense. He did so not only because, in following this criterion, they were carelessly discarding important philosophical problems, but also because the demarcation that really mattered was the one between science and non-science. The question of meaning – as Popper saw things – simply had nothing to do with it.

And as Popper would note, the Circle and its theory couldn’t survive by their own standard: logical positivism is itself a metaphysical claim, unverifiable in just the way that ‘Murder is wrong’ is unverifiable. So by its own light, logical positivism declares itself meaningless.

If these were the cold arguments of fact and theory, where Popper took things a little more personally was with Wittgenstein’s claim that there was no such thing as philosophical problems, only philosophical puzzles. That is, all apparent problems were really only problems of language – things that could be soothed away with some “linguistic therapy”. And all such issues could be easily avoided by simply not using language in unfamiliar ways (“when language goes on holiday”). For Wittgenstein, when we unravel important questions in philosophy, we are not exposing a hidden logic or underlying explanation, but rather just reminding ourselves of how language is properly used.

This was much too much for Popper to handle. What he saw in Wittgenstein here was an indifference to the real world around him, an indifference to the important questions within that world, and worst of all an indifference to the everyday people out there who longed for progress and a little less suffering: “Wittgenstein used to speak of ‘puzzles’, caused by the philosophical misuse of language. I can only say that if I had no serious philosophical problems and no hope of solving them, I should have no excuse for being a philosopher: to my mind, there would be no apology for philosophy.”

Which brings things back to the heat and fury of the Moral Sciences Club, and to that red-hot poker waving around the startled room. Popper had come for a fight, with a lifetime of bitterness and injustice to correct, while Wittgenstein had walked into an ambush of sorts. And yet even without this tensely built stage, things were always unlikely to go well between the two men. Both were unbelievably intolerant at the best of times… and notoriously so.

Bryan Magee once described the typical meeting with Popper like this: “an intellectual aggressiveness such as I had never encountered before… in practice it meant trying to subjugate people.” A former student and colleague of Popper’s, John Watkins, remembered typical seminars where invited lecturers would get as far as “announc[ing] his title”, before “Popper would interrupt” so much that “the speaker got through his title and nothing more.” Badgered with the energy and fixation of a schoolyard bully, these visiting dignitaries would routinely leave the seminars in tears, leading many people to joke under their breath that Popper’s book The Open Society and Its Enemies ought to be renamed The Open Society by One of Its Enemies.

And yet somehow, in terms of sheer prickliness and hostility – as well as a complete lack of social etiquette – Wittgenstein managed to outdo even Popper. The novelist Iris Murdoch said that Wittgenstein imposed “confrontation on all his relationships”, and his favourite advice to aspiring young students of philosophy was to “abandon the subject” and instead “work with [their] hands”. As a school teacher he would beat his pupils beyond even the then-accepted standards. While living in Norway he once threatened to attack his neighbour with a stick. And when, in 1929, Cambridge was manufacturing a way to award Wittgenstein a doctorate for his Tractatus, he scoffed at and belittled his examiners, Bertrand Russell and G.E. Moore (titans of philosophy in their own right), from across the table.

He ended the meeting – and the questioning of his work – abruptly by slapping Moore on the shoulder and saying “Don’t worry. I know you’ll never understand it.” Wittgenstein would later smirk that Moore was living proof of just how far someone could get in life with “absolutely no intelligence whatever.”

So, it would seem, the fateful meeting at the Moral Sciences Club was always going to end with fireworks, anger, and intrigue. But perhaps not intrigue that would linger, still hot, over half a century later!

In 1998, while David Edmonds and John Eidinow were working as journalists at the BBC, the Times Literary Supplement fell on their desks. Within its pages was a letter claiming that Popper’s account of the meeting with Wittgenstein “was a lie”. A week later another letter arrived from a new author, saying that they had witnessed the encounter, and that both Popper and the previous letter were wrong. Then, remarkably, another week later someone else wrote in to say that no, everyone else was mistaken and that he had the true story.

Edmonds and Eidinow were hooked, and began digging into the question, recovering old documents and compiling witness accounts. Memory is a difficult thing, but what we can – and should – say from this reporting is that Popper did, in fact, lie. Or at the very least he embellished certain details in his favour, to fluff out a long-cultivated self-image, as well as the pages of his autobiography.

It was, and is, a damn good story nonetheless!

 

*** The Popperian Podcast #11 – David Edmonds – ‘Wittgenstein's Poker’ https://popperian-podcast.libsyn.com/the-popperian-podcast-11-david-edmonds-wittgensteins-poker

Deutsch's Theory of the Pattern: The Widespread Compulsion to Legitimise Hurting Jews

In conversation with Richard Landes

 

There they were in the no-man’s-land between blood and bullets, Israeli military on one side, Palestinian forces on the other. Muhammad al-Durrah was 12 years old and screaming in terror, his father struggling to shield him behind a small concrete barrier. Then the camera suddenly shakes, dust sifts across the lens, focus returns, and al-Durrah is lying dead in his father’s lap. This was September 30, 2000, two days into the uprising of the Second Intifada, and the world had an image that it couldn’t ignore.

Media of all stripes and languages began running headlines about an Israeli genocide of the Palestinians, Jewish indifference to life, and the cold-blooded killing of a child. Across the Arab world al-Durrah became an instant martyr, with pictures of his dying body soon used on postage stamps around the Middle East; while the heat and intensity of the Intifada burned brighter with rockets, suicide bombings, and more of that gunfire.

And it was all “the cheapest kind of fake!”

The video of al-Durrah and his father that was released to the media wasn’t even close to what critical journalists would ordinarily accept. Instead of raw, continuous footage, we got six separate ten-second clips, cut and glued together; all grainy, all unfocussed, and showing the before and the after, but not the actual killing. The reporting was that al-Durrah was shot multiple times in the legs and stomach, but there was no visible blood. Blame was immediately and persistently attached to the Israeli military, but when you look at the angle the bullets must have come from, the shooters could only have been Palestinian.

The discrepancies built thick and fast for anyone asking honest questions… and then an extra couple of frames were leaked. Al-Durrah is lying dead, his father lost in grief and trauma, a few more seconds tick by, and the young corpse rolls slightly onto his side, moves his hands away from his face, opens his eyes, and stares knowingly down the camera lens. Not dead, not wounded, just an actor on a dusty stage. When Richard Landes first saw the footage, he immediately understood what it was: “Pallywood” (the Palestinian practice of forging evidence against Israelis, in the hope of turning global outrage against them).

By far the most alarming thing about these obvious forgeries is the greedy, unquestioning way in which they are accepted. Despite clear echoes of The Protocols of the Elders of Zion, a prominent French anchor on a mainstream news channel stared down her audience after watching the al-Durrah footage, and said “this picture erases/replaces the picture of the boy in the Warsaw Ghetto”. With Israel as the “new Nazi”, everywhere you looked people like our news anchor were lining up to free themselves, and their countries, from the hungover shame of the Holocaust.

There was a heat and an energy built around this particular instance of Pallywood that had Landes worried. After the Holocaust, open anti-Semitism and the manufacturing of modern blood libels (false, and persistent, allegations that Jews murder Christian children for religious rituals) had largely stepped back into the shadows as the accounting of those horrors stepped out into the light. Sure, there were always those communities so tight within their own ideology, and religiosity, that they never felt the appropriate guilt, never corrected course, and never dealt with where their bigotry had led them.

But in those years of post-war reckoning, the Western world had been largely different. Though a catchphrase of a sort, “never again” was taken seriously, and hugged close, as a reminder that the West too had shared in Hitler’s hatred, and so also bore a distant responsibility for his camps. “The West had resisted blood libels”, Landes says, “until this one!”

On the streets of cities in France, chants of “death to Jews” began reverberating for the first time since the Nazi era. Jews were being attacked and abused and harassed with a righteous glee – “given what is happening in Palestine, what do you expect? You have brought this on yourselves.” – while most people turned away in silence and acceptance. No left-leaning political party uttered a concerned word, no anti-hate NGO did its self-declared job and campaigned against it, and most journalists didn’t even bother to file a report.

The obvious fraud of the video, combined with the near revelry and joy that people took in being able to legitimately target Jews again, suggested something atavistic, and intensely memetic, lurking just below the surface. The sight of ordinary, and otherwise rational, people going about their distant lives, turning their heads in a happy instant, and uncritically accepting the most outlandish of claims, took some explaining; and mere anti-Semitism just wasn’t going to cover it.

On the other side of the world from Landes, and deep into the writing of an upcoming book on irrationality and self-destructive behaviour, Oxford physicist David Deutsch had an uncomfortable answer:

I went to visit a Twitter friend, the physicist David Deutsch. He’s writing a book about patterns of irrational thought that sabotage human creativity and progress. He has a chapter on the Jews in which [he] identifies a pattern (he calls it “the Pattern”) concerning the Jews. The key to people’s behaviour in this regard, he argues, is the need to preserve the legitimacy of hurting Jews, for being Jews. This legitimacy is much more important than actually hurting Jews. And it targets only the Jews. It is not, accordingly, either a hatred or a fear, a form of racism or prejudice in the conventional sense, even though it can lead to those feelings and attitudes. But it is actually unique. No other group can substitute for the Jews as the target whom it is legitimate to hurt.

The truth of this is going to be found, or lost, in the details. Of course, a pattern needs to be explained, but much more so, it needs to be shown. It is not a claim that something is there, but rather that something repetitious is there. So where does someone start to show that the hatred of Jews is more than your garden-variety bigotry? At the beginning!

“His blood be on us”, reads the gospel of Matthew, “and on our children.” Deicide! The murder of God! It is a crime that no other people have ever been accused of, and a guilt which no other children have ever inherited. In fact, it wasn’t until 1965 – a full 20 years after the end of the Holocaust – that the Catholic Church finally, and begrudgingly, revised its official doctrine naming all Jews – past, present, and future – as directly responsible for the death of Christ; as the heirs to Matthew’s blood curse.

The worry around this issue burrows a little deeper still… and in a slightly different direction. Because the Jews are the keepers of the Old Testament, the killing of God (or the son of God) can strangely be excused through that same canon. After all, Christ’s death on the cross was not permanent – he was quickly resurrected for his trouble. More importantly, as every Christian will tell you, his death was the event which brought salvation to all of mankind. So as Landes humorously puts it, “perhaps the Jews should be thanked by Christians, not vilified?”

Maybe this could have been true, if history had played out a little differently. But the Jews have a richer and more persistent stain on their record, something even more unforgivable than murder: denial! As the gatekeepers of prophecy they saw Christ in the flesh, and rejected the obvious truth (for all Christians) that he was the long-talked-about, and anxiously awaited, biblical Messiah. How could they not see what was before them? The answer for so many people is: they did! And they lied… because it is in their nature. It is a sentiment similarly shared by many Muslims, with Muhammad substituted for Jesus.

For The Pattern to hold true, more is needed. What is being proposed here by Deutsch is not that an ancient hatred still lingers in the minds of many people today, but something much more insidious and destructive. Here the Jews are not just the scapegoats for the ills and violence of the world (no other group has ever been substituted for the Jews), but a special category of delight, where people find joy and celebration in being able to harm them – a prostrate, and deserving, enemy.

Harm is the important word there, because it is not about being able to commit violence, but about the legitimacy of doing so, if it were ever chosen. Most people operating under The Pattern never have, and never will, actually raise a fist against a Jew – rather what they want is for fist-raising against Jews to be legitimate and correct… and they want every Jew to know that this is the case, well-founded in evidence, and always hanging over their heads. It is about preserving the right to harm Jews, not about the violence itself.

Embraced when it is there, and craved in its absence, The Pattern has the resilience and dopamine-high of an addiction. And as with every addiction, workarounds and excuses are always at hand, always manufactured, so that those who are afflicted can happily, and greedily, indulge themselves.

The Enlightenment – with all its commitments to reason, liberty, tolerance, and progress – became a problem for The Pattern. Squeezed out of respectable circles for the first time, The Pattern would slowly adapt and evolve and find its way back to prominence through a sleight of hand, best encapsulated by the fraudulent Protocols of the Elders of Zion. Here the rapacious and cunning Jews were cast as the manipulators of our kindness and naivety.

It, and similar hoaxes, ran like this: why do the Jews support democracy, a free press, and the free flow of capital? Well, it’s not because they are a benevolent people who believe deeply in these ideas. And not even because the Jews prosper when everyone plays by democratic rules. Rather, it is because they are seeking to enslave all of mankind, and these are the tools with which to do it! So even the Enlightenment quickly becomes a Jewish plot for global domination.

But this retrofitting of evil intentions has its limitations, and tends to hit the heroin addict’s vein with the disappointing tang of methadone. Hot flushes, cold sweats, and sleepless nights become frantic heartbeats, horrible cramps, and skin so itchy that it is scratched away to blood. Soon the physical symptoms are intolerable to the point of crime and hospitalisation, with the addict now in so much withdrawal that he walks the streets in complete desperation – willing to do anything, anything, for a hit of that long-denied drug (The Pattern).

And when it was eventually found again, it came with all the pent-up frustration and anxiety of a sudden overdose – with Hitler, the Third Reich, the Holocaust, and six million dead Jews. Spurned out loud for so long, The Pattern was back in the public eye. And with so much catching-up to do, never had it been so efficient, so ruthless, so explicit, and so proud.

It was the kind of binge that addicts only wake up from on hospital beds, in jail cells, or in front of judges (that’s if they wake up at all). As quickly as it exploded in European rampage (and collapsed with defeat at the end of the Second World War), The Pattern was hushed back into silence, sent away to a court-ordered rehab centre, where again it would become much less acceptable, and much harder to express.

Despite the horrible recognition of where The Pattern had brought us all, there were still pockets of the world that never paused for even a small twinkling of self-doubt and reflection. Particularly in the Arab world, blood libel-type stories found a new voice and urgency at the very moment the full extent of Hitler’s crimes was under post-mortem. In their minds, and in their propaganda, those cunning Jews had tricked the world again, faked all those stories of massacre and genocide, and were plotting as always for global domination.

What could possibly explain such an inconceivably tone-deaf response to human suffering? Answer: The Pattern was under threat. And the choice was then, as it always had been, and as it remains today: either defend it or risk losing it forever. The scorn and the embarrassed looks from (some) Western eyes are an easy price to pay for the continuation of your favourite, and most satisfying, addiction; after all, they will likely thank you for it later on.

But the greatest threat to The Pattern was still on its way. Riding a wave of international sympathy and guilt, in 1948 the unthinkable happened: the state of Israel was established. Unthinkable because now every Jew in the world had a safe haven and a home, free of persecution and violence. There it was, a bright and unavoidable banner, speaking the new language of internationalism and human rights – the Jewish people had sovereignty… they were protected within the borders of their own country. The might of international law was on their side.

And of course, it was a return home of a kind. In the second century CE, the Romans had expelled the Jewish population from what is currently Israel, Gaza, the West Bank, and western Jordan. Renamed after the Philistines (ancient Jewish enemies), the region became Palestine by Roman decree. And there it stood, while the homeless Jews wandered from persecution to persecution, through pogroms, inquisitions, and violent conspiracy theories.

It wasn’t until the mid-1890s that the Zionist movement was founded out of the disappointment, the fear, and the exasperation of the Austrian journalist Theodor Herzl. With his own assimilation going badly (along with that of most of European Jewry), Herzl quickly found momentum for the cause, and the first Zionist congress was held in Basel, Switzerland, in 1897. The issue swirled, locations were offered and reneged upon, and importantly the control of the great empires collapsed, replaced slowly by the ideal of the nation-state.

With little to expect from the kindness of strangers, Jewish philanthropists took matters into their own hands and began buying up land in Palestine for the resettlement of European Jews; people who would re-join the surviving 10,000-odd population that had escaped Roman expulsion, and who had quietly eked out lives within the city of Jerusalem. 40,000 Jews arrived from Russia between 1905 and 1914, as they were hunted out of Tsarist society. 600,000 more arrived between then and the Second World War.

With each step toward safety and sovereignty, Arab violence grew worse; most sharply in 1929, when the Jewish community of Hebron was massacred and run out of town. In 1933, as the Nazis stepped into control of Germany, the Grand Mufti, al-Husseini, immediately contacted the German Consul General in Jerusalem, offering to help with Jewish eradication. By mid-war, the Mufti was broadcasting Nazi propaganda, organising attacks on British troops, and recruiting Muslim soldiers from Yugoslavia for the Nazi war effort. For his efforts, the Nazi high command appointed him to the ranks of the SS with the title of Major-General.

The war ended in 1945, Hitler had lost, and in that same year the League of Arab States (or Arab League) was formed. Its first order of business? Declaring a regional boycott of all Jewish farms, Jewish stores, and Jewish employment in Palestine.

But the Jews had a mandate from the freshly minted United Nations, an allotted territory, and for once in the long history of their people they had momentum at their backs. The British left, and Israel was declared with David Ben-Gurion as its first Prime Minister. America officially recognised the new state, then came the Soviet Union, then the rest of the world followed suit… except for a significant holdout, with not a single Arab country willing to acknowledge that Israel had a right to exist.

As long as Israel stood, the Jewish people had borders from which to legitimately defend themselves, and so The Pattern was in sudden, and permanent, danger. Then, as the world waited to likewise recognise another new state – the long-championed Palestinian state – in the neighbouring portion of the territory, those same Arab countries instead took the first modern step towards delegitimising Israel: the armies of Jordan, Syria, Lebanon, Iraq, Saudi Arabia, and Egypt invaded Palestine.

In control of Gaza, Egypt systematically expelled the entire Jewish population. Jordan, in control of the other territory, forbade the very use of the word Palestine, instead naming it the West Bank. And this is where relations remain frozen, with the stifling realisation that any recognition of an independent Palestinian state – with its lines drawn next to the Israeli state – would, by its own existence, also formalise the existence and borders of Israel.

Attempts to destroy Israel through calculation and war came in 1948, 1967, and 1973, and went, in each case, with Jewish victory and a deepening sense of Arab humiliation: they were losing their grip on The Pattern.

Undeterred by the failure of open warfare, Israel’s enemies turned to the Intifada, a persistent campaign of low-level, non-stop violence. The Intifada ran from 1987 to 1991, after which – through all the bombs, stabbings, riots, shootings, and thrown rocks – 20 Israelis had been murdered by Arabs under the banners of Islamic Jihad, Hamas, and the Palestine Liberation Organization (PLO). During the same period, 528 Arabs were murdered by their fellow Arabs, and those three organisations. Their crimes? Collaboration! Disagreeing with the violence, warning Jews about impending attacks, or simply maintaining friendly relationships with the Jewish community.

Worried that Palestinian support was shifting to Islamic Jihad and to Hamas, in 1993 the PLO decided to meet with Israeli negotiators to discuss a peace deal. The meetings took place in Oslo, and due to the anti-collaborator atmosphere that had been drummed up during the Intifada, it all happened in complete secrecy. Led by Yasser Arafat, the PLO walked out of the summit two years later with the signed Cairo Treaty in hand, and an agreed-upon “two-state solution”.

The Palestinian side of the bargain required a repudiation of terrorism, an end to anti-Jewish and anti-Israeli propaganda, and the altering of the PLO constitution to remove language which promised the destruction of Israel. For his efforts, Arafat pocketed the Nobel Peace Prize, flew back to Ramallah, stood before a crowded mosque, and demanded that Palestinians “continue their Jihad until they had liberated Jerusalem” (both in its violence and its territorial claim, a violation of the Cairo Treaty he had just signed).

The official PLO emblem remained unchanged as a map of the entire pre-1947 region (before the existence of Israel). Fatah’s emblem (Arafat’s core constituency within the PLO) remained the same as the PLO’s, just with rifles and grenades added for a little flair and emphasis. Palestinian schools continued to teach and lecture about the importance of destroying Israel. Anti-Jewish blood libel stories became a core part of Palestinian culture: taught, repeated, and accepted uncritically. An ancient law stating that any Arab who sells their property to a Jew will be executed and denied a Muslim burial was revived. And when Arafat was subsequently rewarded by the Palestinian electorate for running away from what he had agreed upon in Oslo (receiving 90% of the votes), terrorism and violence increased sharply across Palestine.

In 1999, Ehud Barak was elected Prime Minister of Israel and, along with American President Bill Clinton, made a desperate attempt to rescue the commitments of Oslo. Hashed out at Camp David, almost every demand of the Palestinian negotiators was acceded to. East Jerusalem (including the Jewish holy sites there) would become part of Palestine, all Jewish settlements not contiguous with Israel would be evacuated (by force if necessary), and the whole of Gaza and 96% of the West Bank would form the new Palestinian state, with Israel adjusting its own border and agreeing to land swaps to make this possible.

Given almost all that he had asked for – and so also on the precipice of having to give up The Pattern – Arafat again panicked. He walked out of the meetings, promising to return in a few days to polish the final details. Instead he flew back to Ramallah and launched the Second Intifada against Israel. Soon the world was watching those faked images of Muhammad al-Durrah bleeding to death in his father’s lap, and with that, everyone (not just the Palestinians and the Arabs) was free and happy to roll again in the thick mud of Jewish hatred.

The language had shifted to appease modern sensibilities, and to provide enough cover for those who still cared about such things. Unchanged in purpose and intent, the hatred of Jews had become the hatred of Israel! All just another step, another evolution of The Pattern, “needing to preserve the legitimacy of hurting Jews, simply for being Jews”.

Once something like this hooks itself into culture, a huge amount of effort and persistence is needed to break it… as well as simply to notice it in the first place. All patterns exist in their details as much as in their explanations, and so The Pattern is found in what passes unchallenged nearly every day, from the mouths of activists, to global newscasts, and into accepted truth:

* ‘Israel is an apartheid state’ – there are 1.9 million Arabs living inside Israel (and growing each year) with full rights; while the Jewish populations across all Arab countries combined have shrunk from 800,000 in 1948 to less than 9,000 today (Egypt: 80,000 in 1948 down to 100 today; Yemen: 60,000 in 1948 down to 50 today; Iraq: 140,000 in 1948 down to 5 today; Libya: 35,000 in 1948 down to 0 today; and so on…)

* ‘Israel is an occupying power’ – no such claims were ever made about the Egyptian, Jordanian, or Syrian occupations of Palestine, and when asked what is meant by the word “occupation”, the response is often a reference to 1948 and the creation of Israel itself. Meaning that the very existence of Israel is what many people consider to be “occupation”. A fact further emphasised by the multiple offers of statehood to the Palestinian authorities, and the rejection of those offers.

* ‘The Israeli military is a vengeful institution that massacres Palestinians (men, women, and children) indiscriminately’ – Israel has been attacked multiple times by its neighbours, and has never launched a war itself that was not in self-defence. When Israeli forces enter places like Gaza in response to rocket fire, they take the unprecedented – and near-comically cautious – steps of warning Gazan residents beforehand through leaflets and text messages. And when Israeli soldiers do commit crimes or human rights abuses, they are publicly tried in court; while Palestinian suicide attackers are still honoured as martyrs, with their families paid lifetime stipends for their sacrifice to the nation.

* ‘Israel is committing a genocide of the Palestinians’ – the Palestinian population has increased from under 1 million in 1950 to over 5 million today.

Before sitting down to start work on his upcoming book, David Deutsch was part of a project to write an up-to-date history of Israel. He decided to start things off with a bit of humour… and despite the best of intentions and light-hearted fun, The Pattern still managed to squirm its way into the light, jumping back off the page at Deutsch and his fellow authors:

Once upon a time, we wrote a parody history of Israel, intended for the blog Setting The World To Rights (now in suspended animation), in which every sentence contained at least one lie.

But the reactions of many of our friends who read it were alarming. Instead of falling about laughing, most of them read it as fact. These were not opponents of Israel, but people sympathetic to it. We hadn’t realised how pervasive the prevailing distortions and falsehoods are. Considering that the parody began: “Judaism is unique among religions in being exclusive to a particular ethnic group (the Jews). It teaches (in its doctrine of ‘the Chosen People’) that all other races are genetically inferior to the Jewish one and that Jews are entitled to rule over them”, this was alarming.

We realised that we couldn’t put the parody into the public domain. After all, The Protocols of the Elders of Zion is also a crude forgery, but is now part of the standard repertoire of the Pattern usually called ‘antisemitism’. For instance it is in the Charter of Hamas. We didn’t want to be responsible for another anti-Jewish canard that might last the next few centuries.

 

*** The Popperian Podcast #10 – Richard Landes – ‘Deutsch's Theory of the Pattern - The Widespread Compulsion to Legitimise Hurting Jews’ https://popperian-podcast.libsyn.com/the-popperian-podcast-10-richard-landes-deutschs-theory-of-the-pattern-the-widespread-compulsion-to-legitimise-hurting-jews

Karl Popper, Friedrich Hayek and the Future of Liberalism

In conversation with Jeremy Shearmur

 

In the 1960s the London School of Economics (LSE) was in a rare and fascinating position – it had a walking, talking, godlike presence hanging over it. A name that everyone already knew would soon be carved into statues – and who would define the institution for centuries to come – was alive and amongst them. He only turned up once a week, but even that was enough. From being the first (and only) staff member in the Department of Logic and Scientific Method, to forming a research academy and attracting a remarkable list of international students and colleagues, Karl Popper had made the LSE his own, in a number of ways.

This was the strange world that Jeremy Shearmur walked into as a fresh-faced undergraduate student. Popper was not the kind of professor to waste his time on the grind and repetition of teaching, but in those corridors and those lecture halls, and in the language of his professors, Shearmur could see a firm and domineering shadow:

I had a ‘Popperian’ education in philosophy, but largely not from Popper.

I am struck, when looking back at my time at the L.S.E., by the fact that Popper’s approach to philosophy was at the center of the course, but the Department was characterised by lively debates about it, rather than its uncritical acceptance. 

When those undergraduate days were finally out of the way, Shearmur briefly wavered between pursuing a PhD and a career in librarianship. And it was while testing the waters of this latter path, during a graduate apprenticeship at Durham University library, that a remarkable opportunity fell into his lap. The then-assistant to Karl Popper was leaving his post, and a quick replacement was needed. Through word-of-mouth the position was offered to Shearmur; he responded like this: “The answer was yes: it was, for me, a bit as if a young Catholic had been asked if he wanted to work as personal assistant to the Pope!”

With the leader revered and elevated above everyone else, a flock of adoring acolytes dropping everything important in their lives for the opportunity to be a part of things, and a healthy dose of fear and insecurity and insignificance and punishment hanging in the air, the Popperian Church was, unmistakably, church-like. But this was also a church, and a Pope, who refused and rebelled against infallibility and claims to higher knowledge. The papal robes were torn and muddied, the stained glass cracked and unwashed, and the people who attended regular mass were loud, aggressive, and critical… of everything.

When he wasn’t ferrying library materials to Popper’s home or chasing down obscure translations, Shearmur began to sniff out his own role within the church, something beyond his current position of altar boy; he started looking for problems within Popper’s philosophy. And what Shearmur found was a world both too small, and too expansive, at the same time. He saw unnatural and unnecessary limitations tying down and restricting the possibilities of critical rationalism. In his mind, criticism should be stretched beyond science and into metaphysics; after all, knowledge of all kinds is a “social phenomenon”.

If we put logic, formalization and technical nit-picking back in its proper box, this will enable us to reflect on wider issues which were opened up in The Logic of Scientific Discovery but were not pursued there. This includes the role of ‘methodological rules’ (which, I have suggested elsewhere, may be understood in partly sociological terms). This would, at once, get rid of the artificial division between critical rationalism and sociological approaches to knowledge, but would allow us to pursue the latter with a Popperian concern for critical appraisal and the improvement of our institutions. 

Embrace just such an approach, break down those barriers, and what do we get according to Shearmur? A new, important, and central appreciation for classical liberalism.

It is one of those wonderful products of history which, due to its success and adoption in the world, has lost much of its gloss and meaning and value. The roots of classical liberalism drag us back to medieval Europe, to Britain, to kings, monarchs, duties to God, and the idea that individual rights are due to the generosity of governments – loving gifts that cannot be turned around as weapons against those same governments.

Classical liberalism emerges in this moment – insisting upon a sea change in our understanding of those rights. Rights are not gifted to us but rather owed to us. It is not the role of government to present us with our rights but to protect them against encroachment. And when governments cannot live up to this, or when they are the ones doing the encroaching, then those same rights are what we use to remove and replace them with someone better. Classical liberalism turns around the bulk of human history and places consent at the centre of governance.

And in many ways this sounds Popperian… but in many ways it is not! The problem begins with drawing a line between the Two Karl Poppers. The earlier Popper, writing in The Open Society and Its Enemies, was a much more interventionist man, worrying about the dangers of “unregulated capitalism” and preferring a firm hand of state control within economic life. The later Popper twists out of this and lands on a streamlined, minimalist, indirect view of the state, whose role is never, ever, to attempt anything as presumptuous as trying to make people happy.

This, it appears, is where we can find the slow-working influence of Popper’s old friend at the LSE, Friedrich Hayek, and the entry point for negative utilitarianism (seeking only to reduce or minimise disvalue, rather than trying to maximise value). Here the state, in all its power and reach, is largely asleep at its post, only ever shaking itself awake – and to action – in order to protect individual freedoms. Why would anyone want such a thing? Why would anyone hope for a largely impotent government?

It adds to clarity in the field of ethics if we formulate our demands negatively, i.e., if we demand the elimination of suffering rather than the promotion of happiness. Similarly, it is helpful to formulate the task of scientific method as the elimination of false theories (from the various theories tentatively proffered) rather than the attainment of established Truths.

Just how that ethical clarity and the elimination of false theories comes about needs some explaining. And here Friedrich Hayek’s firm hand can almost be seen beneath Popper’s ink as he begins to explain the raw, exposed, uncorrupted, and constantly reaffirmed relationship between buyer and seller.

It may not feel like it for most people, but every time you wander into a store of any flavour, hand over your money, and walk out with a product of any kind in its place, you are becoming an important and indispensable part of an epistemological loop. A clean and unambiguous system of accountability, reaffirmed anew with each and every purchase.

Famously for Popper, knowledge creation is a process of conjectures and refutations. And that refutation part can often be the trickiest. It is where we test our theories against reality and against other theories… it is where other people begin to really scuff things up. The laws, the culture, the institutions, the bureaucracies that we build all have a logic behind them, a reason for their creation and for how they look. They tighten our hold on certain things. They protect us, and our values, from unwanted, poorly conceived, and short-sighted criticism. They prevent some of that scuffing!

And so we have a problem! Albeit a problem that doesn’t immediately sound like one.

Imagine that you are an activist at some point in our recent history – someone with truth and justice on your side. You fight and you suffer and you are arrested and you lose your job and are threatened. Yet each day you continue, no matter the cost.

At first it is just you! Then slowly another activist joins you on those streets, then two, then a handful… After decades of this a majority of your fellow citizens are on your side; you have very slowly, bit by bit, managed to convince a country of voters to correct its errors. Old discriminatory laws are rescinded, better ones are adopted in their place, and as far as you can see – as far as morality and law touches your life – things are changed for the better.

But none of this lets you sleep any easier at night… not yet! People can be frivolous and manipulated, they often change their minds, and if you could convince them to drop their previous, deeply held, beliefs (as you just did), then what is there to stop your enemies from doing the same? Tomorrow they might rally their troops, resharpen their arguments, copy your tactics, take to the streets, and eventually persuade everyone to walk back all those changes that you fought so hard for.

So to protect your achievements you decide to build something around them, something resilient to shifts of opinion and protest, something isolated from the whims of those masses. You create an institution! Through a series of steps, hierarchies, procedures, and bureaucracy, you clog the muscles of your enemies with a thick and fatiguing lactic acid. From here on out, social change becomes more arduous, painful, time consuming, and ambiguous.

But this still won’t do it, this still doesn’t seem safe enough. So you decide to wrap your hard-fought progress up in a new type of reactive culture, one that says ‘To question this is evil, to try and change this is hateful, and to even doubt this is bigotry’. And it often works… because, as chance would have it, it is simply much easier to centralise knowledge (moral knowledge in this case) than it is to create it.

Here we find ourselves at a place of good intentions, and yet somewhere profoundly un-Popperian. For Popper error is all around us, at all times; so much so that error is the natural state of things. Which is why that second part of Popper’s epistemology (refutation) matters so much. Surrounded in all directions by so many wrong ideas, the only hope we have to make any sort of progress is to actively seek out these mistakes, and to remove them wherever they are found. We must embrace a wet blanket of constant and biting criticism. And we must avoid the creation of hollowed-out spaces in society, spaces where criticism is seen not as a corrective but as a problem.

Back to Hayek! Back to negative utilitarianism! Back to those impotent governments and a little more “clarity”. Well-meaning errors of the kind mentioned above find life within an open marketplace much less hospitable. So this time imagine a different pathway in life: instead of being an activist out to change the world, you are a small business owner out to get rich. And you start where all things do – with a problem… multiple problems.

You are hungry and cold (problem), so at first you look for a job to buy food and clothes (conjectured solution), but no one will hire you (criticism). Still in need of money (problem), you decide to sell your skills, labour and time by, for example, fixing shoes on a street corner (conjectured solution). You make some money, but not enough (criticism), and you don’t enjoy the work (criticism again). You decide to change the scale of your business (conjectured solution), and to make things a little more comfortable for yourself you rent a small shop and fill the shelves with products that you think people want to buy (conjectured solution again).

No customers enter your shop (criticism). You decide that you need a better location, somewhere with more human traffic, but when you move your shop across town (conjectured solution), you find that, despite now having plenty of customers browsing your shelves, no one ever buys anything (criticism). You need new and better products, so you get a bank loan and invest in new merchandise (conjectured solution). About half of your new products begin to sell quickly, but the others remain unwanted and untouched (criticism). You use your profits to replace your poorly selling merchandise with different ones (conjectured solution).

Now about 60% of your products are selling reasonably well, but the other 40% are still cluttering up your shelves (criticism). You continue error-correcting like this in a piecemeal fashion (conjectured solutions), but trends and shopping habits all change (more criticisms), and you are always chasing the tastes of your customers (endless conjectured solutions). You are also constantly discovering new and unexpected ways in which you are failing to meet their expectations (endless criticism).

It all sounds exhausting! But this is the case for all knowledge. And in this unhindered marketplace, where everyone has the free choice of what they sell and what they buy (and at what price), knowledge is being created at an incredible pace; with every single transaction.

So now imagine yourself in a much more familiar and intuitive position (at least for most people): being asked, as we are at every election, to choose how we want to distribute goods and services around the community; and to decide upon a way of life. To do this you can place your trust in politics and political leaders, with all their inherent leanings toward compromise, avarice, and corruption. Or you can follow Hayek’s lead and embrace the free flow of conjectures and refutations, of trial and error, of delicate fine-tuning to the needs and desires of the community, to a place where critical feedback is thick in the air.

And in appropriately Popperian ways, it is a place that doesn’t ask much from us… nor anyone!

Few things have the ability to drown most people in intellectual deep water quite like macroeconomic theory with all its talk of opportunity costs, supply chains, globalisation, sovereign risk factors, surpluses, deficits, recessions, depressions, elasticity, liquidity, seasonal adjustments, asset turn-over, marginal standings, business cycles, companies, industries, resources, gross domestic product, inflation, stagflation, classicalism, Keynesianism, monetarism… And yet none of it matters, none of it is necessary, none of it needs to be understood in order for you – or anyone – to participate.

It is an argument that former British Prime Minister Margaret Thatcher often used against the overreach of the state. With all the might and time and resources that governments have, they are in a unique position to understand all that terminology above, in all its detail and permutations. In fact, this is what they work so hard to achieve – developing grand theories and predictions for every corner of economic life, from the largest corporation to the smallest transaction. And yet despite all the resources they have, and despite knowing all that they do, they consistently get it all very wrong! Their predictions fail, their theories collapse, and so they start again with another grand enterprise.

The error that they are making has to do with – in hard Thatcherite terms – “local knowledge”. It is a mistake so common, and so hard to shake, that we have all fallen for it, and suffered from it, ourselves.

Back to our imagination, and with a lack of appreciation for local knowledge turned against us. It could be a new law passed through parliament, a new ethical guidance for your workplace, a new regulation for your community, or simply, perhaps, a recommendation from a friend. Regardless, someone, somewhere, has decided to solve a problem for you, whether you recognise it as a problem or not. And to do this they have either worked their way down to you from an all-encompassing theory, or have worked their way across (or up) to you via an extrapolation from their experience, and their own local knowledge.

There you are, handed these solutions, this reform agenda from afar, and almost immediately you realise that none of it will work! Perhaps it might somewhere else, but it misses all the particulars and challenges of your situation. There are too many small details missing, too many problems that are overlooked, not enough understanding of why things currently look the way they do, not nearly enough nuance… not enough local knowledge!

The beauty of using the marketplace as our primary source of knowledge creation – of conjectures and refutations – is that no imposition of this kind should ever happen. And going back to an earlier statement, it never even asks for it. Local knowledge is enough!

It might sound counterintuitive, but there is nothing inherently parochial about everyone doing their own thing, in their own little space, making their own decisions and solving their own problems. Just as various scientific breakthroughs come together from different corners of society to form a single body of science, so it can also be with political and economic knowledge. Indeed it is how culture works, silently stitching together a patchwork of truth and pragmatism. No one ever needs to know everything, nor to legislate for everyone and every behaviour. Knowledge always works best as a collaborative exercise… sharing it is enough! Just as you don’t need to understand heart surgery to benefit from heart surgery, you don’t need to understand all the connected details of a supply chain to prosper from that supply chain.

If so much that we value can be pieced together from market forces, then why have governments at all? What’s the point in having them, especially when their tendency toward bureaucratisation, institutionalisation, and overreach is such a risk? The answer: we need them as back-ups to ensure trust within those marketplaces; as the enforcers of contract law and bankruptcy law, as the protectors of private property, and so on. They are there to ensure that anarchy, and power wielded by the strong, can never step into our lives and shake us from our freedoms… the very same freedoms that allowed the market to work in the first place.

Governments are also there to smooth out the jagged moral corners of society, to hand us pensions when we get old, to deliver us health care when we get sick, and social welfare when we lack the basics in life. All the things that we believe are necessary, yet which the invisible hand of the marketplace hasn’t yet got around to providing a complete enough answer for. Finding this balance was Hayek’s great challenge, and one which we are still fumbling with today. Finding the line between the philosophies of Popper and Hayek remains with us too. Though Shearmur has a partial answer:

The crucial difference between Popper and Hayek... is that while they both make use of epistemological argument for a broadly liberal position, Popper’s views centre on the fallibility of scientific knowledge, while Hayek is concerned not with scientific knowledge but with political lessons which might be extracted from what could be called the social division of information. Further to this, central to Popper’s vision of politics is the political imposition of a shared ethical agenda, through a process of trial and error: of piecemeal social engineering. What is central for Hayek are markets and their associated institutions which, on his account, form a kind of skeleton for a free society—one which, at the same time, enables us to make cooperative use of socially divided knowledge, and to enjoy a broadly ‘negative’ conception of individual freedom.

So what can, and should, we expect from a re-embrace of classical Hayekian liberalism? Not much. Only that it is better at knowledge creation, and so also better at ordering society, than all other less free alternatives. It all hinges on what all that freedom allows for: a roaring and constant flood of criticism. And with that comes the quick exposure of errors, and their quick correction to something better. With that flood of criticism comes an equally large flood of knowledge creation. And so it also stands exposed to its own refutation: all that needs to be shown to put classical liberalism in its grave is that another system (whatever it may be), once implemented, creates more knowledge, and does so more quickly. In that moment it would all be over. And this is easy to measure: just stare out at the world today and pay attention; this experiment is being run over and over before our eyes.

That being said, would it be possible to create a more centrally controlled society, with centrally planned institutions, which side-steps some of those market forces, which reaches deeper into our lives with more coercion and more regularity, and which also manages to produce more criticism than classical liberalism? A system where more is driven for, and more is demanded, than just an opening of space for dialogue and feedback. Rather, a society where criticism is actively manufactured and applied, where people are coerced to seek out criticisms that they might not ordinarily find or care about, filling in gaps that the market might miss, and adding more voices and opinions to the places where criticism already exists (similar to the way in which compulsory voting systems coerce people into thinking more about politics, policies and elections).

I suspect the answer is yes! If so, it will take some effort, carry with it a unique set of risks, and require knowledge that is yet to be born… still we should be open to the possibility.

God-like auras cultivate their own natural resentment. Churches fracture and fall precisely because they are churches. Pedestals are just unpleasant things to be around for too long. So perhaps it was appropriate that, in spite of all that he was, and all that he gave to the London School of Economics, it wasn’t until 1995 that a statue of Karl Popper made its way onto campus. And even then it was a donation from the Austrian President, Thomas Klestil – a small bronze bust now gathering dust in the quiet halls of the philosophy department.

It is likely that Popper would approve of all this understatement – the lionisation of people or ideas is often the first step towards shielding them from criticism; criticism that Popper would have insisted upon hearing.

But I suspect even he – if alive today – might bristle upon taking a tour of his former campus: looking around at all that new architecture, he would realise that, amongst the redevelopments and changes in design and infrastructure, his old office had been cleaned of his belongings, his name pried unceremoniously from the door, and the empty space turned into a public toilet!

 

*** The Popperian Podcast #9 – Jeremy Shearmur – ‘Karl Popper, Friedrich Hayek and the Future of Liberalism’ (libsyn.com)

New Zealand and the Authoritarianism of Plato

In conversation with James Kierstead

 

It was, and still is, an unenviable journey. For most people it is the other side of the world, a drowsy corner of boredom and isolation and stillness and parochial concerns. But a good job is a good job, and so James Kierstead found himself packing up his life in America and trekking out on an academic relocation to the sheepy fields of New Zealand. He was in small company. Very few colleagues of note had made that move before him – and running down the list of ex-faculty nothing jumped off the page; none of those names, despite all they had achieved, were particularly recognisable… except for one! Someone rich in controversy from all directions:

The mixed nature of Popper’s reputation was made clear to me only a few weeks before I myself moved to New Zealand, at a dinner following an interdisciplinary seminar on ancient political thought at Stanford. When I mentioned my impending move, the conversation soon turned to New Zealand classicists and philosophers, and in this context the name of Karl Popper was one of the first to come up. Very soon the dinner table was divided: though everyone had heard of Popper, only the political scientists in attendance showed unguarded interest; the classicists were unenthusiastic, and the ancient philosophers (both of them Platonists) were openly hostile. The only person actually to praise Popper was an exchange student from China, who was actively engaged in his country’s prodemocracy movement and lauded Popper’s insistence that our future is ours for the making.

Why would classicists care about Karl Popper in the first place, let alone be “unenthusiastic”? Why on earth would those philosophers – the people you might expect to appreciate and embrace Popper the most – be “openly hostile”? And why would an exchange student (and part-time democracy activist) from an authoritarian country be so full of “praise”? Well, it all comes down to an unlikely villain and an interesting kind of “war effort”.

Karl Popper’s time in New Zealand was one of exile rather than choice. Pushed out of his native Austria just before the Second World War, Popper found in New Zealand the first, and most solid, offer of safe harbour to come his way. Settling into the otherworldly calm of Christchurch, he was motivated to do his bit, whatever he could, from the distance at which he then sat. It was there, watching the horrors unfolding in Europe, that he wrote The Open Society and Its Enemies. The Enemy was naturally totalitarianism and brutality and coercion and disenfranchisement and oppression of all varieties. But His Enemy – the person that Popper named as the divine progenitor of all this carnage – was what shocked his readers and turned so many of them off: the Greek philosopher, Plato.

It was first said by Alfred North Whitehead, and it echoes still as a raging cliché around philosophy departments today, that the “European philosophical tradition… consists of a series of footnotes to Plato”. A clear embellishment, it is not so much meant to be taken literally as to represent the titanic figure that Plato was. He did so much work, of so much significance, so early in the history of philosophy, and in such a welcoming style, that a level of profound awe is certainly appropriate. But when The Open Society was first published a feeling of near-religiosity hung in the air; and so Popper was stomping upon sacred ground.

The level of complete and fawning veneration for Plato – both inside and outside of academic circles – was hard to overstate. People like Richard Livingstone – President of Corpus Christi College, Oxford, for nearly twenty years, and Vice-Chancellor of the entire University from 1944 – were making loud and uncontroversial names for themselves by saying that Plato’s Republic was not only an important text in politics or philosophy, but also “the greatest of all books on education”.

Popper wasn’t just challenging this; he was casting a wide and staining moral judgement upon men and women like Livingstone. Far from being a great book on education, Popper saw the Republic as something uniquely dangerous… on a par with Mein Kampf. So those people lining up to praise Plato might as well have been crowding into the parade grounds at Nuremberg, goose-stepping in unison, and saluting Hitler with an extended Sieg Heil!

And if that didn’t hurt sensibilities enough, there was a tone to Popper’s attack, and a twinkle in his prose, that stood well outside philosophical tradition. Kierstead picks out a few representative examples of this:

The accusation that Plato’s literary skill served only to throw a veil over “the complete absence of rational arguments”; the dismissal of one of his inferences as “a crude juggle”; even the description of the ideal of the philosopher-king as “a monument of human smallness.”

In response to this inflammatory language, Gilbert Ryle wrote back with a kind of shock and disbelief (despite his sympathy with Popper’s analysis) that would sum up the reaction of many of his colleagues:

[Dr. Popper’s] comments… have a shrillness which detracts from their force. It is right that he should feel passionately. The survival of liberal ideas and liberal practices has been and still is in jeopardy. But it is bad tactics in a champion of the freedom of thought to use the blackguarding idioms characteristic of its enemies. His verdicts are, I think, just, but they would exert a greater influence if they sounded judicial.

But it wasn’t just that he sounded a little too lyrical and bombastic for the taste buds of his day – people like Ryle sniffed out something much more problematic, as they saw it. Popper was writing with an unconcealed contempt and a near-belligerent hostility. He was not deliberately splicing pithy, throwaway phrases into his work to catch eyes and draw attention. He was not playing the role of provocateur, but earnestly talking down to Plato as an intellectual superior might.

Kierstead’s question: “So which parts of his argument stand up to scrutiny, and which do not?”

The problems start with Popper’s use of the word tribalism. A tribal society for Popper was a closed society, in contrast to his Open Society. It is the dark cave that Greek democracy and Athenian society crawled from, muddied, sick and immoral, and it was where, in Popper’s estimation, Plato wanted us to return: “Plato was longing for the lost unity of tribal life”.

It was a hard case for many people to swallow. Before Popper, there existed an instinctive difference between those simple, raw, tribal societies, and the highly efficient, centrally planned totalitarian states that he grouped together under the tribal umbrella. What made this comprehensive grouping newly appropriate for Popper was the common diagnosis of Historicism, the idea that history is determined by certain laws, and so the future can be accurately predicted by understanding those laws. Or to quote Gilbert Ryle again, history is “not a bus but a tram.”

When he stabbed this charge into the philosophies of Hegel and Marx, no one was very shocked. Even the blood-red adherents of those philosophies smiled back at him approvingly, nodding their heads and saying out loud, yes, we are historicists, we just don’t think that historicism is a dirty word. But with Plato things didn’t seem to fit quite so neatly… and the champions of Platonic philosophy were much less interested in playing nice.

For many readers, Plato was a man interested in questions of the good life, about how we should live, and what the proper way to be a part of society was; not in grand designs about history, nor about general laws that govern all human development.

The key confusion, it seems, was around the question of his metaphysical Theory of Forms – the belief that the physical world around us is just an imperfect copy of the Realm of the Forms (an ideal world populated only by perfection). Popper thought that Plato carved his political philosophy directly from this foolish idea, and attributed to him the thought that all change is therefore negative, something that takes us further away from those ideal forms, and so something that is always corrosive. And make no mistake about it, this is certainly an image of Plato The Historicist.

The trouble is that when classicists dig into the relevant passages from the Phaedo, they often emerge with something very different in their hands. It is not that each step takes us further away from perfection; rather, when things do get worse it is because of an increased distance from the Forms, and conversely, when things noticeably improve it is because that distance to the Forms has been shortened. “Things take on certain qualities because the Forms come to be in them; when a man becomes just, for example, he comes to partake in the Form of justice.”

On this reading not all change is change in one negative direction. And so this is Plato The Anti-historicist. But Popper would bridle at all this talk of perfection and original beauty, and insofar as Plato claimed to already know the end game of history (what we should be trying to achieve, not only now but forever), the label still seems to have plenty of purchase. And while Popper does talk about Plato as “the embodiment of an unmitigated authoritarianism”, he is also quick to offer his enemy a few charitable excuses:

"My thesis here is that Plato's central philosophical doctrine, the so-called Theory of Forms or Ideas, cannot be properly understood except in an extra-philosophical context; more especially in the context of the critical problem situation in Greek science which developed as a result of the discovery of the irrationality of the square root of two."

"It seems likely that Plato's Theory of Forms is both in its origin and in its content closely connected with the Pythagorean theory that all things are in essence numbers. The detail of this connection and the connection between Atomism and Pythagoreanism are perhaps not so well known."

But things are about to get much, much darker! And in the eyes of many classicists, as well as many philosophers, much, much less forgivable. Popper wasn’t only saying that Plato was wrong, nor only that he was an authoritarian, but also that he was deliberately dishonest, manipulating and distorting the philosophy and character of his teacher, Socrates. The language is typically rough, and Popper is here fighting to rehabilitate the historical Socrates from his student’s “betrayal”: “the philosopher king is Plato himself, and the Republic is Plato’s own claim for kingly power.”

By assigning psychological motivation to someone thousands of years dead, Popper was clearly reaching, but the ways in which Plato is commonly defended from this charge aren’t very impressive either. The first goes like this: Plato’s dialogues are so deeply complex and layered and profound and nuanced and difficult that trying to pull the actual philosophy from them is an impossible task; they are irreducible to any one thing. That this has even been entertained within serious academic circles is an embarrassment to the field! Worse, it is blatantly irrational. That something is hard or difficult or nuanced or complex does not mean that it is therefore impossible. This is a picture of well-trained adults running away from their problems rather than trying to solve them.

But this nonsense does have a slightly less ugly sister, an argument with just a bit more purchase… but only a bit. That is, that the dialogues should not be understood as communicating philosophy at all. A better way to describe them, for some people – prominent among them Leo Strauss – was as dramas: Shakespearian plays of a kind, designed to draw out the emotions of the audience, to entertain, and to inspire, but not to philosophise. It is an argument that Kierstead has little time for:

There are plenty of ways in which the comparison with Shakespeare is misleading. For a start, a single character dominates a large number of Plato’s dialogues; this is not the case with Shakespeare’s plays. Moreover, though characters in Shakespeare often say things that are of philosophical interest, they do not engage in systematic philosophical enquiry, either on their own or with others. But systematic and cooperative philosophical enquiry does not only happen repeatedly in Plato’s works—it constitutes the lion’s share of the content of almost all of the dialogues.

Which brings us to the Mouthpiece Argument – Popper’s claim that Plato betrayed Socrates and used him as a puppet for his own philosophy; otherwise known as the Socratic Problem. It is certainly problematic to assign the opinions of any character to their author, but there is a difference here that matters. If the anti-democratic views belonged to Plato, as Popper claimed, then it would certainly make sense for Plato to write them as Socrates’ instead (as he did). Such authoritarianism, and such dissent against the Athenian democracy, could not have been voiced publicly at the time… at least not as one’s own.

This is all beside the point, though. Whether it was Plato or Socrates or even someone else, it is all just tinkering on the fringes of the argument – as are the long debates about the nature of authoritarianism, and about how authoritarian Plato actually was. As Kierstead explains, even if Popper were wrong about the strength of the label, and had to backtrack on that claim of “the embodiment of an unmitigated authoritarianism”, he would still have accomplished his goal:

In particular, it strikes me that Karl Popper himself would have been quite happy with the statement that Plato, though an authoritarian and even a totalitarian, was not an extreme totalitarian. An acceptance that Plato’s philosophy bore some resemblance to fascism would have been more than he was hoping for; but he probably would not have been terribly upset with it.

If all that people took from the episode was that Plato was indeed an enemy of the Open Society, then the book was still a roaring success. And so it was! The aura of Plato was over, never to recover nor return to what it was before Popper took aim. He was brash, bombastic, loud, at times obnoxious, and he deliberately rattled the sensibilities of the day – and perhaps this was exactly what was needed. For Kierstead, and for so many others, “Popper’s most important contribution was bursting the bubble of the complacent Plato worship that had been carried out for decades”.

And it all started with that journey to the other side of the world and the new quiet life he found there. It is a heritage that New Zealand holds proudly today: the country was home to Karl Popper (if only briefly), and from its shores came The Open Society and Its Enemies. But it all might have been different if the peculiar fascinations of Popper had won the day. When he was applying for university positions in New Zealand and Australia, Popper wrote to his old friend Ernst Gombrich with a profound dilemma:

You kindly advise me to prefer Otago [New Zealand] to Perth [Australia], in spite of the Cangeroos [sic]. But I think you don’t really know enough of Australia by far: the nicest animal there (and possibly the loveliest animal that exists) is the Koala bear. Cangeroos may be nice, but the opportunity of seeing a Koala bear is worth putting up with anything, and it is without reservation my strongest motive in wishing to go to Australia.

 

*** The Popperian Podcast #8 – James Kierstead – ‘New Zealand and the Authoritarianism of Plato’ (libsyn.com)

Karl Popper and Africa

In conversation with Oseni Taiwo Afisi

 

He fled as an aged and battered 35-year-old. A man between worlds, with a thin and eclectic résumé. Karl Popper had tried his hand at teaching, at psychology (of sorts), and even at carpentry, completing an apprenticeship as a cabinet-maker. He had a relatively new doctorate of philosophy from the University of Vienna hanging on his wall, and a young wife, Hennie, to support. He was also Jewish!

1930s Vienna was one of those extraordinary places and times to be alive. It was the cultural centre of Europe, rich in cosmopolitan politics, and artists, writers, actors, musicians, and public intellectuals all rushed in for a taste; desperate to somehow squeeze in, to be a part of that indescribable moment, in whatever small way they could.

Popper grew up in the heart of this. His father, Simon, was a lawyer who wrote satirical plays in his spare time, and who built a formidable personal library of over ten thousand books, to which he added his own German translations of Greek and Roman classics. The family were bourgeois, comfortable, and deeply integrated into the vibrant circus around them.

The young Karl Popper had a lot to be thankful for: he was too young to have served and suffered in the trenches of the First World War, and though he had lived through the collapse of the Austro-Hungarian monarchy, from its ashes came a rare type of cultural re-birth, and an intellectual revolution whose impact ran for generations. The peak of this was that collection of scientists and philosophers and logicians and mathematicians who called themselves the Vienna Circle.

The members of the Circle ran as a who’s who of inter-war intellectual life: Moritz Schlick, Hans Hahn, Philipp Frank, Otto Neurath, Olga Hahn-Neurath, Rudolf Carnap, Herbert Feigl, Richard von Mises, Karl Menger, Kurt Gödel, Friedrich Waismann, Felix Kaufmann, Viktor Kraft, Edgar Zilsel… And the work they discussed each week belonged to a similarly impressive showcase of names: Ernst Mach, David Hilbert, Henri Poincaré, Pierre Duhem, Gottlob Frege, Bertrand Russell, Ludwig Wittgenstein, Albert Einstein…

A little late to the game and much too fresh-faced, Popper sat on the periphery of the Circle, never a participant in any of the meetings. But he did build friendships with those on the inside – the first was Otto Neurath whom he bumped into on the grounds of the University of Vienna. And it was Neurath who would later give Popper the title that he would wear as a badge of honour for the rest of his life: “the official opposition”.

The opposition, that is, to the philosophical blinders that he saw upon the Circle, to what he took to be the inferiority of the people within it, to the glorification of idols such as Plato, Hegel, Marx, Freud and Wittgenstein, and particularly to the terrible ideas that they founded and publicised to the world, of which logical positivism was the worst offender. Even then Popper had a keen eye for the long-term dangers of bad ideas. He saw those beautiful Viennese streets a little differently:

I certainly disliked the existing society in Austria, in which there were hunger, poverty, unemployment, and runaway inflation—and currency speculators who managed to profit from it. But I felt worried about [Communism’s] obvious intention to arouse in its followers what seemed to me murderous instincts against the class enemy. I was told this was necessary, and in any case not meant quite so seriously, and that in a revolution only victory was important, since more workers were killed every day under capitalism than would be killed during the whole revolution. I grudgingly accepted that, but I felt I was paying heavily in terms of moral decency.

There was also another social and political movement flooding those cobblestones with promises of revolution and utopia. Each day more and more young men were gathering, holding rallies, marching in strict unison, and singing a new type of patriotic song. In the early moments of this, out for his evening stroll, Popper was stopped by a uniformed teenager holding a large pistol. Popper tried to reason with the boy, who wasn’t there to rob him, but rather to police him – to make sure that he wasn’t up to anything. The young lad looked back at Popper with indifference and said, “What, you want to argue? I don’t argue, I shoot.” On his shoulder was a newly sewn swastika.

Shaken and scared, Popper shut his mouth and walked quickly home. That night, alone in his study, the first seeds of The Open Society and Its Enemies were sown.

A rich and pervasive anti-Semitism meant that Vienna at the time boasted the highest conversion rate of Jews to Christianity in Europe. New possibilities opened up for converted Jews: they were allowed to marry non-Jews, were eligible for new promotions and professional opportunities, and could live largely unmolested lives. Like many of those around them, the Popper family followed suit, assimilating away their religion and their culture.

This was done so seamlessly that, while growing up, the only real involvement that Karl had with Jewish culture was from the outside looking in, as an intellectual analysis. Even so, Jews still made up about ten percent of the city’s population. Then came Nuremberg and the Nuremberg Laws. Hitler was ramping up his race war, tracing the bloodlines of his enemies, and squeezing Jewish life and culture to impossible limits. The panic for assimilation was on!

And it was all horribly misplaced. When the Anschluss happened on March 12th, 1938, things tilted beyond hope. German forces walked into Austria to rapturous applause, and the two countries fused together into a single Nazi state. Those inter-marriages were annulled, Jews were fired from their jobs and arrested on the streets; no claims to previous religious conversion would save anyone. Karl Popper had got out just in time… eighteen of his relatives who stayed behind died in the Holocaust.

Stateless and desperate, Popper twice applied for British citizenship, and was twice rejected because he failed the residency requirements. So he leaned as much as he could on a distant but admiring colleague, Bertrand Russell, whom he had met at a philosophy conference in France in 1935. As formulaic and unimpressive as the letter may have sounded, it was still a recommendation from Russell, and so worth its weight in gold:

“Dr Karl Popper is a man of great ability, whom any university would be fortunate in having on its staff.”… “I learn that he is a candidate for a post at Canterbury University College, Christchurch, New Zealand, and I have no hesitation in warmly recommending him.”

Classified only as a “friendly alien”, Popper was still without a permanent home, and without a citizenship to fall back on. But he was alive and he had a job, as well as a relatively safe harbour for the rest of the war years. Looking back on the carnage unfolding in Europe, Popper felt motivated to begin his own “war effort”. In his own words, New Zealand was “infinitely remote”, “not quite the moon, but after the moon… the farthest place in the world.” Here – three months away from Europe by mail, five weeks away by ocean travel, and beyond the reach of direct air routes – The Open Society and Its Enemies began to take shape.

Popper looked back on his time in New Zealand with a rare and sentimental fondness:

There was no harm in the people: like the British they were decent, friendly, and well disposed… I had the impression that New Zealand was the best-governed country in the world, and the most easily governed… I liked New Zealand very much… and I was ready to stay there for good.

For his wife Hennie, not so much! For her these were “the nightmare years”. Her husband’s meagre salary wasn’t really the issue, nor was the need for her to grow backyard vegetables just to get by. The problem was the steadily growing manuscript before them, and her full-time job as both typist and editor. Karl would routinely pass his handwritten drafts to her, and she would retype the same pages as before, with increasingly minor changes added in the margins. By the time it was finished, she had repeated this task nearly twenty times; for a book that sits close to a thousand pages.

From epistemology and the Vienna Circle, Popper was now stretching his title of “official opposition” in new ways. He was thinking back to those movements that were rampaging across his former home and pushing his family into gas chambers, as well as to newly encountered closed and oppressive societies, such as that of the Maori in New Zealand. Popper was trying to tear down the fabric of Western political tradition, while exposing what lay at the heart of all despotism, all repression, all totalitarianism.

The Circle, Popper showed, had lost its way by trying to find certainty in science. It was a simple, innocuous, even intuitive-sounding mistake, but one that flowed quickly downstream with disproportionate momentum and harm. Now he was warning against other commonly-held ideas with the same dangerous reach, such as historicism (the notion that history is determined by certain laws, and so the future can be accurately predicted by understanding those laws), and even banal-sounding truisms, such as the idea that politics is about electing the best leaders and policies.

It was just this, however – seeking the best leaders and the best policies – that led Plato away from democracy (where the uninformed and easily influenced rabble were in charge) and into an intellectual dictatorship, run in perpetuity by The Best. It also allowed the Caesars to rule over Rome through strength and violence, it gave Constantine and those after him the religious legitimacy to stay in power, it was the reference point for every monarch and aristocrat looking to further silence the unhappy masses, and it was why Karl Marx decided against elections altogether…

The Open Society looked different, and for some, a lot less grand. The place for the great men of history, shining a light for the ordinary people to follow, was gone. In their place were those ordinary people, the unwashed and uninformed crowds making small, endless, and seemingly parochial choices about their lives, hoping to “minimise avoidable suffering.” Gone too were the utopias and the revolutions, replaced by something much less exciting: “piecemeal social engineering”. The Open Society was a world of ordinary people, making ordinary choices about their ordinary lives, embracing criticism and their own fallibility.

Stabbing at so many deeply held convictions and at so many still-revered thinkers, The Open Society and Its Enemies was nearly as difficult to publish as it was to write. In Popper’s own words: “it will be a colossal job for everybody concerned. It was a colossal job [writing it] here and I was (and am) very ill while doing it.”

To compound things, this was 1944: the war was still raging, the manuscript was long and dense, and with his previous book not yet translated into English, no one beyond a few small academic corners knew the name Karl Popper. Rejection after rejection flooded in, and the publishing task was handed over to an old friend back in England, Ernst Gombrich.

In the meantime, Karl and Hennie were falling out of love with New Zealand, and by 1945, almost as soon as the last gun in Europe fell silent, the Poppers were heading back towards Europe. Another soon-to-be-famous friend, Friedrich Hayek, had managed to pull a few strings at the London School of Economics, and despite a series of bureaucratic frustrations – “Our departure problems are appalling” – husband and wife were soon sailing towards a new job, naturalisation, and British citizenship.

In a letter to Gombrich, Karl Popper spoke about the journey before them:

Dear Ernst, This time we are really off, I think. We have been allotted berths—in two different four-berth cabins, though—on the M.V. “New Zealand Star.”… It is a frighter [sic], Blue Star Line, carrying normally 12 passengers, and at present (in the same cabins) 30. We are not terribly pleased to pay 320 pounds for the pleasure of spending 5 or 6 very rough weeks in the company of strangers… The passage will be very rough since we sail via Cape Horn—perhaps the roughest spot in the Seven Seas. Our corpses are expected to arrive, by the New Zealand Star, on January 8th or thereabouts. Please receive them kindly.

When they finally landed in England – seasick, miserable, dirty – and staggered gingerly to dry land, they were greeted by a beaming Gombrich, waving excitedly towards them. In his hand, high above his head, was the first edition of The Open Society and Its Enemies.

Popper settled quickly into British life and a career at the London School of Economics, completing his exile where he had always wanted to start it. And in his eyes, the exile really was still active. Alan Musgrave, Popper’s research assistant from 1963 to 1965, said that Popper, despite realising the magnitude and impact of his work, remained “also very bitter” about his life. After the war, Popper was once asked if he would ever consider returning to those once vibrant streets of Vienna, to reminisce, and to see what had changed. He shot back bluntly, “No, never.”

Even the allure of a cushy, full-time professorship in Austria wouldn’t do it. It was a past that was better left where it was. When he did look back, though, on the horrors of that time, it was through the analytic lens of The Open Society, and that focus on the importance of every individual. The simple-sounding error that the Nazis made was collectivism. It was the same error (different only in magnitude) that was still being made across Africa, the Middle East, and Asia, and by the Maori on the sleepy shores of New Zealand. There were no benign cousins, no reasonable variants. Wherever this mistake happened, the outcome would inexorably be terror and oppression.

The word that Karl Popper used to describe all such societies?

Tribal!

 

*** The Popperian Podcast #7 – Oseni Taiwo Afisi – ‘Karl Popper and Africa’ (libsyn.com)

Karl Popper vs. Thomas Kuhn

In conversation with Steve Fuller

 

In 1935 a former peasant farmer, Trofim Lysenko, was out to change the face of Soviet agriculture. Decades of clawing his way towards political power were paying off, and there he finally stood, delivering a speech to a full session of the Politburo. And he had a theory about how to grow a higher-yielding wheat.

The theory originally belonged to the French zoologist Jean-Baptiste Lamarck, who believed that the traits acquired by one generation of an organism could be passed on and inherited by the next. In the hands of Lysenko – as he looked out from behind that lectern – it was transformed into Marxist Genetics: a theory that not only promised to radically transform the farming landscape, but which had the added benefit of being ideologically attractive to his fellow communists (seen as a way to break down peasant opposition by directly engaging the peasants in an “agricultural revolution”).

The years of terror and purges and gulags were building momentum, and Lysenko seized the opportunity. He loudly attacked his critics in the audience, and across the Soviet Union, as enemies of the revolution and of Marxism-Leninism. As he wound down, a single voice called back at him from the hall: “Bravo, Comrade Lysenko, Bravo.” It was Stalin!

By 1940 Lysenko was the director of the Institute of Genetics at the Soviet Academy of Sciences, and his detractors were either in prison or dead. The data was increasingly catching up with Lysenko's extravagant claims, but with his model by then being taught as “the only correct theory”, nearly every geneticist and biologist in the country was playing an understandable game of duplicity: falling over themselves to offer public support, while denouncing anyone not doing so with enough vigour.

Through failure after failure, Lysenkoism continued as state policy, with millions dying in famines across the Soviet Union. Even then it was still exported to Maoist China as a success story, where it contributed to the policies of the Great Leap Forward and another forty million deaths.

Just how you see and understand the mistakes here will also define your sympathies in the greatest ever split in the philosophy of science!

For Thomas Kuhn, Lysenko’s blunder involved believing in an immature theory, one that wasn’t established within an existing paradigm. For Karl Popper, the problem was Lysenko’s inability to acknowledge the errors that were happening all around him, and to accept that his theory had been falsified. At first glance it may not seem like that rich a debate: both men were, after all, in agreement that Lysenko was wrong; they just had different reasons for thinking it. But this was no minor disagreement in the parochial corners of academia. The Popper vs. Kuhn debate shook the ground of epistemology, as well as popular imagination and public attention. It was nothing less than “the struggle for the soul of science”.

The two men couldn’t have been more different. Kuhn started his career as a physicist; his transition into philosophy was forced upon him by colleagues who considered his research “too philosophical”. Despite his reputation as a radical theorist, Kuhn preferred to keep his own counsel, avoiding comment on the nature of contemporary science as well as on the political climate unfolding around him. As Steve Fuller notes, ask anyone about Kuhn and “usually the response is positive, even enthusiastic”.

Popper, by contrast, began his academic life as a child psychologist, and came of age thinking about social progress and the role of science within it. Developing a reputation as a “grumpy autocrat”, he “thundered against virtually every dominant tendency in the physical, biological and social sciences.” A man who “rarely received the recognition he thought he deserved – and never tired of reminding everyone of it”… odd behaviour for someone who received a knighthood long before his retirement years.

Reputations aside, timing was also a factor here. Popper’s seminal work, The Logic of Scientific Discovery, was first published in 1934, in German. Never one to avoid the spotlight when offered, Popper spoke and lectured and argued and wrote and published constant addenda and clarifications as the decades went by. So, known more by “reputation than by readership”, when The Logic of Scientific Discovery was finally translated into English in 1959, the public response was muted, even “bemused”. Sure, people might not have already leafed through its pages… but in another way, they had!

Everything that Popper wished for himself fell neatly into Kuhn’s lap. The Structure of Scientific Revolutions is an expanded version of an encyclopaedia entry: sparsely referenced and written in non-technical language, its thirteen short chapters explain in simple detail how science changes and what its phases are. Ending up at just a little over 200 pages, it is also uncharacteristically short. And yet, again in the words of Fuller, it “was the most influential book on the nature of science in the second half of the 20th century – and arguably, the entire 20th century.”

For Kuhn, science looks a little friendlier and a little less grand than Popper would have it. Everything begins with a paradigm: a slice of research considered so outstanding that it is adopted by the broader scientific community, providing a blueprint for the research that follows. Most scientists follow the pattern and the prevailing standards, tinkering in the lab to improve the technical details of the paradigm. This behaviour is what Kuhn calls normal science. Not trying to break new ground, not trying to change the world, but simply solving minor puzzles within a larger, unquestioned, theory.

But soon, something else bubbles to the surface. Not all of those puzzles are solved, and over time they accumulate, one upon the other, until a point of crisis is reached. It is only here, as a last desperate and unwelcome moment in the life cycle of science, that things fracture: people begin to ask difficult questions about the future direction of their research, and an irreversible revolution occurs. The changeover happens quickly, a new paradigm is agreed upon, and everyone happily moves back to the practice of normal science.

Kuhn’s understanding comes from a deep look into the history of science, and how it truly plays out on the ground. Popper thought differently, and wondered why on earth someone would talk this way – why someone would try to extract prescriptive lessons from the history of science, and encourage young minds to continue like this, when, in his view, very few people had ever been any good at actually doing science.

For Popper the “core scientific ethic” was falsifiability – all knowledge, at all times, should be exposed to constant and deliberate criticism. It comes down to the problem of epistemology – that perpetual question of how knowledge forms. What all later philosophers – including Kuhn – owed to Popper was his solution to this problem. Famously, it was that of conjecture and refutation: we guess at truth, and then criticise those guesses. The theories that survive are still never accepted as true, only not yet rejected as false. And if this is how we make progress, then the same must also hold for science.
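The logical asymmetry that Popper leaned on here can be put formally. As a minimal sketch (the notation is illustrative, not Popper’s own), let T be a theory and O an observation that T predicts:

\[
(T \rightarrow O) \wedge \neg O \;\models\; \neg T
\]
\[
(T \rightarrow O) \wedge O \;\not\models\; T
\]

The first line is modus tollens: a failed prediction genuinely refutes the theory. The second is the fallacy of affirming the consequent: a successful prediction never proves the theory – which is why surviving theories are only ever “not yet rejected as false”.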

So, for Popper, someone is being scientific only when they are aggressively testing the limits of existing science. There is no place in this picture for protective paradigms, and certainly none for the type of work that happens within them: trying to prove a theory correct (a harmful and impossible exercise) and trying to shield it from falsification.

Kuhn’s response was simple: he could see the value in what Popper was saying, but as he scoured the annals of history, he found very little evidence of falsifiability working away as a key ethic in science. Popper’s response to this was also simple: exactly!

But Kuhn did have a point here; he noticed something that many others would later notice too – something that Popper seemed a lot less tuned-in to. “Criticism is productive”, writes Steve Fuller, “but only under certain conditions”. It is easy to see why, in the very early stages of research, harsh and piercing criticism might not be all that useful. Better to wait, if only a little, until the person in question has a clearer understanding of their own theory, and of the predictions that it makes. This is the line that Imre Lakatos tried to walk between Popper and Kuhn, bridging the difference between them in some ways.

From early disagreements, the fight here got personal, fast. There were two versions of history according to Popper: one in the legacy of Socrates, and the other in the legacy of Plato. The Socratic version imagines progress struggling forward through a dialectic of trial and error and failure, followed by more trial and error – hopefully bringing us closer to where we want to be. The Platonic version is indifferent; it says that no matter what we do, the outcome will always be the same. And this is where Popper – already known for his short temper – starts to get really angry.

To be mistaken is the common state of things; we are always wrong in some way. But for Popper this Platonic vision was much more than that – it was historicism (the idea that history is completely determined by certain laws, and so the future can be accurately predicted by understanding those laws). Refusing to admit error is different to merely making errors: it means that we never change course or accept our faults. We continue as we are, because it is all out of our control anyway. Popper saw the analogues of this type of thinking all around him, in the worst crimes and most oppressive tyrannies that the world had to offer.

Historicism was not just wrong, but immoral – and insofar as Popper and his followers saw historicism hugged tight within the concept of normal science, Kuhn was also immoral.

As philosophical disagreements go, this one was excruciatingly personal. After reading through Kuhn’s arguments for the first time, Paul Feyerabend – then still a key disciple of Popper – called them nothing more than “ideology covered up as history”. Another person deep within Popper’s circle, John Watkins, put it like this: “Kuhn sees the scientific community on the analogy of a religious community and sees science as the scientist’s religion.”

It looks the same from different angles, and comes down to a question of emphasis and language. The word Kuhn used to describe that pivotal moment in science, when everything was on the cliff edge of great change, was crisis. And it seemed much more than an aesthetic choice. Kuhn didn’t have a taste for upheaval and insecurity, nor for debate and criticism. So when these things happened in science, it was also unpleasant – a moment of “confusion and despair… a spiritual catastrophe”. That such catastrophes were also when science jumped forward, and when rapid progress happened, was beside the point. It certainly wasn’t enough to make him embrace a Popperian type of permanent revolution. Instead Kuhn made an unimpressive choice, and elevated normal science over extraordinary science.

Life inside a Kuhnian paradigm stretches the religiosity a little further still. Here the community of researchers creates science in its own image: setting the standards, recruiting people who will continue with those standards, and then hovering in divine judgement over how well they go about doing this. It is a mini-Vatican, a state unto itself where the only safeguards are those which it creates. If you don’t like the religious analogy, then Fuller has a couple of others that fit: a “royal dynasty” or “the Mafia”. It is a horribly circular world, where no-one is ever accountable to anything, or anyone, outside of themselves: “a paradigm is simply an irrefutable theory that becomes the basis for an irreversible policy.”

It plays out in education as well, where Popper’s hope that students “learn to live with an unrelieved sense of insecurity” might perhaps seem a little questionable – but not when compared to Kuhn, who “reduced science education to an indoctrination strategy.” It also shows up in the details of what Kuhn considered to be his trump card: his own scholarship on the history of science. Across those famous pages are a few glaring omissions of fact. Kuhn chooses, for example, to end his study of chemistry way back in the 1850s, and he stops talking about physics after the 1920s. The reason for this? Fuller has an idea:

Given Kuhn’s exclusive interest in science as pure enquiry, it is reasonable to conclude that he believed that after those dates, those disciplines ceased to be relevant to his model.

There is still a purpose behind the madness here for Kuhn. He wanted something for science, something Popper also wanted: independence. And in a way, normal science delivered it! Isolated within a self-enclosed paradigm, the field was protected from swings in policy and changes of government. Here scientists could get down to perfecting their craft without intrusions from the chaos and whims of the outside world.

This is certainly independence of a kind, but not the kind that Popper thought was important. Those high walls that Kuhn built around science unquestionably cleared a space where it could have autonomy, but it was only autonomy for the larger scientific community. What about the individual scientist? What about his ability to make his own decisions, even if that meant deciding that everyone else was wrong? Popper looked back at his colleague in obvious disbelief over this point: how could he not see the hypocrisy, and the breakdown in his own values? Soon Popper would start calling Kuhn’s version of independence a “heads down approach” to science – a pejorative description that Kuhn would have seen as a compliment.

More than just the damage it does to itself, a heads down approach to science comes at an unpleasant social and moral cost. To flesh this out, Fuller draws an interesting analogy from the recent history of philosophy. Martin Heidegger was a hugely important figure in 20th century thought, whose work on phenomenology, hermeneutics, and existentialism – particularly his book Being and Time – is still taught in nearly every half-decent undergraduate philosophy course. He was also an unreformed, and unrepentant, Nazi.

It comes down to a question of social responsibility and individual integrity. Kuhn’s moment of reckoning was the Cold War, and how his commitment to doing normal science played out over those years. At few prior times in history had the work of science and scientists reached such prominence, or such national recognition. Yet however you might view the nuclear standoff and those terrifying decades, the important standard for most people was, and still is, that you act on, and speak about, what you believe to be true. Courage matters!

The people in Popper’s circle can still be criticised for being wrong, but they did just that – they took risks, they put their careers on the line in order to speak their minds. They upset a lot of people in positions of power: Imre Lakatos against the military industrial complex, Paul Feyerabend against government funding in science, and Popper himself against the Vietnam War.

Happy with his life in the lab, Kuhn did the opposite and kept conspicuously quiet… about nearly everything! He had his paradigm in place, his days were kept busy by normal science, and so he let the world outside float uncritically by – even as his book, The Structure of Scientific Revolutions, was being used to justify the silencing of dissent and the control of science by government and the military. And this was more than mere cowardice overcoming him: Kuhn saw such repression as a positive – a stabiliser behind which normal science could continue.

More than just a means to an end, it all sounds incredibly totalitarian, from top to bottom. And if nothing else, Kuhn surely had a responsibility to address the implications of his theory, especially as they began to manifest in the politics of nuclear confrontation. Feyerabend sums it up well:

The recipe [for a successful science], according to [Kuhn and his followers], is to restrict criticism, to reduce the number of comprehensive theories to one, and to create a normal science that has only this one theory as its paradigm. Students must be prevented from speculating along different lines and the more restless colleagues must be made to conform and ‘to do serious work’… Is it his intention to provide a historico-scientific justification for the ever growing need to identify with some group?

Over the years, Kuhn declined countless opportunities to speak publicly – to correct the record, to offer any sort of opinion – as well as opportunities to debate his academic colleagues; leading Fuller to ask the necessary question: “Is Thomas Kuhn the American Heidegger?” But throughout his silence, there was one request he just couldn’t refuse, one instance that bucked the trend. In July 1965 Kuhn, then aged 43, turned up at Bedford College, University of London, for the International Colloquium in the Philosophy of Science. He had turned up to debate Karl Popper!

Organised by Imre Lakatos, it was delicately staged: the young vs. the old (Popper was 63 at the time), the authoritarian vs. the libertarian, the shy vs. the fiery. And yet, somehow, the rare importance of what they had before them was lost to the egos of the men involved. It began with Popper refusing to accept “equal billing with the upstart Kuhn.” After too much late-night anger, he decided instead to chair the debate and leave the actual fighting to Lakatos and Feyerabend. Lakatos, however, saw this as an opportunity not to argue by proxy for his mentor, but to argue for himself and his middle-ground philosophy (between the extremes of Popper and Kuhn). And Feyerabend – “Popper’s most radical follower” – continued down that path, deciding to push his own branded philosophy (epistemological anarchism) to an audience that had come to listen to something else.

As it turned out, this came to little anyway, because both men, inexplicably, failed to finish their papers in time for the debate. Again things had to shift, and the hard Popperian lifting was assigned to Jagdish Hattiangadi. At which point it was Kuhn’s turn: his own sensibilities strained too far, and now flaring with white anger, he refused outright to share a stage with the younger man. So, in near-comic circumstances, Hattiangadi’s advisor, John Watkins, literally picked up his student’s notes and delivered them on stage in his stead.

And that was it! The two greatest philosophers of science brought together for the most significant event ever staged – “a landmark in 20th century philosophy” – and it all ended in a soft and ignominious fizzle. In the aftermath you can almost see the figure of Thomas Kuhn on his slow and disappointing boat ride back to America: staring out over the railings and into the sunset, thinking aloud and struggling to make sense of what had just happened – the infighting, the childishness, the tardiness, the self-admiration, the lack of substance… and in that moment making a firm promise to himself never, ever, to do it again!

 

*** The Popperian Podcast #6 – Steve Fuller – ‘Popper vs. Kuhn’ (libsyn.com)

Diagnosing Pseudoscience: Why the Demarcation Problem Matters

In conversation with Maarten Boudry

 

Average-sized, middle-income, and in a mundane corner of the world, the fictional country of Turania is unremarkable in nearly every way. The dominant ethnic group – the Turanians – have a familiar story that they like to tell about themselves, as well as about their neighbours, the Urartians. It goes like this: from their lowliest citizen all the way up to their king, the Turanians are a noble and generous people; kind, but not to a fault. They are also proud. Which is where Urartia comes into the picture… because, as the story goes, the Urartians are none of these things.

Instead the Urartians are a treacherous bunch, invading, enslaving and massacring the Turanians at every opportunity. It’s in their nature, somewhere deep in their bones. Wars have been fought between the two countries, and at different moments both sides have occupied the territory of the other. Of course, the fighting is always instigated by the Urartians, with the Turanians only ever killing out of self-defence, and struggling at every turn to halt the Urartians’ natural instinct to commit war crimes.

Inside Turanian territory lives a small community of ethnic Urartians. Not too far away, just across the border, is a similar situation: a small minority of ethnic Turanians living within Urartia and calling it home. The Turanians are tolerant and welcoming of their guests, but it’s not easy. The guests behave poorly, leeching off the benefits of society while refusing to assimilate. Instead they try to undermine the Turanian state, conspiring with their distant government, trying to topple Turania from within. On the other side of the border things are different. The Turanian minority live in constant fear, attacked each day simply for who they are; besieged by a state that is trying to ethnically cleanse them.

This story is told and retold. It is taught in Turanian schools, through patriotic events and holidays, and through popular culture. Even the historians are on board, just in slightly different tones. They tend to steer away from certain aspects of the story, largely ignoring the quasi-religious elements and the origin myths of the Turanian kings. Nor do they spend any time on its more outlandish and conspiratorial aspects. They are professionals after all, and so they talk about actual history: the dates of the many wars, the nature of the fighting, and the people involved. They also do something else…

Held tight within the broader culture, the Turanian university system similarly promotes Turanian nationalism. Government and private funding rely upon this being done, and so do academic careers. Anyone showing insufficient zeal is quickly overlooked for promotions and grants. So as these historians write about their country’s history, they do so with an emphasis towards the national story. All the dates are real and all the events actually happened, but everything is shaded in one direction. They still talk about the benevolence of the Turanian kings compared to their Urartian counterparts, and the suffering of Turanians at the hands of their neighbours; all the time downplaying the crimes of their side while emphasising those of the other.

It doesn’t take long for life and routine to take over, and soon these historians don’t even realise that they are twisting evidence in this way – the whole country bound together by the embellishment, some people simply entertaining more sophisticated versions of it. Before long everyone considers the story to be not only true, but also completely uncontroversial. So whenever foreign media or foreign diplomats offer a different understanding, it is reflexively dismissed as malicious propaganda, likely funded by the Urartian government. And of course, any Turanian who might dare to express public doubt pays a fast and painful price; not necessarily in violence, but always in ostracism and social outrage.

Then along comes a young Turanian citizen, someone brought up on a diet of non-fanatical nationalism; but a nationalist nonetheless. She is taught the story by her parents and family and friends; at each level of her schooling her teachers have reinforced it; and every time she turns on the TV or reads a book or opens a newspaper, it’s there. In short, she holds several key beliefs about history which are glaringly false.

Then one day, in a less-policed part of town, she stumbles into a dissident bookstore. On the shelves she finds a translated book by a foreign – and internationally reputable – historian. As she reads through the pages she discovers an incredibly convincing counter-narrative to everything that she has been taught. At this point, Maarten Boudry has an important question: what does she do next? What is the rational next step for her? Does she do the Popperian thing and consider her beliefs falsified, and so abandon them? Or, perhaps, “Still, is it possible for her to rationally affirm the Standard Story?”

Questions of this kind are old in the history of philosophy, and only slightly less old in the history of the philosophy of science. They burn down into those seemingly perpetual problems of knowledge, truth, and deception. And once started in this way, it is natural to soon rephrase things, and to ask: What is science? How do we distinguish between science and non-science? Otherwise known as the Demarcation Problem. Popper’s famous answer goes like this:

He was not interested, as some people were (such as the logical positivists), in drawing a line between what is meaningful and what is not, but only in diagnosing that which makes science special. And what matters is falsifiability! If, for example, you have a theory in front of you, and you want to know on which side of the demarcation it falls, you should think not about the evidence that supports it, but rather about how it might be proven wrong. You need to create a category of refutation: some kind of possible observation that, if witnessed, would cause you to abandon the theory in an instant. If you cannot do this, then what you have is not a science.

Consider two great minds of the era, two men who emerged at around the same time, and who dominated academic fascination during Popper’s earlier life: Albert Einstein and Sigmund Freud. Freud’s work focused on the individual and the psyche – Einstein’s on his general theory of relativity. And both made clear predictions: the former that childhood experiences have a huge and continuing impact on our adult selves; the latter about how light travels through space. And while most people were caught up in adulation for both of these promising new sciences, Karl Popper noticed something unsettling about Freudian psychoanalysis, something that his colleagues strangely saw as a positive aspect of the theory:

I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory. Whatever happened always confirmed it. Thus its truth appeared manifest.

Imagine it like this: a young, angry man walks into Freud’s clinic, and explains the personal issues he is having with rage and aggression. Freud naturally explains to the man that all his negative emotions are due to the behaviour of his parents while he was still a child. This might seem reasonable at first, but it is a theory that bends to the facts. If his parents regularly beat him, then he is living out that violence today. The same could be said if they were kind to him but fought with each other. Or if they were distant and unloving – this causing anger within their son due to the lack of attention and affection. And if they never fought, always denounced violence, and smothered their son with love, then this too produced feelings of anger, because he was never allowed to express violence as a child, and so is now rebelling against his parents’ kindness. Once the psychological theory is there, any and all types of childhood experience will confirm it; or as Popper put it, psychoanalysis “resembled astrology rather than astronomy.”

Einstein was doing something very different. His general theory of relativity made the clear prediction that light would be gravitationally attracted by large cosmic bodies – such as stars and planets – just as material bodies are. This was hard to test, though, because it needed to be done under the perfect alignment of a solar eclipse. But Einstein made the prediction nonetheless. He wasn’t looking backwards for confirming data, but looking forward and specifying his own potential refutation. A refutation that most people expected to happen, even Popper himself: “few of us at the time would have said that we believed in the truth of Einstein’s theory of gravitation.”

Einstein was happily exposing himself to risk and failure, because if, under such an eclipse, light didn’t bend the way his theory predicted, then the theory would be disproven in that moment. He would be wrong, the world would know it, and general relativity would be false. Everybody waited. Then, in 1919, when the conditions were finally appropriate, a solar eclipse occurred, and Einstein’s prediction was witnessed. Rather than walking away from general relativity, as much of the scientific community had expected to do, they were forced to embrace it; it was the previous theory, Newtonian gravity, that was walked away from – debunked in an instant by a single observation.

The difference in behaviour was, for Popper, the difference between a pseudoscience (Freud) and a real science (Einstein). If you look hard enough, “it is easy to obtain confirmations, or verifications, for nearly every theory”, but if those confirmations are not the result of “risky predictions” then they aren’t worth very much. Here are some of Popper’s core conclusions from these two historical episodes:

* Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is. 

* A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

* Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

* One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.

And for some decades this is where the battle rested. Popper had drawn the demarcation in a way that seemed to make sense, and though the debates still hummed along, and new solutions would come and go, no one dared to think that perhaps the distinction just wasn’t important: that the Demarcation Problem was either “misguided” or “intractable”. Then, slowly, they did!

The first pushback came by way of Pierre Duhem and W.V.O. Quine, whose holism – Duhem’s formulated decades before Popper wrote, and later revived by Quine – undercut Popper’s premature declaration of victory. Science as they saw it was much too elastic to sit neatly on one side of the falsifiability line, with all else falling to the other. It just wasn’t as unified an enterprise as all that; it was instead a broader and ever-changing landscape of activities, one that regularly connected with the non-scientific world. Most prominent though was Larry Laudan, who, in 1983, declared the issue dead on arrival, nothing more than “hollow phrases which do only emotive work for us”: a pseudo-problem! Every attempt to draw a line between science and non-science had, according to Laudan, failed, and so the whole game should be abandoned, with no hope of ever finding a solution.

And there really are some headaches here for Popper. What he is describing involves not just a particular theory, but a relationship: the connection between a theory and its predictions (observation statements). That connection is never straightforward, and always carries the taste of ambiguity. On one hand (and less importantly) it assaults the sensibilities of most scientists, because it implies that any claim – no matter how detached and ridiculous it seems – can be called scientific so long as it proposes an observation that would unequivocally prove it false.

On the other hand, scientific theories generally are not so friendly to us. They don’t neatly connect with reality in the way that the famous “all swans are white” example does. In that case the falsifying observation pretty much writes itself: a non-white swan, such as the black swans that exist in Australia. Instead most scientific theories have degrees-upon-degrees of complexity to them, connected to reality only through long chains of background theory, boundary conditions, and subordinate assumptions. A scientific theory is always several steps removed from its observations. Or as Imre Lakatos put it: “It is not that we propose a theory and Nature may shout NO; rather, we propose a maze of theories, and Nature may shout INCONSISTENT.”

Imagine you have a theory and you want to do the Popperian thing – you want to ensure that it is scientific – so you specify the observations that would falsify it if they ever occurred. And then you watch one of those observations transpire before your eyes. What do you do next? Yes, you might scrap the theory, or you might just as reasonably blame one of the countless other conditions, theories and assumptions that led to you making the prediction. You can never know for certain just what is being falsified, only that something is, and so you can always rescue your theory to fight another day. For many people, Popper’s falsifiability criterion is thus at once much too lenient and much too strict.
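
Put schematically – this is the standard rendering of the Duhem–Quine point, offered as an illustration rather than anything from Popper’s own pages – if a theory T together with auxiliary assumptions A1, …, An entails an observation O, then seeing not-O condemns only the conjunction as a whole:

\[
(T \wedge A_1 \wedge \cdots \wedge A_n) \vdash O, \qquad \neg O \;\Rightarrow\; \neg\,(T \wedge A_1 \wedge \cdots \wedge A_n)
\]

Modus tollens strikes the whole bundle; nothing in the logic says whether to abandon T or quietly revise one of the auxiliaries – which is exactly the escape route described above.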

Maarten Boudry agrees that Popper’s demarcation isn’t drawn in the appropriate place, but he also thinks that Laudan is badly mistaken about the impossibility of correcting the error, and of re-drawing it somewhere better. It starts from an observation about us: despite many modern-day philosophers following Laudan’s thinking, and avoiding the Demarcation Problem altogether as an endless sinkhole of intellectual activity, something odd has also happened… “the rest of society somehow failed to take notice.”

Without understanding all the reference points or the history behind it, everywhere you turn people are losing sleep and energy trying to revive Popper’s game. Our judicial systems refuse arguments and evidence of certain kinds, our school curricula reject the encroachment of theories such as creationism, public health officials run campaigns warning against alternative therapies… the list goes on. Indeed many countries have purpose-built, government-funded, professional organisations whose job it is to fight these battles wherever they appear. And what they are fighting is pseudoscience!

Step back to look at this phenomenon for a moment, and it soon hits you just what is going on. Sure, there is something about pseudoscience that most people quickly recognise as harmful, but more importantly there is something about pseudoscience that most people simply recognise. They may not have explicitly solved the Demarcation Problem, but they do seem to intuitively know what a pretender-science is when they see it, what it looks like, what it sounds like, and how it behaves.

Here Boudry plants his flag, and begins disentangling the problem in reverse: “Rather than demarcating science and non-science on first principles, we should start from the common usage of the term ‘pseudoscience’, in particular the real-life doctrines and activities that are most often designated as such.” Starting with a comprehensive explanation of what science is might just be too much of a leap and too tricky a definition. So instead we begin with all those things that science excludes – phrenology, graphology, creationism, homeopathy, astrology… – and the interesting thought that if most people recognise these things as non-scientific, then perhaps everyone is, tacitly, sharing the same demarcation values; it’s just that those values haven’t been spoken aloud yet.

And when you deconstruct pseudosciences a few things become clear and common:

1. They recognise, and agree with, the authority that real science holds within society,

2. So they try to mimic science in convincing ways,

3. They build evidence and seek broad support,

4. They avoid counter-evidence, and build immunisation mechanisms against it into the details of their theories,

5. There is an asymmetry between how they deal with evidence and counter-evidence,

6. They often appeal to antiquity, and claim that the longevity of their theory implies that it must be true,

7. Their experiments are often unrepeatable,

8. They use hyper-technical and obscurantist language,

9. They avoid peer review…

This is not so much a checklist as an accumulation of evidence. If what you have before you fits a number of these categories, then what you likely have is a pseudoscience. Perhaps we don’t need a clear-cut line to still have a line! Popper would have hated all of this, and dismissed it as psychologism (the belief that problems in epistemology can be solved through psychological study). It’s a charge that Boudry welcomes: “psychologism is exactly what allows us to escape from Popper’s logicist straitjacket.”
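
As a toy sketch only – the marker names, the equal weighting, and the 0.5 threshold here are invented for illustration, and are not drawn from Boudry’s work – the accumulation-of-evidence idea can be written as a graded score rather than a hard boundary:

```python
# A toy sketch of demarcation as accumulated evidence rather than a hard line.
# Marker names, equal weights, and the threshold are illustrative assumptions.

PSEUDOSCIENCE_MARKERS = [
    "mimics the trappings of science",
    "immunised against counter-evidence",
    "asymmetric handling of evidence",
    "appeals to antiquity",
    "unrepeatable experiments",
    "obscurantist language",
    "avoids peer review",
]

def pseudoscience_score(observed: set) -> float:
    """Fraction of the markers that a doctrine exhibits, between 0.0 and 1.0."""
    hits = sum(1 for marker in PSEUDOSCIENCE_MARKERS if marker in observed)
    return hits / len(PSEUDOSCIENCE_MARKERS)

def looks_like_pseudoscience(observed: set, threshold: float = 0.5) -> bool:
    # No single marker is decisive; it is the accumulation that tips the verdict.
    return pseudoscience_score(observed) >= threshold

# A doctrine exhibiting four of the seven markers crosses the threshold.
doctrine = {
    "mimics the trappings of science",
    "immunised against counter-evidence",
    "appeals to antiquity",
    "avoids peer review",
}
print(looks_like_pseudoscience(doctrine))  # True: 4/7 ≈ 0.57
```

The shape of the judgement is the point: the verdict is graded, cumulative, and revisable – an accumulation of evidence, not a single logical test.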

If we follow this alternative approach to the Demarcation Problem, then there is still a lingering issue that needs some sort of explanation. If Popper was wrong, then why has his falsificationism endured and remained popular amongst working scientists? Boudry puts it like this: despite Popper being wrong, his philosophy of science still got one very important thing correct – a matter of emphasis about how good scientists behave. They take risks! They “stick out their necks”! After all, “they can afford to do so”, because they are seeking only truth, and not the validation of their theories. For them, mistakes and errors are things to be found and corrected, as quickly and loudly as possible…

So let’s go back to that mundane corner of the world, and our young Turanian citizen who has just wandered into a dissident bookstore. Leafing through a convincing and alternative version of her nation’s history, she is in the process of discovering that everything she believes to be true is actually just a pseudoscience, and that her years of uncritical belief have made her a pseudoscientist, of a kind. She has that one pressing question before her: what to do next? And at what cost?

It wouldn’t be an easy moment. First off, she doesn’t have the expertise to properly judge what she is reading, and acquiring this background knowledge would take an enormous amount of time and energy. But even if she somehow did – returning to the bookstore day after day, reading more and more conflicting accounts of her nation’s story – what then? In patriotic Turania disbelief has negative consequences. Her fellow citizens are hypervigilant, always on the lookout for waning loyalties and the subtle indications of dissent. So to avoid punishment this newfound knowledge will have to be kept a secret. She will have to lie, she will have to deceive, and she will have to act, all the time carrying a different kind of burden; unbearably harmful in its own way.

To lie convincingly is a difficult game. Our young Turanian will need to keep up appearances, simulating the genuine patriotism of those around her; matching their level of enthusiasm, while also being careful not to overdo things and draw suspicion that way. And she will need to keep this up forever, never letting her real feelings affect her behaviour. Living with constant fear, anxiety and paranoia, the psychological drain will be tremendous. And then there is the moral burden of what her dishonesty is perpetuating – all those possible friends, colleagues, and family members who might also be having doubts, but are silenced by her loud and proud nationalism. How long before that guilty conscience bubbles up and exposes her in some way?

We all have these nice thoughts about ourselves as rational beings, seeking truth, trying to better align facts and theories, and valuing cognitive accuracy. What we don’t like to spend much time on is just what happens when these nice thoughts separate from our real-world interests. Maybe sometimes, maybe often, “people are better served by falsehood than by the truth”.

The same can be said for the Turanian state and its national story. If what it values is loyalty and commitment, then truth just isn’t worth very much. If every citizen could check on the story they are being told, and easily find it to be accurate, then the story wouldn’t help to discriminate between loyal patriots and disloyal dissidents. For the story to function as a sign of commitment (as opposed to a sign that it is true), it needs to contain falsehoods, it needs to stretch people’s credulity. It needs to be a pseudoscience…

She is still there in that bookshop, our young Turanian citizen. Turning the pages of an uncomfortable truth. With terror, the full ramifications of what she is reading are dawning upon her. Until this moment she considered herself unwaveringly rational, always committed to truth. But until this moment rationality had, always, been to her advantage in life. This time, however, there will be no praise and encouragement – this time it comes at a huge personal cost. That same question hangs painfully in the air: what does she do next?

 

*** The Popperian Podcast #5 – Maarten Boudry – ‘Diagnosing Pseudoscience - Why the Demarcation Problem Matters’ (libsyn.com)

Pre-echoes of Popper: Xenophanes and Parmenides

In conversation with Robin Attfield

 

Karl Popper was a man famous for his enemies as well as his problems, yet it was a local squirrel that put the most fear into his later years.

A notorious and paranoid over-editor of his work, Popper would often sit on his finished books, waiting sometimes decades before committing them to publication, correcting ever slighter details of prose and reasoning. A change of sorts came over him as he aged, a newfound nostalgia for the classics. His last book was a series of essays and thoughts on pre-Socratic philosophy. And just as with all his other works, The World of Parmenides had to wait for its author’s patience to slowly wear thin.

Placed inside a simple yellow folder on a windowsill above Popper’s writing desk, the manuscript swelled at one point to nearly 1200 pages. It sat there until a lazy day in early spring, when the elderly philosopher and his wife returned home and noticed a large squirrel balancing over the open window. In search of nesting materials it had pilfered its way throughout the house, and was now proudly holding the yellow folder in its mouth. Clearly not expecting disaster of this kind, Popper hadn’t made any other copies of the unpublished book.

To the horror of husband and wife, the squirrel quickly backed out of the window and ran across the garden, dragging the heavy folder behind it. And the chase was on: an overburdened squirrel leading a couple of senior citizens (both deep into their retirement years) in a race across the back garden. As luck would have it, all those slight edits and extra thoughts and footnotes and amendments paid off. The folder finally proved too heavy for the squirrel as it tried to carry it up a tree. Picking up the errant pages of his book as they blew across the lawn, Popper finally conceded to himself that it was time to submit the draft to a third party editor, and then to publication.

With it suddenly much easier to persuade Popper to take a step back, Arne Petersen took over the job, and the volume was finally published in 1998. “Which is how we come to have the book, The World of Parmenides” and how Robin Attfield got his hands on it…

When you find something as significant as what Popper did, an uncomfortable – or perhaps pleasant (depending on the nature of your ego) – thought automatically follows: someone else must have discovered this previously! David Deutsch often talks about human history in terms of “mini Enlightenments”. All those moments when our not-too-distant ancestors made rapid progress, and developed new knowledge of the world. These mini Enlightenments all failed for one reason or another, but to achieve what they did – even if only briefly – they must have understood the basics of epistemology and the scientific method… they must have understood what Popper later would!

The European Enlightenment is the one that stuck, and the one that we are all the beneficiaries of today. And here, Popper famously wrote, was his method in action – specifically in the work of Kepler and Galileo. “What is less well known” writes Attfield, “is that already in 1963 Popper claimed to have discovered this method in use… also at the outset of the ancient Greek Enlightenment.”

Xenophanes was born in 570 BCE in the Greek city of Colophon; the Persian invasion of Asia Minor (545 BCE) soon forced him to flee westward across the Mediterranean – at the time a “long and perilous journey”. Here, in Elea on the Italian coast south of Naples, Xenophanes settled in to a new life – one that would considerably overlap with the “famous son of Elea, Parmenides”.

Both Plato and Aristotle would later credit Xenophanes as the founder of the Eleatic school, a school whose prize member was Parmenides. Of the surviving fragments of Xenophanes’ work, we know that he wrote in poetic hexameters, something that Parmenides would also choose to do. In fact the number of similarities between the two men often led people to deliberately muddle the differences in their philosophies and arguments, so as to better align them with each other.

Despite all of this, while the legacy of Parmenides survived and floated quickly to the ether of philosophical tradition and admiration, Xenophanes was more commonly denigrated as a “‘wandering poet and theologian’ rather than a philosopher”, “a mere rhapsode”, “a minstrel”. Slandered and “lied about” for millennia, it was the unlikely figure of Karl Popper that changed things. Reviving an interest in the pre-Socratics, Popper put on the hat of historical detective and swam back into the depths of those original texts, slowly rehabilitating the image of Xenophanes.

The first, and weakest, of the smears against his character came from Heraclitus, who – furious that his theory of logos wasn’t being taken seriously by those around him – built and published a list of the ignorant. Among the prominent names were Hesiod, Pythagoras, Hecataeus, and of course Xenophanes. Rather than hold this against him (as others have done), Popper explains otherwise: all this shows is that Xenophanes was considered important enough during his lifetime to be mentioned.

A more serious and lasting attack came from Aristotle, who latched-on to a fragment of Xenophanes’ writing, and a single word in particular:

At our feet we can see how the Earth with her uppermost limit borders on air; with her lowest, she reaches down to Apeiron.

The common translation of Apeiron is infinity, and so it was natural for people – after his time – to think that Xenophanes was insisting on a kind of infinite depth to our planet. To back this up, Aristotle also claimed to have found in Xenophanes’ writing – though it has since been lost – the belief that the Sun never actually sets due to the infinite size of the Earth; rather it is created new each day. Popper shoots back at this last point with the observation that as someone who travelled the oceans and watched the sun set – as Xenophanes did – it is unlikely that his constant observations allowed him to think such a thing.

But the two points are connected, and it is with that word Apeiron and with the “gross departures from common sense” of other scholars that Popper is most animated. The mistake is made through a loss of context, and a lack of understanding of Xenophanes’ fellow Ionian cosmologists, particularly the work of Anaximenes and Anaximander. This is how Popper explains things, recasting Xenophanes’ claim as an “intelligent conjecture”:

The standard translation of ‘apeiron’ is ‘infinity’, and this is what gave rise to the belief that Xenophanes held that the Earth has infinite extension, because it supposedly ‘reaches down to infinity’. But another meaning is both possible and appropriate, in view of the fact that Anaximenes’ Ionian predecessor Anaximander held that the origin of all things is ‘the apeiron’, or the unbounded, or, as it is usually translated, ‘the indeterminate’. So Xenophanes’ couplet could well be saying that the lower side of the Earth stretches down to this all-encompassing but unfathomed substance, ‘the apeiron’, the unknown fluid put forward by the predecessor of his predecessor and the teacher of his teacher, Anaximander.

The distinctiveness of the philosophy, and the reason why Popper was so impressed, becomes a little more obvious from here on. This is Popper’s translation of perhaps the most famous surviving section of Xenophanes’ work (written, of course, as a hexameter):

The Ethiops say that their gods are flat-nosed and black,

While the Thracians say that theirs have blue eyes and red hair.

Yet if cattle or horses or lions had hands and could draw

And could sculpture like men, then the horses would draw their gods  

Like horses, and the cattle like cattle, and each would then shape

Bodies of gods in the likeness, each kind, of its own.

Looking with different eyes to those of most people, what Popper saw here was something much more profound than an argument against anthropomorphism (which it was) or an argument against theology (which it was not). This was the earliest extant example, in the world of thought and philosophy, of Popper’s own epistemology. The error that Xenophanes is drawing attention to here is that of using piecemeal local information, and experience, to create theories that apply far beyond it.

It was important – Xenophanes claimed – to reject the notion of Homeric gods, with divinity and heaven as flawed and limited and naive as we are. Rather, if we are to accept the existence of gods, then we must also accept that they don’t have the same problems with knowledge that we do (otherwise they wouldn’t be gods at all, but just our fellow travellers, and so nothing worth revering). Truth is independent of human minds. What Xenophanes is poking at here is the place we find ourselves in, at all times: struggling, and always failing, to get past our perceptions and opinions.

And this struggle, as well as Xenophanes’ account of the gods, says something very important about truth itself: that it does exist! That it should be pursued, that it can be found… it’s just that there is no way of knowing for certain that we have it, even if by chance we do. If you think that this leap, from ruminating about the description of gods to Popper’s critical rationalism, is tangential, then perhaps the next hexameter will ease things:

But as for certain truth, no man has known it,

Nor will he know it; neither of the gods

Nor yet of all the things of which I speak.

And even if by chance he were to utter

The perfect truth, he would himself not know it;

For all is but a woven web of guesses.

Here we have a clearer affirmation of Xenophanes’ realism (that truth exists, and that it is independent of human beings), but we also have something much more – something that must have excited Popper as he stumbled across it during translation: “For all is but a woven web of guesses.”

… A woven web of guesses!

As Robin Attfield goes on to show, from the original ancient Greek, the word δόκος might just as easily be translated as conjectures as guesses – a term central to Popper’s understanding of knowledge, and also in the title of his famous book Conjectures and Refutations. So within the work of Xenophanes is the important claim that there is a difference between subjective certainty and objective truth, and it is this gap which can never be fully bridged because, according to Popper, “we can never, or hardly ever, be really sure we are not mistaken; our reasons are never fully sufficient”. But there is again, also, something more, something incredibly prescient considering the time in which it was said.

The difficulty of discovering truth has always been a central problem within philosophy, but just how we go about doing so, just how we solve this problem, is a prize held only by Popper. Though considering what we have just read, perhaps not!

The big moment in Popper’s academic life was his solution to this question of knowledge. Before Popper the world of philosophy was awash with all sorts of wrong ideas about how we can know anything. The best minds went searching for answers in reductionism, seeking a true foundation from which all other knowledge could be constructed. Others believed that the answer could be found with empiricism or inductivism, with knowledge provided to us by our senses. Many people continued to believe that truth was simply revealed to us through authorities. Then came Karl Popper, and soon the whole landscape was torn down – and in its place, those simple-sounding words conjecture and refutation. We guess and then we criticise, and then guess again at something better. It may not sound all that elegant, but there is just no other way around it!

It is an answer that upset people… it still often does. But perhaps only because they hadn’t been properly warmed up to the idea through Xenophanes. Strange then that he is still often regarded as an advocate of philosophical scepticism (the school that denies the very possibility of knowledge); an impression that Popper was particularly keen to address. It is one thing to claim that objective truth can never be known with certainty, it is another to then say that our hopes to improve things are doomed and not worth the effort. It is clear – again from Popper’s translation – that Xenophanes did not think this way, but instead championed the search and the struggle for knowledge:

The gods did not reveal, from the beginning,

All things to the mortals; but in the course of time,

Through seeking they may get to know things better.

Later Xenophanes speaks in a way that draws much of this together, and shines a little more light on that problematic word: truth. Just like so much of language, it is often automatic, used to convey a fast and universal meaning; not an instrument of philosophical accuracy and nuance. Instead of truth, what we should be saying – according to Popper – is something like “approximation to objective truth” or “closeness to truth” or “affinity with truth”. A constant reminder of where we find ourselves in the universe. Or, it can be said like this (Xenophanes):

Let us conjecture that these things are like the truth.

Impressed as he is by the quality of the work and the translation, Attfield has a pressing question, one that many readers will likely have as well: sure, Popper is clearly excited by what he has found here in the pre-Socratic world, but is it true that “Xenophanes was really committed to the methodology of conjectures and refutations”? Or is Popper himself making that earlier error of extrapolating local information to situations where it no longer applies? Is Popper a victim of motivated reasoning, picking through ancient writings, and finding only what he wants to find?

Calling Xenophanes “the founder of epistemology”, Popper sets a high bar here, especially when considering that a lot hangs on very little. Or specifically, a lot hangs on fragments upon fragments of much larger, and now lost, works. There is also the problem of Parmenides, who was certainly a rationalist in the Popperian tradition, and someone who had Xenophanes as a direct teacher and mentor. Again though, that previous error comes back at us. It would be a mistake to attribute the beliefs of a student to those of his teacher.

As Attfield rightly points out from that earlier quote and interpretation, “Much turns on what Xenophanes would have counted as ‘getting to know things better’”. Popper might be right, and Xenophanes was referring to an unending process of conjectures and refutations, but he might just as well have seen this as too restrictive, and so have made the Millian or Baconian error of believing “that ‘getting to know things better’ can sometimes be achieved through inductions based on experience”. Indeed Attfield holds up a passage to show that Xenophanes was not averse to thinking that knowledge could be extracted from experience:

If God never had chosen to make the light-yellow honey,

Many a man would think of figs as being much sweeter.

Popper keenly noticed this section of writing as well, and felt the need to address its implications. He does so like this: the phrase “much sweeter” should be understood as “much sweeter than figs appear to him now, because the comparison with honey reduces the impact of the sweetness of figs”. Rather than deriving knowledge from experience, this becomes a process of correcting or criticising knowledge through experience. So maybe it would be more accurate to describe Xenophanes (as Popper later does) as using the method of “critical empiricism”.

Flavour it as you like, or as you can bear, but it is also plausible to find hints of inductivism within the writing of Xenophanes, particularly when it comes to the accumulation of sensory data. Attfield believes that, if only he had been exposed to the idea of abduction (a type of logical inference which seeks likely conclusions from observations), Xenophanes would have eaten it up, and of course disappointed Popper. In fact, “his own reasoning about the gods appears to instantiate such a methodology.”

And so leaving things on the appropriate tone, Attfield asks another important question: “Is Popper's Study of Xenophanes Strictly Popperian?”

Both Xenophanes and Popper died at the age of 92, and it was only the latter’s death that brought an overdue end to the editing process and allowed his last book to be published. And as fortunate as we are to still have access to the writing of Xenophanes, the same should also be said for this final work of Karl Popper. Because of course this might never have happened. Lost in the legality of his will and testament, or in the confusion or the sorrow, The World of Parmenides might never have reached an audience beyond its long-suffering author… that is, had it not been for the curiosity, the persistence, and the burglary of a single squirrel.

 

*** The Popperian Podcast #4 – Robin Attfield – ‘Pre-echoes of Popper - Xenophanes and Parmenides’ (libsyn.com)

More Popperian than Popper

In conversation with Nicholas Maxwell

 

“Observe!”

The room looks back at him in silence.

“Observe what?” someone eventually asks.

A sly smile grows across Karl Popper’s face. “Exactly!”

 

As rumour had it, Karl Popper liked to begin his seminars this way – with a cringeworthy, smug academic stunt that is now a firm part of the folklore and myth surrounding the man.

Paddling upstream against intuition, Popper needed something of his own, something “very simple, very basic, very elementary” that would get his message across. Inductivism, the claim that we acquire knowledge about the world by observing it, was an idea that should have been dead and buried long ago. And its death was central to everything Popper was trying to build and explain.

But everywhere he turned, inductivism was still there, holding firm in the minds of people who should have known better. It just seemed to make sense. Long, careful, abstract arguments weren’t doing it, so instead Popper found something fast and gimmicky, something that was easily understood and which could help to shatter that intuitive barrier.

Those people watching Popper as he commanded them to “observe” knew immediately that something important was missing. With any glance in any direction, there are infinitely many things to see, feel, hear, touch, smell – enough observations in a single lecture hall to fill a lifetime. So a filter of sorts must be there before anything is specifically noticed. You need a problem, something that draws your attention (you need to know what Popper is referring to) but you also need a huge array of background theories to even begin to understand what you are looking at.

To make use of an old, worn, philosophical cliché, even if he were asking you to simply look at a red cup, you would still first need theories about what water is, what drinking is, why a cup is used, what problem it solves for people, what the colour red is, what colours are… Theories always come first, they give us our eyes and our senses. Without theories the world is an incomprehensible mess of light and movement and noise and objects and behaviours – we are newborn babies once again, or worse!

Sitting in the audience all those years ago at the London School of Economics (LSE) was a young professor recently hired to run tutorials for Popper’s lectures. But when the news filtered up that Nicholas Maxwell was on the payroll and working in his department, Karl Popper was “furious”.

The two men had clashed previously at another seminar, where Maxwell quickly became besotted with the older philosopher, who it seemed couldn’t open his mouth without saying “really interesting things”, things “at odds with all the usual rubbish”. And yet this was someone who was also “fairly terrifying”, with it not uncommon for guest presenters to collapse inward, “reduced to tears” by the remorseless criticism that Popper would hurl at nearly every sentence from their mouths – a uniquely “harrowing experience”.

At the head table, sandwiched between Popper and his assistant John Watkins, Maxwell listened as they argued across him, about his ideas. He would manage to get a few words out, and then “Popper would interrupt”, Watkins would then interrupt Popper, and Maxwell would sit silently waiting for the theatre to play out. Soon the two men would turn – breathless and exhausted – back to their guest and ask for some clarity, which would quickly set them off again.

After a few hours of this, Maxwell laid out a theory of the mind that seemed to hit the limit of Popper’s patience. The older philosopher stood up in anger and – near-yelling – demanded to know how on earth he came up with such a ridiculous idea. Maxwell shrugged and replied in the deepest of Popperian language: it was “a bold conjecture!” Through the laughter of the room, Popper could be heard muttering under his breath: “it’s not so bold as all of that!”

All truth-seeking starts with a problem, and Maxwell’s problem has to do with falsifiability. Popper’s great breakthrough in the philosophy of knowledge was to show that our truth-claims about the world can never be verified, never proven correct. Instead what we do is create theories, then burrow into those theories to produce experiments, and then derive predictions about what we will see and what will happen. Here we test and test and test, hoping for error. No amount of positive predictions can ever confirm a theory, but it takes only one incorrect prediction to falsify it.

When something is shown to be wrong in this way, it leaves a burden upon us all – we need to create something better. This, for Popper, is how science and knowledge advances, this is how we make progress: through conjectures and refutations.

This is also where Maxwell senses that something is off, that something is missing, something important. It runs like this: say you have a theory that has just been falsified, and so you are searching for a new theory that better accounts for what you are seeing in reality – how do you choose? There are infinitely many rival theories that can be thought up, all of which explain our observations.

Popper’s answer was empirical content: pick the theory which predicts more than the others, and which explains more phenomena. But this still doesn’t fix our problem. We can increase the empirical content of a theory simply by asserting “ugly”, “hodge podge”, “bizarre” add-ons. Instead of Newton’s theory of gravity, we could create Newton’s theory of gravity plus the claim that in the year 2050 it will change from being an attractive force to a repulsive one, and we will all “fly off the earth”.

It is an absurd theory, so absurd that it will never even be tested, but it does predict more, and it does have more empirical content – “exactly the criteria according to Popper that should lead us to accept this theory”. So what makes us reject this type of ad hoc theory? Unity! Or rather, its disunity! We want our theories to apply to all things, universally. This, for Maxwell, is what Popper (and so many other philosophers) missed or ignored: an underlying metaphysical assumption within science that values unified theories over disunified ones, regardless of how empirically successful they are.

Popper’s demarcation doesn’t allow a place for metaphysics within science (“for Popper the basic aim of science is truth, nothing being presupposed about the truth”), but he does recognise a problem of sorts here. His answer this time is simplicity. Across his work, two such versions emerge (one from his earlier work and a later, more ambiguous one from Conjectures and Refutations), but essentially they amount to either the unsatisfying and counterintuitive claim that the more empirical content that a theory has – “what it says about the world” – the simpler it is, or that we “should leave it to scientists to fight for their theories”. As Maxwell notes, these are notions of simplicity that seem to create complexity, horribly so.

Maxwell would rather start from where we already are, with what “actually operates in physics” and with “what physicists seek”. Instead of pretending that this preference for unity isn’t there, we need to make it explicit – thereby making it into something that we can improve along with the rest of science.

In this enterprise the basic aim of science shifts as well, from Popper’s notion of truth without presupposition to instead “discover[ing] the underlying unity in nature that we presuppose, and we have to presuppose in order to proceed, exists”. And so a map of sorts begins to develop, a way to not only understand which theories are likely false, but also which ones we should be pursuing into the future. Something that Maxwell develops into a methodological hierarchy, or aim-orientated empiricism.

It runs from the claim that the universe is at least partly knowable – “if that is false we have more or less had it whatever we assume” – down to evidence and our currently accepted, fundamental, theories such as Einsteinian relativity and the Standard Model. 

[Figure: Maxwell.png – the hierarchy of assumptions of aim-orientated empiricism]

Those assumptions near the bottom of Maxwell’s ladder are much more substantial, and also much easier to revise once new knowledge is developed; higher-level assumptions are less substantial but also more likely to be true. It also becomes a two-way progression up and down the ladder, as the latest breakthroughs in knowledge also improve our means of finding knowledge itself, as we see with the creation of instruments such as microscopes or telescopes.

And it isn’t always the case that evidence influences theories – sometimes it is the theories that influence evidence (most obvious when experiments have been mishandled). This is an important aspect of aim-orientated empiricism because it accounts for the possibility of new theories changing our metaphysical assumptions. As Maxwell accepts, we don’t actually know that the universe is unified, or that a theory of everything is on its way; it’s all a “massive assumption”.

But by making our metaphysical assumptions explicit, and then stretching criticism all the way up to their door, Maxwell is trying to increase the size and the quality of possible refutations, and so with them possible improvements. He is, in his own words, being “more Popperian than Popper”.

Thomas Kuhn was another contemporary philosopher with a talent for angering Popper. The change that Kuhn wanted to make in this whole picture of scientific discovery involved a deeper look at the history of science, and a firmer understanding of the behaviour of scientists. He presented it like this: sure, occasionally science looks very Popperian, with everyone working to prove existing theories wrong and with grand shifts in the landscape of thought (“revolutionary science”), but most of the time something very different is happening. Day in, day out, your average scientist is not trying to disprove the current paradigm or consensus, but is actually working to strengthen it (“normal science”).

It is an emphasis that Kuhn thought was fundamental, and which Popper dismissed as both trivial and dangerous at the same time. Seeing it as a risk not only to science but also civilization, Popper was horrified that someone would want to encourage others into the mundane grunt-work of normal science; into a life without critical thought and hard questions. He also didn’t see much originality in the idea, flippantly saying that it was something “I discovered anyway long before Kuhn did”.

In this argument between the two most influential philosophers of science, Maxwell’s sympathies are firmly with Popper. Everything should, at all times, be open for criticism. In fact aim-orientated empiricism tries to show that unquestioned paradigms do exist, none more pressing than the assumption of unity which is built deeply into the whole enterprise of science. But it also explains the need for revolutions… more revolutions… more times and in more ways. Again, Maxwell in his own words is trying to be “more Kuhnian than Kuhn”.

The man who had hired Maxwell at the LSE without consulting Popper, Imre Lakatos, tried to bridge the ground between Kuhn and Popper by talking about the place for “provisional” refutations between research programmes. It is appropriate to always question our theories and try to prove them wrong, and we should never place paradigms (or the “hard core”, in his own terminology) beyond criticism; but neither should we expect rapid shifts in scientific direction. People need time to change their minds, and to judge the errors within theories; it just cannot be as absolute and as sudden as Popper hoped for.

Lakatos’ framework is still profoundly Popperian, and so different from Maxwell’s, but by imagining that a hard core for the whole of science was possible, there is a pre-echo of aim-orientated empiricism within his work. So perhaps also more Lakatosian than Lakatos?

With something this lavish, the criticism also – necessarily – comes thick and heavy. It tends to take one of three forms:

1. There isn’t in fact a unity underlying all of science, just the appearance of it; or unity is only a novel aspect of physics, and not of science in general.

2. The unity we see is a result of science, not an a priori aspect of it. Science only reveals a unity in nature, not the other way around.

3. By imagining all unrejected theories (“infinitely many ad hoc theories”) to be still reasonable possibilities of science – even when they appear absurd and are not taken seriously – Maxwell is looking for a principle that allows them to be excluded (unity), and so he is making the mistake of justificationism. He wants a reason for the ridiculous to be rejected, rather than just allowing that they are ridiculous.

This last criticism is the weightiest and most often repeated. It is also the one that Maxwell admits “does baffle me”. A former student of Popper’s, and a firm defender of critical rationalism, David Miller, picks up on a single phrase from Maxwell’s writing to show this justificationist element: “in effect”.

In persistently excluding infinitely many . . . empirically successful but grotesquely ad hoc theories, science in effect makes a big assumption about the nature of the universe, to the effect that it is such that no grotesquely ad hoc theory is true, however empirically successful it may appear to be for a time. Without some such big assumption as this, the empirical method of science collapses. Science is drowned in an infinite ocean of empirically successful ad hoc theories.

Miller remarks:

The words ‘in effect’ here are tendentious. Since scientific hypotheses in modern times never mention God, it might be said that science ‘in effect’ makes ‘a big assumption’ of atheism. But it does not make this assumption, and many scientists privately assume the opposite. Hypotheses that bring in God are simply excluded, rightly or wrongly, from empirical consideration. That does not mean that they might not be discussed non-empirically, as indeed they are being discussed in this paragraph.

There is an old joke that Miller uses to better explain this. A guy walks into a bar in a busy part of town and orders a beer. Soon he starts to snap his fingers, over and over. Eventually the bartender asks him, “Why are you snapping your fingers?” “Because it keeps the elephants away”, the guy answers. The bartender looks around the crowded bar, bemused: “but there aren’t any elephants here”. The guy smiles back at him, “See, it works.”

Where Maxwell wants some sort of interdictive principle to explain why there aren’t any elephants near the bar, Miller thinks that even if snapping one’s fingers did indeed keep elephants away, it wouldn’t be needed to keep them away from that bar, nor from bars in general. It is enough to simply say that elephants rarely enter towns and rarely, if ever, go near noisy bars. No more explanation is needed. In Miller’s view, despite us seeing unity and tidiness in our theories, we should still hold that science “assumes neither that the world is tidy nor that it is untidy”… “I shall change my mind about ‘the true unified theory of everything’ when it is discovered.”

Whatever the truth of this, Maxwell did enough all those years ago to shake Popper’s resolve. Two weeks after their clash, something completely unheard of happened. A public event was created, and a lecture announced: Popper had authored a response! It was scathing of course, but as Maxwell sat listening to the onslaught, Lakatos leaned over to him in near disbelief and whispered in his ear “this is a great honour to you, Nick.”

It was an indication that Popper took Maxwell seriously. Structured, thought-out criticism takes time – you only do it if you first consider it important or worthwhile. It is a sign of intellectual respect. And it was also reciprocal: in his early days as a student, first touching on the world of academic philosophy, a younger Maxwell remembers sitting alone in his tiny apartment reading The Open Society And Its Enemies, and crying a flood of tears. At last there was a philosopher who was “actually doing something”, someone who revolutionised epistemology and what it means to be rational, but also someone who extended that logic into other fields.

Through all of this, despite contradicting the older philosopher, Maxwell still considers his work to be highly Popperian. Particularly considering the tradition of criticism that Popper glorified so much. Maxwell’s only gripe is that “I think he should follow suit. If there is an argument that shows that scientific practice straightforwardly refutes his methodology it is something that he should take seriously and not try to push aside with bluster, which is in effect what he does”.

It leads Maxwell to speculate – slightly tongue-in-cheek – that by being so hostile to, and dealing so poorly with, personal criticism, Popper might also, rightly, be considered as an enemy of the Open Society.

Appropriately the whole affair ends on a personal note: “Popper never really liked me…”

 

*** The Popperian Podcast #3 – Nicholas Maxwell – ‘More Popperian than Popper’ https://popperian-podcast.libsyn.com/the-popperian-podcast-3-nicholas-maxwell-more-popperian-than-popper-0

Karl Popper vs. Paul Feyerabend

In conversation with Matteo Collodel

 

At some point in the mid-1960s he turned: sudden, angry and defensive. Once a disciple of Karl Popper, Paul Feyerabend was out to do the most Popperian thing imaginable: prove Popper wrong! But it was also more than this. It was an intellectual disagreement that was always personal and emotion-driven. Storming out of the front door, rejecting the critical rationalism that he had once adored, Feyerabend began burning his old home to the ground; leaving behind him a mess of intrigue, infighting, animosity, battle scars and broken relationships.

The Department of Logic and Scientific Method at the London School of Economics (LSE) began with a single member of staff: Karl Popper. It waited patiently for him, until the title of Professor was awarded in 1949. The new school quickly changed from a teaching department – focussed on a small cohort of honours students – to a research academy. And before long a remarkable – and strikingly famous – set of names began passing through the halls: William Bartley, John Watkins, Imre Lakatos, Joseph Agassi, Ian Jarvie, Alan Musgrave, Jerzy Giedymin, David Miller…

The man who created this lively academic world – who had attracted all that international talent – seemed a little less impressed by it all. Pulling himself from the cottage armchair and village life in Penn, Karl Popper only travelled down to his London office once a week, on Tuesdays. But it was still, unmistakably, an education in Popper’s own image:

A school in which young people could learn without boredom, and would be stimulated to pose problems and discuss them; a school in which no unwanted answers to unasked questions would have to be listened to; in which one did not study for the sake of passing examinations

It would also soon collapse inward under poor management. Popper was just not cut out for the job, preferring to work from home, and not appreciating his role as administrator. Matteo Collodel describes it like this: “his figure should have looked more like that of a workshop foreman or of the father of an intellectual family, than that of a school director.” There was a huge measure of healthy collaboration and research, but with so little time for his students a battle for Popper’s attention also formed. Political machinations walked hand-in-hand with the philosophy.

Favoured by the master, and so a regular guest at Fallowfield (Popper’s home in Penn), Paul Feyerabend spent a large amount of time beyond this battlefield. Long, exhaustive, happy days of philosophical discussion in the English countryside seemed to break only when Feyerabend would occasionally idle over to the corner of the living room, where he played with and “talked to his [Popper’s] cat”. Whatever the treatment of the LSE students (and their reasonable complaints), Feyerabend was largely immune, with Popper taking such an interest in him as to pester him with “often unrequested advices on personal matters.”

Then it all changed. Feyerabend would soon write to one of the few people lucky enough to join him on those Fallowfield excursions, Joseph Agassi, in tones bordering on mental breakdown: “But seriously, I just cannot take academic philosophy seriously any longer – including Popperianism.” He began calling himself a “philosophical bum” someone “loitering in the halls of wisdom and knowledge” and explained that “I have started publishing on aesthetics […]. My next step will be into the philosophy of religion. Philosophy of science be damned”.

The intellectual content of this abrupt decision was likely the influence of Thomas Kuhn. Feyerabend left the University of Bristol (a position that Popper had helped him secure in 1955) and moved to the University of California, Berkeley. He arrived in America still pushing critical rationalism on anyone who would listen: handing out newly translated copies of Popper’s books as gifts and aggressively trying to get his old mentor a visiting professorship.

The course that Feyerabend decided to teach at Berkeley was on the scientific method, and it had only one core textbook: The Logic of Scientific Discovery. Details then get a little foggy, but at some point Feyerabend discovered an interesting colleague in California. The closer he came into contact with Thomas Kuhn, and particularly with Kuhn’s book The Structure of Scientific Revolutions, the more Feyerabend seemed to leave behind his older influences and undergo a personal and intellectual mutiny.

He would eventually land on “epistemological anarchism”. It was a title that fit the man as much as his philosophy. Popper’s scientific method was famously that of conjecture and refutation: we can never hope to prove our scientific theories (conjectures) correct, so instead we should try to falsify them (refutations). What doesn’t get disproven isn’t accepted as truth, but just not discarded as false. A fairly minimalistic approach to science and knowledge, but still not enough for Feyerabend.

The answer was – just as it would be with Kuhn – buried within the history of science, with paradigm shifts and incommensurability. If you look at what the great scientists actually did and how the important breakthroughs occurred, it was by abandoning method altogether. Philosophers make rules, the rules don’t work, and so the scientists ignore them. No matter what method you can think of, the history of science shows that at some point progress was only possible, and was only achieved, by forsaking it for another. There is no formula for science, and as far as one does exist, it is, in Feyerabend’s words, “anything goes!”

The example that he uses is art, where our instincts are better tuned to the message, and where he was most comfortable, having strongly considered (more than once) giving up academic life for a musical career. The next great artistic movement – be it in music, painting, sculpture, cinema... – whatever it may be, has only one criterion that we know it must meet: it must defy everything that came before it. It must do violence to previous methods. Break the existing paradigm. Be incommensurable.

To build out his new creative – artistic – science, Feyerabend lionised both Galileo and Einstein, but also, explicitly, gave credit to astrology, religious solutions, and all manner of superstition. He particularly liked folk medicine. In the 1950s the Chinese communists began forcing hospitals to use traditional rather than Western medicine on their patients. According to Feyerabend – in his book Against Method – what happened next wasn’t the failure and falsification of Chinese medicine, but rather that: “Acupuncture, moxibustion, pulse diagnosis have led to new insights, new methods of treatment, new problems both for the Western and for the Chinese physician.”

Any attempt to draw a line between what is science and what is not – as Popper had done with his demarcation criterion (science requires testable theories) – was a standard that, if followed, would have forbidden knowledge of this kind. And so the rules of science become only dogma, only harmful; too narrow and too stifling of what science actually needs: pluralism and unbridled creativity.

In light of this, Feyerabend’s acrimonious exit from Karl Popper and critical rationalism begins to make more sense. Feyerabend was a rhetorician. He wanted to provoke, shock and anger people towards the intellectual anarchy he admired. What he loved so much about those great scientists was not so much their theories, but their revolutionary spirit. And so it is likely no coincidence that Feyerabend got cold feet at just the moment when Popper’s fame was reaching the general public, and when people began talking of the LSE’s Department of Logic and Scientific Method as The Popperian School.

Watching his old friend using the “odd slogan” anything goes, Joseph Agassi went a step beyond a simple contrarian explanation. Agassi noted that “things changed” with Feyerabend after he witnessed “the student revolution” – a change that was “political, not intellectual”. In short, he believed that Feyerabend had been “converted to Trotskyism, from which he was never freed”. Whether or not this is true, the instinct to resist joining a formal school is something completely understandable for even the most mildly rebellious spirit.

It was something that Popper understood all too well. When the socialist leader Mario Soares came to power in Portugal in the 1970s, he invited to Lisbon a small group of people whom he not only admired but who had influenced his thinking. Popper was on the list, and the first item on the itinerary was an excursion around the presidential palace. Popper collected his things and headed out by himself to see the grounds, only to be stopped by his official minders and told that they had to go as a group, everyone together. At which point Popper slammed his fist down on the table, and declared loudly: “I will not go in a collective!”

So perhaps abandoning the Popperian School at just the moment that it was becoming mainstream (as more and more international scholars flew in to join the ranks) was in fact the Popperian thing to do. What came next, though, was a little harder to explain. It starts like this, with Feyerabend trying to scrub history a different colour:

Popper was my supervisor: working with him was a condition of my being paid by the British Council. I had not chosen Popper for this job, I had chosen Wittgenstein and Wittgenstein had accepted. But Wittgenstein died and Popper was the next candidate on my list. […] at the end of the year Agassi is speaking of (1953), Popper asked me to become his assistant; I said no despite the fact that I had no money and had begun selling my furniture and my books.

Matteo Collodel digs out the details here, behind both of these claims – about Wittgenstein and the rejection of Popper’s assistantship – and they are not just troublesome, but completely “implausible”. Feyerabend only met with Wittgenstein on one occasion in early 1950, a matter of weeks after Wittgenstein had been diagnosed with a terminal illness and a full two years after he resigned from his last academic position at the University of Cambridge. Just how the dying and retired Wittgenstein was going to supervise Feyerabend’s post-doctoral studies remains unanswered. It gets murkier still. By the time that Feyerabend had finished his doctoral dissertation in 1951 and was ready to begin considering post-doctoral options, Wittgenstein was long dead!

When Feyerabend submitted his application for the British Council scholarship, the name he listed as his first choice of supervisor, out of all the available academics in the country, was Karl Popper. So not a “condition” of his scholarship (as he claims), but a personal choice and preference. This kind of loose language and revision of personal history soon became the archetype for all of Feyerabend’s interactions and disputes – including the second episode, where Feyerabend “said no” to Popper’s offer of an assistantship in 1953.

When the offer was made, the young Feyerabend was keenly engaged in translating Popper’s The Open Society and Its Enemies into German for the first time. And we now know from official records – and the work of Collodel – that he quickly accepted the position, at some personal cost. Feyerabend had already taken up an assistantship with Arthur Pap at the University of Vienna, so by accepting Popper’s offer he would have to cancel – quite unprofessionally – on Pap. Which he did! Only to then change his mind, again.

Throughout this period “Feyerabend’s correspondence with Popper reads quite confused”; he handed back Popper’s offer only a few days before he was due to arrive in London, with long-winded thoughts about how he was, once again, leaving it all behind to become an opera singer. What he doesn’t say is the most revealing part. Feyerabend’s wife was still finishing her studies in Vienna at the time and was in obvious need of “her husband’s support”. This is almost certainly the actual reason for the rejection, because a little over a year later – once his wife had finished her work – Feyerabend moved quickly back into Popper’s orbit at the University of Bristol.

It was Popper’s influence that got him the position, but in his autobiography – still at war with his own history and his relationship with Popper – Feyerabend credits only Erwin Schrödinger for the appointment. A poor or selective memory might be an excuse, but it was the least of the problems forming between Feyerabend, Popper and the other members of the School. This is Popper – at his wits’ end – writing to Hans Albert, asking for help in dealing with his former student and friend:

Unfortunately our personal relation has been somewhat clouded by the fact that [Feyerabend] is neurotic and that his neurosis partly lies (at least that is how I would explain it) in the fact that for many years he has stolen my ideas like a raven. Usually he proceeds in the same way as many others do: he mentions me somewhere in the articles in question, sometimes even quite frequently; but not when he comes to his “own” contribution, which is then usually stolen from me. However, this “own” contribution of his is often defended against me, or I am sharply criticized for my inadequacy, which is illuminated/revealed by this contribution.

Well, I am used to it. I do not take it all too seriously. After all I have enough ideas and I can leave some of them cheaply to my students (though without being asked), even if it goes perhaps a bit too far if my own ideas are (a) stolen from me, and (b) used to attack me. […] poor Paul knows he is stealing: I often called his attention to this in a friendly way. The last time (in March in Berkeley, in 1962) he answered: “Your ideas are so original that it takes a great effort to assimilate them; and by the time one has assimilated them one thinks they are one’s own.”

It wasn’t just Popper, though; the spectre of plagiarism in Feyerabend’s work was increasingly raising alarms throughout the Popperian School. Other than Albert, at various points Bartley, Agassi, Watkins and Lakatos would reach out to Feyerabend with such concerns. It was the softness and friendly outreach of Lakatos (the two, at Feyerabend’s suggestion, addressing each other “as one Popperian to another”) that made the difference and brought forward a muddled confession: “I for one am not aware of having produced a single idea that is not already contained in the realistic tradition and especially in Professor Popper’s account of it.” This is not as hyperbolic as it seems – the language and emphasis might be different, but at every turn Feyerabend’s work continued to be, and to sound, very Popperian.

But there was something in Feyerabend’s nature that liked the role of castaway and the life of excommunication. No sooner had he admitted to the impact of Popperianism and his countless undisclosed reference points than he was back on the offensive, talking of the Popperian School as a church or political party, and throwing fire on his new friendship with Lakatos by constantly referring to him as “the party secretary of Popperianism”. Strangely though, he always seemed genuinely shocked by the animosity that such comments garnered.

Feyerabend’s wasn’t the only acrimonious fracture within the Popperian School. And much of this was the fault of Popper, who had developed an idea of interpersonal conduct that failed his students badly. Because of their working relationships, he thought that criticism should be a private, not a public, event. He felt hurt when people didn’t follow this standard, but worse, it meant that Popper would often move forward with his philosophy without ever publicly acknowledging the research of those around him. Because he would not criticise his students in the open air, their ideas were left to suffocate in darkness. Couple this with Popper’s “absent-mindedness in organizational matters”, his “workaholism”, his “idiosyncrasy”, and what Agassi called his “famous immense sense of persecution”, and it is possible to see that the broken relationship had more to it than just Feyerabend’s strange behaviour.

The nastiness of Feyerabend’s increasing attacks on Popper was something he self-excused as due to “my writing style”, part of the infuriating tactic whereby he would constantly insist on not being taken seriously. Just how an audience is supposed to understand and filter his words, and to discover what he actually is serious about, is never explained. Here Feyerabend is a pre-echo of the relativist movement that he would later entertain.

John Watkins, perhaps the most loyal of all Popper’s students, offers this reflection on those messy years: “there ought to have been a Popperian critical tradition and not a Popperian School”. Always the bridge builder, Watkins kept in touch with Feyerabend throughout it all, and at some point towards the end of the 1960s he recommended that Feyerabend read John Stuart Mill, particularly On Liberty. It took two years before Feyerabend got around to it, but he would eventually write back to Watkins in a hot flush, saying that he was “more enthousiastic [sic!] than [he] ha[d] been about anything for a long time. […] Mill is really quite something”.

Feyerabend, the man in love with scientific genius, and the exploits of great minds, had a new hero… and a new bible!

 

*** The Popperian Podcast #2 – Matteo Collodel – ‘Karl Popper vs. Paul Feyerabend’ (libsyn.com).