Post 788: The European flag

A tale of nested conspiracies in four-and-a-half fits - ideally it would have been post 666

For my swan-song I’ll supply the faithful with a tall tale of the sort that induced Mark to invite me on to his blog. I’m sure he would have enjoyed the mixture of verifiable truths, multiple ironies, and fuel for nested conspiracy theories.

Epi-prologue: the flag

It’s a good symbol. The ring of stars evokes an ideal of common values and aspirations, as in Schiller’s Ode to Joy: “Über Sternen muß er wohnen” (“above the stars He must dwell”). The empty space it defines but does not enclose invites states and citizens to fill it with the meaning and praxis they choose. Unlike the Ode, picked as the European anthem, it is not impossibly élitist for amateur use (*).

The circle of stars is not standard heraldry. The one previous use I could find was the short-lived 13-star Betsy Ross flag of the American revolutionaries. This was replaced progressively by the rectangular arrangement of today, with one star for each state, as settled in 1818. There is no reason to think the Eurocrats were inspired by the Betsy Ross, or had even heard of it. So where did the 12-star European flag come from? It never corresponded to the number of members of any European institution at the times of adoption.

I tell the story in reverse chronological order, like the detective explaining the murder to the assembled suspects in the drawing-room of the snowbound country house. In honour of Lewis Carroll, whose world we are plainly living in, the sections are called “fits”.

Terawatt solar, halfway

A victory hobble down RBC memory lane

In January 2013 I published a long post on PV solar energy, under the grandiloquent title “Terawatt solar or bust”. For an inherently ephemeral blog post, it holds up pretty well. The heart of it was this table:

At the end of 2019, installed PV was 633 GW (Bloomberg NEF, via Wikipedia; may include a rough estimate of rooftop), with a central forecast of 770 GW for end 2020.

Half a terawatt is there already, and the second half can be expected for 2022. Yeah!

The EPIA/Greenpeace best scenario of 688 GW for 2020 was spot on. Remember that it was an outlier at the time, derided by the conventional wisdom. The long-haired agitators at Greenpeace have turned out much better forecasters than the men in grey suits at the IEA.

My gloat is tempered by the fact that I tagged Greenpeace as too conservative, based on an extrapolation of past trends. I was wrong: the growth rate did slow. I didn’t allow for the premature rush by many governments, from the UK to China, to phase out “unaffordable” solar subsidies. That exogenous shock has gone now, and there is little reason why solar should not get back on its previous fast track. Judging by their investment plans, Chinese PV manufacturers at least are betting on rapid growth.

What of the future? My dumb extrapolation had solar meeting the world’s entire energy demand of ca. 10 TW continuous soon after 2030. That implies 40 TW of solar at a 25% capacity factor. Say half of that would be wind instead; that still leaves 20 TW of solar. A long way to go, but exponential growth is still the 500-lb gorilla that crushes everything in its path.

At current utility prices close to $1 per installed watt (*) (modules are at 20c per watt), the cost would be about $20 trn, or $1.3 trn a year for 15 years. Global investment in all forms of energy was $1.85 trn in 2018 (IEA), of which $726 bn was for oil and gas, so the order of magnitude is as doable as it is necessary. Note that the increase is only in the upfront investment. It is now a truism (pointed out here by me in 2015) that the energy transition at worst breaks even over time, because the running costs of renewables are so low. It’s a huge bargain if you add in health and climate benefits.

(*) The NREL gives the average US cost in 2018 as $1.60/watt, with $1 at the bottom of the range, which will soon become the norm. OK: round up to $2 trn a year at worst if you insist.
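Since the sums above go by quickly, here they are as a few lines of Python – a back-of-the-envelope sketch using the post’s own round numbers, not a forecast:

```python
# Back-of-the-envelope check of the numbers above: 10 TW continuous
# demand, 25% capacity factor, half the job done by wind, $1 per
# installed watt of utility solar, spread over 15 years.

demand_tw = 10            # world energy demand, TW continuous
capacity_factor = 0.25    # assumed average for utility solar
solar_share = 0.5         # the other half assumed to be wind
dollars_per_watt = 1.0    # utility-scale installed cost

nameplate_tw = demand_tw / capacity_factor   # 40 TW of capacity
solar_tw = nameplate_tw * solar_share        # 20 TW of solar
cost_trn = solar_tw * dollars_per_watt       # $1/W is $1 trn per TW
per_year_trn = cost_trn / 15                 # spread over 15 years

print(f"{solar_tw:.0f} TW of solar, ${cost_trn:.0f} trn, "
      f"${per_year_trn:.2f} trn a year")
# -> 20 TW of solar, $20 trn, $1.33 trn a year
```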

I did get a few things right seven years ago:

- There has been no dramatic change in solar technology, just steady incremental improvements.

- A lot more attention is being paid to the future problem of firming large volumes of intermittent solar and wind without carbon emissions. (The immediate problem is being successfully managed by grid operators everywhere using gas turbines.)

I’d just like to highlight a couple of recent developments before we go.

One is Andrew Blakers’ 100% renewables scenario for Australia using current technology and costs. He showed it can be done, at less than current wholesale prices, using just four technologies: wind, solar, HVDC transmission, and off-river pumped hydro storage. The beauty of this minimalist approach is that you can add in riskier technologies – V2G, P2G, large-scale demand response, grid batteries – if they (a) work and (b) make things cheaper. But the feasibility problem has been solved. Blakers followed up by answering the “no sites” objection to pumped hydro by creating a world atlas of potential sites, identified from satellite data. There are 616,000 of them. Bad luck for Estonia and the Netherlands, but most countries have plenty of options.

I suppose the big technology news in renewables in the last few years has been the sudden arrival of floating wind. The Norwegian oil company Equinor went from one test turbine to an operational wind farm without a hitch, and now other players are piling in. This opens up large areas of ocean off coasts with no continental shelf, especially the US West Coast and Japan’s Pacific coast.

For entertainment value though, it’s hard to beat the even more rapid arrival of agrivoltaics. This is a Sybil Fawlty invention (“special subject – the bleeding obvious”) but welcome for all that. It turns out you can actually improve yields of some crops by growing them in partial shade under solar panels. Here’s a nice shot of a project in a vineyard in the Languedoc.

The French get their priorities right. The AI management software that steers the panels gives priority to protecting the vines from extreme weather, such as hail – a serious risk to high-quality vineyards in Burgundy and the Médoc. The setup even improves the wine, perhaps by cutting heat stress:

It has also been claimed the aromatic profile of the grape was improved in the agrivoltaic set-up, with 13% more anthocyanins – red pigments – and 9-14% more acidity.

A toast to solar energy: take over the world as fast as you can.

One last post to come.

Heroes

The hero of Midway was not like Achilles.

Roland Emmerich’s Midway is on Amazon Prime, so I watched it. As you would expect from the director of Independence Day, it’s a watchable, technically adept war movie at a Boy’s Own Paper level of subtlety and depth. If you are looking for an exploration of the stresses of command – Nimitz’ acceptance of a critical battle with no advantage of forces, only the edge of surprise – or of front-line combat, you will be disappointed. This is not an RBC recommendation.

Richard Halsey Best

But it does appear to be historically accurate. The events of Midway are sufficiently dramatic not to need embroidery. They even supplied an unambiguous real hero, Lt. Cdr. Richard Halsey Best, the dive-bomber pilot who scored hits on two Japanese carriers in the same day, the bomb that burst in the Akagi’s hangar dooming the ship.

In the film, Best is played by English actor Ed Skrein as the archetypal talented bad boy who makes good on the day, a clone of the Tom Cruise character in Top Gun and similar action heroes. This characterisation by Emmerich reinforces a narrow stereotype. Hollywood does not always follow this – from Marshal Will Kane in High Noon to the portrayal of pacifist Marine paramedic Desmond Doss in Mel Gibson’s surprising Hacksaw Ridge - but it does so often enough for the stereotyping charge to carry weight.

It’s worth asking whether the portrayal of Best is true to life. It’s not inherently implausible; military pilots, like other combatants, can be nerveless daredevils. But it’s not the only possibility. Homer presents us in the Iliad with three different styles of warrior-hero: Achilles, the Top Chariot Fighter in the Hollywood mold, brave for glory and because he enjoys fighting; Hector, courageous out of honour and duty; and the calculating Odysseus, who is brave because he wants to win, to survive, and to go home to his wife and son. With striking realism, Homer has only the last survive.

So which of these three was closest to the real Richard Best? Surprisingly for such a pivotal and iconic figure, I could find no assessment of his character on the Internet. However, there are enough recorded facts to build a pretty good Identikit portrait.

1. Best married at age 22, and stayed married. I could find no reference to a divorce or remarriage.

2. As a young Navy pilot, he was picked in 1938 for a post as instructor at Pensacola. You do not choose reckless individualists for instructors, but men who can balance aggression and prudence, and can focus on the mission.

3. From Pensacola, he asked for a transfer to an operational dive-bomber squadron – not a fighter one. Fighter pilots win dogfights and glory; bomber pilots can sink ships and win battles. Why not torpedo bombers? Perhaps there was enough scuttlebutt about the obsolescence of the Douglas Devastator plane and its unreliable Mark 13 torpedo.

4. He was regularly promoted, and at Midway was a squadron commander under Air Group Commander Wade McClusky. Emmerich’s film has them both given field promotions by Halsey in extremis, which is not plausible.

5. In the first attack on the Japanese fleet at Midway, most of the 31 American dive-bombers attacked the Kaga. McClusky’s claimed orders to split forces were not received. Best noticed the mistake and, without orders, took his two wingmen, Lt. Kroeger and Ensign Weber, to attack the Akagi. Amazingly this tiny force not only survived but destroyed the carrier. Weber was killed in the successful afternoon attack on the Hiryu, but Kroeger survived the war and lived to 89.

6. Best suffered serious damage to his lungs during the day from a faulty oxygen canister and was hospitalised. He developed full-blown tuberculosis, and was invalided out. He never flew for the Navy again – or, as far as I can find out, at all. You would think that if his passion was flying as such, a war hero and master pilot like Best could have found a way to stay in the air, at least for recreation.

7. After leaving the Navy in 1944, Best held two responsible desk jobs: “After discharge from the hospital, Best worked in a small research division of the Douglas Aircraft Corporation. This division became part of the Rand Corporation in December 1948, where Best headed the security department until his retirement in March 1975” (Wikipedia). These do not look like sinecures. Both companies were important and unsentimental military contractors and prime targets for Soviet espionage. Running security for Rand may not have been physically demanding, but it demanded sharp wits and an eye for detail.

I don’t want to understate the sheer nerve required to put a warplane into a near-vertical dive over an enemy warship firing at you with every gun it has, not to release your bomb until you are certain, and to pull out of the dive at the last second before crashing. Still, Best’s CV reads like Odysseus not Achilles to me. I think he risked his life twice on the same day not to show how brave he was, or even out of a high sense of duty, but because he was determined for the United States to win the war.

Corrections and additions welcome.

PS: There is of course a wider debate about heroism, as the coronavirus crisis reminds us daily. The classical Greek authors expanded the canon of heroism to women like Andromache, Cassandra and Antigone, and men like Orestes who struggle with a profound ethical dilemma. Hollywood should follow their example.

Reflections on The End of RBC, Part I: Mark Kleiman as Blogger

In preparing to say goodbye to RBC, I have been spending time digging through the archives. In the process I have come up with some closing reflections that I will share here in our final month of existence. The first part is dedicated to RBC founder Mark Kleiman as a blogger (I have written about my friend more generally here; this post is just about him as an RBCer).

The oldest post in RBC’s archive is this one, written by Mark on August 30, 2002. Mark started in a place of political alienation: he positively loathed the George W. Bush Presidency. Some people want to write; other people have to. I think at that historical moment Mark had to. In a previous century, he might have printed handbills protesting the actions of Parliament or the King, but in 2002, the Internet was here and blogging was exploding as a written form. Cometh the medium, cometh the man.

My main feeling in looking at the early years of RBC’s archive is admiration of Mark’s work ethic and bloody-minded persistence. Day after day he turned out post after post for a tiny audience. RBC was not a group blog but Mark Kleiman’s blog, and he was 100% responsible for keeping it going. He pushed that rock uphill and slowly built a loyal audience. Even when Steve Teles and Michael O’Hare signed on a few years in, Mark was still the workhorse content producer.

I feel good about the fact that I was a core writer here when the RBC reached its largest audience (at least 250,000 unique readers a month), but going from no audience to 10,000 regular readers is a way bigger lift than going from 150,000 to 250,000. There were a zillion blogs when Mark was starting out that never hit that initial threshold of a loyal readership base, but Mark got there and then some entirely on his own.

This also highlights what a generous person he was. Many people who had labored so hard to create a platform and a following would not have shared it. But Mark offered RBC slots to dozens of writers over the years, letting them start out with a much bigger audience for their work than they could have attained without years of effort.

Mark also deserves praise for the range of substantive areas about which he blogged in a thoughtful fashion. He is of course best known for leading the only widely read English-language blog that did serious drug policy analysis, but he also wrote intelligently about crime, politics, poverty, education, and a variety of other topics. I eulogized Mark at the American Society of Criminology last year by noting that he was really a 19th-century intellectual rather than a 21st-century social scientist: he didn’t stay confined to a discipline and didn’t rely much on complex statistics. Instead he used his roving mind and keen observational skills to make his points, and he had enough chutzpah to think (usually correctly) that he could say something intriguing on virtually any topic.

At the same time, Mark was sometimes too undisciplined in his blogging, and indulged himself in rants or political attacks that didn’t advance the argument (not that advancing an argument was his purpose in those posts; he was, I think, venting). I wonder if it might explain a mystery of Mark Kleiman as a blogger: why was it that so many of his equally successful contemporaries were hired by magazines and newspapers as in-house bloggers, but Mark never was? It may be that the only blog that could hold Mark’s eclectic intellect, temperament, and sensibilities was the one that he himself founded and ran.

Newton self-isolates

Newton’s prism experiment retold.

As a way of putting enforced seclusion to good use, it’s hard to beat Newton’s optics.

You all know the story in outline. In 1665 the bubonic plague that devastated London reached Cambridge, where Newton was a freshly minted B.A. (Cantab.). He fled to the family farm in Lincolnshire, now called Woolsthorpe Manor, though it’s more the farmhouse of a prosperous yeoman. He took with him a pair of prisms, and used them in a devastating experiment that destroyed the prevailing theory of colour. We all know that Newton discovered, or rediscovered, the colour spectrum using a glass prism placed in a beam of light. But the real breakthrough came from the second prism.

Brief flashback. This prevailing theory was a common-sense one. White sunlight passes through a stained-glass window. It becomes blue or red or yellow. It’s the medium, the stained glass, that gives the colour, right?  As Shelley wrote, 150 years later:

Life, like a dome of many-coloured glass,
Stains the white radiance of Eternity.

Wrong.

The usual story is that Newton hid out in a barn. His own sketch of the experiment disproves this.

Barns don’t have small, square, high windows. Bedrooms do – like those on the first floor of the house. Glazed windows were gradually adopted in England in the course of the 17th century. As like as not, the bedrooms in the Newton farmhouse still only had stout wooden shutters to keep out the cold Lincolnshire winds: shutters with cracks in them.

The sketch clearly shows what Newton did. He placed prism 1 in a beam of sunlight passing through a crack or hole in the shutter, producing the familiar spectrum. Where did these colours spring from? Perhaps it was the medium again, the prism glass, as Descartes had proposed. Newton constructed a screen with a ladder of more holes to allow him to isolate the different colours. He placed prism 2 in the red, blue, or other single-coloured light – and the colour stayed put: the ray was bent again, but emerged unchanged. Schematically: sunlight → prism 1 → spectrum → screen hole → one colour → prism 2 → the same colour, merely refracted.

It doesn’t make sense that prism 1 would create colours while an identical prism 2 does nothing to them. (To be quite sure you would need to replicate with prisms made from different sources of glass, but that was quickly done.) So the colours were in the light to begin with.
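For the numerically minded, here is a toy Python illustration of why the second prism settles the question. It leans on the modern thin-prism approximation (deviation ≈ (n − 1) × apex angle) and rough refractive indices for crown glass – apparatus Newton did not have and did not need:

```python
# Toy model: a thin prism deviates a ray by roughly (n - 1) * A,
# where A is the apex angle and n the refractive index, which
# varies with colour (dispersion). Rough values for crown glass.
import math

A = math.radians(30)                 # prism apex angle
n = {"red": 1.51, "blue": 1.53}      # wavelength-dependent index

def deviation_deg(colour):
    return math.degrees((n[colour] - 1) * A)

# Prism 1 sees white light, i.e. all colours at once: each colour
# bends by a different angle, so the beam fans out into a spectrum.
print({c: round(deviation_deg(c), 1) for c in n})   # red 15.3, blue 15.9

# Prism 2 sees one colour at a time: it bends the ray by a single
# fixed angle and has nothing left to separate. The colour stays put.
print(round(deviation_deg("red"), 1))                # 15.3
```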

The illusion is white light, which is really a bundle of different colours. More disturbingly to our intuition, a perceived colour is a negative property. The stained glass absorbs all colours other than the blue we see. Leaves are green because that wavelength is not absorbed by chlorophyll, which is tuned to blue and red.

It took Newton five years to write this all up into a full theory of optics. It was his 1672 paper on this that made him deservedly famous. Gravity came later (1687), though he started on that in the farmhouse too.

Go on. Crack string theory.

The two-way street

A standard Israeli till receipt:

The text is on the right, as Hebrew is written right-to-left. The numerals are on the left, written left-to-right, as is standard with Arabic numerals.

Wait a minute. Arabic text is also right-to-left. So why do its numbers go the other way?

Because they were not originally Arabic but Indian. The attribution is, incidentally, not at all controversial. The two eminent mathematicians who popularised the system in the Muslim world around 830 CE, the Persian Al-Khwarizmi (who gave his name to algorithm) and the Arab Al-Kindi, entitled their treatises respectively On the Calculation with Hindu Numerals and On the Use of the Indian Numerals: no concealed attribution there. The decimal point was the work of an earlier Iraqi Jewish scholar, Sind ibn Ali. A full treatment of zero had arrived quite late in India, in the work of Brahmagupta (628 CE).

India used and still uses a lot of different scripts. But the common ancestor of many is the Brahmi script adopted by the Buddhist emperor Ashoka, reigned 268 to 232 BCE. Brahmi is left-to-right, and so are its numerals.

Brahmi script on Ashoka Pillar (circa 250 BCE), Wikipedia

The first widely used alphabet was the Phoenician, around 1200 BCE. It was right-to-left. You have to ask: why would anybody have chosen this unhandy scheme? Unless you are left-handed: a small minority (around 10%) of most populations, but sometimes they get to be kings, high priests, merchant tycoons or tennis champions, in a position to get their way. But both the main earlier non-alphabetic scripts, Mesopotamian cuneiform and Egyptian hieroglyphs, run left-to-right, so it’s an odd choice. The Archaic Greeks switched direction back at the same time as they democratised writing, including for bawdy inscriptions on winecups.

It’s controversial whether Indian alphabets had a Phoenician ancestry – some Indian scholars argue for a purely subcontinental origin – but it’s likely. At all events, early Indian scribes followed or paralleled the majoritarian Greek choice of left-to-right, and Ashoka set it in stone.

For numbers, there is no particular advantage in one direction over the other. To evaluate a positional decimal number, you have to count outwards from the decimal point in both directions. Right-to-left and left-to-right are mirror equivalents. The direction was determined by the non-numerical script it was embedded in.
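That last point can be made concrete with a few lines of Python – the function and digit lists are my illustration, nothing historical:

```python
# Evaluate a positional decimal number from its digits, counted
# outward from the decimal point. Feed the digits nearest-first and
# the layout on the page - left-to-right, right-to-left, mirrored -
# makes no difference to the value.

def evaluate(whole_digits, fraction_digits, base=10):
    value = sum(d * base**i for i, d in enumerate(whole_digits))
    value += sum(d * base**-(i + 1) for i, d in enumerate(fraction_digits))
    return value

# 203.45: whole-part digits nearest the point first (3, 0, 2),
# fractional-part digits likewise (4, 5).
print(evaluate([3, 0, 2], [4, 5]))   # 203.45 (bar float rounding)
```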

Annals of weird infographics

A Norwegian consultancy comes up with a bafflingly cute one.

This chart, or whatever you want to call it, is from a report on the global energy transition by the big Norwegian consultancy DNV-GL. It’s not wrong or misleading so much as baffling. A new type of Tufte failure, perhaps. For their next effort, I suggest adding animated Teletubbies skiing down the mountaintops.

One clean beach

Why is my local beach now free of plastic litter?

No pretty photograph for this one. How can you take a snap of something that isn’t there?

Plastic litter on my local beach, that’s what.

I moved to Spain 15 years ago. My beach walks were interrupted by regular collections of litter, almost all plastic of one sort or another: drinks bottles, throwaway shopping bags, formless lumps of polystyrene, broken tangles of fishing net. It was densest along the shoreline, so jetsam (nice word: its counterpart, flotsam, is junk still floating).

Recently I have had to leave my spandex Supergramps suit at home. There is hardly any litter left to collect. On reflection, the change has been slow, though I’ve only just noticed it. Why has this happened?


Is ad hominem a fallacy?

Sometimes, not always. Wonkish.

I got into an interesting argument in the comments on a post I wrote on nuclear energy. Keith wrote something that raises an issue of much wider import:

There was intense opposition to nuclear power from many activists before anyone was focused on climate change, so now there is a credibility problem for critics, i.e., “Group that always hated nuclear power on principle still hates nuclear power for new reason” isn’t persuasive to most voters.

The proposition is that nuclear opponents changed their argument, which indicates opportunism and bad faith, ergo many people see this as invalidating the argument.

I challenged the fact pattern in the comments thread there, and see no evidence of the alleged tacking. (Any reader comments on the issue please in the other post thread, not here). Still, let’s assume it’s true. So what?

At first sight this is simply an example of the ad hominem fallacy, or as the French nicely say, “procès d’intention” – putting the arguer’s motives on trial. The motives and character of the person making an argument are simply irrelevant to its validity. One of the routine jobs of intellectuals, public or no, is to raise the red flag on such elementary mistakes and tell their authors to cut it out.

Up to a point, Lord Copper. The case is more complex than with a straight logical fallacy like petitio principii, and several strands need to be disentangled.

Keith is undoubtedly right to think that ordinary people do weigh credibility in assessing arguments. I suspect this is part of Daniel Kahneman’s System 1 thinking: the fast, efficient and kludgy Hare processes that allowed our distant ancestors to make quick decisions based on incomplete information. These are (though Kahneman does not make the claim) probably hard-wired into the brains of their descendants, that is us. They contrast with the slow and effortful System 2 Tortoise processes of abstract reasoning. Dismissing arguments from untrustworthy sources saves time and allows us to move on.

But, says our System 2 brain, it’s still a fallacy with a real practical downside. Dismissing tainted sources makes us miss out on some useful reasoning. This is not a remote possibility. A good example from an extremely tainted source is the Nazi opposition to smoking and to cruelty to animals. As far as I can tell, this was based on sensible premises – unlike their equally correct suspicion of the austerity financial policies recommended by bankers, influenced for at least some of them by the belief that the banks were controlled by a cabal of sinister Jewish incubi determined to impoverish Aryan Germans (link to a revolting cartoon from 1931). The term “batshit crazy” does not do justice to this evil fantasy.

Other examples are the famous Milgram and Stanford Prison experiments in psychology, which show how easy it is to get normal people to commit atrocities. As I understand it, these would in their original form now be considered unethical, as the subjects are very distressed when the façade is torn down and they find out what they are capable of. The results are still valuable, and add to the obviously unrepeatable field observations of Christopher Browning on German reserve policemen. More broadly, it is simply part of education to learn to address arguments from people you find uncongenial.

That’s one side. On the other, it is surely not required to treat tainted and reputable sources equally. Read the whole of the now-famous tirade by Daniel Davies about the justifications put forward for Gulf War II:

Good ideas do not need lots of lies told about them in order to gain public acceptance. …. Fibbers’ forecasts are worthless… There is much made by people who long for the days of their fourth form debating society about the fallacy of “argumentum ad hominem”. There is, as I have mentioned in the past, no fancy Latin term for the fallacy of “giving known liars the benefit of the doubt”, but it is in my view a much greater source of avoidable error in the world.

Fair enough. So we face a procedural dilemma. Neither full-on obedience to the ad hominem rule nor its simple rejection seem adequate. Where do we draw the line?

We do need to distinguish between claims of fact and the reasoning built on them. For facts, the legal maxim falsus in uno, falsus in omnibus is a fair guide: don’t trust liars, and if you must use their work, double-check every claim they make. But what about their reasoning? Can’t we evaluate this independently of the claims of fact?

If reasoning were all syllogisms or mathematical deduction, we no doubt could. The following real-life example is a perfectly sound logical inference, albeit from unacceptable premises:

  • Socrates is a corrupter of youth.
  • The laws of Athens say that a corrupter of youth must be put to death.
  • The laws of Athens are just.
  • Therefore Socrates must die.

If we disagree with the conclusion, and we do, it’s necessary to attack one or other of the premises. But in the typical case, facts are linked by inductive not deductive chains, calling on assumptions about the laws and state of nature as well as judgments of probabilities, both scientific and psychological. Would Saddam Hussein attack Israel if he had WMDs? Would the Iraqi people welcome an invading army of liberation? These are not yes/no facts.

In these complex assessments, trustworthiness is surely relevant. We rely on experts – doctors, statisticians, rocket scientists, economists, engineers, intelligence analysts, reporters – to inform us how the world works, drawing on long study or experience we can never ourselves emulate. We have to be able to trust them. Expert judgement is fallible, but it usually beats amateurs picking with a pin or clicking on an ad on Facebook.

This even applies, I understand, in the higher reaches of pure mathematics, the temple of deductive reasoning, where a new proof can be hundreds of pages long or the printout of a computer program exhaustively searching thousands of cases. I recall (but cannot trace) a description of the social process of acceptance of a new proof by the mathematical community, based on trust in colleagues expert in the relevant sub-area who accept the proof on detailed examination.

Trustworthiness is not a binary concept but a scale. We may allow that complete untrustworthiness is binary, as with Daniel Davies’ proven liars. So, for inductive reasoning as well as for claims of fact, the ad hominem problem becomes one of calibrating our trust discount in a particular case not involving such liars.
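To make “trust discount” a little more concrete, here is a toy Bayesian sketch – my illustration, not anything in the post or the literature. A source asserts a claim C; we believe the source reports honestly with probability r, and an unreliable report is no better than a coin flip:

```python
# Toy trust discount. Prior belief in claim C is 0.5. A source
# asserts C. With probability r the source reports honestly;
# otherwise its report is a coin flip carrying no information.

def posterior(prior, r):
    p_say_c_if_true = r + (1 - r) / 2    # honest report, or lucky flip
    p_say_c_if_false = (1 - r) / 2       # coin flip only
    joint_true = p_say_c_if_true * prior
    joint_false = p_say_c_if_false * (1 - prior)
    return joint_true / (joint_true + joint_false)

for r in (0.0, 0.5, 0.9):
    print(r, round(posterior(0.5, r), 2))
# r=0.0 -> 0.50: a worthless source moves us not at all.
# r=0.5 -> 0.75; r=0.9 -> 0.95. Davies' proven liars would need a
# model where reports anti-correlate with truth: evidence against.
```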

Keith rightly mentions the emotional investment some may have in an issue as a distorting factor. We cannot usually wish this away by only listening to neutral experts. The investment is determined not by the people but by the issue. Andrew Wiles’ proof of Fermat’s Last Theorem took him years of dedicated work, but there were no impassioned pro- and anti-theorem schools in the background. Colleagues found a hole in his first proof, which he calmly acknowledged, then fixed to general applause. Contrast drugs policy, abortion, and nuclear power, where passions run high on both sides. Mark Jacobson (anti-nuclear) actually sued Christopher Clack (pro-nuclear) and the National Academy of Sciences as publisher over a hostile rebuttal to his first 100% renewables scenario. Both are reputable career scientists.

In such fields, it is generally impossible to find anybody with deep expert knowledge who does not have strongly held opinions on one side or the other of the relevant policy. Controversy and conflict are integral to the scientific and democratic processes. This applies in spades to advocacy groups, formed specifically to advance one or other policy. Greenpeace is not going to give you a sympathetic in-depth analysis of coal-mining. But its scenarios of solar deployment have consistently been much more accurate than those of professionals at the IEA.

What should the common reader or blogger do in this situation? I can only offer bromides.

  • Eliminate known liars and hired propagandists completely from consideration, see above.
  • Take into account formal credentials, institutional affiliations and possible conflicts of interest, as guides not filters.
  • Check whether the author fairly represents the opposing view or sets up straw men, and whether they note unhelpful data or brush it under the carpet.
  • Ignore tone short of abuse. Bias can hide under a façade of judicious neutrality, passion can be combined with fairness (see the model of Mark Kleiman). (This one may be a personal preference).
  • Check your own bias and lean over backwards to be fair to the side you aren’t on. IIRC David Hume, when writing the Dialogues Concerning Natural Religion, wrote to theologians to be sure he was presenting the cases of Cleanthes and Demea as well as possible – he himself, of course, was Philo. (Can’t confirm this, help wanted.)
  • Remember that historians deal with and correct for biased sources all the time. Perhaps there is no other kind.

We now have an unsatisfactory answer to the question posed in the title: it depends. Sometimes the ad hominem rule calls for a red card (off the pitch), at others just an orange one with a dimmer (proceed with more or less caution).

Not much help? Welcome to the real world. Trust me.

[Update 30/7/2019]: A 2006 blog post by noted Australian economist John Quiggin on very similar lines.

[Update 2, 4/08/2019]: Australian conservative pundit Andrew Bolt reminds us that there is another form of ad hominem attack, one that is not only fallacious but obnoxious. He devotes an entire column in Murdoch’s Melbourne newspaper the Herald Sun to an unhinged and scurrilous personal attack on the teenage Swedish climate activist Greta Thunberg. Sample:

I have never seen a girl so young and with so many mental disorders treated by so many adults as a guru.

More here. Ms Thunberg has Asperger’s syndrome and does not conceal the fact. She shares it with several other famous people, possibly including Albert Einstein and Isaac Newton. I’m not sure what condition Andrew Bolt suffers from, but it probably ends in “-path”.

The third King

Balthasar is black, and it’s a good thing

Ravenna mosaic of the Magi

Today is Twelfth Night, Epiphany, the Christian feast commemorating an uncorroborated legend in one of the Gospels (Matthew 2, vv 1-9) of a visit by a group of Magi to the infant Jesus. By AD 500 the unnumbered Persian astrologers had become three kings. These mosaics from imperial Ravenna still depict them in Persian dress, but that knowledge was lost in the Dark Ages. Nobody in Western Europe in, say, 1100 AD had any idea what a Zoroastrian astrologer might have been like, so the shift is understandable.

What is far more puzzling is why one of the kings – usually Balthasar, sometimes Caspar – should so often be painted as black.