Has violence declined, when large-scale atrocities are systematically included?

Note: I wrote the following on my personal time, then cleaned it up slightly for public consumption. This post is not directly related to GiveWell’s work, but we thought readers might find it interesting anyway. It provides a simple supplementary analysis to the argument presented in The Better Angels of Our Nature that violence has declined over time. I conclude that the book’s big-picture point stands overall, but my analysis complicates the picture, implying that declines in deaths from everyday violence have been significantly (though probably not fully) offset by higher risks of large-scale, extreme sources of violence such as world wars and oppressive regimes.

Thanks to Steven Pinker for reviewing a draft of this post.

One of my favorite nonfiction books is The Better Angels of Our Nature by Steven Pinker. It argues that “violence has declined over long stretches of time, and today we may be living in the most peaceable era in our species’ existence … it is an unmistakable development, visible on scales from millennia to years, from the waging of wars to the spanking of children.” For the most part, I think the book is quite convincing on this point.

This post focuses on what I see as the biggest missing piece of its analysis. The major large-scale atrocities of the 20th century – particularly the two World Wars and the regimes of Josef Stalin and Mao Zedong – stand as an obvious challenge to the book’s theme of declining violence over time. Better Angels does address these events, arguing that they are not as historically anomalous as they may seem. However, the book does not give a comprehensive, quantified picture of how recent centuries compare to older ones in terms of total deaths from such large-scale atrocities. It also does not compare the relative death toll of large-scale atrocities to that of other sources of violent deaths it discusses (homicide, witch hunts, executions, etc.) to determine whether the atrocities of the 20th century were violent enough to offset other kinds of improvements. While many critics have highlighted the atrocities of the 20th century, I don’t believe any of them have done this sort of analysis either, with the exception of a partial analysis on the Uncommon Descent blog.

Using some of the data cited in Better Angels, I’ve done a simple analysis to lay out estimated “deaths from major atrocities” for each century, going back to the 5th century BC. I’ve also looked a bit into how these figures would look if we included deaths from everyday violence as well. Having done this, four points stand out:

  • Two other centuries (the 13th and 17th) appear to have been at least as bloody as the 20th, though this observation is highly sensitive to imprecise death toll estimates for a very small number of atrocities. (“Bloody” here refers to high violent deaths per capita per year; “atrocity” means an enormous, large-scale mass killing, such as a war, conquest, or democide.) The 13th century death toll comes almost entirely from estimates of the damage done by Genghis Khan, while the 17th century death toll comes mostly from estimates about the fall of the Ming Dynasty. I don’t see a clear overall trend in “death risk from large-scale atrocities” from the 13th through 20th centuries.
  • Prior to the 13th century, it looks like per-century death tolls from the largest atrocities were consistently lower, and I doubt that this is an artifact of the data.
  • A documented fall in the homicide rate seems to have begun around the 15th century. That decline and the rise in deaths from very large-scale atrocities between the 13th and 15th centuries appear to be in the same ballpark as each other, consistent with the idea that violence shifted from individuals to regimes. I would guess that the net effect was a decline in violent deaths, especially when bearing certain issues with the data in mind, but it isn’t clear.
  • Large-scale atrocities account for enormous numbers of violent deaths. While Better Angels describes multiple trends, it does not compare them to each other in an apples-to-apples way. My sense is that large-scale atrocities account for far more violent deaths than most of the other sources of violence the book discusses – so the lack of a positive trend means that the overall global risk of dying from violence may not have improved greatly over time (though it probably has improved). To make this point vivid: spread out over the entire 20th century, the global rate of violent deaths from the “big four” atrocities alone (the two World Wars and the regimes of Josef Stalin and Mao Zedong) is ~50 per 100,000 people per year, comparable to the very worst national homicide rates seen today; the homicide rate for high-income countries such as the U.S. tends to be less than 1/10 as high. In other words, the two World Wars plus Stalin and Mao alone were enough to make the 20th century as a whole more dangerous than homicide makes today’s most homicide-ridden countries, and enough to offset the benefit of the European homicide rate decline that Better Angels describes from Medieval times through the Enlightenment. (A back-of-envelope version of the ~50 per 100,000 figure follows this list.)
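
To show where the ~50 per 100,000 figure comes from, here is a back-of-envelope sketch in Python. The death tolls and the average world population are illustrative round numbers of my own choosing, in the broad range commonly cited, rather than the exact inputs behind the figure above:

    # Rough check of the ~50 violent deaths per 100,000 people per year figure.
    # All inputs are illustrative round numbers, not precise historical data.
    big_four_deaths = {
        "World War I": 15e6,
        "World War II": 55e6,
        "Stalin regime": 20e6,
        "Mao regime": 40e6,
    }
    total_deaths = sum(big_four_deaths.values())  # ~130 million

    avg_world_population = 2.5e9  # assumed rough average over the 20th century
    years = 100                   # spread over the entire century

    rate = total_deaths / (avg_world_population * years) * 1e5
    print(round(rate), "violent deaths per 100,000 people per year")  # ~52

Any similarly sized inputs give a rate in the tens per 100,000 per year; the order of magnitude, not the exact value, is what matters for the comparison with homicide rates.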

Looking purely at quantified violent death risk by century, the picture that emerges from these figures is one of falling everyday violence that is significantly (though probably not fully) offset by higher risks of large-scale, extreme sources of violence such as world wars and oppressive regimes. The net impact is probably lower levels of violence, but it’s not entirely clear. The key transition looks like it was around the 13th-15th centuries; I don’t see much reason to think that the Scientific Revolution should bear much blame for rising atrocity tolls (the timing doesn’t work), but the “rule of law and rising power of governments” dynamic that Better Angels credits for much of the decline in everyday violence could be argued to have had a significant cost in terms of rare mass atrocities.

The dynamics of violent deaths discussed above are consistent with a picture of modernization as improvement in everyday conditions, accompanied by larger rare catastrophic events. This picture can be applied to more recent times as well, even as death tolls from atrocities have fallen: everyday peacefulness has continued to improve, but the potential maximum damage of global catastrophic risks (such as power grid failures, natural and engineered pandemics, climate change and artificial intelligence) seems to be on the rise as well. Today, the worst potential events are rarer and less frequent than ever, but potentially larger in scale than ever before.

Taking a more holistic view – looking at ways in which non-fatal violence has declined, the phenomenon of the “long peace” since the mid-20th century, and other improvements over time – I think it remains the case that the modern world has become greatly less violent, as well as a better place to live in other ways. I do think that the overall point of Better Angels stands with my analysis in mind, though there is some added complexity to it.

Finally, I note that the literature on this topic appears extremely thin. Steven Pinker is not a historian, yet I believe his systematic examination of historical trends in violence is the first of its kind. Many critics of Better Angels highlight the question of how 20th century atrocities compared to past atrocities. However, I’ve seen only one critic who did either of the following: (a) spelled out a more systematic comparison Pinker could have done; (b) performed a rough version of this comparison. This critic was Uncommon Descent, a blog whose main purpose appears to be arguing for Intelligent Design.

Details follow. From this point on I abbreviate Better Angels as BA.

  • I go through BA’s discussion of the major atrocities of the 20th century, and discuss why I believe more analysis is called for. More: BA’s argument and the need for more analysis
  • I discuss my own rough attempt to make these comparisons, and what it shows: a lack of clear trend in deaths from large-scale atrocities from the 13th through 20th centuries, a smaller death toll from large atrocities but a higher toll from homicides prior to the 15th century, and the relative importance of large-scale atrocities vs. other sources of violence. More: My analysis
  • I reflect on how one should think about long-term historical trends in violence and quality of life with these corrections in mind. More: Reflections

Geomagnetic storms: History’s surprising, if tentative, reassurance

This is the second post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was just released.

My last post raised the specter of a geomagnetic storm so strong it would black out electric power across continent-scale regions for months or years, triggering an economic and humanitarian disaster.

How likely is that? One relevant source of knowledge is the historical record of geomagnetic disturbances, which is what this post considers. In approaching the geomagnetic storm issue, I had read some alarming statements to the effect that global society is overdue for the geomagnetic “Big One.” So I was surprised to find reassurance in the past. In my view, the most worrying extrapolations from the historical record do not properly represent it.

I hasten to emphasize that this historical analysis is only part of the overall geomagnetic storm risk assessment. Many uncertainties should leave us uneasy, from our incomplete understanding of the sun to the historically novel reliance of today’s grid operators on satellites that are themselves vulnerable to space weather. And since the scientific record stretches back only 30–150 years (depending on the indicator) and big storms happen about once a decade, the sample is too small to support confident extrapolation of extremes.

Nevertheless, the historical record and the claims based on it are the focus of this post and the next. I’ll examine four (kinds of) extrapolations that have been made from the record: from the last “Big One,” the Carrington event of 1859; from the July 2012 coronal mass ejection (CME) that might have caused a storm as large if it had hit Earth; a more complex extrapolation in Kappenman (2010); and the formal statistical extrapolation of Riley (2012). I’ll save the last for the next post.

Geomagnetic storms: An introduction to the risk

[Image from NASA via Wikipedia]

This is the first post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was recently released.

The Open Philanthropy Project has included geomagnetic storms in its list of global catastrophic risks of potential focus.

To be honest, I hadn’t heard of them either. But when I was consulting for GiveWell last fall, program officer Howie Lempel asked me to investigate the risks they pose. (Now I’m an employee of GiveWell.)

It turns out that geomagnetic storms are caused by cataclysms on the sun, which fling magnetized matter toward Earth. The collisions can rattle Earth’s magnetic field, sending power surges through electrical grids. The high-speed particles can also take out satellites critical for communication and navigation. The main fear is that an extreme storm would so damage electrical grids as to black out power on a continental scale for months, even years. The toll of such a disaster would be tallied in economic terms, presumably in the trillions of dollars. It would also be measured in lives lost, since all the essential infrastructure of civilization, from food transport to law enforcement, now depends on being able to plug things in and turn them on (NRC 2008, pp. 11–12).

Having examined the issue, especially its statistical aspects, I am not convinced that this scenario is as likely as some prominent voices have suggested. For example, as I will explain in a later post, Riley’s (2012) oft-cited estimate that an extreme storm—stronger than any since the advent of the modern grid—has a 12%-per-decade probability looks like an unrepresentative extrapolation from the historical record. I put the odds lower. My full report has just been posted, along with data, code, and spreadsheets.
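
For intuition about what a 12%-per-decade probability implies year to year, here is a minimal sketch, assuming (my simplification, not Riley’s method) that the risk is constant and independent across years:

    # Convert a 12%-per-decade probability of an extreme storm into an
    # approximate annual probability, assuming constant, independent yearly risk.
    p_decade = 0.12
    p_year = 1 - (1 - p_decade) ** (1 / 10)
    print(f"{p_year:.2%} per year")  # ~1.27% per year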

Nevertheless, my reassurance is layered in uncertainty. The historical scientific record is short – we get a big storm about once a decade, and good data have only been collected for 30–150 years, depending on the indicator. Scientific understanding of solar dynamics is limited. Likewise for the response of grids to storms. My understanding of the state of knowledge is itself limited. On balance, significant “tail risk”—of events extreme enough to cause great suffering—should not be ruled out.

This is why I think the geomagnetic storm risk, even if overestimated by some, deserves more attention from governments than it is receiving. To date, the attention has been minimal relative to the stakes.

The rest of this post delineates how geomagnetic storms come about and why they may particularly threaten one critical component of modern electrical grids, the high-voltage transformer. Later posts will delve into what the available evidence says about the chance of a geomagnetic “perfect storm.”

Key questions about philanthropy, part 1: What is the role of a funder?

This post was updated on July 6 with language edits; the content is substantially unchanged.

As a new funder, we’ve found it surprisingly difficult to “learn the ropes” of philanthropy. We’ve found relatively little reading material – public or private – on some of the key questions we’re grappling with in starting a grantmaking organization, such as “What sorts of people should staff a foundation?” and “What makes a good grant?” To be sure, there is some written advice on philanthropy, but it leaves many of these foundational questions unaddressed.

As we’ve worked on the Open Philanthropy Project, we’ve accumulated a list of questions and opinions piecemeal. This blog post is the first in a series that aims to share what we’ve gathered so far. We’ll outline some of the most important questions we’ve grappled with, and we’ll give our working answer for each one, partly to help clarify what the question means, and partly to record our thoughts, which we hope will make it easier to get feedback and track our evolution over time.

We’d love to see others – particularly experienced philanthropists – write more about how they’ve thought through these questions, and other key questions we’ve neglected to raise. We hope that some day new philanthropists will be able to easily get a sense for the range of opinions among experienced funders, so that they can make informed decisions about what kind of philanthropist they want to be, rather than starting largely from scratch.

This post focuses on the question: “What is the role of a funder, relative to other organizations?” In brief:

  • At first glance, it seems like a funder’s main comparative advantage is providing funding, and one might guess that a funder would do well to stick to this role as closely as possible. In other words, a funder might seek to play a “passive” role, by considering others’ ideas and choosing which ones to fund, without trying to actively influence what partner organizations work on or how they work on it.
  • In practice, this doesn’t seem to be how the vast majority of major funders operate. It’s common for funders to develop their own strategies, provide funding restricted for specific purposes, develop ideas for new organizations and pitch them to potential founders, and more. Below, we lay out a spectrum from “highly passive” funders (focused on supporting others’ ideas) to “highly active” funders (focused on executing their own strategies, with strong oversight of grantees). More
  • In the final section of this post, we lay out our rough take on when we think it’s appropriate for us, as a funder, to do more than write a check. In addition to some roles that may be familiar from for-profit investing – such as providing connections, helping with fundraising and providing basic oversight – we believe it is also worth noting the role funders play via cause selection, and the role a funder can play in filling gaps in a field by creating organizations. More

Incoming Program Officer for Criminal Justice Reform: Chloe Cockburn

We’re excited to announce that Chloe Cockburn has accepted our offer to join the Open Philanthropy Project team as a Program Officer, leading our work on criminal justice reform. She expects to start in August and to work from New York, where she is currently based. She will be responsible for developing our grantmaking strategy for criminal justice reform, selecting grantees, and sharing our reasoning and lessons learned.

Chloe comes to us from the American Civil Liberties Union (ACLU), where she currently serves as Advocacy and Policy Counsel for the ACLU’s Campaign to End Mass Incarceration, heading up the national office’s support for state-level ACLU affiliates.

The search to fill this role has been our top priority within U.S. policy over the last few months. We conducted an extensive search for applicants and interviewed many strong candidates.

We feel that hiring Chloe is one of the most important decisions we’ve yet made for the Open Philanthropy Project. In the future, we plan to write more about how we conducted the search and why we ultimately decided to make Chloe an offer.

We’re very excited to have Chloe on board to lead our investment in substantially reducing incarceration while maintaining or improving public safety.

Corrections in our review of Development Media International

Recently, we discovered a few errors in our cost-effectiveness analysis of Development Media International (DMI). After correcting these errors, our best guess of DMI’s cost per life saved has increased from $5,236 to $7,264. Additionally, we discovered some errors in our analysis of DMI’s finances. The corrected cost-effectiveness analysis is here.

These changes do not affect our bottom line about DMI, and we continue to consider it a standout charity.

What were the errors?

Crediting DMI with changes in antimalarial compliance. DMI broadcasts voice-acted stories carrying health advice over the radio in areas with high child mortality. Among other advice, the messages encourage families to seek treatment for malaria when their child has a fever. However, the messages do not specifically address what is called “compliance”: completing the full course of malaria treatment, rather than treating the child only until symptoms stop.

DMI’s midline results found that antimalarial compliance had increased more in intervention areas than in control areas (though the difference was not statistically significant). In our original analysis, we gave the option of crediting or not crediting DMI’s intervention with the increased compliance (with the default set to “yes, give credit”). We originally assumed that DMI’s campaign included messages specifically about complying with antimalarial treatment. Recently, we learned that it did not. While it’s possible that the DMI campaign had an effect on compliance without messaging on it, knowing that antimalarial compliance messages were not broadcast leads us to change our best guess. In our updated estimate, we have set the default compliance option to “no, don’t credit DMI for the increased compliance.” The option to credit DMI for the increase is still available in our model. (Note 1)

Not crediting DMI with increases in antimalarial compliance increased the cost per life saved by 38.7% (from $5,236 to $7,264). This change accounts for the entire increase in the headline cost per life saved: the errors below are contained within the antimalarial compliance calculation, and thus only affect the headline figure if DMI is credited with improving antimalarial compliance.

Other errors in our cost-effectiveness analysis. In addition to mistakenly crediting DMI with the changes in antimalarial compliance, we discovered several other errors in our analysis. Because of the compliance default change above, these errors do not affect our updated headline cost per life saved.

  • Antimalarial compliance calculation: Two formulas in our compliance calculation used incorrect inputs. If we credited DMI for increasing antimalarial compliance, and did not fix other errors, these errors caused a 20.7% deflation in our cost per life saved (from $6,607 to $5,236). (Note 2)
  • Size of malaria mortality burden: We incorrectly used the upper bound of a mortality estimate instead of the point estimate. If we credited DMI for increasing antimalarial compliance, and did not fix other errors, this error caused an 11.2% deflation in our cost per life saved (from $5,899 to $5,236). (Note 3)
  • Cameroon data used in Burkina Faso calculation: We used data from Cameroon in our analysis of Burkina Faso, which we had calculated as a comparison to the Cameroon cost per life saved. Holding other errors constant, this error caused a 125.5% inflation in our estimate of the cost per life saved in Burkina Faso (from $446 to $1,006). (Note 4) (The percentage figures in this section can be approximately reproduced from the dollar estimates; see the sketch after this list.)
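
As a check, the percentage figures above follow from simple percent-change arithmetic on the dollar estimates quoted in this post; the small last-digit discrepancies presumably reflect rounding in the published dollar figures:

    # Reproduce the percent changes quoted above from the rounded dollar
    # estimates. Last-digit discrepancies (e.g. 20.8 vs. 20.7) reflect
    # rounding in the published dollar figures.
    def pct_change(old, new):
        return (new - old) / old * 100

    print(f"{pct_change(5236, 7264):+.1f}%")  # +38.7%: dropping compliance credit
    print(f"{pct_change(6607, 5236):+.1f}%")  # -20.8%: compliance formula errors
    print(f"{pct_change(5899, 5236):+.1f}%")  # -11.2%: mortality upper-bound error
    print(f"{pct_change(446, 1006):+.1f}%")   # +125.6%: Cameroon data in Burkina Faso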

Categorization of past expenditures. In our review of DMI, we included a categorization of DMI’s spending for 2011 to 2014. This categorization contained some errors, which caused our calculation of DMI’s total 2011-2014 spending to be $212,650 higher than its actual total spending (an inflation of 2.5%). Since we based our estimate of DMI’s costs in Cameroon on its projection of those costs rather than on past spending in Burkina Faso, these errors did not affect our final cost-effectiveness estimate for DMI. (Note 5)

How did we discover these errors?

We discovered these errors in two ways:

First, when revisiting our cost-effectiveness analyses (as part of our broader effort to improve them this year), one of our research analysts discovered two of the errors (the antimalarial compliance calculation mistake and the size of malaria mortality burden mistake). As we were correcting the analysis, we discovered that Cameroon data had been used in the Burkina Faso analysis, and realized that we weren’t certain whether the DMI campaign messaged on antimalarial compliance. DMI clarified that its campaign did not.

Second, as part of our standard process, an analyst who did not conduct the original work carefully reviews a page before we publish it; we call this process a vet. While vetting our review of DMI, one of our research analysts discovered the expenditure categorization errors. In this case, the vet occurred after the page had been published: we published without a vet in order to meet our December 1st deadline for publishing our new recommendations last year.

We have added these errors to our mistakes page.

How do these corrections affect GiveWell’s view of DMI?

As noted above, these changes do not affect our bottom line about DMI, and we continue to consider it a standout charity.

In particular, the change as a result of our error is small relative to our uncertainty about other inputs into our model. Specifically:

  • Our estimate of $7,264 per life saved relies solely on data from Cameroon because we guessed that Cameroon was the country where DMI was most likely to spend additional funds. We remain uncertain about where DMI will spend additional funds, and a more robust estimate of its cost-effectiveness would also incorporate estimates from other countries.
  • Our estimate credits DMI with affecting behavior for pneumonia and diarrhea but not malaria, because DMI’s midline results measured only a 0.1% increase in treatment seeking for malaria in the intervention group compared to the control group. It is arguably unlikely that DMI would cause behavior change for pneumonia and diarrhea treatment-seeking, but not malaria treatment-seeking, given that the promoted behaviors are relatively similar.
  • As we wrote last December, we are uncertain about whether we should put more credence in our estimate of DMI’s cost-effectiveness, which is based on available data about behavior change, or in DMI’s own projection. Our cost-effectiveness analysis predicts a 3.2% decline in child mortality, but DMI’s projection, made by the people carrying out a study and paying the considerable expenses associated with it, predicts 10-20%. More in our December 2014 post.

We have not incorporated the above considerations into our cost-effectiveness analysis, but we would guess that incorporating the above could cause changes in our estimate of DMI’s cost-effectiveness significantly larger than the 38% change due to the error discussed in this post.

Footnotes

Note 1: See Cell D76.

Note 2: We are not sure how often ceasing antimalarial treatment prematurely is as bad (for the survival of the child) as not giving antimalarials at all; without an authoritative source we guessed that this is true 25% of the time.

One formula in our spreadsheet left this 25% figure out of the calculation, effectively assuming that 100% of non-compliance cases were as bad as not giving any antimalarials at all. Because the estimate now defaults to not crediting for compliance (see previous error), this error does not affect our updated headline figure for cost per life saved.

In our original cost-effectiveness estimate, Cell D88 (effective coverage before the campaign) erroneously incorporated Cell D75 (raw compliance before the campaign) as an input. In the updated cost-effectiveness estimate, Cell D88 incorporates Cell D79 (effective compliance accounting for the benefit from non-compliance).

In the original cost-effectiveness estimate, Cell D92 (effective coverage after the campaign) erroneously incorporated Cell D77 (raw compliance after the campaign) as an input. In the updated cost-effectiveness estimate, Cell D92 incorporates Cell D80 (effective compliance accounting for the benefit from non-compliance).
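
To make the structure of this fix concrete, here is a minimal sketch of an effective-compliance calculation of the kind described above. The function name and the example compliance rate are hypothetical illustrations of our reading of these cells, not the spreadsheet’s actual formulas:

    # "Effective compliance" treats a non-compliant case as still beneficial
    # 75% of the time, per the guess above that stopping treatment early is
    # as bad as giving no antimalarials only 25% of the time.
    def effective_compliance(raw_compliance, fully_bad_share=0.25):
        noncompliant = 1 - raw_compliance
        return raw_compliance + noncompliant * (1 - fully_bad_share)

    raw = 0.6  # hypothetical raw compliance rate (analogous to D75/D77)
    print(effective_compliance(raw))  # ~0.9: corrected-style input (like D79/D80)
    print(raw)                        # 0.6: the input the old formulas used

Using raw compliance alone, as the original formulas effectively did, is equivalent to setting fully_bad_share to 1, i.e., assuming every non-compliant case gets no benefit from treatment.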

Our estimate of lives saved by pneumonia treatment did not contain an equivalent error, and we did not include an equivalent compliance factor for diarrhea since treatment is only needed for as long as symptoms persist. Our model still defaults to crediting DMI with an increase in pneumonia compliance, because DMI’s campaign messaged specifically on completing courses of pneumonia treatment.

Note 3: We use the Institute for Health Metrics and Evaluation’s data visualization tool to estimate the number of deaths from specific causes in target countries. For malaria deaths, ages 1-4, in Cameroon, we incorrectly used the upper bound of the estimate (18,724.2 deaths), rather than the point estimate (9,213.71 deaths). The RCT midline results did not report an increase in malaria treatment coverage, though antimalarial compliance did increase. Because the estimate now defaults to not crediting for compliance (see above), this error does not affect our updated headline figure for cost per life saved.

In the original cost-effectiveness estimate, Cell D106 erroneously included the upper bound of age 1-4 deaths from malaria (see Cell E106 for search parameters and calculation). In the updated cost-effectiveness estimate, Cell D106 includes the point estimate for age 1-4 deaths from malaria (see Cell E106 for search parameters and calculation).

Note 4: This comparison did not affect our headline cost per life saved, because we think a campaign in a country similar to Cameroon is a more likely use of marginal unrestricted funding directed to DMI. The Burkina Faso analysis was structurally the same as the Cameroon analysis, and included the compliance calculation error described above. In addition, the Burkina Faso analysis incorrectly used information about Cameroon, rather than Burkina Faso (specifically the number of under-5 deaths from malaria, pneumonia, and diarrhea; and the campaign cost estimate).

See columns G to I in the cost-effectiveness spreadsheet for the model of the Burkina Faso campaign. See cells G105, G106, and G107 for the data on deaths from pneumonia, malaria, and diarrhea. See cell G117 for the Burkina Faso campaign cost. In the original cost-effectiveness estimate, all of these cells duplicated the data for Cameroon (see D105, D106, D107, and D117). In the updated cost-effectiveness analysis, these cells have been updated with data pertaining to Burkina Faso.

Note 5: Our categorization process involved assigning a category code to each line item of DMI’s budget, then aggregating the subtotals for each category. Two types of errors occurred during this process:

  • A line item was coded to a category that was not included in the aggregation, causing the item not to be counted in the subtotals.
  • Some formulas for aggregating category subtotals drew inputs from incorrect ranges, causing some items to be double-counted.

DMI has requested that its budget be kept private. Because our categorization process involved coding the line items of DMI’s budget, we are unable to share our categorization files and the specific details about these errors.