The GiveWell Blog

Geomagnetic storms: History’s surprising, if tentative, reassurance

This is the second post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was just released.

My last post raised the specter of a geomagnetic storm so strong it would black out electric power across continent-scale regions for months or years, triggering an economic and humanitarian disaster.

How likely is that? One relevant source of knowledge is the historical record of geomagnetic disturbances, which is what this post considers. In approaching the geomagnetic storm issue, I had read some alarming statements to the effect that global society is overdue for the geomagnetic “Big One.” So I was surprised to find reassurance in the past. In my view, the most worrying extrapolations from the historical record do not properly represent it.

I hasten to emphasize that this historical analysis is only part of the overall geomagnetic storm risk assessment. Many uncertainties should leave us uneasy, from our incomplete understanding of the sun to the historically novel reliance of today’s grid operators on satellites that are themselves vulnerable to space weather. And since the scientific record stretches back only 30–150 years (depending on the indicator) and big storms happen about once a decade, the sample is too small to support sure extrapolations of extremes.

Nevertheless the historical record and claims based on it are the focus in this and the next post. I’ll examine four (kinds of) extrapolations that have been made from the record: from the last “Big One,” the Carrington event of 1859; from the July 2012 coronal mass ejection (CME) that might have caused a storm as large if it had hit Earth; a more complex extrapolation in Kappenman (2010); and the formal statistical extrapolation of Riley (2012). I’ll save the last for the next post.

Geomagnetic storms: An introduction to the risk

[Image: NASA, via Wikipedia]
This is the first post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was recently released.

The Open Philanthropy Project has included geomagnetic storms in its list of global catastrophic risks of potential focus.

To be honest, I hadn’t heard of them either. But when I was consulting for GiveWell last fall, program officer Howie Lempel asked me to investigate the risks they pose. (Now I’m an employee of GiveWell.)

It turns out that geomagnetic storms are caused by cataclysms on the sun, which fling magnetized, electrically charged matter toward Earth. The collisions can rattle Earth’s magnetic field, sending power surges through electrical grids. The high-speed particles can also take out satellites critical for communication and navigation. The main fear is that an extreme storm would so damage electrical grids as to black out power on a continental scale for months, even years. The toll of such a disaster would be tallied in economic terms, presumably in the trillions of dollars. It would also be measured in lives lost, since all the essential infrastructure of civilization, from food transport to law enforcement, now depends on being able to plug things in and turn them on (NRC 2008, pp. 11–12).

Having examined the issue, especially its statistical aspects, I am not convinced that this scenario is as likely as some prominent voices have suggested. For example, as I will explain in a later post, Riley’s (2012) oft-cited estimate that an extreme storm—stronger than any since the advent of the modern grid—has a 12%-per-decade probability looks like an unrepresentative extrapolation from the historical record. I put the odds lower. My full report has just been posted, along with data, code, and spreadsheets.

Nevertheless, my reassurance is layered in uncertainty. The historical scientific record is short: we get a big storm about once a decade, and good data have only been collected for 30–150 years, depending on the indicator. Scientific understanding of solar dynamics is limited. Likewise for the response of grids to storms. My understanding of the state of knowledge is itself limited. On balance, significant “tail risk”—of events extreme enough to cause great suffering—should not be ruled out.

This is why I think the geomagnetic storm risk, even if overestimated by some, deserves more attention from governments than it is receiving. To date, the attention has been minimal relative to the stakes.

The rest of this post delineates how geomagnetic storms come about and why they may particularly threaten one critical component of modern electrical grids, the high-voltage transformer. Later posts will delve into what the available evidence says about the chance of a geomagnetic “perfect storm.”

Key questions about philanthropy, part 1: What is the role of a funder?

This post was updated on July 6 with language edits but substantially unchanged content.

As a new funder, we’ve found it surprisingly difficult to “learn the ropes” of philanthropy. We’ve found relatively little reading material – public or private – on some of the key questions we’re grappling with in starting a grantmaking organization, such as “What sorts of people should staff a foundation?” and “What makes a good grant?” To be sure, there is some written advice on philanthropy, but it leaves many of these foundational questions unaddressed.

As we’ve worked on the Open Philanthropy Project, we’ve accumulated a list of questions and opinions piecemeal. This blog post is the first in a series that aims to share what we’ve gathered so far. We’ll outline some of the most important questions we’ve grappled with, and we’ll give our working answer for each one, partly to help clarify what the question means, and partly to record our thoughts, which we hope will make it easier to get feedback and track our evolution over time.

We’d love to see others – particularly experienced philanthropists – write more about how they’ve thought through these questions, and other key questions we’ve neglected to raise. We hope that some day new philanthropists will be able to easily get a sense for the range of opinions among experienced funders, so that they can make informed decisions about what kind of philanthropist they want to be, rather than starting largely from scratch.

This post focuses on the question: “What is the role of a funder, relative to other organizations?” In brief:

  • At first glance, it seems like a funder’s main comparative advantage is providing funding, and one might guess that a funder would do well to stick to this role as closely as possible. In other words, a funder might seek to play a “passive” role, by considering others’ ideas and choosing which ones to fund, without trying to actively influence what partner organizations work on or how they work on it.
  • In practice, this doesn’t seem to be how the vast majority of major funders operate. It’s common for funders to develop their own strategies, provide funding restricted for specific purposes, develop ideas for new organizations and pitch them to potential founders, and more. Below, we lay out a spectrum from “highly passive” funders (focused on supporting others’ ideas) to “highly active” funders (focused on executing their own strategies, with strong oversight of grantees).
  • In the final section of this post, we lay out our rough take on when we think it’s appropriate for us, as a funder, to do more than write a check. In addition to some roles that may be familiar from for-profit investing – such as providing connections, helping with fundraising and providing basic oversight – we believe it is also worth noting the role funders play via cause selection, and the role a funder can play in filling gaps in a field by creating organizations.

Incoming Program Officer for criminal justice reform: Chloe Cockburn

We’re excited to announce that Chloe Cockburn has accepted our offer to join the Open Philanthropy Project team as a Program Officer, leading our work on criminal justice reform. She expects to start in August and to work from New York, where she is currently based. She will lead our work on developing our grantmaking strategy for criminal justice reform, selecting grantees, and sharing our reasoning and lessons learned.

Chloe comes to us from the American Civil Liberties Union (ACLU), where she currently serves as Advocacy and Policy Counsel for the ACLU’s Campaign to End Mass Incarceration, heading up the national office’s support for state-level ACLU affiliates.

The search to fill this role has been our top priority within U.S. policy over the last few months. We conducted an extensive search for applicants and interviewed many strong candidates.

We feel that hiring Chloe is one of the most important decisions we’ve yet made for the Open Philanthropy Project. In the future, we plan to write more about how we conducted the search and why we ultimately decided to make Chloe an offer.

We’re very excited to have Chloe on board to lead our investment in substantially reducing incarceration while maintaining or improving public safety.

Corrections in our review of Development Media International

Recently, we discovered a few errors in our cost-effectiveness analysis of Development Media International (DMI). After correcting these errors, our best guess of DMI’s cost per life saved has increased from $5,236 to $7,264. Additionally, we discovered some errors in our analysis of DMI’s finances. The corrected cost-effectiveness analysis is here.

These changes do not affect our bottom line about DMI, and we continue to consider it a standout charity.

What were the errors?

Crediting DMI with changes in antimalarial compliance. DMI broadcasts voice-acted stories embedded with health advice over radio into areas with high child mortality. Among other advice, the messages encourage families to seek treatment for malaria when their child has a fever. However, the messages do not specifically address what is called “compliance”: completing the full course of malaria treatment, rather than treating the child only until symptoms stop.

DMI’s midline results found that antimalarial compliance had increased more in intervention areas than in control areas (the difference was not statistically significant). In our original analysis, we gave the option of crediting or not crediting DMI’s intervention with the increased compliance (with the default set to “yes, give credit”). We originally assumed that DMI’s campaign included messages specifically about complying with antimalarial treatment. Recently, we learned that it did not. While it’s possible that the DMI campaign had an effect on compliance without messaging on it, knowing that antimalarial compliance messages were not broadcast leads us to change our best guess. In our updated estimate, we have set the default compliance option to “no, don’t credit DMI for the increased compliance.” The option to credit DMI for the increase is still available in our model. (Note 1)

Not crediting DMI with increases in antimalarial compliance increased the cost per life saved by 38.7% (from $5,236 per life saved to $7,264 per life saved). This change accounts for the entire increase in headline cost per life saved, as the errors below are contained within the antimalarial compliance calculation, and thus only affect the headline cost per life saved if DMI is credited with improving antimalarial compliance.

Other errors in our cost-effectiveness analysis. In addition to mistakenly crediting DMI with the changes in antimalarial compliance, we discovered several other errors in our analysis. These errors do not affect our updated headline cost per life saved estimate. (A short calculation after the list below reproduces the percentage figures cited in this section.)

  • Antimalarial compliance calculation: Two formulas in our compliance calculation used incorrect inputs. If we credited DMI for increasing antimalarial compliance, and did not fix other errors, these errors caused a 20.7% deflation in our cost per life saved (from $6,607 per life saved to $5,236 per life saved). (Note 2)
  • Size of malaria mortality burden: We incorrectly used the upper bound of a mortality estimate instead of the point estimate. If we credited DMI for increasing antimalarial compliance, and did not fix other errors, this error caused an 11.2% deflation in our cost per life saved (from $5,899 per life saved to $5,236 per life saved). (Note 3)
  • Cameroon data used in Burkina Faso calculation: We used data from Cameroon in our analysis of Burkina Faso, which we calculated as a comparison to the Cameroon cost per life saved. Holding other errors constant, this error caused a 125.5% inflation in our estimate of cost per life saved in Burkina Faso (from $446 per life saved to $1,006 per life saved). (Note 4)
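For readers who want to check the percentage figures in this section, here is a minimal sketch (in Python, not part of our published spreadsheets) that treats each figure as the relative change from the first dollar amount quoted in parentheses to the second. Recomputed from the rounded dollar figures, the results match the quoted percentages to within about a tenth of a percentage point, presumably because the published figures were computed from unrounded spreadsheet values.

```python
def pct_change(from_value, to_value):
    """Relative change from `from_value` to `to_value`, as a percentage."""
    return (to_value - from_value) / from_value * 100

# Headline change: no longer crediting DMI for antimalarial compliance.
print(pct_change(5236, 7264))   # ~ +38.7  (quoted: 38.7% increase)

# Antimalarial compliance calculation errors (holding the compliance credit fixed).
print(pct_change(6607, 5236))   # ~ -20.8  (quoted: 20.7% deflation)

# Malaria mortality burden: upper bound used instead of the point estimate.
print(pct_change(5899, 5236))   # ~ -11.2  (quoted: 11.2% deflation)

# Cameroon data used in the Burkina Faso comparison.
print(pct_change(446, 1006))    # ~ +125.6 (quoted: 125.5% inflation)
```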

Categorization of past expenditures. In our review of DMI, we included a categorization of DMI’s spending for 2011 to 2014. This categorization contained some errors, which caused our calculation of DMI’s total 2011-2014 spending to be $212,650 higher than its actual total spending (an inflation of 2.5%). Since we based our estimate of DMI’s costs in Cameroon on its projection of those costs rather than on past spending in Burkina Faso, these errors did not affect our final cost-effectiveness estimate for DMI. (Note 5)

How did we discover these errors?

We discovered these errors in two ways:

First, when revisiting our cost-effectiveness analyses (as part of our broader effort to improve our cost-effectiveness analyses this year), one of our research analysts discovered two of the errors (the antimalarial compliance calculation mistake and the size of malaria mortality burden mistake). As we were correcting the analysis, we discovered the Cameroon data in the Burkina Faso analysis, and realized that we weren’t certain if the DMI campaign messaged on antimalarial compliance. DMI clarified that its campaign did not message on antimalarial compliance.

Second, as part of our standard process, an analyst (who did not conduct the original work) carefully reviews a page before we publish it. We call this process a vet. While vetting our review of DMI, one of our research analysts discovered the expenditure categorization errors. In this case, the vet occurred only after the page had been published: our standard process is to vet pages before they are published, but we published this page without a vet in order to meet our December 1 deadline for publishing our new recommendations last year.

We have added these errors to our mistakes page.

How do these corrections affect GiveWell’s view of DMI?

As noted above, these changes do not affect our bottom line about DMI, and we continue to consider it a standout charity.

In particular, the change as a result of our error is small relative to our uncertainty about other inputs into our model. Specifically:

  • Our estimate of $7,264 per life saved relies solely on data from Cameroon because we guessed that Cameroon was the country where DMI was most likely to spend additional funds. We remain uncertain about where DMI will spend additional funds, and a more robust estimate of its cost-effectiveness would also incorporate estimates from other countries.
  • Our estimate credits DMI with affecting behavior for pneumonia and diarrhea but not malaria because DMI’s midline results only measured a 0.1% increase in treatment seeking for malaria in the intervention group compared to the control group. It is arguably unlikely that DMI would cause behavior change for pneumonia and diarrhea treatment-seeking, but not malaria treatment-seeking, given that the promoted behaviors are relatively similar.
  • As we wrote last December, we are uncertain about whether we should put more credence in our estimate of DMI’s cost-effectiveness, based on available data about behavior change, or in DMI’s own projection. Our cost-effectiveness analysis predicts a 3.2% decline in child mortality, while DMI’s projection (made by the people carrying out the study and bearing its considerable expenses) predicts a 10–20% decline. More in our December 2014 post.

We have not incorporated the above considerations into our cost-effectiveness analysis, but we would guess that incorporating the above could cause changes in our estimate of DMI’s cost-effectiveness significantly larger than the 38% change due to the error discussed in this post.


Footnotes

Note 1: See Cell D76.

Note 2: We are not sure how often ceasing antimalarial treatment prematurely is as bad (for the survival of the child) as not giving antimalarials at all; without an authoritative source we guessed that this is true 25% of the time.

One formula in our spreadsheet left this 25% figure out of the calculation, effectively assuming that 100% of non-compliance cases were as bad as not giving any antimalarials at all. Because the estimate now defaults to not crediting for compliance (see previous error), this error does not affect our updated headline figure for cost per life saved.

In our original cost-effectiveness estimate, Cell D88 (effective coverage before the campaign) erroneously incorporated Cell D75 (raw compliance before the campaign) as an input. In the updated cost-effectiveness estimate, Cell D88 incorporates Cell D79 (effective compliance accounting for the benefit from non-compliance).

In the original cost-effectiveness estimate, Cell D92 (effective coverage after the campaign) erroneously incorporated Cell D77 (raw compliance after the campaign) as an input. In the updated cost-effectiveness estimate, Cell D92 incorporates Cell D80 (effective compliance accounting for the benefit from non-compliance).
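To make the correction above concrete, here is a minimal sketch of how we read this adjustment (hypothetical Python, not the spreadsheet itself; the 25% figure and the structure follow the description in this note, the numeric inputs are placeholders, and the exact spreadsheet formulas may differ).

```python
# Sketch of the compliance adjustment described in this note (illustrative only).
# Assumption: an incomplete course of antimalarials is as bad as no treatment
# 25% of the time, and otherwise as good as a complete course.
SHARE_OF_NONCOMPLIANCE_AS_BAD_AS_NO_TREATMENT = 0.25  # our guess, per this note

def effective_compliance(raw_compliance):
    """Share of treated children who effectively get the benefit of treatment
    (analogue of cells D79/D80 as we read them)."""
    noncompliance = 1 - raw_compliance
    return raw_compliance + noncompliance * (1 - SHARE_OF_NONCOMPLIANCE_AS_BAD_AS_NO_TREATMENT)

def effective_coverage(treatment_coverage, raw_compliance):
    """Coverage adjusted for compliance (analogue of cells D88/D92 as we read them)."""
    return treatment_coverage * effective_compliance(raw_compliance)

def erroneous_effective_coverage(treatment_coverage, raw_compliance):
    """The error, in effect: raw compliance fed straight into the coverage
    calculation, treating every incomplete course as worthless."""
    return treatment_coverage * raw_compliance

# Placeholder inputs for illustration only.
print(effective_coverage(0.6, 0.5))            # 0.525 with the 25% adjustment
print(erroneous_effective_coverage(0.6, 0.5))  # 0.300 if all non-compliance counts as untreated
```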

Our estimate of lives saved by pneumonia treatment did not contain an equivalent error, and we did not include an equivalent compliance factor for diarrhea since treatment is only needed for as long as symptoms persist. Our model still defaults to crediting DMI with an increase in pneumonia compliance, because DMI’s campaign messaged specifically on completing courses of pneumonia treatment.

Note 3: We use the Institute for Health Metrics and Evaluation’s data visualization tool to estimate the number of deaths from specific causes in target countries. For malaria deaths, ages 1-4, in Cameroon, we incorrectly used the upper bound of the estimate (18,724.2 deaths), rather than the point estimate (9,213.71 deaths). Malaria enters our model only through the compliance adjustment, since the RCT midline results did not report an increase in malaria treatment coverage (though antimalarial compliance did increase). Because the estimate now defaults to not crediting for compliance (see above), this error does not affect our updated headline figure for cost per life saved.

In the original cost-effectiveness estimate, Cell D106 erroneously included the upper bound of age 1-4 deaths from malaria (see Cell E106 for search parameters and calculation). In the updated cost-effectiveness estimate, Cell D106 includes the point estimate for age 1-4 deaths from malaria (see Cell E106 for search parameters and calculation).

Note 4: This comparison did not affect our headline cost per life saved, because we think a campaign in a country similar to Cameroon is a more likely use of marginal unrestricted funding directed to DMI. The Burkina Faso analysis was structurally the same as the Cameroon analysis, and included the compliance calculation error described above. In addition, the Burkina Faso analysis incorrectly used information about Cameroon, rather than Burkina Faso (specifically the number of under-5 deaths from malaria, pneumonia, and diarrhea; and the campaign cost estimate).

See columns G to I in the cost-effectiveness spreadsheet for the model of the Burkina Faso campaign. See cells G105, G106, and G107 for the data on deaths from pneumonia, malaria, and diarrhea. See cell G117 for the Burkina Faso campaign cost. In the original cost-effectiveness estimate, all of these cells duplicated the data for Cameroon (see D105, D106, D107, and D117). In the updated cost-effectiveness analysis, these cells have been updated with data pertaining to Burkina Faso.

Note 5: Our categorization process involved assigning a category code to each line item of DMI’s budget, then aggregating the subtotals for each category. Two types of errors occurred during this process (see the schematic sketch after this list):

  • A line item was coded to an incorrect category that wasn’t included in the aggregation, so the item was not counted in the subtotals.
  • Some formulas for aggregating category subtotals drew inputs from incorrect ranges, causing some items to be double-counted.
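Here is a schematic sketch of the two failure modes above (hypothetical Python with invented categories and amounts; our actual work was done in a spreadsheet, and DMI’s budget itself is private).

```python
# Schematic illustration of the two categorization errors described above.
# All line items, category names, and amounts are invented.
line_items = [
    ("radio airtime",  "broadcast",  100),
    ("script writing", "production",  50),
    ("field survey",   "reserch",     30),  # mis-typed category code
]

categories = ["broadcast", "production", "research"]

# Error type 1: the mis-coded item ("reserch") matches no aggregated category,
# so it silently drops out of every subtotal.
subtotals = {cat: sum(amount for _, code, amount in line_items if code == cat)
             for cat in categories}

# Error type 2: a subtotal formula drawing on the wrong range double-counts items,
# mimicking a spreadsheet SUM() over an overlapping range.
subtotals_with_range_error = dict(subtotals)
subtotals_with_range_error["production"] += subtotals["broadcast"]

print(sum(amount for _, _, amount in line_items))   # 180: true total
print(sum(subtotals.values()))                       # 150: mis-coded item dropped
print(sum(subtotals_with_range_error.values()))      # 250: broadcast items double-counted
```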

DMI has requested that its budget be kept private. Because our categorization process involved coding the line items of DMI’s budget, we are unable to share our categorization files and the specific details about these errors.

Update on GiveWell’s web traffic / money moved: Q1 2015

In addition to evaluations of other charities, GiveWell publishes substantial evaluation of itself, from the quality of its research to its impact on donations. We publish quarterly updates regarding two key metrics: (a) donations to top charities and (b) web traffic.

The tables and chart below present basic information about our growth in money moved and web traffic in the first quarter of 2015 compared to the last two years (note 1).

Money moved and donors: first quarter

[Table: money moved and donors, first quarter 2015]

Money moved by donors who have never given more than $5,000 in a year increased 78% to about $760,000. The total number of donors in the first quarter increased to about 3,400, up about 70% from the same quarter last year and roughly in line with the previous year’s growth.

Most of our money moved is donated near the end of the year (we tracked about 70% of the total in the fourth quarter each of the last two years) and is driven by a relatively small number of large donors. Because of this, our year-to-date total money moved provides relatively limited information, and we don’t think we can reliably predict our year-end money moved (note 2). Mid-year we primarily use data on donations from smaller donors, rather than total money moved, to give a rough indication of how our influence on donations is growing.

Web traffic through April 2015

[Table: web traffic through April 2015]

Web traffic excluding Google AdWords grew moderately in the first quarter. Last year, total web traffic dropped because we removed ads on searches that we determined were not driving high-quality traffic to our site (i.e., searches with very high bounce rates and very low pages per visit).

GiveWell’s website receives elevated web traffic during “giving season” around December of each year. To adjust for this and emphasize the trend, the chart below shows the rolling sum of unique visitors over the previous twelve months, starting in December 2009 (the first period for which we have 12 months of reliable data due to an issue tracking visits in 2008).

[Chart: rolling 12-month unique visitors, December 2009 onward]
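For those curious how such a chart is constructed, here is a minimal sketch in Python/pandas with placeholder numbers (the real monthly figures and adjustments are in the spreadsheet linked below) computing a trailing twelve-month sum of monthly unique visitors.

```python
import numpy as np
import pandas as pd

# Placeholder monthly unique-visitor counts; the real figures are in the
# spreadsheet linked below, along with the adjustments described there.
months = pd.period_range("2009-01", "2010-12", freq="M")
rng = np.random.default_rng(0)
monthly = pd.Series(40_000 + rng.integers(0, 10_000, size=len(months)),
                    index=months, name="unique_visitors")

# Trailing twelve-month sum: each point is the total of the previous 12 monthly
# counts, which smooths out the December "giving season" spike. With data
# starting in January 2009, the first full window ends in December 2009,
# matching the chart's starting point.
rolling_12m = monthly.rolling(window=12).sum().dropna()
print(rolling_12m.head())
```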

We use web analytics data from two sources: Clicky and Google Analytics (except for those months for which we only have reliable data from one source). The data on visitors to our website differs between the two sources. We do not know the cause of the discrepancy (though a volunteer with a relevant technical background looked at the data for us to try to find the cause; he didn’t find any obvious problems with the data). (See Note 3 on how we count unique visitors.)

The raw data we used to generate the chart and table above (as well as notes on the issues we’ve had and adjustments we’ve made) is in this spreadsheet.



Footnotes

Note 1: Since our 2012 annual metrics report we have shifted to a reporting year that starts on February 1, rather than January 1, in order to better capture year-on-year growth in the peak giving months of December and January. Therefore, metrics for the “first quarter” reported here are for February through April.

Note 2: In total, GiveWell donors directed $1.76 million to our top charities in the first quarter of this year, compared with $1.45 million that we had tracked in the first quarter of 2014. For the reason described above, we don’t find this number to be particularly meaningful at this time of year.

Note 3: We count unique visitors over a period as the sum of monthly unique visitors. In other words, if the same person visits the site multiple times in a calendar month, they are counted once. If they visit in multiple months, they are counted once per month. Google Analytics provides ‘unique visitors by traffic source’ while Clicky provides only ‘visitors by traffic source.’ For that reason, we primarily use Google Analytics data in the calculations to exclude AdWords visitors.
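To make the counting rule above concrete, here is a tiny hypothetical sketch: a visitor is counted once per calendar month in which they visit, and a period total is the sum of those monthly counts.

```python
from collections import defaultdict

# Hypothetical visit log: (visitor_id, "YYYY-MM") pairs, invented for illustration.
visits = [
    ("alice", "2015-02"), ("alice", "2015-02"),  # two visits in one month -> counted once
    ("alice", "2015-03"),                        # same person, new month -> counted again
    ("bob",   "2015-03"),
]

monthly_uniques = defaultdict(set)
for visitor, month in visits:
    monthly_uniques[month].add(visitor)

# Unique visitors for the period = sum of monthly unique-visitor counts.
period_total = sum(len(v) for v in monthly_uniques.values())
print({m: len(v) for m, v in sorted(monthly_uniques.items())})  # {'2015-02': 1, '2015-03': 2}
print(period_total)  # 3
```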