The GiveWell Blog

Our ongoing review of Living Goods

Living Goods runs a network of Community Health Promoters (CHPs) who sell health and household goods door-to-door in their communities in Uganda and Kenya. CHPs also provide basic health counseling. In addition, Living Goods provides consulting and funding to other organizations to run similar networks in other locations. We have been considering Living Goods for a 2014 recommendation.

We’ve now spent a considerable amount of time talking to Living Goods and analyzing documents Living Goods shared with us. This post shares what we’ve learned so far and what questions we’re planning to focus on throughout the rest of our investigation. (For more detail, see our detailed interim review.)

Living Goods has successfully completed the first phase of our investigation process and we view it as a contender for a recommendation this year. We now plan to (a) make a $100,000 grant to Living Goods (as part of our “top charity participation grants,” funded by Good Ventures) and (b) continue our analysis to determine whether or not we should recommend Living Goods to donors at the end of the year.

Reasons we prioritized Living Goods

Living Goods contacted us a few months ago to inform us that the initial results from a randomized controlled trial (RCT) of its program were available. The headline result from the study was a 25% reduction in under-five mortality, a remarkable effect size.

Questions we hope to answer in our ongoing analysis

How robust is the RCT?

The authors of the RCT have not yet completed the full report on the study, so we have not been able to vet the results in detail. RCTs are generally less prone than other types of studies to methodological issues that severely undermine the results, but they are not immune to such problems. We discuss potential issues with the RCT in our interim review.

The authors are seeking publication in an academic journal and the paper will be embargoed until a journal publishes it. This may mean that we are unable to discuss the details of the study before releasing our 2014 recommendations. We are unsure how strong a recommendation of Living Goods we might make if we were unable to give the details of the main evidence for its impact.

In addition, we don’t want to overemphasize the strength of the evidence provided by a single RCT (even if it has no methodological issues). Interventions such as bednets and cash transfers are supported by multiple RCTs and other evidence.

Will future work be as impactful as past work, and how will we know?

There are some reasons to think future results could be worse than RCT results: locations for the RCT were carefully selected, perhaps to maximize impact, and malaria control in Uganda may have improved in recent years. Even if the program is somewhat less effective in the future, it may still be worth supporting.

Our main concern is both Living Goods’ and our own ability to know how well the program is performing in the future. Living Goods asks CHPs to report on activities such as treatments provided and follow-up visits, but because of the incentive structure and the lack of audits on the accuracy of these reports, we put limited weight on these metrics. Living Goods told us that its branch managers conduct randomized follow-ups with clients, but we have not seen documentation from these audits (or other evidence that these checks are happening). We’re not aware of any other monitoring that Living Goods conducts on its program.

Will other funders fill Living Goods’ funding gap?

Living Goods is looking to significantly scale up its program in the next four years. It is in discussions with current funders to see if they will increase their support. It believes it may be able to fund up to two-thirds of its scale-up through these commitments. It is continuing to seek new sources of funding. We may have to make a decision about how much funding to recommend to Living Goods in 2014 before other funders make their decisions known.

If Living Goods raises more than it needs for its scale-up, it would likely use these funds to co-fund partner organizations to start networks of CHP-like agents in other countries. This would be a riskier bet for donors, and it’s not clear how much we can expect to learn about how these programs turn out.

Is the CHP program cost-effective?

Living Goods estimates that its program will have a cost per life saved of $4,773 in 2015, decreasing to $2,773 in 2018. We have made some adjustments to this model to generate our own estimates. We estimate that Living Goods’ cost per life saved will be roughly $11,000 in 2014-2016. Making assumptions that we would guess are particularly optimistic about Living Goods, we estimate the cost per life saved at about $3,300. Pessimistic assumptions lead to an estimate of $28,000 per life saved. (Details in our interim review.) Our work on this model is ongoing.
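
To illustrate why these estimates span such a wide range, here is a minimal sketch of the kind of sensitivity check involved. Every input below is a hypothetical placeholder, not a figure from our model or Living Goods’; the point is only that plausible variation in reach, baseline mortality, and effect size moves the cost per life saved across a severalfold range, much as our optimistic and pessimistic estimates do.

```python
# Minimal sketch of a cost-per-life-saved sensitivity check.
# All inputs are hypothetical placeholders, not Living Goods' or
# GiveWell's actual model parameters.

def cost_per_life_saved(annual_cost, children_reached,
                        baseline_mortality, mortality_reduction):
    """Dollars spent per death averted, given reach and effect size."""
    deaths_averted = children_reached * baseline_mortality * mortality_reduction
    return annual_cost / deaths_averted

scenarios = {
    # (annual cost $, children reached, under-5 mortality rate, reduction)
    "optimistic":  (10_000_000, 1_500_000, 0.010, 0.25),
    "central":     (10_000_000, 1_000_000, 0.008, 0.15),
    "pessimistic": (10_000_000,   600_000, 0.006, 0.10),
}

for name, params in scenarios.items():
    print(f"{name:>11}: ${cost_per_life_saved(*params):,.0f} per life saved")
```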

Our guess is that Living Goods’ program is in the same range as (though slightly less cost-effective than) the most cost-effective programs we have considered, such as bednets, deworming, and iodization.

(See our page on cost-effectiveness for more on the role these estimates play in our recommendations.)

Expert philanthropy vs. broad philanthropy

It seems to me that the most common model in philanthropy – seen at nearly every major staffed foundation – is to have staff who specialize in a particular cause (for example, specializing in criminal justice policy). Often, such staff have a very strong background in the cause before they come to the foundation, and they generally seem to focus their time exclusively on one cause – to the point of becoming (if they weren’t already) an expert in it.

I think this model makes a great deal of sense, partly for reasons we’ve discussed previously. Getting to know the people, organizations, literature, challenges, etc. most relevant to a particular cause is a significant investment – a “fixed cost” that can then make one more knowledgeable about all giving opportunities within that cause. Furthermore, evaluating and following a single giving opportunity can be a great deal of work. Now that the Open Philanthropy Project has made some early grants, it is hitting home just how many questions we could – and, it feels, should – ask about each. If we want to follow each grant to the best of our abilities, we’ll need to allocate a lot of staff time to each; having staff specialize in causes is likely the only way to do so efficiently.

Yet I’m not convinced that this model is the right one for us. Depth comes at the price of breadth. With our limited management capacity, following each grant to the best of our abilities shouldn’t be assumed to be the right approach. I’ve been asking myself whether there’s a way to be involved in many more causes at a much lower level of depth, looking for the most outstanding giving opportunities across the whole broad set of causes. I’ve been thinking about this question recently mostly in the context of policy, which will be the focus of this post.

Having a “low-depth” involvement in a given issue could take a number of forms – for example:

  • One might make a concerted effort to identify a small number of “big bets” related to an issue, and focus effort on following these “big bets.”
  • One might make a concerted effort to identify a small number of “gaps” – aspects of an issue that get very little attention and have very few people working on them – and focus grantmaking activity on these “gaps.” This approach could be consistent with making a relatively large number of grants in the hopes that some grantee gains traction.
  • One might focus on identifying a trusted advisor in an issue space, and make a small number of grants as recommended by the advisor (this is largely the approach behind our grants so far on labor mobility).
  • One might co-fund the work of another major funder, join a collaboration of major funders, or support the work of a large and established organization, and gain more familiarity with the issue over time by following this partner’s work.
  • One might aim for a very basic level of understanding of an issue – in particular, which way we would like to see policy change relative to the status quo, and whom we feel aligned enough with to take their advice. With this understanding in hand for multiple issues, one might then be well-positioned to support: (a) “cross-issue” organizations and projects that are likely to have a small impact on many issues; (b) campaigns aiming to take advantage of short-term “windows of opportunity” that arise for various issues.

I can see a few arguments in favor of trying one or more of these, all of which make it possible to take some form of a “breadth”-oriented approach (more causes, with a lower degree of depth and expertise, than the standard cause-specialist approach would involve).

First and most importantly, we will never know as much about grantees’ work as they do, and it arguably makes more sense to think of grantees as the relevant experts. The best funder might be the one who picks qualified grantees in an important cause, supports them and otherwise stays out of their way. With this frame in mind, focusing on in-house expertise is arguably inefficient (in the sense that our expertise would become somewhat redundant with grantees’) and possibly even counterproductive (in the sense that it could lead us to be overly “active” with grantees, pushing them toward our theory of the case).

Of course, picking qualified grantees is a serious challenge, and one that is likely harder without deep context. But the question is how much additional benefit deep context provides. Even without expertise, it is possible to get some signals of grantee quality – general reputation, past accomplishments, etc. – and even with expertise, there will be a great deal of uncertainty. In a high-risk model of the world, where perhaps 10% of one’s grants will account for 90% of one’s impact, it may be better to pick “potentially outstanding” grantees from a relatively broad space of possibilities than to limit oneself to a narrower space, while having more precise and reliable ways of distinguishing between marginally better or worse giving opportunities.

Expertise would also be an advantage for following a grant, learning from it and continuing to help grantees as they progress. However, it seems quite possible to me that the best grantees tend to be self-driven and improvisatory, such that following them closely wouldn’t add value to what they’re doing, and would largely serve to assuage our own anxiety without doing much to increase our impact.

Secondly, the best giving opportunities may sometimes cut across multiple causes and be hard to assess if we’ve engaged seriously with only a small number of causes. This issue seems particularly important to me in the area of U.S. policy, where the idea of strengthening the network of people who share our values – or the platform representing those values – could be very important. If we focus exclusively on a small number of policy areas, and give little attention to others, we could end up lacking the knowledge and networks to perform well on this goal, and we could be ill-positioned to evaluate the ramifications of a giving opportunity for the full set of issues we care about. (An argument for pursuing both breadth- and depth-oriented strategies simultaneously is that the depth-oriented work may surface opportunities that are relevant to a large number of issues, and the breadth-oriented work could then be helpful in assessing such opportunities.)

Finally, it seems to us that there are some issue areas where the giving opportunities are quite limited – particularly issues that we think of as green fields, as well as neglected sub-areas of other issues. Devoting a full staff member to such an issue would pose particular risks in terms of inefficiency, and it might be better to fund the few available opportunities while waiting for more to emerge.

I think the cases of Ed Scott and the Sandler Foundation represent interesting examples of what a philanthropist can accomplish despite not specializing exclusively in a particular cause, and despite not building out a staff of domain experts.

  • Ruth Levine of the Hewlett Foundation writes that Ed Scott has “built at least four excellent organizations from the ground up” – including the Center for Global Development, which we have supported and think positively of. She adds that “Far more than many others seem to be able to do, he lets go – and as he does, the organizations he supports go further and faster than if he were holding on tight.”
  • We know less about the Sandler Foundation, but it seems to have played a founding role in several prominent organizations and to be well-respected by many, despite not having staff who specialize in a particular cause over the long run. It does do deep cause investigations in sequence, in order to identify promising grantees, but staff work on new cause investigations even while maintaining their funding of previous causes and organizations; this approach therefore seems distinct from the traditional foundation model and can be thought of as one approach to the kind of “broad” work outlined here. One of its core principles is that of looking for excellence in organizations and in leadership, and entrusting those it supports with long-term, flexible support (rather than continuously revisiting and revising the terms of grants).

In both cases, from what we can tell (and we are considering trying to learn more via case studies), a funder helped create organizations that shared a broad set of values but weren’t focused on a particular policy issue; the funder did not appear to become or hire a domain expert, and may have been more effective by being less hands-on than is the norm among major foundations. My point isn’t that these funders should be emulated in every way (I know relatively little about them), but that the “cause-focused, domain expert” model of grantmaking is not the only viable one.

I’m not yet sure of exactly what it would look like for us to try a breadth-emphasizing model, and I know that we don’t want this to be the only model we try. The depth-emphasizing model has much to recommend it. I can anticipate that, in some ways, a breadth-emphasizing model could be both genuinely risky and psychologically challenging, as we’d have a lower level of knowledge about our grants than many foundations have of theirs. But I think the potential benefits are big, and I think this idea is worth experimenting with.

Our ongoing review of Development Media International

Development Media International (DMI) produces radio and television programs in developing countries that encourage people to adopt improved health practices, such as exclusive breastfeeding of infants and seeking treatment for symptoms associated with fatal diseases. The program aims to reduce mortality of children under five years old.

In May, we wrote that we were considering DMI for a 2014 top charity recommendation. We’ve now spent a considerable amount of time talking to DMI and analyzing documents DMI shared with us. This post shares what we’ve learned so far and what questions we’re planning to focus on throughout the rest of our investigation. (For more, see our detailed interim review.)

DMI has successfully completed the first phase of our investigation process and we view it as a contender for a recommendation this year. We now plan to (a) make a $100,000 grant to DMI (as part of our “top charity participation grants,” funded by Good Ventures) and (b) continue our analysis to determine whether or not we should recommend DMI to donors at the end of the year, including conducting a site visit to Burkina Faso.

Reasons we prioritized DMI

We’ve long been interested in programs that aim to use mass media (e.g., radio or television programming) to promote and disseminate messages on potentially life-saving practices. It’s quite plausible to us that messages on TV or the radio could influence behavior and could reach large numbers of people at relatively low cost, leading to high cost-effectiveness in terms of lives saved or improved per dollar spent. However, we previously deprioritized our work in this area due to limitations in the available evidence of effectiveness.

DMI is currently conducting a randomized controlled trial (RCT) of its program, preliminary results from which became available in April.

Questions we hope to answer in our ongoing analysis

How robust are the midline results from the RCT?

Our level of confidence in the success of DMI’s program rests heavily on the midline results from the RCT, but there are reasons these results should be interpreted with caution. In particular:

  • The treatment group (i.e., the regions that were randomly selected to hear DMI’s broadcasts) had noticeably higher levels of child mortality and less access to healthcare at baseline than the control group. Details in our interim review.
  • While DMI plans to collect data on mortality, the only results reported thus far are based on self-reported behavior change, the reliability of which is questionable.

Many details of the RCT are not yet available publicly as the study is ongoing, and we have a number of questions about it that could affect our view of DMI’s impact. In particular, we would like to know more about the activities of other health programs in Burkina Faso during the trial period, and the extent to which the midline results are driven by certain villages or regions versus consistent behavior change across all participating areas.

It’s also important to note that the evidence for DMI’s program relies heavily on a single unpublished RCT; interventions such as bednets and cash transfers are supported by multiple peer-reviewed RCTs and other evidence.

How representative is DMI’s impact in Burkina Faso of its likely impact in other countries?

There are some reasons to expect that DMI’s future results will vary from the RCT. For example, much of DMI’s expected impact comes from behavior changes that require access to health services or products to be effective, such as seeking treatment when a child displays symptoms of malaria. DMI’s ability to predict access in other countries is critical to predicting impact, and may be limited. Details in our interim review.

How cost-effective is DMI’s program?

We have not yet completed a full cost-effectiveness estimate for DMI’s work but plan to do so for our final review of DMI.

DMI estimates that the cost-effectiveness of its intervention is extremely strong relative to other cost-effective interventions (for example, more than 10x stronger than our estimate for our strongest top charities). We expect our final estimate of DMI’s “cost per life saved” to be substantially less optimistic, though still within the range of our current priority programs. Details in our interim review.

Will future work be as impactful as past work, and how will we know?

We do not know how DMI will design its attempts to measure its future programs’ impact on behavior change.

 

A promising study on the long-term effects of deworming

This year, Dr. Kevin Croke, a post-doctoral fellow at the Harvard School of Public Health, released a study that we consider an important addition to the evidence for deworming children. The study (Croke 2014) followed up on a randomized controlled trial (RCT) of a deworming program in Uganda and found that, 7 to 8 years later, children living in treatment areas scored higher on tests of literacy and numeracy than children in control areas. This finding reinforces the findings of the only other RCT examining the long-term effects of deworming, which we had previously considered to be relatively strong but still had substantial reservations about. By providing a second data point that is free of some of our previous concerns, Croke 2014 substantially changes our view of the evidence.

Two of our top charities, the Deworm the World Initiative (DtWI) led by Evidence Action and the Schistosomiasis Control Initiative (SCI), focus on deworming. We have not yet concluded our examination of Croke 2014, but at this point we think it is likely to lead us to view these charities as more cost-effective.

Overview of Croke 2014

Croke 2014 follows up on an RCT that involved 48 parishes (the administrative level above the village and below the sub-county) in 5 districts of Uganda selected based on the prevalence of worms in the districts. Half of the parishes were randomly assigned to a treatment group and the other half to a control. In all the districts, community organizations delivered basic health services, like vitamin A supplementation, vaccination and growth monitoring, through regular Child Health Days (CHDs). Children in the treatment group received albendazole (a deworming drug) during CHDs in addition to the other services offered, while children in the control just received the usual services.

Croke analyzed surveys conducted by an education nonprofit several years later that happened to sample 22 of the parishes in the original RCT. He compared children in the sampled treatment parishes who were 1 to 7 years old (the age group offered albendazole) at the time of the program to children of the same age in the sampled control parishes. Children in the treatment group scored about 1/3 of a standard deviation higher on tests of literacy and numeracy.
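
For readers less familiar with effect sizes expressed in standard deviations, the sketch below shows the standard calculation behind a figure like “1/3 of a standard deviation”: the difference in mean test scores between groups, divided by the pooled standard deviation. The scores are simulated for illustration, not data from Croke 2014.

```python
import numpy as np

# Simulated test scores; these are NOT data from Croke 2014.
rng = np.random.default_rng(0)
control = rng.normal(loc=50.0, scale=15.0, size=500)
treatment = rng.normal(loc=55.0, scale=15.0, size=500)  # +5 points on average

# Standardized effect: difference in means divided by the pooled SD.
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
effect = (treatment.mean() - control.mean()) / pooled_sd
print(f"standardized effect: {effect:.2f} SD")  # roughly 0.33 SD here
```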

Strengths and significance

Few other studies have rigorously examined the long-term effects of deworming. Up until now, we’ve relied heavily on two studies: (a) Bleakley 2004, a study of the Rockefeller Sanitary Commission’s campaign to eradicate hookworm in the American South in the early 20th century; (b) a series of studies in Kenya, in which school deworming was rolled out on a purposefully arbitrary (randomization-like) basis, and children who received more years of deworming were compared to children who had received fewer. These studies suggest the possibility that deworming children dramatically improves their productivity later in life by subtly improving their development throughout childhood. In our view, the case for deworming largely rests on these long-term, developmental effects, because the intervention seems to have few obvious short-term benefits.

Having two relatively recent RCTs from Sub-Saharan Africa increases our confidence in long-term benefits far more than having just one RCT, especially because we have had substantial reservations about the RCT in Kenya – some of which seem notably less applicable to Croke 2014. Specifically:

  • The earlier RCT was a trial of “combination deworming” – treatment of both schistosomiasis (with praziquantel) and soil-transmitted helminths (with albendazole). Croke 2014 looks only at albendazole. This is particularly important because one of our current top charities – the Deworm the World Initiative – operates largely in India, where only albendazole is used.
  • Regarding the earlier study, we also thought it was plausible that efforts to encourage students to attend school in order to receive treatment might have accounted for some of the effect found in Baird et al 2012 (the follow-up of Miguel and Kremer 2004). The intervention examined in Croke 2014 appears far less likely to introduce other positive changes into the treatment group, because it involves the addition of albendazole to an existing program rather than an intensive program of deworming in schools in the treatment compared to the absence of any program in the control.
  • We worried that the results of Miguel and Kremer 2004 (the RCT in Kenya) might not generalize to other areas, because of extraordinary flooding caused by the El Niño climate pattern during the study and abnormally high infection rates in the study area (more). Croke 2014 appears to have had a lower (though still high) initial prevalence of infections (the program selected districts based on the high rates of worms found by Kabatereine et al 2001, which estimated that 60% of children ages 5-10 were infected, primarily with hookworm). El Niño may still have affected the study, however, because the parishes examined in Croke 2014 are very close (some within about 10 miles) to the district in which Miguel and Kremer 2004 took place. The program evaluated in Croke 2014 started about 2 years after El Niño, but we’re not sure whether this amount of lag time would lead to lower or higher infection rates.

In our current cost-effectiveness analyses for deworming, we have a “replicability adjustment” to account for the possibility that Baird et al 2012 wouldn’t necessarily hold up on replication, as well as an “external validity adjustment” to account for the fact that most deworming programs likely take place in less heavily infected areas. We will be revisiting both of these adjustments, which will likely result in improved estimated cost-effectiveness for deworming.
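
To make the role of these adjustments concrete, here is a minimal sketch of how such factors enter a cost-effectiveness estimate: each is a multiplicative discount on the benefit implied by taking the study results at face value, so raising either adjustment in light of new evidence directly raises the estimated benefit per dollar. The factor values below are placeholders, not the figures from our actual analyses.

```python
# Illustrative only: placeholder values, not GiveWell's actual adjustments.
raw_benefit_per_dollar = 1.0        # benefit if study results are taken at face value
replicability_adjustment = 0.5      # chance the result would hold up on replication
external_validity_adjustment = 0.5  # typical programs treat less-infected areas

adjusted_benefit = (raw_benefit_per_dollar
                    * replicability_adjustment
                    * external_validity_adjustment)
print(f"adjusted benefit per dollar: {adjusted_benefit:.2f}")

# A second supporting RCT would justify a higher replicability adjustment,
# raising the estimated benefit per dollar (i.e., improving cost-effectiveness):
adjusted_benefit_updated = raw_benefit_per_dollar * 0.7 * external_validity_adjustment
print(f"after update: {adjusted_benefit_updated:.2f}")
```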

Remaining questions

We still have some concerns about the evidence.

  • El Niño may have affected the parishes examined in Croke 2014 just as it affected the schools in Miguel and Kremer 2004, potentially causing unrepresentatively high infection rates and limiting the generalizability of both studies.
  • Though Croke 2014 finds a large increase in test scores, Baird et al 2012 does not.
  • We worry about the sensitivity of the results to the outcome and control variables used in regressions and about selective reporting of results.
  • We also worry about publication bias. Perhaps other parish-level surveys would have supplied other outcomes. We wonder if other analyses employing a similar methodology that did not find an effect would have been published.
  • The study included a relatively small number of clusters. Croke 2014 reports on a few different regressions and methods of calculating the standard error of the treatment effect, which lead to different estimates of that standard error. In one more conservative analysis, for instance, the effect on the combined literacy and numeracy test scores is significant only at the 90% confidence level. (A simulated sketch of why the method of calculating standard errors matters with few clusters follows this list.)
  • Finally, we still can’t articulate any mechanism for the long-term benefits of deworming supported by data (we haven’t seen notable impacts on weight or other health or nutrition measures).
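
On the standard-error point above, the sketch below simulates why the calculation method matters when there are few clusters: scores within a parish share a common shock, and a conventional standard error that ignores this within-parish correlation overstates precision relative to a cluster-robust one. The data are simulated, not Croke’s, and the setup (22 parishes, half treated) only loosely mirrors the study; this uses statsmodels’ cluster-robust covariance option.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 22 parishes (clusters), half treated, with scores that are
# correlated within each parish via a shared parish-level shock.
rng = np.random.default_rng(1)
rows = []
for parish in range(22):
    treated = int(parish < 11)
    parish_shock = rng.normal(0.0, 0.5)
    for _ in range(40):  # children sampled per parish
        score = 0.33 * treated + parish_shock + rng.normal(0.0, 1.0)
        rows.append({"parish": parish, "treated": treated, "score": score})
df = pd.DataFrame(rows)

model = smf.ols("score ~ treated", data=df)
fit_naive = model.fit()  # conventional SEs: ignore within-parish correlation
fit_cluster = model.fit(cov_type="cluster", cov_kwds={"groups": df["parish"]})
print(f"conventional SE:   {fit_naive.bse['treated']:.3f}")
print(f"cluster-robust SE: {fit_cluster.bse['treated']:.3f}")  # typically larger
```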

Bottom line

We have not yet concluded our examination of Croke 2014, though we have looked it over closely enough to feel that it is very likely to result in a substantial positive revision of our view on deworming, and therefore of our views on two current top charities as well.

In combination with the earlier study, Croke 2014 represents a major update regarding the case for deworming; we’re very glad to see this new evidence generated, and hope that it will become a prominent part of the dialogue around deworming. We intend to do our part by updating our content for this giving season.

Guest post from Tom Rutledge

The following is a guest post from one of our Board members, Tom Rutledge, that he wrote to reflect on his personal experiences as a GiveWell supporter.

I was a Jerk for GiveWell

When I first learned about GiveWell, I was a real jerk about it.

I blame the preceding years. Before GiveWell, I had accumulated a lot of bad feelings related to giving. My charitable activities had consisted of the usual, in the usual categories: alumni funds, causes that friends solicited for, affinity groups I was somehow part of, and the odd fund drive related to an event.

Along the way, I didn’t really think I was doing much good. It gnawed at me that most causes were not transparent and didn’t deliver concrete information about results. You couldn’t compare one charity with another. My giving didn’t make sense. It was haphazard, reactive, and because of my network, probably biased away from the greatest needs and toward “charity for rich people.” And I knew it.

So when I met Holden and Elie, heard their story, and realized that GiveWell was doing philanthropy the way I wanted to do philanthropy, it was very exciting. They weren’t merely doing it my way – they were doing it in public, showing everyone how philanthropy should be done.

If I supported GiveWell, they would move my money – and other people’s money – to really effective causes. Moreover, by modeling evidence-based philanthropy for other organizations, they would indirectly route even more money to other effective causes. The very act of placing importance on effectiveness was radical and powerful. Words like “leverage,” “multiplier effect” and “market efficiency” danced gleefully in my little economist’s brain.

This was obviously the right way to do it. And if you were around me when this metaphysical axiom dawned on me, I probably explained this to you. Unfortunately, my recollection is that I was not very diplomatic about it. My memory is serving up some rather unflattering scenes. I may have subjected one victim to a high-volume, close-talking, garlic-breathy rant. Another may have been told she was effectively killing people by giving to her local PTA. For another, I might have gotten all intellectual, polishing my monocle and invoking Freud and Marx as I unpacked the relationship between slick corporate marketing and his Oedipal insecurities.

My mind may be exaggerating the specifics of those incidents. But I’m pretty sure I had a knack for turning cocktail party conversation into combat.

The only explanation I can offer is that I honestly didn’t understand why the GiveWell model, as I saw it, was not persuasive to absolutely everyone. How could you consider an opportunity to do more good with your donated dollar, in a repeatable and replicable way, and just say “pass”? It did not compute.

But eventually…and fortunately…something else dawned on me.

It’s your money. You can do what you want with it, because you have your own priorities. You can take time off from work to take care of a sick friend and live off your savings for a while. You can support a political cause. You can sponsor a park bench in the Hamptons and call it charity. You can buy yourself a sweet car.

There are a lot of perfectly good ways to live. I see that, and I promise you, I’m less of a jerk now.

For starters, I’ve accepted that the GiveWell story just doesn’t work for some people. It’s not an emotional or visceral appeal. GiveWell is often recommending causes that are far away and seem abstract. You have to overcome the fact that you can’t see the results with your own eyes. You have to put weight on how dire the needs are that are being addressed, and you have to derive confidence from the depth and quality of GiveWell’s research.

In addition, the needs addressed by GiveWell’s recommendations probably don’t involve your community or your pet projects. GiveWell doesn’t have a punchy or plaintive marketing pitch. Compared to other giving opportunities, there are a lot fewer stories.

For many, this is just not what charity is all about, period. I once had a dream of persuading these people. But having now gone through all the Kübler-Ross stages – anger, bargaining, depression and finally acceptance – I have let that dream die.

My own evolution has paralleled GiveWell’s in its efforts to enlist supporters. In GiveWell’s early strategy discussions, the board and Elie and Holden argued a lot about how to market the product. Do GiveWell prospects work on Wall Street, in Silicon Valley, in academia…where are they? Will they respond to personal appeals, convincing analyses of top charities, endorsements of experts…what? I held out hope that GiveWell’s mission was just one clever marketing insight away from spreading like a cat video.

But as of now, and with the board’s assent, Holden and Elie have prioritized research over outreach. The evidence suggests that GiveWell’s story has a niche appeal, and it’s the quality of the research that appeals to that niche. So that’s where we are.

I have voted in favor of that approach, but I’m not sitting quietly with my hands folded. The GiveWell idea is a big idea with the potential for a big audience. We can stick to our niche for now, but I believe that niche will expand over time and eventually stop looking so much like a niche.

On a day-to-day basis, I haven’t completely given up on my evangelism. There are still people like me who have been waiting to hear the GiveWell message, and there are others who will find the arguments compelling once they do.

I have gotten more civilized about courting these people. I recall one particular conversation where I was persuasive without resorting to jerk-ery.

Because my mother died of a particular disease, I am often approached to support organizations involved with that disease. Despite my very painful personal experience, I don’t feel any particular allegiance to those organizations. As I told one supporter, I don’t really care as much about moms with that disease as I care about moms in general. If I can save ten moms’ lives for the cost of saving one mom with the disease in question, I’d rather save ten.

It worked. The supporter agreed.

Maybe it had something to do with brushing my teeth and ditching the monocle.

Challenges of transparency

When we first started GiveWell, we wondered why major staffed foundations didn’t write more about the thinking behind their giving (and the results of it), in order to share their knowledge and influence others. We’ve tried to counterbalance normal practice by making transparency one of our core values.

Our commitment to transparency is as strong as it’s ever been; we derive major benefits from it, and we believe there’s far too little public information and discussion about giving. At the same time, we’ve learned a lot about just why transparency in philanthropy is so difficult, and we no longer find it mysterious that it is so rare. This post summarizes what we see as the biggest challenges of being public and open about giving decisions.

Summary:

  • Everything we publish can help or hurt our brand. We put substantial effort into the accuracy, clarity and tone of our public content.
  • In most cases, writing about our thinking and our results also means writing about other organizations (the organizations we recommend and support, both via our traditional work and via the Open Philanthropy Project). We don’t want to hurt or upset other organizations, and we put substantial effort into making our public content both (a) amenable to the organizations we write about and (b) fair and complete in its characterization of our views.
  • The level of transparency we seek is unusual, meaning it often takes substantial effort to communicate our expectations and processes to the organizations we recommend and support.
  • The interaction of the above challenges can make it extremely difficult and time-consuming to write publicly about grants, recommendations, and grantee progress. In addition, it can be the cause of major delays between drafting and publication: much of our content takes weeks or months to go from draft to published piece, as we solicit feedback from parties who might be affected by the content.
  • The costs of transparency are significant, but we continue to feel they are outweighed by the benefits. Public writeups help clarify and improve our thinking; they play a major role in our credibility with our audience; and they represent a step toward a world in which there is far more, and better, information available to help donors give well.
  • We don’t think it necessarily makes sense for all philanthropic organizations to put as much effort into transparency as we do. Rather, we see transparency as one of the core areas in which we are trying to experiment, innovate, and challenge the status quo.

Challenge 1: protecting our brand
Because of the work we’ve put into explaining and defending our positions in the past, we benefit substantially from our reputation and from word-of-mouth. Nobody checks every statement and footnote on our site; even our closest readers often rely on the idea that content under the GiveWell name has a certain degree of thoroughness, reliability and clarity. (We believe a common way of approaching GiveWell content is to spot-check the occasional claim, rather than checking all claims or no claims; for our content to perform well under arbitrary spot-checks, it needs to be of fairly consistently high quality.)

Somewhat ironically, this dynamic means we’re hesitant to publish content that we haven’t thought through, checked out, and worded carefully (in order to say what we feel is important and defensible, and no more). We feel that poorly researched or poorly worded content could erode the brand we’ve built up, and could make people feel that they have to choose between checking everything we write themselves and simply placing less weight on our claims. (In general, most of our busy audience would likely choose the latter in this case.)

Giving decisions are generally impossible to justify purely with appeals to facts and logic; there are many judgment calls and a great deal of guesswork even in the most seemingly straightforward decisions. This makes it particularly challenging to write about them while preserving a basic level of credibility and a strong brand, and we don’t know of clear role models for this endeavor. (A funder once told me that s/he didn’t want to publish the reasoning behind giving decisions because this reasoning wasn’t up to academic standards, and so would not be perceived as reasonable or credible.)

Rather than aim to write only what we can back with hard evidence, and rather than write everything we believe regardless of the level of support, we put a great deal of effort into being clear about why we believe what we believe – whether it is because of solid evidence or simply a guess. (Phrases such as “we would guess that” are common in our content.) This allows us to share a good deal of our thinking (not just the parts of it that are strongly supported) while still maintaining credibility. But it requires a careful, thoughtful, and somewhat distinctive writing style that has been an ongoing challenge to develop and maintain.

As our brand becomes stronger, our audience becomes broader and our staff grows, the challenges of maintaining the appropriate style – and backing up our statements appropriately – intensify. For example, we now put most public pages through a “vet” – in which a staff member who was not involved in writing the page goes carefully through its statements, making sure that each is appropriately supported – before publication. (We do not do this for all pages, and we generally do not do it for blog posts, which are more informal.)

Challenge 2: information about us is information about grantees
We seek to be highly open about the lessons we’ve learned and the results we’ve seen from our work – including developments that contradict our expectations and reflect poorly on our earlier decisions (which are often particularly valuable for learning). Because our function is to recommend other organizations for donations and grants, being open about our performance almost always means being open about another organization’s performance as well. (For simplicity, the rest of this section will refer to GiveWell-recommended organizations, as well as Open Philanthropy Project grantees, as “grantees.”)

While we want to be open, we don’t want to create a dynamic in which working with us creates significant risks for grantees. (This could lead good organizations to avoid working with us.) So we’ve had to find ways of balancing the goal of openness with the goal of making it “safe” for an organization to work with us. Doing so has been a major challenge and the subject of many long-running discussions, both internally and with grantees.

Things we’ve done to strike the right balance include:

  • Putting serious effort into communicating expectations up front. Simply saying “we value transparency” is not enough to communicate to a grantee what sorts of things we might want to write in the future. We generally try to send examples of past things we’ve written (such as our 2013 updates on Against Malaria Foundation and Schistosomiasis Control Initiative), and we often try to agree on an initial writeup before going forward with a grant or recommendation.
  • Giving grantees ample opportunity to comment on pending writeups that discuss them. There have been cases in which a writeup has been the subject of weeks, or even months, of discussion and negotiation.
  • Giving grantees a standing opportunity to retract non-public information, including even the fact that they’ve participated in our process. (Organizations considered as potential top charities have often been given the option to withdraw from our process and have us publish a page simply saying “Organization X declined to participate in our process”; this option has sometimes been invoked quite early in the process and has sometimes been invoked quite late, after a draft writeup has been produced and shared with the organization.)
  • Being generally hesitant to run a writeup that a grantee is highly uncomfortable with. We’re often willing to put substantial effort into working on a writeup’s language, until it both (a) communicates the important aspects of our views and (b) minimizes grantees’ concerns about giving misleading impressions.
  • We are creating a more formal process for negotiating about transparency with grantees up front. This process will draw on the agreement we negotiated with The Pew Charitable Trusts.

Challenge 3: transparency is unusual
In general, unusual goals are harder to achieve than common goals, because the rest of the world isn’t already set up to help with unusual goals. When we ask for budgets, project plans, confidentiality agreements, proof of 501(c)(3) status, etc., people immediately know what we’re seeking and are ready to provide it. When we bring up transparency, people are often surprised, confused, and cautious. In some cases people underestimate how much we plan to write, which could lead to problems later; in other cases people fear that we will disclose information carelessly and indiscriminately, leading them to be highly wary. Discussions about transparency often involve extensive communication between senior staff at both organizations, in order to ensure that everyone is clear on what is being requested and expected.

We believe that we could achieve the same level of transparency with far less effort if our practices were even moderately common and familiar.

The difficulty of writing about grants
The interaction of the above challenges can make it extremely difficult and time-consuming to write publicly about grants, recommendations, and grantee progress. We can’t simply “open-source” our process: each piece of public content needs to simultaneously express our views, maintain our credibility, and be as amenable as possible to other organizations discussed therein. Much of our content takes weeks or months between drafting and publication.

With this in mind, we no longer find it puzzling that existing foundations tend to do little substantive public discussion of their work.

Benefits of transparency
The costs of transparency are significant, but we continue to feel they are outweighed by the benefits.

First, the process of drafting and refining public writeups is often valuable for our own thinking and reflection. In the process of discussing and negotiating content with grantees, we are often corrected on key points and gain a better understanding of the situation. Writing about our work takes a lot of time, but much of that time is best classified as “refining and checking our thinking” rather than simply “making our thinking public.”

Second, transparency continues to be important for our credibility. This isn’t because all of our readers check all of our claims (in fact, we doubt that any of our readers check the majority of our claims). Rather, it’s because people are able to spot-check our reasoning. Our blog generally tries to summarize the big picture of why our priorities and recommendations are what they are; it links to pages that go into more detail, and these pages in turn use footnotes to provide yet more detail. A reader can pick any claim that seems unlikely, or is in tension with the reader’s background views, or is otherwise striking, and click through until they understand the reasoning behind the claim. This process often takes place in conversation rather than merely online – for example, see our research discussions. For these discussions, we rely on the fact that we’ve previously reached agreement with grantees on acceptable public formulations of our views and reasoning. Some readers do a lot of “spot-checking,” some do a little, and some merely rely on the endorsements of others. But without extensive public documentation of why we believe what we believe, we think we would have much more trouble being credible to all such people.

Finally, we believe that there is currently very little substantive public discussion of philanthropy, and that a new donor’s quest to learn about good giving is unnecessarily difficult. Work on the history of philanthropy is sparse, and doing new work in this area is challenging. Intellectuals tend to focus their thoughts and discussions on questions about public policy rather than philanthropy, making it hard to find good sources of ideas and arguments; we believe this is at least partly because of the dearth of public information about philanthropy.

We don’t think philanthropic transparency is easy, and we certainly don’t believe it’s something that foundations can jump into overnight. We don’t think it necessarily makes sense for all philanthropic organizations to put as much effort into transparency as we do. Rather, we see transparency as one of the core areas in which we are trying to experiment, innovate, and challenge the status quo.

In doing so, we hope to continue refining the processes necessary to achieve transparency, encouraging future (as well as present) foundations to adopt them, and making it easier for future organizations to be transparent than it currently is for us, so that one day there will be rich and abundant information available about how to give well.

Our ultimate goal is to do as much good as possible, and if we ever believe we might accomplish this better by dropping the emphasis on transparency, we will give serious consideration to the possibility. But at this time, the chance to promote philanthropic transparency is a major part of the case for GiveWell’s future impact, and we plan to retain transparency as a costly but essential goal.