Challenges of transparency

When we first started GiveWell, we wondered why major staffed foundations didn’t write more about the thinking behind their giving (and the results of it), in order to share their knowledge and influence others. We’ve tried to counterbalance normal practice by making transparency one of our core values.

Our commitment to transparency is as strong as it’s ever been; we derive major benefits from it, and we believe there’s far too little public information and discussion about giving. At the same time, we’ve learned a lot about just why transparency in philanthropy is so difficult, and we no longer find it mysterious that it is so rare. This post summarizes what we see as the biggest challenges of being public and open about giving decisions.

Summary:

  • Everything we publish can help or hurt our brand. We put substantial effort into the accuracy, clarity and tone of our public content.
  • In most cases, writing about our thinking and our results also means writing about other organizations (the organizations we recommend and support, both via our traditional work and via the Open Philanthropy Project). We don’t want to hurt or upset other organizations, and we put substantial effort into making our public content both (a) acceptable to the organizations we write about and (b) fair and complete in its characterization of our views.
  • The level of transparency we seek is unusual, meaning it often takes substantial effort to communicate our expectations and processes to the organizations we recommend and support.
  • The interaction of the above challenges can make it extremely difficult and time-consuming to write publicly about grants, recommendations, and grantee progress. In addition, it can be the cause of major delays between drafting and publication: much of our content takes weeks or months to go from draft to published piece, as we solicit feedback from parties who might be affected by the content.
  • The costs of transparency are significant, but we continue to feel they are outweighed by the benefits. Public writeups help clarify and improve our thinking; they play a major role in our credibility with our audience; and they represent a step toward a world in which there is far more, and better, information available to help donors give well.
  • We don’t think it necessarily makes sense for all philanthropic organizations to put as much effort into transparency as we do. Rather, we see transparency as one of the core areas in which we are trying to experiment, innovate, and challenge the status quo.

Challenge 1: protecting our brand
Because of the work we’ve put into explaining and defending our positions in the past, we benefit substantially from our reputation and from word-of-mouth. Nobody checks every statement and footnote on our site; even our closest readers often rely on the idea that content under the GiveWell name has a certain degree of thoroughness, reliability and clarity. (We believe a common way of approaching GiveWell content is to spot-check the occasional claim, rather than checking all claims or no claims; in order to perform well under arbitrary spot-checks, our content needs to be of consistently high quality.)

Somewhat ironically, this dynamic means we’re hesitant to publish content that we haven’t thought through, checked out, and worded carefully (in order to say what we feel is important and defensible, and no more). We feel that poorly researched or poorly worded content could erode the brand we’ve built up, and could make people feel that they have to choose between checking everything we write themselves and simply placing less weight on our claims. (In general, most of our busy audience would likely choose the latter in this case.)

Giving decisions are generally impossible to justify purely with appeals to facts and logic; there are many judgment calls and a great deal of guesswork even in the most seemingly straightforward decisions. This makes it particularly challenging to write about them while preserving a basic level of credibility and a strong brand, and we don’t know of clear role models for this endeavor. (A funder once told me that s/he didn’t want to publish the reasoning behind giving decisions because this reasoning wasn’t up to academic standards, and so would not be perceived as reasonable or credible.)

Rather than aim to write only what we can back with hard evidence, and rather than write everything we believe regardless of the level of support, we put a great deal of effort into being clear about why we believe what we believe – whether it is because of solid evidence or simply a guess. (Phrases such as “we would guess that” are common in our content.) This allows us to share a good deal of our thinking (not just the parts of it that are strongly supported) while still maintaining credibility. But it requires a careful, thoughtful, and somewhat distinctive writing style that has been an ongoing challenge to develop and maintain.

As our brand becomes stronger, our audience becomes broader and our staff grows, the challenges of maintaining the appropriate style – and backing up our statements appropriately – intensify. For example, we now put most public pages through a “vet” – in which a staff member who was not involved in writing the page goes carefully through its statements, making sure that each is appropriately supported – before publication. (We do not do this for all pages, and we generally do not do it for blog posts, which are more informal.)

Challenge 2: information about us is information about grantees
We seek to be highly open about the lessons we’ve learned and the results we’ve seen from our work – including developments that contradict our expectations and reflect poorly on our earlier decisions (which are often particularly valuable for learning). Because our function is to recommend other organizations for donations and grants, being open about our performance almost always means being open about another organization’s performance as well. (For simplicity, the rest of this section will refer to GiveWell-recommended organizations, as well as Open Philanthropy Project grantees, as “grantees.”)

While we want to be open, we don’t want to create a dynamic in which working with us creates significant risks for grantees. (This could lead good organizations to avoid working with us.) So we’ve had to find ways of balancing the goal of openness with the goal of making it “safe” for an organization to work with us. Doing so has been a major challenge and the subject of many long-running discussions, both internally and with grantees.

Things we’ve done to strike the right balance include:

  • Putting serious effort into communicating expectations up front. Simply saying “we value transparency” is not enough to communicate to a grantee what sorts of things we might want to write in the future. We generally try to send examples of past things we’ve written (such as our 2013 updates on Against Malaria Foundation and Schistosomiasis Control Initiative), and we often try to agree on an initial writeup before going forward with a grant or recommendation.
  • Giving grantees ample opportunity to comment on pending writeups that discuss them. There have been cases in which a writeup has been the subject of weeks, or even months, of discussion and negotiation.
  • Giving grantees a standing opportunity to retract non-public information, including even the fact that they’ve participated in our process. (Organizations considered as potential top charities have often been given the option to withdraw from our process and have us publish a page simply saying “Organization X declined to participate in our process”; this option has sometimes been invoked quite early in the process and has sometimes been invoked quite late, after a draft writeup has been produced and shared with the organization.)
  • Being generally hesitant to run a writeup that a grantee is highly uncomfortable with. We’re often willing to put substantial effort into working on a writeup’s language, until it both (a) communicates the important aspects of our views and (b) minimizes grantees’ concerns about giving misleading impressions.
  • Creating a more formal process for negotiating transparency with grantees up front. This process will draw on the agreement we negotiated with The Pew Charitable Trusts.

Challenge 3: transparency is unusual
In general, unusual goals are harder to achieve than common goals, because the rest of the world isn’t already set up to help with unusual goals. When we ask for budgets, project plans, confidentiality agreements, proof of 501(c)(3) status, etc., people immediately know what we’re seeking and are ready to provide it. When we bring up transparency, people are often surprised, confused, and cautious. In some cases people underestimate how much we plan to write, which could lead to problems later; in other cases people fear that we will disclose information carelessly and indiscriminately, leading them to be highly wary. Discussions about transparency often involve extensive communication between senior staff at both organizations, in order to ensure that everyone is clear on what is being requested and expected.

We believe that we could achieve the same level of transparency with far less effort if our practices were even moderately common and familiar.

The difficulty of writing about grants
The interaction of the above challenges can make it extremely difficult and time-consuming to write publicly about grants, recommendations, and grantee progress. We can’t simply “open-source” our process: each piece of public content needs to simultaneously express our views, maintain our credibility, and be as acceptable as possible to the other organizations discussed in it. Much of our content takes weeks or months to go from draft to publication.

With this in mind, we no longer find it puzzling that existing foundations tend to do little substantive public discussion of their work.

Benefits of transparency
The costs of transparency are significant, but we continue to feel they are outweighed by the benefits.

First, the process of drafting and refining public writeups is often valuable for our own thinking and reflection. In the process of discussing and negotiating content with grantees, we are often corrected on key points and gain a better understanding of the situation. Writing about our work takes a lot of time, but much of that time is best classified as “refining and checking our thinking” rather than simply “making our thinking public.”

Second, transparency continues to be important for our credibility. This isn’t because all of our readers check all of our claims (in fact, we doubt that any of our readers check the majority of our claims). Rather, it’s because people are able to spot-check our reasoning. Our blog generally tries to summarize the big picture of why our priorities and recommendations are what they are; it links to pages that go into more detail, and these pages in turn use footnotes to provide yet more detail. A reader can pick any claim that seems unlikely, or is in tension with the reader’s background views, or is otherwise striking, and click through until they understand the reasoning behind the claim. This process often takes place in conversation rather than merely online – for example, see our research discussions. For these discussions, we rely on the fact that we’ve previously reached agreement with grantees on acceptable public formulations of our views and reasoning. Some readers do a lot of “spot-checking,” some do a little, and some merely rely on the endorsements of others. But without extensive public documentation of why we believe what we believe, we think we would have much more trouble being credible to all such people.

Finally, we believe that there is currently very little substantive public discussion of philanthropy, and that a new donor’s quest to learn about good giving is unnecessarily difficult. Work on the history of philanthropy is sparse, and doing new work in this area is challenging. Intellectuals tend to focus their thoughts and discussions on questions about public policy rather than philanthropy, making it hard to find good sources of ideas and arguments; we believe this is at least partly because of the dearth of public information about philanthropy.

We don’t think philanthropic transparency is easy, and we certainly don’t believe it’s something that foundations can jump into overnight. We don’t think it necessarily makes sense for all philanthropic organizations to put as much effort into transparency as we do. Rather, we see transparency as one of the core areas in which we are trying to experiment, innovate, and challenge the status quo.

In doing so, we hope to continue refining the processes necessary to achieve transparency, encouraging future (as well as present) foundations to adopt them, and making it easier for future organizations to be transparent than it currently is for us, so that one day there will be rich and abundant information available about how to give well.

Our ultimate goal is to do as much good as possible, and if we ever believe we might accomplish this better by dropping the emphasis on transparency, we will give serious consideration to the possibility. But at this time, the chance to promote philanthropic transparency is a major part of the case for GiveWell’s future impact, and we plan to retain transparency as a costly but essential goal.

David Roodman’s draft writeup on immigration and current residents’ wages

As discussed previously, we are investigating the cause of labor mobility as a potential focus area within U.S. policy. Much of our investigation is focused on outlining potential giving opportunities; concurrently, we are interested in reviewing the academic literature on the merits (and possible drawbacks) of the policy changes that we would be working toward.

One key question around this cause is whether increasing immigration to the U.S. (something that we believe could be an excellent outcome in global anti-poverty terms) would result in lower wages for current U.S. residents. We commissioned David Roodman to provide a critical review of the literature on this question. David previously completed a project for us on the connection between infant mortality and fertility.

We haven’t yet fully vetted this writeup (something we are planning to do), but we believe it gives a thorough and convincing picture of the literature, and provides some reason to believe that immigration is unlikely to result in substantially lower wages (particularly over the long run) for current residents.

From the introduction:

As ever, the evidence base is not as sturdy as we would wish. Ironically, immigration policy is often arbitrary and even randomized (as in visa lotteries), which has allowed some high-quality measurement of impacts on immigrants (Gibson, McKenzie, and Stillman 2010; McKenzie, Gibson, and Stillman 2010; Clemens 2013b). Unfortunately, this randomness has not been as exploitable when assessing impacts on the receiving economy, because admissions have not been randomized across occupations, say, or cities. One family of studies attempts the next-best thing, exploiting natural experiments, and some are persuasive. The rest of the research is less experiment-like, for example, looking at correlations between wages and immigration flows across US cities over 20 years, and so must be taken with more grains of salt. Most of the non-experimental studies reviewed here make a bid in the direction of natural experiments by instrumenting. But even when they aggressively check the instrumented results for robustness, it is always hard to be sure that the strategy is working.
Still, the available evidence paints a fairly consistent and plausible picture:

  • There is almost no evidence of anything close to one-to-one crowding out by new immigrant arrivals to the job market in industrial countries. Most studies find that 10% growth in the immigrant “stock” changes natives’ earnings by between –2% and +2% (Longhi, Nijkamp, and Poot 2005, Fig 1; Peri 2014, Pg 1). Although serious questions can be raised about the reliability of most studies, the scarcity of evidence for great pessimism stands as a fact. The economies of destination countries largely appear flexible enough to absorb new arrivals, especially given time.
  • The group that appears most vulnerable to competitive pressure from new low-skill migrants is recent low-skill migrants. This possibility is easy to miss when talking of the impacts of “immigrants” on “natives.” Yet it stands to reason: a newly arrived Mexican with less than a high school education competes most directly with an earlier-arrived Mexican with less than a high school education.
  • One factor dampening the economic side effects of immigration is that immigrants are consumers as well as producers. They increase domestic demand for goods and services, perhaps even more quickly than they increase domestic production (Hercowitz and Yashiv 2002), since they must consume as soon as they arrive. They expand the economic pie even as they compete for a slice. This is not to suggest that the market mechanism is perfect—adjustment to new arrivals is not instantaneous and may be incomplete—but the mechanism does operate.
  • A second dampener is that in industrial economies, the capital supply tends to expand along with the workforce. More workers leads to more offices and more factories. Were receiving economies not flexible in this way, they would not be rich. This mechanism too may not be complete or immediate, but it is substantial in the long run: since the industrial revolution, population has doubled many times in the US and other now-wealthy nations, and the capital stock has kept pace over the long term, so that today there is more capital per worker than 200 years ago.
  • A third dampener is that while workers who are similar compete, ones who are different complement. An expansion in the diligent manual labor available to the home renovation business can spur that industry to grow, which will increase its demand for other kinds of workers, from skilled general contractors who can manage complex projects for English-speaking clients to scientists who develop new materials for home building. Symmetrically, an influx of high-skill workers can increase demand for low-skill ones. More computer programmers means more tech businesses, which means more need for janitors and security guards. Again, the effect is certain, though its speed and size are not.
  • An important corollary of this last observation is that a migrant inflow that mirrors the receiving population in skills mix is likely to have the most benign effects. Especially once capital ramps up to match the labor expansion, a balanced inflow probably approximates a dilation of the receiving economy, with similar percentage increases in all classes of workers, concomitant growth in aggregate demand, and minimal perturbation in prices for goods, services, and labor. In particular, one way to cushion the impact of low-skill migration on low-skill workers already present is to increase skilled immigration in tandem.

In addition to summarizing the existing literature, Roodman also includes a technical appendix that replicates the analysis in two key papers (Ottaviano and Peri 2012, which finds minimal costs for U.S. natives from additional immigration, and Borjas, Grogger, and Hanson 2012, which criticizes Ottaviano and Peri 2012 and argues for larger costs). While we don’t feel equipped to assess all of the details, our impression is that Roodman offers novel technical arguments implying that Borjas, Grogger, and Hanson’s criticisms of Ottaviano and Peri undermine their own arguments. We hope that these arguments are subjected to appropriate scrutiny, given the major role that these papers have played in the literature.

We encourage our readers to check out the writeup and send in their thoughts. Like David’s previous writeup for us, we think it is an interesting read.


Open Philanthropy Project (formerly GiveWell Labs)

GiveWell and Good Ventures have launched a new website for the Open Philanthropy Project. This is the new name and brand for the project formerly known as GiveWell Labs.

The mission of the Open Philanthropy Project is to learn how to give as effectively as we can and share our findings openly so that anyone can build on them. The word “open” refers both to being (a) open to many possibilities (considering many possible focus areas, and trying to select the ones that will lead to as much good accomplished as possible) and (b) open about our work (emphasizing transparency and information sharing).

We have launched a new brand to replace the “GiveWell Labs” brand, because:

  • GiveWell and Good Ventures work as partners on the Open Philanthropy Project, and we wanted a name that would not be exclusively associated with one organization or the other.
  • We feel it is important to start separating the GiveWell brand from the Open Philanthropy Project brand, since the latter is evolving into something extremely different from GiveWell’s work identifying evidence-backed charities serving the global poor. A separate brand is a step in the direction of possibly conducting the two projects under separate organizations, though we aren’t yet doing that (more on this topic at our overview of plans for 2014 published earlier this year).

For now, the Open Philanthropy Project website provides only a basic overview, and links to GiveWell and Good Ventures for more information in many cases. We will continue posting updates on the Open Philanthropy Project to GiveWell’s blog, Twitter and Facebook; the Open Philanthropy Project’s Twitter and Facebook feeds will simply mirror those updates. The Open Philanthropy Project is currently only a brand (name, logo, website) rather than an organization, and it continues to be the case that the staff members who work on the Open Philanthropy Project are formally affiliated with either GiveWell or Good Ventures.

We plan to edit much of the content on our website to reflect this update, though we will not necessarily remove all previous references to GiveWell Labs.

Update on GiveWell’s web traffic / money moved: Q2 2014

In addition to evaluations of other charities, GiveWell publishes substantial evaluation of itself, from the quality of its research to its impact on donations. We publish quarterly updates regarding two key metrics: (a) donations to top charities and (b) web traffic.

The table and chart below present basic information about our growth in money moved and web traffic in the first half of 2014 (note 1).

Money moved: first two quarters

Growth in money moved, as measured by donations from donors giving less than $5,000 per year, slowed in the second quarter of 2014 compared with the first quarter, and was substantially weaker than growth in the first two quarters of 2013.

The total amount of money we move is driven by a relatively small number of large donors. These donors tend to give in December, and we don’t think we have accurate ways of predicting future large gifts (note 2). We therefore show growth among small donors, the portion of our money moved about which we think we have meaningful information at this point in the year.

Web traffic through July 2014

We show web analytics data from two sources: Clicky and Google Analytics. The data on visitors to our website differs between the two sources. We do not know the cause of the discrepancy (though a volunteer with a relevant technical background looked at the data for us to try to find the cause). Full data set available at this spreadsheet. (See note 3 on how we count unique visitors.)

Traffic from AdWords decreased in the first two quarters because in early 2014 we removed ads on searches that we determined were not driving high quality traffic to our site (i.e. searches with very high bounce rates and very low pages per visit).

Data in the chart below is an average of Clicky and Google Analytics data, except for those months for which we only have data (or reliable data) from one source (see full data spreadsheet for details).
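
For readers who want the blending rule pinned down precisely, here is a minimal sketch in Python (with made-up numbers; the actual figures are in the spreadsheet linked above): take the mean of Clicky and Google Analytics for a month when both are available, and fall back to whichever single source we have otherwise.

```python
# Minimal sketch of the blending rule described above (hypothetical numbers).
monthly_visitors = {
    # month: (clicky, google_analytics); None marks missing or unreliable data
    "2014-02": (31000, 29000),
    "2014-03": (None, 27500),   # only Google Analytics available
    "2014-04": (30500, None),   # only Clicky available
}

def blended(clicky, ga):
    """Average the two sources; fall back to the single available one."""
    values = [v for v in (clicky, ga) if v is not None]
    return sum(values) / len(values) if values else None

for month, (clicky, ga) in sorted(monthly_visitors.items()):
    print(month, blended(clicky, ga))
```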

Slowing growth?

The above indicates that our growth slowed significantly in 2014 relative to last year (and previous years). It is possible that the numbers above are affected by (a) the fact that growth in the second quarter of 2013 was particularly strong due to a series of media mentions (as we previously noted) or (b) differences in the way that our recommended charities track donations (we would guess that this could explain a difference of a few hundred donors). Our guess is that both of these factors contribute but do not fully explain the slower growth.


Note 1: Since our 2012 annual metrics report we have shifted to a reporting year that starts on February 1, rather than January 1, in order to better capture year-on-year growth in the peak giving months of December and January. Therefore metrics for the “first two quarters” reported here are for February through July.

Note 2: In total, GiveWell donors have directed $2.41 million to our top charities this year, compared with $1.46 million at this point in 2013. For the reason described above, we don’t find this number to be particularly meaningful at this time of year.

Note 3: We count unique visitors over a period as the sum of monthly unique visitors. In other words, if the same person visits the site multiple times in a calendar month, they are counted once. If they visit in multiple months, they are counted once per month.

Google Analytics provides ‘unique visitors by traffic source’ while Clicky provides only ‘visitors by traffic source.’ For that reason, we primarily use Google Analytics data in the calculations of ‘unique visitors ex-AdWords’ for both the Clicky and Google Analytics rows of the table. See the full data spreadsheet, sheets Data and Summary, for details.
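
To make note 3 concrete, here is a minimal sketch (again in Python, with hypothetical numbers). The subtraction used for “unique visitors ex-AdWords” below is an illustrative assumption about how that figure is derived from Google Analytics data; see the full data spreadsheet for the exact calculation.

```python
# Note 3's counting rule: "unique visitors" over a period is the sum of each
# month's unique visitors, so repeat visits within a month count once, while
# visits in different months count once per month.
monthly_uniques = {"2014-02": 29000, "2014-03": 27500, "2014-04": 30500}       # hypothetical
monthly_adwords_uniques = {"2014-02": 4000, "2014-03": 1500, "2014-04": 1200}  # hypothetical

period_uniques = sum(monthly_uniques.values())

# Illustrative assumption: "unique visitors ex-AdWords" subtracts Google
# Analytics' AdWords unique visitors from each month's total before summing.
period_uniques_ex_adwords = sum(
    monthly_uniques[m] - monthly_adwords_uniques[m] for m in monthly_uniques
)

print(period_uniques, period_uniques_ex_adwords)
```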


Thoughts on the End of Hewlett’s Nonprofit Marketplace Initiative

Note: we sent a pre-publication draft of this post to multiple people who had been involved in the Hewlett program discussed here. A response from the Hewlett Foundation is available in the comments of this post; a response from Jacob Harold is available on the GuideStar blog.

Last April, the Chronicle of Philanthropy covered the decision by the William and Flora Hewlett Foundation to end its Nonprofit Marketplace Initiative, which in 2008 was the source of GiveWell’s first grant from a foundation, and has continued to be a source of substantial support for GiveWell’s operations in the years since. The Hewlett Foundation has been unusually transparent about the thinking behind its decision, and we have unusual context on the program as one of its grantees, so we find it worthwhile to reflect on this episode – how we perceived the Nonprofit Marketplace Initiative, its strengths and weaknesses, and the decision to end it.

The Nonprofit Marketplace Initiative aimed to improve the giving of individual donors. Hewlett states, “This Initiative’s goal was that by 2015, ten percent of individual philanthropic donations in the US (or $20 billion), would be influenced by meaningful, high-quality information about nonprofit organizations’ performance.” Grantees included GiveWell, GuideStar, Charity Navigator, Philanthropedia and Great Nonprofits.

In short:

  • We believe that the Nonprofit Marketplace Initiative was a strong use of philanthropic funds. The program is reported to have spent a total of $12 million over 8 years, and we think its impact on GiveWell alone will likely ultimately be responsible for enough influence on donations to easily justify that expenditure.
  • We believe that ending this program may have been the right decision. With that said, we disagree with the specific reasoning Hewlett has given, for the same reason that we disagreed with its strategic plan while the program was running. We believe that Hewlett’s goal of influencing 10% of donors was unrealistic and unnecessary, at least over the time frame in question. We believe the disagreement may reflect a broader difference in how we see the yardstick by which a philanthropic program ought to be evaluated. 
  • We are very positive on how Hewlett ended the program. Great care was taken to end it in a way that gave grantees ample advance notice and aimed to avoid disruptive transitions. We also applaud Hewlett’s decision to publish its reasoning in ending the program and invite a public discussion, and we broadly feel that Hewlett is delivering on its stated intent to become a highly transparent grantmaker.

Our experience with the program

In 2008, Bill Meehan introduced us to Jacob Harold, who was then the Program Officer for Hewlett’s Nonprofit Marketplace Initiative program. Jacob met with us several times, getting to know us and the project. Late in 2008, we were invited to submit a proposal and were awarded a one-year, $100,000 grant. This grant was crucial for us. At the time, we had little to no name recognition, a major mistake on our record, and uncertainty about whether we’d be able to raise enough to continue operating. We were in the midst of a change of direction, after disappointing results from our first attempt at high-intensity outreach. We had determined that we needed to take a longer view and focus on research quality for the time being – and it was thanks to the support of Hewlett, among others, that we felt it was possible to do so. We benefited both from Hewlett’s financial support (which helped answer crucial questions about whether we’d be able to fund our plans at the time) and from Hewlett’s brand (being able to say we were a Hewlett grantee substantially improved our credibility and appeal in the eyes of many, something Hewlett was cognizant of).

Over the years, we continued to meet periodically with Jacob and to periodically submit grant proposals. For the most part, Hewlett continued to fund us at the level of $100,000 per year (there was one year where the support temporarily dropped to $60,000). As our audience and budget grew, this support became a smaller part of our revenue and became less crucial to us, but it remained quite valuable. Hewlett’s support reduced the amount of time we had to spend fundraising and worrying about sustainability, and increased the amount of time spent on core activities.

In addition to supporting us financially, Hewlett sought to integrate our work into its own vision for the “nonprofit marketplace.” Jacob encouraged us to attend convenings with other groups working on helping individual donors give effectively, such as Charity Navigator, GuideStar, Philanthropedia and Great Nonprofits (and we generally did so). He also discussed his vision for how impact would be achieved, and particularly emphasized the importance of working with portals and aggregators (such as GuideStar, where he now serves as CEO) that could pull together information from many different kinds of resources. He encouraged us to build an API in order to make aggregation easier, and saw aggregation as a more promising path than building our own website, brand and audience.

We disagreed with him on some of these points. We felt that his vision was overly specific, overly focused on reaching the “average” donor, and under-emphasized the promise of different organizations targeting different audiences in different ways. When the Hewlett-funded Money for Good study came out, we publicly disagreed with the common interpretation, and argued that the most promising path for nonprofit evaluation groups is to target passionate niche audiences rather than focusing on the unrealistic (as both we and Money for Good saw it) goal of influencing 10%+ of all U.S. giving.

However, we never found Jacob or anyone else at Hewlett to be pushing its vision on us hard enough to cause problems. We certainly weighed Jacob’s encouragement when attending convenings and working on a partnership with GuideStar, but we were comfortable with the cost-benefit tradeoffs involved in these activities and didn’t undertake them solely to please a funder. We particularly valued some of the opportunities to get to know other organizations in our space. We didn’t build an API, and Hewlett didn’t pressure us to do so (its support continued).

All in all, our general feeling was that Hewlett was accomplishing substantial good via its relatively reliable, unrestricted funding, even though we disagreed with its strategy.

Hewlett’s reasoning for ending the program, and our take on it

In a response to the Chronicle of Philanthropy, Larry Kramer (Hewlett’s current President) wrote:

We launched NMI in 2006 with the objective of influencing 10% of individual donors to be more evidence-based in their giving, a goal we sought to achieve by making high-quality information available about nonprofit performance. Based on independent research and evaluation, we concluded we were not going to meet that goal. And because we are committed to being transparent about our work – both successes and failures – we openly shared our reasons for ending the initiative in a video and blog post on our web site.

Hewlett also states that staff transitions provided a good opportunity to reflect systematically on the initiative: between late 2012 and early 2013, Larry Kramer replaced Paul Brest as President, Fay Twersky became the first Director of the newly formed Effective Philanthropy Group, and Lindsay Louie replaced Jacob Harold in a slightly different program officer role.

We believe that ending this program may have been the right decision. With that said, we disagree with the specific reasoning Hewlett has given, for the same reason that we disagreed with its strategic plan while the program was running. We believe that the goal of influencing 10% of donors was unrealistic and unnecessary, at least over the time frame in question. We believe that this is a case in which a commitment to specific quantitative targets, and a specific strategy for getting there, was premature and did not make the program better.

Despite this, we believe that Hewlett succeeded in choosing an important problem to work on and in finding and funding promising groups working on the problem, and that it played a real role in the development of at least one organization (ours) that is poised to influence far more dollars than Hewlett spent on the program. For this reason, we think it would be reasonable to consider the program a success, though not necessarily something that should have been continued.

In short, we feel this program was an instance of good and successful philanthropy, and that it may indeed have been time to end it, but we disagree with the way the program framed and evaluated itself and the way Hewlett justified the end of the program.

How Hewlett ended the program

Hewlett took great care to end the program in a way that would not be overly disruptive for grantees. We were notified well in advance of the public announcement about the program’s end; we were able to ask questions and receive helpful answers; and our two-year grant was renewed as an “exit grant.” We were told that other grantees had been treated similarly. By clearly communicating its intent to end the program and committing “exit funding,” Hewlett ensured that we would have ample time to adjust for the loss of this revenue.

We also applaud Hewlett’s decision to publish its reasoning in ending the program and invite a public discussion.

A note on Hewlett’s transparency

Shortly after taking over as President of the Hewlett Foundation, Larry Kramer expressed his desire to further improve Hewlett’s transparency, and we think there has indeed been substantial progress. The public discussion of the end of the Nonprofit Marketplace Initiative represents some of this progress. In addition:

  • Hewlett’s relatively new blog is frequently updated and has given us a window into the day-to-day work and thoughts of its staff.
  • Hewlett recently held conference calls with open Q&A for grantees.

As a result, we believe Hewlett has become one of the easiest foundations to learn about and get a feel for from the outside. We think this is quite a positive development, and may write more in the future about what we’ve learned from examining Hewlett’s output.

Key takeaways

Hewlett’s vision of good philanthropy, at least in this case, seems to have involved setting extraordinarily ambitious and specific goals, laying out a plan to get there, and closing the program if the goals aren’t reached. By this measure, the Nonprofit Marketplace Initiative apparently failed (though Hewlett followed its principles by closing a program falling short of its goals).

Our vision for good philanthropy is that it finds problems worth working on (in terms of importance, tractability and uncrowdedness) and supports strong organizations to work on them, while ensuring that any “active” funding (restrictions, advice, requests of grantees) creates more value than it detracts. We think that specific quantitative goals are sometimes called for, but are more appropriate in domains where the background data is stronger and the course is easier to chart (as with our top charities). By our measure, we think the Nonprofit Marketplace Initiative was at least reasonably successful.

Recognizing this difference in the way we think about good philanthropy will help us to better understand Hewlett’s decisions going forward, and will give us a disagreement to reflect on as we move forward with our vision. We’re glad to have examined Hewlett’s thinking on this matter, and see the chance to do so as a benefit of Hewlett’s improved commitment to transparency.

A note on the role of Hewlett’s funding in our budget:

Because this post discusses Hewlett’s work in an evaluative manner, we think it’s worth being clear about the support we receive so that people may take into account how this may influence our content.

Hewlett has provided generous support to GiveWell since 2008. We hope that it will continue doing so even after the end of our current grant, depending on how our work and Hewlett’s evolve (our work on GiveWell Labs seems to us to be relevant to Hewlett’s work on encouraging transparency among major funders). We are currently projecting expenses and revenues of over $1.5 million per year, and Hewlett’s support has historically been around $100,000 per year.

Our ongoing review of ICCIDD

The International Council for the Control of Iodine Deficiency Disorders Global Network (ICCIDD) advocates for and assists programs that fortify salt with iodine. Our preliminary work (writeup forthcoming) implies that even moderate iodine deficiency can lead to impaired cognitive development.

ICCIDD tracks iodine deficiency around the world and encourages countries with iodine deficient populations to pass laws requiring iodization for all salt produced in and imported to the country. ICCIDD also provides – and helps countries find – general support and assistance for their iodization programs.

In February, we wrote that we were considering ICCIDD for a 2014 GiveWell top charity recommendation. We’ve now spent a considerable amount of time talking to and analyzing ICCIDD. This post shares what we’ve learned so far and what questions we’re planning to focus on throughout the rest of our investigation. (For more detail, see our detailed interim review.)

ICCIDD has successfully completed the first phase of our investigation process and we view it as a contender for a recommendation this year. We now plan (a) to make a $100,000 grant to ICCIDD (as part of our “top charity participation grants,” funded by Good Ventures) and (b) to continue our analysis to determine whether or not we should recommend ICCIDD to donors at the end of the year.

Reasons we prioritized ICCIDD

We prioritized ICCIDD because of our impression that iodization is backed by strong evidence of effectiveness, is cost-effective, and has room for more funding.

The evidence of effectiveness for salt iodization is not fully straightforward – we plan to publish an intervention report with details before the end of the year – but multiple randomized controlled trials imply that reducing iodine deficiency in children leads to moderate (~3-4 points) gains in IQ.

We have yet to find well-documented assessments of the cost of iodization, but the estimates we have seen most commonly put the figure at approximately $0.10 per person reached.

Although iodization rates have increased dramatically over the past 20 years, significant deficiency still exists. ICCIDD publishes a scorecard showing countries’ iodine status; many fall significantly below the benchmark of 100 µg of iodine per liter of urine.

Questions we hope to answer in our ongoing analysis

What would have happened to iodization programs in ICCIDD’s absence?

Because ICCIDD is an advocacy/technical assistance organization (it does not directly implement iodization programs but advocates that others do so), it is difficult to assess its impact.

ICCIDD has provided us with several examples of countries in which it believes it played an essential role (some of which we discuss briefly in our interim review page), but we have not yet investigated these cases sufficiently to form a confident view about what role ICCIDD played and how crucial its contributions were to the program.

What role does ICCIDD play relative to other organizations that work on iodization?

A number of organizations support government and private-sector salt iodization programs, most notably UNICEF, the Global Alliance for Improved Nutrition (GAIN), and the Micronutrient Initiative.

We hope to better understand the roles each organization plays so that we can formulate a view about where donated funds are likely to have the greatest impact. (We’re considering the possibility that funds donated to any of them should be thought of as “supporting the international effort to support iodization” and that the important question is assessing the combined costs and impacts of all four organizations.)

We are also considering GAIN for a 2014 GiveWell recommendation. We do not expect our decision about GAIN to affect the likelihood of ICCIDD receiving a recommendation.

Program monitoring

Surveys to assess iodine consumption and status are completed more than once a decade in most countries, and are usually conducted by country governments or UNICEF. We have yet to analyze these surveys carefully enough to know whether or not they provide a reliable assessment of the track record of iodization programs: i.e., do iodization programs lead to a reduction in iodine deficiency?

Room for more funding

We have seen strong evidence that ICCIDD is funding constrained. It told us that its staff members have, over the past few years, consistently submitted requests for funds that are significantly higher than it is able to allocate. Additionally, ICCIDD lost what had been its largest funder in 2012. It has also shared an overall budget with us requesting significantly more funding than it has received in the past.

Nevertheless, we have two major questions about room for more funding:

  1. Given iodization’s cost-effectiveness and track record, why haven’t others closed the funding gap? We have been told that the lack of funds may be due to “donor fatigue” (i.e., donors have supported iodization in the past and iodized a large proportion of the countries in need, so they no longer view it as a priority), but we have yet to investigate this question sufficiently to feel comfortable with our understanding.
  2. Will ICCIDD’s future activities be as cost-effective as past attempts to increase iodization rates? One possible explanation for the lack of donor funds is that the countries that remain iodine deficient are particularly problematic. Were this true, it might be the case that donors are acting rationally because future efforts to iodize could be significantly more costly than past efforts.

Note that the Gates Foundation previously made a $40 million grant to support universal salt iodization (USI) in 16 countries over seven years. That grant ends in March 2015, and no extension has yet been scheduled.