The GiveWell Blog

David Roodman’s draft writeup on immigration and current residents’ wages

As discussed previously, we are investigating the cause of labor mobility as a potential focus area within U.S. policy. Much of our investigation is focused on outlining potential giving opportunities; concurrently, we are interested in reviewing the academic literature on the merits (and possible drawbacks) of the policy changes that we would be working toward.

One key question around this cause is whether increasing immigration to the U.S. (something that we believe could be an excellent outcome in global anti-poverty terms) would result in lower wages for current U.S. residents. We commissioned David Roodman to provide a critical review of the literature on this question. David previously completed a project for us on the connection between infant mortality and fertility.

We haven’t yet fully vetted this writeup (something we are planning to do), but we believe it gives a thorough and convincing picture of the literature, and provides some reason to believe that immigration is unlikely to result in substantially lower wages (particularly over the long run) for current residents.

From the introduction:

As ever, the evidence base is not as sturdy as we would wish. Ironically, immigration policy is often arbitrary and even randomized (as in visa lotteries), which has allowed some high-quality measurement of impacts on immigrants (Gibson, McKenzie, and Stillman 2010; McKenzie, Gibson, and Stillman 2010; Clemens 2013b). Unfortunately, this randomness has not been as exploitable when assessing impacts on the receiving economy, because admissions have not been randomized across occupations, say, or cities. One family of studies attempts the next-best thing, exploiting natural experiments, and some are persuasive. The rest of the research is less experiment-like, for example, looking at correlations between wages and immigration flows across US cities over 20 years, and so must be taken with more grains of salt. Most of the non-experimental studies reviewed here make a bid in the direction of natural experiments by instrumenting. But even when they aggressively check the instrumented results for robustness, it is always hard to be sure that the strategy is working.
Still, the available evidence paints a fairly consistent and plausible picture:

  • There is almost no evidence of anything close to one-to-one crowding out by new immigrant arrivals to the job market in industrial countries. Most studies find that a 10 percentage point increase in the immigrant “stock” as a share of the labor force changes natives’ earnings by between –2% and +2% (Longhi, Nijkamp, and Poot 2005, Fig 1; Peri 2014, Pg 1). Although serious questions can be raised about the reliability of most studies, the scarcity of evidence for great pessimism stands as a fact. The economies of destination countries largely appear flexible enough to absorb new arrivals, especially given time.
  • The group that appears most vulnerable to competitive pressure from new low-skill migrants is recent low-skill migrants. This possibility is easy to miss when talking of the impacts of “immigrants” on “natives.” Yet it stands to reason: a newly arrived Mexican with less than a high school education competes most directly with an earlier-arrived Mexican with less than a high school education.
  • One factor dampening the economic side effects of immigration is that immigrants are consumers as well as producers. They increase domestic demand for goods and services, perhaps even more quickly than they increase domestic production (Hercowitz and Yashiv 2002), since they must consume as soon as they arrive. They expand the economic pie even as they compete for a slice. This is not to suggest that the market mechanism is perfect—adjustment to new arrivals is not instantaneous and may be incomplete—but the mechanism does operate.
  • A second dampener is that in industrial economies, the capital supply tends to expand along with the workforce. More workers leads to more offices and more factories. Were receiving economies not flexible in this way, they would not be rich. This mechanism too may not be complete or immediate, but it is substantial in the long run: since the industrial revolution, population has doubled many times in the US and other now-wealthy nations, and the capital stock has kept pace over the long term, so that today there is more capital per worker than 200 years ago.
  • A third dampener is that while workers who are similar compete, ones who are different complement. An expansion in the diligent manual labor available to the home renovation business can spur that industry to grow, which will increase its demand for other kinds of workers, from skilled general contractors who can manage complex projects for English-speaking clients to scientists who develop new materials for home building. Symmetrically, an influx of high-skill workers can increase demand for low-skill ones. More computer programmers means more tech businesses, which means more need for janitors and security guards. Again, the effect is certain, though its speed and size are not.
  • An important corollary of this last observation is that a migrant inflow that mirrors the receiving population in skills mix is likely to have the most benign effects. Especially once capital ramps up to match the labor expansion, a balanced inflow probably approximates a dilation of the receiving economy, with similar percentage increases in all classes of workers, concomitant growth in aggregate demand, and minimal perturbation in prices for goods, services, and labor. In particular, one way to cushion the impact of low-skill migration on low-skill workers already present is to increase skilled immigration in tandem.

In addition to summarizing the existing literature, Roodman also includes a technical appendix that replicates the analysis in two key papers (Ottaviano and Peri 2012, which finds minimal costs for U.S. natives from additional immigration, and Borjas, Grogger, and Hanson 2012, which criticizes Ottaviano and Peri 2012 and argues for larger costs). While we don’t feel equipped to assess all of the details, our impression is that Roodman offers novel technical arguments implying that the criticisms Borjas, Grogger, and Hanson make of Ottaviano and Peri undermine Borjas, Grogger, and Hanson’s own case. We hope that these arguments are subjected to appropriate scrutiny, given the major role that these papers have played in the literature.

We encourage our readers to check out the writeup and send in their thoughts. Like David’s previous writeup for us, we think it is an interesting read.

David Roodman’s draft writeup on immigration and current residents’ wages

Open Philanthropy Project (formerly GiveWell Labs)

GiveWell and Good Ventures have launched a new website for the Open Philanthropy Project. This is the new name and brand for the project formerly known as GiveWell Labs.

The mission of the Open Philanthropy Project is to learn how to give as effectively as we can and share our findings openly so that anyone can build on them. The word “open” refers both to being (a) open to many possibilities (considering many possible focus areas, and trying to select the ones that will lead to as much good accomplished as possible) and (b) open about our work (emphasizing transparency and information sharing).

We have launched a new brand to replace the “GiveWell Labs” brand, because:

  • GiveWell and Good Ventures work as partners on the Open Philanthropy Project, and we wanted a name that would not be exclusively associated with one organization or the other.
  • We feel it is important to start separating the GiveWell brand from the Open Philanthropy Project brand, since the latter is evolving into something extremely different from GiveWell’s work identifying evidence-backed charities serving the global poor. A separate brand is a step in the direction of possibly conducting the two projects under separate organizations, though we aren’t yet doing that (more on this topic at our overview of plans for 2014 published earlier this year).

For now, the Open Philanthropy Project website provides only a basic overview, and links to GiveWell and Good Ventures for more information in many cases. We will continue posting updates on the Open Philanthropy Project to GiveWell’s blog, Twitter and Facebook; the Open Philanthropy Project’s Twitter and Facebook feeds will simply mirror those updates. The Open Philanthropy Project is currently only a brand (name, logo, website) rather than an organization, and it continues to be the case that the staff members who work on the Open Philanthropy Project are formally affiliated with either GiveWell or Good Ventures.

We plan to edit much of the content on our website to reflect this update, though we will not necessarily remove all previous references to GiveWell Labs.

Update on GiveWell’s web traffic / money moved: Q2 2014

In addition to evaluating other charities, we publish substantial evaluation of our own work, from the quality of our research to our impact on donations. This includes quarterly updates on two key metrics: (a) donations to top charities and (b) web traffic.

The table and chart below present basic information about our growth in money moved and web traffic in the first half of 2014 (note 1).

Money moved: first two quarters

Growth in money moved, as measured by donations from donors giving less than $5,000 per year, slowed in the second quarter of 2014 compared with the first quarter, and was substantially weaker than growth in the first two quarters of 2013.

The total amount of money we move is driven by a relatively small number of large donors. These donors tend to give in December, and we don’t think we have accurate ways of predicting future large gifts (note 2). We therefore show growth among small donors, the portion of our money moved about which we think we have meaningful information at this point in the year.

Web traffic through July 2014

We show web analytics data from two sources: Clicky and Google Analytics. The data on visitors to our website differs between the two sources. We do not know the cause of the discrepancy (though a volunteer with a relevant technical background looked at the data for us to try to find the cause). Full data set available at this spreadsheet. (Note on how we count unique visitors.)

Traffic from AdWords decreased in the first two quarters because in early 2014 we removed ads on searches that we determined were not driving high-quality traffic to our site (i.e., searches with very high bounce rates and very low pages per visit).

Data in the chart below is an average of Clicky and Google Analytics data, except for those months for which we only have data (or reliable data) from one source (see full data spreadsheet for details).
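To make that combination rule concrete, here is a minimal sketch in Python (the monthly figures are invented for illustration; this is not our actual data pipeline): months where both sources report are averaged, and months with only one (reliable) source fall back to that source’s value.

```python
def combined_monthly_visitors(clicky, google_analytics):
    """Combine two monthly visitor series: average where both sources
    report a month, use the single available value otherwise."""
    combined = {}
    for month in sorted(set(clicky) | set(google_analytics)):
        values = [src[month] for src in (clicky, google_analytics) if month in src]
        combined[month] = sum(values) / len(values)
    return combined

# Hypothetical data: Clicky is missing March.
clicky = {"2014-01": 41000, "2014-02": 38000}
ga = {"2014-01": 45000, "2014-02": 40000, "2014-03": 39000}
print(combined_monthly_visitors(clicky, ga))
# {'2014-01': 43000.0, '2014-02': 39000.0, '2014-03': 39000.0}
```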

Slowing growth?

The above indicates that our growth slowed significantly in 2014 relative to last year (and previous years). It is possible that the numbers above are affected by (a) particularly strong growth in the second quarter of 2013, due to a series of media mentions (as we previously noted), or (b) differences in the way that our recommended charities track donations (we would guess that this could explain a difference of a few hundred donors). Our guess is that both of these factors contribute but do not fully explain the slower growth.


Note 1: Since our 2012 annual metrics report we have shifted to a reporting year that starts on February 1, rather than January 1, in order to better capture year-on-year growth in the peak giving months of December and January. Therefore metrics for the “first two quarters” reported here are for February through July.
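For readers who want the convention spelled out, here is a minimal sketch (a hypothetical helper, not code we actually use) mapping a calendar month to this reporting calendar, in which February opens Q1 and December and January land in Q4 of the reporting year that began the prior February:

```python
def reporting_quarter(year, month):
    """Map a calendar (year, month) to (reporting_year, quarter) under a
    reporting year that starts on February 1."""
    offset = (month - 2) % 12                    # February becomes month 0
    reporting_year = year if month >= 2 else year - 1
    return reporting_year, offset // 3 + 1

assert reporting_quarter(2014, 2) == (2014, 1)   # February: start of Q1
assert reporting_quarter(2014, 7) == (2014, 2)   # July: end of Q2
assert reporting_quarter(2014, 12) == (2014, 4)  # December: peak giving, Q4
assert reporting_quarter(2015, 1) == (2014, 4)   # January: still reporting year 2014
```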

Note 2: In total, GiveWell donors have directed $2.41 million to our top charities this year, compared with $1.46 million at this point in 2013. For the reason described above, we don’t find this number to be particularly meaningful at this time of year.

Note 3: We count unique visitors over a period as the sum of monthly unique visitors. In other words, if the same person visits the site multiple times in a calendar month, they are counted once. If they visit in multiple months, they are counted once per month.

Google Analytics provides ‘unique visitors by traffic source’ while Clicky provides only ‘visitors by traffic source.’ For that reason, we primarily use Google Analytics data in the calculations of ‘unique visitors ex-AdWords’ for both the Clicky and Google Analytics rows of the table. See the full data spreadsheet, sheets Data and Summary, for details.
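As a concrete illustration of the counting rule in Note 3, here is a minimal sketch in Python (the visit log below is invented): a visitor is counted once per calendar month in which they visit, and the period total is the sum of the monthly uniques.

```python
from collections import defaultdict

def unique_visitors_by_month(visits):
    """visits: iterable of (visitor_id, 'YYYY-MM') pairs.
    Returns a mapping of month -> number of distinct visitors that month."""
    seen = defaultdict(set)            # month -> set of visitor ids
    for visitor_id, month in visits:
        seen[month].add(visitor_id)
    return {month: len(ids) for month, ids in seen.items()}

visits = [
    ("alice", "2014-02"), ("alice", "2014-02"),  # two visits, same month: counted once
    ("alice", "2014-03"),                        # new month: counted again
    ("bob", "2014-03"),
]
monthly = unique_visitors_by_month(visits)
print(monthly)                 # {'2014-02': 1, '2014-03': 2}
print(sum(monthly.values()))   # period total: 3
```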


Thoughts on the end of Hewlett’s Nonprofit Marketplace Initiative

Note: we sent a pre-publication draft of this post to multiple people who had been involved in the Hewlett program discussed here. A response from the Hewlett Foundation is available in the comments of this post; a response from Jacob Harold is available on the GuideStar blog.

Last April, the Chronicle of Philanthropy covered the decision by the William and Flora Hewlett Foundation to end its Nonprofit Marketplace Initiative, which in 2008 was the source of GiveWell’s first grant from a foundation, and has continued to be a source of substantial support for GiveWell’s operations in the years since. The Hewlett Foundation has been unusually transparent about the thinking behind its decision, and we have unusual context on the program as one of its grantees, so we find it worthwhile to reflect on this episode – how we perceived the Nonprofit Marketplace Initiative, its strengths and weaknesses, and the decision to end it.

The Nonprofit Marketplace Initiative aimed to improve the giving of individual donors. Hewlett states, “This Initiative’s goal was that by 2015, ten percent of individual philanthropic donations in the US (or $20 billion), would be influenced by meaningful, high-quality information about nonprofit organizations’ performance.” Grantees included GiveWell, GuideStar, Charity Navigator, Philanthropedia and Great Nonprofits.

In short:

  • We believe the Nonprofit Marketplace Initiative was a strong use of philanthropic funds. The program is reported to have spent a total of $12 million over 8 years, and we think its impact on GiveWell alone will likely ultimately be responsible for enough influence on donations to easily justify that expenditure.
  • We believe that ending this program may have been the right decision. With that said, we disagree with the specific reasoning Hewlett has given, for the same reason that we disagreed with its strategic plan while the program was running. We believe that Hewlett’s goal of influencing 10% of donors was unrealistic and unnecessary, at least over the time frame in question. We believe the disagreement may reflect a broader difference in how we see the yardstick by which a philanthropic program ought to be evaluated. 
  • We are very positive on how Hewlett ended the program. Great care was taken to end it in a way that gave grantees ample advance notice and aimed to avoid disruptive transitions. We also applaud Hewlett’s decision to publish its reasoning in ending the program and invite a public discussion, and we broadly feel that Hewlett is delivering on its stated intent to become a highly transparent grantmaker.

Our experience with the program

In 2008, Bill Meehan introduced us to Jacob Harold, who was then the Program Officer for Hewlett’s Nonprofit Marketplace Initiative program. Jacob met with us several times, getting to know us and the project. Late in 2008, we were invited to submit a proposal and were awarded a one-year, $100,000 grant. This grant was crucial for us. At the time, we had little to no name recognition, a major mistake on our record, and uncertainty about whether we’d be able to raise enough to continue operating. We were in the midst of a change of direction, after disappointing results from our first attempt at high-intensity outreach. We had determined that we needed to take a longer view and focus on research quality for the time being – and it was thanks to the support of Hewlett, among others, that we felt it was possible to do so. We benefited both from Hewlett’s financial support (which helped answer crucial questions about whether we’d be able to fund our plans at the time) and from Hewlett’s brand (being able to say we were a Hewlett grantee substantially improved our credibility and appeal in the eyes of many, something Hewlett was cognizant of).

Over the years, we continued to meet periodically with Jacob and to periodically submit grant proposals. For the most part, Hewlett continued to fund us at the level of $100,000 per year (there was one year where the support temporarily dropped to $60,000). As our audience and budget grew, this support became a smaller part of our revenue and became less crucial to us, but it remained quite valuable. Hewlett’s support reduced the amount of time we had to spend fundraising and worrying about sustainability, and increased the amount of time spent on core activities.

In addition to supporting us financially, Hewlett sought to integrate our work into its own vision for the “nonprofit marketplace.” Jacob encouraged us to attend convenings with other groups working on helping individual donors give effectively, such as Charity Navigator, GuideStar, Philanthropedia and Great Nonprofits (and we generally did so). He also discussed his vision for how impact would be achieved, and particularly emphasized the importance of working with portals and aggregators (such as GuideStar, where he now serves as CEO) that could pull together information from many different kinds of resources. He encouraged us to build an API in order to make aggregation easier, and saw aggregation as a more promising path than building our own website, brand and audience.

We disagreed with him on some of these points. We felt that his vision was overly specific, overly focused on reaching the “average” donor, and under-emphasized the promise of different organizations targeting different audiences in different ways. When the Hewlett-funded Money for Good study came out, we publicly disagreed with the common interpretation, and argued that the most promising path for nonprofit evaluation groups is to target passionate niche audiences rather than focusing on the unrealistic (as both we and Money for Good saw it) goal of influencing 10%+ of all U.S. giving.

However, we never found Jacob or anyone else at Hewlett to be pushing its vision on us hard enough to cause problems. We certainly weighed Jacob’s encouragement when attending convenings and working on a partnership with GuideStar, but we were comfortable with the cost-benefit tradeoffs involved in these activities and didn’t undertake them solely to please a funder. We particularly valued some of the opportunities to get to know other organizations in our space. We didn’t build an API, and Hewlett didn’t pressure us to do so (its support continued).

All in all, our general feeling was that Hewlett was accomplishing substantial good via its relatively reliable, unrestricted funding, even though we disagreed with its strategy.

Hewlett’s reasoning for ending the program, and our take on it

In a response to the Chronicle of Philanthropy, Larry Kramer (Hewlett’s current President) wrote:

We launched NMI in 2006 with the objective of influencing 10% of individual donors to be more evidence-based in their giving, a goal we sought to achieve by making high-quality information available about nonprofit performance. Based on independent research and evaluation, we concluded we were not going to meet that goal. And because we are committed to being transparent about our work – both successes and failures – we openly shared our reasons for ending the initiative in a video and blog post on our web site.

Hewlett also states that staff transitions provided a good opportunity to reflect systematically on the initiative: between late 2012 and early 2013, Larry Kramer replaced Paul Brest as President, Fay Twersky became the first Director of the newly formed Effective Philanthropy Group, and Lindsay Louie replaced Jacob Harold in a slightly different program officer role.

We believe that ending this program may have been the right decision. With that said, we disagree with the specific reasoning Hewlett has given, for the same reason that we disagreed with its strategic plan while the program was running. We believe that the goal of influencing 10% of donors was unrealistic and unnecessary, at least over the time frame in question. We believe that this is a case in which a commitment to specific quantitative targets, and a specific strategy for getting there, was premature and did not make the program better.

Despite this, we believe that Hewlett succeeded in choosing an important problem to work on and in finding and funding promising groups working on the problem, and that it played a real role in the development of at least one organization (ours) that is poised to influence far more dollars than Hewlett spent on the program. For this reason, we think it would be reasonable to consider the program a success, though not necessarily something that should have been continued.

In short, we feel this program was an instance of good and successful philanthropy, and that it may indeed have been time to end it, but we disagree with the way the program framed and evaluated itself and the way Hewlett justified the end of the program.

How Hewlett ended the program

Hewlett took great care to end the program in a way that would not be overly disruptive for grantees. We were notified well in advance of the public announcement about the program’s end; we were able to ask questions and receive helpful answers; and our two-year grant was renewed as an “exit grant.” We were told that other grantees had been treated similarly. By clearly communicating its intent to end the program and committing “exit funding,” Hewlett ensured that we would have ample time to adjust for the loss of this revenue.

We also applaud Hewlett’s decision to publish its reasoning in ending the program and invite a public discussion.

A note on Hewlett’s transparency

Shortly after taking over as President of the Hewlett Foundation, Larry Kramer expressed his desire to further improve Hewlett’s transparency, and we think there has indeed been substantial progress. The public discussion of the end of the Nonprofit Marketplace Initiative represents some of this progress. In addition:

  • Hewlett’s relatively new blog is frequently updated and has given us a window into the day-to-day work and thoughts of its staff.
  • Hewlett recently held conference calls with open Q&A for grantees.

As a result, we believe Hewlett has become one of the easiest foundations to learn about and get a feel for from the outside. We think this is quite a positive development, and may write more in the future about what we’ve learned from examining Hewlett’s output.

Key takeaways

Hewlett’s vision of good philanthropy, at least in this case, seems to have involved setting extraordinarily ambitious and specific goals, laying out a plan to get there, and closing the program if the goals aren’t reached. By this measure, the Nonprofit Marketplace Initiative apparently failed (though Hewlett followed its principles by closing a program falling short of its goals).

Our vision for good philanthropy is that it finds problems worth working on (in terms of importance, tractability and uncrowdedness) and supports strong organizations to work on them, while ensuring that any “active” funding (restrictions, advice, requests of grantees) creates more value than it detracts. We think that specific quantitative goals are sometimes called for, but are more appropriate in domains where the background data is stronger and the course is easier to chart (as with our top charities). By our measure, we think the Nonprofit Marketplace Initiative was at least reasonably successful.

Recognizing this difference in the way we think about good philanthropy will help us to better understand Hewlett’s decisions going forward, and will give us a disagreement to reflect on as we move forward with our vision. We’re glad to have examined Hewlett’s thinking on this matter, and see the chance to do so as a benefit of Hewlett’s improved commitment to transparency.

A note on the role of Hewlett’s funding in our budget:

Because this post discusses Hewlett’s work in an evaluative manner, we think it’s worth being clear about the support we receive so that people may take into account how this may influence our content.

Hewlett has provided generous support to GiveWell since 2008. We hope that it will continue doing so even after the end of our current grant, depending on how our work and Hewlett’s evolve (our work on GiveWell Labs seems to us to be relevant to Hewlett’s work on encouraging transparency among major funders). We are currently projecting expenses and revenues of over $1.5 million per year, and Hewlett’s support has historically been around $100,000 per year.

Our ongoing review of ICCIDD

The International Council for the Control of Iodine Deficiency Disorders Global Network (ICCIDD) advocates for and assists programs that fortify salt with iodine. Our preliminary work (writeup forthcoming) suggests that even moderate iodine deficiency can lead to impaired cognitive development.

ICCIDD tracks iodine deficiency around the world and encourages countries with iodine deficient populations to pass laws requiring iodization for all salt produced in and imported to the country. ICCIDD also provides – and helps countries find – general support and assistance for their iodization programs.

In February, we wrote that we were considering ICCIDD for a 2014 GiveWell top charity recommendation. We’ve now spent a considerable amount of time talking to and analyzing ICCIDD. This post shares what we’ve learned so far and what questions we’re planning to focus on throughout the rest of our investigation. (For more detail, see our detailed interim review.)

ICCIDD has successfully completed the first phase of our investigation process and we view it as a contender for a recommendation this year. We now plan (a) to make a $100,000 grant to ICCIDD (as part of our “top charity participation grants,” funded by Good Ventures) and (b) to continue our analysis to determine whether or not we should recommend ICCIDD to donors at the end of the year.

Reasons we prioritized ICCIDD

We prioritized ICCIDD because of our impression that iodization has strong evidence of effectiveness and cost-effectiveness, and that ICCIDD has room for more funding.

The evidence of effectiveness for salt iodization is not fully straightforward – we plan to publish an intervention report with details before the end of the year – but multiple randomized controlled trials imply that reducing iodine deficiency in children leads to moderate (~3–4 point) gains in IQ.

We have yet to find well-documented assessments of the cost of iodization, but the estimates we have seen most commonly put it at approximately $0.10 per person reached.

Although iodization rates have increased dramatically over the past 20 years, significant deficiency still exists. ICCIDD publishes a scorecard showing countries’ iodine status; many fall significantly below the benchmark of 100 µg of iodine per liter of urine.

Questions we hope to answer in our ongoing analysis

What would have happened to iodization programs in ICCIDD’s absence?

Because ICCIDD is an advocacy/technical assistance organization (it does not directly implement iodization programs but advocates that others do so), it is difficult to assess its impact.

ICCIDD has provided us with several examples of countries in which it believes it played an essential role (some of which we discuss briefly in our interim review page), but we have not yet investigated these cases sufficiently to form a confident view about what role ICCIDD played and how crucial its contributions were to the program.

What role does ICCIDD play relative to other organizations that work on iodization?

A number of organizations support government and private-sector salt iodization programs, especially UNICEF, the Global Alliance for Improved Nutrition (GAIN), and the Micronutrient Initiative.

We hope to better understand the roles each organization plays so that we can formulate a view about where donated funds are likely to have the greatest impact. (We’re considering the possibility that funds donated to any of them should be thought of as “supporting the international effort to support iodization” and that the important question is assessing the combined costs and impacts of all four organizations.)

We are also considering GAIN for a 2014 GiveWell recommendation. We do not expect our decision about GAIN to affect the likelihood of ICCIDD receiving a recommendation.

Program monitoring

Surveys to assess iodine consumption and status are completed more than once a decade in most countries, and are usually conducted by country governments or UNICEF. We have yet to analyze these surveys carefully enough to know whether or not they provide a reliable assessment of the track record of iodization programs: i.e., do iodization programs lead to a reduction in iodine deficiency?

Room for more funding

We have seen strong evidence that ICCIDD is funding constrained. It told us that its staff members have, over the past few years, consistently submitted requests for funds that are significantly higher than it is able to allocate. Additionally, ICCIDD lost what had been its largest funder in 2012. It has also shared an overall budget with us requesting significantly more funding than it has received in the past.

Nevertheless, we have two major questions about room for more funding:

  1. Given iodization’s cost-effectiveness and track record, why haven’t others closed the funding gap? We have been told that the lack of funds may be due to “donor fatigue” (i.e., donors have supported iodization in the past and iodized a large proportion of the countries in need, so they no longer view it as a priority), but we have yet to investigate this question sufficiently to feel comfortable with our understanding.
  2. Will ICCIDD’s future activities be as cost-effective as past attempts to increase iodization rates? One possible explanation for the lack of donor funds is that the countries that remain iodine deficient are particularly problematic. Were this true, it might be the case that donors are acting rationally because future efforts to iodize could be significantly more costly than past efforts.

Note that the Gates Foundation previously made a $40 million grant to support universal salt iodization (USI) in 16 countries over seven years. That grant ends in March 2015, and no extension has yet been scheduled.

Partnership with The Pew Charitable Trusts

Throughout the post, “we” refers to GiveWell and Good Ventures, who work as partners on GiveWell Labs. [Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

We have agreed to a major partnership with The Pew Charitable Trusts as part of our work on criminal justice reform. Good Ventures will provide $3 million to support and expand the work of Pew’s public safety performance project (PSPP), which aims “to advance data-driven, fiscally sound policies and practices in the criminal and juvenile justice systems that protect public safety, hold offenders accountable, and control corrections costs” through technical assistance to states, research and public education, and promotion of nontraditional alliances and collaboration around smart criminal justice policies.

We came into contact with Pew through our investigation on criminal justice reform. Our impression is that PSPP has been intensively involved in the criminal justice reform packages that have passed in over two dozen states since 2007. PSPP now seeks more funding to work in additional states, help states to cement existing reforms, explore the potential for reform at the federal level, and continue pursuing research and public education and engaging with nontraditional allies of reform.

In discussions with Pew, we have been impressed with the knowledge and thoughtfulness both of the PSPP team and of The Pew Charitable Trusts as a whole. It appears to us that Pew has worked in a substantial number of policy areas, often with concrete goals and concrete stated results over several-year time frames, and that Pew has a good deal of general capacity for assessing the opportunities in a policy space and developing a relatively systematic strategy for working within it. (This does not mean that we see eye to eye with Pew on all matters. We believe it sets policy priorities using a different value system from ours; for example, we have a stronger interest in foreign aid and other issues related to developing-world poverty reduction.) More information on Pew as a whole will be forthcoming, including notes from a day-long visit in November and a potential historical case study on its work in another area. Our current writeup includes an assessment of the track record of PSPP specifically.

We see this partnership as an important step on multiple fronts:

  • Criminal justice reform is a current focus area for us, and PSPP appears to be one of the most prominent and effective organizations working toward change on this front. Funding and following its work represents an opportunity for both impact and learning.
  • We are also interested in developing a relationship with Pew as a whole; we believe this relationship will be a valuable resource as we continue to explore policy-oriented philanthropy. Based on conversations with Pew representatives, we see supporting PSPP as one of the best ways to support Pew as a whole.
  • Finally, the process of establishing this partnership has itself been a valuable learning opportunity. With PSPP’s help, we have conducted a brief review of PSPP’s track record, which was our first attempt to assess the track record of a U.S.-policy-focused organization and taught us a fair amount about the criminal justice reform space. We have also dealt with new challenges around how to balance our goal of transparency with the goal of having maximal impact; when working on policy, there can be particular tension between these, and we have established an agreement regarding public discussion of PSPP that may serve as a guide to future grant agreements. Note that we have agreed to a review process for public updates that is likely to be time-consuming for both us and Pew, and accordingly we have agreed to limit the frequency with which we publish updates on the project.

Our full writeup has further discussion of PSPP, its track record, our cost-effectiveness estimate, and the case for (and details of) this collaboration.

Writeup on our partnership with PSPP
Note that we believe PSPP has room to productively use more than the $3 million Good Ventures will be providing. Donors interested in contributing to PSPP should contact us.