2015 plan for GiveWell’s traditional (“top charities”) work

This is the third post (of six) we’re planning to make focused on our self-evaluation and future plans. The goal of this post is to update GiveWell’s followers on our plans for our traditional work in 2015 and to establish a general set of goals by which we can evaluate ourselves at the beginning of 2016.

As discussed in our previous post, in 2014 we conducted a large amount of research for GiveWell’s traditional (“top charities”) work while maintaining research quality and substantially expanding our capacity to conduct research in the future. The amount of money moved to our recommended charities continued to grow: we moved about $28 million in 2014.

This year, our primary goals are to:

  • Build management and research capacity for GiveWell’s traditional work while further reducing senior staff time (note 1) spent on this work, primarily by reallocating Elie Hassenfeld’s management responsibilities related to GiveWell’s traditional work.
  • Maintain our core research product by completing updates on all eight 2014 recommended charities and determining which of them should be recommended as top charities for the 2015 giving season.

Our secondary goals for 2015 are to:

  • Continue to seek outstanding giving opportunities by reviewing 2-4 new charities and publishing 2-4 new intervention reports.
  • Improve the cost-effectiveness analyses and room for more funding analyses in charity reviews.
  • Finish and launch a redesigned GiveWell website.
  • Make further progress on experimental work to “seed” potential recommended charities.

We expect our total output on “top charities” work to be roughly comparable to last year’s, despite a growing staff, because (a) a major focus of the coming year is training, and we expect to trade some short-term efficiency for long-run output; and (b) we may reallocate some capacity from our “top charities” work to the Open Philanthropy Project this year.

More details on some of these goals are below.

Building capacity

In 2015, we hope to build substantially more management and research capacity for GiveWell’s traditional work in order to move toward our goal of having a sustainable organization that is not dependent on past senior staff. As we have discussed before, building capacity is challenging and generally reduces output in the short term. This year, we plan to build capacity by:

  • Training relatively senior staff to take on management roles by reallocating Elie Hassenfeld’s management responsibilities to them as much as possible. For example, Senior Research Analyst Natalie Crispin is currently performing all of Elie’s management responsibilities with respect to GiveWell’s 2015 charity reviews and charity updates. Elie is overseeing Natalie during this transition. Since Natalie is managing others on this work, she does not have as much time to directly do research work herself.
  • Training relatively junior staff to do most charity updates, intervention reports, and new charity reviews. Most junior staff members are relatively new to this type of work.
  • Continuing to hire and train new Research Analysts, Outreach Associates, and Conversation Notes Writers.

We expect that these efforts to build capacity will enable us to do more research – for both GiveWell and the Open Philanthropy Project – in the long run but will reduce the efficiency of our work in the short run, requiring more person-hours per unit of output than in 2014.

Building capacity to do intervention reports

As we wrote in our 2014 self-evaluation post, completing new intervention reports in 2014 was much more difficult and time-consuming than we had anticipated. This year, we are trying to build more capacity for completing these reports by training more staff to do intervention-related research and by improving our process for doing this research. Our ultimate goal is to have a process for completing a reasonable number of intervention reports that does not require substantial involvement from Elie.

We consider building capacity to do more intervention reports to be a high priority because we must be able to complete these reports in order to best prioritize new charities for investigation.

Charity updates

We plan to publish updates on all eight of our recommended charities in 2015. We are generally aiming to have conversations with each charity in February, May, and September that will each result in conversation notes and/or an update report (example). This update schedule may vary somewhat by charity. We are following this charity update schedule so that a) we learn about any major updates that might cause changes in a charity’s recommendation status as soon as possible and b) we spread out the work of refreshing our charity reviews over the course of the year.

As part of our February update calls with recommended charities (especially top charities), we will be gathering more information about charities’ room for more funding situations. As we mentioned in December 2014, we may update our recommended allocation to top charities to reflect any major changes in charities’ funding needs. We tentatively plan to publish such an update in April.

In our charity updates, in addition to our standard questions following up on each charity’s activities, we will focus in particular on whether any new information about our “standout” charities might lead us to recommend them as “top” charities by the end of the year.

Intervention reports

This year, we hope to complete 2-4 new intervention reports. The programs and program areas that we have preliminarily prioritized for investigation include:

  • Nutrition programs (e.g., folic acid fortification and iron fortification)
  • Immunization programs (e.g., immunization against measles and meningitis)
  • Neglected tropical diseases programs (e.g., trachoma and onchocerciasis)
  • Programs for which we believe a charity would apply if we determined the intervention to be a priority program (e.g., “Targeting the Ultra-Poor” (or “Ultra-Poor Graduation”) programs and voluntary medical male circumcision for the prevention of HIV)

We also plan to publish two intervention reports that are near completion: maternal and neonatal tetanus elimination and mass drug administration to eliminate lymphatic filariasis.

A major factor in prioritizing among intervention reports is determining which interventions seem to be most broadly similar to our other priority programs. We believe that such interventions are most likely to succeed in our current process. Factors that seem to be common among our priority programs include:

  • The program has strong evidence of effectiveness (preferably from multiple high-quality studies)
  • The program is very low-cost per person reached
  • Studies of the program’s effects seem not to be overly dependent on the particular context in which the program was implemented (e.g., studies of health programs often seem more likely to be externally valid than studies of education programs because the mechanisms by which health programs have their effects are often more consistent across populations)
  • The program is highly replicable for an implementing organization (i.e., a charity would face a low burden of proof to show that it was carrying out the same intervention that was studied and shown to be effective)
  • The program has informative proximate outcomes (e.g., deworming pills taken, bed nets delivered and used, etc.) that can be fairly easily measured and monitored

New charity reviews

We plan to actively pursue evaluations of 2-4 new potential top charities this year. Our tentative plans for which charities we may evaluate are below. However, note that there are many reasons that new charity review prioritization could change during the year, such as learning new information from potential recommended charities and completing new intervention reports that change our views on which interventions are promising (several of the new charities that we may evaluate implement interventions for which we have not yet published intervention reports). We also plan to maintain our “open-door policy” for allowing any charity to apply for a recommendation.

As in previous years, we chose to prioritize charities mainly based on our best guess about whether they will become top-rated organizations. We also gave additional weight to organizations that we guessed have some chance of being substantially more cost-effective than our current top charities. The charities that we may evaluate include:

We have also reached out again to Nothing But Nets about applying for a recommendation because it distributes long-lasting insecticide-treated bed nets, which we consider to be one of the most cost-effective priority programs.

Experimental work

In 2015, we (in collaboration with Good Ventures) plan to continue the experimental work to “seed” additional top charities that we began in 2014, though we still do not consider this work to be a high priority. A few activities that we are considering in this area include:

  • Providing funding to promising young charities, such as New Incentives, that could eventually become recommended charities.
  • Funding additional research on, and scale-up support for, programs that a) could be priority programs if they were supported by additional studies and b) could be scaled up with additional support. The main way we are currently doing this is through our funding of Evidence Action (early conversation, recent conversation, grant page). We have also investigated other potential partners and may take on one or more in the future.
  • Funding additional independent monitoring that could increase our confidence in the success of recommended charities’ programs (e.g., deworming, salt iodization) or increase our confidence in other organizations’ ability to carry out priority programs (e.g., if we learned that standard government-led bed net distributions were high-quality, then we might recommend additional funding to large non-profits that fund government bed net distributions). We are initially planning to work with IDinsight on this project.

We plan to explore options and publish updates on our progress on this work throughout 2015.

Improving quality of charity reviews

In general, we feel that our charity reviews are high-quality. However, we believe that there are some ways in which they could be improved. In particular, in 2015, we hope to improve the cost-effectiveness analyses (CEs) (example) and “room for more funding” analyses (example) in our charity reviews, if we have the capacity to do so.

We feel that the quality of our CEs has been acceptable in the past, but we have identified tangible ways to improve them and feel that it is worth using some of our additional capacity to do this because such analyses are relatively important to our charity recommendations. In particular, we aim to:

  • Generally improve the transparency and clarity of our CEs.
  • Think more carefully about the major inputs that cause a substantial amount of variation in our CEs and ensure that we know as much as we can about those inputs. For example, the proportion of deworming pills that were given to children as part of Schistosomiasis Control Initiative (SCI)’s campaigns is a relatively important parameter in our CE for SCI, but we did not have as much confidence in our understanding of this parameter as we could have at the end of last year.
  • Ensure that we properly account for “leverage” considerations when appropriate (e.g., in our CEs for organizations such as Iodine Global Network and Deworm the World Initiative).
  • Ensure that we are applying rules for including and excluding costs and benefits consistently across all CEs (e.g., making sure that we have captured all in-kind donations to all of our charities’ programs in our CEs).

Similarly, we would like to improve the “room for more funding” sections of our charity reviews. To achieve this, we plan to:

  • Discuss charities’ funding needs with them earlier in the year so that we can gain as much clarity as possible about their funding situations.
  • Standardize the questions that we ask charities about their room for more funding so that we can be more confident that we are making similar comparisons across organizations.

How large is the pipeline of potential top charities and priority programs?

We are building a charity review organization with a substantial amount of capacity for conducting research, but it is unclear how many charities we have not yet reviewed may be competitive with our current top charities.

One factor that affects our estimate of the size of the pipeline is that, over time, we have broadened our criteria and research process so that we are able to evaluate more types of potentially high-impact giving opportunities. For example, in the last two years we have begun to evaluate organizations that play an advocacy and advisory role to governments, such as Deworm the World Initiative and Iodine Global Network. We are also now open to evaluating components of “mega-charities” that are working to scale up potential priority programs, such as UNICEF Maternal and Neonatal Tetanus Elimination Initiative.

We have also seen that charities have been more willing to engage in our review process over time. This may be due to our growing money moved, growing influence, and generally improved incentives for charities to apply (discussed in our 2014 self-evaluation post).

These changes have had the effect of increasing the pipeline of potential charities and programs to evaluate and potentially allowing us to find more cost-effective giving opportunities than we had been able to find previously.

We are also trying to increase the pipeline of outstanding giving opportunities through our experimental work mentioned above (e.g., by funding New Incentives and replications of studies).

Currently, we do not have a strong sense of the overall number of potential top charities and potential priority programs that we have not yet evaluated, but we feel that there are substantially more promising charities and programs than we will be able to evaluate this year and believe that it is possible that the pipeline will grow over time.

Note 1: In this post, senior staff refers to Elie, Holden, and Alexander. Many staff took on additional responsibilities throughout 2014, so this refers to senior staff as of January 2014, not as of today.

2014 progress on GiveWell’s traditional (“top charities”) work

This is the second post (of six) we’re planning to make focused on our self-evaluation and future plans.

This post reviews and evaluates last year’s progress on our traditional work of finding and recommending evidence-based, thoroughly vetted charities that serve the global poor. It has two parts. First, we look back at the plans we laid out in early 2014 and compare our progress against them, providing details on some of the most significant accomplishments and shortcomings of the year. Then, we reflect on the quality of our traditional work and critically evaluate some of our major strategic decisions. In our next post in this series, we will cover our plans for GiveWell’s traditional work in 2015.


Overall, we feel that 2014 was an excellent year for GiveWell’s traditional work.

At the beginning of 2014, we laid out our most ambitious research goals yet, including publishing updates on all recommended charities, reviewing several new charities, and completing new intervention reports. We expected to be able to complete a higher volume of work than ever before while also reducing senior staff time (note 1) devoted to GiveWell’s traditional work by continuing to hire, train, and develop non-senior staff. We feel that we broadly met those goals while maintaining the overall quality of our research.

The impact of GiveWell’s traditional work continues to steadily increase, as we moved about $28 million to recommended charities in 2014 (more details forthcoming in the final post of this series, which focuses on our metrics).

We believe that our 2014 recommended charities list was high quality. A notable development was that we included four new “standout” organizations on our recommended charities list. We believe that some of these organizations may become top charities in the future.

A few areas in which we fell short in 2014 were:

  • Publishing new intervention reports. Completing intervention reports was more difficult than we had expected at the beginning of 2014; we hope to improve on our process for doing these reports in 2015.
  • We finalized the details of our top charity recommendations in late November 2014, after we had already made a recommendation to Good Ventures about how to allocate its giving among the recommended charities. If we had completed our analysis earlier, we might have recommended a different allocation to Good Ventures. We see the harm here as minimal since we ultimately adjusted our public targets to account for grants from Good Ventures; however, in the future we should try to avoid a recurrence of this issue, perhaps by announcing our recommendations to Good Ventures and the public at the same time.
  • We did not follow the ideal process for ensuring that our cost-effectiveness analyses were robust, accurate, and easily understandable, which led us to finalize these analyses very late in the year.

Our progress in 2014 relative to our plans

Our “2014 plan” blog post laid out several main goals for the year:

  • Continue to build capacity for conducting “top charities”-related research work. Reduce senior staff time devoted to this work by training other staff to take over senior staff’s responsibilities
  • Publish updates on previously recommended charities
  • Conduct reviews for several new potential recommended charities
  • Maintain our “open-door policy” for allowing charities to apply for a recommendation
  • Publish four intervention reports that were near completion (maternal and neonatal tetanus elimination, salt iodization, Vitamin A supplementation, and polio)
  • Publish 5-10 new intervention reports on nutrition programs, behavior change programs, and other programs
  • Fund experimental work that may lead to more recommended charities in the future (e.g., providing early funding to promising charities such as New Incentives or funding replications for promising interventions)
  • Conduct other miscellaneous research (e.g., produce a cost-effectiveness estimate for Dispensers for Safe Water (DSW), review the midline of Development Media International (DMI)’s randomized controlled trial (RCT), consider evaluating a mega-charity, etc.)

We feel that we broadly achieved these goals in 2014. In summary, we:

One area in which we fell short of our expectations was publishing new intervention reports, largely because completing these reports was more difficult than we had anticipated.

More details on some of our major achievements and shortcomings are below.

Capacity building

GiveWell’s traditional work produced more total research “output” in 2014 than in previous years while also using less senior staff capacity. A rough measure of total research output is the number of charity updates, charity reviews, intervention reports, and other major research work completed during the year. In 2014, we completed four charity updates, four new charity reviews, two intervention reports, and some work on “seeding” new top charities. For comparison, in 2013, we completed three charity updates (two of which, for AMF and SCI, were substantial and required a significant amount of senior staff time), one new charity review (Deworm the World Initiative), and one intervention report (water quality). (More details on the work we did in 2013 are in our 2013 self-evaluation.) We consider the increase in research output in 2014 to be a major achievement.

Non-senior staff continued to be trained to take on additional responsibilities, and our staff continued to steadily expand. Examples of greater responsibilities shared by non-senior staff and reductions in senior staff time spent on GiveWell’s traditional work include:

  • All four new charity reviews (DMI, IGN, GAIN, and Living Goods) were led by non-senior staff. Our first new charity review led by non-senior staff was the DtWI review in 2013. We would not have had the capacity to do four new charity reviews in a year if not for our expanded non-senior staff capacity.
  • Holden and Alexander spent very little time on traditional work. In particular, Holden substantially reduced the amount of time that he spent writing blog posts for GiveWell’s traditional work. Elie continued to spend most of his time on traditional work but also passed off some of his responsibilities to other staff.
  • Non-senior staff took on increased management responsibility. For example, Natalie Crispin managed other staff on our updated review of SCI, our new review of Living Goods, and other work. Other staff helped manage new Research Analysts, Summer Research Analysts, and Conversation Notes Writers.
  • All three site visits to recommended charities were conducted without senior staff.
  • All intervention report work was primarily conducted by Jake Marcus.

We see reducing senior staff time spent on GiveWell’s traditional work as a major success because a) making the organization less dependent on a few individuals improves the sustainability of the organization and b) we have historically primarily been constrained by senior staff capacity, so freeing up senior staff capacity should enable us to make progress on goals such as the Open Philanthropy Project.

We have also substantially improved our capacity by hiring and training Conversation Notes Writers. GiveWell has published about 150 conversation notes per year for the last two years (see them on our conversations page). In 2013 and early 2014, Research Analysts spent a substantial amount of their time writing conversation notes. In 2014, we hired Conversation Notes Writers to handle this responsibility. We now have eight Conversation Notes Writers, and Research Analysts generally spend very little of their time on conversation notes.

Finally, although increased capacity has already allowed us to accomplish more than we had previously, we believe that many of the largest benefits will come in the future. Staff members have consistently contributed significantly more as their tenure at GiveWell has grown. We currently have only five staff members who have been at GiveWell for more than two years.

New charity reviews

During 2014, we completed new reviews for four charities (DMI, IGN, GAIN, and Living Goods) that we ultimately recommended as “standout” organizations. (note 2)

We feel that this was a major success of our research work for the year. We believe that adding these “standout” charities to our list of recommended charities was valuable because (roughly in order of importance):

  1. These organizations seem to be very promising giving opportunities; some of them may become top charities in the future.
  2. If our money moved continues to grow, it will be important to have as much “room for more money moved” as possible. Even if current standout charities never become as strong (in isolation) as our current top charities, they may become the best options available when room for more funding is taken into account.
  3. The “standout” charities represent the organizations that we felt, on preliminary review, had the best chance of being significantly stronger giving opportunities than our current top charities. This time around, further review concluded that they were not as strong, but we feel it is important to continue engaging in these sorts of investigations and evaluating the best possible challenges to our current list.
  4. The “standout” designation and associated changes to our review process improve the incentives for potentially promising charities to apply for a GiveWell recommendation, which makes us more likely to be able to find the best giving opportunities. In particular, in 2014 we provided participation grants to promising charities that allowed us to review them publicly, directed some funding to the “standout” organizations by adding them to our list of recommended charities, and conferred some status on these organizations by giving them a GiveWell recommendation. These factors improve the cost-benefit analysis for a charity considering applying for a GiveWell recommendation, which we hope will lead to more promising charities applying over time. Consistent with this, we saw increased interest from charities in engaging in our process in 2014 and expect this to continue as our money moved and influence grow.
  5. Adding more charities to our recommended list provides donors with more options. If donors have different values from us or different fundamental beliefs about which types of organizations are likely to be most effective, then we could be providing a valuable service by doing research on a wider set of donation options.

Intervention reports

We published fewer intervention reports than we had hoped to at the beginning of 2014. We completed intervention reports for salt iodization and vitamin A supplementation, but we have not yet published the other two reports that we had said were near completion at the beginning of 2014 (polio and maternal and neonatal tetanus elimination) and did not publish any new reports, though we said last year that we had hoped to publish 5-10 new reports. That said, our goal of publishing 5-10 new intervention reports was arbitrary and, upon further reflection, unrealistic given the amount of time that it has typically taken us to complete intervention reports in the past.

We did not accomplish as much as we expected on this front primarily because completing these reports was much more difficult and time-consuming than we had anticipated. As of the beginning of the year, we had completed only three intervention reports that match our current standards of thoroughness, and senior staff had led the completion of each such report. This year, we tried to complete intervention reports with far less involvement from senior staff, and this proved challenging. There is an essentially unlimited number of questions we could ask about a given intervention, and making the right decisions about which questions to focus on (and at what level of thoroughness) is key; with less involvement from senior staff, it was more difficult to ensure that time spent investigating and writing up questions was allocated to the right questions at the right level of detail. In particular, we had cases in which an intervention report appeared close to completion, but late-stage reviews and peer feedback added many more questions.

Improving our process for doing intervention reports is one of our primary goals for 2015 (more on our goals in a forthcoming blog post). Additionally, the main staff member who worked on intervention reports (Jake Marcus) also worked on other evidence reviews, such as reviewing a new, promising study on deworming, an early, unpublished draft of the Living Goods study, and the midline of the DMI study. He also spent some of his time investigating donating to the Ebola response as a giving opportunity.

Other shortcomings

Late completion of top charity recommendations

We finalized the details of our top charity recommendations later in the year than would have been ideal. In late November, we were still clarifying facts and debating some key issues related to our recommendations, such as SCI’s room for more funding and estimated cost-effectiveness and AMF’s room for more funding.

This is problematic because we made our recommendation to Good Ventures about how to allocate its giving in mid-November. We had agreed with Good Ventures that it should aim to announce its giving plans at the same time that we released our recommendations to the public in order to avoid potential fungibility concerns. To meet this deadline, we sought to finalize our recommendation to Good Ventures a couple of weeks before our public recommendations were released.

If we had fully completed our analysis before making a recommendation to Good Ventures, we likely would have recommended relatively more to AMF and relatively less to GiveDirectly. (For more details on how Good Ventures allocated its giving and our recommended allocation to donors, see our 2014 recommendations announcement post.)

In the end, we adjusted the public targets we announced based on the grants Good Ventures had committed to, so we don’t see a major issue here. However, in the future we should try to avoid a recurrence of this issue.

In the past, we have tried more than once to finalize our recommendations well in advance of giving season. At this point, we’re not sure that goal is realistic: we want our giving-season recommendations to take advantage of the most recent possible information and ideas, and it’s unlikely that we’d be comfortable with finalizing our recommendations before the date that we have to do so. An alternative way to avoid the issue described above might be to announce our recommendations to Good Ventures and the public at the same time.

Issues with cost-effectiveness analysis

We did not follow the ideal process for reviewing and internally critiquing our cost-effectiveness analyses, which led us to finalize them later in the year than would have been ideal. In particular:

  • There was little senior-level review of the details of some of our key cost-effectiveness analyses (e.g., the cost-effectiveness analyses for SCI and DMI) until late in our research process.
  • We did not ensure that multiple staff members understood the most important parameters and assumptions in all cost-effectiveness analyses until late in the research process. For example, the proportion of deworming pills that were given to children as part of SCI’s campaigns was a relatively important parameter in our cost-effectiveness analysis for SCI, but we did not have as much confidence in our understanding of this parameter as we could have at the end of the year.
  • The cost-effectiveness analyses were often complicated and somewhat opaque, which made it difficult for staff members to use the analyses as an input to their thinking about what GiveWell’s recommendation should be.

After putting in additional work on the cost-effectiveness analyses late in the research process, we ultimately felt that they were acceptable, but we plan to improve these analyses in the future (more details in the next post in this series).

Quality of our traditional work

Quality of recommended charities list

The quality of our top charities list (measured roughly in terms of expected impact) improved in 2014 relative to 2013 because AMF had room for more funding, a new study increased our estimate of the impact of deworming programs, and GiveDirectly had a stronger track record after another year of successfully distributing unconditional cash transfers at scale.

Additionally, we added four “standout” organizations to our recommended charities list, which we felt improved the quality of our recommendations for the reasons mentioned above.

Research quality

We feel that we maintained the high quality of our research in 2014. Though evaluating the quality of our research is difficult and involves many subjective judgments, we feel we have maintained our research quality because:

  • Our major research reports (charity reviews, intervention reports, etc.) lay out all reasoning explicitly and back up factual claims with footnotes showing the evidence used to support them. These standards force all researchers to produce reports that can be easily vetted by other staff and the public. All reports receive many levels of critical review before they are published. For example, each charity review and intervention report is reviewed by at least one staff member who did not write the report and by that staff member’s manager. For intervention reports, we generally solicit feedback on the quality of the reports from experts in the appropriate fields (see, e.g., our water quality report).
  • We feel that we have a very strong understanding of our recommended charities’ activities. In general, we feel that the quality of the “What does [the charity] do?” and “Does it work?” sections of our charity reviews is as high as or higher than it has ever been. For example, our understanding of (top charity) SCI’s activities is much stronger now than it was in the past, due to greater capacity for deepening our investigation.

However, we believe that there is still room to improve the quality of our research. In particular, we think that the “What do you get for your dollar?” (cost-effectiveness) sections of our charity reviews could be substantially improved, as could the “Room for more funds?” sections. More details on this in the next blog post in this series.

Other self-evaluation questions

Does our impact justify the size of our staff?

In 2014, we moved about $28 million to our recommended charities; excluding Good Ventures’ giving, we moved approximately $12.7 million. (More details on our 2014 money moved will be in our forthcoming 2014 metrics blog post.) We currently project total GiveWell/Open Philanthropy Project expenses of about $2.3 million for 2015 (more), and we estimate that about half of those expenses, roughly $1.15 million, are attributable to GiveWell’s traditional work, or about 9% of the money we moved excluding Good Ventures. We previously wrote that expenses equal to 15% of money moved are well within the normal range, so we feel comfortable with the relative size of our operating expenses at this point.
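For concreteness, the comparison above can be sketched in a few lines. This is purely illustrative arithmetic using the figures quoted in this post; the 50% allocation to traditional work is the post’s own rough estimate.

```python
# Rough sketch of the expenses-vs-money-moved comparison described above.
# All figures come from the post itself; this is illustrative arithmetic only.
total_expenses = 2.3e6           # projected 2015 GiveWell/Open Phil expenses
traditional_share = 0.5          # estimated share attributable to traditional work
money_moved_excl_gv = 12.7e6     # 2014 money moved, excluding Good Ventures

traditional_expenses = total_expenses * traditional_share
ratio = traditional_expenses / money_moved_excl_gv
print(f"Traditional-work expenses as a share of money moved: {ratio:.1%}")
# Well under the 15% benchmark the post cites as within the normal range.
```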

How much larger should GiveWell’s staff become?

As noted above, we have substantially increased our capacity for GiveWell’s traditional work after many years of struggling to do so. However, we feel that it is worth critically evaluating how much value our additional capacity is adding and how much further we should expand our staff, if at all.

An important factor in our thinking about the ideal size of GiveWell staff is that we now see more potential than we had previously for some staff to transition to working for the Open Philanthropy Project.

To analyze the costs and benefits of different staff sizes, we can imagine three scenarios for future GiveWell staff:

  1. Expansion: increasing the size of GiveWell’s staff would allow us to review as many or more new charities each year, eventually allocate more staff to the Open Philanthropy Project, potentially improve our work of “seeding” potential future top charities, and potentially improve our future outreach efforts.
  2. Status quo: if we kept the size of GiveWell staff the same as it is now, we would likely dedicate most staff to maintaining our current level of research. Under this scenario, we would likely halt the transition of staff to the Open Philanthropy Project, not do substantial work to improve future outreach efforts, and do relatively little to seed potential future top charities.
  3. Contraction: in this scenario, we would reduce the size of GiveWell staff to the minimum number needed to maintain our recommendations. A smaller staff would likely be able to publish updates on our past top charities while conducting about one new charity review per year. Under this scenario, we would be relatively unlikely to find promising new giving opportunities, so we would be making a bet that we had already largely found the best giving opportunities.

The main arguments we see in favor of expansion are:

  • If our money moved continues to grow, we will likely need more “room for more money moved.” To increase “room for more money moved” and ensure that we are recommending high-quality giving opportunities, we will likely need to do research on new charities and do more work to seed potential future top charities.
  • The Open Philanthropy Project is early in its process of finding promising new giving opportunities and is severely capacity-constrained. Increasing the size of GiveWell’s staff will likely lead to more capacity for the Open Philanthropy Project.
  • GiveWell would need more staff in order to do more work on seeding potential future top charities and to do more outreach while maintaining its current level of research. These activities could be highly valuable.
  • Hiring operates on a long time scale; there are long lags between (a) advertising a position, (b) hiring, and (c) the new staff member reaching their full potential. Highly experienced hires are very versatile and valuable; the benefits of making such hires are robust across many potential future paths for GiveWell and the Open Philanthropy Project.
  • The worst case scenario for overexpansion is that some amount of money is used inefficiently on staff and that GiveWell must contract later, while the worst case scenario for underexpansion is that GiveWell and the Open Philanthropy Project are unable to capitalize on a vastly larger future opportunity for impact.

The main arguments we see in favor of maintaining the status quo or contracting are:

  • GiveWell’s “impact per dollar” would likely be higher in the short term in the status quo or contraction scenarios because we could maintain our current top charities list while spending less on our operations. GiveWell has not found many new top charities in the recent past, so we may not be sacrificing much impact by contracting. However, the legitimacy of GiveWell’s top charities list may degrade over time if the set of plausible candidates for top charities grows relative to the set of charities we have considered.
  • To some extent, there are diminishing returns to additional hiring because a growing staff requires more overhead- and human resources-related work.

Ultimately, we feel that the arguments in favor of expansion are significantly stronger than those for maintaining the status quo or contracting. However, we are still unsure of how much larger GiveWell’s staff should become in the longer term. The ideal future size depends on many factors, such as whether our research process has been identifying new top charities, the size of the “pipeline” of potential new top charities and priority programs (which we plan to discuss in the next post in this series), how many existing GiveWell staff ultimately work for the Open Philanthropy Project, and the size and success of our outreach operation. We plan to continue revisiting this question periodically.

Allocation of resources to research vs. outreach

As in previous years, we did not set a goal to do more outreach in 2014; we maintained outreach at roughly the same level as in the past. Our approach has been to prioritize the highest return-on-investment activities while not making outreach a major priority. That said, the resources we devote to outreach are not insignificant: for example, Co-Executive Director Elie Hassenfeld spent more than 10% of his time on outreach in 2014. More details on how we think about prioritizing outreach are available in this blog post.

Note 1: In this post, senior staff refers to Elie, Holden, and Alexander. Many staff took on additional responsibilities throughout 2014, so this refers to senior staff as of January 2014, not as of today.

Note 2: These were not necessarily the charities that we had expected to review at the beginning of 2014. At that time, we believed that we might complete reviews for ICCIDD (now named IGN), Centre for Neglected Tropical Diseases (CNTD), Nothing But Nets, UNICEF Maternal and Neonatal Tetanus Elimination Initiative (MNT), Measles and Rubella Initiative, and Menafrivac. Of those charities, we completed a review for IGN and made substantial progress on forthcoming reviews for CNTD and UNICEF MNT. Nothing But Nets declined to participate in our process. We ultimately prioritized different charity reviews because we learned new information–for example, Living Goods contacted us to share early results from its RCT and DMI found promising midline results from its RCT.

GiveWell’s Progress in 2014 and Plans for 2015: summary

This is the first post (of six) we’re planning to make focused on our self-evaluation and future plans.

As in past years, we’re going to be posting our annual self-evaluation and plan as a series of blog posts. This post summarizes what changed for GiveWell in 2014 and what it means for the future. Future posts will elaborate.

Money moved to our top charities was ~$28 million, compared to ~$17 million in 2013. Excluding Good Ventures, money moved to top charities went from ~$8.1 million in 2013 to ~$12.7 million in 2014.

We made major progress on building capacity, and plan to continue expanding.

  • At the beginning of 2014, we had 11 full-time staff and 1 Conversation Notes Writer; as of today, we have 18 full-time staff and 8 Conversation Notes Writers.
  • Non-senior staff (note 1) have been taking on significantly more responsibility, as senior staff have focused more on the Open Philanthropy Project and management. In particular, all four new charity reviews (DMI, IGN, GAIN, and Living Goods) as well as all three site visits were led by non-senior staff.
  • Of our current full-time staff, five work primarily on the Open Philanthropy Project, while the other thirteen do a mix of top charities work and cross-cutting work (including managing Conversation Notes Writers, vetting content from both projects, and administrative work). Currently, our payroll expenses are roughly evenly allocated between the two projects.
  • We are hoping to add 4-8 additional Research Analysts over the next 12 months. Three future Research Analysts (two of whom were previously Summer Research Analysts) have accepted offers and are starting mid-year. We are hoping to involve more Research Analysts in the Open Philanthropy Project, particularly to help with writeups of cause investigations and grants, as well as to build still more capacity for evaluating potential top charities. In addition, we are starting to seek cause-specific hires for the Open Philanthropy Project, and we have started to advertise for an Outreach Associate position to help us maintain relationships with a growing number of people who give significantly to our top charities.

Our work on top charities produced much more output than in past years.

In the coming year, we hope for a similar level of output, while further improving the quality of our research, particularly when it comes to the transparency of our cost-effectiveness analysis and the reliability of our room for more funding analysis. We hope to do this while further reducing the role of senior staff, and shifting some capacity to the Open Philanthropy Project.

We feel that our top charities generally improved as giving opportunities. There were no new additions to the list, though some of this year’s “standout charities” may become top charities in the future. Against Malaria Foundation returned to our list for reasons related to room for more funding. A combination of new evidence and successful scaling up improved our confidence in all four organizations.

The Open Philanthropy Project progressed and evolved substantially, though it fell short of our stretch goals.

  • We made substantial progress on our main priorities: U.S. policy and global catastrophic risks. The precise nature of our goal (commitments to causes) shifted, but we have completed a substantial number of high-level cause investigations and decided on our working cause priorities. We are now shifting our focus from cause investigations to aiming for major grants and/or hires.
  • We made less progress than hoped on other cause categories: scientific research funding and global health and development. For 2015, our main goal (a stretch goal) is to form clear priorities within scientific research funding, comparable to where we currently stand on U.S. policy and global catastrophic risks.
  • We have recently been prioritizing investigation over public writeups, and our public content is running well behind our private investigations. We are experimenting with different processes for writing up completed investigations – in particular, trying to assign more of the work to more junior staff.

We are planning to launch new websites for both GiveWell and the Open Philanthropy Project this year. Creating separate websites for GiveWell and the Open Philanthropy Project is a step in the direction of creating clear separation between the two. We are also hoping to begin conversations about what it would look like to form two separate organizations.

Fundraising remains a priority. We are currently fundraising for unrestricted support, which funds a team that is allocated flexibly between the Open Philanthropy Project and our more traditional work.

Note 1: In this post, senior staff refers to Elie, Holden, and Alexander. Many staff took on additional responsibilities throughout 2014, so this refers to senior staff as of January 2014, not as of today.

The Path to Biomedical Progress

We’ve continued to look into scientific research funding for the purposes of the Open Philanthropy Project. This hasn’t been a high priority for the last year, and our investigation remains preliminary, but I plan to write several posts about what we’ve found so far. Our early focus has been on biomedical research specifically.

Most useful new technologies are the product of many different lines of research, which progress in different ways and on different time frames. I think that when most people think about scientific research, they tend to instinctively picture only a subset of it. For example, people hoping for better cancer treatment tend instinctively to think about “studying cancer” as opposed to “studying general behavior of cells” or “studying microscopy techniques,” even though all three can be essential for making progress on cancer treatment. Picturing only a particular kind of research can affect the way people choose what science to support.

I’m planning to write a fair amount about what I see as promising approaches to biomedical sciences philanthropy. Much of what I’m interested in will be hard to explain without some basic background and vocabulary around different types of research, and I’ve been unable to find an existing guide that provides this background. (Indeed, many of what I consider “overlooked opportunities to do good” may be overlooked because of donors’ tendencies to focus on the easiest-to-understand types of science.)

This post will:

  • Lay out a basic guide to the roles of different types of biomedical research: improving tools and techniques, studying healthy biological processes, studying diseases and conditions of interest, generating possible treatments, preliminarily evaluating possible treatments, and clinical trials.
  • Use the example of the cancer drug Herceptin to compare the roles of these different sorts of research more concretely.
  • Go through what I see as some common misconceptions that stem from overfocusing on a particular kind of research, rather than on the complementary roles of many kinds of research.

Basic guide to the roles of different types of biomedical research

Below are some distinctions I’ve found it helpful to draw between different kinds of research. This picture is highly simplified: many types of research don’t fit neatly into one category, and the relationships between the different categories can be complex: any type of research can influence any other kind. In the diagram to the right, I’ve highlighted the directions of influence I believe are generally most salient.

(A) Improving tools and techniques. Biomedical researchers rely on a variety of tools and techniques that were largely developed for the general purpose of measuring and understanding biological processes, rather than with any particular treatment or disease/condition in mind. Well-known examples include microscopes and DNA sequencing, both of which have been essential for developing more specific knowledge about particular diseases and conditions. More recent examples include CRISPR-related gene editing techniques, RNA interference, and using embryonic stem cells to genetically modify mice. All three of these provide ways of experimenting with changes in the genetic code and seeing what results. The former two may have direct applications for treatment approaches in addition to their value in research; the latter two were both relatively recently honored with Nobel Prizes. Improvements in tools and techniques can be a key factor in improving most kinds of research on this list. Sometimes improvements in tools and techniques (e.g., faster/cheaper DNA sequencing; more precise microscopes) can be as important as the development of new ones.

(B) Studying healthy biological processes. Basic knowledge about how cells function, how the immune system works, the nature of DNA, etc. has been essential to much progress in biomedical research. Many of the recent Nobel Prizes in Physiology or Medicine were for work in this category, some of which led directly to the development of new tools and techniques (as in the case of CRISPR-based gene editing, which is drawn from insights about bacterial immune systems).

(C) Studying diseases and conditions of interest. Much research focuses on understanding exactly what causes a particular disease or condition, as specifically and mechanistically as possible. Determining that a disease is caused by bacteria, a virus, or a particular overactive gene or protein can have major implications for how to treat it; for example, the cancer drug Gleevec was developed by looking for a drug that would bind to a particular protein, which researchers had identified as key to a particular cancer. Note that (C) and (B) can often be tightly intertwined, as studying differences between healthy and diseased organisms can tell us a great deal both about the disease of interest and about the general ways in which healthy organisms function. However, (B) may have more trouble attracting support from non-scientists, since the applications can be less predictable and clear.

(D) Generating possible treatments. No matter how much we know about the causes of a particular disease/condition, this doesn’t guarantee that we’ll be able to find an effective treatment. Sometimes (as with Herceptin – more below) treatments will suggest themselves based on prior knowledge; other times the process comes down largely to trial and error. For example, malaria researchers know a fair amount about the parasite that causes malaria, but have only identified a limited number of chemicals that can kill it; because of the ongoing threat of drug resistance developing, they continue to go through many thousands of chemicals per year in a trial-and-error process, checking whether each shows potential for killing the relevant parasite. (Source.)

(E) Preliminarily evaluating possible treatments (sometimes called “preclinical” work). Possible treatments are often first tested “in vitro” – in a simplified environment, where researchers can isolate how they work. (For example, seeing whether a chemical can kill isolated parasites in a dish.) But ultimately, a treatment’s value depends on how it interacts with the complex biology of the human body, and whether its benefits outweigh its side effects. Since clinical trials (next paragraph) are extremely expensive and time-consuming, it can be valuable to first test and refine possible treatments in other ways. This can include animal testing, as well as other methods for predicting a treatment’s performance.

(F) Clinical trials. Before a treatment comes to market, it usually goes through clinical trials: studies (often highly rigorous experiments) in which the treatment is given to humans and the results are assessed. Clinical trials typically involve four different phases: early phases focused on safety and preliminary information, and later phases with larger trials focused on definitively understanding the drug’s effects. Many people instinctively picture clinical trials when they think about biomedical research, and clinical trials account for a great deal of research spending (one estimate, which I haven’t vetted, is that clinical trials cost tens of billions of dollars a year, over half of industry R&D spending). However, the number of clinical trials going on generally is – or should be – a function of the promising leads that are generated by other types of research, and the most important leverage points for improving treatment are often within these other types of research.

(A) – (C) are generally associated with academia, while (D) – (F) are generally associated with industry. There are a variety of more detailed guides to (D) – (F), often referred to as the “drug discovery process” (example).

Example: Herceptin
Herceptin is a drug used for certain breast cancers, first approved in 1998. Its development relied on relatively recent insights and techniques, and it is notable for its relative lack of toxicity and side effects compared to other cancer drugs. I perceive it as one of the major recent success stories of biomedical research (in terms of improving treatment, as opposed to gaining knowledge) – it was one of the best-selling drugs of 2013 – and it’s an unusually easy drug to trace the development of because there is a book about it, Her-2: The Making of Herceptin (which I recommend).

Here I list, in chronological order, some of the developments which seem to have been crucial for developing Herceptin. My knowledge of this topic is quite limited, and I don’t mean this as an exhaustive list. I also wish to emphasize that many of the items on this list were the result of general inquiries into biology and cancer – they weren’t necessarily aimed at developing something like Herceptin, but they ended up being crucial to it. Throughout this summary, I note which of the above types of research were particularly relevant, using the same letters in parentheses that I used above.

  • In the 1950s, there was a great deal of research focused on understanding the genetic code (B). For purposes of this post, it’s sufficient to know that a gene serves the function of a set of instructions for building a protein, a kind of molecule that can come in many different forms serving a variety of biological functions. The research that led to understanding the genetic code was itself helped along by multiple new tools and techniques (A) such as Linus Pauling’s techniques for modeling possible three-dimensional structures (more).
  • In the 1970s, studies on chicken viruses that were associated with cancer led to establishing the idea of an oncogene: a particular gene (often resulting from a mutation) that, when it occurs, causes cancer. (C)
  • In 1983, several scientists established a link between oncogenes and a particular sort of protein called epidermal growth factor receptors (EGFRs), which give cells instructions to grow and proliferate. In particular, they determined that a particular EGFR was identical to the protein associated with a known chicken oncogene. This work was a mix of (B) and (C), as it grew partly out of a general interest in the role played by EGFRs. It also required being able to establish which gene coded for a particular protein, using techniques that were likely established in the 1970s or later (A).
  • In 1986, an academic scientist collaborated with Genentech to analyze the genes present in a series of cancerous tumors, and cross-reference them with a list of possible cancer-associated EGFRs (C). One match involved a particular gene called HER2/neu; tumors with this gene (in a mutated form) showed excessive production of the associated protein, which suggested that (a) the mutated HER2/neu gene was overproducing HER2/neu proteins, causing excessive cell proliferation and thus cancer; (b) this particular sort of cancer might be mitigated if one could destroy or disable HER2/neu proteins. This work likely benefited from advances in being able to “read” a genetic code more cheaply and quickly.
  • The next step was to find a drug that could destroy or disable the HER2/neu proteins (D). This was done using a relatively recent technique (A), developed in the 1970s, that relied on a strong understanding of the immune system (B) and of another sort of cancer that altered the immune system in a particular way (C). Specifically, researchers were able to mass-produce antibodies designed to recognize and attach to the EGFR in question, thus signaling the immune system to destroy them.
  • At that time, monoclonal antibodies (mass-produced antibodies as described above) were seen as highly risky drug candidates, since they were produced from other animals and likely to be rejected by human immune systems. However, in the midst of the research described above, a new technique (A) was created for getting the body to accept these antibodies, greatly improving the prospects for getting a drug.
  • Researchers then took advantage of a relatively recent technique (A) for inserting human tumors into modified mice, which allowed them to test the drug and produce compelling preliminary evidence (E) that the drug might be highly effective.
  • At this point – 1988 – there was a potential drug and some supportive evidence behind it, but its ultimate effect on cancer in humans was unknown. It would be another ten years before the drug went through all relevant clinical trials (F) and received FDA approval, under the name Herceptin. Her-2: The Making of Herceptin gives a great deal of detail on the challenges of this period.

As detailed above, many of the insights essential to Herceptin’s development came long before the idea of Herceptin had been established. My impression is that most major biomedical breakthroughs of the last few decades have a similar degree of reliance on a large number of previous insights, many of them fundamentally concerning tools and techniques (A) or the functioning of healthy organisms (B) rather than just disease-specific discoveries.

General misperceptions that can arise from over-focusing on certain types of research
I believe that science supporters often have misperceptions about the promising paths to progress, stemming from picturing only certain types of research. Below, I list some of these misperceptions as informal, non-attributed quotes.

  • “Publicly funded research is unnecessary; the best research is done in the for-profit sector.” My impression is that most industry research falls into categories (D)-(F). (A)-(C), by contrast, tend to be a poor fit for industry research, because they are so far removed from treatments in terms of both time and risk. Because it is so hard to say what the eventual use of a new tool/technique or insight into healthy organisms will be, it is likely more efficient for researchers to put insights into the public domain rather than trying to monetize them directly.
  • “Drug companies don’t do valuable research – they just monetize what academia provides them for free.” This is the flipside of the above misconception, and I think it overfocuses on (A)-(C) without recognizing the challenges and costs of (D)-(F). Given the very high expenses of research in categories (D)-(F), and the current norms and funding mechanisms of academia, (D)-(F) are not a good fit for academia.
  • “The best giving opportunities will be for diseases that aren’t profitable for drug companies to work on.” This might be true for research in categories (D)-(F), but one should also consider research in categories (A)-(C); this research is generally based on a different set of incentives from those of drug companies, and so I’d expect the best giving opportunities to follow a different pattern.
  • “Much more is spent on disease X than disease Y; therefore disease Y is underfunded.” I think this kind of statement often overweights the importance of (F), the most expensive but not necessarily most crucial category of research. If more is spent on disease X than on disease Y, this may be simply because there are more promising clinical trial candidates for disease X than disease Y. Generally, I am wary of “total spending” figures that include clinical trials; I don’t think such figures necessarily tell us much about society’s priorities.
  • “Academia is too focused on knowledge for its own sake; we need to get it to think more about practical solutions and treatments.” I believe this attitude undervalues (A)-(B) and understates how important general insights and tools can be.
  • “We should focus on funding research with a clear hypothesis, preliminary support for the hypothesis, and a clear plan for further testing the hypothesis.” I’ve heard multiple complaints that much of the NIH takes this attitude in allocating funding. Research in category (A) is often not hypothesis-driven at all, yet can be very useful. More on this in a future post.
  • “The key obstacles to biomedical progress are related to reproducibility and reliability of studies.” I think that reproducibility is important, and potentially relevant to most types of research, but it is most core to clinical trials (F). Studies on humans are generally expensive and long-running, and so they may affect policy and practice for decades without ever being replicated. By contrast, for many other kinds of research, there is some cheap effective “replication” – or re-testing of the basic claims – via researchers trying to build on insights in their own lab work, so a non-reproducible study might in many cases mean a relatively modest waste of resources. I’ve heard varying opinions on how much waste is created by reproducibility-related issues in early-stage research, and think it is possible that this issue is a major one, but it is far from clear that it is the key issue.

Thoughts on the Sandler Foundation

Note: Steve Daetz of the Sandler Foundation reviewed a draft of this post prior to publication.

Previously, we wrote about the tradeoff between expertise and breadth in philanthropy. We noted the traditional “program officer” model of philanthropy, in which staff specialize in particular causes, and we contrasted it with some other possible models that sacrifice true cause-level expertise, while allowing a philanthropist to work in more areas at once.

We cited the Sandler Foundation as an example of a foundation that appears to have a strong track record despite not following the traditional “program officer” model. Since then, we’ve had a couple of extended conversations with the Sandler Foundation’s Herb Sandler and Steve Daetz. We’ve tried to understand better how its approach differs from more traditional approaches, and what the pros and cons are. We’ve come out thinking that:

  • The Sandler Foundation appears to have an impressive track record; it has played major roles in the development of multiple impressive organizations. More
  • The Sandler Foundation does differ noticeably from the more traditional approach. Its staff are not subject matter experts specializing in particular causes, and they do not operate with fixed budgets for the amount of time and money spent on a cause. Rather, the Sandler Foundation is highly flexible and opportunistic, ready to put a lot of time and money into an idea when it finds the right leadership, or to stay out of a cause of interest entirely when it doesn’t. It often puts a lot of time and energy into investigating and refining a grant early on, to the point where working on a single grant becomes a major part of its agenda; this is temporary, however, as it prefers reliable, recurring, flexible support (rather than continuously revisiting and revising the terms of grants). More
  • In many of the ways that the Sandler Foundation differs from traditional foundations, we think the Sandler model may be preferable. More

Notable Sandler Foundation grants
We discussed multiple interesting grants in our conversation with the Sandler Foundation. Below are some highlights:

I’m generally interested in cases where a foundation played a major role in the development of a strong and important institution, and at this point we’ve spoken with the heads of many major foundations and asked them about their major success stories. I think the above list compares favorably with comparable lists I’d be able to put together for other foundations’ work over the last decade (based in many cases on off-the-record conversations). This isn’t necessarily a fully appropriate comparison, since the Sandler Foundation explicitly prioritizes making large grants and helping to start organizations; it’s possible that other foundations have had equal or greater impact with larger numbers of smaller grants, and that it’s simply hard to put together comparable lists of highly tangible “success stories.” Still, my impression is that the Sandler Foundation has been quite successful in helping to build strong organizations, despite having a much smaller staff – and less subject-matter expertise – than traditional foundations.

The Sandler Foundation approach
From talking to the Sandler Foundation, I perceive it as diverging from traditional foundations on a couple of key dimensions:

1. The priority placed on funding strong leadership. The Sandler Foundation emphasized its preference for flexible, long-term support rather than constantly picking and prescribing projects. This sort of support is likely especially valuable to grantees, and even more so for new organizations trying to attract outstanding talent. At the same time, giving flexible and long-term support is a major “bet,” and seems most appropriate when one has very high confidence in the leadership one is supporting. The Sandler Foundation emphasized its extensive due diligence on leadership (for example, Sandler Foundation staff had over 30 conversations about John Podesta before supporting him to start the Center for American Progress), and its high expectations for leaders: it aims to support people who are highly strategic, highly receptive to criticism and interested in self-improvement, and highly aligned with the Sandler Foundation on values and communication (“good chemistry” was emphasized).

2. A high level of “opportunism”: being ready to put major funding or no funding behind an idea, depending on the quality of the specific opportunity. The Sandler Foundation emphasized its lack of well-defined “budgets” for either money or time: its staff are often exploring several ideas at once with a low level of time commitment, and ready to substantially raise their involvement when a good opportunity presents itself. In the case of ProPublica, the Sandler Foundation first developed the basic idea for a nonprofit newsroom in 2006, and had 15-20 conversations with potential leaders; in May of 2007, when they met Paul Steiger, they quickly became interested in funding him and started putting much more time into the idea. At the same time, there are some cases in which the Sandler Foundation has explored an idea or an issue for a considerable period of time, and ultimately decided not to make any major grants. The general pattern seems to be that the Sandler Foundation puts a great deal of “front-end energy” into promising grant opportunities they’ve identified, and spends relatively less time on (a) pursuing ideas for which strong leaders haven’t yet been identified; (b) following up on a given existing grant (though it still spends substantial time on those as well).

The Sandler Foundation believes that cause-specific “program officers” are a poor fit for this model. The model relies on strong assessment of organizational leadership, and on making relatively few, large grants to trusted leaders. Program officers tend to have incentives to make more, smaller grants, and are typically not positioned such that funders can defer to their judgments about organizational leadership. Program officers also typically want pre-specified budgets, which the Sandler Foundation worries would make them insufficiently opportunistic.

What can we learn?
We don’t think the Sandler Foundation’s model is obviously the best one, and we don’t plan on fully emulating it. Among other things:

  • We aren’t fully aligned with the Sandler Foundation’s values and priorities, and we believe that our set of policy priorities doesn’t map very well to today’s most common political platforms. Because of this, it could be particularly hard for us to find leaders whom we feel fully aligned with.
  • We believe the “expert philanthropy” model has much to recommend it (more), and we plan to experiment with it.
  • We believe there can be a good deal of value in relatively small, low-confidence, low-due-diligence grants that give a person/team a chance to “get an idea off the ground.” We’ve made multiple such grants to date and we plan on continuing to do so.
  • We have a favorable impression of the Sandler Foundation’s track record, but we don’t have enough information to be highly confident in this.

With that said, we see the Sandler Foundation as something of a proof of concept that high-impact grants can come from opportunistic generalists.

For reasons outlined previously, we’re highly interested in trying out a philanthropic model that looks across multiple issue areas for the most outstanding opportunities, and we think that taking a highly opportunistic approach – scanning multiple areas, waiting for outstanding leadership, keeping the bar high, and being ready to get very involved when an opportunity comes up – makes a great deal of sense for this goal. By taking this attitude toward many of our focus areas, we might be able to make the most of our generalist staff, and be able to keep our bar high for the opportunities we get most involved in (something that would be more difficult to do if we pre-committed to a smaller number of particular issues and ideas).

Note: another perspective on the Sandler Foundation is available in a January piece from Inside Philanthropy.

Notes from November convening on our policy priorities

Last November, we held a day-long convening in Washington, D.C. to discuss possible priorities for Open Philanthropy Project work on U.S. policy.

Our main goal was to present our picture of several policy issues, as well as to receive input to inform upcoming decisions about which issue(s) we should focus on. For each issue, we laid out what sort of change we’d like to see, why we find the issue especially promising for philanthropy, what the current landscape looks like (including other funders), and what possible strategies might look like. We sought feedback on all of these points, as well as ideas for promising issue areas and promising strategies that haven’t occurred to us.

We’ve now posted a summary of points raised at the convening, a partial list of participants, and the briefing materials for the convening here:

Page on Nov. 10 policy convening

Many points were raised at the convening, and it served as an input into our overall strategy-setting on U.S. policy (about which we will be writing more). Some of the highlights, from our perspective, were:

  • We had a fair amount of discussion of active vs. passive funding. Our discussion reinforced the importance of finding people we’re comfortable giving unrestricted support to if possible, while being willing to make compromises and engage in some degree of “active funding” on particular issues.
  • Reactions to the causes we’re considering varied considerably. Participants were generally quite positive on macroeconomic policy (feeling that aspects of it are under-attended to) and criminal justice reform (seeing, as we do, a window of opportunity). By contrast, there was a much more mixed and hesitant reaction to some other causes we’re considering, such as labor mobility. We aren’t necessarily inclined to favor the causes that received a more positive reaction, since we see a great deal of value in working on issues whose value isn’t widely recognized. However, hearing the different reactions helped us understand which of our potential causes might present particular challenges in terms of communications and coalition building.
  • We discussed the goal of strengthening the general community that shares our policy priorities (in particular, prioritizing both economic efficiency and global humanitarianism). One idea that came up in this regard was that of funding scholarships and fellowships, in order to encourage people to get interested in issues we consider important early in their careers. However, the convening also reinforced our view that this sort of goal will probably be easier to work on after we’ve done more concrete work and gained experience, strengthened our networks, etc.
  • We got many suggestions for potential causes to look into.