The GiveWell Blog

Learning from a small failure

Global Health Report:

A computer science group from the State University of New York at Stony Brook presented three applications, or “apps” (mini computer programs), that they had designed for use on no-frills mobile phones owned by women working in the informal Senegalese economy. The pilot tests for two of the apps, a dictionary and a book-keeping calculator, were deemed successes. The third app, for measuring profit and loss, was judged a failure.

The pilot was a failure because the fish sellers found the mobile-phone profit-and-loss calculator useless. The Stony Brook group did not learn why the app was useless, however, until a second round of testing, in which one of the Senegalese computer science students happened to have a grandmother who was a fish seller. After talking with the fish sellers, he explained to the Stony Brook group that the prices for both fresh fish and dried fish are fixed. Since everyone charges the same price for fish (one price for dried, another for fresh) on any given day, there is no way for the women to wait until the price is right.

It’s a small example, but we wish we saw more stories like this: people publicly sharing their attempts at a program, critically assessing it, and learning from what doesn’t work.

Comments

  • Tony Wang on February 24, 2009 at 3:35 am said:

    Perhaps there’s no incentive because the information marketplace doesn’t exist. But here’s an idea: if someone were to create a blog called Success & Failure, looking at the different successes and failures of the philanthropic sector, how many people do you think would subscribe? Just a thought.

  • Kim Dionne on February 24, 2009 at 7:05 am said:

    Relatedly, it’s rare to find academic publications of “failures,” as journals are biased toward reporting findings that aren’t “null.” We call it publication bias. It’s a shame, really: what happens to those very carefully thought-out research projects analyzing an important development or health intervention, only to find the intervention failed or, more typically, had neither a positive nor a negative effect? Those results aren’t getting published in an academic journal, and the NGO or similar body that was tasked with providing the intervention won’t want to publicize to its donors that something it did failed to have an impact.

  • Tony Wang on February 24, 2009 at 1:54 pm said:

    Kim, do you think there would be value in the academic world for a publication that focused on failures? I imagine lots of researchers would find it valuable to read, even if there were little incentive to publish (since it wouldn’t really advance people’s academic careers that much).

  • Kim Dionne on February 25, 2009 at 8:42 pm said:

    I have actually joked with friends that we should start our own journal and call it “Null” to publish articles where the research bore null findings. I think there’s value to researchers because:

    (1) We’ll be able to learn what’s been done before and wasn’t successful so that we won’t remake wheels that are un-make-able.

    (2) We’ll actually have a home for all of those projects we start but never finish because the preliminary analysis shows null findings. If we really believe what we’re testing is important, then even null findings would matter and should be publicized to our colleagues and the general public.

    More important than the academic world, however, is the world of health and development interventions. So little is reported about the failures (except, of course, via investigative journalism) that there are few opportunities to really learn what works and what doesn’t. And this is money and time we’re spending as we all sit in our silos, implementing the things we think will work.

  • Kim, I like the idea. We’ve written before about publication bias (here) and ideas for getting more negative results published (here); interested in your thoughts.

  • Alanna on March 5, 2009 at 1:43 am said:

    Engineering has some good models for failure analysis: http://tinyurl.com/bec53l

  • Great discussion. There is a Journal of Negative Results in Biomedicine:
    http://www.jnrbm.com/

    Alanna is right that there are some great models in engineering; that’s partly because engineers want to see things fail so they can improve them. In public and global health, at least, there is not much room for failure, or for failing fast, which would allow one to try many ideas.
