The GiveWell Blog

Thoughts from my visits to Small Enterprise Foundation (South Africa) and VillageReach (Mozambique), part I

I previously posted “raw data” (pictures, audio, notes) from my recent visit to two of our top charities in Africa. The next few posts will give my thoughts from the trip.

First, a note on representativeness. I was only in Africa for two weeks; I was a complete outsider; I certainly don’t think that anything I saw “proves” anything about the programs or areas I was looking at. In many cases what I saw (and what I discussed with staff) prompted me to discuss and think harder about issues I’d already thought about a little. So as I share thoughts from the trip, think of these as thoughts that were partly inspired by what I saw and discussed, not as “things I’ve learned.”

In fact, I was hesitant to visit the field too early because I was afraid that I would form a vivid picture of how things work based on what I saw, and that it would be difficult to imagine how differently things could work in other settings (and even on other days). From this point on, I am definitely going to have a little trouble thinking about microfinance without picturing what I saw at Small Enterprise Foundation (SEF), for example. I think that when dealing with multinational charities that work in a huge variety of settings, it is best to get most of our information by reading the observations and analysis of others.

With that said, here are some thoughts.

I was impressed with the staff of the two nonprofits I visited.

In many of my conversations with nonprofit staff, I feel like I'm being sold a story and told what people think I want to hear, which makes me instinctively somewhat distrustful. I can honestly say that I felt none of this during my interactions with SEF and VR staff, including the lower-level staff. They were straightforward with me about challenges and concerns. Most acknowledged that there are reasons to worry about whether they're being effective, and did not seem interested in downplaying concerns or exaggerating successes. And most seemed to me to be quite intelligent, knowledgeable, and reasonable about the work they were doing.

These two organizations had already been identified as outstanding before I visited, and I would have ranked them among the very best organizations in terms of "straightforward, no-nonsense interactions with staff" even before I went (the other nonprofits I'd put in this category are Against Malaria Foundation, Population Services International, and the Stop TB Partnership). However, it's possible that people working in the field in program roles tend to be better (more direct) communicators with GiveWell than people in fundraising roles, and I'm very curious what impression I would have come away with if I had done a similar visit to a charity we have a lower opinion of.

Getting pictures, audio and video was not a problem.

People I spoke to never objected to being recorded and were usually (with some exceptions) happy to have their pictures taken, sometimes even insisting on it. Children particularly enjoyed being photographed (for example, see this video of me taking pictures as well as the photos from my trip to the village).

This surprised me somewhat (arguably it shouldn’t have) only because I feel like I’ve seen relatively little use of multimedia to monitor, evaluate, and report on programs. For example, I’ve been told many times that I “have to see a program in action” to be sold on its effectiveness; yet now I wonder why the charities that feel this way aren’t posting large amounts of real-time, unedited footage to give America-based donors as much of the experience as possible.

Charities do often produce heavily edited videos and photos, but we see little value in such productions as evidence (or as monitoring/evaluation tools) because it is so difficult to distinguish observation from editorial.

We’ve written before that we see a lot of potential value in “qualitative evidence” that is presented systematically and transparently, but we rarely see this happening.

Connecting with clients – culturally and even linguistically – appeared to be a fairly significant challenge, and the way I believe most nonprofits deal with it points to the importance of systematic monitoring and evaluation.

Any American charity working in the developing world ultimately has to connect people (donors and clients) who speak different languages and come from very different cultures. If the charity is even of moderate size (i.e., working in more than a few villages, as even the relatively small charities I visited do), it also has to manage operations beyond what upper management can observe directly. It seems to me that the usual approach to this challenge is to have several degrees of separation between upper management and the people doing work in the field.

  • The CEO of the Small Enterprise Foundation does not speak the local languages, and the COO (originally from Croatia) says he has learned to understand quite a bit but still cannot speak them. The lowest-level staff, development facilitators (similar to “loan officers”), tend to have similar backgrounds to the clients and to speak the local languages well, but this of course means that their background is very different from that of upper management and donors (note that the development facilitator I spoke with has very limited English). In in-between roles, there are some employees who are better able to “bridge the gap” (such as the staffer who translated for me on day two). Given this situation, it isn’t surprising that SEF’s management process is heavily dependent on systematic collection, auditing and analysis of key metrics (as I discussed with the COO).
  • VillageReach doesn’t employ as many people, but it also has people in major roles who are American and need help from translators to communicate with local staff (during my visit, we had a translator traveling with us partly to help with communications between Leah, from the Seattle office, and Durao, a local VillageReach employee).
  • I did see one instance of an alternate approach: literally sending Americans to live among clients, learn their languages, etc. This was the approach taken by the missionaries who helped us out when VillageReach’s vehicle broke down in Mozambique. However, my impression (which they confirmed) is that this approach (which I imagine presents problems and challenges of its own) is relatively unusual even among missionaries and is essentially unheard of among nonprofits focused on humanitarian aid.

None of the above observations should come as a surprise, but to me they highlight the importance of formal, systematic monitoring/auditing/evaluation. We often focus on the benefits of monitoring and evaluation for donors, but in situations like those described above they also seem essential for conducting any kind of meaningful organizational management. I have trouble seeing how an organization that conducts no formal data collection and auditing can even run a program of any meaningful size. I would certainly be curious to see how operations work within some of the charities we have found to be less data-oriented.

I think it's also important to note how difficult translation and communication can be. For example, during my first day with SEF, my communication with clients required several steps: I would ask a question in English, SEF's CEO would rephrase it so that the development facilitator could understand it, the development facilitator would ask the question in the local language and relay the answer back in English, and finally the CEO would rephrase the English again so I could follow it. On my second day, I was with someone who was very strong in both languages, but you can hear how much work he put into translating my fairly short and basic-seeming questions. He explained that, in addition to culture-based difficulties with translation, he was being very careful with wording because clients very much want to tell donors what they think the donors want to hear.

We have long felt that survey data is most useful for extremely concrete, factual questions: “What did you eat yesterday?” is more useful than “What do you normally eat?” is more useful than “Did this program help you?” is more useful than “How much did this program help you?” More on this idea at this post on Philanthropy Action (co-maintained by GiveWell Board member Tim Ogden).

More thoughts from the trip coming in Part II.

Comments

  • Links: "this video of me taking pictures" and "the photos from my trip to the village" don't go anywhere.

Comments are closed.