A reader was good enough to send in a Lancet article (free registration required for full text) about a well-designed study of a combination microfinance/education program in South Africa.
Study design, strengths and weaknesses
A program consisting of both loans and group meetings was rolled out to 8 villages in rural South Africa, but the villages were randomly split into 4 that received it right away and 4 that received it 3 years later. Meetings included a curriculum that “covered topics including gender roles, cultural beliefs, relationships, communication, intimate-partner violence, and HIV, and aimed to strengthen communication skills, critical thinking, and leadership” (pg 1975).
Researchers hypothesized that (a) women in the loan groups would have fewer experiences of intimate-partner violence (presumably due to being financially/culturally more empowered); (b) this in turn would be connected with less unprotected sex in their households; (c) this in turn would slow the spread of HIV in their villages. A very ambitious theory of how to slow the spread of HIV – but to the researchers’ credit, they specified their hypotheses formally before conducting the study and registered it on ClinicalTrials.gov. Together with the use of randomization, these steps gave the study just about all the ingredients for avoiding the plague of publication bias.
A problem with the study, which the researchers partially acknowledge (pg 1981), is that it was only conducted in 8 villages total (4 receiving the program and 4 not receiving it). Therefore, it’s hard to say with confidence that any observed differences were due to the program as opposed to other differences between one randomly chosen set of 4 villages and another. Villages were similar on most observable characteristics, but very different on a few (see pg 1980).
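One way to see how binding the 8-village limit is (my own back-of-the-envelope illustration, not a calculation from the study): with a 4-and-4 split, there are only 70 possible ways the randomization could have come out, so even under the most extreme imaginable result, a one-sided randomization test could never report a p-value below 1/70.

```python
from math import comb

# With 8 villages split 4 treatment / 4 control, count the possible
# random assignments, and the smallest p-value that randomization
# inference could ever produce with this design.
n_assignments = comb(8, 4)           # ways to choose the 4 treated villages
min_p_one_sided = 1 / n_assignments  # observed split is the single most extreme one

print(n_assignments)                 # 70
print(round(min_p_one_sided, 4))     # 0.0143
```

In other words, the design leaves very little room to distinguish a real program effect from village-to-village luck of the draw.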
The study concludes that the program resulted in less intimate-partner violence, but not in less unprotected sex or in slowing the spread of HIV.
A few possible interpretations of this result:
- The researchers’ interpretation is that the program was responsible for reductions in violence, but that these simply didn’t translate into slowing the spread of HIV. Definitely a possibility. (If this is right, by the way, I’d call this a great program solely on the basis of its successfully reducing intimate-partner violence. That would be a great accomplishment in its own right, even if it didn’t have the hoped-for effect on the spread of HIV.)
- It’s also possible that the program had no effect, and that the observed change was a change in reported episodes of violence. Perhaps women who participated in the program came to feel more shame about reporting these episodes. (It’s also possible that the measurement error is in the other direction – that women in the program felt more pressure to report episodes, and that the fall in violence was greater than what was measured. This is the researchers’ theory, given on pg 1982.)
- And it’s possible that random fluctuations simply swamped any effects of the program itself. As mentioned above, it examined only 8 villages; and there was definitely a lot else happening in these villages over the time period in question. For example, the unspecified measure of “greater food security” had a huge rise across all villages studied, whether or not they received the program (see pg 1980). I can’t help but wonder: if this had been a more typical (less rigorous) study without a comparison group, would this increase in food security have been touted as a success of the program?
The one thing I feel fairly sure of after reading this study is that the researchers’ elaborate, multi-step theory of how loans and education can slow the spread of HIV didn’t come out looking great when all the facts were in. For every community program that publishes a study like this (and this is one of the very few I’ve seen), there are many more similar programs, with similarly involved theories of the linkages between credit, knowledge, health, empowerment, etc. that have simply never been checked in any way.