Our investigation isn’t yet complete, but it has taken several turns that we’ve found educational, and our vision of what it means to investigate a “cause” has evolved. This post gives an update on how we’ve gone from “investigating meta-research in general, starting with development economics” to “specifically investigating the issue of reproducibility in medical research” to “investigating alternatives to the traditional journal system.”
The big-picture takeaway is that if one defines a “cause” the way we did previously – as “a particular set of problems, or opportunities, such that the people and organizations working on them are likely to interact with each other, and such that evaluating many of these people and organizations requires knowledge of overlapping subjects” – then it can be difficult to predict exactly what will turn out to be a “cause” and what won’t. We started by articulating a broad topic – a seeming disconnect between the incentives academics face and the incentives that would be in line with producing work of maximal benefit to society – and looking for people and organizations that do work related to this topic, but found that this topic breaks down into many sub-topics that are a better match for the concept of a “cause.”
Simply identifying which sub-topics can be approached as “causes” is non-trivial. We believe it is important to do so, if one wishes to deliberately focus in on the most promising causes that can be understood in a reasonable time frame, rather than spreading one’s investigative resources across several causes at once.
In a previous meta-research update, we focused on the field of development economics. Following that update, we collaborated for several months with an institutional funder that supports a significant amount of development economics work and has expressed similar “meta-research” interests; we also explored some other fields, as discussed in a recent post. We ultimately came to the working conclusion that:
- Meta-research in medicine-related fields is “further along” than in social sciences, in the sense that there are more established organizations and infrastructure around meta-research (for example, the Cochrane Collaboration and the EQUATOR Network) and there has been more research on related issues (particularly the work of John Ioannidis).
- With that said, meta-research in medicine-related fields still has a long enough way to go – and little enough in the way of existing funders working on it – to make it a potentially promising area.
- In social sciences, studies are often so expensive and lengthy to conduct (the deworming study we’ve discussed before took over a decade to produce what we consider its most relevant results) that the prospects for robustly establishing conclusions to inform policy generally seem distant. By contrast, we believe that improving the reliability of medical research would likely have fairly direct and quick impacts on medical practice.
- The institutional funder we have collaborated with continues to work in social sciences (specifically development economics), and we believe its approach and attitude are similar enough to ours that our value-added in this area would be limited.
With these points in mind, we decided to shift our focus and deeply investigate meta-research in medicine-related fields rather than meta-research in development economics. This was a provisional decision; we remain interested in the latter.
Alexander Berger led an investigation of meta-research in medicine, beginning in February. His basic approach was to start with the leads we had – contacts at Cochrane as well as individuals suggested by John Ioannidis – and get referrals from them to other people he should be speaking with.
In early May, we paused the investigation to take stock of where we were. It occurred to us that the people and organizations we had come across were divided into a few categories, which didn’t necessarily overlap:
1. The “efficiency and integrity of medical research” community. This community focuses on improving the efficiency with which medical research funding is translated into reliable, actionable evidence, by promoting practices such as (a) systematic reviews, which synthesize many studies to provide overall conclusions that can inform medical practitioners; (b) data sharing, especially of clinical trial data; (c) preregistration; and (d) replications of existing studies to check their reliability. This community includes the Cochrane Collaboration.
People in this community that we spoke to include:
- Many individuals affiliated with the Cochrane Collaboration (see here).
- Steven Goodman, Associate Dean of Clinical and Translational Medicine at Stanford University.
- John Ioannidis, Professor of Medicine at Stanford University.
- Doug Altman, chair, and David Moher, member, of the Steering Group of the EQUATOR Network, which promotes transparent and accurate reporting of medical research studies.
- Tom Kenny, Director of External Relations, the UK’s National Institute for Health Research Evaluation, Trials and Studies Coordinating Centre.
- Ivan Oransky, co-founder of Retraction Watch.
- Elizabeth Iorns, founder of Science Exchange and the Reproducibility Initiative.
- Ken Witwer, Assistant Professor at Johns Hopkins School of Medicine.
- Ferric Fang, Professor of Laboratory Medicine and Microbiology at the University of Washington.
- Stanley Young, Assistant Director of Bioinformatics, National Institute of Statistical Sciences.
2. The “open science” community. This community focuses on new tools for producing, sharing, reviewing, and evaluating research, many of them focusing on the idea of a transition from traditional paper journals to more powerful and flexible online applications. Some such tools (such as Open Science Framework) are produced by nonprofits, while others (such as ResearchGate and JournalLab) are produced by for-profits.
People in this community that we spoke to include:
- Joshua Greenberg, program officer for the Sloan Foundation.
- Jason Priem, co-founder of alternative metrics provider ImpactStory.
- David Jay, co-founder of JournalLab.
- Jane Hunter, Managing Director of Faculty of 1000, and Rebecca Lawrence, Publisher of F1000Research.
- John Wilbanks, Chief Commons Officer, Sage Bionetworks.
- Cameron Neylon, Advocacy Director, Public Library of Science (PLoS).
- William Gunn, Head of Academic Outreach, Mendeley.
- Kaitlin Thaney, Manager, External Relationships, Digital Science.
- Heather Joseph, Executive Director of SPARC.
Widespread adoption of tools such as those listed above could eventually make it much easier for researchers to share their data, check the reliability of each other’s work, and synthesize all existing research on a given question – in other words, such adoption could eventually resolve many of the same issues that the “efficiency and integrity of medical research” community deals with. Many of the people in the “open science” community emphasize the same problems with today’s research world that people in the “efficiency and integrity of medical research” community emphasize – so it’s not surprising that, when we expressed interest in these issues, we were pointed to people in both categories.
That said, there is little overlap between communities #1 and #2, and we believe that this is largely for good reason. Community #1 focuses on medical research; community #2 generally works across many fields at once. Community #1 focuses on actions that could directly and quickly improve the usability of medical research; community #2 is largely working on a longer time horizon, and hopes to see dramatic improvements once its tools are widely adopted. (Even so, when organizations have a disciplinary bent, we’ve continued to focus on the more biomedically relevant ones, as opposed to those focused on, e.g., astronomy or geosciences.)
3. Other communities. Some other communities that could fall under the heading of “meta-research relevant to medical practice” include:
- The evidence-based medicine community, which seeks to improve the usefulness of evidence for medical practice by increasing the extent to which available high-quality evidence is used in medical practice. (We see this community as distinct from the “efficiency and integrity of medical research” community because it focuses on the use, as opposed to the production, of evidence, though many of the practitioners overlap.)
- People seeking to improve the practice of epidemiology (whose methods and issues are quite distinct from those of the sort of research that the Cochrane Collaboration synthesizes). One such group is the Observational Medical Outcomes Partnership (OMOP), about which we spoke with David Madigan.
- John Ioannidis, whose work seems largely unique as far as we can tell. Prof. Ioannidis has studied a wide variety of “meta-research” issues in a wide variety of fields, including reproducibility of clinical research, bias, reliability of genome-wide association studies, and conformity vs. creativity in biology research.
- Vannevar, a group started by Dario Amodei (who is a GiveWell fan and personal friend), which aims to improve the infrastructure around fields such as basic biology (which is distinct from both epidemiology and the sort of medical research that the Cochrane Collaboration addresses) and machine learning. Unlike most of the groups discussed above, Vannevar is focused on improving the ability of academia to produce high-risk, revolutionary work, rather than on improving its ability to efficiently produce immediately actionable recommendations for medical practitioners and policymakers.
Many of the individuals working in these communities may have cross-cutting interests and play some role in multiple communities, but we see the communities as having discrete identities. The characterization above is not meant to be exhaustive or to eliminate the possibility of other groupings, but rather to convey our understanding of the relationships between various problems, interventions, and individuals.
At this point, the community we feel we have covered the most thoroughly is #2, the “open science” community. This hasn’t been an entirely deliberate decision: we’ve spoken to the people we’ve been pointed to and the people they’ve pointed us to, and only after many conversations have we noticed the patterns and distinct communities discussed above.
Because it is important to us to complete a medium-depth writeup, we’re currently aiming to produce one on open science. We will add the other communities discussed above to our list of potential shallow investigations.
In this process, we’ve learned that it can take a fair amount of work and reflection just to determine what counts as a “cause” in the relevant way. We think such work and reflection is worthwhile. Rather than speaking to everyone who is somehow connected to a problem of interest, we seek to identify different causes, deliberately pick the ones we want to focus in on, and cover those thoroughly.