I want to take the time to explain a couple of views I hold that offend some people. These views are:
- The label “philanthropy expert” means very little to me.
- I see no strong reason to believe that charitable foundations are generally capable, competent, or effective at anything at all.
These aren’t insults aimed at particular people, and they aren’t hyperbole intended to generate controversy. They’re opinions I hold based on my observations about how philanthropy is structured. I can explain them. Both of these beliefs come from the fact that I have no way of determining whether the “respected experts” in philanthropy represent a set of True Experts, or an old boys’ club.
True Experts
True Experts have two qualities. (1) They have learned a lot from experience. (2) This translates to a demonstrable ability to do something.
Regarding (1): Experience can be educational … or it can fail to be. It’s educational when it consists of trying something, learning whether it worked, and trying something else, all in an environment where “what works” is fairly stable. A highly experienced glovemaker will know all kinds of tricks of the trade and have all kinds of instincts that a young but “naturally talented” glovemaker doesn’t.
In the world of investing, by contrast, it’s much less safe to bet on the more experienced person. The environment is so complicated and the results so random that people can spend 50 years day-trading without ever learning from it. Writing music is even more this way: I don’t think there’s any strong connection between a musician’s level of experience and their ability to write great music (if anything the relationship is negative).
Now imagine an activity that consists of investing without looking at your results. In other words, you buy a stock, but you never check whether the stock makes money or loses money. You never read the news about whether the company does well or does poorly. How much would you value someone with this sort of experience – buying and selling stocks without ever checking up on how they do? Because that’s what “experience in philanthropy” (or workforce development, or education) comes down to, if unaccompanied by outcomes evaluation.
The peculiar thing about philanthropy is that because you’re trying to help someone else – not yourself – you need the big expensive study, or else you literally have no way of knowing whether what you did worked. And so, no way of learning from experience. I wish I had an analogy for this but I don’t. There is no analogy. Philanthropy is the one sphere where you don’t get any info about your effects unless you explicitly study them.
Regarding (2): In most fields, “expert” doesn’t just mean “experienced” – it means “proven.” Tons of people have been investing their whole lives and are still dreadful at it, worse than a bright young 18-year-old would be. But Warren Buffett has a strong track record of investment success; he’s a True Expert in investing. On a smaller scale, your local doctor hopefully has a track record of helping people get better. OK, there’s no randomized-design evaluation of him, but we have a lot of independent information about how medical ailments progress if untreated, and the people your doctor treats get better instead. Unlike a Middle Ages leech supplier, your doctor is a True Expert in medicine.
As an aside, it’s pretty rare for a True Expert to refuse to share their knowledge. Even in the hypercompetitive finance industry, Warren Buffett basically tells you how he thinks because he knows you can’t replicate it. In more world-serving fields such as academia, every expert’s beliefs are peer reviewed, published, and available to any schmoe with a library card.
What am I to make of the statement that Person X is a respected expert in philanthropy? It could mean that they have a track record of solving social problems, and I just haven’t seen the evidence due to a lack of information sharing. Or it could mean …
Old boys’ clubs
We’ve all seen old boys’ clubs. They’re groups of people who think alike, act alike, and have some sort of power or privilege that they don’t have to continually earn.
I’ll use the example of a typical college fraternity. Nominally, the fraternity has a mission, something about “upholding honor and serving the community.” Nominally, the President is the person most qualified to help the fraternity promote this mission, the Treasurer is the best person to keep the books, etc. But really? The President is the most popular guy. The Treasurer is ~4th-most popular. The guy who slept with the President’s girlfriend is at the bottom of the totem pole. Etc.
These clubs’ members usually have a lot of weird, idiosyncratic beliefs about the world, because of how similar they are to each other and how little they check these beliefs with the outside. You can usually bet that the “accepted wisdom” at these clubs plus $2 is worth about one subway ride. You usually can’t bet that the “top people” at these clubs are better in any way than the others. And it’s not clear that Louis XVI and his cronies did, in fact, run France better than a 26-year-old punk might have.
Foundations
The thought that much of the world’s well-meaning money is managed by old boys’ clubs is scary – scary enough to make one want to deny it, even without evidence. But the fact is, everything about the way foundations are structured is consistent with the idea that they may be old boys’ clubs. I have no evidence that they do the measuring necessary to learn from experience. (They might, and they might not.) I have no evidence that they interact meaningfully with people outside their bubble. (They might, or they might not.) I have no evidence that they have made the world better off, consistent with their mission. (They may have. They may not have.)
I have no evidence that their power and privilege are connected in any way with performance or merit. Unlike with a private company, the fact that they have money means only that someone gave them money decades ago.
The only thing I do know is that they don’t share their information as much as would be consistent with devotion to a better world. That’s why I have a negative view of them.
I’m not committed to this view. Maybe foundations do have ways of learning from their activities. Maybe they do have a track record of success. Maybe they know far more about philanthropy than GiveWell can ever hope to. That’s definitely possible.
So which is it? Foundations, are you an old boys’ club – or can you demonstrate that you’re True Experts?
Comments
Ironically, while there is no randomized design trial of your local doctor in particular (and Atul Gawande has shown that he may differ GREATLY from the average doctor: http://www.newyorker.com/archive/2004/12/06/041206fa_fact), there is a randomized design trial of doctors taken in aggregate, and bizarrely, it shows no effect from their expertise: http://hanson.gmu.edu/showcare.pdf.
Of course, I would readily question the claim that most money management is by true experts as well, and doctors seem to me to be a very clear case of refusal to share knowledge, from barriers to entry into medical school (rather than competitive licensing procedures for graduates), to resistance to outside results monitoring, to the Hippocratic Oath.
-Your link isn’t working for me on the randomized trial of doctors. Are you literally saying that people who left their symptoms untreated had the same progression as people who went in to a doctor?
-Regarding investment management, I happen to agree with you that many investors are not True Experts. I think Warren Buffett is one, though.
-Regarding doctors: barriers to practice are very different from barriers to knowledge. As far as I know, you can read medical journals at the library. Nobody’s hiding anything about our knowledge of medicine. Going to medical school and getting a license are different matters entirely.
I also think you could/should have clarified that you’re being tangential/discussing details, not questioning my main point. It certainly appears that way. My post is directed at people who believe that foundations have “valuable expertise” in philanthropy, and my intent is to show just how little support there is for such an idea. I assume you agree with this, and are going further in questioning whether “valuable expertise” is even present in the examples I cite.
And the gauntlet is thrown…
Good luck with the information quest. Some concerned citizens here in Iowa ran into the same barriers with a public institution. Iowa State University Foundation refused to release information until sued.
Private foundations will be another matter entirely…
Point of emphasis for any who need it: no, I certainly don’t dispute the main point here. OTOH, if I disagreed with the main point, why would I stay around? Trolling? Secret mission from Holden’s relatives who want him to go back to making money? Generally productive conversation takes the form of debate between those who basically agree, no? Hmm… I guess that’s the sort of claim that one has to agree with to productively argue against.
Anyway, here’s a better link
http://www.overcomingbias.com/2007/05/rand_health_ins.html
I agree that Buffett is an expert, though I can’t confidently say he tells the truth about his investment strategies, as I haven’t checked.
I don’t think that journals contain much of a medical education. Anyone can read one. Now tell me how to learn to perform an appendectomy by doing so.
I find, when I educate myself deeply about a field where one normally uses ‘experts’, that I sometimes disagree with those ‘experts’ in a number of ways, and that they’re not super-geniuses.
However, this doesn’t necessarily imply that they are incompetent. I think it’s all about expectations.
I don’t know enough about the decisions of foundations and professionals in this field to have strong opinions, but I would SUPPOSE that the following is roughly accurate:
Let’s set two possible benchmarks as baseline cases for choosing where to allocate charitable donations:
1) Dartboard selection. Assemble a list of many charities meeting a very simple baseline criterion (approved by the IRS as a 501(c)(3), a recent Form 990 on file, or listed in a major database such as Charity Navigator’s…). Choose a charity at random. Or perhaps consider the impact of the ‘average’ charity in that group (of course, doing the latter reasonably accurately would require a level of evaluation of all these charities that’s not practical).
2) Weighted average. Consider the charitable choices made by the public at large, weighted by the volume of donations. Exclude certain classes of contributions, like religious contributions (driven by personal reasons and probably not conducive to this type of analysis). Multiply weighted contributions by some kind of effectiveness factor. (Again, I realize that accurately computing effectiveness for a large number of charities isn’t realistic. Consider this a theoretical exercise.)
Now, compare the donations of (medium to large) foundations and others guided by experts/charity professionals to either of the above benchmarks. I suspect that the expert donations would be found to be more effective.
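To make the exercise concrete, here’s a minimal toy sketch in Python of how the two benchmarks might be computed. None of this is from the original comment: the charity effectiveness scores and donation volumes are entirely made-up, illustrative numbers.

```python
import random

random.seed(0)

# Hypothetical universe of charities: each gets an "effectiveness" score
# (impact per dollar) and a volume of public donations -- both invented
# purely for illustration.
charities = [
    {"effectiveness": random.lognormvariate(0, 1),
     "donations": random.lognormvariate(10, 1)}
    for _ in range(1000)
]

# Benchmark 1: dartboard selection -- the expected effectiveness of a
# charity chosen uniformly at random from the list.
dartboard = sum(c["effectiveness"] for c in charities) / len(charities)

# Benchmark 2: weighted average -- effectiveness weighted by where the
# donating public actually puts its money.
total_donations = sum(c["donations"] for c in charities)
weighted = sum(c["effectiveness"] * c["donations"] for c in charities) / total_donations

print(f"Dartboard benchmark:       {dartboard:.2f}")
print(f"Donation-weighted average: {weighted:.2f}")
# An expert-guided portfolio of grants could then be scored the same way
# and compared against both benchmarks.
```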
OTOH, if you apply a theoretical standard of ‘best possible donation’, perhaps determining that standard within major areas (sort of like you’re doing – compare African health initiatives to each other, and so on), then I suspect the foundations fall short.
But then, that’s the case in virtually every field. The advice of a typical doctor is better than my own self-diagnosis would be, but it is also likely not perfect.
The question, I suppose, is whether the expert decisions are reasonable given the constraints. A doctor can’t make perfect decisions because of time and cost pressure, and because of human fallibility, among other things. But our economy *is* structured such that there are financial incentives that allocate a large portion of our best and brightest to the field of medicine. I have no real idea how foundation directors and charity experts fall on this scale.
I think there is a reasonably sized audience that reads this blog and agrees with much of our worldview while disagreeing on the kind of thing I said in this post. Specifically, a lot of people like us for our commitment to transparency, while still feeling that foundations’ views on specific charities are worth more than ours. I wanted to challenge that idea.
It seems important to note that the study you reference is about “on the margin” medicine. I certainly don’t dispute that a lot of doctors’ visits may be unnecessary or useless, and I don’t believe that doctors are infallible in their beliefs and recommendations. This is still pretty far from showing that doctors don’t have value-added, though. (To observe that they have value-added for extreme ailments may seem trivial … but it isn’t. Throughout much of human history even that much wasn’t true.)
I’d think that the reason for not explaining how to do an appendectomy (as opposed to explaining the evidence that it works) has more to do with the fact that no one’s interested in this info … it’s not useful if you can’t practice anyway. My broader point was that True Experts don’t tend to be secretive, and when they do, they usually have a good reason like “Protecting my profit” rather than a bad reason like “Not being mean to charities.”
Phil: I agree that little is gained by asking whether foundations are better than complete randomness (they almost certainly are) or whether they are perfect (they’re not). But on the “reasonableness” question, let me be more concrete:
I find it possible that the decisions made by well-funded, well-staffed foundations with decades of experience are inferior to the decisions made by two young punks going from zero to their rough cut in a few months. If that’s true, it’s a pretty damning indictment of foundations by just about any measure. I explain above why I wouldn’t rule this out, at least until/unless they open up.
I agree that it may be possible that your recommendations are better than the average foundation recommendations. But I’m still not sure that would be damning evidence against foundations.
Good decision-making is not free.
If a foundation is only making modest annual grants, say, $50,000, then the value of improving the effectiveness of their giving by 10% is roughly $5,000/year. I find it easy to believe that the incremental research costs of attaining that 10% improvement could easily dwarf the improvement itself.
There are of course other issues at play. A volunteer board (or one with nominal pay) has all kinds of conflicts: the banker volunteering for that board may feel a sense of obligation to do the right thing, but on the other hand, with effectiveness measurement being difficult anyway, it may be all too easy to accept a simple proposal without much research rather than spend many hours in search of a possibly nebulous effectiveness improvement. This points to larger issues of how we handle charity and foundations, but that’s a broader argument that I’ll skip for now.
OTOH, there are much larger foundations with full-time staff whose job it is, basically, to make the foundation’s grants as effective as possible.
I’m not convinced (as you seem to be) that these individuals are, on the whole, making poor decisions. Frankly, I’m not convinced either way – I haven’t seen strong evidence. From what I’ve seen, some of the larger foundations have played a role in some key developments over the last century (the Green Revolution, for instance). But just as you won’t accept thinly sourced anecdotal opinions to prove the effectiveness (or lack thereof) of any particular charity, I won’t accept your thinly sourced implication that foundations are, on the whole, poorly run.
I think again it comes down to a sort of difference in our respective null hypotheses. Neither of us sees much evidence on this issue – you assume the worst (or at least imply it); I don’t assume too much one way or the other.
To clarify, I don’t assume or strongly believe the worst (in fact I took great pains to express this above). This post is more directed at people – most people in the sector, I think – who assume the best. My point is that until foundations open up, there’s no reason to take what they say as expert opinion.
Personally, I’d put the “old boys’ club” case at about 55% relative to the “True Experts” case. My belief is based on analogy and anecdote, as you say. I don’t feel strongly. I desperately wish foundations would come out and show what they are.
Hmm… It’s true that you couch your case against foundations with the limits of your knowledge. But I also think there’s a clear anti-foundation tone that comes across in the above article, and I think I’ve detected it pretty strongly in other stuff you’ve written.
Personally, I’m a bit more interested in the opinions of subject matter experts than in those of the foundations themselves (though the latter may employ the former). Like you, I come at the issue of charitable effectiveness with little personal subject-matter expertise to apply. My thought process and background reading have led me to think that a highly effective target for my donations is third-world health, and in turn, that The Carter Center is likely to be a highly effective charity within this field.
But I may be way off-base. Perhaps there are long-term negative consequences from short-term Western aid in these areas. Perhaps The Carter Center’s approach is ineffective. Perhaps I should be focusing on a different part of the world, or a different field within the third world (education, economic development, press and/or political freedom, etc.) I *think* I’ve come to reasonably sound decisions, but then again, I’m aware of how often novices go far astray in fields that I’m much more knowledgeable about (computers/software, finance), and I’m well aware of the possibility that I may be making ignorant mistakes myself.
In other words, I find it reasonably possible that I (or you), as a reasonably well-intentioned, intelligent non-expert, could parse available data and come to some possibly better conclusions than a typical small to medium foundation director who is basically in the same position. But I would think the true expert we should be comparing ourselves against, and whose opinion we should seek out, is the person who has been in the field in the third world, working for a variety of different types of programs and in contact with a variety of different NGOs, with 10-30 years of experience to draw on.
The problem is that when I come across thoughtful information from such people, they are either:
1) Advocating rather extreme positions (IMO) in high-level policy debates (Sachs and Easterly).
or
2) Presenting views that I feel to be a bit narrow. ‘The Blue Clay People’ is a great book about NGO involvement in a bad situation (Liberia in very troubled times). But I suspect that situation was rather atypical. Sometimes I get feedback from folks who’ve been in the aid ‘biz’, but they’re currently working for one group or another, and so of course, they favor that particular group. But that’s hardly a neutral review of the field.
It’s not just the RAND study. The persistent failure to find life expectancy harms in religions that ban conventional medicine and the failure to find benefits to medical spending in a variety of international comparisons are also important. Also, the RAND study represents 70s medicine, so total spending was significantly less than half what it is now as a fraction of GDP and even less than half of that in “real dollars”, but OTOH, medicine may have gotten better (though not anywhere near proportionally more expensive anywhere but in the US).
I think that medicine is probably harmful well below the margin, but is probably net beneficial and surely not net neutral. I plan to post my reason for not concluding that medicine is useless on Overcoming Bias once I finish it, hopefully in a few days.
Good point about your readers.
On good reasons and secrecy http://ethicsinthenews.typepad.com/practicalethics/2007/11/it-is-10-oclock.html
I still don’t think you are close to cynical enough. Humanity is, in a phrase, criminally insane, as are all demographics of any size. My advice on this is that people who want to know what people are really like have to forgive them *first*; otherwise they won’t allow themselves to recognize what the evidence is pointing to. This is one of those topics where no level of exaggeration can actually be farther from the truth than people’s attitudes, even attitudes already considered cynical, as cynicism inevitably undershoots.
There’s also an extensive body of research on exogenous shocks in medical spending within the United States built up over the last few decades. They all find huge margins at least as large as that seen in the RAND. The most recent work to make big waves has come out of Dartmouth, finding that we could cut Medicare spending at least 25-30% without negative health effects by bringing the most wasteful regions to parity with the least wasteful (which means that the high-spending regions would see still greater improvements in efficiency).
http://www.annals.org/cgi/content/full/138/4/288
Hi guys,
I wanted to let you know about the Nonprofit December Giving Carnival. Deadline to submit your post is Dec. 20th.
Here is the link,
http://christopherscottblog.typepad.com/blog/2007/11/december-giving.html
Let me know if you have any questions.
Christopher S.
http://www.ChristopherScottblog.com
This message is for Holden. I was your Calculus teacher and Math team coach! I am so proud of you, but I am NOT surprised that this is how you are using your education.
We still do math competitions, and if you are ever interested in helping to fund the ICTM contest, please contact me. We are always hoping the successful mathletes will help continue our endeavors.
While your comments are fair (in some cases) and needed (across the board), they overlook some decent examples. Those “so-called experts” who can’t explain their methodology, justify their investments, or outline their outcomes simply don’t know how to track their work – and as a result the old boys’ network is beginning to crumble.
Edna McConnell Clark publishes an annual report, which begins with: “The Edna McConnell Clark Foundation has devoted its resources — about $30–35 million annually in grants — to increasing the number of low-income youth served by organizations with scientifically proven or persuasive evidence of effectiveness. Ultimately, our goal is to strengthen these organizations’ prospects for long-term sustainability.”
And, the report goes on to outline how programs are measured, etc.
I think that those operations that subscribe to the “old boys’ network” will cease to exist over time. True expertise, program effectiveness/measurement, and operational transparency will be demanded by charities, foundations, and individual donors.
We’re working to get there.
The term “old boys network” carries with it a connotation of racism and sexism. My experience has not been that the foundation world generally suffers from either of those problems. (And I’m guessing that Holden doesn’t believe that either. I don’t think that’s what he meant by the term. But since it inevitably carries that connotation, I wanted that point to be clear.)
That said, I agree with the real thrust of Holden’s argument. In fact, I actually think he understates the negative impact of the current state of the foundation world. Beyond doing a relatively poor job of picking winners, the current structure has two additional pernicious effects:
1) Foundations distort non-profit leadership:
To be “successful” as a non-profit leader means you have to be successful at raising money from foundations, which means you either (1) have the charisma necessary to convince foundation leaders to give you money or (2) have personal connections with these leaders. Unfortunately, neither of these qualities necessarily translates into the ability to effectively lead an organization. Which means the non-profit sector gets stuck with too many executive directors who are good fundraisers but ineffective organizational leaders.
Plus, whether these leaders are ineffective or effective, it becomes more difficult for non-profits to transition organizational leadership, because they cannot risk losing the people who have developed personal relationships with their funders.
2) Foundations encourage people to leave the non-profit field:
The nicest office environments I have ever been in are foundations. And the people who work in these offices tend to get paid significantly more than their counterparts who are working in the non-profits they fund, despite working fewer hours in less intense jobs. If you work in the non-profit world, I think this is incredibly disheartening. And I think it leads to people questioning why they remain in the non-profit sector and choosing to leave to work in foundations or the private sector.
Foundations and nonprofits who deliver services are certainly aware of the importance of evaluation. These issues are not at all new. I don’t think there are many people who actually think that what you call the “straw ratio” is adequate. It is easy, inexpensive and unobtrusive. But obviously a return-on-investment figure would be a better measure. This is common sense.
An ROI measure is relatively easy to get in a for-profit environment, where the bottom line is actually simply dollars. There is no easy way to do what you want to do in an environment where human beings are attempting to improve the human condition. It is as complex as any social science research, which is so difficult that some people don’t even think social science is real science at all. If there were an easy way to evaluate and compare nonprofits, it would have been done. That it hasn’t been done to the extent you advocate does not imply incompetence or ignorance. But your attempts to develop comparable measures, and the dialogue, are certainly welcome.
There is an extensive evaluation literature, and professional organizations that connect practitioners from all over the world.
Examples:
http://ioce.net/index.shtml
http://www.campbellcollaboration.org/index.asp
http://www.eval.org/
Couple of books:
Evaluation: A Systematic Approach, by Peter H. Rossi, Mark W. Lipsey, and Howard E. Freeman
Qualitative Research & Evaluation Methods, by Michael Quinn Patton
Responsive Evaluation: New Directions for Evaluation, eds. Jennifer C. Greene and Tineke A. Abma
Foundations and Evaluation: Contexts and Practices for Effective Philanthropy, eds. Marc T. Braverman, Norman A. Constantine, and Jana Kay Slater, August 2004
And see:
http://www.ppv.org/ppv/youth/youth.asp
PPV did a huge randomized national-level study of Big Brothers Big Sisters that was excellent; BBBS has links to it on its website, and you can see from reading about it what a major undertaking such evaluations are.
Phil:
*I tried to answer your point about tone on the “Apples and Oranges” comment thread. Basically, it is true that my usual tone toward foundations is highly critical and obnoxious, but this is because I’m usually focused on their opacity. I am much more agnostic about the quality of their analysis, but wanted this post to distinguish between being agnostic and assuming that they’re good just because they’re “experienced.”
*I agree with you that we don’t know much and that it would be much better to get the opinions of subject matter experts. I also agree with you that it is hard to find good stuff out there that relates directly to the question of where to give. We have certainly tried, but have often found relevant literature so focused on policy (again relating to my belief that most people think of charity as either unimportant or easy, when it is neither) that we have found it more productive to focus on a particular question and find what’s written on that question. Basically, the Africa expert’s guide to what kind of charity works in Africa isn’t something we’ve been able to find. If we could, we would draw on it heavily.
Michael/Carl: I may be overstating the case for the value of “expertise.” I don’t feel strongly about this or feel that it is important for me to feel strongly. The attitude of GiveWell is that seeing no one else asking our questions or doing what we’re doing, we are going ahead with it until and unless someone convinces us that it is a bad idea. This is consistent with maximum skepticism about experts, or just 50% skepticism.
Andrea: as others mentioned on the apples & oranges thread, your resources seem to be more about evaluation in the abstract, rather than actual studies of what works. This isn’t necessarily a bad thing, but it doesn’t leave me feeling much more informed about whether foundations are to be trusted.
I have read the Big Brothers/Big Sisters study. My vague recollection is that it focused on attitudes and did not give any analysis of the connection of these measures to life outcomes, so I ended up not knowing much about what to make of it. I didn’t spend much time on it, though, as Big Brothers / Big Sisters didn’t apply for our grant.
Steve B, we checked out EMCF quite some time ago and didn’t get very far, but now I am hearing buzz about their newer initiatives. I will check out their website again when I get a chance.
Thanks to everyone for commenting. The quality of dialogue on this blog has improved a lot lately. I’d like your thoughts on anything I can do to keep you all around and keep the discussion going. Should we move these kinds of discussions to a more robust forum system (such as the one we’re experimenting with at discussion.givewell.org)?
Probably too soon to say if/when the winds of glasnost will blow through the world of charitable foundations, but wondered what you thought of the utility of a foundation analog to http://www.thefunded.com, a site for entrepreneurs to discuss the experiences they’ve had with particular venture capital firms in a members-only, somewhat protected format.
Sounds like a good idea. I don’t want to build it, but I’m in favor.
Holden: I certainly was not arguing that no analysis is better than imperfect analysis. I wouldn’t advocate anything so anti-intellectual, or silly. All analysis is imperfect, of course. I was merely responding to some specific comments made on this blog that appear to me to be facile.
The evaluation resources I provided are indeed abstract. They are academic sources that provide conceptual underpinnings for evaluation. I just wanted to make the point that evaluation has a long history that would inform and enhance your project.
I also want to point out that the Big Brothers Big Sisters study did indeed make strong claims for the effects of BBBS mentoring on the life outcomes of the kids who participated in the study as part of the “treatment” group. What is important about the study is that it was randomized, which is difficult to achieve when one is studying the outcomes of interventions on people (since in order to conduct a randomized study there has to be a control group of kids who are not given the benefit of the intervention) and really expensive! Mentoring reduced by almost half the rate of first-time drug and alcohol use and reduced school absenteeism by half. At any rate, I encourage you to revisit that study, because I think it’s an example of exactly what you are promoting here.
Nobody is suggesting that people donate randomly. I think I was put off a bit by an entry you wrote some time ago, where you stated that most people think improving people’s lives is easy. But now I’m not sure you actually believe that – maybe you were just trying to stir the pot. It would be fantastic if you could help create a climate that encourages foundations and donors to fund more evaluation studies and develop more systematic reporting and translation mechanisms so that donors could easily access and understand them.
Andrea, what you’re describing in the BBBS study doesn’t sound like what I read. It sounds much more interesting. I’ll check it out when I get a chance.
I don’t think anyone would tell you that helping people is easy, if you asked them point-blank (and were specific about the issues: education, global health, etc.) But I do think there’s an implicit assumption on the part of those who don’t think about these issues very hard. The obsession with “minimizing overhead” seems to be a reflection of this mentality. Anyway, it doesn’t matter too much. We agree that helping people is hard.
Under your definition of “True Experts” and the two qualities they have:
(1) They have learned a lot from experience. (2) This translates to a demonstrable ability to do something.
…a doctor who is just out of medical school is not a True Expert because although they can give medical advice, they have no track record of helping people get better.
However, I would submit that the track record is not as important as the ability to do something which can be demonstrated. For instance, a doctor who is just out of medical school, in my opinion, IS a True Expert, because they still have a lot of independent information about medical ailments and what treatments work (i.e., OTHERS have used the same treatments to get people better).
I think the definition should be….
1) The person has learned something EITHER from experience or through the study of the experience of others
2) This translates to a demonstrable ability to do something.
medical advice: I think your point is valid and your definition is an improvement on mine.
It was just a thought. Thank you. I enjoy your intelligent writing… 😉