There is an increasing number of clinical ethics consultants in hospitals around the world, which raises the question: Can anyone be an expert in giving moral advice? We contend that clinical ethics expertise is obtainable. We review three common challenges to the idea that clinical ethicists can be moral experts and show that each fails to undermine its plausibility. We admit, however, that the conditions under which clinical ethics is practiced render moral expertise more difficult to acquire and identify than expertise in many other domains. We argue that a first step in identifying and developing expertise is understanding how the environment in which clinical ethicists work affects their ability to get adequate feedback on their advice. We think the prospects for acquiring and enhancing clinical ethics expertise depend on whether ethics services can use methods developed by psychologists for similar environments to create mechanisms for eliciting reliable feedback on consulting practices.
Who are clinical ethicists, and what can they do?
There is an increasing number of clinical ethics consultants in hospitals around the world, an increasing number of bioethics programs purporting to train them, and a new certification in Health Care Ethics Consultation (HEC-C) offered through the American Society for Bioethics and Humanities (ASBH) purporting to credential them. This has led some to ask what knowledge, skills, or competencies clinical ethicists offer health care institutions. Do they simply keep tabs on the current cultural ethos on hot-button topics, like physician aid in dying and organ donation after cardiac death? Are they just conflict mediators? Or can they offer substantive moral advice in the day-to-day practice of medicine? In what, if anything, are they experts? We argue that there is a strong case for the claim that clinical ethicists can be specialized experts who offer substantive moral advice, and that three supposed objections to this claim dissolve on close analysis. However, we argue that a central obstacle to developing clinical ethics expertise is constructing appropriate feedback mechanisms to assess proficiency.
To be sure, not all activities admit of expertise. For example, no one can be an expert at getting out of bed in the morning or dog-walking. Call these “non-expert” domains. Some activities require training, but only a little, and they are fairly easy to master, such as using Microsoft Word, mowing the grass, and taking blood. Call these “low performance ceiling” domains of expertise. Other activities require an extensive combination of education, experience, and practice to master, such as medicine, engineering, and playing a musical instrument. Call these “specialized” domains of expertise. Specialized expertise is a high degree of competence in a domain (see Watson forthcoming).
This set of distinctions proves useful when trying to understand the authority of experts. Authority tracks competence in a domain, and the more specialized a domain, the more difficult authority is to attain. Consider that, in the literature on psychotherapy, researchers find that minimally trained therapists are as effective as professionals with advanced training in psychology (Dawes 1994; Tracey et al. 2014). This suggests that therapy is a low performance ceiling domain, and that minimally trained therapists are as authoritative as their more highly credentialed colleagues in the domain of providing therapy (psychologists are likely to have higher competence, and therefore authority, in other aspects of their domain). How, then, should we categorize clinical ethics consultation?
According to the ASBH, clinical ethics consultation is “a set of services provided by an individual or group in response to questions from patients, families, surrogates, healthcare professionals, or other involved parties who seek to resolve uncertainty or conflict regarding value-laden concerns that emerge in health care” (2011: 2). Could clinical ethicists be experts in morality, that is, in addressing value-laden concerns? In this context, “moral expertise” refers to the ability to help others make better moral decisions, in contrast to the academic moral expert, who can teach the history and theory of ethics, and the performative moral expert, who leads an exemplary life. But unlike medicine or engineering, the nature and dictates of moral behavior are contentious. On one hand, everyone has a grasp of the basics (don’t murder or steal for personal gain), and on the other, everyone disagrees about the rest (moral rights to “intellectual property,” the moral permissibility of human genome editing, etc.).
On examination, however, this dichotomy between the “basics” and “the rest” proves false. Complex moral questions are difficult for anyone. But, like other kinds of normativity, ethical complexity varies by degree. And it is not inconceivable that people who spend time studying moral reasoning are better at arriving at well-supported conclusions than people who only ever think about low-complexity ethics issues. But should we call what develops from this kind of study “expertise” in morality? There are at least three reasons to think the answer is no.
Three Objections to Clinical Ethics Expertise
Autumn Fiester (2007; 2015) argues that extensive disagreement among ethicists over what is morally permissible, impermissible, and obligatory puts ethicists at significant risk of unjustly imposing their own values onto a decision. And if we agree with Jon Matheson (2015) that, when there is significant disagreement over controversial propositions among the people who should know about them, everyone else should suspend judgment about those propositions (128-9), then it would seem no one should listen to ethicists about what to do in the hospital.
This objection is just factually mistaken. While there is widespread disagreement in some domains of ethics (metaethics and normative ethics), there is little disagreement in clinical ethics over starting ethical assumptions and ethically appropriate options in most decisions. Ethicists agree that capacity is necessary for decision-making, capacitated patients have the right to make bad decisions or refuse all treatments, the role of a surrogate is to “speak the patient’s voice,” and so on. Further, many cases of ethical uncertainty are driven by empirical or clinical uncertainty, such as whether a patient is capacitated, whether an old advance directive reflects a patient’s current wishes, and whether someone is in pain.
Some argue that such agreement is only possible if clinical ethicists share a (or “the”) correct moral theory (Cholbi 2018). Others presume that a principlist approach can accommodate disagreements regarding theory, appealing to a broad sense of “common morality” (Beauchamp and Childress 2012). Still others argue that moral expertise is casuist in nature (Jonsen 1991). We think nothing about a clinical ethicist’s competence hangs on this debate. In our department, one ethicist is a neo-Kantian deontologist, one is a particularist, one is an ethical pragmatist, and one refuses to commit to any one ethical approach. We were all trained in very different environments. And yet, in the vast majority of our consults, we agree on the relevant moral features of the case and what counts as an appropriate ethics recommendation in those cases. We all find Beauchamp and Childress’s principles enlightening when employed properly, and we all take seriously the idea that the details of a case make a substantive moral difference in decision-making. Thus, Fiester’s concern does not seem to us motivated by any evidence that properly trained ethicists are at risk of values imposition.
A second objection comes from Giles Scofield (2018), who cites five different conceptions of ethics consultation and argues that, since even clinical ethicists disagree about what they do, there is no clear domain of clinical ethics practice and, thereby, no way to identify anyone as an expert in that domain. Interestingly, Scofield takes his evidence from discussions attempting to provide a theoretical framework for what clinical ethicists do rather than practical discussions of clinical ethics (for example, of requests for futile treatment or criteria for adequately informed consent). Note that if you asked ten doctors to define “disease,” you would likely get ten different answers. Theoretically, “disease” is a controversial concept. But despite this, medicine is not “in perpetual need of having its life saved” (595), as Scofield claims clinical ethics is. In our department, we never go to work wondering what in the world we are supposed to do for the day. We answer questions about autonomy in informed consent, about the ethical standing of surrogate decision-makers, about competing moral obligations in complex discharges, and about just treatment for undocumented patients.
Domains are admittedly fuzzy (Is an expert hematologist also an expert internist even if she is not an expert nephrologist?), but they tend to be carved by the questions they purport to answer. And those questions need not be formulated from the armchair. The domain of behavioral economics, for example, emerged from psychological challenges to economic assumptions about human decision-making (Thaler 2016). Similarly, clinical ethics is defined by a clear set of questions. Those questions emerge from the goals of medicine, power asymmetries between doctors and patients, and agreed-upon organizational values, such as commitments to benefiting patients by patients’ own lights, to not putting patients at risk of unnecessary harm, to privacy, and to both provider and patient autonomy. Thus, not only is there a clear domain of clinical ethics consultation, there are people who, because of education, experience, and practice, can answer those questions competently. We contend that those people are moral experts.
A final, and related, objection is posed by Michael Cholbi (2007), namely, even if there are people who can answer those questions competently, how do we know which people those are? If doctors, nurses, and patients are not themselves moral experts, they cannot adequately comment on whether a clinical ethicist is offering well-supported moral recommendations. So, how could anyone know if an ethicist is doing a good job?
Interestingly, this “credentials problem” faces many expert domains, including medicine. We might try formal exams, but no one thinks, for example, the USMLE Step exams strongly correlate with competence in medicine. Rather, medical expertise is acquired slowly, over time, through practice and feedback. We might try to identify a track record of success, but track records are hard to come by, even in domains like internal medicine, psychology, and radiography. This suggests some good news for clinical ethicists, namely: they cannot be written off simply because they face the credentials problem. The bad news is that this is a serious problem. And while medical specialists have established public support that obviates the need to prove expertise, clinical ethicists must figure out how to overcome it.
Prospects for Clinical Ethics Expertise
Why might clinical ethics expertise be so hard to identify? Consider the difference between being a good surgeon and being a good policymaker. Surgeons know when they’ve made a mistake, and they can typically tell whether they have fixed a problem. This is because the surgeon’s environment is what psychologist Robin Hogarth (2001; 2010) calls a “kind learning environment,” an environment in which the feedback one gets is immediate and strongly correlated with skill level (2010: 343). In contrast, policymakers rarely know whether their policies are successes or failures; there are too many contravening variables. Policymaking takes place in what Hogarth calls a “wicked learning environment,” one in which “feedback is either missing or distorted” (Ibid. 343). Because the aims of a medical decision are partly what ethicists must weigh in on, clinical ethics takes place in a wicked environment.
Understanding the difference between kind and wicked environments makes a difference in developing, and therefore, identifying, clinical ethics expertise. Hogarth argues that one effect of the wicked environment is that would-be experts lose the “metacognitive ability to correct for sampling biases or missing feedback” (2010: 343). Fortunately, there is a range of tools for developing expertise in wicked environments (see Klein 1998; Tetlock and Gardner 2015). But if we treat moral expertise as if it takes place in a kind environment, we won’t recognize the need for these tools. The prospect for developing clinical ethics expertise requires understanding the complexities of the wicked environment, so we do not inadvertently obscure them. Thus, a central challenge for clinical ethicists is to make use of empirically supported tools for developing feedback mechanisms appropriate for wicked environments.
Although the field of clinical ethics continues to aim toward professionalization and accountability, many clinical ethics services still lack adequate feedback mechanisms for their consultants, regardless of whether they are new or seasoned. The HEC-C exam, like the USMLE, tests for minimal knowledge, but neither is strongly correlated with expertise; therefore, neither is a substitute for the feedback necessary for developing moral expertise. Certain standard clinical measures of quality that have been recently adopted by some clinical ethicists (e.g., length of stay) can be misleading and even detrimental (see Craig & May 2005). Some services solicit feedback from those who request consults. But for this feedback to be effective, it needs to a) come from appropriately placed people at the right time, b) minimize the chances of selection bias or partiality, c) be regular enough that it captures diverse instances, and d) cover the range of skills and knowledge a clinical ethicist should possess. And, of course, knowing whether any feedback strongly correlates with (d) is exactly the problem that wicked environments raise. Clinical ethicists should be wary, therefore, of relying on anecdotal accolades, satisfaction reports, or other feedback mechanisms that can fail in any of the above-listed ways.
Like other experts in wicked environments, clinical ethicists can obscure the wickedness of their environment by ignoring it, minimizing it, or constructing strategies that set themselves up for a false sense of confidence. Grappling with the wicked environment of clinical ethics therefore requires urgent attention from the field.
Where does this leave us? Whether any particular person serving as a clinical ethicist is also a moral expert is an open question. Nevertheless, we think there is adequate evidence that clinical ethics expertise is plausible. Further, we think that recognizing that clinical ethics takes place in a wicked environment suggests a way forward for developing and identifying moral expertise in practice. The ongoing challenge is to engage with the literature on this type of expertise in order to create and implement feedback mechanisms that are strongly correlated with clinical ethics expertise.
Correspondence and Affiliation
Jamie Carlin Watson, PhD
Assistant Professor of Medical Humanities and Bioethics (UAMS)
Plain Language Writer
Department of Medical Humanities and Bioethics
University of Arkansas for Medical Sciences
Arkansas Children's Hospital
Laura Guidry-Grimes, PhD
Assistant Professor of Medical Humanities and Bioethics (UAMS)
Assistant Professor of Psychiatry (UAMS, secondary)
Department of Medical Humanities and Bioethics
University of Arkansas for Medical Sciences
Arkansas Children's Hospital
Notes
1. Following standard philosophical usage, we will use “ethics” and “morality” interchangeably, and therefore mean the same thing by “ethics expertise” and “moral expertise.” Further, by “clinical ethics expertise” we mean “the competence to help others make better moral decisions” in health care contexts (Watson and Guidry-Grimes 2018: 11). We contrast this sort of “practical moral expertise” with “academic moral expertise,” which is the academic competence to engage with the major debates in ethics, and “performative moral expertise,” which is the competence to make good moral decisions for oneself.
2. See Watson (2017: §6.4) for a basic argument for the plausibility of moral expertise.
American Society for Bioethics and Humanities (ASBH) (2011). Core Competencies for Healthcare Ethics Consultation, 2nd ed., Glenview, IL.
Cholbi, Michael (2018). “Why Moral Expertise Needs Moral Theory,” in Jamie Carlin Watson and Laura K. Guidry-Grimes, eds., Moral Expertise: New Essays from Theoretical and Clinical Bioethics 71-86, Cham, Switzerland: Springer.
Cholbi, Michael (2007). “Moral Expertise and the Credentials Problem,” Ethical Theory and Moral Practice 10 (4): 323-334.
Craig, J. M. and T. May (2005). “Evaluating the Outcomes of Ethics Consultation,” The Journal of Clinical Ethics 17(2): 168-180.
Dawes, Robyn (1994). House of Cards: Psychology and Psychotherapy Built on Myth, New York: Free Press.
Fiester, Autumn (2015). “Teaching Nonauthoritarian Clinical Ethics: Using an Inventory of Bioethical Positions,” The Hastings Center Report 45(2): 20-26.
Fiester, Autumn (2007). “The Failure of the Consult Model: Why ‘Mediation’ Should Replace ‘Consultation,’” The American Journal of Bioethics 7(2): 31-32.
Hogarth, Robin M. (2001). Educating Intuition, Chicago: University of Chicago Press.
Hogarth, Robin M. (2010). “Intuition: A Challenge for Psychological Research on Decision Making,” Psychological Inquiry 21: 338-353.
Jonsen, Albert (1991). “Casuistry as Methodology in Clinical Ethics,” Theoretical Medicine, 12(4): 295-307.
Klein, Gary (1998). Sources of Power: How People Make Decisions, Cambridge, MA: The MIT Press.
Scofield, Giles (2018). “What—If Anything—Sets Limits to the Clinical Ethics Consultant’s ‘Expertise’?” Perspectives in Biology and Medicine 61(4): 594-608.
Tracey, Terence J. G., Wampold, Bruce E., Lichtenberg, James W., and Goodyear, Rodney K. (2014). “Expertise in Psychotherapy: An Elusive Goal?” American Psychologist 69(3): 218-229.
Tetlock, Philip E. and Dan Gardner (2015). Superforecasting: The Art and Science of Prediction, New York: Crown Publishers.
Thaler, Richard (2016). Misbehaving: The Making of Behavioral Economics, New York: W. W. Norton & Company, Inc.
Watson, Jamie Carlin (Forthcoming). Expertise: A Philosophical Introduction, London: Bloomsbury Publishing.
Watson, Jamie Carlin (2017). Winning Votes by Abusing Reason: Responsible Belief and Political Rhetoric, Lanham, MD: Lexington Books.
Watson, Jamie Carlin and Laura K. Guidry-Grimes (2018). “Introduction,” in Watson and Guidry-Grimes, eds., Moral Expertise: New Essays from Theoretical and Clinical Bioethics 1-33, Cham, Switzerland: Springer.