Introduction
People are often uncertain about which choices are morally right. Imagine you’re standing in front of a food stall with €2 in your pocket. Do you get the tasty meat sandwich or settle for the less appetizing lettuce sandwich? You know everything there is to know about animal suffering and the environmental impact of meat consumption. But you’re still stuck because you don’t know to what degree you should care about animals. The empirical facts alone don’t tell you what you should do.
This is an example of moral uncertainty: uncertainty that does not arise from empirical factors, but from moral ones.
In this blog post I’ll present three arguments for why we should be morally uncertain. Then, I’ll discuss two arguments for why we should take moral uncertainty into account when making decisions. In a future post, I’ll examine some arguments for why we should not take moral uncertainty into account.
Unlike all my previous posts, this one won’t introduce any new ideas and will only present arguments found in the existing literature on moral uncertainty, particularly the works of MacAskill, Ord and Bykvist. If you’ve already read their works (and don’t need a refresher), you can skip this post.
Also, I will focus on the big-picture rationale for moral uncertainty, not specific case studies.
With that out of the way, let’s get into it:
1. Why Should We Be Morally Uncertain?
Examining moral uncertainty is only relevant if people actually experience doubts about their moral views, yet some individuals are firmly convinced of their moral beliefs.1 That certainty, however, is not always rational. Here are three arguments for why we should sometimes be morally uncertain.
1.1 Moral Disagreement
A good reason to not be too sure of yourself is that smart people disagree about morality… a lot.2 Although there are moral issues on which there is broad consensus (like that killing a random baby isn’t exactly great) many other issues remain the subject of significant debate. In many of these cases, intelligent and well-informed individuals disagree. This makes it less reasonable to adhere to a particular moral theory with absolute certainty. After all, it’s possible that others, who have thought about the issue just as deeply (or even more deeply), have reached a better-founded conclusion. They may have intuitions, experiences, or evidence that could influence your beliefs if you were aware of them. Therefore, it is rational to reconsider confidence in your own moral positions if reasonable and thoughtful individuals reach different conclusions.3
1.2 Moral Philosophy is Complex
The second reason, which follows from the previous one, is that moral philosophy is a very complex discipline. Small nuances between different moral theories can have major implications for our actions. Moreover, there’s a great diversity of moral theories, each with its own arguments for and against. Various factors can make a theory appealing, such as how simple or intuitive it is. Weighing all these factors against each other is not easy, and even small errors in this evaluation can have significant consequences for our moral judgment.
Additionally, moral judgments can be influenced by various forms of bias, e.g.:
Your culture, upbringing, and social environment shape what seems “obvious” to you.
The status quo feels morally right just because it’s familiar.
Your evolutionary instincts weren’t exactly fine-tuned for abstract ethical debates.
These factors introduce general irrational tendencies in our brains.4 Generally, we do not know whether we have these biases until they are pointed out to us.5 And even when they are pointed out, we often fail to eliminate them entirely (because, well, bias).6 It’s therefore better to assume that we are biased in many ways, even when we try our best not to be. Given all this, the odds that you’ve flawlessly navigated moral philosophy without error seem… let’s say, low-ish.
1.3 Overconfidence in Humans
A final argument is the general human tendency toward overconfidence.7 Research has consistently shown that people are often much more certain of their judgments than is actually justified. For example:
Studies show that when people estimate that an event has a probability of more than 70%, it usually happens in less than 70% of cases.8
Research has found that when people are 100% certain of something9, they are still wrong in about 20% of cases.10
This overconfidence affects even experts.11
If we are overconfident in so many different domains, it’s likely that this also applies to moral philosophy. In fact, it even seems more likely that we are overconfident in moral matters because people are often strongly biased when it comes to moral issues.12 Therefore, when we feel great certainty about a particular moral belief, it’s very likely that we should actually be less certain.
In summary, these three arguments show that it is wise to assume that we should be morally uncertain. However, the question remains whether we should also incorporate this uncertainty into our decision-making.
2. Should We Act On Moral Uncertainty?
There are two main reasons to take moral uncertainty into account when making decisions. First, in some situations, it can be beneficial to consider moral uncertainty because we have nothing to lose by doing so. Second, in addition to objective moral obligations, there are also subjective moral obligations that can influence our decisions.13
2.1 Nothing to Lose
In certain cases, there is nothing to lose by considering moral uncertainty. Suppose Timmy enjoys both a lettuce sandwich and a meat sandwich equally. While he’s most confident in a moral theory that states that eating meat is neither better nor worse than not eating meat, he also finds another moral theory, which claims that eating meat is wrong, somewhat convincing. In such a case, it seems wiser to choose the lettuce sandwich just to be safe, even if it later turns out that eating meat was not unethical. This choice follows the logic of the dominance rule from decision theory (aka: if one choice is at least as good as another in all scenarios, and better in at least one, then you should pick that one).14 It’s therefore plausible that a similar rule applies in situations of moral uncertainty, implying that there are norms that can guide our behavior in such cases.
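If you like seeing this spelled out, here’s a minimal sketch of the dominance rule in Python. The option names and the 0/1 scores are just my own illustrative stand-ins, not anything from the decision-theory literature.

```python
# Minimal sketch of the dominance rule. Each key is a scenario (here, a moral
# theory Timmy gives some credence to); higher scores are better. The numbers
# are illustrative stand-ins.

def dominates(a: dict, b: dict) -> bool:
    """True if option `a` is at least as good as `b` in every scenario
    and strictly better in at least one."""
    return all(a[s] >= b[s] for s in a) and any(a[s] > b[s] for s in a)

# Timmy's sandwiches, scored under the two theories he finds plausible:
lettuce = {"meat_is_fine": 1, "meat_is_wrong": 1}
meat    = {"meat_is_fine": 1, "meat_is_wrong": 0}

print(dominates(lettuce, meat))  # True: lettuce is never worse, and better if meat is wrong
print(dominates(meat, lettuce))  # False
```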
2.2 Subjective Norms
Philosopher Frank Jackson introduced a thought experiment to illustrate the existence of subjective norms:15
Kathandra is a doctor with a patient named Jimothy.16 Jimothy keeps sneezing out bubbles and she doesn’t know why. She’s unsure whether Jimothy has condition A or condition C; both possibilities seem equally likely. It’s impossible for Kathandra to obtain further diagnostic information to help her make a decision. She has three medicines available: A, B, and C. If Jimothy receives medicine A and he actually has condition A, he will fully recover; but if he has condition C, he will die. Conversely, if he receives medicine C and he has condition C, he will fully recover; but if he has condition A, he will die. If Kathandra chooses medicine B, Jimothy will almost fully recover regardless of the condition.
| Medicine | Jimothy has condition A (50%) | Jimothy has condition C (50%) |
|----------|-------------------------------|-------------------------------|
| A | Completely cured | Dead 😵 |
| B | Almost completely cured | Almost completely cured |
| C | Dead 😵 | Completely cured |
Since Kathandra assigns equal probability to both conditions, it would be reckless for her to administer medicine A or C due to the significant risk of Jimothy’s death. Jackson’s argument suggests that there are subjective norms that advocate choosing the safer medicine B, thereby reducing the risk of a fatal error.
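To make the “reckless” intuition concrete, here’s a rough expected-value reading of the case. The utility numbers (1.0 for completely cured, 0.9 for almost completely cured, 0.0 for dead) are placeholders I picked; Jackson’s argument doesn’t hinge on the exact figures.

```python
# Rough expected-value reading of Jackson's case, with placeholder utilities:
# 1.0 = completely cured, 0.9 = almost completely cured, 0.0 = dead.

p = {"condition_A": 0.5, "condition_C": 0.5}

outcomes = {
    "medicine A": {"condition_A": 1.0, "condition_C": 0.0},
    "medicine B": {"condition_A": 0.9, "condition_C": 0.9},
    "medicine C": {"condition_A": 0.0, "condition_C": 1.0},
}

for medicine, utility in outcomes.items():
    expected = sum(p[c] * utility[c] for c in p)
    print(medicine, expected)

# medicine A 0.5
# medicine B 0.9
# medicine C 0.5  -> B is the subjectively right choice
```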
2.3 Subjective Norms in Moral Uncertainty
The previous thought experiment showed that subjective norms exist. But what about subjective norms when it comes to moral uncertainty? MacAskill, Bykvist, and Ord explored this with a modified version of the thought experiment, specifically focusing on moral uncertainty:17
Kathandra is a doctor faced with two critically ill patients: Amandrew, a human, and Chrisabelle, a chimpanzee. Both suffer from the same fatal bubble sneeze disease and urgently need treatment. Kathandra has only one bottle of life-saving medicine.
If she gives the entire bottle to Amandrew, Amandrew will survive but with a disability that halves their happiness.
If she gives the entire bottle to Chrisabelle, Chrisabelle will fully recover.
If she splits the medicine, both will survive, but their health and happiness will be slightly lower than if they had fully recovered.
Kathandra is a utilitarian, meaning she tends to add up happiness to determine the best outcome. However, she’s uncertain about the moral value of a chimpanzee’s happiness. She considers it equally likely that either:
A chimpanzee’s happiness has no moral value, or
A chimpanzee’s happiness is just as morally important as a human’s.
Since she cannot gather more information, she must decide between:
A: Give the whole bottle to Amandrew.
B: Split the bottle evenly.
C: Give the whole bottle to Chrisabelle.
The outcomes in terms of happiness (higher is better):
| Option | Amandrew’s happiness 🥳 | Chrisabelle’s happiness 🐵🎉 |
|--------|-------------------------|------------------------------|
| A | 50 | 0 |
| B | 49 | 49 |
| C | 0 | 50 |
The moral theories disagree sharply on whether option A or C is best, but both agree that option B is only a small step away from the best choice. This can be summarized as:
| Option | Chimpanzee happiness does not matter (50%) | Chimpanzee happiness does matter (50%) |
|--------|--------------------------------------------|----------------------------------------|
| A | Morally right | Very immoral 👎 |
| B | Slightly immoral | Slightly immoral |
| C | Very immoral 👎 | Morally right |
Now, suppose it will later turn out that chimpanzee happiness is indeed as morally valuable as human happiness, meaning that giving the whole bottle to Chrisabelle would have been the objectively right choice. What should Kathandra do at the moment of decision, when she doesn’t yet know this?
In the earlier thought experiment, we saw that it would be epistemically reckless for Kathandra to choose anything other than the safest option, medicine B. Similarly, in this moral scenario, it seems morally reckless to choose anything other than option B.18
This suggests that we should take both epistemic (knowledge-based) and moral uncertainty seriously and that there are norms that can guide our decisions in both cases.
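If you want to see how a decision rule could actually land on option B here, one approach from the literature is maximizing expected choice-worthiness (see the last footnote). Below is a minimal sketch that simply treats each theory’s happiness totals from the table above as its choice-worthiness scores; that’s an assumption I’m making for illustration, and intertheoretic comparisons are generally much thornier than this.

```python
# Sketch of maximizing expected choice-worthiness for Kathandra's decision.
# Assumption: each theory's choice-worthiness for an option is the total
# happiness it counts (the first theory ignores chimpanzee happiness).

credence = {"chimp_doesnt_count": 0.5, "chimp_counts": 0.5}

choice_worthiness = {
    "A": {"chimp_doesnt_count": 50, "chimp_counts": 50 + 0},
    "B": {"chimp_doesnt_count": 49, "chimp_counts": 49 + 49},
    "C": {"chimp_doesnt_count": 0,  "chimp_counts": 0 + 50},
}

for option, cw in choice_worthiness.items():
    ecw = sum(credence[t] * cw[t] for t in credence)
    print(option, ecw)

# A 50.0
# B 73.5
# C 25.0  -> option B maximizes expected choice-worthiness
```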
I think this paints a pretty clear picture in favor of embracing moral uncertainty. For the sake of fairness I will also go over some objections to it in the future, but (spoiler alert) I mostly don’t find them too convincing. The only thing I haven’t touched on yet is how to incorporate moral uncertainty into your decisions/theories. If you don’t know where to start, then consider checking out my post Resolving moral uncertainty with randomization.
[insert religious/political opponent here]
like, a lot a lot
Christensen, D. (2009) ‘Disagreement as Evidence: The Epistemology of Controversy’, Philosophy Compass, vol. 4, no. 5, pp. 755
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Pronin, Lin, & Ross (2002) ‘The Bias Blind Spot: Perceptions of Bias in Self versus Others’, Personality and Social Psychology Bulletin, vol. 28, no. 3, pp. 369
Pronin, E. (2008) ‘How We See Ourselves and How We See Others’, Science, vol. 320, no. 5880
Yes, even you
Lichtenstein, Fischhoff, & Phillips. (1982) Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under Uncertainty: Heuristics and Biases (pp. 306-334). Cambridge: Cambridge University Press.
Do not recommend btw
Adams & Adams (1960) ‘Confidence in the Recognition and Reproduction of Words Difficult to Spell’, The American Journal of Psychology, vol. 73, no. 4, pp. 544.
Lichtenstein & Fischhoff (1977) ‘Do Those Who Know More Also Know More About How Much They Know?’, Organizational Behavior and Human Performance, vol. 20, no. 2, pp. 159.
Taber & Lodge (2006) ‘Motivated Skepticism in the Evaluation of Political Beliefs’, American Journal of Political Science, vol. 50, no. 3, pp. 755.
Zimmerman, M. (2006) ‘Is Moral Obligation Objective or Subjective?’, Utilitas, vol. 18, no. 4, pp. 329
Abidi & Gonzalez (1993) Data Fusion in Robotics & Machine Intelligence, Academic Press, p. 227
Jackson, F. (1991) ‘Decision-Theoretic Consequentialism and the Nearest and Dearest Objection’, Ethics, vol. 101, no. 3, pp. 462
Jackson, tragically, did not use those names
MacAskill, Bykvist & Ord (2020). Moral Uncertainty. Oxford University Press. pp. 16.
Bykvist, K. (2011). ‘How to Do Wrong Knowingly and Get Away with It’
Making intertheoretic comparisons between moral theories is difficult, but modern moral philosophers in this space often use a moral parliament (https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf) or maximizing expected choice-worthiness (see MacAskill’s entire book on Moral Uncertainty) here.