Response to 'Philosophy of Probability II - Existential Risks'

The first in a series with Mauricio on critical rationalism and Bayesian epistemology. See one, two, three, four, five, six, and seven.

Mauricio studies political science, economics, and philosophy, and was kind enough to write an analysis of the conversation Ben and I had wherein I criticized the use of probability within the EA / Existential Risk community. The following is his response.


You both raise good points. I think there are significant problems with the criticisms Vaden makes. (I don’t want to write “it seems” before every sentence, so please interpret me as less confident than I might sound.) (Also, I think I did better at identifying problems in others’ current beliefs than in my current beliefs, so please poke holes in this.)

Summary of my criticisms:

I agree with Vaden that hearing someone’s arguments is often more useful than hearing their credences. I don’t see how this is a strong criticism of Ord’s book–isn’t most of the book spent providing arguments for the credences that he states near the end?

Vaden’s positions seem largely motivated by the idea that the “repugnant” conclusion, as an impossibility theorem, implies that there’s a fundamental mistake in assigning numbers to well-being. This is wrong. Like any other impossibility theorem, the “repugnant” conclusion demonstrates that, out of a set of things that you’d intuitively want out of an approach to population ethics, you can’t have all of them. This would only mean that your fundamental assumption about a project was wrong if all of the assumptions that you can’t have together were motivated by the same fundamental assumption. That might be the case with Arrow’s Theorem (if one assumed that you could straightforwardly represent the will of the people), but there is no such underlying assumption in population ethics–the intuitions that we can’t have together seem separately motivated, so we can reject one while keeping the others. We can reject the intuition that the “repugnant” conclusion is repugnant¹ and carry on with the other assumptions, which work just fine together. Because of this, the characterization of longtermist population ethics as “this is impossible but we’re going to do it anyway” is inaccurate–most longtermists seem to coherently accept the “repugnant” conclusion. Since longtermist population ethics is something we might coherently want to pursue, finding the best ways to do it (even if we can’t think about the long-term future nearly as rigorously as we might want to) makes sense. Somewhat-informed guesswork is the best we can do to learn about the long-term consequences of our actions, so it is extremely sensible and important, if we believe that the value of our actions is dominated by their long-term consequences.

Some of Vaden’s skepticism towards thinking that the future is of overwhelming importance seems to come from his belief that there will be an infinite number of future people, making far-future concerns always overwhelm present ones. This is also not correct–increasing entropy seems set to wipe out complex life eventually. One might object that, even then, our decisions would be dominated by the slim chance that doing things like research on how to avoid heat death would benefit infinite future generations. This is, at least, far from being a necessary implication of valuing people at all times equally. If the chance of creating some number of future people with good lives falls rapidly enough as that number increases, then the expected value of acting for an infinite number of potential future people is finite², so it might be less than the finite value of acting for near-future people.
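To make the convergence claim concrete, here is a minimal sketch of one way the sum can come out finite. It is my own illustration, not necessarily the argument in footnote 2, and it assumes a geometric decay bound and a fixed value per life (both of which are assumptions I am introducing):

```latex
% Sketch only: suppose an action creates n additional good lives with
% probability p_n, where p_n decays at least geometrically,
% p_n <= C r^n for some 0 < r < 1, and each life adds a fixed value v.
% Then the expected value is bounded by a convergent series:
\[
  \mathbb{E}[V] \;=\; \sum_{n=1}^{\infty} p_n \, n v
  \;\le\; C v \sum_{n=1}^{\infty} n r^n
  \;=\; \frac{C v \, r}{(1-r)^2} \;<\; \infty .
\]
```

Under these assumptions the total is finite, so it can in principle be outweighed by the finite value of helping near-future people.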

Vaden also seems skeptical of longtermism on the basis that future people don’t exist. This is also arguably wrong–there are compelling arguments for thinking that the past and future exist, just as much as the present does.

I’m skeptical of the claim that we can do anything other than make decisions on the basis of our subjective credences. It seems useful, accurate, and representative of decision-making more broadly to say that my subjective credence in the claim [it will rain tomorrow] has a big influence, together with my desires and other beliefs, on whether I pack an umbrella tomorrow. And it seems like it should. If the probabilities that end up influencing our decisions are always our subjective credences, then it’s pointless to say that we should be using something else–the best we can do is calibrate these credences as well as possible (e.g. by drawing on frequentist data when available). We use our subjective credences to make decisions, and standard decision theory endorses this, so it seems worth devoting significant effort to making these credences as close to optimal as we can.
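The umbrella example can be spelled out as a toy decision-theory calculation. This is only a sketch; the utility numbers are invented for illustration:

```python
# Toy sketch of the umbrella decision as standard decision theory
# frames it. All utility numbers are made-up assumptions.

def expected_utility(credence_rain: float, utilities: dict) -> dict:
    """Expected utility of each action, given a subjective credence in rain."""
    return {
        action: credence_rain * outcomes["rain"]
        + (1 - credence_rain) * outcomes["no_rain"]
        for action, outcomes in utilities.items()
    }

# Hypothetical utilities: getting soaked is bad; carrying an
# unneeded umbrella is a mild nuisance.
utilities = {
    "pack_umbrella": {"rain": 5, "no_rain": -1},
    "leave_it":      {"rain": -10, "no_rain": 0},
}

for credence in (0.05, 0.3, 0.8):
    eu = expected_utility(credence, utilities)
    best = max(eu, key=eu.get)
    print(f"credence in rain = {credence:.2f} -> {best}  {eu}")
```

The point of the sketch is just that the same desires and the same possible outcomes recommend different actions depending on the credence, which is why better-calibrated credences lead to better decisions.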

Similarly, there are good reasons to think that any decision we make at least implicitly assigns relative quantitative values (which Vaden seems skeptical of). As with assigning credences–since we can’t do anything else, it seems useful to figure out how we can best go about assigning relative quantitative values. It’s important to do this with explicit thinking because our intuitions are really bad at dealing with numbers, especially large numbers. In practice, people who don’t approach ethics quantitatively seem to consistently overlook immense injustice and suffering.

Rejecting probabilistic assessments of far-future trajectories, Vaden argues that “the best explanation is all we have” (1:15:20). This is not correct. We also have the 2nd-best explanation, and the 3rd-best explanation, and so on. I’m not just saying this to nitpick. If the 2nd-best explanation / set of arguments implies “we’re in significant danger, and we can do something about it,” ignoring it seems reckless. Expected value calculations based on subjective credences offer a way to account for things like 2nd-best explanations that would be important if true. I don’t see how Vaden’s proposed approach allows for such reasonable-seeming caution.
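Here is a toy sketch of how an expected value calculation can register a non-best explanation that matters if true. The credences and values are invented for illustration only:

```python
# Toy sketch: weighting ranked explanations by credence instead of
# acting only on the single best one. All numbers are assumptions.

explanations = [
    # (credence, value of acting on mitigation if this explanation is true)
    (0.70, 0.0),    # best explanation: no danger, mitigation does nothing
    (0.25, 100.0),  # 2nd-best: significant danger, mitigation helps a lot
    (0.05, -5.0),   # 3rd-best: mitigation is mildly counterproductive
]

ev_of_acting = sum(credence * value for credence, value in explanations)
print(f"expected value of acting on the risk: {ev_of_acting:.2f}")
# -> 24.75: positive, even though the single best explanation says
#    acting is worthless, because the 2nd-best explanation dominates.
```

Acting only on the best explanation would throw this information away; weighting by credence is what lets the 2nd-best explanation exert the cautionary pull it seems like it should.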

Vaden argues that ideas are the primary causes of existential risks. I’d be very curious to hear more about how ideas might pose x-risks, and how they might be positively shaped for the long-term future. But his claim that they’re primary causes seems unsubstantiated as a general claim and not very helpful in specific cases. For example, when it comes to risks of engineered pandemics–sure, these wouldn’t happen if no one thought that engineering a pandemic was a good idea, but restricting DNA synthesis might be a more tractable solution than reducing misanthropy.

Vaden also claims that measuring subjective credence doesn’t measure anything very important–that it’s detached from the objective probabilities that we actually want to know. If subjective credences are formed in such ways that they’re correlated with objective probabilities, then this claim is false.

I agree that stating explicit probabilities risks making somewhat-informed guesswork seem much more authoritative than it really is. This is a reason to embrace wide confidence intervals, not to throw away quantitative thinking about important future outcomes altogether.


Footnotes

  1. Prof Greaves and Rob Wiblin give reasons that I find compelling for doubting our intuitive repulsion here, and I wrote more about why the framing of the repugnant conclusion might be misleading here.

  2. Attempted proof here.