Closing thoughts (Mauricio)

The seventh and final in a series with Mauricio on critical rationalism and Bayesian epistemology. See one, two, three, four, five, six, and seven.


Thanks for your final thoughts and recommendations!

It does seem that we agree on plenty. As for the areas you're not very interested in, I want to say a little about why I am:

I think accurately identifying an ideal is valuable, even if its precise applications are often incalculable and its rough applications are often intuitive. People have lots of intuitions and arguments about how we should deal with risk, and these are contradictory enough that some of them must be wrong. With a normative ideal (to put it in CR terms), we have a good way to criticize our intuitions and arguments about risk: figuring out whether they at least roughly match the normative ideal of MEU (maximizing expected utility).

For instance, by asking whether they even roughly fit the ideal of MEU, we can figure out that the following arguments are all invalid:

My concern is that, without a normative ideal, we'd be muddling through our confused or vague ideas about risk without even knowing what we're aiming at. Our aim is bad, but we'll do better if we at least know which target to occasionally glance at.
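To make the ideal a bit more concrete, here's a minimal sketch of what MEU prescribes in a toy decision under risk. The actions, states, probabilities, and utilities are all made up for illustration; they're not meant to capture any real case:

```python
# Toy sketch of the MEU (maximize expected utility) ideal.
# All actions, states, probabilities, and utilities are made up for illustration.

def expected_utility(action, probabilities, utilities):
    """Probability-weighted average of an action's utility across possible states."""
    return sum(p * utilities[action][state] for state, p in probabilities.items())

# Two possible states of the world, with (made-up) credences.
probabilities = {"disaster": 0.1, "no_disaster": 0.9}

# (Made-up) utilities of each action in each state.
utilities = {
    "take_precaution": {"disaster": -10, "no_disaster": -1},
    "do_nothing": {"disaster": -100, "no_disaster": 0},
}

for action in utilities:
    print(action, expected_utility(action, probabilities, utilities))
# take_precaution: 0.1 * (-10) + 0.9 * (-1) = -1.9
# do_nothing:      0.1 * (-100) + 0.9 * 0   = -10.0
# The MEU ideal says: choose the action with the highest expected utility
# (take_precaution here), even though doing nothing is better in the likely state.
```

The point isn't that we can usually compute these numbers; it's that the ideal gives us something to check our rougher intuitions and arguments against.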

And as far as targets go, I quite like making the vast, or even infinite, reverberations of our choices the best they can be.

Apologies for throwing out new arguments in a last response; I'll assume you'd have an excellent reply :)

Thanks so much for taking part in this! You’ve also taught me a lot.


Update

(Added 9/17/20)

It took some more time, but I think I finally got part of what Vaden’s been getting at.

I think my position is now something like this (the middle third is what's changed):

Humans intuitively have varying degrees of confidence about whether events will happen, and these credences are roughly Bayesian. However, these credences are super vulnerable to all kinds of biases, e.g. conformity and groupthink. If our credences weren't vulnerable in this way, then they'd be super useful to use as weights for the values of different outcomes. But they are vulnerable in this way, especially when one is in a community in which a bunch of people have similar beliefs. So, when we're trying to make predictions about events with very little data and lots of potential for groupthink (e.g. when we're trying to make predictions about AGI), these subjective credences mean little, and we shouldn't treat them as if they mean much. Instead, we should demand quite good evidence/arguments for updating significantly away from [assigning equal weights to different things that might happen]. (As ideals, expected value maximization and Bayesian updating still seem to make sense, as does the current focus of the existential risk community on risks from synthetic biology and from AI systems that don't do quite what we want them to.)
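For concreteness, here's a minimal sketch of that last idea as Bayesian updating: start from equal weights over hypotheses, and only move far from them when the evidence is genuinely discriminating. The hypotheses and likelihoods below are made up for illustration:

```python
# Toy sketch of Bayesian updating starting from equal weights (a uniform prior).
# Hypotheses and likelihoods are made up for illustration.

def bayes_update(prior, likelihood):
    """Posterior over hypotheses after one piece of evidence.

    prior: hypothesis -> prior probability
    likelihood: hypothesis -> P(evidence | hypothesis)
    """
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

hypotheses = ["H1", "H2", "H3"]

# Start by assigning equal weight to each hypothesis.
prior = {h: 1 / len(hypotheses) for h in hypotheses}

# Weak, ambiguous evidence: nearly the same likelihood under every hypothesis,
# so the posterior barely moves away from equal weights.
weak_evidence = {"H1": 0.50, "H2": 0.45, "H3": 0.55}
print(bayes_update(prior, weak_evidence))    # roughly {H1: 0.33, H2: 0.30, H3: 0.37}

# Strongly discriminating evidence: the posterior shifts substantially.
strong_evidence = {"H1": 0.90, "H2": 0.10, "H3": 0.05}
print(bayes_update(prior, strong_evidence))  # roughly {H1: 0.86, H2: 0.10, H3: 0.05}
```

The mechanics here are the easy part; the worry in the paragraph above is that, with very little data, the inputs to this kind of update can themselves be products of conformity and groupthink.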