So there are also ways, of course, in which some human enhancements could mitigate existential risk — for example, cognitive enhancement. It might be that we just need to be smarter to figure out how not to destroy ourselves the first time we create, say, machine superintelligence. You might need enough intelligence to be able to foresee the consequences of some of the actions we're taking. So depending on what kind of enhancement you're talking about, it might either increase or decrease existential risk. So here one question is whether your values are focused on currently existing people, or whether you're, at the other extreme, neutral between all future generations, so that bringing happy people into existence counts as much as making people who already exist happy. If you only care about existing people, then you might want to be quite risk-seeking, in the sense that currently we're all dying. Unless something radical changes, we're all going to be dead within a hundred years or so, and most of us much sooner. When you're in a desperate situation like that, you want to try, even if it's a long shot — it's the only chance you have of maybe achieving a cosmic-scale lifespan. Something really radical would have to change.
If you are temporally neutral, and you care as much about bringing new happy people into existence as you do about making currently existing people happy, then your priority will instead be to do whatever increases the chances that ultimately we will develop, you know, a galactic civilisation that's happy. Whether it takes a hundred years or fifty years or fifty thousand years is completely irrelevant, because once it's there it can last for billions of years. So you would then do whatever it takes to reduce existential risk as much as possible. And whether that means causing famines in Africa or not doing that — whatever it would be, it just fades in significance compared to this goal of reducing existential risk. So you get very different priorities depending on this basic question in value theory.