6 Comments
Apr 4 · Liked by Sam Harsimony

Thanks for writing this! I find it quite surprising that societies would accept such a high level of risk for such a modest gain, but the analysis looks sound to me. This suggests a pretty weird world: one in which governments pay a lot of money to avoid existential risk⁹, yet society nevertheless charges ahead on risky technologies with large payoffs (basically the OpenAI approach).

I think I've changed my mind a bit as a result of this, with the egoistic part of my moral parliament now being slightly more okay with AI risk, though not by much, since I think the alternative to going ahead with TAI development could still hold large improvements in life quality and healthspan, all the more so since I'm signed up for cryonics. (I also have a p(doom) ≈ 60%, which doesn't quite make the cutoff.)

I'd like to understand the reasons for lower time preference ⇒ more okay with TAI risk, since my time preference is probably much lower than that of most people, and this could have implications for my beliefs/actions.

The standard EA rejoinder is that this post assumes a pure rate of time preference, which philosophers reject and economists embrace¹⁰. But you don't assume that future generations don't matter, right? Apart from the discounting, that is.

> This nicely reproduces Jones’ result that societies with lower discount rates take on more AI risk.

Is there an intuitive explanation for this? I've tried following the equations, but I find it tricky. Also, n_ai stands for the net population growth rate under AI, right? It'd be helpful if you added this to the text.

> First, lets assume that AI produces some sort of “singularity” that delivers infinite growth

Just flagging that I'm very skeptical about this, though my changes probably wouldn't modify the outcome very much. My go-to assumptions⁴ (see the toy sketch after the list) are that

• AI produces hyperbolic growth for a while³

• which then runs into a fixed (but far higher) growth rate again, such as the economy doubling every year, and then

• *eventually* a slowdown² to cubic growth once the economy is only expanding spatially at light speed (with bumps for the empty space between galaxies/galaxy clusters), and then

• a further slowdown to zero growth once we have colonized the reachable universe (or the part of it not grabbed by any other grabby civilization⁸), and

• *eventually* a negative growth rate because we run out of negentropy/computational capacity in the universe⁵

(Not considering anything along the lines of "Beyond Astronomical Waste"⁶, æstivation⁷, or obscure physics like Malament-Hogarth spacetimes or entropy-defying tech.)
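For concreteness, here is a toy version of that schedule as a piecewise growth-rate function; every breakpoint and rate below is a made-up placeholder rather than an estimate:

```python
def growth_rate(t_years):
    """Toy piecewise growth-rate schedule (per year) for the phases above.
    All breakpoints and rates are illustrative placeholders, not estimates."""
    if t_years < 30:                       # hyperbolic-ish takeoff: the rate itself keeps rising
        return 0.03 / (1 - t_years / 35)   # would blow up near year 35 if it continued unchecked
    elif t_years < 1_000:                  # fast but fixed exponential: ~one doubling per year
        return 0.7
    elif t_years < 10_000_000:             # frontier expands at light speed: volume ~ t^3,
        return 3 / t_years                 # so the growth *rate* decays like 3/t
    elif t_years < 20_000_000_000:         # reachable universe colonized: roughly zero growth
        return 0.0
    else:                                  # negentropy running out: slow decline
        return -1e-12
```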

> Is it silly to try to estimate things about the post-AI world like n_ai and g_ai? These things are fundamentally unknowable, so what good is a model that depends on them?

Needless to say, I also disagree with this, and I'm happy you tried :-)

¹: https://forum.effectivealtruism.org/posts/pFHN3nnN9WbfvWKFg/this-can-t-go-on?commentId=x3BNxMhuqY7t7RQZi

²: Limits to Growth (Robin Hanson, 2009) https://www.overcomingbias.com/p/limits-to-growthhtml

³: Modeling the Human Trajectory (David Roodman, 2020) https://www.openphilanthropy.org/blog/modeling-human-trajectory

⁴: http://niplav.site/notes.html#A_FIRE_Upon_the_Deep

⁵: https://arxiv.org/abs/quant-ph/0110141

⁶: https://www.lesswrong.com/posts/Qz6w4GYZpgeDp6ATB/beyond-astronomical-waste

⁷: https://arxiv.org/abs/1705.03394

⁸: https://arxiv.org/abs/2102.01522

⁹: https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes

¹⁰: https://philarchive.org/rec/GREDFH

author

Thanks for the feedback!

I think you've zeroed in on the interesting and unintuitive part about lower discounting meaning more risk-taking. Here's how I rationalize it:

1. We assume that AI increases population growth (via lower mortality, higher birthrates, or both). This assumption could be violated if, for example, AI lowers mortality but also lowers birthrates enough that n_ai < n_0.

2. If population growth is higher with AI, then the far future has many more people and total utility is much higher than without it.

3. The lower your time discount rate, the more heavily you weight the far future, and the more risk you are willing to take to reach it.

Social planners that have linear utility in population are willing to accept near certainty of doom for a tiny chance of high future welfare. That's why a purely total utilitarian social planner is a bad idea!
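Here's a minimal numerical sketch of that logic (not the exact model from the post; the function names and numbers below are illustrative placeholders). Assume constant per-capita utility, exponential population growth at rate n_0 without AI and n_ai with AI, exponential discounting at rate rho, and zero utility if doom happens. The planner then tolerates any doom probability up to the point where the gamble stops beating the no-AI baseline:

```python
def discounted_total_utility(u, n, rho):
    """Present value of total utility: integral of u * exp(n*t) * exp(-rho*t) dt
    from 0 to infinity, which equals u / (rho - n) when rho > n."""
    assert rho > n, "the integral only converges when rho > n"
    return u / (rho - n)

def max_acceptable_doom(rho, n0=0.005, n_ai=0.02, u0=1.0, u_ai=2.0):
    """Largest doom probability p at which gambling on AI still (weakly)
    beats the no-AI baseline for a total-utilitarian planner."""
    baseline = discounted_total_utility(u0, n0, rho)
    ai_if_it_goes_well = discounted_total_utility(u_ai, n_ai, rho)
    return max(0.0, 1.0 - baseline / ai_if_it_goes_well)

for rho in (0.05, 0.03, 0.021):
    print(f"rho = {rho:.3f}: tolerates doom probability up to {max_acceptable_doom(rho):.2f}")
```

With these placeholder numbers the tolerated doom probability climbs from about 0.67 at rho = 5% to about 0.97 as rho approaches n_ai = 2%, which is exactly the lower-discounting-means-more-risk-taking pattern.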

As for different models of growth under AI, I think they're going to give similar results. The social planner always faces the same problem: "AI could potentially create a huge amount of value in the future; how much risk am I willing to take to get there?"

Interesting!

My intuition then says "sure, we're creating additional lives, but not *that* many, because we're just moving the distribution forward": of all the people that will ever be created, we're just instantiating them a century earlier (plus creating a bit less than a century's worth of extra people, since time is presumably finite).

But I guess that due to discounting, moving them a century earlier matters a whole lot.
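A quick illustration of how strongly pure time preference rewards that century; the 2%/yr rate here is just a placeholder:

```python
import math

rho = 0.02              # placeholder pure rate of time preference, 2% per year
years_earlier = 100
boost = math.exp(rho * years_earlier)
print(f"{boost:.1f}x")  # ~7.4x: a life-year moved a century earlier counts ~7.4 times as much
```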

author

Yup, this is correct: the model assumes growth continues indefinitely, whereas you see the future as a sort of logistic curve with a finite total number of people.

But note that "moving the distribution forward" also increases the total life-years lived, so the two approaches aren't too different.

I would guess that if I redid this with a generalized logistic function, the results wouldn't change much so long as the exponential-looking part of the curve continued for long enough. Because rho > n_ai, the far future makes a pretty small contribution to total utility past (say) 1,000 years.
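To put a rough number on that last claim, using placeholder values for rho and n_ai rather than the ones from the post:

```python
import math

rho, n_ai = 0.02, 0.015              # placeholders: discount rate 2%/yr, AI-era growth 1.5%/yr
gap = rho - n_ai                     # effective decay rate of discounted total utility over time
tail_share = math.exp(-gap * 1000)   # share of discounted total utility from years beyond 1000
print(f"{tail_share:.2%}")           # ~0.67% with these placeholder numbers
```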

By the way, have you considered cross-posting this to the EA forum? I'd've liked to see it there, but I also see that your post on LW didn't get many responses.

author

Yeah I can try posting on EA forum, thanks for the nudge!
