Discussion about this post

niplav:

Thanks for writing this! I find it quite surprising that societies would accept such a high level of risk for such a modest gain, but the analysis looks sound to me. This suggests a pretty weird world: one in which governments pay a lot of money to avoid existential risk⁹ but nevertheless society charges ahead on risky technologies with large payoffs—basically the OpenAI approach.

I think I've changed my mind a bit as a result of this, with the egoistic part of my moral parliament being slightly more okay with AI risk, but not much, since I think the alternative to going ahead with TAI development could still hold large improvements in quality of life and healthspan, even more so since I'm signed up for cryonics. (I also have a p(doom)≈60%, which doesn't quite make the cutoff.)

I'd like to understand the reasons for lower time preference ⇒ more okay with TAI risk, since my time preference is probably much lower than that of most people, and this could have implications for my beliefs/actions.

The standard EA rejoinder is that this post assumes a pure time preference, which philosophers reject and economists embrace¹⁰. But you don't assume that future generations don't matter, right? Besides discounting, that is.

> This nicely reproduces Jones’ result that societies with lower discount rates take on more AI risk.

Is there an intuitive explanation for this? I've tried following the equations, but I find it tricky. Also, n_ai stands for the net population growth under AI, right? It'd be helpful if you added this to the text.
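
The closest I've come to an intuition is a toy version under my own simplifications (log utility, a flat value-of-life term b, constant growth g without AI and g_ai with it, no population growth, extinction normalized to value 0), so quite possibly not faithful to your setup. There the discounted value of a path c(t) = e^(g·t) with flow utility b + log c(t) is V(g) = b/ρ + g/ρ², and the largest acceptable extinction probability solves (1 − p)·V(g_ai) = V(g), giving p* = 1 − (bρ + g)/(bρ + g_ai), which rises as ρ falls because the growth term g/ρ² dominates for patient societies:

```python
# Toy sketch, entirely my own assumptions (log utility, flat value-of-life
# term b, constant growth g without AI vs. g_ai with AI, no population
# growth, extinction normalized to value 0), not necessarily the post's model.
# Flow utility of being alive at time t is b + log c(t) with c(t) = exp(g*t),
# so discounted lifetime value is V(g) = b/rho + g/rho**2.

def value(growth, rho, b=5.0):
    """Discounted social value of a consumption path growing at rate `growth`."""
    return b / rho + growth / rho**2

def max_acceptable_risk(rho, g=0.02, g_ai=0.10, b=5.0):
    """Largest extinction probability p such that (1 - p) * V(g_ai) >= V(g)."""
    return 1.0 - value(g, rho, b) / value(g_ai, rho, b)

for rho in (0.05, 0.02, 0.01, 0.005):
    print(f"rho = {rho:5.3f}  ->  cutoff p* = {max_acceptable_risk(rho):.2f}")
```

With these made-up numbers the cutoff climbs from about 0.23 at ρ = 0.05 to about 0.64 at ρ = 0.005; the direction matches the post's result, though the levels depend entirely on b, g and g_ai.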

> First, lets assume that AI produces some sort of “singularity” that delivers infinite growth

Just flagging that I'm very skeptical about this, even if the changes I'd make probably wouldn't modify the outcome very much. My go-to assumptions⁴ (roughly sketched in code after the list) are that

• AI produces hyperbolic growth for a while³

• that then settles back into a fixed but far higher growth rate, such as the economy doubling every year, and then

• *eventually* a slowdown¹ ² to cubic growth as the economy expands at light speed in a fixed state (with bumps for the space between galaxies/galaxy clusters), and then

• again a slowdown to zero growth as we have colonized the reachable universe (or the part of the universe not grabbed by any other grabby civilization⁸), and

• *eventually* a negative growth rate because we run out of negentropy/computational capacity in the universe⁵

(Not considering anything like the scenarios in Beyond Astronomical Waste⁶, æstivation⁷, or obscure physics like Malament-Hogarth spacetimes or entropy-defying tech.)
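
To make the list above concrete, here is a rough numerical sketch of those regimes. All numbers are invented and only the qualitative shape matters: a hyperbolic phase, a capped fast-doubling phase, a decaying-rate phase standing in for cubic light-speed expansion, then stagnation and a slow decline.

```python
import math

# Rough numerical sketch of the growth regimes listed above (all numbers
# invented; only the qualitative shape matters). log Y(t) is the cumulative
# integral of a piecewise instantaneous growth rate, taken in closed form.

R0, T_SING = 0.03, 21.0   # hyperbolic phase: r(t) = R0 * T_SING / (T_SING - t)
T1, T2 = 20.0, 100.0      # hyperbolic phase ends; annual-doubling phase ends
T3, T4 = 1e9, 1e12        # cubic (light-speed) phase ends; zero-growth phase ends
DECAY = 1e-13             # per-year decline once negentropy runs short

def log_output(t):
    """log Y(t) with Y(0) = 1, integrating the piecewise growth rate."""
    log_y = R0 * T_SING * math.log(T_SING / (T_SING - min(t, T1)))  # hyperbolic
    if t > T1:
        log_y += math.log(2) * (min(t, T2) - T1)                    # doubling yearly
    if t > T2:
        log_y += 3 * math.log(min(t, T3) / T2)                      # Y ~ t^3 expansion
    if t > T4:
        log_y -= DECAY * (t - T4)                                   # slow decline
    return log_y

for t in (10, 20, 100, 1e6, 1e9, 1e12, 1e15):
    print(f"t = {t:>8.0e} yr   log10 Y = {log_output(t) / math.log(10):8.1f}")
```

The absolute values are meaningless; the point is just the shape: explosive, then merely fast, then polynomial, then flat, then declining.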

> Is it silly to try to estimate things about the post-AI world like n_ai and g_ai? These things are fundamentally unknowable, so what good is a model that depends on them?

Needless to say, I also disagree with this, and I'm happy you tried :-)

¹: https://forum.effectivealtruism.org/posts/pFHN3nnN9WbfvWKFg/this-can-t-go-on?commentId=x3BNxMhuqY7t7RQZi

²: Limits to Growth (Robin Hanson, 2009) https://www.overcomingbias.com/p/limits-to-growthhtml

³: Modeling the Human Trajectory (David Roodman, 2020) https://www.openphilanthropy.org/blog/modeling-human-trajectory

⁴: http://niplav.site/notes.html#A_FIRE_Upon_the_Deep

⁵: https://arxiv.org/abs/quant-ph/0110141

⁶: https://www.lesswrong.com/posts/Qz6w4GYZpgeDp6ATB/beyond-astronomical-waste

⁷: https://arxiv.org/abs/1705.03394

⁸: https://arxiv.org/abs/2102.01522

⁹: https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes

¹⁰: https://philarchive.org/rec/GREDFH

niplav:

By the way, have you considered cross-posting this to the EA forum? I would have liked to see it there, but I also see that your post on LW didn't get many responses.
