User:IssaRice/AI safety/Asymmetry of risks

From Machinelearning
Revision as of 03:58, 23 June 2020 by IssaRice (talk | contribs)

https://www.greaterwrong.com/posts/aPwNaiSLjYP4XXZQW/ai-alignment-open-thread-august-2019/comment/moZS7T7gGYnTxkqDJ

"I guess I have a high prior that making something smarter than human is dangerous unless we know exactly what we’re doing including the social/political aspects, and you don’t, so you think the burden of proof is on me?" [1]

"In general when someone proposes a mechanism by which the world might end, I think the burden of proof is on them. You’re not just claiming “dangerous”, you’re claiming something like “more dangerous than anything else has ever been, even if it’s intent-aligned”. This is an incredibly bold claim and requires correspondingly thorough support." [2]

https://www.greaterwrong.com/posts/QSBgGv8byWMjmaGE5/preparing-for-the-talk-with-ai-projects#comment-B6KDnC2zYDQRpz2iX