User:IssaRice/AI safety/Asymmetry of risks


https://www.greaterwrong.com/posts/aPwNaiSLjYP4XXZQW/ai-alignment-open-thread-august-2019/comment/moZS7T7gGYnTxkqDJ

"I guess I have a high prior that making something smarter than human is dangerous unless we know exactly what we’re doing including the social/political aspects, and you don’t, so you think the burden of proof is on me?" [1] "In general when someone proposes a mechanism by which the world might end, I think the burden of proof is on them. You’re not just claiming “dangerous”, you’re claiming something like “more dangerous than anything else has ever been, even if it’s intent-aligned”. This is an incredibly bold claim and requires correspondingly thorough support." [2]