User:IssaRice/AI safety/Asymmetry of risks



"I guess I have a high prior that making something smarter than human is dangerous unless we know exactly what we’re doing including the social/political aspects, and you don’t, so you think the burden of proof is on me?" [https://www.greaterwrong.com/posts/HekjhtWesBWTQW5eF/agis-as-populations#comment-PEsLPm8HSftYvpgt4] "In general when someone proposes a mechanism by which the world might end, I think the burden of proof is on them. You’re not just claiming “dangerous”, you’re claiming something like “more dangerous than anything else has ever been, ''even'' if it’s intent-aligned”. This is an incredibly bold claim and requires correspondingly thorough support." [https://www.greaterwrong.com/posts/HekjhtWesBWTQW5eF/agis-as-populations#comment-CdEBDCN3GxuSaLmg3]
"I guess I have a high prior that making something smarter than human is dangerous unless we know exactly what we’re doing including the social/political aspects, and you don’t, so you think the burden of proof is on me?" [https://www.greaterwrong.com/posts/HekjhtWesBWTQW5eF/agis-as-populations#comment-PEsLPm8HSftYvpgt4] "In general when someone proposes a mechanism by which the world might end, I think the burden of proof is on them. You’re not just claiming “dangerous”, you’re claiming something like “more dangerous than anything else has ever been, ''even'' if it’s intent-aligned”. This is an incredibly bold claim and requires correspondingly thorough support." [https://www.greaterwrong.com/posts/HekjhtWesBWTQW5eF/agis-as-populations#comment-CdEBDCN3GxuSaLmg3]
https://www.greaterwrong.com/posts/QSBgGv8byWMjmaGE5/preparing-for-the-talk-with-ai-projects#comment-B6KDnC2zYDQRpz2iX

Latest revision as of 03:58, 23 June 2020.

https://www.greaterwrong.com/posts/aPwNaiSLjYP4XXZQW/ai-alignment-open-thread-august-2019/comment/moZS7T7gGYnTxkqDJ

"I guess I have a high prior that making something smarter than human is dangerous unless we know exactly what we’re doing including the social/political aspects, and you don’t, so you think the burden of proof is on me?" [1] "In general when someone proposes a mechanism by which the world might end, I think the burden of proof is on them. You’re not just claiming “dangerous”, you’re claiming something like “more dangerous than anything else has ever been, even if it’s intent-aligned”. This is an incredibly bold claim and requires correspondingly thorough support." [2]
