User:IssaRice/AI safety/Whole brain emulation


Different kinds of WBE

  • hi-fi
  • lo-fi

I'm not sure whether hi-fi/lo-fi refers to the resolution at which the brain is emulated, or to something else.

Distinction between magically obtaining WBE and the expected ways of obtaining it

Bostrom calls this "technology coupling" (p. 236).

Computer speed vs thinking speed

https://www.greaterwrong.com/posts/AWZ7butnGwwqyeCuc/the-importance-of-self-doubt/comment/Jri6mr6WzdysbyaTH
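
The distinction, as I understand it: faster hardware only translates into faster subjective thinking for an emulation in proportion to how much compute one second of brain time requires. A minimal sketch of that relationship in Python, using made-up placeholder numbers (not figures from the roadmap or the linked comment):

  # All numbers here are hypothetical placeholders for illustration only.
  def subjective_speedup(hardware_flops, flops_per_brain_second):
      # subjective seconds of emulated thought per wall-clock second
      return hardware_flops / flops_per_brain_second

  # e.g. 1e20 FLOP/s of hardware running an emulation that needs
  # 1e18 FLOP per second of subjective time gives a 100x speedup
  print(subjective_speedup(1e20, 1e18))  # -> 100.0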

Timelines

  • how many years to WBE under a "default timeline"?
    • "The Roadmap concluded that a human brain emulation would be possible before mid-century, providing that current technology trends kept up and providing that there would be sufficient investments." [1]
  • how much can this timeline be accelerated?
  • different ways to accelerate timelines
  • I wonder whether different people's point estimates preserve the same ordering of WBE vs. de novo AGI (e.g. people might disagree about when WBE will happen, but might agree that WBE will not come sooner than de novo AGI)
  • the amount of "advance warning" we get: for WBE, this depends on what the bottleneck/last missing piece turns out to be

https://www.greaterwrong.com/posts/dokw8bHND9ujPrSAT/discussion-yudkowsky-s-actual-accomplishments-besides/comment/3u5994Lm72rTtfTMN

https://www.greaterwrong.com/posts/xgr8sDtQEEs7zfTLH/update-on-kim-suozzi-cancer-patient-in-want-of-cryonics/comment/2pZPLguNk58xtf9hu

https://www.greaterwrong.com/posts/xgr8sDtQEEs7zfTLH/update-on-kim-suozzi-cancer-patient-in-want-of-cryonics/comment/h2wFFDDGgboFDqviF

References

Superintelligence -- WBE discussion is scattered across the book. The book actually covers most (all?) of the points that Carl brings up in LW comments (see links below), but the problem is that Bostrom writes in his characteristic style, laying out the considerations without actually stating his opinions.

https://youtu.be/EUjc1WuyPT8?t=4286

Nate Soares AMA: https://forum.effectivealtruism.org/posts/cuB3GApHqLFXG36C6/i-am-nate-soares-ama#KFvaoWBKdLchFHDw8

"A risk-mitigating technology. On our current view of the technological landscape, there are a number of plausible future technologies that could be leveraged to end the acute risk period." https://intelligence.org/2017/12/01/miris-2017-fundraiser/#3 I'm guessing WBE is included as a candidate for this.

https://www.greaterwrong.com/posts/v5AJZyEY7YFthkzax/hedging-our-bets-the-case-for-pursuing-whole-brain-emulation/comment/3zCweNgDiP9bvvJZa

https://www.greaterwrong.com/posts/QqZcdAGDJFLnqpsmG/will-the-ems-save-us-from-the-robots/comment/jSAHbffFBiRxrcsx5

https://www.greaterwrong.com/posts/dc9ehbHh6YA63ZyeS/genetically-engineered-intelligence/comment/5ywLzrcnGPHqnCmh3

https://www.greaterwrong.com/posts/v5AJZyEY7YFthkzax/hedging-our-bets-the-case-for-pursuing-whole-brain-emulation/comment/8PudjfJmLXGMSGzbu

https://www.greaterwrong.com/posts/QqZcdAGDJFLnqpsmG/will-the-ems-save-us-from-the-robots/comment/oDRdAWnpygymvJdtP

https://intelligence.org/files/SS11Workshop.pdf

https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf

https://www.greaterwrong.com/posts/jMKZKc2GiFGegRXvN/superintelligence-via-whole-brain-emulation

http://intelligence.org/files/WBE-Superorgs.pdf

https://www.greaterwrong.com/posts/QqZcdAGDJFLnqpsmG/will-the-ems-save-us-from-the-robots

http://blog.ciphergoth.org/blog/2010/02/20/david-matthewman-whole-brain-emulation-roadmap/

http://blog.ciphergoth.org/blog/2010/02/24/doug-clow-whole-brain-emulation-roadmap/

https://www.greaterwrong.com/posts/v5AJZyEY7YFthkzax/hedging-our-bets-the-case-for-pursuing-whole-brain-emulation/comment/q5asyrQSPbNkhJyHg

https://www.greaterwrong.com/posts/kiaDpaGAs4DZ5HKib/call-for-new-siai-visiting-fellows-on-a-rolling-basis/comment/Tz83otixS6BSwj4vo

https://link.springer.com/chapter/10.1007/978-3-642-31674-6_19

https://content.sciendo.com/configurable/contentpage/journals$002fjagi$002f4$002f3$002farticle-p170.xml