User:IssaRice/AI safety/Comparison of terms related to agency

This page has moved to https://wiki.issarice.com/wiki/Comparison_of_terms_related_to_agency

{| class="wikitable"
! Term !! Opposite
|-
| Agent ||
|-
| Optimizer, optimization process ||
|-
| Consequentialist ||
|-
| Expected utility maximizer ||
|-
| Goal-directed, goal-based ||
|-
| Pseudoconsequentialist ||
|}


parameters to check for (a code sketch illustrating some of these follows the list):

* is it searching through a list of potential answers?
* does it have an explicit model of the world?
* can it be modeled as having a utility function?
* can we take an intentional stance toward it? i.e., is it useful (for predicting what it will do) to model it as having intentions?
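
As a rough illustration of the first three parameters, here is a minimal, hypothetical Python sketch (all names and the toy "world" are invented for this page): one function searches through an explicit list of candidate actions using an explicit world model and a utility function, while the other is a reactive, model-free policy that does none of these.

<syntaxhighlight lang="python">
from typing import Callable, Dict, List


def search_based_agent(
    state: int,
    actions: List[int],
    world_model: Callable[[int, int], int],  # predicts the next state
    utility: Callable[[int], float],         # scores outcomes
) -> int:
    """Searches through a list of potential answers (actions), consulting an
    explicit model of the world and an explicit utility function."""
    return max(actions, key=lambda a: utility(world_model(state, a)))


def reactive_policy(state: int, policy_table: Dict[int, int]) -> int:
    """No search, no world model, no explicit utility function: just a
    learned state-to-action lookup (like a model-free Pong player)."""
    return policy_table.get(state, 0)


if __name__ == "__main__":
    # Toy setup: states and actions are integers and the "world" adds them.
    actions = [-1, 0, 1]
    world_model = lambda s, a: s + a
    utility = lambda s: -abs(s - 3)  # prefers states close to 3

    print(search_based_agent(2, actions, world_model, utility))  # 1
    print(reactive_policy(2, {2: 1}))                            # 1
</syntaxhighlight>

On this sketch, the model-free Pong player in the examples list below looks like reactive_policy, while an expected utility maximizer looks like search_based_agent; a bottlecap is neither.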
 
examples to check against:
 
* humans
* evolution/natural selection
* bottlecap
* RL system playing Pong without an explicit model
