User:IssaRice/AI safety/Comparison of terms related to agency

{| class="wikitable"
! Term !! Opposite
|-
| Agent ||
|-
| Optimizer, optimization process ||
|-
| consequentialist ||
|-
| expected utility maximizer ||
|-
| goal-directed, goal-based ||
|-
| pseudoconsequentialist ||
|}

parameters to check for:
* is it searching through a list of potential answers? (see the sketch after this list)
* does it have an explicit model of the world?
* can it be modeled as having a utility function?
* can we take an intentional stance toward it? i.e., is it useful, for predicting what it will do, to model it as having intentions?
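For concreteness, here is a minimal Python sketch (an illustration added here, not part of the original list) of a system that would answer "yes" to the first three parameters: it searches through an explicit list of candidate actions, predicts outcomes with an explicit (toy) world model, and ranks them with a utility function. All names and the toy dynamics are assumptions made for illustration.

<syntaxhighlight lang="python">
# Toy "consequentialist optimizer" illustrating the first three parameters.
# Everything here (dynamics, utility, candidates) is a hypothetical example.

def world_model(state, action):
    """Explicit model of the world: predicts the next state."""
    return state + action  # toy dynamics

def utility(state):
    """Explicit utility function over outcomes: prefers states near 10."""
    return -abs(state - 10)

def choose_action(state, candidate_actions):
    """Searches through a list of potential answers (candidate actions),
    scoring each predicted outcome with the utility function."""
    return max(candidate_actions, key=lambda a: utility(world_model(state, a)))

print(choose_action(state=3, candidate_actions=[-2, -1, 0, 1, 2]))  # -> 2
</syntaxhighlight>

By contrast, a system can fail all three of these checks and still be usefully described with the intentional stance; see the Pong example below.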

examples to check against:
* humans
* evolution/natural selection
* bottlecap
* RL system playing Pong without an explicit model (see the sketch below)
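As a contrast to the optimizer sketch above, here is a hand-coded stand-in for the kind of policy a model-free RL system playing Pong might learn: it does not search through candidate answers, keeps no explicit world model, and computes no utility, yet the intentional stance ("it is trying to track the ball") still predicts its behavior well. The function below is hypothetical and purely illustrative.

<syntaxhighlight lang="python">
# Purely reactive Pong policy: no search, no world model, no explicit utility.
# A hand-written stand-in for a learned model-free policy; names are illustrative.

def reactive_pong_policy(ball_y, paddle_y):
    """Moves the paddle toward the ball's vertical position."""
    if ball_y > paddle_y:
        return "up"
    if ball_y < paddle_y:
        return "down"
    return "stay"

print(reactive_pong_policy(ball_y=0.8, paddle_y=0.3))  # -> "up"
</syntaxhighlight>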