User:IssaRice/AI safety/Comparison of terms related to agency

Term                            | Opposite
--------------------------------|------------
Agent                           |
Optimizer, optimization process |
consequentialist                |
expected utility maximizer      |
goal-directed, goal-based       | act-based?
pseudoconsequentialist          |

parameters to check for:

  • is it searching through a list of potential answers? (see the sketch after this list)
  • does it have an explicit model of the world?
  • can it be modeled as having a utility function?
  • can we take the intentional stance toward it? i.e., is it useful, for the purpose of predicting what it will do, to model it as having intentions?
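
As a rough illustration of the first and third parameters, here is a minimal sketch (Python; all names and numbers are hypothetical) contrasting a system that explicitly searches through a list of potential answers and scores them with a utility function against one that just follows a fixed policy, closer in spirit to the Pong example below:

  # A system that *does* search through potential answers and explicitly
  # represents a utility function: it enumerates candidates and picks the
  # one the utility function scores highest.
  def utility(answer):
      # hypothetical stand-in for whatever the system cares about
      return -abs(answer - 7)

  def searching_maximizer(potential_answers):
      return max(potential_answers, key=utility)

  # A system that does neither explicitly: a fixed lookup from observations
  # to actions (in the spirit of a model-free RL policy). Whether it can
  # still be *modeled* as having a utility function is exactly what the
  # third parameter asks.
  FIXED_POLICY = {"ball_above_paddle": "move_up",
                  "ball_below_paddle": "move_down"}

  def policy_follower(observation):
      return FIXED_POLICY.get(observation, "stay")

  print(searching_maximizer(range(20)))        # 7
  print(policy_follower("ball_above_paddle"))  # move_up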

examples to check against (see the grid sketch after this list):

  • humans
  • evolution/natural selection
  • bottlecap
  • RL system playing Pong without an explicit model
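
One way to use the two lists is as a grid, with each parameter checked against each example. Below is a minimal sketch (Python; the structure is hypothetical, and every answer is left as None except the one the Pong example itself fixes):

  PARAMETERS = [
      "searches through a list of potential answers",
      "has an explicit model of the world",
      "can be modeled as having a utility function",
      "intentional stance is useful for predicting it",
  ]

  EXAMPLES = [
      "humans",
      "evolution/natural selection",
      "bottlecap",
      "RL system playing Pong without an explicit model",
  ]

  # None = not yet filled in; the only answer the page itself supplies is
  # that the Pong-playing RL system lacks an explicit model of the world.
  GRID = {(e, p): None for e in EXAMPLES for p in PARAMETERS}
  GRID[("RL system playing Pong without an explicit model",
        "has an explicit model of the world")] = False

  def report(example):
      # print where a given example stands on each parameter
      for p in PARAMETERS:
          print(f"{example}: {p}? -> {GRID[(example, p)]}")

  report("RL system playing Pong without an explicit model")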