User:IssaRice/AI safety/Comparison of terms related to agency

Term                                             Opposite
-----------------------------------------------  --------
Agent
Optimizer, optimization process
Consequentialist
Expected utility maximizer (see formula below)
Goal-directed, goal-based
Pseudoconsequentialist
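
As a reference point for the table (this is the standard decision-theory formulation, not something specific to this page), an expected utility maximizer is an agent that chooses the action with the highest expected utility according to its own beliefs and utility function:

  a^* = \arg\max_{a \in A} \sum_{s \in S} P(s \mid a)\, U(s)

where A is the set of available actions, S the set of outcomes, P(s | a) the agent's probability that action a leads to outcome s, and U its utility function. One way to use the table is to check each example below against this definition: does the system have anything playing the role of P and U?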

Examples to check these terms against:

  • humans
  • evolution/natural selection
  • bottle cap
  • RL system playing Pong without an explicit model (see the sketch after this list)
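
To make the last example concrete, here is a minimal sketch of a model-free reinforcement learning agent, using tabular Q-learning for simplicity (real Pong agents typically use function approximation, but the tabular case shows the idea). The environment interface (env.reset, env.step) is a hypothetical stand-in, not a real library. The property the example is meant to illustrate: the agent never builds or consults a model of the environment's dynamics; it adjusts its behavior from observed rewards alone, yet its behavior can still look goal-directed.

  import random
  from collections import defaultdict

  class QLearningAgent:
      """Model-free agent: learns action values, never a dynamics model."""

      def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
          self.q = defaultdict(float)   # maps (state, action) -> estimated value
          self.actions = list(actions)
          self.alpha = alpha            # learning rate
          self.gamma = gamma            # discount factor
          self.epsilon = epsilon        # exploration probability

      def act(self, state):
          # Epsilon-greedy: explore occasionally, otherwise exploit estimates.
          if random.random() < self.epsilon:
              return random.choice(self.actions)
          return max(self.actions, key=lambda a: self.q[(state, a)])

      def update(self, state, action, reward, next_state):
          # Q-learning update: bootstrap from the best estimated next action,
          # without ever predicting what next_state will be.
          best_next = max(self.q[(next_state, a)] for a in self.actions)
          target = reward + self.gamma * best_next
          self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

  # Usage with a hypothetical Pong-like environment (assumed interface):
  # agent = QLearningAgent(actions=["up", "down", "stay"])
  # state = env.reset()
  # done = False
  # while not done:
  #     action = agent.act(state)
  #     next_state, reward, done = env.step(action)
  #     agent.update(state, action, reward, next_state)
  #     state = next_state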