User:IssaRice/AI safety/Comparison of terms related to agency

Revision as of 22:51, 2 September 2019

{| class="wikitable"
! Term !! Opposite
|-
| Agent ||
|-
| Optimizer, optimization process ||
|-
| consequentialist ||
|-
| expected utility maximizer ||
|-
| goal-directed, goal-based ||
|-
| pseudoconsequentialist ||
|}

examples to check against:

* humans
* evolution/natural selection
* bottlecap
* RL system playing Pong without an explicit model (see the sketch below)
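
The last example is the trickiest one to check terms against, because the agent improves its play without containing any representation of its environment. A minimal sketch of that model-free setup, assuming a two-position toy stand-in for Pong rather than the real game (the environment, reward structure, and REINFORCE-style update below are illustrative assumptions, not part of the original page):

<syntaxhighlight lang="python">
import numpy as np

# Toy stand-in for Pong: the "ball" appears in one of two positions (0 or 1),
# and the agent gets +1 reward for moving its paddle to the matching position.
# This environment is an illustrative assumption, not actual Pong.
rng = np.random.default_rng(0)

def play_episode(theta):
    """One-step 'episode': observe the ball position, pick a paddle action."""
    ball = rng.integers(2)                        # observation
    logits = theta[ball]                          # per-observation action logits
    probs = np.exp(logits) / np.exp(logits).sum() # softmax policy
    action = rng.choice(2, p=probs)
    reward = 1.0 if action == ball else 0.0
    return ball, action, probs, reward

# theta parameterizes only a policy (observation -> action probabilities);
# nothing in the program predicts or represents the environment's dynamics.
# Learning is REINFORCE: increase the log-probability of actions in
# proportion to the reward they earned.
theta = np.zeros((2, 2))
lr = 0.5
for _ in range(500):
    ball, action, probs, reward = play_episode(theta)
    grad_logp = -probs            # gradient of log softmax w.r.t. logits ...
    grad_logp[action] += 1.0      # ... is one-hot(action) minus probs
    theta[ball] += lr * reward * grad_logp

print(theta)  # each row ends up with a higher logit for the matching action
</syntaxhighlight>

Everything the agent learns lives in the policy parameters; there is no component that predicts future states, which is what "without an explicit model" points at.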