User:IssaRice/AI safety/Comparison of terms related to agency

Term                            | Opposite
--------------------------------|-----------
Agent                           |
Optimizer, optimization process |
consequentialist                |
expected utility maximizer      |
goal-directed, goal-based       | act-based?
pseudoconsequentialist          |

parameters to check for:

  • is it searching through a list of potential answers? (see the sketch after this list)
  • does it have an explicit model of the world, i.e. does it have counterfactuals? (see Drescher on subactivation)
  • can it be modeled as having a utility function?
  • can we take an intentional stance toward it? i.e., is it useful, for the purpose of predicting what it will do, to model it as having intentions?
  • is it solving some sort of optimization problem? (but what counts as an optimization problem?)
  • origin: was it itself produced by some sort of optimization process?
  • does it hit a small target, out of a large space of possibilities?
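A minimal sketch of the first and last criteria, in Python (the candidate space, score function, and lookup table here are made-up assumptions for illustration, not taken from this page): the first system explicitly searches a list of potential answers and lands on a small target (the scoring maximum) out of a large space of possibilities, while the second just replays a fixed mapping and so fails the search criterion even if its outputs happen to be useful.

```python
def search_based_answerer(candidates, score):
    """Optimizer-like by the first criterion: explicitly searches a space of
    candidate answers and returns the highest-scoring one."""
    best, best_score = None, float("-inf")
    for c in candidates:
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best


# A fixed mapping standing in for something like the bottlecap: no search,
# no candidate space, just a hard-wired response. (Toy keys/values.)
LOOKUP_TABLE = {"input_a": "answer_1", "input_b": "answer_2"}

def lookup_based_answerer(observation):
    """Not optimizer-like by the first criterion: it performs no search and
    simply replays the fixed mapping above."""
    return LOOKUP_TABLE.get(observation)


if __name__ == "__main__":
    # Hits a small target (the single maximum) out of a large space
    # (10,000 candidates), which is the last criterion in the list above.
    print(search_based_answerer(range(10_000), score=lambda x: -(x - 4321) ** 2))
    print(lookup_based_answerer("input_a"))
```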

examples to check against:

  • humans
  • evolution/natural selection
  • bottlecap
  • RL system playing Pong without an explicit model of its environment (see the sketch after this list)
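A toy sketch of what "without an explicit model" means for the criteria above, using tabular Q-learning in Python rather than the function-approximation setup an actual Pong agent would use (the action names and update constants are assumptions for illustration): the agent keeps only a value per (observation, action) pair and nudges it toward observed rewards; nowhere does it represent how the environment evolves, so there is nothing to run counterfactuals on, even though by the intentional-stance or utility-function criteria it can still look goal-directed.

```python
import random
from collections import defaultdict

ACTIONS = ["up", "down", "stay"]   # assumed toy action set for a Pong-like paddle
q_values = defaultdict(float)      # (observation, action) -> estimated value

def act(observation, epsilon=0.1):
    """Pick the highest-valued action for this observation (epsilon-greedy).
    No world model is consulted; this is just a learned lookup."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[(observation, a)])

def update(observation, action, reward, next_observation, alpha=0.5, gamma=0.9):
    """Model-free temporal-difference (Q-learning) update: the agent never
    predicts next_observation itself, it only reacts to it after the fact."""
    best_next = max(q_values[(next_observation, a)] for a in ACTIONS)
    q_values[(observation, action)] += alpha * (
        reward + gamma * best_next - q_values[(observation, action)]
    )
```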