User:IssaRice/AI safety/Comparison of terms related to agency

Term                            | Opposite
--------------------------------+------------
Agent                           |
Optimizer, optimization process |
consequentialist                |
expected utility maximizer      |
goal-directed, goal-based       | act-based?
pseudoconsequentialist          |

parameters to check for:

  • is it searching through a list of potential answers? (see the sketch after this list)
  • does it have an explicit model of the world? i.e., does it have counterfactuals? (see Drescher on subactivation)
  • can it be modeled as having a utility function?
  • can we take an intentional stance toward it? i.e., is it useful (so far as predicting what it will do is concerned) to model it as having intentions?
  • is it solving some sort of optimization problem? (but what counts as an optimization problem?)
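To make the "searching" and "utility function" parameters concrete, here is a minimal Python sketch (the candidate list, the comfort function, and both toy systems are invented for illustration): one system explicitly searches through a list of potential answers and scores them with a utility function, while the other is a fixed reflex that does neither.

```python
# Minimal sketch (illustrative names only): two toy systems checked
# against the parameters above.

def explicit_searcher(candidates, utility):
    """Searches through a list of potential answers and returns the one
    with the highest utility -- ticks the 'searching' and 'utility
    function' boxes, even though it has no model of the world."""
    best = None
    best_score = float("-inf")
    for c in candidates:          # explicit search over potential answers
        score = utility(c)        # explicit utility function
        if score > best_score:
            best, best_score = c, score
    return best


def bottlecap(_observation):
    """A bottlecap 'keeps water in the bottle' but does not search, has
    no model, and gains nothing from the intentional stance."""
    return "stay put"             # fixed behaviour, no optimization


if __name__ == "__main__":
    # Example: pick a thermostat setting by brute-force search.
    settings = [16, 18, 20, 22, 24]
    comfort = lambda t: -abs(t - 21)             # assumed utility: closeness to 21
    print(explicit_searcher(settings, comfort))  # -> 20
    print(bottlecap("water pressure"))           # -> 'stay put'
```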

examples to check against:

  • humans
  • evolution/natural selection
  • bottlecap
  • RL system playing Pong without an explicit model (see the sketch below)
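
For the last example, here is a minimal sketch of a model-free Q-learning update (the toy state/action interface and hyperparameters are assumptions, not an actual Pong implementation): the system improves its behaviour from observed rewards alone and never represents transition dynamics, so it fails the "explicit model / counterfactuals" check while still arguably being describable as maximizing expected reward.

```python
import random
from collections import defaultdict

# Minimal sketch of a model-free learner (assumed toy interface, not real Pong):
# it updates action values directly from observed rewards and never builds a
# transition model, so there is nothing in it to run counterfactuals on.

ACTIONS = ["up", "down", "stay"]
Q = defaultdict(float)                     # Q[(state, action)] -> value estimate
alpha, gamma, epsilon = 0.1, 0.99, 0.1     # assumed hyperparameters

def choose_action(state):
    """Epsilon-greedy over learned action values; no planning or search tree."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step: move the value estimate toward reward plus the
    discounted best next value. No learned model of how actions change state."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

Whether such a system "can be modeled as having a utility function" or deserves the intentional stance is exactly the kind of judgment call the parameter list above is meant to surface.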