User:IssaRice/AI safety/Comparison of terms related to agency
Revision as of 23:20, 2 September 2019
Term | Opposite
---|---
Agent |
Optimizer, optimization process |
consequentialist |
expected utility maximizer |
goal-directed, goal-based | act-based?
pseudoconsequentialist |
parameters to check for:
- is it searching through a list of potential answers?
- does it have an explicit model of the world? i.e. does it have counterfactuals? (see Drescher on subactivation)
- can it be modeled as having a utility function?
- can we take an intentional stance toward it? i.e., is it useful, for the purpose of predicting what it will do, to model it as having intentions?
- is it solving some sort of optimization problem? (but what counts as an optimization problem?)
- origin: was it itself produced by some sort of optimization process?
- does it hit a small target, out of a large space of possibilities?
- how many elements of the space of possibilities does it instantiate?
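The "small target, large space" criterion above can be made quantitative along the lines of Yudkowsky's proposal to measure optimization power in bits: roughly, the negative log of the fraction of possible outcomes at least as preferred as the outcome actually achieved. A minimal sketch, assuming a finite outcome space and a toy utility function (both hypothetical, for illustration only):

```python
import math

def optimization_power_bits(outcomes, achieved, utility):
    """Bits of optimization: -log2 of the fraction of outcomes
    that are at least as preferred as the achieved outcome."""
    at_least_as_good = sum(1 for o in outcomes if utility(o) >= utility(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# toy example: 1024 equally likely outcomes, utility = the outcome's index
outcomes = range(1024)
# hitting the single best outcome out of 1024 is 10 bits of optimization
print(optimization_power_bits(outcomes, achieved=1023, utility=lambda o: o))  # → 10.0
```

On this measure a system that lands anywhere in the space scores near 0 bits, so the criterion separates (say) a thermostat holding a narrow temperature band from a bottlecap doing nothing in particular.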
examples to check against:
- humans
- evolution/natural selection
- bottlecap
- RL system playing Pong without an explicit model
- tool AGI/CAIS
- task AGI
- KANSI
- targeting system on a rocket
- single-step filter
- chess-playing algorithm that just does tree search (e.g. alpha-beta pruning algorithm)
- a simple feed-forward neural network (e.g. one that recognizes MNIST digits)
- a thermostat
- a plant
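The two lists above form a natural comparison grid: each example can be checked against each parameter. A small sketch that builds the empty grid as a dict of dicts (parameter and example names are abbreviated here; entries start as None, i.e. undecided, and the one filled-in judgment is only illustrative):

```python
parameters = [
    "searches candidate answers",
    "explicit world model",
    "modelable via utility function",
    "intentional stance useful",
    "solves an optimization problem",
    "produced by an optimization process",
    "hits small target in large space",
]

examples = [
    "humans", "evolution/natural selection", "bottlecap",
    "model-free RL playing Pong", "tool AGI/CAIS", "task AGI", "KANSI",
    "rocket targeting system", "single-step filter",
    "tree-search chess algorithm", "feed-forward MNIST net",
    "thermostat", "plant",
]

# grid[example][parameter] = True / False / None (undecided)
grid = {e: {p: None for p in parameters} for e in examples}

# illustrative judgment: a bottlecap does not hit a small target in a large space
grid["bottlecap"]["hits small target in large space"] = False
```

Filling in the remaining cells would make disagreements between the candidate definitions explicit: two terms come apart exactly where some example gets True under one parameter and False under another.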