This page has moved to https://wiki.issarice.com/wiki/Comparison_of_terms_related_to_agency

{| class="wikitable"
! Term !! Opposite
|-
| Agent ||
|-
| Optimizer, optimization process ||
|-
| consequentialist ||
|-
| expected utility maximizer ||
|-
| goal-directed, goal-based || act-based?
|-
| pseudoconsequentialist ||
|}

parameters to check for (see the sketch after this list):

* is it searching through a list of potential answers?
* does it have an explicit model of the world? i.e. it has counterfactuals (see Drescher on subactivation)
* can it be modeled as having a utility function?
* can we take an intentional stance toward it? i.e., is it useful (so far as predicting what it will do is concerned) to model it as having intentions?
* is it solving some sort of optimization problem? (but what counts as an optimization problem?)
* origin: was it itself produced by some sort of optimization process?
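
As a rough illustration of a few of these checks, here is a minimal Python sketch (all names are hypothetical, invented for this page) contrasting a system that explicitly searches a list of potential answers and scores them with a utility function against a system that produces its behavior from a fixed lookup table, with no search, no model, and no utility function.

<syntaxhighlight lang="python">
# Hypothetical sketch contrasting two toy systems:
#  - ExplicitMaximizer: searches an explicit list of potential answers and
#    scores them with an explicit utility function (checks 1, 3, 5 above).
#  - ReflexTable: a hard-coded observation-to-action mapping with no search,
#    no model, and no utility function (closer to the "bottlecap" or
#    "RL policy without an explicit model" end of the spectrum).

def utility(observation: int, action: str) -> float:
    """Toy utility: prefer the action whose direction matches the observation."""
    direction = {"left": -1, "stay": 0, "right": 1}
    return -abs(observation - direction[action])

class ExplicitMaximizer:
    """Searches through candidate actions and picks the utility-maximizing one."""
    def __init__(self, actions):
        self.actions = actions  # explicit list of potential answers

    def act(self, observation: int) -> str:
        return max(self.actions, key=lambda a: utility(observation, a))

class ReflexTable:
    """Fixed lookup from observation to action; nothing is searched or scored."""
    def __init__(self, table):
        self.table = table

    def act(self, observation: int) -> str:
        return self.table.get(observation, "stay")

if __name__ == "__main__":
    maximizer = ExplicitMaximizer(["left", "stay", "right"])
    reflex = ReflexTable({-1: "left", 0: "stay", 1: "right"})
    for obs in (-1, 0, 1):
        print(obs, maximizer.act(obs), reflex.act(obs))
</syntaxhighlight>

On these toy observations the two systems pick identical actions, even though only the first is naturally described as searching, optimizing, or maximizing a utility function; the checks above probe internal structure and predictive usefulness rather than behavior alone.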

examples to check against:

* humans
* evolution/natural selection
* bottlecap
* RL system playing Pong without an explicit model