User:IssaRice/AI safety/Distribution of AI failures leading up to AGI
Distribution of AI failures leading up to AGI refers to the number and severity of AI-related failures in the period leading up to AGI.
Each "failure event" has several parameters: how strong/weak the AI is, how easy/hard it is to anticipate or detect problems in advance.
The inconspicuous failure hypothesis states that catastrophic failure will be difficult to anticipate. Phrased in terms of the distribution of AI failures, this says that there will exist failures involving strong AI that are hard to detect in advance. A subcase of this hypothesis states that there will also be failures involving weak AI that are easy to detect (this is the especially worrying case, where familiarity with the easier failures leads to complacency).
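As a rough illustration only (the field names, scales, and thresholds below are hypothetical and not part of the original claim), a failure event can be pictured as a record with a strength, detectability, and severity value, and the hypothesis then becomes an existence claim over the collection of such events:

<pre>
from dataclasses import dataclass

@dataclass
class FailureEvent:
    """One AI-related failure in the run-up to AGI (illustrative sketch)."""
    ai_strength: float    # 0.0 = very weak AI involved, 1.0 = AGI-level (hypothetical scale)
    detectability: float  # 0.0 = very hard to anticipate/detect in advance, 1.0 = very easy
    severity: float       # 0.0 = minor, 1.0 = catastrophic (the "how severe" parameter above)

def inconspicuous_failure_hypothesis(events: list[FailureEvent]) -> bool:
    """There exists a failure involving strong AI that is hard to detect in advance."""
    return any(e.ai_strength > 0.8 and e.detectability < 0.2 for e in events)

def complacency_subcase(events: list[FailureEvent]) -> bool:
    """Additionally, there exist failures involving weak AI that are easy to detect,
    so familiarity with the easy failures can breed complacency about the hard ones."""
    easy_weak_failures = any(e.ai_strength < 0.2 and e.detectability > 0.8 for e in events)
    return inconspicuous_failure_hypothesis(events) and easy_weak_failures
</pre>

The particular cutoffs (0.8, 0.2) are arbitrary; the point of the sketch is only the logical shape of the hypothesis, namely an existential claim about the distribution of failures rather than a claim about every failure.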