Lower semicomputable function


A function <math>f : X \to \mathbb{R}</math> is lower semicomputable iff there exists a computable function <math>g : X \times \mathbb{N} \to \mathbb{Q}</math> such that:

  • for all <math>x \in X</math> and all natural numbers <math>n</math>, we have <math>g(x,n+1) \geq g(x,n)</math>
  • for all <math>x \in X</math> we have <math>\lim_{n \to \infty} g(x,n) = f(x)</math>

The way to think of this is that given some fixed <math>x \in X</math>, the values <math>g(x,0), g(x,1), g(x,2), \ldots</math> are successive approximations of the value <math>f(x)</math>, and we have <math>g(x,0) \leq g(x,1) \leq g(x,2) \leq \cdots \leq f(x)</math>.
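
To make the definition concrete, here is a minimal sketch (not from the original article). It assumes <math>X</math> is the set of nonnegative rationals and takes <math>f(x) = e^x</math>: for <math>x \geq 0</math> the partial sums of the Taylor series are rational, nondecreasing in <math>n</math>, and converge to <math>f(x)</math> from below, so they serve as a witness <math>g</math>. The function name <code>g</code> and the choice of example are illustrative assumptions.

<syntaxhighlight lang="python">
from fractions import Fraction

def g(x: Fraction, n: int) -> Fraction:
    """Illustrative witness g(x, n): the n-th partial sum of the Taylor
    series of e^x.  For x >= 0 every term x^k / k! is a nonnegative
    rational, so g(x, 0) <= g(x, 1) <= ... and lim_n g(x, n) = e^x."""
    total = Fraction(0)
    term = Fraction(1)             # k = 0 term: x^0 / 0!
    for k in range(n + 1):
        total += term
        term = term * x / (k + 1)  # next term: x^(k+1) / (k+1)!
    return total

# Successive lower approximations of f(1/2) = e^(1/2) = 1.6487...
x = Fraction(1, 2)
for n in range(6):
    print(n, g(x, n), float(g(x, n)))
</syntaxhighlight>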

The definition says nothing about the rate of convergence, so for instance if we want <math>g(x,n)</math> to be within <math>1/1000</math> of the value of <math>f(x)</math>, there is in general no way to find a large enough <math>n</math>.
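
A standard illustration of this point (not from the original article) is the indicator function of a semi-decidable property: <math>f(x) = 1</math> if an unbounded search for <math>x</math> ever succeeds, and <math>f(x) = 0</math> otherwise. The sketch below is hedged accordingly; <code>halts_within</code> is a hypothetical stand-in for a bounded search such as "program <math>x</math> halts within <math>n</math> steps".

<syntaxhighlight lang="python">
def halts_within(x: int, n: int) -> bool:
    """Hypothetical stand-in for the bounded, decidable test
    'the search for x succeeds within n steps'."""
    return n >= x  # toy placeholder so the sketch runs

def g(x: int, n: int) -> int:
    # Nondecreasing in n: once the bounded search succeeds it stays 1,
    # so g(x, n) converges from below to f(x), which is 1 if the search
    # ever succeeds and 0 otherwise.
    return 1 if halts_within(x, n) else 0
</syntaxhighlight>

For a genuinely undecidable search (such as the halting problem), a procedure that turned the tolerance <math>1/1000</math> into a large enough <math>n</math> would let us read off whether <math>f(x)</math> is <math>0</math> or <math>1</math>, i.e. decide the search; this is why no computable rate of convergence can be promised in general.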