= Universal approximation theorem
{wiki=Universal_approximation_theorem}
The Universal Approximation Theorem is a foundational result in neural networks and approximation theory. In its classical form, it states that a feedforward neural network with a single hidden layer and a finite number of neurons can approximate any continuous function on a compact subset of \(\mathbb{R}^n\) to any desired degree of accuracy, provided that the activation function is non-constant, bounded, and continuous. The theorem guarantees that such an approximating network exists; it does not say how many neurons are needed or how to find the weights.
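The theorem can be illustrated numerically. The sketch below (a minimal illustration, not part of the theorem itself) approximates \(\sin(x)\) on the compact interval \([-3, 3]\) with a single hidden layer of tanh neurons: the hidden weights and biases are drawn at random (an arbitrary choice for illustration), and only the output-layer weights are fit, here by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on a compact interval.
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x)

# One hidden layer of 100 tanh neurons. The hidden weights and biases
# are random (illustrative values); only the output layer is trained.
n_hidden = 100
w = rng.uniform(-3.0, 3.0, n_hidden)
b = rng.uniform(-3.0, 3.0, n_hidden)
hidden = np.tanh(np.outer(x, w) + b)            # shape (200, 100)

# Fit output weights by least squares.
c, *_ = np.linalg.lstsq(hidden, y, rcond=None)
approx = hidden @ c

max_err = np.max(np.abs(approx - y))
print(max_err)  # small maximum error on the sampled grid
```

Increasing the number of hidden neurons drives the achievable error down, which is the qualitative content of the theorem; the bounded, non-constant, continuous tanh activation satisfies its hypotheses.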