For the formalism used to approximate the influence of an extracellular electrical field on neurons, see
activating function. For a linear system’s transfer function, see
transfer function.
The activation function of a node in an
artificial neural network is a function that calculates the output of the node based on its individual inputs and their weights. Nontrivial problems can be solved using only a few nodes if the activation function is nonlinear.[1] Modern activation functions include the smooth version of the
ReLU, the GELU, which was used in the 2018
BERT model,[2] the logistic (
sigmoid) function used in the 2012 speech recognition model developed by Hinton et al.,[3] the
ReLU used in the 2012
AlexNet computer vision model[4][5] and in the 2015
ResNet model.
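As a concrete illustration (a minimal sketch, not part of the article itself), the activation functions named above can be written in a few lines of Python/NumPy; the final lines show a node applying an activation to the weighted sum of its inputs, with the weights, inputs, and bias chosen arbitrarily:

    import numpy as np

    def sigmoid(x):
        # Logistic function, maps any real input into (0, 1)
        return 1.0 / (1.0 + np.exp(-x))

    def relu(x):
        # Rectified linear unit: identity for positive inputs, zero otherwise
        return np.maximum(0.0, x)

    def gelu(x):
        # Gaussian Error Linear Unit, tanh approximation (Hendrycks & Gimpel, 2016)
        return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

    # A node's output: the activation applied to the weighted sum of its inputs
    inputs  = np.array([0.5, -1.2, 3.0])   # arbitrary example values
    weights = np.array([0.4,  0.1, -0.6])
    bias    = 0.2
    print(relu(np.dot(weights, inputs) + bias))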
Comparison of activation functions
Aside from their empirical performance, activation functions also have different mathematical properties:
Nonlinear
When the activation function is non-linear, a two-layer neural network can be proven to be a universal function approximator.[6] This is known as the universal approximation theorem. The identity activation function does not satisfy this property: when multiple layers use the identity activation function, the entire network is equivalent to a single-layer model.
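A minimal numerical sketch of this equivalence (illustrative only; the random weight matrices are arbitrary): with the identity activation, composing two linear layers yields exactly the same map as a single layer whose weight matrix is the product of the two.

    import numpy as np

    rng = np.random.default_rng(0)
    x  = rng.normal(size=(4,))
    W1 = rng.normal(size=(5, 4))
    W2 = rng.normal(size=(3, 5))

    # Two "layers" with the identity activation ...
    two_layer = W2 @ (W1 @ x)
    # ... equal one layer whose weight matrix is the product W2 @ W1
    one_layer = (W2 @ W1) @ x

    print(np.allclose(two_layer, one_layer))  # True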
Range
When the range of the activation function is finite, gradient-based training methods tend to be more stable, because pattern presentations significantly affect only limited weights. When the range is infinite, training is generally more efficient because pattern presentations significantly affect most of the weights. In the latter case, smaller
learning rates are typically necessary.[citation needed]
Continuously differentiable
This property is desirable for enabling gradient-based optimization methods (ReLU is not continuously differentiable and has some issues with gradient-based optimization, but it is still usable in practice). The binary step activation function is not differentiable at 0, and its derivative is 0 for all other values, so gradient-based methods can make no progress with it.[7]
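A small sketch of why the binary step defeats gradient-based methods (illustrative, with arbitrary sample points): its numerical derivative is zero everywhere away from the discontinuity, so the gradient carries no learning signal.

    import numpy as np

    def binary_step(x):
        # 1 for non-negative input, 0 otherwise; not differentiable at 0
        return np.where(x >= 0, 1.0, 0.0)

    # Central-difference derivative away from 0: always 0
    x = np.array([-2.0, -0.5, 0.5, 2.0])
    h = 1e-6
    print((binary_step(x + h) - binary_step(x - h)) / (2 * h))  # [0. 0. 0. 0.]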
These properties do not decisively influence performance, nor are they the only mathematical properties that may be useful. For instance, the strictly positive range of the softplus makes it suitable for predicting variances in
variational autoencoders.
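For instance, a minimal sketch of the softplus property mentioned above (the sample values are arbitrary): softplus maps any real-valued network output to a strictly positive number, which is what a variance parameter requires.

    import numpy as np

    def softplus(x):
        # log(1 + exp(x)) computed stably; strictly positive for all real x
        return np.logaddexp(0.0, x)

    raw = np.array([-3.0, 0.0, 2.5])   # unconstrained network outputs
    print(softplus(raw))               # strictly positive values, usable as variances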
An activation function $f$ is saturating if $\lim_{|v|\to\infty}|\nabla f(v)|=0$. It is nonsaturating if it is not saturating. Non-saturating activation functions, such as
ReLU, may be better than saturating activation functions, because they are less likely to suffer from the
vanishing gradient problem.[8]
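A brief numerical illustration (sample points chosen arbitrarily): the sigmoid's derivative collapses toward zero for large inputs, while the ReLU's derivative stays at 1 for any positive input.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = np.array([0.0, 5.0, 20.0])
    # Sigmoid saturates: its derivative s(x) * (1 - s(x)) shrinks toward 0 as x grows
    print(sigmoid(x) * (1 - sigmoid(x)))   # roughly [0.25, 6.6e-3, 2.1e-9]
    # ReLU does not saturate for positive inputs: its derivative stays at 1
    print(np.where(x > 0, 1.0, 0.0))       # [0., 1., 1.]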
In
biologically inspired neural networks, the activation function is usually an abstraction representing the rate of
action potential firing in the cell.[9] In its simplest form, this function is
binary—that is, either the
neuron is firing or not. Neurons also cannot fire faster than a certain rate, motivating
sigmoid activation functions whose range is a finite interval.
A linear activation with a positive slope, on the other hand, may reflect the increase in firing rate that occurs as input current increases. Such a function would be of the form $\phi(v)=\mu v$, where $\mu$ is the slope.
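The three firing-rate abstractions described above can be sketched as follows (an illustration only; the input range and slope value are arbitrary):

    import numpy as np

    v = np.linspace(-5.0, 5.0, 11)          # input current

    # Binary firing: the neuron either fires or it does not
    binary = np.where(v >= 0, 1.0, 0.0)

    # Bounded (sigmoid) rate: the firing rate cannot exceed a maximum
    bounded = 1.0 / (1.0 + np.exp(-v))

    # Linear rate phi(v) = mu * v: rate grows with input current (slope mu chosen arbitrarily)
    mu = 0.5
    linear = mu * v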
A special class of activation functions known as
radial basis functions (RBFs) are used in
RBF networks, which are extremely efficient as universal function approximators. These activation functions can take many forms; they are typically functions of the distance between the input and a learned center vector, with the Gaussian being the most common choice (see the sketch below).
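A minimal sketch of a Gaussian radial basis function (the center, width, and input are arbitrary choices for illustration):

    import numpy as np

    def gaussian_rbf(x, center, sigma=1.0):
        # Response peaks at the center and decays with distance from it
        return np.exp(-np.sum((x - center) ** 2) / (2.0 * sigma ** 2))

    x = np.array([0.5, -0.3])
    c = np.array([0.0,  0.0])
    print(gaussian_rbf(x, c))   # close to 1 near the center, near 0 far away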
Folding activation functions are extensively used in the
pooling layers in
convolutional neural networks, and in output layers of multiclass classification networks. These activations perform aggregation over the inputs, such as taking the
mean,
minimum or
maximum. In multiclass classification the
softmax activation is often used.
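As an illustration (inputs chosen arbitrarily), folding activations reduce a whole vector of inputs to summary values, and softmax turns such a vector into a probability distribution over classes:

    import numpy as np

    def softmax(z):
        # Subtract the maximum for numerical stability; outputs sum to 1
        z = z - np.max(z)
        e = np.exp(z)
        return e / np.sum(e)

    inputs = np.array([1.0, 3.0, -2.0, 0.5])

    # Folding activations aggregate a whole set of inputs into summary values
    print(np.mean(inputs), np.max(inputs), np.min(inputs))

    # Softmax output for multiclass classification: a probability distribution
    print(softmax(inputs))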
Table of activation functions
The following table compares the properties of several activation functions that are functions of one
fold x from the previous layer or layers:
^ For instance, one index could be iterating through the number of kernels of the previous neural network layer while the other iterates through the number of kernels of the current layer.
In
quantum neural networks programmed on gate-model
quantum computers, based on quantum perceptrons instead of variational quantum circuits, the non-linearity of the activation function can be implemented without the need to measure the output of each perceptron at each layer. The quantum properties loaded within the circuit, such as superposition, can be preserved by creating the Taylor series of the argument computed by the perceptron itself, with suitable quantum circuits computing the powers up to a desired approximation degree. Because of the flexibility of such quantum circuits, they can be designed to approximate any arbitrary classical activation function.[20]
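As a purely classical illustration of the Taylor-series idea (not the quantum-circuit construction of [20]), an activation such as tanh can be approximated by a truncated series of powers of its argument; the powers are what suitable quantum circuits would compute up to the desired degree:

    import numpy as np

    def tanh_taylor(v, degree=7):
        # Truncated Maclaurin series of tanh: v - v^3/3 + 2v^5/15 - 17v^7/315 + ...
        coeffs = {1: 1.0, 3: -1.0 / 3.0, 5: 2.0 / 15.0, 7: -17.0 / 315.0}
        return sum(c * v ** k for k, c in coeffs.items() if k <= degree)

    v = 0.4
    print(tanh_taylor(v), np.tanh(v))   # the two agree closely for small |v|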
^ Hinkelmann, Knut. "Neural Networks, p. 7" (PDF). University of Applied Sciences Northwestern Switzerland. Archived from the original (PDF) on 2018-10-06. Retrieved 2018-10-06.
^ a b c Hendrycks, Dan; Gimpel, Kevin (2016). "Gaussian Error Linear Units (GELUs)". arXiv:1606.08415 [cs.LG].
^ Glorot, Xavier; Bordes, Antoine; Bengio, Yoshua (2011). "Deep sparse rectifier neural networks" (PDF). International Conference on Artificial Intelligence and Statistics.
^ Clevert, Djork-Arné; Unterthiner, Thomas; Hochreiter, Sepp (2015-11-23). "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)". arXiv:1511.07289 [cs.LG].