Journal Article
Neural Processing Letters
IF: 2.891
Q2 (54/137)

Piecewise polynomial activation functions for feedforward neural networks

E. López-Rubio, F. Ortega-Zamorano, E. Domínguez, J. Muñoz-Pérez

Neural Processing Letters, 2019, Vol. 50: 121–147
Citations: 9
Views: 1023
Downloads: N/A
Altmetric Score: N/A
Published: 1/8/2019
Abstract

Since the origins of artificial neural network research, many models of feedforward networks have been proposed. This paper presents an algorithm which adapts the shape of the activation function to the training data, so that it is learned along with the connection weights. The activation function is interpreted as a piecewise polynomial approximation to the distribution function of the argument of the activation function. An online learning procedure is given, and it is formally proved that it makes the training error decrease or stay the same except for extreme cases. Moreover, the model is computationally simpler than standard feedforward networks, so that it is suitable for implementation on FPGAs and microcontrollers. However, the present proposal is limited to two-layer, one-output-neuron architectures due to the lack of differentiability of the learned activation functions with respect to the node locations. Experimental results are provided, which show the performance of the proposed algorithm for classification and regression applications.
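As an illustration of the idea described in the abstract (not the authors' exact algorithm), the sketch below shows an activation function defined as a piecewise polynomial over fixed node locations, here in the simplest piecewise-linear case. The node values, which determine the shape, are the quantities that could be learned alongside the connection weights; the class name and parameters are hypothetical.

```python
import numpy as np

class PiecewiseLinearActivation:
    """Illustrative sketch: an activation function given as a piecewise
    polynomial (piecewise linear here) whose shape is determined by
    learnable values at fixed node locations."""

    def __init__(self, nodes, values):
        # nodes: increasing x-locations of the pieces' endpoints
        # values: activation value at each node (the learnable shape)
        self.nodes = np.asarray(nodes, dtype=float)
        self.values = np.asarray(values, dtype=float)

    def __call__(self, x):
        # Linear interpolation between nodes; constant beyond the range,
        # giving a saturating, distribution-function-like shape.
        return np.interp(x, self.nodes, self.values)

# Example: a sigmoid-like shape approximating a distribution function
act = PiecewiseLinearActivation(
    nodes=[-2.0, -1.0, 0.0, 1.0, 2.0],
    values=[0.0, 0.1, 0.5, 0.9, 1.0],
)
print(act(np.array([-3.0, 0.0, 0.5, 3.0])))  # saturates outside [-2, 2]
```

Because each piece is a low-degree polynomial over a small set of nodes, evaluating such an activation needs only comparisons and a few multiply-adds, which is consistent with the paper's claim of suitability for FPGAs and microcontrollers.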

Keywords
Activation Functions
Piecewise Polynomials
Feedforward Neural Networks
Function Approximation
Deep Learning
Publication Information
Volume: 50
Pages: 121–147
Published: 1/8/2019