Conference Paper
Hardware Implementation
Impact Factor: 0

Deep Neural Network Architecture implementation on FPGAs using a Layer Multiplexing Scheme

F. Ortega-Zamorano, J. M. Jerez, G. Juárez, L. Franco

Advances in Intelligent Systems and Computing, 2016, Vol. 474: 79-86
Citations: 2
Views: 1811
Downloads: N/A
Altmetric Score: N/A
Published: 1/6/2016
Abstract

In recent years, predictive models based on Deep Learning strategies have achieved enormous success in several domains, including pattern recognition tasks, language translation, and software design. Deep learning uses a combination of techniques to achieve its prediction accuracy, but essentially all existing approaches are based on multi-layer neural networks with deep architectures, i.e., several layers of processing units containing a large number of neurons. As the simulation of large networks requires heavy computational power, GPUs and cluster-based computation strategies have been used successfully. In this work, a layer multiplexing scheme is presented that permits the simulation of deep neural networks on FPGA boards. As a demonstration of the usefulness of the scheme, deep architectures trained by the classical Back-Propagation algorithm are simulated on FPGA boards and compared to standard implementations, showing the advantages in computation speed of the proposed scheme.
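The core idea of layer multiplexing can be sketched in software: instead of instantiating dedicated hardware for every layer, a single physical layer block is reused sequentially, with each logical layer's weights loaded in turn. The following is a minimal illustrative sketch in Python, not the authors' FPGA implementation; the layer sizes, sigmoid activation, and function names are assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    # Common neuron activation; the paper's exact activation is not assumed here.
    return 1.0 / (1.0 + np.exp(-x))

def layer_block(x, W, b):
    # The single shared processing unit: one layer's forward pass.
    # On an FPGA this would be one fixed block of multiply-accumulate hardware.
    return sigmoid(W @ x + b)

def forward_multiplexed(x, weights, biases):
    # Time-multiplex the one layer block over all logical layers:
    # each iteration "loads" the next layer's parameters from memory
    # and reuses the same compute unit.
    for W, b in zip(weights, biases):
        x = layer_block(x, W, b)
    return x

# Hypothetical deep architecture: 4 inputs, two hidden layers of 8, 3 outputs.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

out = forward_multiplexed(rng.standard_normal(4), weights, biases)
print(out.shape)  # one output per unit in the final layer
```

Because only one layer block exists in hardware, resource usage stays roughly constant as the network deepens, at the cost of processing layers sequentially rather than in a pipeline.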

Keywords
Deep Neural Networks
FPGA
Layer Multiplexing
Hardware Implementation
Resource Optimization
Neural Network Architectures
Publication Information
Volume: 474
Pages: 79-86
Published: 1/6/2016
Impact Metrics
Quartile: Conference