F. Ortega-Zamorano, J. M. Jerez, G. Juárez, J. O. Pérez, L. Franco


This work presents a high-precision FPGA implementation of neural network activation functions. The approach achieves superior numerical accuracy while maintaining computational efficiency, addressing the central challenge of implementing non-linear functions with fixed-point arithmetic in hardware.
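To illustrate the general problem space (not the authors' specific method, which the abstract does not detail), the sketch below shows one common way non-linear activations are realized in fixed-point hardware: a block-RAM-style lookup table for the sigmoid with linear interpolation between entries. All parameters here (Q-format width, table size, input range) are illustrative assumptions.

```python
import math

FRAC_BITS = 12            # assumed fractional bits of the fixed-point format
SCALE = 1 << FRAC_BITS    # 4096: value of 1.0 in this Q-format

def to_fixed(x: float) -> int:
    """Convert a float to fixed point by scaling and rounding."""
    return int(round(x * SCALE))

def to_float(q: int) -> float:
    """Convert a fixed-point value back to a float."""
    return q / SCALE

# Precomputed sigmoid table over [-8, 8), 256 entries, as it might
# be stored in FPGA block RAM (sizes chosen for illustration only).
LUT_SIZE = 256
X_MIN, X_MAX = -8.0, 8.0
STEP = (X_MAX - X_MIN) / LUT_SIZE
LUT = [to_fixed(1.0 / (1.0 + math.exp(-(X_MIN + i * STEP))))
       for i in range(LUT_SIZE)]

def fixed_sigmoid(q_x: int) -> int:
    """Approximate sigmoid of a fixed-point input via the LUT,
    with linear interpolation between adjacent entries."""
    x = to_float(q_x)
    if x < X_MIN:                  # saturate below the table range
        return 0
    if x >= X_MAX - STEP:          # saturate above the table range
        return LUT[-1]
    pos = (x - X_MIN) / STEP
    i = int(pos)
    frac = pos - i
    # Interpolate between neighbouring table entries.
    return int(round(LUT[i] + frac * (LUT[i + 1] - LUT[i])))

print(to_float(fixed_sigmoid(to_fixed(0.0))))   # close to sigmoid(0) = 0.5
```

The accuracy/cost trade-off in such designs comes from the table size and the word width; interpolation recovers much of the precision lost to a coarse table, which is one reason high-precision fixed-point activation functions remain an active design problem.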