Status | Published
A 218 GOPS neural network accelerator based on a novel cost-efficient surrogate gradient scheme for pattern classification | |
Siddique, Ali1,2; Iqbal, Muhammad Azhar3; Aleem, Muhammad4; Islam, Muhammad Arshad4 | |
2023-04-17 | |
Source Publication | Microprocessors and Microsystems |
ISSN | 0141-9331 |
Volume | 99
Pages | 104831
Abstract | The accuracy and hardware efficiency of a neural system depend critically on the choice of activation function. The rectified linear unit (ReLU) is a contemporary activation function that yields high accuracy and allows the construction of efficient neural chips, but it produces many dead neurons, especially at the output layer. This problem is more pronounced in the case of multichannel, multiclass classification, because ReLU cancels out negative values altogether, so the corresponding values cannot be successfully backpropagated. This phenomenon is referred to as the dying ReLU problem. In this article, we present a novel ‘surrogate gradient’ learning scheme that solves both the vanishing gradient and the dying ReLU problems. To the best of our knowledge, this is the first learning scheme that enables the use of ReLU for all network layers while solving the dying ReLU problem. We also present a high-performance inference engine that uses ReLU-based actuators for all network layers in order to achieve high hardware efficiency. The design is well suited to online learning as well, since the derivative of the activation is a constant and can be implemented using low-complexity components. The proposed technique significantly outperforms various contemporary schemes on the CIFAR-10 dataset, and yields about 98.39% accuracy on the MNIST dataset while using fewer than 159k synapses. Moreover, the proposed hardware implementation performs about 218 giga operations per second (GOPS) while consuming only about 3.95 slice registers and 25.89 slice look-up tables per synapse on a low-end Virtex-6 FPGA. The system operates at a clock frequency of 93.2 MHz.
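The dying-ReLU effect described in the abstract can be illustrated in a few lines. The constant-derivative surrogate below is a plausible stand-in sketched from the abstract's description ("the derivative of the activation is equal to a constant"); the paper's exact formulation is not reproduced in this record.

```python
# Minimal sketch of the dying-ReLU problem and a constant-derivative
# surrogate gradient, assumed from the abstract (not the paper's exact scheme).

def relu(x):
    return max(x, 0.0)

def relu_grad(x):
    # True ReLU derivative: zero for negative pre-activations,
    # so those neurons receive no gradient and "die".
    return 1.0 if x > 0 else 0.0

def surrogate_grad(x, c=1.0):
    # Surrogate with a constant derivative everywhere: gradients keep
    # flowing through negative pre-activations, and a constant is cheap
    # to realise in hardware (no comparator needed in the backward path).
    return c

pre = [-2.0, -0.5, 0.3, 1.7]        # pre-activations of an output layer
upstream = [1.0, 1.0, 1.0, 1.0]     # gradient arriving from the loss

dead = [u * relu_grad(x) for u, x in zip(upstream, pre)]
alive = [u * surrogate_grad(x) for u, x in zip(upstream, pre)]
# dead  -> [0.0, 0.0, 1.0, 1.0]  (two neurons receive no update)
# alive -> [1.0, 1.0, 1.0, 1.0]  (every neuron keeps learning)
```

Because the surrogate derivative is a constant, the backward path needs only a scaling rather than a per-neuron comparison, which is consistent with the low-complexity online-learning claim in the abstract.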
Keyword | Activation Function ; Artificial Neural Networks (ANNs) ; Deep Learning (DL) ; Dying ReLU ; Field Programmable Gate Arrays (FPGAs) ; Giga Operations Per Second (GOPS) ; Surrogate Gradient
DOI | 10.1016/j.micpro.2023.104831 |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science ; Engineering |
WOS Subject | Computer Science, Hardware & Architecture ; Computer Science, Theory & Methods ; Engineering, Electrical & Electronic |
WOS ID | WOS:000989325300001 |
Publisher | ELSEVIER, RADARWEG 29, 1043 NX AMSTERDAM, NETHERLANDS |
Scopus ID | 2-s2.0-85163576011 |
Document Type | Journal article |
Collection | DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING ; THE STATE KEY LABORATORY OF ANALOG AND MIXED-SIGNAL VLSI (UNIVERSITY OF MACAU)
Corresponding Author | Iqbal, Muhammad Azhar |
Affiliation | 1. State Key Laboratory of Analog and Mixed Signal VLSI (AMSV), University of Macau, Taipa, 999078, Macao ; 2. Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, 999078, Macao ; 3. School of Computing and Communications, Lancaster University, Lancaster, LA1 4YW, United Kingdom ; 4. Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, 44000, Pakistan
First Author Affiliation | University of Macau; Faculty of Science and Technology
Recommended Citation GB/T 7714 | Siddique, Ali, Iqbal, Muhammad Azhar, Aleem, Muhammad, et al. A 218 GOPS neural network accelerator based on a novel cost-efficient surrogate gradient scheme for pattern classification[J]. Microprocessors and Microsystems, 2023, 99, 104831.
APA | Siddique, Ali, Iqbal, Muhammad Azhar, Aleem, Muhammad, & Islam, Muhammad Arshad (2023). A 218 GOPS neural network accelerator based on a novel cost-efficient surrogate gradient scheme for pattern classification. Microprocessors and Microsystems, 99, 104831.
MLA | Siddique, Ali, et al. "A 218 GOPS neural network accelerator based on a novel cost-efficient surrogate gradient scheme for pattern classification". Microprocessors and Microsystems 99 (2023): 104831.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.