An Analysis of Operations Modification in Deep Neural Network in Hardware Perspective
19 Jan 2018
Reading time ~1 minute


As convolutional neural networks (CNNs) achieve ever better classification performance on benchmarks such as the ImageNet dataset, many state-of-the-art architectures have been proposed, including AlexNet, GoogLeNet, ResNet, and Xception. However, these models focus primarily on classification accuracy, so their structures grow deeper and deeper, which places a heavy burden on hardware implementation. In this project, we surveyed and reviewed several well-known CNN architectures and ran experiments to measure the power consumption of each one. From these experiments, we summarized the top-1 accuracy, top-5 accuracy, number of parameters, GFLOPs, power, and inference time in a table, and drew charts to analyze the advantages and drawbacks of each architecture. Finally, we surveyed some hardware-friendly designs that can help in implementing these CNN architectures in hardware.
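The GFLOPs column in such a table is typically built up from per-layer operation counts. As a rough sketch (our own illustration, not the project's actual benchmark code), the parameter and FLOP cost of a single convolutional layer can be estimated like this:

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Return (params, flops) for a k x k convolution.

    params: one k*k*c_in kernel per output channel, plus a bias term.
    flops:  each output element needs k*k*c_in multiply-adds,
            counted here as 2 operations (one mul + one add).
    """
    params = c_out * (k * k * c_in + 1)
    flops = 2 * k * k * c_in * c_out * h_out * w_out
    return params, flops

# Example: an AlexNet-style first conv layer (11x11 kernel, stride 4,
# 3 -> 64 channels, 55x55 output feature map on a 224x224 input).
params, flops = conv2d_cost(c_in=3, c_out=64, k=11, h_out=55, w_out=55)
print(params, flops / 1e9)  # ~23k parameters, ~0.14 GFLOPs
```

Summing this over every layer of a network gives its total GFLOPs; note that conventions differ (some papers count a multiply-add as one operation rather than two), so reported numbers can vary by a factor of two.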