Quantization of Constrained Processor Data Paths Applied to Convolutional Neural Networks

Authors: de Bruin, B. and Zivkovic, Z. and Corporaal, H.

Abstract: Artificial Neural Networks (NNs) can be used effectively to solve many classification and regression problems, and deliver state-of-the-art performance in the application domains of natural language processing (NLP) and computer vision (CV). However, the tremendous data movement and heavy convolutional workload of these networks hamper large-scale mobile and embedded productization. These models are therefore generally mapped to energy-efficient accelerators without floating-point support. Weight and data quantization is an effective way to deploy high-precision models on efficient integer-based platforms. In this paper, a quantization method for platforms without wide accumulation registers is proposed. Two constraints that maximize the bit width of weights and input data for a given accumulator size are introduced. These constraints exploit knowledge about the weight and data distributions of individual layers. Using these constraints, we propose a layer-wise quantization heuristic to find a good fixed-point approximation of a network. To reduce the number of configurations to consider, only solutions that fully utilize the available accumulator bits are tested. We demonstrate that 16-bit accumulators can achieve a Top-1 classification accuracy within 1% of the floating-point baselines on the CIFAR-10 and ILSVRC2012 image classification benchmarks.
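For illustration only (this is not the paper's exact formulation, which exploits per-layer weight and data distributions), a conservative worst-case accumulator constraint can be sketched as follows: each product of a signed w-bit weight and a signed d-bit input needs w + d - 1 bits, and summing n such products adds up to ceil(log2(n)) carry bits, all of which must fit in the accumulator. The function name and parameters below are hypothetical.

```python
import math

def max_weight_bits(acc_bits: int, data_bits: int, macs: int) -> int:
    """Largest signed weight bit width w such that, in the worst case,
    `macs` products of a w-bit weight and a `data_bits`-bit input can be
    summed without overflowing a signed `acc_bits`-bit accumulator.

    Solves: w + data_bits - 1 + ceil(log2(macs)) <= acc_bits.
    """
    return acc_bits - data_bits + 1 - math.ceil(math.log2(macs))

# A 16-bit accumulator with 8-bit inputs and a 3x3x64 kernel (576 MACs):
print(max_weight_bits(16, 8, 576))  # -> -1: the worst-case bound is infeasible
print(max_weight_bits(32, 8, 576))  # -> 15: wide accumulators are unconstrained
```

The negative result for the 16-bit case shows why a pure worst-case bound is too pessimistic for narrow accumulators, and hence why the paper's distribution-aware constraints are needed to recover usable bit widths.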

@inproceedings{deBruin2018quantization,
  author = {{de Bruin}, B. and {Zivkovic}, Z. and {Corporaal}, H.},
  booktitle = {21st Euromicro Conference on Digital System Design (DSD)},
  title = {Quantization of Constrained Processor Data Paths Applied to Convolutional Neural Networks},
  year = {2018},
  pages = {357-364},
  keywords = {quantization, fixed-point efficient inference, narrow accumulators, convolutional neural networks},
  doi = {10.1109/DSD.2018.00069},
  url = {https://research.tue.nl/files/112931714/08491840.pdf},
  month = aug
}