RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference

Saha, O.; Kusupati, A.; Simhadri, H. V.; Varma, M.; Jain, P. (2020). RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference. In: 34th Conference on Neural Information Processing Systems (NeurIPS 2020), December 2020, Vancouver, Canada.

Full text not available from this repository.

Official URL: http://manikvarma.org/pubs/saha20.pdf

Abstract

Standard Convolutional Neural Networks (CNNs) designed for computer vision tasks tend to have large intermediate activation maps. These require large working memory and are thus unsuitable for deployment on resource-constrained devices typically used for inference on the edge. Aggressively downsampling the images via pooling or strided convolutions can address the problem but leads to a significant decrease in accuracy due to gross aggregation of the feature map by standard pooling operators. In this paper, we introduce RNNPool, a novel pooling operator based on Recurrent Neural Networks (RNNs), that efficiently aggregates features over large patches of an image and rapidly downsamples activation maps. Empirical evaluation indicates that an RNNPool layer can effectively replace multiple blocks in a variety of architectures such as MobileNets and DenseNet when applied to standard vision tasks like image classification and face detection. That is, RNNPool can significantly decrease computational complexity and peak memory usage for inference while retaining comparable accuracy. We use RNNPool with the standard S3FD [50] architecture to construct a face detection method that achieves state-of-the-art MAP for tiny ARM Cortex-M4 class microcontrollers with under 256 KB of RAM.
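The sketch below illustrates the pooling idea described in the abstract: summarise each large patch of an activation map with two small RNN passes (over rows, then over the row/column summaries) and emit one vector per patch, downsampling the map in a single step. This is a minimal, hedged illustration only; it substitutes plain GRU cells for the FastGRNN cells used by the authors, and the class name, hidden sizes, and tensor layouts are illustrative assumptions rather than the paper's implementation.

```python
# Minimal RNNPool-style pooling sketch (assumptions: nn.GRU in place of
# FastGRNN; names, sizes, and layouts are illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class RNNPoolSketch(nn.Module):
    """Summarise each (patch_size x patch_size) patch with two small RNNs,
    replacing several conv/pool blocks with one aggressive downsampling step."""

    def __init__(self, in_channels, hidden1=8, hidden2=8, patch_size=8, stride=4):
        super().__init__()
        self.patch_size = patch_size
        self.stride = stride
        # First RNN scans each row (and, reused, each column) of a patch.
        self.rnn1 = nn.GRU(in_channels, hidden1, batch_first=True)
        # Second RNN scans the row summaries and the column summaries
        # bidirectionally to produce the final patch descriptor.
        self.rnn2 = nn.GRU(hidden1, hidden2, batch_first=True, bidirectional=True)
        self.out_channels = 4 * hidden2  # bidirectional pass over rows + columns

    def _summarise(self, patches):
        # patches: (N, P, P, C) -> one 4*hidden2 vector per patch
        n, p, _, c = patches.shape
        rows = patches.reshape(n * p, p, c)                       # row sequences
        cols = patches.permute(0, 2, 1, 3).reshape(n * p, p, c)   # column sequences
        _, h_rows = self.rnn1(rows)            # final hidden state: (1, N*P, hidden1)
        _, h_cols = self.rnn1(cols)
        h_rows = h_rows.squeeze(0).reshape(n, p, -1)
        h_cols = h_cols.squeeze(0).reshape(n, p, -1)
        _, h2_rows = self.rnn2(h_rows)         # (2, N, hidden2), both directions
        _, h2_cols = self.rnn2(h_cols)
        return torch.cat([h2_rows.permute(1, 0, 2).reshape(n, -1),
                          h2_cols.permute(1, 0, 2).reshape(n, -1)], dim=1)

    def forward(self, x):
        # x: (B, C, H, W) activation map
        b, c, h, w = x.shape
        # Extract (possibly overlapping) patches; stride < patch_size overlaps.
        patches = F.unfold(x, kernel_size=self.patch_size, stride=self.stride)
        l = patches.shape[-1]                  # number of patches per image
        patches = patches.transpose(1, 2).reshape(
            b * l, c, self.patch_size, self.patch_size)
        patches = patches.permute(0, 2, 3, 1)  # (B*L, P, P, C)
        summaries = self._summarise(patches)   # (B*L, 4*hidden2)
        out_h = (h - self.patch_size) // self.stride + 1
        out_w = (w - self.patch_size) // self.stride + 1
        return summaries.reshape(b, out_h, out_w, -1).permute(0, 3, 1, 2)


# Example: a 64x64x16 activation map is reduced to 15x15x32 in one step.
if __name__ == "__main__":
    pool = RNNPoolSketch(in_channels=16)
    print(pool(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 32, 15, 15])
```

Because each patch is folded into a single vector, the peak intermediate activation size drops sharply compared with stacking several convolution-plus-pooling blocks, which is the memory saving the abstract refers to.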

Item Type: Conference or Workshop Item (Paper)
ID Code: 119531
Deposited On: 14 Jun 2021 07:25
Last Modified: 14 Jun 2021 07:25
