Post Date
Apr 27 2022

Pruned to Perfection

Under the supervision of Dr. Murtaza Taj, the thesis work of his MS students Shehryar Malik, Muhammad Umair Haider, and Omer Iqbal has been published online. This includes two MS theses, one of which is based on neural network pruning through constrained reinforcement learning (CRL) and is featured in this story.

In agriculture, pruning means cutting off unnecessary branches or stems of a plant. In machine learning, pruning means removing unnecessary neurons or weights: it reduces the size of a neural network by removing (pruning) neurons in a way that minimizes the drop in performance. Traditional pruning approaches focus on hand-designing metrics that quantify how useful each neuron is, which is often tedious and sub-optimal. More recent approaches instead train auxiliary networks to automatically learn how useful each neuron is; however, they often do not take computational limitations into account.

In this work, the research team proposes a general methodology for pruning neural networks so that they respect pre-defined computational budgets on arbitrary, possibly non-differentiable, functions. The team only assumes the ability to evaluate these functions for different inputs, so the functions do not need to be fully specified beforehand. This is achieved through a new pruning strategy based on constrained reinforcement learning (CRL) algorithms. The paper demonstrates the effectiveness of the approach through comparisons with state-of-the-art methods on standard image classification datasets. Specifically, the study reduced 83% to 92.90% of total parameters on various pretrained variants of VGG while achieving performance comparable to or better than that of the original networks. The team also achieved a 75.09% reduction in parameters on ResNet18 without incurring any loss in accuracy.
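To make the contrast concrete, here is a minimal, hypothetical sketch in PyTorch: a classic hand-designed usefulness metric (per-neuron L1 weight magnitude) of the kind traditional methods rely on, next to a parameter-count budget that is non-differentiable and can only be evaluated, which is the setting the team's CRL method targets. The function names and the toy model are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, keep_ratio: float) -> None:
    """Classic hand-designed metric: keep the neurons with the largest
    L1 weight norm and zero out the rest."""
    with torch.no_grad():
        scores = layer.weight.abs().sum(dim=1)           # one score per neuron
        keep = scores.topk(int(keep_ratio * len(scores))).indices
        mask = torch.zeros(len(scores), dtype=torch.bool)
        mask[keep] = True
        layer.weight[~mask] = 0.0                        # prune the rest
        if layer.bias is not None:
            layer.bias[~mask] = 0.0

def param_budget(model: nn.Module) -> int:
    """Non-differentiable budget function: count non-zero parameters.
    A CRL pruning agent only needs to *evaluate* this, never differentiate it."""
    return sum(int((p != 0).sum()) for p in model.parameters())

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
magnitude_prune(model[0], keep_ratio=0.25)               # keep top 25% of neurons
print(param_budget(model), "non-zero parameters remain")
```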

[Source of graphic: https://towardsdatascience.com/pruning-neural-networks-1bb3ab5791f9]


The team evaluated their approach on the CIFAR-10 dataset using ResNet18 and variants of the VGG network, with training performed using the Adam optimizer. The proposed framework prunes neural networks via constrained reinforcement learning while respecting budgets on arbitrary, possibly non-differentiable, functions. Their Lagrangian approach incorporates budget constraints by constructing a trust region containing all policies that respect the constraints. The team's experiments show that the proposed CRL strategy significantly outperforms state-of-the-art methods at producing small and compact networks while maintaining the accuracy of the unpruned baseline architectures. Specifically, the method removes roughly 75.09% to 92.90% of parameters without incurring any significant loss in performance.
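The toy sketch below illustrates the general shape of a Lagrangian-relaxed constrained objective for pruning, as described above: a policy proposes per-layer keep ratios, budget violations are penalized through a multiplier updated by dual ascent, and the budget is only ever evaluated. The hill-climbing "policy update", the accuracy curve, and all names are simplifications invented for illustration; they are not the thesis's actual trust-region construction.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, lr_lambda, budget = 0.0, 0.05, 0.25     # multiplier, dual step, param budget

def evaluate(ratios):
    """Stand-in for training and evaluating a pruned network.
    Returns (accuracy proxy, fraction of parameters kept)."""
    kept = float(np.mean(ratios))
    acc = 0.9 * (1 - np.exp(-5 * kept))      # toy accuracy-vs-size curve
    return acc, kept

ratios = rng.uniform(0.1, 1.0, size=13)      # e.g. one keep ratio per VGG16 layer
for step in range(200):
    acc, kept = evaluate(ratios)
    violation = kept - budget                # > 0 means the budget is exceeded
    reward = acc - lam * max(violation, 0)   # Lagrangian-relaxed objective
    # crude policy improvement: hill-climb on the relaxed reward
    candidate = np.clip(ratios + rng.normal(0, 0.05, ratios.shape), 0.05, 1.0)
    c_acc, c_kept = evaluate(candidate)
    if c_acc - lam * max(c_kept - budget, 0) > reward:
        ratios = candidate
    lam = max(0.0, lam + lr_lambda * violation)  # dual ascent on the multiplier
print(f"kept {np.mean(ratios):.2%} of parameters, lambda = {lam:.2f}")
```

As the multiplier grows whenever the budget is exceeded, the relaxed objective increasingly favors smaller networks, pushing the policy toward solutions that satisfy the constraint while preserving accuracy.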


Reference: "Neural Network Pruning Through Constrained Reinforcement Learning," https://doi.org/10.48550/arXiv.2110.08558