Policy-Guided Neural Thinning: Dynamic Parameter Removal During Inference
DOI: https://doi.org/10.5281/zenodo.17792846

Keywords: dynamic inference, policy-guided thinning, adaptive neural models, selective activation, reinforcement-driven optimization, efficient computation, lightweight analytics

Abstract
This work presents a dynamic inference framework in which neural models selectively deactivate internal parameters based on a policy learned through reinforcement signals. The method, termed policy-guided neural thinning, enables a network to adjust its computational footprint at run time, allowing inference to scale with the difficulty of the input or constraints of the device. Instead of relying on fixed pruning decisions, the system evaluates structural importance on a per-input basis and activates only the components that contribute meaningfully to prediction quality. Experiments demonstrate that this adaptive approach reduces computation and energy consumption while preserving stable predictive behavior across varying workloads. The results show that neural thinning, when controlled by decision policies, forms a viable pathway toward efficient and responsive analytics on constrained platforms.
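The per-input mechanism the abstract describes can be illustrated with a minimal sketch: a small network whose hidden units are switched on or off by a lightweight policy that scores each unit from the current input, keeping only enough units to meet a compute budget. All names here (`ThinnedMLP`, `budget`, the policy weights `Wp`) are hypothetical; the paper's actual policy is trained with reinforcement signals, which this toy forward pass does not attempt to reproduce.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class ThinnedMLP:
    """Two-layer MLP whose hidden units are gated per input by a
    lightweight policy head (illustrative sketch, not the paper's model)."""

    def __init__(self, d_in=8, d_hidden=16, d_out=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (d_in, d_hidden))
        self.W2 = rng.normal(0.0, 0.5, (d_hidden, d_out))
        # Policy head: one keep-logit per hidden unit, computed from the input.
        self.Wp = rng.normal(0.0, 0.5, (d_in, d_hidden))

    def forward(self, x, budget=0.5):
        # The policy keeps the top-k scoring units for this particular input,
        # so easy inputs (or tight device budgets) use fewer parameters.
        logits = x @ self.Wp
        k = max(1, int(budget * logits.size))
        mask = np.zeros_like(logits)
        mask[np.argsort(logits)[-k:]] = 1.0
        h = relu(x @ self.W1) * mask   # thinned hidden layer
        return h @ self.W2, mask

net = ThinnedMLP()
x = np.ones(8)
y_full, m_full = net.forward(x, budget=1.0)   # all 16 hidden units active
y_half, m_half = net.forward(x, budget=0.5)   # only 8 units active
print(int(m_full.sum()), int(m_half.sum()))
```

In a trained system the keep-logits would be optimized with reinforcement signals that trade prediction quality against the number of active units; here they are random, which is enough to show how the computational footprint scales with the budget at inference time.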
License
Copyright (c) 2020 The Artificial Intelligence Journal

This work is licensed under a Creative Commons Attribution 4.0 International License.