Performance Evaluation of Lightweight Deep Neural Architectures for Resource-Constrained Edge Intelligence
DOI:
https://doi.org/10.5281/zenodo.17785577

Keywords:
Edge intelligence, lightweight deep learning, embedded AI, resource-constrained systems, model compression, inference optimization

Abstract
The demand for localized intelligence has accelerated the deployment of compact neural models capable of executing directly on embedded edge hardware. These resource-constrained environments impose strict limitations on computational load, memory bandwidth, and energy consumption, requiring models that preserve accuracy while minimizing architectural complexity. This study conducts a detailed performance evaluation of several lightweight deep neural architectures within the context of early edge computing systems. The analysis incorporates latency profiling, throughput estimation, architectural efficiency metrics, and robustness testing under fluctuating sensor inputs. Results show that carefully optimized lightweight architectures can deliver competitive performance under tight resource budgets, enabling practical on-device intelligence across diverse distributed environments.
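The latency profiling and throughput estimation described above can be illustrated with a minimal timing harness. This is a generic sketch, not the paper's actual benchmarking code: `fake_model` is a stand-in for a lightweight network's forward pass, and the warm-up/run counts are arbitrary illustrative values.

```python
import time
import random

def fake_model(x):
    # Placeholder for a lightweight edge model's forward pass;
    # a real evaluation would invoke a quantized or pruned network here.
    return sum(v * 0.5 for v in x)

def profile(model, sample, warmup=5, runs=50):
    """Return (mean per-inference latency in ms, throughput in inferences/s)."""
    for _ in range(warmup):            # warm-up runs, excluded from timing
        model(sample)
    start = time.perf_counter()
    for _ in range(runs):
        model(sample)
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / runs * 1000.0
    throughput = runs / elapsed
    return latency_ms, throughput

sample = [random.random() for _ in range(128)]   # synthetic sensor-sized input
lat, thr = profile(fake_model, sample)
print(f"latency: {lat:.3f} ms, throughput: {thr:.1f} inf/s")
```

On constrained hardware the same loop would typically be repeated across input sizes and batch configurations to map out the latency/throughput trade-off the abstract refers to.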
License
Copyright (c) 2020 The Artificial Intelligence Journal

This work is licensed under a Creative Commons Attribution 4.0 International License.