Performance Evaluation of Lightweight Deep Neural Architectures for Resource-Constrained Edge Intelligence

Authors

  • Kristjan Saar, Department of Computer Systems, Tallinn University of Technology (TalTech), Estonia
  • Liisa Tammet, Department of Computer Systems, Tallinn University of Technology (TalTech), Estonia
  • Andrus Vaher, Department of Computer Systems, Tallinn University of Technology (TalTech), Estonia
  • Maarja Ounapuu, Department of Computer Systems, Tallinn University of Technology (TalTech), Estonia
  • Tarmo Kivisild, Department of Computer Systems, Tallinn University of Technology (TalTech), Estonia

DOI:

https://doi.org/10.5281/zenodo.17785577

Keywords:

Edge intelligence, lightweight deep learning, embedded AI, resource-constrained systems, model compression, inference optimization

Abstract

The demand for localized intelligence has accelerated the deployment of compact neural models capable of executing directly on embedded edge hardware. These resource-constrained environments impose strict limits on computational load, memory bandwidth, and energy consumption, requiring models that preserve accuracy while minimizing architectural complexity. This study conducts a detailed performance evaluation of several lightweight deep neural architectures in the context of edge computing systems. The analysis incorporates latency profiling, throughput estimation, architectural efficiency metrics, and robustness testing under fluctuating sensor inputs. Results show that carefully optimized lightweight architectures can deliver competitive performance under tight resource budgets, enabling practical on-device intelligence across diverse distributed environments.
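The latency-profiling and throughput-estimation methodology mentioned above can be illustrated with a minimal sketch. The `infer` function below is a hypothetical stand-in for a lightweight model's forward pass (not the authors' actual models), and the percentile/throughput definitions are common conventions, not necessarily those used in the study:

```python
import statistics
import time

def infer(batch):
    # Hypothetical stand-in for a compact model's forward pass.
    return [sum(x) * 0.001 for x in batch]

def profile(fn, batch, warmup=10, runs=100):
    """Measure per-batch latency (ms) and throughput (samples/s)."""
    for _ in range(warmup):            # warm-up runs stabilize caches/clocks
        fn(batch)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(batch)
        timings.append((time.perf_counter() - start) * 1e3)  # ms
    timings.sort()
    p50 = statistics.median(timings)
    p99 = timings[min(runs - 1, int(runs * 0.99))]
    throughput = len(batch) / (statistics.mean(timings) / 1e3)
    return p50, p99, throughput

batch = [[0.5] * 64 for _ in range(8)]  # 8 samples, 64 features each
p50, p99, tput = profile(infer, batch)
print(f"p50={p50:.3f} ms  p99={p99:.3f} ms  throughput={tput:.0f} samples/s")
```

On real edge hardware the same harness would wrap the deployed runtime's inference call, and energy measurements would come from an external power monitor rather than software timers.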

Published

2025-12-02