CLEO-AI: Closed-Loop Edge System for Optimized and Adaptive Intelligence

Case ID:
UA26-068
Invention:

CLEO-AI is a closed-loop edge artificial intelligence framework that allows resource-limited devices to learn, optimize, and redeploy models in real time without cloud dependence. It combines on-device model growth, training-free compression, and a hardware-agnostic runtime, enabling systems-on-chip (SoCs) to adapt to changing data or environments while operating within strict Size, Weight, and Power (SWaP) constraints. CLEO-AI delivers low latency and high reliability for edge operations where bandwidth is limited or unavailable.
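
This summary does not describe the specific compression algorithm; as a minimal sketch of what training-free compression can look like in practice, the Python snippet below applies magnitude-based pruning to a dense layer's weights with no retraining step. The function prune_weights, the sparsity parameter, and the NumPy implementation are illustrative assumptions, not CLEO-AI's actual method.

    import numpy as np

    def prune_weights(weights, sparsity=0.5):
        """Zero out the smallest-magnitude weights with no retraining step.

        Generic magnitude-pruning sketch; CLEO-AI's actual training-free
        compression method is not described in this summary.
        """
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)          # number of weights to drop
        threshold = np.partition(flat, k)[k] if 0 < k < flat.size else 0.0
        mask = np.abs(weights) >= threshold    # keep only the larger weights
        return weights * mask, mask

    # Example: prune a 256x128 dense layer to roughly half its weights
    layer = np.random.randn(256, 128).astype(np.float32)
    compressed, mask = prune_weights(layer, sparsity=0.5)
    print(f"nonzero fraction after pruning: {mask.mean():.2f}")

Because the mask is computed directly from weight magnitudes, the compressed model is available immediately, which fits the no-retraining constraint of edge deployment.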

At its core, CLEO-AI integrates an Adaptive Learning System that grows or shrinks models as the data evolves, and a Runtime System that monitors objectives such as accuracy, latency, and energy efficiency. The runtime automatically distributes workloads across heterogeneous processors such as CPUs, GPUs, and FPGA accelerators. If performance falls below configured thresholds, the system can trigger human-in-the-loop actions to preserve mission integrity.
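
The exact monitoring and placement policy is not specified in this summary; the Python sketch below shows one way a single closed-loop runtime step could check assumed accuracy, latency, and energy thresholds, pick a processor target, and flag human-in-the-loop review. All names (Objectives, select_target, monitor_step) and threshold values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Objectives:
        accuracy: float    # fraction correct over a recent window
        latency_ms: float  # average per-inference latency
        energy_mj: float   # estimated per-inference energy

    # Illustrative thresholds; the summary does not give actual values
    THRESHOLDS = Objectives(accuracy=0.90, latency_ms=50.0, energy_mj=15.0)

    def select_target(obj: Objectives) -> str:
        """Toy placement policy: favor the FPGA accelerator when energy is the
        binding constraint, the GPU when latency is, and the CPU otherwise."""
        if obj.energy_mj > THRESHOLDS.energy_mj:
            return "fpga"
        if obj.latency_ms > THRESHOLDS.latency_ms:
            return "gpu"
        return "cpu"

    def monitor_step(obj: Objectives) -> dict:
        """One pass of the closed loop: check objectives, pick a processor,
        and flag human-in-the-loop review if accuracy drops below threshold."""
        return {"target": select_target(obj),
                "human_in_the_loop": obj.accuracy < THRESHOLDS.accuracy}

    print(monitor_step(Objectives(accuracy=0.87, latency_ms=62.0, energy_mj=9.0)))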

Background: 
Traditional neural networks are over-parameterized and rely on static, cloud-based training, which limits their ability to perform effectively on edge devices. These models often include redundant layers and weights, wasting energy and processing power. Current optimization tools address inference but not on-device learning or adaptive runtime management. CLEO-AI closes this gap by integrating continuous learning, compression, and deployment directly into a single on-device platform. This approach ensures robust, self-optimizing performance even in unpredictable or disconnected environments.

Applications: 

  • Autonomous vehicles and driver assistance systems
  • Precision agriculture and environmental monitoring
  • Next-generation communication systems
  • Space and satellite operations
  • Defense and mission-critical unmanned systems


Advantages: 

  • Real-time, on-device adaptation to data drift without cloud retraining (see the illustrative sketch after this list)
  • Hardware-agnostic deployment across heterogeneous systems
  • Training-free compression that maintains accuracy with reduced resources
  • Continuous objective monitoring for energy, latency, and accuracy optimization
  • Operates effectively under strict SWaP limitations
  • Scalable to manage multiple concurrent models
  • Maintains mission continuity in low-connectivity environments
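
As a concrete illustration of the first advantage above, the sketch below computes a Population Stability Index between a reference feature window and a recent window and triggers adaptation when an assumed threshold is crossed. PSI is a generic drift statistic chosen only for illustration; the summary does not state which drift detector CLEO-AI uses, and drift_score and ADAPT_THRESHOLD are hypothetical names and values.

    import numpy as np

    def drift_score(reference: np.ndarray, recent: np.ndarray) -> float:
        """Population Stability Index between a reference feature distribution
        and a recent window; larger values indicate stronger drift."""
        edges = np.histogram_bin_edges(reference, bins=10)
        ref_p, _ = np.histogram(reference, bins=edges)
        new_p, _ = np.histogram(recent, bins=edges)
        ref_p = ref_p / ref_p.sum() + 1e-6   # smooth to avoid log(0)
        new_p = new_p / new_p.sum() + 1e-6
        return float(np.sum((new_p - ref_p) * np.log(new_p / ref_p)))

    # Example: a shifted sensor stream crosses an assumed adaptation threshold
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 5000)
    recent = rng.normal(0.8, 1.0, 1000)   # the input distribution has drifted
    ADAPT_THRESHOLD = 0.2                 # illustrative value, not from the summary
    if drift_score(reference, recent) > ADAPT_THRESHOLD:
        print("drift detected: trigger on-device model adaptation")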

Patent Information:
Contact For More Information:
Scott Zentack
Licensing Manager, College of Engineering
The University of Arizona
zentack@arizona.edu
Lead Inventor(s):
Ali Akoglu
Mustafa Ghanim
Saul Durazo Martinez
Md Sahil Hassan
Keywords: