Tiny Machine Learning [website]

[News] We refactored MCUNet into a standalone repo: https://github.com/mit-han-lab/mcunet. Please follow the new repo for updates on the TinyEngine release!

[News] We actively collaborate with industrial partners on real-world TinyML applications. Our technology has influenced many products and has been deployed on over 100K IoT devices. Feel free to contact Prof. Song Han for more info.

[News] Our projects are covered by: MIT News, WIRED, Morning Brew, Stacey on IoT, Analytics Insight, Techable.

TinyML Projects

Projects   Keywords
MCUNet     Memory-efficient inference, System-algorithm co-design
TinyTL     On-device learning, Memory-efficient transfer learning
NetAug     Training technique for tiny neural networks
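The keywords above correspond to concrete techniques. For example, TinyTL's "memory-efficient transfer learning" centers on updating only the bias terms (plus a small task head) while keeping weights frozen, so the backward pass no longer needs the stored intermediate activations that weight gradients require. Below is a minimal PyTorch-style sketch of that bias-only idea; the MobileNetV2 backbone, the 10-class head, and the learning rate are placeholder assumptions for illustration, not code from this repository.

```python
# Illustrative sketch of bias-only fine-tuning (the core memory-saving idea
# behind TinyTL). Freezing weights means weight gradients are never computed,
# so the large intermediate activations they would require need not be stored.
import torch
import torchvision

# Placeholder backbone; TinyTL itself builds on specialized tiny networks.
model = torchvision.models.mobilenet_v2(weights="DEFAULT")

# Freeze every weight; leave only bias parameters trainable.
for name, param in model.named_parameters():
    param.requires_grad = "bias" in name

# Replace the classifier head for the downstream task (e.g., 10 classes).
# Created after the freezing loop, so it stays fully trainable.
model.classifier[1] = torch.nn.Linear(model.last_channel, 10)

# Optimize only the parameters that still require gradients.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01
)
```

TinyTL additionally introduces lite residual modules to recover the accuracy lost by freezing the weights; see the TinyTL paper linked under Related Projects.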

About TinyML

Intelligent edge devices with rich sensors (e.g., billions of mobile phones and IoT devices) are ubiquitous in our daily lives. Combining artificial intelligence (AI) with these edge devices enables vast real-world applications such as smart homes, smart retail, and autonomous driving. However, state-of-the-art deep learning systems typically demand tremendous resources for both training and inference (e.g., large labeled datasets, extensive computation, and many AI experts), which hinders their deployment on edge devices. The TinyML project aims to improve the efficiency of deep learning systems so that they require less computation, fewer engineers, and less data, unlocking the huge market of edge AI and AIoT.

Demo

Watch the video

Related Projects

MCUNet: Tiny Deep Learning on IoT Devices (NeurIPS'20, spotlight)

TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning (NeurIPS'20)

Once for All: Train One Network and Specialize it for Efficient Deployment (ICLR'20)

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware (ICLR'19)

AutoML for Architecting Efficient and Specialized Neural Networks (IEEE Micro)

AMC: AutoML for Model Compression and Acceleration on Mobile Devices (ECCV'18)

HAQ: Hardware-Aware Automated Quantization (CVPR'19, oral)
