Language: Mandarin Chinese
07-31, 12:15–12:35 (Asia/Taipei), TR309
PyTorch for TinyML: Running PyTorch Models on CMSIS-NN
Talk Length – 20 minutes
Do you acknowledge and agree that, if you present remotely, you must provide a pre-recorded video (your consent is required for the proposal to be accepted)? – yes
Difficulty – Advanced
Other info – None
Abstract – In recent years, AIoT (AI-powered Internet of Things) has been regarded as a highly promising field, but whether large deep-learning models can run at all on Cortex-M platforms has long troubled AIoT developers. In this talk, we will introduce how the ONNC compiler accelerates PyTorch machine-learning models for TinyML on Cortex-M hardware, effectively reducing the hardware resources an IoT device needs. On a Cortex-M4 device, a MobileNet compiled by ONNC and evaluated on the Visual Wake Words (person detection) dataset runs 27% faster than TensorFlow Lite for Micro, uses 19% less memory, and takes up 89% less Flash space for code.
slido url –
hackmd url – https://hackmd.io/@coscup/ryHfQ6vAu/%2F%40coscup%2FS1rgXpwCO
English Abstract – In recent years, the AIoT (AI-powered Internet of Things) market has been growing, but the hardware limitations of Cortex-M devices, such as limited computation power and limited storage, have impeded AIoT development for years. In this talk, we will introduce how the ONNC compiler accelerates PyTorch models for TinyML. With the help of ONNC compilation, we can reduce the hardware resources that a model consumes. On a Cortex-M4 device, compared with a MobileNet compiled by TensorFlow Lite for Micro, a MobileNet compiled by ONNC and evaluated on the Visual Wake Words dataset runs 1.27 times faster, consumes 1.57 times less memory, and has a 7.12 times smaller code size.