Deploy a custom Machine Learning model to mobile
TensorFlow

Published on May 12, 2022

Walk through the steps to author, optimize, and deploy a custom TensorFlow Lite model to mobile using best practices and the latest developer tooling. This includes using the model authoring APIs, applying and debugging model optimization techniques such as quantization, benchmarking on a real device, and deploying to Android.
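The conversion and quantization steps mentioned above can be sketched in a few lines of Python. This is a minimal, hypothetical example using a toy Keras model as a stand-in for your custom model; the layer sizes and output filename are placeholders, and dynamic-range quantization is just one of the optimization options covered in the session.

```python
import tensorflow as tf

# A toy Keras model standing in for the custom model from the talk.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite, enabling dynamic-range quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The serialized flatbuffer can be bundled into an Android app's assets.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

For full-integer quantization (typically smaller and faster on mobile CPUs), you would also supply a representative dataset to the converter; see the "TensorFlow Model Optimization" and "Quantization Debugger" links below.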

Resources:
Converting your model → https://goo.gle/3M5yux0
Analyze your converted model → https://goo.gle/3L76HLc
TensorFlow Model Optimization → https://goo.gle/39T3iCX
Quantization Debugger → https://goo.gle/3N4uE7h
Performance best practices → https://goo.gle/3Lgcxdv
TensorFlow Lite website → https://goo.gle/37BhVdk
TensorFlow Forum → https://goo.gle/3L0RxY0
TensorFlow website → https://goo.gle/3KejoUZ
Follow on Twitter → https://goo.gle/3sq7a4C

Speakers: Arun Venkatesan, Yu-Cheng Ling, Adam Koch

Watch more:
All Google I/O 2022 Sessions → https://goo.gle/IO22_AllSessions
ML/AI at I/O 2022 playlist → https://goo.gle/IO22_ML-AI
All Google I/O 2022 technical sessions → https://goo.gle/IO22_Sessions

Subscribe to TensorFlow → https://goo.gle/TensorFlow

#GoogleIO
