tinyML Talks Singapore: ScaleDown Study Group: Optimisation Techniques: Knowledge Distillation
Soham Chatterjee
Machine learning engineer
Sleek Tech
Knowledge Distillation is a technique for compressing the knowledge of a larger model (the teacher) into a smaller model (the student). The student is trained on the teacher's predictions rather than on ground-truth labels, which means it can even be trained on unlabelled data, with the teacher generating the labels!
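For a concrete picture of that training loop, here is a minimal sketch of the classic soft-label distillation step in PyTorch (one of the frameworks covered in the session). The `student` and `teacher` model objects, the optimizer, and the temperature value are illustrative assumptions, not part of ScaleDown's API.

```python
# Minimal knowledge-distillation sketch (assumes PyTorch; models and data are placeholders).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Match the student's softened distribution to the teacher's with KL divergence."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable to a hard-label loss.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

def train_step(student, teacher, optimizer, inputs):
    teacher.eval()
    with torch.no_grad():            # the teacher only generates labels; no gradients needed
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the loss only needs the teacher's outputs, the same step works on unlabelled inputs; when ground-truth labels are available, this loss is commonly blended with a standard cross-entropy term.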
Join us on 29th August at 8:30 pm SGT to learn about Knowledge Distillation and try your hand at testing KD at the edge.
In this session, we will cover:
1. Introduction to Knowledge Distillation
2. Implementing KD in TensorFlow and PyTorch
3. Using ScaleDown for the KD optimisation technique
4. Testing KD on an embedded device
5. Resources and Research Papers