Cloud to the Edge: Future of LLMs w/ Mahesh Yadav of Google
Curious about how you can run a colossal 405 billion parameter model on a device with a footprint of a mere 2 billion parameters? Join us with Mahesh Yadav from Google as he shares his journey from developing small devices to working with massive language models. Mahesh reveals the groundbreaking possibilities of operating large models on minimal hardware, making internet-free, edge AI a reality even on devices as small as a smartwatch. This eye-opening discussion is packed with insights into the future of AI and edge computing that you don't want to miss.
Explore the strategic shifts by tech giants in the language model arena with Mahesh and our hosts. We dissect Microsoft's investment in OpenAI and its Phi model and Google's development of Gemma, exploring how increasing the parameters in large language models leads to emergent behaviors like logical reasoning and translation. Delving into the technical and financial implications of these advancements, we also address privacy concerns and the critical need for cost-effective model optimization in enterprise environments handling sensitive data.
Advancements in edge AI training take center stage as Mahesh unpacks the latest techniques for model size reduction. Learn about synthetic data generation and the use of quantization, pruning, and distillation to shrink models without losing accuracy. Mahesh also highlights practical applications of small language models in enterprise settings, from contract management to sentiment analysis, and discusses the challenges of deploying these models on edge devices. Tune in to discover cutting-edge strategies for model compression and adaptation, and how startups are leveraging base models with specialized adapters to revolutionize the AI landscape.
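Of the size-reduction techniques mentioned above, quantization is the easiest to see in miniature. The sketch below is a generic illustration of symmetric per-tensor int8 post-training quantization, not the specific pipeline discussed in the episode: float32 weights are mapped to 8-bit integers with a single scale factor, cutting storage 4x at the cost of a small, bounded rounding error.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map float32 weights
    into [-127, 127] using one scale factor for the whole tensor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32:
print(w.nbytes, q.nbytes)  # 4096 1024
# rounding error is bounded by half the quantization step:
print(np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6)
```

Real deployments layer per-channel scales, calibration data, and quantization-aware fine-tuning on top of this, and combine it with the pruning and distillation steps Mahesh describes.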
Learn more about the tinyML Foundation - tinyml.org
Chapters
1. Cloud to the Edge: Future of LLMs w/ Mahesh Yadav of Google (00:00:00)
2. Edge AI Development and Challenges (00:00:37)
3. Edge AI With Small Language Models (00:13:43)
4. Advancements in Edge AI Training (00:22:53)
5. Techniques for Model Size Reduction (00:27:15)
6. Applications of Small Language Models (00:37:40)
7. Discussion on NVIDIA, ONNX, and Acceleration (00:41:05)
8. Model Compression and Adaptation Techniques (00:53:57)
13 episodes