Speed will win the AI computing battle with Tuhin Srivastava from Baseten
At a time when users are being asked to wait unthinkable seconds for AI products to generate art and answers, speed is what will win the battle heating up in AI computing. At least according to today's guest, Tuhin Srivastava, the CEO and co-founder of Baseten, which gives customers scalable AI infrastructure starting with inference. In this episode of No Priors, Sarah, Elad, and Tuhin discuss why efficient code solutions are more desirable than no code, the most surprising use cases for Baseten, and why all of their jobs are very defensible from AI.
Show Links:
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tuhinone
Show Notes:
(0:00) Introduction
(1:19) Capabilities enabled by efficient-code development
(4:11) Differences between training and inference workloads
(6:12) AI product acceleration
(8:48) Leading on inference benchmarks at Baseten
(12:08) Optimizations for different types of models
(16:11) Internal vs open source models
(19:01) Timeline for enterprise scale
(21:53) Rethinking investment in compute spend
(27:50) Defensibility in AI industries
(31:30) Hardware and the chip shortage
(35:47) Speed is the way to win in this industry
(38:26) Wrap