Microsoft Reducing AI Compute Requirements with Small Language Models
Tech Disruptors - A podcast by Bloomberg
![](https://is1-ssl.mzstatic.com/image/thumb/Podcasts112/v4/ea/ce/f3/eacef36f-33b5-3ef1-b004-eb4ab3adb6ab/mza_1389785146080499039.png/300x300bb-75.jpg)
“Microsoft is making a bet that we’re not going to need a single AI, we’re going to need many different AIs,” Sebastien Bubeck, Microsoft’s vice president of generative-AI research, tells Bloomberg senior technology analyst Anurag Rana. In this Tech Disruptors episode, the two examine the differences between a large language model like GPT-4o and a small language model such as Microsoft’s Phi-3 family. Bubeck and Rana walk through use cases for these models across different industries and workflows, and compare the costs and compute/GPU requirements of SLMs versus LLMs.