This week's topic: Small language models
As described in Thoughtworks Technology Radar Vol. 33.
We’ve observed steady progress in the development of small language models (SLMs) across several volumes of the Technology Radar. With growing interest in building agentic solutions, we’re seeing increasing evidence that SLMs can power agentic AI efficiently. Most current agentic workflows focus on narrow, repetitive tasks that don’t require advanced reasoning, making them a good match for SLMs. Continued advancements in SLMs such as Phi-3, SmolLM2 and DeepSeek suggest that SLMs offer sufficient capability for these tasks — with the added benefits of lower cost, reduced latency and lower resource consumption compared to LLMs. It’s worth considering SLMs as the default choice for agentic workflows, reserving larger, more resource-intensive LLMs for the cases that genuinely need them.
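The "SLM by default, LLM only when necessary" idea above can be sketched as a simple router that sends each agent task to a small model unless it is flagged as needing deeper reasoning. This is an illustrative sketch only: the model-calling functions are placeholders, and the `needs_reasoning` flag stands in for whatever complexity heuristic or upstream classifier a real system would use — none of these names come from the Radar text.

```python
# Minimal sketch of routing agent tasks: SLM by default, LLM as fallback.
# The "models" here are stub functions; in practice they would wrap a
# locally served small model (e.g. Phi-3 or SmolLM2) and a hosted LLM.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    needs_reasoning: bool = False  # set by the caller or an upstream classifier

def call_slm(prompt: str) -> str:
    # Placeholder for a cheap, low-latency small language model.
    return f"[slm] {prompt}"

def call_llm(prompt: str) -> str:
    # Placeholder for a larger, more expensive model, used only as a fallback.
    return f"[llm] {prompt}"

def route(task: Task) -> str:
    """Default to the SLM; escalate to the LLM only when the task demands it."""
    handler: Callable[[str], str] = call_llm if task.needs_reasoning else call_slm
    return handler(task.prompt)

# Routine, narrow tasks stay on the small model:
print(route(Task("extract the invoice date from this email")))
# Tasks flagged as needing multi-step reasoning escalate:
print(route(Task("plan a multi-step data migration", needs_reasoning=True)))
```

The point of the sketch is the default direction: escalation to the large model is the exception that must be explicitly requested, which keeps cost and latency low for the bulk of narrow, repetitive agent tasks.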
A Zoom link will be added about 5 minutes before the event starts.
Discussion Resources:
How Small Language Models Are Key to Scalable Agentic AI By Peter Belcak
https://developer.nvidia.com/blog/how-small-language-models-are-key-to-scalable-agentic-ai/
Small Language Models are the Future of Agentic AI By Peter Belcak, Greg Heinrich, Yonggan Fu, Xin Dong, Saurav Muralidharan, Yingyan Celine Lin, Pavlo Molchanov
https://research.nvidia.com/labs/lpr/slm-agents/
Introducing Phi-3: Redefining what’s possible with SLMs By Misha Bilenko
https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/
When to Choose Small vs Large Models | Why Tiny Beats Huge in 2026 By CodeCraft Academy
https://www.youtube.com/watch?v=Z0d2bJO_i4Q
Small Language Models (SLMs) Are the Future: Fine-Tuning AI That Runs on Your iPhone By Daniel Bourke
https://www.youtube.com/watch?v=EXB8HokGVMI