[UCL WI Talks] DSPy: Self-Improving Language Model Programs

DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines
Abstract:
It is now easy to build impressive demos with language models (LMs), but turning these demos into reliable systems currently requires hand-tuned combinations of prompting, chaining, and finetuning. Toward a more systematic approach, we introduce DSPy, a programming model that replaces ad hoc prompting techniques with composable modules and with optimizers that can supervise complex LM programs. Even simple LM systems expressed in DSPy routinely outperform standard hand-crafted prompt pipelines, in some cases while using small LMs. I conclude by discussing how DSPy enables a new degree of research modularity, one that stands to allow open research to again lead the development of AI systems. DSPy is an active open-source project at http://dspy.ai.
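
For readers unfamiliar with the programming model the abstract describes, here is a minimal sketch of a declarative DSPy module and its compilation step. It assumes DSPy's current Python API (`dspy.LM`, `dspy.ChainOfThought`, `dspy.BootstrapFewShot`); the model name, metric, and training examples are illustrative and not taken from the talk.

```python
import dspy

# Configure the LM that backs all modules (model name is illustrative).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A declarative module: the signature "question -> answer" states *what*
# the step does; DSPy, not the programmer, fills in the prompt details.
qa = dspy.ChainOfThought("question -> answer")

# A tiny illustrative trainset (input fields must be marked explicitly).
trainset = [
    dspy.Example(question="What is the capital of France?",
                 answer="Paris").with_inputs("question"),
    dspy.Example(question="Who wrote Hamlet?",
                 answer="William Shakespeare").with_inputs("question"),
]

# A simple metric for the optimizer to supervise the program with.
def exact_match(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

# "Compiling": the optimizer bootstraps few-shot demonstrations that
# raise the metric, turning the declarative module into a tuned pipeline.
optimizer = dspy.BootstrapFewShot(metric=exact_match)
compiled_qa = optimizer.compile(qa, trainset=trainset)

print(compiled_qa(question="What is the capital of Italy?").answer)
```

Swapping the optimizer (for instance, for one that finetunes a smaller LM instead of bootstrapping demonstrations) changes how the program improves without touching the program itself, which is the kind of modularity the abstract emphasizes.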
Bio:
Omar is a fifth-year CS Ph.D. candidate at Stanford NLP and a 2022 Apple Scholar in AI/ML. He is interested in Natural Language Processing (NLP) at scale, where systems capable of retrieval and reasoning can leverage massive text corpora to craft knowledgeable responses efficiently and transparently. Omar is the author of the ColBERT retrieval model, which has helped shape the modern landscape of neural information retrieval (IR). His work on ColBERT and DSPy forms the basis of influential open-source projects and has sparked applications at dozens of research labs and tech companies, including Google, Meta, Amazon, IBM, VMware, Baidu, Huawei, and AliExpress, among many others.
