DSPy: Programming—not prompting—Foundation Models at Databricks
Details
We will be starting this event 30 minutes early, at 6:30pm.
Food will be provided.
This week we will be at Databricks to learn about their new open-source tool, DSPy.
Schedule:
- 6:30pm: Doors Open
- 7:00pm-8:00pm: Four presentations:
  - DSPy and DSPy on Databricks, by Omar Khattab (Databricks)
  - MIPROv2, the latest DSPy prompt optimizer, by Krista Opsahl-Ong (Stanford)
  - Finetuning in DSPy, by Dilara Soylu (Stanford)
  - DSPy Assertions, by Arnav Singhvi (Databricks)
- 8:00pm-9:00pm: Social Hour / Networking
DSPy is a framework for algorithmically optimizing LM prompts and weights, especially when LMs are used one or more times within a pipeline. To build a complex system with LMs but without DSPy, you generally have to: (1) break the problem down into steps, (2) prompt your LM until each step works well in isolation, (3) tweak the steps to work well together, (4) generate synthetic examples to tune each step, and (5) use these examples to finetune smaller LMs to cut costs. This is hard and messy: every time you change your pipeline, your LM, or your data, all prompts (or finetuning steps) may need to change.
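To make that concrete, here is a minimal sketch of a DSPy program and an optimizer pass, using MIPROv2 from the talk lineup above. The model name, metric, and training data are placeholders, and exact APIs vary across DSPy versions, so treat this as a flavor of the workflow rather than a definitive recipe.

```python
import dspy
from dspy.teleprompt import MIPROv2

# Configure a language model (placeholder model name; any model
# identifier supported by your DSPy version works here).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# Declare *what* the step should do via a signature; DSPy renders
# the actual prompt, so there is no hand-written prompt string.
qa = dspy.ChainOfThought("question -> answer")
print(qa(question="What does DSPy optimize?").answer)

# A placeholder metric and a tiny placeholder trainset; real runs
# would use dozens of labeled examples.
def exact_match(example, prediction, trace=None):
    return example.answer.lower() == prediction.answer.lower()

trainset = [
    dspy.Example(question="...", answer="...").with_inputs("question"),
]

# MIPROv2 searches over instructions and few-shot demonstrations
# to maximize the metric on the training set.
optimizer = MIPROv2(metric=exact_match, auto="light")
optimized_qa = optimizer.compile(qa, trainset=trainset)
```

The point is that the signature ("question -> answer") declares what a step does, and the optimizer, not a hand-tuned prompt string, figures out how to ask the LM; changing your pipeline, model, or data then means re-compiling rather than re-prompting.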
GitHub: https://github.com/stanfordnlp/dspy
Continue the discussion on our Discord: https://discadia.com/gen-ai/
Podcast: https://podcast.genaimeetup.com/
