Discussion - Topic: Structured output from LLMs

Hosted By
Ken D.
Details

This week's topic: Structured output from LLMs

As described in Thoughtworks Technology Radar Vol. #32.

Structured output from LLMs refers to the practice of constraining a language model’s response into a defined schema. This can be achieved either by instructing a generalized model to respond in a particular format or by fine-tuning a model so it “natively” outputs, for example, JSON. OpenAI now supports structured output, allowing developers to supply a JSON Schema, pydantic or Zod object to constrain model responses. This capability is particularly valuable for enabling function calling, API interactions and external integrations, where accuracy and adherence to a format are critical. Structured output not only enhances the way LLMs can interface with code but also supports broader use cases like generating markup for rendering charts. Additionally, structured output has been shown to reduce the chance of hallucinations in model outputs.
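To make the idea concrete, here is a minimal sketch of the schema-constrained pattern the paragraph describes. The schema and the sample response are hypothetical (not from OpenAI's documentation); in practice you would pass a JSON Schema like this to an API that supports structured output, and the validation step shows why adherence to the schema matters for function calling and integrations. Only the Python standard library is used here; a real project might define the same schema with pydantic or Zod.

```python
import json

# Hypothetical JSON Schema for a function-calling use case:
# extracting a calendar event from free text.
EVENT_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "date": {"type": "string"},
        "attendees": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "date", "attendees"],
    "additionalProperties": False,
}

def validate_event(raw: str) -> dict:
    """Parse a model response and check the schema's required keys.

    A minimal stand-in for a full JSON Schema validator: downstream
    code can rely on the keys being present before calling an API.
    """
    data = json.loads(raw)
    missing = [k for k in EVENT_SCHEMA["required"] if k not in data]
    if missing:
        raise ValueError(f"response missing required keys: {missing}")
    return data

# A well-formed response a schema-constrained model might return.
response = '{"title": "Team sync", "date": "2025-05-01", "attendees": ["Ken"]}'
event = validate_event(response)
```

Without schema constraints, the `validate_event` check would be the only line of defense; with structured output, the model is steered to produce conforming JSON in the first place, and validation becomes a safety net rather than a frequent failure path.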

Zoom link will be added about 5 min before the event starts.

Discussion Resources:

Reading and video resources will be added a few days before the meetup. Please try to read/watch them beforehand, as it will help drive the discussion.

DevTalk LA
Online event
FREE