
Discussion - Topic: LLM Guardrails

Hosted By
Ken D.

Details

This week's topic: LLM Guardrails

As described in Thoughtworks Technology Radar Vol. 31.

LLM Guardrails is a set of guidelines, policies or filters designed to prevent large language models (LLMs) from generating harmful, misleading or irrelevant content. Guardrails can also safeguard LLM applications from malicious users attempting to misuse the system with techniques like input manipulation. They act as a safety net by setting boundaries for the model to process and generate content. There are some emerging frameworks in this space, like NeMo Guardrails, Guardrails AI and Aporia Guardrails, that our teams have been finding useful. We recommend every LLM application have guardrails in place and that its rules and policies be continuously improved. Guardrails are crucial for building responsible and trustworthy LLM chat apps.
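To make the idea concrete before the discussion, here is a minimal sketch of the input/output-checking pattern the description refers to. It is hand-rolled Python, not the API of NeMo Guardrails, Guardrails AI or Aporia; the patterns, blocked terms and function names are all illustrative assumptions.

```python
import re

# Hypothetical guardrail rules -- illustrative only, not from any framework.
# Input guardrail: reject prompts that look like injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

# Output guardrail: withhold responses containing terms the app must never emit.
BLOCKED_OUTPUT_TERMS = ["internal-api-key", "confidential"]


def check_input(prompt: str) -> bool:
    """Return True if the prompt passes the input guardrail."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)


def check_output(response: str) -> bool:
    """Return True if the model response passes the output guardrail."""
    lowered = response.lower()
    return not any(term in lowered for term in BLOCKED_OUTPUT_TERMS)


def guarded_completion(prompt: str, call_llm) -> str:
    """Wrap an LLM call with input and output guardrails.

    `call_llm` is a placeholder for whatever client function the
    application actually uses to get a completion from the model.
    """
    if not check_input(prompt):
        return "Sorry, that request can't be processed."
    response = call_llm(prompt)
    if not check_output(response):
        return "Sorry, the generated response was withheld by policy."
    return response
```

Frameworks like the ones named above replace hand-rolled checks like these with declarative rule definitions and model-based classifiers, which is why the description recommends continuously improving the rules and policies rather than treating them as fixed.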

Zoom link will be added about 5 min before the event starts.

Discussion Resources:

TBD

DevTalk LA
Online event
FREE