
LLMs for Engineers: Beyond Chatbots and Into Real Execution

Hosted By
James C. and Francis C.

Details

As we all know, most LLMs are savants with vast long-term knowledge and short-term amnesia. They forget things past a few pages, leak data, and agree with you at every turn, even if it means hallucinating. If you're serious about building tools with LLMs, this talk might be the right starting place for you.

You'll learn:

- Why current LLM workflows fail at memory, trust, and reproducibility

- How to use OpenRouter to pick models based on performance, price, and privacy

- What happens when you control the full client stack, from prompt all the way to response

I'll walk through how I built a local LLM shell that has long-term memory, picks the right OpenRouter model for the job, and runs offline, so you can even use it on an airplane. There are many LLM clients out there, but this one is mine, and using what we'll discuss in this session, you can build your own tools that are just as good.
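To give a flavour of the plumbing involved, here is a minimal sketch of calling OpenRouter's OpenAI-compatible chat completions endpoint from .NET with an explicitly chosen model. This is not the speaker's actual client; the model id, environment variable name, and prompt are illustrative assumptions.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

class OpenRouterSketch
{
    static async Task Main()
    {
        // OpenRouter exposes an OpenAI-compatible API; the key comes from your account.
        // The environment variable name here is an assumption for illustration.
        var apiKey = Environment.GetEnvironmentVariable("OPENROUTER_API_KEY");

        using var http = new HttpClient { BaseAddress = new Uri("https://openrouter.ai/api/v1/") };
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

        // The model id is an illustrative choice -- swap it for whichever model wins
        // on performance, price, and privacy for your workload.
        var request = new
        {
            model = "mistralai/mistral-7b-instruct",
            messages = new[]
            {
                new { role = "user", content = "Why do local LLM clients need long-term memory?" }
            }
        };

        // Post the request and print the raw JSON response.
        var response = await http.PostAsJsonAsync("chat/completions", request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```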

Come join me and we'll talk about some of the interesting things you can build with a client stack that is 100% .NET and doesn't shackle you to one model or a single web browser.

We will be giving away two JetBrains licenses as part of our long-standing tradition. You will need to log in to the Twitch channel and participate in the quiz at the end to win. Winners will be required to activate their licenses immediately, as per the licenses' terms & conditions.

Stream at: https://www.twitch.tv/sydneyaltnet

Sydney Alt.Net User Group