
Details

This month we are going to shift over to LLMs: ChatGPT, Claude, Perplexity, Grok, Manus, Gemini, and others. These vary in cost, and some specialize in or work better for particular uses.

This presentation will be given by Mark Ondash and Dr. Brian Lambert. Mark will be in the room with us; Dr. Lambert will be joining us remotely via Zoom. Following is the agenda outline:

1. Opening question is: "How many of you have looked at your API bill this month and thought — there has to be a better way?"

2. What we are covering: the LLM landscape, the cost problem, local LLMs

3. Cloud vs. open-source models

4. The cost of API calls

5. Simple example.

6. The three-tier decision framework: Heavy, Workhorse, Local

7. Ollama: which models run well

8. My hardware

9. Quick demo of Ollama

10. How to connect
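As a preview of items 9 and 10, connecting to a locally running Ollama instance is just an HTTP call to its default local endpoint. A minimal sketch using only the standard library; the model name "llama3.2" is an assumption — substitute any model you have pulled:

```python
import json
from urllib import request

# Ollama serves a local REST API on port 11434 by default.
payload = {
    "model": "llama3.2",          # assumed; use any model you've pulled
    "prompt": "Why run an LLM locally?",
    "stream": False,              # return one complete response
}
req = request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With Ollama running, uncomment to send the request:
# resp = request.urlopen(req)
# print(json.loads(resp.read())["response"])
print(json.dumps(payload, indent=2))
```

The same endpoint also works from curl or any HTTP client, which is what makes a local model easy to drop into existing automation.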

In this session we will delve deeper into these topics, since tokens and cost factors come more into play with developed automation and AI agent solutions. This is not just a "ChatGPT" session. These topics matter when weighing cost options and when considering running your own home-based or local LLM.
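To see how tokens drive cost, here is a back-of-the-envelope sketch; the per-token prices are assumed placeholders, not any provider's actual rates — plug in your provider's rate card:

```python
# Assumed placeholder prices (USD per 1M tokens), not real quotes.
PRICE_PER_1M_INPUT = 3.00
PRICE_PER_1M_OUTPUT = 15.00

def monthly_cost(calls, in_tokens, out_tokens):
    """Estimate monthly API spend for an automation that makes
    `calls` requests, each with the given token counts."""
    total_in = calls * in_tokens
    total_out = calls * out_tokens
    return (total_in / 1e6) * PRICE_PER_1M_INPUT + \
           (total_out / 1e6) * PRICE_PER_1M_OUTPUT

# An agent making 10,000 calls/month at 2,000 input / 500 output tokens:
print(f"${monthly_cost(10_000, 2_000, 500):.2f}")  # → $135.00
```

At agent-scale call volumes, numbers like this are what make a one-time hardware investment for a local model start to look attractive.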

Looking forward to seeing folks Wednesday. If you cannot make it, please update your RSVP accordingly. I am not setting a size limit due to so many no-shows; the room comfortably holds 50.

Related topics

Events in Tampa, FL
Artificial Intelligence
Artificial Intelligence Applications
Automation
DevOps
