Portland AI Engineers January Meeting
Overview
Get constructive feedback on your AI project from a welcoming AI engineers community—great for builders seeking diverse perspectives and new connections.
Details
This month we'll feature in-depth technical presentations from AI builders seeking the kind of feedback and diverse perspectives our amazing community can always be counted on for.
Schedule:
- 5:00 - 5:30: Snacks and networking
- 5:30 - 6:30: Presentations
- 6:30 - 7:00: Wrap up and networking
***
### Presenters
---
Randy Olson
Title: Beyond the Demo: Building Reliable AI with LLM Evaluations
About Me:
Randy Olson is Co-Founder and CTO of Goodeye Labs, where he's building tools to help teams evaluate AI outputs at scale. He has a PhD in Computer Science and 15+ years of hands-on experience across software engineering, machine learning, computational biology, data science, and AI product development. His work has been featured in the New York Times, Wired, and FiveThirtyEight, including projects like computing optimal road trips and analyzing large-scale data trends. He's also created widely-used open source tools like TPOT, an automated machine learning library. These days, he's focused on solving a problem he's seen across every team he's worked with: making AI systems more reliable and trustworthy in production environments.
Talk Description:
LLM evaluations (evals) have become one of the most talked-about topics in AI engineering, but for many teams, they remain abstract or intimidating. This talk cuts through the noise and gets practical: What are LLM evals, why should you care, and how do you actually implement them?

I'll start by explaining what makes evals essential, especially as AI moves from demos to production systems where quality and consistency matter. We'll look at the evaluation landscape, including solid open-source frameworks like DeepEval and Verdict that you can start using today. Then I'll walk through a hands-on example using Truesight, the evaluation platform I've been building at Goodeye Labs, to show what setting up and running evals looks like in practice.

Whether you're an AI engineer shipping models, a product manager defining quality bars, or anyone working hands-on in the AI space, you'll leave with a clear understanding of why evals matter and how to get started, with or without a budget.
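To make the core idea concrete ahead of the talk: an eval is just a set of test cases scored against your model's outputs. Below is a minimal plain-Python sketch of that pattern; the test cases and the keyword-based scorer are illustrative placeholders, not the talk's actual code or any framework's API.

```python
# Minimal sketch of an LLM eval harness: score outputs against
# expected criteria, then aggregate a pass rate. The scorer and
# cases here are hypothetical examples for illustration only.

def keyword_score(output: str, required_keywords: list[str]) -> float:
    """Fraction of required keywords present in the output (case-insensitive)."""
    text = output.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    return hits / len(required_keywords) if required_keywords else 1.0

def run_evals(cases: list[dict], threshold: float = 0.7) -> dict:
    """Score each case and report per-case pass/fail plus overall pass rate."""
    results = []
    for case in cases:
        score = keyword_score(case["actual_output"], case["required_keywords"])
        results.append({"input": case["input"],
                        "score": score,
                        "passed": score >= threshold})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return {"results": results, "pass_rate": pass_rate}

# Hypothetical cases; in practice, actual_output comes from your LLM pipeline.
cases = [
    {"input": "Summarize our refund policy.",
     "actual_output": "Refunds are available within 30 days of purchase.",
     "required_keywords": ["refund", "30 days"]},
    {"input": "List supported payment methods.",
     "actual_output": "We accept credit cards and PayPal.",
     "required_keywords": ["credit card", "PayPal"]},
]

report = run_evals(cases)
print(f"pass rate: {report['pass_rate']:.0%}")
```

Real frameworks such as DeepEval swap the keyword check for richer metrics (e.g., LLM-as-judge scoring), but the run-score-aggregate loop is the same shape.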
Get involved:
- Present your work at a future event: Email us at info@portlandai.engineer to discuss sharing your project or insights
- Provide feedback: Help shape our community by sharing your ideas and suggestions
- Sponsor an event: Contact us to discuss partnership opportunities
For more information and to join our community, visit portlandai.engineer
Thank you to our amazing sponsors
Portland Incubator Experiment (PIE)
Silicon Florist
AlteredCraft
O'Reilly
Instrument




