
What we're about
Welcome to our AI Meetup! We are a passionate community dedicated to building and learning about artificial intelligence. Whether you're an expert or just starting out, join us to share knowledge, collaborate on projects, and explore the fascinating world of AI together.
We'll be getting different events off the ground, both locally (Seattle) and virtually.
I'd like to have an AI book club going again in 2024, so if you have recommendations for us to read, let us know!
We'll cover AI topics such as Machine Learning (ML), Large Language Models (LLMs), Deep Learning, Data Engineering, MLOps, Python, Computer Vision, Natural Language Processing (NLP), the latest AI developments, and more!
Questions? Reach out to Sage Elliott on LinkedIn: https://www.linkedin.com/in/sageelliott/
Upcoming events (4)
Fine-tune & Serve LLMs with LoRA & QLoRA for Production - LLMOps Workshop
Build Scalable Workflows for Large Language Models (LLMs) and serve them in a hosted application!
REGISTER HERE TO GET THE LIVE LINK & RECORDING:
https://www.eventbrite.com/e/fine-tune-serve-llms-with-lora-qlora-for-production-llmops-workshop-tickets-1302657155619
Training complex AI models at scale requires orchestrating multiple steps into a reproducible workflow and understanding how to optimize resource utilization for efficient fine-tuning. Modern MLOps and LLMOps tools help streamline these processes, improving the efficiency and reliability of your AI pipelines. This workshop will introduce you to the basics of MLOps and best practices for building efficient AI pipelines for large language models (LLMs).
By completing this workshop, you'll gain hands-on experience structuring scalable and reproducible AI workflows for fine-tuning LLMs using best practices such as caching, versioning, containerized resource utilization, parameter-efficient fine-tuning (PEFT), and more. We'll use Hugging Face Transformers and Datasets, PEFT for implementing LoRA and QLoRA, bitsandbytes for quantization, and Union.ai for scalable workflows, GPUs, and serving our fine-tuned model.
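To make the approach concrete, here is a minimal sketch of attaching LoRA adapters to a 4-bit quantized model with Hugging Face Transformers, PEFT, and bitsandbytes. The model name and hyperparameters below are placeholders, not the workshop's exact choices.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "facebook/opt-350m"  # placeholder; the workshop picks its own model

# Quantize the frozen base weights to 4-bit with bitsandbytes (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)

# Attach small low-rank (LoRA) adapters; only these weights are updated during fine-tuning.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few parameters are actually trainable
```

Because only the adapter weights train while the base model sits in 4-bit precision, fine-tuning fits on far smaller GPUs than full-parameter training.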
This workshop will cover:
- MLOps / LLMOps pipeline basics
- Fine-tune a Hugging Face LLM with LoRA & QLoRA
- Build a scalable and reproducible production-grade workflow
- Deploy (serve) your fine-tuned LLM in a real-time Streamlit app
- Concepts covered can transfer to more complex pipelines and models
What you'll need to follow along:
- A free Union.ai account (https://www.union.ai/)
- A GitHub account
- A Google account for Colab
More Session Details:
Part 1: MLOps/LLMOps & efficient fine-tuning overview
Get introduced to the concepts around reproducible workflows, best practices for implementing efficient AI pipelines, and why parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA) and QLoRA (Quantized Low-Rank Adaptation) are widespread.
Part 2: Build a scalable workflow and implement parameter-efficient fine-tuning [hands-on]
In this hands-on section, we'll walk through and run all the code to create our parameter-efficient fine-tuning workflow. We'll implement the following tasks (see the workflow sketch after this list):
- Download Dataset
- Download Model
- Visualize Dataset
- Fine-tune Model (LoRA & QLoRA)
- Evaluate Model Performance
- Perform Batch Inference
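For a rough idea of how these tasks fit together, here is a minimal sketch using Flytekit, the Python SDK for Flyte (the open-source orchestrator behind Union.ai workflows). Task names, types, caching settings, and resource requests are illustrative, not the workshop's actual code.

```python
from flytekit import task, workflow, Resources


@task(cache=True, cache_version="1.0")  # cached: re-runs reuse the previous output
def download_dataset() -> str:
    return "/tmp/dataset"  # placeholder: fetch and stage the training data


@task(cache=True, cache_version="1.0")
def download_model() -> str:
    return "/tmp/base_model"  # placeholder: fetch the base model weights


@task(requests=Resources(gpu="1", mem="24Gi"))  # request a GPU for the training step
def finetune(dataset_path: str, model_path: str) -> str:
    return "/tmp/lora_adapter"  # placeholder: run LoRA/QLoRA fine-tuning


@workflow
def finetune_pipeline() -> str:
    data = download_dataset()
    base = download_model()
    return finetune(dataset_path=data, model_path=base)
```

Each task runs in its own container, outputs are versioned, and caching lets unchanged steps be skipped on re-runs.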
Part 3: Serve your fine-tuned LLM for real-time inference with a Streamlit UI
We'll pass our fine-tuned model artifact into an application using Streamlit to create a user interface for interaction.
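As a rough illustration (not the workshop's actual app), a Streamlit front end over a fine-tuned model can be as small as this; the model path and generation settings are placeholders.

```python
import streamlit as st
from transformers import pipeline


@st.cache_resource  # load the model once per server process, not on every interaction
def load_generator():
    return pipeline("text-generation", model="./finetuned-model")  # placeholder path


st.title("Fine-tuned LLM demo")
prompt = st.text_input("Enter a prompt")
if prompt:
    generator = load_generator()
    output = generator(prompt, max_new_tokens=128)[0]["generated_text"]
    st.write(output)
```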
After this section, you'll have the skills to build an end-to-end production-grade AI pipeline for fine-tuning and serving large language models (LLMs).
About the Speaker:
Sage Elliott is an AI Engineer with a background in computer vision, LLM evaluation, MLOps, IoT, and Robotics. He's taught thousands of people at live workshops. You can usually find him in Seattle biking around to parks or reading in cafes, catching up on the latest read for AI Book Club.
Connect with Sage: https://www.linkedin.com/in/sageelliott/
About Union.ai
Our AI workflow and inference platform unifies data, models, and compute, bringing workflow execution into a single pane of glass. We also maintain Flyte, an open-source orchestrator that facilitates building production-grade data and ML pipelines.
Join our AI and MLOps Slack Community: https://slack.flyte.org/
Check out Flyte on GitHub: https://github.com/flyteorg/flyte
Learn about everything else we're doing at https://union.ai/
AI Book Club: Hands-On APIs for AI and Data Science
May's book is "Hands-On APIs for AI and Data Science"!
This is a casual-style event, not a structured presentation. Sometimes the discussion even drifts away from the chapters, but feel free to grab the mic to help steer it back.
Feel free to join the discussion even if you have not read the book chapters! :)
Want to discuss the contents during the reading week? Join the Flyte MLOps Slack group and search for the "ai-reading-club" channel. https://slack.flyte.org/
-------------------------------------------------
About the book:
Title: Hands-On APIs for AI and Data Science
Author: Ryan Day
Published: March 2025
Chapters:
- I. Building APIs for Data Science
1. Creating APIs That Data Scientists Will Love
2. Selecting Your API Architecture
3. Creating Your Database
4. Developing the FastAPI Code
5. Documenting Your API
6. Deploying Your API to the Cloud
7. Batteries Included: Creating a Python SDK
- II. Using APIs in Your Data Science Project
8. What Data Scientists Should Know About APIs
9. Using APIs for Data Analytics
10. Using APIs in Data Pipelines
11. Using APIs in Streamlit Data Apps
- III. Using APIs with Artificial Intelligence
12. Using APIs with Artificial Intelligence
13. Deploying a Machine Learning API
14. Using APIs with LangChain
15. Using ChatGPT to Call Your API
Book Description:
Are you ready to grow your skills in AI and data science? A great place to start is learning to build and use APIs in real-world data and AI projects. API skills have become essential for AI and data science success because they are used in a variety of ways in these fields. With this practical book, data scientists and software developers will gain hands-on experience developing and using APIs with the Python programming language and popular frameworks like FastAPI and Streamlit.
As you complete the chapters in the book, you'll create portfolio projects that teach you how to (see the minimal FastAPI sketch after this list):
- Design APIs that data scientists and AIs love
- Develop APIs using Python and FastAPI
- Deploy APIs using multiple cloud providers
- Create data science projects such as visualizations and models using APIs as a data source
- Access APIs using generative AI and LLMs
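To give a flavor of the kind of code the book's first part builds toward, here is a minimal FastAPI endpoint with a typed response model. It is an illustrative sketch, not an example taken from the book.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Prediction(BaseModel):
    label: str
    score: float


@app.get("/health")
def health() -> dict:
    # simple liveness check, handy once the API is deployed to the cloud
    return {"status": "ok"}


@app.post("/predict", response_model=Prediction)
def predict(text: str) -> Prediction:
    # placeholder logic; a real service would call a trained model here
    return Prediction(label="positive", score=0.99)
```

Run it locally with `uvicorn main:app --reload` (assuming the file is saved as main.py) and FastAPI generates interactive API docs automatically.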
Learn more about the book here:
https://learning.oreilly.com/library/view/hands-on-apis-for/9781098164409/
AI Book Club: The Hundred-Page Language Models Book: hands-on with PyTorch
June's book is "The Hundred-Page Language Models Book: hands-on with PyTorch"!
This is a casual-style event, not a structured presentation. Sometimes the discussion even drifts away from the chapters, but feel free to grab the mic to help steer it back.
Feel free to join the discussion even if you have not read the book chapters! :)
Want to discuss the contents during the reading month? Join the Flyte MLOps Slack group and search for the "ai-reading-club" channel. https://slack.flyte.org/
-------------------------------------------------
About the book:
Title: The Hundred-Page Language Models Book: hands-on with PyTorch
Author: Andriy Burkov
Published: January 15, 2025
https://thelmbook.com/
Chapters:
- Chapter 1. Machine Learning Basics
- Chapter 2. Language Modeling Basics
- Chapter 3. Recurrent Neural Network
- Chapter 4. Transformer
- Chapter 5. Large Language Model
- Chapter 6. Further Reading
Book Description
Large language models (LLMs) have fundamentally transformed how machines process and generate information. They are reshaping white-collar jobs at a pace comparable only to the revolutionary impact of personal computers. Understanding the mathematical foundations and inner workings of language models has become crucial for maintaining relevance and competitiveness in an increasingly automated workforce.
This book guides you through the evolution of language models, starting from machine learning fundamentals. Rather than presenting transformers right away, which can feel overwhelming, we build an understanding of language models step by step, from simple count-based methods through recurrent neural networks to modern architectures. Each concept is grounded in clear mathematical foundations and illustrated with working Python code. In the largest chapter, on large language models, you'll learn both effective prompt engineering techniques and how to fine-tune these models to follow arbitrary instructions. Through hands-on experience, you'll master proven strategies for getting consistent outputs and adapting models to your needs.
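As a taste of the "count-based methods" the book starts from (this sketch is not from the book), a bigram language model fits in a few lines of Python:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1


def next_word_probs(prev: str) -> dict:
    """Estimate P(next | prev) from raw bigram counts."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}


print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```

Everything after this baseline, from RNNs to transformers, is a progressively better way of estimating the same next-word distribution.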
Learn more about the book here:
https://thelmbook.com/
AI Book Club: Reinforcement Learning for Finance
July's book is "Reinforcement Learning for Finance"!
This is a casual-style event, not a structured presentation. Sometimes the discussion even drifts away from the chapters, but feel free to grab the mic to help steer it back.
Feel free to join the discussion even if you have not read the book chapters! :)
Want to discuss the contents during the reading week? Join the Flyte MLOps Slack group and search for the "ai-reading-club" channel. https://slack.flyte.org/
-------------------------------------------------
About the book:
Title: Reinforcement Learning for Finance
Author: Yves Hilpisch
Published: October 2024
https://learning.oreilly.com/library/view/reinforcement-learning-for/9781098169169/
Chapters:
- 1. Learning Through Interaction
- 2. Deep Q-Learning
- 3. Financial Q-Learning
- II. Data Augmentation
- 4. Simulated Data
- 5. Generated Data
- III. Financial Applications
- 6. Algorithmic Trading
- 7. Dynamic Hedging
- 8. Dynamic Asset Allocation
- 9. Optimal Execution
- 10. Concluding Remarks
Book Description
Reinforcement learning (RL) has led to several breakthroughs in AI. The use of the deep Q-learning (DQL) algorithm alone has helped people develop agents that play arcade games and board games at a superhuman level. More recently, RL, DQL, and similar methods have gained popularity in publications related to financial research.
This book is among the first to explore the use of reinforcement learning methods in finance.
Author Yves Hilpisch, founder and CEO of The Python Quants, provides the background you need in concise fashion. ML practitioners, financial traders, portfolio managers, strategists, and analysts will focus on the implementation of these algorithms in the form of self-contained Python code and the application to important financial problems.
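For readers new to the topic, the tabular Q-learning update that deep Q-learning generalizes looks roughly like this (an illustrative sketch, not code from the book):

```python
import random
from collections import defaultdict

n_actions = 2                       # e.g. a toy "hold" vs. "trade" choice
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount factor, exploration rate
Q = defaultdict(lambda: [0.0] * n_actions)  # Q[state][action] value table


def choose_action(state) -> int:
    # epsilon-greedy: mostly exploit current estimates, occasionally explore
    if random.random() < eps:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[state][a])


def q_update(state, action, reward, next_state) -> None:
    # move Q(s, a) toward reward + discounted best value of the next state
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```

Deep Q-learning replaces the table with a neural network so the same idea scales to the large state spaces of financial data.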
This book covers:
- Reinforcement learning
- Deep Q-learning
- Python implementations of these algorithms
- How to apply the algorithms to financial problems such as algorithmic trading, dynamic hedging, and dynamic asset allocation
This book is the ideal reference on this topic. You'll read it once, change the examples according to your needs or ideas, and refer to it whenever you work with RL for finance.
Dr. Yves Hilpisch is founder and CEO of The Python Quants, a group that focuses on the use of open source technologies for financial data science, AI, asset management, algorithmic trading, and computational finance.
Learn more about the book here:
https://learning.oreilly.com/library/view/reinforcement-learning-for/9781098169169/