
What we’re about
Hands-on project-oriented data science, with a heavy focus on machine learning and artificial intelligence. We're here to get neck-deep into projects and actually do awesome things!
Join our new Discord https://discord.gg/xtFVsSZuPG where you can:
- discuss more AI/ML papers
- suggest/plan events
- share and discuss GitHub projects
- find and post jobs on our jobs channel
- buy/sell used local GPU/server equipment
- scroll our social media aggregators for the latest AI research news across Bsky, X, Reddit, YouTube, podcasts, and more
The meetup consists of:
- recurring study groups (if you want to start one, just notify Ben to be made a meetup co-organizer).
- intermediate/advanced working groups (starting in 2019)
- occasional talks and gatherings (aiming for at least quarterly starting in 2019)
Upcoming events
Reinforcement Learning: Dynamic Programming Value Iteration and Examples
Last meeting we began Chapter 4 and covered how policy evaluation can use a form of fixed-point iteration to converge to the correct value function of a policy. We then proved the policy improvement theorem and made use of it in the policy iteration algorithm. If you didn't attend that meeting, you can find the recording in the YouTube playlist below, since this session will build on what was covered there.
This meeting we will finish Chapter 4 by covering some specific examples of policy iteration and then introducing the value iteration algorithm with more examples.
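If it helps to have something concrete in front of you during the session, below is a minimal sketch of value iteration on a small tabular MDP. The transition-table format (`P[s][a]` as a list of `(prob, next_state, reward)` tuples) is just an illustrative choice for this sketch, not the book's notation.

```python
# Minimal value iteration sketch (Sutton & Barto, Chapter 4).
# Assumes a small tabular MDP given as P[s][a] = [(prob, next_state, reward), ...],
# where every next_state also appears as a key of P (illustrative format).

def value_iteration(P, gamma=0.9, theta=1e-8):
    V = {s: 0.0 for s in P}                 # start with an all-zero value function
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality backup: best expected return over actions
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:                   # stop once a full sweep barely changes V
            break
    # Extract a greedy policy with respect to the converged value function
    policy = {
        s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
        for s in P
    }
    return V, policy

# Tiny example: from "s0" you can "stay" (reward 0) or "go" to the absorbing state "s1" (reward 1).
P = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(1.0, "s1", 1.0)]},
    "s1": {"stay": [(1.0, "s1", 0.0)]},
}
V, pi = value_iteration(P)                  # pi["s0"] == "go"
```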
As usual you can find below links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.
Useful Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Recordings of Previous Meetings
Short RL Tutorials
My exercise solutions and chapter notes
Kickoff Slides which contain other links
Video lectures from a similar course
Paper Group: MemOS: An Operating System for Memory-Augmented Generation (MAG)
Join us for a paper discussion on "MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models"
Examining unified architectures for memory management in next-generation LLMs
Featured Paper:
"MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models" (Li et al., 2025) presented by Evelyn
arXiv Paper
Discussion Topics:
Motivation and Memory Typology
- Challenges: LLMs lack unified, structured memory, leading to limited adaptability, inconsistent long-term context, and isolated “memory silos”
- Three memory types detailed:
- Parametric Memory (embedded in model weights)
- Activation Memory (inference states like KV-cache, hidden activations)
- Plaintext Memory (external sources, editable/traceable, e.g., knowledge graphs, prompts)
MemCube Abstraction
- Unified representation for heterogeneous memory (parametric, activation, plaintext)
- Structured metadata:
- Descriptive (semantic type, timestamps, origin)
- Governance (permissions, lifespan, compliance)
- Behavioral indicators (usage frequency, evolution tracking)
- Enables memory tracking, fusion, migration, and cross-context reuse
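To have something concrete to point at during the discussion, here is a rough sketch of what a MemCube-style record carrying the three metadata groups might look like. The `MemCube` class and all field names below are illustrative guesses, not the paper's actual API; the `memory_type` field ties back to the three memory types listed above.

```python
# Hypothetical MemCube-style record for discussion only; names are invented,
# not taken from the MemOS paper or codebase. Requires Python 3.10+.
from dataclasses import dataclass, field
from typing import Any, Literal
import time

@dataclass
class MemCube:
    payload: Any                                          # the memory content itself
    memory_type: Literal["parametric", "activation", "plaintext"]
    # Descriptive metadata
    semantic_type: str = "fact"
    created_at: float = field(default_factory=time.time)
    origin: str = "unknown"                               # provenance of the memory
    # Governance metadata
    permissions: set[str] = field(default_factory=lambda: {"owner"})
    ttl_seconds: float | None = None                      # lifespan; None = never expires
    # Behavioral indicators
    access_count: int = 0
    last_accessed: float = 0.0

    def touch(self) -> None:
        """Record a use, so a scheduler can track frequency and recency."""
        self.access_count += 1
        self.last_accessed = time.time()
```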
System Architecture
- Three-layer framework:
- Interface Layer: Unified Memory API (provenance, update, log queries)
- Operation Layer: Schedulers, lifecycle managers, organization (semantic, graph/tagged)
- Infrastructure Layer: Governance, storage (MemVault), migration (MemLoader/MemDumper)
Execution Flow
- User/task initiates memory API call
- MemCube units carry context through operation pipeline (query/update/archive)
- Scheduling selects memory types and loads into context for reasoning
- Results archived/propagated for future tasks or cross-agent sharing
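Here is a hypothetical end-to-end sketch of that flow, purely to anchor the discussion; every object and method name (`memory_store.query`, `scheduler.select`, `memory_store.archive`, and so on) is invented and will not match the real MemOS interfaces.

```python
# Hypothetical execution-flow sketch; all names are invented for discussion
# and do not correspond to the actual MemOS implementation.

def handle_request(task, memory_store, scheduler, llm):
    # 1. The task initiates a memory API call: fetch candidate memory units
    candidates = memory_store.query(task.query_text)

    # 2. Scheduling selects which memory units/types to load into context
    selected = scheduler.select(candidates, task)

    # 3. Load the selected memories into the context used for reasoning
    context = "\n".join(str(cube.payload) for cube in selected)
    answer = llm.generate(f"{context}\n\nUser: {task.query_text}")

    # 4. Archive the result so future tasks (or other agents) can reuse it
    memory_store.archive(answer, memory_type="plaintext", origin=f"task:{task.id}")
    return answer
```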
Performance and Design Highlights
- Modular scheduling (LRU, semantic, label-based) optimizes memory selection per task
- Versioning, rollback, and access auditing ensure compliance and adaptability
- Supports multi-agent collaboration, task continuity, and scalable memory evolution
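And a toy version of the `scheduler.select` step from the sketch above, combining a label-based filter with LRU-style recency ranking; again, this is an illustrative stand-in rather than the paper's scheduling algorithm (`semantic_type` comes from the MemCube sketch, and `task.tag` is invented).

```python
# Toy scheduler: label-based filter plus LRU-style ranking (most recently
# used kept, least recently used dropped). Illustrative only.

def select(candidates, task, k=5):
    # Label-based filter: prefer cubes whose semantic_type matches the task's tag
    matching = [c for c in candidates if c.semantic_type == task.tag] or list(candidates)
    # Recency ranking: most recently accessed first, then most frequently used
    matching.sort(key=lambda c: (c.last_accessed, c.access_count), reverse=True)
    return matching[:k]
```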
Future Directions
- Cross-LLM memory sharing and Memory Interchange Protocol (MIP)
- Self-evolving MemBlocks for automated optimization
- Decentralized memory marketplace for knowledge transfer and collaborative updates
Implementation Challenges
- Integrating memory governance with multi-user, multi-agent environments
- Memory lifecycle tuning for long-term AI adaptation and personalized intelligence
- Ensuring privacy, auditability, and storage efficiency
---
Silicon Valley Generative AI has two meeting formats:
1. Paper Reading - Every second week we meet to discuss machine learning papers. This is a collaboration between Silicon Valley Generative AI and Boulder Data Science.
2. Talks - Once a month we meet to have someone present on a topic related to generative AI. Speakers range from industry leaders, researchers, startup founders, and subject matter experts to anyone with an interest in a topic they would like to share. Topics vary from technical to business focused: how the latest generative models work and how they can be used, applications and adoption of generative AI, demos of projects and startup pitches, or legal and ethical topics. The talks are meant to be inclusive and aimed at a more general audience than the paper readings.
If you would like to be a speaker or suggest a paper, email us at svb.ai.paper.suggestions@gmail.com or join our new Discord!
Reinforcement Learning: Topic TBA
Typically covers chapter content from Sutton and Barto's RL book
As usual you can find below links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.
Useful Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Recordings of Previous Meetings
Short RL Tutorials
My exercise solutions and chapter notes
Kickoff Slides which contain other links
Video lectures from a similar course
Reinforcement Learning: Topic TBA
Typically covers chapter content from Sutton and Barto's RL book
As usual you can find below links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.
Useful Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Recordings of Previous Meetings
Short RL Tutorials
My exercise solutions and chapter notes
Kickoff Slides which contain other links
Video lectures from a similar course