- Chassis.ml: ML Containers Made Easy
Containers are a great way to build applications, but they're not always easy to make or use, especially for data scientists. Chassis is an open-source project that turns ML models into containerized prediction APIs in just minutes. We built this tool for data scientists, machine learning engineers, and DevOps teams who need an easier way to automatically build ML model containers and ship them to production.
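For context, here is a minimal sketch of the kind of hand-rolled serving boilerplate Chassis aims to automate away: a pickled model wrapped in a Flask prediction route. This is an illustrative baseline, not Chassis's own API; the model file, route, and port are hypothetical.

```python
# Illustrative baseline, not Chassis itself: the serving code a data
# scientist would otherwise write and containerize by hand.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)
model = pickle.load(open("model.pkl", "rb"))  # hypothetical pickled sklearn-style model

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]      # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features).tolist()  # numpy array -> JSON-safe list
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Chassis's pitch is that this layer, plus the container image around it, gets generated for you, so the model ships as a ready-made prediction API.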
Presenter:
Brad is an experienced technologist with a focus on AI/ML and Machine Learning Operations (MLOps). As a Data Scientist and Solution Engineer at Modzy, he helps organizations deploy ML models at enterprise scale and unlock value from their AI investments by implementing robust MLOps pipelines.
- Justin G.
- Seth C.
- Vine
- 23 attendees
- Justin G.
- Skills to Develop and Avoid When Building AI-Generated Content
NOTE: This meeting will be IN-PERSON at Lab651. Please RSVP so we can plan accordingly for food and drink!
Content generated from AI tools can run the gamut from high-quality and on-brand to poorly disguised copyright violations. Learn the most effective ways to build prompts, edit AI content, and ensure high-quality outputs from both text and image generators.
Our speaker this month is Deborah Carver. Deborah helps businesses ensure they are making sound investments in content and marketing technology. She also writes about demystifying publishing technologies and illuminating the gaps between the tech industry and the media. If it's SEO, UX, any kind of content recommendation algorithm, or new tools and workflows for content... you name it, she is figuring out how creatives can use and better understand technology.
- Justin G.
- Kimberly Anne J.
- Michael L.
- 21 attendees
- Justin G.
- Building Conversational AI with Python
In this mini-workshop, Dishant Gandhi, a conversational AI expert from our community, will share the ins and outs of building an AI assistant with Rasa, the open-source conversational AI platform!
Participants will learn about the following topics:
- How does conversational AI work?
- What is NLU?
- Using Rasa to build conversational AI
- Chatting with the assistant (see the sketch after this list)
Specific use cases will be covered during the session. Register today!
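As a taste of that last topic, here is a minimal sketch of chatting with a Rasa assistant from Python over Rasa's REST channel. It assumes a trained bot is already running locally (`rasa run` on the default port 5005) with the `rest` channel enabled in credentials.yml.

```python
# Minimal sketch: send a message to a locally running Rasa assistant
# through the REST channel and print its replies. Assumes `rasa run`
# is serving on the default port with `rest:` in credentials.yml.
import requests

RASA_URL = "http://localhost:5005/webhooks/rest/webhook"

def chat(sender_id: str, message: str) -> list[str]:
    """Send one user message and return the bot's text replies."""
    payload = {"sender": sender_id, "message": message}
    response = requests.post(RASA_URL, json=payload, timeout=10)
    response.raise_for_status()
    # The REST channel returns a list of {"recipient_id": ..., "text": ...}
    return [turn.get("text", "") for turn in response.json()]

if __name__ == "__main__":
    for reply in chat("demo-user", "hello"):
        print("bot:", reply)
```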
- Justin G.
- Arana F.
- Chris W.
- 19 attendees
- Justin G.
- Using the NIST AI Risk Management Framework
With the current rapid pace of advancement in AI, organizations are often playing catch-up when determining how to ensure their products are not causing negative impacts or harm. The rise of generative AI typifies both these potential harms and the potential benefits of AI technology. Instead of reacting to ever more frequent technology launches, organizations can buttress their processes through risk management.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) provides organizations with a guiding structure to operate within, and outcomes to aspire towards, based on their specific contexts, use cases, and skillsets. The rights-affirming framework operationalizes AI system trustworthiness within a culture of responsible AI practice and use.
Our speaker this month is Reva Schwartz, a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST), a member of the NIST AI RMF team, and Principal Investigator on Bias in Artificial Intelligence for NIST's Trustworthy and Responsible AI program.
Her research focuses on evaluating AI system trustworthiness, studying AI system impacts, and driving an understanding of socio-technical systems within computational environments. She has advised federal agencies about how experts interact with automation to make sense of information in high-stakes settings.
Reva's background is in linguistics and experimental phonetics. Having been a forensic scientist for more than a decade, she has seen the risks of automated systems up close. She advocates for interdisciplinary perspectives and brings contextual awareness into AI system design protocols.
- Justin G.
- GJullian F.
- Mark P.
- 9 attendees
- Justin G.
- When Images Aren't RGB - a Light Walk Through AI Visible-Thermal Research
Conventional computer vision applications usually learn features from the visible spectrum. From typical image classification to even generative algorithms, RGB imagery is frequently used as training data. In this talk, I'll walk you through some of the research I've worked on, from classic image registration to generative modeling in the thermal spectrum. Thermal imagery, particularly when captured in the Long-Wave Infrared (LWIR) band, visualizes radiated heat at longer wavelengths than the visible spectrum. As a result, conventional methods from face detection to image alignment cannot be easily transferred from the visible to the thermal domain due to differences in intensity and texture. Join me as I touch on these projects and introduce some of the motivations for why thermal-based computer vision is relevant in our society.
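For a sense of the "classic image registration" baseline the talk starts from, here is a minimal sketch of feature-based alignment with OpenCV (ORB keypoints plus a RANSAC homography). The file names are hypothetical, and on a visible-thermal pair this approach often breaks down for exactly the intensity and texture differences described above.

```python
# Minimal sketch of classic feature-based registration: align a "moving"
# image to a "fixed" one via ORB keypoints and a RANSAC homography.
import cv2
import numpy as np

fixed = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
moving = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(moving, None)
kp2, des2 = orb.detectAndCompute(fixed, None)

# Brute-force Hamming matching; keep the best correspondences
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly estimate the homography and warp the moving image onto the fixed one
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(moving, H, (fixed.shape[1], fixed.shape[0]))
cv2.imwrite("aligned.png", aligned)
```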
Speaker Bio:
Catherine Ordun is an Executive Advisor of AI at Booz Allen Hamilton, currently working on her dissertation as a PhD Candidate at the University of Maryland, Baltimore County. Her research focuses on visible-thermal image translation and registration, in addition to multimodal pain detection. At Booz Allen, she wears many hats, from leading AI research for the NIH, to technical validation of partner companies through the firm's VC fund, to developing novel algorithms for multiple Federal agencies. She holds an MPH and an MBA, and is looking forward to finally defending her dissertation in October 2023. To escape the drudgery of PhD life, she's completing her first sci-fi book.
- Justin G.
- Catherine O.
- Jim F.
- 16 attendees
- Justin G.