What we're about

"Learn by Practice". Join us to learn and practice AI, Machine learning, Deep learning and Data Science technology together with like-minded developers.

Our goal is to bring together AI enthusiasts from all over Portland to learn and practice AI technology through tech talks, study jams, code labs, and more. We regularly invite tech leads from innovative companies and successful startups to share their hands-on experience and practices in the world of AI, cloud, and data.

If you’d like to speak at future meetups, co-promote your meetup or inquire about partnership opportunities, please feel free to reach out to us.

Upcoming events (4)

Productizing Machine Learning at the Edge

Online event

Online event, register here to receive the Zoom joining link: https://www.aicamp.ai/event/eventdetails/W2021102210

These days, training machine learning models at the device edge is still seen as a risky endeavor. It is frequently considered a purely academic subject with little value for real-life product development.
In her talk, Vera will challenge this misconception, discuss the advantages of learning at the edge, and guide you through an edge-learning decision-making framework and design principles.

Speaker: Vera Serdiukova (Salesforce)
Vera Serdiukova is a Senior AI Product Manager at Salesforce, working primarily in the field of Natural Language Processing. Before Salesforce, she was a part of LG’s Silicon Valley lab, where her focus was on building machine learning capabilities at the edge. Prior to that, Vera did product and project management in conversational AI, developing speech-enabled interfaces for Bosch’s robotics, connected car, and smart home products.

Modernize AI-powered Application Using Open Source Tools

This is an online tech talk; register on the website to receive the joining link.

Application modernization is inevitable in the era of digital transformation: it delivers architectural flexibility, improved ROI from infusing AI, and faster time to market.
In this talk, we will combine open source tools and OpenShift to automate the deployment of an AI-powered application in a hybrid multicloud environment.

You will learn how to provide a REST API for deep learning models using an open source framework, the basics of OpenShift and its source-to-image (S2I) concept, and how to build and deploy your AI-powered application as a microservice on OpenShift directly from the source code using those concepts.
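As a taste of the first topic, here is a minimal sketch of exposing a model behind a REST endpoint. It is an assumption for illustration only: it uses just the Python standard library and a stubbed `predict()` function, whereas the talk's actual demo uses an open source serving framework and a real deep learning model.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for a deep learning model (assumption): returns a dummy score.
    return {"score": sum(features) / max(len(features), 1)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Only serve the /predict route; everything else is a 404.
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for the demo.
        pass

def serve(port=0):
    # Port 0 lets the OS pick a free port; run the server in a daemon thread.
    server = HTTPServer(("127.0.0.1", port), PredictHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In an S2I workflow, source code like this (plus a dependency manifest) is all OpenShift needs to build and deploy the container image.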

Building and Scaling Robust Zero-code IoT Streaming Data Pipelines

This is an online tech talk; register on the website to receive the joining link.

With the rapid onset of the global COVID-19 pandemic in 2020, the US Centers for Disease Control and Prevention (CDC) quickly implemented a new COVID-19 pipeline to collect testing data from all of the US states and territories and produce multiple consumable results for federal and public agencies. They did this in under 30 days, using Apache Kafka.
Inspired by this story, we built two demonstration streaming pipelines for ingesting, storing, and visualizing public IoT data (tidal data from NOAA, the National Oceanic and Atmospheric Administration) using multiple open source technologies. The common ingestion technologies were Apache Kafka, Apache Kafka Connect, and the Apache Camel Kafka Connector, supplemented with Prometheus and Grafana for monitoring. The initial experiment used Open Distro for Elasticsearch and Kibana as the target storage and visualization technologies, while the second used PostgreSQL and Apache Superset.

In this talk we introduce each technology and the pipeline architecture, and walk through the steps followed, challenges encountered, and solutions used to build reliable and scalable pipelines and visualize the results (including tidal periods, ranges, and locations). We compare and contrast the two approaches, focusing on exception handling, scalability, performance, and monitoring, and on the pros and cons of the two visualization technologies (Kibana and Superset).
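To give a flavor of the zero-code ingestion style the talk describes: Kafka Connect pipelines are declared as connector configurations submitted to the Connect REST API, not written as custom producer code. The fragment below is an illustrative sketch using Kafka's bundled `FileStreamSourceConnector`; the file path and topic name are placeholders, and the talk's pipelines use Camel Kafka Connector classes to pull from the NOAA API instead.

```json
{
  "name": "tidal-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/data/noaa-tidal-readings.txt",
    "topic": "tidal-readings"
  }
}
```

POSTing this JSON to a Connect worker's `/connectors` endpoint starts the ingestion task; swapping the connector class and its properties changes the source or sink without touching application code.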

Online Tech Talk: Deliver Actionable Insights At Scale

Online event

This is an online tech talk; register on the website to receive the joining link.

Join data and analytics leaders and strategists for a panel discussion on strategies to translate raw data into actionable information that includes common enterprise metrics (like revenue, cost, and quantities) as well as standardized dimensions to sort, group, and categorize around clear, defined concepts (like time periods, geographic locations, and products).
Our featured speakers will share practical guidance and examples on how to prepare analysis-ready information using a semantic layer without the need for complex data engineering.

What will you learn in this webinar?
- Creating governed and consistent access to features, numerical and categorical data, hierarchies, dimensions, and measures.
- Dimensional analysis techniques for descriptive, diagnostic, predictive, and prescriptive analytics.
- Information types: process-mediated data, human-sourced information, machine-generated data, and context-setting information.
- Slicing and dicing information using business concepts like time, region, product, and price.
- Best practices for performance optimization and simplification of data prep for your BI teams.
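To make "slicing and dicing" concrete: it means aggregating a fact table's measures over whichever business dimensions you choose. The toy example below (an illustration, not material from the talk, using plain Python and made-up data) groups a small sales table by region or quarter.

```python
from collections import defaultdict

# Tiny fact table (hypothetical data): one measure, several dimensions.
sales = [
    {"region": "West", "quarter": "Q1", "product": "A", "revenue": 100},
    {"region": "West", "quarter": "Q2", "product": "A", "revenue": 150},
    {"region": "East", "quarter": "Q1", "product": "B", "revenue": 80},
    {"region": "East", "quarter": "Q2", "product": "A", "revenue": 120},
]

def slice_by(facts, *dims):
    """Sum the revenue measure over the named dimensions (the "slice")."""
    totals = defaultdict(int)
    for row in facts:
        key = tuple(row[d] for d in dims)
        totals[key] += row["revenue"]
    return dict(totals)
```

A semantic layer performs this same dimensional aggregation declaratively and at warehouse scale, so analysts pick dimensions and measures instead of writing grouping code.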

Who should join?
BI and analytics leaders and practitioners (e.g., chief data officers, data scientists, and data literacy, business intelligence, and analytics professionals) looking to translate raw data into actionable information at scale.

Featured speakers:
- Heather Fitzgerald, Head of Data & Business Intelligence, Jackson National Life
- Brian Allen, Big Data Engineer, Allstate
- Barry Devlin, Business Intelligence Author and Speaker
- Larry Clark, Distinguished Solution Engineer, Tableau Software
- Dave Mariani, Chief Technology Officer and Founder, AtScale
