What we're about

AppliedAI is a conference, podcast, newsletter, YouTube channel, and monthly meetup that develops Minnesota’s next generation of experts by educating citizens of all ages on the tools, processes, and applications needed to implement Artificial Intelligence (AI) solutions. The organization runs regular events focused on training, learning, and sharing real-world projects where AI is applied to solve problems that impact our lives.

Additionally, we encourage you to subscribe to our Conversation On Applied AI Podcast. We publish new episodes every other week on topics related to Artificial Intelligence and its applications! We also publish a Monthly Newsletter with details on our current events and ways to engage with our community.

Applied AI transitioned from IoTFuse in February 2020. IoTFuse continues to run conferences around the metro area.

What is Artificial Intelligence?

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".
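The "intelligent agent" definition above is essentially a loop: perceive the environment, then take the action that best advances the goal. A toy sketch of that abstraction (all names here are illustrative, not from any particular textbook's code):

```python
# Toy sketch of the "intelligent agent" abstraction: perceive -> act,
# choosing the action that maximizes progress toward a goal.

def perceive(environment, position):
    """The agent's percept: its current distance from the goal."""
    return abs(environment["goal"] - position)

def choose_action(environment, position):
    """Pick the move (-1 or +1) that most reduces distance to the goal."""
    return min((-1, +1), key=lambda a: abs(environment["goal"] - (position + a)))

def run_agent(environment, position, max_steps=20):
    for _ in range(max_steps):
        if perceive(environment, position) == 0:  # goal reached
            break
        position += choose_action(environment, position)
    return position

print(run_agent({"goal": 7}, position=2))  # reaches 7
```

Real AI systems replace the hand-written `choose_action` with something learned from data, but the perceive-act structure is the same.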

What is AppliedAI?

Simple. :) It's the real-world applications that these big, far-fetched ideas lead to. Some things around us that are applications of AI include digital assistants (Google Home, Alexa), self-driving cars, robots, chatbots, and many, many more!

Who should go to AppliedAI Meetups?

This group is a place for anyone who is interested in learning, sharing, and applying Artificial Intelligence to their products and services. It's also for people of all ages who are simply curious about what's going on in the AI space and want to network with real-world practitioners of this exciting new technology.

What happens at AppliedAI Meetups?

• We have speakers and panelists talking about AI in real-world scenarios
• We do demos of AI related projects and explain how they work
• We discuss where AI has been, where it is today, and where it's heading in the future
• We have fun!

Where else can I find you?

Follow us on social media!
Facebook, LinkedIn, and Twitter

We also have a podcast and YouTube page.

What are the colors of your logo?

They are derived from the color palette of a few of the first-ever AI-generated photos from Google’s Deep Learning tool.

Upcoming events

Chassis.ml: ML Containers Made Easy


Containers are a great way to build applications, but they’re not always easy to make or use—especially for data scientists. Chassis is an open-source project that turns ML models into containerized prediction APIs in just minutes. We built this tool for Data Scientists, Machine Learning Engineers, and DevOps teams who need an easier way to automatically build ML model containers and ship them to production.
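For a rough sense of the end product: a containerized prediction API is just a small web service wrapping a model's predict function. The sketch below uses only the Python standard library and a stubbed-out "model" (the mean of the inputs); it is not Chassis's actual API, just an illustration of the kind of service Chassis packages into a container for you:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "model": predicts the mean of the input features.
def predict(features):
    return sum(features) / len(features)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

if __name__ == "__main__":
    # Bind to an ephemeral port; in a real container the image's
    # ENTRYPOINT would launch this server on a fixed port.
    server = HTTPServer(("127.0.0.1", 0), PredictHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()

    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/predict",
        data=json.dumps({"features": [1.0, 2.0, 3.0]}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["prediction"])  # 2.0
    server.shutdown()
```

Chassis's value is automating the part this sketch leaves out: building the container image, baking in dependencies, and producing a consistent API contract for every model.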

Presenter:
Brad is an experienced technologist with a focus on AI/ML and Machine Learning Operations (MLOps). As a Data Scientist and Solution Engineer at Modzy, he helps organizations deploy ML models at enterprise scale and unlock value in their AI investments through robust MLOps pipelines.

Skills to Develop and Avoid When Building AI-Generated Content

NOTE: This meeting will be IN-PERSON at Lab651. Please RSVP so we can plan accordingly for food and drink!

Content generated from AI tools can run the gamut from high-quality and on-brand to poorly disguised copyright violations. Learn the most effective ways to build prompts, edit AI content, and ensure high-quality outputs from both text and image generators.

Our speaker this month is Deborah Carver. Deborah helps businesses ensure they are making sound investments in content and marketing technology. She also writes about demystifying publishing technologies and illuminating the gaps between the tech industry and the media. If it's SEO, UX, any kind of content recommendation algorithm, or new tools and workflows for content... you name it, she is figuring out how creatives can use and better understand technology.


Building Conversational AI with Python


In this mini-workshop, Dishant Gandhi, a conversational AI expert from our community, will share the ins and outs of building an AI assistant with Rasa's open-source platform!

Participants will learn about the following topics:

- How conversational AI works
- What natural language understanding (NLU) is
- Using Rasa to build conversational AI
- Chatting with the assistant

Specific use cases will be covered during the session. Register today!
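To give a taste of the NLU piece, Rasa defines intents through example utterances in YAML training data. A minimal sketch, assuming Rasa 3.x format (the intent names and example phrases here are illustrative):

```yaml
version: "3.1"

nlu:
- intent: greet
  examples: |
    - hi
    - hello there
    - good morning
- intent: ask_weather
  examples: |
    - what's the weather like
    - will it rain today
```

From examples like these, Rasa's NLU pipeline learns to map new user messages to the closest intent, which then drives the assistant's response.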

Using the NIST AI Risk Management Framework


With the current rapid pace of advancement in AI, organizations are often playing catch up when determining how to assure their products are not causing negative impacts or harm. The rise of generative AI typifies such harms, as well as the potential benefits of AI technology. Instead of reacting to ever-frequent technology launches, organizations can buttress their processes through risk management.

The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) provides organizations with a guiding structure to operate within, and outcomes to aspire towards, based on their specific contexts, use cases, and skillsets. The rights-affirming framework operationalizes AI system trustworthiness within a culture of responsible AI practice and use.

Our speaker this month is Reva Schwartz, a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST), a member of the NIST AI RMF team, and Principal Investigator on Bias in Artificial Intelligence for NIST’s Trustworthy and Responsible AI program.

Her research focuses on evaluating AI system trustworthiness, studying AI system impacts, and driving an understanding of socio-technical systems within computational environments. She has advised federal agencies about how experts interact with automation to make sense of information in high-stakes settings.

Reva's background is in linguistics and experimental phonetics. Having been a forensic scientist for more than a decade, she has seen the risks of automated systems up close. She advocates for interdisciplinary perspectives and brings contextual awareness into AI system design protocols.