What we’re about
AppliedAI is a conference, podcast, newsletter, YouTube channel, and monthly meetup that develops Minnesota's next generation of experts by educating people of all ages on the tools, processes, and applications needed to implement Artificial Intelligence (AI) solutions. The organization runs regular events centered on training, learning, and sharing real-world projects where AI is applied to solve problems that impact our lives.
Additionally, we encourage you to subscribe to our Conversation On Applied AI Podcast. We publish new episodes every other week on topics related to Artificial Intelligence and its applications! We also publish a Monthly Newsletter with details on our current events and ways to engage with our community.
AppliedAI transitioned from IoTFuse in February 2020. IoTFuse continues to run conferences around the metro area.
What is Artificial Intelligence?
In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".
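The "intelligent agent" definition above can be made concrete with a tiny sketch: an agent perceives its environment and picks the action that moves it toward its goal. The thermostat scenario and all names here are illustrative assumptions, not anything from a real AI library.

```python
# Minimal sketch of the textbook "intelligent agent" idea:
# perceive the environment, then act to move toward a goal.
# ThermostatAgent is a hypothetical toy example.

class ThermostatAgent:
    """Toy reflex agent: perceives a temperature, acts to reach a target."""

    def __init__(self, target: float):
        self.target = target

    def act(self, perceived_temp: float) -> str:
        # Choose the action that best advances the goal (target temperature).
        if perceived_temp < self.target - 0.5:
            return "heat"
        if perceived_temp > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21.0)
print(agent.act(18.0))  # heat
print(agent.act(21.2))  # idle
```

Even this trivial loop has the two ingredients the definition names: a percept (the temperature reading) and a goal-directed action.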
What is AppliedAI?
Simple. :) It's the real-world applications that these far-fetched, big ideas lead to. Some of the applications of AI all around us include digital assistants (Google Home, Alexa), self-driving cars, robots, chatbots, and many, many more!
Who should go to AppliedAI Meetups?
This group is a place for anyone who is interested in learning, sharing, and applying Artificial Intelligence to their products and services. It's also for people of all ages who simply want to know what's going on in the AI space by networking and talking with real-world practitioners of this exciting new technology.
What happens at AppliedAI Meetups?
• We have speakers and panelists talking about AI in real-world scenarios
• We do demos of AI related projects and explain how they work
• We discuss where AI has been, where it is today, and where it's heading in the future
• We have fun!
Where else can I find you?
What are the colors of your logo?
They are derived from the color palette of a few of the first-ever AI-generated photos from Google’s Deep Learning tool.
Upcoming events
- Using the NIST AI Risk Management Framework
With the current rapid pace of advancement in AI, organizations are often playing catch-up when determining how to assure their products are not causing negative impacts or harm. The rise of generative AI typifies both such harms and the potential benefits of AI technology. Instead of reacting to ever more frequent technology launches, organizations can buttress their processes through risk management.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) provides organizations with a guiding structure to operate within, and outcomes to aspire towards, based on their specific contexts, use cases, and skillsets. The rights-affirming framework operationalizes AI system trustworthiness within a culture of responsible AI practice and use.
Our speaker this month is Reva Schwartz. Reva is a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST), a member of the NIST AI RMF team, and Principal Investigator on Bias in Artificial Intelligence for NIST's Trustworthy and Responsible AI program.
Her research focuses on evaluating AI system trustworthiness, studying AI system impacts, and driving an understanding of socio-technical systems within computational environments. She has advised federal agencies about how experts interact with automation to make sense of information in high-stakes settings.
Reva's background is in linguistics and experimental phonetics. Having been a forensic scientist for more than a decade, she has seen the risks of automated systems up close. She advocates for interdisciplinary perspectives and brings contextual awareness into AI system design protocols.
- When Images Aren't RGB - a Light Walk Through AI Visible-Thermal Research
Conventional computer vision applications usually learn features from the visible spectrum. From typical image classification to even generative algorithms, RGB imagery is frequently used as training data. In this talk, I'll walk you through some of the research I've worked on, from classic image registration to generative modeling in the thermal spectrum. Thermal imagery, particularly imagery captured in Long-Wave Infrared (LWIR), visualizes radiated heat at longer wavelengths than the visible spectrum. As a result, conventional methods from face detection to image alignment cannot be easily transferred from the visible to the thermal domain due to differences in intensity and texture. Join me as I touch on these projects and introduce you to some of the motivations for why thermal-based computer vision is relevant in our society.
Catherine Ordun is an Executive Advisor of AI at Booz Allen Hamilton, currently working on her dissertation as a PhD Candidate at the University of Maryland, Baltimore County. Her research focuses on visible-thermal image translation and registration, in addition to multimodal pain detection. At Booz Allen, she wears many hats, from leading AI research for the NIH to technical validation of partner companies through the firm's VC fund to developing novel algorithms for multiple Federal agencies. She holds an MPH and an MBA and is looking forward to finally defending her dissertation in October 2023. To escape the drudgery of PhD life, she's writing her first sci-fi book.
- The Curse (and Blessing) of Dimensionality (Lab651, St Paul, MN)
So much of data science is oriented around dimensionality reduction. Covariance analysis, regression, Principal Component Analysis, and autoencoders all address the curse of dimensionality through reduction. However, if the right tools are used, dimensionality is in fact a blessing when it comes to segmentation and anomaly detection. Several examples will be presented illustrating the main points.
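As a taste of the dimensionality-reduction theme above, here is a minimal PCA sketch via eigendecomposition of the covariance matrix. The toy dataset and all variable names are hypothetical, purely for illustration; it is not material from the talk itself.

```python
import numpy as np

# Hypothetical toy dataset: 200 samples in 5 dimensions, constructed so
# that most of the variance lies along just 2 latent directions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 5))

# PCA: eigendecomposition of the covariance matrix of centered data.
Xc = X - X.mean(axis=0)               # center the data
cov = np.cov(Xc, rowvar=False)        # 5x5 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]     # sort components by variance, descending
components = eigvecs[:, order[:2]]    # keep the top 2 principal directions

X_reduced = Xc @ components           # project 5-D points down to 2-D
explained = eigvals[order[:2]].sum() / eigvals.sum()
print(X_reduced.shape)                # (200, 2)
```

Here reduction works because the data really is low-dimensional plus noise; the talk's point is that when the right tools are used, high dimensionality can instead be an asset for segmentation and anomaly detection.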
Join us this month as we are thrilled to have Brian Turnquist from Boon Logic present on this subject! Dr. Turnquist has worked in machine learning for the past twenty years, developing numerous novel algorithms for automatically clustering biological signals in real time. Turnquist is CTO of the Minneapolis tech start-up Boon Logic and was a tenured professor at Bethel University. His Ph.D. is in Mathematics from the University of Maryland, and he has long-standing joint projects with researchers at Johns Hopkins University, Yale University, the University of Erlangen-Nürnberg (Germany), and the University of Mannheim-Heidelberg (Germany), including additional collaborations with Harvard University, the University of Nagoya (Japan), Hanyang University (Seoul), the University of Oslo (Norway), the University of Minnesota, Purdue Pharma, and Allergan. Turnquist was a visiting researcher at the University of Nürnberg (2004-2005) and at the University of Heidelberg (2011-2012), developing algorithms to detect and classify biological signals, control neural stimulators, and automatically classify ultrasonic acoustic signals in real time. Turnquist has fifteen refereed publications in neuroscience and mathematics and is the author of the C++-based software package Dapsys, which is in use by numerous laboratories worldwide.
We'll be doing this event in person at Lab651. We'll have networking, pizza, and drinks starting at 6:00 pm. See you then!