About this group
ODSC brings together the open source and data science communities with the goal of helping its members learn, connect and grow.
The focus of this Meetup group is to allow ODSC to work with Meetup groups, non-profits, and other organizations to present informative lectures, workshops, code sprints and networking events to help grow the use of open source languages and tools within the data science and data-centric community. As such, our specific goals are:
1. Build a collaborative group to work with other Meetup groups, non-profits, and other organizations.
2. Promote the use of open source languages and tools amongst data scientists and others.
3. Host educational workshops.
4. Spread awareness of new open source languages and tools that can be used in data science.
5. Contribute back to the open source community.
Who is this meetup for?
• Data engineers, analysts, scientists, and other practitioners
• R, Python and other software engineers who work with data or want to learn
• Data visualization developers and designers
• Non-technical team leads, executives, and other decision makers from data-centric startups and large companies looking to utilize open source tools
Get Involved with our Meetups:
• Meetup/Webinar Speaker Submission Form https://forms.gle/STEDWxgWBMnLnt8F8
• Suggest a Meetup Topic Form
https://forms.gle/FAnBGMnC6puP1zLs6
• Volunteer Form
https://forms.gle/rJB2k8ZvU7mj1R3c8
• Host or Sponsor Form
https://forms.gle/bVdnzttfSuKkWrHq5
• Showcase your Startup Form
https://forms.gle/2Z31dmGPe7RTw28B9
ODSC Links:
• Get free access to more talks/trainings like this on the Ai+ Training platform:
https://hubs.li/H0Zycsf0
• ODSC blog: https://opendatascience.com/
• Facebook: https://www.facebook.com/OPENDATASCI
• Twitter: https://twitter.com/odsc & @odsc
• LinkedIn: https://www.linkedin.com/company/open-data-science
• Slack Channel: https://hubs.li/Q011lRxw0
• Code of conduct: https://odsc.com/code-of-conduct/
Upcoming events (2)
Designing AI for Trust - How To Create Value While Setting The Right Guardrails
To access this webinar, please register here: https://hubs.li/Q02y4Xjd0
Topic: "Designing AI for Trust - How To Create Value While Setting The Right Guardrails"
Speaker: Cal Al-Dhubaib, Head of AI & Data Science at Further
Cal Al-Dhubaib is a globally recognized data scientist, entrepreneur, and innovator in responsible artificial intelligence, specializing in heavily regulated sectors such as healthcare, energy, and defense. He leads AI and Data Science at Further, a privacy-first data, analytics, and AI company serving over 100 of the Fortune 500. Cal founded and scaled Pandata, known for responsible AI design and development. Under his guidance, Pandata worked with high-profile clients including Cleveland Clinic, Progressive Insurance, and Parker Hannifin, leading to its acquisition by Further in March 2024.
Cal frequently speaks on topics including AI ethics, change management, data literacy, and the unique challenges of implementing AI solutions in regulated industries. His insights have been featured in noteworthy publications such as Forbes, Nasdaq, VentureBeat, CDO Magazine, and Open Data Science. Cal has been recognized by Crain’s Cleveland as a Notable Immigrant Leader, Entrepreneur, and Technology Executive.
Abstract:
As AI becomes integral to business strategy, many organizations are navigating the complex interplay between technical innovation, creating business value, and managing risk. In many cases, challenges arise with human adoption, alignment with business values, risk management processes, and unexpectedly costly data curation efforts. With a focus on business and technical leaders responsible for bringing AI solutions to life, we will draw from best practices in designing and deploying AI solutions across mission-critical sectors such as healthcare, energy, and financial services, where trust is critical.
Participants will walk away with practical tools to lead their organizations in developing and deploying AI solutions that are not only technically sound but also widely trusted.
ODSC IN-PERSON MEETUP "Automating Data Curation for AI" - Hosted by Cleanlab
75 Hawthorne St, San Francisco, CA
Pre-registration is REQUIRED. Please RSVP on lu.ma here: https://hubs.li/Q02z0n3Y0
This meetup is co-organized by Cleanlab and ODSC.
Who: People who build, develop and apply LLMs or wish to learn more about them
When: 5:30-8:30 pm, Thursday, June 13, 2024
________________
Speaker: Curtis Northcutt, CEO and Co-Founder of Cleanlab
Curtis Northcutt is CEO and Co-Founder of Cleanlab, an AI software company that reduces the time and cost to improve machine learning model performance. He completed his PhD at MIT, where he invented Cleanlab’s algorithms for automatically finding and fixing label issues in any dataset. He was a recipient of MIT’s Morris Levin Thesis Award, an NSF Fellowship, and a Goldwater Scholarship, and has worked at several leading AI research groups including Google, Oculus, Amazon, Facebook, Microsoft, and NASA.
Talk: Automating Data Curation for AI: Algorithms and theory for finding and improving mislabeled data in any machine learning dataset.
Summary:
The coupling of machine intelligence and human intelligence has the potential to empower humans with augmented capabilities (e.g., improving rhyme density while writing song lyrics, enhancing empathy via emotion detection, and personalizing learning in online courses). Unfortunately, humans operate in an uncertain world, where the performance of even the most sophisticated model-centric artificially intelligent system often depends on its data-centric ability to deal with the uncertainty in the labels upon which it is trained.
To this end, we introduce confident learning whereby a machine (like humans) must learn with noisy-labeled data, directly quantify and identify label noise, and unlearn misconceptions by re-learning with confidence on cleaned data with erroneous labels removed. We achieve this by developing a principled theory and framework for confident learning with affordances for quantifying, identifying, and learning with label errors in data, and we open-source their implementations in the cleanlab Python package.
Based on human verification of the label errors found using cleanlab, we estimate a 3.4% lower-bound error rate for the test-set labels of ten of the most commonly used machine learning datasets across audio, image, and text modalities; examine the noise prevalence needed to change machine benchmark rankings; and provide corrected test sets so that humans can benchmark machine performance with increased confidence. We'll conclude the talk with several real-world customer use cases of Cleanlab Studio, a SaaS version of the open-source package, built on top of confident learning and other related algorithmic approaches.