
Details

We will host the next in-person ML monthly meetup (February) in collaboration with NVIDIA Inception. Please register on the event website: https://www.aicamp.ai/event/eventdetails/W2023020920

[RSVP Instructions]

  • RSVP is closed on Meetup; you must register at the event website so we have your correct name for printing your badge and checking you in. NO walk-ins, NO access without a badge.
  • Contact us to submit talk topics and/or sponsor the meetup (venue, food, drinks, swag, prizes): https://forms.gle/JkMt91CZRtoJBSFUA
  • Join our community Slack for event chat, speaker office hours, learning resources, job openings, and project collaboration.

Agenda:

  • 10:00am~10:20am: Check-in and Networking
  • 10:20am~10:30am: Welcome/Community update/Sponsor intro
  • 10:30am~12:00pm: Tech talks
  • 12:00pm~12:30pm: Open discussion and Lucky draw
  • 12:30pm~1:30pm: Lunch

Tech Talk 1: Build Production-Grade Data and ML Workflows
Speaker: Samhita Alla, Engineer @Union.ai
Abstract: Effective workflow orchestration is critical to the successful operation of data and machine learning workflows. It may not always be at the forefront of considerations when building and managing these workflows, but it is crucial for smooth, efficient operation as ML and data workflows become more prevalent within teams and companies. A thorough understanding of the nuances of orchestration is essential to fully optimize these systems.

Flyte is a powerful solution for data and ML workflow orchestration that simplifies the process of collaborating, scaling, and deploying these workflows. Flytekit, the Python SDK for Flyte, enables data and ML teams to write business logic as isolated tasks, compose them into more complex workflows, share them within teams, and horizontally/vertically scale compute resources.

In this talk, attendees will learn about the importance of workflow orchestration, the challenges of building data and ML workflows, and how Flyte can help overcome them. They will gain an understanding of Flyte's features and integrations, followed by a demonstration of the tool in action.
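As a taste of what the talk covers, here is a minimal sketch of a Flyte workflow written with Flytekit: two functions decorated as tasks and composed into a workflow. The task logic, data, and resource requests are illustrative placeholders, not material from the talk itself.

```python
from typing import List

from flytekit import Resources, task, workflow


# Each task is an isolated, typed unit of business logic.
@task(requests=Resources(cpu="1", mem="500Mi"))
def clean(raw: List[int]) -> List[int]:
    # Drop negative values as a stand-in for real preprocessing.
    return [x for x in raw if x >= 0]


@task(requests=Resources(cpu="2", mem="1Gi"))
def summarize(values: List[int]) -> float:
    # A placeholder "model" step: just compute the mean.
    return sum(values) / len(values) if values else 0.0


# The workflow composes tasks into a DAG; Flyte handles scheduling,
# caching, and scaling the underlying compute.
@workflow
def pipeline(raw: List[int]) -> float:
    return summarize(values=clean(raw=raw))


if __name__ == "__main__":
    # Workflows can also be executed locally for quick iteration.
    print(pipeline(raw=[3, -1, 4, 1, 5]))
```

The same workflow can later be registered to a Flyte cluster and shared across a team without changing the business logic.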

Tech Talk 2: Deploy Multiple Models Using the NVIDIA Triton Inference Server
Speaker: Ayyanar Jeyakrishnan
Abstract: Triton Inference Server from NVIDIA is a production-ready deep learning inference server designed to serve large models. It helps with the scalability, performance optimization, deployment flexibility, monitoring, and management of large-model inference.
In this session, we will take a deep dive into Triton and demonstrate how to use it with SageMaker to deploy and serve large models with ease on any infrastructure.
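For a sense of what serving with Triton looks like from the client side, here is a minimal sketch using the tritonclient Python package against a locally running server. The model name, input/output tensor names, and shapes are hypothetical and must match the deployed model's configuration; a SageMaker-hosted endpoint would instead be invoked through the SageMaker runtime.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be listening on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder tensor names and shapes; these must match the model's config.pbtxt.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("INPUT__0", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)

outputs = [httpclient.InferRequestedOutput("OUTPUT__0")]

# Run inference against a hypothetical "resnet50" model in the model repository.
response = client.infer(model_name="resnet50", inputs=inputs, outputs=outputs)
print(response.as_numpy("OUTPUT__0").shape)
```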

