Techceleration's! Let’s Talk Tech - 21st Edition


Details
Announcing Techceleration's! Let’s Talk Tech - 21st Edition
Date & Time: 26th February, Wednesday, 5.00 PM
Topic 1: Memory Layers by Krithiga
In this talk, we will explore memory layers, a powerful mechanism that enables deep learning models to efficiently store, retrieve, and utilize long-term knowledge. We will cover how memory layers work, architectures that incorporate them, real-world applications, and open research topics in the space.
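Ahead of the talk, here is a minimal sketch of the core idea: a memory layer stores a table of key and value vectors, matches an incoming query against the keys, and returns a softmax-weighted blend of the best-matching values. This is plain NumPy for illustration only; the function name, top-k size, and random data are assumptions, not any specific architecture from the talk.

```python
import numpy as np

def memory_layer(query, keys, values, k=4):
    """Look up a query in a key-value memory: score all keys,
    keep the top-k matches, and softmax-blend their values."""
    scores = keys @ query                        # similarity of query to every stored key
    topk = np.argsort(scores)[-k:]               # indices of the k best-matching slots
    w = np.exp(scores[topk] - scores[topk].max())
    w /= w.sum()                                 # softmax weights over the selected slots
    return w @ values[topk]                      # weighted combination of memory values

# Toy memory: 1024 slots of 16-dimensional keys and values.
rng = np.random.default_rng(0)
d, n_slots = 16, 1024
keys = rng.standard_normal((n_slots, d))
values = rng.standard_normal((n_slots, d))
query = rng.standard_normal(d)
out = memory_layer(query, keys, values)
print(out.shape)  # (16,)
```

Because only the top-k slots are touched per query, the memory table can grow very large while the per-query compute stays small, which is what makes such layers attractive for long-term knowledge storage.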
Speaker: Krithiga ( https://www.linkedin.com/in/krithiga06/ ) works as a Lead ML Engineer at Toyota Connected India, Chennai. Her interests lie at the intersection of Natural Language Processing and Deep Learning.
***
Topic 2: FPGAs for Deep Learning by Dr. Karthikeyan Rajagopal
Field-Programmable Gate Arrays (FPGAs) are gaining prominence in deep learning due to their flexibility, power efficiency, and high-performance capabilities. Unlike GPUs, which are general-purpose accelerators, FPGAs can be reconfigured to implement optimized deep learning models with hardware-level parallelism. This makes them ideal for low-latency, real-time AI applications, particularly in edge computing and embedded AI systems.
Leading FPGA solutions for deep learning include Intel FPGAs (formerly Altera) and AMD Xilinx FPGAs:
- Intel FPGAs: Intel’s Arria 10, Stratix 10, and Agilex FPGAs offer high-performance AI acceleration with optimized deep learning libraries such as OpenVINO. These FPGAs are widely used in AI inference applications, including autonomous systems and cloud-based AI workloads.
- AMD Xilinx FPGAs: Xilinx’s Versal AI Engine, Alveo, and Zynq UltraScale+ MPSoC platforms provide efficient deep learning acceleration, with dedicated DSP blocks and support for AI inference frameworks like Vitis AI. These FPGAs are extensively used in real-time AI applications, including autonomous vehicles, industrial automation, and edge AI solutions.
Both Intel and AMD Xilinx FPGAs offer key advantages such as power efficiency, reconfigurability, and scalability, making them preferred choices for deploying AI workloads in constrained environments where energy consumption and latency are critical.
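One reason FPGAs achieve such power efficiency is that deep learning workloads are typically quantized to low-precision integers, which map directly onto FPGA DSP blocks. The sketch below (plain NumPy, purely illustrative; the function names and scales are assumptions, not vendor toolchain code) shows the fixed-point arithmetic pattern: weights and activations are quantized to int8, accumulated exactly in int32, then rescaled back to floating point.

```python
import numpy as np

def quantize(x, scale):
    """Map float values to int8, the typical precision for FPGA inference."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_matvec(W_q, x_q, w_scale, x_scale):
    """Integer matrix-vector product with exact int32 accumulation,
    then a single dequantization step, mirroring a fixed-point MAC pipeline."""
    acc = W_q.astype(np.int32) @ x_q.astype(np.int32)  # exact integer accumulate
    return acc * (w_scale * x_scale)                   # rescale back to float

# Compare the quantized path against full-precision reference.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16)).astype(np.float32)
x = rng.standard_normal(16).astype(np.float32)
w_s, x_s = np.abs(W).max() / 127, np.abs(x).max() / 127
y_int8 = int8_matvec(quantize(W, w_s), quantize(x, x_s), w_s, x_s)
y_fp32 = W @ x
print(np.max(np.abs(y_int8 - y_fp32)))  # small quantization error
```

In practice, toolchains such as OpenVINO and Vitis AI perform this quantization and map the resulting integer kernels onto the device; the sketch only illustrates why low-precision, hardware-parallel arithmetic suits latency- and power-constrained deployments.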
Speaker: Dr. Karthikeyan Rajagopal ( https://www.linkedin.com/in/karthikeyan-rajagopal-65731211/ ) is Director of Research at SRM Group of Institutions (Chennai Ramapuram & Trichy Campus), India. He has been ranked in the global top 2% scientist list published by Stanford University-Elsevier, in both the single-year list (top 0.4%) and the career list (top 2%), for the fourth consecutive year, and among the top 100 Indian scientists (ranked 80th) in the engineering and technology field.
***
Registration / RSVP: Event Attendee link: https://events.teams.microsoft.com/event/d856ebda-da5d-47f5-be3e-f1718111a9ab@cfb57949-7a2c-4f96-b7f8-05382a502dad
*Order of topics might change
