PyData Montreal Meetup #30 (in-person | en personne)
Details
With spooky season 🎃 upon us and winter starting to show its nose ❄️🥸, it’s the perfect opportunity to stay inside, grab a coffee… and attend a PyData meetup!
When, you ask? - November 7th.
But where, you retort? - 1001 rue Sherbrooke Ouest, Montreal, QC H3A 1G5, 2nd floor of the Bronfman Building.
And who will be presenting, you blurt out? - Two wonderful speakers: Jean-Olivier Pitre, Cloud Engineer at DataSphere Lab @McGill University, and Moetez Kdayem, MLOps Engineer at Alex Legal.
See a description of the talks below 👇:
AGENDA
- 18h00 - Open doors
- 18h10 - Introduction
- 18h20 - Talk #1
- 19h00 - Break
- 19h15 - Talk #2
- 20h00 - Networking
- 20h45 - End of event
 
TALKS
1. Python as an orchestrator for a RAG (retrieval-augmented generation) Architecture
By Jean-Olivier Pitre
Description of the talk :
Python shines in RAG (Retrieval-Augmented Generation) systems due to its efficiency in orchestrating various processes and its extensive libraries, such as LangChain and Hugging Face Transformers. The building blocks for RAG include data extraction and preprocessing, transforming data into vectors via embedding models, and using vector databases for retrieval. Python excels in setting up data pipelines for indexing, retrieval, and generation, integrating different components, and ensuring low-latency, high-efficiency real-time processing. Real-world applications of RAG systems showcase Python's benefits and challenges in implementation, demonstrating its versatility and robustness in managing complex data flows and interactions.
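To give a taste of the retrieval step described above, here is a minimal, self-contained sketch (not from the talk): documents and a query are embedded as toy bag-of-words vectors, and cosine similarity picks the most relevant context for the prompt. A real pipeline would swap in an embedding model and a vector database such as those the talk covers.

```python
# Toy sketch of the "retrieve" step in a RAG pipeline.
# The bag-of-words embedding below is a stand-in for a real embedding model.
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Python orchestrates indexing and retrieval pipelines.",
    "Vector databases store embeddings for fast lookup.",
    "Montreal winters are cold.",
]
context = retrieve("how do vector databases store embeddings", docs)[0]
prompt = f"Answer using this context: {context}"
print(prompt)
```

In a full system, the generated prompt would then be passed to a language model for the generation step.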
2. Advancing Deep Learning and Vision Efficiency with Mamba, VMamba, and Vim.
By Moetez Kdayem
Description of the talk:
Transformers are foundational in deep learning but face computational inefficiencies with long sequences. Inspired by continuous systems, Mamba is a simplified sequence model that makes State Space Model parameters dynamic and uses a hardware-aware parallel algorithm, achieving up to 5× faster inference than Transformers and linear scaling in sequence length. Mamba excels in language, audio, and genomics tasks without the need for attention mechanisms or MLP blocks. Building on Mamba, the architecture has been adapted for vision tasks, where challenges like position sensitivity and global context are crucial. VMamba employs Visual State-Space (VSS) blocks and a 2D Selective Scan (SS2D) module to handle visual data efficiently, setting new benchmarks in computational efficiency and performance. Similarly, Vim (Vision Mamba) uses bidirectional Mamba blocks with position embeddings, outperforming models like DeiT without relying on self-attention and highlighting the versatility of state-space models in vision applications.
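For those new to state-space models, the linear recurrence at their heart can be sketched in a few lines. This toy version (not from the talk) keeps the A, B, C parameters as fixed scalars for clarity; Mamba's key twist is making them input-dependent ("selective") and computing the scan with a hardware-aware parallel algorithm.

```python
# Illustrative scalar state-space model: a single O(sequence-length) scan.
#   h_t = A * h_{t-1} + B * u_t    (state update)
#   y_t = C * h_t                  (readout)
def ssm_scan(u, A=0.5, B=1.0, C=2.0):
    h, ys = 0.0, []
    for u_t in u:
        h = A * h + B * u_t  # state carries a decaying memory of past inputs
        ys.append(C * h)
    return ys

print(ssm_scan([1.0, 0.0, 0.0]))  # an impulse decays geometrically: [2.0, 1.0, 0.5]
```

The single pass over the sequence is what gives linear scaling, in contrast to the quadratic cost of full self-attention.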
We’re excited to see you there 🍂👋

