Efficient Neural Networks Through Tensor Networks
Details
Join us at PyData St. Louis for a talk and community discussion on how tensor networks can make neural networks more efficient, scalable, and memory-friendly.
Modern AI systems continue to grow larger and more computationally expensive. This session explores an alternative approach inspired by physics and scientific computing: tensor networks. These techniques can compress neural network representations while preserving much of their performance, offering a possible path toward more efficient AI systems.
Pizza and networking will begin at 5:30 PM, and the talk will start at 6:15 PM.
In this session, we’ll cover:
• What tensor networks are and why they matter
• The scaling and memory challenges of modern neural networks
• How tensor decompositions such as Matrix Product States (MPS), also known as Tensor Trains, compress information
• How tensor networks can reduce parameters in neural networks while maintaining performance
• Practical examples and demonstrations in Python
• Applications of tensor networks in machine learning and scientific computing
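As a small taste of the compression idea the talk will cover, here is an illustrative sketch (not from the talk materials): a truncated SVD, the simplest two-core case of a tensor-train factorization, splits one large weight matrix into two much smaller factors. All names and sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A low-rank 256x256 "weight matrix" (rank 8), standing in for a layer's weights.
W = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 256))

# Truncated SVD keeps only the top-r components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 8
core1 = U[:, :r] * s[:r]   # shape (256, r)
core2 = Vt[:r, :]          # shape (r, 256)

W_compressed = core1 @ core2  # reconstruction from the two small cores

print("original parameters:  ", W.size)                   # 65536
print("compressed parameters:", core1.size + core2.size)  # 4096
print("max reconstruction error:", np.abs(W - W_compressed).max())
```

Here the factored form stores 16x fewer numbers while reconstructing the matrix almost exactly, because the matrix is genuinely low-rank; real tensor-train methods generalize this splitting to many cores.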
The talk is beginner-friendly and open to anyone interested in data science, machine learning, scientific computing, or AI systems.
After the talk, from 6:50 PM to 7:30 PM, we’ll leave time for questions, discussion, and networking with others in the local data community.
Pizza will be provided.
Who should attend?
Anyone curious about machine learning, efficient AI systems, scientific computing, or physics-inspired approaches to data science, including students, professionals, hobbyists, and beginners.
Special thanks to Spark Coworking for providing the venue and supporting the local data science community.
Come learn, connect, and be part of the PyData St. Louis community!
PyData St. Louis is part of the global PyData community. PyData is an educational program of NumFOCUS, a nonprofit organization that promotes open practices in research, data, and scientific computing.
