Navigating LLM Deployment: Tips, Tricks, and Techniques
Self-hosted Language Models are going to power the next generation of applications in critical industries like financial services, healthcare, and defence. Self-hosting LLMs, as opposed to using API-based models, comes with its own host of challenges: in addition to solving business problems, engineers need to wrestle with the intricacies of model inference, deployment, and infrastructure. In this talk we will discuss best practices in model optimisation, serving, and monitoring, with practical tips and real case studies.
⏰ Agenda
19:00 - 19:10 - Welcome speech and introduction from WWCode
19:10 - 19:45 - Meryem's Talk
19:45 - 20:00 - Q&A and wrapping up the session
🎙️ Speaker
Meryem Arik, Co-founder @ TitanML | LinkedIn
Meryem co-founded TitanML with the vision of creating a seamless and secure infrastructure for enterprise LLM deployments. Meryem's training was in Theoretical Physics and Philosophy at the University of Oxford. Beyond her contributions to TitanML, Meryem is dedicated to sharing her insights on the practical and ethical adoption of AI in enterprise.
👩🏻 Event Host
Silke Nodwell, Data Scientist at TAC Index
LinkedIn
👩🏽‍💻 About Women Who Code
Women Who Code is the largest and most active community of engineers dedicated to inspiring women to excel in technology careers. We envision a world where women are represented as technical executives, founders, VCs, board members, and software engineers. Our programs are designed to get you there.
Join us on Slack
Email us: [london@womenwhocode.com](mailto:london@womenwhocode.com)
👩🏽‍💻 Code of Conduct
WWCode London events are dedicated to providing inclusive and safe experiences for everyone. Before attending, please read our code of conduct. Read the full version and access our incident report form here.
