Enhanced base LLMs: Perplexity Sonar+DeepSeek R1 optimized by fine tuning Llama

Hosted By
FounderCulture
Details

FounderCulture, Storytell and UpHonest Capital have teamed up to create a Meetup series called "Building The Future of AI."

🟢 This event will be live in San Mateo, CA -- it will also be live-streamed on this Zoom link if you can't join in person. The Zoom will start at 6:30pm PT.

Join us for a robust discussion led by Tim Kellogg and moderated by DROdio.

Enhanced base LLMs: Perplexity Sonar+DeepSeek R1 optimized by fine tuning Llama

The rapid evolution of large language models has brought fine-tuning to the forefront as a critical strategy for enhancing performance. This session will explore the cutting-edge approaches used to optimize two leading models: Sonar and DeepSeek R1, both fine-tuned on Llama architectures.

  • Perplexity's Sonar, built on Llama 3.3 70B, has been further trained to excel at factuality and readability, making it ideal for search applications. Its integration with Cerebras inference technology enables blazing-fast token processing, allowing for real-time responses. The session will also cover open-source base models and Mixture of Experts (MoE) architectures, which activate only a subset of parameters per query.
  • DeepSeek-R1-Distill-Llama-70B, a version of Llama 70B fine-tuned on reasoning samples generated by DeepSeek R1, has demonstrated exceptional performance on benchmarks like MATH-500 (94.5%) and AIME 2024 (86.7%). It supports a full 128k context window and optimized token throughput, significantly enhancing its reasoning capabilities.
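The MoE routing mentioned above can be illustrated in a few lines: a learned gate scores a set of expert sub-networks and only the top-k run for each token. This is a minimal toy sketch in PyTorch (tiny dimensions, not the actual Sonar or DeepSeek architecture; all sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy Mixture-of-Experts layer: each token activates only k of n experts."""
    def __init__(self, d_model=16, n_experts=4, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # router scores experts per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                           # x: (tokens, d_model)
        scores = self.gate(x)                       # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # keep only top-k experts
        weights = F.softmax(weights, dim=-1)        # renormalize kept scores
        out = torch.zeros_like(x)
        for slot in range(self.k):                  # weighted sum of chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

tokens = torch.randn(8, 16)
moe = TopKMoE()
y = moe(tokens)
print(y.shape)
```

Only 2 of 4 expert MLPs run per token here, which is why MoE models can carry a large total parameter count while keeping per-query compute low.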

This meetup will delve into the methodologies behind fine-tuning, the technological advancements that support it, and the emerging evidence from models like Sonar and DeepSeek R1. Discussion will encompass techniques, infrastructure needs, and cost considerations involved in fine-tuning LLMs for reasoning tasks, revealing how businesses and researchers can leverage this practice to enhance performance and yield better results.
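At its core, the distillation recipe behind models like DeepSeek-R1-Distill-Llama-70B reduces to ordinary supervised fine-tuning with a next-token cross-entropy loss on teacher-generated sequences. A toy sketch, with random token IDs standing in for R1-generated reasoning traces and a tiny bigram model standing in for the Llama student (all names and sizes are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, d = 100, 32

# Stand-in "teacher" data: in practice these would be reasoning traces
# sampled from DeepSeek R1 and filtered for correctness.
data = torch.randint(0, vocab, (64, 12))

# Tiny stand-in for the student LM (embedding + output projection).
student = nn.Sequential(nn.Embedding(vocab, d), nn.Linear(d, vocab))
opt = torch.optim.AdamW(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

initial_loss = None
for step in range(50):
    inputs, targets = data[:, :-1], data[:, 1:]        # next-token prediction
    logits = student(inputs)                           # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab), targets.reshape(-1))
    if initial_loss is None:
        initial_loss = loss.item()
    opt.zero_grad()
    loss.backward()
    opt.step()

final_loss = loss.item()
print(initial_loss, final_loss)
```

The real recipe swaps in a 70B Llama checkpoint, curated reasoning traces, and distributed training infrastructure, but the objective is the same plain cross-entropy, which is part of why distillation is comparatively cheap.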

Join us as we explore how fine-tuning is challenging the traditional notion that base LLMs should not be fine-tuned, and how these optimized models are setting new benchmarks for efficiency and accuracy in AI search and reasoning.

šŸŽ FounderCulture is powered by our Patrons -- consider becoming a Patron to support our vision of improving the odds for Founders globally (you'll get early access to RSVP to our events!)

Building the Future of AI
This is a hybrid event.
In Person
San Mateo
· San Mateo, CA
Online event
This event has passed