Crafting Code Suggestions using Large Language Models


Details
Speaker
Albert Ziegler
Abstract
By virtue of scale and clever architecture, large language models have made significant leaps in their ability to predict source code. That gives them potential. But turning that potential into genuinely useful tooling requires overcoming a hard obstacle: developers don't actually write source code. At least, not in the linear, every-file-for-itself fashion that can be predicted so successfully. Solving this challenge is shaping up to be a central question for AI-based developer tooling.
Using the example of GitHub Copilot, I'll give some insights into the strategies we used to address it when making code snippet suggestions. There, I'll mainly focus on prompt crafting for code (roughly sketched below), addressing:
- intra-file reordering,
- codebase linearization, and
- large-scale assessment.
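
To give a flavour of what "codebase linearization" for prompt crafting can mean in practice, here is a minimal, hypothetical Python sketch: it ranks snippets from neighbouring files by crude lexical similarity to the code being written and packs the most relevant ones into a fixed prompt budget. All names (Snippet, build_prompt, jaccard) are illustrative assumptions, not GitHub Copilot's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    path: str   # file the snippet came from
    text: str   # snippet contents

def jaccard(a: str, b: str) -> float:
    """Crude lexical similarity between two code fragments."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def build_prompt(current_prefix: str,
                 neighbor_snippets: list[Snippet],
                 budget_chars: int = 2000) -> str:
    """Linearize snippets from other files into a single prompt prefix.

    Snippets most similar to the code being written are kept, and the
    result is truncated to a fixed budget so the prompt fits in the
    model's context window.
    """
    ranked = sorted(neighbor_snippets,
                    key=lambda s: jaccard(s.text, current_prefix),
                    reverse=True)
    parts: list[str] = []
    used = len(current_prefix)
    for snip in ranked:
        block = f"# From {snip.path}:\n{snip.text}\n"
        if used + len(block) > budget_chars:
            break
        parts.append(block)
        used += len(block)
    # Most relevant material ends up nearest the cursor position.
    return "".join(reversed(parts)) + current_prefix
```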
Bio
Albert Ziegler (https://githubnext.com/team/wunderalbert/) is a principal machine learning engineer with a background in mathematics and a home at GitHub Next, GitHub's innovation engine. His main interests are combinations of deductive and intuitive reasoning to improve the software development experience. He previously worked on developer productivity and ML-guided CodeQL, and he was part of the trio that conceived and then implemented the GitHub Copilot project.
Recent project: https://githubnext.com/projects/copilot-radar/
--
Attend in person, or online at https://kth-se.zoom.us/j/66538884335
