Crafting Code Suggestions using Large Language Models
Details
Speaker
Albert Ziegler
Abstract
By virtue of scale and clever architecture, large language models have made significant leaps in their ability to predict source code. That gives them potential. But turning that potential into genuinely useful tooling requires overcoming a hard obstacle: developers don't actually write source code. At least, not in the linear, every-file-for-itself fashion that can be predicted so successfully. Solving this challenge is shaping up to be a central question for AI-based developer tooling.
Using the example of GitHub Copilot, I'll give some insights into the strategies we used to address it when making code snippet suggestions. There, I'll mainly focus on prompt crafting for code, addressing:
- intra-file reordering,
- codebase linearization, and
- large-scale assessment.
