
[Paper Reading]: Energy-Based Transformers are Scalable Learners and Thinkers

Hosted By
Kate A. and SupportVectors AI L.

Details

This week, we will walk through and discuss the paper: Energy-Based Transformers are Scalable Learners and Thinkers [https://arxiv.org/abs/2507.02092]

Abstract of the paper:
Inference-time computation techniques, analogous to human System 2 Thinking, have recently become popular for improving model performances. However, most existing approaches suffer from several limitations: they are modality-specific (e.g., working only in text), problem-specific (e.g., verifiable domains like math and coding), or require additional supervision/training on top of unsupervised pretraining (e.g., verifiers or verifiable rewards). In this paper, we ask the question "Is it possible to generalize these System 2 Thinking approaches, and develop models that learn to think solely from unsupervised learning?" Interestingly, we find the answer is yes, by learning to explicitly verify the compatibility between inputs and candidate-predictions, and then re-framing prediction problems as optimization with respect to this verifier. Specifically, we train Energy-Based Transformers (EBTs) -- a new class of Energy-Based Models (EBMs) -- to assign an energy value to every input and candidate-prediction pair, enabling predictions through gradient descent-based energy minimization until convergence. Across both discrete (text) and continuous (visual) modalities, we find EBTs scale faster than the dominant Transformer++ approach during training, achieving an up to 35% higher scaling rate with respect to data, batch size, parameters, FLOPs, and depth. During inference, EBTs improve performance with System 2 Thinking by 29% more than the Transformer++ on language tasks, and EBTs outperform Diffusion Transformers on image denoising while using fewer forward passes. Further, we find that EBTs achieve better results than existing models on most downstream tasks given the same or worse pretraining performance, suggesting that EBTs generalize better than existing approaches. Consequently, EBTs are a promising new paradigm for scaling both the learning and thinking capabilities of models.
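
For those who want a concrete mental picture before the session, here is a minimal sketch of the prediction-as-energy-minimization idea described in the abstract. It is not the authors' code: the ToyEnergyModel, dimensions, step count, and learning rate below are placeholder assumptions, and a small MLP stands in for the actual Transformer energy model.

import torch
import torch.nn as nn

class ToyEnergyModel(nn.Module):
    """Illustrative stand-in for an EBT: maps an (input, candidate-prediction)
    pair to a scalar energy, where lower energy means higher compatibility."""
    def __init__(self, context_dim=16, pred_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim + pred_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, 1),  # scalar energy per pair
        )

    def forward(self, context, candidate):
        return self.net(torch.cat([context, candidate], dim=-1))

def predict_by_energy_minimization(model, context, pred_dim=16, steps=20, lr=0.1):
    """'Thinking' as optimization: start from a random candidate prediction and
    run gradient descent on the energy (with respect to the candidate, not the
    model weights) until it approximately converges."""
    for p in model.parameters():
        p.requires_grad_(False)  # freeze the verifier; only the candidate is updated
    candidate = torch.randn(context.shape[0], pred_dim, requires_grad=True)
    optimizer = torch.optim.SGD([candidate], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        energy = model(context, candidate).sum()
        energy.backward()
        optimizer.step()
    return candidate.detach()

model = ToyEnergyModel()
context = torch.randn(4, 16)                      # batch of 4 illustrative inputs
prediction = predict_by_energy_minimization(model, context)
print(prediction.shape)                           # torch.Size([4, 16])

Spending more optimization steps on harder inputs is, roughly, what the paper frames as System 2 Thinking at inference time.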

-----------

We are a group of applied AI practitioners and enthusiasts who have formed a collective learning community. Every Wednesday evening (PST), we hold our research paper reading seminar on an AI topic. One member carefully explains the paper, making it more accessible to a broader audience. We then follow the reading with a more informal discussion and socializing.

You are welcome to join in person or over Zoom. SupportVectors is an AI training lab located in Fremont, CA, close to Tesla and easily accessible by road and BART. We follow the weekly sessions with snacks, soft drinks, and informal discussions.

If you want to attend by Zoom, the Zoom registration link will be visible once you RSVP. Note that we have had to change the Zoom link and add security to prevent Zoom-bombing.

SupportVectors: Generative AI, LLMs, Machine Learning
This is a hybrid event.

Every week on Wednesday until February 11, 2026

In Person
SupportVectors
46540 Fremont Blvd, Suite 506 · Fremont, CA
Online event
Link visible for attendees
FREE