
Study Group: On the Measure of Intelligence

Hosted By
Adrian

Details

Machine learning is becoming increasingly relevant to everyone who works in technology. In these study group events we will collectively walk through machine learning research papers in order to better understand their theoretical motivations, practical contributions, and implementation details.

This time, we’ll discuss a paper that promotes research on more generally intelligent systems than the current norm. It develops criteria for general intelligence and introduces a new benchmark task.

On the Measure of Intelligence
Paper link: https://arxiv.org/abs/1911.01547

Everyone should take the time to read the paper in detail several days before the event, and (to the extent possible) read the key references. The paper is long, but it reads easily.

The event will cost $5/person -- cash only, charged at the event -- to help cover the cost of the meeting room.

Organizational Notes:

  • We'll use Meetup RSVPs to sign you in when you arrive.
  • Attendee selection process: each event opens with an initial RSVP capacity. If a waiting list forms, we select another batch from it based on profile information, aiming for a good balance of backgrounds. Priority goes to Meetup group members whose prior RSVPs have been reliable and whose profiles have the name and introduction sections filled out.
  • We overbook slightly. Seating capacity is about 10, plus a few extra chairs that can be brought in and placed in the corners. If you need a seat, arrive early; latecomers may have to stand or form their own mini-group outside the conference room, as directed by the organizer.

Paper abstract

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
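For those who want a concrete feel for the benchmark before the event: ARC tasks are distributed as JSON files of train/test pairs, where each grid is a small 2-D array of integers 0-9 (colors). The task below is a hypothetical example made up for illustration, not one from the actual corpus, but it follows the published format; the sketch shows how a candidate "program" is checked against a task's demonstration and test pairs.

```python
# Hypothetical ARC-style task in the published JSON structure:
# grids are lists of rows of ints 0-9, grouped into train/test pairs.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]},
    ],
}

def flip_rows(grid):
    """Candidate program: mirror the grid horizontally (reverse each row)."""
    return [list(reversed(row)) for row in grid]

def solves(task, program):
    """A program solves a task only if it reproduces every output exactly."""
    pairs = task["train"] + task["test"]
    return all(program(p["input"]) == p["output"] for p in pairs)

print(solves(task, flip_rows))  # → True
```

The point the paper stresses is that the solver sees only a handful of demonstrations per task, so brute skill acquired from large training sets does not help; each task must be solved from the priors and the few examples alone.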

Looking forward to discussing this paper with you in our study group!

Montreal Machine Learning
Café Parvis
433 Rue Mayor · Montréal, QC