What Is Intelligence, and Are Other Animals and AI Truly Intelligent?

I would like us to discuss two related issues:
How do we define intelligence?
What other animals/machines are intelligent?

How should we apply the word intelligent to humans, animals, and AI in the philosophical sense? Consciousness and the mind/body problem are related and may crop up, but they are not central to this discussion (we can discuss those issues separately, and indeed already have). Instead, we can discuss what intelligence is and how we recognise or judge intelligence in non-humans. The ultimate question of the session is “to what extent do we think AI is intelligent?”, and on the way there we can also ask how the intelligence of non-human animals compares to ours.

WHAT IS INTELLIGENCE?

Intelligence is difficult to pin down, but some common themes are that it consists of:

  1. Gathering information from our environment in a learning process (learning);
  2. Memorising the information (memory);
  3. Understanding and modelling the outcomes resulting from specific actions (understanding);
  4. Adapting models to fit different environments to achieve different goals (adapting).

This list is somewhat subjective, because any such list depends on what we already take intelligence to mean. Skills such as communication probably help us achieve these things, so perhaps we can view intelligence as a toolbox of skills that help us achieve a goal. Do you agree with this definition?

This is an explainer of some of the background to what might be regarded as intelligence:
https://www.youtube.com/watch?v=ck4RGeoHFko

When we talk about intelligence applying or not applying to animals or computers, it strikes me that what we really want to judge is whether they think like humans. When we think of intelligence, we think of our own intelligence. So could intelligence be something completely different?

ARE OTHER ANIMALS INTELLIGENT?

This seems like a no-brainer (excuse the pun): non-human animals are clearly intelligent. Are there any nuances to this? Their lives as they experience them are probably very different from ours, and it may be beyond us to fully understand their world. By and large, though, they do exhibit the skills I listed earlier: animals are generally good at skills 1 and 2 (learning and memory), but less good at skills 3 and 4 (understanding and adapting), whereas humans are good at all four.

One difference between us and other animals is that their language is far less sophisticated. But it would be wrong to say that they have no language at all: many animals, such as whales and dolphins, certainly communicate. So is the difference one of quality or of magnitude?

If you have time this video is very good at explaining the relevance of this (if you don’t have time to watch everything, at least watch the segment “what is intelligence” at 20 mins and from 39 mins to the end): https://youtu.be/RkpI93hd3G8?si=kzAFry7ye7c9Ks48
Is intelligence “the capacity of the brain to predict the future by analogy to the past” as Jeff Hawkins claims? Is this perhaps what other animals are not so good at?
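One toy way of making Hawkins’ phrase concrete (a sketch of my own, not his actual theory, which is far richer) is prediction by analogy: to guess what comes next, find the stretch of past experience most similar to the present and assume that what followed then will follow now.

    def predict_by_analogy(history, current, k=3):
        """Predict the next value by finding the past context most similar
        to the present one and returning whatever followed it back then."""
        best_dist, prediction = float("inf"), None
        for i in range(k, len(history)):
            past_context = history[i - k:i]               # a window of past experience
            dist = sum(abs(a - b) for a, b in zip(past_context, current))
            if dist < best_dist:                          # closest analogy so far
                best_dist, prediction = dist, history[i]  # predict what followed it
        return prediction

    # Experience says 1, 2, 3 has always been followed by 4 - so predict 4 again.
    history = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4]
    print(predict_by_analogy(history, current=[1, 2, 3]))  # prints 4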

IS AI INTELLIGENT?

The Turing Test is a measure of a machine's ability to exhibit intelligent behaviour indistinguishable from a human. It was proposed by British mathematician Alan Turing in his 1950 paper "Computing Machinery and Intelligence." How it works:
A human evaluator has text-based conversations with two hidden participants - one human and one machine. If the evaluator cannot reliably tell which is which, the machine is said to have "passed" the Turing Test.

On this basis, perhaps some humans would not pass the Turing test. What do you think?
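To pin the protocol down, here is a minimal sketch in Python. Everything in it is a stand-in of my own: the evaluator is assumed to expose ask, observe and guess_machine methods, and respond_human / respond_machine are hypothetical functions playing the two hidden participants.

    import random

    def run_imitation_game(evaluator, respond_human, respond_machine, n_rounds=5):
        """One session: the evaluator chats with two hidden participants,
        labelled A and B at random, then guesses which one is the machine."""
        responders = [respond_human, respond_machine]
        random.shuffle(responders)                      # hide who is behind each label
        participants = dict(zip(("A", "B"), responders))

        for _ in range(n_rounds):
            for label in ("A", "B"):
                question = evaluator.ask(label)         # evaluator poses a question
                answer = participants[label](question)  # hidden participant replies
                evaluator.observe(label, answer)        # evaluator records the reply

        guess = evaluator.guess_machine()               # evaluator names "A" or "B"
        actual = "A" if participants["A"] is respond_machine else "B"
        return guess == actual                          # True = machine identified

On this view the machine "passes" not by escaping detection once or twice, but by holding the evaluator's detection rate at chance (about 50%) over many sessions.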

Some arguments against the test:

  • The philosopher John Searle argued (via his Chinese Room thought experiment) that a system could pass the test by mechanically following rules without truly "understanding" anything.
  • The test might be too narrow because it only focuses on conversation, ignoring other aspects of intelligence like perception, creativity, or physical interaction.
  • Human-level conversation is actually quite difficult and may not be necessary for useful AI.
  • Programs can use tricks (like ELIZA's deflection techniques) to seem more intelligent than they are, and they can be programmed to imitate particular behaviours or to handle the specific types of logic puzzle that appear in IQ tests.

Here’s a good and light explainer: https://www.youtube.com/watch?v=GyNaH27lX90

Most of the common AI systems - think of ChatGPT and the like - are based on Large Language Models. These are built on the “transformer” architecture, whose “attention” mechanism adds a layer of nuance by weighing the context around each word. At heart these models are text prediction machines: they are trained on huge amounts of text data, and in essence they reproduce the most probabilistically “likely” text in response to a user's prompt. Given this, to what extent does an LLM represent intelligence as we recognise it?
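To make “text prediction” concrete, here is a toy sketch in Python: a word-level bigram model that just counts which word tends to follow which, then greedily extends a prompt. (This is nothing like a real transformer - the corpus, function names and greedy decoding are all simplifications of my own - but the underlying objective, predicting a likely next token, is the same.)

    from collections import Counter, defaultdict

    def train_bigram(corpus):
        """Count, for each word, which words follow it in the training text."""
        counts = defaultdict(Counter)
        words = corpus.split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
        return counts

    def continue_text(counts, prompt, n_words=5):
        """Greedily extend the prompt with the most likely next word each time."""
        words = prompt.split()
        for _ in range(n_words):
            followers = counts.get(words[-1])
            if not followers:
                break                                     # no data for this word: stop
            words.append(followers.most_common(1)[0][0])  # pick the likeliest follower
        return " ".join(words)

    corpus = "the cat sat on the mat and the cat slept on the mat"
    model = train_bigram(corpus)
    print(continue_text(model, "the cat", n_words=4))
    # prints "the cat sat on the cat": locally fluent, but nothing is "understood"

A real LLM replaces the raw counts with billions of learned parameters over subword tokens, and samples rather than always taking the single likeliest word, but the training signal is the same next-token prediction.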

Clearly AI is better than humans at certain tasks that require memory. Where these systems fall short is adaptability: they have learned from a fixed body of internet data and cannot easily adapt to a genuinely new situation suddenly presented to them. Partly this may be because they do not sense the world in the way that we do. The current AI models are, generally speaking, language systems - aside from some specific applications - often without sound or image input. They may be exceptional at translating text from one language to another, for example, but poor at tasks demanding interaction with the real world.

Artificial General Intelligence
AGI systems are defined as systems that are good at the four skills I mentioned earlier: learning, memory, understanding and adapting. Critically, they must also be able to achieve this autonomously (without user direction) - a condition that animals meet as a matter of course. Animals may sometimes receive direction from peers, but they can recognise when that direction is misleading (discerning helpful from misleading guidance is a social skill, and animals are capable of this nuance).
In his essay “The Coming Technological Singularity” (1993), Vernor Vinge wrote that within thirty years we would have the technological means to create superhuman intelligence, and that shortly thereafter the human era would end. This wasn't a prediction of apocalypse but of “epistemic rupture”: humans would no longer be the smartest agents shaping the future. To reach a superintelligence singularity, an AGI would first need to be created, and it would furthermore need to be capable of rapid recursive self-improvement.

Personally I think that a singularity will arrive at some point in the future - if we survive that long - but that LLMs probably won't deliver it.

QUESTIONS

How do you think intelligence should be defined?
To what extent is intelligence a subjective quality?
Do we think that intelligence tests are truly reflective of intelligence?
Are humans and animals genuinely intelligent, or are we just acting out learned behaviour?
Is the difference between human and animal intelligence one of quality or magnitude?
Is intelligence “the capacity of the brain to predict the future by analogy to the past” as Jeff Hawkins claims? Is this perhaps what other animals are not so good at?
If I apply an IQ test to AI, how do you think it will perform?
Will AI achieve AGI any time soon?
If an AI singularity is to be reached, would humans just try to shut it down?
