What we're about

"The creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability is at bottom qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability."
--Goertzel (2014), his "core AGI hypothesis"

Artificial General Intelligence (AGI), also known as "strong AI," is usually held to be distinct from narrow AI, which typically functions within a well-constrained domain and relies heavily on statistical correlations drawn from massive training data.

The AI/AGI distinction has been promulgated largely by Goertzel, Voss, Wang, and the broader AGI community, starting around 2000. Since the explosion of deep learning successes in recent years, interest in AGI has increased dramatically, and the boundaries separating AI from AGI have become increasingly debatable.

The purpose of the group is to explore all issues relevant to the development of AGI, including discussions of existing cognitive architectures and, possibly, members' own AGI development plans. AGI is a field that effectively encompasses all disciplines, and the starting points for AGI development are many. Areas serving as inspiration for AGI, all open to discussion, include applicable work in computer science, neuroscience, psychology, sociology, mathematics, linguistics, electronics, physics, robotics, philosophy, and others.

Upcoming events (2)

The Nature of the Judgment


The main purpose of this event is to examine prevailing theories of judgment so that we can better envision the role of judgment in AGI.

"Kant’s taking the innate capacity for judgment to be the central cognitive faculty of the human mind, in the sense that judgment, alone among our various cognitive achievements, is the joint product of all of the other cognitive faculties operating coherently... (is) the centrality thesis"

--Stanford Encyclopedia of Philosophy

There seems to be a prevailing, tacit climate of opinion in AI/AGI, especially in today's world of very large language models, that a judgment made by a machine is correct primarily if it is basically equivalent to prior human judgments, or if it results from rewarding the algorithm upon attainment of its goals. Emily Bender and her co-authors characterized this broad trend in ML as the employment of "stochastic parrots": a trick whereby machines simply emit the most plausible answer based on clever statistics built from vast quantities of past examples.

Thus there seems to be a trend among developers to ignore the question of a theory of judgment, so long as the machine makes a decision statistically compatible with prior such decisions, or at least one that achieves the goal at hand. There is too much else to work on. It almost seems an anachronism to worry about such considerations, a throwback to the "good old-fashioned" days of AI when many researchers busied themselves with issues like decision theory rather than ever-larger parameter counts and training sets.

But the problem of combinatorial explosion still exists: in the real world, novel, unforeseen variations often occur, each nuance bearing a thousand little preferences, values, and probabilities that should weigh on a machine's decision. Is it enough simply to train on everything that has happened before, or is there room left for a novel judgment? What is the limit of training, and is the space beyond the limit of the training data then the judgment?

It seems that AGI developers, and anyone interested in the development of AGI, ought at the very least to spend some time contemplating the nature of judgment if 1) the goal of AI is human-level intelligence, and 2) judgment is the "central cognitive faculty of the human mind," as Kant believed.


For this meetup, we are going to examine some prominent issues and theories about the nature of the judgment. Tentatively, the plan is to present the following issues with breaks after each for discussion. So, it's basically a workshop and not a presentation per se. The basic goal is just to get a good grasp of existing theories of judgment. There will probably be another event for further exploration of how the judgment applies to AGI.


  • Basic issues and topics in the established theory of judgment and decision making (e.g., what is a normative judgment?)
  • Kahneman: several of his important points about judgments
  • Theories of judgment from Kant, Hegel, Brentano, and others (likely TBD)

(notes/write-up in progress --> https://docs.google.com/document/d/1pHDM2zaSbU59bhj70NDYzCM9O-Z7CVwzdV47LQfvnKE/edit)

Stephan Verbeeck on World Models


The Northwest AGI Forum is pleased to welcome Stephan Verbeeck for a presentation on the topic of "world models" as exemplified in his work.

Main points as provided by Stephan:

  • Short introduction about me and Digitronic Software Solutions (DSS)
  • Introduction of the modules and products we work on, and how they fit into the grand scheme of introducing (rolling out) synthetic life into everyday life and work worldwide
  • How these modules work together, their interdependencies, and what each of them actually does
  • The story behind these developments, in particular why we do things this way and not via some other implementation or product
  • From a programmer's point of view: the data stored, how the data is collected, and how it is kept up to date with new experiences
  • The variety of ways to do this, and the minimum constraints that any implementation of a world model needs to satisfy
  • How reasoning is done based on the world model data and what limitations exist in the grounding of that data
  • The final goal of DSS

Website: https://digitronic.software

Past events (24)

AI Demo and Drinks

