What we're about
Upcoming events (2)
The main purpose of this event is to examine prevailing theories of judgment so that we can better envision the role of judgment in AGI.
"Kant’s taking the innate capacity for judgment to be the central cognitive faculty of the human mind, in the sense that judgment, alone among our various cognitive achievements, is the joint product of all of the other cognitive faculties operating coherently... (is) the centrality thesis"
- Stanford Encyclopedia of Philosophy
There seems to be a prevailing, tacit climate of opinion in AI/AGI, especially in today's world of very large scale language models, that a judgment made by a machine is correct primarily if it is basically equivalent to prior human judgments, or if it results from rewarding the algorithm upon the attainment of goals. Emily Bender characterized this trend in ML broadly as the employment of "stochastic parrots" -- a trick whereby machines simply spit out the most appropriate answer based on clever statistics built from nearly infinite past examples.
Thus there seems to be a trend for developers to ignore the issue of a theory of judgment so long as the machine makes a decision statistically compatible with prior decisions, or one that at least achieves the desired goal at hand. There is too much else to work on. It almost seems an anachronism to worry about such considerations, a throwback to the "good, old-fashioned" days of AI when many researchers busied themselves with issues like decision theory rather than ever-larger sets of training parameters and examples.
But the problem of combinatorial explosion still exists: in the real world, novel, unforeseen variations often occur, each nuance bearing a thousand little preferences, values, and probabilities that should weigh on a machine's decision. Is it enough just to train on everything that has happened before, or is there room left for a novel judgment? What is the limit of training, and is the space beyond the limit of the training data then the province of judgment?
It seems that AGI developers and persons interested in the development of AGI ought, at the very least, to spend some time contemplating the nature of judgment if 1) the goal of AI is human-level intelligence and 2) judgment is the "central cognitive faculty of the human mind," as Kant believed.
For this meetup, we are going to examine some prominent issues and theories about the nature of judgment. Tentatively, the plan is to present the following topics, with a break after each for discussion; so it's basically a workshop and not a presentation per se. The basic goal is to get a good grasp of existing theories of judgment. There will probably be another event to explore further how judgment applies to AGI.
- Basic issues and topics in the established theory of judgment and decision making (e.g., what is a normative judgment?)
- Kahneman: several of his important points about judgment
- Kant, Hegel, Brentano, and others (likely TBD): their theories of judgment
(notes/write-up in progress --> https://docs.google.com/document/d/1pHDM2zaSbU59bhj70NDYzCM9O-Z7CVwzdV47LQfvnKE/edit)
The Northwest AGI Forum is pleased to welcome Stephan Verbeeck for a presentation on the topic of "world models" as exemplified in his work.
Main points as provided by Stephan:
- Short introduction about me and Digitronic Software Solutions (DSS)
- Introduction of the modules and products that we work on, and how these fit into the grand scheme of introducing (rolling out) synthetic life into everyday life and work worldwide
- How these modules work together, their interdependencies, and what each of them actually does
- The story behind these developments, in particular why we do things this way rather than pursuing some other implementation or product
- From a programmer's point of view: the data stored, how the data is collected, and how it is kept up to date with new experiences
- The variety of ways to do this, and the minimum constraints that any implementation of a world model needs to satisfy
- How reasoning is done based on the world model data and what limitations exist in the grounding of that data
- The final goal of DSS