|Sent on:||Wednesday, October 12, 2011 1:41 AM|
|I think I understand your question better after reading its context on your blog:|
From the earliest stages of childhood development, we are fascinated with both ourselves and our environment. It would seem that we learn as a result of discovering who we are rather than our environment's response to what we do.
Considering this, I propose the following concept:
An intelligent agent may have greater success not through awareness of its environment; rather, an intelligent agent should see all factors as individual pieces of its environment, including itself. Each factor in its environment has a corresponding equation, with all equations culminating together into a much greater equation.
I think what you're saying here is that a learning agent is preferable to a non-learning agent. The videos left out a lot of Chapter 2 from the book. Chapter 2 explains the different basic types of agents and when to use them, from a simple-reflex agent up to a learning agent. Sometimes all you need is a variation of a simple-reflex agent, though a learning agent is needed for the most challenging tasks. Also, the same agent may need different 'resolutions of intelligence' in different situations, like a car that always instantly brakes for stopped cars but also learns about traffic patterns.
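To make the distinction concrete, here is a minimal sketch of the simple-reflex idea from Ch. 2, using the braking-car situation as the example. The percept fields, threshold, and function name are all hypothetical illustrations, not anything from the book or the videos; the point is only that a simple-reflex agent maps the current percept directly to an action through fixed condition-action rules, with no memory or learning.

```python
def simple_reflex_brake_agent(percept):
    """Choose an action from the current percept alone (no state, no learning).

    percept is assumed to be a dict like:
        {"car_ahead_stopped": bool, "distance_m": float}
    """
    # Condition-action rule: if a stopped car is close ahead, brake now.
    # The 30 m threshold is an arbitrary illustrative value.
    if percept["car_ahead_stopped"] and percept["distance_m"] < 30:
        return "brake"
    return "drive"

# The agent always reacts the same way to the same percept:
print(simple_reflex_brake_agent({"car_ahead_stopped": True, "distance_m": 10}))   # brake
print(simple_reflex_brake_agent({"car_ahead_stopped": False, "distance_m": 10}))  # drive
```

A learning agent would wrap something like this in a component that adjusts the rules from experience (e.g. learned traffic patterns), while keeping the hard-wired instant-brake reflex for safety-critical cases.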
The environment-properties list in the videos is also a bit different from the one in the book. I would strongly recommend everyone read Chapters 2 and 3 of the book, because they cover this core intro material in significantly greater depth. Ch. 2 lays out a fundamental framework for intelligent agents, and I'm surprised the videos didn't cover more of it. A free version of Chapters 2 and 3 is linked from the wiki: http://freedombluesky.com/aiml
--- On Tue, 10/11/11, Michael Stratton <[address removed]> wrote: