
Let's cover searching with nondeterministic actions (AIMA ch. 4).

When solving problems in nondeterministic environments, it doesn't suffice to merely propose a series of actions and follow them blindly; it turns out that we have to respond to the effects of our actions on the environment, and plan accordingly.

AND/OR graph search is one mechanism for searching and executing a plan while responding to the environment.
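As a concrete reference point, here's a minimal Python sketch of AND-OR-GRAPH-SEARCH (after AIMA Fig. 4.11), run against the erratic vacuum world. The problem interface (`initial`, `is_goal`, `actions`, `results`) and the `ErraticVacuum` class are my own framing, not the book's:

```python
def and_or_graph_search(problem):
    """Search a nondeterministic problem, returning a conditional plan
    (a list of actions and {outcome-state: sub-plan} branch nodes),
    or None on failure."""
    return or_search(problem.initial, problem, [])

def or_search(state, problem, path):
    if problem.is_goal(state):
        return []                        # empty plan: already at the goal
    if state in path:
        return None                      # repeated state: give up this branch
    for action in problem.actions(state):
        plan = and_search(problem.results(state, action),
                          problem, [state] + path)
        if plan is not None:
            return [action] + plan
    return None

def and_search(states, problem, path):
    # Every possible outcome of the action needs its own sub-plan.
    branches = {}
    for s in states:
        plan = or_search(s, problem, path)
        if plan is None:
            return None
        branches[s] = plan
    return [branches]                    # an if-then-else contingency node

class ErraticVacuum:
    """Two-square erratic vacuum world: Suck is nondeterministic."""
    initial = ('A', True, True)          # (location, dirt at A, dirt at B)

    def is_goal(self, s):
        return not s[1] and not s[2]

    def actions(self, s):
        return ['Suck', 'Left', 'Right']

    def results(self, s, a):
        loc, da, db = s
        if a == 'Left':
            return {('A', da, db)}
        if a == 'Right':
            return {('B', da, db)}
        # Suck on a dirty square cleans it and sometimes the other square
        # too; on a clean square it sometimes deposits dirt.
        if loc == 'A':
            return ({('A', False, db), ('A', False, False)} if da
                    else {s, ('A', True, db)})
        return ({('B', da, False), ('B', False, False)} if db
                else {s, ('B', da, True)})
```

`and_or_graph_search(ErraticVacuum())` yields a plan starting with Suck whose branch node covers both possible outcomes, in the spirit of the book's [Suck, if State 5 then [Right, Suck] else []].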

Exercise 4.6 will come in handy later on in chapter 5:

Explain precisely how to modify the AND-OR-GRAPH-SEARCH algorithm to generate a cyclic plan if no acyclic plan exists. You will need to deal with three issues: labeling the plan steps so that a cyclic plan can point back to an earlier part of the plan, modifying OR-SEARCH so that it continues to look for acyclic plans after finding a cyclic plan, and augmenting the plan representation to indicate whether a plan is cyclic. Show how your algorithm works on (a) the slippery vacuum world, and (b) the slippery, erratic vacuum world. You might wish to use a computer implementation to check your results.
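To make the three issues concrete, here's one possible self-contained sketch (my own simplification, not the book's solution): a repeated state becomes a `('loop', state)` label rather than a failure, OR-SEARCH keeps hunting for an acyclic plan before settling for a cyclic fallback, and a `contains_loop` test marks whether a plan is cyclic. The `SlipperyVacuum` class and the "reject actions whose outcomes all loop" rule are assumptions of this sketch:

```python
def and_or_search_cyclic(problem):
    """AND-OR search extended to return a cyclic plan when no acyclic
    plan exists. A ('loop', state) marker labels a point in the plan
    that jumps back to the earlier step where `state` was expanded."""
    return or_search_cyclic(problem.initial, problem, [])

def or_search_cyclic(state, problem, path):
    if problem.is_goal(state):
        return []
    if state in path:
        return ('loop', state)           # point back to an earlier plan step
    fallback = None                      # best cyclic plan found so far
    for action in problem.actions(state):
        plan = and_search_cyclic(problem.results(state, action),
                                 problem, [state] + path)
        if plan is not None:
            if not contains_loop(plan):
                return [action] + plan   # prefer an acyclic plan
            if fallback is None:
                fallback = [action] + plan
    return fallback                      # cyclic plan, or None if none exists

def and_search_cyclic(states, problem, path):
    branches = {}
    for s in states:
        plan = or_search_cyclic(s, problem, path)
        if plan is None:
            return None
        branches[s] = plan
    # Reject an action whose every outcome merely loops back: such a plan
    # could never reach the goal. (A simple check, not fully general.)
    if all(isinstance(p, tuple) and p[0] == 'loop' for p in branches.values()):
        return None
    return [branches]

def contains_loop(plan):
    """True if the (nested) plan contains any loop-back marker."""
    if isinstance(plan, tuple):
        return plan[0] == 'loop'
    if isinstance(plan, list):
        return any(contains_loop(x) for x in plan)
    if isinstance(plan, dict):
        return any(contains_loop(x) for x in plan.values())
    return False

class SlipperyVacuum:
    """Two-square vacuum world where movement sometimes slips (no move)."""
    initial = ('A', True, True)          # (location, dirt at A, dirt at B)

    def is_goal(self, s):
        return not s[1] and not s[2]

    def actions(self, s):
        return ['Suck', 'Right', 'Left']

    def results(self, s, a):
        loc, da, db = s
        if a == 'Suck':                  # Suck is reliable in this world
            return {('A', False, db)} if loc == 'A' else {('B', da, False)}
        dest = 'B' if a == 'Right' else 'A'
        return {s, (dest, da, db)}       # wheels may slip: stay put
```

In the slippery world no acyclic plan exists (Right may fail forever), so the search returns a plan containing a loop marker — "keep trying Right until you make it to B."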

If you're more ambitious, start on 4.11, which (combined with 4.13) will probably play us out for chapter 4:

We can turn the navigation problem in Exercise 3.7 into an environment as follows:

  • The percept will be a list of the positions, relative to the agent, of the visible vertices.
    The percept does not include the position of the robot! The robot must learn its own
    position from the map; for now, you can assume that each location has a different "view."
  • Each action will be a vector describing a straight-line path to follow. If the path is
    unobstructed, the action succeeds; otherwise, the robot stops at the point where its
    path first intersects an obstacle. If the agent returns a zero motion vector and is at the
    goal (which is fixed and known), then the environment teleports the agent to a random
    location (not inside an obstacle).
  • The performance measure charges the agent 1 point for each unit of distance traversed
    and awards 1000 points each time the goal is reached.
  1. Implement this environment and a problem-solving agent for it. After each teleportation,
    the agent will need to formulate a new problem, which will involve discovering its
    current location.
  2. Document your agent's performance (by having the agent generate suitable commentary
    as it moves around) and report its performance over 100 episodes.
  3. Modify the environment so that 30% of the time the agent ends up at an unintended
    destination (chosen randomly from the other visible vertices if any; otherwise, no move at all). This is a crude model of the motion errors of a real robot. Modify the agent so that when such an error is detected, it finds out where it is and then constructs a plan to get back to where it was and resume the old plan. Remember that sometimes getting back to where it was might also fail! Show an example of the agent successfully overcoming two successive motion errors and still reaching the goal.


  • Glenn B.

    Sorry I couldn't make it! I'll see you all next week.

    January 8, 2013

    • Peter D.

      Sounds good; it was basically just a social hour, anyway. Next week, we'll hit the non-determinism.

      January 10, 2013

6 went
