A grazer is a version of the Follower agent that cycles among three different light sources, following each in turn. It chooses which light to go to next based on the light's color, which is defined by a red component, a green component, and a blue component. While the grazer is running, it repeatedly finds the light with the highest red component and approaches it, then finds the light with the highest green component and approaches it, then finds the light with the highest blue component and approaches it, and so forth. When you write your grazers, they can start out by following whichever light is convenient, as long as they cycle appropriately after that.
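To make the selection step concrete, here is a minimal Java sketch of finding the light with the highest red component among the current percepts. The accessor getRed() is an assumption for illustration; use whatever the Percept class in the skeleton code actually provides.

```java
// Fragment for use inside deliberate(percepts). Sketch only:
// getRed() is a placeholder for the skeleton code's real accessor.
Percept reddest = null;
for (Percept p : percepts) {
    if (reddest == null || p.getRed() > reddest.getRed()) {
        reddest = p;  // remember the light with the largest red component so far
    }
}
// 'reddest' is the light to approach during the red phase of the cycle.
```

The green and blue phases are the same scan with the other color components.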
You can think of the grazer as a primitive animal that every day starts in a safe place to sleep (represented by the red light), forages for food (represented by the green light), drinks some water (represented by the blue light), and then goes back to sleep and starts the next day.
The point of this assignment is to understand indexicality better. Indexicality says that you can use your ongoing relationships with objects in your environment as a substitute for memory. First you will build a grazer with explicit memory. That is pretty easy. The real challenge is to create a grazer without memory. This reactive grazer will need to analyze its percepts to understand the ongoing relationships that it has with objects in its environment. You can think of the reactive grazer as having indexical representations.
One new element in this scenario is the observation that an agent might encounter ambiguity in establishing indexical representations. That is, the agent might find itself in the same relationship to two different objects. In this case the agent might not understand the world correctly and might behave incorrectly too. This is a real problem in natural intelligence. (For example, many illusions exploit the visual ambiguity between objects colliding with one another or passing behind one another. You might know the ballroom dance step which exploits this: the performer brings the hands to alternate knees to create the effect that the legs pass through each other!)
Like the predator, the grazer is implemented as a subclass of the Follower class. The grazer has an extra parameter called the threshold that indicates how close it should get to each light source before moving on to the next one.
As before, the skeleton code highlights the methods isTarget(Percept), targetCost(Percept), deliberate(List<Percept>) and draw(Graphics).
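To show how these pieces fit together, here is a rough outline of a grazer subclass. It is only a sketch: the field name, the return types, and the exact signatures are assumptions, and the real ones are in the skeleton code.

```java
import java.awt.Graphics;
import java.util.List;

// Outline only; consult the skeleton code for the actual signatures and fields.
public class StateGrazer extends Follower {
    private double threshold;  // how close to get to a light before moving on

    public boolean isTarget(Percept p) {
        // true if this percept is the light the grazer currently wants
        return false;  // placeholder
    }

    public double targetCost(Percept p) {
        // a score for ranking candidate targets
        return 0.0;    // placeholder
    }

    public void deliberate(List<Percept> percepts) {
        // choose a heading and speed based on the current percepts
    }

    public void draw(Graphics g) {
        // visualize the agent (and, optionally, its current goal)
    }
}
```

The same shape applies to ReactiveGrazer; the two versions differ only in how deliberate decides which light to pursue.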
You do not have to worry about physics again; the grazer can turn as much as it wants and can set its speed to any legal value at each time step. The defaults in the skeleton code already allow this.
First, write a grazer based on memory. The skeleton code for this agent is provided in a template file called StateGrazer.java. You will have to adapt the skeleton code in the following ways:
At the end you should have an agent that always cycles from red to green to blue lights.
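One way to picture the memory-based version: keep an explicit variable recording which color the grazer is currently seeking, and advance it whenever the grazer gets within the threshold of that color's best light. In the sketch below, brightestInCurrentColor and distance are hypothetical helpers, not part of the provided skeleton.

```java
// Explicit memory: 0 = seeking red, 1 = seeking green, 2 = seeking blue.
private int phase = 0;

// Fragment for use inside deliberate(percepts):
Percept goal = brightestInCurrentColor(percepts, phase);  // e.g., max red when phase == 0
if (goal != null && distance(goal) <= threshold) {
    phase = (phase + 1) % 3;                    // close enough: move on to the next color
    goal = brightestInCurrentColor(percepts, phase);
}
// ...then steer toward 'goal'; turning and speed are unconstrained this time.
```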
Next, write a grazer that is purely reactive. In other words, it relies on cues in the environment to make sure that it moves from one light to another in an appropriate way. The skeleton code for this agent is provided in a template file called ReactiveGrazer.java. You will have to adapt the skeleton code in the following ways:
At the end you should have an agent that cycles from red to green to blue lights, at least as long as the natural indexical relationships in the task are unambiguous.
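For the reactive version, the cycling has to fall out of the percepts themselves rather than a stored phase variable. One possible indexical cue, offered only as an illustration and not as the required design, is "the light I am currently within threshold of": if that light is the reddest one, green must come next, and so on. The helpers below (closestWithinThreshold, reddest, greenest, bluest) are hypothetical.

```java
// Memory-free fragment for use inside deliberate(percepts).
Percept here = closestWithinThreshold(percepts, threshold);
Percept goal = null;
if (here != null && here == reddest(percepts)) {
    goal = greenest(percepts);      // just reached the red light: head for green
} else if (here != null && here == greenest(percepts)) {
    goal = bluest(percepts);        // just reached the green light: head for blue
} else if (here != null && here == bluest(percepts)) {
    goal = reddest(percepts);       // just reached the blue light: head for red
} else {
    // Between lights, some other relationship has to carry the "state":
    // for example, the light the agent is already pointed at. Working out
    // that cue is the real challenge of this part of the assignment.
}
```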
You should make sure that your two grazers both work when the lights are distributed loosely around the environment. In these cases, indexical cues will always be unambiguous. The following two sample files are representative cases:
Create an environment in which the reactive grazer encounters an ambiguous indexical relationship. In other words, at a certain point, the reactive grazer winds up in the same relationship to two different light sources, one of which it should pay attention to in order to carry out its activity, and one of which it should not. Set up the scenario so that the reactive grazer responds to the ambiguity by doing the wrong thing. This file should be called rgl.xml (for reactive grazer loses).
Create a corresponding file with a state grazer instead of a reactive grazer, to demonstrate that the state grazer behaves correctly. Call this sgw.xml (for state grazer wins).
Sometimes it is possible to avoid ambiguity. In cognitive science, one such strategy is known as active perception: moving to a location where you can see the world better. As an optional extension, you can refine the reactive agent using the idea of active vision so that it is not fooled by the illusion you create in the analysis section. Concretely, here's what's involved:
As always, your reactive agent should carry out this decision making only if the parameter "withExtensions" (specified by the XML attribute "with-extensions") is true. Your agent should behave in the simple way described above if this parameter is false.
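As a reminder of how that guard typically looks (a sketch only; the exact way the skeleton code exposes the parameter may differ), the check belongs at the top of the agent's decision making:

```java
// Fragment for use inside deliberate(...). 'withExtensions' is assumed to
// mirror the XML attribute "with-extensions"; adapt to the skeleton code.
if (withExtensions) {
    // active-vision behavior: for example, detour to a vantage point that
    // resolves which light is which before committing to a target
} else {
    // the plain reactive behavior described above
}
```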