This was another decision-based talk using NKS, and I found this one just as interesting as the last. Hewitt studied a relatively simple but fascinating model of decision making based on choices between binary states, with memory of which decisions led to which states. He had a very large number of quantities he used to describe the behavior of the system, including predictability, desirability, and entropy.
The elementary system was a 2D CA with 2 choices (turn left and no action) and 1 indicator with two states (right direction and wrong direction). Based on its past behavior, the agent must learn what to do when feedback indicates it is headed in the wrong direction, but with no indication of what the right direction would be. Hewitt presented some state transition diagrams where the agent would determine the correct direction very quickly, but he quickly threw a wrench in the works by giving himself the ability to change the target point in real time.
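Hewitt didn't show code, but as I understood the setup, a toy version of the elementary system might look something like the Python sketch below. This is my own reconstruction, not his; the square grid, the four headings, and the distance-based feedback bit are all my assumptions.

    import math

    # Rough reconstruction of the elementary system: an agent on a grid with
    # a discrete heading, two possible actions (turn left or do nothing, then
    # move forward one cell), and a single feedback bit saying whether that
    # move brought it closer to the target.

    HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # east, north, west, south

    def distance(pos, target):
        return math.hypot(pos[0] - target[0], pos[1] - target[1])

    def step(pos, heading, action, target):
        """Apply 'left' or 'none', move forward, and return the new position,
        new heading, and the right/wrong-direction indicator."""
        if action == "left":
            heading = (heading + 1) % 4
        dx, dy = HEADINGS[heading]
        new_pos = (pos[0] + dx, pos[1] + dy)
        closer = distance(new_pos, target) < distance(pos, target)
        return new_pos, heading, ("right" if closer else "wrong")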
He showed an interactive graphic of the agent seeking its target. The agent would head off more or less toward the target, but frequently got stuck in "orbits" once it had passed the target on one side. I found it very amusing that the agent would circle in these orbits for some number of cycles, finally get "frustrated" once it had received enough negative feedback for each of its actions, and strike off in a random direction that would usually cause it to hit the target a few steps later.
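Again this is only my guess at the mechanism, continuing the sketch above: tally the negative feedback per action, and once both actions have disappointed the agent enough times, let it reset to a random heading. The threshold and the tally-clearing rule are arbitrary choices of mine, not anything Hewitt specified.

    import random
    from collections import defaultdict

    FRUSTRATION_LIMIT = 8  # arbitrary threshold for this illustration

    def run_agent(target, start=(0, 0), steps=200):
        """Seek the target using step() from the sketch above, getting
        'frustrated' when both actions keep drawing negative feedback."""
        pos, heading = start, 0
        complaints = defaultdict(int)  # action -> count of "wrong" feedback
        trajectory = [pos]
        for _ in range(steps):
            if pos == target:
                break
            # Frustration: both actions have failed repeatedly, so strike
            # off in a random direction and forget the old grievances.
            if all(complaints[a] >= FRUSTRATION_LIMIT for a in ("left", "none")):
                heading = random.randrange(4)
                complaints.clear()
            # Otherwise prefer the action that has complained least so far.
            action = min(("left", "none"), key=lambda a: complaints[a])
            pos, heading, indicator = step(pos, heading, action, target)
            if indicator == "wrong":
                complaints[action] += 1
            else:
                complaints.clear()  # positive feedback wipes the slate
            trajectory.append(pos)
        return trajectory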
Then he showed multiple agents, where some of the agents were other agents' targets. Their behaviors were reminiscent of bees milling around a hive, or as one of the audience members eloquently phrased it, "dating."
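In code terms (still my own speculation, reusing step() from the first sketch and dropping the frustration rule for brevity), the only change from the single-agent version would be that an agent's target is another agent's current position, so the targets drift every tick and nobody ever quite settles.

    import random
    from collections import defaultdict

    def make_agent(target_index):
        return {"pos": (random.randrange(-10, 11), random.randrange(-10, 11)),
                "heading": random.randrange(4),
                "complaints": defaultdict(int),
                "target_index": target_index}

    def swarm_step(agents):
        snapshot = [a["pos"] for a in agents]  # targets as of this tick
        for a in agents:
            target = snapshot[a["target_index"]]
            action = min(("left", "none"), key=lambda act: a["complaints"][act])
            a["pos"], a["heading"], indicator = step(
                a["pos"], a["heading"], action, target)
            if indicator == "wrong":
                a["complaints"][action] += 1
            else:
                a["complaints"].clear()

    agents = [make_agent((i + 1) % 6) for i in range(6)]  # each chases the next
    for _ in range(200):
        swarm_step(agents)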
Hewitt ended with some graphics of distributions of various features of the agents' behavior.