About the IPP

A key problem for an agent with multiple, possibly inconsistent, goals is: 'What should I do next?'

What to do next can be formalised as the Intention Progression Problem (IPP): which means (course of action) to use to achieve a given (sub)goal, and which course of action (intention) to progress at the current moment. This problem is both central to agent reasoning and computationally challenging, since the agent's intentions may conflict with one another given the resources available.

In the competition, the Intention Progression Problem is framed in the context of an agent with a library of predefined plans. Each plan consists of steps which are either basic actions or sub-goals. Each sub-goal is in turn achieved by some other plan.

The relationship between goals, plans, subgoals in plans, and plans for subgoals is naturally represented as a tree structure termed a goal-plan tree (GPT). The root of a GPT is a top-level goal (goal-node), and its children are the plans that can be used to achieve the goal (plan-nodes). Usually there are several alternative plans to achieve a goal: hence, the child plan-nodes are viewed as 'OR' nodes. By contrast, plan execution involves performing all the steps in the plan: hence, the children of a plan-node are viewed as 'AND' nodes.
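The AND/OR structure above can be sketched as a pair of node types. This is a minimal illustration, not the competition's actual data format; the node and step names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GoalNode:
    """A goal; achieved by any ONE of its child plans (OR node)."""
    name: str
    plans: List["PlanNode"] = field(default_factory=list)

@dataclass
class PlanNode:
    """A plan; succeeds only if ALL of its steps succeed (AND node)."""
    name: str
    # Each step is either a basic action (str) or a sub-goal (GoalNode).
    steps: List[object] = field(default_factory=list)

# A tiny goal-plan tree: top-level goal g0 with two alternative plans,
# one of which posts a sub-goal g1 achieved by its own plan.
g1 = GoalNode("g1", plans=[PlanNode("p2", steps=["a2", "a3"])])
g0 = GoalNode("g0", plans=[
    PlanNode("p0", steps=["a0", g1]),  # a basic action, then a sub-goal
    PlanNode("p1", steps=["a1"]),      # an alternative plan of actions only
])
```

Note that a sub-goal step embeds a whole goal-node, so the OR/AND alternation recurses naturally down the tree.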

The intentions of an agent are represented by a set T of goal-plan trees, where the root goal gi of each GPT ti ∈ T corresponds to a top-level goal of the agent. The progression of an intention to achieve a top-level goal gi amounts to traversing a path through the goal-plan tree ti, executing basic actions and choosing plans for subgoals. The path specifies a sequence of plans, actions, sub-goals and sub-plans that, if executed successfully, will achieve gi.
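Progressing a single intention can be sketched as a recursive traversal, using a deliberately lightweight tuple encoding (a goal is `("goal", plans)`, a plan is a list of steps). The `do_action` callback stands in for executing a basic action in the environment; this is an illustrative sketch, not an entry strategy.

```python
def progress(goal, do_action):
    """Try each alternative plan for the goal in turn (OR choice);
    a plan succeeds only if every step succeeds (AND). Each step is a
    basic action (str) or a sub-goal tuple ("goal", [plan, ...])."""
    _, plans = goal
    for plan in plans:
        if all(progress(step, do_action) if isinstance(step, tuple)
               else do_action(step)
               for step in plan):
            return True   # this path through the tree achieved the goal
    return False          # no plan worked: the goal fails

# A goal whose only plan performs action a0 and then a sub-goal.
g0 = ("goal", [["a0", ("goal", [["a1"]])]])
print(progress(g0, lambda action: True))  # → True
```

The successful path traced by `progress` corresponds to the sequence of plans, actions, sub-goals and sub-plans described above.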

The Intention Progression Problem for a set of GPTs T is the problem of determining at runtime which ti ∈ T to progress so as to maximise the agent's utility. For the first edition of the International Intention Progression Competition, utility is defined as the number of goals achieved.
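As a baseline illustration of deciding which intention to progress each cycle, here is a round-robin scheduler. It is a minimal sketch under strong assumptions: each intention is reduced to a queue of remaining basic actions (plan choices already made), every action succeeds, and there are no conflicts; real IPC entries must also choose plans and reason about interference between intentions.

```python
from collections import deque

def round_robin(intentions):
    """Execute one basic action from each live intention per deliberation
    cycle; return the number of top-level goals achieved (the utility
    measure used in the first edition of the competition)."""
    queue = deque(deque(steps) for steps in intentions if steps)
    achieved = 0
    while queue:
        intention = queue.popleft()
        intention.popleft()          # execute the intention's next action
        if intention:
            queue.append(intention)  # work remains: re-schedule it
        else:
            achieved += 1            # intention complete: goal achieved
    return achieved

print(round_robin([["a0", "a1"], ["b0"]]))  # → 2
```

With no conflicts, any interleaving achieves all goals; the scheduling decision only matters once intentions compete for resources or can clobber each other's effects.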

Each entry will be evaluated using a number of previously unseen instances of intention progression problems. The problems will be of a similar level of difficulty to the example problem instances provided in the competition download. All entries will be evaluated using the same problem instances and starting in the same initial environment. The same exogenous changes to the environment and run-time additions to the set of goals will be applied to all entries.

  • Brian Logan, John Thangarajah, and Neil Yorke-Smith (2017). "Progressing Intention Progression: A Call for a Goal-Plan Tree Contest." Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017), (pp. 768-772).
  • John Thangarajah, Lin Padgham, and Michael Winikoff (2003). "Detecting & Avoiding Interference Between Goals in Intelligent Agents." Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI 2003), (pp. 721-726).