MetaAgent, a Steering Behavior Template Library






Library for creating autonomous agents that have (fun) life-like behaviors.
Introduction
This article presents MetaAgent, a C++ library for creating steering behaviors.

Some history about behaviors
In 1986, Craig Reynolds developed boids, a computer model of animated animal motion such as bird flocks and fish schools, and published a technical paper about it [Craig Reynolds, 87]. His method was astonishing in its simplicity, since the model was based on 3 simple rules:
- Separation: a boid should avoid its neighbors. To do so, you just steer the boid away from the center of its nearest neighbors.
- Alignment: a boid tends to align its velocity with that of its neighbors.
- Cohesion: a boid tends to move towards its neighbors.
The forces resulting from these 3 rules were merged by summing them (with weights) and applying the result to the boid. Craig Reynolds' boids have been, and still are, flying on his personal web page.
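As a rough illustration of such a weighted blend (this is only a sketch, not Reynolds' original code; the vec and boid types and the separation, alignment and cohesion helpers are hypothetical):

// Illustration only: blend the three boid rules into a single steering force.
// separation(), alignment() and cohesion() are hypothetical helpers that each
// compute a steering vector from the boid and its neighbors.
vec boid_steering( boid const& b, std::vector<boid> const& neighbors )
{
    const double w_separation = 1.5;  // example weights
    const double w_alignment  = 1.0;
    const double w_cohesion   = 1.0;

    return separation( b, neighbors ) * w_separation
         + alignment ( b, neighbors ) * w_alignment
         + cohesion  ( b, neighbors ) * w_cohesion;
}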
Since then, Craig Reynolds has released another great paper [Craig Reynolds, 99] describing a number of behaviors to "give life" to autonomous characters: target seeking, obstacle avoidance, wandering, etc...
MetaAgent and OpenSteer
MetaAgent is not the only steering behavior project around. In fact, it is the little sister of another project, OpenSteer [OpenSteer], initiated by Craig Reynolds.

Why another library?
First of all, playing with autonomous characters is fun and is a great project if you plan to learn C++. That is basically how MetaAgent started: a playground for testing generic programming and meta-programming.
The real reason for building another library was that OpenSteer was mainly a collection of C functions wrapped into some C++ classes (OK, I'm exaggerating...). MetaAgent plans (and hopefully will succeed) to use the full power of C++ and generic programming to create behaviors.
MetaAgent guidelines
Here are some of the guidelines that the project tries to follow:
- Break down all classes into orthogonal policies (I will talk about this later),
- Use signals and slots for rendering,
- Use the STL and Boost as much as possible.
Policy Class Design
I first ran into policy class design in Andrei Alexandrescu's famous book "Modern C++ Design", see [Alexandrescu, 2001]. The basic idea is to assemble a class with complex behavior by combining little classes (called policies). Andrei Alexandrescu spends an entire chapter on policy design; I will try to illustrate it below with the agent-behavior creation.
How does an autonomous agent work?
The agent is basically a body (dynamics) that moves according to its brain (behavior). It can be broken into two parts:
- the body, which implements the dynamics,
- the brain, which is composed of a behavior.
Building an agent using policies
Policy decomposition
"It is as if [some_host_class] acts as a little code generation engine and you configure the ways in which it generates code". Andrei Alexandrescu.
Let's start by building the dynamic model of our agent. This body must be able to move and react to a steering force (that will be given by the behavior).
As stated previously, we want to use policies, so we want to decompose that model into orthogonal policies. Note that:
- Regardless of the type of dynamic model, you can always retrieve the state of its center of mass.
- The dynamic model does not need to know what happens inside the "brain" of the agent; it just needs the resulting steering force.

The dynamic model and the behavior can be seen as policies:
template<
    typename ModelPolicy,
    typename BehaviorPolicy
>
class agent : public ModelPolicy, public BehaviorPolicy
As you can see, agent inherits from ModelPolicy and BehaviorPolicy, and thus it inherits all their methods! agent is called a host class since it is built from policies.
Determining the interface
Unlike classic interfaces (collections of pure virtual methods), policy interfaces are loosely defined. Just use the policy methods in the host class without any prior declaration; if they are not defined in the policy classes, the compiler will emit an error. Hence, we simply write a method that makes the agent think and act:
template<
    typename ModelPolicy,
    typename BehaviorPolicy
>
class agent : public ModelPolicy, public BehaviorPolicy
{
public:
    void think_and_act()
    {
First step: think and compute the steering. This will be the job of the BehaviorPolicy.
        // vec is some 2D vector
        vec steering_force = think( get_acceleration(), get_velocity(), get_position() );
Second step: apply the computed steering force to the model and integrate the equations of motion:
        act( steering_force ); // move according to the steering force -> ModelPolicy
    }
};
Great, we have just defined the interface for ModelPolicy and BehaviorPolicy.
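The snippets in this article rely on some 2D vector type vec without showing it. A minimal sketch of such a type, with just enough in it to follow the examples (this is not MetaAgent's actual vector class), could be:

// Minimal illustrative 2D vector with only the operators the examples need.
struct vec
{
    double x, y;
    vec( double x_ = 0.0, double y_ = 0.0 ) : x( x_ ), y( y_ ) {}
    vec& operator+=( vec const& v ) { x += v.x; y += v.y; return *this; }
};

inline vec operator-( vec const& a, vec const& b ) { return vec( a.x - b.x, a.y - b.y ); }
inline vec operator-( vec const& a )               { return vec( -a.x, -a.y ); }
inline vec operator/( vec const& a, double s )     { return vec( a.x / s, a.y / s ); }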
Implementing the ModelPolicy
A class that implements a policy is called a policy class. The simplest dynamic model is the point-mass model, integrated with an explicit Euler scheme:
class point_mass_model
{
public:
    point_mass_model() : m_mass( 1.0 ) {}

    void act( vec steering )
    {
        m_acceleration = steering / m_mass;  // Newton's second law: a = F / m
        m_velocity += m_acceleration;        // explicit Euler step (unit time step)
        m_position += m_velocity;
    }

protected:
    double m_mass;
    vec m_acceleration;
    vec m_velocity;
    vec m_position;
};
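For reference, the update performed in act is the explicit Euler scheme (the code above uses an implicit unit time step, dt = 1):

a_n = F_n / m
v_{n+1} = v_n + a_n * dt
x_{n+1} = x_n + v_{n+1} * dt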
The class point_mass_model is now almost ready to be used. We just need to add some getters for the states (get_acceleration, etc...) since they are needed by the BehaviorPolicy:
class point_mass_model
{
    ...
    vec const& get_acceleration() const { return m_acceleration; }
    ...
};
Implementing the BehaviorPolicy
The behavior policy classes only need to implement the think method. The following behaviors make an agent:
- go in a circle (by taking the perpendicular to the velocity; a sketch of perpendicular follows this list):
// this class makes the agent go round
struct circle_move_behavior
{
    // this is the interface to implement
    vec think( vec const& acceleration, vec const& velocity, vec const& position ) const
    {
        return -perpendicular( velocity );
    }
};
- seek towards a target (by pointing the velocity towards the target):
struct seek_behavior
{
    // the target
    vec m_target;

    // this is the interface to implement
    vec think( vec const& acceleration, vec const& velocity, vec const& position ) const
    {
        return m_target - position;
    }
};
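The perpendicular function used by circle_move_behavior is not shown above; for a 2D vector it can simply be a 90 degree rotation. A possible sketch, assuming the illustrative vec type from earlier (not necessarily MetaAgent's own helper):

// Rotate a 2D vector by 90 degrees counter-clockwise: (x, y) -> (-y, x).
inline vec perpendicular( vec const& v )
{
    return vec( -v.y, v.x );
}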
Merging together policies in the host class
Here comes the magic of the policies. By merging different policies, we create totally different agents:
// agent will go round
agent< point_mass_model, circle_move_behavior > circle_mover;
// this agent will track a target
agent< point_mass_model, seek_behavior > seeker;
Better still, you can change the target simply by doing:
seeker.m_target = new_target;
This works since seeker inherits from seek_behavior and m_target is a public attribute.
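To give an idea of how such an agent might be driven, here is a tiny, hypothetical simulation loop. It assumes the illustrative vec type from above and that point_mass_model also provides the get_velocity and get_position getters mentioned earlier; a real application would of course render the agent and use a proper time step:

// Minimal driver sketch: make the seeker chase a fixed target for a while.
int main()
{
    agent< point_mass_model, seek_behavior > seeker;
    seeker.m_target = vec( 10.0, 5.0 );  // some arbitrary target position

    for( int step = 0; step < 100; ++step )
    {
        seeker.think_and_act();          // think() computes the steering, act() integrates it
    }
    return 0;
}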
Small conclusion
Using policies, creating agents with different dynamics and behaviors is as simple as changing some template parameters. This is a major idea behind MetaAgent.

Want to participate?
The above was a very rough description of the possibilities for constructing behaviors using policies. For example, seeking a target can be decomposed into:
- predicting the target collision point: PredictoryPolicy,
- tracking the predicted collision point: TrackerPolicy.
You can then combine all kinds of predictors and trackers with great flexibility.
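To make the idea more concrete, such a composed behavior could itself be a small host class over those two policies. The following is only a sketch of the concept; the predict and track methods and their signatures are invented for illustration and are not MetaAgent's actual interface:

// Sketch only: a behavior assembled from a predictor policy and a tracker policy.
template< typename PredictoryPolicy, typename TrackerPolicy >
struct pursue_behavior : public PredictoryPolicy, public TrackerPolicy
{
    vec think( vec const& acceleration, vec const& velocity, vec const& position ) const
    {
        // PredictoryPolicy: estimate where the target will be.
        vec predicted = predict( position, velocity );
        // TrackerPolicy: steer towards that predicted point.
        return track( predicted, position, velocity );
    }
};

// Any predictor can then be combined with any tracker, e.g. (names made up):
// agent< point_mass_model, pursue_behavior< linear_predictor, simple_tracker > > pursuer;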
If you are interested, you can go to the MetaAgent WikiWikiWeb and learn about or contribute to the project.
Here are some snapshots of the MetaAgent demonstration applications:

Wander behavior.

Seeking behaviors variants.
History
05-20-2003: Fixed image links to the new site.
05-12-2003: Initial publication.
References
[MetaAgent] http://metaagent.sourceforge.net
[Craig Reynolds, 87] http://www.red3d.com/cwr/papers/1987/boids.html
[Craig Reynolds, 99] http://www.red3d.com/cwr/papers/1999/gdc99steer.html
[OpenSteer] http://opensteer.sourceforge.net