Posted by: lljkkennedy | March 22, 2010

Progress

Spent most of the day coding up a way to have only my FYP agents trigger the area preconditions.

Posted by: lljkkennedy | March 22, 2010

Porting Progress

I’ve ported all classes over to C++ / Source; now I begin the final stages of development, where I implement the HTN in Half-Life 2.

To begin, I need to get lists of all objects and relationships that I want, in first order form.

I’ve started off with a simple test

Objects

  • Player
  • Bot
  • Area

Relationships

  • canSee(bot, player)

Topology

  • inArea(area, player)
  • inArea(area, bot)

Operators

  • goToPlayer(bot, player, area)
    • precondition ( inArea(area, player), ~inArea(area, bot))
    • effect (inArea(area, bot))
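The goToPlayer operator above can be sketched as a plain C++ struct with an explicit precondition check and effect application. This is only an illustrative sketch of the idea, not the project code; Predicate, State, holds and GoToPlayer are hypothetical names, and nothing here comes from the Source SDK.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical first-order predicate: a name plus ordered arguments,
// e.g. inArea(area1, bot). negated mirrors the "~" in the operator definition.
struct Predicate {
    std::string name;
    std::vector<std::string> args;
    bool negated = false;

    bool operator==(const Predicate& o) const {
        return name == o.name && args == o.args && negated == o.negated;
    }
};

// World state as a flat list of predicates that are currently true.
using State = std::vector<Predicate>;

// A (possibly negated) predicate holds if its positive form is (not) in the state.
static bool holds(const State& s, const Predicate& p) {
    Predicate positive = p;
    positive.negated = false;
    bool present = std::find(s.begin(), s.end(), positive) != s.end();
    return p.negated ? !present : present;
}

// goToPlayer(bot, player, area):
//   precondition: inArea(area, player) AND ~inArea(area, bot)
//   effect:       inArea(area, bot)
struct GoToPlayer {
    std::string bot, player, area;

    bool applicable(const State& s) const {
        return holds(s, {"inArea", {area, player}, false}) &&
               holds(s, {"inArea", {area, bot}, true});
    }

    State apply(const State& s) const {
        State next = s;
        next.push_back({"inArea", {area, bot}, false});
        return next;
    }
};
```

Applying the operator once makes its own precondition false, which is the behaviour the planner relies on.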

I’ve started implementation of this by creating a trigger area class called fyp_area in Source, which fires an event whenever the player enters the area. To test this, I inserted it into the map and got the following output (see top left corner):

Posted by: lljkkennedy | March 21, 2010

Progress

Have the implementation in C# mostly finished; now beginning the porting process to C++.

Notes for dissertation:

  • rewrote fyp_grunt to use the existing actions in game instead of writing new ones
  • overloading operator for predicate comparison in C++
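The second note can be sketched roughly as follows; Predicate here is a hypothetical stand-in for the actual class, and the overload simply compares predicates by name and ordered argument list.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical predicate type for the dissertation note on operator
// overloading: two predicates are equal if name and arguments match.
struct Predicate {
    std::string name;
    std::vector<std::string> args;
};

bool operator==(const Predicate& lhs, const Predicate& rhs) {
    // std::vector compares element-wise, so argument order matters.
    return lhs.name == rhs.name && lhs.args == rhs.args;
}

bool operator!=(const Predicate& lhs, const Predicate& rhs) {
    return !(lhs == rhs);
}
```

With this in place, precondition checks can use std::find over a list of predicates instead of hand-written comparison loops.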
Posted by: lljkkennedy | March 2, 2010

HTN planner progress

Bit of a roadblock at the moment trying to figure out how to represent the planner in code.

Posted by: lljkkennedy | February 22, 2010

Prototype 5 – Day 3 (Code)

Moving on, the basis for today’s work was to begin coding of the HTN Prototype in C#.

The planning domain is set up as follows:

To replicate this in C#, the following classes are created:

  • DockWorkerRobot.cs: Instantiates the locations, piles, robot, containers and crane as seen in Figure 30. This is where the operations listed above are performed.
  • Container.cs: Stores the container directly below it.
  • Crane.cs: Stores the location it belongs to and the container it is holding, if any.
  • Location.cs: Stores all adjacent locations, attached piles and cranes. There is a Boolean occupied to indicate if the robot is in the location or not.
  • Pile.cs: Stores the location it is attached to and a list of containers on it.
  • Robot.cs: Stores its current location and the container it’s carrying, if any.

All objects have a function writeStatus() which prints out the current status of the object to the console. Full code for the HTNPrototype project is located in Appendix C.
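The class layout above might be sketched as follows. The original prototype is C#; this sketch is in C++ for consistency with the rest of this log, the field and method names beyond those listed above are assumptions, and full details are in Appendix C.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical C++ sketch of the prototype's class layout.
struct Container {
    std::string name;
    Container* below = nullptr;  // container directly underneath, if any
};

struct Pile {
    std::string name;
    std::vector<Container*> containers;  // pallet side first, top last

    Container* top() const {
        return containers.empty() ? nullptr : containers.back();
    }
};

struct Crane {
    std::string name;
    Container* holding = nullptr;  // container being held, if any

    // Mirrors the writeStatus() mentioned above: print current status.
    void writeStatus() const {
        std::cout << name << " holding: "
                  << (holding ? holding->name : "nothing") << "\n";
    }
};

struct Location {
    std::string name;
    bool occupied = false;  // true if the robot is in this location
    std::vector<Pile*> piles;
    std::vector<Crane*> cranes;
    std::vector<Location*> adjacent;
};

struct Robot {
    Location* at = nullptr;
    Container* carrying = nullptr;
};
```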

When the project is run, the following output is captured:

Posted by: lljkkennedy | February 21, 2010

Prototype 5 – Day 2

Domain Assumptions

In order to correctly define the domain, there are some assumptions about it that need to be clarified. For the purpose of this prototype, the planning domain is as follows:

  • The system has a finite set of states.
  • The system is fully observable, insofar as there is complete knowledge about the state of the system.
  • The system is static, as there will be no external events that impact on the planner. This means the state-transition system can now be defined as Σ = (S, A, γ).
  • The system has restricted goals which are limited to a goal state sg or a set of goal states Sg, where the objective is any sequence of states that ends at one of the goal states.
  • The planner will create a sequential plan, where the solution plan is an ordered set of actions.
  • The system works under implicit time. This means that actions and events have no duration, but are performed instantaneously.
  • This particular prototype will work under the assumption that Σ does not change while the plan is being executed, for simplicity purposes. Therefore, the plan will be computed offline.

According to Ghallab et al, it is possible to now reduce the problem as follows:

Given Σ = (S, A, γ), an initial state s0 and a subset of goal states Sg, find a sequence of actions (a1, a2, …, ak) corresponding to a sequence of state transitions (s0, s1, …, sk) such that s1 ∈ γ(s0, a1), s2 ∈ γ(s1, a2), …, sk ∈ γ(sk-1, ak), and sk ∈ Sg.

Simplified, the goal is to find the sequence of actions that, when applied in order from the start state, alters the state of the system so that the final state is one of the goal states.
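Under these assumptions the reduced problem can be solved by an uninformed search over the state-transition graph. The following is a minimal breadth-first sketch, with states and actions as plain strings for illustration; findPlan and the explicit gamma map are hypothetical, not taken from the project code.

```cpp
#include <map>
#include <queue>
#include <set>
#include <string>
#include <utility>
#include <vector>

using Action = std::string;
using StateId = std::string;

// Breadth-first search over gamma, returning the action sequence
// (a1, ..., ak) that reaches a goal state from s0, or {} if none exists.
std::vector<Action> findPlan(
    const std::map<StateId, std::vector<std::pair<Action, StateId>>>& gamma,
    const StateId& s0,
    const std::set<StateId>& goals)
{
    std::map<StateId, std::pair<StateId, Action>> parent;  // came-from links
    std::queue<StateId> frontier;
    frontier.push(s0);
    std::set<StateId> seen = {s0};

    while (!frontier.empty()) {
        StateId s = frontier.front();
        frontier.pop();
        if (goals.count(s)) {
            // Reconstruct (a1, ..., ak) by walking the parent links back to s0.
            std::vector<Action> plan;
            for (StateId cur = s; cur != s0; cur = parent[cur].first)
                plan.insert(plan.begin(), parent[cur].second);
            return plan;
        }
        auto it = gamma.find(s);
        if (it == gamma.end()) continue;
        for (const auto& [a, next] : it->second) {
            if (seen.insert(next).second) {
                parent[next] = {s, a};
                frontier.push(next);
            }
        }
    }
    return {};  // no plan found
}
```

Because the search is breadth-first, the first goal state reached yields a shortest plan, which matches the restricted-goal assumption above.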

1.1.2 Dock Worker System

1.1.2.1 Definition

For this prototype, the system will be based on the Dock Worker Robot example defined by Ghallab et al. The following description is taken from “Automated Planning: Theory and Practice” (20).

Figure 30: The Dock Worker Robot problem from “Automated Planning: Theory and Practice”, page 230

The system is defined as follows:

  • A set of locations which are the storage areas loc(x) in Figure 30.
  • A set of cranes which belong to a single location and can carry only one container at a time.
  • A set of piles which are fixed areas attached to a location. At the bottom of each pile is a pallet, on top of which are stacked zero or more containers.
  • A set of containers which can be stacked in some pile on top of a pallet or another container. They can also be held by a crane.
  • A symbol pallet which denotes the bottom of a pile.

In this system, there are the following constraints:

  • Any location that has piles also has one or more cranes.
  • A crane can move a container from the top of a pile to an empty pile or to the top of another pile at the same location.

The topology of the system can be defined as follows:

  • adjacent (l, l’): Location l is adjacent to location l’
  • attached(p,l): Pile p is attached to location l.
  • belong(k,l): Crane k belongs to location l.

The relationship between entities can be defined as follows:

  • holding(k,c) : Crane k is holding container c.
  • empty(k) : Crane k is not holding a container.
  • in(c,p): Container c is in pile p.
  • on(c, c’): Container c is on some container c’ or on a pallet in a pile.
  • top(c,p): Container c is at the top of pile p. If pile p is empty, this will be denoted as top(pallet, p)

And finally, the possible operations that can be performed:

  • Take(c, k, p): Take a container c with empty crane k from the top of pile p.
  • Put(c, k, p): Put container c from crane k in pile p.
  • Move(c, p): Move container c to pile p.
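The Take and Put operations can be sketched directly from the predicates above. This is a simplified illustration, not the project code: Move is omitted, cranes and piles are plain structs rather than first-order terms, and empty(k) is represented by an empty holding string.

```cpp
#include <string>
#include <vector>

struct Crane {
    std::string name;
    std::string holding;  // empty(k) <=> holding == ""
};

struct Pile {
    std::string name;
    std::vector<std::string> containers;  // top(c, p) <=> containers.back() == c
};

// Take(c, k, p): crane k must be empty and c must be at the top of pile p.
bool take(Crane& k, Pile& p, const std::string& c) {
    if (!k.holding.empty()) return false;                 // precondition: empty(k)
    if (p.containers.empty() || p.containers.back() != c)
        return false;                                     // precondition: top(c, p)
    p.containers.pop_back();                              // effect: c leaves the pile
    k.holding = c;                                        // effect: holding(k, c)
    return true;
}

// Put(c, k, p): crane k must be holding c.
bool put(Crane& k, Pile& p, const std::string& c) {
    if (k.holding != c) return false;                     // precondition: holding(k, c)
    k.holding.clear();                                    // effect: empty(k)
    p.containers.push_back(c);                            // effect: top(c, p)
    return true;
}
```

Returning false when a precondition fails lets a planner probe which operations apply in a given state without mutating it by accident.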
Posted by: lljkkennedy | February 20, 2010

Prototype 5

Prototype goal Create the HTN planner
Requirements
  1. A valid HTN planner is created and can formulate a plan based on input and expected outcome
Design
  1. Decide if the planner is domain-independent or domain-specific
New Revision Needed
Next Revision Requirements

Prototype Goal

This prototype aims to create a Hierarchical Task Network in order to establish the framework for the Project. This prototype will be coded in C# for quick development and testing; this framework code will then be ported over to the Source Engine when it is completed.

Requirements

This prototype should be able to come up with a valid plan based on the data input. Given a world state, the planner’s objective is to find which actions apply to which states in order to achieve the goal world state.

The Plan returned should be in the form of a structure that lists the appropriate actions.

The Objective should consist of a goal state or a set of goal states. For the purposes of this project, the Objective can be achieved by any sequence of state transitions that end at one of the goal states. However, a Constraint should be added to the system to allow for states to be avoided.

Planner Definition

According to Ghallab et al (20), planning is defined as “an abstract, explicit deliberation process that chooses and organizes actions by anticipating their expected outcomes”. This process is either domain-independent or domain-specific.

Domain-independent planning relies on abstract, general models of actions, whereas domain-specific planning can take advantage of domain knowledge, such as “an enemy doesn’t fire at its allies”, and of knowledge about good plans, such as “do not run into a room after a grenade”. If a planner comes up with a plan like that, it is obviously a bad plan, resulting in the death of the Agent. Finally, domain-independent planners can also use domain knowledge to influence their search routine and find the best plan, by explicitly encoding that knowledge into the search. This means the search can be narrowed to certain avenues, such as “plan long range attacks before short range”.

Beyond this, there are also different forms of planning to consider. Ghallab et al specify the following:

  • Project Planning: where the tasks involved are focused on time, and are constrained on what has happened before. Project planners usually take a plan input by the user and check the feasibility of the constraints, and can return useful attributes such as its critical path. However, this type of planner is not applicable for this project.
  • Scheduling and resource allocation: this involves the same constraints as Project Planning, while adding constraints on the resources to be used by each task. These types of planners take the actions to be carried out, the resource limitations and the optimization criteria required to create a plan. Again, this is not appropriate for this project.
  • Plan Synthesis: these planners use the conditions needed to perform a task (the cause) and the resultant world state after the task is completed (the effect) to create a valid plan of action. Such a planner takes as input the model of all known actions, the current world state and the objective (the final state the world is required to be in), and creates the plan of tasks to be performed, based on their causes and effects, in order to produce a world state equal to the objective state. This is exactly the kind of planner that is required for this project.

The project, using the parameters above, can now be defined as a domain-specific and plan-synthesis based planner.

High Concept Model

This project will use the general planning model defined by Ghallab et al (20), which is a state-transition system. It is defined as a 4-tuple Σ = (S, A, E, γ) where:

  • S = {s1, s2, …} is a finite or recursively enumerable set of states;
  • A = {a1, a2, …} is a finite or recursively enumerable set of actions;
  • E = {e1, e2, …} is a finite or recursively enumerable set of events;
  • γ : S × A × E → 2^S is a state-transition function.

This can be represented as a directed graph whose nodes are the states in the set S. If a state s′ is contained in γ(s, u), u being a pair (a, e) where a is an element of the set of actions A and e is an element of the set of events E, there is a state transition between s and s′.

Figure 28: A State Transition Graph where S = {s1, s2, s3}
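One possible encoding of the 4-tuple as data follows the edge-labelled directed-graph view described above. This is a hypothetical sketch for illustration only; Sigma, Transition and hasTransition are assumed names, not part of the project.

```cpp
#include <map>
#include <set>
#include <string>
#include <utility>

using State = std::string;
// An edge label u = (a, e): an action paired with an event.
using Transition = std::pair<std::string, std::string>;

// Sigma = (S, A, E, gamma), with A and E implicit in the edge labels.
struct Sigma {
    std::set<State> S;
    // gamma maps (s, u) to the set of possible successor states.
    std::map<std::pair<State, Transition>, std::set<State>> gamma;

    // s' in gamma(s, u) means there is an edge s -> s' labelled u.
    bool hasTransition(const State& s, const Transition& u, const State& sp) const {
        auto it = gamma.find({s, u});
        return it != gamma.end() && it->second.count(sp) > 0;
    }
};
```

Mapping to a set of successor states, rather than a single state, keeps the nondeterminism of γ that the definition above allows.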

Based on this state transition system, the planning model must allow for real-time dynamic updating of the current world state, as demanded by this project. The plan must constantly be reevaluated as the player moves through the world and causes events to happen. The planner must also be able to take into consideration the actions of the Agent as they perform the plan, and the causes of those actions.

To allow for this, there are different components of the system that will perform certain tasks. They are as follows:

  • The state-transition system which defines the current world state;
  • A controller which will observe the current world state and perform actions based on a plan; in this project, the controller is the AI Agent. The controller will also report back to the planner with a status update of how well the plan is going, in order to allow for new plans to be calculated if the current one does not succeed.
  • A planner which takes the current world state, the goal world state, and the description of Σ, which contains all possible actions that can be performed, and creates the plan for the Agent.

Figure 29: The basic model for the Planner

Posted by: lljkkennedy | February 20, 2010

Prototype 4

Prototype goal Hook the preexisting collision detection into the Agent.
Requirements The Agent is involved in the physics engine and is affected by physics interactions.
Design 1: Add the Agent to the list of objects calculated by the physics interactions.
New Revision Needed No
Next Revision Requirements N/A

The purpose of this prototype is to enable the physics collision for the NPC Agent. To do this, the hitbox for the character is created in order for the physics engine to detect and process interactions with other objects in the game.

Figure 22: A simple 2D hitbox

As seen in Figure 22, a hitbox, visualized here as a bounding box, is the area wherein physics collisions are calculated. In Half-Life 2, the physics collisions are handled by bounding boxes placed and sized to correspond with the limbs and other vital areas of the character models. By doing this, the collision detection can be localized to specific limbs on the model: if a character is hit in the arm, the arm will take most of the impact and distribute the energy from the collision to the other parts of the model. In reality, this type of physics interaction is quite expensive to calculate, so the hitboxes are mainly used to calculate damage taken from shots (i.e. a shot to the head is more damaging than a shot to the foot), and a pre-baked animation is then played to show the collision. Specifically in Half-Life 2, if the impact is enough to inflict fatal damage on the character, it “ragdolls”. Ragdolling means all physical resistance is turned off on the model, and it falls to the ground in a realistic pose, or, if enough force is applied, flies through the air spectacularly.

Figure 23: the ragdoll effects working rather sorely in Skate by EA Black Box

To hook the collision detection into the Agent, the Spawn() method is altered to include the bounding-box collision model and to set the bounding box positions to a humanoid shape:

void fyp_grunt::Spawn( void )
{
    Precache();

    // set up the model and collision detection details
    SetHullType( HULL_HUMAN );
    SetHullSizeNormal();
    SetSolid( SOLID_BBOX );
    AddSolidFlags( FSOLID_NOT_STANDABLE );
    SetModel( GRUNT_MODEL );

    // set the movement type of the NPC and the general details
    SetMoveType( MOVETYPE_STEP );
    SetBloodColor( BLOOD_COLOR_RED );
    m_iHealth = 20;
    m_flFieldOfView = 0.5;
    m_NPCState = NPC_STATE_NONE;

    CapabilitiesClear();
    NPCInit();
}

This creates the hitboxes for the character.

Figure 24: The hitboxes for the Agent. Instead of one large hitbox, there are many to allow for localized hit detection.

When the Agent is subjected to a physics collision in the game, it now responds appropriately:

Figure 25: The Agent is blown back by the physics engine after an explosion

Posted by: lljkkennedy | February 20, 2010

Prototype 3 Revision 1

Prototype goal Find a soldier model to use instead of the Alyx model to improve immersion.
Requirements A different model is used for the Agent
Design 1: Use a soldier-like model that will not look odd with more than one on screen.
New Revision Needed No
Next Revision Requirements N/A

The Alyx model used in the first version of this Prototype was too unique to be used in the game – it broke immersion when more than one model was placed at once. To rectify this, a different model was used.

Figure 21: The new Soldier model for the Agent. Also seen are the relationship table’s effects when a Zombie hates the Agent.

Posted by: lljkkennedy | February 20, 2010

Prototype 3

Prototype goal Create a test Agent object and instantiate it in the level
Requirements Have an Agent object created with a model and placed in the test level.
Design 1: Base the code on monster_dummy.cpp with all AI elements removed
2: Use Alyx.mdl as the character model
3: Set up relationships to other entities in the game
New Revision Needed Yes
Next Revision Requirements 1: Find a soldier model on the internet to use instead of the Alyx model to improve immersion.

This prototype is concerned with creating an NPC object and placing it in the level. To begin, a class called fyp_grunt is created, based on Valve’s monster_dummy.cpp file. All existing AI elements have been stripped out, and the “Alyx” model from the game is set as the NPC model.

fyp_grunt.cpp


#include "cbase.h"
#include "ai_default.h"
#include "ai_task.h"
#include "ai_schedule.h"
#include "ai_hull.h"
#include "soundent.h"
#include "game.h"
#include "npcevent.h"
#include "entitylist.h"
#include "activitylist.h"
#include "ai_basenpc.h"
#include "engine/IEngineSound.h"

// memdbgon must be the last include file in a .cpp file!!!
#include "tier0/memdbgon.h"

// define the model for use throughout the class
#define GRUNT_MODEL "models/Combine_Super_Soldier.mdl"

//-----------------------------------------------------------------------------
// Purpose: instantiates the class inheriting from the Base NPC class
//-----------------------------------------------------------------------------
class fyp_grunt : public CAI_BaseNPC
{
    // Macro to declare the class details to the rest of the SDK
    DECLARE_CLASS( fyp_grunt, CAI_BaseNPC );

public:
    // Purpose: loads the model "behind the scenes" before the level begins
    void Precache( void );
    // Purpose: Called when a new instance is created. Sets up physics, sets the model.
    void Spawn( void );
    // Purpose: Sets the object class type for character relationships.
    Class_T Classify( void );

    // Macro to automate the declaration of the DATADESC table
    // DATADESC contains metadata about the class, such as the information
    // saved about the instance when the game is saved
    DECLARE_DATADESC();
};

// Macro to link the name npc_FYP_grunt to the class for use in external editors
// such as hammer
LINK_ENTITY_TO_CLASS( npc_FYP_grunt, fyp_grunt );

//---------------------------------------------------------
// Save/Restore details
//---------------------------------------------------------
BEGIN_DATADESC( fyp_grunt )

END_DATADESC()

//-----------------------------------------------------------------------------
// Purpose: loads the model "behind the scenes" before the level begins
//-----------------------------------------------------------------------------
void fyp_grunt::Precache( void )
{
    PrecacheModel( GRUNT_MODEL );

    BaseClass::Precache();
}

//-----------------------------------------------------------------------------
// Purpose: Called when a new instance is created. Sets up physics, sets the
// model.
//-----------------------------------------------------------------------------
void fyp_grunt::Spawn( void )
{
    Precache();

    // set up the model and collision detection details
    SetModel( GRUNT_MODEL );
    SetHullType( HULL_HUMAN );
    SetHullSizeNormal();
    SetSolid( SOLID_BBOX );
    AddSolidFlags( FSOLID_NOT_STANDABLE );

    // set the movement type of the NPC and the general details
    SetMoveType( MOVETYPE_STEP );
    SetBloodColor( BLOOD_COLOR_RED );
    m_iHealth = 20;
    m_flFieldOfView = 0.5;
    m_NPCState = NPC_STATE_NONE;

    CapabilitiesClear();

    NPCInit();
}

//-----------------------------------------------------------------------------
// Purpose: Sets the object class type for character relationships.
//-----------------------------------------------------------------------------
Class_T fyp_grunt::Classify( void )
{
    // CLASS_FYP_GRUNT has been defined in BaseEntity.h
    return CLASS_FYP_GRUNT;
}

Next, the relationships the NPC will have with the other entities in the game are defined. This is listed in Appendix B.

Next, the newly created Agent must be added to the list of custom entities for the modification of the game. This is done by creating an FGD file in the mod folder, and listing the details of the Agent there. The FGD file is used as a lookup table for external applications to reference content that is not distributed by the SDK. In this project, it will be used by Hammer to reference any custom classes created so they can be placed in the map.

Full_life.fgd

@PointClass base(Targetname) studio("models/alyx.mdl") = npc_FYP_grunt : "FYP NPC Class"

[

]

As there are no custom parameters set up yet, no variables are passed by the FGD file. The object, when loaded in Hammer, will be called “npc_FYP_grunt”.

Finally, the Agent can be loaded in Hammer and placed in the level. When the entity tool is chosen, the Agent is available in the list of objects that can be placed.

Figure 18: Prototype 3 listed in the Entity list

Figure 19: The Agent is placed in the level, with the properties window displayed.

The Agent is placed in the scene, the level is compiled and the game is loaded to ensure the object appears.

Figure 20: The Agent in-game
