EP1725966A1 - Method and system for on-screen navigation of digital characters and the like - Google Patents

Method and system for on-screen navigation of digital characters and the like

Info

Publication number
EP1725966A1
EP1725966A1 (application EP05714659A)
Authority
EP
European Patent Office
Prior art keywords
digital
entity
cell
recited
movable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05714659A
Other languages
German (de)
English (en)
Other versions
EP1725966A4 (fr)
Inventor
Paul Kruszewski
Greg Labute
Cory Kumm
Fred Dorosh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BGT Biographic Technologies Inc
Original Assignee
BGT Biographic Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BGT Biographic Technologies Inc
Publication of EP1725966A1
Publication of EP1725966A4
Legal status: Withdrawn

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/10
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45 Controlling the progress of the video game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/64 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/64 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
    • A63F2300/643 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car by determining the impact between objects, e.g. collision detection
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6607 Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics

Definitions

  • the present invention relates to the digital entertainment industry and to computer simulation. More specifically, the present invention concerns a method and system for on-screen navigation of digital objects or characters.
  • FIG. 1 of the appended drawings illustrates a generic 3D application from the prior art which can be in the form of an animation package, a video/computer game, a trainer or a simulator for example.
  • the 3D application shares the basic high-level architecture of objects, which can be more generally referred to as digital entities, being manipulated by controllers via input devices or by physics and artificial intelligence (AI) systems, and made real to the user by synthesizers, including visual rendering and audio.
  • One level deeper, 3D applications are typically broken down into two components: the simulator and the image generator. As illustrated in Figures 2 and 3, the simulator takes in many inputs from both human operators and CGFs; the simulator then modifies the world database accordingly and outputs the changes to the image generator for visualization.
  • a typical simulation/animation loop structure is illustrated in Figure 4.
  • the world state manager first loads the initialisation data from the world database. For each frame/tick of the simulation, the world state manager updates the controllers; the controllers act accordingly and send back object updates to the manager.
  • the world state manager then resolves all the object updates into a new world state in the world database (WDB) and passes this to the image generator (IG).
  • the IG updates the characters' body limb positions and other objects and renders them out to the screen.
  • Game AI makes games more immersive. Typically, game AI is used in the following situations:
  • NPCs: intelligent non-player characters;
  • PPCs: non-player characters;
  • AI can be used to fill sporting arenas with animated spectators or to add a flock of bats to a dungeon scene;
  • to create opponents when there are none. Many games are designed for two or more players; however, if there is no one to play against, intelligent AI opponents are needed; or
  • to create team members when there are not enough. Some games require team play, and game AI can fill the gap when there are not enough players.
  • CGFs Computer Generated Forces
  • SAFs Semi-Automated Forces
  • Vehicle drivers and pilots: although vehicles have very complex models for physics (e.g., helicopters will wobble realistically as they bank into turns and tanks will bounce as they jump ditches) and weapon/communication systems (e.g., line-of-sight radios will not work through hills), they tend to have simplistic line-of-sight navigation systems that fail in the 3D concrete canyons of MOUT (e.g., helicopters fly straight through skyscrapers rather than around them, and tanks get confused and stuck in the twisty, garbage-filled streets of the third world).
  • AI can be used to simulate the brain of the human driver in the vehicle (with or without the actual body being simulated).
  • Groups of individual doctrinal combatants: United States government-created SAFs of groups of individual doctrinal combatants are limited in their usefulness to MS&T applications, since they are restricted to US installations only and are unable to navigate properly in urban environments. While aggregate SAFs operate on a strategic and often abstract level, individual combatant simulators operate on a tactical and often 3D physical level. Groups of individual irregular combatants: by definition, irregular combatants such as armed men and terrorists are non-doctrinal, and hence it is difficult to express their personalities and tactics with a traditional SAF architecture. Crowds of individual non-combatants (clutter): one of the most difficult restrictions of MOUT is how to conduct military operations in an environment that is populated with non-combatants. These large civilian populations can affect a mission by acting merely as operational "clutter" or by actually affecting the outcome of the battle.
  • One of the more specific aspects of game AI and MS&T is real-time intelligent navigation of agents per se and, for example, in the context of crowd simulation.
  • Reece develops a system for modelling crowds within the Dismounted Infantry Semi-Automated Forces (DISAF) system (Reece,
  • Movement is fundamental to all entities in a simulation whether bipedal, wheeled, tracked or aerial.
  • an AI system should allow an entity to navigate in a realistic fashion from point X to point Y.
  • traffic rules, such as staying in lane and stopping at traffic lights;
  • military operations: avoiding roadblocks and trying not to run over civilian bystanders.
  • Intelligent navigation can be broken down into two basic levels: dynamically finding a path from X to Y and avoiding dynamic obstacles along that path.
  • An example of agent navigation is following a predetermined path (e.g., guards on a patrol path).
  • a predetermined path is an ordered set of waypoints that digital characters may be instructed to follow.
  • a path around a racetrack would consist of waypoints at the turns of the track.
  • Path following consists of locating the nearest waypoint on the path, navigating to it by direct line of sight and then navigating to the next point on the path. The character looks for the next waypoint when it has arrived at the current waypoint (i.e., is within the waypoint's bounding sphere).
  • Figure 5 shows how a path can be used to instruct a character to patrol an area in a certain way. It has been found that path following works well as a navigation mechanism when both the start and the destination are known and the path itself is explicitly known.
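The path-following mechanism described above can be sketched in a few lines; the function name and the 2D tuple representation of positions and waypoints are illustrative assumptions, not taken from the patent.

```python
import math

def follow_path(position, waypoints, radius, speed):
    """One navigation tick of simple path following: steer toward the
    current waypoint by direct line of sight, and advance to the next
    waypoint once the character is inside the current waypoint's
    bounding sphere (here, a circle of the given radius in 2D).
    Returns the new position and the remaining waypoint list."""
    if not waypoints:
        return position, waypoints
    target = waypoints[0]
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist <= radius:                  # arrived: look for the next waypoint
        return position, waypoints[1:]
    step = min(speed, dist)             # direct line-of-sight move
    return (position[0] + dx / dist * step,
            position[1] + dy / dist * step), waypoints
```

Calling this once per frame/tick reproduces the patrol behaviour of Figure 5: the character walks the ordered waypoints in turn and stops when the list is exhausted.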
  • An object of the present invention is therefore to provide an improved navigation method for a digital entity in a digital world.
  • Another object of the invention is to provide a method for automatically moving a digital entity on-screen from start to end points.
  • a method in a computer system for moving at least one digital entity on-screen from starting to end points in a digital world comprising: i) providing respective positions of obstacles for the at least one movable digital entity in the digital world, and defining at least portions of the digital world without obstacles as reachable space for the at least one movable digital entity; ii) creating a navigation mesh for the at least one movable digital entity by dividing the reachable space into at least one convex cell; iii) locating a starting cell and an end cell among the at least one convex cell including respectively the starting and end points; and iv) verifying whether the starting cell corresponds to the end cell; if the starting cell corresponds to the end cell, then: iv)a) moving the at least one movable digital entity from the starting point to the end point; if the starting cell does not correspond to the end cell, then iv)b) i) determining a sequence
  • a system for moving a digital entity on-screen from starting to end points in a digital world comprising: a world database for storing information about the digital world and for providing respective positions of obstacles for the movable digital entity in the digital world; a navigation module i) for defining at least portions of the digital world without obstacles as reachable space for the movable digital entity; ii) for creating a navigation mesh for the movable digital entity by dividing the reachable space into at least one convex cell; iii) for locating a starting cell and an end cell among the at least one convex cell including respectively the starting and end points; and iv) for verifying whether the starting cell corresponds to the end cell; and if the starting cell does not correspond to the end cell, for further v) determining a sequence of cells among the at least one convex cell from the starting cell to the end cell, and vi) determining at least one intermediary point located on a respective boundary between consecutive cells in the
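Step iii) of both the method and the system, locating the cells that contain the starting and end points, reduces to a point-in-convex-polygon test, because every cell of the navigation mesh is convex. A minimal 2D sketch; the representation of cells as counter-clockwise-ordered vertex tuples is an assumption for illustration, not the patent's data structure.

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_cell(p, cell):
    """True if p lies inside (or on the boundary of) a convex cell
    given as counter-clockwise-ordered vertices: p must be on the
    left of every edge."""
    n = len(cell)
    return all(cross(cell[i], cell[(i + 1) % n], p) >= 0 for i in range(n))

def locate_cell(p, cells):
    """Index of the first cell of the navigation mesh containing p,
    or None if p lies outside the reachable space."""
    for i, cell in enumerate(cells):
        if point_in_convex_cell(p, cell):
            return i
    return None
```

A linear scan is enough for a sketch; a production mesh would typically use a spatial index to locate cells.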
  • Figure 1, which is labeled "prior art", is a block diagram illustrating the first level of a generic three-dimensional (3D) application;
  • Figure 2, which is labeled "prior art", is a block diagram illustrating the second level of the generic 3D application from Figure 1;
  • Figure 3, which is labeled "prior art", is an expanded view of the block diagram from Figure 2;
  • Figure 4, which is labeled "prior art", is a flowchart illustrating the flow of data from the generic 3D application from Figure 1 to the image generator part of the 3D application from Figure 1;
  • Figure 6 is a block diagram illustrating a system for on-screen animation of digital entities including a navigation module embodying a system for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention;
  • Figure 7 is a flowchart illustrating a method for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention
  • Figure 8 is a schematic view illustrating a two-dimensional barrier used with the method of Figure 7;
  • Figure 9 is a schematic view illustrating a three-dimensional barrier used with the method of Figure 7;
  • Figure 10 is a schematic view illustrating a co-ordinate system used with the method of Figure 7;
  • Figure 11 is a top plan schematic view of a two-dimensional world in the form of a one-floor building according to a first example of the reachable space for a specific movable digital entity according to the method from Figure 7;
  • Figure 12 is a top plan schematic view of the one-floor building from Figure 11 illustrating a navigation mesh created through the method from Figure 7;
  • Figure 13 is a schematic view of a connectivity graph obtained from the navigation mesh from Figure 12;
  • Figure 14 is a top plan schematic view of a two-dimensional world according to a second example of the reachable space for a specific movable digital entity according to the method from Figure 7;
  • Figure 15 is a top plan schematic view of the world from Figure 14 illustrating a navigation mesh created through the method from Figure 7;
  • Figure 16 is a top plan schematic view similar to Figure 15, illustrating steps from the method from Figure 7;
  • Figure 17 is a top plan schematic view similar to Figure 15, illustrating a path resulting from the method from Figure 7;
  • Figure 18 is a top plan schematic view similar to Figure 17, illustrating a first alternative path to the path illustrated in Figure 17, resulting from the blocking of a first passage;
  • Figure 19 is a top plan schematic view similar to Figure 18, illustrating a second alternative path to the path illustrated in Figures 17 and 18, further resulting from the blocking of a second passage;
  • Figure 20 is a top plan schematic view similar to Figure 17, illustrating a third alternative path to the path illustrated in Figure 17, resulting from new starting and end points;
  • Figure 21 is a top plan schematic view similar to Figure 20, illustrating an alternative path to the path illustrated in Figure 20, resulting from doubling the width of the movable digital entity;
  • Figure 22 is a perspective view of a floor plan generator on a small city part of a simulator;
  • Figure 23 is a perspective view illustrating the navigation mesh created from the floor plan generator from Figure 22 using the method from Figure 7;
  • Figure 24 is a perspective view of a digital world in the form of a city street;
  • Figure 25 is a perspective view of the city street from Figure 24, illustrating the navigation mesh creating step according to the method from Figure 7, including the use of blind data to characterize the resulting cells;
  • Figure 26 is a cut out perspective view of a digital world in the form of a building
  • Figure 27 is a perspective view of the navigation mesh resulting from the building from Figure 26 using the method from Figure 7;
  • Figure 28 is a flowchart of a collision avoidance method for a digital entity moving on-screen from starting to end points in a digital world according to a specific illustrative embodiment of the present invention;
  • Figure 29 is a perspective view of an entity in a 3D application, in the form of a character, illustrating the character's sensors according to the present invention;
  • Figure 30 is a top plan view of an entity in a digital world illustrating the entity's vision sensor according to the present invention, and more specifically illustrating the field of view provided by the sensor;
  • Figure 31 is a perspective view of an entity in a digital world similar to Figure 30 illustrating the selection of a sub-path to avoid obstacles according to a specific embodiment of the method from Figure 7;
  • Figures 32A-32C are schematic views illustrating avoidance strategies (Figures 32B-32C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of a stationary obstacle (Figure 32A);
  • Figures 33A-33C are schematic views illustrating avoidance strategies (Figures 33B-33C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of an incoming obstacle (Figure 33A);
  • Figures 34A-34C are schematic views illustrating avoidance strategies (Figures 34B-34C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of an outgoing obstacle (Figure 34A);
  • Figures 35A-35C are schematic views illustrating avoidance strategies (Figures 35B-35C) that can be used by a movable digital entity according to a specific embodiment of the method from Figure 7 in the case of a sideswiping obstacle (Figure 35A); and
  • Figures 36A-36E are schematic views illustrating paths for simultaneously moving five movable digital entities using the method from Figure 7 and applying a group-based movement modifier.
  • a system 10 for on-screen animation of digital image entities including a navigation module 12 embodying a method for moving a digital entity on-screen from starting to end points in a digital world according to an illustrative embodiment of the present invention will now be described with reference to Figure 6.
  • the system 10 comprises a simulator 14, a world database (WDB) 16 coupled to the simulator 14, a plurality of image generators (IG) 18 (three shown) coupled to both the world database 16 and to the simulator 14, a navigation module 12 according to an illustrative embodiment of the present invention coupled to the simulator 14, a decision-making module 20, also coupled to the simulator 14, and a plurality (three shown) of animation control modules 22, each coupled to a respective IG 18.
  • WDB world database
IG image generators
  • the number of IG 18 may of course vary depending on the application. For example, in the case wherein the system 10 is embodied in a 3D animation application, the number of IG 18 may affect the rendering time.
  • the simulator 14 and IG 18 may be in the form of a single computer.
  • the world database 16 is stored on any suitable memory means, such as, but not limited to, a hard drive, a DVD or CD-ROM disk to be read on a corresponding drive, or a random-access memory part of the computer 14.
  • the simulator 14 and IG 18 are in the form of computers or of any processing machines provided with processing units, which are programmed with instructions for animating, simulating or gaming as will be explained hereinbelow in more detail.
  • the simulator 14, IG 18 and world DB 16 can be remotely coupled via a computer network (not shown) such as the Internet.
  • the simulator 14 can take another form such as a game engine or a 3D animation system.
  • the modules 12, 20 and 22 are in the form of sub-routines or dedicated instructions programmed in the simulator 14 for example.
  • the characteristics and functions of the modules 20, 22 and more specifically of module 12 will become more apparent upon reading the following non-restrictive description of a method 100 for moving a digital entity on-screen from a starting point to an end point in a digital world according to an illustrative embodiment of the present invention.
  • the method 100 which is illustrated in Figure 7, comprises the following steps:
  • 102 - providing respective positions of obstacles for the movable digital entity in the digital world and defining at least portions of the digital world without obstacles as reachable space for the movable digital entity;
  • 104 - creating a navigation mesh for the movable digital entity by dividing the reachable space into convex cells;
  • 106 - locating a starting cell and an end cell among the convex cells including respectively the starting and end points;
  • 108 - verifying whether the starting cell corresponds to the end cell; if the starting cell corresponds to the end cell, then:
  • 110 - moving the digital entity from the starting point to the end point and stopping the method; if the starting cell does not correspond to the end cell, then:
  • 112 - determining a sequence of cells among the convex cells from the starting cell to the end cell;
  • 114 - determining intermediary points located on a respective boundary between consecutive cells in the sequence of cells; and
  • 116 - moving the digital entity from the starting point through each consecutive intermediary point to the end point.
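Steps 102-116 can be sketched end to end for a simple 2D world. Everything below is illustrative rather than the patent's implementation: cells are assumed to be counter-clockwise-ordered convex polygons, cell adjacency is detected through a shared edge, the cell sequence of step 112 is found with a breadth-first search, and the intermediary point of step 114 is taken as the midpoint of each shared boundary.

```python
from collections import deque

def shared_edge(a, b):
    """The pair of vertices two cells have in common, or None."""
    common = [v for v in a if v in b]
    return common if len(common) == 2 else None

def find_path(start, end, cells):
    """Sketch of steps 102-116: locate the starting and end cells,
    search the cell connectivity graph, then emit one intermediary
    point per boundary between consecutive cells of the sequence."""
    def locate(p):                      # steps 106: point-in-convex-cell scan
        for i, cell in enumerate(cells):
            n = len(cell)
            if all((cell[(k + 1) % n][0] - cell[k][0]) * (p[1] - cell[k][1])
                   - (cell[(k + 1) % n][1] - cell[k][1]) * (p[0] - cell[k][0]) >= 0
                   for k in range(n)):
                return i
        return None

    s, e = locate(start), locate(end)
    if s is None or e is None:
        return None                     # a point lies outside the reachable space
    if s == e:                          # step 110: same cell, move directly
        return [start, end]
    prev = {s: None}                    # step 112: BFS over adjacent cells
    queue = deque([s])
    while queue:
        c = queue.popleft()
        if c == e:
            break
        for i in range(len(cells)):
            if i not in prev and shared_edge(cells[c], cells[i]):
                prev[i] = c
                queue.append(i)
    if e not in prev:
        return None                     # no cell sequence exists
    seq = []
    c = e
    while c is not None:
        seq.append(c)
        c = prev[c]
    seq.reverse()
    path = [start]                      # step 114: midpoint of each boundary
    for a, b in zip(seq, seq[1:]):
        p, q = shared_edge(cells[a], cells[b])
        path.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))
    path.append(end)
    return path
```

The returned point list is exactly what step 116 consumes: the entity is moved from the starting point through each consecutive intermediary point to the end point.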
  • in step 102, the respective positions of obstacles for the movable digital entity in the digital world are defined, yielding the portions of the digital world without obstacles as reachable space for the movable digital entity.
  • the reachable space can be defined as regions of the digital world enclosed by barriers.
  • the digital world may have been previously defined including any autonomous or non-autonomous entity with which the movable digital entity may interact.
  • the concept of digital world and of digital entity will now be described according to an illustrative embodiment of the present invention.
  • the digital world model includes image object elements.
  • the image object elements include two or three-dimensional (2D or 3D) graphical representations of objects, autonomous and non-autonomous characters, buildings, animals, trees, etc. It also includes barriers, terrains, and surfaces.
  • 2D or 3D two or three-dimensional
  • the movable entity that is to be moved using the method 100 can be either autonomous or non-autonomous.
  • the concepts of autonomous and non-autonomous characters and objects will be described hereinbelow in more detail.
  • the graphical representation of objects and characters can be displayed, animated or not, on a computer screen or on another display device, but can also inhabit and interact in the virtual world without being displayed on the display device.
  • Barriers are triangular planes that can be used to build walls, moving doors, tunnels, etc., or any obstacles for any movable entity in the digital world.
  • Terrains are 2D height-fields to which entities can be automatically bound (e.g. keep soldier characters marching over a hill).
  • Surfaces are triangular planes that may be combined to form fully 3D shapes to which autonomous characters can also be constrained.
  • these elements are used to describe the world that the characters inhabit. They are stored in the world DB 16.
  • the digital world model includes a solver, which manages entities, including autonomous characters, and other objects in the digital world.
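For the terrains mentioned above, binding an entity to a 2D height-field amounts to sampling the field at the entity's horizontal position. A hedged sketch follows, assuming a row-major grid of heights with unit spacing; the patent does not specify the terrain representation.

```python
def terrain_height(heights, x, z):
    """Bilinearly interpolated height of a 2D height-field at (x, z).
    `heights` is a row-major grid of height samples with unit spacing;
    coordinates are clamped to the grid so that a bound entity (e.g. a
    soldier character marching over a hill) never falls off the edge."""
    rows, cols = len(heights), len(heights[0])
    x = min(max(x, 0.0), cols - 1.000001)
    z = min(max(z, 0.0), rows - 1.000001)
    i, j = int(z), int(x)
    fx, fz = x - j, z - i
    top = heights[i][j] * (1 - fx) + heights[i][j + 1] * fx
    bot = heights[i + 1][j] * (1 - fx) + heights[i + 1][j + 1] * fx
    return top * (1 - fz) + bot * fz
```

Setting an entity's vertical coordinate to this value each tick is one way of "automatically binding" it to the terrain.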
  • the solver can have a 3D configuration, to provide the entities with complete freedom of movement, or a 2D configuration, which is more computationally efficient, and allows an operator to insert a greater number of movable entities in a scene without affecting performance of the animation system.
  • a 2D solver is computationally more efficient than a 3D solver, since it does not consider the vertical (y) co-ordinate of an image object element or of an entity.
  • the choice between the 2D and 3D configuration depends on the movements that are allowed in the virtual world by the movable entities and other objects. If they do not move in the vertical plane, then there is no requirement to solve in 3D, and a 2D solver can be used. However, if any entity requires complete freedom of movement, a 3D solver is used. It is to be noted that the choice of a 2D solver does not limit the dimensions of the virtual world, which may be 2D or 3D.
  • Non-autonomous characters are objects in the digital world that, even though they may potentially interact with the digital world, are not driven by the solver. These can range from traditionally animated characters (e.g. the leader of a group) to player characters to objects (e.g. flying debris) driven by other components of the simulator.
  • Barriers are used to represent obstacles for movable entities, and are equivalent to one-way walls, i.e. an object or a digital entity inhabiting the digital world can pass through them in one direction but not in the other.
  • spikes forward orientation vectors
  • an object or an entity can pass from the non-spiked side to the spiked side, but not vice-versa.
  • a specific avoidance constraint can be defined and activated for a digital entity to attempt to avoid the barriers in the digital world. The concept of behaviours and constraints will be described hereinbelow in more detail.
  • a barrier is represented in a 2D solver by a line and by a triangle in a 3D solver.
  • the direction of the spike for 2D and 3D barriers is also shown in Figures 8-9 (see arrows 24 and 26 respectively) where P1-P3 refers to the order in which the points of the barrier are drawn. Since barriers are unidirectional, two-sided barriers are made by superimposing two barriers and by setting their spikes opposite to each other.
  • Each barrier can be defined by the following parameters:
  • a bounding box is a rectilinear box that encapsulates and bounds a 3D object.
  • the solver of the digital world model may include subsolvers, which are the various engines of the solver that are used to run the simulation. Each subsolver manages a particular aspect of object simulation in order to optimize computations.
  • each animated digital entity is associated with animation clips that allow representing the entity in movement in the digital world.
  • virtual sensors are assigned to and used by some entities to allow them to gather information about image object elements or other entities within the digital world. Decision trees can also be used for processing this information, resulting in selecting and triggering one of the animation cycles or selecting a new behaviour.
  • an animation cycle, which will also be referred to herein as an "animation clip", is a unit of animation that typically can be repeated.
  • the animator creates a "walk cycle". This walk cycle makes the character walk one iteration. In order to have the character walk more, more iterations of the cycle are played. If the character speeds up or slows down over time, the cycle is "scaled" accordingly so that the cycle speed matches the character's displacement and there is no slippage (i.e., the character does not look like it is slipping on the ground).
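The scaling of a walk cycle to the character's displacement can be sketched as a playback-rate computation. The two clip parameters (ground distance covered per iteration, clip length in frames) are assumptions about how the clip is authored, not values taken from the patent.

```python
def clip_playback_rate(character_speed, clip_distance, clip_frames):
    """Playback rate that keeps a walk cycle in step with the
    character's displacement, so the feet do not appear to slip.
    character_speed: current speed (distance units per frame);
    clip_distance: ground distance covered by one iteration of the
    clip at rate 1.0; clip_frames: length of one iteration in frames."""
    base_speed = clip_distance / clip_frames   # units per frame at rate 1.0
    return character_speed / base_speed
```

A character moving twice as fast as the authored cycle simply plays the same clip at twice the rate; a resting character gets rate 0 and the clip freezes.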
  • the autonomous image entities (IEs) are tied to transform nodes of the animating engine (or platform).
  • the nodes can be in the form of locators, cubes or models of animals, vehicles, etc. Since animation clips and transform nodes are believed to be well known in the art, they will not be described herein in more detail.
  • Figure 10 shows a co-ordinate system for moving the IE and used by the solver.
  • IEs from the present invention can also be characterized by behaviours.
  • the behaviours are the low-level thinking apparatus of an IE. They take raw input from the digital world using virtual sensors, process it, and change the IE's condition accordingly.
  • Behaviours can be categorized, for example, as locomotive behaviours, which allow an IE to move. These locomotive behaviours generate steering forces that can affect any or all of an IE's direction of motion, speed, and orientation (i.e. which way the IE is facing), for example.
  • a locomotive behaviour can be seen as a force that acts on the IE.
  • This force is a behavioural force, and is analogous to a physical force (such as gravity), with the difference that the force seems to come from within the IE itself.
  • behavioural forces can be additive.
  • an autonomous character may simultaneously have more than one active behaviour.
  • the solver calculates the resulting motion of the character by combining the component behavioural forces, in accordance with each behaviour's priority and intensity.
  • the resultant behavioural force is then applied to the character, which may impose its own limits and constraints (specified by the character's turning radius attributes, etc) on the final motion.
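The combination of component behavioural forces by priority and intensity, followed by the character's own limit on the final motion, can be sketched as a weighted 2D vector sum with truncation. The priority-times-intensity weighting and the single scalar force limit are illustrative choices, not the patent's exact formula.

```python
import math

def combine_behaviours(forces, max_force):
    """Combine component behavioural forces into one steering force.
    `forces` is a list of (force_vector, priority, intensity) tuples;
    each component is weighted by priority * intensity, and the
    resultant is truncated to the character's own limit, standing in
    for the limits (turning radius, etc.) the character may impose."""
    fx = sum(f[0] * p * w for f, p, w in forces)
    fy = sum(f[1] * p * w for f, p, w in forces)
    mag = math.hypot(fx, fy)
    if mag > max_force:                 # clamp to the character's limit
        fx, fy = fx / mag * max_force, fy / mag * max_force
    return fx, fy
```

An avoidance behaviour with high priority thus dominates a low-priority wander force, while the final clamp keeps the motion within what the character can physically do.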
  • Behaviours can be divided into subgroups: simple behaviours, targeted behaviours, and group behaviours.
  • Targeted behaviours apply to an IE and a target object, which can be any other object in the digital world (including groups of objects).
  • Group behaviours allow IEs to act and move as a group, where the individual IEs included in the group will maintain approximately the same speed and orientation as each other.
  • Avoid Barriers The Avoid Barriers behaviour allows a character to avoid colliding with barriers.
  • Parameters specific to this behaviour may include, for example:
  • the Avoid Obstacles behaviour allows an IE to avoid colliding with obstacles, which can be other autonomous and non-autonomous image entities. Parameters similar to those detailed for the Avoid Barriers behaviour can also be used to define this behaviour.
  • the Accelerate At behaviour attempts to accelerate the IE by the specified amount. For example, if the amount is a negative value, the IE will decelerate by the specified amount.
  • the actual acceleration/deceleration may be limited by max acceleration and max deceleration attributes of the IE.
  • Acceleration, which represents the change in speed (distance units/frame²) that the IE will attempt to maintain.
  • the Maintain Speed At behaviour attempts to set the target IE's speed to a specified value. This can be used to keep a character at rest or moving at a constant speed. If the desired speed is greater than the character's maximum speed attribute, then this behaviour will only attempt to maintain the character's speed equal to its maximum speed. Similarly, if the desired speed is less than the character's minimum speed attribute, this behaviour will attempt to maintain the character's speed equal to its minimum speed.
  • a parameter defining this behaviour is the desired speed (distance units/frame) that the character will attempt to maintain.
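The clamping of the desired speed to the IE's minimum and maximum speed attributes described above amounts to a simple saturation. As a minimal hypothetical sketch (the function name is illustrative only):

```python
def maintain_speed_target(desired, min_speed, max_speed):
    # The behaviour only ever aims inside the IE's own speed limits:
    # a desired speed above the maximum is clamped down to it, and a
    # desired speed below the minimum is clamped up to it.
    return max(min_speed, min(desired, max_speed))
```

For example, with a minimum speed of 0.5 and a maximum of 8.0 distance units/frame, a desired speed of 12.0 is clamped to 8.0.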
  • the Wander Around behaviour applies random steering forces to the IE to ensure that it moves in a random fashion within the solver area.
  • Parameters defining this behaviour may include, for example:
  • the Orient To behaviour allows an IE to attempt to face a specific direction.
  • Targeted Behaviours The following behaviours apply to an IE (the source) and another object in the world (the target).
  • Target objects can be any object in the world such as autonomous or non-autonomous image entities, paths, groups and data. If the target is a group, then the behaviour applies only to the nearest member of the group at any one time. If the target is a datum, then it is assumed that this datum is of type ID and points to the true target of the behaviour. An ID is a value used to uniquely identify objects in the world. The concept of datum will be described in more detail hereinbelow.
  • the following parameters, shared by all targeted behaviours, are:
  • the Seek To behaviour allows an IE to move towards another IE or towards a group of IEs. If an IE seeks a group, it will seek the nearest member of the group at any time.
  • a Seek To behaviour may be programmed according to the navigation method 100.
  • Look Ahead Time: this parameter instructs the IE to move towards a projected future point of the object being sought. Increasing the amount of look-ahead time does not necessarily make the Seek To behaviour any "smarter", since it simply makes a linear interpolation based on the target's current speed and position. Using this parameter gives the behaviour
  • the Flee From behaviour allows an IE to flee from another IE or from a group of IEs. When an IE flees from a group, it will flee from the nearest member of the group at any time.
  • the Flee From behaviour has the same attributes as the Seek To behaviour; however, it produces the opposite steering force. Since the parameters defining the Flee From behaviour are very similar to those of the Seek To behaviour, they will not be described herein in more detail.
  • the Look At behaviour allows an IE to face another IE or a group of IEs. If the target of the behaviour is a group, the IE attempts to look at the nearest member of the group.
  • the Strafe behaviour causes the IE to "orbit" its target, in other words to move in a direction perpendicular to its line of sight to the target.
  • a probability parameter determines how likely it is, at each frame, that the IE will turn around and start orbiting in the other direction. This can be used, for instance, to make a moth orbit a flame.
  • the effect of a guard walking sideways while looking or shooting at its target can be achieved by turning off the guard's
  • a parameter specific to this behaviour may be, for example, the Probability, which may take a value between 0 and 1 that determines how often the IE changes its direction of orbit. For example, at 24 frames per second, a value of 0.04 will trigger a random direction change on average every second, whereas a value of 0.01 will trigger a change on average every four seconds.
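The per-frame probability test described above can be sketched as follows. Function and parameter names are hypothetical; the random source is injectable so the behaviour can be tested deterministically:

```python
import random

def orbit_direction(current_dir, probability, rng=random.random):
    """Return the orbit direction (+1 or -1) for the next frame,
    flipping with the given per-frame probability. At 24 frames per
    second, a probability of 0.04 flips on average about once per
    second."""
    return -current_dir if rng() < probability else current_dir
```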
  • the Go Between behaviour allows an IE to get in-between the first target and a second target.
  • this behaviour can be used to enable a bodyguard character to protect a character from a group of enemies.
  • a parameter specific to this behaviour, which may take a value between 0 and 1, determines how close to the second target one wishes the entity to go.
  • the Follow Path behaviour allows an IE to follow a path.
  • this behaviour can be used to enable a racecar to move around a racetrack.
  • Group behaviours allow grouping individual IEs so that they act as a group while still maintaining individuality. Examples include a school of fish, a flock of birds, etc.
  • the Align With behaviour allows an IE to maintain the same orientation and speed as other members of a group.
  • the IE may or may not be a member of the group.
  • the Join With behaviour allows an IE to stay close to members of a group.
  • the IE may or may not be a member of the group.
  • Join Distance is similar to the "contact radius" in targeted behaviours. Each member of the group within the neighbourhood radius and outside the join distance is taken into account when calculating the steering force of the behaviour.
  • the join distance is the external distance between the characters (i.e. the distance between the outsides of the bounding spheres of the characters). The value of this parameter determines the closeness that members of the group attempt to maintain.
  • the Separate From behaviour allows an IE to keep a certain distance away from members of a group. For example, this can be used to prevent a school of fish from becoming too crowded.
  • the IE to which the behaviour is applied may or may not be a member of the group.
  • the Separation Distance is an example of a parameter that can be used to define this behaviour. Each member of the group within the neighbourhood radius and inside the separation distance will be taken into account when calculating the steering force of the behaviour.
  • the separation distance is the external distance between the IEs (i.e. the distance between the outsides of the bounding spheres of the IEs). The value of this parameter determines the external separation distance that members of the group will attempt to maintain.
  • An IE can have multiple active behaviours associated thereto at any given time. Therefore, means can be provided to assign importance to a given behaviour.
  • a first means to achieve this is by assigning intensity and priority to a behaviour.
  • the assigned intensity of a behaviour affects how strong the steering force generated by the behaviour will be. The higher the intensity the greater the generated behavioural steering forces.
  • the priority of a behaviour defines the precedence the behaviour should have over other behaviours. When a behaviour of a higher priority is activated, those of lower priority are effectively ignored.
  • the animator informs the solver which behaviours are more important in which situations in order to produce a more realistic animation.
  • the solver calculates the desired motion of all behaviours, sums up these motions based on each behaviour's intensity, while ignoring those with lower priority, and enforces the maximum speed, acceleration, deceleration, and turning radii defined in the IE's attributes. Finally, braking due to turning may be taken into account. Indeed, based on the values of the character's Braking Softness and Brake Padding attributes, the character may slow down in order to turn.
  • a navigation mesh 35 is created for the movable digital entity (not shown). This is achieved by dividing or converting the reachable space 34 into convex cells 36, as illustrated in Figure 12 for the example of the one-floor building from Figure 11.
  • the navigation mesh 35 can be created either manually or automatically using, for example, the collision layer or the rendering geometry.
  • a collision layer is a geometric mesh that is a simplification of the rendering geometry for the purposes of physics collision detection/resolution.
  • the navigation mesh is the subset of the collision layer upon which the movable entity can move (typically the floors and not the walls).
  • Deriving the navigation mesh from the rendering geometry requires simplifying the geometry as much as possible and fusing it into as seamless a mesh as possible (e.g., removal of intersecting polygons, etc.).
  • a 3D operator, typically a 3D artist, inspects the input geometry, fuses the polygons correctly and strips out the non-reachable space. It is to be noted that algorithms exist that can automatically handle this to a high degree. Convex polygons are used as cells in the creation of the navigation mesh 35 since any point within such a cell is directly reachable in a straight line from any other point in the cell.
  • An edge Exy connecting cells Cx and Cy in the navigation mesh will be considered "passable" if the entity can pass from cell Cx to cell Cy via Exy.
  • in step 106, the starting and end points (not shown) are located and the corresponding cells that include each of those two points are identified.
  • the expressions "starting point" and "end point" should not be construed herein in a limiting manner. Indeed, unless the digital movable entity is pixel-sized, the starting and end points refer to a location or a zone in the virtual world.
  • in step 108, a first verification is performed to determine whether the starting and end points are both located in the same cell. If this is the case, the method 100 proceeds with step 110, wherein the digital entity is moved from the starting point to the end point before the method stops.
  • a method 100 may yield a movable digital entity with such an adaptive behaviour.
  • Step 112 can be achieved by first constructing a connectivity graph 38, obtained by replacing each cell 36 by a node 40 and connecting each pair of passable cells (nodes) by a line 42.
  • An example of a connectivity graph 38 is illustrated in Figure 13 for the example illustrated in Figures 11 and 12. Of course, such a graph 38 is purely virtual and is not actually graphically created.
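The construction of the connectivity graph from the cells and their passable edges can be sketched as follows. The data layout (cell ids mapped to adjacent cell ids, plus a set of passable cell pairs) is a hypothetical illustration, not the disclosed implementation:

```python
def build_connectivity_graph(cells, passable):
    """cells maps a cell id to the ids of cells sharing an edge with
    it; passable is a set of frozenset({a, b}) pairs whose shared edge
    the entity can cross. Each cell becomes a node; each passable
    shared edge becomes a connection between two nodes."""
    graph = {cell: [] for cell in cells}
    for cell, neighbours in cells.items():
        for n in neighbours:
            if frozenset((cell, n)) in passable:
                graph[cell].append(n)
    return graph
```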
  • the resulting graph 38 is searched to find a path between the two nodes 40 representing respectively the starting and end points.
  • Many known techniques can be used to solve such a graph searching problem so as to yield a path between these two corresponding nodes.
  • the path, if it exists, is returned as a sequence of corresponding cells.
  • a breadth first search can be used to search the graph 38.
  • the well-known BFS method provides the path of lowest cost but can be very expensive in terms of the number of nodes explored.
  • a depth-first search, which can also be used, is significantly less expensive in terms of nodes explored but does not guarantee the lowest-cost path.
  • Heuristics can be placed on the DFS to try to improve path quality while maintaining computational efficiency.
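A breadth-first search over the connectivity graph, returning the sequence of cells between the two nodes, can be sketched as follows (function and variable names are hypothetical):

```python
from collections import deque

def bfs_cell_path(graph, start, goal):
    """Return the cell sequence from start to goal found by a
    breadth-first search of the connectivity graph, or None if the
    goal is unreachable."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk the parent links back
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for n in graph[cell]:
            if n not in parent:
                parent[n] = cell
                queue.append(n)
    return None
```

A depth-first variant would replace the queue with a stack (popping from the right), trading path quality for fewer nodes explored, as noted above.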
  • the centerpoint of each edge can be selected. Of course, other points can alternatively be chosen.
  • the point on the cell edge (the cells' interface) can be chosen so as to reduce the distance traveled between cells and thus further smooth the path.
  • the digital entity is then moved from the starting point through each consecutive intermediary point, and finally to the end point (step 116).
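Selecting the centre point of each shared cell edge as the intermediary point (step 114) can be sketched as follows, assuming 2D cell boundaries stored as endpoint pairs (a hypothetical data layout):

```python
def edge_midpoints(cell_path, shared_edges):
    """shared_edges maps an ordered pair of adjacent cells to the two
    endpoints of their common boundary; one midpoint is produced per
    boundary crossed along the cell path."""
    points = []
    for a, b in zip(cell_path, cell_path[1:]):
        (x1, y1), (x2, y2) = shared_edges[(a, b)]
        points.append(((x1 + x2) / 2, (y1 + y2) / 2))
    return points
```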
  • the method 100 will now be illustrated with reference to another simple world 44 (see Figure 14) delimited by walls 46.
  • a navigation mesh 47 is created and the world 44 is divided into convex cells 48 identified from A to Z for reference purposes.
  • the starting and end points 50 and 52 are shown; in step 106 of the method 100, they are found in cells 'Z' and 'J' respectively. Since they are not located in the same cell, the method continues with step 112 with the determination of a sequence of cells between the points 50 and 52, yielding the following sequence as illustrated in Figure 16: 'Z', ⁇, 'O', 'X', 'W', 'V', 'U', 'T', ⁇, and 'J'.
  • After determining intermediary points on the respective boundaries between consecutive cells in the sequence of cells (step 114), the method continues with the entity moving along the determined path 54, as illustrated in Figure 17. As can be seen in Figure 17, the path can be smoothed to yield a more realistic trail.
  • the navigation mesh can be dynamically modified at run-time (step 104).
  • cells can be turned off via blind data to simulate road blocks or congestion due to excess people or physics-driven debris or turned on to simulate a door opening, congestion ending or a passage through a destroyed wall.
  • former cell ⁇ has been turned off, resulting in a first alternative path 56;
  • both former cells 'B' and ⁇ have been turned off, resulting in a second alternative path 58.
  • the method 100 also allows the dimensions of the movable digital entity to be taken into consideration, and more specifically its transversal dimension relative to its moving direction, such as its width.
  • the creation of the navigation mesh in step 104 may take such characteristics of the movable entity into account so that the method 100 outputs a path that the entity can pass through. This is illustrated in Figures 20 and 21.
  • Figure 20 illustrates a path 60 obtained from the method 100 to move a digital entity from a starting point 62 to an end point 64.
  • the former cell 'L' is no longer part of the navigation mesh 68 and a new path 70 is provided by the method 100.
  • the method 100 is not limited to two-dimensional digital worlds. As illustrated in Figures 22 and 23, the method 100 can be used to determine the path between a starting point and an end point and then move a digital movable entity, such as an animated character, between those two points in a three-dimensional digital world.
  • Figure 22 illustrates the output of the floor plan generator on a small city part of a simulator, a game or an animation.
  • Figure 23 illustrates the navigation mesh resulting from step 104 of the method 100.
  • the method 100 can be adapted for outdoor and indoor environments.
  • An outdoor environment typically consists of buildings and open spaces such as market spaces, parks with trees, separated by sidewalks, roads and rivers.
  • a floor plan generator uses the exterior building walls to cut out holes in the navigation mesh.
  • blind data are used to characterize different parts of the reachable space.
  • blind data can then be associated to the cells of the navigation mesh to specify the differences in navigable surfaces (e.g., roads and sidewalks) and have the entities navigate accordingly (e.g., keep vehicles on the road and humans on the sidewalks).
  • Figure 24 illustrates a digital world in the form of a city street.
  • Figure 25 illustrates a navigation mesh obtained from step 104 of method 100. Blind data are used to differentiate between roadway (white cells) and sidewalk (grey cells).
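Filtering navigable cells per entity type using blind data tags, as described above for roads and sidewalks, could look like the following hypothetical sketch (the tag names and the kind-to-surface mapping are illustrative only):

```python
def navigable_cells(mesh_cells, blind_data, entity_kind):
    """blind_data maps a cell id to a surface tag; each entity kind is
    restricted to the surfaces it may navigate (e.g. keep vehicles on
    the road and pedestrians on the sidewalk). Untagged cells are
    treated as off-limits for both kinds."""
    allowed = {"vehicle": {"road"}, "pedestrian": {"sidewalk"}}[entity_kind]
    return {c for c in mesh_cells if blind_data.get(c) in allowed}
```

Path finding would then run only over the subset of cells returned for the entity in question.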
  • an indoor environment is typically multi-layer and consists of floors divided into rooms via inner walls and doors; and connected by stairways.
  • a floor plan generator calculates the navigation mesh for each floor using the walls as barriers and then links the navigation surfaces by the cells that correspond to the stairways. This results in a 3D navigation mesh in which cells may be on top of one another. Path finding is then modified to determine which surface cell the digital movable entity is on rather than which cell the character is in.
  • a navigation mesh is created for each of the levels and two consecutive navigation meshes are interconnected by connecting cells.
  • the method 100 allows the movable entity to move from a predetermined intermediary point to the next in a straight line. Therefore, the only things that can prevent the entity from going in a straight line may be dynamic obstacles. To cope with this situation, the movable entity may be provided with sensors.
  • Before describing in more detail a dynamic collision avoidance method according to the present invention, the concept of sensors and other relevant concepts such as data information, commands, decisions and decision trees will first be described briefly. It is to be noted, however, that neither the collision avoidance method according to the present invention nor the navigation method 100 is to be construed as being limited to a specific embodiment of sensors or decision rules for the digital movable entity, etc.
  • An entity's data information can be thought of as its internal memory.
  • Each datum is an element of information stored in the entity's internal memory.
  • a datum could hold information such as whether or not an enemy is seen or who is the weakest ally.
  • a Datum can also be used as a state variable for an IE.
  • Data are written to by an entity's Sensors, or by Commands within a Decision Tree.
  • the Datum's value is used by the Decision Tree to activate and deactivate behaviours and animations, or to test the entity's state. Sensors and Decision trees will be described hereinbelow in more detail.
  • Sensors Entities use sensors to gain information about the world.
  • a sensor will store its sensed information in a datum belonging to the entity.
  • a parameter can be used to trigger the activation of a sensor. If a sensor is turned off, it will be ignored by the solver and will not store information in any datum.
  • the vision sensor is the eyes and ears of a character and allows the character to sense other physical objects or movable entities in the virtual world, which can be autonomous or non-autonomous characters, barriers, and waypoints, for example.
  • the following parameters allow, for example, defining the vision sensor:
  • Decision trees are used to process the data information gathered using sensors.
  • a command is used to activate a behaviour or an animation, or to modify an IE's internal memory. Commands are invoked by decisions. A single Decision includes a conditional expression and a list of commands to invoke.
  • a decision tree includes a root decision node, which can own child decision nodes. Each of those children may in turn own children of their own, each of which may own more children, etc.
  • a parameter indicative of whether or not the decision tree is to be evaluated can be used in defining the decision tree. Whenever the command corresponds to activating an animation and a transition is defined between the current animation and the new one, then that transition is first activated. Similarly, whenever the command corresponds to activating a behaviour, a blend time can be provided between the current animation and the new one. Moreover, whenever the command corresponds to activating a behaviour, the target is changed to the object specified by a datum.
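The decision-tree structure described above (a root decision owning child decisions, each holding a conditional expression and a list of commands) can be sketched as follows; the class and function names are hypothetical, not part of the original disclosure:

```python
class Decision:
    """A decision node: a conditional expression, the commands invoked
    when it holds, and any child decision nodes it owns."""
    def __init__(self, condition, commands, children=()):
        self.condition = condition   # callable(data) -> bool
        self.commands = commands     # callables that mutate the entity's data
        self.children = children

def evaluate(node, data):
    # Commands fire only when the node's condition holds on the
    # entity's data information; its children are then evaluated
    # in turn, each of which may own further children.
    if node.condition(data):
        for command in node.commands:
            command(data)
        for child in node.children:
            evaluate(child, data)
```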
  • obstacle avoidance can also be seen as a two-step method 200, illustrated in Figure 28, of: 202 - assessing threats of potential collisions between the movable digital entity and a moving obstacle; and 204 - if there is such a threat, having the movable digital entity respond accordingly by adopting a strategy to avoid the moving obstacle.
  • each entity 72 uses its sensor (see Figure 29) to detect what potential obstacles are in its vicinity and decide which of those obstacles poses the greatest threat of collision.
  • the sensor is configured as a field of view 74 around the entity 72, characterized by a depth of field, defining how far the entity can see.
  • the field of view of each movable entity can be defined as a pie (or a sphere depending on the application) surrounding the entity.
  • each obstacle removes a piece of this pie. The size of the piece removed depends on the obstacle's size and its distance from the entity.
  • Unobstructed sections of the pie will be referred to herein as holes 76-78 (see Figure 31).
  • the entity 72 searches for the best hole to continue through.
  • the best hole can be determined in several ways. The typical way is as follows: • The holes 76-78 are sorted in order of increasing radial distance from the desired direction of the entity 72; • The first hole that is large enough for the entity to pass through is chosen; • If there is no hole, the agent is completely blocked and will stop moving. Depending on the chosen hole, the movable entity can move into that hole by either turning or reversing.
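The hole-selection procedure above can be sketched as follows, under the simplifying assumption that holes and the entity's required clearance are expressed in the same angular units (names are hypothetical):

```python
def choose_best_hole(holes, desired_angle, needed_width):
    """holes is a list of (centre_angle, angular_width) unobstructed
    arcs. Sort the holes by radial distance from the desired heading
    and pick the first one wide enough for the entity to pass through;
    None means the entity is completely blocked and should stop."""
    for centre, width in sorted(holes,
                                key=lambda h: abs(h[0] - desired_angle)):
        if width >= needed_width:
            return centre
    return None
```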
  • the obstacles can be characterized as having different avoidance importance.
  • a Humvee may consider avoiding other vehicles its highest priority, pedestrians a secondary priority and small animals such as dogs a very low priority; a civilian pedestrian may consider vehicles its highest priority and other pedestrians a secondary priority.
  • extreme situations such as a riot may dynamically change these priorities.
  • different obstacle groups have different avoidance constraints.
  • the most basic constraint is deciding whether an obstacle is a threat. Accordingly, for each obstacle group, there is an associated awareness radius. As the character moves through its world, its sensor sweeps around it; every obstacle detected in its sweep that is within its awareness radius is flagged as a potential collision threat.
  • For each collision threat, there are two kinds of avoidance strategies (step 204): • circumvention: try avoiding the collision by going around the obstacle; and • queuing: try avoiding the collision by slowing down (and potentially stopping) until the obstacle exits the collision path.
  • Figures 32A-32C, 33A-33C, 34A-34C, and 35A-35C illustrate three examples of collision threats ( Figures 32A, 33A, 34A, and 35A), each with both corresponding avoidance strategies.
  • Figures 32A-35C show how circumvention is useful for going around stationary obstacles and getting out of the way of incoming obstacles. However, it can cause a lot of jostling with outgoing obstacles. Nonetheless, circumvention has the advantage of minimizing gridlock and eventually finding a way around.
  • the group movement modifier most readily identified with computer-graphics artificial intelligence is flocking, made famous by Reynolds, who modelled flocks of birds, called "boids", as super particles. Reynolds identified three basic elements of flocking: • alignment: the tendency of group members to harmonize their motion by aligning themselves in the same direction with the same speed; • separation: the tendency of group members to maintain a certain amount of space between them; and • joining: the tendency of group members to maintain a certain proximity to one another. Considering a group of friends walking down the street, slower members of the group will speed up to catch up to the others, while the fastest members (assuming they are polite) will slow down slightly to allow the stragglers to catch up. Depending on the cultural background of the group, more or less space is required or tolerated between the friends (cf. urban dwellers to rural dwellers).
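Reynolds' three flocking elements can be sketched as one combined steering force. The weights, names and 2D tuple representation below are hypothetical illustration choices, not the disclosed implementation:

```python
def flocking_force(pos, vel, neighbours,
                   join_w=1.0, align_w=1.0, sep_w=1.5, sep_dist=1.0):
    """neighbours is a list of (position, velocity) pairs of nearby
    group members; positions and velocities are 2D tuples. Joining
    steers toward the neighbours' centre, alignment matches their
    average velocity, and separation pushes away from members closer
    than sep_dist."""
    if not neighbours:
        return (0.0, 0.0)
    n = len(neighbours)
    jx = sum(p[0] for p, _ in neighbours) / n - pos[0]   # joining
    jy = sum(p[1] for p, _ in neighbours) / n - pos[1]
    ax = sum(v[0] for _, v in neighbours) / n - vel[0]   # alignment
    ay = sum(v[1] for _, v in neighbours) / n - vel[1]
    sx = sy = 0.0                                        # separation
    for (px, py), _ in neighbours:
        dx, dy = pos[0] - px, pos[1] - py
        if (dx * dx + dy * dy) ** 0.5 < sep_dist:
            sx, sy = sx + dx, sy + dy
    return (join_w * jx + align_w * ax + sep_w * sx,
            join_w * jy + align_w * ay + sep_w * sy)
```

The separation weight is typically set higher than the other two so that crowding is resolved before cohesion pulls members back together.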
  • Figures 36A-36F show the effects of different flocking strategies on a group of five characters following a leader character.
  • Such a group-based modifier can be used to yield a more natural effect when the method 100 is used to simultaneously move a group of entities in a digital world between starting and end points.
  • While the method and system for moving a digital entity on-screen from starting to end points in a digital world have been described in a specific illustrative embodiment of a 3D application, they can be included in any 3D application requiring the autonomous on-screen displacement of image elements.
  • a navigation method according to the present invention can also be used to move digital entities not characterized by behaviours such as those described hereinabove.
  • a navigation method according to the present invention can be used to navigate any number of entities and is not limited to any type or configuration of digital world.
  • the present method and system can be used to plan the displacement of a movable object or entity in a virtual world without further movement of the object or entity.

Abstract

The invention concerns a method for moving on-screen a digital entity, for example a character or an object, from a starting point to an end point in a digital world, which consists in: providing the digital entity with the position of obstacles and defining the obstacle-free portion of the digital world as reachable space; creating a navigation mesh for the digital entity by dividing the reachable space into convex cells; locating the starting and end cells among the convex cells; if the starting cell corresponds to the end cell, the digital entity is moved from the starting point to the end point. If the starting cell does not correspond to the end cell, intermediary points located on the boundary between consecutive cells of a sequence of cells, among the convex cells, from the starting point to the end point are determined, and the digital entity is moved from the starting point through each consecutive intermediary point to the end point.
EP05714659A 2004-03-19 2005-03-18 Procede et systeme de navigation a l'ecran de personnages numeriques et analogues Withdrawn EP1725966A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US55435704P 2004-03-19 2004-03-19
PCT/CA2005/000426 WO2005091198A1 (fr) 2004-03-19 2005-03-18 Procede et systeme de navigation a l'ecran de personnages numeriques et analogues

Publications (2)

Publication Number Publication Date
EP1725966A1 true EP1725966A1 (fr) 2006-11-29
EP1725966A4 EP1725966A4 (fr) 2008-07-09

Family

ID=34993915

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05714659A Withdrawn EP1725966A4 (fr) 2004-03-19 2005-03-18 Procede et systeme de navigation a l'ecran de personnages numeriques et analogues

Country Status (4)

Country Link
EP (1) EP1725966A4 (fr)
JP (1) JP2007529796A (fr)
CA (1) CA2558971A1 (fr)
WO (1) WO2005091198A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0911981D0 (en) 2009-07-09 2009-08-19 Movix Uk Ltd Data processing system using geographical locations
US9164653B2 (en) 2013-03-15 2015-10-20 Inspace Technologies Limited Three-dimensional space for navigating objects connected in hierarchy
US11532139B1 (en) * 2020-06-07 2022-12-20 Apple Inc. Method and device for improved pathfinding
US11804012B1 (en) * 2020-06-07 2023-10-31 Apple Inc. Method and device for navigation mesh exploration

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US4862373A (en) * 1987-05-13 1989-08-29 Texas Instruments Incorporated Method for providing a collision free path in a three-dimensional space

Non-Patent Citations (2)

Title
DANTE TREGLIA: "Game Programming Gems 3", 24 August 2002 (2002-08-24), CHARLES RIVER MEDIA, XP002481483, pages 307-320; figures 3.7.9-3.7.11 *
See also references of WO2005091198A1 *

Also Published As

Publication number Publication date
CA2558971A1 (fr) 2005-09-29
WO2005091198A1 (fr) 2005-09-29
EP1725966A4 (fr) 2008-07-09
JP2007529796A (ja) 2007-10-25

Similar Documents

Publication Publication Date Title
US20050071306A1 (en) Method and system for on-screen animation of digital objects or characters
Müller et al. Sim4cv: A photo-realistic simulator for computer vision applications
Reynolds Steering behaviors for autonomous characters
US20070188501A1 (en) Graphical computer simulation system and method
Ren et al. Group modeling: A unified velocity‐based approach
EP1725966A1 (fr) Procede et systeme de navigation a l'ecran de personnages numeriques et analogues
CN114470775A (zh) 虚拟场景中的对象处理方法、装置、设备及存储介质
Stone et al. Robocup-2000: The fourth robotic soccer world championships
Thompson Scale, spectacle and movement: Massive software and digital special effects in the lord of the rings
van Goethem et al. On streams and incentives: A synthesis of individual and collective crowd motion
Thalmann et al. Geometric issues in reconstruction of virtual heritage involving large populations
Patel et al. Agent tools, techniques and methods for macro and microscopic simulation
Tomlinson The long and short of steering in computer games
Rojas et al. Safe navigation of pedestrians in social groups in a virtual urban environment
Boes et al. Intuitive method for pedestrians in virtual environments
Karamouzas Motion planning for human crowds: from individuals to groups of virtual characters
Rudomin et al. Groups and Crowds with behaviors specified in the environment.
Simola Bergsten et al. Flocking Behaviour as Demonstrated in a Tower-Defense Game
Thalmann et al. Behavioral animation of crowds
Cozic Automated cinematography for games.
Savidis There is more to PCG than Meets the Eye: NPC AI, Dynamic Camera, PVS and Lightmaps
Moudhgalya Language Conditioned Self-Driving Cars Using Environmental Object Descriptions For Controlling Cars
Lee Using Global Objectives to Control Behaviors in Crowds
Metoyer Building behaviors with examples
Ricks Improving crowd simulation with optimal acceleration angles, movement on 3D surfaces, and social dynamics

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060918

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

RIC1 Information provided on ipc code assigned before grant

Ipc: A63F 13/10 20060101AFI20080527BHEP

A4 Supplementary search report drawn up and despatched

Effective date: 20080605

18W Application withdrawn

Effective date: 20080505