Introduction
Sinn is a puzzle-platformer developed by a team of seven students from the Master in Videogame Creation at UPF (Pompeu Fabra University). Development lasted one year, and the game was built on a custom engine using Visual Studio (C++), DirectX, Lua, 3DMax, FMOD, PhysX, Substance, and Cal3D.
Team
The team is composed of seven people: three artists and four programmers. In addition to the core team, we have the valuable collaboration of two musicians who created a wonderful soundtrack for our game. We also received help from two voice actors who brought the characters of Seele, Golem, and Quetz to life.
The Game
The main mechanic of Sinn allows the player to switch between two different worlds: the 3D world and the 2D world.
When we "project", control passes from the character (the 3D model) to his shadow. In this way, we can cross places where there is no floor thanks to the shadows of other objects.
The main mechanic is unique because when control passes from the body to the 2D world, some of the rules of the 3D world still apply.
Because of this, while we control the shadow in the 2D world we must keep our physical body in mind to solve the different puzzles, which opens up many more gameplay possibilities.
For example, this mechanic lets the shadow move in depth, allowing us to reach areas that were previously out of reach.
My Work
Since Sinn was developed by a small team of seven people, all design decisions were made jointly. In programming, however, each member initially focused on a specific part of the system to save time and avoid delays in decision-making. Later on, we ended up helping each other, which allowed us to collaborate on parts of the project originally assigned to other team members.
Thus, my work mainly consisted of:
Artificial Intelligence
In Sinn, the guardians are the character's enemies and have a single mission: to prevent Seele from progressing. Therefore, their behavior is primarily based on chasing Seele and attempting to catch her. They detect her presence through sight, with a limited range, as well as by any sounds the character may make, all within a specific area around them.
The development of these behaviors was done using the following Behaviour Tree:
In this Behaviour Tree you can observe numerous nodes representing the different behaviors of the enemies. In summary, our enemies have six main types of behavior: patrol, alert, suspicion, pursuit, combat, and surrender.
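The engine's actual node classes aren't shown here, but the core of any behaviour tree boils down to two composite nodes. The sketch below (all names hypothetical, not the real guardian code) shows a selector, which succeeds on the first child that succeeds, and a sequence, which requires every child to succeed:

```cpp
#include <functional>
#include <vector>

// Minimal behaviour-tree sketch. Each node returns true on success.
using BTNode = std::function<bool()>;

// A selector tries its children in order and succeeds on the first success.
BTNode makeSelector(std::vector<BTNode> children) {
    return [children]() {
        for (auto& c : children)
            if (c()) return true;   // first success wins
        return false;
    };
}

// A sequence requires every child to succeed, in order.
BTNode makeSequence(std::vector<BTNode> children) {
    return [children]() {
        for (auto& c : children)
            if (!c()) return false; // first failure aborts
        return true;
    };
}

// Demo: the guardian falls back to patrol when the player is not seen.
bool demoPatrolFallback() {
    bool seen = false, patrolled = false;
    BTNode tree = makeSelector({
        makeSequence({ [&]() -> bool { return seen; } }), // chase branch fails
        [&]() -> bool { patrolled = true; return true; }  // patrol fallback
    });
    tree();
    return patrolled;
}
```

The `demoPatrolFallback` helper mirrors the guardians' top-level structure: the chase branch fails while the player is unseen, so the selector falls through to patrol.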
More details
Patrol
In this behavior, the guardian moves between the different points of its patrol route using the NavMesh. Upon reaching each point, it pauses briefly to observe its surroundings before continuing its path.
Alert
When a guardian enters the alert state, it scans its surroundings to detect whether the character is nearby.
Suspicion
If the guardian enters this state, it means it has heard the character from a nearby location. In response, it walks toward the area where it believes it heard something and searches the surroundings. This state can also be triggered if the character uses the distraction system by throwing an object that makes noise and raises suspicion.
Pursuit
The guardian can enter this state for two reasons:
- Visual contact with the character within its line of sight.
- An accumulation of suspicion events indicating that the character must be nearby.
Combat
This state is activated when the guardian is close enough to the character to attempt an attack. Before attacking, it evaluates whether the conditions are suitable to do so.
- Wait: If the guardian enters this node, it means another guardian has been assigned to capture the character. The assignment depends on the distance to the character: the nearest guardian takes responsibility. This system uses a blackboard to store data such as the distance to the character, the assigned guardian, and other relevant information.
- Combat - Cliff: This behavior is triggered if there is a cliff near the capture area from which the character can be thrown. However, this functionality was disabled due to missing animations caused by time constraints.
- Combat - Constrict: In this behavior, the guardian captures the character, who must repeatedly press a button to break free.
- Release: If the character presses the button the required number of times, the guardian releases them and they fall to the ground. After a few seconds, the character gets up, confused, and looks around.
- Death: If the character fails to break free, the screen darkens and the game restarts from the last saved point.
Surrender
The guardian enters this state when the "impossible path" system is activated. This occurs when, even though the character is directly in front of the guardian or very close, the NavMesh cannot find a path to reach them.
To view the complete code for the guardians' AI, you can access the following link: Guardian code repository
On the other hand, to synchronize the different behaviors with their corresponding animations, a finite-state machine was used, which made the whole process much more robust.
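As a rough sketch of that idea (the names below are invented, not the engine's real classes), a state machine that only allows registered transitions guarantees the AI can never request an animation from an invalid state:

```cpp
#include <map>
#include <string>

// Hedged sketch: each state names the animation to play, and
// transitions are only allowed along registered edges.
class AnimFSM {
public:
    void addTransition(const std::string& from, const std::string& to) {
        edges.insert({from, to});
    }
    void setState(const std::string& s) { current = s; }

    // Returns false (and keeps the old state) for an illegal transition.
    bool changeState(const std::string& to) {
        auto range = edges.equal_range(current);
        for (auto it = range.first; it != range.second; ++it)
            if (it->second == to) { current = to; return true; }
        return false;
    }
    const std::string& state() const { return current; }

private:
    std::multimap<std::string, std::string> edges; // legal transitions
    std::string current;
};
```

For example, a guardian configured with `patrol -> alert -> pursuit` can never jump straight from patrolling to the pursuit animation.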
Cameras
During the development of Sinn, one of my teammates, who had started working on the cameras, encountered issues with their implementation. As a result, they handed the responsibility to me and, starting from their work, I rebuilt the cameras from scratch.
In the game, you can see several types of cameras, as shown in the previous videos. However, in summary, they are divided into four main categories:
- 3D Cameras
- 2D Cameras
- Fixed Cameras
- Rail Cameras
More details
3D Cameras
These cameras work through two main parameters: pitch and yaw.
- The pitch parameter is obtained from the vertical movement of the mouse and ranges from 0 to 1. Using this value along with a curve that starts at the character's feet and ends at their head, the position on the curve is evaluated to generate the vertical movement of the camera in a three-dimensional environment.
- The yaw parameter is derived from the horizontal movement of the mouse and controls the camera's rotation around the target, which is typically the character.
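A minimal sketch of how those two parameters could combine (the names are invented, and a straight vertical segment stands in for the artist-authored curve):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// pitch in [0,1]: 0 evaluates the curve at the character's feet,
// 1 at their head. yaw (radians) orbits the camera around the target.
Vec3 cameraPosition(Vec3 target, float feetY, float headY,
                    float pitch, float yaw, float distance) {
    Vec3 p;
    p.y = feetY + (headY - feetY) * pitch;     // evaluate the "curve"
    p.x = target.x + distance * std::sin(yaw); // orbit around target
    p.z = target.z + distance * std::cos(yaw);
    return p;
}
```

With `pitch = 0.5` the camera sits at mid-height; increasing `yaw` sweeps it around the character at a constant distance.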
2D Cameras
This camera is used with the main gameplay mechanic, simulating a two-dimensional environment. Its movement is solely horizontal and vertical. The camera focuses on the character's shadow, although the parameters are set to ensure the body is never out of frame.
Additionally, to highlight the importance of the shadow over the body, this camera includes a depth of field effect.
Rail Cameras
These cameras use the same curves as the 3D cameras. However, instead of evaluating the position on the curve through the pitch parameter, the character’s position relative to the curve is used. With a small margin, the camera is positioned at a point on the curve, focusing on the character.
A distinctive feature of these cameras is that they cannot be directly controlled by the player, except through the character's movement along a predefined path.
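The projection of the character onto the curve can be sketched by approximating the spline with a polyline and taking the nearest point (names invented for illustration, not the engine's curve class):

```cpp
#include <cmath>
#include <vector>

struct V3 { float x, y, z; };

static float dist2(V3 a, V3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Closest point to p on segment ab (t clamped to [0,1]).
static V3 closestOnSegment(V3 a, V3 b, V3 p) {
    V3 ab{b.x - a.x, b.y - a.y, b.z - a.z};
    float len2 = ab.x * ab.x + ab.y * ab.y + ab.z * ab.z;
    float t = len2 > 0.f
        ? ((p.x - a.x) * ab.x + (p.y - a.y) * ab.y + (p.z - a.z) * ab.z) / len2
        : 0.f;
    t = t < 0.f ? 0.f : (t > 1.f ? 1.f : t);
    return {a.x + ab.x * t, a.y + ab.y * t, a.z + ab.z * t};
}

// Walk the polyline and keep the nearest point: that is where the
// rail camera sits while it looks at the character.
V3 railCameraPoint(const std::vector<V3>& rail, V3 character) {
    V3 best = rail.front();
    float bestD = dist2(best, character);
    for (size_t i = 0; i + 1 < rail.size(); ++i) {
        V3 c = closestOnSegment(rail[i], rail[i + 1], character);
        float d = dist2(c, character);
        if (d < bestD) { bestD = d; best = c; }
    }
    return best;
}
```

Because the camera's position depends only on the character's position, the player controls it indirectly, exactly as described above.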
To view the camera code, you can access the following link: Cameras code repository
Character Movement
Although the basic character movement was initially the responsibility of one of my teammates, their approach wasn't working as expected. Therefore, I took over the responsibility and, using their work as a base, completely rebuilt the character movement mechanics.
More details
Walk, stealth and sprint
The character movement system is quite simple. For walking, sprinting, and sneaking movements, the moment when the movement input is pressed is recorded, and using a simple algorithm, the percentage of the maximum speed is calculated.
```cpp
CEntity* ent_player = ctx.getOwner();
TCompPlayer* comp_player = ent_player->get<TCompPlayer>();
// Shape of the acceleration curve: smaller values give a steeper ramp.
float inc_curve = 1.0f / 10.0f;
float tacc = stateData.time_acceleration;
// Time elapsed since the movement input was pressed.
float tnow = comp_player->getPlayerTimer() - comp_player->getIniTimeAcc();
// Value of the curve at the end of the acceleration window.
float v100 = 1.0f - 1.0f / (tacc / inc_curve + 1.0f);
// Percentage of the maximum speed, normalized so it reaches 1 at tacc.
float porAcc = 2.0f - v100 - 1.0f / (tnow / inc_curve + 1.0f);
if (porAcc > 1.0f) porAcc = 1.0f;
return porAcc;
```
![](https://pablogramirez.com/wp-content/uploads/2021/02/excel_aceleracion-300x208.png)
Similarly, the moment when the movement input is released is calculated, and the speed is adjusted until the character comes to a stop.
```cpp
CEntity* ent_player = ctx.getOwner();
TCompPlayer* comp_player = ent_player->get<TCompPlayer>();
// Decelerate much faster on the ground than in the air.
float inc_curve = 1.0f;
if (!comp_player->isJumping()) inc_curve = 1.0f / 50.0f;
float tdec = stateData.time_deceleration;
// Time elapsed since the movement input was released.
float tnow = comp_player->getPlayerTimer() - comp_player->getIniTimeDec();
float v100 = 1.0f - 1.0f / (tdec / inc_curve + 1.0f);
// Remaining percentage of the maximum speed, falling towards 0.
float porDec = (2.0f - v100 - 1.0f - inc_curve / (tnow + inc_curve)) * -1.0f;
if (porDec < 0.0f) porDec = 0.0f;
return porDec;
```
![](https://pablogramirez.com/wp-content/uploads/2021/02/excel_desaceleracion-300x161.png)
Since our third-person cameras have the character positioned slightly to the right to provide a wider view of the environment, and since the character’s movement direction is determined by the front of the camera in use, a small issue arose. When walking straight, the character would gradually twist, significantly worsening the control.
Initially, we tried to fix it with some complex calculations, but the result was still unsatisfactory. Thus, the final solution was to create a direction component that communicated with the current camera to know the yaw and pitch parameters for each frame. However, unlike the camera, this component did not have a slight deviation to the left, keeping the character positioned to the right. This solved the direction problem.
Jump
Jumping is a bit more complicated, as we wanted an adjustable jump in maximum height, jump time, and distance. Therefore, we first separated the horizontal and vertical movements. For horizontal movement, we used the same formulas as for walking, but with different maximum speeds. However, for vertical movement, we developed a small parabola algorithm where, by inputting the maximum height and maximum jump time, we could calculate the trajectory in the air.
![](https://pablogramirez.com/wp-content/uploads/2021/02/geogebra_jump-300x236.png)
Additionally, since we wanted to give the jump more precision, two features were added:
The first is that horizontal movement does not have any kind of momentum. Therefore, if the player stops pressing forward or changes direction, the character will stop moving horizontally or change direction without restrictions. This improves the sense of control.
The second feature is that the player can perform either a small hop or a full-height jump. If the button is released early, the maximum height is scaled by how long it was held. This way, the player won't need to jump two meters to clear an obstacle that is only 20 cm high.
With these features, the jump calculation was divided into two parts. In the first part, the shape of the parabola is pre-calculated based on the jump force (how long the button is pressed), and the jump is initiated, preventing the character from getting stuck in place while running. In the second part, the pre-calculated values of the parabola are used to determine the height velocity, which is combined with horizontal velocity to move the character while jumping.
```cpp
// Pre-calculation when the jump starts (y0, a and x0 are members
// reused by the per-frame calculation below)
float t = time_jump * jump_strength;   // total duration of this jump
y0 = max_y_jump * jump_strength;       // maximum height of this jump
a = 4.0f * (y0 / (t * t));             // parabola coefficient
float aux = y0 / a;
x0 = sqrtf(aux);                       // time of the apex (t / 2)

// Per-frame jump calculation
float x = timer_jump;                  // time since the jump began
float dx = x - x0;
float newY = -a * (dx * dx) + y0;      // height on the parabola
newY += y_ini_value;                   // plus the height at take-off
```
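For clarity, the same parabola can be packaged as a self-contained function (an assumed reconstruction with hypothetical names): with total jump time `t` and maximum height `y0`, the apex sits at `x0 = t / 2` and the coefficient is `a = 4·y0 / t²`, so the height is zero at take-off and landing and exactly `y0` at the apex.

```cpp
#include <cmath>

// Height above the take-off point at elapsed time x, for a jump of
// total duration t and maximum height y0:
//   y(x) = -a (x - x0)^2 + y0,  a = 4 y0 / t^2,  x0 = t / 2
float jumpHeight(float x, float t, float y0) {
    float a  = 4.0f * y0 / (t * t);
    float x0 = t * 0.5f;   // apex time
    float dx = x - x0;
    return -a * dx * dx + y0;
}
```

Scaling `t` and `y0` by the same "jump strength" factor, as the pre-calculation above does, shrinks the whole parabola for short button presses.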
Movement over platform
When we move over a moving platform, we can't let PhysX calculate the character's movement, as this causes the character to fall or results in constant bumping between the platform and the character. Therefore, in each frame, the platform communicates its movement to the character. The character then adds this to its movement vector, creating the illusion that the character is stationary on the platform while it moves.
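A minimal sketch of that frame-by-frame hand-off (illustrative names, not the actual PhysX integration):

```cpp
struct Vec { float x, y, z; };

struct Platform {
    Vec pos{0.f, 0.f, 0.f};
    Vec lastDelta{0.f, 0.f, 0.f};
    // Move the platform and remember the delta for riders this frame.
    void move(Vec d) {
        pos = {pos.x + d.x, pos.y + d.y, pos.z + d.z};
        lastDelta = d;
    }
};

// Character movement for one frame: its own input displacement plus
// the platform's delta, so the character appears glued to the platform.
Vec characterStep(Vec charPos, Vec inputDelta, const Platform& p) {
    return {charPos.x + inputDelta.x + p.lastDelta.x,
            charPos.y + inputDelta.y + p.lastDelta.y,
            charPos.z + inputDelta.z + p.lastDelta.z};
}
```

Adding the platform delta before the physics step avoids the bumping that occurs when the physics engine resolves the two bodies independently.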
To view the character movement code, visit the Character movement code repository.
Animations
During the development of Sinn, I was responsible for integrating all the animations created by the artists for the different characters and some objects into the game.
In a project with a custom engine, the animation integration process involves working directly with the animation files, in this case, using 3DMax. Although not at a professional level, I gained a lot of experience in this field by handling these tasks.
In the project, we implemented Cal3D, the system responsible for playing the animations. However, to automate the process and avoid writing multiple lines of code every time we needed to play an animation, I developed several specific classes to manage the animations. These classes included features such as:
- Automatically replacing the current animation when a new one is played.
- Performing smooth transitions (blending) between animations configured for this purpose.
- Stopping animations at the right moment.
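The real classes drive Cal3D's mixer; the standalone sketch below (invented names, no Cal3D dependency) models only the crossfade weights, which is the essence of the blending feature:

```cpp
#include <algorithm>
#include <string>

// When a new animation starts, its weight ramps from 0 to 1 over
// blendTime while the previous animation ramps down symmetrically.
struct Crossfade {
    std::string current, previous;
    float blendTime = 0.25f;
    float elapsed   = 0.f;

    void play(const std::string& anim) {
        previous = current;   // old animation fades out
        current  = anim;      // new animation fades in
        elapsed  = 0.f;       // restart the blend
    }
    void update(float dt) { elapsed += dt; }

    // Weight of the incoming animation, clamped to [0,1].
    float currentWeight() const {
        return std::min(1.f, elapsed / blendTime);
    }
    float previousWeight() const { return 1.f - currentWeight(); }
};
```

In the real system these weights would be forwarded to the mixer each frame, replacing the current animation automatically when a new one is requested.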
Although I won’t include videos in this section, it’s enough to watch the gameplay videos to appreciate the animation integration.
If you want to check the code, you can do so at the following link: Animation Code Repository
NavMesh
For the movement of the guardians across the different maps in the game, we used navigation meshes (NavMesh). Since the guardians were my main task, I was responsible for integrating them into the game. To do this, I used Mikko Mononen's Recast & Detour libraries, which provided the tools needed to generate the mesh and later query it at runtime.
To create the NavMesh, I used the corresponding demo, where I loaded the 3DMax file with the environment.
To integrate the navigation meshes, I developed a module to act as an intermediary between Mononen's work and our engine. In this module, I included the necessary tools, such as:
- Calculating the path between two points.
- Finding the closest point in a specific area of the mesh.
- Casting a ray against the mesh.
See Code
```cpp
#include "mcv_platform.h"
#include "module_nav_mesh.h"

void CModuleNavMesh::start()
{
    navmesh = CNavmesh();
    navmesh.loadMesh("data/navmeshes/milestone5.bin");
    if (navmesh.m_navMesh) {
        navmesh.prepareQueries();
    }
    else {
        fatal("Error when creating navmesh\n");
    }
}

void CModuleNavMesh::stop()
{
    navmesh.destroy();
}

void CModuleNavMesh::renderDebug() {}
void CModuleNavMesh::update(float elapsed) {}
void CModuleNavMesh::renderInMenu() {}

// Path between two points with explicit step and slope parameters.
std::vector<VEC3> CModuleNavMesh::calculePath(VEC3 pos, VEC3 dest, float step, float slop)
{
    return navmeshQuery.findPath(pos, dest, step, slop);
}

// Path between two points using the module's default step and slope.
std::vector<VEC3> CModuleNavMesh::calculePath(VEC3 pos, VEC3 dest)
{
    return navmeshQuery.findPath(pos, dest, step_size, slope);
}

float CModuleNavMesh::wallDistance(VEC3 pos)
{
    return navmeshQuery.wallDistance(pos);
}

bool CModuleNavMesh::raycast(VEC3 start, VEC3 end, VEC3& hitPos)
{
    return navmeshQuery.raycast(start, end, hitPos);
}

VEC3 CModuleNavMesh::findNearestNavPoint(VEC3 start)
{
    return navmeshQuery.closestNavmeshPoint(start);
}

VEC3 CModuleNavMesh::findNearestPointFilterPoly(VEC3 pos, PolyFlags filter)
{
    return navmeshQuery.nearestPointFilterPoly(pos, filter);
}

VEC3 CModuleNavMesh::findRandomPointAroundCircle(VEC3 pos, float maxDist)
{
    return navmeshQuery.findRandomPointAroundCircle(pos, maxDist);
}

VEC3 CModuleNavMesh::findRandomPoint()
{
    return navmeshQuery.findRandomPoint();
}
```
Others
In addition to the tasks mentioned, in Sinn, I also carried out the following tasks:
Objects with movement
I developed a type of object that allowed configuring the type of movement it should have, the duration of that movement, and whether it should loop. What is unique about these objects is that they don't move through animations, but by using physics.
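One way to sketch the looping part of such a mover (the names and loop modes below are illustrative, not the shipped component) is to compute a normalised progress value each frame and feed the resulting position to the physics body as a kinematic target:

```cpp
#include <cmath>

enum class LoopMode { Once, Loop, PingPong };

// Normalised progress in [0,1] for an elapsed time and a duration.
float moverProgress(float elapsed, float duration, LoopMode mode) {
    float t = elapsed / duration;
    switch (mode) {
    case LoopMode::Once:
        return t > 1.f ? 1.f : t;              // clamp at the end
    case LoopMode::Loop:
        return t - std::floor(t);              // wrap around
    case LoopMode::PingPong: {
        float phase = t - 2.f * std::floor(t * 0.5f); // phase in [0, 2)
        return phase <= 1.f ? phase : 2.f - phase;    // back and forth
    }
    }
    return 0.f;
}
```

Interpolating the object's position from this progress, instead of playing an animation, is what lets the movement remain fully physical.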
Objects with animations
Another component I developed was for objects that had animations and could be activated at specific moments. In addition to activating the animation, they transfer the animation's movement to the collider, so that the movement is not only graphical but also physical.
Final event component
This component allows creating a list of events that occur in a specific order and at set times. The events can be any command programmed in the game, such as object creation, camera shake, sound playback, among others.
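A stripped-down sketch of such an event list (hypothetical names): events are `(time, callback)` pairs added in chronological order, and `update()` fires each one exactly once when its timestamp is reached:

```cpp
#include <functional>
#include <vector>

struct TimedEvent {
    float time;                    // when to fire, in seconds
    std::function<void()> action;  // any game command
};

class EventSequence {
public:
    // Events must be added in chronological order.
    void add(float time, std::function<void()> action) {
        events.push_back({time, std::move(action)});
    }
    // Advance the clock and fire every event whose time has come.
    void update(float dt) {
        clock += dt;
        while (next < events.size() && events[next].time <= clock)
            events[next++].action();   // each event fires exactly once
    }
private:
    std::vector<TimedEvent> events;
    float clock = 0.f;
    size_t next = 0;
};
```

In the real component the callbacks would be the game commands mentioned above: spawning objects, shaking the camera, playing sounds, and so on.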
Bone Tracker
This component allows any game entity to follow the same movement as the bone of another entity's skeleton.