Monday, 9 July 2012

Inside a Game Engine


Hello and a good morning to all of you!

Let's start with a motivational speech - not by me, but by Matthew Jeffrey, who works in talent acquisition and talent branding. He used to work for the famous game publisher EA, but recently moved to Autodesk, maker and publisher of many high-quality 3D design and editing tools. Being the professional recruiter that he is, he gives you a see-it-through-and-you-will-definitely-make-it-and-here-is-how kind of speech. He raises a lot of valid points, and even if it does not convince you to start a career in video games, it certainly gives a good overview of the field.


The fascinating thing about game design is that less and less skill and knowledge are required to create a modern game with more and fancier features than ever before. The reason is that pioneers and researchers in games and graphics lay the groundwork. Most readers have probably never heard of Jerry Tessendorf, even though almost every drop of ocean water you see in games and animated films today is based on his 1999 publication "Simulating Ocean Water". That kind of freely available research is often picked up by individual game designers and developers, as well as companies and special interest groups, and put into games, game engines and frameworks.

Game development tools (or kits), such as the Unreal Development Kit, Unity and the CryENGINE 3 SDK (video on the left), are a combination of a game engine with artist and development tools. They make it easy for artists to produce complex game scenes, and they require less and less programming knowledge from developers to describe the behavior and constraints of objects and characters, by providing ever more graphical and intuitive editors.


So how does a realistic looking game or pure simulation work? Short answer: We take a bunch of data, put them into a magic hat, say abracadabra, and we see results that, to some degree, resemble stuff in the real world.

The longer explanation: Artists use tools such as 3ds Max, Blender or Maya to describe what the world looks like. We call this description a scene. After (and during) editing, the data of the scene is saved in files (similar to a Word document). The files are usually not readable by us humans because they contain "raw" bits and bytes, not just text. Unlike us, a game or simulator engine can read and understand the data from the file.
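To illustrate why such scene files look like gibberish to us but not to the engine, here is a tiny sketch that stores one vertex position as text and as "raw" bytes. The three-float format is made up for illustration and is not any real 3D file format:

```python
# Sketch: the same vertex position stored as human-readable text vs. raw bytes.
# The "<3f" layout (three little-endian floats) is a made-up mini format.
import struct

position = (1.5, 0.25, -3.0)

as_text = "vertex 1.5 0.25 -3.0"           # what a human-readable file might say
as_bytes = struct.pack("<3f", *position)   # what a binary scene file stores

print(as_text)
print(as_bytes)                            # looks like gibberish to us...
print(struct.unpack("<3f", as_bytes))      # ...but a program reads it back exactly
```

The binary form is both smaller (12 bytes instead of 20 characters) and faster for the engine to load, which is one reason scene files are rarely plain text.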
By itself the scene is static. Nothing will move. We need a program that moves things around - Our magic hat! The magic word to be spoken out loud while casting the spell depends on the method that we want to use to move or morph an object: 
  1. Animation - The way objects and characters move is pre-defined by the artist or some algorithm, and never changes. The reason animated objects don't look too robot-like is that they are usually driven by multiple animations at once, using a method called animation blending. Often the animation changes depending on the situation: for example, there are different animations for walking, sitting down, standing up, and so on.
  2. Simulation - We apply the laws of physics to objects and characters of our virtual world. For example, objects and characters fall down (and not up!) due to gravity.
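To make the first item a little more concrete: animation blending, at its simplest, is just a weighted average between two poses. The sketch below blends two made-up poses (a pose here is nothing but a list of joint angles); real engines blend full skeletons, but the arithmetic is the same idea:

```python
# Toy sketch of animation blending. A "pose" is a list of joint angles;
# the angle values and joints are made up, not taken from any real engine.

def blend(pose_a, pose_b, weight):
    """Linearly interpolate two poses; weight=0 gives pose_a, weight=1 gives pose_b."""
    return [(1 - weight) * a + weight * b for a, b in zip(pose_a, pose_b)]

walk_pose = [0.0, 30.0, -15.0]   # hypothetical joint angles for a walk-cycle frame
run_pose  = [0.0, 55.0, -40.0]   # hypothetical joint angles for a run-cycle frame

# As the character speeds up, slide the weight from 0 toward 1:
print(blend(walk_pose, run_pose, 0.5))  # halfway between walking and running
```

Sliding the weight smoothly over a few frames is what hides the robot-like snap between a walk and a run.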

That's it: The engine puts the scene description from the file into our magic hat - a program that can animate, simulate, or both, to make things move about. Game engines provide many default programs to do this. If a game engine does not offer the kind of program that the game designer wants (for example, animated ocean water or real-time shadows), she can either use a program from someone else and add it to the engine, or write it herself. We call a program that can be added into another one a script, a library, or a Software Development Kit (SDK) (an SDK being a collection of libraries and possibly some tools). You might have seen or heard of "dll" files on Windows - those are libraries, and they can be used by other programs to do certain things.

That also tells us a little bit about engines: We can say an engine is a program, in that it can tell the computer to "do stuff", but we can also say that it is a collection of many programs which each have their own special purpose. An elementary program that solves only one particular problem is also called an algorithm. For example, an algorithm is used to simulate the water in the video on the left. If the user has an influence on how things turn out - by, for example, pressing a mouse button or raising a hand in front of a Kinect - we say that the program is interactive.
One small note on semantics: The water animation that we see requires more than just one algorithm. One algorithm continuously changes the data that we use to represent the water (the data being a whole bunch of numbers, in this case stored in a so-called grid in memory). That data is created and modified by the algorithm, but it is then displayed on our screen by a so-called renderer. The renderer uses the position of the water, in combination with some other data such as colors and lights, to display things in a certain way.


And that principle applies to every game or simulator: One part of the engine, which we call the main-loop (consisting of a whole bunch of programs, mostly simulators and animators) just crunches numbers to modify the data. Another part of the engine, the render-loop, or renderer (another bunch of programs), then uses these numbers to display the results on the screen. Because the data changes every frame, the displayed result always looks different, and a sense of motion is created. 
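This split can be sketched in a few lines. In the toy loop below, `simulate` is the stand-in for the main-loop (it modifies the data) and `render` for the render-loop (it only reads the data); the falling-object scene and all numbers are made up for illustration:

```python
# Minimal sketch of the main-loop / render-loop split described above.
# simulate() crunches the numbers; render() only reads them and never writes.

state = {"height": 100.0, "velocity": 0.0}   # our entire "scene": one falling object

def simulate(state, dt):
    """Main-loop work: apply gravity and move the object (modifies the data)."""
    state["velocity"] -= 9.81 * dt
    state["height"] += state["velocity"] * dt

def render(state):
    """Render-loop work: display the data without changing it."""
    return f"object at height {state['height']:.2f}"

dt = 1.0 / 30.0            # 30 frames per second
for frame in range(3):     # three frames of the loop
    simulate(state, dt)    # 1) modify the numbers
    print(render(state))   # 2) display the numbers
```

Because `simulate` runs before every `render`, each displayed frame differs slightly from the last one, and that difference is what we perceive as motion.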




And this is the last item on today's agenda: What is a frame? As you probably already know, computers and TVs only ever show one picture at a time. That picture is also called a frame. Both the main-loop and the render-loop need to compute some stuff before a frame can be displayed - and that takes time. How much time? That depends: The more data we want to display, the longer those computations take. Since the different components of our computer only run at limited speeds (called clock rates), we can only process a limited amount of data within a given amount of time.

Since we usually need to display at least 20 (to 30) frames per second, so that the human eye cannot make out a single picture, we have at most 1/20th of a second, i.e. 50 milliseconds, for all computations. We call this the real-time constraint: If the computations take more than 50 milliseconds, the displayed result is generally not considered real-time.

Those 50 milliseconds, together with the speed of the computer, define how much data we can have in any real-time application, and the amount of data defines how beautiful we can make our game look. That is why game designers and developers always have to weigh beauty against speed, to produce the best looking game that still runs on all computers satisfying the lowest possible minimum requirements. If, for example, you want to produce games for mobile devices or older computers, your game must run on much slower hardware than when targeting a current-generation console or PC, which is why such games either cannot look as good, or more sophisticated tricks and algorithms must be developed to make them look good without requiring a lot of data.
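The budget arithmetic above is worth writing out once. The 35 ms simulation cost below is a hypothetical measurement, only there to show how the remaining budget for the renderer falls out:

```python
# Back-of-the-envelope frame-budget arithmetic for the real-time constraint.

target_fps = 20
budget_ms = 1000.0 / target_fps    # 50 ms for ALL computations in one frame
print(budget_ms)                   # 50.0

# If the simulation step already takes 35 ms (a made-up measurement),
# the renderer must fit into whatever is left:
simulation_ms = 35.0
render_budget_ms = budget_ms - simulation_ms
print(render_budget_ms)            # 15.0
```

Aiming for 60 fps instead shrinks the whole budget to about 16.7 ms, which is why every extra object, light or particle has to earn its place.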




At this point, I'd like to give a quick thanks to Keith Newton and the I-Novae team for offering me an opportunity and providing some really sweet content. He has been working on Unreal Engine 3, and is currently, together with his associates, working on the I-Novae game engine, which is featured in the two videos above.


For further reading, there are good books and articles that explain the architecture and anatomy of a game engine in depth. Here are some good links that explain how to create some very basic games.

Saturday, 7 July 2012

Introduction: Jimmy

Hello, my name is Jimmy and I am a friend of Dominik's. I am responsible for editing, but I will also be posting here. At the moment I am interested in mobile games, skeletal animation, and the game industry. I hope our articles are accessible and interesting to gamers and programmers alike!

Friday, 6 July 2012

Evolution of real-time Physics


Physics simulation is something that most games need. Even the gun-wielding heroes of Doom (who is called Doomguy, by the way) and Wolfenstein 3D (Private William "B.J." Blazkowicz) already adhered to the most basic law of physics: Neither Doomguy nor Robo-Hitler can occupy the same space as a wall or the floor.

But times change. Where Doomguy simply stopped moving after bumping into something, the cake-seeking test subject from Portal can shove and throw boxes around, and Parker from Red Faction simply blows walls that block his view off the surface of Mars. Real-time simulation, just like every other aspect of games, has evolved - a lot! The upcoming Unreal Engine 4 makes use of PhysX for a previously unseen level of real-time realism. The video on the right shows how PhysX added some advanced features to popular AAA games.


PhysX, Havok and their open-source contemporary, Bullet, are development tools that make simulating physics easy because they do all the math. Note that in this video, the most basic features (such as not falling through the floor) were implemented without PhysX, even though PhysX also provides that functionality. In a perfect world, when using a physics engine, the developer only has to define the physical properties of the scene and then let the engine do the rest. All these engines offer different functionality, but they also share a common subset of basic features that they can all do almost equally well. It is like comparing two motor vehicles of different brands or types - they might feel and drive very differently, but they can all drive.
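That "define the properties, let the engine do the rest" workflow looks roughly like the sketch below. This is a hypothetical mini engine written for this post - the class names and methods are invented and are not the real PhysX, Havok or Bullet API - but the shape of the code (build a world, add bodies, step the simulation) is what those SDKs share:

```python
# Hypothetical mini physics "engine" illustrating the common workflow:
# describe physical properties, then repeatedly call step() and let the
# engine do the math. Not the real API of PhysX, Havok or Bullet.

class RigidBody:
    def __init__(self, height, mass):
        self.height = height      # meters above the floor
        self.mass = mass          # kilograms (unused here, but a typical property)
        self.velocity = 0.0       # meters per second, vertical only

class World:
    def __init__(self, gravity=-9.81):
        self.gravity = gravity
        self.bodies = []

    def add(self, body):
        self.bodies.append(body)

    def step(self, dt):
        """Integrate gravity and enforce the most basic rule: no falling through the floor."""
        for body in self.bodies:
            body.velocity += self.gravity * dt
            body.height += body.velocity * dt
            if body.height < 0.0:     # crude floor collision
                body.height = 0.0
                body.velocity = 0.0

world = World()
crate = RigidBody(height=5.0, mass=10.0)   # drop a crate from 5 m
world.add(crate)
for _ in range(120):                       # two seconds at 60 steps per second
    world.step(1.0 / 60.0)
print(crate.height)                        # the crate has landed on the floor: 0.0
```

The developer never wrote "make the crate fall"; they only declared gravity, a body, and a floor rule, and the engine's `step` did the rest - which is exactly the division of labor these SDKs sell.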

The video below highlights a bunch of very sexy features to be seen in Unreal Engine 4. None of them are brand-new, but they all have been improved to establish new records in visual quality.
Highlighted Unreal Engine 4 features include:
  1. High-quality hair and fur simulation
  2. Dynamic fracturing and shattering
  3. Fluid-like particle rendering
  4. Soft-body and cloth animation
Don't be confused by some of the terms. I will get to all those and a lot more in the near future. For now, let me briefly explain why these features are so amazing:

High-quality hair and fur simulation is of great interest to many game designers because the average human or animal has tens of thousands of strands. In contrast to this example, where every strand is simulated on its own, in most games hair is only rendered in a few patches and usually not even simulated, as in Skyrim, as you can see here.

Dynamic fracturing and shattering allows you to destroy anything. Imagine you are playing an FPS and you know the enemy is hiding behind the wall right in front of you, waiting for you to step through the door and trigger a mine - but he did not realize that those walls are not strong enough to withstand a blast from your BFG. Boom.
Only very few games have an extensively destructible environment, and I personally hope it will become a standard feature in upcoming FPS games.

Fluid-like particle rendering is just beautiful. Particle effects are standard in every modern game, but they are usually only animated (like these). That means they will always behave the same, will not interact with the player, and rarely interact with each other. Simulation allows particles to behave like a fluid, such as fire, fog or water, that interacts with the objects within it. I myself have worked on simulating particles that behave like fluids. My code and results are open source and available online.
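The animated-versus-simulated distinction from the last paragraph fits in a few lines of code. Both "particles" below are made up for illustration: the animated one follows a pre-baked path no matter what, while the simulated one reacts to an external force (standing in for player interaction):

```python
# Toy contrast between an *animated* and a *simulated* particle.
# The paths, forces and numbers are invented for illustration only.
import math

def animated_particle(t):
    """Animated: position is a fixed function of time - always the same path."""
    return (t, math.sin(t))                  # pre-baked sine wiggle

def simulated_particle(pos, vel, dt, wind):
    """Simulated: position reacts to forces, e.g. a player-controlled wind."""
    vx, vy = vel
    vx += wind * dt                          # external influence changes the outcome
    vy += -9.81 * dt                         # gravity
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)

# The animated particle ignores the world; the simulated one is blown off course:
pos, vel = (0.0, 0.0), (1.0, 0.0)
pos, vel = simulated_particle(pos, vel, 0.1, wind=5.0)
print(pos)
```

Real fluid simulation adds particle-to-particle forces on top of this (pressure, viscosity), which is exactly what makes thousands of such particles start to flow like water.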

Last but not least, soft-body and cloth animation allows us to simulate the behavior of soft and elastic objects, such as cloth, rubber balls, plastic - basically everything that is not 100% brittle and may change its shape. However, most of a game's objects are represented by idealized (and computationally cheaper) rigid bodies.


Game physics simulation is a broad topic, and I hope this article gives the reader a sense of appreciation (in case of former absence thereof) as to how many features of our complex world can be captured and replayed by running numbers through a microchip.

The awkward first time

Hi,

My name is Dominik, and I will be your author today.

Seeing how this is my first post, I feel somewhat obligated to quickly explain what kind of blabber you can expect here in the near future. In short: It will be about physics, about the inner workings of games and other real-time 3D applications, as well as a bit of programming.

The blog is not only aimed at game enthusiasts, but also students, as well as professionals and pretty much everyone who enjoys exploring the technical side of things.

Without further ado - Let's get to it.