GAME THEORY

Introduction to Game Engines for Game Design and Development

A game world is built from a huge number of polygons. Imagine a world with several hundred thousand vertices / polygons, and a first-person view located on one side of that 3D world. Some of the world's polygons are visible in the view, while others are not, because some object or objects, like a wall, are obscuring them. Even the best game coders can't handle 300,000 triangles in view on a current 3D card and still maintain 60fps (a key goal).

The process of removing polygons that aren't visible before handing them to the GPU is called culling.
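To make that concrete, here's a minimal C++ sketch of a cull pass. The types and the crude distance-and-direction test are ours, purely for illustration; a real engine would use a proper frustum test or one of the schemes discussed below:

#include <vector>

// Hypothetical, minimal types for illustration only.
struct Vec3 { float x, y, z; };

struct Polygon {
    Vec3 center;   // rough center point, used by the cheap test below
    // vertices, texture id, etc. would live here in a real engine
};

// Stand-in visibility test: a crude "in front of the camera and within
// draw distance" check. A real engine would use a frustum test, a
// precomputed visibility list, or occlusion queries instead.
bool isVisible(const Polygon& p, const Vec3& camPos, const Vec3& camDir,
               float maxDist) {
    Vec3 to{p.center.x - camPos.x, p.center.y - camPos.y,
            p.center.z - camPos.z};
    float facing = to.x * camDir.x + to.y * camDir.y + to.z * camDir.z;
    float dist2  = to.x * to.x + to.y * to.y + to.z * to.z;
    return facing > 0.0f && dist2 < maxDist * maxDist;
}

// The cull pass: polygons that fail the test never reach the card.
std::vector<Polygon> cull(const std::vector<Polygon>& world,
                          const Vec3& camPos, const Vec3& camDir) {
    std::vector<Polygon> visible;
    for (const Polygon& p : world)
        if (isVisible(p, camPos, camDir, 1000.0f))
            visible.push_back(p);
    return visible;   // only these get handed to the GPU
}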

Let's discuss why the card can't handle super-high polygon counts. I mean, doesn't the latest card handle X million polygons per second? Shouldn't it be able to handle anything?

First, you have to understand that there are such things as marketing polygon rates, and then real-world polygon rates. Marketing polygon rates are the rates the card can achieve theoretically: how many polygons it can handle if they are all on screen, the same texture, and the same size, with the application doing nothing except throwing polygons at the card. Those are the numbers the graphics chip vendors throw at you.

However, in real game development situations the application is often doing lots of other things at the same time: 3D transforms for the polygons, lighting them, moving more textures into card memory, and so on.

Not only do textures get sent to the card, but the details of each polygon too. Some of the newer cards allow you to store the model / world geometry within the card memory itself, but this can be costly in terms of eating up space that textures would normally use, and you'd better be sure you are using those model vertices every frame, or you are just wasting space on the card. This is especially true if you have a slow CPU or insufficient memory.
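As an aside, on cards and APIs that support it, parking geometry in card memory looks something like this OpenGL sketch. It assumes an OpenGL 1.5+ context with the buffer-object entry points loaded; uploadModel is our own illustrative wrapper, not a library call:

#include <GL/gl.h>

// Sketch: upload a static model's vertices into card-side memory once,
// instead of re-sending them every frame.
GLuint uploadModel(const float* vertices, GLsizeiptr sizeInBytes) {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);                // allocate a buffer object
    glBindBuffer(GL_ARRAY_BUFFER, vbo);   // make it the current one
    // GL_STATIC_DRAW hints that we will draw from this data every frame
    // without changing it: exactly the case where keeping it on the
    // card pays off, at the cost of space textures could have used.
    glBufferData(GL_ARRAY_BUFFER, sizeInBytes, vertices, GL_STATIC_DRAW);
    return vbo;
}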

The simplest approach to culling is to divide the world up into sections, with each section having a list of the other sections that can be seen from it. That way you only display what can possibly be seen from any given point. How you create the list of visible sections is the tricky bit, and there are many ways to do it, such as BSP (binary space partitioning) trees and portals.
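Here's a sketch of that brute-force lookup, with entirely hypothetical structures. The expensive part, building each section's visibility list, happens offline; the per-frame cost is just a table walk:

#include <vector>

// Hypothetical section record: the polygons a section contains plus a
// precomputed list of the other sections visible from it (the
// "I can see sections A, B and C from section D" table).
struct Section {
    std::vector<int> polygonIds;       // polygons belonging to this section
    std::vector<int> visibleSections;  // indices of potentially visible sections
};

// Per-frame work is then a single lookup: find the camera's section,
// and gather the polygons of that section plus everything on its list.
std::vector<int> gatherVisiblePolygons(const std::vector<Section>& world,
                                       int cameraSection) {
    std::vector<int> out = world[cameraSection].polygonIds;
    for (int s : world[cameraSection].visibleSections) {
        const std::vector<int>& ids = world[s].polygonIds;
        out.insert(out.end(), ids.begin(), ids.end());
    }
    return out;  // only these go on to the finer per-polygon tests
}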

A BSP tree is a way of dividing the world up into small sections and organizing the world polygons so that it's easy to determine what's visible and what's not, which is handy for software-based renderers that don't want to do too much overdrawing. It also has the effect of telling you where you are in the world in a very efficient fashion.
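A minimal sketch of the structure, with a hypothetical node layout: a BSP tree splits space with planes, and walking it camera-side-first yields polygons roughly nearest-first, which is what lets a renderer limit overdraw:

#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical BSP node: a splitting plane (normal and distance), the
// polygons lying on that plane, and the two half-spaces it divides.
struct BspNode {
    Vec3 normal;               // plane normal
    float d;                   // plane distance: dot(normal, p) == d on the plane
    std::vector<int> polys;    // polygons coincident with this plane
    BspNode* front = nullptr;  // subtree on the normal's side
    BspNode* back  = nullptr;  // subtree on the other side
};

// Walk the tree front-to-back relative to the camera: at each node,
// visit the side the camera is on first.
void walkFrontToBack(const BspNode* node, const Vec3& cam,
                     std::vector<int>& order) {
    if (!node) return;
    float side = cam.x * node->normal.x + cam.y * node->normal.y +
                 cam.z * node->normal.z - node->d;
    const BspNode* nearSide = (side >= 0.0f) ? node->front : node->back;
    const BspNode* farSide  = (side >= 0.0f) ? node->back  : node->front;
    walkFrontToBack(nearSide, cam, order);
    order.insert(order.end(), node->polys.begin(), node->polys.end());
    walkFrontToBack(farSide, cam, order);
}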

A portal-based engine is one where each area (or room) is built as its own model, with doors (or portals) in each section through which other sections can be seen. The renderer renders each section individually as a separate scene.
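Here's a sketch of that recursion, again with made-up types: render the room the camera is in, then for each portal whose screen rectangle survives clipping, render the room it leads to, clipped to the portal. Rooms with no portal into the current view never get rendered at all:

#include <vector>

// Hypothetical room / portal layout for illustration.
struct Rect { float x0, y0, x1, y1; };   // screen-space clip rectangle

struct Portal {
    int targetRoom;   // the room visible through this opening
    Rect bounds;      // its projected screen rectangle (assumed precomputed)
};

struct Room {
    std::vector<int> polys;       // this room's own geometry
    std::vector<Portal> portals;  // doors into neighbouring rooms
};

Rect intersect(const Rect& a, const Rect& b) {
    return { a.x0 > b.x0 ? a.x0 : b.x0, a.y0 > b.y0 ? a.y0 : b.y0,
             a.x1 < b.x1 ? a.x1 : b.x1, a.y1 < b.y1 ? a.y1 : b.y1 };
}

bool isEmpty(const Rect& r) { return r.x0 >= r.x1 || r.y0 >= r.y1; }

// Stand-in for the real renderer: draw room.polys clipped to 'clip'.
void drawRoom(const Room& room, const Rect& clip) { (void)room; (void)clip; }

// Render the camera's room, then recurse through each portal that
// survives the clip test, shrinking the clip region each time.
// (A real engine also skips portals facing away from the camera, which
// is what stops it recursing back through the doorway it came in by;
// the depth cap here is a crude stand-in for that.)
void renderFrom(const std::vector<Room>& rooms, int roomId,
                const Rect& clip, int depth = 0) {
    if (depth > 16) return;
    drawRoom(rooms[roomId], clip);
    for (const Portal& p : rooms[roomId].portals) {
        Rect c = intersect(clip, p.bounds);
        if (!isEmpty(c))
            renderFrom(rooms, p.targetRoom, c, depth + 1);
    }
}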

Graphics Pipeline in the Game Design and Development Process

The graphics pipeline from game to rendered polygons might flow something like this:

Game determines what objects are in the game, what models they have, what textures they use, what animation frame they might be on, and where they are located in the game world. The game also determines where the camera is located and the direction it’s pointed.
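In data terms, that hand-off might look something like this (entirely hypothetical structures, just to show what gets decided at this stage):

#include <vector>

struct Vec3 { float x, y, z; };

// What the game decides per object before the renderer sees it.
struct ObjectState {
    int modelId;     // which model to draw
    int textureId;   // which texture(s) it uses
    int animFrame;   // which animation frame it might be on
    Vec3 worldPos;   // where it is located in the game world
};

// What the game decides about the viewer.
struct CameraState {
    Vec3 position;   // where the camera is located
    Vec3 direction;  // the direction it's pointed
};

// The whole frame description handed to the renderer.
struct FrameInput {
    CameraState camera;
    std::vector<ObjectState> objects;
};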

Game passes this information to the renderer. In the case of models, the renderer might first look at the size of the model and where the camera is located, and then determine whether the model is on screen at all, off to the left of the observer (camera view), behind the observer, or so far in the distance it wouldn't be visible. It might even use some form of world determination to work out if the model is visible (see the next item).
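Those quick rejection tests might look like this sketch against a model's bounding sphere (a hypothetical helper; a production renderer would test against all six frustum planes, which is what also catches "off to the left of the view"):

#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Cheap whole-model rejection using a bounding sphere:
//  - behind the observer: sphere entirely behind the camera plane
//  - too far away: beyond the draw distance
bool modelMightBeVisible(const Vec3& center, float radius,
                         const Vec3& camPos, const Vec3& camDir,
                         float maxDist) {
    Vec3 to{center.x - camPos.x, center.y - camPos.y, center.z - camPos.z};
    float forward = dot(to, camDir);         // distance along the view direction
    if (forward < -radius) return false;     // entirely behind the observer
    float dist = std::sqrt(dot(to, to));
    if (dist - radius > maxDist) return false;  // too far to be visible
    return true;   // survives the cheap tests; finer culling comes later
}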

The world visualization system determines where in the world the camera is located, and what sections / polygons of the world are visible from that viewpoint. This can be done many ways, from the brute-force method of splitting the world into sections and keeping a straight "I can see sections A, B & C from section D" list for each part, to the more elegant BSP (binary space partitioning) approach. All the polygons that pass these culling tests get passed to the polygon renderer.

For each polygon that is passed into the renderer, the renderer transforms the polygon according to both local math (is the model animating?) and world math (where is it in relation to the camera?), then examines the polygon to determine whether it is back-faced (i.e. facing away from the camera). Those that are back-faced are discarded. Those that aren't are lit, according to whatever lights the renderer finds in the vicinity. The renderer then looks at what texture(s) this polygon uses and ensures the API / graphics card is using that texture as its rendering base. At this point the polygons are fed off to the rendering API and then on to the card.
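Condensed into code, that per-polygon stage might look like the following sketch, with stand-in functions for the transform, lighting, and texture-binding steps (all names are ours, not a real API):

#include <vector>

struct Vec3 { float x, y, z; };

struct Poly {
    Vec3 v[3];      // triangle vertices
    Vec3 normal;    // face normal
    int textureId;  // texture this polygon uses
};

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Stand-ins for the real steps; each would call into the engine or API.
Poly transformToCameraSpace(const Poly& p) { return p; }  // local + world math
void light(Poly& p) { (void)p; }                   // apply nearby lights
void bindTexture(int textureId) { (void)textureId; }  // select the texture
void submit(const Poly& p) { (void)p; }            // hand off to the API

void renderPolygons(const std::vector<Poly>& polys) {
    for (const Poly& src : polys) {
        Poly p = transformToCameraSpace(src);  // animation + camera-relative math
        // Back-face test: with the camera at the origin of camera space,
        // a polygon whose normal points away from the eye is discarded.
        if (dot(p.normal, p.v[0]) >= 0.0f)
            continue;
        light(p);                  // lit by whatever lights are nearby
        bindTexture(p.textureId);  // make sure the card uses this texture
        submit(p);                 // off to the API, then on to the card
    }
}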