To quickly create and test 3D interactive applications, I use my own game engine that includes all the necessary tech for i/o, modern rendering, collision, physics, and basic user interaction. Some specific features have yet to be added, but fortunately the engine is easy to update whenever something new is needed. Note, I have nothing against middleware solutions. On the contrary, I believe it always makes sense to try to choose the best tool for the job. However, at the early creative stages of a project it is harder to invent something new when all you are doing is little more than replacing the jpg files from an existing game with your own bitmaps. The benefit of having your own code is that it is easy to prototype new technology and gameplay ideas - and to do that fast. There are no limits on what you can do. In the future, should any project ever need heavy usage of any particular area of technology, it would be easy to replace that component with an off-the-shelf library or SDK.
Rendering: The engine is capable of drawing 3D models and environments. The system is based on d3d9 and supports the latest shaders and modern rendering approaches. Recently we've been focusing on local illumination with multiple shadows and dynamic lights. Combined with good shading techniques (such as parallax/relief mapping), it's easy to make basic interior environments look fairly good. Our current code has various rendering paths available. For example, in addition to our regular multipass approach, we have a single-pass fallback, and we have a deferred shading implementation where diffuse, normal, and position information is first captured in backbuffer-sized render targets. It's easy to support fallback effects for lower-end hardware, although I haven't taken much time to do so. Even our default technique uses a lot of gpu cycles. To run at reasonable speeds on my two-year-old laptop (nvidia 6200), I use fewer lights (passes) and run in a smaller window.
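To illustrate the deferred shading idea, here is a minimal CPU-side sketch (hypothetical names, not the engine's actual shader code): once albedo, normal, and position have been captured per pixel in the render targets, a light pass just evaluates the shading model from those buffers, with no scene geometry in sight.

```cpp
#include <algorithm>
#include <cmath>

// Minimal vector type for the sketch.
struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// Deferred light pass for one pixel: everything the light needs
// (albedo, surface normal, world position) was already written to
// the G-buffer during the geometry pass, so this is just a Lambert
// term per light, accumulated additively per light pass.
Vec3 ShadePixel(Vec3 albedo, Vec3 normal, Vec3 position, Vec3 lightPos)
{
    Vec3 L = normalize(lightPos - position);
    float ndotl = std::max(0.0f, dot(normalize(normal), L));
    return albedo * ndotl;
}
```

The appeal is that lighting cost becomes proportional to lit pixels rather than scene complexity, which is why it pairs well with lots of small dynamic lights.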
Physics: The engine has gjk-based collision detection and a basic rigidbody simulation system with constraints. As discussed earlier, the implementation is small and consistent with the rest of the code, so it is easy to use and debug. Having our own physics lets us do some unconventional things, such as performing boolean operations (geomod) efficiently while letting rigidbodies interact properly with the newly created geometry. Our goal is not to develop a big physics library. Should a project demand heavy use of rigidbody dynamics, then a high-performance physics library (such as Bullet or Physx) could easily be used. Should a project require high-fidelity character motion, then Natural Motion would be the appropriate choice. The reason for not committing to any particular solution at this time is that it is best to remain flexible toward the needs of any future client/project.
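For readers unfamiliar with GJK: it queries convex shapes purely through a "support" function - the vertex farthest along a given direction - and iterates support points of the Minkowski difference of two shapes to test them for overlap. A sketch of that support mapping for a convex hull stored as a point list (names hypothetical, not the engine's actual code):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Support mapping for a convex point cloud: the vertex with the
// greatest projection onto 'dir'.  This one query is all GJK needs
// to know about a shape, which is why the algorithm works unchanged
// for boxes, hulls, and other convex primitives.
Vec3 Support(const std::vector<Vec3>& verts, const Vec3& dir)
{
    std::size_t best = 0;
    for (std::size_t i = 1; i < verts.size(); ++i)
        if (dot(verts[i], dir) > dot(verts[best], dir))
            best = i;
    return verts[best];
}
```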
Sound: The game engine can be compiled to use either fmod or a simple directx-based sound library. The simple library is more appropriate if the project has straightforward sound requirements and can get by with just wav files. In contrast, fmod supports mp3s, is cross-platform, and much more. Furthermore, it has a reasonable licensing scheme. However, using fmod, I found that library start-up and the initial creation of sound objects took too long for a small game designed for the impatient user with ADHD.
Art Pipeline: It is better not to become tied to a particular DCC tool for all asset creation and its associated metadata. For simple, basic assets, any game engine is going to provide data visualization. Given that, I went one step further and provided simple in-game editing/tweaking of geometric, physical, and game properties. This enables faster iteration and prototyping for the game. Custom level editors (a major undertaking) take this concept to completion; we have no plans to do that. Note, the use of custom world builders can sometimes prevent high-end art from being changed and reexported without losing the information added by the custom tool. I do have a small custom exporter for 3dsmax that outputs max nodes directly into engine-ready vertexbuffers and matrix palettes for skinned mesh animation. However, I am willing to get rid of that code and go with a "standard". But which "standard" will emerge as the standard? Currently I'm still keeping tabs on the various DCC pipeline options such as collada, fbx, xna, x, custom, etc. Any pipeline decision is best made with input from the people who will be using the pipeline. As for textures, images just go into a subdirectory where a material file can override the default shader and settings. Images that are specified as height maps have normal maps automatically generated, with the height information written into the alpha channel of the result - useful for parallax and bump mapping.
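The height-map-to-normal-map step can be sketched as follows - a hypothetical minimal version using central differences, not the engine's actual converter; the 'scale' bumpiness knob and RGBA8 packing are my own assumptions:

```cpp
#include <cmath>
#include <vector>

// RGBA8 texel.
struct Texel { unsigned char r, g, b, a; };

// Build a normal map from a grayscale height field using central
// differences, packing the original height into the alpha channel
// (handy later for parallax mapping).  Heights are 0..255 and edges
// clamp; 'scale' controls apparent bumpiness.
std::vector<Texel> HeightToNormalMap(const std::vector<unsigned char>& hmap,
                                     int w, int h, float scale)
{
    std::vector<Texel> out(w * h);
    auto H = [&](int x, int y) {
        x = x < 0 ? 0 : (x >= w ? w - 1 : x);
        y = y < 0 ? 0 : (y >= h ? h - 1 : y);
        return hmap[y * w + x] / 255.0f;
    };
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
        float dx = (H(x + 1, y) - H(x - 1, y)) * scale;
        float dy = (H(x, y + 1) - H(x, y - 1)) * scale;
        float nx = -dx, ny = -dy, nz = 1.0f;            // surface normal
        float len = std::sqrt(nx * nx + ny * ny + nz * nz);
        // Remap [-1,1] -> [0,255] as in a standard normal map.
        out[y * w + x] = {
            (unsigned char)((nx / len * 0.5f + 0.5f) * 255.0f),
            (unsigned char)((ny / len * 0.5f + 0.5f) * 255.0f),
            (unsigned char)((nz / len * 0.5f + 0.5f) * 255.0f),
            hmap[y * w + x] };                          // height in alpha
    }
    return out;
}
```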
Characters: It's easier to achieve quality environments and objects than quality characters - content that significantly adds to development cost. This includes model creation and rigging, the animation/simulation fidelity required for realism, and shading sophisticated enough to avoid looking previous-generation. Also, any NPC is going to require some sort of behavior control. In conclusion, more game systems (including animation, scripting, geometry pipeline, skinned mesh rendering) have to be complete before we can deliver on this front. Consequently, this is a requirement for game #2 - not the first game.
Geomod: One of the "cool" technical features we want to exploit in a game is constructive solid geometry. This lets the player dynamically modify the environment in real time. For example, the player can blow a hole in the wall wherever he wants - not just in a specified "place bomb here" position.
Level Editing: Using the geomod tech for level editing made it easy to quickly prototype various level layouts and designs. Areas are made up of logical units. We call these brushes, but they don't have to be convex. What constitutes a brush can be whatever makes sense for the scene under construction. For example, in an interior environment each room and each hallway area can be its own unit. These can consist of a solid finite object or the representation of an empty space. For example, a room can be constructed just using the 6 walls that face inward. The brushes are merged together to create the scene. The way this works is that the underlying spatial structures are intersected as necessary, creating the resulting geometry. If you don't like where a hallway meets a room, you simply move the hallway over. Such boolean operations can even be done interactively. This beats trying to go back and vertex-edit the entranceway. The goal isn't to replace the DCC tools out there. It's just that this in-game level editing system was surprisingly useful in certain rapid prototyping situations, and therefore we felt it worth mentioning.
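As a toy illustration of the boolean idea (not the engine's actual geomod code, which works on more general spatial structures), intersecting two axis-aligned brushes reduces to an interval overlap per axis:

```cpp
#include <algorithm>

// A box "brush" as min/max corners.
struct Box { float lo[3], hi[3]; };

// Boolean intersection of two axis-aligned brushes: per axis, the
// result spans the overlap of the two intervals.  An empty interval
// on any axis means the brushes don't overlap at all.  Real CSG
// applies the same principle to arbitrary planes and polyhedra.
bool Intersect(const Box& a, const Box& b, Box& out)
{
    for (int i = 0; i < 3; ++i) {
        out.lo[i] = std::max(a.lo[i], b.lo[i]);
        out.hi[i] = std::min(a.hi[i], b.hi[i]);
        if (out.lo[i] >= out.hi[i]) return false;   // no overlap
    }
    return true;
}
```

Union and subtraction follow the same pattern; moving a hallway brush just means recomputing these overlaps against its neighbors, which is why the operation is cheap enough to do interactively.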
Moving Holes: I prototyped a gameplay idea that exploits continuous usage of the constructive solid geometry code - in particular, interactive geometry such as the "moving holes" that you can use to escape from one enclosed room/area to another. I figure the gameplay would be similar to how the player jumps on a moving platform to get from one area to another in a typical platformer game like Mario. The difference here is that you jump into a hole in space that slips through walls and floors. If you remember the old Road Runner cartoons, think of how the Coyote would order holes from Acme and put them in the path of the Road Runner in an attempt to make him fall through them - only to have the Road Runner stop before the hole, pick it up, and move it, after which the Coyote would fall through the hole himself. It's different, but I believe the user would easily grasp how things work and be able to play in this world. Visually I think this will look really cool.
Game Scripting: The engine is equipped with a command console that lets the user easily experiment with settings and invoke functions interactively. The configuration files for setting up key bindings and specifying options use the same interpreter. Similar systems can be found in id's and valve's engines. In addition to global functions and variables, objects can expose members to the console as well. The object hash that implements this is also useful for exporting and importing game data to and from xml files. The console system doesn't provide a fully programmable language. Currently there are some lisp-like features to allow more flexibility. However, it's insufficient to express all the control flow and logic needed to create a game using the engine. Consequently we have set aside a c++ module for game-specific code for the time being. Ideally the engine should be separate from the game. There are a number of ways to make this possible. The existing console could be improved to provide complete lisp features. Alternatively, a full game scripting language could be incorporated. We've looked at and done some preliminary tests with lua and game monkey script. A full integration could take a couple of days to complete, so the hesitation here is in picking the right system. The language should interface nicely with the engine internals and allow the scripter to easily work with 3D data such as vectors and quats. Lua has a track record and more thorough documentation, whereas game monkey was initially designed specifically for games. After testing a "shallow" lua integration, I'm currently integrating squirrel - a language that embeds much like lua but with a nicer syntax.
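The core of such a console can be sketched in a few lines - this is a hypothetical minimal version, not the engine's actual implementation: command names map to handlers that receive the rest of the line as tokens, and the same Exec() path can drive both interactive input and key-binding config files.

```cpp
#include <functional>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// A minimal command console: names map to handlers.  Variables are
// "exposed" by registering a handler that parses and stores a value.
struct Console {
    std::map<std::string,
             std::function<void(const std::vector<std::string>&)>> commands;

    void Register(const std::string& name,
                  std::function<void(const std::vector<std::string>&)> fn) {
        commands[name] = fn;
    }
    // Parse "cmd arg0 arg1 ..." and dispatch; false on unknown command.
    bool Exec(const std::string& line) {
        std::istringstream ss(line);
        std::string cmd;
        if (!(ss >> cmd)) return false;
        std::vector<std::string> args;
        for (std::string tok; ss >> tok; ) args.push_back(tok);
        std::map<std::string,
                 std::function<void(const std::vector<std::string>&)>>::iterator
            it = commands.find(cmd);
        if (it == commands.end()) return false;
        it->second(args);
        return true;
    }
};
```

Exposing a member is then just registering a closure over it, e.g. `console.Register("gravity", ...)` with a handler that calls `std::stof` on the first argument.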
Why not start with an existing engine? - Given where we were, it was just easier, faster, and cheaper to roll our own. Furthermore, I wanted to experiment and try different game ideas that require some tech development.
But why reinvent the wheel? - Creating your own engine isn't so much a reinvention of the wheel as a re-assembly of the wheel. Think of it like IKEA furniture: you are assembling well-understood parts together. This is much different from killing an animal for its hide and chopping down trees for lumber to make your own furniture. In the domain of game engines, there is reference material, research papers, and low-level libraries for pretty much anything you need to do.
How much work is it to write a rendering engine? - There isn't a single answer to this. It takes more work as you add more features and optimizations. Getting single-pass rendering up and running is easy. Then you will want to add simple shadows, followed by accurate self-shadowing, and so on. After you add more passes to get multiple dynamic lights with multiple shadows, you will be tempted to start adding lots of lights into your scene. This will slow down your performance, so you will be prompted, as I was, to go back and add optimizations such as scissoring and culling into your code. So you can see how what was supposed to be a 2-day job can end up taking a few weeks.
What benefit is there to writing your own rendering code? - If you are building an application/game, you have many design decisions to make. Even if you are using off-the-shelf technology, you still need to research how your content is going to be rendered so that you know how to build that content. For example, will you need height and normal maps? By taking the time to study the latest rendering techniques, you will be in a better position to evaluate the various game engines and make the most of whatever you go with.
What about PRT lighting? - Our initial focus has been interior environments with dynamic lighting. Since we've been wanting to explore game ideas, we chose an approach that maximizes flexibility and allows as much of the content as possible to be dynamic (moving shadow-casting objects and moving lights). To put this in context, consider a different game project where the emphasis is on the visual quality of an exterior scene featuring subtle lighting effects and soft shadows. That suggests using a global illumination model and PRT lighting; the disadvantage is that the environment (and its lighting) must remain largely static. Nonetheless, we can add this tech when needed. While both are very important, right now we put more importance on making a fun game experience than on drawing a pretty picture.
Is Deferred Shading using multiple render targets the way to go? - If you look at the game developer forums, it seems that the jury is still out on this one. That is consistent with our findings. With respect to speed, we've found it faster in some situations, but not in others.
How do you write your own physics engine? - Compared to rendering, this requires more research and work to get even a basic 3D physics engine up and running. Read Baraff and Witkin's "Physically Based Modeling" for the underlying theory and math of rigidbody dynamics. Then read Erin Catto's GDC presentations for a brilliant yet simple explanation of the architecture of modern physics engines. When you design your physics engine, you should do it in a way that lets you swap it for a high-performance engine such as bullet or physx in the future. The developers of those systems have gone ahead with all those little ideas and optimizations that you may or may not think about but can't justify the time to do yourself. IMHO, going through the experience of developing your own physics code will make you a better user of another physics engine later.
Do the middleware physics engines limit what you can do? What about the geomod and physics point you mentioned? - I don't believe the standard physics SDKs limit what you can do, but sometimes it can take a bit of development work to adapt to the specifics of your application. For the specific usage of geomod (changing geometry), the modified data would have to be converted on-the-fly into a form that the physics engine would understand.
If a physics engine is too much work, then why not start with an existing SDK? - That is probably the more sensible choice; I'm not planning to stack boxes for a living. One difficulty is determining which physics SDK would be the right choice. I had assumed that the dust would settle and that there would be some updated APIs and/or further standardization across the industry. If you are on a big-budget project with a deadline, then you don't have the luxury of just waiting. In this case, picking bullet, physx, Havok, or another popular system would be a good choice.
What about usage of cloth? - In the past, cloth has more often been used as eyecandy instead of actually contributing to gameplay. However, systems (such as physx) that do two-way interaction between cloth and rigidbodies can make cloth contribute to the gamer's experience.
What about threading and parallel processing? - Rigid body physics has already been optimized by the various physics sdks. I don't intend to repeat this effort, but rather to switch to another physics solution - the choice of which depends on cost and target platform. However, the other unique technical features (geomod) used in this engine are excellent candidates for parallel processing on either multi-core or specialized processors. I'm ready to go down that road as soon as I get my hands on the necessary hardware. I'm just waiting for one of those multi-core mac pro computers to "fall off the back of a truck" and somehow wind up at my house.
Why do you have an FAQ section anyways? - So we can provide additional important information for you easily and incrementally. Writing "good" documents and essays can take a lot of time.
last updated s melax, sept 2007