The Central Ohio Gamedev Group Message Board › Questions about Blitters and Engines
A former member:
So I have this project in mind, and I know just enough about programming to entertain friends at dinner parties; they are much smarter than me and enjoy my fumbling around.
When I started this venture I contemplated a game I play, OpenTTD, as a platform to emulate (since I wanted the application to run on older machines without powerful GPUs). From searching the forums I found that it is "a CPU blitter running a node based terrain engine using square tiles." For the longest time I thought a blitter was part of the source code, but I have come to understand it is how the hardware manages data. And, to be honest, I still have no idea what the experts are saying.
Which brings me to the node-based terrain engine. A Google search gives me a dead end (http://forums.cgsocie...), but from everything I read it seems that all terrain engines begin with nodes. Is this a true observation?
So my understanding is that my engine first creates a grid of nodes, on which I place a heightmap, on top of which go the textures. Am I on the right track?
Which raises another question: when I start Google Earth it is a globe, and I can zoom into street level and clearly see my house. But with some applications the terrain gets blurry at street level. What causes this? Was it a decision by the programmer not to have another level of detail?
I believe that is enough for now. I am sure any answers to these questions will raise more.
A blitter is a hardware circuit that is used to efficiently copy blocks of data from one location in memory to another, and it can do it while the processor is doing other things. It is a very useful component for rendering computer graphics, because rendering involves a lot of copying of blocks of data (e.g., bitmap textures).
In the context in which you saw the phrase "CPU blitter", it is a bit of a misnomer. I suspect the writer of that phrase meant to indicate an optimized software routine that uses standard CPU instructions to copy blocks of data for rendering graphics. Typically, software blitter routines are much slower than dedicated hardware blitters.
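To make that concrete, here is what a minimal software blit looks like in Python. Everything here (the function name, the flat-bytearray pixel storage, 8-bit pixels, row-major layout) is an assumption for illustration, not any particular engine's API; the point is just that a blit is a row-by-row block copy.

```python
def blit(src, src_w, dst, dst_w, src_x, src_y, dst_x, dst_y, w, h):
    """Copy a w x h block of 8-bit pixels from src into dst.

    src and dst are flat, row-major bytearrays; src_w and dst_w are the
    pixel widths (row strides) of the two surfaces.
    """
    for row in range(h):
        s = (src_y + row) * src_w + src_x   # start of this source row
        d = (dst_y + row) * dst_w + dst_x   # start of this destination row
        dst[d:d + w] = src[s:s + w]         # one contiguous row copy
```

A hardware blitter does essentially this same loop in silicon, often with extra features (color keying, raster operations), while the CPU is free to do other work.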
For the past decade, virtually all computers (even smartphones and tablets) have a GPU that (mostly) obviates the need to write software blitters. So, I would not worry too much about the details of blitters, since the operation of blitters is abstracted by both the GPU and the graphics API that you use (e.g., OpenGL, or DirectX).
RE: "node based terrain engine"
Not all terrain engines "begin with nodes", but many of them do create a tree-like data structure (e.g., quadtree) where each leaf node is a tile of the terrain. A tile is a portion of the entire terrain. Most terrain engines make their tiles either squares, rectangles, or hexagons.
As long as your terrain doesn't have caves or overhangs, then yes, a height map is appropriate for encoding terrain topology. Typically, a terrain tile references a portion of the terrain height map (for geometry) and a set of one or more texture maps (for materials). These maps may or may not be pre-divided into tiles; it depends on the engine and the needs of the application.
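As a sketch of the "tile references a portion of the height map" idea, here is a hypothetical helper that slices one tile's worth of heights out of a single flat, row-major height map. The names and layout are invented for illustration; real engines often store tiles pre-divided instead.

```python
def tile_heights(heightmap, map_w, tile_x, tile_y, tile_size):
    """Return the tile_size x tile_size block of heights for tile (tile_x, tile_y).

    heightmap is a flat, row-major list of height values for the whole
    terrain; map_w is the terrain's width in samples.
    """
    x0, y0 = tile_x * tile_size, tile_y * tile_size
    return [heightmap[(y0 + dy) * map_w + x0:(y0 + dy) * map_w + x0 + tile_size]
            for dy in range(tile_size)]
```

The texture maps for a tile can be referenced the same way: by offset and size into a shared map, rather than by copying.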
The reason a tree structure is often used is to optimize visibility determination of tiles. If only a subset of tiles are visible at a time, then you want to quickly determine which tiles are visible, and tree structures help you do that.
In the simplest tree-based terrain engine, each non-leaf node in the tree contains (or implies) information about the cumulative bounds of all of its child nodes. Thus the root node tells you how big the entire terrain is. In order to determine which tiles are visible, you recursively test nodes, starting at the root, to see if the node is at least partially visible. If a node is visible, then recursively test its child nodes. If a node is not visible, then you can skip that entire branch of nodes.
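That recursive test can be sketched in a few lines of Python. The dict-based nodes and 2D rectangle bounds here are illustrative assumptions (a real engine would test 3D bounding boxes against a view frustum), but the traversal logic is the same: reject a branch as soon as its bounds miss the view.

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rects are (min_x, min_y, max_x, max_y)."""
    return a[0] < b[2] and a[2] > b[0] and a[1] < b[3] and a[3] > b[1]

def visible_tiles(node, view, out):
    """Collect the tiles of all leaf nodes whose bounds overlap the view rect."""
    if not overlaps(node["bounds"], view):
        return                       # skip this entire branch
    if "children" not in node:
        out.append(node["tile"])     # leaf: this tile is at least partly visible
        return
    for child in node["children"]:
        visible_tiles(child, view, out)
```

With a quadtree, each non-visible interior node you reject prunes a quarter of the remaining terrain in one test, which is where the speedup over testing every tile comes from.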
Some terrain engines don't bother with a tree structure. Instead, they simply use a grid of tiles. This is appropriate when all of the tiles are always visible from the camera. There are ways to optimize flat grid visibility determination, but tree structures seem to be more commonly used.
Google Maps and Google Earth have many levels of detail. They use a system where the tree of nodes has tile data (texture map and height map) attached to each node, not just the leaf nodes. When traversing the tree for visibility determination, the recursive tests stop at a level that is appropriate for the client's rendering capability and memory restrictions. This makes it possible to render the entire Earth, or just your neighborhood block, with the same apparent detail and the same amount of memory on the client's computer.

The servers that host Google Maps and Earth have access to all of the nodes on their disk storage, and they use much more memory than the clients do, but even they don't keep all of the node data in memory at the same time (that's too much data). Instead, node data is read from disk storage only as necessary. Also, some of the non-visible node data is cached, both locally on the client computers and on the host servers. But that's just an optimization to minimize the perceived lag of streaming in new nodes, which is a slow operation.
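The "stop descending at an appropriate level" idea can be sketched like this. The node fields and the distance-based threshold are invented for illustration (production engines typically use a screen-space error metric instead), but the shape of the traversal is the same: render a coarse node when it is detailed enough for the current view, otherwise recurse into its finer children.

```python
def select_nodes(node, camera_dist, out):
    """Pick the nodes whose tiles should be rendered for this viewing distance.

    A node is 'detailed enough' when its world-space size is small relative
    to how far away the camera is (the 0.5 factor is an assumed threshold);
    otherwise we descend to its children, if it has any.
    """
    detailed_enough = node["size"] <= camera_dist * 0.5
    if detailed_enough or "children" not in node:
        out.append(node["id"])       # render this node's texture/height tile
        return
    for child in node["children"]:
        select_nodes(child, camera_dist, out)
```

Zoomed all the way out, the traversal stops near the root and renders a few coarse tiles; zoomed in, it descends to a handful of fine tiles. Either way, roughly the same number of tiles is in memory.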
Other terrain viewing applications may have fewer levels of detail (i.e., fewer nodes in the tree), and so you may see less detail (i.e., it gets blurry) when zoomed in all the way. There are many reasons why an application has fewer levels of detail. Storage requirements are one such reason: every additional level of detail you provide will quadruple the amount of data you need to store. Another reason one map application may have less detail than another is licensing or monetary cost. Google has purchased terrain data from several sources. Other map providers may not have bought the most detailed level of terrain data.
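The quadrupling is easy to see with a quick back-of-the-envelope calculation: halving the tile edge length at each level means four child tiles per parent, so level k of a quadtree holds 4**k tiles.

```python
# Tiles per quadtree level: each level quadruples the previous one.
tiles_per_level = [4 ** k for k in range(5)]   # levels 0..4
total = sum(tiles_per_level)                   # all tiles across 5 levels
```

So the finest level alone holds more data than all of the coarser levels combined, which is why every extra level of detail is expensive to store and serve.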
Lastly, tree structures don't have to be explicitly defined with nodes pointing to other nodes. I recently worked on a terrain and vegetation rendering system that used an implicit quadtree. Each node was stored in a linear array and arranged according to Morton code (a.k.a. Z-curve order). This was helpful for performing hardware instancing of vegetation. It also avoids the costly operation of following memory pointers to traverse the tree.
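A Morton code just interleaves the bits of a tile's x and y coordinates, so the four children of any quadtree node end up adjacent in the linear array. Here is a common bit-interleaving implementation for 16-bit coordinates (the function name is my own; the bit-spreading trick itself is standard):

```python
def morton2(x, y):
    """Interleave the bits of 16-bit x and y into a single Z-order index.

    x's bits land in the even bit positions of the result, y's in the odd
    positions, so (x, y) -> index follows the Z-curve through the grid.
    """
    def spread(v):
        # Spread the 16 bits of v out to the even bit positions of a 32-bit word.
        v &= 0xFFFF
        v = (v | (v << 8)) & 0x00FF00FF
        v = (v | (v << 4)) & 0x0F0F0F0F
        v = (v | (v << 2)) & 0x33333333
        v = (v | (v << 1)) & 0x55555555
        return v
    return spread(x) | (spread(y) << 1)
```

Storing node data in an array sorted by this index is what makes the quadtree implicit: a node's children and parent can be found by index arithmetic alone, with no pointers to chase.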
A former member:
Thank you for your reply. It will take a few days of contemplation before I fully understand what you are trying to teach me. But, I will be diligent.
I will be honest and admit I am still trying to understand the lingo of terrain engines. So if I misuse a term, forgive me; as my education develops I will learn the proper use of terms.
As I want this application to be usable by the widest possible audience, and as my son does not have a very powerful GPU (he has an NVIDIA GeForce 6100 nForce 405 video card), it is my understanding that utilizing the CPU is the best method.
As for the terrain itself, since the game is based on the Earth, I do need to start with a heightmap and data from NASA or USGS. But as I do not want the typical ground clutter one gets from Google Earth, I believe the terrain and "ground/street" level will need to be procedurally built, which I am still trying to figure out is even possible.
Furthermore, since this is geared toward the 3 to 10 year old range, I do not believe graphic quality is a critical concern.
Finally, I look forward to the Saturday meeting. I probably should invest in a laptop before then; hopefully that will help me express my thoughts, ideas, confusion, etc. more clearly.
I hope to see you there.