DumDum v1.4 Experiment

Goal For v1.4, I want to make all image layers 'drape' onto a single terrain layer. This allows low-resolution layers to take advantage of the terrain model, and it also allows multiple 'terrain-mapped' high-resolution image layers to be composited on top of each other. I also want to extend this by letting the user specify opacity and Z-order for image layers via the GUI. This will also make dynamic shading possible, since there will be only a single planet surface to render.
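The opacity/Z-order compositing described above boils down to painter's-algorithm blending: sort layers back-to-front by Z and alpha-blend each one over the running result. Here is a minimal sketch of that idea; the function name, the per-layer dict fields, and single-pixel colors are purely illustrative and not part of the WorldWind API.

```python
# Hypothetical sketch: blend image layers back-to-front ("painter's
# algorithm") with a user-assigned opacity and Z-order per layer.
# Colors are (r, g, b) tuples in [0, 1]; real code would do this per texel
# (or let the GPU do it via alpha blending).

def composite(base, layers):
    out = base
    # Lower Z is drawn first, so higher-Z layers end up on top.
    for layer in sorted(layers, key=lambda l: l["z"]):
        a = layer["opacity"]
        out = tuple(a * c + (1.0 - a) * o
                    for c, o in zip(layer["color"], out))
    return out
```

For example, a 50%-opaque white layer over black yields mid-gray, and a fully opaque layer hides everything beneath it regardless of what was drawn earlier.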

Challenges As I see it, the main challenge for v1.4 is 'lapping' textures of different geographic sizes onto a set of 'Terrain Tiles' of various resolutions in a dynamic and efficient manner.
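The first piece of that 'lapping' problem is just tile math: given a source image's geographic bounding box, find every tile of a given size it overlaps. A minimal sketch, assuming a simple global grid of equal-angle tiles anchored at (-180, -90); the function name and grid layout are illustrative, not WorldWind's actual scheme.

```python
import math

# Hypothetical sketch: compute the (row, col) indices of all fixed-size
# tiles that a source image's lat/lon bounding box overlaps.
# Tiles are assumed to form a regular grid starting at lon -180, lat -90.

def overlapping_tiles(west, south, east, north, tile_size_degrees):
    col0 = int(math.floor((west + 180.0) / tile_size_degrees))
    col1 = int(math.floor((east + 180.0) / tile_size_degrees))
    row0 = int(math.floor((south + 90.0) / tile_size_degrees))
    row1 = int(math.floor((north + 90.0) / tile_size_degrees))
    return [(r, c)
            for r in range(row0, row1 + 1)
            for c in range(col0, col1 + 1)]
```

Run per tile level (with the tile size halving each level), this tells you exactly which SurfaceTiles each source image has to be chopped into, which is where the "dozens of smaller images" below come from.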

Where am I at? I've been taking things step by step. First I created a new class called SurfaceRenderer, which is instantiated in the WorldWindow.World class. SurfaceRenderer contains the familiar Update(DrawArgs drawArgs) and Render(DrawArgs drawArgs) functions, and is used in much the same way as a RenderableObject. SurfaceRenderer doesn't implement the IRenderable interface yet, but it probably could.

SurfaceRenderer is actually a lot like QuadTileSet. It contains a set of "top-most" SurfaceTile objects, except that each SurfaceTile contains multiple textures instead of a single texture like QuadTile does. I've been able to make each SurfaceTile efficiently query the TerrainAccessor (which also required some tweaking...) to initially grab terrain information that is already in memory, and then grab the "better" terrain data once it loads into memory. Basically, if the terrain data isn't in memory, the TerrainAccessor returns the best it has, then goes to 'fetch' the better data and loads it into memory when it can.
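The best-available-then-fetch behavior described above can be sketched as a small cache that walks down the resolution levels and queues a background request whenever it has to hand back something coarser than what was asked for. This is a hedged illustration only; the class name matches the text, but the method names, cache keys, and queue are assumptions, not the real TerrainAccessor API.

```python
# Hypothetical sketch of the TerrainAccessor behavior described above:
# immediately return the best elevation data already in memory, and queue
# a fetch for the higher-resolution data so it can be used once loaded.

class TerrainAccessor:
    def __init__(self):
        self.cache = {}    # (tile, level) -> elevation data
        self.pending = []  # fetch requests, drained by a worker elsewhere

    def get_best_available(self, tile, target_level):
        # Walk from the requested level down to the coarsest one.
        for level in range(target_level, -1, -1):
            if (tile, level) in self.cache:
                if level < target_level:
                    # Serve the coarse data now, fetch the better data later.
                    self.pending.append((tile, target_level))
                return self.cache[(tile, level)], level
        # Nothing in memory at all: caller gets no terrain for now.
        self.pending.append((tile, target_level))
        return None, -1
```

The SurfaceTile would then re-query (or be notified) when the pending fetch completes, swapping in the higher-resolution mesh.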

The next step, and the most challenging one, is to dynamically load, chop, and resample 'source images' so that they can be composited onto the new SurfaceTiles. Right now I'm trying to use .NET GDI+ for this process, but it's not very fast. Technically, each 'source image' only needs to be processed once, since we can save the resulting textures for future use, but even that first processing step could split a single 'source image' into dozens of smaller images at various resolutions. I'm looking for a way to do this with texture manipulation alone.
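The chop-and-resample step itself is conceptually simple: for each texel of a tile's texture, find the geographic coordinate it covers and sample the source image there (nearest-neighbor here, for brevity). A minimal sketch, using a plain 2D list as the "image"; in WorldWind this would be GDI+ Bitmap operations or, ideally, the texture-only path mentioned above, and every name below is illustrative.

```python
import math

# Hypothetical sketch: extract the portion of a source image that falls
# inside one tile's geographic bounds and resample it to the tile's
# texture size using nearest-neighbor sampling.
#
# image        : 2D list of pixels, row 0 at the image's northern edge
# img_west/north, deg_per_px : source image georeferencing
# tile_west/north, tile_deg  : the (square) tile's geographic bounds
# tex_size     : output texture width/height in texels

def chop_and_resample(image, img_west, img_north, deg_per_px,
                      tile_west, tile_north, tile_deg, tex_size):
    out = []
    for ty in range(tex_size):
        row = []
        for tx in range(tex_size):
            # Geographic center of this output texel.
            lon = tile_west + (tx + 0.5) * tile_deg / tex_size
            lat = tile_north - (ty + 0.5) * tile_deg / tex_size
            # Nearest source pixel (floor, so negative lons don't wrap to 0).
            sx = int(math.floor((lon - img_west) / deg_per_px))
            sy = int(math.floor((img_north - lat) / deg_per_px))
            if 0 <= sy < len(image) and 0 <= sx < len(image[0]):
                row.append(image[sy][sx])
            else:
                row.append(None)  # outside the source image: transparent
        out.append(row)
    return out
```

Running this once per (tile, source image) pair and caching the results matches the "process once, save for future use" idea; the `None` texels are where a tile would fall back to whatever layer sits beneath it.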

When and where can I see this Black Magic? Check the CVS under the branch WORLDWIND_1_4_DDH_20050321

Screen Shot 01

By the way, please feel free to comment on this page; feedback is good.

Multi-texture Links (for GFX cards that can do it)


Multi-texturing isn't the issue; the issue is creating the textures that will be mapped onto the SurfaceRenderer tiles. Although I could use multi-texturing once the textures are created, I might fall back to multipass rendering for tiles with many textures, or for machines whose video cards don't support multi-texturing. [CM]