Surface Mesh Primitive
The surface mesh is similar to the triangle mesh, but it only renders meshes on the surface of a globe. Arbitrary meshes, or meshes at altitude, cannot be rendered with the surface mesh. The main advantage of the surface mesh over the triangle mesh is that it conforms to terrain, as shown below.
In addition to conforming to terrain, the surface mesh can have a texture applied to it as shown below.
Given the advantages of the surface mesh, you may wonder why you would ever use the generic triangle mesh to render meshes on the surface. There are two reasons:
- The surface mesh requires a video card and drivers that support OpenGL 2.0. This has been available since 2004, but you may still want to call SurfaceMeshPrimitive.Supported before assuming the surface mesh is supported, as sketched after this list.
- Performance. Although the surface mesh is highly optimized, including the use of geometry shaders when run on Shader Model 4 video cards, it is still slower than the triangle mesh. Performance tests showed the triangle mesh to be 55-75% faster than the surface mesh on a GeForce 8800 GTX.
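A minimal support check might look like the following sketch; the fallback branch is only illustrative:

```csharp
// Check for surface mesh support before creating the primitive,
// using SurfaceMeshPrimitive.Supported as described above.
if (SurfaceMeshPrimitive.Supported)
{
    SurfaceMeshPrimitive mesh = new SurfaceMeshPrimitive();
    // ... configure and add the mesh as shown in the examples below ...
}
else
{
    // Fall back to the generic triangle mesh, or skip the primitive entirely.
}
```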
The following example from the GraphicsHowTo shows how to apply a texture to a surface mesh.
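A sketch of those steps is shown below; the extent values and texture URI are placeholders, and the exact overloads may differ from the GraphicsHowTo listing:

```csharp
// Triangulate a cartographic extent (west, south, east, north, in radians) on the surface.
CentralBody earth = CentralBodiesFacet.GetFromContext().Earth;
CartographicExtent extent = new CartographicExtent(
    Trig.DegreesToRadians(-80.0), Trig.DegreesToRadians(35.0),
    Trig.DegreesToRadians(-75.0), Trig.DegreesToRadians(40.0));
var triangles = SurfaceExtentTriangulator.Compute(earth, extent);

// Create a texture from an image file, just as a texture is created for markers.
Texture2D texture = SceneManager.Textures.FromUri("MyTexture.png");

// Define the surface mesh from the triangulation, apply the texture, and add it to the scene.
SurfaceMeshPrimitive mesh = new SurfaceMeshPrimitive();
mesh.Texture = texture;
mesh.Set(triangles);
SceneManager.Primitives.Add(mesh);
```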
The extent triangulator is used to compute a mesh on the surface. As shown in the previous two examples, both the extent and polygon triangulators can be used as input to the surface mesh. The extruded polyline triangulator cannot be used since it does not produce meshes on the surface. A texture is created, in the same way it is created for markers, and passed along with the triangulator to define the surface mesh. The texture in this example has an alpha channel, which is why the entire extent is not filled.
The surface mesh provides Set to allow dynamic updates.
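For example, continuing the previous sketch, the mesh could be retriangulated for a new extent and updated in place (newExtent here is a hypothetical second CartographicExtent):

```csharp
// Recompute the surface triangulation and update the existing primitive in place.
var newTriangles = SurfaceExtentTriangulator.Compute(earth, newExtent);
mesh.Set(newTriangles);
```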
Texture Matrix (Advanced)
When applied to a surface mesh, texture edges align to latitudinal and longitudinal lines where the top edge is north. The texture extends to the bounding extent of the surface mesh. The texture can be scaled, translated, and rotated by assigning a TextureMatrix to the surface mesh. Performing transformations like these over time can create effects like water movement.
When a surface mesh is rendered, texels in the texture are mapped to pixels on the screen. Texture coordinates are used to look up texels. A texture lies on a plane, and its coordinates are in the range [0, 1] in both the u and v directions as shown below.
This is texture space. The texture repeats for values outside the range. The u and v texture coordinates are multiplied by the 4x4 texture matrix prior to the lookup. Modifying this texture matrix changes the way texels are mapped to pixels.
If no texture matrix is specified, or the default texture matrix (the identity matrix) is assigned, the u and v coordinates are unchanged.
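For example, a texture matrix built from a default .NET Matrix (System.Drawing.Drawing2D.Matrix, whose default value is the identity) leaves the coordinates unchanged. In this sketch and the ones that follow, the TextureMatrix constructor taking a .NET Matrix is assumed from the description in this topic:

```csharp
// System.Drawing.Drawing2D.Matrix defaults to the identity matrix,
// so u and v are passed through unchanged.
Matrix identity = new Matrix();
mesh.TextureMatrix = new TextureMatrix(identity);
```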
Use a scaling matrix to tile the texture. The first example below scales the texture coordinates so that they fall within [0, 2], causing four copies of the texture to be rendered. In both examples, the .NET Matrix type is used to simplify matrix construction. In the second example, the texture coordinates are scaled so that they only range within [0, 0.5], so only one quarter of the texture is rendered.
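A sketch of the first case; scaling u and v by two tiles the texture twice in each direction:

```csharp
// Scale the texture coordinates by 2 in u and v. The coordinates now span [0, 2],
// and because the texture repeats, four copies are rendered across the mesh.
Matrix scale = new Matrix();
scale.Scale(2.0f, 2.0f);
mesh.TextureMatrix = new TextureMatrix(scale);
```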
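And the second case; scaling by one half keeps the coordinates within [0, 0.5], so a single quarter of the texture covers the mesh:

```csharp
// Scale the texture coordinates by 0.5; only one quarter of the texture
// is sampled and stretched over the entire mesh.
Matrix scale = new Matrix();
scale.Scale(0.5f, 0.5f);
mesh.TextureMatrix = new TextureMatrix(scale);
```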
The following example constructs a translation matrix so that the texture coordinates fall within [0.5, 1.5]. Translation can be used to create texture animation effects.
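A sketch of the translation:

```csharp
// Translate the texture coordinates by (0.5, 0.5) so they fall within [0.5, 1.5].
// Updating this translation each frame produces scrolling effects such as water movement.
Matrix translation = new Matrix();
translation.Translate(0.5f, 0.5f);
mesh.TextureMatrix = new TextureMatrix(translation);
```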
This example constructs a rotation matrix that rotates the texture coordinates 30 degrees counter-clockwise from the north.
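A sketch of the rotation; whether a positive angle appears clockwise or counter-clockwise on the globe depends on the orientation of the v axis, so the sign may need to be flipped:

```csharp
// Rotate the texture coordinates by 30 degrees.
Matrix rotation = new Matrix();
rotation.Rotate(30.0f);
mesh.TextureMatrix = new TextureMatrix(rotation);
```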
This example translates the texture coordinates by (-0.5, -0.5), then rotates them -30 degrees, and finally translates them by (0.5, 0.5). This rotates the texture around its center. Note that, in code, the transformations are specified in the opposite order from which they are applied.
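A sketch; the .NET Matrix prepends new transforms by default, which is why the calls appear in the reverse of the order in which they are applied to a coordinate:

```csharp
// Applied to a texture coordinate, this is: translate (-0.5, -0.5),
// rotate -30 degrees, translate (0.5, 0.5), which rotates the texture about its center.
Matrix rotateAboutCenter = new Matrix();
rotateAboutCenter.Translate(0.5f, 0.5f);
rotateAboutCenter.Rotate(-30.0f);
rotateAboutCenter.Translate(-0.5f, -0.5f);
mesh.TextureMatrix = new TextureMatrix(rotateAboutCenter);
```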
A more interesting use of the texture matrix maps the four corners of a texture to new corners. A common example involves mapping an image taken from a UAV's camera onto the Earth when the cartographic coordinate of each corner of the image is known. If such an image were added as a texture to a surface mesh that was a rectangle initialized from the image's latitude and longitude bounding extent, the following results:
This fails to map the image correctly. The cartographic coordinate of each corner is not mapped to the correct location on the Earth. A better approach is to compute a texture matrix that maps each image corner to the correct location. Such a matrix results in the following:
This more accurately shows where the image actually extends. It is obvious that the camera took the image from the southwest. The example code from the GraphicsHowTo follows:
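The GraphicsHowTo listing is not reproduced here; the sketch below shows one way such a matrix could be built. The corner coordinates are normalized into their bounding extent, the projective mapping that sends the texture corners (0,0), (1,0), (1,1), (0,1) to those normalized positions is computed, and its inverse (up to scale) is packed into a 4x4 matrix that operates on (s, t, r, q) texture coordinates. ComputeCornerMappingElements is a hypothetical helper, and this assumes the texture matrix supports the projective q coordinate and that v = 0 lies on the southern edge:

```csharp
// One possible corner-mapping construction (a sketch, not the GraphicsHowTo listing).
// lon/lat hold the image's four cartographic corners, in the order that corresponds
// to texture coordinates (0,0), (1,0), (1,1), (0,1). Degenerate quads are not handled.
public static double[] ComputeCornerMappingElements(double[] lon, double[] lat)
{
    // Bounding extent of the four corners; the surface mesh rectangle is assumed to
    // cover this extent, so its generated texture coordinates are the normalized
    // position within it.
    double west = Math.Min(Math.Min(lon[0], lon[1]), Math.Min(lon[2], lon[3]));
    double east = Math.Max(Math.Max(lon[0], lon[1]), Math.Max(lon[2], lon[3]));
    double south = Math.Min(Math.Min(lat[0], lat[1]), Math.Min(lat[2], lat[3]));
    double north = Math.Max(Math.Max(lat[0], lat[1]), Math.Max(lat[2], lat[3]));

    double[] x = new double[4];
    double[] y = new double[4];
    for (int i = 0; i < 4; ++i)
    {
        x[i] = (lon[i] - west) / (east - west);
        y[i] = (lat[i] - south) / (north - south);
    }

    // Projective "unit square to quad" mapping (Heckbert's construction) sending
    // (0,0), (1,0), (1,1), (0,1) to the normalized corner positions.
    double sx = x[0] - x[1] + x[2] - x[3];
    double sy = y[0] - y[1] + y[2] - y[3];
    double dx1 = x[1] - x[2], dy1 = y[1] - y[2];
    double dx2 = x[3] - x[2], dy2 = y[3] - y[2];
    double det = dx1 * dy2 - dy1 * dx2;
    double g = (sx * dy2 - sy * dx2) / det;
    double h = (dx1 * sy - dy1 * sx) / det;
    double a = x[1] - x[0] + g * x[1];
    double b = x[3] - x[0] + h * x[3];
    double c = x[0];
    double d = y[1] - y[0] + g * y[1];
    double e = y[3] - y[0] + h * y[3];
    double f = y[0];

    // The texture matrix needs the inverse mapping (normalized extent coordinates to
    // image coordinates). The adjugate is the inverse up to scale, which is all a
    // projective mapping requires.
    double[,] inv =
    {
        { e - f * h,     c * h - b,     b * f - c * e },
        { f * g - d,     a - c * g,     c * d - a * f },
        { d * h - e * g, b * g - a * h, a * e - b * d },
    };

    // Pack the 3x3 projective mapping into a 4x4 texture matrix (row-major) that
    // leaves the r coordinate untouched and writes the projective term into q.
    return new double[]
    {
        inv[0, 0], inv[0, 1], 0.0, inv[0, 2],
        inv[1, 0], inv[1, 1], 0.0, inv[1, 2],
        0.0,       0.0,       1.0, 0.0,
        inv[2, 0], inv[2, 1], 0.0, inv[2, 2],
    };
}
```

The result would then be assigned to the surface mesh; how the 16 elements are handed to TextureMatrix depends on the API, and the element-array constructor shown here is hypothetical:

```csharp
// cornerLongitudes and cornerLatitudes are the image's known corner coordinates.
double[] elements = ComputeCornerMappingElements(cornerLongitudes, cornerLatitudes);
mesh.TextureMatrix = new TextureMatrix(elements);
```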
While far better than the original, this mapping is not perfect; it does not account for terrain or the curvature of the Earth. The pixels at the corners are correct, but the other pixels are not. The image below shows the remapped texture (outlined in black) on top of terrain and the correctly georegistered Bing Maps for Enterprise imagery. Note that the edges do not line up. The farther a pixel is from the camera location, the farther off it is. This error is reduced for geographically smaller images.
Despite this issue, this method is often the best that can be done if you only have the cartographic coordinates of the corners. If you have the position, attitude, and field of view of the camera, the image can be georegistered more accurately. See the Raster and Projection Streams Overview for more information.