Large-scale terrain mapping projects pose great challenges for visualization. The increasingly popular LiDAR technology produces dense point cloud representations of terrain by scanning it with a laser from aerial platforms. The resulting data usually consists of several million points, with joined datasets of entire regions reaching billions of 3D coordinates.
There are several approaches to displaying LiDAR data on computers. The simplest is to draw a pixel for each point. The drawback is that in closer views, gaps between the points appear and objects become “transparent”. A bit more advanced are distance-attenuated point sprites: pixels enlarged according to their distance from the camera. This improves close-up views, but produces surfaces resembling fish scales when neighbouring points have strongly different colors, and the surfaces flicker as the camera rotates because the sprites stay oriented toward the screen. An alternative to points are triangles, and much software offers an option to convert the points into triangular meshes. These produce hole-free surfaces that stay fixed in 3D space even when the camera moves. They do, however, have drawbacks of their own in areas of vegetation. An advantage of LiDAR scanning is that it can penetrate forest canopies and also capture the ground beneath; such a point cloud is difficult to triangulate properly.
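The distance attenuation mentioned above can be illustrated with a minimal sketch. The function name, parameters, and clamping limits below are illustrative assumptions, not part of any particular renderer's API:

```python
# Hypothetical sketch of distance-attenuated point sprite sizing:
# the on-screen size shrinks with distance from the camera and is
# clamped to a sensible pixel range.
def sprite_size(base_size_px, distance, attenuation=1.0,
                min_px=1.0, max_px=32.0):
    size = base_size_px / (attenuation * distance)
    return max(min_px, min(size, max_px))
```

In practice, such a formula runs per point in a vertex shader; the clamping prevents sprites from vanishing in the distance or covering the whole screen up close.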
Instead of these methods, our visualization prototype uses advanced point-based rendering. The data is preprocessed to estimate the local shape of the terrain at each point, and this information is stored as point normals. Using them in custom shader programs on the GPU, the points are then displayed as oriented circles (commonly called splats), sized according to the local point density. By blending the overlapping splats, the ground terrain is visualized as a smooth continuous surface, similar to triangular meshes, while vegetation is also displayed appropriately (the splats are oriented in various directions and resemble leaves).
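A standard way to estimate such point normals, which we sketch here under the assumption that the preprocessing uses a PCA-style neighbourhood fit (the function name and neighbourhood size are illustrative), is to take the eigenvector of the local covariance matrix with the smallest eigenvalue:

```python
import numpy as np

def estimate_normal(points, idx, k=8):
    """Estimate a surface normal at points[idx] from its k nearest
    neighbours via PCA: the eigenvector of the neighbourhood's
    covariance matrix with the smallest eigenvalue approximates
    the direction of least spread, i.e. the normal."""
    dists = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(dists)[:k]]          # k nearest neighbours
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)    # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    return eigvecs[:, 0]                          # smallest-eigenvalue vector
```

For ground points the neighbourhood is nearly planar and the normal is well defined; for vegetation the fit is noisy, which is what produces the variously oriented, leaf-like splats described above.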
Common commercial software visualizes LiDAR data at reduced density to enable views of larger areas, reserving full density for single-image rendering or close-up views. We convert the data into an efficient, non-redundant quadtree structure that enables real-time level-of-detail visualization, adapting to camera movement while retaining smooth frame rates. The data can also be stored remotely and/or compressed for efficient storage and retrieval.
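The idea of a non-redundant level-of-detail quadtree can be sketched as follows. This is a simplified illustration with assumed names and a toy node capacity, not the prototype's actual data structure: each point is stored at exactly one node, so traversing only the upper levels yields a coarse but unduplicated subset of the cloud:

```python
class QuadtreeNode:
    """Non-redundant quadtree sketch: each point lives at exactly one
    node, so collecting nodes down to a chosen depth gives a
    level-of-detail subset without duplicating any data."""
    CAPACITY = 4  # points kept at this level; the rest spill to children

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.points = []
        self.children = None

    def insert(self, p):
        if len(self.points) < self.CAPACITY:
            self.points.append(p)
            return
        half = self.size / 2
        if self.children is None:
            self.children = [QuadtreeNode(self.x + dx * half,
                                          self.y + dy * half, half)
                             for dy in (0, 1) for dx in (0, 1)]
        cx = 1 if p[0] >= self.x + half else 0
        cy = 1 if p[1] >= self.y + half else 0
        self.children[cy * 2 + cx].insert(p)

    def collect(self, max_depth, depth=0):
        """Gather points down to max_depth, a coarse LOD view."""
        pts = list(self.points)
        if self.children and depth < max_depth:
            for c in self.children:
                pts += c.collect(max_depth, depth + 1)
        return pts
```

During rendering, the traversal depth would be chosen per node from its projected screen size, refining near the camera and coarsening in the distance.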
We have developed an additional method of combined point-triangle visualization. During preprocessing, we determine which points belong to smooth surfaces and store this classification. During real-time visualization, these surface points can be triangulated at lower density while retaining high image quality by using textures generated from the original point-based visualization. With the exception of the surface classification, all of this can be done on the fly, yielding frame rates twice as high as with basic point-based rendering.
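One common way to decide whether a point belongs to a smooth surface, sketched here as an assumption about the preprocessing step (the function name and threshold are illustrative), is to test how planar its neighbourhood is:

```python
import numpy as np

def is_surface_point(neighbours, flatness_threshold=0.01):
    """Classify a point as lying on a smooth surface if its local
    neighbourhood is nearly planar: the smallest eigenvalue of the
    covariance matrix is a small fraction of the total variance.
    Vegetation neighbourhoods spread in all directions and fail
    this test."""
    cov = np.cov((neighbours - neighbours.mean(axis=0)).T)
    eigvals = np.linalg.eigvalsh(cov)  # ascending order
    return eigvals[0] / eigvals.sum() < flatness_threshold
```

Points passing the test can be handed to the low-density triangulation, while the remainder (vegetation, edges) stay splatted.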