In a point cloud, each point is represented by a color value and x, y, and z coordinates, which together create a 3D representation of the project area. While the point cloud may appear to be a continuous surface when zoomed out, zooming in reveals the spaces between points. The attributes of each point can be categorized and filtered in a variety of ways, allowing, for example, vegetation or structures to be toggled on and off, which makes the point cloud a robust tool for a wide range of real-world applications.
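The attribute-based filtering described above can be sketched in a few lines. This is a minimal illustration, not any particular software's API: the point record and the `toggle_off` helper are hypothetical, though the classification codes loosely follow the ASPRS LAS convention (2 = ground, 5 = high vegetation, 6 = building).

```python
from dataclasses import dataclass

@dataclass
class Point:
    # Coordinates, color, and a classification code per point.
    # Codes loosely follow the ASPRS LAS convention:
    # 2 = ground, 5 = high vegetation, 6 = building.
    x: float
    y: float
    z: float
    rgb: tuple
    classification: int

cloud = [
    Point(0.0, 0.0, 101.2, (120, 110, 90), 2),   # ground
    Point(0.5, 0.2, 105.8, (40, 90, 35), 5),     # vegetation
    Point(1.1, 0.9, 110.4, (180, 180, 175), 6),  # building
]

def toggle_off(points, codes):
    """Return the cloud with the given classification codes hidden."""
    return [p for p in points if p.classification not in codes]

# Toggle vegetation off, keeping ground and structures visible.
bare_earth_and_buildings = toggle_off(cloud, {5})
print([p.classification for p in bare_earth_and_buildings])  # → [2, 6]
```

Real point-cloud tools apply the same idea at scale, filtering millions of points by classification, return number, intensity, or color.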
Stereophotogrammetry, the traditional method of calculating 3D coordinates from two or more photographs, is greatly accelerated by Multi-View Stereo (MVS), which allows a Structure-from-Motion (SfM) system to create very dense point clouds. Historically, the use of stereo pairs (two photographs with high overlap taken from nearly the same location) allowed an operator to manually draw contours and break lines. The MVS method lets the computer calculate the position of each pixel by first determining a precise camera position and then analyzing how much each pixel moves between successive images. The more a pixel moves between successive photographs relative to the rest of the pixels in the image, the closer it is to the camera, which for downward-looking aerial imagery means the higher that point is. These pixel calculations are used to generate a point cloud, which benefits both from much more detailed elevation data (compared to contours) and from the automation of the process.
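The relationship between pixel movement and elevation can be made concrete with the standard depth-from-disparity formula for an idealized, rectified stereo pair, Z = f·B/d, where f is the focal length in pixels, B the baseline between the two camera positions, and d the disparity (how far a pixel shifts between the images). This is a simplified sketch with illustrative numbers, not the full MVS pipeline, which matches many pixels across many overlapping views.

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance from the camera (in metres) for an idealized
    rectified stereo pair: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

f = 4000.0   # focal length in pixels (illustrative)
B = 30.0     # baseline between camera positions in metres (illustrative)

# A pixel that shifts more between successive images is closer to
# the camera; for downward-looking aerial imagery, closer to the
# camera means higher above the ground.
ground = depth_from_disparity(f, B, 60.0)   # smaller shift → farther
treetop = depth_from_disparity(f, B, 80.0)  # larger shift → closer
print(ground, treetop)  # → 2000.0 1500.0
```

Running this for every matched pixel, across every overlapping image pair, is what lets MVS densify the sparse SfM solution into a point cloud far more detailed than hand-drawn contours.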