Light detection and ranging (LiDAR) sensors are a common source of optical information for remote sensing payloads in scanning and surveying applications. A LiDAR payload emits pulses of laser light and measures their reflections to determine the relative distance to the point each pulse reflected from. When these pulses backscatter (reflect at an angle of 180 degrees) to the sensor, many payloads use an inertial navigation system (INS) to timestamp and georeference the acquired data points. Compiled together, these individual data points, paired with point cloud software, streamline the process of analyzing structures and ground planes.
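The ranging principle described above can be sketched in a few lines: the pulse's round-trip travel time is converted to a one-way distance using the speed of light. This is a minimal illustration, not any particular payload's firmware; the function name and the example timing value are assumptions.

```python
# Hypothetical sketch of LiDAR time-of-flight ranging.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second, in vacuum

def pulse_range(round_trip_s: float) -> float:
    """Distance to the reflecting point.

    The pulse travels out to the target and back, so the one-way
    range is half the total distance light covers in round_trip_s.
    """
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A return arriving roughly 667 nanoseconds after emission
# corresponds to a target about 100 meters away.
distance_m = pulse_range(667e-9)
```

In practice a payload applies this conversion to every timestamped return, which is why the INS timestamp matters: it ties each computed range to the sensor's position and orientation at the instant of measurement.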
A point cloud is created by scanning an area with a 3D laser scanner. The scan is then imported into post-processing software (unless the desired accuracy is obtained in real time), where errors are removed. After the data are processed, modeling software is used, where the clouds can either be georeferenced to a ground plane or manipulated locally without a reference. From there, point clouds are exported into computer-aided design (CAD) or building information modeling (BIM) systems, where they can be manipulated further, generating meshes and applying boundary conditions to produce accurate and realistic 3D models (1). Even with this explanation, a point cloud can be hard to picture for anyone unfamiliar with the technology. When the user performs a scan, the laser scanner sends out beams of light in many different directions. As these beams are reflected back to the scanner, the system uses a datalogger to record the reflected positions as localized vectors. A scan file may contain as few as thousands of logged vectors or as many as millions, if not billions, depending on the scanning project at hand. These 3D vectors are then used by the post-processing software to generate a visualized point cloud.
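The step from logged vectors to a visualized cloud can be illustrated with a simple coordinate conversion: each logged measurement (a range plus the beam's direction angles) becomes one Cartesian point. This is a hedged sketch under the assumption that the datalogger records range, azimuth, and elevation per return; real scanner formats and any sensor-to-world transforms vary by system, and the names here are illustrative.

```python
import math

def vector_to_point(range_m: float, azimuth_deg: float,
                    elevation_deg: float) -> tuple:
    """Convert one logged range/angle measurement into a 3D point.

    Standard spherical-to-Cartesian conversion: azimuth rotates in the
    horizontal plane, elevation tilts up from it.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A toy "scan file" of three logged vectors (range, azimuth, elevation).
log = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (10.0, 0.0, 90.0)]

# The point cloud is just this conversion applied to every logged vector.
cloud = [vector_to_point(*v) for v in log]
```

A production scan applies the same per-point conversion millions or billions of times, which is why post-processing software, rather than the logger itself, typically assembles and renders the cloud.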