Principles of Photogrammetry and LiDAR

Remote sensing techniques make high-resolution terrain mapping easier and quicker.

By Jose Diego Monroy, CFM, and Shuhab Khan, Ph.D.

Terrain mapping measures the shapes and features of the terrain, which is important in fields such as land development, flood control and mitigation, landslide hazard identification and monitoring, project management, and obstruction identification. Higher-resolution, higher-accuracy, and faster terrain mapping helps the engineering community reduce the cost of generating a good 3D representation of terrain morphology.

Traditional topographic mapping relied mostly on total station surveys, which were slow, costly, and low in resolution. With the advancement of remote sensing technologies, especially photogrammetry and light detection and ranging (LiDAR), and the rise of unmanned aerial vehicles (UAVs) as sensor platforms, surveying is becoming easier, quicker, and higher in resolution. This article summarizes the basic principles of these remote sensing techniques, compares their capabilities and limitations, and proposes their broader application in terrain mapping.

Photogrammetry, the method most frequently used by drone operators, consists of making measurements from photographs, especially the exact locations of surface features. Stereo vision can be constructed when the same objects are observed from two viewing angles, much as human depth perception arises from two eyes. A line of sight is constructed from the center of the camera lens to the object; the intersection of these lines from multiple views (triangulation) reconstructs a 3D representation of the object.
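
As a minimal sketch of that triangulation step, the Python snippet below intersects two lines of sight by taking the midpoint of their closest approach, a common estimate because noisy rays rarely intersect exactly. The camera positions and target are hypothetical numbers chosen only for illustration.

    import numpy as np

    def triangulate(c1, d1, c2, d2):
        # Midpoint of the shortest segment between two lines of sight.
        d1 = d1 / np.linalg.norm(d1)             # unit direction, camera 1
        d2 = d2 / np.linalg.norm(d2)             # unit direction, camera 2
        b = c2 - c1
        # Least-squares ray parameters t1, t2 from the normal equations
        A = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
        p1, p2 = c1 + t1 * d1, c2 + t2 * d2      # closest point on each ray
        return (p1 + p2) / 2                     # triangulated 3D point

    # Two cameras 10 m apart, both sighting a target near (5, 0, 30):
    c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
    target = np.array([5.0, 0.0, 30.0])
    print(triangulate(c1, target - c1, c2, target - c2))   # ~ [5. 0. 30.]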

For example, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) onboard NASA’s Terra satellite has two telescopes in visible and near-infrared spectral range, one nadir-looking and one backward-looking (see Figure 1), thus allowing generation of digital elevation models (DEMs) for the entire globe, the ASTER GDEM (Hirano et al., 2003).

Many automated computer algorithms have been developed to identify common objects across multiple views, which facilitates image registration and depth recognition (Lucas and Kanade, 1981; Scharstein and Szeliski, 2002). This process is normally called “Structure from Motion” (SFM), as scene structure is derived from the movement of the camera. As the name implies, the camera must be moved to capture images from multiple observing angles, with enough overlap between images. Commonly, cameras are mounted on a moving vehicle and take photographs automatically to obtain multiple views.

Figure 2: Incorrect and correct ways to collect photographs for SFM in various scenarios, modified from Agisoft LLC (2017).

Since an SFM reconstruction has no inherent scale, objects with known dimensions in the imagery are used to provide it. Topographic surveyors commonly place large targets (ground control points, GCPs) on the ground and survey their geographic coordinates with RTK GPS or total stations. The targets are extracted from the images, and the associated geographic coordinates are used to georeference all images into a global reference frame. These targets normally have regular, easily identified geometry, such as squares with checkerboard patterns or circles with crosshairs.
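
Mathematically, georeferencing with GCPs amounts to estimating the scale, rotation, and translation that map the arbitrary SFM frame onto the surveyed coordinates. The Python sketch below uses the standard Umeyama/Procrustes least-squares solution; the GCP values are fabricated for illustration only.

    import numpy as np

    def gcp_georeference(model, world):
        # model, world: (N, 3) matched coordinates (SFM frame vs. survey)
        mu_m, mu_w = model.mean(axis=0), world.mean(axis=0)
        M, W = model - mu_m, world - mu_w
        cov = W.T @ M / len(model)                # cross-covariance
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                        # guard against reflection
        R = U @ S @ Vt                            # rotation
        s = np.trace(np.diag(D) @ S) * len(model) / (M ** 2).sum()  # scale
        t = mu_w - s * R @ mu_m                   # translation
        return s, R, t

    # Five synthetic GCPs: rotate 30 degrees, scale 2.5x, then shift.
    rng = np.random.default_rng(0)
    model = rng.random((5, 3))
    a = np.radians(30.0)
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    world = 2.5 * model @ R_true.T + np.array([100.0, 200.0, 10.0])
    s, R, t = gcp_georeference(model, world)
    print(round(s, 3), np.round(t, 2))            # 2.5 and [100. 200. 10.]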

The georeferenced point cloud is then used to create DEMs and an orthophoto/orthomosaic, a geometrically corrected image on which distances can be measured. Typically, the orthophoto has twice the spatial resolution (half the pixel size) of the DEM and carries color information, which aids interpretation.

LiDAR

LiDAR sends a light signal (typically a laser) toward the object and detects the reflected or scattered signal, or the induced fluorescence, from the target. LiDAR measures the range between the target and the sensor; these measurements across the target form a 3D point cloud.

Normally, LiDAR measures distance by time of flight (pulsed) or phase difference (continuous wave). A time-of-flight LiDAR sends out a high-energy laser pulse and detects the pulse reflected from the target. The range is the speed of light multiplied by half the total time of flight (to the target and back). Ranging precision is limited by the accuracy of the internal clock and can reach roughly the 10 cm scale.
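
The arithmetic is simple enough to show directly; the Python sketch below also illustrates why clock precision dominates ranging precision (the round-trip times are hypothetical).

    C = 299_792_458.0                       # speed of light, m/s

    def tof_range(round_trip_s):
        # Range = c times half the round-trip time of the pulse.
        return C * round_trip_s / 2

    # A pulse returning after about 667 ns means a ~100 m target:
    print(tof_range(667e-9))                # ~100.0 m
    # Each nanosecond of timing error maps to ~15 cm of range error,
    # which is why pulsed ranging precision sits near the 10 cm scale:
    print(tof_range(1e-9))                  # ~0.15 m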

A phase-difference LiDAR modulates a long-wavelength sinusoidal waveform onto a continuous, low-energy, short-wavelength carrier laser signal. Once the laser reflects from the target, the phase difference of the long-wavelength signal is measured, from which the range is computed. The distance can be measured very precisely (about 2 mm), but the maximum range is shorter than that of a time-of-flight LiDAR.
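
A minimal sketch of phase-difference ranging follows; the 10 MHz modulation frequency is a hypothetical value. Because phase wraps every cycle, the range is unambiguous only within half the modulation wavelength, which is the range limitation mentioned above.

    import math

    C = 299_792_458.0                          # speed of light, m/s

    def phase_range(phase_rad, mod_freq_hz):
        # Range from the phase shift of the modulation envelope;
        # unambiguous only within C / (2 * mod_freq_hz).
        wavelength = C / mod_freq_hz
        return (phase_rad / (2 * math.pi)) * wavelength / 2

    f_mod = 10e6                               # hypothetical 10 MHz
    print(C / (2 * f_mod))                     # unambiguous range: ~15 m
    print(phase_range(math.pi / 2, f_mod))     # quarter cycle -> ~3.75 m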

Within typical LiDAR instruments, a mirror redirects the laser toward the target. The mirror oscillates or rotates to sweep the laser across the target, and the mirror angles are recorded. With both angle and distance measurements, the location of the target relative to the LiDAR is computed. The location of the LiDAR instrument itself can be measured with a global navigation satellite system (GNSS), and the measurements can then be georeferenced into a global coordinate system.
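
Converting one scanner measurement into coordinates is a polar-to-Cartesian step. The sketch below assumes a simple cross-track scan in the y-z plane with the laser pointing at nadir when the scan angle is zero; the numbers are illustrative.

    import numpy as np

    def scan_to_xyz(range_m, scan_angle_rad, sensor_xyz):
        # One return: range plus mirror angle, relative to the sensor.
        offset = np.array([0.0,
                           range_m * np.sin(scan_angle_rad),    # cross-track
                           -range_m * np.cos(scan_angle_rad)])  # down
        return np.asarray(sensor_xyz) + offset

    # Sensor at 100 m altitude, mirror 10 degrees off nadir:
    print(scan_to_xyz(101.5, np.radians(10.0), [0.0, 0.0, 100.0]))
    # -> roughly [0, 17.6, 0.04]: a ground point 17.6 m to the side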

Typical LiDAR instruments make more than 100,000 measurements per second; GNSS can never reach such a high sampling rate. When LiDAR is mounted on a mobile platform for fast data collection, its location therefore cannot be measured quickly enough by GNSS alone. This problem is solved by integrating an inertial measurement unit (IMU), which measures angular orientation and acceleration at high frequency, with the GNSS to compute the LiDAR location. The GNSS/IMU integration is performed with an automated algorithm called the Kalman filter (Kalman, 1960; Schwarz et al., 1993), which blends predictions from the IMU with measurement updates from the GNSS to obtain optimal estimates of the coordinates.
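
The sketch below shows the blending idea on a single coordinate: an IMU-derived velocity propagates the position at every step, and a sparse GNSS fix pulls the estimate back in proportion to the Kalman gain. The rates and noise values are invented for illustration and are far simpler than a real navigation-grade filter.

    def kalman_1d(x, P, imu_velocity, dt, q, gnss=None, r=None):
        # Predict: propagate the state with the IMU; uncertainty grows.
        x = x + imu_velocity * dt
        P = P + q
        # Update: if a GNSS fix arrived, blend it in by its weight.
        if gnss is not None:
            K = P / (P + r)                  # Kalman gain
            x = x + K * (gnss - x)
            P = (1 - K) * P
        return x, P

    x, P = 0.0, 1.0                          # initial position, variance
    for step in range(1, 11):
        fix = 1.02 * step if step % 5 == 0 else None   # GNSS every 5 steps
        x, P = kalman_1d(x, P, imu_velocity=1.0, dt=1.0,
                         q=0.01, gnss=fix, r=0.25)
        print(f"t={step:2d}s  position={x:6.2f}  variance={P:.3f}")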

A laser is a coherent signal with a single direction and low divergence; however, the beam divergence cannot be infinitely low, and footprint diameters range from the centimeter to the decameter scale at typical scanning ranges. Thus, the laser signal can be reflected from multiple targets along its path within the footprint. If multiple return peaks can be recorded and resolved through waveform processing, multiple targets can be registered from a single laser pulse.
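
Two quick illustrations in Python: the footprint diameter grows roughly linearly with range (d ≈ divergence × range for small angles), and multiple returns can be separated by peak detection on the recorded waveform. The divergence, echo times, and noise level are synthetic values.

    import numpy as np
    from scipy.signal import find_peaks

    divergence = 0.5e-3                     # hypothetical 0.5 mrad beam
    for r in (50, 200, 1000):
        print(f"range {r:5d} m -> footprint ~ {divergence * r * 100:5.1f} cm")

    # Synthetic waveform: a canopy echo at 80 ns, a ground echo at 120 ns.
    t = np.linspace(0.0, 200.0, 2001)       # time, ns
    pulse = lambda t0, a: a * np.exp(-((t - t0) / 3.0) ** 2)
    wave = pulse(80, 1.0) + pulse(120, 0.6)
    wave += 0.02 * np.random.default_rng(0).random(t.size)  # detector noise
    peaks, _ = find_peaks(wave, height=0.2, distance=100)
    print("return times (ns):", t[peaks])   # ~ [80. 120.]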

The laser signal is attenuated by scattering as it travels through the air, and beam divergence reduces the intensity received by the detector. As a result, targets with low reflectivity or at long range become hard to detect because their returns are weaker. Phase-difference LiDARs emit continuous waves of relatively low energy and thus have shorter detection ranges, while time-of-flight LiDARs generate short but stronger pulses and detect farther targets.

UAVs

UAVs, in general, can be classified into fixed-wing aircraft, which use forward airspeed to generate lift, and rotorcraft, which use rotary wings. Both kinds have proliferated rapidly, but most commercial UAVs used in topographic mapping are multirotors, or multicopters, with four, six, or eight rotors (quadcopters, hexacopters, and octocopters). An advantage of multirotors is that the flight control mechanics are simple: motion control is achieved by varying each rotor’s speed, as sketched below.
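
The sketch shows a quadcopter “mixer” in Python: throttle, roll, pitch, and yaw commands map directly to four rotor speeds, with no control surfaces involved. The sign convention assumes a hypothetical X configuration and is illustrative only; real autopilots differ in detail.

    def quad_mix(throttle, roll, pitch, yaw):
        # Each rotor speed is the throttle plus or minus each command,
        # depending on which side of the frame the rotor sits on.
        return {
            "front_left":  throttle + roll + pitch - yaw,
            "front_right": throttle - roll + pitch + yaw,
            "rear_left":   throttle + roll - pitch + yaw,
            "rear_right":  throttle - roll - pitch - yaw,
        }

    # A small positive pitch command speeds up the front rotors,
    # tilting the craft so it translates (sign conventions vary):
    print(quad_mix(0.6, 0.0, 0.1, 0.0))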

UAVs can carry various payloads, including cameras, LiDAR, and other remote sensors that enable mapping and surveying. GNSS/IMU units can be integrated on UAVs; their recordings can be stored in photo EXIF data for extraction by SFM algorithms or passed to the LiDAR for georeferencing.

UAVs are capable of low-altitude, slow flight, which suits high-resolution, high-accuracy, and high-sensitivity mapping applications. Better remote sensing performance is normally achieved only with heavier instruments; for example, the higher-intensity laser pulses needed for longer range and better accuracy can only be generated by larger, heavier, higher-wattage laser sources. Higher weight inevitably reduces UAV flight time. Most commercially available UAV systems support 20 to 40 minutes of flight, depending on payload weight and propulsion.

Application of UAV photogrammetry and LiDAR has been booming (Goncalves and Henriques, 2015; Jaakkola et al., 2010; Wallace et al., 2012). Most of these applications include creation of a DEM and an orthomosaic image; sometimes vector 3D models, contour maps, and 3D point clouds are delivered to clients.

Comparisons and applications

Compared with traditional surveying methods, aerial photogrammetry allows fast, cheap, and high-resolution topographic mapping. Together with 3D point clouds, aerial photogrammetry provides high-resolution ortho-imagery, which helps data interpretation. Cameras are significantly cheaper than LiDAR and commonly weigh less, which makes UAV flights much easier.

Photogrammetry has its limitations. First, feature-matching algorithms such as the scale-invariant feature transform (SIFT) work well with targets that contrast strongly with their surroundings, so that they can be found automatically within the images. This makes it difficult to align photographs of homogeneous materials with low color contrast, surfaces with smooth texture, or moving objects; examples include large areas of bare ground or thick vegetation, or thin vegetation constantly blowing in the wind. Without easily identifiable targets, the photographs will be hard to align, resulting in low-accuracy alignments or misalignments.

Some UAVs have onboard GNSS/IMU so that the location and attitude of the camera can be embedded in the photos’ EXIF information, reducing the risk of misalignment. However, the performance of the low-cost GNSS/IMU units on typical UAV platforms is limited. GCPs can improve the accuracy of georeferencing but not the accuracy of image alignment.

Another issue with photogrammetry is its limited ability to sense the ground in vegetated areas. Common UAV photography reaches centimeter-level spatial resolution at flying heights of tens to hundreds of meters, and at that resolution there is very little chance of seeing pixels of the ground through the canopy. What’s more, a ground area must appear in multiple photographs for a point to be registered in an SFM point cloud, which greatly limits photogrammetry’s ability to sense the ground.
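
The ground sample distance (GSD) behind that centimeter-level figure follows from simple lens geometry: GSD = pixel size × height / focal length. The camera parameters below are hypothetical but typical of small mapping cameras.

    pixel_m, focal_m = 4.4e-6, 8.8e-3       # 4.4 um pixels, 8.8 mm lens

    def gsd_cm(height_m):
        # Ground footprint of one pixel at a given flying height.
        return pixel_m * height_m / focal_m * 100

    for h in (50, 120, 300):
        print(f"{h:4d} m AGL -> GSD ~ {gsd_cm(h):4.1f} cm/pixel")
    # 50 m -> 2.5 cm, 120 m -> 6.0 cm, 300 m -> 15.0 cm: centimeter-level
    # pixels, but each ground point must still appear in several photos.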

A possible alternative is to use oblique views in addition to the nadir view, but this complicates flight planning: obstruction from a tree must be compensated by multiple views at different angles, and these views must be set up for every tree, which may not be regularly distributed across a site. What’s more, oblique views do not work in densely vegetated areas, since the line of sight is blocked by nearby canopy.

The third issue with photogrammetry is the relatively long data processing time, as both SIFT and multi-view stereo algorithms place enormous demands on computer processing, storage, and time.

LiDAR, on the other hand, has no issue with misalignment. Since each measurement carries its own georeferencing information, LiDAR works well in areas without significant texture contrast. Second, LiDAR can sense the ground through canopy because of its multi-target capability. Both the laser footprint and the photograph’s ground resolution are at the centimeter scale, but laser beams have a better chance of reaching the ground than photos. The portion of the laser footprint that avoids obstruction by the canopy and reaches the ground can be much smaller than the size defined by the beam divergence, whereas the ground visible within a pixel must cover significantly more of that pixel than the leaves do for the pixel to be regarded as ground rather than leaf.

This multi-target capability opens laser scanning to forested areas: signals from leaves, branches, trunks, and the ground can all be collected, allowing estimation of biomass. Third, because it relies on detection and ranging rather than triangulation, LiDAR provides direct 3D point clouds without the time-consuming processing of photographs.

LiDAR, as an active remote sensing technique, is independent of natural radiation sources, allowing wide application at various times of day and in various weather conditions. With the high intensity and a priori knowledge of the stimulating signal, sensitivity to background noise is reduced, and target properties such as lithology (Franceschi et al., 2009; Hartzell et al., 2014) and vegetation indices (Li et al., 2014) can be delineated. With a low-divergence, single-direction, single-phase, single-frequency light source, high range and spatial resolution are achieved.

LiDAR has its own disadvantages. First, the instrument cost is significantly greater than that of photogrammetry: the laser source and detector, timing electronics, mirror and its motor, and goniometer are all costly compared with photogrammetric equipment. Second, LiDAR is heavier, consumes more electricity, and requires a larger UAV to carry the payload. A UAV with a camera onboard can weigh less than 1.5 kg, while the lightest LiDAR with adequate accuracy and ranging ability weighs about 1.55 kg by itself, not counting the UAV and battery packs. Surveyors must carefully weigh the tradeoffs among cost, accuracy requirements, and technical performance.

With these remote sensing techniques onboard a UAV, surveyors can map faster and more easily, including inaccessible areas such as swamps, steep slopes, vertical cliffs, and areas with safety hazards. This expands the capabilities of topographic mapping to include frequent revisiting, monitoring, and change detection, allowing better management of project progression and assessment.


References

  • Agisoft LLC, 2017, Agisoft PhotoScan User Manual-Professional Edition, Version 1.3.
  • Franceschi, M., Teza, G., Preto, N., Pesci, A., Galgaro, A., Girardi, S., 2009, Discrimination between marls and limestones using intensity data from terrestrial laser scanner, ISPRS J. Photogramm. Remote Sens. 64, p. 522-528.
  • Goncalves, J.A., Henriques, R., 2015, UAV photogrammetry for topographic monitoring of coastal areas. ISPRS J. Photogramm. Remote Sens. 104, p. 101-111.
  • Hartzell, P., Glennie, C., Biber, K., Khan, S., 2014, Application of multispectral LiDAR to automated virtual outcrop geology, ISPRS J. Photogramm. Remote Sens. 88, p. 147-155.
  • Hirano, A., Welch, R., Lang, H., 2003, Mapping from ASTER stereo image data: DEM validation and accuracy assessment, ISPRS J. Photogramm. Remote Sens. 57, p. 356-370.
  • Jaakkola, A., Hyyppä, J., Kukko, A., Yu, X., Kaartinen, H., Lehtomäki, M., Lin, Y., 2010, A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements, ISPRS J. Photogramm. Remote Sens. 65, p. 514-522.
  • Kalman, R.E., 1960, A New Approach to Linear Filtering and Prediction Problems, J. Basic Eng. 82, p. 35-45.
  • Li, W., Sun, G., Niu, Z., Gao, S., Qiao, H., 2014, Estimation of leaf biochemical content using a novel hyperspectral full-waveform LiDAR system, Remote Sens. Lett. 5, p. 693-702.
  • Lucas, B.D., Kanade, T., 1981, An iterative image registration technique with an application to stereo vision, Proceedings DARPA Image Understanding Workshop, Vancouver, BC, Canada, p. 121-130.
  • Scharstein, D., Szeliski, R., 2002, A taxonomy and evaluation of dense two-frame stereo correspondence algorithms, Int. J. Comput. Vis. 47, p. 7-42.
  • Schwarz, K.P., Chapman, M., Cannon, M.W., Gong, P., 1993, An Integrated INS/GPS Approach to the Georeferencing of Remotely Sensed Data, Photogramm. Eng. Remote Sens. 59, p. 1667-1674.
  • Wallace, L., Lucieer, A., Watson, C., Turner, D., 2012, Development of a UAV-LiDAR system with application to forest inventory, Remote Sens. 4, p. 1519-1543.

Jose Diego Monroy, CFM, civil engineering manager, Dally + Associates, Inc. (www.dalllyassociates.com) has performed civil engineering, design-build services, flood protection, construction management, and project management throughout California, Arizona, New Mexico, and Texas and has worked on a variety of land development projects. Shuhab Khan, Ph.D., professor and graduate advisor, Geosciences, University of Texas, Dallas (www.utdallas.edu), uses quantitative remote sensing and geophysical tools for tectonic studies. His research involves field observations, geomorphic and structural measurements, and application of LiDAR, satellite radar interferometry (InSAR), GPS, and geochemistry to a wide variety of earth science problems.