Multidimensional Image Processing with Lidar Data for Segmentation and Classification Using Machine Learning and OpenCV

José Luis Domínguez
7 min read · Jul 16, 2023


LIDAR (light detection and ranging)

Background

State-of-the-art sensors now make it possible to monitor natural resources with high efficiency. A prime example is LIDAR (Light Detection And Ranging), which integrates GPS, an Inertial Measurement Unit (IMU), and a laser sensor to measure ranges at variable distances, collecting data that is projected as a three-dimensional point cloud (x, y, z).

These data define the terrain surface and are used to generate Digital Terrain Models (DTM) and Digital Surface Models (DSM), from which the Canopy Height Model (CHM) is derived: the residual height between the ground and the top of the objects above it (Figure 1).
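The CHM relationship can be sketched with a couple of toy rasters (the grid values here are invented for illustration; real DSM and DTM layers would come from the lidar survey):

```python
import numpy as np

# Toy 3x3 elevation rasters in metres (illustrative values only).
dsm = np.array([[12.0, 15.0, 11.0],
                [10.0, 18.0, 10.5],
                [ 9.0,  9.5,  9.0]])   # top of canopy / objects
dtm = np.array([[ 9.0,  9.5,  9.0],
                [ 8.5,  9.0,  8.5],
                [ 8.0,  8.5,  8.0]])   # bare ground

# CHM = DSM - DTM: residual height of objects above the ground.
chm = np.clip(dsm - dtm, 0, None)      # clamp small negative residuals to zero
print(chm.max())                       # tallest object in the toy grid -> 9.0
```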

Figure 1. Composition of a Lidar data model

Through these digital models, forest biometric studies can be carried out both at the individual-tree level (total height and crown height, among others) and at the stand level (volume, basal area, and biomass) (Figure 2).

Figure 2. Types of information extraction from a Lidar data model.

Given this context, we will apply data science to this type of multidimensional information to solve a problem of structural separation between different types of vegetation (Figure 3), in which the different (x, y, z) distributions cause segmentation conflicts between objects belonging to trees, shrubs, grassland, and bare soil, mainly due to the slope present in the study area (Figure 4).

Figure 3. Lidar elements with different (x, y, z) distributions due to the uneven slopes of the terrain on which they are located.
Figure 4. Lidar data distributions affected by terrain slopes.

Subsequently, a semantic segmentation of the image is performed to distinguish forest from non-forest cover. Finally, quantitative information will be extracted from the image through clustering and optimal-threshold filtering to separate the different elements contained in the CHM.

Methodology

Data

The data were collected by a LIDAR survey with an Unmanned Aerial Vehicle (UAV) over a small area of land. Because of the large differences in environments and tree growth, the strip width was set to 10 m to obtain a wider range of sample data, and the flight altitude was set to 80 m to obtain a fine point cloud (Figure 5). From this information the Canopy Height Model (CHM) is generated from the DTM and DSM.

Figure 5. Obtaining Lidar data with Drone.

Development

1- Big Data Processing.

A major problem with 3D point clouds is data density, which drives up the computational cost of processing and visualization. It is therefore necessary to apply subsampling methods that reduce the density of the point cloud. Some of the most representative are:

Random method: the simplest way to reduce data density; a specified number of points is selected at random.

Minimum distance method: point selection is constrained so that no point in the selected subset is closer to any other selected point than a specified minimum distance.

Voxel grid method: a grid structure is laid over the point cloud by dividing 3D space into regular cubic cells called voxels, and a single representative point is kept for each occupied voxel. For this method, the following must be considered:

A. Define the dimension of the bounding box (length, width and height of the voxel)
B. For each voxel, we check if it contains one or more points.
C. Finally, we calculate the representative of the voxel, in this case the centroid.
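The three steps above can be sketched in a few lines of NumPy; the function name `voxel_downsample`, the synthetic cloud, and the 1 m voxel size are illustrative assumptions, not part of the original workflow:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    # A. Assign each point to a cubic cell of side `voxel_size`.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # B. Find the occupied voxels and which points fall in each.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    # C. Centroid of each voxel = sum of its points / number of points.
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.uniform(0, 10, size=(5000, 3))   # synthetic 5000-point cloud
thin = voxel_downsample(cloud, voxel_size=1.0)
print(len(cloud), "->", len(thin))           # at most 10x10x10 = 1000 voxels
```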

With the data compressed by voxel subsampling, the objective now is to cleanly separate the lidar points belonging to the ground class from those belonging to the vegetation-plus-grassland class. For this we will apply clustering and segmentation techniques.

2- Separation of ground points by multidimensional grouping.

The first model we will implement is clustering of high-dimensional data. A cluster is intended to contain objects that are related through their (x, y, z) attribute values, so some attributes are likely to be correlated, and related subspaces may exist in arbitrarily oriented groups.

The concept of distance becomes less precise as the number of dimensions grows: the distances between pairs of points in a data set converge, so discriminating between the nearest and the farthest point loses meaning.
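This loss of contrast is easy to demonstrate with randomly generated points (a hypothetical stand-in for per-point attribute vectors): the farthest-to-nearest distance ratio collapses toward 1 as dimensionality grows.

```python
import numpy as np

def contrast(points):
    """Ratio of the farthest to the nearest distance from the first point."""
    d = np.linalg.norm(points[1:] - points[0], axis=1)
    return d.max() / d.min()

rng = np.random.default_rng(42)
low  = rng.uniform(size=(500, 3))     # 3-D, like raw (x, y, z) lidar points
high = rng.uniform(size=(500, 1000))  # many derived attributes per point

print(contrast(low), contrast(high))  # contrast shrinks as dimensions grow
```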

3- Separation of the ground points by one-dimensional grouping.

The second model is the Jenks natural breaks classification method. Starting from the Z value of the lidar data, it addresses the problem of dividing a range of numbers into contiguous classes so as to minimize the squared deviation within each class; that is, it groups the data so that within-class variance is minimized and between-class variance is maximized.

Class breaks are created in a way that best groups similar values and maximizes differences between classes. Features are divided into classes whose boundaries are set where there are relatively large differences in data values.
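For the two-class case (ground vs. vegetation), Jenks' criterion reduces to finding the single break that minimizes the summed within-class squared deviation. A brute-force sketch (the height values are invented for illustration):

```python
import numpy as np

def best_break(z):
    """Optimal two-class split of 1-D values, minimising the summed
    within-class squared deviation (Jenks' criterion for two classes)."""
    z = np.sort(np.asarray(z, dtype=float))
    best_sse, best_value = np.inf, None
    for i in range(1, len(z)):
        low, high = z[:i], z[i:]
        sse = ((low - low.mean())**2).sum() + ((high - high.mean())**2).sum()
        if sse < best_sse:
            best_sse, best_value = sse, z[i]  # break = first value of upper class
    return best_value

# Ground returns near 0 m, canopy returns near 12 m:
heights = [0.1, 0.3, 0.2, 0.5, 11.8, 12.4, 12.0, 11.5]
print(best_break(heights))  # -> 11.5
```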

4- Separation of the ground points by slopes calculated with K-MEANS.

A function is defined that filters the ground points of a point cloud using a simple algorithm. It computes the k nearest neighbors of each point, calculates the slope from each point to those neighbors, and filters the terrain points against a maximum slope threshold.

It is important to note that the filter function takes three arguments:
points: the input point cloud as a numeric array of shape (N, 3) representing (x, y, z) coordinates.
neighbors: the number of nearest neighbors to consider when calculating the slope of each point.
maximum slope: the maximum slope (in radians) allowed for a point to be considered terrain.
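A sketch of such a filter using a k-d tree; the function name, the synthetic cloud, and the 15-degree threshold are illustrative assumptions (note that, as described, the procedure is a k-nearest-neighbour slope test):

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_ground(points, neighbors=8, max_slope=np.radians(15)):
    """Keep points whose mean slope to their k nearest neighbours is
    below `max_slope` (radians); steep points belong to vegetation."""
    tree = cKDTree(points[:, :2])                 # neighbours in the XY plane
    dist, idx = tree.query(points[:, :2], k=neighbors + 1)
    dz = np.abs(points[idx[:, 1:], 2] - points[:, 2:3])    # height differences
    slope = np.arctan2(dz, np.maximum(dist[:, 1:], 1e-9))  # per-neighbour slope
    return points[slope.mean(axis=1) <= max_slope]

rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, size=(300, 2))
ground = np.column_stack([xy, 0.02 * xy[:, 0]])            # gentle 2 % slope
trees = np.array([[25.0, 25.0, 8.0], [10.0, 40.0, 6.0]])   # two tall returns
cloud = np.vstack([ground, trees])
print(len(filter_ground(cloud)))  # the two tall returns are rejected
```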

5- Separation of ground points by multidimensional image processing.

a- Gaussian filtering in CHM to smooth the data

LiDAR-derived Canopy Height Model (CHM) smoothing is used to remove false local maxima caused by tree branches. This will help ensure that we are finding the treetops correctly before running the watershed segmentation algorithm.
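A minimal sketch with SciPy (the synthetic crown and noise levels are invented): counting 5x5-window local maxima above 1 m before and after smoothing shows the false peaks disappearing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def n_local_maxima(img, height=1.0):
    """Count pixels that are the maximum of their 5x5 window and taller
    than `height` -- candidate treetops for the peak search."""
    return int(np.sum((img == maximum_filter(img, size=5)) & (img > height)))

rng = np.random.default_rng(2)
chm = np.zeros((60, 60))
chm[20:40, 20:40] = 10.0                  # one broad crown, 10 m tall
chm += rng.normal(0, 0.8, chm.shape)      # branch noise -> spurious maxima

smoothed = gaussian_filter(chm, sigma=2)  # suppress false maxima before segmentation
print(n_local_maxima(chm), "->", n_local_maxima(smoothed))
```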

b- Segmentation

Watershed segmentation treats the pixel values as a local topography (elevation): the algorithm floods basins from a set of markers until basins attributed to different markers meet along the watershed lines.

The maxima of the distance map (i.e., the minima of its negation) are chosen as markers, and flooding the basins from these markers defines the separation function along the watershed lines, achieving a clean segmentation of adjacent objects.
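A classic sketch of this with scikit-image (the two synthetic overlapping crowns are invented; on real data the inverted, smoothed CHM would play the role of `-distance`):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic canopy mask: two overlapping circular crowns.
yy, xx = np.mgrid[0:80, 0:80]
crowns = (((xx - 30)**2 + (yy - 40)**2 < 15**2) |
          ((xx - 52)**2 + (yy - 40)**2 < 15**2))

# Distance to the background: its maxima mark the treetops.
distance = ndi.distance_transform_edt(crowns)
coords = peak_local_max(distance, min_distance=10, labels=crowns)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# Flood the inverted distance map from the markers.
labels = watershed(-distance, markers, mask=crowns)
print(np.unique(labels))  # 0 = background, one label per crown
```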

c- Lowest point of the statistical grid.

With the segmentation carried out we can correctly obtain the lowest point of the 3D Lidar data, thus compensating for the different ground levels for the various contained objects.

To do this, a loop is run in which each point (x, y, z) is compared against a mask that labels it 1 if it differs from the minimum value obtained in the segmentation and 0 if it matches it. The loop continues until an attempt has been made to fill every value using adjacent data.
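A vectorised sketch of that fill loop (the grid, the `known` mask, and the choice of a 4-neighbour mean are illustrative assumptions):

```python
import numpy as np

def fill_from_neighbors(grid, known):
    """Iteratively assign each unknown cell the mean of its known
    4-neighbours until every reachable cell holds a value."""
    grid = np.where(known, grid.astype(float), np.nan)
    while np.isnan(grid).any():
        p = np.pad(grid, 1, constant_values=np.nan)
        stack = np.stack([p[:-2, 1:-1], p[2:, 1:-1],   # up, down
                          p[1:-1, :-2], p[1:-1, 2:]])  # left, right
        counts = np.sum(~np.isnan(stack), axis=0)
        means = np.where(counts > 0,
                         np.nansum(stack, axis=0) / np.maximum(counts, 1),
                         np.nan)
        fillable = np.isnan(grid) & (counts > 0)
        if not fillable.any():
            break                  # no adjacent data left to propagate
        grid[fillable] = means[fillable]
    return grid

# Segment minima are known only at a few cells of the statistical grid:
z_min = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 2.0],
                  [0.0, 3.0, 0.0]])
filled = fill_from_neighbors(z_min, known=z_min > 0)
print(filled)
```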

d- Separation by morphological operations.

Finally, to refine the classification of the lidar data, the (x, y, z) values of the image are processed morphologically based on their neighbors, reducing the computational cost of the final segmentation. Morphological dilation makes objects more visible and fills small holes in them, while morphological erosion removes floating and isolated points so that only the relevant objects remain. A weighted height difference is then calculated from the differences between neighboring values.
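The two effects can be seen on a made-up binary mask; this sketch uses SciPy's morphology operators, which are equivalent to OpenCV's `cv2.morphologyEx` with `MORPH_CLOSE` and `MORPH_OPEN`:

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 2:6] = True        # an object detected in the classified CHM
mask[3, 3] = False           # small hole inside the object
mask[8, 8] = True            # isolated floating point

filled = binary_closing(mask)      # dilation then erosion: fills small holes
cleaned = binary_opening(filled)   # erosion then dilation: drops lone points

print(filled[3, 3], cleaned[8, 8])  # hole filled, floating point removed
```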
