Creating virtual twins of real structures to enrich our inspections

Introducing drones to industrial inspections

The main objective of an industrial inspection is to get an accurate overview of the state of a piece of equipment or infrastructure. The first step of the inspection is to use sensors to perform raw measurements on the whole structure. To ensure a given resolution and quality of the records with a limited number of sensors, you need to move them around the surface of your asset, which is why drones are a very good solution for inspections. They are stable, they can easily access every area of the structure in 3D space, and they collect data without being intrusive.

That being said, conducting a drone inspection is not entirely straightforward, as several requirements need to be fulfilled:

  • Coverage requirements: you don’t want to miss an area
  • Resolution requirements: you want to spot defects of a given size
  • Safety requirements: you don’t want to crash the drone or damage the infrastructure

With this in mind, manual inspection is not an ideal option, and it can even seem like you need a very sophisticated drone to deal with those constraints. At Sterblue, we believe that cameras and off-the-shelf drones are sufficient to carry out an inspection. We have proved that Sterblue is able to fly accurate trajectories and to extract rich information from the pictures without the need for lidar, for instance. Instead, our trajectories rely on a parameterized 3D model that acts as a virtual twin of the asset being inspected.

Figure 1 - Sterblue's inspection workflow based on the 3D model.

Once the drone, or basically anything else, has performed the measurements, all the hard work remains to be done. What makes things difficult is the tremendous number of pictures you get from an industrial inspection. For instance, Sterblue collects around 10,000 pictures during a cooling tower inspection. You need to find out whether something is wrong in each record: missing part, erosion, corrosion, etc. We tackle this repetitive task with both human experts and computer vision models. The final part of the inspection process is to aggregate the annotated records into a concise yet exhaustive report. We will show that, here again, our virtual twin proves very helpful.

Designing our virtual twins

Sterblue aims to give its customers a simple yet reliable representation of their assets. We believe that the best model is not the most exhaustive but the most pragmatic, in terms of the number of parts, geometry and degrees of freedom. We use our operational expertise to determine the elements and areas of the asset that really matter and create one parameterizable generic 3D model for each vertical. This way, you can easily qualify any structure you inspect, you generalize better across different structures, and you are able to compare them on a common basis.

So far, Sterblue has been able to describe and inspect 120 different wind turbine models, with diameters ranging from 60 to 120 m (about 200 to 390 ft), using our 48-parameter generic model (Figure 2).
Figure 2 - The wind turbine generic model we designed.
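As a rough illustration of what such a parameterized generic model can look like in code, here is a hypothetical sketch: the field names below are invented for the example and cover only a small subset of the 48 actual parameters, but they show the split between dimensions fixed by the manufacturer and degrees of freedom measured on site. A specific turbine is simply one point in the parameter space of the generic model.

```python
from dataclasses import dataclass

# Hypothetical sketch of a parameterized generic model (illustrative names,
# not Sterblue's actual schema): some dimensions are fixed by the manufacturer,
# others are degrees of freedom estimated during calibration.
@dataclass
class WindTurbineModel:
    # Referenced, immutable dimensions (assumed perfectly known)
    hub_height_m: float
    rotor_diameter_m: float
    tower_base_diameter_m: float
    # Degrees of freedom that change over time (estimated on site)
    yaw_deg: float = 0.0
    rotor_angle_deg: float = 0.0
    blade_flap_bend_m: float = 0.0

    def blade_tip_height_m(self) -> float:
        """Derived quantity, useful e.g. for safety checks on the trajectory."""
        return self.hub_height_m + self.rotor_diameter_m / 2.0

# A specific turbine is just one point in this parameter space.
turbine = WindTurbineModel(hub_height_m=90.0, rotor_diameter_m=110.0,
                           tower_base_diameter_m=4.2, yaw_deg=35.0)
print(turbine.blade_tip_height_m())  # 145.0
```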

The models we describe are standard 3D meshes, that is to say, collections of points and triangular faces in space, which combine the actual 3D representation of the structure with a 2D mapping of its surface. For a single theoretical position and configuration of the drone relative to the inspected asset, Sterblue takes advantage of the power of 3D rendering engines to extract a lot of useful quantitative information about what is seen through the camera (Figure 3). We describe several use cases of those 3D models below.

Figure 3 - Different rendering modes of an inspection picture; the caption below each picture shows the information encoded by the colors. From a single image our rendering engine can tell what is seen and how it is seen.
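To give an intuition of how a rendering step can tell what is seen through the camera, here is a minimal sketch based on a simple pinhole projection, assuming points sampled on the model's surface and a known camera pose; all numbers are illustrative, and a real rendering engine additionally handles occlusion, per-pixel depth and the 2D surface mapping.

```python
import numpy as np

# Minimal sketch (not Sterblue's actual engine): a pinhole camera projects points
# sampled on the 3D model into the image; points that land inside the frame are
# "seen", and their depth tells at what distance (hence resolution) they are seen.
def project_points(points_world, cam_pos, cam_rot, fx, fy, cx, cy, width, height):
    # cam_rot is the 3x3 world-to-camera rotation; camera z points forward.
    pts_cam = (cam_rot @ (points_world - cam_pos).T).T
    depth = pts_cam[:, 2]
    in_front = depth > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        u = fx * pts_cam[:, 0] / depth + cx   # pinhole projection
        v = fy * pts_cam[:, 1] / depth + cy
    in_frame = in_front & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return in_frame, depth

# Toy example: points sampled on a tall structure, camera 30 m away looking at it.
points = np.random.uniform([-5, -5, 0], [5, 5, 100], size=(1000, 3))
cam_rot = np.array([[0.0, 1.0, 0.0],    # camera x-axis = world +y
                    [0.0, 0.0, -1.0],   # camera y-axis = world -z (image "down")
                    [-1.0, 0.0, 0.0]])  # camera z-axis = world -x (looking at the asset)
seen, depth = project_points(points, np.array([30.0, 0.0, 50.0]), cam_rot,
                             fx=2000, fy=2000, cx=2000, cy=1500, width=4000, height=3000)
print(f"{seen.sum()} points visible, mean viewing distance {depth[seen].mean():.1f} m")
```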

Performing accurate inspections in the field

From a generic 3D model to a virtual twin: the calibration step

Our virtual twins can be seen as a collection of parameters and degrees of freedom that have a certain probability of being correct (Figure 4). Some of the variables Sterblue uses to parameterize the model are referenced, immutable dimensions of the structure, like the height and the top, bottom and neck diameters of a hyperbolic cooling tower; they are assumed to be perfectly known. Other variables are either unknown to the manufacturer or subject to change over time, like the rotation, yaw and blade bend of a wind turbine, so we cannot know them before going into the field.

Prior to the inspection, a calibration step lets Sterblue optimize the unknown parameters to obtain the 3D model that best matches the structure the drone actually sees.
Figure 4 - Effect of the variation of 3 parameters on the wind turbine's blade. From top to bottom, variation of the drag-wise bend, lift-wise bend and twist. From left to right, increasing strength of the parameter.
Figure 5 - Description of a model throughout the calibration process.
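Conceptually, the calibration boils down to a least-squares problem: tune the unknown degrees of freedom until the parameterized model best explains points observed on the real structure. The sketch below illustrates that idea on a toy blade model; the parameterization, observation model and numbers are all illustrative, not Sterblue's actual solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal calibration sketch (illustrative toy problem): the unknown degrees of
# freedom are tuned so that the parameterized model best explains points
# observed on the real structure during the calibration flight.
HUB = np.array([0.0, 0.0, 90.0])   # hub position, assumed known (m)
BLADE_LENGTH = 55.0                # assumed known from the turbine datasheet (m)

def blade_curve(yaw_rad, tip_bend_m, s):
    """Simplified blade axis in the horizontal plane, bent quadratically toward the tip."""
    local = np.stack([s, tip_bend_m * (s / BLADE_LENGTH) ** 2, np.zeros_like(s)], axis=1)
    c, k = np.cos(yaw_rad), np.sin(yaw_rad)
    rot = np.array([[c, -k, 0.0], [k, c, 0.0], [0.0, 0.0, 1.0]])  # rotation about the vertical axis
    return HUB + local @ rot.T

def residuals(params, s_obs, pts_obs):
    yaw_rad, tip_bend_m = params
    return (blade_curve(yaw_rad, tip_bend_m, s_obs) - pts_obs).ravel()

# Synthetic "observations": the real blade has a yaw of 0.6 rad and 2 m of tip bend.
s_obs = np.linspace(5.0, BLADE_LENGTH, 30)
pts_obs = blade_curve(0.6, 2.0, s_obs) + np.random.normal(0.0, 0.05, (30, 3))

fit = least_squares(residuals, x0=[0.0, 0.0], args=(s_obs, pts_obs))
print("estimated yaw (rad) and tip bend (m):", fit.x.round(3))   # close to [0.6, 2.0]
```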

A trajectory that fits the structure

The very first use case of our tuned virtual twin is trajectory computation. Sterblue designed Perception, a very generic engine that allows us to generate a trajectory around any 3D model. We build trajectories that strictly meet requirements on safety distance, resolution and overlap, and that target the most important areas of the structure, agreed upon with our customers. For instance, during the inspection of a wind turbine blade's leading edge, the trajectory follows the bend computed during the calibration process, so that no chunk of the most risk-exposed area is missed.
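As a back-of-the-envelope sketch of how such requirements shape a trajectory (the camera specifications and numbers below are illustrative, not a specific Sterblue setup): the required resolution fixes the maximum distance to the surface, and the picture footprint combined with the required overlap fixes the spacing between consecutive shots.

```python
import numpy as np

# Back-of-the-envelope sketch with illustrative camera values: the required
# resolution fixes the maximum distance to the surface, and the footprint plus
# the overlap fixes the spacing between consecutive shots.
SENSOR_WIDTH_MM = 13.2        # typical 1-inch sensor
FOCAL_MM = 8.8
IMAGE_WIDTH_PX = 5472
REQUIRED_GSD_MM = 2.0         # one pixel should cover at most 2 mm on the surface
OVERLAP = 0.7                 # 70 % overlap between consecutive pictures
SAFETY_DISTANCE_M = 5.0       # never fly closer than this to the structure

# Distance at which one pixel covers exactly REQUIRED_GSD_MM on the surface.
max_distance_m = REQUIRED_GSD_MM * FOCAL_MM * IMAGE_WIDTH_PX / SENSOR_WIDTH_MM / 1000
footprint_m = REQUIRED_GSD_MM * IMAGE_WIDTH_PX / 1000   # width seen on the surface
step_m = footprint_m * (1 - OVERLAP)                     # spacing between shots
assert max_distance_m >= SAFETY_DISTANCE_M, "resolution target conflicts with safety distance"
print(f"fly at most {max_distance_m:.1f} m from the surface, one picture every {step_m:.1f} m")

# Waypoints along a straight, simplified 55 m blade, offset by the flight distance.
blade_root, blade_tip = np.array([0.0, 0.0, 90.0]), np.array([55.0, 0.0, 90.0])
offset_dir = np.array([0.0, 1.0, 0.0])                   # direction away from the surface
n_shots = int(np.ceil(np.linalg.norm(blade_tip - blade_root) / step_m)) + 1
waypoints = [blade_root + t * (blade_tip - blade_root) + max_distance_m * offset_dir
             for t in np.linspace(0.0, 1.0, n_shots)]
```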

The key aspect of Perception is its modularity, which allows us to adapt to different requirements on the same structure, or simply to inspect a new, unknown structure. In particular, this allows us to seamlessly perform unique inspections of specific structures, like an iconic pyramid that Sterblue will be inspecting soon...
Figure 6 - Our generic trajectory applied to a pyramid model. The trajectory fulfils the requirements on resolution and overlap between pictures. For this project, the drone will fly as close as 3 m (10 ft) from the surface.
Figure 7 - Display of an inspection trajectory around a cooling tower, on Sterblue cloud platform.

Post-processing our inspection outputs

The raw output of a regular inspection, that is, pictures with annotations made either automatically by AI or by experts, is far from actionable. At most, it points out defects that impair the use of the asset and require a manual operation on the structure to fix them. There are several key questions that still need to be answered:

  • Did I cover the whole structure? 
  • How well did I cover the structure?
  • What part of the structure is seen in each picture?
  • How many unique anomalies do I have on the structure?
  • Where is each anomaly located on the structure? 
  • What is the shape of the defect on the structure?

Here again our 3D models are a real asset that helps us answer those questions.

Detection merging and localization

As you don’t want to miss any area during the inspection, you may take more pictures than necessary, with a large overlap between subsequent frames. The first issue with this redundancy is that you might capture the same anomaly from different points of view without noticing that it is actually the same defect. Estimating the health status of the asset and the amount of maintenance work needed then becomes very imprecise. Another need from our customers is to accurately locate the detections on the structure, as the severity of an anomaly also depends on its location. You can obviously use the GPS coordinates associated with each picture to get an idea of where a defect is on the structure, but this remains very approximate and hard to automate.

Sterblue came up with an automatic pipeline that aggregates the detections on each picture to spot the unique anomalies that may be associated with several detections, locate them precisely on the structure and compute their shape. This process fuses all the information Sterblue acquired during the inspection: the structure's parameters obtained during calibration, the GPS records of each picture and the annotations made on those pictures. We take advantage of the redundancy between pictures to refine the position and shape of the anomalies.

Figure 8 - Merging of several detections into one anomaly.
Top: two detections of the same anomaly on a mission.
Bottom: 3D scene. The detections extend into cones (shown in red) in 3D that intersect into one solid anomaly (shown in yellow).
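A minimal sketch of the merging idea, under a simplifying assumption: each detection is reduced to a single viewing ray from the camera position through the annotated defect, and two detections of the same anomaly are merged at the point where their rays come closest, which localizes the anomaly in 3D. The actual pipeline works with cones and intersects them with the calibrated 3D model, as shown in Figure 8; the positions and rays below are invented toy values.

```python
import numpy as np

# Minimal sketch of detection merging: each detection defines a viewing ray from
# the camera position (GPS metadata) through the annotated defect; two detections
# of the same anomaly are merged at the point where their rays come closest.
def closest_point_between_rays(o1, d1, o2, d2):
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Minimize |(o1 + t1*d1) - (o2 + t2*d2)|^2 over t1, t2 (assumes non-parallel rays).
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return (p1 + p2) / 2.0, np.linalg.norm(p1 - p2)

# Two pictures of the same corrosion spot: camera positions from the pictures'
# GPS metadata, viewing rays derived from the annotation position in each image.
cam_a, ray_a = np.array([30.0, 0.0, 60.0]), np.array([-1.0, 0.05, 0.0])
cam_b, ray_b = np.array([28.0, 10.0, 62.0]), np.array([-0.9, -0.28, -0.07])
anomaly, gap = closest_point_between_rays(cam_a, ray_a, cam_b, ray_b)
print("anomaly located near", anomaly.round(1), "- residual gap", round(gap, 2), "m")
```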

Coverage information

Despite the efforts on overlap between pictures, you cannot know whether you missed anything just by looking at them. We are confident that the pictures produced by the inspection trajectories described above cover the whole surface, but, as a generic platform collecting inspection data, Sterblue also receives output from manual inspections that can be incomplete. We use the pictures themselves and the information carried in their metadata to reconstruct the inspection with the virtual twin of the structure.

A 3D scene is generated to automatically find out, for each picture, what part of the structure was covered. The aggregation of the per-picture results allows Sterblue to map the coverage of the structure (Figures 9 & 10). Since our workflow design is completely generic, this applies not only to any structure, but also to any kind of data we want to aggregate on the structure.

The easiest information to fuse, and the one proposed here, is the raw coverage of the structure (every part that is actually seen), but we can use smarter metrics to assess coverage, taking into account:

  • The angle of sight of the surface
  • The resolution of the surface
  • The quality of the focus on the picture
  • ...
Figure 9 - Coverage map on a cooling tower mission. The mission typically takes 2 to 6 hours to execute (depending on the resolution and the size of the asset). The colors encode the number of times each area was seen, on a linear scale from 0 (red) to 20 (green).
Figure 10 - Monitoring of the coverage map of Figure 9 during the inspection.
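A minimal sketch of the aggregation itself, assuming we already know for each picture which surface cells of the virtual twin it sees (for instance from a projection step like the one sketched earlier): the coverage map is a per-cell counter, optionally weighted by how well each cell is seen, here by the viewing angle. The cell counts and angles below are synthetic placeholders.

```python
import numpy as np

# Minimal coverage-aggregation sketch: for each picture we are given the surface
# cells it sees and the viewing angle on each cell; the coverage map is a
# per-cell counter weighted by how frontally each cell is seen.
n_cells = 5000                       # surface cells of the 3D model
coverage = np.zeros(n_cells)

def add_picture(coverage, seen_cells, viewing_angles_deg):
    # Weight each sighting by the cosine of the angle between the line of sight
    # and the surface normal: a grazing view counts less than a frontal one.
    weights = np.cos(np.radians(viewing_angles_deg)).clip(min=0.0)
    np.add.at(coverage, seen_cells, weights)
    return coverage

# Synthetic per-picture results for a 300-picture mission.
rng = np.random.default_rng(0)
for _ in range(300):
    seen = rng.choice(n_cells, size=400, replace=False)
    angles = rng.uniform(0, 70, size=400)
    coverage = add_picture(coverage, seen, angles)

print(f"{(coverage == 0).mean():.1%} of the surface never seen")
```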

Image stitching

To ease the work of the experts who review the raw pictures resulting from the inspection, we need to organize them well. The easiest coordinate for ordering pictures is time. However, this 1D chronological navigation becomes tedious when dealing with more than 500 pictures, which is very common in industrial inspection. The ideal output is to aggregate the pictures in a photogrammetry fashion, so that you get a 3D picture of your asset, free of redundancy and very straightforward to inspect. But photogrammetry doesn’t work for every surface. As it is based on matching keypoints across overlapping pictures, it needs heterogeneous surfaces with remarkable features. While the grainy concrete that cooling towers are made of is well suited for this, the smooth white surface of wind turbines prevents us from using photogrammetry.

However, Sterblue succeeded in using our prior virtual twin to simply place the images at the best position in 3D space, so that they stitch well. This way, we offer 3D navigation around the asset, allowing us to see the big picture and to zoom in to specific pictures when more detail is needed.

Figure 11 - Wind turbine inspection output after ordering the 900 pictures in 3D. Very low-resolution samples are shown in the scene to ease navigation.
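A minimal sketch of the placement idea, assuming each picture comes with a camera position and orientation from its metadata: the picture is anchored in the 3D scene at the point where its optical axis meets the virtual twin's surface, approximated here by a plane standing in for the blade; the poses and the plane are invented toy values.

```python
import numpy as np

# Minimal stitching sketch: instead of matching keypoints, each picture is
# anchored in 3D at the point where its optical axis meets the virtual twin's
# surface (approximated here by a plane), so pictures can be browsed spatially
# instead of chronologically.
def anchor_on_plane(cam_pos, view_dir, plane_point, plane_normal):
    """Intersection of the picture's optical axis with a planar surface patch."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    denom = view_dir @ plane_normal
    if abs(denom) < 1e-9:
        return None                       # looking parallel to the surface
    t = ((plane_point - cam_pos) @ plane_normal) / denom
    return cam_pos + t * view_dir if t > 0 else None

# The blade surface is approximated by a vertical plane; each picture's pose
# comes from its metadata (GPS position + gimbal orientation).
plane_point, plane_normal = np.array([0.0, 0.0, 90.0]), np.array([0.0, 1.0, 0.0])
pictures = [
    {"pos": np.array([5.0, 8.0, 95.0]), "dir": np.array([0.0, -1.0, 0.0])},
    {"pos": np.array([12.0, 8.0, 97.0]), "dir": np.array([0.1, -1.0, 0.05])},
]
anchors = [anchor_on_plane(p["pos"], p["dir"], plane_point, plane_normal)
           for p in pictures]
print([a.round(1) for a in anchors])   # where each picture is pinned in the scene
```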

Designing virtual twins is a key aspect of our inspection pipeline

These parameterizable 3D models not only make it possible to inspect infrastructure with off-the-shelf drones, but also allow us to aggregate complex and large data into actionable insights. The results presented here show part of the potential of this paradigm, and we are working on other features to make the most of it, like the use of photorealistic renderings to train our AI to automatically detect infrastructure during inspections (Figure 12). Once embedded in the drone, such models will enable real-time calibration of the structures and corrections of the trajectory.

Figure 12 - Some outputs of our photorealistic rendering engine for wind turbine blade detection.