Machine learning is how vast quantities of Unmanned Aerial System (UAS) imagery data will be analyzed for precision agriculture and for oil, gas, and utility line inspections. The problem is that most commercial UAS imagery mosaic and analysis services used to generate analytic products for these industries introduce alterations and distortions to the data as part of the mosaic process, and these often hamper efforts to apply machine learning. To truly integrate machine learning analysis tools, you must first ensure the data going in is as pristine and undistorted as possible. Here are a few things to check to get in the best position for machine learning analysis.
We are specifically talking about the analysis of multi-layered multispectral imagery data, ranging from visible to near-infrared bands. We encountered numerous obstacles within our datasets during recent efforts to analyze ash trees in Denver, Colorado, as part of a city effort to track the spread of the Emerald Ash Borer (EAB). For our processing, all the bands are analyzed and must therefore be tightly registered (aligned) with each other as well as assembled into a final mosaic image.
Here’s where things begin to go wrong (remember this!): each time you manipulate images (e.g., image-to-image registration, image-set mosaicking), the processing changes the data, altering raw pixels in ways big and small. Here are a few examples to watch for in a typical data project:
The first check is image clarity. Below is an example of an image, both before (left) and after (right) processing, where the spatial resolution was reduced, making it appear “fuzzy” compared to the raw data.
In our case, looking for EAB, we needed to identify individual leaves whose yellowing was an early indicator of infestation. While the initial image registration on individual scenes maintained the spatial resolution, the mosaic process reduced this resolution to the point where we could no longer identify individual leaves. A simple side-by-side comparison with raw data is a quick way to see how well the mosaic data maintained spatial integrity.
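If you want to go beyond an eyeball comparison, a simple sharpness metric can quantify the resolution loss. The sketch below is plain NumPy, not our production tooling, and the box blur is just a stand-in for the mosaic's smoothing: it compares the variance of the discrete Laplacian (a common edge-sharpness score) between a raw scene and a degraded copy, and the blurred version scores lower.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the discrete Laplacian -- higher means sharper edges."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def box_blur(img, k=5):
    """Simple k x k mean filter, simulating mosaic resolution loss."""
    out = np.zeros(img.shape, dtype=float)
    pad = np.pad(img, k // 2, mode="edge")
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
raw = rng.random((64, 64))       # stand-in for a crop of a raw scene
mosaic = box_blur(raw, k=5)      # stand-in for the same crop after mosaicking

# The mosaic stand-in should score noticeably lower than the raw crop
print(laplacian_variance(raw) > laplacian_variance(mosaic))  # → True
```

Running the same metric over matching crops of your raw scenes and the delivered mosaic gives you a number to track, instead of relying on how “fuzzy” the picture looks.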
Second, image band alignment errors are often glaringly obvious when the bands are stacked, as in this true-color composite sample. For illustration we highlight the noticeable offsets of the square objects, but also observe the subtler effects on the trees, which can make the problem difficult to notice.
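A quick way to measure band-to-band misalignment, rather than just spotting it by eye, is phase correlation, which recovers a whole-pixel (row, col) offset between two bands. This is a hedged NumPy sketch, not the registration step inside any particular mosaic package, and the synthetic `np.roll` shift stands in for a misaligned near-infrared band:

```python
import numpy as np

def estimate_band_offset(band_a, band_b):
    """Estimate the whole-pixel (row, col) shift such that
    band_b is approximately np.roll(band_a, shift, axis=(0, 1))."""
    cross = np.conj(np.fft.fft2(band_a)) * np.fft.fft2(band_b)
    cross /= np.abs(cross) + 1e-12          # keep the phase, drop the magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point wrap around to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, band_a.shape))

rng = np.random.default_rng(1)
red = rng.random((128, 128))               # stand-in for the red band
nir = np.roll(red, (3, -2), axis=(0, 1))   # NIR band misregistered by 3 rows, -2 cols

print(estimate_band_offset(red, nir))      # → (3, -2)
```

Anything other than (0, 0) between bands that should be co-registered is a red flag; for us, offsets of even a couple of pixels were enough to smear leaf-level signatures across neighboring trees.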
The third check is for introduced distortions and altered pixels. These can appear when mosaic software attempts (and struggles) to figure out how images should be placed. This is especially tricky with objects like trees: imagine eight images looking down on the same tree, each from a slightly different angle, so the tree appears to “shift” between images and each image captures a different side of it. To solve this problem, the software builds a 3D model of the terrain based on how things “shift” between images, then tries to “wrap” the leaf imagery around what it thinks is a three-dimensional object. Below is an example of what can go wrong when wrapping this data around an object: smeared or warped pixels that have lost their accuracy to the real world.
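Smeared regions like these can also be flagged automatically by differencing the mosaic against the corresponding raw scene over small blocks. The sketch below is a simplified illustration, not a full warp detector: the block size and threshold are arbitrary choices, and the “warp” is simulated by overwriting a patch with noise.

```python
import numpy as np

def distortion_mask(raw, mosaic, block=8, thresh=0.15):
    """Flag blocks where the mosaic diverges strongly from the raw scene.

    Computes the mean absolute difference per block; blocks above
    `thresh` are candidates for smeared or warped pixels. Both
    `block` and `thresh` are illustrative, not tuned values.
    """
    h, w = raw.shape
    h, w = h - h % block, w - w % block          # trim to whole blocks
    diff = np.abs(raw[:h, :w] - mosaic[:h, :w])
    blocks = diff.reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)) > thresh

rng = np.random.default_rng(2)
raw = rng.random((64, 64))
mosaic = raw.copy()
mosaic[16:32, 16:32] = rng.random((16, 16))      # simulate one smeared patch

mask = distortion_mask(raw, mosaic)
print(int(mask.sum()))   # → 4 (the four 8x8 blocks covering the corrupted patch)
```

On real data you would compare a registered raw scene against the matching mosaic crop; a map of flagged blocks tells you quickly whether the warping is isolated or widespread before you commit to training on the mosaic.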
For us (and you!), the integrity of the data determines whether we can run our machine learning tools at all, which means data collectors must pay attention to their choice of sensors and collection methods to get the best results. While no small task, we decided to build our own tools to make the processing (registration and mosaicking) faster and better at maintaining data integrity.
Contact us if you are struggling to get your data pristine and accurate enough to process with your machine learning tools…chances are we hit the same problems and found a resolution (lol…get it?)