Tuesday, April 9, 2019

Week of 4/1: Initial Data Products

This week centered on the completion of a series of products. First, the initial processing of the C-Astral data was completed. We also completed additional M600 and C-Astral data collection flights. Finally, the posters for the aforementioned poster symposium were completed. For the poster that will be shared at the symposium, click here.

C-Astral Flight 1: Data Alignment Issues

The C-Astral data collected during the previous week's flight has been processed, but not without issues. The majority of the week was spent trying to understand the C-Astral PPK correction system. PPK (post-processed kinematic) is a means of spatially aligning data to a geo-corrected standard. This method uses a ground station with a precise GNSS fix as a known point. Each individual image captured during the flight has its position marked within an associated rover file saved among the aircraft files. The marked image positions are then corrected against the known point, providing extremely accurate data after post-processing. The window for post-processing the UAS imagery is found in the C3P flight planning software (Figure 1).
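To illustrate the core idea, here is a minimal, deliberately simplified sketch in Python. Real PPK solvers work on raw GNSS carrier-phase observables, so this is not what C3P actually computes; it only shows the "correct against a known point" concept, and all names and coordinates are hypothetical.

```python
# Simplified illustration of differential (PPK-style) correction.
# Real PPK processing operates on raw carrier-phase observables; this
# only demonstrates correcting rover positions against a known point.
# All coordinates below are hypothetical.

KNOWN_BASE = (40.45000000, -86.91000000)      # surveyed base position (lat, lon)
OBSERVED_BASE = (40.45000210, -86.91000150)   # base position as the GNSS solution saw it

# Offset needed so the GNSS solution matches the truth at the known point
dlat = KNOWN_BASE[0] - OBSERVED_BASE[0]
dlon = KNOWN_BASE[1] - OBSERVED_BASE[1]

# Rover-logged image positions: (image name, lat, lon)
rover_fixes = [
    ("IMG_0001.JPG", 40.43101234, -86.91452311),
    ("IMG_0002.JPG", 40.43112299, -86.91452305),
]

# Apply the same correction to every image position
corrected = [(name, lat + dlat, lon + dlon) for name, lat, lon in rover_fixes]
for name, lat, lon in corrected:
    print(f"{name}: {lat:.8f}, {lon:.8f}")
```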

Figure 1: C3P post-processing window


In order to post-process the imagery, the rover log from the aircraft must be uploaded in addition to RINEX files from the known fixed point. We used the INWL base station in West Lafayette, IN, which is part of the NOAA CORS network. After the rover log and base station data are input, the images must be uploaded. From there, additional settings can be configured. We used the Apply Geoid and Write EXIF settings.
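C3P handles the EXIF writing internally, but as a rough sketch of what the Write EXIF step amounts to, corrected coordinates can be written into an image's GPS tags with the exiftool command-line program (assumed to be installed; the file name and coordinates here are hypothetical):

```python
import subprocess

def write_gps_exif(image_path, lat, lon, alt):
    """Write corrected GPS coordinates into an image's EXIF GPS tags via exiftool."""
    subprocess.run([
        "exiftool",
        f"-GPSLatitude={abs(lat)}",
        f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}",
        f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        f"-GPSAltitude={alt}",
        "-overwrite_original",
        image_path,
    ], check=True)

# Hypothetical corrected position for one image
write_gps_exif("IMG_0001.JPG", 40.43101234, -86.91452311, 181.2)
```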

The primary issue we had was related to the PPK correction. After correction and processing, we noticed that the orthomosaic did not spatially match the basemap along one axis. See Figure 2 for an example. It took us a long time to discover exactly what was causing this mismatch. We suspected issues in data logging, geographic projection conversion, or a geo-correction issue in Pix4D. However, after a few days of trial and error, we discovered that the issue was related to the image numbering system.

Figure 2: Spatial mismatch in initial orthomosaics
The first clue that the issue was image numbering was that the images were mismatched in only one direction. Looking back at Figure 2, it is clear that if the image were shifted up, the roads would align with the basemap, which suggests the entire image is shifted downward. Since the aircraft flew parallel to N. River Rd (in Figure 2), one potential cause of this mismatch is the images having incorrect latitudes and longitudes associated with them. Because the aircraft flew an almost exactly north-south series of transects, only one axis of alignment would be shifted: latitude changes substantially between consecutive images along a transect, while longitude barely changes. Thus, if images were given coordinates belonging to a different image along the transect, the shift would be downward, just as in this case.

While investigating, we discovered that the number of images taken did not match the number of triggers the aircraft recorded. Despite clearing the logs and reformatting the sensor data storage before flight, there was a mismatch between sensor images and sensor triggers. This comes down to a design quirk of the C-Astral aircraft: it must be powered off after any practice captures, which is not listed in the checklist. To fix the mismatch, we copied the first image three times at the start of the image data set. This created three throwaway images that absorbed the three extra sensor triggers, so that from then on each trigger matched an image (a sketch of this fix follows below). After using this method, the problem was solved (Figure 3).
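As a minimal sketch of that workaround (the folder name and counts are hypothetical), the first image can be duplicated until the image count matches the trigger count. The padded copies are named so they sort to the start of the data set, where the extra triggers occurred:

```python
import shutil
from pathlib import Path

image_dir = Path("flight1_images")   # hypothetical folder of captured images
images = sorted(image_dir.glob("*.JPG"))

trigger_count = len(images) + 3      # e.g., the trigger log recorded 3 extra events

# Duplicate the first image so throwaway copies absorb the extra triggers,
# realigning every subsequent image with its trigger (and its logged position).
# The "AAA" prefix sorts the pads ahead of the real "IMG_*" files.
for i in range(trigger_count - len(images)):
    shutil.copy(images[0], image_dir / f"AAA_pad_{i}.JPG")
```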

Figure 3: Aligned orthomosaic
From there, we were able to fully process the imagery. While some areas toward the edges of the mosaic or deep within the forested areas did not align particularly well, the majority of the data aligned properly. The result is a 75-hectare map (Figure 4).

Figure 4: Simple map of flight area


Zooming in on this final orthomosaic, the true detail captured by the 42 MP sensor from about 400 ft above ground level is very clear. Individual pieces of trash, the insides of trash cans, individual parts of chairs, weeds on the ground surface, cracks in sidewalks, and even emblems on the crew's clothing are visible. From a remote sensing perspective, the advantage of this level of RGB detail is the ability to discretely classify nearly anything present within the 75-hectare area. This data set would be particularly potent in a GEOBIA (geographic object-based image analysis) classification workflow, since texture and shape information is available at a very fine resolution; a small segmentation sketch follows below. The following five figures show how detailed this orthomosaic is.
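As a rough illustration of the object-based idea (not the workflow we ran; the tile name and parameter values are assumptions), scikit-image can segment an orthomosaic tile into image objects whose shape and texture statistics could then feed a classifier:

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic
from skimage.measure import regionprops

# Hypothetical RGB tile clipped from the orthomosaic
tile = io.imread("ortho_tile.tif")

# Group pixels into image objects (superpixels) by color and proximity
segments = slic(tile, n_segments=500, compactness=10, start_label=1)

# Per-object shape and texture statistics usable as GEOBIA classifier features
gray = tile.mean(axis=2)
features = []
for region in regionprops(segments, intensity_image=gray):
    features.append({
        "area_px": region.area,
        "eccentricity": region.eccentricity,                     # shape
        "mean_intensity": region.mean_intensity,
        "texture_std": np.std(gray[segments == region.label]),   # crude texture proxy
    })

print(f"{len(features)} image objects extracted")
```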

Figure 5: Zoomed in on the amphitheater, visible at the southern end of Figure 4


Figure 6: Zoomed in further on the amphitheater. Notice that individual weeds and paths where water has previously flowed are visible


Figure 7: The chair section of the amphitheater



Figure 8: Cracks in the pavement and a view of a partially empty trash can



Figure 9: Rock piles in the imagery, weeds visible

M600 Data

Some of our earliest M600 data was processed this week, in particular the initial steps of the amphitheater 3D modeling project. To model the amphitheater in 3D, two different flights were conducted: a grid pattern with the camera at 60 degrees, and a grid pattern with the camera set to nadir. For some reason, the sensor did not change directional orientation during the flight, leading to a quality bias toward the north (Figure 10 shows the camera directions with a sparse point cloud). The nadir data set has yet to be processed. After each is processed separately, the data sets will be combined to produce a very high quality textured 3D model through structure-from-motion photogrammetry; a sketch of that combination step is below. While the project is not yet complete, the following images show the dense point cloud and textured mesh for the angled data product.
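The post doesn't specify which SfM package will perform the merge, so as one hedged possibility, here is a minimal sketch using the Agisoft Metashape 1.x Python API (folder names are hypothetical, and default processing parameters are assumed) that aligns the oblique and nadir image sets together in a single chunk:

```python
from pathlib import Path
import Metashape  # Agisoft Metashape 1.x Python API (license required)

# Hypothetical folders for the two flights
oblique = [str(p) for p in Path("m600_oblique_60deg").glob("*.JPG")]
nadir = [str(p) for p in Path("m600_nadir").glob("*.JPG")]

doc = Metashape.Document()
chunk = doc.addChunk()

# Combine both flights so oblique and nadir views are aligned together
chunk.addPhotos(oblique + nadir)

chunk.matchPhotos()      # feature matching across both image sets
chunk.alignCameras()     # camera poses + sparse point cloud
chunk.buildDepthMaps()
chunk.buildDenseCloud()  # dense point cloud
chunk.buildModel()       # triangulated mesh
chunk.buildUV()
chunk.buildTexture()     # textured 3D model

doc.save("amphitheater_model.psx")
```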

Figure 10: Camera locations and angles for the amphitheater data set collected with the camera at 60 degrees


Figure 11: Amphitheater dense point cloud

Figure 12: Point cloud showing how slopes are captured within the model. Green and blue points are ground control markers


Figure 13: Textured mesh from the front of the amphitheater
Figure 14: Textured mesh showing trees, pavement cracks, lampposts, and decorations within the model
