Wednesday, December 5, 2018

Geospatial Video: A Powerful and Underappreciated Tool in UAS

Outside of military surveillance operations, geospatial video is a data format that most commercial UAS operators have likely never heard of, or at best only poorly understand. Geospatial video is the combination of geospatial information and video into a single data product. Essentially, geospatial video allows pilots and data analysts to understand where a UAV is positioned in space at any point along a recorded video. Depending on the video format, even the frame of view of the video can be spatially located along a basemap. At first glance this may seem like a minor alteration to a given UAV video with little use, but the technology fills an important gap in the capabilities of UAS remote sensing.

Perhaps the most common data product created with UAS data, the orthophoto mosaic, has temporal and computational drawbacks. Built from an assortment of images "stitched" together via photogrammetry (which first constructs a point cloud), orthomosaics take an exceptionally long time to process and cannot account for moving objects within the study area. In many fields, a target of study may act on temporal scales too fast to be assessed in an orthomosaic, or even too fast for a UAV flying a standard grid pattern. Two examples that come to mind in my beloved field of ecology are animal surveys and wildfire tracking. Animals that move at all are likely to be dropped from, or duplicated within, an orthomosaic, which effectively makes animal surveys through this method impossible. Even when looking through the individual images used to generate a mosaic, the non-continuous nature of the data can make repeat observations more likely.

Wildfires, especially if tracking fire dynamics in real time, are another topic area that could benefit from geospatial video technology. Whether for firefighting or ecology, many of the concerns and interactions caused by wildfires occur at extremely fine temporal scales: scales the drone is capable of flying, but that mosaic software cannot account for. Other research topics within ecology that could benefit from geospatial video include algal bloom research, population- and community-level interactions, and simply using drones as a reconnaissance tool to find and locate features ahead of time without the limited coverage of a typical grid transect pattern.

Generally speaking, faster image analysis is possible with geospatial video technology. Shapefiles could be created based on the aircraft path, so a nadir video following a physical feature could help inform cartographic and volumetric analysis. Real-time analysis is also possible with this method, which could be critical for conservation activities. Even just allowing for shorter field campaigns, by enabling some observations to be made from the air, makes this technology beneficial for the wildlife biologist.
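To illustrate the first idea above, the logged flight path can be exported as a simple vector feature that GIS software can overlay on a basemap. The sketch below builds a GeoJSON LineString from a hypothetical GPS track using only the Python standard library; the coordinates and property names are invented for illustration, and a real workflow would read them from the aircraft's log file.

```python
import json

# Hypothetical GPS track logged by the UAV, as (longitude, latitude)
# pairs. A real log would be parsed from the aircraft's .gpx or .txt
# output rather than written by hand.
track = [(-89.40, 43.07), (-89.41, 43.07), (-89.41, 43.08)]

# A GeoJSON LineString is a simple, widely supported way to turn the
# aircraft path into a vector feature that GIS tools can load
# alongside the video.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [list(p) for p in track],
    },
    "properties": {"source": "uav_flight_path"},  # illustrative tag
}

geojson = json.dumps({"type": "FeatureCollection", "features": [feature]})
print(geojson)
```

GeoJSON is used here because it needs no extra libraries; converting the result to an actual shapefile would take a GIS package, but the underlying geometry is the same.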

While payware and better video cameras streamline the processing for this technology, simple UAVs that log GPS data can be used with freeware geospatial video software to produce a professional-quality product. I used Video GeoTagger FREE to investigate the ease of use of the technology and how it might be useful to the ecological research community.

Guide to Use and My Experience

The geotagger software is available for free here.

After the software is downloaded and installed, the intro screen looks like the one below.


Conveniently, a YouTube video is included in the top left corner that provides guidance on at least the most basic steps. The software is targeted at Esri users, since it can also be used as an extension, but the steps are essentially the same either way.

The first thing to do is to load the data. Go to File and click "Open video for geotagging".

The following prompt will be produced. Fill in the open fields with the desired video and GPS file. The GPS file can be a .txt, .gpx, or .SRT, among other formats.
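For readers curious what is actually inside one of those GPS files, a .gpx log is just XML: a sequence of track points with latitude, longitude, elevation, and a timestamp. The sketch below pulls those fields out of a minimal hand-written GPX fragment using Python's standard library; the coordinates are invented stand-ins for a real UAV log.

```python
import xml.etree.ElementTree as ET

# Minimal hand-written GPX fragment standing in for a real UAV log.
gpx = """<?xml version="1.0"?>
<gpx version="1.1" xmlns="http://www.topografix.com/GPX/1/1">
  <trk><trkseg>
    <trkpt lat="43.0731" lon="-89.4012">
      <ele>320.5</ele><time>2018-12-05T16:00:00Z</time></trkpt>
    <trkpt lat="43.0733" lon="-89.4015">
      <ele>321.0</ele><time>2018-12-05T16:00:01Z</time></trkpt>
  </trkseg></trk>
</gpx>"""

# GPX elements live in the GPX 1.1 namespace, so lookups need it.
ns = {"gpx": "http://www.topografix.com/GPX/1/1"}
root = ET.fromstring(gpx)

points = []
for trkpt in root.findall(".//gpx:trkpt", ns):
    points.append({
        "lat": float(trkpt.get("lat")),
        "lon": float(trkpt.get("lon")),
        "ele": float(trkpt.find("gpx:ele", ns).text),
        "time": trkpt.find("gpx:time", ns).text,
    })

print(points)
```

The .txt and .SRT variants carry the same kind of per-fix information in flat-text form, which is why the software can accept any of them.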




At this point, the video (viewable in the top left panel when the YouTube video is not selected) and the GPS data (viewable as points along the basemap) are both present. However, the data are not yet merged; they are two separate entities. You must select a point (preferably the first or last) from the GPS data on the basemap and then select the "Geotag Video" option in the bottom left corner. This effectively merges the data sets.
The interface after the data has been imported but before it has been merged
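Conceptually, the merge step amounts to pinning one selected GPS fix to a moment in the video; every other fix then inherits a video timestamp from that single anchor. A minimal sketch of that idea, with invented timestamps and coordinates (the software's internal method is not documented, so this is only my reading of what "Geotag Video" must be doing):

```python
# Hypothetical GPS fixes with GPS timestamps in seconds of day.
fixes = [
    {"t_gps": 57600.0, "lat": 43.0731, "lon": -89.4012},
    {"t_gps": 57601.0, "lat": 43.0733, "lon": -89.4015},
    {"t_gps": 57602.0, "lat": 43.0735, "lon": -89.4018},
]

# Suppose the first fix was the point selected on the basemap, and it
# corresponds to video time 0.0 s (i.e. recording started at that fix).
anchor_gps_time = fixes[0]["t_gps"]
anchor_video_time = 0.0

# Every fix inherits a video timestamp from that one known offset.
for fix in fixes:
    fix["t_video"] = anchor_video_time + (fix["t_gps"] - anchor_gps_time)

print([f["t_video"] for f in fixes])  # → [0.0, 1.0, 2.0]
```

This is also why the first or last point is the preferable anchor: those are the moments easiest to match unambiguously to the start or end of the recording.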


Once the data is merged, select the media browser tab, then select the icon for the video. Multiple merged videos can be present at one time, so it is important to select the correct one. The following image shows what this should look like. You may also notice the four new icons on the right side of the basemap.


After this step, when the video is played, the live position of the UAS is tracked in time with the video. The green dot on the basemap (in the following image) represents the UAV as it travels along the flight path. As the video plays, the four indicators on the right of the map also update, showing heading, time, and altitude, as well as general metadata (location in space along with time and altitude), in descending order. At any point the video can be paused and the information for that moment recorded. This is a massive advantage of the technology: observations found in the video can be tied directly to the metadata for that individual point in time (and thus to the feature observed). The following two images show a zoomed-in perspective of the flight tracker screen and the corresponding video clip.
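The pause-and-record workflow described above boils down to a simple lookup: given the timestamp of the paused frame, find the logged fix nearest in time and read off its metadata. A minimal sketch, with an invented three-fix track (field names and values are illustrative, not the software's actual data model):

```python
# Hypothetical merged track: video time (s) mapped to position metadata.
track = [
    {"t": 0.0, "lat": 43.0731, "lon": -89.4012, "alt": 320.5},
    {"t": 1.0, "lat": 43.0733, "lon": -89.4015, "alt": 321.0},
    {"t": 2.0, "lat": 43.0735, "lon": -89.4018, "alt": 321.4},
]

def metadata_at(pause_time, track):
    """Return the logged fix nearest in time to the paused frame."""
    return min(track, key=lambda fix: abs(fix["t"] - pause_time))

# Pausing at 1.3 s snaps to the fix logged at 1.0 s.
fix = metadata_at(1.3, track)
print(fix["lat"], fix["lon"], fix["alt"])
```

Note that this nearest-fix behavior is exactly what produces the precision limitation discussed under the weaknesses below: two nearby pause times can snap to the same fix.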



Use and Weaknesses


As is hopefully apparent, the ability to pair metadata with video is crucial for tasks that need quick assessment or real-time monitoring. The tool is even more powerful with the MISB video format, which enables a bounding box of the camera's extent to be drawn on the basemap alongside the UAV position. I can also see this becoming the primary means of UAV data collection for research groups without the infrastructure for more expensive software or hardware.

However, there are some weaknesses to this approach, at least with the freeware version. Positional precision is a significant concern. As you may tell from the image above, the GPS points are not entirely continuous in space; the video runs continuously while the stream of points does not. This means that if I pause the video twice within a very short span of time, both pauses may be assigned the same GPS point and associated metadata. Obviously this cannot be right, as the two pause locations, while similar, are not exactly the same. This may limit the technology somewhat for observations that require an extremely accurate GPS point. While none come to mind immediately (perhaps an assessment of extremely heterogeneous landscapes), missions that require that much precision are likely better suited to different data products.
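One way to soften this limitation, at least analytically, is to interpolate between the two fixes that bracket the paused timestamp instead of snapping to the nearest one. The sketch below does simple linear interpolation on a hypothetical two-fix track; over the short gaps between fixes, and assuming a roughly straight, constant-speed flight segment, this gives a more plausible position than reusing the same logged point twice.

```python
def interpolate(pause_time, track):
    """Linearly interpolate lat/lon between the two fixes that
    bracket the paused timestamp, instead of snapping to one fix."""
    track = sorted(track, key=lambda f: f["t"])
    for a, b in zip(track, track[1:]):
        if a["t"] <= pause_time <= b["t"]:
            frac = (pause_time - a["t"]) / (b["t"] - a["t"])
            return {
                "lat": a["lat"] + frac * (b["lat"] - a["lat"]),
                "lon": a["lon"] + frac * (b["lon"] - a["lon"]),
            }
    # Outside the logged window: clamp to the nearest endpoint.
    return track[0] if pause_time < track[0]["t"] else track[-1]

# Hypothetical one-second gap between fixes.
track = [
    {"t": 0.0, "lat": 43.0731, "lon": -89.4012},
    {"t": 1.0, "lat": 43.0735, "lon": -89.4016},
]
pos = interpolate(0.5, track)
print(round(pos["lat"], 5), round(pos["lon"], 5))
```

The assumption of straight-line, constant-speed flight between fixes is the weak point here; during turns or hovering, interpolated positions will drift from the truth, which is another reason high-precision missions may still want a different data product.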

The other issue is scaling and validation of the basemap-GPS combination. While different basemap formats can be selected, including imagery, the native resolution of UAS video may create a problem. If the drone data, which is extremely high resolution, does not match the lower-resolution basemap, it may be difficult to interpret the exact location of points in space. How much of an issue this is would be difficult to quantify with freeware, but it should be a consideration. Another problem may come in the form of correcting for, or at least identifying, differences between the video and the basemap. This would especially be an issue when the video is taken over homogeneous landscapes. If the video is taken over a coniferous forest away from identifiable man-made features, for example, it could be extremely difficult to validate whether the GPS points on the basemap are accurate at all. In this video we had access to a road, and the road visible in the basemap is clearly present in the video as well. Operators may not always have that luxury.
