Monday, February 25, 2019

Week of 2/18/19: Poster and Method Development

This week, Krysta and I focused on developing the poster further. Two major tasks have been underway. The first is a literature review structured around finding sources specific to our poster topic. Krysta has been leading the broader literature review and introduction for the overall project, as we hope to publish an academic article at the end of the semester, but we needed more targeted sources for the poster. The second is developing a formal methodology that we can test and gather preliminary data with, which will double as the main focus of the poster. Because the literature review would, in theory, be based on our methodology, generating that methodology was the primary task.

Methodology: Flood modeling and boundary delineation accuracy

The topic we have finalized for testing is a comparison of the accuracy of a flood model generated from a LiDAR digital terrain model (DTM) against drone imagery methods. More specifically, we will compare the model to two different UAS data types: geospatial video and multispectral imagery. The methodology for this part of our capstone, and thus the poster, can be split into a few parts.

Ground Data Collection

Ground control points collected with a survey-grade RTK GPS unit will be used to ensure the accuracy of the drone data in general, but several additional points will serve a different purpose: they will be taken at the boundary of the flood extent (see the "Accuracy assessment" section of this post for more information). The Reach GPS unit will be used for ground control data collection.

Model Generation

The model will estimate the extent of flooding based on rain and river height conditions. ArcPRO will be used to generate it, with a LiDAR DTM providing the topographic information necessary for the model.
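The core raster operation behind this kind of model can be sketched as a simple threshold of the DTM against a water surface elevation. This is a minimal "bathtub" sketch, not our actual ArcPRO workflow: the elevation grid and water level below are made up for illustration, and a real model would also account for hydrologic connectivity.

```python
import numpy as np

# Hypothetical 5x5 DTM (elevations in meters); a real run would load
# the LiDAR DTM through GIS software. These values are invented.
dtm = np.array([
    [3.0, 2.5, 2.0, 2.5, 3.0],
    [2.8, 2.2, 1.5, 2.2, 2.8],
    [2.6, 1.8, 1.0, 1.8, 2.6],
    [2.8, 2.2, 1.5, 2.2, 2.8],
    [3.0, 2.5, 2.0, 2.5, 3.0],
])

# Assumed water surface elevation derived from a river gauge height.
water_level = 2.0

# Cells at or below the water surface are flagged as flooded.
flood_mask = dtm <= water_level

print(flood_mask.sum())  # number of flooded cells in this toy grid
```

The boolean `flood_mask` plays the same role as the model's flood extent layer: it can later be polygonized or compared cell-by-cell against a classified drone raster.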

Airborne Data Collection

Flights will be conducted with both methods. The geospatial video flight will be flown manually: the aircraft will be flown directly over where the pilot judges the boundary of the flood to be, and its GPS system will record position in reference to the video being captured. This effectively creates a hard boundary, with one side of the recorded flight path being water and the other being ground.
A second flight will be conducted with a multispectral sensor, likely a MicaSense sensor. This flight will be autonomous over a predetermined flight area. A conventional pixel-based unsupervised classification will be used to separate water pixels from other surfaces. It is difficult to say at the moment whether two thematic classes will be made (water versus land) or additional classes (urban, trees, etc.), which may be more useful in the long run, past the poster.
Data from both methods will be processed with the requisite software to ultimately generate a shapefile of flood extent from each technology, and, in the case of the multispectral technique, an additional raster file.
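The unsupervised classification step can be sketched with a minimal two-class k-means on pixel values. This is an illustration only: the band values below are invented (water reflects little NIR, so water pixels cluster at low NIR values), and the actual classification will be done in our remote sensing software, not with hand-rolled code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pixel samples with two bands (e.g. NIR and red
# reflectance); the means and spreads here are made up.
water = rng.normal([0.05, 0.04], 0.01, size=(50, 2))
land = rng.normal([0.40, 0.20], 0.03, size=(50, 2))
pixels = np.vstack([water, land])

# Minimal 2-class k-means (unsupervised: the labels above are never used).
centers = pixels[[0, -1]].copy()          # initialize from two samples
for _ in range(10):
    d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([pixels[labels == k].mean(axis=0) for k in range(2)])

# Interpret the cluster with the lower mean NIR as water.
water_cluster = int(centers[:, 0].argmin())
print((labels == water_cluster).sum())    # pixels classified as water
```

Adding more clusters (urban, trees, etc.) would just mean raising the cluster count and interpreting each cluster afterward, which mirrors the two-class versus multi-class decision described above.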

Accuracy assessment

The accuracy of each method must be calculated against reference data before the aerial data can be compared to the model. It is useless to compare the two data types if neither is calibrated against a more accurate reference. The GPS points taken at the edge of the river serve this role: because each point is taken at the physical boundary of water and land, it creates a bimodal split, with one side water and the other land. In both techniques, the resulting shapefile creates a more continuous bimodal split of pixels in the same way. For the multispectral imagery, the raster can also be used directly. If a classified line does not pass directly through a GPS point, the classification is either over- or underclassified at that point: pixels on one side of the line are considered water and those on the other side land, just as one side of each GPS point is water and the other land. Thus, accuracy can be determined by comparing classified pixels to the reference points. The following diagrams I generated illustrate this:

Figure 1: A fully accurate classification example

Figure 2: An overclassified example

Figure 3: An underclassified example
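The per-point check illustrated in the figures above can be sketched as follows. Everything here is a toy setup: the classified raster and the reference boundary column are invented, and the point-to-cell lookup in practice would go through the rasters' georeferencing rather than array indices.

```python
import numpy as np

# Hypothetical classified raster: True = water, False = land.
# Columns 0-2 are water in this made-up grid.
classified = np.zeros((5, 6), dtype=bool)
classified[:, :3] = True

# Reference GPS points taken at the true water/land boundary,
# assumed here to lie between columns 2 and 3: (row, boundary column).
ref_points = [(r, 3) for r in range(5)]

over = under = correct = 0
for row, col in ref_points:
    water_side = classified[row, col - 1]   # cell just on the water side
    land_side = classified[row, col]        # cell just on the land side
    if water_side and not land_side:
        correct += 1   # classified boundary falls at the reference point
    elif land_side:
        over += 1      # water mapped past the true boundary (Figure 2)
    else:
        under += 1     # water falls short of the true boundary (Figure 3)

print(correct, over, under)
```

Accuracy is then the fraction of reference points where the classified boundary agrees, with the over/under counts indicating the direction of any bias.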


Comparison to Model 

After the accuracy of the drone-collected data is determined, it can be compared to the model. The model output will be classified just as the data collection methods were, and the classification of water will be compared between the model and the drone data to determine the accuracy, and validity, of the model.
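Once both the model and the drone data exist as water/land rasters on the same grid, the comparison reduces to cell-by-cell agreement. A minimal sketch with invented masks (the metrics, overall agreement and intersection-over-union, are common choices, not necessarily the ones we will settle on):

```python
import numpy as np

# Hypothetical flood extents as boolean rasters on the same 4x4 grid.
model = np.zeros((4, 4), dtype=bool)
model[:, :2] = True                     # model floods the left half
drone = np.zeros((4, 4), dtype=bool)
drone[:, :2] = True
drone[0, 2] = True                      # drone maps one extra flooded cell

agreement = (model == drone).mean()                   # fraction of matching cells
iou = (model & drone).sum() / (model | drone).sum()   # intersection over union
print(round(float(agreement), 4), round(float(iou), 4))
```

Disagreement cells could further be split into over- and under-prediction by the model, mirroring the over/underclassification framing used in the accuracy assessment.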

Literature Review

So far, the literature review has been relatively successful. We are looking for journal articles on a variety of topics so that our introduction can effectively communicate the point and importance of this work. The list of papers we have collected so far can be found here.


Upcoming

This week, we begin the data collection by starting the initial ground survey of ground control points.
