In previous studies, we successfully mapped eelgrass beds and were able to distinguish between eelgrass and co-occurring macroalgae using airborne imaging spectrometers (Garono et al., 2004; Simenstad et al., 2008). These studies created spatial data sets describing the abundance and distribution of eelgrass (Zostera marina) over hundreds of kilometers of shoreline along Hood Canal in western Washington. Although successful, we found that aircraft were expensive to operate and that positioning the aircraft over the target at the best time for image acquisition was difficult: aircraft had to be positioned over the study area during the low tide window, and cloud heights had to be above the altitude of the aircraft during image collection.
The introduction of unmanned aerial vehicles (UAVs) during the past decade has made acquisition of low-altitude imagery less expensive than imagery collected by manned aircraft. Desktop and cloud-based software have also advanced during this same period, so that producing edge-matched orthomosaic images from a series of individual aerial photographs requires less sophisticated hardware than it did previously. Finally, tablet-based drone flight planning applications allow UAV pilots to better plan flights, in the office or in the field, to collect imagery over study areas ranging from one acre to tens of acres.
The purpose of this study was to evaluate the feasibility of using a dual-sensor UAV to map eelgrass beds along intertidal areas of South Slough (Charleston, OR). Specifically, we evaluated our ability to discriminate between eelgrass and co-occurring macroalgae, and the spatial accuracy of the UAV-collected imagery.
We collected aerial images using a DJI Phantom 3 Professional UAV on which a Sentera High-Precision NDVI Single sensor was installed. Together, the sensors produced both color and near-infrared (NIR) band images. The 12.4-megapixel (4000 x 3000) color sensor was a 1/2.3" CMOS with an ISO range of 100-1600; its lens had a shutter speed range of 8 to 1/8000 sec and a 94-degree field of view (FOV; 20mm in 35mm-equivalent format) at f/2.8 with focus at infinity. The Sentera NIR sensor produced images using two filters: a red band at 625nm center wavelength (CWL) x 100nm width, and a NIR band at 850nm CWL x 40nm width.
We collected a series of aerial images using the UAV, which was programmed to follow flight lines at different altitudes.
We selected each altitude in order to evaluate the resulting image resolution for its suitability for mapping eelgrass: higher altitudes produced imagery that covered a larger on-the-ground footprint, but at lower image resolution.
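The footprint-versus-resolution tradeoff can be sketched with the standard ground sample distance (GSD) relationship for a nadir-pointing camera. Note that the sensor width (6.17mm, nominal for a 1/2.3" CMOS) and the actual focal length (~3.6mm, corresponding to the 20mm 35mm-equivalent) are assumed nominal values, not measurements from this study.

```python
# Sketch of the altitude/resolution tradeoff: ground sample distance (GSD)
# and across-track swath width for a nadir-pointing camera.
# Assumed values: 6.17 mm sensor width (nominal 1/2.3" CMOS) and ~3.6 mm
# actual focal length (the 20 mm 35 mm-equivalent); not from the study.

FT_TO_M = 0.3048

def gsd_m(altitude_ft, focal_mm=3.6, sensor_w_mm=6.17, img_w_px=4000):
    """Ground sample distance (m/pixel) at a given flight altitude."""
    alt_m = altitude_ft * FT_TO_M
    return (alt_m * sensor_w_mm) / (focal_mm * img_w_px)

for alt in (30, 60, 240):
    g = gsd_m(alt)
    swath_m = g * 4000  # across-track footprint width in metres
    print(f"{alt:>3} ft: GSD = {g * 100:.2f} cm/px, swath = {swath_m:.1f} m")
```

Because GSD scales linearly with altitude, pixel size doubles from 30ft to 60ft, while the imaged area grows with the square of altitude.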
Field teams, led by SSNERR staff, estimated the percent cover, to the nearest 5%, of eelgrass, algae, and bare substrate in 1m2, 2m2, and 5m2 quadrats at the time of the flights. Field teams also measured each of the four quadrat corners with a sub-meter GPS and labeled each quadrat with a number. The number tags served as both identifiers and directional indicators: each tag was placed on the northeast corner of the quadrat.
To achieve the best balance between image resolution and image footprint, we evaluated the spatial coverage of each mission (a series of flight lines flown at a predesignated altitude). We compared the final area and the number of quadrats appearing in each mosaic.
Next, we evaluated spatial accuracy in two ways: first, as the RMSE (root mean square error) associated with each mosaic; and second, by using quadrat corners as ground control points in a GIS. In addition to a single RMSE value, we also summarized the spatial error for each of the three axes (X, Y, and Z). We compared the field-measured GPS coordinates of quadrat corners to those measured within the GIS for each image mosaic.
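These two accuracy summaries can be sketched as follows. The corner offsets below are illustrative placeholders, not measured values from the study.

```python
# Hedged sketch of the two accuracy checks: per-axis RMSE and a combined 3D
# RMSE between field-measured GPS corner coordinates and the same corners
# digitized from the mosaic in a GIS. Offset values are illustrative only.
import math

def rmse(errors):
    """Root mean square error of a list of offsets."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# (x, y, z) offsets in feet: GIS-digitized minus GPS-measured, one per corner
offsets = [(0.4, -0.2, 0.1), (-0.3, 0.5, -0.2), (0.2, 0.3, 0.0), (-0.1, -0.4, 0.2)]

for axis, name in enumerate("XYZ"):
    print(f"RMSE {name}: {rmse([o[axis] for o in offsets]):.2f} ft")

combined = rmse([math.dist(o, (0.0, 0.0, 0.0)) for o in offsets])
print(f"Combined 3D RMSE: {combined:.2f} ft")
```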
Then, we assessed image quality by visually determining whether blades of eelgrass and macroalgae were identifiable in the final mosaic at each flight altitude. For mosaics with sufficient quality to distinguish among these features, image quality was further characterized by recording the image resolution, pixel size, and number of pixels per 1m2 quadrat.
As expected, we found that flights flown at a greater altitude provided larger footprints for all RGB mosaics than lower altitude flights. Image characteristics from the 30ft and 60ft flights were comparable. Flights made at 240ft covered about eight times the area of 30ft and 60ft flights but lacked the detail of the lower altitude images.
We found that higher altitudes produced a lower overall RMSE, i.e., higher spatial accuracy: the 30ft mosaics had an error of about 0.6ft, while the 240ft mosaics had an error of about 0.3ft. The spatial error in mosaics is distributed across the entire image. This error is calculated from the characteristics of the UAV camera (e.g., focal length and altitude) and from the edge matching that occurs during processing.
We also evaluated the spatial accuracy associated with each quadrat. In this case, sources of error included the error associated with the GPS unit, typically a few centimeters, and the error in the mosaicked image. We found that we could place quadrats to within 4.6ft to 32.0ft (average = 14.6ft). This difference was calculated as the average distance between the four field-measured and four GIS-measured corners of each quadrat. Since both the 30ft and 240ft mosaics have similar spatial errors (between 1 and 2 meters), we expect to be able to achieve 3ft to 10ft accuracy in 60ft imagery during future studies.
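The per-quadrat placement error described above can be sketched as the mean planimetric distance between matching corners. The coordinates in the example are made up for illustration, not measured values.

```python
# Hedged sketch of per-quadrat placement error: the mean distance between each
# field-measured (GPS) corner and the same corner digitized from the mosaic in
# a GIS. Coordinates below are illustrative, not measured values.
import math

def quadrat_offset_ft(gps_corners, gis_corners):
    """Mean corner-to-corner distance (ft) between GPS and GIS corner sets."""
    dists = [math.dist(a, b) for a, b in zip(gps_corners, gis_corners)]
    return sum(dists) / len(dists)

# Example: a ~1 m (3.28 ft) quadrat whose digitized corners are all shifted
# by the same (3 ft, 4 ft) offset, giving a uniform 5 ft placement error.
gps = [(0.0, 0.0), (3.28, 0.0), (3.28, 3.28), (0.0, 3.28)]
gis = [(x + 3.0, y + 4.0) for x, y in gps]
print(f"placement error: {quadrat_offset_ft(gps, gis):.1f} ft")  # 5.0 ft
```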
We found the resolution of NIR imagery to be somewhat coarser than that of RGB imagery; however, both the 30ft and 60ft NIR imagery are of usable resolution and clarity. NIR imagery separates "green" from "non-green" and is, therefore, more useful and time-saving for classification than RGB imagery. Although the 30ft imagery is the highest quality produced during this study, these high-resolution images require more computational and data processing time than smaller images. Developing NDVI (Normalized Difference Vegetation Index) or other types of classifications from low-altitude images would also require more time. Therefore, we suggest that there is a tradeoff between high-resolution imagery and processing time.
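The green/non-green separation underlying such a classification can be sketched with a minimal NDVI calculation over the two Sentera bands (red, ~625nm; NIR, ~850nm). The pixel values and the 0.3 threshold are illustrative assumptions, not values from the study.

```python
# Minimal NDVI sketch for the two Sentera bands (red ~625 nm, NIR ~850 nm).
# NumPy arrays stand in for the co-registered band rasters; pixel values and
# the 0.3 "green" threshold are illustrative assumptions.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - red) / (NIR + red), clipped to [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return np.clip((nir - red) / (nir + red + eps), -1.0, 1.0)

# Vegetation (eelgrass, macroalgae) reflects strongly in NIR, so NDVI is high;
# bare substrate and water fall near or below zero.
red = np.array([[40.0, 120.0], [30.0, 90.0]])
nir = np.array([[160.0, 110.0], [20.0, 95.0]])
green_mask = ndvi(nir, red) > 0.3  # hypothetical green / non-green threshold
```

Thresholding NDVI this way yields the green/non-green mask directly, which is why the NIR-based classification needs less manual editing than RGB interpretation.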
In summary, we found that there was sufficient resolution to see individual eelgrass blades and clumps of macroalgae in imagery collected at 30ft and 60ft. Images collected at 30ft, however, were slow to process compared to the 60ft images.
We used both features identified in the imagery (e.g., blades of eelgrass) and differences in pixel values to classify both RGB and NIR mosaics (see Photo Comparisons above).
In developing eelgrass coverage assessments from the RGB imagery, we determined that the manual simplification and classification process produced classes that showed significant error.
Classification of NIR imagery, on the other hand, worked well for both the 30ft and 60ft imagery. We found that the 240ft imagery was too coarse to resolve eelgrass and macroalgae. The NIR classification approach requires little manual editing, and the 60ft imagery is faster to process than the 30ft imagery.
We found that the difference between the field- and image-based assessments varied between 5% and 60% for eelgrass, and between 10% and 40% for macroalgae. As expected, the error from the 240ft imagery was the largest, most likely because the higher altitude did not produce sufficient resolution to discern eelgrass cover.
Although additional data would be useful, we believe that we have a method that can accurately map eelgrass using unmanned aerial vehicles equipped with a near infrared sensor.
We evaluated two types of imagery, color (RGB) and near infrared (NIR), and three methods for stitching together image mosaics. Based on a very small set of field data, we successfully classified, with 10-20% error, areas of eelgrass and macroalgae within South Slough NERR. This error is comparable to differences between observers in field-based assessments.
We evaluated three different flight altitudes and found that both the 30ft and 60ft altitudes produced acceptable results. We also learned that the higher altitude (60ft) is more desirable because it reduces the computational and processing times associated with excessively large datasets. We recommend that higher altitudes (e.g., 75ft and 100ft) be evaluated to determine whether the imagery collected is also suitable for this type of analysis. This would both reduce the processing time and increase the on-the-ground coverage for each flight.