Tuesday, December 10, 2013

Lab 8

Goal and Background


The main goal of this lab was to gain experience in measuring and interpreting the spectral reflectance (signatures) of various Earth surface materials captured in satellite images. I learned how to collect spectral signatures from remotely sensed images, graph them, and analyze them to verify whether they would pass a spectral separability test. All analysis was performed in Erdas Imagine.

Methods


First, I zoomed in on the provided Landsat ETM+ image of the Eau Claire area. Next, I used the polygon tool to draw within an area of standing water and used the Signature Editor tool to examine the polygon's reflectance. In the tool, I created a new signature from the polygon I drew, then examined the mean plot window to view the spectral signature. After taking note of its properties, I continued by drawing polygons on 11 more surfaces: moving water, vegetation, riparian vegetation, crops, urban grass, dry soil (uncultivated), moist soil (uncultivated), rock, asphalt highway, airport runway, and a concrete surface (parking lot). I examined the relationships among the signatures, which are provided below.
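For reference, here is a minimal sketch in Python of what the Signature Editor computes for each polygon: the mean pixel value in every band. The band stack and polygon mask below are hypothetical stand-ins, not the lab's actual data.

import numpy as np

bands = np.random.default_rng(0).random((6, 50, 50))   # hypothetical 6-band image stack
polygon_mask = np.zeros((50, 50), dtype=bool)
polygon_mask[10:20, 10:20] = True                      # pixels inside the drawn polygon

signature = bands[:, polygon_mask].mean(axis=1)        # one mean value per band
print(signature)   # plotting these six values gives the spectral signature curve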


Results

Spectral signatures of various surfaces

Sources


None

Tuesday, December 3, 2013

Lab 7

Goal and Background


The main goal of this laboratory exercise was to develop my skills in performing key photogrammetric tasks on aerial photographs and satellite images. Specifically, the lab helped me understand the mathematics behind calculating photographic scales, measuring the areas and perimeters of features, and calculating relief displacement. Moreover, this lab was intended to introduce me to stereoscopy and to performing orthorectification on satellite images.


Methods


First, I performed basic scale, measurement, and relief displacement calculations using various photogrammetric equations. In the first exercise, I calculated the scale of a photograph using a real-world distance and an on-photo measured distance. Next, I calculated the scale of the photograph with another equation that uses altitude, focal length, and elevation. I continued by using Erdas Imagine's 'Measure Perimeters and Areas' digitizing tool to calculate the area and perimeter of a body of water in the Eau Claire area. Finally, I calculated relief displacement using object height, the object's radial distance from the principal point, and flying height.
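As a quick illustration, here is a minimal sketch of these equations with made-up numbers (the focal length, altitude, elevation, and object dimensions below are hypothetical, not the lab's values):

# photo scale from flying height: scale = f / (H - h)
focal_length = 0.152      # meters (a 152 mm mapping camera lens)
altitude = 6000.0         # flying height above mean sea level, meters
elevation = 300.0         # terrain elevation, meters

scale = focal_length / (altitude - elevation)
print(f"photo scale: 1:{1 / scale:,.0f}")        # 1:37,500

# relief displacement: d = (h * r) / H
object_height = 50.0      # meters
radial_distance = 0.10    # from the principal point, meters on the photo
flying_height = altitude - elevation
displacement = (object_height * radial_distance) / flying_height
print(f"relief displacement: {displacement * 1000:.2f} mm")   # 0.88 mm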

Next, I used Erdas Imagine's 'Anaglyph Generation' tool to create an anaglyph photo from an image of Eau Claire and a 10-meter-resolution digital elevation model (DEM) of the city. I set the vertical exaggeration to 2 (rather than 1), then analyzed the anaglyph image's features in relation to real-life features.

Finally, I performed orthorectification using Erdas Imagine's Leica Photogrammetry Suite (LPS). First, I created a new project in LPS Project Manager and created a new block file. I set the projection system according to my images, then added a frame, which subsequently contained an image. I verified that the sensor properties matched my image's sensor (SPOT). Next, I activated the point measurement tool to collect ground control points (GCPs). I used the horizontal reference tool to activate a reference image that facilitates GCP collection; the reference was an already orthorectified image of Palm Springs, CA. I collected GCPs by placing a point on the reference image and one at the same geographic location on the unrectified image. The (x, y) coordinates were provided to facilitate GCP collection. I collected 9 GCPs using this method, then reset the horizontal reference source and collected two more GCPs from a different orthoimage. I then assigned z values to all of the GCPs using a DEM of the Palm Springs area, and changed the GCPs' type to 'Full' and their usage to 'Control' to specify their properties.

I then returned to LPS Project Manager to add another frame and another image. Again, I verified the sensor properties and reopened the point measurement tool. In the new image, I collected GCPs that corresponded to those in the first image; points that did not fall within the overlap between the two images were skipped. Next, I edited the automatic tie point generation properties to fit my purposes and created tie points based on the already present ground control points. Some 25 were automatically generated. I then edited the triangulation properties in LPS Project Manager to establish the mathematical relationship between the images that make up the block file, the sensor model, and the ground. I ran the model, viewed the report, and accepted the triangulation results. Finally, I selected the 'Start Ortho Resampling Process' icon in LPS Project Manager and verified my DEM settings, output file name, and resampling method, making sure both project images were included. I viewed the orthorectified images, noting that the spatial match between them was very accurate.
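As an aside, the step of assigning z values to GCPs amounts to sampling a DEM at each GCP's (x, y) location. A minimal sketch of that idea using the rasterio library is below; the file name and coordinates are hypothetical, and LPS of course does this internally rather than this way.

import rasterio

# hypothetical (x, y) map coordinates of collected GCPs
gcps_xy = [(545200.0, 3742100.0), (548750.0, 3739800.0)]

with rasterio.open("palm_springs_dem.img") as dem:
    # sample() yields one value per band at each coordinate pair
    z_values = [vals[0] for vals in dem.sample(gcps_xy)]

print(z_values)   # elevations to assign to the GCPs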

Results

Orthorectified overlapping images of the Palm Springs, CA area

Sources

None

Tuesday, November 19, 2013

Lab 6

Goal and Background


The goal of this lab was to introduce me to an important image preprocessing step: geometric correction. There are two major types of geometric correction in common use, both of which are covered here. All methods in this lab were completed using Erdas Imagine.

Methods


First, I performed image-to-map rectification. I opened a distorted image of the Chicago area and selected the control points tool to start the rectification. When prompted to select a geometric model, I chose polynomial. I performed a 1st-order polynomial transformation, which requires at least 3 ground control points (GCPs). To obtain these GCPs, I first selected a point on the distorted image, then found the corresponding point on a map of the Chicago area. When I had the proper number of GCPs, I began to move them to reduce the RMS error. Once I got the error below 1, I finished the model and saved it as an output file.
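Underneath, a 1st-order polynomial rectification is just a least-squares fit from image coordinates to map coordinates, with the RMS error measuring the fit at the GCPs. Here is a minimal sketch with hypothetical GCP coordinates (Erdas computes this internally; the numbers are made up):

import numpy as np

# (x, y) on the distorted image and (X, Y) on the reference map
src = np.array([[120, 340], [510, 95], [860, 600], [400, 820]], dtype=float)
dst = np.array([[455200, 4391800], [456900, 4393100],
                [458400, 4390500], [456300, 4389700]], dtype=float)

A = np.column_stack([np.ones(len(src)), src])    # columns: 1, x, y
coef, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solves X = a0 + a1*x + a2*y (Y likewise)

residuals = A @ coef - dst
rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(f"total RMS error: {rmse:.3f} map units")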

Next, I performed an image-to-image rectification. This process is essentially the same as the one previously described, except that I rectified the image to one that had already been rectified. I again selected the control points tool and the polynomial model, but this time I used a 3rd-order transformation, which requires at least 10 GCPs (an nth-order polynomial needs (n+1)(n+2)/2 control points). I adjusted the points until I had an RMS error of less than 1.

Results






Sources

None

Sunday, November 17, 2013

Lab 5

Goal and Background


The goal of this lab was to introduce me to some important analytical processes in remote sensing. The lab explored RGB-to-IHS transformations, image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection. The end result of the lab was the ability to apply all the analytical processes introduced here in real-life projects.


Methods


This lab was completed mostly using Erdas Imagine, with a small amount of Esri ArcGIS.
The first section introduced me to RGB-to-IHS transformations and their reversal. First, I observed an image of the Eau Claire area, then used the RGB to IHS tool to create a new image. I compared the two, considering their color characteristics and histograms. I continued by transforming this image back to RGB and comparing it with the original RGB image.
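For intuition, here is a minimal sketch of a per-pixel round trip using Python's built-in colorsys module. HSV is a close cousin of the IHS model, and Erdas uses its own IHS formulas, so this is an approximation rather than the tool's exact math; the pixel values are hypothetical.

import colorsys

r, g, b = 0.4, 0.6, 0.2                   # hypothetical normalized pixel values
h, s, v = colorsys.rgb_to_hsv(r, g, b)    # forward transform
print(f"hue={h:.3f} sat={s:.3f} value={v:.3f}")

r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v) # reverse transform, as in the lab
assert abs(r - r2) < 1e-9                 # the round trip recovers the original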

In the next part of the lab, I performed image mosaicking. In Erdas Imagine, I opened two images and displayed them as a virtual mosaic, which allowed me to view them overlapping each other. I then used Mosaic Express to perform an actual mosaic: I chose the order of my images, left many of the default parameters, and ran the mosaic. I displayed the output image and noted that the color transition from one of the input images to the other was not very smooth. I then repeated this process in MosaicPro, a more robust mosaic tool, using its histogram matching color correction; the output image for this method had a much smoother color transition from one input image to the next. I compared the quality of the two mosaic images.
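The histogram-matching step that MosaicPro performs when color-balancing two scenes can be sketched with scikit-image; the file names below are hypothetical, and this is only a rough analogue of the Erdas implementation.

from skimage import exposure, io

left = io.imread("mosaic_left.tif")
right = io.imread("mosaic_right.tif")

# remap the right scene's histogram to match the left scene, band by band
right_matched = exposure.match_histograms(right, left, channel_axis=-1)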

In the next section, I performed band ratioing by computing the normalized difference vegetation index (NDVI) on an image of the Eau Claire area, using the NDVI tool in Erdas Imagine. Leaving many of the default parameters, I created the NDVI image and brought it into my viewer. I analyzed the various values in the image, noting that NDVI is a great tool for highlighting areas of vegetation.
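The band ratio the NDVI tool computes is NDVI = (NIR - Red) / (NIR + Red). A minimal sketch with hypothetical reflectance arrays:

import numpy as np

nir = np.array([[0.45, 0.50], [0.30, 0.60]])   # hypothetical near-infrared reflectance
red = np.array([[0.08, 0.10], [0.20, 0.05]])   # hypothetical red reflectance

ndvi = (nir - red) / (nir + red + 1e-10)       # small epsilon avoids divide-by-zero
print(ndvi)   # values near +1 indicate dense, healthy vegetation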

Next, I performed spatial and spectral image enhancement. I used a low-pass filter on a high-frequency image of Chicago using the Convolution tool: I chose a low-pass kernel type and noted the differences between the resulting image and the original. I then repeated this process with a high-pass filter on a low-frequency image of Sierra Leone. Next, I performed edge enhancement in the same convolution window, this time selecting a kernel type called 3x3 Laplacian Edge Detection, which highlighted areas with abrupt changes in pixel values.

I then applied spectral enhancement to an image of the Eau Claire area. I used the General Contrast tool in Erdas Imagine to perform a Gaussian minimum-maximum contrast stretch on the image. On another image (this time with a non-Gaussian histogram) I performed a piecewise contrast stretch using the piecewise contrast tool, which allowed me to specify which modes in the bimodal image's histogram were to be stretched. Finally, I used the histogram equalization method to improve contrast on an image of the Eau Claire area, applying the Histogram Equalization tool with all default parameters. The output image's contrast was indeed greatly improved.
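A minimal sketch of a 3x3 Laplacian edge-detection convolution, similar in spirit to the Erdas kernel (the kernel weights and the tiny test image here are my own illustration, not taken from the lab):

import numpy as np
from scipy import ndimage

laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

image = np.zeros((8, 8))
image[:, 4:] = 1.0    # a synthetic vertical edge

edges = ndimage.convolve(image, laplacian, mode="nearest")
print(edges)          # nonzero responses appear only where pixel values change abruptly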

In the final portion of the lab, I performed binary change detection (image differencing), which allows an image interpreter to identify areas of change between two images. I used a raster tool called Two Image Functions to access the Two Input Operators interface. I specified the input files and bands to be differenced and used the subtract method. When I opened the output file, I noted that this method alone did not let me see areas of change, so I viewed the histogram and noted that changed pixels usually fall in the tails of the Gaussian histogram curve. I calculated a threshold for these as the mean + 1.5 standard deviations; this is marked and provided below. Next, I mapped changed pixels in the differenced image using the Spatial Modeler tool. Here, I created a model that fed two images of the Eau Claire area (one from 1991 and one from 2011) into this equation:
$n1_ec_envs_2011_b4 - $n2_ec_envs_1991_b4 + 127
which subtracts the 1991 image from the 2011 image and adds a constant to keep the differences positive. I named the output image and ran the model. I viewed the output image's histogram and calculated the change threshold as the mean + (3 x standard deviation). I then created a new model with a single input, an equation, and an output. I used the following equation to show all pixels above my threshold and mask those that weren't:
EITHER 1 IF ( $n1_ec_91 > change/no-change threshold value ) OR 0 OTHERWISE
I ran the model and took my results into Esri ArcGIS, where I mapped the areas of change over a background image of my study area. This map is provided below.
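The whole binary change detection can be sketched in a few lines of numpy. The arrays below are hypothetical, but the +127 offset and the mean + 3 standard deviation threshold follow the model above.

import numpy as np

rng = np.random.default_rng(0)
b4_2011 = rng.random((100, 100)) * 255    # hypothetical 2011 band 4
b4_1991 = rng.random((100, 100)) * 255    # hypothetical 1991 band 4

diff = b4_2011 - b4_1991 + 127            # the +127 constant keeps values positive
threshold = diff.mean() + 3 * diff.std()  # change threshold: mean + 3 std devs

change_mask = np.where(diff > threshold, 1, 0)   # 1 = change, 0 = no change
print(change_mask.sum(), "pixels flagged as change")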

Results


RGB to IHS Transform
IHS back to RGB Transform
Image Mosaic
Minimum-Maximum Contrast Stretch
Piecewise Contrast Stretch
Histogram Equalized Image
Image Differencing Histogram

Friday, November 1, 2013

Lab 4

Goal and Background


The goal of this lab was (1) to learn how to delineate a study area from a larger satellite image scene, (2) to demonstrate how the spatial resolution of images can be optimized for visual interpretation purposes, (3) to introduce some radiometric enhancement techniques for optical images, (4) to learn to link a satellite image to Google Earth, which can serve as a source of ancillary information, and (5) to become familiar with various methods of resampling satellite images. All of this was done in Erdas Imagine.


Methods


In Erdas Imagine, I first drew an Inquire Box and used the "subset and chip" tool to create an image subset. I did this by specifying an input file and an output file, and selecting "from inquire box" to define the proper area. This image subset is shown in the Results section.
Next, I created another image subset via an .aoi file. First, I imported a shapefile delineating Eau Claire and Chippewa counties. Next, I selected this area and saved the selection as an area of interest (.aoi) file. I then used the subset and chip tool to create another image subset, this time selecting my area of interest file as the boundary. This image subset is also shown in the Results section.
Then, I performed image fusion on two images, one panchromatic and one multispectral, using the resolution merge tool under pan sharpen in Erdas Imagine. This involved specifying the panchromatic image, the multispectral image, and the output image. I selected the multiplicative method and the nearest neighbor resampling method. I then ran the tool and observed the differences between the original multispectral image and the pan-sharpened one; the main difference was in spatial resolution. A sketch of the multiplicative method follows this section's remaining steps below.
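For reference, the multiplicative resolution-merge method selected above multiplies each multispectral band by the panchromatic band. A minimal sketch, assuming hypothetical arrays already co-registered at the panchromatic resolution:

import numpy as np

rng = np.random.default_rng(0)
pan = rng.random((4, 4))      # high-resolution panchromatic band
ms = rng.random((3, 4, 4))    # three multispectral bands, resampled to pan size

sharpened = ms * pan                             # multiplicative merge, broadcast over bands
sharpened = sharpened / sharpened.max() * 255    # rescale for 8-bit display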
Next, I used the haze reduction tool on an image of the Eau Claire area and noted the differences between the original image and the haze-reduced one; the tool removed atmospheric haze and thin cloud cover from the image.
Next, I connected my Erdas Imagine image viewer to Google Earth, viewed them side by side, and considered the potential benefits this linkage could offer for image interpretation.
In the next section of the lab, I performed image resampling on a satellite image of the Eau Claire area using the resample image size tool. I resampled the image up from a 30-meter to a 20-meter pixel size using the nearest neighbor method, then repeated the process using the bilinear interpolation method. Finally, I compared the original image to each of the resampled images, noting the smoother appearance of the bilinear result.
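A minimal sketch contrasting the two resampling methods used above, with a hypothetical band array; in scipy's zoom, order=0 is nearest neighbor and order=1 is bilinear:

import numpy as np
from scipy import ndimage

image = np.random.default_rng(0).random((100, 100))   # hypothetical 30 m band
factor = 30 / 20                                      # 30 m pixels -> 20 m pixels

nearest = ndimage.zoom(image, factor, order=0)        # nearest neighbor
bilinear = ndimage.zoom(image, factor, order=1)       # bilinear interpolation
print(nearest.shape, bilinear.shape)                  # both (150, 150)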

Results


Below are the subsetted images described above.

This image subset was created using an Area of Interest (.aoi) file.

This image subset was created using an Inquire Box.


Sources


None