Tuesday, November 19, 2013

Lab 6

Goal and Background


The goal of this lab was to introduce me to an important image preprocessing technique: geometric correction. There are two major types of geometric correction in common use, both of which are covered here. All methods in this lab were completed using Erdas Imagine.

Methods


First, I performed image-to-map rectification. I opened a distorted image of the Chicago area and selected the control points tool to start the rectification. When prompted to select a geometric model, I chose polynomial. I performed a 1st-order polynomial transformation, which requires at least 3 ground control points (GCPs). To collect these GCPs, I first selected a point on the distorted image, then found the corresponding point on a map of the Chicago area. Once I had the required number of GCPs, I adjusted them to reduce the RMS error. When the error fell below 1, I finished the model and saved the result as an output file.
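A 1st-order polynomial (affine) fit and its RMS error can be sketched in Python with NumPy; the GCP coordinates below are made up purely for illustration:

```python
import numpy as np

# Hypothetical GCPs: source (distorted image) and reference (map) coordinates.
src = np.array([[10.0, 12.0], [200.0, 15.0], [105.0, 180.0], [60.0, 90.0]])
ref = np.array([[500100.0, 4100050.0], [500700.0, 4100030.0],
                [500420.0, 4099540.0], [500280.0, 4099800.0]])

# 1st-order polynomial (affine): x' = a0 + a1*x + a2*y, same form for y'.
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

# Residual at each GCP, and total RMS error in map units.
pred = np.column_stack([A @ coef_x, A @ coef_y])
residuals = np.linalg.norm(pred - ref, axis=1)
rms = np.sqrt(np.mean(residuals ** 2))
```

Adjusting GCP positions, as in the lab, amounts to shrinking these residuals until the RMS drops below the target of 1.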

Next, I performed an image-to-image rectification. This process is essentially the same as the one described above, except that this time I rectified the image to a previously rectified image. I again selected the control points tool and the polynomial model, but this time I used a 3rd-order transformation, which requires at least 10 GCPs. I adjusted them until the RMS error was less than 1.
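The minimum number of GCPs for a polynomial transformation follows directly from the number of coefficients to solve for; a small Python helper (written for illustration) makes the rule explicit:

```python
def min_gcps(order):
    """Minimum ground control points for a 2-D polynomial transform.

    A 2-D polynomial of order t has (t + 1)(t + 2) / 2 coefficients per
    axis, so at least that many GCPs are needed to solve for them.
    """
    return (order + 1) * (order + 2) // 2
```

This gives 3 GCPs for a 1st-order transformation and 10 for a 3rd-order one; in practice more are collected so the fit is overdetermined and the RMS error is meaningful.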

Results






Sources

None

Sunday, November 17, 2013

Lab 5

Goal and Background


The goal of this lab was to introduce me to some important analytical processes in remote sensing. The lab explored RGB-to-IHS transformations, image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection. By the end of the lab, I was able to apply all of these analytical processes to real-life projects.


Methods


This lab was completed mostly in Erdas Imagine, with some work in Esri ArcGIS.
The first section introduced me to RGB-to-IHS transformations and their reversal. First, I observed an image of the Eau Claire area, then used the RGB to IHS tool to create a new image. I compared the two, considering their color characteristics and histograms. I then transformed the IHS image back to RGB and compared it with the original RGB image.
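The round-trip idea can be illustrated with Python's standard colorsys module, using HSV as a stand-in for IHS (Erdas's exact transform differs, but the principle of converting out and back is the same):

```python
import colorsys

# Hypothetical reflectance values for a single pixel, scaled to [0, 1].
r, g, b = 0.4, 0.6, 0.2

# Forward transform: RGB -> HSV (hue, saturation, value).
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Inverse transform: HSV -> RGB recovers the original values
# up to floating-point error.
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
```

Comparing histograms before and after the round trip, as in the lab, verifies that the inverse transform loses essentially no information.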

In the next part of the lab, I performed image mosaicking. In Erdas Imagine, I opened two images and displayed them as a virtual mosaic, which allowed me to view them overlapping each other. I then used Mosaic Express to perform an actual mosaic. I chose the order of my images, left many parameters at their defaults, and ran the mosaic. I displayed the output image and noted that the color transition from one input image to the other was not very smooth. I then repeated this process in MosaicPro, a more robust mosaic tool. I used a histogram matching color correction, and noted that the output image for this method had a much smoother color transition from one input image to the next. I compared the quality of the two mosaic images.
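Histogram matching, the color correction used in MosaicPro, can be sketched in NumPy by aligning the cumulative distributions of the two images; this illustrates the idea, not Erdas's exact implementation:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` pixel values so its histogram matches `reference`.

    The trick: map each source value to the reference value whose
    cumulative distribution function (CDF) position is closest.
    """
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source value, interpolate the reference value at the same CDF.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)
```

Applying this to the overlap region of two scenes pulls their brightness distributions together, which is why the MosaicPro output showed a smoother seam than the Mosaic Express default.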

In the next section, I performed band ratioing by implementing the normalized difference vegetation index (NDVI) on an image of the Eau Claire area. I did this using the NDVI tool in Erdas Imagine. Leaving many parameters at their defaults, I created the NDVI image and brought it into my viewer. I analyzed the various values in the image, noting that NDVI is a great tool for highlighting areas of vegetation.
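The NDVI tool implements a simple band ratio; a minimal NumPy sketch of the formula (the small epsilon guarding against division by zero is my own addition) is:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense, healthy vegetation; values near or
    below 0 indicate water, bare soil, or built-up areas.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero
```

The ratio exploits the fact that healthy vegetation reflects strongly in the near-infrared band while absorbing red light, which is exactly why vegetated areas stand out in the output image.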

Next, I performed spatial and spectral image enhancement. I applied a low pass filter to a high frequency image of Chicago using the Convolution tool: I chose a low pass kernel type and noted the differences between the resulting image and the original. I then repeated this process with a high pass filter on an image of Sierra Leone (a low frequency image). Next, I performed edge enhancement. I used the same convolution window, but this time selected a kernel type called 3x3 Laplacian Edge Detection, which helped highlight areas with abrupt changes in pixel values.

I then applied spectral enhancement to an image of the Eau Claire area. I used the General Contrast tool in Erdas Imagine to perform a Gaussian minimum-maximum contrast stretch. On another image (this time with a non-Gaussian, bimodal histogram) I performed a piecewise contrast stretch using the piecewise contrast tool, which allowed me to specify which modes in the bimodal histogram were to be stretched. Finally, I used the histogram equalization method to improve contrast on an image of the Eau Claire area. For this I used the Histogram Equalization tool and accepted all default parameters. The output image's contrast was indeed greatly improved.
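The convolution filtering above can be sketched in NumPy; the low pass (mean) kernel and the 3x3 Laplacian are standard kernels, and the zero-padding edge handling here is one assumption among the several options Erdas offers:

```python
import numpy as np

def convolve2d(image, kernel):
    """Plain 2-D convolution with zero padding, a sketch of what the
    Convolution tool does with a square kernel."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Low pass (mean) kernel: smooths by averaging each pixel's neighborhood.
low_pass = np.full((3, 3), 1 / 9)

# 3x3 Laplacian kernel: responds only where pixel values change abruptly,
# so flat areas map to zero and edges are highlighted.
laplacian = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)
```

A constant region passes through the low pass filter unchanged and maps to zero under the Laplacian, which matches the behavior observed in the lab: smoothing leaves uniform areas alone while edge detection suppresses them.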

In the final portion of the lab, I performed binary change detection (image differencing), which allows an image interpreter to identify areas of change between two images. I used a raster tool called Two Image Functions to access the Two Input Operators interface. I specified the input files and bands to be differenced, and used a subtract method. When I opened the output file, I noted that this method did not allow me to see areas of change. I then viewed the histogram and noted that pixels that changed usually fall in the tails of the Gaussian histogram curve. I calculated a threshold for these as mean + 1.5 standard deviations. This is marked and provided below. Next, I mapped changed pixels in the differenced image using the Spatial Modeler tool. Here, I created a model that brought two images of the Eau Claire area (one from 1991 and one from 2011) into this equation:
$n1_ec_envs_2011_b4 - $n2_ec_envs_1991_b4 + 127
which subtracts the 1991 image from the 2011 image and adds a constant. I named the output image and ran the model. I viewed the output image's histogram and calculated the change threshold as mean + (3 x standard deviation). I then created a new model with a single input, an equation, and an output. I used the following equation to show all pixels above my threshold and mask those that weren't:
EITHER $n1_ec_91 IF ( $n1_ec_91 > change/no change threshold value ) OR 0 OTHERWISE
I ran the model and took my results into Esri ArcGIS, where I mapped the areas of change over a background image of my study area. This is provided below.
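The differencing-and-thresholding workflow can be sketched in NumPy, with made-up arrays standing in for the 1991 and 2011 band 4 data:

```python
import numpy as np

# Hypothetical band-4 arrays for the two dates (8-bit DN values).
rng = np.random.default_rng(0)
b4_2011 = rng.integers(0, 128, size=(100, 100)).astype(float)
b4_1991 = rng.integers(0, 128, size=(100, 100)).astype(float)

# Difference image with a +127 offset so negative differences remain
# representable, mirroring the Spatial Modeler equation above.
diff = b4_2011 - b4_1991 + 127

# Change/no-change threshold: mean + 3 standard deviations of the
# difference image (the upper-tail rule used in the lab).
threshold = diff.mean() + 3 * diff.std()

# Boolean mask: True wherever the difference exceeds the threshold.
change_mask = diff > threshold
```

Everything below the threshold is treated as no change, so the mask isolates exactly the upper-tail pixels that the histogram analysis identified.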

Results


RGB to IHS Transform
IHS back to RGB Transform
Image Mosaic
Minimum-Maximum Contrast Stretch
Piecewise Contrast Stretch
Histogram Equalized Image
Image Differencing Histogram

Friday, November 1, 2013

Lab 4

Goal and Background


The goal of this lab was (1) to learn how to delineate a study area from a larger satellite image scene, (2) to demonstrate how spatial resolution of images can be optimized for visual interpretation purposes, (3) to introduce some radiometric enhancement techniques in optical images, (4) to learn to link a satellite image to Google Earth which can be a source of ancillary information, and, (5) to become familiar with various methods of resampling satellite images. All of this was done in Erdas Imagine. 


Methods


In Erdas Imagine, I first made an inquire box and used the "subset and chip" tool to create an image subset. I did this by specifying an input file, an output file, and selecting "from inquire box" to select the proper area. This image subset is shown in the Results section.
Next, I created another image subset via an .aoi file. First, I imported a shapefile delineating Eau Claire and Chippewa counties. Next, I selected this area and saved the selection as an area of interest (.aoi) file. I then used the subset and chip tool to create another image subset, this time selecting my area of interest file as the boundary. This image subset is shown in the Results section.
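Conceptually, both kinds of subsetting reduce to cropping a pixel array; a toy NumPy sketch with hypothetical inquire-box bounds:

```python
import numpy as np

# Hypothetical full scene of random 8-bit pixel values.
rng = np.random.default_rng(1)
scene = rng.integers(0, 256, size=(1000, 1000))

# Hypothetical inquire-box bounds in row/column coordinates.
r0, r1, c0, c1 = 200, 600, 150, 550

# Subsetting keeps only the rows and columns inside the box.
subset = scene[r0:r1, c0:c1]
```

An .aoi-based subset works the same way except the kept region follows an irregular polygon boundary, with pixels outside it set to a no-data value rather than dropped.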
Then, I performed image fusion on two images, one panchromatic and one multispectral. I used the resolution merge tool under pan sharpen in Erdas Imagine to do so. This involved specifying the panchromatic image, the multispectral image, and the output image. I selected the multiplicative method and the nearest neighbor resampling method. I then ran the tool and observed the differences between the original multispectral image and the pan-sharpened one. The main difference was in spatial resolution.
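The multiplicative merge can be sketched in NumPy, assuming the multispectral band has already been resampled onto the panchromatic grid; the rescaling step here is my own assumption, as Erdas's exact scaling may differ:

```python
import numpy as np

def multiplicative_merge(ms_band, pan):
    """Multiplicative resolution merge: multiply each multispectral band
    by the co-registered panchromatic band, then rescale to 0-255.

    The product injects the pan band's fine spatial detail into the
    multispectral band, at the cost of altering its radiometry.
    """
    fused = ms_band.astype(float) * pan.astype(float)
    # Rescale to the 0-255 display range (assumed rescaling choice).
    fused = 255 * (fused - fused.min()) / (fused.max() - fused.min() + 1e-10)
    return fused
```

Run once per multispectral band, this yields an output with the pan band's spatial resolution, which is the difference observed in the lab.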
Next, I used the haze reduction tool on an image of the Eau Claire area and noted the differences between the original image and the haze-reduced image. The tool removed the atmospheric haze and cloud haze effects from the image.
Next, I connected my Erdas Imagine image viewer to Google Earth. I viewed them side by side, and thought about the potential benefits this could include for image interpretation.
In the next section of the lab, I performed image resampling on a satellite image of the Eau Claire area. I did this using the resample image size tool. I resampled the image up from a 30 m pixel size to a 20 m pixel size using the nearest neighbor method. I then repeated this process using the bilinear interpolation method. Finally, I compared the original image to each of the resampled images, noting that the bilinear interpolation output appeared smoother than the nearest neighbor output.
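Nearest neighbor resampling, which never invents new pixel values, can be sketched in NumPy (bilinear interpolation would instead blend neighboring pixels, producing the smoother look noted above):

```python
import numpy as np

def resample_nearest(image, new_shape):
    """Nearest-neighbor resampling: each output pixel takes the value of
    the closest input pixel, so only original values appear in the output."""
    rows = (np.arange(new_shape[0]) * image.shape[0] / new_shape[0]).astype(int)
    cols = (np.arange(new_shape[1]) * image.shape[1] / new_shape[1]).astype(int)
    return image[np.ix_(rows, cols)]
```

Because values are copied rather than averaged, nearest neighbor preserves the original radiometry, which is why it is often preferred when the image will later be classified.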

Results


Below are the subsetted images described above.

This image subset was created using an Area of Interest (.aoi) file.

This image subset was created using an Inquire Box


Sources


None