This lab was a continuation of last week's, as we again worked with Hurricane Sandy damage data. This time the outputs were a hurricane track map as well as a structural damage assessment and analysis. To perform the damage assessment we created a feature class with all the attributes we wanted to collect data on, adding fields for structure damage, wind damage, inundation, and structure type. These would be valuable data points to collect in the field, but for the purposes of this lab we were limited to what we could assess visually from remote imagery. We created raster mosaics of the pre- and post-hurricane imagery. Toggling between these two rasters, we created a point feature on each structure within a pre-defined area of interest and rated its level of damage on a scale of 0 (no damage) to 4 (completely destroyed). After completing and symbolizing my data, I created a polyline feature class to represent the coastline and parsed the damage points by damage category and distance from the coastline. My results are as follows:
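A minimal arcpy sketch of how that parsing step could be scripted. The geodatabase paths, the Structure_Damage field name, and the 100 m distance bands are assumptions, not necessarily what the lab used:

```python
import arcpy
from collections import Counter

# Hypothetical paths and field name -- substitute the actual geodatabase,
# feature classes, and the damage field created for the lab.
damage_points = r"C:\GIS\Sandy.gdb\structure_damage"   # points rated 0-4
coastline = r"C:\GIS\Sandy.gdb\coastline"              # digitized shoreline polyline

# Near adds a NEAR_DIST field holding each point's distance to the coastline
# (in the feature class's linear unit).
arcpy.analysis.Near(damage_points, coastline)

# Tally structures by damage category within assumed 100 m distance bands.
tally = Counter()
with arcpy.da.SearchCursor(damage_points, ["Structure_Damage", "NEAR_DIST"]) as cursor:
    for damage, dist in cursor:
        band = int(dist // 100) * 100
        tally[(damage, band)] += 1

for (damage, band), count in sorted(tally.items()):
    print(f"Damage {damage}, {band}-{band + 100} m from coast: {count} structures")
```

Running Near before opening the cursor is what makes the NEAR_DIST field available for the binning.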
Sunday, July 28, 2024
Monday, July 22, 2024
Module 4- Coastal Flooding- Applications in GIS
Increased occurrence of coastal flooding and sea level rise are two effects of climate change that threaten coastal cities. This week we performed a coastal flooding assessment, comparing the New Jersey coastline before and after Hurricane Sandy made landfall. Then we used elevation models to delineate coastal flood zones in Naples, Florida.
For our hurricane damage assessment, we subtracted the pre-Sandy raster from the post-Sandy raster. Inspecting the resulting change raster made it very clear which areas had the greatest concentration of damage. We then compared the resulting image to a 2019 building footprint layer showing which buildings were present at that time.
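A minimal Spatial Analyst sketch of that differencing step; the mosaic paths are assumptions:

```python
import arcpy
from arcpy.sa import Minus

arcpy.CheckOutExtension("Spatial")

# Hypothetical paths -- substitute the actual pre- and post-storm mosaics.
pre_sandy = r"C:\GIS\Sandy.gdb\pre_storm_mosaic"
post_sandy = r"C:\GIS\Sandy.gdb\post_storm_mosaic"

# Post minus pre: large differences cluster where structures were damaged or destroyed.
change = Minus(post_sandy, pre_sandy)
change.save(r"C:\GIS\Sandy.gdb\change_raster")
```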
After learning how to use the Reclassify and Region Group tools to create a predictive map of a storm surge, we created a map using two elevation models, a USGS DEM and a LiDAR-derived DEM, to predict the areas and buildings affected by a 1-meter storm surge. We then analyzed the types of buildings affected and, presuming the LiDAR data was more accurate, the errors of omission and errors of commission associated with the USGS DEM results.
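A rough sketch of how that surge delineation could be expressed in map algebra, using Con in place of Reclassify to flag the cells at or below 1 meter; the DEM path is an assumption:

```python
import arcpy
from arcpy.sa import Raster, Con, RegionGroup

arcpy.CheckOutExtension("Spatial")

# Hypothetical DEM path -- the same steps would be run for the USGS DEM
# and the LiDAR-derived DEM so the two flood extents can be compared.
dem = Raster(r"C:\GIS\Naples.gdb\lidar_dem")

# Flag every cell at or below 1 m as potentially inundated by a 1 m surge.
surge = Con(dem <= 1, 1)

# Group contiguous flooded cells; isolated low spots with no connection
# to the ocean can then be identified and dropped from the flood zone.
regions = RegionGroup(surge, "EIGHT")
regions.save(r"C:\GIS\Naples.gdb\surge_regions")
```

Running the same steps on both DEMs and overlaying the results with the building footprints is what makes the omission/commission comparison possible.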
Saturday, July 13, 2024
Module 3- Visibility Analysis- Applications in GIS
This week’s lesson centered around 3D visualization techniques in ArcGIS as well as performing line of sight and viewshed analyses. Before working with 3D visualization there are several important core concepts to understand.
We covered how to determine which of the 3 elevation types to utilize and how to set or change elevation type in ArcGIS Pro. The 3 elevation types are:
- on the ground (typically used for vegetation or buildings)
- relative to the ground (traffic cameras, subsurface features)
- at an absolute height (airplane paths, satellites)
Other important terms are:
- cartographic offset, or adjusting the height of an entire layer so that its features are not obscured by other layers within the scene. In our case the houses were obscured by the trees, so we set the cartographic offset to 50 to make them visible.
- vertical exaggeration, emphasizing vertical features on a map, particularly useful in cases where the horizontal extent is vast.
We covered the process of extruding data. Extruding a feature from 2D to 3D can be a way to visualize more than just the literal dimensions of an object. For our exercise we extruded buildings by their value field, using absolute-height extrusion, in order to visualize the most valuable properties in a city.
Additionally, we covered how to apply visual effects using symbology and how to set illumination properties for global and local scenes to see the effects of shadows and lighting on the scene.
After we covered introductory 3D concepts we performed both a line of sight analysis and a viewshed analysis.
The 3 main steps in performing a sight line analysis are to determine observers and targets, construct sight lines, and determine lines of sight. We used the ArcGIS 3D Analyst tools to complete these steps. We considered the line of sight from 2 observation points along a parade route, counting only sight lines within 600 ft, which left clear gaps in coverage from a security standpoint. The centermost portion of the parade route was completely without coverage in the 600 ft line of sight analysis.
For our viewshed analysis, we assessed a proposed campground lighting system to determine what height was necessary to obtain sufficient campsite coverage.
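A hedged sketch of those three sight-line steps using the 3D Analyst geoprocessing tools; the observer, target, and surface paths are assumptions:

```python
import arcpy

arcpy.CheckOutExtension("3D")

# Hypothetical inputs -- observer points, target points along the parade route,
# and an elevation surface for the area.
observers = r"C:\GIS\Parade.gdb\observer_points"
targets = r"C:\GIS\Parade.gdb\route_targets"
surface = r"C:\GIS\Parade.gdb\elevation"

# Steps 1-2: pick observers/targets and construct sight lines between them.
sight_lines = r"C:\GIS\Parade.gdb\sight_lines"
arcpy.ddd.ConstructSightLines(observers, targets, sight_lines)

# Step 3: evaluate each sight line against the surface; the output marks
# every line as visible or obstructed.
arcpy.ddd.LineOfSight(surface, sight_lines, r"C:\GIS\Parade.gdb\line_of_sight")
```

A selection on the resulting line lengths could then drop anything beyond the 600 ft visibility threshold used in the lab.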
Friday, July 5, 2024
Module 2- Applications in GIS- Forestry and LiDAR
This week we worked with Virginia LiDAR data from Shenandoah National Park to create a DEM, a DSM, a forest height layer, and a forest biomass (canopy density) layer. I found the LiDAR data a bit tricky. The processing steps themselves flowed smoothly, but when viewing in 2D I could never get the LiDAR data to display on my map without zooming in so far that it rendered the visual pointless. Regardless, I was able to work around this by creating a 3D map with the LiDAR layer; the 3D scene had less trouble rendering than the 2D map, although I still couldn't view it zoomed out as far as I needed for my final map.
We used the Point File Information tool to summarize the contents of the LiDAR data, then created a DEM using the LAS Dataset to Raster tool with ground points and a DSM using non-ground points. For the height layer we subtracted the DEM from the DSM using the Minus tool.
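A sketch of how that DEM/DSM/height sequence could be scripted. The LAS dataset path, output names, and class codes are assumptions (class 2 for ground is conventional, but the non-ground codes should match the source data):

```python
import arcpy
from arcpy.sa import Minus

arcpy.CheckOutExtension("Spatial")

las = r"C:\GIS\Shenandoah.lasd"   # hypothetical LAS dataset path

# Filter the LAS dataset to ground returns for the DEM and to
# non-ground classes for the DSM.
arcpy.management.MakeLasDatasetLayer(las, "ground_lyr", class_code=[2])
arcpy.management.MakeLasDatasetLayer(las, "nonground_lyr", class_code=[1, 3, 4, 5])

arcpy.conversion.LasDatasetToRaster("ground_lyr", r"C:\GIS\forest.gdb\dem", "ELEVATION")
arcpy.conversion.LasDatasetToRaster("nonground_lyr", r"C:\GIS\forest.gdb\dsm", "ELEVATION")

# Forest height = surface model minus terrain model.
height = Minus(r"C:\GIS\forest.gdb\dsm", r"C:\GIS\forest.gdb\dem")
height.save(r"C:\GIS\forest.gdb\forest_height")
```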
To calculate biomass density, we used the LAS to Multipoint tool to create a layer representing the ground and a layer representing the vegetation. We converted these layers to rasters with a "count" cell assignment. We used the Is Null tool to create a binary raster for each layer, then used the Con tool on these rasters, treating cells with a value of 0 as true and pulling the value from the original raster, to produce the ground and vegetation count layers separately.
We used the Plus tool to combine these layers and the Float tool to convert the values from integer to floating point. Finally, we calculated density using the Divide tool to compare the vegetation raster created with the Con tool against the float data. I realized ¾ of the way through this process that this would have been a great time to use the ModelBuilder we learned about in GIS Programming last term, but since I had already created so many files it had become counterproductive to switch at that point. If I were doing these calculations frequently, I would build a model to make the process more efficient and leave less room for error. Eventually I had so many intermediate layers that I struggled to remember which one came from which step, even with an intentional naming pattern. A model would have been very useful.
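A sketch of the map-algebra portion of that workflow, starting from the count rasters. The paths are assumptions, and the Is Null/Con step is written as the common "replace NoData with 0" pattern:

```python
import arcpy
from arcpy.sa import Raster, IsNull, Con, Plus, Float, Divide

arcpy.CheckOutExtension("Spatial")

# Assumed inputs: the "count" rasters made earlier from the LAS to Multipoint
# outputs with Point to Raster (COUNT cell assignment). Paths are hypothetical.
ground_count = Raster(r"C:\GIS\forest.gdb\ground_count")
veg_count = Raster(r"C:\GIS\forest.gdb\veg_count")

# Is Null flags empty cells (1 where NoData, 0 where a count exists); Con then
# keeps the original count where data exists and substitutes 0 elsewhere.
ground = Con(IsNull(ground_count), 0, ground_count)
veg = Con(IsNull(veg_count), 0, veg_count)

# Plus combines the two, Float converts the integer counts to floating point,
# and Divide gives the share of returns that came from vegetation.
all_returns = Float(Plus(veg, ground))
density = Divide(Float(veg), all_returns)   # cells with no returns at all stay undefined
density.save(r"C:\GIS\forest.gdb\canopy_density")
```

Chaining these as raster objects rather than saving every intermediate is one of the things a ModelBuilder model (or a script like this) would have handled more cleanly.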
Finally, we created a histogram chart to display the height data created earlier in the lab. The exercise said negative values were in error, but I had a hard time figuring out how to exclude the handful of negative data points. For our vector data we excluded the points in the symbology pane, but that wouldn't work for the raster dataset. Eventually I discovered a "Selection" button that I could toggle on and off and that, combined with a SQL clause, allowed me to exclude data as needed. I used this to remove all values below 0 from my histogram.
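The Selection toggle is what the lab used; as an alternative, the negative cells could be stripped from the raster itself with the Set Null tool before charting (a sketch with an assumed path):

```python
import arcpy
from arcpy.sa import Raster, SetNull

arcpy.CheckOutExtension("Spatial")

height = Raster(r"C:\GIS\forest.gdb\forest_height")   # hypothetical path

# Set the erroneous negative heights to NoData so they never reach the chart.
cleaned = SetNull(height, height, "VALUE < 0")
cleaned.save(r"C:\GIS\forest.gdb\forest_height_clean")
```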
Here are 3 maps I created from some of the many layers generated in this process.