Introduction:
This exercise is a continuation of the first exercise, in which we designed a small terrain surface and found an effective way to gather data for that surface. This lab focuses on the many ways to model that surface. It will be important to note the changes between each interpolation method and the reasons behind them. For that reason, I have provided a quick overview of the processes that make up each interpolation method.

Methods:
Before the interpolation methods were run, it was important to create a simple Triangulated Irregular Network (TIN). This acted as my control map; I used this TIN to compare the subtle changes between methods.

Steps to Creating a TIN in ArcMap:
Figure 1. TIN surface as viewed in ArcScene. The viewing angle highlights the lowest point in the terrain model, shown in dark blue in the lower portion of the image.
- Import the Excel table into ArcMap.
- Display XY Data.
- Create a geodatabase in the desired workspace.
- Export the x-, y-, and z-coordinates as a feature class.
- Turn on the 3D Analyst extension.
- Create TIN from XYZ data.
- Symbolize TIN.
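These steps can also be scripted. The snippet below is a minimal arcpy sketch of the same workflow; the workspace path, table name, field names, and spatial reference are hypothetical placeholders, not the exact values used in this exercise.

```python
import arcpy

arcpy.env.workspace = r"C:\Lab2\Terrain.gdb"   # hypothetical geodatabase
arcpy.CheckOutExtension("3D")                  # 3D Analyst extension

# Display the XY data and export it as a point feature class
arcpy.management.MakeXYEventLayer(
    r"C:\Lab2\survey_points.csv", "X", "Y", "xy_layer",
    spatial_reference=arcpy.SpatialReference(26915),   # example: NAD83 UTM 15N
    in_z_field="Z")
arcpy.management.CopyFeatures("xy_layer", "terrain_points")

# Build the TIN from the point feature class, treating Z values as mass points
# (TINs are file based, so the output goes to a folder rather than the geodatabase)
arcpy.ddd.CreateTin(
    r"C:\Lab2\tins\terrain_tin",
    arcpy.SpatialReference(26915),
    "terrain_points Z Mass_Points <None>")
```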
Once the TIN was created, the other interpolation methods were run. The output of these tools is in raster form. However, for enhanced data analysis, a 3-dimensional model needed to be created and viewed in ArcScene. The steps to convert the raster models to TIN format are as follows:
- Convert the raster models to TIN format using the “Raster to TIN” tool (3D Analyst).
- Open ArcScene, load the five TINs for each respective interpolation method.
- Load the corresponding raster image.
- Set each raster image to float on its corresponding TIN surface.
- (Optional) Turn on hill shade effect.
*I used a Jenks Natural Breaks classification scheme with 9 classes for all 5 interpolation methods.
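The raster-to-TIN conversion above can also be looped in arcpy. This is a rough sketch only; the raster names, output folder, and z-tolerance are assumptions.

```python
import arcpy

arcpy.env.workspace = r"C:\Lab2\Terrain.gdb"   # hypothetical workspace
arcpy.CheckOutExtension("3D")

# Hypothetical interpolation outputs (rasters), one per method
rasters = ["idw_ras", "natneigh_ras", "spline_reg_ras", "spline_ten_ras", "kriging_ras"]

for ras in rasters:
    # Convert each raster surface to a TIN; a smaller z_tolerance follows the
    # raster cell values more closely but creates more TIN nodes
    arcpy.ddd.RasterTin(ras, r"C:\Lab2\tins\{}_tin".format(ras), z_tolerance=0.1)
```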
Interpolation Tools:
The focus of data collection always comes with a trade-off between the time and resources available and the precision of the data set. Interpolation methods are useful when data cannot be collected for every point on a surface, yet a continuous surface is desired. Given the different approaches to interpolation, there are two categories that describe the general processes behind the methods: geostatistical and deterministic interpolation.

Deterministic interpolation methods assign values to locations based on the surrounding measured values and on specified mathematical formulas that determine the smoothness of the resulting surface. Deterministic interpolation methods include IDW, Natural Neighbor, and Spline.
Geostatistical methods are based on statistical models that include autocorrelation (the statistical relationship among the measured points). Because of this, geostatistical techniques not only have the capability of producing a prediction surface, but also provide some measure of the certainty, or accuracy, of the predictions. Geostatistical methods include Kriging.
Inverse Distance Weighting (IDW) -
IDW is a weighted distance average: each measured point within the search neighborhood is assigned a value, or 'weight,' based on its distance to the output cell, and those weights determine how much influence each point has on the output value. It is important to note that the average value for an output cell can never exceed the highest, or fall below the lowest, value of the inputs (your raw data).
The best results from IDW are obtained when sampling is sufficiently dense with regard to the local variation you are attempting to simulate. Because the influence of an input point on an interpolated value is isotropic (uniform in all directions), IDW will not preserve ridges. You can, however, use a polyline feature class as a 'barrier' to help maintain edges (note: this will significantly increase processing time).
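To make the weighting idea concrete, here is a small self-contained sketch of the basic IDW estimate; it is illustrative only and not the exact algorithm the ArcGIS tool runs. The power parameter controls how quickly a point's influence falls off with distance.

```python
import math

def idw_estimate(x, y, samples, power=2.0):
    """Estimate z at (x, y) from (xi, yi, zi) samples with inverse distance weighting."""
    num, den = 0.0, 0.0
    for xi, yi, zi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return zi                  # query point sits exactly on a sample
        w = 1.0 / d ** power           # weight shrinks as distance grows
        num += w * zi
        den += w
    return num / den                   # weighted average, bounded by min/max of the inputs

# Example: an estimate between four measured corners stays within 10.0-15.0
pts = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 11.0), (1, 1, 15.0)]
print(idw_estimate(0.5, 0.5, pts))
```

In ArcMap, the equivalent operation is the Spatial Analyst IDW tool, which also accepts the optional polyline barrier feature class mentioned above.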
Natural Neighbor - This tool finds the closest subset of input samples to a query point and applies weights to them based on proportionate areas to interpolate a value. This method is also known as Sibson or "area-stealing" interpolation because of this area-based approach. Essentially, it uses a subset of samples that surround a query point, then interpolates heights within the range of the input values used. It is important to note that natural neighbor does not infer trends, so it will not produce peaks, pits, ridges, or valleys that are not already present in the data set. In general, the natural neighbor surface passes through the input samples and smooths out all locations except the input point locations.
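As a sketch, the Spatial Analyst call is short; the workspace, feature class, field name, cell size, and output name below are placeholders.

```python
import arcpy
from arcpy.sa import NaturalNeighbor

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\Lab2\Terrain.gdb"   # hypothetical workspace

# Interpolate the sampled elevations with natural neighbor (Sibson) weighting
nn_ras = NaturalNeighbor("terrain_points", "Z", cell_size=0.25)
nn_ras.save("natneigh_ras")
```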
Spline - Estimates values using a mathematical function that minimizes overall surface curvature, resulting in a smooth surface that passes exactly through the input points. There are two types of spline: regularized, shown in Figure 4, and tension, shown in Figure 5.
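Both variants come from the same Spatial Analyst tool; only the spline type (and the meaning of the weight parameter) changes. The paths, field name, cell size, and weights below are assumptions.

```python
import arcpy
from arcpy.sa import Spline

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\Lab2\Terrain.gdb"   # hypothetical workspace

# Regularized spline: the weight controls the third-derivative smoothing term
Spline("terrain_points", "Z", 0.25, "REGULARIZED", 0.1).save("spline_reg_ras")

# Tension spline: the weight controls the stiffness of the surface
Spline("terrain_points", "Z", 0.25, "TENSION", 1.0).save("spline_ten_ras")
```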
Figure 3. Natural Neighbor interpolation method used to create a TIN with an overlaid raster image.
Figure 4. Spline interpolation method. This spline uses the "regularized" technique, which incorporates third derivatives into the curvature calculation to smooth out the features.
Kriging - Creates a new surface of calculated values from a set of points with z-values. To do this, the kriging method assumes that the distance and/or direction between points reflects a spatial correlation that can be used to predict the variation of a surface. This idea is based on the first law of geography: closer points are more closely related than points that are farther away. Kriging is a multistep process:
- Exploratory statistical analysis of the data.
- Variogram modeling, using one of the semivariogram models:
  - Circular
  - Spherical (one of the most commonly used models)
  - Exponential
  - Gaussian
  - Linear
- Creating the surface.
- (Optional) Exploring a variance surface.
Types of Kriging Methods: Ordinary and Universal
Ordinary Kriging is the most general and widely used of the Kriging methods and is the default. It assumes the constant mean is unknown. This is a reasonable assumption unless there is a scientific reason to reject it.
Universal Kriging assumes that there is an overriding trend in the data—for example, a prevailing wind—and it can be modeled by a deterministic function, a polynomial. This polynomial is subtracted from the original measured points, and the autocorrelation is modeled from the random errors. Once the model is fit to the random errors and before making a prediction, the polynomial is added back to the predictions to give meaningful results. Universal Kriging should only be used when you know there is a trend in your data and you can give a scientific justification to describe it.
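As a hedged sketch, both flavors are available through the Spatial Analyst Kriging tool; the semivariogram choices, cell size, search radius, and names below are placeholders rather than the settings used in this lab.

```python
import arcpy
from arcpy.sa import (Kriging, KrigingModelOrdinary,
                      KrigingModelUniversal, RadiusVariable)

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\Lab2\Terrain.gdb"   # hypothetical workspace

# Ordinary kriging with a spherical semivariogram (a common default), also
# writing the optional variance surface that expresses prediction certainty
ok_model = KrigingModelOrdinary("SPHERICAL")
ok_ras = Kriging("terrain_points", "Z", ok_model, 0.25,
                 RadiusVariable(12), "kriging_variance")
ok_ras.save("kriging_ordinary_ras")

# Universal kriging with a first-order (linear) drift, for data with a known trend
uk_model = KrigingModelUniversal("LINEARDRIFT")
Kriging("terrain_points", "Z", uk_model, 0.25).save("kriging_universal_ras")
```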
*The remaining interpolation tools, Topo to Raster and Topo to Raster by File, use an interpolation method specifically designed for creating continuous surfaces from contour lines, and the methods also contain properties favorable for creating surfaces for hydrologic analysis.
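Although I did not run it in this exercise, a minimal Topo to Raster call would look roughly like the sketch below; the contour and point layers, field names, and cell size are hypothetical.

```python
import arcpy
from arcpy.sa import TopoToRaster, TopoContour, TopoPointElevation

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\Lab2\Terrain.gdb"   # hypothetical workspace

# Combine contour lines and spot elevations into one hydrologically aware surface
topo_inputs = [TopoContour([["contours", "elev"]]),
               TopoPointElevation([["terrain_points", "Z"]])]
TopoToRaster(topo_inputs, cell_size=0.25).save("topo_ras")
```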
Discussion:
Of all the interpolation methods, I like the geostatistical method of kriging the best. Although the values are all estimated, I still think this method modeled the surface best. It was the only method devoid of the data spike seen in the top-right portion of the other models.
The next two methods I liked based on the structure of their processes. I thought natural neighbor and the regularized version of spline were effective; they just didn't stand out with my models. I liked these methods because they preserve the data points that were collected. With this in mind, had a more effective grid been collected, these models very well could have produced more accurate surface models. Of the two, I liked natural neighbor best, because the surface passes through all the input points and then smooths out the surfaces in between.
Conclusion:
Effective modeling of surfaces has been, and will continue to be, a challenge for GIS professionals. As I found with this exercise, there will never be one method that is always best. Each project will have unique circumstances that may favor one method of interpolation over others. That is why this exercise was so important: it introduced us to the many types of interpolation and pushed us toward asking the why and how of each method.
Unfortunately, none of the models perfectly illustrated the actual topography of the sandbox. In my opinion, kriging and natural neighbor produced the best models, but this is hard to see in the static 2D images I've provided. If I were to do this again, I would try to incorporate videos from ArcScene so a 360-degree view could be given.
If I were to perform a similar exercise in the future, I would try to take a better sample of data points. The 3"x3" grid we used was effective, but if we could have taken more readings along the ridges and slopes, I think the interpolation methods would have been more effective. Another feature that would have been nice is a line file designating the ridges; if this were incorporated into the IDW interpolation, the model would have looked much different. One problem with polyline files is that there often aren't clearly defined ridges in nature, only continuous transitions. All of these are considerations I can bring to future projects.