Fast Generation of Triangulated Irregular Networks

Jeremy Sachs

Rensselaer Polytechnic Institute
Troy, NY

Abstract

Triangulated Irregular Networks (or TINs) are a common type of terrain representation whose generation involves complex operations on a set of points. We show how present-day technologies that drive web content are capable of handling these operations, and can deliver TINs to today's audience in a reasonable time frame.

Keywords

Triangulated Irregular Networks, Delaunay point set triangulation, convex hull, Adobe Flash, heightmap, terrain, modeling.

Related visualizations

The following visualizations require Flash Player 9 or 10:

Most of the visualizations can be advanced by clicking the left mouse button.


1. Introduction

Until recently, most media experienced online through a web browser has been decidedly two-dimensional. Originally, the computational cost of 3D visualization, though affordable in natively executed code, was too great for the languages used to render and script web content. Recent advances in web standards have opened the door to richer media online, and the methods used for decades to implement 3D visualizations can now be retooled to work with these standards. Heightmap generation, a common problem with many applications, is a good place to start.

Heightmaps are unbroken meshes of polygons (usually triangles) whose geometry approximates the terrain represented in a reference image. A heightmap's resolution is the ratio of the number of features in this image to the number of vertices in the mesh. It is typical to assume that each pixel in the image contains one feature and that the resolution is greater than 1; that is, the image's pixels always outnumber the mesh's vertices.

Triangulated Irregular Networks are a type of heightmap whose vertices are selected based on their importance. Generating a TIN involves three major steps:

  1. Assigning a value to every feature in the reference image.
  2. Selecting features based on their value and proximity to other features.
  3. Triangulating a set of points, whose positions correspond to the positions of the selected features.

Assigning values to features is a common task in computer vision, where the goal is to choose which features are easiest to recognize in a series of images. Kanade, Lucas, Shi and Tomasi have demonstrated that by convolving an image with the appropriate matrix, these features can be quantitatively measured and identified. Shi and Tomasi masked features in the order they were chosen so that other features lying within that mask, or "neighborhood", would be discarded for being too close.

Delaunay showed that every set of points in an n-dimensional space can be triangulated in such a way that the enclosing hyperspheres of the resulting simplices contain none of the points in the set. In two dimensions, this equates to a triangular mesh where the circumcircles of the triangles do not contain any points. Others have demonstrated how to compute Delaunay triangulations in O(n²) and O(n log n) time.

These tasks would be meaningless without a method for rendering the resulting mesh. For this paper, we targeted the ubiquitous Adobe Flash runtime and used the free and open-source 3D pipeline Papervision3D. The user experience of Adobe Flash is nearly consistent across all browsers and operating systems, and Papervision3D is the most accessible 3D rendering system for the Flash platform at this time.

The developer community surrounding Papervision3D is very strong, and through it Luke Mitchell contributed a basic heightmap implementation. We have extended his heightmap class to accommodate several different approaches to terrain representation.

2. Definition of a Good Mesh

Most meshes' vertices lie on a rectangular grid whose horizontal and vertical segments are specified when the mesh is first generated. As most rendering systems do not support quads, these rectangular cells are usually divided arbitrarily into right triangles. Points on a grid are very quick to generate, and for most applications this type of heightmap is adequate. However, there are some notable disadvantages:

  1. Features from the reference image are mapped to scalene triangles. In general, features that lie close and parallel to the edges of the mesh's triangles are represented more accurately. The features mapped to the three edges of a scalene triangle are not evenly distributed, so there is a "feature bias" along one of those edges in the region that triangle represents. When all the quads of a rectangular grid heightmap are split into triangles from northwest to southeast, the entire mesh has a feature bias along that diagonal, and features running roughly perpendicular to it are represented less accurately than features that run along it.
  2. The vertices in the grid have no relationship to the terrain data itself, apart from their height values. Grid heightmaps cannot represent multiple specific features of the terrain if those features do not lie on the grid.

The effects of the first of these problems can be reduced by using a hexagonal grid rather than a rectangular one. When triangulated, a hexagonal grid yields triangles with more uniform angles, closer to isosceles, which distributes the feature bias more evenly throughout the mesh.

Delaunay-triangulated TINs do not suffer from either of these problems. First, Delaunay's constraint on the mesh selects for nearly equilateral triangles, which evenly distribute the feature bias. Second, TINs' vertices are derived from the features of the reference image, regardless of those features' positions. If we compare grid-based heightmaps to TINs at various resolutions, we would observe that the grid of the first heightmap grows and shrinks around a point of origin, while the TIN's mesh is "anchored" to the most important features of the reference image. Thus, TINs consistently represent multiple features of the terrain, while grid heightmaps can only consistently represent a feature if it lies at the grid's point of origin.

3. Selecting Features

A good feature to select from a terrain is one where there is a significant change in slope relative to its neighbors. For instance, the peak of a mountain is flat while the ground around it is steep, making the summit a good feature of the mountain. The same is true for valleys, and slightly less so for features on the edges of cliffs, which resemble the other features along the cliff. By this criterion, plateaus have no very important features. Our feature selector is therefore similar to edge detection: we produce a 3 x 3 convolution kernel K(A) for varying angles A.
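
One plausible form for such a kernel, offered purely as an assumption in the spirit of a rotated Sobel/Prewitt operator, weights each neighbor by how well its offset aligns with the angle A. The bias of 128 re-centers negative responses so they are not clamped to zero; that, too, is a choice rather than the project's confirmed kernel.

    import flash.display.BitmapData;
    import flash.filters.ConvolutionFilter;
    import flash.geom.Point;

    // Build a 3 x 3 directional edge kernel for angle a (radians). This
    // particular weighting is an assumption, not the paper's exact kernel:
    // each neighbor is weighted by the dot product of its offset with
    // (cos a, sin a), so the center weight is 0 and the weights sum to 0.
    function makeDirectionalKernel(a:Number):Array {
        var kernel:Array = [];
        for (var dy:int = -1; dy <= 1; dy++) {
            for (var dx:int = -1; dx <= 1; dx++) {
                kernel.push(dx * Math.cos(a) + dy * Math.sin(a));
            }
        }
        return kernel;
    }

    // Convolve the reference image with the kernel for one angle; the result
    // holds a per-pixel "edginess at angle a" value in each color channel.
    function edginess(reference:BitmapData, a:Number):BitmapData {
        var result:BitmapData = reference.clone();
        var filter:ConvolutionFilter =
            new ConvolutionFilter(3, 3, makeDirectionalKernel(a), 1, 128);
        result.applyFilter(reference, reference.rect, new Point(0, 0), filter);
        return result;
    }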

This kernel emphasizes pixels in the reference image that are "edgy" at the given angle. By averaging the convolutions for several of these angles, we get a table of numbers representing the edginess of the features they correspond to. We store these values in a Flash BitmapData object, normally used to represent bitmap images. BitmapData objects have four channels for the red, green, blue, and alpha values of an image; each channel can only store integers between 0 and 255. In some use cases this produces a somewhat coarse table of values, but we turn the limitation to our benefit: to sort all the features in the terrain by their value, we create 256 lists, numbered accordingly, and add each feature to the list whose index is that feature's value. The lists are then concatenated into one large list. Whereas the QuickSort algorithm would sort our features in O(n log n) time, this method takes O(n).
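
Because feature values are already integers between 0 and 255, this amounts to a counting (bucket) sort. A minimal sketch follows, assuming each feature is a plain object carrying x, y, and value properties; that structure is illustrative, not the project's actual class.

    // Sort features by their 0-255 value in O(n) using 256 buckets, returning
    // the highest-valued (edgiest) features first.
    function sortFeaturesByValue(features:Array):Array {
        var buckets:Array = new Array(256);
        for (var i:int = 0; i < 256; i++) {
            buckets[i] = [];
        }
        for each (var f:Object in features) {
            buckets[f.value].push(f);   // drop each feature into its value's bucket
        }
        var sorted:Array = [];
        for (i = 255; i >= 0; i--) {    // concatenate buckets, best features first
            sorted = sorted.concat(buckets[i]);
        }
        return sorted;
    }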

There are problems with using this convolution method to find important features. Intuitively, a heightmap containing sharp mountains should have the peaks of those mountains represented in the TIN, for the reasons mentioned above. Yet even for crisp reference images this may not happen: if a mountain's summit is broader than the 3 x 3 kernel used in the convolution, it may not receive a high value at all. Nearby features will be selected, however, and will represent the terrain adequately.

An interesting side effect of this convolution is that if we take only one convolution K(A), with A equal to the angle of a light source relative to the triangulated mesh, we get an approximation of the illuminated portions of the terrain. This is because features that are edgy in the direction of the light catch more light than those facing away from it. Although this shading does not yet compare in quality with Gouraud shading, it is very cheap to produce and can be baked into the heightmap's texture before the scene is fed to the Papervision3D pipeline. The shading is thus precalculated; as long as the scene is static, it can be transformed without recalculating its shading.
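
A minimal sketch of that baking step, assuming the single-angle convolution has already been written into a grayscale BitmapData and that a multiply blend is an acceptable way to combine it with the texture; both are assumptions rather than the project's confirmed approach.

    import flash.display.BitmapData;
    import flash.display.BlendMode;

    // Darken the terrain texture with a precomputed single-angle convolution
    // (lightMap), so the shading is baked in before the mesh reaches the
    // Papervision3D pipeline. Bright areas of lightMap face the light.
    function bakeShading(texture:BitmapData, lightMap:BitmapData):void {
        texture.draw(lightMap, null, null, BlendMode.MULTIPLY);
    }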

With the features graded and sorted by value, we may now select those that will become part of the TIN mesh. Our original design called for clustering features around areas of interest, such as hills and riverbeds, while leaving plateaus fairly empty. One problem with this proposal is that, in the context in which the TIN will be viewed, we want the audience to experience a consistent frame rate, and a camera panning over a mesh with a non-uniform distribution of vertices will not deliver one, even with current advances in web-based 3D visualization. We therefore opted to produce a mesh with a uniform distribution of features, choosing the best features from which to generate it. For this, we implement Shi and Tomasi's approach of alternately picking the next best feature in the list and invalidating the features lying within a user-specified neighborhood around the chosen feature. We reuse the grid heightmap's concept of horizontal and vertical resolution to help the user of the TIN class generate an appropriate mask; the width and height of our mask are inversely related to the horizontal and vertical resolution of a grid heightmap.
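
A sketch of this selection loop follows, assuming the features arrive sorted best-first; the coarse occupancy grid standing in for the rectangular mask is a simplifying assumption rather than the class's actual bookkeeping.

    // Pick features best-first, skipping any feature whose grid cell has
    // already been claimed by a previously chosen feature. maskW and maskH
    // play the role of the user-specified neighborhood described above.
    function selectFeatures(sorted:Array, maskW:int, maskH:int,
                            imageW:int, imageH:int):Array {
        var cols:int = Math.ceil(imageW / maskW);
        var occupied:Object = {};           // cell index -> true once claimed
        var chosen:Array = [];
        for each (var f:Object in sorted) {
            var cell:int = int(f.x / maskW) + int(f.y / maskH) * cols;
            if (occupied[cell]) continue;   // too close to a chosen feature
            occupied[cell] = true;
            chosen.push(f);
        }
        return chosen;
    }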

Note that, in the end, our features are merely points in space; if there are points that we value more than the ones chosen, we can either modify the features' values before they are sorted, or manipulate the list of features at the end of this process. There will always be a Delaunay triangulation for the features we choose.

The related visualizations show the grading system in action. The "A" and "S" keys modify the resolution of the TIN; hitting "M" toggles the triangle mesh.

4. Finding the Delaunay Triangulation

Every Delaunay triangulation algorithm depends on a fast method for determining whether a given point lies within the circumcircle of a given triangle. The quickest way to determine this is to evaluate a 4 x 4 determinant built from the point and the triangle's three vertices, but it was important in this project to visualize the circumcircles, so we solved for them explicitly, sacrificing some performance for clarity.
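
For reference, the determinant form of the test is sketched below. It uses the standard 4 x 4 in-circle predicate reduced to a 3 x 3 determinant of differences, rather than the explicit circumcircle our implementation solves for.

    import flash.geom.Point;

    // Returns true if point p lies strictly inside the circumcircle of the
    // triangle (a, b, c). Multiplying by the triangle's orientation makes the
    // test independent of winding order and of whether y points up or down.
    function inCircumcircle(a:Point, b:Point, c:Point, p:Point):Boolean {
        var ax:Number = a.x - p.x, ay:Number = a.y - p.y;
        var bx:Number = b.x - p.x, by:Number = b.y - p.y;
        var cx:Number = c.x - p.x, cy:Number = c.y - p.y;
        var det:Number =
              (ax * ax + ay * ay) * (bx * cy - by * cx)
            - (bx * bx + by * by) * (ax * cy - ay * cx)
            + (cx * cx + cy * cy) * (ax * by - ay * bx);
        var orient:Number =
            (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
        return det * orient > 0;
    }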

The intuitive, O(n²) method for finding a Delaunay triangulation is as follows:

  1. Create a triangle that encloses the entire point set.
  2. For each point in the point set:
     i. Find the triangle that contains the point.
     ii. Iterate over that triangle's neighboring triangles, marking the triangles whose circumcircles contain the new point.
     iii. Iterate over the marked triangles' neighbors in the same fashion, continuing until there are no unmarked triangles that fail the circumcircle test.
     iv. Delete the edges shared by the triangles that failed the test.
     v. Connect the remaining edges to the new point. This creates new triangles that pass the circumcircle test.
For this project we attempted to implement the O(n log n) divide-and-conquer algorithm for finding the Delaunay triangulation. The point set is first sorted by x-coordinate; when points share an x-coordinate, which is inevitable when working with features from an image, they are sorted secondarily by y-coordinate. Slight noise is also added to their x-coordinates so that such points no longer lie on a straight line; when three points in a point set rest on a line, the triangle connecting them is degenerate and, depending on how the algorithm is implemented, can cause errors. After they are sorted, the points are grouped into two- and three-point connected subgraphs. Each of these small graphs is, by definition, a Delaunay triangulation. These subgraphs must then be merged into larger and larger subgraphs until the entire point set is represented by one graph.
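
A small sketch of that preparation step; the jitter magnitude is an arbitrary assumption.

    import flash.geom.Point;

    // Add a tiny amount of noise to the x-coordinates to break shared x
    // values, then sort by x and secondarily by y.
    function preparePoints(points:Array):Array {
        for each (var p:Point in points) {
            p.x += (Math.random() - 0.5) * 0.001;
        }
        points.sort(function(a:Point, b:Point):int {
            if (a.x != b.x) return a.x < b.x ? -1 : 1;
            if (a.y != b.y) return a.y < b.y ? -1 : 1;
            return 0;
        });
        return points;
    }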

Before two subgraphs are merged, we must identify the appropriate place to begin the merge operation. The outermost vertices of a Delaunay triangulation always form a convex hull; if they did not, there would be at least three boundary vertices that could still be connected into another triangle. Because of this, we know that the starting point for our merge function should be a common tangent of the two subgraphs' convex hulls. In our implementation, we always begin the merge at the lowest common tangent of the two graphs. Preparata showed that this tangent can be found by iterating over the bottom halves of the convex hulls until the slopes of the hulls at the current points are both greater than the slope of the line connecting them. At the time of this writing, our lowest-common-tangent code does not work properly; it might be possible to detect when the algorithm makes a mistake and fall back to the O(n²) algorithm, but in the end this would not be productive.
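
Because a faulty tangent search is hard to debug by inspection, a deliberately brute-force reference can be useful for validating a faster walk. The sketch below simply tests every pair of hull vertices; it assumes the hulls are stored as arrays of flash.geom.Point in a y-up coordinate system and do not overlap, and it is far slower than the linear-time walk described above.

    import flash.geom.Point;

    // Signed area test: positive when p lies to the left of the directed line
    // a -> b (above it, when a is left of b in a y-up coordinate system).
    function crossProduct(a:Point, b:Point, p:Point):Number {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Brute-force lowest common tangent between the convex hulls of the left
    // subgraph (hullA) and the right subgraph (hullB), returned as a pair of
    // vertex indices [i, j]. A pair is the lower tangent when no vertex of
    // either hull lies below the line through hullA[i] and hullB[j].
    function lowerCommonTangent(hullA:Array, hullB:Array):Array {
        var all:Array = hullA.concat(hullB);
        for (var i:int = 0; i < hullA.length; i++) {
            for (var j:int = 0; j < hullB.length; j++) {
                var isTangent:Boolean = true;
                for each (var p:Point in all) {
                    if (crossProduct(hullA[i], hullB[j], p) < 0) {
                        isTangent = false;
                        break;
                    }
                }
                if (isTangent) return [i, j];
            }
        }
        return null;    // unreachable for valid, non-overlapping convex hulls
    }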

Assuming that the lowest common tangent has been found, the merge algorithm is fairly simple. Using the common tangent as a "base edge", the merge finds the next point in either subgraph to connect to either the left side of the base edge or the right side. That point is connected by an edge to either the left or the right base vertex, to become the new base edge. This continues until there are no candidates remaining for the merge process to join together.

Because of our incomplete implementation of the lowest-common-tangent search, this project uses an O(n²) Delaunay algorithm written in Flash ActionScript and made available through the Flash development community.

5. Green Threading

Most of the operations we have mentioned involve iterating over large sets of data. A shortcoming of most browser-based languages is that their runtimes are event-driven and single-threaded, so a script cannot dwell too long in any one iteration, lest it time out. This is a deliberate design element of these languages, one that lets their runtimes execute code more effectively. However, it means that large data sets cannot be iterated over in one sweep. Instead, a large iteration must be broken into four pieces of code that are then juggled:

  1. the start of the iteration
  2. a step in the iteration
  3. the condition that holds true if the iteration should proceed
  4. the end of the iteration
By isolating these parts of an iterative task, we can iterate over large data sets without timing out. Fortunately, Flash ActionScript and JavaScript support closures, which allow code running in a certain scope to share that scope with code that executes after the program has left it; variables declared within a function may still be shared and operated on after the function has returned. This entire process is known as "green threading": simulating a thread so that a task may be interrupted at any time.
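
A minimal sketch of this pattern in ActionScript 3, using an ENTER_FRAME listener and a closure to hold the loop state between frames. The function name greenFor, the per-frame step budget, and the usage variables are all illustrative assumptions, not the project's actual API.

    import flash.display.Sprite;
    import flash.events.Event;

    // Run a large iteration one slice per frame so the runtime never blocks
    // long enough to time out. The loop state lives in the closures passed in
    // and in the onFrame closure created here.
    function greenFor(host:Sprite, start:Function, condition:Function,
                      step:Function, finish:Function, stepsPerFrame:int = 500):void {
        start();
        function onFrame(e:Event):void {
            // Do a bounded amount of work this frame.
            for (var n:int = 0; n < stepsPerFrame && condition(); n++) {
                step();
            }
            if (!condition()) {
                host.removeEventListener(Event.ENTER_FRAME, onFrame);
                finish();
            }
        }
        host.addEventListener(Event.ENTER_FRAME, onFrame);
    }

    // Usage sketch: sum a large array one slice per frame.
    var data:Array = [];
    for (var k:int = 0; k < 1000000; k++) data.push(k);

    var host:Sprite = new Sprite();     // keep a reference so ENTER_FRAME keeps firing
    var i:int;
    var total:Number = 0;
    greenFor(host,
        function():void { i = 0; total = 0; },                 // start of the iteration
        function():Boolean { return i < data.length; },        // condition to proceed
        function():void { total += data[i]; i++; },            // a step in the iteration
        function():void { trace("done, total =", total); },    // end of the iteration
        2000);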

6. Results and Future Work

Please try our heightmap demo yourself. The "T" key cycles through the types of heightmaps; "F" may help you identify the type you are looking at.

Currently the largest bottleneck in our algorithm is the O(n²) Delaunay triangulation, which is by far the slowest portion of the TIN generation procedure. An O(n log n) triangulation algorithm is clearly feasible and must be implemented should this project evolve further. After that, the masking performed during feature selection is the slowest operation. For point sets of roughly 1000 elements, our algorithm takes about four seconds to produce a TIN in Flash Player 10 on a 2.4 GHz Intel Core 2 Duo, compared with about half a second for a grid heightmap. At double the resolution, the running times for the TIN and grid heightmaps diverge further.

The difference between the grid heightmap and the TIN is not obvious until a shader is applied. Due to Papervision3D's shader implementation, the Gouraud shading system sometimes produces a visible pattern on the grid mesh's surface, while the effect is minimized on the TIN. This is caused by the grid heightmap's regular geometric pattern, which the TIN lacks.

Apart from these subtleties, there is little reason to use a TIN instead of a grid heightmap, except in cases where the grid is noticeable and unpleasant. We have demonstrated, however, that TINs are possible within a web browser, which will hopefully open the door to other computer graphics problems being developed for web-based delivery.

7. Other accomplishments

The heightmaps used to test the TIN system are randomly generated from a pocked, cloudy pattern produced with Perlin noise. A color gradient is applied to the heightmap to generate the above-water, below-water, and water-surface textures. Because Papervision3D does not have a depth buffer, the mesh had to be subdivided along the water level so that the ocean could be layered on top of the below-water terrain and beneath the above-water terrain.

Because of the gamelike nature of the demo for this project, we also implemented a walking character system. This character walks in the cardinal directions relative to the camera, and takes longer to walk uphill than downhill.

Acknowledgements

Patrick Donnelly's support and insight helped to sustain this project's impetus. Cody Phillips provided assistance in the feature selection phase. Barbara Cutler, PhD teaches computer science like no one else can.


The work to come out of this project will be contributed to the open source Papervision3D project. Contact the author with any questions or to request the entire source.


References

  1. J. Shi and C. Tomasi, "Good features to track," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994, pp. 593-600.
  2. Peucker, T.K., Fowler, R.J., Little, J.J. and Mark, D.M. (1980): The Triangulated Irregular Networks (TIN). AUTOCARTO IV, vol 2, 96 - 103.
  3. Peterson, Samuel. Computing constrained Delaunay triangulations in the plane. 1997. University of Minnesota. Apr. 2009 <http://www.geom.uiuc.edu/~samuelp/del_project.html>.
  4. Adobe Flash. Computer software. Vers. 10. Adobe Flash Platform. <http://www.adobe.com/flashplatform/>.
  5. Delaunay triangulation and Voronoï diagram. Computer software. 9 Sept. 2008. Apr. 2009 <http://en.nicoptere.net/?p=10>.
  6. Papervision3D. Computer software. Vers. 2.0. Papervision3D. <http://blog.papervision3d.org/>.
  7. Mitchell, Luke. Heightmap. Computer software. Vers. 1.0.0. Papervision 3D Tutorials. Mar. 2009 <http://www.papervision2.com/>.




Jeremy Sachs          Apr 24, 2009