10 Fun Things to do with Tessellation

Hardware tessellation is probably the most notable feature of Direct3D11.

Direct3D11 was announced at the last Gamefest and a technical preview was released in the November 2008 DirectX SDK. Hardware implementations are expected to be available this year.

[missing image]

Direct3D11 extends the Direct3D10 pipeline with three new stages: two programmable shader stages (the Hull and Domain Shaders) and a fixed function stage (the Tessellator).

Rendering of Catmull-Clark subdivision surfaces is often mentioned as the primary application for the tessellation pipeline, but there are many other interesting uses that have not received that much attention.

I thought it would be interesting to take a closer look at those other applications, and submitted a proposal to do that at GDC’09. However, it seems that the organizers do not think tessellation is as interesting as I do, or they didn’t like my proposal, or maybe it’s just that they know I’m a lousy speaker. I will never know, because the gracious feedback of the GDC review committee can be represented by a single boolean.

In any case, here’s a brief overview of the 10 fun things that I was planning to talk about. I don’t get very deep into the technical details, but in future posts I may describe some of these applications more thoroughly. Please leave your comments if there’s something you would like to learn more about.

PN-Triangles

Curved PN Triangles is a triangle interpolation scheme that operates directly on triangle meshes whose vertices are composed of positions and normals (PN stands for Point-Normal).

[missing image – PN Triangles]

It’s an interesting way of improving visual quality that offers a simple migration path, since assets do not need to be heavily modified.

The PN Triangle evaluation consists of two steps: First, for every triangle of the input mesh a triangular cubic patch is derived solely from the vertex positions and normals; no adjacency information is required. Then, the resulting patch is subdivided or tessellated for rendering.

The resulting surface is smoother than the polygonal surface, but in general it does not have tangent continuity, which results in shading discontinuities. To hide these discontinuities, normals are interpolated independently using either linear or quadratic interpolation. These normals are not the true surface normals, but they give the surface a smooth appearance.

This two-step evaluation maps very well to the Direct3D11 tessellation pipeline. The evaluation of the control points can be performed in the Hull Shader, the fixed function tessellator can produce a tessellation pattern in the triangle domain, and the actual surface can be evaluated for each of the tessellated vertices in the Domain Shader.
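
As a rough sketch of how this could look in HLSL (not SDK code: the control point ordering, the struct names, and the gViewProj constant are my own assumptions), the Domain Shader part of the evaluation might be written like this:

cbuffer Constants { float4x4 gViewProj; };  // assumed view-projection matrix

struct CONTROL_POINT { float3 position : POSITION; float3 normal : NORMAL; };
struct HS_CONSTANTS  { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; };
struct DS_OUTPUT     { float4 position : SV_Position; float3 normal : NORMAL; };

[domain("tri")]
DS_OUTPUT PNTriangleDS(HS_CONSTANTS hs,
                       float3 bar : SV_DomainLocation,
                       const OutputPatch<CONTROL_POINT, 10> b)
{
    // Assumed control point ordering:
    // b300 b030 b003 b210 b120 b021 b012 b102 b201 b111.
    float u = bar.x, v = bar.y, w = bar.z;

    // Cubic Bezier triangle: Bernstein-weighted sum of the control points.
    float3 p = b[0].position * (u*u*u) + b[1].position * (v*v*v) + b[2].position * (w*w*w)
             + b[3].position * (3*u*u*v) + b[4].position * (3*u*v*v)
             + b[5].position * (3*v*v*w) + b[6].position * (3*v*w*w)
             + b[7].position * (3*u*w*w) + b[8].position * (3*u*u*w)
             + b[9].position * (6*u*v*w);

    DS_OUTPUT o;
    o.position = mul(float4(p, 1), gViewProj);
    // Normals are interpolated independently (linearly here) to hide the
    // lack of tangent continuity; only the corner normals are used.
    o.normal = normalize(b[0].normal * u + b[1].normal * v + b[2].normal * w);
    return o;
}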

[missing image – Scalar Tagged PN Triangles]

In order to support sharp edges, a rim of small triangles is added along the edges. That increases the number of patches, and it’s not entirely clear how to properly texture map them. Scalar Tagged PN-Triangles solve that problem in a more elegant way by tagging each crease vertex with three scalars that act as shape controllers and modify the construction of the surface control points. However, this representation does not support crease corners.

Silhouette Refinement

When tessellation is enabled the only supported primitive type is the patch primitive. In Direct3D11 a patch is an abstract primitive with an arbitrary number of vertices. You can use patches to represent traditional primitives (i.e. a triangle is just a patch with 3 vertices), but this also enables you to represent other input primitives with arbitrary topology and additional connectivity information.

An interesting extension of PN-Triangle tessellation is to augment the input triangles with the neighbor vertices in order to perform silhouette refinement.

With this additional information it’s possible to compute tessellation factors in the Hull Shader based on whether an edge lies on the silhouette or in the interior of the mesh. The fixed function tessellator then uses these edge tessellation factors to produce a semi-regular tessellation pattern, and the Domain Shader transforms it to interpolate the surface.
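
A sketch of the idea (the 6-vertex patch layout, gEyePos, and the tessellation factors are assumptions of mine; HS_CONSTANTS is as in the PN-Triangle sketch above): the patch constant function compares the facing of the triangle with the facing of each neighboring triangle, and raises the factor of edges that straddle the silhouette.

cbuffer ViewConstants { float3 gEyePos; };  // assumed eye position

struct VS_OUT { float3 position : POSITION; };

// Positive when the triangle (a, b, c) faces the eye.
float FacingSign(float3 a, float3 b, float3 c)
{
    return dot(cross(b - a, c - a), gEyePos - a);
}

// Patch layout assumption: p[0..2] are the triangle corners, p[3..5] the
// neighbor vertices across edges (0,1), (1,2) and (2,0).
HS_CONSTANTS SilhouetteConstantsFunc(InputPatch<VS_OUT, 6> p)
{
    float f = FacingSign(p[0].position, p[1].position, p[2].position);

    HS_CONSTANTS hs;
    [unroll] for (int e = 0; e < 3; e++)
    {
        float3 a = p[e].position;
        float3 b = p[(e + 1) % 3].position;
        // The neighbor triangle across edge (a, b), wound consistently.
        float g = FacingSign(b, a, p[3 + e].position);

        // A silhouette edge separates a front facing and a back facing
        // triangle. (The exact mapping of edges to SV_TessFactor entries
        // is glossed over here.)
        hs.edges[e] = (f * g < 0) ? 8.0 : 1.0;
    }
    hs.inside = max(hs.edges[0], max(hs.edges[1], hs.edges[2]));
    return hs;
}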

Phong Tessellation

[missing image – phong tessellation]

Phong Tessellation is a geometric version of Phong interpolation, but applied to vertex positions instead of normals.

First, points are interpolated linearly over each triangle using its barycentric coordinates; then the points are projected onto the planes defined by the corner positions and normals, and finally the result of the three projections is interpolated again.

This procedure produces a smooth surface comparable to PN Triangles, but its evaluation is much cheaper, since no additional control points need to be computed.
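
A minimal Domain Shader sketch of the operator, reusing the declarations from the PN-Triangle sketch above (the shape factor alpha and its value are assumptions; setting it to 1 gives the pure operator):

[domain("tri")]
DS_OUTPUT PhongDS(HS_CONSTANTS hs,
                  float3 bar : SV_DomainLocation,
                  const OutputPatch<CONTROL_POINT, 3> p)
{
    // Plain linear interpolation of the corner positions.
    float3 q = p[0].position * bar.x + p[1].position * bar.y + p[2].position * bar.z;

    // Project q onto the tangent plane of each corner...
    float3 q0 = q - dot(q - p[0].position, p[0].normal) * p[0].normal;
    float3 q1 = q - dot(q - p[1].position, p[1].normal) * p[1].normal;
    float3 q2 = q - dot(q - p[2].position, p[2].normal) * p[2].normal;

    // ...and interpolate the three projections. alpha blends between the
    // flat triangle (0) and the fully curved surface (1).
    const float alpha = 0.75;  // assumed value
    float3 pos = lerp(q, q0 * bar.x + q1 * bar.y + q2 * bar.z, alpha);

    DS_OUTPUT o;
    o.position = mul(float4(pos, 1), gViewProj);
    o.normal = normalize(p[0].normal * bar.x + p[1].normal * bar.y + p[2].normal * bar.z);
    return o;
}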

Bezier Surfaces

Curved surfaces are not only useful for characters, but also for level geometry and objects.

[missing image – Quake 3 Arena]

id Software introduced the use of quadratic Bezier patches for architectural geometry in Quake 3 Arena and has been using them ever since.

Climax Brighton’s Moto GP used cubic Bezier patches to model the motorcycles.

Bezier patches can be evaluated very efficiently, because they don’t need any information about the surrounding mesh. As these games show, tessellation hardware is not required to render these surfaces. However, hardware tessellation will allow doing it much more efficiently, and will facilitate the use of these and more complex surfaces.
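
For reference, the Domain Shader evaluation of a bicubic Bezier patch only takes a few lines. Here is a sketch reusing CONTROL_POINT, DS_OUTPUT and gViewProj from the sketches above (normal evaluation is omitted):

struct HS_QUAD_CONSTANTS
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

// Cubic Bernstein basis evaluated at t.
float4 BernsteinBasis(float t)
{
    float s = 1.0 - t;
    return float4(s*s*s, 3*t*s*s, 3*t*t*s, t*t*t);
}

[domain("quad")]
DS_OUTPUT BezierDS(HS_QUAD_CONSTANTS hs,
                   float2 uv : SV_DomainLocation,
                   const OutputPatch<CONTROL_POINT, 16> b)
{
    float4 bu = BernsteinBasis(uv.x);
    float4 bv = BernsteinBasis(uv.y);

    // Tensor product of the two cubic bases over the 4x4 control grid.
    float3 pos = 0;
    [unroll] for (int j = 0; j < 4; j++)
        [unroll] for (int i = 0; i < 4; i++)
            pos += b[4 * j + i].position * (bu[i] * bv[j]);

    DS_OUTPUT o;
    o.position = mul(float4(pos, 1), gViewProj);
    o.normal   = float3(0, 0, 1);  // see the subdivision surface discussion below
    return o;
}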

Approximation to Subdivision Surfaces

Rendering of approximated Catmull-Clark subdivision surfaces is probably the most anticipated application of hardware accelerated tessellation. Several approximation methods exist.

[missing images]


Approximating Catmull-Clark Subdivision Surfaces with Bicubic Patches is the most popular one. This approximation constructs a geometry patch and a pair of tangent patches for each quadrilateral face of the control mesh. The geometry patch approximates the shape and silhouette, but does not provide tangent continuity. A smooth normal field is constructed using two additional tangent patches. The approximation supports boundaries and has also been extended to support creases in Real-Time Creased Approximate Subdivision Surfaces.

GPU Smoothing of Quad Meshes proposes an alternative approximation using piecewise quartic triangular patches that have tangent continuity and do not require additional tangent patches to provide a smooth appearance. In Fast Parallel Construction of Smooth Surfaces from Meshes with Tri/Quad/Pent Facets the same approach is extended to approximate triangular and pentagonal faces.

[missing image]
(c) Kenneth Scott, id Software

Gregory patches are a more compact representation that also provides a very similar approximation, but only support quad and triangle control faces.

The availability of sculpting tools like ZBrush and Mudbox makes it possible to create highly detailed meshes. Displaced subdivision surfaces provide a compact and efficient representation for these meshes.

Rendering Geometry Images

Another approach to render highly detailed surfaces is to use geometry images. While geometry images can be rendered very efficiently, their video memory requirements are generally higher than displacement maps due to the lack of high precision texture compression formats. Traditional animation algorithms are not possible with this representation, and view dependent tessellation level evaluation is complicated, because geometry information is not directly available at the Hull Shader stage. However, geometry images may be the fastest approach to render small static objects at fixed tessellation levels.

Terrain Rendering

Terrain rendering is one of the most obvious applications for tessellation. The flexibility of the tessellation pipeline enables the use of sophisticated algorithms to evaluate the level of refinement of the terrain patches, and frees you from having to worry about many of the implementation details.
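
For instance, a common strategy is to derive the factor of each patch edge from its projected screen-space length, so that triangle density stays roughly constant on screen. A sketch (gViewProj, gScreenHeight and the pixel targets are assumptions of mine):

// Tessellation factor proportional to the projected length of a patch edge.
// It depends only on the shared edge endpoints, so the two patches that
// share the edge always pick the same factor and no cracks appear.
// gScreenHeight: viewport height in pixels (assumed constant).
float EdgeTessFactor(float3 a, float3 b)
{
    float4 ca = mul(float4(a, 1), gViewProj);
    float4 cb = mul(float4(b, 1), gViewProj);
    float2 sa = ca.xy / ca.w;
    float2 sb = cb.xy / cb.w;

    // Roughly one segment every 16 pixels; edges that cross behind the
    // camera need extra care, which is omitted here.
    float pixels = length(sb - sa) * 0.5 * gScreenHeight;
    return clamp(pixels / 16.0, 1.0, 64.0);
}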

[missing image – Saga of Ryzom]

It’s also possible to extend traditional terrain engines with arbitrary topologies. Some MMORPGs are already doing that to create richer environments.

For example, Saga of Ryzom, a game that is based on the Nevrax engine, uses cubic patches to model the terrain, which enables them to create impressive cliffs and overhangs.

[missing image – Saga of Ryzom]

Tessellation should make it possible to combine regular heightfields with caves, cliffs, arches, and other interesting rock formations.

I think that ZBrush or Mudbox would be excellent tools to create natural looking rugged terrain.

Hair Rendering

Efficient hair rendering is one of the most interesting applications of the Direct3D11 tessellation pipeline. In addition to triangular and quad patches, the fixed function tessellator can also generate lines, which are very useful for applications like hair and fur rendering.

[missing image – nalu]

The algorithm described in Hair Animation and Rendering in the Nalu Demo maps very well to the tessellation pipeline.

As shown in Real-Time Rendering of Realistic Hair, the use of the hardware tessellation pipeline makes it very easy to simulate and render realistic hair with high geometric complexity in real-time.

That’s possible because the simulation is performed on only a few hundred guide hairs, which are expanded by the tessellator into thousands of hair strands.
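
A sketch of the expansion (the guide layout, the per-strand jitter, and all names are my assumptions; DS_OUTPUT and gViewProj as above): in the isoline domain one tessellation factor selects how many strands are generated per guide patch and the other how many segments each strand gets. SV_DomainLocation.x walks along the strand and .y identifies the strand.

struct ISOLINE_CONSTANTS
{
    // [0]: strands per guide patch, [1]: segments per strand.
    float factors[2] : SV_TessFactor;
};

cbuffer HairConstants { float gStrandSpread; };  // assumed strand spacing

[domain("isoline")]
DS_OUTPUT HairDS(ISOLINE_CONSTANTS hs,
                 float2 uv : SV_DomainLocation,
                 const OutputPatch<CONTROL_POINT, 4> guide)
{
    // Evaluate the guide hair as a cubic Bezier curve along uv.x.
    float t = uv.x, s = 1.0 - t;
    float4 w = float4(s*s*s, 3*t*s*s, 3*t*t*s, t*t*t);
    float3 p = guide[0].position * w.x + guide[1].position * w.y
             + guide[2].position * w.z + guide[3].position * w.w;

    // Offset each strand with a cheap per-strand hash. A real implementation
    // would instead interpolate between neighboring guide hairs and offset
    // within a local frame around the guide.
    float2 jitter = frac(sin(uv.y * float2(12.9898, 78.233)) * 43758.5453) - 0.5;
    p += float3(jitter, 0) * gStrandSpread;

    DS_OUTPUT o;
    o.position = mul(float4(p, 1), gViewProj);
    o.normal   = float3(0, 0, 1);  // hair shading would use the tangent instead
    return o;
}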

Rendering Panoramas

Another application for tessellation is to perform arbitrary nonlinear projections, which is useful, for example, to create real-time panoramas.

Since graphics hardware relies on homogeneous linear interpolation for rasterization, arbitrary projections and deformations at the vertex level result in errors unless the surface is sufficiently refined.

[missing image – panquake]

The traditional image based approach is to render the scene to a cube map and then perform an arbitrary projection of the cube map to screen space, relying on texture hardware to do the sampling and interpolation. This was the approach taken in Fisheye Quake and Pan Quake.

While that works well, it requires rendering the scene to the 6 cube faces, and sometimes results in oversampling or undersampling of some areas of the scene.

[missing image – panorama]


Dynamic Mesh Refinement on GPU using Geometry Shaders proposes the use of the geometry shader to dynamically refine the surfaces to prevent linear interpolation artifacts. However, the Geometry Shader operates sequentially and is not well suited for this task. On the other hand, the dynamic mesh refinement algorithm maps well to the Direct3D11 tessellation pipeline.
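
To give an idea of the projection involved, here is a sketch of an equidistant fisheye applied per tessellated vertex (gView, gMaxAngle and the depth mapping are assumptions of mine). Since the triangles between the projected vertices are still rasterized with straight edges, the mesh must be refined wherever the projection curves strongly:

cbuffer PanoramaConstants
{
    float4x4 gView;      // world to view transform, looking down -z (assumed)
    float    gMaxAngle;  // angle covered by the image radius, in radians
};

// Equidistant fisheye: screen radius proportional to the angle between
// the vertex direction and the view axis.
float4 FisheyeProject(float3 worldPos)
{
    float3 v = mul(float4(worldPos, 1), gView).xyz;
    float  d = max(length(v), 1e-6);

    float theta = acos(clamp(-v.z / d, -1.0, 1.0));
    float r = theta / gMaxAngle;

    float2 dir = (theta > 1e-4) ? normalize(v.xy) : float2(0, 0);

    // Depth derived from distance; proper near/far handling is omitted.
    return float4(dir * r, saturate(d / 1000.0), 1.0);
}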

Rendering of 2D curved shapes

While GPUs can render simple polygons, they are not able to automatically handle complex concave and curved polygons with overlaps and self-intersections without prior triangulation and tessellation.

[missing image – svg tiger]

The Direct3D11 tessellation pipeline is not designed to perform triangulation. However, there’s a well known method to render arbitrary polygons using the stencil buffer that can be used in this case. This method was first described in the OpenGL Red Book, but was recently popularized by its implementation in the Qt graphics library.

It’s possible to combine this technique with hardware tessellation to render curved tessellated shapes without the need for expensive CPU tessellation and triangulation algorithms.


7 Comments

repi
Posted 11/1/2009 at 10:31 am | Permalink
Excellent post Ignacio! Great inspiration for future geometry pipelines & usages in games

cb
Posted 11/1/2009 at 11:36 am | Permalink
Do you think you could do Reyes-style motion blur?

Similar to the Rendering Panoramas method that you mention – tessellate triangles until they are near 1 pixel size and then extrude them based on their velocity (assume the engine is passing a velocity vector per vertex or something)?

castano
Posted 11/1/2009 at 1:27 pm | Permalink
> Do you think you could do Reyes-style motion blur?

I think that tessellation does not help much in this setting. There’s no tessellation mode that creates multiple copies of the same triangle. However, you can do that in the geometry shader. The problem is how to combine these copies. You can use some sort of order independent transparency, which might actually be possible in Direct3D11, since pixel shaders support arbitrary gather and scatter operations.

In a Reyes-style renderer, shading is performed in object space. Fast moving objects do not need to be shaded exactly, so using tessellation you can dynamically change the shading frequency based on this and other factors.

ren canjiang
Posted 7/2/2009 at 3:55 am | Permalink
thank you for such interesting topics. i’m using opengl for demos, i hope it can catch up with dx11

sonali joshi
Posted 21/6/2009 at 4:17 am | Permalink
it was just ok

Slawa
Posted 24/7/2009 at 10:40 am | Permalink
Well done ) keep it up!

Sarah
Posted 12/3/2010 at 9:39 pm | Permalink
You’re an excellent speaker, and have an interesting article. It’s a shame you didn’t get to speak! =(

Ownership-based Zippering

Since my Gamefest talk I’ve received numerous questions about the ownership-based zippering algorithm that I proposed. So, I’ll try to explain it in more detail. See my previous article on watertight texture sampling for more background info.

In the averaging method we would have to store the texture coordinate of every patch that contributes to a shared feature. Edges are shared by only two patches, but corners can be shared by many patches. By defining the ownership of the shared features (corners and edges), we only have to store the texture coordinates of the patch that owns the corresponding feature.

So, we have:

  • 4 texture coordinates for the interior (4).
  • 2 texture coordinates for each edge (8).
  • 1 texture coordinate for each corner (4).

Therefore, the total number of texture coordinates per patch is: 4+8+4 = 16

Deciding what patch owns a certain edge or corner is done as a pre-process, so that the patch texture coordinates can be computed in advance. The way I store these texture coordinates is as follows:

[missing picture]

Each vertex has:

  • 1 interior texture coordinate (index 0).
  • 1 edge texture coordinate for each of its two edges (indices 1 and 2).
  • 1 corner texture coordinate (index 3).
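
In other words, the per-vertex data could be declared like this (a sketch; the struct name is mine):

struct ZipperVertex
{
    float3 position;
    // [0]: interior, [1] and [2]: owned edge coordinates, [3]: owned corner.
    float2 texCoord[4];
};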

On the interior, we interpolate the interior texture coordinates bilinearly:

float2 tc = bar.x * texCoord[0][0] +
            bar.y * texCoord[1][0] +
            bar.z * texCoord[2][0] +
            bar.w * texCoord[3][0];

where bar stands for the barycentric coordinates:

// Bilinear corner weights: vertex 0 is at uv=(1,1), vertex 1 at (0,1),
// vertex 2 at (0,0) and vertex 3 at (1,0).
bar.x = (    uv.x) * (    uv.y);
bar.y = (1 - uv.x) * (    uv.y);
bar.z = (1 - uv.x) * (1 - uv.y);
bar.w = (    uv.x) * (1 - uv.y);

On the edges we interpolate the edge texture coordinates linearly:

if (uv.y == 1) tc = texCoord[0][1] * bar.x + texCoord[1][2] * bar.y; // edge 0-1
if (uv.y == 0) tc = texCoord[2][1] * bar.z + texCoord[3][2] * bar.w; // edge 2-3
if (uv.x == 1) tc = texCoord[3][1] * bar.w + texCoord[0][2] * bar.x; // edge 3-0
if (uv.x == 0) tc = texCoord[1][1] * bar.y + texCoord[2][2] * bar.z; // edge 1-2

And at the corners we simply select the appropriate corner texture coordinate:

if (bar.x == 1) tc = texCoord[0][3];
if (bar.y == 1) tc = texCoord[1][3];
if (bar.z == 1) tc = texCoord[2][3];
if (bar.w == 1) tc = texCoord[3][3];

The same thing can be done more efficiently using a single bilinear interpolation preceded by some predicated assignments:

// Interior
float2 t0 = texCoord[0][0];
float2 t1 = texCoord[1][0];
float2 t2 = texCoord[2][0];
float2 t3 = texCoord[3][0];

// Edges
if (uv.y == 1) { t0 = texCoord[0][1]; t1 = texCoord[1][2]; }
if (uv.y == 0) { t2 = texCoord[2][1]; t3 = texCoord[3][2]; }
if (uv.x == 1) { t3 = texCoord[3][1]; t0 = texCoord[0][2]; }
if (uv.x == 0) { t1 = texCoord[1][1]; t2 = texCoord[2][2]; }

// Corners
if (bar.x == 1) t0 = texCoord[0][3];
if (bar.y == 1) t1 = texCoord[1][3];
if (bar.z == 1) t2 = texCoord[2][3];
if (bar.w == 1) t3 = texCoord[3][3];

float2 tc = bar.x * t0 + bar.y * t1 + bar.z * t2 + bar.w * t3;

And finally, the predicated assignments can be simplified and replaced by an index calculation as I proposed in my previous article:

// Compute texture coordinate indices (0: interior, 1,2: edges, 3: corner)
int idx0 = 2 * (uv.x == 1) + (uv.y == 1);
int idx1 = 2 * (uv.y == 1) + (uv.x == 0);
int idx2 = 2 * (uv.x == 0) + (uv.y == 0);
int idx3 = 2 * (uv.y == 0) + (uv.x == 1);

float2 tc = bar.x * texCoord[0][idx0] +
            bar.y * texCoord[1][idx1] +
            bar.z * texCoord[2][idx2] +
            bar.w * texCoord[3][idx3];

The same idea also applies to triangles:

// Interior
float2 t0 = texCoord[0][0];
float2 t1 = texCoord[1][0];
float2 t2 = texCoord[2][0];

// Edges
if (bar.x == 0) { t1 = texCoord[1][1]; t2 = texCoord[2][2]; }
if (bar.y == 0) { t2 = texCoord[2][1]; t0 = texCoord[0][2]; }
if (bar.z == 0) { t0 = texCoord[0][1]; t1 = texCoord[1][2]; }

// Corners
if (bar.x == 1) t0 = texCoord[0][3];
if (bar.y == 1) t1 = texCoord[1][3];
if (bar.z == 1) t2 = texCoord[2][3];

float2 tc = bar.x * t0 + bar.y * t1 + bar.z * t2;

And the resulting code can be optimized the same way:

// Compute texture coordinate indices (0: interior, 1,2: edges, 3: corner)
int idx0 = 2 * (bar.y == 0) + (bar.z == 0);
int idx1 = 2 * (bar.z == 0) + (bar.x == 0);
int idx2 = 2 * (bar.x == 0) + (bar.y == 0);

float2 tc = bar.x * texCoord[0][idx0] +
            bar.y * texCoord[1][idx1] +
            bar.z * texCoord[2][idx2];

Approximate Subdivision Shading

Subdivision Shading is a new approach to compute normal fields of subdivision surfaces that was presented at SIGGRAPH Asia 2008.

[missing image – Subdivision Shading]

It’s a very simple idea that provides surprisingly good results. The idea is to interpolate the subdivision surface normals using the same procedure used for positions. The resulting normal field is not the actual surface normal, but looks smooth and doesn’t exhibit some of the artifacts characteristic of subdivision surfaces at the extraordinary vertices.

The main disadvantage is that it looks too smooth compared to the real surface normal, but I’m not sure that’s necessarily bad. To avoid that problem the paper suggests blending the surface normals and the interpolated vertex normals so that the interpolated normals are used only in the proximity of extraordinary vertices.

The same idea can also be applied to the Approximate Catmull-Clark Subdivision Surfaces (Bezier ACC) proposed by Loop and Schaefer. Instead of constructing the normal from the cross product of the tangent patches, the normal can be interpolated directly using the same approximation used to evaluate positions. The resulting surface has G1 discontinuities around extraordinary vertices in both the geometry and the normal field. However, I haven’t been able to notice any artifact due to that in any of our test models.

This approach is quite efficient. It requires the same number of control points as Bezier ACC, but only one half of the stencil weights, because positions and normals are evaluated exactly the same way. The evaluation of the surface normal itself is also more efficient; evaluating a single 4×4 Bezier patch is faster than evaluating two 3×4 Bezier patches and computing a cross product.
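
Using a standard cubic Bernstein evaluation, the shared evaluation might look like this (a sketch of the inner part of the domain shader; posCP and nrmCP, the 4x4 geometry and normal control grids, and the BernsteinBasis helper are assumed names):

float4 bu = BernsteinBasis(uv.x);
float4 bv = BernsteinBasis(uv.y);

// A single pass over the 4x4 grid evaluates both fields with the same weights.
float3 pos = 0;
float3 nrm = 0;
[unroll] for (int j = 0; j < 4; j++)
{
    [unroll] for (int i = 0; i < 4; i++)
    {
        float w = bu[i] * bv[j];
        pos += posCP[4 * j + i] * w;  // geometry control points
        nrm += nrmCP[4 * j + i] * w;  // normal control points
    }
}
nrm = normalize(nrm);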

However, the nicest thing about this scheme is that it facilitates watertightness when using displacement maps.

At Gamefest 2008 I mentioned that in order to achieve watertight surfaces when using displacement maps it was necessary to:

a) Sample textures in a watertight way.
b) Construct a watertight normal field.

[missing image – ACC tangents]

In order to obtain a watertight normal field, adjacent patches need to compute the normals along their edges consistently.

The approach proposed in the ACC paper produces a smooth and continuous normal field, but the normals at extraordinary vertices and along the edges that surround them are not consistent. Patches around the extraordinary vertices have tangents that lie on the same plane, but their cross product does not yield exactly the same normal.

There are several ways to solve that problem, but all of them are too complicated to cover in this post. On the other hand, the normal interpolation approach does not suffer from that problem and provides a much simpler solution.

Watertight Texture Sampling

One of the problems when implementing displacement mapping is dealing with texture seams.

Texture seams are discontinuities in the parameterization. These discontinuities are always necessary unless the mesh has the topology of a disk, but in general meshes need to be partitioned into charts that are then parameterized independently.

[missing image]

This problem is not new. When using color and normal maps, texture seams usually cause color and lighting discontinuities. Artists usually deal with these artifacts by placing the seams in areas of the model where they are less visible. However, when using displacement maps, the seams cause holes in the mesh, which produce much more visible artifacts that are harder to hide.

A Simple Solution

A simple solution to this problem is to manually alter the geometry along the seams to make it self-intersect. While that does not result in watertight surfaces, the resulting holes are not visible. The main problems of this approach are that it requires artist intervention, creates open meshes, and only works for opaque surfaces; it would be nice to have more robust solutions.

Seamless Parameterization Methods

Fortunately this problem has been studied extensively and seamless parameterization methods have been developed. These are automatic parameterizations in which the chart texels are properly aligned across boundaries.

Displacement maps are generally not painted using traditional image editing applications, but created in specialized sculpting tools (such as ZBrush and Mudbox) or generated procedurally in attribute capture tools like xNormal. That means that it’s possible to store displacements using automatic parameterizations, since the artist does not need to edit the texture manually.

The most straightforward seamless parameterization method is the one used by ZBrush, which is very similar to the ones proposed in Meshed Atlases for Real-Time Procedural Solid Texturing. ZBrush maps every face of the mesh to a quad in texture space, so that all edges are either vertical or horizontal, and have the same length.
This method is very easy to implement, but has several problems:

  • It introduces a large number of seams in the mesh, which increases the number of vertex transforms.
  • It does not make efficient use of the texture space, because all faces, independently of their area, are mapped to quads of the same size.

ZBrush provides an option to group faces into larger charts while preserving the edge length and orientation, which helps reduce the number of vertex transforms. It also has an option to scale the charts in proportion to their surface area, but that breaks the seamlessness.

[missing image]

In order to alleviate these problems, another solution is to map rectangular charts (instead of single faces) to texture-space quads.

That was first proposed in Seamless Texture Atlases, where rectangular charts are created by first clustering polygons into arbitrary sided charts and then partitioning them into quadrilaterals using a Catmull-Clark-inspired scheme:

[missing image]

An entirely different approach is described in Spectral Surface Quadrangulation. In this paper a quadrangulation is constructed from the optimized Morse-Smale complex of the natural harmonics of the surface. The main limitation of this approach is that it only handles closed surfaces, and requires manual selection of the eigenfunction to produce charts of the desired size. Spectral Quadrangulation with Orientation and Alignment Control solves these problems and also adds support for explicit constraints.

[missing image]

These two methods create quadrangulations without T-junctions. That seems like a nice property, but Rectangular Multi-Chart Geometry Images shows that it’s possible to remove that constraint and still achieve smooth results, yielding parameterizations with less distortion.

The most interesting method is probably the one described in Designing Quadrangulations with Discrete Harmonic Forms. This method is even more flexible; instead of defining charts before the parameterization, it introduces singularity points in the mesh and computes the parameterization globally. Then the mesh can be cut connecting the singularity points arbitrarily.

Other methods try to achieve continuity between patches using constraints and parameterizing adjacent patches simultaneously. That’s for example the case of Matchmaker: Constructing Constrained Texture Maps, but it only minimizes the discontinuities and does not fully remove them.

For more information about parameterization methods, these two surveys are a great resource:

Watertightness and Floating Point Precision

Modern hardware uses a floating point representation to interpolate texture coordinates. However, when using programmable tessellation hardware as specified by Direct3D11, interpolation is performed explicitly in the domain shader (or in the vertex shader when using instanced tessellation on pre-Direct3D11 GPUs). This is done to enable the use of higher order interpolation, but also allows the use of fixed point interpolation.

[missing image]

That’s important, because the use of floating point for interpolation causes some problems. Floating point values have more precision closer to the origin than farther from it. As a result, interpolation of texture coordinates along an edge closer to the origin will produce a different set of samples than interpolation along an edge that is farther from it. This is exactly what happens on texture seams and will result in small cracks in the mesh even when using a seamless parameterization.

If that seems surprising, then I’d recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic.
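
For illustration, interpolating texture coordinates along a patch edge in fixed point might look like this (a sketch; the 12-bit precision and all names are my choice, and the precision has to be traded against texture resolution and 32-bit overflow):

static const int SCALE = 1 << 12;  // 12 fractional bits (assumed)

int2 ToFixed(float2 tc) { return int2(round(tc * SCALE)); }

float2 InterpolateEdge(float2 tcA, float2 tcB, float t)
{
    int2 a = ToFixed(tcA);
    int2 b = ToFixed(tcB);
    int  w = int(round(t * SCALE));

    // Integer arithmetic is exact, so the two patches sharing this edge
    // produce bit-identical results as long as they quantize t to
    // complementary values (w and SCALE - w).
    int2 r = a * (SCALE - w) + b * w;
    return float2(r) / float(SCALE * SCALE);
}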

It’s also important to note that a seamless parameterization and fixed point interpolation are not sufficient conditions to achieve watertightness. It’s also necessary to generate the displacement maps appropriately so that the values of the texels along the chart boundaries match exactly along both sides of the seams.

Zippering Methods

A different approach is to introduce a triangle strip connecting the vertices along the seam. These strips can be generated with the same tessellation unit used to generate the patches, by setting the tessellation level equal to 1 in one of the parametric directions. This solves the problem nicely, but requires rendering more patches, and introduces additional triangles that are almost degenerate.

[missing image]

Another interesting solution is the zippering method proposed in the Multi-Chart Geometry Images paper. The idea is to sample the displacement (or the geometry image) on both sides of the seam and to use the average of the two samples.

The main problem of this approach is that it requires two texture samples along the seams, which means you have to take two samples in all cases, or use branching to take an extra sample on the seam vertices only.

The averaging method does not work for corners. Along the edges there are only two possible displacement values, one for each side of the seam. However, on corners there are more than two. Storing an arbitrary number of texture coordinates and taking an arbitrary number of texture samples would be too expensive. A simple solution is to snap the corner texture coordinates to the nearest texel, and make sure that the displacement value for that vertex is the same for all patches that meet at that corner.
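
The snapping itself is a one-liner (a sketch; the texture size parameter is assumed to hold the displacement map dimensions):

// Snap a corner texture coordinate to the center of the texel that contains
// it, so that all patches meeting at the corner sample the same value.
float2 SnapToTexelCenter(float2 tc, float2 textureSize)
{
    return (floor(tc * textureSize) + 0.5) / textureSize;
}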

A cheaper solution that only requires a single texture sample, and that handles corners more gracefully is to define patch ownership of the seams. This is what I have proposed at Gamefest and SIGGRAPH last year.

By designating the patch owner for every edge and corner, all patches can agree on what texture coordinate to use when sampling the displacement at those locations. That means that for every edge and for every corner we need to store the texture coordinates of the owner of those features. That’s a total of 4 texture coordinates per vertex (16 for quads and 12 for triangles).

At runtime, only a single texture sample is needed; the corresponding texture coordinate can be selected with a simple calculation:

// Compute texture coordinate indices (0: interior, 1,2: edges, 3: corner)
int idx0 = 2 * (uv.x == 1) + (uv.y == 1);
int idx1 = 2 * (uv.y == 1) + (uv.x == 0);
int idx2 = 2 * (uv.x == 0) + (uv.y == 0);
int idx3 = 2 * (uv.y == 0) + (uv.x == 1);

// Barycentric interpolation of texture coordinates
float2 tc = bar.x * texCoord[0][idx0] +
            bar.y * texCoord[1][idx1] +
            bar.z * texCoord[2][idx2] +
            bar.w * texCoord[3][idx3];

Partition of Unity

The zippering methods produce watertight results independently of the parameterization. However, if the parameterization is not seamless, or if the features of the displacement map differ on each side of the seam, the result will show sharp discontinuities: not holes, but undesirable creases along the seams.

[missing image]

These problems can be avoided using a seamless parameterization and generating the displacement maps making sure that the displacements match along the seams. However, another solution is to use a partition of unity.

A partition of unity is a method to combine multiple texture parameterizations to produce smooth transitions between them. The idea is to define transition regions around the seams, so that in those regions both parameterizations are used to sample the texture and the results are blended smoothly.

The zippering methods described before are just a special case of a partition of unity in which the blend function is the unit step function.
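
A sketch of the blended sampling (the signed seam distance input, the transition width, and the resource names are assumptions of mine):

Texture2D    dispMap;      // assumed displacement map
SamplerState dispSampler;

// Sample through both parameterizations near a seam and blend smoothly.
// seamDist is a signed distance to the seam interpolated from the vertices;
// width is the half-size of the transition region.
float SampleDisplacement(float2 tcA, float2 tcB, float seamDist, float width)
{
    float w  = smoothstep(-width, width, seamDist);
    float dA = dispMap.SampleLevel(dispSampler, tcA, 0).x;
    float dB = dispMap.SampleLevel(dispSampler, tcB, 0).x;
    return lerp(dB, dA, w);
}

Replacing the smoothstep with a hard step at the seam recovers exactly the zippering behavior.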

Conclusions

There are many different solutions to achieve watertightness when sampling displacement maps. I personally prefer the zippering methods, since they don’t impose any restriction on the parameterization of the mesh, and work with arbitrary displacement maps. They are easy to implement, and do not add much overhead to the shaders, even though they increase the number of texture coordinates. Note that even when using zippering methods to guarantee watertightness, the use of seamless (or nearly seamless) parameterizations is still useful, because they eliminate any visible crease or discontinuity along the seams.

cb
Posted 1/1/2009 at 12:51 pm | Permalink
Very awesome post !!

It’s a shame that texture coordinates are true floats and not something like 16.16 fixed point. It basically forces you to use zippering. If you had fixed point you could make quadrilateral charts and put the corners exactly on pixels and make the edges the same length and I believe that would work.

castano
Posted 1/1/2009 at 4:07 pm | Permalink
There’s actually nothing stopping you from representing texture coordinates using fixed point. The floating point precision problems only matter in the domain shader, where two instances of the same vertex at opposite sides of the seam must sample the same value from the texture. In the domain shader interpolation is done explicitly, and the math can be done in fixed point.

However, the problem is a bit more complicated than what I’ve described in the article. Hardware bilinear interpolation of texels is not symmetric: sampling at 0.7 between A and B is not the same as sampling at 0.3 between B and A. Nearest filtering is not symmetric either; the result of sampling at 0.5 is undefined.

For this reason mapping the faces to equal-sized texture-space quads does not solve the problem entirely. It would also be necessary to make sure that seam edge vectors have the same sense.

All that is doable, but the zippering method is so much simpler and more robust that I think it is preferable in most cases.

castano
Posted 1/1/2009 at 4:29 pm | Permalink
BTW, all that becomes even more complicated if you take mipmapping into account.

cb
Posted 2/1/2009 at 5:29 pm | Permalink
Urg I wrote something but didn’t answer the challenge and when I hit back my comment was gone. >:|

castano
Posted 2/1/2009 at 7:16 pm | Permalink
Ouch, that sucks. Posts that fail the challenge are discarded and do not go to the moderation queue, so I have no way of recovering it…

The foie gras parable

I loved this talk from Blue Hill’s Chef Dan Barber.

I’m not convinced that what the organic/ethical/slow food movement advocates is the most sustainable way of producing food and feeding the world. For example, I’m not necessarily opposed to the use of technology to produce food, as long as it’s done responsibly. It’s the use of technology by profit-driven food corporations that I’m concerned about.

Anyway, the talk made me feel saudades. I have this fantasy that one day I’d go back to Spain, buy a small farm, and live in a rural home raising pigs, hens, and sheep. I’d then spend the rest of my days working on the farm, cooking, hiking in the area, and coding in my spare time.

nvidia-widgets

I just published the immediate mode user interface (IMGUI) toolkit that we are using in our latest OpenGL SDK examples.

The purpose of the SDK examples is to show how to implement certain techniques using OpenGL. We did not want to spend too much time writing UI code, and we did not want the UI code to end up occupying more space than the code dedicated to the technique we are trying to showcase. IMGUI allowed us to accomplish these goals in an elegant way.

Here’s a simple example with two widgets:

ui.begin();
ui.beginGroup(nv::GroupFlags_GrowDownFromLeft);

static bool showButton = false;
ui.doCheckButton(none, "Show exit button", &showButton);

if (showButton) {
    if (ui.doButton(none, "Quit")) exit(0);
}

ui.endGroup();
ui.end();

And this is what the result looks like:

[missing image]

One of the nice things is that, since the UI is being rendered in immediate mode, it’s very easy to create/modify widgets dynamically. For example, this is what happens when the check box is activated:

[missing image]

You can get the whole source code at the nvidia-widgets google project, and feel free to drop by the Molly Rocket IMGUI forums to share your feedback!

Moving

My better half has been accepted to UC Davis, and we’ll be moving there in the next few days. We are pretty excited; I think Davis will be a better place to raise our son. It’s also closer to the Sierra, so I’ll have the chance to spend more time in the backcountry.