Nathan Reed’s coding blog
http://reedbeta.com/
Latest posts on Nathan Reed’s coding blog (en-us) — Thu, 21 Dec 2017 21:59:32 -0800

Flows Along Conic Sections
http://reedbeta.com/blog/flows-along-conic-sections/
Nathan Reed — Tue, 12 Dec 2017 20:35:26 -0800 — Graphics, Math
<p>Here’s a cute bit of math I figured out recently. It probably doesn’t have much practical
application, at least not for graphics programmers, but I thought it was fun and wanted to share it.</p>
<p>First of all, everyone knows about rotations: they make things go in circles! More formally, given
a plane to rotate in and a center point, rotations of any angle will preserve circles in the same
plane and with the same center. By “preserve circles”, I mean that the rotation will send every
point on the circle to somewhere on the same circle. The individual points move, but the <em>set</em>
of points comprising the circle is invariant under rotation.</p>
<!--more-->
<p>Moreover, rotations with a fixed plane and center form a one-parameter family of transformations:
they can be parameterized by a single degree of freedom, the angle. By varying the angle, you move
the points around on each circle. Another way of saying this is that the family of rotations defines
a <em>flow</em> along circles: if you take derivatives with respect to the rotation angle, you can get a
vector field that shows how each point is pushed around by the rotation—like a velocity field in a
fluid simulation. The family of concentric circles preserved by the rotation shows up as the
<a href="https://en.wikipedia.org/wiki/Integral_curve">integral curves</a> of this vector field.</p>
<p>So far, so good. Now for the fun part: it happens that <strong>all conic sections</strong>, not just circles,
have a similar family of linear or affine transformations that preserve them and induce a flow
along them.</p>
<p>Here’s a shadertoy to demonstrate. It cycles through circles, ellipses, parabolas, and hyperbolas,
and in each case animates through transformations that preserve that conic. The background
coordinate grid shows what the transformation is doing to the space, and dots trace individual
points to show how they flow along the conic.</p>
<div class="embed-wrapper-outer" >
<div class="embed-wrapper-inner">
<iframe class="embed" type="text/html" allowfullscreen frameborder="0" src="https://www.shadertoy.com/embed/XtXfDS?paused=false&gui=false"></iframe>
</div>
</div>
<p>So, what are these transformations? Let’s look at each type of conic in turn.</p>
<p><strong>Circles</strong>. As we’ve seen, circles are preserved by rotations, which can be parameterized by
their angle $\theta$ and have a matrix of the form:
$$
\begin{bmatrix}
\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta
\end{bmatrix}
$$</p>
<p><strong>Ellipses</strong>. Ellipses are just scaled circles, and the transformations that preserve them can be
derived by: scaling the ellipse to a circle, rotating, then unscaling back to the ellipse. If the
ellipse is axis-aligned and has aspect ratio $\alpha$, then the ellipse-preserving transformations
have the form:
$$
\begin{bmatrix}
\cos\theta & -\alpha\sin\theta \\ \frac{1}{\alpha}\sin\theta & \cos\theta
\end{bmatrix}
$$
Geometrically, this produces a rotation combined with some shear and nonuniform scaling that varies
with angle. Naturally, it reduces to the circle case when $\alpha = 1$.</p>
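<p>As a quick sanity check (a Python sketch of my own, not part of the post), we can verify numerically that this matrix keeps points on their ellipse, i.e. that $(x/\alpha)^2 + y^2$ is unchanged:</p>

```python
import math

def ellipse_transform(theta, alpha):
    # The ellipse-preserving matrix from the post, for aspect ratio alpha
    return ((math.cos(theta), -alpha * math.sin(theta)),
            (math.sin(theta) / alpha, math.cos(theta)))

def apply(m, p):
    (a, b), (c, d) = m
    x, y = p
    return (a * p[0] + b * p[1], c * p[0] + d * p[1])

alpha = 2.0
x, y = 1.5, -0.7
for theta in (0.3, 1.0, 2.5):
    xp, yp = apply(ellipse_transform(theta, alpha), (x, y))
    # (x/alpha)^2 + y^2 is invariant, so the point stays on its ellipse
    assert abs((xp / alpha)**2 + yp**2 - ((x / alpha)**2 + y**2)) < 1e-9
```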
<p><strong>Parabolas</strong>. This one is different! It turns out there’s no continuous family of <em>linear</em>
transformations that map a parabola to itself. However, there does exist a family of <em>affine</em>
transformations that does so. For the parabolas $y = x^2 + k$, it looks like this:
$$
\begin{bmatrix} x \\ y \end{bmatrix} \mapsto
\begin{bmatrix}
1 & 0 \\ v & 1
\end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} +
\begin{bmatrix} \frac{1}{2}v \\ \frac{1}{4}v^2 \end{bmatrix}
$$
This consists of a shear along the $y$-axis, which sort of “rolls” the parabola, pushing some points
up and others down so that a different point becomes the vertex. Then the translation puts the
vertex back where it was to begin with. The family of transformations is parameterized by the shear
fraction, $v$.</p>
<p>Note that unlike the previous cases, this family isn’t periodic. It’s unbounded; $v$ can range over
all the real numbers. As $v$ gets farther from zero, the shear will get more and more extreme, and
the original coordinate system more and more distorted—but the parabola stays just where it is,
and its points just keep flowing along it.</p>
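<p>A quick way to convince yourself of this (again, a Python sketch of mine, not from the post) is to check that the affine map sends points of $y = x^2 + k$ back onto the same parabola:</p>

```python
def parabola_flow(p, v):
    # Shear along y by v, then translate by (v/2, v^2/4), as in the post
    x, y = p
    return (x + v / 2, v * x + y + v**2 / 4)

k = 0.8
for x in (-2.0, 0.0, 1.3):
    for v in (-3.0, 0.5, 10.0):
        xp, yp = parabola_flow((x, x**2 + k), v)
        # The image point lies on the same parabola y = x^2 + k
        assert abs(yp - (xp**2 + k)) < 1e-9
```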
<p><strong>Hyperbolas</strong>. We’re back to linear transformations again for these ones. Hyperbolas are preserved
by <a href="https://en.wikipedia.org/wiki/Squeeze_mapping">squeeze mappings</a>, which are nonuniform scalings
that have reciprocal scale factors along two axes: if they scale one axis by a factor $a$, they
scale the other by $1/a$. The axes here must be aligned with the asymptotes of the hyperbola.</p>
<p>It turns out that a convenient way to parameterize these is in terms of the
<a href="https://en.wikipedia.org/wiki/Hyperbolic_angle">hyperbolic angle</a> (somewhat of a misnomer, as it
isn’t an <em>angle</em> in the usual sense at all). The hyperbolas $y^2 - x^2 = k$ are preserved by
transformations of the form:
$$
\begin{bmatrix}
\cosh u & \sinh u \\ \sinh u & \cosh u
\end{bmatrix}
$$
This kind of transformation is also known as a “hyperbolic rotation” or a
<a href="https://en.wikipedia.org/wiki/Lorentz_transformation">Lorentz transformation</a>; it’s central in
special relativity (although it’s usually parameterized differently there). Like the parabolic case,
the family is unbounded; $u$, the hyperbolic angle, can range over all real numbers but induces a
consistent flow along the hyperbolas no matter how positive or negative it gets.</p>
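<p>Once more, a little numeric check (my own Python sketch) confirms that the hyperbolic rotation leaves $y^2 - x^2$ unchanged:</p>

```python
import math

def hyperbolic_rotation(p, u):
    # The cosh/sinh matrix from the post, applied to a point
    x, y = p
    return (x * math.cosh(u) + y * math.sinh(u),
            x * math.sinh(u) + y * math.cosh(u))

x, y = 0.6, 1.9
k = y**2 - x**2
for u in (-2.0, 0.1, 3.0):
    xp, yp = hyperbolic_rotation((x, y), u)
    # y^2 - x^2 is invariant: the point flows along its hyperbola
    assert abs(yp**2 - xp**2 - k) < 1e-6
```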
<p>Incidentally, these families of transformations we’ve been discussing are examples of continuous
symmetry groups—<a href="https://en.wikipedia.org/wiki/Lie_group">Lie groups</a>. The flow vector field is the
generator of the <a href="https://en.wikipedia.org/wiki/Lie_algebra">Lie algebra</a> for that Lie group.</p>
<p>And, as an application of <a href="https://en.wikipedia.org/wiki/Noether%27s_theorem">Noether’s theorem</a>,
each family also has a corresponding conserved quantity—a particular function of the coordinates
that’s conserved (does not change) when a transformation is applied. Consequently, these quantities
are also constant along the corresponding conic sections. They can therefore serve to identify a
specific conic section amongst all the ones preserved by the same family of transformations.</p>
<ul>
<li>Circles’ conserved quantity is $x^2 + y^2$, the squared radius of the circle.</li>
<li>Ellipses (in the axis-aligned case) have conserved quantity $(x/\alpha)^2 + y^2$, an
aspect-ratio-corrected squared radius.</li>
<li>Parabolas (in the standard orientation and aspect ratio we considered) have conserved quantity
$y - x^2$, the height of the parabola above the origin.</li>
<li>Hyperbolas (in the standard orientation and aspect ratio we considered): the conserved quantity
is $y^2 - x^2$, which is plus or minus the square of the hyperbola’s distance of closest approach
to the origin.</li>
</ul>

Conformal Texture Mapping
http://reedbeta.com/blog/conformal-texture-mapping/
Nathan Reed — Sun, 26 Nov 2017 17:28:39 -0800 — Graphics, Math
<p>In two <a href="/blog/quadrilateral-interpolation-part-1/">previous</a> <a href="/blog/quadrilateral-interpolation-part-2/">articles</a>,
I’ve explored some unusual methods of texture mapping—beyond the conventional
approach of linearly interpolating UV coordinates across triangles. This post is a sort of
continuation-in-spirit of that work, but I’m no longer focusing specifically on quadrilaterals.</p>
<p>A problem that often afflicts texture mapping on smooth, curvy models (such as characters) is distortion:
in some regions, the texture may appear overly squashed,
stretched, or sheared on the 3D model. A related but distinct problem is that of different
regions of the model having different texel density, due to varying scale in the UV mapping.
I wanted to explore these issues mathematically. Are there ways to create
texture mappings that have low distortion by construction?</p>
<p>Ultimately, I didn’t come to an altogether satisfying resolution of this question, but I encountered
plenty of interesting math along the way and I want to share some of it. This post will be on the
more esoteric, less immediately-applicable side—but I hope you’ll find the topic intriguing
nonetheless.</p>
<!--more-->
<div class="toc">
<ul>
<li><a href="http://reedbeta.com/blog/conformal-texture-mapping/#quantifying-texture-distortion">Quantifying Texture Distortion</a></li>
<li><a href="http://reedbeta.com/blog/conformal-texture-mapping/#conformal-maps">Conformal Maps</a></li>
<li><a href="http://reedbeta.com/blog/conformal-texture-mapping/#mobius-transformations">Möbius Transformations</a></li>
<li><a href="http://reedbeta.com/blog/conformal-texture-mapping/#holomorphic-functions">Holomorphic Functions</a></li>
<li><a href="http://reedbeta.com/blog/conformal-texture-mapping/#invertibility-and-critical-points">Invertibility And Critical Points</a></li>
<li><a href="http://reedbeta.com/blog/conformal-texture-mapping/#compulsory-criticality">Compulsory Criticality</a></li>
<li><a href="http://reedbeta.com/blog/conformal-texture-mapping/#conclusion">Conclusion</a></li>
</ul>
</div>
<h2 id="quantifying-texture-distortion"><a class="link-button" href="http://reedbeta.com/blog/conformal-texture-mapping/#quantifying-texture-distortion"></a>Quantifying Texture Distortion</h2>
<p>How can we mathematically characterize “distortion” in a texture mapping? To begin with, we should
make a distinction between local and global forms of distortion. Local distortion would be visible
when zooming in on a small region of the model—looking at a single triangle, or a small
neighborhood around a point. Conversely, global distortion might only show up when you look at the
whole model and compare texture scale and orientation across widely separated points.</p>
<p>Some amount of global distortion is inevitable in mapping a flat, 2D texture to a non-flat, 3D
object. It’s not necessarily a problem, and it can even be useful in some cases to allow varying
texel density to concentrate more texels in more-important or more-detailed parts of a model. For
example, human head models usually give more texel density to the face region than to the sides, top,
and back of the head.</p>
<p>However, local distortion is usually undesirable. It changes the shapes of features in the texture,
distorts the shape of filter kernels operating in texture space, and gives unequal texel density along
different axes—bad news all around.</p>
<p>To measure local distortion, we can look at the tangent basis implied by the UVs assigned on the
mesh—the same basis we typically compute as part of the setup for normal and parallax mapping.</p>
<p>This basis can be computed per-triangle, and consists of the two 3D vectors within the triangle’s
plane that correspond to the texture’s U and V axes—known as the tangent and bitangent
vectors, respectively. (The triangle normal is usually included as a third basis vector, but we won’t
need that here.) If the triangle’s vertex positions are $p_1, p_2, p_3$ and the corresponding
UVs are $(u_1, v_1) \ldots (u_3, v_3)$, then the tangent and bitangent vectors $T, B$ can be defined by:
$$
\begin{aligned}
p_2 - p_1 &= (u_2 - u_1)T + (v_2 - v_1)B \\
p_3 - p_1 &= (u_3 - u_1)T + (v_3 - v_1)B
\end{aligned}
$$
These equations can be cast in matrix form, and solved as follows:
$$
\begin{bmatrix} T_x & B_x \\ T_y & B_y \\ T_z & B_z \end{bmatrix} =
\begin{bmatrix}
(p_{2x} - p_{1x}) & (p_{3x} - p_{1x}) \\
(p_{2y} - p_{1y}) & (p_{3y} - p_{1y}) \\
(p_{2z} - p_{1z}) & (p_{3z} - p_{1z})
\end{bmatrix}
\begin{bmatrix} (u_2 - u_1) & (u_3 - u_1) \\ (v_2 - v_1) & (v_3 - v_1) \end{bmatrix}^{-1}
$$
In the case of a general parameterized surface given by $p(u, v)$, the tangent basis at a point is
defined as $T = \partial p / \partial u, B = \partial p / \partial v$.</p>
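<p>In code, the per-triangle solve might look like the following NumPy sketch (the function and variable names are mine, not from the post):</p>

```python
import numpy as np

def tangent_basis(p1, p2, p3, uv1, uv2, uv3):
    # Solve [e1 e2] = [T B] [duv1 duv2] for the (unnormalized) tangent basis
    E = np.column_stack([np.subtract(p2, p1), np.subtract(p3, p1)])      # 3x2 edges
    D = np.column_stack([np.subtract(uv2, uv1), np.subtract(uv3, uv1)])  # 2x2 UV deltas
    TB = E @ np.linalg.inv(D)   # 3x2: columns are T and B
    return TB[:, 0], TB[:, 1]

# A triangle in the xy-plane with a simple UV assignment
T, B = tangent_basis((0, 0, 0), (2, 0, 0), (0, 1, 0),
                     (0, 0), (1, 0), (0, 1))
# The U axis maps to (2,0,0) and the V axis to (0,1,0): a 2x nonuniform scale
assert np.allclose(T, (2, 0, 0)) and np.allclose(B, (0, 1, 0))
```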
<p>When using a tangent basis for normal mapping, you’d probably orthonormalize it at this point. Here,
we don’t want to do that—the “raw” tangent basis contains the information we want to extract
about texture distortion.</p>
<p>There are a couple of different ways a texture can be distorted locally. One is for it to be
non-uniformly scaled; another is for it to be sheared:</p>
<p class="image-array"><img alt="Non-uniformly scaled texture" src="http://reedbeta.com/blog/conformal-texture-mapping/quad_scaled.png" title="Non-uniformly scaled texture" />
<img alt="Sheared texture" src="http://reedbeta.com/blog/conformal-texture-mapping/quad_sheared.png" title="Sheared texture" /></p>
<p>Both of these effects can be measured by looking at the tangent basis. Nonuniform scaling can be
measured by comparing the lengths of $T$ and $B$, and shear can be measured by the angle between
them; an unsheared texture mapping should have $T$ and $B$ perpendicular.</p>
<p>A convenient metric for both forms of distortion is the eccentricity of the ellipse created by
transforming a unit circle from tangent space to model space. If the mapping is undistorted, it will
map the unit circle to another circle; if there’s any nonuniform scaling or shear present, the
circle will get elongated into an ellipse (though not necessarily an axis-aligned one):</p>
<div class="embed-wrapper-outer" >
<div class="embed-wrapper-inner">
<iframe class="embed" type="text/html" allowfullscreen frameborder="0" src="https://www.shadertoy.com/embed/MlXfDH?paused=false&gui=false"></iframe>
</div>
</div>
<p>How can we compute the eccentricity from the tangent basis? The major and minor radii of the ellipse
are the <a href="https://en.wikipedia.org/wiki/Singular-value_decomposition">singular values</a> of the
tangent-to-model transform (i.e. the $[T, B]$ matrix). I’ll skip the detailed derivation, but I
found that the major and minor radii $a, b$ can be expressed in terms of $T$ and $B$ as follows:
$$
\begin{aligned}
a^2 &= \tfrac{1}{2} \left[ (T^2 + B^2) + \sqrt{(T^2 - B^2)^2 + 4(T \cdot B)^2} \right] \\
b^2 &= \tfrac{1}{2} \left[ (T^2 + B^2) - \sqrt{(T^2 - B^2)^2 + 4(T \cdot B)^2} \right]
\end{aligned}
$$
The eccentricity of the ellipse can then be computed as:
$$
\epsilon = \sqrt{1 - \frac{b^2}{a^2}}
$$
This value equals 0 when the ellipse is a circle, and grows toward 1 as it becomes more elongated.</p>
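<p>Here’s a small NumPy sketch (my own, just restating the formulas above) that checks the closed-form radii against the singular values directly:</p>

```python
import numpy as np

def ellipse_radii(T, B):
    # Closed-form major/minor radii from the post (T^2 means the dot product T.T)
    T2, B2, TB = np.dot(T, T), np.dot(B, B), np.dot(T, B)
    root = np.sqrt((T2 - B2)**2 + 4 * TB**2)
    return np.sqrt((T2 + B2 + root) / 2), np.sqrt((T2 + B2 - root) / 2)

T = np.array([2.0, 0.3, 0.0])
B = np.array([0.5, 1.0, 0.2])
a, b = ellipse_radii(T, B)
# They should match the singular values of the 3x2 [T, B] matrix
s = np.linalg.svd(np.column_stack([T, B]), compute_uv=False)
assert np.allclose([a, b], s)
ecc = np.sqrt(1 - b**2 / a**2)
assert 0 <= ecc < 1
```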
<h2 id="conformal-maps"><a class="link-button" href="http://reedbeta.com/blog/conformal-texture-mapping/#conformal-maps"></a>Conformal Maps</h2>
<p>If a texture mapping—either on a triangle mesh or a general parameterized surface—has no
local distortion anywhere, i.e. its eccentricity equals 0 at every point, then it belongs to
a class known as <a href="https://en.wikipedia.org/wiki/Conformal_map"><strong>conformal maps</strong></a>.</p>
<p>Conformal maps are a rich seam of mathematics, with a lot of connections to deep parts of
geometry, analysis, and mathematical physics. In two dimensions, they’re particularly powerful and
flexible (their usefulness falls off in higher dimensions).</p>
<p>Moreover, conformal maps are oddly aesthetically pleasing. 😄 There’s often a rather
soothing quality of smoothness to them, owing to their lack of local distortion.</p>
<p>The key geometric property of these maps is that they preserve angles. To be precise:
if two lines or curves intersect at a certain angle, then their images under a conformal
map will intersect with the same angle. However, <em>distances</em> aren’t preserved in general: a
conformal map may scale things up and down, with different scale factors at different points. As
a result, shapes and sizes of things may be distorted in a global sense.</p>
<p>Another way to express the same idea is that a conformal map can be approximated to first order near
any point as a similarity transformation—a linear transformation with no shear or nonuniform scaling,
only rotation and uniform scaling.</p>
<p>We can also relax the definition of a conformal map by allowing the eccentricity to be bounded by a
constant, $0 \leq \epsilon \leq \epsilon_{\text{max}}$, rather than requiring it to be exactly zero
everywhere. This is called a <a href="https://en.wikipedia.org/wiki/Quasiconformal_mapping"><strong>quasiconformal map</strong></a>.</p>
<h2 id="mobius-transformations"><a class="link-button" href="http://reedbeta.com/blog/conformal-texture-mapping/#mobius-transformations"></a>Möbius Transformations</h2>
<p>To get some initial intuition for what conformal maps are like, it’s useful to narrow our focus to a
specific type of conformal map that’s easy to analyze and play with. For this, we’ll look at
<a href="https://en.wikipedia.org/wiki/M%C3%B6bius_transformation"><strong>Möbius transformations</strong></a>, which are
just about the simplest conformal maps that are interesting enough to be worth studying. (They’re
named after <a href="https://en.wikipedia.org/wiki/August_Ferdinand_M%C3%B6bius">August Ferdinand Möbius</a>,
the same fellow better-known for the Möbius strip; he also invented homogeneous coordinates.)</p>
<p>In 2D, the most straightforward way to define these maps is with complex numbers. A 2D Möbius
transformation has the form:
$$
f(z) = \frac{az + b}{cz + d}, \qquad z \in \mathbb{C}
$$
where $a, b, c, d \in \mathbb{C}$ are some constants, which should satisfy $ad - bc \neq 0$ (or the
transform will be degenerate).</p>
<div class="embed-wrapper-outer" >
<div class="embed-wrapper-inner">
<iframe class="embed" type="text/html" allowfullscreen frameborder="0" src="https://www.shadertoy.com/embed/4tXyWs?paused=false&gui=false"></iframe>
</div>
</div>
<p>Here’s a Shadertoy that applies a Möbius transformation to a coordinate grid, animating the
parameters over time.
While watching this, pay attention to the grid intersections. Though the straight lines become
curved, and the overall shapes may distort quite a bit, wherever two grid lines meet each other they
always remain perpendicular! That’s the conformal property at work.</p>
<p>A few interesting facts about Möbius transformations, in no particular order:</p>
<ul>
<li>They form a mathematical group. The composition of two Möbius transformations is another Möbius,
and the inverse of a Möbius is another Möbius!</li>
<li>Although it has four complex parameters (eight components total), a Möbius has only <em>six</em> degrees
of freedom. That’s because an overall complex factor multiplied into all the parameters has no
effect. In other words, parameters $(a, b, c, d)$ and $(ua, ub, uc, ud)$ specify the same
transformation, for any $u \neq 0 \in \mathbb{C}$.</li>
<li>Möbius transformations generally map lines to circles, and circles to other circles. (And,
occasionally, circles to lines.)</li>
<li>Möbius transformations can be defined in higher dimensions as well, and they behave analogously,
with (hyper)planes mapping to (hyper)spheres.</li>
<li>In 3D and higher, Möbius transformations are the <em>only</em> conformal maps that exist. In 2D, they’re
just a small subset of a much richer collection of conformal maps.</li>
</ul>
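<p>The group structure has a neat computational counterpart: composing two Möbius transformations corresponds to multiplying their $2 \times 2$ coefficient matrices. A Python sketch of my own to illustrate:</p>

```python
import numpy as np

def mobius(m, z):
    # Evaluate the Mobius map with coefficient matrix ((a, b), (c, d))
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

m1 = ((1 + 2j, 0.5), (0.1j, 1))
m2 = ((2, -1j), (0.3, 1 + 1j))
m12 = np.array(m1) @ np.array(m2)   # compose by multiplying coefficient matrices

for z in (0.7 + 0.2j, -1j, 3.0):
    # Applying m2 then m1 agrees with applying the matrix product
    assert abs(mobius(m1, mobius(m2, z)) - mobius(m12, z)) < 1e-9
```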
<p>The six degrees of freedom of a 2D Möbius are just enough that we can construct a transformation to
map any three chosen points to three others. This makes it tempting to think that we could use them
for texture mapping, by applying a Möbius to each triangle of a 3D model.</p>
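<p>One standard construction for the three-point mapping (sketched here in Python; the helper names are mine) goes through the map that sends three points to $0, 1, \infty$:</p>

```python
import numpy as np

def mobius(m, z):
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

def to_zero_one_inf(z1, z2, z3):
    # Coefficient matrix of the Mobius map sending z1 -> 0, z2 -> 1, z3 -> infinity
    return np.array([[z2 - z3, -z1 * (z2 - z3)],
                     [z2 - z1, -z3 * (z2 - z1)]], dtype=complex)

def mobius_through(ps, qs):
    # Send ps to (0, 1, inf), then pull back through qs' map: ps[i] -> qs[i]
    return np.linalg.inv(to_zero_one_inf(*qs)) @ to_zero_one_inf(*ps)

ps = (0j, 1 + 0j, 1j)
qs = (2 + 1j, -1j, 0.5 + 0.5j)
m = mobius_through(ps, qs)
for p, q in zip(ps, qs):
    assert abs(mobius(m, p) - q) < 1e-6
```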
<p>Unfortunately, the mapping will not in general be continuous from one triangle to the next: the
shared edge will be mapped to different circles by each triangle’s Möbius, and there are no more
degrees of freedom available to try to fix it.
There have been some papers, <a href="http://www.dmg.tuwien.ac.at/geom/ig/publications/2015/conformal2015/conformal2015.pdf">like this one</a>,
trying to patch together piecewise Möbius transformations with least-squares optimization, to produce
<em>approximate</em> conformal maps.</p>
<p>There’s a good deal more that could be said about Möbius transformations.
However, let’s move on for now and look at a broader set of conformal maps.</p>
<h2 id="holomorphic-functions"><a class="link-button" href="http://reedbeta.com/blog/conformal-texture-mapping/#holomorphic-functions"></a>Holomorphic Functions</h2>
<p>From this point forward, we’ll restrict ourselves to the 2D case. Given that Möbius transformations
aren’t quite as flexible as we might like, how can we construct other types of conformal maps?</p>
<p>It’s no accident that we used complex numbers to define the Möbius transformation in the previous
section. Complex numbers, in fact, are intimately linked to conformal maps in 2D.</p>
<p>Why is this? If you recall, I mentioned earlier that one way to define a conformal map is that it
can be approximated to first order near any point as a similarity transformation. Well, multiplication
by a (nonzero) complex number implements a 2D similarity transformation: if $z = re^{i\theta}$, then
multiplication by $z$ will scale by $r$ and rotate by $\theta$.</p>
<p>As you may have guessed, “approximated to first order near a point” is a long-winded way of talking
about derivatives. So, what we’re saying is that for a function on $\mathbb{C}$ to be a conformal
map, its derivative at any point should act as a complex number. In other words, it should be
<em>complex differentiable</em>. Functions that satisfy this requirement are called
<a href="https://en.wikipedia.org/wiki/Holomorphic_function"><strong>holomorphic functions</strong></a>.</p>
<p>I should note that being complex-differentiable is different—and much more restrictive—than just being
differentiable as a vector function on $\mathbb{R}^2$. In other words, it’s not enough for the
$x$ and $y$ components of a mapping to <em>individually</em> be differentiable. As seen before, the
derivative at each point must take the form of a similarity transform; formally, the $x$ and $y$
components must satisfy the <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Riemann_equations">Cauchy–Riemann equations</a>,
which state that the mapping’s tangent and bitangent vectors must be orthogonal, and
of equal length, at each point. Only when these conditions are satisfied can you interpret the mapping
as a differentiable complex function of a complex variable.</p>
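<p>We can check the Cauchy–Riemann conditions numerically with finite differences. This Python sketch (my own) confirms that $z^2$ satisfies them while the conjugate $\bar{z}$ does not:</p>

```python
def jacobian(f, z, h=1e-6):
    # Numerical 2x2 Jacobian of f, viewed as a map of the real plane
    dx = (f(z + h) - f(z - h)) / (2 * h)            # partial derivative in x
    dy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # partial derivative in y
    return ((dx.real, dy.real), (dx.imag, dy.imag))

def cauchy_riemann_residual(f, z):
    (ux, uy), (vx, vy) = jacobian(f, z)
    # C-R: du/dx = dv/dy and du/dy = -dv/dx, i.e. the derivative is a similarity
    return abs(ux - vy) + abs(uy + vx)

z0 = 0.8 - 0.3j
assert cauchy_riemann_residual(lambda z: z * z, z0) < 1e-5       # holomorphic
assert cauchy_riemann_residual(lambda z: z.conjugate(), z0) > 1  # not holomorphic
```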
<p>Fortunately, the basic differentiation rules we learn in school for real-valued functions do
carry over to complex functions! In particular, all the basic arithmetic operations on complex numbers
are differentiable. So, to make a holomorphic function, all we have to do is write down
an algebraic formula—pretty much whatever we like—for a complex function $f(z)$. These functions
will always produce conformal maps, by construction.</p>
<p>We can also use exp, log, and trig functions, as well as many other special functions; they can
be extended into the complex domain and are holomorphic too. However, there are a few operations
we <em>can’t</em> use: the complex conjugate, magnitude, argument, or real or imaginary parts of a complex
number. Those <em>aren’t</em> holomorphic, it turns out. As long as we follow these rules, any function we
build will be holomorphic and therefore conformal.</p>
<p>So, okay! We know a lot about how to build functions to accomplish specific things. In fact, we can
try taking functions we’ve already got experience with, and just extending them to the complex domain.
For instance, take a 1D cubic Bézier curve with control points $c_0, c_1, c_2, c_3$:
$$
B(t) = (1-t)^3 c_0 + 3(1-t)^2 t c_1 + 3(1-t) t^2 c_2 + t^3 c_3
$$
We’ll use the same formula, but make everything complex numbers—both the control points and the
input variable.
$$
B(z) = (1-z)^3 c_0 + 3(1-z)^2 z c_1 + 3(1-z) z^2 c_2 + z^3 c_3
$$
Let’s see what it looks like! Here I’ve set it up to produce the identity map with a little bit of
animated (complex) wiggle in the tangents at the endpoints, 0 and 1.</p>
<div class="embed-wrapper-outer" >
<div class="embed-wrapper-inner">
<iframe class="embed" type="text/html" allowfullscreen frameborder="0" src="https://www.shadertoy.com/embed/llByzW?paused=false&gui=false"></iframe>
</div>
</div>
<p>Huh. Well, it’s doing…<em>something</em>. The mapping does seem to be conformal, for the most part—right
angles are staying right angles. But why are we seeing the unit square getting duplicated and kinda
merging with itself into curvy 8-sided and 12-sided figures? Why do the grid lines seem to break
and reconnect all the time? This is interesting to look at, but doesn’t seem too useful for texture
mapping. What’s going on?</p>
<h2 id="invertibility-and-critical-points"><a class="link-button" href="http://reedbeta.com/blog/conformal-texture-mapping/#invertibility-and-critical-points"></a>Invertibility And Critical Points</h2>
<p>The problem comes down to <em>invertibility</em>. When we get this “multiple copies” phenomenon, what we’re
seeing is the complex function mapping multiple regions of its domain (the Shadertoy’s screen space)
to the same region of its range (the coordinate grid being visualized). In other words, the function
isn’t one-to-one—and therefore it fails to be invertible.</p>
<p>Stepping back to real-valued functions for a moment may help clarify. Here’s the graph of the real
function $f(x) = x^2$:</p>
<p><img alt="Graph of x²" class="not-too-wide" src="http://reedbeta.com/blog/conformal-texture-mapping/xsquared.png" title="Graph of x²" /></p>
<p>It’s a parabola, of course. It’s also one of the simplest examples of a non-invertible function.
Why? Because inputs $x$ and $-x$ both map to the same value, $x^2$. If you squint at it a bit, you
can see the graph as being made up of two distorted copies of the <em>positive half</em> of the real line.</p>
<p>The complex extension of this function, $f(z) = z^2$, works the same way—but a bit more
dramatically: it gives us two distorted copies of <em>the entire complex plane</em>, squished together!</p>
<p><img alt="Graph of z²" class="not-too-wide" src="http://reedbeta.com/blog/conformal-texture-mapping/zsquared.png" title="Graph of z²" /></p>
<p>Now, a funny thing happens when we look at higher powers. When we restrict ourselves to the real
numbers only, $x^3$ is invertible, $x^4$ is not, $x^5$ is, and so on: odd powers are invertible,
while even ones aren’t. Even powers all map $x$ and $-x$ to the same value, while odd powers maintain
the distinction.</p>
<p>However, when extended to the complex plane, $z^n$ <em>always</em> fails to be invertible unless $n = 1$!
In fact, graphing $z^n$ gives you $n$ copies of the plane, squished together into wedges around the
origin. All of the copies are conformal mappings—but their scale becomes increasingly extreme as
you approach the origin, and the function is not strictly conformal <em>at</em> the origin.</p>
<div class="embed-wrapper-outer" >
<div class="embed-wrapper-inner">
<iframe class="embed" type="text/html" allowfullscreen frameborder="0" src="https://www.shadertoy.com/embed/llXBDH?paused=false&gui=false"></iframe>
</div>
</div>
<p>An example for the $n = 3$ case, which you can verify by working out the calculations if you like:
$$
\begin{aligned}
1^3 &= 1 \\
(-\tfrac{1}{2} + \tfrac{\sqrt{3}}{2} i)^3 &= 1 \\
(-\tfrac{1}{2} - \tfrac{\sqrt{3}}{2} i)^3 &= 1 \\
\end{aligned}
$$
Three distinct complex numbers, when cubed, all give the same result of 1.</p>
<p>In general, a holomorphic function will fail to be invertible wherever it has a <em>critical point</em>—a
point where its derivative equals zero. In the vicinity of such a place, the function will locally
behave like $z^n$: it will have $n$ copies of the surrounding region of the complex plane, squished
together into wedges around the critical point. Here, $n$ is one plus the order (aka multiplicity)
of the zero in the derivative.</p>
<p>This is what was going on in the Bézier example from the previous section. Since it was a cubic
polynomial, its derivative is quadratic, and quadratic polynomials have two zeros. So the cubic
Bézier curve has two critical points, which move around the plane as the curve’s parameters change.
When the critical points get too close to the region of interest (the unit square, say), we can see
two or even three copies of that region mushed together.</p>
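<p>The two critical points are easy to find numerically: write the Bézier as an ordinary cubic polynomial in $z$ and take the roots of its derivative. A Python sketch (the control-point values are my own arbitrary choices):</p>

```python
import numpy as np

# Control points: roughly the identity map with a complex wiggle on the tangents
c0, c1, c2, c3 = 0, 1/3 + 0.2j, 2/3 - 0.1j, 1

# B(z) expanded as a polynomial in z (highest-degree coefficient first)
coeffs = [c3 - 3*c2 + 3*c1 - c0, 3*(c0 - 2*c1 + c2), 3*(c1 - c0), c0]
dcoeffs = np.polyder(coeffs)   # the derivative is a quadratic

crit = np.roots(dcoeffs)       # the two critical points of the cubic
assert len(crit) == 2
for z in crit:
    # The derivative vanishes here, so the map locally behaves like z^2
    assert abs(np.polyval(dcoeffs, z)) < 1e-9
```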
<h2 id="compulsory-criticality"><a class="link-button" href="http://reedbeta.com/blog/conformal-texture-mapping/#compulsory-criticality"></a>Compulsory Criticality</h2>
<p>If we want to build holomorphic functions that are guaranteed to be invertible, we need to avoid
critical points, i.e. zeros of the derivative. Unfortunately, this turns out to be more challenging
than you might expect.</p>
<p>It’s easy to make a real polynomial that doesn’t have any zeros, such as $f(x) = x^2 + 1$.
Correspondingly, it’s easy to make a real polynomial that’s everywhere invertible, by taking the
integral of one that doesn’t have any zeros: $\int (x^2 + 1) \, dx = \tfrac{1}{3}x^3 + x$, for
example.</p>
<p>However, a crucial difference between the real and complex domains comes into play here: while zeros are optional
for real polynomials, they are <em>mandatory</em> for complex ones. A complex polynomial of degree $n \geq 1$
always has <em>exactly</em> $n$ zeros (counted with multiplicity). For example, the zeros of $z^2 + 1$ are
at $z = \pm i$. Thus, a complex polynomial of degree $n \geq 2$ always has at least one critical
point, and possibly up to $n - 1$ of them.</p>
<p>In other words, it’s impossible for complex polynomials of degree 2 or higher to be globally invertible.</p>
<p>Polynomials aren’t the only functions out there, though. What about rational functions? They
obey a similar dictum: if a rational function is degree $p$ over degree $q$, then there are
potentially $p + q - 1$ critical points (remember the quotient rule)—not to mention anywhere from
1 to $q$ poles, where the denominator goes to zero and the rational function blasts off to infinity.
Incidentally, poles behave similarly to critical points in some ways: they come in different orders,
and a pole of order $n$ will have $n$ copies of the complex plane around it. So poles are <em>another</em>
way to break invertibility.</p>
<p>This leads to the somewhat depressing conclusion that the <em>only</em> polynomial or rational complex
functions that are everywhere invertible are those that have degree at-most-1 over degree at-most-1.
In other words: Möbius transformations.</p>
<h2 id="conclusion"><a class="link-button" href="http://reedbeta.com/blog/conformal-texture-mapping/#conclusion"></a>Conclusion</h2>
<p>So, polynomial and rational functions aren’t good enough—we’d need to dig deeper if we’re to
find a class of invertible holomorphic functions more powerful than Möbius. One possibility might
be to define $f(z) = \int e^{g(z)} \, dz$, where $g(z)$ is some holomorphic function without poles.
Then $f(z)$ will have neither poles nor critical points, since its derivative is $e^{g(z)}$. (The
complex exponential function, like the real version, is everywhere nonzero.)</p>
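<p>As a tiny concrete instance (my own example), take $g(z) = 0.3z$; then $f(z) = e^{0.3z}/0.3$, and we can confirm numerically that its derivative is $e^{g(z)}$ and never vanishes:</p>

```python
import cmath

# With g(z) = 0.3 z, the antiderivative of e^{g(z)} is e^{0.3 z} / 0.3
def f(z):
    return cmath.exp(0.3 * z) / 0.3

h = 1e-6
for z in (0j, 2 + 1j, -3 - 0.5j):
    deriv = (f(z + h) - f(z - h)) / (2 * h)   # central finite difference
    # f'(z) = e^{g(z)}, which is never zero, so f has no critical points
    assert abs(deriv - cmath.exp(0.3 * z)) < 1e-6
    assert abs(deriv) > 0.1
```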
<p>Now, if we step back for a moment, we don’t necessarily need <em>global</em> invertibility. If we’re mainly
interested in some bounded region—such as the unit square, for texture mapping—then it may well
be sufficient for our purposes to maintain <em>local</em> invertibility there. This could be done by
keeping critical points and poles far enough from the region of interest that they don’t weird things
out too much. That still seems like a challenging juggling act to perform, though—and moreover,
the more degrees of freedom we have in our function, the more critical points or poles we probably
have to worry about.</p>
<p>In the course of reading up on this subject, I also found <a href="http://www.cs.technion.ac.il/~gotsman/AmendedPubl/Ofir/hilbert.pdf">another paper</a>
that takes a quite different approach—based on <a href="https://en.wikipedia.org/wiki/Cauchy%27s_integral_formula">Cauchy’s integral formula</a>—to
constructing conformal maps. I might write about that in another post sometime—there’s a lot more
to this rabbit hole of math, and it’s interesting stuff, but ultimately it doesn’t seem very practical.</p>
<p>For more reading on the theory of holomorphic functions, see <a href="https://terrytao.wordpress.com/category/teaching/246a-complex-analysis/">Terry Tao’s complex analysis course notes</a>.
(Be warned, it’s a graduate-level course and the notes are pretty dense and formal.)</p>Quadrilateral Interpolation, Part 2
http://reedbeta.com/blog/quadrilateral-interpolation-part-2/
http://reedbeta.com/blog/quadrilateral-interpolation-part-2/Nathan ReedThu, 18 May 2017 21:15:44 -0700http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#commentsGraphicsGPUMath<p>It’s been quite a while since the <a href="/blog/quadrilateral-interpolation-part-1/">first entry in this series</a>!
I apologize for the long delay—at the time, I’d intended to write at least one more entry, but I
couldn’t get the math to work and lost interest. However, I recently had occasion to
revisit this topic, and this time was able to make progress.</p>
<p>In this article, I’ll cover <strong>bilinear interpolation</strong> on quadrilaterals. Unlike the projective
interpolation covered in Part 1, this method will allow us to maintain regular UV spacing along
all four of the quad’s edges, regardless of its shape; but we’ll see that to achieve this, we’ll
have to accept a different kind of distortion to the texture.<!--more--></p>
<div class="toc">
<ul>
<li><a href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#the-story-so-far">The Story So Far</a></li>
<li><a href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#bilinear-interpolation">Bilinear Interpolation</a></li>
<li><a href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#properties">Properties</a></li>
<li><a href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#inversion">Inversion</a></li>
<li><a href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#implementation">Implementation</a></li>
<li><a href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#conclusion">Conclusion</a></li>
</ul>
</div>
<h2 id="the-story-so-far"><a class="link-button" href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#the-story-so-far"></a>The Story So Far</h2>
<p>The central problem of this series is: how can we map a rectangular texture image
onto an arbitrary convex quadrilateral?</p>
<p>If we model the quad as two triangles, and apply ordinary (linear) texture mapping to the mesh,
we get something like this:</p>
<p><img alt="Brick texture on arbitrary quad, showing linear interpolation seam" src="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/quad_arbitrary_seam.png" title="Brick texture on arbitrary quad, showing linear interpolation seam" /></p>
<p>There’s a visible seam at the edge between the two triangles, where the derivatives of the mapping
change abruptly. We could improve the situation by subdividing the quad more finely, and assigning
appropriate UVs to the interior vertices; but perhaps we can’t or don’t wish to do that. Instead,
we’re looking at alternative methods for interpolating the UVs to avoid this problem altogether.</p>
<p>In <a href="/blog/quadrilateral-interpolation-part-1/">part 1</a>, I looked at <em>projective interpolation</em>,
which produces results like this:</p>
<p><img alt="Two projectively-interpolated quads with a seam visible between them" src="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/quad_arbitrary_proj_2.png" title="Two projectively-interpolated quads with a seam visible between them" /></p>
<p>This method, based on perspective projection, succeeds in removing the visible seam between
triangles in a quad. Unfortunately, it has a couple of other issues. First, the perspective-like
transformation tends to produce an unwanted “3D” effect, where a 2D quad that’s flat on the screen
comes to look like a 3D rectangle stretching off into the distance. Second, the UV spacing becomes
nonuniform along the edges of the quad—which introduces a $C^0$ seam between adjacent quads,
even worse than the original $C^1$ seams we were trying to fix!</p>
<h2 id="bilinear-interpolation"><a class="link-button" href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#bilinear-interpolation"></a>Bilinear Interpolation</h2>
<p>If you’re reading this, you’re probably familiar with bilinear interpolation in the context of
texture sampling. At a point between texel centers, the sampling result is a blend of all four of
the nearest texels.</p>
<p><img alt="Bilinear interpolation of four neighboring texels" src="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/bilinear-texels.png" title="Bilinear interpolation of four neighboring texels" /></p>
<p>This can be expressed mathematically as follows. If $t_0 \ldots t_3$ are the four texel colors,
the interpolated result is:
$$\begin{aligned}
t(u, v) &= \text{lerp}\bigl(\text{lerp}(t_0, t_1, u), \text{lerp}(t_2, t_3, u), v\bigr) \\
&= (1-u)(1-v) t_0 + u(1-v) t_1 + (1-u)v t_2 + uv t_3
\end{aligned}$$</p>
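<p>In code, the nested-lerp form translates directly. Here’s a minimal Python sketch (the grayscale texel values are made up, just to exercise the formula):</p>

```python
def lerp(a, b, t):
    return a + (b - a) * t

def bilerp(t0, t1, t2, t3, u, v):
    # Lerp across u within each row of texels, then across v between the rows
    return lerp(lerp(t0, t1, u), lerp(t2, t3, u), v)

# Made-up grayscale texel values
t0, t1, t2, t3 = 0.0, 1.0, 2.0, 3.0
print(bilerp(t0, t1, t2, t3, 0.0, 0.0))  # 0.0 -- exactly t0 at that corner
print(bilerp(t0, t1, t2, t3, 1.0, 1.0))  # 3.0 -- exactly t3
print(bilerp(t0, t1, t2, t3, 0.5, 0.5))  # 1.5 -- average of all four
```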
<p>Let’s now define bilinear interpolation for a quadrilateral exactly the same way, except that
instead of four texel colors, we’ll have the four vertices of the quad. If the vertex positions are
$p_0 \ldots p_3$, then the position corresponding to a given UV on the quad is
$$
p(u, v) = \text{lerp}\bigl(\text{lerp}(p_0, p_1, u), \text{lerp}(p_2, p_3, u), v\bigr)
$$</p>
<p><img alt="Bilinear interpolation of vertices in a quad" src="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/bilinear-verts.png" title="Bilinear interpolation of vertices in a quad" /></p>
<p>This defines the forward UV mapping—from UV to position on the quad’s surface. To actually
implement this technique, we’re going to need to invert this equation, so we can write a pixel
shader that maps the pixel’s position back to the UV at which to sample the texture. I’ll show how
to do that a bit later; but first, let’s have a look at the results.</p>
<p><img alt="Two bilinearly-interpolated quads" src="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/quad_bilinear_2.png" title="Two bilinearly-interpolated quads" /></p>
<p>As hoped, bilinear interpolation both hides the join between the two triangles in each quad, and
keeps uniform spacing along the edges so that the texture will match between adjacent quads. (There’s
still a $C^1$ seam between the quads, where the mapping derivatives jump; but that’s unavoidable as
long as we insist that the texture completely fill the quad.)</p>
<h2 id="properties"><a class="link-button" href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#properties"></a>Properties</h2>
<p>Despite the downsides of projective interpolation, it does have one nice feature: it preserves
straight lines. Any line in the original texture space will be mapped to another line by an arbitrary
projective transform. In the transformed grid below, note how all the horizontal, vertical, and
diagonal lines are still straight:</p>
<p><img alt="Two projectively-interpolated grids" src="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/grid_arbitrary_proj_2.png" title="Two projectively-interpolated grids" /></p>
<p>In bilinear interpolation, this is no longer the case—some lines in the original texture space
now come out curved:</p>
<p><img alt="Two bilinearly-interpolated grids" src="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/grid_bilinear_2.png" title="Two bilinearly-interpolated grids" /></p>
<p>The curvature introduced by the mapping is orientation-dependent. All the horizontal and vertical
lines from the original grid are still straight; this is because the bilinear interpolation formula
reduces to linear interpolation when either $u$ or $v$ is held fixed. However, most <em>diagonal</em> lines
will be mapped to curves. In particular, they’ll be mapped to quadratic splines, since the bilinear
interpolation formula becomes quadratic (in the general case) when $u$ and $v$ are held proportional
to each other.</p>
<p>Depending on the texture, this effect may not be very noticeable. In textures that don’t have
a lot of line-like features to begin with, or whose line-like features are mostly vertical
and horizontal (e.g. bricks), the distortion of diagonals is pretty hard to see.</p>
<p>By the way, the fact that bilinear interpolation creates quadratic splines along diagonals
<a href="http://blog.demofox.org/2016/12/08/evaluating-polynomials-with-the-gpu-texture-sampler/">can be exploited to evaluate splines in a GPU texture unit</a>.</p>
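<p>You can confirm the quadratic behavior symbolically (a sympy sketch, using an arbitrary non-parallelogram quad of my own choosing): substituting $u = v = t$ into the bilinear formula leaves each coordinate quadratic in $t$.</p>

```python
import sympy as sp

t = sp.symbols('t')

# An arbitrary quad that isn't a parallelogram (made-up coordinates)
p0, p1, p2, p3 = (0, 0), (3, 0), (0.5, 2), (4, 3)

def bilerp(c0, c1, c2, c3, u, v):
    return (1-u)*(1-v)*c0 + u*(1-v)*c1 + (1-u)*v*c2 + u*v*c3

# Along the diagonal u = v = t, each coordinate becomes quadratic in t
x = sp.expand(bilerp(p0[0], p1[0], p2[0], p3[0], t, t))
y = sp.expand(bilerp(p0[1], p1[1], p2[1], p3[1], t, t))
print(sp.degree(x, t), sp.degree(y, t))  # 2 2
```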
<h2 id="inversion"><a class="link-button" href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#inversion"></a>Inversion</h2>
<p>(If you prefer not to wade through the mathy details and just want to see the code, feel free
to jump down to the <a href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#implementation">Implementation</a> section. In my derivation here, I’m indebted
to <a href="http://iquilezles.org/www/articles/ibilinear/ibilinear.htm">this article by Íñigo Quílez</a>.)</p>
<p>As seen above, the bilinear interpolation setup gives us an expression for the position of a point
in terms of its $u, v$ coordinates within the quad. Let $p_0 \ldots p_3$ be the quad’s vertices (in whatever
space the quad is defined—model space, world space, screen space). Then the position corresponding
to a given $u, v$ is:
$$\begin{aligned}
p(u, v) &= \text{lerp}\bigl(\text{lerp}(p_0, p_1, u), \text{lerp}(p_2, p_3, u), v\bigr) \\
&= (1-u)(1-v) p_0 + u(1-v) p_1 + (1-u)v p_2 + uv p_3
\end{aligned}$$
However, we’ll need to invert this equation in order to apply it in a pixel shader: we need to
calculate the UV at which to sample the texture (we can’t pass it down from the vertex shader
because that would give us linear interpolation, not bilinear). So we need to solve for $u, v$ in
terms of $p$, the pixel’s position within the quad.</p>
<p>First, let’s multiply out and regroup terms:
$$
0 = (p_0 - p) + (p_1 - p_0) u + (p_2 - p_0) v + (p_0 - p_1 - p_2 + p_3) uv
$$
The four vectors in parentheses here can readily be interpreted geometrically. We have the pixel’s
position relative to the origin of UV space; the two UV basis vectors; and one more,
$p_0 - p_1 - p_2 + p_3$. This vector expresses how the quad deviates from being a parallelogram.
It is the vector difference between the quad’s final point $p_3$ and where that point <em>would</em> be to
complete the parallelogram spanned by the UV basis vectors:</p>
<p><img alt="Vectors involved in inverse bilinear interpolation" src="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/invbilin-vectors.png" title="Vectors involved in inverse bilinear interpolation" /></p>
<p>For convenience, let’s give names to these vectors:
$$\begin{aligned}
q &\equiv p - p_0 \\
b_1 &\equiv p_1 - p_0 \\
b_2 &\equiv p_2 - p_0 \\
b_3 &\equiv p_0 - p_1 - p_2 + p_3
\end{aligned}$$
Then the equation we’re solving becomes:
$$
0 = -q + b_1 u + b_2 v + b_3 uv \qquad (*)
$$
Now let’s solve for $u$ in terms of $v$:
$$\begin{aligned}
q - b_2 v &= (b_1 + b_3 v) u \\
u &= \frac{q - b_2 v}{b_1 + b_3 v}
\end{aligned}$$
<em>But wait! These quantities are vectors! Didn’t your mother ever teach you that you cannot divide
vectors?</em> Don’t worry, folks; I know geometric algebra. 😄</p>
<p>In seriousness: when we’ve correctly solved for $v$, then the numerator and denominator of
this fraction must be parallel, so we can divide them to recover $u$ (and it will be a scalar). To
implement this in practice, we’ll just pick one of the vectors’ coordinate components to do the
calculation with.</p>
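<p>With those names in hand, the earlier regrouping is easy to double-check symbolically (a sympy sketch, treating one coordinate at a time so the positions are plain scalars):</p>

```python
import sympy as sp

u, v = sp.symbols('u v')
# One coordinate at a time, so each position is a scalar symbol
p, p0, p1, p2, p3 = sp.symbols('p p0 p1 p2 p3')

bilinear = (1-u)*(1-v)*p0 + u*(1-v)*p1 + (1-u)*v*p2 + u*v*p3

q  = p - p0
b1 = p1 - p0
b2 = p2 - p0
b3 = p0 - p1 - p2 + p3

# Setting p = bilinear(u, v) is the same as equation (*):
# -q + b1*u + b2*v + b3*u*v = 0
print(sp.expand((-q + b1*u + b2*v + b3*u*v) - (bilinear - p)))  # 0
```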
<p>Onward to solving for $v$. We can eliminate $u$ from equation $(*)$ by wedging both sides with
$b_1 + b_3 v$. (In case you’re not familiar with the wedge product, it’s a tool from
<a href="http://www.terathon.com/gdc12_lengyel.pdf">Grassmann algebra</a>, part of geometric algebra. For our
purposes here, you can treat it as the signed area of the parallelogram spanned by two vectors.
When you wedge a vector with itself, you get zero.)
$$\begin{aligned}
0 &= (-q + b_1 u + b_2 v + b_3 uv) \wedge (b_1 + b_3 v) \\
&= (b_1 \wedge q) + (b_3 \wedge q - b_1 \wedge b_2) v + (b_2 \wedge b_3) v^2
\end{aligned}$$
We now have a quadratic equation in $v$ and we can apply the usual quadratic formula. (<em>Wait!
These quantities are bivectors! One cannot simply apply the quadratic formula to bivectors!</em>
Again, it’s okay, since these bivectors all lie in the plane of the quad and are therefore
proportional to one another. Again, in practice we’ll just look at one coordinate component of
the bivectors.)</p>
<p>The two possible solutions to $v$ are:
$$
v = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A}, \qquad
\begin{aligned}
A &\equiv b_2 \wedge b_3 \\
B &\equiv b_3 \wedge q - b_1 \wedge b_2 \\
C &\equiv b_1 \wedge q
\end{aligned}
$$
In practice, the discriminant is always positive inside the quad; also, only one of the two roots is
needed—which one depends on the winding of the quad (and on the coordinate system conventions).
Once you have the correct $v$, plug it into the formula for $u$ and you’re done.</p>
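<p>Before moving on to the shader, the whole derivation can be sanity-checked on the CPU. Here’s a plain-Python sketch (my own test harness, not the article’s shader code): build a quad, bilinearly interpolate a known $(u, v)$ to get $p$, then invert and confirm the same $(u, v)$ comes back. For simplicity this version just tries both quadratic roots and keeps the one inside $[0, 1]$, rather than baking in a winding convention.</p>

```python
def wedge(a, b):
    # 2D wedge product: signed area of the parallelogram spanned by a and b
    return a[0]*b[1] - a[1]*b[0]

def bilerp(p0, p1, p2, p3, u, v):
    w = lambda a, b, c, d: (1-u)*(1-v)*a + u*(1-v)*b + (1-u)*v*c + u*v*d
    return (w(p0[0], p1[0], p2[0], p3[0]), w(p0[1], p1[1], p2[1], p3[1]))

def inverse_bilinear(p, p0, p1, p2, p3):
    q  = (p[0] - p0[0],  p[1] - p0[1])
    b1 = (p1[0] - p0[0], p1[1] - p0[1])
    b2 = (p2[0] - p0[0], p2[1] - p0[1])
    b3 = (p0[0] - p1[0] - p2[0] + p3[0], p0[1] - p1[1] - p2[1] + p3[1])

    # Quadratic A*v^2 + B*v + C = 0, from wedging equation (*) with b1 + b3*v
    A = wedge(b2, b3)
    B = wedge(b3, q) - wedge(b1, b2)
    C = wedge(b1, q)

    if abs(A) < 1e-9:
        roots = [-C / B]                 # parallelogram case: equation is linear
    else:
        disc = (B*B - 4*A*C) ** 0.5
        roots = [(-B + disc) / (2*A), (-B - disc) / (2*A)]
    # Inside the quad, exactly one root lands in [0, 1]; a shader would
    # instead pick the correct root up front based on winding
    v = next(r for r in roots if -1e-9 <= r <= 1 + 1e-9)

    # u = (q - b2*v) / (b1 + b3*v), dividing by the larger-magnitude component
    denom = (b1[0] + b3[0]*v, b1[1] + b3[1]*v)
    if abs(denom[0]) > abs(denom[1]):
        u = (q[0] - b2[0]*v) / denom[0]
    else:
        u = (q[1] - b2[1]*v) / denom[1]
    return u, v

# Round trip on an arbitrary convex quad
quad = ((0, 0), (3, 0), (0.5, 2), (4, 3))
p = bilerp(*quad, 0.3, 0.7)
print(inverse_bilinear(p, *quad))  # approximately (0.3, 0.7)
```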
<h2 id="implementation"><a class="link-button" href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#implementation"></a>Implementation</h2>
<p>Translating all this into shader code is fairly straightforward. We set up the $q, b_1, b_2, b_3$
vectors in the vertex shader, then solve for $u, v$ in the pixel shader. </p>
<p>One complication is that these vectors need to be calculated per quad, so you can’t have vertices
shared between quads. If you’re applying this to a mesh, it will need to be “unwelded” so that each
quad has distinct vertices. (You can still share vertices between the two triangles in each quad.)</p>
<p>Each vertex shader invocation also needs to know all four vertices of the quad it belongs to. To
avoid duplicating all the vertex positions many times in memory, we can use instancing: one instance
per quad, with the per-quad parameters stored in the instance vertex buffer (very similar to
rendering billboard particles).</p>
<p>Here’s what the shader might look like in pseudo-HLSL, for a 2D case where the quad is always in
the $xy$ plane:</p>
<table class="codehilitetable"><tr><td class="code"><div class="codehilite"><pre><span></span><span class="k">struct</span> <span class="n">InstData</span>
<span class="p">{</span>
<span class="kt">float2</span> <span class="n">p</span><span class="p">[</span><span class="mi">4</span><span class="p">];</span> <span class="c1">// Quad vertices</span>
<span class="p">};</span>
<span class="k">struct</span> <span class="n">V2P</span>
<span class="p">{</span>
<span class="kt">float4</span> <span class="n">pos</span> <span class="o">:</span> <span class="nd">SV_Position</span><span class="p">;</span>
<span class="kt">float2</span> <span class="n">q</span><span class="p">,</span> <span class="n">b1</span><span class="p">,</span> <span class="n">b2</span><span class="p">,</span> <span class="n">b3</span><span class="p">;</span>
<span class="p">};</span>
<span class="kt">void</span> <span class="n">Vs</span><span class="p">(</span>
<span class="k">in</span> <span class="kt">uint</span> <span class="n">iVtx</span> <span class="o">:</span> <span class="nd">SV_VertexID</span><span class="p">,</span>
<span class="k">in</span> <span class="n">InstData</span> <span class="n">inst</span><span class="p">,</span>
<span class="k">out</span> <span class="n">V2P</span> <span class="n">o</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">o</span><span class="p">.</span><span class="n">pos</span> <span class="o">=</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="n">iVtx</span><span class="p">];</span>
<span class="c1">// Set up inverse bilinear interpolation</span>
<span class="n">o</span><span class="p">.</span><span class="n">q</span> <span class="o">=</span> <span class="n">o</span><span class="p">.</span><span class="n">pos</span> <span class="o">-</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mo">0</span><span class="p">];</span>
<span class="n">o</span><span class="p">.</span><span class="n">b1</span> <span class="o">=</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">-</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mo">0</span><span class="p">];</span>
<span class="n">o</span><span class="p">.</span><span class="n">b2</span> <span class="o">=</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">-</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mo">0</span><span class="p">];</span>
<span class="n">o</span><span class="p">.</span><span class="n">b3</span> <span class="o">=</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mo">0</span><span class="p">]</span> <span class="o">-</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">-</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">+</span> <span class="n">inst</span><span class="p">.</span><span class="n">p</span><span class="p">[</span><span class="mi">3</span><span class="p">];</span>
<span class="p">}</span>
<span class="kt">float</span> <span class="n">Wedge2D</span><span class="p">(</span><span class="kt">float2</span> <span class="n">v</span><span class="p">,</span> <span class="kt">float2</span> <span class="n">w</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">return</span> <span class="n">v</span><span class="p">.</span><span class="n">x</span><span class="o">*</span><span class="n">w</span><span class="p">.</span><span class="n">y</span> <span class="o">-</span> <span class="n">v</span><span class="p">.</span><span class="n">y</span><span class="o">*</span><span class="n">w</span><span class="p">.</span><span class="n">x</span><span class="p">;</span>
<span class="p">}</span>
<span class="kt">void</span> <span class="n">Ps</span><span class="p">(</span>
<span class="k">in</span> <span class="n">V2P</span> <span class="n">i</span><span class="p">,</span>
<span class="k">out</span> <span class="kt">float4</span> <span class="n">color</span> <span class="o">:</span> <span class="nd">SV_Target</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Set up quadratic formula</span>
<span class="kt">float</span> <span class="n">A</span> <span class="o">=</span> <span class="n">Wedge2D</span><span class="p">(</span><span class="n">i</span><span class="p">.</span><span class="n">b2</span><span class="p">,</span> <span class="n">i</span><span class="p">.</span><span class="n">b3</span><span class="p">);</span>
<span class="kt">float</span> <span class="n">B</span> <span class="o">=</span> <span class="n">Wedge2D</span><span class="p">(</span><span class="n">i</span><span class="p">.</span><span class="n">b3</span><span class="p">,</span> <span class="n">i</span><span class="p">.</span><span class="n">q</span><span class="p">)</span> <span class="o">-</span> <span class="n">Wedge2D</span><span class="p">(</span><span class="n">i</span><span class="p">.</span><span class="n">b1</span><span class="p">,</span> <span class="n">i</span><span class="p">.</span><span class="n">b2</span><span class="p">);</span>
<span class="kt">float</span> <span class="n">C</span> <span class="o">=</span> <span class="n">Wedge2D</span><span class="p">(</span><span class="n">i</span><span class="p">.</span><span class="n">b1</span><span class="p">,</span> <span class="n">i</span><span class="p">.</span><span class="n">q</span><span class="p">);</span>
<span class="c1">// Solve for v</span>
<span class="kt">float2</span> <span class="n">uv</span><span class="p">;</span>
<span class="k">if</span> <span class="p">(</span><span class="nb">abs</span><span class="p">(</span><span class="n">A</span><span class="p">)</span> <span class="o"><</span> <span class="mf">0.001</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Linear form</span>
<span class="n">uv</span><span class="p">.</span><span class="n">y</span> <span class="o">=</span> <span class="o">-</span><span class="n">C</span><span class="o">/</span><span class="n">B</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">else</span>
<span class="p">{</span>
<span class="c1">// Quadratic form. Take positive root for CCW winding with V-up</span>
<span class="kt">float</span> <span class="n">discrim</span> <span class="o">=</span> <span class="n">B</span><span class="o">*</span><span class="n">B</span> <span class="o">-</span> <span class="mi">4</span><span class="o">*</span><span class="n">A</span><span class="o">*</span><span class="n">C</span><span class="p">;</span>
<span class="n">uv</span><span class="p">.</span><span class="n">y</span> <span class="o">=</span> <span class="mf">0.5</span> <span class="o">*</span> <span class="p">(</span><span class="o">-</span><span class="n">B</span> <span class="o">+</span> <span class="nb">sqrt</span><span class="p">(</span><span class="n">discrim</span><span class="p">))</span> <span class="o">/</span> <span class="n">A</span><span class="p">;</span>
<span class="p">}</span>
<span class="c1">// Solve for u, using largest-magnitude component</span>
<span class="kt">float2</span> <span class="n">denom</span> <span class="o">=</span> <span class="n">i</span><span class="p">.</span><span class="n">b1</span> <span class="o">+</span> <span class="n">uv</span><span class="p">.</span><span class="n">y</span> <span class="o">*</span> <span class="n">i</span><span class="p">.</span><span class="n">b3</span><span class="p">;</span>
<span class="k">if</span> <span class="p">(</span><span class="nb">abs</span><span class="p">(</span><span class="n">denom</span><span class="p">.</span><span class="n">x</span><span class="p">)</span> <span class="o">></span> <span class="nb">abs</span><span class="p">(</span><span class="n">denom</span><span class="p">.</span><span class="n">y</span><span class="p">))</span>
<span class="n">uv</span><span class="p">.</span><span class="n">x</span> <span class="o">=</span> <span class="p">(</span><span class="n">i</span><span class="p">.</span><span class="n">q</span><span class="p">.</span><span class="n">x</span> <span class="o">-</span> <span class="n">i</span><span class="p">.</span><span class="n">b2</span><span class="p">.</span><span class="n">x</span> <span class="o">*</span> <span class="n">uv</span><span class="p">.</span><span class="n">y</span><span class="p">)</span> <span class="o">/</span> <span class="n">denom</span><span class="p">.</span><span class="n">x</span><span class="p">;</span>
<span class="k">else</span>
<span class="n">uv</span><span class="p">.</span><span class="n">x</span> <span class="o">=</span> <span class="p">(</span><span class="n">i</span><span class="p">.</span><span class="n">q</span><span class="p">.</span><span class="n">y</span> <span class="o">-</span> <span class="n">i</span><span class="p">.</span><span class="n">b2</span><span class="p">.</span><span class="n">y</span> <span class="o">*</span> <span class="n">uv</span><span class="p">.</span><span class="n">y</span><span class="p">)</span> <span class="o">/</span> <span class="n">denom</span><span class="p">.</span><span class="n">y</span><span class="p">;</span>
<span class="n">color</span> <span class="o">=</span> <span class="n">tex</span><span class="p">.</span><span class="n">Sample</span><span class="p">(</span><span class="n">samp</span><span class="p">,</span> <span class="n">uv</span><span class="p">);</span>
<span class="p">}</span>
</pre></div>
</td></tr></table>
<h2 id="conclusion"><a class="link-button" href="http://reedbeta.com/blog/quadrilateral-interpolation-part-2/#conclusion"></a>Conclusion</h2>
<p>Bilinear interpolation solves the problem of mapping a rectangular texture to an arbitrary quad,
with a different set of trade-offs from the projective mapping we saw previously. On the plus side,
bilinear interpolation doesn’t produce as much of a faux-3D effect, and it always maintains uniform
UV spacing along the quad’s edges. On the other hand, it introduces curved diagonals, and it also
has a more complicated (and more expensive) pixel shader than projective interpolation.</p>
<p>We’ve eliminated the seam between the two triangles in a quad, but one lingering issue is the seam
between adjacent quads in a mesh. It would be nice if we could have some control over the tangents
of the UV mapping along the edge, so we could force them to match across that join. But that’s a
topic for another day! </p>A Programmer’s Introduction to Unicode
http://reedbeta.com/blog/programmers-intro-to-unicode/
http://reedbeta.com/blog/programmers-intro-to-unicode/Nathan ReedFri, 03 Mar 2017 22:56:16 -0800http://reedbeta.com/blog/programmers-intro-to-unicode/#commentsCoding<p>Ｕｎｉｃｏｄｅ! 🅤🅝🅘🅒🅞🅓🅔‽ 🇺🇳🇮🇨🇴🇩🇪! 😄 The very name strikes fear and awe into the hearts of programmers
worldwide. We all know we ought to “support Unicode” in our software (whatever that means—like
using <code>wchar_t</code> for all the strings, right?). But Unicode can be abstruse, and diving into the
thousand-page <a href="http://www.unicode.org/versions/latest/">Unicode Standard</a> plus its dozens of
supplementary <a href="http://www.unicode.org/reports/">annexes, reports</a>, and <a href="http://www.unicode.org/notes/">notes</a>
can be more than a little intimidating. I don’t blame programmers for still finding the whole thing
mysterious, even 30 years after Unicode’s inception.</p>
<p>A few months ago, I got interested in Unicode and decided to spend some time learning more about it
in detail. In this article, I’ll give an introduction to it from a programmer’s point of view.</p>
<!--more-->
<p>I’m going to focus on the character set and what’s involved in working with strings and files of Unicode text.
However, in this article I’m not going to talk about fonts, text layout/shaping/rendering, or
localization in detail—those are separate issues, beyond my scope (and knowledge) here.</p>
<div class="toc">
<ul>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#diversity-and-inherent-complexity">Diversity and Inherent Complexity</a></li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#the-unicode-codespace">The Unicode Codespace</a><ul>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#codespace-allocation">Codespace Allocation</a></li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#scripts">Scripts</a></li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#usage-frequency">Usage Frequency</a></li>
</ul>
</li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#encodings">Encodings</a><ul>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#utf-8">UTF-8</a></li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#utf-16">UTF-16</a></li>
</ul>
</li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#combining-marks">Combining Marks</a><ul>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#canonical-equivalence">Canonical Equivalence</a></li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#normalization-forms">Normalization Forms</a></li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#grapheme-clusters">Grapheme Clusters</a></li>
</ul>
</li>
<li><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/#and-more">And More…</a></li>
</ul>
</div>
<h2 id="diversity-and-inherent-complexity"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#diversity-and-inherent-complexity"></a>Diversity and Inherent Complexity</h2>
<p>As soon as you start to study Unicode, it becomes clear that it represents a large jump in complexity
over character sets like ASCII that you may be more familiar with. It’s not just that Unicode
contains a much larger number of characters, although that’s part of it. Unicode also has a great
deal of internal structure, features, and special cases, making it much more than what one might
expect a mere “character set” to be. We’ll see some of that later in this article.</p>
<p>When confronting all this complexity, especially as an engineer, it’s hard not to find oneself asking,
“Why do we need all this? Is this really necessary? Couldn’t it be simplified?”</p>
<p>However, Unicode aims to faithfully represent the <em>entire world’s</em> writing systems. The Unicode
Consortium’s stated goal is “enabling people around the world to use computers in any language”.
And as you might imagine, the diversity of written languages is immense! To date, Unicode supports
135 different scripts, covering some 1100 languages, and there’s still a long tail of
<a href="http://linguistics.berkeley.edu/sei/">over 100 unsupported scripts</a>, both modern and historical,
which people are still working to add.</p>
<p>Given this enormous diversity, it’s inevitable that representing it is a complicated project.
Unicode embraces that diversity, and accepts the complexity inherent in its mission to include all
human writing systems. It doesn’t make a lot of trade-offs in the name of simplification, and it
makes exceptions to its own rules where necessary to further its mission.</p>
<p>Moreover, Unicode is committed not just to supporting texts in any <em>single</em> language, but also to
letting multiple languages coexist within one text—which introduces even more complexity.</p>
<p>Most programming languages have libraries available to handle the gory low-level details of text
manipulation, but as a programmer, you’ll still need to know about certain Unicode features in order
to know when and how to apply them. It may take some time to wrap your head around it all, but
don’t be discouraged—think about the billions of people for whom your software will be more
accessible through supporting text in their language. Embrace the complexity!</p>
<h2 id="the-unicode-codespace"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#the-unicode-codespace"></a>The Unicode Codespace</h2>
<p>Let’s start with some general orientation. The basic elements of Unicode—its “characters”, although
that term isn’t quite right—are called <em>code points</em>. Code points are identified by number,
customarily written in hexadecimal with the prefix “U+”, such as
<a href="http://unicode.org/cldr/utility/character.jsp?a=A">U+0041 “A” <span class="smallcaps">latin capital letter a</span></a> or
<a href="http://unicode.org/cldr/utility/character.jsp?a=θ">U+03B8 “θ” <span class="smallcaps">greek small letter theta</span></a>. Each
code point also has a short name, and quite a few other properties, specified in the
<a href="http://www.unicode.org/reports/tr44/">Unicode Character Database</a>.</p>
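<p>If you want to poke at code points programmatically, most languages expose them directly. Here's a quick sketch in Python (used for illustration throughout; equivalent calls exist in most languages' standard libraries):</p>

```python
import unicodedata

# Python 3 strings are sequences of code points; ord() gives the index.
assert ord('A') == 0x41   # U+0041
assert ord('θ') == 0x3B8  # U+03B8

# Each assigned code point has a name in the Unicode Character Database:
assert unicodedata.name('A') == 'LATIN CAPITAL LETTER A'
assert unicodedata.name('θ') == 'GREEK SMALL LETTER THETA'
```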
<p>The set of all possible code points is called the <em>codespace</em>. The Unicode codespace consists of
1,114,112 code points. However, only 128,237 of them—about 12% of the codespace—are actually
assigned, to date. There’s plenty of room for growth! Unicode also reserves an additional 137,468
code points as “private use” areas, which have no standardized meaning and are available for
individual applications to define for their own purposes.</p>
<h3 id="codespace-allocation"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#codespace-allocation"></a>Codespace Allocation</h3>
<p>To get a feel for how the codespace is laid out, it’s helpful to visualize it. Below is a map of the
entire codespace, with one pixel per code point. It’s arranged in tiles for visual coherence;
each small square is 16×16 = 256 code points, and each large square is a “plane” of 65,536 code
points. There are 17 planes altogether.</p>
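<p>The arithmetic is easy to check (a trivial sketch in Python):</p>

```python
# 17 planes of 65,536 code points each make up the full codespace:
assert 17 * 0x10000 == 0x110000 == 1_114_112
# ...so the highest code point is U+10FFFF:
assert 0x110000 - 1 == 0x10FFFF
```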
<p><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/codespace-map.png"><img alt="Map of the Unicode codespace (click to zoom)" src="http://reedbeta.com/blog/programmers-intro-to-unicode/codespace-map.png" title="Map of the Unicode codespace (click to zoom)" /></a></p>
<p>White represents unassigned space. Blue is assigned code points, green is private-use areas, and
the small red area is surrogates (more about those later).
As you can see, the assigned code points are distributed somewhat sparsely, but concentrated in the
first three planes.</p>
<p>Plane 0 is also known as the “Basic Multilingual Plane”, or BMP. The BMP contains essentially all
the characters needed for modern text in any script, including Latin, Cyrillic, Greek, Han (Chinese),
Japanese, Korean, Arabic, Hebrew, Devanagari (Indian), and many more.</p>
<p>(In the past, the codespace was just the BMP and no more—Unicode was originally conceived as a
straightforward 16-bit encoding, with only 65,536 code points. It was expanded to its current size
in 1996. However, the vast majority of code points in modern text belong to the BMP.)</p>
<p>Plane 1 contains historical scripts, such as Sumerian cuneiform and Egyptian hieroglyphs, as well as
emoji and various other symbols. Plane 2 contains a large block of less-common and historical Han
characters. The remaining planes are empty, except for a small number of rarely-used formatting
characters in Plane 14; planes 15–16 are reserved entirely for private use.</p>
<h3 id="scripts"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#scripts"></a>Scripts</h3>
<p>Let’s zoom in on the first three planes, since that’s where the action is:</p>
<p><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/script-map.png"><img alt="Map of scripts in Unicode planes 0–2 (click to zoom)" src="http://reedbeta.com/blog/programmers-intro-to-unicode/script-map.png" title="Map of scripts in Unicode planes 0–2 (click to zoom)" /></a></p>
<p>This map color-codes the 135 different scripts in Unicode. You can see how Han
<nobr>(<span class="swatch" style="background-color:#6bd8d3"></span>)</nobr> and Korean
<nobr>(<span class="swatch" style="background-color:#ce996a"></span>)</nobr> take up
most of the range of the BMP (the left large square). By contrast, all of the European, Middle
Eastern, and South Asian scripts fit into the first row of the BMP in this diagram.</p>
<p>Many areas of the codespace are adapted or copied from earlier encodings. For
example, the first 128 code points of Unicode are just a copy of ASCII. This has clear benefits
for compatibility—it’s easy to losslessly convert texts from smaller encodings into Unicode (and
the other direction too, as long as no characters outside the smaller encoding are used).</p>
<h3 id="usage-frequency"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#usage-frequency"></a>Usage Frequency</h3>
<p>One more interesting way to visualize the codespace is to look at the distribution of usage—in
other words, how often each code point is actually used in real-world texts. Below
is a heat map of planes 0–2 based on a large sample of text from Wikipedia and Twitter (all
languages). Frequency increases from black (never seen) through red and yellow to white.</p>
<p><a href="http://reedbeta.com/blog/programmers-intro-to-unicode/heatmap-wiki+tweets.png"><img alt="Heat map of code point usage frequency in Unicode planes 0–2 (click to zoom)" src="http://reedbeta.com/blog/programmers-intro-to-unicode/heatmap-wiki+tweets.png" title="Heat map of code point usage frequency in Unicode planes 0–2 (click to zoom)" /></a></p>
<p>You can see that the vast majority of this text sample lies in the BMP, with only scattered
usage of code points from planes 1–2. The biggest exception is emoji, which show up here as the
several bright squares in the bottom row of plane 1.</p>
<h2 id="encodings"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#encodings"></a>Encodings</h2>
<p>We’ve seen that Unicode code points are abstractly identified by their index in the codespace,
ranging from U+0000 to U+10FFFF. But how do code points get represented as bytes, in memory or in
a file?</p>
<p>The most convenient, computer-friendliest (and programmer-friendliest) thing to do would be to just
store the code point index as a 32-bit integer. This works, but it consumes 4 bytes per code point,
which is sort of a lot. Using 32-bit ints for Unicode will cost you a bunch of extra storage,
memory, and performance in bandwidth-bound scenarios, if you work with a lot of text.</p>
<p>Consequently, there are several more-compact encodings for Unicode. The 32-bit integer encoding is
officially called UTF-32 (UTF = “Unicode Transformation Format”), but it’s rarely used for storage.
At most, it comes up sometimes as a temporary internal representation, for examining or operating on
the code points in a string.</p>
<p>Much more commonly, you’ll see Unicode text encoded as either UTF-8 or UTF-16. These are both
<em>variable-length</em> encodings, made up of 8-bit or 16-bit units, respectively. In these schemes,
code points with smaller index values take up fewer bytes, which saves a lot of memory for
typical texts. The trade-off is that processing UTF-8/16 texts is more programmatically involved,
and likely slower.</p>
<h3 id="utf-8"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#utf-8"></a>UTF-8</h3>
<p>In UTF-8, each code point is stored using 1 to 4 bytes, based on its index value.</p>
<p>UTF-8 uses a system of binary prefixes, in which the high bits of each byte mark whether it’s a
single byte, the beginning of a multi-byte sequence, or a continuation byte; the remaining bits,
concatenated, give the code point index. This table shows how it works:</p>
<table>
<thead>
<tr>
<th>UTF-8 (binary)</th>
<th>Code point (binary)</th>
<th>Range</th>
</tr>
</thead>
<tbody>
<tr>
<td class="mono">0xxxxxxx</td>
<td class="mono">xxxxxxx</td>
<td>U+0000–U+007F</td>
</tr>
<tr>
<td class="mono">110xxxxx 10yyyyyy</td>
<td class="mono">xxxxxyyyyyy</td>
<td>U+0080–U+07FF</td>
</tr>
<tr>
<td class="mono">1110xxxx 10yyyyyy 10zzzzzz</td>
<td class="mono">xxxxyyyyyyzzzzzz</td>
<td>U+0800–U+FFFF</td>
</tr>
<tr>
<td class="mono">11110xxx 10yyyyyy 10zzzzzz 10wwwwww</td>
<td class="mono">xxxyyyyyyzzzzzzwwwwww</td>
<td>U+10000–U+10FFFF</td>
</tr>
</tbody>
</table>
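<p>As a sanity check, the table above translates into code almost line for line. Here's a sketch of a UTF-8 encoder in Python (illustration only; a production encoder would also reject surrogates and other invalid input):</p>

```python
def utf8_encode(cp):
    """Encode a single code point per the UTF-8 prefix table (a sketch;
    a real encoder would reject surrogates U+D800-U+DFFF and cp > 0x10FFFF)."""
    if cp < 0x80:      # 1 byte: 0xxxxxxx
        return bytes([cp])
    elif cp < 0x800:   # 2 bytes: 110xxxxx 10yyyyyy
        return bytes([0xC0 | cp >> 6, 0x80 | cp & 0x3F])
    elif cp < 0x10000: # 3 bytes: 1110xxxx 10yyyyyy 10zzzzzz
        return bytes([0xE0 | cp >> 12, 0x80 | cp >> 6 & 0x3F, 0x80 | cp & 0x3F])
    else:              # 4 bytes: 11110xxx 10yyyyyy 10zzzzzz 10wwwwww
        return bytes([0xF0 | cp >> 18, 0x80 | cp >> 12 & 0x3F,
                      0x80 | cp >> 6 & 0x3F, 0x80 | cp & 0x3F])

# Cross-check against the built-in codec, one char from each row of the table:
for ch in 'Aθ한\U0001F600':
    assert utf8_encode(ord(ch)) == ch.encode('utf-8')
```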
<p>A handy property of UTF-8 is that code points below 128 (ASCII characters) are encoded as single
bytes, and all non-ASCII code points are encoded using sequences of bytes 128–255. This has a couple
of nice consequences. First, any strings or files out there that are already in ASCII can also be
interpreted as UTF-8 without any conversion. Second, lots of widely-used string programming
idioms—such as null termination, or delimiters (newlines, tabs, commas, slashes, etc.)—will
just work on UTF-8 strings. ASCII bytes never occur inside
the encoding of non-ASCII code points, so searching byte-wise for a null terminator or a delimiter
will do the right thing.</p>
<p>Thanks to this convenience, it’s relatively simple to extend legacy ASCII programs and APIs to handle
UTF-8 strings. UTF-8 is very widely used in the Unix/Linux and Web worlds, and many programmers
argue <a href="http://utf8everywhere.org/">UTF-8 should be the default encoding everywhere</a>.</p>
<p>However, UTF-8 isn’t a drop-in replacement for ASCII strings in all respects. For instance,
code that iterates over the “characters” in a string will need to decode UTF-8 and iterate over
code points (or maybe grapheme clusters—more about those later), not bytes. When you measure the
“length” of a string, you’ll need to think about whether you want the length in bytes, the length
in code points, the width of the text when rendered, or something else.</p>
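<p>For example, in Python:</p>

```python
s = 'na\u00efve'                    # "naïve", with precomposed U+00EF "ï"
assert len(s) == 5                  # length in code points
assert len(s.encode('utf-8')) == 6  # length in bytes: "ï" takes two

# Iterating a Python string yields code points, not bytes:
assert [ord(c) for c in 'θ'] == [0x3B8]
assert list('θ'.encode('utf-8')) == [0xCE, 0xB8]
```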
<h3 id="utf-16"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#utf-16"></a>UTF-16</h3>
<p>The other encoding that you’re likely to encounter is UTF-16. It uses 16-bit words, with
each code point stored as either 1 or 2 words.</p>
<p>Like UTF-8, we can express the UTF-16 encoding rules in the form of binary prefixes:</p>
<table>
<thead>
<tr>
<th>UTF-16 (binary)</th>
<th>Code point (binary)</th>
<th>Range</th>
</tr>
</thead>
<tbody>
<tr>
<td class="mono">xxxxxxxxxxxxxxxx</td>
<td class="mono">xxxxxxxxxxxxxxxx</td>
<td>U+0000–U+FFFF</td>
</tr>
<tr>
<td class="mono">110110xxxxxxxxxx 110111yyyyyyyyyy</td>
<td class="mono">xxxxxxxxxxyyyyyyyyyy + 0x10000</td>
<td>U+10000–U+10FFFF</td>
</tr>
</tbody>
</table>
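<p>Again, the prefix table maps directly to code. A sketch of the surrogate-pair computation in Python (illustration only):</p>

```python
import struct

def utf16_units(cp):
    """Split a code point into UTF-16 words per the table above (a sketch;
    a real encoder would reject surrogate code points as input)."""
    if cp < 0x10000:
        return [cp]
    cp -= 0x10000
    return [0xD800 | cp >> 10, 0xDC00 | cp & 0x3FF]

# BMP code points take one word; U+1F600 (an emoji) needs a surrogate pair:
assert utf16_units(0x3B8) == [0x3B8]
assert utf16_units(0x1F600) == [0xD83D, 0xDE00]

# Cross-check against the built-in big-endian UTF-16 codec:
assert struct.pack('>2H', *utf16_units(0x1F600)) == '\U0001F600'.encode('utf-16-be')
```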
<p>A more common way that people talk about UTF-16 encoding, though, is in terms of code points called
“surrogates”. All the code points in the range U+D800–U+DFFF—or in other words, the code points
that match the binary prefixes <code>110110</code> and <code>110111</code> in the table above—are reserved specifically
for UTF-16 encoding, and don’t represent any valid characters on their own. They’re only meant
to occur in the 2-word encoding pattern above, which is called a “surrogate pair”. Surrogate code
points are illegal in any other context! They’re not allowed in UTF-8 or UTF-32 at all.</p>
<p>Historically, UTF-16 is a descendant of the original, pre-1996 versions of Unicode, in which there
were only 65,536 code points. The original intention was that there would be no different “encodings”;
Unicode was supposed to be a straightforward 16-bit character set. Later, the codespace was expanded
to make room for a long tail of less-common (but still important) Han characters, which the Unicode
designers didn’t originally plan for. Surrogates were then introduced, as—to put it bluntly—a
kludge, allowing 16-bit encodings to access the new code points.</p>
<p>Today, Javascript uses UTF-16 as its standard string representation: if you ask for the length of a
string, or iterate over it, etc., the result will be in UTF-16 words, with any
code points outside the BMP expressed as surrogate pairs. UTF-16 is also used by the Microsoft Win32 APIs;
though Win32 supports either 8-bit or 16-bit strings, the 8-bit version unaccountably
still doesn’t support UTF-8—only legacy code-page encodings, like ANSI. This leaves UTF-16 as the
only way to get proper Unicode support in Windows.</p>
<p>By the way, UTF-16’s words can be stored either little-endian or big-endian. Unicode has no opinion
on that issue, though it does encourage the convention of putting
<a href="http://unicode.org/cldr/utility/character.jsp?a=FEFF">U+FEFF <span class="smallcaps">zero width no-break space</span></a>
at the top of a UTF-16 file as a <a href="https://en.wikipedia.org/wiki/Byte_order_mark">byte-order mark</a>,
to disambiguate the endianness. (If the file doesn’t match the system’s endianness, the BOM will be
decoded as U+FFFE, a “noncharacter” code point that’s guaranteed never to represent a character.)</p>
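<p>A quick illustration in Python of how this works:</p>

```python
import codecs

# The BOM is just U+FEFF serialized at the start of the stream:
assert codecs.BOM_UTF16_BE == '\ufeff'.encode('utf-16-be') == b'\xfe\xff'
assert codecs.BOM_UTF16_LE == '\ufeff'.encode('utf-16-le') == b'\xff\xfe'

# Decoded with the wrong byte order, the BOM comes out as U+FFFE
# (a noncharacter), which is how a decoder can detect the mismatch:
assert b'\xff\xfe'.decode('utf-16-be') == '\ufffe'
```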
<h2 id="combining-marks"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#combining-marks"></a>Combining Marks</h2>
<p>In the story so far, we’ve been focusing on code points. But in Unicode, a “character” can be more
complicated than just an individual code point!</p>
<p>Unicode includes a system for <em>dynamically composing</em> characters, by combining multiple code points
together. This is used in various ways to gain flexibility without causing a huge combinatorial
explosion in the number of code points.</p>
<p>In European languages, for example, this shows up in the application of diacritics to letters. Unicode supports
a wide range of diacritics, including acute and grave accents, umlauts, cedillas, and many more.
All these diacritics can be applied to any letter of any alphabet—and in fact, <em>multiple</em>
diacritics can be used on a single letter.</p>
<p>If Unicode tried to assign a distinct code point to every possible combination of letter and
diacritics, things would rapidly get out of hand. Instead, the dynamic composition system enables you to construct the
character you want, by starting with a base code point (the letter) and appending additional code
points, called “combining marks”, to specify the diacritics. When a text renderer sees a sequence
like this in a string, it automatically stacks the diacritics over or under the base
letter to create a composed character.</p>
<p>For example, the accented character “Á” can be expressed as a string of two code points:
<a href="http://unicode.org/cldr/utility/character.jsp?a=A">U+0041 “A” <span class="smallcaps">latin capital letter a</span></a>
plus <a href="http://unicode.org/cldr/utility/character.jsp?a=0301">U+0301 “◌́” <span class="smallcaps">combining acute accent</span></a>.
This string automatically gets rendered as a single character: “Á”.</p>
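<p>In Python, for instance:</p>

```python
composed = '\u00C1'          # U+00C1, precomposed "Á"
decomposed = '\u0041\u0301'  # U+0041 "A" followed by U+0301 combining acute

# The two strings render identically, but are different code point sequences:
assert composed != decomposed
assert len(composed) == 1
assert len(decomposed) == 2
```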
<p>Now, Unicode does also include many “precomposed” code points, each representing a letter with some
combination of diacritics already applied, such as <a href="http://unicode.org/cldr/utility/character.jsp?a=Á">U+00C1 “Á” <span class="smallcaps">latin capital letter a with acute</span></a>
or <a href="http://unicode.org/cldr/utility/character.jsp?a=ệ">U+1EC7 “ệ” <span class="smallcaps">latin small letter e with circumflex and dot below</span></a>.
I suspect these are mostly inherited from older encodings that were assimilated into Unicode, and
kept around for compatibility. In practice, there are precomposed code points for most of the common
letter-with-diacritic combinations in European-script languages, so they don’t use dynamic
composition that much in typical text.</p>
<p>Still, the system of combining marks does allow for an <em>arbitrary number</em> of diacritics to be
stacked on any base character. The reductio-ad-absurdum of this is <a href="https://eeemo.net/">Zalgo text</a>,
which works by ͖͟ͅr͞aṋ̫̠̖͈̗d͖̻̹óm̪͙͕̗̝ļ͇̰͓̳̫ý͓̥̟͍ ̕s̫t̫̱͕̗̰̼̘͜a̼̩͖͇̠͈̣͝c̙͍k̖̱̹͍͘i̢n̨̺̝͇͇̟͙ģ̫̮͎̻̟ͅ ̕n̼̺͈͞u̮͙m̺̭̟̗͞e̞͓̰̤͓̫r̵o̖ṷs҉̪͍̭̬̝̤ ̮͉̝̞̗̟͠d̴̟̜̱͕͚i͇̫̼̯̭̜͡ḁ͙̻̼c̲̲̹r̨̠̹̣̰̦i̱t̤̻̤͍͙̘̕i̵̜̭̤̱͎c̵s ͘o̱̲͈̙͖͇̲͢n͘ ̜͈e̬̲̠̩ac͕̺̠͉h̷̪ ̺̣͖̱ḻ̫̬̝̹ḙ̙̺͙̭͓̲t̞̞͇̲͉͍t̷͔̪͉̲̻̠͙e̦̻͈͉͇r͇̭̭̬͖,̖́ ̜͙͓̣̭s̘̘͈o̱̰̤̲ͅ ̛̬̜̙t̼̦͕̱̹͕̥h̳̲͈͝ͅa̦t̻̲ ̻̟̭̦̖t̛̰̩h̠͕̳̝̫͕e͈̤̘͖̞͘y҉̝͙ ̷͉͔̰̠o̞̰v͈͈̳̘͜er̶f̰͈͔ḻ͕̘̫̺̲o̲̭͙͠ͅw̱̳̺
͜t̸h͇̭͕̳͍e̖̯̟̠ ͍̞̜͔̩̪͜ļ͎̪̲͚i̝̲̹̙̩̹n̨̦̩̖ḙ̼̲̼͢ͅ ̬͝s̼͚̘̞͝p͙̘̻a̙c҉͉̜̤͈̯̖i̥͡n̦̠̱͟g̸̗̻̦̭̮̟ͅ ̳̪̠͖̳̯̕a̫͜n͝d͡ ̣̦̙ͅc̪̗r̴͙̮̦̹̳e͇͚̞͔̹̫͟a̙̺̙ț͔͎̘̹ͅe̥̩͍ a͖̪̜̮͙̹n̢͉̝ ͇͉͓̦̼́a̳͖̪̤̱p̖͔͔̟͇͎͠p̱͍̺ę̲͎͈̰̲̤̫a̯͜r̨̮̫̣̘a̩̯͖n̹̦̰͎̣̞̞c̨̦̱͔͎͍͖e̬͓͘ ̤̰̩͙̤̬͙o̵̼̻̬̻͇̮̪f̴ ̡̙̭͓͖̪̤“̸͙̠̼c̳̗͜o͏̼͙͔̮r̞̫̺̞̥̬ru̺̻̯͉̭̻̯p̰̥͓̣̫̙̤͢t̳͍̳̖ͅi̶͈̝͙̼̙̹o̡͔n̙̺̹̖̩͝ͅ”̨̗͖͚̩.̯͓</p>
<p>A few other places where dynamic character composition shows up in Unicode:</p>
<ul>
<li>
<p><a href="https://en.wikipedia.org/wiki/Vowel_pointing">Vowel-pointing notation</a> in Arabic and Hebrew.
In these languages, words are normally spelled with some of their vowels left out. They then have
diacritic notation to indicate the vowels (used in dictionaries, language-teaching
materials, children’s books, and such). These diacritics are expressed with combining marks.
<table class="borderless">
<tr><td>A Hebrew example, with <a href="https://en.wikipedia.org/wiki/Niqqud">niqqud</a>:</td><td>אֶת דַלְתִּי הֵזִיז הֵנִיעַ, קֶטֶב לִשְׁכַּתִּי יָשׁוֹד</td></tr>
<tr><td>Normal writing (no niqqud):</td><td>את דלתי הזיז הניע, קטב לשכתי ישוד</td></tr>
</table></p>
</li>
<li>
<p><a href="https://en.wikipedia.org/wiki/Devanagari">Devanagari</a>, the script used to write Hindi, Sanskrit,
and many other South Asian languages, expresses certain vowels as combining marks attached
to consonant letters. For example, “ह” + “ि” = “हि” (“h” + “i” = “hi”).</p>
</li>
<li>
<p>Korean characters stand for syllables, but they are composed of letters called <a href="https://en.wikipedia.org/wiki/Hangul#Letters">jamo</a>
that stand for the vowels and consonants in the syllable. While there are code points for precomposed Korean
syllables, it’s also possible to dynamically compose them by concatenating their jamo.
For example, “ᄒ” + “ᅡ” + “ᆫ” = “한” (“h” + “a” + “n” = “han”).</p>
</li>
</ul>
<h3 id="canonical-equivalence"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#canonical-equivalence"></a>Canonical Equivalence</h3>
<p>In Unicode, precomposed characters exist alongside the dynamic composition system. A consequence of
this is that there are multiple ways to express “the same” string—different sequences of code
points that result in the same user-perceived characters. For example, as we saw earlier, we can
express the character “Á” either as the single code point U+00C1, <em>or</em> as the string of two code
points U+0041 U+0301.</p>
<p>Another source of ambiguity is the ordering of multiple diacritics in a single character.
Diacritic order matters visually when two diacritics apply to the same side of the base character,
e.g. both above: “ǡ” (dot, then macron) is different from “ā̇” (macron, then dot). However, when
diacritics apply to different sides of the character, e.g. one above and one below, then the order
doesn’t affect rendering. Moreover, a character with multiple diacritics might have one of the
diacritics precomposed and others expressed as combining marks.</p>
<p>For example, the Vietnamese letter “ệ” can be expressed in <em>five</em> different ways:</p>
<ul>
<li>Fully precomposed: U+1EC7 “ệ”</li>
<li>Partially precomposed: U+1EB9 “ẹ” + U+0302 “◌̂”</li>
<li>Partially precomposed: U+00EA “ê” + U+0323 “◌̣”</li>
<li>Fully decomposed: U+0065 “e” + U+0323 “◌̣” + U+0302 “◌̂”</li>
<li>Fully decomposed: U+0065 “e” + U+0302 “◌̂” + U+0323 “◌̣”</li>
</ul>
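<p>You can verify in Python that these really are five distinct code point sequences:</p>

```python
spellings = ['\u1EC7',                # fully precomposed
             '\u1EB9\u0302',          # ẹ + combining circumflex
             '\u00EA\u0323',          # ê + combining dot below
             '\u0065\u0323\u0302',    # e + dot below + circumflex
             '\u0065\u0302\u0323']    # e + circumflex + dot below

# Five different strings, of five different lengths in code points... sort of:
assert len(set(spellings)) == 5
assert [len(s) for s in spellings] == [1, 2, 2, 3, 3]
```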
<p>Unicode refers to a set of strings like this as “canonically equivalent”. Canonically equivalent
strings are supposed to be treated as identical for purposes of searching, sorting, rendering,
text selection, and so on. This has implications for how you implement operations on text.
For example, if an app has a “find in file” operation and the user searches for “ệ”, it should, by
default, find occurrences of <em>any</em> of the five versions of “ệ” above!</p>
<h3 id="normalization-forms"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#normalization-forms"></a>Normalization Forms</h3>
<p>To address the problem of “how to handle canonically equivalent strings”, Unicode defines several
<em>normalization forms</em>: ways of converting strings into a canonical form so that they can be
compared code-point-by-code-point (or byte-by-byte).</p>
<p>The “NFD” normalization form fully <em>decomposes</em> every character down to its component base and
combining marks, taking apart any precomposed code points in the string. It also sorts the combining
marks in each character according to their rendered position, so e.g. diacritics that go below the
character come before the ones that go above the character. (It doesn’t reorder diacritics in the
same rendered position, since their order matters visually, as previously mentioned.)</p>
<p>The “NFC” form, conversely, puts things back together into precomposed code points as much as
possible. If an unusual combination of diacritics is called for, there may not be any precomposed
code point for it, in which case NFC still precomposes what it can and leaves any remaining
combining marks in place (again ordered by rendered position, as in NFD).</p>
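<p>Python's standard <code>unicodedata</code> module implements these normalization forms, which makes it easy to check the “ệ” example from the previous section:</p>

```python
import unicodedata

spellings = ['\u1EC7',                # fully precomposed
             '\u1EB9\u0302',          # ẹ + circumflex
             '\u00EA\u0323',          # ê + dot below
             '\u0065\u0323\u0302',    # e + dot below + circumflex
             '\u0065\u0302\u0323']    # e + circumflex + dot below

# All five spellings collapse to a single representation under either form:
assert len({unicodedata.normalize('NFC', s) for s in spellings}) == 1
assert len({unicodedata.normalize('NFD', s) for s in spellings}) == 1

# NFC picks the fully precomposed code point; NFD fully decomposes, with
# the below-diacritic sorted before the above-diacritic:
assert unicodedata.normalize('NFC', spellings[4]) == '\u1EC7'
assert unicodedata.normalize('NFD', spellings[0]) == '\u0065\u0323\u0302'
```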
<p>There are also forms called NFKD and NFKC. The “K” here refers to <em>compatibility</em> decompositions,
which cover characters that are “similar” in some sense but not visually identical. However, I’m not
going to cover that here.</p>
<h3 id="grapheme-clusters"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#grapheme-clusters"></a>Grapheme Clusters</h3>
<p>As we’ve seen, Unicode contains various cases where a thing that a user thinks of
as a single “character” might actually be made up of multiple code points under the hood. Unicode
formalizes this using the notion of a <em>grapheme cluster</em>: a string of one or more code points that
constitute a single “user-perceived character”.</p>
<p><a href="http://www.unicode.org/reports/tr29/">UAX #29</a> defines the rules for what, precisely, qualifies
as a grapheme cluster. It’s approximately “a base code point followed by any number of combining
marks”, but the actual definition is a bit more complicated; it accounts for things like Korean
jamo, and <a href="http://blog.emojipedia.org/emoji-zwj-sequences-three-letters-many-possibilities/">emoji ZWJ sequences</a>.</p>
<p>The main thing grapheme clusters are used for is text <em>editing</em>: they’re often the most sensible
unit for cursor placement and text selection boundaries. Using grapheme clusters for these purposes
ensures that you can’t accidentally chop off some diacritics when you copy-and-paste text, that
left/right arrow keys always move the cursor by one visible character, and so on.</p>
<p>Another place where grapheme clusters are useful is in enforcing a string length limit—say, on a
database field. While the true, underlying limit might be something like the byte length of the string
in UTF-8, you wouldn’t want to enforce that by just truncating bytes. At a minimum, you’d want to
“round down” to the nearest code point boundary; but even better, round down to the nearest <em>grapheme
cluster boundary</em>. Otherwise, you might be corrupting the last character by cutting off a diacritic,
or interrupting a jamo sequence or ZWJ sequence.</p>
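<p>Here's a sketch in Python of the “round down to a code point boundary” step, relying on the fact that UTF-8 continuation bytes always match the bit pattern <code>10xxxxxx</code>. (Rounding to grapheme cluster boundaries would additionally need a UAX #29 segmentation library, which isn't shown here.)</p>

```python
def truncate_utf8(data, max_len):
    """Truncate UTF-8 bytes to at most max_len, rounding down to a code
    point boundary so we never leave a partial multi-byte sequence."""
    if len(data) <= max_len:
        return data
    end = max_len
    # Back up past any continuation bytes (10xxxxxx) to a sequence start:
    while end > 0 and (data[end] & 0xC0) == 0x80:
        end -= 1
    return data[:end]

s = 'abcθ'.encode('utf-8')             # b'abc\xce\xb8', 5 bytes
assert truncate_utf8(s, 4) == b'abc'   # avoids splitting θ's two bytes
assert truncate_utf8(s, 5) == s        # already fits; unchanged
```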
<h2 id="and-more"><a class="link-button" href="http://reedbeta.com/blog/programmers-intro-to-unicode/#and-more"></a>And More…</h2>
<p>There’s much more that could be said about Unicode from a programmer’s perspective! I haven’t gotten
into such fun topics as case mapping, collation, compatibility decompositions and confusables,
Unicode-aware regexes, or bidirectional text. Nor have I said anything yet about implementation
issues—how to efficiently store and look up data about the sparsely-assigned code points, or how
to optimize UTF-8 decoding, string comparison, or NFC normalization. Perhaps I’ll return to some of
those things in future posts.</p>
<p>Unicode is a fascinating and complex system. It has a many-to-one mapping between bytes and
code points, and on top of that a many-to-one (or, under some circumstances, many-to-many) mapping
between code points and “characters”. It has oddball special cases in every corner. But no one ever
claimed that representing <em>all written languages</em> was going to be <em>easy</em>, and it’s clear that
we’re never going back to the bad old days of a patchwork of incompatible encodings.</p>
<p>Further reading:</p>
<ul>
<li><a href="http://www.unicode.org/versions/latest/">The Unicode Standard</a></li>
<li><a href="http://utf8everywhere.org/">UTF-8 Everywhere Manifesto</a></li>
<li><a href="https://eev.ee/blog/2015/09/12/dark-corners-of-unicode/">Dark corners of Unicode</a> by Eevee</li>
<li><a href="http://site.icu-project.org/">ICU (International Components for Unicode)</a>—C/C++/Java libraries
implementing many Unicode algorithms and related things</li>
<li><a href="https://docs.python.org/3/howto/unicode.html">Python 3 Unicode Howto</a></li>
<li><a href="https://www.google.com/get/noto/">Google Noto Fonts</a>—set of fonts intended to cover all
assigned code points</li>
</ul>The Many Meanings of “Shader”
http://reedbeta.com/blog/many-meanings-of-shader/
http://reedbeta.com/blog/many-meanings-of-shader/Nathan ReedSun, 12 Feb 2017 18:09:30 -0800http://reedbeta.com/blog/many-meanings-of-shader/#commentsGraphicsGPU<p>When the same word is used to mean slightly different things, there’s always a chance of creating
confusion—and the word “shader” is a bit overloaded in computer graphics. Between engineers, artists,
and DCC tools’ terminology, there are at least four different meanings of “shader” out there.</p>
<!--more-->
<h2 id="shader-binaries"><a class="link-button" href="http://reedbeta.com/blog/many-meanings-of-shader/#shader-binaries"></a>Shader Binaries</h2>
<p>To begin with, there’s the low-level engineering meaning of a “shader” as an individual GPU program,
e.g. a vertex shader, pixel shader, or compute shader. They begin life as source code written in
a shading language such as HLSL, and they get compiled (often through multiple stages) down to machine
code that actually executes on the GPU’s shader cores. Graphics APIs represent the resulting binaries
as primitive objects—for example, <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ff476576.aspx"><code>ID3D11PixelShader</code></a>
instances, or OpenGL <a href="http://docs.gl/gl4/glCreateShader">shader objects</a>.</p>
<h2 id="pipeline-states-and-linked-shader-programs"><a class="link-button" href="http://reedbeta.com/blog/many-meanings-of-shader/#pipeline-states-and-linked-shader-programs"></a>Pipeline States and Linked Shader Programs</h2>
<p>A vertex shader or pixel shader isn’t of much use by itself. Most of the time, one of each—and perhaps
tessellation or geometry shaders also—are designed to work together to accomplish a task. So
engineers like to hop up an abstraction level and refer to the complete graphics pipeline’s worth of
shaders as “a shader”, as well.</p>
<p>Most game engines contain an object expressing this concept, such as Unreal’s
<a href="https://github.com/EpicGames/UnrealEngine/blob/release/Engine/Source/Runtime/ShaderCore/Public/Shader.h?ts=4#L471"><code>FShader</code></a>
class. Certain APIs express it too, as in OpenGL’s longtime “program objects”, and more recently
D3D12/Vulkan/Metal’s concept of “pipeline states”. (Although pipeline states include more than just
shaders.) Moreover, a pipeline-state of shaders frequently acts as a unit in game engines’ asset
systems, e.g. for dependency tracking, asset pack building, and so on.</p>
<p>From this point of view, an individual shader binary is more like a single function out of
a module or library, rather than a program in its own right.</p>
<h2 id="effects-and-shader-variants"><a class="link-button" href="http://reedbeta.com/blog/many-meanings-of-shader/#effects-and-shader-variants"></a>Effects and Shader Variants</h2>
<p>But the ladder of abstraction doesn’t stop there. Given the complexity of renderers these days, a
single pipeline-state “shader” is still too fine-grained an object for many purposes.</p>
<p>First, a game object usually needs to get rendered in several passes, such as: depth prepass,
shadow maps, deferred G-buffer pass, forward lighting pass, etc. Each of these has its own set of
shader binaries and pipeline state, but they all work together to make the object show up in the
game world, properly lit and shadowed. Another case is a postprocessing effect with multiple passes
that cooperate—for instance, the standard bloom implementation involves several downsample, blur, and
upsample passes.</p>
<p>Second, we often generate many variants of a shader, offering different
combinations of features: with a normal map or not, with skinned animation or not, with
subsurface scattering or not, and so on. Conceptually these behave like one giant shader with a
lot of <code>if</code>-statements in it, but in practice we actually compile variants that strip out the code
for unused features, to get better runtime performance. This can lead to
<a href="http://aras-p.info/blog/2017/02/05/Every-Possible-Scalability-Limit-Will-Be-Reached/">vast combinatorial explosion</a>.</p>
<p>For both of these reasons, it’s common to group together a set of related pipeline-state shaders, or
a schema for lazily generating shaders on demand, and call <em>that whole thing</em> a “shader”, too.</p>
<p>This is usually the smallest unit that artists and other non-engineers are concerned with—it’ll
appear in the interface of your editor as “a shader”, and act as a single noun, with
all the underlying structure hidden. We might say that this object represents the sum total
of the game engine’s capabilities of rendering some general type of thing, efficiently and with all
the appropriate lighting/shading phenomena in place.</p>
<p>In days of yore, D3D9 HLSL had the concept of an <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/bb173329(v=vs.85).aspx">effect</a>,
a collection of shaders compiled from a single source file and organized into “techniques” and
“passes”. That original system is long obsolete, but many game engines still organize their
shaders along similar lines.</p>
<h2 id="materials"><a class="link-button" href="http://reedbeta.com/blog/many-meanings-of-shader/#materials"></a>Materials</h2>
<p>The final meaning of “shader” that I’ll discuss here isn’t a higher level of abstraction, but
something more specific: a shader (in the sense of the previous section) together with specific
settings for its user-editable texture inputs and other parameters. This is also called a
“material”. For example, you might have a generic diffuse/normal/specular shader, and when you bind it
to a certain set of textures and specular parameters, it becomes a “brick shader”, or a “metal shader”,
or a “grass shader”.</p>
<p>This terminology comes from certain DCC tools—in particular, Maya, where materials take the form of
shader nodes in Maya’s dependency graph. Users might talk about “editing a shader”, “putting a
shader on a mesh”, and so on, referring to the <em>material</em> settings as opposed to, say, the shader
source code.</p>
<p>That’s why when you hear someone talking about shaders and it’s not clear from the context, you
might have to ask them to clarify just what kind of “shader” they mean!</p>Tessellation Modes Quick Reference
http://reedbeta.com/blog/tess-quick-ref/
http://reedbeta.com/blog/tess-quick-ref/Nathan ReedFri, 30 Dec 2016 21:07:01 -0800http://reedbeta.com/blog/tess-quick-ref/#commentsCodingGPUGraphics<p>One difficulty with GPU hardware tessellation is the complexity of programming it. Tessellation offers
a number of modes and options; it’s hard to remember which things do what, and how all the pieces fit together.
I use tessellation just infrequently enough that I’ve always completely forgotten this stuff since the
last time I used it, and I’m getting sick of looking it up and/or figuring it out by trial and error
every time. So here’s a quick-reference post for how it all works!</p>
<!--more-->
<p>This article is written from a D3D perspective and will mostly use D3D terminology.
However, the same hardware functionality is exposed in OpenGL in essentially the same way, as
<a href="https://www.opengl.org/registry/specs/ARB/tessellation_shader.txt">ARB_tessellation_shader</a>, which
is in core in OpenGL 4.0+.</p>
<div class="toc">
<ul>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#tessellation-refresher">Tessellation Refresher</a><ul>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#warning-conventions-ahead">Warning: Conventions Ahead</a></li>
</ul>
</li>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#domains">Domains</a><ul>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#triangle">Triangle</a></li>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#quad">Quad</a></li>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#isoline">Isoline</a></li>
</ul>
</li>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#spacing-aka-partitioning-modes">Spacing (aka Partitioning) Modes</a><ul>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#integer-spacing">Integer Spacing</a></li>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#fractional-odd-spacing">Fractional-Odd Spacing</a></li>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#fractional-even-spacing">Fractional-Even Spacing</a></li>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#why-is-there-no-fractional-integer-mode">Why is there no “fractional-integer” mode?</a></li>
</ul>
</li>
<li><a href="http://reedbeta.com/blog/tess-quick-ref/#further-reading">Further Reading</a></li>
</ul>
</div>
<h2 id="tessellation-refresher"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#tessellation-refresher"></a>Tessellation Refresher</h2>
<p>When tessellation is enabled, the GPU pipeline gains two(ish) additional stages:</p>
<ul>
<li>
<p><strong>Hull shader</strong>—per-patch shader. Runs right after vertex shading, and can see the data for all the
vertices in the patch. Primarily responsible for setting tessellation factors, though it can also
modify the vertex data if you want. Also useful for per-patch frustum culling.</p>
</li>
<li>
<p><strong>Domain shader</strong>—post-tessellation vertex shader. Gets the UV coordinates of the
generated vertex within the patch, and is responsible for interpolating, displacing, and what-have-you
to produce the final vertex data to rasterize.</p>
</li>
</ul>
<p>The “patch-constant” part of the hull shader looks <em>sort of</em> like an extra stage of its own in D3D; it has a
separate entry point from the main hull shader, and it conceptually runs at a lower frequency
(per-patch rather than per-patch-vertex). In OpenGL, though, it’s all tossed together in one entry
point for the compiler to sort out.</p>
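<p>To make the split concrete, here’s a minimal sketch of a hull shader for a 3-vertex triangle patch, with both the per-patch-vertex entry point and the patch-constant function. The <code>VData</code> struct and the constant factor values are placeholders; a real shader would compute the factors from something like screen-space edge length or camera distance.</p>

```hlsl
// Hypothetical patch-constant outputs for the triangle domain.
struct PatchConstants
{
    float edgeFactors[3] : SV_TessFactor;
    float insideFactor   : SV_InsideTessFactor;
};

// Runs once per patch. Setting any edge factor to 0 culls the whole
// patch -- the per-patch frustum culling mentioned above.
PatchConstants hsPatchConstants(InputPatch<VData, 3> inp)
{
    PatchConstants pc;
    pc.edgeFactors[0] = pc.edgeFactors[1] = pc.edgeFactors[2] = 16.0;
    pc.insideFactor = 16.0;
    return pc;
}

[domain("tri")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("hsPatchConstants")]
VData hs(InputPatch<VData, 3> inp, uint i : SV_OutputControlPointID)
{
    // Runs once per patch vertex; here it just passes data through,
    // but it could also modify the vertex data.
    return inp[i];
}
```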
<p>What constitutes a “patch” on the input side is pretty free-form. A patch can have any number of
vertices you want, from 1 to 32, and the meaning of those vertices is up to the interpretation of
the hull and domain shaders. Common patch sizes include: 3 for triangular patches (e.g. using
<a href="http://perso.telecom-paristech.fr/~boubek/papers/PhongTessellation/">Phong tessellation</a>), 4 for
quad patches, or 16 for bicubic patches with all the control points.</p>
<h3 id="warning-conventions-ahead"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#warning-conventions-ahead"></a>Warning: Conventions Ahead</h3>
<p>Note that because the meaning of the vertices is up to the interpretation of the hull and domain
shaders, there is no canonical vertex order for patches! Different conventions for vertex order
are possible, which can lead to different people’s or projects’ tessellation shaders having
different mappings between vertex index and the patch UVs and tess factors.</p>
<p>The thing that <em>isn’t</em> up to individual convention, but is fixed by the hardware and API definitions,
is the relationship between patch UVs and tess factors. Those relationships are shown in the
diagrams below, and carry through regardless of which conventions you’re using, or which API.</p>
<h2 id="domains"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#domains"></a>Domains</h2>
<p>The “domain” mode controls the shape of the tessellated mesh that will be generated
and fed through the domain shader. There are three domains supported by the hardware: triangle,
quad, and isoline.</p>
<h3 id="triangle"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#triangle"></a>Triangle</h3>
<p>The triangle domain generates triangular output patches. It has three edge tess factors,
which specify the number of segments that each edge of the triangle gets subdivided into. It has a
single “inside” tess factor, which specifies the number of mesh segments from each edge
to the opposite vertex. The domain shader receives three barycentric coordinates, which always sum
to 1. The barycentrics are represented by colors (UVW = RGB) in the diagram below.</p>
<p><img alt="UVW layout for the "triangle" tessellation domain" class="not-too-wide" src="http://reedbeta.com/blog/tess-quick-ref/triangle-domain.png" title="UVW layout for the "triangle" tessellation domain" /></p>
<p>Here’s an HLSL snippet for a basic triangle domain shader that just interpolates positions:</p>
<table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre> 1
2
3
4
5
6
7
8
9
10</pre></div></td><td class="code"><div class="codehilite"><pre><span></span><span class="p">[</span><span class="nd">domain</span><span class="p">(</span><span class="s">"tri"</span><span class="p">)]</span>
<span class="kt">void</span> <span class="n">ds</span><span class="p">(</span>
<span class="k">in</span> <span class="kt">float</span> <span class="n">edgeFactors</span><span class="p">[</span><span class="mi">3</span><span class="p">]</span> <span class="o">:</span> <span class="nd">SV_TessFactor</span><span class="p">,</span>
<span class="k">in</span> <span class="kt">float</span> <span class="n">insideFactor</span> <span class="o">:</span> <span class="nd">SV_InsideTessFactor</span><span class="p">,</span>
<span class="k">in</span> <span class="kt">OutputPatch</span><span class="o"><</span><span class="n">VData</span><span class="p">,</span> <span class="mi">3</span><span class="o">></span> <span class="n">inp</span><span class="p">,</span>
<span class="k">in</span> <span class="kt">float3</span> <span class="n">uvw</span> <span class="o">:</span> <span class="nd">SV_DomainLocation</span><span class="p">,</span>
<span class="k">out</span> <span class="kt">float4</span> <span class="n">o_pos</span> <span class="o">:</span> <span class="nd">SV_Position</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">o_pos</span> <span class="o">=</span> <span class="n">inp</span><span class="p">[</span><span class="mo">0</span><span class="p">].</span><span class="n">pos</span> <span class="o">*</span> <span class="n">uvw</span><span class="p">.</span><span class="n">x</span> <span class="o">+</span> <span class="n">inp</span><span class="p">[</span><span class="mi">1</span><span class="p">].</span><span class="n">pos</span> <span class="o">*</span> <span class="n">uvw</span><span class="p">.</span><span class="n">y</span> <span class="o">+</span> <span class="n">inp</span><span class="p">[</span><span class="mi">2</span><span class="p">].</span><span class="n">pos</span> <span class="o">*</span> <span class="n">uvw</span><span class="p">.</span><span class="n">z</span><span class="p">;</span>
<span class="p">}</span>
</pre></div>
</td></tr></table>
<p>Note that here I’ve used the convention that each component of the UVW vector is the weight
for the <em>same-index</em> vertex (the first component goes with the first vertex, etc). This leads to each
edge tess factor controlling the edge <em>opposite</em> the same-index vertex.</p>
<p>Another reasonable convention would be that vertex 0 lies at the origin of UV space (i.e. at <em>u</em> = 0,
<em>v</em> = 0, <em>w</em> = 1), vertex 1 lies
along the U axis, and vertex 2 lies along the V axis. This would lead to a different expression in
the domain shader, and a different mapping between vertices and edge tess factors,
but the diagram above wouldn’t change.</p>
<p>In a real-world case, you’d probably have additional vertex attributes being interpolated similarly.
Higher-order interpolation and displacement mapping could also be applied.</p>
<h3 id="quad"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#quad"></a>Quad</h3>
<p>The quad domain generates quadrilateral output patches; it has four edge tess factors, and
two inside factors, which control the number of segments between pairs of opposite edges. The
domain shader receives two-dimensional UV coordinates, represented as red and green in the diagram below.</p>
<p><img alt="UV layout for the "quad" tessellation domain" class="not-too-wide" src="http://reedbeta.com/blog/tess-quick-ref/quad-domain.png" title="UV layout for the "quad" tessellation domain" /></p>
<p>HLSL for a basic quad domain shader:</p>
<table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre> 1
2
3
4
5
6
7
8
9
10
11
12</pre></div></td><td class="code"><div class="codehilite"><pre><span></span><span class="p">[</span><span class="nd">domain</span><span class="p">(</span><span class="s">"quad"</span><span class="p">)]</span>
<span class="kt">void</span> <span class="n">ds</span><span class="p">(</span>
<span class="k">in</span> <span class="kt">float</span> <span class="n">edgeFactors</span><span class="p">[</span><span class="mi">4</span><span class="p">]</span> <span class="o">:</span> <span class="nd">SV_TessFactor</span><span class="p">,</span>
<span class="k">in</span> <span class="kt">float</span> <span class="n">insideFactors</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">:</span> <span class="nd">SV_InsideTessFactor</span><span class="p">,</span>
<span class="k">in</span> <span class="kt">OutputPatch</span><span class="o"><</span><span class="n">VData</span><span class="p">,</span> <span class="mi">4</span><span class="o">></span> <span class="n">inp</span><span class="p">,</span>
<span class="k">in</span> <span class="kt">float2</span> <span class="n">uv</span> <span class="o">:</span> <span class="nd">SV_DomainLocation</span><span class="p">,</span>
<span class="k">out</span> <span class="kt">float4</span> <span class="n">o_pos</span> <span class="o">:</span> <span class="nd">SV_Position</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">o_pos</span> <span class="o">=</span> <span class="nb">lerp</span><span class="p">(</span><span class="nb">lerp</span><span class="p">(</span><span class="n">inp</span><span class="p">[</span><span class="mo">0</span><span class="p">].</span><span class="n">pos</span><span class="p">,</span> <span class="n">inp</span><span class="p">[</span><span class="mi">1</span><span class="p">].</span><span class="n">pos</span><span class="p">,</span> <span class="n">uv</span><span class="p">.</span><span class="n">x</span><span class="p">),</span>
<span class="nb">lerp</span><span class="p">(</span><span class="n">inp</span><span class="p">[</span><span class="mi">2</span><span class="p">].</span><span class="n">pos</span><span class="p">,</span> <span class="n">inp</span><span class="p">[</span><span class="mi">3</span><span class="p">].</span><span class="n">pos</span><span class="p">,</span> <span class="n">uv</span><span class="p">.</span><span class="n">x</span><span class="p">),</span>
<span class="n">uv</span><span class="p">.</span><span class="n">y</span><span class="p">);</span>
<span class="p">}</span>
</pre></div>
</td></tr></table>
<p>As we saw in the triangle case, there are multiple possible vertex order conventions. I’ve chosen
to put the vertices in triangle-strip order: bottom-left, bottom-right, top-left, top-right. This
is the order that you’d submit the vertices to draw a quad as a two-triangle strip. Another
reasonable convention would be counterclockwise order around the quad, which would swap vertices
2 and 3 relative to mine.</p>
<h3 id="isoline"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#isoline"></a>Isoline</h3>
<p>The isoline domain is an odder and less-used one. Instead of producing triangles, it produces
a set of line strips. The line strips come in a quadrilateral shape, same as the quad domain,
but they’re subdivided along the U axis and discretely spaced along the V axis. The isoline domain has only
two edge tess factors (defined the same way as for the quad domain), and no inside factors.</p>
<p><img alt="UV layout for the "isoline" tessellation domain" class="not-too-wide" src="http://reedbeta.com/blog/tess-quick-ref/isoline-domain.png" title="UV layout for the "isoline" tessellation domain" /></p>
<p>Note that the “last” line strip, which would appear at <em>v</em> = 1, is missing; this is so neighboring
isoline patches don’t produce overlapping line strips along their shared edge.</p>
<p>HLSL for a basic isoline domain shader (note that the body is the same as for the quad
domain, above—the only difference is the number of tess factors):</p>
<table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre> 1
2
3
4
5
6
7
8
9
10
11</pre></div></td><td class="code"><div class="codehilite"><pre><span></span><span class="p">[</span><span class="nd">domain</span><span class="p">(</span><span class="s">"isoline"</span><span class="p">)]</span>
<span class="kt">void</span> <span class="n">ds</span><span class="p">(</span>
<span class="k">in</span> <span class="kt">float</span> <span class="n">factors</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">:</span> <span class="nd">SV_TessFactor</span><span class="p">,</span>
<span class="k">in</span> <span class="kt">OutputPatch</span><span class="o"><</span><span class="n">VData</span><span class="p">,</span> <span class="mi">4</span><span class="o">></span> <span class="n">inp</span><span class="p">,</span>
<span class="k">in</span> <span class="kt">float2</span> <span class="n">uv</span> <span class="o">:</span> <span class="nd">SV_DomainLocation</span><span class="p">,</span>
<span class="k">out</span> <span class="kt">float4</span> <span class="n">o_pos</span> <span class="o">:</span> <span class="nd">SV_Position</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">o_pos</span> <span class="o">=</span> <span class="nb">lerp</span><span class="p">(</span><span class="nb">lerp</span><span class="p">(</span><span class="n">inp</span><span class="p">[</span><span class="mo">0</span><span class="p">].</span><span class="n">pos</span><span class="p">,</span> <span class="n">inp</span><span class="p">[</span><span class="mi">1</span><span class="p">].</span><span class="n">pos</span><span class="p">,</span> <span class="n">uv</span><span class="p">.</span><span class="n">x</span><span class="p">),</span>
<span class="nb">lerp</span><span class="p">(</span><span class="n">inp</span><span class="p">[</span><span class="mi">2</span><span class="p">].</span><span class="n">pos</span><span class="p">,</span> <span class="n">inp</span><span class="p">[</span><span class="mi">3</span><span class="p">].</span><span class="n">pos</span><span class="p">,</span> <span class="n">uv</span><span class="p">.</span><span class="n">x</span><span class="p">),</span>
<span class="n">uv</span><span class="p">.</span><span class="n">y</span><span class="p">);</span>
<span class="p">}</span>
</pre></div>
</td></tr></table>
<h2 id="spacing-aka-partitioning-modes"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#spacing-aka-partitioning-modes"></a>Spacing (aka Partitioning) Modes</h2>
<p>Spacing modes (actually called “partitioning” modes in D3D, but I like the OpenGL term better) affect
the interpretation of the tessellation factors. Broadly speaking, the tess factors are just
the number of segments that a given edge or UV axis will be subdivided into, but there are three
choices for the detailed behavior.</p>
<p>(There’s also a fourth mode, “pow2”, but I’ll ignore it here, since it’s just integer mode with an
extra restriction. It also doesn’t exist in OpenGL.)</p>
<h3 id="integer-spacing"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#integer-spacing"></a>Integer Spacing</h3>
<p>In integer spacing, also called equal spacing, fractional tess factors are rounded <em>up</em> to
the nearest integer. The useful range is [1, 64]. There are no smooth transitions; subdivisions are
always equally spaced in UV distance, and we just discretely pop from one to the next.</p>
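<p>In other words, the fixed-function tessellator effectively does this with each raw factor (an illustrative sketch, not actual hardware code):</p>

```hlsl
// Integer spacing: clamp to the useful range, then round up.
float integerSegments(float f)
{
    return ceil(clamp(f, 1.0, 64.0));
}
```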
<p>Here’s a video showing the tess factors animating from 1 to 64 in integer spacing mode, using the
triangle, quad, and isoline domains respectively. (For the isoline case, I also rendered a dot at
each vertex so you can see where they are along the lines.)</p>
<div class="embed-wrapper-outer" >
<div class="embed-wrapper-inner">
<iframe class="embed" type="text/html" allowfullscreen frameborder="0" src="https://www.youtube.com/embed/KXmV9o4VtOk?origin=http://reedbeta.com"></iframe>
</div>
</div>
<h3 id="fractional-odd-spacing"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#fractional-odd-spacing"></a>Fractional-Odd Spacing</h3>
<p>The fractional spacing modes provide smooth transitions between different subdivision levels, by
morphing vertices around so that edges smoothly expand or collapse as the tess factors go up and down.</p>
<p>In the case of fractional-odd mode, the number of segments is defined by rounding the tess factors
up to the nearest odd integer, and the blend factor for vertex morphing is defined by how far you
had to round. This mode matches integer spacing when you’re exactly on an odd integer. The useful
range is [1, 63].</p>
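<p>As a sketch, the rounding looks something like this; the exact blend normalization is my reconstruction of “how far you had to round” and may differ in detail from the hardware:</p>

```hlsl
// Fractional-odd spacing: round up to the nearest odd integer;
// the leftover fraction drives the vertex morphing.
float fracOddSegments(float f, out float blend)
{
    f = clamp(f, 1.0, 63.0);                           // useful range [1, 63]
    float rounded = 2.0 * ceil((f + 1.0) / 2.0) - 1.0; // next odd integer >= f
    blend = (rounded - f) / 2.0;                       // 0 when exactly on an odd integer
    return rounded;
}
```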
<p>Here’s a video:</p>
<div class="embed-wrapper-outer" >
<div class="embed-wrapper-inner">
<iframe class="embed" type="text/html" allowfullscreen frameborder="0" src="https://www.youtube.com/embed/6sTI4yiAQEg?origin=http://reedbeta.com"></iframe>
</div>
</div>
<p>Note that in isoline mode, the V axis (between the lines) always uses integer spacing behavior. Only
the U axis (along the lines) gets fractional spacing.</p>
<h3 id="fractional-even-spacing"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#fractional-even-spacing"></a>Fractional-Even Spacing</h3>
<p>Fractional-even spacing is the same as fractional-odd, but using even integers instead.
The useful range is [2, 64]. Note that the “identity” tess factor of 1 is not available
in this mode! Everything always gets tessellated by at least a factor of 2.</p>
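<p>The corresponding sketch, under the same assumptions as the fractional-odd version above:</p>

```hlsl
// Fractional-even spacing: round up to the nearest even integer.
float fracEvenSegments(float f, out float blend)
{
    f = clamp(f, 2.0, 64.0);             // useful range [2, 64]; no factor of 1
    float rounded = 2.0 * ceil(f / 2.0); // next even integer >= f
    blend = (rounded - f) / 2.0;
    return rounded;
}
```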
<p>Video:</p>
<div class="embed-wrapper-outer" >
<div class="embed-wrapper-inner">
<iframe class="embed" type="text/html" allowfullscreen frameborder="0" src="https://www.youtube.com/embed/vtvvYEhRKcQ?origin=http://reedbeta.com"></iframe>
</div>
</div>
<h3 id="why-is-there-no-fractional-integer-mode"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#why-is-there-no-fractional-integer-mode"></a>Why is there no “fractional-integer” mode?</h3>
<p>You might be wondering why fractional spacing only comes in “odd” and “even” flavors. Why don’t
we have a fractional mode that interpolates between all integer tess factors?</p>
<p>The answer lies with the symmetry of the tessellation patterns. The patterns are chosen to be (as
nearly as possible) invariant under flips and rotations of the UV space. For example, in the
triangle domain, the output mesh looks the same if you rotate the triangle by 120° either
direction, or reflect it across the line between a vertex and the midpoint of the opposite edge.
Similarly, quad-domain meshes are unchanged by a rotation by 90° or a reflection along either the
U or V axis. The only place the symmetry breaks is the very middle row of quads, when a rounded tess
factor is odd; then, that row’s diagonals have to go one way or the other, so they can’t be symmetric.</p>
<p>If this symmetry is going to be maintained, then the tess factors can only change in increments of
<em>two</em>. Anytime you add a subdivision at one place along an edge or UV axis, you must also add another
subdivision at the corresponding place reflected across the midpoint of that edge or axis. Otherwise,
you’ll break the symmetry. Thus, your rounded tess factors must be either all odd or all even integers.</p>
<p>This raises the further question: why do we have these symmetry requirements at all? Well, along the
<em>edges</em> of a patch, the reflection symmetry is critical to prevent cracking between patches! Two
adjacent patches will construct their shared edge with opposite vertex orders, so the vertices
generated when tessellating that edge must be invariant under interchanging the endpoints.</p>
<p>For the interior of the patch, the symmetry seems less critical, but I suppose that maintaining the
same symmetry makes it simpler to generate triangles connecting the interior vertices to the edge
vertices. It also supports the earlier-mentioned idea that there’s no canonical vertex order
for patches: different vertex order conventions may effectively rotate or flip the UV space, but if
it’s all symmetric, that doesn’t matter.</p>
<h2 id="further-reading"><a class="link-button" href="http://reedbeta.com/blog/tess-quick-ref/#further-reading"></a>Further Reading</h2>
<ul>
<li><a href="https://archive.org/details/GDC2014Brainerd">Tessellation in Call of Duty: Ghosts by Wade Brainerd (GDC 2014)</a>
shows how to use tessellation to implement Catmull-Clark subdivision surfaces.</li>
<li><a href="https://fgiesen.wordpress.com/2011/09/06/a-trip-through-the-graphics-pipeline-2011-part-12/">Tessellation chapter from Fabian Giesen’s “A trip through the Graphics Pipeline” series</a>
discusses some finer points of the generated mesh topologies and how the shaders execute on the GPU.</li>
<li><a href="https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/gdc12/GDC12_DUDASH_MyTessellationHasCracks.pdf">My Tessellation Has Cracks! by Bryan Dudash</a>
addresses the annoyingly subtle problem of ensuring that your domain shader doesn’t introduce
cracks between adjacent patches.</li>
<li><a href="http://www.cemyuksel.com/courses/conferences/siggraph2010-hair/">SIGGRAPH 2010 course on hair rendering by Cem Yuksel and Sarah Tariq</a>
uses isoline tessellation to increase the density of rendered hairs (see Chapter 2 at the link).</li>
<li><a href="https://www.opengl.org/registry/specs/ARB/tessellation_shader.txt">OpenGL extension spec for tessellation.</a></li>
</ul>
<p>Tessellation is one of those GPU features that doesn’t seem to get much love. It presents many
challenges—authoring issues and performance, as well as the engineering complexity of using it.
Hopefully, some better documentation will help with that last one.</p>little-py-site
http://reedbeta.com/made/little-py-site/
http://reedbeta.com/made/little-py-site/Nathan ReedWed, 12 Oct 2016 11:24:20 -0700http://reedbeta.com/made/little-py-site/#comments<p><a class="biglink" href="https://github.com/Reedbeta/little-py-site/">View on GitHub</a></p>
<p>Welcome back, readers! You may have noticed that the site looks a bit different now. Over the last
few weeks I’ve redesigned the theme, making it more modern and mobile-friendly, and also converted
it from Wordpress to a static site generator, which should make it faster in general as well as
hopefully more resilient to the occasional slashdotting. 😅</p>
<p>I ended up building my own little static site generator in Python, and I’ve
<a href="https://github.com/Reedbeta/little-py-site/">put it up on GitHub</a> in case it’s helpful as a
starting point for anyone else’s efforts.</p>
<!--more-->
<p>There are already lots of static site generators around, of which <a href="https://gohugo.io/">Hugo</a> seems
to be most popular at the moment, so why another one? Well, I tried out Hugo first and it’s a very
nice piece of software—slick, convenient, and fast—but I had a couple of problems with
it. In particular, there doesn’t seem to be a good way to support inline MathJax (LaTeX) code
without needing to escape things like underscores, so that Markdown doesn’t try to interpret them.
Also, Hugo didn’t offer enough flexibility with the directory structure.</p>
<p>I evaluated a couple other static site generators as well, but ultimately decided to build my own
thing for maximum simplicity and flexibility. I used <a href="https://pythonhosted.org/Markdown/">Python-Markdown</a>—an
extensible Markdown implementation that allows me to fix the MathJax issue with a plugin—and
<a href="http://jinja.pocoo.org/">Jinja</a>, a nice and very fast templating engine.</p>Star Trek: TNG Theme Reorchestration
http://reedbeta.com/made/tng-theme/
http://reedbeta.com/made/tng-theme/Nathan ReedMon, 15 Aug 2016 23:14:04 -0700http://reedbeta.com/made/tng-theme/#comments<p><a href="http://reedbeta.com/made/tng-theme/tngtheme.mp3"><img alt="Screenshot of score for TNG theme" src="http://reedbeta.com/made/tng-theme/tngtheme.jpg" title="Screenshot of score for TNG theme" /></a></p>
<p><audio controls ><source src="http://reedbeta.com/made/tng-theme/tngtheme.mp3" /></audio></p>
<p><a class="biglink" href="http://reedbeta.com/made/tng-theme/tngtheme.mp3">Download (2.5 MB)</a></p>
<p>In June 2016, game developer <a href="https://twitter.com/S0phieH">Sophie Houlden</a>
held a month-long <a href="https://itch.io/jam/star-trek-jam">game jam inspired by Star Trek</a>. Although my
initial plan was to actually make a game, after one thing and
another I ended up radically de-scoping and I decided instead to re-arrange the Next
Generation theme music, as an exercise in orchestral writing. Working from a piano score
and my nostalgia for the original, I turned out this take on the classic.</p>
<p>The show’s original version (one of them—there are a few slightly different variants
used in different seasons) can be found <a href="https://www.youtube.com/watch?v=p5kcBxL7-qI">on YouTube here</a>.</p>EEVEE.WAD Doom Map
http://reedbeta.com/made/eevee-wad/
http://reedbeta.com/made/eevee-wad/Nathan ReedThu, 18 Feb 2016 23:38:18 -0800http://reedbeta.com/made/eevee-wad/#comments<p><a href="http://reedbeta.com/made/eevee-wad/eevee.wad.zip"><img alt="Screenshot of EEVEE.WAD" src="http://reedbeta.com/made/eevee-wad/doom-map.jpg" title="Screenshot of EEVEE.WAD" /></a></p>
<p><a class="biglink" href="http://reedbeta.com/made/eevee-wad/eevee.wad.zip">Download (12 MB)</a></p>
<p>Like many people, my first foray into game development was modding. In the early
2000s I spent a lot of time making maps for Doom, and later Half-Life. But I hadn’t
touched it for about ten years, until this winter, when <a href="https://twitter.com/eevee">Eevee</a>
posted a <a href="http://eev.ee/blog/2015/12/19/you-should-make-a-doom-level-part-1/">series of blog articles</a>
on Doom mapping, and I was inspired to take up the editor again. This map was the result.</p>
<p>I spent about a month on this (my initial plan turned out to take a lot longer to
execute than I thought—big surprise), and I’m pretty happy with the result. It was
neat to come back to Doom after this time and see how my perspective had changed. The
tools available today are a lot better than what I remember, and I’m way smarter about
level design than I was ten years ago. Still, by the end of making this, I was starting
to get frustrated with Doom’s limitations, and I’m definitely all mapped out for a while.</p>
<p>I’ve packaged up the map with a copy of the <a href="http://zdoom.org/">ZDoom</a>
engine and the <a href="https://freedoom.github.io/">Freedoom</a> asset pack (since the
original Doom textures, sprites, sounds, etc. are all under copyright and can’t be
redistributed). If you have a copy of Doom 2, drop your doom2.wad file in the directory
and use that; otherwise, you can play it with the Freedoom assets.</p>SIGGRAPH 2015: NVIDIA GameWorks VR
http://reedbeta.com/talks/gameworks-vr/
http://reedbeta.com/talks/gameworks-vr/Nathan ReedMon, 17 Aug 2015 22:46:35 -0700http://reedbeta.com/talks/gameworks-vr/#comments<p><img alt="GameWorks VR logo" src="http://reedbeta.com/talks/gameworks-vr/gameworks-vr.jpeg" title="GameWorks VR logo" /></p>
<p><span class="biglink">Slides: <a href="http://reedbeta.com/talks/gameworks-vr/GameWorks_VR_SIGGRAPH_2015.pptx">pptx, 10.6 MB</a>
or <a href="http://reedbeta.com/talks/gameworks-vr/GameWorks_VR_SIGGRAPH_2015.pdf">pdf, 7.9 MB</a></span>
(both include speaker notes)</p>
<p>GameWorks VR is a suite of technologies I helped to build at NVIDIA in 2015–2016. It’s an SDK for
VR game, engine, and headset developers, aimed at cutting down graphics latency and accelerating
stereo rendering on NVIDIA GPUs. In this talk, I explain the features of this SDK, including VR SLI,
multi-resolution rendering, context priorities, and direct mode.</p>