The Raster Tragedy at Low-Resolution Revisited:
Opportunities and Challenges beyond “Delta-Hinting”

In the preceding chapter we have looked at various anti-aliasing methods. All these anti-aliasing methods have in common that samples are taken at a higher rate than the rate of pixels of the targeted screen. In this chapter we will have a look at how this can be exploited by the constraints of the font scaling mechanism.

Specifically, when we introduced the concept of constraining the scaling mechanism, we discussed workarounds that exaggerated or distorted some features to make others more tolerable. These exaggerations and distortions had to be done one pixel at a time. A stroke was either 1 or 2 pixels wide, with no chance for any intermediate width. Likewise, under- and overshoots were either not rendered at all, or else rendered with 1 pixel each for the under- and overshoot—with dramatic consequences for the proportions of the respective character.

Therefore we will now discuss how the higher sampling rate of anti-aliasing can reduce these distortions in hopes that this makes some of the compromises more tolerable. Notice however that we are not increasing the rate of pixels of the targeted screen; merely the sampling rate. Accordingly, we should not expect the problems to decrease by the same rate by which we increase the sampling rate. Instead, we may see a lateral shift in some of the compromises, from one kind to another, and in turn a change of what we will be expected to tolerate.

In chapter we have discussed that the rate of pixels on today’s screens is way too low (cf ). Equal or substantially equal stems were rendered with an unequal number of pixels (cf ), and parts of the character were not rendered at all (cf ).

Subsequently, we devised workarounds in the form of constraints on the scaling mechanism. Doing so eventually rendered equal stems with equal pixel counts, and it brought back serifs and other parts that unconstrained sampling appeared to “miss.”

To see what anti-aliasing can do before applying any constraints, I’ll repeat the same example, but this time I’ll use full-pixel anti-aliasing with 4x by 4y oversampling for a total of 17 levels of gray:

Unconstrained (“unhinted”) outline rendered with full-pixel anti-aliasing (Times New Roman, lc ‘m,’ 12 pt, 96 DPI, 4x4y oversampling for 17 levels of gray, box filter)
⇒ Hover your mouse over the illustration to see the same outline rendered in bi-level (2 levels of “gray,” aka “black-and-white”)

At first glance it appears that, unlike bi-level rendering, anti-aliasing did not “miss” the serifs, let alone create a “broken” arch. Still, the leftmost stem appears to be a bit “heavier” than the other two stems.

Notice that I emphasized the word appears. The reason I did this is because individual shades of gray are not necessarily rendered the same way by all screens. The same numerical level of gray may appear brighter or darker on your screen than it does on mine. This is one of the previously mentioned lateral shifts of compromises. We will come back to this phenomenon in .

Therefore, to get a definitive answer to the question of stem weights, I’ll repeat the above illustration. This time, instead of showing the (final) pixels, I’ll show the (intermediate) samples.

The same outline as above, shown with pixel and sample boundaries as used for “gray-scaling:” The “in” samples are marked with a solid dot
⇒ Hover your mouse over the illustration to see the samples downsampled (“anti-aliased”) to the corresponding levels of gray. Notice how the level of gray (“darkness” of the pixel) is proportional to the number of “in” samples (pixel coverage)
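In code, the downsampling step just illustrated amounts to counting the “in” samples under each pixel and mapping that count to one of 17 levels of gray. Here is a minimal sketch of that idea; the vertical stem used as the “outline” is a made-up stand-in, and the sample positions assume centers in the middle of each sample cell:

```python
# Minimal sketch of box-filter downsampling for 4x4y full-pixel anti-aliasing:
# 16 samples per pixel, hence 17 possible levels of gray (0/16 ... 16/16).
# The "outline" below is a hypothetical vertical stem used purely for illustration.

def inside(x, y):
    return 0.25 <= x < 1.5              # hypothetical stem, 1 1/4 pixels wide

def pixel_coverage(px, py, rate=4):
    """Fraction of 'in' samples for the pixel in column px, row py."""
    hits = 0
    for i in range(rate):
        for j in range(rate):
            sx = px + (i + 0.5) / rate  # sample centers sit in the middle
            sy = py + (j + 0.5) / rate  # of each sample cell
            hits += inside(sx, sy)
    return hits / (rate * rate)         # 0.0 = white ... 1.0 = black

for px in range(3):
    print(px, pixel_coverage(px, 0))    # 0.75, 0.5, 0.0
```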

The sample counts confirm that, indeed, the leftmost stem was sampled with 6 samples, while the other two stems were sampled with 5 samples.

What it does not show is that the leftmost and the middle stem were designed with the exact same width, while the rightmost stem appears to have been designed ever so slightly heavier than the other two. Yet the rightmost stem is sampled with the same sample count as the middle one. Following is a table that summarizes the weights in design space units, sample space units (cf ), and actual sample counts once sampled.

Stem →                                        Left Stem        Middle Stem      Right Stem
↓ Numerical Value
Edge-to-Edge Distance in Design Space Units   165              165              166
Edge-to-Edge Distance in Sample Space Units   5 5/32 (≅5.16)   5 5/32 (≅5.16)   5 3/16 (≅5.19)
Number of Samples included between Edges      6                5                5

Numerical values taken from the same outline as above, representing the stroke weights of the lc ‘m,’ the numerical values taken in design space units, scaled to device space units (12 pt, 96 DPI), and as sampled (top to bottom)
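To make the scaling explicit, here is a minimal sketch of how a design-space distance ends up in sample space. It assumes 2048 design units per em, which is typical for TrueType fonts, and 12 pt at 96 DPI, i.e. 16 pixels per em, with 4× oversampling in x:

```python
from fractions import Fraction

UPM = 2048                              # assumed design units per em
PPEM = Fraction(12 * 96, 72)            # 12 pt at 96 DPI -> 16 pixels per em
RATE = 4                                # 4x oversampling in x

def to_samples(design_units):
    pixels = Fraction(design_units, UPM) * PPEM   # device space (pixels)
    return pixels * RATE                          # sample space (samples)

for units in (165, 165, 166):
    s = to_samples(units)
    print(units, s, float(s))           # 165 -> 165/32 ≅ 5.16, 166 -> 83/16 ≅ 5.19
```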

Together with the preceding illustration, the above table shows that, even with oversampling and subsequent anti-aliasing, like strokes don’t necessarily sample to like sample counts. In fact, the slightly larger right stem got rendered with fewer samples than the slightly smaller left stem. This will have to be addressed in the constraining mechanism.

Also, while it doesn’t “miss” any of the serifs or “break” any of the thin arches, it renders the respective features in more or less light shades of gray. This may be an opportunity for the serifs to become less exaggerated at small sizes, while it remains a challenge for thin arches and other “structural” elements not to “fall apart.” We will come back to this in and .

Let’s see if asymmetric sub-pixel anti-aliasing fares any better, again before applying any constraints.

The same unconstrained (“unhinted”) outline as above, rendered with asymmetric sub-pixel anti-aliasing (“ClearType”)
⇒ Hover your mouse over the illustration to see how this compares to full-pixel anti-aliasing (“gray-scaling”)

At first glance this looks like someone got to try out the latest “Electric FX” Crayola set or similar. None of the three stems are rendered with the same colors. It seems anything but intuitive how this should reduce to a 12 pt lc ‘m’ on a 96 DPI flat panel display.

Now, whatever your color perception may be, it is important to understand that colors are not chosen for artistic considerations at this stage of the rendering process. There is no art involved in deciding that e.g. the right edge of the middle stem should be rendered by a column of pixels in a washed out aqua (or cyan). Rather, it is the result of muting the effect of color fringes to a tolerable level. Strictly by numbers, it is the outcome of applying the respective downsampling filter as introduced in .
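For the curious, here is a rough sketch of what such a filter does: it computes a coverage value per sub-pixel and then spreads (“mutes”) each coverage across its neighbors before turning the result into R, G, B values. The 5-tap weights (1, 2, 3, 2, 1)/9 and the 6× rate are illustrative assumptions, not necessarily the exact numbers used by ClearType or any other implementation:

```python
# Sketch of sub-pixel downsampling with a 5-tap "fringe muting" filter.
# Assumptions: 6x oversampling in x (2 samples per sub-pixel) and tap weights
# (1,2,3,2,1)/9; real implementations may differ in both respects.

def subpixel_downsample(samples, rate=6):
    """samples: one scanline of 0/1 coverage samples, `rate` per full pixel.
    Returns (R, G, B) triples, 255 = white ... 0 = black (black text on white)."""
    per_sub = rate // 3                                  # samples per sub-pixel
    coverage = [sum(samples[i:i + per_sub]) / per_sub    # coverage per sub-pixel
                for i in range(0, len(samples), per_sub)]
    taps = (1, 2, 3, 2, 1)
    pixels = []
    for p in range(len(coverage) // 3):
        rgb = []
        for c in range(3):                               # R, G, B sub-pixels
            s = p * 3 + c
            acc = sum(w * coverage[s + k - 2]
                      for k, w in enumerate(taps)
                      if 0 <= s + k - 2 < len(coverage))
            rgb.append(round(255 * (1 - acc / sum(taps))))
        pixels.append(tuple(rgb))
    return pixels

# A 1 pixel wide stroke in the middle of 3 pixels: the center pixel comes out
# dark but not black, and its neighbors pick up faint color fringes.
print(subpixel_downsample([0] * 6 + [1] * 6 + [0] * 6))
```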

Notice, however, that this very example illustrates the appearance of “bleeding” of ink into neighboring pixels. The aforementioned column of washed out aqua renders pixels that are substantially outside the confines of the outline, yet they are not rendered white like the rest of the background. This is one of the previously mentioned lateral shifts of compromises. In the case of sub-pixel anti-aliasing, and unlike full-pixel anti-aliasing, this is unavoidable.

Notice also that much like with full-pixel anti-aliasing, it is difficult to tell if any of the three stems is “heavier” than the other two. In fact, owing to the use of colors, it may be even more difficult than in the case of monochromatic anti-aliasing. By using colors, we are facing the possibility that shades of each of the three primary colors individually may look brighter or darker on your screen than on mine.

Moreover, the fact that colors are used at all may be perceived differently by different viewers. This represents another one of the previously mentioned lateral shifts of compromises.

Notice, last but not least, the rendition of the serifs, irrespective of the use of colors. The serifs appear all but absent. The reason for this is that unlike full-pixel anti-aliasing or hybrid sub-pixel anti-aliasing, asymmetric sub-pixel anti-aliasing does not oversample in y-direction. To see this, let’s again have a look at the (intermediate) samples, instead of the (final) pixels:

The same outline as above, shown with pixel and sample boundaries as used for downsampling to “ClearType:” The “in” samples are again marked with a solid dot
⇒ Hover your mouse over the illustration to see the samples downsampled to “ClearType.”

Seen this way, this shows that all three stems were sampled with 8 samples. Rest assured, though, that this is by chance, not by concept.

Notice the few “in” samples along the serifs at the bottom of the ‘m.’ For most of the length of all these serifs, the samples are “out.” This shows how come we don’t see much of the serifs once rendered by asymmetric sub-pixel anti-aliasing.

Notice also that the arches are “broken.” It is a side-effect of the “bleeding” of the ink that they may still be perceived as contiguous. However, as the serifs show, the “bleeding” only goes so far. Hence, as in the case with full-pixel anti-aliasing, all of the above shortcomings will have to be addressed in the constraining mechanism.

Finally, let’s have a look at how hybrid sub-pixel anti-aliasing fares, again before applying any constraints.

The same unconstrained (“unhinted”) outline as above, rendered with hybrid sub-pixel anti-aliasing (“Windows Presentation Foundation”)
⇒ Hover your mouse over the illustration to see how this compares to asymmetric sub-pixel anti-aliasing (plain “Windows XP ClearType”)

At first glance, this may not look all that different. The serifs seem back—sort of—and it looks as if the color palette had been augmented by some shades of pastel, especially at the top of the arches. At the same time, none of the pixels appear to be rendered in a solid black anymore. This is unlike any of the previously illustrated rendering methods—at least at the type size and screen resolution (DPI) used for the illustrations.

Since hybrid sub-pixel anti-aliasing represents a combination of sub-pixel anti-aliasing (in x-direction) with full-pixel anti-aliasing (in y-direction), it may be illuminating to see how (plain) full-pixel anti-aliasing (plain “gray-scaling”) performs compared to hybrid sub-pixel anti-aliasing:

The same outline as above, rendered once more with full-pixel anti-aliasing (Windows’ “Standard Font Smoothing”)
⇒ Hover your mouse over the illustration to see how this compares to hybrid sub-pixel anti-aliasing (“Windows Presentation Foundation”)

Aside from different oversampling rates used in the process (that differ merely for historical reasons), aside from chance effects that do or do not render substantially equal stems of the unconstrained outline with equal sample counts, and aside from the presence or absence of color, the two rendering methods do not appear to differ from one another as dramatically as e.g. full-pixel anti-aliasing does from bi-level rendering.

So, what can we conclude from all the above illustrations? Anti-aliasing seems to ease the problem of rendering like features with like sample counts, but the “lack-of-scalability” problem does not go away. There is still a fair chance for like features to render with unlike sample counts. Once downsampled to pixels, unlike sample counts are merely less obvious to notice than without anti-aliasing.

Likewise, there is a fair chance that oversampling will seem to “miss a spot.” This problem doesn’t go away either, but again it becomes less noticeable than without anti-aliasing. Therefore we will discuss ways to exploit the properties of these anti-aliasing filters next.

In this section we will discuss a fundamental dilemma that comes with the territory of anti-aliasing. For sufficiently small type sizes and device resolutions, anti-aliasing can render strokes with more faithful stroke weights and stroke positions, but at the expense of the perceived contrast and “sharpness” of the rendering process—or vice-versa. Here is why:

One of the “cruelties” of bi-level rendering on low-resolution screens is having to decide for strokes that are either 1 or 2 pixels wide. For some sizes, 1 seems too thin while 2 appears too heavy. The corresponding page color turns out too light or too dark. The best we can do is to “pick the lesser evil.”

By contrast, in anti-aliased rendering there will be several samples to the pixel. Accordingly, we should be able to select several intermediate or fractional stroke weights. For instance, for full pixel anti-aliasing with 4× oversampling, there are 3 intermediate fractional stroke weights as tabulated below.

1 pixel | 1 1/4 pixel | 1 1/2 pixel | 1 3/4 pixel | 2 pixel

5 fractional stroke weights, rendered in 4x4y full-pixel anti-aliasing: Both the leftmost and the rightmost stroke render identically to bi-level rendering, with 1 and 2 pixels respectively, while the 3 strokes in-between attempt to render intermediate stroke weights by adding “increasing amounts of gray” to the right of said strokes.

For the leftmost illustration above, I started with a 1 pixel wide stroke, as would be rendered in bi-level rendering. For the subsequent illustrations I added more weight to the right of the original stroke, in increments of 1/4 of a pixel, until I reached a weight of 2 pixels. This should ease the impossible but unavoidable decision to make a stroke either 1 or 2 pixels wide.

In this instance, I chose to add weight to the right, as opposed to the left, for no particular reason. I could have added the weight to the left with substantially equal results, as illustrated below.

1 pixel | 1 1/4 pixel | 1 1/2 pixel | 1 3/4 pixel | 2 pixel

5 fractional stroke weights, rendered in 4x4y full-pixel anti-aliasing as above, except that the 3 intermediate strokes attempt to render fractional weights by adding “increasing amounts of gray” to the left of said strokes.

In either set of illustrations I carefully placed the strokes’ edges on sample boundaries. Since the above illustrations use 4 samples to the pixel in x-direction, this sets up the strokes’ weights in increments of 1/4 of a pixel.

Notice that it doesn’t make sense to put any of the strokes’ edges on any position other than a sample boundary. We wouldn’t get different (or more) shades of gray even if we did. Accordingly, for 2× oversampling the edges are placed on the nearest 1/2 of a pixel, for 4× on the nearest 1/4 of a pixel, and for 8× on the nearest 1/8 of a pixel. In each instance the fractional pixel increment corresponds to the respective oversampling rate (cf also ).
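Expressed as a constraint, snapping an edge to the sample grid is just a rounding operation whose granularity equals the oversampling rate; a minimal sketch:

```python
# Sketch: quantize an edge coordinate (in pixels) to the nearest sample boundary.
# The granularity is 1/rate of a pixel, i.e. 1/2, 1/4, or 1/8 for 2x, 4x, or 8x.

def snap_to_sample_boundary(coord, rate=4):
    return round(coord * rate) / rate

print(snap_to_sample_boundary(1.37, rate=2))   # -> 1.5   (nearest 1/2 pixel)
print(snap_to_sample_boundary(1.37, rate=4))   # -> 1.25  (nearest 1/4 pixel)
print(snap_to_sample_boundary(1.37, rate=8))   # -> 1.375 (nearest 1/8 pixel)
```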

Not only does it not make sense; careless edge placement can cause actual harm. If we were to put the edge “dead-on” the sample centers, this would cause an extra column of samples to be “in.” This kind of “chance effect” is much like “unhinted” bi-level rendering, where stroke edges that haphazardly “fall” on the pixel grid can cause one stem to render 1 pixel wide while its neighbor gets 2 pixels.

As we have seen early in the previous section, these “chance effects” carry over to (all forms of) anti-aliasing, except that in anti-aliasing they cause an extra column (or row) of samples instead of pixels. In turn, e.g. in full-pixel anti-aliasing, the extra samples translate to a darker shade of gray, as illustrated below.

1 23/64 (≅1.359) pixel wide (“just-off”) | 1 3/8 (1.375) pixel wide (“dead-on”)

2 fractional stroke weights, rendered in 4x4y full-pixel anti-aliasing as above, with the 2 stroke weights differing by 1/64 of a pixel. The right edge of the left stroke is “just off” the sample centers while the right edge of the right stroke is “dead-on.”

The edges on the right of the above strokes differ by a mere 1/64 of a pixel, which nevertheless “translates” to a 25% increase in portrayed pixel coverage. While this is not quite as “blatant” as an increase from a 1 to a 2 pixel wide stroke, it is likely not as intended, either, and hence my careful placement of the strokes’ edges on sample boundaries.
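The “just-off” vs “dead-on” difference can be reproduced by counting samples, as in the sketch below. Note that the rule that a sample center lying exactly on an edge counts as “in” is an assumption made for this illustration; actual rasterizers have their own tie-breaking conventions:

```python
# Sketch of the "dead-on" chance effect: two strokes whose right edges differ
# by only 1/64 of a pixel, rasterized with 4x sampling in x. The convention
# that a sample center exactly on an edge counts as "in" is an assumption.

def row_coverages(left, right, rate=4, pixels=2):
    cov = []
    for px in range(pixels):
        centers = [px + (i + 0.5) / rate for i in range(rate)]
        cov.append(sum(left <= c <= right for c in centers) / rate)
    return cov

print(row_coverages(0.0, 1 + 23/64))   # [1.0, 0.25]  "just off"  -> 25% gray
print(row_coverages(0.0, 1 + 3/8))     # [1.0, 0.5]   "dead-on"   -> 50% gray
```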

To repeat the above exercise with the colors of sub-pixel anti-aliasing, we need to know the respective oversampling rate. While I do know said rate for ClearType, I do not know it for CoolType, nor for Quartz. Therefore I’ll illustrate how I would determine it if I didn’t know it, not even for ClearType.

For my illustration, I started out from a 1 pixel wide stroke, like in the preceding examples. But instead of adding weight in increments of 1/4 of a pixel, I used 1/64 of a pixel. This is the smallest increment by which two coordinates can differ. Since I can’t be any more precise than that, I won’t “miss” any intermediate steps at an even finer grain of precision.

Accordingly, as I am adding weight to the stroke, all I’ll have to do is watch what happens to the color pixels and count the discrete number of patterns. Literally—because I am not going to tabulate all 63 intermediate fractional stroke weights individually. Instead, I compiled sort of a “time-lapse” movie of my experiment, which you can watch by hovering your mouse over the illustration below.

Determining the oversampling rate of ClearType:
⇒ Hover your mouse over the illustration to see how to determine ClearType’s oversampling rate used in TrueType environments.

Beyond the initial frame of the “time-lapse” movie above, I count 6 new pixel (or color) patterns. Therefore, the oversampling rate of ClearType must be 6× (in x-direction).
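The experiment itself is easy to automate: grow the stroke in 1/64-pixel steps, rasterize, and count the distinct patterns. The sketch below fakes the rasterizer with the simple full-pixel coverage model from earlier (4× in x), so it reports 4; pointed at an actual ClearType rasterizer, the same counting would report 6:

```python
# Sketch of the "time-lapse" experiment: grow a stroke from 1 to 2 pixels in
# 1/64-pixel steps and count the distinct rendered patterns. The stand-in
# rasterizer below uses simple 4x coverage sampling, so the count comes out 4.

def render_scanline(width, rate=4, pixels=3):
    return tuple(sum(px + (i + 0.5) / rate < width for i in range(rate))
                 for px in range(pixels))

patterns = []
for step in range(65):                    # widths 1, 1 1/64, ... 2 pixels
    p = render_scanline(1 + step / 64)
    if not patterns or p != patterns[-1]:
        patterns.append(p)

print(len(patterns) - 1)                  # new patterns beyond the first -> 4
```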

This means that it is sufficient to use increments of 1/6 of a pixel to tabulate all intermediate fractional stroke weights.

1 pixel | 1 1/6 pixel | 1 1/3 pixel | 1 1/2 pixel | 1 2/3 pixel | 1 5/6 pixel | 2 pixel

7 fractional stroke weights, rendered in 6x1y asymmetric sub-pixel anti-aliasing (“ClearType”): The left- and rightmost strokes have their edges aligned with pixel boundaries while the other strokes attempt to “add weight” on the right.

As previously mentioned, there is no artistry involved in selecting individual colors. It is all “happening” strictly by numbers, in an effort to “smudge the line” between black and white.

Notice, however, that none of the above pixels appears to be a solid black. Yet in sub-pixel anti-aliasing, all the above examples are considered the rendition of black text against a white background. We will come back to this shortly, and again in .

Unlike full-pixel anti-aliasing, adding weight to the left, instead of the right, does not yield the same results in sub-pixel anti-aliasing.

1 pixel | 1 1/6 pixel | 1 1/3 pixel | 1 1/2 pixel | 1 2/3 pixel | 1 5/6 pixel | 2 pixel

7 fractional stroke weights, rendered in 6x1y asymmetric sub-pixel anti-aliasing (“ClearType”) as above, except that the intermediate strokes attempt to “add weight” on the left.

Per se, the difference between adding weight on the right vs the left is neither bad nor good. It is simply a fact that eventually may have to be taken into account (cf ). Notice though that like with the previous illustration, none of the above pixels appears to be rendered in a solid black.

Another one of the “cruelties” of bi-level rendering on low-resolution screens is having to position strokes on full pixel boundaries. One position may make the character look too narrow, the other one too wide. One position puts the crossbar too low, the other one too high. Once more we are forced to “pick the lesser evil.”

By contrast, anti-aliasing should enable us to select an intermediate or fractional stroke position, much like selecting a fractional stroke width. For instance, for 4x4y oversampling full pixel anti-aliasing, there are 3 intermediate fractional stroke positions as tabulated below for a 1 pixel wide stroke.

+0 pixel | +1/4 pixel | +1/2 pixel | +3/4 pixel | +1 pixel

5 fractional positions for a 1 pixel wide stroke, rendered in 4x4y full-pixel anti-aliasing. Each combination of fractional stroke weight and position nominally renders a 1 pixel wide black stroke against a white background (!)

All 5 of the above examples are exactly 4 samples wide and are positioned on exact sample boundaries. But the 3 intermediate stroke positions seem to make the strokes look different. They are rendered in combinations of shades of gray without any solid black. Yet in anti-aliasing, all 5 of the above examples are considered a rendition of black text against a white background. We will get back to this shortly.

Let’s repeat the above exercise with the colors of sub-pixel anti-aliasing. As we have established previously, ClearType uses an oversampling rate of 6x, hence there are 5 intermediate fractional stroke positions as tabulated below, again for a 1 pixel wide stroke.

+0 pixel | +1/6 pixel | +1/3 pixel | +1/2 pixel | +2/3 pixel | +5/6 pixel | +1 pixel

7 fractional positions for a 1 pixel wide stroke, rendered in 6x1y asymmetric sub-pixel anti-aliasing (“ClearType”). Each combination of fractional stroke weight and position nominally renders a 1 pixel wide black stroke against a white background (!)

Again, all 7 of the above examples are exactly 6 samples wide and positioned on exact sample boundaries—or at least as “exact” as one may get at representing 1/6 of a pixel in terms of 1/64 of a pixel. Not surprisingly anymore, they all look different, and—you guessed it—they are all considered a rendition of black text against a white background even though there are no black pixels to be seen anywhere at all.

How can it be that the above examples are considered black text against a white background when there are no black pixels involved whatsoever? Strictly speaking, this isn’t black text. What we see here are examples of a limitation of the various forms of anti-aliasing, namely their limited ability to render contrast on sufficiently small parts of characters.

Informally, I’ll define rendering contrast as the ability to produce solid black when displaying black text against a white background. More formally, we could define contrast in terms of a ratio of brightnesses (contrast ratio), which depends on the capabilities of the individual screen and the ambient light. But this doesn’t make it any easier to get an intuitive understanding of the problem, hence the informal definition for now (but cf ).
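For reference, the formal notion alluded to above is typically expressed as a ratio of luminances; a common form (an aside, not the working definition used in this text) is:

```latex
% Contrast ratio of black text on a white background, with L the luminance
% actually emitted by the screen under the ambient light:
CR = \frac{L_{\mathrm{background}}}{L_{\mathrm{text}}}, \qquad CR \ge 1
```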

For bi-level rendering, it is easy to see what produces a black pixel. Just turn the pixel “off.” Full-pixel anti-aliasing with a (“non-bleeding”) box filter produces a black pixel if the entire pixel is “in.” As soon as a part of the pixel is “out,” we will get gray, as illustrated below.

−1/4 pixel | ±0 pixel | +1/4 pixel

A 1 pixel wide stroke, rendered in 4x4y full-pixel anti-aliasing, and placed at different offsets relative to the actual pixel boundary

In other words, if we render a 1 pixel wide stroke in full pixel anti-aliasing, and if we position said stroke on a pixel boundary, like we would for a bi-level stroke, then we get black. This is not any better than bi-level rendering, but not any worse either.

The situation improves as the strokes get wider. To illustrate this point, the table below shows a range of fractional stroke widths (from 1 to 2 pixels in increments of 1/4 of a pixel). Each stroke is shown at a range of fractional stroke positions (from −1/2 to +1/2 in increments of 1/4 of a pixel).

Stroke weights (top to bottom): 1 pixel | 1 1/4 pixel | 1 1/2 pixel | 1 3/4 pixel | 2 pixel
Fractional positions (left to right): −1/2 pixel | −1/4 pixel | ±0 pixel | +1/4 pixel | +1/2 pixel

A range of strokes, rendered in 4x4y full pixel anti-aliasing with increasing fractional stroke weights (top-to-bottom) and at different fractional positions (left-to-right). Observe which combinations of fractional stroke weight and position yield a “core” of black pixels.

This illustration shows that once a stroke is 1 1/4 pixel wide, there are 2 adjacent stroke positions that produce a “core” of black pixels. For a width of 1 1/2 pixel, there are 3 positions. Finally, once the width reaches 1 3/4 pixel, all 4 fractional stroke positions will produce a “black core.”

In other words, to obtain a “black core” or to render maximum contrast without compromising fractional positioning, a stroke must be at least 1 3/4 pixel wide. Conversely, if the stroke is less than 1 3/4 pixel wide, we will have to forfeit either its positioning or its contrast. In practical terms, we will have to trade the proportions or spacing of a character for its contrast, or vice-versa.
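This relationship is mechanical enough to check in a few lines of code. The sketch below tests, for each fractional stroke weight, which fractional positions leave at least one pixel with all of its samples “in” (using the same half-open fill rule assumption as before):

```python
# Sketch: which fractional positions give a given stroke weight a "black core"
# (at least one fully covered pixel) under 4x sampling in x? The half-open
# fill rule (left <= center < right) is an assumption for this illustration.
RATE = 4

def has_black_core(width, offset, pixels=4):
    left, right = 1 + offset, 1 + offset + width           # stroke edges, in pixels
    for px in range(pixels):
        centers = [px + (i + 0.5) / RATE for i in range(RATE)]
        if all(left <= c < right for c in centers):         # pixel fully covered
            return True
    return False

for width in (1, 1.25, 1.5, 1.75, 2):
    ok = [k / RATE for k in range(RATE) if has_black_core(width, k / RATE)]
    print(width, ok)     # 1 position, then 2, then 3, and finally all 4 positions
```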

Somehow this sounds familiar: It is related to . Remember the “solution” or workaround to making equal strokes render with equal pixel counts? The weight of a stroke, along with one of its edges, was prioritized, while the other edge of the stroke had to “give way” to make things “add up.”

It is not much different here. Depending on the type size and screen DPI, we will have to prioritize how “badly” we want maximum contrast or how “badly” we want highest fidelity in proportions and spacing, but we may not get both. And whichever workaround we choose may not work equally well in all contexts.

Let’s see if asymmetric sub-pixel anti-aliasing fares any better. Recall that a 1 pixel wide stroke, even if positioned on a pixel boundary, like we would do for a bi-level stroke, will not produce a “black core.” This is a direct consequence of having to “smudge the line” between black and white. With the “smudging” it takes a wider minimum stroke width to produce black, as illustrated below.

1 pixel @ 0 | 1 1/6 pixel @ 0 | 1 1/3 pixel @ −1/6 | 1 1/2 pixel @ −1/6 | 1 2/3 pixel @ −1/3 | 1 5/6 pixel @ −1/3 | 2 pixel @ −1/2

A stroke, rendered in 6x1y asymmetric sub-pixel anti-aliasing (“ClearType”) at increasing weights, the weight added alternatingly to the right and to the left, to determine at what point this produces a “core” of black pixels.

It takes a 1 2/3 pixel wide stroke that happens to be centered on an actual pixel to render a “black core.” Compared to full pixel anti-aliasing it appears that sub-pixel anti-aliasing is not off to a good start.

But the keyword here is sub-pixel. Hence let’s have a look at the individual sub-pixels of a 1 2/3 pixel wide stroke centered on a full pixel (be sure to hover your mouse over the illustration below to see the actual sub-pixels).

A 1 2/3 pixel wide stroke, rendered in 6x1y asymmetric sub-pixel anti-aliasing (“ClearType”), and centered on a full pixel.
⇒ Hover your mouse over the illustration to see the individual sub-pixels.

Notice how “smudging the line” appears to affect only the sub-pixels immediately adjacent to that “smudged line” (i.e. the edge of the stroke)? For instance, the leftmost sub-pixel of the above stroke is a dark shade of blue—neither black nor the primary color blue. Likewise, the rightmost sub-pixel is a dark shade of red.

In-between, there are 3 solid black sub-pixels. If we positioned the stroke to the left or to the right by 1/3 of a pixel, we would get different colors along the edge, but we would still get 3 solid black sub-pixels in-between.

−1/3 pixel | ±0 pixel | +1/3 pixel

Individual sub-pixels of a 1 2/3 pixel wide stroke, rendered in 6x1y asymmetric sub-pixel anti-aliasing (“ClearType”), and positioned at different offsets relative to the actual pixel boundary. Notice the relative position of a triplet of solid black sub-pixels (“black core”) in each of the 3 illustrations above.
⇒ Hover your mouse over the illustration to see how this renders in terms of pixels. Seen at this magnification, only the middle illustration appears to render solid black pixels.

In other words, a 1 2/3 pixel wide stroke does not have to be centered on a pixel to render the equivalent of a “black core.” In theory, any increment of 1/3 pixel should do.

But what if we wanted to position a stroke at any increment of 1/6 pixel, corresponding to the oversampling rate of ClearType? That’s elementary: just use a stroke weight of at least 1 5/6 pixel. To see this take any of the 3 strokes shown in the previous illustration and add 1/6 pixel of weight on the left or on the right. This does not decrease the 3 solid black sub-pixels in-between the “smudged edges,” but it does increase the number of acceptable positions.

What have we learned? The combination of rendering method (cf chapter ), stroke weight, and stroke position determines the rendered stroke contrast. We get to choose the rendering method, the stroke weight, and the stroke position.

But we don’t get to choose the rendered contrast. If we want to achieve a particular contrast, we may have to exclude some of the choices. Owing to my background in physics, I’ll call this the Anti-Aliasing “Exclusion Principle.”

For instance, we may choose a particular rendering method and an optimal stroke weight, but for a targeted rendering contrast this may compromise where we can position the strokes. In turn, this may compromise the proportions and the advance width of a character.

Conversely, if we were to choose the stroke positions to render the most faithful proportions and advance width, we may have to compromise the rendering contrast. We will come back to a special case of these choices in .

To get a rough idea of the degree to which the “exclusion principle” may be relevant in practice, I have tried to come up with a “ballpark” figure to illustrate the point of threshold. Specifically, I have taken a “typical” text font and calculated at which ppem or point size a “typical” vertical stem may be positioned at any sample boundary and produce a “black core.” Following are the results for Arial:

Condition                                      Bi-Level       Full-Pixel       Sub-Pixel
                                               Rendering      Anti-Aliasing    Anti-Aliasing
Minimum Stroke Width in Pixels for
“Black Core” at all Stroke Positions           1              1 3/4 (1.75)     1 5/6 (≅1.83)
Corresponding Ppem Size for Arial
Uppercase…lowercase                            11             18…20            19…21
Corresponding Point Size @ 96 DPI
Uppercase…lowercase                            8.0…8.5        14…15            15…16
Corresponding Point Size @ 120 DPI
Uppercase…lowercase                            6.5…7.0        11…12            12…13
Corresponding Point Size @ 144 DPI
Uppercase…lowercase                            5.5            9.0…10.0         9.5…10.5

Minimum stroke widths (in pixels) to render Arial’s stems with a “black core” (top row), followed by the corresponding ppem and point sizes for a range of current screen resolutions

The above table illustrates that in order to render Arial with a “black core” we need at least 11 ppem for bi-level rendering, 18 to 20 ppem for full-pixel anti-aliasing (4x4y oversampling “box filter”), and 19 to 21 ppem for sub-pixel anti-aliasing (6x1y oversampling ClearType filter). For a comfortable type size of 11 point, this target is met easily by bi-level rendering at 96 DPI, just about by full-pixel anti-aliasing at 120 DPI, but needs 144 DPI for sub-pixel anti-aliasing.
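The ppem thresholds in the table follow from a simple proportion: the smallest ppem at which the stem’s design width scales to the required minimum pixel width. The sketch below shows the arithmetic; the stem values are rough placeholders in the right ballpark for an Arial-like design (2048 units per em), not measurements taken from the actual font:

```python
# Sketch: estimating the ppem at which a stem first reaches the minimum width
# required for a "black core" at all fractional positions. The stem widths are
# hypothetical placeholders, not values measured from Arial.
import math

UPM = 2048
STEMS = {"uppercase": 200, "lowercase": 180}                  # design units (assumed)
MIN_WIDTH = {"bi-level": 1, "full-pixel AA": 1.75, "sub-pixel AA": 11 / 6}

for method, min_px in MIN_WIDTH.items():
    for case, stem in STEMS.items():
        ppem = math.ceil(min_px * UPM / stem)                 # smallest sufficient ppem
        print(f"{method:14} {case:9} {ppem:3d} ppem = {ppem * 72 / 96:.2f} pt @ 96 DPI")
```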

The above minima should not be taken religiously, like some kind of canonical criterion, but merely as a guideline to illustrate the trade-offs. Factors that influence the threshold are the individual font, the particular anti-aliasing filter (“bleeding” or not, oversampling rate), and what is tolerable as “enough contrast.” It’s a sliding scale. Only the font makers or their customers will know the “sweet spot.”

In the previous section we have looked at the fundamentals of using anti-aliasing for rendering straight strokes. With straight strokes it is easiest to see any trade-off between intermediate stroke weights and positions, and perceived rendering contrast. In this section we will expand on this knowledge by looking at anti-aliasing as an opportunity to “smooth the jaggies” or to “deemphasize” individual features that bi-level rendering would exaggerate.

Diagonal and italic strokes are prime candidates for “font smoothing.” To see how anti-aliasing can help before applying any constraints, I’ll use full-pixel anti-aliasing with 4x4y oversampling on an italic stroke.

Unconstrained (“unhinted”) outline rendered with full-pixel anti-aliasing (Arial Italic, UC ‘I,’ 12 pt, 96 DPI, 4x4y oversampling for 17 levels of gray, box filter)
⇒ Hover your mouse over the illustration to see the same outline rendered in bi-level (2 levels of “gray,” aka “black-and-white”)

Compared to bi-level rendering we can see how anti-aliasing renders with a more refined notion of pixel coverage. Pixels that are partially “in” are rendered with a level of gray commensurate with their partial coverage. The stroke weight, which happens to be 1 1/2 pixel, appears to be rendered much more faithfully than with bi-level rendering.

Compared to straight strokes rendered with full-pixel anti-aliasing it does not appear as if there is a contiguous “black core.” For instance, in the above illustration only 7 out of 27 pixels are rendered in a solid black, and the fragments of “black cores” are staggered. Granted, it is nowhere near as bad as the jagged rendition in bi-level rendering, but the staggering is still present to some degree.

To see whether it is possible to improve on this with appropriate constraints, the table below again illustrates a range of fractional stroke widths at a range of fractional stroke positions.

Stroke weights (top to bottom): 1 pixel | 1 1/4 pixel | 1 1/2 pixel | 1 3/4 pixel | 2 pixel
Fractional positions (left to right): −1/2 pixel | −1/4 pixel | ±0 pixel | +1/4 pixel | +1/2 pixel

A range of italic strokes, rendered in 4x4y full pixel anti-aliasing with increasing fractional stroke weights (top-to-bottom) and at different fractional positions (left-to-right).
⇒ Hover your mouse over any of the above combinations of fractional stroke weight and position to see which pixels are actually rendered in a solid black (100% “gray”).

Not surprisingly, it is again impossible to achieve a “black core” at every intermediate stroke position unless the stroke width meets a certain minimum. Moreover, below that minimum there appears to be no intermediate position that produces a contiguous “black core”—not even a staggered one. This may cause a diagonal or italic stroke to look irregular ("wavy"), particularly with inappropriate gamma correction (cf ).

It appears that rendering contrast cannot be optimized below a minimal stroke width. But this does not mean that stroke positioning is inconsequential. It may be preferable for a diagonal or italic stroke to start and end as “contrasty” as possible. Refer to the above table for various patterns.

Moreover, diagonals that happen to be at an angle of ±45° risk having their edges run “dead-on” through the centers of diagonally adjacent samples. In this situation, positioning variations of ±1/64 of a pixel can make a noticeable difference, even with 4x4y oversampling. This is illustrated below.

−1/64 pixel in x, +1/64 pixel in y | “centered” | +1/64 pixel in x, −1/64 pixel in y

A diagonal stroke at an angle of 45°, rendered in 4x4y full pixel anti-aliasing, and placed at minimally different positions.

Such diagonals may occur in fonts that have been previously “hinted” for bi-level rendering. In a way, their rendition is not unlike that of vertical (or horizontal) strokes, as illustrated below.

−1/4 px x, +1/4 px y | −1/8 px x, +1/8 px y | “centered” | +1/8 px x, −1/8 px y | +1/4 px x, −1/4 px y

A diagonal stroke at an angle of 45°, rendered in 4x4y full pixel anti-aliasing, and placed at positions differing by increments of 1/8 of a pixel, in both x- and y-direction.

To see the similarity with vertical (or horizontal) strokes, imagine sweeping the above 45° diagonal across the “pixel plane” from the “northwest” to the “southeast,” as opposed to left-to-right as shown in .

Out of the various fractional diagonal stroke positions, the one that is “centered” has the darkest “core,” while the ones that are the farthest “off-center” appear in a fairly uniform gray. Notice, however, that while the “centered” diagonal stroke has the darkest pixels, none of the above pixels are rendered in a solid black. They may be close, at least before gamma correction (cf ), but they are not black, because they don’t represent 100% pixel coverage.

As an aside, while Windows now uses 4x4y oversampling, the earliest implementation of “font smoothing” used 2x2y. This meant that a 1 pixel wide 45° diagonal, “centered” as above, did produce solid black pixels. 2x2y oversampling yields a less accurate notion of pixel coverage, and hence the apparent “benefit.”

Let’s see how asymmetric sub-pixel anti-aliasing performs on diagonals and italics. The following illustration again shows an italic UC ‘I’ before applying any constraints.

Unconstrained (“unhinted”) outline rendered with asymmetric sub-pixel anti-aliasing (Arial Italic, UC ‘I,’ 12 pt, 96 DPI, 6x1y oversampling, ClearType filter)
⇒ Hover your mouse over the illustration to see the same outline rendered in full-pixel anti-aliasing (4x4y oversampling, box filter)

Compared to full-pixel anti-aliasing, this doesn’t look all that different. The gradation may appear a bit finer, but this is mostly related to the “bleeding” of the filter, not the oversampling rate. A suitable “bleeding” filter for full-pixel anti-aliasing very likely could all but eliminate this difference.

Asymmetric sub-pixel anti-aliasing does not render the partial pixels at the top, but this shouldn’t come as a surprise, since it does not anti-alias in y-direction. Using hybrid sub-pixel anti-aliasing would render these two pixels just fine.

If there is much of a difference, it must be noticeable on the level of sub-pixels. The following table illustrates a range of fractional stroke widths, showing each stroke at a range of different fractional stroke positions, in increments of 1/3 pixel. But unlike a similar compilation shown previously, hovering your mouse over any combination of stroke width and position will reveal which sub-pixels are actually rendered as solid black.

Stroke weights (top to bottom): 1 pixel | 1 1/3 pixel | 1 2/3 pixel | 2 pixel
Fractional positions (left to right): −1/3 pixel | ±0 pixel | +1/3 pixel

A range of italic strokes, rendered in 6x1y asymmetric sub-pixel anti-aliasing with increasing fractional stroke weights (top-to-bottom) and at different fractional positions (left-to-right)
⇒ Hover your mouse over any of the above combinations of fractional stroke weight and position to see which sub-pixels are actually rendered as solid black.

As with full-pixel anti-aliasing, it takes a minimum stroke width to yield a “black core” that is 3 sub-pixels (1 pixel) wide, but unlike full-pixel anti-aliasing, said “black core” appears to exploit the sub-pixel structure. The “black core” looks more “contiguous” or less “staggered” than in full-pixel anti-aliasing. As a result, sub-pixel anti-aliasing may seem to render “smoother” italic strokes than full-pixel anti-aliasing.

1 pixel | 1 1/2 pixel | 2 pixel

3 italic strokes, rendered in 6x1y asymmetric sub-pixel anti-aliasing, and showing the actual sub-pixels
⇒ Hover your mouse over any of the above italic strokes to see the respective stroke rendered in 4x4y full-pixel anti-aliasing, again showing the actual sub-pixels

The degree to which sub-pixel anti-aliasing may be “smoother” than full-pixel anti-aliasing depends on the angle, width, and position of the (diagonal) stroke. For example, in case of the previously illustrated 1 pixel wide diagonal at an angle of 45°, there appears to be no advantage, as illustrated below.

A diagonal stroke at an angle of 45°, rendered in 6x1y asymmetric sub-pixel anti-aliasing, and showing the actual sub-pixels
⇒ Hover your mouse over the above illustration to see the same stroke rendered in 4x4y full-pixel anti-aliasing, again showing the actual sub-pixels

The stroke is not wide enough for much of a “black core,” and the black or almost black sub-pixels appear as “islands.” Notice that hybrid sub-pixel anti-aliasing doesn’t help in this case. If there is any advantage to sub-pixel anti-aliasing, we have already exploited it in x-direction.

Round strokes, along with under- and overshoots, are the other prime candidates for “font smoothing.” To see how anti-aliasing may help before applying any constraints, I’ll use full-pixel anti-aliasing again with 4x4y oversampling on an UC ‘O.’

Unconstrained (“unhinted”) outlines rendered with full-pixel anti-aliasing (Arial, UC ‘O,’ 12 pt, 96 DPI, 4x4y oversampling for 17 levels of gray, box filter)
⇒ Hover your mouse over the illustration to see the same outline rendered in bi-level

Like with diagonal and italic strokes, anti-aliasing renders round strokes with a more refined notion of pixel coverage than bi-level rendering. Notice also the distribution of black pixels and the overall asymmetry. We will dissect this next.

Recall the first illustrations of fractional stroke weights in . Extra weight was added to one side of the stroke. In that context, it seemed mostly irrelevant whether weight was added to the left, to the right, or to both sides. Subsequently, a table in illustrated a range of fractional stroke weights at fractional stroke positions. Depending on the fractional stroke weight, there were potentially several fractional stroke positions that rendered a “black core.”

Put another way, depending on the fractional stroke weight, it seemed as if we could “put” the gray on the left, on the right, or potentially on both sides. In that context, it didn’t seem to matter where we “put” the gray. But could it be that it does matter, or at least sometimes?

In their seminal paper Perceptually Tuned Generation of Grayscale Fonts, Hersch et al. advocate that it does. Summarizing the relevant typographic design guidelines, they recommend

  1. to sufficiently reinforce thin strokes (cf ),
  2. to allocate the gray on the trailing edge of vertical strokes, leaving the leading edge as sharp as possible,
  3. to allow for sufficient black pixels on diagonal strokes (cf. “black core” in ),
  4. to allocate the gray on the outside of round strokes,
  5. not to overemphasize serifs (cf ), and last but not least,
  6. to use a consistent pattern of contrast profile across the entire font (cf ).

Guidelines 2 and 4 are specific about where to “put” the gray. Given a left-to-right reading direction, guideline 2 recommends to align the left edge of vertical strokes with a pixel boundary. This creates the sharpest transition between the (white) background and the (black) text. It relegates any fractional part of the stroke weight to the right edge. At the same time it limits possible stroke positions to 1 per pixel, like bi-level rendering. Depending on the context, this may or may not be a more tolerable trade-off.
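In terms of a constraint, guideline 2 might be sketched like this: snap the leading edge to a full pixel boundary and give whatever fractional weight remains to the trailing edge (assuming left-to-right reading and 4× oversampling in x):

```python
# Sketch of guideline 2: keep the leading (left) edge of a vertical stroke on a
# pixel boundary and put any fractional weight on the trailing (right) edge.
# Assumes a left-to-right reading direction and 4x oversampling in x.

def constrain_stem(left, width, rate=4):
    """left and width in pixels; returns the constrained (left, right) edges."""
    snapped_left = round(left)                          # leading edge: pixel boundary
    snapped_width = max(1, round(width * rate)) / rate  # weight: nearest 1/4 px, >= 1 px
    return snapped_left, snapped_left + snapped_width

print(constrain_stem(left=2.30, width=1.35))   # -> (2, 3.25)
```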

Guideline 4 recommends to “put” the gray on the outside of round strokes, particularly with a box filter. Targeting a “black core” for round strokes, this maximizes the length of a row or column of adjacent black pixels, as illustrated for a range of stroke weights below.

1 pixel (8 pt) | 1 1/4 pixel (10 pt) | 1 1/2 pixel (12 pt) | 1 3/4 pixel (14 pt) | 2 pixel (16 pt)

5 round strokes, rendered with full-pixel anti-aliasing, and “putting” the gray on the outside as per guideline 4 above (Arial, UC ‘O,’ 96 DPI, 4x4y oversampling for 17 levels of gray, box filter)

This is consistent with how we may render the under- and overshoots (cf below). At the same time it again limits possible stroke positions to 1 per pixel, with substantially the same considerations about the involved trade-offs.

Now, strictly speaking, whatever the above choices may be, it is of course not a matter of “putting” deliberate amounts of gray on specific sides of strokes. Rather, the deliberation selects a fractional stroke position. Together with the fractional stroke weight, this determines fractional pixel coverages, which in turn may look as if a specific level of gray was placed next to the black pixels.

For instance, a 1 1/2 pixel wide stroke may cover a full pixel and half of the adjacent pixel to the right. This may look as if the “middle gray” was put next to the “black core.”

Seen this way, guideline 4 above might be “extrapolated” to sub-pixel anti-aliasing: allocate the fractional part of round strokes on the outside. Notice, however, that this does not select a particular color. If a particular color were to be selected, then a “commensurate” fractional part should be allocated on the outside of the round stroke. Notice also that this is not symmetric. The same fractional part allocated on the right will not yield the same color as when allocated on the left.

So far, we have looked at vertical round strokes in more detail. In theory, horizontal round strokes offer similar, if not the same, opportunities. The main difference is that “putting” a little bit of gray on the outside of horizontal round strokes effectively means to “turn on” the under- and overshoots—at least “a little bit.”

In bi-level rendering on low-resolution screens, one of the “cruelties” is having to decide when and how to “turn on” the under- and overshoots. For text sizes on low-resolution displays, they are “off.” For sufficiently large headlines or for text sizes on low-resolution printers, they can come “on.” In-between, “off” is “not enough” while “on” is “too much.” Moreover, when they come “on,” they grow the character by 2 pixels at once.

By contrast, much like selecting a fractional stroke width, anti-aliasing should enable us to “turn on” the under- and overshoots more “gradually.” For instance, with 4x4y oversampling full-pixel anti-aliasing, we can “phase in” the under- or overshoots in increments of 1/4 of a pixel. Accordingly, this grows the character by 1/2 of a pixel at a time, instead of 2 pixels. Following is a table illustrating the “gradual onset” of the undershoots.

0 pixel (8 point) | 1/4 pixel (11 point) | 1/2 pixel (23 point) | 3/4 pixel (38 point) | 1 pixel (54 point)

5 round strokes, rendered with full-pixel anti-aliasing, and gradually “phasing in” the undershoot (Arial, UC ‘O,’ 96 DPI, 4x4y oversampling for 17 levels of gray, box filter)

To gradually “phase in” the under- and overshoots, I have put the bottom of the ‘O’ on the nearest 1/4 of a pixel. This yields the point size thresholds as tabulated above, with the exception of the point size at which the undershoot is rendered by the lightest shade of gray. Strictly by the numbers, this first threshold would be exceeded already at 8 point.

However, at 8 point the horizontal strokes are still 1 pixel wide only. Thus positioning the bottom round stroke 1/4 of a pixel below the base line would not render this stroke with a “black core,” as illustrated below.

Rendering under- and overshoots “too early” may compromise rendering contrast (Arial UC ‘O,’ 8 pt, 96 DPI).
⇒ Hover your mouse over the illustration to see how not rendering them “early enough” may compromise character proportions.

Depending on the context, this may or may not be tolerable. I chose to push the threshold up to the smallest ppem size at which the fractional stroke weight has a “gray part” that can be “used” as under- or overshoot. Your choice or preference may be different.

At the same time this shows that while anti-aliasing can greatly reduce the “agony” over when and how to “turn on” the under- and overshoots, the problem does not go away completely. In fact, when considering a font with a more pronounced stroke design contrast (cf ), said threshold may have to be pushed up much higher. Alternately, the priority to render a “black core” may have to be relaxed a bit.

By the numbers, the under- and overshoots of Times New Roman uppercase could be rendered by 1/4 of a pixel as low as 9 ppem. Yet it is not until 32 ppem that the horizontal strokes can be rendered with a gray part. Maintaining the priority to render a “black core,” this would push the “onset” of under- and overshoots up to 32 ppem. Alternately, if a 75% gray level is deemed “contrasty enough,” the “onset” of under- and overshoots could be dropped down to 25 ppem.

Rendering under- and overshoots “too late” may compromise character proportions (Times New Roman UC ‘O,’ 25 ppem).
⇒ Hover your mouse over the illustration to see how rendering them “early enough” may compromise rendering contrast.
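The onset logic described above can be sketched as a small decision: phase in the undershoot only once it is worth at least 1/4 of a pixel and the bottom round stroke has a “gray part” to spend on it without giving up its “black core.” All design-unit values below are hypothetical, chosen only to show the shape of the thresholds:

```python
# Sketch of the "onset" logic for undershoots: phase the undershoot in only
# once (a) it scales to at least 1/4 pixel and (b) the bottom round stroke is
# at least 1 1/4 pixels, so a full-pixel "black core" survives with a gray part
# left over. The design-unit values are hypothetical, not taken from any font.

UPM = 2048
UNDERSHOOT = 30       # hypothetical undershoot of the 'O' in design units
H_STROKE = 170        # hypothetical weight of the bottom round stroke

def undershoot_px(ppem):
    undershoot = UNDERSHOOT * ppem / UPM
    stroke = H_STROKE * ppem / UPM
    if undershoot < 0.25 or stroke < 1.25:
        return 0.0                            # keep the 'O' on the base line
    return int(undershoot * 4) / 4            # phase in, in 1/4-pixel steps

for ppem in (16, 18, 36, 52, 72):
    print(ppem, undershoot_px(ppem))          # 0.0, 0.25, 0.5, 0.75, 1.0
```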

Notice that all the above opportunities for under- and overshoots are applicable to full-pixel anti-aliasing and to hybrid sub-pixel anti-aliasing only. Asymmetric sub-pixel anti-aliasing lacks the necessary anti-aliasing in y-direction.

In fact, the lack of anti-aliasing in y-direction affects the top and bottom round strokes irrespective of over- or undershoots. The closer a stroke is oriented towards the horizontal direction, the more noticeable its pixelation becomes. We will come back to this in .

Serifs are yet another one of those “cruelties” of bi-level rendering on low-resolution screens. At small type sizes, serifs either turn into slab serifs, or they “help” to turn the font into a sans-serif. Once more it appears that the best we can do is to “pick the lesser evil.” Following is how anti-aliasing can help.

Understanding that all serifs eventually turn into slab serifs, we interpret the slab serifs as short strokes or stubs. In turn, this suggests using fractional stroke weights and possibly fractional stroke positions. We have already seen how anti-aliasing can help with this. But this is not enough.

Recall how I made small features “sampleable” in . I constrained the scaling mechanism to always enforce a minimum distance of one pixel between pairs of edges suitable to define the size of a feature, no matter how small the outline is scaled. We need to refine this.

Depending on the nature of the feature, we will have to select different numerical values for the minimum distance criterion. For strokes (stems, crossbars, arches, or other “structural” elements), it may not make sense to use less than 1 full pixel, as illustrated below.

Trying to render thin features with commensurate levels of gray: The crossbar measures 1/4 of a pixel across, which yields 25% gray (Bodoni UC ‘H,’ 10 ppem)

This may “look” better but it does not necessarily “read” better than when rendering the crossbar with a full pixel. Depending on your vision, your brain may not receive the same signals or signal strength. Accordingly, the brain may have to work harder to “assemble” the received signals to an UC ‘H.’

By contrast, for serifs and other suitable features it may make sense to select a minimum distance smaller than 1 pixel. With anti-aliasing, it is not necessary to turn every seriffed font into a variant of Rockwell like in bi-level rendering. The comparison below illustrates what happens when the bi-level “resolution funnel” is applied to full-pixel anti-aliasing.

Bodoni | Times New Roman | Rockwell

The “resolution funnel” applied to Bodoni, Times New Roman, and Rockwell: At 10 ppem the UC ‘H’ of these 3 fonts were “hinted” to yield the exact same set of pixels, even when rendered with full-pixel anti-aliasing (!)
⇒ Hover your mouse over the illustrations to see the unconstrained (“unhinted”) outlines, also rendered with full-pixel anti-aliasing

At 10 ppem the 3 UC ‘H’ look identical, even though they originate in clearly different designs. As discussed already, for reasons of readability we may not want to render crossbars with a minimum distance of less than 1 pixel, but we might try smaller values on the serifs to render some of the individuality of the 3 typefaces, as illustrated below.

Rendering thin serifs with an “adequate” level of gray: A minimum distance of 1/4 of a pixel yields a level of gray closest to the actual pixel coverage while 1 pixel yields the highest rendering contrast (Times New Roman, lc ‘m,’ 12 pt, 96 DPI). Intermediate values trade the former for the latter (cf also )
⇒ Be sure to use the above buttons to render the serifs with different minimum distance constraints

Notice how substantially identical serifs are rendered with different shades of gray even though their “thicknesses” are constrained to the exact same value. Likewise, all 6 serifs at the bottom are constrained to the exact same length. Yet there are 2 different shades of gray involved.

The reason for this is easy to see. Since I chose a fractional stroke weight for the 3 main stems, and since I chose to “put” the gray on the trailing edge, the serifs at the bottom right of each stem will protrude from the “black core” farther than the serifs at the bottom left of each stroke. The pixels on the right represent a pixel coverage that is shared by the stem and the serif. Accordingly, these pixels are rendered with a darker shade of gray.
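Expressed as a refinement of the earlier minimum-distance constraint, the idea is simply to make the minimum depend on the kind of feature; a sketch, with the 1/4-pixel serif minimum as one illustrative choice among the values the buttons above let you try:

```python
# Sketch of a minimum-distance constraint with feature-specific minima:
# strokes keep a 1-pixel minimum for readability, while serifs are allowed a
# smaller minimum so they render as a lighter shade of gray instead of a slab.
# The feature classes and the 1/4-pixel serif minimum are illustrative choices.

MIN_DISTANCE = {"stroke": 1.0, "serif": 0.25}          # in pixels

def constrain_distance(scaled_px, feature, rate=4):
    """Round a scaled edge-to-edge distance to the sample grid,
    but never below the minimum for its feature class."""
    snapped = round(scaled_px * rate) / rate
    return max(snapped, MIN_DISTANCE[feature])

print(constrain_distance(0.11, "stroke"))   # -> 1.0   (crossbar kept readable)
print(constrain_distance(0.11, "serif"))    # -> 0.25  (serif as a light gray)
```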

Like with the under- and overshoots, the opportunity to deemphasize serifs is applicable to full pixel anti-aliasing and to hybrid sub-pixel anti-aliasing only. Asymmetric sub-pixel anti-aliasing has to be excluded again because it lacks the necessary anti-aliasing in y-direction.

It is interesting to see how hybrid sub-pixel anti-aliasing renders deemphasized serifs. It may appear that the de-emphasis has a more pronounced effect in 6x5y hybrid sub-pixel anti-aliasing, as illustrated below.

Rendering thin serifs with an “adequate” level of gray in hybrid sub-pixel anti-aliasing: The trade-offs are substantially the same as for full-pixel anti-aliasing.
⇒ Be sure to use the above buttons to render the serifs with different minimum distance constraints

With all these colors it is not straightforward to see why this is, but a thorough explanation is beyond the scope of this website. Suffice it to say that unlike full-pixel anti-aliasing, in hybrid sub-pixel anti-aliasing not all samples make the same contribution to the final pixel. Accordingly, we may have to select separate minimal distance criteria for full-pixel anti-aliasing and for hybrid sub-pixel anti-aliasing.

Having to decide for 1 or 2 pixel wide strokes in bi-level rendering seems “cruel” enough, already, but it gets worse. For many typefaces the vertical strokes of uppercase characters are heavier than their lowercase counterparts. Likewise, the horizontal strokes are often thinner than the vertical ones. Collectively, I’ll call this stroke design contrast (not to be confused with stroke rendering contrast introduced in ).

It can be very agonizing to try to render stroke design contrast in bi-level. Think of it this way: A vertical stroke or stem is designed to a certain width. Once scaled down to screen resolution and rasterized to pixels, there will be a range of point sizes for which this stroke must be rendered 1 pixel wide only.

But there has to be a point on the “size ramp” where this rendition “jumps” from 1 to 2 pixels. Just below this “jump,” the stem will be rendered too light, while just above, it will be rendered too heavy. This is unavoidable.

Enter the contrast between lowercase and uppercase characters. Both will make their “jump” on the “size ramp.” If the former are designed a little lighter than the latter, the lowercase characters will make their “jump” a little further up the “size ramp.” But they will nevertheless “catch up.”

This leaves us with a problem. In the range of point sizes between the “uppercase jump” and the “lowercase catch up,” the uppercase stems will be 2 pixels wide while the lowercase stems remain 1 pixel wide. It looks as if the uppercase characters are bold while the lowercase characters are regular, as illustrated below.

Rendering stroke design contrast with small type sizes at low resolutions (Arial ‘Hn,’ 9…26 pt at 96 DPI, bi-level): Notice how the 12 pt UC ‘H’ appears bold next to the 12 pt lc ‘n.’ Likewise, the 20 and 21 pt UC ‘H’ appear semi-bold next to the respective lc ‘n.’

The agony begins. Do we make the lowercase stems a little bolder, or the uppercase stems a little lighter, or a combination thereof? Because the goal is to have both the uppercase and the lowercase make their “jump” at the exact same point size. Otherwise we do get the confusing mix of seemingly bold and regular characters.
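The window in which the uppercase appears bold is a direct consequence of rounding two slightly different stem weights to whole pixels. A sketch, with hypothetical Arial-like stem values (2048 units per em, lowercase slightly lighter than uppercase):

```python
# Sketch of the "staggered jump" in bi-level rendering: rounding slightly
# different UC and lc stem weights to whole pixels opens a window of ppem sizes
# in which the uppercase looks bold next to the lowercase. Stem values are
# hypothetical (Arial-like, 2048 units per em), not measured from the font.

UPM, UC_STEM, LC_STEM = 2048, 200, 186

def bilevel_width(stem, ppem):
    return max(1, round(stem * ppem / UPM))   # whole pixels, at least 1

for ppem in range(12, 24):
    uc, lc = bilevel_width(UC_STEM, ppem), bilevel_width(LC_STEM, ppem)
    flag = "  <- UC appears bold" if uc > lc else ""
    print(f"{ppem:2d} ppem: UC {uc} px, lc {lc} px{flag}")
```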

Enter the contrast between horizontal strokes (or crossbars) and vertical strokes (or stems). In many typefaces this is one of the few distinguishing design properties that can (and should!) be rendered at all but the smallest type sizes. Yet much like with the upper- and lowercase stems this can lead to confusion if the stems “jump” a few point sizes ahead of the crossbars, as illustrated below.

Rendering stroke design contrast with small type sizes at low resolutions: The upper- and lowercase characters “jump” at the exact same point sizes, since this difference cannot be rendered at these sizes. But notice how the crossbar of the UC ‘H’ still appears too thin at 13 pt, and again at 21 and 22 pt. Likewise, the arch of the lc ‘n’ appears too thin at 13…15 pt and at 21…25 pt.
⇒ Hover your mouse over the illustration to see the uncontrolled, “staggered stem jumps.” Notice how, in the process, the 20 pt UC ‘H’ stems are thinned, while the 21 pt lc ‘n’ are thickened.

The agony continues, particularly for sans-serif typefaces with “small” yet “non-negligible” contrast between horizontal and vertical strokes.

Have the horizontals “track” the verticals? This creates a brutal “jump” in page color. Every kind of stroke will “jump” from 1 to 2 pixels at the exact same point size. Below that threshold the entire page looks way too light, and above said threshold it looks way too dark, as is easy to see in the “size ramp” below.

Rendering stroke design contrast with small type sizes at low resolutions: The upper- and lowercase characters “jump” at the exact same point sizes, and the crossbars “track” the stems. But notice the pronounced “jumps” in darkness between 12 and 13 pt, and again between 20 and 21 pt.
⇒ Hover your mouse over the illustration to see the above “size ramp” without the crossbars “tracking” the stems.

The alternative to this discontinuity in page color is to let the horizontals and the verticals “jump” on their own. But this creates a “fake stroke design contrast.” At some point sizes it will make Helvetica look like it is trying to imitate Optima.

Helvetica UC ‘H’ (aka Arial)
17 ppem (cap height 12 px)
Optima UC ‘H’ (aka Omega)
18 ppem (cap height 12 px)

Rendering stroke design contrast with small type sizes at low resolutions: If the crossbars don’t “track” the stems, then Helvetica may be confused with Optima—at least at some type sizes. Depending on the taxonomy, Helvetica is a “Grotesque Sans” while Optima is a “Humanist Sans.” To a “Typophile” these categories can be worlds apart.
⇒ Hover your mouse over the illustrations above to see what little can be rendered at these small sizes to keep these worlds apart.

To me, not having the crossbars “track” the stems is not “the lesser evil,” even though it may look better on a “size ramp” or in a “waterfall” of text. I simply read text far more often than I look at “size ramps” or “waterfalls,” hence my preference (or my context).

Anti-aliasing can use intermediate stroke weights. This should help to render stroke design contrast more faithfully. For instance, looking at the outlines of Arial, I find that the horizontal strokes are about 7/8 of the weight of the vertical strokes.

Using 4x4y full-pixel anti-aliasing to render a 2 pixel wide vertical stroke (= 8 samples), this means that the horizontal stroke can be rendered with 1 3/4 pixel (= 7 samples). This is more faithful than having to render the horizontal and vertical strokes with equal pixel counts, and a lot more faithful than rendering them with 1 and 2 pixels, respectively.
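To make the sample arithmetic concrete, here is a minimal sketch of the computation just described. It assumes 4x4y oversampling and the roughly 7/8 weight ratio quoted above for Arial; the function name and the rounding to the nearest whole sample are illustrative, not part of any particular rasterizer.

```python
from fractions import Fraction

OVERSAMPLING = 4  # 4x4y full-pixel anti-aliasing: 4 samples per pixel, per axis

def stroke_samples(reference_px, design_ratio):
    # Scale a stroke, designed relative to the reference stem, to whole samples.
    exact = Fraction(reference_px) * design_ratio * OVERSAMPLING
    return round(exact)

vertical   = stroke_samples(2, Fraction(1))     # 8 samples = 2 pixels
horizontal = stroke_samples(2, Fraction(7, 8))  # 7 samples = 1 3/4 pixels
print(vertical, horizontal)
```

In bi-level rendering the same 7/8 ratio would have to collapse to 1 or 2 full pixels; with 4x oversampling it survives as a 1 sample difference.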

How far can we push this? In the preceding example we had a full sample of difference between the horizontal and the vertical strokes. It could be argued that once the difference exceeds 1/2 of a sample, we can render it.

Looking at the vertical strokes or stems of the lower- and uppercase characters, I find that they are slightly more than 15/16 “apart.” In other words, once the uppercase stem is 8 samples, then the lowercase stem is slightly less than 15/16 of 8 samples, or a little under 7 1/2 samples. Accordingly, this could be rendered with 7 samples, as illustrated below.

Rendering stroke design contrast with small type sizes at low resolutions, using full-pixel anti-aliasing: Notice an almost “continuous” transition between the smaller and the larger type sizes. None of the UC ‘H’ appear unusually bold compared to the lc ‘n,’ and none of the crossbars or arches appear artificially thin compared to the stems.
⇒ Hover your mouse over the illustrations above to compare how this renders in bi-level.

Like with the under- and overshoots, the opportunity to render “small” design contrasts is limited to full-pixel anti-aliasing and to hybrid sub-pixel anti-aliasing. Asymmetric sub-pixel anti-aliasing has to be excluded again because it lacks the necessary anti-aliasing in y-direction.

In turn, this takes us back to the agony of deciding between 1 or 2 pixel wide crossbars. What’s more, this decision may not be the same for plain bi-level rendering as it is for asymmetric sub-pixel anti-aliasing, as shown below.

Rendering stroke design contrast with small type sizes at low resolutions, using asymmetric sub-pixel anti-aliasing: Using the same “hints” as for bi-level rendering may yield “reverse stroke design contrast” (Arial lc ‘o,’ 11 pt at 120 DPI, Vista SP1)
⇒ Hover your mouse over the illustration above to see how this renders with a more faithful set of constraints

The above example was taken from the RTM version of Arial. At the time, the choice between 1 and 2 pixel wide horizontal strokes was made for full-pixel bi-level rendering. Using the same “hints” for asymmetric sub-pixel anti-aliasing, this renders with what looks like “reverse stroke design contrast”—the horizontal strokes appear to be heavier than the vertical strokes.

Like with the deemphasized serifs rendered in hybrid sub-pixel anti-aliasing vs full-pixel anti-aliasing, we may have to select different values for the point or ppem size at which we allow the horizontal strokes to “jump” from 1 to 2 pixels.

In and we will discuss the major challenges involved in getting legacy fonts to “work” with sub-pixel anti-aliasing. For new fonts the challenge is to account for the rendering method specifics in the constraint system—using suitable concepts, rather than “industriousness.”

In this section we will use some of the individual opportunities of the previous section and explore how they are connected. Specifically, we will look at how seemingly benign decisions to “align” characters with base lines and cap heights or x-heights can “domino” onto proportions, inter-character spacing, and eventually “help” to break the entire text layout. From the minutiae of the samples, this takes us back to “the big picture.”

Professionals in the field of “hinting” commonly start working on a character at its vertical extremes:

“Hinting” commonly starts at the top and bottom of the characters (Arial UC ‘E’ and ‘O,’ 12 pt at 96 DPI, 4x4y full-pixel anti-aliasing, unconstrained outlines)

The top and bottom of the characters are constrained to the corresponding reference lines. Readers familiar with TrueType may recognize pairs of CVT numbers representing surrogates for the “nominal” cap heights or x-heights, along with their overshoots, and for the base lines, along with their undershoots. Readers familiar with Type 1 may recognize the respective “blue values.”

Habitually, in the context of bi-level rendering, base lines and cap heights or x-heights have been constrained to the nearest pixel boundary, or sometimes to an adjacent pixel boundary—the latter e.g. to “tweak” the x-height in the context of a given cap height or similar. This “alignment” of characters has two immediate consequences:

  1. It changes the height of a character by as much as ±1/2 of a pixel, or more in case of a “tweak.”
  2. It maximizes the rendering contrast of crossbars at the top and bottom of a character rendered in full-pixel or hybrid sub-pixel anti-aliasing.
Below is an illustration of the “alignment” of the UC characters ‘E’ and ‘O’ using 4x4y full-pixel anti-aliasing:

The same outlines as above but with the top and bottom of the characters aligned to the corresponding reference lines.
⇒ Hover your mouse over the illustration above to revert to the “unaligned” characters.

Once the top and bottom of the characters are aligned to the corresponding reference lines, the attention focuses on their horizontal extremes. For bi-level rendering, this will translate to constraining the “left” and “right” to the nearest pixel boundary, while for anti-aliasing this may select the nearest sample boundary that renders a “black core,” as illustrated below, again for 4x4y full-pixel anti-aliasing:

The same outlines as above but with the stems positioned for a “black core.”
⇒ Hover your mouse over the illustration above to revert to the “unpositioned” stems.

On its own this looks like a pretty decent rendition of the Arial UC ‘O.’ We have used all the opportunities that apply to this character: Smooth round strokes, optimized design and rendering contrasts, deemphasized under- and overshoots, and the recommended “placement” of the gray on the outside of the ‘O’ (cf ).

What the above illustration may not show is a subtle change in proportions. To see this, I have taken a few measurements of the ‘O’ and compiled them in the table below.

Property                 Unconstrained Outlines    Constrained to Full-Pixel Anti-Aliasing
overall height (pixel)   11 54/64 (≅11.84)         11 1/2
overall width (pixel)    10 61/64 (≅10.95)         11
width-to-height ratio    0.9248:1                  0.9565:1

Proportions of the Arial UC ‘O,’ 12 pt at 96 DPI, rendered in 4x4y full-pixel anti-aliasing

This table illustrates that by aligning the character ‘O’ with the base line and cap height and by positioning its stems for a “black core,” its overall height decreases by about 1/3 of a pixel, and its overall width increases by a few 1/64 of a pixel. These rounding errors would seem to be within the expected tolerances (±1/2 of a pixel for the heights, ±1/8 of a pixel or ±1/2 a sample for the widths).

Looking at the “bigger picture,” these errors change the width-to-height ratio by about 3.43%. This figure easily compares to the ratio between the heights of “flat” and “round” characters, as shown in the table below.

Property                                        Value
Overall Height of UC ‘E’ (font design units)    1466
Overall Height of UC ‘O’ (font design units)    1517
UC-‘O’-height-to-UC-‘E’-height ratio            1.0348:1 (3.48%)

Properties of the Arial UC “flat” vs the “round” characters: the “round” characters are designed to exceed the height of the “flat” characters by about 3.48%

In other words, the above seemingly innocent constraints have “grown” the width-to-height ratio of the ‘O’ at about the same rate as the height of the ‘O’ was designed to exceed the height of the ‘E.’ Hence, if rendering faithful under- and overshoots is a priority, then rendering the ‘O’ with more faithful proportions should probably be considered, as well.
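A couple of lines of arithmetic, using only the values from the two tables above, confirm how close these two percentages are (a quick sketch; the variable names are mine):

```python
# Width-to-height ratio of the Arial UC 'O' before and after the constraints
unconstrained = (10 + 61/64) / (11 + 54/64)   # ~0.9248
constrained   = 11 / 11.5                     # ~0.9565
ratio_growth  = constrained / unconstrained - 1

# Designed excess of the "round" over the "flat" character heights
designed_excess = 1517 / 1466 - 1

print(f"{ratio_growth:.2%} vs. {designed_excess:.2%}")   # ~3.43% vs. ~3.48%
```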

And it gets worse. Consider the part of the ‘O’ that we have not “rendered,” namely the left- and right side-bearings, and the advance width. To see this, the following illustration puts the ‘O’ into the context of the advance width. In this illustration, the vertical blue lines represent the designed advance width, while the green lines represent the “rendered” advance width.

Designed (blue) vs “rendered” (green) advance width (Arial UC ‘O,’ 12 pt at 96 DPI, 4x4y full-pixel anti-aliasing)
⇒ Hover your mouse over the illustration above to revert to the unconstrained outlines.

On its own this looks pretty benign. The ‘O’ appears nicely centered between the green left and right side-bearing lines that delimit the “rendered” or instructed advance width. The following table compiles the subtleties that the illustration might not show at first glance.

Property                              Unconstrained Outlines   Constrained to Full-Pixel Anti-Aliasing
overall height (pixel)                11 54/64 (≅11.84)        11 1/2
overall width (pixel)                 10 61/64 (≅10.95)        11
width:height ratio                    0.9248:1                 0.9565:1
top, bottom stroke width (pixel)      1 19/64 (≅1.27)          1 1/4
left, right stroke width (pixel)      1 36/64 (≅1.56)          1 1/2
left- + right side-bearing (pixel)    50/64 + 46/64 = 1 1/2    1/2 + 1/2 = 1
advance width (pixel)                 12 29/64 (≅12.45)        12

Proportions and spacing of the Arial UC ‘O,’ rendered in 4x4y full-pixel anti-aliasing, at 12 pt/96 DPI

This table illustrates that for the above priorities (“align” top and bottom, “put” the gray on the outside), and the advance width “rendered” with 12 pixels, there is only 1 pixel left for the sum of left and right side-bearings. This is about 33% less than designed! As far as I know, altering inter-character spacing reduces readability, and moreover, decreasing inter-character spacing reduces readability even more than increasing it.

Now, in the past, I had been told repeatedly that “you don’t need x-direction ‘hints’ in ClearType!” Therefore, I’ll repeat the above illustration using asymmetric sub-pixel anti-aliasing, and as implied, constrained in y-direction only.

Designed (blue) vs “rendered” (green) advance width (same as above, except 6x1y asymmetric sub-pixel anti-aliasing [“ClearType”])
⇒ Hover your mouse over the illustration above to revert to the unconstrained outlines.

The under- and overshoots are gone, which is unavoidable in asymmetric sub-pixel anti-aliasing. In turn, this makes the ‘O’ more “squarish” than necessary. What’s worse, the ‘O’ appears to be “pushed” towards the (green) right side-bearing line. This can’t be good for inter-character spacing. Here are all the numbers corresponding to the above illustration:

Property                              Unconstrained Outlines   Constrained to Full-Pixel Anti-Aliasing   Constrained to Asymmetric Sub-Pixel Anti-Aliasing, y-direction only
overall height (pixel)                11 54/64 (≅11.84)        11 1/2                                    11
overall width (pixel)                 10 61/64 (≅10.95)        11                                        10 61/64 (≅10.95)
width:height ratio                    0.9248:1                 0.9565:1                                  0.9957:1
top, bottom stroke width (pixel)      1 19/64 (≅1.27)          1 1/4                                     1
left, right stroke width (pixel)      1 36/64 (≅1.56)          1 1/2                                     1 36/64 (≅1.56)
left- + right side-bearing (pixel)    50/64 + 46/64 = 1 1/2    1/2 + 1/2 = 1                             50/64 + 17/64 = 1 3/64 (≅1.05)
advance width (pixel)                 12 29/64 (≅12.45)        12                                        12

Proportions of the Arial UC ‘O,’ 12 pt at 96 DPI, rendered in 6x1y asymmetric sub-pixel anti-aliasing

The above table documents that using asymmetric sub-pixel anti-aliasing doesn’t improve any of the shortcomings of full-pixel anti-aliasing, even without “distorting” the x-direction (by omitting the respective constraints). On the contrary, looking at the width-to-height ratio, it increases the “disproportion” to 7.67%, hence the more “squarish” appearance, while it continues to leave about 1 pixel for the sum of left and right side-bearings.

Moreover, this approach “positions” the character way “off-center,” towards the right side-bearing. Consider the left and right side-bearing as rendered, at 50/64 and 17/64 of a pixel, respectively. This indicates that the left side-bearing renders at almost 3 times the right side-bearing, while they are designed to be substantially equal (99 and 92 font design units, respectively). In the context of other characters (inter-character spacing), this doesn’t seem right.

Accordingly, omitting x-direction constraints in asymmetric sub-pixel anti-aliasing can’t possibly be the “final answer” (cf also , , , and ). In general, prioritizing without specifying a context is not the answer. Therefore, we will now look at the interdependencies of constraint priorities with layout contexts.

So far, we have discussed a select number of opportunities of anti-aliasing. Prioritizing these opportunities without keeping an eye on the “bigger picture” can lead to unforeseen trade-offs. These trade-offs may negate the advantages of the prioritized opportunities.

For instance, in the preceding example the chosen set of priorities has changed the character’s proportions, and it has reduced the available inter-character space by about 33%. If many characters get their side-bearings compromised like that, eventually this can break the entire layout.

Recall where we discussed the reason why like strokes do not always sample to like sample counts. We had a left edge, a weight, and a right edge of a stroke. Once scaled and sampled, these three variables didn’t always “add up.” We had to define e.g. the x-coordinate of the right edge to be the sum of the x-coordinate of the left edge plus the width of the stroke.

The situation is not much different here. We have an overall character width (black-body width) and an inter-character space (the sum of the left and right side-bearings). The two should add up to the advance width, but they don’t:

Property                  Unconstrained Outlines   Constrained to Full-Pixel Anti-Aliasing
black-body width (pixel)  10 61/64 (≅10.95)        11
side-bearings (pixel)     1 1/2                    1 1/2
advance width (pixel)     12 29/64 (≅12.45)        12

Black-body width, side-bearings, and advance width of the 12 pt/96 DPI Arial UC ‘O’

The unconstrained numbers add up

10 61/64 + 1 1/2 = 12 29/64

but the constrained (rounded) numbers don’t

11 + 1 1/2 ≠ 12

hence we will have to decide which of the 3 must remain scalable, and which may get compromised.

And herein lies the problem: We may not know the context in which the font is going to be used.

Now, strictly speaking, by rounding the advance width to the nearest pixel boundary, we have already compromised it. It is simply the “least evil” we can do in the context of positioning characters on full pixels (cf ).

Once we know the layout context, we can make an educated choice as to which priorities or constraints to “relax.” For instance, once we know that the context is a scalable layout, then we may have to “relax” the priority to “put” all the gray on the outside. By “insetting” the two vertical strokes by 1/4 of a pixel each, we compromise the black-body width, but we gain 1/2 a pixel for the side-bearings, as illustrated below.

Relaxing the constraint to “put” all the gray on the outside (same as above, except back to 4x4y full-pixel anti-aliasing)
⇒ Hover your mouse over the illustration above to revert to the “unrelaxed” version.
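The effect of this relaxation can be tallied with the numbers from the preceding tables. The following sketch uses the 1/4 pixel inset per stem proposed above; everything else is plain arithmetic:

```python
from fractions import Fraction as F

height        = F(23, 2)   # 11 1/2 px, aligned to base line and cap height
black_body    = F(11)      # 11 px with all the gray "on the outside"
advance_width = F(12)      # rounded to the nearest full pixel

inset         = F(1, 4)                         # per vertical stroke
relaxed_body  = black_body - 2 * inset          # 10 1/2 px
side_bearings = advance_width - relaxed_body    # 1 1/2 px, the designed sum

print(relaxed_body + side_bearings == advance_width)   # True: it adds up again
print(float(relaxed_body / height))                    # ~0.913, vs 0.9248 as designed
```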

Turns out that at the same time this improves the proportion from 0.9565:1 to 0.9130:1, which is only 1.27% off. This seems like an easy fix. Lose a little on the “gray-on-the-outside” rule, gain a little bit on the proportions, and we’re done.

But it’s not always as easy as that. Suppose another scenario where “aligning” the top and bottom of the character has increased the character’s height. All else being equal, this could turn out rather agonizing. Aligning has made the character a little too “skinny” already, and now we need to make it even “skinnier” to preserve the side-bearings? This may be hard, but it could turn out to be the “lesser evil.”

To see this, suppose we chose to compromise the advance width, instead of making the character even “skinnier,” even though we know the context is a scalable layout. To be scalable, the respective software expects the character to have an uncompromised advance width.

If the advance width is compromised anyway, and if the software has no other way to “absorb” the error, it may simply “truncate” the actual advance width to the expected advance width. In the process the right side-bearing may get lost altogether, with dire consequences on the space separating—or rather not separating—the next character (cf ). Given such a scenario, I’d rather make my own compromises.

If, on the other hand, we know that the context is a reflowable layout, then we are free to prioritize the character’s proportions, whether its height has decreased or increased, and preserve the side-bearings. We will simply forward the total rounding error to the advance width. Since the context doesn’t have to make assumptions about expected advance widths, this should work out just fine.

As this simple example has shown, individual opportunities to make characters “look nice” can have unexpected “knock-on” effects. If the individual opportunities are prioritized irrespective of the layout context, they can impair readability. It all has to work together.

Understanding the workarounds that make it all work together is key to successfully rendering text. We will look at two workarounds next. Not surprisingly, these workarounds come with their own trade-offs, which may or may not be more tolerable.

In the preceding example, the black-body width and the side-bearings didn’t add up to the advance width. Depending on the context, the workaround was to compromise the black-body width or the advance width. Depending yet again on the context, there may be a third alternative.

Historically, software has laid out text by putting individual characters on full-pixel boundaries. In an environment of bi-level rendering, this makes perfect sense. Pixels that are either “on” or “off” are all there is to represent the shape and the advance width of characters. Since pixels cannot be placed at intermediate positions, entire characters cannot be positioned on fractional pixel positions.

Now recall that any form of anti-aliasing first oversamples the outlines. If we could intercept the rendering process at this stage, we could lay out the “sample maps,” instead of laying out finished characters. Positioning of individual “sample maps” (representing individual characters) could be done to the nearest sample, instead of the nearest pixel. Once laid out, the resulting string of “sample maps” could be downsampled as if it were one single—albeit very wide—character.

Conceptually, this is how anti-aliased characters can be positioned on fractional pixel positions. With the premise of fractional pixel positioning it is no longer necessary to round advance widths to the nearest full pixel. Instead, they can be rounded to the nearest fraction of a pixel, compatible with the oversampling rate used by the particular anti-aliasing method, hence the term Fractional Advance Widths.
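As a minimal sketch, rounding an advance width to the positioning grid might look as follows; the function and its parameters are illustrative, not any particular rasterizer’s API. With an x-oversampling rate of 1 it degenerates to classic full-pixel positioning:

```python
from fractions import Fraction as F

def rounded_advance(designed_px, x_oversampling):
    # Round to the nearest multiple of 1/x_oversampling of a pixel.
    grid = F(1, x_oversampling)
    return round(designed_px / grid) * grid

designed = F(12) + F(29, 64)          # 12 29/64 px: the Arial UC 'O' at 12 pt/96 DPI
print(rounded_advance(designed, 1))   # 12, i.e. full-pixel positioning
print(rounded_advance(designed, 6))   # 25/2, i.e. 12 1/2 px on a 1/6 px grid (6x oversampling in x)
```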

With anti-aliasing and fractional pixel positioning, the aforementioned third alternative is to use fractional advance widths—the layout context permitting. This will make the constrained (rounded) numbers add up

11 + 1 1/2 = 12 1/2

without compromising any of the involved variables. We can keep the black-body width, the side-bearings, and the advance width!

Now, before getting too exuberant, let’s be clear on one thing: Using fractional advance widths requires that the layout software positions characters on fractional pixels. If it doesn’t, then we can’t use this alternative. Opting for fractional advance widths while blissfully ignoring the layout context may get the advance widths “adjusted” by the layout software, as previously mentioned. It all has to work together.

Notice that this layout method is often referred to as Sub-Pixel Positioning. For most purposes, fractional pixel and sub-pixel positioning refer to the same concept. However, the term sub-pixel also refers to the physical structure of pixels on LCD devices.

At the same time fractional pixel positioning is not restricted to the nearest physical sub-pixel boundary. It can use any fractional pixel position compatible with the oversampling rate, or any sample boundary for short. Accordingly, I’ll use the term Fractional Pixel Positioning.

By the above example it would seem that anti-aliasing with fractional pixel positioning can greatly ease a lot of the agonies of text layout. Sure, the absence of “scalability” is still a problem, but it manifests itself in terms of samples, not pixels. If 1/2 a pixel is missing from the side-bearings totaling 1 1/2 pixels, this is a big deal. If, instead, 1/2 a sample is missing, e.g. in 6x sub-pixel anti-aliasing, then we only lose 1/12 of a pixel, which may be hard to notice.

However, depending on the font, type size, and device resolution, stems may be too small to be rendered with a “black core” (cf ). In full-pixel positioning, this situation may be alleviated by carefully selecting fractional positions for individual stems. This trades positioning precision for rendering contrast.

In fractional pixel positioning, this is not possible, because each instance of one and the same character can be positioned on a different fractional pixel position. Accordingly, each instance would have to select separate “optimal” stem positions. Yet all that these separate “optimal” stem positions can achieve is to compensate for the fact that the preceding character ended on a fractional pixel position.

To see this, let’s assume we are laying out some text in Arial, 11 pt at 96 DPI. Per the table in , this will be too small a size to render stems with a “black core,” but there may be tolerable alternatives.

For the sake of the argument, our text will be very simple, if a touch “academic.” It consists of a few lc “l” in a row. Granted, this is not a very interesting text, and may not occur in practice all that often, but it is simple enough to explain all the issues involved.

Here is what the lc “l” looks like when rendered in asymmetric sub-pixel anti-aliasing, and before applying any constraints.

Preparing for fractional pixel positioning: “unhinted” lc ‘l’ (Arial, 11 pt at 96 DPI, ClearType)

I constrained the top to the ascender height, the bottom to the base line, and I didn’t like the “maroon core.” Aligning the right edge of the ‘l’ with the nearest pixel boundary produced a “navy core” instead, which looks darker to me, and hence more “contrasty.”

For this particular selection I had to shift the ‘l’ to the left by about 1/3 of a pixel. Here is what this looks like after applying these constraints.

Preparing for fractional pixel positioning: lc ‘l’ “hinted” for maximum stroke rendering contrast (else same as above)
⇒ Hover your mouse over the illustration above to revert to the “unhinted” version.

The advance width of the above ‘l’ is 3 1/3 pixels. Thus, in our text layout example, if the first ‘l’ starts at 0, the next one will be positioned at 3 1/3.

If I were to use the exact same instance of the ‘l’ to put a second ‘l’ next to the first one, the fractional part of this position, 1/3 of a pixel, would “undo” my rendering contrast optimization. Recall that I shifted the ‘l’ to the left by 1/3 to turn the “maroon core” into a “navy core.”

But by positioning this second ‘l’ to the fractional pixel position 3 1/3, its “coloring scheme” gets a “head start” of 1/3 of a pixel to the right. At that point, having the constraints shift it to the left merely undoes this “head start,” and hence gets us back to the “maroon core.”

In turn, by positioning a third ‘l’ adjacent to the second one, to the fractional pixel position 6 2/3, its “coloring scheme” gets an even bigger “head start.” Finally, the fourth ‘l’ renders like the first one—by now, we have cycled through three 1/3 of a pixel “head starts,” as illustrated below.

Fractional pixel positioning with the previously “hinted” lc ‘l.’ Maximum stroke rendering contrast seems lost on 2 out of 3 ‘l.’
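This cycling of “head starts” can be modeled in a few lines. The sketch below only tracks the fractional phase at which the optimized stem lands; the 1/3 pixel values are the ones quoted above, and the sign convention (negative = shift to the left) is mine:

```python
from fractions import Fraction as F

advance    = F(10, 3)    # "hinted" advance width of the lc 'l': 3 1/3 px
hint_shift = F(-1, 3)    # the fixed left shift that produced the "navy core"

for i in range(4):
    pen   = i * advance             # fractional pen position of this 'l'
    phase = (pen + hint_shift) % 1  # where the shifted stem lands within its pixel
    print(i + 1, pen, phase)
# The phase cycles 2/3, 0, 1/3, 2/3, ...: only the first and the fourth 'l'
# land where the fixed shift actually yields the intended "navy core".
```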

There are two ways to fix this. One way is for the second ‘l’ to use a separate instance of the ‘l’ that uses a left shift of 2/3 of a pixel. This would get me a “navy core,” as targeted. In turn, for the third ‘l’ next to the second one, I would have to use an instance with a left shift of 3/3 of a pixel, or 1 full pixel, and so forth, as illustrated below (be sure to hover your mouse over the illustration to see this text layout method at work).

Fractional pixel positioning with individually “hinted” instances of the lc ‘l,’ first attempt at maintaining maximum stroke rendering contrast
⇒ Be sure to hover your mouse over the illustration above to see this attempt at work.

This is not fractional pixel positioning! This is plain full-pixel positioning in a reflowable layout. If I truncated the advance width of the ‘l’ from 3 1/3 to 3 pixels, I wouldn’t have to accumulate all these extra shift amounts to make up for the accumulating “head starts,” yet I would achieve the exact same result.
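To see why, the sketch below extends the previous one with the per-instance shifts of this first attempt (1/3, 2/3, 3/3 of a pixel to the left, and so forth). Every stem ends up on one and the same sub-pixel phase, which is just full-pixel positioning by another name:

```python
from fractions import Fraction as F

advance    = F(10, 3)
base_shift = F(-1, 3)

for i in range(4):
    pen   = i * advance
    shift = base_shift - (pen % 1)   # first attempt: compensate each "head start"
    stem  = pen + shift              # where the optimized stem actually ends up
    print(i + 1, pen, shift, stem % 1)
# The shifts grow to 2/3, 3/3, ... of a pixel, while the stem phase is 2/3 for
# every single instance: the fractional positions have been "shifted away."
```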

But where did we make the wrong turn? Recall that second instance of the ‘l’ where I used a left shift of 2/3 of a pixel to get the desired color. The other way to fix this is to use a right shift of 1/3 of a pixel, instead. This would have given me the exact same color combinations as a left shift of 2/3 of a pixel. The color combinations repeat.

In turn, if I were to put a third ‘l’ next to the second one, I wouldn’t have to shift it at all. It is already “there.” Subsequent instances would cycle through the same pattern of shifts: 1/3 left, 1/3 right, none, and so forth (once again, be sure to hover your mouse over the illustration to see this text layout method at work).

Fractional pixel positioning with individually “hinted” instances of the lc ‘l,’ second attempt at maintaining maximum stroke rendering contrast
⇒ Once again, be sure to hover your mouse over the illustration above to see this attempt at work.

But this is not fractional pixel positioning, either! This is again plain full-pixel positioning, combined with a very crude algorithm to make the layout scalable (cf ).

The actual advance width of the ‘l’ is 3 1/3 pixels. This yields character positions that full-pixel positioning cannot use. Hence a simple way to overcome this deficit is to round these character positions to the nearest full pixel, as tabulated below.

Theoretical character position (pixel)   Rounded character position (pixel)
0                                        0
3 1/3                                    3
6 2/3                                    7
10                                       10

Fractional pixel positioning of a row of lc ‘l,’ exact and rounded character positions

In other words, the (rounded) full-pixel position is “re-sync’d” with the (exact) fractional pixel position after every character.
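In code, this “re-syncing” is nothing more than rounding the accumulated fractional position, character by character (a sketch reproducing the table above):

```python
from fractions import Fraction as F

advance = F(10, 3)          # actual advance width of the lc 'l': 3 1/3 px
for i in range(4):
    exact = i * advance     # theoretical (fractional) character position
    print(exact, round(exact))
# 0 -> 0, 3 1/3 -> 3, 6 2/3 -> 7, 10 -> 10, matching the table above
```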

The only way to do fractional pixel positioning “properly” is to abandon “optimal” stem positions. In turn, this means that for sufficiently small type sizes and device resolutions, we may have to tolerate sub-optimal rendering contrast and irregular color combinations.

Fractional pixel positioning without any attempt at maximizing stroke rendering contrast
⇒ Hover your mouse over the illustration above to see the actual sub-pixels. The strokes are 4 sub-pixels wide; rendering any of these strokes involves a width of 6 sub-pixels, yet only 2 out of the 6 sub-pixels are solid black. There are 2 “fringe” sub-pixels on either side of each 2 sub-pixel wide “black core.” Informally put, there is more “fringe” than “core.”

To recap: With fractional advance widths and fractional pixel positioning we do get black-body widths and side-bearings to add up. Potential errors are within ±1/2 of a sample or ±1/12 of a pixel for ClearType, which most likely is unnoticeable. The characters will space pretty much as designed. This includes notably kerning on screen.

The main drawback of fractional advance widths and fractional pixel positioning is the loss of stroke rendering contrast at sufficiently small type sizes and device resolutions (cf ). As we have just seen, it thwarts any intentions to carefully position strokes to “optimize” stroke rendering contrast. Informally put, at text sizes on 96 or even 120 DPI screens, text is simply rendered with more “fringe” than “core.”

The perception of sub-pixel positioned ClearType depends on the visual system of the individual reader, and it may even depend on the context in which it is used. Some readers may consider it the single-best possible way to render text on screen, some readers may be fine with it for reading continuous text but not UI captions, and some readers may find it looks a bit “washed-out,” “blurry,” “fuzzy,” or “annoyingly colorful.” What works for one reader may not work for every reader (cf ).

At the beginning of we were looking at an example where “aligning” a character with the base line and cap height changed the character’s proportions. If rendering faithful proportions is a priority, then this needs to be addressed in the constraint system. Depending once more on the context, there is an alternative approach.

The general idea is the following: The combination of designed cap height (1466 font design units), selected point size (12 pt), and device resolution (96 DPI) has scaled to a cap height of 11 29/64 pixels, as illustrated below.

Unconstrained outlines (Arial UC ‘E’ and ‘O,’ 12 pt at 96 DPI, 6x5y hybrid sub-pixel anti-aliasing)

In order to “align” this cap height to the nearest pixel boundary, the constraints will reduce the cap height to 11 pixels. In percent, this is a reduction of about 3.96%.

If we selected a point size that has been reduced from 12 pt by the same percentage, instead of selecting 12 pt, we wouldn’t have to do any “aligning” at all. In other words, if we selected about 11.53 pt at 96 DPI (or 15.37 ppem), the designed cap height would be scaled to 11 pixels even. This does not need to be “aligned” (the internal rounding accuracy of the rasterizer permitting, cf ).
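The “lucky ppem size” itself falls out of a two-line computation. The sketch below assumes Arial’s 2048 font design units per em (a typical TrueType value); the cap height of 1466 units is the one tabulated earlier in this chapter:

```python
UNITS_PER_EM = 2048   # assumed em size of Arial
CAP_HEIGHT   = 1466   # UC 'E' cap height in font design units

nominal_ppem = 12 * 96 / 72                               # 12 pt at 96 DPI = 16 ppem
cap_px       = CAP_HEIGHT * nominal_ppem / UNITS_PER_EM   # ~11.45 px = 11 29/64

lucky_ppem   = round(cap_px) * UNITS_PER_EM / CAP_HEIGHT  # scale so the cap height is exactly 11 px
print(lucky_ppem, lucky_ppem * 72 / 96)                   # ~15.37 ppem, ~11.5 pt
```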

Since both a 12 pt font with “aligning” constraints and an 11.53 pt font without these constraints would be rendered with a cap height of 11 pixels, we wouldn’t have to worry about proportions, because the fractional point or ppem size would scale both the height and the width by the same percentage.

Unconstrained outlines (same as above, except 11.53 pt)
⇒ Hover your mouse over the illustration above to revert to 12 pt

Call it a “lucky ppem size” but don’t tell anyone else how we got there…

In practice, the result of using “lucky ppem sizes” is that of “minimal hinting:”

  1. Prioritize inter-character spacing
  2. Prioritize individual character proportions
  3. Gradually “phase in” under- and overshoots
  4. Maximize stroke rendering contrast (cf ) of horizontal strokes “sitting” on the baseline or “hanging” from the x-height (lc) or cap height (UC)
Here is what using a “lucky ppem size” looks like in a short piece of text at its actual size (CAUTION: To properly view this and some of the following illustrations, be sure to double-check all your settings as per the check-list introduced in ).

Unconstrained outlines, rendering a short piece of text at a “lucky ppem size,” using fractional advance widths and fractional pixel positioning (Arial 11.53 pt at 96 DPI, no kerning)
⇒ Hover your mouse over the illustration above to see how the constrained (“hinted”) outlines of the RTM version of Arial render the same piece of text at 12 pt

Be sure to hover your mouse over the above illustration to compare the “lucky ppem size” with the RTM version of Arial. Notice how the RTM version produces a longer line of text with taller, “skinnier” characters?

The RTM version is a lot closer to the designed line length while the “lucky ppem size” version is a lot closer to the designed character proportions. To make it easier to see this, the following illustration “zooms” into a part of the above illustration. Just be sure to hover your mouse over the illustration to see the differences.

Unconstrained outlines, rendering a short piece of text at a “lucky ppem size,” using fractional advance widths and fractional pixel positioning (same as above, except enlarged to 400%)
⇒ Once again hover your mouse over the illustration above to see how the RTM version of Arial renders this at 12 pt

Notice how characters like the lc ‘u’ or ‘n’ are taller and “skinnier” in the RTM version? Notice, in particular, the lc ‘c.’ A “Typophile” may “confuse” this rendition with a different font!

Notice also how the horizontal stroke partitions the lc characters ‘a’ and ‘e.’ Particularly for the lc ‘a’ the proportions seem “skewed” in the RTM version, while for the lc ‘e’ the crossbar may appear to be a bit “fuzzy” in the “lucky ppem size” version. The latter is because there are no constraints to optimize stroke rendering contrast (cf ).

Unconstrained (“unhinted”) outlines (Arial lc ‘e,’ 11.53 pt at 96 DPI)
⇒ Hover your mouse over the illustration above for the constrained (“hinted”) RTM version

This concern may depend on the individual viewer’s level of tolerance (cf also ): the crossbar of the above lc ‘e’ indeed does not render with a “black core.” As an aside, the “tail” of the ‘e’ appears to “close up” with the rest of the ‘e.’

Once “lucky ppem sizes” are applied to fonts with much more pronounced stroke design contrasts (cf ) than Arial, the absence of a minimum distance constraint can dominate the outcome. For unfortunate combinations of type sizes and screen resolutions this may make it very hard to distinguish a lc ‘e’ from a lc ‘c,’ as illustrated below.

Unconstrained (“unhinted”) outlines (Times New Roman lc ‘e’ and ‘c,’ 10.05 pt at 96 DPI) both enlarged (top) and at original size (bottom)
⇒ Hover your mouse over the illustration above for the constrained (“hinted”) RTM versions

The crossbar that partitions the lc ‘e’ surely appears a bit “washed-out.” But, as we will see shortly, this “effect” is not limited to the lc ‘e.’ Sufficiently small type sizes in general tend to appear “faded,” notably for nominally black text rendered against a white background (cf also and ).

Moreover, while the “lucky ppem size” approach inherently preserves the proportions of all characters (as long as we don’t do anything else with them), and while it preserves the inter-character spacing (because it uses fractional advance widths and fractional pixel positioning), it nevertheless compromises the rendered advance widths (or entire line lengths), as illustrated below.

Unconstrained outlines, rendering a short piece of text at a range of nominal point sizes (8 pt to 24 pt) that have been “snapped” to the nearest “lucky ppem size,” using fractional advance widths and fractional pixel positioning (Times New Roman, 96 DPI, no kerning)
⇒ Hover your mouse over the illustration above to compare to unconstrained outlines without any “snapping” to the nearest “lucky ppem size.” Notice how the line lengths increase much more “continuously”

Recall the first example at the beginning of this sub-section. We looked at the difference between the unconstrained and the “aligned” cap height and determined that we were going to use 11.53 pt instead of 12 pt. This “fixed” the proportions, and with the help of fractional advance widths and fractional pixel positioning, this also “fixes” the inter-character spacing.

But: Once we apply this point size reduction to the entire alphabet, every character is “too short” by 3.96% (the reduction from 12 pt to 11.53 pt). In a context where the layout reflows gracefully, this is a perfectly good start. But in a context where text is laid out linearly, this exacerbates the scalability problem.

Think of it this way: To make the layout scalable, the layout software knows the designed advance width of each character. It uses these design widths to calculate the layout. This includes notably the line breaks and the page breaks.

In order to display the layout at a particular resolution, the software then translates or scales all these design widths to the targeted device resolution (cf )—say the above 96 DPI. At that point the layout software already knows that, by design, such-and-such character is supposed to be x number of pixels wide, where x may include a fractional part.

Now the layout software asks the rasterizer to render such-and-such character at the above 12 pt and 96 DPI. However, the constraints of the character override this point size and make it 11.53 pt to preserve the proportions and the spacing. The rasterizer then delivers the character to the layout software. But the delivered character is “too short.”

In turn, now the layout software must somehow “absorb” this systematic error. Every character it asks for will be “too short.” If the text is justified, this is not going to make for “tidy” word gaps. Surely this can’t be good.

Conversely, “aligning” the cap height could just as well yield a “lucky ppem size” that is larger than the nominal one, as illustrated below.

Using a “lucky ppem size” for rendering a Calibri lc ‘z’ at a nominal 14 ppem results in an effective ppem size of 15.07
⇒ Hover your mouse over the illustration above to revert to 14 ppem

The example above shows Calibri, a lc ‘z’ rendered at a nominal 14 ppem. At this ppem size, the x-height translates to 6 1/2 pixels. Using a “lucky ppem size” to “snap” this x-height to 7 pixels changes this to an effective ppem size of 15.07 (!)
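With only the two numbers quoted in the preceding paragraph, the effective ppem size follows from a simple proportion. In this sketch the 6 1/2 px value is itself a rounded figure, which accounts for the small difference from the quoted 15.07:

```python
nominal_ppem = 14
x_height_px  = 6.5    # Calibri x-height as scaled at 14 ppem (rounded value from the text)
target_px    = 7      # "snap" the x-height to the next full pixel

lucky_ppem = nominal_ppem * target_px / x_height_px
print(lucky_ppem)     # ~15.08, i.e. the effective 15.07 ppem quoted above
```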

If the constraints were to override the targeted 14 ppem size to do just that, namely to increase it to 15.07 ppem, every character would be “too long” by a wide margin. This could obliterate the word gaps altogether (cf ). This can’t be good, either.

Notice that there is nothing wrong with rendering the above lc ‘z’ with an x-height of 7 pixels. The RTM version of Calibri does just that. In fact, it renders 14 ppem, 15 ppem, and 16 ppem with the same x-height of 7 pixels, because—in terms of full pixels—differences in x-heights of less than 1/2 a pixel cannot be rendered.

The main difference between “lucky ppem sizes” and the RTM version is prioritizing proportions over linear advance widths, as illustrated below.

Using a “lucky ppem size” for rendering a Calibri lc ‘z’ at a nominal ppem size of 14, 15, and 16, respectively (left-to-right). Aside from minor differences in stroke weights, the 3 renditions appear substantially equal.
⇒ Hover your mouse over the illustration above to revert to the respective RTM versions of “Calibri.” While the renditions of the 3 x-heights are equal, the proportions (and hence the advance widths and side-bearings) aren’t. This is neither better nor worse—it merely represents a different set of priorities or “trade-offs.”

Given the appropriate context, using “lucky ppem sizes” looks like an interesting opportunity, but on its own it is hardly the silver bullet.

Yet there is a perfectly valid application of fractional ppem sizes even—or particularly—in a scalable layout scenario. To illustrate the point, let’s have a look at how a range of point sizes translates to ppem sizes for two popular screen resolutions.

Point Size   Ppem Size at 96 DPI   Ppem Size at 120 DPI
8            10 2/3                13 1/3
9            12                    15
10           13 1/3                16 2/3
11           14 2/3                18 1/3
12           16                    20

Translation of some (integer) point sizes to exact (fractional) ppem sizes for device resolutions of 96 and 120 DPI

Let’s say some text is laid out in 11 pt type on a 96 DPI screen. The layout software asks the rasterizer for characters at 11 pt and 96 DPI. As previously mentioned, it already knows what advance width it expects. Per the above table, 11 pt at 96 DPI translates to 14 2/3 ppem.

What happens next depends on the individual fonts. Most TrueType fonts prefer not to handle fractional ppem sizes, hence they instruct the rasterizer to round the ppem size to the nearest integer. In this case, much like the “lucky ppem size,” the 14 2/3 ppem are rounded to 15 ppem.

In other words, the layout software asks for an 11 pt font but is systematically delivered characters corresponding to an 11 1/4 pt font. Every character it asks for will be “too long.” This sounds more like an “unlucky ppem size.”
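The round trip behind this “unlucky ppem size” is easy to retrace; the sketch below is plain arithmetic, not any particular rasterizer’s interface:

```python
def to_ppem(points, dpi):
    return points * dpi / 72.0

def to_points(ppem, dpi):
    return ppem * 72.0 / dpi

exact_ppem   = to_ppem(11, 96)        # 14.666... ppem, i.e. 14 2/3
rounded_ppem = round(exact_ppem)      # 15 ppem, as most TrueType fonts request
print(to_points(rounded_ppem, 96))    # 11.25 pt: every character comes back about 2.3% "too long"
```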

Again, the layout software must somehow “absorb” this systematic error, and in turn the word gaps are at risk of being obliterated (cf ). Consequently, to make your font work well in a scalable layout, it would help to allow fractional ppem sizes.

In the case of TrueType fonts this may have an unexpected side-effect. It will make it difficult, if not impossible, to use so-called Delta Exception Instructions. Delta exceptions, or deltas for short, are instructions that allow a control point to be arbitrarily dislocated at a particular ppem size. Deltas are sometimes used to “fix” crossbars that are in the wrong position (cf ), or in general, to “fix” unattractive pixel patterns (cf ).

The problem with fractional ppem sizes and deltas is that deltas expect integer ppem sizes, not fractional ones. If a fractional ppem size is used anyway, conceptually the delta instruction will temporarily round it to the nearest integer. In turn, this means that a delta instruction cannot distinguish between e.g. 14 2/3 ppem and 15 ppem.
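Conceptually, the collision looks like this (a sketch; real delta instructions live inside the TrueType interpreter, not in a function like the one below):

```python
def delta_applies(delta_ppem, current_ppem):
    # Deltas are keyed to integer ppem sizes; a fractional ppem size is
    # conceptually rounded to the nearest integer before the comparison.
    return round(current_ppem) == delta_ppem

print(delta_applies(15, 14 + 2/3))   # True: 11 pt at 96 DPI
print(delta_applies(15, 15))         # True: 9 pt at 120 DPI
# The same delta fires in both cases, although the two sizes rasterize quite differently.
```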

However, a character rendered at 14 2/3 ppem and 15 ppem can produce two surprisingly different pixel patterns, as illustrated below.

Fractional ppem sizes can make it impossible to use “Delta Instructions.” This illustration shows Arial, figure ‘6,’ rendered in bi-level at 11 pt at 96 DPI (14 2/3 ppem, left) and at 9 pt at 120 DPI (15 ppem, right), and constrained with “basic hints.” Both sets of pixels would have to be “fixed” with the same (!) set of “Delta Instructions”

In both instances of the figure ‘6’ illustrated above, the delta instructions would have to be the same, because in both instances they will be applied to a (rounded!) ppem size of 15. Yet they would have to “fix” substantially different pixel patterns. This seems impossible.

To sum this up: With fractional ppem sizes we can get perfect proportions without further constraints. Doing so requires overriding the requested ppem size, which in general only works well in the context of a reflowable layout. In a scalable layout, fractional ppem sizes can help the scalability of the layout. Doing so requires using ppem sizes that reflect a given point size and device resolution more accurately than integer ppem sizes could.

Both scenarios have in common that it will be all but impossible to use TrueType’s delta instructions. However, this need not be a deterrent. As we will see in , it is often possible to use a more generic approach to “fixing” crossbar positions and unattractive pixel patterns. This both supports fractional ppem sizes and saves the time to code and maintain inordinately large numbers of delta instructions.

*****

In this chapter we have looked at some—if not most—of the essential opportunities offered by anti-aliasing methods. These opportunities are not necessarily listed in the order of their priority. Rather, I started with oversampled “scalability” and the Anti-Aliasing “Exclusion Principle” and then proceeded to discuss progressively higher level concepts in terms of their building blocks. I chose this order because it seemed easier to explain these concepts “from the bottom up.”

To close this chapter, let me summarize how anti-aliasing performs on the four fundamentals of rendering fonts on today’s screens (cf end of chapter ):

  1. There are still not enough pixels but the ones we have are not always too large.
  2. We have to exaggerate fewer features to render them with such pixels.
  3. We have to distort some features a lot less to force others to be tolerable.
  4. All these and all the new workarounds still don’t work equally well in all contexts.

In the next two chapters we will have a look at how the rasterizer can throw a monkey wrench into the best of our intentions, and how the most legibly constrained font can become hard to read through a chain of contextual misfits.
