The Raster Tragedy at Low-Resolution Revisited:
Opportunities and Challenges beyond “Delta-Hinting”

In the previous chapter we looked at the fundamentals of sampling outline fonts. In the process, the terms sample and pixel were used fairly casually, as if they were interchangeable. Strictly speaking, this is incorrect: A pixel is the representation or rendition of one or more samples.

Granted, when samples are rendered as “little black squares,” as in “black-and-white” rendering, this seems like splitting hairs. But not all pixels are “little black squares,” nor do they always represent a single sample only. Therefore we will now have a look at various methods for turning one or more samples into a pixel—or various rendering methods for short.

Chances are that you are reading this text on a computer with a flat panel screen, such as a Liquid Crystal Display (LCD). An LCD consists of a uniformly backlit surface covered by a two-dimensional array or grid of “valves” or “shutters.” Each “shutter” can be controlled independently to block the light, to transmit the light, or almost anything in between.

Each “shutter” represents exactly one and only one pixel. Open “shutter” no. (123, 456) and the pixel located 123 columns over and 456 rows down comes on. Close that shutter and that same pixel goes off again. On LCDs, black pixels against a white background indeed look pretty much like “little black squares.”

Before flat panels, computers came with Cathode Ray Tubes (CRTs). In a CRT, electrons are beamed at the back of a glass screen on the inside of an evacuated glass container (the tube). The back of that screen is coated with a fluorescent material. Once the electrons hit that coating, they make it glow. More electrons make it glow brighter, fewer make it glow dimmer. This results in a more or less clearly defined round dot visible on the front of the CRT.

This dot is the CRT’s representation of a pixel. But unlike LCDs with a separate “shutter” for each pixel, the CRT has only one beam of electrons. Hence, strictly speaking, a CRT can only represent one pixel at a time. To represent another pixel, the beam has to be turned off, deflected to another position, and then turned back on again. If this is done systematically, row-by-row, turning the beam on and off as appropriate, and repeated all over at a very rapid rate, it will be perceived as a coherent image of many pixels.

Conceptually, there is a fundamental difference between LCDs and CRTs. On an LCD, the position of a pixel is exact, while on a CRT it is merely precise (cf ). Moreover, pixels on an LCD may look incredibly “sharp” compared to the slightly blurry pixels of a CRT.

This is a bit of a mixed blessing, though. On the one hand, the “sharp” pixels appear to make text more contrasty. On the other hand, they exacerbate the pixelated appearance of text compared to text on a CRT.

The 3 characters ‘H,’ ‘O,’ and ‘V,’ featuring straight, round, and diagonal strokes, and rendered with “sharp” square pixels to simulate text on an LCD.
⇒ Hover your mouse over the illustration to see what the exact same set of samples looks like when rendered with “blurry” round pixels to simulate text on a CRT.

If nothing else, the “sharp” LCD pixels are a close approximation of pixels as “little black squares.” The “little black squares” serve as a convenient model for understanding reality. There are enough raster tragedies left to investigate before obsessing over pixel shapes, which in the end we cannot change anyway.

With the model of pixels as “little black squares” in mind, the simplest rendering method is to have each pixel render exactly one sample. Since the entire pixel is completely defined by that sample, I’ll call it a full-pixel method. Likewise, since that sample can only assume one of two values (“in” or “out”), the entire pixel can assume one of two values only (“on” or “off”). Accordingly, I’ll call this method a bi-level method.

Colloquially, this full-pixel bi-level rendering (FPBLR) method is often referred to as “black-and-white” rendering (or “b&w” for short). Notice however that this may be a bit confusing, since black-and-white movies or photographs usually contain many intermediate shades of gray, which is not bi-level. Likewise, bi-level can mean e.g. yellow text on a blue background, which is neither black nor white.

Let’s look at this rendering method in terms of the pixel as the result of sampling. The pixel is “on” if the center of the pixel is “in”—else the pixel is “off.” The center of the pixel is the spot where the outline was sampled. This leads to a crude notion of pixel coverage: If the center of the pixel is “in,” then chances are that at least half of the pixel is “in” as well, and vice-versa.

Thus, a pixel which is “on” may be interpreted as “at least half of the pixel is covered by the outline.”
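
To make this concrete, here is a minimal sketch of full-pixel bi-level rendering in Python. The inside(x, y) point-in-outline test is a hypothetical stand-in for what a real rasterizer computes from the outline’s contours (e.g. via a winding rule):

```python
# A minimal sketch of full-pixel bi-level rendering (FPBLR).
# inside(x, y) is a hypothetical point-in-outline test; real rasterizers
# derive it from the outline's contours (e.g. via the non-zero winding rule).

def render_bilevel(inside, width, height):
    """One sample per pixel, taken at the pixel's center:
    the pixel is "on" if and only if its center is "in"."""
    bitmap = []
    for row in range(height):
        scanline = []
        for col in range(width):
            # Pixel (col, row) spans [col, col+1) x [row, row+1);
            # its center is at (col + 0.5, row + 0.5).
            scanline.append(1 if inside(col + 0.5, row + 0.5) else 0)
        bitmap.append(scanline)
    return bitmap

# Example: a circular "outline" of radius 4 on an 8x8 pixel grid.
circle = lambda x, y: (x - 4.0) ** 2 + (y - 4.0) ** 2 <= 16.0
for scanline in render_bilevel(circle, 8, 8):
    print("".join("#" if pixel else "." for pixel in scanline))
```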

A diagonal stroke in full-pixel bi-level rendering: a pixel is “on” if at least half of the pixel is covered by the outline. Specifically, notice the two adjacent pixels in the middle of this particular diagonal stroke. For both of these pixels exactly half of the pixel is covered by the outline, hence both of them are “on.”

We will look at a refinement of this notion of pixel coverage next.

In the previous section, we looked at the pixel as the result of one single sample taken at the center of the pixel. This led to a simple, albeit crude, notion of pixel coverage. To refine this crude approach, let’s assume that samples were taken at more than one spot within the pixel. For instance, let’s assume that four samples were taken, like so:

A single pixel, sampled at 4 different spots, as if the pixel had been partitioned into 2×2 = 4 equal parts. The two samples on the left are “out,” while the two samples on the right are “in.”
⇒ Hover your mouse over the illustration to see the same pixel sampled at 1 single spot, without partitioning. This sample is “in.”

The same diagonal stroke as above, but sampled at 4 different spots per pixel. The solid dots represent “in” samples, the others are “out.”
⇒ Hover your mouse over the illustration to see the same diagonal stroke sampled at one spot per pixel.

Each of these four samples can now contribute a quarter of the pixel coverage. For instance, if two out of four samples are “in” then half the pixel is covered by the outline. Likewise, if only one out of four samples is “in” then only a quarter of the pixel is covered, etc.

Generally, the number of “in” samples is proportional to the pixel coverage. But instead of the all-or-nothing approach of a single sample per pixel, which yields a pixel coverage of either 0% or 100%, the pixel coverage can now be determined with intermediate values 0%, 25%, 50%, 75%, and 100%.
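
As a sketch, the 2×2 sampling scheme just described might look like this in Python, reusing the hypothetical inside(x, y) test from above:

```python
# Sketch of 2x2 oversampling: four samples per pixel, each contributing
# a quarter of the pixel coverage.

def pixel_coverage_2x2(inside, col, row):
    """Return the coverage of pixel (col, row): 0.0, 0.25, 0.5, 0.75, or 1.0."""
    # Sample at the centers of the four quadrants of the pixel.
    offsets = (0.25, 0.75)
    hits = sum(inside(col + dx, row + dy) for dy in offsets for dx in offsets)
    return hits / 4.0  # the number of "in" samples is proportional to coverage
```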

What is left to do is to translate this coverage to a notion of “on-ness” for the pixel (how much “on” is the pixel?). Previously, in the all-or-nothing approach, a pixel coverage of 0% meant a pixel was fully “off” or white and a coverage of 100% meant a pixel was fully “on” or black. We keep white and black, and we add intermediate levels of gray (light gray, middle gray, and dark gray) like so:

Pixel Coverage    Shade of Gray
--------------    -------------
      0%          white
     25%          light gray
     50%          middle gray
     75%          dark gray
    100%          black

Pixel coverages 0%, 25%, 50%, 75%, and 100%, along with their corresponding shades of gray (including white and black)

4 samples, each of which can be “in” or “out,” combine in 16 different ways. Following is a table that lists all 16 combinations, along with the corresponding shade of gray, and the degree of pixel coverage that each combination represents.

Combinations of “in” and “out” Samples    Pixel Coverage    Shade of Gray
--------------------------------------    --------------    -------------
0 of 4 samples “in” (1 combination)             0%          white
1 of 4 samples “in” (4 combinations)           25%          light gray
2 of 4 samples “in” (6 combinations)           50%          middle gray
3 of 4 samples “in” (4 combinations)           75%          dark gray
4 of 4 samples “in” (1 combination)           100%          black

Combinations of “in” and “out” samples that correspond to the pixel coverages of 0%, 25%, 50%, 75%, and 100%, and hence yield the same shade of gray.

Notice that for a pixel to turn out middle gray, two out of four samples must be “in,” but it does not matter which two of the four. Any two will do (but no more and no fewer than two). For instance, it doesn’t matter if the two samples on the left are “in” or the two on the right. Both combinations of two “in” and two “out” samples will turn out the same.

Likewise, for a pixel to turn out light gray, it doesn’t matter which of the four samples is “in,” as long as one (but only one) sample is “in.” This kind of detail cannot be distinguished. After all, we still have only one pixel for all four samples. But these intermediate shades of gray can better represent how much of the pixel is covered by the outline.
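
For the record, the grouping of the 16 combinations by their number of “in” samples is easily verified:

```python
# Group all 16 combinations of four "in"/"out" samples by their number
# of "in" samples, i.e. by the pixel coverage they represent.
from collections import Counter
from itertools import product

counts = Counter(sum(combo) for combo in product((0, 1), repeat=4))
for ins in sorted(counts):
    print(f"{ins} of 4 samples 'in': {counts[ins]:2d} combination(s) "
          f"-> {ins * 25}% coverage")
# Prints 1, 4, 6, 4, and 1 combinations for 0%, 25%, 50%, 75%, and 100%.
```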

The same diagonal stroke as above, but rendered with intermediate shades of gray, representing partial pixel coverages.
⇒ Hover your mouse over the illustration to see the bi-level rendition, which cannot represent partial pixel coverage.

In technical terms, what we have just done is oversampling the outline, followed by downsampling. Oversampling is the term for taking samples at a higher rate than the rate of the targeted screen (or printer). In other words, we are taking more samples than there are pixels. Downsampling simply denotes the act of reducing or decimating a number of samples to one single sample.

Moreover, in the process of downsampling, we have applied a simple anti-aliasing filter. Simply put, an anti-aliasing filter is a method that removes details that cannot be resolved by the targeted sampling rate and replaces them by something simpler.

For instance, in the above scheme, the detail of two “in” and two “out” samples cannot be resolved by the targeted sampling rate. We cannot turn one half of a pixel fully “on” and leave the other half fully “off.” But this detail can be replaced by a full pixel that is “a little bit on,” i.e. a pixel whose brightness is halfway between white and black.

Notice that the goal of anti-aliasing is not (and cannot be) to increase the actual resolution of the target device. Anti-aliasing cannot increase the number of pixels or dots per inch on the screen. The goal is to prevent or reduce artifacts that result from an insufficient sampling rate, such as equal stems being sampled with an unequal number of samples, or vice-versa, and generally to “smooth” the stair-stepped appearance. Anti-aliasing tries to replace such artifacts by something that in some sense looks “nicer.”

As with the full-pixel bi-level rendering method, this method renders a full pixel at a time, but unlike the bi-level method, it anti-aliases several samples into this full pixel. Thus I’ll call this method a full-pixel anti-aliasing (FPAA) method. Colloquially, this method is frequently referred to as “gray-scaling” or, more generically, as “font smoothing.” For most practical purposes, these are equivalent terms. To distinguish it from the method introduced in the next section, the term standard anti-aliasing may be used as well.

Up until now we have looked at a very simple anti-aliasing filter. The outlines were oversampled at the rate of 4 samples per pixel—two in x-direction times two in y-direction, or 2x2y for short. This yields 16 combinations of samples and 5 levels of gray (in this context, I’ll call white and black levels of gray, too).

To obtain a finer gradation of gray levels we simply have to oversample at a higher rate. For instance, if we did 4x4y (4 in x-direction times 4 in y-direction), we would get 16 samples per pixel and 65536 combinations of samples. I won’t tabulate them all… Suffice it to note that this yields 17 levels of gray (again including white and black).

In general, the number of gray levels equals the total number of samples taken in a pixel plus one. In turn, the total number of samples in a pixel equals the number of samples in x-direction times the number of samples in y-direction. Thus, 8x8y yields 65 levels of gray (8 × 8 + 1 = 65) and 16x16y would yield 257 levels of gray, which is more than most screens or humans can distinguish.
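
Generalizing the earlier 2×2 sketch to an arbitrary nx×ny oversampling rate, with a plain per-pixel average of the samples, could look like this (again assuming the hypothetical inside(x, y) test):

```python
# Sketch of nx-by-ny oversampling followed by per-pixel averaging.
# Averaging hits over nx*ny samples yields nx*ny + 1 possible gray
# levels per pixel: 2x2y -> 5, 4x4y -> 17, 8x8y -> 65, 16x16y -> 257.

def render_grayscale(inside, width, height, nx=4, ny=4):
    """Return per-pixel coverages in [0.0, 1.0] (0 = white, 1 = black)."""
    image = []
    for row in range(height):
        scanline = []
        for col in range(width):
            hits = 0
            for j in range(ny):
                for i in range(nx):
                    # Sample the centers of the nx*ny sub-areas of the pixel.
                    if inside(col + (i + 0.5) / nx, row + (j + 0.5) / ny):
                        hits += 1
            scanline.append(hits / (nx * ny))
        image.append(scanline)
    return image
```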

The same diagonal stroke as above, rendered in full-pixel anti-aliasing with 2, 5, 17, and 65 levels of gray (from left to right). Notice that 2 levels of “gray” correspond to bi-level rendering. Notice also that the difference between 17 and 65 levels of gray is already very subtle. Accordingly, 257 levels of gray weren’t included in this illustration.

By now we understand the simple anti-aliasing filter that relates the sampled coverage of a pixel to a level of gray. The technical term for this kind of filter is box filter. The box filter is an easy-to-understand (and notably easy-to-explain) anti-aliasing filter, but it is not the only way to decimate multiple samples into a single sample.

Some filters include samples from adjacent pixels in the process. This looks as if the pixels were “leaking” or “bleeding” ink, but the overall appearance may be smoother than when using a simple box filter. Some filters additionally weight the contributions of samples in a non-uniform way, giving preference to the samples towards the center of a pixel. One such filter is the sinc filter.

It is beyond the scope of this website to discuss these advanced filters in great detail. As with the pixel shapes, there are enough raster tragedies left to investigate before obsessing over fancy filters. However, with plenty of examples, we will discuss how the nature of these filters interacts with the kind of constraints applied to the outlines.

In the previous section we looked at the basics of anti-aliasing for reducing the artifacts of insufficient sampling rates. While anti-aliasing can make fonts look smoother, it cannot increase the physical resolution of the targeted device (that is, its DPI). In this section, we will look at a method that tries to increase the perceived resolution of the display device.

You might remember from science class that when you shine a red, a green, and a blue flashlight at the same spot in darkness, you get white. This is called additive color mixing: Each flashlight adds a particular hue of light to the mix. The three colors involved to obtain white are called the additive primary colors.

Shining the three primary colors red, green, and blue into darkness: Red and green combine to yellow, red and blue combine to magenta (fuchsia), green and blue combine to cyan (aqua), and all three together combine to white

Notice that art class teaches its own model for mixing colors, suitable for applying paints to a white canvas and the like. Every application of paint removes or subtracts from the ability of the white canvas to reflect the white sunlight, hence the name subtractive color mixing. This is not the same as additive mixing: It uses different primary colors, and they mix differently.

Most computer screens produce luminous dots or pixels (backlit screens). Accordingly, to produce color pixels, they use the additive primary colors. Specifically, color LCDs comprise (at least) 3 “shutters” or sub-pixels within each pixel. In turn, each of these 3 sub-pixels gets its own color, conceptually speaking. When fully “on,” it will shine in its respective primary color. Conversely, when fully “off,” it will remain black.

Unlike the experiment from science class, these 3 sub-pixels shine in parallel—side-by-side—they’re not (and physically cannot be) located in the same spot. The reason why we still see white instead of the 3 primary colors of the individual sub-pixels is this: From a typical viewing distance, broadly speaking, the sub-pixels are too small to tell apart. To get an idea of scale, let’s have a look at a typical color LCD pixel:

A theoretical representation of a color LCD pixel, illustrating the relative locations of the red, green, and blue sub-pixels.
⇒ Hover your mouse over the illustration to see an actual pixel, taken from a close-up photo of a commercially available LCD. Close—but surprisingly “unrectangular.”

At 96 DPI, each pixel measures 1/96 of an inch, or about 0.265 mm. The width of a sub-pixel is a third of that, or about 0.088 mm (88 µm). The healthy human visual system can tell things apart if they are spaced by at least 1 arc minute (1/60 of a degree). Accordingly, to tell two adjacent sub-pixels apart, they would have to be viewed from about 0.3 m (1 foot) or less.
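
For the record, this viewing distance follows from the small-angle relation, with s ≈ 0.088 mm for the sub-pixel width:

```latex
d = \frac{s}{\tan(1')}
  \approx \frac{0.088\,\mathrm{mm}}{\tan\left(\tfrac{1}{60}^\circ\right)}
  \approx \frac{0.088\,\mathrm{mm}}{0.000291}
  \approx 302\,\mathrm{mm}
  \approx 0.3\,\mathrm{m}
```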

So, if we viewed a pixel on a 96 DPI color LCD, with its individual red, green, and blue sub-pixels turned on, from at least 0.3 m (12 inches), we should see a white pixel. Now, to get white, it doesn’t matter in what order the sub-pixels are arranged. It could be red, followed by green, followed by blue (RGB), or it could be blue, followed by green, followed by red (BGR), or any other permutation.

In turn, if we knew the actual sequence, we could use this to turn on only the rightmost two of the 3 sub-pixels, along with the leftmost sub-pixel of the adjacent pixel to the right. Likewise, we could turn on only the rightmost sub-pixel, along with the “following” two sub-pixels to the right. In each of the above cases this should yield white, as illustrated below:

The three primary colors red (R), green (G), and blue (B) combine to white, regardless of the sequence (permutation) of colors:
RGB, GBR, and BRG all combine to white.
⇒ Hover your mouse over the illustration to see the resulting white pixels. On the diagonal line on the right it appears as if white pixels are displaced in increments of 1/3 of a pixel.

CAUTION: To see any benefits of sub-pixel anti-aliasing when viewing the original-size white lines at the bottom of the above illustration, make sure that:

  1. your browser’s “Zoom” is set to 100%,
  2. your LCD is set to its “Native Resolution,” and
  3. your LCD’s “Sub-Pixel Structure” is RGB, with the sub-pixels oriented vertically.

While it is straightforward to reset your browser’s “Zoom” and to set your LCD to its “Native Resolution,” your LCD’s “Sub-Pixel Structure” may need a little more work. On a desktop computer make sure that your LCD is in “landscape” orientation, as opposed to “portrait” orientation. This shouldn’t be an issue on a conventional laptop, but be sure to orient tablet PCs and laptops that convert to tablets accordingly.

At this point in the setup process you should have your “Zoom” at 100%, your LCD resolution set to its “Native Resolution,” and your LCD sub-pixels turned vertically, as in the previously illustrated single pixel. But there is still a fair chance that the sub-pixels within a pixel appear in BGR order, as opposed to RGB order.

Therefore, I'll repeat the above illustration, tailored specifically to BGR screens.

The same illustration as above, except that it is targeted specifically at BGR screens.
⇒ Hover your mouse over the illustration to revert to the RGB version.

Still in doubt about your LCD being BGR vs. RGB? Below is a juxtaposition of the original-size white diagonal line, in plain full-pixel bi-level rendition, followed by both RGB and BGR sub-pixel renditions.

The original-size diagonals of the previous illustrations, rendered in full-pixel bi-level, followed by both RGB and BGR sub-pixel “displacement” (left to right). On both an RGB and a BGR screen, the left diagonal should look “plain pixelated.” On an RGB screen, the middle diagonal should look “smooth” or at least “less pixelated” than the left diagonal stroke, while the right diagonal should look “fuzzy.” On a BGR screen, these roles should be reversed: The middle diagonal should look “fuzzy” while the right diagonal should look “smooth” or at least “less pixelated” than the left diagonal stroke.

It surely appears that pixels can be displaced in increments of 1/3 of a pixel, and hence that this triples the resolution (DPI) of the screen—at least for positioning pixels in x-direction! Granted, tripling the resolution for positioning pixels in x-direction won’t take the sampling rate up to the 1024 DPI requested in , but it would get us quite a bit closer. At least in theory, this looks like a promising start.

In practice, however, we’re not always displaying white characters against a black background. Moreover, we may want to make use of both “incremental” stem weights (cf ) and “incremental” stem positions (cf ). This complicates matters beyond the seemingly trivial swapping of white for black and vice-versa.

To understand these complications, notice the small but important difference between a sub-pixel that is “on” and one that is “off:” The “off” sub-pixel will always be black, regardless of whether it is a red, a green, or a blue sub-pixel. By contrast, the “on” sub-pixel will always be one of the three primary colors red, green, or blue. But it takes the three of them in very close proximity to “make” white. Any “leftovers” may show up:

RGB BGR

A vertical stroke, 10 pixels tall, rendered in black against a white background, using “incremental” stroke weights (left to right) and at “incremental” stroke positions (top to bottom). Depending on your viewing distance, your screen’s resolution (DPI), your visual acuity, and generally your color vision, you may perceive more or less pronounced “color fringes.”

On an LCD with RGB sub-pixel sequence, displacing a black pixel to the right by 1/3 of a pixel leaves a fully lit red sub-pixel to the left of the displaced black pixel. This is the same as an entire pixel in red. Likewise, it leaves two fully lit sub-pixels to the right, shining in green and blue respectively. Together, they add to an entire pixel in cyan (or aqua). Neither of these pixels is white or black.

Displacing that same black pixel to the right by 2/3 of a pixel leaves a fully lit pair of red and green sub-pixels to the left, which add up to yellow, and a fully lit blue sub-pixel to the right. Again, neither of these “leftovers” is white or black, as illustrated below:

Displacing black pixels in increments of 1/3 of a pixel against a white background on an RGB screen is almost the same as displacing white pixels against a black background. Almost—but not the same: The “leftovers” should explain the “color fringes” you may perceive on the previous illustration.
⇒ Hover your mouse over the illustration to see how we ended up with these “leftovers.”
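
The bookkeeping behind these “leftovers” can be sketched in a few lines of Python. The 3-sub-pixels-per-pixel model and the color names are, of course, simplifications:

```python
# Sketch of the "leftovers" when a black pixel on a white background is
# displaced in increments of 1/3 pixel on an RGB screen. Each pixel is
# modeled as 3 sub-pixels; a lit sub-pixel contributes its primary color.

NAMES = {frozenset(): "black", frozenset("R"): "red",
         frozenset("G"): "green", frozenset("B"): "blue",
         frozenset("RG"): "yellow", frozenset("RB"): "magenta",
         frozenset("GB"): "cyan", frozenset("RGB"): "white"}

def displace_black_pixel(shift_thirds, pixels=3):
    """Blacken 3 consecutive sub-pixels on an all-white row, displaced by
    shift_thirds/3 of a pixel from the middle pixel's boundary."""
    row = list("RGB" * pixels)              # every sub-pixel lit: all white
    start = 3 + shift_thirds                # the middle pixel, displaced
    for i in range(start, start + 3):
        row[i] = None                       # "black" covers this sub-pixel
    # Regroup the sub-pixels into full pixels and name each pixel's color.
    return [NAMES[frozenset(c for c in row[p:p + 3] if c)]
            for p in range(0, len(row), 3)]

print(displace_black_pixel(0))  # ['white', 'black', 'white']
print(displace_black_pixel(1))  # ['white', 'red', 'cyan']    (1/3 shift)
print(displace_black_pixel(2))  # ['white', 'yellow', 'blue'] (2/3 shift)
```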

Bummer! It seems that there is this extra resolution available from the sub-pixels that somehow we should be able to harness—if it weren’t for those “pesky color leftovers.”

What to do? Strictly speaking, there is no solution. We can’t get rid of the colors completely—unless we physically convert the LCD to monochrome. But, as a workaround, we can try to “smudge the line” between white and black.

In the process, we will mute the effect of color fringes to a tolerable level, but we won’t eliminate them altogether (cf also ). The keywords here are once again workaround and tolerable.

The method I am familiar with is ClearType. Similar to full-pixel anti-aliasing, ClearType starts out with oversampling. However, unlike the former, subsequent downsampling is performed separately on the individual sub-pixels.

In the process of downsampling, an anti-aliasing filter is used. The exact nature of this filter is beyond the scope of this website. If I remember correctly, it is based upon a perceptual model of the human visual system.
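
As a rough sketch of the general scheme (and emphatically not the actual ClearType filter, which, as noted, is perceptually derived), per-sub-pixel downsampling with an illustrative 5-tap filter might look like this:

```python
# Sketch of asymmetric sub-pixel anti-aliasing (ASPAA): oversample 3x in
# the x-direction (one coverage sample per sub-pixel), then filter each
# sub-pixel together with its neighbors. The 5-tap weights below are
# purely illustrative; the actual ClearType filter is different.

WEIGHTS = (1 / 9, 2 / 9, 3 / 9, 2 / 9, 1 / 9)  # sum to 1; span +/-2 sub-pixels

def aspaa_scanline(inside, width, y):
    """Return filtered coverages per sub-pixel (R, G, B, R, G, B, ...)."""
    n = 3 * width
    # One sample per sub-pixel, taken at the sub-pixel's center.
    samples = [1.0 if inside((i + 0.5) / 3.0, y) else 0.0 for i in range(n)]
    filtered = []
    for i in range(n):
        acc = 0.0
        for k, w in zip(range(-2, 3), WEIGHTS):
            if 0 <= i + k < n:      # neighboring sub-pixels "bleed" in
                acc += w * samples[i + k]
        filtered.append(acc)
    return filtered  # 0.0 = background (white), 1.0 = full ink coverage
```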

In order to differentiate this rendering method from full-pixel anti-aliasing, I’ll call it a sub-pixel anti-aliasing method. Here is what this looks like when applied to the earlier example of vertical strokes:

RGB BGR

The same black strokes with “incremental” stroke weights and positions as above, but rendered with ClearType sub-pixel anti-aliasing.
⇒ Hover your mouse over the illustration to see the same set of strokes unfiltered. Again, depending on your viewing distance, your screen’s resolution (DPI), your visual acuity, and your color vision, you may perceive the “color fringes” anywhere from “eliminated” to “still objectionable.” Notice also that these illustrations do not include any explicit amount of gamma correction (cf ).

With the exception of the particular downsampling filter, CoolType, FreeType, and Quartz do not differ in spirit from ClearType, hence the generic term sub-pixel anti-aliasing method. However, for the purpose of constraining the outlines, all four of them differ from full-pixel anti-aliasing. To that end, I’ll highlight two relevant properties of sub-pixel anti-aliasing next.

To mute the effect of color fringes, or informally, to “smudge the line,” the downsampling filters of the individual sub-pixels must include samples of neighboring sub-pixels. In turn, full pixels may include samples of neighboring full pixels, and vice-versa. Even if a neighboring pixel does not contain any samples inside the outlines before downsampling, there is a fair chance that after downsampling, it looks as if it does.

Informally, this looks like the pixels inside the outlines were “leaking” or “bleeding” ink into neighboring pixels, as illustrated below (cf also “bleeding” in ).

In the process of muting the effect of color fringes (“smudging the line” between black and white), it appears as if ink were “leaking” or “bleeding” into neighboring pixels
⇒ Be sure to hover your mouse over the above illustration to remove the color-muting anti-aliasing filter (ClearType filter). Once more, depending on your individual viewing conditions, when looking at the original-size lines at the bottom of the above illustration, you may perceive the unfiltered version as sharper but more colorful. By contrast, you may perceive the filtered version of the strictly vertical line as a little bit “blurrier,” and both the vertical and the diagonal line as a bit “heavier.” In all likelihood, this potentially unexpected “weight gain” is due to the absence of gamma correction (cf ).

The other relevant property of the rendering method introduced up to this point is the complete absence of anti-aliasing in y-direction. This is why I’ll call this method an asymmetric sub-pixel anti-aliasing (ASPAA) method. It anti-aliases in one direction only, but not in the other. We will come back to this in section .

There are alternative attempts at muting the effects of color fringing to a tolerable level, but as with ClearType, none of them eliminates color fringes altogether. They basically represent different trade-offs between muting color fringes and “blurring” strokes. To the degree that this can be done merely by observation, following is a comparison of current sub-pixel anti-aliasing methods.

DISCLAIMER: With the exception of ClearType, I do not have any first-hand knowledge of the downsampling filters compared in this sub-section. I’m including this comparison merely for illustrative purposes. I’m sure the individual illustrations won’t be a pixel-by-pixel RGB match. Nevertheless, rest assured that I made an educated effort at visually matching the colors in order to show that, informally put, these filters are not “worlds apart.”

Here is the comparison, one illustration per method:

  - ClearType (GDI in Windows XP, Vista, and 7)
  - CoolType (“Smooth Text: For Laptop/LCD Screens”)
  - FreeType (“Default”)
  - Quartz (“Font smoothing: Medium - best for Flat Panel”)

Sub-Pixel Anti-Aliasing Methods compared: each of the above illustrations renders 3 nominally black, 1-pixel-wide vertical strokes against a white background (displaced by −1/3, 0, and +1/3 pixels from the full-pixel boundary) and a likewise nominally black, 1-pixel-wide diagonal stroke (displacing pixels in increments of 1/3 of a pixel on each row). Notice the different degrees to which colors are muted and—using the original-size insets—how this translates to color fringing and “blurring” of strokes.
⇒ Be sure to hover your mouse over the above illustrations to remove the respective sub-pixel anti-aliasing filter, and observe the color fringing, stroke sharpening, and generally “lighter” appearance without filter. Once more, most likely this potentially unexpected “weight loss” is due to the absence of proper gamma correction (cf ).

As you can see, there are differences between the various downsampling filters used by ClearType, CoolType, FreeType, and Quartz (or my approximations thereof). Depending on your color vision, they may or may not seem to be “worlds apart.” As you progress through chapters , , and , you may find that

may have an impact similar in magnitude to the individual downsampling filters, if not larger.

I cannot say to what degree sub-pixel anti-aliasing actually increases the perceived resolution (DPI) of the LCD device on which it is exercised. It is probably safe to say, though, that the factor is somewhere between 1 (the number of pixels is the same as for the previous rendering methods) and 3 (we are addressing 3 individual sub-pixels per pixel), both inclusive. I’ll leave the determination of the exact number to marketing.

In the previous two sections, we looked at two different anti-aliasing methods for rendering characters on screen. The first method anti-aliases full pixels but does not attempt to exploit the sub-pixel structure of today’s LCDs. The second method anti-aliases sub-pixels, and hence does attempt to exploit their relative location and size, but does not anti-alias in y-direction. Consequently, it would be nice if the two methods somehow could be combined.

Turns out, they can. The general idea is to oversample in both x- and y-direction, potentially allowing for individual oversampling rates for either direction. Subsequently, appropriate rectangular “chunks” of samples are downsampled to an RGB triplet. The filter used in the process can be a combination of the previously mentioned downsampling filters, or variations thereof. Accordingly, I’ll call this method a hybrid sub-pixel anti-aliasing (HSPAA) method.

As with asymmetric sub-pixel anti-aliasing, the exact nature of the downsampling filter used for hybrid sub-pixel anti-aliasing is beyond the scope of this website. To the degree that it is relevant for constraining outlines, suffice it to say that we should expect it to “bleed.” No other new concepts are necessary.
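
Under those assumptions, here is a bare-bones sketch of the sampling geometry. It uses a plain per-chunk average purely for brevity; a production filter would additionally “bleed” across neighboring sub-pixels, as discussed above:

```python
# Sketch of hybrid sub-pixel anti-aliasing (HSPAA): oversample in both
# directions -- one column of samples per sub-pixel in x, ny rows of
# samples in y -- and downsample each sub-pixel's rectangular "chunk"
# of samples to one value of an RGB triplet.

def hspaa_pixel(inside, col, row, ny=4):
    """Return (r, g, b) coverages in [0.0, 1.0] for pixel (col, row)."""
    rgb = []
    for sub in range(3):                    # the R, G, B sub-pixel columns
        hits = 0
        for j in range(ny):
            x = col + (sub + 0.5) / 3.0     # center of the sub-pixel column
            y = row + (j + 0.5) / ny        # one of ny sample rows
            hits += 1 if inside(x, y) else 0
        rgb.append(hits / ny)               # per-chunk average
    return tuple(rgb)
```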

*****

Following is a tabular overview, summarizing the 4 rendering methods with their properties as introduced up to this point:

↓ Property / Rendering Method →        FPBLR    FPAA      ASPAA    HSPAA
-----------------------------------    -----    ------    -----    -------
smoothing (x-direction)                no       yes       yes      yes
smoothing (y-direction)                no       yes       no       yes
exploit LCD sub-pixel structure        no       no        yes      yes
“bleeding” (x-direction)               no       no (1)    yes      yes
“bleeding” (y-direction)               no       no (1)    no       yes (2)

Opportunities: Properties of 4 different rendering methods

  1. Assuming a box filter
  2. Assuming a sinc filter

In the next chapter, we will discuss plenty of examples using these rendering methods.
