Misleading representations of discrete-time signals

Many audio editors draw digitally sampled signals in ways that can be misleading, or sometimes downright incorrect. I hope this post will be useful in explaining this recurring and confusing issue.

In Audacity, let's set the project sample rate to 48 kHz and generate a sinusoid tone at, say, 22.5 kHz. Using a tone near the Nyquist frequency and a high zoom level will make what I'm talking about quite apparent:

[Image: Detail of an Audacity screenshot, showing its oscillogram viewer zoomed in so much that only a few dozen samples, represented by points in the X-Y plane, are on screen. Adjacent samples are connected to each other by straight lines, and there's a zero-crossing between every sample.]

It appears as though our newly generated sinusoid is "pumping" or going up and down in amplitude! Of course this can't be true; it's just a pure sine wave and should be perfectly reconstructible as such, according to the sampling theorem. But what's happening here?

The problem lies in drawing straight lines between the samples (known as linear interpolation). This overlooks the fact that the original signal – in this case, coming from Audacity's signal generator – only contained frequencies below the Nyquist frequency. In such a band-limited signal, sharp corners like these would have been impossible.
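To see just how misleading those straight lines are, here's a quick numpy sketch (my own illustration, not Audacity's code; the frequencies match the example above):

```python
import numpy as np

fs, f = 48000.0, 22500.0        # sample rate and tone frequency, as above
n = np.arange(64)
samples = np.sin(2 * np.pi * f * n / fs)

# Because f lies between fs/4 and fs/2, consecutive samples (almost)
# always have opposite signs.  Connecting them with straight lines
# therefore draws a zero-crossing into nearly every sample interval,
# with a sharp corner at each sample -- features that a signal
# band-limited to below 24 kHz could never contain.
sign_changes = np.count_nonzero(np.diff(np.sign(samples)))
print(sign_changes / (len(samples) - 1) > 0.9)   # True: nearly every interval
```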

A cleaner, more customary way of drawing discrete-time signals is shown below:

[Image: The same samples as in the above screenshot, but instead of connecting adjacent samples to each other, the samples are connected to the X axis using a dotted line.]

Here, only the samples are shown, and their order is discernible from the lines that connect them to the zero level. But we're not trying to claim anything about what's in between those samples (the value is actually undefined).

The "pumping" pattern we're still seeing emerges from the slowly changing phase difference between the sampling frequency and the signal; in other words, the sinusoid is being sampled at different points of its cycle.
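This beating can even be written down exactly. With these numbers, the sample magnitudes trace an apparent envelope at fs/2 − f = 1.5 kHz, even though the sinusoid's true amplitude is constant. A small numpy check (my own sketch, not part of any editor's code):

```python
import numpy as np

fs, f = 48000.0, 22500.0
n = np.arange(97)
samples = np.sin(2 * np.pi * f * n / fs)

# Near Nyquist, sin(2*pi*f*n/fs) equals +/- sin(2*pi*(fs/2 - f)*n/fs)
# at every sample index n, so the sample magnitudes rise and fall at
# fs/2 - f = 1.5 kHz -- the "pumping" pattern.
apparent_envelope = np.abs(np.sin(2 * np.pi * (fs / 2 - f) * n / fs))
print(np.allclose(np.abs(samples), apparent_envelope))   # True
```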

A more faithful representation of the signal can be obtained by low-pass filtering the waveform shown by Audacity – after resampling at a higher rate – using a 'brick-wall' cut-off at the Nyquist frequency. This is equivalent to sinc interpolation, and it's pretty close to what actually happens in the sound card when the signal is played back. We can immediately see how the weird dot pattern was formed:

[Image: The same set of samples as in the above images, but this time, a sinusoid of constant frequency and amplitude is drawn that passes through all the points.]
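As a rough sketch of that reconstruction step, here's an FFT-based band-limited upsampler, which for a periodic block of samples is equivalent to sinc interpolation (the block length and oversampling factor are my own choices, not what Audacity or a sound card actually uses):

```python
import numpy as np

fs, f = 48000.0, 22500.0
N = 64                       # 22500/48000 = 15/32, so 64 samples hold
n = np.arange(N)             # an exact whole number of cycles
samples = np.sin(2 * np.pi * f * n / fs)

# Band-limited upsampling via the FFT: zero-pad the spectrum, then
# inverse-transform.  For a periodic block this is equivalent to
# sinc interpolation.
up = 16
spec = np.fft.fft(samples)
padded = np.zeros(N * up, dtype=complex)
padded[:N // 2] = spec[:N // 2]
padded[-(N // 2):] = spec[-(N // 2):]
fine = np.fft.ifft(padded).real * up     # rescale for the longer transform

# The reconstruction is a constant-amplitude 22.5 kHz sinusoid --
# no "pumping" anywhere.
t = np.arange(N * up) / (fs * up)
print(np.max(np.abs(fine - np.sin(2 * np.pi * f * t))) < 1e-9)   # True
```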

Sinc interpolation is computationally more complex (slower) than linear interpolation, which is probably why Audacity and others don't use it. Some audio editors, however, like Cool Edit, do use better interpolation at high zoom levels.

See also my related post, Rendering PCM with simulated phosphor persistence.

(And since it's been linked so many times – a great episode of Digital Show & Tell explains this and a lot more about sampled signals.)