ADC and Oversampling

Analog to digital converters quantize the sampled input. Moreover, the input signal is sampled into a discrete time domain, unlike flash ADCs, which work in a continuous time domain. As you will see later in this presentation, this fact is essential.
ADC basics: Resolution
Quantized means that the continuous input signal is mapped onto a smaller, countable set of numbers. Because there are limits to both the analog range the input can handle and the maximum count that can be represented, the combined result is a representation of the input value as a multiple of a base unit, given by the ratio between the input analog range and the maximum count.
And this is a fancy way to define the maximum resolution at which we can represent the original signal.
In other words, if we can count to a maximum of 256 and the input ranges from zero to 5 V, the resulting base unit is:

$$\text{base unit} = \frac{5\,\text{V}}{256} \approx 19.5\,\text{mV}$$

Which means that, of all the infinite possible values, only multiples of 19.5 mV can be counted. And this represents the maximum resolution of the analog to digital converter of this example.
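To make the mapping concrete, here is a minimal sketch in C of the count-to-voltage conversion implied by this base unit; the function name and constants are mine, chosen for this example only.

    #include <stdint.h>
    #include <stdio.h>

    /* Convert a raw 8 bit ADC count to millivolts, assuming a 0-5 V range. */
    static float counts_to_mv(uint8_t count)
    {
        const float range_mv  = 5000.0f;  /* full scale input range in mV */
        const float max_count = 256.0f;   /* 2^8 distinct codes           */
        return count * (range_mv / max_count);  /* base unit: 19.5 mV     */
    }

    int main(void)
    {
        printf("code 1 -> %.1f mV\n", counts_to_mv(1));  /* prints 19.5 mV */
        return 0;
    }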
Now let's suppose we have an 8 bit ADC, which as we've seen has a maximum resolution of 19.5 mV, but we want to get 9 bits out of it, to achieve twice the resolution:

$$\text{base unit} = \frac{5\,\text{V}}{2^9} = \frac{5\,\text{V}}{512} \approx 9.8\,\text{mV}$$

How can we do this?
Sampling concepts
All right, to understand how we can achieve this we first need to see how sampling is done.
At the beginning I said that, unlike flash ADCs, all other types of ADC sample the analog signal into a discrete time domain. For the sake of this presentation I will call this kind of ADC a "discrete time domain ADC". This is an important distinction for our goal of virtually increasing the resolution beyond the physically available bits.
So let's see how this actually works, and its interesting side effects.
For the purpose of this presentation we consider a typical successive approximation ADC (see the slideshow). Here the input signal is sampled and held while a comparator detects when a generated count, converted back to analog through a DAC, matches the sampled and held input signal. When this happens the counter is stopped, and the counted value is returned as the converted value.
At that point the whole process starts over again, making a cycle. A cycle means a frequency, and this is indeed the sampling rate.
Also, as you can see, the signal is quantized by choosing an integer number of base units, each represented here by one step of the staircase curve.
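To make this behavior concrete, here is a toy software model of the count-and-compare loop just described; it is a sketch for intuition, not a model of any real part, and all names and values are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of the count-and-compare conversion: a counter climbs     */
    /* the staircase until the DAC value reaches the sampled-and-held      */
    /* input, then the counter is stopped and returned as the result.      */
    static uint8_t convert(float sampled_v)
    {
        const float base_unit_v = 5.0f / 256.0f;   /* 19.5 mV per step */
        uint8_t count = 0;
        while (count < 255 && (count + 1) * base_unit_v <= sampled_v)
            count++;                   /* comparator not yet tripped */
        return count;
    }

    int main(void)
    {
        printf("25 mV -> code %u\n", (unsigned)convert(0.025f)); /* code 1 */
        return 0;
    }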
To help you understand better what's going on here, I've written a simple web app simulator.
In the simulator, try reducing the maximum count to see the effect of quantization; it has to be greatly exaggerated to be visible, and it is easiest to spot by checking Ramp in place of a sine wave as the input signal.
Also try changing the input frequency in relation to the sampling frequency, to get a visual, intuitive understanding of the Nyquist frequency: the maximum representable frequency of the input signal is at most half of the sampling frequency.
In fact you will see that when half of the sampling frequency is reached, which is the Nyquist frequency, just one sample high and one sample low can be taken per input cycle. Above this frequency we lose the ability to represent the input frequency.
And when the input signal reaches the sampling frequency itself, the digitized signal becomes flat.
Above this frequency (and in fact already above half of it) we start to notice aliases: mistaken representations of the original input signal, which happen because the sampler misses some cycles of the input frequency.
It is called an alias because the reconstructed signal has a relationship with both the input frequency and the sampling frequency.
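That relationship can be written down: folding the input frequency around the nearest multiple of the sampling frequency gives the apparent alias frequency (a standard result, stated here without derivation):

$$f_{\text{alias}} = \left| f_{\text{in}} - k\, f_s \right|, \qquad k = \operatorname{round}\!\left(\frac{f_{\text{in}}}{f_s}\right)$$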
Since the purpose of this simulation is to provide an intuitive, visual understanding of the physics that govern a discrete time domain ADC, I added the possibility to show more samples, so it becomes easier to grasp the resulting effect.
Keep in mind, though, that your brain can trick you, and you may see waveforms that don't actually exist.
However, you may also see more clearly the alias waveform generated by the interference between the sampling frequency and the input frequency. To do this, increase the number of samples and uncheck Dots. Also uncheck Link, so you can independently scroll the number of samples shown and the input frequency ratio.
Another interesting aspect is the ability to represent the original signal. If you set 16 samples you'll notice that at the Nyquist frequency (remember, half of the sampling frequency), while it is still possible to represent the original frequency, only a sine wave of that frequency can be represented. This becomes more evident when you switch from the sine wave to a sawtooth waveform by checking Ramp.
Obviously this is because any waveform that is not a sine wave is made up of higher harmonic components, at frequencies that will be above the Nyquist frequency and therefore impossible to represent, while their aliases will take their place, distorting the reconstructed waveform.
And this is another way to see the frequency limit.
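For example, the Fourier series of an ideal sawtooth (the Ramp in the simulator) contains every integer harmonic of its fundamental frequency f, with amplitudes falling off as 1/k:

$$x(t) = \frac{2}{\pi} \sum_{k=1}^{\infty} (-1)^{k+1}\, \frac{\sin(2\pi k f t)}{k}$$

so as soon as k·f exceeds the Nyquist frequency, those components can no longer be represented and alias instead.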
Well, we've seen a lot about discrete time domain ADCs; now we're ready to understand how it is possible to squeeze more bits out of such an ADC.
Virtually increasing the resolution
The procedure is called oversampling and decimation, and it relies on the fact that increasing the sampling rate (or, conversely, reducing the input frequency) allows the additional samples to carry more information about the input, which is achieved by introducing a small perturbation.
Example
For example, suppose we take four samples from an 8 bit ADC, which has a 19.5 mV base unit over a 5 V range, and at each sample we add a small perturbation so that the input voltage is shifted by one fourth of 19.5 mV each time, thus: 4.9 mV, 9.8 mV, 14.6 mV and 19.5 mV. If the input voltage to convert was 9 mV, we would get the following samples: 0, 0, 1, 1.
Shifting them left by one bit, to map them onto 9 bits, makes them: 0, 0, 2, 2.
Then averaging these four values gives 1, which mapped onto 9 bits corresponds to

$$1 \times \frac{5\,\text{V}}{512} \approx 9.8\,\text{mV}$$

which is close to the actual input value (9 mV) that otherwise would have been lost if we had taken just one 8 bit sample.
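The same arithmetic can be written down in a few lines of C; this is just the worked example above, not the general algorithm (which comes later):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The four 8 bit samples of the example: a 9 mV input plus the  */
        /* growing perturbation, quantized at 19.5 mV per step.          */
        const uint8_t samples[4] = { 0, 0, 1, 1 };
        uint16_t sum = 0;
        for (int i = 0; i < 4; i++)
            sum += (uint16_t)(samples[i] << 1); /* map onto the 9 bit scale */
        uint16_t avg = sum / 4;                 /* average: 1               */
        printf("9 bit code %u -> %.1f mV\n",
               (unsigned)avg, avg * 5000.0 / 512.0); /* -> 9.8 mV */
        return 0;
    }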
Because all this may not sound that intuitive, in the simulator let's try to increase the sampling frequency (number of samples) and/or reduce the input frequency ratio and see what happens. Of course this is a simulation, not an exact emulation of the real process, but it is useful to get an idea.
Increasing the sampling frequency to 96 cycles, and scrolling the input frequency to a ratio of 0.05 with respect to the sampling frequency, makes it easier to visualize. Now let's bring down the maximum count, which is another way to say resolution, to 32 (5 bits), or even lower, to 16 (4 bits).
Now you can see that almost every sample point is at a different level. However, if you bring down the input frequency by four times, to 0.0125 (approximated in the simulator as 0.0124), you'll see that more than one sample sits at the same level.
So we have some duplicated samples.
Now, if the input signal is perturbed with an amplitude around the base unit, that is, the lowest bit, the LSB, the sampled values will differ slightly according to the perturbation. To simulate this you can add some noise to the input signal by checking Noise, and you'll notice that the duplicates are gone and the overall samples reconstruct the original signal better, with more information.
Averaging these values, which implies reducing the number of samples, gives a correct, more detailed representation of the input signal, therefore increasing its resolution to a higher equivalent number of bits than the physically available ones.
Averaging also reduces the amount of noise, even though we intentionally added it, and reduces aliases.
Noise is reduced because the sum of N uncorrelated noise samples increases their amplitude by $\sqrt{N}$, while the sum of correlated signals increases their amplitude by $N$. So, once averaged, the net result is a signal to noise ratio increased by the square root of N:

$$\text{SNR gain} = \frac{N}{\sqrt{N}} = \sqrt{N}$$

For example, averaging N = 16 samples improves the signal to noise ratio by $\sqrt{16} = 4$, that is, about 12 dB.
To summarize, oversampling and decimation:
- Increases the resolution
- Increases the signal to noise ratio
- Reduces the available bandwidth
- For each virtual bit gained in resolution, the number of samples must be increased by four times.
Bandwidth is also reduced because, in many cases, given a maximum sampling rate we need to reduce the maximum input frequency to achieve the goal of oversampling. Although averaging acts as an anti-alias filter, it remains important to introduce a low pass filter at the input to prevent aliasing.
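As a concrete (and purely illustrative) option, a first order RC low pass, with cutoff

$$f_c = \frac{1}{2\pi R C}$$

placed at or below the useful input bandwidth, is often sufficient for this purpose; the actual component values depend on the application.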
Ok, now let's see how this works in practice.
To make it work we need to sum 4 times as many samples for each virtual bit gained. Mathematically the rule is as follows:

$$N = 4^n$$

where n is the number of virtual bits and N the number of samples to sum.
This means that to squeeze two more bits we need to get 16 samples, to squeeze three more bits we need 64 samples, and so on.
This soon hampers the input bandwidth: while you can greatly increase the resolution, on the other hand you may only sample signals at very low frequencies.
The full algorithm is as follows:
- Take $4^n$ samples and sum them together.
- Shift the sum right by n bits, that is, divide it by $2^n$: this is the decimation step.
- The result is a value with n more bits of resolution than the physical ones.
The actual implementation in software usually involves shifting the data both to fit the final resolution and to perform the scaled mean, as in the following example.
Code example
On an AVR microcontroller the conversion-complete interrupt handler is ISR(ADC_vect), so the accumulation can be done right there. What follows is a minimal sketch, not production code: it assumes a 10 bit ADC, and the choice of two extra bits (hence 16 summed samples) and the variable names are assumptions of this example.
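    #include <avr/io.h>
    #include <avr/interrupt.h>

    #define EXTRA_BITS 2                        /* n: virtual bits to gain */
    #define N_SAMPLES  (1 << (2 * EXTRA_BITS))  /* 4^n = 16 samples to sum */

    static volatile uint32_t acc;     /* running sum of raw 10 bit samples */
    static volatile uint8_t  count;   /* samples accumulated so far        */
    static volatile uint16_t result;  /* decimated 12 bit output           */

    ISR(ADC_vect)
    {
        acc += ADC;                   /* ADC holds the completed conversion */
        if (++count == N_SAMPLES) {
            /* Decimate: shift right by n bits, keeping 10 + n = 12 bits. */
            result = (uint16_t)(acc >> EXTRA_BITS);
            acc = 0;
            count = 0;
        }
        ADCSRA |= _BV(ADSC);          /* start the next conversion          */
    }

If the ADC is configured in free-running mode the manual retrigger of ADSC is unnecessary; here the next conversion is kicked off explicitly at the end of each interrupt.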
Video
This video provides the same content as this article, but with the help of animated slides.
