Use oversampling followed by the “decimation method” to increase ADC resolution, not normal averaging
To increase an ADC's resolution from 12 bits to 14 bits, you can use the 'oversampling and decimation' method. An Atmel application note says:

"The higher the number of samples averaged, the more selective the low-pass filter will be, and the better the interpolation. The extra samples, m, achieved by oversampling the signal are added, just as in normal averaging, but the result is not divided by m as in normal averaging. Instead, the result is right-shifted by n, where n is the desired number of extra bits of resolution, to scale the answer correctly. Right-shifting a binary number once is equal to dividing it by a factor of 2."

"It is important to remember that normal averaging does not increase the resolution of the conversion. Decimation, or interpolation, is the averaging method which, combined with oversampling, increases the resolution."

This reference clearly says that for the decimation method, the result is right-shifted by the desired number of extra bits of resolution, not divided by m as in normal averaging.

So, the question is: why do we need to use the decimation method instead of normal averaging after oversampling to increase the ADC resolution?

It says above "Right shifting a binary number once is equal to dividing the binary number by a factor of 2", but what if we don't use a binary number? How do we use the decimation method in that case?

microcontroller adc
How do you define "normal averaging"? – TimWescott, May 11 at 18:26

"But what if we don't use a binary number" – all numbers are binary numbers in a microcontroller. – brhans, May 11 at 18:32

@TimWescott: It's "defined" (sort of) in the paper. – Dave Tweed♦, May 11 at 18:32

There's a practical limit to how many extra bits you can get; in my experience you might get 2 bits more. – Peter Smith, May 12 at 15:56
asked May 11 at 18:12 by Ali (new contributor); edited May 12 at 15:38 by Peter Mortensen
3 Answers
I wouldn't take that application note too seriously: it contains many errors, both conceptual¹ and typographical.

Adding up a bunch of samples and then scaling the sum by some factor, no matter what you call it, is averaging. It's also filtering. It is, in fact, just one special case of a finite impulse response (FIR) filter, in which every sample gets its own scale factor and then they are added together to create the result.

"So, the question is: why do we need to use the decimation method instead of normal averaging after oversampling to increase the ADC resolution?"

It's all the same thing in the end.

"It says above 'Right shifting a binary number once is equal to dividing the binary number by a factor of 2', but what if we don't use a binary number? How do we use the decimation method in this case?"

Just use ordinary division if the divisor isn't a power of 2.

¹ For example, "white" noise is not equivalent to "Gaussian" noise, although many natural noise sources are both Gaussian and white.

– Dave Tweed♦ (answered May 11 at 18:29, edited May 11 at 18:37)
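The point that summing-and-shifting is still averaging can be seen with a quick sketch (hypothetical 12-bit sample values, Python used for illustration only):

```python
# Hypothetical 12-bit ADC readings of a signal near code 1000, with noise.
samples = [1000, 1001, 999, 1002, 1000, 998, 1001, 1000,
           1002, 999, 1000, 1001, 998, 1000, 1002, 1001]

total = sum(samples)            # 16 samples added together
decimated = total >> 2          # right-shift by 2: a 14-bit result
averaged = total // 16          # "normal" average: back to a 12-bit result

# Both are the same FIR average; decimation just leaves the result
# scaled up by 2**2 = 4 so the two extra (fractional) bits survive.
print(decimated, averaged, decimated / 4)   # 4001 1000 1000.25
```

Dividing the decimated value by 4 recovers the ordinary average plus a fraction; normal integer averaging discards that fraction.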
Although, the distinction that the paper is making (however badly) is that whatever method you use for averaging (or filtering), you need to save the least significant bits. If you average 16 samples, you (roughly) reduce the RMS noise by a factor of 4. If you don't keep the two additional "good" bits one way or another, you lose the advantage. Whether you do that by shifting, or multiplying by floats, or whatever -- it still needs to be done. – TimWescott, May 11 at 18:41
"So, the question is: why do we need to use the decimation method instead of normal averaging after oversampling to increase the ADC resolution?"

By 'normal' averaging I presume you mean dividing the sum by the number of samples. If you do this, the result will have the same number of bits as a single sample, so you lose the extra bits you were trying to get. With decimation you only lose the lowest of the low bits, leaving some of the 'higher' low bits in to contribute to the final result.

"It says above 'Right shifting a binary number once is equal to dividing the binary number by a factor of 2', but what if we don't use a binary number? How do we use the decimation method in this case?"

In the AVR (as in most computers) all numbers are binary, so I assume you just mean a number that is not a power of two. If the number of samples is not a power of two, then to increase the resolution by a whole number of bits you must divide the sum by a number that is not a power of two. This may require using fixed-point fractions or floating-point math.

For example, if you oversample 25x and want exactly two extra bits, you need to divide by 25/4 = 6.25, which is not an integer. 8-bit AVRs don't have hardware floating-point or even integer divide instructions, so dividing by fractions has to be done in software, which is generally very inefficient. But shift instructions are very fast (as little as one CPU cycle per shift per byte), so it makes sense to choose an oversampling rate that is a power of 2.

However, there could be situations where you just need enough bits to e.g. produce a decimal number with a certain number of digits. In that case it may be easier to directly divide the sum by the factor required to get the resolution you need, and not worry about whether it equates to a whole number of bits. In one case I had a 10-bit ADC and wanted a voltage display of 0.00-51.00 V. To do this I oversampled 64 times to get 1023*64 = 0-65472, then divided by 12.8 (using an optimized divide routine hard-coded to that factor) to get 0-5115. This was then displayed as 00.00-51.15 by simply inserting a decimal point after the second digit on the display.

– Bruce Abbott
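The 51.15 V display example above can be checked numerically. This is a sketch only; the function name is hypothetical, and on a real 8-bit AVR the division by 12.8 would be a hand-optimized fixed-point routine rather than floating point:

```python
def to_display(sample_sum):
    # Maps an oversampled sum of 0..65472 onto 0..5115
    # (hundredths of a volt). Division by 12.8 = 64 * (1024/5120),
    # hard-coded for this one scale factor.
    return round(sample_sum / 12.8)

full_scale = 1023 * 64                  # maximum possible sum: 65472
print(to_display(full_scale))           # 5115, shown as "51.15"
print(to_display(full_scale) / 100)     # 51.15 volts
```

The result is not a whole number of extra bits, but it is exactly the resolution the display needs.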
It is confusing: some references say that oversampling alone can improve the accuracy of an ADC, others say oversampling must be accompanied by normal averaging, while this reference says that oversampling must be followed by decimation, not normal averaging. – Ali, 2 days ago

Oversampling is just taking several samples within a time period. They produce an average when added together, which also increases resolution because the sum has more bits. But the extra bits have less averaging, so they are more noisy, and are not normally considered to be significant because they are below the ADC's resolution. 'Normal' averaging removes all the extra bits to reduce noise while maintaining the original resolution. Decimation removes only the lowest (most noisy) bits, keeping the higher (somewhat averaged and therefore less noisy) extra bits to increase resolution. – Bruce Abbott, 2 days ago

Thank you for your comment. 1. What do you mean when you say that in the oversampling-alone case, the extra bits are not considered significant because they are below the ADC's resolution? 2. I did not understand how decimation can remove only the lowest (most noisy) bits and keep the higher (less noisy) bits. Could you please clarify this point, and list any reference that could help in understanding decimation? Thank you. – Ali, yesterday
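The claim in the comments, that averaging 16 samples cuts RMS noise by roughly sqrt(16) = 4, so the two extra bits kept by decimation carry real information, can be sanity-checked with a small simulation (illustrative only; the input value and noise level are made up):

```python
# Simulate 16x oversampling of an ideal quantizer with ~1 LSB of
# Gaussian noise, then decimate (>>2) to get a 14-bit result.
import random

random.seed(0)
TRUE_VALUE = 1000.37                      # input, in units of 12-bit LSBs

def adc(v):
    # Ideal round-to-nearest quantizer with about 1 LSB of added noise.
    return round(v + random.gauss(0.0, 1.0))

results = []
for _ in range(2000):
    s = sum(adc(TRUE_VALUE) for _ in range(16))   # 16x oversample and sum
    results.append((s >> 2) / 4)                  # decimate; rescale to LSBs

rms = (sum((r - TRUE_VALUE) ** 2 for r in results) / len(results)) ** 0.5
print(rms)   # substantially less than the ~1 LSB error of a single sample
```

The residual error lands near 1/4 LSB, consistent with the factor-of-4 noise reduction the comment describes.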
- Averaging reduces the bandwidth when the number of samples averaged exceeds the oversampling ratio.
- Averaging plus decimation increases resolution when the noise is ideally between 1/2 and 2x the LSB.
- Averaging improves accuracy only if the noise is > 1 LSB.
- The noise does not have to be Gaussian or white.

The input must be prefiltered to remove excess noise, or, if the signal is clean, about 1 LSB of noise must be added (dither) to achieve these improvements; this must be controlled by design. The bandwidth reduction is obvious per the above. This may not be possible if the noise is uncontrollable or the nonlinearity exceeds 1/2 LSB.
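The need for roughly 1 LSB of noise can be seen directly (a minimal sketch with hypothetical values): if the input is perfectly clean, every oversample quantizes to the same code, and decimation recovers nothing.

```python
TRUE_VALUE = 1000.37             # input in 12-bit LSBs, hypothetical

# Noise-free quantizer: all 16 oversamples are identical ...
clean_sum = sum(int(TRUE_VALUE) for _ in range(16))
print((clean_sum >> 2) / 4)      # 1000.0 -- the 0.37 fraction is lost

# ... but with ~1 LSB of dither the samples toggle between adjacent
# codes, and their sum encodes the fraction in the extra bits.
dithered = [1000, 1000, 1001, 1000, 1001, 1000, 1000, 1001,
            1000, 1001, 1000, 1001, 1000, 1000, 1001, 1000]  # illustrative
print((sum(dithered) >> 2) / 4)  # 1000.25, closer to the true 1000.37
```

With no noise the extra bits are constant and meaningless; with dither they track the sub-LSB value.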
$begingroup$
I wouldn't take that application note too seriously — it contains many errors, both conceptual1 and typographical.
Adding up a bunch of samples and then scaling the sum by some factor, no matter what you call it, IS averaging. It's also filtering. It is, in fact, just one special case of a finite impulse response (FIR) filter, in which every sample gets its own scale factor and then they get added together to create the result.
So, the question is, why do we need to use decimation method instead of the normal averaging after the oversampling to increase the ADC resolution?
It's all the same thing in the end.
It says above "Right shifting a binary number once is equal to dividing the binary number by a factor of 2", but what if we don't use a binary number, how do we use the decimation method in this case?
Just use ordinary division if the divisor isn't a power of 2.
1 For example, "white" noise is NOT equivalent to "gaussian" noise, although many natural noise sources are both gaussian AND white.
$endgroup$
4
$begingroup$
Although, the distinction that the paper is making (however badly) is that whatever method you use for averaging (or filtering), you need to save the least significant bits. If you average 16 samples, you (roughly) reduce the RMS noise by a factor of 4. If you don't keep the two additional "good" bits one way or another, you lose the advantage. Whether you do that by shifting, or multiplying by floats, or whatever -- it still needs to be done.
$endgroup$
– TimWescott
May 11 at 18:41
add a comment |
$begingroup$
I wouldn't take that application note too seriously — it contains many errors, both conceptual1 and typographical.
Adding up a bunch of samples and then scaling the sum by some factor, no matter what you call it, IS averaging. It's also filtering. It is, in fact, just one special case of a finite impulse response (FIR) filter, in which every sample gets its own scale factor and then they get added together to create the result.
So, the question is, why do we need to use decimation method instead of the normal averaging after the oversampling to increase the ADC resolution?
It's all the same thing in the end.
It says above "Right shifting a binary number once is equal to dividing the binary number by a factor of 2", but what if we don't use a binary number, how do we use the decimation method in this case?
Just use ordinary division if the divisor isn't a power of 2.
1 For example, "white" noise is NOT equivalent to "gaussian" noise, although many natural noise sources are both gaussian AND white.
$endgroup$
4
$begingroup$
Although, the distinction that the paper is making (however badly) is that whatever method you use for averaging (or filtering), you need to save the least significant bits. If you average 16 samples, you (roughly) reduce the RMS noise by a factor of 4. If you don't keep the two additional "good" bits one way or another, you lose the advantage. Whether you do that by shifting, or multiplying by floats, or whatever -- it still needs to be done.
$endgroup$
– TimWescott
May 11 at 18:41
add a comment |
$begingroup$
I wouldn't take that application note too seriously — it contains many errors, both conceptual1 and typographical.
Adding up a bunch of samples and then scaling the sum by some factor, no matter what you call it, IS averaging. It's also filtering. It is, in fact, just one special case of a finite impulse response (FIR) filter, in which every sample gets its own scale factor and then they get added together to create the result.
So, the question is, why do we need to use decimation method instead of the normal averaging after the oversampling to increase the ADC resolution?
It's all the same thing in the end.
It says above "Right shifting a binary number once is equal to dividing the binary number by a factor of 2", but what if we don't use a binary number, how do we use the decimation method in this case?
Just use ordinary division if the divisor isn't a power of 2.
1 For example, "white" noise is NOT equivalent to "gaussian" noise, although many natural noise sources are both gaussian AND white.
$endgroup$
I wouldn't take that application note too seriously — it contains many errors, both conceptual1 and typographical.
Adding up a bunch of samples and then scaling the sum by some factor, no matter what you call it, IS averaging. It's also filtering. It is, in fact, just one special case of a finite impulse response (FIR) filter, in which every sample gets its own scale factor and then they get added together to create the result.
So, the question is, why do we need to use decimation method instead of the normal averaging after the oversampling to increase the ADC resolution?
It's all the same thing in the end.
It says above "Right shifting a binary number once is equal to dividing the binary number by a factor of 2", but what if we don't use a binary number, how do we use the decimation method in this case?
Just use ordinary division if the divisor isn't a power of 2.
1 For example, "white" noise is NOT equivalent to "gaussian" noise, although many natural noise sources are both gaussian AND white.
edited May 11 at 18:37
answered May 11 at 18:29
Dave Tweed♦Dave Tweed
126k10156273
126k10156273
4
$begingroup$
Although, the distinction that the paper is making (however badly) is that whatever method you use for averaging (or filtering), you need to save the least significant bits. If you average 16 samples, you (roughly) reduce the RMS noise by a factor of 4. If you don't keep the two additional "good" bits one way or another, you lose the advantage. Whether you do that by shifting, or multiplying by floats, or whatever -- it still needs to be done.
$endgroup$
– TimWescott
May 11 at 18:41
add a comment |
4
$begingroup$
Although, the distinction that the paper is making (however badly) is that whatever method you use for averaging (or filtering), you need to save the least significant bits. If you average 16 samples, you (roughly) reduce the RMS noise by a factor of 4. If you don't keep the two additional "good" bits one way or another, you lose the advantage. Whether you do that by shifting, or multiplying by floats, or whatever -- it still needs to be done.
$endgroup$
– TimWescott
May 11 at 18:41
4
4
$begingroup$
Although, the distinction that the paper is making (however badly) is that whatever method you use for averaging (or filtering), you need to save the least significant bits. If you average 16 samples, you (roughly) reduce the RMS noise by a factor of 4. If you don't keep the two additional "good" bits one way or another, you lose the advantage. Whether you do that by shifting, or multiplying by floats, or whatever -- it still needs to be done.
$endgroup$
– TimWescott
May 11 at 18:41
$begingroup$
Although, the distinction that the paper is making (however badly) is that whatever method you use for averaging (or filtering), you need to save the least significant bits. If you average 16 samples, you (roughly) reduce the RMS noise by a factor of 4. If you don't keep the two additional "good" bits one way or another, you lose the advantage. Whether you do that by shifting, or multiplying by floats, or whatever -- it still needs to be done.
$endgroup$
– TimWescott
May 11 at 18:41
add a comment |
$begingroup$
So, the question is, why do we need to use decimation method instead
of the normal averaging after the oversampling to increase the ADC
resolution?
By 'normal' averaging I presume you mean dividing the sum by the number of samples. If you do this the result will have the same number of bits as a single sample, so you lose the extra bits you were trying to get. With decimation you only lose the lowest of the low bits, leaving some of the 'higher' low bits in to contribute to the final result.
It says above "Right shifting a binary number once is equal to
dividing the binary number by a factor of 2", but what if we don't use
a binary number, how do we use the decimation method in this case?
In the AVR (as in most computers) all numbers are binary, so I assume you just mean a number that is not a power of two. If the number of samples is not a power of two then to increase the resolution by a whole number of bits you must divide the sum by a number that is not a power of two. This may require using fixed point fractions or floating point math.
For example if you oversample x 25 and want exactly two extra bits then you need to divide by 25/4 = 6.25, which is not an integer. 8 bit AVRs don't have hardware floating point or even integer divide instructions, so dividing by fractions has to be done in software which is generally very inefficient. But shift instructions are very fast (as little as one CPU cycle per shift per byte) so it makes sense to choose an oversample rate that is a power of 2.
However there could be situations where you just need enough bits to eg. produce a decimal number with a certain number of digits. In that case it may be easier to directly divide the sum by the factor required to get the resolution you need, and not worry about whether it equates to a whole number of bits. In one case I had a 10 bit ADC and wanted a voltage display of 0.00-51.00V. To do this I oversampled by 64 times to get 1023*64 = 0-65472, then divided by 12.8 (using an optimized divide routine hard-coded to that factor) to get 0-5115. This was then displayed as 00.00-51.15 by simply inserting a decimal point after the second digit on the display.
$endgroup$
$begingroup$
It is confusing , some references say that oversampling alone can improve the accuracy of ADC , other say oversampling must be companied with normal averaging , while this reference say that oversampling must be followed by Decimation and, not the normal averaging.
$endgroup$
– Ali
2 days ago
$begingroup$
Oversampling is just taking several samples within a time period. They produce an average when added together, which also increases resolution because the sum has more bits. But the extra bits have less averaging so they are more noisy, and are not normally considered to be significant because they are below the ADC's resolution. 'Normal' averaging removes all the extra bits to reduce noise while maintaining the original resolution. Decimation removes only the lowest (most noisy) bits, keeping the higher (somewhat averaged and therefore less noisy) extra bits to increase resolution.
$endgroup$
– Bruce Abbott
2 days ago
$begingroup$
Thank you for your comment: what do you mean when you said that in the oversampling alone case, the extra bits are not considered to be significant due they are below ADC’s resolution . 2- i did not understand how the decimation can only remove the lowest(most noisy) bits and keep the higher bits (less noisy) , could you please more clarify this point , and list any reference could help for understanding this decimation. Thank you
$endgroup$
– Ali
yesterday
add a comment |
$begingroup$
So, the question is, why do we need to use decimation method instead
of the normal averaging after the oversampling to increase the ADC
resolution?
By 'normal' averaging I presume you mean dividing the sum by the number of samples. If you do this the result will have the same number of bits as a single sample, so you lose the extra bits you were trying to get. With decimation you only lose the lowest of the low bits, leaving some of the 'higher' low bits in to contribute to the final result.
It says above "Right shifting a binary number once is equal to
dividing the binary number by a factor of 2", but what if we don't use
a binary number, how do we use the decimation method in this case?
In the AVR (as in most computers) all numbers are binary, so I assume you just mean a number that is not a power of two. If the number of samples is not a power of two then to increase the resolution by a whole number of bits you must divide the sum by a number that is not a power of two. This may require using fixed point fractions or floating point math.
For example if you oversample x 25 and want exactly two extra bits then you need to divide by 25/4 = 6.25, which is not an integer. 8 bit AVRs don't have hardware floating point or even integer divide instructions, so dividing by fractions has to be done in software which is generally very inefficient. But shift instructions are very fast (as little as one CPU cycle per shift per byte) so it makes sense to choose an oversample rate that is a power of 2.
However there could be situations where you just need enough bits to eg. produce a decimal number with a certain number of digits. In that case it may be easier to directly divide the sum by the factor required to get the resolution you need, and not worry about whether it equates to a whole number of bits. In one case I had a 10 bit ADC and wanted a voltage display of 0.00-51.00V. To do this I oversampled by 64 times to get 1023*64 = 0-65472, then divided by 12.8 (using an optimized divide routine hard-coded to that factor) to get 0-5115. This was then displayed as 00.00-51.15 by simply inserting a decimal point after the second digit on the display.
It is confusing: some references say that oversampling alone can improve the accuracy of an ADC, others say oversampling must be accompanied by normal averaging, while this reference says that oversampling must be followed by decimation, not normal averaging.
– Ali
2 days ago
Oversampling is just taking several samples within a time period. Added together they produce an average, which also increases resolution because the sum has more bits. But the extra bits are averaged less, so they are noisier, and are not normally considered significant because they are below the ADC's resolution. 'Normal' averaging removes all the extra bits, reducing noise while maintaining the original resolution. Decimation removes only the lowest (noisiest) bits, keeping the higher (somewhat averaged, and therefore less noisy) extra bits to increase resolution.
– Bruce Abbott
2 days ago
Thank you for your comment. 1. What do you mean when you say that, with oversampling alone, the extra bits are not considered significant because they are below the ADC's resolution? 2. I did not understand how decimation can remove only the lowest (most noisy) bits and keep the higher (less noisy) bits; could you please clarify this point, and list any reference that would help in understanding decimation? Thank you.
– Ali
yesterday
answered May 12 at 10:42
Bruce Abbott
- Averaging reduces the bandwidth when the number of samples averaged exceeds the number of oversamples.
- Averaging with decimation increases resolution when the noise is ideally between 1/2 and 2x the LSB.
- Averaging improves accuracy only if the noise is > 1 LSB.
- The noise does not have to be Gaussian or white.
The input must be prefiltered to remove excess noise, or, if it is clean, have about 1 LSB of noise added, to achieve these improvements; this must be controlled by design. The bandwidth reduction is obvious from the above. These gains may not be possible if the noise is uncontrollable or the nonlinearity exceeds 1/2 LSB.
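A rough numerical sketch of the claim that averaging improves accuracy only when there is roughly 1 LSB of noise (the DC value and sample count here are my own, chosen for illustration): an ideal rounding quantizer reading a DC input of 2.3 LSB returns 2 on every sample, so no amount of averaging recovers the fraction; with 1 LSB of uniform dither added, the average converges to 2.3.

```python
# Monte-Carlo sketch: averaging a quantized DC level with and without dither.
import random

def quantize(x):
    return round(x)          # ideal ADC with 1 LSB steps, rounding to nearest

def averaged_reading(dc, dither_lsb, n=100_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        noise = rng.uniform(-dither_lsb / 2, dither_lsb / 2)
        total += quantize(dc + noise)
    return total / n

print(averaged_reading(2.3, 0.0))   # 2.0 exactly: no noise, averaging gains nothing
print(averaged_reading(2.3, 1.0))   # close to 2.3: dither spreads the fraction
                                    # across codes so the average recovers it
```

This is why a "clean" input must have controlled noise added before oversampling can buy resolution.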
answered 2 days ago
Sunnyskyguy EE75
How do you define "normal averaging"?
– TimWescott
May 11 at 18:26
"but what if we don't use a binary number" - all numbers are binary numbers in a microcontroller.
– brhans
May 11 at 18:32
@TimWescott: It's "defined" (sort of) in the paper.
– Dave Tweed♦
May 11 at 18:32
There's a practical limit to how many extra bits you can get; in my experience you might get 2 bits more.
– Peter Smith
May 12 at 15:56