Ttelmah
Joined: 11 Mar 2010 Posts: 19195
Posted: Sun Dec 10, 2017 12:52 pm |
OK.
Now, the key thing with the rolling sum is that you can alter the actual averaging amount by changing how much is added back each time. So:
Code: |
if (ctr==0)
{
   over += value;              // rolling sum
   final = over * 0.00390625;  // i.e. over/256
   temp = over / 16;
   over -= temp;               // remove 1/16, keep 15/16
}
|
Doubles the amount of averaging.
You have to change the preload code to add back 15 samples rather than 7.
Now, as Temtronic has correctly pointed out, you need a lot more samples to get the extra 3 bits.
However, it's also worth understanding that oversampling to give more effective accuracy only works if the noise on the samples is genuinely random, which may not be the case.
The point about using 3 sensors is that you can immediately triple the rate at which readings can be taken. So (for instance), where I'm taking the centre reading of three samples to reject spikes, this can be done in a single time slice by using the three sensors. Then increase the oversampling to work on perhaps 64 or even 256 samples, and the response time won't be too bad.
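The median-of-three spike rejection mentioned above can be sketched as a small helper. This is a generic C sketch; the function name and use of plain `int` are my own, not from the original code:

```c
/* Return the middle of three samples: rejects a single spike
   (high or low) without the lag an average would introduce. */
int median3(int a, int b, int c)
{
    int t;
    if (a > b) { t = a; a = b; b = t; }   /* now a <= b            */
    if (b > c) { t = b; b = c; c = t; }   /* now b <= c, c is max  */
    if (a > b) { t = a; a = b; b = t; }   /* b is now the median   */
    return b;
}
```

Three comparisons per call, no sorting buffer, so it fits comfortably in a time-critical slice.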
temtronic
Joined: 01 Jul 2010 Posts: 9081 Location: Greensville,Ontario
Posted: Sun Dec 10, 2017 1:07 pm |
Alright, I'm expanding my 'brilliant' idea...
Use four sensors.
Read them at a 1Hz rate:
1st at 1.00
2nd at 1.15
3rd at 1.30
4th at 1.45
Avg 1 and 2 to get a reading at 7.5
Avg 2 and 3 to get a reading at 22.5
Avg 3 and 4 to get a reading at 37.5
Now you've got 7 readings within a 1 second time frame, which should smooth things down a bit... if not smooth enough, then apply 'math'.
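The pairwise averaging could look something like this. A sketch only; the function and array names are hypothetical, and it assumes the four raw readings for one second have already been collected:

```c
/* Four sensors read once per second, staggered by a quarter second.
   Pairwise averages of neighbouring sensors fill in three extra
   points between the real readings: 7 values per second in total. */
void build_readings(const int sensor[4], int out[7])
{
    out[0] = sensor[0];
    out[1] = (sensor[0] + sensor[1]) / 2;   /* midpoint of 1st and 2nd */
    out[2] = sensor[1];
    out[3] = (sensor[1] + sensor[2]) / 2;   /* midpoint of 2nd and 3rd */
    out[4] = sensor[2];
    out[5] = (sensor[2] + sensor[3]) / 2;   /* midpoint of 3rd and 4th */
    out[6] = sensor[3];
}
```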
Jay
guy
Joined: 21 Oct 2005 Posts: 291
Posted: Sun Dec 10, 2017 1:12 pm |
I am elaborating on what has already been said, just adding my own recent experience:
If we assume the temperature is exactly halfway between the 0.0625 reading and the next one (0.125), and assuming the noise is low yet present and random, you will see 50% of the readings at 0.0625 and 50% at 0.125. Obviously the ratio changes the closer you get to either reading.
I did exactly what was suggested here: simply accumulated 16 samples, then divided the result by 16 *using floating point math* into a floating point variable, and gained much higher resolution. It works beautifully for me, both with the DS18B20 and with analog sensors (LM35). If you go over the datasheet you will see you can get quicker readings by lowering the resolution, but if you then try to regain the resolution with the oversampling mentioned above you end up in exactly the same place (several samples = longer time), so there is no point in that.
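The accumulate-then-divide step described above can be sketched as follows. A minimal sketch, assuming the 16 raw samples are already in an array; the function name is my own:

```c
/* Sum 16 raw readings, then divide by 16 in floating point so the
   fractional part (the extra resolution) is not truncated away. */
float average16(const int samples[16])
{
    long sum = 0;
    int  i;
    for (i = 0; i < 16; i++)
        sum += samples[i];              /* integer accumulation      */
    return (float)sum / 16.0f;          /* float division keeps bits */
}
```

The key point is that the division is done in floating point; an integer divide by 16 would throw the gained resolution straight back away.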
Ttelmah
Joined: 11 Mar 2010 Posts: 19195
Posted: Sun Dec 10, 2017 1:20 pm |
Averaging 16 samples is oversampling...
Ttelmah
Joined: 11 Mar 2010 Posts: 19195
Posted: Sun Dec 10, 2017 1:28 pm |
Quote (temtronic): |
Alright, I'm expanding my 'brilliant' idea...
Use four sensors.
Read them at a 1Hz rate:
1st at 1.00
2nd at 1.15
3rd at 1.30
4th at 1.45
Avg 1 and 2 to get a reading at 7.5
Avg 2 and 3 to get a reading at 22.5
Avg 3 and 4 to get a reading at 37.5
Now you've got 7 readings within a 1 second time frame, which should smooth things down a bit... if not smooth enough, then apply 'math'.
Jay |
Now, if you can accept a 4 second update rate, take four readings from each and average /16. This would give the potential for the extra bits required. However, the fact that he found a median filter was necessary before suggests that at least some of the noise is 'spike' noise rather than random. This is probably electrical in origin, and if so it might well affect all four sensors at the same time.
Part of the key to choosing the best approach is understanding the nature of the noise.
Gabriel
Joined: 03 Aug 2009 Posts: 1067 Location: Panama
Posted: Sun Dec 10, 2017 6:06 pm |
Quote: | Averaging 16 samples, is oversampling..... |
After reading the explanation of Decimation vs. Simple Averaging, my understanding of oversampling is that it's a bit more than just an average of a lot of readings.
At least that was my take from the AVR document mentioned earlier in this thread.
_________________
CCS PCM 5.078 & CCS PCH 5.093
Ttelmah
Joined: 11 Mar 2010 Posts: 19195
Posted: Mon Dec 11, 2017 4:35 am |
Not really...
'Oversampling' just means you are sampling faster than the actual sample rate you want to use. So if you need to sample at (say) 1KHz to capture a 500Hz signal (basic Nyquist), you 'oversample' by sampling faster than the rate needed: at 2KHz, 4KHz, 8KHz, etc.
Then their 'decimation' is shifting the number back down to reduce the effective sample rate (this is the term used for this in DSP), which is just division, but done efficiently by a binary shift.
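The oversample-and-decimate step can be sketched like this. A generic C sketch of the technique from the AVR document (gain n extra bits by summing 4^n samples and shifting right by n); the function name and the 64-sample buffer are my own choices, shown here for n = 3:

```c
/* Sum 4^3 = 64 oversampled readings, then decimate with a right
   shift of 3. The shift divides by 8, so 3 of the 6 'extra' bits
   in the sum are kept as genuine added resolution. */
unsigned long oversample_decimate(const unsigned int samples[64])
{
    unsigned long sum = 0;
    int i;
    for (i = 0; i < 64; i++)
        sum += samples[i];   /* accumulate the oversampled block */
    return sum >> 3;         /* decimate: divide by 8 via shift  */
}
```

With 10-bit input samples this yields a 13-bit result, provided the noise requirement discussed below is actually met.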
The first key problem depends on the rate at which you actually want the data to update. If you want 1 sample per second, then to get three extra bits you would actually need to record 64 samples in that time.
The second is the one I alluded to: the nature of the error. This process only works if the noise on the samples is symmetrically distributed around the 'real' value.
Now, the 'point' about the method I posted is that it effectively generates a 'rolling' average, so you can get a new reading each time you take a reading from your sensor. However, the 'downside' is that you still need to be using the sum that corresponds to the amount of oversampling you require, and the lag will get larger as you increase the divisor...
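The rolling average and its divisor/lag trade-off can be sketched in one function. This is a generalised sketch of the earlier snippet, not the original code; the function name and parameter `n` are my own:

```c
/* Rolling ('leaky') sum: each new sample is added, then 1/n of the
   sum is removed. In steady state the sum settles at (n-1)*value,
   so dividing by (n-1) recovers the averaged reading. Larger n
   means more smoothing, but more lag before a step in the input
   shows up fully in the output. */
long rolling_update(long *sum, long value, int n)
{
    *sum += value;            /* add the new reading        */
    *sum -= *sum / n;         /* leak 1/n of the sum        */
    return *sum / (n - 1);    /* current averaged estimate  */
}
```

Feeding a constant input, the output converges on that input with a time constant of roughly n samples, which is exactly the lag referred to above.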