Running Average Computation

jecottrell
Posted: Sat Feb 26, 2005 7:58 pm

Hello All,

I'm fine-tuning an application that uses the PNI magnetometer chip, and I need some help with the computation of a running average to dampen the readings and reduce the apparent noise.

Some of the limitations are:

18F2525 part
Speed (need to maximize speed of whatever algorithm I use)


My concept is to keep a running sum of the readings: each time through, drop the oldest (101st) reading, add the new reading, and divide by 100 to get a running average.

My question is: is there a better (more efficient, more technically correct, etc.) way to do this? The only way I can see to achieve this is to rotate through an array....
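
For illustration, a sketch of that circular-buffer idea (the names are placeholders, and pre-filling the buffer on the first call is just one way to avoid a warm-up period):

Code:

#define WINDOW 100              // number of samples in the average, per the post

int16 buffer[WINDOW];           // circular history of past readings (200 bytes of RAM)
int8  head = 0;                 // index of the slot holding the oldest reading
int32 running_sum = 0;          // sum of everything currently in the buffer
int1  seeded = false;

int16 moving_average(int16 new_reading) {
   int8 i;

   if (!seeded) {               // first call: pre-fill so there is no warm-up period
      seeded = true;
      for (i = 0; i < WINDOW; i++)
         buffer[i] = new_reading;
      running_sum = (int32)new_reading * WINDOW;
   }
   running_sum -= buffer[head]; // drop the oldest reading from the sum
   running_sum += new_reading;  // add the newest
   buffer[head] = new_reading;  // overwrite the oldest slot
   head++;
   if (head == WINDOW)
      head = 0;
   return (int16)(running_sum / WINDOW);  // a power-of-two WINDOW would make this a shift
}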

Comments, suggestions....

Thanks,

John

wireless
Posted: Sun Feb 27, 2005 7:40 am

Hi John

A very simple way to do a running average is to add the current value to the average and divide the result by 2; this becomes the new average value. Obviously, you could get a better result by using more than one sample, i.e. add n samples to the average and divide by n+1.
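
For illustration, the basic form of that suggestion might look like this (names are placeholders):

Code:

// new_average = (old_average + sample) / 2, as described above
int16 simple_avg(int16 sample) {
   static int16 avg = 0;
   avg = (int16)(((int32)avg + sample) / 2);   // 32-bit sum so full-scale values cannot overflow
   return avg;
}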

Regards


Terry

Ttelmah (Guest)
Posted: Sun Feb 27, 2005 7:41 am

Have a look at the thread 'averaging or filtering', posted a week ago.
I'd suggest that the first algorithm will work fine for your application, and with a binary divider (/128) it will be very fast.
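
That thread isn't reproduced here, but the filter being referred to is presumably the rolling-sum form that shows up later in this topic; a minimal sketch with the /128 binary divider mentioned (names are placeholders):

Code:

#define DIV_FACTOR 128           // binary divider, so the compiler can use shifts

int16 rolling_filter(int16 reading) {
   static int32 sum = 0;         // persistent running sum
   int16 avg;

   sum += reading;               // add the new reading
   avg = (int16)(sum / DIV_FACTOR);   // current filtered value
   sum -= avg;                   // 'leak' the average back out of the sum
   return avg;
}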

Best Wishes

jecottrell
Thanks
Posted: Sun Feb 27, 2005 9:40 am

Thanks.

Ttelmah, I'll try your approach. I believe it will work for both of the situations I need: I need to be able to dampen the short-term readings and also the long-term ones (this will account for any drift in the readings over time). I will probably use a divide by 4 for the short term and a divide by 64 for the long term.

I tested it on some data in Excel and answered my own question. Because it does not "pack" the total with N data points prior to the divide by N, it takes a proportional number of readings to "come up to speed."

Thanks a bunch. I knew there had to be a "non-intuitive" answer to the problem.

John

TSchultz
Posted: Sun Feb 27, 2005 5:14 pm

If you want things to come up to speed quicker, preprocess the reading and compare it with the last one: if the difference is larger than X, say 25%, set the last value equal to the current value. This pre-seeds the average with the large signal change, and you then get dampening on subsequent readings if they don't differ by more than X. I normally tend to acquire a burst of 5 samples at a high rate, verify that they all agree (within acceptable bit-noise levels), average them, and use that as the ADC count. This count is then used in the averaging algorithm.

I have done this, and the running average filter works very well for more or less steady-state signals, or slowly changing signals. With the preprocessor you can still respond to large signal changes and still have the dampening of the averaging filter when the signal is not changing much.

If you are reading widely fluctuating signals such as a sine wave then this method does not work, but then you would not be looking for an averaging-type algorithm like this.
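
A rough sketch of those two stages (the burst size, noise band, 25% figure, and read_adc() source are placeholders, and a simple divide-by-2 average stands in for whatever averaging algorithm is actually used):

Code:

#define BURST       5      //samples per burst
#define NOISE_BAND  4      //counts the burst samples may spread and still "agree"
#define JUMP_LIMIT  25     //percent change that triggers pre-seeding

//Take a quick burst, check the samples agree, and return their average.
//Returns 0xFFFF as a placeholder "bad burst" flag if they do not agree.
int16 read_burst(void) {
   int8  i;
   int16 sample, lo, hi;
   int32 sum = 0;

   lo = 0xFFFF;
   hi = 0;
   for (i = 0; i < BURST; i++) {
      sample = read_adc();          //or whatever supplies the raw readings
      if (sample < lo) lo = sample;
      if (sample > hi) hi = sample;
      sum += sample;
   }
   if ((hi - lo) > NOISE_BAND)
      return 0xFFFF;
   return (int16)(sum / BURST);
}

//Pre-seed on a big step, otherwise damp.
int16 preseeded_avg(int16 count) {
   static int16 last = 0;
   int16 diff;

   diff = (count > last) ? count - last : last - count;
   if (diff > (int16)(((int32)last * JUMP_LIMIT) / 100))
      last = count;                                   //large change: follow it immediately
   else
      last = (int16)(((int32)last + count) / 2);      //small change: normal damping
   return last;
}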

guest (Guest)
Posted: Mon Feb 28, 2005 12:55 pm

http://www.embedded.com/2000/0011/0011feat3.htm

The above link points to code that does a median average; it works very well and handles the startup issue when the array is not full.

Carter Brock

SherpaDoug
Posted: Mon Feb 28, 2005 1:19 pm

Another alternative that handles outliers better than averaging, and is easier to compute than a median, is what I have used for years and call "Olympic scoring". I sum ten readings, keeping track of the highest and lowest. Then I subtract the highest and lowest from the sum and divide by 8 (a bit shift). Outliers tend to be either the highest or lowest and get thrown out; the middle 8 get averaged. You never have to sort the data, or even do a real divide!
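
A minimal sketch of that scheme, with the ten readings handed in as an array (filling the array is left to the caller):

Code:

//Olympic scoring: ten readings, drop the highest and lowest,
//shift to divide the remaining eight.
int16 olympic_average(int16 *samples) {
   int8  i;
   int16 lo, hi;
   int32 sum = 0;

   lo = samples[0];
   hi = samples[0];
   for (i = 0; i < 10; i++) {
      sum += samples[i];
      if (samples[i] < lo) lo = samples[i];
      if (samples[i] > hi) hi = samples[i];
   }
   sum -= lo;                    //throw out the lowest...
   sum -= hi;                    //...and the highest
   return (int16)(sum >> 3);     //average the middle eight with a shift
}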

Humberto
Posted: Tue Mar 01, 2005 9:39 am

Quote:

You never have to sort the data, or even do a real divide!


Sherpa,
nice, easy and fast. I like it.
If your acquisition is a consecutive sampling burst, then instead of subtracting the highest and the lowest, discarding them will "filter" out glitches in noisy environments.

Humberto

SherpaDoug
Posted: Tue Mar 01, 2005 9:48 am

I generally sum the samples as they come in. I don't know which is the highest or lowest until the end, when they are already in the sum; that is why I have to subtract them out afterwards. You also don't have to explicitly store all ten samples, just the Min, Max, and cumulative SUM. That was important on the 16C54s that I first used this on!
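
Done on the fly as described, only Min, Max and the running SUM need to be kept; read_adc() stands in here for the real sample source:

Code:

int16 olympic_average_stream(void) {
   int8  i;
   int16 sample, lo, hi;
   int32 sum = 0;

   lo = 0xFFFF;
   hi = 0;
   for (i = 0; i < 10; i++) {
      sample = read_adc();       //placeholder for whatever supplies the readings
      sum += sample;
      if (sample < lo) lo = sample;
      if (sample > hi) hi = sample;
   }
   return (int16)((sum - lo - hi) >> 3);   //middle eight averaged with a shift
}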

jecottrell
Averaging
Posted: Tue Mar 01, 2005 11:42 am

Wow, still getting good ideas. Thanks.

I tested the algorithm suggested by Ttelmah and it works well. It only takes a couple of seconds to come up to speed. Since my app is running 100% of the time this isn't a factor.

I like the Olympic scoring concept SherpaDoug. I'm still trying to figure out whether it would work for my short term reading. I'll have to experiment.

John

Ttelmah (Guest)
Posted: Tue Mar 01, 2005 4:17 pm

I must admit I like the Olympic sorting approach for certain types of signal (the rejection of 'spurious' values is nice). This is similar (at a very much simpler level) to the concept of rejecting results that are more than a certain number of standard deviations from the mean of a set. However, the downside comes if you want a continuous 'result' reflecting the current reading, averaged over the last few values and updating with each sample.

I found myself trying to see if one could code something similar on a 'rolling' basis, since this would be rather nice. A 'crude' form can be done by limiting the change using the rolling average system posted. What you do is compare the incoming value to the 'last' average; if the difference is small, behave as normal, but if the difference is larger, only apply part of the change, effectively increasing the damping for large changes. I have done this in the past in a number of ways, and it is a very effective way of damping normal signals. Preloading the running 'sum' with the first ADC reading * the division factor used provides a really fast initial acquisition.
So something like:
Code:

#define DIV_FACTOR (8)
#define LARGE_MOVE (64)
#define MOVE_SHIFT (4)

int16 damp_val(int16 adc_reading) {
   static int32 sum;
   static int16 last_avg;
   static int1 first_up=true;
   signed int16 delta;
   //Fast initialisation
   if (first_up) {
      first_up=false;
      sum=(int32)adc_reading*DIV_FACTOR;   //32-bit multiply so a full-scale reading cannot overflow
      last_avg=adc_reading;
      return(last_avg);
   }
   delta=adc_reading-last_avg;
   if (abs(delta)>LARGE_MOVE) {
      delta=delta/MOVE_SHIFT;
      adc_reading=last_avg+delta;
   }
   sum+=adc_reading;
   last_avg=sum/DIV_FACTOR;
   sum-=last_avg;
   return(last_avg);
}


With this, if there is a large sudden 'jump' in the signal, only 1/4 of the change will be used in the average computation, slowing the response massively for such excursions. However the system will 'self load' with the first reading, to give immediate acquisition on startup.
It doesn't have the same ability as the Olympic system to reject spurious 'small' excursions, but should work quite well.

Best Wishes

jecottrell
More Help.....
Posted: Fri Mar 11, 2005 6:43 pm

SherpaDoug (or any other sympathetic soul...),

I thought I had done a bang-up job with your advice on a simple "digital filter". However, during some of my longer tests I found a problem with the ASIC I was using (PNI 11096). After a lot of work to figure out what I was doing wrong, I got word from the PNI applications engineer that there is a problem with the die. So....I need to modify my filter a bit to deal with that, and I was wondering again how to solve the problem efficiently.

Here is some amplifying info....

I need a running average; I can't take 8 or 10 readings and average them into one result. (But I do need to be able to throw out a high and a low reading in the running average.)

The ASIC problem I need to address in firmware is that it will regularly (every 15 sec - 5 minutes) return a wild ball....really wild.

I like SherpaDoug's Olympic scoring method but I'm unsure of how to do it efficiently in the running average scheme of things. Here are my current short and long term filters:

Code:

signed int16 filter(signed int16 adval) {
    static signed int32 filter_avgsum;
    signed int16 result;

    filter_avgsum += adval;
    result = filter_avgsum/filter_factor;
    filter_avgsum -= result;

    return(result);
}

signed int16 zero(signed int16 adval) {
    static signed int32 zero_avgsum;
    signed int16 result;

    zero_avgsum += adval;
    result = zero_avgsum/zero_factor;
    zero_avgsum -= result;

    return(result);
}


(One question as I post these snippets: should I be type-casting between shorts, longs, doubles, etc.?)

My initial approach would be to keep running track of 6 readings. The latest reading would bump the oldest out. I'd check the latest reading to see if it qualified as the new high or low and copy it there if so. Get rid of the high and low and then divide by 4......

My biggest obstacle is how to efficiently keep a running track of the 6 historical values. Space isn't really an issue; speed may be.....
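
One possible shape for that six-reading rolling version (the buffer size and names are illustrative, and this is only a sketch, not a drop-in answer):

Code:

#define HIST 6

int16 rolling_olympic(int16 reading) {
   static int16 hist[HIST];     //last six readings
   static int8  idx = 0;
   int8  i;
   int16 lo, hi;
   int32 sum = 0;

   hist[idx] = reading;         //newest reading bumps the oldest
   idx++;
   if (idx == HIST)
      idx = 0;

   lo = hist[0];
   hi = hist[0];
   for (i = 0; i < HIST; i++) { //six adds and a few compares per call
      sum += hist[i];
      if (hist[i] < lo) lo = hist[i];
      if (hist[i] > hi) hi = hist[i];
   }
   return (int16)((sum - lo - hi) >> 2);   //middle four averaged: /4 is a shift
}

As with the earlier filters, the history could be pre-filled with the first reading so the output is sensible from the start.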

Thanks again for all the cranial horsepower,

John

Ttelmah (Guest)
Posted: Sat Mar 12, 2005 11:28 am

You could do a modified version of what I proposed. Something like:
Code:

#define DIV_FACTOR (8)
#define LARGE_MOVE (128)

int16 damp_val(int16 adc_reading) {
   static int32 sum;
   static int1 first_up=true;
   static int16 last_val;
   signed int16 delta;
   static int16 avg;
   //Fast initialisation
   if (first_up) {
      first_up=false;
      sum=(int32)adc_reading*DIV_FACTOR;
      last_val=adc_reading;
      avg=adc_reading;
      return(adc_reading);
   }
   delta=adc_reading-last_val;
   if (abs(delta)>LARGE_MOVE) {
      //Here we have a possibly 'exceptional'
      //value.
      last_val=adc_reading;
   }
   else {
      last_val=adc_reading;
      sum+=adc_reading;
      avg=sum/DIV_FACTOR;
      sum-=avg;
   }
   return(avg);
}

This will throw away the first reading after a rapid movement (in this case defined as a change of more than 128 from the last reading). It'll also throw away the 'return' reading (if the signal jumps back on the next reading), but will in each case return the previous average; provided the change is less than the 128 level, it will keep averaging as normal.

Generally, in the examples posted so far, casting should be automatic. The only place where it really applies is in generating the 'sum' (which is int32), but since the addition to make this is int16+int32, the arithmetic should use int32 by default and be OK.

Best Wishes

Sherpa Doug (Guest)
Posted: Sat Mar 12, 2005 12:11 pm

You might want to look into a "Median Filter". They give fast response to a real change, but discard outliers. They require sorting the data, so they are usually too computationally intensive for my use.

Also, I used a PNI compass a while ago and IIRC they have a "damping" coefficient in their own software. Look in your manual and see if it does what you want.
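
For a small window the sort is cheap; a median-of-five sketch using an insertion sort (the window size is just an example):

Code:

int16 median_of_5(int16 *samples) {
   int8  i, j;
   int16 tmp, work[5];

   for (i = 0; i < 5; i++)         //copy so the caller's buffer isn't reordered
      work[i] = samples[i];

   for (i = 1; i < 5; i++) {       //insertion sort -- fine for five values
      tmp = work[i];
      for (j = i; j > 0 && work[j - 1] > tmp; j--)
         work[j] = work[j - 1];
      work[j] = tmp;
   }
   return work[2];                 //middle value of the sorted five
}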


Guest
Update
Posted: Sun Mar 13, 2005 5:16 pm

(The PNI part I'm using only provides raw readings....it's up to me to do all the processing.)

I've tried several things, but haven't had much time to really troubleshoot and look close at the details of the results.

I tried keeping track of 3 readings, and then throwing out the high and low for a quasi "Olympic" scoring approach. For some reason it didn't work 100% and I moved on before figuring out why. I'll go back to it when I get a chance and try to figure out what was happening.

Next approach was to throw out values that were greater than X above the previous reading. Even this approach had some problems. It seemed that it would work 99% of the time but eventually I would get an offending value of (X-1).

My latest approach, and the most successful, used a previous algorithm that tracks the trend of the readings. A valid reading would show a continuous increase or decrease over 5 or more readings. An "excursion" would be an instantaneous diversion and would be excluded from the valid category.
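
For what it's worth, one simple way to catch an 'instantaneous diversion' is to hold each reading back by one sample and drop it if it sticks out from both of its neighbours. This is only one possible reading of the approach described above, and the threshold and names are made up:

Code:

#define EXCURSION 200     //placeholder threshold for "really wild"

//Returns the previous reading, with single-sample spikes replaced.
//Output is therefore delayed by one sample.
int16 despike(int16 reading) {
   static int16 prev = 0, held = 0;
   static int1  primed = false;
   int16 out, d1, d2;

   if (!primed) {          //first call: nothing to compare against yet
      primed = true;
      prev = held = reading;
      return reading;
   }
   d1 = (held > prev)    ? held - prev    : prev - held;
   d2 = (held > reading) ? held - reading : reading - held;
   if (d1 > EXCURSION && d2 > EXCURSION)
      out = prev;          //held sample sticks out on both sides: drop it
   else
      out = held;          //held sample follows the trend: keep it
   prev = out;
   held = reading;
   return out;
}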

Unfortunately, all of this filtering/error handling slows things down and reduces sensitivity.....

I'll keep you filled in and thanks again for all the help....

John