CCS C Software and Maintenance Offers

Read-Write-Modify issues
Ttelmah



Joined: 11 Mar 2010
Posts: 20059


Posted: Mon Dec 26, 2016 3:38 pm

All CPUs use multiple clocks per instruction.
Looking 'historically', the old Z80 CPU used four clocks for its standard instructions, so this is nothing to do with the Harvard architecture. There are, however, 'alternative ways of organising this': another early CPU, the 6502, used two anti-phase clocks and executed its instructions in two cycles of these. Even CPUs that 'appear' to use one cycle per instruction 'cheat' internally, generating extra clocks using delays or phase inversion/shifting.
On most CPUs, though, the number of cycles is far harder to predict: you have instructions taking 1, 2, 3, 4, 8, 10, 12, 16, 24 cycles, etc..
The key point about the Harvard architecture is that it 'simplifies' things by performing the same operation on each part, and it can do two things at once (accessing both memory spaces in the same clock phase). CPUs that can do an operation in one cycle are actually using multiple higher-frequency clocks and things like cache memory operating faster than the CPU clock. These then introduce the extra complexity that the actual time will vary if those components are busy doing other things. So your PC can take different times to perform the same operation, depending on where it is in memory (relative to the last instruction) and on what else the CPU is doing....
Arakel



Joined: 06 Aug 2016
Posts: 107
Location: Moscow


Posted: Fri Dec 30, 2016 6:27 pm

Ttelmah wrote:
All CPUs use multiple clocks per instruction. [...]


Thanks for the explanation. This is enough for what I need for now. I ask too many questions when I am tired.
_________________
Yo! I love learning and technology! I just do not have experience so do not be angry if I ask a stupid question about a detail! From so much to remember sometimes I forget the details in order to remember the big problems!
temtronic



Joined: 01 Jul 2010
Posts: 9632
Location: Greensville,Ontario


Posted: Fri Dec 30, 2016 7:21 pm

re: Olympic average vs moving average

While I haven't 'done the math' on the 'moving average' link you posted, it does appear to be 'complex', and ANY math that is 'complex' will take a LOT of time as well as resources (memory). Also consider a set of readings (say from 0 to 255) where most are 20 ±1 counts. In a moving average, if one reading in the chain is high (say 255), it will dramatically drag the average above the true value of the real readings.

The Olympic average is fast and better. Better, since it removes the highest and lowest of the ten readings, which gets rid of bad ones (the 255 in my example). That leaves 8, which get summed and then averaged. 8 is a 'magical' number: to divide by 8 the PIC can use a simple and FAST shift instruction. NO complex divides, no extra scratch-pad RAM needed, just very, very efficient and tight code.

Probably 95% of my ADC use has been in temperature/solar/alarm or PMT data acquisition. All are 'slow' response systems, and the Olympic average has worked fine for almost 4 decades.

The 0.68 µF cap on the ADC input pins is my regular analog signal filter cap. It gets rid of most noise yet allows fairly fast response. There are shelves of books written about analog design, though the best teacher is the lab!

Jay
Ttelmah



Joined: 11 Mar 2010
Posts: 20059


Posted: Sat Dec 31, 2016 10:06 am

Once you have the signal 'reasonable', then you can use 'sneaky' software solutions that are code efficient. For instance:
Code:

#define FACTOR 8 //factor to apply for the integration average should be
//a 'binary' value (2,4,8 etc..) - damping rises as factor increases.
#define MAX_UPDATE(x) if (x>max) max=x
#define MIN_UPDATE(x) if (x<min) min=x

//This is a function to provide a long term 'rolling' average
//on incoming data, automatically rejecting abnormal single values
//and using efficient maths.
//The maths here is a 'leaky integrator'. You have an integration
//of the arriving data, 'less' the averages that have been returned
//This is then fed by a 3 sample median filter
int16 smooth(int16 val)
{
   static int32 rolling_sum; //long term sum of values
   static int8 phase=0;
   static int32 max, min, total;
   static int1 first=TRUE;
   static int16 return_val;
   if (phase++==0)
   {
      total=val;
      max=0;
      min=65535;
      if (first)
         return_val=val; //value to return till average starts
   }
   else
   {
      total+=val;
   }
   MAX_UPDATE(val);
   MIN_UPDATE(val);
   if (phase==3)
   {
      //third phase
      total-=min;
      total-=max;
      phase=0; //restart state machine
      //total now is the median value of the last three samples
      if (first)
      {
         rolling_sum=total*(FACTOR-1); //seed the rolling sum
         //avoids it taking a long time to 'catch up' on start
         first=FALSE;
      }
      //now perform the historic average
      rolling_sum+=total; //add in the current value
      return_val=rolling_sum/FACTOR;
      rolling_sum-=return_val;
   }
   return return_val;
}


Call this with an int16 value you want to smooth, and it'll give you a 'chasing' average of the readings, updating every third reading.
It's actually two filters, one after the other, both using very fast maths. The first is a simple three-sample median filter, returning the middle one of each three; this updates every third sample. It then feeds a 'leaky integrator': the arriving values are summed (integration), but the returned average is then subtracted from this sum (just as a capacitor would discharge if it had a resistor across it). The integration involves just one addition, and then division by a constant (FACTOR). So long as you keep this as a binary value (4, 8, etc.), the compiler knows it is a binary division and just uses shifts to perform it. The result is nice, fast code.

Change the FACTOR to give different amounts of smoothing.

As a demo:
Code:

const int16 values[32] = {120,100,110,140,190,300,110,150,140,0,150,140,160,140,160,0,
120,100,110,140,190,300,110,150,140,0,150,140,160,140,160,0 };

void main()
{
   int ctr;
   int16 avg;
   while(TRUE)
   {
      for (ctr=0;ctr<32;ctr++)
      {
         avg=smooth(values[ctr]);
         printf("smoothed %ld\r", avg);
      }
   }
}

Gives an output of:

Code:

smoothed 120
smoothed 120
smoothed 110
smoothed 110
smoothed 110
smoothed 120
smoothed 120
smoothed 120
smoothed 122
smoothed 122
smoothed 122
smoothed 124
smoothed 124
smoothed 124
smoothed 129
smoothed 129
smoothed 129
smoothed 125
smoothed 125
smoothed 125
smoothed 127
smoothed 127
smoothed 127
smoothed 130
smoothed 130
smoothed 130
smoothed 131
smoothed 131
smoothed 131
smoothed 132
smoothed 132


Note how the big spikes (the 0 and 300 values) are completely missing from the output, yet the output responds within only three samples to changes. Also note how it smooths after a few cycles (the integrator is initially 'seeded' to avoid a slow start, though that means it can still reflect an abnormal first reading).
Arakel



Joined: 06 Aug 2016
Posts: 107
Location: Moscow


Posted: Sun Jan 01, 2017 8:50 am

Happy New Year to everyone! Let's stop here, because otherwise I could keep asking questions until the next New Year.