Operations efficiency – Memory vs. CPU speed
viki2000
Posted: Sat Jun 10, 2017 1:53 pm

Thank you for the code and suggestions.
I will test that too and compare with CORDIC to see the error and the speed.
I noticed you tested with 128 steps, but I am actually interested in 10,000 or 5,000 steps. 128 fits in 8 bits, but 10K or 5K needs 16 bits. Will the subroutine lose efficiency or slow down a lot because I want so many steps?
Ttelmah
Posted: Sat Jun 10, 2017 2:03 pm

The operation takes the same time for each angle.
I was just testing to show that the error doesn't have any large jumps.
viki2000
Posted: Sun Jun 11, 2017 12:53 am

I was wondering how you came up with that polynomial approximation formula, and searching on the internet I arrived at these websites:
http://www.coranac.com/2009/07/sines/
http://allenchou.net/2014/02/game-math-faster-sine-cosine-with-polynomial-curves/
http://www.nabla.hr/CL-DerivativeG3.htm#The%20approximation%20of%20the%20sine%20function%20by%20polynomial%20using%20Taylor%27s%20or%20Maclaurin%27s%20formula
http://www.math.smith.edu/~rhaas/m114-00/chp4taylor.pdf
https://namoseley.wordpress.com/2012/09/18/a-quick-and-dirty-sinx-approximation/
https://ccrma.stanford.edu/~dattorro/EffectDesignPart3.pdf
Ttelmah
Posted: Sun Jun 11, 2017 1:02 am

I'm sure lots of people will have done the same. I just used a second order polynomial fit formula, applied it to the sine curve to get a best fit, then calculated the error terms and applied a third order fit to those.
Nowadays things like Numbers and Excel even have the ability to do this for you; in the past I programmed these fits by hand.

You may find some of the published ones are better than mine, but you have the balancing act of work versus fit accuracy. 8x the performance, while still giving close to 3 decimals, seemed a pretty good compromise.

The second link you point to is a very similar approach, but he 'misses a trick': by using abs, I generate both the upper and lower halves of the curve without having to do any if tests. I must admit I didn't bother to see if a similar improvement could be done on cos, but I suspect it might be possible to improve that a little.
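For reference, the fit works out as: y = (4/PI)*x - (4/PI^2)*x*|x| (the second order part, with |x| folding in the negative half-cycle), then sin(x) ~ P*(y*|y| - y) + y with P = 0.225 (the third order correction of the residual). This is exactly what the fast_sin() code I posted evaluates.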
Ttelmah
Posted: Sun Jun 11, 2017 3:21 am

I do find myself wondering what you are actually hoping to achieve?
The thing is that working to great accuracy requires accuracy all the way down the system, and this is often not compatible with speed (not just in the PIC itself, but in all the components feeding the PIC, and being fed by the PIC). So (for instance) synthesising a waveform would not only require DACs with really high accuracy, but also all the subsequent circuitry, and designs that maintain this accuracy while the values are being updated. The likelihood of a high accuracy DAC maintaining that accuracy at high slew rates is very low. Then at small angles it is pointless to even try to calculate sin. For example, at 1/10000th of a full circle (0.0012566 radians), the sine gives the same number (0.0012566). Even up to 10 degrees, the difference between the angle in radians and the sine of that angle is only in the third decimal!...
Unless you have some unlikely degree of precision in the actual values being fed into the system, or coming out, trying to work to a large number of decimal places, especially quickly, is pointless.
Now, that having been said, if you really do need high speed, then the lookup table will always win. You only need 2500 points to generate sin for 10000 points round a circle, and this can be done in 10KB of ROM. Even some PIC16s could do this, while most PIC18s would do it using only a small percentage of their ROM, and 'run rings' round any other approach.
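As a sketch of what I mean (untested; the table itself would be generated offline):

Code:
// Quarter-wave lookup: 2501 signed Q15 entries cover 0..PI/2 inclusive
// (one extra entry so the peak at index 2500 is exact).
// sin_tab[i] = (signed int16)(32767.0*sin(i*(PI/2)/2500)), generated offline.
const signed int16 sin_tab[2501] = {0 /* ...remaining entries generated offline... */};

// idx = 0..9999 round the circle
signed int16 sin10k(unsigned int16 idx)
{
   unsigned int16 q = idx / 2500;   // quadrant 0..3
   unsigned int16 r = idx % 2500;   // position within the quadrant
   switch (q)
   {
      case 0:  return  sin_tab[r];          // rising quarter
      case 1:  return  sin_tab[2500 - r];   // mirrored
      case 2:  return -sin_tab[r];          // negated
      default: return -sin_tab[2500 - r];   // mirrored and negated
   }
}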
viki2000
Posted: Mon Jun 12, 2017 1:22 am

Quote:
“I do find myself wondering what you are actually hoping to achieve?”

That is the main question actually, and sometimes I feel I am not so sure, but here it is.
I do not know how other people are, but when I try to learn things I need to grab a subject, stick with it, and go into its depths and sides to understand what is behind it: the different consequences/implications, the differences between objects, concepts, methods, and so on. I analyze things until they are clear to me; only after that do I feel comfortable. For me, learning without a specific purpose is not attractive. If I have one particular subject, then I will study it as much as I can, learning from more experienced people in that particular field, and I will almost always try to extrapolate and learn about everything related to that concept, slowly, over time, but in as much depth as my level of understanding and my testing possibilities allow, because theory alone, without actual tests, will not be fixed properly in my mind. I need the hands-on experience of tests.
I do not have any special project to finish by a date; it is more or less only an idea of what I would like to do, and I do not know what the best approach is, or why it would be the best. Nobody has answered me that. I spend time only once in a while to learn and test. Maybe it is learning driven by a particular subject, extended over different things. I have experience in different fields of electronics, automation and electricity, but not very deep in embedded electronics, and sometimes it has crossed my mind that it is a good field where I could work in the future, but I must improve my knowledge. I am not a daily programmer; I do it only once in a while.
In this particular case, which has no fixed finish date as a project, I wanted to produce a rectified sine wave, 100Hz, 1Vpp, a subject that we discussed in other questions I asked here on the forum. Finding an immediate solution became less important (that is more or less clear to me, using different analog or digital approaches); the subject surrounding this particular case became more interesting, and I am just digging around DDS. I am not saying that I have found a solution that 100% satisfies me, but I would like to test various solutions and compare them, so that next time a problem comes up of producing a wave digitally, I will know the different limitations and tricks, and the decision time for a given project will be very short. I am building my references for what I should and should not do, with answers to the "why"s. At the same time, I decided to publish the results, because I was helped and I would like other people to be helped too.
My previous subjects related to the I2C DAC and SPI DAC are still open for me and I will come back to them in the future.
One thing that made me open the present question was an example on the Microchip forum showing the Q15 math library for 16-bit PICs, in particular for the XC16 compiler. Have a look at page 232 of the next PDF file; it is about libq.h and _Q15sinPI:
http://ww1.microchip.com/downloads/en//softwarelibrary/fixed%20point%20math%20library/50001456j.pdf
When I see a thing like that, my mind asks questions like:
- Why Q15 instead of the normal sin() from math.h? I guess it is about speed, but probably we get some error, losing some decimals of sine accuracy.
- Is there any similar library for CCS C? Is XC16 with this _Q15sinPI faster or better?
- Is the CORDIC implementation, or your latest polynomial approach, faster or more accurate than _Q15sinPI? Maybe inside that libq.h library they do similar approximations; I do not know.
- Why should I use one or the other? Why CCS C and not XC16? Which is better for this particular subject? I like CCS more because it makes life a lot easier with its predefined functions, but here math.h does not offer what libq.h from XC16 offers with _Q15sinPI, so we need to build/write subroutines/libraries such as CORDIC or your latest polynomial approach.
I would like to compare them for speed, memory allocation, and the error in generating the sine compared with the standard float sin().
As you can see, I have a lot of thoughts, not necessarily one particular subject/project, even if everything started from, and is driven by, a particular subject: generating a 100Hz rectified or full sine wave, which seemed so easy in the beginning. Perhaps it is easy, but I jumped into so many details and comparisons that the subject became bushy, perhaps more than it should be, but that is my way of learning things.
Any thoughts on _Q15sinPI (libq.h, actually libq-elf.a) from Microchip?
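For context, my understanding of the Q15 idea, which is my own assumption of why it is fast, not something taken from the library documentation:

Code:
// Q15: a signed 16-bit integer n represents the real value n/32768.0,
// so the usable range is -1.0 .. +0.99997.
#define FLOAT_TO_Q15(f)  ((signed int16)((f) * 32768.0))
#define Q15_TO_FLOAT(n)  ((n) / 32768.0)

// A Q15 multiply needs only one integer multiply plus a shift - no float
// library at all, and the PIC24/dsPIC has a hardware 16x16 multiplier.
signed int16 q15_mul(signed int16 a, signed int16 b)
{
   return (signed int16)(((signed int32)a * b) >> 15);
}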
viki2000
Posted: Mon Jun 12, 2017 4:13 am

Speaking about accuracy and error of getting the sine using CORDIC or polynomial approximation, below is a link to an Excel file where I compare two sine waves.
https://goo.gl/cnDyfP
I used a PIC24HJ64GP202 with the XC16 compiler's _Q15sinPI mentioned above: I swept the range -32768 to +32767 and got sine values in the same range, which I sent over UART to a serial port.
Then I captured them as HEX directly into a .txt file using the RealTerm serial software, and imported them into Excel, column A.
Column B is a conversion of column A to DEC.
Then in column C I calculate/generate the sine using Excel's sin() function, with column D as the index.
In column E I take the difference between Excel's sin() and the received serial data converted to DEC (column B), which is basically the sine from the _Q15sinPI function inside the PIC.
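The PIC side of the test was essentially this sketch (XC16, not CCS; as I understand it, _Q15sinPI takes a Q15 fraction of PI, so the full int16 range maps to -PI..+PI):

Code:
// XC16: sweep the whole Q15 input range and send each sine value
// over UART1, MSB first (UART and oscillator setup omitted).
#include <xc.h>
#include <libq.h>

static void send16(int v)
{
    while (U1STAbits.UTXBF);       // wait for room in the TX FIFO
    U1TXREG = (v >> 8) & 0xFF;     // MSB
    while (U1STAbits.UTXBF);
    U1TXREG = v & 0xFF;            // LSB
}

int main(void)
{
    long a;                        // 32-bit, so the loop can terminate
    for (a = -32768; a <= 32767; a++)
        send16(_Q15sinPI((_Q15)a));
    while (1);
}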
The charts of the sine waves show almost a perfect overlay.
The max. error/difference is 2.48 counts.
Taking 32767 as 100%, that is a max. deviation of 0.00759% between the sine calculated with Excel's sin() function and the sine from the _Q15sinPI function inside the PIC, which I find very good.
I would like to know if CORDIC or the polynomial approximation can provide similar accuracy, and if my comparison is the right approach. Afterwards I can look at the CPU speed, the execution speed of the instructions. In the end I want to implement everything in CCS.
viki2000
Posted: Mon Jun 12, 2017 6:08 am

I have tried the first implementation of CORDIC with the suggested code, but I get some errors. The code compiles, but the sine is not produced; the numbers that I get on the serial port are not a sine.

Code:
#include <24HJ64GP202.h>
#include <math.h>
#use delay(internal=80MHz)

#FUSES FRC_PLL
#FUSES NOWDT                    //No Watch Dog Timer
#FUSES NOWRTB                   //Boot block not write protected
#FUSES NOBSS                    //No boot segment
#FUSES NORBS                    //No Boot RAM defined
#FUSES NOWRTSS                  //Secure segment not write protected
#FUSES NOSSS                    //No secure segment
#FUSES NORSS                    //No secure segment RAM
#FUSES NOWRT                    //Program memory not write protected
#FUSES NOPROTECT                //Code not protected from reading
#FUSES IESO                     //Internal External Switch Over mode enabled
#FUSES NOOSCIO                  //OSC2 is clock output
#FUSES IOL1WAY                  //Allows only one reconfiguration of peripheral pins
#FUSES CKSFSM                   //Clock Switching is enabled, fail Safe clock monitor is enabled
#FUSES WINDIS                   //Watch Dog Timer in non-Window mode
#FUSES PUT128                   //Power On Reset Timer value 128ms
#FUSES NOALTI2C1                //I2C1 mapped to SDA1/SCL1 pins
#FUSES NOJTAG                   //JTAG disabled

#pin_select U1TX=PIN_B6
#pin_select U1RX=PIN_B7
#use rs232(UART1, BAUD=9600, ERRORS)

// CORDIC based SIN and COS in 16 bit signed fixed point math
// Based on http://www.dcs.gla.ac.uk/~jhw/cordic/
// Function is valid for arguments in range -pi/2 -- pi/2
// For values pi/2--pi: value = half_pi-(theta-half_pi) and similarly for values -pi---pi/2
//
// 1.0 = 16384
// 1/k = 0.6072529350088812561694
// pi = 3.1415926536897932384626
// Some useful Constants
// #define M_PI 3.1415926535897932384626

#define cordic_1K 0x000026DD
#define half_pi 0x00006487
#define MUL 16384.000000
#define CORDIC_NTAB 16
       
signed int16 s, c;   // CORDIC results: sin and cos in Q14 (16384 = 1.0)

// arctan table in Q14 radians; all values fit in 16 bits
signed int16 cordic_ctab [16] = {0x3243, 0x1DAC, 0x0FAD, 0x07F5, 0x03FE,
    0x01FF, 0x00FF, 0x007F, 0x003F, 0x001F, 0x000F, 0x0007,
    0x0003, 0x0001, 0x0000, 0x0000};

// every CORDIC quantity must be signed 16-bit: a plain 'int' is not signed
// on every CCS compiler family, so the signedness is made explicit here
void cordic(signed int16 theta, signed int16 *s, signed int16 *c, int n);

void main(){
     float p, p1;
     int i;

   while(TRUE)
   {
        for(i=0; i<50; i=i+1){
            p = (i/50.0)*PI/2;    // sweep the first quadrant only: 0..PI/2
            p1=p*MUL;             // scale to Q14 (max 25736, fits in 16 bits)
            cordic((signed int16)p1, &s, &c, 32); // request 32 iterations; clamped to CORDIC_NTAB (16)
 
            //Send to serial port
            putc(make8(s,1)); //MSB                 
            putc(make8(s,0)); //LSB
            //delay_ms(100);
        }
   }
}

void cordic(signed int16 theta, signed int16 *s, signed int16 *c, int n){
  signed int16 k, d, tx, ty, tz;
  signed int16 x=cordic_1K,y=0,z=theta;
  n = (n>CORDIC_NTAB) ? CORDIC_NTAB : n;
  for (k=0; k<n; ++k)
  {
    d = z>>15;
    //get sign. for other architectures, you might want to use the more portable version
    //d = z>=0 ? 0 : -1;
    tx = x - (((y>>k) ^ d) - d);
    ty = y + (((x>>k) ^ d) - d);
    tz = z - ((cordic_ctab[k] ^ d) - d);
    x = tx; y = ty; z = tz;
  }
 *c = x; *s = y;
}
Ttelmah
Posted: Mon Jun 12, 2017 6:33 am

Honestly, I have to repeat my comment about using a look-up.

2500 points will give you a 10000 point sine.

However, you have already been told that no matter how good the signal is, you will get better results by changing the approach: synthesise a genuine sine, filter it, and use a precision rectifier.
temtronic
Posted: Mon Jun 12, 2017 6:34 am

Mr T's last paragraph is the real answer...
...a simple lookup table, as it will be the fastest to implement, especially when only integers are used.
Any (every) time a PIC is asked to do floating point math or transcendentals (sin, cos, etc.) it takes a LOT of time to do the calculations, and the more precision you want, the longer it takes.

Jay
viki2000
Posted: Mon Jun 12, 2017 7:30 am

I will keep that in mind: a lookup table is the best implementation for speed and accuracy, and the most reliable, when the memory space allows it.
Nevertheless, because you made me curious, and for my own learning steps, I will try the CORDIC and polynomial approaches and compare them with _Q15sinPI from XC16 in terms of accuracy (against Excel's sin()) and speed.
Below is your polynomial approach calculated in Excel for 128 and for 32767 points.
https://goo.gl/3f2h9o
It gives 0.10902% error compared with Excel's sin(), versus 0.00759% for _Q15sinPI from XC16, so almost 15 times worse.
Now I will try a real implementation on the PIC.
Ttelmah
Posted: Mon Jun 12, 2017 7:57 am

You have to be aware, though, that simple 'error' figures do not always tell the whole story.
The key point about the polynomial synthesis is that it always gives smooth changes, which for an analog synthesis means the nature of the error is less likely to be objectionable...
RF_Developer
Posted: Mon Jun 12, 2017 8:36 am

Again, I am struggling to understand the motivation for this. Knowledge of computing history will probably shed some light on this topic, or vice versa, as this is a subject that has occupied many of the finest minds of applied computing, such as Knuth and others; much of the early application of computers was algorithmic in nature, and so much effort was applied to the efficient computation of transcendental functions (i.e. trig such as sine, cos, etc.).

Polynomial approximation based on Tchebychev polynomials has long (as in since the early 50s) been established as the most effective way of evaluating arbitrary sines and cosines. The maths of Tchebychev polynomials is well established, and in particular provides a way of evaluating errors and controlling them to give known error bounds over defined intervals.

Take a look at math.h. It is typical of most compilers' maths libraries, and is conveniently implemented in C. Sin() and cos() are generally the same internally, using the cos() function with an altered input, and this is indeed what is in math.h: sin() is simply coded as cos(x - PI_DIV_BY_TWO), so calling cos() is always faster than calling sin(). Then there are the cos() routines themselves. They all evaluate polynomials (and the coefficients will have come from Tchebychev), but the polynomials are different for each required precision: float32 is different from float48 (yes, there's such a thing in PCD) and float64. Why? Because the coefficients required to give the precision of each float type are different. There's no need to evaluate all the terms needed by float64 when working in float32. The polynomial is chosen to give the precision required by the float format.

Also, the polynomial is only required to evaluate cos over a limited range, normally one quadrant: as in math.h, 0 to PI/2 radians, or 0-90 degrees. So the input angle has to be normalised into that range. In math.h only six lines of each routine actually compute the cosine; the rest deal with sorting out the quadrant.
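Schematically, the reduction looks something like this (just the idea, with stand-in coefficients; not the actual math.h source):

Code:
#include <math.h>

// A short even polynomial for cos on 0..PI/2. The coefficients here are
// simply truncated Taylor terms, standing in for the tuned Tchebychev set.
float poly_cos(float x)
{
   float x2 = x * x;
   return 1 - x2/2 + x2*x2/24 - x2*x2*x2/720;
}

// Normalise the input into one quadrant, then call the short polynomial.
float my_cos(float x)
{
   int quad;
   x = fmod(fabs(x), 2 * PI);      // cos is even and periodic
   quad = (int)(x / (PI / 2));     // which quadrant, 0..3
   x -= quad * (PI / 2);           // remainder within the quadrant
   switch (quad)
   {
      case 0:  return  poly_cos(x);
      case 1:  return -poly_cos(PI / 2 - x);
      case 2:  return -poly_cos(x);
      default: return  poly_cos(PI / 2 - x);
   }
}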

You may recall that I wrote this was for evaluating arbitrary sines and cosines; that's to say each call is independent and each value is calculated from the ground up. This is fine for general use, but it is not great for sequential evaluation, which is what you need when creating a wave live or building a table. One method that I've used successfully for such sequential computation, where a result can be partially derived from the previous value(s), is finite differences. This is the way Babbage would have done it. The basic concept is this:

sin(n+1) = sin(n) * Delta_c + cos(n) * Delta_s
cos(n+1) = cos(n) * Delta_c - sin(n) * Delta_s

The next sine and cosine are calculated from the previous values iteratively. The deltas are the cosine and sine of the step angle, and so depend on the number of steps you want to use. You only need to evaluate a quadrant, and simply reflect and reuse it to get the other three quadrants. The limitation is that Delta_c is almost one and Delta_s is nearly zero, and they become much more so as the number of points increases, which results in precision problems. However, I have used this twice: once fairly recently for a test pattern, where high precision was not a requirement, and the first time in the mid-eighties, where I computed a sine/cosine table for 256 point FFTs in 16 bit integers on a TMS32010. It did the job well. Much later I did some analysis on the errors, which were surprisingly small and had no practical effect on the resulting FFTed data.
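In code the sequential loop is tiny. A sketch:

Code:
#include <math.h>

// Generate n_points sequential sin/cos values by rotation,
// with no sin()/cos() calls inside the loop.
void gen_wave(float *sin_out, float *cos_out, int n_points)
{
   float step = 2 * PI / n_points;     // step angle per sample
   float dc = cos(step), ds = sin(step);
   float s = 0.0, c = 1.0;             // sin(0), cos(0)
   float t;
   int n;

   for (n = 0; n < n_points; n++)
   {
      sin_out[n] = s;
      cos_out[n] = c;
      t = s * dc + c * ds;             // sin(n+1)
      c = c * dc - s * ds;             // cos(n+1)
      s = t;
   }
}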

I would not use this for evaluating 10000 point sine waves on the fly, but it is quite usable, and fast compared to a polynomial approach, for smaller numbers of points.

In any case, a table driven approach is pretty much always going to be faster than any evaluative method if ultimate speed is your prime requirement. If there's not enough memory, then simply use a bigger processor; it's that simple.

I have to say, though, that in general ultimate speed has never been my prime requirement. When I first got into computing, in the mid to late seventies, I did have something of an obsession with speed above all else; after all, wasn't that what computers were all about? Well, no, as it turns out. As I learned to program in high level languages, I realised that raw speed was rarely what I needed to worry about. Also, optimisation was rarely cost-effective in terms of speeding things up: buying faster hardware was generally a much better option, even then. Optimisation rarely gave more than a few percentage points of improvement in any case; adopting better algorithms tended to give much better returns. With PICs I have rarely, if ever, had to get firmware working faster. I have only ever used fast_io once, for an active attenuator, but even then, overlapping ADC conversions with setting the attenuator gave much greater improvement, to the point where fast_io didn't improve the raw speed; it just gave faster transition times, as the IO was not on one port (due to someone else's design I inherited, I hasten to add) and therefore had to be done in two stages.

In this particular case, it's worth going back to the basic lessons: most, if not all, compilers implement constant collapse, which replaces expressions using constants with an equivalent constant. It is certainly worthwhile re-arranging expressions to exploit this as much as possible, especially divisions. In much the same vein, it's often worthwhile replacing division by a constant with multiplication by the reciprocal, i.e. 1/constant. Look-up tables will always give much faster results, but at the price of using more memory. For PICs, tables in RAM are always faster than tables in other memory, but again, it comes at a price. Pre-calculating partial results that are used more than once often saves some time, but again uses more RAM and code space. Using pointers into RAM arrays is generally faster than array indexes, especially into multi-dimensional arrays, as array access often involves a multiplication (by the element size and by the size of higher dimensions). The price is that pointers are very easy to get wrong, causing hard-to-find errors. I've done all of these in my time, but rarely do any unless I really have to.
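A couple of trivial illustrations of those points (sketches, not from any particular library):

Code:
// Constant collapse: the bracketed constant folds at compile time,
// leaving one runtime multiply. Written as x * 2.0 / PI it may cost
// a multiply AND a divide, since the constants are not adjacent.
// The same shape covers the reciprocal trick: x * (1.0 / 3.0)
// instead of x / 3.0.
float scale(float x)
{
   return x * (2.0 / PI);
}

// Pointer walk instead of indexing: no index-to-address arithmetic
// on each pass through the loop.
int16 sum64(int16 *buf)
{
   int16 sum = 0;
   int8 i;
   for (i = 0; i < 64; i++)
      sum += *buf++;
   return sum;
}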

Knowing if and when to optimise, and what will give you the best result, is a matter of experience, and one size never fits all. It's also, unfortunately, hard to get that experience without a project with a definite requirement to target your effort.
viki2000
Posted: Mon Jun 12, 2017 12:38 pm

Thank you for your suggestions.
Quote:
“Again, I am struggling to understand the motivation for this.”

I already answered a few posts above, when Ttelmah asked:
Quote:
“I do find myself wondering what you are actually hoping to achieve?”

Regarding the math involved, I have read some of the proposed approximations; besides Tchebychev there are also Taylor, Maclaurin and others.
Looking online, within 15 minutes I found enough info about the subject, from which I quickly selected the following links:
http://www.coranac.com/2009/07/sines/
http://allenchou.net/2014/02/game-math-faster-sine-cosine-with-polynomial-curves/
http://www.nabla.hr/CL-DerivativeG3.htm#The%20approximation%20of%20the%20sine%20function%20by%20polynomial%20using%20Taylor%27s%20or%20Maclaurin%27s%20formula
http://www.math.smith.edu/~rhaas/m114-00/chp4taylor.pdf
https://namoseley.wordpress.com/2012/09/18/a-quick-and-dirty-sinx-approximation/
https://ccrma.stanford.edu/~dattorro/EffectDesignPart3.pdf
I decided to test some of them, compare, and see the results of a few approaches.
It would be interesting to know how they implemented/approximated the sine inside _Q15sinPI (libq.h, actually libq-elf.a), the fixed point math library for XC16 from Microchip, because the precision is good.
viki2000
Posted: Tue Jun 13, 2017 1:26 am

I have made some tests using the proposed polynomial approximation.
The PIC used is a PIC24HJ64GP202 with an 80MHz internal clock. The data is sent to the serial port and analyzed in Excel. I tried 128 points and 32767 points.
The result is here: https://goo.gl/3f2h9o
The conclusion so far:
- Taking 32767 as 100%, the max. deviation between the sine calculated with Excel's sin() function and the sine received at the serial port, calculated inside the PIC with the polynomial approach below, is 0.111%. Inside Excel, without any PIC involved, the max. deviation between Excel's sin() and the proposed polynomial approximation is 0.109%.
- The _Q15sinPI fixed point math function from XC16 is still the winner so far, with a max. deviation of 0.00759%, so better accuracy in approximating the float sine.
Later I will also measure the speed of execution and compare them.
Here is the working code that I used with the proposed polynomial approximation.

Code:
#include <24HJ64GP202.h>
#include <math.h>
#use delay(internal=80MHz)


#FUSES FRC_PLL
#FUSES NOWDT                    //No Watch Dog Timer
#FUSES NOWRTB                   //Boot block not write protected
#FUSES NOBSS                    //No boot segment
#FUSES NORBS                    //No Boot RAM defined
#FUSES NOWRTSS                  //Secure segment not write protected
#FUSES NOSSS                    //No secure segment
#FUSES NORSS                    //No secure segment RAM
#FUSES NOWRT                    //Program memory not write protected
#FUSES NOPROTECT                //Code not protected from reading
#FUSES IESO                     //Internal External Switch Over mode enabled
#FUSES NOOSCIO                  //OSC2 is clock output
#FUSES IOL1WAY                  //Allows only one reconfiguration of peripheral pins
#FUSES CKSFSM                   //Clock Switching is enabled, fail Safe clock monitor is enabled
#FUSES WINDIS                   //Watch Dog Timer in non-Window mode
#FUSES PUT128                   //Power On Reset Timer value 128ms
#FUSES NOALTI2C1                //I2C1 mapped to SDA1/SCL1 pins
#FUSES NOJTAG                   //JTAG disabled

#pin_select U1TX=PIN_B6
#pin_select U1RX=PIN_B7
#use rs232(UART1, BAUD=9600, ERRORS)

//A fast float approximation to sin over the range +/-PI
#define PI2 (PI*PI)
#define INV4  (4/PI)
#define INV4SQ  (-4/(PI2))
#define P  0.225

float fast_sin(float x)
{
    float y;
    y = (INV4 * x) + (INV4SQ * x * fabs(x));
    return P * (y * fabs(y) - y) + y;   
}

void main(){
  //Test the fast sin algorithm for angles from 0 to PI in steps of PI/128
  float an;
  int res;

  while(TRUE){
    for (an=0.0;an<(PI);an+=(PI/128)){
        res=32767*fast_sin(an);   // scale to signed 16-bit full range
       
        //Send to serial port
        putc(make8(res,1)); //MSB                 
        putc(make8(res,0)); //LSB
    }   
  }
}
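For the speed measurement I plan something like this inside main(), a sketch I have not tried yet (assuming Timer1 clocked from the instruction clock):

Code:
// Time one fast_sin() call in instruction cycles using Timer1.
unsigned int16 t0, t1;
float r;

setup_timer1(TMR_INTERNAL | TMR_DIV_BY_1);  // 1 tick = 1 instruction cycle

set_timer1(0);
t0 = get_timer1();
r = fast_sin(1.0);
t1 = get_timer1();
printf("%u ticks (includes the timer read overhead)\r\n", t1 - t0);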


I will now focus on debugging the CORDIC code above.