### RMS calculation, better math anyone?

Split from http://openenergymonitor.org/emon/node/2406/12384

John.b:

"Rather than using a zero cross detector and phase locking to improve accuracy, would it make sense to use a more robust method of integration, such as trapezoidal or simpsons?

Trapezoidal integration could be implemented without much processor overhead and would improve accuracy."

John, my maths on those definitions was a little rusty and I had to read up on Google about what you are talking about. Your comment is quite interesting.

My conclusion is that we already do some kind of trapezoidal calculation.
This is our actual formula for calculating Vrms and Irms of a perfect sinewave, which is more or less the industry standard:

$V_{rms}=\sqrt{\frac{1}{n}\sum_{k=0}^{n-1}v_k^2}$

However it expects a perfect sinewave, so your suggestion makes sense, but I really don't know a correct formula to solve it. The closest I can think of is trigonometric sin() function integration, and for that we would need to know the angle of the wave we are at.

It's a little complicated to do sin() calculations fast on our Arduino, so we would lose sample rate and in the end might lose precision because of this.

Anyway, if you can come up with a formula that is easy to implement, I may try it.

This site has many examples of what we are talking about: http://demonstrations.wolfram.com/IntegrationByRiemannSums/

### Re: RMS calculation, better math anyone?

This is our actual formula for calculating Vrms and Irms of a perfect sinewave.

Actually, I think that's the formula for calculating Vrms and Irms for any signal.

If your signal is a perfect sine wave (which it rarely is), then there are some very easy shortcuts you can take, such as Vpeak / sqrt(2).
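As a minimal sketch of those two ideas — the general formula (square, average, square-root) applied to samples, and the Vpeak / sqrt(2) shortcut for a pure sine — here is some ordinary Python (my own function name and sample values, not emonlib code):

```python
import math

def rms(samples):
    """General RMS: square root of the mean of the squared samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One cycle of a perfect 325 V-peak sine wave (~230 V mains), 100 samples.
peak = 325.0
n = 100
wave = [peak * math.sin(2 * math.pi * k / n) for k in range(n)]

print(rms(wave))             # general formula
print(peak / math.sqrt(2))   # the Vpeak / sqrt(2) shortcut agrees for a pure sine
```

For a distorted mains wave the shortcut no longer holds, but the general formula still does; that is why the library computes it sample by sample.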

### Re: RMS calculation, better math anyone?

The real limitation of the standard formula is not in the maths, but in the limited sample rate. The only method available to improve accuracy is to increase the sample rate. Anything else assumes we know what happens between samples, which we don't. In practice, we can be reasonably confident that there will be limited high frequency energy in the voltage wave; the voltage wave will probably, and the current wave certainly, be bandwidth-limited by the transducers, so we hope the errors will be insignificant in a practical case. In any case, the tariff meter is only required to operate correctly up to the 20th harmonic, so accuracy above that might even be a bad thing, as it will give results that could mislead some users.

### Re: RMS calculation, better math anyone?

First I should say I’m not an electrical or electronics engineer, but an engineer all the same and hopefully we share the same maths and it seems the same interests.

My comment was directed at the calculation for average real power which is the average of the products of instantaneous voltage and current i.e. the integral of instantaneous power over a period divided by that period.

So mathematically we write:

$P_{avg} = \frac{1}{T}\int_{0}^{T} v(t)\,i(t)\,dt$

This integral can be realised by simply summing the individual products of v and i, thus we get:

$P_{avg} \approx \frac{1}{n}\sum_{k=0}^{n-1} v_k\,i_k$

I believe this is the calculation that is commonplace here and is the normal discrete-time solution for this type of problem.  Mathematically this numerical integration is known as the ‘mid ordinate’ or ‘mid point’ rule.  The curve is simply being approximated by a series of rectangles.

It turns out that for a pure sinusoidal wave this method is exact, because errors in the fit of the rectangles above zero are cancelled out by equivalent -ve errors below zero.

We know we are not measuring a pure sinusoidal wave and it may not be symmetrical so what can be done to improve accuracy?

Increasing the samples is one answer with merit and should be the first option, but my suggestion was that the trapezoidal rule could also be used without much processor overhead and could improve the fit / accuracy of the integration.  So the trapezoidal rule would be:

$P_{avg} \approx \frac{1}{n}\bigg[\frac{v_0\,i_0 + v_n\,i_n}{2} + \sum_{k=1}^{n-1} v_k\,i_k\bigg]$

This could be variously re-arranged to speed up processing.  It would be interesting to quantify the difference that sample size and the alternative integration methods make to the accuracy.
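As a sketch of the two sums described above (my own function names, in ordinary Python rather than Arduino code): the mid-point sum weights every product equally, while the trapezoidal sum half-weights the two end samples and divides by the number of intervals.

```python
import math

def avg_power_midpoint(v, i):
    """Mid-point rule: plain average of the instantaneous power products."""
    p = [vk * ik for vk, ik in zip(v, i)]
    return sum(p) / len(p)

def avg_power_trapezoid(v, i):
    """Trapezoidal rule: end products carry half weight; divide by intervals."""
    p = [vk * ik for vk, ik in zip(v, i)]
    return ((p[0] + p[-1]) / 2 + sum(p[1:-1])) / (len(p) - 1)

# A unit sine wave, voltage and current in phase: true average power is 0.5.
n = 100
v = [math.sin(2 * math.pi * k / n) for k in range(n + 1)]  # both endpoints
print(avg_power_trapezoid(v, v))
print(avg_power_midpoint(v[:n], v[:n]))  # mid-point: one sample per interval
```

For a whole number of cycles of a pure sine both give the exact answer, which is the cancellation effect discussed above; the methods only diverge on distorted or truncated waves.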

The other point is, does phase locking make a difference?  Clearly if the samples do not cover complete cycles, then the result will not be average real power.  But I think we are really interested in energy, whether it's minimising the use of it or controlling where it goes; the meter reads energy.

So can we measure energy correctly without phase locking to complete cycles?  My answer is yes; energy used over any period of time, irrespective of whether it's over whole cycles or not, is simply the integral of instantaneous power over that period, as below:

$E = \int_{t_1}^{t_2} v(t)\,i(t)\,dt$

Obviously average real power could be calculated over a long period and it should be accurate enough without phase locking.

I can't seem to paste equations here so sorry that what you see above are links to images on the web.

Ref:

http://en.wikipedia.org/wiki/Rectangle_method

http://en.wikipedia.org/wiki/Trapezoidal_rule

### Re: RMS calculation, better math anyone?

"I can't seem to paste equations here"  They look OK to me.

The wave ought to be symmetrical because the permitted level of even harmonics is quite small. However, that will only be the case if the rules are obeyed. If they're not, a transformer will shift the wave so that the average is zero, and recovering the true zero reference to do the maths then becomes very hard.

"because errors in the fit of the rectangles above zero are cancelled out by equivalent – ve errors below zero."  I'm not sure that is absolutely correct, because most of the time V and I are more or less in phase and so have the same sign, so the product is positive for most of the time. Don't you mean "the errors in the rising part of the wave are cancelled by the equivalent opposite errors in the falling part of the wave..." - so it is symmetry in time as well as amplitude that's important. You'll see what I mean if you think about a sawtooth wave, or its close relative the triac-chopped current wave.

### Re: RMS calculation, better math anyone?

I wanted to show the trapezoidal equation for real power, but it’s just a link to a standard form.

Robert, you are right, I should have said "the errors in the rising part of the wave are cancelled by the equivalent opposite errors in the falling part of the wave...", but the point is that in practice this is not the case, and the method of integration affects the result, especially where the sample size is small.

The trapezoidal rule assumes straight lines between samples, in the absence of any other information that's a good assumption and is found to be a more robust method of numerical integration than the midpoint rule, which assumes steps.  Another way to look at it is that data points are paired and averaged therefore smoothing the curve.

Unless you’re integrating steps (I know there could be steps) I would be surprised if the trapezoidal rule didn't show an improvement in accuracy over the mid point rule for the same sample size.  In practice it could be insignificant, but it’s easy to do and in my view worth testing on real data at realistic sample rates.

Another point is that just increasing the sample rate could have exactly the detrimental effect you mention: by increasing the measurement frequency you are including higher harmonics.  The trapezoidal rule will help filter these out.  Clearly there is an optimum range of sample rates, but whatever that is for this application, I believe you gain an increase in accuracy by using the trapezoidal rule.

### Re: RMS calculation, better math anyone?

I have (somewhere!) a spreadsheet with a high accuracy high sample rate (viz. Soundcard 'scope - 16 bit word length and 882 samples per cycle) recording of a genuine voltage wave. I'll have a go at calculating the rms three ways ( 1-using every sample, 2-using the emonlib algorithm at the emonlib rate, and 3-using the trapezoidal algorithm at the same emonlib rate). That should give a guide to the relative magnitude of the errors under repeatable conditions.

Hmph. Here are the results using the voltage output of the Ideal voltage adapter. The units are arbitrary, it's the difference that is important.

The "true" rms calculated using 881 samples, zero crossing to zero crossing: 0.483145149205389
The "emonlib" rms calculated using exactly 55 samples
(i.e equivalent to phase locked), zero crossing to zero crossing:             0.48343462267183   = +0.060%
The "trapezoidal" rms calculated using exactly 56 samples
(i.e equivalent to phase locked), zero crossing to zero crossing:             0.483430494125258  = +0.059%

The trapezium is over 56 samples because it includes the first value of the next cycle. The result might be different for a phase-controlled current waveform.
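The same three-way comparison is easy to reproduce on synthetic data. A sketch (a made-up wave with a little third harmonic standing in for the soundcard recording; the rates roughly mimic the 880-sample reference and the 55/56-sample decimated cases above):

```python
import math

def rms_midpoint(samples):
    """Mid-point (emonlib-style) RMS over the samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_trapezoid(samples):
    """Trapezoidal RMS: half-weight the end samples, divide by intervals."""
    sq = [s * s for s in samples]
    area = (sq[0] + sq[-1]) / 2 + sum(sq[1:-1])
    return math.sqrt(area / (len(sq) - 1))

N = 880  # high-rate "reference" samples per cycle
wave = [math.sin(2 * math.pi * k / N) + 0.1 * math.sin(6 * math.pi * k / N)
        for k in range(N)]

true_rms = rms_midpoint(wave)               # treat the full-rate result as "true"
slow = wave[::16]                           # decimated to 55 samples per cycle
emon_rms = rms_midpoint(slow)
trap_rms = rms_trapezoid(slow + [wave[0]])  # 56 points: first value of next cycle

for r in (true_rms, emon_rms, trap_rms):
    print(r)
```

For this cleanly periodic, phase-locked case all three land very close together, mirroring the tiny differences in the measured figures above; real recordings add noise and drift on top.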

### Re: RMS calculation, better math anyone?

If anyone wants to experiment with real (V, I) pairs captured from some nasty switch mode power supplies, they're welcome to the data in this spreadsheet.  It's captured at 100 kHz, or 2000 samples per mains cycle.

### Re: RMS calculation, better math anyone?

I believe that chaveiro's main point is valid, in that improving the numerical integration technique will improve the accuracy of that step, although the difference may be small compared to other errors from sensor non-linearity, phase errors, etc. The basis of these approximations is that you are estimating the integral of an unknown curve by numerical integration on samples from that curve. The current implementation appears from this discussion to use the simplest, but largest error, mid-point rule: calculating the area of a rectangle at each data point, with a height of the data point and a width equal to the sampling time. (I say "appears" since I have not looked at the code yet, as I am still anxiously awaiting my EmonTX kit!) The trapezoid rule seeks to reduce that error by calculating the areas of trapezoids formed by linking pairs of sampled points. From my college days, the best method for numerical integration without going too crazy on computational overhead was Simpson's rule, which interpolates parabolas between groupings of three points to better match the curve. Although this sounds difficult, our good man Simpson, a mathematician from the 18th century, simplified it down to:

$\int_a^b f(x) \, dx\approx
\frac{h}{3}\bigg[f(x_0)+2\sum_{j=1}^{n/2-1}f(x_{2j})+
4\sum_{j=1}^{n/2}f(x_{2j-1})+f(x_n)
\bigg],$

Which for our sampled data can also be written as:

$\int_a^b f(x) \, dx\approx
\frac{h}{3}\bigg[f(x_0)+4f(x_1)+2f(x_2)+4f(x_3)+2f(x_4)+\cdots+4f(x_{n-1})+f(x_n)\bigg].$

So, you simply step through your sampled data, adding it together with the odd sample points multiplied by 4 and the even sample points multiplied by 2, add in your first and last data points, and multiply the total by one third of your sample interval. Fairly close computationally to the midpoint rule but with less error.
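That stepping scheme can be sketched in ordinary Python as follows (the function name is mine; note the total is multiplied by h/3, where h is the sample interval, and the rule needs an even number of intervals, i.e. an odd number of points):

```python
import math

def simpson(samples, h):
    """Composite Simpson's rule: samples are f(x_0)..f(x_n), n even, spacing h."""
    n = len(samples) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    total = samples[0] + samples[-1]
    total += 4 * sum(samples[1:-1:2])  # odd-indexed points, weight 4
    total += 2 * sum(samples[2:-1:2])  # interior even-indexed points, weight 2
    return total * h / 3

# Exact for polynomials up to cubics: the integral of x^2 on [0, 1] is 1/3.
xs = [(k / 10) ** 2 for k in range(11)]
print(simpson(xs, 0.1))

# The integral of sin(x) on [0, pi] is exactly 2; Simpson gets very close.
ys = [math.sin(math.pi * k / 100) for k in range(101)]
print(simpson(ys, math.pi / 100))
```

On an Arduino the 4x and 2x weights reduce to shifts and adds, so the extra cost over the mid-point sum is small.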

I would take dBC's data and chew through it, but you can't show what the error difference is without knowing the EXACT value from a pure mathematical (not numerical) integration. (Also, it only appears to be a picture linked; I could not download the data.) So, if you can simulate a noisy non-sinusoidal signal using a mathematical formula, calculate the exact integral mathematically, and then numerically simulate sampling it and applying these rules, you can determine the actual error, but only for that signal. That would give you some feel for the gains you could get by changing the algorithm, but I still think it would be less than the other errors introduced in the system.

Just my thoughts and I welcome further discussion.

### Re: RMS calculation, better math anyone?

I would take dBC's data and chew through it, but you can't show what the error difference is without knowing the EXACT value from a pure mathematical (not numerical) integration.

True, but the sample rate in that dataset is much higher than any of the emontx solutions being discussed here.  It's at 2000 samples/cycle vs about 100 best case under emontx.  So if you integrate my data with your favourite approximation algorithm, I think you'll be much closer to the true integral, as those little rectangular or trapezoidal strips will be much, much narrower (anywhere from about 20 to 60 of my strips fit into one of emontx's strips, depending on which code you use).

So I think that value is going to be as close to the true value as we can get.  Then you can sample my data at much lower rates (like emontx does) and use various approximations to see how much difference they make relative to the best (but not perfect) answer.

Will a gzip'd gnumeric file work for you?  Or would you prefer something more generic like CSV?  Probably best to just use columns A thru' D, anything to the right of them are just my last random experiment, and depending how I left it, may not be mathematically sound ;-).

### Re: RMS calculation, better math anyone?

P.S.  There appear to be at least two f(x)'s being discussed in this thread.  It started off discussing how you calculate RMS (which is most commonly used for measuring I or V in isolation) and then moved to discussing the more interesting instantaneous power (or average instantaneous power over an interval).  It's the latter that I've been using my data to experiment with.  The (I,V) pairs are time sync'd so you can just go straight ahead and multiply them together.

### Re: RMS calculation, better math anyone?

This question arose because of my comment on another thread here.  The main point is that the integration method being used thus far is a basic mid-point rule, i.e. the sum of equally spaced rectangles.  I originally proposed the trapezoidal and Simpson's rules as alternatives to improve the integration accuracy.  This was directed at the calculation for real power but could equally be applied to Vrms and Irms, as demonstrated by Robert Wall above.

I'm primarily interested in measuring energy use accurately, so my approach is to integrate the instantaneous power curve directly using the trapezoidal rule.  The area of the power curve is energy.  I'm not interested in average power, as this will fluctuate depending on whether it's over whole cycles or not, seemingly implying some inaccuracy.  So there is no need to divide by the number of samples.

It's true that Simpson's rules could be more accurate, but at the expense of more processing and a limitation on the number of samples.  Simpson's 1,4,1 or 1/3 rule stated by Dan above requires 3, 5, 7, 9 etc. samples.  There are a number of other methods, including Simpson's 1,3,3,1 and 5,8,-1 rules, but they all have limitations and assume a parabola.

My preference is to keep it simple and maximise the number of samples with even intervals and integrate with trapezoidal rule.

dBC could you post your data as a CSV file for me to use, thanks

Here you go.....

### Re: RMS calculation, better math anyone?

And this is what the V*I plot looks like for that data.

### Re: RMS calculation, better math anyone?

dBC, thanks for the data to play with. I agree with your statements regarding the high sample rate data being fairly close to exact compared to lower sample rates. With that data I ran the following test on the trapezoidal and Simpson's rule to tease out the accuracy differences. It is important to note that the differences are only valid for this waveform, and can change with different data sets, although we can infer some decisions based on these results.

The sample interval below is the number of samples of the raw data skipped, so as we increase the sample interval, we are looking at fewer data points. The original data was measured at 100 kHz, so the 50 sample interval would be an effective sampling rate of 2 kHz, and the 400 would be 250 Hz. The area was first calculated with all the data points using the basic mid-point rule for the % error numbers. The sample is 5 waves, so that would be 40 samples / wave at the 50 sample interval down to 5 samples per wave for the 400.

**Mid-point**

| Sample interval | 50 | 100 | 200 | 400 |
| --- | --- | --- | --- | --- |
| Area | 14343550 | 14312900 | 14213000 | 14656800 |
| % error | -2.29% | -2.50% | -3.18% | -0.15% |

**Trapezoidal rule**

| Sample interval | 50 | 100 | 200 | 400 |
| --- | --- | --- | --- | --- |
| Area | 14343450 | 14312700 | 14212600 | 14656000 |
| % error | -2.29% | -2.50% | -3.18% | -0.16% |

**Simpson's rule**

| Sample interval | 50 | 100 | 200 | 400 |
| --- | --- | --- | --- | --- |
| Area | 14353700 | 14346067 | 14064800 | 14381333 |
| % error | -2.22% | -2.27% | -4.19% | -2.03% |

So, there is not any major difference between the three rules in terms of error for this data, although my champion, Simpson's rule, was somewhat worse than the rest for this dataset. Maybe next time... The low error associated with the interval of 400 for the first two methods may be due more to filtering of high frequency noise than anything else. So, again, I think errors from other parts of the measurement are probably more of an issue than this, but I am not a signal analysis expert.

I can provide my LibreOffice Calc sheets for these calculations if anyone is interested.

Dan

### Re: RMS calculation, better math anyone?

You're very welcome, glad to see it being used.

The area was first calculated with all the data points using the basic mid-point rule

I wonder how much your choice of integration algorithm on the complete dataset impacts your end comparison.   If you integrate all the data (rather than the lower bandwidth subsets) with the 3 different algorithms, does the result vary much, or does each algorithm give pretty much the same "reference value"?

### Re: RMS calculation, better math anyone?

Dan you beat me to it.

To answer dBC's question, I can add that for the full 100 kHz sample, using the mid point rule as a reference the difference is small for the trapezoidal rule at -0.000354% and -0.001292% for Simpson's 1,3,3,1 rule.

Out of interest I did an FFT of the data.

### Re: RMS calculation, better math anyone?

Can you compare the worst error we get by having a random sampling start point (e.g. from first sample midpoint synced at zero cross to first sample mid point at most deviation before another previous sample hits zero cross), for each of the same rules as above and at sampling rates of about 32, 50 and 100 sample points per full wave cycle (the ones the Arduino can do)?

Thanks

### Re: RMS calculation, better math anyone?

(eg. from first sample midpoint synced at zero cross to first sample mid point at most deviation before another previous sample hits zero cross)

I really don't know what you are requesting here. So, instead I took the 5 waves of data and truncated it to 4 waves of data, starting at sample point 225 and running for 4 complete waves (at about 1/4 wave into it). I also couldn't do exactly 32 samples per wave, as the sampled data would not divide evenly into that value. Instead I did the following samples per wave: 100 (sample interval 20), 50 (40), 25 (80), and 20 (100). Here are the results:

**Mid-point**

| Sample interval | 20 | 40 | 80 | 100 |
| --- | --- | --- | --- | --- |
| Area | 11572700 | 11643160 | 11710160 | 11533400 |
| % error | -0.64% | -0.04% | 0.54% | -0.98% |

**Trapezoidal rule**

| Sample interval | 20 | 40 | 80 | 100 |
| --- | --- | --- | --- | --- |
| Area | 11573480 | 11644720 | 11713280 | 11537300 |
| % error | -0.64% | -0.03% | 0.56% | -0.95% |

**Simpson's rule**

| Sample interval | 20 | 40 | 80 | 100 |
| --- | --- | --- | --- | --- |
| Area | 11549733 | 11621867 | 11761227 | 11521733 |
| % error | -0.84% | -0.22% | 0.97% | -1.08% |

Again, I think it shows that the numerical integration is not a major source of error. Improving the sampling rate will help, but the algorithm at these levels appears to me to not be critical. If the CT has a 2 - 3% nonlinearity, that would seem to me to be the larger concern.

### Re: RMS calculation, better math anyone?

That is useful info.

But what I'm asking is different and a little difficult to explain. I made this picture. Your previous calculations use the red dots as mid-point samples. I show here samples 1 and 2 of a cycle.

Between red dots you skip the data.

What I ask is to measure the worst error for the same wave by measuring with samples skewed like the blue or yellow dots, instead of the red zero-cross-synced dots.

A skewed sample start time can fall randomly anywhere between 1 and 2.

Since you are skipping original samples to reduce the sampling rate, you have the true data already. So it's just a matter of doing the math with the sampled data shifted one step at a time and finding the worst error we could get by doing that, as that's just what the libs that are not zero cross synced do.

### Re: RMS calculation, better math anyone?

Chaveiro,

I'm not sure exactly what you're asking.

But, for the mid-point rule, which is what you are using, the sample data represents the heights of equally spaced rectangles.  If the curve were a sine wave then the mid-point rule would give an exact result.  This is explained above.

Since this is real data and is definitely not a sine wave, then the results will differ from the exact integral.  The accuracy depends on the integration method and number of samples.

However you will clearly get a different result if you use the data selectively and start at a different point; this is not an error, but a difference in the data being used, which will generate a different result.

I think to do what you ask with some meaning a curve would need to be generated that had an exact closed form integral, then data could be generated from that and selectively used.

I believe this would be a bit contrived, as the reality is that the error is to a large extent caused by noise in the data.

To be clear a different result using different data does not imply an error.

### Re: RMS calculation, better math anyone?

I think what Nuno wants to say is that he would like to see the same calculation as already done with the same sample intervals (20, 40, 80, 100), but for different starting points in the high resolution data. We do not know which starting point in the original data Dan took for the above calculations, presumably the first data point. This is just an arbitrary choice. You could start with sample 2,3,4 and so on and get completely different results (theoretically). It would just be nice to see how different 'starting points' will influence the calculation result.

BR, Jörg.

(Is this what you wanted to say, Nuno?)
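If that reading is right, the sweep could be sketched like this on synthetic data (my own stand-in waveform, not dBC's file; an aliased 50th harmonic is included deliberately so the decimated result really does depend on the sampling phase):

```python
import math

def midpoint_area(samples, h):
    """Mid-point rule: sum of equal-width rectangles of width h."""
    return h * sum(samples)

# High-rate "truth": one cycle of a power-like curve, 2000 samples.
# The 50th-harmonic term aliases to DC once we decimate to 50 samples/cycle.
N = 2000
p = [1 + math.cos(2 * math.pi * k / N) + 0.05 * math.cos(100 * math.pi * k / N)
     for k in range(N)]
true_area = midpoint_area(p, 1.0 / N)   # the mean of this curve is exactly 1

step = 40   # decimate to 50 samples per cycle
areas = [midpoint_area(p[offset::step], step / N) for offset in range(step)]
worst = max(abs(a - true_area) for a in areas)
print(worst)   # worst-case error over every possible start point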

### Re: RMS calculation, better math anyone?

In the first set of data, I used the full data set (0 - 9999). For the second numbers I posted I believe I followed chaveiro's request, even though I did not understand it. Instead of using all 10,000 data points I used 8000 data points, starting at data point 225 and running to data point 8225 (4 complete waves). I then calculated the integral of the data between 225 and 8225 using every single data point and the mid-point rule. This I called the 'real' answer in terms of calculating the % error. Then I redid the integration using the three rules, skipping a certain number of data points (the sample interval). The larger that number, the fewer samples per wave. As you can see, the error is not that high, even when only using 1% of the original data (25 samples per wave).

Starting at data point 225 is about 1/4 wave into the first wave. 250 would be exactly 1/4 wave, but I just scrolled down in my spreadsheet and ran the numbers. You could repeat this at many different points in the wave, but I think it would show the same thing: namely that for this waveform, the three methods do not have significant differences in error. Other waveforms might show that one method is better than another, but differences probably won't kick in until you are looking at fairly low samples per wave (15 or less).

Just my thoughts...

### Re: RMS calculation, better math anyone?

Just for the record, that data isn't perfectly aligned with a zero-crossing of V, although it's pretty close.    For the real enthusiasts there's more details on the source of that data here:  http://openenergymonitor.org/emon/node/2296, but in summary:

The trigger point at or around datapoint 5000 was a +ve zero-crossing on the AC supply to the scope.   The so-called V column in that data is actually the output of a CT wrapped around a heavy resistive load.  The first time that hit 0 after the trigger was at datapoint 5019, and it was probably more like about datapoint 5030 before it really hit zero.  I suspect there are all sorts of components to that lag, including the phase shift of the CT, and stuff going on inside the scope.

So at the trigger point (in the centre of the data) the data is running about 300 usecs early relative to a zero-crossing of V.  Jump back to the beginning of the data, 2.5 cycles earlier, and it looks to be running about 26 samples or 260 usecs early.  That difference is probably a reflection on how close my grid runs to 50 Hz.

I'm not suggesting any of this matters, but for those who really want to drill down to investigate the significance of sync'ing to zero-crossings, it's important to know the limitations of the data.

### Re: RMS calculation, better math anyone?

To satisfy myself and attempt to answer Chaveiro's question, I mathematically constructed an instantaneous power curve using five current loads with different phase angles.  The result is a simplified version of dBC's data, but I know the exact integral.

Using the mid-point rule for a fixed time period the numerical integration result is the same regardless of the starting point.  This is not a surprise as the curve is cyclic and anything missed from the beginning is simply added to the end.

Obviously for real data you can't rely on it being a consistent cyclic curve and a different start point represents the integral of a different part of the curve.

### Re: RMS calculation, better math anyone?

John.b, you did the test for all samples in the file at the max sample rate. Of course it is 'almost' the same value.
What I meant was to simulate the slow sample rate test (at Arduino equivalents) but with shifted start samples, using real data from the file.
As JBecker and I explained above.

The difference exists, I just want to know how much it is and what is the worst case expected with this particular file.
I also suspect the max error increases as the sampling rate slows.
My real tests during EmonLibPro development suggest all of this.

How much error we induce on the measured signal by random sampling I can't calculate.
And that's why I asked Dan Woodie, the math expert, to calculate it so we have a known exact figure.

### Re: RMS calculation, better math anyone?

Over to you Dan :-)

### Re: RMS calculation, better math anyone?

Using the mid-point rule for a fixed time period the numerical integration result is the same regardless of the starting point.  This is not a surprise as the curve is cyclic and anything missed from the beginning is simply added to the end.

Yes, but only if you 'use' every data sample in your calculation. If you decide to use only every second, third, ... 20th data point it could (and will in most real cases) make a difference where you start. You just use 'different' data for the calculation. And you are 'leaving out' data points for the calculation.

### Re: RMS calculation, better math anyone?

For the curve I constructed, and using an even interval, the result is still correct with fewer sample points and different start points.  This is because the mid-point rule is an exact integration for a sine wave.

For real data it's a different ball game, and as I said previously, if you start at a different point then you're integrating a different part of the curve, as the data doesn't repeat exactly.

### Re: RMS calculation, better math anyone?

For the curve I constructed, and using an even interval, the result is still correct with fewer sample points and different start points.  This is because the mid-point rule is an exact integration for a sine wave.

We are starting to split hairs here. What you say is correct for 'constructed' data with perfectly sinusoidal waves, infinite integration time, constant sampling time intervals, constant mains frequency, .....

But in reality in most cases we cannot control or presume any or all of these secondary conditions. And then it is simply ok to have a 'stable' algorithm which is as insensitive as possible to these secondary conditions. We already found some good rules:

- use as many samples as possible (but not more than necessary); 32 samples per full wave seems rather on the low side, ~100 seems sufficient, and more does not seem to improve the accuracy significantly.

- try to fit the sample interval to the period of the mains cycle so that the mains period is an exact multiple of the sample interval and/or

- average over as many  mains cycles as possible

- anything else...?

Can we already say that the conclusion of this thread is, that the integration method does not play a major role?

### Re: RMS calculation, better math anyone?

"Can we already say that the conclusion of this thread is, that the integration method does not play a major role?"

Actually, Jörg, in view of all the other inaccuracies, especially non-linearity in the transducers, and uncertainty in the calibration, I'd decided that a long time ago.

### Re: RMS calculation, better math anyone?

- anything else...?

You could probably add: sample continuously.  If you take breaks from sampling, you risk missing life's big events.

Ideally you'd have a dedicated A/D running continuously on each signal of interest; that's how the various energy chips I've played with work.   The more signals you try to monitor with a single A/D, the more you're not paying attention to each of them.

I may have this wrong, but I think someone on here a while back told me that one of the implementations cycles the A/D through 3 signals:  V, I, Vbattery.  If that is true, it seems an awful waste of a very precious resource.  Does something as slow moving as a DC battery voltage really need to be sampled that often?

### Re: RMS calculation, better math anyone?

The standard emonLib calcVI method - based on the requirement for a long battery life - samples battery voltage to calibrate the ADC against the internal bandgap reference. It then waits until the voltage is close to a zero crossing and takes voltage and current samples alternately at approximately 52 pairs per cycle (@ 50 Hz) for, in the example sketch, 20 cycles. If a second or third c.t. is in use, it repeats this another one or two times.

It then transmits the results and sleeps for, in the example sketch, 5 seconds.

Robin's energy router, MartinR's PLL monitor and their derivatives all, as far as I know, monitor continuously.

### Re: RMS calculation, better math anyone?

Clearly all these last posts are valid and there are much bigger inaccuracies in the overall monitoring process.

It started because I suggested trapezoidal rule rather than phase locking; that was the topic.  So that was the comparison I was making.

The point I also made was that the trapezoidal rule can be implemented without much processing, actually just a bit shift or two.  Does it make a difference? Is it more accurate? Dan made the points and on these results I agree; not much and it depends on the data.

I would not implement trapezoidal rule integration if it slowed down the sampling rate.

Increase sampling rate and maintaining a constant interval are beneficial.

Sampling voltage and current at the same time would be beneficial.

Does phase locking make a difference to the accuracy of the integration? No is my answer.

A continuous numerical integration of the instantaneous power curve over time is the energy; however if you are trying to calculate average power it will fluctuate if not over whole cycles; I think this is understood.

So to add to dBC's comment to sample continuously, I would say: integrate continuously and calculate accumulated energy directly rather than indirectly from average power.

### Re: RMS calculation, better math anyone?

You could probably add: sample continuously.  If you take breaks from sampling, you risk missing life's big events.

Yes, certainly! I am so used to it that I forgot about it :-)

PS: nice thought, though. So one of your 'big events in life' is using a flat iron or a cooking plate :-))

### Re: RMS calculation, better math anyone?

So to add to dBC's comment to sample continuously, I would say: integrate continuously and calculate accumulated energy directly, rather than indirectly from average power.

That's how the PV diversion systems on here already work. By continuously taking raw voltage and current samples, multiplying them and adding them to the "energy bucket" you are calculating energy by integrating continuously.
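That accumulation can be sketched in a couple of lines; the variable names are illustrative, not taken from Robin's or Martin's actual sketches:

```c
/* Continuous energy integration: every raw sample pair adds v*i
 * straight into an "energy bucket" kept in raw units, where one unit is
 * (one ADC voltage step) * (one ADC current step) * (one sample interval).
 * Illustrative sketch only, not actual Mk2 code. */
static long energyBucket = 0;

void on_sample(int rawV, int rawI) {
    energyBucket += (long)rawV * rawI;  /* instantaneous power x one interval */
}
```

Because every sample goes straight into the bucket, nothing is lost to intermediate averaging; calibration to real watt-hours can be applied once, at the output.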

### Re: RMS calculation, better math anyone?

Martin,

It's a small point, but I think the emonLibs calculate average power and then calculate energy from that. In other words you are unnecessarily dividing by the number of samples. In my experience division is slow and leads to rounding.

So Pavg = (1/n) * sum(samples)

then E = Pavg * time period

I suggest doing it this way instead: E = interval * sum(samples)

Like I say, a small point.
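The difference is easiest to see with integers, where the divide really does throw information away. A toy example (not any library's actual code):

```c
/* Two routes from power samples to energy, with integer arithmetic.
 * dt is the (constant) sample interval in arbitrary time units. */
long energy_direct(const long *p, int n, long dt) {
    long sum = 0;
    for (int k = 0; k < n; k++) sum += p[k];
    return sum * dt;              /* E = interval * sum(samples): no divide */
}

long energy_via_average(const long *p, int n, long dt) {
    long sum = 0;
    for (int k = 0; k < n; k++) sum += p[k];
    long pavg = sum / n;          /* integer divide: the remainder is lost */
    return pavg * (n * dt);       /* E = Pavg * time period */
}
```

With floating point the rounding is smaller but never zero, and the divide still costs cycles on an AVR with no hardware divider.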

### Re: RMS calculation, better math anyone?

I understand your point John, but I was referring to solar PV diversion code, like Robin's Mk2, mine and others. None of these use emonLib, since it isn't capable of continuous monitoring, and they do calculate energy directly, as you suggest.

### Re: RMS calculation, better math anyone?

Oh sorry Martin, I haven't read your code, but will now.

I've been developing my own sketch and have read emonLib, emonLibPro and AVR465, and they all take the average-power route.

### Re: RMS calculation, better math anyone?

In my experience division is slow and leads to rounding.

Yes, that is true. And I also like the idea of calculating energy directly rather than taking the detour via average power. On the other hand, we only have a 4-byte (long or float) maximum accumulator, so a division is needed somewhere. Hmmm...

### Re: RMS calculation, better math anyone?

Where do you see a problem Jörg?

I use unsigned long for my energy accumulators (import, export and generated) and they store Wh, so I can count about 4.3 gigawatt-hours before they overflow. More than enough, surely?

More of an issue is getting emoncms to cope with 32-bit values.

### Re: RMS calculation, better math anyone?

Martin,

I was just thinking about doing the maths without division, as John suggested. We have at most 10-bit values for current and voltage, so every single momentary power value is 20 bits worst case, leaving 12 bits of headroom. Summing ~80 samples per mains cycle could overflow the long after little more than 1 s (50 Hz * 80 samples = ~4000 samples/s). This is an absolute worst-case calculation, I know.

And then calibration has to be done, phase shifting, etc. There is no real way to avoid division (or shifting), I think.
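That worst-case headroom arithmetic can be spelled out mechanically; the constants below simply mirror the numbers above:

```c
/* Worst-case headroom for a 32-bit accumulator of raw power products:
 * 10-bit V * 10-bit I = 20-bit products, leaving 32 - 20 = 12 bits,
 * i.e. 2^12 = 4096 additions before a possible overflow.  At
 * 50 Hz * 80 samples = 4000 samples/s that is just over one second. */
enum { ADC_BITS = 10, ACCUM_BITS = 32 };

long max_sums_before_overflow(void) {
    int productBits = 2 * ADC_BITS;           /* 20-bit worst case product */
    return 1L << (ACCUM_BITS - productBits);  /* additions before overflow */
}
```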

### Re: RMS calculation, better math anyone?

Ah, sorry, I wasn't thinking about no division at all.

For phase shifting I think you are right, it's unavoidable, but for calibration/conversion to Wh it's possible to just use subtraction.

If you have an accumulator that can hold slightly more than 1Wh in raw units (or whatever value will fit into 32 bits), then every time you add to it you check whether the total is greater than the calibration value for raw units per Wh. If it is, you subtract this value and increment your Wh counter.

There is no remainder error, since the remainder stays in the accumulator, and the precision is very good because you are subtracting a large value for a single Wh.

### Re: RMS calculation, better math anyone?

Something like this ?

```c
static uint32_t Ws = 0uL;

Ws += (CalcVal.InvPowerTot + 1) / 2;   // 500 ms interval

if (Ws > _KWHhoundredth)               // 1/100 kWh
{
    Ws -= _KWHhoundredth;
    CalcVal.InvkWhTot++;
    CalcVal.InvkWhDay++;
}
```

this is code from my PV controller system :-)

### Re: RMS calculation, better math anyone?

If you have an accumulator that can hold slightly more than 1Wh in raw units . . .

I've not looked closely at the AVR code, but I'd infer that is how most (all?) commercial revenue-grade energy meters work.

### Re: RMS calculation, better math anyone?

Why the /2 Jörg?

Can't you just make _KWHhoundredth twice as big?

I do exactly the same thing but updating every 20ms

### Re: RMS calculation, better math anyone?

Ws shall be real Ws and my update interval is 500ms (CAN communication interval between all subsystems). But as Ws is never used again (it is local), you are perfectly right!

There is another problem with this code: if the increase in any 500 ms interval is more than _KWHhoundredth, the Ws counter could theoretically overflow (that can't happen in this case, as my PV power is lower).

(And then, as I am using a very fast processor with hardware division, the code is a bit 'overoptimized'. But I like to re-use code and it works ok here.)

### Re: RMS calculation, better math anyone?

I'm new to Arduinos, but back in the 70's I wrote assembler on an Acorn to do numerical integration for my Naval Architecture studies.

I used a number of fixed-point arithmetic techniques, most of which I've forgotten now. I wish I had the code, but recently I found the attached paper on the internet. My methods were self-invented, but were similar to those in the paper.

It's probably common knowledge, but it might be of interest: http://openenergymonitor.org/emon/sites/default/files/fixed point maths.pdf
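In the same spirit, here is a minimal fixed-point sketch; the Q16.16 format choice and helper names are mine, not taken from the linked paper:

```c
#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fractional bits.
 * Multiplication needs a 64-bit intermediate, then a shift back down. */
typedef int32_t q16_16;
#define Q_ONE (1L << 16)

static q16_16 q_from_double(double x) { return (q16_16)(x * Q_ONE); }
static double q_to_double(q16_16 x)   { return (double)x / Q_ONE; }

static q16_16 q_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> 16);  /* rescale the product */
}
```

Addition and subtraction work unchanged on the raw integers; only multiply and divide need the rescaling step, which is why fixed point suits accumulate-heavy metering loops so well.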

### Re: RMS calculation, better math anyone?

I'm still interested in knowing real error data for the random-start sampling technique.

Another way to improve resolution is to over-sample. I will do some tests on this later.
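The usual ADC oversampling trick: sum 4^n readings and shift right by n to gain n bits of resolution. This assumes there is at least about one LSB of noise to dither the input; the sketch below is mine, not EmonLibPro code:

```c
#include <stdint.h>

/* Oversample-and-decimate: sum 4^n raw ADC readings and shift right
 * by n, turning e.g. a 10-bit ADC result into a 10+n bit one
 * (provided the input carries enough noise to act as dither). */
uint16_t oversample_decimate(const uint16_t *raw, int extraBits) {
    int count = 1 << (2 * extraBits);   /* 4^n readings needed */
    uint32_t sum = 0;
    for (int k = 0; k < count; k++)
        sum += raw[k];
    return (uint16_t)(sum >> extraBits);
}
```

The cost is sample rate: two extra bits takes 16 readings per output value, so it trades exactly the bandwidth that the earlier posts were trying to preserve.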

Regarding EmonLibPro, it takes an "accumulate everything until the data is needed" design.

Only when you want a value in the Result structure and call calculateResult() do you get the average value since the last call.

Internally it does not skip a single sample, as this abstract program flow demonstrates.
