Optimizing pow() function

James Kuyper

On 22/04/2013 20:49, James Kuyper wrote:

exp can be in the range 1.00 to 3.00 (it derives from an integer value
in the range 100-300 divided by 100).

OK, then you can't use that speed-up. Your options for speeding up the
calculation inside func() are limited to pretty much the advice you've
already been given:

1. You need to handle the case of pt==0 separately.

2. Cut the time nearly in half by using pwr*pow(pd/(double)adc, exp).

3. Produce a small and unreliable speed-up by changing to use exclusively float rather than double. That means that 'exp' should be float, too. For best results, make sure that the calculation of 'exp' outside the function is done in single precision: (float)integer/100.0F (or equivalently, and faster on some systems: (float)integer*0.01F).

4. Pre-compute logf(adc/(float)pt) for all non-zero values of adc and pt, and calculate pwr*expf(exp*precomputed_table[adc][pt]) (see the sketch after this list). This takes up a fair amount of memory; it's only worthwhile if the function is going to be evaluated frequently. Note that you must handle values of zero for adc and pt separately.

5. More extreme: pre-compute pow(adc/pt, exp) for all values of adc, pt, and the integer you divide by 100 to calculate exp. This uses a lot more memory, but can save a lot of time if func() is called sufficiently frequently.
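A minimal sketch of option 4 (func()'s real signature never appears in the thread, so the one below is a guess; 'exp' is renamed 'e' to avoid shadowing the library function). Instead of the full two-dimensional [adc][pt] table, this variant stores a single 1-D table of logarithms and takes the difference, which costs one extra subtraction but far less memory (about 4 KB of float here):

#include <math.h>

#define ADC_MAX 1024            /* 10-bit ADC: values 0..1023 */

static float log_tab[ADC_MAX];  /* log_tab[i] = logf(i) for i > 0 */

void init_log_tab(void)
{
    for (int i = 1; i < ADC_MAX; i++)
        log_tab[i] = logf((float)i);
}

/* pow(adc/pt, e) == expf(e * (logf(adc) - logf(pt))) */
float func(float pwr, int adc, int pt, float e)
{
    if (adc == 0 || pt == 0)    /* zeros handled separately (option 1) */
        return 0.0f;            /* or whatever the application requires */
    return pwr * expf(e * (log_tab[adc] - log_tab[pt]));
}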

If you need more of a speed-up than that, you'll have to consider
reorganizing the code that calls func(), and not just func() itself.
It's got four arguments: is it frequently called for several different
values of some of the arguments, while the remaining arguments are
constant for many different consecutive calls to func()? Even if that's
not true of the way func() is called now, could you reorder the
calculations so that it becomes true?
If so, you can achieve substantial speed-ups by breaking the evaluation
of the expression into two parts, one part that depends only upon the
values that are frequently the same, and the other part that depends
upon the values that vary frequently. Whether or not this would help
significantly depends upon which arguments vary infrequently, and which
ones vary frequently. If 'exp' is one of the ones that varies
infrequently, it won't help you much at all. If the combination of adc
and pt varies infrequently, it could help a great deal.
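A minimal sketch of that split for the favourable case just described (hypothetical signature; adc and pt repeat across consecutive calls while exp and pwr vary):

#include <math.h>

float func(float pwr, int adc, int pt, float e)
{
    static int   last_adc = -1, last_pt = -1;
    static float log_ratio;               /* cached logf(adc / (float)pt) */

    if (adc == 0 || pt == 0)
        return 0.0f;                      /* zeros handled separately */
    if (adc != last_adc || pt != last_pt) {
        log_ratio = logf((float)adc / (float)pt);
        last_adc = adc;
        last_pt  = pt;
    }
    /* pow(adc/pt, e) == expf(e * log(adc/pt)); expf is cheaper than powf */
    return pwr * expf(e * log_ratio);
}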
 
James Kuyper

On 04/23/2013 01:35 AM, pozz wrote:
....
The exp parameter is a calibration value that is defined during the first
stages. After that, it always remains the same value.

That's unfortunate, because exp is the argument whose constancy offers
the fewest opportunities for speeding things up.
I can pre-calculate the look-up table for that particular value. I would
need a table covering all adc values (1023 integers, considering a fixed
pt value), so 2046 bytes... I don't have this space in RAM :-(

If pt can be considered fixed, and exp can be considered fixed, you
could pre-compute denominator = pow(pt,exp), and then calculate
pwr*pow(adc,exp)/denominator, but that's still a multiplication, a
pow(), and a division, exactly the same operation count as
pwr*pow(adc/pt, exp), so it doesn't give you any advantage.
 
Eric Sosman

On 22/04/2013 17:42, Eric Sosman wrote:

Sure, I have halved the time in this way, but I'd like to reduce the
processing time further, if possible.

Unless you can provide more information (like, "exp is
always an integer or half-integer" or "I only need six bits'
accuracy in the result"), I see no approach that's likely to
make a further dramatic difference. (Giant precomputed tables
seem inappropriate for "a simple embedded platform.")

Sometimes it's more fruitful to take a step back and look
at the overall problem: Not "How can I do *this* faster," but
"If I did things differently, could I replace *this* with
something else?" What problem are you trying to solve?
 
James Kuyper

On 04/23/2013 04:27 AM, (e-mail address removed) wrote:
....
Ok, it's better to explain what I'm trying to do.

I have a 10-bit ADC that acquires a voltage signal and converts it to a number 0..1023.
This signal measures a power (Watt). The relation between voltage and power is theoretically quadratic:

P = k * x ^ 2

where k is a constant that depends on the system and x is the ADC result.

In order to calibrate k, I measure with an instrument the power at a nominal/reference value; for example, I measure power Pr at voltage Vr, which corresponds to xr ADC points. So I can write:

P = Pr * (x / xr) ^ 2

Usually I make the reference measurement at xr=800 points.

The quadratic law above is simple to implement, but in practice I have to fine-tune this law, because at low power levels the voltage/power relation seems to deviate from the quadratic law (I don't know exactly why; maybe some non-linear behaviour of some components). So I changed the formula to:

P = Pr * (x / xr) ^ a

Without seeing your data, I can't be sure, but I think that it's
unlikely that the correct formula is of that form. If you don't have a
theory that tells you what the shape of a curve should be, and you're
trying to fit it empirically, choose the form of the curve to make your
job easier. A power law with a non-integer power makes things very
difficult.

....
The best would be to have a simple formula (a polynomial function?) that approximates the above equation well enough at run-time, without overly large tables.

I agree.
 
88888 Dihedral

glen herrmannsfeldt wrote on Tuesday, 23 April 2013 at 05:31:24 UTC+8:
If only a small number of different exp values occur, a look-up table would be faster than pow. If I want an int result, I try to do the whole calculation in integer arithmetic, even if it needs some shifts to get the binary point in the right place.

Routines for fixed point scaled exp, log, and I believe pow are in Knuth's "Metafont: The Program."

With only 10 bit input, I presume you don't need 53 bits of double. With a little work doing scaling, it can all be done in fixed point arithmetic. (Except that pow comes in as double, but either generate the appropriate scaled fixed point value or send it in that way.)
Well, the benefits of integer additions and multiplications for very long
integers in distributed systems, via number theoretic transforms and
residue number systems, were established a long time ago.

But these were applied only in high-priced military custom-built
systems.

There are no cheap, ready-to-use MCUs with more than 16 cores and a
complete tool-chain available now.
 
pozz

On 23/04/2013 14:35, Richard Damon wrote:
Hearing your problem, my guess would be that the issue isn't that the
exponent is a little bit off, but that there is an offset to the voltage
measured. Thus rather than changing to

P = Pr * (x / xr ) ^ a

you might want to look at

P = k * (x + x0) ^ 2

It is nonsense when x=0, because P must be 0 in that case.
 
pozz

On 22/04/2013 22:09, Malcolm McLean wrote:
pow(x, y) = exp(y * log(x)) for positive values.

The expensive part is working out log x. But if you can look it up, you need
far fewer than 200k entries.

I'm trying some series that approximate log() and exp(), and it seems
simpler to approximate log() than exp().

Any suggestions?
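One standard approach is argument reduction, so each series only has to cover a small interval. A sketch (the reductions are textbook; the term counts are a rough fit for the ~0.5% accuracy mentioned elsewhere in the thread, so verify against real data):

#include <math.h>

#define LN2F 0.69314718f

/* log(x) = log(m) + e*ln2, with x = m * 2^e and m in [0.5, 1).
 * log(m) via the fast-converging atanh series: log(m) = 2*atanh(s),
 * s = (m-1)/(m+1), |s| <= 1/3, so three terms give roughly 0.02%. */
static float my_logf(float x)            /* assumes x > 0 */
{
    int e;
    float m  = frexpf(x, &e);
    float s  = (m - 1.0f) / (m + 1.0f);
    float s2 = s * s;
    return 2.0f * s * (1.0f + s2 / 3.0f + s2 * s2 / 5.0f) + (float)e * LN2F;
}

/* exp(y) = 2^n * exp(r), with n = round(y/ln2) and |r| <= ln2/2,
 * so a 4th-order Taylor polynomial for exp(r) is within ~0.005%. */
static float my_expf(float y)
{
    int   n = (int)lrintf(y / LN2F);
    float r = y - (float)n * LN2F;
    float p = 1.0f + r * (1.0f + r * (0.5f + r * (1.0f / 6.0f + r / 24.0f)));
    return ldexpf(p, n);                 /* scale by 2^n */
}

/* then pow(x, a) == my_expf(a * my_logf(x)) for x > 0 */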
 
pozz

On 23/04/2013 15:19, James Kuyper wrote:
On 04/23/2013 04:27 AM, (e-mail address removed) wrote:
...

Without seeing your data, I can't be sure, but I think that it's
unlikely that the correct formula is of that form. If you don't have a
theory that tells you what the shape of a curve should be, and you're
trying to fit it empirically, choose the form of the curve to make your
job easier. A power law with a non-integer power makes things very
difficult.

Ok, I could try adding a linear term to the original equation:

P = k * x ^ 2 + h * x

But now it's very difficult to calibrate, i.e. to find the constants k
and h such that the curve passes through the reference point (xr, Pr)
and another low-power reference point (xs, Ps).

With my exponential equation, the calibration is very simple. I move to
the reference point (xr=800) and measure Pr. The curve always passes
through the reference point (xr, Pr) even when changing the second
constant a (the exponent).
So I can then move to a second reference point corresponding to a low
power level and calibrate the constant a, without worrying about the
first reference point.
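For what it's worth, that second step has a closed form (names hypothetical): since P = Pr * (x/xr)^a passes through (xr, Pr) for every a, the second measurement (xs, Ps) determines a directly:

#include <math.h>

/* From Ps = Pr * (xs/xr)^a:  a = log(Ps/Pr) / log(xs/xr) */
double calibrate_exponent(double Pr, int xr, double Ps, int xs)
{
    return log(Ps / Pr) / log((double)xs / (double)xr);
}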
...

I agree.

I'll try to find two good series to calculate exp() and log().
 
pozz

On 23/04/2013 15:08, Eric Sosman wrote:
Unless you can provide more information (like, "exp is
always an integer or half-integer" or "I only need six bits'
accuracy in the result"), I see no approach that's likely to
make a further dramatic difference. (Giant precomputed tables
seem inappropriate for "a simple embedded platform.")

Please take a look at another reply of mine for a better explanation
of what I'm trying to do.
 
James Kuyper

On 23/04/2013 14:35, Richard Damon wrote:

It is nonsense when x=0, because P must be 0 in that case.

You're essentially claiming certainty that there is no voltage offset,
that x0 must be exactly 0. You've already indicated that the data
doesn't exactly match theory. How certain are you (and on what basis?),
that the failure of reality to match theory doesn't involve a voltage
offset?
 
glen herrmannsfeldt

pozz said:
On 23/04/2013 15:08, Eric Sosman wrote:
(snip)

Please take a look at another reply of mine for a better
explanation of what I'm trying to do.

Well, what we still don't know is exactly, or even approximately,
how accurate the results need to be.

For a reasonable range of values, log and exp can be done in fixed
point, and likely good enough for you, but we don't know that.

The algorithms in "Metafont: The Program" do computation on fixed
point values with 15 bits before and 16 bits after the binary point.
They should run much faster than software emulation of floating
point, and maybe fast enough for you.

(Metafont was originally in Pascal. It is more often now run as
a machine translation to C of the Pascal source.)

-- glen
 
James Kuyper

On 23/04/2013 15:19, James Kuyper wrote:

Ok, I could try adding a linear term to the original equation:

P = k * x ^ 2 + h * x

Depending upon what your data looks like, I'd recommend a cubic term,
and possibly a constant one as well. If you use a package to fit your
data to a polynomial, and one of the terms has a coefficient that is not
statistically significant, you can always drop it at that point; you
don't have to make the decision before determining the parameters of the
fitted model.
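Whatever degree the fit ends up with, the run-time evaluation stays cheap; a sketch with placeholder coefficients, using Horner's rule (one multiply and one add per coefficient, no pow() at all):

/* P(x) = c3*x^3 + c2*x^2 + c1*x + c0, coefficients fitted offline.
 * The values below are placeholders (a pure quadratic). */
float poly_power(float x)
{
    const float c3 = 0.0f, c2 = 1.0f, c1 = 0.0f, c0 = 0.0f;
    return ((c3 * x + c2) * x + c1) * x + c0;
}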
But now it's very difficult to calibrate, i.e. to find the constants k
and h such that the curve passes through the reference point (xr, Pr)
and another low-power reference point (xs, Ps).

With my exponential equation, the calibration is very simple. I move to
the reference point (xr=800) and measure Pr. The curve always passes
through the reference point (xr, Pr) even when changing the second
constant a (the exponent).
So I can then move to a second reference point corresponding to a low
power level and calibrate the constant a, without worrying about the
first reference point.

Packages that allow you to fit a data set to a polynomial are widely
available, and you should use a lot more than just two points to
calibrate the parameters of your fitted model.
I'll try to find two good series to calculate exp() and log().

Taking that approach builds into your model the assumption that you've
got a power law: In the absence of a good reason to believe that the
real curve is a power law (and you haven't given any), it's simpler just
to fit your actual data to a polynomial. You've got tight constraints on
the execution time and the available space for evaluating your model;
you don't want to make inefficient use of your limited time and memory
by biasing your fit in favor of a power law for which you can give no
theoretical justification.
 
pozz

On 23/04/2013 23:24, James Kuyper wrote:
You're essentially claiming certainty that there is no voltage offset,
that x0 must be exactly 0. You've already indicated that the data
doesn't exactly match theory. How certain are you (and on what basis?),
that the failure of reality to match theory doesn't involve a voltage
offset?

It could be, but not at very low voltages. When I have zero power, I
have zero voltage (0 ADC points)... that's for sure.

Maybe the offset starts increasing with the power level or voltage, as
in the following equation:

P = k * (x + x0(x)) ^ 2

but it's difficult to understand what x0(x) is. The only thing I can
say is that x0(0)=0.
 
Eric Sosman

Ok, it's better to explain what I'm trying to do.

I have a 10-bit ADC that acquires a voltage signal and converts it to a number 0..1023.

This signal measures a power (Watt). The relation between voltage and power is theoretically quadratic:

P = k * x ^ 2

where k is a constant that depends on the system and x is the ADC result.

In order to calibrate k, I measure with an instrument the power at a nominal/reference value; for example, I measure power Pr at voltage Vr, which corresponds to xr ADC points. So I can write:

P = Pr * (x / xr) ^ 2

Usually I make the reference measurement at xr=800 points.

The quadratic law above is simple to implement, but in practice I have to fine-tune this law, because at low power levels the voltage/power relation seems to deviate from the quadratic law (I don't know exactly why; maybe some non-linear behaviour of some components). So I changed the formula to:

P = Pr * (x / xr) ^ a

where a is in the range 1-3.

The first thing that occurs to me is to use a simpler
parameterization, something along the lines of

P = k(x) * x^2

... where k(x) is (most likely) an interpolation in a smallish
table. The variability of k seems a better match for your
suspicion "maybe some non-linear behavior" than fiddling around
with the exponent.
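A minimal sketch of that parameterization, with a hypothetical table size and placeholder contents: k sampled at a few evenly spaced ADC values and linearly interpolated between them:

#define K_ENTRIES 5
#define K_STEP    256                  /* 1024 / (K_ENTRIES - 1) */

/* correction factors at x = 0, 256, 512, 768, 1024 (placeholders;
 * 1.0f everywhere would reproduce the pure quadratic law) */
static const float k_tab[K_ENTRIES] = { 1.0f, 1.05f, 1.10f, 1.05f, 1.0f };

static float k_of_x(int x)             /* assumes 0 <= x <= 1023 */
{
    int   i = x / K_STEP;
    float f = (float)(x - i * K_STEP) / (float)K_STEP;
    return k_tab[i] + f * (k_tab[i + 1] - k_tab[i]);
}

float power(float k, int x)            /* P = k * k(x) * x^2 */
{
    return k * k_of_x(x) * (float)x * (float)x;
}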
 
James Kuyper

On 23/04/2013 23:24, James Kuyper wrote:

It could be, but not at very low voltages. When I have zero power, I
have zero voltage (0 ADC points)... that's for sure.

That would make sense only if your method for measuring the voltage was
perfectly accurate, which is simply impossible. Counting of discrete
entities (such as people or houses or cars) can be done, in some
circumstances, with perfect accuracy, but any measurement that doesn't
boil down to counting discrete entities always has an error. Whenever
someone tells you that the error in their measurements is zero, it just
means that they don't properly understand how their measurements were
performed.

Any finite error in the calibration of the instrument used to measure
the voltage will cause x0 to have a non-zero value.
Maybe the offset starts increasing with the power level or voltage, as
in the following equation:

P = k * (x + x0(x)) ^ 2

but it's difficult to understand what x0(x) is. The only thing I can
say is that x0(0)=0.

That's the wrong way to think about it. If you're justified in your
certainty that P(0) = 0 (which still seems unlikely to me), then I agree
that x0(0) must also be 0. It then follows, if P(x) is sufficiently
smooth, that x0(x) will have a power series expansion with a constant
term that is zero. (x + x0(x))^2 will therefore also have a power series
expansion, also with a constant term of 0. Therefore, you're better off
just directly fitting P(x) to a power series, and if you insist, you can
tell the fitting program to force the constant term to be 0. However,
don't be surprised if letting the constant term be non-zero improves the fit.
 
pozz

Well, what we still don't know is exactly, or even approximately,
how accurate the results need to be.

The result doesn't need to be very accurate. Consider that the result is
always an integer, so I lose many significant digits.

For example, if Pr=1000W I'd like to have an integer power even with
5W steps.
 
pozz

That would make sense only if your method for measuring the voltage was
perfectly accurate, which is simply impossible. Counting of discrete
entities (such as people or houses or cars) can be done, in some
circumstances, with perfect accuracy, but any measurement that doesn't
boil down to counting discrete entities always has an error. Whenever
someone tells you that the error in their measurements is zero, it just
means that they don't properly understand how their measurements were
performed.

Any finite error in the calibration of the instrument used to measure
the voltage will cause x0 to have a non-zero value.




That's the wrong way to think about it. If you're justified in your
certainty that P(0) = 0 (which still seems unlikely to me), then I agree
that x0(0) must also be 0. It then follows, if P(x) is sufficiently
smooth, that x0(x) will have a power series expansion with a constant
term that is zero. (x + x0(x))^2 will therefore also have a power series
expansion, also with a constant term of 0. Therefore, you're better off
just directly fitting P(x) to a power series, and if you insist, you can
tell the fitting program to force the constant term to be 0. However,
don't be surprised if letting the constant term be non-zero improves the fit.

Yes, I could use a high-degree polynomial to best fit the curve I
measure, but it would be very difficult to calibrate in the field,
machine by machine. I'd like a good-fit curve with a few parameters
that can be calibrated in a simple way.

The quadratic law (a second-degree polynomial) comes from the theory.
My idea in using the exponential function is to best-fit the real curve
starting from the theory, changing the exponent a little, without
adding too many parameters that are difficult to calibrate.
 
pozz

Depending upon what your data looks like, I'd recommend a cubic term,
and possibly a constant one as well. If you use a package to fit your
data to a polynomial, and one of the terms has a coefficient that is not
statistically significant, you can always drop it at that point; you
don't have to make the decision before determining the parameters of the
fitted model.



Packages that allow you to fit a data set to a polynomial are widely
available, and you should use a lot more than just two points to
calibrate the parameters of your fitted model.

I can't use powerful fitting software in the field, machine by machine.

Taking that approach builds into your model the assumption that you've
got a power law: In the absence of a good reason to believe that the
real curve is a power law (and you haven't given any), it's simpler just
to fit your actual data to a polynomial. You've got tight constraints on
the execution time and the available space for evaluating your model;
you don't want to make inefficient use of your limited time and memory
by biasing your fit in favor of a power law for which you can give no
theoretical justification.

The theory says that the exponent should be 2 (P=V^2/R). In practice
I have a deviation from the theory, and I don't know why (I'm not a
hardware expert).

So I need to adjust a law that depends on a small number of parameters
(in my case, Pr and the exponent) in the field, machine by machine,
with a couple of measurements.
 
pozz

     The first thing that occurs to me is to use a simpler
parameterization, something along the lines of

        P = k(x) * x^2

... where k(x) is (most likely) an interpolation in a smallish
table.  The variability of k seems a better match for your
suspicion "maybe some non-linear behavior" than fiddling around
with the exponent.

How can I calibrate this "smallish table"? I would need to make many
measurements for each machine in the field.

Let me write your function as

P = Pr * k(x) * x^2

where x is the ADC value normalized to 800 (the reference value).

It would be nice if k(x) were a function that equals 1 at x=0 and x=1,
changing smoothly in between with the maximum at the center (x=0.5).
This maximum correction at the center (about +/-30%) would be the
second parameter, calibrated in a simple way with a second measurement,
without changing the calibration of Pr (made during the first
measurement).
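A sketch of that one-parameter correction (hypothetical names): a parabola in the normalized coordinate, exactly 1 at x=0 and x=1 so the Pr calibration is preserved, peaking at 1+c in the middle:

/* P = Pr * k(xn) * xn^2, with xn = x/xr and k(xn) = 1 + 4*c*xn*(1 - xn).
 * k is 1 at xn = 0 and xn = 1, and 1 + c at xn = 0.5; c (roughly
 * +/-0.3 per the text above) is the one extra constant to calibrate. */
float corrected_power(float Pr, int x, int xr, float c)
{
    float xn = (float)x / (float)xr;
    float k  = 1.0f + 4.0f * c * xn * (1.0f - xn);
    return Pr * k * xn * xn;
}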
 
BartC

I have a 10-bit ADC that acquires a voltage signal and converts it to a
number 0..1023.
The quadratic law above is simple to implement, but in practice I have to
fine-tune this law, because at low power levels the voltage/power relation
seems to deviate from the quadratic law (I don't know exactly why, maybe
some non-linear behaviour of some components).

Low input impedance of the ADC? It would be interesting to find out why
that's happening, although fixing it in hardware would involve extra costs,
and it sounds like many of these units already exist!

However if I had to solve that in software, on a processor where 2KB of
memory is a big deal, then I would stay well away from double-precision
floating point arithmetic, especially involving powers, logarithms and
anti-logarithms. You've also said elsewhere that you only need accuracy of
0.5% (5W in 1000W), so integer/fixed-point methods could be viable.
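To make the fixed-point route concrete, here is a sketch of the classic shift-and-square base-2 logarithm in Q16.16 format (a well-known algorithm, not from the thread). With a matching fixed-point exp2, pow(x, a) becomes exp2(a * log2(x)) with no floating point at all:

#include <stdint.h>

/* Q16.16 log2: input x in Q16.16 with x >= 1.0 (for an ADC reading,
 * pass (uint32_t)adc << 16). Integer part by normalizing into [1,2);
 * each squaring of the mantissa then yields one fraction bit. */
static uint32_t log2_q16(uint32_t x)
{
    uint32_t r = 0;

    while (x >= (2u << 16)) {          /* integer part of log2 */
        x >>= 1;
        r += 1u << 16;
    }
    for (int bit = 15; bit >= 0; bit--) {
        x = (uint32_t)(((uint64_t)x * x) >> 16);   /* square, Q16.16 */
        if (x >= (2u << 16)) {         /* crossed 2.0: fraction bit set */
            x >>= 1;
            r |= 1u << bit;
        }
    }
    return r;
}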
 
