Value of "e" in the C log() function


JosephKK

JosephKK wrote:
...

Yes, like most mathematicians, scientists, and engineers, we think it's
special. There's a very good reason why it's one of the few "special
functions" that are actually present in the C standard library.


I'm sure that I can, but I sense a trap; I suspect that you're looking
for something different from what I would naively think you're asking
for. Therefore, could you show me the kind of answer you're looking
for, by giving me an example of what you think the correct answer would
look like to a similar but simpler question: "What is the value of
ln(4), and why is that the correct answer?".

Actually, I am more interested in the reasoning about why the value
produced by your method is correct. But this is far enough from C already.
 

Dik T. Winter

>
> Perhaps I am being dim, but why does that immediately imply it?

The definition of analytic is that it has a derivative over its domain.
So if the requirement is that f'(x) = f(x) over the domain, the function
f can not be non-analytic, because if it is non-analytic over the domain there
would be an x for which the derivative does not exist.
> By analytic, I mean given by a convergent power series. For functions
> from C to C, this is equivalent to being infinitely differentiable (in
> the complex sense), but for functions from R to R, being analytic is a
> stronger property.

Is it? What more is required?
> I happily agree that it's immediately clear that
> exp(x) = 1 + x + x^2/2! +...
> is the unique analytic function from R to R (or from C to C) with
> exp'=exp, but I don't see a simple reason why there couldn't be a
> non-analytic function f:R->R with f'=f (of course, f could not be the
> restriction of a holomorphic function on C).

Can you show a non-analytic function in your sense where the derivative
does not exist for some 'x' within the domain?

With regards to a previous remark by you:
> But how do you know that f has an inverse and that it's differentiable?
> You need some version of the Inverse Function Theorem, which is
> distinctly non-trivial. (And to apply it, you need to show that exp'(x)
> never vanishes.)
where f(x) = log(x) = int{1..x} dt/t.

I need to show that f(x) is a bijection from R+ to R, that f'(x) does not
vanish, and that is all. What more do you want?
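(Sketching those two facts directly from the integral definition, for anyone
following along: f'(x) = 1/x > 0 on (0, inf), so f is strictly increasing and
hence injective. For surjectivity, note that
int{2^k..2^(k+1)} dt/t >= 2^k/2^(k+1) = 1/2
for every k, so f(2^n) >= n/2 -> +inf, while the substitution t -> 1/t gives
f(1/x) = -f(x), so f(x) -> -inf as x -> 0+. By the intermediate value theorem
f takes every real value, hence it is a bijection from R+ to R.)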
 

JosephKK

You have a point here, albeit not as much as you think. One way
to look at "special" is to say that a function definition is
special if it is free of arbitrary parameters. On that view, at
first sight it would seem that ln isn't special. However there
is more to the story.

The exponential function is special because its definition is
parameter free. The differential equation is

d(exp(x))/dx = exp(x)

In turn ln(x) is special because it is the inverse of exp(x),
which is a parameter free definition.

When we have a family of functions that is distinguished by a
parameter, it is common enough that we pick one member of the
family as the canonical function and express the others in terms
of the canonical function. In a sense the choice is arbitrary.
However it isn't really, because one of the desiderata is that
using the canonical function simplifies equations.
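(A one-line illustration of that last point: for the family a^x, writing
a^x = exp(x * ln(a)) gives
d(a^x)/dx = ln(a) * a^x,
so every member of the family picks up a constant factor ln(a) under
differentiation; the canonical choice a = e is exactly the one that makes
that factor 1.)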





Richard Harter, (e-mail address removed)
http://home.tiac.net/~cri, http://www.varinoma.com
If I do not see as far as others, it is because
I stand in the footprints of giants.

OK. I can accept that.
 

JosephKK

On 14 Mar 2009 12:01:03 GMT, (e-mail address removed) (Richard Tobin)
wrote: ... ...
Actually it is a less-than-trivial result.

It is trivial. Given the following definitions:
exp(x) is the function whose derivative is exp(x) with exp(0) = 1
log(x) = integral{1..x} dt/t
Now I use the following theorem:
given a function f and a function g where g is the inverse of f, in
that case we have (primes denote the derivative):
g'(x) = 1/f'(g(x))
which is fairly easy to prove using basic properties of derivatives.

Set f(x) = log(x); we have:
g'(x) = 1/log'(g(x)) = 1/(1/g(x)) = g(x)
so the inverse function satisfies the functional definition of exp(x). It remains
to prove that g(0) = 1, but that follows because f(1) = 0. So exp(x) is the
inverse of log(x).

So, if we define e = exp(1) we find immediately log(e) = 1.
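(As a purely numerical aside, the C standard library lets you watch this
relationship directly; a minimal sketch, where the exact digits printed depend
on the implementation's floating-point rounding:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double e = exp(1.0);                /* define e as exp(1) */
    printf("e      = %.17g\n", e);      /* about 2.718281828459045 */
    printf("log(e) = %.17g\n", log(e)); /* 1, up to rounding error */
    return 0;
}
)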

What part of the above is not trivial?

Spending the time and effort to state it clearly.
 

JosephKK

I think you're making the mistake of assuming that a statement is
equivalent to the converse of its contrapositive :)

A function f is analytic on R if there is a power series Sum(a_n x^n)
such that the series converges to f(x) for every x in R.

IIRC the fact that a function is analytic is not necessarily
proven by the fact that it can be represented by a power series. Try
arcsin(x), which has an infinite number of correct values.
This implies that f is differentiable. (In fact, that f is infinitely
differentiable, and the power series Sum(a_n x^n) is none other than the
Taylor series of f about 0.)

But the converse is false: there are differentiable functions that are
not analytic. The canonical example is
f(x)= exp(-1/x^2) if x!=0
= 0 if x==0
This is infinitely differentiable, but all its derivatives at 0 are 0,
so the Taylor series of f does not converge to f.

Question (since I no longer remember): are Riemann integrable
functions analytic?
 

JosephKK

There's nothing but a scalar difference separating 1 from e, or e from
pi, or pi from 10. If scalar differences are important, then there are
no special numbers.

Yes, it is a defensible position. It has had applications a few
times.
 

JosephKK

...

Since when? Or are you twisting away from trying to evaluate those
two cases?

Since those "functions" are applied to complex numbers. In complex
analysis log(x) is not a single-valued "function" (read something about
Riemann surfaces). That is, for any x log(x) is the set of values
{Log(x) + 2k*pi*i}
where Log(x) indicates some main value (for real x it is the real valued
log(x)).
(You can see that for each of these values exp will give the same result;
the reason is that exp(z) is a periodic function with period 2*pi*i.)
So, when we allow multi-valued functions on a proper Riemann surface
we get (using Log for some main value, and I use -k rather than k):
2^(i * pi) = exp(i * pi * log(2)) =
= exp(i * pi * (Log(2) - 2 * k * pi * i)) =
= exp(i * pi * Log(2) + 2 * k * pi * pi) =
= exp(i * pi * Log(2)) * exp(2 * k * pi * pi) =
= (cos(Log(2)*pi) + i * sin(Log(2)*pi)) * exp(2 * k * pi * pi).
for arbitrary k.
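(For whatever it's worth, the k = 0 term above - the principal value - is what
C99's <complex.h> computes, since clog uses the principal branch and cpow is
defined with a branch cut along the negative real axis; a minimal sketch,
assuming a C99 compiler with complex support:

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = acos(-1.0);
    /* cpow evaluates the power on the principal branch,
       i.e. the k = 0 value of the multi-valued expression above. */
    double complex z = cpow(2.0, I * pi);
    printf("2^(i*pi) = %.6f %+.6f*i\n", creal(z), cimag(z));
    /* the same thing spelled out as exp(b * log(a)): */
    double complex w = cexp(I * pi * clog(2.0));
    printf("           %.6f %+.6f*i\n", creal(w), cimag(w));
    return 0;  /* both print about -0.570 + 0.821*i */
}
)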
The mathematical definition
of the exponentiation operator is:
a^b = exp(b.log(a))
where that is well-defined. Where in that definition is 'e' used?

The base value of the exp() and log()[really ln()] functions??

Where in the definitions is the term "base" used? And in mathematics
generally when you state "log" you mean the natural logarithm. A quote
from mathworld.com:
"Note that while logarithm base 10 si denoted 'log x' in this work, on
calculators, and in elementary algebra and calculus textbooks,
mathematicians and advanced mathematics text uniformly use the
notation 'log x' to mean 'ln x', and therefore use 'log_10 x' to
mean the common logarithm. Extreme care is therefore needed when
consulting the literature.'
And I warn you: number theorists sometimes give a completely
different meaning to 'log_10 x'.

Yeow. <Withdraws singed fingers> In less space, a far better
explanation than in any of the textbooks that I failed to learn this
from way back when.
I do remember ordinary roots and powers on complex Riemann surfaces,
but misremembered the application to ln() and exp().
 

luser-ex-troll

I'm starting to dabble with FPGA coding. I might try to see
how many hand-written CORDIC blocks I can squeeze into the
same space as one optimised fixed-point operational block for
the usual CORDIC functions (log, exp, atan2, maybe sqrt if
I can remember how to do it). The only problem is that I have
no personal use for such a block; I'm more into discrete
mathematics.
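(For the curious, the core of a vectoring-mode CORDIC stage really is nothing
but shifts, adds and a small arctangent table; a rough illustrative sketch in
C, using doubles for readability - a block meant for synthesis would of course
use fixed point, and cordic_atan2 is just a made-up name here:

#include <math.h>
#include <stdio.h>

/* Vectoring mode: rotate (x, y) onto the positive x axis in steps of
   atan(2^-i), accumulating the total rotation. For x > 0 the accumulated
   angle converges to atan2(y, x); x ends up as the magnitude times the
   CORDIC gain. */
static double cordic_atan2(double y, double x, int iters)
{
    double angle = 0.0;
    for (int i = 0; i < iters; i++) {
        double d  = (y < 0.0) ? -1.0 : 1.0;   /* rotate so that y -> 0 */
        double xs = x + d * ldexp(y, -i);     /* x + d * y * 2^-i */
        double ys = y - d * ldexp(x, -i);     /* y - d * x * 2^-i */
        x = xs;
        y = ys;
        angle += d * atan(ldexp(1.0, -i));    /* table entry atan(2^-i) */
    }
    return angle;
}

int main(void)
{
    printf("cordic %.9f  libm %.9f\n",
           cordic_atan2(1.0, 2.0, 40), atan2(1.0, 2.0));
    return 0;
}
)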

Phil

Have you tried any of the C-to-gates compilers that you could
comment on, for the curious?

lxt
 

Nate Eldredge

Dik T. Winter said:
The definition of analytic is that it has a derivative over its domain.
So if the requirement is that f'(x) = f(x) over the domain, the function
f can not be non-analytic, because if it is non-analytic over the domain there
would be an x for which the derivative does not exist.

That's not the definition of "analytic" that's usually used in
mathematics. Antonius has given the correct definition: a function f is
analytic at a point c if there is a power series

a_0 + a_1 (x-c) + a_2 (x-c)^2 + a_3 (x-c)^3 + ...

which converges to f(x) for all x in some neighborhood of c. f is
analytic if it is analytic at every point in R. (We often use the term
"real analytic" when talking about functions on R; "analytic" by itself
usually refers to functions on C.)

It's a theorem that if f is analytic, then it is infinitely
differentiable (has derivatives of all orders), and the power series
which converges to f is actually the Taylor series of f. In the case of
functions on R, the converse of this statement is false. See below.
Is it? What more is required?

Yes, it is.

There are functions on R which are infinitely differentiable but for
which the power series at some point does not converge to the function
in a neighborhood of that point.

The classical example is

f(x) = exp(-1/x^2) for x > 0
f(x) = 0 for x <= 0

You can verify that f has derivatives of all orders at every point, and
in particular at x=0. Moreover, at x=0, the derivatives of all orders
are equal to 0. So if you expanded f in a Taylor series about x=0, you
would get the series 0 + 0 + 0 + 0.... The power series converges to 0
at every point, but that's not the value of f(x) for any x > 0.

So f is a function which is infinitely differentiable but not real
analytic.
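(The first step of that verification, spelled out: from the definition of the
derivative,
f'(0) = lim{h->0+} exp(-1/h^2)/h = lim{t->+inf} t * exp(-t^2) = 0
(putting t = 1/h), because exp(-t^2) shrinks faster than any power of t grows;
the left-hand limit is trivially 0, and the higher derivatives at 0 vanish by
the same kind of estimate.)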

The point is that any argument which implicitly assumes that the
function can be written as a power series will only apply to analytic
functions.
Can you show a non-analytic function in your sense where the derivative
does not exist for some 'x' within the domain?

Er, that's easy: f(x) = |x| is not analytic and not differentiable at
x=0. With more work I can give you one that's not analytic and not
differentiable at any point. Was that really what you meant to ask?

Anyway, exp(x) is not the only function on R which is its own
derivative; c*exp(x) for any real constant c will also do. But those
are the only examples. Here's a simple proof that doesn't assume f is
analytic.

Theorem. Suppose that f is a once-differentiable function on R such
that f'(x) = f(x) for all x in R. Then there exists a constant c such
that f(x) = c * exp(x).

Proof. Let f be such a function, and let g(x) = f(x) * exp(-x). By the
product rule, g'(x) = f'(x) * exp(-x) - f(x) * exp(-x). Since f'(x) = f(x),
g'(x) = 0 for all x. By the mean value theorem, g is constant. QED.

HTH, etc.
 

Nate Eldredge

Nate Eldredge said:
The classical example is

f(x) = exp(-1/x^2) for x > 0
f(x) = 0 for x <= 0

Sorry, I just noticed Antoninus already gave this example. (Also, I
misspelled Antoninus. Apologees.)
 

Nate Eldredge

JosephKK said:
Question (since I no longer remember): are Riemann integrable
functions analytic?

Nope. They need not even be continuous.

f(x) = 1, if -1 <= x <= 1
f(x) = 0, otherwise

is Riemann integrable over all of R, but not continuous, hence not
differentiable and certainly not analytic.

f(x) = exp(-1/(x^2-1)^2), if -1 < x < 1
f(x) = 0, otherwise

is infinitely differentiable, Riemann integrable on all of R, but not
analytic.
 

Phil Carmody

Nate Eldredge said:
That's not the definition of "analytic" that's usually used in
mathematics.

It's *one* of the definitions that's used. In recent years the
concept Dik is referring to has also been called holomorphic
in preference, but one can't deny that it's also commonly called
analytic.

Phil
 

Nate Eldredge

Phil Carmody said:
It's *one* of the definitions that's used. In recent years the
concept Dik is referring to has also been called holomorphic
in preference, but one can't deny that it's also commonly called
analytic.

I think we are at cross purposes.

For functions on C, the property of having a derivative (in the complex
sense!) over an open set is equivalent to being given by a power
series. The terms "analytic" and "holomorphic" are both used for this
property.

For functions on R, the property of having a derivative over an open set
(interval) is strictly weaker than being given by a power series. The
term "analytic" (or "real analytic") is commonly applied to the latter
but never, in my experience, to the former. I have not seen the word
"holomorphic" used to describe functions on R at all.

The discussion was regarding functions on R.

If you have examples of conflicting uses of these terms, I'd be
interested to see them.
 

James Kuyper

Nate said:
That's not the definition of "analytic" that's usually used in
mathematics.

"Mathematical Analysis", 2nd Edition, Tom M. Apostol, Definition 16.1:

"Let f = u + iv be a complex-valued function defined on an open set S in
the complex plane C. Then f is said to be analytic on S if the
derivative f' exists and is continuous at every point of S."

I've several other math books that say the same thing in various ways,
but Apostol says it most clearly. All of them treat convergence of
the power series as a property to be derived from that definition, not
as the definition of the term.
 

Dik T. Winter

....
> This implies that f is differentiable. (In fact, that f is infinitely
> differentiable, and the power series Sum(a_n x^n) is none other than the
> Taylor series of f about 0.)

The definition I gave for exp(x) was over the complex plane.
 

Antoninus Twink

"Mathematical Analysis", 2nd Edition, Tom M. Apostol, Definition 16.1:

"Let f = u + iv be a complex-valued function defined on an open set S in
the complex plane C. Then f is said to be analytic on S if the
derivative f' exists and is continuous at every point of S."

Note the crucial words "complex-valued function defined on an open set S
in the complex plane". For complex functions, being differentiable on an
open disc is equivalent to being given by a convergent power series
there, so authors are free to use whichever definition they find most
convenient.

For (real valued) functions of a *real* variable, the two conditions are
no longer equivalent - a famous counterexample has been provided at
least twice already in this thread - and "analytic" is conventionally
reserved for the stronger condition.
 

Antoninus Twink

Theorem. Suppose that f is a once-differentiable function on R such
that f'(x) = f(x) for all x in R. Then there exists a constant c such
that f(x) = c * exp(x).

Proof. Let f be such a function, and let g(x) = f(x) * exp(-x). By the
product rule, g'(x) = f'(x) * exp(-x) - f(x) * exp(-x). Since f'(x) = f(x),
g'(x) = 0 for all x. By the mean value theorem, g is constant. QED.

Neat! The only solution that came to my mind was invoking uniqueness
results for solutions of linear first order ODEs, and I was sure that
was overkill.
 

Nate Eldredge

Antoninus Twink said:
Neat! The only solution that came to my mind was invoking uniqueness
results for solutions of linear first order ODEs, and I was sure that
was overkill.

I thought of the same thing. Then I looked up a proof of the uniqueness
theorem, which led me to a proof of Gronwall's lemma, at which point I
could see how the whole thing would simplify in this case.
 

Antoninus Twink

Neat? Nate's proof smells awfully vacuous. I suspect it relies on the assumed
properties of exp(x) to prove something about exp(x).

What assumed properties?
Suppose that you approach the proof without any prior knowledge about exp(x)
being e to the power of x, and all of the properties which that entails.

How do you define a^x for irrational a and x, other than in terms of
exp()?
What are the weakest properties of this mysterious exp(x) for the proof's
argument to be valid?

Nate is just relying on the standard definition of exp, i.e.
exp(x) = 1 + x + x^2/2! + ...

Of course, one must prove that this power series converges for all x,
and that one can differentiate a convergent power series term by term
inside its radius of convergence, and that when one does this in this
case one gets the same power series back.
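(The last step, written out: differentiating the series term by term gives
d/dx [1 + x + x^2/2! + x^3/3! + ...] = 0 + 1 + x + x^2/2! + ...
since d/dx [x^n/n!] = x^(n-1)/(n-1)!, and that is the same series again.)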
The proof asks us to accept that whatever exp(x) is, it differentiates to
itself. That is to say exp(x) denotes any member of a class of functions that
differentiate to themselves.

Not at all - see above.
But then, we are asked to accept that this exp(x) function has the property
that exp(x) * exp(-x) is a constant, which is what allows us to algebraically
identify f(x) as c * exp(x).

One must prove that power series converging to functions f and g can be
multiplied in the "obvious" way(*) to give a power series that converges
to fg in a suitable radius. Once again, this will be in any
undergraduate analysis text.

Then one must apply this procedure to the power series for exp(x) and
exp(-x): not a difficult exercise.


(*) i.e. (Sum a_n)(Sum b_n) = Sum c_n, where
c_n = a_0.b_n + a_1.b_(n-1) + ... + a_n.b_0.
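(Carrying that exercise out: with a_k = x^k/k! and b_k = (-x)^k/k!,
c_n = Sum{k=0..n} x^k/k! * (-x)^(n-k)/(n-k)!
    = (1/n!) Sum{k=0..n} C(n,k) x^k (-x)^(n-k)
    = (x + (-x))^n / n!
by the binomial theorem, which is 1 for n = 0 and 0 for every n >= 1. So
exp(x) * exp(-x) = 1 identically, and in particular exp never vanishes.)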
 

Nate Eldredge

Kaz Kylheku said:
Note that c can be zero, and so the function f(x) = 0 is also its own
derivative.


Neat? Nate's proof smells awfully vacuous. I suspect it relies on the assumed
properties of exp(x) to prove something about exp(x).

Okay, perhaps you should propose a definition for exp(x), and we can see
how the desired properties follow.
Suppose that you approach the proof without any prior knowledge about exp(x)
being e to the power of x, and all of the properties which that entails.

What are the weakest properties of this mysterious exp(x) for the proof's
argument to be valid?

The proof asks us to accept that whatever exp(x) is, it differentiates to
itself. That is to say exp(x) denotes any member of a class of functions that
differentiate to themselves. This allows the product rule step to hold. So
far so good. It's valid to imagine a set of self-differentiating functions,
give it a name, and plug it into a formula where we apply a valid reduction
from differential calculus. So far so good! Of course g(x) is constant.

But then, we are asked to accept that this exp(x) function has the property
that exp(x) * exp(-x) is a constant, which is what allows us to algebraically
identify f(x) as c * exp(x).

exp(-x) is a notational convenience. You can replace it with 1/exp(x)
and the proof will go through the same way, using the quotient rule
instead of the product rule. Does that make you happier?
Hence the proof's conclusion is bootstrapped from its own assumptions. It
/assumes/ that all functions exp which self-differentiate have the property
that exp(x) = c/exp(-x), and since f(x) is a self-differentiating function,
it must be one of these.

I don't think that it does.

Here is another theorem, which doesn't mention exp(x) at all.

Lemma. Suppose g is a once-differentiable function on R such that
g'(x) = g(x) for all x in R. Then either g is the zero function, or
g(x) is nonzero for all x in R.

Proof. Suppose, in order to get a contradiction, that there exist x0,
x1 such that g(x0) = 0, g(x1) != 0.

Suppose first that x0 < x1 and g(x1) > 0. Let y = sup { x < x1 : g(x) = 0 }.
Because g is differentiable, g is continuous; it follows that g(y) = 0
and g > 0 on the (nonempty) interval (y, x1). Since g = g', we have
that g is strictly increasing on (y, x1). Choose y1 in (y, x1) such
that 0 < y1-y < 1. By the mean value theorem, there exists x in (y, y1)
such that g'(x) = g(y1)/(y1-y) > g(y1). Since g(x)=g'(x) we have g(x) >
g(y1). But this is absurd since x < y1 and g is increasing on (y, x1).

Suppose next that x0 > x1 and g(x1) > 0. Let y = inf { x > x1 : g(x) = 0 }.
As before, g(y) = 0 and g > 0 on (x1, y). Thus g is increasing on (x1, y).
But g(x1) > 0 and g(y) = 0 so this is absurd.

The cases where g(x1) < 0 are similar and left as an exercise to the
reader. QED.

Theorem. Suppose f,g are two once-differentiable functions on R such
that f'(x) = f(x) and g'(x) = g(x) for all x in R. Suppose further that
g is not the zero function. Then there exists a constant c such that
f(x) = c g(x) for all x in R.

Proof. By the lemma, g(x) is never zero, so the function h(x)=f(x)/g(x)
is differentiable on R. By the quotient rule, for any x in R we have

h'(x) = (f'(x)g(x) - f(x)g'(x))/g(x)^2
= (f(x)g(x) - f(x)g(x))/g(x)^2
= 0

since f'(x)=f(x), g'(x)=g(x). By the mean value theorem h is constant,
i.e. there exists c such that h(x)=c for all x. Thus f(x) = c g(x) for
all x in R. QED.

With this theorem in hand, start with your favorite definition for
exp(x). Use it to prove that exp'(x) = exp(x) for all x, and that
exp(x) is not the zero function. Then take g(x)=exp(x) in the theorem;
it follows that if f is any other function with f(x)=f'(x), then
f(x) = c*exp(x) for all x.

If you can also prove from your definition that exp(x) is never zero,
you can dispense with the lemma, which is most of the work.
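(If the favourite definition is the power series, the "own derivative and
never zero" facts can at least be sanity-checked numerically against the
library; a minimal C sketch, where exp_series is just an illustrative helper
name:

#include <math.h>
#include <stdio.h>

/* Sum the defining series 1 + x + x^2/2! + ... until adding another term
   no longer changes the double-precision total. */
static double exp_series(double x)
{
    double term = 1.0, sum = 1.0;
    for (int n = 1; n < 200; n++) {
        term *= x / n;              /* term is now x^n / n! */
        double next = sum + term;
        if (next == sum)
            break;
        sum = next;
    }
    return sum;
}

int main(void)
{
    for (double x = -2.0; x <= 2.0; x += 1.0)
        printf("x = %+g  series = %.15g  library = %.15g\n",
               x, exp_series(x), exp(x));
    return 0;
}
)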
 
