Kaz Kylheku said:
Note that c can be zero, and so the function f(x) = 0 is also its own
derivative.
Neat? Nate's proof smells awfully vacuous. I suspect it relies on the assumed
properties of exp(x) to prove something about exp(x).
Okay, perhaps you should propose a definition for exp(x), and we can see
how the desired properties follow.
Suppose that you approach the proof without any prior knowledge about exp(x)
being e to the power of x, and all of the properties which that entails.
What are the weakest properties of this mysterious exp(x) for the proof's
argument to be valid?
The proof asks us to accept that whatever exp(x) is, it differentiates to
itself. That is to say exp(x) denotes any member of a class of functions that
differentiate to themselves. This allows the product rule step to hold.
It's valid to imagine a set of self-differentiating functions, give it a
name, and plug it into a formula where we apply a valid reduction from
differential calculus. So far so good! Of course g(x) is constant.
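(As a sanity check of just that calculus step, here is a sketch with sympy.
Note that sympy's exp already carries the very properties under dispute, so
this verifies only the product-rule reduction, not the assumption:

```python
import sympy as sp

# Differentiate g(x) = f(x) * exp(-x) for an abstract f, then impose the
# self-differentiation property f' = f by substitution.
x = sp.symbols('x')
f = sp.Function('f')
gp = sp.diff(f(x) * sp.exp(-x), x)            # f'(x)*exp(-x) - f(x)*exp(-x)
gp = gp.subs(sp.Derivative(f(x), x), f(x))    # impose f' = f
print(sp.simplify(gp))                        # 0, so g is constant
```

which agrees that g'(x) = 0 under the stated assumptions.)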
But then, we are asked to accept that this exp(x) function has the property
that exp(x) * exp(-x) is a constant, which is what allows us to algebraically
identify f(x) as c * exp(x).
exp(-x) is a notational convenience. You can replace it with 1/exp(x)
and the proof will go through the same way, using the quotient rule
instead of the product rule. Does that make you happier?
Hence the proof's conclusion is bootstrapped from its own assumptions. It
/assumes/ that all functions exp which self-differentiate have the property
that exp(x) = c/exp(-x), and since f(x) is a self-differentiating function,
it must be one of these.
I don't think that it does.
Here is another theorem, which doesn't mention exp(x) at all.
Lemma. Suppose g is a once-differentiable function on R such that
g'(x) = g(x) for all x in R. Then either g is the zero function, or
g(x) is nonzero for all x in R.
Proof. Suppose, in order to get a contradiction, that there exist x0,
x1 such that g(x0) = 0, g(x1) != 0.
Suppose first that x0 < x1 and g(x1) > 0. Let y = sup { x < x1 : g(x) = 0 }.
Because g is differentiable, g is continuous; it follows that g(y) = 0
and g > 0 on the (nonempty) interval (y, x1). Since g = g', we have
that g is strictly increasing on (y, x1). Choose y1 in (y, x1) such
that 0 < y1-y < 1. By the mean value theorem, there exists x in (y, y1)
such that g'(x) = (g(y1)-g(y))/(y1-y) = g(y1)/(y1-y) > g(y1), since
g(y) = 0 and y1-y < 1. Since g(x)=g'(x) we have g(x) > g(y1). But this is
absurd since x < y1 and g is increasing on (y, x1).
Suppose next that x0 > x1 and g(x1) > 0. Let y = inf { x > x1 : g(x) = 0 }.
As before, g(y) = 0 and g > 0 on (x1, y). Thus g is increasing on (x1, y).
But g(x1) > 0 and g(y) = 0 so this is absurd.
The cases where g(x1) < 0 are similar and left as an exercise to the
reader. QED.
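(A numerical illustration of the lemma, not a proof: march g' = g forward by
Euler steps from a negative starting value and watch the sign never change.

```python
# Forward Euler for g' = g from g(0) = -2 over x in [0, 5].
# Each step multiplies g by (1 + h), so the sign is preserved,
# consistent with the lemma: a nonzero solution never reaches zero.
h = 0.001
g = -2.0
signs = set()
for _ in range(5000):
    g += h * g
    signs.add(g < 0)
print(signs)    # {True}: g stayed negative throughout
```

The same holds from any nonzero starting value, positive or negative.)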
Theorem. Suppose f,g are two once-differentiable functions on R such
that f'(x) = f(x) and g'(x) = g(x) for all x in R. Suppose further that
g is not the zero function. Then there exists a constant c such that
f(x) = c g(x) for all x in R.
Proof. By the lemma, g(x) is never zero, so the function h(x)=f(x)/g(x)
is differentiable on R. By the quotient rule, for any x in R we have
h'(x) = (f'(x)g(x) - f(x)g'(x))/g(x)^2
= (f(x)g(x) - f(x)g(x))/g(x)^2
= 0
since f'(x)=f(x), g'(x)=g(x). By the mean value theorem h is constant,
i.e. there exists c such that h(x)=c for all x. Thus f(x) = c g(x) for
all x in R. QED.
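The quotient-rule computation in the proof can likewise be replayed
symbolically for abstract f and g (a sketch with sympy; the substitution
imposes the hypotheses f' = f and g' = g):

```python
import sympy as sp

# h = f/g with both f and g self-differentiating; the quotient rule
# then forces h' = 0, matching the theorem's proof.
x = sp.symbols('x')
f, g = sp.Function('f'), sp.Function('g')
hp = sp.diff(f(x) / g(x), x)                  # (f'g - f g')/g^2
hp = hp.subs({sp.Derivative(f(x), x): f(x),
              sp.Derivative(g(x), x): g(x)})  # impose f' = f, g' = g
print(sp.simplify(hp))                        # 0
```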
With this theorem in hand, start with your favorite definition for
exp(x). Use it to prove that exp'(x) = exp(x) for all x, and that
exp(x) is not the zero function. Then take g(x)=exp(x) in the theorem;
it follows that if f is any other function with f(x)=f'(x), then
f(x) = c*exp(x) for all x.
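For what it's worth, a computer algebra system reaches the same conclusion
directly; sympy's ODE solver gives the general solution of f' = f as a
constant multiple of exp(x):

```python
import sympy as sp

# Solve the ODE f'(x) = f(x); the general solution is C1*exp(x),
# which is the theorem's conclusion with g = exp.
x = sp.symbols('x')
f = sp.Function('f')
sol = sp.dsolve(sp.Eq(f(x).diff(x), f(x)), f(x))
print(sol)    # Eq(f(x), C1*exp(x))
```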
If you can also prove from your definition that exp(x) is never zero,
you can dispense with the lemma, which is most of the work.