SciPy Optimization syntax

tkpmep

I'm trying to optimize a function using SciPy's optimize.fmin, but am
clearly getting the syntax wrong, and would be grateful for some
guidance. First, here's the function

def func(Y, x):
    """Y holds samples of a function sampled at t=-3,-2,-1,0,1,2,3.
    Y[3]=0 always.
    func returns the absolute value of the maximum NEGATIVE
    error from a straight line fit with slope x and intercept 0"""

    Y[0] = Y[0] - 3*x
    Y[1] = Y[1] - 2*x
    Y[2] = Y[2] - x
    Y[3] = 0
    Y[4] = Y[4] + x
    Y[5] = Y[5] + 2*x
    Y[6] = Y[6] + 3*x

    error = abs(max(min(Y), 0))

    return 0

I'd now like to minimize this using optimize.fmin. I first defined
Y = [0, 0, 0, 0, 1, 2, 3]
x = 1

and then typed

optimize.fmin(func, args=(optionPnL, x))

I expected the function to return x=0 as the optimal value, but instead
got the following error message:
Traceback (most recent call last):
  File "<pyshell#24>", line 1, in -toplevel-
    optimize.fmin(func,args=(optionPnL,x))
TypeError: fmin() takes at least 2 non-keyword arguments (1 given)

I then tried

optimize.fmin(func, x0=x, args=(optionPnL, 1))

and got a slightly different error message:
Traceback (most recent call last):
  File "<pyshell#25>", line 1, in -toplevel-
    optimize.fmin(func,x0=x, args=(optionPnL,1))
  File "C:\Python24\lib\site-packages\scipy\optimize\optimize.py", line 176, in fmin
    N = len(x0)
TypeError: len() of unsized object

What am I doing wrong, and what's the appropriate fix?

Thanks in advance

Thomas Philips
 
Robert Kern

I'm trying to optimize a function using SciPy's optimize.fmin, but am
clearly getting the syntax wrong, and would be grateful for some
guidance.

You will want to ask such questions on the scipy mailing lists.

http://www.scipy.org/Mailing_Lists
First, here's the function

def func(Y, x):
    """Y holds samples of a function sampled at t=-3,-2,-1,0,1,2,3.
    Y[3]=0 always.
    func returns the absolute value of the maximum NEGATIVE
    error from a straight line fit with slope x and intercept 0"""

    Y[0] = Y[0] - 3*x
    Y[1] = Y[1] - 2*x
    Y[2] = Y[2] - x
    Y[3] = 0
    Y[4] = Y[4] + x
    Y[5] = Y[5] + 2*x
    Y[6] = Y[6] + 3*x

    error = abs(max(min(Y), 0))

    return 0

If func(Y,x) == 0 for any Y or x, what exactly do you intend to minimize?

Also, do you really want to modify Y every time? fmin() will call this function
multiple times with different values of x (if you call it correctly); your
original data will be destroyed and your result will be meaningless.
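
For instance, a minimal sketch (using func as defined above and the sample
values from the question) of what a single call does to the data:

Y = [0, 0, 0, 0, 1, 2, 3]
func(Y, 1.0)
print(Y)   # now [-3.0, -2.0, -1.0, 0, 2.0, 4.0, 6.0] -- the original samples are gone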

Thirdly, it looks like you used the wrong sign for finding the residuals, or I'm
misunderstanding the docstring. I'll assume that the docstring is correct for
the following.
I'd now like to minimize this using optimize.fmin. I first defined
Y = [0, 0, 0, 0, 1, 2, 3]
x = 1

and then typed

optimize.fmin(func, args=(optionPnL, x))

I expected the function to return x=0 as the optimal value, but instead
got the following error message:
Traceback (most recent call last):
  File "<pyshell#24>", line 1, in -toplevel-
    optimize.fmin(func,args=(optionPnL,x))
TypeError: fmin() takes at least 2 non-keyword arguments (1 given)

Yes, fmin() requires two arguments, the function to minimize and an initial
value. The docstring is pretty clear on this:


Type: function
Base Class: <type 'function'>
String Form: <function fmin at 0x2028670>
Namespace: Interactive
File:            /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy-0.5.2.dev2196-py2.4-macosx-10.4-ppc.egg/scipy/optimize/optimize.py
Definition:      optimize.fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001,
                 maxiter=None, maxfun=None, full_output=0, disp=1,
                 retall=0, callback=None)
Docstring:
Minimize a function using the downhill simplex algorithm.

    Description:

      Uses a Nelder-Mead simplex algorithm to find the minimum of function
      of one or more variables.

    Inputs:

      func -- the Python function or method to be minimized.
      x0 -- the initial guess.
      args -- extra arguments for func.
      callback -- an optional user-supplied function to call after each
                  iteration. It is called as callback(xk), where xk is the
                  current parameter vector.

    Outputs: (xopt, {fopt, iter, funcalls, warnflag})

      xopt -- minimizer of function
      fopt -- value of function at minimum: fopt = func(xopt)
      iter -- number of iterations
      funcalls -- number of function calls
      warnflag -- Integer warning flag:
                  1 : 'Maximum number of function evaluations.'
                  2 : 'Maximum number of iterations.'
      allvecs -- a list of solutions at each iteration

    Additional Inputs:

      xtol -- acceptable relative error in xopt for convergence.
      ftol -- acceptable relative error in func(xopt) for convergence.
      maxiter -- the maximum number of iterations to perform.
      maxfun -- the maximum number of function evaluations.
      full_output -- non-zero if fval and warnflag outputs are desired.
      disp -- non-zero to print convergence messages.
      retall -- non-zero to return list of solutions at each iteration
I then tried

optimize.fmin(func, x0=x, args=(optionPnL, 1))

and got a slightly different error message:
Traceback (most recent call last):
  File "<pyshell#25>", line 1, in -toplevel-
    optimize.fmin(func,x0=x, args=(optionPnL,1))
  File "C:\Python24\lib\site-packages\scipy\optimize\optimize.py", line 176, in fmin
    N = len(x0)
TypeError: len() of unsized object

fmin() minimizes functions which take arrays. They should have a signature like
this:

def func(x):
    return stuff

If you need to pass in other arguments, like data, they need to come *after* the
array fmin() is trying to find the optimal value for.

def func(x, Y):
    return stuff

xopt = optimize.fmin(func, x0=array([0.0, 1.0]), args=(my_data,))
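
Putting that together, a self-contained sketch of the calling convention
(the function name neg_error and the length-1 x0 are illustrative choices,
not from the original post):

from numpy import arange, array, clip, inf
from scipy import optimize

def neg_error(x, Y):
    # x arrives from fmin() as a length-1 array; Y is passed through args
    residuals = Y - x[0]*arange(-3, 4)
    return -clip(residuals, -inf, 0).min()

optionPnL = array([0.0, 0, 0, 0, 1, 2, 3])
xopt = optimize.fmin(neg_error, x0=array([1.0]), args=(optionPnL,))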


However, since you are not doing multivariable optimization, you will want to
use one of the univariate optimizers:

Scalar function minimizers

fminbound -- Bounded minimization of a scalar function.
brent -- 1-D function minimization using Brent method.
golden -- 1-D function minimization using Golden Section method
bracket -- Bracket a minimum (given two starting points)

For example:

from numpy import array, arange, clip, inf
from scipy import optimize

def func(x, Y):
    residuals = Y - x*arange(-3, 4)
    error = -clip(residuals, -inf, 0).min()
    return error

optionPnL = array([0.0, 0, 0, 0, 1, 2, 3])
x = optimize.brent(func, args=(optionPnL,))


Of course, there are an infinite number of solutions for this data since there
is a cusp and a weird residual function. Any x in [0, 1] will yield 0 error
since it is always on or below the data.
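
As a quick illustration of that (reusing func and optionPnL from the example
above; not part of the original reply):

for trial in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(func(trial, optionPnL))   # each of these slopes sits on or below the data, so the error is 0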

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 
