scipy.optimize.minimize: Compute Hessian and Gradient Together
The scipy.optimize.minimize function is roughly the equivalent of MATLAB's 'fminunc' for finding local minima of functions. In SciPy, the gradient and the Hessian are passed as separate callables (jac and hess), even when most of their computation is shared, so it would be useful to compute them together instead of evaluating each one from scratch.
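For context, here is a minimal sketch of the separate-callback pattern the question refers to, using the same toy objective f(x, y) = x**2 + y**2 as the answer below. The helper names (f_jac_separate, f_hess_separate) are illustrative, not part of the original code; the point is that any work shared between the two callbacks is done twice.

import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**2 + x[1]**2

def f_jac_separate(x):
    # gradient, computed on its own
    return np.array([2*x[0], 2*x[1]])

def f_hess_separate(x):
    # Hessian, computed on its own; anything shared with the gradient is recomputed
    return np.array([[2, 0], [0, 2]])

res = minimize(f, np.array([1.0, 2.0]), method='Newton-CG',
               jac=f_jac_separate, hess=f_hess_separate)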
Solution 1:
You could go for a caching solution, with two caveats: first, numpy arrays are not hashable, and second, you only need to cache a few values, depending on how much the algorithm jumps back and forth between points. If the algorithm only moves from one point to the next, you can cache just the last computed point, with your f_hes and f_jac being thin lambda wrappers around a single function that computes both:
import numpy as np

# Example objective f(x, y) = x**2 + y**2, with x, y the 1st and 2nd elements of x:
def f(x):
    return x[0]**2 + x[1]**2

def f_jac_hess(x):
    # If x is the same point as the previous call, return the cached result
    if np.all(x == f_jac_hess.lastx):
        print('fetch cached value')
        return f_jac_hess.lastf
    print('new elaboration')
    # Gradient and Hessian of f, computed together in one pass
    res = np.array([2*x[0], 2*x[1]]), np.array([[2, 0], [0, 2]])
    f_jac_hess.lastx = x
    f_jac_hess.lastf = res
    return res

f_jac_hess.lastx = np.empty((2,)) * np.nan  # NaN sentinel: never equal to any x
f_jac = lambda x: f_jac_hess(x)[0]
f_hes = lambda x: f_jac_hess(x)[1]
The first call computes and caches the result; the second call at the same point fetches the cached value:
>>> f_jac([3, 2])
new elaboration
array([6, 4])
>>> f_hes([3, 2])
fetch cached value
array([[2, 0],
       [0, 2]])
You then call it as:
from scipy.optimize import minimize
minimize(f, np.array([1, 2]), method='Newton-CG', jac=f_jac, hess=f_hes,
         options={'xtol': 1e-30, 'disp': True})
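A note on why this works for Newton-CG: within an iteration the solver evaluates the gradient and the Hessian at the same x, so the hess call simply reuses the tuple already computed for jac; only when the solver moves to a new point does f_jac_hess actually recompute. If your solver revisits older points, the single-entry cache would need to be replaced by something larger (for example keyed on x.tobytes(), since arrays themselves are not hashable).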