Optimization Part 2: Multivariate Scalar Functions
We now move into minimizing objectives that are multivariate functions. These objectives still return a single quantity that we wish to optimize, so they are scalar functions, but we will now optimize them by iteratively varying more than one function input. We encounter this type of problem all the time!
Let’s start with a very basic example: we have a function that describes the value of z for values of x and y:
$$z(x,y) = x^2 + y^2$$
By inspection, we know that this function has a minimum at x = 0, y = 0, but if we can, it’s a good idea to start with a visualization of the problem. This is a 3D problem, so we are still able to visualize it reasonably well. Once we hit 4D, all bets are off!
You can look up 3D plotting in matplotlib; I found the link below to be helpful.
https://jakevdp.github.io/PythonDataScienceHandbook/04.12-three-dimensional-plotting.html
The following syntax will create a surface plot for visualization:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.gca(projection='3d') #in newer matplotlib versions, use fig.add_subplot(projection='3d') instead

#Make data.
x = np.linspace(-5, 5, 1000)
y = np.linspace(-5, 5, 1000)
X, Y = np.meshgrid(x, y) #we're making a surface plot, so we need to create a grid of (x,y) pairs where we evaluate z

def gen(x,y):
    return x**2 + y**2 #This is my function, z(x,y)

Z = gen(X,Y) #generate the Z data on the meshgrid (X,Y)

#Plot the surface.
surf = ax.plot_surface(X, Y, Z)
plt.show()
In [2]:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure() #creating a figure
ax = fig.gca(projection='3d') #specifying it is a 3D plot
x = np.linspace(-5, 5, 1000)
y = np.linspace(-5, 5, 1000)
X, Y = np.meshgrid(x, y) #constructing a grid; each point holds the location of an (x,y) pair
def gen(x,y):
    return x**2 + y**2
z = gen(X,Y) #evaluate the function gen at each (x,y) pair on the grid
#plotting the surface
surf = ax.plot_surface(X, Y, z)
plt.show()
Out[2]:

So, this is a multivariate function, z(x,y), that returns a single output, z. We can minimize z by using any of the opt.minimize routines or any of the global optimization routines we tested on univariate functions; we just make a simple extension to our original syntax. Above, we've written this out as z(x,y), which is conceptually true, but there is a catch when working with opt.minimize:
It will only vary the value of the first argument when seeking an optimum, and it will treat any additional arguments we pass to the function as fixed parameters. So, if I were to give z(x,y) to an optimizer, it would vary the value of x while holding y fixed at the initial value. Instead, I have to create a function that accepts a vector input as its first argument. That first argument should be a collection (usually a list, a numpy array, or a tuple) of all the variables that I want to adjust until I find the minimum value of my function. So, instead of working with:
def z(x,y):
    return x**2 + y**2
or its analogous lambda function:
z = lambda x,y: x**2 + y**2
We want to write this particular objective function such that its first argument is a collection of all of the variables we want to minimize with respect to:
def z(var):
    return var[0]**2 + var[1]**2
or its analogous lambda function:
z = lambda var: var[0]**2 + var[1]**2
Often, to improve readability in a complex function, I will use a long form function definition and redefine elements in “var” using the labels that are specific to the way we express our function on paper:
def z(var):
    x = var[0]
    y = var[1]
    return x**2 + y**2
But any of the above options will work. Now, we can pass this objective function to opt.minimize. The only other catch is that we need an initial guess for the (x,y) pair, and it should be a collection of the same size and type that we used for our function argument – in fact, our initial guess is what sets the data type for var. So, my full optimization script would look something like this. Just so that we can make the optimizer work for it, let’s start at an initial guess of [10, 50] for x and y:
import scipy.optimize as opt

def z(var):
    x = var[0]
    y = var[1]
    return x**2 + y**2

var0 = [10, 50] #my initial guess for x and y formatted as a list; I could also use an array or tuple.
opt.minimize(z, var0)
As we saw in past exercises, opt.minimize will output a solution structure, and I can access individual elements using a dot operator.
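For instance, a minimal sketch of pulling individual results out of that solution structure with the dot operator (sol.x holds the optimized (x, y) values and sol.fun the objective value at the minimum):
import scipy.optimize as opt

def z(var):
    return var[0]**2 + var[1]**2

sol = opt.minimize(z, [10, 50])
print(sol.x)   #optimized (x, y) values
print(sol.fun) #value of z at the minimum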
In [5]:
import scipy.optimize as opt
def z(var):
    x = var[0]
    y = var[1]
    return x**2 + y**2
var0=[10,50]
opt.minimize(z,var0)
Out[5]:
fun: 1.0234243245626242e-11
hess_inv: array([[0.51655197, 0.02688641],
[0.02688641, 0.54367328]])
jac: array([ 6.21683737e-06, -1.55766369e-06])
message: 'Optimization terminated successfully.'
nfev: 28
nit: 4
njev: 7
status: 0
success: True
x: array([ 3.10096811e-06, -7.86282424e-07])
Now that we have the basic example, we can add bounds just as we did with the univariate functions from Assignment 13. For example, to look for the minimum between x = [1,10] and y = [20, 50]:
opt.minimize(z, var0, bounds = [(1, 10), (20, 50)])
In [6]:
opt.minimize(z,var0,bounds=[(1,10),(20,50)])
Out[6]:
fun: 401.0
hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>
jac: array([ 2.00000159, 40.00000331])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 6
nit: 1
status: 0
success: True
x: array([ 1., 20.])
We can also add equality or inequality constraints just as easily. For example, let's add the constraints that:
$$x^2 + 2y \geqslant 10$$
$$x - 12y^3 = 50$$
For this example, to focus on constructing and adding constraints, we'll remove the bounds on x and y. Remember: we have to specify each constraint in a separate function, and we stack those constraints up in a tuple of dictionaries.
#first constraint function
def conf1(var):
    x = var[0]
    y = var[1]
    return x**2 + 2*y - 10

#second constraint function
def conf2(var):
    x = var[0]
    y = var[1]
    return x - 12*y**3 - 50

#construct constraints in dictionaries
con1 = {'type' : 'ineq' , 'fun' : conf1}
con2 = {'type' : 'eq' , 'fun' : conf2}

#optimization
sol = opt.minimize(z, var0, constraints = (con1, con2)) #I didn't specify the method; SLSQP is used by default for constrained problems
If you want to analyze the results a little:
x, y = sol.x
print(sol.fun, x**2 + 2*y >= 10, round(x - 12*y**3, 0) == 50)
Those second and third terms in the print statement look goofy, but they are your two constraints written as Boolean comparisons; if your optimization satisfied your constraints (within the precision of floating point arithmetic), both should print "True."
In [7]:
def conf1(var):
    x = var[0]
    y = var[1]
    return x**2 + 2*y - 10
def conf2(var):
    x = var[0]
    y = var[1]
    return x - 12*y**3 - 50
con1 = {'type':'ineq', 'fun':conf1}
con2 = {'type':'eq', 'fun':conf2}
sol = opt.minimize(z, var0, constraints=(con1, con2))
x, y = sol.x
print(sol.fun, x**2 + 2*y >= 10, round(x - 12*y**3, 0) == 50)
16.008462000638794 True True
Finally, you often will encounter an optimization problem wherein you need to minimize an objective function by varying a set of parameters, but you also need to pass additional information to the function. As an example, let’s use the following function:
$$q(x,y,a,b) = ax^2 + by^2 + x - y$$
In this particular example, a and b are coefficients, while x and y are the true "independent variables" that we are looking to optimize this function with respect to; however, our solution will also generally depend on the values of a and b, and we may need to consider the solution for multiple values of a and b. To handle this type of function, we would construct the following objective:
def q(var, a, b):
    x = var[0]
    y = var[1]
    return a*x**2 + b*y**2 + x - y
In this form, we include the variables that we want to minimize with respect to (x,y) in the vector input var; they still need to be passed to the objective function as elements in a single collection. In contrast, we pass a and b as additional function arguments that will not be varied by the optimization routine. opt.minimize will only work directly on var; to pass extra arguments, we use the args keyword. Here, I'm solving the problem for a = 1 and b = 3:
opt.minimize(q, var0, args = (1, 3))
It might not yet be apparent why you'd want to do this, but keep working on optimization problems for a while, and I'm sure you'll find some examples (one small illustration follows the next cell). I rarely optimize a function that doesn't use this convention these days. In fact, the next regression example uses this approach as a demonstration of best practice for working with local variables inside of a function scope.
In [ ]:
def q(var, a, b):
    x = var[0]
    y = var[1]
    return a*x**2 + b*y**2 + x - y
opt.minimize(q, var0, args=(1, 3))
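To make the payoff of the args keyword a bit more concrete, here is a hypothetical sketch that re-solves the same objective for several coefficient sets without rewriting q; the (a, b) pairs below are arbitrary and chosen purely for illustration.
import scipy.optimize as opt

def q(var, a, b):
    x = var[0]
    y = var[1]
    return a*x**2 + b*y**2 + x - y

var0 = [1, 1]
for a, b in [(1, 3), (2, 5), (10, 10)]: #arbitrary coefficient pairs, for demonstration only
    sol = opt.minimize(q, var0, args=(a, b))
    print(a, b, sol.x, sol.fun)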
You can build almost infinitely from this basic framework of optimizing multivariate functions. You can easily extend to optimize with respect to 3, 4, 5, or 15 parameters; however, just be aware that the more parameters you have, the more difficult it is to perform a meaningful optimization.
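For example, a toy sketch of the same pattern with three decision variables (the function and its minimum location are made up purely for illustration):
import scipy.optimize as opt

def g(var):
    x, y, z = var #unpack all three decision variables from the single collection
    return (x - 1)**2 + (y + 2)**2 + (z - 3)**2

opt.minimize(g, [0, 0, 0]) #analytical minimum is at (1, -2, 3)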
Next, we’ll cover a very, very common example of optimization that most of us have used before, even if we aren’t necessarily aware of how it works: least squares regression.
Least Squares
The basic problem should be familiar to everyone. You have a set of data, and you want to create a model that describes data well enough that it is reasonably predictive. Here’s a concrete example that is relevant to anyone working with biological processes: the kinetics of an enzyme catalyzed reaction. We know that the rate of an enzyme-catalyzed reaction will generally depend on the concentration of the reactant, let’s call it the substrate. We perform an experiment by varying the concentration of our substrate ($C_S$) and measuring the resultant reaction rate. I’ve loaded some typical laboratory data into the cell below for visualization; go ahead and plot them using a scatter plot to visualize the relationship between rate and substrate concentration in this system.
In [8]:
CS = np.array([0, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5]) #mmol/L
rate = np.array([0, 0.017537467, 0.030941975, 0.080327165, 0.1643835, 0.26569368, 0.745442547, 1.295792328, 2.419014706, 4.0402125, 5.534947297, 5.127217742, 7.074911496]) #mmol/L/min
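A minimal sketch of the requested scatter plot, using the CS and rate arrays defined above (the axis labels and marker choices are just one option):
import matplotlib.pyplot as plt

plt.scatter(CS, rate, color='black')
plt.xlabel('Substrate concentration, CS (mmol/L)')
plt.ylabel('Rate (mmol/L/min)')
plt.show()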
Looking at the data, I notice two things. First, the data are not perfect. This is typical of experimental data! Second, I can see that, initially, increasing the substrate concentration increases the rate of reaction, but this only occurs up to a certain point. This is referred to as “saturation kinetics” wherein the rate eventually becomes insensitive to additional increases in substrate concentration. This data set looks typical of a Michaelis-Menten mechanism for an enzyme-catalyzed reaction, where the rate is usually described by:
$$rate = \frac{V_{max}C_S}{K_m + C_S}$$
Inspecting the equation, it’s easy to see why the reaction rate becomes zero order in $C_S$ at high concentrations. In this limit, $C_S$ is the dominant term in the denominator, and the rate approaches $V_{max}$, which should give you a hint as to why this parameter is called “$V_{max}$”.
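Writing out the high-concentration limit makes this explicit:
$$\lim_{C_S \to \infty} \frac{V_{max}C_S}{K_m + C_S} = V_{max}$$
while at low concentrations ($C_S \ll K_m$), the rate is approximately first order in substrate, $rate \approx (V_{max}/K_m)C_S$.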
It would be useful to my group if I could figure out the values for $V_{max}$ and $K_m$; once we know their values, we can predict the rate of that reaction for any substrate concentration, which will allow us to scale this reaction and implement it in any system that we need to. But how do we do that? Looking at the graph, we can see that we have 13 different rate vs. $C_S$ pairs, all of which are noisy and subject to measurement uncertainty. We didn't do anything wrong per se; this is just typical of a laboratory experiment, where there are many sources of error. Our first instinct might be to pick two rate vs. $C_S$ pairs and plug them into the Michaelis-Menten rate equation (I know because 70% of my CEN 587 students do this on their exams). That would give us two equations and two unknowns ($V_{max}$ and $K_m$), which we could easily solve using a root finding algorithm. Unfortunately, since none of our measurements are exactly correct, we will get different answers for every ($C_S$, rate) pair that we pick. Our system is over-determined, and it is incorrect (and honestly a bit lazy) to try to solve it as a system of equations. Instead, we have to solve it by minimizing the error between our model and our measurements. Usually, we do this by minimizing some form of the residual sum of squares, which quantifies the level of agreement between measured values and model predictions. The most basic version is defined based only on the difference between the model and measurement:
$$\textrm{SSE} = \sum_{i = 1}^n\left(\textrm{measurement}_i - \textrm{model}_i\right)^2$$
But this is really just another example of minimizing an objective function, which we already know how to do! In this case, our objective function is the residual sum of squares, and minimizing it by varying the model parameters is the process of least squares regression. We use this method all the time! Every time you use a trendline in Excel, for example, you are performing a least squares regression (whether you knew it or not).
Now, let’s look at how to formulate this problem in Python and solve it using the optimization tools in Scipy.
Least Squares Regression of the Michaelis-Menten Model using Experimental Data
The "saturation kinetics" that we observe in this system suggests a Michaelis-Menten mechanism, which we model as:
$$rate = \frac{V_{max}C_S}{K_m + C_S}$$
Next, we need to build an objective function that calculates the residual sum of squares between the model prediction and the experimental measurement over our set of substrate concentrations. In this case, our objective function will be a multivariate scalar function. It will accept the two parameters that we want to find optimum values for ($K_m$ and $V_{max}$) and it will return a single value that we wish to minimize, the residual sum of squares.
As in the first example of this worksheet (minimizing z(x,y)), when I construct this objective function, I will need to pass all of the parameters I'm looking to vary in a single array-like structure, so I have to pack $K_m$ and $V_{max}$ into a collection for this to work. Though it isn't necessary to do so, I often rename them once inside the function scope to make it more readable. Since I want that function to compute the error between model and measurement, I also need to access my experimental data inside of the function. Best practice is to work with local variables inside of a function (i.e., variables that are only defined inside of the function scope), so I add two extra arguments to my objective function: one representing substrate concentration and one representing the reaction rate.
def obj(par, C, r):
    Vmax = par[0]
    Km = par[1]
    model = (Vmax*C)/(Km + C)
    SSE = sum((r - model)**2)
    return SSE
Now that I have an objective function, I can use opt.minimize to minimize it. We have to provide initial guesses for our parameters as some sort of collection. Since we're working with numpy arrays everywhere else, I'll pass my initial guesses into the minimizer using a numpy array as well. I'll also use the args keyword to send my experimental data into the objective function.
par0 = np.array([1,1])
sol = opt.minimize(obj, par0, args = (CS, rate))
print(sol)
Technical Aside that is important but probably not super interesting to you: Technically, once I've loaded them into the Python workspace, CS and rate are global variables, so both are accessible inside of the function scope. That said, the best practice way to do this is to work only with local variables inside of the function and to pass experimental data as additional arguments. Since that is usually best practice, that's what I did here.
This is different from Matlab’s default behavior, which is to restrict function scope to local variables by default. In Matlab, I would need to either declare CS and rate as global variables (generally not recommended), nest the two functions somehow (generally not recommended), or pass CS and rate as arguments to the objective function (recommended).
There are a lot of reasons why global variable declarations can be problematic. Sometimes, for convenience, we leverage them, particularly in a language like Python and when working in a Jupyter notebook. But best practice is usually to work with local variables defined only within the scope of the particular function or subroutine you’re working on. Just be careful. I find it easy to be lazy about globals, particularly in Jupyter. End Aside
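To make the contrast concrete, here is a hypothetical sketch of the two approaches; the data arrays below are placeholders, not the experimental values used above.
import numpy as np
import scipy.optimize as opt

C_data = np.array([0.1, 0.5, 1.0, 2.0]) #placeholder concentrations, for illustration only
r_data = np.array([0.9, 2.9, 4.2, 5.3]) #placeholder rates, for illustration only

#Relies on globals: runs fine in a notebook, but the function silently depends on
#whatever C_data and r_data happen to be in the workspace at call time.
def obj_global(par):
    model = par[0]*C_data/(par[1] + C_data)
    return np.sum((r_data - model)**2)

#Local-variable version: the data the function uses is always passed in explicitly.
def obj_local(par, C, r):
    model = par[0]*C/(par[1] + C)
    return np.sum((r - model)**2)

opt.minimize(obj_global, [1, 1])
opt.minimize(obj_local, [1, 1], args=(C_data, r_data))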
In [9]:
def obj(par, C, r):
    Vmax = par[0]
    Km = par[1]
    model = (Vmax*C)/(Km + C)
    SSE = sum((r - model)**2)
    return SSE
par0=np.array([1,1])
sol=opt.minimize(obj,par0,args=(CS,rate))
print(sol)
fun: 1.109424939810448
hess_inv: array([[0.53640537, 0.08330392],
[0.08330392, 0.0200761 ]])
jac: array([-4.47034836e-07, 2.44379044e-06])
message: 'Optimization terminated successfully.'
nfev: 120
nit: 13
njev: 30
status: 0
success: True
x: array([7.20658891, 0.40925668])
Back to reality: It is straightforward to bound both of my parameters with opt.minimize:
opt.minimize(obj, [7,0.3], args = (CS, rate), bounds = [(6.8, 7.4), (0.1, 0.5)])
Where bounds are input as a list of tuples. The first tuple is the (lower_bound, upper_bound) on argument 1 ($V_{max}$), the second is the (lower_bound, upper_bound) on argument 2 ($K_m$), etc.
In [10]:
opt.minimize(obj,[7,0.3],args=(CS,rate),bounds=[(6.8,7.4),(0.1,0.5)])
Out[10]:
fun: 1.1094249398111704
hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>
jac: array([1.50990331e-06, 1.82076576e-06])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 36
nit: 10
status: 0
success: True
x: array([7.20658997, 0.40925684])
This is an implementation of least squares error minimization, which is generally what we mean when we talk about "nonlinear regression." If you scan the documentation for scipy.optimize, you'll find two specific routines cited for nonlinear regression; both are good and worth exploring. The first is:
opt.least_squares
The second is:
opt.curve_fit
opt.least_squares is a very nice method that is optimized for nonlinear least squares; however, it requires you to provide a vector function from which it generates the scalar objective function for least squares on its own. We'll save consideration of opt.least_squares for the next assignment, where we will introduce vector functions and Python subroutines that are based on vector functions.
The second one, opt.curve_fit, is really great for simple model fitting (exactly like what we are doing with the Michaelis-Menten rate law here). This one has a pretty unique syntax relative to what we find elsewhere in scipy.optimize, so we'll go ahead and cover it here since it doesn't require us to work with vector functions.
Instead of writing a function to return your objective, you provide opt.curve_fit with a function that returns your model prediction only. opt.curve_fit then uses that model equation to generate the least squares objective function automatically. So that it can do this, you'll also provide opt.curve_fit with your raw "xdata" and "ydata"; in our case, these are substrate concentration and rate, respectively. The basic syntax is straightforward:
opt.curve_fit(function_name, xdata, ydata)
Though you can add a number of options as usual. For our specific problem, "xdata" = CS, "ydata" = rate, and "function" will be a function that returns the Michaelis-Menten rate at specific values of CS, $V_{max}$, and $K_m$.
Important: when you use opt.curve_fit, the function that you create must accept your independent variable, in this case CS, as its first argument. Any parameters you are looking to fit are passed as subsequent arguments. This is the opposite of the opt.minimize convention; you'll just have to get used to it if you want to use opt.curve_fit. Neither is really better than the other; it just depends on which you find more intuitive and better suited to your problem.
def f(C, Vmax, Km):
    return (Vmax*C)/(Km + C)
sol = opt.curve_fit(f, CS, rate, bounds = [(6.8, 0.1), (7.4, 0.5)] )
print(sol)
I've also added some bounds to this one to illustrate that the general syntax looks very similar to opt.minimize in that it accepts a list of tuples; however, in this case, the first tuple is a set of lower bounds for all variables, and the second tuple is a set of upper bounds for all variables. That is different from the opt.minimize syntax, so be careful!
In [11]:
def f(C, Vmax, Km):
    return (Vmax*C)/(Km + C)
sol = opt.curve_fit(f, CS, rate, bounds = [(6.8, 0.1), (7.4, 0.5)] )
print(sol)
(array([7.20658895, 0.40925666]), array([[0.10782994, 0.01660871],
[0.01660871, 0.00407721]]))
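One nice way to sanity check the regression is to overlay the fitted model on the raw data. A minimal sketch using the f, CS, rate, and sol defined above (the plotting choices are just one option):
import numpy as np
import matplotlib.pyplot as plt

popt, pcov = sol #opt.curve_fit returns (best-fit parameters, covariance matrix)
Cfine = np.linspace(0, 5, 200) #smooth grid of substrate concentrations for the model curve
plt.scatter(CS, rate, color='black', label='data')
plt.plot(Cfine, f(Cfine, *popt), color='red', label='Michaelis-Menten fit')
plt.xlabel('CS (mmol/L)')
plt.ylabel('rate (mmol/L/min)')
plt.legend()
plt.show()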
Optimization is a very complex topic, and we have only presented a basic overview of some of the tools available here. Still, it should give you a feel for the methods available to you, give you some appreciation for how they work, and provide you enough background to select appropriate algorithms and initial guesses, as well as bound your system in various ways.
Below are a few example problems that you can work through using the methods in this exercise.
Note: The constrained optimization problems are actually linear programming problems I took from Elementary Linear Algebra by Ron Larson. If I actually had these problems in my research, I would probably use linear programming algorithms since they are optimized for linear systems, but you should be able to solve them just for practice using the optimization routines presented here, even though they aren't specifically designed to solve linear problems.
Problem Set
Problem 1
Download the data in the Kinetics1.csv file; this is a single experiment wherein you measure the concentration of a component in your reaction chamber as a function of time. Time is in minutes, and concentration is in mol/L.
You know that the reaction is first order in the concentration of this species, and you expect its concentration should be well-described by the following model:
$$C(t) = C_0e^{-kt}$$
Where $C_0$ is the initial concentration of this component, and k is a rate constant. Based on your data, estimate the values of these two parameters. If you don't immediately recognize this type of problem, it is a nonlinear least squares regression; you should use the residual sum of squares as your objective function and allow your optimization routine to vary $C_0$ and k to find the minimum SSE.
In [20]:
t=np.array([0,0.5,0.75,1,2,3,5,7,10,20,50,100])
B=np.array([7.96575,7.597255399,7.253950907,6.732880211,7.135464942,6.807378856,5.696048618,4.166668694,3.822641307,2.471268787,0.564267474,0.040453969])
def C(t,A,k):
    return A*np.exp(-k*t)
sol=opt.curve_fit(C,t,B)
print(sol)
(array([7.75223579, 0.06579841]), array([[3.50731734e-02, 6.70000227e-04],
[6.70000227e-04, 3.66196905e-05]]))
C:\Users\15814\anaconda3\lib\site-packages\ipykernel_launcher.py:4: RuntimeWarning: overflow encountered in exp
after removing the cwd from sys.path.
Problem 2
Find the values of $x_1$ and $x_2$ that minimize the Rosenbrock function:
$$f(x) = \frac{1}{2} \left[100\left(x_2 - x_1^2\right)^2 + \left(1 - x_1\right)^2\right]$$
This is an unconstrained minimization of a multivariate function.
In [33]:
def f(var):
    x = var[0]
    y = var[1]
    return 1/2*(100*(y - x**2)**2 + (1 - x)**2) #use parentheses here; wrapping this expression in square brackets creates a list and raises "TypeError: can't multiply sequence by non-int of type 'float'"
var0 = [10, 50]
opt.minimize(f, var0) #the known minimum of the Rosenbrock function is at (1, 1)
Problem 3
Find the values of x and y that minimize w:
$$w = 3x + 2y$$
subject to the constraints:
$$2x + y \geqslant 6$$
$$x + y \geqslant 4$$
Where x and y are both greater than or equal to zero.
This is constrained minimization of a multivariate objective function.
In [31]:
def W(var):
    x = var[0]
    y = var[1]
    return 3*x + 2*y
def conf1(var):
    x = var[0]
    y = var[1]
    return 2*x + y - 6
def conf2(var):
    x = var[0]
    y = var[1]
    return x + y - 4
def conf3(var):
    return var[0] #x >= 0
def conf4(var):
    return var[1] #y >= 0
con1 = {'type':'ineq', 'fun':conf1}
con2 = {'type':'ineq', 'fun':conf2}
con3 = {'type':'ineq', 'fun':conf3}
con4 = {'type':'ineq', 'fun':conf4}
var0 = [1, 1]
sol = opt.minimize(W, var0, constraints=(con1, con2, con3, con4))
print(sol.x)
[2. 2.]
Problem 4
Find the values of x, y, and z that minimize w:
$$w = 2x + 10y + 8z$$
subject to the constraints:
$$x + y + z \geqslant 6$$
$$y + 2z \geqslant 8$$
$$-x + 2y + 2z \geqslant 4$$
Where x, y, and z are all greater than or equal to zero.
This is constrained minimization of a multivariate objective function.
In [30]:
def w(var):
    x = var[0]
    y = var[1]
    z = var[2]
    return 2*x + 10*y + 8*z
def conf1(var):
    x = var[0]
    y = var[1]
    z = var[2]
    return x + y + z - 6
def conf2(var):
    x = var[0]
    y = var[1]
    z = var[2]
    return y + 2*z - 8
def conf3(var):
    x = var[0]
    y = var[1]
    z = var[2]
    return -x + 2*y + 2*z - 4
def conf4(var):
    return var[0] #x >= 0
def conf5(var):
    return var[1] #y >= 0
def conf6(var):
    return var[2] #z >= 0
con1 = {'type':'ineq', 'fun':conf1}
con2 = {'type':'ineq', 'fun':conf2}
con3 = {'type':'ineq', 'fun':conf3}
con4 = {'type':'ineq', 'fun':conf4}
con5 = {'type':'ineq', 'fun':conf5}
con6 = {'type':'ineq', 'fun':conf6}
var0 = [1, 1, 1]
sol = opt.minimize(w, var0, constraints=(con1, con2, con3, con4, con5, con6))
print(sol.x)
[2.00000000e+00 1.08357767e-13 4.00000000e+00]
Problem 5
A small petroleum company owns two refineries. Refinery 1 costs 20,000 per day to operate, and it can produce 400 barrels of high-grade oil, 300 barrels of medium-grade oil, and 200 barrels of low-grade oil each day. Refinery 2 is newer and more modern. It costs 25,000 per day to operate, and it can produce 300 barrels of high-grade oil, 400 barrels of medium-grade oil, and 500 barrels of low-grade oil each day.
The company has orders totaling 25,000 barrels of high-grade oil, 27,000 barrels of medium-grade oil, and 30,000 barrels of low-grade oil. How many days should it run each refinery to minimize its costs and still refine enough oil to meet its orders?
This is a constrained minimization of a multivariate objective function
In [35]:
def cost(var):
    x = var[0]
    y = var[1]
    return 20000*x + 25000*y
def conf1(var):
    x = var[0]
    y = var[1]
    return 400*x + 300*y - 25000
def conf2(var):
    x = var[0]
    y = var[1]
    return 300*x + 400*y - 27000
def conf3(var):
    x = var[0]
    y = var[1]
    return 200*x + 500*y - 30000
def conf4(var):
    return var[0] #days on refinery 1 must be >= 0
def conf5(var):
    return var[1] #days on refinery 2 must be >= 0
con1 = {'type':'ineq', 'fun':conf1}
con2 = {'type':'ineq', 'fun':conf2}
con3 = {'type':'ineq', 'fun':conf3}
con4 = {'type':'ineq', 'fun':conf4}
con5 = {'type':'ineq', 'fun':conf5}
var0 = [1, 1]
sol = opt.minimize(cost, var0, constraints=(con1, con2, con3, con4, con5))
print(sol.x)
[24.99865318 49.99831673]
Problem 6
Find the values of p and q that minimize the Betts function:
$$f(p,q) = 0.01p^2 + q^2 - 100$$
Subject to the following bounds:
$$2 \leqslant p \leqslant 50$$
$$-50 \leqslant q \leqslant 50$$
And the following inequality constraint
$$10p - q \geqslant 10$$
This is a constrained optimization of a nonlinear multivariate function
In [32]:
def w(var):
    p = var[0]
    q = var[1]
    return 0.01*p**2 + q**2 - 100
def conf1(var):
    p = var[0]
    return p - 2
def conf2(var):
    p = var[0]
    return 50 - p
def conf3(var):
    q = var[1]
    return q + 50
def conf4(var):
    q = var[1]
    return 50 - q
def conf5(var):
    p = var[0]
    q = var[1]
    return 10*p - q - 10
con1 = {'type':'ineq', 'fun':conf1}
con2 = {'type':'ineq', 'fun':conf2}
con3 = {'type':'ineq', 'fun':conf3}
con4 = {'type':'ineq', 'fun':conf4}
con5 = {'type':'ineq', 'fun':conf5}
var0 = [1, 1]
sol = opt.minimize(w, var0, constraints=(con1, con2, con3, con4, con5))
print(sol.x)
[ 2.0000000e+00 -5.6808484e-08]
Problem 7
Repeat the above parameter estimation for the first order kinetic model:
$$C(t) = C_0e^{-kt}$$
Using the data set in Kinetics2.csv, which shows results from 10 independent replicate experiments.
In [ ]: