University of Liverpool – Question 1: (0 points)
Problems that occur in real life may have objectives that do not naturally complement each other, or even that directly oppose each other.
For example:
a car manufacturer wants to design a new model to be as comfortable as possible, while at the same time as cheap as possible to make;
a healthcare provider wants to create a new ‘early detection’ protocol that increases the number of illnesses detected using the fewest number of doctors;
a power plant should be created to produce the maximum amount of energy whilst at the same being as environmentally friendly as possible and being as pleasing to the eye as possible.
Not only are many of these objectives contradictory (the car manufacturer can make a very cheap car by sacrificing comfort), but they may be measured in entirely different units. For some objectives we may not even have tangible units, e.g. the aesthetic quality of the power plant.
We will focus our attention on multi-objective problems with two objectives, which may also be called bi-criterion problems. The mathematical ideas we will use to address these work just as well with more dimensions, but the calculations become more complicated.
Question 2: (0 points)
For an unconstrained optimisation problem, the vector 𝐱 of decision variables lies within some space we will now denote by 𝑋.
When we add constraints, these define the feasible region F, which is a subset of 𝑋. The space 𝑋 is called decision space.
For a general multi-objective problem, our aim is to optimise the vector of objective functions
(𝑧₁(𝐱), 𝑧₂(𝐱), …, 𝑧ₖ(𝐱))
by varying 𝐱 within F.
When trying to establish what ‘optimal’ means in this context, it is helpful to think of (𝑧₁, …, 𝑧ₖ) as co-ordinates of points in a separate space 𝑌. The space 𝑌 is called objective space.
We can also speak of the feasible region in objective space, which is {(𝑧₁(𝐱), 𝑧₂(𝐱), …, 𝑧ₖ(𝐱)) : 𝐱 ∈ F} ⊆ 𝑌.
Consider the bi-criterion problem
maximise: {2𝑥 − 𝑦, 𝑥 + 𝑦}
subject to
𝑥 ≤ 7, 2𝑥 + 𝑦 ≤ 16, −2𝑥 + 5𝑦 ≤ 20,
𝑥 + 3𝑦 ≥ 7, 2𝑥 + 𝑦 ≥ 4, 𝑥, 𝑦 ≥ 0.
Since all the constraints are linear, we can find the feasible region in decision space by solving pairs of constraints simultaneously to find the co-ordinates of the vertices.
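To make this concrete, the pairwise-intersection step can be sketched in Python. This is our own illustration (the helper name `intersect` is not from the notes); exact rational arithmetic avoids floating-point error:

```python
from fractions import Fraction

def intersect(c1, c2):
    """Solve a1*x + b1*y = d1 and a2*x + b2*y = d2, each given as (a, b, d)."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # the boundary lines are parallel: no unique vertex
    x = Fraction(d1 * b2 - d2 * b1, det)
    y = Fraction(a1 * d2 - a2 * d1, det)
    return (x, y)

# Vertex where 2x + y = 16 meets -2x + 5y = 20 (vertex D of the example):
print(intersect((2, 1, 16), (-2, 5, 20)))  # coordinates equal to (5, 6)
```

Each candidate vertex found this way still has to be checked against the remaining constraints before it is accepted as a corner of the feasible region.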
We write 𝑧₁(𝑥, 𝑦) = 2𝑥 − 𝑦 and 𝑧₂(𝑥, 𝑦) = 𝑥 + 𝑦 for the two objectives.
As these are also linear functions of the decision variables, the feasible region in objective space can also be identified just by locating its vertices.
The vertex 𝐴 = (1, 2) in decision space maps to a vertex (𝑧₁(𝐴), 𝑧₂(𝐴)) = (0, 3) in objective space, which we will also label 𝐴.
Similarly,
the image of 𝐵 = (7, 0) is (14, 7),
the image of 𝐶 = (7, 2) is (12, 9),
the image of 𝐷 = (5, 6) is (4, 11), and
the image of 𝐸 = (0, 4) is (−4, 4).
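The vertex images above can be checked with a few lines of Python (a sketch; the function name `to_objective_space` is ours):

```python
def to_objective_space(x, y):
    """Map a decision-space point to (z1, z2) = (2x - y, x + y)."""
    return (2 * x - y, x + y)

vertices = {"A": (1, 2), "B": (7, 0), "C": (7, 2), "D": (5, 6), "E": (0, 4)}
images = {name: to_objective_space(*point) for name, point in vertices.items()}

for name, image in images.items():
    print(name, "->", image)
# A -> (0, 3), B -> (14, 7), C -> (12, 9), D -> (4, 11), E -> (-4, 4)
```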
Question 3: (0 points)
A bi-criterion problem can be written as
max {𝑧₁(𝐱), 𝑧₂(𝐱)}, 𝐱 ∈ F.
In many cases, there will not be a single choice of 𝐱 ∈ F that maximises both 𝑧₁(𝐱) and 𝑧₂(𝐱) simultaneously.
So, without more information, there may not be a ‘best’ solution. That said, there are some solutions that are clearly better than others.
Suppose that 𝐱, 𝐱′ ∈ F are such that either
𝑧₁(𝐱) ≥ 𝑧₁(𝐱′) and 𝑧₂(𝐱) > 𝑧₂(𝐱′),
or
𝑧₁(𝐱) > 𝑧₁(𝐱′) and 𝑧₂(𝐱) ≥ 𝑧₂(𝐱′).
That is, 𝐱 gives a strictly better result than 𝐱′ in at least one objective, and an equal or better result in the other. Then 𝐱 is ‘better’ than 𝐱′. We say that 𝐱 dominates 𝐱′, or that 𝐱′ is inferior to 𝐱.
The non-inferior set (NIS) for a problem is the set of points in the feasible region that are not inferior to any other point in the feasible region.
Points in this set may also be referred to as Pareto-optimal.
Non-inferior points are straightforward to identify in objective space. In some situations
this may be all the information we need for our purposes.
On the other hand, if we need to we can work back from the points in objective space to find the corresponding points in decision space. Doing so is easy enough when all the functions involved are linear, but may be more difficult in the general case.
For the bi-criterion problem we already looked at, the non-inferior set in objective space consists of the line segment between vertices 𝐵 and 𝐶, together with the line segment between 𝐶 and 𝐷. For each point in this section of the boundary of the feasible region, there is no other point in the feasible region where both the 𝑧₁ and 𝑧₂ co-ordinates are larger.
The corresponding points in decision space are the line segment between 𝐵 and 𝐶 together with the line segment between 𝐶 and 𝐷.
In fact, in this case we didn’t need to draw the feasible region in order to identify the non-inferior set. Because everything is linear, we can again focus our attention on the vertices. The line segments between non-inferior vertices must also be non-inferior points in this situation.
A simple way to see which vertices are in the non-inferior set is to put the values we calculated earlier into a table.
𝐴: (𝑧₁, 𝑧₂) = (0, 3), inferior to 𝐵, 𝐶, 𝐷
𝐵: (𝑧₁, 𝑧₂) = (14, 7), non-inferior
𝐶: (𝑧₁, 𝑧₂) = (12, 9), non-inferior
𝐷: (𝑧₁, 𝑧₂) = (4, 11), non-inferior
𝐸: (𝑧₁, 𝑧₂) = (−4, 4), inferior to 𝐵, 𝐶, 𝐷
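The same comparison can be automated. Here is a minimal Pareto filter in Python (our own sketch, for a maximisation problem, applied to the vertex values from the table):

```python
def dominates(p, q):
    """True if p is at least as good as q in every objective and strictly better in one."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

points = {"A": (0, 3), "B": (14, 7), "C": (12, 9), "D": (4, 11), "E": (-4, 4)}
non_inferior = {name for name, p in points.items()
                if not any(dominates(q, p) for other, q in points.items() if other != name)}
print(sorted(non_inferior))  # ['B', 'C', 'D']
```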
These graphs show two examples of feasible regions in objective space, with the non-inferior sets marked in black.
The second example demonstrates that the non-inferior set may not be connected in general. However, if all the functions involved are linear then the feasible region will be convex, which will mean the non-inferior set is connected.
Question 4: (0 points)
Any point in the NIS can potentially be seen as ‘optimal’ in some context. Additional information not contained within the problem is usually needed to determine a single best answer.
In this situation, the role of a mathematician will typically be to analyse the problem as far as possible, and present the range of possible solutions. It will then be the responsibility of a decision-maker (e.g. a company CEO) to supply the additional information required and determine which solution to use.
The ideal and nadir solutions are points in objective space (i.e. 𝐳-coordinates) with specific properties. They are defined as follows.
Suppose a multi-objective maximisation problem has feasible region F ⊆ 𝑋 and NIS 𝒩 ⊆ F. Then the ideal solution 𝐳⋆ and the nadir solution ^𝐳 have the following components:
𝑧ᵢ⋆ = max{𝑧ᵢ(𝐱) : 𝐱 ∈ F}, ^𝑧ᵢ = min{𝑧ᵢ(𝐱) : 𝐱 ∈ 𝒩}.
In other words, the ideal solution maximises each component over the given feasible region, regardless of whether or not the combined solution is feasible. In the linear case, unless there is a unique solution to a multi-objective problem, the ideal solution is not feasible. The ideal solution provides a component-wise upper bound on the NIS.
Similarly, the nadir solution provides a component-wise lower bound on the NIS. That is, any point 𝐳 in the image of 𝒩 in objective space satisfies
^𝐳 ≤ 𝐳 ≤ 𝐳⋆,
so that the decision maker knows the bounds on the optimal solution.
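For a linear problem, each componentwise maximum over F is attained at a vertex, so both bounds can be read off from a finite vertex list. A small Python sketch using the objective-space vertices of our running example (variable names are ours):

```python
# Objective-space images of the feasible vertices, and the non-inferior subset.
feasible_vertices = {"A": (0, 3), "B": (14, 7), "C": (12, 9), "D": (4, 11), "E": (-4, 4)}
nis_vertices = {name: feasible_vertices[name] for name in ("B", "C", "D")}

# Ideal: componentwise max over the feasible region; nadir: componentwise min over the NIS.
ideal = tuple(max(p[i] for p in feasible_vertices.values()) for i in range(2))
nadir = tuple(min(p[i] for p in nis_vertices.values()) for i in range(2))
print(ideal)  # (14, 11)
print(nadir)  # (4, 7)
```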
This graph shows the ideal (outside the feasible region) and nadir (inside the feasible region) points in objective space for a convex feasible region. However, it should be noted that even though the nadir solution is feasible in this diagram, there is generally no expectation that the nadir solution should be feasible.
For our linear example, we can read the co-ordinates of the ideal and nadir points directly from the table of values we constructed.
𝐴: (𝑧₁, 𝑧₂) = (0, 3), inferior to 𝐵, 𝐶, 𝐷
𝐵: (𝑧₁, 𝑧₂) = (14, 7), non-inferior
𝐶: (𝑧₁, 𝑧₂) = (12, 9), non-inferior
𝐷: (𝑧₁, 𝑧₂) = (4, 11), non-inferior
𝐸: (𝑧₁, 𝑧₂) = (−4, 4), inferior to 𝐵, 𝐶, 𝐷
The highest 𝑧₁ value that occurs is 14, and the highest 𝑧₂ value is 11, so the ideal point is
𝐳⋆ = (14, 11).
From the three vertices in the NIS (𝐵, 𝐶 and 𝐷), the lowest 𝑧₁ value is 4 and the lowest 𝑧₂ value is 7, so the nadir point is ^𝐳 = (4, 7).
Question 5: (0 points)
For a general multi-objective problem, there is no reason why the objectives should be of equal importance to the decision maker.
The weighting method assumes that it is possible to meaningfully quantify the relative importance of the different objectives.
If we knew in what proportions to prioritise the objectives then we could combine them into a single objective. Since we do not have this information (it is up to the decision maker to judge this), we find the best solution for each possible case.
For a bi-criterion problem, to incorporate the relative values of the two objectives, we introduce a weighting parameter 𝛾 ∈ [0, 1] and try to maximise
𝑍(𝛾) = (1 − 𝛾)𝑧₁ + 𝛾𝑧₂.
We will then present our optimal solution as a function of 𝛾.
In a real life scenario, the decision maker may then use non-modelled properties to make the final decision by choosing the value for 𝛾.
Notice that the set of points defined by
𝑍 = (1 − 𝛾)𝑧₁ + 𝛾𝑧₂ = const
is just a line in objective space: the value of 𝛾 determines its slope.
Let’s look at the same example as before, and apply the weighting method. The non-inferior set consists of the line segment between 𝐵 and 𝐶 and the line segment between 𝐶 and 𝐷.
𝐴: (𝑧₁, 𝑧₂) = (0, 3), inferior to 𝐵, 𝐶, 𝐷
𝐵: (𝑧₁, 𝑧₂) = (14, 7), non-inferior
𝐶: (𝑧₁, 𝑧₂) = (12, 9), non-inferior
𝐷: (𝑧₁, 𝑧₂) = (4, 11), non-inferior
𝐸: (𝑧₁, 𝑧₂) = (−4, 4), inferior to 𝐵, 𝐶, 𝐷
Since 𝑍(0) = 𝑧₁, and 𝑧₁ is maximised at 𝐵, 𝑍 is maximised at 𝐵 for 𝛾 = 0.
If 𝛾 increases by a small amount, 𝐵 remains the optimal solution for 𝑍 until 𝐶 becomes the optimal solution. This occurs for the value of 𝛾 that gives the same value for 𝑍 at 𝐵 and 𝐶. That is, 𝛾 satisfies
14(1 − 𝛾) + 7𝛾 = 12(1 − 𝛾) + 9𝛾 .
The solution to this equation is 𝛾 = 1/2. So the optimal solution at 𝛾 = 1/2 can be
either 𝐵 or 𝐶, since they give the same answer.
For values of 𝛾 slightly larger than 1/2, the optimal solution will occur at 𝐶, until 𝛾
is so large that the values of 𝑍 at 𝐶 and 𝐷 are equal. We solve 12(1 − 𝛾) + 9𝛾 = 4(1 − 𝛾) + 11𝛾
to find that this occurs at 𝛾 = 4/5. Hence overall the optimal solution is
𝐵, 𝛾 ∈ [0, 1/2],
𝐶, 𝛾 ∈ [1/2, 4/5],
𝐷, 𝛾 ∈ [4/5, 1],
where we note that if two points are simultaneously optimal for some 𝛾 (e.g. 𝛾 = 1/2 above), then 𝑍 is optimised along the whole line segment connecting them, provided this segment is part of the NIS.
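The breakpoint calculations can be done mechanically. Setting 𝑍(𝛾) equal at two adjacent NIS vertices p and q and solving for 𝛾 gives 𝛾 = (p₁ − q₁)/((p₁ − q₁) + (q₂ − p₂)). A short Python sketch with exact fractions (the helper name `breakpoint_gamma` is ours):

```python
from fractions import Fraction

def Z(gamma, z):
    """Weighted objective (1 - gamma)*z1 + gamma*z2 at an objective-space point z."""
    return (1 - gamma) * z[0] + gamma * z[1]

def breakpoint_gamma(p, q):
    """Value of gamma at which Z(gamma, p) = Z(gamma, q)."""
    num = p[0] - q[0]
    return Fraction(num, num + q[1] - p[1])

print(breakpoint_gamma((14, 7), (12, 9)))  # 1/2: B and C give equal Z
print(breakpoint_gamma((12, 9), (4, 11)))  # 4/5: C and D give equal Z
```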
Note that it is possible for a problem to be non-linear, but for a component of the NIS to be a line segment.
The weighting method may be extended to more than two objectives 𝑧₁, …, 𝑧ₖ by introducing weights 𝛾₁, …, 𝛾ₖ ∈ [0, 1] and maximising
𝑍 = 𝛾₁𝑧₁ + 𝛾₂𝑧₂ + ⋯ + 𝛾ₖ𝑧ₖ, where 𝛾₁ + 𝛾₂ + ⋯ + 𝛾ₖ = 1.
The latter equation implies that one of the 𝛾ᵢ can be determined from the values of the others. It is also the equation of a hyperplane. This is why the problem reduces to a one-parameter problem for two objectives, and to a (𝑘 − 1)-parameter problem for 𝑘 objectives.
Question 6: (0 points)
The 𝜀-constraints method approaches a multi-objective problem from a different viewpoint to the weighting method. Rather than comparing the importance of different objectives with each other, it operates on the basis that, for one or more of the objectives, there is a minimum acceptable value.
Consider a multi-objective problem
max {𝑧₁(𝐱), …, 𝑧ₖ(𝐱)}, 𝐱 ∈ F,
where the feasible region F is defined by
𝑔₁(𝐱) ≤ 𝑏₁, …, 𝑔ₘ(𝐱) ≤ 𝑏ₘ, 𝐱 ≥ 𝟎.
The 𝜀-constraints method selects an objective that we initially wanted to maximise, and instead assigns to it a lower bound.
Without loss of generality, take 𝑧ₖ(𝐱) and assign to it the lower bound 𝜀ₖ. Then the multi-objective problem reduces to
max {𝑧₁(𝐱), …, 𝑧ₖ₋₁(𝐱)}, 𝐱 ∈ F′,
where F′ is defined by
𝑔₁(𝐱) ≤ 𝑏₁, …, 𝑔ₘ(𝐱) ≤ 𝑏ₘ,
𝑧ₖ(𝐱) ≥ 𝜀ₖ, 𝐱 ≥ 𝟎.
We may repeat this process until the program becomes
max 𝑧₁(𝐱), 𝐱 ∈ F⁽ᵏ⁻¹⁾,
where F⁽ᵏ⁻¹⁾ is defined by
𝑔₁(𝐱) ≤ 𝑏₁, …, 𝑔ₘ(𝐱) ≤ 𝑏ₘ,
𝑧₂(𝐱) ≥ 𝜀₂, …, 𝑧ₖ(𝐱) ≥ 𝜀ₖ, 𝐱 ≥ 𝟎.
Consider the multi-objective problem
max {𝑥₁, −𝑥₁ + 𝑥₂, 2𝑥₁ + 𝑥₂}
subject to
𝑥₁ + 𝑥₂ ≥ 1, 𝑥₁ + 2𝑥₂ ≤ 5, 𝑥₁, 𝑥₂ ≥ 0.
The solution using this method depends on things outside of the model: the decision maker sets the values of 𝜀.
Say we take 𝑥₁ to be our remaining objective, and choose the values 𝜀₂ = 2, 𝜀₃ = 3. Then we solve the LP:
maximise: 𝑥₁
subject to
−𝑥₁ + 𝑥₂ ≥ 2, 2𝑥₁ + 𝑥₂ ≥ 3,
𝑥₁ + 𝑥₂ ≥ 1, 𝑥₁ + 2𝑥₂ ≤ 5, 𝑥₁, 𝑥₂ ≥ 0.
We won’t go through the details, but the optimal solution turns out to be (𝑥₁, 𝑥₂) = (1/3, 7/3), which has values in objective space
(𝑧₁, 𝑧₂, 𝑧₃) = (1/3, 2, 3).
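An LP this small can be checked by brute force: enumerate the intersections of pairs of constraint boundaries, keep the feasible ones, and take the best. This is a self-contained Python sketch of that idea (our own code, not a method from the notes; exact arithmetic via `fractions`):

```python
from fractions import Fraction
from itertools import combinations

# All constraints as a*x1 + b*x2 <= c (the >= constraints are negated):
constraints = [
    (1, -1, -2),   # -x1 + x2 >= 2
    (-2, -1, -3),  # 2x1 + x2 >= 3
    (-1, -1, -1),  # x1 + x2 >= 1
    (1, 2, 5),     # x1 + 2x2 <= 5
    (-1, 0, 0),    # x1 >= 0
    (0, -1, 0),    # x2 >= 0
]

def is_feasible(point):
    x, y = point
    return all(a * x + b * y <= c for a, b, c in constraints)

candidates = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundaries: no intersection point
    point = (Fraction(c1 * b2 - c2 * b1, det), Fraction(a1 * c2 - a2 * c1, det))
    if is_feasible(point):
        candidates.append(point)

best = max(candidates, key=lambda p: p[0])  # maximise z1 = x1
print(best)  # coordinates equal to (1/3, 7/3)
```

This only works because a linear program attains its optimum at a vertex of the feasible region; for larger problems one would use the simplex method instead.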
The obvious questions with this method are how best to choose the objective that survives the reduction procedure, and then how to choose the bounds 𝜀ᵢ.
The objective we choose is in no sense the ‘primary’ objective, as the choice of the 𝜀ᵢ could force this objective to be smaller than it would otherwise be. That is, when an objective becomes a constraint, we are no longer trying to maximise that objective; we have merely accepted a lower bound for it. Doing so also affects the values we can achieve for the other objectives.
Comparing the answer above to the ideal solution
(𝑧₁, 𝑧₂, 𝑧₃) = (5, 5/2, 10),
we see that each co-ordinate is less than its individually optimal value.
Of course, we can’t expect a multi-objective problem to achieve its ideal solution, but without further insight it is very difficult to know whether the solution this method produces is ‘good’ for the real-life problem we’re solving. Moreover, a poor choice of the 𝜀ᵢ could yield an infeasible LP.
Question 7: (0 points)
The Goal Programming method is a refinement of the 𝜀-constraint method that addresses its shortcomings. Instead of specifying a lower bound, we quantify the cost of an objective not meeting a particular value.
Consider again the multi-objective problem
max {𝑧₁(𝐱), …, 𝑧ₖ(𝐱)}, 𝐱 ∈ F,
where the feasible region F is defined by
𝑔₁(𝐱) ≤ 𝑏₁, …, 𝑔ₘ(𝐱) ≤ 𝑏ₘ
(and the usual non-negativity constraints are omitted).
We can choose a target value for each 𝑧ᵢ, called its goal. (This may be computed using the ideal or nadir solutions, or may be obtained from information outside the model.)
The goal for 𝑧ᵢ will be denoted by 𝐺ᵢ, and we aim to achieve
𝑧ᵢ(𝐱) = 𝐺ᵢ.
As usual, we add slack and surplus variables to represent by how much this target has been missed. So the constraint we actually add is
𝑧ᵢ(𝐱) = 𝐺ᵢ + 𝑑ᵢ⁺ − 𝑑ᵢ⁻.
The variable 𝑑ᵢ⁺ represents how much our solution for 𝑧ᵢ is greater than its goal, and 𝑑ᵢ⁻ is how much our solution for 𝑧ᵢ is less than its goal.
We can then model the penalties associated with overshooting or undershooting our goal for each objective; these will be denoted by 𝛼ᵢ⁺ and 𝛼ᵢ⁻ respectively.
Our problem is then reduced to a program with a single objective: to minimise the total penalty. That is,
minimise: Σᵢ (𝛼ᵢ⁺𝑑ᵢ⁺ + 𝛼ᵢ⁻𝑑ᵢ⁻)
subject to
𝑧₁(𝐱) = 𝐺₁ + 𝑑₁⁺ − 𝑑₁⁻, …, 𝑧ₖ(𝐱) = 𝐺ₖ + 𝑑ₖ⁺ − 𝑑ₖ⁻,
𝑔₁(𝐱) ≤ 𝑏₁, …, 𝑔ₘ(𝐱) ≤ 𝑏ₘ,
𝐱, 𝑑ᵢ⁺, 𝑑ᵢ⁻ ≥ 0.
Note that some sources give a slightly different formulation in the case that we only require a particular objective to be less than or equal to, or greater than or equal to, its goal. In this case we simply set the relevant penalty equal to zero. For example, suppose we have the goal 𝑧ᵢ ≥ 𝐺ᵢ. We may still write
𝑧ᵢ = 𝐺ᵢ + 𝑑ᵢ⁺ − 𝑑ᵢ⁻,
but we would set 𝛼ᵢ⁺ = 0, since there should not be a penalty associated with overshooting this goal.
Consider the bi-criterion problem
max {𝑥₁, 𝑥₁ + 𝑥₂}
subject to
𝑥₁ − 𝑥₂ ≤ 3, 𝑥₁ + 2𝑥₂ ≤ 5, 𝑥₁, 𝑥₂ ≥ 0.
Suppose the target for 𝑥₁ is 4, with no penalty for overshooting and a penalty of 2 for each unit of undershooting.
Suppose the target for 𝑥₁ + 𝑥₂ is 5, with a penalty of 1 for each unit of overshooting or undershooting.
The program becomes
minimise: 2𝑑₁⁻ + 𝑑₂⁺ + 𝑑₂⁻
subject to
𝑥₁ = 4 + 𝑑₁⁺ − 𝑑₁⁻,
𝑥₁ + 𝑥₂ = 5 + 𝑑₂⁺ − 𝑑₂⁻,
𝑥₁ − 𝑥₂ ≤ 3, 𝑥₁ + 2𝑥₂ ≤ 5,
𝑥₁, 𝑥₂, 𝑑ᵢ⁺, 𝑑ᵢ⁻ ≥ 0.
We solve this using our preferred method to get the solution (𝑥₁, 𝑥₂) = (11/3, 2/3), with an associated penalty for missing the goals of 4/3.
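The stated penalty is easy to verify by substituting the solution back into the goal constraints. A short Python check (our own sketch):

```python
from fractions import Fraction

x1, x2 = Fraction(11, 3), Fraction(2, 3)

# Deviation variables for the two goals (goal 4 for x1, goal 5 for x1 + x2):
d1_minus = max(Fraction(0), 4 - x1)         # undershoot of the x1 goal
d2_plus = max(Fraction(0), (x1 + x2) - 5)   # overshoot of the x1 + x2 goal
d2_minus = max(Fraction(0), 5 - (x1 + x2))  # undershoot of the x1 + x2 goal

penalty = 2 * d1_minus + 1 * d2_plus + 1 * d2_minus
print(penalty)  # 4/3
```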
Question 8: (1 point)
Consider the multi-objective problem
max {𝑧₁ = 2𝑥₁ − 𝑥₂, 𝑧₂ = 𝑥₁ + 3𝑥₂}
subject to
𝑥₁ + 2𝑥₂ ≤ 18, 𝑥₁, 𝑥₂ ≥ 0.
a. Sketch the feasible region in both decision space and objective space, and hence identify the non-inferior set.
Enter the coordinates of the ideal solution 𝐳⋆. Enter two numbers, separated by a comma.
__________
Enter the coordinates of the nadir solution ^𝐳. Enter two numbers, separated by a comma.
__________
b. Set
𝑍 = (1 − 𝛾)𝑧₁ + 𝛾𝑧₂,
where 𝛾 ∈ [0, 1].
For some values of 𝛾 only a single point of the non-inferior set is optimal, and for some values of 𝛾 more than one point of the non-inferior set is optimal. List the values of 𝛾 for which more than one point of the non-inferior set is optimal.
Enter these numbers exactly (i.e. as fractions, not decimal approximations), and if there is more than one, separate the numbers by commas (the order in which you enter them does not matter).
__________
c. Set 𝜀 = 7 and consider the 𝜀-constraint 𝑧 ≥ 𝜀. Find the coordinates of the optimal solution in both objective space and decision space. In both cases, enter the coordinates as two numbers separated by a comma.
Enter the coordinates in objective space: __________
Enter the coordinates in decision space: __________
Question 9: (1 point)
In real life, some objectives can’t be easily modelled by a decision variable.