ISE 562 Decision Theory Draft
Multiattribute Decision Analysis Notes (excerpt)
J.H. Smith 10/26/2021
Introduction
This technical note provides a brief introduction to multiattribute decision analysis. The aim is to present notation for defining formal multiattribute utility functions, which are used in multiattribute decision theory to compute the value of alternatives. Note that this is excerpted from one of my publications, so the references are correct, but I have removed citations not related to the purpose of this note. Dr. S.
Computing the Expected Utilities: Multiattribute Decision Analysis
Multiattribute decision analysis is a methodology for providing information to decision makers for comparing and selecting from among complex alternative systems in the presence of uncertainty. The methodology of multiattribute decision analysis is derived from the techniques of operations research, statistics, economics, mathematics, and psychology. Researchers from a wide range of disciplines have participated in the development of multiattribute decision analysis. The first books and papers on the subject appeared in the late 1960s (References 17, 18, 19, 20). The most practical, extensive, and complete presentation of an approach to multiattribute decision analysis is given in the 1976 work of Keeney and Raiffa (Reference 21). Although several approaches to multiattribute decision analysis have been developed (References 22, 23, 24, 25, 26), the method described here corresponds to an abbreviated form of that of Keeney and Raiffa.
Decision problems involving the preference ranking of alternatives, whatever the specific methodology, require two kinds of models (Figure 2). One is a “system model” and represents the alternative technology systems (including any uncertainties) under consideration. The other is a “value model” and represents the preference structure of the decision makers whose preferences are being assessed. The system model is used here to capture the technical performance of the alternatives while the value model provides the prioritization for evaluating different levels of performance according to decision maker preferences for the relative importance of different performance requirements.
The system model describes the alternative technology tasks available to the decision makers in terms of the risk and possible outcomes that could result from each. Risk arises from the uncertainty associated with each alternative technology task and from the uncertain development environment in which the alternative exists. The outcomes describe the possible consequences of the alternatives based on the expected, probabilistic, or actual realizations of the performance attributes.
[Figure 2 (schematic): the system model, driven by the alternative technology tasks and uncertainty, produces outcome descriptions; the value model converts these to outcome utilities, which feed back to the system model to produce alternative technology task rankings.]
Figure 2. Relationship Between System and Value Models
Because of the element of risk, the selection of a specific alternative technology task does not in general guarantee a specific outcome, but rather results in a probabilistic situation in which only one of several outcomes may occur. These outcomes, with their measurable performance attributes, then form the input to the value model.
The value model prioritizes the outcomes in terms of the preferences of the decision makers (technology managers) for the various outcomes. The measurable attributes of the outcomes are aggregated in a formula (called a multiattribute utility function) whose functional form and parameters are determined by the preference structure of the decision makers. The output of the value model is a multiattribute utility function value for each task outcome (outcome utility). These outcome utilities are entered back into the system model where the utility of an alternative technology task can be calculated by taking the expected utility value of the outcomes for each alternative technology task. These expected utilities for each alternative define a preference ranking over the alternatives, with greater expected utilities being more preferred.
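The expected-utility ranking described above can be sketched in a short computation. This is an illustrative sketch only: the alternative names, outcome probabilities, and outcome utilities below are invented, not taken from the text.

```python
# Each alternative technology task maps to its lottery over outcomes:
# (probability from the system model, outcome utility from the value model).
# All numbers here are invented for illustration.
alternatives = {
    "task_A": [(0.7, 0.9), (0.3, 0.2)],
    "task_B": [(0.5, 0.8), (0.5, 0.6)],
}

def expected_utility(lottery):
    """Expected value of the outcome utilities of a lottery."""
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9
    return sum(p * u for p, u in lottery)

# Greater expected utility means more preferred.
ranking = sorted(alternatives,
                 key=lambda a: expected_utility(alternatives[a]),
                 reverse=True)
```

Here task_B's expected utility (0.70) exceeds task_A's (0.69), so task_B is ranked first.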
If the performance attributes are measurable and satisfy specific mathematical requirements, a multiattribute utility function can be derived that assigns numbers called outcome utilities to the set of performance attribute states within each performance requirement. The mathematical axioms that must be valid for the following two properties to hold were first derived by von Neumann and Morgenstern (Reference 27). The outcome utilities generated by the Keeney and Raiffa multiattribute utility function have the properties of von Neumann-Morgenstern utilities, that is:
(1) Greater outcome utility values correspond to more preferred outcomes.
(2) The utility value assigned to a gamble is the expected value of the outcome utilities of the gamble.
Associated with every outcome there is an N-dimensional vector of performance attributes x̄ = (x_1, ..., x_N), the set of which satisfies the attribute requirements cited above. Most of the attribute requirements are self-evident. One requirement, that of attribute independence, is a condition that makes it possible to consider preferences between states of a specific attribute without consideration of the states of the other N−1 attributes. It is thus possible to construct an attribute utility function that is independent of the other attribute states and which, like the outcome utility function, satisfies the von Neumann-Morgenstern properties for utility functions. This condition of independence, or some equivalent mathematical condition (see Reference 21 for alternative formulations), is necessary for the Keeney and Raiffa methodology. It is necessary to verify that this condition is valid in practice, or more correctly, to test and identify the bounds of its validity.
To continue the discussion, it is necessary to introduce additional mathematical notation. Let:

x_n = the state of the nth attribute
x_n^o = the least-preferred state to be considered for the nth attribute
x_n^* = the most-preferred state to be considered for the nth attribute
x̄ = (x_1, x_2, ..., x_N) = the vector of attribute states characterizing a specific outcome
x̄^o = (x_1^o, x_2^o, ..., x_N^o) = an outcome constructed from the least-preferred states of all the attributes
x̄^* = (x_1^*, x_2^*, ..., x_N^*) = an outcome constructed from the most-preferred states of all the attributes
(x_n^*, x̄_n^o) = an outcome in which the nth attribute is at its most-preferred state and all other attributes are at their least-preferred states
u_n(x_n) = the attribute utility of the nth attribute
u(x̄) = the outcome (multiattribute) utility of the outcome x̄ (vector of attribute states)
k_n = u(x_n^*, x̄_n^o) = the attribute scaling constant for the nth attribute
K = the master scaling constant for the multiattribute utility equation; an algebraic function of the k_n values

The mathematics permit the arbitrary assignments:
u_n(x_n^o) = 0.0 (least preferred) and u_n(x_n^*) = 1.0 (most preferred)
Thus, the attribute utility function values will range from 0.0 to 1.0. Attribute utility function values for attribute states intermediate between the worst and best values are assessed by determining a probability value p_n such that the decision makers or their designated experts are indifferent between receiving x_n for sure, or a gamble that yields x_n^* (the best outcome) with probability p_n or x_n^o (the worst outcome) with probability 1 − p_n. Graphically, assess p_n so that:

    x_n ~ [ x_n^* with probability p_n ; x_n^o with probability 1 − p_n ]

where "~" means indifference. It follows from the mathematics that u_n(x_n) = p_n. This indifference relation is repeated for various attribute states until either a continuous utility function can be approximated or enough discrete points have been assessed for the attribute states under consideration in the analysis.
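The assessment procedure above yields discrete (state, p_n) pairs with u_n(x_n) = p_n. A minimal sketch of approximating a continuous attribute utility function from such points, assuming piecewise-linear interpolation; the states and probabilities are invented for illustration:

```python
# Assessed indifference points (attribute state, p_n); u_n(state) = p_n.
# Endpoints follow the convention u_n(x_n^o) = 0.0, u_n(x_n^*) = 1.0.
# These values are invented, not drawn from the text.
assessed = [(10.0, 0.0), (25.0, 0.55), (40.0, 0.85), (50.0, 1.0)]

def attribute_utility(x, points=assessed):
    """Piecewise-linear approximation of u_n between assessed points."""
    pts = sorted(points)
    if x <= pts[0][0]:          # at or below the worst state
        return pts[0][1]
    if x >= pts[-1][0]:         # at or above the best state
        return pts[-1][1]
    for (x0, u0), (x1, u1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:       # interpolate within this segment
            return u0 + (u1 - u0) * (x - x0) / (x1 - x0)
```

More assessed points tighten the approximation; alternatively a parametric form (e.g., exponential) can be fit to the same data.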
A similar approach is used to assess the scaling constants (weights). A value for k_n is assessed such that the following indifference holds:

    (x_n^*, x̄_n^o) ~ [ x̄^* with probability k_n ; x̄^o with probability 1 − k_n ]
After the performance attribute utility functions and scaling constants have been assessed, evaluation of the multiattribute utility function for different alternative technology tasks can be performed. If the attribute states are known precisely or represented by expected (scalar) values, the multiattribute utility function is deterministic. If the attribute states are represented by probability distributions (the attributes are random variables), the multiattribute utility function is probabilistic. The two cases are presented below.
Deterministic Case
The deterministic case refers to the calculation of a single numerical measure of outcome utility under the assumption that the attribute state values carry no uncertainty (i.e., are not represented by probability distributions). With the assessment of the single-attribute utility functions and attribute scaling constants, the multiattribute utility equation can be solved to yield a deterministic outcome utility value for any outcome under consideration. The multiattribute utility function can be stated in one of two forms. If certain independence conditions hold, the first form is the multiplicative multiattribute utility function:
If Σ_{n=1}^{N} k_n ≠ 1.0, then:

    u(x̄) = (1/K) [ Π_{n=1}^{N} (1 + K·k_n·u_n(x_n)) − 1 ]        (Eq 3)

where the master scaling constant, K, is solved from the equation:

    1 + K = Π_{n=1}^{N} (1 + K·k_n)

If the scaling constants sum to one, Eq 3 reduces to the second form, the additive multiattribute utility function:

If Σ_{n=1}^{N} k_n = 1.0, then:

    u(x̄) = Σ_{n=1}^{N} k_n·u_n(x_n)        (Eq 4)
The outcome utility function values, like the attribute utility function values, will all range from 0.0 to 1.0 with:

u(x̄^o) = 0.0 (least preferred)
u(x̄^*) = 1.0 (most preferred)
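Eqs 3 and 4 can be sketched numerically. The k_n values and single-attribute utilities below are invented for illustration; K is found by bisection on 1 + K = Π(1 + K·k_n), taking the nonzero root:

```python
from math import prod

def solve_master_constant(k, tol=1e-12):
    """Solve 1 + K = prod(1 + K*k_n) for the nonzero root K by bisection.
    If sum(k) > 1, K lies in (-1, 0); if sum(k) < 1, K > 0."""
    s = sum(k)
    if abs(s - 1.0) < 1e-9:
        raise ValueError("sum(k) = 1: the additive form (Eq 4) applies; K is not needed")
    f = lambda K: prod(1.0 + K * kn for kn in k) - (1.0 + K)
    if s > 1.0:
        lo, hi = -1.0 + tol, -tol        # f(lo) > 0, f(hi) < 0
    else:
        lo, hi = tol, 1.0                # f(lo) < 0; expand hi until bracketed
        while f(hi) < 0.0:
            hi *= 2.0
    while hi - lo > tol:                 # standard bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def multiattribute_utility(k, u):
    """Outcome utility from scaling constants k and attribute utilities u = [u_n(x_n)]."""
    if abs(sum(k) - 1.0) < 1e-9:                               # Eq 4 (additive)
        return sum(kn * un for kn, un in zip(k, u))
    K = solve_master_constant(k)                               # Eq 3 (multiplicative)
    return (prod(1.0 + K * kn * un for kn, un in zip(k, u)) - 1.0) / K
```

For example, with k = [0.3, 0.3] the constants sum to less than one and the multiplicative form applies (K > 0); with k = [0.5, 0.5] Eq 4 applies directly. Either way, u(x̄^o) evaluates to 0.0 and u(x̄^*) to 1.0.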
Although the mathematical equations appear complex, they can be easily solved, and the information required in the interviews with the decision makers can be minimized. An extended discussion of these equations, their solution, and the assessment of the required data, together with examples taken from actual applications, is given in Keeney and Raiffa (Reference 21).
The discussion in this section uses an abbreviated form of Keeney and Raiffa's methodology to reduce the interview (questionnaire) time for the interviewee. An assumption is made that utility independence of each attribute implies pair-wise utility independence (i.e., the attributes exhibit utility independence when taken two at a time). This assumption allows the use of Formulation (4) of Theorem 6.2 of Keeney and Raiffa (Reference 21). Given single-attribute utility independence, it is difficult to construct a realistic example where pair-wise utility independence would be violated.
References
1. Smith, J.H., “A Decision Support System for Managing a Diverse Portfolio of Technology Resources: Automated Resource Allocation of Deep Space Network Equipment,” Proceedings of the 2000 Information Resources Management Association International Conference: Challenges of Information Technology Management in the 21st Century, Anchorage, Alaska, May 21-24, 2000, pp. 1211-1212.
2. Smith, J.H., “Rapid Search for Rapid Solutions: Technology Investment Analysis Using High-Performance Computation,” Proceedings of the 26th Annual meeting of the Western Decision Sciences Institute, The Hilton Waikoloa Village, Kamuela, Hawaii, pp. 667-669, March 25-29, 1997.
3. Smith, J.H., “The New Millennium Technology Program: Streamlining the R&D Planning Process,” Proceedings of the 25th annual meeting of the Decision Sciences Institute, Seattle, Washington, April 2-6, 1996, pp. 518-520.
4. Ledyard, J., R. Shisko, S. Tsyplakov, and J.H. Smith, The New Millennium Program: A Decision Support Aid for Multi-Mission Technology Planning; Description and User's Guide, Version 1.0, JPL Document D-13608, Jet Propulsion Laboratory, Pasadena, California, December 12, 1995.
5. Smith, J.H., and A. Feinberg, “Massively Parallel Processing and the Multicriteria R&D Portfolio Selection Problem: An Application Benchmark,” invited paper for Session on Multicriteria Decision Making, 1994 Joint National Meeting of ORSA/TIMS: Global Manufacturing in the 21st Century, Detroit, Michigan, October 23-26, 1994.
6. Feinberg, A., and J.H. Smith, “Selection and Scaling of Attributes in Multi-Criteria Decision Making for Ranking Advanced Technology Options,” presented at the 1994 Joint National Meeting of TIMS/ORSA: OR/MS Without Boundaries, Boston Marriott, Boston, Massachusetts, April 25-27, 1994.
7. Smith, J.H., and A. Feinberg, “Reaching for the Right Portfolio Strategy: Multi- Criteria R&D Planning Problems and High-Performance Computing,” Proceedings of the Twenty-Third annual meeting of the Western Decision Sciences Institute, edited by A. Khade and R. Brown, Maui, Hawaii, pp. 218-219, March 29-April 2, 1994.
8. Smith, J.H., and A. Feinberg, “Applying Supercomputing Methods to Multicriteria R&D Planning Problems,” invited paper for Session on Multicriteria Decision Making, 1993 Operations Research Society of America (ORSA/TIMS), Phoenix, Arizona, October 31-November 3, 1993.
9. Smith, J.H., Technology Investment Strategy Analysis Task: Final Report, JPL Document No. D-11439, October 8, 1993.
10. Feinberg, A. and J.H. Smith, “Issues in Implementation of Multiple Criteria Decision Analysis for Ranking Advanced Technology Options,” invited paper for session on Multiple Criteria Decision Making, presented at the Joint National Meeting of The Institute of Management Sciences/Operations Research Society of America, San Francisco, California, November 1-4, 1992.
11. Feinberg, A., and J.H. Smith, “Telerobotics Research and Development Planning,” presented at ORSA/TIMS Joint National Meeting, Anaheim, California, November 3-6, 1991.
12. Smith, J.H., “Probabilistic Simulation of Multiattribute Utilities With An Application to Space Station Automation and Robotics Technologies,” presented at the Twelfth Biennial Conference on Subjective Probability, Utility, and Decision Making, Lomonosov University, Moscow, USSR, August 21-25, 1989.
13. Bard, J.W., and A. Feinberg, “A Two-Phase Methodology for Technology Selection and System Design,” IEEE Transactions on Engineering Management, Vol. 36, No. 1, February, 1989.
14. Bard, J., “Parallel Funding of R&D Tasks with Probabilistic Outcomes,” Management Science, Vol. 31, No. 7, July 1985.
15. Czajkowski, A. F., and Jones, S., “Selecting Interrelated R&D Projects in Space Technology Planning,” IEEE Transactions on Engineering Management, 1986, 33(1).
16. Manvi, R., C. R. Weisbin, W. Zimmerman, and G. Rodriguez, “Technology Portfolio Options For NASA Missions Using Decision Trees,” IEEE 2002 Aerospace Conference, Big Sky, Montana, March 9-13, 2002.
17. Souder, W. E., and Mandakovic, T., “R&D Project Selection Models,” Research Technology Management, 1986, 29(4).
18. “Special Issue on Decision Analysis,” IEEE Transactions on Systems Science and Cybernetics, Vol. SSC-4, No. 3, pp.199-366, September 1968.
19. Raiffa, H., Decision Analysis: Introductory Lectures on Choices Under Uncertainty, Addison-Wesley, Reading, Massachusetts, 1968.
20. Raiffa, H., Preferences for Multi-Attributed Alternatives, RM-5686-DOT/RC, The Rand Corporation, Santa Monica, California, 1969.
21. Schlaifer, R., Analysis of Decisions Under Uncertainty, McGraw-Hill, New York, 1969.
22. Keeney, R. L., and Raiffa, H., Decisions with Multiple Objectives: Preferences and Value Tradeoffs, John Wiley & Sons, New York, 1976.
23. Bell, D.E., “Disappointment in Decision Making Under Uncertainty,” Operations Research, Vol. 33, No. 1, January-February 1985, pp. 1-27.
24. Bell, D.E., “Regret in Decision Making Under Uncertainty,” Operations Research, Vol. 30, No. 5, September-October 1982, pp. 961-981.
25. Kahneman, D., and A. Tversky, “Prospect Theory: An Analysis of Decision Under Risk,” Econometrica, 47, 2, pp. 263-291, 1979.
26. Kahneman, D., Slovic, P., and Tversky, A., editors, Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, Cambridge, U.K., 1982.
27. Saaty, T., The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
28. von Neumann, J., and Morgenstern, O., Theory of Games and Economic Behavior, 3rd Ed., 1967 (1st Ed., Princeton University Press, Princeton, New Jersey, 1944).