
Convex Optimization

Stephen Boyd
Department of Electrical Engineering
Stanford University

Lieven Vandenberghe
Electrical Engineering Department
University of California, Los Angeles

cambridge university press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi
Cambridge University Press
The Edinburgh Building, Cambridge, CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
http://www.cambridge.org
Information on this title: www.cambridge.org/9780521833783
© Cambridge University Press 2004
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
First published 2004
Seventh printing with corrections 2009
Printed in the United Kingdom at the University Press, Cambridge
A catalogue record for this publication is available from the British Library
Library of Congress Cataloguing-in-Publication data
Boyd, Stephen P.
Convex Optimization / Stephen Boyd & Lieven Vandenberghe
p. cm.
Includes bibliographical references and index.
ISBN 0 521 83378 7
1. Mathematical optimization. 2. Convex functions. I. Vandenberghe, Lieven. II. Title.
QA402.5.B69 2004 519.6–dc22 2003063284
ISBN 978-0-521-83378-3 hardback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

For
Anna, Nicholas, and Nora
Daniël and Margriet

Contents
Preface xi

1 Introduction 1
   1.1 Mathematical optimization 1
   1.2 Least-squares and linear programming 4
   1.3 Convex optimization 7
   1.4 Nonlinear optimization 9
   1.5 Outline 11
   1.6 Notation 14
   Bibliography 16

I Theory 19

2 Convex sets 21
   2.1 Affine and convex sets 21
   2.2 Some important examples 27
   2.3 Operations that preserve convexity 35
   2.4 Generalized inequalities 43
   2.5 Separating and supporting hyperplanes 46
   2.6 Dual cones and generalized inequalities 51
   Bibliography 59
   Exercises 60

3 Convex functions 67
   3.1 Basic properties and examples 67
   3.2 Operations that preserve convexity 79
   3.3 The conjugate function 90
   3.4 Quasiconvex functions 95
   3.5 Log-concave and log-convex functions 104
   3.6 Convexity with respect to generalized inequalities 108
   Bibliography 112
   Exercises 113

4 Convex optimization problems 127
   4.1 Optimization problems 127
   4.2 Convex optimization 136
   4.3 Linear optimization problems 146
   4.4 Quadratic optimization problems 152
   4.5 Geometric programming 160
   4.6 Generalized inequality constraints 167
   4.7 Vector optimization 174
   Bibliography 188
   Exercises 189

5 Duality 215
   5.1 The Lagrange dual function 215
   5.2 The Lagrange dual problem 223
   5.3 Geometric interpretation 232
   5.4 Saddle-point interpretation 237
   5.5 Optimality conditions 241
   5.6 Perturbation and sensitivity analysis 249
   5.7 Examples 253
   5.8 Theorems of alternatives 258
   5.9 Generalized inequalities 264
   Bibliography 272
   Exercises 273

II Applications 289

6 Approximation and fitting 291
   6.1 Norm approximation 291
   6.2 Least-norm problems 302
   6.3 Regularized approximation 305
   6.4 Robust approximation 318
   6.5 Function fitting and interpolation 324
   Bibliography 343
   Exercises 344

7 Statistical estimation 351
   7.1 Parametric distribution estimation 351
   7.2 Nonparametric distribution estimation 359
   7.3 Optimal detector design and hypothesis testing 364
   7.4 Chebyshev and Chernoff bounds 374
   7.5 Experiment design 384
   Bibliography 392
   Exercises 393

8 Geometric problems 397
   8.1 Projection on a set 397
   8.2 Distance between sets 402
   8.3 Euclidean distance and angle problems 405
   8.4 Extremal volume ellipsoids 410
   8.5 Centering 416
   8.6 Classification 422
   8.7 Placement and location 432
   8.8 Floor planning 438
   Bibliography 446
   Exercises 447

III Algorithms 455

9 Unconstrained minimization 457
   9.1 Unconstrained minimization problems 457
   9.2 Descent methods 463
   9.3 Gradient descent method 466
   9.4 Steepest descent method 475
   9.5 Newton’s method 484
   9.6 Self-concordance 496
   9.7 Implementation 508
   Bibliography 513
   Exercises 514

10 Equality constrained minimization 521
   10.1 Equality constrained minimization problems 521
   10.2 Newton’s method with equality constraints 525
   10.3 Infeasible start Newton method 531
   10.4 Implementation 542
   Bibliography 556
   Exercises 557

11 Interior-point methods 561
   11.1 Inequality constrained minimization problems 561
   11.2 Logarithmic barrier function and central path 562
   11.3 The barrier method 568
   11.4 Feasibility and phase I methods 579
   11.5 Complexity analysis via self-concordance 585
   11.6 Problems with generalized inequalities 596
   11.7 Primal-dual interior-point methods 609
   11.8 Implementation 615
   Bibliography 621
   Exercises 623

Appendices 631

A Mathematical background 633
   A.1 Norms 633
   A.2 Analysis 637
   A.3 Functions 639
   A.4 Derivatives 640
   A.5 Linear algebra 645
   Bibliography 652

B Problems involving two quadratic functions 653
   B.1 Single constraint quadratic optimization 653
   B.2 The S-procedure 655
   B.3 The field of values of two symmetric matrices 656
   B.4 Proofs of the strong duality results 657
   Bibliography 659

C Numerical linear algebra background 661
   C.1 Matrix structure and algorithm complexity 661
   C.2 Solving linear equations with factored matrices 664
   C.3 LU, Cholesky, and LDL^T factorization 668
   C.4 Block elimination and Schur complements 672
   C.5 Solving underdetermined linear equations 681
   Bibliography 684

References 685
Notation 697
Index 701

Preface
This book is about convex optimization, a special class of mathematical optimization problems, which includes least-squares and linear programming problems. It is well known that least-squares and linear programming problems have a fairly complete theory, arise in a variety of applications, and can be solved numerically very efficiently. The basic point of this book is that the same can be said for the larger class of convex optimization problems.
While the mathematics of convex optimization has been studied for about a century, several related recent developments have stimulated new interest in the topic. The first is the recognition that interior-point methods, developed in the 1980s to solve linear programming problems, can be used to solve convex optimization problems as well. These new methods allow us to solve certain new classes of convex optimization problems, such as semidefinite programs and second-order cone programs, almost as easily as linear programs.
The second development is the discovery that convex optimization problems (beyond least-squares and linear programs) are more prevalent in practice than was previously thought. Since 1990 many applications have been discovered in areas such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, statistics, and finance. Convex optimization has also found wide application in combinatorial optimization and global optimization, where it is used to find bounds on the optimal value, as well as approximate solutions. We believe that many other applications of convex optimization are still waiting to be discovered.
There are great advantages to recognizing or formulating a problem as a convex optimization problem. The most basic advantage is that the problem can then be solved, very reliably and efficiently, using interior-point methods or other special methods for convex optimization. These solution methods are reliable enough to be embedded in a computer-aided design or analysis tool, or even a real-time reactive or automatic control system. There are also theoretical or conceptual advantages of formulating a problem as a convex optimization problem. The associated dual problem, for example, often has an interesting interpretation in terms of the original problem, and sometimes leads to an efficient or distributed method for solving it.
We think that convex optimization is an important enough topic that everyone who uses computational mathematics should know at least a little bit about it. In our opinion, convex optimization is a natural next topic after advanced linear algebra (topics like least-squares, singular values), and linear programming.

Goal of this book
For many general purpose optimization methods, the typical approach is to just try out the method on the problem to be solved. The full benefits of convex optimization, in contrast, only come when the problem is known ahead of time to be convex. Of course, many optimization problems are not convex, and it can be difficult to recognize the ones that are, or to reformulate a problem so that it is convex.
Our main goal is to help the reader develop a working knowledge of convex optimization, i.e., to develop the skills and background needed to recognize, formulate, and solve convex optimization problems.
Developing a working knowledge of convex optimization can be mathematically demanding, especially for the reader interested primarily in applications. In our experience (mostly with graduate students in electrical engineering and computer science), the investment often pays off well, and sometimes very well.
There are several books on linear programming, and general nonlinear programming, that focus on problem formulation, modeling, and applications. Several other books cover the theory of convex optimization, or interior-point methods and their complexity analysis. This book is meant to be something in between, a book on general convex optimization that focuses on problem formulation and modeling.
We should also mention what this book is not. It is not a text primarily about convex analysis, or the mathematics of convex optimization; several existing texts cover these topics well. Nor is the book a survey of algorithms for convex optimization. Instead we have chosen just a few good algorithms, and describe only simple, stylized versions of them (which, however, do work well in practice). We make no attempt to cover the most recent state of the art in interior-point (or other) methods for solving convex problems. Our coverage of numerical implementation issues is also highly simplified, but we feel that it is adequate for the potential user to develop working implementations, and we do cover, in some detail, techniques for exploiting structure to improve the efficiency of the methods. We also do not cover, in more than a simplified way, the complexity theory of the algorithms we describe. We do, however, give an introduction to the important ideas of self-concordance and complexity analysis for interior-point methods.
Audience
This book is meant for the researcher, scientist, or engineer who uses mathematical optimization, or more generally, computational mathematics. This includes, naturally, those working directly in optimization and operations research, and also many others who use optimization, in fields like computer science, economics, finance, statistics, data mining, and many fields of science and engineering. Our primary focus is on the latter group, the potential users of convex optimization, and not the (less numerous) experts in the field of convex optimization.
The only background required of the reader is a good knowledge of advanced calculus and linear algebra. If the reader has seen basic mathematical analysis (e.g., norms, convergence, elementary topology), and basic probability theory, he or she should be able to follow every argument and discussion in the book. We hope that readers who have not seen analysis and probability, however, can still get all of the essential ideas and important points. Prior exposure to numerical computing or optimization is not needed, since we develop all of the needed material from these areas in the text or appendices.
Using this book in courses
We hope that this book will be useful as the primary or alternate textbook for several types of courses. Since 1995 we have been using drafts of this book for graduate courses on linear, nonlinear, and convex optimization (with engineering applications) at Stanford and UCLA. We are able to cover most of the material, though not in detail, in a one quarter graduate course. A one semester course allows for a more leisurely pace, more applications, more detailed treatment of theory, and perhaps a short student project. A two quarter sequence allows an expanded treatment of the more basic topics such as linear and quadratic programming (which are very useful for the applications oriented student), or a more substantial student project.
This book can also be used as a reference or alternate text for a more traditional course on linear and nonlinear optimization, or a course on control systems (or other applications area), that includes some coverage of convex optimization. As the secondary text in a more theoretically oriented course on convex optimization, it can be used as a source of simple practical examples.
Acknowledgments
We have been developing the material for this book for almost a decade. Over the years we have benefited from feedback and suggestions from many people, including our own graduate students, students in our courses, and our colleagues at Stanford, UCLA, and elsewhere. Unfortunately, space limitations and shoddy record keeping do not allow us to name everyone who has contributed. However, we wish to particularly thank A. Aggarwal, V. Balakrishnan, A. Bernard, B. Bray, R. Cottle, A. d’Aspremont, J. Dahl, J. Dattorro, D. Donoho, J. Doyle, L. El Ghaoui, P. Glynn, M. Grant, A. Hansson, T. Hastie, A. Lewis, M. Lobo, Z.-Q. Luo, M. Mesbahi, W. Naylor, P. Parrilo, I. Pressman, R. Tibshirani, B. Van Roy, L. Xiao, and Y. Ye. J. Jalden and A. d’Aspremont contributed the time-frequency analysis example in §6.5.4, and the consumer preference bounding example in §6.5.5, respectively. P. Parrilo suggested exercises 4.4 and 4.56. Newer printings benefited greatly from Igal Sason’s meticulous reading of the book.
We want to single out two others for special acknowledgment. Arkadi Nemirovski incited our original interest in convex optimization, and encouraged us to write this book. We also want to thank Kishan Baheti for playing a critical role in the development of this book. In 1994 he encouraged us to apply for a National Science Foundation combined research and curriculum development grant, on convex optimization with engineering applications, and this book is a direct (if delayed) consequence.
Stephen Boyd
Stanford, California

Lieven Vandenberghe
Los Angeles, California

Chapter 1
Introduction
In this introduction we give an overview of mathematical optimization, focusing on the special role of convex optimization. The concepts introduced informally here will be covered in later chapters, with more care and technical detail.
1.1 Mathematical optimization
A mathematical optimization problem, or just optimization problem, has the form
minimize   f0(x)
subject to fi(x) ≤ bi,  i = 1, …, m.        (1.1)
Here the vector x = (x1 , . . . , xn ) is the optimization variable of the problem, the function f0 : Rn → R is the objective function, the functions fi : Rn → R, i = 1, . . . , m, are the (inequality) constraint functions, and the constants b1 , . . . , bm are the limits, or bounds, for the constraints. A vector x⋆ is called optimal, or a solution of the problem (1.1), if it has the smallest objective value among all vectors that satisfy the constraints: for any z with f1(z) ≤ b1,…,fm(z) ≤ bm, we have f0(z) ≥ f0(x⋆).
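To make this concrete, here is a minimal sketch in Python (the problem data and candidate points are made up purely for illustration) that evaluates the objective and checks feasibility for a tiny instance of (1.1):

```python
import numpy as np

# A tiny instance of problem (1.1): minimize f0(x) subject to fi(x) <= bi.
f0 = lambda x: x[0]**2 + x[1]**2      # objective function
fs = [lambda x: x[0] + x[1]]          # constraint functions f1, ..., fm
bs = [1.0]                            # limits b1, ..., bm

def is_feasible(x):
    """True if x satisfies every constraint fi(x) <= bi."""
    return all(fi(x) <= bi for fi, bi in zip(fs, bs))

# Of two candidate choices, the feasible one with the smaller objective
# value is better; an optimal x* minimizes f0 over all feasible points.
for x in (np.array([0.3, 0.3]), np.array([0.8, 0.8])):
    print(x, is_feasible(x), f0(x))
```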
We generally consider families or classes of optimization problems, characterized by particular forms of the objective and constraint functions. As an important example, the optimization problem (1.1) is called a linear program if the objective and constraint functions f0, …, fm are linear, i.e., satisfy
fi(αx + βy) = αfi(x) + βfi(y) (1.2)
for all x, y ∈ Rn and all α, β ∈ R. If the optimization problem is not linear, it is called a nonlinear program.
This book is about a class of optimization problems called convex optimization problems. A convex optimization problem is one in which the objective and constraint functions are convex, which means they satisfy the inequality
fi(αx + βy) ≤ αfi(x) + βfi(y) (1.3)

for all x, y ∈ Rn and all α, β ∈ R with α + β = 1, α ≥ 0, β ≥ 0. Comparing (1.3) and (1.2), we see that convexity is more general than linearity: inequality replaces the more restrictive equality, and the inequality must hold only for certain values of α and β. Since any linear program is therefore a convex optimization problem, we can consider convex optimization to be a generalization of linear programming.
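As a quick numerical aside (a sketch with made-up example functions, not anything from the text), one can spot-check the defining inequality (1.3) at random points: a single violation certifies that a function is not convex, while the absence of violations is merely consistent with convexity, not a proof of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def find_convexity_violation(f, n, trials=10000):
    """Randomly search for a counterexample to inequality (1.3)."""
    for _ in range(trials):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        alpha = rng.uniform(0.0, 1.0)   # alpha + beta = 1, alpha, beta >= 0
        beta = 1.0 - alpha
        if f(alpha*x + beta*y) > alpha*f(x) + beta*f(y) + 1e-9:
            return True                 # certified: f is not convex
    return False                        # consistent with f being convex

print(find_convexity_violation(lambda x: np.sum(x**2), n=3))   # False: convex
print(find_convexity_violation(lambda x: -np.sum(x**2), n=3))  # True: nonconvex
```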
1.1.1 Applications
The optimization problem (1.1) is an abstraction of the problem of making the best possible choice of a vector in Rn from a set of candidate choices. The variable x represents the choice made; the constraints fi(x) ≤ bi represent firm requirements or specifications that limit the possible choices, and the objective value f0(x) represents the cost of choosing x. (We can also think of −f0(x) as representing the value, or utility, of choosing x.) A solution of the optimization problem (1.1) corresponds to a choice that has minimum cost (or maximum utility), among all choices that meet the firm requirements.
In portfolio optimization, for example, we seek the best way to invest some capital in a set of n assets. The variable xi represents the investment in the ith asset, so the vector x ∈ Rn describes the overall portfolio allocation across the set of assets. The constraints might represent a limit on the budget (i.e., a limit on the total amount to be invested), the requirement that investments are nonnegative (assuming short positions are not allowed), and a minimum acceptable value of expected return for the whole portfolio. The objective or cost function might be a measure of the overall risk or variance of the portfolio return. In this case, the optimization problem (1.1) corresponds to choosing a portfolio allocation that minimizes risk, among all possible allocations that meet the firm requirements.
Another example is device sizing in electronic design, which is the task of choosing the width and length of each device in an electronic circuit. Here the variables represent the widths and lengths of the devices. The constraints represent a variety of engineering requirements, such as limits on the device sizes imposed by the manufacturing process, timing requirements that ensure that the circuit can operate reliably at a specified speed, and a limit on the total area of the circuit. A common objective in a device sizing problem is the total power consumed by the circuit. The optimization problem (1.1) is to find the device sizes that satisfy the design requirements (on manufacturability, timing, and area) and are most power efficient.
In data fitting, the task is to find a model, from a family of potential models, that best fits some observed data and prior information. Here the variables are the parameters in the model, and the constraints can represent prior information or required limits on the parameters (such as nonnegativity). The objective function might be a measure of misfit or prediction error between the observed data and the values predicted by the model, or a statistical measure of the unlikeliness or implausibility of the parameter values. The optimization problem (1.1) is to find the model parameter values that are consistent with the prior information, and give the smallest misfit or prediction error with the observed data (or, in a statistical framework, are most likely).
An amazing variety of practical problems involving decision making (or system design, analysis, and operation) can be cast in the form of a mathematical optimization problem, or some variation such as a multicriterion optimization problem. Indeed, mathematical optimization has become an important tool in many areas. It is widely used in engineering, in electronic design automation, automatic control systems, and optimal design problems arising in civil, chemical, mechanical, and aerospace engineering. Optimization is used for problems arising in network design and operation, finance, supply chain management, scheduling, and many other areas. The list of applications is still steadily expanding.
For most of these applications, mathematical optimization is used as an aid to a human decision maker, system designer, or system operator, who supervises the process, checks the results, and modifies the problem (or the solution approach) when necessary. This human decision maker also carries out any actions suggested by the optimization problem, e.g., buying or selling assets to achieve the optimal portfolio.
A relatively recent phenomenon opens the possibility of many other applications for mathematical optimization. With the proliferation of computers embedded in products, we have seen a rapid growth in embedded optimization. In these embedded applications, optimization is used to automatically make real-time choices, and even carry out the associated actions, with no (or little) human intervention or oversight. In some application areas, this blending of traditional automatic control systems and embedded optimization is well under way; in others, it is just starting. Embedded real-time optimization raises some new challenges: in particular, it requires solution methods that are extremely reliable, and solve problems in a predictable amount of time (and memory).
1.1.2 Solving optimization problems
A solution method for a class of optimization problems is an algorithm that computes a solution of the problem (to some given accuracy), given a particular problem from the class, i.e., an instance of the problem. Since the late 1940s, a large effort has gone into developing algorithms for solving various classes of optimization problems, analyzing their properties, and developing good software implementations. The effectiveness of these algorithms, i.e., our ability to solve the optimization problem (1.1), varies considerably, and depends on factors such as the particular forms of the objective and constraint functions, how many variables and constraints there are, and special structure, such as sparsity. (A problem is sparse if each constraint function depends on only a small number of the variables.)
Even when the objective and constraint functions are smooth (for example, polynomials) the general optimization problem (1.1) is surprisingly difficult to solve. Approaches to the general problem therefore involve some kind of compromise, such as very long computation time, or the possibility of not finding the solution. Some of these methods are discussed in §1.4.
There are, however, some important exceptions to the general rule that most optimization problems are difficult to solve. For a few problem classes we have effective algorithms that can reliably solve even large problems, with hundreds or thousands of variables and constraints. Two important and well known examples, described in §1.2 below (and in detail in chapter 4), are least-squares problems and linear programs. It is less well known that convex optimization is another exception to the rule: like least-squares or linear programming, there are very effective algorithms that can reliably and efficiently solve even large convex problems.
1.2 Least-squares and linear programming
In this section we describe two very widely known and used special subclasses of convex optimization: least-squares and linear programming. (A complete technical treatment of these problems will be given in chapter 4.)
1.2.1 Least-squares problems
A least-squares problem is an optimization problem with no constraints (i.e., m = 0) and an objective which is a sum of squares of terms of the form a_i^T x − b_i:

minimize   f0(x) = ∥Ax − b∥_2^2 = ∑_{i=1}^k (a_i^T x − b_i)^2.        (1.4)

Here A ∈ Rk×n (with k ≥ n), a_i^T are the rows of A, and the vector x ∈ Rn is the optimization variable.
Solving least-squares problems
The solution of a least-squares problem (1.4) can be reduced to solving a set of linear equations,
(A^T A) x = A^T b,

so we have the analytical solution x = (A^T A)^{−1} A^T b. For least-squares problems we have good algorithms (and software implementations) for solving the problem to high accuracy, with very high reliability. The least-squares problem can be solved in a time approximately proportional to n^2 k, with a known constant. A current desktop computer can solve a least-squares problem with hundreds of variables, and thousands of terms, in a few seconds; more powerful computers, of course, can solve larger problems, or the same size problems, faster. (Moreover, these solution times will decrease exponentially in the future, according to Moore’s law.) Algorithms and software for solving least-squares problems are reliable enough for embedded optimization.
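As an illustration, a minimal sketch in Python with NumPy (random data, purely for illustration). In practice one calls a QR- or SVD-based routine such as numpy.linalg.lstsq rather than forming A^T A explicitly, since the normal equations can be numerically less stable:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 1000, 100                       # k >= n, as in (1.4)
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

# Analytical solution via the normal equations (A^T A) x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Preferred in practice: an SVD-based least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))  # the two solutions agree
```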
In many cases we can solve even larger least-squares problems, by exploiting some special structure in the coefficient matrix A. Suppose, for example, that the matrix A is sparse, which means that it has far fewer than kn nonzero entries. By exploiting sparsity, we can usually solve the least-squares problem much faster than order n^2 k. A current desktop computer can solve a sparse least-squares problem with tens of thousands of variables, and hundreds of thousands of terms, in around a minute (although this depends on the particular sparsity pattern).
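To illustrate exploiting sparsity, a sketch (with made-up random sparse data) using SciPy’s iterative solver lsqr, which needs only matrix-vector products with A and its transpose and never forms A^T A:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
k, n = 200000, 20000
A = sp.random(k, n, density=1e-4, format="csr", random_state=2)
b = rng.standard_normal(k)

# Each lsqr iteration costs on the order of nnz(A) operations,
# so very large but sparse problems remain tractable.
x = lsqr(A, b, atol=1e-8, btol=1e-8)[0]
print(np.linalg.norm(A @ x - b))
```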
For extremely large problems (say, with millions of variables), or for problems with exacting real-time computing requirements, solving a least-squares problem can be a challenge. But in the vast majority of cases, we can say that existing methods are very effective, and extremely reliable. Indeed, we can say that solving least-squares problems (that are not on the boundary of what is currently achievable) is a (mature) technology, that can be reliably used by many people who do not know, and do not need to know, the details.
Using least-squares
The least-squares problem is the basis for regression analysis, optimal control, and many parameter estimation and data fitting methods. It has a number of statistical interpretations, e.g., as maximum likelihood estimation of a vector x, given linear measurements corrupted by Gaussian measurement errors.
Recognizing an optimization problem as a least-squares problem is straightforward; we only need to verify that the objective is a quadratic function (and then test whether the associated quadratic form is positive semidefinite). While the basic least-squares problem has a simple fixed form, several standard techniques are used to increase its flexibility in applications.
In weighted least-squares, the weighted least-squares cost

∑_{i=1}^k w_i (a_i^T x − b_i)^2,

where w1, …, wk are positive, is minimized. (This problem is readily cast and solved as a standard least-squares problem.) Here the weights w_i are chosen to reflect differing levels of concern about the sizes of the terms a_i^T x − b_i, or simply to influence the solution. In a statistical setting, weighted least-squares arises in estimation of a vector x, given linear measurements corrupted by errors with unequal variances.
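The reduction to a standard least-squares problem is immediate: scale row i of A and entry i of b by √w_i. A minimal sketch, with made-up weights:

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Minimize sum_i w_i (a_i^T x - b_i)^2 by rescaling rows by sqrt(w_i)."""
    s = np.sqrt(w)
    x, *_ = np.linalg.lstsq(s[:, None] * A, s * b, rcond=None)
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
w = np.array([1.0, 1.0, 10.0])   # weight the third residual most heavily
print(weighted_lstsq(A, b, w))
```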
Another technique in least-squares is regularization, in which extra terms are added to the cost function. In the simplest case, a positive multiple of the sum of squares of the variables is added to the cost function:
∑_{i=1}^k (a_i^T x − b_i)^2 + ρ ∑_{i=1}^n x_i^2,

where ρ > 0. (This problem too can be formulated as a standard least-squares problem.) The extra terms penalize large values of x, and result in a sensible solution in cases when minimizing the first sum only does not. The parameter ρ is chosen by the user to give the right trade-off between making the original objective function ∑_{i=1}^k (a_i^T x − b_i)^2 small, while keeping ∑_{i=1}^n x_i^2 not too big. Regularization comes up in statistical estimation when the vector x to be estimated is given a prior distribution.
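This problem also reduces to a standard least-squares problem, by stacking √ρ·I below A and zeros below b; a sketch with made-up data:

```python
import numpy as np

def regularized_lstsq(A, b, rho):
    """Minimize ||Ax - b||^2 + rho ||x||^2 via an equivalent stacked problem."""
    k, n = A.shape
    A_stack = np.vstack([A, np.sqrt(rho) * np.eye(n)])
    b_stack = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_stack, b_stack, rcond=None)
    return x

rng = np.random.default_rng(3)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
print(np.linalg.norm(regularized_lstsq(A, b, rho=1e-6)))  # near-unregularized x
print(np.linalg.norm(regularized_lstsq(A, b, rho=10.0)))  # larger rho shrinks x
```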
Weighted least-squares and regularization are covered in chapter 6; their statistical interpretations are given in chapter 7.

1.2.2 Linear programming
Another important class of optimization problems is linear programming, in which the objective and all constraint functions are linear:
minimize   c^T x
subject to a_i^T x ≤ b_i,  i = 1, …, m.        (1.5)
Here the vectors c, a1, …, am ∈ Rn and scalars b1, …, bm ∈ R are problem parameters that specify the objective and constraint functions.
Solving linear programs
There is no simple analytical formula for the solution of a linear program (as there is for a least-squares problem), but there are a variety of very effective methods for solving them, including Dantzig’s simplex method, and the more recent interior-point methods described later in this book. While we cannot give the exact number of arithmetic operations required to solve a linear program (as we can for least-squares), we can establish rigorous bounds on the number of operations required to solve a linear program, to a given accuracy, using an interior-point method. The complexity in practice is order n^2 m (assuming m ≥ n) but with a constant that is less well characterized than for least-squares. These algorithms are quite reliable, although perhaps not quite as reliable as methods for least-squares. We can easily solve problems with hundreds of variables and thousands of constraints on a small desktop computer, in a matter of seconds. If the problem is sparse, or has some other exploitable structure, we can often solve problems with tens or hundreds of thousands of variables and constraints.
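For instance, a small made-up instance of (1.5) can be handed to SciPy’s LP solver (note that scipy.optimize.linprog assumes x ≥ 0 unless bounds are relaxed, so free variables must be declared explicitly):

```python
import numpy as np
from scipy.optimize import linprog

# minimize c^T x subject to a_i^T x <= b_i, as in (1.5), with x free.
c = np.array([1.0, 2.0])
A_ub = np.array([[-1.0,  0.0],
                 [ 0.0, -1.0],
                 [ 1.0,  1.0]])
b_ub = np.array([0.0, 0.0, 4.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))  # x free, not x >= 0
print(res.x, res.fun)
```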
As with least-squares problems, it is still a challenge to solve extremely large linear programs, or to solve linear programs with exacting real-time computing requirements. But, like least-squares, we can say that solving (most) linear programs is a mature technology. Linear programming solvers can be (and are) embedded in many tools and applications.
Using linear programming
Some applications lead directly to linear programs in the form (1.5), or one of several other standard forms. In many other cases the original optimization problem does not have a standard linear program form, but can be transformed to an equivalent linear program (and then, of course, solved) using techniques covered in detail in chapter 4.
As a simple example, consider the Chebyshev approximation problem:
minimize   max_{i=1,…,k} |a_i^T x − b_i|.        (1.6)
Here x ∈ Rn is the variable, and a1, …, ak ∈ Rn, b1, …, bk ∈ R are parameters that specify the problem instance. Note the resemblance to the least-squares problem (1.4). For both problems, the objective is a measure of the size of the terms a_i^T x − b_i. In least-squares, we use the sum of squares of the terms as objective, whereas in Chebyshev approximation, we use the maximum of the absolute values.

One other important distinction is that the objective function in the Chebyshev approximation problem (1.6) is not differentiable; the objective in the least-squares problem (1.4) is quadratic, and therefore differentiable.
The Chebyshev approximation problem (1.6) can be solved by solving the linear program
minimize   t
subject to a_i^T x − t ≤ b_i,   i = 1, …, k        (1.7)
           −a_i^T x − t ≤ −b_i,  i = 1, …, k,
with variables x ∈ Rn and t ∈ R. (The details will be given in chapter 6.) Since linear programs are readily solved, the Chebyshev approximation problem is therefore readily solved.
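A sketch of the reformulation (1.7) in code, with made-up data and the same SciPy solver as above; the LP variable is the concatenation (x, t) ∈ R^{n+1}:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
k, n = 100, 5
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

# Variables (x, t): minimize t subject to  a_i^T x - t <= b_i  and
# -a_i^T x - t <= -b_i,  exactly as in (1.7).
c = np.zeros(n + 1)
c[-1] = 1.0
ones = np.ones((k, 1))
A_ub = np.vstack([np.hstack([A, -ones]), np.hstack([-A, -ones])])
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
x, t = res.x[:n], res.x[-1]
print(t, np.max(np.abs(A @ x - b)))   # t equals the minimax residual
```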
Anyone with a working knowledge of linear programming would recognize the Chebyshev approximation problem (1.6) as one that can be reduced to a linear program. For those without this background, though, it might not be obvious that the Chebyshev approximation problem (1.6), with its nondifferentiable objective, can be formulated and solved as a linear program.
While recognizing problems that can be reduced to linear programs is more involved than recognizing a least-squares problem, it is a skill that is readily acquired, since only a few standard tricks are used. The task can even be partially automated; some software systems for specifying and solving optimization problems can automatically recognize (some) problems that can be reformulated as linear programs.
1.3 Convex optimization
A convex optimization problem is one of the form

minimize   f0(x)
subject to fi(x) ≤ bi,  i = 1, …, m,        (1.8)

where the functions f0, …, fm : Rn → R are convex, i.e., satisfy

fi(αx + βy) ≤ αfi(x) + βfi(y)

for all x, y ∈ Rn and all α, β ∈ R with α + β = 1, α ≥ 0, β ≥ 0. The least-squares problem (1.4) and linear programming problem (1.5) are both special cases of the general convex optimization problem (1.8).
1.3.1 Solving convex optimization problems
There is in general no analytical formula for the solution of convex optimization problems, but (as with linear programming problems) there are very effective methods for solving them. Interior-point methods work very well in practice, and in some cases can be proved to solve the problem to a specified accuracy with a number of operations that does not exceed a polynomial of the problem dimensions. (This is covered in chapter 11.)
We will see that interior-point methods can solve the problem (1.8) in a number of steps or iterations that is almost always in the range between 10 and 100. Ignoring any structure in the problem (such as sparsity), each step requires on the order of

max{n^3, n^2 m, F}

operations, where F is the cost of evaluating the first and second derivatives of the objective and constraint functions f0, …, fm.
Like methods for solving linear programs, these interior-point methods are quite reliable. We can easily solve problems with hundreds of variables and thousands of constraints on a current desktop computer, in at most a few tens of seconds. By exploiting problem structure (such as sparsity), we can solve far larger problems, with many thousands of variables and constraints.
We cannot yet claim that solving general convex optimization problems is a mature technology, like solving least-squares or linear programming problems. Research on interior-point methods for general nonlinear convex optimization is still a very active research area, and no consensus has emerged yet as to what the best method or methods are. But it is reasonable to expect that solving general convex optimization problems will become a technology within a few years. And for some subclasses of convex optimization problems, for example second-order cone programming or geometric programming (studied in detail in chapter 4), it is fair to say that interior-point methods are approaching a technology.
1.3.2 Using convex optimization
Using convex optimization is, at least conceptually, very much like using least-squares or linear programming. If we can formulate a problem as a convex optimization problem, then we can solve it efficiently, just as we can solve a least-squares problem efficiently. With only a bit of exaggeration, we can say that, if you formulate a practical problem as a convex optimization problem, then you have solved the original problem.
There are also some important differences. Recognizing a least-squares problem is straightforward, but recognizing a convex function can be difficult. In addition, there are many more tricks for transforming convex problems than for transforming linear programs. Recognizing convex optimization problems, or those that can be transformed to convex optimization problems, can therefore be challenging. The main goal of this book is to give the reader the background needed to do this. Once the skill of recognizing or formulating convex optimization problems is developed, you will find that surprisingly many problems can be solved via convex optimization.
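Modeling systems built on these ideas make the formulate-then-solve workflow concrete. A sketch using the CVXPY package (a later development than this book, mentioned here only as one example of such a system; the data are made up):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

# A convex problem in the form (1.8): convex objective, convex constraints.
x = cp.Variable(10)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [cp.norm(x, 1) <= 2, x >= -1]
prob = cp.Problem(objective, constraints)
prob.solve()        # a conic solver (interior-point style) runs underneath
print(prob.value, x.value)
```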
The challenge, and art, in using convex optimization is in recognizing and formulating the problem. Once this formulation is done, solving the problem is, like least-squares or linear programming, (almost) technology.

1.4 Nonlinear optimization
Nonlinear optimization (or nonlinear programming) is the term used to describe an optimization problem when the objective or constraint functions are not linear, but not known to be convex. Sadly, there are no effective methods for solving the general nonlinear programming problem (1.1). Even simple looking problems with as few as ten variables can be extremely challenging, while problems with a few hundreds of variables can be intractable. Methods for the general nonlinear programming problem therefore take several different approaches, each of which involves some compromise.
1.4.1 Local optimization
In local optimization, the compromise is to give up seeking the optimal x, which minimizes the objective over all feasible points. Instead we seek a point that is only locally optimal, which means that it minimizes the objective function among feasible points that are near it, but is not guaranteed to have a lower objective value than all other feasible points. A large fraction of the research on general nonlinear programming has focused on methods for local optimization, which as a consequence are well developed.
Local optimization methods can be fast, can handle large-scale problems, and are widely applicable, since they only require differentiability of the objective and constraint functions. As a result, local optimization methods are widely used in applications where there is value in finding a good point, if not the very best. In an engineering design application, for example, local optimization can be used to improve the performance of a design originally obtained by manual, or other, design methods.
There are several disadvantages of local optimization methods, beyond (possibly) not finding the true, globally optimal solution. The methods require an initial guess for the optimization variable. This initial guess or starting point is critical, and can greatly affect the objective value of the local solution obtained. Little information is provided about how far from (globally) optimal the local solution is. Local optimization methods are often sensitive to algorithm parameter values, which may need to be adjusted for a particular problem, or family of problems.
Using a local optimization method is trickier than solving a least-squares problem, linear program, or convex optimization problem. It involves experimenting with the choice of algorithm, adjusting algorithm parameters, and finding a good enough initial guess (when one instance is to be solved) or a method for producing a good enough initial guess (when a family of problems is to be solved). Roughly speaking, local optimization methods are more art than technology. Local optimization is a well developed art, and often very effective, but it is nevertheless an art. In contrast, there is little art involved in solving a least-squares problem or a linear program (except, of course, those on the boundary of what is currently possible).
An interesting comparison can be made between local optimization methods for nonlinear programming, and convex optimization. Since differentiability of the objective and constraint functions is the only requirement for most local optimization methods, formulating a practical problem as a nonlinear optimization problem is relatively straightforward. The art in local optimization is in solving the problem (in the weakened sense of finding a locally optimal point), once it is formulated. In convex optimization these are reversed: The art and challenge is in problem formulation; once a problem is formulated as a convex optimization problem, it is relatively straightforward to solve it.
1.4.2 Global optimization
In global optimization, the true global solution of the optimization problem (1.1) is found; the compromise is efficiency. The worst-case complexity of global optimization methods grows exponentially with the problem sizes n and m; the hope is that in practice, for the particular problem instances encountered, the method is far faster. While this favorable situation does occur, it is not typical. Even small problems, with a few tens of variables, can take a very long time (e.g., hours or days) to solve.
Global optimization is used for problems with a small number of variables, where computing time is not critical, and the value of finding the true global solution is very high. One example from engineering design is worst-case analysis or verification of a high value or safety-critical system. Here the variables represent uncertain parameters, that can vary during manufacturing, or with the environment or operating condition. The objective function is a utility function, i.e., one for which smaller values are worse than larger values, and the constraints represent prior knowledge about the possible parameter values. The optimization problem (1.1) is the problem of finding the worst-case values of the parameters. If the worst-case value is acceptable, we can certify the system as safe or reliable (with respect to the parameter variations).
A local optimization method can rapidly find a set of parameter values that is bad, but not guaranteed to be the absolute worst possible. If a local optimization method finds parameter values that yield unacceptable performance, it has succeeded in determining that the system is not reliable. But a local optimization method cannot certify the system as reliable; it can only fail to find bad parameter values. A global optimization method, in contrast, will find the absolute worst values of the parameters, and if the associated performance is acceptable, can certify the system as safe. The cost is computation time, which can be very large, even for a relatively small number of parameters. But it may be worth it in cases where the value of certifying the performance is high, or the cost of being wrong about the reliability or safety is high.
1.4.3 Role of convex optimization in nonconvex problems
In this book we focus primarily on convex optimization problems, and applications that can be reduced to convex optimization problems. But convex optimization also plays an important role in problems that are not convex.

Initialization for local optimization
One obvious use is to combine convex optimization with a local optimization method. Starting with a nonconvex problem, we first find an approximate, but convex, formulation of the problem. By solving this approximate problem, which can be done easily and without an initial guess, we obtain the exact solution to the approximate convex problem. This point is then used as the starting point for a local optimization method, applied to the original nonconvex problem.
Convex heuristics for nonconvex optimization
Convex optimization is the basis for several heuristics for solving nonconvex problems. One interesting example we will see is the problem of finding a sparse vector x (i.e., one with few nonzero entries) that satisfies some constraints. While this is a difficult combinatorial problem, there are some simple heuristics, based on convex optimization, that often find fairly sparse solutions. (These are described in chapter 6.)
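A standard heuristic of this kind replaces the (nonconvex) count of nonzero entries with the convex ℓ1-norm. A sketch with made-up data, using the same modeling package as the earlier example:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
A = rng.standard_normal((20, 50))   # underdetermined: many x satisfy Ax = b
x_true = rng.standard_normal(50) * (rng.random(50) < 0.1)   # a sparse vector
b = A @ x_true

# l1 heuristic: minimizing ||x||_1 subject to the constraints tends to
# produce solutions with few nonzero entries.
x = cp.Variable(50)
prob = cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == b])
prob.solve()
print(np.sum(np.abs(x.value) > 1e-6))   # typically a fairly sparse solution
```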
Another broad example is given by randomized algorithms, in which an approximate solution to a nonconvex problem is found by drawing some number of candidates from a probability distribution, and taking the best one found as the approximate solution. Now suppose the family of distributions from which we will draw the candidates is parametrized, e.g., by its mean and covariance. We can then pose the question, which of these distributions gives us the smallest expected value of the objective? It turns out that this problem is sometimes a convex problem, and therefore efficiently solved. (See, e.g., exercise 11.23.)
Bounds for global optimization
Many methods for global optimization require a cheaply computable lower bound on the optimal value of the nonconvex problem. Two standard methods for doing this are based on convex optimization. In relaxation, each nonconvex constraint is replaced with a looser, but convex, constraint. In Lagrangian relaxation, the Lagrangian dual problem (described in chapter 5) is solved. This problem is convex, and provides a lower bound on the optimal value of the nonconvex problem.
1.5 Outline
The book is divided into three main parts, titled Theory, Applications, and Algorithms.
1.5.1 Part I: Theory
In part I, Theory, we cover basic definitions, concepts, and results from convex analysis and convex optimization. We make no attempt to be encyclopedic, and skew our selection of topics toward those that we think are useful in recognizing and formulating convex optimization problems. This is classical material, almost all of which can be found in other texts on convex analysis and optimization. We make no attempt to give the most general form of the results; for that the reader can refer to any of the standard texts on convex analysis.
Chapters 2 and 3 cover convex sets and convex functions, respectively. We give some common examples of convex sets and functions, as well as a number of convex calculus rules, i.e., operations on sets and functions that preserve convexity. Combining the basic examples with the convex calculus rules allows us to form (or perhaps more importantly, recognize) some fairly complicated convex sets and functions.
In chapter 4, Convex optimization problems, we give a careful treatment of optimization problems, and describe a number of transformations that can be used to reformulate problems. We also introduce some common subclasses of convex optimization, such as linear programming and geometric programming, and the more recently developed second-order cone programming and semidefinite programming.
Chapter 5 covers Lagrangian duality, which plays a central role in convex optimization. Here we give the classical Karush-Kuhn-Tucker conditions for optimality, and a local and global sensitivity analysis for convex optimization problems.
1.5.2 Part II: Applications
In part II, Applications, we describe a variety of applications of convex optimization, in areas like probability and statistics, computational geometry, and data fitting. We have described these applications in a way that is accessible, we hope, to a broad audience. To keep each application short, we consider only simple cases, sometimes adding comments about possible extensions. We are sure that our treatment of some of the applications will cause experts to cringe, and we apologize to them in advance. But our goal is to convey the flavor of the application, quickly and to a broad audience, and not to give an elegant, theoretically sound, or complete treatment. Our own backgrounds are in electrical engineering, in areas like control systems, signal processing, and circuit analysis and design. Although we include these topics in the courses we teach (using this book as the main text), only a few of these applications are broadly enough accessible to be included here.
The aim of part II is to show the reader, by example, how convex optimization can be applied in practice.
1.5.3 Part III: Algorithms
In part III, Algorithms, we describe numerical methods for solving convex optimization problems, focusing on Newton’s algorithm and interior-point methods. Part III is organized as three chapters, which cover unconstrained optimization, equality constrained optimization, and inequality constrained optimization, respectively. These chapters follow a natural hierarchy, in which solving a problem is reduced to solving a sequence of simpler problems. Quadratic optimization problems (including, e.g., least-squares) form the base of the hierarchy; they can be solved exactly by solving a set of linear equations. Newton’s method, developed in chapters 9 and 10, is the next level in the hierarchy. In Newton’s method, solving an unconstrained or equality constrained problem is reduced to solving a sequence of quadratic problems. In chapter 11, we describe interior-point methods, which form the top level of the hierarchy. These methods solve an inequality constrained problem by solving a sequence of unconstrained, or equality constrained, problems.
Overall we cover just a handful of algorithms, and omit entire classes of good methods, such as quasi-Newton, conjugate-gradient, bundle, and cutting-plane algorithms. For the methods we do describe, we give simplified variants, and not the latest, most sophisticated versions. Our choice of algorithms was guided by several criteria. We chose algorithms that are simple (to describe and implement), but also reliable and robust, and effective and fast enough for most problems.
Many users of convex optimization end up using (but not developing) standard software, such as a linear or semidefinite programming solver. For these users, the material in part III is meant to convey the basic flavor of the methods, and give some ideas of their basic attributes. For those few who will end up developing new algorithms, we think that part III serves as a good introduction.
1.5.4 Appendices
There are three appendices. The first lists some basic facts from mathematics that we use, and serves the secondary purpose of setting out our notation. The second appendix covers a fairly particular topic, optimization problems with quadratic objective and one quadratic constraint. These are nonconvex problems that nevertheless can be effectively solved, and we use the results in several of the applications described in part II.
The final appendix gives a brief introduction to numerical linear algebra, concentrating on methods that can exploit problem structure, such as sparsity, to gain efficiency. We do not cover a number of important topics, including roundoff analysis, or give any details of the methods used to carry out the required factorizations. These topics are covered by a number of excellent texts.
1.5.5 Comments on examples
In many places in the text (but particularly in parts II and III, which cover applications and algorithms, respectively) we illustrate ideas using specific examples. In some cases, the examples are chosen (or designed) specifically to illustrate our point; in other cases, the examples are chosen to be ‘typical’. This means that the examples were chosen as samples from some obvious or simple probability distribution. The dangers of drawing conclusions about algorithm performance from a few tens or hundreds of randomly generated examples are well known, so we will not repeat them here. These examples are meant only to give a rough idea of algorithm performance, or a rough idea of how the computational effort varies with problem dimensions, and not as accurate predictors of algorithm performance. In particular, your results may vary from ours.

1.5.6 Comments on exercises
Each chapter concludes with a set of exercises. Some involve working out the details of an argument or claim made in the text. Others focus on determining, or establishing, convexity of some given sets, functions, or problems; or more generally, convex optimization problem formulation. Some chapters include numerical exercises, which require some (but not much) programming in an appropriate high level language. The difficulty level of the exercises is mixed, and varies without warning from quite straightforward to rather tricky.
1.6 Notation
Our notation is more or less standard, with a few exceptions. In this section we describe our basic notation; a more complete list appears on page 697.
We use R to denote the set of real numbers, R+ to denote the set of nonnegative real numbers, and R++ to denote the set of positive real numbers. The set of real n-vectors is denoted Rn, and the set of real m × n matrices is denoted Rm×n. We delimit vectors and matrices with square brackets, with the components separated by space. We use parentheses to construct column vectors from comma separated lists. For example, if a, b, c ∈ R, we have
(a, b, c) = [a b c]^T,
which is an element of R3. The symbol 1 denotes a vector all of whose components are one (with dimension determined from context). The notation xi can refer to the ith component of the vector x, or to the ith element of a set or sequence of vectors x1 , x2 , . . .. The context, or the text, makes it clear which is meant.
We use S^k to denote the set of symmetric k × k matrices, S^k_+ to denote the set of symmetric positive semidefinite k × k matrices, and S^k_++ to denote the set of symmetric positive definite k × k matrices. The curled inequality symbol ≽ (and its strict form ≻) is used to denote generalized inequality: between vectors, it represents componentwise inequality; between symmetric matrices, it represents matrix inequality. With a subscript, the symbol ≼_K (or ≺_K) denotes generalized inequality with respect to the cone K (explained in §2.4.1).
Our notation for describing functions deviates a bit from standard notation, but we hope it will cause no confusion. We use the notation f : Rp → Rq to mean that f is an Rq-valued function on some subset of Rp, specifically, its domain, which we denote dom f. We can think of our use of the notation f : Rp → Rq as a declaration of the function type, as in a computer language: f : Rp → Rq means that the function f takes as argument a real p-vector, and returns a real q-vector. The set dom f, the domain of the function f, specifies the subset of Rp of points x for which f(x) is defined. As an example, we describe the logarithm function as log : R → R, with dom log = R++. The notation log : R → R means that the logarithm function accepts and returns a real number; dom log = R++ means that the logarithm is defined only for positive numbers.
We use Rn as a generic finite-dimensional vector space. We will encounter several other finite-dimensional vector spaces, e.g., the space of polynomials of a variable with a given maximum degree, or the space S^k of symmetric k × k matrices. By identifying a basis for a vector space, we can always identify it with Rn (where n is its dimension), and therefore the generic results, stated for the vector space Rn, can be applied. We usually leave it to the reader to translate general results or statements to other vector spaces. For example, any linear function f : Rn → R can be represented in the form f(x) = c^T x, where c ∈ Rn. The corresponding statement for the vector space S^k can be found by choosing a basis and translating. This results in the statement: any linear function f : S^k → R can be represented in the form f(X) = tr(CX), where C ∈ S^k.
Bibliography
Least-squares is a very old subject; see, for example, the treatise written (in Latin) by Gauss in the 1820s, and recently translated by Stewart [Gau95]. More recent work includes the books by Lawson and Hanson [LH95] and Björck [Bjö96]. References on linear programming can be found in chapter 4.
There are many good texts on local methods for nonlinear programming, including Gill, Murray, and Wright [GMW81], Nocedal and Wright [NW99], Luenberger [Lue84], and Bertsekas [Ber99].
Global optimization is covered in the books by Horst and Pardalos [HP94], Pinter [Pin95], and Tuy [Tuy98]. Using convex optimization to find bounds for nonconvex problems is an active research topic, and addressed in the books above on global optimization, the book by Ben-Tal and Nemirovski [BTN01, §4.3], and the survey by Nesterov, Wolkowicz, and Ye [NWY00]. Some notable papers on this subject are Goemans and Williamson [GW95], Nesterov [Nes00, Nes98], Ye [Ye99], and Parrilo [Par03]. Randomized methods are discussed in Motwani and Raghavan [MR95].
Convex analysis, the mathematics of convex sets, functions, and optimization problems, is a well developed subfield of mathematics. Basic references include the books by Rockafellar [Roc70], Hiriart-Urruty and Lemaréchal [HUL93, HUL01], Borwein and Lewis [BL00], and Bertsekas, Nedić, and Ozdaglar [Ber03]. More references on convex analysis can be found in chapters 2–5.
Nesterov and Nemirovski [NN94] were the first to point out that interior-point methods can solve many convex optimization problems; see also the references in chapter 11. The book by Ben-Tal and Nemirovski [BTN01] covers modern convex optimization, interior- point methods, and applications.
Solution methods for convex optimization that we do not cover in this book include subgradient methods [Sho85], bundle methods [HUL93], cutting-plane methods [Kel60, EM75, GLY96], and the ellipsoid method [Sho91, BGT81].
The idea that convex optimization problems are tractable is not new. It has long been recognized that the theory of convex optimization is far more straightforward (and complete) than the theory of general nonlinear optimization. In this context Rockafellar stated, in his 1993 SIAM Review survey paper [Roc93],
In fact the great watershed in optimization isn’t between linearity and nonlinearity, but convexity and nonconvexity.
The first formal argument that convex optimization problems are easier to solve than general nonlinear optimization problems was made by Nemirovski and Yudin, in their 1983 book Problem Complexity and Method Efficiency in Optimization [NY83]. They showed that the information-based complexity of convex optimization problems is far lower than that of general nonlinear optimization problems. A more recent book on this topic is Vavasis [Vav91].
The low (theoretical) complexity of interior-point methods is integral to modern research in this area. Much of the research focuses on proving that an interior-point (or other) method can solve some class of convex optimization problems with a number of operations that grows no faster than a polynomial of the problem dimensions and log(1/ε), where ε > 0 is the required accuracy. (We will see some simple results like these in chapter 11.) The first comprehensive work on this topic is the book by Nesterov and Nemirovski [NN94]. Other books include Ben-Tal and Nemirovski [BTN01, lecture 5] and Renegar [Ren01]. The polynomial-time complexity of interior-point methods for various convex optimization problems is in marked contrast to the situation for a number of nonconvex optimization problems, for which all known algorithms require, in the worst case, a number of operations that is exponential in the problem dimensions.
Convex optimization has been used in many applications areas, too numerous to cite here. Convex analysis is central in economics and finance, where it is the basis of many results. For example the separating hyperplane theorem, together with a no-arbitrage assumption, is used to deduce the existence of prices and risk-neutral probabilities (see, e.g., Luenberger [Lue95, Lue98] and Ross [Ros99]). Convex optimization, especially our ability to solve semidefinite programs, has recently received particular attention in automatic control theory. Applications of convex optimization in control theory can be found in the books by Boyd and Barratt [BB91], Boyd, El Ghaoui, Feron, and Balakrishnan [BEFB94], Dahleh and Diaz-Bobillo [DDB95], El Ghaoui and Niculescu [EN00], and Dullerud and Paganini [DP00]. A good example of embedded (convex) optimization is model predictive control, an automatic control technique that requires the solution of a (convex) quadratic program at each step. Model predictive control is now widely used in the chemical process control industry; see Morari and Zafirou [MZ89]. Another applications area where convex optimization (and especially, geometric programming) has a long history is electronic circuit design. Research papers on this topic include Fishburn and Dunlop [FD85], Sapatnekar, Rao, Vaidya, and Kang [SRVK93], and Hershenson, Boyd, and Lee [HBL01]. Luo [Luo03] gives a survey of applications in signal processing and communications. More references on applications of convex optimization can be found in chapters 4 and 6–8.
High quality implementations of recent interior-point methods for convex optimization problems are available in the LOQO [Van97] and MOSEK [MOS02] software packages, and the codes listed in chapter 11. Software systems for specifying optimization problems include AMPL [FGK99] and GAMS [BKMR98]. Both provide some support for recognizing problems that can be transformed to linear programs.
Part I Theory
Chapter 2 Convex sets
2.1 Affine and convex sets
2.1.1 Lines and line segments
Suppose x1 ̸= x2 are two points in Rn. Points of the form y = θx1 + (1 − θ)x2,
where θ ∈ R, form the line passing through x1 and x2. The parameter value θ = 0 corresponds to y = x2, and the parameter value θ = 1 corresponds to y = x1. Values of the parameter θ between 0 and 1 correspond to the (closed) line segment between x1 and x2.
Expressing y in the form
y = x2 + θ(x1 − x2)
gives another interpretation: y is the sum of the base point x2 (corresponding to θ = 0) and the direction x1 − x2 (which points from x2 to x1) scaled by the parameter θ. Thus, θ gives the fraction of the way from x2 to x1 where y lies. As θ increases from 0 to 1, the point y moves from x2 to x1; for θ > 1, the point y lies on the line beyond x1. This is illustrated in figure 2.1.

Figure 2.1 The line passing through x1 and x2 is described parametrically by θx1 + (1 − θ)x2, where θ varies over R. The line segment between x1 and x2, which corresponds to θ between 0 and 1, is shown darker.
2.1.2 Affine sets
A set C ⊆ Rn is affine if the line through any two distinct points in C lies in C, i.e., if for any x1, x2 ∈ C and θ ∈ R, we have θx1 + (1 − θ)x2 ∈ C. In other words, C contains the linear combination of any two points in C, provided the coefficients in the linear combination sum to one.
This idea can be generalized to more than two points. We refer to a point of the form θ1x1 + ··· + θkxk, where θ1 + ··· + θk = 1, as an affine combination of the points x1, . . . , xk. Using induction from the definition of affine set (i.e., that it contains every affine combination of two points in it), it can be shown that
an affine set contains every affine combination of its points: if C is an affine set, x1, . . . , xk ∈ C, and θ1 + ··· + θk = 1, then the point θ1x1 + ··· + θkxk also belongs to C.
If C is an affine set and x0 ∈ C, then the set
V =C−x0 ={x−x0 |x∈C}
is a subspace, i.e., closed under sums and scalar multiplication. To see this, suppose v1, v2 ∈ V and α, β ∈ R. Then we have v1 + x0 ∈ C and v2 + x0 ∈ C, and so
αv1 +βv2 +x0 =α(v1 +x0)+β(v2 +x0)+(1−α−β)x0 ∈C,
since C is affine, and α + β + (1 − α − β) = 1. We conclude that αv1 + βv2 ∈ V , since αv1 + βv2 + x0 ∈ C.
Thus, the affine set C can be expressed as
C=V +x0 ={v+x0 |v∈V},
i.e., as a subspace plus an offset. The subspace V associated with the affine set C does not depend on the choice of x0, so x0 can be chosen as any point in C. We define the dimension of an affine set C as the dimension of the subspace V = C−x0, where x0 is any element of C.
Example 2.1 Solution set of linear equations. The solution set of a system of linear equations, C = {x | Ax = b}, where A ∈ Rm×n and b ∈ Rm, is an affine set. To show this, suppose x1, x2 ∈ C, i.e., Ax1 = b, Ax2 = b. Then for any θ, we have

A(θx1 + (1 − θ)x2) = θAx1 + (1 − θ)Ax2 = θb + (1 − θ)b = b,
which shows that the affine combination θx1 + (1 − θ)x2 is also in C. The subspace
associated with the affine set C is the nullspace of A.
We also have a converse: every affine set can be expressed as the solution set of a system of linear equations.
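Example 2.1 is easy to check numerically. The sketch below is an added illustration, not from the book; numpy and the random data are assumptions. It verifies that affine combinations of solutions of Ax = b remain solutions, and that differences of solutions lie in the nullspace of A:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2, 4))      # fat matrix: many solutions
    b = rng.standard_normal(2)

    x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
    # Another solution: add a nullspace vector, taken from the SVD of A.
    _, _, Vt = np.linalg.svd(A)
    x1 = x0 + Vt[-1]                      # Vt[-1] lies in null(A)

    theta = 3.7                           # any theta in R, not just [0, 1]
    y = theta * x0 + (1 - theta) * x1
    assert np.allclose(A @ y, b)          # affine combination still solves Ax = b
    assert np.allclose(A @ (x1 - x0), 0)  # difference lies in the nullspace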
The set of all affine combinations of points in some set C ⊆ Rn is called the affine hull of C, and denoted aff C:
aff C = {θ1x1 + ··· + θkxk | x1, . . . , xk ∈ C, θ1 + ··· + θk = 1}.
The affine hull is the smallest affine set that contains C, in the following sense: if
S is any affine set with C ⊆ S, then aff C ⊆ S.
2.1.3 Affine dimension and relative interior
We define the affine dimension of a set C as the dimension of its affine hull. Affine dimension is useful in the context of convex analysis and optimization, but is not always consistent with other definitions of dimension. As an example consider the unit circle in R2, i.e., {x ∈ R2 | x1² + x2² = 1}. Its affine hull is all of R2, so its affine dimension is two. By most definitions of dimension, however, the unit circle in R2 has dimension one.
If the affine dimension of a set C ⊆ Rn is less than n, then the set lies in the affine set aff C ̸= Rn. We define the relative interior of the set C, denoted relint C, as its interior relative to aff C:
relint C = {x ∈ C | B(x, r) ∩ aff C ⊆ C for some r > 0},
where B(x, r) = {y | ∥y − x∥ ≤ r}, the ball of radius r and center x in the norm ∥ · ∥. (Here ∥ · ∥ is any norm; all norms define the same relative interior.) We can then define the relative boundary of a set C as cl C \ relint C , where cl C is the closure of C.
Example 2.2 Consider a square in the (x1, x2)-plane in R3, defined as

C = {x ∈ R3 | −1 ≤ x1 ≤ 1, −1 ≤ x2 ≤ 1, x3 = 0}.

Its affine hull is the (x1, x2)-plane, i.e., aff C = {x ∈ R3 | x3 = 0}. The interior of C is empty, but the relative interior is

relint C = {x ∈ R3 | −1 < x1 < 1, −1 < x2 < 1, x3 = 0}.

2.2 Some important examples

2.2.2 Euclidean balls and ellipsoids

A (Euclidean) ball in Rn has the form

B(xc, r) = {x | ∥x − xc∥2 ≤ r},

where r > 0, and ∥ · ∥2 denotes the Euclidean norm, i.e., ∥u∥2 = (uT u)1/2. The vector xc is the center of the ball and the scalar r is its radius; B(xc, r) consists of all points within a distance r of the center xc. Another common representation for the Euclidean ball is
B(xc,r)={xc +ru|∥u∥2 ≤1}.
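The two ball representations can be checked against each other numerically. A sketch (added here, not from the book; numpy and the sample data are assumptions) maps points of the unit ball through xc + ru and confirms ∥x − xc∥2 ≤ r:

    import numpy as np

    rng = np.random.default_rng(2)
    xc, r = np.array([1.0, -2.0]), 0.5

    # Points from the second representation: xc + r*u with ||u||_2 <= 1.
    u = rng.standard_normal((1000, 2))
    u /= np.maximum(np.linalg.norm(u, axis=1, keepdims=True), 1.0)  # clip into unit ball
    pts = xc + r * u

    # Every such point satisfies the first description ||x - xc||_2 <= r.
    assert np.all(np.linalg.norm(pts - xc, axis=1) <= r + 1e-12)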
Figure 2.9 An ellipsoid in R2, shown shaded. The center xc is shown as a dot, and the two semi-axes are shown as line segments.
A Euclidean ball is a convex set: if ∥x1 − xc∥2 ≤ r, ∥x2 − xc∥2 ≤ r, and 0 ≤ θ ≤ 1, then

∥θx1 + (1 − θ)x2 − xc∥2 = ∥θ(x1 − xc) + (1 − θ)(x2 − xc)∥2
≤ θ∥x1 − xc∥2 + (1 − θ)∥x2 − xc∥2
≤ r.
(Here we use the homogeneity property and triangle inequality for ∥·∥2; see §A.1.2.)
A related family of convex sets is the ellipsoids, which have the form
E ={x|(x−xc)TP−1(x−xc)≤1}, (2.3)
where P = P T ≻ 0, i.e., P is symmetric and positive definite. The vector xc ∈ Rn is the center of the ellipsoid. The matrix P determines how far the ellipsoid extends in every direction from xc; the lengths of the semi-axes of E are given by √λi, where λi are the eigenvalues of P. A ball is an ellipsoid with P = r2I. Figure 2.9 shows an ellipsoid in R2.
Another common representation of an ellipsoid is
E ={xc +Au|∥u∥2 ≤1}, (2.4)
where A is square and nonsingular. In this representation we can assume without loss of generality that A is symmetric and positive definite. By taking A = P1/2, this representation gives the ellipsoid defined in (2.3). When the matrix A in (2.4) is symmetric positive semidefinite but singular, the set in (2.4) is called a degenerate ellipsoid; its affine dimension is equal to the rank of A. Degenerate ellipsoids are also convex.
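The correspondence between (2.3) and (2.4) with A = P1/2 can be verified numerically. A sketch (added here; numpy and the particular P are assumptions, not from the book):

    import numpy as np

    rng = np.random.default_rng(3)
    M = rng.standard_normal((2, 2))
    P = M @ M.T + 0.1 * np.eye(2)          # symmetric positive definite
    xc = np.array([0.5, 1.0])

    # A = P^{1/2} via the eigendecomposition of P.
    w, Q = np.linalg.eigh(P)
    A = Q @ np.diag(np.sqrt(w)) @ Q.T

    # Sample points from representation (2.4): xc + A u, ||u||_2 <= 1 ...
    u = rng.standard_normal((1000, 2))
    u /= np.maximum(np.linalg.norm(u, axis=1, keepdims=True), 1.0)
    x = xc + u @ A.T

    # ... and verify representation (2.3): (x - xc)^T P^{-1} (x - xc) <= 1.
    z = np.linalg.solve(P, (x - xc).T).T
    vals = np.einsum('ij,ij->i', x - xc, z)
    assert np.all(vals <= 1 + 1e-9)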
2.2.3 Norm balls and norm cones
Suppose ∥·∥ is any norm on Rn (see §A.1.2). From the general properties of norms it can be shown that a norm ball of radius r and center xc, given by {x | ∥x−xc∥ ≤ r}, is convex. The norm cone associated with the norm ∥ · ∥ is the set
C ={(x,t)|∥x∥≤t}⊆Rn+1.
Figure 2.10 Boundary of second-order cone in R3, {(x1, x2, t) | (x1² + x2²)1/2 ≤ t}.
It is (as the name suggests) a convex cone.
Example 2.3 The second-order cone is the norm cone for the Euclidean norm, i.e., C = {(x,t)∈Rn+1 |∥x∥2 ≤t}
= {(x, t) | [x; t]T [I 0; 0 −1] [x; t] ≤ 0, t ≥ 0}.
The second-order cone is also known by several other names. It is called the quadratic cone, since it is defined by a quadratic inequality. It is also called the Lorentz cone or ice-cream cone. Figure 2.10 shows the second-order cone in R3.
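Membership in the second-order cone is a single norm comparison; the equivalent quadratic-form test of example 2.3 can be coded alongside it. A sketch (an added illustration, not from the book; numpy assumed):

    import numpy as np

    def in_soc_norm(x, t):
        # (x, t) lies in the second-order cone iff ||x||_2 <= t.
        return np.linalg.norm(x) <= t

    def in_soc_quadratic(x, t):
        # Equivalent test from example 2.3:
        # [x; t]^T diag(I, -1) [x; t] <= 0 and t >= 0.
        return x @ x - t * t <= 0 and t >= 0

    x = np.array([3.0, 4.0])                 # ||x||_2 = 5
    assert in_soc_norm(x, 5.0) and in_soc_quadratic(x, 5.0)
    assert not in_soc_norm(x, 4.9) and not in_soc_quadratic(x, 4.9)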
2.2.4 Polyhedra
A polyhedron is defined as the solution set of a finite number of linear equalities and inequalities:
P ={x|aTj x≤bj, j =1,…,m, cTj x=dj, j =1,…,p}. (2.5)
A polyhedron is thus the intersection of a finite number of halfspaces and hyperplanes. Affine sets (e.g., subspaces, hyperplanes, lines), rays, line segments, and halfspaces are all polyhedra. It is easily shown that polyhedra are convex sets. A bounded polyhedron is sometimes called a polytope, but some authors use the opposite convention (i.e., polytope for any set of the form (2.5), and polyhedron when it is bounded). Figure 2.11 shows an example of a polyhedron defined as the intersection of five halfspaces.

Figure 2.11 The polyhedron P (shown shaded) is the intersection of five halfspaces, with outward normal vectors a1, . . . , a5.
It will be convenient to use the compact notation

P = {x | Ax ≼ b, Cx = d}   (2.6)

for (2.5), where A is the matrix with rows aT1, . . . , aTm, C is the matrix with rows cT1, . . . , cTp, and the symbol ≼ denotes vector inequality or componentwise inequality in Rm: u ≼ v means ui ≤ vi for i = 1, . . . , m.

Example 2.4 The nonnegative orthant is the set of points with nonnegative components, i.e.,

Rn+ = {x ∈ Rn | xi ≥ 0, i = 1, . . . , n} = {x ∈ Rn | x ≽ 0}.

(Here R+ denotes the set of nonnegative numbers: R+ = {x ∈ R | x ≥ 0}.) The nonnegative orthant is a polyhedron and a cone (and therefore called a polyhedral cone).

Simplexes

Simplexes are another important family of polyhedra. Suppose the k + 1 points v0, . . . , vk ∈ Rn are affinely independent, which means v1 − v0, . . . , vk − v0 are linearly independent. The simplex determined by them is given by

C = conv{v0, . . . , vk} = {θ0v0 + ··· + θkvk | θ ≽ 0, 1T θ = 1},   (2.7)
where 1 denotes the vector with all entries one. The affine dimension of this simplex is k, so it is sometimes referred to as a k-dimensional simplex in Rn.
Example 2.5 Some common simplexes. A 1-dimensional simplex is a line segment; a 2-dimensional simplex is a triangle (including its interior); and a 3-dimensional simplex is a tetrahedron.
The unit simplex is the n-dimensional simplex determined by the zero vector and the unit vectors, i.e., 0, e1, . . . , en ∈ Rn. It can be expressed as the set of vectors that satisfy
x ≽ 0, 1T x ≤ 1.
The probability simplex is the (n − 1)-dimensional simplex determined by the unit
vectors e1, . . . , en ∈ Rn. It is the set of vectors that satisfy x ≽ 0, 1T x = 1.
Vectors in the probability simplex correspond to probability distributions on a set with n elements, with xi interpreted as the probability of the ith element.
To describe the simplex (2.7) as a polyhedron, i.e., in the form (2.6), we proceed as follows. By definition, x ∈ C if and only if x = θ0v0 + θ1v1 + ··· + θkvk for some θ ≽ 0 with 1T θ = 1. Equivalently, if we define y = (θ1, . . . , θk) and

B = [v1 − v0 ··· vk − v0] ∈ Rn×k,

we can say that x ∈ C if and only if
x = v0 + By (2.8)
for some y ≽ 0 with 1T y ≤ 1. Now we note that affine independence of the points v0,…,vk implies that the matrix B has rank k. Therefore there exists a nonsingular matrix A = (A1, A2) ∈ Rn×n such that
AB = [A1; A2] B = [I; 0].
Multiplying (2.8) on the left with A, we obtain
A1x = A1v0 + y, A2x = A2v0.
From this we see that x ∈ C if and only if A2x = A2v0, and the vector y = A1x − A1v0 satisfies y ≽ 0 and 1T y ≤ 1. In other words we have x ∈ C if and only if
A2x = A2v0, A1x ≽ A1v0, 1T A1x ≤ 1 + 1T A1v0,
which is a set of linear equalities and inequalities in x, and so describes a polyhedron.
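The construction above is entirely algorithmic, so it can be carried out numerically. The sketch below is an added illustration, not from the book; numpy is assumed, and building A = (A1, A2) from the full SVD of B is one of several valid choices. It converts a random simplex to the polyhedral description and checks it on a sample point:

    import numpy as np

    rng = np.random.default_rng(4)
    n, k = 4, 2
    V = rng.standard_normal((k + 1, n))        # v0,...,vk, affinely independent w.h.p.
    v0, B = V[0], (V[1:] - V[0]).T             # B = [v1-v0 ... vk-v0], n x k

    # Build nonsingular A = (A1, A2) with A1 B = I, A2 B = 0, via the SVD of B.
    U, s, Wt = np.linalg.svd(B)                # full SVD: U is n x n
    A1 = Wt.T @ np.diag(1.0 / s) @ U[:, :k].T  # k x n,     A1 B = I
    A2 = U[:, k:].T                            # (n-k) x n, A2 B = 0

    # Sample a point of the simplex: theta >= 0, 1^T theta = 1.
    theta = rng.random(k + 1)
    theta /= theta.sum()
    x = theta @ V

    # Polyhedral description: A2 x = A2 v0, A1 x >= A1 v0,
    # 1^T A1 x <= 1 + 1^T A1 v0.
    assert np.allclose(A2 @ x, A2 @ v0)
    assert np.all(A1 @ x >= A1 @ v0 - 1e-9)
    assert np.ones(k) @ (A1 @ x) <= 1 + np.ones(k) @ (A1 @ v0) + 1e-9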
Convex hull description of polyhedra
The convex hull of the finite set {v1,…,vk} is conv{v1,…,vk}={θ1v1 +···+θkvk |θ≽0, 1Tθ=1}.
This set is a polyhedron, and bounded, but (except in special cases, e.g., a simplex) it is not simple to express it in the form (2.5), i.e., by a set of linear equalities and inequalities.
A generalization of this convex hull description is
{θ1v1 +···+θkvk |θ1 +···+θm =1, θi ≥0, i=1,…,k}, (2.9)
where m ≤ k. Here we consider nonnegative linear combinations of vi, but only the first m coefficients are required to sum to one. Alternatively, we can inter- pret (2.9) as the convex hull of the points v1,…,vm, plus the conic hull of the points vm+1, . . . , vk. The set (2.9) defines a polyhedron, and conversely, every polyhedron can be represented in this form (although we will not show this).
The question of how a polyhedron is represented is subtle, and has very im- portant practical consequences. As a simple example consider the unit ball in the l∞-norm in Rn,
C ={x||xi|≤1, i=1,…,n}.
The set C can be described in the form (2.5) with 2n linear inequalities ±eTi x ≤ 1, where ei is the ith unit vector. To describe it in the convex hull form (2.9) requires at least 2^n points:

C = conv{v1, . . . , v2^n},

where v1, . . . , v2^n are the 2^n vectors all of whose components are 1 or −1. Thus the size of the two descriptions differs greatly, for large n.
2.2.5 The positive semidefinite cone
We use the notation Sn to denote the set of symmetric n × n matrices, Sn ={X ∈Rn×n |X =XT},
which is a vector space with dimension n(n + 1)/2. We use the notation Sn+ to denote the set of symmetric positive semidefinite matrices:
Sn+ ={X∈Sn |X≽0},
and the notation Sn++ to denote the set of symmetric positive definite matrices:
Sn++ ={X ∈Sn |X ≻0}.
(This notation is meant to be analogous to R+, which denotes the nonnegative reals, and R++, which denotes the positive reals.)
Figure 2.12 Boundary of positive semidefinite cone in S2.
The set Sn+ is a convex cone: if θ1,θ2 ≥ 0 and A, B ∈ Sn+, then θ1A+θ2B ∈ Sn+. This can be seen directly from the definition of positive semidefiniteness: for any x ∈ Rn, we have
xT (θ1A + θ2B)x = θ1 xT Ax + θ2 xT Bx ≥ 0,

if A ≽ 0, B ≽ 0 and θ1, θ2 ≥ 0.
Example 2.6 Positive semidefinite cone in S2. We have
X = [x y; y z] ∈ S2+ ⇐⇒ x ≥ 0, z ≥ 0, xz ≥ y².
The boundary of this cone is shown in figure 2.12, plotted in R3 as (x, y, z).
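Example 2.6 can be spot-checked numerically: the entrywise conditions should agree with an eigenvalue test on random 2 × 2 symmetric matrices. A sketch (added here; numpy, the sampling scheme, and the tolerances are assumptions):

    import numpy as np

    rng = np.random.default_rng(5)
    for _ in range(1000):
        x, y, z = rng.uniform(-1, 1, size=3)
        X = np.array([[x, y], [y, z]])
        psd = np.all(np.linalg.eigvalsh(X) >= -1e-12)        # X in S^2_+
        entrywise = x >= 0 and z >= 0 and x * z >= y * y - 1e-12
        assert psd == entrywise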
2.3 Operations that preserve convexity
In this section we describe some operations that preserve convexity of sets, or allow us to construct convex sets from others. These operations, together with the simple examples described in §2.2, form a calculus of convex sets that is useful for determining or establishing convexity of sets.
2.3.1 Intersection
Convexity is preserved under intersection: if S1 and S2 are convex, then S1 ∩ S2 is convex. This property extends to the intersection of an infinite number of sets: if Sα is convex for every α ∈ A, then ∩α∈A Sα is convex. (Subspaces, affine sets, and convex cones are also closed under arbitrary intersections.) As a simple example, a polyhedron is the intersection of halfspaces and hyperplanes (which are convex), and therefore is convex.
Example 2.7 The positive semidefinite cone Sn+ can be expressed as

∩z̸=0 {X ∈ Sn | zT Xz ≥ 0}.
For each z ̸= 0, zT Xz is a (not identically zero) linear function of X, so the sets
{X ∈ Sn | zT Xz ≥ 0}
are, in fact, halfspaces in Sn. Thus the positive semidefinite cone is the intersection
of an infinite number of halfspaces, and so is convex.
Example 2.8 We consider the set
S={x∈Rm ||p(t)|≤1for|t|≤π/3}, (2.10)
where p(t) = x1 cos t + x2 cos 2t + ··· + xm cos mt. The set S can be expressed as the intersection of an infinite number of slabs: S = ∩|t|≤π/3 St, where
St ={x| −1≤(cost,…,cosmt)Tx≤1},
and so is convex. The definition and the set are illustrated in figures 2.13 and 2.14, for m = 2.
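Membership in S can be tested, approximately, by discretizing the infinite family of slabs over a grid of t values. A sketch (an added illustration; numpy, the grid size, and the tolerance are assumptions):

    import numpy as np

    def in_S(x, grid=1000):
        # Approximate test of |p(t)| <= 1 for |t| <= pi/3, where
        # p(t) = sum_k x_k cos(kt); the grid discretizes the slabs S_t.
        t = np.linspace(-np.pi / 3, np.pi / 3, grid)
        k = np.arange(1, len(x) + 1)
        p = np.cos(np.outer(t, k)) @ x
        return np.all(np.abs(p) <= 1 + 1e-9)

    print(in_S(np.array([0.5, 0.25])))   # True: small coefficients
    print(in_S(np.array([2.0, 0.0])))    # False: p(0) = 2 violates |p(t)| <= 1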
In the examples above we establish convexity of a set by expressing it as a (possibly infinite) intersection of halfspaces. We will see in §2.5.1 that a converse holds: every closed convex set S is a (usually infinite) intersection of halfspaces. In fact, a closed convex set S is the intersection of all halfspaces that contain it:

S = ∩ {H | H halfspace, S ⊆ H}.

Figure 2.13 Three trigonometric polynomials associated with points in the set S defined in (2.10), for m = 2. The trigonometric polynomial plotted with dashed line type is the average of the other two.

Figure 2.14 The set S defined in (2.10), for m = 2, is shown as the white area in the middle of the plot. The set is the intersection of an infinite number of slabs (20 of which are shown), hence convex.

2.3.2 Affine functions
Recall that a function f : Rn → Rm is affine if it is a sum of a linear function and a constant, i.e., if it has the form f(x) = Ax+b, where A ∈ Rm×n and b ∈ Rm. Suppose S ⊆ Rn is convex and f : Rn → Rm is an affine function. Then the image of S under f,
f(S) = {f(x) | x ∈ S},
is convex. Similarly, if f : Rk → Rn is an affine function, the inverse image of S under f,
f−1(S) = {x | f(x) ∈ S},

is convex.
Two simple examples are scaling and translation. If S ⊆ Rn is convex, α ∈ R,
and a ∈ Rn, then the sets αS and S + a are convex, where
αS = {αx | x ∈ S}, S + a = {x + a | x ∈ S}.
The projection of a convex set onto some of its coordinates is convex: if S ⊆ Rm × Rn is convex, then
T = {x1 ∈ Rm | (x1,x2) ∈ S for some x2 ∈ Rn}
is convex.
The sum of two sets is defined as
S1 +S2 ={x+y|x∈S1, y∈S2}.
If S1 and S2 are convex, then S1 + S2 is convex. To see this, if S1 and S2 are
convex, then so is the direct or Cartesian product
S1 ×S2 ={(x1,x2)|x1 ∈S1, x2 ∈S2}.
The image of this set under the linear function f(x1,x2) = x1 +x2 is the sum S1 +S2.
We can also consider the partial sum of S1, S2 ∈ Rn × Rm, defined as

S = {(x, y1 + y2) | (x, y1) ∈ S1, (x, y2) ∈ S2},
where x ∈ Rn and yi ∈ Rm. For m = 0, the partial sum gives the intersection of S1 and S2; for n = 0, it is set addition. Partial sums of convex sets are convex (see exercise 2.16).
Example 2.9 Polyhedron. The polyhedron {x | Ax ≼ b, Cx = d} can be expressed as the inverse image of the Cartesian product of the nonnegative orthant and the origin under the affine function f(x) = (b − Ax, d − Cx):
{x|Ax≼b, Cx=d}={x|f(x)∈Rm+ ×{0}}.
Example 2.10 Solution set of linear matrix inequality. The condition
A(x)=x1A1 +···+xnAn ≼B, (2.11)
where B, Ai ∈ Sm, is called a linear matrix inequality (LMI) in x. (Note the similarity to an ordinary linear inequality,
aTx=x1a1 +···+xnan ≤b,
with b, ai ∈ R.)
The solution set of a linear matrix inequality, {x | A(x) ≼ B}, is convex. Indeed, it is the inverse image of the positive semidefinite cone under the affine function f : Rn → Sm given by f(x) = B − A(x).
Example 2.11 Hyperbolic cone. The set

{x | xT Px ≤ (cT x)², cT x ≥ 0},

where P ∈ Sn+ and c ∈ Rn, is convex, since it is the inverse image of the second-order cone,

{(z, t) | zT z ≤ t², t ≥ 0},

under the affine function f(x) = (P1/2 x, cT x).
Example 2.12 Ellipsoid. The ellipsoid
E ={x|(x−xc)TP−1(x−xc)≤1},
where P ∈ Sn++, is the image of the unit Euclidean ball {u | ∥u∥2 ≤ 1} under the affine mapping f(u) = P1/2u+xc. (It is also the inverse image of the unit ball under the affine mapping g(x) = P−1/2(x − xc).)
2.3.3 Linear-fractional and perspective functions
In this section we explore a class of functions, called linear-fractional, that is more general than affine but still preserves convexity.
The perspective function
We define the perspective function P : Rn+1 → Rn, with domain dom P = Rn × R++, as P(z,t) = z/t. (Here R++ denotes the set of positive numbers: R++ = {x ∈ R | x > 0}.) The perspective function scales or normalizes vectors so the last component is one, and then drops the last component.
Remark 2.1 We can interpret the perspective function as the action of a pin-hole camera. A pin-hole camera (in R3) consists of an opaque horizontal plane x3 = 0, with a single pin-hole at the origin, through which light can pass, and a horizontal image plane x3 = −1. An object at x, above the camera (i.e., with x3 > 0), forms an image at the point −(x1/x3, x2/x3, 1) on the image plane. Dropping the last component of the image point (since it is always −1), the image of a point at x appears at y = −(x1/x3, x2/x3) = −P(x) on the image plane. This is illustrated in figure 2.15.

Figure 2.15 Pin-hole camera interpretation of perspective function. The dark horizontal line represents the plane x3 = 0 in R3, which is opaque, except for a pin-hole at the origin. Objects or light sources above the plane appear on the image plane x3 = −1, which is shown as the lighter horizontal line. The mapping of the position of a source to the position of its image is related to the perspective function.
If C ⊆ dom P is convex, then its image
P(C) = {P(x) | x ∈ C}
is convex. This result is certainly intuitive: a convex object, viewed through a pin-hole camera, yields a convex image. To establish this fact we show that line segments are mapped to line segments under the perspective function. (This too
makes sense: a line segment, viewed through a pin-hole camera, yields a line segment image.) Suppose that x = (x̃, xn+1), y = (ỹ, yn+1) ∈ Rn+1 with xn+1 > 0, yn+1 > 0. Then for 0 ≤ θ ≤ 1,

P(θx + (1 − θ)y) = (θx̃ + (1 − θ)ỹ)/(θxn+1 + (1 − θ)yn+1) = μP(x) + (1 − μ)P(y),

where

μ = θxn+1/(θxn+1 + (1 − θ)yn+1) ∈ [0, 1].
This correspondence between θ and μ is monotonic: as θ varies between 0 and 1 (which sweeps out the line segment [x, y]), μ varies between 0 and 1 (which sweeps out the line segment [P (x), P (y)]). This shows that P ([x, y]) = [P (x), P (y)].
Now suppose C is convex with C ⊆ domP (i.e., xn+1 > 0 for all x ∈ C), and x, y ∈ C. To establish convexity of P(C) we need to show that the line segment [P (x), P (y)] is in P (C). But this line segment is the image of the line segment [x,y] under P, and so lies in P(C).
The inverse image of a convex set under the perspective function is also convex: if C ⊆ Rn is convex, then
P−1(C)={(x,t)∈Rn+1 |x/t∈C, t>0}
is convex. To show this, suppose (x,t) ∈ P−1(C), (y,s) ∈ P−1(C), and 0 ≤ θ ≤ 1.
We need to show that

θ(x, t) + (1 − θ)(y, s) ∈ P−1(C),

i.e., that

(θx + (1 − θ)y)/(θt + (1 − θ)s) ∈ C
(θt + (1 − θ)s > 0 is obvious). This follows from

(θx + (1 − θ)y)/(θt + (1 − θ)s) = μ(x/t) + (1 − μ)(y/s),

where

μ = θt/(θt + (1 − θ)s) ∈ [0, 1].
Linear-fractional functions

A linear-fractional function is formed by composing the perspective function with
an affine function. Suppose g : Rn → Rm+1 is affine, i.e.,
g(x) = [A; cT] x + [b; d],   (2.12)
where A ∈ Rm×n, b ∈ Rm, c ∈ Rn, and d ∈ R. The function f : Rn → Rm given
by f = P ◦ g, i.e.,
f(x)=(Ax+b)/(cTx+d), domf ={x|cTx+d>0}, (2.13)
is called a linear-fractional (or projective) function. If c = 0 and d > 0, the domain of f is Rn, and f is an affine function. So we can think of affine and linear functions as special cases of linear-fractional functions.
Remark 2.2 Projective interpretation. It is often convenient to represent a linear- fractional function as a matrix
Q = [A b; cT d] ∈ R(m+1)×(n+1)   (2.14)
that acts on (multiplies) points of form (x, 1), which yields (Ax + b, cT x + d). This result is then scaled or normalized so that its last component is one, which yields (f (x), 1).
This representation can be interpreted geometrically by associating Rn with a set of rays in Rn+1 as follows. With each point z in Rn we associate the (open) ray P(z) = {t(z,1) | t > 0} in Rn+1. The last component of this ray takes on positive values. Conversely any ray in Rn+1, with base at the origin and last component which takes on positive values, can be written as P(v) = {t(v,1) | t ≥ 0} for some v ∈ Rn. This (projective) correspondence P between Rn and the halfspace of rays with positive last component is one-to-one and onto.
The linear-fractional function (2.13) can be expressed as f(x) = P−1(QP(x)).
Thus, we start with x ∈ domf, i.e., cTx+d > 0. We then form the ray P(x) in Rn+1. The linear transformation with matrix Q acts on this ray to produce another ray QP(x). Since x ∈ domf, the last component of this ray assumes positive values. Finally we take the inverse projective transformation to recover f(x).
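The projective representation is direct to implement: form Q, let it act on (x, 1), and renormalize so the last component is one. The sketch below (an added illustration; numpy is assumed, and the data reproduce the function used in figure 2.16) checks the result against formula (2.13):

    import numpy as np

    A = np.eye(2)                 # f(x) = x / (x1 + x2 + 1), as in figure 2.16
    b = np.zeros(2)
    c = np.ones(2)
    d = 1.0

    Q = np.block([[A, b[:, None]], [c[None, :], np.array([[d]])]])

    def f(x):
        z = Q @ np.append(x, 1.0)          # acts on the ray through (x, 1)
        assert z[-1] > 0                   # x must lie in dom f: c^T x + d > 0
        return z[:-1] / z[-1]              # renormalize so last component is 1

    x = np.array([0.25, -0.5])
    direct = (A @ x + b) / (c @ x + d)     # formula (2.13)
    assert np.allclose(f(x), direct)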
Figure 2.16 Left. A set C ⊆ R2. The dashed line shows the boundary of the domain of the linear-fractional function f(x) = x/(x1 + x2 + 1) with domf = {(x1,x2) | x1 +x2 +1 > 0}. Right. Image of C under f. The dashed line shows the boundary of the domain of f−1.
Like the perspective function, linear-fractional functions preserve convexity. If C is convex and lies in the domain of f (i.e., cTx+d > 0 for x ∈ C), then its image f(C) is convex. This follows immediately from results above: the image of C under the affine mapping (2.12) is convex, and the image of the resulting set under the perspective function P, which yields f(C), is convex. Similarly, if C ⊆ Rm is convex, then the inverse image f−1(C) is convex.
Example 2.13 Conditional probabilities. Suppose u and v are random variables that take on values in {1, . . . , n} and {1, . . . , m}, respectively, and let pij denote prob(u = i,v = j). Then the conditional probability fij = prob(u = i|v = j) is given by
fij = pij / (p1j + ··· + pnj).
Thus f is obtained by a linear-fractional mapping from p.
It follows that if C is a convex set of joint probabilities for (u, v), then the associated
set of conditional probabilities of u given v is also convex.
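Concretely, the map from joint to conditional probabilities divides each column of the matrix of joint probabilities by its column sum, a linear-fractional operation applied entrywise. A sketch (added here; numpy and the random joint distribution are assumptions):

    import numpy as np

    rng = np.random.default_rng(6)
    P = rng.random((3, 4))
    P /= P.sum()                             # joint distribution p_ij = prob(u=i, v=j)

    F = P / P.sum(axis=0, keepdims=True)     # f_ij = p_ij / sum_k p_kj = prob(u=i | v=j)
    assert np.allclose(F.sum(axis=0), 1.0)   # each column of F is a distribution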
Figure 2.16 shows a set C ⊆ R2, and its image under the linear-fractional function

f(x) = (1/(x1 + x2 + 1)) x,   dom f = {(x1, x2) | x1 + x2 + 1 > 0}.
2.4 Generalized inequalities
2.4.1 Proper cones and generalized inequalities
A cone K ⊆ Rn is called a proper cone if it satisfies the following:
• K is convex.
• K is closed.
• K is solid, which means it has nonempty interior.
• K is pointed, which means that it contains no line (or equivalently, x ∈ K, −x∈K =⇒ x=0).
A proper cone K can be used to define a generalized inequality, which is a partial ordering on Rn that has many of the properties of the standard ordering on R. We associate with the proper cone K the partial ordering on Rn defined by
x≼K y ⇐⇒ y−x∈K.
We also write x ≽K y for y ≼K x. Similarly, we define an associated strict partial
ordering by
x≺K y ⇐⇒ y−x∈intK,
and write x ≻K y for y ≺K x. (To distinguish the generalized inequality ≼K from the strict generalized inequality, we sometimes refer to ≼K as the nonstrict generalized inequality.)
When K = R+, the partial ordering ≼K is the usual ordering ≤ on R, and the strict partial ordering ≺K is the same as the usual strict ordering < on R. So generalized inequalities include as a special case ordinary (nonstrict and strict) inequality in R.

Example 2.14 Nonnegative orthant and componentwise inequality. The nonnegative orthant K = Rn+ is a proper cone. The associated generalized inequality ≼K corresponds to componentwise inequality between vectors: x ≼K y means that xi ≤ yi, i = 1, . . . , n. The associated strict inequality corresponds to componentwise strict inequality: x ≺K y means that xi < yi, i = 1, . . . , n.

The nonstrict and strict partial orderings associated with the nonnegative orthant arise so frequently that we drop the subscript Rn+; it is understood when the symbol ≼ or ≺ appears between vectors.

Example 2.15 Positive semidefinite cone and matrix inequality. The positive semidefinite cone Sn+ is a proper cone in Sn. The associated generalized inequality ≼K is the usual matrix inequality: X ≼K Y means Y − X is positive semidefinite. The interior of Sn+ (in Sn) consists of the positive definite matrices, so the strict generalized inequality also agrees with the usual strict inequality between symmetric matrices: X ≺K Y means Y − X is positive definite.

Here, too, the partial ordering arises so frequently that we drop the subscript: for symmetric matrices we write simply X ≼ Y or X ≺ Y. It is understood that the generalized inequalities are with respect to the positive semidefinite cone.

Example 2.16 Cone of polynomials nonnegative on [0, 1]. Let K be defined as

K = {c ∈ Rn | c1 + c2t + ··· + cntn−1 ≥ 0 for t ∈ [0, 1]},   (2.15)

i.e., K is the cone of (coefficients of) polynomials of degree n − 1 that are nonnegative on the interval [0, 1]. It can be shown that K is a proper cone; its interior is the set of coefficients of polynomials that are positive on the interval [0, 1].

Two vectors c, d ∈ Rn satisfy c ≼K d if and only if

c1 + c2t + ··· + cntn−1 ≤ d1 + d2t + ··· + dntn−1

for all t ∈ [0, 1].

Properties of generalized inequalities

A generalized inequality ≼K satisfies many properties, such as

• ≼K is preserved under addition: if x ≼K y and u ≼K v, then x + u ≼K y + v.
• ≼K is transitive: if x ≼K y and y ≼K z then x ≼K z.
• ≼K is preserved under nonnegative scaling: if x ≼K y and α ≥ 0 then αx ≼K αy.
• ≼K is reflexive: x ≼K x.
• ≼K is antisymmetric: if x ≼K y and y ≼K x, then x = y.
• ≼K is preserved under limits: if xi ≼K yi for i = 1, 2, . . . , xi → x and yi → y as i → ∞, then x ≼K y.

The corresponding strict generalized inequality ≺K satisfies, for example,

• if x ≺K y then x ≼K y.
• if x ≺K y and u ≼K v then x + u ≺K y + v.
• if x ≺K y and α > 0 then αx ≺K αy.
• x ̸≺K x.
• if x ≺K y, then for u and v small enough, x + u ≺K y + v.
These properties are inherited from the definitions of ≼K and ≺K, and the prop- erties of proper cones; see exercise 2.30.
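For the two standard cones, generalized inequalities reduce to concrete tests: componentwise comparison for K = Rn+ and an eigenvalue test for K = Sn+. A sketch (an added illustration, not from the book; numpy assumed, and the tolerance is an implementation choice):

    import numpy as np

    def leq_orthant(x, y):
        # x <=_K y for K = R^n_+ : y - x has nonnegative components.
        return np.all(y - x >= 0)

    def leq_psd(X, Y):
        # X <=_K Y for K = S^n_+ : Y - X is positive semidefinite.
        return np.all(np.linalg.eigvalsh(Y - X) >= -1e-12)

    x, y = np.array([1.0, 2.0]), np.array([1.0, 3.0])
    assert leq_orthant(x, y) and not leq_orthant(y, x)

    X = np.eye(2)
    Y = np.array([[2.0, 1.0], [1.0, 2.0]])   # Y - X = [[1,1],[1,1]], PSD
    assert leq_psd(X, Y)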
2.4.2 Minimum and minimal elements
The notation of generalized inequality (i.e., ≼K, ≺K) is meant to suggest the analogy to ordinary inequality on R (i.e., ≤, <). While many properties of ordinary inequality do hold for generalized inequalities, some important ones do not. The most obvious difference is that ≤ on R is a linear ordering: any two points are comparable, meaning either x ≤ y or y ≤ x. This property does not hold for other generalized inequalities. One implication is that concepts like minimum and maximum are more complicated in the context of generalized inequalities. We briefly discuss this in this section.

We say that x ∈ S is the minimum element of S (with respect to the generalized inequality ≼K) if for every y ∈ S we have x ≼K y. We define the maximum element of a set S, with respect to a generalized inequality, in a similar way. If a set has a minimum (maximum) element, then it is unique. A related concept is minimal element. We say that x ∈ S is a minimal element of S (with respect to the generalized inequality ≼K) if y ∈ S, y ≼K x only if y = x. We define maximal element in a similar way. A set can have many different minimal (maximal) elements.

We can describe minimum and minimal elements using simple set notation. A point x ∈ S is the minimum element of S if and only if S ⊆ x + K. Here x + K denotes all the points that are comparable to x and greater than or equal to x (according to ≼K). A point x ∈ S is a minimal element if and only if (x − K) ∩ S = {x}. Here x − K denotes all the points that are comparable to x and less than or equal to x (according to ≼K); the only point in common with S is x.

For K = R+, which induces the usual ordering on R, the concepts of minimal and minimum are the same, and agree with the usual definition of the minimum element of a set.

Example 2.17 Consider the cone R2+, which induces componentwise inequality in R2. Here we can give some simple geometric descriptions of minimal and minimum elements. The inequality x ≼ y means y is above and to the right of x. To say that x ∈ S is the minimum element of a set S means that all other points of S lie above and to the right. To say that x is a minimal element of a set S means that no other point of S lies to the left and below x. This is illustrated in figure 2.17.

Figure 2.17 Left. The set S1 has a minimum element x1 with respect to componentwise inequality in R2. The set x1 + K is shaded lightly; x1 is the minimum element of S1 since S1 ⊆ x1 + K. Right. The point x2 is a minimal point of S2. The set x2 − K is shown lightly shaded. The point x2 is minimal because x2 − K and S2 intersect only at x2.

Example 2.18 Minimum and minimal elements of a set of symmetric matrices. We associate with each A ∈ Sn++ an ellipsoid centered at the origin, given by

EA = {x | xT A−1 x ≤ 1}.

We have A ≼ B if and only if EA ⊆ EB.

Let v1, . . . , vk ∈ Rn be given and define

S = {P ∈ Sn++ | viT P−1 vi ≤ 1, i = 1, . . . , k},

which corresponds to the set of ellipsoids that contain the points v1, . . . , vk. The set S does not have a minimum element: for any ellipsoid that contains the points v1, . . . , vk we can find another one that contains the points, and is not comparable to it. An ellipsoid is minimal if it contains the points, but no smaller ellipsoid does. Figure 2.18 shows an example in R2 with k = 2.

2.5 Separating and supporting hyperplanes

2.5.1 Separating hyperplane theorem

In this section we describe an idea that will be important later: the use of hyperplanes or affine functions to separate convex sets that do not intersect.
The basic result is the separating hyperplane theorem: Suppose C and D are nonempty disjoint convex sets, i.e., C ∩ D = ∅. Then there exist a ̸= 0 and b such that aT x ≤ b for all x ∈ C and aT x ≥ b for all x ∈ D. In other words, the affine function aT x − b is nonpositive on C and nonnegative on D. The hyperplane {x | aT x = b} is called a separating hyperplane for the sets C and D, or is said to separate the sets C and D. This is illustrated in figure 2.19.

Figure 2.18 Three ellipsoids in R2, centered at the origin (shown as the lower dot), that contain the points shown as the upper dots. The ellipsoid E1 is not minimal, since there exist ellipsoids that contain the points, and are smaller (e.g., E3). E3 is not minimal for the same reason. The ellipsoid E2 is minimal, since no other ellipsoid (centered at the origin) contains the points and is contained in E2.

Figure 2.19 The hyperplane {x | aT x = b} separates the disjoint convex sets C and D. The affine function aT x − b is nonpositive on C and nonnegative on D.

Proof of separating hyperplane theorem

Here we consider a special case, and leave the extension of the proof to the general case as an exercise (exercise 2.22). We assume that the (Euclidean) distance between C and D, defined as

dist(C, D) = inf{∥u − v∥2 | u ∈ C, v ∈ D},

is positive, and that there exist points c ∈ C and d ∈ D that achieve the minimum distance, i.e., ∥c − d∥2 = dist(C, D). (These conditions are satisfied, for example, when C and D are closed and one set is bounded.)

Define

a = d − c,   b = (∥d∥2² − ∥c∥2²)/2.

We will show that the affine function

f(x) = aT x − b = (d − c)T (x − (1/2)(d + c))

is nonpositive on C and nonnegative on D, i.e., that the hyperplane {x | aT x = b} separates C and D. This hyperplane is perpendicular to the line segment between c and d, and passes through its midpoint, as shown in figure 2.20.

Figure 2.20 Construction of a separating hyperplane between two convex sets. The points c ∈ C and d ∈ D are the pair of points in the two sets that are closest to each other. The separating hyperplane is orthogonal to, and bisects, the line segment between c and d.

We first show that f is nonnegative on D. The proof that f is nonpositive on C is similar (or follows by swapping C and D and considering −f). Suppose there were a point u ∈ D for which

f(u) = (d − c)T (u − (1/2)(d + c)) < 0.   (2.16)

We can express f(u) as

f(u) = (d − c)T (u − d + (1/2)(d − c)) = (d − c)T (u − d) + (1/2)∥d − c∥2².

We see that (2.16) implies (d − c)T (u − d) < 0. Now we observe that

(d/dt) ∥d + t(u − d) − c∥2² |t=0 = 2(d − c)T (u − d) < 0,

so for some small t > 0, with t ≤ 1, we have

∥d + t(u − d) − c∥2 < ∥d − c∥2,

i.e., the point d + t(u − d) is closer to c than d is. Since D is convex and contains d and u, we have d + t(u − d) ∈ D. But this is impossible, since d is assumed to be the point in D that is closest to C.

Example 2.19 Separation of an affine and a convex set. Suppose C is convex and D is affine, i.e., D = {Fu + g | u ∈ Rm}, where F ∈ Rn×m. Suppose C and D are disjoint, so by the separating hyperplane theorem there are a ̸= 0 and b such that aT x ≤ b for all x ∈ C and aT x ≥ b for all x ∈ D.

Now aT x ≥ b for all x ∈ D means aT Fu ≥ b − aT g for all u ∈ Rm. But a linear function is bounded below on Rm only when it is zero, so we conclude aT F = 0 (and hence, b ≤ aT g).

Thus we conclude that there exists a ̸= 0 such that FT a = 0 and aT x ≤ aT g for all x ∈ C.
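Given the closest points c and d, the construction in the proof is immediate to code. The sketch below (added here; numpy and the two sample balls are assumptions, chosen so the closest pair is known in closed form) builds the hyperplane and checks the separation on sampled points:

    import numpy as np

    rng = np.random.default_rng(7)

    # Two disjoint Euclidean balls in R^2; their closest points lie on
    # the segment joining the centers.
    c1, r1 = np.array([0.0, 0.0]), 1.0
    c2, r2 = np.array([4.0, 0.0]), 1.0
    u = (c2 - c1) / np.linalg.norm(c2 - c1)
    c, d = c1 + r1 * u, c2 - r2 * u          # closest points of C and D

    a = d - c
    b = (d @ d - c @ c) / 2.0                # hyperplane {x | a^T x = b}

    # Sample points of each ball; check a^T x <= b on C and a^T x >= b on D.
    w = rng.standard_normal((500, 2))
    w /= np.maximum(np.linalg.norm(w, axis=1, keepdims=True), 1.0)
    assert np.all((c1 + r1 * w) @ a <= b + 1e-9)
    assert np.all((c2 + r2 * w) @ a >= b - 1e-9)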
Strict separation

The separating hyperplane we constructed above satisfies the stronger condition that aT x < b for all x ∈ C and aT x > b for all x ∈ D. This is called strict separation of the sets C and D. Simple examples show that in general, disjoint convex sets need not be strictly separable by a hyperplane (even when the sets are closed; see exercise 2.23). In many special cases, however, strict separation can be established.
Example 2.20 Strict separation of a point and a closed convex set. Let C be a closed convex set and x0 ̸∈ C. Then there exists a hyperplane that strictly separates x0 from C.
To see this, note that the two sets C and B(x0, ε) do not intersect for some ε > 0. By the separating hyperplane theorem, there exist a ̸= 0 and b such that aT x ≤ b for x ∈ C and aT x ≥ b for x ∈ B(x0, ε).

Using B(x0, ε) = {x0 + u | ∥u∥2 ≤ ε}, the second condition can be expressed as

aT (x0 + u) ≥ b for all ∥u∥2 ≤ ε.

The u that minimizes the lefthand side is u = −εa/∥a∥2; using this value we have

aT x0 − ε∥a∥2 ≥ b.

Therefore the affine function

f(x) = aT x − b − ε∥a∥2/2

is negative on C and positive at x0.
As an immediate consequence we can establish a fact that we already mentioned above: a closed convex set is the intersection of all halfspaces that contain it. Indeed, let C be closed and convex, and let S be the intersection of all halfspaces containing C. Obviously x ∈ C ⇒ x ∈ S. To show the converse, suppose there exists x ∈ S, x ̸∈ C. By the strict separation result there exists a hyperplane that strictly separates x from C, i.e., there is a halfspace containing C but not x. In other words, x ̸∈ S.
2.5.2 Converse separating hyperplane theorems
The converse of the separating hyperplane theorem (i.e., existence of a separating hyperplane implies that C and D do not intersect) is not true, unless one imposes additional constraints on C or D, even beyond convexity. As a simple counterexample, consider C = D = {0} ⊆ R. Here the hyperplane x = 0 separates C and D.
By adding conditions on C and D various converse separation theorems can be derived. As a very simple example, suppose C and D are convex sets, with C open, and there exists an affine function f that is nonpositive on C and nonnegative on D. Then C and D are disjoint. (To see this we first note that f must be negative on C; for if f were zero at a point of C then f would take on positive values near the point, which is a contradiction. But then C and D must be disjoint since f is negative on C and nonnegative on D.) Putting this converse together with the separating hyperplane theorem, we have the following result: any two convex sets C and D, at least one of which is open, are disjoint if and only if there exists a separating hyperplane.
Example 2.21 Theorem of alternatives for strict linear inequalities. We derive the necessary and sufficient conditions for solvability of a system of strict linear inequal- ities
Ax ≺ b. (2.17) These inequalities are infeasible if and only if the (convex) sets
C={b−Ax|x∈Rn}, D=Rm++ ={y∈Rm |y≻0}
do not intersect. The set D is open; C is an affine set. Hence by the result above, C and D are disjoint if and only if there exists a separating hyperplane, i.e., a nonzero λ ∈ Rm and μ ∈ R such that λT y ≤ μ on C and λT y ≥ μ on D.

Each of these conditions can be simplified. The first means λT (b − Ax) ≤ μ for all x. This implies (as in example 2.19) that AT λ = 0 and λT b ≤ μ. The second inequality means λT y ≥ μ for all y ≻ 0. This implies μ ≤ 0 and λ ≽ 0, λ ̸= 0.
Putting it all together, we find that the set of strict inequalities (2.17) is infeasible if and only if there exists λ ∈ Rm such that
λ ̸= 0, λ ≽ 0, AT λ = 0, λT b ≤ 0. (2.18)
This is also a system of linear inequalities and linear equations in the variable λ ∈ Rm. We say that (2.17) and (2.18) form a pair of alternatives: for any data A and b, exactly one of them is solvable.
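Both systems in the pair of alternatives are linear (feasibility) programs, so the claim can be tested numerically. A sketch (added here; scipy.optimize.linprog and the small example data are assumptions, not from the book):

    import numpy as np
    from scipy.optimize import linprog

    def strict_feasible(A, b):
        # Ax < b is feasible iff max t s.t. Ax + t*1 <= b, t <= 1 has value > 0.
        m, n = A.shape
        c = np.zeros(n + 1); c[-1] = -1.0            # maximize t
        A_ub = np.hstack([A, np.ones((m, 1))])
        res = linprog(c, A_ub=A_ub, b_ub=b,
                      bounds=[(None, None)] * n + [(None, 1.0)])
        return res.x[-1] > 1e-9

    def certificate_exists(A, b):
        # Find lambda != 0 with lambda >= 0, A^T lambda = 0, b^T lambda <= 0;
        # normalizing 1^T lambda = 1 rules out lambda = 0.
        m, n = A.shape
        res = linprog(np.zeros(m),
                      A_ub=b[None, :], b_ub=[0.0],
                      A_eq=np.vstack([A.T, np.ones((1, m))]),
                      b_eq=np.concatenate([np.zeros(n), [1.0]]),
                      bounds=[(0, None)] * m)
        return res.status == 0                       # a feasible point was found

    A = np.array([[1.0, 0.0], [-1.0, 0.0]])          # x1 < 1 and x1 > 1: infeasible
    b = np.array([1.0, -1.0])
    assert not strict_feasible(A, b) and certificate_exists(A, b)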
Supporting hyperplanes
Suppose C ⊆ Rn, and x0 is a point in its boundary bd C, i.e., x0 ∈ bd C = cl C \ int C.
If a ̸= 0 satisfies aT x ≤ aT x0 for all x ∈ C, then the hyperplane {x | aT x = aT x0} is called a supporting hyperplane to C at the point x0. This is equivalent to saying
that the point x0 and the set C are separated by the hyperplane {x | aT x = aT x0}. The geometric interpretation is that the hyperplane {x | aT x = aT x0} is tangent to C at x0, and the halfspace {x | aT x ≤ aT x0} contains C. This is illustrated in figure 2.21.

Figure 2.21 The hyperplane {x | aT x = aT x0} supports C at x0.
A basic result, called the supporting hyperplane theorem, states that for any nonempty convex set C, and any x0 ∈ bd C, there exists a supporting hyperplane to C at x0. The supporting hyperplane theorem is readily proved from the separating hyperplane theorem. We distinguish two cases. If the interior of C is nonempty, the result follows immediately by applying the separating hyperplane theorem to the sets {x0} and intC. If the interior of C is empty, then C must lie in an affine set of dimension less than n, and any hyperplane containing that affine set contains C and x0, and is a (trivial) supporting hyperplane.
There is also a partial converse of the supporting hyperplane theorem: If a set is closed, has nonempty interior, and has a supporting hyperplane at every point in its boundary, then it is convex. (See exercise 2.27.)
2.6 Dual cones and generalized inequalities

2.6.1 Dual cones
Let K be a cone. The set
K∗ ={y|xTy≥0forallx∈K} (2.19)
is called the dual cone of K. As the name suggests, K∗ is a cone, and is always convex, even when the original cone K is not (see exercise 2.31).
Geometrically, y ∈ K∗ if and only if −y is the normal of a hyperplane that supports K at the origin. This is illustrated in figure 2.22.
Example 2.22 Subspace. The dual cone of a subspace V ⊆ Rn (which is a cone) is its orthogonal complement V⊥ = {y | vT y = 0 for all v ∈ V}.
Figure 2.22 Left. The halfspace with inward normal y contains the cone K, so y ∈ K∗. Right. The halfspace with inward normal z does not contain K, so z ̸∈ K∗.
Example 2.23 Nonnegative orthant. The cone Rn+ is its own dual:

xT y ≥ 0 for all x ≽ 0 ⇐⇒ y ≽ 0.
We call such a cone self-dual.
Example 2.24 Positive semidefinite cone. On the set of symmetric n × n matrices Sn, we use the standard inner product tr(XY) = Σi,j=1,...,n Xij Yij (see §A.1.1). The positive semidefinite cone Sn+ is self-dual, i.e., for X, Y ∈ Sn,
tr(XY)≥0forallX≽0 ⇐⇒ Y ≽0. We will establish this fact.
Suppose Y ̸∈ Sn+. Then there exists q ∈ Rn with

qT Y q = tr(qqT Y) < 0.

Hence the positive semidefinite matrix X = qqT satisfies tr(XY) < 0; it follows that Y ̸∈ (Sn+)∗.

Now suppose X, Y ∈ Sn+. We can express X in terms of its eigenvalue decomposition as X = Σi=1,...,n λi qi qiT, where (the eigenvalues) λi ≥ 0, i = 1, . . . , n. Then we have

tr(YX) = tr(Y Σi=1,...,n λi qi qiT) = Σi=1,...,n λi qiT Y qi ≥ 0.

This shows that Y ∈ (Sn+)∗.

Example 2.25 Dual of a norm cone. Let ∥·∥ be a norm on Rn. The dual of the associated cone K = {(x, t) ∈ Rn+1 | ∥x∥ ≤ t} is the cone defined by the dual norm, i.e.,

K∗ = {(u, v) ∈ Rn+1 | ∥u∥∗ ≤ v},

where the dual norm is given by ∥u∥∗ = sup{uT x | ∥x∥ ≤ 1} (see (A.1.6)). To prove the result we have to show that

xT u + tv ≥ 0 whenever ∥x∥ ≤ t ⇐⇒ ∥u∥∗ ≤ v.   (2.20)

Let us start by showing that the righthand condition on (u, v) implies the lefthand condition. Suppose ∥u∥∗ ≤ v, and ∥x∥ ≤ t for some t > 0. (If t = 0, x must be zero, so obviously uT x + vt ≥ 0.) Applying the definition of the dual norm, and the fact that ∥−x/t∥ ≤ 1, we have
uT (−x/t) ≤ ∥u∥∗ ≤ v,

and therefore uT x + vt ≥ 0.
Next we show that the lefthand condition in (2.20) implies the righthand condition in (2.20). Suppose ∥u∥∗ > v, i.e., that the righthand condition does not hold. Then by the definition of the dual norm, there exists an x with ∥x∥ ≤ 1 and xT u > v. Taking t = 1, we have
uT (−x) + v < 0,

which contradicts the lefthand condition in (2.20).

Dual cones satisfy several properties, such as:

• K∗ is closed and convex.
• K1 ⊆ K2 implies K2∗ ⊆ K1∗.
• If K has nonempty interior, then K∗ is pointed.
• If the closure of K is pointed then K∗ has nonempty interior.
• K∗∗ is the closure of the convex hull of K. (Hence if K is convex and closed, K∗∗ = K.)

(See exercise 2.31.) These properties show that if K is a proper cone, then so is its dual K∗, and moreover, that K∗∗ = K.

2.6.2 Dual generalized inequalities

Now suppose that the convex cone K is proper, so it induces a generalized inequality ≼K. Then its dual cone K∗ is also proper, and therefore induces a generalized inequality. We refer to the generalized inequality ≼K∗ as the dual of the generalized inequality ≼K.

Some important properties relating a generalized inequality and its dual are:

• x ≼K y if and only if λT x ≤ λT y for all λ ≽K∗ 0.
• x ≺K y if and only if λT x < λT y for all λ ≽K∗ 0, λ ̸= 0.

Since K = K∗∗, the dual generalized inequality associated with ≼K∗ is ≼K, so these properties hold if the generalized inequality and its dual are swapped. As a specific example, we have λ ≼K∗ μ if and only if λT x ≤ μT x for all x ≽K 0.

Example 2.26 Theorem of alternatives for linear strict generalized inequalities. Suppose K ⊆ Rm is a proper cone. Consider the strict generalized inequality

Ax ≺K b,   (2.21)

where x ∈ Rn. We will derive a theorem of alternatives for this inequality. Suppose it is infeasible, i.e., the affine set {b − Ax | x ∈ Rn} does not intersect the open convex set int K. Then there is a separating hyperplane, i.e., a nonzero λ ∈ Rm and μ ∈ R such that λT (b − Ax) ≤ μ for all x, and λT y ≥ μ for all y ∈ int K. The first condition implies AT λ = 0 and λT b ≤ μ. The second condition implies λT y ≥ μ for all y ∈ K, which can only happen if λ ∈ K∗ and μ ≤ 0.

Putting it all together we find that if (2.21) is infeasible, then there exists λ such that

λ ̸= 0,   λ ≽K∗ 0,   AT λ = 0,   λT b ≤ 0.   (2.22)

Now we show the converse: if (2.22) holds, then the inequality system (2.21) cannot be feasible. Suppose that both inequality systems hold. Then we have λT (b − Ax) > 0, since λ ̸= 0, λ ≽K∗ 0, and b − Ax ≻K 0. But using AT λ = 0 we find that λT (b − Ax) = λT b ≤ 0, which is a contradiction.
Thus, the inequality systems (2.21) and (2.22) are alternatives: for any data A, b, exactly one of them is feasible. (This generalizes the alternatives (2.17), (2.18) for the special case K = Rm+ .)
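The self-duality facts of examples 2.23 and 2.24 can be spot-checked by sampling; note that sampling can refute, but never prove, membership in a dual cone. A sketch (added here; numpy and the random data are assumptions):

    import numpy as np

    rng = np.random.default_rng(8)

    # R^n_+ is self-dual: y satisfies x^T y >= 0 for all x >= 0 iff y >= 0.
    y = rng.uniform(0, 1, size=5)
    X = rng.uniform(0, 1, size=(1000, 5))          # samples from R^5_+
    assert np.all(X @ y >= 0)

    # S^n_+ is self-dual: a non-PSD Y is refuted by the witness X = q q^T
    # built from a direction q with q^T Y q < 0.
    M = rng.standard_normal((3, 3))
    Y = M + M.T - 4.0 * np.eye(3)                  # shifted to force a negative eigenvalue
    w, Q = np.linalg.eigh(Y)
    if w[0] < 0:
        q = Q[:, 0]
        assert np.trace(np.outer(q, q) @ Y) < 0    # tr(q q^T Y) = q^T Y q < 0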
2.6.3 Minimum and minimal elements via dual inequalities
We can use dual generalized inequalities to characterize minimum and minimal elements of a (possibly nonconvex) set S ⊆ Rm with respect to the generalized inequality induced by a proper cone K.
Dual characterization of minimum element
We first consider a characterization of the minimum element: x is the minimum element of S, with respect to the generalized inequality ≼K, if and only if for all λ ≻K∗ 0, x is the unique minimizer of λT z over z ∈ S. Geometrically, this means that for any λ ≻K∗ 0, the hyperplane
{z | λT (z − x) = 0}
is a strict supporting hyperplane to S at x. (By strict supporting hyperplane, we mean that the hyperplane intersects S only at the point x.) Note that convexity of the set S is not required. This is illustrated in figure 2.23.

Figure 2.23 Dual characterization of minimum element. The point x is the minimum element of the set S with respect to R2+. This is equivalent to: for every λ ≻ 0, the hyperplane {z | λT (z − x) = 0} strictly supports S at x, i.e., contains S on one side, and touches it only at x.
To show this result, suppose x is the minimum element of S, i.e., x ≼K z for allz∈S,andletλ≻K∗ 0. Letz∈S,z̸=x. Sincexistheminimumelementof S,wehavez−x≽K 0. Fromλ≻K∗ 0andz−x≽K 0,z−x̸=0,weconclude λT (z − x) > 0. Since z is an arbitrary element of S, not equal to x, this shows that x is the unique minimizer of λT z over z ∈ S. Conversely, suppose that for all λ ≻K∗ 0, x is the unique minimizer of λTz over z ∈ S, but x is not the minimum
element of S. Then there exists z ∈ S with z ̸≽K x. Since z − x ̸≽K 0, there exists λ̃ ≽K∗ 0 with λ̃T (z − x) < 0. Hence λT (z − x) < 0 for λ ≻K∗ 0 in the neighborhood of λ̃. This contradicts the assumption that x is the unique minimizer of λT z over S.

Dual characterization of minimal elements

We now turn to a similar characterization of minimal elements. Here there is a gap between the necessary and sufficient conditions. If λ ≻K∗ 0 and x minimizes λT z over z ∈ S, then x is minimal. This is illustrated in figure 2.24.

Figure 2.24 A set S ⊆ R2. Its set of minimal points, with respect to R2+, is shown as the darker section of its (lower, left) boundary. The minimizer of λT1 z over S is x1, and is minimal since λ1 ≻ 0. The minimizer of λT2 z over S is x2, which is another minimal point of S, since λ2 ≻ 0.

To show this, suppose that λ ≻K∗ 0, and x minimizes λT z over S, but x is not minimal, i.e., there exists a z ∈ S, z ̸= x, and z ≼K x. Then λT (x − z) > 0, which contradicts our assumption that x is the minimizer of λT z over S.
The converse is in general false: a point x can be minimal in S, but not a minimizer of λT z over z ∈ S, for any λ, as shown in figure 2.25. This figure suggests that convexity plays an important role in the converse, which is correct. Provided the set S is convex, we can say that for any minimal element x there exists a nonzero λ ≽K∗ 0 such that x minimizes λT z over z ∈ S.

Figure 2.25 The point x is a minimal element of S ⊆ R2 with respect to R2+. However there exists no λ for which x minimizes λT z over z ∈ S.
To show this, suppose x is minimal, which means that ((x − K) \ {x}) ∩ S = ∅. Applying the separating hyperplane theorem to the convex sets (x − K) \ {x} and S, we conclude that there is a λ ̸= 0 and μ such that λT (x − y) ≤ μ for all y ∈ K, and λT z ≥ μ for all z ∈ S. From the first inequality we conclude λ ≽K∗ 0. Since x ∈ S and x ∈ x − K, we have λT x = μ, so the second inequality implies that μ is the minimum value of λT z over S. Therefore, x is a minimizer of λT z over S, where λ ̸= 0, λ ≽K∗ 0.
This converse theorem cannot be strengthened to λ ≻K∗ 0. Examples show that a point x can be a minimal point of a convex set S, but not a minimizer of
λT z over z ∈ S for any λ ≻K∗ 0. (See figure 2.26, left.) Nor is it true that any minimizer of λT z over z ∈ S, with λ ≽K∗ 0, is minimal (see figure 2.26, right).

Figure 2.26 Left. The point x1 ∈ S1 is minimal, but is not a minimizer of λT z over S1 for any λ ≻ 0. (It does, however, minimize λT z over z ∈ S1 for λ = (1, 0).) Right. The point x2 ∈ S2 is not minimal, but it does minimize λT z over z ∈ S2 for λ = (0, 1) ≽ 0.
Example 2.27 Pareto optimal production frontier. We consider a product which requires n resources (such as labor, electricity, natural gas, water) to manufacture. The product can be manufactured or produced in many ways. With each production method, we associate a resource vector x ∈ Rn, where xi denotes the amount of resource i consumed by the method to manufacture the product. We assume that xi ≥ 0 (i.e., resources are consumed by the production methods) and that the resources are valuable (so using less of any resource is preferred).
The production set P ⊆ Rn is defined as the set of all resource vectors x that correspond to some production method.
Production methods with resource vectors that are minimal elements of P, with respect to componentwise inequality, are called Pareto optimal or efficient. The set of minimal elements of P is called the efficient production frontier.
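For a finite production set, the efficient methods can be found by a direct minimal-element computation with respect to componentwise inequality, and the dual characterization of §2.6.3 can be checked at the same time. A sketch (added here; numpy and the random production set are assumptions, not from the book):

    import numpy as np

    rng = np.random.default_rng(9)
    P = rng.random((50, 2))     # 50 production methods, 2 resources (labor, fuel)

    def is_pareto(i):
        # x_i is minimal (Pareto optimal) iff no x_j has x_j <= x_i, x_j != x_i.
        better = np.all(P <= P[i], axis=1) & np.any(P < P[i], axis=1)
        return not np.any(better)

    efficient = [i for i in range(len(P)) if is_pareto(i)]
    print(len(efficient), "efficient methods out of", len(P))

    # Minimizing lambda^T x for a price vector lambda > 0 always lands on
    # an efficient method, as in the dual characterization above.
    lam = np.array([1.0, 2.0])
    assert is_pareto(int(np.argmin(P @ lam)))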
We can give a simple interpretation of Pareto optimality. We say that one production method, with resource vector x, is better than another, with resource vector y, if xi ≤ yi for all i, and for some i, xi < yi. In other words, one production method is better than another if it uses no more of each resource than another method, and for at least one resource, actually uses less. This corresponds to x ≼ y, x ̸= y. Then we can say: A production method is Pareto optimal or efficient if there is no better production method.

We can find Pareto optimal production methods (i.e., minimal resource vectors) by minimizing

λT x = λ1x1 + ··· + λnxn

over the set P of production vectors, using any λ that satisfies λ ≻ 0. Here the vector λ has a simple interpretation: λi is the price of resource i. By minimizing λT x over P we are finding the overall cheapest production method (for the resource prices λi). As long as the prices are positive, the resulting production method is guaranteed to be efficient. These ideas are illustrated in figure 2.27.

Figure 2.27 The production set P, for a product that requires labor and fuel to produce, is shown shaded. The two dark curves show the efficient production frontier. The points x1, x2 and x3 are efficient. The points x4 and x5 are not (since in particular, x2 corresponds to a production method that uses no more fuel, and less labor). The point x1 is also the minimum cost production method for the price vector λ (which is positive). The point x2 is efficient, but cannot be found by minimizing the total cost λT x for any price vector λ ≽ 0.

Bibliography

Minkowski is generally credited with the first systematic study of convex sets, and the introduction of fundamental concepts such as supporting hyperplanes and the supporting hyperplane theorem, the Minkowski distance function (exercise 3.34), extreme points of a convex set, and many others. Some well known early surveys are Bonnesen and Fenchel [BF48], Eggleston [Egg58], Klee [Kle63], and Valentine [Val64]. More recent books devoted to the geometry of convex sets include Lay [Lay82] and Webster [Web94]. Klee [Kle71], Fenchel [Fen83], Tikhomirov [Tik90], and Berger [Ber90] give very readable overviews of the history of convexity and its applications throughout mathematics.

Linear inequalities and polyhedral sets are studied extensively in connection with the linear programming problem, for which we give references at the end of chapter 4. Some landmark publications in the history of linear inequalities and linear programming are Motzkin [Mot33], von Neumann and Morgenstern [vNM53], Kantorovich [Kan60], Koopmans [Koo51], and Dantzig [Dan63]. Dantzig [Dan63, chapter 2] includes an historical survey of linear inequalities, up to around 1963.

Generalized inequalities were introduced in nonlinear optimization during the 1960s (see Luenberger [Lue69, §8.2] and Isii [Isi64]), and are used extensively in cone programming (see the references in chapter 4). Bellman and Fan [BF63] is an early paper on sets of generalized linear inequalities (with respect to the positive semidefinite cone).

For extensions and a proof of the separating hyperplane theorem we refer the reader to Rockafellar [Roc70, part III], and Hiriart-Urruty and Lemaréchal [HUL93, volume 1, §III4]. Dantzig [Dan63, page 21] attributes the term theorem of the alternative to von Neumann and Morgenstern [vNM53, page 138]. For more references on theorems of alternatives, see chapter 5.
The terminology of example 2.27 (including Pareto optimality, efficient production, and the price interpretation of λ) is discussed in detail by Luenberger [Lue95].

Convex geometry plays a prominent role in the classical theory of moments (Krein and Nudelman [KN77], Karlin and Studden [KS66]). A famous example is the duality between the cone of nonnegative polynomials and the cone of power moments; see exercise 2.37.

Exercises

Definition of convexity

2.1 Let C ⊆ Rn be a convex set, with x1, . . . , xk ∈ C, and let θ1, . . . , θk ∈ R satisfy θi ≥ 0, θ1 + ··· + θk = 1. Show that θ1 x1 + ··· + θk xk ∈ C. (The definition of convexity is that this holds for k = 2; you must show it for arbitrary k.) Hint. Use induction on k.

2.2 Show that a set is convex if and only if its intersection with any line is convex. Show that a set is affine if and only if its intersection with any line is affine.

2.3 Midpoint convexity. A set C is midpoint convex if whenever two points a, b are in C, the average or midpoint (a + b)/2 is in C. Obviously a convex set is midpoint convex. It can be proved that under mild conditions midpoint convexity implies convexity. As a simple case, prove that if C is closed and midpoint convex, then C is convex.

2.4 Show that the convex hull of a set S is the intersection of all convex sets that contain S. (The same method can be used to show that the conic, or affine, or linear hull of a set S is the intersection of all conic sets, or affine sets, or subspaces that contain S.)

Examples

2.5 What is the distance between two parallel hyperplanes {x ∈ Rn | aT x = b1} and {x ∈ Rn | aT x = b2}?

2.6 When does one halfspace contain another? Give conditions under which
{x | aT x ≤ b} ⊆ {x | ãT x ≤ b̃}
(where a ≠ 0, ã ≠ 0). Also find the conditions under which the two halfspaces are equal.

2.7 Voronoi description of halfspace. Let a and b be distinct points in Rn. Show that the set of all points that are closer (in Euclidean norm) to a than b, i.e., {x | ∥x − a∥2 ≤ ∥x − b∥2}, is a halfspace. Describe it explicitly as an inequality of the form cT x ≤ d. Draw a picture.

2.8 Which of the following sets S are polyhedra? If possible, express S in the form S = {x | Ax ≼ b, Fx = g}.
(a) S = {y1 a1 + y2 a2 | −1 ≤ y1 ≤ 1, −1 ≤ y2 ≤ 1}, where a1, a2 ∈ Rn.
(b) S = {x ∈ Rn | x ≽ 0, 1T x = 1, Σ_{i=1}^n xi ai = b1, Σ_{i=1}^n xi ai^2 = b2}, where a1, . . . , an ∈ R and b1, b2 ∈ R.
(c) S = {x ∈ Rn | x ≽ 0, xT y ≤ 1 for all y with ∥y∥2 = 1}.
(d) S = {x ∈ Rn | x ≽ 0, xT y ≤ 1 for all y with Σ_{i=1}^n |yi| = 1}.

2.9 Voronoi sets and polyhedral decomposition. Let x0, . . . , xK ∈ Rn. Consider the set of points that are closer (in Euclidean norm) to x0 than the other xi, i.e.,
V = {x ∈ Rn | ∥x − x0∥2 ≤ ∥x − xi∥2, i = 1, . . . , K}.
V is called the Voronoi region around x0 with respect to x1, . . . , xK.
(a) Show that V is a polyhedron. Express V in the form V = {x | Ax ≼ b}.
(b) Conversely, given a polyhedron P with nonempty interior, show how to find x0, . . . , xK so that the polyhedron is the Voronoi region of x0 with respect to x1, . . . , xK.
(c) We can also consider the sets
Vk = {x ∈ Rn | ∥x − xk∥2 ≤ ∥x − xi∥2, i ≠ k}.
The set Vk consists of points in Rn for which the closest point in the set {x0, . . . , xK} is xk.
The sets V0, . . . , VK give a polyhedral decomposition of Rn. More precisely, the sets Vk are polyhedra, ∪_{k=0}^K Vk = Rn, and int Vi ∩ int Vj = ∅ for i ≠ j, i.e., Vi and Vj intersect at most along a boundary.
Suppose that P1, . . . , Pm are polyhedra such that ∪_{i=1}^m Pi = Rn, and int Pi ∩ int Pj = ∅ for i ≠ j.
Can this polyhedral decomposition of Rn be described as the Voronoi regions generated by an appropriate set of points?

2.10 Solution set of a quadratic inequality. Let C ⊆ Rn be the solution set of a quadratic inequality,
C = {x ∈ Rn | xT Ax + bT x + c ≤ 0},
with A ∈ Sn, b ∈ Rn, and c ∈ R.
(a) Show that C is convex if A ≽ 0.
(b) Show that the intersection of C and the hyperplane defined by gT x + h = 0 (where g ≠ 0) is convex if A + λggT ≽ 0 for some λ ∈ R.
Are the converses of these statements true?

2.11 Hyperbolic sets. Show that the hyperbolic set {x ∈ R2+ | x1 x2 ≥ 1} is convex. As a generalization, show that {x ∈ Rn+ | ∏_{i=1}^n xi ≥ 1} is convex. Hint. If a, b ≥ 0 and 0 ≤ θ ≤ 1, then a^θ b^{1−θ} ≤ θa + (1 − θ)b; see §3.1.9.

2.12 Which of the following sets are convex?
(a) A slab, i.e., a set of the form {x ∈ Rn | α ≤ aT x ≤ β}.
(b) A rectangle, i.e., a set of the form {x ∈ Rn | αi ≤ xi ≤ βi, i = 1, . . . , n}. A rectangle is sometimes called a hyperrectangle when n > 2.
(c) Awedge,i.e.,{x∈Rn |aT1x≤b1, aT2x≤b2}.
(d) The set of points closer to a given point than a given set, i.e., {x|∥x−x0∥2 ≤∥x−y∥2 forally∈S}
where S ⊆ Rn.
(e) The set of points closer to one set than another, i.e.,
{x | dist(x, S) ≤ dist(x, T)},
where S, T ⊆ Rn, and
dist(x, S) = inf{∥x − z∥2 | z ∈ S}.
(f) [HUL93, volume 1, page 93] The set {x | x + S2 ⊆ S1}, where S1, S2 ⊆ Rn with S1 convex.
(g) The set of points whose distance to a does not exceed a fixed fraction θ of the distance to b, i.e., the set {x | ∥x − a∥2 ≤ θ∥x − b∥2}. You can assume a ≠ b and 0 ≤ θ ≤ 1.

2.13 Conic hull of outer products. Consider the set of rank-k outer products, defined as {XXT | X ∈ Rn×k, rank X = k}. Describe its conic hull in simple terms.

2.14 Expanded and restricted sets. Let S ⊆ Rn, and let ∥ · ∥ be a norm on Rn.
(a) For a ≥ 0 we define Sa as {x | dist(x, S) ≤ a}, where dist(x, S) = inf_{y∈S} ∥x − y∥. We refer to Sa as S expanded or extended by a. Show that if S is convex, then Sa is convex.
(b) For a ≥ 0 we define S−a = {x | B(x, a) ⊆ S}, where B(x, a) is the ball (in the norm ∥ · ∥), centered at x, with radius a. We refer to S−a as S shrunk or restricted by a, since S−a consists of all points that are at least a distance a from Rn\S. Show that if S is convex, then S−a is convex.

2.15 Some sets of probability distributions. Let x be a real-valued random variable with prob(x = ai) = pi, i = 1, . . . , n, where a1 < a2 < ··· < an. Of course p ∈ Rn lies in the standard probability simplex P = {p | 1T p = 1, p ≽ 0}. Which of the following conditions are convex in p? (That is, for which of the following conditions is the set of p ∈ P that satisfy the condition convex?)
(a) α ≤ E f(x) ≤ β, where E f(x) is the expected value of f(x), i.e., E f(x) = Σ_{i=1}^n pi f(ai). (The function f : R → R is given.)
(b) prob(x > α) ≤ β.
(c) E |x^3| ≤ α E |x|.
(d) E x^2 ≤ α.
(e) E x^2 ≥ α.
(f) var(x) ≤ α, where var(x) = E(x − E x)^2 is the variance of x.
(g) var(x) ≥ α.
(h) quartile(x) ≥ α, where quartile(x) = inf{β | prob(x ≤ β) ≥ 0.25}.
(i) quartile(x) ≤ α.
Operations that preserve convexity
2.16 Show that if S1 and S2 are convex sets in Rm+n, then so is their partial sum
S = {(x, y1 + y2) | x ∈ Rm, y1, y2 ∈ Rn, (x, y1) ∈ S1, (x, y2) ∈ S2}.

2.17 Image of polyhedral sets under perspective function. In this problem we study the image of hyperplanes, halfspaces, and polyhedra under the perspective function P(x, t) = x/t, with dom P = Rn × R++. For each of the following sets C, give a simple description of
P(C) = {v/t | (v, t) ∈ C, t > 0}.
(a) The polyhedron C = conv{(v1, t1), . . . , (vK, tK)} where vi ∈ Rn and ti > 0.
(b) The hyperplane C = {(v, t) | fT v + gt = h} (with f and g not both zero).
(c) The halfspace C = {(v, t) | fT v + gt ≤ h} (with f and g not both zero).
(d) The polyhedron C = {(v, t) | Fv + gt ≼ h}.

2.18 Invertible linear-fractional functions. Let f : Rn → Rn be the linear-fractional function
f(x) = (Ax + b)/(cT x + d), dom f = {x | cT x + d > 0}.
Suppose the matrix
Q = [A b; cT d]
is nonsingular. Show that f is invertible and that f−1 is a linear-fractional mapping. Give an explicit expression for f−1 and its domain in terms of A, b, c, and d. Hint. It may be easier to express f−1 in terms of Q.

2.19 Linear-fractional functions and convex sets. Let f : Rm → Rn be the linear-fractional function
f(x) = (Ax + b)/(cT x + d), dom f = {x | cT x + d > 0}.
In this problem we study the inverse image of a convex set C under f, i.e.,
f−1(C) = {x ∈ dom f | f(x) ∈ C}.
For each of the following sets C ⊆ Rn, give a simple description of f−1(C).

(a) The halfspace C = {y | gT y ≤ h} (with g ≠ 0).
(b) The polyhedron C = {y | Gy ≼ h}.
(c) The ellipsoid {y | yT P−1y ≤ 1} (where P ∈ Sn++).
(d) The solution set of a linear matrix inequality, C = {y | y1A1 + · · · + ynAn ≼ B},
where A1, …, An, B ∈ Sp.
Separation theorems and supporting hyperplanes
2.20 Strictly positive solution of linear equations. Suppose A ∈ Rm×n, b ∈ Rm, with b ∈ R(A). Show that there exists an x satisfying
x ≻ 0, Ax = b if and only if there exists no λ with
AT λ ≽ 0, AT λ ̸= 0, bT λ ≤ 0.
Hint. First prove the following fact from linear algebra: cT x = d for all x satisfying
Ax = b if and only if there is a vector λ such that c = AT λ, d = bT λ.
2.21 The set of separating hyperplanes. Suppose that C and D are disjoint subsets of Rn. Consider the set of (a, b) ∈ Rn+1 for which aT x ≤ b for all x ∈ C, and aT x ≥ b for all x ∈ D. Show that this set is a convex cone (which is the singleton {0} if there is no hyperplane that separates C and D).
2.22 Finish the proof of the separating hyperplane theorem in §2.5.1: Show that a separating hyperplane exists for two disjoint convex sets C and D. You can use the result proved in §2.5.1, i.e., that a separating hyperplane exists when there exist points in the two sets whose distance is equal to the distance between the two sets.
Hint. If C and D are disjoint convex sets, then the set {x−y | x ∈ C, y ∈ D} is convex and does not contain the origin.
2.23 Give an example of two closed convex sets that are disjoint but cannot be strictly separated.
2.24 Supporting hyperplanes.
(a) Express the closed convex set {x ∈ R2+ | x1x2 ≥ 1} as an intersection of halfspaces.
(b) Let C = {x ∈ Rn | ∥x∥∞ ≤ 1}, the ℓ∞-norm unit ball in Rn, and let x̂ be a point in the boundary of C. Identify the supporting hyperplanes of C at x̂ explicitly.
2.25 Inner and outer polyhedral approximations. Let C ⊆ Rn be a closed convex set, and suppose that x1, . . . , xK are on the boundary of C. Suppose that for each i, aTi (x−xi) = 0 defines a supporting hyperplane for C at xi, i.e., C ⊆ {x | aTi (x − xi) ≤ 0}. Consider the two polyhedra
Pinner =conv{x1,…,xK}, Pouter ={x|aTi (x−xi)≤0, i=1,…,K}. Show that Pinner ⊆ C ⊆ Pouter. Draw a picture illustrating this.
2.26 Support function. The support function of a set C ⊆ Rn is defined as SC(y) = sup{yT x | x ∈ C}.
(We allow SC (y) to take on the value +∞.) Suppose that C and D are closed convex sets in Rn. Show that C = D if and only if their support functions are equal.
2.27 Converse supporting hyperplane theorem. Suppose the set C is closed, has nonempty interior, and has a supporting hyperplane at every point in its boundary. Show that C is convex.

Convex cones and generalized inequalities

2.28 Positive semidefinite cone for n = 1, 2, 3. Give an explicit description of the positive semidefinite cone Sn+, in terms of the matrix coefficients and ordinary inequalities, for n = 1, 2, 3. To describe a general element of Sn, for n = 1, 2, 3, use the notation
x1,   [x1 x2; x2 x3],   [x1 x2 x3; x2 x4 x5; x3 x5 x6].

2.29 Cones in R2. Suppose K ⊆ R2 is a closed convex cone.
(a) Give a simple description of K in terms of the polar coordinates of its elements (x = r(cosφ,sinφ) with r ≥ 0).
(b) Give a simple description of K∗, and draw a plot illustrating the relation between K and K∗.
(c) When is K pointed?
(d) When is K proper (hence, defines a generalized inequality)? Draw a plot illustrating
what x ≼K y means when K is proper.
2.30 Properties of generalized inequalities. Prove the properties of (nonstrict and strict) generalized inequalities listed in §2.4.1.
2.31 Properties of dual cones. Let K∗ be the dual cone of a convex cone K, as defined in (2.19). Prove the following.
(a) K∗ is indeed a convex cone.
(b) K1 ⊆ K2 implies K2∗ ⊆ K1∗.
(c) K∗ is closed.
(d) The interior of K∗ is given by int K∗ = {y | yT x > 0 for all x ∈ cl K}.
(e) If K has nonempty interior then K∗ is pointed.
(f) K∗∗ is the closure of K. (Hence if K is closed, K∗∗ = K.)
(g) If the closure of K is pointed then K∗ has nonempty interior.
2.32 Find the dual cone of {Ax | x ≽ 0}, where A ∈ Rm×n.
2.33 The monotone nonnegative cone. We define the monotone nonnegative cone as
Km+ = {x ∈ Rn | x1 ≥ x2 ≥ ··· ≥ xn ≥ 0},
i.e., all nonnegative vectors with components sorted in nonincreasing order.
(a) Show that Km+ is a proper cone.
(b) Find the dual cone Km+∗. Hint. Use the identity
Σ_{i=1}^n xi yi = (x1 − x2)y1 + (x2 − x3)(y1 + y2) + (x3 − x4)(y1 + y2 + y3) + ···
+ (xn−1 − xn)(y1 + ··· + yn−1) + xn(y1 + ··· + yn).
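The identity in this hint is a discrete summation by parts, and is easy to sanity-check numerically before using it. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(6), rng.standard_normal(6)

lhs = x @ y
# Coefficients (x1 - x2), ..., (x_{n-1} - x_n), x_n paired with the
# partial sums y1, y1 + y2, ..., y1 + ... + yn.
diffs = np.append(x[:-1] - x[1:], x[-1])
rhs = diffs @ np.cumsum(y)
print(np.isclose(lhs, rhs))  # True
```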
2.34 The lexicographic cone and ordering. The lexicographic cone is defined as
Klex = {0} ∪ {x ∈ Rn | x1 = ··· = xk = 0, xk+1 > 0, for some k, 0 ≤ k < n}.

3.1.3 First-order conditions

Strict convexity can also be characterized by a first-order condition: f is strictly convex if and only if dom f is convex and for x, y ∈ dom f, x ≠ y, we have
f(y) > f(x) + ∇f(x)T (y − x). (3.3)
For concave functions we have the corresponding characterization: f is concave if and only if dom f is convex and
f(y) ≤ f(x) + ∇f(x)T (y − x)
for all x, y ∈ dom f.
Proof of first-order convexity condition
To prove (3.2), we first consider the case n = 1: We show that a differentiable function f : R → R is convex if and only if
f(y) ≥ f(x) + f′(x)(y − x) (3.4)
for all x and y in domf.
Assume first that f is convex and x, y ∈ dom f. Since dom f is convex (i.e., an interval), we conclude that for all 0 < t ≤ 1, x + t(y − x) ∈ dom f, and by convexity of f,
f(x + t(y − x)) ≤ (1 − t)f(x) + tf(y).
If we divide both sides by t, we obtain
f(y) ≥ f(x) + (f(x + t(y − x)) − f(x))/t,
and taking the limit as t → 0 yields (3.4).

To show sufficiency, assume the function satisfies (3.4) for all x and y in dom f (which is an interval). Choose any x ≠ y, and 0 ≤ θ ≤ 1, and let z = θx + (1 − θ)y. Applying (3.4) twice yields
f(x) ≥ f(z) + f′(z)(x − z), f(y) ≥ f(z) + f′(z)(y − z).
Multiplying the first inequality by θ, the second by 1 − θ, and adding them yields
θf(x) + (1 − θ)f(y) ≥ f(z),
which proves that f is convex.

Now we can prove the general case, with f : Rn → R. Let x, y ∈ Rn and consider f restricted to the line passing through them, i.e., the function defined by g(t) = f(ty + (1 − t)x), so g′(t) = ∇f(ty + (1 − t)x)T (y − x).

First assume f is convex, which implies g is convex, so by the argument above we have g(1) ≥ g(0) + g′(0), which means
f(y) ≥ f(x) + ∇f(x)T (y − x).
Now assume that this inequality holds for any x and y, so if ty + (1 − t)x ∈ dom f and t̃y + (1 − t̃)x ∈ dom f, we have
f(ty + (1 − t)x) ≥ f(t̃y + (1 − t̃)x) + ∇f(t̃y + (1 − t̃)x)T (y − x)(t − t̃),
i.e., g(t) ≥ g(t̃) + g′(t̃)(t − t̃). We have seen that this implies that g is convex.

3.1.4 Second-order conditions

We now assume that f is twice differentiable, that is, its Hessian or second derivative ∇2f exists at each point in dom f, which is open. Then f is convex if and only if dom f is convex and its Hessian is positive semidefinite: for all x ∈ dom f,
∇2f(x) ≽ 0.
For a function on R, this reduces to the simple condition f′′(x) ≥ 0 (and dom f convex, i.e., an interval), which means that the derivative is nondecreasing. The condition ∇2f(x) ≽ 0 can be interpreted geometrically as the requirement that the graph of the function have positive (upward) curvature at x. We leave the proof of the second-order condition as an exercise (exercise 3.8). Similarly, f is concave if and only if dom f is convex and ∇2f(x) ≼ 0 for all x ∈ dom f.

Strict convexity can be partially characterized by second-order conditions. If ∇2f(x) ≻ 0 for all x ∈ dom f, then f is strictly convex. The converse, however, is not true: for example, the function f : R → R given by f(x) = x^4 is strictly convex but has zero second derivative at x = 0.

Example 3.2 Quadratic functions. Consider the quadratic function f : Rn → R, with dom f = Rn, given by
f(x) = (1/2)xT Px + qT x + r,
with P ∈ Sn, q ∈ Rn, and r ∈ R. Since ∇2f(x) = P for all x, f is convex if and only if P ≽ 0 (and concave if and only if P ≼ 0).
For quadratic functions, strict convexity is easily characterized: f is strictly convex if and only if P ≻ 0 (and strictly concave if and only if P ≺ 0).

Remark 3.1 The separate requirement that dom f be convex cannot be dropped from the first- or second-order characterizations of convexity and concavity. For example, the function f(x) = 1/x^2, with dom f = {x ∈ R | x ≠ 0}, satisfies f′′(x) > 0 for all x ∈ dom f, but is not a convex function.
3.1.5 Examples
We have already mentioned that all linear and affine functions are convex (and concave), and have described the convex and concave quadratic functions. In this section we give a few more examples of convex and concave functions. We start with some functions on R, with variable x.
• Exponential. eax is convex on R, for any a ∈ R.
• Powers. x^a is convex on R++ when a ≥ 1 or a ≤ 0, and concave for 0 ≤ a ≤ 1.
• Powers of absolute value. |x|^p, for p ≥ 1, is convex on R.
• Logarithm. log x is concave on R++.

Figure 3.3 Graph of f(x, y) = x^2/y.
• Negative entropy. x log x (either on R++, or on R+, defined as 0 for x = 0) is convex.
Convexity or concavity of these examples can be shown by verifying the basic inequality (3.1), or by checking that the second derivative is nonnegative or nonpositive. For example, with f(x) = x log x we have
f′(x) = log x + 1, f′′(x) = 1/x,
so that f′′(x) > 0 for x > 0. This shows that the negative entropy function is (strictly) convex.
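The same check is immediate to script. A short sketch (Python with NumPy assumed; the sample points are arbitrary) verifies the defining chord inequality (3.1) for the negative entropy at random points of R++:

```python
import numpy as np

f = lambda x: x * np.log(x)              # negative entropy on R_++

rng = np.random.default_rng(1)
x, y = rng.uniform(0.1, 5.0, size=2)     # two points in the domain
theta = rng.uniform()                    # a weight in [0, 1)

# f''(x) = 1/x > 0 on R_++, so the chord inequality (3.1) should hold:
lhs = f(theta * x + (1 - theta) * y)
rhs = theta * f(x) + (1 - theta) * f(y)
print(lhs <= rhs)  # True
```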
We now give a few interesting examples of functions on Rn.
• Norms. Every norm on Rn is convex.
• Max function. f(x) = max{x1,…,xn} is convex on Rn.
• Quadratic-over-linear function. The function f(x,y) = x2/y, with
dom f = R × R++ = {(x, y) ∈ R2 | y > 0}, is convex (figure 3.3).
• Log-sum-exp. The function f(x) = log(e^{x1} + ··· + e^{xn}) is convex on Rn. This function can be interpreted as a differentiable (in fact, analytic) approximation of the max function, since
max{x1 , . . . , xn } ≤ f (x) ≤ max{x1 , . . . , xn } + log n
for all x. (The second inequality is tight when all components of x are equal; a numerical check of these bounds appears after this list.) Figure 3.4 shows f for n = 2.
Figure 3.4 Graph of f (x, y) = log(ex + ey ).
• Geometric mean. The geometric mean f(x) = (∏_{i=1}^n xi)^{1/n} is concave on dom f = Rn++.
• Log-determinant. The function f(X) = log det X is concave on dom f = Sn++.
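The bounds relating the log-sum-exp function to the max function, noted in the list above, can be checked numerically. A minimal sketch, assuming NumPy; the shift by max(x) inside the helper is a standard stability trick and does not change the value:

```python
import numpy as np

def logsumexp(x):
    m = x.max()                          # subtract the max for stability
    return m + np.log(np.exp(x - m).sum())

x = np.array([1.0, -3.0, 2.5, 0.0])
f, mx, n = logsumexp(x), x.max(), len(x)
print(mx <= f <= mx + np.log(n))         # True: the two-sided bound holds
```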
Convexity (or concavity) of these examples can be verified in several ways, such as directly verifying the inequality (3.1), verifying that the Hessian is positive semidefinite, or restricting the function to an arbitrary line and verifying convexity of the resulting function of one variable.
Norms. If f : Rn → R is a norm, and 0 ≤ θ ≤ 1, then
f(θx + (1 − θ)y) ≤ f(θx) + f((1 − θ)y) = θf(x) + (1 − θ)f(y).
The inequality follows from the triangle inequality, and the equality follows from homogeneity of a norm.
Max function. The function f(x) = max_i xi satisfies, for 0 ≤ θ ≤ 1,
f(θx + (1 − θ)y) = max_i (θxi + (1 − θ)yi)
≤ θ max_i xi + (1 − θ) max_i yi
= θf(x) + (1 − θ)f(y).

Quadratic-over-linear function. To show that the quadratic-over-linear function f(x, y) = x^2/y is convex, we note that (for y > 0),
∇2f(x, y) = (2/y^3) [y^2 −xy; −xy x^2] = (2/y^3) [y; −x][y; −x]T ≽ 0.
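The rank-one factorization above makes positive semidefiniteness easy to confirm numerically. A short sketch, assuming NumPy (the test point is random, with y > 0):

```python
import numpy as np

def hessian_qol(x, y):
    # Hessian of f(x, y) = x^2 / y at a point with y > 0
    return (2.0 / y**3) * np.array([[y**2, -x*y], [-x*y, x**2]])

rng = np.random.default_rng(2)
x, y = rng.standard_normal(), rng.uniform(0.5, 3.0)
eigs = np.linalg.eigvalsh(hessian_qol(x, y))
print(np.all(eigs >= -1e-12))  # True: PSD (one eigenvalue is exactly 0)
```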
Log-sum-exp. The Hessian of the log-sum-exp function is
∇2f(x) = (1/(1T z)^2) ((1T z) diag(z) − zzT),
where z = (e^{x1}, . . . , e^{xn}). To verify that ∇2f(x) ≽ 0 we must show that for all v, vT ∇2f(x)v ≥ 0, i.e.,
vT ∇2f(x)v = (1/(1T z)^2) ( (Σ_{i=1}^n zi)(Σ_{i=1}^n vi^2 zi) − (Σ_{i=1}^n vi zi)^2 ) ≥ 0.
But this follows from the Cauchy-Schwarz inequality (aT a)(bT b) ≥ (aT b)^2 applied to the vectors with components ai = vi √zi, bi = √zi.
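The same conclusion can be reached numerically by assembling the Hessian and inspecting its eigenvalues. A minimal sketch, assuming NumPy (the evaluation point is arbitrary):

```python
import numpy as np

def lse_hessian(x):
    z = np.exp(x)
    s = z.sum()                                 # s = 1^T z
    return (s * np.diag(z) - np.outer(z, z)) / s**2

x = np.array([0.3, -1.2, 0.7])
print(np.linalg.eigvalsh(lse_hessian(x)).min() >= -1e-12)  # True: PSD
```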
Geometric mean. In a similar way we can show that the geometric mean f(x) = (∏_{i=1}^n xi)^{1/n} is concave on dom f = Rn++. Its Hessian ∇2f(x) is given by
∂2f(x)/∂xk^2 = −(n − 1) (∏_{i=1}^n xi)^{1/n} / (n^2 xk^2),
∂2f(x)/(∂xk ∂xl) = (∏_{i=1}^n xi)^{1/n} / (n^2 xk xl) for k ≠ l,
and can be expressed as
∇2f(x) = − ((∏_{i=1}^n xi)^{1/n} / n^2) ( n diag(1/x1^2, . . . , 1/xn^2) − qqT ),
where qi = 1/xi. We must show that ∇2f(x) ≼ 0, i.e., that
vT ∇2f(x)v = − ((∏_{i=1}^n xi)^{1/n} / n^2) ( n Σ_{i=1}^n vi^2/xi^2 − (Σ_{i=1}^n vi/xi)^2 ) ≤ 0
for all v. Again this follows from the Cauchy-Schwarz inequality (aT a)(bT b) ≥ (aT b)^2, applied to the vectors a = 1 and bi = vi/xi.

Log-determinant. For the function f(X) = log det X, we can verify concavity by considering an arbitrary line, given by X = Z + tV, where Z, V ∈ Sn. We define g(t) = f(Z + tV), and restrict g to the interval of values of t for which Z + tV ≻ 0. Without loss of generality, we can assume that t = 0 is inside this interval, i.e., Z ≻ 0. We have
g(t) = log det(Z + tV)
= log det(Z^{1/2} (I + t Z^{−1/2} V Z^{−1/2}) Z^{1/2})
= Σ_{i=1}^n log(1 + tλi) + log det Z,
where λ1, . . . , λn are the eigenvalues of Z^{−1/2} V Z^{−1/2}. Therefore we have
g′(t) = Σ_{i=1}^n λi/(1 + tλi),  g′′(t) = − Σ_{i=1}^n λi^2/(1 + tλi)^2.
Since g′′(t) ≤ 0, we conclude that f is concave.
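The sign of g′′ is also easy to confirm numerically from the eigenvalues λi. A short sketch, assuming NumPy; Z is built positive definite by construction, and we use the eigenvalues of Z−1V, which coincide with those of Z−1/2VZ−1/2:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n))
Z = M @ M.T + n * np.eye(n)                  # Z > 0 by construction
V = rng.standard_normal((n, n)); V = (V + V.T) / 2

lams = np.linalg.eigvals(np.linalg.solve(Z, V)).real  # eigenvalues lambda_i
t = 0.01                                     # small enough that Z + tV > 0
g2 = -np.sum(lams**2 / (1 + t * lams)**2)
print(g2 <= 0)  # True: g''(t) <= 0 along the line X = Z + tV
```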

3.1.6 Sublevel sets
The α-sublevel set of a function f : Rn → R is defined as
Cα = {x ∈ dom f | f(x) ≤ α}.
Sublevel sets of a convex function are convex, for any value of α. The proof is immediate from the definition of convexity: if x, y ∈ Cα, then f(x) ≤ α and f(y) ≤ α, and so f(θx+(1−θ)y) ≤ α for 0 ≤ θ ≤ 1, and hence θx+(1−θ)y ∈ Cα.
The converse is not true: a function can have all its sublevel sets convex, but not be a convex function. For example, f(x) = −ex is not convex on R (indeed, it is strictly concave) but all its sublevel sets are convex.
If f is concave, then its α-superlevel set, given by {x ∈ dom f | f (x) ≥ α}, is a convex set. The sublevel set property is often a good way to establish convexity of a set, by expressing it as a sublevel set of a convex function, or as the superlevel set of a concave function.
Example 3.3 The geometric and arithmetic means of x ∈ Rn+ are, respectively,
G(x) = (∏_{i=1}^n xi)^{1/n},  A(x) = (1/n) Σ_{i=1}^n xi
(where we take 0^{1/n} = 0 in our definition of G). The arithmetic-geometric mean inequality states that G(x) ≤ A(x).
Suppose 0 ≤ α ≤ 1, and consider the set
{x ∈ Rn+ | G(x) ≥ αA(x)},
i.e., the set of vectors with geometric mean at least as large as a factor α times the arithmetic mean. This set is convex, since it is the 0-superlevel set of the function G(x) − αA(x), which is concave. In fact, the set is positively homogeneous, so it is a convex cone.
3.1.7 Epigraph
The graph of a function f : Rn → R is defined as
{(x, f(x)) | x ∈ dom f},
which is a subset of Rn+1. The epigraph of a function f : Rn → R is defined as
epi f = {(x, t) | x ∈ dom f, f(x) ≤ t},
which is a subset of Rn+1. (‘Epi’ means ‘above’ so epigraph means ‘above the graph’.) The definition is illustrated in figure 3.5.
The link between convex sets and convex functions is via the epigraph: A function is convex if and only if its epigraph is a convex set. A function is concave if and only if its hypograph, defined as
hypo f = {(x, t) | t ≤ f(x)},
is a convex set.

Figure 3.5 Epigraph of a function f, shown shaded. The lower boundary, shown darker, is the graph of f.
Example 3.4 Matrix fractional function. The function f : Rn × Sn → R, defined as
f (x, Y ) = xT Y −1 x
is convex on dom f = Rn × Sn++ . (This generalizes the quadratic-over-linear function f(x, y) = x2/y, with dom f = R × R++.)
One easy way to establish convexity of f is via its epigraph:
epi f = {(x, Y, t) | Y ≻ 0, xT Y−1 x ≤ t}
= {(x, Y, t) | [Y x; xT t] ≽ 0, Y ≻ 0},
using the Schur complement condition for positive semidefiniteness of a block matrix (see §A.5.5). The last condition is a linear matrix inequality in (x, Y, t), and therefore epi f is convex.
For the special case n = 1, the matrix fractional function reduces to the quadratic-over-linear function x^2/y, and the associated LMI representation is
[y x; x t] ≽ 0,  y > 0
(the graph of which is shown in figure 3.3).
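The LMI representation can be tested numerically: for Y ≻ 0, the block matrix is positive semidefinite exactly when xT Y−1x ≤ t. A minimal sketch, assuming NumPy (the data are random):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n)); Y = A @ A.T + np.eye(n)   # Y > 0
x = rng.standard_normal(n)
t = x @ np.linalg.solve(Y, x) + 0.1          # t slightly above x^T Y^{-1} x

block = np.block([[Y, x[:, None]], [x[None, :], np.array([[t]])]])
print(np.linalg.eigvalsh(block).min() >= -1e-10)  # True: the LMI holds
```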
Many results for convex functions can be proved (or interpreted) geometrically using epigraphs, and applying results for convex sets. As an example, consider the first-order condition for convexity:
f(y) ≥ f(x) + ∇f(x)T (y − x),
where f is convex and x, y ∈ domf. We can interpret this basic inequality
geometrically in terms of epi f . If (y, t) ∈ epi f , then
t ≥ f(y) ≥ f(x) + ∇f(x)T (y − x).

Figure 3.6 For a differentiable convex function f, the vector (∇f(x),−1)
defines a supporting hyperplane to the epigraph of f at x.
We can express this as:
(y, t) ∈ epi f  =⇒  [∇f(x); −1]T ( [y; t] − [x; f(x)] ) ≤ 0.
This means that the hyperplane defined by (∇f(x),−1) supports epif at the
boundary point (x, f(x)); see figure 3.6.

3.1.8 Jensen’s inequality and extensions
The basic inequality (3.1), i.e.,
f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y),
is sometimes called Jensen’s inequality. It is easily extended to convex combinations of more than two points: If f is convex, x1,…,xk ∈ domf, and θ1,…,θk ≥ 0 with θ1 + · · · + θk = 1, then
f(θ1×1 +···+θkxk)≤θ1f(x1)+···+θkf(xk).
As in the case of convex sets, the inequality extends to infinite sums, integrals, and expected values. For example, if p(x) ≥ 0 on S ⊆ dom f, ∫_S p(x) dx = 1, then
f( ∫_S p(x) x dx ) ≤ ∫_S f(x) p(x) dx,
provided the integrals exist. In the most general case we can take any probability measure with support in dom f. If x is a random variable such that x ∈ dom f with probability one, and f is convex, then we have
provided the integrals exist. In the most general case we can take any probability measure with support in domf. If x is a random variable such that x ∈ domf with probability one, and f is convex, then we have
f(E x) ≤ E f(x), (3.5)
provided the expectations exist. We can recover the basic inequality (3.1) from this general form, by taking the random variable x to have support {x1,x2}, with
prob(x = x1) = θ, prob(x = x2) = 1 − θ. Thus the inequality (3.5) characterizes convexity: If f is not convex, there is a random variable x, with x ∈ dom f with probability one, such that f (E x) > E f (x).
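The inequality (3.5) is easy to observe by simulation. A small Monte Carlo sketch, assuming NumPy; the comparison holds up to sampling error in the estimate of E f(x):

```python
import numpy as np

rng = np.random.default_rng(5)
f = np.exp                        # a convex function
x = rng.standard_normal(100000)   # samples of a random variable x

# f(E x) <= E f(x); here E e^x = e^{1/2} while f(E x) is about e^0 = 1
print(f(x.mean()) <= f(x).mean())  # True
```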
All of these inequalities are now called Jensen’s inequality, even though the inequality studied by Jensen was the very simple one
f((x + y)/2) ≤ (f(x) + f(y))/2.
Remark 3.2 We can interpret (3.5) as follows. Suppose x ∈ dom f ⊆ Rn and z is any zero mean random vector in Rn. Then we have
E f (x + z) ≥ f (x).
Thus, randomization or dithering (i.e., adding a zero mean random vector to the
argument) cannot decrease the value of a convex function on average.
3.1.9 Inequalities
Many famous inequalities can be derived by applying Jensen’s inequality to some appropriate convex function. (Indeed, convexity and Jensen’s inequality can be made the foundation of a theory of inequalities.) As a simple example, consider the arithmetic-geometric mean inequality:
√ab ≤ (a + b)/2 (3.6) for a, b ≥ 0. The function − log x is convex; Jensen’s inequality with θ = 1/2 yields
−log((a + b)/2) ≤ (−log a − log b)/2.
Taking the exponential of both sides yields (3.6).
As a less trivial example we prove Hölder’s inequality: for p > 1, 1/p + 1/q = 1, and x, y ∈ Rn,
Σ_{i=1}^n xi yi ≤ ( Σ_{i=1}^n |xi|^p )^{1/p} ( Σ_{i=1}^n |yi|^q )^{1/q}.
By convexity of − log x, and Jensen’s inequality with general θ, we obtain the more general arithmetic-geometric mean inequality
a^θ b^{1−θ} ≤ θa + (1 − θ)b,
valid for a, b ≥ 0 and 0 ≤ θ ≤ 1. Applying this with
a = |xi|^p / Σ_{j=1}^n |xj|^p,  b = |yi|^q / Σ_{j=1}^n |yj|^q,  θ = 1/p,
yields
( |xi|^p / Σ_{j=1}^n |xj|^p )^{1/p} ( |yi|^q / Σ_{j=1}^n |yj|^q )^{1/q} ≤ |xi|^p / (p Σ_{j=1}^n |xj|^p) + |yi|^q / (q Σ_{j=1}^n |yj|^q).
Summing over i then yields Hölder’s inequality.
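The finished inequality can be spot-checked on random data. A minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
x, y = rng.standard_normal(10), rng.standard_normal(10)
p = 3.0; q = p / (p - 1.0)        # so that 1/p + 1/q = 1

lhs = x @ y
rhs = (np.abs(x)**p).sum()**(1/p) * (np.abs(y)**q).sum()**(1/q)
print(lhs <= rhs)  # True
```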

3.2 Operations that preserve convexity
In this section we describe some operations that preserve convexity or concavity of functions, or allow us to construct new convex and concave functions. We start with some simple operations such as addition, scaling, and pointwise supremum, and then describe some more sophisticated operations (some of which include the simple operations as special cases).
3.2.1 Nonnegative weighted sums
Evidently if f is a convex function and α ≥ 0, then the function αf is convex. If f1 and f2 are both convex functions, then so is their sum f1 + f2. Combining nonnegative scaling and addition, we see that the set of convex functions is itself a convex cone: a nonnegative weighted sum of convex functions,
f =w1f1 +···+wmfm,
is convex. Similarly, a nonnegative weighted sum of concave functions is concave. A nonnegative, nonzero weighted sum of strictly convex (concave) functions is strictly convex (concave).
These properties extend to infinite sums and integrals. For example if f(x,y) is convex in x for each y ∈ A, and w(y) ≥ 0 for each y ∈ A, then the function g
defined as
􏰑
g(x) = w(y)f (x, y) dy A
is convex in x (provided the integral exists).
The fact that convexity is preserved under nonnegative scaling and addition is
easily verified directly, or can be seen in terms of the associated epigraphs. For example, if w ≥ 0 and f is convex, we have
epi(wf) = [I 0; 0 w] epi f,
which is convex because the image of a convex set under a linear mapping is convex.
3.2.2 Composition with an affine mapping
Suppose f : Rn → R, A ∈ Rn×m, and b ∈ Rn. Define g : Rm → R by g(x) = f(Ax + b),
with domg = {x | Ax+b ∈ domf}. Then if f is convex, so is g; if f is concave, so is g.

3.2.3 Pointwise maximum and supremum
If f1 and f2 are convex functions then their pointwise maximum f, defined by f(x) = max{f1(x),f2(x)},
with dom f = dom f1 ∩ dom f2, is also convex. This property is easily verified: if 0 ≤ θ ≤ 1 and x, y ∈ dom f, then
f(θx+(1−θ)y) = max{f1(θx + (1 − θ)y), f2(θx + (1 − θ)y)}
≤ max{θf1(x) + (1 − θ)f1(y), θf2(x) + (1 − θ)f2(y)}
≤ θ max{f1(x), f2(x)} + (1 − θ) max{f1(y), f2(y)} = θf(x) + (1 − θ)f(y),
which establishes convexity of f. It is easily shown that if f1,…,fm are convex, then their pointwise maximum
f(x) = max{f1(x), . . . , fm(x)}
is also convex.
Example 3.5 Piecewise-linear functions. The function
f(x) = max{aT1 x + b1,…,aTLx + bL}
defines a piecewise-linear (or really, affine) function (with L or fewer regions). It is convex since it is the pointwise maximum of affine functions.
The converse can also be shown: any piecewise-linear convex function with L or fewer regions can be expressed in this form. (See exercise 3.29.)
Example 3.6 Sum of r largest components. For x ∈ Rn we denote by x[i] the ith largest component of x, i.e.,
x[1] ≥ x[2] ≥ ··· ≥ x[n]
are the components of x sorted in nonincreasing order. Then the function
f(x) = x[1] + ··· + x[r] = max{ xi1 + ··· + xir | 1 ≤ i1 < i2 < ··· < ir ≤ n },
i.e., the sum of the r largest components of x, is convex, since it is the pointwise maximum of n!/(r!(n − r)!) linear functions (the sums of all possible choices of r different components).

Example 3.9 Least-squares cost as a function of weights. Let a1, . . . , an ∈ Rm and b1, . . . , bn ∈ R. In a weighted least-squares problem we minimize the objective Σ_{i=1}^n wi (aiT x − bi)^2 over x ∈ Rm; the weights wi are allowed to be negative, so the objective can be unbounded below. We define the optimal weighted least-squares cost as
g(w) = inf_x Σ_{i=1}^n wi (aiT x − bi)^2,
with domain
dom g = { w | inf_x Σ_{i=1}^n wi (aiT x − bi)^2 > −∞ }.
Since g is the infimum of a family of linear functions of w (indexed by x ∈ Rm), it is a concave function of w.
We can derive an explicit expression for g, at least on part of its domain. Let W = diag(w), the diagonal matrix with elements w1, . . . , wn, and let A ∈ Rn×m have rows aiT, so we have
g(w) = inf_x (Ax − b)T W (Ax − b) = inf_x ( xT AT WAx − 2bT WAx + bT Wb ).
From this we see that if AT WA ̸≽ 0, the quadratic function is unbounded below in x, so g(w) = −∞, i.e., w ∉ dom g. We can give a simple expression for g when AT WA ≻ 0 (which defines a strict linear matrix inequality), by analytically minimizing the quadratic function:
g(w) = bT Wb − bT WA (AT WA)−1 AT Wb
= Σ_{i=1}^n wi bi^2 − ( Σ_{i=1}^n wi bi ai )T ( Σ_{j=1}^n wj aj ajT )−1 ( Σ_{i=1}^n wi bi ai ).
Concavity of g from this expression is not immediately obvious (but does follow, for example, from convexity of the matrix fractional function; see example 3.4).

Example 3.10 Maximum eigenvalue of a symmetric matrix. The function f(X) = λmax(X), with dom f = Sm, is convex. To see this, we express f as
f(X) = sup{ yT Xy | ∥y∥2 = 1 },
i.e., as the pointwise supremum of a family of linear functions of X (i.e., yT Xy) indexed by y ∈ Rm.

Example 3.11 Norm of a matrix. Consider f(X) = ∥X∥2 with dom f = Rp×q, where ∥ · ∥2 denotes the spectral norm or maximum singular value. Convexity of f follows from
f(X) = sup{ uT Xv | ∥u∥2 = 1, ∥v∥2 = 1 },
which shows it is the pointwise supremum of a family of linear functions of X.
As a generalization suppose ∥ · ∥a and ∥ · ∥b are norms on Rp and Rq, respectively. The induced norm of a matrix X ∈ Rp×q is defined as
∥X∥a,b = sup_{v≠0} ∥Xv∥a / ∥v∥b.
(This reduces to the spectral norm when both norms are Euclidean.) The induced norm can be expressed as
∥X∥a,b = sup{ ∥Xv∥a | ∥v∥b = 1 } = sup{ uT Xv | ∥u∥a∗ = 1, ∥v∥b = 1 },
where ∥ · ∥a∗ is the dual norm of ∥ · ∥a, and we use the fact that
∥z∥a = sup{ uT z | ∥u∥a∗ = 1 }.
Since we have expressed ∥X∥a,b as a supremum of linear functions of X, it is a convex function.
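The convexity of λmax established in example 3.10 can be checked numerically along a chord. A short sketch, assuming NumPy (the symmetric matrices and the weight θ are arbitrary):

```python
import numpy as np

def lam_max(X):
    return np.linalg.eigvalsh(X).max()

rng = np.random.default_rng(7)
n = 5
def rand_sym():
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

X, Y, theta = rand_sym(), rand_sym(), 0.3
lhs = lam_max(theta * X + (1 - theta) * Y)
rhs = theta * lam_max(X) + (1 - theta) * lam_max(Y)
print(lhs <= rhs + 1e-12)  # True: lambda_max is convex on S^n
```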

Representation as pointwise supremum of affine functions
The examples above illustrate a good method for establishing convexity of a func- tion: by expressing it as the pointwise supremum of a family of affine functions. Except for a technical condition, a converse holds: almost every convex function can be expressed as the pointwise supremum of a family of affine functions. For example, if f : Rn → R is convex, with domf = Rn, then we have
f(x) = sup{g(x) | g affine, g(z) ≤ f(z) for all z}.
In other words, f is the pointwise supremum of the set of all affine global under- estimators of it. We give the proof of this result below, and leave the case where dom f ̸= Rn as an exercise (exercise 3.28).
Suppose f is convex with dom f = Rn. The inequality
f(x) ≥ sup{g(x) | g affine, g(z) ≤ f(z) for all z}
is clear, since if g is any affine underestimator of f, we have g(x) ≤ f(x). To establish equality, we will show that for each x ∈ Rn, there is an affine function g, which is a global underestimator of f, and satisfies g(x) = f(x).
The epigraph of f is, of course, a convex set. Hence we can find a supporting hyperplane to it at (x,f(x)), i.e., a ∈ Rn and b ∈ R with (a,b) ̸= 0 and
[a; b]T [x − z; f(x) − t] ≤ 0
for all (z, t) ∈ epi f. This means that
aT (x − z) + b(f(x) − f(z) − s) ≤ 0 (3.8)
for all z ∈ dom f = Rn and all s ≥ 0 (since (z, t) ∈ epi f means t = f(z) + s for some s ≥ 0). For the inequality (3.8) to hold for all s ≥ 0, we must have b ≥ 0. If b = 0, then the inequality (3.8) reduces to aT (x − z) ≤ 0 for all z ∈ Rn, which implies a = 0 and contradicts (a, b) ≠ 0. We conclude that b > 0, i.e., that the supporting hyperplane is not vertical.
Using the fact that b > 0 we rewrite (3.8) for s = 0 as g(z) = f(x) + (a/b)T (x − z) ≤ f(z)
for all z. The function g is an affine underestimator of f, and satisfies g(x) = f(x).

3.2.4 Composition
Inthissectionweexamineconditionsonh:Rk →Randg:Rn →Rk that guarantee convexity or concavity of their composition f = h ◦ g : Rn → R, defined by
f(x) = h(g(x)), domf = {x ∈ domg | g(x) ∈ domh}.

Scalar composition
We first consider the case k = 1, so h : R → R and g : Rn → R. We can restrict ourselves to the case n = 1 (since convexity is determined by the behavior of a function on arbitrary lines that intersect its domain).
To discover the composition rules, we start by assuming that h and g are twice differentiable, with dom g = dom h = R. In this case, convexity of f reduces to f′′ ≥ 0 (meaning, f′′(x) ≥ 0 for all x ∈ R).
The second derivative of the composition function f = h ◦ g is given by
f′′(x) = h′′(g(x))g′(x)2 + h′(g(x))g′′(x). (3.9)
Now suppose, for example, that g is convex (so g′′ ≥ 0) and h is convex and nondecreasing (so h′′ ≥ 0 and h′ ≥ 0). It follows from (3.9) that f′′ ≥ 0, i.e., f is convex. In a similar way, the expression (3.9) gives the results:
f is convex if h is convex and nondecreasing, and g is convex,
f is convex if h is convex and nonincreasing, and g is concave,
f is concave if h is concave and nondecreasing, and g is concave,
f is concave if h is concave and nonincreasing, and g is convex.
(3.10)
These statements are valid when the functions g and h are twice differentiable and have domains that are all of R. It turns out that very similar composition rules hold in the general case n > 1, without assuming differentiability of h and g, or that dom g = Rn and dom h = R:
f is convex if h is convex, h̃ is nondecreasing, and g is convex,
f is convex if h is convex, h̃ is nonincreasing, and g is concave,
f is concave if h is concave, h̃ is nondecreasing, and g is concave,
f is concave if h is concave, h̃ is nonincreasing, and g is convex.
(3.11)
Here h̃ denotes the extended-value extension of the function h, which assigns the value ∞ (−∞) to points not in dom h for h convex (concave). The only difference between these results, and the results in (3.10), is that we require that the extended-value extension h̃ be nonincreasing or nondecreasing, on all of R.
To understand what this means, suppose h is convex, so h̃ takes on the value ∞ outside dom h. To say that h̃ is nondecreasing means that for any x, y ∈ R, with x < y, we have h̃(x) ≤ h̃(y). In particular, this means that if y ∈ dom h, then x ∈ dom h. In other words, the domain of h extends infinitely in the negative direction; it is either R, or an interval of the form (−∞, a) or (−∞, a]. In a similar way, to say that h is convex and h̃ is nonincreasing means that h is nonincreasing and dom h extends infinitely in the positive direction. This is illustrated in figure 3.7.

Figure 3.7 Left. The function x^2, with domain R+, is convex and nondecreasing on its domain, but its extended-value extension is not nondecreasing. Right. The function max{x, 0}^2, with domain R, is convex, and its extended-value extension is nondecreasing.

Example 3.12 Some simple examples will illustrate the conditions on h that appear in the composition theorems.
• The function h(x) = log x, with dom h = R++, is concave and satisfies the condition h̃ nondecreasing.
• The function h(x) = x^{1/2}, with dom h = R+, is concave and satisfies the condition h̃ nondecreasing.
• The function h(x) = x^{3/2}, with dom h = R+, is convex but does not satisfy the condition h̃ nondecreasing. For example, we have h̃(−1) = ∞, but h̃(1) = 1.
• The function h(x) = x^{3/2} for x ≥ 0, and h(x) = 0 for x < 0, with dom h = R, is convex and does satisfy the condition h̃ nondecreasing.

The composition results (3.11) can be proved directly, without assuming differentiability, or using the formula (3.9). As an example, we will prove the following composition theorem: if g is convex, h is convex, and h̃ is nondecreasing, then f = h ◦ g is convex. Assume that x, y ∈ dom f, and 0 ≤ θ ≤ 1. Since x, y ∈ dom f, we have that x, y ∈ dom g and g(x), g(y) ∈ dom h. Since dom g is convex, we conclude that θx + (1 − θ)y ∈ dom g, and from convexity of g, we have
g(θx + (1 − θ)y) ≤ θg(x) + (1 − θ)g(y). (3.12)
Since g(x), g(y) ∈ dom h, we conclude that θg(x) + (1 − θ)g(y) ∈ dom h, i.e., the righthand side of (3.12) is in dom h. Now we use the assumption that h̃ is nondecreasing, which means that its domain extends infinitely in the negative direction. Since the righthand side of (3.12) is in dom h, we conclude that the lefthand side, i.e., g(θx + (1 − θ)y) ∈ dom h. This means that θx + (1 − θ)y ∈ dom f. At this point, we have shown that dom f is convex.
Now using the fact that h̃ is nondecreasing and the inequality (3.12), we get
h(g(θx + (1 − θ)y)) ≤ h(θg(x) + (1 − θ)g(y)). (3.13)
From convexity of h, we have
h(θg(x) + (1 − θ)g(y)) ≤ θh(g(x)) + (1 − θ)h(g(y)). (3.14)
Putting (3.13) and (3.14) together, we have
h(g(θx + (1 − θ)y)) ≤ θh(g(x)) + (1 − θ)h(g(y)),
which proves the composition theorem.

Example 3.13 Simple composition results.
• If g is convex then exp g(x) is convex.
• If g is concave and positive, then log g(x) is concave.
• If g is concave and positive, then 1/g(x) is convex.
• If g is convex and nonnegative and p ≥ 1, then g(x)^p is convex.
• If g is convex then − log(−g(x)) is convex on {x | g(x) < 0}.

Remark 3.3 The requirement that monotonicity hold for the extended-value extension h̃, and not just the function h, cannot be removed. For example, consider the function g(x) = x^2, with dom g = R, and h(x) = 0, with dom h = [1, 2]. Here g is convex, and h is convex and nondecreasing. But the function f = h ◦ g, given by
f(x) = 0, dom f = [−√2, −1] ∪ [1, √2],
is not convex, since its domain is not convex.
Here, of course, the function h̃ is not nondecreasing.

Vector composition

We now turn to the more complicated case when k ≥ 1. Suppose
f(x) = h(g(x)) = h(g1(x), . . . , gk(x)),
with h : Rk → R, gi : Rn → R. Again without loss of generality we can assume n = 1. As in the case k = 1, we start by assuming the functions are twice differentiable, with dom g = R and dom h = Rk, in order to discover the composition rules. We have
f′′(x) = g′(x)T ∇2h(g(x)) g′(x) + ∇h(g(x))T g′′(x), (3.15)
which is the vector analog of (3.9). Again the issue is to determine conditions under which f′′(x) ≥ 0 for all x (or f′′(x) ≤ 0 for all x for concavity). From (3.15) we can derive many rules, for example:
f is convex if h is convex, h is nondecreasing in each argument, and gi are convex,
f is convex if h is convex, h is nonincreasing in each argument, and gi are concave,
f is concave if h is concave, h is nondecreasing in each argument, and gi are concave.
As in the scalar case, similar composition results hold in general, with n > 1, no assumption of differentiability of h or g, and general domains. For the general results, the monotonicity condition on h must hold for the extended-value extension h̃.
To understand the meaning of the condition that the extended-value extension h̃ be monotonic, we consider the case where h : Rk → R is convex, and h̃ nondecreasing, i.e., whenever u ≼ v, we have h̃(u) ≤ h̃(v). This implies that if v ∈ dom h, then so is u: the domain of h must extend infinitely in the −Rk+ directions. We can express this compactly as dom h − Rk+ = dom h.
Example 3.14 Vector composition examples.
• Let h(z) = z[1] +···+z[r], the sum of the r largest components of z ∈ Rk. Then h is convex and nondecreasing in each argument. Suppose g1, . . . , gk are convex functions on Rn. Then the composition function f = h ◦ g, i.e., the pointwise sum of the r largest gi’s, is convex.
• The function h(z) = log( Σ_{i=1}^k e^{zi} ) is convex and nondecreasing in each argument, so log( Σ_{i=1}^k e^{gi} ) is convex whenever gi are.
• For 0 < p ≤ 1, the function h(z) = ( Σ_{i=1}^k zi^p )^{1/p} on Rk+ is concave, and its extension (which takes the value −∞ for z ̸≽ 0) is nondecreasing in each argument. So if gi are concave and nonnegative, ( Σ_{i=1}^k gi(x)^p )^{1/p} is concave.

3.2.5 Minimization

We have seen that the maximum or supremum of an arbitrary family of convex functions is convex. It turns out that some special forms of minimization also yield convex functions. If f is convex in (x, y), and C is a convex nonempty set, then the function
g(x) = inf_{y∈C} f(x, y) (3.16)
is convex in x, provided g(x) > −∞ for all x. The domain of g is the projection of dom f on its x-coordinates, i.e.,
dom g = {x | (x, y) ∈ dom f for some y ∈ C}.
We prove this by verifying Jensen’s inequality for x1, x2 ∈ dom g. Let ε > 0. Then there are y1, y2 ∈ C such that f(xi, yi) ≤ g(xi) + ε for i = 1, 2. Now let θ ∈ [0, 1]. We have
g(θx1 + (1 − θ)x2) = inf_{y∈C} f(θx1 + (1 − θ)x2, y)
≤ f(θx1 +(1−θ)x2,θy1 +(1−θ)y2)
≤ θf(x1, y1) + (1 − θ)f(x2, y2)
≤ θg(x1) + (1 − θ)g(x2) + ε.
Since this holds for any ε > 0, we have
g(θx1 + (1 − θ)x2) ≤ θg(x1) + (1 − θ)g(x2).
The result can also be seen in terms of epigraphs. With f, g, and C defined as in (3.16), and assuming the infimum over y ∈ C is attained for each x, we have
epig = {(x,t) | (x,y,t) ∈ epif for some y ∈ C}.
Thus epig is convex, since it is the projection of a convex set on some of its
components.
Example 3.15 Schur complement. Suppose the quadratic function
f(x, y) = xT Ax + 2xT By + yT Cy
(where A and C are symmetric) is convex in (x, y), which means
[A B; BT C] ≽ 0.
We can express g(x) = inf_y f(x, y) as
g(x) = xT (A − BC†BT) x,
where C† is the pseudo-inverse of C (see §A.5.4). By the minimization rule, g is convex, so we conclude that A − BC†BT ≽ 0.
If C is invertible, i.e., C ≻ 0, then the matrix A − BC−1BT is called the Schur complement of C in the matrix
[A B; BT C]
(see §A.5.5).

Example 3.16 Distance to a set. The distance of a point x to a set S ⊆ Rn, in the norm ∥ · ∥, is defined as
dist(x, S) = inf_{y∈S} ∥x − y∥.
The function ∥x − y∥ is convex in (x, y), so if the set S is convex, the distance function dist(x, S) is a convex function of x.
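The Schur complement expression for g in example 3.15 can be confirmed numerically by comparing it with the value of f at the analytic minimizer y = −C−1BT x (we take C ≻ 0, so C† = C−1). A minimal sketch, assuming NumPy; the block matrix is made positive definite by construction:

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 3, 2
M = rng.standard_normal((n + m, n + m))
Q = M @ M.T + 0.1 * np.eye(n + m)            # [[A, B], [B^T, C]] > 0
A, B, C = Q[:n, :n], Q[:n, n:], Q[n:, n:]

x = rng.standard_normal(n)
y_star = -np.linalg.solve(C, B.T @ x)        # minimizer of f(x, y) over y
f = lambda y: x @ A @ x + 2 * x @ B @ y + y @ C @ y
g = x @ (A - B @ np.linalg.solve(C, B.T)) @ x  # Schur complement form

print(np.isclose(f(y_star), g))  # True
```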

Example 3.17 Suppose h is convex. Then the function g defined as g(x) = inf{h(y) | Ay = x}
is convex. To see this, we define f by
f(x, y) = h(y) if Ay = x, and f(x, y) = ∞ otherwise,
which is convex in (x,y). Then g is the minimum of f over y, and hence is convex.
(It is not hard to show directly that g is convex.)
3.2.6 Perspective of a function
If f : Rn → R, then the perspective of f is the function g : Rn+1 → R defined by g(x, t) = tf (x/t),
with domain
dom g = {(x, t) | x/t ∈ dom f, t > 0}.
The perspective operation preserves convexity: If f is a convex function, then so is its perspective function g. Similarly, if f is concave, then so is g.
This can be proved several ways, for example, direct verification of the defining inequality (see exercise 3.33). We give a short proof here using epigraphs and the perspective mapping on Rn+1 described in §2.3.3 (which will also explain the name ‘perspective’). For t > 0 we have
(x,t,s) ∈ epig ⇐⇒ tf(x/t) ≤ s ⇐⇒ f(x/t)≤s/t
⇐⇒ (x/t, s/t) ∈ epi f.
Therefore epig is the inverse image of epif under the perspective mapping that takes (u, v, w) to (u, w)/v. It follows (see §2.3.3) that epi g is convex, so the function g is convex.
Example 3.18 Euclidean norm squared. The perspective of the convex function f(x) = xT x on Rn is
g(x, t) = t (x/t)T (x/t) = xT x / t,
which is convex in (x, t) for t > 0.
We can deduce convexity of g using several other methods. First, we can express g as the sum of the quadratic-over-linear functions x2i /t, which were shown to be convex in §3.1.5. We can also express g as a special case of the matrix fractional function xT (tI)−1x (see example 3.4).

Example 3.19 Negative logarithm. Consider the convex function f(x) = − log x on R++. Its perspective is
g(x, t) = −t log(x/t) = t log(t/x) = t log t − t log x,
and is convex on R2++. The function g is called the relative entropy of t and x. For x = 1, g reduces to the negative entropy function.
From convexity of g we can establish convexity or concavity of several interesting related functions. First, the relative entropy of two vectors u, v ∈ Rn++, defined as
Σ_{i=1}^n ui log(ui/vi),
is convex in (u, v), since it is a sum of relative entropies of ui, vi.
A closely related function is the Kullback-Leibler divergence between u, v ∈ Rn++, given by
Dkl(u, v) = Σ_{i=1}^n ( ui log(ui/vi) − ui + vi ), (3.17)
which is convex, since it is the relative entropy plus a linear function of (u, v). The Kullback-Leibler divergence satisfies Dkl(u, v) ≥ 0, and Dkl(u, v) = 0 if and only if u = v, and so can be used as a measure of deviation between two positive vectors; see exercise 3.13. (Note that the relative entropy and the Kullback-Leibler divergence are the same when u and v are probability vectors, i.e., satisfy 1T u = 1T v = 1.)
If we take vi = 1T u in the relative entropy function, we obtain the concave (and homogeneous) function of u ∈ Rn++ given by
Σ_{i=1}^n ui log(1T u / ui) = (1T u) Σ_{i=1}^n zi log(1/zi),
where z = u/(1T u), which is called the normalized entropy function. The vector z = u/1T u is a normalized vector or probability distribution, since its components sum to one; the normalized entropy of u is 1T u times the entropy of this normalized distribution.
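The properties Dkl(u, v) ≥ 0, with equality at u = v, are easy to spot-check from (3.17). A short sketch, assuming NumPy (the vectors are random and positive):

```python
import numpy as np

def dkl(u, v):
    # Kullback-Leibler divergence (3.17) for positive vectors u, v
    return np.sum(u * np.log(u / v) - u + v)

rng = np.random.default_rng(9)
u, v = rng.uniform(0.1, 2.0, 4), rng.uniform(0.1, 2.0, 4)
print(dkl(u, v) >= 0, np.isclose(dkl(u, u), 0.0))  # True True
```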
Example 3.20 Suppose f : Rm → R is convex, and A ∈ Rm×n, b ∈ Rm, c ∈ Rn, and d ∈ R. We define
g(x) = (cT x + d) f( (Ax + b)/(cT x + d) ),
with
dom g = {x | cT x + d > 0, (Ax + b)/(cT x + d) ∈ dom f}.
Then g is convex.
3.3 The conjugate function

In this section we introduce an operation that will play an important role in later chapters.

Figure 3.8 A function f : R → R, and a value y ∈ R. The conjugate function f∗(y) is the maximum gap between the linear function yx and f(x), as shown by the dashed line in the figure. If f is differentiable, this occurs at a point x where f′(x) = y.
3.3.1 Definition and examples
Let f : Rn → R. The function f∗ : Rn → R, defined as
f∗(y) = sup_{x∈dom f} ( yT x − f(x) ), (3.18)
is called the conjugate of the function f. The domain of the conjugate function consists of y ∈ Rn for which the supremum is finite, i.e., for which the difference yT x − f(x) is bounded above on dom f. This definition is illustrated in figure 3.8.
We see immediately that f∗ is a convex function, since it is the pointwise supremum of a family of convex (indeed, affine) functions of y. This is true whether or not f is convex. (Note that when f is convex, the subscript x ∈ dom f is not necessary since, by convention, yT x − f (x) = −∞ for x ̸∈ dom f .)
We start with some simple examples, and then describe some rules for conjugat- ing functions. This allows us to derive an analytical expression for the conjugate of many common convex functions.
Example 3.21 We derive the conjugates of some convex functions on R.
• Affine function. f(x) = ax+b. As a function of x, yx−ax−b is bounded if and only if y = a, in which case it is constant. Therefore the domain of the conjugate function f∗ is the singleton {a}, and f∗(a) = −b.
• Negative logarithm. f(x) = − log x, with dom f = R++. The function xy+log x is unbounded above if y ≥ 0 and reaches its maximum at x = −1/y otherwise. Therefore, domf∗ = {y | y < 0} = −R++ and f∗(y) = −log(−y)−1 for y < 0. • Exponential. f(x) = ex. xy−ex is unbounded if y < 0. For y > 0, xy−ex reachesitsmaximumatx=logy,sowehavef∗(y)=ylogy−y. Fory=0,
xy
x (0,−f∗(y))

92
3 Convex functions
f∗(y) = supx −ex = 0. In summary, domf∗ = R+ and f∗(y) = ylogy − y (with the interpretation 0 log 0 = 0).
• Negative entropy. f(x) = x log x, with dom f = R+ (and f(0) = 0). The function xy − x log x is bounded above on R+ for all y, hence dom f∗ = R. It attains its maximum at x = e^{y−1}, and substituting we find f∗(y) = e^{y−1}.
• Inverse. f(x) = 1/x on R++. For y > 0, yx − 1/x is unbounded above. For y = 0 this function has supremum 0; for y < 0 the supremum is attained at x = (−y)^{−1/2}. Therefore we have f∗(y) = −2(−y)^{1/2}, with dom f∗ = −R+.

Example 3.22 Strictly convex quadratic function. Consider f(x) = (1/2)xT Qx, with Q ∈ Sn++. The function yT x − (1/2)xT Qx is bounded above as a function of x for all y. It attains its maximum at x = Q−1y, so
f∗(y) = (1/2)yT Q−1 y.

Example 3.23 Log-determinant. We consider f(X) = log det X−1 on Sn++. The conjugate function is defined as
f∗(Y) = sup_{X≻0} ( tr(YX) + log det X ),
since tr(YX) is the standard inner product on Sn. We first show that tr(YX) + log det X is unbounded above unless Y ≺ 0. If Y ̸≺ 0, then Y has an eigenvector v, with ∥v∥2 = 1, and eigenvalue λ ≥ 0. Taking X = I + tvvT we find that
tr(YX) + log det X = tr Y + tλ + log det(I + tvvT) = tr Y + tλ + log(1 + t),
which is unbounded above as t → ∞.
Now consider the case Y ≺ 0. We can find the maximizing X by setting the gradient with respect to X equal to zero:
∇_X ( tr(YX) + log det X ) = Y + X−1 = 0
(see §A.4.1), which yields X = −Y−1 (which is, indeed, positive definite). Therefore we have
f∗(Y) = log det(−Y)−1 − n,
with dom f∗ = −Sn++.

Example 3.24 Indicator function. Let IS be the indicator function of a (not necessarily convex) set S ⊆ Rn, i.e., IS(x) = 0 on dom IS = S. Its conjugate is
IS∗(y) = sup_{x∈S} yT x,
which is the support function of the set S.

Example 3.25 Log-sum-exp function. To derive the conjugate of the log-sum-exp function f(x) = log( Σ_{i=1}^n e^{xi} ), we first determine the values of y for which the maximum over x of yT x − f(x) is attained. By setting the gradient with respect to x equal to zero, we obtain the condition
yi = e^{xi} / Σ_{j=1}^n e^{xj},  i = 1, . . . , n.
These equations are solvable for x if and only if y ≻ 0 and 1T y = 1. By substituting the expression for yi into yT x − f(x) we obtain f∗(y) = Σ_{i=1}^n yi log yi. This expression for f∗ is still correct if some components of y are zero, as long as y ≽ 0 and 1T y = 1, and we interpret 0 log 0 as 0.
In fact the domain of f∗ is exactly given by 1T y = 1, y ≽ 0. To show this, suppose that a component of y is negative, say, yk < 0. Then we can show that yT x − f(x) is unbounded above by choosing xk = −t, and xi = 0, i ≠ k, and letting t go to infinity. If y ≽ 0 but 1T y ≠ 1, we choose x = t1, so that yT x − f(x) = t1T y − t − log n. If 1T y > 1, this grows unboundedly as t → ∞; if 1T y < 1, it grows unboundedly as t → −∞. In summary,
f∗(y) = Σ_{i=1}^n yi log yi if y ≽ 0 and 1T y = 1, and f∗(y) = ∞ otherwise.
In other words, the conjugate of the log-sum-exp function is the negative entropy function, restricted to the probability simplex.

Example 3.26 Norm. Let ∥ · ∥ be a norm on Rn, with dual norm ∥ · ∥∗. We will show that the conjugate of f(x) = ∥x∥ is
f∗(y) = 0 if ∥y∥∗ ≤ 1, and f∗(y) = ∞ otherwise,
i.e., the conjugate of a norm is the indicator function of the dual norm unit ball.
If ∥y∥∗ > 1, then by definition of the dual norm, there is a z ∈ Rn with ∥z∥ ≤ 1 and yT z > 1. Taking x = tz and letting t → ∞, we have
yT x − ∥x∥ = t(yT z − ∥z∥) → ∞,
which shows that f∗(y) = ∞. Conversely, if ∥y∥∗ ≤ 1, then we have yT x ≤ ∥x∥∥y∥∗ for all x, which implies for all x, yT x − ∥x∥ ≤ 0. Therefore x = 0 is the value that maximizes yT x − ∥x∥, with maximum value 0.

Example 3.27 Norm squared. Now consider the function f(x) = (1/2)∥x∥2, where ∥ · ∥ is a norm, with dual norm ∥ · ∥∗. We will show that its conjugate is f∗(y) = (1/2)∥y∥∗^2. From yT x ≤ ∥y∥∗∥x∥, we conclude
yT x − (1/2)∥x∥2 ≤ ∥y∥∗∥x∥ − (1/2)∥x∥2

for all x. The righthand side is a quadratic function of ∥x∥, which has maximum value (1/2)∥y∥2∗. Therefore for all x, we have
yT x − (1/2)∥x∥2 ≤ (1/2)∥y∥2∗, which shows that f∗(y) ≤ (1/2)∥y∥2∗.
To show the other inequality, let x be any vector with yT x = ∥y∥∗∥x∥, scaled so that ∥x∥ = ∥y∥∗. Then we have, for this x,
yT x − (1/2)∥x∥2 = (1/2)∥y∥2∗, which shows that f∗(y) ≥ (1/2)∥y∥2∗.
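Conjugates like those in example 3.21 can be approximated numerically by maximizing yx − f(x) over a fine grid. A minimal sketch, assuming NumPy; the grid bounds and spacing are arbitrary choices that happen to contain the maximizer x = log y:

```python
import numpy as np

def conjugate(f, y, xs):
    # crude numerical conjugate: maximize y*x - f(x) over the grid xs
    return np.max(y * xs - f(xs))

xs = np.linspace(-10.0, 10.0, 200001)
y = 1.7
approx = conjugate(np.exp, y, xs)
exact = y * np.log(y) - y          # f*(y) for f(x) = e^x, from example 3.21
print(abs(approx - exact) < 1e-3)  # True
```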
Example 3.28 Revenue and profit functions. We consider a business or enterprise that consumes n resources and produces a product that can be sold. We let r = (r1, . . . , rn) denote the vector of resource quantities consumed, and S(r) denote the sales revenue derived from the product produced (as a function of the resources consumed). Now let pi denote the price (per unit) of resource i, so the total amount paid for resources by the enterprise is pT r. The profit derived by the firm is then S(r) − pT r. Let us fix the prices of the resources, and ask what is the maximum profit that can be made, by wisely choosing the quantities of resources consumed. This maximum profit is given by
M(p) = sup_r ( S(r) − pT r ).
The function M(p) gives the maximum profit attainable, as a function of the resource prices. In terms of conjugate functions, we can express M as
M(p) = (−S)∗(−p).
Thus the maximum profit (as a function of resource prices) is closely related to the conjugate of gross sales (as a function of resources consumed).

3.3.2 Basic properties
Fenchel’s inequality
From the definition of conjugate function, we immediately obtain the inequality f(x) + f∗(y) ≥ xT y
for all x, y. This is called Fenchel’s inequality (or Young’s inequality when f is differentiable).
For example with f(x) = (1/2)xT Qx, where Q ∈ Sn++, we obtain the inequality xT y ≤ (1/2)xT Qx + (1/2)yT Q−1y.
Conjugate of the conjugate
The examples above, and the name ‘conjugate’, suggest that the conjugate of the conjugate of a convex function is the original function. This is the case provided a technical condition holds: if f is convex, and f is closed (i.e., epi f is a closed set; see §A.3.3), then f∗∗ = f. For example, if domf = Rn, then we have f∗∗ = f, i.e., the conjugate of the conjugate of f is f again (see exercise 3.39).

Differentiable functions
The conjugate of a differentiable function f is also called the Legendre transform of f. (To distinguish the general definition from the differentiable case, the term Fenchel conjugate is sometimes used instead of conjugate.)
Suppose f is convex and differentiable, with domf = Rn. Any maximizer x∗ of yT x−f(x) satisfies y = ∇f(x∗), and conversely, if x∗ satisfies y = ∇f(x∗), then x∗ maximizes yT x − f(x). Therefore, if y = ∇f(x∗), we have
f∗(y) = x∗T ∇f(x∗) − f(x∗).
This allows us to determine f∗(y) for any y for which we can solve the gradient equation y = ∇f(z) for z.
We can express this another way. Let z ∈ Rn be arbitrary and define y = ∇f(z). Then we have
f∗(y) = zT ∇f(z) − f(z).

Scaling and composition with affine transformation

For a > 0 and b ∈ R, the conjugate of g(x) = af(x) + b is g∗(y) = af∗(y/a) − b.
Suppose A ∈ Rn×n is nonsingular and b ∈ Rn. Then the conjugate of g(x) = f(Ax + b) is
g∗(y) = f∗(A−T y) − bT A−T y,
with dom g∗ = AT dom f∗.

Sums of independent functions

If f(u, v) = f1(u) + f2(v), where f1 and f2 are convex functions with conjugates f1∗ and f2∗, respectively, then
f∗(w, z) = f1∗(w) + f2∗(z).
In other words, the conjugate of the sum of independent convex functions is the sum of the conjugates. (‘Independent’ means they are functions of different variables.)
3.4 Quasiconvex functions

3.4.1 Definition and examples
A function f : Rn → R is called quasiconvex (or unimodal) if its domain and all its sublevel sets
Sα ={x∈domf |f(x)≤α},
for α ∈ R, are convex. A function is quasiconcave if −f is quasiconvex, i.e., every superlevel set {x | f(x) ≥ α} is convex. A function that is both quasiconvex and quasiconcave is called quasilinear. If a function f is quasilinear, then its domain, and every level set {x | f (x) = α} is convex.

Figure 3.9 A quasiconvex function on R. For each α, the α-sublevel set Sα is convex, i.e., an interval. The sublevel set Sα is the interval [a,b]. The sublevel set Sβ is the interval (−∞, c].
For a function on R, quasiconvexity requires that each sublevel set be an interval (including, possibly, an infinite interval). An example of a quasiconvex function on R is shown in figure 3.9.
Convex functions have convex sublevel sets, and so are quasiconvex. But simple examples, such as the one shown in figure 3.9, show that the converse is not true.
Example 3.29 Some examples on R:
• Logarithm. log x on R++ is quasiconvex (and quasiconcave, hence quasilinear).
• Ceiling function. ceil(x) = inf{z ∈ Z | z ≥ x} is quasiconvex (and quasiconcave).
These examples show that quasiconvex functions can be concave, or discontinuous. We now give some examples on Rn.
Example 3.30 Length of a vector. We define the length of x ∈ Rn as the largest index of a nonzero component, i.e.,
f(x)=max{i|xi ̸=0}.
(We define the length of the zero vector to be zero.) This function is quasiconvex on
Rn, since its sublevel sets are subspaces:
f(x) ≤ α  ⇐⇒  xi = 0 for i = ⌊α⌋ + 1, . . . , n.
Example 3.31 Consider f : R2 → R, with domf = R2+ and f(x1,x2) = x1x2. This function is neither convex nor concave since its Hessian
∇2f(x) = [0 1; 1 0]
is indefinite; it has one positive and one negative eigenvalue. The function f is quasiconcave, however, since the superlevel sets
{x ∈ R2+ | x1x2 ≥ α}
are convex sets for all α. (Note, however, that f is not quasiconcave on R2.)
Example 3.32 Linear-fractional function. The function
f(x) = (aTx + b)/(cTx + d),
with dom f = {x | cTx + d > 0}, is quasiconvex, and quasiconcave, i.e., quasilinear. Its α-sublevel set is
Sα = {x | cTx + d > 0, (aTx + b)/(cTx + d) ≤ α}
   = {x | cTx + d > 0, aTx + b ≤ α(cTx + d)},
which is convex, since it is the intersection of an open halfspace and a closed halfspace.
(The same method can be used to show that its superlevel sets are convex.)

Example 3.33 Distance ratio function. Suppose a, b ∈ Rn, and define
f(x) = ∥x − a∥2 / ∥x − b∥2,
i.e., the ratio of the Euclidean distance to a to the distance to b. Then f is quasiconvex on the halfspace {x | ∥x − a∥2 ≤ ∥x − b∥2}. To see this, we consider the α-sublevel set of f, with α ≤ 1, since f(x) ≤ 1 on the halfspace {x | ∥x − a∥2 ≤ ∥x − b∥2}. This sublevel set is the set of points satisfying
∥x − a∥2 ≤ α∥x − b∥2.
Squaring both sides, and rearranging terms, we see that this is equivalent to
(1 − α²)xTx − 2(a − α²b)Tx + aTa − α²bTb ≤ 0.
This describes a convex set (in fact a Euclidean ball) if α ≤ 1.
Example 3.34 Internal rate of return. Let x = (x0, x1, . . . , xn) denote a cash flow sequence over n periods, where xi > 0 means a payment to us in period i, and xi < 0 means a payment by us in period i. We define the present value of a cash flow, with interest rate r ≥ 0, to be
PV(x, r) = ∑_{i=0}^{n} (1 + r)^{−i} xi.
(The factor (1 + r)^{−i} is a discount factor for a payment by or to us in period i.)
Now we consider cash flows for which x0 < 0 and x0 + x1 + ··· + xn > 0. This means that we start with an investment of |x0| in period 0, and that the total of the remaining cash flow, x1 + ··· + xn (not taking any discount factors into account), exceeds our initial investment.
For such a cash flow, PV(x, 0) > 0 and PV(x, r) → x0 < 0 as r → ∞, so it follows that for at least one r ≥ 0, we have PV(x, r) = 0. We define the internal rate of return of the cash flow as the smallest interest rate r ≥ 0 for which the present value is zero:
IRR(x) = inf{r ≥ 0 | PV(x, r) = 0}.
Internal rate of return is a quasiconcave function of x (restricted to x0 < 0, x1 + ··· + xn > 0). To see this, we note that
IRR(x) ≥ R ⟺ PV(x, r) > 0 for 0 ≤ r < R.
The lefthand side defines the R-superlevel set of IRR; the righthand side is the intersection of the sets {x | PV(x, r) > 0}, indexed by r, over the range 0 ≤ r < R. For each r, PV(x, r) > 0 defines an open halfspace, so the righthand side defines a convex set.
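These definitions are straightforward to compute with; a minimal sketch (assuming NumPy and SciPy; brentq performs bisection-style root-finding, and we assume PV(x, ·) has a sign change on the bracketing interval):

import numpy as np
from scipy.optimize import brentq

def pv(x, r):
    # present value: sum_i (1 + r)^{-i} x_i
    i = np.arange(len(x))
    return np.sum(x / (1.0 + r) ** i)

def irr(x, r_max=10.0):
    # smallest r >= 0 with PV(x, r) = 0; assumes PV(x, 0) > 0 and a sign
    # change before r_max (a simplifying assumption for this sketch)
    return brentq(lambda r: pv(x, r), 0.0, r_max)

x = np.array([-1.0, 0.3, 0.4, 0.5])   # invest 1, receive 1.2 in total
print(irr(x))                          # about 0.089, i.e., roughly 8.9%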
3.4.2 Basic properties
The examples above show that quasiconvexity is a considerable generalization of convexity. Still, many of the properties of convex functions hold, or have analogs, for quasiconvex functions. For example, there is a variation on Jensen’s inequality that characterizes quasiconvexity: A function f is quasiconvex if and only if dom f is convex and for any x, y ∈ dom f and 0 ≤ θ ≤ 1,
f(θx + (1 − θ)y) ≤ max{f(x), f(y)}, (3.19)
i.e., the value of the function on a segment does not exceed the maximum of its values at the endpoints. The inequality (3.19) is sometimes called Jensen’s inequality for quasiconvex functions, and is illustrated in figure 3.10.
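The inequality (3.19) also gives a cheap randomized way to refute quasiconvexity: sample triples (x, y, θ) and look for violations. A minimal sketch (assuming NumPy); note that the absence of violations is evidence, not a proof:

import numpy as np

def violates_quasiconvexity(f, dim=1, trials=10000, seed=0):
    # Randomized check of f(theta x + (1-theta) y) <= max(f(x), f(y)).
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.uniform(-5, 5, dim), rng.uniform(-5, 5, dim)
        th = rng.uniform()
        if f(th * x + (1 - th) * y) > max(f(x), f(y)) + 1e-12:
            return True          # found a violating triple
    return False                 # no violation found (not a proof)

print(violates_quasiconvexity(lambda x: float(np.ceil(x[0]))))   # False
print(violates_quasiconvexity(lambda x: abs(x[0])))              # False
print(violates_quasiconvexity(lambda x: -abs(x[0])))             # True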
Example 3.35 Cardinality of a nonnegative vector. The cardinality or size of a vector x ∈ Rn is the number of nonzero components, and is denoted card(x). The function card is quasiconcave on Rn+ (but not on Rn). This follows immediately from the modified Jensen inequality
card(x + y) ≥ min{card(x), card(y)},
which holds for x, y ≽ 0.

Example 3.36 Rank of positive semidefinite matrix. The function rank X is quasiconcave on Sn+. This follows from the modified Jensen inequality
rank(X + Y) ≥ min{rank X, rank Y},
which holds for X, Y ∈ Sn+. (This can be considered an extension of the previous example, since rank(diag(x)) = card(x) for x ≽ 0.)
Figure 3.10 A quasiconvex function on R. The value of f between x and y is no more than max{f(x), f(y)}.
Like convexity, quasiconvexity is characterized by the behavior of a function f on lines: f is quasiconvex if and only if its restriction to any line intersecting its domain is quasiconvex. In particular, quasiconvexity of a function can be verified by restricting it to an arbitrary line, and then checking quasiconvexity of the resulting function on R.
Quasiconvex functions on R
We can give a simple characterization of quasiconvex functions on R. We consider continuous functions, since stating the conditions in the general case is cumbersome. A continuous function f : R → R is quasiconvex if and only if at least one of the following conditions holds:
• f is nondecreasing
• f is nonincreasing
• there is a point c ∈ domf such that for t ≤ c (and t ∈ domf), f is nonincreasing, and for t ≥ c (and t ∈ dom f ), f is nondecreasing.
The point c can be chosen as any point which is a global minimizer of f. Figure 3.11 illustrates this.
3.4.3 Differentiable quasiconvex functions

First-order conditions
Suppose f : Rn → R is differentiable. Then f is quasiconvex if and only if dom f is convex and for all x, y ∈ domf
f(y) ≤ f(x) =⇒ ∇f(x)T (y − x) ≤ 0. (3.20)

Figure 3.11 A quasiconvex function on R. The function is nonincreasing for t ≤ c and nondecreasing for t ≥ c.

Figure 3.12 Three level curves of a quasiconvex function f are shown. The vector ∇f(x) defines a supporting hyperplane to the sublevel set {z | f(z) ≤ f(x)} at x.
This is the analog of inequality (3.2), for quasiconvex functions. We leave the proof as an exercise (exercise 3.43).
The condition (3.20) has a simple geometric interpretation when ∇f(x) ̸= 0. It states that ∇f(x) defines a supporting hyperplane to the sublevel set {y | f(y) ≤ f(x)}, at the point x, as illustrated in figure 3.12.
While the first-order condition for convexity (3.2) and the first-order condition for quasiconvexity (3.20) are similar, there are some important differences. For example, if f is convex and ∇f(x) = 0, then x is a global minimizer of f. But this statement is false for quasiconvex functions: it is possible that ∇f(x) = 0, but x is not a global minimizer of f. (For example, f(x) = x^3 on R is quasiconvex, since it is nondecreasing, and satisfies f′(0) = 0, but x = 0 is not a global minimizer.)

Second-order conditions
Now suppose f is twice differentiable. If f is quasiconvex, then for all x ∈ dom f, and all y ∈ Rn, we have
yT∇f(x) = 0 =⇒ yT∇2f(x)y ≥ 0. (3.21)
For a quasiconvex function on R, this reduces to the simple condition
f′(x) = 0 =⇒ f′′(x) ≥ 0,
i.e., at any point with zero slope, the second derivative is nonnegative. For a quasiconvex function on Rn, the interpretation of the condition (3.21) is a bit more complicated. As in the case n = 1, we conclude that whenever ∇f(x) = 0, we must have ∇2f(x) ≽ 0. When ∇f(x) ≠ 0, the condition (3.21) means that ∇2f(x) is positive semidefinite on the (n − 1)-dimensional subspace ∇f(x)⊥. This implies that ∇2f(x) can have at most one negative eigenvalue.
As a (partial) converse, if f satisfies
yT ∇f(x) = 0 =⇒ yT ∇2f(x)y > 0 (3.22)
for all x ∈ dom f and all y ∈ Rn, y ̸= 0, then f is quasiconvex. This condition is the same as requiring ∇2f(x) to be positive definite for any point with ∇f(x) = 0, and for all other points, requiring ∇2f(x) to be positive definite on the (n − 1)- dimensional subspace ∇f(x)⊥.
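Condition (3.21) can be checked numerically at a given point by restricting the Hessian to the subspace ∇f(x)⊥. A minimal sketch (assuming NumPy), applied to f(x1, x2) = −x1x2 on R2++, which is quasiconvex (see exercise 3.45):

import numpy as np

def check_3_21(grad, hess, x, tol=1e-9):
    # Verify y^T grad = 0  =>  y^T hess y >= 0 by checking the Hessian
    # restricted to the orthogonal complement of the gradient.
    g, H = grad(x), hess(x)
    # QR of [g | I] gives an orthonormal basis; columns 2..n are orthogonal to g.
    Q, _ = np.linalg.qr(np.column_stack([g, np.eye(len(g))]))
    B = Q[:, 1:len(g)]                      # (n-1) directions orthogonal to g
    evals = np.linalg.eigvalsh(B.T @ H @ B)
    return evals.min() >= -tol

grad = lambda x: np.array([-x[1], -x[0]])   # for f(x) = -x1 x2
hess = lambda x: np.array([[0.0, -1.0], [-1.0, 0.0]])
x = np.array([1.5, 0.7])
print(check_3_21(grad, hess, x))            # True on R2++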
Proof of second-order conditions for quasiconvexity
By restricting the function to an arbitrary line, it suffices to consider the case in which f : R → R.
We first show that if f : R → R is quasiconvex on an interval (a, b), then it must satisfy (3.21), i.e., if f′(c) = 0 with c ∈ (a, b), then we must have f′′(c) ≥ 0. If f′(c) = 0 with c ∈ (a, b) and f′′(c) < 0, then for small positive ǫ we have f(c − ǫ) < f(c) and f(c + ǫ) < f(c). It follows that the sublevel set {x | f(x) ≤ f(c) − ǫ} is disconnected for small positive ǫ, and therefore not convex, which contradicts our assumption that f is quasiconvex.
Now we show that if the condition (3.22) holds, then f is quasiconvex. Assume that (3.22) holds, i.e., for each c ∈ (a, b) with f′(c) = 0, we have f′′(c) > 0. This means that whenever the function f′ crosses the value 0, it is strictly increasing. Therefore it can cross the value 0 at most once. If f′ does not cross the value 0 at all, then f is either nonincreasing or nondecreasing on (a, b), and therefore quasiconvex. Otherwise it must cross the value 0 exactly once, say at c ∈ (a, b). Since f′′(c) > 0, it follows that f′(t) ≤ 0 for a < t ≤ c, and f′(t) ≥ 0 for c ≤ t < b. This shows that f is quasiconvex.

3.4.4 Operations that preserve quasiconvexity

Nonnegative weighted maximum

A nonnegative weighted maximum of quasiconvex functions, i.e.,
f = max{w1f1, . . . , wmfm},
with wi ≥ 0 and fi quasiconvex, is quasiconvex. The property extends to the general pointwise supremum
f(x) = sup_{y∈C} (w(y)g(x, y)),
where w(y) ≥ 0 and g(x, y) is quasiconvex in x for each y. This fact can be easily verified: f(x) ≤ α if and only if w(y)g(x, y) ≤ α for all y ∈ C, i.e., the α-sublevel set of f is the intersection of the α-sublevel sets of the functions w(y)g(x, y) in the variable x.

Example 3.37 Generalized eigenvalue. The maximum generalized eigenvalue of a pair of symmetric matrices (X, Y), with Y ≻ 0, is defined as
λmax(X, Y) = sup_{u≠0} (uTXu)/(uTYu) = sup{λ | det(λY − X) = 0}.
(See §A.5.3.) This function is quasiconvex on dom f = Sn × Sn++. To see this we consider the expression
λmax(X, Y) = sup_{u≠0} (uTXu)/(uTYu).
For each u ≠ 0, the function uTXu/uTYu is linear-fractional in (X, Y), hence a quasiconvex function of (X, Y). We conclude that λmax is quasiconvex, since it is the supremum of a family of quasiconvex functions.

Composition

If g : Rn → R is quasiconvex and h : R → R is nondecreasing, then f = h ◦ g is quasiconvex.
The composition of a quasiconvex function with an affine or linear-fractional transformation yields a quasiconvex function. If f is quasiconvex, then g(x) = f(Ax + b) is quasiconvex, and g̃(x) = f((Ax + b)/(cTx + d)) is quasiconvex on the set
{x | cTx + d > 0, (Ax + b)/(cTx + d) ∈ dom f}.
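Returning to example 3.37: λmax(X, Y) is readily computed, so quasiconvexity can at least be spot-checked numerically via (3.19). A minimal sketch (assuming NumPy and SciPy; scipy.linalg.eigvalsh solves the generalized symmetric eigenproblem):

import numpy as np
from scipy.linalg import eigvalsh

rng = np.random.default_rng(1)

def rand_sym(n):                     # random symmetric matrix
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

def rand_pd(n):                      # random positive definite matrix
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

def lam_max(X, Y):
    return eigvalsh(X, Y).max()      # largest generalized eigenvalue

n, ok = 4, True
for _ in range(1000):
    X1, Y1, X2, Y2 = rand_sym(n), rand_pd(n), rand_sym(n), rand_pd(n)
    th = rng.uniform()
    lhs = lam_max(th * X1 + (1 - th) * X2, th * Y1 + (1 - th) * Y2)
    ok &= lhs <= max(lam_max(X1, Y1), lam_max(X2, Y2)) + 1e-8
print(ok)    # True: no violation of (3.19) found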
Minimization

If f(x, y) is quasiconvex jointly in x and y, and C is a convex set, then the function
g(x) = inf_{y∈C} f(x, y)
is quasiconvex.
To show this, we need to show that {x | g(x) ≤ α} is convex, where α ∈ R is arbitrary. From the definition of g, g(x) ≤ α if and only if for any ǫ > 0 there exists

a y ∈ C with f(x, y) ≤ α + ǫ. Now let x1 and x2 be two points in the α-sublevel set of g. Then for any ǫ > 0, there exist y1, y2 ∈ C with
f(x1, y1) ≤ α + ǫ,    f(x2, y2) ≤ α + ǫ,
and since f is quasiconvex in x and y, we also have
f(θx1 + (1 − θ)x2, θy1 + (1 − θ)y2) ≤ α + ǫ,
for 0 ≤ θ ≤ 1. Hence g(θx1 + (1 − θ)x2) ≤ α, which proves that {x | g(x) ≤ α} is convex.
3.4.5 Representation via family of convex functions
In the sequel, it will be convenient to represent the sublevel sets of a quasiconvex function f (which are convex) via inequalities of convex functions. We seek a family of convex functions φt : Rn → R, indexed by t ∈ R, with
f(x) ≤ t ⇐⇒ φt(x) ≤ 0, (3.23)
i.e., the t-sublevel set of the quasiconvex function f is the 0-sublevel set of the convex function φt. Evidently φt must satisfy the property that for all x ∈ Rn, φt(x) ≤ 0 =⇒ φs(x) ≤ 0 for s ≥ t. This is satisfied if, for each x, φt(x) is a nonincreasing function of t, i.e., φs(x) ≤ φt(x) whenever s ≥ t.
To see that such a representation always exists, we can take
φt(x) = { 0 if f(x) ≤ t; ∞ otherwise },
i.e., φt is the indicator function of the t-sublevel set of f. Obviously this representation
is not unique; for example if the sublevel sets of f are closed, we can take φt(x) = dist(x,{z | f(z) ≤ t}).
We are usually interested in a family φt with nice properties, such as differentiability.
Example 3.38 Convex over concave function. Suppose p is a convex function, q is a concave function, with p(x) ≥ 0 and q(x) > 0 on a convex set C. Then the function f defined by f(x) = p(x)/q(x), on C, is quasiconvex.
Here we have
f(x) ≤ t ⇐⇒ p(x)−tq(x) ≤ 0,
so we can take φt(x) = p(x)−tq(x) for t ≥ 0. For each t, φt is convex and for each x, φt(x) is decreasing in t.
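This representation is the key to minimizing a quasiconvex function: bisect on t, and at each step decide, by solving a convex feasibility problem, whether inf_x φt(x) ≤ 0. The sketch below (assuming NumPy and SciPy; the one-dimensional p and q are illustrative, and minimize_scalar stands in for a convex feasibility solver) applies this to the convex-over-concave example:

import numpy as np
from scipy.optimize import minimize_scalar

# Quasiconvex f = p/q with p convex, q concave and positive on C = [0, 10].
p = lambda x: (x - 3.0) ** 2 + 1.0          # convex, nonnegative
q = lambda x: 5.0 - 0.2 * x                 # concave, positive on [0, 10]

def t_feasible(t):
    # Is the t-sublevel set nonempty?  Equivalent to min_x p(x) - t q(x) <= 0,
    # a convex problem since phi_t = p - t q is convex for t >= 0.
    res = minimize_scalar(lambda x: p(x) - t * q(x), bounds=(0, 10), method="bounded")
    return res.fun <= 0

lo, hi = 0.0, p(0) / q(0)                   # f(0) is an upper bound on inf f
for _ in range(50):                         # bisection on t
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if t_feasible(mid) else (mid, hi)
print(hi)   # min over [0, 10] of ((x-3)^2 + 1)/(5 - 0.2 x), about 0.227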

3.5 Log-concave and log-convex functions

3.5.1 Definition
A function f : Rn → R is logarithmically concave or log-concave if f(x) > 0 for all x ∈ domf and logf is concave. It is said to be logarithmically convex or log-convex if logf is convex. Thus f is log-convex if and only if 1/f is log- concave. It is convenient to allow f to take on the value zero, in which case we take logf(x) = −∞. In this case we say f is log-concave if the extended-value function log f is concave.
We can express log-concavity directly, without logarithms: a function f : Rn → R, with convex domain and f(x) > 0 for all x ∈ dom f, is log-concave if and only if for all x, y ∈ dom f and 0 ≤ θ ≤ 1, we have
f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^{1−θ}.
In particular, the value of a log-concave function at the average of two points is at least the geometric mean of the values at the two points.
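For instance, this inequality can be tested numerically for the standard normal density; a minimal sketch (assuming NumPy):

import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.exp(-0.5 * x * x) / np.sqrt(2 * np.pi)   # N(0,1) density

ok = True
for _ in range(10000):
    x, y, th = rng.uniform(-4, 4), rng.uniform(-4, 4), rng.uniform()
    ok &= f(th * x + (1 - th) * y) >= f(x) ** th * f(y) ** (1 - th) - 1e-15
print(ok)   # True: consistent with log-concavity of the Gaussian density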
From the composition rules we know that eh is convex if h is convex, so a log- convex function is convex. Similarly, a nonnegative concave function is log-concave. It is also clear that a log-convex function is quasiconvex and a log-concave function is quasiconcave, since the logarithm is monotone increasing.
Example 3.39 Some simple examples of log-concave and log-convex functions.
• Affine function. f(x) = aTx + b is log-concave on {x | aTx + b > 0}.
• Powers. f(x) = xa, on R++, is log-convex for a ≤ 0, and log-concave for a ≥ 0.
• Exponentials. f(x) = eax is log-convex and log-concave.
• The cumulative distribution function of a Gaussian density,
Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−u²/2} du,
is log-concave (see exercise 3.54).
• Gamma function. The Gamma function,
Γ(x) = ∫_{0}^{∞} u^{x−1} e^{−u} du,
is log-convex for x ≥ 1 (see exercise 3.52).
• Determinant. det X is log-concave on Sn++.
• Determinant over trace. det X / tr X is log-concave on Sn++ (see exercise 3.49).
Example 3.40 Log-concave density functions. Many common probability density functions are log-concave. Two examples are the multivariate normal distribution,
f(x) = (1/√((2π)^n det Σ)) e^{−(1/2)(x−x̄)TΣ^{−1}(x−x̄)}

(where x̄ ∈ Rn and Σ ∈ Sn++), and the exponential distribution on Rn+,
f(x) = (∏_{i=1}^{n} λi) e^{−λTx}
(where λ ≻ 0). Another example is the uniform distribution over a convex set C,
f(x) = { 1/α if x ∈ C; 0 if x ∉ C },
where α = vol(C) is the volume (Lebesgue measure) of C. In this case log f takes on the value −∞ outside C, and −log α on C, hence is concave.
As a more exotic example consider the Wishart distribution, defined as follows. Let x1, . . . , xp ∈ Rn be independent Gaussian random vectors with zero mean and covariance Σ ∈ Sn, with p > n. The random matrix X = ∑_{i=1}^{p} xi xiT has the Wishart density
f(X) = a (det X)^{(p−n−1)/2} e^{−(1/2) tr(Σ^{−1}X)},
with dom f = Sn++, where a is a positive constant. The Wishart density is log-concave, since
log f(X) = log a + ((p − n − 1)/2) log det X − (1/2) tr(Σ^{−1}X),
which is a concave function of X.
3.5.2 Properties
Twice differentiable log-convex/concave functions
Suppose f is twice differentiable, with dom f convex, so
∇2 log f(x) = (1/f(x)) ∇2f(x) − (1/f(x)²) ∇f(x)∇f(x)T.
We conclude that f is log-convex if and only if for all x ∈ dom f,
f(x)∇2f(x) ≽ ∇f(x)∇f(x)T,
and log-concave if and only if for all x ∈ dom f,
f(x)∇2f(x) ≼ ∇f(x)∇f(x)T.

Multiplication, addition, and integration
Log-convexity and log-concavity are closed under multiplication and positive scaling. For example, if f and g are log-concave, then so is the pointwise product h(x) = f(x)g(x), since log h(x) = log f(x) + log g(x), and log f(x) and log g(x) are concave functions of x.
Simple examples show that the sum of log-concave functions is not, in general, log-concave. Log-convexity, however, is preserved under sums. Let f and g be log-convex functions, i.e., F = log f and G = log g are convex. From the composition rules for convex functions, it follows that
log (exp F + exp G) = log(f + g)

is convex. Therefore the sum of two log-convex functions is log-convex.
More generally, if f(x, y) is log-convex in x for each y ∈ C, then
g(x) = ∫_C f(x, y) dy
is log-convex.
Example 3.41 Laplace transform of a nonnegative function and the moment and cumulant generating functions. Suppose p : Rn → R satisfies p(x) ≥ 0 for all x. The Laplace transform of p,
P(z) = ∫ p(x) e^{−zTx} dx,
is log-convex on Rn. (Here dom P is, naturally, {z | P(z) < ∞}.)
Now suppose p is a density, i.e., satisfies ∫ p(x) dx = 1. The function M(z) = P(−z) is called the moment generating function of the density. It gets its name from the fact that the moments of the density can be found from the derivatives of the moment generating function, evaluated at z = 0, e.g.,
∇M(0) = E v,    ∇2M(0) = E vvT,
where v is a random variable with density p. The function log M(z), which is convex, is called the cumulant generating function for p, since its derivatives give the cumulants of the density. For example, the first and second derivatives of the cumulant generating function, evaluated at zero, are the mean and covariance of the associated random variable:
∇ log M(0) = E v,    ∇2 log M(0) = E(v − E v)(v − E v)T.

Integration of log-concave functions

In some special cases log-concavity is preserved by integration. If f : Rn × Rm → R is log-concave, then
g(x) = ∫ f(x, y) dy
is a log-concave function of x (on Rn). (The integration here is over Rm.) A proof of this result is not simple; see the references.
This result has many important consequences, some of which we describe in the rest of this section. It implies, for example, that marginal distributions of log-concave probability densities are log-concave. It also implies that log-concavity is closed under convolution, i.e., if f and g are log-concave on Rn, then so is the convolution
(f ∗ g)(x) = ∫ f(x − y)g(y) dy.
(To see this, note that g(y) and f(x − y) are log-concave in (x, y), hence the product f(x − y)g(y) is; then the integration result applies.)
Suppose C ⊆ Rn is a convex set and w is a random vector in Rn with log-concave probability density p. Then the function
f(x) = prob(x + w ∈ C)
is log-concave in x. To see this, express f as
f(x) = ∫ g(x + w)p(w) dw,
where g is defined as
g(u) = { 1 if u ∈ C; 0 if u ∉ C }
(which is log-concave), and apply the integration result.

Example 3.42 The cumulative distribution function of a probability density function f : Rn → R is defined as
F(x) = prob(w ≼ x) = ∫_{−∞}^{xn} ··· ∫_{−∞}^{x1} f(z) dz1 ··· dzn,
where w is a random variable with density f. If f is log-concave, then F is log-concave. We have already encountered a special case: the cumulative distribution function of a Gaussian random variable,
Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−t²/2} dt,
is log-concave. (See example 3.39 and exercise 3.54.)

Example 3.43 Yield function. Let x ∈ Rn denote the nominal or target value of a set of parameters of a product that is manufactured. Variation in the manufacturing process causes the parameters of the product, when manufactured, to have the value x + w, where w ∈ Rn is a random vector that represents manufacturing variation, and is usually assumed to have zero mean. The yield of the manufacturing process, as a function of the nominal parameter values, is given by
Y(x) = prob(x + w ∈ S),
where S ⊆ Rn denotes the set of acceptable parameter values for the product, i.e., the product specifications.
If the density of the manufacturing error w is log-concave (for example, Gaussian) and the set S of product specifications is convex, then the yield function Y is log-concave. This implies that the α-yield region, defined as the set of nominal parameters for which the yield exceeds α, is convex. For example, the 95% yield region
{x | Y(x) ≥ 0.95} = {x | log Y(x) ≥ log 0.95}
is convex, since it is a superlevel set of the concave function log Y.
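The yield function itself is rarely available in closed form, but it is easy to estimate by Monte Carlo, and the log-concavity inequality can then be spot-checked; a minimal sketch (assuming NumPy; the box S and the Gaussian w are illustrative choices):

import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((200000, 2))   # samples of w ~ N(0, I)

def yield_est(x):
    # Y(x) = prob(x + w in S), with S = [-1, 1]^2 an illustrative convex spec set
    z = x + W
    return np.mean(np.all(np.abs(z) <= 1.0, axis=1))

x1, x2, th = np.array([0.3, 0.0]), np.array([-0.2, 0.4]), 0.5
lhs = yield_est(th * x1 + (1 - th) * x2)
rhs = yield_est(x1) ** th * yield_est(x2) ** (1 - th)
print(lhs >= rhs)   # expected True (up to Monte Carlo error)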
Example 3.44 Volume of polyhedron. Let A ∈ Rm×n. Define
Pu = {x ∈ Rn | Ax ≼ u}.
Then its volume vol Pu is a log-concave function of u.
To prove this, note that the function
Ψ(x, u) = { 1 if Ax ≼ u; 0 otherwise }
is log-concave. By the integration result, we conclude that
∫ Ψ(x, u) dx = vol Pu
is log-concave.

3.6 Convexity with respect to generalized inequalities

We now consider generalizations of the notions of monotonicity and convexity, using generalized inequalities instead of the usual ordering on R.

3.6.1 Monotonicity with respect to a generalized inequality

Suppose K ⊆ Rn is a proper cone with associated generalized inequality ≼K. A function f : Rn → R is called K-nondecreasing if
x ≼K y =⇒ f(x) ≤ f(y),
and K-increasing if
x ≼K y, x ≠ y =⇒ f(x) < f(y).
We define K-nonincreasing and K-decreasing functions in a similar way.

Gradient conditions for monotonicity

Recall that a differentiable function f : R → R, with convex domain, is nondecreasing if and only if f′(x) ≥ 0 for all x ∈ dom f, and increasing if f′(x) > 0 for all x ∈ dom f (but the converse is not true). These conditions are readily extended to the case of monotonicity with respect to a generalized inequality. A differentiable function f, with convex domain, is K-nondecreasing if and only if
∇f(x) ≽K∗ 0 (3.24)
for all x ∈ dom f. Note the difference with the simple scalar case: the gradient must be nonnegative in the dual inequality. For the strict case, we have the following: If
∇f(x) ≻K∗ 0 (3.25)
for all x ∈ domf, then f is K-increasing. As in the scalar case, the converse is not true.
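For example, with K = Rn+ (which is self-dual), condition (3.25) says the gradient is componentwise positive. A minimal check (assuming NumPy) for f(x) = log ∑i e^{xi}:

import numpy as np

def grad_logsumexp(x):
    # gradient of f(x) = log(sum_i exp(x_i)) is the softmax vector
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(4)
x = rng.standard_normal(5)
g = grad_logsumexp(x)
print(np.all(g > 0))   # True: (3.25) holds with K = K* = R^n_+, so f is R^n_+-increasing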
Let us prove these first-order conditions for monotonicity. First, assume that f satisfies (3.24) for all x, but is not K-nondecreasing, i.e., there exist x, y with x ≼K y and f(y) < f(x). By differentiability of f there exists a t ∈ [0, 1] with
(d/dt) f(x + t(y − x)) = ∇f(x + t(y − x))T(y − x) < 0.
Since y − x ∈ K this means
∇f(x + t(y − x)) ∉ K∗,
which contradicts our assumption that (3.24) is satisfied everywhere. In a similar way it can be shown that (3.25) implies f is K-increasing.
It is also straightforward to see that it is necessary that (3.24) hold everywhere. Assume (3.24) does not hold for x = z. By the definition of dual cone this means there exists a v ∈ K with ∇f(z)Tv < 0. Now consider h(t) = f(z + tv) as a function of t. We have h′(0) = ∇f(z)Tv < 0, and therefore there exists t > 0 with h(t) = f(z + tv) < h(0) = f(z), which means f is not K-nondecreasing.

3.6.2 Convexity with respect to a generalized inequality

Suppose K ⊆ Rm is a proper cone with associated generalized inequality ≼K. We say f : Rn → Rm is K-convex if for all x, y, and 0 ≤ θ ≤ 1,
f(θx + (1 − θ)y) ≼K θf(x) + (1 − θ)f(y).
The function is strictly K-convex if
f(θx + (1 − θ)y) ≺K θf(x) + (1 − θ)f(y)
for all x ≠ y and 0 < θ < 1. These definitions reduce to ordinary convexity and strict convexity when m = 1 (and K = R+).

Example 3.47 Convexity with respect to componentwise inequality. A function f : Rn → Rm is convex with respect to componentwise inequality (i.e., the generalized inequality induced by Rm+) if and only if for all x, y and 0 ≤ θ ≤ 1,
f(θx + (1 − θ)y) ≼ θf(x) + (1 − θ)f(y),
i.e., each component fi is a convex function. The function f is strictly convex with respect to componentwise inequality if and only if each component fi is strictly convex.

Example 3.48 Matrix convexity. Suppose f is a symmetric matrix valued function, i.e., f : Rn → Sm. The function f is convex with respect to matrix inequality if
f(θx + (1 − θ)y) ≼ θf(x) + (1 − θ)f(y)
for any x and y, and for θ ∈ [0, 1]. This is sometimes called matrix convexity. An equivalent definition is that the scalar function zTf(x)z is convex for all vectors z. (This is often a good way to prove matrix convexity.) A matrix function is strictly matrix convex if
f(θx + (1 − θ)y) ≺ θf(x) + (1 − θ)f(y)
when x ≠ y and 0 < θ < 1, or, equivalently, if zTfz is strictly convex for every z ≠ 0. Some examples:
• The function f(X) = XXT, where X ∈ Rn×m, is matrix convex, since for fixed z the function zTXXTz = ∥XTz∥2² is a convex quadratic function of (the components of) X. For the same reason, f(X) = X² is matrix convex on Sn.
• The function X^p is matrix convex on Sn++ for 1 ≤ p ≤ 2 or −1 ≤ p ≤ 0, and matrix concave for 0 ≤ p ≤ 1.
• The function f(X) = e^X is not matrix convex on Sn, for n ≥ 2.

Many of the results for convex functions have extensions to K-convex functions. As a simple example, a function is K-convex if and only if its restriction to any line in its domain is K-convex. In the rest of this section we list a few results for K-convexity that we will use later; more results are explored in the exercises.

Dual characterization of K-convexity

A function f is K-convex if and only if for every w ≽K∗ 0, the (real-valued) function wTf is convex (in the ordinary sense); f is strictly K-convex if and only if for every nonzero w ≽K∗ 0 the function wTf is strictly convex. (These follow directly from the definitions and properties of dual inequality.)
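The matrix convexity of f(X) = X² in example 3.48, for instance, can be spot-checked numerically: θX² + (1 − θ)Y² − (θX + (1 − θ)Y)² should be positive semidefinite for all θ ∈ [0, 1]. A minimal sketch (assuming NumPy):

import numpy as np

rng = np.random.default_rng(5)

def rand_sym(n):
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

n, ok = 4, True
for _ in range(1000):
    X, Y, th = rand_sym(n), rand_sym(n), rng.uniform()
    Z = th * X + (1 - th) * Y
    gap = th * X @ X + (1 - th) * Y @ Y - Z @ Z   # should be PSD
    ok &= np.linalg.eigvalsh(gap).min() >= -1e-9
print(ok)   # True: consistent with matrix convexity of X^2 on S^n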
Differentiable K-convex functions

A differentiable function f is K-convex if and only if its domain is convex, and for all x, y ∈ dom f,
f(y) ≽K f(x) + Df(x)(y − x).
(Here Df(x) ∈ Rm×n is the derivative or Jacobian matrix of f at x; see §A.4.1.) The function f is strictly K-convex if and only if for all x, y ∈ dom f with x ≠ y,
f(y) ≻K f(x) + Df(x)(y − x).

Composition theorem

Many of the results on composition can be generalized to K-convexity. For example, if g : Rn → Rp is K-convex, h : Rp → R is convex, and h̃ (the extended-value extension of h) is K-nondecreasing, then h ◦ g is convex. This generalizes the fact that a nondecreasing convex function of a convex function is convex. The condition that h̃ be K-nondecreasing implies that dom h − K = dom h.

Example 3.49 The quadratic matrix function g : Rm×n → Sn defined by
g(X) = XTAX + BTX + XTB + C,
where A ∈ Sm, B ∈ Rm×n, and C ∈ Sn, is convex when A ≽ 0. The function h : Sn → R defined by h(Y) = −log det(−Y) is convex and increasing on dom h = −Sn++. By the composition theorem, we conclude that
f(X) = −log det(−(XTAX + BTX + XTB + C))
is convex on
dom f = {X ∈ Rm×n | XTAX + BTX + XTB + C ≺ 0}.
This generalizes the fact that
−log(−(ax² + bx + c))
is convex on
{x ∈ R | ax² + bx + c < 0},
provided a ≥ 0.

Bibliography

The standard reference on convex analysis is Rockafellar [Roc70]. Other books on convex functions are Stoer and Witzgall [SW70], Roberts and Varberg [RV73], Van Tiel [vT84], Hiriart-Urruty and Lemaréchal [HUL93], Ekeland and Témam [ET99], Borwein and Lewis [BL00], Florenzano and Le Van [FL01], Barvinok [Bar02], and Bertsekas, Nedić, and Ozdaglar [Ber03]. Most nonlinear programming texts also include chapters on convex functions (see, for example, Mangasarian [Man94], Bazaraa, Sherali, and Shetty [BSS93], Bertsekas [Ber99], Polyak [Pol87], and Peressini, Sullivan, and Uhl [PSU88]).

Jensen’s inequality appears in [Jen06]. A general study of inequalities, in which Jensen’s inequality plays a central role, is presented by Hardy, Littlewood, and Pólya [HLP52], and Beckenbach and Bellman [BB65]. The term perspective function is from Hiriart-Urruty and Lemaréchal [HUL93, volume 1, page 100]. For the definitions in example 3.19 (relative entropy and Kullback-Leibler divergence), and the related exercise 3.13, see Cover and Thomas [CT91].

Some important early references on quasiconvex functions (as well as other extensions of convexity) are Nikaidô [Nik54], Mangasarian [Man94, chapter 9], Arrow and Enthoven [AE61], Ponstein [Pon67], and Luenberger [Lue68]. For a more comprehensive reference list, we refer to Bazaraa, Sherali, and Shetty [BSS93, page 126].

Prékopa [Pré80] gives a survey of log-concave functions. Log-convexity of the Laplace transform is mentioned in Barndorff-Nielsen [BN78, §7]. For a proof of the integration result of log-concave functions, see Prékopa [Pré71, Pré73].

Generalized inequalities are used extensively in the recent literature on cone programming, starting with Nesterov and Nemirovski [NN94, page 156]; see also Ben-Tal and Nemirovski [BTN01] and the references at the end of chapter 4. Convexity with respect to generalized inequalities also appears in the work of Luenberger [Lue69, §8.2] and Isii [Isi64]. Matrix monotonicity and matrix convexity are attributed to Löwner [Löw34], and are discussed in detail by Davis [Dav63], Roberts and Varberg [RV73, page 216] and Marshall and Olkin [MO79, §16E].
For the result on convexity and concavity of the function X^p in example 3.48, see Bondar [Bon94, theorem 16.1]. For a simple example that demonstrates that e^X is not matrix convex, see Marshall and Olkin [MO79, page 474].

Exercises

Definition of convexity

3.1 Suppose f : R → R is convex, and a, b ∈ dom f with a < b.
(a) Show that
f(x) ≤ ((b − x)/(b − a)) f(a) + ((x − a)/(b − a)) f(b)
for all x ∈ [a, b].
(b) Show that
(f(x) − f(a))/(x − a) ≤ (f(b) − f(a))/(b − a) ≤ (f(b) − f(x))/(b − x)
for all x ∈ (a, b). Draw a sketch that illustrates this inequality.
(c) Suppose f is differentiable. Use the result in (b) to show that
f′(a) ≤ (f(b) − f(a))/(b − a) ≤ f′(b).
Note that these inequalities also follow from (3.2):
f(b) ≥ f(a) + f′(a)(b − a),    f(a) ≥ f(b) + f′(b)(a − b).
(d) Suppose f is twice differentiable. Use the result in (c) to show that f′′(a) ≥ 0 and f′′(b) ≥ 0.

3.2 Level sets of convex, concave, quasiconvex, and quasiconcave functions. Some level sets of a function f are shown below. The curve labeled 1 shows {x | f(x) = 1}, etc.
(Figure: nested level curves labeled 1, 2, 3.)
Could f be convex (concave, quasiconvex, quasiconcave)? Explain your answer. Repeat for the level curves shown below.
(Figure: level curves labeled 1 through 6.)

3.3 Inverse of an increasing convex function. Suppose f : R → R is increasing and convex on its domain (a, b). Let g denote its inverse, i.e., the function with domain (f(a), f(b)) and g(f(x)) = x for a < x < b. What can you say about convexity or concavity of g?

3.4 [RV73, page 15] Show that a continuous function f : Rn → R is convex if and only if for every line segment, its average value on the segment is less than or equal to the average of its values at the endpoints of the segment: For every x, y ∈ Rn,
∫_0^1 f(x + λ(y − x)) dλ ≤ (f(x) + f(y))/2.

3.5 [RV73, page 22] Running average of a convex function. Suppose f : R → R is convex, with R+ ⊆ dom f. Show that its running average F, defined as
F(x) = (1/x) ∫_0^x f(t) dt,    dom F = R++,
is convex. Hint. For each s, f(sx) is convex in x, so ∫_0^1 f(sx) ds is convex.

3.6 Functions and epigraphs. When is the epigraph of a function a halfspace? When is the epigraph of a function a convex cone? When is the epigraph of a function a polyhedron?

3.7 Suppose f : Rn → R is convex with dom f = Rn, and bounded above on Rn. Show that f is constant.

3.8 Second-order condition for convexity. Prove that a twice differentiable function f is convex if and only if its domain is convex and ∇2f(x) ≽ 0 for all x ∈ dom f. Hint. First consider the case f : R → R. You can use the first-order condition for convexity (which was proved on page 70).

3.9 Second-order conditions for convexity on an affine set. Let F ∈ Rn×m, x̂ ∈ Rn. The restriction of f : Rn → R to the affine set {Fz + x̂ | z ∈ Rm} is defined as the function f̃ : Rm → R with
f̃(z) = f(Fz + x̂),    dom f̃ = {z | Fz + x̂ ∈ dom f}.
Suppose f is twice differentiable with a convex domain.
(a) Show that f̃ is convex if and only if for all z ∈ dom f̃,
FT∇2f(Fz + x̂)F ≽ 0.
(b) Suppose A ∈ Rp×n is a matrix whose nullspace is equal to the range of F, i.e., AF = 0 and rank A = n − rank F. Show that f̃ is convex if and only if for all z ∈ dom f̃ there exists a λ ∈ R such that
∇2f(Fz + x̂) + λATA ≽ 0.
Hint. Use the following result: If B ∈ Sn and A ∈ Rp×n, then xTBx ≥ 0 for all x ∈ N(A) if and only if there exists a λ such that B + λATA ≽ 0.

3.10 An extension of Jensen’s inequality.
One interpretation of Jensen’s inequality is that randomization or dithering hurts, i.e., raises the average value of a convex function: For f convex and v a zero mean random variable, we have E f(x0 + v) ≥ f(x0). This leads to the following conjecture. If f is convex, then the larger the variance of v, the larger E f(x0 + v).
(a) Give a counterexample that shows that this conjecture is false. Find zero mean random variables v and w, with var(v) > var(w), a convex function f, and a point x0, such that E f(x0 + v) < E f(x0 + w).

3.15 A family of concave utility functions. For 0 < α ≤ 1 let
uα(x) = (x^α − 1)/α,
with dom uα = R+. We also define u0(x) = log x (with dom u0 = R++).
(a) Show that for x > 0, u0(x) = limα→0 uα(x).

(b) Show that uα are concave, monotone increasing, and all satisfy uα(1) = 0.
These functions are often used in economics to model the benefit or utility of some quantity of goods or money. Concavity of uα means that the marginal utility (i.e., the increase in utility obtained for a fixed increase in the goods) decreases as the amount of goods increases. In other words, concavity models the effect of satiation.

3.16 For each of the following functions determine whether it is convex, concave, quasiconvex, or quasiconcave.
(a) f(x) = e^x − 1 on R.
(b) f(x1, x2) = x1x2 on R2++.
(c) f(x1, x2) = 1/(x1x2) on R2++.
(d) f(x1, x2) = x1/x2 on R2++.
(e) f(x1, x2) = x1²/x2 on R × R++.
(f) f(x1, x2) = x1^α x2^{1−α}, where 0 ≤ α ≤ 1, on R2++.

3.17 Suppose p < 1, p ≠ 0. Show that the function
f(x) = (∑_{i=1}^{n} xi^p)^{1/p}
with dom f = Rn++ is concave. This includes as special cases f(x) = (∑_{i=1}^{n} xi^{1/2})² and the harmonic mean f(x) = (∑_{i=1}^{n} 1/xi)^{−1}. Hint. Adapt the proofs for the log-sum-exp function and the geometric mean in §3.1.5.

3.18 Adapt the proof of concavity of the log-determinant function in §3.1.5 to show the following.
(a) f(X) = tr(X^{−1}) is convex on dom f = Sn++.
(b) f(X) = (det X)^{1/n} is concave on dom f = Sn++.

3.19 Nonnegative weighted sums and integrals.
(a) Show that f(x) = ∑_{i=1}^{r} αi x[i] is a convex function of x, where α1 ≥ α2 ≥ ··· ≥ αr ≥ 0, and x[i] denotes the ith largest component of x. (You can use the fact that f(x) = ∑_{i=1}^{k} x[i] is convex on Rn.)
(b) Let T(x, ω) denote the trigonometric polynomial
T(x, ω) = x1 + x2 cos ω + x3 cos 2ω + ··· + xn cos(n − 1)ω.
Show that the function
f(x) = −∫_0^{2π} log T(x, ω) dω
is convex on {x ∈ Rn | T(x, ω) > 0, 0 ≤ ω ≤ 2π}.

3.20 Composition with an affine function. Show that the following functions f : Rn → R are convex.
(a) f(x) = ∥Ax − b∥, where A ∈ Rm×n, b ∈ Rm, and ∥ · ∥ is a norm on Rm.
(b) f(x) = −(det(A0 + x1A1 + ··· + xnAn))^{1/m}, on {x | A0 + x1A1 + ··· + xnAn ≻ 0}, where Ai ∈ Sm.
(c) f(x) = tr (A0 + x1A1 + ··· + xnAn)^{−1}, on {x | A0 + x1A1 + ··· + xnAn ≻ 0}, where Ai ∈ Sm. (Use the fact that tr(X^{−1}) is convex on Sm++; see exercise 3.18.)

3.21 Pointwise maximum and supremum. Show that the following functions f : Rn → R are convex.
(a) f(x) = max_{i=1,...,k} ∥A(i)x − b(i)∥, where A(i) ∈ Rm×n, b(i) ∈ Rm and ∥ · ∥ is a norm on Rm.
(b) f(x) = ∑_{i=1}^{r} |x|[i] on Rn, where |x| denotes the vector with |x|i = |xi| (i.e., |x| is the absolute value of x, componentwise), and |x|[i] is the ith largest component of |x|. In other words, |x|[1], |x|[2], . . . , |x|[n] are the absolute values of the components of x, sorted in nonincreasing order.

3.22 Composition rules. Show that the following functions are convex.
(a) f(x) = −log(−log(∑_{i=1}^{m} e^{aiTx+bi})) on dom f = {x | ∑_{i=1}^{m} e^{aiTx+bi} < 1}. You can use the fact that log(∑_{i=1}^{n} e^{yi}) is convex.
(b) f(x, u, v) = −√(uv − xTx) on dom f = {(x, u, v) | uv > xTx, u, v > 0}. Use the fact that xTx/u is convex in (x, u) for u > 0, and that −√(x1x2) is convex on R2++.
(c) f(x, u, v) = −log(uv − xTx) on dom f = {(x, u, v) | uv > xTx, u, v > 0}.
(d) f(x, t) = −(t^p − ∥x∥_p^p)^{1/p} where p > 1 and dom f = {(x, t) | t ≥ ∥x∥p}. You can use the fact that ∥x∥_p^p / u^{p−1} is convex in (x, u) for u > 0 (see exercise 3.23), and that −x^{1/p} y^{1−1/p} is convex on R2+ (see exercise 3.16).
(e) f(x, t) = −log(t^p − ∥x∥_p^p) where p > 1 and dom f = {(x, t) | t > ∥x∥p}. You can use the fact that ∥x∥_p^p / u^{p−1} is convex in (x, u) for u > 0 (see exercise 3.23).
3.23 Perspective of a function.
(a) Show that for p > 1,
f(x, t) = (|x1|^p + ··· + |xn|^p)/t^{p−1} = ∥x∥_p^p / t^{p−1}
is convex on {(x, t) | t > 0}.
(b) Show that
f(x) = ∥Ax + b∥2² / (cTx + d)
is convex on {x | cTx + d > 0}, where A ∈ Rm×n, b ∈ Rm, c ∈ Rn and d ∈ R.

3.24 Some functions on the probability simplex. Let x be a real-valued random variable which takes values in {a1, . . . , an} where a1 < a2 < ··· < an, with prob(x = ai) = pi, i = 1, . . . , n. For each of the following functions of p (on the probability simplex {p ∈ Rn+ | 1Tp = 1}), determine if the function is convex, concave, quasiconvex, or quasiconcave.
(a) E x.
(b) prob(x ≥ α).
(c) prob(α ≤ x ≤ β).
(d) ∑_{i=1}^{n} pi log pi, the negative entropy of the distribution.
(e) var x = E(x − E x)².
(f) quartile(x) = inf{β | prob(x ≤ β) ≥ 0.25}.
(g) The cardinality of the smallest set A ⊆ {a1, . . . , an} with probability ≥ 90%. (By cardinality we mean the number of elements in A.)
(h) The minimum width interval that contains 90% of the probability, i.e.,
inf{β − α | prob(α ≤ x ≤ β) ≥ 0.9}.

3.25 Maximum probability distance between distributions. Let p, q ∈ Rn represent two probability distributions on {1, . . . , n} (so p, q ≽ 0, 1Tp = 1Tq = 1). We define the maximum probability distance dmp(p, q) between p and q as the maximum difference in probability assigned by p and q, over all events:
dmp(p, q) = max{|prob(p, C) − prob(q, C)| | C ⊆ {1, . . . , n}}.
Here prob(p, C) is the probability of C, under the distribution p, i.e., prob(p, C) = ∑_{i∈C} pi.
Find a simple expression for dmp, involving ∥p − q∥1 = ∑_{i=1}^{n} |pi − qi|, and show that dmp is a convex function on Rn × Rn. (Its domain is {(p, q) | p, q ≽ 0, 1Tp = 1Tq = 1}, but it has a natural extension to all of Rn × Rn.)

3.26 More functions of eigenvalues. Let λ1(X) ≥ λ2(X) ≥ ··· ≥ λn(X) denote the eigenvalues of a matrix X ∈ Sn. We have already seen several functions of the eigenvalues that are convex or concave functions of X.
• The maximum eigenvalue λ1(X) is convex (example 3.10). The minimum eigenvalue λn(X) is concave.
• The sum of the eigenvalues (or trace), tr X = λ1(X) + ··· + λn(X), is linear.
• The sum of the inverses of the eigenvalues (or trace of the inverse), tr(X^{−1}) = ∑_{i=1}^{n} 1/λi(X), is convex on Sn++ (exercise 3.18).
• The geometric mean of the eigenvalues, (det X)^{1/n} = (∏_{i=1}^{n} λi(X))^{1/n}, and the logarithm of the product of the eigenvalues, log det X = ∑_{i=1}^{n} log λi(X), are concave on X ∈ Sn++ (exercise 3.18 and page 74).
In this problem we explore some more functions of eigenvalues, by exploiting variational characterizations.
(a) Sum of k largest eigenvalues. Show that ∑_{i=1}^{k} λi(X) is convex on Sn. Hint. [HJ85, page 191] Use the variational characterization
∑_{i=1}^{k} λi(X) = sup{tr(VTXV) | V ∈ Rn×k, VTV = I}.
(b) Geometric mean of k smallest eigenvalues. Show that (∏_{i=n−k+1}^{n} λi(X))^{1/k} is concave on Sn++. Hint. [MO79, page 513] For X ≻ 0, we have
(∏_{i=n−k+1}^{n} λi(X))^{1/k} = (1/k) inf{tr(VTXV) | V ∈ Rn×k, det VTV = 1}.
(c) Log of product of k smallest eigenvalues. Show that ∑_{i=n−k+1}^{n} log λi(X) is concave on Sn++. Hint. [MO79, page 513] For X ≻ 0,
∏_{i=n−k+1}^{n} λi(X) = inf{∏_{i=1}^{k} (VTXV)ii | V ∈ Rn×k, VTV = I}.

3.27 Diagonal elements of Cholesky factor. Each X ∈ Sn++ has a unique Cholesky factorization X = LLT, where L is lower triangular, with Lii > 0. Show that Lii is a concave function of X (with domain Sn++).
Hint. Lii can be expressed as Lii = (w − zTY^{−1}z)^{1/2}, where
[ Y  z ]
[ zT w ]
is the leading i × i submatrix of X.

Operations that preserve convexity
3.28 Expressing a convex function as the pointwise supremum of a family of affine functions. In this problem we extend the result proved on page 83 to the case where dom f ≠ Rn. Let f : Rn → R be a convex function. Define f̃ : Rn → R as the pointwise supremum of all affine functions that are global underestimators of f:
f̃(x) = sup{g(x) | g affine, g(z) ≤ f(z) for all z}.
(a) Show that f̃(x) = f(x) for x ∈ int dom f.
(b) Show that f = f̃ if f is closed (i.e., epi f is a closed set; see §A.3.3).
3.29 Representation of piecewise-linear convex functions. A function f : Rn → R, with dom f = Rn, is called piecewise-linear if there exists a partition of Rn as
Rn = X1 ∪ X2 ∪ ··· ∪ XL,
where int Xi ≠ ∅ and int Xi ∩ int Xj = ∅ for i ≠ j, and a family of affine functions a1Tx + b1, . . . , aLTx + bL such that f(x) = aiTx + bi for x ∈ Xi.
Show that this means that f(x) = max{a1Tx + b1, . . . , aLTx + bL}.
3.30 Convex hull or envelope of a function. The convex hull or convex envelope of a function f : Rn → R is defined as
g(x) = inf{t | (x, t) ∈ conv epi f}.
Geometrically, the epigraph of g is the convex hull of the epigraph of f.
Show that g is the largest convex underestimator of f. In other words, show that if h is convex and satisfies h(x) ≤ f(x) for all x, then h(x) ≤ g(x) for all x.
3.31 [Roc70, page 35] Largest homogeneous underestimator. Let f be a convex function. Define the function g as
g(x) = inf_{α>0} f(αx)/α.
(a) Show that g is homogeneous (g(tx) = tg(x) for all t ≥ 0).
(b) Show that g is the largest homogeneous underestimator of f: If h is homogeneous and h(x) ≤ f(x) for all x, then we have h(x) ≤ g(x) for all x.
(c) Show that g is convex.
3.32 Products and ratios of convex functions. In general the product or ratio of two convex functions is not convex. However, there are some results that apply to functions on R. Prove the following.
(a) If f and g are convex, both nondecreasing (or nonincreasing), and positive functions on an interval, then fg is convex.
(b) If f, g are concave, positive, with one nondecreasing and the other nonincreasing, then fg is concave.
(c) If f is convex, nondecreasing, and positive, and g is concave, nonincreasing, and positive, then f/g is convex.
3.33 Direct proof of perspective theorem. Give a direct proof that the perspective function g, as defined in §3.2.6, of a convex function f is convex: Show that dom g is a convex set, and that for (x, t), (y, s) ∈ dom g, and 0 ≤ θ ≤ 1, we have
g(θx + (1 − θ)y, θt + (1 − θ)s) ≤ θg(x, t) + (1 − θ)g(y, s).
3.34 The Minkowski function. The Minkowski function of a convex set C is defined as
MC(x) = inf{t > 0 | t−1x ∈ C}.

(a) Draw a picture giving a geometric interpretation of how to find MC(x).
(b) Show that MC is homogeneous, i.e., MC(αx) = αMC(x) for α ≥ 0.
(c) What is dom MC?
(d) Show that MC is a convex function.
(e) Suppose C is also closed, bounded, symmetric (if x ∈ C then −x ∈ C), and has nonempty interior. Show that MC is a norm. What is the corresponding unit ball?

3.35 Support function calculus. Recall that the support function of a set C ⊆ Rn is defined as SC(y) = sup{yTx | x ∈ C}. On page 81 we showed that SC is a convex function.
(a) Show that SB = Sconv B.
(b) Show that SA+B = SA + SB.
(c) Show that SA∪B = max{SA, SB}.
(d) Let B be closed and convex. Show that A ⊆ B if and only if SA(y) ≤ SB(y) for all y.
Conjugate functions

3.36 Derive the conjugates of the following functions.
(a) Max function. f(x) = max_{i=1,...,n} xi on Rn.
(b) Sum of largest elements. f(x) = ∑_{i=1}^{r} x[i] on Rn.
(c) Piecewise-linear function on R. f(x) = max_{i=1,...,m} (aix + bi) on R. You can assume that the ai are sorted in increasing order, i.e., a1 ≤ ··· ≤ am, and that none of the functions aix + bi is redundant, i.e., for each k there is at least one x with f(x) = akx + bk.
(d) Power function. f(x) = x^p on R++, where p > 1. Repeat for p < 0.
(e) Negative geometric mean. f(x) = −(∏ xi)^{1/n} on Rn++.
(f) Negative generalized logarithm for second-order cone. f(x, t) = −log(t² − xTx) on {(x, t) ∈ Rn × R | ∥x∥2 < t}.

3.37 Show that the conjugate of
f(x) = ∑_{i=1}^{n} xi log(xi/1Tx),
with dom f = Rn++, is
f∗(y) = { 0 if ∑_{i=1}^{n} e^{yi} ≤ 1; ∞ otherwise }.

3.38 Young’s inequality. Let f : R → R be an increasing function, with f(0) = 0, and let g denote its inverse. Define F and G as
F(x) = ∫_0^x f(a) da,    G(y) = ∫_0^y g(a) da.
Show that F and G are conjugates, and give a graphical interpretation of Young’s inequality,
xy ≤ F(x) + G(y).

3.39 Properties of conjugate functions.
(a) Conjugate of convex plus affine function. Define g(x) = f(x) + cTx + d, where f is convex. Express g∗ in terms of f∗ (and c, d).
(b) Conjugate of perspective. Express the conjugate of the perspective of a convex function f in terms of f∗.

Quasiconvex functions

3.42 Approximation width. Let f0, . . . , fn : R → R be given continuous functions. We consider the problem of approximating f0 as a linear combination of f1, . . . , fn. For x ∈ Rn, we say that f = x1f1 + ··· + xnfn approximates f0 with tolerance ǫ > 0 over the interval [0, T] if |f(t) − f0(t)| ≤ ǫ for 0 ≤ t ≤ T. Now we choose a fixed tolerance ǫ > 0 and define the approximation width as the largest T such that f approximates f0 over the interval [0, T]:
W(x) = sup{T | |x1f1(t) + ··· + xnfn(t) − f0(t)| ≤ ǫ for 0 ≤ t ≤ T}.
Show that W is quasiconcave.
3.43 First-order condition for quasiconvexity. Prove the first-order condition for quasiconvexity given in §3.4.3: A differentiable function f : Rn → R, with dom f convex, is quasiconvex if and only if for all x,y ∈ domf,
f(y) ≤ f(x) =⇒ ∇f(x)T(y − x) ≤ 0.
Hint. It suffices to prove the result for a function on R; the general result follows by
restriction to an arbitrary line.
3.44 Second-order conditions for quasiconvexity. In this problem we derive alternate representations of the second-order conditions for quasiconvexity given in §3.4.3. Prove the following.
(a) A point x ∈ dom f satisfies (3.21) if and only if there exists a σ such that
∇2f(x) + σ∇f(x)∇f(x)T ≽ 0. (3.26)
It satisfies (3.22) for all y ≠ 0 if and only if there exists a σ such that
∇2f(x) + σ∇f(x)∇f(x)T ≻ 0. (3.27)
Hint. We can assume without loss of generality that ∇2f(x) is diagonal.

(b) A point x ∈ dom f satisfies (3.21) if and only if either ∇f(x) = 0 and ∇2f(x) ≽ 0, or ∇f(x) ≠ 0 and the matrix
H(x) = [ ∇2f(x)   ∇f(x) ]
       [ ∇f(x)T   0     ]
has exactly one negative eigenvalue. It satisfies (3.22) for all y ≠ 0 if and only if H(x) has exactly one nonpositive eigenvalue.
Hint. You can use the result of part (a). The following result, which follows from the eigenvalue interlacing theorem in linear algebra, may also be useful: If B ∈ Sn and a ∈ Rn, then
λn([ B  a ; aT  0 ]) ≥ λn(B).
3.45 Use the first and second-order conditions for quasiconvexity given in §3.4.3 to verify
quasiconvexity of the function f(x) = −x1x2, with domf = R2++.
3.46 Quasilinear functions with domain Rn. A function on R that is quasilinear (i.e., qua- siconvex and quasiconcave) is monotone, i.e., either nondecreasing or nonincreasing. In this problem we consider a generalization of this result to functions on Rn.
Suppose the function f : Rn → R is quasilinear and continuous with dom f = Rn. Show that it can be expressed as f(x) = g(aTx), where g : R → R is monotone and a ∈ Rn. In other words, a quasilinear function with domain Rn must be a monotone function of a linear function. (The converse is also true.)
Log-concave and log-convex functions
3.47 Suppose f : Rn → R is differentiable, dom f is convex, and f(x) > 0 for all x ∈ dom f. Show that f is log-concave if and only if for all x, y ∈ dom f,
f(y)/f(x) ≤ exp( ∇f(x)T(y − x) / f(x) ).
3.48 Show that if f : Rn → R is log-concave and a ≥ 0, then the function g = f − a is log-concave, where dom g = {x ∈ dom f | f(x) > a}.

3.49 Show that the following functions are log-concave.
(a) Logistic function: f(x) = e^x/(1 + e^x) with dom f = R.
(b) Harmonic mean:
f(x) = 1/(1/x1 + ··· + 1/xn),    dom f = Rn++.
(c) Product over sum:
f(x) = (∏_{i=1}^{n} xi)/(∑_{i=1}^{n} xi),    dom f = Rn++.
(d) Determinant over trace:
f(X) = det X / tr X,    dom f = Sn++.

3.50 Coefficients of a polynomial as a function of the roots. Show that the coefficients of a polynomial with real negative roots are log-concave functions of the roots. In other words, the functions ai : Rn → R, defined by the identity
s^n + a1(λ)s^{n−1} + ··· + an−1(λ)s + an(λ) = (s − λ1)(s − λ2)···(s − λn),
are log-concave on −Rn++.
Hint. The function
Sk(x) = ∑_{1≤i1<i2<···<ik≤n} xi1 xi2 ··· xik,
the kth elementary symmetric function of x, is log-concave for x ≻ 0.
(b) [MO79, page 306] The Dirichlet density
f(x) = (Γ(1Tλ) / (Γ(λ1)···Γ(λn+1))) x1^{λ1−1} ··· xn^{λn−1} (1 − 1Tx)^{λn+1−1}
with dom f = {x ∈ Rn++ | 1Tx < 1}. The parameter λ satisfies λ ≽ 1.

Convexity with respect to generalized inequalities

3.57 Show that the function f(X) = X^{−1} is matrix convex on Sn++.

3.58 Schur complement. Suppose X ∈ Sn is partitioned as
X = [ A  B ]
    [ BT C ],
where A ∈ Sk. The Schur complement of X (with respect to A) is S = C − BTA^{−1}B (see §A.5.5). Show that the Schur complement, viewed as a function from Sn into Sn−k, is matrix concave on Sn++.

3.59 Second-order conditions for K-convexity. Let K ⊆ Rm be a proper convex cone, with associated generalized inequality ≼K. Show that a twice differentiable function f : Rn → Rm, with convex domain, is K-convex if and only if for all x ∈ dom f and all y ∈ Rn,
∑_{i,j=1}^{n} (∂²f(x)/∂xi∂xj) yi yj ≽K 0,
i.e., the second derivative is a K-nonnegative bilinear form. (Here ∂²f/∂xi∂xj ∈ Rm, with components ∂²fk/∂xi∂xj, for k = 1, . . . , m; see §A.4.1.)

3.60 Sublevel sets and epigraph of K-convex functions. Let K ⊆ Rm be a proper convex cone with associated generalized inequality ≼K, and let f : Rn → Rm. For α ∈ Rm, the α-sublevel set of f (with respect to ≼K) is defined as
Cα = {x ∈ Rn | f(x) ≼K α}.
The epigraph of f, with respect to ≼K, is defined as the set
epiK f = {(x, t) ∈ Rn+m | f(x) ≼K t}.
Show the following:
(a) If f is K-convex, then its sublevel sets Cα are convex for all α.
(b) f is K-convex if and only if epiK f is a convex set.

Chapter 4

Convex optimization problems

4.1 Optimization problems

4.1.1 Basic terminology

We use the notation
minimize   f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m          (4.1)
           hi(x) = 0, i = 1, . . . , p
to describe the problem of finding an x that minimizes f0(x) among all x that satisfy the conditions fi(x) ≤ 0, i = 1, . . . , m, and hi(x) = 0, i = 1, . . . , p. We call x ∈ Rn the optimization variable and the function f0 : Rn → R the objective function or cost function. The inequalities fi(x) ≤ 0 are called inequality constraints, and the corresponding functions fi : Rn → R are called the inequality constraint functions. The equations hi(x) = 0 are called the equality constraints, and the functions hi : Rn → R are the equality constraint functions. If there are no constraints (i.e., m = p = 0) we say the problem (4.1) is unconstrained.
The set of points for which the objective and all constraint functions are defined,
D = ⋂_{i=0}^{m} dom fi ∩ ⋂_{i=1}^{p} dom hi,
is called the domain of the optimization problem (4.1). A point x ∈ D is feasible if it satisfies the constraints fi(x) ≤ 0, i = 1, . . . , m, and hi(x) = 0, i = 1, . . . , p. The problem (4.1) is said to be feasible if there exists at least one feasible point, and infeasible otherwise. The set of all feasible points is called the feasible set or the constraint set.
The optimal value p⋆ of the problem (4.1) is defined as
p⋆ = inf{f0(x) | fi(x) ≤ 0, i = 1, . . . , m, hi(x) = 0, i = 1, . . . , p}.
We allow p⋆ to take on the extended values ±∞. If the problem is infeasible, we have p⋆ = ∞ (following the standard convention that the infimum of the empty set is ∞). If there are feasible points xk with f0(xk) → −∞ as k → ∞, then p⋆ = −∞, and we say the problem (4.1) is unbounded below.

Optimal and locally optimal points

We say x⋆ is an optimal point, or solves the problem (4.1), if x⋆ is feasible and f0(x⋆) = p⋆. The set of all optimal points is the optimal set, denoted
Xopt = {x | fi(x) ≤ 0, i = 1, . . . , m, hi(x) = 0, i = 1, . . . , p, f0(x) = p⋆}.
If there exists an optimal point for the problem (4.1), we say the optimal value is attained or achieved, and the problem is solvable. If Xopt is empty, we say the optimal value is not attained or not achieved. (This always occurs when the problem is unbounded below.) A feasible point x with f0(x) ≤ p⋆ + ǫ (where ǫ > 0) is called ǫ-suboptimal, and the set of all ǫ-suboptimal points is called the ǫ-suboptimal set for the problem (4.1).
We say a feasible point x is locally optimal if there is an R > 0 such that f0(x) = inf{f0(z) | fi(z) ≤ 0, i = 1,…,m,
hi(z)=0, i=1,…,p, ∥z−x∥2 ≤R}, or, in other words, x solves the optimization problem
minimize f0(z)
subject to fi(z) ≤ 0, i = 1,…,m
hi(z) = 0, i = 1,…,p ∥z−x∥2 ≤R
with variable z. Roughly speaking, this means x minimizes f0 over nearby points in the feasible set. The term ‘globally optimal’ is sometimes used for ‘optimal’ to distinguish between ‘locally optimal’ and ‘optimal’. Throughout this book, however, optimal will mean globally optimal.
If x is feasible and fi(x) = 0, we say the ith inequality constraint fi(x) ≤ 0 is active at x. If fi(x) < 0, we say the constraint fi(x) ≤ 0 is inactive. (The equality constraints are active at all feasible points.) We say that a constraint is redundant if deleting it does not change the feasible set.

Example 4.1 We illustrate these definitions with a few simple unconstrained optimization problems with variable x ∈ R, and dom f0 = R++.
• f0(x) = 1/x: p⋆ = 0, but the optimal value is not achieved.
• f0(x) = −log x: p⋆ = −∞, so this problem is unbounded below.
• f0(x) = x log x: p⋆ = −1/e, achieved at the (unique) optimal point x⋆ = 1/e.

Feasibility problems

If the objective function is identically zero, the optimal value is either zero (if the feasible set is nonempty) or ∞ (if the feasible set is empty). We call this the feasibility problem, and will sometimes write it as
find       x
subject to fi(x) ≤ 0, i = 1, . . . , m
           hi(x) = 0, i = 1, . . . , p.
The feasibility problem is thus to determine whether the constraints are consistent, and if so, find a point that satisfies them.

4.1.2 Expressing problems in standard form

We refer to (4.1) as an optimization problem in standard form. In the standard form problem we adopt the convention that the righthand side of the inequality and equality constraints are zero. This can always be arranged by subtracting any nonzero righthand side: we represent the equality constraint gi(x) = g̃i(x), for example, as hi(x) = 0, where hi(x) = gi(x) − g̃i(x). In a similar way we express inequalities of the form fi(x) ≥ 0 as −fi(x) ≤ 0.

Example 4.2 Box constraints. Consider the optimization problem
minimize   f0(x)
subject to li ≤ xi ≤ ui, i = 1, . . . , n,
where x ∈ Rn is the variable. The constraints are called variable bounds (since they give lower and upper bounds for each xi) or box constraints (since the feasible set is a box).
We can express this problem in standard form as
minimize   f0(x)
subject to li − xi ≤ 0, i = 1, . . . , n
           xi − ui ≤ 0, i = 1, . . . , n.
There are 2n inequality constraint functions:
fi(x) = li − xi, i = 1, . . . , n,
and
fi(x) = xi−n − ui−n, i = n + 1, . . . , 2n.

Maximization problems

We concentrate on the minimization problem by convention. We can solve the maximization problem
maximize   f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m          (4.2)
           hi(x) = 0, i = 1, . . . , p
by minimizing the function −f0 subject to the constraints. By this correspondence we can define all the terms above for the maximization problem (4.2). For example the optimal value of (4.2) is defined as
p⋆ = sup{f0(x) | fi(x) ≤ 0, i = 1, . . . , m, hi(x) = 0, i = 1, . . . , p},
and a feasible point x is ǫ-suboptimal if f0(x) ≥ p⋆ − ǫ. When the maximization problem is considered, the objective is sometimes called the utility or satisfaction level instead of the cost.

4.1.3 Equivalent problems

In this book we will use the notion of equivalence of optimization problems in an informal way. We call two problems equivalent if from a solution of one, a solution of the other is readily found, and vice versa. (It is possible, but complicated, to give a formal definition of equivalence.)
As a simple example, consider the problem
minimize   f̃0(x) = α0 f0(x)
subject to f̃i(x) = αi fi(x) ≤ 0, i = 1, . . . , m          (4.3)
           h̃i(x) = βi hi(x) = 0, i = 1, . . . , p,
where αi > 0, i = 0, . . . , m, and βi ≠ 0, i = 1, . . . , p.
This problem is obtained from the standard form problem (4.1) by scaling the objective and inequality constraint functions by positive constants, and scaling the equality constraint functions by nonzero constants. As a result, the feasible sets of the problem (4.3) and the original problem (4.1) are identical. A point x is optimal for the original problem (4.1) if and only if it is optimal for the scaled problem (4.3), so we say the two problems are equivalent. The two problems (4.1) and (4.3) are not, however, the same (unless αi and βi are all equal to one), since the objective and constraint functions differ.
We now describe some general transformations that yield equivalent problems.
Change of variables
Suppose φ : Rn → Rn is one-to-one, with image covering the problem domain D, i.e., φ(dom φ) ⊇ D. We define functions f̃i and h̃i as
f̃i(z) = fi(φ(z)), i = 0, . . . , m,    h̃i(z) = hi(φ(z)), i = 1, . . . , p.
Now consider the problem
minimize   f̃0(z)
subject to f̃i(z) ≤ 0, i = 1, . . . , m          (4.4)
           h̃i(z) = 0, i = 1, . . . , p,
with variable z. We say that the standard form problem (4.1) and the problem (4.4) are related by the change of variable or substitution of variable x = φ(z).
The two problems are clearly equivalent: if x solves the problem (4.1), then z = φ−1(x) solves the problem (4.4); if z solves the problem (4.4), then x = φ(z) solves the problem (4.1).

Transformation of objective and constraint functions
Suppose that ψ0 : R → R is monotone increasing, ψ1, . . . , ψm : R → R satisfy ψi(u) ≤ 0 if and only if u ≤ 0, and ψm+1, . . . , ψm+p : R → R satisfy ψi(u) = 0 if and only if u = 0. We define functions f̃i and h̃i as the compositions
f̃i(x) = ψi(fi(x)), i = 0, . . . , m,    h̃i(x) = ψm+i(hi(x)), i = 1, . . . , p.
Evidently the associated problem
minimize   f̃0(x)
subject to f̃i(x) ≤ 0, i = 1, . . . , m
           h̃i(x) = 0, i = 1, . . . , p
and the standard form problem (4.1) are equivalent; indeed, the feasible sets are identical, and the optimal points are identical. (The example (4.3) above, in which the objective and constraint functions are scaled by appropriate constants, is the special case when all ψi are linear.)
Example 4.3 Least-norm and least-norm-squared problems. As a simple example consider the unconstrained Euclidean norm minimization problem
minimize ∥Ax − b∥2,          (4.5)
with variable x ∈ Rn. Since the norm is always nonnegative, we can just as well solve the problem
minimize ∥Ax − b∥2² = (Ax − b)T(Ax − b),          (4.6)
in which we minimize the square of the Euclidean norm. The problems (4.5) and (4.6) are clearly equivalent; the optimal points are the same. The two problems are not the same, however. For example, the objective in (4.5) is not differentiable at any x with Ax − b = 0, whereas the objective in (4.6) is differentiable for all x (in fact, quadratic).
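A quick numerical illustration of this equivalence (assuming NumPy and SciPy; the data are illustrative): the squared objective (4.6) is minimized in closed form via the normal equations, the objective (4.5) by a general-purpose method, and the two minimizers coincide.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
A, b = rng.standard_normal((20, 3)), rng.standard_normal(20)

# (4.6): minimize the squared norm; solved exactly by the normal equations
x_sq = np.linalg.solve(A.T @ A, A.T @ b)

# (4.5): minimize the norm itself with a general-purpose method
x_nm = minimize(lambda x: np.linalg.norm(A @ x - b), np.zeros(3)).x

print(np.allclose(x_sq, x_nm, atol=1e-4))   # True: same optimal point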
Slack variables
One simple transformation is based on the observation that fi(x) ≤ 0 if and only if there is an si ≥ 0 that satisfies fi(x)+si = 0. Using this transformation we obtain the problem
    minimize    f0(x)
    subject to  si ≥ 0, i = 1,...,m
                fi(x) + si = 0, i = 1,...,m    (4.7)
                hi(x) = 0, i = 1,...,p,
where the variables are x ∈ Rn and s ∈ Rm. This problem has n + m variables, m inequality constraints (the nonnegativity constraints on si), and m + p equality constraints. The new variable si is called the slack variable associated with the original inequality constraint fi(x) ≤ 0. Introducing slack variables replaces each inequality constraint with an equality constraint, and a nonnegativity constraint.
The problem (4.7) is equivalent to the original standard form problem (4.1). Indeed, if (x, s) is feasible for the problem (4.7), then x is feasible for the original
problem, since si = −fi(x) ≥ 0. Conversely, if x is feasible for the original problem, then (x, s) is feasible for the problem (4.7), where we take si = −fi (x). Similarly, x is optimal for the original problem (4.1) if and only if (x, s) is optimal for the problem (4.7), where si = −fi(x).
Eliminating equality constraints
If we can explicitly parametrize all solutions of the equality constraints
hi(x) = 0, i = 1,…,p, (4.8)
using some parameter z ∈ Rk, then we can eliminate the equality constraints from the problem, as follows. Suppose the function φ : Rk → Rn is such that x satisfies (4.8) if and only if there is some z ∈ Rk such that x = φ(z). The optimization problem
    minimize    f̃0(z) = f0(φ(z))
    subject to  f̃i(z) = fi(φ(z)) ≤ 0, i = 1,...,m
is then equivalent to the original problem (4.1). This transformed problem has variable z ∈ Rk, m inequality constraints, and no equality constraints. If z is optimal for the transformed problem, then x = φ(z) is optimal for the original problem. Conversely, if x is optimal for the original problem, then (since x is feasible) there is at least one z such that x = φ(z). Any such z is optimal for the transformed problem.
Eliminating linear equality constraints
The process of eliminating variables can be described more explicitly, and easily carried out numerically, when the equality constraints are all linear, i.e., have the form Ax = b. If Ax = b is inconsistent, i.e., b ∉ R(A), then the original problem is infeasible. Assuming this is not the case, let x0 denote any solution of the equality constraints. Let F ∈ Rn×k be any matrix with R(F) = N(A), so the general solution of the linear equations Ax = b is given by Fz + x0, where z ∈ Rk. (We can choose F to be full rank, in which case we have k = n − rank A.)
Substituting x = Fz + x0 into the original problem yields the problem

    minimize    f0(Fz + x0)
    subject to  fi(Fz + x0) ≤ 0, i = 1,...,m,

with variable z, which is equivalent to the original problem, has no equality constraints, and rank A fewer variables.
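In code, the elimination is a couple of library calls. The following sketch assumes scipy is available (scipy.linalg.null_space for F, least squares for x0); the data A, b are illustrative, not from the text.

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])   # A in R^{p x n}
    b = np.array([1.0, 2.0])

    x0, *_ = np.linalg.lstsq(A, b, rcond=None)  # a particular solution of Ax = b
    F = null_space(A)                           # columns span N(A), so R(F) = N(A)
    k = F.shape[1]                              # k = n - rank A free variables

    # Every point F z + x0 satisfies the equality constraints:
    z = np.random.default_rng(1).standard_normal(k)
    assert np.allclose(A @ (F @ z + x0), b)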
Introducing equality constraints

We can also introduce equality constraints and new variables into a problem. Instead of describing the general case, which is complicated and not very illuminating, we give a typical example that will be useful later. Consider the problem
    minimize    f0(A0x + b0)
    subject to  fi(Aix + bi) ≤ 0, i = 1,...,m
                hi(x) = 0, i = 1,...,p,
where x ∈ Rn, Ai ∈ Rki×n, and fi : Rki → R. In this problem the objective and constraint functions are given as compositions of the functions fi with affine transformations defined by Aix + bi.
We introduce new variables yi ∈ Rki , as well as new equality constraints yi = Aix + bi, for i = 0, . . . , m, and form the equivalent problem
    minimize    f0(y0)
    subject to  fi(yi) ≤ 0, i = 1,...,m
                yi = Aix + bi, i = 0,...,m
                hi(x) = 0, i = 1,...,p.
This problem has k0 + · · · + km new variables,
y0 ∈ Rk0, …, ym ∈ Rkm,
and k0 + · · · + km new equality constraints,
y0 =A0x+b0, …, ym =Amx+bm.
The objective and inequality constraints in this problem are independent, i.e., involve different optimization variables.
Optimizing over some variables

We always have

    infx,y f(x, y) = infx f̃(x),

where f̃(x) = infy f(x, y). In other words, we can always minimize a function by first minimizing over some of the variables, and then minimizing over the remaining ones. This simple and general principle can be used to transform problems into equivalent forms. The general case is cumbersome to describe and not illuminating, so we describe instead an example.

Suppose the variable x ∈ Rn is partitioned as x = (x1, x2), with x1 ∈ Rn1, x2 ∈ Rn2, and n1 + n2 = n. We consider the problem

    minimize    f0(x1, x2)
    subject to  fi(x1) ≤ 0, i = 1,...,m1    (4.9)
                f̃i(x2) ≤ 0, i = 1,...,m2,

in which the constraints are independent, in the sense that each constraint function depends on x1 or x2. We first minimize over x2. Define the function f̃0 of x1 by

    f̃0(x1) = inf{f0(x1, z) | f̃i(z) ≤ 0, i = 1,...,m2}.

The problem (4.9) is then equivalent to

    minimize    f̃0(x1)
    subject to  fi(x1) ≤ 0, i = 1,...,m1.    (4.10)
Example 4.4 Minimizing a quadratic function with constraints on some variables. Consider a problem with strictly convex quadratic objective, with some of the variables unconstrained:
    minimize    xT1 P11 x1 + 2xT1 P12 x2 + xT2 P22 x2
    subject to  fi(x1) ≤ 0, i = 1,...,m,

where P11 and P22 are symmetric. Here we can analytically minimize over x2:

    infx2 (xT1 P11 x1 + 2xT1 P12 x2 + xT2 P22 x2) = xT1 (P11 − P12 P22−1 PT12) x1

(see §A.5.5). Therefore the original problem is equivalent to

    minimize    xT1 (P11 − P12 P22−1 PT12) x1
    subject to  fi(x1) ≤ 0, i = 1,...,m.
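The identity above is easy to verify numerically; a sketch assuming only numpy, with P built to be symmetric positive definite (arbitrary data):

    import numpy as np

    rng = np.random.default_rng(5)
    M = rng.standard_normal((5, 5))
    P = M @ M.T + 5 * np.eye(5)                  # strictly convex quadratic
    P11, P12, P22 = P[:2, :2], P[:2, 2:], P[2:, 2:]

    x1 = rng.standard_normal(2)
    x2_star = -np.linalg.solve(P22, P12.T @ x1)  # gradient over x2 set to zero

    direct = x1 @ P11 @ x1 + 2 * x1 @ P12 @ x2_star + x2_star @ P22 @ x2_star
    schur = x1 @ (P11 - P12 @ np.linalg.solve(P22, P12.T)) @ x1
    assert np.isclose(direct, schur)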
Epigraph problem form

The epigraph form of the standard problem (4.1) is the problem

    minimize    t
    subject to  f0(x) − t ≤ 0
                fi(x) ≤ 0, i = 1,...,m    (4.11)
                hi(x) = 0, i = 1,...,p,
with variables x ∈ Rn and t ∈ R. We can easily see that it is equivalent to the original problem: (x,t) is optimal for (4.11) if and only if x is optimal for (4.1) and t = f0(x). Note that the objective function of the epigraph form problem is a linear function of the variables x, t.
The epigraph form problem (4.11) can be interpreted geometrically as an optimization problem in the ‘graph space’ (x, t): we minimize t over the epigraph of f0, subject to the constraints on x. This is illustrated in figure 4.1.
Implicit and explicit constraints
By a simple trick already mentioned in §3.1.2, we can include any of the constraints implicitly in the objective function, by redefining its domain. As an extreme example, the standard form problem can be expressed as the unconstrained problem
    minimize    F(x),    (4.12)

where we define the function F as f0, but with domain restricted to the feasible set:

    dom F = {x ∈ dom f0 | fi(x) ≤ 0, i = 1,...,m, hi(x) = 0, i = 1,...,p},
and F (x) = f0 (x) for x ∈ dom F . (Equivalently, we can define F (x) to have value ∞ for x not feasible.) The problems (4.1) and (4.12) are clearly equivalent: they have the same feasible set, optimal points, and optimal value.
Figure 4.1 Geometric interpretation of epigraph form problem, for a problem with no constraints. The problem is to find the point in the epigraph (shown shaded) that minimizes t, i.e., the ‘lowest’ point in the epigraph. The optimal point is (x⋆, t⋆).

Of course this transformation is nothing more than a notational trick. Making the constraints implicit has not made the problem any easier to analyze or solve,
even though the problem (4.12) is, at least nominally, unconstrained. In some ways the transformation makes the problem more difficult. Suppose, for example, that the objective f0 in the original problem is differentiable, so in particular its domain is open. The restricted objective function F is probably not differentiable, since its domain is likely not to be open.
Conversely, we will encounter problems with implicit constraints, which we can then make explicit. As a simple example, consider the unconstrained problem
minimize f (x) (4.13)
where the function f is given by
    f(x) = xT x  if Ax = b,    f(x) = ∞  otherwise.
Thus, the objective function is equal to the quadratic form xT x on the affine set defined by Ax = b, and ∞ off the affine set. Since we can clearly restrict our attention to points that satisfy Ax = b, we say that the problem (4.13) has an implicit equality constraint Ax = b hidden in the objective. We can make the implicit equality constraint explicit, by forming the equivalent problem
    minimize    xT x
    subject to  Ax = b.    (4.14)
While the problems (4.13) and (4.14) are clearly equivalent, they are not the same. The problem (4.13) is unconstrained, but its objective function is not differentiable. The problem (4.14), however, has an equality constraint, but its objective and constraint functions are differentiable.
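Once the constraint is explicit, (4.14) is the standard least-norm problem, whose minimizer is x⋆ = AT (AAT)−1 b when A has independent rows. A quick numpy sketch with illustrative data:

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
    b = np.array([3.0, 1.0])

    x_star = A.T @ np.linalg.solve(A @ A.T, b)   # minimizer of x^T x over Ax = b
    assert np.allclose(A @ x_star, b)            # x_star is feasible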
4.1.4 Parameter and oracle problem descriptions
For a problem in the standard form (4.1), there is still the question of how the objective and constraint functions are specified. In many cases these functions have some analytical or closed form, i.e., are given by a formula or expression that involves the variable x as well as some parameters. Suppose, for example, the objective is quadratic, so it has the form f0(x) = (1/2)xT Px + qT x + r. To specify the objective function we give the coefficients (also called problem parameters or problem data) P ∈ Sn, q ∈ Rn, and r ∈ R. We call this a parameter problem description, since the specific problem to be solved (i.e., the problem instance) is specified by giving the values of the parameters that appear in the expressions for the objective and constraint functions.
In other cases the objective and constraint functions are described by oracle models (which are also called black box or subroutine models). In an oracle model, we do not know f explicitly, but can evaluate f(x) (and usually also some deriva- tives) at any x ∈ dom f . This is referred to as querying the oracle, and is usually associated with some cost, such as time. We are also given some prior information about the function, such as convexity and a bound on its values. As a concrete example of an oracle model, consider an unconstrained problem, in which we are to minimize the function f . The function value f (x) and its gradient ∇f (x) are evaluated in a subroutine. We can call the subroutine at any x ∈ domf, but do not have access to its source code. Calling the subroutine with argument x yields (when the subroutine returns) f(x) and ∇f(x). Note that in the oracle model, we never really know the function; we only know the function value (and some derivatives) at the points where we have queried the oracle. (We also know some given prior information about the function, such as differentiability and convexity.)
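The following sketch shows what an oracle description might look like in Python (a hypothetical interface, not anything standard): the caller can query values and gradients, and pays for each query, but never sees the expression defining f.

    import numpy as np

    class Oracle:
        """Black box: returns f(x) and grad f(x) when queried."""
        def __init__(self):
            self._P = np.array([[2.0, 0.5], [0.5, 1.0]])   # hidden problem data
            self._q = np.array([1.0, -1.0])
            self.num_queries = 0                           # the cost of querying

        def query(self, x):
            self.num_queries += 1
            val = 0.5 * x @ self._P @ x + self._q @ x
            grad = self._P @ x + self._q
            return val, grad

    oracle = Oracle()
    f_val, g_val = oracle.query(np.zeros(2))   # all an algorithm can do is query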
In practice the distinction between a parameter and oracle problem description is not so sharp. If we are given a parameter problem description, we can construct an oracle for it, which simply evaluates the required functions and derivatives when queried. Most of the algorithms we study in part III work with an oracle model, but can be made more efficient when they are restricted to solve a specific parametrized family of problems.
4.2 Convex optimization
4.2.1 Convex optimization problems in standard form
A convex optimization problem is one of the form
    minimize    f0(x)
    subject to  fi(x) ≤ 0, i = 1,...,m    (4.15)
                aTi x = bi, i = 1,...,p,
where f0 , . . . , fm are convex functions. Comparing (4.15) with the general standard form problem (4.1), the convex problem has three additional requirements:
• the objective function must be convex,
• the inequality constraint functions must be convex,
• the equality constraint functions hi(x) = aTi x − bi must be affine.
We immediately note an important property: The feasible set of a convex optimiza- tion problem is convex, since it is the intersection of the domain of the problem
    D = dom f0 ∩ dom f1 ∩ ··· ∩ dom fm,
which is a convex set, with m (convex) sublevel sets {x | fi(x) ≤ 0} and p hyper- planes {x | aTi x = bi}. (We can assume without loss of generality that ai ̸= 0: if ai = 0 and bi = 0 for some i, then the ith equality constraint can be deleted; if ai = 0 and bi ̸= 0, the ith equality constraint is inconsistent, and the problem is in- feasible.) Thus, in a convex optimization problem, we minimize a convex objective function over a convex set.
If f0 is quasiconvex instead of convex, we say the problem (4.15) is a (standard form) quasiconvex optimization problem. Since the sublevel sets of a convex or quasiconvex function are convex, we conclude that for a convex or quasiconvex optimization problem the ǫ-suboptimal sets are convex. In particular, the optimal set is convex. If the objective is strictly convex, then the optimal set contains at most one point.
Concave maximization problems
With a slight abuse of notation, we will also refer to
    maximize    f0(x)
    subject to  fi(x) ≤ 0, i = 1,...,m    (4.16)
                aTi x = bi, i = 1,...,p,
as a convex optimization problem if the objective function f0 is concave, and the inequality constraint functions f1 , . . . , fm are convex. This concave maximization problem is readily solved by minimizing the convex objective function −f0. All of the results, conclusions, and algorithms that we describe for the minimization problem are easily transposed to the maximization case. In a similar way the maximization problem (4.16) is called quasiconvex if f0 is quasiconcave.
Abstract form convex optimization problem
It is important to note a subtlety in our definition of convex optimization problem. Consider the example with x ∈ R2,
    minimize    f0(x) = x1² + x2²
    subject to  f1(x) = x1/(1 + x2²) ≤ 0    (4.17)
                h1(x) = (x1 + x2)² = 0,
which is in the standard form (4.1). This problem is not a convex optimization problem in standard form since the equality constraint function h1 is not affine, and
the inequality constraint function f1 is not convex. Nevertheless the feasible set, which is {x | x1 ≤ 0, x1 +x2 = 0}, is convex. So although in this problem we are minimizing a convex function f0 over a convex set, it is not a convex optimization problem by our definition.
Of course, the problem is readily reformulated as

    minimize    f0(x) = x1² + x2²
    subject to  f̃1(x) = x1 ≤ 0    (4.18)
                h̃1(x) = x1 + x2 = 0,

which is in standard convex optimization form, since f0 and f̃1 are convex, and h̃1 is affine.
Some authors use the term abstract convex optimization problem to describe the
(abstract) problem of minimizing a convex function over a convex set. Using this terminology, the problem (4.17) is an abstract convex optimization problem. We will not use this terminology in this book. For us, a convex optimization problem is not just one of minimizing a convex function over a convex set; it is also required that the feasible set be described specifically by a set of inequalities involving convex functions, and a set of linear equality constraints. The problem (4.17) is not a convex optimization problem, but the problem (4.18) is a convex optimization problem. (The two problems are, however, equivalent.)
Our adoption of the stricter definition of convex optimization problem does not matter much in practice. To solve the abstract problem of minimizing a convex function over a convex set, we need to find a description of the set in terms of convex inequalities and linear equality constraints. As the example above suggests, this is usually straightforward.
4.2.2 Local and global optima
A fundamental property of convex optimization problems is that any locally optimal point is also (globally) optimal. To see this, suppose that x is locally optimal for a convex optimization problem, i.e., x is feasible and
f0(x) = inf{f0(z) | z feasible, ∥z − x∥2 ≤ R}, (4.19)
for some R > 0. Now suppose that x is not globally optimal, i.e., there is a feasible y such that f0(y) < f0(x). Evidently ∥y − x∥2 > R, since otherwise f0(x) ≤ f0(y). Consider the point z given by
    z = (1 − θ)x + θy,    θ = R/(2∥y − x∥2).
Then we have ∥z − x∥2 = R/2 < R, and by convexity of the feasible set, z is feasible. By convexity of f0 we have

    f0(z) ≤ (1 − θ)f0(x) + θf0(y) < f0(x),

which contradicts (4.19). Hence there exists no feasible y with f0(y) < f0(x), i.e., x is globally optimal.

Figure 4.2 Geometric interpretation of the optimality condition (4.21). The feasible set X is shown shaded. Some level curves of f0 are shown as dashed lines. The point x is optimal: −∇f0(x) defines a supporting hyperplane (shown as a solid line) to X at x.

It is not true that locally optimal points of quasiconvex optimization problems are globally optimal; see §4.2.5.

4.2.3 An optimality criterion for differentiable f0

Suppose that the objective f0 in a convex optimization problem is differentiable, so that for all x, y ∈ dom f0,

    f0(y) ≥ f0(x) + ∇f0(x)T (y − x)    (4.20)

(see §3.1.3). Let X denote the feasible set, i.e.,

    X = {x | fi(x) ≤ 0, i = 1,...,m, hi(x) = 0, i = 1,...,p}.

Then x is optimal if and only if x ∈ X and

    ∇f0(x)T (y − x) ≥ 0 for all y ∈ X.    (4.21)

This optimality criterion can be understood geometrically: if ∇f0(x) ≠ 0, it means that −∇f0(x) defines a supporting hyperplane to the feasible set at x (see figure 4.2).

Proof of optimality condition

First suppose x ∈ X and satisfies (4.21). Then if y ∈ X we have, by (4.20), f0(y) ≥ f0(x). This shows x is an optimal point for (4.1).

Conversely, suppose x is optimal, but the condition (4.21) does not hold, i.e., for some y ∈ X we have

    ∇f0(x)T (y − x) < 0.

Consider the point z(t) = ty + (1 − t)x, where t ∈ [0, 1] is a parameter. Since z(t) is on the line segment between x and y, and the feasible set is convex, z(t) is feasible. We claim that for small positive t we have f0(z(t)) < f0(x), which will prove that x is not optimal. To show this, note that

    (d/dt) f0(z(t)) evaluated at t = 0 equals ∇f0(x)T (y − x) < 0,

so for small positive t, we have f0(z(t)) < f0(x).

We will pursue the topic of optimality conditions in much more depth in chapter 5, but here we examine a few simple examples.

Unconstrained problems

For an unconstrained problem (i.e., m = p = 0), the condition (4.21) reduces to the well known necessary and sufficient condition

    ∇f0(x) = 0    (4.22)

for x to be optimal. While we have already seen this optimality condition, it is useful to see how it follows from (4.21). Suppose x is optimal, which means here that x ∈ dom f0, and for all feasible y we have ∇f0(x)T (y − x) ≥ 0. Since f0 is differentiable, its domain is (by definition) open, so all y sufficiently close to x are feasible. Let us take y = x − t∇f0(x), where t ∈ R is a parameter. For t small and positive, y is feasible, and so

    ∇f0(x)T (y − x) = −t∥∇f0(x)∥2² ≥ 0,

from which we conclude ∇f0(x) = 0.

There are several possible situations, depending on the number of solutions of (4.22). If there are no solutions of (4.22), then there are no optimal points; the optimal value of the problem is not attained. Here we can distinguish between two cases: the problem is unbounded below, or the optimal value is finite, but not attained. On the other hand we can have multiple solutions of the equation (4.22), in which case each such solution is a minimizer of f0.

Example 4.5 Unconstrained quadratic optimization. Consider the problem of minimizing the quadratic function

    f0(x) = (1/2)xT Px + qT x + r,

where P ∈ Sn+ (which makes f0 convex). The necessary and sufficient condition for x to be a minimizer of f0 is

    ∇f0(x) = Px + q = 0.

Several cases can occur, depending on whether this (linear) equation has no solutions, one solution, or many solutions.

• If q ∉ R(P), then there is no solution. In this case f0 is unbounded below.

• If P ≻ 0 (which is the condition for f0 to be strictly convex), then there is a unique minimizer, x⋆ = −P−1q.

• If P is singular, but q ∈ R(P), then the set of optimal points is the (affine) set Xopt = −P†q + N(P), where P† denotes the pseudo-inverse of P (see §A.5.4).
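A small numpy illustration of these cases (a sketch; the 2 × 2 instances are hand-picked, not from the text):

    import numpy as np

    q = np.array([1.0, 1.0])

    P1 = np.array([[2.0, 0.0], [0.0, 1.0]])      # P > 0: unique minimizer
    x_star = -np.linalg.solve(P1, q)             # x* = -P^{-1} q

    P2 = np.diag([1.0, 0.0])                     # singular P
    # q = (1, 1) is not in R(P2): Px = -q has no solution, f0 unbounded below.
    # With q2 = (1, 0) in R(P2), the optimal set is -pinv(P2) q2 + N(P2):
    q2 = np.array([1.0, 0.0])
    x_opt = -np.linalg.pinv(P2) @ q2             # one optimal point: (-1, 0)
    # every (-1, t) is optimal, since adding v in N(P2) preserves P2 x + q2 = 0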
Example 4.6 Analytic centering. Consider the (unconstrained) problem of minimizing the (convex) function f0 : Rn → R, defined as

    f0(x) = − Σi=1,...,m log(bi − aTi x),    dom f0 = {x | Ax ≺ b},

where aT1, ..., aTm are the rows of A. The function f0 is differentiable, so the necessary and sufficient conditions for x to be optimal are

    Ax ≺ b,    ∇f0(x) = Σi=1,...,m (1/(bi − aTi x)) ai = 0.    (4.23)

(The condition Ax ≺ b is just x ∈ dom f0.) If Ax ≺ b is infeasible, then the domain of f0 is empty. Assuming Ax ≺ b is feasible, there are still several possible cases (see exercise 4.2):

• There are no solutions of (4.23), and hence no optimal points for the problem. This occurs if and only if f0 is unbounded below.

• There are many solutions of (4.23). In this case it can be shown that the solutions form an affine set.

• There is a unique solution of (4.23), i.e., a unique minimizer of f0. This occurs if and only if the open polyhedron {x | Ax ≺ b} is nonempty and bounded.
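Analytic centers are easy to compute with a generic smooth solver (not a method from the text). A minimal sketch, assuming scipy is available; the polyhedron (a box) and the starting point are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    b = np.array([1.0, 1.0, 1.0, 1.0])        # the open box -1 < x_i < 1

    def f0(x):
        s = b - A @ x
        return np.inf if np.any(s <= 0) else -np.sum(np.log(s))

    def grad(x):                              # the gradient appearing in (4.23)
        return A.T @ (1.0 / (b - A @ x))

    res = minimize(f0, np.array([0.3, -0.2]), jac=grad, method="BFGS")
    # For this symmetric box the unique analytic center is the origin.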
Problems with equality constraints only

Consider the case where there are equality constraints but no inequality constraints, i.e.,

    minimize    f0(x)
    subject to  Ax = b.

Here the feasible set is affine. We assume that it is nonempty; otherwise the problem is infeasible. The optimality condition for a feasible x is that

    ∇f0(x)T (y − x) ≥ 0

must hold for all y satisfying Ay = b. Since x is feasible, every feasible y has the form y = x + v for some v ∈ N(A). The optimality condition can therefore be expressed as:

    ∇f0(x)T v ≥ 0 for all v ∈ N(A).

If a linear function is nonnegative on a subspace, then it must be zero on the subspace, so it follows that ∇f0(x)T v = 0 for all v ∈ N(A). In other words,

    ∇f0(x) ⊥ N(A).

Using the fact that N(A)⊥ = R(AT), this optimality condition can be expressed as ∇f0(x) ∈ R(AT), i.e., there exists a ν ∈ Rp such that

    ∇f0(x) + AT ν = 0.

Together with the requirement Ax = b (i.e., that x is feasible), this is the classical Lagrange multiplier optimality condition, which we will study in greater detail in chapter 5.

Minimization over the nonnegative orthant

As another example we consider the problem

    minimize    f0(x)
    subject to  x ≽ 0,

where the only inequality constraints are nonnegativity constraints on the variables. The optimality condition (4.21) is then

    x ≽ 0,    ∇f0(x)T (y − x) ≥ 0 for all y ≽ 0.

The term ∇f0(x)T y, which is a linear function of y, is unbounded below on y ≽ 0, unless we have ∇f0(x) ≽ 0. The condition then reduces to −∇f0(x)T x ≥ 0. But x ≽ 0 and ∇f0(x) ≽ 0, so we must have ∇f0(x)T x = 0, i.e.,

    Σi=1,...,n (∇f0(x))i xi = 0.

Now each of the terms in this sum is the product of two nonnegative numbers, so we conclude that each term must be zero, i.e., (∇f0(x))i xi = 0 for i = 1,...,n. The optimality condition can therefore be expressed as

    x ≽ 0,    ∇f0(x) ≽ 0,    xi (∇f0(x))i = 0, i = 1,...,n.

The last condition is called complementarity, since it means that the sparsity patterns (i.e., the set of indices corresponding to nonzero components) of the vectors x and ∇f0(x) are complementary (i.e., have empty intersection). We will encounter complementarity conditions again in chapter 5.

4.2.4 Equivalent convex problems

It is useful to see which of the transformations described in §4.1.3 preserve convexity.

Eliminating equality constraints

For a convex problem the equality constraints must be linear, i.e., of the form Ax = b. In this case they can be eliminated by finding a particular solution x0 of Ax = b, and a matrix F whose range is the nullspace of A, which results in the problem

    minimize    f0(Fz + x0)
    subject to  fi(Fz + x0) ≤ 0, i = 1,...,m,

with variable z. Since the composition of a convex function with an affine function is convex, eliminating equality constraints preserves convexity of a problem. Moreover, the process of eliminating equality constraints (and reconstructing the solution of the original problem from the solution of the transformed problem) involves standard linear algebra operations.

At least in principle, this means we can restrict our attention to convex optimization problems which have no equality constraints. In many cases, however, it is better to retain the equality constraints, since eliminating them can make the problem harder to understand and analyze, or ruin the efficiency of an algorithm that solves it. This is true, for example, when the variable x has very large dimension, and eliminating the equality constraints would destroy sparsity or some other useful structure of the problem.

Introducing equality constraints

We can introduce new variables and equality constraints into a convex optimization problem, provided the equality constraints are linear, and the resulting problem will also be convex. For example, if an objective or constraint function has the form fi(Aix + bi), where Ai ∈ Rki×n, we can introduce a new variable yi ∈ Rki, replace fi(Aix + bi) with fi(yi), and add the linear equality constraint yi = Aix + bi.

Slack variables

By introducing slack variables we have the new constraints fi(x) + si = 0. Since equality constraint functions must be affine in a convex problem, we must have fi affine. In other words: introducing slack variables for linear inequalities preserves convexity of a problem.

Epigraph problem form

The epigraph form of the convex optimization problem (4.15) is

    minimize    t
    subject to  f0(x) − t ≤ 0
                fi(x) ≤ 0, i = 1,...,m
                aTi x = bi, i = 1,...,p.

The objective is linear (hence convex) and the new constraint function f0(x) − t is also convex in (x, t), so the epigraph form problem is convex as well.

It is sometimes said that a linear objective is universal for convex optimization, since any convex optimization problem is readily transformed to one with linear objective. The epigraph form of a convex problem has several practical uses. By assuming the objective of a convex optimization problem is linear, we can simplify theoretical analysis. It can also simplify algorithm development, since an algorithm that solves convex optimization problems with linear objective can, using the transformation above, solve any convex optimization problem (provided it can handle the constraint f0(x) − t ≤ 0).
Minimizing over some variables

Minimizing a convex function over some variables preserves convexity. Therefore, if f0 in (4.9) is jointly convex in x1 and x2, and fi, i = 1,...,m1, and f̃i, i = 1,...,m2, are convex, then the equivalent problem (4.10) is convex.

4.2.5 Quasiconvex optimization

Recall that a quasiconvex optimization problem has the standard form

    minimize    f0(x)
    subject to  fi(x) ≤ 0, i = 1,...,m    (4.24)
                Ax = b,

where the inequality constraint functions f1, ..., fm are convex, and the objective f0 is quasiconvex (instead of convex, as in a convex optimization problem). (Quasiconvex constraint functions can be replaced with equivalent convex constraint functions, i.e., constraint functions that are convex and have the same 0-sublevel set, as in §3.4.5.)

In this section we point out some basic differences between convex and quasiconvex optimization problems, and also show how solving a quasiconvex optimization problem can be reduced to solving a sequence of convex optimization problems.

Locally optimal solutions and optimality conditions

The most important difference between convex and quasiconvex optimization is that a quasiconvex optimization problem can have locally optimal solutions that are not (globally) optimal. This phenomenon can be seen even in the simple case of unconstrained minimization of a quasiconvex function on R, such as the one shown in figure 4.3.

Nevertheless, a variation of the optimality condition (4.21) given in §4.2.3 does hold for quasiconvex optimization problems with differentiable objective function. Let X denote the feasible set for the quasiconvex optimization problem (4.24). It follows from the first-order condition for quasiconvexity (3.20) that x is optimal if

    x ∈ X,    ∇f0(x)T (y − x) > 0 for all y ∈ X \ {x}.    (4.25)

There are two important differences between this criterion and the analogous one (4.21) for convex optimization:

• The condition (4.25) is only sufficient for optimality; simple examples show that it need not hold for an optimal point. In contrast, the condition (4.21) is necessary and sufficient for x to solve the convex problem.

• The condition (4.25) requires the gradient of f0 to be nonzero, whereas the condition (4.21) does not. Indeed, when ∇f0(x) = 0 in the convex case, the condition (4.21) is satisfied, and x is optimal.
Figure 4.3 A quasiconvex function f on R, with a locally optimal point x that is not globally optimal. This example shows that the simple optimality condition f′(x) = 0, valid for convex functions, does not hold for quasiconvex functions.
Quasiconvex optimization via convex feasibility problems
One general approach to quasiconvex optimization relies on the representation of the sublevel sets of a quasiconvex function via a family of convex inequalities, as described in §3.4.5. Let φt : Rn → R, t ∈ R, be a family of convex functions that satisfy
    f0(x) ≤ t ⇐⇒ φt(x) ≤ 0,
and also, for each x, φt(x) is a nonincreasing function of t, i.e., φs(x) ≤ φt(x) whenever s ≥ t.
Let p⋆ denote the optimal value of the quasiconvex optimization problem (4.24). If the feasibility problem
    find        x
    subject to  φt(x) ≤ 0
                fi(x) ≤ 0, i = 1,...,m    (4.26)
                Ax = b,
is feasible, then we have p⋆ ≤ t. Conversely, if the problem (4.26) is infeasible, then we can conclude p⋆ ≥ t. The problem (4.26) is a convex feasibility problem, since the inequality constraint functions are all convex, and the equality constraints are linear. Thus, we can check whether the optimal value p⋆ of a quasiconvex optimization problem is less than or more than a given value t by solving the convex feasibility problem (4.26). If the convex feasibility problem is feasible then we have p⋆ ≤ t, and any feasible point x is feasible for the quasiconvex problem and satisfies f0(x) ≤ t. If the convex feasibility problem is infeasible, then we know that p⋆ ≥ t.
This observation can be used as the basis of a simple algorithm for solving the quasiconvex optimization problem (4.24) using bisection, solving a convex feasi- bility problem at each step. We assume that the problem is feasible, and start with an interval [l,u] known to contain the optimal value p⋆. We then solve the convex feasibility problem at its midpoint t = (l + u)/2, to determine whether the
optimal value is in the lower or upper half of the interval, and update the interval accordingly. This produces a new interval, which also contains the optimal value, but has half the width of the initial interval. This is repeated until the width of the interval is small enough:
Algorithm 4.1 Bisection method for quasiconvex optimization.

    given l ≤ p⋆, u ≥ p⋆, tolerance ǫ > 0.
    repeat
        1. t := (l + u)/2.
        2. Solve the convex feasibility problem (4.26).
        3. if (4.26) is feasible, u := t; else l := t.
    until u − l ≤ ǫ.
The interval [l,u] is guaranteed to contain p⋆, i.e., we have l ≤ p⋆ ≤ u at each step. In each iteration the interval is divided in two, i.e., bisected, so the length of the interval after k iterations is 2−k(u − l), where u − l is the length of the initial interval. It follows that exactly ⌈log2((u − l)/ǫ)⌉ iterations are required before the algorithm terminates. Each step involves solving the convex feasibility problem (4.26).
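The sketch below implements algorithm 4.1, assuming cvxpy as the convex feasibility solver (an assumption; any convex solver would do). The quasiconvex objective is a linear-fractional function, whose t-sublevel sets are given by the linear inequality φt(x) = cT x + d − t(eT x + f) ≤ 0 when eT x + f > 0; all data are illustrative.

    import cvxpy as cp
    import numpy as np

    c, d = np.array([1.0, 2.0]), 1.0
    e, f = np.array([0.5, 0.0]), 1.0
    x = cp.Variable(2)
    base = [x >= 0, cp.sum(x) <= 1]        # stand-ins for fi(x) <= 0, Ax = b

    l, u, eps = 0.0, 10.0, 1e-4            # [l, u] must bracket p*
    while u - l > eps:
        t = (l + u) / 2
        prob = cp.Problem(cp.Minimize(0),
                          base + [c @ x + d - t * (e @ x + f) <= 0])
        prob.solve()
        if prob.status == cp.OPTIMAL:      # (4.26) feasible, so p* <= t
            u = t
        else:                              # infeasible, so p* >= t
            l = t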
4.3 Linear optimization problems
When the objective and constraint functions are all affine, the problem is called a linear program (LP). A general linear program has the form
    minimize    cT x + d
    subject to  Gx ≼ h    (4.27)
                Ax = b,
where G ∈ Rm×n and A ∈ Rp×n. Linear programs are, of course, convex optimization problems.
It is common to omit the constant d in the objective function, since it does not affect the optimal (or feasible) set. Since we can maximize an affine objective cT x+ d, by minimizing −cT x − d (which is still convex), we also refer to a maximization problem with affine objective and constraint functions as an LP.
The geometric interpretation of an LP is illustrated in figure 4.4. The feasible set of the LP (4.27) is a polyhedron P; the problem is to minimize the affine function cT x + d (or, equivalently, the linear function cT x) over P.
Figure 4.4 Geometric interpretation of an LP. The feasible set P, which is a polyhedron, is shaded. The objective cT x is linear, so its level curves are hyperplanes orthogonal to c (shown as dashed lines). The point x⋆ is optimal; it is the point in P as far as possible in the direction −c.

Standard and inequality form linear programs

Two special cases of the LP (4.27) are so widely encountered that they have been given separate names. In a standard form LP the only inequalities are componentwise nonnegativity constraints x ≽ 0:

    minimize    cT x
    subject to  Ax = b    (4.28)
                x ≽ 0.

If the LP has no equality constraints, it is called an inequality form LP, usually written as

    minimize    cT x
    subject to  Ax ≼ b.    (4.29)

Converting LPs to standard form
It is sometimes useful to transform a general LP (4.27) to one in standard form (4.28) (for example in order to use an algorithm for standard form LPs). The first step is to introduce slack variables si for the inequalities, which results in
    minimize    cT x + d
    subject to  Gx + s = h
                Ax = b
                s ≽ 0.
The second step is to express the variable x as the difference of two nonnegative variables x+ and x−, i.e., x = x+ − x−, x+, x− ≽ 0. This yields the problem
    minimize    cT x+ − cT x− + d
    subject to  Gx+ − Gx− + s = h
                Ax+ − Ax− = b
                x+ ≽ 0,  x− ≽ 0,  s ≽ 0,
which is an LP in standard form, with variables x+, x−, and s. (For equivalence of this problem and the original one (4.27), see exercise 4.10.)
These techniques for manipulating problems (along with many others we will see in the examples and exercises) can be used to formulate many problems as linear programs. With some abuse of terminology, it is common to refer to a problem that can be formulated as an LP as an LP, even if it does not have the form (4.27).
4.3.1 Examples
LPs arise in a vast number of fields and applications; here we give a few typical examples.
Diet problem
A healthy diet contains m different nutrients in quantities at least equal to b1 , . . . , bm. We can compose such a diet by choosing nonnegative quantities x1, . . . , xn of n different foods. One unit quantity of food j contains an amount aij of nutrient i, and has a cost of cj. We want to determine the cheapest diet that satisfies the nutritional requirements. This problem can be formulated as the LP
    minimize    cT x
    subject to  Ax ≽ b
                x ≽ 0.
Several variations on this problem can also be formulated as LPs. For example, we can insist on an exact amount of a nutrient in the diet (which gives a linear equality constraint), or we can impose an upper bound on the amount of a nutrient, in addition to the lower bound as above.
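A sketch of the diet problem with scipy.optimize.linprog (an assumed tool; the nutrition data is made up). linprog minimizes cT x subject to A_ub x ≤ b_ub, so the constraint Ax ≽ b is passed as −Ax ≤ −b:

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.5, 8.0])            # cost per unit of each food
    A = np.array([[72.0, 121.0, 65.0],       # nutrient i per unit of food j
                  [107.0, 500.0, 0.0]])
    b = np.array([2000.0, 5000.0])           # required nutrient amounts

    res = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None))
    print(res.x, res.fun)                    # cheapest feasible diet, its cost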
Chebyshev center of a polyhedron
We consider the problem of finding the largest Euclidean ball that lies in a polyhedron described by linear inequalities,
    P = {x ∈ Rn | aTi x ≤ bi, i = 1,...,m}.
(The center of the optimal ball is called the Chebyshev center of the polyhedron; it is the point deepest inside the polyhedron, i.e., farthest from the boundary; see §8.5.1.) We represent the ball as
B = {xc + u | ∥u∥2 ≤ r}.
The variables in the problem are the center xc ∈ Rn and the radius r; we wish to maximize r subject to the constraint B ⊆ P.
We start by considering the simpler constraint that B lies in one halfspace
aTi x ≤ bi, i.e., Since
∥u∥2 ≤r =⇒ aTi (xc +u)≤bi. (4.30) sup{aTi u | ∥u∥2 ≤ r} = r∥ai∥2
we can write (4.30) as
    aTi xc + r∥ai∥2 ≤ bi,    (4.31)
which is a linear inequality in xc and r. In other words, the constraint that the ball lies in the halfspace determined by the inequality aTi x ≤ bi can be written as a linear inequality.
Therefore B ⊆ P if and only if (4.31) holds for all i = 1,…,m. Hence the Chebyshev center can be determined by solving the LP
    maximize    r
    subject to  aTi xc + r∥ai∥2 ≤ bi, i = 1,...,m,
with variables r and xc. (For more on the Chebyshev center, see §8.5.1.)
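This LP is easy to set up directly. A sketch with scipy.optimize.linprog (an assumed tool; the polyhedron is an arbitrary triangle); the variables are (xc, r), and we minimize −r:

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])   # rows a_i^T
    b = np.array([1.0, 0.0, 0.0])            # triangle x >= 0, x1 + x2 <= 1

    norms = np.linalg.norm(A, axis=1)
    A_ub = np.hstack([A, norms[:, None]])    # a_i^T xc + ||a_i||_2 r <= b_i
    c = np.array([0.0, 0.0, -1.0])           # maximize r
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None), (None, None), (0, None)])
    xc, r = res.x[:2], res.x[2]              # Chebyshev center and radius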
Dynamic activity planning

We consider the problem of choosing, or planning, the activity levels of n activities, or sectors of an economy, over N time periods. We let xj(t) ≥ 0, t = 1,...,N, denote the activity level of sector j, in period t. The activities both consume and produce products or goods in proportion to their activity levels. The amount of good i produced per unit of activity j is given by aij. Similarly, the amount of good i consumed per unit of activity j is bij. The total amount of goods produced in period t is given by Ax(t) ∈ Rm, and the amount of goods consumed is Bx(t) ∈ Rm. (Although we refer to these products as ‘goods’, they can also include unwanted products such as pollutants.)
The goods consumed in a period cannot exceed those produced in the previous period: we must have Bx(t + 1) ≼ Ax(t) for t = 1,…,N. A vector g0 ∈ Rm of initial goods is given, which constrains the first period activity levels: Bx(1) ≼ g0. The (vectors of) excess goods not consumed by the activities are given by
s(0) = g0 − Bx(1)
s(t) = Ax(t)−Bx(t+1), t=1,…,N−1
s(N ) = Ax(N ).
The objective is to maximize a discounted total value of excess goods:
cTs(0)+γcTs(1)+···+γNcTs(N),
where c ∈ Rm gives the values of the goods, and γ > 0 is a discount factor. (The value ci is negative if the ith product is unwanted, e.g., a pollutant; |ci| is then the cost of disposal per unit.)
Putting it all together we arrive at the LP
    maximize    cT s(0) + γcT s(1) + ··· + γN cT s(N)
    subject to  x(t) ≽ 0, t = 1,...,N
                s(t) ≽ 0, t = 0,...,N
                s(0) = g0 − Bx(1)
                s(t) = Ax(t) − Bx(t+1), t = 1,...,N−1
                s(N) = Ax(N),
with variables x(1), . . . , x(N ), s(0), . . . , s(N ). This problem is a standard form LP; the variables s(t) are the slack variables associated with the constraints Bx(t+1) ≼ Ax(t).
Chebyshev inequalities
We consider a probability distribution for a discrete random variable x on a set {u1, . . . , un} ⊆ R with n elements. We describe the distribution of x by a vector
p ∈ Rn, where
pi = prob(x = ui),
so p satisfies p ≽ 0 and 1T p = 1. Conversely, if p satisfies p ≽ 0 and 1T p = 1, then it defines a probability distribution for x. We assume that ui are known and fixed, but the distribution p is not known.
If f is any function of x, then

    E f = Σi=1,...,n pi f(ui)

is a linear function of p. If S is any subset of R, then

    prob(x ∈ S) = Σui∈S pi

is a linear function of p.
Although we do not know p, we are given prior knowledge of the following form:
We know upper and lower bounds on expected values of some functions of x, and probabilities of some subsets of R. This prior knowledge can be expressed as linear inequality constraints on p,
    αi ≤ aTi p ≤ βi, i = 1,...,m.
The problem is to give lower and upper bounds on E f0(x) = aT0 p, where f0 is some function of x.
To find a lower bound we solve the LP
    minimize    aT0 p
    subject to  p ≽ 0,  1T p = 1
                αi ≤ aTi p ≤ βi, i = 1,...,m,
with variable p. The optimal value of this LP gives the lowest possible value of Ef0(X) for any distribution that is consistent with the prior information. More- over, the bound is sharp: the optimal solution gives a distribution that is consistent with the prior information and achieves the lower bound. In a similar way, we can find the best upper bound by maximizing aT0 p subject to the same constraints. (We will consider Chebyshev inequalities in more detail in §7.4.1.)
Piecewise-linear minimization
Consider the (unconstrained) problem of minimizing the piecewise-linear, convex function
    f(x) = maxi=1,...,m (aTi x + bi).
This problem can be transformed to an equivalent LP by first forming the epigraph problem,
minimize t
subject to maxi=1,…,m(aTi x + bi) ≤ t,
and then expressing the inequality as a set of m separate inequalities:

    minimize    t
    subject to  aTi x + bi ≤ t, i = 1,...,m.

This is an LP (in inequality form), with variables x and t.
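A sketch of this LP in scipy.optimize.linprog (an assumed tool; the data is random and x is boxed to keep the instance bounded). The variables are (x, t), and each constraint aTi x + bi ≤ t becomes aTi x − t ≤ −bi:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    Ad = rng.standard_normal((10, 3))        # rows a_i^T
    bd = rng.standard_normal(10)

    A_ub = np.hstack([Ad, -np.ones((10, 1))])
    c = np.array([0.0, 0.0, 0.0, 1.0])       # minimize t
    res = linprog(c, A_ub=A_ub, b_ub=-bd,
                  bounds=[(-1, 1)] * 3 + [(None, None)])
    x_opt, t_opt = res.x[:3], res.x[3]       # t_opt = max_i (a_i^T x_opt + b_i)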
4.3.2 Linear-fractional programming
The problem of minimizing a ratio of affine functions over a polyhedron is called a linear-fractional program:
    minimize    f0(x)
    subject to  Gx ≼ h    (4.32)
                Ax = b

where the objective function is given by

    f0(x) = (cT x + d)/(eT x + f),    dom f0 = {x | eT x + f > 0}.

The objective function is quasiconvex (in fact, quasilinear) so linear-fractional programs are quasiconvex optimization problems.

Transforming to a linear program

If the feasible set

    {x | Gx ≼ h, Ax = b, eT x + f > 0}

is nonempty, the linear-fractional program (4.32) can be transformed to an equivalent linear program

    minimize    cT y + dz
    subject to  Gy − hz ≼ 0
                Ay − bz = 0    (4.33)
                eT y + fz = 1
                z ≥ 0

with variables y, z.

To show the equivalence, we first note that if x is feasible in (4.32) then the pair

    y = x/(eT x + f),    z = 1/(eT x + f)

is feasible in (4.33), with the same objective value cT y + dz = f0(x). It follows that the optimal value of (4.32) is greater than or equal to the optimal value of (4.33). Conversely, if (y, z) is feasible in (4.33), with z ≠ 0, then x = y/z is feasible in (4.32), with the same objective value f0(x) = cT y + dz. If (y, z) is feasible in (4.33) with z = 0, and x0 is feasible for (4.32), then x = x0 + ty is feasible in (4.32) for all t ≥ 0. Moreover, limt→∞ f0(x0 + ty) = cT y + dz, so we can find feasible points in (4.32) with objective values arbitrarily close to the objective value of (y, z). We conclude that the optimal value of (4.32) is less than or equal to the optimal value of (4.33).
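The transformation is mechanical; a sketch with scipy.optimize.linprog (an assumed tool) on illustrative data, where the feasible set x ≽ 0 is encoded as Gx ≼ h with G = −I, h = 0:

    import numpy as np
    from scipy.optimize import linprog

    n = 2
    c, d = np.array([1.0, 2.0]), 1.0
    e, f = np.array([0.5, 0.0]), 1.0
    G, h = -np.eye(n), np.zeros(n)

    # Variables (y, z): minimize c^T y + d z subject to (4.33).
    obj = np.concatenate([c, [d]])
    A_ub = np.hstack([G, -h[:, None]])        # Gy - hz <= 0
    A_eq = np.concatenate([e, [f]])[None, :]  # e^T y + f z = 1
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * n + [(0, None)])
    y, z = res.x[:n], res.x[n]
    x = y / z                                 # solution of (4.32) when z > 0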
Generalized linear-fractional programming
A generalization of the linear-fractional program (4.32) is the generalized linear-fractional program in which
    f0(x) = maxi=1,...,r (cTi x + di)/(eTi x + fi),    dom f0 = {x | eTi x + fi > 0, i = 1,...,r}.
The objective function is the pointwise maximum of r quasiconvex functions, and therefore quasiconvex, so this problem is quasiconvex. When r = 1 it reduces to the standard linear-fractional program.
Example 4.7 Von Neumann growth problem. We consider an economy with n sectors, and activity levels xi > 0 in the current period, and activity levels x+i > 0 in the next period. (In this problem we only consider one period.) There are m goods which are consumed, and also produced, by the activity: An activity level x consumes goods Bx ∈ Rm, and produces goods Ax. The goods consumed in the next period cannot exceed the goods produced in the current period, i.e., Bx+ ≼ Ax. The growth rate in sector i, over the period, is given by x+i /xi.
Von Neumann’s growth problem is to find an activity level vector x that maximizes the minimum growth rate across all sectors of the economy. This problem can be expressed as a generalized linear-fractional problem
    maximize    mini=1,...,n x+i/xi
    subject to  x+ ≽ 0
                Bx+ ≼ Ax
with domain {(x, x+ ) | x ≻ 0}. Note that this problem is homogeneous in x and x+ ,
so we can replace the implicit constraint x ≻ 0 by the explicit constraint x ≽ 1.

4.4 Quadratic optimization problems
The convex optimization problem (4.15) is called a quadratic program (QP) if the objective function is (convex) quadratic, and the constraint functions are affine. A quadratic program can be expressed in the form
    minimize    (1/2)xT Px + qT x + r
    subject to  Gx ≼ h    (4.34)
                Ax = b,
where P ∈ Sn+, G ∈ Rm×n, and A ∈ Rp×n. In a quadratic program, we minimize a convex quadratic function over a polyhedron, as illustrated in figure 4.5.
If the objective in (4.15) as well as the inequality constraint functions are (convex) quadratic, as in
    minimize    (1/2)xT P0x + qT0 x + r0
    subject to  (1/2)xT Pix + qTi x + ri ≤ 0, i = 1,...,m    (4.35)
                Ax = b,
where Pi ∈ Sn+, i = 0, 1, ..., m, the problem is called a quadratically constrained quadratic program (QCQP). In a QCQP, we minimize a convex quadratic function over a feasible region that is the intersection of ellipsoids (when Pi ≻ 0).

Figure 4.5 Geometric illustration of QP. The feasible set P, which is a polyhedron, is shown shaded. The contour lines of the objective function, which is convex quadratic, are shown as dashed curves. The point x⋆ is optimal.
Quadratic programs include linear programs as a special case, by taking P = 0 in (4.34). Quadratically constrained quadratic programs include quadratic programs (and therefore also linear programs) as a special case, by taking Pi = 0 in (4.35), for i = 1,...,m.
4.4.1 Examples
Least-squares and regression
The problem of minimizing the convex quadratic function

    ∥Ax − b∥2² = xT ATAx − 2bT Ax + bT b
is an (unconstrained) QP. It arises in many fields and has many names, e.g., regression analysis or least-squares approximation. This problem is simple enough to have the well known analytical solution x = A†b, where A† is the pseudo-inverse of A (see §A.5.4).
When linear inequality constraints are added, the problem is called constrained regression or constrained least-squares, and there is no longer a simple analytical solution. As an example we can consider regression with lower and upper bounds on the variables, i.e.,
    minimize    ∥Ax − b∥2²
    subject to  li ≤ xi ≤ ui, i = 1,...,n,
which is a QP. (We will study least-squares and regression problems in far more depth in chapters 6 and 7.)
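A sketch of constrained regression with scipy.optimize.lsq_linear (an assumed tool; A, b, and the bounds li = 0, ui = 1 are illustrative data):

    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(3)
    A = rng.standard_normal((30, 4))
    b = rng.standard_normal(30)

    res = lsq_linear(A, b, bounds=(0.0, 1.0))   # li = 0, ui = 1 for each x_i
    x_box = res.x                               # constrained least-squares solution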
Distance between polyhedra
The (Euclidean) distance between the polyhedra P1 = {x | A1x ≼ b1} and P2 = {x | A2x ≼ b2} in Rn is defined as

    dist(P1, P2) = inf{∥x1 − x2∥2 | x1 ∈ P1, x2 ∈ P2}.

If the polyhedra intersect, the distance is zero.
To find the distance between P1 and P2, we can solve the QP

    minimize    ∥x1 − x2∥2²
    subject to  A1x1 ≼ b1,  A2x2 ≼ b2,
with variables x1, x2 ∈ Rn. This problem is infeasible if and only if one of the polyhedra is empty. The optimal value is zero if and only if the polyhedra intersect, in which case the optimal x1 and x2 are equal (and is a point in the intersection P1∩P2). Otherwise the optimal x1 and x2 are the points in P1 and P2, respectively, that are closest to each other. (We will study geometric problems involving distance in more detail in chapter 8.)
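A sketch of this QP in cvxpy (an assumed tool; the two boxes below are illustrative):

    import cvxpy as cp
    import numpy as np

    A1 = np.vstack([np.eye(2), -np.eye(2)]); b1 = np.array([1, 1, 0, 0])    # [0,1]^2
    A2 = np.vstack([np.eye(2), -np.eye(2)]); b2 = np.array([4, 4, -2, -2])  # [2,4]^2

    x1, x2 = cp.Variable(2), cp.Variable(2)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x1 - x2)),
                      [A1 @ x1 <= b1, A2 @ x2 <= b2])
    prob.solve()
    dist = np.sqrt(prob.value)   # here sqrt(2): closest points (1,1) and (2,2)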
Bounding variance
We consider again the Chebyshev inequalities example above, where the variable is an unknown probability distribution given by p ∈ Rn, about which we have some prior information. The variance of a random variable f(x) is given by
    E f² − (E f)² = Σi=1,...,n fi² pi − (Σi=1,...,n fi pi)²,
(where fi = f(ui)), which is a concave quadratic function of p.
It follows that we can maximize the variance of f(x), subject to the given prior
information, by solving the QP
    maximize    Σi=1,...,n fi² pi − (Σi=1,...,n fi pi)²
    subject to  p ≽ 0,  1T p = 1
                αi ≤ aTi p ≤ βi, i = 1,...,m.
The optimal value gives the maximum possible variance of f(x), over all distribu- tions that are consistent with the prior information; the optimal p gives a distri- bution that achieves this maximum variance.
Linear program with random cost
We consider an LP,
    minimize    cT x
    subject to  Gx ≼ h
                Ax = b,
with variable x ∈ Rn. We suppose that the cost function (vector) c ∈ Rn is random, with mean value c̄ and covariance E(c − c̄)(c − c̄)T = Σ. (We assume for simplicity that the other problem parameters are deterministic.) For a given x ∈ Rn, the cost cT x is a (scalar) random variable with mean E cT x = c̄T x and variance

    var(cT x) = E(cT x − E cT x)² = xT Σx.
In general there is a trade-off between small expected cost and small cost vari- ance. One way to take variance into account is to minimize a linear combination of the expected value and the variance of the cost, i.e.,
    E cT x + γ var(cT x) = c̄T x + γxT Σx,
which is called the risk-sensitive cost. The parameter γ ≥ 0 is called the risk-aversion parameter, since it sets the relative values of cost variance and expected value. (For γ > 0, we are willing to trade off an increase in expected cost for a sufficiently large decrease in cost variance).
To minimize the risk-sensitive cost we solve the QP
    minimize    c̄T x + γxT Σx
    subject to  Gx ≼ h
                Ax = b.
Markowitz portfolio optimization
We consider a classical portfolio problem with n assets or stocks held over a period of time. We let xi denote the amount of asset i held throughout the period, with xi in dollars, at the price at the beginning of the period. A normal long position in asset i corresponds to xi > 0; a short position in asset i (i.e., the obligation to buy the asset at the end of the period) corresponds to xi < 0. We let pi denote the relative price change of asset i over the period, i.e., its change in price over the period divided by its price at the beginning of the period. The overall return on the portfolio is r = pT x (given in dollars). The optimization variable is the portfolio vector x ∈ Rn.

A wide variety of constraints on the portfolio can be considered. The simplest set of constraints is that xi ≥ 0 (i.e., no short positions) and 1T x = B (i.e., the total budget to be invested is B, which is often taken to be one).

We take a stochastic model for price changes: p ∈ Rn is a random vector, with known mean p̄ and covariance Σ. Therefore with portfolio x ∈ Rn, the return r is a (scalar) random variable with mean p̄T x and variance xT Σx. The choice of portfolio x involves a trade-off between the mean of the return, and its variance.

The classical portfolio optimization problem, introduced by Markowitz, is the QP

    minimize    xT Σx
    subject to  p̄T x ≥ rmin
                1T x = 1,  x ≽ 0,

where x, the portfolio, is the variable. Here we find the portfolio that minimizes the return variance (which is associated with the risk of the portfolio) subject to achieving a minimum acceptable mean return rmin, and satisfying the portfolio budget and no-shorting constraints.

Many extensions are possible. One standard extension, for example, is to allow short positions, i.e., xi < 0. To do this we introduce variables xlong and xshort, with

    xlong ≽ 0,    xshort ≽ 0,    x = xlong − xshort,    1T xshort ≤ η 1T xlong.

The last constraint limits the total short position at the beginning of the period to some fraction η of the total long position at the beginning of the period.

As another extension we can include linear transaction costs in the portfolio optimization problem. Starting from a given initial portfolio xinit we buy and sell assets to achieve the portfolio x, which we then hold over the period as described above. We are charged a transaction fee for buying and selling assets, which is proportional to the amount bought or sold. To handle this, we introduce variables ubuy and usell, which determine the amount of each asset we buy and sell before the holding period. We have the constraints

    x = xinit + ubuy − usell,    ubuy ≽ 0,    usell ≽ 0.

We replace the simple budget constraint 1T x = 1 with the condition that the initial buying and selling, including transaction fees, involves zero net cash:

    (1 − fsell) 1T usell = (1 + fbuy) 1T ubuy.

Here the lefthand side is the total proceeds from selling assets, less the selling transaction fee, and the righthand side is the total cost, including transaction fee, of buying assets. The constants fbuy ≥ 0 and fsell ≥ 0 are the transaction fee rates for buying and selling (assumed the same across assets, for simplicity). The problem of minimizing return variance, subject to a minimum mean return, and the budget and trading constraints, is a QP with variables x, ubuy, usell.
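A sketch of the basic Markowitz QP in cvxpy (an assumed tool; p̄ and Σ are made-up data for three assets):

    import cvxpy as cp
    import numpy as np

    pbar = np.array([0.10, 0.07, 0.03])        # mean relative price changes
    Sigma = np.array([[0.040, 0.006, 0.000],
                      [0.006, 0.010, 0.000],
                      [0.000, 0.000, 0.001]])  # covariance of p
    r_min = 0.05

    x = cp.Variable(3)
    prob = cp.Problem(cp.Minimize(cp.quad_form(x, Sigma)),
                      [pbar @ x >= r_min, cp.sum(x) == 1, x >= 0])
    prob.solve()
    print(x.value, prob.value)                 # minimum-variance portfolio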
4.4.2 Second-order cone programming

A problem that is closely related to quadratic programming is the second-order cone program (SOCP):

    minimize    fT x
    subject to  ∥Aix + bi∥2 ≤ cTi x + di, i = 1,...,m    (4.36)
                Fx = g,

where x ∈ Rn is the optimization variable, Ai ∈ Rni×n, and F ∈ Rp×n. We call a constraint of the form

    ∥Ax + b∥2 ≤ cT x + d,

where A ∈ Rk×n, a second-order cone constraint, since it is the same as requiring the affine function (Ax + b, cT x + d) to lie in the second-order cone in Rk+1.

When ci = 0, i = 1,...,m, the SOCP (4.36) is equivalent to a QCQP (which is obtained by squaring each of the constraints). Similarly, if Ai = 0, i = 1,...,m, then the SOCP (4.36) reduces to a (general) LP. Second-order cone programs are, however, more general than QCQPs (and of course, LPs).

Robust linear programming

We consider a linear program in inequality form,

    minimize    cT x
    subject to  aTi x ≤ bi, i = 1,...,m,

in which there is some uncertainty or variation in the parameters c, ai, bi. To simplify the exposition we assume that c and bi are fixed, and that ai are known to lie in given ellipsoids:

    ai ∈ Ei = {āi + Piu | ∥u∥2 ≤ 1},

where Pi ∈ Rn×n. (If Pi is singular we obtain ‘flat’ ellipsoids, of dimension rank Pi; Pi = 0 means that ai is known perfectly.)

We will require that the constraints be satisfied for all possible values of the parameters ai, which leads us to the robust linear program

    minimize    cT x
    subject to  aTi x ≤ bi for all ai ∈ Ei, i = 1,...,m.    (4.37)

The robust linear constraint, aTi x ≤ bi for all ai ∈ Ei, can be expressed as

    sup{aTi x | ai ∈ Ei} ≤ bi,

the lefthand side of which can be expressed as

    sup{aTi x | ai ∈ Ei} = āTi x + sup{uT PTi x | ∥u∥2 ≤ 1} = āTi x + ∥PTi x∥2.

Thus, the robust linear constraint can be expressed as

    āTi x + ∥PTi x∥2 ≤ bi,

which is evidently a second-order cone constraint. Hence the robust LP (4.37) can be expressed as the SOCP

    minimize    cT x
    subject to  āTi x + ∥PTi x∥2 ≤ bi, i = 1,...,m.

Note that the additional norm terms act as regularization terms; they prevent x from being large in directions with considerable uncertainty in the parameters ai.

Linear programming with random constraints

The robust LP described above can also be considered in a statistical framework. Here we suppose that the parameters ai are independent Gaussian random vectors, with mean āi and covariance Σi. We require that each constraint aTi x ≤ bi should hold with a probability (or confidence) exceeding η, where η ≥ 0.5, i.e.,

    prob(aTi x ≤ bi) ≥ η.    (4.38)

We will show that this probability constraint can be expressed as a second-order cone constraint.

Letting u = aTi x, with σ² denoting its variance, this constraint can be written as

    prob((u − ū)/σ ≤ (bi − ū)/σ) ≥ η.

Since (u − ū)/σ is a zero mean unit variance Gaussian variable, the probability above is simply Φ((bi − ū)/σ), where

    Φ(z) = (1/√(2π)) ∫−∞z e−t²/2 dt

is the cumulative distribution function of a zero mean unit variance Gaussian random variable. Thus the probability constraint (4.38) can be expressed as

    (bi − ū)/σ ≥ Φ−1(η),

or, equivalently,

    ū + Φ−1(η)σ ≤ bi.

From ū = āTi x and σ = (xT Σix)1/2 we obtain

    āTi x + Φ−1(η)∥Σi1/2 x∥2 ≤ bi.

By our assumption that η ≥ 1/2, we have Φ−1(η) ≥ 0, so this constraint is a second-order cone constraint. In summary, the problem

    minimize    cT x
    subject to  prob(aTi x ≤ bi) ≥ η, i = 1,...,m

can be expressed as the SOCP

    minimize    cT x
    subject to  āTi x + Φ−1(η)∥Σi1/2 x∥2 ≤ bi, i = 1,...,m.

(We will consider robust convex optimization problems in more depth in chapter 6. See also exercises 4.13, 4.28, and 4.59.)
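A sketch of the robust LP as an SOCP in cvxpy (an assumed tool; the nominal vectors āi, the ellipsoid shapes Pi, and c are illustrative). cp.norm supplies the ∥PTi x∥2 terms, and a box keeps the toy instance bounded:

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 5, 3
    abar = rng.standard_normal((m, n))         # nominal constraint vectors
    P = [0.1 * np.eye(n) for _ in range(m)]    # ellipsoid shapes E_i
    b = np.ones(m)
    c = np.array([-1.0, -0.5, 0.0])

    x = cp.Variable(n)
    cons = [abar[i] @ x + cp.norm(P[i].T @ x, 2) <= b[i] for i in range(m)]
    cons += [cp.norm(x, "inf") <= 10]          # keep the toy instance bounded
    prob = cp.Problem(cp.Minimize(c @ x), cons)
    prob.solve()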
Example 4.8 Portfolio optimization with loss risk constraints. We consider again the classical Markowitz portfolio problem described above. We assume here that the price change vector p ∈ Rn is a Gaussian random variable, with mean p̄ and covariance Σ. Therefore the return r is a Gaussian random variable with mean r̄ = p̄T x and variance σr² = xT Σx.

Consider a loss risk constraint of the form

    prob(r ≤ α) ≤ β,    (4.39)

where α is a given unwanted return level (e.g., a large loss) and β is a given maximum probability.

As in the stochastic interpretation of the robust LP given above, we can express this constraint using the cumulative distribution function Φ of a unit Gaussian random variable. The inequality (4.39) is equivalent to

    p̄T x + Φ−1(β)∥Σ1/2 x∥2 ≥ α.

Provided β ≤ 1/2 (i.e., Φ−1(β) ≤ 0), this loss risk constraint is a second-order cone constraint. (If β > 1/2, the loss risk constraint becomes nonconvex in x.)
The problem of maximizing the expected return subject to a bound on the loss risk (with β ≤ 1/2), can therefore be cast as an SOCP with one second-order cone
constraint:
    maximize    p̄T x
    subject to  p̄T x + Φ−1(β)∥Σ1/2 x∥2 ≥ α
                x ≽ 0,  1T x = 1.
There are many extensions of this problem. For example, we can impose several loss risk constraints, i.e.,

    prob(r ≤ αi) ≤ βi, i = 1,...,k,

(where βi ≤ 1/2), which expresses the risks (βi) we are willing to accept for various levels of loss (αi).

Minimal surface
Consider a differentiable function f : R2 → R with dom f = C . The surface area of its graph is given by
    A = ∫C √(1 + ∥∇f(x)∥2²) dx = ∫C ∥(∇f(x), 1)∥2 dx,
which is a convex functional of f. The minimal surface problem is to find the function f that minimizes A subject to some constraints, for example, some given values of f on the boundary of C.
We will approximate this problem by discretizing the function f. Let C = [0, 1] × [0, 1], and let fij denote the value of f at the point (i/K, j/K), for i, j = 0, . . . , K. An approximate expression for the gradient of f at the point x = (i/K,j/K) can be found using forward differences:
∇f(x) ≈ K (f_{i+1,j} − f_{i,j}, f_{i,j+1} − f_{i,j}).
Substituting this into the expression for the area of the graph, and approximating the integral as a sum, we obtain an approximation for the area of the graph:
K−1􏳶􏳶K(f −f )􏳶􏳶 1􏰊􏳶􏳶 i+1,j i,j􏳶􏳶
A≈Adisc=K2 􏳶􏳶 K(fi,j+1−fi,j) 􏳶􏳶 i,j=0 1 2
The discretized area approximation Adisc is a convex function of fij.
We can consider a wide variety of constraints on f_ij, such as equality or inequality constraints on any of its entries (for example, on the boundary values), or
on its moments. As an example, we consider the problem of finding the minimal area surface with fixed boundary values on the left and right edges of the square:
minimize A_disc
subject to f_{0j} = l_j, j = 0,…,K    (4.40)
f_{Kj} = r_j, j = 0,…,K,
where fij, i,j = 0,…,K, are the variables, and lj, rj are the given boundary values on the left and right sides of the square.
We can transform the problem (4.40) into an SOCP by introducing new variables t_ij, i, j = 0,…,K−1:

minimize (1/K²) Σ_{i,j=0}^{K−1} t_ij
subject to ∥(K(f_{i+1,j} − f_{i,j}), K(f_{i,j+1} − f_{i,j}), 1)∥_2 ≤ t_ij, i, j = 0,…,K−1
f_{0j} = l_j, j = 0,…,K
f_{Kj} = r_j, j = 0,…,K.
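This SOCP can be passed almost literally to a modeling layer; the following sketch (with made-up boundary values l_j, r_j, and assuming CVXPY's hstack accepts the scalar constant 1 alongside scalar expressions) illustrates one possible rendering:

import numpy as np
import cvxpy as cp

# Sketch: discretized minimal surface problem as an SOCP.
K = 10
l = np.linspace(0.0, 1.0, K + 1)   # left boundary values (placeholder)
r = np.linspace(1.0, 0.0, K + 1)   # right boundary values (placeholder)

f = cp.Variable((K + 1, K + 1))
t = cp.Variable((K, K))
constraints = [f[0, :] == l, f[K, :] == r]
for i in range(K):
    for j in range(K):
        # ||(K(f_{i+1,j}-f_{i,j}), K(f_{i,j+1}-f_{i,j}), 1)||_2 <= t_{ij}
        g = cp.hstack([K * (f[i + 1, j] - f[i, j]),
                       K * (f[i, j + 1] - f[i, j]), 1])
        constraints.append(cp.norm(g, 2) <= t[i, j])
prob = cp.Problem(cp.Minimize(cp.sum(t) / K ** 2), constraints)
prob.solve()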
4.5 Geometric programming

In this section we describe a family of optimization problems that are not convex in their natural form. These problems can, however, be transformed to convex optimization problems, by a change of variables and a transformation of the objective and constraint functions.

4.5.1 Monomials and posynomials

A function f : R^n → R with dom f = R^n_++, defined as

f(x) = c x_1^{a_1} x_2^{a_2} ··· x_n^{a_n},    (4.41)

where c > 0 and a_i ∈ R, is called a monomial function, or simply, a monomial. The exponents a_i of a monomial can be any real numbers, including fractional or negative, but the coefficient c can only be positive. (The term ‘monomial’ conflicts with the standard definition from algebra, in which the exponents must be nonnegative integers, but this should not cause any confusion.) A sum of monomials, i.e., a function of the form

f(x) = Σ_{k=1}^{K} c_k x_1^{a_{1k}} x_2^{a_{2k}} ··· x_n^{a_{nk}},    (4.42)

where c_k > 0, is called a posynomial function (with K terms), or simply, a posynomial.
Posynomials are closed under addition, multiplication, and nonnegative scaling. Monomials are closed under multiplication and division. If a posynomial is multiplied by a monomial, the result is a posynomial; similarly, a posynomial can be divided by a monomial, with the result a posynomial.
4.5.2 Geometric programming
An optimization problem of the form

minimize f_0(x)
subject to f_i(x) ≤ 1, i = 1,…,m    (4.43)
h_i(x) = 1, i = 1,…,p,

where f_0,…,f_m are posynomials and h_1,…,h_p are monomials, is called a geometric program (GP). The domain of this problem is D = R^n_++; the constraint x ≻ 0 is implicit.
Extensions of geometric programming
Several extensions are readily handled. If f is a posynomial and h is a monomial, then the constraint f(x) ≤ h(x) can be handled by expressing it as f(x)/h(x) ≤ 1 (since f/h is posynomial). This includes as a special case a constraint of the form f(x) ≤ a, where f is posynomial and a > 0. In a similar way if h1 and h2 are both nonzero monomial functions, then we can handle the equality constraint h1(x) = h2(x) by expressing it as h1(x)/h2(x) = 1 (since h1/h2 is monomial). We can maximize a nonzero monomial objective function, by minimizing its inverse (which is also a monomial).
For example, consider the problem
maximize x/y
subject to 2 ≤ x ≤ 3
x² + 3y/z ≤ √y
x/y = z²,

with variables x, y, z ∈ R (and the implicit constraint x, y, z > 0). Using the simple transformations described above, we obtain the equivalent standard form GP

minimize x^{−1} y
subject to 2x^{−1} ≤ 1, (1/3)x ≤ 1
x² y^{−1/2} + 3y^{1/2} z^{−1} ≤ 1
x y^{−1} z^{−2} = 1.
We will refer to a problem like this one, that is easily transformed to an equivalent GP in the standard form (4.43), also as a GP. (In the same way that we refer to a problem easily transformed to an LP as an LP.)
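For instance, this small GP can be handled directly by CVXPY's geometric programming mode, which accepts posynomial problems in their natural form; the sketch below assumes a recent CVXPY version with DGP support:

import cvxpy as cp

# Sketch: the GP above in CVXPY's geometric programming mode.
x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

prob = cp.Problem(
    cp.Minimize(y / x),                            # x^{-1} y
    [2 / x <= 1,                                   # 2 <= x
     x / 3 <= 1,                                   # x <= 3
     x ** 2 * y ** -0.5 + 3 * y ** 0.5 / z <= 1,   # x^2 y^{-1/2} + 3 y^{1/2} z^{-1} <= 1
     x == y * z ** 2])                             # x y^{-1} z^{-2} = 1
prob.solve(gp=True)
print(x.value, y.value, z.value)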
4.5.3 Geometric program in convex form
Geometric programs are not (in general) convex optimization problems, but they can be transformed to convex problems by a change of variables and a transforma- tion of the objective and constraint functions.
We will use the variables defined as y_i = log x_i, so x_i = e^{y_i}. If f is the monomial function of x given in (4.41), i.e.,

f(x) = c x_1^{a_1} x_2^{a_2} ··· x_n^{a_n},

then

f(x) = f(e^{y_1},…,e^{y_n}) = c (e^{y_1})^{a_1} ··· (e^{y_n})^{a_n} = e^{a^T y + b},

where b = log c. The change of variables y_i = log x_i turns a monomial function into the exponential of an affine function.

Similarly, if f is the posynomial given by (4.42), i.e.,

f(x) = Σ_{k=1}^{K} c_k x_1^{a_{1k}} x_2^{a_{2k}} ··· x_n^{a_{nk}},

then

f(x) = Σ_{k=1}^{K} e^{a_k^T y + b_k},

where a_k = (a_{1k},…,a_{nk}) and b_k = log c_k. After the change of variables, a posynomial becomes a sum of exponentials of affine functions.

The geometric program (4.43) can be expressed in terms of the new variable y as

minimize Σ_{k=1}^{K_0} e^{a_{0k}^T y + b_{0k}}
subject to Σ_{k=1}^{K_i} e^{a_{ik}^T y + b_{ik}} ≤ 1, i = 1,…,m
e^{g_i^T y + h_i} = 1, i = 1,…,p,

where a_{ik} ∈ R^n, i = 0,…,m, contain the exponents of the posynomial inequality constraints, and g_i ∈ R^n, i = 1,…,p, contain the exponents of the monomial equality constraints of the original geometric program.

Now we transform the objective and constraint functions, by taking the logarithm. This results in the problem

minimize f̃_0(y) = log(Σ_{k=1}^{K_0} e^{a_{0k}^T y + b_{0k}})
subject to f̃_i(y) = log(Σ_{k=1}^{K_i} e^{a_{ik}^T y + b_{ik}}) ≤ 0, i = 1,…,m    (4.44)
h̃_i(y) = g_i^T y + h_i = 0, i = 1,…,p.

Since the functions f̃_i are convex, and h̃_i are affine, this problem is a convex
optimization problem. We refer to it as a geometric program in convex form. To
distinguish it from the original geometric program, we refer to (4.43) as a geometric program in posynomial form.
Note that the transformation between the posynomial form geometric program (4.43) and the convex form geometric program (4.44) does not involve any computation; the problem data for the two problems are the same. It simply changes the form of the objective and constraint functions.
If the posynomial objective and constraint functions all have only one term, i.e., are monomials, then the convex form geometric program (4.44) reduces to a (general) linear program. We can therefore consider geometric programming to be a generalization, or extension, of linear programming.
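Numerically, the convex-form objective and constraint functions are log-sum-exp expressions, which should be evaluated in a stable way; a small sketch with placeholder exponent data:

import numpy as np
from scipy.special import logsumexp

# Sketch: stable evaluation of a convex-form GP function
# f(y) = log sum_k exp(a_k^T y + b_k); A, b, y are placeholder data.
A = np.array([[1.0, -0.5],
              [2.0,  0.0],
              [0.0,  1.0]])          # rows are the exponent vectors a_k^T
b = np.log([2.0, 0.5, 1.0])          # b_k = log c_k
y = np.array([0.1, -0.2])

print(logsumexp(A @ y + b))          # <= 0 iff the posynomial constraint holds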
4.5.4 Examples
Frobenius norm diagonal scaling
Consider a matrix M ∈ R^{n×n}, and the associated linear function that maps u into y = Mu. Suppose we scale the coordinates, i.e., change variables to ũ = Du, ỹ = Dy, where D is diagonal, with D_ii > 0. In the new coordinates the linear function is given by ỹ = DMD^{−1}ũ.
Now suppose we want to choose the scaling in such a way that the resulting matrix, DMD−1, is small. We will use the Frobenius norm (squared) to measure the size of the matrix:
∥DMD^{−1}∥_F² = tr((DMD^{−1})^T (DMD^{−1})) = Σ_{i,j=1}^{n} (DMD^{−1})_{ij}² = Σ_{i,j=1}^{n} M_{ij}² d_i²/d_j²,

where D = diag(d). Since this is a posynomial in d, the problem of choosing the scaling d to minimize the Frobenius norm is an unconstrained geometric program,

minimize Σ_{i,j=1}^{n} M_{ij}² d_i²/d_j²,

with variable d. The only exponents in this geometric program are 0, 2, and −2.
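A sketch of this unconstrained GP in CVXPY's geometric programming mode, with a random placeholder matrix M (the diagonal terms M_ii² are constants unaffected by the scaling, so they are omitted from the objective):

import numpy as np
import cvxpy as cp

# Sketch: diagonal scaling GP (DGP mode); M is random placeholder data.
np.random.seed(0)
n = 4
M = np.random.randn(n, n)

d = cp.Variable(n, pos=True)
terms = [M[i, j] ** 2 * d[i] ** 2 / d[j] ** 2
         for i in range(n) for j in range(n) if i != j]
prob = cp.Problem(cp.Minimize(cp.sum(cp.hstack(terms))))
prob.solve(gp=True)
print(d.value)   # optimal scaling, determined up to a common positive factor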
Design of a cantilever beam
We consider the design of a cantilever beam, which consists of N segments, numbered from right to left as 1,…,N, as shown in figure 4.6. Each segment has unit length and a uniform rectangular cross-section with width w_i and height h_i. A vertical load (force) F is applied at the right end of the beam. This load causes the beam to deflect (downward), and induces stress in each segment of the beam. We assume that the deflections are small, and that the material is linearly elastic, with Young’s modulus E.
Figure 4.6 Segmented cantilever beam with 4 segments. Each segment has unit length and a rectangular profile. A vertical force F is applied at the right end of the beam.
The design variables in the problem are the widths wi and heights hi of the N segments. We seek to minimize the total volume of the beam (which is proportional to its weight),
w1h1 +···+wNhN,
subject to some design constraints. We impose upper and lower bounds on width
and height of the segments,
wmin ≤ wi ≤ wmax, hmin ≤ hi ≤ hmax, i = 1,…,N,
as well as the aspect ratios,
Smin ≤ hi/wi ≤ Smax.
In addition, we have a limit on the maximum allowable stress in the material, and on the vertical deflection at the end of the beam.
We first consider the maximum stress constraint. The maximum stress in segment i, which we denote σ_i, is given by σ_i = 6iF/(w_i h_i²). We impose the constraints

6iF/(w_i h_i²) ≤ σ_max, i = 1,…,N,
to ensure that the stress does not exceed the maximum allowable value σ_max anywhere in the beam.
The last constraint is a limit on the vertical deflection at the end of the beam, which we will denote y1:
y1 ≤ ymax.
The deflection y1 can be found by a recursion that involves the deflection and slope
of the beam segments:
v_i = 12(i − 1/2) F/(E w_i h_i³) + v_{i+1},  y_i = 6(i − 1/3) F/(E w_i h_i³) + v_{i+1} + y_{i+1},    (4.45)

for i = N, N−1,…,1, with starting values v_{N+1} = y_{N+1} = 0. In this recursion, y_i is the deflection at the right end of segment i, and v_i is the slope at that point. We can use the recursion (4.45) to show that these deflection and slope quantities
are in fact posynomial functions of the variables w and h. We first note that vN+1 and yN+1 are zero, and therefore posynomials. Now assume that vi+1 and yi+1 are posynomial functions of w and h. The lefthand equation in (4.45) shows that vi is the sum of a monomial and a posynomial (i.e., vi+1), and therefore is a posynomial. From the righthand equation in (4.45), we see that the deflection yi is the sum of a monomial and two posynomials (vi+1 and yi+1), and so is a posynomial. In particular, the deflection at the end of the beam, y1, is a posynomial.
The problem is then
minimize Σ_{i=1}^{N} w_i h_i
subject to w_min ≤ w_i ≤ w_max, i = 1,…,N
h_min ≤ h_i ≤ h_max, i = 1,…,N
S_min ≤ h_i/w_i ≤ S_max, i = 1,…,N
6iF/(w_i h_i²) ≤ σ_max, i = 1,…,N
y_1 ≤ y_max,
with variables w and h. This is a GP, since the objective is a posynomial, and the constraints can all be expressed as posynomial inequalities. (In fact, the constraints can all be expressed as monomial inequalities, with the exception of the deflection limit, which is a complicated posynomial inequality.)
When the number of segments N is large, the number of monomial terms appearing in the posynomial y_1 grows approximately as N². Another formulation of this problem, explored in exercise 4.31, is obtained by introducing v_1,…,v_N and y_1,…,y_N as variables, and including a modified version of the recursion as a set of constraints. This formulation avoids this growth in the number of monomial terms.
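For concreteness, the recursion (4.45) is easy to evaluate numerically for a fixed design; the following sketch (with unit data, purely illustrative) computes the tip deflection y_1:

import numpy as np

# Sketch: evaluate the deflection recursion (4.45) for given w, h.
def tip_deflection(w, h, F=1.0, E=1.0):
    """Return y_1, the deflection at the right end of segment 1."""
    N = len(w)
    v = 0.0   # v_{N+1}
    y = 0.0   # y_{N+1}
    for i in range(N, 0, -1):           # i = N, N-1, ..., 1
        d = F / (E * w[i - 1] * h[i - 1] ** 3)
        v_new = 12 * (i - 0.5) * d + v  # v_i, from v_{i+1}
        y = 6 * (i - 1.0 / 3) * d + v + y   # y_i, from v_{i+1} and y_{i+1}
        v = v_new
    return y

print(tip_deflection(np.ones(4), np.ones(4)))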
Minimizing spectral radius via Perron-Frobenius theory
Suppose the matrix A ∈ R^{n×n} is elementwise nonnegative, i.e., A_ij ≥ 0 for i, j = 1,…,n, and irreducible, which means that the matrix (I + A)^{n−1} is elementwise positive. The Perron-Frobenius theorem states that A has a positive real eigenvalue λ_pf equal to its spectral radius, i.e., the largest magnitude of its eigenvalues. The Perron-Frobenius eigenvalue λ_pf determines the asymptotic rate of growth or decay of A^k, as k → ∞; in fact, the matrix ((1/λ_pf)A)^k converges. Roughly speaking, this means that as k → ∞, A^k grows like λ_pf^k, if λ_pf > 1, or decays like λ_pf^k, if λ_pf < 1.

A basic result in the theory of nonnegative matrices states that the Perron-Frobenius eigenvalue is given by

λ_pf = inf{λ | Av ≼ λv for some v ≻ 0}    (4.46)

(and moreover, that the infimum is achieved). The inequality Av ≼ λv can be expressed as

Σ_{j=1}^{n} A_ij v_j/(λ v_i) ≤ 1, i = 1,…,n,    (4.47)

which is a set of posynomial inequalities in the variables A_ij, v_i, and λ. Thus, the condition that λ_pf ≤ λ can be expressed as a set of posynomial inequalities (4.47) in A, v, and λ. This allows us to solve some optimization problems involving the Perron-Frobenius eigenvalue using geometric programming.

Suppose that the entries of the matrix A are posynomial functions of some underlying variable x ∈ R^k. In this case the inequalities (4.47) are posynomial inequalities in the variables x ∈ R^k, v ∈ R^n, and λ ∈ R. We consider the problem of choosing x to minimize the Perron-Frobenius eigenvalue (or spectral radius) of A, possibly subject to posynomial inequalities on x,

minimize λ_pf(A(x))
subject to f_i(x) ≤ 1, i = 1,…,p,

where f_i are posynomials. Using the characterization above, we can express this problem as the GP

minimize λ
subject to Σ_{j=1}^{n} A_ij v_j/(λ v_i) ≤ 1, i = 1,…,n
f_i(x) ≤ 1, i = 1,…,p,

where the variables are x, v, and λ.

As a specific example, we consider a simple model for the population dynamics for a bacterium, with time or period denoted by t = 0, 1, 2,…, in hours. The vector p(t) ∈ R^4_+ characterizes the population age distribution at period t: p_1(t) is the total population between 0 and 1 hours old; p_2(t) is the total population between 1 and 2 hours old; and so on. We (arbitrarily) assume that no bacteria live more than 4 hours. The population propagates in time as p(t + 1) = Ap(t), where

A = [ b_1  b_2  b_3  b_4 ]
    [ s_1  0    0    0   ]
    [ 0    s_2  0    0   ]
    [ 0    0    s_3  0   ].

Here b_i is the birth rate among bacteria in age group i, and s_i is the survival rate from age group i into age group i + 1. We assume that b_i > 0 and 0 < s_i < 1. The Perron-Frobenius eigenvalue of A determines the asymptotic growth or decay rate of the population: if λ_pf < 1 the population decays geometrically like λ_pf^t; if λ_pf > 1 the population grows geometrically like λ_pf^t, with a doubling time of 1/log_2 λ_pf hours. Minimizing the spectral radius of A corresponds to finding the fastest decay rate, or slowest growth rate, for the population.
As our underlying variables, on which the matrix A depends, we take c1 and c2, the concentrations of two chemicals in the environment that affect the birth and survival rates of the bacteria. We model the birth and survival rates as monomial functions of the two concentrations:
b_i = b_i^nom (c_1/c_1^nom)^{α_i} (c_2/c_2^nom)^{β_i}, i = 1,…,4
s_i = s_i^nom (c_1/c_1^nom)^{γ_i} (c_2/c_2^nom)^{δ_i}, i = 1,…,3.

Here, b_i^nom is the nominal birth rate, s_i^nom is the nominal survival rate, and c_i^nom is the nominal concentration of chemical i. The constants α_i, β_i, γ_i, and δ_i give the effect on the
birth and survival rates due to changes in the concentrations of the chemicals away from the nominal values. For example α2 = −0.3 and γ1 = 0.5 means that an increase in concentration of chemical 1, over the nominal concentration, causes a decrease in the birth rate of bacteria that are between 1 and 2 hours old, and an increase in the survival rate of bacteria from 0 to 1 hours old.
We assume that the concentrations c1 and c2 can be independently increased or decreased (say, within a factor of 2), by administering drugs, and pose the problem of finding the drug mix that maximizes the population decay rate (i.e., minimizes λpf(A)). Using the approach described above, this problem can be posed as the GP
minimize λ
subject to b_1 v_1 + b_2 v_2 + b_3 v_3 + b_4 v_4 ≤ λ v_1
s_1 v_1 ≤ λ v_2
s_2 v_2 ≤ λ v_3
s_3 v_3 ≤ λ v_4
1/2 ≤ c_i/c_i^nom ≤ 2, i = 1, 2
b_i = b_i^nom (c_1/c_1^nom)^{α_i} (c_2/c_2^nom)^{β_i}, i = 1,…,4
s_i = s_i^nom (c_1/c_1^nom)^{γ_i} (c_2/c_2^nom)^{δ_i}, i = 1,…,3,

with variables b_i, s_i, c_i, v_i, and λ.
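A sketch of this GP in CVXPY's geometric programming mode; all nominal rates and exponents below are invented placeholder values:

import cvxpy as cp

# Sketch: the population GP in DGP mode, with placeholder data.
bnom = [0.4, 0.6, 0.5, 0.2];  alpha = [0.2, -0.3, 0.1, 0.1];  beta = [0.1, 0.2, -0.1, 0.3]
snom = [0.9, 0.8, 0.7];       gamma = [0.5, -0.2, 0.1];       delta = [-0.1, 0.3, 0.2]
c1nom = c2nom = 1.0

c1, c2 = cp.Variable(pos=True), cp.Variable(pos=True)
v = cp.Variable(4, pos=True)
lam = cp.Variable(pos=True)

# Birth and survival rates as monomials of the concentrations.
b = [bnom[i] * (c1 / c1nom) ** alpha[i] * (c2 / c2nom) ** beta[i] for i in range(4)]
s = [snom[i] * (c1 / c1nom) ** gamma[i] * (c2 / c2nom) ** delta[i] for i in range(3)]

constraints = [b[0] * v[0] + b[1] * v[1] + b[2] * v[2] + b[3] * v[3] <= lam * v[0],
               s[0] * v[0] <= lam * v[1],
               s[1] * v[1] <= lam * v[2],
               s[2] * v[2] <= lam * v[3],
               0.5 * c1nom <= c1, c1 <= 2 * c1nom,
               0.5 * c2nom <= c2, c2 <= 2 * c2nom]
prob = cp.Problem(cp.Minimize(lam), constraints)
prob.solve(gp=True)
print(lam.value, c1.value, c2.value)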
4.6 Generalized inequality constraints
One very useful generalization of the standard form convex optimization problem (4.15) is obtained by allowing the inequality constraint functions to be vector valued, and using generalized inequalities in the constraints:
minimize f0 (x)
subject to fi(x) ≼Ki 0, i = 1,…,m (4.48)
Ax = b,
where f0 : Rn → R, Ki ⊆ Rki are proper cones, and fi : Rn → Rki are Ki-convex. We refer to this problem as a (standard form) convex optimization problem with generalized inequality constraints. Problem (4.15) is a special case with Ki = R+, i = 1,…,m.
Many of the results for ordinary convex optimization problems hold for problems with generalized inequalities. Some examples are:
• The feasible set, any sublevel set, and the optimal set are convex.
• Any point that is locally optimal for the problem (4.48) is globally optimal.
• The optimality condition for differentiable f0, given in §4.2.3, holds without any change.
We will also see (in chapter 11) that convex optimization problems with generalized inequality constraints can often be solved as easily as ordinary convex optimization problems.
4.6.1 Conic form problems

Among the simplest convex optimization problems with generalized inequalities are the conic form problems (or cone programs), which have a linear objective and one inequality constraint function, which is affine (and therefore K-convex):

minimize c^T x
subject to Fx + g ≼_K 0    (4.49)
Ax = b.

When K is the nonnegative orthant, the conic form problem reduces to a linear program. We can view conic form problems as a generalization of linear programs in which componentwise inequality is replaced with a generalized linear inequality.

Continuing the analogy to linear programming, we refer to the conic form problem

minimize c^T x
subject to x ≽_K 0
Ax = b

as a conic form problem in standard form. Similarly, the problem

minimize c^T x
subject to Fx + g ≼_K 0

is called a conic form problem in inequality form.

4.6.2 Semidefinite programming

When K is S^k_+, the cone of positive semidefinite k × k matrices, the associated conic form problem is called a semidefinite program (SDP), and has the form

minimize c^T x
subject to x_1 F_1 + ··· + x_n F_n + G ≼ 0    (4.50)
Ax = b,

where G, F_1,…,F_n ∈ S^k, and A ∈ R^{p×n}. The inequality here is a linear matrix inequality (see example 2.10).

If the matrices G, F_1,…,F_n are all diagonal, then the LMI in (4.50) is equivalent to a set of n linear inequalities, and the SDP (4.50) reduces to a linear program.

Standard and inequality form semidefinite programs

Following the analogy to LP, a standard form SDP has linear equality constraints, and a (matrix) nonnegativity constraint on the variable X ∈ S^n:

minimize tr(CX)
subject to tr(A_i X) = b_i, i = 1,…,p    (4.51)
X ≽ 0,
where C, A_1,…,A_p ∈ S^n. (Recall that tr(CX) = Σ_{i,j=1}^{n} C_ij X_ij is the form of a general real-valued linear function on S^n.) This form should be compared to the standard form linear program (4.28). In LP and SDP standard forms, we minimize a linear function of the variable, subject to p linear equality constraints on the variable, and a nonnegativity constraint on the variable.
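A sketch of a standard form SDP in CVXPY; the data below are synthetic, chosen so that X = I is feasible and C is positive semidefinite (so the objective is bounded below):

import numpy as np
import cvxpy as cp

# Sketch: a standard form SDP with synthetic, feasible data.
np.random.seed(0)
n, p = 4, 2
C = np.random.randn(n, n); C = C @ C.T                 # PSD objective matrix
A = []
for _ in range(p):
    B = np.random.randn(n, n)
    A.append((B + B.T) / 2)                            # symmetric A_i
b = np.array([np.trace(Ai) for Ai in A])               # tr(A_i I) = b_i

X = cp.Variable((n, n), symmetric=True)
constraints = [cp.trace(A[i] @ X) == b[i] for i in range(p)] + [X >> 0]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print(prob.value)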
An inequality form SDP, analogous to an inequality form LP (4.29), has no equality constraints, and one LMI:
minimize c^T x
subject to x_1 A_1 + ··· + x_n A_n ≼ B,
with variable x ∈ R^n, and parameters B, A_1,…,A_n ∈ S^k, c ∈ R^n.

Multiple LMIs and linear inequalities
It is common to refer to a problem with linear objective, linear equality and inequality constraints, and several LMI constraints, i.e.,

minimize c^T x
subject to F^{(i)}(x) = x_1 F_1^{(i)} + ··· + x_n F_n^{(i)} + G^{(i)} ≼ 0, i = 1,…,K
Gx ≼ h, Ax = b,

as an SDP as well. Such problems are readily transformed to an SDP, by forming a large block diagonal LMI from the individual LMIs and linear inequalities:

minimize c^T x
subject to diag(Gx − h, F^{(1)}(x),…,F^{(K)}(x)) ≼ 0
Ax = b.
4.6.3 Examples
Second-order cone programming
The SOCP (4.36) can be expressed as a conic form problem

minimize c^T x
subject to −(A_i x + b_i, c_i^T x + d_i) ≼_{K_i} 0, i = 1,…,m
Fx = g,

in which

K_i = {(y, t) ∈ R^{n_i+1} | ∥y∥_2 ≤ t},

i.e., the second-order cone in R^{n_i+1}. This explains the name second-order cone program for the optimization problem (4.36).

Matrix norm minimization
Let A(x) = A_0 + x_1 A_1 + ··· + x_n A_n, where A_i ∈ R^{p×q}. We consider the unconstrained problem

minimize ∥A(x)∥_2,
where ∥ · ∥2 denotes the spectral norm (maximum singular value), and x ∈ Rn is the variable. This is a convex problem since ∥A(x)∥2 is a convex function of x.
Using the fact that ∥A∥_2 ≤ s if and only if A^T A ≼ s²I (and s ≥ 0), we can express the problem in the form

minimize s
subject to A(x)^T A(x) ≼ sI,

with variables x and s. Since the function A(x)^T A(x) − sI is matrix convex in (x, s), this is a convex optimization problem with a single q × q matrix inequality constraint.

We can also formulate the problem using a single linear matrix inequality of size (p+q) × (p+q), using the fact that

A^T A ≼ t²I (and t ≥ 0)  ⟺  [tI, A; A^T, tI] ≽ 0

(see §A.5.5). This results in the SDP

minimize t
subject to [tI, A(x); A(x)^T, tI] ≽ 0

in the variables x and t.
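In a modeling layer this problem need not be assembled by hand; for example, CVXPY's sigma_max atom accepts the affine matrix A(x) directly and compiles the problem to an SDP of the kind above (a sketch with random placeholder data):

import numpy as np
import cvxpy as cp

# Sketch: minimize ||A(x)||_2 via the sigma_max atom (random data).
np.random.seed(0)
p, q, n = 3, 2, 4
A0 = np.random.randn(p, q)
A = [np.random.randn(p, q) for _ in range(n)]

x = cp.Variable(n)
Ax = A0
for i in range(n):
    Ax = Ax + x[i] * A[i]            # the affine matrix A(x)
prob = cp.Problem(cp.Minimize(cp.sigma_max(Ax)))
prob.solve()
print(prob.value, x.value)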
Moment problems
Let t be a random variable in R. The expected values E t^k (assuming they exist) are called the (power) moments of the distribution of t. The following classical results give a characterization of a moment sequence.

If there is a probability distribution on R such that x_k = E t^k, k = 0,…,2n, then x_0 = 1 and

H(x_0,…,x_{2n}) =
[ x_0      x_1      x_2      …  x_{n−1}   x_n      ]
[ x_1      x_2      x_3      …  x_n       x_{n+1}  ]
[ x_2      x_3      x_4      …  x_{n+1}   x_{n+2}  ]
[ ⋮                              ⋮         ⋮        ]
[ x_{n−1}  x_n      x_{n+1}  …  x_{2n−2}  x_{2n−1} ]
[ x_n      x_{n+1}  x_{n+2}  …  x_{2n−1}  x_{2n}   ]  ≽ 0.    (4.52)

(The matrix H is called the Hankel matrix associated with x_0,…,x_{2n}.) This is easy to see: Let x_i = E t^i, i = 0,…,2n be the moments of some distribution, and let y = (y_0, y_1,…, y_n) ∈ R^{n+1}. Then we have

y^T H(x_0,…,x_{2n}) y = Σ_{i,j=0}^{n} y_i y_j E t^{i+j} = E (y_0 + y_1 t + ··· + y_n t^n)² ≥ 0.

The following partial converse is less obvious: If x_0 = 1 and H(x) ≻ 0, then there exists a probability distribution on R such that x_i = E t^i, i = 0,…,2n. (For a
proof, see exercise 2.37.) Now suppose that x0 = 1, and H(x) ≽ 0 (but possibly H(x) ̸≻ 0), i.e., the linear matrix inequality (4.52) holds, but possibly not strictly. In this case, there is a sequence of distributions on R, whose moments converge to x. In summary: the condition that x0, . . . , x2n be the moments of some distribution on R (or the limit of the moments of a sequence of distributions) can be expressed as the linear matrix inequality (4.52) in the variable x, together with the linear equality x0 = 1. Using this fact, we can cast some interesting problems involving moments as SDPs.
Suppose t is a random variable on R. We do not know its distribution, but we do know some bounds on the moments, i.e.,
μ_k ≤ E t^k ≤ μ̄_k, k = 1,…,2n
(which includes, as a special case, knowing exact values of some of the moments). Let p(t) = c_0 + c_1 t + ··· + c_{2n} t^{2n} be a given polynomial in t. The expected value of p(t) is linear in the moments E t^i:

E p(t) = Σ_{i=0}^{2n} c_i E t^i = Σ_{i=0}^{2n} c_i x_i.

We can compute upper and lower bounds for E p(t),
minimize (maximize) E p(t)
subject to μ_k ≤ E t^k ≤ μ̄_k, k = 1,…,2n,

over all probability distributions that satisfy the given moment bounds, by solving the SDP

minimize (maximize) c_1 x_1 + ··· + c_{2n} x_{2n}
subject to μ_k ≤ x_k ≤ μ̄_k, k = 1,…,2n
H(1, x_1,…, x_{2n}) ≽ 0

with variables x_1,…,x_{2n}. This gives bounds on E p(t), over all probability distributions that satisfy the known moment constraints. The bounds are sharp in the sense that there exists a sequence of distributions, whose moments satisfy the given moment bounds, for which E p(t) converges to the upper and lower bounds found by these SDPs.
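A sketch of these bound-computing SDPs in CVXPY, with n = 2 and placeholder moment bounds chosen to contain the moments of the uniform distribution on [0, 1]:

import numpy as np
import cvxpy as cp

# Sketch: bounds on E p(t) from moment bounds, via the Hankel LMI (4.52).
n = 2
c = np.array([0.0, 1.0, -1.0, 0.0, 0.5])    # p(t) = t - t^2 + 0.5 t^4
mu_lo = np.array([0.40, 0.25, 0.15, 0.10])  # lower bounds on E t^k, k = 1..4
mu_hi = np.array([0.60, 0.40, 0.30, 0.30])  # upper bounds on E t^k, k = 1..4

x = cp.Variable(2 * n + 1)                  # x = (x_0, ..., x_{2n})
H = cp.bmat([[x[i + j] for j in range(n + 1)] for i in range(n + 1)])
constraints = [x[0] == 1, mu_lo <= x[1:], x[1:] <= mu_hi, H >> 0]

lo = cp.Problem(cp.Minimize(c @ x), constraints); lo.solve()
hi = cp.Problem(cp.Maximize(c @ x), constraints); hi.solve()
print(lo.value, hi.value)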
Bounding portfolio risk with incomplete covariance information
We consider once again the setup for the classical Markowitz portfolio problem (see page 155). We have a portfolio of n assets or stocks, with xi denoting the amount of asset i that is held over some investment period, and pi denoting the relative price change of asset i over the period. The change in total value of the portfolio is pT x. The price change vector p is modeled as a random vector, with mean and covariance
p̄ = E p,  Σ = E(p − p̄)(p − p̄)^T.
The change in value of the portfolio is therefore a random variable with mean p̄^T x and standard deviation σ = (x^T Σx)^{1/2}. The risk of a large loss, i.e., a change in portfolio value that is substantially below its expected value, is directly related
to the standard deviation σ, and increases with it. For this reason the standard deviation σ (or the variance σ2) is used as a measure of the risk associated with the portfolio.
In the classical portfolio optimization problem, the portfolio x is the optimization variable, and we minimize the risk subject to a minimum mean return and other constraints. The price change statistics p̄ and Σ are known problem parameters. In the risk bounding problem considered here, we turn the problem around: we assume the portfolio x is known, but only partial information is available about the covariance matrix Σ. We might have, for example, an upper and lower bound on each entry:
Lij ≤ Σij ≤ Uij, i, j = 1,…,n,
where L and U are given. We now pose the question: what is the maximum risk for our portfolio, over all covariance matrices consistent with the given bounds? We define the worst-case variance of the portfolio as
σ_wc² = sup{x^T Σx | L_ij ≤ Σ_ij ≤ U_ij, i, j = 1,…,n, Σ ≽ 0}.
We have added the condition Σ ≽ 0, which the covariance matrix must, of course, satisfy.
We can find σ_wc by solving the SDP

maximize x^T Σx
subject to L_ij ≤ Σ_ij ≤ U_ij, i, j = 1,…,n
Σ ≽ 0
with variable Σ ∈ S^n (and problem parameters x, L, and U). The optimal Σ is the worst covariance matrix consistent with our given bounds on the entries, where ‘worst’ means largest risk with the (given) portfolio x. We can easily construct a distribution for p that is consistent with the given bounds, and achieves the worst-case variance, from an optimal Σ for the SDP. For example, we can take p = p̄ + Σ^{1/2} v, where v is any random vector with E v = 0 and E vv^T = I.
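A sketch of this SDP in CVXPY; the portfolio x and the entrywise bounds L, U below are placeholder data (note that the objective is linear in the variable Σ, since x is fixed):

import numpy as np
import cvxpy as cp

# Sketch: worst-case portfolio variance over a box of covariance matrices.
n = 3
x = np.array([0.5, 0.3, 0.2])                  # fixed portfolio
L = np.array([[0.04, -0.01, -0.01],
              [-0.01, 0.02, -0.01],
              [-0.01, -0.01, 0.01]])           # entrywise lower bounds
U = L + 0.02                                   # entrywise upper bounds

Sigma = cp.Variable((n, n), symmetric=True)
constraints = [Sigma >> 0, L <= Sigma, Sigma <= U]
prob = cp.Problem(cp.Maximize(x @ Sigma @ x), constraints)   # linear in Sigma
prob.solve()
print(np.sqrt(prob.value))    # worst-case standard deviation sigma_wc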
Evidently we can use the same method to determine σ_wc for any prior information about Σ that is convex. We list here some examples.
• Known variance of certain portfolios. We might have equality constraints such as
u_k^T Σ u_k = σ_k²,
where uk and σk are given. This corresponds to prior knowledge that certain known portfolios (given by uk) have known (or very accurately estimated) variance.
• Including effects of estimation error. If the covariance Σ is estimated from empirical data, the estimation method will give an estimate Σ̂, and some information about the reliability of the estimate, such as a confidence ellipsoid. This can be expressed as

C(Σ − Σ̂) ≤ α,

where C is a positive definite quadratic form on S^n, and the constant α determines the confidence level.
• Factor models. The covariance might have the form

Σ = F Σ_factor F^T + D,
where F ∈ Rn×k, Σfactor ∈ Sk, and D is diagonal. This corresponds to a model of the price changes of the form
p = F z + d,
where z is a random variable (the underlying factors that affect the price changes) and di are independent (additional volatility of each asset price). We assume that the factors are known. Since Σ is linearly related to Σfactor and D, we can impose any convex constraint on them (representing prior information) and still compute σwc using convex optimization.
• Information about correlation coefficients. In the simplest case, the diagonal entries of Σ (i.e., the volatilities of each asset price) are known, and bounds on correlation coefficients between price changes are known:

l_ij ≤ ρ_ij = Σ_ij/(Σ_ii^{1/2} Σ_jj^{1/2}) ≤ u_ij, i, j = 1,…,n.

Since Σ_ii are known, but Σ_ij for i ≠ j are not, these are linear inequalities.
Fastest mixing Markov chain on a graph
We consider an undirected graph, with nodes 1, . . . , n, and a set of edges E ⊆{1,…,n}×{1,…,n}.
Here (i,j) ∈ E means that nodes i and j are connected by an edge. Since the graph is undirected, E is symmetric: (i,j) ∈ E if and only if (j,i) ∈ E. We allow the possibility of self-loops, i.e., we can have (i,i) ∈ E.
We define a Markov chain, with state X(t) ∈ {1,…,n}, for t ∈ Z+ (the set of nonnegative integers), as follows. With each edge (i,j) ∈ E we associate a probability Pij, which is the probability that X makes a transition between nodes i and j. State transitions can only occur across edges; we have Pij = 0 for (i, j) ̸∈ E. The probabilities associated with the edges must be nonnegative, and for each node, the sum of the probabilities of links connected to the node (including a self-loop, if there is one) must equal one.
The Markov chain has transition probability matrix
P_ij = prob(X(t+1) = i | X(t) = j), i, j = 1,…,n.

This matrix must satisfy

P_ij ≥ 0, i, j = 1,…,n,  1^T P = 1^T,  P = P^T,    (4.53)

and also

P_ij = 0 for (i, j) ∉ E.    (4.54)
Since P is symmetric and 1^T P = 1^T, we conclude P1 = 1, so the uniform distribution (1/n)1 is an equilibrium distribution for the Markov chain. Convergence of the distribution of X(t) to (1/n)1 is determined by the second largest (in magnitude) eigenvalue of P, i.e., by r = max{λ_2, −λ_n}, where
1 = λ_1 ≥ λ_2 ≥ ··· ≥ λ_n
are the eigenvalues of P. We refer to r as the mixing rate of the Markov chain. If r = 1, then the distribution of X(t) need not converge to (1/n)1 (which means the Markov chain does not mix). When r < 1, the distribution of X(t) approaches (1/n)1 asymptotically as r^t, as t → ∞. Thus, the smaller r is, the faster the Markov chain mixes.

The fastest mixing Markov chain problem is to find P, subject to the constraints (4.53) and (4.54), that minimizes r. (The problem data is the graph, i.e., E.) We will show that this problem can be formulated as an SDP. Since the eigenvalue λ_1 = 1 is associated with the eigenvector 1, we can express the mixing rate as the norm of the matrix P, restricted to the subspace 1^⊥: r = ∥QPQ∥_2, where Q = I − (1/n)11^T is the matrix representing orthogonal projection on 1^⊥. Using the property P1 = 1, we have

r = ∥QPQ∥_2 = ∥(I − (1/n)11^T) P (I − (1/n)11^T)∥_2 = ∥P − (1/n)11^T∥_2.

This shows that the mixing rate r is a convex function of P, so the fastest mixing Markov chain problem can be cast as the convex optimization problem

minimize ∥P − (1/n)11^T∥_2
subject to P1 = 1
P_ij ≥ 0, i, j = 1,…,n
P_ij = 0 for (i, j) ∉ E,

with variable P ∈ S^n. We can express the problem as an SDP by introducing a scalar variable t to bound the norm of P − (1/n)11^T:

minimize t
subject to −tI ≼ P − (1/n)11^T ≼ tI
P1 = 1    (4.55)
P_ij ≥ 0, i, j = 1,…,n
P_ij = 0 for (i, j) ∉ E.

4.7 Vector optimization

4.7.1 General and convex vector optimization problems

In §4.6 we extended the standard form problem (4.1) to include vector-valued constraint functions. In this section we investigate the meaning of a vector-valued objective function. We denote a general vector optimization problem as

minimize (with respect to K) f_0(x)
subject to f_i(x) ≤ 0, i = 1,…,m    (4.56)
h_i(x) = 0, i = 1,…,p.

Here x ∈ R^n is the optimization variable, K ⊆ R^q is a proper cone, f_0 : R^n → R^q is the objective function, f_i : R^n → R are the inequality constraint functions, and h_i : R^n → R are the equality constraint functions. The only difference between this problem and the standard optimization problem (4.1) is that here, the objective function takes values in R^q, and the problem specification includes a proper cone K, which is used to compare objective values. In the context of vector optimization, the standard optimization problem (4.1) is sometimes called a scalar optimization problem.

We say the vector optimization problem (4.56) is a convex vector optimization problem if the objective function f_0 is K-convex, the inequality constraint functions f_1,…,f_m are convex, and the equality constraint functions h_1,…,h_p are affine. (As in the scalar case, we usually express the equality constraints as Ax = b, where A ∈ R^{p×n}.)

What meaning can we give to the vector optimization problem (4.56)? Suppose x and y are two feasible points (i.e., they satisfy the constraints). Their associated objective values, f_0(x) and f_0(y), are to be compared using the generalized inequality ≼_K. We interpret f_0(x) ≼_K f_0(y) as meaning that x is ‘better than or equal’ in value to y (as judged by the objective f_0, with respect to K). The confusing aspect of vector optimization is that the two objective values f_0(x) and f_0(y) need not be comparable; we can have neither f_0(x) ≼_K f_0(y) nor f_0(y) ≼_K f_0(x), i.e., neither is better than the other. This cannot happen in a scalar objective optimization problem.

4.7.2 Optimal points and values

We first consider a special case, in which the meaning of the vector optimization problem is clear.
Consider the set of objective values of feasible points,

O = {f_0(x) | ∃x ∈ D, f_i(x) ≤ 0, i = 1,…,m, h_i(x) = 0, i = 1,…,p} ⊆ R^q,

which is called the set of achievable objective values. If this set has a minimum element (see §2.4.2), i.e., there is a feasible x such that f_0(x) ≼_K f_0(y) for all feasible y, then we say x is optimal for the problem (4.56), and refer to f_0(x) as the optimal value of the problem. (When a vector optimization problem has an optimal value, it is unique.) If x⋆ is an optimal point, then f_0(x⋆), the objective at x⋆, can be compared to the objective at every other feasible point, and is better than or equal to it. Roughly speaking, x⋆ is unambiguously a best choice for x, among feasible points.

A point x⋆ is optimal if and only if it is feasible and

O ⊆ f_0(x⋆) + K    (4.57)

(see §2.4.2). The set f_0(x⋆) + K can be interpreted as the set of values that are worse than, or equal to, f_0(x⋆), so the condition (4.57) states that every achievable value falls in this set. This is illustrated in figure 4.7.

Figure 4.7 The set O of achievable values for a vector optimization with objective values in R², with cone K = R²_+, is shown shaded. In this case, the point labeled f_0(x⋆) is the optimal value of the problem, and x⋆ is an optimal point. The objective value f_0(x⋆) can be compared to every other achievable value f_0(y), and is better than or equal to f_0(y). (Here, ‘better than or equal to’ means ‘is below and to the left of’.) The lightly shaded region is f_0(x⋆) + K, which is the set of all z ∈ R² corresponding to objective values worse than (or equal to) f_0(x⋆).

Most vector optimization problems do not have an optimal point and an optimal value, but this does occur in some special cases.

Example 4.9 Best linear unbiased estimator. Suppose y = Ax + v, where v ∈ R^m is a measurement noise, y ∈ R^m is a vector of measurements, and x ∈ R^n is a vector to be estimated, given the measurement y. We assume that A has rank n, and that the measurement noise satisfies E v = 0, E vv^T = I, i.e., its components are zero mean and uncorrelated.

A linear estimator of x has the form x̂ = Fy. The estimator is called unbiased if for all x we have E x̂ = x, i.e., if FA = I. The error covariance of an unbiased estimator is

E(x̂ − x)(x̂ − x)^T = E Fvv^T F^T = FF^T.

Our goal is to find an unbiased estimator that has a ‘small’ error covariance matrix. We can compare error covariances using matrix inequality, i.e., with respect to S^n_+. This has the following interpretation: Suppose x̂_1 = F_1 y, x̂_2 = F_2 y are two unbiased estimators. Then the first estimator is at least as good as the second, i.e., F_1 F_1^T ≼ F_2 F_2^T, if and only if for all c,

E(c^T x̂_1 − c^T x)² ≤ E(c^T x̂_2 − c^T x)².

In other words, for any linear function of x, the estimator F_1 yields at least as good an estimate as does F_2.

We can express the problem of finding an unbiased estimator for x as the vector optimization problem

minimize (w.r.t. S^n_+) FF^T
subject to FA = I,    (4.58)

with variable F ∈ R^{n×m}. The objective FF^T is convex with respect to S^n_+, so the problem (4.58) is a convex vector optimization problem. An easy way to see this is to observe that v^T FF^T v = ∥F^T v∥_2² is a convex function of F for any fixed v. It is a famous result that the problem (4.58) has an optimal solution, the least-squares estimator, or pseudo-inverse,

F⋆ = A† = (A^T A)^{−1} A^T.

For any F with FA = I, we have FF^T ≽ F⋆F⋆^T. The matrix

F⋆F⋆^T = A†A†^T = (A^T A)^{−1}

is the optimal value of the problem (4.58).
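A quick numerical check of example 4.9 (random data; the construction of the competing unbiased estimator F below is one convenient choice, not the only one):

import numpy as np

# Sketch: the pseudo-inverse estimator dominates another unbiased estimator.
np.random.seed(0)
m, n = 8, 3
A = np.random.randn(m, n)

F_star = np.linalg.pinv(A)                         # (A^T A)^{-1} A^T
# Another unbiased estimator: F = F* + Z with Z A = 0.
Q, _ = np.linalg.qr(A, mode='complete')            # Q[:, n:] spans null(A^T)
Z = 0.5 * np.random.randn(n, m - n) @ Q[:, n:].T   # hence Z @ A == 0
F = F_star + Z

# The error covariance F F^T exceeds F* F*^T in the matrix sense:
gap = F @ F.T - F_star @ F_star.T
print(np.linalg.eigvalsh(gap).min() >= -1e-9)      # True: gap is PSD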
4.7.3 Pareto optimal points and values

We now consider the case (which occurs in most vector optimization problems of interest) in which the set of achievable objective values does not have a minimum element, so the problem does not have an optimal point or optimal value. In these cases minimal elements of the set of achievable values play an important role. We say that a feasible point x is Pareto optimal (or efficient) if f_0(x) is a minimal element of the set of achievable values O. In this case we say that f_0(x) is a Pareto optimal value for the vector optimization problem (4.56). Thus, a point x is Pareto optimal if it is feasible and, for any feasible y, f_0(y) ≼_K f_0(x) implies f_0(y) = f_0(x). In other words: any feasible point y that is better than or equal to x (i.e., f_0(y) ≼_K f_0(x)) has exactly the same objective value as x.

A point x is Pareto optimal if and only if it is feasible and

(f_0(x) − K) ∩ O = {f_0(x)}    (4.59)

(see §2.4.2). The set f_0(x) − K can be interpreted as the set of values that are better than or equal to f_0(x), so the condition (4.59) states that the only achievable value better than or equal to f_0(x) is f_0(x) itself. This is illustrated in figure 4.8.

A vector optimization problem can have many Pareto optimal values (and points). The set of Pareto optimal values, denoted P, satisfies

P ⊆ O ∩ bd O,

i.e., every Pareto optimal value is an achievable objective value that lies in the boundary of the set of achievable objective values (see exercise 4.52).

Figure 4.8 The set O of achievable values for a vector optimization problem with objective values in R², with cone K = R²_+, is shown shaded. This problem does not have an optimal point or value, but it does have a set of Pareto optimal points, whose corresponding values are shown as the darkened curve on the lower left boundary of O. The point labeled f_0(x_po) is a Pareto optimal value, and x_po is a Pareto optimal point. The lightly shaded region is f_0(x_po) − K, which is the set of all z ∈ R² corresponding to objective values better than (or equal to) f_0(x_po).

4.7.4 Scalarization

Scalarization is a standard technique for finding Pareto optimal (or optimal) points for a vector optimization problem, based on the characterization of minimum and minimal points via dual generalized inequalities given in §2.6.3. Choose any λ ≻_{K*} 0, i.e., any vector that is positive in the dual generalized inequality. Now consider the scalar optimization problem

minimize λ^T f_0(x)
subject to f_i(x) ≤ 0, i = 1,…,m    (4.60)
h_i(x) = 0, i = 1,…,p,

and let x be an optimal point. Then x is Pareto optimal for the vector optimization problem (4.56). This follows from the dual inequality characterization of minimal points given in §2.6.3, and is also easily shown directly. If x were not Pareto optimal, then there is a y that is feasible, satisfies f_0(y) ≼_K f_0(x), and f_0(x) ≠ f_0(y). Since f_0(x) − f_0(y) ≽_K 0 and is nonzero, we have λ^T(f_0(x) − f_0(y)) > 0, i.e., λ^T f_0(x) > λ^T f_0(y). This contradicts the assumption that x is optimal for the scalar problem (4.60).
Using scalarization, we can find Pareto optimal points for any vector optimization problem by solving the ordinary scalar optimization problem (4.60). The vector λ, which is sometimes called the weight vector, must satisfy λ ≻_{K*} 0. The weight vector is a free parameter; by varying it we obtain (possibly) different Pareto optimal solutions of the vector optimization problem (4.56). This is illustrated in figure 4.9. The figure also shows an example of a Pareto optimal point that cannot
Figure 4.9 Scalarization. The set O of achievable values for a vector optimization problem with cone K = R²_+. Three Pareto optimal values f_0(x_1), f_0(x_2), f_0(x_3) are shown. The first two values can be obtained by scalarization: f_0(x_1) minimizes λ_1^T u over all u ∈ O, and f_0(x_2) minimizes λ_2^T u, where λ_1, λ_2 ≻ 0. The value f_0(x_3) is Pareto optimal, but cannot be found by scalarization.
be obtained via scalarization, for any value of the weight vector λ ≻K∗ 0.
The method of scalarization can be interpreted geometrically. A point x is optimal for the scalarized problem, i.e., minimizes λT f0 over the feasible set, if and only if λT (f0(y) − f0(x)) ≥ 0 for all feasible y. But this is the same as saying that {u | − λT (u − f0(x)) = 0} is a supporting hyperplane to the set of achievable
objective values O at the point f0(x); in particular
{u | λ^T(u − f_0(x)) < 0} ∩ O = ∅.    (4.61)

(See figure 4.9.) Thus, when we find an optimal point for the scalarized problem, we not only find a Pareto optimal point for the original vector optimization problem; we also find an entire halfspace in R^q, given by (4.61), of objective values that cannot be achieved.

Scalarization of convex vector optimization problems

Now suppose the vector optimization problem (4.56) is convex. Then the scalarized problem (4.60) is also convex, since λ^T f_0 is a (scalar-valued) convex function (by the results in §3.6). This means that we can find Pareto optimal points of a convex vector optimization problem by solving a convex scalar optimization problem. For each choice of the weight vector λ ≻_{K*} 0 we get a (usually different) Pareto optimal point.

For convex vector optimization problems we have a partial converse: For every Pareto optimal point x_po, there is some nonzero λ ≽_{K*} 0 such that x_po is a solution of the scalarized problem (4.60). So, roughly speaking, for convex problems the method of scalarization yields all Pareto optimal points, as the weight vector λ varies over the K*-nonnegative, nonzero values. We have to be careful here, because it is not true that every solution of the scalarized problem, with λ ≽_{K*} 0 and λ ≠ 0, is a Pareto optimal point for the vector problem. (In contrast, every solution of the scalarized problem with λ ≻_{K*} 0 is Pareto optimal.)

In some cases we can use this partial converse to find all Pareto optimal points of a convex vector optimization problem. Scalarization with λ ≻_{K*} 0 gives a set of Pareto optimal points (as it would in a nonconvex vector optimization problem as well). To find the remaining Pareto optimal solutions, we have to consider nonzero weight vectors λ that satisfy λ ≽_{K*} 0. For each such weight vector, we first identify all solutions of the scalarized problem. Then among these solutions we must check which are, in fact, Pareto optimal for the vector optimization problem. These ‘extreme’ Pareto optimal points can also be found as the limits of the Pareto optimal points obtained from positive weight vectors.

To establish this partial converse, we consider the set

A = O + K = {t ∈ R^q | f_0(x) ≼_K t for some feasible x},    (4.62)

which consists of all values that are worse than or equal to (with respect to ≼_K) some achievable objective value. While the set O of achievable objective values need not be convex, the set A is convex, when the problem is convex. Moreover, the minimal elements of A are exactly the same as the minimal elements of the set O of achievable values, i.e., they are the same as the Pareto optimal values. (See exercise 4.53.) Now we use the results of §2.6.3 to conclude that any minimal element of A minimizes λ^T z over A for some nonzero λ ≽_{K*} 0. This means that every Pareto optimal point for the vector optimization problem is optimal for the scalarized problem, for some nonzero weight λ ≽_{K*} 0.

Example 4.10 Minimal upper bound on a set of matrices. We consider the (convex) vector optimization problem, with respect to the positive semidefinite cone,

minimize (w.r.t. S^n_+) X
subject to X ≽ A_i, i = 1,…,m,    (4.63)

where A_i ∈ S^n, i = 1,…,m, are given. The constraints mean that X is an upper bound on the given matrices A_1,…,A_m; a Pareto optimal solution of (4.63) is a minimal upper bound on the matrices.

To find a Pareto optimal point, we apply scalarization: we choose any W ∈ S^n_++ and form the problem

minimize tr(WX)
subject to X ≽ A_i, i = 1,…,m,    (4.64)

which is an SDP.
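A sketch of the scalarized SDP (4.64) in CVXPY, with random symmetric data A_i and the (arbitrary) weight W = I:

import numpy as np
import cvxpy as cp

# Sketch: a minimal upper bound on a set of symmetric matrices.
np.random.seed(0)
n, m = 3, 2
A = []
for _ in range(m):
    B = np.random.randn(n, n)
    A.append((B + B.T) / 2)
W = np.eye(n)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> A[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(W @ X)), constraints)
prob.solve()
print(X.value)      # a minimal upper bound on A_1, ..., A_m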
Different choices for W will, in general, give different minimal solutions. The partial converse tells us that if X is Pareto optimal for the vector problem (4.63) then it is optimal for the SDP (4.64), for some nonzero weight matrix W ≽ 0. (In this case, however, not every solution of (4.64) is Pareto optimal for the vector optimization problem.)

We can give a simple geometric interpretation for this problem. We associate with each A ∈ S^n_++ an ellipsoid centered at the origin, given by

E_A = {u | u^T A^{−1} u ≤ 1},

so that A ≼ B if and only if E_A ⊆ E_B. A Pareto optimal point X for the problem (4.63) corresponds to a minimal ellipsoid that contains the ellipsoids associated with A_1,…,A_m. An example is shown in figure 4.10.

Figure 4.10 Geometric interpretation of the problem (4.63). The three shaded ellipsoids correspond to the data A_1, A_2, A_3 ∈ S²_++; the Pareto optimal points correspond to minimal ellipsoids that contain them. The two ellipsoids, with boundaries labeled X_1 and X_2, show two minimal ellipsoids obtained by solving the SDP (4.64) for two different weight matrices W_1 and W_2.

4.7.5 Multicriterion optimization

When a vector optimization problem involves the cone K = R^q_+, it is called a multicriterion or multi-objective optimization problem. The components of f_0, say, F_1,…,F_q, can be interpreted as q different scalar objectives, each of which we would like to minimize. We refer to F_i as the ith objective of the problem. A multicriterion optimization problem is convex if f_1,…,f_m are convex, h_1,…,h_p are affine, and the objectives F_1,…,F_q are convex.

Since multicriterion problems are vector optimization problems, all of the material of §4.7.1–§4.7.4 applies. For multicriterion problems, though, we can be a bit more specific in the interpretations. If x is feasible, we can think of F_i(x) as its score or value, according to the ith objective. If x and y are both feasible, F_i(x) ≤ F_i(y) means that x is at least as good as y, according to the ith objective; F_i(x) < F_i(y) means that x is better than y, or x beats y, according to the ith objective. If x and y are both feasible, we say that x is better than y, or x dominates y, if F_i(x) ≤ F_i(y) for i = 1,…,q, and for at least one j, F_j(x) < F_j(y). Roughly speaking, x is better than y if x meets or beats y on all objectives, and beats it in at least one objective.

In a multicriterion problem, an optimal point x⋆ satisfies

F_i(x⋆) ≤ F_i(y), i = 1,…,q,

for every feasible y. In other words, x⋆ is simultaneously optimal for each of the scalar problems

minimize F_j(x)
subject to f_i(x) ≤ 0, i = 1,…,m
h_i(x) = 0, i = 1,…,p,

for j = 1,…,q. When there is an optimal point, we say that the objectives are noncompeting, since no compromises have to be made among the objectives; each objective is as small as it could be made, even if the others were ignored.

A Pareto optimal point x_po satisfies the following: if y is feasible and F_i(y) ≤ F_i(x_po) for i = 1,…,q, then F_i(x_po) = F_i(y), i = 1,…,q. This can be restated as: a point is Pareto optimal if and only if it is feasible and there is no better feasible point. In particular, if a feasible point is not Pareto optimal, there is at least one other feasible point that is better. In searching for good points, then, we can clearly limit our search to Pareto optimal points.

Trade-off analysis

Now suppose that x and y are Pareto optimal points with, say,

F_i(x) < F_i(y), i ∈ A
F_i(x) = F_i(y), i ∈ B
F_i(x) > F_i(y), i ∈ C,
where A∪B∪C = {1,…,q}. In other words, A is the set of (indices of) objectives for which x beats y, B is the set of objectives for which the points x and y are tied, and C is the set of objectives for which y beats x. If A and C are empty, then the two points x and y have exactly the same objective values. If this is not the case, then both A and C must be nonempty. In other words, when comparing two Pareto optimal points, they either obtain the same performance (i.e., all objectives equal), or, each beats the other in at least one objective.
In comparing the point x to y, we say that we have traded or traded off better objective values for i ∈ A for worse objective values for i ∈ C. Optimal trade-off analysis (or just trade-off analysis) is the study of how much worse we must do in one or more objectives in order to do better in some other objectives, or more generally, the study of what sets of objective values are achievable.
As an example, consider a bi-criterion (i.e., two criterion) problem. Suppose x is a Pareto optimal point, with objectives F1(x) and F2(x). We might ask how much larger F2(z) would have to be, in order to obtain a feasible point z with F1(z) ≤ F1(x)−a, where a > 0 is some constant. Roughly speaking, we are asking how much we must pay in the second objective to obtain an improvement of a in the first objective. If a large increase in F2 must be accepted to realize a small decrease in F1, we say that there is a strong trade-off between the objectives, near the Pareto optimal value (F1(x),F2(x)). If, on the other hand, a large decrease in F1 can be obtained with only a small increase in F2, we say that the trade-off between the objectives is weak (near the Pareto optimal value (F1(x),F2(x))).
We can also consider the case in which we trade worse performance in the first objective for an improvement in the second. Here we find how much smaller F2(z)
can be made, to obtain a feasible point z with F1(z) ≤ F1(x) + a, where a > 0 is some constant. In this case we receive a benefit in the second objective, i.e., a reduction in F2 compared to F2(x). If this benefit is large (i.e., by increasing F1 a small amount we obtain a large reduction in F2), we say the objectives exhibit a strong trade-off. If it is small, we say the objectives trade off weakly (near the Pareto optimal value (F1(x), F2(x))).
Optimal trade-off surface
The set of Pareto optimal values for a multicriterion problem is called the optimal trade-off surface (in general, when q > 2) or the optimal trade-off curve (when q = 2). (Since it would be foolish to accept any point that is not Pareto optimal, we can restrict our trade-off analysis to Pareto optimal points.) Trade-off analysis is also sometimes called exploring the optimal trade-off surface. (The optimal trade-off surface is usually, but not always, a surface in the usual sense. If the problem has an optimal point, for example, the optimal trade-off surface consists of a single point, the optimal value.)
An optimal trade-off curve is readily interpreted. An example is shown in figure 4.11, on page 185, for a (convex) bi-criterion problem. From this curve we can easily visualize and understand the trade-offs between the two objectives.
• The endpoint at the right shows the smallest possible value of F2, without any consideration of F1.
• The endpoint at the left shows the smallest possible value of F1, without any consideration of F2.
• By finding the intersection of the curve with a vertical line at F1 = α, we can see how large F2 must be to achieve F1 ≤ α.
• By finding the intersection of the curve with a horizontal line at F2 = β, we can see how large F1 must be to achieve F2 ≤ β.
• The slope of the optimal trade-off curve at a point on the curve (i.e., a Pareto optimal value) shows the local optimal trade-off between the two objectives. Where the slope is steep, small changes in F1 are accompanied by large changes in F2.
• A point of large curvature is one where small decreases in one objective can only be accomplished by a large increase in the other. This is the proverbial knee of the trade-off curve, and in many applications represents a good compromise solution.
All of these have simple extensions to a trade-off surface, although visualizing a surface with more than three objectives is difficult.
Scalarizing multicriterion problems
When we scalarize a multicriterion problem by forming the weighted sum objective

λ^T f_0(x) = Σ_{i=1}^{q} λ_i F_i(x),
where λ ≻ 0, we can interpret λ_i as the weight we attach to the ith objective. The weight λ_i can be thought of as quantifying our desire to make F_i small (or our objection to having F_i large). In particular, we should take λ_i large if we want F_i to be small; if we care much less about F_i, we can take λ_i small. We can interpret the ratio λ_i/λ_j as the relative weight or relative importance of the ith objective compared to the jth objective. Alternatively, we can think of λ_i/λ_j as the exchange rate between the two objectives, since in the weighted sum objective a decrease (say) in F_i by α is considered the same as an increase in F_j in the amount (λ_i/λ_j)α.
These interpretations give us some intuition about how to set or change the weights while exploring the optimal trade-off surface. Suppose, for example, that the weight vector λ ≻ 0 yields the Pareto optimal point x_po, with objective values F_1(x_po),…,F_q(x_po). To find a (possibly) new Pareto optimal point which trades off a better kth objective value (say), for (possibly) worse objective values for the other objectives, we form a new weight vector λ̃ with

λ̃_k > λ_k,  λ̃_j = λ_j, j ≠ k, j = 1,…,q,
i.e., we increase the weight on the kth objective. This yields a new Pareto optimal point x̃_po with F_k(x̃_po) ≤ F_k(x_po) (and usually, F_k(x̃_po) < F_k(x_po)), i.e., a new Pareto optimal point with an improved kth objective. We can also see that at any point where the optimal trade-off surface is smooth, λ gives the inward normal to the surface at the associated Pareto optimal point. In particular, when we choose a weight vector λ and apply scalarization, we obtain a Pareto optimal point where λ gives the local trade-offs among objectives.

In practice, optimal trade-off surfaces are explored by ad hoc adjustment of the weights, based on the intuitive ideas above. We will see later (in chapter 5) that the basic idea of scalarization, i.e., minimizing a weighted sum of objectives, and then adjusting the weights to obtain a suitable solution, is the essence of duality.

4.7.6 Examples

Regularized least-squares

We are given A ∈ R^{m×n} and b ∈ R^m, and want to choose x ∈ R^n taking into account two quadratic objectives:

• F_1(x) = ∥Ax − b∥_2² = x^T A^T Ax − 2b^T Ax + b^T b is a measure of the misfit between Ax and b,
• F_2(x) = ∥x∥_2² = x^T x is a measure of the size of x.

Our goal is to find x that gives a good fit (i.e., small F_1) and that is not large (i.e., small F_2). We can formulate this problem as a vector optimization problem with respect to the cone R²_+, i.e., a bi-criterion problem (with no constraints):

minimize (w.r.t. R²_+) f_0(x) = (F_1(x), F_2(x)).

Figure 4.11 Optimal trade-off curve for a regularized least-squares problem. The shaded set is the set of achievable values (∥Ax − b∥_2², ∥x∥_2²). The optimal trade-off curve, shown darker, is the lower left part of the boundary.

We can scalarize this problem by taking λ_1 > 0 and λ_2 > 0 and minimizing the scalar weighted sum objective
λ^T f_0(x) = λ_1 F_1(x) + λ_2 F_2(x) = x^T(λ_1 A^T A + λ_2 I)x − 2λ_1 b^T Ax + λ_1 b^T b,

which yields

x(μ) = (λ_1 A^T A + λ_2 I)^{−1} λ_1 A^T b = (A^T A + μI)^{−1} A^T b,
where μ = λ2/λ1. For any μ > 0, this point is Pareto optimal for the bi-criterion problem. We can interpret μ = λ2/λ1 as the relative weight we assign F2 compared to F1.
This method produces all Pareto optimal points, except two, associated with the extremes μ → ∞ and μ → 0. In the first case we have the Pareto optimal solution x = 0, which would be obtained by scalarization with λ = (0, 1). At the other extreme we have the Pareto optimal solution A†b, where A† is the pseudo-inverse of A. This Pareto optimal solution is obtained as the limit of the optimal solution of the scalarized problem as μ → 0, i.e., as λ → (1, 0). (We will encounter the regularized least-squares problem again in §6.3.2.)
Figure 4.11 shows the optimal trade-off curve and the set of achievable values for a regularized least-squares problem with problem data A ∈ R100×10, b ∈ R100. (See exercise 4.50 for more discussion.)
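A trade-off curve of this kind can be traced numerically by sweeping μ; a sketch with random data of the same dimensions:

import numpy as np

# Sketch: sweep mu to trace the regularized least-squares trade-off curve.
np.random.seed(0)
m, n = 100, 10
A = np.random.randn(m, n)
b = np.random.randn(m)

for mu in np.logspace(-3, 3, 7):
    x = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)
    F1 = np.sum((A @ x - b) ** 2)      # ||Ax - b||_2^2
    F2 = np.sum(x ** 2)                # ||x||_2^2
    print(f"mu = {mu:9.3f}   F1 = {F1:8.3f}   F2 = {F2:8.3f}")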
Risk-return trade-off in portfolio optimization
The classical Markowitz portfolio optimization problem described on page 155 is naturally expressed as a bi-criterion problem, where the objectives are the negative
mean return (since we wish to maximize mean return) and the variance of the
return:
minimize (w.r.t. R²_+) (F_1(x), F_2(x)) = (−p̄^T x, x^T Σx)
subject to 1^T x = 1, x ≽ 0.
In forming the associated scalarized problem, we can (without loss of generality) take λ_1 = 1 and λ_2 = μ > 0:
minimize −p̄^T x + μ x^T Σx
subject to 1^T x = 1, x ≽ 0,
which is a QP. In this example too, we get all Pareto optimal portfolios except for the two limiting cases corresponding to μ → 0 and μ → ∞. Roughly speaking, in the first case we get a maximum mean return, without regard for return variance; in the second case we form a minimum variance return, without regard for mean return. Assuming that p̄_k > p̄_i for i ≠ k, i.e., that asset k is the unique asset with maximum mean return, the portfolio allocation x = e_k is the only one corresponding to μ → 0. (In other words, we concentrate the portfolio entirely in the asset that has maximum mean return.) In many portfolio problems asset n corresponds to a risk-free investment, with (deterministic) return r_rf. Assuming that Σ, with its last row and column (which are zero) removed, is full rank, then the other extreme Pareto optimal portfolio is x = e_n, i.e., the portfolio is concentrated entirely in the risk-free asset.
As a specific example, we consider a simple portfolio optimization problem with 4 assets, with price change mean and standard deviations given in the following table.
asset i      pi       Σii^1/2
1            12%      20%
2            10%      10%
3             7%       5%
4             3%       0%
Asset 4 is a risk-free asset, with a (certain) 3% return. Assets 3, 2, and 1 have increasing mean returns, ranging from 7% to 12%, as well as increasing standard deviations, which range from 5% to 20%. The correlation coefficients between the assets are ρ12 = 30%, ρ13 = −40%, and ρ23 = 0%.
Figure 4.12 shows the optimal trade-off curve for this portfolio optimization problem. The plot is given in the conventional way, with the horizontal axis showing standard deviation (i.e., square root of variance) and the vertical axis showing expected return. The lower plot shows the optimal asset allocation vector x for each Pareto optimal point.
The results in this simple example agree with our intuition. For small risk, the optimal allocation consists mostly of the risk-free asset, with a mixture of the other assets in smaller quantities. Note that a mixture of asset 3 and asset 1, which are negatively correlated, gives some hedging, i.e., lowers variance for a given level of mean return. At the other end of the trade-off curve, we see that aggressive growth portfolios (i.e., those with large mean returns) concentrate the allocation in assets 1 and 2, the ones with the largest mean returns (and variances).
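The covariance matrix in this example is determined by the tabulated standard deviations and correlation coefficients, via Σij = ρijσiσj (with ρii = 1). A minimal numpy sketch of the construction, which can be fed to the scalarized QP above:

import numpy as np

pbar  = np.array([0.12, 0.10, 0.07, 0.03])   # mean returns
sigma = np.array([0.20, 0.10, 0.05, 0.00])   # standard deviations
rho = np.eye(4)
rho[0, 1] = rho[1, 0] = 0.30    # rho_12
rho[0, 2] = rho[2, 0] = -0.40   # rho_13
rho[1, 2] = rho[2, 1] = 0.00    # rho_23

# Sigma_ij = rho_ij sigma_i sigma_j; the last row and column are zero,
# consistent with asset 4 being risk-free.
Sigma = rho * np.outer(sigma, sigma)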
Figure 4.12 Top. Optimal risk-return trade-off curve for a simple portfolio optimization problem. The lefthand endpoint corresponds to putting all resources in the risk-free asset, and so has zero standard deviation. The righthand endpoint corresponds to putting all resources in asset 1, which has highest mean return. Bottom. Corresponding optimal allocations.
Bibliography
Linear programming has been studied extensively since the 1940s, and is the subject of many excellent books, including Dantzig [Dan63], Luenberger [Lue84], Schrijver [Sch86], Papadimitriou and Steiglitz [PS98], Bertsimas and Tsitsiklis [BT97], Vanderbei [Van96], and Roos, Terlaky, and Vial [RTV97]. Dantzig and Schrijver also provide detailed accounts of the history of linear programming. For a recent survey, see Todd [Tod02].
Schaible [Sch82, Sch83] gives an overview of fractional programming, which includes linear-fractional problems and extensions such as convex-concave fractional problems (see exercise 4.7). The model of a growing economy in example 4.7 appears in von Neumann [vN46].
Research on quadratic programming began in the 1950s (see, e.g., Frank and Wolfe [FW56], Markowitz [Mar56], Hildreth [Hil57]), and was in part motivated by the portfolio optimization problem discussed on page 155 (Markowitz [Mar52]), and the LP with random cost discussed on page 154 (see Freund [Fre56]).
Interest in second-order cone programming is more recent, and started with Nesterov and Nemirovski [NN94, §6.2.3]. The theory and applications of SOCPs are surveyed by Alizadeh and Goldfarb [AG03], Ben-Tal and Nemirovski [BTN01, lecture 3] (where the problem is referred to as conic quadratic programming), and Lobo, Vandenberghe, Boyd, and Lebret [LVBL98].
Robust linear programming, and robust convex optimization in general, originated with Ben-Tal and Nemirovski [BTN98, BTN99] and El Ghaoui and Lebret [EL97]. Goldfarb and Iyengar [GI03a, GI03b] discuss robust QCQPs and applications in portfolio optimization. El Ghaoui, Oustry, and Lebret [EOL98] focus on robust semidefinite programming.
Geometric programming has been known since the 1960s. Its use in engineering design was first advocated by Duffin, Peterson, and Zener [DPZ67] and Zener [Zen71]. Peterson [Pet76] and Ecker [Eck80] describe the progress made during the 1970s. These articles and books also include examples of engineering applications, in particular in chemical and civil engineering. Fishburn and Dunlop [FD85], Sapatnekar, Rao, Vaidya, and Kang [SRVK93], and Hershenson, Boyd, and Lee [HBL01] apply geometric programming to problems in integrated circuit design. The cantilever beam design example (page 163) is from Vanderplaats [Van84, page 147]. The variational characterization of the Perron-Frobenius eigenvalue (page 165) is proved in Berman and Plemmons [BP94, page 31].
Nesterov and Nemirovski [NN94, chapter 4] introduced the conic form problem (4.49) as a standard problem format in nonlinear convex optimization. The cone programming approach is further developed in Ben-Tal and Nemirovski [BTN01], who also describe numerous applications.
Alizadeh [Ali91] and Nesterov and Nemirovski [NN94, §6.4] were the first to make a systematic study of semidefinite programming, and to point out the wide variety of applications in convex optimization. Subsequent research in semidefinite programming during the 1990s was driven by applications in combinatorial optimization (Goemans and Williamson [GW95]), control (Boyd, El Ghaoui, Feron, and Balakrishnan [BEFB94], Scherer, Gahinet, and Chilali [SGC97], Dullerud and Paganini [DP00]), communications and signal processing (Luo [Luo03], Davidson, Luo, Wong, and Ma [DLW00, MDW+02]), and other areas of engineering. The book edited by Wolkowicz, Saigal, and Vandenberghe [WSV00] and the articles by Todd [Tod01], Lewis and Overton [LO96], and Vandenberghe and Boyd [VB95] provide overviews and extensive bibliographies. Connections between SDP and moment problems, of which we give a simple example on page 170, are explored in detail by Bertsimas and Sethuraman [BS00], Nesterov [Nes00], and Lasserre [Las02]. The fastest mixing Markov chain problem is from Boyd, Diaconis, and Xiao [BDX04].
Multicriterion optimization and Pareto optimality are fundamental tools in economics; see Pareto [Par71], Debreu [Deb59] and Luenberger [Lue95]. The result in example 4.9 is known as the Gauss-Markov theorem (Kailath, Sayed, and Hassibi [KSH00, page 97]).
Exercises
Basic terminology and optimality conditions
4.1 Consider the optimization problem
minimize    f0(x1, x2)
subject to  2x1 + x2 ≥ 1
            x1 + 3x2 ≥ 1
            x1 ≥ 0,   x2 ≥ 0.
Make a sketch of the feasible set. For each of the following objective functions, give the
optimal set and the optimal value.
(a) f0(x1, x2) = x1 + x2.
(b) f0(x1, x2) = −x1 − x2.
(c) f0(x1, x2) = x1.
(d) f0(x1,x2)=max{x1,x2}.
(e) f0(x1, x2) = x1² + 9x2².
4.2 Consider the optimization problem

minimize    f0(x) = −Σmi=1 log(bi − aiTx)

with domain dom f0 = {x | Ax ≺ b}, where A ∈ Rm×n (with rows aiT). We assume that dom f0 is nonempty. Prove the following facts (which include the results quoted without proof on page 141).
(a) dom f0 is unbounded if and only if there exists a v ̸= 0 with Av ≼ 0.
(b) f0 is unbounded below if and only if there exists a v with Av ≼ 0, Av ̸= 0. Hint. There exists a v such that Av ≼ 0, Av ̸= 0 if and only if there exists no z ≻ 0 such that AT z = 0. This follows from the theorem of alternatives in example 2.21, page 50.
(c) If f0 is bounded below then its minimum is attained, i.e., there exists an x that satisfies the optimality condition (4.23).
(d) The optimal set is affine: Xopt = {x⋆ + v | Av = 0}, where x⋆ is any optimal point.
4.3 Prove that x⋆ = (1, 1/2, −1) is optimal for the optimization problem
minimize    (1/2)xTPx + qTx + r
subject to  −1 ≤ xi ≤ 1,   i = 1, 2, 3,

where

P = [ 13  12  −2 ]           [ −22.0 ]
    [ 12  17   6 ],     q =  [ −14.5 ],     r = 1.
    [ −2   6  12 ]           [  13.0 ]
4.4 [P. Parrilo] Symmetries and convex optimization. Suppose G = {Q1, . . . , Qk} ⊆ Rn×n is a group, i.e., closed under products and inverse. We say that the function f : Rn → R is G-invariant, or symmetric with respect to G, if f(Qix) = f(x) holds for all x and i = 1, . . . , k.
We define x̄ = (1/k) Σki=1 Qix, which is the average of x over its G-orbit. We define the fixed subspace of G as

F = {x | Qix = x, i = 1, . . . , k}.

(a) Show that for any x ∈ Rn, we have x̄ ∈ F.
(b) Show that if f : Rn → R is convex and G-invariant, then f(x̄) ≤ f(x).

(c) We say the optimization problem

minimize    f0(x)
subject to  fi(x) ≤ 0,   i = 1, . . . , m

is G-invariant if the objective f0 is G-invariant, and the feasible set is G-invariant, which means

f1(x) ≤ 0, . . . , fm(x) ≤ 0   =⇒   f1(Qix) ≤ 0, . . . , fm(Qix) ≤ 0,

for i = 1, . . . , k. Show that if the problem is convex and G-invariant, and there exists an optimal point, then there exists an optimal point in F. In other words, we can adjoin the equality constraints x ∈ F to the problem, without loss of generality.

(d) As an example, suppose f is convex and symmetric, i.e., f(Px) = f(x) for every permutation P. Show that if f has a minimizer, then it has a minimizer of the form α1. (This means to minimize f over x ∈ Rn, we can just as well minimize f(t1) over t ∈ R.)
4.5 Equivalent convex problems. Show that the following three convex problems are equivalent. Carefully explain how the solution of each problem is obtained from the solution of the other problems. The problem data are the matrix A ∈ Rm×n (with rows aiT), the vector b ∈ Rm, and the constant M > 0.
(a) The robust least-squares problem

minimize    Σmi=1 φ(aiTx − bi),

with variable x ∈ Rn, where φ : R → R is defined as

φ(u) = u²              if |u| ≤ M
       M(2|u| − M)     if |u| > M.

(This function is known as the Huber penalty function; see §6.1.2.)

(b) The least-squares problem with variable weights

minimize    Σmi=1 (aiTx − bi)²/(wi + 1) + M²1Tw
subject to  w ≽ 0,

with variables x ∈ Rn and w ∈ Rm, and domain D = {(x, w) ∈ Rn × Rm | w ≻ −1}. Hint. Optimize over w assuming x is fixed, to establish a relation with the problem in part (a).

(This problem can be interpreted as a weighted least-squares problem in which we are allowed to adjust the weight of the ith residual. The weight is one if wi = 0, and decreases if we increase wi. The second term in the objective penalizes large values of w, i.e., large adjustments of the weights.)

(c) The quadratic program

minimize    Σmi=1 (ui² + 2Mvi)
subject to  −u − v ≼ Ax − b ≼ u + v
            0 ≼ u ≼ M1
            v ≽ 0.
4.6 Handling convex equality constraints. A convex optimization problem can have only linear equality constraint functions. In some special cases, however, it is possible to handle convex equality constraint functions, i.e., constraints of the form h(x) = 0, where h is convex. We explore this idea in this problem.
Consider the optimization problem
minimize f0 (x)
subject to fi(x) ≤ 0, i = 1,…,m (4.65)
h(x) = 0,
where fi and h are convex functions with domain Rn. Unless h is affine, this is not a
convex optimization problem. Consider the related problem
minimize f0 (x)
subject to fi(x) ≤ 0, i = 1,…,m, (4.66)
h(x) ≤ 0,
where the convex equality constraint has been relaxed to a convex inequality. This prob-
lem is, of course, convex.
Now suppose we can guarantee that at any optimal solution x⋆ of the convex prob- lem (4.66), we have h(x⋆) = 0, i.e., the inequality h(x) ≤ 0 is always active at the solution. Then we can solve the (nonconvex) problem (4.65) by solving the convex problem (4.66).
Show that this is the case if there is an index r such that
• f0 is monotonically increasing in xr
• f1, . . . , fm are nondecreasing in xr
• h is monotonically decreasing in xr.
We will see specific examples in exercises 4.31 and 4.58.
4.7 Convex-concave fractional problems. Consider a problem of the form

minimize    f0(x)/(cTx + d)
subject to  fi(x) ≤ 0,   i = 1, . . . , m
            Ax = b,

where f0, f1, . . . , fm are convex, and the domain of the objective function is defined as

{x ∈ dom f0 | cTx + d > 0}.
(a) Show that this is a quasiconvex optimization problem.
(b) Show that the problem is equivalent to
minimize g0(y, t)
subject to gi(y,t) ≤ 0, i = 1,…,m
Ay = bt
cT y + dt = 1,
where gi is the perspective of fi (see §3.2.6). The variables are y ∈ Rn and t ∈ R. Show that this problem is convex.
(c) Following a similar argument, derive a convex formulation for the convex-concave fractional problem
minimize f0 (x)/h(x)
subject to fi(x) ≤ 0, i = 1,…,m
Ax = b
where f0, f1, . . . , fm are convex, h is concave, the domain of the objective function is defined as {x ∈ dom f0 ∩ dom h | h(x) > 0} and f0(x) ≥ 0 everywhere.
As an example, apply your technique to the (unconstrained) problem with
f0(x) = (tr F(x))/m,       h(x) = (det F(x))1/m,
with dom(f0/h) = {x | F(x) ≻ 0}, where F(x) = F0 + x1F1 + · · · + xnFn for given
Linear optimization problems
4.8 Some simple LPs. Give an explicit solution of each of the following LPs.
(a) Minimizing a linear function over an affine set.

minimize    cTx
subject to  Ax = b.
(b) Minimizing a linear function over a halfspace.

minimize    cTx
subject to  aTx ≤ b,
where a ̸= 0.
(c) Minimizing a linear function over a rectangle.

minimize    cTx
subject to  l ≼ x ≼ u,
where l and u satisfy l ≼ u.
(d) Minimizing a linear function over the probability simplex.
minimize    cTx
subject to  1Tx = 1,   x ≽ 0.
What happens if the equality constraint is replaced by an inequality 1T x ≤ 1?
We can interpret this LP as a simple portfolio optimization problem. The vector x represents the allocation of our total budget over different assets, with xi the fraction invested in asset i. The return of each investment is fixed and given by −ci, so our total return (which we want to maximize) is −cT x. If we replace the budget constraint 1T x = 1 with an inequality 1T x ≤ 1, we have the option of not investing a portion of the total budget.
(e) Minimizing a linear function over a unit box with a total budget constraint.

minimize    cTx
subject to  1Tx = α,   0 ≼ x ≼ 1,
where α is an integer between 0 and n. What happens if α is not an integer (but
satisfies 0 ≤ α ≤ n)? What if we change the equality to an inequality 1T x ≤ α?
(f) Minimizing a linear function over a unit box with a weighted budget constraint.
minimize    cTx
subject to  dTx = α,   0 ≼ x ≼ 1,
with d ≻ 0, and 0 ≤ α ≤ 1T d.
4.9 Square LP. Consider the LP

minimize    cTx
subject to  Ax ≼ b,

with A square and nonsingular. Show that the optimal value is given by

p⋆ = cTA−1b   if A−Tc ≼ 0,      p⋆ = −∞   otherwise.
4.10 Converting general LP to standard form. Work out the details on page 147 of §4.3. Explain in detail the relation between the feasible sets, the optimal solutions, and the optimal values of the standard form LP and the original LP.
4.11 Problems involving l1- and l∞-norms. Formulate the following problems as LPs. Explain in detail the relation between the optimal solution of each problem and the solution of its equivalent LP.
(a) Minimize ∥Ax − b∥∞ (l∞-norm approximation).
(b) Minimize ∥Ax − b∥1 (l1-norm approximation).
(c) Minimize ∥Ax − b∥1 subject to ∥x∥∞ ≤ 1.
(d) Minimize ∥x∥1 subject to ∥Ax − b∥∞ ≤ 1.
(e) Minimize ∥Ax − b∥1 + ∥x∥∞.
In each problem, A ∈ Rm×n and b ∈ Rm are given. (See §6.1 for more problems involving
approximation and constrained approximation.)
4.12 Network flow problem. Consider a network of n nodes, with directed links connecting each pair of nodes. The variables in the problem are the flows on each link: xij will denote the flow from node i to node j. The cost of the flow along the link from node i to node j is given by cijxij, where cij are given constants. The total cost across the network is

C = Σni,j=1 cijxij.
Each link flow xij is also subject to a given lower bound lij (usually assumed to be nonnegative) and an upper bound uij.
The external supply at node i is given by bi, where bi > 0 means an external flow enters the network at node i, and bi < 0 means that at node i, an amount |bi| flows out of the network. We assume that 1Tb = 0, i.e., the total external supply equals the total external demand. At each node we have conservation of flow: the total flow into node i along links and the external supply, minus the total flow out along the links, equals zero. The problem is to minimize the total cost of flow through the network, subject to the constraints described above. Formulate this problem as an LP.

4.13 Robust LP with interval coefficients. Consider the problem, with variable x ∈ Rn,

minimize    cTx
subject to  Ax ≼ b for all A ∈ A,

where A ⊆ Rm×n is the set

A = {A ∈ Rm×n | Āij − Vij ≤ Aij ≤ Āij + Vij, i = 1, . . . , m, j = 1, . . . , n}.

(The matrices Ā and V are given.) This problem can be interpreted as an LP where each coefficient of A is only known to lie in an interval, and we require that x must satisfy the constraints for all possible values of the coefficients. Express this problem as an LP. The LP you construct should be efficient, i.e., it should not have dimensions that grow exponentially with n or m.

4.14 Approximating a matrix in infinity norm. The l∞-norm induced norm of a matrix A ∈ Rm×n, denoted ∥A∥∞, is given by

∥A∥∞ = supx̸=0 ∥Ax∥∞/∥x∥∞ = maxi=1,...,m Σnj=1 |aij|.

This norm is sometimes called the max-row-sum norm, for obvious reasons (see §A.1.5). Consider the problem of approximating a matrix, in the max-row-sum norm, by a linear combination of other matrices. That is, we are given k + 1 matrices A0, . . . , Ak ∈ Rm×n, and need to find x ∈ Rk that minimizes

∥A0 + x1A1 + · · · + xkAk∥∞.

Express this problem as a linear program. Explain the significance of any extra variables in your LP. Carefully explain how your LP formulation solves this problem, e.g., what is the relation between the feasible set for your LP and this problem?

4.15 Relaxation of Boolean LP. In a Boolean linear program, the variable x is constrained to have components equal to zero or one:

minimize    cTx
subject to  Ax ≼ b                                   (4.67)
            xi ∈ {0, 1},   i = 1, . . . , n.

In general, such problems are very difficult to solve, even though the feasible set is finite (containing at most 2^n points). In a general method called relaxation, the constraint that xi be zero or one is replaced with the linear inequalities 0 ≤ xi ≤ 1:

minimize    cTx
subject to  Ax ≼ b                                   (4.68)
            0 ≤ xi ≤ 1,   i = 1, . . . , n.

We refer to this problem as the LP relaxation of the Boolean LP (4.67). The LP relaxation is far easier to solve than the original Boolean LP.

(a) Show that the optimal value of the LP relaxation (4.68) is a lower bound on the optimal value of the Boolean LP (4.67). What can you say about the Boolean LP if the LP relaxation is infeasible?

(b) It sometimes happens that the LP relaxation has a solution with xi ∈ {0, 1}. What can you say in this case?

4.16 Minimum fuel optimal control. We consider a linear dynamical system with state x(t) ∈ Rn, t = 0, . . . , N, and actuator or input signal u(t) ∈ R, for t = 0, . . . , N − 1. The dynamics of the system is given by the linear recurrence

x(t + 1) = Ax(t) + bu(t),   t = 0, . . . , N − 1,

where A ∈ Rn×n and b ∈ Rn are given. We assume that the initial state is zero, i.e., x(0) = 0. The minimum fuel optimal control problem is to choose the inputs u(0), . . .
, u(N − 1) so as to minimize the total fuel consumed, which is given by

F = ΣN−1t=0 f(u(t)),

subject to the constraint that x(N) = xdes, where N is the (given) time horizon, and xdes ∈ Rn is the (given) desired final or target state. The function f : R → R is the fuel use map for the actuator, and gives the amount of fuel used as a function of the actuator signal amplitude. In this problem we use

f(a) = |a|           if |a| ≤ 1
       2|a| − 1      if |a| > 1.
This means that fuel use is proportional to the absolute value of the actuator signal, for actuator signals between −1 and 1; for larger actuator signals the marginal fuel efficiency is half.
Formulate the minimum fuel optimal control problem as an LP.
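Since the fuel use map is convex and piecewise linear, it can be written as a pointwise maximum of affine functions, f(a) = max{a, −a, 2a − 1, −2a − 1}; this representation is what opens the door to an LP epigraph formulation. A quick numerical check of the identity (an illustration only, not the requested formulation):

import numpy as np

def fuel(a):
    """Fuel use map from the exercise: |a| for |a| <= 1, 2|a| - 1 otherwise."""
    return np.where(np.abs(a) <= 1, np.abs(a), 2 * np.abs(a) - 1)

# f is the pointwise maximum of four affine functions of a, which is what
# permits an epigraph (LP) formulation with one extra variable per input.
a = np.linspace(-3, 3, 601)
assert np.allclose(fuel(a), np.max([a, -a, 2 * a - 1, -2 * a - 1], axis=0))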
4.17 Optimal activity levels. We consider the selection of n nonnegative activity levels, denoted x1, . . . , xn. These activities consume m resources, which are limited. Activity j consumes Aijxj of resource i, where Aij are given. The total resource consumption is additive, so the total of resource i consumed is ci = Σnj=1 Aijxj. (Ordinarily we have Aij ≥ 0, i.e.,
activity j consumes resource i. But we allow the possibility that Aij < 0, which means that activity j actually generates resource i as a by-product.) Each resource consumption is limited: we must have ci ≤ cimax, where cimax are given. Each activity generates revenue, which is a piecewise-linear concave function of the activity level:

rj(xj) = pjxj                         if 0 ≤ xj ≤ qj
         pjqj + pjdisc(xj − qj)       if xj ≥ qj.

Here pj > 0 is the basic price, qj > 0 is the quantity discount level, and pjdisc is the quantity discount price, for (the product of) activity j. (We have 0 < pjdisc < pj.) The total revenue is the sum of the revenues associated with each activity, i.e., Σnj=1 rj(xj). The goal is to choose activity levels that maximize the total revenue while respecting the resource limits. Show how to formulate this problem as an LP.

4.18 Separating hyperplanes and spheres. Suppose you are given two sets of points in Rn, {v1, v2, . . . , vK} and {w1, w2, . . . , wL}. Formulate the following two problems as LP feasibility problems.

(a) Determine a hyperplane that separates the two sets, i.e., find a ∈ Rn and b ∈ R with a ̸= 0 such that

aTvi ≤ b,   i = 1, . . . , K,        aTwi ≥ b,   i = 1, . . . , L.

Note that we require a ̸= 0, so you have to make sure that your formulation excludes the trivial solution a = 0, b = 0. You can assume that

rank [ v1  v2  · · ·  vK  w1  w2  · · ·  wL ] = n + 1
     [ 1   1   · · ·  1   1   1   · · ·  1  ]

(i.e., the affine hull of the K + L points has dimension n).

(b) Determine a sphere separating the two sets of points, i.e., find xc ∈ Rn and R ≥ 0 such that

∥vi − xc∥2 ≤ R,   i = 1, . . . , K,        ∥wi − xc∥2 ≥ R,   i = 1, . . . , L.

(Here xc is the center of the sphere; R is its radius.)

(See chapter 8 for more on separating hyperplanes, separating spheres, and related topics.)

4.19 Consider the problem

minimize    ∥Ax − b∥1/(cTx + d)
subject to  ∥x∥∞ ≤ 1,

where A ∈ Rm×n, b ∈ Rm, c ∈ Rn, and d ∈ R. We assume that d > ∥c∥1, which implies that cTx + d > 0 for all feasible x.
(a) Show that this is a quasiconvex optimization problem.
(b) Show that it is equivalent to the convex optimization problem
minimize ∥Ay − bt∥1 subject to ∥y∥∞ ≤ t
cT y + dt = 1,
with variables y ∈ Rn, t ∈ R.
4.20 Power assignment in a wireless communication system. We consider n transmitters with powers p1, . . . , pn ≥ 0, transmitting to n receivers. These powers are the optimization variables in the problem. We let G ∈ Rn×n denote the matrix of path gains from the transmitters to the receivers; Gij ≥ 0 is the path gain from transmitter j to receiver i. The signal power at receiver i is then Si = Giipi, and the interference power at receiver i is Ii = Σk̸=i Gikpk. The signal to interference plus noise ratio, denoted SINR, at receiver i, is given by Si/(Ii + σi), where σi > 0 is the (self-) noise power in receiver i. The objective in the problem is to maximize the minimum SINR ratio, over all receivers, i.e., to maximize

mini=1,...,n  Si/(Ii + σi).

There are a number of constraints on the powers that must be satisfied, in addition to the obvious one pi ≥ 0. The first is a maximum allowable power for each transmitter, i.e., pi ≤ Pimax, where Pimax > 0 is given. In addition, the transmitters are partitioned into groups, with each group sharing the same power supply, so there is a total power constraint for each group of transmitter powers. More precisely, we have subsets K1, . . . , Km of {1, . . . , n} with K1 ∪ · · · ∪ Km = {1, . . . , n}, and Kj ∩ Kl = ∅ if j ̸= l. For each group Kl, the total associated transmitter power cannot exceed Plgp > 0:

Σk∈Kl pk ≤ Plgp,   l = 1, . . . , m.

Finally, we have a limit Pirc > 0 on the total received power at each receiver:

Σnk=1 Gikpk ≤ Pirc,   i = 1, . . . , n.

(This constraint reflects the fact that the receivers will saturate if the total received power is too large.)

Formulate the SINR maximization problem as a generalized linear-fractional program.
Quadratic optimization problems
4.21 Some simple QCQPs. Give an explicit solution of each of the following QCQPs.
(a) Minimizing a linear function over an ellipsoid centered at the origin.

minimize    cTx
subject to  xTAx ≤ 1,
where A ∈ Sn++ and c ̸= 0. What is the solution if the problem is not convex (A ̸∈ Sn+)?
(b) Minimizing a linear function over an ellipsoid.

minimize    cTx
subject to  (x − xc)TA(x − xc) ≤ 1,

where A ∈ Sn++ and c ̸= 0.
(c) Minimizing a quadratic form over an ellipsoid centered at the origin.

minimize    xTBx
subject to xT Ax ≤ 1,
where A ∈ Sn++ and B ∈ Sn+. Also consider the nonconvex extension with B ̸∈ Sn+.
(See §B.1.)
4.22 Consider the QCQP
minimize    (1/2)xTPx + qTx + r
subject to  xTx ≤ 1,
with P ∈ Sn++. Show that x⋆ = −(P + λI)−1q, where λ = max{0, λ̄} and λ̄ is the largest solution of the nonlinear equation
qT (P + λI)−2q = 1.
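The characterization also suggests a simple computational scheme, not part of the exercise statement: g(λ) = qT(P + λI)−2q is decreasing in λ ≥ 0 when P ≻ 0, so λ̄ can be located by bisection. A numpy sketch under these assumptions:

import numpy as np

def trust_region_qp(P, q):
    """Minimize (1/2)x'Px + q'x + r over ||x||_2 <= 1 (P positive definite),
    using the characterization x* = -(P + lam I)^{-1} q from the exercise."""
    n = len(q)
    I = np.eye(n)
    g = lambda lam: q @ np.linalg.solve(P + lam * I,
                                        np.linalg.solve(P + lam * I, q))
    if g(0.0) <= 1.0:
        lam = 0.0                         # unconstrained minimizer is feasible
    else:
        lo, hi = 0.0, np.linalg.norm(q)   # g(hi) < 1 since P is positive definite
        for _ in range(100):              # bisect on g(lam) = 1
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) > 1.0 else (lo, mid)
        lam = 0.5 * (lo + hi)
    return -np.linalg.solve(P + lam * I, q)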
4.23 l4-norm approximation via QCQP. Formulate the l4-norm approximation problem
minimize    ∥Ax − b∥4 = (Σmi=1 (aiTx − bi)4)1/4
as a QCQP. The matrix A ∈ Rm×n (with rows aTi ) and the vector b ∈ Rm are given.
4.24 Complex l1-, l2- and l∞-norm approximation. Consider the problem

minimize    ∥Ax − b∥p,

where A ∈ Cm×n, b ∈ Cm, and the variable is x ∈ Cn. The complex lp-norm is defined by

∥y∥p = (Σmi=1 |yi|p)1/p

for p ≥ 1, and ∥y∥∞ = maxi=1,...,m |yi|. For p = 1, 2, and ∞, express the complex lp-norm approximation problem as a QCQP or SOCP with real variables and data.
4.25 Linear separation of two sets of ellipsoids. Suppose we are given K + L ellipsoids

Ei = {Piu + qi | ∥u∥2 ≤ 1},   i = 1, . . . , K + L,

where Pi ∈ Sn. We are interested in finding a hyperplane that strictly separates E1, . . . , EK from EK+1, . . . , EK+L, i.e., we want to compute a ∈ Rn, b ∈ R such that

aTx + b > 0 for x ∈ E1 ∪ · · · ∪ EK,        aTx + b < 0 for x ∈ EK+1 ∪ · · · ∪ EK+L,

or prove that no such hyperplane exists. Express this problem as an SOCP feasibility problem.

4.26 Hyperbolic constraints as SOC constraints. Verify that x ∈ Rn, y, z ∈ R satisfy

xTx ≤ yz,   y ≥ 0,   z ≥ 0

if and only if

∥(2x, y − z)∥2 ≤ y + z,   y ≥ 0,   z ≥ 0.

Use this observation to cast the following problems as SOCPs.

(a) Maximizing harmonic mean.

maximize   (Σmi=1 1/(aiTx − bi))−1,

with domain {x | Ax ≻ b}, where aiT is the ith row of A.

(b) Maximizing geometric mean.

maximize   (Πmi=1 (aiTx − bi))1/m,

with domain {x | Ax ≽ b}, where aiT is the ith row of A.

4.27 Matrix fractional minimization via SOCP. Express the following problem as an SOCP:

minimize    (Ax + b)T(I + B diag(x)BT)−1(Ax + b)
subject to  x ≽ 0,

with A ∈ Rm×n, b ∈ Rm, B ∈ Rm×n. The variable is x ∈ Rn.

Hint. First show that the problem is equivalent to

minimize    vTv + wT diag(x)−1w
subject to  v + Bw = Ax + b
            x ≽ 0,

with variables v ∈ Rm, w, x ∈ Rn. (If xi = 0 we interpret wi²/xi as zero if wi = 0 and as ∞ otherwise.) Then use the results of exercise 4.26.

4.28 Robust quadratic programming. In §4.4.2 we discussed robust linear programming as an application of second-order cone programming. In this problem we consider a similar robust variation of the (convex) quadratic program

minimize    (1/2)xTPx + qTx + r
subject to  Ax ≼ b.

For simplicity we assume that only the matrix P is subject to errors, and the other parameters (q, r, A, b) are exactly known. The robust quadratic program is defined as

minimize    supP∈E ((1/2)xTPx + qTx + r)
subject to  Ax ≼ b,

where E is the set of possible matrices P. For each of the following sets E, express the robust QP as a convex problem. Be as specific as you can. If the problem can be expressed in a standard form (e.g., QP, QCQP, SOCP, SDP), say so.

(a) A finite set of matrices: E = {P1, . . . , PK}, where Pi ∈ Sn+, i = 1, . . . , K.

(b) A set specified by a nominal value P0 ∈ Sn+ plus a bound on the eigenvalues of the deviation P − P0:

E = {P ∈ Sn | −γI ≼ P − P0 ≼ γI},

where γ ∈ R and P0 ∈ Sn+.

(c) An ellipsoid of matrices:

E = {P0 + ΣKi=1 Piui | ∥u∥2 ≤ 1}.

You can assume Pi ∈ Sn+, i = 0, . . . , K.

4.29 Maximizing probability of satisfying a linear inequality. Let c be a random variable in Rn, normally distributed with mean c̄ and covariance matrix R. Consider the problem

maximize    prob(cTx ≥ α)
subject to  Fx ≼ g,   Ax = b.

Assuming there exists a feasible point x̃ for which c̄Tx̃ ≥ α, show that this problem is equivalent to a convex or quasiconvex optimization problem. Formulate the problem as a QP, QCQP, or SOCP (if the problem is convex), or explain how you can solve it by solving a sequence of QP, QCQP, or SOCP feasibility problems (if the problem is quasiconvex).

Geometric programming

4.30 A heated fluid at temperature T (degrees above ambient temperature) flows in a pipe with fixed length and circular cross section with radius r. A layer of insulation, with thickness w ≪ r, surrounds the pipe to reduce heat loss through the pipe walls. The design variables in this problem are T, r, and w.

The heat loss is (approximately) proportional to Tr/w, so over a fixed lifetime, the energy cost due to heat loss is given by α1Tr/w. The cost of the pipe, which has a fixed wall thickness, is approximately proportional to the total material, i.e., it is given by α2r. The cost of the insulation is also approximately proportional to the total insulation material, i.e., α3rw (using w ≪ r). The total cost is the sum of these three costs.

The heat flow down the pipe is entirely due to the flow of the fluid, which has a fixed velocity, i.e., it is given by α4Tr². The constants αi are all positive, as are the variables T, r, and w.

Now the problem: maximize the total heat flow down the pipe, subject to an upper limit Cmax on total cost, and the constraints

Tmin ≤ T ≤ Tmax,   rmin ≤ r ≤ rmax,   wmin ≤ w ≤ wmax,   w ≤ 0.1r.

Express this problem as a geometric program.

4.31 Recursive formulation of optimal beam design problem. Show that the GP (4.46) is equivalent to the GP

minimize    ΣNi=1 wihi
subject to  wi/wmax ≤ 1,   wmin/wi ≤ 1,   i = 1, . . . , N
            hi/hmax ≤ 1,   hmin/hi ≤ 1,   i = 1, . . . , N
            hi/(wiSmax) ≤ 1,   Sminwi/hi ≤ 1,   i = 1, . . . , N
            6iF/(σmaxwihi²) ≤ 1,   i = 1, . . . , N
            (2i − 1)di/vi + vi+1/vi ≤ 1,   i = 1, . . . , N
            (i − 1/3)di/yi + vi+1/yi + yi+1/yi ≤ 1,   i = 1, . . . , N
            y1/ymax ≤ 1
            Ewihi³di/(6F) = 1,   i = 1, . . . , N.

The variables are wi, hi, vi, di, yi for i = 1, . . . , N.

4.32 Approximating a function as a monomial. Suppose the function f : Rn → R is differentiable at a point x0 ≻ 0, with f(x0) > 0. How would you find a monomial function f̂ : Rn → R such that f(x0) = f̂(x0) and for x near x0, f̂(x) is very near f(x)?
4.33 Express the following problems as convex optimization problems.
(a) Minimize max{p(x), q(x)}, where p and q are posynomials.
(b) Minimize exp(p(x)) + exp(q(x)), where p and q are posynomials.
(c) Minimize p(x)/(r(x) − q(x)), subject to r(x) > q(x), where p, q are posynomials, and r is a monomial.
4.34 Log-convexity of Perron-Frobenius eigenvalue. Let A ∈ Rn×n be an elementwise positive matrix, i.e., Aij > 0. (The results of this problem hold for irreducible nonnegative matrices as well.) Let λpf(A) denote its Perron-Frobenius eigenvalue, i.e., its eigenvalue of largest magnitude. (See the definition and the example on page 165.) Show that log λpf(A) is a convex function of log Aij. This means, for example, that we have the inequality
λpf (C) ≤ (λpf (A)λpf (B))1/2 ,
where Cij = (AijBij)1/2, and A and B are elementwise positive matrices.
Hint. Use the characterization of the Perron-Frobenius eigenvalue given in (4.47), or, alternatively, use the characterization
log λpf(A) = limk→∞ (1/k) log(1TAk1).
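The claimed inequality is easy to probe numerically. The sketch below (numpy, random data, purely illustrative) compares the Perron-Frobenius eigenvalues of A, B, and the elementwise geometric mean C:

import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 2.0, (5, 5))
B = rng.uniform(0.1, 2.0, (5, 5))
C = np.sqrt(A * B)                       # C_ij = (A_ij B_ij)^{1/2}

# For an elementwise positive matrix, the Perron-Frobenius eigenvalue is
# its (real, positive) eigenvalue of largest magnitude.
pf = lambda M: np.max(np.abs(np.linalg.eigvals(M)))

assert pf(C) <= np.sqrt(pf(A) * pf(B)) + 1e-12   # the claimed inequality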
4.35 Signomial and geometric programs. A signomial is a linear combination of monomials of some positive variables x1, . . . , xn. Signomials are more general than posynomials, which are signomials with all positive coefficients. A signomial program is an optimization problem of the form
minimize f0 (x)
subject to fi(x) ≤ 0, i = 1,…,m
hi(x) = 0, i = 1,…,p,
where f0,…,fm and h1,…,hp are signomials. In general, signomial programs are very difficult to solve.
Some signomial programs can be transformed to GPs, and therefore solved efficiently. Show how to do this for a signomial program of the following form:
• The objective signomial f0 is a posynomial, i.e., its terms have only positive coeffi- cients.
• Each inequality constraint signomial f1 , . . . , fm has exactly one term with a negative coefficient: fi = pi − qi where pi is posynomial, and qi is monomial.
• Each equality constraint signomial h1, . . . , hp has exactly one term with a positive coefficient and one term with a negative coefficient: hi = ri − si where ri and si are monomials.
4.36 Explain how to reformulate a general GP as an equivalent GP in which every posynomial (in the objective and constraints) has at most two monomial terms. Hint. Express each sum (of monomials) as a sum of sums, each with two terms.
4.37 Generalized posynomials and geometric programming. Let x1, . . . , xn be positive variables, and suppose the functions fi : Rn → R, i = 1,…,k, are posynomials of x1,…,xn. If φ : Rk → R is a polynomial with nonnegative coefficients, then the composition
h(x) = φ(f1(x), . . . , fk(x)) (4.69)
is a posynomial, since posynomials are closed under products, sums, and multiplication by nonnegative scalars. For example, suppose f1 and f2 are posynomials, and consider the polynomial φ(z1, z2) = 3z1²z2 + 2z1 + 3z2³ (which has nonnegative coefficients). Then h = 3f1²f2 + 2f1 + 3f2³ is a posynomial.
In this problem we consider a generalization of this idea, in which φ is allowed to be a posynomial, i.e., can have fractional exponents. Specifically, assume that φ : Rk → R is a posynomial, with all its exponents nonnegative. In this case we will call the function h defined in (4.69) a generalized posynomial. As an example, suppose f1 and f2 are posynomials, and consider the posynomial (with nonnegative exponents) φ(z1, z2) = 2z1^0.3 z2^1.2 + z1z2^0.5 + 2. Then the function

h(x) = 2f1(x)^0.3 f2(x)^1.2 + f1(x)f2(x)^0.5 + 2
is a generalized posynomial. Note that it is not a posynomial, however (unless f1 and f2 are monomials or constants).
A generalized geometric program (GGP) is an optimization problem of the form
minimize    h0(x)
subject to  hi(x) ≤ 1,   i = 1, . . . , m                    (4.70)
            gi(x) = 1,   i = 1, . . . , p,

where g1, . . . , gp are monomials, and h0, . . . , hm are generalized posynomials.
Show how to express this generalized geometric program as an equivalent geometric pro- gram. Explain any new variables you introduce, and explain how your GP is equivalent to the GGP (4.70).
Semidefinite programming and conic form problems
4.38 LMIs and SDPs with one variable. The generalized eigenvalues of a matrix pair (A,B), where A, B ∈ Sn, are defined as the roots of the polynomial det(λB − A) (see §A.5.3). Suppose B is nonsingular, and that A and B can be simultaneously diagonalized by a congruence, i.e., there exists a nonsingular R ∈ Rn×n such that
RT AR = diag(a), RT BR = diag(b),
where a,b ∈ Rn. (A sufficient condition for this to hold is that there exists t1, t2 such
that t1A + t2B ≻ 0.)
(a) Show that the generalized eigenvalues of (A,B) are real, and given by λi = ai/bi,
i = 1,…,n.
(b) Express the solution of the SDP
minimize    ct
subject to  tB ≼ A,
with variable t ∈ R, in terms of a and b.
4.39 SDPs and congruence transformations. Consider the SDP
minimize cT x
subject to x1F1 +x2F2 +···+xnFn +G ≼ 0,
with Fi,G ∈ Sk, c ∈ Rn.
(a) Suppose R ∈ Rk×k is nonsingular. Show that the SDP is equivalent to the SDP
minimize cT x
subject to  x1F̃1 + x2F̃2 + · · · + xnF̃n + G̃ ≼ 0,

where F̃i = RTFiR, G̃ = RTGR.
(b) Suppose there exists a nonsingular R such that F ̃i and G ̃ are diagonal. Show that
the SDP is equivalent to an LP.
(c) Suppose there exists a nonsingular R such that F ̃i and G ̃ have the form
F ̃i=􏰒αiI ai 􏰓, i=1,…,n, G ̃=􏰒βI b􏰓, aTi αi bT β
where αi,β ∈ R, ai,b ∈ Rk−1. Show that the SDP is equivalent to an SOCP with a single second-order cone constraint.
4.40 LPs, QPs, QCQPs, and SOCPs as SDPs. Express the following problems as SDPs.

(a) The LP (4.27).

(b) The QP (4.34), the QCQP (4.35) and the SOCP (4.36). Hint. Suppose A ∈ Sr++, C ∈ Ss, and B ∈ Rr×s. Then

[ A   B ]
[ BT  C ] ≽ 0   ⇐⇒   C − BTA−1B ≽ 0.

For a more complete statement, which applies also to singular A, and a proof, see §A.5.5.

(c) The matrix fractional optimization problem

minimize    (Ax + b)TF(x)−1(Ax + b)

where A ∈ Rm×n, b ∈ Rm,

F(x) = F0 + x1F1 + · · · + xnFn,

with Fi ∈ Sm, and we take the domain of the objective to be {x | F(x) ≻ 0}. You can assume the problem is feasible (there exists at least one x with F(x) ≻ 0).
4.41 LMI tests for copositive matrices and P0-matrices. A matrix A ∈ Sn is said to be copositive if xTAx ≥ 0 for all x ≽ 0 (see exercise 2.35). A matrix A ∈ Rn×n is said to be a P0-matrix if maxi=1,...,n xi(Ax)i ≥ 0 for all x. Checking whether a matrix is copositive or a P0-matrix is very difficult in general. However, there exist useful sufficient conditions that can be verified using semidefinite programming.
(a) Show that A is copositive if it can be decomposed as a sum of a positive semidefinite and an elementwise nonnegative matrix:
A=B+C, B≽0, Cij ≥0, i,j=1,…,n. (4.71) Express the problem of finding B and C that satisfy (4.71) as an SDP feasibility
problem.
(b) Show that A is a P0-matrix if there exists a positive diagonal matrix D such that
DA + ATD ≽ 0.                    (4.72)

Express the problem of finding a D that satisfies (4.72) as an SDP feasibility problem.
4.42 Complex LMIs and SDPs. A complex LMI has the form x1F1 +···+xnFn +G≼0
where F1,…,Fn, G are complex n×n Hermitian matrices, i.e., FiH = Fi, GH = G, and x ∈ Rn is a real variable. A complex SDP is the problem of minimizing a (real) linear function of x subject to a complex LMI constraint.
Complex LMIs and SDPs can be transformed to real LMIs and SDPs, using the fact that X≽0⇐⇒􏰒RX −IX􏰓≽0,
IX RX
where RX ∈ Rn×n is the real part of the complex Hermitian matrix X, and IX ∈ Rn×n
is the imaginary part of X.
Verify this result, and show how to pose a complex SDP as a real SDP.
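Before proving the equivalence, it can be spot-checked numerically: for a random Hermitian positive semidefinite X, the real symmetric embedding above is also positive semidefinite (its spectrum is that of X, with each eigenvalue doubled in multiplicity). A numpy sketch:

import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = B @ B.conj().T                       # random Hermitian PSD matrix

# Real symmetric embedding [[Re X, -Im X], [Im X, Re X]] of X.
R, I = X.real, X.imag
big = np.block([[R, -I], [I, R]])

assert np.min(np.linalg.eigvalsh(X)) >= -1e-10
assert np.min(np.linalg.eigvalsh(big)) >= -1e-10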
4.43 Eigenvalue optimization via SDP. Suppose A : Rn → Sm is affine, i.e., A(x)=A0 +x1A1 +···+xnAn
where Ai ∈ Sm. Let λ1(x) ≥ λ2(x) ≥ ··· ≥ λm(x) denote the eigenvalues of A(x). Show how to pose the following problems as SDPs.
(a) Minimize the maximum eigenvalue λ1(x).
(b) Minimize the spread of the eigenvalues, λ1(x) − λm(x).
(c) Minimize the condition number of A(x), subject to A(x) ≻ 0. The condition number is defined as κ(A(x)) = λ1(x)/λm(x), with domain {x | A(x) ≻ 0}. You may assume that A(x) ≻ 0 for at least one x.
Hint. You need to minimize λ/γ, subject to
0 ≺ γI ≼ A(x) ≼ λI.
Change variables to y = x/γ, t = λ/γ, s = 1/γ.
(d) Minimize the sum of the absolute values of the eigenvalues, |λ1 (x)| + · · · + |λm (x)|.
Hint. Express A(x) as A(x) = A+ − A−, where A+ ≽ 0, A− ≽ 0.
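Part (a), for instance, is the SDP of minimizing t subject to A(x) ≼ tI; in a modeling package the epigraph variable t can stay implicit. A cvxpy sketch, with random symmetric data (cvxpy and the data are assumptions of this illustration, not part of the exercise):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 3
sym = lambda M: (M + M.T) / 2                       # symmetrize
A_mats = [sym(rng.standard_normal((m, m))) for _ in range(n + 1)]

x = cp.Variable(n)
A_of_x = A_mats[0] + sum(x[i] * A_mats[i + 1] for i in range(n))

# Minimizing the maximum eigenvalue of the affine matrix function A(x)
# is the SDP: minimize t subject to A(x) <= t I.
prob = cp.Problem(cp.Minimize(cp.lambda_max(A_of_x)))
prob.solve()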
4.44 Optimization over polynomials. Pose the following problem as an SDP. Find the polynomial p : R → R,

p(t) = x1 + x2t + · · · + x2k+1 t^{2k},

that satisfies given bounds li ≤ p(ti) ≤ ui, at m specified points ti, and, of all the polynomials that satisfy these bounds, has the greatest minimum value:
maximize inft p(t)
subject to li ≤ p(ti) ≤ ui, i = 1,…,m.
The variables are x ∈ R2k+1.
Hint. Use the LMI characterization of nonnegative polynomials derived in exercise 2.37,
part (b).
4.45 [Nes00, Par00] Sum-of-squares representation via LMIs. Consider a polynomial p : Rn → R of degree 2k. The polynomial is said to be positive semidefinite (PSD) if p(x) ≥ 0 for all x ∈ Rn. Except for special cases (e.g., n = 1 or k = 1), it is extremely difficult to determine whether or not a given polynomial is PSD, let alone solve an optimization problem, with the coefficients of p as variables, with the constraint that p be PSD.
A famous sufficient condition for a polynomial to be PSD is that it have the form

p(x) = Σri=1 qi(x)²,

for some polynomials qi, with degree no more than k. A polynomial p that has this sum-of-squares form is called SOS.

The condition that a polynomial p be SOS (viewed as a constraint on its coefficients) turns out to be equivalent to an LMI, and therefore a variety of optimization problems, with SOS constraints, can be posed as SDPs. You will explore these ideas in this problem.
(a) Let f1, . . . , fs be all monomials of degree k or less. (Here we mean monomial in the standard sense, i.e., x1^{m1} · · · xn^{mn}, where mi ∈ Z+, and not in the sense used in geometric programming.) Show that if p can be expressed as a positive semidefinite quadratic form p = fTVf, with V ∈ Ss+, then p is SOS. Conversely, show that if p is SOS, then it can be expressed as a positive semidefinite quadratic form in the monomials, i.e., p = fTVf, for some V ∈ Ss+.
(b) Show that the condition p = fTVf is a set of linear equality constraints relating the coefficients of p and the matrix V. Combined with part (a) above, this shows that the condition that p be SOS is equivalent to a set of linear equalities relating V and the coefficients of p, and the matrix inequality V ≽ 0.

(c) Work out the LMI conditions for SOS explicitly for the case where p is a polynomial of degree four in two variables.
4.46 Multidimensional moments. The moments of a random variable t on R2 are defined as μij = E t1^i t2^j, where i, j are nonnegative integers. In this problem we derive necessary conditions for a set of numbers μij, 0 ≤ i, j ≤ 2k, i + j ≤ 2k, to be the moments of a distribution on R2.
Let p : R2 → R be a polynomial of degree k with coefficients cij ,
p(t) = Σki=0 Σk−ij=0 cij t1^i t2^j,
and let t be a random variable with moments μij. Suppose c ∈ R(k+1)(k+2)/2 contains the coefficients cij in some specific order, and μ ∈ R(k+1)(2k+1) contains the moments μij in the same order. Show that E p(t)2 can be expressed as a quadratic form in c:
E p(t)² = cTH(μ)c,
where H : R(k+1)(2k+1) → S(k+1)(k+2)/2 is a linear function of μ. From this, conclude that μ must satisfy the LMI H(μ) ≽ 0.
Remark: For random variables on R, the matrix H can be taken as the Hankel matrix defined in (4.52). In this case, H(μ) ≽ 0 is a necessary and sufficient condition for μ to be the moments of a distribution, or the limit of a sequence of moments. On R2, however, the LMI is only a necessary condition.
4.47 Maximum determinant positive semidefinite matrix completion. We consider a matrix A ∈ Sn, with some entries specified, and the others not specified. The positive semidefinite matrix completion problem is to determine values of the unspecified entries of the matrix so that A ≽ 0 (or to determine that such a completion does not exist).
(a) Explain why we can assume without loss of generality that the diagonal entries of A are specified.
(b) Show how to formulate the positive semidefinite completion problem as an SDP feasibility problem.
(c) Assume that A has at least one completion that is positive definite, and the diagonal entries of A are specified (i.e., fixed). The positive definite completion with largest determinant is called the maximum determinant completion. Show that the maximum determinant completion is unique. Show that if A⋆ is the maximum determinant completion, then (A⋆)−1 has zeros in all the entries of the original matrix that were not specified. Hint. The gradient of the function f(X) = log det X is ∇f(X) = X−1 (see §A.4.1).
(d) Suppose A is specified on its tridiagonal part, i.e., we are given A11,…,Ann and A12, . . . , An−1,n. Show that if there exists a positive definite completion of A, then there is a positive definite completion whose inverse is tridiagonal.
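The maximum determinant completion of part (c) is itself a convex problem that a modeling package can state directly. A cvxpy sketch, with a small tridiagonally specified matrix as placeholder data (cf. part (d)); printing the inverse of the optimizer exhibits the zero pattern claimed above:

import cvxpy as cp
import numpy as np

# Tridiagonal part of A is specified; the remaining entries are free.
n = 4
diag_vals = [2.0, 3.0, 2.5, 2.0]
offdiag_vals = [0.5, -0.4, 0.3]

X = cp.Variable((n, n), symmetric=True)
constraints = [X[i, i] == diag_vals[i] for i in range(n)]
constraints += [X[i, i + 1] == offdiag_vals[i] for i in range(n - 1)]

# Maximize log det X over positive definite completions (the log_det
# objective implicitly enforces X > 0).
cp.Problem(cp.Maximize(cp.log_det(X)), constraints).solve()

# Per the exercise, the inverse vanishes in the unspecified
# (off-tridiagonal) entries.
print(np.round(np.linalg.inv(X.value), 4))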
4.48 Generalized eigenvalue minimization. Recall (from example 3.37, or §A.5.3) that the largest generalized eigenvalue of a pair of matrices (A, B) ∈ Sk × Sk++ is given by
λmax(A, B) = supu̸=0 (uTAu)/(uTBu) = max{λ | det(λB − A) = 0}.
As we have seen, this function is quasiconvex (if we take Sk × Sk++ as its domain).
We consider the problem
minimize    λmax(A(x), B(x))                    (4.73)
where A, B : Rn → Sk are affine functions, defined as
A(x)=A0 +x1A1 +···+xnAn, B(x)=B0 +x1B1 +···+xnBn.
with Ai, Bi ∈ Sk.
(a) Give a family of convex functions φt : Sk × Sk → R, that satisfy
λmax(A,B) ≤ t ⇐⇒ φt(A,B) ≤ 0
for all (A,B) ∈ Sk ×Sk++. Show that this allows us to solve (4.73) by solving a
sequence of convex feasibility problems.
(b) Give a family of matrix-convex functions Φt : Sk × Sk → Sk that satisfy
λmax(A,B) ≤ t ⇐⇒ Φt(A,B) ≼ 0
for all (A,B) ∈ Sk ×Sk++. Show that this allows us to solve (4.73) by solving a
sequence of convex feasibility problems with LMI constraints.
(c) Suppose B(x) = (aT x+b)I, with a ̸= 0. Show that (4.73) is equivalent to the convex problem
minimize    λmax(sA0 + y1A1 + · · · + ynAn)
subject to  aTy + bs = 1
            s ≥ 0,

with variables y ∈ Rn, s ∈ R.
4.49 Generalized fractional programming. Let K ⊆ Rm be a proper cone. Show that the function f0 : Rn → R, defined by

f0(x) = inf{t | Cx + d ≼K t(Fx + g)},    dom f0 = {x | Fx + g ≻K 0},

with C, F ∈ Rm×n, d, g ∈ Rm, is quasiconvex.
A quasiconvex optimization problem with objective function of this form is called a generalized fractional program. Express the generalized linear-fractional program of page 152 and the generalized eigenvalue minimization problem (4.73) as generalized fractional programs.
Vector and multicriterion optimization
4.50 Bi-criterion optimization. Figure 4.11 shows the optimal trade-off curve and the set of
achievable values for the bi-criterion optimization problem minimize (w.r.t. R2+ ) (∥Ax − b∥2 , ∥x∥2 ),
for some A ∈ R100×10, b ∈ R100. Answer the following questions using information from the plot. We denote by xls the solution of the least-squares problem

minimize    ∥Ax − b∥2.

(a) What is ∥xls∥2?
(b) What is ∥Axls − b∥2?
(c) What is ∥b∥2?
(d) Give the optimal value of the problem

minimize    ∥Ax − b∥2
subject to  ∥x∥2 = 1.

(e) Give the optimal value of the problem

minimize    ∥Ax − b∥2
subject to  ∥x∥2 ≤ 1.

(f) Give the optimal value of the problem

minimize    ∥Ax − b∥2 + ∥x∥2.

(g) What is the rank of A?

4.51 Monotone transformation of objective in vector optimization. Consider the vector optimization problem (4.56). Suppose we form a new vector optimization problem by replacing the objective f0 with φ ◦ f0, where φ : Rq → Rq satisfies

u ≼K v, u ̸= v   =⇒   φ(u) ≼K φ(v), φ(u) ̸= φ(v).

Show that a point x is Pareto optimal (or optimal) for one problem if and only if it is Pareto optimal (optimal) for the other, so the two problems are equivalent. In particular, composing each objective in a multicriterion problem with an increasing function does not affect the Pareto optimal points.

4.52 Pareto optimal points and the boundary of the set of achievable values. Consider a vector optimization problem with cone K. Let P denote the set of Pareto optimal values, and let O denote the set of achievable objective values. Show that P ⊆ O ∩ bd O, i.e., every Pareto optimal value is an achievable objective value that lies in the boundary of the set of achievable objective values.

4.53 Suppose the vector optimization problem (4.56) is convex. Show that the set

A = O + K = {t ∈ Rq | f0(x) ≼K t for some feasible x}

is convex. Also show that the minimal elements of A are the same as the minimal points of O.

4.54 Scalarization and optimal points. Suppose a (not necessarily convex) vector optimization problem has an optimal point x⋆. Show that x⋆ is a solution of the associated scalarized problem for any choice of λ ≻K∗ 0. Also show the converse: If a point x is a solution of the scalarized problem for any choice of λ ≻K∗ 0, then it is an optimal point for the (not necessarily convex) vector optimization problem.

4.55 Generalization of weighted-sum scalarization. In §4.7.4 we showed how to obtain Pareto optimal solutions of a vector optimization problem by replacing the vector objective f0 : Rn → Rq with the scalar objective λTf0, where λ ≻K∗ 0. Let ψ : Rq → R be a K-increasing function, i.e., satisfying

u ≼K v, u ̸= v   =⇒   ψ(u) < ψ(v).

Show that any solution of the problem

minimize    ψ(f0(x))
subject to  fi(x) ≤ 0,   i = 1, . . . , m
            hi(x) = 0,   i = 1, . . . , p

is Pareto optimal for the vector optimization problem

minimize (w.r.t. K)   f0(x)
subject to  fi(x) ≤ 0,   i = 1, . . . , m
            hi(x) = 0,   i = 1, . . . , p.

Note that ψ(u) = λTu, where λ ≻K∗ 0, is a special case. As a related example, show that in a multicriterion optimization problem (i.e., a vector optimization problem with f0 = F : Rn → Rq, and K = Rq+), a unique solution of the scalar optimization problem

minimize    maxi=1,...,q Fi(x)
subject to  fi(x) ≤ 0,   i = 1, . . . , m
            hi(x) = 0,   i = 1, . . . , p,

is Pareto optimal.

Miscellaneous problems

4.56 [P. Parrilo] We consider the problem of minimizing the convex function f0 : Rn → R over the convex hull of the union of some convex sets, conv(∪qi=1 Ci). These sets are described via convex inequalities,

Ci = {x | fij(x) ≤ 0, j = 1, . . . , ki},

where fij : Rn → R are convex. Our goal is to formulate this problem as a convex optimization problem.

The obvious approach is to introduce variables x1, . . . , xq ∈ Rn, with xi ∈ Ci, θ ∈ Rq with θ ≽ 0, 1Tθ = 1, and a variable x ∈ Rn, with x = θ1x1 + · · · + θqxq. This equality constraint is not affine in the variables, so this approach does not yield a convex problem. A more sophisticated formulation is given by

minimize    f0(x)
subject to  sifij(zi/si) ≤ 0,   i = 1, . . . , q,   j = 1, . . . , ki
            1Ts = 1,   s ≽ 0
            x = z1 + · · · + zq,

with variables z1, . . . , zq ∈ Rn, x ∈ Rn, and s1, . . . , sq ∈ R. (When si = 0, we take sifij(zi/si) to be 0 if zi = 0 and ∞ if zi ̸= 0.) Explain why this problem is convex, and equivalent to the original problem.

4.57 Capacity of a communication channel. We consider a communication channel, with input X(t) ∈ {1, . . . , n}, and output Y(t) ∈ {1, . . . , m}, for t = 1, 2, . . . (in seconds, say). The relation between the input and the output is given statistically:

pij = prob(Y(t) = i | X(t) = j),   i = 1, . . . , m,   j = 1, . . . , n.

The matrix P ∈ Rm×n is called the channel transition matrix, and the channel is called a discrete memoryless channel. A famous result of Shannon states that information can be sent over the communication channel, with arbitrarily small probability of error, at any rate less than a number C, called the channel capacity, in bits per second. Shannon also showed that the capacity of a discrete memoryless channel can be found by solving an optimization problem. Assume that X has a probability distribution denoted x ∈ Rn, i.e.,

xj = prob(X = j),   j = 1, . . . , n.

The mutual information between X and Y is given by

I(X; Y) = Σmi=1 Σnj=1 xjpij log2( pij / Σnk=1 xkpik ).

Then the channel capacity C is given by

C = supx I(X; Y),

where the supremum is over all possible probability distributions for the input X, i.e., over x ≽ 0, 1Tx = 1.

Show how the channel capacity can be computed using convex optimization. Hint. Introduce the variable y = Px, which gives the probability distribution of the output Y, and show that the mutual information can be expressed as

I(X; Y) = cTx − Σmi=1 yi log2 yi,

where cj = Σmi=1 pij log2 pij, j = 1, . . . , n.

4.58 Optimal consumption. In this problem we consider the optimal way to consume (or spend) an initial amount of money (or other asset) k0 over time. The variables are c0, . . . , cT, where ct ≥ 0 denotes the consumption in period t. The utility derived from a consumption level c is given by u(c), where u : R → R is an increasing concave function.
The present value of the utility derived from the consumption is given by

U = ΣTt=0 βt u(ct),

where 0 < β < 1 is a discount factor.

Let kt denote the amount of money available for investment in period t. We assume that it earns an investment return given by f(kt), where f : R → R is an increasing, concave investment return function, which satisfies f(0) = 0. For example if the funds earn simple interest at rate R percent per period, we have f(a) = (R/100)a. The amount to be consumed, i.e., ct, is withdrawn at the end of the period, so we have the recursion

kt+1 = kt + f(kt) − ct,   t = 0, . . . , T.

The initial sum k0 > 0 is given. We require kt ≥ 0, t = 1, . . . , T + 1 (but more sophisticated models, which allow kt < 0, can be considered).

Show how to formulate the problem of maximizing U as a convex optimization problem. Explain how the problem you formulate is equivalent to this one, and exactly how the two are related. Hint. Show that we can replace the recursion for kt given above with the inequalities

kt+1 ≤ kt + f(kt) − ct,   t = 0, . . . , T.

(Interpretation: the inequalities give you the option of throwing money away in each period.) For a more general version of this trick, see exercise 4.6.

4.59 Robust optimization. In some optimization problems there is uncertainty or variation in the objective and constraint functions, due to parameters or factors that are either beyond our control or unknown. We can model this situation by making the objective and constraint functions f0, . . . , fm functions of the optimization variable x ∈ Rn and a parameter vector u ∈ Rk that is unknown, or varies. In the stochastic optimization approach, the parameter vector u is modeled as a random variable with a known distribution, and we work with the expected values Eu fi(x, u). In the worst-case analysis approach, we are given a set U that u is known to lie in, and we work with the maximum or worst-case values supu∈U fi(x, u). To simplify the discussion, we assume there are no equality constraints.

(a) Stochastic optimization. We consider the problem

minimize    E f0(x, u)
subject to  E fi(x, u) ≤ 0,   i = 1, . . . , m,

where the expectation is with respect to u. Show that if fi are convex in x for each u, then this stochastic optimization problem is convex.

(b) Worst-case optimization. We consider the problem

minimize    supu∈U f0(x, u)
subject to  supu∈U fi(x, u) ≤ 0,   i = 1, . . . , m.

Show that if fi are convex in x for each u, then this worst-case optimization problem is convex.

(c) Finite set of possible parameter values. The observations made in parts (a) and (b) are most useful when we have analytical or easily evaluated expressions for the expected values E fi(x, u) or the worst-case values supu∈U fi(x, u).

Suppose the set of possible values of the parameter is finite, i.e., we have u ∈ {u1, . . . , uN}. For the stochastic case, we are also given the probabilities of each value: prob(u = ui) = pi, where p ∈ RN, p ≽ 0, 1Tp = 1. In the worst-case formulation, we simply take U = {u1, . . . , uN}.

Show how to set up the worst-case and stochastic optimization problems explicitly (i.e., give explicit expressions for supu∈U fi and Eu fi).

4.60 Log-optimal investment strategy. We consider a portfolio problem with n assets held over N periods. At the beginning of each period, we re-invest our total wealth, redistributing it over the n assets using a fixed, constant, allocation strategy x ∈ Rn, where x ≽ 0, 1Tx = 1. In other words, if W(t − 1) is our wealth at the beginning of period t, then during period t we invest xiW(t − 1) in asset i. We denote by λ(t) the total return during period t, i.e., λ(t) = W(t)/W(t − 1). At the end of the N periods our wealth has been multiplied by the factor ΠNt=1 λ(t). We call

(1/N) ΣNt=1 log λ(t)

the growth rate of the investment over the N periods. We are interested in determining an allocation strategy x that maximizes growth of our total wealth for large N.

We use a discrete stochastic model to account for the uncertainty in the returns. We assume that during each period there are m possible scenarios, with probabilities πj, j = 1, . . . , m. In scenario j, the return for asset i over one period is given by pij. Therefore, the return λ(t) of our portfolio during period t is a random variable, with m possible values pT1x, . . . , pTmx, and distribution

πj = prob(λ(t) = pTjx),   j = 1, . . . , m.

We assume the same scenarios for each period, with (identical) independent distributions. Using the law of large numbers, we have

limN→∞ (1/N) log(W(N)/W(0)) = limN→∞ (1/N) ΣNt=1 log λ(t) = E log λ(t) = Σmj=1 πj log(pTjx).

In other words, with investment strategy x, the long term growth rate is given by

Rlt = Σmj=1 πj log(pTjx).

The investment strategy x that maximizes this quantity is called the log-optimal investment strategy, and can be found by solving the optimization problem

maximize    Σmj=1 πj log(pTjx)
subject to  x ≽ 0,   1Tx = 1,

with variable x ∈ Rn. Show that this is a convex optimization problem.

4.61 Optimization with logistic model. A random variable X ∈ {0, 1} satisfies

prob(X = 1) = p = exp(aTx + b)/(1 + exp(aTx + b)),

where x ∈ Rn is a vector of variables that affect the probability, and a and b are known parameters. We can think of X = 1 as the event that a consumer buys a product, and x as a vector of variables that affect the probability, e.g., advertising effort, retail price, discounted price, packaging expense, and other factors. The variable x, which we are to optimize over, is subject to a set of linear constraints, Fx ≼ g.

Formulate the following problems as convex optimization problems.

(a) Maximizing buying probability. The goal is to choose x to maximize p.

(b) Maximizing expected profit. Let cTx + d be the profit derived from selling the product, which we assume is positive for all feasible x. The goal is to maximize the expected profit, which is p(cTx + d).

4.62 Optimal power and bandwidth allocation in a Gaussian broadcast channel. We consider a communication system in which a central node transmits messages to n receivers. (‘Gaussian’ refers to the type of noise that corrupts the transmissions.) Each receiver channel is characterized by its (transmit) power level Pi ≥ 0 and its bandwidth Wi ≥ 0. The power and bandwidth of a receiver channel determine its bit rate Ri (the rate at which information can be sent) via

Ri = αiWi log(1 + βiPi/Wi),

where αi and βi are known positive constants. For Wi = 0, we take Ri = 0 (which is what you get if you take the limit as Wi → 0).

The powers must satisfy a total power constraint, which has the form

P1 + · · · + Pn = Ptot,

where Ptot > 0 is a given total power available to allocate among the channels. Similarly,
the bandwidths must satisfy
W1 +···+Wn =Wtot,
where Wtot > 0 is the (given) total available bandwidth. The optimization variables in this problem are the powers and bandwidths, i.e., P1, . . . , Pn, W1, . . . , Wn.
The objective is to maximize the total utility

∑_{i=1}^n ui(Ri),
where ui : R → R is the utility function associated with the ith receiver. (You can think of ui(Ri) as the revenue obtained for providing a bit rate Ri to receiver i, so the objective is to maximize the total revenue.) You can assume that the utility functions ui are nondecreasing and concave.
Pose this problem as a convex optimization problem.
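One possible DCP formulation, as a minimal sketch assuming the cvxpy package and hypothetical data: it uses the identity Wi log(1 + βiPi/Wi) = −rel_entr(Wi, Wi + βiPi), the perspective of log(1 + βiPi), and takes linear utilities ui(Ri) = Ri in place of general concave nondecreasing ui.

    import cvxpy as cp
    import numpy as np

    # Hypothetical data for the broadcast channel.
    n = 4
    alpha = np.ones(n)
    beta = np.array([1.0, 2.0, 3.0, 4.0])
    P_tot, W_tot = 1.0, 1.0

    P = cp.Variable(n, nonneg=True)   # powers
    W = cp.Variable(n, nonneg=True)   # bandwidths

    # R_i = alpha_i * W_i * log(1 + beta_i * P_i / W_i), written in DCP form as
    # -rel_entr(W_i, W_i + beta_i P_i) = W_i * log((W_i + beta_i P_i)/W_i); concave.
    R = cp.multiply(alpha, -cp.rel_entr(W, W + cp.multiply(beta, P)))

    prob = cp.Problem(cp.Maximize(cp.sum(R)),      # linear utilities u_i(R_i) = R_i
                      [cp.sum(P) == P_tot, cp.sum(W) == W_tot])
    prob.solve()
    print(P.value, W.value, prob.value)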
4.63 Optimally balancing manufacturing cost and yield. The vector x ∈ Rn denotes the nominal parameters in a manufacturing process. The yield of the process, i.e., the fraction of manufactured goods that is acceptable, is given by Y(x). We assume that Y is log-concave (which is often the case; see example 3.43). The cost per unit to manufacture the product is given by cT x, where c ∈ Rn. The cost per acceptable unit is cT x/Y(x). We want to minimize cT x/Y(x), subject to some convex constraints on x such as linear inequalities Ax ≼ b. (You can assume that over the feasible set we have cT x > 0 and Y(x) > 0.) This problem is not a convex or quasiconvex optimization problem, but it can be solved using convex optimization and a one-dimensional search. The basic ideas are given below; you must supply all details and justification.
(a) Show that the function f : R → R given by f(a)=sup{Y(x)|Ax≼b, cTx=a},
which gives the maximum yield versus cost, is log-concave. This means that by solving a convex optimization problem (in x) we can evaluate the function f.
(b) Suppose that we evaluate the function f for enough values of a to give a good approx- imation over the range of interest. Explain how to use these data to (approximately) solve the problem of minimizing cost per good product.
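A minimal numeric sketch of this two-step procedure, assuming cvxpy and numpy and a hypothetical log-concave yield Y(x) = exp(−∥x − x0∥²), so that part (a) becomes a concave maximization; all data below are made up for illustration.

    import cvxpy as cp
    import numpy as np

    # Hypothetical log-concave yield model: log Y(x) = -||x - x0||^2 is concave.
    n = 2
    x0 = np.array([1.0, 2.0])
    c = np.array([1.0, 0.5])
    A = np.vstack([np.eye(n), -np.eye(n)])
    b = np.array([3.0, 3.0, 0.0, 0.0])           # box constraints 0 <= x <= 3

    def log_f(a):
        """Part (a): maximum of log Y(x) over {Ax <= b, c^T x = a}."""
        x = cp.Variable(n)
        prob = cp.Problem(cp.Maximize(-cp.sum_squares(x - x0)),
                          [A @ x <= b, c @ x == a])
        prob.solve()
        return prob.value                        # -inf if infeasible

    # Part (b): one-dimensional search over cost a, minimizing a/f(a),
    # i.e., minimizing log a - log f(a).
    a_grid = np.linspace(0.5, 4.0, 50)
    vals = [np.log(a) - log_f(a) for a in a_grid]
    a_best = a_grid[int(np.argmin(vals))]
    print("approximately optimal cost per unit:", a_best)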
4.64 Optimization with recourse. In an optimization problem with recourse, also called two- stage optimization, the cost function and constraints depend not only on our choice of variables, but also on a discrete random variable s ∈ {1, . . . , S}, which is interpreted as specifying which of S scenarios occurred. The scenario random variable s has known probability distribution π, with πi = prob(s = i), i = 1, . . . , S.
In two-stage optimization, we are to choose the values of two variables, x ∈ Rn and z ∈ Rq. The variable x must be chosen before the particular scenario s is known; the variable z, however, is chosen after the value of the scenario random variable is known. In other words, z is a function of the scenario random variable s. To describe our choice z, we list the values we would choose under the different scenarios, i.e., we list the vectors
z1,…,zS ∈Rq.
Here z3 is our choice of z when s = 3 occurs, and so on. The set of values
x∈Rn, z1,…,zS ∈Rq
is called the policy, since it tells us what choice to make for x (independent of which scenario occurs), and also, what choice to make for z in each possible scenario.
The variable z is called the recourse variable (or second-stage variable), since it allows us to take some action or make a choice after we know which scenario occurred. In contrast, our choice of x (which is called the first-stage variable) must be made without any knowledge of the scenario.
For simplicity we will consider the case with no constraints. The cost function is given by
f : Rn × Rq × {1, . . . , S} → R,
where f(x,z,i) gives the cost when the first-stage choice x is made, second-stage choice z is made, and scenario i occurs. We will take as the overall objective, to be minimized over all policies, the expected cost
E f(x, zs, s) = ∑_{i=1}^S πi f(x, zi, i).
Suppose that f is a convex function of (x, z), for each scenario i = 1, . . . , S. Explain how to find an optimal policy, i.e., one that minimizes the expected cost over all possible policies, using convex optimization.
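Concretely, one can solve for the first-stage variable x and all S recourse variables in a single convex problem. A minimal sketch, assuming cvxpy and a hypothetical convex cost (the data and the quadratic form of f are illustrative only):

    import cvxpy as cp
    import numpy as np

    # Hypothetical convex cost: f(x, z, i) = ||x - a_i||^2 + ||z - b_i||^2.
    n, q, S = 3, 2, 4
    rng = np.random.default_rng(0)
    a = rng.standard_normal((S, n))
    b_ = rng.standard_normal((S, q))
    pi = np.full(S, 1.0 / S)

    x = cp.Variable(n)                        # first-stage (scenario-independent)
    z = [cp.Variable(q) for _ in range(S)]    # one recourse variable per scenario

    expected_cost = sum(pi[i] * (cp.sum_squares(x - a[i]) + cp.sum_squares(z[i] - b_[i]))
                        for i in range(S))
    cp.Problem(cp.Minimize(expected_cost)).solve()
    print(x.value)   # z[i].value gives the recourse action in scenario i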
4.65 Optimal operation of a hybrid vehicle. A hybrid vehicle has an internal combustion engine, a motor/generator connected to a storage battery, and a conventional (friction) brake. In this exercise we consider a (highly simplified) model of a parallel hybrid vehicle, in which both the motor/generator and the engine are directly connected to the drive wheels. The engine can provide power to the wheels, and the brake can take power from the wheels, turning it into heat. The motor/generator can act as a motor, when it uses energy stored in the battery to deliver power to the wheels, or as a generator, when it takes power from the wheels or engine, and uses the power to charge the battery. When the generator takes power from the wheels and charges the battery, it is called regenerative braking; unlike ordinary friction braking, the energy taken from the wheels is stored, and can be used later. The vehicle is judged by driving it over a known, fixed test track to evaluate its fuel efficiency.
A diagram illustrating the power flow in the hybrid vehicle is shown below. The arrows indicate the direction in which the power flow is considered positive. The engine power peng, for example, is positive when it is delivering power; the brake power pbr is positive when it is taking power from the wheels. The power preq is the required power at the wheels. It is positive when the wheels require power (e.g., when the vehicle accelerates, climbs a hill, or cruises on level terrain). The required wheel power is negative when the vehicle must decelerate rapidly, or descend a hill.
[Diagram: power flow in the hybrid vehicle. The engine (peng) and the motor/generator (pmg), which is connected to the battery, are both coupled to the wheels; the brake (pbr) takes power from the wheels, and preq is the required power at the wheels.]
All of these powers are functions of time, which we discretize in one second intervals, with t = 1,2,…,T. The required wheel power preq(1),…,preq(T) is given. (The speed of the vehicle on the track is specified, so together with known road slope information, and known aerodynamic and other losses, the power required at the wheels can be calculated.)
Power is conserved, which means we have
preq(t) = peng(t) + pmg(t) − pbr(t), t = 1,…,T.
The brake can only dissipate power, so we have pbr(t) ≥ 0 for each t. The engine can only provide power, and only up to a given limit Pmax, i.e., we have
0 ≤ peng(t) ≤ P_eng^max,  t = 1,…,T.

The motor/generator power is also limited: pmg must satisfy

P_mg^min ≤ pmg(t) ≤ P_mg^max,  t = 1,…,T.

Here P_mg^max > 0 is the maximum motor power, and −P_mg^min > 0 is the maximum generator
power.
The battery charge or energy at time t is denoted E(t), t = 1,…,T + 1. The battery energy satisfies
E(t + 1) = E(t) − pmg(t) − η|pmg(t)|, t = 1,…,T,
where η > 0 is a known parameter. (The term −pmg(t) represents the energy removed from or added to the battery by the motor/generator, ignoring any losses. The term −η|pmg(t)| represents energy lost through inefficiencies in the battery or motor/generator.)
The battery charge must be between 0 (empty) and its limit E_batt^max (full), at all times. (If E(t) = 0, the battery is fully discharged, and no more energy can be extracted from it; when E(t) = E_batt^max, the battery is full and cannot be charged.) To make the comparison
with non-hybrid vehicles fair, we fix the initial battery charge to equal the final battery charge, so the net energy change is zero over the track: E(1) = E(T + 1). We do not specify the value of the initial (and final) energy.
The objective in the problem (to be minimized) is the total fuel consumed by the engine, which is

Ftotal = ∑_{t=1}^T F(peng(t)),

where F : R → R is the fuel use characteristic of the engine. We assume that F is positive, increasing, and convex.

Formulate this problem as a convex optimization problem, with variables peng(t), pmg(t), and pbr(t) for t = 1,…,T, and E(t) for t = 1,…,T+1. Explain why your formulation is equivalent to the problem described above.
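One possible convex formulation, as a minimal cvxpy sketch under hypothetical data: F(p) = p + γp² stands in for any positive, increasing, convex fuel characteristic, and the battery recursion is relaxed to an inequality (throwing energy away is allowed, in the spirit of the hint to the consumption exercise above), which makes the constraint convex.

    import cvxpy as cp
    import numpy as np

    T = 100
    t = np.arange(T)
    p_req = 10 * np.sin(2 * np.pi * t / T) + 2.0      # hypothetical wheel power demand
    Pmax_eng, Pmin_mg, Pmax_mg = 20.0, -6.0, 6.0
    Emax_batt, eta, gamma = 40.0, 0.1, 0.05

    p_eng = cp.Variable(T)
    p_mg = cp.Variable(T)
    p_br = cp.Variable(T, nonneg=True)
    E = cp.Variable(T + 1)

    constraints = [
        p_req == p_eng + p_mg - p_br,                 # power balance at the wheels
        p_eng >= 0, p_eng <= Pmax_eng,
        p_mg >= Pmin_mg, p_mg <= Pmax_mg,
        E >= 0, E <= Emax_batt,
        E[0] == E[T],                                 # zero net battery energy change
        # Relaxed battery recursion: the right-hand side is concave, so this is
        # a convex constraint; one can argue it is tight at an optimum.
        E[1:] <= E[:-1] - p_mg - eta * cp.abs(p_mg),
    ]
    fuel = cp.sum(p_eng + gamma * cp.square(p_eng))   # hypothetical F(p) = p + gamma p^2
    cp.Problem(cp.Minimize(fuel), constraints).solve()
    print("total fuel:", fuel.value)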

Chapter 5 Duality
5.1 The Lagrange dual function

5.1.1 The Lagrangian
We consider an optimization problem in the standard form (4.1):
minimize f0(x)
subject to fi(x) ≤ 0,  i = 1,…,m    (5.1)
           hi(x) = 0,  i = 1,…,p,
with variable x ∈ Rn. We assume its domain D = ⋂_{i=0}^m dom fi ∩ ⋂_{i=1}^p dom hi is nonempty, and denote the optimal value of (5.1) by p⋆. We do not assume the problem (5.1) is convex.
The basic idea in Lagrangian duality is to take the constraints in (5.1) into account by augmenting the objective function with a weighted sum of the constraint functions. We define the Lagrangian L : Rn × Rm × Rp → R associated with the problem (5.1) as
L(x,λ,ν) = f0(x) + ∑_{i=1}^m λifi(x) + ∑_{i=1}^p νihi(x),
with dom L = D × Rm × Rp. We refer to λi as the Lagrange multiplier associated with the ith inequality constraint fi(x) ≤ 0; similarly we refer to νi as the Lagrange multiplier associated with the ith equality constraint hi(x) = 0. The vectors λ and ν are called the dual variables or Lagrange multiplier vectors associated with the problem (5.1).

5.1.2 The Lagrange dual function
We define the Lagrange dual function (or just dual function) g : Rm × Rp → R as the minimum value of the Lagrangian over x: for λ ∈ Rm, ν ∈ Rp,
g(λ,ν) = inf_{x∈D} L(x,λ,ν) = inf_{x∈D} ( f0(x) + ∑_{i=1}^m λifi(x) + ∑_{i=1}^p νihi(x) ).
When the Lagrangian is unbounded below in x, the dual function takes on the value −∞. Since the dual function is the pointwise infimum of a family of affine functions of (λ,ν), it is concave, even when the problem (5.1) is not convex.
5.1.3 Lower bounds on optimal value
The dual function yields lower bounds on the optimal value p⋆ of the problem (5.1): For any λ ≽ 0 and any ν we have
g(λ,ν) ≤ p⋆.    (5.2)

This important property is easily verified. Suppose x̃ is a feasible point for the problem (5.1), i.e., fi(x̃) ≤ 0 and hi(x̃) = 0, and λ ≽ 0. Then we have

∑_{i=1}^m λifi(x̃) + ∑_{i=1}^p νihi(x̃) ≤ 0,

since each term in the first sum is nonpositive, and each term in the second sum is zero, and therefore

L(x̃,λ,ν) = f0(x̃) + ∑_{i=1}^m λifi(x̃) + ∑_{i=1}^p νihi(x̃) ≤ f0(x̃).

Hence

g(λ,ν) = inf_{x∈D} L(x,λ,ν) ≤ L(x̃,λ,ν) ≤ f0(x̃).

Since g(λ,ν) ≤ f0(x̃) holds for every feasible point x̃, the inequality (5.2) follows. The lower bound (5.2) is illustrated in figure 5.1, for a simple problem with x ∈ R and one inequality constraint.

The inequality (5.2) holds, but is vacuous, when g(λ,ν) = −∞. The dual function gives a nontrivial lower bound on p⋆ only when λ ≽ 0 and (λ,ν) ∈ dom g, i.e., g(λ,ν) > −∞. We refer to a pair (λ,ν) with λ ≽ 0 and (λ,ν) ∈ dom g as dual feasible, for reasons that will become clear later.
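The lower bound property is easy to check numerically. The following minimal sketch (numpy assumed; a hypothetical one-dimensional nonconvex problem in the spirit of figure 5.1) evaluates g(λ) by brute force on a grid and verifies g(λ) ≤ p⋆:

    import numpy as np

    # Hypothetical one-dimensional nonconvex problem.
    x = np.linspace(-2.0, 2.0, 4001)
    f0 = x**4 - x**2 + 0.5 * x          # nonconvex objective
    f1 = x**2 - 1.0                     # constraint f1(x) <= 0, feasible set [-1, 1]

    p_star = f0[f1 <= 0].min()          # optimal value by exhaustive search

    for lam in [0.0, 0.25, 0.5, 1.0]:
        g = (f0 + lam * f1).min()       # g(lambda) = inf_x L(x, lambda)
        assert g <= p_star + 1e-12
        print(f"lambda = {lam:4.2f}:  g = {g:8.4f}  <=  p* = {p_star:8.4f}")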

Figure 5.1 Lower bound from a dual feasible point. The solid curve shows the objective function f0, and the dashed curve shows the constraint function f1. The feasible set is the interval [−0.46, 0.46], which is indicated by the two dotted vertical lines. The optimal point and value are x⋆ = −0.46, p⋆ = 1.54 (shown as a circle). The dotted curves show L(x,λ) for λ = 0.1, 0.2, …, 1.0. Each of these has a minimum value smaller than p⋆, since on the feasible set (and for λ ≥ 0) we have L(x,λ) ≤ f0(x).

Figure 5.2 The dual function g for the problem in figure 5.1. Neither f0 nor f1 is convex, but the dual function is concave. The horizontal dashed line shows p⋆, the optimal value of the problem.

5.1.4 Linear approximation interpretation

The Lagrangian and lower bound property can be given a simple interpretation, based on a linear approximation of the indicator functions of the sets {0} and −R+.

We first rewrite the original problem (5.1) as an unconstrained problem,

minimize f0(x) + ∑_{i=1}^m I−(fi(x)) + ∑_{i=1}^p I0(hi(x)),    (5.3)

where I− : R → R is the indicator function for the nonpositive reals,

I−(u) = { 0,  u ≤ 0
        { ∞,  u > 0,
and similarly, I0 is the indicator function of {0}. In the formulation (5.3), the func- tion I−(u) can be interpreted as expressing our irritation or displeasure associated with a constraint function value u = fi(x): It is zero if fi(x) ≤ 0, and infinite if fi(x) > 0. In a similar way, I0(u) gives our displeasure for an equality constraint value u = hi(x). We can think of I− as a “brick wall” or “infinitely hard” displea- sure function; our displeasure rises from zero to infinite as fi(x) transitions from nonpositive to positive.
Now suppose in the formulation (5.3) we replace the function I−(u) with the linear function λiu, where λi ≥ 0, and the function I0(u) with νiu. The objective becomes the Lagrangian function L(x, λ, ν), and the dual function value g(λ, ν) is the optimal value of the problem
minimize L(x,λ,ν) = f0(x) + ∑_{i=1}^m λifi(x) + ∑_{i=1}^p νihi(x).    (5.4)
In this formulation, we use a linear or “soft” displeasure function in place of I− and I0. For an inequality constraint, our displeasure is zero when fi(x) = 0, and is positive when fi(x) > 0 (assuming λi > 0); our displeasure grows as the constraint becomes “more violated”. Unlike the original formulation, in which any nonpositive value of fi(x) is acceptable, in the soft formulation we actually derive pleasure from constraints that have margin, i.e., from fi(x) < 0.

Clearly the approximation of the indicator function I−(u) with a linear function λiu is rather poor. But the linear function is at least an underestimator of the indicator function. Since λiu ≤ I−(u) and νiu ≤ I0(u) for all u, we see immediately that the dual function yields a lower bound on the optimal value of the original problem.

The idea of replacing the “hard” constraints with “soft” versions will come up again when we consider interior-point methods (§11.2.1).

5.1.5 Examples

In this section we give some examples for which we can derive an analytical expression for the Lagrange dual function.

Least-squares solution of linear equations

We consider the problem

minimize xT x
subject to Ax = b,    (5.5)

where A ∈ Rp×n. This problem has no inequality constraints and p (linear) equality constraints. The Lagrangian is L(x,ν) = xT x + νT (Ax − b), with domain Rn × Rp. The dual function is given by g(ν) = inf_x L(x,ν). Since L(x,ν) is a convex quadratic function of x, we can find the minimizing x from the optimality condition

∇xL(x,ν) = 2x + AT ν = 0,

which yields x = −(1/2)AT ν. Therefore the dual function is

g(ν) = L(−(1/2)AT ν, ν) = −(1/4)νT AAT ν − bT ν,

which is a concave quadratic function, with domain Rp. The lower bound property (5.2) states that for any ν ∈ Rp, we have

−(1/4)νT AAT ν − bT ν ≤ inf{xT x | Ax = b}.
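As a quick numeric sanity check of this bound (numpy assumed, random data, A with full row rank): the primal optimum is bT(AAT)⁻¹b, and the concave quadratic g is maximized at ν = −2(AAT)⁻¹b, where the bound is tight.

    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 3, 6
    A = rng.standard_normal((p, n))
    b = rng.standard_normal(p)

    # min of x^T x over Ax = b is b^T (A A^T)^{-1} b (least-norm solution).
    p_star = b @ np.linalg.solve(A @ A.T, b)

    def g(nu):
        return -0.25 * nu @ (A @ A.T) @ nu - b @ nu

    for _ in range(3):
        nu = rng.standard_normal(p)               # any nu gives a valid lower bound
        assert g(nu) <= p_star + 1e-9

    nu_star = -2.0 * np.linalg.solve(A @ A.T, b)  # solves grad g = -(1/2) A A^T nu - b = 0
    print(g(nu_star), p_star)                     # equal: the bound is tight here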
Standard form LP

Consider an LP in standard form,

minimize cT x
subject to Ax = b    (5.6)
           x ≽ 0,

which has inequality constraint functions fi(x) = −xi, i = 1,…,n. To form the Lagrangian we introduce multipliers λi for the n inequality constraints and multipliers νi for the equality constraints, and obtain

L(x,λ,ν) = cT x − ∑_{i=1}^n λixi + νT (Ax − b) = −bT ν + (c + AT ν − λ)T x.

The dual function is

g(λ,ν) = inf_x L(x,λ,ν) = −bT ν + inf_x (c + AT ν − λ)T x,

which is easily determined analytically, since a linear function is bounded below only when it is identically zero. Thus, g(λ,ν) = −∞ except when c + AT ν − λ = 0, in which case it is −bT ν:

g(λ,ν) = { −bT ν,  AT ν − λ + c = 0
         { −∞,  otherwise.

Note that the dual function g is finite only on a proper affine subset of Rm × Rp. We will see that this is a common occurrence. The lower bound property (5.2) is nontrivial only when λ and ν satisfy λ ≽ 0 and AT ν − λ + c = 0. When this occurs, −bT ν is a lower bound on the optimal value of the LP (5.6).

Two-way partitioning problem

We consider the (nonconvex) problem

minimize xT Wx
subject to xi² = 1,  i = 1,…,n,    (5.7)

where W ∈ Sn. The constraints restrict the values of xi to 1 or −1, so the problem is equivalent to finding the vector with components ±1 that minimizes xT Wx. The feasible set here is finite (it contains 2^n points) so this problem can in principle be solved by simply checking the objective value of each feasible point. Since the number of feasible points grows exponentially, however, this is possible only for small problems (say, with n ≤ 30). In general (and for n larger than, say, 50) the problem (5.7) is very difficult to solve.

We can interpret the problem (5.7) as a two-way partitioning problem on a set of n elements, say, {1,…,n}: A feasible x corresponds to the partition

{1,…,n} = {i | xi = −1} ∪ {i | xi = 1}.

The matrix coefficient Wij can be interpreted as the cost of having the elements i and j in the same partition, and −Wij is the cost of having i and j in different partitions. The objective in (5.7) is the total cost, over all pairs of elements, and the problem (5.7) is to find the partition with least total cost.

We now derive the dual function for this problem. The Lagrangian is

L(x,ν) = xT Wx + ∑_{i=1}^n νi(xi² − 1) = xT (W + diag(ν))x − 1T ν.

We obtain the Lagrange dual function by minimizing over x:

g(ν) = inf_x ( xT (W + diag(ν))x − 1T ν ) = { −1T ν,  W + diag(ν) ≽ 0
                                            { −∞,  otherwise,

where we use the fact that the infimum of a quadratic form is either zero (if the form is positive semidefinite) or −∞ (if the form is not positive semidefinite).

This dual function provides lower bounds on the optimal value of the difficult problem (5.7). For example, we can take the specific value of the dual variable

ν = −λmin(W)1,

which is dual feasible, since

W + diag(ν) = W − λmin(W)I ≽ 0.

This yields the bound on the optimal value p⋆

p⋆ ≥ −1T ν = nλmin(W).    (5.8)

Remark 5.1 This lower bound on p⋆ can also be obtained without using the Lagrange dual function. First, we replace the constraints x1² = 1, …, xn² = 1 with

∑_{i=1}^n xi² = n,    (5.9)

to obtain the modified problem

minimize xT Wx
subject to ∑_{i=1}^n xi² = n.

The constraints of the original problem (5.7) imply the constraint here, so the optimal value of the problem (5.9) is a lower bound on p⋆, the optimal value of (5.7). But the modified problem (5.9) is easily solved as an eigenvalue problem, with optimal value nλmin(W).

5.1.6 The Lagrange dual function and conjugate functions

Recall from §3.3 that the conjugate f* of a function f : Rn → R is given by

f*(y) = sup_{x∈dom f} (yT x − f(x)).

The conjugate function and Lagrange dual function are closely related. To see one simple connection, consider the problem

minimize f(x)
subject to x = 0

(which is not very interesting, and solvable by inspection). This problem has Lagrangian L(x,ν) = f(x) + νT x, and dual function

g(ν) = inf_x (f(x) + νT x) = −sup_x ((−ν)T x − f(x)) = −f*(−ν).

More generally (and more usefully), consider an optimization problem with linear inequality and equality constraints,

minimize f0(x)
subject to Ax ≼ b    (5.10)
           Cx = d.

Using the conjugate of f0 we can write the dual function for the problem (5.10) as

g(λ,ν) = inf_x ( f0(x) + λT (Ax − b) + νT (Cx − d) )
       = −bT λ − dT ν + inf_x ( f0(x) + (AT λ + CT ν)T x )
       = −bT λ − dT ν − f0*(−AT λ − CT ν).    (5.11)

The domain of g follows from the domain of f0*:

dom g = {(λ,ν) | −AT λ − CT ν ∈ dom f0*}.

Let us illustrate this with a few examples.

Equality constrained norm minimization

Consider the problem

minimize ∥x∥
subject to Ax = b,    (5.12)

where ∥·∥ is any norm. Recall (from example 3.26 on page 93) that the conjugate of f0 = ∥·∥ is given by

f0*(y) = { 0,  ∥y∥* ≤ 1
         { ∞,  otherwise,

the indicator function of the dual norm unit ball. Using the result (5.11) above, the dual function for the problem (5.12) is given by

g(ν) = −bT ν − f0*(−AT ν) = { −bT ν,  ∥AT ν∥* ≤ 1
                            { −∞,  otherwise.

Entropy maximization

Consider the entropy maximization problem

minimize f0(x) = ∑_{i=1}^n xi log xi
subject to Ax ≼ b    (5.13)
           1T x = 1,

where dom f0 = Rn++. The conjugate of the negative entropy function u log u, with scalar variable u, is e^{v−1} (see example 3.21 on page 91).
Since f0 is a sum of negative entropy functions of different variables, we conclude that its conjugate is given by

f0*(y) = ∑_{i=1}^n e^{yi−1},

with dom f0* = Rn. Using the result (5.11) above, the dual function of (5.13) is

g(λ,ν) = −bT λ − ν − ∑_{i=1}^n e^{−aiT λ − ν − 1} = −bT λ − ν − e^{−ν−1} ∑_{i=1}^n e^{−aiT λ},

where ai is the ith column of A.

Minimum volume covering ellipsoid

Consider the problem

minimize f0(X) = log det X⁻¹
subject to aiT Xai ≤ 1,  i = 1,…,m,    (5.14)

with variable X ∈ Sn, where dom f0 = Sn++. The problem (5.14) has a simple geometric interpretation. With each X ∈ Sn++ we associate the ellipsoid, centered at the origin,

EX = {z | zT Xz ≤ 1}.

The volume of this ellipsoid is proportional to (det X⁻¹)^{1/2}, so the objective of (5.14) is, except for a constant and a factor of two, the logarithm of the volume of EX. The constraints of the problem (5.14) are that ai ∈ EX. Thus the problem (5.14) is to determine the minimum volume ellipsoid, centered at the origin, that includes the points a1, …, am.

The inequality constraints in problem (5.14) are affine; they can be expressed as

tr((aiaiT)X) ≤ 1.

In example 3.23 (page 92) we found that the conjugate of f0 is

f0*(Y) = log det(−Y)⁻¹ − n,

with dom f0* = −Sn++. Applying the result (5.11) above, the dual function for the problem (5.14) is given by

g(λ) = { log det(∑_{i=1}^m λiaiaiT) − 1T λ + n,  ∑_{i=1}^m λiaiaiT ≻ 0    (5.15)
       { −∞,  otherwise.

Thus, for any λ ≽ 0 with ∑_{i=1}^m λiaiaiT ≻ 0, the number

log det(∑_{i=1}^m λiaiaiT) − 1T λ + n

is a lower bound on the optimal value of the problem (5.14).

5.2 The Lagrange dual problem

For each pair (λ,ν) with λ ≽ 0, the Lagrange dual function gives us a lower bound on the optimal value p⋆ of the optimization problem (5.1). Thus we have a lower bound that depends on some parameters λ, ν. A natural question is: What is the best lower bound that can be obtained from the Lagrange dual function?

This leads to the optimization problem

maximize g(λ,ν)
subject to λ ≽ 0.    (5.16)

This problem is called the Lagrange dual problem associated with the problem (5.1). In this context the original problem (5.1) is sometimes called the primal problem. The term dual feasible, to describe a pair (λ,ν) with λ ≽ 0 and g(λ,ν) > −∞, now makes sense. It means, as the name implies, that (λ,ν) is feasible for the dual problem (5.16). We refer to (λ⋆,ν⋆) as dual optimal or optimal Lagrange multipliers if they are optimal for the problem (5.16).
The Lagrange dual problem (5.16) is a convex optimization problem, since the objective to be maximized is concave and the constraint is convex. This is the case whether or not the primal problem (5.1) is convex.

5.2.1 Making dual constraints explicit
The examples above show that it is not uncommon for the domain of the dual function,
domg = {(λ,ν) | g(λ,ν) > −∞},
to have dimension smaller than m + p. In many cases we can identify the affine hull of domg, and describe it as a set of linear equality constraints. Roughly speaking, this means we can identify the equality constraints that are ‘hidden’ or ‘implicit’ in the objective g of the dual problem (5.16). In this case we can form an equivalent problem, in which these equality constraints are given explicitly as constraints. The following examples demonstrate this idea.
Lagrange dual of standard form LP
On page 219 we found that the Lagrange dual function for the standard form LP
minimize cT x
subject to Ax = b    (5.17)
           x ≽ 0

is given by

g(λ,ν) = { −bT ν,  AT ν − λ + c = 0
         { −∞,  otherwise.

Strictly speaking, the Lagrange dual problem of the standard form LP is to maximize this dual function g subject to λ ≽ 0, i.e.,

maximize g(λ,ν) = { −bT ν,  AT ν − λ + c = 0
                  { −∞,  otherwise    (5.18)
subject to λ ≽ 0.

Here g is finite only when AT ν − λ + c = 0. We can form an equivalent problem by making these equality constraints explicit:

maximize −bT ν
subject to AT ν − λ + c = 0    (5.19)
           λ ≽ 0.

This problem, in turn, can be expressed as

maximize −bT ν
subject to AT ν + c ≽ 0,    (5.20)
which is an LP in inequality form.
Note the subtle distinctions between these three problems. The Lagrange dual
of the standard form LP (5.17) is the problem (5.18), which is equivalent to (but not the same as) the problems (5.19) and (5.20). With some abuse of terminology, we refer to the problem (5.19) or the problem (5.20) as the Lagrange dual of the standard form LP (5.17).

5.2 The Lagrange dual problem 225
Lagrange dual of inequality form LP
In a similar way we can find the Lagrange dual problem of a linear program in
inequality form
minimize cT x
subject to Ax ≼ b.    (5.21)
The Lagrangian is
L(x, λ) = cT x + λT (Ax − b) = −bT λ + (AT λ + c)T x,
so the dual function is
g(λ) = inf_x L(x,λ) = −bT λ + inf_x (AT λ + c)T x.
The infimum of a linear function is −∞, except in the special case when it is identically zero, so the dual function is
g(λ) = { −bT λ,  AT λ + c = 0
       { −∞,  otherwise.
The dual variable λ is dual feasible if λ ≽ 0 and AT λ + c = 0.
The Lagrange dual of the LP (5.21) is to maximize g over all λ ≽ 0. Again
we can reformulate this by explicitly including the dual feasibility conditions as
constraints, as in
maximize −bT λ
subject to AT λ + c = 0    (5.22)
           λ ≽ 0,
which is an LP in standard form.
Note the interesting symmetry between the standard and inequality form LPs
and their duals: The dual of a standard form LP is an LP with only inequality constraints, and vice versa. One can also verify that the Lagrange dual of (5.22) is (equivalent to) the primal problem (5.21).
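This symmetry is easy to verify numerically. A minimal check using scipy.optimize.linprog (assumed available) on a random standard form LP, constructed to be both primal and dual feasible so that, as shown in §5.2.3 below, strong duality holds:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    m, n = 3, 6
    A = rng.standard_normal((m, n))
    b = A @ rng.random(n)                              # primal feasible by construction
    c = A.T @ rng.standard_normal(m) + rng.random(n)   # dual feasible by construction

    # Primal (5.17): minimize c^T x  subject to  Ax = b, x >= 0.
    primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * n)

    # Dual (5.20): maximize -b^T nu  subject to  A^T nu + c >= 0, written as
    # minimize b^T nu  subject to  -A^T nu <= c, nu free.
    dual = linprog(b, A_ub=-A.T, b_ub=c, bounds=[(None, None)] * m)

    print(primal.fun, -dual.fun)       # p* and d*: equal up to solver tolerance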
5.2.2 Weak duality
The optimal value of the Lagrange dual problem, which we denote d⋆, is, by def- inition, the best lower bound on p⋆ that can be obtained from the Lagrange dual function. In particular, we have the simple but important inequality
d⋆ ≤ p⋆, (5.23)
which holds even if the original problem is not convex. This property is called weak duality.
The weak duality inequality (5.23) holds when d⋆ and p⋆ are infinite. For example, if the primal problem is unbounded below, so that p⋆ = −∞, we must have d⋆ = −∞, i.e., the Lagrange dual problem is infeasible. Conversely, if the dual problem is unbounded above, so that d⋆ = ∞, we must have p⋆ = ∞, i.e., the primal problem is infeasible.

We refer to the difference p⋆ − d⋆ as the optimal duality gap of the original problem, since it gives the gap between the optimal value of the primal problem and the best (i.e., greatest) lower bound on it that can be obtained from the Lagrange dual function. The optimal duality gap is always nonnegative.
The bound (5.23) can sometimes be used to find a lower bound on the optimal value of a problem that is difficult to solve, since the dual problem is always convex, and in many cases can be solved efficiently, to find d⋆. As an example, consider the two-way partitioning problem (5.7) described on page 219. The dual problem is an SDP,
maximize −1T ν
subject to W + diag(ν) ≽ 0,
with variable ν ∈ Rn. This problem can be solved efficiently, even for relatively large values of n, such as n = 1000. Its optimal value is a lower bound on the optimal value of the two-way partitioning problem, and is always at least as good as the lower bound (5.8) based on λmin(W).
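For a small instance, both bounds can be computed directly. A minimal sketch assuming cvxpy (with a semidefinite-capable solver) and numpy; the SDP value matches or improves on nλmin(W):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(3)
    n = 10
    W = rng.standard_normal((n, n)); W = (W + W.T) / 2   # random symmetric W

    eig_bound = n * np.linalg.eigvalsh(W)[0]             # the bound (5.8): n * lambda_min(W)

    nu = cp.Variable(n)
    sdp = cp.Problem(cp.Maximize(-cp.sum(nu)), [W + cp.diag(nu) >> 0])
    sdp.solve()

    print(eig_bound, sdp.value)                          # sdp.value >= eig_bound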
5.2.3 Strong duality and Slater’s constraint qualification
If the equality
d⋆ = p⋆ (5.24)
holds, i.e., the optimal duality gap is zero, then we say that strong duality holds. This means that the best bound that can be obtained from the Lagrange dual function is tight.
Strong duality does not, in general, hold. But if the primal problem (5.1) is convex, i.e., of the form
minimize f0 (x)
subject to fi(x) ≤ 0, i = 1,…,m, (5.25)
Ax = b,
with f0 , . . . , fm convex, we usually (but not always) have strong duality. There are many results that establish conditions on the problem, beyond convexity, under which strong duality holds. These conditions are called constraint qualifications.
One simple constraint qualification is Slater’s condition: There exists an x ∈ relint D such that
fi(x) < 0,  i = 1,…,m,    Ax = b.    (5.26)

Such a point is sometimes called strictly feasible, since the inequality constraints hold with strict inequalities. Slater’s theorem states that strong duality holds, if Slater’s condition holds (and the problem is convex).

Slater’s condition can be refined when some of the inequality constraint functions fi are affine. If the first k constraint functions f1, …, fk are affine, then strong duality holds provided the following weaker condition holds: There exists an x ∈ relint D with

fi(x) ≤ 0,  i = 1,…,k,    fi(x) < 0,  i = k+1,…,m,    Ax = b.    (5.27)

In other words, the affine inequalities do not need to hold with strict inequality. Note that the refined Slater condition (5.27) reduces to feasibility when the constraints are all linear equalities and inequalities, and dom f0 is open.

Slater’s condition (and the refinement (5.27)) not only implies strong duality for convex problems. It also implies that the dual optimal value is attained when d⋆ > −∞, i.e., there exists a dual feasible (λ⋆,ν⋆) with g(λ⋆,ν⋆) = d⋆ = p⋆. We will prove that strong duality obtains, when the primal problem is convex and Slater’s condition holds, in §5.3.2.
5.2.4 Examples
Least-squares solution of linear equations
Recall the problem (5.5):
minimize xT x
subject to Ax = b.
The associated dual problem is
maximize −(1/4)νT AAT ν − bT ν,
which is an unconstrained concave quadratic maximization problem.
Slater’s condition is simply that the primal problem is feasible, so p⋆ = d⋆ provided b ∈ R(A), i.e., p⋆ < ∞. In fact for this problem we always have strong duality, even when p⋆ = ∞. This is the case when b ∉ R(A), so there is a z with AT z = 0, bT z ≠ 0. It follows that the dual function is unbounded above along the line {tz | t ∈ R}, so d⋆ = ∞ as well.

Lagrange dual of LP

By the weaker form of Slater’s condition, we find that strong duality holds for any LP (in standard or inequality form) provided the primal problem is feasible. Applying this result to the duals, we conclude that strong duality holds for LPs if the dual is feasible. This leaves only one possible situation in which strong duality for LPs can fail: both the primal and dual problems are infeasible. This pathological case can, in fact, occur; see exercise 5.23.

Lagrange dual of QCQP

We consider the QCQP

minimize (1/2)xT P0x + q0T x + r0
subject to (1/2)xT Pix + qiT x + ri ≤ 0,  i = 1,…,m,    (5.28)

with P0 ∈ Sn++, and Pi ∈ Sn+, i = 1,…,m. The Lagrangian is

L(x,λ) = (1/2)xT P(λ)x + q(λ)T x + r(λ),

where

P(λ) = P0 + ∑_{i=1}^m λiPi,   q(λ) = q0 + ∑_{i=1}^m λiqi,   r(λ) = r0 + ∑_{i=1}^m λiri.

It is possible to derive an expression for g(λ) for general λ, but it is quite complicated. If λ ≽ 0, however, we have P(λ) ≻ 0 and

g(λ) = inf_x L(x,λ) = −(1/2)q(λ)T P(λ)⁻¹q(λ) + r(λ).

We can therefore express the dual problem as

maximize −(1/2)q(λ)T P(λ)⁻¹q(λ) + r(λ)
subject to λ ≽ 0.    (5.29)

The Slater condition says that strong duality between (5.29) and (5.28) holds if the quadratic inequality constraints are strictly feasible, i.e., there exists an x with

(1/2)xT Pix + qiT x + ri < 0,  i = 1,…,m.

Entropy maximization

Our next example is the entropy maximization problem (5.13):

minimize ∑_{i=1}^n xi log xi
subject to Ax ≼ b
           1T x = 1,

with domain D = Rn+. The Lagrange dual function was derived on page 222; the dual problem is

maximize −bT λ − ν − e^{−ν−1} ∑_{i=1}^n e^{−aiT λ}
subject to λ ≽ 0,    (5.30)

with variables λ ∈ Rm, ν ∈ R. The (weaker) Slater condition for (5.13) tells us that the optimal duality gap is zero if there exists an x ≻ 0 with Ax ≼ b and 1T x = 1.

We can simplify the dual problem (5.30) by maximizing over the dual variable ν analytically. For fixed λ, the objective function is maximized when the derivative with respect to ν is zero, i.e.,

ν = log ∑_{i=1}^n e^{−aiT λ} − 1.

Substituting this optimal value of ν into the dual problem gives

maximize −bT λ − log( ∑_{i=1}^n e^{−aiT λ} )
subject to λ ≽ 0,

which is a geometric program (in convex form) with nonnegativity constraints.

Minimum volume covering ellipsoid

We consider the problem (5.14):

minimize log det X⁻¹
subject to aiT Xai ≤ 1,  i = 1,…,m,

with domain D = Sn++. The Lagrange dual function is given by (5.15), so the dual problem can be expressed as

maximize log det( ∑_{i=1}^m λiaiaiT ) − 1T λ + n
subject to λ ≽ 0,    (5.31)

where we take log det X = −∞ if X ⊁ 0. The (weaker) Slater condition for the problem (5.14) is that there exists an X ∈ Sn++ with aiT Xai ≤ 1, for i = 1,…,m. This is always satisfied, so strong duality always obtains between (5.14) and the dual problem (5.31).

A nonconvex quadratic problem with strong duality

On rare occasions strong duality obtains for a nonconvex problem. As an important example, we consider the problem of minimizing a nonconvex quadratic function over the unit ball,

minimize xT Ax + 2bT x
subject to xT x ≤ 1,    (5.32)

where A ∈ Sn, A ̸≽ 0, and b ∈ Rn. Since A ̸≽ 0, this is not a convex problem.
This problem is sometimes called the trust region problem, and arises in minimizing a second-order approximation of a function over the unit ball, which is the region in which the approximation is assumed to be approximately valid.

The Lagrangian is

L(x,λ) = xT Ax + 2bT x + λ(xT x − 1) = xT (A + λI)x + 2bT x − λ,

so the dual function is given by

g(λ) = { −bT (A + λI)†b − λ,  A + λI ≽ 0, b ∈ R(A + λI)
       { −∞,  otherwise,

where (A + λI)† is the pseudo-inverse of A + λI. The Lagrange dual problem is thus

maximize −bT (A + λI)†b − λ
subject to A + λI ≽ 0,  b ∈ R(A + λI),    (5.33)

with variable λ ∈ R. Although it is not obvious from this expression, this is a convex optimization problem. In fact, it is readily solved since it can be expressed as

maximize −∑_{i=1}^n (qiT b)²/(λi + λ) − λ
subject to λ ≥ −λmin(A),

where λi and qi are the eigenvalues and corresponding (orthonormal) eigenvectors of A, and we interpret (qiT b)²/0 as 0 if qiT b = 0 and as ∞ otherwise.

Despite the fact that the original problem (5.32) is not convex, we always have zero optimal duality gap for this problem: The optimal values of (5.32) and (5.33) are always the same. In fact, a more general result holds: strong duality holds for any optimization problem with quadratic objective and one quadratic inequality constraint, provided Slater’s condition holds; see §B.1.
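A small numeric illustration (numpy assumed): for a hypothetical two-dimensional instance, a brute-force primal value over the unit disk and the eigenvalue form of the dual agree to grid accuracy, despite the nonconvexity.

    import numpy as np

    A = np.array([[1.0, 0.5], [0.5, -2.0]])   # indefinite: (5.32) is nonconvex
    b = np.array([0.3, 0.4])

    # Primal: brute-force minimization of x^T A x + 2 b^T x over the unit disk.
    r = np.linspace(0.0, 1.0, 400)
    th = np.linspace(0.0, 2 * np.pi, 800)
    R, TH = np.meshgrid(r, th)
    X1, X2 = R * np.cos(TH), R * np.sin(TH)
    p_star = (A[0, 0] * X1**2 + 2 * A[0, 1] * X1 * X2 + A[1, 1] * X2**2
              + 2 * (b[0] * X1 + b[1] * X2)).min()

    # Dual (5.33), in eigenvector coordinates, maximized over lam >= -lambda_min(A).
    lam_i, Q = np.linalg.eigh(A)
    qb2 = (Q.T @ b) ** 2
    lam = np.linspace(-lam_i[0] + 1e-6, 10.0, 200000)
    d_star = (-(qb2[:, None] / (lam_i[:, None] + lam[None, :])).sum(axis=0) - lam).max()

    print(p_star, d_star)                     # nearly equal: zero duality gap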
5.2.5 Mixed strategies for matrix games

In this section we use strong duality to derive a basic result for zero-sum matrix games. We consider a game with two players. Player 1 makes a choice (or move) k ∈ {1,…,n}, and player 2 makes a choice l ∈ {1,…,m}. Player 1 then makes a payment of Pkl to player 2, where P ∈ Rn×m is the payoff matrix for the game. The goal of player 1 is to make the payment as small as possible, while the goal of player 2 is to maximize it.

The players use randomized or mixed strategies, which means that each player makes his or her choice randomly and independently of the other player’s choice, according to a probability distribution:

prob(k = i) = ui,  i = 1,…,n,    prob(l = i) = vi,  i = 1,…,m.

Here u and v give the probability distributions of the choices of the two players, i.e., their associated strategies. The expected payoff from player 1 to player 2 is then

∑_{k=1}^n ∑_{l=1}^m ukvlPkl = uT Pv.

Player 1 wishes to choose u to minimize uT Pv, while player 2 wishes to choose v to maximize uT Pv.

Let us first analyze the game from the point of view of player 1, assuming her strategy u is known to player 2 (which clearly gives an advantage to player 2). Player 2 will choose v to maximize uT Pv, which results in the expected payoff

sup{uT Pv | v ≽ 0, 1T v = 1} = max_{i=1,…,m} (PT u)i.

The best thing player 1 can do is to choose u to minimize this worst-case payoff to player 2, i.e., to choose a strategy u that solves the problem

minimize max_{i=1,…,m} (PT u)i
subject to u ≽ 0,  1T u = 1,    (5.34)

which is a piecewise-linear convex optimization problem. We will denote the optimal value of this problem as p⋆1. This is the smallest expected payoff player 1 can arrange to have, assuming that player 2 knows the strategy of player 1, and plays to his own maximum advantage.

In a similar way we can consider the situation in which v, the strategy of player 2, is known to player 1 (which gives an advantage to player 1). In this case player 1 chooses u to minimize uT Pv, which results in an expected payoff of

inf{uT Pv | u ≽ 0, 1T u = 1} = min_{i=1,…,n} (Pv)i.

Player 2 chooses v to maximize this, i.e., chooses a strategy v that solves the problem

maximize min_{i=1,…,n} (Pv)i
subject to v ≽ 0,  1T v = 1,    (5.35)

which is another convex optimization problem, with piecewise-linear (concave) objective. We will denote the optimal value of this problem as p⋆2. This is the largest expected payoff player 2 can guarantee getting, assuming that player 1 knows the strategy of player 2.

It is intuitively obvious that knowing your opponent’s strategy gives an advantage (or at least, cannot hurt), and indeed, it is easily shown that we always have p⋆1 ≥ p⋆2. We can interpret the difference, p⋆1 − p⋆2, which is nonnegative, as the advantage conferred on a player by knowing the opponent’s strategy.

Using duality, we can establish a result that is at first surprising: p⋆1 = p⋆2. In other words, in a matrix game with mixed strategies, there is no advantage to knowing your opponent’s strategy. We will establish this result by showing that the two problems (5.34) and (5.35) are Lagrange dual problems, for which strong duality obtains.

We start by formulating (5.34) as an LP,

minimize t
subject to u ≽ 0,  1T u = 1
           PT u ≼ t1,

with extra variable t ∈ R. Introducing the multiplier λ for PT u ≼ t1, μ for u ≽ 0, and ν for 1T u = 1, the Lagrangian is

t + λT (PT u − t1) − μT u + ν(1 − 1T u) = ν + (1 − 1T λ)t + (Pλ − ν1 − μ)T u,

so the dual function is

g(λ,μ,ν) = { ν,  1T λ = 1, Pλ − ν1 = μ
           { −∞,  otherwise.

The dual problem is then

maximize ν
subject to λ ≽ 0,  1T λ = 1,  μ ≽ 0
           Pλ − ν1 = μ.

Eliminating μ we obtain the following Lagrange dual of (5.34):

maximize ν
subject to λ ≽ 0,  1T λ = 1
           Pλ ≽ ν1,

with variables λ, ν. But this is clearly equivalent to (5.35). Since the LPs are feasible, we have strong duality; the optimal values of (5.34) and (5.35) are equal.
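The conclusion p⋆1 = p⋆2 can be checked numerically by solving the two LP formulations of (5.34) and (5.35), for example with scipy.optimize.linprog (assumed available), on a random payoff matrix:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    n, m = 5, 7
    P = rng.standard_normal((n, m))

    # Player 1: minimize t  s.t.  P^T u <= t 1, u >= 0, 1^T u = 1; variables (u, t).
    c1 = np.r_[np.zeros(n), 1.0]
    A_ub1 = np.c_[P.T, -np.ones(m)]                  # P^T u - t 1 <= 0
    A_eq1 = np.c_[np.ones((1, n)), 0.0]              # 1^T u = 1
    res1 = linprog(c1, A_ub=A_ub1, b_ub=np.zeros(m), A_eq=A_eq1, b_eq=[1.0],
                   bounds=[(0, None)] * n + [(None, None)])

    # Player 2: maximize s  s.t.  P v >= s 1, v >= 0, 1^T v = 1; variables (v, s).
    c2 = np.r_[np.zeros(m), -1.0]                    # maximize s = minimize -s
    A_ub2 = np.c_[-P, np.ones(n)]                    # s 1 - P v <= 0
    A_eq2 = np.c_[np.ones((1, m)), 0.0]              # 1^T v = 1
    res2 = linprog(c2, A_ub=A_ub2, b_ub=np.zeros(n), A_eq=A_eq2, b_eq=[1.0],
                   bounds=[(0, None)] * m + [(None, None)])

    print(res1.fun, -res2.fun)                       # p1* and p2*: equal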
5.3 Geometric interpretation

5.3.1 Weak and strong duality via set of values

We can give a simple geometric interpretation of the dual function in terms of the set

G = {(f1(x),…,fm(x),h1(x),…,hp(x),f0(x)) ∈ Rm × Rp × R | x ∈ D},    (5.36)

which is the set of values taken on by the constraint and objective functions. The optimal value p⋆ of (5.1) is easily expressed in terms of G as

p⋆ = inf{t | (u,v,t) ∈ G, u ≼ 0, v = 0}.

To evaluate the dual function at (λ,ν), we minimize the affine function

(λ,ν,1)T (u,v,t) = ∑_{i=1}^m λiui + ∑_{i=1}^p νivi + t

over (u,v,t) ∈ G, i.e., we have

g(λ,ν) = inf{(λ,ν,1)T (u,v,t) | (u,v,t) ∈ G}.

In particular, we see that if the infimum is finite, then the inequality

(λ,ν,1)T (u,v,t) ≥ g(λ,ν)

defines a supporting hyperplane to G. This is sometimes referred to as a nonvertical supporting hyperplane, because the last component of the normal vector is nonzero.

Now suppose λ ≽ 0. Then, obviously, t ≥ (λ,ν,1)T (u,v,t) if u ≼ 0 and v = 0. Therefore

p⋆ = inf{t | (u,v,t) ∈ G, u ≼ 0, v = 0}
   ≥ inf{(λ,ν,1)T (u,v,t) | (u,v,t) ∈ G, u ≼ 0, v = 0}
   ≥ inf{(λ,ν,1)T (u,v,t) | (u,v,t) ∈ G}
   = g(λ,ν),

i.e., we have weak duality. This interpretation is illustrated in figures 5.3 and 5.4, for a simple problem with one inequality constraint.

Figure 5.3 Geometric interpretation of dual function and lower bound g(λ) ≤ p⋆, for a problem with one (inequality) constraint. Given λ, we minimize (λ,1)T (u,t) over G = {(f1(x), f0(x)) | x ∈ D}. This yields a supporting hyperplane with slope −λ. The intersection of this hyperplane with the u = 0 axis gives g(λ).

Figure 5.4 Supporting hyperplanes corresponding to three dual feasible values of λ, including the optimum λ⋆. Strong duality does not hold; the optimal duality gap p⋆ − d⋆ is positive.

Epigraph variation

In this section we describe a variation on the geometric interpretation of duality in terms of G, which explains why strong duality obtains for (most) convex problems. We define the set A ⊆ Rm × Rp × R as

A = G + (Rm+ × {0} × R+),    (5.37)

or, more explicitly,

A = {(u,v,t) | ∃x ∈ D, fi(x) ≤ ui, i = 1,…,m, hi(x) = vi, i = 1,…,p, f0(x) ≤ t}.

We can think of A as a sort of epigraph form of G, since A includes all the points in G, as well as points that are ‘worse’, i.e., those with larger objective or inequality constraint function values. We can express the optimal value in terms of A as

p⋆ = inf{t | (0,0,t) ∈ A}.

To evaluate the dual function at a point (λ,ν) with λ ≽ 0, we can minimize the affine function (λ,ν,1)T (u,v,t) over A: If λ ≽ 0, then

g(λ,ν) = inf{(λ,ν,1)T (u,v,t) | (u,v,t) ∈ A}.

If the infimum is finite, then

(λ,ν,1)T (u,v,t) ≥ g(λ,ν)

defines a nonvertical supporting hyperplane to A. In particular, since (0,0,p⋆) ∈ bd A, we have

p⋆ = (λ,ν,1)T (0,0,p⋆) ≥ g(λ,ν),    (5.38)

the weak duality lower bound. Strong duality holds if and only if we have equality in (5.38) for some dual feasible (λ,ν), i.e., there exists a nonvertical supporting hyperplane to A at its boundary point (0,0,p⋆). This second interpretation is illustrated in figure 5.5.

Figure 5.5 Geometric interpretation of dual function and lower bound g(λ) ≤ p⋆, for a problem with one (inequality) constraint. Given λ, we minimize (λ,1)T (u,t) over A = {(u,t) | ∃x ∈ D, f0(x) ≤ t, f1(x) ≤ u}. This yields a supporting hyperplane with slope −λ. The intersection of this hyperplane with the u = 0 axis gives g(λ).

5.3.2 Proof of strong duality under constraint qualification

In this section we prove that Slater’s constraint qualification guarantees strong duality (and that the dual optimum is attained) for a convex problem. We consider the primal problem (5.25), with f0,…,fm convex, and assume Slater’s condition holds: There exists x̃ ∈ relint D with fi(x̃) < 0, i = 1,…,m, and Ax̃ = b. In order to simplify the proof, we make two additional assumptions: first that D has nonempty interior (hence, relint D = int D) and second, that rank A = p. We assume that p⋆ is finite. (Since there is a feasible point, we can only have p⋆ = −∞ or p⋆ finite; if p⋆ = −∞, then d⋆ = −∞ by weak duality.)

The set A defined in (5.37) is readily shown to be convex if the underlying problem is convex. We define a second convex set B as

B = {(0,0,s) ∈ Rm × Rp × R | s < p⋆}.

The sets A and B do not intersect: if (u,v,t) ∈ A ∩ B we would have u = 0, v = 0, and t < p⋆, so there would be a feasible x with f0(x) ≤ t < p⋆, which is impossible. By the separating hyperplane theorem there exist (λ̃,ν̃,μ) ≠ 0 and α such that

(u,v,t) ∈ A  ⇒  λ̃T u + ν̃T v + μt ≥ α,    (5.39)

and

(u,v,t) ∈ B  ⇒  λ̃T u + ν̃T v + μt ≤ α.    (5.40)

From (5.39) we conclude that λ̃ ≽ 0 and μ ≥ 0. (Otherwise λ̃T u + μt would be unbounded below over A, contradicting (5.39).) The condition (5.40) simply means that μt ≤ α for all t < p⋆, and hence μp⋆ ≤ α. Together with (5.39) we conclude that for any x ∈ D,

∑_{i=1}^m λ̃ifi(x) + ν̃T (Ax − b) + μf0(x) ≥ α ≥ μp⋆.    (5.41)

Assume that μ > 0. In that case we can divide (5.41) by μ to obtain

L(x, λ̃/μ, ν̃/μ) ≥ p⋆
for all x ∈ D, from which it follows, by minimizing over x, that g(λ, ν) ≥ p⋆, where
we define
λ = λ̃/μ,   ν = ν̃/μ.
By weak duality we have g(λ,ν) ≤ p⋆, so in fact g(λ,ν) = p⋆. This shows that strong duality holds, and that the dual optimum is attained, at least in the case when μ > 0.
Now consider the case μ = 0. From (5.41), we conclude that for all x ∈ D,
∑_{i=1}^m λ̃ifi(x) + ν̃T (Ax − b) ≥ 0.    (5.42)

Applying this to the point x̃ that satisfies the Slater condition, we have

∑_{i=1}^m λ̃ifi(x̃) ≥ 0.

Figure 5.6 Illustration of strong duality proof, for a convex problem that satisfies Slater’s constraint qualification. The set A is shown shaded, and the set B is the thick vertical line segment, not including the point (0, p⋆), shown as a small open circle. The two sets are convex and do not intersect, so they can be separated by a hyperplane. Slater’s constraint qualification guarantees that any separating hyperplane must be nonvertical, since it must pass to the left of the point (ũ, t̃) = (f1(x̃), f0(x̃)), where x̃ is strictly feasible.
Since fi(x̃) < 0 and λ̃i ≥ 0, we conclude that λ̃ = 0. From (λ̃,ν̃,μ) ≠ 0 and λ̃ = 0, μ = 0, we conclude that ν̃ ≠ 0. Then (5.42) implies that for all x ∈ D, ν̃T (Ax − b) ≥ 0. But x̃ satisfies ν̃T (Ax̃ − b) = 0, and since x̃ ∈ int D, there are points in D with ν̃T (Ax − b) < 0 unless AT ν̃ = 0. This, of course, contradicts our assumption that rank A = p.

The geometric idea behind the proof is illustrated in figure 5.6, for a simple problem with one inequality constraint. The hyperplane separating A and B defines a supporting hyperplane to A at (0, p⋆). Slater’s constraint qualification is used to establish that the hyperplane must be nonvertical (i.e., has a normal vector of the form (λ⋆, 1)). (For a simple example of a convex problem with one inequality constraint for which strong duality fails, see exercise 5.21.)

5.3.3 Multicriterion interpretation

There is a natural connection between Lagrange duality for a problem without equality constraints,

minimize f0(x)
subject to fi(x) ≤ 0,  i = 1,…,m,    (5.43)

and the scalarization method for the (unconstrained) multicriterion problem

minimize (w.r.t. R^{m+1}_+) F(x) = (f1(x),…,fm(x),f0(x))    (5.44)

(see §4.7.4). In scalarization, we choose a positive vector λ̃, and minimize the scalar function λ̃T F(x); any minimizer is guaranteed to be Pareto optimal. Since we can scale λ̃ by a positive constant, without affecting the minimizers, we can, without loss of generality, take λ̃ = (λ, 1). Thus, in scalarization we minimize the function

λ̃T F(x) = f0(x) + ∑_{i=1}^m λifi(x),

which is exactly the Lagrangian for the problem (5.43).

To establish that every Pareto optimal point of a convex multicriterion problem minimizes the function λ̃T F(x) for some nonnegative weight vector λ̃, we considered the set A, defined in (4.62),

A = {t ∈ R^{m+1} | ∃x ∈ D, fi(x) ≤ ti, i = 0,…,m},

which is exactly the same as the set A defined in (5.37), that arises in Lagrange duality. Here too we constructed the required weight vector as a supporting hyperplane to the set, at an arbitrary Pareto optimal point.

In multicriterion optimization, we interpret the components of the weight vector as giving the relative weights between the objective functions. When we fix the last component of the weight vector (associated with f0) to be one, the other weights have the interpretation of the cost relative to f0, i.e., the cost relative to the objective.

5.4 Saddle-point interpretation

In this section we give several interpretations of Lagrange duality. The material of this section will not be used in the sequel.

5.4.1 Max-min characterization of weak and strong duality

It is possible to express the primal and the dual optimization problems in a form that is more symmetric. To simplify the discussion we assume there are no equality constraints; the results are easily extended to cover them.

First note that

sup_{λ≽0} L(x,λ) = sup_{λ≽0} ( f0(x) + ∑_{i=1}^m λifi(x) ) = { f0(x),  fi(x) ≤ 0, i = 1,…,m
                                                             { ∞,  otherwise.

Indeed, suppose x is not feasible, and fi(x) > 0 for some i. Then sup_{λ≽0} L(x,λ) = ∞, as can be seen by choosing λj = 0, j ≠ i, and λi → ∞. On the other hand, if fi(x) ≤ 0, i = 1,…,m, then the optimal choice of λ is λ = 0 and sup_{λ≽0} L(x,λ) = f0(x). This means that we can express the optimal value of the primal problem as
p⋆ = inf_x sup_{λ≽0} L(x,λ).

By the definition of the dual function, we also have

d⋆ = sup_{λ≽0} inf_x L(x,λ).

Thus, weak duality can be expressed as the inequality

sup_{λ≽0} inf_x L(x,λ) ≤ inf_x sup_{λ≽0} L(x,λ),    (5.45)

and strong duality as the equality

sup_{λ≽0} inf_x L(x,λ) = inf_x sup_{λ≽0} L(x,λ).

Strong duality means that the order of the minimization over x and the maximization over λ ≽ 0 can be switched without affecting the result.

In fact, the inequality (5.45) does not depend on any properties of L: We have

sup_{z∈Z} inf_{w∈W} f(w,z) ≤ inf_{w∈W} sup_{z∈Z} f(w,z)    (5.46)

for any f : Rn × Rm → R (and any W ⊆ Rn and Z ⊆ Rm). This general inequality is called the max-min inequality. When equality holds, i.e.,

sup_{z∈Z} inf_{w∈W} f(w,z) = inf_{w∈W} sup_{z∈Z} f(w,z),    (5.47)

we say that f (and W and Z) satisfy the strong max-min property or the saddle-point property. Of course the strong max-min property holds only in special cases, for example, when f : Rn × Rm → R is the Lagrangian of a problem for which strong duality obtains, W = Rn, and Z = Rm+.

5.4.2 Saddle-point interpretation

We refer to a pair w̃ ∈ W, z̃ ∈ Z as a saddle-point for f (and W and Z) if

f(w̃, z) ≤ f(w̃, z̃) ≤ f(w, z̃)

for all w ∈ W and z ∈ Z. In other words, w̃ minimizes f(w, z̃) (over w ∈ W) and z̃ maximizes f(w̃, z) (over z ∈ Z):

f(w̃, z̃) = inf_{w∈W} f(w, z̃),    f(w̃, z̃) = sup_{z∈Z} f(w̃, z).

This implies that the strong max-min property (5.47) holds, and that the common value is f(w̃, z̃).
Returning to our discussion of Lagrange duality, we see that if x⋆ and λ⋆ are primal and dual optimal points for a problem in which strong duality obtains, they form a saddle-point for the Lagrangian. The converse is also true: If (x,λ) is a saddle-point of the Lagrangian, then x is primal optimal, λ is dual optimal, and the optimal duality gap is zero.
5.4.3 Game interpretation
We can interpret the max-min inequality (5.46), the max-min equality (5.47), and the saddle-point property, in terms of a continuous zero-sum game. If the first player chooses w ∈ W, and the second player selects z ∈ Z, then player 1 pays an amount f(w,z) to player 2. Player 1 therefore wants to minimize f, while player 2 wants to maximize f. (The game is called continuous since the choices are vectors, and not discrete.)
Suppose that player 1 makes his choice first, and then player 2, after learning the choice of player 1, makes her selection. Player 2 wants to maximize the payoff f(w,z), and so will choose z ∈ Z to maximize f(w,z). The resulting payoff will be supz∈Z f(w,z), which depends on w, the choice of the first player. (We assume here that the supremum is achieved; if not the optimal payoff can be arbitrarily close to supz∈Z f(w,z).) Player 1 knows (or assumes) that player 2 will follow this strategy, and so will choose w ∈ W to make this worst-case payoff to player 2 as small as possible. Thus player 1 chooses
which results in the payoff
argmin sup f (w, z), w∈W z∈Z
inf sup f(w,z) w∈W z∈Z
from player 1 to player 2.
Now suppose the order of play is reversed: Player 2 must choose z ∈ Z first, and
then player 1 chooses w ∈ W (with knowledge of z). Following a similar argument, if the players follow the optimal strategy, player 2 should choose z ∈ Z to maximize infw∈W f(w,z), which results in the payoff of
sup inf f(w,z) z∈Z w∈W
from player 1 to player 2.
The max-min inequality (5.46) states the (intuitively obvious) fact that it is
better for a player to go second, or more precisely, for a player to know his or her opponent’s choice before choosing. In other words, the payoff to player 2 will be larger if player 1 must choose first. When the saddle-point property (5.47) holds, there is no advantage to playing second.
If (w̃, z̃) is a saddle-point for f (and W and Z), then it is called a solution of the game; w̃ is called the optimal choice or strategy for player 1, and z̃ is called

the optimal choice or strategy for player 2. In this case there is no advantage to playing second.
Now consider the special case where the payoff function is the Lagrangian, W = Rn and Z = Rm+ . Here player 1 chooses the primal variable x, while player 2 chooses the dual variable λ ≽ 0. By the argument above, the optimal choice for player 2, if she must choose first, is any λ⋆ which is dual optimal, which results in a payoff to player 2 of d⋆. Conversely, if player 1 must choose first, his optimal choice is any primal optimal x⋆, which results in a payoff of p⋆.
The optimal duality gap for the problem is exactly equal to the advantage afforded the player who goes second, i.e., the player who has the advantage of knowing his or her opponent’s choice before choosing. If strong duality holds, then there is no advantage to the players of knowing their opponent’s choice.
5.4.4 Price or tax interpretation
Lagrange duality has an interesting economic interpretation. Suppose the variable x denotes how an enterprise operates and f0(x) denotes the cost of operating at x, i.e., −f0(x) is the profit (say, in dollars) made at the operating condition x. Each constraint fi(x) ≤ 0 represents some limit, such as a limit on resources (e.g., warehouse space, labor) or a regulatory limit (e.g., environmental). The operating condition that maximizes profit while respecting the limits can be found by solving the problem
minimize f0 (x)
subject to fi(x) ≤ 0, i = 1,…,m.
The resulting optimal profit is −p⋆.
Now imagine a second scenario in which the limits can be violated, by paying an
additional cost which is linear in the amount of violation, measured by fi. Thus the payment made by the enterprise for the ith limit or constraint is λifi(x). Payments are also made to the firm for constraints that are not tight; if fi(x) < 0, then λifi(x) represents a payment to the firm. The coefficient λi has the interpretation of the price for violating fi(x) ≤ 0; its units are dollars per unit violation (as measured by fi). For the same price the enterprise can sell any ‘unused’ portion of the ith constraint. We assume λi ≥ 0, i.e., the firm must pay for violations (and receives income if a constraint is not tight).

As an example, suppose the first constraint in the original problem, f1(x) ≤ 0, represents a limit on warehouse space (say, in square meters). In this new arrangement, we open the possibility that the firm can rent extra warehouse space at a cost of λ1 dollars per square meter and also rent out unused space, at the same rate.

The total cost to the firm, for operating condition x, and constraint prices λi, is L(x,λ) = f0(x) + ∑_{i=1}^m λifi(x). The firm will obviously operate so as to minimize its total cost L(x,λ), which yields a cost g(λ). The dual function therefore represents the optimal cost to the firm, as a function of the constraint price vector λ. The optimal dual value, d⋆, is the optimal cost to the enterprise under the least favorable set of prices.

Using this interpretation we can paraphrase weak duality as follows: The optimal cost to the firm in the second scenario (in which constraint violations can be bought and sold) is less than or equal to the cost in the original situation (which has constraints that cannot be violated), even with the most unfavorable prices. This is obvious: If x⋆ is optimal in the first scenario, then the operating cost of x⋆ in the second scenario will be lower than f0(x⋆), since some income can be derived from the constraints that are not tight. The optimal duality gap is then the minimum possible advantage to the enterprise of being allowed to pay for constraint violations (and receive payments for nontight constraints).

Now suppose strong duality holds, and the dual optimum is attained. We can interpret a dual optimal λ⋆ as a set of prices for which there is no advantage to the firm in being allowed to pay for constraint violations (or receive payments for nontight constraints). For this reason a dual optimal λ⋆ is sometimes called a set of shadow prices for the original problem.

5.5 Optimality conditions

We remind the reader that we do not assume the problem (5.1) is convex, unless explicitly stated.

5.5.1 Certificate of suboptimality and stopping criteria

If we can find a dual feasible (λ,ν), we establish a lower bound on the optimal value of the primal problem: p⋆ ≥ g(λ,ν). Thus a dual feasible point (λ,ν) provides a proof or certificate that p⋆ ≥ g(λ,ν). Strong duality means there exist arbitrarily good certificates.

Dual feasible points allow us to bound how suboptimal a given feasible point is, without knowing the exact value of p⋆. Indeed, if x is primal feasible and (λ,ν) is dual feasible, then

f0(x) − p⋆ ≤ f0(x) − g(λ,ν).

In particular, this establishes that x is ǫ-suboptimal, with ǫ = f0(x) − g(λ,ν). (It also establishes that (λ,ν) is ǫ-suboptimal for the dual problem.)

We refer to the gap between primal and dual objectives,

f0(x) − g(λ,ν),

as the duality gap associated with the primal feasible point x and dual feasible point (λ,ν).
A primal dual feasible pair x, (λ,ν) localizes the optimal value of the primal (and dual) problems to an interval:

p⋆ ∈ [g(λ,ν), f0(x)],    d⋆ ∈ [g(λ,ν), f0(x)],

the width of which is the duality gap.

If the duality gap of the primal dual feasible pair x, (λ,ν) is zero, i.e., f0(x) = g(λ,ν), then x is primal optimal and (λ,ν) is dual optimal. We can think of (λ,ν) as a certificate that proves x is optimal (and, similarly, we can think of x as a certificate that proves (λ,ν) is dual optimal).

These observations can be used in optimization algorithms to provide nonheuristic stopping criteria. Suppose an algorithm produces a sequence of primal feasible x(k) and dual feasible (λ(k),ν(k)), for k = 1, 2, …, and ǫabs > 0 is a given required absolute accuracy. Then the stopping criterion (i.e., the condition for terminating the algorithm)
f0(x(k)) − g(λ(k), ν(k)) ≤ ǫabs
guarantees that when the algorithm terminates, x(k) is ǫabs-suboptimal. Indeed, (λ(k),ν(k)) is a certificate that proves it. (Of course strong duality must hold if this method is to work for arbitrarily small tolerances ǫabs.)
A similar condition can be used to guarantee a given relative accuracy ǫrel > 0. If

g(λ(k),ν(k)) > 0,    (f0(x(k)) − g(λ(k),ν(k))) / g(λ(k),ν(k)) ≤ ǫrel

holds, or

f0(x(k)) < 0,    (f0(x(k)) − g(λ(k),ν(k))) / (−f0(x(k))) ≤ ǫrel

holds, then p⋆ ≠ 0 and the relative error

(f0(x(k)) − p⋆) / |p⋆|

is guaranteed to be less than or equal to ǫrel.

5.5.2 Complementary slackness

Suppose that the primal and dual optimal values are attained and equal (so, in particular, strong duality holds). Let x⋆ be a primal optimal and (λ⋆,ν⋆) be a dual optimal point. This means that

f0(x⋆) = g(λ⋆,ν⋆)
       = inf_x ( f0(x) + ∑_{i=1}^m λ⋆ifi(x) + ∑_{i=1}^p νi⋆hi(x) )
       ≤ f0(x⋆) + ∑_{i=1}^m λ⋆ifi(x⋆) + ∑_{i=1}^p νi⋆hi(x⋆)
       ≤ f0(x⋆).

The first line states that the optimal duality gap is zero, and the second line is the definition of the dual function. The third line follows since the infimum of the Lagrangian over x is less than or equal to its value at x = x⋆. The last inequality follows from λ⋆i ≥ 0, fi(x⋆) ≤ 0, i = 1,…,m, and hi(x⋆) = 0, i = 1,…,p. We conclude that the two inequalities in this chain hold with equality.

We can draw several interesting conclusions from this. For example, since the inequality in the third line is an equality, we conclude that x⋆ minimizes L(x,λ⋆,ν⋆) over x. (The Lagrangian L(x,λ⋆,ν⋆) can have other minimizers; x⋆ is simply a minimizer.)

Another important conclusion is that

∑_{i=1}^m λ⋆ifi(x⋆) = 0.

Since each term in this sum is nonpositive, we conclude that

λ⋆ifi(x⋆) = 0,  i = 1,…,m.    (5.48)

This condition is known as complementary slackness; it holds for any primal optimal x⋆ and any dual optimal (λ⋆,ν⋆) (when strong duality holds). We can express the complementary slackness condition as

λ⋆i > 0  ⇒  fi(x⋆) = 0,

or, equivalently,

fi(x⋆) < 0  ⇒  λ⋆i = 0.

Roughly speaking, this means the ith optimal Lagrange multiplier is zero unless the ith constraint is active at the optimum.

5.5.3 KKT optimality conditions

We now assume that the functions f0,…,fm, h1,…,hp are differentiable (and therefore have open domains), but we make no assumptions yet about convexity.

KKT conditions for nonconvex problems

As above, let x⋆ and (λ⋆,ν⋆) be any primal and dual optimal points with zero duality gap. Since x⋆ minimizes L(x,λ⋆,ν⋆) over x, it follows that its gradient must vanish at x⋆, i.e.,

∇f0(x⋆) + ∑_{i=1}^m λ⋆i∇fi(x⋆) + ∑_{i=1}^p νi⋆∇hi(x⋆) = 0.

Thus we have

fi(x⋆) ≤ 0,  i = 1,…,m
hi(x⋆) = 0,  i = 1,…,p
λ⋆i ≥ 0,  i = 1,…,m    (5.49)
λ⋆ifi(x⋆) = 0,  i = 1,…,m
∇f0(x⋆) + ∑_{i=1}^m λ⋆i∇fi(x⋆) + ∑_{i=1}^p νi⋆∇hi(x⋆) = 0,

which are called the Karush-Kuhn-Tucker (KKT) conditions.

To summarize, for any optimization problem with differentiable objective and constraint functions for which strong duality obtains, any pair of primal and dual optimal points must satisfy the KKT conditions (5.49).

KKT conditions for convex problems

When the primal problem is convex, the KKT conditions are also sufficient for the points to be primal and dual optimal. In other words, if fi are convex and hi are affine, and x̃, λ̃, ν̃ are any points that satisfy the KKT conditions

fi(x̃) ≤ 0,  i = 1,…,m
hi(x̃) = 0,  i = 1,…,p
λ̃i ≥ 0,  i = 1,…,m
λ̃ifi(x̃) = 0,  i = 1,…,m
∇f0(x̃) + ∑_{i=1}^m λ̃i∇fi(x̃) + ∑_{i=1}^p ν̃i∇hi(x̃) = 0,

then x̃ and (λ̃,ν̃) are primal and dual optimal, with zero duality gap. To see this, note that the first two conditions state that x̃ is primal feasible. Since λ̃i ≥ 0, L(x,λ̃,ν̃) is convex in x; the last KKT condition states that its gradient with respect to x vanishes at x = x̃, so it follows that x̃ minimizes L(x,λ̃,ν̃) over x.
From this we conclude that

    g(λ̃, ν̃) = L(x̃, λ̃, ν̃)
             = f0(x̃) + ∑_{i=1}^m λ̃i fi(x̃) + ∑_{i=1}^p ν̃i hi(x̃)
             = f0(x̃),

where in the last line we use hi(x̃) = 0 and λ̃i fi(x̃) = 0. This shows that x̃ and (λ̃, ν̃) have zero duality gap, and therefore are primal and dual optimal. In summary, for any convex optimization problem with differentiable objective and constraint functions, any points that satisfy the KKT conditions are primal and dual optimal, and have zero duality gap.

If a convex optimization problem with differentiable objective and constraint functions satisfies Slater's condition, then the KKT conditions provide necessary and sufficient conditions for optimality: Slater's condition implies that the optimal duality gap is zero and the dual optimum is attained, so x is optimal if and only if there are (λ, ν) that, together with x, satisfy the KKT conditions.

The KKT conditions play an important role in optimization. In a few special cases it is possible to solve the KKT conditions (and therefore, the optimization problem) analytically. More generally, many algorithms for convex optimization are conceived as, or can be interpreted as, methods for solving the KKT conditions.

Example 5.1 Equality constrained convex quadratic minimization. We consider the problem

    minimize   (1/2)xᵀPx + qᵀx + r
    subject to Ax = b,

where P ∈ Sn+. The KKT conditions for this problem are

    Ax⋆ = b,   Px⋆ + q + Aᵀν⋆ = 0,   (5.50)

which we can write as

    [ P  Aᵀ ] [ x⋆ ]   [ −q ]
    [ A  0  ] [ ν⋆ ] = [  b ].

Solving this set of m + n equations in the m + n variables x⋆, ν⋆ gives the optimal primal and dual variables for (5.50).
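Numerically, solving the KKT conditions here is a single linear solve. A minimal sketch with numpy and made-up data (assuming A has full rank and P is positive definite, so the KKT matrix is nonsingular):

    import numpy as np

    # Solve the KKT system of example 5.1 for an equality constrained QP.
    P = np.array([[2.0, 0.5], [0.5, 1.0]])   # P positive definite
    q = np.array([1.0, 1.0])
    A = np.array([[1.0, 1.0]])               # one equality constraint
    b = np.array([1.0])

    n, m = P.shape[0], A.shape[0]
    KKT = np.block([[P, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-q, b])
    sol = np.linalg.solve(KKT, rhs)
    x_star, nu_star = sol[:n], sol[n:]       # primal and dual optimal points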
Example 5.2 Water-filling. We consider the convex optimization problem

    minimize   −∑_{i=1}^n log(αi + xi)
    subject to x ≽ 0,   1ᵀx = 1,

where αi > 0. This problem arises in information theory, in allocating power to a set of n communication channels. The variable xi represents the transmitter power allocated to the ith channel, and log(αi + xi) gives the capacity or communication rate of the channel, so the problem is to allocate a total power of one to the channels, in order to maximize the total communication rate.

Introducing Lagrange multipliers λ⋆ ∈ Rn for the inequality constraints x⋆ ≽ 0, and a multiplier ν⋆ ∈ R for the equality constraint 1ᵀx = 1, we obtain the KKT conditions
    x⋆ ≽ 0,   1ᵀx⋆ = 1,   λ⋆ ≽ 0,
    λ⋆i x⋆i = 0,   i = 1, . . . , n,
    −1/(αi + x⋆i) − λ⋆i + ν⋆ = 0,   i = 1, . . . , n.
We can directly solve these equations to find x⋆, λ⋆, and ν⋆. We start by noting that λ⋆ acts as a slack variable in the last equation, so it can be eliminated, leaving
    x⋆ ≽ 0,   1ᵀx⋆ = 1,
    x⋆i (ν⋆ − 1/(αi + x⋆i)) = 0,   i = 1, . . . , n,
    ν⋆ ≥ 1/(αi + x⋆i),   i = 1, . . . , n.
If ν⋆ < 1/αi, this last condition can only hold if x⋆i > 0, which by the third condition implies that ν⋆ = 1/(αi + x⋆i). Solving for x⋆i, we conclude that x⋆i = 1/ν⋆ − αi if ν⋆ < 1/αi. If ν⋆ ≥ 1/αi, then x⋆i > 0 is impossible, because it would imply ν⋆ ≥ 1/αi > 1/(αi + x⋆i), which violates the complementary slackness condition. Therefore, x⋆i = 0 if ν⋆ ≥ 1/αi. Thus we have
    x⋆i = 1/ν⋆ − αi   if ν⋆ < 1/αi,
          0           if ν⋆ ≥ 1/αi,

or, put more simply, x⋆i = max{0, 1/ν⋆ − αi}. Substituting this expression for x⋆i into the condition 1ᵀx⋆ = 1 we obtain

    ∑_{i=1}^n max{0, 1/ν⋆ − αi} = 1.

The lefthand side is a piecewise-linear increasing function of 1/ν⋆, with breakpoints at αi, so the equation has a unique solution which is readily determined.

This solution method is called water-filling for the following reason. We think of αi as the ground level above patch i, and then flood the region with water to a depth 1/ν, as illustrated in figure 5.7. The total amount of water used is then ∑_{i=1}^n max{0, 1/ν⋆ − αi}. We then increase the flood level until we have used a total amount of water equal to one. The depth of water above patch i is then the optimal value x⋆i.

Figure 5.7 Illustration of water-filling algorithm. The height of each patch is given by αi. The region is flooded to a level 1/ν⋆ which uses a total quantity of water equal to one. The height of the water (shown shaded) above each patch is the optimal value of x⋆i.
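Since the lefthand side is piecewise-linear and increasing in the level 1/ν⋆, bisection on the level is one simple way to solve the equation. A minimal sketch, assuming numpy and made-up values of αi:

    import numpy as np

    alpha = np.array([0.5, 1.0, 2.0])          # illustrative ground levels

    def water_used(level):                     # sum_i max(0, level - alpha_i)
        return np.maximum(0.0, level - alpha).sum()

    lo, hi = alpha.min(), alpha.max() + 1.0    # water_used(lo) = 0 <= 1 <= water_used(hi)
    while hi - lo > 1e-10:                     # water_used is increasing in level
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if water_used(mid) < 1.0 else (lo, mid)

    x_star = np.maximum(0.0, lo - alpha)       # optimal powers; they sum to one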
5.5.4 Mechanics interpretation of KKT conditions

The KKT conditions can be given a nice interpretation in mechanics (which, indeed, was one of Lagrange's primary motivations). We illustrate the idea with a simple example. The system shown in figure 5.8 consists of two blocks attached to each other, and to walls at the left and right, by three springs. The positions of the blocks are given by x ∈ R², where x1 is the displacement of the (middle of the) left block, and x2 is the displacement of the right block. The left wall is at position 0, and the right wall is at position l.

Figure 5.8 Two blocks connected by springs to each other, and the left and right walls. The blocks have width w > 0, and cannot penetrate each other or the walls.

The potential energy in the springs, as a function of the block positions, is given by

    f0(x1, x2) = (1/2)k1x1² + (1/2)k2(x2 − x1)² + (1/2)k3(l − x2)²,
where ki > 0 are the stiffness constants of the three springs. The equilibrium position x⋆ is the position that minimizes the potential energy subject to the inequalities
w/2−x1 ≤0, w+x1 −x2 ≤0, w/2−l+x2 ≤0. (5.51)

Figure 5.9 Force analysis of the block-spring system. The total force on each block, due to the springs and also to contact forces, must be zero. The Lagrange multipliers, shown on top, are the contact forces between the walls and blocks. The spring forces are shown at bottom.
These constraints are called kinematic constraints, and express the fact that the blocks have width w > 0, and cannot penetrate each other or the walls. The equilibrium position is therefore given by the solution of the optimization problem
    minimize   (1/2)(k1x1² + k2(x2 − x1)² + k3(l − x2)²)
    subject to w/2 − x1 ≤ 0
               w + x1 − x2 ≤ 0
               w/2 − l + x2 ≤ 0,   (5.52)

which is a QP.
With λ1, λ2, λ3 as Lagrange multipliers, the KKT conditions for this problem
consist of the kinematic constraints (5.51), the nonnegativity constraints λi ≥ 0, the complementary slackness conditions
    λ1(w/2 − x1) = 0,   λ2(w − x2 + x1) = 0,   λ3(w/2 − l + x2) = 0,   (5.53)

and the zero gradient condition
    [ k1x1 − k2(x2 − x1)        ]        [ −1 ]        [  1 ]        [ 0 ]
    [ k2(x2 − x1) − k3(l − x2)  ] + λ1 [  0 ] + λ2 [ −1 ] + λ3 [ 1 ] = 0.   (5.54)
The equation (5.54) can be interpreted as the force balance equations for the two blocks, provided we interpret the Lagrange multipliers as contact forces that act between the walls and blocks, as illustrated in figure 5.9. The first equation states that the sum of the forces on the first block is zero: The term −k1x1 is the force exerted on the left block by the left spring, the term k2(x2 −x1) is the force exerted by the middle spring, λ1 is the force exerted by the left wall, and −λ2 is the force exerted by the right block. The contact forces must point away from the contact surface (as expressed by the constraints λ1 ≥ 0 and −λ2 ≤ 0), and are nonzero only when there is contact (as expressed by the first two complementary slackness conditions (5.53)). In a similar way, the second equation in (5.54) is the force balance for the second block, and the last condition in (5.53) states that λ3 is zero unless the right block touches the wall.
In this example, the potential energy and kinematic constraint functions are convex, and (the refined form of) Slater’s constraint qualification holds provided 2w ≤ l, i.e., there is enough room between the walls to fit the two blocks, so we can conclude that the energy formulation of the equilibrium given by (5.52), gives the same result as the force balance formulation, given by the KKT conditions.
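As a numerical sanity check of this force interpretation, the QP (5.52) can be solved with a modeling package and the contact forces read off as dual variables. The sketch below assumes the cvxpy package; the stiffnesses and dimensions are made up, chosen so that the middle (block-on-block) constraint is active:

    import cvxpy as cp

    k1, k2, k3, l, w = 1.0, 2.0, 3.0, 6.0, 2.0
    x = cp.Variable(2)
    energy = 0.5 * (k1 * cp.square(x[0])
                    + k2 * cp.square(x[1] - x[0])
                    + k3 * cp.square(l - x[1]))
    constraints = [w/2 - x[0] <= 0,            # left wall
                   w + x[0] - x[1] <= 0,       # blocks cannot interpenetrate
                   w/2 - l + x[1] <= 0]        # right wall
    prob = cp.Problem(cp.Minimize(energy), constraints)
    prob.solve()
    forces = [c.dual_value for c in constraints]   # contact forces λ1, λ2, λ3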

5.5.5 Solving the primal problem via the dual
We mentioned at the beginning of §5.5.3 that if strong duality holds and a dual optimal solution (λ⋆,ν⋆) exists, then any primal optimal point is also a minimizer of L(x, λ⋆, ν⋆). This fact sometimes allows us to compute a primal optimal solution from a dual optimal solution.
More precisely, suppose we have strong duality and an optimal (λ⋆, ν⋆) is known. Suppose that the minimizer of L(x, λ⋆, ν⋆), i.e., the solution of
    minimize   f0(x) + ∑_{i=1}^m λ⋆i fi(x) + ∑_{i=1}^p νi⋆ hi(x),   (5.55)
is unique. (For a convex problem this occurs, for example, if L(x, λ⋆, ν⋆) is a strictly convex function of x.) Then if the solution of (5.55) is primal feasible, it must be primal optimal; if it is not primal feasible, then no primal optimal point can exist, i.e., we can conclude that the primal optimum is not attained. This observation is interesting when the dual problem is easier to solve than the primal problem, for example, because it can be solved analytically, or has some special structure that can be exploited.
Example 5.3 Entropy maximization. We consider the entropy maximization problem

    minimize   f0(x) = ∑_{i=1}^n xi log xi
    subject to Ax ≼ b
               1ᵀx = 1,

with domain Rn++, and its dual problem

    maximize   −bᵀλ − ν − e^{−ν−1} ∑_{i=1}^n e^{−aiᵀλ}
    subject to λ ≽ 0,

where ai are the columns of A (see pages 222 and 228). We assume that the weak form of Slater's condition holds, i.e., there exists an x ≻ 0 with Ax ≼ b and 1ᵀx = 1, so strong duality holds and an optimal solution (λ⋆, ν⋆) exists.

Suppose we have solved the dual problem. The Lagrangian at (λ⋆, ν⋆) is

    L(x, λ⋆, ν⋆) = ∑_{i=1}^n xi log xi + λ⋆ᵀ(Ax − b) + ν⋆(1ᵀx − 1),

which is strictly convex on D and bounded below, so it has a unique solution x⋆, given by

    x⋆i = 1/exp(aiᵀλ⋆ + ν⋆ + 1),   i = 1, . . . , n.

If x⋆ is primal feasible, it must be the optimal solution of the primal problem (5.13). If x⋆ is not primal feasible, then we can conclude that the primal optimum is not attained.
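As a sketch of this recipe, the dual problem can be solved numerically and x⋆ then recovered from the formula above. This assumes numpy and scipy, with made-up problem data chosen to be strictly feasible:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    m, n = 4, 6
    A = rng.standard_normal((m, n))
    b = A @ (np.ones(n) / n) + 0.1             # x = 1/n is strictly feasible

    def neg_dual(z):                           # z = (lam, nu); minimize -g(lam, nu)
        lam, nu = z[:m], z[m]
        return b @ lam + nu + np.exp(-nu - 1.0) * np.exp(-A.T @ lam).sum()

    bounds = [(0.0, None)] * m + [(None, None)]    # lam >= 0, nu free
    res = minimize(neg_dual, np.zeros(m + 1), bounds=bounds)
    lam, nu = res.x[:m], res.x[m]
    x_star = np.exp(-A.T @ lam - nu - 1.0)     # unique minimizer of the Lagrangian
    # stationarity in nu forces 1^T x_star = 1; it remains to check A @ x_star <= b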
Example 5.4 Minimizing a separable function subject to an equality constraint. We consider the problem
    minimize   f0(x) = ∑_{i=1}^n fi(xi)
    subject to aᵀx = b,

where a ∈ Rn, b ∈ R, and fi : R → R are differentiable and strictly convex. The objective function is called separable since it is a sum of functions of the individual variables x1,…,xn. We assume that the domain of f0 intersects the constraint set, i.e., there exists a point x0 ∈ dom f0 with aT x0 = b. This implies the problem has a unique optimal point x⋆.
The Lagrangian is

    L(x, ν) = ∑_{i=1}^n fi(xi) + ν(aᵀx − b) = −bν + ∑_{i=1}^n (fi(xi) + ν ai xi),

which is also separable, so the dual function is

    g(ν) = −bν + inf_x ∑_{i=1}^n (fi(xi) + ν ai xi)
         = −bν + ∑_{i=1}^n inf_{xi} (fi(xi) + ν ai xi)
         = −bν − ∑_{i=1}^n fi∗(−ν ai).

The dual problem is thus

    maximize   −bν − ∑_{i=1}^n fi∗(−ν ai),

with (scalar) variable ν ∈ R.

Now suppose we have found an optimal dual variable ν⋆. (There are several simple methods for solving a convex problem with one scalar variable, such as the bisection method.) Since each fi is strictly convex, the function L(x, ν⋆) is strictly convex in x, and so has a unique minimizer x̃. But we also know that x⋆ minimizes L(x, ν⋆), so we must have x̃ = x⋆. We can recover x⋆ from ∇xL(x, ν⋆) = 0, i.e., by solving the equations fi′(x⋆i) = −ν⋆ ai.
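To make this concrete, here is a small sketch (assuming numpy) with the deliberately simple choice fi(xi) = xi⁴/4, so that fi′(xi) = xi³ and the equations fi′(x⋆i) = −ν⋆ ai can be inverted in closed form; the scalar dual variable is then found by bisection:

    import numpy as np

    a = np.array([1.0, 2.0, -1.0])        # illustrative data
    b = 1.0

    def x_of_nu(nu):                      # solves f_i'(x_i) = -nu * a_i
        return np.cbrt(-nu * a)

    def h(nu):                            # a^T x(nu) - b; decreasing in nu
        return a @ x_of_nu(nu) - b

    lo, hi = -10.0, 10.0
    assert h(lo) > 0 > h(hi)              # bracket for bisection
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)

    nu_star = 0.5 * (lo + hi)
    x_star = x_of_nu(nu_star)             # primal optimal; a @ x_star ~= b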
5.6 Perturbation and sensitivity analysis
When strong duality obtains, the optimal dual variables give very useful information about the sensitivity of the optimal value with respect to perturbations of the constraints.
5.6.1 The perturbed problem
We consider the following perturbed version of the original optimization problem (5.1):
minimize f0 (x)
subject to fi(x) ≤ ui, i = 1,…,m (5.56)
hi(x) = vi, i = 1,…,p,

with variable x ∈ Rn. This problem coincides with the original problem (5.1) when u = 0, v = 0. When ui is positive it means that we have relaxed the ith inequality constraint; when ui is negative, it means that we have tightened the constraint. Thus the perturbed problem (5.56) results from the original problem (5.1) by tightening or relaxing each inequality constraint by ui, and changing the righthand side of the equality constraints by vi.
We define p⋆(u, v) as the optimal value of the perturbed problem (5.56):

    p⋆(u, v) = inf{ f0(x) | x ∈ D, fi(x) ≤ ui, i = 1, . . . , m, hi(x) = vi, i = 1, . . . , p }.
We can have p⋆(u, v) = ∞, which corresponds to perturbations of the constraints that result in infeasibility. Note that p⋆(0, 0) = p⋆, the optimal value of the unperturbed problem (5.1). (We hope this slight abuse of notation will cause no confusion.) Roughly speaking, the function p⋆ : Rm × Rp → R gives the optimal value of the problem as a function of perturbations to the righthand sides of the constraints.
When the original problem is convex, the function p⋆ is a convex function of u and v; indeed, its epigraph is precisely the closure of the set A defined in (5.37) (see exercise 5.32).
5.6.2 A global inequality

Now we assume that strong duality holds, and that the dual optimum is attained. (This is the case if the original problem is convex, and Slater's condition is satisfied.) Let (λ⋆, ν⋆) be optimal for the dual (5.16) of the unperturbed problem. Then for all u and v we have

    p⋆(u, v) ≥ p⋆(0, 0) − λ⋆ᵀu − ν⋆ᵀv.   (5.57)

To establish this inequality, suppose that x is any feasible point for the perturbed problem, i.e., fi(x) ≤ ui for i = 1, . . . , m, and hi(x) = vi for i = 1, . . . , p. Then we have, by strong duality,

    p⋆(0, 0) = g(λ⋆, ν⋆) ≤ f0(x) + ∑_{i=1}^m λ⋆i fi(x) + ∑_{i=1}^p νi⋆ hi(x)
             ≤ f0(x) + λ⋆ᵀu + ν⋆ᵀv.

(The first inequality follows from the definition of g(λ⋆, ν⋆); the second follows since λ⋆ ≽ 0.) We conclude that for any x feasible for the perturbed problem, we have

    f0(x) ≥ p⋆(0, 0) − λ⋆ᵀu − ν⋆ᵀv,

from which (5.57) follows.
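The bound (5.57) is easy to verify numerically. A small sketch (assuming numpy) on the one-constraint problem of exercise 5.1, minimize x² + 1 subject to (x − 2)(x − 4) ≤ u, for which p⋆(u) = 1 + (3 − √(1 + u))² while the constraint is active, and λ⋆ = 2:

    import numpy as np

    def p_star(u):                        # optimal value of the perturbed problem
        left = 3.0 - np.sqrt(1.0 + u)     # left end of the feasible interval
        return np.where(left > 0.0, left**2 + 1.0, 1.0)

    u = np.linspace(-0.99, 10.0, 300)
    lam_star = 2.0
    assert np.all(p_star(u) >= p_star(0.0) - lam_star * u - 1e-9)   # (5.57)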

Sensitivity interpretations

When strong duality holds, various sensitivity interpretations of the optimal Lagrange variables follow directly from the inequality (5.57). Some of the conclusions are:

• If λ⋆i is large and we tighten the ith constraint (i.e., choose ui < 0), then the optimal value p⋆(u, v) is guaranteed to increase greatly.

• If νi⋆ is large and positive and we take vi < 0, or if νi⋆ is large and negative and we take vi > 0, then the optimal value p⋆(u, v) is guaranteed to increase greatly.

• If λ⋆i is small, and we loosen the ith constraint (ui > 0), then the optimal value p⋆(u, v) will not decrease too much.

• If νi⋆ is small and positive, and vi > 0, or if νi⋆ is small and negative and vi < 0, then the optimal value p⋆(u, v) will not decrease too much.

The inequality (5.57), and the conclusions listed above, give a lower bound on the perturbed optimal value, but no upper bound. For this reason the results are not symmetric with respect to loosening or tightening a constraint. For example, suppose that λ⋆i is large, and we loosen the ith constraint a bit (i.e., take ui small and positive). In this case the inequality (5.57) is not useful; it does not, for example, imply that the optimal value will decrease considerably.

The inequality (5.57) is illustrated in figure 5.10 for a convex problem with one inequality constraint. The inequality states that the affine function p⋆(0) − λ⋆u is a lower bound on the convex function p⋆.

Figure 5.10 Optimal value p⋆(u) of a convex problem with one constraint f1(x) ≤ u, as a function of u. For u = 0, we have the original unperturbed problem; for u < 0 the constraint is tightened, and for u > 0 the constraint is loosened. The affine function p⋆(0) − λ⋆u is a lower bound on p⋆.

5.6.3 Local sensitivity analysis

Suppose now that p⋆(u, v) is differentiable at u = 0, v = 0. Then, provided strong duality holds, the optimal dual variables λ⋆, ν⋆ are related to the gradient of p⋆ at u = 0, v = 0:

    λ⋆i = −∂p⋆(0, 0)/∂ui,   νi⋆ = −∂p⋆(0, 0)/∂vi.   (5.58)

This property can be seen in the example shown in figure 5.10, where −λ⋆ is the slope of p⋆ near u = 0.

Thus, when p⋆(u, v) is differentiable at u = 0, v = 0, and strong duality holds, the optimal Lagrange multipliers are exactly the local sensitivities of the optimal value with respect to constraint perturbations. In contrast to the nondifferentiable case, this interpretation is symmetric: tightening the ith inequality constraint a small amount (i.e., taking ui small and negative) yields an increase in p⋆ of approximately −λ⋆i ui; loosening the ith constraint a small amount (i.e., taking ui small and positive) yields a decrease in p⋆ of approximately λ⋆i ui.

To show (5.58), suppose p⋆(u, v) is differentiable and strong duality holds. For the perturbation u = t ei, v = 0, where ei is the ith unit vector, we have

    lim_{t→0} (p⋆(t ei, 0) − p⋆)/t = ∂p⋆(0, 0)/∂ui.

The inequality (5.57) states that for t > 0,
    (p⋆(t ei, 0) − p⋆)/t ≥ −λ⋆i,

while for t < 0 we have the opposite inequality. Taking the limit t → 0, with t > 0, yields

    ∂p⋆(0, 0)/∂ui ≥ −λ⋆i,

while taking the limit with t < 0 yields the opposite inequality, so we conclude that

    ∂p⋆(0, 0)/∂ui = −λ⋆i.

The same method can be used to establish

    ∂p⋆(0, 0)/∂vi = −νi⋆.

The local sensitivity result (5.58) gives us a quantitative measure of how active a constraint is at the optimum x⋆. If fi(x⋆) < 0, then the constraint is inactive, and it follows that the constraint can be tightened or loosened a small amount without affecting the optimal value. By complementary slackness, the associated optimal Lagrange multiplier must be zero. But now suppose that fi(x⋆) = 0, i.e., the ith constraint is active at the optimum. The ith optimal Lagrange multiplier tells us how active the constraint is: if λ⋆i is small, it means that the constraint can be loosened or tightened a bit without much effect on the optimal value; if λ⋆i is large, it means that if the constraint is loosened or tightened a bit, the effect on the optimal value will be great.

Shadow price interpretation

We can also give a simple geometric interpretation of the result (5.58) in terms of economics. We consider (for simplicity) a convex problem with no equality constraints, which satisfies Slater's condition. The variable x ∈ Rn determines how a firm operates, and the objective f0 is the cost, i.e., −f0 is the profit. Each constraint fi(x) ≤ 0 represents a limit on some resource such as labor, steel, or warehouse space. The (negative) perturbed optimal cost function −p⋆(u) tells us how much more or less profit could be made if more, or less, of each resource were made available to the firm. If it is differentiable near u = 0, then we have

    λ⋆i = −∂p⋆(0)/∂ui.

In other words, λ⋆i tells us approximately how much more profit the firm could make, for a small increase in availability of resource i.

It follows that λ⋆i would be the natural or equilibrium price for resource i, if it were possible for the firm to buy or sell it. Suppose, for example, that the firm can buy or sell resource i, at a price that is less than λ⋆i. In this case it would certainly buy some of the resource, which would allow it to operate in a way that increases its profit more than the cost of buying the resource. Conversely, if the price exceeds λ⋆i, the firm would sell some of its allocation of resource i, and obtain a net gain since its income from selling some of the resource would be larger than its drop in profit due to the reduction in availability of the resource.

5.7 Examples

In this section we show by example that simple equivalent reformulations of a problem can lead to very different dual problems. We consider the following types of reformulations:

• Introducing new variables and associated equality constraints.

• Replacing the objective with an increasing function of the original objective.

• Making explicit constraints implicit, i.e., incorporating them into the domain of the objective.

5.7.1 Introducing new variables and equality constraints

Consider an unconstrained problem of the form

    minimize   f0(Ax + b).   (5.59)

Its Lagrange dual function is the constant p⋆. So while we do have strong duality, i.e., p⋆ = d⋆, the Lagrangian dual is neither useful nor interesting.

Now let us reformulate the problem (5.59) as

    minimize   f0(y)
    subject to Ax + b = y.   (5.60)

Here we have introduced new variables y, as well as new equality constraints Ax + b = y. The problems (5.59) and (5.60) are clearly equivalent. The Lagrangian of the reformulated problem is

    L(x, y, ν) = f0(y) + νᵀ(Ax + b − y).
To find the dual function we minimize L over x and y. Minimizing over x we find that g(ν) = −∞ unless Aᵀν = 0, in which case we are left with

    g(ν) = bᵀν + inf_y (f0(y) − νᵀy) = bᵀν − f0∗(ν),

where f0∗ is the conjugate of f0. The dual problem of (5.60) can therefore be expressed as

    maximize   bᵀν − f0∗(ν)
    subject to Aᵀν = 0.   (5.61)

Thus, the dual of the reformulated problem (5.60) is considerably more useful than the dual of the original problem (5.59).

Example 5.5 Unconstrained geometric program. Consider the unconstrained geometric program

    minimize   log(∑_{i=1}^m exp(aiᵀx + bi)).

We first reformulate it by introducing new variables and equality constraints:

    minimize   f0(y) = log(∑_{i=1}^m exp yi)
    subject to Ax + b = y,

where aiᵀ are the rows of A. The conjugate of the log-sum-exp function is

    f0∗(ν) = ∑_{i=1}^m νi log νi   if ν ≽ 0, 1ᵀν = 1,
             ∞                     otherwise

(example 3.25, page 93), so the dual of the reformulated problem can be expressed as

    maximize   bᵀν − ∑_{i=1}^m νi log νi
    subject to 1ᵀν = 1
               Aᵀν = 0
               ν ≽ 0,   (5.62)

which is an entropy maximization problem.

Example 5.6 Norm approximation problem. We consider the unconstrained norm approximation problem

    minimize   ∥Ax − b∥,   (5.63)

where ∥·∥ is any norm. Here too the Lagrange dual function is constant, equal to the optimal value of (5.63), and therefore not useful.

Once again we reformulate the problem as

    minimize   ∥y∥
    subject to Ax − b = y.

The Lagrange dual problem is, following (5.61),

    maximize   bᵀν
    subject to ∥ν∥∗ ≤ 1
               Aᵀν = 0,   (5.64)

where we use the fact that the conjugate of a norm is the indicator function of the dual norm unit ball (example 3.26, page 93).

The idea of introducing new equality constraints can be applied to the constraint functions as well. Consider, for example, the problem

    minimize   f0(A0x + b0)
    subject to fi(Aix + bi) ≤ 0,   i = 1, . . . , m,   (5.65)

where Ai ∈ R^{ki×n} and fi : R^{ki} → R are convex. (For simplicity we do not include equality constraints here.) We introduce a new variable yi ∈ R^{ki}, for i = 0, . . . , m, and reformulate the problem as

    minimize   f0(y0)
    subject to fi(yi) ≤ 0,   i = 1, . . . , m
               Aix + bi = yi,   i = 0, . . . , m.   (5.66)

The Lagrangian for this problem is

    L(x, y0, . . . , ym, λ, ν0, . . . , νm) = f0(y0) + ∑_{i=1}^m λifi(yi) + ∑_{i=0}^m νiᵀ(Aix + bi − yi).

To find the dual function we minimize over x and yi. The minimum over x is −∞ unless

    ∑_{i=0}^m Aiᵀνi = 0,

in which case we have, for λ ≻ 0,

    g(λ, ν0, . . . , νm)
      = ∑_{i=0}^m νiᵀbi + inf_{y0,...,ym} ( f0(y0) + ∑_{i=1}^m λifi(yi) − ∑_{i=0}^m νiᵀyi )
      = ∑_{i=0}^m νiᵀbi + inf_{y0} (f0(y0) − ν0ᵀy0) + ∑_{i=1}^m λi inf_{yi} (fi(yi) − (νi/λi)ᵀyi)
      = ∑_{i=0}^m νiᵀbi − f0∗(ν0) − ∑_{i=1}^m λifi∗(νi/λi).

The last expression involves the perspective of the conjugate function, and is therefore concave in the dual variables. Finally, we address the question of what happens when λ ≽ 0, but some λi are zero. If λi = 0 and νi ≠ 0, then the dual function is −∞. If λi = 0 and νi = 0, however, the terms involving yi, νi, and λi are all zero. Thus, the expression above for g is valid for all λ ≽ 0, if we take λifi∗(νi/λi) = 0 when λi = 0 and νi = 0, and λifi∗(νi/λi) = ∞ when λi = 0 and νi ≠ 0. Therefore we can express the dual of the problem (5.66) as

    maximize   ∑_{i=0}^m νiᵀbi − f0∗(ν0) − ∑_{i=1}^m λifi∗(νi/λi)
    subject to λ ≽ 0
               ∑_{i=0}^m Aiᵀνi = 0.   (5.67)

Example 5.7 Inequality constrained geometric program. The inequality constrained geometric program

    minimize   log(∑_{k=1}^{K0} e^{a0kᵀx + b0k})
    subject to log(∑_{k=1}^{Ki} e^{aikᵀx + bik}) ≤ 0,   i = 1, . . . , m,

is of the form (5.65) with fi : R^{Ki} → R given by fi(y) = log(∑_{k=1}^{Ki} e^{yk}).
The conjugate of this function is

    fi∗(ν) = ∑_{k=1}^{Ki} νk log νk   if ν ≽ 0, 1ᵀν = 1,
             ∞                         otherwise.

Using (5.67) we can immediately write down the dual problem as

    maximize   b0ᵀν0 − ∑_{k=1}^{K0} ν0k log ν0k + ∑_{i=1}^m (biᵀνi − ∑_{k=1}^{Ki} νik log(νik/λi))
    subject to ν0 ≽ 0,  1ᵀν0 = 1
               νi ≽ 0,  1ᵀνi = λi,   i = 1, . . . , m
               λi ≥ 0,   i = 1, . . . , m
               ∑_{i=0}^m Aiᵀνi = 0,

which further simplifies to

    maximize   b0ᵀν0 − ∑_{k=1}^{K0} ν0k log ν0k + ∑_{i=1}^m (biᵀνi − ∑_{k=1}^{Ki} νik log(νik/1ᵀνi))
    subject to νi ≽ 0,   i = 0, . . . , m
               1ᵀν0 = 1
               ∑_{i=0}^m Aiᵀνi = 0.

5.7.2 Transforming the objective

If we replace the objective f0 by an increasing function of f0, the resulting problem is clearly equivalent (see §4.1.3). The dual of this equivalent problem, however, can be very different from the dual of the original problem.

Example 5.8 We consider again the minimum norm problem

    minimize   ∥Ax − b∥,

where ∥·∥ is some norm. We reformulate this problem as

    minimize   (1/2)∥y∥²
    subject to Ax − b = y.

Here we have introduced new variables, and replaced the objective by half its square. Evidently it is equivalent to the original problem.

The dual of the reformulated problem is

    maximize   −(1/2)∥ν∥∗² + bᵀν
    subject to Aᵀν = 0,

where we use the fact that the conjugate of (1/2)∥·∥² is (1/2)∥·∥∗² (see example 3.27, page 93). Note that this dual problem is not the same as the dual problem (5.64) derived earlier.

5.7.3 Implicit constraints

The next simple reformulation we study is to include some of the constraints in the objective function, by modifying the objective function to be infinite when the constraint is violated.

Example 5.9 Linear program with box constraints. We consider the linear program

    minimize   cᵀx
    subject to Ax = b
               l ≼ x ≼ u,   (5.68)

where A ∈ R^{p×n} and l ≺ u. The constraints l ≼ x ≼ u are sometimes called box constraints or variable bounds.

We can, of course, derive the dual of this linear program. The dual will have a Lagrange multiplier ν associated with the equality constraint, λ1 associated with the inequality constraint x ≼ u, and λ2 associated with the inequality constraint l ≼ x. The dual is

    maximize   −bᵀν − λ1ᵀu + λ2ᵀl
    subject to Aᵀν + λ1 − λ2 + c = 0
               λ1 ≽ 0,  λ2 ≽ 0.   (5.69)

Instead, let us first reformulate the problem (5.68) as

    minimize   f0(x)
    subject to Ax = b,   (5.70)

where we define

    f0(x) = cᵀx   if l ≼ x ≼ u,
            ∞     otherwise.

The problem (5.70) is clearly equivalent to (5.68); we have merely made the explicit box constraints implicit.

The dual function for the problem (5.70) is

    g(ν) = inf_{l ≼ x ≼ u} (cᵀx + νᵀ(Ax − b))
         = −bᵀν − uᵀ(Aᵀν + c)⁻ + lᵀ(Aᵀν + c)⁺,

where yi⁺ = max{yi, 0}, yi⁻ = max{−yi, 0}. So here we are able to derive an analytical formula for g, which is a concave piecewise-linear function. The dual problem is the unconstrained problem

    maximize   −bᵀν − uᵀ(Aᵀν + c)⁻ + lᵀ(Aᵀν + c)⁺,   (5.71)

which has a quite different form from the dual of the original problem. (The problems (5.69) and (5.71) are closely related, in fact, equivalent; see exercise 5.8.)

5.8 Theorems of alternatives

5.8.1 Weak alternatives via the dual function

In this section we apply Lagrange duality theory to the problem of determining feasibility of a system of inequalities and equalities

    fi(x) ≤ 0,   i = 1, . . . , m,   hi(x) = 0,   i = 1, . . . , p.   (5.72)

We assume the domain of the inequality system (5.72), D = ∩_{i=1}^m dom fi ∩ ∩_{i=1}^p dom hi, is nonempty. We can think of (5.72) as the standard problem (5.1), with objective f0 = 0, i.e.,

    minimize   0
    subject to fi(x) ≤ 0,   i = 1, . . . , m
               hi(x) = 0,   i = 1, . . . , p.   (5.73)

This problem has optimal value

    p⋆ = 0   if (5.72) is feasible,
         ∞   if (5.72) is infeasible,   (5.74)

so solving the optimization problem (5.73) is the same as solving the inequality system (5.72).

The dual function

We associate with the inequality system (5.72) the dual function

    g(λ, ν) = inf_{x∈D} ( ∑_{i=1}^m λifi(x) + ∑_{i=1}^p νihi(x) ),

which is the same as the dual function for the optimization problem (5.73). Since f0 = 0, the dual function is positive homogeneous in (λ, ν): for α > 0, g(αλ, αν) = αg(λ, ν). The dual problem associated with (5.73) is to maximize g(λ, ν) subject to λ ≽ 0. Since g is homogeneous, the optimal value of this dual problem is given by

    d⋆ = ∞   if λ ≽ 0, g(λ, ν) > 0 is feasible,
         0   if λ ≽ 0, g(λ, ν) > 0 is infeasible.   (5.75)
Weak duality tells us that d⋆ ≤ p⋆. Combining this fact with (5.74) and (5.75) yields the following: If the inequality system
λ ≽ 0, g(λ,ν) > 0 (5.76)
is feasible (which means d⋆ = ∞), then the inequality system (5.72) is infeasible (since we then have p⋆ = ∞). Indeed, we can interpret any solution (λ,ν) of the inequalities (5.76) as a proof or certificate of infeasibility of the system (5.72).
We can restate this implication in terms of feasibility of the original system: If the original inequality system (5.72) is feasible, then the inequality system (5.76) must be infeasible. We can interpret an x which satisfies (5.72) as a certificate establishing infeasibility of the inequality system (5.76).
Two systems of inequalities (and equalities) are called weak alternatives if at most one of the two is feasible. Thus, the systems (5.72) and (5.76) are weak alternatives. This is true whether or not the inequalities (5.72) are convex (i.e., fi convex, hi affine); moreover, the alternative inequality system (5.76) is always convex (i.e., g is concave and the constraints λi ≥ 0 are convex).
Strict inequalities
We can also study feasibility of the strict inequality system
    fi(x) < 0,   i = 1, . . . , m,   hi(x) = 0,   i = 1, . . . , p.   (5.77)

With g defined as for the nonstrict inequality system, we have the alternative inequality system

    λ ≽ 0,   λ ≠ 0,   g(λ, ν) ≥ 0.   (5.78)

We can show directly that (5.77) and (5.78) are weak alternatives. Suppose there exists an x̃ with fi(x̃) < 0, hi(x̃) = 0. Then for any λ ≽ 0, λ ≠ 0, and ν,

    λ1f1(x̃) + · · · + λmfm(x̃) + ν1h1(x̃) + · · · + νphp(x̃) < 0.

It follows that

    g(λ, ν) = inf_{x∈D} ( ∑_{i=1}^m λifi(x) + ∑_{i=1}^p νihi(x) )
            ≤ ∑_{i=1}^m λifi(x̃) + ∑_{i=1}^p νihi(x̃)
            < 0.

Therefore, feasibility of (5.77) implies that there does not exist (λ, ν) satisfying (5.78). Thus, we can prove infeasibility of (5.77) by producing a solution of the system (5.78); we can prove infeasibility of (5.78) by producing a solution of the system (5.77).

5.8.2 Strong alternatives

When the original inequality system is convex, i.e., fi are convex and hi are affine, and some type of constraint qualification holds, then the pairs of weak alternatives described above are strong alternatives, which means that exactly one of the two alternatives holds. In other words, each of the inequality systems is feasible if and only if the other is infeasible. In this section we assume that fi are convex and hi are affine, so the inequality system (5.72) can be expressed as

    fi(x) ≤ 0,   i = 1, . . . , m,   Ax = b,

where A ∈ R^{p×n}.

Strict inequalities

We first study the strict inequality system

    fi(x) < 0,   i = 1, . . . , m,   Ax = b,   (5.79)

and its alternative

    λ ≽ 0,   λ ≠ 0,   g(λ, ν) ≥ 0.   (5.80)

We need one technical condition: there exists an x ∈ relint D with Ax = b. In other words we not only assume that the linear equality constraints are consistent, but also that they have a solution in relint D. (Very often D = Rn, so the condition is satisfied if the equality constraints are consistent.) Under this condition, exactly one of the inequality systems (5.79) and (5.80) is feasible. In other words, the inequality systems (5.79) and (5.80) are strong alternatives.

We will establish this result by considering the related optimization problem

    minimize   s
    subject to fi(x) − s ≤ 0,   i = 1, . . . , m
               Ax = b,   (5.81)

with variables x, s, and domain D × R. The optimal value p⋆ of this problem is negative if and only if there exists a solution to the strict inequality system (5.79).

The Lagrange dual function for the problem (5.81) is

    inf_{x∈D, s} ( s + ∑_{i=1}^m λi(fi(x) − s) + νᵀ(Ax − b) ) = g(λ, ν)   if 1ᵀλ = 1,
                                                                −∞        otherwise.

Therefore we can express the dual problem of (5.81) as

    maximize   g(λ, ν)
    subject to λ ≽ 0,  1ᵀλ = 1.

Now we observe that Slater's condition holds for the problem (5.81). By the hypothesis there exists an x̃ ∈ relint D with Ax̃ = b. Choosing any s̃ > max_i fi(x̃) yields a point (x̃, s̃) which is strictly feasible for (5.81). Therefore we have d⋆ = p⋆, and the dual optimum d⋆ is attained. In other words, there exist (λ⋆, ν⋆) such that
g(λ⋆,ν⋆)=p⋆, λ⋆ ≽0, 1Tλ⋆ =1. (5.82)
Now suppose that the strict inequality system (5.79) is infeasible, which means that p⋆ ≥ 0. Then (λ⋆,ν⋆) from (5.82) satisfy the alternate inequality system (5.80). Similarly, if the alternate inequality system (5.80) is feasible, then d⋆ = p⋆ ≥ 0, which shows that the strict inequality system (5.79) is infeasible. Thus, the inequality systems (5.79) and (5.80) are strong alternatives; each is feasible if and only if the other is not.
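To make the certificate idea concrete, here is a small sketch for the special case of strict linear inequalities Ax ≺ b (taken up in §5.8.3 below). It searches for a normalized λ in the alternative system using scipy's LP solver; the data are made up and obviously infeasible:

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0], [-1.0]])         # x < 0 and -x < -1: infeasible
    b = np.array([0.0, -1.0])
    m, n = A.shape

    # minimize b^T lam subject to A^T lam = 0, 1^T lam = 1, lam >= 0
    res = linprog(c=b,
                  A_eq=np.vstack([A.T, np.ones((1, m))]),
                  b_eq=np.concatenate([np.zeros(n), [1.0]]),
                  bounds=[(0.0, None)] * m)
    if res.success and res.fun <= 0.0:
        print("certificate:", res.x)      # proves A x < b has no solution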
Nonstrict inequalities
We now consider the nonstrict inequality system

    fi(x) ≤ 0,   i = 1, . . . , m,   Ax = b,   (5.83)

and its alternative

    λ ≽ 0,   g(λ, ν) > 0.   (5.84)

We will show these are strong alternatives, provided the following conditions hold: there exists an x ∈ relint D with Ax = b, and the optimal value p⋆ of (5.81) is attained. This holds, for example, if D = Rn and max_i fi(x) → ∞ as x → ∞. With these assumptions we have, as in the strict case, that p⋆ = d⋆, and that both the primal and dual optimal values are attained. Now suppose that the nonstrict inequality system (5.83) is infeasible, which means that p⋆ > 0. (Here we use the assumption that the primal optimal value is attained.) Then (λ⋆, ν⋆) from (5.82) satisfy the alternate inequality system (5.84). Thus, the inequality systems (5.83) and (5.84) are strong alternatives; each is feasible if and only if the other is not.
5.8.3 Examples

Linear inequalities
Consider the system of linear inequalities Ax ≼ b. The dual function is

    g(λ) = inf_x λᵀ(Ax − b) = −bᵀλ   if Aᵀλ = 0,
                              −∞     otherwise.

The alternative inequality system is therefore

    λ ≽ 0,   Aᵀλ = 0,   bᵀλ < 0.

These are, in fact, strong alternatives. This follows since the optimum in the related problem (5.81) is achieved, unless it is unbounded below.

We now consider the system of strict linear inequalities Ax ≺ b, which has the strong alternative system

    λ ≽ 0,   λ ≠ 0,   Aᵀλ = 0,   bᵀλ ≤ 0.

In fact we have encountered (and proved) this result before, in §2.5.1; see (2.17) and (2.18) (on page 50).

Intersection of ellipsoids

We consider m ellipsoids, described as Ei = {x | fi(x) ≤ 0}, with fi(x) = xᵀAix + 2biᵀx + ci, i = 1, . . . , m, where Ai ∈ Sn++. We ask when the intersection of these ellipsoids has nonempty interior. This is equivalent to feasibility of the set of strict quadratic inequalities

    fi(x) = xᵀAix + 2biᵀx + ci < 0,   i = 1, . . . , m.   (5.85)

The dual function g is

    g(λ) = inf_x ( xᵀA(λ)x + 2b(λ)ᵀx + c(λ) )
         = −b(λ)ᵀA(λ)†b(λ) + c(λ)   if A(λ) ≽ 0 and b(λ) ∈ R(A(λ)),
           −∞                        otherwise,

where

    A(λ) = ∑_{i=1}^m λiAi,   b(λ) = ∑_{i=1}^m λibi,   c(λ) = ∑_{i=1}^m λici.

Note that for λ ≽ 0, λ ≠ 0, we have A(λ) ≻ 0, so we can simplify the expression for the dual function as g(λ) = −b(λ)ᵀA(λ)⁻¹b(λ) + c(λ). The strong alternative of the system (5.85) is therefore

    λ ≽ 0,   λ ≠ 0,   −b(λ)ᵀA(λ)⁻¹b(λ) + c(λ) ≥ 0.   (5.86)

We can give a simple geometric interpretation of this pair of strong alternatives. For any nonzero λ ≽ 0, the (possibly empty) ellipsoid

    Eλ = {x | xᵀA(λ)x + 2b(λ)ᵀx + c(λ) ≤ 0}

contains E1 ∩ · · · ∩ Em, since fi(x) ≤ 0 implies ∑_{i=1}^m λifi(x) ≤ 0. Now, Eλ has empty interior if and only if

    inf_x ( xᵀA(λ)x + 2b(λ)ᵀx + c(λ) ) = −b(λ)ᵀA(λ)⁻¹b(λ) + c(λ) ≥ 0.

Therefore the alternative system (5.86) means that Eλ has empty interior.

Weak duality is obvious: if (5.86) holds, then Eλ contains the intersection E1 ∩ · · · ∩ Em, and has empty interior, so naturally the intersection has empty interior. The fact that these are strong alternatives states the (not obvious) fact that if the intersection E1 ∩ · · · ∩ Em has empty interior, then we can construct an ellipsoid Eλ that contains the intersection and has empty interior.

Farkas' lemma

In this section we describe a pair of strong alternatives for a mixture of strict and nonstrict linear inequalities, known as Farkas' lemma: the system of inequalities

    Ax ≼ 0,   cᵀx < 0,   (5.87)

where A ∈ R^{m×n} and c ∈ Rn, and the system of equalities and inequalities

    Aᵀy + c = 0,   y ≽ 0,   (5.88)

are strong alternatives.

We can prove Farkas' lemma directly, using LP duality. Consider the LP

    minimize   cᵀx
    subject to Ax ≼ 0,   (5.89)

and its dual

    maximize   0
    subject to Aᵀy + c = 0
               y ≽ 0.   (5.90)

The primal LP (5.89) is homogeneous, and so has optimal value 0, if (5.87) is not feasible, and optimal value −∞, if (5.87) is feasible. The dual LP (5.90) has optimal value 0, if (5.88) is feasible, and optimal value −∞, if (5.88) is infeasible. Since x = 0 is feasible in (5.89), we can rule out the one case in which strong duality can fail for LPs, so we must have p⋆ = d⋆. Combined with the remarks above, this shows that (5.87) and (5.88) are strong alternatives.

Example 5.10 Arbitrage-free bounds on price. We consider a set of n assets, with prices at the beginning of an investment period p1, . . . , pn, respectively. At the end of the investment period, the value of the assets is v1, . . . , vn. If x1, . . . , xn represents the initial investment in each asset (with xj < 0 meaning a short position in asset j), the cost of the initial investment is pᵀx, and the final value of the investment is vᵀx.

The value of the assets at the end of the investment period, v, is uncertain. We will assume that only m possible scenarios, or outcomes, are possible. If outcome i occurs, the final value of the assets is v(i), and therefore, the overall value of the investments is v(i)ᵀx.

If there is an investment vector x with pᵀx < 0, and in all possible scenarios the final value is nonnegative, i.e., v(i)ᵀx ≥ 0 for i = 1, . . . , m, then an arbitrage is said to exist. The condition pᵀx < 0 means you are paid to accept the investment mix, and the condition v(i)ᵀx ≥ 0 for i = 1, . . . , m means that no matter what outcome occurs, the final value is nonnegative, so an arbitrage corresponds to a guaranteed money-making investment strategy. It is generally assumed that the prices and values are such that no arbitrage exists. This means that the inequality system

    V x ≽ 0,   pᵀx < 0

is infeasible, where Vij = v(i)j. Using Farkas' lemma, we have no arbitrage if and only if there exists y such that

    −Vᵀy + p = 0,   y ≽ 0.

We can use this characterization of arbitrage-free prices and values to solve several interesting problems. Suppose, for example, that the values V are known, and all prices except the last one, pn, are known. The set of prices pn that are consistent with the no-arbitrage assumption is an interval, which can be found by solving a pair of LPs. The optimal value of the LP

    minimize   pn
    subject to Vᵀy = p,  y ≽ 0,

with variables pn and y, gives the smallest possible arbitrage-free price for asset n. Solving the same LP with maximization instead of minimization yields the largest possible price for asset n. If the two values are equal, i.e., the no-arbitrage assumption leads us to a unique price for asset n, we say the market is complete. For an example, see exercise 5.38. This method can be used to find bounds on the price of a derivative or option that is based on the final value of other underlying assets, i.e., when the value or payoff of asset n is a function of the values of the other assets.
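A small numerical sketch of this pair of LPs, assuming numpy and scipy, with made-up scenario data:

    import numpy as np
    from scipy.optimize import linprog

    V = np.array([[1.0, 2.0, 0.5],        # V[i, j]: value of asset j in scenario i
                  [1.0, 0.5, 1.5],
                  [1.0, 1.0, 1.0]])
    p_known = np.array([1.0, 1.1])        # prices of assets 1, ..., n-1
    m, n = V.shape

    # variables z = (y, p_n); constraints V^T y = (p_known, p_n), y >= 0
    A_eq = np.hstack([V.T, np.zeros((n, 1))])
    A_eq[n - 1, m] = -1.0                 # (V^T y)_n - p_n = 0
    b_eq = np.concatenate([p_known, [0.0]])
    bounds = [(0.0, None)] * m + [(None, None)]
    c = np.zeros(m + 1); c[m] = 1.0       # objective p_n

    low = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    high = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    print("arbitrage-free price interval for asset n:", (low, high))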
5.9 Generalized inequalities

In this section we examine how Lagrange duality extends to a problem with generalized inequality constraints

    minimize   f0(x)
    subject to fi(x) ≼_{Ki} 0,   i = 1, . . . , m
               hi(x) = 0,   i = 1, . . . , p,   (5.91)

where Ki ⊆ R^{ki} are proper cones. For now, we do not assume convexity of the problem (5.91). We assume the domain of (5.91), D = ∩_{i=0}^m dom fi ∩ ∩_{i=1}^p dom hi, is nonempty.

5.9.1 The Lagrange dual

With each generalized inequality fi(x) ≼_{Ki} 0 in (5.91) we associate a Lagrange multiplier vector λi ∈ R^{ki} and define the associated Lagrangian as

    L(x, λ, ν) = f0(x) + λ1ᵀf1(x) + · · · + λmᵀfm(x) + ν1h1(x) + · · · + νphp(x),

where λ = (λ1, . . . , λm) and ν = (ν1, . . . , νp). The dual function is defined exactly as in a problem with scalar inequalities:

    g(λ, ν) = inf_{x∈D} L(x, λ, ν) = inf_{x∈D} ( f0(x) + ∑_{i=1}^m λiᵀfi(x) + ∑_{i=1}^p νihi(x) ).

Since the Lagrangian is affine in the dual variables (λ, ν), and the dual function is a pointwise infimum of the Lagrangian, the dual function is concave.

As in a problem with scalar inequalities, the dual function gives lower bounds on p⋆, the optimal value of the primal problem (5.91). For a problem with scalar inequalities, we require λi ≥ 0. Here the nonnegativity requirement on the dual variables is replaced by the condition

    λi ≽_{Ki∗} 0,   i = 1, . . . , m,

where Ki∗ denotes the dual cone of Ki. In other words, the Lagrange multipliers associated with inequalities must be dual nonnegative.
Weak duality follows immediately from the definition of dual cone: if λi ≽_{Ki∗} 0 and fi(x̃) ≼_{Ki} 0, then λiᵀfi(x̃) ≤ 0. Therefore for any primal feasible point x̃ and any λi ≽_{Ki∗} 0, we have

    f0(x̃) + ∑_{i=1}^m λiᵀfi(x̃) + ∑_{i=1}^p νihi(x̃) ≤ f0(x̃).

Taking the infimum over x̃ yields g(λ, ν) ≤ p⋆.

The Lagrange dual optimization problem is

    maximize   g(λ, ν)
    subject to λi ≽_{Ki∗} 0,   i = 1, . . . , m.   (5.92)

We always have weak duality, i.e., d⋆ ≤ p⋆, where d⋆ denotes the optimal value of the dual problem (5.92), whether or not the primal problem (5.91) is convex.

Slater's condition and strong duality

As might be expected, strong duality (d⋆ = p⋆) holds when the primal problem is convex and satisfies an appropriate constraint qualification. For example, a generalized version of Slater's condition for the problem

    minimize   f0(x)
    subject to fi(x) ≼_{Ki} 0,   i = 1, . . . , m
               Ax = b,

where f0 is convex and fi is Ki-convex, is that there exists an x ∈ relint D with Ax = b and fi(x) ≺_{Ki} 0, i = 1, . . . , m. This condition implies strong duality (and also, that the dual optimum is attained).

Example 5.11 Lagrange dual of semidefinite program. We consider a semidefinite program in inequality form,

    minimize   cᵀx
    subject to x1F1 + · · · + xnFn + G ≼ 0,   (5.93)

where F1, . . . , Fn, G ∈ Sk. (Here f1 is affine, and K1 is Sk+, the positive semidefinite cone.)

We associate with the constraint a dual variable or multiplier Z ∈ Sk, so the Lagrangian is

    L(x, Z) = cᵀx + tr((x1F1 + · · · + xnFn + G)Z)
            = x1(c1 + tr(F1Z)) + · · · + xn(cn + tr(FnZ)) + tr(GZ),

which is affine in x. The dual function is given by

    g(Z) = inf_x L(x, Z) = tr(GZ)   if tr(FiZ) + ci = 0, i = 1, . . . , n,
                           −∞       otherwise.

The dual problem can therefore be expressed as

    maximize   tr(GZ)
    subject to tr(FiZ) + ci = 0,   i = 1, . . . , n
               Z ≽ 0.

(We use the fact that Sk+ is self-dual, i.e., (Sk+)∗ = Sk+; see §2.6.) Strong duality obtains if the semidefinite program (5.93) is strictly feasible, i.e., there exists an x with

    x1F1 + · · · + xnFn + G ≺ 0.

Example 5.12 Lagrange dual of cone program in standard form. We consider the cone program

    minimize   cᵀx
    subject to Ax = b
               x ≽_K 0,

where A ∈ R^{m×n}, b ∈ Rm, and K ⊆ Rn is a proper cone. We associate with the equality constraint a multiplier ν ∈ Rm, and with the nonnegativity constraint a multiplier λ ∈ Rn. The Lagrangian is

    L(x, λ, ν) = cᵀx − λᵀx + νᵀ(Ax − b),

so the dual function is

    g(λ, ν) = inf_x L(x, λ, ν) = −bᵀν   if Aᵀν − λ + c = 0,
                                 −∞     otherwise.

The dual problem can be expressed as

    maximize   −bᵀν
    subject to Aᵀν + c = λ
               λ ≽_{K∗} 0.

By eliminating λ and defining y = −ν, this problem can be simplified to

    maximize   bᵀy
    subject to Aᵀy ≼_{K∗} c,

which is a cone program in inequality form, involving the dual generalized inequality. Strong duality obtains if the Slater condition holds, i.e., there is an x ≻_K 0 with Ax = b.
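A numerical sketch checking the SDP primal-dual pair of example 5.11, assuming the cvxpy modeling package with an SDP-capable solver (e.g. SCS) installed. The data are made up; the LMI below carves out the unit disk, so the primal optimal value is −∥c∥2:

    import numpy as np
    import cvxpy as cp

    F1 = np.array([[1.0, 0.0], [0.0, -1.0]])
    F2 = np.array([[0.0, 1.0], [1.0, 0.0]])
    G = -np.eye(2)                        # x = 0 gives G < 0: strictly feasible
    c = np.array([1.0, 0.5])

    x = cp.Variable(2)                    # primal (5.93)
    S = cp.Variable((2, 2), symmetric=True)
    primal = cp.Problem(cp.Minimize(c @ x),
                        [S == x[0] * F1 + x[1] * F2 + G, S << 0])

    Z = cp.Variable((2, 2), PSD=True)     # dual variable Z >= 0
    dual = cp.Problem(cp.Maximize(cp.trace(G @ Z)),
                      [cp.trace(F1 @ Z) + c[0] == 0,
                       cp.trace(F2 @ Z) + c[1] == 0])

    print(primal.solve(), dual.solve())   # equal: strong duality holds here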
5.9.2 Optimality conditions

The optimality conditions of §5.5 are readily extended to problems with generalized inequalities. We first derive the complementary slackness conditions.

Complementary slackness

Assume that the primal and dual optimal values are equal, and attained at the optimal points x⋆, λ⋆, ν⋆. As in §5.5.2, the complementary slackness conditions follow directly from the equality f0(x⋆) = g(λ⋆, ν⋆), along with the definition of g. We have

    f0(x⋆) = g(λ⋆, ν⋆)
           ≤ f0(x⋆) + ∑_{i=1}^m λ⋆iᵀfi(x⋆) + ∑_{i=1}^p νi⋆hi(x⋆)
           ≤ f0(x⋆),

and therefore we conclude that x⋆ minimizes L(x, λ⋆, ν⋆), and also that the two sums in the second line are zero. Since the second sum is zero (since x⋆ satisfies the equality constraints), we have ∑_{i=1}^m λ⋆iᵀfi(x⋆) = 0. Since each term in this sum is nonpositive, we conclude that

    λ⋆iᵀfi(x⋆) = 0,   i = 1, . . . , m,   (5.94)

which generalizes the complementary slackness condition (5.48). From (5.94) we can conclude that

    λ⋆i ≻_{Ki∗} 0 ⟹ fi(x⋆) = 0,    fi(x⋆) ≺_{Ki} 0 ⟹ λ⋆i = 0.

However, in contrast to problems with scalar inequalities, it is possible to satisfy (5.94) with λ⋆i ≠ 0 and fi(x⋆) ≠ 0.

KKT conditions

Now we add the assumption that the functions fi, hi are differentiable, and generalize the KKT conditions of §5.5.3 to problems with generalized inequalities. Since x⋆ minimizes L(x, λ⋆, ν⋆), its gradient with respect to x vanishes at x⋆:

    ∇f0(x⋆) + ∑_{i=1}^m Dfi(x⋆)ᵀλ⋆i + ∑_{i=1}^p νi⋆∇hi(x⋆) = 0,

where Dfi(x⋆) ∈ R^{ki×n} is the derivative of fi evaluated at x⋆ (see §A.4.1). Thus, if strong duality holds, any primal optimal x⋆ and any dual optimal (λ⋆, ν⋆) must satisfy the optimality conditions (or KKT conditions)

    fi(x⋆) ≼_{Ki} 0,   i = 1, . . . , m
    hi(x⋆) = 0,   i = 1, . . . , p
    λ⋆i ≽_{Ki∗} 0,   i = 1, . . . , m
    λ⋆iᵀfi(x⋆) = 0,   i = 1, . . . , m
    ∇f0(x⋆) + ∑_{i=1}^m Dfi(x⋆)ᵀλ⋆i + ∑_{i=1}^p νi⋆∇hi(x⋆) = 0.   (5.95)

If the primal problem is convex, the converse also holds, i.e., the conditions (5.95) are sufficient conditions for optimality of x⋆, (λ⋆, ν⋆).

5.9.3 Perturbation and sensitivity analysis

The results of §5.6 can be extended to problems involving generalized inequalities. We consider the associated perturbed version of the problem,

    minimize   f0(x)
    subject to fi(x) ≼_{Ki} ui,   i = 1, . . . , m
               hi(x) = vi,   i = 1, . . . , p,

where ui ∈ R^{ki}, and v ∈ Rp. We define p⋆(u, v) as the optimal value of the perturbed problem. As in the case with scalar inequalities, p⋆ is a convex function when the original problem is convex.

Now let (λ⋆, ν⋆) be optimal for the dual of the original (unperturbed) problem, which we assume has zero duality gap. Then for all u and v we have

    p⋆(u, v) ≥ p⋆ − ∑_{i=1}^m λ⋆iᵀui − ν⋆ᵀv,

the analog of the global sensitivity inequality (5.57). The local sensitivity result holds as well: if p⋆(u, v) is differentiable at u = 0, v = 0, then the optimal dual variables λ⋆i satisfy the analog of (5.58),

    λ⋆i = −∇_{ui} p⋆(0, 0).

Example 5.13 Semidefinite program in inequality form. We consider a semidefinite program in inequality form, as in example 5.11. The primal problem is

    minimize   cᵀx
    subject to F(x) = x1F1 + · · · + xnFn + G ≼ 0,

with variable x ∈ Rn (and F1, . . . , Fn, G ∈ Sk), and the dual problem is

    maximize   tr(GZ)
    subject to tr(FiZ) + ci = 0,   i = 1, . . . , n
               Z ≽ 0,

with variable Z ∈ Sk.

Suppose that x⋆ and Z⋆ are primal and dual optimal, respectively, with zero duality gap. The complementary slackness condition is tr(F(x⋆)Z⋆) = 0. Since F(x⋆) ≼ 0 and Z⋆ ≽ 0, we can conclude that F(x⋆)Z⋆ = 0. Thus, the complementary slackness condition can be expressed as

    R(F(x⋆)) ⊥ R(Z⋆),

i.e., the ranges of the primal and dual matrices are orthogonal.

Let p⋆(U) denote the optimal value of the perturbed SDP

    minimize   cᵀx
    subject to F(x) = x1F1 + · · · + xnFn + G ≼ U.

Then we have, for all U, p⋆(U) ≥ p⋆ − tr(Z⋆U). If p⋆(U) is differentiable at U = 0, then we have

    ∇p⋆(0) = −Z⋆.

This means that for U small, the optimal value of the perturbed SDP is very close to (the lower bound) p⋆ − tr(Z⋆U).

5.9.4 Theorems of alternatives

We can derive theorems of alternatives for systems of generalized inequalities and equalities

    fi(x) ≼_{Ki} 0,   i = 1, . . . , m,   hi(x) = 0,   i = 1, . . . , p.   (5.96)

We will also consider systems with strict inequalities,

    fi(x) ≺_{Ki} 0,   i = 1, . . . , m,   hi(x) = 0,   i = 1, . . . , p,   (5.97)

where Ki ⊆ R^{ki} are proper cones. We assume that D = ∩_{i=0}^m dom fi ∩ ∩_{i=1}^p dom hi is nonempty.
Weak alternatives

We associate with the systems (5.96) and (5.97) the dual function

    g(λ, ν) = inf_{x∈D} ( ∑_{i=1}^m λiᵀfi(x) + ∑_{i=1}^p νihi(x) ),

where λ = (λ1, . . . , λm) with λi ∈ R^{ki} and ν ∈ Rp. In analogy with (5.76), we claim that

    λi ≽_{Ki∗} 0,   i = 1, . . . , m,   g(λ, ν) > 0   (5.98)

is a weak alternative to the system (5.96). To verify this, suppose there exists an x satisfying (5.96) and (λ, ν) satisfying (5.98). Then we have a contradiction:

    0 < g(λ, ν) ≤ λ1ᵀf1(x) + · · · + λmᵀfm(x) + ν1h1(x) + · · · + νphp(x) ≤ 0.

Therefore at least one of the two systems (5.96) and (5.98) must be infeasible, i.e., the two systems are weak alternatives. In a similar way, we can prove that (5.97) and the system

    λi ≽_{Ki∗} 0,   i = 1, . . . , m,   λ ≠ 0,   g(λ, ν) ≥ 0

form a pair of weak alternatives.

Strong alternatives

We now assume that the functions fi are Ki-convex, and the functions hi are affine. We first consider a system with strict inequalities

    fi(x) ≺_{Ki} 0,   i = 1, . . . , m,   Ax = b,   (5.99)

and its alternative

    λi ≽_{Ki∗} 0,   i = 1, . . . , m,   λ ≠ 0,   g(λ, ν) ≥ 0.   (5.100)

We have already seen that (5.99) and (5.100) are weak alternatives. They are also strong alternatives provided the following constraint qualification holds: there exists an x̃ ∈ relint D with Ax̃ = b.

To prove this, we select a set of vectors ei ≻_{Ki} 0, and consider the problem

    minimize   s
    subject to fi(x) ≼_{Ki} s ei,   i = 1, . . . , m
               Ax = b,   (5.101)

with variables x and s ∈ R. Slater's condition holds since (x̃, s̃) satisfies the strict inequalities fi(x̃) ≺_{Ki} s̃ei provided s̃ is large enough. The dual of (5.101) is

    maximize   g(λ, ν)
    subject to λi ≽_{Ki∗} 0,   i = 1, . . . , m
               ∑_{i=1}^m eiᵀλi = 1,   (5.102)

with variables λ = (λ1, . . . , λm) and ν.

Now suppose the system (5.99) is infeasible. Then the optimal value of (5.101) is nonnegative. Since Slater's condition is satisfied, we have strong duality and the dual optimum is attained. Therefore there exist (λ̃, ν̃) that satisfy the constraints of (5.102) and g(λ̃, ν̃) ≥ 0, i.e., the system (5.100) has a solution.

As we noted in the case of scalar inequalities, existence of an x ∈ relint D with Ax = b is not sufficient for the system of nonstrict inequalities

    fi(x) ≼_{Ki} 0,   i = 1, . . . , m,   Ax = b

and its alternative

    λi ≽_{Ki∗} 0,   i = 1, . . . , m,   g(λ, ν) > 0

to be strong alternatives. An additional condition is required, e.g., that the optimal value of (5.101) is attained.
Example 5.14 Feasibility of a linear matrix inequality. The following systems are strong alternatives:

    F(x) = x1F1 + · · · + xnFn + G ≺ 0,

where Fi, G ∈ Sk, and

    Z ≽ 0,   Z ≠ 0,   tr(GZ) ≥ 0,   tr(FiZ) = 0,   i = 1, . . . , n,

where Z ∈ Sk. This follows from the general result, if we take for K the positive semidefinite cone Sk+, and

    g(Z) = inf_x tr(F(x)Z) = tr(GZ)   if tr(FiZ) = 0, i = 1, . . . , n,
                             −∞       otherwise.

The nonstrict inequality case is slightly more involved, and we need an extra assumption on the matrices Fi to have strong alternatives. One such condition is

    ∑_{i=1}^n viFi ≽ 0 ⟹ ∑_{i=1}^n viFi = 0.

If this condition holds, the following systems are strong alternatives:

    F(x) = x1F1 + · · · + xnFn + G ≼ 0

and

    Z ≽ 0,   tr(GZ) > 0,   tr(FiZ) = 0,   i = 1, . . . , n

(see exercise 5.44).

Bibliography
Lagrange duality is covered in detail by Luenberger [Lue69, chapter 8], Rockafellar [Roc70, part VI], Whittle [Whi71], Hiriart-Urruty and Lemaréchal [HUL93], and Bertsekas, Nedić, and Ozdaglar [Ber03]. The name is derived from Lagrange's method of multipliers for optimization problems with equality constraints; see Courant and Hilbert [CH53, chapter IV].
The max-min result for matrix games in §5.2.5 predates linear programming duality. It is proved via a theorem of alternatives by von Neumann and Morgenstern [vNM53, page 153]. The strong duality result for linear programming on page 227 is due to von Neumann [vN63] and Gale, Kuhn, and Tucker [GKT51]. Strong duality for the nonconvex quadratic problem (5.32) is a fundamental result in the literature on trust region methods for nonlinear optimization (Nocedal and Wright [NW99, page 78]). It is also related to the S-procedure in control theory, discussed in appendix §B.1. For an extension of the proof of strong duality of §5.3.2 to the refined Slater condition (5.27), see Rockafellar [Roc70, page 277].
Conditions that guarantee the saddle-point property (5.47) can be found in Rockafellar [Roc70, part VII] and Bertsekas, Nedić, and Ozdaglar [Ber03, chapter 2]; see also exercise 5.25.
The KKT conditions are named after Karush (whose unpublished 1939 Master's thesis is summarized in Kuhn [Kuh76]), Kuhn, and Tucker [KT51]. Related optimality conditions were also derived by John [Joh85]. The water-filling algorithm in example 5.2 has applications in information theory and communications (Cover and Thomas [CT91, page 252]).
Farkas' lemma was published by Farkas [Far02]. It is the best known theorem of alternatives for systems of linear inequalities and equalities, but many variants exist; see Mangasarian [Man94, §2.4]. The application of Farkas' lemma to asset pricing (example 5.10) is discussed by Bertsimas and Tsitsiklis [BT97, page 167] and Ross [Ros99].
The extension of Lagrange duality to problems with generalized inequalities appears in Isii [Isi64], Luenberger [Lue69, chapter 8], Berman [Ber73], and Rockafellar [Roc89, page 47]. It is discussed in the context of cone programming in Nesterov and Nemirovski [NN94, §4.2] and Ben-Tal and Nemirovski [BTN01, lecture 2]. Theorems of alternatives for generalized inequalities were studied by Ben-Israel [BI69], Berman and Ben-Israel [BBI71], and Craven and Kohila [CK77]. Bellman and Fan [BF63], Wolkowicz [Wol81], and Lasserre [Las95] give extensions of Farkas’ lemma to linear matrix inequalities.
Exercises

Basic definitions

5.1 A simple example. Consider the optimization problem
    minimize   x2 + 1
    subject to (x−2)(x−4) ≤ 0,
with variable x ∈ R.
(a) Analysis of primal problem. Give the feasible set, the optimal value, and the optimal solution.
(b) Lagrangian and dual function. Plot the objective x2 +1 versus x. On the same plot, show the feasible set, optimal point and value, and plot the Lagrangian L(x, λ) versus x for a few positive values of λ. Verify the lower bound property (p⋆ ≥ infx L(x, λ) for λ ≥ 0). Derive and sketch the Lagrange dual function g.
(c) Lagrange dual problem. State the dual problem, and verify that it is a concave maximization problem. Find the dual optimal value and dual optimal solution λ⋆. Does strong duality hold?
(d) Sensitivity analysis. Let p⋆(u) denote the optimal value of the problem
    minimize   x2 + 1
    subject to (x−2)(x−4) ≤ u,
as a function of the parameter u. Plot p⋆(u). Verify that dp⋆(0)/du = −λ⋆.
5.2 Weak duality for unbounded and infeasible problems. The weak duality inequality, d⋆ ≤ p⋆, clearly holds when d⋆ = −∞ or p⋆ = ∞. Show that it holds in the other two cases as well: If p⋆ = −∞, then we must have d⋆ = −∞, and also, if d⋆ = ∞, then we must have p⋆ = ∞.

5.3 Problems with one inequality constraint. Express the dual problem of
    minimize   cT x
    subject to f(x) ≤ 0,
with c ≠ 0, in terms of the conjugate f∗. Explain why the problem you give is convex. We do not assume f is convex.

Examples and applications
5.4 Interpretation of LP dual via relaxed problems. Consider the inequality form LP
    minimize   cT x
    subject to Ax ≼ b,
with A ∈ Rm×n, b ∈ Rm. In this exercise we develop a simple geometric interpretation
of the dual LP (5.22).
Let w ∈ Rm+ . If x is feasible for the LP, i.e., satisfies Ax ≼ b, then it also satisfies the inequality
wT Ax ≤ wT b.
Geometrically, for any w ≽ 0, the halfspace Hw = {x | wT Ax ≤ wT b} contains the feasible
set for the LP. Therefore if we minimize the objective cT x over the halfspace Hw we get a lower bound on p⋆.
(a) Derive an expression for the minimum value of cT x over the halfspace Hw (which will depend on the choice of w ≽ 0).
(b) Formulate the problem of finding the best such bound, by maximizing the lower bound over w ≽ 0.
(c) Relate the results of (a) and (b) to the Lagrange dual of the LP, given by (5.22).

5.5 Dual of general LP. Find the dual function of the LP
    minimize   cT x
    subject to Gx ≼ h
               Ax = b.
Give the dual problem, and make the implicit equality constraints explicit.
5.6 Lower bounds in Chebyshev approximation from least-squares. Consider the Chebyshev
or l∞-norm approximation problem
minimize ∥Ax − b∥∞, (5.103)
where A ∈ Rm×n and rank A = n. Let xch denote an optimal solution (there may be multiple optimal solutions; xch denotes one of them).
The Chebyshev problem has no closed-form solution, but the corresponding least-squares problem does. Define
xls = argmin ∥Ax − b∥2 = (AT A)−1AT b.
We address the following question. Suppose that for a particular A and b we have computed the least-squares solution xls (but not xch). How suboptimal is xls for the Chebyshev problem? In other words, how much larger is ∥Axls − b∥∞ than ∥Axch − b∥∞?
(a) Prove the lower bound
    ∥Axls − b∥∞ ≤ √m ∥Axch − b∥∞,
using the fact that for all z ∈ Rm,
    (1/√m) ∥z∥2 ≤ ∥z∥∞ ≤ ∥z∥2.
(b) In example 5.6 (page 254) we derived a dual for the general norm approximation problem. Applying the results to the l∞-norm (and its dual norm, the l1-norm), we can state the following dual for the Chebyshev approximation problem:
maximize bT ν
subject to ∥ν∥1 ≤ 1 (5.104)
AT ν = 0.
Any feasible ν corresponds to a lower bound bT ν on ∥Axch − b∥∞.
Denote the least-squares residual as rls = b − Axls. Assuming rls ≠ 0, show that
    ν̂ = −rls/∥rls∥1,   ν̃ = rls/∥rls∥1,
are both feasible in (5.104). By duality bT ν̂ and bT ν̃ are lower bounds on ∥Axch − b∥∞. Which is the better bound? How do these bounds compare with the bound derived in part (a)?
5.7 Piecewise-linear minimization. We consider the convex piecewise-linear minimization problem
    minimize   maxi=1,...,m (aTi x + bi)    (5.105)
with variable x ∈ Rn.
(a) Derive a dual problem, based on the Lagrange dual of the equivalent problem
    minimize   maxi=1,...,m yi
    subject to aTi x + bi = yi, i = 1,...,m,
with variables x ∈ Rn, y ∈ Rm.
(b) Formulate the piecewise-linear minimization problem (5.105) as an LP, and form the dual of the LP. Relate the LP dual to the dual obtained in part (a).
(c) Suppose we approximate the objective function in (5.105) by the smooth function
    f0(x) = log ( ∑mi=1 exp(aTi x + bi) ),
and solve the unconstrained geometric program
    minimize   log ( ∑mi=1 exp(aTi x + bi) ).    (5.106)
A dual of this problem is given by (5.62). Let p⋆pwl and p⋆gp be the optimal values of (5.105) and (5.106), respectively. Show that
    0 ≤ p⋆gp − p⋆pwl ≤ log m.
(d) Derive similar bounds for the difference between p⋆pwl and the optimal value of
    minimize   (1/γ) log ( ∑mi=1 exp(γ(aTi x + bi)) ),
where γ > 0 is a parameter. What happens as we increase γ?
5.8 Relate the two dual problems derived in example 5.9 on page 257.
5.9 Suboptimality of a simple covering ellipsoid. Recall the problem of determining the minimum volume ellipsoid, centered at the origin, that contains the points a1,...,am ∈ Rn (problem (5.14), page 222):
    minimize   f0(X) = log det(X−1)
    subject to aTi X ai ≤ 1, i = 1,...,m,
with dom f0 = Sn++. We assume that the vectors a1,...,am span Rn (which implies that the problem is bounded below).
(a) Show that the matrix
    Xsim = ( ∑mk=1 ak aTk )−1
is feasible. Hint. Show that
    [ ∑mk=1 ak aTk   ai ]
    [ aTi             1 ]  ≽ 0,
and use Schur complements (§A.5.5) to prove that aTi Xsim ai ≤ 1 for i = 1,...,m.
(b) Now we establish a bound on how suboptimal the feasible point Xsim is, via the dual problem
    maximize   log det ( ∑mi=1 λi ai aTi ) − 1T λ + n
    subject to λ ≽ 0,
with the implicit constraint ∑mi=1 λi ai aTi ≻ 0. (This dual is derived on page 222.)
To derive a bound, we restrict our attention to dual variables of the form λ = t1, where t > 0. Find (analytically) the optimal value of t, and evaluate the dual objective at this λ. Use this to prove that the volume of the ellipsoid {u | uT Xsim u ≤ 1} is no more than a factor (m/n)n/2 more than the volume of the minimum volume ellipsoid.
5.10 Optimal experiment design. The following problems arise in experiment design (see §7.5).
(a) D-optimal design.
    minimize   log det ( ∑pi=1 xi vi viT )−1
    subject to x ≽ 0, 1T x = 1.
(b) A-optimal design.
    minimize   tr ( ∑pi=1 xi vi viT )−1
    subject to x ≽ 0, 1T x = 1.
The domain of both problems is {x | ∑pi=1 xi vi viT ≻ 0}. The variable is x ∈ Rp; the vectors v1,...,vp ∈ Rn are given. Derive dual problems by first introducing a new variable X ∈ Sn and an equality constraint X = ∑pi=1 xi vi viT, and then applying Lagrange duality. Simplify the dual problems as much as you can.
5.11 Derive a dual problem for
    minimize   ∑Ni=1 ∥Ai x + bi∥2 + (1/2)∥x − x0∥2.
The problem data are Ai ∈ Rmi×n, bi ∈ Rmi , and x0 ∈ Rn. First introduce new variables yi ∈ Rmi and equality constraints yi = Aix + bi.
5.12 Analytic centering. Derive a dual problem for
    minimize   −∑mi=1 log(bi − aTi x)
with domain {x | aTi x < bi, i = 1,...,m}. First introduce new variables yi and equality constraints yi = bi − aTi x. (The solution of this problem is called the analytic center of the linear inequalities aTi x ≤ bi, i = 1,...,m. Analytic centers have geometric applications (see §8.5.3), and play an important role in barrier methods (see chapter 11).)

5.13 Lagrangian relaxation of Boolean LP. A Boolean linear program is an optimization problem of the form
    minimize   cT x
    subject to Ax ≼ b
               xi ∈ {0,1}, i = 1,...,n,
and is, in general, very difficult to solve. In exercise 4.15 we studied the LP relaxation of this problem,
    minimize   cT x
    subject to Ax ≼ b    (5.107)
               0 ≤ xi ≤ 1, i = 1,...,n,
which is far easier to solve, and gives a lower bound on the optimal value of the Boolean LP. In this problem we derive another lower bound for the Boolean LP, and work out the relation between the two lower bounds.
(a) Lagrangian relaxation. The Boolean LP can be reformulated as the problem
    minimize   cT x
    subject to Ax ≼ b
               xi(1 − xi) = 0, i = 1,...,n,
which has quadratic equality constraints. Find the Lagrange dual of this problem. The optimal value of the dual problem (which is convex) gives a lower bound on the optimal value of the Boolean LP. This method of finding a lower bound on the optimal value is called Lagrangian relaxation.
(b) Show that the lower bound obtained via Lagrangian relaxation, and via the LP relaxation (5.107), are the same. Hint. Derive the dual of the LP relaxation (5.107).

5.14 A penalty method for equality constraints. We consider the problem
    minimize   f0(x)
    subject to Ax = b,    (5.108)
where f0 : Rn → R is convex and differentiable, and A ∈ Rm×n with rank A = m. In a quadratic penalty method, we form an auxiliary function
    φ(x) = f0(x) + α∥Ax − b∥2,
where α > 0 is a parameter. This auxiliary function consists of the objective plus the penalty term α∥Ax − b∥2. The idea is that a minimizer of the auxiliary function, x̃, should be an approximate solution of the original problem. Intuition suggests that the larger the penalty weight α, the better the approximation x̃ to a solution of the original problem.
Suppose x̃ is a minimizer of φ. Show how to find, from x̃, a dual feasible point for (5.108). Find the corresponding lower bound on the optimal value of (5.108).
5.15 Consider the problem
    minimize   f0(x)
    subject to fi(x) ≤ 0, i = 1,...,m,    (5.109)
where the functions fi : Rn → R are differentiable and convex. Let h1,...,hm : R → R be increasing differentiable convex functions. Show that
    φ(x) = f0(x) + ∑mi=1 hi(fi(x))
is convex. Suppose x̃ minimizes φ. Show how to find from x̃ a feasible point for the dual of (5.109). Find the corresponding lower bound on the optimal value of (5.109).

5.16 An exact penalty method for inequality constraints. Consider the problem
    minimize   f0(x)
    subject to fi(x) ≤ 0, i = 1,...,m,    (5.110)
where the functions fi : Rn → R are differentiable and convex. In an exact penalty method, we solve the auxiliary problem
    minimize   φ(x) = f0(x) + α maxi=1,...,m max{0, fi(x)},    (5.111)
where α > 0 is a parameter. The second term in φ penalizes deviations of x from feasibility. The method is called an exact penalty method if for sufficiently large α, solutions of the auxiliary problem (5.111) also solve the original problem (5.110).
(a) Show that φ is convex.
(b) The auxiliary problem can be expressed as
    minimize   f0(x) + αy
    subject to fi(x) ≤ y, i = 1,...,m
               0 ≤ y,
where the variables are x and y ∈ R. Find the Lagrange dual of this problem, and express it in terms of the Lagrange dual function g of (5.110).
(c) Use the result in (b) to prove the following property. Suppose λ⋆ is an optimal solution of the Lagrange dual of (5.110), and that strong duality holds. If α > 1T λ⋆, then any solution of the auxiliary problem (5.111) is also an optimal solution of (5.110).
5.17 Robust linear programming with polyhedral uncertainty. Consider the robust LP
    minimize   cT x
    subject to supa∈Pi aT x ≤ bi, i = 1,...,m,
with variable x ∈ Rn, where Pi = {a | Ci a ≼ di}. The problem data are c ∈ Rn, Ci ∈ Rmi×n, di ∈ Rmi, and b ∈ Rm. We assume the polyhedra Pi are nonempty.
Show that this problem is equivalent to the LP
    minimize   cT x
    subject to dTi zi ≤ bi, i = 1,...,m
               CiT zi = x, i = 1,...,m
               zi ≽ 0, i = 1,...,m,
with variables x ∈ Rn and zi ∈ Rmi, i = 1,...,m. Hint. Find the dual of the problem of maximizing aTi x over ai ∈ Pi (with variable ai).
5.18 Separating hyperplane between two polyhedra. Formulate the following problem as an LP or an LP feasibility problem. Find a separating hyperplane that strictly separates two polyhedra
    P1 = {x | Ax ≼ b},   P2 = {x | Cx ≼ d},
i.e., find a vector a ∈ Rn and a scalar γ such that
    aT x > γ for x ∈ P1,   aT x < γ for x ∈ P2.
You can assume that P1 and P2 do not intersect. Hint. The vector a and scalar γ must satisfy
    infx∈P1 aT x > γ > supx∈P2 aT x.
Use LP duality to simplify the infimum and supremum in these conditions.

5.19 The sum of the largest elements of a vector. Define f : Rn → R as
    f(x) = ∑ri=1 x[i],
where r is an integer between 1 and n, and x[1] ≥ x[2] ≥ ··· ≥ x[r] are the components of x sorted in decreasing order. In other words, f(x) is the sum of the r largest elements of x. In this problem we study the constraint
    f(x) ≤ α.
As we have seen in chapter 3, page 80, this is a convex constraint, and equivalent to a set of n!/(r!(n − r)!) linear inequalities
    xi1 + ··· + xir ≤ α,   1 ≤ i1 < i2 < ··· < ir ≤ n.

5.21 A convex problem in which strong duality fails. Consider the problem
    minimize   e−x
    subject to x2/y ≤ 0,
with variables x and y, and domain D = {(x, y) | y > 0}.
(a) Verify that this is a convex optimization problem. Find the optimal value.
(b) Give the Lagrange dual problem, and find the optimal solution λ⋆ and optimal value d⋆ of the dual problem. What is the optimal duality gap?
(c) Does Slater’s condition hold for this problem?
(d) What is the optimal value p⋆(u) of the perturbed problem
minimize e−x subject to x2/y ≤ u
as a function of u? Verify that the global sensitivity inequality p⋆(u) ≥ p⋆(0) − λ⋆u
does not hold.
5.22 Geometric interpretation of duality. For each of the following optimization problems, draw a sketch of the sets
G = {(u,t)|∃x∈D, f0(x)=t, f1(x)=u}, A = {(u,t)|∃x∈D, f0(x)≤t, f1(x)≤u},
give the dual problem, and solve the primal and dual problems. Is the problem convex? Is Slater’s condition satisfied? Does strong duality hold?
The domain of the problem is R unless otherwise stated.
(a) Minimize x subject to x2 ≤ 1.
(b) Minimize x subject to x2 ≤ 0.
(c) Minimize x subject to |x| ≤ 0.
(d) Minimize x subject to f1(x) ≤ 0, where
    f1(x) = { −x + 2    x ≥ 1
              x         −1 ≤ x ≤ 1
              −x − 2    x ≤ −1.
(e) Minimize x3 subject to −x + 1 ≤ 0.
(f) Minimize x3 subject to −x + 1 ≤ 0 with domain D = R+.

5.23 Strong duality in linear programming. We prove that strong duality holds for the LP
    minimize   cT x
    subject to Ax ≼ b
and its dual
    maximize   −bT z
    subject to AT z + c = 0, z ≽ 0,
provided at least one of the problems is feasible. In other words, the only possible exception to strong duality occurs when p⋆ = ∞ and d⋆ = −∞.
(a) Suppose p⋆ is finite and x⋆ is an optimal solution. (If finite, the optimal value of an LP is attained.) Let I ⊆ {1, 2, . . . , m} be the set of active constraints at x⋆:
    aTi x⋆ = bi, i ∈ I,    aTi x⋆ < bi, i ∉ I.

5.38 Option pricing. We consider a market with three assets: a riskless asset with fixed return r > 1 over the investment period of interest (for example, a bond), a stock, and an option on the stock. The option gives us the right to purchase the stock at the end of the period, for a predetermined price K.
We consider two scenarios. In the first scenario, the price of the stock goes up from S at the beginning of the period, to Su at the end of the period, where u > r. In this scenario, we exercise the option only if Su > K, in which case we make a profit of Su−K. Otherwise, we do not exercise the option, and make zero profit. The value of the option at the end of the period, in the first scenario, is therefore max{0, Su − K}.
In the second scenario, the price of the stock goes down from S to Sd, where d < 1. The value at the end of the period is max{0, Sd − K}.
In the notation of example 5.10,
    V = [ r   uS   max{0, Su − K} ]
        [ r   dS   max{0, Sd − K} ],
    p1 = 1,   p2 = S,   p3 = C,
where C is the price of the option. Show that for given r, S, K, u, d, the option price C is uniquely determined by the no-arbitrage condition. In other words, the market for the option is complete.

Generalized inequalities

5.39 SDP relaxations of two-way partitioning problem. We consider the two-way partitioning problem (5.7), described on page 219,
    minimize   xT W x
    subject to x2i = 1, i = 1,...,n,    (5.113)
with variable x ∈ Rn. The Lagrange dual of this (nonconvex) problem is given by the SDP
    maximize   −1T ν
    subject to W + diag(ν) ≽ 0    (5.114)
with variable ν ∈ Rn. The optimal value of this SDP gives a lower bound on the optimal value of the partitioning problem (5.113). In this exercise we derive another SDP that gives a lower bound on the optimal value of the two-way partitioning problem, and explore the connection between the two SDPs.
(a) Two-way partitioning problem in matrix form. Show that the two-way partitioning problem can be cast as
    minimize   tr(WX)
    subject to X ≽ 0, rank X = 1
               Xii = 1, i = 1,...,n,
with variable X ∈ Sn. Hint. Show that if X is feasible, then it has the form X = xxT, where x ∈ Rn satisfies xi ∈ {−1, 1} (and vice versa).
(b) SDP relaxation of two-way partitioning problem. Using the formulation in part (a), we can form the relaxation
    minimize   tr(WX)
    subject to X ≽ 0    (5.115)
               Xii = 1, i = 1,...,n,
with variable X ∈ Sn. This problem is an SDP, and therefore can be solved efficiently. Explain why its optimal value gives a lower bound on the optimal value of the two-way partitioning problem (5.113). What can you say if an optimal point X⋆ for this SDP has rank one?
(c) We now have two SDPs that give a lower bound on the optimal value of the two-way partitioning problem (5.113): the SDP relaxation (5.115) found in part (b), and the Lagrange dual of the two-way partitioning problem, given in (5.114). What is the relation between the two SDPs? What can you say about the lower bounds found by them? Hint: Relate the two SDPs via duality.

5.40 E-optimal experiment design. A variation on the two optimal experiment design problems of exercise 5.10 is the E-optimal design problem
    minimize   λmax(( ∑pi=1 xi vi viT )−1)
    subject to x ≽ 0, 1T x = 1.
(See also §7.5.) Derive a dual for this problem, by first reformulating it as
    minimize   1/t
    subject to ∑pi=1 xi vi viT ≽ tI
               x ≽ 0, 1T x = 1,
with variables t ∈ R, x ∈ Rp and domain R++ × Rp, and applying Lagrange duality. Simplify the dual problem as much as you can.

5.41 Dual of fastest mixing Markov chain problem. On page 174, we encountered the SDP
    minimize   t
    subject to −tI ≼ P − (1/n)11T ≼ tI
               P1 = 1
               Pij ≥ 0, i,j = 1,...,n
               Pij = 0 for (i,j) ∉ E,
with variables t ∈ R, P ∈ Sn. Show that the dual of this problem can be expressed as
    maximize   1T z − (1/n)1T Y 1
    subject to ∥Y∥2∗ ≤ 1
               (zi + zj) ≤ Yij for (i,j) ∈ E
with variables z ∈ Rn and Y ∈ Sn. The norm ∥·∥2∗ is the dual of the spectral norm on Sn: ∥Y∥2∗ = ∑ni=1 |λi(Y)|, the sum of the absolute values of the eigenvalues of Y. (See §A.1.6, page 637.)

5.42 Lagrange dual of conic form problem in inequality form. Find the Lagrange dual problem of the conic form problem in inequality form
    minimize   cT x
    subject to Ax ≼K b,
where A ∈ Rm×n, b ∈ Rm, and K is a proper cone in Rm. Make any implicit equality constraints explicit.

5.43 Dual of SOCP.
Show that the dual of the SOCP
    minimize   fT x
    subject to ∥Ai x + bi∥2 ≤ cTi x + di, i = 1,...,m,
with variables x ∈ Rn, can be expressed as
    maximize   ∑mi=1 (bTi ui − di vi)
    subject to ∑mi=1 (ATi ui − ci vi) + f = 0
               ∥ui∥2 ≤ vi, i = 1,...,m,
with variables ui ∈ Rni, vi ∈ R, i = 1,...,m. The problem data are f ∈ Rn, Ai ∈ Rni×n, bi ∈ Rni, ci ∈ Rn and di ∈ R, i = 1,...,m.
Derive the dual in the following two ways.
(a) Introduce new variables yi ∈ Rni and ti ∈ R and equalities yi = Ai x + bi, ti = cTi x + di, and derive the Lagrange dual.
(b) Start from the conic formulation of the SOCP and use the conic dual. Use the fact that the second-order cone is self-dual.

5.44 Strong alternatives for nonstrict LMIs. In example 5.14, page 270, we mentioned that the system
    Z ≽ 0,   tr(GZ) > 0,   tr(FiZ) = 0, i = 1,...,n,    (5.116)
is a strong alternative for the nonstrict LMI
    F(x) = x1F1 + ··· + xnFn + G ≼ 0,    (5.117)
if the matrices Fi satisfy
    ∑ni=1 vi Fi ≽ 0 =⇒ ∑ni=1 vi Fi = 0.    (5.118)
In this exercise we prove this result, and give an example to illustrate that the systems are not always strong alternatives.
(a) Suppose (5.118) holds, and that the optimal value of the auxiliary SDP
    minimize   s
    subject to F(x) ≼ sI
is positive. Show that the optimal value is attained. It follows from the discussion in §5.9.4 that the systems (5.117) and (5.116) are strong alternatives.
Hint. The proof simplifies if you assume, without loss of generality, that the matrices F1,...,Fn are independent, so (5.118) may be replaced by ∑ni=1 vi Fi ≽ 0 ⇒ v = 0.
(b) Take n = 1, and
    G = [ 0  1 ]        F1 = [ 0  0 ]
        [ 1  0 ],            [ 0  1 ].
Show that (5.117) and (5.116) are both infeasible.
Part II Applications
Chapter 6
Approximation and fitting
6.1 Norm approximation
6.1.1 Basic norm approximation problem
The simplest norm approximation problem is an unconstrained problem of the form
minimize ∥Ax − b∥ (6.1)
where A ∈ Rm×n and b ∈ Rm are problem data, x ∈ Rn is the variable, and ∥·∥ is a norm on Rm. A solution of the norm approximation problem is sometimes called an approximate solution of Ax ≈ b, in the norm ∥ · ∥. The vector
r = Ax − b
is called the residual for the problem; its components are sometimes called the individual residuals associated with x.
The norm approximation problem (6.1) is a convex problem, and is solvable, i.e., there is always at least one optimal solution. Its optimal value is zero if and only if b ∈ R(A); the problem is more interesting and useful, however, when b ̸∈ R(A). We can assume without loss of generality that the columns of A are independent; in particular, that m ≥ n. When m = n the optimal point is simply A−1b, so we can assume that m > n.
Approximation interpretation
By expressing Ax as
Ax=x1a1 +···+xnan,
where a1,…,an ∈ Rm are the columns of A, we see that the goal of the norm approximation problem is to fit or approximate the vector b by a linear combination of the columns of A, as closely as possible, with deviation measured in the norm ∥·∥.
The approximation problem is also called the regression problem. In this context the vectors a1, . . . , an are called the regressors, and the vector x1a1 + · · · + xnan,
where x is an optimal solution of the problem, is called the regression of b (onto the regressors).
Estimation interpretation
A closely related interpretation of the norm approximation problem arises in the problem of estimating a parameter vector on the basis of an imperfect linear vector measurement. We consider a linear measurement model
y = Ax + v,
where y ∈ Rm is a vector measurement, x ∈ Rn is a vector of parameters to be estimated, and v ∈ Rm is some measurement error that is unknown, but presumed to be small (in the norm ∥ · ∥). The estimation problem is to make a sensible guess as to what x is, given y.
If we guess that x has the value xˆ, then we are implicitly making the guess that v has the value y − Axˆ. Assuming that smaller values of v (measured by ∥ · ∥) are more plausible than larger values, the most plausible guess for x is
xˆ = argminz ∥Az − y∥.
(These ideas can be expressed more formally in a statistical framework; see chap-
ter 7.)
Geometric interpretation
We consider the subspace A = R(A) ⊆ Rm, and a point b ∈ Rm. A projection of the point b onto the subspace A, in the norm ∥ · ∥, is any point in A that is closest to b, i.e., any optimal point for the problem
minimize ∥u − b∥ subject to u ∈ A.
Parametrizing an arbitrary element of R(A) as u = Ax, we see that solving the norm approximation problem (6.1) is equivalent to computing a projection of b onto A.
Design interpretation
We can interpret the norm approximation problem (6.1) as a problem of optimal design. The n variables x1, . . . , xn are design variables whose values are to be determined. The vector y = Ax gives a vector of m results, which we assume to be linear functions of the design variables x. The vector b is a vector of target or desired results. The goal is to choose a vector of design variables that achieves, as closely as possible, the desired results, i.e., Ax ≈ b. We can interpret the residual vector r as the deviation between the actual results (i.e., Ax) and the desired or target results (i.e., b). If we measure the quality of a design by the norm of the deviation between the actual results and the desired results, then the norm approximation problem (6.1) is the problem of finding the best design.
Weighted norm approximation problems
An extension of the norm approximation problem is the weighted norm approximation problem
    minimize ∥W(Ax − b)∥
where the problem data W ∈ Rm×m is called the weighting matrix. The weighting matrix is often diagonal, in which case it gives different relative emphasis to different components of the residual vector r = Ax − b.
The weighted norm problem can be considered as a norm approximation problem with norm ∥·∥, and data Ã = WA, b̃ = Wb, and therefore treated as a standard norm approximation problem (6.1). Alternatively, the weighted norm approximation problem can be considered a norm approximation problem with data A and b, and the W-weighted norm defined by
    ∥z∥W = ∥Wz∥
(assuming here that W is nonsingular).
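As a small computational illustration (added here, not part of the original text; the data are random placeholders), the reduction to a standard problem with data WA, Wb is immediate in Python:

    import numpy as np

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((100, 30)), rng.standard_normal(100)
    # A hypothetical diagonal weighting: emphasize the first 10 residuals.
    w = np.r_[np.full(10, 5.0), np.ones(90)]
    # Weighted least-squares approximation, solved with data (WA, Wb).
    x_w, *_ = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)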
Least-squares approximation
The most common norm approximation problem involves the Euclidean or l2- norm. By squaring the objective, we obtain an equivalent problem which is called the least-squares approximation problem,
minimize ∥Ax − b∥2² = r1² + r2² + ··· + rm²,
where the objective is the sum of squares of the residuals. This problem can be
solved analytically by expressing the objective as the convex quadratic function f(x) = xT AT Ax − 2bT Ax + bT b.
A point x minimizes f if and only if
∇f(x) = 2AT Ax − 2AT b = 0,
i.e., if and only if x satisfies the so-called normal equations AT Ax = AT b,
which always have a solution. Since we assume the columns of A are independent, the least-squares approximation problem has the unique solution x = (AT A)−1AT b.
Chebyshev or minimax approximation
When the l∞-norm is used, the norm approximation problem minimize ∥Ax − b∥∞ = max{|r1|, . . . , |rm|}
is called the Chebyshev approximation problem, or minimax approximation problem, since we are to minimize the maximum (absolute value) residual. The Chebyshev approximation problem can be cast as an LP
minimize t
subjectto −t1≼Ax−b≼t1,
with variables x ∈ Rn and t ∈ R.
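A sketch of this LP formulation using scipy (an added illustration under the stated formulation; the analogous l1-norm LP below can be handled the same way, with t ∈ Rm in place of the scalar t):

    import numpy as np
    from scipy.optimize import linprog

    def chebyshev_approx(A, b):
        # Minimize ||Ax - b||_inf via the LP in (x, t): -t1 <= Ax - b <= t1.
        m, n = A.shape
        c = np.r_[np.zeros(n), 1.0]                 # objective: minimize t
        G = np.block([[A, -np.ones((m, 1))],        #   Ax - b <= t1
                      [-A, -np.ones((m, 1))]])      # -(Ax - b) <= t1
        h = np.r_[b, -b]
        res = linprog(c, A_ub=G, b_ub=h, bounds=[(None, None)] * (n + 1))
        return res.x[:n], res.x[n]                  # minimizer and optimal value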
Sum of absolute residuals approximation

When the l1-norm is used, the norm approximation problem
    minimize ∥Ax − b∥1 = |r1| + ··· + |rm|
is called the sum of (absolute) residuals approximation problem, or, in the context of estimation, a robust estimator (for reasons that will be clear soon). Like the Chebyshev approximation problem, the l1-norm approximation problem can be cast as an LP
    minimize   1T t
    subject to −t ≼ Ax − b ≼ t,
with variables x ∈ Rn and t ∈ Rm.

6.1.2 Penalty function approximation

In lp-norm approximation, for 1 ≤ p < ∞, the objective is (|r1|p + ··· + |rm|p)1/p. As in least-squares problems, we can consider the equivalent problem with objective |r1|p + ··· + |rm|p, which is a separable and symmetric function of the residuals. In particular, the objective depends only on the amplitude distribution of the residuals, i.e., the residuals in sorted order.
We will consider a useful generalization of the lp-norm approximation problem, in which the objective depends only on the amplitude distribution of the residuals. The penalty function approximation problem has the form
    minimize   φ(r1) + ··· + φ(rm)
    subject to r = Ax − b,    (6.2)
where φ : R → R is called the (residual) penalty function. We assume that φ is convex, so the penalty function approximation problem is a convex optimization problem. In many cases, the penalty function φ is symmetric, nonnegative, and satisfies φ(0) = 0, but we will not use these properties in our analysis.

Interpretation

We can interpret the penalty function approximation problem (6.2) as follows. For the choice x, we obtain the approximation Ax of b, which has the associated residual vector r. A penalty function assesses a cost or penalty for each component of residual, given by φ(ri); the total penalty is the sum of the penalties for each residual, i.e., φ(r1) + ··· + φ(rm). Different choices of x lead to different resulting residuals, and therefore, different total penalties. In the penalty function approximation problem, we minimize the total penalty incurred by the residuals.

Figure 6.1 Some common penalty functions: the quadratic penalty function φ(u) = u2, the deadzone-linear penalty function with deadzone width a = 1/4, and the log barrier penalty function with limit a = 1.

Example 6.1 Some common penalty functions and associated approximation problems.
• By taking φ(u) = |u|p, where p ≥ 1, the penalty function approximation problem is equivalent to the lp-norm approximation problem. In particular, the quadratic penalty function φ(u) = u2 yields least-squares or Euclidean norm approximation, and the absolute value penalty function φ(u) = |u| yields l1-norm approximation.
• The deadzone-linear penalty function (with deadzone width a > 0) is given by
    φ(u) = { 0         |u| ≤ a
             |u| − a   |u| > a.
The deadzone-linear function assesses no penalty for residuals smaller than a.
• The log barrier penalty function (with limit a > 0) has the form
    φ(u) = { −a2 log(1 − (u/a)2)   |u| < a
             ∞                     |u| ≥ a.
The log barrier penalty function assesses an infinite penalty for residuals larger than a.
A deadzone-linear, log barrier, and quadratic penalty function are plotted in figure 6.1. Note that the log barrier function is very close to the quadratic penalty for |u/a| ≤ 0.25 (see exercise 6.1).

Scaling the penalty function by a positive number does not affect the solution of the penalty function approximation problem, since this merely scales the objective
function. But the shape of the penalty function has a large effect on the solution of the penalty function approximation problem. Roughly speaking, φ(u) is a measure of our dislike of a residual of value u. If φ is very small (or even zero) for small values of u, it means we care very little (or not at all) if residuals have these values. If φ(u) grows rapidly as u becomes large, it means we have a strong dislike for large residuals; if φ becomes infinite outside some interval, it means that residuals outside the interval are unacceptable. This simple interpretation gives insight into the solution of a penalty function approximation problem, as well as guidelines for choosing a penalty function.
As an example, let us compare l1-norm and l2-norm approximation, associated with the penalty functions φ1(u) = |u| and φ2(u) = u2, respectively. For |u| = 1, the two penalty functions assign the same penalty. For small u we have φ1(u) ≫ φ2(u), so l1-norm approximation puts relatively larger emphasis on small residuals compared to l2-norm approximation. For large u we have φ2(u) ≫ φ1(u), so l1-norm approximation puts less weight on large residuals, compared to l2-norm approximation. This difference in relative weightings for small and large residuals is reflected in the solutions of the associated approximation problems. The amplitude distribution of the optimal residual for the l1-norm approximation problem will tend to have more zero and very small residuals, compared to the l2-norm approximation solution. In contrast, the l2-norm solution will tend to have relatively fewer large residuals (since large residuals incur a much larger penalty in l2-norm approximation than in l1-norm approximation).
Example
An example will illustrate these ideas. We take a matrix A ∈ R100×30 and vector b ∈ R100 (chosen at random, but the results are typical), and compute the l1-norm and l2-norm approximate solutions of Ax ≈ b, as well as the penalty function approximations with a deadzone-linear penalty (with a = 0.5) and log barrier penalty (with a = 1). Figure 6.2 shows the four associated penalty functions, and the amplitude distributions of the optimal residuals for these four penalty approximations. From the plots of the penalty functions we note that
• The l1-norm penalty puts the most weight on small residuals and the least weight on large residuals.
• The l2-norm penalty puts very small weight on small residuals, but strong weight on large residuals.
• The deadzone-linear penalty function puts no weight on residuals smaller than 0.5, and relatively little weight on large residuals.
• The log barrier penalty puts weight very much like the l2-norm penalty for small residuals, but puts very strong weight on residuals larger than around 0.8, and infinite weight on residuals larger than 1.
Several features are clear from the amplitude distributions:
• For the l1-optimal solution, many residuals are either zero or very small. The l1-optimal solution also has relatively more large residuals.

297
40
0
−2 −1
10
0
−2 −1
20 0
−2 −1 10
0
−2 −1
0 1 2
0 1 2
0 1 2
0 1 2
r
Log barrier Deadzone p = 2 p = 1
Figure 6.2 Histogram of residual amplitudes for four penalty functions, with the (scaled) penalty functions also shown for reference. For the log barrier plot, the quadratic penalty is also shown, in dashed curve.
Figure 6.3 A (nonconvex) penalty function that assesses a fixed penalty to residuals larger than a threshold (which in this example is one): φ(u) = u2 if |u| ≤ 1 and φ(u) = 1 if |u| > 1. As a result, penalty approximation with this function would be relatively insensitive to outliers.
• The l2-norm approximation has many modest residuals, and relatively few larger ones.
• For the deadzone-linear penalty, we see that many residuals have the value ±0.5, right at the edge of the ‘free’ zone, for which no penalty is assessed.
• For the log barrier penalty, we see that no residuals have a magnitude larger than 1, but otherwise the residual distribution is similar to the residual dis- tribution for l2-norm approximation.
Sensitivity to outliers or large errors
In the estimation or regression context, an outlier is a measurement yi = aTi x + vi for which the noise vi is relatively large. This is often associated with faulty data or a flawed measurement. When outliers occur, any estimate of x will be associated with a residual vector with some large components. Ideally we would like to guess which measurements are outliers, and either remove them from the estimation process or greatly lower their weight in forming the estimate. (We cannot, however, assign zero penalty for very large residuals, because then the optimal point would likely make all residuals large, which yields a total penalty of zero.) This could be accomplished using penalty function approximation, with a penalty function such
as
    φ(u) = { u2    |u| ≤ M
             M2    |u| > M,    (6.3)
shown in figure 6.3. This penalty function agrees with least-squares for any residual smaller than M, but puts a fixed weight on any residual larger than M, no matter how much larger it is. In other words, residuals larger than M are ignored; they are assumed to be associated with outliers or bad data. Unfortunately, the penalty
Figure 6.4 The solid line is the robust least-squares or Huber penalty function φhub, with M = 1. For |u| ≤ M it is quadratic, and for |u| > M it grows linearly.
function (6.3) is not convex, and the associated penalty function approximation problem becomes a hard combinatorial optimization problem.
The sensitivity of a penalty function based estimation method to outliers depends on the (relative) value of the penalty function for large residuals. If we restrict ourselves to convex penalty functions (which result in convex optimization problems), the ones that are least sensitive are those for which φ(u) grows linearly, i.e., like |u|, for large u. Penalty functions with this property are sometimes called robust, since the associated penalty function approximation methods are much less sensitive to outliers or large errors than, for example, least-squares.
One obvious example of a robust penalty function is φ(u) = |u|, corresponding to l1-norm approximation. Another example is the robust least-squares or Huber penalty function, given by
    φhub(u) = { u2             |u| ≤ M
                M(2|u| − M)    |u| > M,    (6.4)
shown in figure 6.4. This penalty function agrees with the least-squares penalty function for residuals smaller than M, and then reverts to l1-like linear growth for larger residuals. The Huber penalty function can be considered a convex approximation of the outlier penalty function (6.3), in the following sense: They agree for |u| ≤ M, and for |u| > M, the Huber penalty function is the convex function closest to the outlier penalty function (6.3).
Example 6.2 Robust regression. Figure 6.5 shows 42 points (ti,yi) in a plane, with two obvious outliers (one at the upper left, and one at lower right). The dashed line shows the least-squares approximation of the points by a straight line f (t) = α + βt. The coefficients α and β are obtained by solving the least-squares problem
    minimize ∑42i=1 (yi − α − βti)2,
with variables α and β. The least-squares approximation is clearly rotated away from the main locus of the points, toward the two outliers.
The solid line shows the robust least-squares approximation, obtained by minimizing the Huber penalty function
    minimize ∑42i=1 φhub(yi − α − βti),
with M = 1. This approximation is far less affected by the outliers.

Figure 6.5 The 42 circles show points that can be well approximated by an affine function, except for the two outliers at upper left and lower right. The dashed line is the least-squares fit of a straight line f(t) = α + βt to the points, and is rotated away from the main locus of points, toward the outliers. The solid line shows the robust least-squares fit, obtained by minimizing Huber’s penalty function with M = 1. This gives a far better fit to the non-outlier data.
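As an added illustration of the robust fit (not from the text; the arrays t and y stand in for the 42 data points), one can minimize the Huber penalty with a general-purpose solver:

    import numpy as np
    from scipy.optimize import minimize

    def huber(u, M=1.0):
        # Huber penalty (6.4): quadratic for |u| <= M, linear growth beyond.
        return np.where(np.abs(u) <= M, u**2, M * (2.0 * np.abs(u) - M))

    def robust_line_fit(t, y, M=1.0):
        # Fit f(t) = alpha + beta*t by minimizing sum_i huber(y_i - alpha - beta*t_i).
        objective = lambda p: huber(y - p[0] - p[1] * t, M).sum()
        return minimize(objective, x0=np.zeros(2)).x   # convex, so any start works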
Since l1-norm approximation is among the (convex) penalty function approximation methods that are most robust to outliers, l1-norm approximation is sometimes called robust estimation or robust regression. The robustness property of l1-norm estimation can also be understood in a statistical framework; see page 353.
Small residuals and l1-norm approximation
We can also focus on small residuals. Least-squares approximation puts very small weight on small residuals, since φ(u) = u2 is very small when u is small. Penalty functions such as the deadzone-linear penalty function put zero weight on small residuals. For penalty functions that are very small for small residuals, we expect the optimal residuals to be small, but not very small. Roughly speaking, there is little or no incentive to drive small residuals smaller.
In contrast, penalty functions that put relatively large weight on small residuals, such as φ(u) = |u|, corresponding to l1-norm approximation, tend to produce
optimal residuals many of which are very small, or even exactly zero. This means that in l1-norm approximation, we typically find that many of the equations are satisfied exactly, i.e., we have aTi x = bi for many i. This phenomenon can be seen in figure 6.2.
6.1.3 Approximation with constraints
It is possible to add constraints to the basic norm approximation problem (6.1). When these constraints are convex, the resulting problem is convex. Constraints arise for a variety of reasons.
• In an approximation problem, constraints can be used to rule out certain un- acceptable approximations of the vector b, or to ensure that the approximator Ax satisfies certain properties.
• In an estimation problem, the constraints arise as prior knowledge of the vector x to be estimated, or from prior knowledge of the estimation error v.
• Constraints arise in a geometric setting in determining the projection of a point b on a set more complicated than a subspace, for example, a cone or polyhedron.
Some examples will make these clear.
Nonnegativity constraints on variables
We can add the constraint x ≽ 0 to the basic norm approximation problem: minimize ∥Ax − b∥
subject to x ≽ 0.
In an estimation setting, nonnegativity constraints arise when we estimate a vector x of parameters known to be nonnegative, e.g., powers, intensities, or rates. The geometric interpretation is that we are determining the projection of a vector b onto the cone generated by the columns of A. We can also interpret this problem as approximating b using a nonnegative linear (i.e., conic) combination of the columns of A.
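For the Euclidean norm this is the nonnegative least-squares problem, for which standard routines exist; a brief added sketch (placeholder data, not from the text):

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((100, 30)), rng.standard_normal(100)
    x_nn, resid = nnls(A, b)   # minimize ||Ax - b||_2 subject to x >= 0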
Variable bounds
Here we add the constraint l ≼ x ≼ u, where l, u ∈ Rn are problem parameters: minimize ∥Ax − b∥
subjectto l≼x≼u.
In an estimation setting, variable bounds arise as prior knowledge of intervals in which each variable lies. The geometric interpretation is that we are determining the projection of a vector b onto the image of a box under the linear mapping induced by A.
Probability distribution

We can impose the constraint that x satisfy x ≽ 0, 1T x = 1:
    minimize   ∥Ax − b∥
    subject to x ≽ 0, 1T x = 1.
This would arise in the estimation of proportions or relative frequencies, which are nonnegative and sum to one. It can also be interpreted as approximating b by a convex combination of the columns of A. (We will have much more to say about estimating probabilities in §7.2.)

Norm ball constraint

We can add to the basic norm approximation problem the constraint that x lie in a norm ball:
    minimize   ∥Ax − b∥
    subject to ∥x − x0∥ ≤ d,
where x0 and d are problem parameters. Such a constraint can be added for several reasons.
• In an estimation setting, x0 is a prior guess of what the parameter x is, and d is the maximum plausible deviation of our estimate from our prior guess. Our estimate of the parameter x is the value x̂ which best matches the measured data (i.e., minimizes ∥Az − b∥) among all plausible candidates (i.e., z that satisfy ∥z − x0∥ ≤ d).
• The constraint ∥x − x0∥ ≤ d can denote a trust region. Here the linear relation y = Ax is only an approximation of some nonlinear relation y = f(x) that is valid when x is near some point x0, specifically ∥x − x0∥ ≤ d. The problem is to minimize ∥Ax − b∥ but only over those x for which the model y = Ax is trusted.
These ideas also come up in the context of regularization; see §6.3.2.

6.2 Least-norm problems

The basic least-norm problem has the form
    minimize   ∥x∥
    subject to Ax = b,    (6.5)
where the data are A ∈ Rm×n and b ∈ Rm, the variable is x ∈ Rn, and ∥·∥ is a norm on Rn. A solution of the problem, which always exists if the linear equations Ax = b have a solution, is called a least-norm solution of Ax = b. The least-norm problem is, of course, a convex optimization problem.
We can assume without loss of generality that the rows of A are independent, so m ≤ n. When m = n, the only feasible point is x = A−1b; the least-norm problem is interesting only when m < n, i.e., when the equation Ax = b is underdetermined.

Reformulation as norm approximation problem

The least-norm problem (6.5) can be formulated as a norm approximation problem by eliminating the equality constraint. Let x0 be any solution of Ax = b, and let Z ∈ Rn×k be a matrix whose columns are a basis for the nullspace of A. The general solution of Ax = b can then be expressed as x0 + Zu where u ∈ Rk. The least-norm problem (6.5) can be expressed as
    minimize ∥x0 + Zu∥,
with variable u ∈ Rk, which is a norm approximation problem. In particular, our analysis and discussion of norm approximation problems applies to least-norm problems as well (when interpreted correctly).

Control or design interpretation

We can interpret the least-norm problem (6.5) as a problem of optimal design or optimal control. The n variables x1,...,xn are design variables whose values are to be determined. In a control setting, the variables x1,...,xn represent inputs, whose values we are to choose. The vector y = Ax gives m attributes or results of the design x, which we assume to be linear functions of the design variables x. The m < n equations Ax = b represent m specifications or requirements on the design. Since m < n, the design is underspecified; there are n − m degrees of freedom in the design (assuming A is rank m).
Among all the designs that satisfy the specifications, the least-norm problem chooses the smallest design, as measured by the norm ∥·∥. This can be thought of as the most efficient design, in the sense that it achieves the specifications Ax = b, with the smallest possible x.

Estimation interpretation

We assume that x is a vector of parameters to be estimated. We have m < n perfect (noise free) linear measurements, given by Ax = b. Since we have fewer measurements than parameters to estimate, our measurements do not completely determine x. Any parameter vector x that satisfies Ax = b is consistent with our measurements.
To make a good guess about what x is, without taking further measurements, we must use prior information. Suppose our prior information, or assumption, is that x is more likely to be small (as measured by ∥·∥) than large. The least-norm problem chooses as our estimate of the parameter vector x the one that is smallest (hence, most plausible) among all parameter vectors that are consistent with the measurements Ax = b. (For a statistical interpretation of the least-norm problem, see page 359.)

Geometric interpretation

We can also give a simple geometric interpretation of the least-norm problem (6.5). The feasible set {x | Ax = b} is affine, and the objective is the distance (measured by the norm ∥·∥) between x and the point 0. The least-norm problem finds the point in the affine set with minimum distance to 0, i.e., it determines the projection of the point 0 on the affine set {x | Ax = b}.

Least-squares solution of linear equations

The most common least-norm problem involves the Euclidean or l2-norm. By squaring the objective we obtain the equivalent problem
    minimize   ∥x∥2²
    subject to Ax = b,
the unique solution of which is called the least-squares solution of the equations Ax = b. Like the least-squares approximation problem, this problem can be solved analytically.
Introducing the dual variable ν ∈ Rm, the optimality conditions are
    2x⋆ + AT ν⋆ = 0,   Ax⋆ = b,
which is a pair of linear equations, and readily solved. From the first equation we obtain x⋆ = −(1/2)AT ν⋆; substituting this into the second equation we obtain −(1/2)AAT ν⋆ = b, and conclude
    ν⋆ = −2(AAT)−1b,   x⋆ = AT(AAT)−1b.
(Since rank A = m < n, the matrix AAT is invertible.)

Least-penalty problems

A useful variation on the least-norm problem (6.5) is the least-penalty problem
    minimize   φ(x1) + ··· + φ(xn)
    subject to Ax = b,    (6.6)
where φ : R → R is convex, nonnegative, and satisfies φ(0) = 0. The penalty function value φ(u) quantifies our dislike of a component of x having value u; the least-penalty problem then finds x that has least total penalty, subject to the constraint Ax = b.
All of the discussion and interpretation of penalty functions in penalty function approximation can be transposed to the least-penalty problem, by substituting the amplitude distribution of x (in the least-penalty problem) for the amplitude distribution of the residual r (in the penalty approximation problem).

Sparse solutions via least l1-norm

Recall from the discussion on page 300 that l1-norm approximation gives relatively large weight to small residuals, and therefore results in many optimal residuals small, or even zero. A similar effect occurs in the least-norm context. The least l1-norm problem,
    minimize   ∥x∥1
    subject to Ax = b,
tends to produce a solution x with a large number of components equal to zero. In other words, the least l1-norm problem tends to produce sparse solutions of Ax = b, often with m nonzero components.
It is easy to find solutions of Ax = b that have only m nonzero components. Choose any set of m indices (out of 1,...,n) which are to be the nonzero components of x. The equation Ax = b reduces to Ãx̃ = b, where Ã is the m×m submatrix of A obtained by selecting only the chosen columns, and x̃ ∈ Rm is the subvector of x containing the m selected components. If Ã is nonsingular, then we can take x̃ = Ã−1b, which gives a feasible solution x with m or less nonzero components. If Ã is singular and b ∉ R(Ã), the equation Ãx̃ = b is unsolvable, which means there is no feasible x with the chosen set of nonzero components. If Ã is singular and b ∈ R(Ã), there is a feasible solution with fewer than m nonzero components.
This approach can be used to find the smallest x with m (or fewer) nonzero entries, but in general requires examining and comparing all n!/(m!(n−m)!) choices of m nonzero coefficients of the n coefficients in x. Solving the least l1-norm problem, on the other hand, gives a good heuristic for finding a sparse, and small, solution of Ax = b.
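An added sketch contrasting the two least-norm solutions just described (placeholder data; the l2 solution uses the analytical formula derived above, and the l1 problem is posed as an LP in variables (x, t), analogous to the l1-norm approximation LP of §6.1.1):

    import numpy as np
    from scipy.optimize import linprog

    def least_l2_norm(A, b):
        # Analytical least 2-norm solution x = A^T (A A^T)^{-1} b.
        return A.T @ np.linalg.solve(A @ A.T, b)

    def least_l1_norm(A, b):
        # Sparse heuristic: minimize ||x||_1 subject to Ax = b, as an LP.
        m, n = A.shape
        c = np.r_[np.zeros(n), np.ones(n)]              # minimize 1^T t
        A_ub = np.block([[np.eye(n), -np.eye(n)],       #   x - t <= 0
                         [-np.eye(n), -np.eye(n)]])     #  -x - t <= 0
        A_eq = np.hstack([A, np.zeros((m, n))])         # Ax = b
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq,
                      b_eq=b, bounds=[(None, None)] * (2 * n))
        return res.x[:n]

On random underdetermined data, the l1 solution typically has on the order of m nonzero entries, while the l2 solution is dense.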
6.3 Regularized approximation

6.3.1 Bi-criterion formulation

In the basic form of regularized approximation, the goal is to find a vector x that is small (if possible), and also makes the residual Ax − b small. This is naturally described as a (convex) vector optimization problem with two objectives, ∥Ax − b∥ and ∥x∥:
    minimize (w.r.t. R2+) (∥Ax − b∥, ∥x∥).    (6.7)
The two norms can be different: the first, used to measure the size of the residual, is on Rm; the second, used to measure the size of x, is on Rn.
The optimal trade-off between the two objectives can be found using several methods. The optimal trade-off curve of ∥Ax − b∥ versus ∥x∥, which shows how large one of the objectives must be made to have the other one small, can then be plotted.
One endpoint of the optimal trade-off curve between ∥Ax − b∥ and ∥x∥ is easy to describe. The minimum value of ∥x∥ is zero, and is achieved only when x = 0. For this value of x, the residual norm has the value ∥b∥.
The other endpoint of the trade-off curve is more complicated to describe. Let C denote the set of minimizers of ∥Ax − b∥ (with no constraint on ∥x∥). Then any minimum norm point in C is Pareto optimal, corresponding to the other endpoint of the trade-off curve. In other words, Pareto optimal points at this endpoint are given by minimum norm minimizers of ∥Ax − b∥. If both norms are Euclidean, this Pareto optimal point is unique, and given by x = A†b, where A† is the pseudo-inverse of A. (See §4.7.6, page 184, and §A.5.4.)

6.3.2 Regularization

Regularization is a common scalarization method used to solve the bi-criterion problem (6.7). One form of regularization is to minimize the weighted sum of the objectives:
    minimize ∥Ax − b∥ + γ∥x∥,    (6.8)
where γ > 0 is a problem parameter. As γ varies over (0, ∞), the solution of (6.8) traces out the optimal trade-off curve.
Another common method of regularization, especially when the Euclidean norm is used, is to minimize the weighted sum of squared norms, i.e.,
minimize ∥Ax − b∥2² + δ∥x∥2²,    (6.9)
for a variety of values of δ > 0.
These regularized approximation problems each solve the bi-criterion problem
of making both ∥Ax − b∥ and ∥x∥ small, by adding an extra term or penalty associated with the norm of x.
Interpretations
Regularization is used in several contexts. In an estimation setting, the extra term penalizing large ∥x∥ can be interpreted as our prior knowledge that ∥x∥ is not too large. In an optimal design setting, the extra term adds the cost of using large values of the design variables to the cost of missing the target specifications.
The constraint that ∥x∥ be small can also reflect a modeling issue. It might be, for example, that y = Ax is only a good approximation of the true relationship y=f(x)betweenxandy. Inordertohavef(x)≈b,wewantAx≈b,andalso need x small in order to ensure that f(x) ≈ Ax.
We will see in §6.4.1 and §6.4.2 that regularization can be used to take into account variation in the matrix A. Roughly speaking, a large x is one for which variation in A causes large variation in Ax, and hence should be avoided.
Regularization is also used when the matrix A is square, and the goal is to solve the linear equations Ax = b. In cases where A is poorly conditioned, or even singular, regularization gives a compromise between solving the equations (i.e., making ∥Ax − b∥ zero) and keeping x of reasonable size.
Regularization comes up in a statistical setting; see §7.1.2.

Tikhonov regularization
The most common form of regularization is based on (6.9), with Euclidean norms, which results in a (convex) quadratic optimization problem:
minimize ∥Ax − b∥2² + δ∥x∥2² = xT(ATA + δI)x − 2bTAx + bTb.    (6.10)
This Tikhonov regularization problem has the analytical solution
x = (AT A + δI)−1AT b.
Since AT A + δI ≻ 0 for any δ > 0, the Tikhonov regularized least-squares solution requires no rank (or dimension) assumptions on the matrix A.
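A brief added sketch of the Tikhonov solution, swept over δ to trace a trade-off curve (placeholder data; note that A here has more columns than rows, illustrating that no rank assumptions are needed):

    import numpy as np

    def tikhonov(A, b, delta):
        # Tikhonov-regularized least squares: x = (A^T A + delta I)^{-1} A^T b.
        return np.linalg.solve(A.T @ A + delta * np.eye(A.shape[1]), A.T @ b)

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((50, 80)), rng.standard_normal(50)
    for delta in [1e-3, 1e-1, 1e1]:
        x = tikhonov(A, b, delta)
        print(delta, np.linalg.norm(A @ x - b), np.linalg.norm(x))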
Smoothing regularization
The idea of regularization, i.e., adding to the objective a term that penalizes large x, can be extended in several ways. In one useful extension we add a regularization term of the form ∥Dx∥, in place of ∥x∥. In many applications, the matrix D represents an approximate differentiation or second-order differentiation operator, so ∥Dx∥ represents a measure of the variation or smoothness of x.
For example, suppose that the vector x ∈ Rn represents the value of some continuous physical parameter, say, temperature, along the interval [0,1]: xi is the temperature at the point i/n. A simple approximation of the gradient or first derivative of the parameter near i/n is given by n(xi+1 − xi), and a simple approximation of its second derivative is given by the second difference
    n(n(xi+1 − xi) − n(xi − xi−1)) = n2(xi+1 − 2xi + xi−1).
If ∆ is the (tridiagonal, Toeplitz) matrix

           [ 1 −2  1  0 ···  0  0  0  0 ]
           [ 0  1 −2  1 ···  0  0  0  0 ]
    ∆ = n2 [ .  .  .  .      .  .  .  . ]   ∈ R(n−2)×n,
           [ 0  0  0  0 ···  1 −2  1  0 ]
           [ 0  0  0  0 ···  0  1 −2  1 ]

then ∆x represents an approximation of the second derivative of the parameter, so ∥∆x∥2² represents a measure of the mean-square curvature of the parameter over the interval [0,1].
The Tikhonov regularized problem
    minimize ∥Ax − b∥2² + δ∥∆x∥2²
can be used to trade off the objective ∥Ax − b∥2², which might represent a measure of fit, or consistency with experimental data, and the objective ∥∆x∥2², which is (approximately) the mean-square curvature of the underlying physical parameter. The parameter δ is used to control the amount of regularization required, or to plot the optimal trade-off curve of fit versus smoothness.
We can also add several regularization terms. For example, we can add terms associated with smoothness and size, as in
    minimize ∥Ax − b∥2² + δ∥∆x∥2² + η∥x∥2².
Here, the parameter δ ≥ 0 is used to control the smoothness of the approximate solution, and the parameter η ≥ 0 is used to control its size.
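An added sketch of this smoothing regularization (placeholder data): form the second-difference matrix ∆ and solve the regularized problem as a single stacked least-squares problem:

    import numpy as np

    def second_difference(n):
        # The (n-2) x n matrix Delta, including the n^2 scaling.
        D = np.zeros((n - 2, n))
        for i in range(n - 2):
            D[i, i:i + 3] = [1.0, -2.0, 1.0]
        return n**2 * D

    def smooth_regularized(A, b, delta, eta):
        # Minimize ||Ax-b||^2 + delta ||Dx||^2 + eta ||x||^2 by stacking.
        n = A.shape[1]
        D = second_difference(n)
        K = np.vstack([A, np.sqrt(delta) * D, np.sqrt(eta) * np.eye(n)])
        y = np.r_[b, np.zeros(n - 2), np.zeros(n)]
        x, *_ = np.linalg.lstsq(K, y, rcond=None)
        return x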
Example 6.3 Optimal input design. We consider a dynamical system with scalar input sequence u(0), u(1),...,u(N), and scalar output sequence y(0), y(1),...,y(N), related by convolution:
    y(t) = ∑tτ=0 h(τ) u(t − τ),   t = 0, 1,...,N.
The sequence h(0), h(1), . . . , h(N ) is called the convolution kernel or impulse response of the system.
Our goal is to choose the input sequence u to achieve several goals.
• Output tracking. The primary goal is that the output y should track, or follow, a desired target or reference signal ydes. We measure output tracking error by the quadratic function
    Jtrack = (1/(N + 1)) ∑Nt=0 (y(t) − ydes(t))2.
• Small input. The input should not be large. We measure the magnitude of the
input by the quadratic function
    Jmag = (1/(N + 1)) ∑Nt=0 u(t)2.
• Small input variations. The input should not vary rapidly. We measure the magnitude of the input variations by the quadratic function
    Jder = (1/N) ∑N−1t=0 (u(t + 1) − u(t))2.
By minimizing a weighted sum
Jtrack + δJder + ηJmag,
where δ > 0 and η > 0, we can trade off the three objectives.
Now we consider a specific example, with N = 200, and impulse response
    h(t) = (1/9) (0.9)^t (1 − 0.4 cos(2t)).
Figure 6.6 shows the optimal input, and corresponding output (along with the desired trajectory ydes), for three values of the regularization parameters δ and η. The top row shows the optimal input and corresponding output for δ = 0, η = 0.005. In this case we have some regularization for the magnitude of the input, but no regularization for its variation. While the tracking is good (i.e., we have Jtrack is small), the input required is large, and rapidly varying. The second row corresponds to δ = 0, η = 0.05. In this case we have more magnitude regularization, but still no regularization for variation in u. The corresponding input is indeed smaller, at the cost of a larger tracking error. The bottom row shows the results for δ = 0.3, η = 0.05. In this case we have added some regularization for the variation. The input variation is substantially reduced, with not much increase in output tracking error.
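A sketch of how such an optimal input can be computed (an added illustration; ydes below is a placeholder target, since the desired trajectory used in the figure is not specified):

    import numpy as np

    N = 200
    t = np.arange(N + 1)
    h = (1 / 9) * 0.9**t * (1 - 0.4 * np.cos(2 * t))    # impulse response
    ydes = np.sign(np.sin(2 * np.pi * t / (N + 1)))     # placeholder target

    # y = Hu, with H the lower-triangular Toeplitz matrix built from h.
    H = np.array([[h[i - j] if i >= j else 0.0 for j in range(N + 1)]
                  for i in range(N + 1)])
    D = np.diff(np.eye(N + 1), axis=0)                  # (Du)(t) = u(t+1) - u(t)

    delta, eta = 0.3, 0.05
    # Stack so that ||Ku - y||^2 = Jtrack + delta*Jder + eta*Jmag.
    K = np.vstack([H / np.sqrt(N + 1),
                   np.sqrt(delta / N) * D,
                   np.sqrt(eta / (N + 1)) * np.eye(N + 1)])
    y = np.r_[ydes / np.sqrt(N + 1), np.zeros(N), np.zeros(N + 1)]
    u_opt, *_ = np.linalg.lstsq(K, y, rcond=None)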
Figure 6.6 Optimal inputs (left) and resulting outputs (right) for three values of the regularization parameters δ (which corresponds to input variation) and η (which corresponds to input magnitude). The dashed line in the righthand plots shows the desired output ydes. Top row: δ = 0, η = 0.005; middle row: δ = 0, η = 0.05; bottom row: δ = 0.3, η = 0.05.

l1-norm regularization

Regularization with an l1-norm can be used as a heuristic for finding a sparse solution. For example, consider the problem
    minimize ∥Ax − b∥2 + γ∥x∥1,    (6.11)
in which the residual is measured with the Euclidean norm and the regularization is done with an l1-norm. By varying the parameter γ we can sweep out the optimal trade-off curve between ∥Ax − b∥2 and ∥x∥1, which serves as an approximation of the optimal trade-off curve between ∥Ax − b∥2 and the sparsity or cardinality card(x) of the vector x, i.e., the number of nonzero elements. The problem (6.11) can be recast and solved as an SOCP.
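As an illustration (an addition, not from the original text), the following Python sketch solves (6.11) over a sweep of γ values, recording the sparsity pattern and refitting by least squares on the selected columns, the heuristic developed in the example that follows. The data, the zero threshold, and the use of the cvxpy modeling package are all assumptions:

import numpy as np
import cvxpy as cp

# Hypothetical instance of problem (6.11): sweep gamma to trace the
# trade-off between ||Ax - b||_2 and ||x||_1 (and hence card(x)).
rng = np.random.default_rng(0)
m, n = 10, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = cp.Variable(n)
gamma = cp.Parameter(nonneg=True)
prob = cp.Problem(cp.Minimize(cp.norm(A @ x - b) + gamma * cp.norm1(x)))

for g in np.logspace(-2, 2, 40):
    gamma.value = g
    prob.solve()
    support = np.flatnonzero(np.abs(x.value) > 1e-6)   # nonzero pattern
    # Refinement step: refit by least squares on the selected columns.
    x_ref = np.zeros(n)
    if support.size > 0:
        x_ref[support], *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
    print(g, support.size, np.linalg.norm(A @ x_ref - b))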
Example 6.4 Regressor selection problem. We are given a matrix A ∈ R^{m×n}, whose columns are potential regressors, and a vector b ∈ Rᵐ that is to be fit by a linear combination of k < n columns of A. The problem is to choose the subset of k regressors to be used, and the associated coefficients. We can express this problem as

minimize ∥Ax − b∥₂
subject to card(x) ≤ k.

In general, this is a hard combinatorial problem. One straightforward approach is to check every possible sparsity pattern in x with k nonzero entries. For a fixed sparsity pattern, we can find the optimal x by solving a least-squares problem, i.e., minimizing ∥Ãx̃ − b∥₂, where Ã denotes the submatrix of A obtained by keeping the columns corresponding to the sparsity pattern, and x̃ is the subvector with the nonzero components of x. This is done for each of the n!/(k!(n − k)!) sparsity patterns with k nonzeros.

A good heuristic approach is to solve the problem (6.11) for different values of γ, finding the smallest value of γ that results in a solution with card(x) = k. We then fix this sparsity pattern and find the value of x that minimizes ∥Ax − b∥₂.

Figure 6.7 illustrates a numerical example with A ∈ R^{10×20}, x ∈ R^{20}, b ∈ R^{10}. The circles on the dashed curve are the (globally) Pareto optimal values for the trade-off between card(x) (vertical axis) and the residual ∥Ax − b∥₂ (horizontal axis). For each k, the Pareto optimal point was obtained by enumerating all possible sparsity patterns with k nonzero entries, as described above. The circles on the solid curve were obtained with the heuristic approach, by using the sparsity patterns of the solutions of problem (6.11) for different values of γ. Note that for card(x) = 1, the heuristic method actually finds the global optimum. This idea will come up again in basis pursuit (§6.5.4).

Figure 6.7 Sparse regressor selection with a matrix A ∈ R^{10×20}. The circles on the dashed line are the Pareto optimal values for the trade-off between the residual ∥Ax − b∥₂ and the number of nonzero elements card(x). The points indicated by circles on the solid line are obtained via the l1-norm regularized heuristic.

6.3.3 Reconstruction, smoothing, and de-noising

In this section we describe an important special case of the bi-criterion approximation problem described above, and give some examples showing how different regularization methods perform. In reconstruction problems, we start with a signal represented by a vector x ∈ Rⁿ. The coefficients xᵢ correspond to the value of some function of time, evaluated (or sampled, in the language of signal processing) at evenly spaced points. It is usually assumed that the signal does not vary too rapidly, which means that usually, we have xᵢ ≈ x_{i+1}. (In this section we consider signals in one dimension, e.g., audio signals, but the same ideas can be applied to signals in two or more dimensions, e.g., images or video.)

The signal x is corrupted by an additive noise v:

xcor = x + v.

The noise can be modeled in many different ways, but here we simply assume that it is unknown, small, and, unlike the signal, rapidly varying. The goal is to form an estimate x̂ of the original signal x, given the corrupted signal xcor. This process is called signal reconstruction (since we are trying to reconstruct the original signal from the corrupted version) or de-noising (since we are trying to remove the noise from the corrupted signal).
Most reconstruction methods end up performing some sort of smoothing operation on xcor to produce x̂, so the process is also called smoothing. One simple formulation of the reconstruction problem is the bi-criterion problem

minimize (w.r.t. R²₊)  (∥x̂ − xcor∥₂, φ(x̂)),   (6.12)

where x̂ is the variable and xcor is a problem parameter. The function φ : Rⁿ → R is convex, and is called the regularization function or smoothing objective. It is meant to measure the roughness, or lack of smoothness, of the estimate x̂. The reconstruction problem (6.12) seeks signals that are close (in l2-norm) to the corrupted signal, and that are smooth, i.e., for which φ(x̂) is small. The reconstruction problem (6.12) is a convex bi-criterion problem. We can find the Pareto optimal points by scalarization, solving a (scalar) convex optimization problem.

Quadratic smoothing

The simplest reconstruction method uses the quadratic smoothing function

φquad(x) = Σ_{i=1}^{n−1} (x_{i+1} − xᵢ)² = ∥Dx∥₂²,

where D ∈ R^{(n−1)×n} is the bidiagonal matrix

D = ⎡ −1  1  0 ···  0  0  0 ⎤
    ⎢  0 −1  1 ···  0  0  0 ⎥
    ⎢  ⋮          ⋱        ⋮ ⎥.
    ⎢  0  0  0 ··· −1  1  0 ⎥
    ⎣  0  0  0 ···  0 −1  1 ⎦

We can obtain the optimal trade-off between ∥x̂ − xcor∥₂ and ∥Dx̂∥₂ by minimizing

∥x̂ − xcor∥₂² + δ∥Dx̂∥₂²,

where δ > 0 parametrizes the optimal trade-off curve. The solution of this quadratic problem,

x̂ = (I + δDᵀD)⁻¹ xcor,

can be computed very efficiently since I + δDᵀD is tridiagonal; see appendix C.

Quadratic smoothing example

Figure 6.8 shows a signal x ∈ R^{4000} (top) and the corrupted signal xcor (bottom). The optimal trade-off curve between the objectives ∥x̂ − xcor∥₂ and ∥Dx̂∥₂ is shown in figure 6.9. The extreme point on the left of the trade-off curve corresponds to x̂ = xcor, and has objective value ∥Dxcor∥₂ = 4.4. The extreme point on the right corresponds to x̂ = 0, for which ∥x̂ − xcor∥₂ = ∥xcor∥₂ = 16.2. Note the clear knee in the trade-off curve near ∥x̂ − xcor∥₂ ≈ 3.

Figure 6.8 Top: the original signal x ∈ R^{4000}. Bottom: the corrupted signal xcor.

Figure 6.9 Optimal trade-off curve between ∥Dx̂∥₂ and ∥x̂ − xcor∥₂. The curve has a clear knee near ∥x̂ − xcor∥₂ ≈ 3.

Figure 6.10 shows three smoothed signals on the optimal trade-off curve, corresponding to ∥x̂ − xcor∥₂ = 8 (top), 3 (middle), and 1 (bottom). Comparing the reconstructed signals with the original signal x, we see that the best reconstruction is obtained for ∥x̂ − xcor∥₂ = 3, which corresponds to the knee of the trade-off curve. For higher values of ∥x̂ − xcor∥₂, there is too much smoothing; for smaller values there is too little smoothing.

Figure 6.10 Three smoothed or reconstructed signals x̂. The top one corresponds to ∥x̂ − xcor∥₂ = 8, the middle one to ∥x̂ − xcor∥₂ = 3, and the bottom one to ∥x̂ − xcor∥₂ = 1.

Total variation reconstruction

Simple quadratic smoothing works well as a reconstruction method when the original signal is very smooth, and the noise is rapidly varying. But any rapid variations in the original signal will, obviously, be attenuated or removed by quadratic smoothing. In this section we describe a reconstruction method that can remove much of the noise, while still preserving occasional rapid variations in the original signal. The method is based on the smoothing function

φtv(x̂) = Σ_{i=1}^{n−1} |x̂_{i+1} − x̂ᵢ| = ∥Dx̂∥₁,
which is called the total variation of x ∈ Rn. Like the quadratic smoothness measure φquad, the total variation function assigns large values to rapidly varying xˆ. The total variation measure, however, assigns relatively less penalty to large values of |xi+1 − xi|.
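As an illustration (added here, not part of the original text), the following Python sketch contrasts the two smoothers on synthetic data. The signal, noise level, and weights δ and λ are assumptions; the total variation problem is handed to the cvxpy modeling package rather than solved by a specialized method, and a banded solver would be faster for the quadratic case:

import numpy as np
import cvxpy as cp

# Synthetic piecewise-constant-ish signal plus rapidly varying noise.
rng = np.random.default_rng(0)
n = 500
x = 0.2 * np.cumsum(rng.choice([0, 0, 0, 1, -1], size=n))
x_cor = x + 0.1 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)           # bidiagonal difference matrix

# Quadratic smoothing: solve (I + delta D^T D) x_hat = x_cor.
delta = 10.0
x_quad = np.linalg.solve(np.eye(n) + delta * D.T @ D, x_cor)

# Total variation reconstruction: scalarized bi-criterion problem.
xhat = cp.Variable(n)
lam = 1.0
cp.Problem(cp.Minimize(cp.sum_squares(xhat - x_cor)
                       + lam * cp.norm1(D @ xhat))).solve()
x_tv = xhat.value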
Total variation reconstruction example
Figure 6.11 shows a signal x ∈ R2000 (in the top plot), and the signal corrupted with noise xcor. The signal is mostly smooth, but has several rapid variations or jumps in value; the noise is rapidly varying.
We first use quadratic smoothing. Figure 6.12 shows three smoothed signals on the optimal trade-off curve between ∥Dxˆ∥2 and ∥xˆ−xcor∥2. In the first two signals, the rapid variations in the original signal are also smoothed. In the third signal the steep edges in the signal are better preserved, but there is still a significant amount of noise left.
Now we demonstrate total variation reconstruction. Figure 6.13 shows the optimal trade-off curve between ∥Dx̂∥₁ and ∥x̂ − xcor∥₂. Figure 6.14 shows the reconstructed signals on the optimal trade-off curve, for ∥Dx̂∥₁ = 5 (top), ∥Dx̂∥₁ = 8 (middle), and ∥Dx̂∥₁ = 10 (bottom). We observe that, unlike quadratic smoothing, total variation reconstruction preserves the sharp transitions in the signal.
Figure 6.11 A signal x ∈ R^{2000}, and the corrupted signal xcor ∈ R^{2000}. The noise is rapidly varying, and the signal is mostly smooth, with a few rapid variations.

Figure 6.12 Three quadratically smoothed signals x̂. The top one corresponds to ∥x̂ − xcor∥₂ = 10, the middle one to ∥x̂ − xcor∥₂ = 7, and the bottom one to ∥x̂ − xcor∥₂ = 4. The top one greatly reduces the noise, but also excessively smooths out the rapid variations in the signal. The bottom smoothed signal does not give enough noise reduction, and still smooths out the rapid variations in the original signal. The middle smoothed signal gives the best compromise, but still smooths out the rapid variations.

Figure 6.13 Optimal trade-off curve between ∥Dx̂∥₁ and ∥x̂ − xcor∥₂.

Figure 6.14 Three reconstructed signals x̂, using total variation reconstruction. The top one corresponds to ∥Dx̂∥₁ = 5, the middle one to ∥Dx̂∥₁ = 8, and the bottom one to ∥Dx̂∥₁ = 10. The bottom one does not give quite enough noise reduction, while the top one eliminates some of the slowly varying parts of the signal. Note that in total variation reconstruction, unlike quadratic smoothing, the sharp changes in the signal are preserved.
6.4 Robust approximation

6.4.1 Stochastic robust approximation
We consider an approximation problem with basic objective ∥Ax−b∥, but also wish to take into account some uncertainty or possible variation in the data matrix A. (The same ideas can be extended to handle the case where there is uncertainty in both A and b.) In this section we consider some statistical models for the variation in A.
We assume that A is a random variable taking values in R^{m×n}, with mean Ā, so we can describe A as

A = Ā + U,

where U is a random matrix with zero mean. Here, the constant matrix Ā gives the average value of A, and U describes its statistical variation.
It is natural to use the expected value of ∥Ax − b∥ as the objective:
minimize E ∥Ax − b∥. (6.13)
We refer to this problem as the stochastic robust approximation problem. It is always a convex optimization problem, but usually not tractable since in most cases it is very difficult to evaluate the objective or its derivatives.
One simple case in which the stochastic robust approximation problem (6.13) can be solved occurs when A assumes only a finite number of values, i.e.,
prob(A = Ai) = pi, i = 1,…,k,
where Ai ∈ Rm×n, 1T p = 1, p ≽ 0. In this case the problem (6.13) has the form
minimize p1∥A1x − b∥ + ··· + pk∥Akx − b∥,

which is often called a sum-of-norms problem. It can be expressed as
minimize pT t
subject to ∥Aᵢx − b∥ ≤ tᵢ, i = 1,…,k,
where the variables are x ∈ Rn and t ∈ Rk. If the norm is the Euclidean norm, this sum-of-norms problem is an SOCP. If the norm is the l1- or l∞-norm, the sum-of-norms problem can be expressed as an LP; see exercise 6.8.
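As an illustration (not from the original text), a sketch of the sum-of-norms problem using the cvxpy package; the data and probabilities are made up, and cvxpy performs the epigraph transformation to an SOCP internally:

import numpy as np
import cvxpy as cp

# Hypothetical finite distribution over {A_1, ..., A_k}.
rng = np.random.default_rng(0)
m, n, k = 20, 8, 3
As = [rng.standard_normal((m, n)) for _ in range(k)]
b = rng.standard_normal(m)
p = np.ones(k) / k                       # probabilities, 1^T p = 1

x = cp.Variable(n)
objective = sum(p[i] * cp.norm(As[i] @ x - b) for i in range(k))
cp.Problem(cp.Minimize(objective)).solve()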
Some variations on the statistical robust approximation problem (6.13) are tractable. As an example, consider the statistical robust least-squares problem
minimize E ∥Ax − b∥₂²,

where the norm is the Euclidean norm. We can express the objective as

E ∥Ax − b∥₂² = E (Āx − b + Ux)ᵀ(Āx − b + Ux)
             = (Āx − b)ᵀ(Āx − b) + E xᵀUᵀUx
             = ∥Āx − b∥₂² + xᵀPx,
where P = E UᵀU. Therefore the statistical robust approximation problem has the form of a regularized least-squares problem

minimize ∥Āx − b∥₂² + ∥P^{1/2}x∥₂²,

with solution

x = (ĀᵀĀ + P)⁻¹ Āᵀb.
This makes perfect sense: when the matrix A is subject to variation, the vector Ax will have more variation the larger x is, and Jensen's inequality tells us that variation in Ax will increase the average value of ∥Ax − b∥₂². So we need to balance making Āx − b small with the desire for a small x (to keep the variation in Ax small), which is the essential idea of regularization.

This observation gives us another interpretation of the Tikhonov regularized least-squares problem (6.10), as a robust least-squares problem, taking into account possible variation in the matrix A. The solution of the Tikhonov regularized least-squares problem (6.10) minimizes E ∥(A + U)x − b∥₂², where Uᵢⱼ are zero mean, uncorrelated random variables, with variance δ/m (and here, A is deterministic).
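A minimal numerical sketch of this closed-form solution (an illustration with made-up data, not from the book; for U with zero-mean uncorrelated entries of variance δ/m we get P = δI, recovering the Tikhonov solution exactly):

import numpy as np

# Statistical robust least squares: x = (Abar^T Abar + P)^{-1} Abar^T b.
rng = np.random.default_rng(0)
m, n = 30, 10
Abar = rng.standard_normal((m, n))
b = rng.standard_normal(m)
delta = 0.5

P = delta * np.eye(n)      # E U^T U for i.i.d. entries of variance delta/m
x = np.linalg.solve(Abar.T @ Abar + P, Abar.T @ b)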
6.4.2 Worst-case robust approximation
It is also possible to model the variation in the matrix A using a set-based, worst- case approach. We describe the uncertainty by a set of possible values for A:
A ∈ A ⊆ Rm×n,
which we assume is nonempty and bounded. We define the associated worst-case error of a candidate approximate solution x ∈ Rⁿ as

ewc(x) = sup{∥Ax − b∥ | A ∈ A},
which is always a convex function of x. The (worst-case) robust approximation problem is to minimize the worst-case error:
minimize ewc(x) = sup{∥Ax − b∥ | A ∈ A}, (6.14)
where the variable is x, and the problem data are b and the set A. When A is the singleton A = {A}, the robust approximation problem (6.14) reduces to the basic norm approximation problem (6.1). The robust approximation problem is always a convex optimization problem, but its tractability depends on the norm used and the description of the uncertainty set A.
Example 6.5 Comparison of stochastic and worst-case robust approximation. To illustrate the difference between the stochastic and worst-case formulations of the robust approximation problem, we consider the least-squares problem
minimize ∥A(u)x − b∥2,
where u ∈ R is an uncertain parameter and A(u) = A0 + uA1. We consider a specific instance of the problem, with A(u) ∈ R20×10, ∥A0∥ = 10, ∥A1∥ = 1, and u
in the interval [−1, 1]. (So, roughly speaking, the variation in the matrix A is around ±10%.)
We find three approximate solutions:
• Nominal optimal. The optimal solution xnom is found, assuming A(u) has its
nominal value A0.
• Stochastic robust approximation. We find xstoch, which minimizes E ∥A(u)x −
b∥2, assuming the parameter u is uniformly distributed on [−1,1].
• Worst-case robust approximation. We find xwc, which minimizes
sup_{−1≤u≤1} ∥A(u)x − b∥₂ = max{∥(A0 − A1)x − b∥₂, ∥(A0 + A1)x − b∥₂}.
For each of these three values of x, we plot the residual r(u) = ∥A(u)x − b∥₂ as a function of the uncertain parameter u, in figure 6.15. These plots show how sensitive an approximate solution can be to variation in the parameter u. The nominal solution achieves the smallest residual when u = 0, but is quite sensitive to parameter variation: it gives much larger residuals as u deviates from 0, and approaches −1 or 1. The worst-case solution has a larger residual when u = 0, but its residuals do not rise much as u varies over the interval [−1, 1]. The stochastic robust approximate solution is in between.

Figure 6.15 The residual r(u) = ∥A(u)x − b∥₂ as a function of the uncertain parameter u for the three approximate solutions xnom, xstoch, and xwc.
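As an illustration (added; the book's exact instance is not given, so the data below are assumptions), the three solutions can be computed as follows. For u uniform on [−1, 1], E u = 0 and E u² = 1/3, which makes the stochastic problem an ordinary least-squares problem; the worst case over the interval is attained at the endpoints since the residual norm is convex in u:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 20, 10
A0 = 10 * rng.standard_normal((m, n)) / np.sqrt(m)
A1 = rng.standard_normal((m, n)) / np.sqrt(m)
b = rng.standard_normal(m)

# Nominal: ordinary least squares with u = 0.
x_nom, *_ = np.linalg.lstsq(A0, b, rcond=None)

# Stochastic: E ||(A0 + u A1)x - b||^2 = ||A0 x - b||^2 + (1/3)||A1 x||^2.
x_st, *_ = np.linalg.lstsq(np.vstack([A0, A1 / np.sqrt(3)]),
                           np.concatenate([b, np.zeros(m)]), rcond=None)

# Worst case: minimize the max of the two endpoint residuals.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.maximum(cp.norm((A0 - A1) @ x - b),
                                  cp.norm((A0 + A1) @ x - b)))).solve()
x_wc = x.value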
The robust approximation problem (6.14) arises in many contexts and applica- tions. In an estimation setting, the set A gives our uncertainty in the linear relation between the vector to be estimated and our measurement vector. Sometimes the noise term v in the model y = Ax + v is called additive noise or additive error, since it is added to the ‘ideal’ measurement Ax. In contrast, the variation in A is called multiplicative error, since it multiplies the variable x.
In an optimal design setting, the variation can represent uncertainty (arising in manufacture, say) of the linear equations that relate the design variables x to the results vector Ax. The robust approximation problem (6.14) is then interpreted as the robust design problem: find design variables x that minimize the worst possible mismatch between Ax and b, over all possible values of A.
Finite set
Here we have A = {A1, …, Ak}, and the robust approximation problem is

minimize max_{i=1,…,k} ∥Aᵢx − b∥.

This problem is equivalent to the robust approximation problem with the polyhedral set A = conv{A1, …, Ak}:

minimize sup{∥Ax − b∥ | A ∈ conv{A1, …, Ak}}.

We can cast the problem in epigraph form as
minimize t
subject to ∥Aᵢx − b∥ ≤ t, i = 1,…,k,
which can be solved in a variety of ways, depending on the norm used. If the norm is the Euclidean norm, this is an SOCP. If the norm is the l1- or l∞-norm, we can express it as an LP.
Norm bound error
Here the uncertainty set A is a norm ball, A = {Ā + U | ∥U∥ ≤ a}, where ∥·∥ is a norm on R^{m×n}. In this case we have

ewc(x) = sup{∥Āx − b + Ux∥ | ∥U∥ ≤ a},

which must be carefully interpreted since the first norm appearing is on Rᵐ (and is used to measure the size of the residual) and the second one appearing is on R^{m×n} (used to define the norm ball A).

This expression for ewc(x) can be simplified in several cases. As an example, let us take the Euclidean norm on Rⁿ and the associated induced norm on R^{m×n}, i.e., the maximum singular value. If Āx − b ≠ 0 and x ≠ 0, the supremum in the expression for ewc(x) is attained for U = auvᵀ, with

u = (Āx − b)/∥Āx − b∥₂,   v = x/∥x∥₂,

and the resulting worst-case error is

ewc(x) = ∥Āx − b∥₂ + a∥x∥₂.
(It is easily verified that this expression is also valid if x or Āx − b is zero.) The robust approximation problem (6.14) then becomes

minimize ∥Āx − b∥₂ + a∥x∥₂,

which is a regularized norm problem, solvable as the SOCP

minimize t1 + at2
subject to ∥Āx − b∥₂ ≤ t1, ∥x∥₂ ≤ t2.

Since the solution of this problem is the same as the solution of the regularized least-squares problem

minimize ∥Āx − b∥₂² + δ∥x∥₂²
for some value of the regularization parameter δ, we have another interpretation of the regularized least-squares problem as a worst-case robust approximation prob- lem.
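As an illustration (an addition; data and the bound a are assumptions), the regularized norm problem is a two-line model in cvxpy, which converts it to the SOCP above internally:

import numpy as np
import cvxpy as cp

# Norm-bound robust approximation: min ||Abar x - b||_2 + a ||x||_2.
rng = np.random.default_rng(0)
m, n = 30, 10
Abar = rng.standard_normal((m, n))
b = rng.standard_normal(m)
a = 0.5                                  # bound on ||U||

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(Abar @ x - b) + a * cp.norm(x))).solve()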
Uncertainty ellipsoids
We can also describe the variation in A by giving an ellipsoid of possible values for each row:

A = {[a1 ··· am]ᵀ | aᵢ ∈ Eᵢ, i = 1,…,m},

where

Eᵢ = {āᵢ + Pᵢu | ∥u∥₂ ≤ 1}.
The matrix Pi ∈ Rn×n describes the variation in ai. We allow Pi to have a nontriv- ial nullspace, in order to model the situation when the variation in ai is restricted to a subspace. As an extreme case, we take Pi = 0 if there is no uncertainty in ai.
With this ellipsoidal uncertainty description, we can give an explicit expression for the worst-case magnitude of each residual:

sup_{aᵢ∈Eᵢ} |aᵢᵀx − bᵢ| = sup{|āᵢᵀx − bᵢ + (Pᵢu)ᵀx| | ∥u∥₂ ≤ 1}
                       = |āᵢᵀx − bᵢ| + ∥Pᵢᵀx∥₂.

Using this result we can solve several robust approximation problems. For example, the robust l2-norm approximation problem

minimize ewc(x) = sup{∥Ax − b∥₂ | aᵢ ∈ Eᵢ, i = 1,…,m}

can be reduced to an SOCP, as follows. An explicit expression for the worst-case error is given by

ewc(x) = ( Σ_{i=1}^{m} ( sup_{aᵢ∈Eᵢ} |aᵢᵀx − bᵢ| )² )^{1/2} = ( Σ_{i=1}^{m} ( |āᵢᵀx − bᵢ| + ∥Pᵢᵀx∥₂ )² )^{1/2}.

To minimize ewc(x) we can solve

minimize ∥t∥₂
subject to |āᵢᵀx − bᵢ| + ∥Pᵢᵀx∥₂ ≤ tᵢ, i = 1,…,m,
where we introduced new variables t1, . . . , tm. This problem can be formulated as
minimize ∥t∥₂
subject to āᵢᵀx − bᵢ + ∥Pᵢᵀx∥₂ ≤ tᵢ, i = 1,…,m
           −āᵢᵀx + bᵢ + ∥Pᵢᵀx∥₂ ≤ tᵢ, i = 1,…,m,

which becomes an SOCP when put in epigraph form.
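As an illustration (added here; all data are made up), the split-inequality formulation above translates directly into cvxpy:

import numpy as np
import cvxpy as cp

# Robust l2-norm approximation with row-wise ellipsoidal uncertainty
# E_i = {abar_i + P_i u : ||u||_2 <= 1}.
rng = np.random.default_rng(0)
m, n = 15, 5
abar = rng.standard_normal((m, n))       # nominal rows abar_i
b = rng.standard_normal(m)
P = [0.1 * rng.standard_normal((n, n)) for _ in range(m)]

x = cp.Variable(n)
t = cp.Variable(m)
constraints = []
for i in range(m):
    # |abar_i^T x - b_i| + ||P_i^T x||_2 <= t_i, as two inequalities.
    constraints += [abar[i] @ x - b[i] + cp.norm(P[i].T @ x) <= t[i],
                    -(abar[i] @ x - b[i]) + cp.norm(P[i].T @ x) <= t[i]]
cp.Problem(cp.Minimize(cp.norm(t)), constraints).solve()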
Norm bounded error with linear structure
As a generalization of the norm bound description A = {Ā + U | ∥U∥ ≤ a}, we can define A as the image of a norm ball under an affine transformation:

A = {Ā + u1A1 + u2A2 + ··· + upAp | ∥u∥ ≤ 1},

where ∥·∥ is a norm on Rᵖ, and the p + 1 matrices Ā, A1, …, Ap ∈ R^{m×n} are given. The worst-case error can be expressed as

ewc(x) = sup_{∥u∥≤1} ∥(Ā + u1A1 + ··· + upAp)x − b∥
       = sup_{∥u∥≤1} ∥P(x)u + q(x)∥,

where P and q are defined as

P(x) = [A1x  A2x  ···  Apx] ∈ R^{m×p},   q(x) = Āx − b ∈ Rᵐ.
As a first example, we consider the robust Chebyshev approximation problem
minimize ewc(x) = sup_{∥u∥∞≤1} ∥(Ā + u1A1 + ··· + upAp)x − b∥∞.

In this case we can derive an explicit expression for the worst-case error. Let pᵢ(x)ᵀ denote the ith row of P(x). We have

ewc(x) = sup_{∥u∥∞≤1} ∥P(x)u + q(x)∥∞
       = max_{i=1,…,m} sup_{∥u∥∞≤1} |pᵢ(x)ᵀu + qᵢ(x)|
       = max_{i=1,…,m} (∥pᵢ(x)∥₁ + |qᵢ(x)|).

The robust Chebyshev approximation problem can therefore be cast as an LP

minimize t
subject to −y0 ≼ Āx − b ≼ y0
           −yk ≼ Akx ≼ yk, k = 1,…,p
           y0 + Σ_{k=1}^{p} yk ≼ t𝟙,

with variables x ∈ Rⁿ, yk ∈ Rᵐ, t ∈ R.
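As an illustration (an addition, with hypothetical data), the explicit expression ewc(x) = maxᵢ (∥pᵢ(x)∥₁ + |qᵢ(x)|) can be handed directly to cvxpy, which assembles the LP above internally:

import numpy as np
import cvxpy as cp

# Robust Chebyshev approximation with p = 2 perturbation matrices.
rng = np.random.default_rng(0)
m, n, p = 20, 8, 2
Abar = rng.standard_normal((m, n))
As = [0.1 * rng.standard_normal((m, n)) for _ in range(p)]
b = rng.standard_normal(m)

x = cp.Variable(n)
# Row i of P(x) is ((A_1 x)_i, ..., (A_p x)_i); build P(x) as m-by-p.
Px = cp.hstack([cp.reshape(Ak @ x, (m, 1)) for Ak in As])
ewc = cp.max(cp.sum(cp.abs(Px), axis=1) + cp.abs(Abar @ x - b))
cp.Problem(cp.Minimize(ewc)).solve()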
As another example, we consider the robust least-squares problem
minimize ewc(x) = sup_{∥u∥₂≤1} ∥(Ā + u1A1 + ··· + upAp)x − b∥₂.
Here we use Lagrange duality to evaluate ewc. The worst-case error ewc(x) is the square root of the optimal value of the (nonconvex) quadratic optimization problem

maximize ∥P(x)u + q(x)∥₂²
subject to uᵀu ≤ 1,

with u as variable. The Lagrange dual of this problem can be expressed as the SDP

minimize   t + λ
subject to ⎡ I      P(x)  q(x) ⎤
           ⎢ P(x)ᵀ  λI    0    ⎥ ≽ 0,     (6.15)
           ⎣ q(x)ᵀ  0     t    ⎦
with variables t, λ ∈ R. Moreover, as mentioned in §5.2 and §B.1 (and proved in §B.4), strong duality holds for this pair of primal and dual problems. In other words, for fixed x, we can compute ewc(x)² by solving the SDP (6.15) with variables t and λ. Optimizing jointly over t, λ, and x is equivalent to minimizing ewc(x)². We conclude that the robust least-squares problem is equivalent to the SDP (6.15) with x, λ, t as variables.
Example 6.6 Comparison of worst-case robust, Tikhonov regularized, and nominal least-squares solutions. We consider an instance of the robust approximation problem
minimize sup_{∥u∥₂≤1} ∥(Ā + u1A1 + u2A2)x − b∥₂,   (6.16)

with dimensions m = 50, n = 20. The matrix Ā has norm 10, and the two matrices A1 and A2 have norm 1, so the variation in the matrix A is, roughly speaking, around 10%. The uncertainty parameters u1 and u2 lie in the unit disk in R².

We compute the optimal solution of the robust least-squares problem (6.16) xrls, as well as the solution of the nominal least-squares problem xls (i.e., assuming u = 0), and also the Tikhonov regularized solution xtik, with δ = 1.

To illustrate the sensitivity of each of these approximate solutions to the parameter u, we generate 10⁵ parameter vectors, uniformly distributed on the unit disk, and evaluate the residual

∥(Ā + u1A1 + u2A2)x − b∥₂
for each parameter value. The distributions of the residuals are shown in figure 6.16.
We can make several observations. First, the residuals of the nominal least-squares solution are widely spread, from a smallest value around 0.52 to a largest value around 4.9. In particular, the least-squares solution is very sensitive to parameter variation. In contrast, both the robust least-squares and Tikhonov regularized solutions exhibit far smaller variation in residual as the uncertainty parameter varies over the unit disk. The robust least-squares solution, for example, achieves a residual between 2.0 and 2.6 for all parameters in the unit disk.

Figure 6.16 Distribution of the residuals for the three solutions of the least-squares problem (6.16): xls, the least-squares solution assuming u = 0; xtik, the Tikhonov regularized solution with δ = 1; and xrls, the robust least-squares solution. The histograms were obtained by generating 10⁵ values of the uncertain parameter vector u from a uniform distribution on the unit disk in R²; the bins have width 0.1.
6.5 Function fitting and interpolation
In function fitting problems, we select a member of a finite-dimensional subspace of functions that best fits some given data or requirements. For simplicity we
consider real-valued functions; the ideas are readily extended to handle vector- valued functions as well.
6.5.1 Function families

We consider a family of functions f1, …, fn : Rᵏ → R, with common domain dom fᵢ = D. With each x ∈ Rⁿ we associate the function f : Rᵏ → R given by
f (u) = x1 f1 (u) + · · · + xn fn (u) (6.17)
with dom f = D. The family {f1, . . . , fn} is sometimes called the set of basis functions (for the fitting problem) even when the functions are not independent. The vector x ∈ Rn, which parametrizes the subspace of functions, is our optimiza- tion variable, and is sometimes called the coefficient vector. The basis functions generate a subspace F of functions on D.
In many applications the basis functions are specially chosen, using prior knowl- edge or experience, in order to reasonably model functions of interest with the finite-dimensional subspace of functions. In other cases, more generic function families are used. We describe a few of these below.
Polynomials
One common subspace of functions on R consists of polynomials of degree less than n. The simplest basis consists of the powers, i.e., fᵢ(t) = t^{i−1}, i = 1, …, n. In many applications, the same subspace is described using a different basis, for example, a set of polynomials f1, …, fn, of degree less than n, that are orthonormal with respect to some positive function (or measure) φ : R → R₊, i.e.,

∫ fᵢ(t)fⱼ(t)φ(t) dt = 1 if i = j, and 0 if i ≠ j.
Another common basis for polynomials is the Lagrange basis f1, . . . , fn associated with distinct points t1, . . . , tn, which satisfy
fᵢ(tⱼ) = 1 if i = j, and fᵢ(tⱼ) = 0 if i ≠ j.
We can also consider polynomials on Rk, with a maximum total degree, or a maximum degree for each variable.
As a related example, we have trigonometric polynomials of degree less than n, with basis
sin kt, k = 1, …, n − 1,   cos kt, k = 0, …, n − 1.

Piecewise-linear functions
We start with a triangularization of the domain D, which means the following. We have a set of mesh or grid points g1,…,gn ∈ Rk, and a partition of D into a set of simplexes:
D = S1 ∪ ··· ∪ Sm,   int(Sᵢ ∩ Sⱼ) = ∅ for i ≠ j.
Figure 6.17 A piecewise-linear function of two variables, on the unit square. The triangulation consists of 98 simplexes, and a uniform grid of 64 points in the unit square.
Each simplex is the convex hull of k + 1 grid points, and we require that each grid point is a vertex of any simplex it lies in.
Given a triangularization, we can construct a piecewise-linear (or more precisely, piecewise-affine) function f by assigning function values f(gi) = xi to the grid points, and then extending the function affinely on each simplex. The function f can be expressed as (6.17) where the basis functions fi are affine on each simplex and are defined by the conditions
fᵢ(gⱼ) = 1 if i = j, and fᵢ(gⱼ) = 0 if i ≠ j.
By construction, such a function is continuous. Figure 6.17 shows an example for k = 2.
Piecewise polynomials and splines
The idea of piecewise-affine functions on a triangulated domain is readily extended to piecewise polynomials and other functions.
Piecewise polynomials are defined as polynomials (of some maximum degree) on each simplex of the triangulation, which are continuous, i.e., the polynomials agree at the boundaries between simplexes. By further restricting the piecewise polynomials to have continuous derivatives up to a certain order, we can define various classes of spline functions. Figure 6.18 shows an example of a cubic spline, i.e., a piecewise polynomial of degree 3 on R, with continuous first and second derivatives.
Figure 6.18 Cubic spline. A cubic spline is a piecewise polynomial, with continuous first and second derivatives. In this example, the cubic spline f is formed from the three cubic polynomials p1 (on [u0, u1]), p2 (on [u1, u2]), and p3 (on [u2, u3]). Adjacent polynomials have the same function value, and equal first and second derivatives, at the boundary points u1 and u2. In this example, the dimension of the family of functions is n = 6, since we have 12 polynomial coefficients (4 per cubic polynomial), and 6 equality constraints (3 each at u1 and u2).
6.5.2 Constraints
In this section we describe some constraints that can be imposed on the function f, and therefore, on the variable x ∈ Rn.
Function value interpolation and inequalities
Let v be a point in D. The value of f at v,

f(v) = Σ_{i=1}^{n} xᵢfᵢ(v),

is a linear function of x. Therefore interpolation conditions

f(vⱼ) = zⱼ, j = 1,…,m,
which require the function f to have the values zj ∈ R at specified points vj ∈ D, form a set of linear equalities in x. More generally, inequalities on the function value at a given point, as in l ≤ f(v) ≤ u, are linear inequalities on the variable x. There are many other interesting convex constraints on f (hence, x) that involve the function values at a finite set of points v1, . . . , vN . For example, the Lipschitz constraint
|f(vⱼ) − f(vₖ)| ≤ L∥vⱼ − vₖ∥, j, k = 1,…,m,
forms a set of linear inequalities in x.
We can also impose inequalities on the function values at an infinite number of
points. As an example, consider the nonnegativity constraint f(u) ≥ 0 for all u ∈ D.
This is a convex constraint on x (since it is the intersection of an infinite number of halfspaces), but may not lead to a tractable problem except in special cases that exploit the particular structure of the functions. One simple example occurs when the functions are piecewise-linear. In this case, if the function values are nonnegative at the grid points, the function is nonnegative everywhere, so we obtain a simple (finite) set of linear inequalities.
As a less trivial example, consider the case when the functions are polynomials on R, with even maximum degree 2k (i.e., n = 2k + 1), and D = R. As shown in exercise 2.37, page 65, the nonnegativity constraint
p(u) = x1 + x2u + ··· + x_{2k+1}u^{2k} ≥ 0 for all u ∈ R,

is equivalent to

xᵢ = Σ_{m+n=i+1} Y_{mn}, i = 1,…,2k+1,   Y ≽ 0,

where Y ∈ S^{k+1} is an auxiliary variable.
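As an illustration (an addition, not from the text), a cvxpy sketch of this semidefinite representation, here used to project a hypothetical coefficient vector onto the set of polynomials nonnegative on R; 0-based indexing makes the condition x_i = Σ_{m+n=i} Y_{mn}, and an SDP-capable solver is required:

import numpy as np
import cvxpy as cp

k = 2
x0 = np.array([1.0, 0.0, -3.0, 0.0, 1.0])   # hypothetical coefficients

x = cp.Variable(2 * k + 1)
Y = cp.Variable((k + 1, k + 1), PSD=True)
# Coefficient of u^i equals the sum over the i-th antidiagonal of Y.
cons = [x[i] == sum(Y[mi, i - mi]
                    for mi in range(max(0, i - k), min(i, k) + 1))
        for i in range(2 * k + 1)]
cp.Problem(cp.Minimize(cp.sum_squares(x - x0)), cons).solve()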
Derivative constraints
Suppose the basis functions fᵢ are differentiable at a point v ∈ D. The gradient

∇f(v) = Σ_{i=1}^{n} xᵢ∇fᵢ(v)

is a linear function of x, so interpolation conditions on the derivative of f at v reduce to linear equality constraints on x. Requiring that the norm of the gradient at v not exceed a given limit,

∥∇f(v)∥ = ∥Σ_{i=1}^{n} xᵢ∇fᵢ(v)∥ ≤ M,
is a convex constraint on x. The same idea extends to higher derivatives. For
example, if f is twice differentiable at v, the requirement that lI ≼ ∇2f(v) ≼ uI
is a linear matrix inequality in x, hence convex.
We can also impose constraints on the derivatives at an infinite number of points. For example, we can require that f is monotone:

f(u) ≥ f(v) for all u, v ∈ D with u ≽ v.
This is a convex constraint in x, but may not lead to a tractable problem except in special cases. When f is piecewise affine, for example, the monotonicity constraint is equivalent to the condition ∇f(v) ≽ 0 inside each of the simplexes. Since the gradient is a linear function of the grid point values, this leads to a simple (finite) set of linear inequalities.
As another example, we can require that the function be convex, i.e., satisfy f((u+v)/2)≤(f(u)+f(v))/2forallu, v∈D
(which is enough to ensure convexity when f is continuous). This is a convex con- straint, which has a tractable representation in some cases. One obvious example is when f is quadratic, in which case the convexity constraint reduces to the re- quirement that the quadratic part of f be nonnegative, which is an LMI. Another example in which a convexity constraint leads to a tractable problem is described in more detail in §6.5.5.
Integral constraints
Any linear functional L on the subspace of functions can be expressed as a linear function of x, i.e., we have L(f) = cT x. Evaluation of f (or a derivative) at a point is just a special case. As another example, the linear functional
L(f) = ∫_D φ(u)f(u) du,
where φ : Rᵏ → R, can be expressed as L(f) = cᵀx, where

cᵢ = ∫_D φ(u)fᵢ(u) du.
Thus, a constraint of the form L(f) = a is a linear equality constraint on x. One example of such a constraint is the moment constraint

∫_D tᵐf(t) dt = a

(where f : R → R).
6.5.3 Fitting and interpolation problems

Minimum norm function fitting
In a fitting problem, we are given data
(u1,y1), …, (um,ym)
with uᵢ ∈ D and yᵢ ∈ R, and seek a function f ∈ F that matches this data as
minimize Σ_{i=1}^{m} (f(uᵢ) − yᵢ)²,
which is a simple least-squares problem in the variable x. We can add a variety of constraints, for example linear inequalities that must be satisfied by f at various points, constraints on the derivatives of f, monotonicity constraints, or moment constraints.
Example 6.7 Polynomial fitting. We are given data u1,…,um ∈ R and v1,…,vm ∈ R, and hope to approximately fit a polynomial of the form
p(u) = x1 + x2u + ··· + xn u^{n−1}

to the data. For each x we form the vector of errors,
e = (p(u1) − v1,…,p(um) − vm).
To find the polynomial that minimizes the norm of the error, we solve the norm
approximation problem
minimize ∥e∥ = ∥Ax − v∥
with variable x ∈ Rⁿ, where Aᵢⱼ = uᵢ^{j−1}, i = 1,…,m, j = 1,…,n.
Figure 6.19 shows an example with m = 40 data points and n = 6 (i.e., polynomials of maximum degree 5), for the l2- and l∞-norms.
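As an illustration (added; the data are made up), the two fits can be computed as follows, with the l∞-norm problem handed to cvxpy:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 40, 6
u = np.sort(rng.uniform(-1, 1, m))
v = np.sin(2 * u) + 0.05 * rng.standard_normal(m)   # made-up data

A = np.vander(u, n, increasing=True)    # A_ij = u_i^(j-1)

x_l2, *_ = np.linalg.lstsq(A, v, rcond=None)        # l2-norm fit

x = cp.Variable(n)                                  # l_infinity-norm fit
cp.Problem(cp.Minimize(cp.norm(A @ x - v, "inf"))).solve()
x_linf = x.value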
Figure 6.19 Two polynomials of degree 5 that approximate the 40 data points shown as circles. The polynomial shown as a solid line minimizes the l2-norm of the error; the polynomial shown as a dashed line minimizes the l∞-norm.

Figure 6.20 Two cubic splines that approximate the 40 data points shown as circles (which are the same as the data in figure 6.19). The spline shown as a solid line minimizes the l2-norm of the error; the spline shown as a dashed line minimizes the l∞-norm. As in the polynomial approximation shown in figure 6.19, the dimension of the subspace of fitting functions is 6.
Example 6.8 Spline fitting. Figure 6.20 shows the same data as in example 6.7, and two optimal fits with cubic splines. The interval [−1,1] is divided into three equal intervals, and we consider piecewise polynomials, with maximum degree 3, with continuous first and second derivatives. The dimension of this subspace of functions is 6, the same as the dimension of polynomials with maximum degree 5, considered in example 6.7.
In the simplest forms of function fitting, we have m ≫ n, i.e., the number of data points is much larger than the dimension of the subspace of functions. Smoothing is accomplished automatically, since all members of the subspace are smooth.
Least-norm interpolation
In another variation of function fitting, we have fewer data points than the dimen- sion of the subspace of functions. In the simplest case, we require that the function we choose must satisfy the interpolation conditions
f(ui) = yi, i = 1,…,m,
which are linear equality constraints on x. Among the functions that satisfy these interpolation conditions, we might seek one that is smoothest, or smallest. These lead to least-norm problems.
In the most general function fitting problem, we can optimize an objective (such as some measure of the error e), subject to a variety of convex constraints that represent our prior knowledge of the underlying function.
Interpolation, extrapolation, and bounding
By evaluating the optimal function fit fˆ at a point v not in the original data set, we obtain a guess of what the value of the underlying function is, at the point v. This is called interpolation when v is between or near the given data points (e.g., v ∈ conv{v1, . . . , vm}), and extrapolation otherwise.
We can also produce an interval in which the value f(v) can lie, by maximizing and minimizing (the linear function) f(v), subject to the constraints. We can use the function fit to help identify faulty data or outliers. Here we might use, for example, an l1-norm fit, and look for data points with large errors.
6.5.4 Sparse descriptions and basis pursuit
In basis pursuit, there is a very large number of basis functions, and the goal is to find a good fit of the given data as a linear combination of a small number of the basis functions. (In this context the function family is linearly dependent, and is sometimes referred to as an over-complete basis or dictionary.) This is called basis pursuit since we are selecting a much smaller basis, from the given over-complete basis, to model the data.
Thus we seek a function f ∈ F that fits the data well,

f(uᵢ) ≈ yᵢ, i = 1,…,m,

with a sparse coefficient vector x, i.e., card(x) small. In this case we refer to

f = x1f1 + ··· + xnfn = Σ_{i∈B} xᵢfᵢ,
where B = {i | xi ̸= 0} is the set of indices of the chosen basis elements, as a sparse description of the data. Mathematically, basis pursuit is the same as the regressor selection problem (see §6.4), but the interpretation (and scale) of the optimization problem are different.
Sparse descriptions and basis pursuit have many uses. They can be used for de-noising or smoothing, or data compression for efficient transmission or storage of a signal. In data compression, the sender and receiver both know the dictionary, or basis elements. To send a signal to the receiver, the sender first finds a sparse representation of the signal, and then sends to the receiver only the nonzero coef- ficients (to some precision). Using these coefficients, the receiver can reconstruct (an approximation of) the original signal.
One common approach to basis pursuit is the same as the method for regressor selection described in §6.4, and based on l1-norm regularization as a heuristic for finding sparse descriptions. We first solve the convex problem
minimize Σ_{i=1}^{m} (f(uᵢ) − yᵢ)² + γ∥x∥₁,   (6.18)
where γ > 0 is a parameter used to trade off the quality of the fit to the data, and the sparsity of the coefficient vector. The solution of this problem can be used directly, or followed by a refinement step, in which the best fit is found, using the sparsity pattern of the solution of (6.18). In other words, we first solve (6.18), to obtain xˆ. We then set B = {i | xˆi ̸= 0}, i.e., the set of indices corresponding to nonzero coefficients. Then we solve the least-squares problem
minimize Σ_{i=1}^{m} (f(uᵢ) − yᵢ)²

with variables xᵢ, i ∈ B, and xᵢ = 0 for i ∉ B.
In basis pursuit and sparse description applications it is not uncommon to have
a very large dictionary, with n on the order of 104 or much more. To be effective, algorithms for solving (6.18) must exploit problem structure, which derives from the structure of the dictionary signals.
Time-frequency analysis via basis pursuit
In this section we illustrate basis pursuit and sparse representation with a simple example. We consider functions (or signals) on R, with the range of interest [0, 1]. We think of the independent variable as time, so we use t (instead of u) to denote it.
We first describe the basis functions in the dictionary. Each basis function is a Gaussian sinusoidal pulse, or Gabor function, with form
e^{−(t−τ)²/σ²} cos(ωt + φ),
Figure 6.21 Three of the basis elements in the dictionary, all with center time τ = 0.5 and cosine phase. The top signal has frequency ω = 0, the middle one has frequency ω = 75, and the bottom one has frequency ω = 150.
where σ > 0 gives the width of the pulse, τ is the time of (the center of) the pulse, ω ≥ 0 is the frequency, and φ is the phase angle. All of the basis functions have width σ = 0.05. The pulse times and frequencies are
τ = 0.002k, k = 0,…,500, ω = 5k, k = 0,…,30.
For each time τ, there is one basis element with frequency zero (and phase φ = 0), and 2 basis elements (cosine and sine, i.e., phase φ = 0 and φ = π/2) for each of 30 remaining frequencies, so all together there are 501 × 61 = 30561 basis elements. The basis elements are naturally indexed by time, frequency, and phase (cosine or sine), so we denote them as
fτ,ω,c, τ = 0,0.002,…,1, ω = 0,5,…,150, fτ,ω,s, τ = 0,0.002,…,1, ω = 5,…,150.
Three of these basis functions (all with time τ = 0.5) are shown in figure 6.21. Basis pursuit with this dictionary can be thought of as a time-frequency analysis of the data. If a basis element fτ,ω,c or fτ,ω,s appears in the sparse representation of a signal (i.e., with a nonzero coefficient), we can interpret this as meaning that
the data contains the frequency ω at time τ.
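As an illustration (added here, not from the book), the dictionary can be assembled in a few lines of Python; the sample grid of 501 points with spacing 0.002 is taken from the example below, and the sizes match the text (501 pulse times, 31 frequencies, cosine and sine phases, 30561 atoms in total):

import numpy as np

sigma = 0.05
taus = 0.002 * np.arange(501)
omegas = 5 * np.arange(31)
t = 0.002 * np.arange(501)               # sample times

atoms = []
for tau in taus:
    env = np.exp(-((t - tau) / sigma) ** 2)
    for w in omegas:
        atoms.append(env * np.cos(w * t))           # cosine phase
        if w > 0:
            atoms.append(env * np.sin(w * t))       # sine phase
A = np.column_stack(atoms)               # 501 x 30561 dictionary matrix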
We will use basis pursuit to find a sparse approximation of the signal
y(t) = a(t) sin θ(t)
where
a(t) = 1 + 0.5 sin(11t), θ(t) = 30 sin(5t).
(This signal is chosen only because it is simple to describe, and exhibits noticeable changes in its spectral content over time.) We can interpret a(t) as the signal amplitude, and θ(t) as its total phase. We can also interpret
ω(t) = |dθ/dt| = 150|cos(5t)|
as the instantaneous frequency of the signal at time t. The data are given as 501 uniformly spaced samples over the interval [0, 1], i.e., we are given 501 pairs (tk , yk ) with
tk = 0.002k, yk = y(tk), k = 0,…,500.
We first solve the l1-norm regularized least-squares problem (6.18), with γ = 1. The resulting optimal coefficient vector is very sparse, with only 42 nonzero coefficients out of 30561. We then find the least-squares fit of the original signal using these 42 basis vectors. The result ŷ is compared with the original signal y in figure 6.22. The top figure shows the approximated signal (in dashed line) and, almost indistinguishable, the original signal y(t) (in solid line). The bottom figure shows the error y(t) − ŷ(t). As is clear from the figure, we have obtained an approximation ŷ with a very good relative fit. The relative error is

( (1/501) Σ_{i=1}^{501} (y(tᵢ) − ŷ(tᵢ))² ) / ( (1/501) Σ_{i=1}^{501} y(tᵢ)² ) = 2.6 · 10⁻⁴.

Figure 6.22 Top. The original signal (solid line) and approximation ŷ obtained by basis pursuit (dashed line) are almost indistinguishable. Bottom. The approximation error y(t) − ŷ(t), with different vertical scale.

Figure 6.23 Top: original signal. Bottom: time-frequency plot. The dashed curve shows the instantaneous frequency ω(t) = 150|cos(5t)| of the original signal. Each circle corresponds to a chosen basis element in the approximation obtained by basis pursuit. The horizontal axis shows the time index τ, and the vertical axis shows the frequency index ω of the basis element.
By plotting the pattern of nonzero coefficients versus time and frequency, we obtain a time-frequency analysis of the original data. Such a plot is shown in fig- ure 6.23, along with the instantaneous frequency. The plot shows that the nonzero components closely track the instantaneous frequency.
6.5.5 Interpolation with convex functions
In some special cases we can solve interpolation problems involving an infinite- dimensional set of functions, using finite-dimensional convex optimization. In this section we describe an example.
We start with the following question: When does there exist a convex function f : Rk → R, with domf = Rk, that satisfies the interpolation conditions
f(ui) = yi, i = 1,…,m,
at given points ui ∈ Rk? (Here we do not restrict f to lie in any finite-dimensional subspace of functions.) The answer is: if and only if there exist g1, . . . , gm such that
yⱼ ≥ yᵢ + gᵢᵀ(uⱼ − uᵢ), i, j = 1,…,m.   (6.19)

To see this, first suppose that f is convex, dom f = Rᵏ, and f(uᵢ) = yᵢ,
i = 1,…,m. At each ui we can find a vector gi such that
f(z) ≥ f(ui) + giT (z − ui) (6.20)
for all z. If f is differentiable, we can take gi = ∇f(ui); in the more general case, we can construct gi by finding a supporting hyperplane to epi f at (ui, yi). (The vectors gi are called subgradients.) By applying (6.20) to z = uj , we obtain (6.19).
Conversely, suppose g1, …, gm satisfy (6.19). Define f as

f(z) = max_{i=1,…,m} (yᵢ + gᵢᵀ(z − uᵢ))

for all z ∈ Rᵏ. Clearly, f is a (piecewise-linear) convex function. The inequalities (6.19) imply that f(uᵢ) = yᵢ, for i = 1,…,m.
We can use this result to solve several problems involving interpolation, approx- imation, or bounding, with convex functions.
Fitting a convex function to given data
Perhaps the simplest application is to compute the least-squares fit of a convex function to given data (ui,yi), i = 1,…,m:
minimize   Σ_{i=1}^{m} (yᵢ − f(uᵢ))²
subject to f : Rᵏ → R is convex, dom f = Rᵏ.
This is an infinite-dimensional problem, since the variable is f, which is in the space of continuous real-valued functions on Rk. Using the result above, we can formulate this problem as
minimize   Σ_{i=1}^{m} (yᵢ − ŷᵢ)²
subject to ŷⱼ ≥ ŷᵢ + gᵢᵀ(uⱼ − uᵢ), i, j = 1,…,m,
which is a QP with variables yˆ ∈ Rm and g1,…,gm ∈ Rk. The optimal value of this problem is zero if and only if the given data can be interpolated by a convex function, i.e., if there is a convex function that satisfies f(ui) = yi. An example is shown in figure 6.24.
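As an illustration (added; the data are synthetic assumptions), the QP translates directly into cvxpy, with one constraint per ordered pair of data points:

import numpy as np
import cvxpy as cp

# Least-squares convex regression: fit values yhat_i and subgradients g_i.
rng = np.random.default_rng(0)
m = 40
u = np.sort(rng.uniform(-1, 1, m)).reshape(m, 1)
y = u.ravel() ** 2 + 0.1 * rng.standard_normal(m)   # roughly convex data

yhat = cp.Variable(m)
g = cp.Variable((m, 1))
constraints = [yhat[j] >= yhat[i] + g[i] @ (u[j] - u[i])
               for i in range(m) for j in range(m) if i != j]
cp.Problem(cp.Minimize(cp.sum_squares(y - yhat)), constraints).solve()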
Bounding values of an interpolating convex function
As another simple example, suppose that we are given data (ui, yi), i = 1, . . . , m, which can be interpolated by a convex function. We would like to determine the range of possible values of f(u0), where u0 is another point in Rk, and f is any convex function that interpolates the given data. To find the smallest possible value of f(u0) we solve the LP
minimize y0
subject to yⱼ ≥ yᵢ + gᵢᵀ(uⱼ − uᵢ), i, j = 0,…,m,
Figure 6.24 Least-squares fit of a convex function to data, shown as circles. The (piecewise-linear) function shown minimizes the sum of squared fitting error, over all convex functions.
which is an LP with variables y0 ∈ R, g0, . . . , gm ∈ Rk. By maximizing y0 (which is also an LP) we find the largest possible value of f(u0) for a convex function that interpolates the given data.
Interpolation with monotone convex functions
As an extension of convex interpolation, we can consider interpolation with a convex and monotone nondecreasing function. It can be shown that there exists a convex function f : Rk → R, with domf = Rk, that satisfies the interpolation conditions
f(ui) = yi, i = 1,…,m,
and is monotone nondecreasing (i.e., f(u) ≥ f(v) whenever u ≽ v), if and only if
there exist g1,…,gm ∈ Rk, such that
gᵢ ≽ 0, i = 1,…,m,   yⱼ ≥ yᵢ + gᵢᵀ(uⱼ − uᵢ), i, j = 1,…,m.   (6.21)
In other words, we add to the convex interpolation conditions (6.19), the condition that the subgradients gi are all nonnegative. (See exercise 6.12.)
Bounding consumer preference
As an application, we consider a problem of predicting consumer preferences. We consider different baskets of goods, consisting of different amounts of n consumer goods. A goods basket is specified by a vector x ∈ [0,1]n where xi denotes the amount of consumer good i. We assume the amounts are normalized so that 0≤xi ≤1,i.e.,xi =0istheminimumandxi =1isthemaximumpossible amount of good i. Given two baskets of goods x and x ̃, a consumer can either prefer x to x ̃, or prefer x ̃ to x, or consider x and x ̃ equally attractive. We consider one model consumer, whose choices are repeatable.
We model consumer preference in the following way. We assume there is an underlying utility function u : Rn → R, with domain [0,1]n; u(x) gives a measure of the utility derived by the consumer from the goods basket x. Given a choice between two baskets of goods, the consumer chooses the one that has larger utility, and will be ambivalent when the two baskets have equal utility. It is reasonable to assume that u is monotone nondecreasing. This means that the consumer always prefers to have more of any good, with the amounts of all other goods the same. It is also reasonable to assume that u is concave. This models satiation, or decreasing marginal utility as we increase the amount of goods.
Now suppose we are given some consumer preference data, but we do not know the underlying utility function u. Specifically, we have a set of goods baskets a1, . . . , am ∈ [0, 1]n, and some information about preferences among them:
u(ai) > u(aj) for (i,j) ∈ P, u(ai) ≥ u(aj) for (i,j) ∈ Pweak, (6.22)
where P, Pweak ⊆ {1,…,m}×{1,…,m} are given. Here P gives the set of known preferences: (i,j) ∈ P means that basket ai is known to be preferred to basket aj. The set Pweak gives the set of known weak preferences: (i, j) ∈ Pweak means that basket ai is preferred to basket aj, or that the two baskets are equally attractive.
We first consider the following question: How can we determine if the given data are consistent, i.e., whether or not there exists a concave nondecreasing utility function u for which (6.22) holds? This is equivalent to solving the feasibility problem
find       u
subject to u : Rⁿ → R concave and nondecreasing
           u(aᵢ) > u(aⱼ), (i,j) ∈ P
           u(aᵢ) ≥ u(aⱼ), (i,j) ∈ Pweak,     (6.23)
with the function u as the (infinite-dimensional) optimization variable. Since the constraints in (6.23) are all homogeneous, we can express the problem in the equiv- alent form
find       u
subject to u : Rⁿ → R concave and nondecreasing
           u(aᵢ) ≥ u(aⱼ) + 1, (i,j) ∈ P
           u(aᵢ) ≥ u(aⱼ), (i,j) ∈ Pweak,     (6.24)
which uses only nonstrict inequalities. (It is clear that if u satisfies (6.24), then it must satisfy (6.23); conversely, if u satisfies (6.23), then it can be scaled to satisfy (6.24).) This problem, in turn, can be cast as a (finite-dimensional) linear programming feasibility problem, using the interpolation result on page 339:
find       u1,…,um, g1,…,gm
subject to gᵢ ≽ 0, i = 1,…,m
           uⱼ ≤ uᵢ + gᵢᵀ(aⱼ − aᵢ), i, j = 1,…,m     (6.25)
           uᵢ ≥ uⱼ + 1, (i,j) ∈ P
           uᵢ ≥ uⱼ, (i,j) ∈ Pweak.
By solving this linear programming feasibility problem, we can determine whether there exists a concave, nondecreasing utility function that is consistent with the

6.5 Function fitting and interpolation 341
given sets of strict and nonstrict preferences. If (6.25) is feasible, there is at least one such utility function (and indeed, we can construct one that is piecewise-linear, from a feasible u1,…,um, g1,…,gm). If (6.25) is not feasible, we can conclude that there is no concave increasing utility function that is consistent with the given sets of strict and nonstrict preferences.
As an example, suppose that P and Pweak are consumer preferences that are known to be consistent with at least one concave increasing utility function. Con- sider a pair (k,l) that is not in P or Pweak, i.e., consumer preference between baskets k and l is not known. In some cases we can conclude that a preference holds between basket k and l, even without knowing the underlying preference function. To do this we augment the known preferences (6.22) with the inequality u(ak) ≤ u(al), which means that basket l is preferred to basket k, or they are equally attractive. We then solve the feasibility linear program (6.25), including the extra weak preference u(ak) ≤ u(al). If the augmented set of preferences is in- feasible, it means that any concave nondecreasing utility function that is consistent with the original given consumer preference data must also satisfy u(ak) > u(al). In other words, we can conclude that basket k is preferred to basket l, without knowing the underlying utility function.
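As an illustration (added here; the baskets and preference pairs below are hypothetical), the feasibility LP (6.25) can be sketched in cvxpy:

import numpy as np
import cvxpy as cp

def consistent(a, P, Pweak):
    """Test for a concave nondecreasing utility consistent with the data."""
    m, n = a.shape
    u = cp.Variable(m)
    g = cp.Variable((m, n), nonneg=True)       # subgradients g_i >= 0
    cons = [u[j] <= u[i] + g[i] @ (a[j] - a[i])
            for i in range(m) for j in range(m) if i != j]
    cons += [u[i] >= u[j] + 1 for (i, j) in P]
    cons += [u[i] >= u[j] for (i, j) in Pweak]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL

a = np.array([[0.2, 0.8], [0.6, 0.3], [0.9, 0.9]])
print(consistent(a, P=[(2, 0), (2, 1)], Pweak=[(0, 1)]))

Adding a candidate weak preference u(ak) ≤ u(al) to Pweak and rechecking feasibility implements the augmentation argument described above.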
Example 6.9 Here we give a simple numerical example that illustrates the discussion above. We consider baskets of two goods (so we can easily plot the goods baskets). To generate the consumer preference data P, we compute 40 random points in [0, 1]2, and then compare them using the utility function
u(x1, x2) = (1.1 x1^{1/2} + 0.8 x2^{1/2})/1.9.
These goods baskets, and a few level curves of the utility function u, are shown in figure 6.25.
We now use the consumer preference data (but not, of course, the true utility function u) to compare each of these 40 goods baskets to the basket a0 = (0.5, 0.5). For each original basket ai, we solve the linear programming feasibility problem described above, to see if we can conclude that basket a0 is preferred to basket ai. Similarly, we check whether we can conclude that basket ai is preferred to basket a0. For each basket ai, there are three possible outcomes: we can conclude that a0 is definitely preferred to ai, that ai is definitely preferred to a0, or (if both LP feasibility problems are feasible) that no conclusion is possible. (Here, definitely preferred means that the preference holds for any concave nondecreasing utility function that is consistent with the original given data.)
We find that 21 of the baskets are definitely rejected in favor of (0.5,0.5), and 14 of the baskets are definitely preferred. We cannot make any conclusion, from the consumer preference data, about the remaining 5 baskets. These results are shown in figure 6.26. Note that goods baskets below and to the left of (0.5, 0.5) will definitely be rejected in favor of (0.5, 0.5), using only the monotonicity property of the utility function, and similarly, those points that are above and to the right of (0.5, 0.5) must be preferred. So for these 17 points, there is no need to solve the feasibility LP (6.25). Classifying the 23 points in the other two quadrants, however, requires the concavity assumption, and solving the feasibility LP (6.25).
Figure 6.25 Forty goods baskets a1, …, a40, shown as circles. The 0.1, 0.2, …, 0.9 level curves of the true utility function u are shown as dashed lines. This utility function is used to find the consumer preference data P among the 40 baskets.

Figure 6.26 Results of consumer preference analysis using the LP (6.25), for a new goods basket a0 = (0.5, 0.5). The original baskets are displayed as open circles if they are definitely rejected (u(ak) < u(a0)), as solid black circles if they are definitely preferred (u(ak) > u(a0)), and as squares when no conclusion can be made. The level curve of the underlying utility function that passes through (0.5, 0.5) is shown as a dashed curve. The vertical and horizontal lines passing through (0.5, 0.5) divide [0, 1]² into four quadrants. Points in the upper right quadrant must be preferred to (0.5, 0.5), by the monotonicity assumption on u. Similarly, (0.5, 0.5) must be preferred to the points in the lower left quadrant. For the points in the other two quadrants, the results are not obvious.
Bibliography
The robustness properties of approximations with different penalty functions were analyzed by Huber [Hub64, Hub81], who also proposed the penalty function (6.4). The log-barrier penalty function arises in control theory, where it is applied to the system closed-loop frequency response, and has several names, e.g., central H∞, or risk-averse control; see Boyd and Barratt [BB91] and the references therein.
Regularized approximation is covered in many books, including Tikhonov and Arsenin [TA77] and Hansen [Han98]. Tikhonov regularization is sometimes called ridge regression (Golub and Van Loan [GL89, page 564]). Least-squares approximation with l1-norm regularization is also known under the name lasso (Tibshirani [Tib96]). Other least-squares regularization and regressor selection techniques are discussed and compared in Hastie, Tibshirani, and Friedman [HTF01, §3.4].
Total variation denoising was introduced for image reconstruction by Rudin, Osher, and Fatemi [ROF92].
The robust least-squares problem with norm bounded uncertainty (page 321) was in- troduced by El Ghaoui and Lebret [EL97], and Chandrasekaran, Golub, Gu, and Sayed [CGGS98]. El Ghaoui and Lebret also give the SDP formulation of the robust least-squares problem with structured uncertainty (page 323).
Chen, Donoho, and Saunders [CDS01] discuss basis pursuit via linear programming. They refer to the l1-norm regularized problem (6.18) as basis pursuit denoising. Meyer and Pratt [MP68] is an early paper on the problem of bounding utility functions.

Exercises
Norm approximation and least-norm problems
6.1 Quadratic bounds for log barrier penalty. Let φ : R → R be the log barrier penalty function with limit a > 0:
φ(u) = −a² log(1 − (u/a)²) for |u| < a, and φ(u) = ∞ otherwise.
Show that if u ∈ R^m satisfies ∥u∥∞ < a, then
∥u∥₂² ≤ ∑_{i=1}^m φ(u_i) ≤ (φ(∥u∥∞)/∥u∥∞²) ∥u∥₂².
This means that ∑_{i=1}^m φ(u_i) is well approximated by ∥u∥₂² if ∥u∥∞ is small compared to a. For example, if ∥u∥∞/a = 0.25, then
∥u∥₂² ≤ ∑_{i=1}^m φ(u_i) ≤ 1.033 · ∥u∥₂².
6.2 l1-, l2-, and l∞-norm approximation by a constant vector. What is the solution of the norm approximation problem with one scalar variable x ∈ R,
minimize ∥x1 − b∥,
for the l1-, l2-, and l∞-norms?
6.3 Formulate the following approximation problems as LPs, QPs, SOCPs, or SDPs. The problem data are A ∈ R^{m×n} and b ∈ R^m. The rows of A are denoted a_i^T.
(a) Deadzone-linear penalty approximation: minimize ∑_{i=1}^m φ(a_i^T x − b_i), where
φ(u) = 0 for |u| ≤ a, and φ(u) = |u| − a for |u| > a,
where a > 0.
(b) Log-barrier penalty approximation: minimize ∑_{i=1}^m φ(a_i^T x − b_i), where
φ(u) = −a² log(1 − (u/a)²) for |u| < a, and φ(u) = ∞ for |u| ≥ a,
with a > 0.
(c) Huber penalty approximation: minimize ∑_{i=1}^m φ(a_i^T x − b_i), where
φ(u) = u² for |u| ≤ M, and φ(u) = M(2|u| − M) for |u| > M,
with M > 0.
(d) Log-Chebyshev approximation: minimize max_{i=1,…,m} |log(a_i^T x) − log b_i|. We assume b ≻ 0. An equivalent convex form is
minimize t
subject to 1/t ≤ a_i^T x / b_i ≤ t, i = 1,…,m,
with variables x ∈ R^n and t ∈ R, and domain R^n × R++.

(e) Minimizing the sum of the largest k residuals:
minimize ∑_{i=1}^k |r|_[i]
subject to r = Ax − b,
where |r|[1] ≥ |r|[2] ≥ · · · ≥ |r|[m] are the numbers |r1|, |r2|, . . . , |rm| sorted in decreasing order. (For k = 1, this reduces to l∞-norm approximation; for k = m, it reduces to l1-norm approximation.) Hint. See exercise 5.19.
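As a quick numerical illustration of the equivalent convex form in part (d) of exercise 6.3 above, the following sketch (made-up data; the cvxpy package is an assumption of the illustration) solves it directly, using cvxpy's inv_pos atom for the 1/t term:

```python
# Hedged sketch of the log-Chebyshev convex form in 6.3(d).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n = 30, 5
A = rng.random((m, n)) + 0.1
b = rng.random(m) + 0.1                  # b > 0, as the exercise assumes
x = cp.Variable(n)
t = cp.Variable(pos=True)
ratio = cp.multiply(A @ x, 1.0 / b)      # the affine vector a_i^T x / b_i
prob = cp.Problem(cp.Minimize(t),
                  [ratio <= t, cp.inv_pos(t) <= ratio])
prob.solve()
print(prob.status, t.value)              # log-Chebyshev value is log(t)
```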
6.4 A differentiable approximation of l1-norm approximation. The function φ(u) = (u² + ǫ)^{1/2}, with parameter ǫ > 0, is sometimes used as a differentiable approximation of the absolute value function |u|. To approximately solve the l1-norm approximation problem
minimize ∥Ax − b∥₁, (6.26)
where A ∈ R^{m×n}, we solve instead the problem
minimize ∑_{i=1}^m φ(a_i^T x − b_i), (6.27)
where a_i^T is the ith row of A. We assume rank A = n.
Let p⋆ denote the optimal value of the l1-norm approximation problem (6.26). Let x̂ denote the optimal solution of the approximate problem (6.27), and let r̂ denote the associated residual, r̂ = Ax̂ − b.
(a) Show that p⋆ ≥ ∑_{i=1}^m r̂_i² / (r̂_i² + ǫ)^{1/2}.
(b) Show that
∥Ax̂ − b∥₁ ≤ p⋆ + ∑_{i=1}^m |r̂_i| (1 − |r̂_i|/(r̂_i² + ǫ)^{1/2}).
(By evaluating the righthand side after computing x̂, we obtain a bound on how suboptimal x̂ is for the l1-norm approximation problem.)
6.5 Minimum length approximation. Consider the problem
minimize length(x)
subject to ∥Ax − b∥ ≤ ǫ,
where length(x) = min{k | xi = 0 for i > k}. The problem variable is x ∈ Rn; the problem parameters are A ∈ Rm×n, b ∈ Rm, and ǫ > 0. In a regression context, we are asked to find the minimum number of columns of A, taken in order, that can approximate the vector b within ǫ.
Show that this is a quasiconvex optimization problem.
6.6 Duals of some penalty function approximation problems. Derive a Lagrange dual for the problem
minimize ∑_{i=1}^m φ(r_i)
subject to r = Ax − b,
for the following penalty functions φ : R → R. The variables are x ∈ R^n, r ∈ R^m.
(a) Deadzone-linear penalty (with deadzone width a = 1),
φ(u) = 0 for |u| ≤ 1, and φ(u) = |u| − 1 for |u| > 1.
(b) Huber penalty (with M = 1),
φ(u) = u² for |u| ≤ 1, and φ(u) = 2|u| − 1 for |u| > 1.
(c) Log-barrier (with limit a = 1),
φ(u) = −log(1 − u²), dom φ = (−1, 1).
(d) Relative deviation from one,
φ(u) = max{u, 1/u}, i.e., φ(u) = u for u ≥ 1 and φ(u) = 1/u for u ≤ 1,
with dom φ = R++.
Regularization and robust approximation
6.7 Bi-criterion optimization with Euclidean norms. We consider the bi-criterion optimization problem
minimize (w.r.t. R²₊) (∥Ax − b∥₂², ∥x∥₂²),
where A ∈ R^{m×n} has rank r, and b ∈ R^m. Show how to find the solution of each of the following problems from the singular value decomposition of A,
A = U diag(σ) V^T = ∑_{i=1}^r σ_i u_i v_i^T
(see §A.5.4).
(a) Tikhonov regularization: minimize ∥Ax − b∥₂² + δ∥x∥₂².
(b) Minimize ∥Ax − b∥₂² subject to ∥x∥₂² = γ.
(c) Maximize ∥Ax − b∥₂² subject to ∥x∥₂² = γ.
Here δ and γ are positive parameters.
Your results provide efficient methods for computing the optimal trade-off curve and the set of achievable values of the bi-criterion problem.
6.8 Formulate the following robust approximation problems as LPs, QPs, SOCPs, or SDPs. For each subproblem, consider the l1-, l2-, and the l∞-norms.
(a) Stochastic robust approximation with a finite set of parameter values, i.e., the sum-of-norms problem
minimize ∑_{i=1}^k p_i ∥A_i x − b∥
where p ≽ 0 and 1^T p = 1. (See §6.4.1.)
(b) Worst-case robust approximation with coefficient bounds:
minimize sup_{A∈A} ∥Ax − b∥
where
A = {A ∈ R^{m×n} | l_{ij} ≤ a_{ij} ≤ u_{ij}, i = 1,…,m, j = 1,…,n}.
Here the uncertainty set is described by giving upper and lower bounds for the components of A. We assume l_{ij} < u_{ij}.
(c) Worst-case robust approximation with polyhedral uncertainty:
minimize sup_{A∈A} ∥Ax − b∥
where
A = {[a_1 ··· a_m]^T | C_i a_i ≼ d_i, i = 1,…,m}.
The uncertainty is described by giving a polyhedron P_i = {a_i | C_i a_i ≼ d_i} of possible values for each row. The parameters C_i ∈ R^{p_i×n}, d_i ∈ R^{p_i}, i = 1,…,m, are given. We assume that the polyhedra P_i are nonempty and bounded.

Function fitting and interpolation

6.9 Minimax rational function fitting. Show that the following problem is quasiconvex:
minimize max_{i=1,…,k} |p(t_i)/q(t_i) − y_i|
where
p(t) = a_0 + a_1 t + a_2 t² + ··· + a_m t^m, q(t) = 1 + b_1 t + ··· + b_n t^n,
and the domain of the objective function is defined as
D = {(a, b) ∈ R^{m+1} × R^n | q(t) > 0, α ≤ t ≤ β}.
In this problem we fit a rational function p(t)/q(t) to given data, while constraining the denominator polynomial to be positive on the interval [α,β]. The optimization variables are the numerator and denominator coefficients ai, bi. The interpolation points ti ∈ [α, β], and desired function values yi, i = 1, . . . , k, are given.
6.10 Fitting data with a concave nonnegative nondecreasing quadratic function. We are given the data
x_1,…,x_N ∈ R^n, y_1,…,y_N ∈ R,
and wish to fit a quadratic function of the form
f(x) = (1/2)x^T Px + q^T x + r,
where P ∈ S^n, q ∈ R^n, and r ∈ R are the parameters in the model (and, therefore, the variables in the fitting problem).
Our model will be used only on the box B = {x ∈ R^n | l ≼ x ≼ u}. You can assume that l ≺ u, and that the given data points x_i are in this box. We will use the simple sum of squared errors objective,
∑_{i=1}^N (f(x_i) − y_i)²,
as the criterion for the fit. We also impose several constraints on the function f. First, it must be concave. Second, it must be nonnegative on B, i.e., f(z) ≥ 0 for all z ∈ B. Third, f must be nondecreasing on B, i.e., whenever z, z̃ ∈ B satisfy z ≼ z̃, we have f(z) ≤ f(z̃).
Show how to formulate this fitting problem as a convex problem. Simplify your formulation as much as you can.
6.11 Least-squares direction interpolation. Suppose F1,…,Fn : Rk → Rp, and we form the linear combination F : Rk → Rp,
F (u) = x1 F1 (u) + · · · + xn Fn (u),
where x is the variable in the interpolation problem.
In this problem we require that ∠(F(v_j), q_j) = 0, j = 1,…,m, where q_j are given vectors in R^p, which we assume satisfy ∥q_j∥₂ = 1. In other words, we require the direction of F to take on specified values at the points v_j. To ensure that F(v_j) is not zero (which makes the angle undefined), we impose the minimum length constraints ∥F(v_j)∥₂ ≥ ǫ, j = 1,…,m, where ǫ > 0 is given.
Show how to find x that minimizes ∥x∥2, and satisfies the direction (and minimum length) conditions above, using convex optimization.
6.12 Interpolation with monotone functions. A function f : R^k → R is monotone nondecreasing (with respect to R^k_+) if f(u) ≥ f(v) whenever u ≽ v.

(a) Show that there exists a monotone nondecreasing function f : R^k → R, that satisfies f(u_i) = y_i for i = 1,…,m, if and only if
y_i ≥ y_j whenever u_i ≽ u_j, i, j = 1,…,m.
(b) Show that there exists a convex monotone nondecreasing function f : R^k → R, with dom f = R^k, that satisfies f(u_i) = y_i for i = 1,…,m, if and only if there exist g_i ∈ R^k, i = 1,…,m, such that
g_i ≽ 0, i = 1,…,m,  y_j ≥ y_i + g_i^T (u_j − u_i), i, j = 1,…,m.
6.13 Interpolation with quasiconvex functions. Show that there exists a quasiconvex function f : R^k → R, that satisfies f(u_i) = y_i for i = 1,…,m, if and only if there exist g_i ∈ R^k, i = 1,…,m, such that
g_i^T (u_j − u_i) ≤ −1 whenever y_j < y_i, i, j = 1,…,m.
6.14 [Nes00] Interpolation with positive-real functions. Suppose z_1,…,z_n ∈ C are n distinct points with |z_i| > 1. We define K_np as the set of vectors y ∈ C^n for which there exists a function f : C → C that satisfies the following conditions.
• f is positive-real, which means it is analytic outside the unit circle (i.e., for |z| > 1), and its real part is nonnegative outside the unit circle (Rf(z) ≥ 0 for |z| > 1).
• f satisfies the interpolation conditions
f(z1) = y1, f(z2) = y2, …, f(zn) = yn.
If we denote the set of positive-real functions as F, then we can express Knp as Knp ={y∈Cn |∃f ∈F, yk =f(zk), k=1,…,n}.
(a) It can be shown that f is positive-real if and only if there exists a nondecreasing function ρ such that for all z with |z| > 1,
f(z) = i ℑf(∞) + ∫_0^{2π} (e^{iθ} + z^{−1})/(e^{iθ} − z^{−1}) dρ(θ),
where i = √−1 (see [KN77, page 389]). Use this representation to show that K_np is a closed convex cone.
(b) We will use the inner product ℜ(x^H y) between vectors x, y ∈ C^n, where x^H denotes the complex conjugate transpose of x. Show that the dual cone of K_np is given by
K_np^* = {x ∈ C^n | ℑ(1^T x) = 0, ℜ(∑_{l=1}^n x_l (e^{−iθ} + z̄_l^{−1})/(e^{−iθ} − z̄_l^{−1})) ≥ 0 for all θ ∈ [0, 2π]}.
(c) Show that
K_np^* = {x ∈ C^n | ∃Q ∈ H^n_+, x_l = ∑_{k=1}^n Q_{kl}/(1 − z_k^{−1} z̄_l^{−1}), l = 1,…,n},
where H^n_+ denotes the set of positive semidefinite Hermitian matrices of size n × n. Use the following result (known as the Riesz-Fejér theorem; see [KN77, page 60]). A function of the form
∑_{k=0}^n (y_k e^{−ikθ} + ȳ_k e^{ikθ})
is nonnegative for all θ if and only if there exist a_0,…,a_n ∈ C such that
∑_{k=0}^n (y_k e^{−ikθ} + ȳ_k e^{ikθ}) = |∑_{k=0}^n a_k e^{ikθ}|².
(d) Show that K_np = {y ∈ C^n | P(y) ≽ 0}, where P(y) ∈ H^n is defined as
P(y)_{kl} = (y_k + ȳ_l)/(1 − z_k^{−1} z̄_l^{−1}), k, l = 1,…,n.
The matrix P(y) is called the Nevanlinna-Pick matrix associated with the points z_k, y_k.
Hint. As we noted in part (a), K_np is a closed convex cone, so K_np = K_np^{**}.
(e) As an application, pose the following problem as a convex optimization problem:
minimize ∑_{k=1}^n |f(z_k) − w_k|²
subject to f ∈ F.
The problem data are n points z_k with |z_k| > 1 and n complex numbers w_1,…,w_n. We optimize over all positive-real functions f.

Chapter 7
Statistical estimation
7.1 Parametric distribution estimation

7.1.1 Maximum likelihood estimation
We consider a family of probability distributions on Rm, indexed by a vector x ∈ Rn, with densities px(·). When considered as a function of x, for fixed y ∈ Rm, the function px(y) is called the likelihood function. It is more convenient to work with its logarithm, which is called the log-likelihood function, and denoted l:
l(x) = log px(y).
There are often constraints on the values of the parameter x, which can repre- sent prior knowledge about x, or the domain of the likelihood function. These constraints can be explicitly given, or incorporated into the likelihood function by assigning px(y) = 0 (for all y) whenever x does not satisfy the prior information constraints. (Thus, the log-likelihood function can be assigned the value −∞ for parameters x that violate the prior information constraints.)
Now consider the problem of estimating the value of the parameter x, based on observing one sample y from the distribution. A widely used method, called maximum likelihood (ML) estimation, is to estimate x as
x̂ml = argmax_x p_x(y) = argmax_x l(x),
i.e., to choose as our estimate a value of the parameter that maximizes the like- lihood (or log-likelihood) function for the observed value of y. If we have prior information about x, such as x ∈ C ⊆ Rn, we can add the constraint x ∈ C explicitly, or impose it implicitly, by redefining px(y) to be zero for x ̸∈ C.
The problem of finding a maximum likelihood estimate of the parameter vector x can be expressed as
maximize l(x) = log p_x(y)
subject to x ∈ C,  (7.1)
where x ∈ C gives the prior information or other constraints on the parameter vector x. In this optimization problem, the vector x ∈ Rn (which is the parameter

in the probability density) is the variable, and the vector y ∈ Rm (which is the observed sample) is a problem parameter.
The maximum likelihood estimation problem (7.1) is a convex optimization problem if the log-likelihood function l is concave for each value of y, and the set C can be described by a set of linear equality and convex inequality constraints, a situation which occurs in many estimation problems. For these problems we can compute an ML estimate using convex optimization.
Linear measurements with IID noise
We consider a linear measurement model,
yi =aTi x+vi, i=1,…,m,
where x ∈ R^n is a vector of parameters to be estimated, y_i ∈ R are the measured or observed quantities, and v_i are the measurement errors or noise. We assume that v_i are independent, identically distributed (IID), with density p on R. The likelihood function is then
p_x(y) = ∏_{i=1}^m p(y_i − a_i^T x),
so the log-likelihood function is
l(x) = log p_x(y) = ∑_{i=1}^m log p(y_i − a_i^T x).
The ML estimate is any optimal point for the problem
maximize ∑_{i=1}^m log p(y_i − a_i^T x),  (7.2)
with variable x. If the density p is log-concave, this problem is convex, and has the form of a penalty approximation problem ((6.2), page 294), with penalty function −log p.

Example 7.1 ML estimation for some common noise densities.
• Gaussian noise. When v_i are Gaussian with zero mean and variance σ², the density is p(z) = (2πσ²)^{−1/2} e^{−z²/2σ²}, and the log-likelihood function is
l(x) = −(m/2) log(2πσ²) − (1/(2σ²)) ∥Ax − y∥₂²,
where A is the matrix with rows a_1^T,…,a_m^T. Therefore the ML estimate of x is x̂ml = argmin_x ∥Ax − y∥₂², the solution of a least-squares approximation problem.
• Laplacian noise. When v_i are Laplacian, i.e., have density p(z) = (1/2a)e^{−|z|/a} (where a > 0), the ML estimate is x̂ml = argmin_x ∥Ax − y∥₁, the solution of the l1-norm approximation problem.
• Uniform noise. When v_i are uniformly distributed on [−a, a], we have p(z) = 1/(2a) on [−a, a], and an ML estimate is any x satisfying ∥Ax − y∥∞ ≤ a.
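A small numerical illustration of the first two cases, under assumed data (numpy for the least-squares solve; the cvxpy package, an assumption of this sketch, for the l1-norm problem):

```python
# Hedged sketch: Gaussian-noise ML is least squares; Laplacian-noise ML
# is l1-norm approximation.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
m, n = 100, 4
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + rng.laplace(scale=0.5, size=m)   # Laplacian noise

x_gauss = np.linalg.lstsq(A, y, rcond=None)[0]    # argmin ||Ax - y||_2^2
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(A @ x - y, 1))).solve()
print("Gaussian-noise ML (least squares):", np.round(x_gauss, 3))
print("Laplacian-noise ML (l1 approx):  ", np.round(x.value, 3))
```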

ML interpretation of penalty function approximation
Conversely, we can interpret any penalty function approximation problem
minimize ∑_{i=1}^m φ(b_i − a_i^T x)
as a maximum likelihood estimation problem, with noise density
p(z) = e^{−φ(z)} / ∫ e^{−φ(u)} du,
and measurements b. This observation gives a statistical interpretation of the penalty function approximation problem. Suppose, for example, that the penalty function φ grows very rapidly for large values, which means that we attach a very large cost or penalty to large residuals. The corresponding noise density function p will have very small tails, and the ML estimator will avoid (if possible) estimates with any large residuals because these correspond to very unlikely events.
We can also understand the robustness of l1-norm approximation to large errors in terms of maximum likelihood estimation. We interpret l1-norm approximation as maximum likelihood estimation with a noise density that is Laplacian; l2-norm approximation is maximum likelihood estimation with a Gaussian noise density. The Laplacian density has larger tails than the Gaussian, i.e., the probability of a very large vi is far larger with a Laplacian than a Gaussian density. As a result, the associated maximum likelihood method expects to see greater numbers of large residuals.
Counting problems with Poisson distribution
In a wide variety of problems the random variable y is nonnegative integer valued, with a Poisson distribution with mean μ > 0:
prob(y = k) = e^{−μ} μ^k / k!.
Often y represents the count or number of events (such as photon arrivals, traffic accidents, etc.) of a Poisson process over some period of time.
In a simple statistical model, the mean μ is modeled as an affine function of a
vector u ∈ Rn:
μ = aT u + b.
Here u is called the vector of explanatory variables, and the vector a ∈ Rn and number b ∈ R are called the model parameters. For example, if y is the number of traffic accidents in some region over some period, u1 might be the total traffic flow through the region during the period, u2 the rainfall in the region during the period, and so on.
We are given a number of observations which consist of pairs (ui,yi), i = 1, . . . , m, where yi is the observed value of y for which the value of the explanatory variable is ui ∈ Rn. Our job is to find a maximum likelihood estimate of the model parameters a ∈ Rn and b ∈ R from these data.

The likelihood function has the form
∏_{i=1}^m (a^T u_i + b)^{y_i} exp(−(a^T u_i + b)) / y_i!,
so the log-likelihood function is
l(a, b) = ∑_{i=1}^m (y_i log(a^T u_i + b) − (a^T u_i + b) − log(y_i!)).
We can find an ML estimate of a and b by solving the convex optimization problem
maximize ∑_{i=1}^m (y_i log(a^T u_i + b) − (a^T u_i + b)),
where the variables are a and b.
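A hedged sketch of this concave maximization (assumed data; the cvxpy package is an assumption of the illustration):

```python
# Poisson ML: maximize sum_i ( y_i log(a^T u_i + b) - (a^T u_i + b) ).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
m, n = 200, 3
U = rng.random((m, n))
a_true, b_true = np.array([1.0, 2.0, 0.5]), 1.0
y = rng.poisson(U @ a_true + b_true)             # observed counts

a, b = cp.Variable(n), cp.Variable()
mu = U @ a + b                                   # affine mean, must be > 0
ll = cp.sum(cp.multiply(y, cp.log(mu)) - mu)     # concave log-likelihood
cp.Problem(cp.Maximize(ll)).solve()
print(a.value, b.value)
```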
Logistic regression

We consider a random variable y ∈ {0, 1}, with
prob(y = 1) = p, prob(y = 0) = 1 − p,
where p ∈ [0,1], and is assumed to depend on a vector of explanatory variables u ∈ Rn. For example, y = 1 might mean that an individual in a population acquires a certain disease. The probability of acquiring the disease is p, which is modeled as a function of some explanatory variables u, which might represent weight, age, height, blood pressure, and other medically relevant variables.
The logistic model has the form
p = exp(a^T u + b) / (1 + exp(a^T u + b)),  (7.3)
where a ∈ Rn and b ∈ R are the model parameters that determine how the probability p varies as a function of the explanatory variable u.
Now suppose we are given some data consisting of a set of values of the explana- tory variables u1, . . . , um ∈ Rn along with the corresponding outcomes y1, . . . , ym ∈ {0, 1}. Our job is to find a maximum likelihood estimate of the model parameters a ∈ Rn and b ∈ R. Finding an ML estimate of a and b is sometimes called logistic regression.
We can re-order the data so that for u_1,…,u_q, the outcome is y = 1, and for u_{q+1},…,u_m the outcome is y = 0. The likelihood function then has the form
∏_{i=1}^q p_i ∏_{i=q+1}^m (1 − p_i),
where p_i is given by the logistic model with explanatory variable u_i. The log-likelihood function has the form
l(a, b) = ∑_{i=1}^q log p_i + ∑_{i=q+1}^m log(1 − p_i)
= ∑_{i=1}^q log( exp(a^T u_i + b)/(1 + exp(a^T u_i + b)) ) + ∑_{i=q+1}^m log( 1/(1 + exp(a^T u_i + b)) )
= ∑_{i=1}^q (a^T u_i + b) − ∑_{i=1}^m log(1 + exp(a^T u_i + b)).

Figure 7.1 Logistic regression. The circles show 50 points (u_i, y_i), where u_i ∈ R is the explanatory variable, and y_i ∈ {0, 1} is the outcome. The data suggest that for u < 5 or so, the outcome is more likely to be y = 0, while for u > 5 or so, the outcome is more likely to be y = 1. The data also suggest that for u < 2 or so, the outcome is very likely to be y = 0, and for u > 8 or so, the outcome is very likely to be y = 1. The solid curve shows prob(y = 1) = exp(au + b)/(1 + exp(au + b)) for the maximum likelihood parameters a, b. This maximum likelihood model is consistent with our informal observations about the data set.
Since l is a concave function of a and b, the logistic regression problem can be solved
as a convex optimization problem. Figure 7.1 shows an example with u ∈ R.
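A hedged sketch of this convex formulation (assumed data; cvxpy, whose logistic atom computes log(1 + exp(z)), is an assumption of the illustration):

```python
# Logistic regression via the concave log-likelihood derived above.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
m = 50
u = 10 * rng.random(m)                       # scalar explanatory variable
p_true = 1 / (1 + np.exp(-(u - 5)))
y = (rng.random(m) < p_true).astype(float)   # outcomes in {0, 1}

a, b = cp.Variable(), cp.Variable()
z = a * u + b
ll = y @ z - cp.sum(cp.logistic(z))          # y @ z = sum of z_i over y_i = 1
cp.Problem(cp.Maximize(ll)).solve()
print(a.value, b.value)
```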
Covariance estimation for Gaussian variables

Suppose y ∈ R^n is a Gaussian random variable with zero mean and covariance matrix R = E yy^T, so its density is
pR(y) = (2π)−n/2 det(R)−1/2 exp(−yT R−1y/2),
where R ∈ S^n_{++}. We want to estimate the covariance matrix R based on N independent samples y_1,…,y_N ∈ R^n drawn from the distribution, and using prior knowledge about R.
The log-likelihood function has the form
l(R) = log p_R(y_1,…,y_N)
= −(Nn/2) log(2π) − (N/2) log det R − (1/2) ∑_{k=1}^N y_k^T R^{−1} y_k
= −(Nn/2) log(2π) − (N/2) log det R − (N/2) tr(R^{−1}Y),
where
Y = (1/N) ∑_{k=1}^N y_k y_k^T
is the sample covariance of y1, . . . , yN . This log-likelihood function is not a concave function of R (although it is concave on a subset of its domain Sn++; see exercise 7.4), but a change of variable yields a concave log-likelihood function. Let S denote the inverse of the covariance matrix, S = R−1 (which is called the information matrix ). Using S in place of R as a new parameter, the log-likelihood function has the form
l(S) = −(Nn/2) log(2π) + (N/2) log det S − (N/2) tr(SY ), which is a concave function of S.
Therefore the ML estimate of S (hence, R) is found by solving the problem
maximize log det S − tr(SY)
subject to S ∈ S,  (7.4)
where S is our prior knowledge of S = R^{−1}. (We also have the implicit constraint that S ∈ S^n_{++}.) Since the objective function is concave, this is a convex problem if the set S can be described by a set of linear equality and convex inequality constraints.
First we examine the case in which no prior assumptions are made on R (hence, S), other than R ≻ 0. In this case the problem (7.4) can be solved analytically. The gradient of the objective is S−1 −Y , so the optimal S satisfies S−1 = Y if Y ∈ Sn++. (If Y ̸∈ Sn++, the log-likelihood function is unbounded above.) Therefore, when we have no prior assumptions about R, the maximum likelihood estimate of the covariance is, simply, the sample covariance: Rˆml = Y .
Now we consider some examples of constraints on R that can be expressed as convex constraints on the information matrix S. We can handle lower and upper (matrix) bounds on R, of the form
L ≼ R ≼ U,
where L and U are symmetric and positive definite, as
U^{−1} ≼ R^{−1} ≼ L^{−1}.
A condition number constraint on R,
λmax(R) ≤ κmax λmin(R),
can be expressed as
λmax(S) ≤ κmax λmin(S).

This is equivalent to the existence of u > 0 such that uI ≼ S ≼ κmaxuI. We can therefore solve the ML problem, with the condition number constraint on R, by solving the convex problem
maximize log det S − tr(SY)
subject to uI ≼ S ≼ κmax uI,  (7.5)
where the variables are S ∈ S^n and u ∈ R.
As another example, suppose we are given bounds on the variance of some linear
functions of the underlying random vector y,
E(cTi y)2 ≤ αi, i = 1,…,K.
These prior assumptions can be expressed as
E(cTi y)2 = cTi Rci = cTi S−1ci ≤ αi, i = 1,…,K.
Since cTi S−1ci is a convex function of S (provided S ≻ 0, which holds here), these bounds can be imposed in the ML problem.
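A hedged sketch of the condition-number-constrained ML problem (7.5), with assumed data (cvxpy is an assumption of the illustration):

```python
# Covariance ML with a condition number bound on R = S^{-1}.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
n, N, kappa = 4, 100, 10.0
Ys = rng.standard_normal((N, n))
Y = Ys.T @ Ys / N                       # sample covariance

S = cp.Variable((n, n), symmetric=True)
u = cp.Variable(nonneg=True)
obj = cp.Maximize(cp.log_det(S) - cp.trace(S @ Y))
cons = [S >> u * np.eye(n), S << kappa * u * np.eye(n)]
cp.Problem(obj, cons).solve()
R_hat = np.linalg.inv(S.value)
print(np.linalg.cond(R_hat))            # should be <= kappa
```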
7.1.2 Maximum a posteriori probability estimation
Maximum a posteriori probability (MAP) estimation can be considered a Bayesian version of maximum likelihood estimation, with a prior probability density on the underlying parameter x. We assume that x (the vector to be estimated) and y (the observation) are random variables with a joint probability density p(x,y). This is in contrast to the statistical estimation setup, where x is a parameter, not a random variable.
The prior density of x is given by
px(x)=􏰑 p(x,y)dy.
This density represents our prior information about what the values of the vector x might be, before we observe the vector y. Similarly, the prior density of y is given by 􏰑
py(y) = p(x,y) dx.
This density represents the prior information about what the measurement or ob- servation vector y will be.
The conditional density of y, given x, is given by
p_{y|x}(x, y) = p(x, y) / p_x(x).
In the MAP estimation method, py|x plays the role of the parameter dependent density px in the maximum likelihood estimation setup. The conditional density of x, given y, is given by
p_{x|y}(x, y) = p(x, y) / p_y(y) = p_{y|x}(x, y) p_x(x) / p_y(y).

When we substitute the observed value y into px|y, we obtain the posterior density of x. It represents our knowledge of x after the observation.
In the MAP estimation method, our estimate of x, given the observation y, is given by
x̂map = argmax_x p_{x|y}(x, y) = argmax_x p_{y|x}(x, y) p_x(x) = argmax_x p(x, y).
In other words, we take as estimate of x the value that maximizes the conditional density of x, given the observed value of y. The only difference between this estimate and the maximum likelihood estimate is the second term, px(x), appearing here. This term can be interpreted as taking our prior knowledge of x into account. Note that if the prior density of x is uniform over a set C, then finding the MAP estimate is the same as maximizing the likelihood function subject to x ∈ C, which is the ML estimation problem (7.1).
Taking logarithms, we can express the MAP estimate as
x̂map = argmax_x (log p_{y|x}(x, y) + log p_x(x)).  (7.6)
The first term is essentially the same as the log-likelihood function; the second term penalizes choices of x that are unlikely, according to the prior density (i.e., x with px(x) small).
Brushing aside the philosophical differences in setup, the only difference between finding the MAP estimate (via (7.6)) and the ML estimate (via (7.1)) is the presence of an extra term in the optimization problem, associated with the prior density of x. Therefore, for any maximum likelihood estimation problem with concave log- likelihood function, we can add a prior density for x that is log-concave, and the resulting MAP estimation problem will be convex.
Linear measurements with IID noise
Suppose that x ∈ Rn and y ∈ Rm are related by
yi =aTi x+vi, i=1,…,m,
where v_i are IID with density p_v on R, and x has prior density p_x on R^n. The joint density of x and y is then
p(x, y) = p_x(x) ∏_{i=1}^m p_v(y_i − a_i^T x),
and the MAP estimate can be found by solving the optimization problem
maximize log p_x(x) + ∑_{i=1}^m log p_v(y_i − a_i^T x).  (7.7)
If px and pv are log-concave, this problem is convex. The only difference between the MAP estimation problem (7.7) and the associated ML estimation problem (7.2) is the extra term log px(x).

For example, if vi are uniform on [−a,a], and the prior distribution of x is Gaussian with mean x ̄ and covariance Σ, the MAP estimate is found by solving the QP
minimize (x − x̄)^T Σ^{−1} (x − x̄)
subject to ∥Ax − y∥∞ ≤ a,
with variable x.
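A hedged sketch of this QP with assumed data (cvxpy is an assumption of the illustration):

```python
# MAP with uniform noise on [-a, a] and Gaussian prior N(xbar, Sigma).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
m, n, a = 30, 5, 0.2
A = rng.standard_normal((m, n))
xbar, Sigma = np.zeros(n), np.eye(n)
x_true = rng.standard_normal(n)
y = A @ x_true + rng.uniform(-a, a, size=m)

x = cp.Variable(n)
obj = cp.Minimize(cp.quad_form(x - xbar, np.linalg.inv(Sigma)))
cp.Problem(obj, [cp.norm(A @ x - y, "inf") <= a]).solve()
print(x.value)
```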
MAP with perfect linear measurements
Suppose x ∈ Rn is a vector of parameters to be estimated, with prior density px. We have m perfect (noise free, deterministic) linear measurements, given by y = Ax. In other words, the conditional distribution of y, given x, is a point mass with value one at the point Ax. The MAP estimate can be found by solving the problem
maximize log p_x(x)
subject to Ax = y.
If px is log-concave, this is a convex problem.
If under the prior distribution, the parameters xi are IID with density p on R,
then the MAP estimation problem has the form
maximize ∑_{i=1}^n log p(x_i)
subject to Ax = y,
which is a least-penalty problem ((6.6), page 304), with penalty function φ(u) = − log p(u).
Conversely, we can interpret any least-penalty problem, minimize φ(x1) + · · · + φ(xn)
subject to Ax = b
as a MAP estimation problem, with m perfect linear measurements (i.e., Ax = b)
and x_i IID with density
p(z) = e^{−φ(z)} / ∫ e^{−φ(u)} du.
7.2 Nonparametric distribution estimation
We consider a random variable X with values in the finite set {α1, . . . , αn} ⊆ R. (We take the values to be in R for simplicity; the same ideas can be applied when the values are in Rk, for example.) The distribution of X is characterized by p ∈ Rn, with prob(X = αk) = pk. Clearly, p satisfies p ≽ 0, 1T p = 1. Conversely, if p ∈ Rn satisfies p ≽ 0, 1T p = 1, then it defines a probability distribution for a random variable X, defined as prob(X = αk) = pk. Thus, the probability simplex
{p∈Rn |p≽0, 1Tp=1}

is in one-to-one correspondence with all possible probability distributions for a random variable X taking values in {α1 , . . . , αn }.
In this section we discuss methods used to estimate the distribution p based on a combination of prior information and, possibly, observations and measurements.
Prior information
Many types of prior information about p can be expressed in terms of linear equality constraints or inequalities. If f : R → R is any function, then
E f(X) = ∑_{i=1}^n p_i f(α_i)
is a linear function of p. As a special case, if C ⊆ R, then prob(X ∈ C) is a linear function of p:
prob(X ∈ C) = c^T p, where c_i = 1 if α_i ∈ C, and c_i = 0 if α_i ∉ C.
It follows that known expected values of certain functions (e.g., moments) or known probabilities of certain sets can be incorporated as linear equality constraints on p ∈ Rn. Inequalities on expected values or probabilities can be expressed as linear inequalities on p ∈ Rn.
For example, suppose we know that X has mean E X = α, second moment E X² = β, and prob(X ≥ 0) ≤ 0.3. This prior information can be expressed as
E X = ∑_{i=1}^n α_i p_i = α,  E X² = ∑_{i=1}^n α_i² p_i = β,  ∑_{α_i ≥ 0} p_i ≤ 0.3,
which are two linear equalities and one linear inequality in p.
We can also include some prior constraints that involve nonlinear functions of
p. As an example, the variance of X is given by
var(X) = E X² − (E X)² = ∑_{i=1}^n α_i² p_i − (∑_{i=1}^n α_i p_i)².
The first term is a linear function of p and the second term is concave quadratic in p, so the variance of X is a concave function of p. It follows that a lower bound on the variance of X can be expressed as a convex quadratic inequality on p.
As another example, suppose A and B are subsets of R, and consider the conditional probability of A given B:
prob(X ∈A|X ∈B)= prob(X ∈A∩B). prob(X ∈ B)
This function is linear-fractional in p ∈ R^n: it can be expressed as
prob(X ∈ A | X ∈ B) = c^T p / d^T p,
where c_i = 1 if α_i ∈ A ∩ B and c_i = 0 otherwise, and d_i = 1 if α_i ∈ B and d_i = 0 otherwise.

Therefore we can express the prior constraints l≤prob(X ∈A|X ∈B)≤u
as the linear inequality constraints on p
ldT p ≤ cT p ≤ udT p.
Several other types of prior information can be expressed in terms of nonlinear convex inequalities. For example, the entropy of X, given by
−∑_{i=1}^n p_i log p_i,
is a concave function of p, so we can impose a minimum value of entropy as a convex inequality on p. If q represents another distribution, i.e., q ≽ 0, 1^T q = 1, then the Kullback-Leibler divergence between the distribution q and the distribution p is given by
∑_{i=1}^n p_i log(p_i/q_i),
which is convex in p (and q as well; see example 3.19, page 90). It follows that we can impose a maximum Kullback-Leibler divergence between p and a given distribution q, as a convex inequality on p.
In the next few paragraphs we express the prior information about the distribu- tion p as p ∈ P. We assume that P can be described by a set of linear equalities and convex inequalities. We include in the prior information P the basic constraints p ≽ 0, 1T p = 1.
Bounding probabilities and expected values
Given prior information about the distribution, say p ∈ P, we can compute upper or lower bounds on the expected value of a function, or probability of a set. For example to determine a lower bound on Ef(X) over all distributions that satisfy the prior information p ∈ P, we solve the convex problem
minimize ∑_{i=1}^n f(α_i) p_i
subject to p ∈ P.
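A hedged sketch of such a bound, with an assumed polyhedral prior set (cvxpy is an assumption of the illustration):

```python
# Lower bound on E f(X) over all distributions in a prior set P.
import numpy as np
import cvxpy as cp

n = 100
alpha = np.linspace(-1, 1, n)
f = alpha**3                                  # function of interest
p = cp.Variable(n, nonneg=True)
P = [cp.sum(p) == 1,
     alpha @ p <= 0.1, alpha @ p >= -0.1,     # bounds on E X
     (alpha**2) @ p <= 0.6]                   # bound on E X^2
cp.Problem(cp.Minimize(f @ p), P).solve()
print("lower bound on E f(X):", f @ p.value)
```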
Maximum likelihood estimation
We can use maximum likelihood estimation to estimate p based on observations from the distribution. Suppose we observe N independent samples x_1,…,x_N from the distribution. Let k_i denote the number of these samples with value α_i, so that k_1 + ··· + k_n = N, the total number of observed samples. The log-likelihood function is then
l(p) = ∑_{i=1}^n k_i log p_i,

which is a concave function of p. The maximum likelihood estimate of p can be found by solving the convex problem
maximize l(p) = ∑_{i=1}^n k_i log p_i
subject to p ∈ P,
with variable p.

Maximum entropy
The maximum entropy distribution consistent with the prior assumptions can be found by solving the convex problem
minimize ∑_{i=1}^n p_i log p_i
subject to p ∈ P.
Enthusiasts describe the maximum entropy distribution as the most equivocal or most random, among those consistent with the prior information.
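A hedged sketch of the maximum entropy problem over the same assumed prior set (cvxpy's entr atom computes −p log p; the package is an assumption of the illustration):

```python
# Maximum entropy distribution consistent with the prior constraints.
import numpy as np
import cvxpy as cp

n = 100
alpha = np.linspace(-1, 1, n)
p = cp.Variable(n, nonneg=True)
P = [cp.sum(p) == 1, cp.abs(alpha @ p) <= 0.1, (alpha**2) @ p <= 0.6]
cp.Problem(cp.Maximize(cp.sum(cp.entr(p))), P).solve()
print(p.value[:5])
```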
Minimum Kullback-Leibler divergence
We can find the distribution p that has minimum Kullback-Leibler divergence from a given prior distribution q, among those consistent with prior information, by solving the convex problem
minimize ∑_{i=1}^n p_i log(p_i/q_i)
subject to p ∈ P.
Note that when the prior distribution is the uniform distribution, i.e., q = (1/n)1, this problem reduces to the maximum entropy problem.
Example 7.2 We consider a probability distribution on 100 equidistant points αi in the interval [−1, 1]. We impose the following prior assumptions:
E X ∈ [−0.1, 0.1], E X² ∈ [0.5, 0.6], E(3X³ − 2X) ∈ [−0.3, −0.2], prob(X < 0) ∈ [0.3, 0.4].  (7.8)
Along with the constraints 1^T p = 1, p ≽ 0, these constraints describe a polyhedron of probability distributions.
Figure 7.2 shows the maximum entropy distribution that satisfies these constraints. The maximum entropy distribution satisfies
E X = 0.056, E X² = 0.5, E(3X³ − 2X) = −0.2, prob(X < 0) = 0.4.
To illustrate bounding probabilities, we compute upper and lower bounds on the cumulative distribution prob(X ≤ α_i), for i = 1,…,100. For each value of i, we solve two LPs: one that maximizes prob(X ≤ α_i), and one that minimizes prob(X ≤ α_i), over all distributions consistent with the prior assumptions (7.8). The results are shown in figure 7.3. The upper and lower curves show the upper and lower bounds, respectively; the middle curve shows the cumulative distribution of the maximum entropy distribution.

Figure 7.2 Maximum entropy distribution that satisfies the constraints (7.8).

Example 7.3 Bounding risk probability with known marginal distributions. Suppose X and Y are two random variables that give the return on two investments. We assume that X takes values in {α_1,…,α_n} ⊆ R and Y takes values in {β_1,…,β_m} ⊆ R, with p_{ij} = prob(X = α_i, Y = β_j). The marginal distributions of the two returns X and Y are known, i.e.,
∑_{j=1}^m p_{ij} = r_i, i = 1,…,n,  ∑_{i=1}^n p_{ij} = q_j, j = 1,…,m,  (7.9)
but otherwise nothing is known about the joint distribution p. This defines a polyhedron of joint distributions consistent with the given marginals.
Now suppose we make both investments, so our total return is the random variable X + Y. We are interested in computing an upper bound on the probability of some level of loss, or low return, i.e., prob(X + Y < γ). We can compute a tight upper bound on this probability by solving the LP
maximize ∑{p_{ij} | α_i + β_j < γ}
subject to (7.9), p_{ij} ≥ 0, i = 1,…,n, j = 1,…,m.
The optimal value of this LP is the maximum probability of loss. The optimal solution p⋆ is the joint distribution, consistent with the given marginal distributions, that maximizes the probability of the loss.
The same method can be applied to a derivative of the two investments. Let R(X, Y) be the return of the derivative, where R : R² → R. We can compute sharp lower and upper bounds on prob(R < γ) by solving a similar LP, with objective function
∑{p_{ij} | R(α_i, β_j) < γ},
which we can minimize and maximize.

Figure 7.3 The top and bottom curves show the maximum and minimum possible values of the cumulative distribution function, prob(X ≤ α_i), over all distributions that satisfy (7.8). The middle curve is the cumulative distribution of the maximum entropy distribution that satisfies (7.8).
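A hedged sketch of the loss-probability LP of example 7.3, with assumed data (cvxpy is an assumption of the illustration):

```python
# Worst-case prob(X + Y < gamma) over joints with given marginals.
import numpy as np
import cvxpy as cp

n, m, gamma = 10, 10, -0.5
alpha = np.linspace(-1, 1, n)
beta = np.linspace(-1, 1, m)
r = np.full(n, 1.0 / n)                       # marginal of X
q = np.full(m, 1.0 / m)                       # marginal of Y
mask = (alpha[:, None] + beta[None, :] < gamma).astype(float)

p = cp.Variable((n, m), nonneg=True)
cons = [cp.sum(p, axis=1) == r, cp.sum(p, axis=0) == q]
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(mask, p))), cons)
prob.solve()
print("max prob of loss:", prob.value)
```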
7.3 Optimal detector design and hypothesis testing

Suppose X is a random variable with values in {1,…,n}, with a distribution that depends on a parameter θ ∈ {1,…,m}. The distributions of X, for the m possible values of θ, can be represented by a matrix P ∈ R^{n×m}, with elements
p_{kj} = prob(X = k | θ = j).
The jth column of P gives the probability distribution associated with the parameter value θ = j. We consider the problem of estimating θ, based on an observed sample of X. In other words, the sample X is generated from one of the m possible distributions, and we are to guess which one.
The m values of θ are called hypotheses, and guessing which hypothesis is correct (i.e., which distribution generated the observed sample X) is called hypothesis testing. In many cases one of the hypotheses corresponds to some normal situation, and each of the other hypotheses corresponds to some abnormal event. In this case hypothesis testing can be interpreted as observing a value of X, and then guessing whether or not an abnormal event has occurred, and if so, which one. For this reason hypothesis testing is also called detection. In most cases there is no significance to the ordering of the hypotheses; they are simply m different hypotheses, arbitrarily labeled θ = 1,…,m. If θ̂ = θ, where θ̂ denotes the estimate of θ, then we have correctly guessed the parameter value θ. If θ̂ ≠ θ, then we have (incorrectly) guessed the parameter value θ; we have mistaken θ̂ for θ. In other cases, there is significance in the ordering of the hypotheses. In this case, an event such as θ̂ > θ, i.e., the event that we overestimate θ, is meaningful.
It is also possible to parametrize θ by values other than {1, . . . , m}, say as θ ∈ {θ1, . . . , θm}, where θi are (distinct) values. These values could be real numbers, or vectors, for example, specifying the mean and variance of the kth distribution. In this case, a quantity such as ∥θˆ−θ∥, which is the norm of the parameter estimation error, is meaningful.
7.3.1 Deterministic and randomized detectors
A (deterministic) estimator or detector is a function ψ from {1, . . . , n} (the set of possible observed values) into {1, . . . , m} (the set of hypotheses). If X is observed to have value k, then our guess for the value of θ is θˆ = ψ(k). One obvious deterministic detector is the maximum likelihood detector, given by
θ̂ = ψml(k) = argmax_j p_{kj}.  (7.10)
When we observe the value X = k, the maximum likelihood estimate of θ is a value that maximizes the probability of observing X = k, over the set of possible distributions.
We will consider a generalization of the deterministic detector, in which the estimate of θ, given an observed value of X, is random. A randomized detector of θ is a random variable θˆ ∈ {1, . . . , m}, with a distribution that depends on the observed value of X. A randomized detector can be defined in terms of a matrix T ∈ Rm×n with elements
tik =prob(θˆ=i|X=k).
The interpretation is as follows: if we observe X = k, then the detector gives θˆ = i with probability tik . The kth column of T , which we will denote tk , gives the probability distribution of θˆ, when we observe X = k. If each column of T is a unit vector, then the randomized detector is a deterministic detector, i.e., θˆ is a (deterministic) function of the observed value of X.
At first glance, it seems that intentionally introducing additional randomiza- tion into the estimation or detection process can only make the estimator worse. But we will see below examples in which a randomized detector outperforms all deterministic estimators.
We are interested in designing the matrix T that defines the randomized detec- tor. Obviously the columns tk of T must satisfy the (linear equality and inequality) constraints
tk ≽0, 1Ttk =1. (7.11)

7.3.2 Detection probability matrix
For the randomized detector defined by the matrix T, we define the detection probability matrix as D = T P . We have
Dij =(TP)ij =prob(θˆ=i|θ=j),
so Dij is the probability of guessing θˆ = i, when in fact θ = j. The m×m detection probability matrix D characterizes the performance of the randomized detector defined by T . The diagonal entry Dii is the probability of guessing θˆ = i when θ = i, i.e., the probability of correctly detecting that θ = i. The off-diagonal entry Dij (with i ̸= j) is the probability of mistaking θ = i for θ = j, i.e., the probability that our guess is θˆ = i, when in fact θ = j. If D = I, the detector is perfect: no matter what the parameter θ is, we correctly guess θˆ = θ.
The diagonal entries of D, arranged in a vector, are called the detection proba- bilities, and denoted Pd:
Pid =Dii =prob(θˆ=i|θ=i).
The error probabilities are the complements, and are denoted Pe:
Pie =1−Dii =prob(θˆ̸=i|θ=i).
Since the columns of the detection probability matrix D add up to one, we can
express the error probabilities as
P ie = 􏰊 D j i .
j ̸=i
7.3.3 Optimal detector design
In this section we show that a wide variety of objectives for detector design are linear, affine, or convex piecewise-linear functions of D, and therefore also of T (which is the optimization variable). Similarly, a variety of constraints for detector design can be expressed in terms of linear inequalities in D. It follows that a wide variety of optimal detector design problems can be expressed as LPs. We will see in §7.3.4 that some of these LPs have simple solutions; in this section we simply formulate the problem.
Limits on errors and detection probabilities
We can impose a lower bound on the probability of correctly detecting the jth hypothesis,
P_j^d = D_{jj} ≥ L_j,
which is a linear inequality in D (hence, T). Similarly, we can impose a maximum allowable probability for mistaking θ = i for θ = j:
D_{ij} ≤ U_{ij},
which are also linear constraints on T. We can take any of the detection prob- abilities as an objective to be maximized, or any of the error probabilities as an objective to be minimized.
Minimax detector design
We can take as objective (to be minimized) the minimax error probability, maxj Pje, which is a piecewise-linear convex function of D (hence, also of T). With this as the only objective, we have the problem of minimizing the maximum probability of detection error,
minimize max_j P_j^e
subject to t_k ≽ 0, 1^T t_k = 1, k = 1,…,n,
where the variables are t1, . . . , tn ∈ Rm. This can be reformulated as an LP. The minimax detector minimizes the worst-case (largest) probability of error over all m hypotheses.
We can, of course, add further constraints to the minimax detector design prob- lem.
Bayes detector design
In Bayes detector design, we have a prior distribution for the hypotheses, given by
q ∈ Rm, where
qi = prob(θ = i).
In this case, the probabilities pij are interpreted as conditional probabilities of X, given θ. The probability of error for the detector is then given by qTPe, which is an affine function of T . The Bayes optimal detector is the solution of the LP
minimize q^T P^e
subject to t_k ≽ 0, 1^T t_k = 1, k = 1,…,n.
We will see in §7.3.4 that this problem has a simple analytical solution.
One special case is when q = (1/m)1. In this case the Bayes optimal detector minimizes the average probability of error, where the (unweighted) average is over the hypotheses. In §7.3.4 we will see that the maximum likelihood detector (7.10)
is optimal for this problem.
Bias, mean-square error, and other quantities
In this section we assume that the ordering of the values of θ have some significance, i.e., that the value θ = i can be interpreted as a larger value of the parameter than θ = j, when i > j. This might be the case, for example, when θ = i corresponds to the hypothesis that i events have occurred. Here we may be interested in quantities such as
prob(θ̂ > θ | θ = i),
which is the probability that we overestimate θ when θ = i. This is an affine function of D:
prob(θ̂ > θ | θ = i) = ∑_{j>i} D_{ji},

so a maximum allowable value for this probability can be expressed as a linear inequality on D (hence, T). As another example, the probability of misclassifying θ by more than one, when θ = i,
prob(|θ̂ − θ| > 1 | θ = i) = ∑_{|j−i|>1} D_{ji},
is also a linear function of D.
We now suppose that the parameters have values {θ_1,…,θ_m} ⊆ R. The estimation or detection (parameter) error is then given by θ̂ − θ, and a number of quantities of interest are given by linear functions of D. Examples include:
• Bias. The bias of the detector, when θ = θ_i, is given by the linear function
E_i(θ̂ − θ) = ∑_{j=1}^m (θ_j − θ_i) D_{ji},
where the subscript on E means the expectation is with respect to the distribution of the hypothesis θ = θ_i.
• Mean square error. The mean square error of the detector, when θ = θ_i, is given by the linear function
E_i(θ̂ − θ)² = ∑_{j=1}^m (θ_j − θ_i)² D_{ji}.
• Average absolute error. The average absolute error of the detector, when θ = θ_i, is given by the linear function
E_i|θ̂ − θ| = ∑_{j=1}^m |θ_j − θ_i| D_{ji}.
7.3.4 Multicriterion formulation and scalarization

The optimal detector design problem can be considered a multicriterion problem, with the constraints (7.11), and the m(m − 1) objectives given by the off-diagonal entries of D, which are the probabilities of the different types of detection error:
minimize (w.r.t. R_+^{m(m−1)}) D_{ij}, i, j = 1,…,m, i ≠ j
subject to t_k ≽ 0, 1^T t_k = 1, k = 1,…,n,  (7.12)
with variables t1, . . . , tn ∈ Rm. Since each objective Dij is a linear function of the variables, this is a multicriterion linear program.
We can scalarize this multicriterion problem by forming the weighted sum objective
∑_{i,j=1}^m W_{ij} D_{ij} = tr(W^T D),

where the weight matrix W ∈ Rm×m satisfies
Wii =0, i=1,…,m, Wij >0, i, j =1,…,m, i̸=j.
This objective is a weighted sum of the m(m − 1) error probabilities, with weight Wij associated with the error of guessing θˆ = i when in fact θ = j. The weight matrix is sometimes called the loss matrix.
To find a Pareto optimal point for the multicriterion problem (7.12), we form the scalar optimization problem
minimize tr(W^T D)
subject to t_k ≽ 0, 1^T t_k = 1, k = 1,…,n,  (7.13)
which is an LP. This LP is separable in the variables t_1,…,t_n. The objective can be expressed as a sum of (linear) functions of t_k:
tr(W^T D) = tr(W^T TP) = tr(PW^T T) = ∑_{k=1}^n c_k^T t_k,
where ck is the kth column of W P T . The constraints are separable (i.e., we have separate constraints on each ti). Therefore we can solve the LP (7.13) by separately solving
minimize c_k^T t_k
subject to t_k ≽ 0, 1^T t_k = 1,
for k = 1, . . . , n. Each of these LPs has a simple analytical solution (see exer- cise 4.8). We first find an index q such that ckq = minj ckj. Then we take t⋆k = eq. This optimal point corresponds to a deterministic detector: when X = k is ob- served, our estimate is
θ̂ = argmin_j (W P^T)_{jk}.  (7.14)
Thus, for every weight matrix W with positive off-diagonal elements we can find a deterministic detector that minimizes the weighted sum objective. This seems to suggest that randomized detectors are not needed, but we will see this is not the case. The Pareto optimal trade-off surface for the multicriterion LP (7.12) is piecewise-linear; the deterministic detectors of the form (7.14) correspond to the vertices on the Pareto optimal surface.
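A hedged numpy sketch of the deterministic detector (7.14), using the 4×2 distribution matrix that appears in example 7.4 below and equal error weights (all names and the weight choice are assumptions of the illustration):

```python
# For each observed value k, guess argmin over j of column k of W P^T.
import numpy as np

P = np.array([[0.70, 0.10],
              [0.20, 0.10],
              [0.05, 0.70],
              [0.05, 0.10]])
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])                    # equal off-diagonal weights
theta_hat = np.argmin(W @ P.T, axis=0)        # one guess per observation k
print(theta_hat + 1)                          # hypotheses labeled 1, ..., m
```

With these weights the rule reduces to picking the larger likelihood for each k, i.e., a likelihood ratio detector.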
MAP and ML detectors
Consider a Bayes detector design with prior distribution q. The mean probability
of error is
q^T P^e = ∑_{j=1}^m q_j ∑_{i≠j} D_{ij} = ∑_{i,j=1}^m W_{ij} D_{ij},
if we define the weight matrix W as
W_{ij} = q_j, i, j = 1,…,m, i ≠ j,  W_{ii} = 0, i = 1,…,m.

Thus, a Bayes optimal detector is given by the deterministic detector (7.14), with
(W P^T)_{jk} = ∑_{i≠j} q_i p_{ki} = ∑_{i=1}^m q_i p_{ki} − q_j p_{kj}.
The first term is independent of j, so the optimal detector is simply
θ̂ = argmax_j (p_{kj} q_j),
when X = k is observed. The solution has a simple interpretation: Since pkjqj gives the probability that θ = j and X = k, this detector is a maximum a posteriori probability (MAP) detector.
For the special case q = (1/m)1, i.e., a uniform prior distribution on θ, this MAP detector reduces to a maximum likelihood (ML) detector:
θ̂ = argmax_j p_{kj}.
Thus, a maximum likelihood detector minimizes the (unweighted) average or mean probability of error.
7.3.5 Binary hypothesis testing
As an illustration, we consider the special case m = 2, which is called binary hypothesis testing. The random variable X is generated from one of two distribu- tions, which we denote p ∈ Rn and q ∈ Rn, to simplify the notation. Often the hypothesis θ = 1 corresponds to some normal situation, and the hypothesis θ = 2 corresponds to some abnormal event that we are trying to detect. If θˆ = 1, we say the test is negative (i.e., we guess that the event did not occur); if θˆ = 2, we say the test is positive (i.e., we guess that the event did occur).
The detection probability matrix D ∈ R^{2×2} is traditionally expressed as
D = [ 1 − Pfp    Pfn   ]
    [   Pfp    1 − Pfn ].
Here Pfn is the probability of a false negative (i.e., the test is negative when in fact the event has occurred) and Pfp is the probability of a false positive (i.e., the test is positive when in fact the event has not occurred), which is also called the false alarm probability. The optimal detector design problem is a bi-criterion problem, with objectives Pfn and Pfp.
The optimal trade-off curve between Pfn and Pfp is called the receiver operating characteristic (ROC), and is determined by the distributions p and q. The ROC can be found by scalarizing the bi-criterion problem, as described in §7.3.4. For the weight matrix W, an optimal detector (7.14) is
θ̂ = 1 if W21 p_k > W12 q_k, and θ̂ = 2 if W21 p_k ≤ W12 q_k,
Figure 7.4 Optimal trade-off curve between probability of a false negative, and probability of a false positive test result, for the matrix P given in (7.15). The vertices of the trade-off curve, labeled 1–3, correspond to deterministic detectors; the point labeled 4, which is a randomized detector, is the mini- max detector. The dashed line shows Pfn = Pfp, the points where the error probabilities are equal.
when X = k is observed. This is called a likelihood ratio threshold test: if the ratio pk /qk is more than the threshold W12 /W21 , the test is negative (i.e., θˆ = 1); otherwise the test is positive. By choosing different values of the threshold, we obtain (deterministic) Pareto optimal detectors that give different levels of false positive versus false negative error probabilities. This result is known as the Neyman-Pearson lemma.
The likelihood ratio detectors do not give all the Pareto optimal detectors; they are the vertices of the optimal trade-off curve, which is piecewise-linear.
Example 7.4 We consider a binary hypothesis testing example with n = 4, and
P = [ 0.70  0.10 ]
    [ 0.20  0.10 ]
    [ 0.05  0.70 ]
    [ 0.05  0.10 ].  (7.15)
The optimal trade-off curve between Pfn and Pfp, i.e., the receiver operating curve, is shown in figure 7.4. The left endpoint corresponds to the detector which is always negative, independent of the observed value of X; the right endpoint corresponds to the detector that is always positive. The vertices labeled 1, 2, and 3 correspond to the deterministic detectors
T^{(1)} = [ 1 1 0 1 ]   T^{(2)} = [ 1 1 0 0 ]
          [ 0 0 1 0 ],            [ 0 0 1 1 ],

T^{(3)} = [ 1 0 0 0 ]
          [ 0 1 1 1 ],
respectively. The point labeled 4 corresponds to the nondeterministic detector
T^{(4)} = [ 1  2/3  0  0 ]
          [ 0  1/3  1  1 ],
which is the minimax detector. This minimax detector yields equal probability of a false positive and false negative, which in this case is 1/6. Every deterministic detector has either a false positive or false negative probability that exceeds 1/6, so this is an example where a randomized detector outperforms every deterministic detector.
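A hedged sketch of the minimax detector LP for this example (cvxpy is an assumption of the illustration):

```python
# Minimize the larger of the false positive and false negative
# probabilities for the matrix P of example 7.4.
import numpy as np
import cvxpy as cp

P = np.array([[0.70, 0.10],
              [0.20, 0.10],
              [0.05, 0.70],
              [0.05, 0.10]])
T = cp.Variable((2, 4), nonneg=True)
D = T @ P
prob = cp.Problem(cp.Minimize(cp.maximum(1 - D[0, 0], 1 - D[1, 1])),
                  [cp.sum(T, axis=0) == np.ones(4)])
prob.solve()
print(prob.value)   # should be 1/6, the minimax error probability
print(T.value)      # a detector achieving it, such as T(4)
```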
7.3.6 Robust detectors

So far we have assumed that P, which gives the distribution of the observed variable X, for each value of the parameter θ, is known. In this section we consider the case where these distributions are not known, but certain prior information about them is given. We assume that P ∈ P, where P is the set of possible distributions. With a randomized detector characterized by T, the detection probability matrix D now depends on the particular value of P. We will judge the error probabilities by their worst-case values, over P ∈ P. We define the worst-case detection probability matrix D^{wc} as
D_{ij}^{wc} = sup_{P∈P} D_{ij}, i, j = 1,…,m, i ≠ j,
and
D_{ii}^{wc} = inf_{P∈P} D_{ii}, i = 1,…,m.
The off-diagonal entries give the largest possible probability of errors, and the diagonal entries give the smallest possible probability of detection, over P ∈ P. Note that ∑_{i=1}^m D_{ij}^{wc} ≠ 1 in general, i.e., the columns of a worst-case detection probability matrix do not necessarily add up to one. We define the worst-case probability of error as
P_i^{wce} = 1 − D_{ii}^{wc}.
Thus, P_i^{wce} is the largest probability of error, when θ = i, over all possible distributions in P.
Using the worst-case detection probability matrix, or the worst-case probability
of error vector, we can develop various robust versions of detector design problems. In the rest of this section we concentrate on the robust minimax detector design problem, as a generic example that illustrates the ideas.
We define the robust minimax detector as the detector that minimizes the worst- case probability of error, over all hypotheses, i.e., minimizes the objective
max_{i=1,…,m} P_i^{wce} = max_{i=1,…,m} sup_{P∈P} (1 − (TP)_{ii}) = 1 − min_{i=1,…,m} inf_{P∈P} (TP)_{ii}.
The robust minimax detector minimizes the worst possible probability of error, over all m hypotheses, and over all P ∈ P.

Robust minimax detector for finite P
When the set of possible distributions is finite, the robust minimax detector design problem is readily formulated as an LP. With P = {P1, . . . , Pk}, we can find the robust minimax detector by solving
maximize min_{i=1,…,m} inf_{P∈P} (TP)_{ii} = min_{i=1,…,m} min_{j=1,…,k} (TP_j)_{ii}
subject to t_i ≽ 0, 1^T t_i = 1, i = 1,…,n.
The objective is piecewise-linear and concave, so this problem can be expressed as an LP. Note that we can just as well consider P to be the polyhedron conv P; the associated worst-case detection matrix, and robust minimax detector, are the same.
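A hedged sketch of this finite-family robust design, with assumed random distribution matrices (cvxpy is an assumption of the illustration):

```python
# Robust minimax detector for P = {P_1, ..., P_k}: maximize the
# smallest diagonal entry of T P_j over all j and all hypotheses i.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
n, m, k = 6, 2, 3
Ps = []
for _ in range(k):
    Q = rng.random((n, m))
    Ps.append(Q / Q.sum(axis=0))              # columns are distributions

T = cp.Variable((m, n), nonneg=True)
worst = cp.minimum(*[(T @ Pj)[i, i] for Pj in Ps for i in range(m)])
prob = cp.Problem(cp.Maximize(worst), [cp.sum(T, axis=0) == np.ones(n)])
prob.solve()
print("worst-case error probability:", 1 - prob.value)
```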
Robust minimax detector for polyhedral P
It is also possible to efficiently formulate the robust minimax detector problem as an LP when P is a polyhedron described by linear equality and inequality constraints. This formulation is less obvious, and relies on a dual representation of P.
To simplify the discussion, we assume that P has the form
P = {P = [p_1 ··· p_m] | A_k p_k = b_k, 1^T p_k = 1, p_k ≽ 0, k = 1,…,m}.  (7.16)
In other words, for each distribution pk, we are given some expected values Akpk = bk. (These might represent known moments, probabilities, etc.) The extension to the case where we are given inequalities on expected values is straightforward.
The robust minimax design problem is
maximize γ
subject to inf{t̃_i^T p | A_i p = b_i, 1^T p = 1, p ≽ 0} ≥ γ, i = 1,…,m
t_k ≽ 0, 1^T t_k = 1, k = 1,…,n,
where t̃_i^T denotes the ith row of T (so that (TP)_{ii} = t̃_i^T p_i). By LP duality,
inf{t̃_i^T p | A_i p = b_i, 1^T p = 1, p ≽ 0} = sup{ν^T b_i + μ | A_i^T ν + μ1 ≼ t̃_i}.
Using this, the robust minimax detector design problem can be expressed as the LP
maximize γ
subject to ν_i^T b_i + μ_i ≥ γ, i = 1,…,m
A_i^T ν_i + μ_i 1 ≼ t̃_i, i = 1,…,m
t_k ≽ 0, 1^T t_k = 1, k = 1,…,n,
with variables ν_1,…,ν_m, μ_1,…,μ_m, and T (which has columns t_k and rows t̃_i^T).
Example 7.5 Robust binary hypothesis testing. Suppose m = 2 and the set P in (7.16) is defined by
A_1 = A_2 = A = [ a_1   a_2  ···  a_n  ]
                [ a_1²  a_2² ···  a_n² ],  b_1 = (α_1, α_2),  b_2 = (β_1, β_2).
Designing a robust minimax detector for this set P can be interpreted as a binary hypothesis testing problem: based on an observation of a random variable X ∈ {a_1,…,a_n}, choose between the following two hypotheses:
1. E X = α_1, E X² = α_2
2. E X = β_1, E X² = β_2.
Let t̃^T denote the first row of T (and so, (1 − t̃)^T is the second row). For given t̃, the worst-case probabilities of correct detection are
D_{11}^{wc} = inf{t̃^T p | ∑_{i=1}^n a_i p_i = α_1, ∑_{i=1}^n a_i² p_i = α_2, 1^T p = 1, p ≽ 0},
D_{22}^{wc} = inf{(1 − t̃)^T p | ∑_{i=1}^n a_i p_i = β_1, ∑_{i=1}^n a_i² p_i = β_2, 1^T p = 1, p ≽ 0}.
Using LP duality we can express D_{11}^{wc} as the optimal value of the LP
maximize z_0 + z_1 α_1 + z_2 α_2
subject to z_0 + a_i z_1 + a_i² z_2 ≤ t̃_i, i = 1,…,n,
with variables z_0, z_1, z_2 ∈ R. Similarly D_{22}^{wc} is the optimal value of the LP
maximize w_0 + w_1 β_1 + w_2 β_2
subject to w_0 + a_i w_1 + a_i² w_2 ≤ 1 − t̃_i, i = 1,…,n,
with variables w_0, w_1, w_2 ∈ R. To obtain the minimax detector, we have to maximize the minimum of D_{11}^{wc} and D_{22}^{wc}, i.e., solve the LP
maximize γ
subject to z_0 + z_1 α_1 + z_2 α_2 ≥ γ
w_0 + w_1 β_1 + w_2 β_2 ≥ γ
z_0 + z_1 a_i + z_2 a_i² ≤ t̃_i, i = 1,…,n
w_0 + w_1 a_i + w_2 a_i² ≤ 1 − t̃_i, i = 1,…,n
0 ≼ t̃ ≼ 1.
The variables are z_0, z_1, z_2, w_0, w_1, w_2, and t̃.
Chebyshev and Chernoff bounds
i=1,…,n
z0 +z1ai +z2ai ≤ti, 2 ̃
i=1,…,n
7.4
7.4.1
In this section we consider two types of classical bounds on the probability of a set, and show that generalizations of each can be cast as convex optimization problems. The original classical bounds correspond to simple convex optimization problems with analytical solutions; the convex optimization formulation of the general cases allow us to compute better bounds, or bounds for more complex situations.
Chebyshev bounds
Chebyshev bounds give an upper bound on the probability of a set based on known expected values of certain functions (e.g., mean and variance). The simplest ex- ample is Markov’s inequality: If X is a random variable on R+ with EX = μ,

7.4 Chebyshev and Chernoff bounds 375
then we have prob(X ≥ 1) ≤ μ, no matter what the distribution of X is. An- other simple example is Chebyshev’s bound: If X is a random variable on R with EX=μandE(X−μ)2 =σ2,thenwehaveprob(|X−μ|≥1)≤σ2,againno matter what the distribution of X is. The idea behind these simple bounds can be generalized to a setting in which convex optimization is used to compute a bound on the probability.
Let X be a random variable on S ⊆ Rm, and C ⊆ S be the set for which we want to bound prob(X ∈ C). Let 1C denote the 0-1 indicator function of the set C,i.e.,1C(z)=1ifz∈C and1C(z)=0ifz̸∈C.
Our prior knowledge of the distribution consists of known expected values of some functions:
Efi(X) = ai, i = 1,…,n,
where fi : Rm → R. We take f0 to be the constant function with value one, for which we always have E f0 (X ) = a0 = 1. Consider a linear combination of the
functions fi, given by
􏰊n
f(z) =
xifi(z),
i=0
where xi ∈ R, i = 0,…,n. From our knowledge of Efi(X), we have Ef(X) =
aT x.
Now suppose that f satisfies the condition f(z) ≥ 1C(z) for all z ∈ S, i.e., f
is pointwise greater than or equal to the indicator function of C (on S). Then we have
Ef(X)=aTx≥E1C(X)=prob(X ∈C).
In other words, aT x is an upper bound on prob(X ∈ C), valid for all distributions supported on S, with E fi(X) = ai.
We can search for the best such upper bound on prob(X ∈ C), by solving the problem
minimize x0 + a1􏰉􏰉x1 + ··· + anxn
subject to f(z) = ni=0 xifi(z) ≥ 1 for z ∈ C (7.17)
f(z)= ni=0xifi(z)≥0forz∈S,z∈/C,
with variable x ∈ Rn+1. This problem is always convex, since the constraints can
be expressed as
g1(x)=1− inf f(z)≤0, g2(x)=− inf f(z)≤0
z∈C z∈S\C
(g1 and g2 are convex). The problem (7.17) can also be thought of as a semi-infinite linear program, i.e., an optimization problem with a linear objective and an infinite number of linear inequalities, one for each z ∈ S.
In simple cases we can solve the problem (7.17) analytically. As an example, we take S = R+, C = [1,∞), f0(z) = 1, and f1(z) = z, with Ef1(X) = EX = μ ≤ 1 as our prior information. The constraint f(z) ≥ 0 for z ∈ S reduces to x0 ≥ 0, x1 ≥0. Theconstraintf(z)≥1forz∈C,i.e.,x0+x1z≥1forallz≥1,reduces to x0 + x1 ≥ 1. The problem (7.17) is then
minimize x0 + μx1 subjectto x0 ≥0, x1 ≥0
x0 + x1 ≥ 1.

376
7 Statistical estimation
Since0≤μ≤1,theoptimalpointforthissimpleLPisx0 =0,x1 =1. Thisgives the classical Markov bound prob(X ≥ 1) ≤ μ.
In other cases we can solve the problem (7.17) using convex optimization.
Remark 7.1 Duality and the Chebyshev bound problem. The Chebyshev bound prob- lem (7.17) determines a bound on prob(X ∈ C) for all probability measures that satisfy the given expected value constraints. Thus we can think of the Chebyshev bound problem (7.17) as producing a bound on the optimal value of the infinite- dimensional problem
maximize 􏰜 π(dz)
subject to 􏰜C fi(z)π(dz) = ai,
i = 1,…,n
π ≥ 0,
where the variable is the measure π, and π ≥ 0 means that the measure is nonnegative.
Since the Chebyshev problem (7.17) produces a bound on the problem (7.18), it should not be a surprise that they are related by duality. While semi-infinite and infinite-dimensional problems are beyond the scope of this book, we can still formally construct a dual of the problem (7.17), introducing a Lagrange multiplier function p : S → R, with p(z) the Lagrange multiplier associated with the inequality f(z) ≥ 1 (for z ∈ C) or f(z) ≥ 0 (for z ∈ S\C). Using an integral over z where we would have a sum in the finite-dimensional case, we arrive at the formal dual
􏰜S π(dz) = 1 S
(7.18)
maximize 􏰜
􏰜S p(z) dz = 1
p(z) dz
subject to 􏰜C fi(z)p(z) dz = ai, i = 1,…,n
S
p(z) ≥ 0 for all z ∈ S,
where the optimization variable is the function p. This is, essentially, the same
as (7.18).
Probability bounds with known first and second moments
As an example, suppose that S = Rm, and that we are given the first and second moments of the random variable X:
EX=a∈Rm, EXXT =Σ∈Sm.
In other words, we are given the expected value of the m functions zi, i = 1, . . . , m, and the m(m + 1)/2 functions zizj , i, j = 1, . . . , m, but no other information about the distribution.
In this case we can express f as the general quadratic function f(z) = zT Pz + 2qT z + r,
where the variables (i.e., the vector x in the discussion above) are P ∈ Sm, q ∈ Rm, and r ∈ R. From our knowledge of the first and second moments, we find that
Ef(X) = E(XTPX+2qTX+r)
= Etr(PXXT)+2EqTX+r
= tr(ΣP)+2qTa+r.

7.4 Chebyshev and Chernoff bounds 377
The constraint that f(z) ≥ 0 for all z can be expressed as the linear matrix in-
equality
􏰒P q􏰓≽0. qT r
In particular, we have P ≽ 0.
Now suppose that the set C is the complement of an open polyhedron,
C = R m \ P , P = { z | a Ti z < b i , i = 1 , . . . , k } . The condition that f(z) ≥ 1 for all z ∈ C is the same as requiring that a Ti z ≥ b i = ⇒ z T P z + 2 q T z + r ≥ 1 for i = 1,...,k. This, in turn, can be expressed as: there exist τ1,...,τk ≥ 0 such that 􏰒P q 􏰓≽τ􏰒 0 ai/2􏰓, i=1,...,k. qTr−1 iaTi/2−bi (See §B.2.) Putting it all together, the Chebyshev bound problem (7.17) can be expressed (7.19) as minimize tr(ΣP ) + 2qT a + r subjectto 􏰒 P q 􏰓≽τ 􏰒 0 ai/2 􏰓, i=1,...,k qTr−1 iaTi/2−bi τi ≥0, i=1,...,k 􏰒P q􏰓≽0, qT r which is a semidefinite program in the variables P, q, r, and τ1,...,τk. The optimal value, say α, is an upper bound on prob(X ∈ C) over all distributions with mean a and second moment Σ. Or, turning it around, 1 − α is a lower bound on prob(X ∈ P). Remark 7.2 Duality and the Chebyshev bound problem. The dual SDP associated with (7.19) can be expressed as 􏰉ki=1 λi maximize subjectto 􏰉aTzi ≥bλi, i=1,...,k SDP (7.19) is strictly feasible, strong duality holds and the dual optimum is attained. We can give an interesting probability interpretation to the dual problem. Suppose Zi, zi, λi are dual feasible and that the first r components of λ are positive, and the 􏰒􏰓􏰒􏰓 􏰒 i=1 ziT􏰓 λi aT 1 i k Zi zi ≼ Σ a Zi zi ziT λi ≽ 0, i = 1,...,k. The variables are Zi ∈ Sm, zi ∈ Rm, and λi ∈ R, for i = 1,...,k. Since the 378 7 Statistical estimation rest are zero. For simplicity we also assume that 􏰉ki=1 λi < 1. We define xi = (1/λi)zi, i=1,...,r, 1􏰇􏰊r 􏰈 w0 = μ a− λixi , i=1 1􏰇􏰊r 􏰈 W = μ Σ− λixixTi , i=1 where μ = 1 − 􏰉ki=1 λi . With these definitions the dual feasibility constraints can be expressed as and aTi xi ≥bi, i=1,...,r 􏰊r 􏰒 x i x Ti x i 􏰓 􏰒 W w 0 􏰓 􏰒 Σ a 􏰓 λi xTi 1 +μ w0T 1 = aT 1 . i=1 Moreover, from dual feasibility, 􏰒Ww0􏰓 􏰒Σa􏰓􏰊r 􏰒xixTi xi􏰓 μ w0T 1 = aT 1 − λi xTi 1 i=1 􏰒􏰓r􏰒􏰓 = Σ a − 􏰊􏰊 ( 1 / λ i ) z i z i T z i aT1 ziT λi i=1 i=1 ≽ 0. Therefore, W ≽ w0w0T , so it can be factored as W − w0w0T = 􏰉si=1 wiwiT . Now consider a discrete random variable X with the following distribution. If s ≥ 1, we take 􏰒 Σ a 􏰓 r 􏰒 Zi zi 􏰓 ≽ aT 1 − ziT λi X = xi √ X = w0 + √swi X = w0 − swi If s = 0, we take X = xi X = w0 It is easily verified that EX = a and EXXT = Σ, i.e., the distribution matches the given moments. Furthermore, since xi ∈ C, 􏰊r i=1 In particular, by applying this interpretation to the dual optimal solution, we can construct a distribution that satisfies the Chebyshev bound from (7.19) with equality, which shows that the Chebyshev bound is sharp for this case. with probability λi, i = 1,...,r with probability μ/(2s), i = 1,...,s with probability μ/(2s), i = 1,...,s. with probability λi, i = 1,...,r with probability μ. prob(X ∈ C) ≥ λi. 7.4 Chebyshev and Chernoff bounds 7.4.2 Chernoff bounds 379 Let X be a random variable on R. The Chernoff bound states that prob(X ≥ u) ≤ inf E eλ(X−u), λ≥0 which can be expressed as log prob(X ≥ u) ≤ inf {−λu + log E eλX }. λ≥0 (7.20) Recall (from example 3.41, page 106) that the righthand term, log E eλX , is called the cumulant generating function of the distribution, and is always convex, so the function to be minimized is convex. The bound (7.20) is most useful in cases when the cumulant generating function has an analytical expression, and the minimiza- tion over λ can be carried out analytically. For example, if X is Gaussian with zero mean and unit variance, the cumulant generating function is logEeλX =λ2/2, and the infimum over λ ≥ 0 of −λu + λ2/2 occurs with λ = u (if u ≥ 0), so the Chernoff bound is (for u ≥ 0) prob(X ≥ u) ≤ e−u2/2. 
The idea behind the Chernoff bound can be extended to a more general setting, in which convex optimization is used to compute a bound on the probability of a set in Rm. Let C ⊆ Rm, and as in the description of Chebyshev bounds above, let 1C denote the 0-1 indicator function of C. We will derive an upper bound on prob(X ∈ C). (In principle we can compute prob(X ∈ C), for example by Monte Carlo simulation, or numerical integration, but either of these can be a daunting computational task, and neither method produces guaranteed bounds.) Letλ∈Rm andμ∈R,andconsiderthefunctionf:Rm →Rgivenby f(z) = eλT z+μ. As in the development of Chebyshev bounds, if f satisfies f(z) ≥ 1C(z) for all z, then we can conclude that prob(X ∈ C) = E1C(X) ≤ Ef(X). Clearly we have f(z) ≥ 0 for all z; to have f(z) ≥ 1 for z ∈ C is the same as λTz+μ≥0forallz∈C,i.e.,−λTz≤μforallz∈C. Thus,if−λTz≤μforall z ∈ C, we have the bound prob(X ∈ C) ≤ E exp(λT X + μ), logprob(X ∈C)≤μ+logEexp(λTX). or, taking logarithms, 380 7 Statistical estimation From this we obtain a general form of Chernoff’s bound: logprob(X∈C) ≤ inf{μ+logEexp(λTX)| −λTz≤μforallz∈C} = inf 􏰄sup(−λT z) + log E exp(λT X)􏰅 λ 􏰀z∈C 􏰁 = inf SC(−λ)+logEexp(λTX) , where SC is the support function of C. Note that the second term, log E exp(λT X), is the cumulant generating function of the distribution, and is always convex (see example 3.41, page 106). Evaluating this bound is, in general, a convex optimiza- tion problem. Chernoff bound for a Gaussian variable on a polyhedron As a specific example, suppose that X is a Gaussian random vector on Rm with zero mean and covariance I, so its cumulant generating function is log E exp(λT X) = λT λ/2. We take C to be a polyhedron described by inequalities: C ={x|Ax≼b}, which we assume is nonempty. For use in the Chernoff bound, we use a dual characterization of the support function SC: SC(y) = sup{yTx|Ax≼b} = −inf{−yTx|Ax≼b} = −sup{−bTu|ATu=y, u≽0} = inf{bTu|ATu=y, u≽0} where in the third line we use LP duality: inf{cTx|Ax≼b}=sup{−bTu|ATu+c=0, u≽0} with c = −y. Using this expression for SC in the Chernoff bound we obtain 􏰀 􏰍􏰍T􏰁 logprob(X∈C) ≤ inf SC(−λ)+logEexp(λ X) λ = infinf{bTu+λTλ/2 u≽0, ATu+λ=0}. λu Thus, the Chernoff bound on prob(X ∈ C) is the exponential of the optimal value of the QP minimize bT u + λT λ/2 subjectto u≽0, ATu+λ=0, where the variables are u and λ. (7.21) 7.4 Chebyshev and Chernoff bounds 381 This problem has an interesting geometric interpretation. It is equivalent to minimize bT u + (1/2)∥AT u∥2 which is the dual of subject to u ≽ 0, maximize −(1/2)∥x∥2 subject to Ax ≼ b. In other words, the Chernoff bound is prob(X ∈ C) ≤ exp(− dist(0, C)2/2), where dist(0,C) is the Euclidean distance of the origin to C. (7.22) Remark 7.3 The bound (7.22) can also be derived without using Chernoff’s inequality. If the distance between 0 and C is d, then there is a halfspace H = {z | aTz ≥ d}, with ∥a∥2 = 1, that contains C . The random variable aT X is N (0, 1), so prob(X ∈ C) ≤ prob(X ∈ H) = Φ(−d), where Φ is the cumulative distribution function of a zero mean, unit variance Gaus- sian. Since Φ(−d) ≤ e−d2/2 for d ≥ 0, this bound is at least as sharp as the Chernoff bound (7.22). 7.4.3 Example In this section we illustrate the Chebyshev and Chernoff probability bounding methods with a detection example. We have a set of m possible symbols or signals s ∈ {s1,s2,...,sm} ⊆ Rn, which is called the signal constellation. One of these signals is transmitted over a noisy channel. 
The received signal is x = s + v, where v is a noise, modeled as a random variable. We assume that Ev = 0 and E vvT = σ2I, i.e., the noise components v1, . . . , vn are zero mean, uncorrelated, and have variance σ2. The receiver must estimate which signal was sent on the basis of the received signal x = s + v. The minimum distance detector chooses as estimate the symbol sk closest (in Euclidean norm) to x. (If the noise v is Gaussian, then minimum distance decoding is the same as maximum likelihood decoding.) If the signal sk is transmitted, correct detection occurs if sk is the estimate, given x. This occurs when the signal sk is closer to x than the other signals, i.e., ∥x−sk∥2 <∥x−sj∥2, j̸=k. Thus, correct detection of symbol sk occurs if the random variable v satisfies the linear inequalities 2(sj−sk)T(sk+v)<∥sj∥2−∥sk∥2, j̸=k. These inequalities define the Voronoi region Vk of sk in the signal constellation, i.e., the set of points closer to sk than any other signal in the constellation. The probability of correct detection of sk is prob(sk + v ∈ Vk ). Figure 7.5 shows a simple example with m = 7 signals, with dimension n = 2. 382 7 Statistical estimation s4 s3 s5 s1 s2 s7 s6 Figure7.5Aconstellationof7signalss1,...,s7 ∈R2,shownassmallcircles. The line segments show the boundaries of the corresponding Voronoi regions. The minimum distance detector selects symbol sk when the received signal lies closer to sk than to any of the other points, i.e., if the received signal is in the interior of the Voronoi region around symbol sk. The circles around each point have radius one, to show the scale. Chebyshev bounds The SDP bound (7.19) provides a lower bound on the probability of correct detec- tion, and is plotted in figure 7.6, as a function of the noise standard deviation σ, for the three symbols s1, s2, and s3. These bounds hold for any noise distribution with zero mean and covariance σ2I. They are tight in the sense that there exists a noise distribution with zero mean and covariance Σ = σ2I, for which the proba- bility of error is equal to the lower bound. This is illustrated in figure 7.7, for the first Voronoi set, and σ = 1. Chernoff bounds We use the same example to illustrate the Chernoff bound. Here we assume that the noise is Gaussian, i.e., v ∼ N(0,σ2I). If symbol sk is transmitted, the probability of correct detection is the probability that sk + v ∈ Vk. To find a lower bound for this probability, we use the QP (7.21) to compute upper bounds on the probability that the ML detector selects symbol i, i = 1,...,m, i ̸= k. (Each of these upper bounds is related to the distance of sk to the Voronoi set Vi.) Adding these upper bounds on the probabilities of mistaking sk for si, we obtain an upper bound on the probability of error, and therefore, a lower bound on the probability of correct detection of symbol sk. The resulting lower bound, for s1, is shown in figure 7.8, along with an estimate of the probability of correct detection obtained using Monte Carlo analysis. 7.4 Chebyshev and Chernoff bounds 383 1 0.8 0.6 0.4 0.2 00 0.5 1σ1.5 2 2.5 Figure 7.6 Chebyshev lower bounds on the probability of correct detection for symbols s1, s2, and s3. These bounds are valid for any noise distribution that has zero mean and covariance σ2I. 3 2 1 s4 s3 s5 s1 s2 s7 s6 Figure 7.7 The Chebyshev lower bound on the probability of correct detec- tion of symbol 1 is equal to 0.2048 when σ = 1. This bound is achieved by the discrete distribution illustrated in the figure. 
The solid circles are the possible values of the received signal s1 + v. The point in the center of the ellipse has probability 0.2048. The five points on the boundary have a total probability 0.7952. The ellipse is defined by xT P x + 2qT x + r = 1, where P, q, and r are the optimal solution of the SDP (7.19). probability of correct detection 384 7 Statistical estimation 7.5 Figure 7.8 The Chernoff lower bound (solid line) and a Monte Carlo esti- mate (dashed line) of the probability of correct detection of symbol s1, as a function of σ. In this example the noise is Gaussian with zero mean and covariance σ2I. Experiment design We consider the problem of estimating a vector x ∈ Rn from measurements or experiments yi =aTi x+wi, i=1,...,m, where wi is measurement noise. We assume that wi are independent Gaussian random variables with zero mean and unit variance, and that the measurement vectors a1, . . . , am span Rn. The maximum likelihood estimate of x, which is the same as the minimum variance estimate, is given by the least-squares solution 1 0.95 0.9 0.2 0.3 σ 0.4 0.5 􏰇 􏰊m xˆ = i=1 The associated estimation error e = xˆ − x has zero mean and covariance matrix 􏰇􏰊m 􏰈−1 a i a Ti 􏰈 − 1 􏰊m i=1 y i a i . E = E e e T = a i a Ti . i=1 The matrix E characterizes the accuracy of the estimation, or the informativeness of the experiments. For example the α-confidence level ellipsoid for x is given by E = { z | ( z − xˆ ) T E − 1 ( z − xˆ ) ≤ β } , where β is a constant that depends on n and α. We suppose that the vectors a1, . . . , am, which characterize the measurements, can be chosen among p possible test vectors v1, . . . , vp ∈ Rn, i.e., each ai is one of probability of correct detection 7.5 Experiment design 385 the vj. The goal of experiment design is to choose the vectors ai, from among the possible choices, so that the error covariance E is small (in some sense). In other words, each of m experiments or measurements can be chosen from a fixed menu of p possible experiments; our job is to find a set of measurements that (together) are maximally informative. Let mj denote the number of experiments for which ai is chosen to have the value vj, so we have m1 + · · · + mp = m. We can express the error covariance matrix as E = 􏰇􏰊m i=1 a i a Ti 􏰈−1 􏰊p −1 =  m j v j v jT  . j=1 This shows that the error covariance depends only on the numbers of each type of experiment chosen (i.e., m1, . . . , mp). The basic experiment design problem is as follows. Given the menu of possible choices for experiments, i.e., v1, . . . , vp, and the total number m of experiments to be carried out, choose the numbers of each type of experiment, i.e., m1,...,mp, to make the error covariance E small (in some sense). The variables m1, . . . , mp must, of course, be integers and sum to m, the given total number of experiments. This leads to the optimization problem minimize (w.r.t. Sn+) E = 􏰎􏰉pj=1 mjvjvjT 􏰏−1 subjectto mi≥0, m1+···+mp=m mi ∈ Z, (7.23) where the variables are the integers m1 , . . . , mp . The basic experiment design problem (7.23) is a vector optimization problem over the positive semidefinite cone. If one experiment design results in E, and another in E ̃, with E ≼ E ̃, then certainly the first experiment design is as good as or better than the second. For example, the confidence ellipsoid for the first experiment design (translated to the origin for comparison) is contained in the confidence ellipsoid of the second. 
We can also say that the first experiment design allows us to estimate qT x better (i.e., with lower variance) than the second experi- ment design, for any vector q, since the variance of our estimate of qT x is given by qT Eq for the first experiment design and qT E ̃q for the second. We will see below several common scalarizations for the problem. 7.5.1 The relaxed experiment design problem The basic experiment design problem (7.23) can be a hard combinatorial problem when m, the total number of experiments, is comparable to n, since in this case the mi are all small integers. In the case when m is large compared to n, however, a good approximate solution of (7.23) can be found by ignoring, or relaxing, the constraint that the mi are integers. Let λi = mi/m, which is the fraction of 386 7 Statistical estimation the total number of experiments for which aj = vi, or the relative frequency of experiment i. We can express the error covariance in terms of λi as 1 􏰇 􏰊p 􏰈 − 1 E = m λ i v i v iT . ( 7 . 2 4 ) i=1 The vector λ ∈ Rp satisfies λ ≽ 0, 1T λ = 1, and also, each λi is an integer multiple of 1/m. By ignoring this last constraint, we arrive at the problem minimize (w.r.t. Sn+) E = (1/m) 􏰀􏰉pi=1 λiviviT 􏰁−1 subjectto λ≽0, 1Tλ=1, (7.25) with variable λ ∈ Rp. To distinguish this from the original combinatorial experi- ment design problem (7.23), we refer to it as the relaxed experiment design problem. The relaxed experiment design problem (7.25) is a convex optimization problem, since the objective E is an Sn+-convex function of λ. Several statements can be made about the relation between the (combinato- rial) experiment design problem (7.23) and the relaxed problem (7.25). Clearly the optimal value of the relaxed problem provides a lower bound on the optimal value of the combinatorial one, since the combinatorial problem has an additional constraint. From a solution of the relaxed problem (7.25) we can construct a sub- optimal solution of the combinatorial problem (7.23) as follows. First, we apply simple rounding to get mi = round(mλi), i = 1,...,p. Corresponding to this choice of m1, . . . , mp is the vector λ ̃, λ ̃i =(1/m)round(mλi), i=1,...,p. The vector λ ̃ satisfies the constraint that each entry is an integer multiple of 1/m. Clearly we have |λi − λ ̃i| ≤ 1/(2m), so for m large, we have λ ≈ λ ̃. This implies that the constraint 1T λ ̃ = 1 is nearly satisfied, for large m, and also that the error covariance matrices associated with λ ̃ and λ are close. We can also give an alternative interpretation of the relaxed experiment design problem (7.25). We can interpret the vector λ ∈ Rp as defining a probability distribution on the experiments v1, . . . , vp. Our choice of λ corresponds to a random experiment: each experiment ai takes the form vj with probability λj. In the rest of this section, we consider only the relaxed experiment design problem, so we drop the qualifier ‘relaxed’ in our discussion. 7.5.2 Scalarizations Several scalarizations have been proposed for the experiment design problem (7.25), which is a vector optimization problem over the positive semidefinite cone. 7.5 Experiment design 387 D-optimal design The most widely used scalarization is called D-optimal design, in which we minimize the determinant of the error covariance matrix E. This corresponds to designing the experiment to minimize the volume of the resulting confidence ellipsoid (for a fixed confidence level). 
Ignoring the constant factor 1/m in E, and taking the logarithm of the objective, we can pose this problem as minimize log det 􏰀􏰉pi=1 λiviviT 􏰁−1 subjectto λ≽0, 1Tλ=1, which is a convex optimization problem. E-optimal design (7.26) In E-optimal design, we minimize the norm of the error covariance matrix, i.e., the maximum eigenvalue of E. Since the diameter (twice the longest semi-axis) of the confidence ellipsoid E is proportional to ∥E∥1/2, minimizing ∥E∥ can be 22 interpreted geometrically as minimizing the diameter of the confidence ellipsoid. E-optimal design can also be interpreted as minimizing the maximum variance of qTe,overallqwith∥q∥2 =1. The E-optimal experiment design problem is minimize 􏳶􏳶􏳶􏰀􏰉pi=1 λiviviT 􏰁−1􏳶􏳶􏳶2 subjectto λ≽0, 1Tλ=1. The objective is a convex function of λ, so this is a convex problem. The E-optimal experiment design problem can be cast as an SDP maximize t􏰉p T subject to i=1 λivivi ≽ tI (7.27) λ≽0, 1Tλ=1, In A-optimal experiment design, we minimize trE, the trace of the covariance matrix. This objective is simply the mean of the norm of the error squared: E∥e∥2 =Etr(eeT)=trE. with variables λ ∈ Rp and t ∈ R. A-optimal design The A-optimal experiment design problem is minimize tr 􏰀􏰉pi=1 λiviviT 􏰁−1 (7.28) subjectto λ≽0, 1Tλ=1. This, too, is a convex problem. Like the E-optimal experiment design problem, it can be cast as an SDP: minimize 1T u subjectto 􏰒 􏰉pi=1λiviviT ek 􏰓≽0, k=1,...,n eTk uk λ≽0, 1Tλ=1, 388 7 Statistical estimation where the variables are u ∈ Rn and λ ∈ Rp, and here, ek is the kth unit vector. Optimal experiment design and duality The Lagrange duals of the three scalarizations have an interesting geometric mean- ing. The dual of the D-optimal experiment design problem (7.26) can be expressed as maximize log det W + n log n subjectto viTWvi ≤1, i=1,...,p, with variable W ∈ Sn and domain Sn++ (see exercise 5.10). This dual problem has a simple interpretation: The optimal solution W⋆ determines the minimum volume ellipsoid, centered at the origin, given by {x | xT W ⋆ x ≤ 1}, that contains the points v1, . . . , vp. (See also the discussion of problem (5.14) on page 222.) By complementary slackness, λ⋆i(1−viTW⋆vi)=0, i=1,...,p, (7.29) i.e., the optimal experiment design only uses the experiments vi which lie on the surface of the minimum volume ellipsoid. The duals of the E-optimal and A-optimal design problems can be given a similar interpretation. The duals of problems (7.27) and (7.28) can be expressed as and maximize tr W subjectto viTWvi ≤1, i=1,...,p W ≽ 0, maximize (tr W 1/2 )2 subjectto viTWvi ≤1, i=1,...,p, (7.30) (7.31) respectively. The variable in both problems is W ∈ Sn. In the second problem there is an implicit constraint W ∈ Sn+. (See exercises 5.40 and 5.10.) As for the D-optimal design, the optimal solution W⋆ determines a minimal ellipsoid {x | xTW⋆x ≤ 1} that contains the points v1,...,vp. Moreover W⋆ and λ⋆ satisfy the complementary slackness conditions (7.29), i.e., the optimal design only uses experiments vi that lie on the surface of the ellipsoid defined by W⋆. Experiment design example We consider a problem with x ∈ R2, and p = 20. The 20 candidate measurement vectors ai are shown as circles in figure 7.9. The origin is indicated with a cross. The D-optimal experiment has only two nonzero λi, indicated as solid circles in figure 7.9. The E-optimal experiment has two nonzero λi, indicated as solid circles in figure 7.10. The A-optimal experiment has three nonzero λi, indicated as solid circles in figure 7.11. 
We also show the three ellipsoids {x | xT W ⋆x ≤ 1} associated with the dual optimal solutions W⋆. The resulting 90% confidence ellipsoids are shown in figure 7.12, along with the confidence ellipsoid for the ‘uniform’ design, with equal weight λi = 1/p on all experiments. 7.5 Experiment design 389 λ1 = 0.5 λ2 = 0.5 Figure 7.9 Experiment design example. The 20 candidate measurement vec- tors are indicated with circles. The D-optimal design uses the two measure- ment vectors indicated with solid circles, and puts an equal weight λi = 0.5 on each of them. The ellipsoid is the minimum volume ellipsoid centered at the origin, that contains the points vi. λ2 = 0.2 λ3 = 0.8 Figure 7.10 The E-optimal design uses two measurement vectors. The dashed lines are (part of) the boundary of the ellipsoid {x | xT W⋆x ≤ 1} where W ⋆ is the solution of the dual problem (7.30). λ1 = 0.30 λ2 = 0.38 λ3 = 0.32 Figure 7.11 The A-optimal design uses three measurement vectors. The dashed line shows the ellipsoid {x | xT W⋆x ≤ 1} associated with the solution of the dual problem (7.31). 390 7 Statistical estimation D A uniform E Figure 7.12 Shape of the 90% confidence ellipsoids for D-optimal, A-optimal, E-optimal, and uniform designs. 7.5.3 Extensions Resource limits Suppose that associated with each experiment is a cost ci, which could represent the economic cost, or time required, to carry out an experiment with vi. The total cost, or time required (if the experiments are carried out sequentially) is then m1c1 +···+mpcp =mcTλ. We can add a limit on total cost by adding the linear inequality mcT λ ≤ B, where B is a budget, to the basic experiment design problem. We can add multiple linear inequalities, representing limits on multiple resources. Multiple measurements per experiment We can also consider a generalization in which each experiment yields multiple measurements. In other words, when we carry out an experiment using one of the possible choices, we obtain several measurements. To model this situation we can use the same notation as before, with vi as matrices in Rn×ki : vi=􏰋ui1 ··· uiki 􏰌, where ki is the number of (scalar) measurements obtained when the experiment vi is carried out. The error covariance matrix, in this more complicated setup, has the exact same form. In conjunction with additional linear inequalities representing limits on cost or time, we can model discounts or time savings associated with performing groups of measurements simultaneously. Suppose, for example, that the cost of simulta- neously making (scalar) measurements v1 and v2 is less than the sum of the costs 7.5 Experiment design 391 of making them separately. We can take v3 to be the matrix v3=􏰋v1 v2􏰌 and assign costs c1, c2, and c3 associated with making the first measurement alone, the second measurement alone, and the two simultaneously, respectively. When we solve the experiment design problem, λ1 will give us the fraction of times we should carry out the first experiment alone, λ2 will give us the fraction of times we should carry out the second experiment alone, and λ3 will give us the fraction of times we should carry out the two experiments simultaneously. (Normally we would expect a choice to be made here; we would not expect to have λ1 >0,λ2 >0,andλ3 >0.)

392
7 Statistical estimation
Bibliography
ML and MAP estimation, hypothesis testing, and detection are covered in books on statistics, pattern recognition, statistical signal processing, or communications; see, for example, Bickel and Doksum [BD77], Duda, Hart, and Stork [DHS99], Scharf [Sch91], or Proakis [Pro01].
Logistic regression is discussed in Hastie, Tibshirani, and Friedman [HTF01, §4.4]. For the covariance estimation problem of page 355, see Anderson [And70].
Generalizations of Chebyshev’s inequality were studied extensively in the sixties, by Isii [Isi64], Marshall and Olkin [MO60], Karlin and Studden [KS66, chapter 12], and others. The connection with semidefinite programming was made more recently by Bertsimas and Sethuraman [BS00] and Lasserre [Las02].
The terminology in §7.5 (A-, D-, and E-optimality) is standard in the literature on optimal experiment design (see, for example, Pukelsheim [Puk93]). The geometric interpretation of the dual D-optimal design problem is discussed by Titterington [Tit75].

Exercises 393
Exercises Estimation
7.1 Linear measurements with exponentially distributed noise. Show how to solve the ML estimation problem (7.2) when the noise is exponentially distributed, with density
p(z) = 􏰆 (1/a)e−z/a z ≥ 0 0 z < 0, where a > 0.
7.2 ML estimation and l∞-norm approximation. We consider the linear measurement model
y = Ax + v of page 352, with a uniform noise distribution of the form p(z) = 􏰆 1/(2α) |z| ≤ α
0 |z| > α.
As mentioned in example 7.1, page 352, any x that satisfies ∥Ax − y∥∞ ≤ α is a ML
estimate.
Now assume that the parameter α is not known, and we wish to estimate α, along with the parameters x. Show that the ML estimates of x and α are found by solving the l∞-norm approximation problem
minimize ∥Ax − y∥∞,
7.3 Probit model. Suppose y ∈ {0, 1} i􏰆s random variable given by
where aTi are the rows of A.
y=
1 aTu+b+v≤0 0 aTu+b+v>0,
where the vector u ∈ Rn is a vector of explanatory variables (as in the logistic model described on page 354), and v is a zero mean unit variance Gaussian variable.
Formulate the ML estimation problem of estimating a and b, given data consisting of pairs (ui,yi), i = 1,…,N, as a convex optimization problem.
7.4 Estimation of covariance and mean of a multivariate normal distribution. We consider the problem of estimating the covariance matrix R and the mean a of a Gaussian probability density function
pR,a(y) = (2π)−n/2 det(R)−1/2 exp(−(y − a)T R−1(y − a)/2), based on N independent samples y1, y2, . . . , yN ∈ Rn.
(a) We first consider the estimation problem when there are no additional constraints on R and a. Let μ and Y be the sample mean and covariance, defined as
1 􏰊N μ= N yk,
k=1
Show that the log-likelihood function
l(R, a) = −(Nn/2) log(2π) − (N/2) log det R − (1/2)
1 􏰊N
Y = N (yk −μ)(yk −μ)T.
k=1
􏰊N k=1
(yk − a)T R−1(yk − a)

394 7 Statistical estimation
(b)
can be expressed as
l(R,a)= N2 􏰀−nlog(2π)−logdetR−tr(R−1Y)−(a−μ)TR−1(a−μ)􏰁.
Use this expression to show that if Y ≻ 0, the ML estimates of R and a are unique, and given by
aml =μ, Rml =Y.
The log-likelihood function includes a convex term (−logdetR), so it is not obvi- ously concave. Show that l is concave, jointly in R and a, in the region defined by
R ≼ 2Y.
This means we can use convex optimization to compute simultaneous ML estimates of R and a, subject to convex constraints, as long as the constraints include R ≼ 2Y , i.e., the estimate R must not exceed twice the unconstrained ML estimate.
7.5 Markov chain estimation. Consider a Markov chain with n states, and transition proba- bility matrix P ∈ Rn×n defined as
Pij = prob(y(t+1) = i | y(t􏰉) = j).
The transition probabilities must satisfy Pij ≥ 0 and ni=1 Pij = 1, j = 1, . . . , n. We consider the problem of estimating the transition probabilities, given an observed sample sequence y(1) = k1, y(2) = k2, …, y(N) = kn.
(a) Show that if there are no other prior constraints on Pij , then the ML estimates are the empirical transition frequencies: Pˆij is the ratio of the number of times the state transitioned from j into i, divided by the number of times it was j, in the observed sample.
(b) Suppose that an equilibrium distribution p of the Markov chain is known, i.e., a vector q ∈ Rn+ satisfying 1T q = 1 and P q = q. Show that the problem of computing the ML estimate of P, given the observed sequence and knowledge of q, can be expressed as a convex optimization problem.
7.6 Estimation of mean and variance. Consider a random variable x ∈ R with density p, which is normalized, i.e., has zero mean and unit variance. Consider a random variable y = (x+b)/a obtained by an affine transformation of x, where a > 0. The random variable y has mean b and variance 1/a2. As a and b vary over R+ and R, respectively, we generate a family of densities obtained from p by scaling and shifting, uniquely parametrized by mean and variance.
Show that if p is log-concave, then finding the ML estimate of a and b, given samples y1,…,yn of y, is a convex problem.
As an example, work out an analytical solution for the ML estimates of a and b, assuming p is a normalized Laplacian density, p(x) = e−2|x|.
7.7 ML estimation of Poisson distributions. Suppose xi , i = 1, . . . , n, are independent random variables with Poisson distributions
prob(xi = k) = e−μi μki , k!
with unknown means μi. The variables xi represent the number of times that one of n possible independent events occurs during a certain period. In emission tomography, for example, they might represent the number of photons emitted by n sources.
We consider an experiment designed to determine the means μi. The experiment involves m detectors. If event i occurs, it is detected by detector j with probability pji. We assume

Exercises 395
the probabilities pji are given (with pji ≥ 0, 􏰉mj=1 pji ≤ 1). The total number of events recorded by detector j is denoted yj ,
values of yj , j = 1, . . . , m, as a convex optimization problem.
Hint. The variables yji have Poisson distributions with means pjiμi, i.e.,
e−pjiμi (pjiμi)k prob(yji = k) = k! .
The sum of n independent Poisson variables with means λ1, . . . , λn has a Poisson distri- bution with mean λ1 + ··· + λn.
7.8 Estimation using sign measurements. We consider the measurement setup yi =sign(aTi x+bi +vi), i=1,…,m,
where x ∈ Rn is the vector to be estimated, and yi ∈ {−1, 1} are the measurements. The vectors ai ∈ Rn and scalars bi ∈ R are known, and vi are IID noises with a log-concave probability density. (You can assume that aTi x + bi + vi = 0 does not occur.) Show that maximum likelihood estimation of x is a convex optimization problem.
7.9 Estimation with unknown sensor nonlinearity. We consider the measurement setup yi=f(aTix+bi+vi), i=1,…,m,
where x ∈ Rn is the vector to be estimated, yi ∈ R are the measurements, ai ∈ Rn, bi ∈ R are known, and vi are IID noises with log-concave probability density. The function f : R → R, which represents a measurement nonlinearity, is not known. However, it is known that f′(t) ∈ [l,u] for all t, where 0 < l < u are given. Explain how to use convex optimization to find a maximum likelihood estimate of x, as well as the function f. (This is an infinite-dimensional ML estimation problem, but you can be informal in your approach and explanation.) 7.10 Nonparametric distributions on Rk. We consider a random variable x ∈ Rk with values in a finite set {α1, . . . , αn}, and with distribution pi = prob(x = αi), i = 1,...,n. Show that a lower bound on the covariance of X, yj = 􏰊n i=1 yji, j=1,...,m. Formulate the ML estimation problem of estimating the means μi, based on observed S ≼ E(X − E X)(X − E X)T , 7.11 Randomized detectors. Show that every randomized detector can be expressed as a convex combination of a set of deterministic detectors: If T=􏰋t1 t2 ··· tn 􏰌∈Rm×n satisfies tk ≽ 0 and 1T tk = 1, then T can be expressed as T = θ1T1 +···+θNTN, is a convex constraint in p. Optimal detector design 396 7 Statistical estimation where T􏰉i is a zero-one matrix with exactly one element equal to one per column, and θi ≥ 0, Ni=1 θi = 1. What is the maximum number of deterministic detectors N we may need? We can interpret this convex decomposition as follows. The randomized detector can be realized as a bank of N deterministic detectors. When we observe X = k, the estimator chooses a random index from the set {1,...,N}, with probability prob(j = i) = θi, and then uses deterministic detector Tj. 7.12 Optimal action. In detector design, we are given a matrix P ∈ Rn×m (whose columns are probability distributions), and then design a matrix T ∈ Rm×n (whose columns are probability distributions), so that D = TP has large diagonal elements (and small off- diagonal elements). In this problem we study the dual problem: Given P, find a matrix S ∈ Rm×n (whose columns are probability distributions), so that D ̃ = PS ∈ Rn×n has large diagonal elements (and small off-diagonal elements). To make the problem specific, we take the objective to be maximizing the minimum element of D ̃ on the diagonal. We can interpret this problem as follows. There are n outcomes, which depend (stochas- tically) on which of m inputs or actions we take: Pij is the probability that outcome i occurs, given action j. Our goal is find a (randomized) strategy that, to the extent pos- sible, causes any specified outcome to occur. The strategy is given by the matrix S: Sji is the probability that we take action j, when we want outcome i to occur. The matrix D ̃ gives the action error probability matrix: D ̃ij is the probability that outcome i occurs, when we want outcome j to occur. In particular, D ̃ii is the probability that outcome i occurs, when we want it to occur. Show that this problem has a simple analytical solution. Show that (unlike the corre- sponding detector problem) there is always an optimal solution that is deterministic. Hint. Show that the problem is separable in the columns of S. Chebyshev and Chernoff bounds 7.13 Chebyshev-type inequalities on a finite set. Assume X is a random variable taking values in the set {α1,α2,...,αm}, and let S be a subset of {α1,...,αm}. The distribution of X is unknown, but we are given the expected values of n functions fi: Efi(X) = bi, i = 1,...,n. 
(7.32) Show that the optimal value of the LP 􏰉n minimize x0 + 􏰉i=1 bixi n subjectto x0 +􏰉i=1fi(α)xi ≥1, α∈S x0+ ni=1fi(α)xi≥0, α̸∈S, with variables x0, . . . , xn, is an upper bound on prob(X ∈ S), valid for all distributions that satisfy (7.32). Show that there always exists a distribution that achieves the upper bound. Chapter 8 Geometric problems 8.1 Projection on a set Thedistanceofapointx0 ∈Rn toaclosedsetC⊆Rn,inthenorm∥·∥,is defined as dist(x0,C)=inf{∥x0 −x∥|x∈C}. The infimum here is always achieved. We refer to any point z ∈ C which is closest to x0, i.e., satisfies ∥z − x0∥ = dist(x0, C), as a projection of x0 on C. In general there can be more than one projection of x0 on C, i.e., several points in C closest to x0. In some special cases we can establish that the projection of a point on a set is unique. For example, if C is closed and convex, and the norm is strictly convex (e.g., the Euclidean norm), then for any x0 there is always exactly one z ∈ C which is closest to x0. As an interesting converse, we have the following result: If for every x0 there is a unique Euclidean projection of x0 on C, then C is closed and convex (see exercise 8.2). We use the notation PC : Rn → Rn to denote any function for which PC(x0) is a projection of x0 on C, i.e., for all x0, PC (x0) ∈ C, ∥x0 − PC (x0)∥ = dist(x0, C). In other words, we have PC(x0) = argmin{∥x − x0∥ | x ∈ C}. We refer to PC as projection on C. Example 8.1 Projection on the unit square in R2. Consider the (boundary of the) unitsquareinR2,i.e.,C={x∈R2 |∥x∥∞ =1}. Wetakex0 =0. In the l1-norm, the four points (1, 0), (0, −1), (−1, 0), and (0, 1) are closest to x0 = 0, with distance 1, so we have dist(x0 , C ) = 1 in the l1 -norm. The same statement holds for the l2-norm. In the l∞-norm, all points in C lie at a distance 1 from x0, and dist(x0, C) = 1. 398 8 Geometric problems Example 8.2 Projection onto rank-k matrices. Consider the set of m × n matrices with rank less than or equal to k, C ={X ∈Rm×n | rankX ≤k}, with k ≤ min{m,n}, and let X0 ∈ Rm×n. We can find a projection of X0 on C, in the (spectral or maximum singular value) norm ∥ · ∥2, via the singular value decomposition. Let 􏰊r σiuiviT be th􏰉e singular value decomposition of X0, where r = rankX0. Then the matrix X0 = Y = min{k,r} σiuiviT is a projection of X0 on C. i=1 Projecting a point on a convex set If C is convex, then we can compute the projection PC(x0) and the distance dist(x0 , C ) by solving a convex optimization problem. We represent the set C by a set of linear equalities and convex inequalities Ax = b, fi(x) ≤ 0, i = 1,...,m, (8.1) and find the projection of x0 on C by solving the problem minimize ∥x − x0∥ subject to fi(x) ≤ 0, i = 1,...,m (8.2) Ax = b, with variable x. This problem is feasible if and only if C is nonempty; when it is feasible, its optimal value is dist(x0,C), and any optimal point is a projection of x0 on C. Euclidean projection on a polyhedron The projection of x0 on a polyhedron described by linear inequalities Ax ≼ b can be computed by solving the QP minimize ∥x − x0∥2 subject to Ax ≼ b. Some special cases have simple analytical solutions. • The Euclidean projection of x0 on a hyperplane C = {x | aT x = b} is given by PC (x0) = x0 + (b − aT x0)a/∥a∥2. • The Euclidean projectio􏰆n of x0 on a halfspace C = {x | aT x ≤ b} is given by P (x )= x0 +(b−aTx0)a/∥a∥2 aTx0 >b C0 x0 aTx0≤b.
i=1
8.1.1

8.1 Projection on a set 399
• The Euclidean projection of x0 on a rectangle C = {x | l ≼ x ≼ u} (where
l≺u)isgivenby
 lk x0k ≤lk PC(x0)k = x0k lk ≤ x0k ≤ uk
uk x0k ≥uk. Euclidean projection on a proper cone
Let x = PK(x0) denote the Euclidean projection of a point x0 on a proper cone K. The KKT conditions of
are given by
minimize subject to
∥x − x0∥2 x ≽K 0
z≽K∗ 0, zTx=0.
x0 =x+ −x−, x+ ≽K 0, x− ≽K∗ 0, xT+x− =0.
In other words, by projecting x0 on the cone K, we decompose it into the difference of two orthogonal elements: one nonnegative with respect to K (and which is the projection of x0 on K), and the other nonnegative with respect to K∗.
Some specific examples:
• For K = Rn+, we have PK(x0)k = max{x0k,0}. The Euclidean projection of a vector onto the nonnegative orthant is found by replacing each negative component with 0.
• For K = Sn+, and the Euclidean (or Frobenius) norm ∥·∥F , we have PK (X0) = 􏰉ni=1 max{0, λi}viviT , where X0 = 􏰉ni=1 λiviviT is the eigenvalue decomposi- tion of X0. To project a symmetric matrix onto the positive semidefinite cone, we form its eigenvalue expansion and drop terms associated with negative eigenvalues. This matrix is also the projection onto the positive semidefinite cone in the l2-, or spectral norm.
8.1.2 Separating a point and a convex set
Suppose C is a closed convex set described by the equalities and inequalities (8.1). If x0 ∈ C, then dist(x0,C) = 0, and the optimal point for the problem (8.2) is x0. If x0 ̸∈ C then dist(x0,C) > 0, and the optimal value of the problem (8.2) is positive. In this case we will see that any dual optimal point provides a separating hyperplane between the point x0 and the set C.
The link between projecting a point on a convex set and finding a hyperplane that separates them (when the point is not in the set) should not be surprising. Indeed, our proof of the separating hyperplane theorem, given in §2.5.1, relies on
x≽K 0, x−x0 =z,
Introducing the notation x+ = x and x− = z, we can express these conditions as

400
8 Geometric problems
x0
We first express (8.2) as
minimize subject to
∥y∥
fi(x) ≤ 0, Ax = b
x0 − x = y
i = 1,…,m
with variables x and y. The Lagrangian of this problem is
􏰊m i=1
L(x,y,λ,μ,ν)=∥y∥+ and the dual function is
λifi(x)+νT(Ax−b)+μT(x0 −x−y)
g(λ,μ,ν)=􏰆 infx􏰀􏰉mi=1λifi(x)+νT(Ax−b)+μT(x0 −x)􏰁 −∞
so we obtain the dual problem
∥μ∥∗ ≤1 otherwise,
PC(x0) C
Figure 8.1 A point x0 and its Euclidean projection PC(x0) on a convex set C. The hyperplane midway between the two, with normal vector PC (x0) − x0, strictly separates the point and the set. This property does not hold for general norms; see exercise 8.4.
finding the Euclidean distance between the sets. If PC(x0) denotes the Euclidean projection of x0 on C, where x0 ̸∈ C, then the hyperplane
(PC (x0) − x0)T (x − (1/2)(x0 + PC (x0))) = 0
(strictly) separates x0 from C, as illustrated in figure 8.1. In other norms, however, the clearest link between the projection problem and the separating hyperplane problem is via Lagrange duality.
maximize μT x0 + infx 􏰀􏰉mi=1 λifi(x) + νT (Ax − b) − μT x􏰁 subject to λ ≽ 0
∥μ∥∗ ≤ 1,
with variables λ, μ, ν. We can interpret the dual problem as follows. Suppose λ, μ, ν are dual feasible with a positive dual objective value, i.e., λ ≽ 0, ∥μ∥∗ ≤ 1,

8.1 Projection on a set
and
μTx0 −μTx+
for all x. This implies that μT x0 > μT x for x ∈ C, and therefore μ defines a strictly separating hyperplane. In particular, suppose (8.2) is strictly feasible, so strong duality holds. If x0 ̸∈ C, the optimal value is positive, and any dual optimal solution defines a strictly separating hyperplane.
Note that this construction of a separating hyperplane, via duality, works for any norm. In contrast, the simple construction described above only works for the Euclidean norm.
401
Separating a point from a polyhedron
The dual problem of
is
minimize subject to
maximize subject to
∥y∥
Ax ≼ b
x0 − x = y
μT x0 − bT λ AT λ = μ ∥μ∥∗ ≤ 1 λ≽0
(Ax0 − b)T λ ∥AT λ∥∗ ≤ 1 λ ≽ 0.
which can be further simplified as
maximize subject to
􏰊m i=1
λifi(x)+νT(Ax−b)>0
It is easily verified that if the dual objective is positive, then AT λ is the normal vector to a separating hyperplane: If Ax ≼ b, then
(AT λ)T x = λT (Ax) ≤ λT b < λT Ax0, so μ = AT λ defines a separating hyperplane. 8.1.3 Projection and separation via indicator and support functions The ideas described above in §8.1.1 and §8.1.2 can be expressed in a compact form in terms of the indicator function IC and the support function SC of the set C, defined as 􏰆 0 x ∈ C SC (x) = sup xT y, IC (x) = +∞ x ̸∈ C. y∈C The problem of projecting x0 on a closed convex set C can be expressed compactly as minimize ∥x − x0∥ subject to IC (x) ≤ 0, 402 8 Geometric problems 8.2 subject to ∥z∥∗ ≤ 1. If z is dual optimal with a positive objective value, then zT x0 > zT x for all x ∈ C,
i.e., z defines a separating hyperplane.
Distance between sets
The distance between two sets C and D, in a norm ∥ · ∥, is defined as dist(C,D)=inf{∥x−y∥|x∈C, y∈D}.
The two sets C and D do not intersect if dist(C,D) > 0. They intersect if dist(C, D) = 0 and the infimum in the definition is attained (which is the case, for example, if the sets are closed and one of the sets is bounded).
The distance between sets can be expressed in terms of the distance between a point and a set,
dist(C, D) = dist(0, D − C),
so the results of the previous section can be applied. In this section, however, we derive results specifically for problems involving distance between sets. This allows us to exploit the structure of the set C − D, and makes the interpretation easier.
Computing the distance between convex sets
Suppose C and D are described by two sets of convex inequalities C={x|fi(x)≤0, i=1,…,m}, D={x|gi(x)≤0, i=1,…,p}.
8.2.1
or, equivalently, as
=
so we obtain the dual problem
minimize subject to
∥y∥
IC (x) ≤ 0 x0 − x = y
where the variables are x and y. The dual function of this problem is g(z,λ) = inf􏰀∥y∥+λIC(x)+zT(x0 −x−y)􏰁
x,y
= 􏰆􏰆zTx0+infx􏰀−zTx+IC(x)􏰁 ∥z∥∗≤1, λ≥0
−∞ otherwise zTx0−SC(z) ∥z∥∗≤1, λ≥0
−∞ otherwise maximize zT x0 − SC (z)

8.2 Distance between sets
403
Figure 8.2 Euclidean distance between polyhedra C and D. The dashed line connects the two points in C and D, respectively, that are closest to each other in Euclidean norm. These points can be found by solving a QP.
(We can include linear equalities, but exclude them here for simplicity.) We can find dist(C, D) by solving the convex optimization problem
minimize ∥x − y∥
subject to fi(x) ≤ 0, i = 1,…,m (8.3)
gi(y) ≤ 0, i = 1,…,p. Euclidean distance between polyhedra
Let C and D be two polyhedra described by the sets of linear inequalities A1x ≼ b1 and A2x ≼ b2, respectively. The distance between C and D is the distance between the closest pair of points, one in C and the other in D, as illustrated in figure 8.2. The distance between them is the optimal value of the problem
minimize ∥x − y∥2
subject to A1x ≼ b1 (8.4)
A2y ≼ b2.
We can square the objective to obtain an equivalent QP.
8.2.2 Separating convex sets
The dual of the problem (8.3) of finding the distance between two convex sets has an interesting geometric interpretation in terms of separating hyperplanes between the sets. We first express the problem in the following equivalent form:
The dual function is
∥w∥
fi(x) ≤ 0, gi(y) ≤ 0, x − y = w.
􏰇􏰊m􏰊p 􏰈
minimize subject to
i = 1,…,m i = 1,…,p
(8.5)
g(λ, z, μ) = inf ∥w∥ + λifi(x) + μigi(y) + zT (x − y − w)
x,y,w
i=1 i=1
D
C

404
8 Geometric problems
= 􏰆 infx 􏰀􏰉mi=1 λifi(x) + zT x􏰁 + infy 􏰀􏰉pi=1 μigi(y) − zT y􏰁 ∥z∥∗ ≤ 1 −∞ otherwise,
which results in the dual problem
maximize infx 􏰀􏰉mi=1 λifi(x) + zT x􏰁 + infy 􏰀􏰉pi=1 μigi(y) − zT y􏰁
subject to ∥z∥∗ ≤ 1 (8.6)
λ≽0, μ≽0.
We can interpret this geometrically as follows. If λ, μ are dual feasible with a
8.2.3
If λ, μ, and z are dual feasible, then for all x ∈ C, y ∈ D,
zT x = −λT A1x ≥ −λT b1, zT y = μT A2x ≤ μT b2,
and, if the dual objective value is positive, zTx−zTy≥−λTb1 −μTb2 >0,
i.e., z defines a separating hyperplane.
Distance and separation via indicator and support functions
The ideas described above in §8.2.1 and §8.2.2 can be expressed in a compact form using indicator and support functions. The problem of finding the distance between two convex sets can be posed as the convex problem
minimize ∥x − y∥ subject to IC (x) ≤ 0
ID(y) ≤ 0,
positive objective value, then
A2x ≼ b2, we find the dual problem maximize
subjectto
− b T1 λ − b T2 μ AT1λ+z=0
A T2 μ − z = 0 ∥z∥∗ ≤ 1 λ≽0, μ≽0.
􏰊m i=1
λifi(x) + zT x +
􏰊p i=1
μigi(y) − zT y > 0
forallxandy. Inparticular,forx∈C andy∈D,wehavezTx−zTy>0,sowe see that z defines a hyperplane that strictly separates C and D.
Therefore, if strong duality holds between the two problems (8.5) and (8.6) (which is the case when (8.5) is strictly feasible), we can make the following con- clusion. If the distance between the two sets is positive, then they can be strictly separated by a hyperplane.
Separating polyhedra
Applying these duality results to sets defined by linear inequalities A1x ≼ b1 and

8.3 Euclidean distance and angle problems 405
which is equivalent to
minimize subject to
∥w∥
IC (x) ≤ 0 ID(y) ≤ 0 x − y = w.
The dual of this problem is
maximize −SC (−z) − SD (z)
subject to ∥z∥∗ ≤ 1.
If z is dual feasible with a positive objective value, then SD(z) < −SC(−z), i.e., supzTx< inf zTx. x∈C In other words, z defines a hyperplane that strictly separates C and D. 8.3 Euclidean distance and angle problems Suppose a1, . . . , an is a set of vectors in Rn, which we assume (for now) have known Euclidean lengths l1 = ∥a1∥2, ..., ln = ∥an∥2. We will refer to the set of vectors as a configuration, or, when they are indepen- dent, a basis. In this section we consider optimization problems involving various geometric properties of the configuration, such as the Euclidean distances between pairs of the vectors, the angles between pairs of the vectors, and various geometric measures of the conditioning of the basis. 8.3.1 Gram matrix and realizability The lengths, distances, and angles can be expressed in terms of the Gram matrix associated with the vectors a1, . . . , an, given by G=ATA, A=􏰋a1···an􏰌, so that Gij = aTi aj . The diagonal entries of G are given by Gii =li2, i=1,...,n, which (for now) we assume are known and fixed. The distance dij between ai and aj is dij = ∥ai−aj∥2 = (li2+lj2−2aTiaj)1/2 = (li2+lj2−2Gij)1/2. x∈D 406 8 Geometric problems Conversely, we can express Gij in terms of dij as Gij = li2 + lj2 − d2ij , 2 which we note, for future reference, is an affine function of d2ij. The correlation coefficient ρij between (nonzero) ai and aj is given by ρij= aTiaj =Gij, ∥ai ∥2 ∥aj ∥2 li lj so that Gij = liljρij is a linear function of ρij. The angle θij between (nonzero) ai and aj is given by θij = cos−1 ρij = cos−1(Gij/(lilj)), where we take cos−1 ρ ∈ [0, π]. Thus, we have Gij = lilj cos θij . The lengths, distances, and angles are invariant under orthogonal transforma- tions: If Q ∈ Rn×n is orthogonal, then the set of vectors Qai, . . . , Qan has the same Gram matrix, and therefore the same lengths, distances, and angles. Realizability The Gram matrix G = AT A is, of course, symmetric and positive semidefinite. The converse is a basic result of linear algebra: A matrix G ∈ Sn is the Gram matrix of a set of vectors a1,...,an if and only if G ≽ 0. When G ≽ 0, we can construct a configuration with Gram matrix G by finding a matrix A with AT A = G. One solution of this equation is the symmetric squareroot A = G1/2. When G ≻ 0, we can find a solution via the Cholesky factorization of G: If LLT = G, then we can take A = LT . Moreover, we can construct all configurations with the given Gram matrix G, given any one solution A, by orthogonal transformation: If A ̃T A ̃ = G is any solution, then A ̃ = QA for some orthogonal matrix Q. Thus, a set of lengths, distances, and angles (or correlation coefficients) is real- izable, i.e., those of some configuration, if and only if the associated Gram matrix G is positive semidefinite, and has diagonal elements l12, . . . , ln2 . We can use this fact to express several geometric problems as convex optimiza- tion problems, with G ∈ Sn as the optimization variable. Realizability imposes the constraint G ≽ 0 and Gii = li2, i = 1,...,n; we list below several other convex constraints and objectives. Angle and distance constraints We can fix an angle to have a certain value, θij = α, via the linear equality constraint Gij = lilj cosα. More generally, we can impose a lower and upper bound on an angle, α ≤ θij ≤ β, by the constraint lilj cosα ≥ Gij ≥ lilj cosβ, which is a pair of linear inequalities on G. (Here we use the fact that cos−1 is monotone decreasing.) 
We can maximize or minimize a particular angle θij, by minimizing or maximizing Gij (again using monotonicity of cos−1). 8.3 Euclidean distance and angle problems 407 In a similar way we can impose constraints on the distances. To require that dij lies in an interval, we use dmin ≤ dij ≤ dmax ⇐⇒ d2min ≤ d2ij ≤ d2max ⇐⇒ d2min≤li2+lj2−2Gij≤d2max, which is a pair of linear inequalities on G. We can minimize or maximize a distance, by minimizing or maximizing its square, which is an affine function of G. As a simple example, suppose we are given ranges (i.e., an interval of possible values) for some of the angles and some of the distances. We can then find the minimum and maximum possible value of some other angle, or some other distance, over all configurations, by solving two SDPs. We can reconstruct the two extreme configurations by factoring the resulting optimal Gram matrices. Singular value and condition number constraints The singular values of A, σ1 ≥ ··· ≥ σn, are the squareroots of the eigenvalues λ1 ≥ ··· ≥ λn of G. Therefore σ12 is a convex function of G, and σn2 is a concave function of G. Thus we can impose an upper bound on the maximum singular value of A, or minimize it; we can impose a lower bound on the minimum singular value, or maximize it. The condition number of A, σ1/σn, is a quasiconvex function of G, so we can impose a maximum allowable value, or minimize it over all configurations that satisfy the other geometric constraints, by quasiconvex optimization. Roughly speaking, the constraints we can impose as convex constraints on G are those that require a1, . . . , an to be a well conditioned basis. Dual basis When G ≻ 0, a1,...,an form a basis for􏰆Rn. The associated dual basis is b1,...,bn, where The dual basis vectors b1, . . . , bn are simply the rows of the matrix A−1. As a result, the Gram matrix associated with the dual basis is G−1. We can express several geometric conditions on the dual basis as convex con- straints on G. The (squared) lengths of the dual basis vectors, ∥bi∥2 = eTi G−1ei, are convex functions of G, and so can be minimized. The trace of G−1, another convex function of G, gives the sum of the squares of the lengths of the dual basis vectors (and is another measure of a well conditioned basis). Ellipsoid and simplex volume The volume of the ellipsoid {Au | ∥u∥2 ≤ 1}, which gives another measure of how well conditioned the basis is, is given by γ(det(AT A))1/2 = γ(det G)1/2, b Ti a j = 1 i=j 0 i̸=j. 408 8 Geometric problems where γ is the volume of the unit ball in Rn. The log volume is therefore log γ + (1/2) log det G, which is a concave function of G. We can therefore maximize the volume of the image ellipsoid, over a convex set of configurations, by maximizing log det G. The same holds for any set in Rn. The volume of the image under A is its volume, multiplied by the factor (det G)1/2 . For example, consider the image under A of the unit simplex conv{0,e1,...,en}, i.e., the simplex conv{0,a1,...,an}. The volume of this simplex is given by γ(detG)1/2, where γ is the volume of the unit simplex in Rn. We can maximize the volume of this simplex by maximizing log det G. Problems involving angles only Suppose we only care about the angles (or correlation coefficients) between the vectors, and do not specify the lengths or distances between them. In this case it is intuitively clear that we can simply assume the vectors ai have length li = 1. 
This is easily verified: The Gram matrix has the form G = diag(l)C diag(l), where l is the vector of lengths, and C is the correlation matrix, i.e., Cij = cos θij. It follows that if G ≽ 0 for any set of positive lengths, then G ≽ 0 for all sets of positive lengths, and in particular, this occurs if and only if C ≽ 0 (which is the same as assuming that all lengths are one).

Thus, a set of angles θij ∈ [0, π], i, j = 1, . . . , n is realizable if and only if C ≽ 0, which is a linear matrix inequality in the correlation coefficients.

As an example, suppose we are given lower and upper bounds on some of the angles (which is equivalent to imposing lower and upper bounds on the correlation coefficients). We can then find the minimum and maximum possible value of some other angle, over all configurations, by solving two SDPs.

Example 8.3 Bounding correlation coefficients. We consider an example in R4, where we are given

0.6 ≤ ρ12 ≤ 0.9,    0.8 ≤ ρ13 ≤ 0.9,
0.5 ≤ ρ24 ≤ 0.7,    −0.8 ≤ ρ34 ≤ −0.4.   (8.7)

To find the minimum and maximum possible values of ρ14, we solve the two SDPs

minimize/maximize   ρ14
subject to          (8.7),
                    [ 1    ρ12  ρ13  ρ14 ]
                    [ ρ12  1    ρ23  ρ24 ]
                    [ ρ13  ρ23  1    ρ34 ]
                    [ ρ14  ρ24  ρ34  1   ] ≽ 0,

with variables ρ12, ρ13, ρ14, ρ23, ρ24, ρ34. The minimum and maximum values (to two significant digits) are −0.39 and 0.23, with corresponding correlation matrices

[  1.00  0.60   0.87  −0.39 ]      [ 1.00  0.71   0.80   0.23 ]
[  0.60  1.00   0.33   0.50 ]      [ 0.71  1.00   0.31   0.59 ]
[  0.87  0.33   1.00  −0.55 ]      [ 0.80  0.31   1.00  −0.40 ]
[ −0.39  0.50  −0.55   1.00 ],     [ 0.23  0.59  −0.40   1.00 ].
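These two SDPs are easy to prototype; the following sketch (not from the text) uses the CVXPY Python package, with 0-based indices, so ρ14 corresponds to the entry C[0, 3]:

    import cvxpy as cp

    # Correlation matrix of the configuration; realizability is C >> 0
    # with unit diagonal. The bounds below are those of (8.7).
    C = cp.Variable((4, 4), symmetric=True)
    constraints = [C >> 0, cp.diag(C) == 1,
                   0.6 <= C[0, 1], C[0, 1] <= 0.9,
                   0.8 <= C[0, 2], C[0, 2] <= 0.9,
                   0.5 <= C[1, 3], C[1, 3] <= 0.7,
                   -0.8 <= C[2, 3], C[2, 3] <= -0.4]

    lo = cp.Problem(cp.Minimize(C[0, 3]), constraints).solve()
    hi = cp.Problem(cp.Maximize(C[0, 3]), constraints).solve()
    print(lo, hi)   # approximately -0.39 and 0.23

The reported extremes should match the values above to solver accuracy.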
8.3.3 Euclidean distance problems

In a Euclidean distance problem, we are concerned only with the distances between the vectors, dij, and do not care about the lengths of the vectors, or about the angles between them. These distances, of course, are invariant not only under orthogonal transformations, but also translation: The configuration ã1 = a1 + b, . . . , ãn = an + b has the same distances as the original configuration, for any b ∈ Rn. In particular, for the choice

b = −(1/n) Σ_{i=1}^n ai = −(1/n)A1,

we see that the ãi have the same distances as the original configuration, and also satisfy Σ_{i=1}^n ãi = 0. It follows that in a Euclidean distance problem, we can assume, without any loss of generality, that the average of the vectors a1, . . . , an is zero, i.e., A1 = 0.

We can solve Euclidean distance problems by considering the lengths (which cannot occur in the objective or constraints of a Euclidean distance problem) as free variables in the optimization problem. Here we rely on the fact that there is a configuration with distances dij ≥ 0 if and only if there are lengths l1, . . . , ln for which G ≽ 0, where Gij = (li2 + lj2 − d2ij)/2.

We define z ∈ Rn as zi = li2, and D ∈ Sn by Dij = d2ij (with, of course, Dii = 0). The condition that G ≽ 0 for some choice of lengths can be expressed as

G = (z1T + 1zT − D)/2 ≽ 0 for some z ≽ 0,   (8.8)

which is an LMI in D and z. A matrix D ∈ Sn, with nonnegative elements, zero diagonal, and which satisfies (8.8), is called a Euclidean distance matrix. A matrix is a Euclidean distance matrix if and only if its entries are the squares of the Euclidean distances between the vectors of some configuration. (Given a Euclidean distance matrix D and the associated length squared vector z, we can reconstruct one, or all, configurations with the given pairwise distances using the method described above.)

The condition (8.8) turns out to be equivalent to the simpler condition that D is negative semidefinite on 1⊥, i.e.,

(8.8) ⇐⇒ uT Du ≤ 0 for all u with 1T u = 0
      ⇐⇒ (I − (1/n)11T)D(I − (1/n)11T) ≼ 0.

This simple matrix inequality, along with Dij ≥ 0, Dii = 0, is the classical characterization of a Euclidean distance matrix.

To see the equivalence, recall that we can assume A1 = 0, which implies that 1T G1 = 1T AT A1 = 0. It follows that G ≽ 0 if and only if G is positive semidefinite on 1⊥, i.e.,

0 ≼ (I − (1/n)11T)G(I − (1/n)11T)
  = (1/2)(I − (1/n)11T)(z1T + 1zT − D)(I − (1/n)11T)
  = −(1/2)(I − (1/n)11T)D(I − (1/n)11T),

which is the simplified condition.

In summary, a matrix D ∈ Sn is a Euclidean distance matrix, i.e., gives the squared distances between a set of n vectors in Rn, if and only if

Dii = 0,  i = 1, . . . , n,      Dij ≥ 0,  i, j = 1, . . . , n,
(I − (1/n)11T)D(I − (1/n)11T) ≼ 0,

which is a set of linear equalities, linear inequalities, and a matrix inequality in D. Therefore we can express any Euclidean distance problem that is convex in the squared distances as a convex problem with variable D ∈ Sn.
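The classical characterization lends itself to a direct numerical test; here is a small NumPy sketch (illustrative only, with made-up data):

    import numpy as np

    def is_edm(D, tol=1e-8):
        """Check the characterization above: zero diagonal, nonnegative
        entries, and D negative semidefinite on the subspace 1-perp."""
        n = D.shape[0]
        if not np.allclose(np.diag(D), 0) or (D < -tol).any():
            return False
        J = np.eye(n) - np.ones((n, n)) / n     # projector onto 1-perp
        return np.linalg.eigvalsh(J @ D @ J).max() <= tol

    # Squared pairwise distances of an actual configuration form an EDM.
    X = np.random.randn(5, 3)                   # 5 points in R^3
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    print(is_edm(D))                            # True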
8.4 Extremal volume ellipsoids

Suppose C ⊆ Rn is bounded and has nonempty interior. In this section we consider the problems of finding the maximum volume ellipsoid that lies inside C, and the minimum volume ellipsoid that covers C. Both problems can be formulated as convex programming problems, but are tractable only in special cases.

8.4.1 The Löwner-John ellipsoid

The minimum volume ellipsoid that contains a set C is called the Löwner-John ellipsoid of the set C, and is denoted Elj. To characterize Elj, it will be convenient to parametrize a general ellipsoid as

E = {v | ∥Av + b∥2 ≤ 1},   (8.9)

i.e., the inverse image of the Euclidean unit ball under an affine mapping. We can assume without loss of generality that A ∈ Sn++, in which case the volume of E is proportional to det A−1. The problem of computing the minimum volume ellipsoid containing C can be expressed as

minimize   log det A−1
subject to sup_{v∈C} ∥Av + b∥2 ≤ 1,   (8.10)

where the variables are A ∈ Sn and b ∈ Rn, and there is an implicit constraint A ≻ 0. The objective and constraint functions are both convex in A and b, so the problem (8.10) is convex. Evaluating the constraint function in (8.10), however, involves solving a convex maximization problem, and is tractable only in certain special cases.

Minimum volume ellipsoid covering a finite set

We consider the problem of finding the minimum volume ellipsoid that contains the finite set C = {x1, . . . , xm} ⊆ Rn. An ellipsoid covers C if and only if it covers its convex hull, so finding the minimum volume ellipsoid that covers C is the same as finding the minimum volume ellipsoid containing the polyhedron conv{x1, . . . , xm}. Applying (8.10), we can write this problem as

minimize   log det A−1
subject to ∥Axi + b∥2 ≤ 1,  i = 1, . . . , m,   (8.11)

where the variables are A ∈ Sn and b ∈ Rn, and we have the implicit constraint A ≻ 0. The norm constraints ∥Axi + b∥2 ≤ 1, i = 1, . . . , m, are convex inequalities in the variables A and b. They can be replaced with the squared versions, ∥Axi + b∥22 ≤ 1, which are convex quadratic inequalities in A and b.
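Problem (8.11) can be prototyped with the CVXPY package; the sketch below (illustrative, not from the text, with made-up data) maximizes log det A rather than minimizing log det A−1:

    import cvxpy as cp
    import numpy as np

    # Data: m points in R^n (made up for illustration).
    np.random.seed(0)
    X = np.random.randn(20, 2)

    n = 2
    A = cp.Variable((n, n), symmetric=True)   # A in S^n, implicitly A > 0
    b = cp.Variable(n)

    cons = [cp.norm(A @ x + b) <= 1 for x in X]
    prob = cp.Problem(cp.Maximize(cp.log_det(A)), cons)
    prob.solve()

    # The ellipsoid is {v | ||Av + b||_2 <= 1}; its center is -A^{-1} b.
    center = -np.linalg.solve(A.value, b.value)
    print(center)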
Minimum volume ellipsoid covering union of ellipsoids

Minimum volume covering ellipsoids can also be computed efficiently for certain sets C that are defined by quadratic inequalities. In particular, it is possible to compute the Löwner-John ellipsoid for a union or sum of ellipsoids.

As an example, consider the problem of finding the minimum volume ellipsoid Elj that contains the ellipsoids E1, . . . , Em (and therefore, the convex hull of their union). The ellipsoids E1, . . . , Em will be described by (convex) quadratic inequalities:

Ei = {x | xT Aix + 2bTi x + ci ≤ 0},  i = 1, . . . , m,

where Ai ∈ Sn++. We parametrize the ellipsoid Elj as

Elj = {x | ∥Ax + b∥2 ≤ 1} = {x | xT AT Ax + 2(AT b)T x + bT b − 1 ≤ 0},

where A ∈ Sn and b ∈ Rn. Now we use a result from §B.2, that Ei ⊆ Elj if and only if there exists a τ ≥ 0 such that

[ A2 − τAi       Ab − τbi      ]
[ (Ab − τbi)T    bT b − 1 − τci ] ≼ 0.

The volume of Elj is proportional to det A−1, so we can find the minimum volume ellipsoid that contains E1, . . . , Em by solving

minimize   log det A−1
subject to τ1 ≥ 0, . . . , τm ≥ 0
           [ A2 − τiAi       Ab − τibi      ]
           [ (Ab − τibi)T    bT b − 1 − τici ] ≼ 0,  i = 1, . . . , m,

or, replacing the variable b by b̃ = Ab,

minimize   log det A−1
subject to τ1 ≥ 0, . . . , τm ≥ 0
           [ A2 − τiAi      b̃ − τibi     0   ]
           [ (b̃ − τibi)T   −1 − τici    b̃T  ]
           [ 0              b̃           −A2  ] ≼ 0,  i = 1, . . . , m,

which is convex in the variables A2 ∈ Sn, b̃, τ1, . . . , τm.

Figure 8.3 The outer ellipse is the boundary of the Löwner-John ellipsoid, i.e., the minimum volume ellipsoid that encloses the points x1, . . . , x6 (shown as dots), and therefore the polyhedron P = conv{x1, . . . , x6}. The smaller ellipse is the boundary of the Löwner-John ellipsoid, shrunk by a factor of n = 2 about its center. This ellipsoid is guaranteed to lie inside P.

Efficiency of Löwner-John ellipsoidal approximation

Let Elj be the Löwner-John ellipsoid of the convex set C ⊆ Rn, which is bounded and has nonempty interior, and let x0 be its center. If we shrink the Löwner-John ellipsoid by a factor of n, about its center, we obtain an ellipsoid that lies inside the set C:

x0 + (1/n)(Elj − x0) ⊆ C ⊆ Elj.

In other words, the Löwner-John ellipsoid approximates an arbitrary convex set, within a factor that depends only on the dimension n. Figure 8.3 shows a simple example.

The factor 1/n cannot be improved without additional assumptions on C. Any simplex in Rn, for example, has the property that its Löwner-John ellipsoid must be shrunk by a factor n to fit inside it (see exercise 8.13).

We will prove this efficiency result for the special case C = conv{x1, . . . , xm}. We square the norm constraints in (8.11) and introduce variables Ã = A2 and b̃ = Ab, to obtain the problem

minimize   log det Ã−1
subject to xTi Ãxi − 2b̃T xi + b̃T Ã−1b̃ ≤ 1,  i = 1, . . . , m.   (8.12)

The KKT conditions for this problem are

Σ_{i=1}^m λi(xixTi − Ã−1b̃b̃T Ã−1) = Ã−1,    Σ_{i=1}^m λi(xi − Ã−1b̃) = 0,
λi ≥ 0,   xTi Ãxi − 2b̃T xi + b̃T Ã−1b̃ ≤ 1,   i = 1, . . . , m,
λi(1 − xTi Ãxi + 2b̃T xi − b̃T Ã−1b̃) = 0,   i = 1, . . . , m.

By a suitable affine change of coordinates, we can assume that Ã = I and b̃ = 0, i.e., the minimum volume ellipsoid is the unit ball centered at the origin. The KKT conditions then simplify to

Σ_{i=1}^m λixixTi = I,    Σ_{i=1}^m λixi = 0,    λi(1 − xTi xi) = 0,  i = 1, . . . , m,

plus the feasibility conditions ∥xi∥2 ≤ 1 and λi ≥ 0. By taking the trace of both sides of the first equation, and using complementary slackness, we also have Σ_{i=1}^m λi = n.

In the new coordinates the shrunk ellipsoid is a ball with radius 1/n, centered at the origin. We need to show that

∥x∥2 ≤ 1/n  ⟹  x ∈ C = conv{x1, . . . , xm}.

Suppose ∥x∥2 ≤ 1/n. From the KKT conditions, we see that

x = Σ_{i=1}^m λi(xT xi)xi = Σ_{i=1}^m λi(xT xi + 1/n)xi = Σ_{i=1}^m μixi,   (8.13)

where μi = λi(xT xi + 1/n). From the Cauchy-Schwarz inequality, we note that

μi = λi(xT xi + 1/n) ≥ λi(−∥x∥2∥xi∥2 + 1/n) ≥ λi(−1/n + 1/n) = 0.

Furthermore,

Σ_{i=1}^m μi = Σ_{i=1}^m λi(xT xi + 1/n) = Σ_{i=1}^m λi/n = 1.

This, along with (8.13), shows that x is a convex combination of x1, . . . , xm, hence x ∈ C.

Efficiency of Löwner-John ellipsoidal approximation for symmetric sets

If the set C is symmetric about a point x0, then the factor 1/n can be tightened to 1/√n:

x0 + (1/√n)(Elj − x0) ⊆ C ⊆ Elj.

Again, the factor 1/√n is tight. The Löwner-John ellipsoid of the cube

C = {x ∈ Rn | −1 ≼ x ≼ 1}

is the ball with radius √n. Scaling down by 1/√n yields a ball enclosed in C, and touching the boundary at x = ±ei.

Approximating a norm by a quadratic norm

Let ∥ · ∥ be any norm on Rn, and let C = {x | ∥x∥ ≤ 1} be its unit ball. Let Elj = {x | xT Ax ≤ 1}, with A ∈ Sn++, be the Löwner-John ellipsoid of C. Since C is symmetric about the origin, the result above tells us that (1/√n)Elj ⊆ C ⊆ Elj. Let ∥ · ∥lj denote the quadratic norm

∥z∥lj = (zT Az)1/2,

whose unit ball is Elj. The inclusions (1/√n)Elj ⊆ C ⊆ Elj are equivalent to the inequalities

∥z∥lj ≤ ∥z∥ ≤ √n ∥z∥lj

for all z ∈ Rn. In other words, the quadratic norm ∥ · ∥lj approximates the norm ∥ · ∥ within a factor of √n. In particular, we see that any norm on Rn can be approximated within a factor of √n by a quadratic norm.

8.4.2 Maximum volume inscribed ellipsoid

We now consider the problem of finding the ellipsoid of maximum volume that lies inside a convex set C, which we assume is bounded and has nonempty interior. To formulate this problem, we parametrize the ellipsoid as the image of the unit ball under an affine transformation, i.e., as

E = {Bu + d | ∥u∥2 ≤ 1}.

Again it can be assumed that B ∈ Sn++, so the volume is proportional to det B. We can find the maximum volume ellipsoid inside C by solving the convex optimization problem

maximize   log det B
subject to sup_{∥u∥2≤1} IC(Bu + d) ≤ 0   (8.14)

in the variables B ∈ Sn and d ∈ Rn, with implicit constraint B ≻ 0.

Maximum volume ellipsoid in a polyhedron

We consider the case where C is a polyhedron described by a set of linear inequalities:

C = {x | aTi x ≤ bi, i = 1, . . . , m}.

To apply (8.14) we first express the constraint in a more convenient form:

sup_{∥u∥2≤1} IC(Bu + d) ≤ 0 ⇐⇒ sup_{∥u∥2≤1} aTi (Bu + d) ≤ bi, i = 1, . . . , m
                             ⇐⇒ ∥Bai∥2 + aTi d ≤ bi, i = 1, . . . , m.

We can therefore formulate (8.14) as a convex optimization problem in the variables B and d:

minimize   log det B−1
subject to ∥Bai∥2 + aTi d ≤ bi,  i = 1, . . . , m.   (8.15)
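Problem (8.15) can be prototyped along the same lines; the following CVXPY sketch (illustrative, with a made-up polyhedron) maximizes log det B instead of minimizing log det B−1:

    import cvxpy as cp
    import numpy as np

    # Polyhedron {x | a_i^T x <= b_i}: the unit box in R^2, for illustration.
    A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
    b = np.ones(4)

    n = 2
    B = cp.Variable((n, n), symmetric=True)   # B in S^n, implicitly B > 0
    d = cp.Variable(n)

    cons = [cp.norm(B @ A[i]) + A[i] @ d <= b[i] for i in range(len(b))]
    cp.Problem(cp.Maximize(cp.log_det(B)), cons).solve()
    print(d.value)   # center of the inscribed ellipsoid {Bu + d | ||u||_2 <= 1}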
Maximum volume ellipsoid in an intersection of ellipsoids

We can also find the maximum volume ellipsoid E that lies in the intersection of m ellipsoids E1, . . . , Em. We will describe E as E = {Bu + d | ∥u∥2 ≤ 1} with B ∈ Sn++, and the other ellipsoids via convex quadratic inequalities,

Ei = {x | xT Aix + 2bTi x + ci ≤ 0},  i = 1, . . . , m,

where Ai ∈ Sn++. We first work out the condition under which E ⊆ Ei. This occurs if and only if

sup_{∥u∥2≤1} ((d + Bu)T Ai(d + Bu) + 2bTi (d + Bu) + ci)
  = dT Aid + 2bTi d + ci + sup_{∥u∥2≤1} (uT BAiBu + 2(Aid + bi)T Bu)
  ≤ 0.

From §B.1,

sup_{∥u∥2≤1} (uT BAiBu + 2(Aid + bi)T Bu) ≤ −(dT Aid + 2bTi d + ci)

if and only if there exists a λi ≥ 0 such that

[ −λi − dT Aid − 2bTi d − ci   (Aid + bi)T B ]
[ B(Aid + bi)                  λiI − BAiB    ] ≽ 0.

The maximum volume ellipsoid contained in E1, . . . , Em can therefore be found by solving the problem

minimize   log det B−1
subject to [ −λi − dT Aid − 2bTi d − ci   (Aid + bi)T B ]
           [ B(Aid + bi)                  λiI − BAiB    ] ≽ 0,  i = 1, . . . , m,

with variables B ∈ Sn, d ∈ Rn, and λ ∈ Rm, or, equivalently,

minimize   log det B−1
subject to [ −λi − ci + bTi A−1i bi   0     (d + A−1i bi)T ]
           [ 0                        λiI   B              ]
           [ d + A−1i bi              B     A−1i           ] ≽ 0,  i = 1, . . . , m.

Efficiency of ellipsoidal inner approximations

Approximation efficiency results, similar to the ones for the Löwner-John ellipsoid, hold for the maximum volume inscribed ellipsoid. If C ⊆ Rn is convex, bounded, with nonempty interior, then the maximum volume inscribed ellipsoid, expanded by a factor of n about its center, covers the set C. The factor n can be tightened to √n if the set C is symmetric about a point. An example is shown in figure 8.4.

Figure 8.4 The maximum volume ellipsoid (shown shaded) inscribed in a polyhedron P. The outer ellipse is the boundary of the inner ellipsoid, expanded by a factor n = 2 about its center. The expanded ellipsoid is guaranteed to cover P.

8.4.3 Affine invariance of extremal volume ellipsoids

The Löwner-John ellipsoid and the maximum volume inscribed ellipsoid are both affinely invariant. If Elj is the Löwner-John ellipsoid of C, and T ∈ Rn×n is nonsingular, then the Löwner-John ellipsoid of TC is TElj. A similar result holds for the maximum volume inscribed ellipsoid.

To establish this result, let E be any ellipsoid that covers C. Then the ellipsoid TE covers TC. The converse is also true: Every ellipsoid that covers TC has the form TE, where E is an ellipsoid that covers C. In other words, the relation Ẽ = TE gives a one-to-one correspondence between the ellipsoids covering TC and the ellipsoids covering C. Moreover, the volumes of the corresponding ellipsoids are all related by the ratio |det T|, so in particular, if E has minimum volume among ellipsoids covering C, then TE has minimum volume among ellipsoids covering TC.

8.5 Centering

8.5.1 Chebyshev center

Let C ⊆ Rn be bounded and have nonempty interior, and x ∈ C. The depth of a point x ∈ C is defined as

depth(x, C) = dist(x, Rn \ C),

i.e., the distance to the closest point in the exterior of C. The depth gives the radius of the largest ball, centered at x, that lies in C. A Chebyshev center of the set C is defined as any point of maximum depth in C:

xcheb(C) = argmax depth(x, C) = argmax dist(x, Rn \ C).

A Chebyshev center is a point inside C that is farthest from the exterior of C; it is also the center of the largest ball that lies inside C. Figure 8.5 shows an example, in which C is a polyhedron, and the norm is Euclidean.

Figure 8.5 Chebyshev center of a polyhedron C, in the Euclidean norm. The center xcheb is the deepest point inside C, in the sense that it is farthest from the exterior, or complement, of C. The center xcheb is also the center of the largest Euclidean ball (shown lightly shaded) that lies inside C.

Chebyshev center of a convex set

When the set C is convex, the depth is a concave function for x ∈ C, so computing the Chebyshev center is a convex optimization problem (see exercise 8.5). More specifically, suppose C ⊆ Rn is defined by a set of convex inequalities:

C = {x | f1(x) ≤ 0, . . . , fm(x) ≤ 0}.

We can find a Chebyshev center by solving the problem

maximize   R
subject to gi(x, R) ≤ 0,  i = 1, . . . , m,   (8.16)

where gi is defined as

gi(x, R) = sup_{∥u∥≤1} fi(x + Ru).

Problem (8.16) is a convex optimization problem, since each function gi is the pointwise maximum of a family of convex functions of x and R, hence convex. However, evaluating gi involves solving a convex maximization problem (either numerically or analytically), which may be very hard. In practice, we can find the Chebyshev center only in cases where the functions gi are easy to evaluate.

Chebyshev center of a polyhedron

Suppose C is defined by a set of linear inequalities aTi x ≤ bi, i = 1, . . . , m. We have

gi(x, R) = sup_{∥u∥≤1} aTi (x + Ru) − bi = aTi x + R∥ai∥∗ − bi

if R ≥ 0, so the Chebyshev center can be found by solving the LP

maximize   R
subject to aTi x + R∥ai∥∗ ≤ bi,  i = 1, . . . , m
           R ≥ 0,

with variables x and R.
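This LP is straightforward to prototype; the sketch below (illustrative, not from the text) uses CVXPY with the Euclidean norm, whose dual norm is also the Euclidean norm:

    import cvxpy as cp
    import numpy as np

    # A made-up bounded polyhedron {x | a_i^T x <= b_i} in R^2.
    A = np.array([[2., 1.], [-1., 2.], [0., -1.], [-1., -1.]])
    b = np.ones(4)

    x = cp.Variable(2)
    R = cp.Variable(nonneg=True)
    cons = [A[i] @ x + R * np.linalg.norm(A[i]) <= b[i] for i in range(len(b))]
    cp.Problem(cp.Maximize(R), cons).solve()
    print(x.value, R.value)   # Chebyshev center and radius of the largest ball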
Euclidean Chebyshev center of intersection of ellipsoids

Let C be an intersection of m ellipsoids, defined by quadratic inequalities,

C = {x | xT Aix + 2bTi x + ci ≤ 0, i = 1, . . . , m},

where Ai ∈ Sn++. We have

gi(x, R) = sup_{∥u∥2≤1} ((x + Ru)T Ai(x + Ru) + 2bTi (x + Ru) + ci)
         = xT Aix + 2bTi x + ci + sup_{∥u∥2≤1} (R2 uT Aiu + 2R(Aix + bi)T u).

From §B.1, gi(x, R) ≤ 0 if and only if there exists a λi such that the matrix inequality

[ −xT Aix − 2bTi x − ci − λi   R(Aix + bi)T ]
[ R(Aix + bi)                  λiI − R2Ai   ] ≽ 0   (8.17)

holds. Using this result, we can express the Chebyshev centering problem as

maximize   R
subject to [ −λi − ci + bTi A−1i bi   0     (x + A−1i bi)T ]
           [ 0                        λiI   RI             ]
           [ x + A−1i bi              RI    A−1i           ] ≽ 0,  i = 1, . . . , m,

which is an SDP with variables R, λ, and x. Note that the Schur complement of A−1i in the LMI constraint is equal to the lefthand side of (8.17).

8.5.2 Maximum volume ellipsoid center

The Chebyshev center xcheb of a set C ⊆ Rn is the center of the largest ball that lies in C. As an extension of this idea, we define the maximum volume ellipsoid center of C, denoted xmve, as the center of the maximum volume ellipsoid that lies in C. Figure 8.6 shows an example, where C is a polyhedron. The maximum volume ellipsoid center is readily computed when C is defined by a set of linear inequalities, by solving the problem (8.15). (The optimal value of the variable d ∈ Rn is xmve.) Since the maximum volume ellipsoid inside C is affine invariant, so is the maximum volume ellipsoid center.

Figure 8.6 The lightly shaded ellipsoid shows the maximum volume ellipsoid contained in the set C, which is the same polyhedron as in figure 8.5. Its center xmve is the maximum volume ellipsoid center of C.

8.5.3 Analytic center of a set of inequalities

The analytic center xac of a set of convex inequalities and linear equalities,

fi(x) ≤ 0,  i = 1, . . . , m,    Fx = g,

is defined as an optimal point for the (convex) problem

minimize   −Σ_{i=1}^m log(−fi(x))
subject to Fx = g,   (8.18)

with variable x ∈ Rn and implicit constraints fi(x) < 0, i = 1, . . . , m. The objective in (8.18) is called the logarithmic barrier associated with the set of inequalities. We assume here that the domain of the logarithmic barrier intersects the affine set defined by the equalities, i.e., the strict inequality system

fi(x) < 0,  i = 1, . . . , m,    Fx = g

is feasible. The logarithmic barrier is bounded below on the feasible set

C = {x | fi(x) < 0, i = 1, . . . , m, Fx = g}

if C is bounded.

When x is strictly feasible, i.e., Fx = g and fi(x) < 0 for i = 1, . . . , m, we can interpret −fi(x) as the margin or slack in the ith inequality. The analytic center xac is the point that maximizes the product (or geometric mean) of these slacks or margins, subject to the equality constraints Fx = g, and the implicit constraints fi(x) < 0.
The analytic center is not a function of the set C described by the inequalities and equalities; two sets of inequalities and equalities can define the same set, but have different analytic centers. Still, it is not uncommon to informally use the term ‘analytic center of a set C’ to mean the analytic center of a particular set of equalities and inequalities that define it.

The analytic center is, however, independent of affine changes of coordinates. It is also invariant under (positive) scalings of the inequality functions, and any reparametrization of the equality constraints. In other words, if F̃ and g̃ are such that F̃x = g̃ if and only if Fx = g, and α1, . . . , αm > 0, then the analytic center of

αifi(x) ≤ 0,  i = 1, . . . , m,    F̃x = g̃

is the same as the analytic center of

fi(x) ≤ 0,  i = 1, . . . , m,    Fx = g

(see exercise 8.17).

Analytic center of a set of linear inequalities
The analytic center of a set of linear inequalities
aTi x ≤ bi,  i = 1, . . . , m,

is the solution of the unconstrained minimization problem

minimize   −Σ_{i=1}^m log(bi − aTi x),   (8.19)
with implicit constraint bi − aTi x > 0, i = 1, . . . , m. If the polyhedron defined by the linear inequalities is bounded, then the logarithmic barrier is bounded below and strictly convex, so the analytic center is unique. (See exercise 4.2.)
We can give a geometric interpretation of the analytic center of a set of linear inequalities. Since the analytic center is independent of positive scaling of the constraint functions, we can assume without loss of generality that ∥ai∥2 = 1. In this case, the slack bi − aTi x is the distance to the hyperplane Hi = {x | aTi x = bi}. Therefore the analytic center xac is the point that maximizes the product of distances to the defining hyperplanes.
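Problem (8.19) is an unconstrained smooth convex problem, and is easy to prototype; a small CVXPY sketch (illustrative data, not from the text):

    import cvxpy as cp
    import numpy as np

    # Analytic center of the linear inequalities a_i^T x <= b_i, found by
    # minimizing the logarithmic barrier (8.19). Data are made up.
    A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.], [1., 1.]])
    b = np.array([1., 1., 1., 1., 1.5])

    x = cp.Variable(2)
    barrier = -cp.sum(cp.log(b - A @ x))   # implicit constraint b - Ax > 0
    cp.Problem(cp.Minimize(barrier)).solve()
    print(x.value)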
Inner and outer ellipsoids from analytic center of linear inequalities
The analytic center of a set of linear inequalities implicitly defines an inscribed and a covering ellipsoid, defined by the Hessian of the logarithmic barrier function

−Σ_{i=1}^m log(bi − aTi x),

evaluated at the analytic center, i.e.,

H = Σ_{i=1}^m d2i aiaTi,    di = 1/(bi − aTi xac),    i = 1, . . . , m.

We have Einner ⊆ P ⊆ Eouter, where

P = {x | aTi x ≤ bi, i = 1, . . . , m},
Einner = {x | (x − xac)T H(x − xac) ≤ 1},
Eouter = {x | (x − xac)T H(x − xac) ≤ m(m − 1)}.
Figure 8.7 The dashed lines show five level curves of the logarithmic barrier function for the inequalities defining the polyhedron C in figure 8.5. The minimizer of the logarithmic barrier function, labeled xac, is the analytic center of the inequalities. The inner ellipsoid Einner = {x | (x − xac)T H(x − xac) ≤ 1}, where H is the Hessian of the logarithmic barrier function at xac, is shaded.
This is a weaker result than the one for the maximum volume inscribed ellipsoid, which when scaled up by a factor of n covers the polyhedron. The inner and outer ellipsoids defined by the Hessian of the logarithmic barrier, in contrast, are related by the scale factor (m(m − 1))1/2, which is always at least n.
To show that Einner ⊆ P, suppose x ∈ Einner, i.e.,

(x − xac)T H(x − xac) = Σ_{i=1}^m (diaTi (x − xac))2 ≤ 1.

This implies that

aTi (x − xac) ≤ 1/di = bi − aTi xac,  i = 1, . . . , m,

and therefore aTi x ≤ bi for i = 1, . . . , m. (We have not used the fact that xac is the analytic center, so this result is valid if we replace xac with any strictly feasible point.)

To establish that P ⊆ Eouter, we will need the fact that xac is the analytic center, and therefore the gradient of the logarithmic barrier vanishes:

Σ_{i=1}^m diai = 0.

Now assume x ∈ P. Then

(x − xac)T H(x − xac) = Σ_{i=1}^m (diaTi (x − xac))2
                      = Σ_{i=1}^m d2i (1/di − aTi (x − xac))2 − m
                      = Σ_{i=1}^m d2i (bi − aTi x)2 − m
                      ≤ (Σ_{i=1}^m di(bi − aTi x))2 − m
                      = (Σ_{i=1}^m di(bi − aTi xac) + Σ_{i=1}^m diaTi (xac − x))2 − m
                      = m2 − m,

which shows that x ∈ Eouter. (The second equality follows from the fact that Σ_{i=1}^m diai = 0. The inequality follows from Σ_{i=1}^m y2i ≤ (Σ_{i=1}^m yi)2 for y ≽ 0. The last equality follows from Σ_{i=1}^m diai = 0, and the definition of di.)

Analytic center of a linear matrix inequality

The definition of analytic center can be extended to sets described by generalized inequalities with respect to a cone K, if we define a logarithm on K. For example, the analytic center of a linear matrix inequality

x1A1 + x2A2 + · · · + xnAn ≼ B

is defined as the solution of

minimize   − log det(B − x1A1 − · · · − xnAn).

8.6 Classification
In pattern recognition and classification problems we are given two sets of points in Rn, {x1,…,xN} and {y1,…,yM}, and wish to find a function f : Rn → R (within a given family of functions) that is positive on the first set and negative on the second, i.e.,
f(xi) > 0,  i = 1, . . . , N,      f(yi) < 0,  i = 1, . . . , M.

If these inequalities hold, we say that f, or its 0-level set {x | f(x) = 0}, separates, classifies, or discriminates the two sets of points. We sometimes also consider weak separation, in which the weak versions of the inequalities hold.

Figure 8.8 The points x1, . . . , xN are shown as open circles, and the points y1, . . . , yM are shown as filled circles. These two sets are classified by an affine function f, whose 0-level set (a line) separates them.

8.6.1 Linear discrimination

In linear discrimination, we seek an affine function f(x) = aT x − b that classifies the points, i.e.,

aT xi − b > 0,  i = 1, . . . , N,      aT yi − b < 0,  i = 1, . . . , M.   (8.20)

Geometrically, we seek a hyperplane that separates the two sets of points. Since the strict inequalities (8.20) are homogeneous in a and b, they are feasible if and only if the set of nonstrict linear inequalities

aT xi − b ≥ 1,  i = 1, . . . , N,      aT yi − b ≤ −1,  i = 1, . . . , M   (8.21)

(in the variables a, b) is feasible. Figure 8.8 shows a simple example of two sets of points and a linear discriminating function.

Linear discrimination alternative

The strong alternative of the set of strict inequalities (8.20) is the existence of λ, λ̃ such that

Σ_{i=1}^N λixi = Σ_{i=1}^M λ̃iyi,    1T λ = 1T λ̃,    λ ≽ 0,    λ̃ ≽ 0,    (λ, λ̃) ≠ 0   (8.22)

(see §5.8.3). Using the third and last conditions, we can express these alternative conditions as

λ ≽ 0,    1T λ = 1,    λ̃ ≽ 0,    1T λ̃ = 1,    Σ_{i=1}^N λixi = Σ_{i=1}^M λ̃iyi

(by dividing by 1T λ, which is positive, and using the same symbols for the normalized λ and λ̃). These conditions have a simple geometric interpretation: They state that there is a point in the convex hull of both {x1, . . . , xN} and {y1, . . . , yM}. In other words: the two sets of points can be linearly discriminated (i.e., discriminated by an affine function) if and only if their convex hulls do not intersect. We have seen this result several times before.

Robust linear discrimination

The existence of an affine classifying function f(x) = aT x − b is equivalent to a set of linear inequalities in the variables a and b that define f. If the two sets can be linearly discriminated, then there is a polyhedron of affine functions that discriminate them, and we can choose one that optimizes some measure of robustness. We might, for example, seek the function that gives the maximum possible ‘gap’ between the (positive) values at the points xi and the (negative) values at the points yi. To do this we have to normalize a and b, since otherwise we can scale a and b by a positive constant and make the gap in the values arbitrarily large. This leads to the problem

maximize   t
subject to aT xi − b ≥ t,  i = 1, . . . , N
           aT yi − b ≤ −t,  i = 1, . . . , M
           ∥a∥2 ≤ 1,   (8.23)

with variables a, b, and t. The optimal value t⋆ of this convex problem (with linear objective, linear inequalities, and one quadratic inequality) is positive if and only if the two sets of points can be linearly discriminated. In this case the inequality ∥a∥2 ≤ 1 is always tight at the optimum, i.e., we have ∥a⋆∥2 = 1. (See exercise 8.23.)
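Problem (8.23) can be prototyped as follows (an illustrative CVXPY sketch; the synthetic data are made up):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(loc=(-2, 0), size=(30, 2))   # the points x_i
    Y = rng.normal(loc=(+2, 0), size=(30, 2))   # the points y_i

    a = cp.Variable(2)
    b = cp.Variable()
    t = cp.Variable()
    cons = [X @ a - b >= t, Y @ a - b <= -t, cp.norm(a) <= 1]
    cp.Problem(cp.Maximize(t), cons).solve()
    # t > 0 certifies linear separability; the slab has half-width t.
    print(t.value)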
We can give a simple geometric interpretation of the robust linear discrimination problem (8.23). If ∥a∥2 = 1 (as is the case at any optimal point), aT xi − b is the Euclidean distance from the point xi to the separating hyperplane H = {z | aT z = b}. Similarly, b − aT yi is the distance from the point yi to the hyperplane. Therefore the problem (8.23) finds the hyperplane that separates the two sets of points, and has maximal distance to the sets. In other words, it finds the thickest slab that separates the two sets.

As suggested by the example shown in figure 8.9, the optimal value t⋆ (which is half the slab thickness) turns out to be half the distance between the convex hulls of the two sets of points. This can be seen clearly from the dual of the robust linear discrimination problem (8.23). The Lagrangian (for the problem of minimizing −t) is

−t + Σ_{i=1}^N ui(t + b − aT xi) + Σ_{i=1}^M vi(t − b + aT yi) + λ(∥a∥2 − 1).

Minimizing over b and t yields the conditions 1T u = 1/2, 1T v = 1/2. When these hold, we have

g(u, v, λ) = inf_a ( aT (Σ_{i=1}^M viyi − Σ_{i=1}^N uixi) + λ∥a∥2 ) − λ
           = −λ   if ∥Σ_{i=1}^M viyi − Σ_{i=1}^N uixi∥2 ≤ λ
             −∞   otherwise.

Figure 8.9 By solving the robust linear discrimination problem (8.23) we find an affine function that gives the largest gap in values between the two sets (with a normalization bound on the linear part of the function). Geometrically, we are finding the thickest slab that separates the two sets of points.

The dual problem can then be written as

maximize   −∥Σ_{i=1}^M viyi − Σ_{i=1}^N uixi∥2
subject to u ≽ 0,    1T u = 1/2
           v ≽ 0,    1T v = 1/2.

We can interpret 2Σ_{i=1}^N uixi as a point in the convex hull of {x1, . . . , xN} and 2Σ_{i=1}^M viyi as a point in the convex hull of {y1, . . . , yM}. The dual objective is to minimize (half) the distance between these two points, i.e., find (half) the distance between the convex hulls of the two sets.

Support vector classifier

When the two sets of points cannot be linearly separated, we might seek an affine function that approximately classifies the points, for example, one that minimizes the number of points misclassified. Unfortunately, this is in general a difficult combinatorial optimization problem. One heuristic for approximate linear discrimination is based on support vector classifiers, which we describe in this section.

We start with the feasibility problem (8.21). We first relax the constraints by introducing nonnegative variables u1, . . . , uN and v1, . . . , vM, and forming the inequalities

aT xi − b ≥ 1 − ui,  i = 1, . . . , N,      aT yi − b ≤ −(1 − vi),  i = 1, . . . , M.   (8.24)
Figure 8.10 Approximate linear discrimination via linear programming. The points x1, . . . , x50, shown as open circles, cannot be linearly separated from the points y1, . . . , y50, shown as filled circles. The classifier shown as a solid line was obtained by solving the LP (8.25). This classifier misclassifies one point. The dashed lines are the hyperplanes aT z − b = ±1. Four points are correctly classified, but lie in the slab defined by the dashed lines.

When u = v = 0, we recover the original constraints; by making u and v large enough, these inequalities can always be made feasible. We can think of ui as a measure of how much the constraint aT xi − b ≥ 1 is violated, and similarly for vi. Our goal is to find a, b, and sparse nonnegative u and v that satisfy the inequalities (8.24). As a heuristic for this, we can minimize the sum of the variables ui and vi, by solving the LP

minimize   1T u + 1T v
subject to aT xi − b ≥ 1 − ui,  i = 1, . . . , N
           aT yi − b ≤ −(1 − vi),  i = 1, . . . , M
           u ≽ 0,    v ≽ 0.   (8.25)

Figure 8.10 shows an example. In this example, the affine function aT z − b misclassifies 1 out of 100 points. Note however that when 0 < ui < 1, the point xi is correctly classified by the affine function aT z − b, but violates the inequality aT xi − b ≥ 1, and similarly for yi.

The objective function in the LP (8.25) can be interpreted as a relaxation of the number of points xi that violate aT xi − b ≥ 1 plus the number of points yi that violate aT yi − b ≤ −1. In other words, it is a relaxation of the number of points misclassified by the function aT z − b, plus the number of points that are correctly classified but lie in the slab defined by −1 < aT z − b < 1.

More generally, we can consider the trade-off between the number of misclassified points, and the width of the slab {z | −1 ≤ aT z − b ≤ 1}, which is given by 2/∥a∥2. The standard support vector classifier for the sets {x1, . . . , xN}, {y1, . . . , yM} is defined as the solution of

minimize   ∥a∥2 + γ(1T u + 1T v)
subject to aT xi − b ≥ 1 − ui,  i = 1, . . . , N
           aT yi − b ≤ −(1 − vi),  i = 1, . . . , M
           u ≽ 0,    v ≽ 0.

The first term is proportional to the inverse of the width of the slab defined by −1 ≤ aT z − b ≤ 1. The second term has the same interpretation as above, i.e., it is a convex relaxation for the number of misclassified points (including the points in the slab). The parameter γ, which is positive, gives the relative weight of the number of misclassified points (which we want to minimize), compared to the width of the slab (which we want to maximize). Figure 8.11 shows an example.

Figure 8.11 Approximate linear discrimination via support vector classifier, with γ = 0.1. The support vector classifier, shown as the solid line, misclassifies three points. Fifteen points are correctly classified but lie in the slab defined by −1 < aT z − b < 1, bounded by the dashed lines.
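The support vector classifier above can be prototyped directly (illustrative CVXPY sketch; the data are made up, with γ = 0.1 echoing figure 8.11):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(loc=(-1, 0), size=(50, 2))
    Y = rng.normal(loc=(+1, 0), size=(50, 2))
    gamma = 0.1

    a = cp.Variable(2)
    b = cp.Variable()
    u = cp.Variable(50, nonneg=True)   # slacks for the x_i constraints
    v = cp.Variable(50, nonneg=True)   # slacks for the y_i constraints
    cons = [X @ a - b >= 1 - u, Y @ a - b <= -(1 - v)]
    obj = cp.norm(a) + gamma * (cp.sum(u) + cp.sum(v))
    cp.Problem(cp.Minimize(obj), cons).solve()
    print(a.value, b.value)

Setting the objective to cp.sum(u) + cp.sum(v) alone recovers the LP heuristic (8.25).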
Approximate linear discrimination via logistic modeling

Another approach to finding an affine function that approximately classifies two sets of points that cannot be linearly separated is based on the logistic model described in §7.1.1. We start by fitting the two sets of points with a logistic model. Suppose z is a random variable with values 0 or 1, with a distribution that depends on some (deterministic) explanatory variable u ∈ Rn, via a logistic model of the form

prob(z = 1) = exp(aT u − b)/(1 + exp(aT u − b)),
prob(z = 0) = 1/(1 + exp(aT u − b)).   (8.26)

Now we assume that the given sets of points, {x1, . . . , xN} and {y1, . . . , yM}, arise as samples from the logistic model. Specifically, {x1, . . . , xN} are the values of u for the N samples for which z = 1, and {y1, . . . , yM} are the values of u for the M samples for which z = 0. (This allows us to have xi = yj, which would rule out discrimination between the two sets. In a logistic model, it simply means that we have two samples, with the same value of explanatory variable but different outcomes.)

We can determine a and b by maximum likelihood estimation from the observed samples, by solving the convex optimization problem

minimize   −l(a, b)   (8.27)

with variables a, b, where l is the log-likelihood function

l(a, b) = Σ_{i=1}^N (aT xi − b) − Σ_{i=1}^N log(1 + exp(aT xi − b)) − Σ_{i=1}^M log(1 + exp(aT yi − b))

(see §7.1.1). If the two sets of points can be linearly separated, i.e., if there exist a, b with aT xi > b and aT yi < b, then the optimization problem (8.27) is unbounded below.

Once we find the maximum likelihood values of a and b, we can form a linear classifier f(x) = aT x − b for the two sets of points. This classifier has the following property: Assuming the data points are in fact generated from a logistic model with parameters a and b, it has the smallest probability of misclassification, over all linear classifiers. The hyperplane aT u = b corresponds to the points where prob(z = 1) = 1/2, i.e., the two outcomes are equally likely. An example is shown in figure 8.12.
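The maximum likelihood problem (8.27) can be prototyped as below (illustrative CVXPY sketch; cp.logistic(s) computes log(1 + exp s)). Recall that (8.27) is unbounded below when the sets are linearly separable, so the synthetic sets are chosen to overlap:

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(loc=(-1, 0), size=(50, 2))   # samples with z = 1
    Y = rng.normal(loc=(+1, 0), size=(50, 2))   # samples with z = 0

    a = cp.Variable(2)
    b = cp.Variable()
    # Negative log-likelihood -l(a, b):
    nll = (cp.sum(cp.logistic(X @ a - b)) - cp.sum(X @ a - b)
           + cp.sum(cp.logistic(Y @ a - b)))
    cp.Problem(cp.Minimize(nll)).solve()
    print(a.value, b.value)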
(Here we use homogeneity in P, q, r to express the constraint P ≺ 0 as P ≼ −I.) Figure 8.13 shows an example. Polynomial discrimination We consider the set of polynomials on Rn with degree less than or equal to d: f(x)= 􏰊 a xi1 ···xin. i1 +···+in ≤d We can determine whether or not two sets {x1,...,xN} and {y1,...,yM} can be separated by such a polynomial by solving a set of linear inequalities in the variables ai1···id. Geometrically, we are checking whether the two sets can be separated by an algebraic surface (defined by a polynomial of degree less than or equal to d). As an extension, the problem of determining the minimum degree polynomial on Rn that separates two sets of points can be solved via quasiconvex programming, since the degree of a polynomial is a quasiconvex function of the coefficients. This can be carried out by bisection on d, solving a feasibility linear program at each step. An example is shown in figure 8.14. i1···id 1 n 8.6 Classification 431 Figure 8.13 Quadratic discrimination, with the condition that P ≺ 0. This means that we seek an ellipsoid containing all of xi (shown as open circles) and none of the yi (shown as filled circles). This can be solved as an SDP feasibility problem. Figure 8.14 Minimum degree polynomial discrimination in R2. In this ex- ample, there exists no cubic polynomial that separates the points x1, . . . , xN (shown as open circles) from the points y1, . . . , yM (shown as filled circles), but they can be separated by fourth-degree polynomial, the zero level set of which is shown. 432 8.7 8 Geometric problems 8.7.1 In this section we discuss a few variations on the following problem. We have N points in R2 or R3, and a list of pairs of points that must be connected by links. The positions of some of the N points are fixed; our task is to determine the positions of the remaining points, i.e., to place the remaining points. The objective is to place the points so that some measure of the total interconnection length of the links is minimized, subject to some additional constraints on the positions. As an example application, we can think of the points as locations of plants or warehouses of a company, and the links as the routes over which goods must be shipped. The goal is to find locations that minimize the total transportation cost. In another application, the points represent the position of modules or cells on an integrated circuit, and the links represent wires that connect pairs of cells. Here the goal might be to place the cells in such a way that the total length of wire used to interconnect the cells is minimized. The problem can be described in terms of an undirected graph with N nodes, representing the N points. With each node we associate a variable xi ∈ Rk, where k = 2 or k = 3, which represents its location or position. The problem is to minimize 􏰊 fij (xi , xj ) (i,j )∈A whereAisthesetofalllinksinthegraph,andfij :Rk×Rk →Risacost function associated with arc (i,j). (Alternatively, we can sum over all i and j, or over i < j, and simply set fij = 0 when links i and j are not connected.) Some of the coordinate vectors xi are given. The optimization variables are the remaining coordinates. Provided the functions fij are convex, this is a convex optimization problem. Linear facility location problems In the simplest version of the problem the cost associated with arc (i,j) is the distance between nodes i and j: fij (xi, xj ) = ∥xi − xj ∥, i.e., we minimize 􏰊 ∥xi−xj∥. 
Polynomial discrimination

We consider the set of polynomials on Rn with degree less than or equal to d:

f(x) = Σ_{i1+···+in≤d} a_{i1···in} x1^{i1} · · · xn^{in}.

We can determine whether or not two sets {x1, . . . , xN} and {y1, . . . , yM} can be separated by such a polynomial by solving a set of linear inequalities in the variables a_{i1···in}. Geometrically, we are checking whether the two sets can be separated by an algebraic surface (defined by a polynomial of degree less than or equal to d).

As an extension, the problem of determining the minimum degree polynomial on Rn that separates two sets of points can be solved via quasiconvex programming, since the degree of a polynomial is a quasiconvex function of the coefficients. This can be carried out by bisection on d, solving a feasibility linear program at each step. An example is shown in figure 8.14.

Figure 8.13 Quadratic discrimination, with the condition that P ≺ 0. This means that we seek an ellipsoid containing all of xi (shown as open circles) and none of the yi (shown as filled circles). This can be solved as an SDP feasibility problem.

Figure 8.14 Minimum degree polynomial discrimination in R2. In this example, there exists no cubic polynomial that separates the points x1, . . . , xN (shown as open circles) from the points y1, . . . , yM (shown as filled circles), but they can be separated by a fourth-degree polynomial, the zero level set of which is shown.

8.7 Placement and location

In this section we discuss a few variations on the following problem. We have N points in R2 or R3, and a list of pairs of points that must be connected by links. The positions of some of the N points are fixed; our task is to determine the positions of the remaining points, i.e., to place the remaining points. The objective is to place the points so that some measure of the total interconnection length of the links is minimized, subject to some additional constraints on the positions.

As an example application, we can think of the points as locations of plants or warehouses of a company, and the links as the routes over which goods must be shipped. The goal is to find locations that minimize the total transportation cost. In another application, the points represent the position of modules or cells on an integrated circuit, and the links represent wires that connect pairs of cells. Here the goal might be to place the cells in such a way that the total length of wire used to interconnect the cells is minimized.

The problem can be described in terms of an undirected graph with N nodes, representing the N points. With each node we associate a variable xi ∈ Rk, where k = 2 or k = 3, which represents its location or position. The problem is to minimize

Σ_{(i,j)∈A} fij(xi, xj),

where A is the set of all links in the graph, and fij : Rk × Rk → R is a cost function associated with arc (i, j). (Alternatively, we can sum over all i and j, or over i < j, and simply set fij = 0 when links i and j are not connected.) Some of the coordinate vectors xi are given. The optimization variables are the remaining coordinates. Provided the functions fij are convex, this is a convex optimization problem.

8.7.1 Linear facility location problems

In the simplest version of the problem the cost associated with arc (i, j) is the distance between nodes i and j: fij(xi, xj) = ∥xi − xj∥, i.e., we minimize

Σ_{(i,j)∈A} ∥xi − xj∥.

We can use any norm, but the most common applications involve the Euclidean norm or the l1-norm. For example, in circuit design it is common to route the wires between cells along piecewise-linear paths, with each segment either horizontal or vertical. (This is called Manhattan routing, since paths along the streets in a city with a rectangular grid are also piecewise-linear, with each street aligned with one of two orthogonal axes.) In this case, the length of wire required to connect cell i and cell j is given by ∥xi − xj∥1.

We can include nonnegative weights that reflect differences in the cost per unit distance along different arcs:

Σ_{(i,j)∈A} wij∥xi − xj∥.

By assigning a weight wij = 0 to pairs of nodes that are not connected, we can express this problem more simply using the objective

Σ_{i<j} wij∥xi − xj∥.   (8.28)

8.8 Floor planning

Minimum spacing

We can impose a minimum spacing ρ > 0 between cells by changing the relative position constraints from xi + wi ≤ xj for (i, j) ∈ H, to xi + wi + ρ ≤ xj for (i, j) ∈ H, and similarly for the vertical graph. We can have a different minimum spacing associated with each edge in H and V. Another possibility is to fix W and H, and maximize the minimum spacing ρ as objective.
Minimum cell area
For each cell we specify a minimum area, i.e., we require that wihi ≥ Ai, where Ai > 0. These minimum cell area constraints can be expressed as convex inequalities in several ways, e.g., wi ≥ Ai/hi, (wihi)1/2 ≥ A1/2i, or log wi + log hi ≥ log Ai.
Aspect ratio constraints
We can impose upper and lower bounds on the aspect ratio of each cell, i.e., li ≤ hi/wi ≤ ui.
Multiplying through by wi transforms these constraints into linear inequalities. We can also fix the aspect ratio of a cell, which results in a linear equality constraint.
Alignment constraints
We can impose the constraint that two edges, or a center line, of two cells are aligned. For example, the horizontal center line of cell i aligns with the top of cell j when
yi + hi/2 = yj + hj.
These are linear equality constraints. In a similar way we can require that a cell is
flushed against the bounding box boundary.
Symmetry constraints
We can require pairs of cells to be symmetric about a vertical or horizontal axis, that can be fixed or floating (i.e., whose position is fixed or not). For example, to specify that the pair of cells i and j are symmetric about the vertical axis x = xaxis, we impose the linear equality constraint
xaxis −(xi +wi/2)=xj +wj/2−xaxis.
We can require that several pairs of cells be symmetric about an unspecified vertical
axis by imposing these equality constraints, and introducing xaxis as a new variable. Similarity constraints
We can require that cell i be an a-scaled translate of cell j by the equality con- straints wi = awj, hi = ahj. Here the scaling factor a must be fixed. By imposing only one of these constraints, we require that the width (or height) of one cell be a given factor times the width (or height) of the other cell.
Containment constraints
We can require that a particular cell contains a given point, which imposes two lin- ear inequalities. We can require that a particular cell lie inside a given polyhedron, again by imposing linear inequalities.

Distance constraints
We can impose a variety of constraints that limit the distance between pairs of cells. In the simplest case, we can limit the distance between the center points of cell i and j (or any other fixed points on the cells, such as lower left corners). For example, to limit the distance between the centers of cells i and j, we use the (convex) inequality
∥(xi +wi/2,yi +hi/2)−(xj +wj/2,yj +hj/2)∥≤Dij.
As in placement problems, we can limit sums of distances, or use sums of distances as the objective.
We can also limit the distance dist(Ci,Cj) between cell i and cell j, i.e., the minimum distance between a point in cell i and a point in cell j. In the general case this can be done as follows. To limit the distance between cells i and j in the norm ∥ · ∥, we can introduce four new variables ui, vi, uj , vj . The pair (ui, vi) will represent a point in Ci, and the pair (uj,vj) will represent a point in Cj. To ensure this we impose the linear inequalities
xi ≤ui ≤xi +wi, yi ≤vi ≤yi +hi,
and similarly for cell j. Finally, to limit dist(Ci,Cj), we add the convex inequality ∥(ui , vi ) − (uj , vj )∥ ≤ Dij .
In many specific cases we can express these distance constraints more efficiently, by exploiting the relative positioning constraints or deriving a more explicit formulation. As an example consider the l∞-norm, and suppose cell i lies to the left of cell j (by a relative positioning constraint). The horizontal displacement between the two cells is xj − (xi + wi). Then we have dist(Ci, Cj) ≤ Dij if and only if
xj −(xi +wi)≤Dij, yj −(yi +hi)≤Dij, yi −(yj +hj)≤Dij.
The first inequality states that the horizontal displacement between the right edge of cell i and the left edge of cell j does not exceed Dij. The second inequality requires that the bottom of cell j is no more than Dij above the top of cell i, and the third inequality requires that the bottom of cell i is no more than Dij above the top of cell j. These three inequalities together are equivalent to dist(Ci,Cj) ≤ Dij. In this case, we do not need to introduce any new variables.
We can limit the l1- (or l2-) distance between two cells in a similar way. Here we introduce one new variable dv, which will serve as a bound on the vertical displacement between the cells. To limit the l1-distance, we add the constraints
yj −(yi +hi)≤dv, yi −(yj +hj)≤dv, dv ≥0 and the constraints
xj −(xi +wi)+dv ≤Dij.
(The first term is the horizontal displacement and the second is an upper bound on the vertical displacement.) To limit the Euclidean distance between the cells, we replace this last constraint with
(xj − (xi + wi))2 + d2v ≤ D2ij.

Figure 8.20 Four instances of an optimal floor plan, using the relative positioning constraints shown in figure 8.19. In each case the objective is to minimize the perimeter, and the same minimum spacing constraint between cells is imposed. We also require the aspect ratios to lie between 1/5 and 5. The four cases differ in the minimum areas required for each cell. The sum of the minimum areas is the same for each case.

Example 8.7 Figure 8.20 shows an example with 5 cells, using the ordering constraints of figure 8.19, and four different sets of constraints. In each case we impose the same minimum required spacing constraint, and the same aspect ratio constraint 1/5 ≤ wi/hi ≤ 5. The four cases differ in the minimum required cell areas Ai. The values of Ai are chosen so that the total minimum required area Σ_{i=1}^5 Ai is the same for each case.
8.8.3 Floor planning via geometric programming
The floor planning problem can also be formulated as a geometric program in the variables xi, yi, wi, hi, W, H. The objectives and constraints that can be handled in this formulation are a bit different from those that can be expressed in the convex formulation.
First we note that the bounding box constraints (8.35) and the relative positioning constraints (8.34) are posynomial inequalities, since the lefthand sides are sums of variables, and the righthand sides are single variables, hence monomials. Dividing these inequalities by the righthand side yields standard posynomial inequalities.
In the geometric programming formulation we can minimize the bounding box area, since WH is a monomial, hence posynomial. We can also exactly specify the area of each cell, since wihi = Ai is a monomial equality constraint. On the other hand alignment, symmetry, and distance constraints cannot be handled in the geometric programming formulation. Similarity, however, can be; indeed it is possible to require that one cell be similar to another, without specifying the scaling ratio (which can be treated as just another variable).
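A minimal sketch of this GP formulation, in CVXPY’s geometric programming mode, is given below. It is illustrative only (not the book’s formulation): it uses just two cells, one horizontal ordering constraint, and made-up areas, and it omits the vertical ordering graph.

    import cvxpy as cp

    # All variables are positive in a GP.
    x1, x2 = cp.Variable(pos=True), cp.Variable(pos=True)
    w1, w2 = cp.Variable(pos=True), cp.Variable(pos=True)
    h1, h2 = cp.Variable(pos=True), cp.Variable(pos=True)
    W, H = cp.Variable(pos=True), cp.Variable(pos=True)

    cons = [x1 + w1 <= x2,     # relative positioning: cell 1 left of cell 2
            x2 + w2 <= W,      # bounding box width (posynomial <= monomial)
            h1 <= H, h2 <= H,  # bounding box height
            w1 * h1 == 1.0,    # exact cell areas: monomial equalities
            w2 * h2 == 2.0]
    prob = cp.Problem(cp.Minimize(W * H), cons)   # bounding box area
    prob.solve(gp=True)
    print(W.value, H.value)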

Bibliography
The characterization of Euclidean distance matrices in §8.3.3 appears in Schoenberg [Sch35]; see also Gower [Gow85].

Our use of the term Löwner-John ellipsoid follows Grötschel, Lovász, and Schrijver [GLS88, page 69]. The efficiency results for ellipsoidal approximations in §8.4 were proved by John [Joh85]. Boyd, El Ghaoui, Feron, and Balakrishnan [BEFB94, §3.7] give convex formulations of several ellipsoidal approximation problems involving sets defined as unions, intersections, or sums of ellipsoids.

The different centers defined in §8.5 have applications in design centering (see, for example, Seifi, Ponnambalan, and Vlach [SPV99]) and cutting-plane methods (Elzinga and Moore [EM75], Tarasov, Khachiyan, and Èrlikh [TKE88], and Ye [Ye97, chapter 8]). The inner ellipsoid defined by the Hessian of the logarithmic barrier function (page 420) is sometimes called the Dikin ellipsoid, and is the basis of Dikin’s algorithm for linear and quadratic programming [Dik67]. The expression for the outer ellipsoid at the analytic center was given by Sonnevend [Son86]. For extensions to nonpolyhedral convex sets, see Boyd and El Ghaoui [BE93], Jarre [Jar94], and Nesterov and Nemirovski [NN94, page 34].

Convex optimization has been applied to linear and nonlinear discrimination problems since the 1960s; see Mangasarian [Man65] and Rosen [Ros65]. Standard texts that discuss pattern classification include Duda, Hart, and Stork [DHS99] and Hastie, Tibshirani, and Friedman [HTF01]. For a detailed discussion of support vector classifiers, see Vapnik [Vap00] or Schölkopf and Smola [SS01].

The Weber point defined in example 8.4 is named after Weber [Web71]. Linear and quadratic placement is used in circuit design (Kleinhaus, Sigl, Johannes, and Antreich [KSJA91, SDJ91]). Sherwani [She99] is a recent overview of algorithms for placement, layout, floor planning, and other geometric optimization problems in VLSI circuit design.

Exercises

Projection on a set
8.1 Uniqueness of projection. Show that if C ⊆ Rn is nonempty, closed and convex, and the norm ∥ · ∥ is strictly convex, then for every x0 there is exactly one x ∈ C closest to x0 . In other words the projection of x0 on C is unique.
8.2 [Web94, Val64] Chebyshev characterization of convexity. A set C ⊆ Rn is called a Chebyshev set if for every x0 ∈ Rn, there is a unique point in C closest (in Euclidean norm) to x0. From the result in exercise 8.1, every nonempty, closed, convex set is a Chebyshev set. In this problem we show the converse, which is known as Motzkin’s theorem.
Let C ⊆ Rn be a Chebyshev set.
(a) Show that C is nonempty and closed.
(b) Show that PC, the Euclidean projection on C, is continuous.
(c) Suppose x0 ̸∈ C. Show that PC(x) = PC(x0) for all x = θx0 + (1 − θ)PC(x0) with 0 ≤ θ ≤ 1.
(d) Suppose x0 ̸∈ C. Show that PC(x) = PC(x0) for all x = θx0 + (1 − θ)PC(x0) with θ ≥ 1.
(e) Combining parts (c) and (d), we can conclude that all points on the ray with base PC(x0) and direction x0 − PC(x0) have projection PC(x0). Show that this implies that C is convex.
8.3 Euclidean projection on proper cones.
(a) Nonnegative orthant. Show that Euclidean projection onto the nonnegative orthant
is given by the expression on page 399.
(b) Positive semidefinite cone. Show that Euclidean projection onto the positive semidefinite cone is given by the expression on page 399.
(c) Second-order cone. Show that the Euclidean projection of (x0, t0) on the second-order cone

K = {(x, t) ∈ Rn+1 | ∥x∥2 ≤ t}

is given by

PK(x0, t0) = 0                                  if ∥x0∥2 ≤ −t0,
             (x0, t0)                           if ∥x0∥2 ≤ t0,
             (1/2)(1 + t0/∥x0∥2)(x0, ∥x0∥2)     if ∥x0∥2 ≥ |t0|.
8.4 The Euclidean projection of a point on a convex set yields a simple separating hyperplane,

(PC(x0) − x0)T (x − (1/2)(x0 + PC(x0))) = 0.
Find a counterexample that shows that this construction does not work for general norms.
8.5 [HUL93, volume 1, page 154] Depth function and signed distance to boundary. Let C ⊆ Rn be a nonempty convex set, and let dist(x,C) be the distance of x to C in some norm. We already know that dist(x,C) is a convex function of x.
(a) Show that the depth function,

depth(x, C) = dist(x, Rn \ C),

is concave for x ∈ C.

(b) The signed distance to the boundary of C is defined as

s(x) = dist(x, C)        if x ∉ C,
       −depth(x, C)      if x ∈ C.

Thus, s(x) is positive outside C, zero on its boundary, and negative on its interior. Show that s is a convex function.

Distance between sets

8.6 Let C, D be convex sets.

(a) Show that dist(C, x + D) is a convex function of x.
(b) Show that dist(tC, x + tD) is a convex function of (x, t) for t > 0.

8.7 Separation of ellipsoids. Let E1 and E2 be two ellipsoids defined as

E1 = {x | (x − x1)T P−11 (x − x1) ≤ 1},    E2 = {x | (x − x2)T P−12 (x − x2) ≤ 1},

where P1, P2 ∈ Sn++. Show that E1 ∩ E2 = ∅ if and only if there exists an a ∈ Rn with

∥P1/21 a∥2 + ∥P1/22 a∥2 < aT (x1 − x2).

8.8 Intersection and containment of polyhedra. Let P1 and P2 be two polyhedra defined as

P1 = {x | Ax ≼ b},    P2 = {x | Fx ≼ g},

with A ∈ Rm×n, b ∈ Rm, F ∈ Rp×n, g ∈ Rp. Formulate each of the following problems as an LP feasibility problem, or a set of LP feasibility problems.

(a) Find a point in the intersection P1 ∩ P2.
(b) Determine whether P1 ⊆ P2.

For each problem, derive a set of linear inequalities and equalities that forms a strong alternative, and give a geometric interpretation of the alternative.
Repeat the question for two polyhedra defined as

P1 = conv{v1, . . . , vK},    P2 = conv{w1, . . . , wL}.

Euclidean distance and angle problems

8.9 Closest Euclidean distance matrix to given data. We are given data d̂ij, for i, j = 1, . . . , n, which are corrupted measurements of the Euclidean distances between vectors in Rk:

d̂ij = ∥xi − xj∥2 + vij,  i, j = 1, . . . , n,

where vij is some noise or error. These data satisfy d̂ij ≥ 0 and d̂ij = d̂ji, for all i, j. The dimension k is not specified.
Show how to solve the following problem using convex optimization. Find a dimension k and x1, . . . , xn ∈ Rk so that Σ_{i,j=1}^n (d̂ij − dij)2 is minimized, where dij = ∥xi − xj∥2, i, j = 1, . . . , n. In other words, given some data that are approximate Euclidean distances, you are to find the closest set of actual Euclidean distances, in the least-squares sense.

8.10 Minimax angle fitting. Suppose that y1, . . . , ym ∈ Rk are affine functions of a variable x ∈ Rn:

yi = Aix + bi,  i = 1, . . . , m,

and z1, . . . , zm ∈ Rk are given nonzero vectors. We want to choose the variable x, subject to some convex constraints (e.g., linear inequalities), to minimize the maximum angle between yi and zi,

max{∠(y1, z1), . . . , ∠(ym, zm)}.

The angle between nonzero vectors is defined as usual:

∠(u, v) = cos−1 (uT v/(∥u∥2∥v∥2)),

where we take cos−1(a) ∈ [0, π]. We are only interested in the case when the optimal objective value does not exceed π/2.
Formulate this problem as a convex or quasiconvex optimization problem. When the constraints on x are linear inequalities, what kind of problem (or problems) do you have to solve?

8.11 Smallest Euclidean cone containing given points. In Rn, we define a Euclidean cone, with center direction c ≠ 0, and angular radius θ, with 0 ≤ θ ≤ π/2, as the set

{x ∈ Rn | ∠(c, x) ≤ θ}.

(A Euclidean cone is a second-order cone, i.e., it can be represented as the image of the second-order cone under a nonsingular linear mapping.)
Let a1, . . . , am ∈ Rn. How would you find the Euclidean cone, of smallest angular radius, that contains a1, . . . , am? (In particular, you should explain how to solve the feasibility problem, i.e., how to determine whether there is a Euclidean cone which contains the points.)

Extremal volume ellipsoids

8.12 Show that the maximum volume ellipsoid enclosed in a set is unique. Show that the Löwner-John ellipsoid of a set is unique.

8.13 Löwner-John ellipsoid of a simplex. In this exercise we show that the Löwner-John ellipsoid of a simplex in Rn must be shrunk by a factor n to fit inside the simplex. Since the Löwner-John ellipsoid is affinely invariant, it is sufficient to show the result for one particular simplex.
Derive the Löwner-John ellipsoid Elj for the simplex C = conv{0, e1, . . . , en}. Show that Elj must be shrunk by a factor 1/n to fit inside the simplex.

8.14 Efficiency of ellipsoidal inner approximation. Let C be a polyhedron in Rn described as C = {x | Ax ≼ b}, and suppose that {x | Ax ≺ b} is nonempty.

(a) Show that the maximum volume ellipsoid enclosed in C, expanded by a factor n about its center, is an ellipsoid that contains C.
(b) Show that if C is symmetric about the origin, i.e., of the form C = {x | −1 ≼ Ax ≼ 1}, then expanding the maximum volume inscribed ellipsoid by a factor √n gives an ellipsoid that contains C.

8.15 Minimum volume ellipsoid covering union of ellipsoids. Formulate the following problem as a convex optimization problem. Find the minimum volume ellipsoid E = {x | (x − x0)T A−1(x − x0) ≤ 1} that contains K given ellipsoids

Ei = {x | xT Aix + 2bTi x + ci ≤ 0},  i = 1, . . . , K.

Hint. See appendix B.
8.16 Maximum volume rectangle inside a polyhedron. Formulate the following problem as a convex optimization problem. Find the rectangle
R = {x ∈ Rn | l ≼ x ≼ u}
of maximum volume, enclosed in a polyhedron P = {x | Ax ≼ b}. The variables are l, u ∈ Rn. Your formulation should not involve an exponential number of constraints.

Centering

8.17 Affine invariance of analytic center. Show that the analytic center of a set of inequalities is affine invariant. Show that it is invariant with respect to positive scaling of the inequalities.

8.18 Analytic center and redundant inequalities. Two sets of linear inequalities that describe the same polyhedron can have different analytic centers. Show that by adding redundant inequalities, we can make any interior point x0 of a polyhedron
P = {x ∈ Rn | Ax ≼ b}
the analytic center. More specifically, suppose A ∈ Rm×n and Ax0 ≺ b. Show that there exist c ∈ Rn, γ ∈ R, and a positive integer q, such that P is the solution set of the m + q inequalities
Ax ≼ b, cT x ≤ γ, cT x ≤ γ, . . . , cT x ≤ γ   (8.36)
(where the inequality cT x ≤ γ is added q times), and x0 is the analytic center of (8.36).

8.19 Let xac be the analytic center of a set of linear inequalities
aiT x ≤ bi,  i = 1, . . . , m,
and define H as the Hessian of the logarithmic barrier function at xac:
H = Σ_{i=1}^{m} (1/(bi − aiT xac)²) ai aiT.
Show that the kth inequality is redundant (i.e., it can be deleted without changing the feasible set) if
bk − akT xac ≥ m (akT H−1 ak)^{1/2}.

8.20 Ellipsoidal approximation from analytic center of linear matrix inequality. Let C be the solution set of the LMI
x1A1 + x2A2 + ··· + xnAn ≼ B,
where Ai, B ∈ Sm, and let xac be its analytic center. Show that
Einner ⊆ C ⊆ Eouter,
where
Einner = {x | (x − xac)T H(x − xac) ≤ 1},
Eouter = {x | (x − xac)T H(x − xac) ≤ m(m − 1)},
and H is the Hessian of the logarithmic barrier function
−log det(B − x1A1 − x2A2 − ··· − xnAn)
evaluated at xac.

8.21 [BYT99] Maximum likelihood interpretation of analytic center. We use the linear measurement model of page 352,
y = Ax + v,
where A ∈ Rm×n. We assume the noise components vi are IID with support [−1, 1]. The set of parameters x consistent with the measurements y ∈ Rm is the polyhedron defined by the linear inequalities
−1 + y ≼ Ax ≼ 1 + y.   (8.37)
Suppose the probability density function of vi has the form
p(v) = αr (1 − v²)^r for −1 ≤ v ≤ 1, and p(v) = 0 otherwise,
where r ≥ 1 and αr > 0. Show that the maximum likelihood estimate of x is the analytic center of (8.37).

8.22 Center of gravity. The center of gravity of a set C ⊆ Rn with nonempty interior is defined as
xcg = (∫_C u du) / (∫_C 1 du).
The center of gravity is affine invariant, and (clearly) a function of the set C, and not its particular description. Unlike the centers described in the chapter, however, it is very difficult to compute the center of gravity, except in simple cases (e.g., ellipsoids, balls, simplexes).
Show that the center of gravity xcg is the minimizer of the convex function
f(x) = ∫_C ∥u − x∥2² du.
Classification
8.23 Robust linear discrimination. Consider the robust linear discrimination problem given in (8.23).
(a) Show that the optimal value t⋆ is positive if and only if the two sets of points can be linearly separated. When the two sets of points can be linearly separated, show that the inequality ∥a∥2 ≤ 1 is tight, i.e., we have ∥a⋆∥2 = 1, for the optimal a⋆.
(b) Using the change of variables ã = a/t, b̃ = b/t, prove that the problem (8.23) is equivalent to the QP
minimize ∥ã∥2
subject to ãT xi − b̃ ≥ 1, i = 1, . . . , N
ãT yi − b̃ ≤ −1, i = 1, . . . , M.
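For concreteness, the QP in part (b) is easy to prototype numerically. The following sketch is ours, not part of the exercise: it assumes the cvxpy modeling package and synthetic data matrices X and Y whose rows are the points xi and yj.

    import cvxpy as cp
    import numpy as np

    # Synthetic, linearly separable data (an assumption for illustration):
    # rows of X are the points x_i, rows of Y are the points y_j.
    rng = np.random.default_rng(0)
    X = rng.normal(loc=2.0, size=(20, 2))    # N = 20 points
    Y = rng.normal(loc=-2.0, size=(30, 2))   # M = 30 points

    a = cp.Variable(2)
    b = cp.Variable()
    # Minimizing ||a||_2 is equivalent to minimizing ||a||_2^2, which makes
    # the problem a QP in the variables a and b.
    prob = cp.Problem(cp.Minimize(cp.sum_squares(a)),
                      [X @ a - b >= 1, Y @ a - b <= -1])
    prob.solve()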
8.24 Linear discrimination maximally robust to weight errors. Suppose we are given two sets of points {x1, . . . , xN} and {y1, . . . , yM} in Rn that can be linearly separated. In §8.6.1 we showed how to find the affine function that discriminates the sets, and gives the largest gap in function values. We can also consider robustness with respect to changes in the vector a, which is sometimes called the weight vector. For a given a and b for which f(x) = aT x − b separates the two sets, we define the weight error margin as the norm of the smallest u ∈ Rn such that the affine function (a + u)T x − b no longer separates the two sets of points. In other words, the weight error margin is the maximum ρ such that
(a + u)T xi ≥ b,  i = 1, . . . , N,   (a + u)T yj ≤ b,  j = 1, . . . , M,
holds for all u with ∥u∥2 ≤ ρ.
Show how to find a and b that maximize the weight error margin, subject to the normalization constraint ∥a∥2 ≤ 1.
8.25 Most spherical separating ellipsoid. We are given two sets of vectors x1, . . . , xN ∈ Rn and y1, . . . , yM ∈ Rn, and wish to find the ellipsoid with minimum eccentricity (i.e., minimum condition number of the defining matrix) that contains the points x1, . . . , xN, but not the points y1, . . . , yM. Formulate this as a convex optimization problem.
Placement and floor planning
8.26 Quadratic placement. We consider a placement problem in R2, defined by an undirected graph A with N nodes, and with quadratic costs:
minimize Σ_{(i,j)∈A} ∥xi − xj∥2².
The variables are the positions xi ∈ R2, i = 1, . . . , M. The positions xi, i = M + 1, . . . , N are given. We define two vectors u, v ∈ RM by
u = (x11, x21, . . . , xM1),  v = (x12, x22, . . . , xM2),
containing the first and second components, respectively, of the free nodes.
Show that u and v can be found by solving two sets of linear equations,
Cu = d1,  Cv = d2,
where C ∈ SM. Give a simple expression for the coefficients of C in terms of the graph A.

8.27 Problems with minimum distance constraints. We consider a problem with variables
x1, . . . , xN ∈ Rk. The objective, f0(x1, . . . , xN), is convex, and the constraints
fi(x1, . . . , xN) ≤ 0,  i = 1, . . . , m,
are convex (i.e., the functions fi : RNk → R are convex). In addition, we have the minimum distance constraints
∥xi − xj∥2 ≥ Dmin,  i ≠ j,  i, j = 1, . . . , N.
In general, this is a hard nonconvex problem.
Following the approach taken in floorplanning, we can form a convex restriction of the problem, i.e., a problem which is convex, but has a smaller feasible set. (Solving the restricted problem is therefore easy, and any solution is guaranteed to be feasible for the nonconvex problem.) Let aij ∈ Rk, for i < j, i, j = 1, . . . , N, satisfy ∥aij∥2 = 1. Show that the restricted problem
minimize f0(x1, . . . , xN)
subject to fi(x1, . . . , xN) ≤ 0, i = 1, . . . , m
aijT (xi − xj) ≥ Dmin, i < j

where ǫ > 0 is some specified tolerance.
Initial point and sublevel set
The methods described in this chapter require a suitable starting point x(0). The starting point must lie in dom f, and in addition the sublevel set
S = {x ∈ dom f | f(x) ≤ f(x(0))}   (9.3)
must be closed. This condition is satisfied for all x(0) ∈ dom f if the function f is closed, i.e., all its sublevel sets are closed (see §A.3.3). Continuous functions with dom f = Rn are closed, so if dom f = Rn, the initial sublevel set condition is satisfied by any x(0). Another important class of closed functions are continuous functions with open domains, for which f(x) tends to infinity as x approaches bd dom f.
9.1.1 Examples
Quadratic minimization and least-squares
The general convex quadratic minimization problem has the form
minimize (1/2)xT P x + qT x + r, (9.4)
where P ∈ Sn+, q ∈ Rn, and r ∈ R. This problem can be solved via the optimality conditions, P x⋆ + q = 0, which is a set of linear equations. When P ≻ 0, there is a unique solution, x⋆ = −P−1q. In the more general case when P is not positive definite, any solution of Px⋆ = −q is optimal for (9.4); if Px⋆ = −q does not have a solution, then the problem (9.4) is unbounded below (see exercise 9.1). Our ability to analytically solve the quadratic minimization problem (9.4) is the basis for Newton’s method, a powerful method for unconstrained minimization described in §9.5.
One special case of the quadratic minimization problem that arises very frequently is the least-squares problem
minimize ∥Ax − b∥2² = xT (AT A)x − 2(AT b)T x + bT b.
The optimality conditions
AT Ax⋆ = AT b
are called the normal equations of the least-squares problem.
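As a concrete illustration (a minimal numpy sketch with made-up data, not part of the text), the normal equations can be solved directly; in practice a QR- or SVD-based routine such as numpy's lstsq is preferred for numerical reasons.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 5))    # hypothetical data
    b = rng.normal(size=30)

    # Solve the normal equations A^T A x = A^T b.
    x = np.linalg.solve(A.T @ A, A.T @ b)

    # Cross-check against a numerically preferred least-squares routine.
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
    assert np.allclose(x, x_ls)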
Unconstrained geometric programming
As a second example, we consider an unconstrained geometric program in convex form,
minimize f(x) = log (Σ_{i=1}^{m} exp(aiT x + bi)).
The optimality condition is
∇f(x⋆) = (1 / Σ_{j=1}^{m} exp(ajT x⋆ + bj)) Σ_{i=1}^{m} exp(aiT x⋆ + bi) ai = 0,
which in general has no analytical solution, so here we must resort to an iterative algorithm. For this problem, domf = Rn, so any point can be chosen as the initial point x(0).
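Although there is no analytical solution, the objective and its gradient are cheap to evaluate, which is all an iterative method needs. A possible numpy helper (our own sketch, using the standard max-shift to avoid overflow in the exponentials):

    import numpy as np

    def f_and_grad(x, A, b):
        # f(x) = log(sum_i exp(a_i^T x + b_i)); the rows of A are the a_i.
        z = A @ x + b
        zmax = z.max()              # shift for numerical stability
        e = np.exp(z - zmax)
        f = zmax + np.log(e.sum())
        grad = A.T @ (e / e.sum())  # softmax-weighted combination of the rows a_i
        return f, grad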
Analytic center of linear inequalities
We consider the optimization problem
minimize f(x) = −Σ_{i=1}^{m} log(bi − aiT x),   (9.5)
where the domain of f is the open set
dom f = {x | aiT x < bi, i = 1, . . . , m}.

9.1.2 Strong convexity and implications

We assume that the objective function is strongly convex on S, which means that there exists an m > 0 such that
∇2f(x) ≽ mI   (9.7)
for all x ∈ S. Strong convexity has several interesting consequences. For x, y ∈ S we have
f(y) = f(x) + ∇f(x)T (y − x) + (1/2)(y − x)T ∇2f(z)(y − x)
for some z on the line segment [x, y]. By the strong convexity assumption (9.7), the last term on the righthand side is at least (m/2)∥y − x∥2², so we have the inequality
f(y) ≥ f(x) + ∇f(x)T (y − x) + (m/2)∥y − x∥2²   (9.8)
for all x and y in S. When m = 0, we recover the basic inequality characterizing convexity; for m > 0 we obtain a better lower bound on f(y) than follows from convexity alone.
We will first show that the inequality (9.8) can be used to bound f(x) − p⋆, which is the suboptimality of the point x, in terms of ∥∇f(x)∥2. The righthand side of (9.8) is a convex quadratic function of y (for fixed x). Setting the gradient with respect to y equal to zero, we find that ỹ = x − (1/m)∇f(x) minimizes the righthand side. Therefore we have
f(y) ≥ f(x) + ∇f(x)T (y − x) + (m/2)∥y − x∥2²
≥ f(x) + ∇f(x)T (ỹ − x) + (m/2)∥ỹ − x∥2²
= f(x) − (1/(2m))∥∇f(x)∥2².
Since this holds for any y ∈ S, we have
p⋆ ≥ f(x) − (1/(2m))∥∇f(x)∥2².   (9.9)
This inequality shows that if the gradient is small at a point, then the point is nearly optimal. The inequality (9.9) can also be interpreted as a condition for suboptimality which generalizes the optimality condition (9.2):
∥∇f(x)∥2 ≤ (2mǫ)^{1/2} =⇒ f(x) − p⋆ ≤ ǫ.   (9.10)
We can also derive a bound on ∥x − x⋆∥2, the distance between x and any optimal point x⋆, in terms of ∥∇f(x)∥2:
∥x − x⋆∥2 ≤ (2/m)∥∇f(x)∥2.   (9.11)
To see this, we apply (9.8) with y = x⋆ to obtain
p⋆ = f(x⋆) ≥ f(x) + ∇f(x)T (x⋆ − x) + (m/2)∥x⋆ − x∥2²
≥ f(x) − ∥∇f(x)∥2 ∥x⋆ − x∥2 + (m/2)∥x⋆ − x∥2²,
where we use the Cauchy-Schwarz inequality in the second inequality. Since p⋆ ≤ f(x), we must have
−∥∇f(x)∥2 ∥x⋆ − x∥2 + (m/2)∥x⋆ − x∥2² ≤ 0,
from which (9.11) follows. One consequence of (9.11) is that the optimal point x⋆
is unique.
Upper bound on ∇2f(x)
The inequality (9.8) implies that the sublevel sets contained in S are bounded, so in particular, S is bounded. Therefore the maximum eigenvalue of ∇2f(x), which is a continuous function of x on S, is bounded above on S, i.e., there exists a constant M such that
∇2f(x) ≼ MI   (9.12)
for all x ∈ S. This upper bound on the Hessian implies for any x, y ∈ S,
f(y) ≤ f(x) + ∇f(x)T (y − x) + (M/2)∥y − x∥2²,   (9.13)
which is analogous to (9.8). Minimizing each side over y yields
p⋆ ≤ f(x) − (1/(2M))∥∇f(x)∥2²,   (9.14)
the counterpart of (9.9).

Condition number of sublevel sets
From the strong convexity inequality (9.7) and the inequality (9.12), we have
mI ≼ ∇2f(x) ≼ MI (9.15)
for all x ∈ S. The ratio κ = M/m is thus an upper bound on the condition number of the matrix ∇2f(x), i.e., the ratio of its largest eigenvalue to its smallest eigenvalue. We can also give a geometric interpretation of (9.15) in terms of the sublevel sets of f.
We define the width of a convex set C ⊆ Rn, in the direction q, where ∥q∥2 = 1, as
W(C, q) = sup_{z∈C} qT z − inf_{z∈C} qT z.
The minimum width and maximum width of C are given by
Wmin = inf_{∥q∥2=1} W(C, q),  Wmax = sup_{∥q∥2=1} W(C, q).
The condition number of the convex set C is defined as
cond(C) = Wmax² / Wmin²,
i.e., the square of the ratio of its maximum width to its minimum width. The condition number of C gives a measure of its anisotropy or eccentricity. If the condition number of a set C is small (say, near one) it means that the set has approximately the same width in all directions, i.e., it is nearly spherical. If the condition number is large, it means that the set is far wider in some directions than in others.
Example 9.1 Condition number of an ellipsoid. Let E be the ellipsoid
E = {x | (x − x0)T A−1 (x − x0) ≤ 1},
where A ∈ Sn++. The width of E in the direction q is
sup_{z∈E} qT z − inf_{z∈E} qT z = (∥A^{1/2} q∥2 + qT x0) − (−∥A^{1/2} q∥2 + qT x0)
= 2∥A^{1/2} q∥2.
It follows that its minimum and maximum width are
Wmin = 2λmin(A)^{1/2},  Wmax = 2λmax(A)^{1/2},
and its condition number is
cond(E) = λmax(A)/λmin(A) = κ(A),
where κ(A) denotes the condition number of the matrix A, i.e., the ratio of its maximum singular value to its minimum singular value. Thus the condition number of the ellipsoid E is the same as the condition number of the matrix A that defines it.
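This example is easy to check numerically. A small sketch (the matrix A below is an arbitrary positive definite matrix of our choosing):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [1.0, 2.0]])     # arbitrary A in S^2_++

    # Width in a unit direction q: W(E, q) = 2*||A^{1/2} q||_2 = 2*sqrt(q^T A q).
    q = np.array([1.0, 0.0])
    width = 2.0 * np.sqrt(q @ A @ q)

    # cond(E) = lambda_max(A) / lambda_min(A).
    eigs = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
    cond_E = eigs[-1] / eigs[0]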
Now suppose f satisfies mI ≼ ∇2f(x) ≼ MI for all x ∈ S. We will derive a bound on the condition number of the α-sublevel set Cα = {x | f(x) ≤ α}, where p⋆ < α ≤ f(x(0)). Applying (9.13) and (9.8) with x = x⋆, we have
p⋆ + (M/2)∥y − x⋆∥2² ≥ f(y) ≥ p⋆ + (m/2)∥y − x⋆∥2².
This implies that Binner ⊆ Cα ⊆ Bouter where
Binner = {y | ∥y − x⋆∥2 ≤ (2(α − p⋆)/M)^{1/2}},
Bouter = {y | ∥y − x⋆∥2 ≤ (2(α − p⋆)/m)^{1/2}}.
In other words, the α-sublevel set contains Binner, and is contained in Bouter, which are balls with radii
(2(α − p⋆)/M)^{1/2},  (2(α − p⋆)/m)^{1/2},
respectively. The ratio of the radii squared gives an upper bound on the condition number of Cα:
cond(Cα) ≤ M/m.

We can also give a geometric interpretation of the condition number κ(∇2f(x⋆)) of the Hessian at the optimum. From the Taylor series expansion of f around x⋆,
f(y) ≈ p⋆ + (1/2)(y − x⋆)T ∇2f(x⋆)(y − x⋆),
we see that, for α close to p⋆,
Cα ≈ {y | (y − x⋆)T ∇2f(x⋆)(y − x⋆) ≤ 2(α − p⋆)},
i.e., the sublevel set is well approximated by an ellipsoid with center x⋆. Therefore
lim_{α→p⋆} cond(Cα) = κ(∇2f(x⋆)).
We will see that the condition number of the sublevel sets of f (which is bounded by M/m) has a strong effect on the efficiency of some common methods for unconstrained minimization.

The strong convexity constants

It must be kept in mind that the constants m and M are known only in rare cases, so the inequality (9.10) cannot be used as a practical stopping criterion. It can be considered a conceptual stopping criterion; it shows that if the gradient of f at x is small enough, then the difference between f(x) and p⋆ is small. If we terminate an algorithm when ∥∇f(x(k))∥2 ≤ η, where η is chosen small enough to be (very likely) smaller than (mǫ)^{1/2}, then we have f(x(k)) − p⋆ ≤ ǫ (very likely).

In the following sections we give convergence proofs for algorithms, which include bounds on the number of iterations required before f(x(k)) − p⋆ ≤ ǫ, where ǫ is some positive tolerance. Many of these bounds involve the (usually unknown) constants m and M, so the same comments apply. These results are at least conceptually useful; they establish that the algorithm converges, even if the bound on the number of iterations required to reach a given accuracy depends on constants that are unknown.

We will encounter one important exception to this situation. In §9.6 we will study a special class of convex functions, called self-concordant, for which we can provide a complete convergence analysis (for Newton's method) that does not depend on any unknown constants.

9.2 Descent methods

The algorithms described in this chapter produce a minimizing sequence x(k), k = 1, . . . , where
x(k+1) = x(k) + t(k)∆x(k)
and t(k) > 0 (except when x(k) is optimal). Here the concatenated symbols ∆ and x that form ∆x are to be read as a single entity, a vector in Rn called the step or search direction (even though it need not have unit norm), and k = 0, 1, . . . denotes the iteration number. The scalar t(k) ≥ 0 is called the step size or step length at iteration k (even though it is not equal to ∥x(k+1) − x(k)∥ unless ∥∆x(k)∥ = 1). The terms 'search step' and 'scale factor' are more accurate, but 'search direction' and 'step length' are the ones widely used. When we focus on one iteration of an algorithm, we sometimes drop the superscripts and use the lighter notation x+ = x + t∆x, or x := x + t∆x, in place of x(k+1) = x(k) + t(k)∆x(k).
All the methods we study are descent methods, which means that
f(x(k+1)) < f(x(k)),
except when x(k) is optimal. This implies that for all k we have x(k) ∈ S, the initial sublevel set, and in particular we have x(k) ∈ dom f. From convexity we know that ∇f(x(k))T (y − x(k)) ≥ 0 implies f(y) ≥ f(x(k)), so the search direction in a descent method must satisfy
∇f(x(k))T ∆x(k) < 0,
i.e., it must make an acute angle with the negative gradient. We call such a direction a descent direction (for f, at x(k)).

The outline of a general descent method is as follows. It alternates between two steps: determining a descent direction ∆x, and the selection of a step size t.

Algorithm 9.1 General descent method.
given a starting point x ∈ dom f.
repeat
1. Determine a descent direction ∆x.
2. Line search. Choose a step size t > 0.
3. Update. x := x + t∆x.
until stopping criterion is satisfied.
The second step is called the line search since selection of the step size t determines where along the line {x + t∆x | t ∈ R+} the next iterate will be. (A more accurate term might be ray search.)

A practical descent method has the same general structure, but might be organized differently. For example, the stopping criterion is often checked while, or immediately after, the descent direction ∆x is computed. The stopping criterion is often of the form ∥∇f(x)∥2 ≤ η, where η is small and positive, as suggested by the suboptimality condition (9.9).
Exact line search
One line search method sometimes used in practice is exact line search, in which t is chosen to minimize f along the ray {x + t∆x | t ≥ 0}:
t = argmin_{s≥0} f(x + s∆x).   (9.16)
An exact line search is used when the cost of the minimization problem with one variable, required in (9.16), is low compared to the cost of computing the search direction itself. In some special cases the minimizer along the ray can be found an- alytically, and in others it can be computed efficiently. (This is discussed in §9.7.1.)
Backtracking line search
Most line searches used in practice are inexact: the step length is chosen to approximately minimize f along the ray {x + t∆x | t ≥ 0}, or even to just reduce f 'enough'. Many inexact line search methods have been proposed. One inexact line search method that is very simple and quite effective is called backtracking line search. It depends on two constants α, β with 0 < α < 0.5, 0 < β < 1.

Algorithm 9.2 Backtracking line search.
given a descent direction ∆x for f at x ∈ dom f, α ∈ (0, 0.5), β ∈ (0, 1).
t := 1.
while f(x + t∆x) > f(x) + αt∇f(x)T ∆x,  t := βt.
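A direct transcription of algorithm 9.2 in Python might look as follows. This is a sketch, with names of our choosing; f is assumed to return +∞ outside dom f, in line with the convention discussed below.

    import numpy as np

    def backtracking(f, x, dx, grad_x, alpha=0.1, beta=0.7):
        # Shrink t until f(x + t*dx) <= f(x) + alpha*t*grad_x^T dx.
        t = 1.0
        fx = f(x)
        slope = grad_x @ dx   # directional derivative; negative for a descent direction
        while f(x + t * dx) > fx + alpha * t * slope:
            t *= beta
        return t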
Figure 9.1 Backtracking line search. The curve shows f, restricted to the line over which we search. The lower dashed line shows the linear extrapolation of f, and the upper dashed line has a slope a factor of α smaller. The backtracking condition is that f lies below the upper dashed line, i.e., 0 ≤ t ≤ t0.
The line search is called backtracking because it starts with unit step size and then reduces it by the factor β until the stopping condition f(x + t∆x) ≤ f(x) + αt∇f(x)T ∆x holds. Since ∆x is a descent direction, we have ∇f(x)T ∆x < 0, so for small enough t we have
f(x + t∆x) ≈ f(x) + t∇f(x)T ∆x < f(x) + αt∇f(x)T ∆x,
which shows that the backtracking line search eventually terminates. The constant α can be interpreted as the fraction of the decrease in f predicted by linear extrapolation that we will accept. (The reason for requiring α to be smaller than 0.5 will become clear later.)

The backtracking condition is illustrated in figure 9.1. This figure suggests, and it can be shown, that the backtracking exit inequality f(x + t∆x) ≤ f(x) + αt∇f(x)T ∆x holds for t ≥ 0 in an interval (0, t0]. It follows that the backtracking line search stops with a step length t that satisfies
t = 1,  or  t ∈ (βt0, t0].
The first case occurs when the step length t = 1 satisfies the backtracking condition, i.e., 1 ≤ t0. In particular, we can say that the step length obtained by backtracking line search satisfies
t ≥ min{1, βt0}.

When dom f is not all of Rn, the condition f(x + t∆x) ≤ f(x) + αt∇f(x)T ∆x in the backtracking line search must be interpreted carefully. By our convention that f is infinite outside its domain, the inequality implies that x + t∆x ∈ dom f. In a practical implementation, we first multiply t by β until x + t∆x ∈ dom f; then we start to check whether the inequality f(x + t∆x) ≤ f(x) + αt∇f(x)T ∆x holds.

The parameter α is typically chosen between 0.01 and 0.3, meaning that we accept a decrease in f between 1% and 30% of the prediction based on the linear extrapolation. The parameter β is often chosen to be between 0.1 (which corresponds to a very crude search) and 0.8 (which corresponds to a less crude search).

9.3 Gradient descent method

A natural choice for the search direction is the negative gradient ∆x = −∇f(x). The resulting algorithm is called the gradient algorithm or gradient descent method.

Algorithm 9.3 Gradient descent method.
given a starting point x ∈ dom f.
repeat
1. ∆x := −∇f(x).
2. Line search. Choose step size t via exact or backtracking line search.
3. Update. x := x + t∆x.
until stopping criterion is satisfied.

The stopping criterion is usually of the form ∥∇f(x)∥2 ≤ η, where η is small and positive. In most implementations, this condition is checked after step 1, rather than after the update.

9.3.1 Convergence analysis

In this section we present a simple convergence analysis for the gradient method, using the lighter notation x+ = x + t∆x for x(k+1) = x(k) + t(k)∆x(k), where ∆x = −∇f(x). We assume f is strongly convex on S, so there are positive constants m and M such that mI ≼ ∇2f(x) ≼ MI for all x ∈ S. Define the function f̃ : R → R by f̃(t) = f(x − t∇f(x)), i.e., f as a function of the step length t in the negative gradient direction. In the following discussion we will only consider t for which x − t∇f(x) ∈ S. From the inequality (9.13), with y = x − t∇f(x), we obtain a quadratic upper bound on f̃:
f̃(t) ≤ f(x) − t∥∇f(x)∥2² + (Mt²/2)∥∇f(x)∥2².   (9.17)

Analysis for exact line search

We now assume that an exact line search is used, and minimize over t both sides of the inequality (9.17). On the lefthand side we get f̃(texact), where texact is the step length that minimizes f̃.
The righthand side is a simple quadratic, which is minimized by t = 1/M, and has minimum value f(x) − (1/(2M))∥∇f(x)∥2². Therefore we have
f(x+) = f̃(texact) ≤ f(x) − (1/(2M))∥∇f(x)∥2².
Subtracting p⋆ from both sides, we get
f(x+) − p⋆ ≤ f(x) − p⋆ − (1/(2M))∥∇f(x)∥2².
We combine this with ∥∇f(x)∥2² ≥ 2m(f(x) − p⋆) (which follows from (9.9)) to conclude
f(x+) − p⋆ ≤ (1 − m/M)(f(x) − p⋆).
Applying this inequality recursively, we find that
f(x(k)) − p⋆ ≤ c^k (f(x(0)) − p⋆)   (9.18)
where c = 1 − m/M < 1, which shows that f(x(k)) converges to p⋆ as k → ∞. In particular, we must have f(x(k)) − p⋆ ≤ ǫ after at most
log((f(x(0)) − p⋆)/ǫ) / log(1/c)   (9.19)
iterations of the gradient method with exact line search.

This bound on the number of iterations required, even though crude, can give some insight into the gradient method. The numerator, log((f(x(0)) − p⋆)/ǫ), can be interpreted as the log of the ratio of the initial suboptimality (i.e., gap between f(x(0)) and p⋆), to the final suboptimality (i.e., less than ǫ). This term suggests that the number of iterations depends on how good the initial point is, and what the final required accuracy is.

The denominator appearing in the bound (9.19), log(1/c), is a function of M/m, which we have seen is a bound on the condition number of ∇2f(x) over S, or the condition number of the sublevel sets {z | f(z) ≤ α}. For large condition number bound M/m, we have
log(1/c) = −log(1 − m/M) ≈ m/M,
so our bound on the number of iterations required increases approximately linearly with increasing M/m.

We will see that the gradient method does in fact require a large number of iterations when the Hessian of f, near x⋆, has a large condition number. Conversely, when the sublevel sets of f are relatively isotropic, so that the condition number bound M/m can be chosen to be relatively small, the bound (9.18) shows that convergence is rapid, since c is small, or at least not too close to one.

The bound (9.18) shows that the error f(x(k)) − p⋆ converges to zero at least as fast as a geometric series. In the context of iterative numerical methods, this is called linear convergence, since the error lies below a line on a log-linear plot of error versus iteration number.

Analysis for backtracking line search

Now we consider the case where a backtracking line search is used in the gradient descent method. We will show that the backtracking exit condition,
f̃(t) ≤ f(x) − αt∥∇f(x)∥2²,
is satisfied whenever 0 ≤ t ≤ 1/M. First note that
0 ≤ t ≤ 1/M =⇒ −t + Mt²/2 ≤ −t/2
(which follows from convexity of −t + Mt²/2). Using this result and the bound (9.17), we have, for 0 ≤ t ≤ 1/M,
f̃(t) ≤ f(x) − t∥∇f(x)∥2² + (Mt²/2)∥∇f(x)∥2²
≤ f(x) − (t/2)∥∇f(x)∥2²
≤ f(x) − αt∥∇f(x)∥2²,
since α < 1/2. Therefore the backtracking line search terminates either with t = 1 or with a value t ≥ β/M. This provides a lower bound on the decrease in the objective function. In the first case we have
f(x+) ≤ f(x) − α∥∇f(x)∥2²,
and in the second case we have
f(x+) ≤ f(x) − (βα/M)∥∇f(x)∥2².
Putting these together, we always have
f(x+) ≤ f(x) − min{α, βα/M}∥∇f(x)∥2².
Now we can proceed exactly as in the case of exact line search. We subtract p⋆ from both sides to get
f(x+) − p⋆ ≤ f(x) − p⋆ − min{α, βα/M}∥∇f(x)∥2²,
and combine this with ∥∇f(x)∥2² ≥ 2m(f(x) − p⋆) to obtain
f(x+) − p⋆ ≤ (1 − min{2mα, 2βαm/M})(f(x) − p⋆).
From this we conclude
f(x(k)) − p⋆ ≤ c^k (f(x(0)) − p⋆)
where
c = 1 − min{2mα, 2βαm/M} < 1.
In particular, f(x(k)) converges to p⋆ at least as fast as a geometric series with an exponent that depends (at least in part) on the condition number bound M/m. In the terminology of iterative methods, the convergence is at least linear.

Figure 9.2 Some contour lines of the function f(x) = (1/2)(x1² + 10x2²). The condition number of the sublevel sets, which are ellipsoids, is exactly 10. The figure shows the iterates of the gradient method with exact line search, started at x(0) = (10, 1).

9.3.2 Examples

A quadratic problem in R2

Our first example is very simple. We consider the quadratic objective function on R2
f(x) = (1/2)(x1² + γx2²),
where γ > 0. Clearly, the optimal point is x⋆ = 0, and the optimal value is 0. The Hessian of f is constant, and has eigenvalues 1 and γ, so the condition numbers of the sublevel sets of f are all exactly
max{1, γ}/min{1, γ} = max{γ, 1/γ}.
The tightest choices for the strong convexity constants m and M are m = min{1, γ} and M = max{1, γ}.
We apply the gradient descent method with exact line search, starting at the point x(0) = (γ, 1). In this case we can derive the following closed-form expressions for the iterates x(k) and their function values (exercise 9.6):
x1(k) = γ ((γ − 1)/(γ + 1))^k,  x2(k) = (−(γ − 1)/(γ + 1))^k,
and
f(x(k)) = (γ(γ + 1)/2) ((γ − 1)/(γ + 1))^{2k} = ((γ − 1)/(γ + 1))^{2k} f(x(0)).
This is illustrated in figure 9.2, for γ = 10.
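These closed-form expressions are easy to verify numerically. In the sketch below (our own code), the exact line search step for a quadratic with Hessian H has the analytical form t = (gT g)/(gT Hg), where g is the gradient at the current point:

    import numpy as np

    gamma = 10.0
    H = np.diag([1.0, gamma])     # Hessian of f(x) = (1/2)(x1^2 + gamma*x2^2)
    x = np.array([gamma, 1.0])    # x(0) = (gamma, 1)

    r = (gamma - 1.0) / (gamma + 1.0)
    for k in range(1, 11):
        g = H @ x                          # gradient at x
        t = (g @ g) / (g @ H @ g)          # exact line search step for a quadratic
        x = x - t * g
        assert np.allclose(x, [gamma * r**k, (-r)**k])   # matches the closed form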
For this simple example, convergence is exactly linear, i.e., the error is exactly
a geometric series, reduced by the factor |(γ − 1)/(γ + 1)|² at each iteration. For
γ = 1, the exact solution is found in one iteration; for γ not far from one (say, between 1/3 and 3) convergence is rapid. The convergence is very slow for γ ≫ 1 or γ ≪ 1.
We can compare the convergence with the bound derived above in §9.3.1. Using the least conservative values m = min{1, γ} and M = max{1, γ}, the bound (9.18) guarantees that the error in each iteration is reduced at least by the factor c = (1 − m/M ). We have seen that the error is in fact reduced exactly by the factor
((1 − m/M)/(1 + m/M))²
in each iteration. For small m/M, which corresponds to large condition number, the upper bound (9.19) implies that the number of iterations required to obtain a given level of accuracy grows at most like M/m. For this example, the exact number of iterations required grows approximately like (M/m)/4, i.e., one quarter of the value of the bound. This shows that for this simple example, the bound on the number of iterations derived in our simple analysis is only about a factor of four conservative (using the least conservative values for m and M). In particular, the convergence rate (as well as its upper bound) is very dependent on the condition number of the sublevel sets.
A nonquadratic problem in R2
We now consider a nonquadratic example in R2, with
f(x1, x2) = e^{x1+3x2−0.1} + e^{x1−3x2−0.1} + e^{−x1−0.1}.   (9.20)
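For readers who wish to reproduce this experiment, the objective (9.20) and its gradient are straightforward to code; the helper functions below are our own sketch.

    import numpy as np

    def f(x):
        # objective (9.20)
        return (np.exp(x[0] + 3*x[1] - 0.1)
                + np.exp(x[0] - 3*x[1] - 0.1)
                + np.exp(-x[0] - 0.1))

    def grad(x):
        e1 = np.exp(x[0] + 3*x[1] - 0.1)
        e2 = np.exp(x[0] - 3*x[1] - 0.1)
        e3 = np.exp(-x[0] - 0.1)
        return np.array([e1 + e2 - e3, 3*e1 - 3*e2])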
We apply the gradient method with a backtracking line search, with α = 0.1, β = 0.7. Figure 9.3 shows some level curves of f, and the iterates x(k) generated by the gradient method (shown as small circles). The lines connecting successive iterates show the scaled steps,
x(k+1) − x(k) = −t(k)∇f(x(k)).
Figure 9.4 shows the error f(x(k)) − p⋆ versus iteration k. The plot reveals that the error converges to zero approximately as a geometric series, i.e., the convergence is approximately linear. In this example, the error is reduced from about 10 to about 10^{−7} in 20 iterations, so the error is reduced by a factor of approximately 10^{−8/20} ≈ 0.4 each iteration. This reasonably rapid convergence is predicted by our convergence analysis, since the sublevel sets of f are not too badly conditioned, which in turn means that M/m can be chosen as not too large.
To compare backtracking line search with an exact line search, we use the gradient method with an exact line search, on the same problem, and with the same starting point. The results are given in figures 9.5 and 9.4. Here too the convergence is approximately linear, about twice as fast as the gradient method with backtracking line search. With exact line search, the error is reduced by about 10^{−11} in 15 iterations, i.e., a reduction by a factor of about 10^{−11/15} ≈ 0.2 per iteration.
Figure 9.3 Iterates of the gradient method with backtracking line search, for the problem in R2 with objective f given in (9.20). The dashed curves are level curves of f, and the small circles are the iterates of the gradient method. The solid lines, which connect successive iterates, show the scaled steps t(k)∆x(k).
Figure 9.4 Error f(x(k)) − p⋆ versus iteration k of the gradient method with backtracking and exact line search, for the problem in R2 with objective f given in (9.20). The plot shows nearly linear convergence, with the error reduced approximately by the factor 0.4 in each iteration of the gradient method with backtracking line search, and by the factor 0.2 in each iteration of the gradient method with exact line search.
Figure 9.5 Iterates of the gradient method with exact line search for the problem in R2 with objective f given in (9.20).
A problem in R100
We next consider a larger example, of the form
f(x) = cT x − Σ_{i=1}^{m} log(bi − aiT x),   (9.21)
with m = 500 terms and n = 100 variables.
The progress of the gradient method with backtracking line search, with parameters α = 0.1, β = 0.5, is shown in figure 9.6. In this example we see an initial approximately linear and fairly rapid convergence for about 20 iterations, followed by a slower linear convergence. Overall, the error is reduced by a factor of around 10^6 in around 175 iterations, which gives an average error reduction by a factor of around 10^{−6/175} ≈ 0.92 per iteration. The initial convergence rate, for the first 20 iterations, is around a factor of 0.8 per iteration; the slower final convergence rate, after the first 20 iterations, is around a factor of 0.94 per iteration.
Figure 9.6 shows the convergence of the gradient method with exact line search. The convergence is again approximately linear, with an overall error reduction by approximately a factor 10^{−6/140} ≈ 0.91 per iteration. This is only a bit faster than the gradient method with backtracking line search.
Finally, we examine the influence of the backtracking line search parameters α and β on the convergence rate, by determining the number of iterations required to obtain f(x(k)) − p⋆ ≤ 10^{−5}. In the first experiment, we fix β = 0.5, and vary α from 0.05 to 0.5. The number of iterations required varies from about 80, for larger values of α, in the range 0.2–0.5, to about 170 for smaller values of α. This, and other experiments, suggest that the gradient method works better with fairly large α, in the range 0.2–0.5.
Similarly, we can study the effect of the choice of β by fixing α = 0.1 and varying β from 0.05 to 0.95. Again the variation in the total number of iterations is not large, ranging from around 80 (when β ≈ 0.5) to around 200 (for β small, or near 1). This experiment, and others, suggest that β ≈ 0.5 is a good choice.
Figure 9.6 Error f(x(k))−p⋆ versus iteration k for the gradient method with backtracking and exact line search, for a problem in R100.
These experiments suggest that the effect of the backtracking parameters on the convergence is not large, no more than a factor of two or so.
Gradient method and condition number
Our last experiment will illustrate the importance of the condition number of ∇2f(x) (or the sublevel sets) on the rate of convergence of the gradient method. We start with the function given by (9.21), but replace the variable x by x = T x̄, where
T = diag((1, γ^{1/n}, γ^{2/n}, . . . , γ^{(n−1)/n})),
i.e., we minimize
f̄(x̄) = cT T x̄ − Σ_{i=1}^{m} log(bi − aiT T x̄).   (9.22)
This gives us a family of optimization problems, indexed by γ, which affects the problem condition number.
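The scaling matrix is simple to construct; for example, in numpy (a short sketch of our own, assuming γ and n are given):

    import numpy as np

    def scaling(gamma, n):
        # T = diag(1, gamma^(1/n), gamma^(2/n), ..., gamma^((n-1)/n))
        return np.diag(gamma ** (np.arange(n) / n))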
Figure 9.7 shows the number of iterations required to achieve f̄(x̄(k)) − p̄⋆ < 10^{−5} as a function of γ, using a backtracking line search with α = 0.3 and β = 0.7. This plot shows that for diagonal scaling as small as 10 : 1 (i.e., γ = 10), the number of iterations grows to more than a thousand; for a diagonal scaling of 20 or more, the gradient method slows to essentially useless.

The condition number of the Hessian ∇2f̄(x̄⋆) at the optimum is shown in figure 9.8. For large and small γ, the condition number increases roughly as max{γ², 1/γ²}, in a very similar way as the number of iterations depends on γ. This shows again that the relation between conditioning and convergence speed is a real phenomenon, and not just an artifact of our analysis.

Figure 9.7 Number of iterations of the gradient method applied to problem (9.22). The vertical axis shows the number of iterations required to obtain f̄(x̄(k)) − p̄⋆ < 10^{−5}. The horizontal axis shows γ, which is a parameter that controls the amount of diagonal scaling. We use a backtracking line search with α = 0.3, β = 0.7.

Figure 9.8 Condition number of the Hessian of the function at its minimum, as a function of γ. By comparing this plot with the one in figure 9.7, we see that the condition number has a very strong influence on convergence rate.

Conclusions

From the numerical examples shown, and others, we can make the conclusions summarized below.

• The gradient method often exhibits approximately linear convergence, i.e., the error f(x(k)) − p⋆ converges to zero approximately as a geometric series.

• The choice of backtracking parameters α, β has a noticeable but not dramatic effect on the convergence. An exact line search sometimes improves the convergence of the gradient method, but the effect is not large (and probably not worth the trouble of implementing the exact line search).

• The convergence rate depends greatly on the condition number of the Hessian, or the sublevel sets. Convergence can be very slow, even for problems that are moderately well conditioned (say, with condition number in the 100s). When the condition number is larger (say, 1000 or more) the gradient method is so slow that it is useless in practice.

The main advantage of the gradient method is its simplicity. Its main disadvantage is that its convergence rate depends so critically on the condition number of the Hessian or sublevel sets.

9.4 Steepest descent method

The first-order Taylor approximation of f(x + v) around x is
f(x + v) ≈ f̂(x + v) = f(x) + ∇f(x)T v.
The second term on the righthand side, ∇f(x)T v, is the directional derivative of f at x in the direction v. It gives the approximate change in f for a small step v. The step v is a descent direction if the directional derivative is negative.

We now address the question of how to choose v to make the directional derivative as negative as possible. Since the directional derivative ∇f(x)T v is linear in v, it can be made as negative as we like by taking v large (provided v is a descent direction, i.e., ∇f(x)T v < 0). To make the question sensible we have to limit the size of v, or normalize by the length of v.

Let ∥ · ∥ be any norm on Rn. We define a normalized steepest descent direction (with respect to the norm ∥ · ∥) as
∆xnsd = argmin{∇f(x)T v | ∥v∥ = 1}.   (9.23)
(We say 'a' steepest descent direction because there can be multiple minimizers.) A normalized steepest descent direction ∆xnsd is a step of unit norm that gives the largest decrease in the linear approximation of f.

A normalized steepest descent direction can be interpreted geometrically as follows. We can just as well define ∆xnsd as
∆xnsd = argmin{∇f(x)T v | ∥v∥ ≤ 1},
i.e., as the direction in the unit ball of ∥ · ∥ that extends farthest in the direction −∇f(x).

It is also convenient to consider a steepest descent step ∆xsd that is unnormalized, by scaling the normalized steepest descent direction in a particular way:
∆xsd = ∥∇f(x)∥∗ ∆xnsd,   (9.24)
where ∥ · ∥∗ denotes the dual norm. Note that for the steepest descent step, we have
∇f(x)T ∆xsd = ∥∇f(x)∥∗ ∇f(x)T ∆xnsd = −∥∇f(x)∥∗²
(see exercise 9.7).

The steepest descent method uses the steepest descent direction as search direction.

Algorithm 9.4 Steepest descent method.
given a starting point x ∈ dom f.
repeat
1. Compute steepest descent direction ∆xsd.
2. Line search. Choose t via backtracking or exact line search.
3. Update. x := x + t∆xsd.
until stopping criterion is satisfied.

When exact line search is used, scale factors in the descent direction have no effect, so the normalized or unnormalized direction can be used.

9.4.1 Steepest descent for Euclidean and quadratic norms

Steepest descent for Euclidean norm

If we take the norm ∥ · ∥ to be the Euclidean norm we find that the steepest descent direction is simply the negative gradient, i.e., ∆xsd = −∇f(x). The steepest descent method for the Euclidean norm coincides with the gradient descent method.

Steepest descent for quadratic norm

We consider the quadratic norm
∥z∥P = (zT Pz)^{1/2} = ∥P^{1/2} z∥2,
where P ∈ Sn++. The normalized steepest descent direction is given by
∆xnsd = −(∇f(x)T P−1 ∇f(x))^{−1/2} P−1 ∇f(x).
The dual norm is given by ∥z∥∗ = ∥P^{−1/2} z∥2, so the steepest descent step with respect to ∥ · ∥P is given by
∆xsd = −P−1 ∇f(x).   (9.25)
The normalized steepest descent direction for a quadratic norm is illustrated in figure 9.9.

Figure 9.9 Normalized steepest descent direction for a quadratic norm. The ellipsoid shown is the unit ball of the norm, translated to the point x. The normalized steepest descent direction ∆xnsd at x extends as far as possible in the direction −∇f(x) while staying in the ellipsoid. The gradient and normalized steepest descent directions are shown.

Interpretation via change of coordinates

We can give an interesting alternative interpretation of the steepest descent direction ∆xsd as the gradient search direction after a change of coordinates is applied to the problem. Define ū = P^{1/2} u, so we have ∥u∥P = ∥ū∥2. Using this change of coordinates, we can solve the original problem of minimizing f by solving the equivalent problem of minimizing the function f̄ : Rn → R, given by
f̄(ū) = f(P^{−1/2} ū) = f(u).
If we apply the gradient method to f̄, the search direction at a point x̄ (which corresponds to the point x = P^{−1/2} x̄ for the original problem) is
∆x̄ = −∇f̄(x̄) = −P^{−1/2} ∇f(P^{−1/2} x̄) = −P^{−1/2} ∇f(x).
This gradient search direction corresponds to the direction
∆x = P^{−1/2} (−P^{−1/2} ∇f(x)) = −P−1 ∇f(x)
for the original variable x. In other words, the steepest descent method in the quadratic norm ∥ · ∥P can be thought of as the gradient method applied to the problem after the change of coordinates x̄ = P^{1/2} x.
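In code, the steepest descent step (9.25) is computed by solving a linear system rather than forming P−1 explicitly. A minimal sketch (our own helper; for repeated use, a Cholesky factorization of P could be computed once and reused):

    import numpy as np

    def steepest_descent_step(grad_x, P):
        # Unnormalized steepest descent step (9.25) for the quadratic P-norm:
        # solve P dx = -grad_x instead of forming the inverse of P.
        return -np.linalg.solve(P, grad_x)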
9.4.2 Steepest descent for l1-norm

As another example, we consider the steepest descent method for the l1-norm. A normalized steepest descent direction,
∆xnsd = argmin{∇f(x)T v | ∥v∥1 ≤ 1},
is easily characterized. Let i be any index for which ∥∇f(x)∥∞ = |(∇f(x))i|. Then a normalized steepest descent direction ∆xnsd for the l1-norm is given by
∆xnsd = −sign(∂f(x)/∂xi) ei,
where ei is the ith standard basis vector. An unnormalized steepest descent step is then
∆xsd = ∆xnsd ∥∇f(x)∥∞ = −(∂f(x)/∂xi) ei.
Thus, the normalized steepest descent step in l1-norm can always be chosen to be a standard basis vector (or a negative standard basis vector). It is the coordinate axis direction along which the approximate decrease in f is greatest. This is illustrated in figure 9.10.

Figure 9.10 Normalized steepest descent direction for the l1-norm. The diamond is the unit ball of the l1-norm, translated to the point x. The normalized steepest descent direction can always be chosen in the direction of a standard basis vector; in this example we have ∆xnsd = e1.

The steepest descent algorithm in the l1-norm has a very natural interpretation: At each iteration we select a component of ∇f(x) with maximum absolute value, and then decrease or increase the corresponding component of x, according to the sign of (∇f(x))i. The algorithm is sometimes called a coordinate-descent algorithm, since only one component of the variable x is updated at each iteration. This can greatly simplify, or even trivialize, the line search.

Example 9.2 Frobenius norm scaling. In §4.5.4 we encountered the unconstrained geometric program
minimize Σ_{i,j=1}^{n} Mij² di²/dj²,
where M ∈ Rn×n is given, and the variable is d ∈ Rn. Using the change of variables xi = 2 log di we can express this geometric program in convex form as
minimize f(x) = log (Σ_{i,j=1}^{n} Mij² e^{xi−xj}).
It is easy to minimize f one component at a time. Keeping all components except the kth fixed, we can write f(x) = log(αk + βk e^{−xk} + γk e^{xk}), where
αk = Mkk² + Σ_{i,j≠k} Mij² e^{xi−xj},  βk = Σ_{i≠k} Mik² e^{xi},  γk = Σ_{j≠k} Mkj² e^{−xj}.
The minimum of f(x), as a function of xk, is obtained for xk = log(βk/γk)/2. So for this problem an exact line search can be carried out using a simple analytical formula.

The l1-steepest descent algorithm with exact line search consists of repeating the following steps.
1. Compute the gradient
(∇f(x))i = (−βi e^{−xi} + γi e^{xi}) / (αi + βi e^{−xi} + γi e^{xi}),  i = 1, . . . , n.
2. Select a largest (in absolute value) component of ∇f(x): |∇f(x)|k = ∥∇f(x)∥∞.
3. Minimize f over the scalar variable xk, by setting xk = log(βk/γk)/2.

9.4.3 Convergence analysis

In this section we extend the convergence analysis for the gradient method with backtracking line search to the steepest descent method for an arbitrary norm. We will use the fact that any norm can be bounded in terms of the Euclidean norm, so there exist constants γ, γ̃ ∈ (0, 1] such that
∥x∥ ≥ γ∥x∥2,  ∥x∥∗ ≥ γ̃∥x∥2
(see §A.1.4). Again we assume f is strongly convex on the initial sublevel set S. The upper bound ∇2f(x) ≼ MI implies an upper bound on the function f(x + t∆xsd) as a function of t:
f(x + t∆xsd) ≤ f(x) + t∇f(x)T ∆xsd + (M∥∆xsd∥2²/2) t²
≤ f(x) + t∇f(x)T ∆xsd + (M∥∆xsd∥²/(2γ²)) t²
= f(x) − t∥∇f(x)∥∗² + (M/(2γ²)) t²∥∇f(x)∥∗².   (9.26)
The step size t̂ = γ²/M (which minimizes the quadratic upper bound (9.26)) satisfies the exit condition for the backtracking line search:
f(x + t̂∆xsd) ≤ f(x) − (γ²/(2M))∥∇f(x)∥∗² ≤ f(x) + (αγ²/M)∇f(x)T ∆xsd,   (9.27)
since α < 1/2 and ∇f(x)T ∆xsd = −∥∇f(x)∥∗². The line search therefore returns a step size t ≥ min{1, βγ²/M}, and we have
f(x+) = f(x + t∆xsd) ≤ f(x) − α min{1, βγ²/M}∥∇f(x)∥∗²
≤ f(x) − αγ̃² min{1, βγ²/M}∥∇f(x)∥2².
Subtracting p⋆ from both sides and using (9.9), we obtain
f(x+) − p⋆ ≤ c(f(x) − p⋆),
where
c = 1 − 2mαγ̃² min{1, βγ²/M} < 1.
Therefore we have
f(x(k)) − p⋆ ≤ c^k (f(x(0)) − p⋆),
i.e., linear convergence exactly as in the gradient method.

9.4.4 Discussion and examples

Choice of norm for steepest descent

The choice of norm used to define the steepest descent direction can have a dramatic effect on the convergence rate. For simplicity, we consider the case of steepest descent with quadratic P-norm. In §9.4.1, we showed that the steepest descent method with quadratic P-norm is the same as the gradient method applied to the problem after the change of coordinates x̄ = P^{1/2} x. We know that the gradient method works well when the condition numbers of the sublevel sets (or the Hessian near the optimal point) are moderate, and works poorly when the condition numbers are large. It follows that when the sublevel sets, after the change of coordinates x̄ = P^{1/2} x, are moderately conditioned, the steepest descent method will work well.

This observation provides a prescription for choosing P: It should be chosen so that the sublevel sets of f, transformed by P^{−1/2}, are well conditioned. For example if an approximation Ĥ of the Hessian at the optimal point ∇2f(x⋆) were known, a very good choice of P would be P = Ĥ, since the Hessian of f̄ at the optimum is then
Ĥ^{−1/2} ∇2f(x⋆) Ĥ^{−1/2} ≈ I,
and so is likely to have a low condition number.

This same idea can be described without a change of coordinates. Saying that a sublevel set has low condition number after the change of coordinates x̄ = P^{1/2} x is the same as saying that the ellipsoid
E = {x | xT Px ≤ 1}
approximates the shape of the sublevel set. (In other words, it gives a good approximation after appropriate scaling and translation.)

This dependence of the convergence rate on the choice of P can be viewed from two sides. The optimist's viewpoint is that for any problem, there is always a choice of P for which the steepest descent method works very well. The challenge, of course, is to find such a P. The pessimist's viewpoint is that for any problem, there are a huge number of choices of P for which steepest descent works very poorly. In summary, we can say that the steepest descent method works well in cases where we can identify a matrix P for which the transformed problem has moderate condition number.

Examples

In this section we illustrate some of these ideas using the nonquadratic problem in R2 with objective function (9.20). We apply the steepest descent method to the problem, using the two quadratic norms defined by
P1 = [2 0; 0 8],  P2 = [8 0; 0 2].
In both cases we use a backtracking line search with α = 0.1 and β = 0.7.

Figures 9.11 and 9.12 show the iterates for steepest descent with norm ∥ · ∥P1 and norm ∥ · ∥P2. Figure 9.13 shows the error versus iteration number for both norms.

Figure 9.11 Steepest descent method with a quadratic norm ∥ · ∥P1. The ellipses are the boundaries of the norm balls {x | ∥x − x(k)∥P1 ≤ 1} at x(0) and x(1).

Figure 9.13 shows that the choice of norm strongly influences the convergence. With the norm ∥ · ∥P1, convergence is a bit more rapid than the gradient method, whereas with the norm ∥ · ∥P2, convergence is far slower. This can be explained by examining the problems after the changes of coordinates x̄ = P1^{1/2} x and x̄ = P2^{1/2} x, respectively. Figures 9.14 and 9.15 show the problems in the transformed coordinates. The change of variables associated with P1 yields sublevel sets with modest condition number, so convergence is fast. The change of variables associated with P2 yields sublevel sets that are more poorly conditioned, which explains the slower convergence.

Figure 9.12 Steepest descent method, with quadratic norm ∥ · ∥P2.

Figure 9.13 Error f(x(k)) − p⋆ versus iteration k, for the steepest descent method with the quadratic norm ∥ · ∥P1 and the quadratic norm ∥ · ∥P2. Convergence is rapid for the norm ∥ · ∥P1 and very slow for ∥ · ∥P2.

Figure 9.14 The iterates of steepest descent with norm ∥ · ∥P1, after the change of coordinates. This change of coordinates reduces the condition number of the sublevel sets, and so speeds up convergence.

Figure 9.15 The iterates of steepest descent with norm ∥ · ∥P2, after the change of coordinates. This change of coordinates increases the condition number of the sublevel sets, and so slows down convergence.

9.5 Newton's method

9.5.1 The Newton step

For x ∈ dom f, the vector
∆xnt = −∇2f(x)−1∇f(x)
is called the Newton step (for f, at x). Positive definiteness of ∇2f(x) implies that
∇f(x)T ∆xnt = −∇f(x)T ∇2f(x)−1∇f(x) < 0
unless ∇f(x) = 0, so the Newton step is a descent direction (unless x is optimal). The Newton step can be interpreted and motivated in several ways.

Minimizer of second-order approximation

The second-order Taylor approximation (or model) f̂ of f at x is
f̂(x + v) = f(x) + ∇f(x)T v + (1/2)vT ∇2f(x)v,   (9.28)
which is a convex quadratic function of v, and is minimized when v = ∆xnt. Thus, the Newton step ∆xnt is what should be added to the point x to minimize the second-order approximation of f at x. This is illustrated in figure 9.16.

Figure 9.16 The function f (shown solid) and its second-order approximation f̂ at x (dashed). The Newton step ∆xnt is what must be added to x to give the minimizer of f̂.

This interpretation gives us some insight into the Newton step. If the function f is quadratic, then x + ∆xnt is the exact minimizer of f. If the function f is nearly quadratic, intuition suggests that x + ∆xnt should be a very good estimate of the minimizer of f, i.e., x⋆. Since f is twice differentiable, the quadratic model of f will be very accurate when x is near x⋆. It follows that when x is near x⋆, the point x + ∆xnt should be a very good estimate of x⋆. We will see that this intuition is correct.

Figure 9.17 The dashed lines are level curves of a convex function. The ellipsoid shown (with solid line) is {x + v | vT ∇2f(x)v ≤ 1}. The arrow shows −∇f(x), the gradient descent direction. The Newton step ∆xnt is the steepest descent direction in the norm ∥ · ∥∇2f(x). The figure also shows ∆xnsd, the normalized steepest descent direction for the same norm.
Steepest descent direction in Hessian norm

The Newton step is also the steepest descent direction at x, for the quadratic norm defined by the Hessian ∇2f(x), i.e.,
∥u∥∇2f(x) = (uT ∇2f(x)u)^{1/2}.
This gives another insight into why the Newton step should be a good search direction, and a very good search direction when x is near x⋆. Recall from our discussion above that steepest descent, with quadratic norm ∥ · ∥P, converges very rapidly when the Hessian, after the associated change of coordinates, has small condition number. In particular, near x⋆, a very good choice is P = ∇2f(x⋆). When x is near x⋆, we have ∇2f(x) ≈ ∇2f(x⋆), which explains why the Newton step is a very good choice of search direction. This is illustrated in figure 9.17.

Solution of linearized optimality condition

If we linearize the optimality condition ∇f(x⋆) = 0 near x we obtain
∇f(x + v) ≈ ∇f(x) + ∇2f(x)v = 0,
which is a linear equation in v, with solution v = ∆xnt. So the Newton step ∆xnt is what must be added to x so that the linearized optimality condition holds. Again, this suggests that when x is near x⋆ (so the optimality conditions almost hold), the update x + ∆xnt should be a very good approximation of x⋆.

When n = 1, i.e., f : R → R, this interpretation is particularly simple. The solution x⋆ of the minimization problem is characterized by f′(x⋆) = 0, i.e., it is the zero-crossing of the derivative f′, which is monotonically increasing since f is convex. Given our current approximation x of the solution, we form a first-order Taylor approximation of f′ at x. The zero-crossing of this affine approximation is then x + ∆xnt. This interpretation is illustrated in figure 9.18.

Figure 9.18 The solid curve is the derivative f′ of the function f shown in figure 9.16. f̂′ is the linear approximation of f′ at x. The Newton step ∆xnt is the difference between the root of f̂′ and the point x.

Affine invariance of the Newton step

An important feature of the Newton step is that it is independent of linear (or affine) changes of coordinates. Suppose T ∈ Rn×n is nonsingular, and define f̄(y) = f(Ty). Then we have
∇f̄(y) = TT ∇f(x),  ∇2f̄(y) = TT ∇2f(x)T,
where x = Ty. The Newton step for f̄ at y is therefore
∆ynt = −(TT ∇2f(x)T)−1 (TT ∇f(x)) = −T−1 ∇2f(x)−1 ∇f(x) = T−1 ∆xnt,
where ∆xnt is the Newton step for f at x. Hence the Newton steps of f and f̄ are related by the same linear transformation, and
x + ∆xnt = T(y + ∆ynt).

The Newton decrement

The quantity
λ(x) = (∇f(x)T ∇2f(x)−1 ∇f(x))^{1/2}
is called the Newton decrement at x. We will see that the Newton decrement plays an important role in the analysis of Newton's method, and is also useful as a stopping criterion. We can relate the Newton decrement to the quantity f(x) − inf_y f̂(y), where f̂ is the second-order approximation of f at x:
f(x) − inf_y f̂(y) = f(x) − f̂(x + ∆xnt) = λ(x)²/2.
Thus, λ²/2 is an estimate of f(x) − p⋆, based on the quadratic approximation of f at x. We can also express the Newton decrement as
λ(x) = (∆xntT ∇2f(x) ∆xnt)^{1/2}.   (9.29)
This shows that λ is the norm of the Newton step, in the quadratic norm defined by the Hessian, i.e., the norm
∥u∥∇2f(x) = (uT ∇2f(x)u)^{1/2}.
The Newton decrement comes up in backtracking line search as well, since we have
∇f(x)T ∆xnt = −λ(x)².   (9.30)
This is the constant used in a backtracking line search, and can be interpreted as
the directional derivative of f at x in the direction of the Newton step:
−λ(x)² = ∇f(x)T ∆xnt = (d/dt) f(x + t∆xnt) |_{t=0}.
Finally, we note that the Newton decrement is, like the Newton step, affine invariant. In other words, the Newton decrement of f̄(y) = f(Ty) at y, where T is nonsingular, is the same as the Newton decrement of f at x = Ty.

9.5.2 Newton's method

Newton's method, as outlined below, is sometimes called the damped Newton method or guarded Newton method, to distinguish it from the pure Newton method, which uses a fixed step size t = 1.

Algorithm 9.5 Newton's method.
given a starting point x ∈ dom f, tolerance ǫ > 0.
repeat
1. Compute the Newton step and decrement.
∆xnt := −∇2f(x)−1∇f(x);  λ² := ∇f(x)T ∇2f(x)−1∇f(x).
2. Stopping criterion. quit if λ²/2 ≤ ǫ.
3. Line search. Choose step size t by backtracking line search.
4. Update. x := x + t∆xnt.
This is essentially the general descent method described in §9.2, using the New- ton step as search direction. The only difference (which is very minor) is that the stopping criterion is checked after computing the search direction, rather than after the update.
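A compact Python sketch of algorithm 9.5 (our own code; it assumes callables for the objective, gradient, and Hessian, and uses a backtracking loop as in algorithm 9.2):

    import numpy as np

    def newton(f, grad, hess, x, eps=1e-10, alpha=0.1, beta=0.7):
        while True:
            g, H = grad(x), hess(x)
            dx = -np.linalg.solve(H, g)   # 1. Newton step
            lam2 = -g @ dx                # Newton decrement squared, via (9.30)
            if lam2 / 2.0 <= eps:         # 2. stopping criterion
                return x
            t = 1.0                       # 3. backtracking line search
            while f(x + t * dx) > f(x) + alpha * t * (g @ dx):
                t *= beta
            x = x + t * dx                # 4. update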
9.5.3 Convergence analysis
We assume, as before, that f is twice continuously differentiable, and strongly convex with constant m, i.e., ∇2f(x) ≽ mI for x ∈ S. We have seen that this also implies that there exists an M > 0 such that ∇2f(x) ≼ MI for all x ∈ S.
In addition, we assume that the Hessian of f is Lipschitz continuous on S with constant L, i.e.,
∥∇2f(x) − ∇2f(y)∥2 ≤ L∥x − y∥2 (9.31)
for all x, y ∈ S. The coefficient L, which can be interpreted as a bound on the third derivative of f, can be taken to be zero for a quadratic function. More generally L measures how well f can be approximated by a quadratic model, so we can expect the Lipschitz constant L to play a critical role in the performance of Newton’s method. Intuition suggests that Newton’s method will work very well for a function whose quadratic model varies slowly (i.e., has small L).
Idea and outline of convergence proof
We first give the idea and outline of the convergence proof, and the main conclusion, and then the details of the proof. We will show there are numbers η and γ with 0 < η ≤ m2/L and γ > 0 such that the following hold.
• If ∥∇f(x^(k))∥2 ≥ η, then

f(x^(k+1)) − f(x^(k)) ≤ −γ. (9.32)

• If ∥∇f(x^(k))∥2 < η, then the backtracking line search selects t^(k) = 1 and

(L/(2m^2)) ∥∇f(x^(k+1))∥2 ≤ ((L/(2m^2)) ∥∇f(x^(k))∥2)^2. (9.33)

Let us analyze the implications of the second condition. Suppose that it is satisfied for iteration k, i.e., ∥∇f(x^(k))∥2 < η. Since η ≤ m^2/L, we have ∥∇f(x^(k+1))∥2 < η, i.e., the second condition is also satisfied at iteration k + 1. Continuing recursively, we conclude that once the second condition holds, it will hold for all future iterates, i.e., for all l ≥ k, we have ∥∇f(x^(l))∥2 < η. Therefore for all l ≥ k, the algorithm takes a full Newton step t = 1, and

(L/(2m^2)) ∥∇f(x^(l+1))∥2 ≤ ((L/(2m^2)) ∥∇f(x^(l))∥2)^2.

Applying this inequality recursively, we find that for l ≥ k,

(L/(2m^2)) ∥∇f(x^(l))∥2 ≤ ((L/(2m^2)) ∥∇f(x^(k))∥2)^(2^(l−k)) ≤ (1/2)^(2^(l−k)), (9.34)

and hence

f(x^(l)) − p⋆ ≤ (1/(2m)) ∥∇f(x^(l))∥2^2 ≤ (2m^3/L^2) (1/2)^(2^(l−k+1)). (9.35)

This last inequality shows that convergence is extremely rapid once the second condition is satisfied. This phenomenon is called quadratic convergence. Roughly speaking, the inequality (9.35) means that, after a sufficiently large number of iterations, the number of correct digits doubles at each iteration.

The iterations in Newton's method naturally fall into two stages. The second stage, which occurs once the condition ∥∇f(x)∥2 ≤ η holds, is called the quadratically convergent stage. We refer to the first stage as the damped Newton phase, because the algorithm can choose a step size t < 1. The quadratically convergent stage is also called the pure Newton phase, since in these iterations a step size t = 1 is always chosen.

Now we can estimate the total complexity. First we derive an upper bound on the number of iterations in the damped Newton phase. Since f decreases by at least γ at each iteration, the number of damped Newton steps cannot exceed

(f(x^(0)) − p⋆)/γ,

since if it did, f would be less than p⋆, which is impossible.

We can bound the number of iterations in the quadratically convergent phase using the inequality (9.35). It implies that we must have f(x) − p⋆ ≤ ǫ after no more than

log2 log2(ǫ0/ǫ)

iterations in the quadratically convergent phase, where ǫ0 = 2m^3/L^2. Overall, then, the number of iterations until f(x) − p⋆ ≤ ǫ is bounded above by

(f(x^(0)) − p⋆)/γ + log2 log2(ǫ0/ǫ). (9.36)

The term log2 log2(ǫ0/ǫ), which bounds the number of iterations in the quadratically convergent phase, grows extremely slowly with required accuracy ǫ, and can be considered a constant for practical purposes, say five or six. (Six iterations of the quadratically convergent stage gives an accuracy of about ǫ ≈ 5·10^(−20) ǫ0.)

Somewhat imprecisely, then, we can say that the number of Newton iterations required to minimize f is bounded above by

(f(x^(0)) − p⋆)/γ + 6. (9.37)

A more precise statement is that (9.37) is a bound on the number of iterations to compute an extremely good approximation of the solution.

Damped Newton phase

We now establish the inequality (9.32). Assume ∥∇f(x)∥2 ≥ η. We first derive a lower bound on the step size selected by the line search. Strong convexity implies that ∇2f(x) ≼ MI on S, and therefore

f(x + t∆xnt) ≤ f(x) + t∇f(x)^T ∆xnt + (M t^2/2) ∥∆xnt∥2^2 ≤ f(x) − tλ(x)^2 + (M t^2/(2m)) λ(x)^2,

where we use (9.30) and

λ(x)^2 = ∆xnt^T ∇2f(x) ∆xnt ≥ m ∥∆xnt∥2^2.

The step size t̂ = m/M satisfies the exit condition of the line search, since

f(x + t̂∆xnt) ≤ f(x) − (m/(2M)) λ(x)^2 ≤ f(x) − α t̂ λ(x)^2.
Therefore the line search returns a step size t ≥ βm/M, resulting in a decrease of the objective function

f(x+) − f(x) ≤ −αtλ(x)^2 ≤ −αβ(m/M)λ(x)^2 ≤ −αβ(m/M^2)∥∇f(x)∥2^2 ≤ −αβη^2 (m/M^2),

where we use

λ(x)^2 = ∇f(x)^T ∇2f(x)^(−1) ∇f(x) ≥ (1/M)∥∇f(x)∥2^2.

Therefore, (9.32) is satisfied with

γ = αβη^2 m/M^2. (9.38)

Quadratically convergent phase

We now establish the inequality (9.33). Assume ∥∇f(x)∥2 < η. We first show that the backtracking line search selects unit steps, provided

η ≤ 3(1 − 2α) m^2/L.

By the Lipschitz condition (9.31), we have, for t ≥ 0,

∥∇2f(x + t∆xnt) − ∇2f(x)∥2 ≤ tL∥∆xnt∥2,

and therefore

|∆xnt^T (∇2f(x + t∆xnt) − ∇2f(x)) ∆xnt| ≤ tL∥∆xnt∥2^3.

With f̃(t) = f(x + t∆xnt), we have f̃′′(t) = ∆xnt^T ∇2f(x + t∆xnt) ∆xnt, so the inequality above is

|f̃′′(t) − f̃′′(0)| ≤ tL∥∆xnt∥2^3.

We will use this inequality to determine an upper bound on f̃(t). We start with

f̃′′(t) ≤ f̃′′(0) + tL∥∆xnt∥2^3 ≤ λ(x)^2 + t (L/m^(3/2)) λ(x)^3,

where we use f̃′′(0) = λ(x)^2 and λ(x)^2 ≥ m∥∆xnt∥2^2. We integrate the inequality to get

f̃′(t) ≤ f̃′(0) + tλ(x)^2 + (t^2/2)(L/m^(3/2)) λ(x)^3 = −λ(x)^2 + tλ(x)^2 + (t^2/2)(L/m^(3/2)) λ(x)^3,

using f̃′(0) = −λ(x)^2. We integrate once more to get

f̃(t) ≤ f̃(0) − tλ(x)^2 + (t^2/2) λ(x)^2 + (t^3/6)(L/m^(3/2)) λ(x)^3.

Finally, we take t = 1 to obtain

f(x + ∆xnt) ≤ f(x) − (1/2)λ(x)^2 + (L/(6m^(3/2))) λ(x)^3. (9.39)

Now suppose ∥∇f(x)∥2 ≤ η ≤ 3(1 − 2α)m^2/L. By strong convexity, we have

λ(x) ≤ 3(1 − 2α) m^(3/2)/L,

and by (9.39) we have

f(x + ∆xnt) ≤ f(x) − λ(x)^2 (1/2 − (L/(6m^(3/2))) λ(x)) ≤ f(x) − αλ(x)^2 = f(x) + α∇f(x)^T ∆xnt,

which shows that the unit step t = 1 is accepted by the backtracking line search.

Let us now examine the rate of convergence. Applying the Lipschitz condition, we have

∥∇f(x+)∥2 = ∥∇f(x + ∆xnt) − ∇f(x) − ∇2f(x)∆xnt∥2
= ∥∫_0^1 (∇2f(x + t∆xnt) − ∇2f(x)) ∆xnt dt∥2
≤ (L/2)∥∆xnt∥2^2 = (L/2)∥∇2f(x)^(−1)∇f(x)∥2^2 ≤ (L/(2m^2))∥∇f(x)∥2^2,

i.e., the inequality (9.33).

In conclusion, the algorithm selects unit steps and satisfies the condition (9.33) if ∥∇f(x^(k))∥2 < η, where

η = min{1, 3(1 − 2α)} m^2/L.

Substituting this bound and (9.38) into (9.37), we find that the number of iterations is bounded above by

6 + (M^2 L^2/m^5)/(αβ min{1, 9(1 − 2α)^2}) (f(x^(0)) − p⋆). (9.40)

9.5.4 Examples

Example in R2

We first apply Newton's method with backtracking line search on the test function (9.20), with line search parameters α = 0.1, β = 0.7. Figure 9.19 shows the Newton iterates, and also the ellipsoids {x | ∥x − x^(k)∥_{∇2f(x^(k))} ≤ 1} for the first two iterates k = 0, 1. The method works well because these ellipsoids give good approximations of the shape of the sublevel sets.

Figure 9.19 Newton's method for the problem in R2, with objective f given in (9.20), and backtracking line search parameters α = 0.1, β = 0.7. Also shown are the ellipsoids {x | ∥x − x^(k)∥_{∇2f(x^(k))} ≤ 1} at the first two iterates.

Figure 9.20 shows the error versus iteration number for the same example. This plot shows that convergence to a very high accuracy is achieved in only five iterations. Quadratic convergence is clearly apparent: the last step reduces the error from about 10^(−5) to 10^(−10).

Example in R100

Figure 9.21 shows the convergence of Newton's method with backtracking and exact line search for a problem in R100. The objective function has the form (9.21), with the same problem data and the same starting point as was used in figure 9.6. The plot for the backtracking line search shows that a very high accuracy is attained in eight iterations.
Like the example in R2, quadratic convergence is clearly evident after about the third iteration. The number of iterations in Newton's method with exact line search is only one smaller than with a backtracking line search. This is also typical. An exact line search usually gives a very small improvement in convergence of Newton's method.

Figure 9.20 Error versus iteration k of Newton's method for the problem in R2. Convergence to a very high accuracy is achieved in five iterations.

Figure 9.21 Error versus iteration for Newton's method for the problem in R100. The backtracking line search parameters are α = 0.01, β = 0.5. Here too convergence is extremely rapid: a very high accuracy is attained in only seven or eight iterations. The convergence of Newton's method with exact line search is only one iteration faster than with backtracking line search.

Figure 9.22 shows the step sizes for this example. After two damped steps, the steps taken by the backtracking line search are all full, i.e., t = 1.

Figure 9.22 The step size t versus iteration for Newton's method with backtracking and exact line search, applied to the problem in R100. The backtracking line search takes one backtracking step in the first two iterations. After the first two iterations it always selects t = 1.

Experiments with the values of the backtracking parameters α and β reveal that they have little effect on the performance of Newton's method, for this example (and others). With α fixed at 0.01, and values of β varying between 0.2 and 1, the number of iterations required varies between 8 and 12. With β fixed at 0.5, the number of iterations is 8, for all values of α between 0.005 and 0.5. For these reasons, most practical implementations use a backtracking line search with a small value of α, such as 0.01, and a larger value of β, such as 0.5.

Example in R10000

In this last example we consider a larger problem, of the form

minimize −Σ_{i=1}^n log(1 − xi^2) − Σ_{i=1}^m log(bi − ai^T x),

with m = 100000 and n = 10000. The problem data ai are randomly generated sparse vectors. Figure 9.23 shows the convergence of Newton's method with backtracking line search, with parameters α = 0.01, β = 0.5. The performance is very similar to the previous convergence plots. A linearly convergent initial phase of about 13 iterations is followed by a quadratically convergent phase, that achieves a very high accuracy in 4 or 5 more iterations.

Figure 9.23 Error versus iteration of Newton's method, for a problem in R10000. A backtracking line search with parameters α = 0.01, β = 0.5 is used. Even for this large scale problem, Newton's method requires only 18 iterations to achieve very high accuracy.

Affine invariance of Newton's method

A very important feature of Newton's method is that it is independent of linear (or affine) changes of coordinates. Let x^(k) be the kth iterate of Newton's method, applied to f : Rn → R. Suppose T ∈ Rn×n is nonsingular, and define f̄(y) = f(Ty). If we use Newton's method (with the same backtracking parameters) to minimize f̄, starting from y^(0) = T^(−1) x^(0), then we have

T y^(k) = x^(k)

for all k. In other words, Newton's method is the same: the iterates are related by the same change of coordinates.
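This invariance is easy to verify numerically. The sketch below (an illustration, not the book's code) compares the Newton step of a smooth convex test function with that of its composition with a random nonsingular T; the particular test function f(x) = Σ exp(ai^T x + bi) is an assumption chosen only for the check.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((7, n))
b = rng.standard_normal(7)

def grad(x):
    w = np.exp(A @ x + b)          # f(x) = sum_i exp(a_i^T x + b_i)
    return A.T @ w

def hess(x):
    w = np.exp(A @ x + b)
    return A.T @ (w[:, None] * A)

T = rng.standard_normal((n, n)) + 3 * np.eye(n)  # generically nonsingular
x = rng.standard_normal(n)
y = np.linalg.solve(T, x)                        # y = T^{-1} x, so Ty = x

dx = -np.linalg.solve(hess(x), grad(x))
# Newton step of fbar(y) = f(Ty): gradient T^T grad(Ty), Hessian T^T hess(Ty) T.
dy = -np.linalg.solve(T.T @ hess(T @ y) @ T, T.T @ grad(T @ y))

print(np.allclose(T @ dy, dx))   # True: T dy_nt equals dx_nt
```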
Even the stopping criterion is the same, since the Newton decrement for f̄ at y^(k) is the same as the Newton decrement for f at x^(k). This is in stark contrast to the gradient (or steepest descent) method, which is strongly affected by changes of coordinates.

As an example, consider the family of problems given in (9.22), indexed by the parameter γ, which affects the condition number of the sublevel sets. We observed (in figures 9.7 and 9.8) that the gradient method slows to the point of uselessness for values of γ smaller than 0.05 or larger than 20. In contrast, Newton's method (with α = 0.01, β = 0.5) solves this problem (in fact, to a far higher accuracy) in nine iterations, for all values of γ between 10^(−10) and 10^(10).

In a real implementation, with finite precision arithmetic, Newton's method is not exactly independent of affine changes of coordinates, or the condition number of the sublevel sets. But we can say that condition numbers ranging up to very large values such as 10^10 do not adversely affect a real implementation of Newton's method. For the gradient method, a far smaller range of condition numbers can be tolerated. While choice of coordinates (or condition number of sublevel sets) is a first-order issue for gradient and steepest descent methods, it is a second-order issue for Newton's method; its only effect is in the numerical linear algebra required to compute the Newton step.

Summary

Newton's method has several very strong advantages over gradient and steepest descent methods:

• Convergence of Newton's method is rapid in general, and quadratic near x⋆. Once the quadratic convergence phase is reached, at most six or so iterations are required to produce a solution of very high accuracy.
• Newton's method is affine invariant. It is insensitive to the choice of coordinates, or the condition number of the sublevel sets of the objective.
• Newton's method scales well with problem size. Its performance on problems in R10000 is similar to its performance on problems in R10, with only a modest increase in the number of steps required.
• The good performance of Newton's method is not dependent on the choice of algorithm parameters. In contrast, the choice of norm for steepest descent plays a critical role in its performance.

The main disadvantage of Newton's method is the cost of forming and storing the Hessian, and the cost of computing the Newton step, which requires solving a set of linear equations. We will see in §9.7 that in many cases it is possible to exploit problem structure to substantially reduce the cost of computing the Newton step.

Another alternative is provided by a family of algorithms for unconstrained optimization called quasi-Newton methods. These methods require less computational effort to form the search direction, but they share some of the strong advantages of Newton methods, such as rapid convergence near x⋆. Since quasi-Newton methods are described in many books, and are tangential to our main theme, we will not consider them in this book.

9.6 Self-concordance

There are two major shortcomings of the classical convergence analysis of Newton's method given in §9.5.3. The first is a practical one: the resulting complexity estimates involve the three constants m, M, and L, which are almost never known in practice. As a result, the bound (9.40) on the number of Newton steps required can almost never be evaluated for a given problem.
Of course the convergence analysis and complexity estimate are still conceptually useful.

The second shortcoming is that while Newton's method is affinely invariant, the classical analysis of Newton's method is very much dependent on the coordinate system used: if we change coordinates the constants m, M, and L all change. If for no reason other than aesthetic, we should seek an analysis of Newton's method that is, like the method itself, independent of affine changes of coordinates. In other words, we seek an alternative to the assumptions

mI ≼ ∇2f(x) ≼ MI, ∥∇2f(x) − ∇2f(y)∥2 ≤ L∥x − y∥2,

that is independent of affine changes of coordinates, and also allows us to analyze Newton's method.

A simple and elegant assumption that achieves this goal was discovered by Nesterov and Nemirovski, who gave the name self-concordance to their condition. Self-concordant functions are important for several reasons.

• They include many of the logarithmic barrier functions that play an important role in interior-point methods for solving convex optimization problems.
• The analysis of Newton's method for self-concordant functions does not depend on any unknown constants.
• Self-concordance is an affine-invariant property, i.e., if we apply a linear transformation of variables to a self-concordant function, we obtain a self-concordant function. Therefore the complexity estimate that we obtain for Newton's method applied to a self-concordant function is independent of affine changes of coordinates.

9.6.1 Definition and examples

Self-concordant functions on R

We start by considering functions on R. A convex function f : R → R is self-concordant if

|f′′′(x)| ≤ 2 f′′(x)^(3/2) (9.41)

for all x ∈ dom f. Since linear and (convex) quadratic functions have zero third derivative, they are evidently self-concordant. Some more interesting examples are given below.

Example 9.3 Logarithm and entropy.

• Negative logarithm. The function f(x) = −log x is self-concordant. Using f′′(x) = 1/x^2, f′′′(x) = −2/x^3, we find that

|f′′′(x)|/(2f′′(x)^(3/2)) = (2/x^3)/(2(1/x^2)^(3/2)) = 1,

so the defining inequality (9.41) holds with equality.

• Negative entropy plus negative logarithm. The function f(x) = x log x − log x is self-concordant. To verify this, we use

f′′(x) = (x + 1)/x^2, f′′′(x) = −(x + 2)/x^3

to obtain

|f′′′(x)|/(2f′′(x)^(3/2)) = (x + 2)/(2(x + 1)^(3/2)).

The function on the righthand side is maximized on R+ by x = 0, where its value is 1. The negative entropy function by itself is not self-concordant; see exercise 11.13.

We should make two important remarks about the self-concordance definition (9.41). The first concerns the mysterious constant 2 that appears in the definition. In fact, this constant is chosen for convenience, in order to simplify the formulas later on; any other positive constant could be used instead. Suppose, for example, that the convex function f : R → R satisfies

|f′′′(x)| ≤ k f′′(x)^(3/2), (9.42)

where k is some positive constant. Then the function f̃(x) = (k^2/4) f(x) satisfies

|f̃′′′(x)| = (k^2/4)|f′′′(x)| ≤ (k^3/4) f′′(x)^(3/2) = (k^3/4) ((4/k^2) f̃′′(x))^(3/2) = 2 f̃′′(x)^(3/2),

and therefore is self-concordant. This shows that a function that satisfies (9.42) for some positive k can be scaled to satisfy the standard self-concordance inequality (9.41). So what is important is that the third derivative of the function is bounded by some multiple of the 3/2-power of its second derivative. By appropriately scaling the function, we can change the multiple to the constant 2.
The second comment is a simple calculation that shows why self-concordance is so important: it is affine invariant. Suppose we define the function f̃ by f̃(y) = f(ay + b), where a ≠ 0. Then f̃ is self-concordant if and only if f is. To see this, we substitute

f̃′′(y) = a^2 f′′(x), f̃′′′(y) = a^3 f′′′(x),

where x = ay + b, into the self-concordance inequality for f̃, i.e., |f̃′′′(y)| ≤ 2 f̃′′(y)^(3/2), to obtain

|a^3 f′′′(x)| ≤ 2 (a^2 f′′(x))^(3/2),

which (after dividing by a^3) is the self-concordance inequality for f. Roughly speaking, the self-concordance condition (9.41) is a way to limit the third derivative of a function, in a way that is independent of affine coordinate changes.

Self-concordant functions on Rn

We now consider functions on Rn with n > 1. We say a function f : Rn → R is self-concordant if it is self-concordant along every line in its domain, i.e., if the function f̃(t) = f(x + tv) is a self-concordant function of t for all x ∈ dom f and for all v.

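The defining inequality is easy to check numerically for the examples above. The following sketch (an illustration, not from the book) evaluates the ratio |f′′′|/(2f′′^(3/2)) on a grid for the two functions of example 9.3, using the derivative formulas given there.

```python
import numpy as np

x = np.linspace(0.01, 20.0, 2000)

# Negative logarithm: f'' = 1/x^2, f''' = -2/x^3; the ratio is identically 1.
ratio_log = (2 / x**3) / (2 * (1 / x**2) ** 1.5)

# Negative entropy plus negative logarithm:
# f'' = (x+1)/x^2, f''' = -(x+2)/x^3.
ratio_ent = ((x + 2) / x**3) / (2 * ((x + 1) / x**2) ** 1.5)

print(ratio_log.max())   # 1.0 (equality holds everywhere)
print(ratio_ent.max())   # below 1, approaching 1 as x -> 0+
```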
9.6.2 Self-concordant calculus

Scaling and sum
Self-concordance is preserved by scaling by a factor exceeding one: If f is self- concordant and a ≥ 1, then af is self-concordant. Self-concordance is also preserved by addition: If f1, f2 are self-concordant, then f1 + f2 is self-concordant. To show this, it is sufficient to consider functions f1, f2 : R → R. We have
|f1′′′(x) + f2′′′(x)| ≤ |f1′′′(x)| + |f2′′′(x)|
≤ 2(f1′′(x)^(3/2) + f2′′(x)^(3/2))
≤ 2(f1′′(x) + f2′′(x))^(3/2).
In the last step we use the inequality
(u3/2 + v3/2)2/3 ≤ u + v,
which holds for u, v ≥ 0.

Composition with affine function
If f : Rn → R is self-concordant, and A ∈ Rn×m, b ∈ Rn, then f(Ax+b) is self-concordant.
Example 9.4 Log barrier for linear inequalities. The function

f(x) = −Σ_{i=1}^m log(bi − ai^T x),

with dom f = {x | ai^T x < bi, i = 1,...,m}, is self-concordant. Each term −log(bi − ai^T x) is the composition of −log y with the affine transformation y = bi − ai^T x, and hence self-concordant. Therefore the sum is also self-concordant.

Example 9.5 Log-determinant. The function f(X) = −log det X is self-concordant on dom f = Sn++. To show this, we consider the function f̃(t) = f(X + tV), where X ≻ 0 and V ∈ Sn. It can be expressed as

f̃(t) = −log det(X^(1/2)(I + tX^(−1/2) V X^(−1/2))X^(1/2))
     = −log det X − log det(I + tX^(−1/2) V X^(−1/2))
     = −log det X − Σ_{i=1}^n log(1 + tλi),

where λi are the eigenvalues of X^(−1/2) V X^(−1/2). Each term −log(1 + tλi) is a self-concordant function of t, so the sum, f̃, is self-concordant. It follows that f is self-concordant.

Example 9.6 Log of concave quadratic. The function

f(x) = −log(x^T P x + q^T x + r),

where P ∈ −Sn+, is self-concordant on dom f = {x | x^T P x + q^T x + r > 0}.
To show this, it suffices to consider the case n = 1 (since by restricting f to a line, the general case reduces to the n = 1 case). We can then express f as

f(x) = −log(px^2 + qx + r) = −log(−p(x − a)(b − x)),

where dom f = (a, b) (i.e., a and b are the roots of px^2 + qx + r). Using this expression we have

f(x) = −log(−p) − log(x − a) − log(b − x),

which establishes self-concordance.

Composition with logarithm

Let g : R → R be a convex function with dom g = R++, and

|g′′′(x)| ≤ 3 g′′(x)/x (9.43)

for all x. Then

f(x) = −log(−g(x)) − log x
is self-concordant on {x | x > 0, g(x) < 0}. (For a proof, see exercise 9.14.)

The condition (9.43) is homogeneous and preserved under addition. It is satisfied by all (convex) quadratic functions, i.e., functions of the form ax^2 + bx + c, where a ≥ 0. Therefore if (9.43) holds for a function g, then it holds for the function g(x) + ax^2 + bx + c, where a ≥ 0, and so the function

f(x) = −log(−g(x) − ax^2 − bx − c) − log x

is self-concordant on {x | x > 0, g(x) + ax^2 + bx + c < 0}, provided a ≥ 0.

Example 9.7 The following functions g satisfy the condition (9.43).

• g(x) = −x^p for 0 < p ≤ 1.

Example 9.8 The composition with logarithm rule allows us to show self-concordance of the following functions.

• f(x,y) = −log(y^2 − x^T x) on {(x,y) | ∥x∥2 < y}.
• f(x,y) = −2 log y − log(y^(2/p) − x^2), with p ≥ 1, on {(x,y) ∈ R^2 | |x|^p < y}.

9.6.4 Convergence analysis for self-concordant functions

If we use Newton's method with backtracking line search to minimize a strictly convex self-concordant function f, then there exist constants η, γ > 0, with 0 < η ≤ 1/4, that depend only on the line search parameters α and β, such that the following hold:

• If λ(x^(k)) > η, then
f(x(k+1)) − f(x(k)) ≤ −γ. (9.51) • If λ(x(k)) ≤ η, then the backtracking line search selects t = 1 and
2λ(x^(k+1)) ≤ (2λ(x^(k)))^2. (9.52)

These are the analogs of (9.32) and (9.33). As in §9.5.3, the second condition can be applied recursively, so we can conclude that for all l ≥ k, we have λ(x^(l)) ≤ η, and

2λ(x^(l)) ≤ (2λ(x^(k)))^(2^(l−k)) ≤ (2η)^(2^(l−k)) ≤ (1/2)^(2^(l−k)).

As a consequence, for all l ≥ k,

f(x^(l)) − p⋆ ≤ λ(x^(l))^2 ≤ (1/4)(1/2)^(2^(l−k+1)) ≤ (1/2)^(2^(l−k+1)),

and hence f(x(l)) − p⋆ ≤ ǫ if l − k ≥ log2 log2(1/ǫ).
The first inequality implies that the damped phase cannot require more than
(f(x(0)) − p⋆)/γ steps. Thus the total number of iterations required to obtain an accuracy f(x) − p⋆ ≤ ǫ, starting at a point x(0), is bounded by
(f(x^(0)) − p⋆)/γ + log2 log2(1/ǫ). (9.53)
This is the analog of the bound (9.36) in the classical analysis of Newton’s method.
Damped Newton phase

Let f̃(t) = f(x + t∆xnt), so we have

f̃′(0) = −λ(x)^2, f̃′′(0) = λ(x)^2.

If we integrate the upper bound in (9.46) twice, we obtain an upper bound for f̃(t):

f̃(t) ≤ f̃(0) + t f̃′(0) − t f̃′′(0)^(1/2) − log(1 − t f̃′′(0)^(1/2))
     = f̃(0) − tλ(x)^2 − tλ(x) − log(1 − tλ(x)), (9.54)

valid for 0 ≤ t < 1/λ(x).

We can use this bound to show the backtracking line search always results in a step size t ≥ β/(1 + λ(x)). To prove this we note that the point t̂ = 1/(1 + λ(x)) satisfies the exit condition of the line search:

f̃(t̂) ≤ f̃(0) − t̂λ(x)^2 − t̂λ(x) − log(1 − t̂λ(x))
     = f̃(0) − λ(x) + log(1 + λ(x))
     ≤ f̃(0) − α λ(x)^2/(1 + λ(x))
     = f̃(0) − αλ(x)^2 t̂.

The second inequality follows from the fact that

−x + log(1 + x) + x^2/(2(1 + x)) ≤ 0

for x ≥ 0. Since t ≥ β/(1 + λ(x)), we have

f̃(t) − f̃(0) ≤ −αβ λ(x)^2/(1 + λ(x)),

so (9.51) holds with

γ = αβ η^2/(1 + η).

Quadratically convergent phase

We will show that we can take η = (1 − 2α)/4 (which satisfies 0 < η < 1/4, since 0 < α < 1/2), i.e., if λ(x^(k)) ≤ (1 − 2α)/4, then the backtracking line search accepts the unit step and (9.52) holds.

We first note that the upper bound (9.54) implies that a unit step t = 1 yields a point in dom f if λ(x) < 1. Moreover, if λ(x) ≤ (1 − 2α)/2, we have, using (9.54),

f̃(1) ≤ f̃(0) − λ(x)^2 − λ(x) − log(1 − λ(x))
     ≤ f̃(0) − (1/2)λ(x)^2 + λ(x)^3
     ≤ f̃(0) − αλ(x)^2,

so the unit step satisfies the condition of sufficient decrease. (The second line follows from the fact that −x − log(1 − x) ≤ (1/2)x^2 + x^3 for 0 ≤ x ≤ 0.81.)

The inequality (9.52) follows from the following fact, proved in exercise 9.18. If λ(x) < 1, and x+ = x − ∇2f(x)^(−1)∇f(x), then

λ(x+) ≤ λ(x)^2/(1 − λ(x))^2. (9.55)

In particular, if λ(x) ≤ 1/4, then λ(x+) ≤ 2λ(x)^2, which proves that (9.52) holds when λ(x^(k)) ≤ η.

The final complexity bound

Putting it all together, the bound (9.53) on the number of Newton iterations becomes

(f(x^(0)) − p⋆)/γ + log2 log2(1/ǫ) = ((20 − 8α)/(αβ(1 − 2α)^2)) (f(x^(0)) − p⋆) + log2 log2(1/ǫ). (9.56)

This expression depends only on the line search parameters α and β, and the final accuracy ǫ. Moreover the term involving ǫ can be safely replaced by the constant six, so the bound really depends only on α and β. For typical values of α and β, the constant that scales f(x^(0)) − p⋆ is on the order of several hundred. For example, with α = 0.1, β = 0.8, the scaling factor is 375. With tolerance ǫ = 10^(−10), we obtain the bound

375 (f(x^(0)) − p⋆) + 6. (9.57)

We will see that this bound is fairly conservative, but does capture what appears to be the general form of the worst-case number of Newton steps required. A more refined analysis, such as the one originally given by Nesterov and Nemirovski, gives a similar bound, with a substantially smaller constant scaling f(x^(0)) − p⋆.

9.6.5 Discussion and numerical examples

A family of self-concordant functions

It is interesting to compare the upper bound (9.57) with the actual number of iterations required to minimize a self-concordant function. We consider a family of problems of the form

f(x) = −Σ_{i=1}^m log(bi − ai^T x).

The problem data ai and b were generated as follows. For each problem instance, the coefficients of ai were generated from independent normal distributions with mean zero and unit variance, and the coefficients of b were generated from a uniform distribution on [0, 1]. Problem instances which were unbounded below were discarded.

Figure 9.25 Number of Newton iterations required to minimize self-concordant functions versus f(x^(0)) − p⋆. The function f has the form f(x) = −Σ_{i=1}^m log(bi − ai^T x), where the problem data ai and b are randomly generated. The circles show problems with m = 100, n = 50; the squares show problems with m = 1000, n = 500; and the diamonds show problems with m = 1000, n = 50. Fifty instances of each are shown.
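The data generation just described, and the constant in (9.57), are both easy to reproduce. The sketch below (an illustration, not the book's code) builds one instance of the test family, using one of the problem sizes shown in figure 9.25.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n))       # rows are the coefficient vectors a_i
b = rng.uniform(0.0, 1.0, size=m)     # b_i uniform on [0, 1]

def f(x):
    s = b - A @ x
    return -np.log(s).sum() if (s > 0).all() else np.inf

x0 = np.zeros(n)   # since b > 0, the origin is feasible for this instance
# Instances that are unbounded below would be discarded, as described above.

# The scaling factor in (9.57), for alpha = 0.1, beta = 0.8:
alpha, beta = 0.1, 0.8
print((20 - 8 * alpha) / (alpha * beta * (1 - 2 * alpha) ** 2))   # 375.0
```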
For each problem we first compute x⋆. We then generate a starting point by choosing a random direction v, and taking x^(0) = x⋆ + sv, where s is chosen so that f(x^(0)) − p⋆ has a prescribed value between 0 and 35. (We should point out that starting points with values f(x^(0)) − p⋆ = 10 or higher are actually very close to the boundary of the polyhedron.) We then minimize the function using Newton's method with a backtracking line search with parameters α = 0.1, β = 0.8, and tolerance ǫ = 10^(−10).

Figure 9.25 shows the number of Newton iterations required versus f(x^(0)) − p⋆ for 150 problem instances. The circles show 50 problems with m = 100, n = 50; the squares show 50 problems with m = 1000, n = 500; and the diamonds show 50 problems with m = 1000, n = 50.

For the values of the backtracking parameters used, the complexity bound found above is

375 (f(x^(0)) − p⋆) + 6, (9.58)

clearly a much larger value than the number of iterations required (for these 150 instances). The plot suggests that there is a valid bound of the same form, but with a much smaller constant (say, around 1.5) scaling f(x^(0)) − p⋆. Indeed, the expression f(x^(0)) − p⋆ + 6 is not a bad gross predictor of the number of Newton steps required, although it is clearly not the only factor. First, there are plenty of problem instances where the number of Newton steps is somewhat smaller, which correspond, we can guess, to 'lucky' starting points. Note also that for the larger problems, with 500 variables (represented by the squares), there seem to be even more cases where the number of Newton steps is unusually small.

We should mention here that the problem family we study is not just self-concordant, but in fact minimally self-concordant, by which we mean that αf is not self-concordant for α < 1. Hence, the bound (9.58) cannot be improved by simply scaling f. (The function f(x) = −20 log x is an example of a self-concordant function which is not minimally self-concordant, since (1/20)f is also self-concordant.)

Practical importance of self-concordance

We have already observed that Newton's method works in general very well for strongly convex objective functions. We can justify this vague statement empirically, and also using the classical analysis of Newton's method, which yields a complexity bound, but one that depends on several constants that are almost always unknown.

For self-concordant functions we can say somewhat more. We have a complexity bound that is completely explicit, and does not depend on any unknown constants. Empirical studies suggest that this bound can be tightened considerably, but its general form, a small constant plus a multiple of f(x^(0)) − p⋆, seems to predict, at least crudely, the number of Newton steps required to minimize an approximately minimally self-concordant function.

It is not yet clear whether self-concordant functions are in practice more easily minimized by Newton's method than non-self-concordant functions. (It is not even clear how one would make this statement precise.) At the moment, we can say that self-concordant functions are a class of functions for which we can say considerably more about the complexity of Newton's method than is the case for non-self-concordant functions.

9.7 Implementation

In this section we discuss some of the issues that arise in implementing an unconstrained minimization algorithm.
We refer the reader to appendix C for more details on numerical linear algebra.

9.7.1 Pre-computation for line searches

In the simplest implementation of a line search, f(x + t∆x) is evaluated for each value of t in the same way that f(z) is evaluated for any z ∈ dom f. But in some cases we can exploit the fact that f (and its derivatives, in an exact line search) are to be evaluated at many points along the ray {x + t∆x | t ≥ 0} to reduce the total computational effort. This usually requires some pre-computation, which is often on the same order as computing f at any point, after which f (and its derivatives) can be computed more efficiently along the ray.

Suppose that x ∈ dom f and ∆x ∈ Rn, and define f̃ as f restricted to the line or ray determined by x and ∆x, i.e., f̃(t) = f(x + t∆x). In a backtracking line search we must evaluate f̃ for several, and possibly many, values of t; in an exact line search method we must evaluate f̃ and one or more derivatives at a number of values of t. In the simple method described above, we evaluate f̃(t) by first forming z = x + t∆x, and then evaluating f(z). To evaluate f̃′(t), we form z = x + t∆x, then evaluate ∇f(z), and then compute f̃′(t) = ∇f(z)^T ∆x. In some representative examples below we show how f̃ can be computed at a number of values of t more efficiently.

Composition with an affine function

A very general case in which pre-computation can speed up the line search process occurs when the objective has the form f(x) = φ(Ax + b), where A ∈ Rp×n, and φ is easy to evaluate (for example, separable). To evaluate f̃(t) = f(x + t∆x) for k values of t using the simple approach, we form A(x + t∆x) + b for each value of t (which costs 2kpn flops), and then evaluate φ(A(x + t∆x) + b) for each value of t. This can be done more efficiently by first computing Ax + b and A∆x (4pn flops), then forming A(x + t∆x) + b for each value of t using

A(x + t∆x) + b = (Ax + b) + t(A∆x),

which costs 2kp flops. The total cost, keeping only the dominant terms, is 4pn + 2kp flops, compared to 2kpn for the simple method.

Analytic center of a linear matrix inequality

Here we give an example that is more specific, and more complete. We consider the problem (9.6) of computing the analytic center of a linear matrix inequality, i.e., minimizing log det F(x)^(−1), where x ∈ Rn and F : Rn → Sp is affine. Along the line through x with direction ∆x we have

f̃(t) = log det(F(x + t∆x)^(−1)) = −log det(A + tB),

where

A = F(x), B = ∆x1 F1 + ··· + ∆xn Fn ∈ Sp.

Since A ≻ 0, it has a Cholesky factorization A = LL^T, where L is lower triangular and nonsingular. Therefore we can express f̃ as

f̃(t) = −log det(L(I + tL^(−1) B L^(−T))L^T) = −log det A − Σ_{i=1}^p log(1 + tλi), (9.59)

where λ1,...,λp are the eigenvalues of L^(−1) B L^(−T). Once these eigenvalues are computed, we can evaluate f̃(t), for any t, with 4p simple arithmetic computations, by using the formula on the righthand side of (9.59). We can evaluate f̃′(t) (and similarly, any higher derivative) in 4p operations, using the formula

f̃′(t) = −Σ_{i=1}^p λi/(1 + tλi).

Let us compare the two methods for carrying out a line search, assuming that we need to evaluate f(x + t∆x) for k values of t. In the simple method, for each value of t we form F(x + t∆x), and then evaluate f(x + t∆x) as −log det F(x + t∆x). For example, we can find the Cholesky factorization of F(x + t∆x) = LL^T, and then evaluate

−log det F(x + t∆x) = −2 Σ_{i=1}^p log Lii.

The cost is np^2 to form F(x + t∆x), plus (1/3)p^3 for the Cholesky factorization.
Therefore the total cost of the line search is k(np^2 + (1/3)p^3) = knp^2 + (1/3)kp^3.

Using the method outlined above, we first form A, which costs np^2, and factor it, which costs (1/3)p^3. We also form B (which costs np^2), and L^(−1) B L^(−T), which costs 2p^3. The eigenvalues of this matrix are then computed, at a cost of about (4/3)p^3 flops. This pre-computation requires a total of 2np^2 + (11/3)p^3 flops. After finishing this pre-computation, we can evaluate f̃(t) for each value of t at a cost of 4p flops. The total cost is then 2np^2 + (11/3)p^3 + 4kp. Assuming k is small compared to p(2n + (11/3)p), this means the entire line search can be carried out at an effort comparable to simply evaluating f. Depending on the values of k, p, and n, the savings over the simple method can be as large as order k.

9.7.2 Computing the Newton step

In this section we briefly describe some of the issues that arise in implementing Newton's method. In most cases, the work of computing the Newton step ∆xnt dominates the work involved in the line search.

To compute the Newton step ∆xnt, we first evaluate and form the Hessian matrix H = ∇2f(x) and the gradient g = ∇f(x) at x. Then we solve the system of linear equations H∆xnt = −g to find the Newton step. This set of equations is sometimes called the Newton system (since its solution gives the Newton step) or the normal equations, since the same type of equation arises in solving a least-squares problem (see §9.1.1).

While a general linear equation solver can be used, it is better to use methods that take advantage of the symmetry and positive definiteness of H. The most common approach is to form the Cholesky factorization of H, i.e., to compute a lower triangular matrix L that satisfies LL^T = H (see §C.3.2). We then solve Lw = −g by forward substitution, to obtain w = −L^(−1)g, and then solve L^T ∆xnt = w by back substitution, to obtain

∆xnt = L^(−T) w = −L^(−T) L^(−1) g = −H^(−1) g.

We can compute the Newton decrement as λ^2 = −∆xnt^T g, or use the formula

λ^2 = g^T H^(−1) g = ∥L^(−1) g∥2^2 = ∥w∥2^2.

If a dense (unstructured) Cholesky factorization is used, the cost of the forward and back substitution is dominated by the cost of the Cholesky factorization, which is (1/3)n^3 flops. The total cost of computing the Newton step ∆xnt is thus F + (1/3)n^3 flops, where F is the cost of forming H and g.

It is often possible to solve the Newton system H∆xnt = −g more efficiently, by exploiting special structure in H, such as band structure or sparsity. In this context, 'structure of H' means structure that is the same for all x. For example, when we say that 'H is tridiagonal' we mean that for every x ∈ dom f, ∇2f(x) is tridiagonal.

Band structure

If the Hessian H is banded with bandwidth k, i.e., Hij = 0 for |i − j| > k, then the banded Cholesky factorization can be used, as well as banded forward and back substitutions. The cost of computing the Newton step ∆xnt = −H^(−1)g is then F + nk^2 flops (assuming k ≪ n), compared to F + (1/3)n^3 for a dense factorization and substitution method.
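In Python, the dense Cholesky computation just described might look as follows (a minimal sketch using NumPy and SciPy, assuming H and g have already been formed; not the book's code).

```python
import numpy as np
from scipy.linalg import solve_triangular

def newton_step_dense(H, g):
    """Newton step via a dense Cholesky factorization H = L L^T."""
    L = np.linalg.cholesky(H)                   # (1/3)n^3 flops
    w = solve_triangular(L, -g, lower=True)     # forward substitution: w = -L^{-1} g
    dx = solve_triangular(L.T, w, lower=False)  # back substitution: dx = L^{-T} w
    lam2 = w @ w                                # lambda^2 = ||w||_2^2
    return dx, lam2
```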
The Hessian band structure condition

∇2f(x)ij = ∂2f(x)/∂xi∂xj = 0 for |i − j| > k,

for all x ∈ dom f, has an interesting interpretation in terms of the objective function f. Roughly speaking it means that in the objective function, each variable xi couples nonlinearly only to the 2k + 1 variables xj, j = i − k,...,i + k. This occurs when f has the partial separability form

f(x) = ψ1(x1,...,xk+1) + ψ2(x2,...,xk+2) + ··· + ψn−k(xn−k,...,xn),

where ψi : Rk+1 → R. In other words, f can be expressed as a sum of functions of k + 1 consecutive variables.

Example 9.9 Consider the problem of minimizing f : Rn → R, which has the form
f(x) = ψ1(x1, x2) + ψ2(x2, x3) + · · · + ψn−1(xn−1, xn),
where ψi : R2 → R are convex and twice differentiable. Because of this form, the Hessian ∇2f is tridiagonal, since ∂2f/∂xi∂xj = 0 for |i − j| > 1. (And conversely, if the Hessian of a function is tridiagonal for all x, then it has this form.)
Using Cholesky factorization and forward and back substitution algorithms for tridiagonal matrices, we can solve the Newton system for this problem in order n flops. This should be compared to order n^3 flops, if the special form of f were not exploited.
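A sketch of this computation using SciPy's banded symmetric solver (an illustration only; the array names are assumptions). For the form in example 9.9, diag would hold the diagonal entries ∂2f/∂xi2 and off the couplings ∂2f/∂xi∂xi+1.

```python
import numpy as np
from scipy.linalg import solveh_banded

def newton_step_tridiagonal(diag, off, g):
    """Newton step -H^{-1} g for a tridiagonal positive definite Hessian,
    given its main diagonal and first off-diagonal, in order n flops."""
    ab = np.zeros((2, diag.size))   # upper banded storage for solveh_banded
    ab[0, 1:] = off                 # superdiagonal
    ab[1, :] = diag                 # main diagonal
    return solveh_banded(ab, -g)
```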
Sparse structure
More generally we can exploit sparsity of the Hessian H in solving the Newton system. This sparse structure occurs whenever each variable xi is nonlinearly coupled (in the objective) to only a few other variables, or equivalently, when the objective function can be expressed as a sum of functions, each depending on only a few variables, and each variable appearing in only a few of these functions.
To solve H∆x = −g when H is sparse, a sparse Cholesky factorization is used to compute a permutation matrix P and lower triangular matrix L for which
H = P LLT P T .
The cost of this factorization depends on the particular sparsity pattern, but is often far smaller than (1/3)n3, and an empirical complexity of order n (for large n) is not uncommon. The forward and back substitution are very similar to the basic method without the permutation. We solve Lw = −PTg using forward substitution, and then solve LT v = w by back substitution to obtain
v = L−T w = −L−T L−1PT g.
The Newton step is then ∆x = P v.
Since the sparsity pattern of H does not change as x varies (or more precisely,
since we only exploit sparsity that does not change with x) we can use the same permutation matrix P for each of the Newton steps. The step of determining a good permutation matrix P , which is called the symbolic factorization step, can be done once, for the whole Newton process.
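A rough sketch using SciPy's sparse machinery is shown below. SciPy itself ships a sparse LU factorization (splu, which also computes fill-reducing permutations) rather than a sparse Cholesky, so splu stands in here for the P L L^T P^T factorization described above; a dedicated sparse Cholesky (e.g. CHOLMOD, via scikit-sparse) would exploit symmetry and do about half the work. This is an illustrative assumption, not the book's method.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def newton_step_sparse(H, g):
    """Newton step for a sparse Hessian H.

    splu performs the symbolic (ordering) and numeric factorization
    in one call; for repeated Newton steps with a fixed sparsity
    pattern, the symbolic analysis could be reused."""
    lu = splu(sp.csc_matrix(H))
    return lu.solve(-g)
```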
Diagonal plus low rank
There are many other types of structure that can be exploited in solving the New- ton system H∆xnt = −g. Here we briefly describe one, and refer the reader to appendix C for more details. Suppose the Hessian H can be expressed as a diago- nal matrix plus one of low rank, say, p. This occurs when the objective function f has the special form
f(x) = Σ_{i=1}^n ψi(xi) + ψ0(Ax + b), (9.60)

where A ∈ Rp×n, ψ1,...,ψn : R → R, and ψ0 : Rp → R. In other words, f is a separable function, plus a function that depends on a low-dimensional affine function of x.
To find the Newton step ∆xnt for (9.60) we must solve the Newton system
H∆xnt = −g, with

H = D + A^T H0 A.

Here D = diag(ψ1′′(x1),...,ψn′′(xn)) is diagonal, and H0 = ∇2ψ0(Ax + b) is the Hessian of ψ0. If we compute the Newton step without exploiting the structure, the cost of solving the Newton system is (1/3)n^3 flops.
Let H0 = L0 L0^T be the Cholesky factorization of H0. We introduce the temporary variable w = L0^T A ∆xnt ∈ Rp, and express the Newton system as

D ∆xnt + A^T L0 w = −g, w = L0^T A ∆xnt.
Substituting ∆xnt = −D−1(AT L0w + g) (from the first equation) into the second
equation, we obtain
(I + L0^T A D^(−1) A^T L0) w = −L0^T A D^(−1) g, (9.61)
which is a system of p linear equations.
Now we proceed as follows to compute the Newton step ∆xnt. First we compute
the Cholesky factorization of H0, which costs (1/3)p3. We then form the dense, positive definite symmetric matrix appearing on the lefthand side of (9.61), which costs 2p2n. We then solve (9.61) for w using a Cholesky factorization and a back and forward substitution, which costs (1/3)p3 flops. Finally, we compute ∆xnt using ∆xnt = −D−1(AT L0w + g), which costs 2np flops. The total cost of computing ∆xnt is (keeping only the dominant term) 2p2n flops, which is far smaller than (1/3)n3 for p ≪ n.
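The elimination just described is compact in code. The sketch below (an illustration, with assumed names; d holds the diagonal of D) follows the steps above: factor H0, form and solve the small p × p system (9.61), then recover ∆xnt.

```python
import numpy as np

def newton_step_diag_lowrank(d, A, H0, g):
    """Newton step for H = diag(d) + A^T H0 A, with A of size p x n, p << n."""
    L0 = np.linalg.cholesky(H0)          # H0 = L0 L0^T, (1/3)p^3 flops
    B = A.T @ L0                         # n x p matrix A^T L0
    # Form I + L0^T A D^{-1} A^T L0 (dense p x p, 2 p^2 n flops).
    S = np.eye(L0.shape[0]) + B.T @ (B / d[:, None])
    # Solve (9.61) for w, then recover dx = -D^{-1}(A^T L0 w + g).
    w = np.linalg.solve(S, -B.T @ (g / d))
    return -(B @ w + g) / d
```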

Bibliography
Dennis and Schnabel [DS96] and Ortega and Rheinboldt [OR00] are two standard references on algorithms for unconstrained minimization and nonlinear equations. The result on quadratic convergence, assuming strong convexity and Lipschitz continuity of the Hessian, is attributed to Kantorovich [Kan52]. Polyak [Pol87, §1.6] gives some insightful comments on the role of convergence results that involve unknown constants, such as the results derived in §9.5.3.
Self-concordant functions were introduced by Nesterov and Nemirovski [NN94]. All our results in §9.6 and exercises 9.14–9.20 can be found in their book, although often in a more general form or with different notation. Renegar [Ren01] gives a concise and elegant presentation of self-concordant functions and their role in the analysis of primal-dual interior-point algorithms. Peng, Roos, and Terlaky [PRT02] study interior-point methods from the viewpoint of self-regular functions, a class of functions that is similar, but not identical, to self-concordant functions.
References for the material in §9.7 are given at the end of appendix C.

Exercises

Unconstrained minimization

9.1 Minimizing a quadratic function. Consider the problem of minimizing a quadratic function:

minimize f(x) = (1/2) x^T P x + q^T x + r,

where P ∈ Sn (but we do not assume P ≽ 0).

(a) Show that if P ̸≽ 0, i.e., the objective function f is not convex, then the problem is unbounded below.
(b) Now suppose that P ≽ 0 (so the objective function is convex), but the optimality condition Px⋆ = −q does not have a solution. Show that the problem is unbounded below.

9.2 Minimizing a quadratic-over-linear fractional function. Consider the problem of minimizing the function f : Rn → R, defined as

f(x) = ∥Ax − b∥2^2/(c^T x + d), dom f = {x | c^T x + d > 0}.

We assume rank A = n and b ̸∈ R(A).

(a) Show that f is closed.
(b) Show that the minimizer x⋆ of f is given by

x⋆ = x1 + t x2,

where x1 = (A^T A)^(−1) A^T b, x2 = (A^T A)^(−1) c, and t ∈ R can be calculated by solving a quadratic equation.

9.3 Initial point and sublevel set condition. Consider the function f(x) = x1^2 + x2 with domain dom f = {(x1, x2) | x1 > 1}.

(a) What is p⋆?
(b) Draw the sublevel set S = {x | f(x) ≤ f(x^(0))} for x^(0) = (2, 2). Is the sublevel set S closed? Is f strongly convex on S?
(c) What happens if we apply the gradient method with backtracking line search, starting at x^(0)? Does f(x^(k)) converge to p⋆?

9.4 Do you agree with the following argument? The l1-norm of a vector x ∈ Rm can be expressed as

∥x∥1 = (1/2) inf_{y≻0} (Σ_{i=1}^m xi^2/yi + 1^T y).

Therefore the l1-norm approximation problem

minimize ∥Ax − b∥1

is equivalent to the minimization problem

minimize f(x, y) = Σ_{i=1}^m (ai^T x − bi)^2/yi + 1^T y, (9.62)

with dom f = {(x, y) ∈ Rn × Rm | y ≻ 0}, where ai^T is the ith row of A. Since f is twice differentiable and convex, we can solve the l1-norm approximation problem by applying Newton's method to (9.62).

9.5 Backtracking line search. Suppose f is strongly convex with mI ≼ ∇2f(x) ≼ MI. Let ∆x be a descent direction at x. Show that the backtracking stopping condition holds for

0 < t ≤ −∇f(x)^T ∆x/(M∥∆x∥2^2).

Use this to give an upper bound on the number of backtracking iterations.
Gradient and steepest descent methods

9.6 Quadratic problem in R2. Verify the expressions for the iterates x^(k) in the first example of §9.3.2.

9.7 Let ∆xnsd and ∆xsd be the normalized and unnormalized steepest descent directions at x, for the norm ∥·∥. Prove the following identities.

(a) ∇f(x)^T ∆xnsd = −∥∇f(x)∥∗.
(b) ∇f(x)^T ∆xsd = −∥∇f(x)∥∗^2.
(c) ∆xsd = argmin_v (∇f(x)^T v + (1/2)∥v∥^2).

9.8 Steepest descent method in l∞-norm. Explain how to find a steepest descent direction in the l∞-norm, and give a simple interpretation.

Newton's method

9.9 Newton decrement. Show that the Newton decrement λ(x) satisfies

λ(x) = sup_{v^T ∇2f(x)v=1} (−v^T ∇f(x)) = sup_{v≠0} (−v^T ∇f(x))/(v^T ∇2f(x)v)^(1/2).

9.10 The pure Newton method. Newton's method with fixed step size t = 1 can diverge if the initial point is not close to x⋆. In this problem we consider two examples.

(a) f(x) = log(e^x + e^(−x)) has a unique minimizer x⋆ = 0. Run Newton's method with fixed step size t = 1, starting at x^(0) = 1 and at x^(0) = 1.1.
(b) f(x) = −log x + x has a unique minimizer x⋆ = 1. Run Newton's method with fixed step size t = 1, starting at x^(0) = 3. Plot f and f′, and show the first few iterates.

9.11 Gradient and Newton methods for composition functions. Suppose φ : R → R is increasing and convex, and f : Rn → R is convex, so g(x) = φ(f(x)) is convex. (We assume that f and g are twice differentiable.) The problems of minimizing f and minimizing g are clearly equivalent. Compare the gradient method and Newton's method, applied to f and g. How are the search directions related? How are the methods related if an exact line search is used? Hint. Use the matrix inversion lemma (see §C.4.3).

9.12 Trust region Newton method. If ∇2f(x) is singular (or very ill-conditioned), the Newton step ∆xnt = −∇2f(x)^(−1)∇f(x) is not well defined. Instead we can define a search direction ∆xtr as the solution of

minimize (1/2) v^T H v + g^T v
subject to ∥v∥2 ≤ γ,

where H = ∇2f(x), g = ∇f(x), and γ is a positive constant. The point x + ∆xtr minimizes the second-order approximation of f at x, subject to the constraint that ∥(x + ∆xtr) − x∥2 ≤ γ. The set {v | ∥v∥2 ≤ γ} is called the trust region. The parameter γ, the size of the trust region, reflects our confidence in the second-order model.

Show that ∆xtr minimizes

(1/2) v^T H v + g^T v + β̂∥v∥2^2,

for some β̂. This quadratic function can be interpreted as a regularized quadratic model for f around x.

Self-concordance

9.13 Self-concordance and the inverse barrier.

(a) Show that f(x) = 1/x with domain (0, 8/9) is self-concordant.
(b) Show that the function

f(x) = α Σ_{i=1}^m 1/(bi − ai^T x),

with dom f = {x ∈ Rn | ai^T x < bi, i = 1,...,m}, is self-concordant if dom f is bounded and

α > (9/8) max_{i=1,...,m} sup_{x∈dom f} (bi − ai^T x).

9.14 Composition with logarithm. Let g : R → R be a convex function with dom g = R++, and

|g′′′(x)| ≤ 3 g′′(x)/x

for all x. Prove that f(x) = −log(−g(x)) − log x is self-concordant on {x | x > 0, g(x) < 0}. Hint. Use the inequality

(3/2) r p^2 + q^3 + (3/2) p^2 q + r^3 ≤ 1,

which holds for p, q, r ∈ R+ with p^2 + q^2 + r^2 = 1.
9.15 Prove that the following functions are self-concordant. In your proof, restrict the function to a line, and apply the composition with logarithm rule.

(a) f(x,y) = −log(y^2 − x^T x) on {(x,y) | ∥x∥2 < y}.
(b) f(x,y) = −2 log y − log(y^(2/p) − x^2), with p ≥ 1, on {(x,y) ∈ R^2 | |x|^p < y}.

9.16 Find the 'extreme' self-concordant functions of one variable, i.e., the functions f that satisfy (9.41) with equality, with f′′(x) > 0 for all x ∈ dom f. Hint. For f′′(x) > 0, the self-concordance condition (9.41) can be expressed as

|(d/dx)(f′′(x)^(−1/2))| ≤ 1.

9.17 Upper and lower bounds on the Hessian of a self-concordant function.

(a) Let f : R2 → R be a self-concordant function. Show that

|∂3f(x)/∂xi3| ≤ 2 (∂2f(x)/∂xi2)^(3/2), i = 1, 2,

|∂3f(x)/∂xi2∂xj| ≤ 2 (∂2f(x)/∂xi2) (∂2f(x)/∂xj2)^(1/2), i ≠ j,

for all x ∈ dom f.

Hint. If h : R2 × R2 × R2 → R is a symmetric trilinear form, i.e.,

h(u,v,w) = a1 u1v1w1 + a2(u1v1w2 + u1v2w1 + u2v1w1) + a3(u1v2w2 + u2v1w2 + u2v2w1) + a4 u2v2w2,

then

sup_{u,v,w≠0} h(u,v,w)/(∥u∥2∥v∥2∥w∥2) = sup_{u≠0} h(u,u,u)/∥u∥2^3.
(b) Let f : Rn → R be a self-concordant function. Show that the nullspace of ∇2f(x) is independent of x. Show that if f is strictly convex, then ∇2f(x) is nonsingular for all x ∈ domf.
Hint. Prove that if wT ∇2f(x)w = 0 for some x ∈ dom f, then wT ∇2f(y)w = 0 for all y ∈ dom f . To show this, apply the result in (a) to the self-concordant function
f̃(t, s) = f(x + t(y − x) + sw).
(c) Let f : Rn → R be a self-concordant function. Suppose x ∈ dom f, v ∈ Rn. Show that

(1 − tα)^2 ∇2f(x) ≼ ∇2f(x + tv) ≼ (1/(1 − tα)^2) ∇2f(x)

for x + tv ∈ dom f, 0 ≤ t < 1/α, where α = (v^T ∇2f(x) v)^(1/2).

9.18 Quadratic convergence. Let f : Rn → R be a strictly convex self-concordant function. Suppose λ(x) < 1, and define x+ = x − ∇2f(x)^(−1)∇f(x). Prove that λ(x+) ≤ λ(x)^2/(1 − λ(x))^2. Hint. Use the inequalities in exercise 9.17, part (c).

9.19 Bound on the distance from the optimum. Let f : Rn → R be a strictly convex self-concordant function.

(a) Suppose λ(x̄) < 1 and the sublevel set {x | f(x) ≤ f(x̄)} is closed. Show that the minimum of f is attained and

((x̄ − x⋆)^T ∇2f(x̄) (x̄ − x⋆))^(1/2) ≤ λ(x̄)/(1 − λ(x̄)).

(b) Show that if f has a closed sublevel set, and is bounded below, then its minimum is attained.

9.20 Conjugate of a self-concordant function. Suppose f : Rn → R is closed, strictly convex, and self-concordant. We show that its conjugate (or Legendre transform) f∗ is self-concordant.

(a) Show that for each y ∈ dom f∗, there is a unique x ∈ dom f that satisfies y = ∇f(x). Hint. Refer to the result of exercise 9.19.
(b) Suppose ȳ = ∇f(x̄). Define

g(t) = f(x̄ + tv), h(t) = f∗(ȳ + tw),

where v ∈ Rn and w = ∇2f(x̄)v. Show that

g′′(0) = h′′(0), g′′′(0) = −h′′′(0).

Use these identities to show that f∗ is self-concordant.

9.21 Optimal line search parameters. Consider the upper bound (9.56) on the number of Newton iterations required to minimize a strictly convex self-concordant function. What is the minimum value of the upper bound, if we minimize over α and β?

9.22 Suppose that f is strictly convex and satisfies (9.42). Give a bound on the number of Newton steps required to compute p⋆ within ǫ, starting at x^(0).

Implementation

9.23 Pre-computation for line searches. For each of the following functions, explain how the computational cost of a line search can be reduced by a pre-computation. Give the cost of the pre-computation, and the cost of evaluating g(t) = f(x + t∆x) and g′(t) with and without the pre-computation.

(a) f(x) = −Σ_{i=1}^m log(bi − ai^T x).
(b) f(x) = log(Σ_{i=1}^m exp(ai^T x + bi)).
(c) f(x) = (Ax − b)^T (P0 + x1 P1 + ··· + xn Pn)^(−1) (Ax − b), where Pi ∈ Sm, A ∈ Rm×n, b ∈ Rm and dom f = {x | P0 + Σ_{i=1}^n xi Pi ≻ 0}.

9.24 Exploiting block diagonal structure in the Newton system. Suppose the Hessian ∇2f(x) of a convex function f is block diagonal. How do we exploit this structure when computing the Newton step? What does it mean about f?

9.25 Smoothed fit to given data. Consider the problem

minimize f(x) = Σ_{i=1}^n ψ(xi − yi) + λ Σ_{i=1}^{n−1} (xi+1 − xi)^2,

where λ > 0 is a smoothing parameter, ψ is a convex penalty function, and x ∈ Rn is the
variable. We can interpret x as a smoothed fit to the vector y.
(a) What is the structure in the Hessian of f?
(b) Extend to the problem of making a smooth fit to two-dimensional data, i.e., minimizing the function

Σ_{i,j=1}^n ψ(xij − yij) + λ (Σ_{i=1}^{n−1} Σ_{j=1}^n (x_{i+1,j} − xij)^2 + Σ_{i=1}^n Σ_{j=1}^{n−1} (x_{i,j+1} − xij)^2),

with variable X ∈ Rn×n, where Y ∈ Rn×n and λ > 0 are given.

9.26 Newton equations with linear structure. Consider the problem of minimizing a function of the form

f(x) = Σ_{i=1}^N ψi(Ai x + bi), (9.63)

where Ai ∈ Rmi×n, bi ∈ Rmi, and the functions ψi : Rmi → R are twice differentiable and convex. The Hessian H and gradient g of f at x are given by

H = Σ_{i=1}^N Ai^T Hi Ai, g = Σ_{i=1}^N Ai^T gi, (9.64)

where Hi = ∇2ψi(Ai x + bi) and gi = ∇ψi(Ai x + bi). Describe how you would implement Newton's method for minimizing f. Assume that n ≫ mi, the matrices Ai are very sparse, but the Hessian H is dense.
9.27 Analytic center of linear inequalities with variable bounds. Give the most efficient method for computing the Newton step of the function

f(x) = −Σ_{i=1}^n log(xi + 1) − Σ_{i=1}^n log(1 − xi) − Σ_{i=1}^m log(bi − ai^T x),

with dom f = {x ∈ Rn | −1 ≺ x ≺ 1, Ax ≺ b}, where ai^T is the ith row of A. Assume A is dense, and distinguish two cases: m ≥ n and m ≤ n. (See also exercise 9.30.)

9.28 Analytic center of quadratic inequalities. Describe an efficient method for computing the Newton step of the function

f(x) = −Σ_{i=1}^m log(−x^T Ai x − bi^T x − ci),

with dom f = {x | x^T Ai x + bi^T x + ci < 0, i = 1,...,m}. Assume that the matrices Ai ∈ Sn++ are large and sparse, and m ≪ n.

Hint. The Hessian and gradient of f at x are given by

H = Σ_{i=1}^m (2αi Ai + αi^2 (2Ai x + bi)(2Ai x + bi)^T), g = Σ_{i=1}^m αi (2Ai x + bi),

where αi = 1/(−x^T Ai x − bi^T x − ci).

9.29 Exploiting structure in two-stage optimization. This exercise continues exercise 4.64, which describes optimization with recourse, or two-stage optimization. Using the notation and assumptions in exercise 4.64, we assume in addition that the cost function f is a twice differentiable function of (x, z), for each scenario i = 1,...,S. Explain how to efficiently compute the Newton step for the problem of finding the optimal policy. How does the approximate flop count for your method compare to that of a generic method (which exploits no structure), as a function of S, the number of scenarios?

Numerical experiments

9.30 Gradient and Newton methods. Consider the unconstrained problem

minimize f(x) = −Σ_{i=1}^m log(1 − ai^T x) − Σ_{i=1}^n log(1 − xi^2),

with variable x ∈ Rn, and dom f = {x | ai^T x < 1, i = 1,...,m, |xi| < 1, i = 1,...,n}. This is the problem of computing the analytic center of the set of linear inequalities

ai^T x ≤ 1, i = 1,...,m, |xi| ≤ 1, i = 1,...,n.

Note that we can choose x^(0) = 0 as our initial point. You can generate instances of this problem by choosing ai from some distribution on Rn.

(a) Use the gradient method to solve the problem, using reasonable choices for the backtracking parameters, and a stopping criterion of the form ∥∇f(x)∥2 ≤ η. Plot the objective function and step length versus iteration number. (Once you have determined p⋆ to high accuracy, you can also plot f − p⋆ versus iteration.) Experiment with the backtracking parameters α and β to see their effect on the total number of iterations required. Carry these experiments out for several instances of the problem, of different sizes.
(b) Repeat using Newton's method, with stopping criterion based on the Newton decrement λ^2. Look for quadratic convergence. You do not have to use an efficient method to compute the Newton step, as in exercise 9.27; you can use a general purpose dense solver, although it is better to use one that is based on a Cholesky factorization.

Hint. Use the chain rule to find expressions for ∇f(x) and ∇2f(x).

9.31 Some approximate Newton methods. The cost of Newton's method is dominated by the cost of evaluating the Hessian ∇2f(x) and the cost of solving the Newton system. For large problems, it is sometimes useful to replace the Hessian by a positive definite approximation that makes it easier to form and solve for the search step. In this problem we explore some common examples of this idea. For each of the approximate Newton methods described below, test the method on some instances of the analytic centering problem described in exercise 9.30, and compare the results to those obtained using the Newton method and gradient method.

(a) Re-using the Hessian. We evaluate and factor the Hessian only every N iterations, where N > 1, and use the search step ∆x = −H^(−1)∇f(x), where H is the last Hessian evaluated. (We need to evaluate and factor the Hessian once every N steps; for the other steps, we compute the search direction using back and forward substitution.)
(b) Diagonal approximation. We replace the Hessian by its diagonal, so we only have to evaluate the n second derivatives ∂2f(x)/∂x2i , and computing the search step is very easy.
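As a starting point for the numerical experiments in exercises 9.30 and 9.31, the sketch below (an illustration, not a solution) generates a random instance of the analytic centering problem and evaluates f, ∇f, and ∇2f; the instance sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 100
A = rng.standard_normal((m, n))   # rows are the a_i

def f(x):
    s, u = 1 - A @ x, 1 - x**2
    if (s <= 0).any() or (u <= 0).any():
        return np.inf             # outside dom f
    return -np.log(s).sum() - np.log(u).sum()

def grad(x):
    s = 1 - A @ x
    return A.T @ (1 / s) + 2 * x / (1 - x**2)

def hess(x):
    s = 1 - A @ x
    d = 2 * (1 + x**2) / (1 - x**2) ** 2       # from the -log(1 - x_i^2) terms
    return A.T @ ((1 / s**2)[:, None] * A) + np.diag(d)

x0 = np.zeros(n)   # feasible starting point, as noted in the exercise
```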
9.32 Gauss-Newton method for convex nonlinear least-squares problems. We consider a (nonlinear) least-squares problem, in which we minimize a function of the form

f(x) = (1/2) Σ_{i=1}^m fi(x)^2,

where fi are twice differentiable functions. The gradient and Hessian of f at x are given by

∇f(x) = Σ_{i=1}^m fi(x) ∇fi(x), ∇2f(x) = Σ_{i=1}^m (∇fi(x) ∇fi(x)^T + fi(x) ∇2fi(x)).
We consider the case when f is convex. This occurs, for example, if each fi is either nonnegative and convex, or nonpositive and concave, or affine.
The Gauss-Newton method uses the search direction

    ∆x_gn = −(∑_{i=1}^m ∇f_i(x)∇f_i(x)^T)^{−1} (∑_{i=1}^m f_i(x)∇f_i(x)).

(We assume here that the inverse exists, i.e., the vectors ∇f_1(x), ..., ∇f_m(x) span R^n.) This search direction can be considered an approximate Newton direction (see exercise 9.31), obtained by dropping the second derivative terms from the Hessian of f.
We can give another simple interpretation of the Gauss-Newton search direction ∆x_gn. Using the first-order approximation f_i(x + v) ≈ f_i(x) + ∇f_i(x)^T v we obtain the approximation

    f(x + v) ≈ (1/2) ∑_{i=1}^m (f_i(x) + ∇f_i(x)^T v)².

The Gauss-Newton search step ∆x_gn is precisely the value of v that minimizes this approximation of f. (Moreover, we conclude that ∆x_gn can be computed by solving a linear least-squares problem.)
Test the Gauss-Newton method on some problem instances of the form
    f_i(x) = (1/2) x^T A_i x + b_i^T x + 1,

with A_i ∈ S^n_{++} and b_i^T A_i^{−1} b_i ≤ 2 (which ensures that f is convex).
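For illustration, here is a minimal NumPy sketch (ours, not part of the exercise) of one Gauss-Newton step for these test instances. It uses the least-squares interpretation above: the rows of J are the gradients ∇f_i(x)^T = (A_i x + b_i)^T, and ∆x_gn solves minimize ∥Jv + f∥2.

    import numpy as np

    def gauss_newton_step(x, As, bs):
        """One Gauss-Newton step for f(x) = (1/2) sum_i f_i(x)^2 with
        f_i(x) = (1/2) x^T A_i x + b_i^T x + 1."""
        n = x.size
        J = np.zeros((len(As), n))            # rows are grad f_i(x)^T
        fvec = np.zeros(len(As))
        for k, (A, b) in enumerate(zip(As, bs)):
            fvec[k] = 0.5 * x @ A @ x + b @ x + 1.0
            J[k] = A @ x + b                  # grad f_i(x) = A_i x + b_i
        # dx solves the linear least-squares problem min ||J v + fvec||_2,
        # i.e., dx = -(J^T J)^{-1} J^T fvec when J has full column rank.
        dx, *_ = np.linalg.lstsq(J, -fvec, rcond=None)
        return dx

Iterating x := x + t ∆x_gn with a backtracking line search on f gives the full method.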
Chapter 10
Equality constrained minimization
10.1 Equality constrained minimization problems
In this chapter we describe methods for solving a convex optimization problem with equality constraints,
    minimize  f(x)
    subject to Ax = b,                    (10.1)
where f : R^n → R is convex and twice continuously differentiable, and A ∈ R^{p×n} with rank A = p < n. The assumptions on A mean that there are fewer equality constraints than variables, and that the equality constraints are independent. We will assume that an optimal solution x⋆ exists, and use p⋆ to denote the optimal value, p⋆ = inf{f(x) | Ax = b} = f(x⋆).

Recall (from §4.2.3 or §5.5.3) that a point x⋆ ∈ dom f is optimal for (10.1) if and only if there is a ν⋆ ∈ R^p such that

    Ax⋆ = b,    ∇f(x⋆) + A^T ν⋆ = 0.    (10.2)

Solving the equality constrained optimization problem (10.1) is therefore equivalent to finding a solution of the KKT equations (10.2), which is a set of n + p equations in the n + p variables x⋆, ν⋆. The first set of equations, Ax⋆ = b, are called the primal feasibility equations, which are linear. The second set of equations, ∇f(x⋆) + A^T ν⋆ = 0, are called the dual feasibility equations, and are in general nonlinear. As with unconstrained optimization, there are a few problems for which we can solve these optimality conditions analytically. The most important special case is when f is quadratic, which we examine in §10.1.1.

Any equality constrained minimization problem can be reduced to an equivalent unconstrained problem by eliminating the equality constraints, after which the methods of chapter 9 can be used to solve the problem. Another approach is to solve the dual problem (assuming the dual function is twice differentiable) using an unconstrained minimization method, and then recover the solution of the equality constrained problem (10.1) from the dual solution. The elimination and dual methods are briefly discussed in §10.1.2 and §10.1.3, respectively.

The bulk of this chapter is devoted to extensions of Newton's method that directly handle equality constraints. In many cases these methods are preferable to methods that reduce an equality constrained problem to an unconstrained one. One reason is that problem structure, such as sparsity, is often destroyed by elimination (or forming the dual); in contrast, a method that directly handles equality constraints can exploit the problem structure. Another reason is conceptual: methods that directly handle equality constraints can be thought of as methods for directly solving the optimality conditions (10.2).

10.1.1 Equality constrained convex quadratic minimization

Consider the equality constrained convex quadratic minimization problem

    minimize  f(x) = (1/2) x^T P x + q^T x + r
    subject to Ax = b,                    (10.3)

where P ∈ S^n_+ and A ∈ R^{p×n}. This problem is important on its own, and also because it forms the basis for an extension of Newton's method to equality constrained problems. Here the optimality conditions (10.2) are

    Ax⋆ = b,    Px⋆ + q + A^T ν⋆ = 0,

which we can write as

    [ P  A^T ] [ x⋆ ]   [ −q ]
    [ A   0  ] [ ν⋆ ] = [  b ].

This set of n + p linear equations in the n + p variables x⋆, ν⋆ is called the KKT system for the equality constrained quadratic optimization problem (10.3). The coefficient matrix is called the KKT matrix.

When the KKT matrix is nonsingular, there is a unique optimal primal-dual pair (x⋆, ν⋆). If the KKT matrix is singular, but the KKT system is solvable, any solution yields an optimal pair (x⋆, ν⋆). If the KKT system is not solvable, the quadratic optimization problem is unbounded below or infeasible. Indeed, in this case there exist v ∈ R^n and w ∈ R^p such that

    Pv + A^T w = 0,    Av = 0,    −q^T v + b^T w > 0.    (10.4)
Let x̂ be any feasible point. The point x = x̂ + tv is feasible for all t, and

    f(x̂ + tv) = f(x̂) + t(v^T P x̂ + q^T v) + (1/2) t² v^T P v
              = f(x̂) + t(−x̂^T A^T w + q^T v) − (1/2) t² w^T Av
              = f(x̂) + t(−b^T w + q^T v),

which decreases without bound as t → ∞.
Nonsingularity of the KKT matrix
Recall our assumption that P ∈ S^n_+ and rank A = p < n. There are several conditions equivalent to nonsingularity of the KKT matrix:

• N(P) ∩ N(A) = {0}, i.e., P and A have no nontrivial common nullspace.

• Ax = 0, x ≠ 0 ⟹ x^T P x > 0, i.e., P is positive definite on the nullspace of A.

• F^T P F ≻ 0, where F ∈ R^{n×(n−p)} is a matrix for which R(F) = N(A).
(See exercise 10.1.) As an important special case, we note that if P ≻ 0, the KKT matrix must be nonsingular.
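As a concrete illustration (our own sketch, not from the text), the equality constrained QP (10.3) can be solved by forming and solving the KKT system directly; np.linalg.solve raises an error precisely when the KKT matrix is singular.

    import numpy as np

    def solve_eq_qp(P, q, A, b):
        """Solve minimize (1/2) x^T P x + q^T x  s.t.  Ax = b
        by solving the (n+p) x (n+p) KKT system directly."""
        n = P.shape[0]
        p = A.shape[0]
        K = np.block([[P, A.T],
                      [A, np.zeros((p, p))]])
        rhs = np.concatenate([-q, b])
        sol = np.linalg.solve(K, rhs)         # fails iff KKT matrix is singular
        return sol[:n], sol[n:]               # x_star, nu_star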
10.1.2 Eliminating equality constraints
One general approach to solving the equality constrained problem (10.1) is to eliminate the equality constraints, as described in §4.2.4, and then solve the resulting unconstrained problem using methods for unconstrained minimization. We first find a matrix F ∈ R^{n×(n−p)} and vector x̂ ∈ R^n that parametrize the (affine) feasible set:
    {x | Ax = b} = {Fz + x̂ | z ∈ R^{n−p}}.
Here xˆ can be chosen as any particular solution of Ax = b, and F ∈ Rn×(n−p) is any matrix whose range is the nullspace of A. We then form the reduced or eliminated optimization problem
    minimize  f̃(z) = f(Fz + x̂),    (10.5)
which is an unconstrained problem with variable z ∈ Rn−p. From its solution z⋆, we can find the solution of the equality constrained problem as x⋆ = F z⋆ + xˆ.
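A minimal numerical sketch of this elimination step (our illustration; SciPy's null_space is one way to realize the constructions of §C.5):

    import numpy as np
    from scipy.linalg import null_space

    def eliminate(A, b):
        """Return F and x_hat with {x | Ax = b} = {F z + x_hat}."""
        F = null_space(A)                     # columns span N(A), shape n x (n-p)
        x_hat = np.linalg.lstsq(A, b, rcond=None)[0]   # a particular solution
        return F, x_hat

    # The reduced objective is f_tilde(z) = f(F @ z + x_hat); any unconstrained
    # method from chapter 9 applied to f_tilde yields z_star, and then
    # x_star = F @ z_star + x_hat.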
We can also construct an optimal dual variable ν⋆ for the equality constrained problem, as

    ν⋆ = −(AA^T)^{−1} A ∇f(x⋆).    (10.6)

To show that this expression is correct, we must verify that the dual feasibility condition

    ∇f(x⋆) + A^T (−(AA^T)^{−1} A ∇f(x⋆)) = 0

holds. To show this, we note that

    [ F^T ]
    [  A  ] (∇f(x⋆) − A^T (AA^T)^{−1} A ∇f(x⋆)) = 0,

where in the top block we use F^T ∇f(x⋆) = ∇f̃(z⋆) = 0, and AF = 0. Since the matrix on the left is nonsingular, this implies (10.6).

Example 10.1 Optimal allocation with resource constraint. We consider the problem

    minimize  ∑_{i=1}^n f_i(x_i)
    subject to ∑_{i=1}^n x_i = b,
where the functions f_i : R → R are convex and twice differentiable, and b ∈ R is a problem parameter. We interpret this as the problem of optimally allocating a single resource, with a fixed total amount b (the budget), to n otherwise independent activities.

We can eliminate x_n (for example) using the parametrization

    x_n = b − x_1 − ··· − x_{n−1},

which corresponds to the choices

    x̂ = b e_n,    F = [ I ; −1^T ] ∈ R^{n×(n−1)}.

The reduced problem is then

    minimize  f_n(b − x_1 − ··· − x_{n−1}) + ∑_{i=1}^{n−1} f_i(x_i),

with variables x_1, ..., x_{n−1}.

Choice of elimination matrix

There are, of course, many possible choices for the elimination matrix F, which can be chosen as any matrix in R^{n×(n−p)} with R(F) = N(A). If F is one such matrix, and T ∈ R^{(n−p)×(n−p)} is nonsingular, then F̃ = FT is also a suitable elimination matrix, since

    R(F̃) = R(F) = N(A).

Conversely, if F and F̃ are any two suitable elimination matrices, then there is some nonsingular T such that F̃ = FT.

If we eliminate the equality constraints using F, we solve the unconstrained problem

    minimize  f(Fz + x̂),

while if F̃ is used, we solve the unconstrained problem

    minimize  f(F̃z̃ + x̂) = f(F(Tz̃) + x̂).

This problem is equivalent to the one above, and is simply obtained by the change of coordinates z = Tz̃. In other words, changing the elimination matrix can be thought of as changing variables in the reduced problem.

10.1.3 Solving equality constrained problems via the dual

Another approach to solving (10.1) is to solve the dual, and then recover the optimal primal variable x⋆, as described in §5.5.5. The dual function of (10.1) is

    g(ν) = −b^T ν + inf_x (f(x) + ν^T Ax)
         = −b^T ν − sup_x ((−A^T ν)^T x − f(x))
         = −b^T ν − f∗(−A^T ν),
where f∗ is the conjugate of f, so the dual problem is

    maximize  −b^T ν − f∗(−A^T ν).
Since by assumption there is an optimal point, the problem is strictly feasible, so Slater’s condition holds. Therefore strong duality holds, and the dual optimum is attained, i.e., there exists a ν⋆ with g(ν⋆) = p⋆.
If the dual function g is twice differentiable, then the methods for unconstrained minimization described in chapter 9 can be used to maximize g. (In general, the dual function g need not be twice differentiable, even if f is.) Once we find an optimal dual variable ν⋆, we reconstruct an optimal primal solution x⋆ from it. (This is not always straightforward; see §5.5.5.)
Example 10.2 Equality constrained analytic center. We consider the problem

    minimize  f(x) = −∑_{i=1}^n log x_i
    subject to Ax = b,                    (10.7)

where A ∈ R^{p×n}, with implicit constraint x ≻ 0. Using

    f∗(y) = ∑_{i=1}^n (−1 − log(−y_i)) = −n − ∑_{i=1}^n log(−y_i)

(with dom f∗ = −R^n_{++}), the dual problem is

    maximize  g(ν) = −b^T ν + n + ∑_{i=1}^n log(A^T ν)_i,    (10.8)

with implicit constraint A^T ν ≻ 0. Here we can easily solve the dual feasibility equation, i.e., find the x that minimizes L(x, ν):

    ∇f(x) + A^T ν = −(1/x_1, ..., 1/x_n) + A^T ν = 0,

and so

    x_i(ν) = 1/(A^T ν)_i.    (10.9)
To solve the equality constrained analytic centering problem (10.7), we solve the (unconstrained) dual problem (10.8), and then recover the optimal solution of (10.7) via (10.9).
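A short NumPy sketch of this dual approach (our illustration; the function name and stopping rule are ours). It maximizes g by Newton's method with a backtracking line search, then recovers x from (10.9):

    import numpy as np

    def analytic_center_dual(A, b, nu0, tol=1e-10, alpha=0.1, beta=0.5):
        """Newton's method on the dual (10.8); nu0 must satisfy A^T nu0 > 0.
        Returns the primal point x(nu) from (10.9) and nu."""
        n = A.shape[1]

        def g(nu):                                # dual objective
            s = A.T @ nu
            return -np.inf if np.any(s <= 0) else -b @ nu + n + np.log(s).sum()

        nu = nu0.copy()
        while True:
            y = 1.0 / (A.T @ nu)
            grad = -b + A @ y                     # gradient of g
            # Newton step: (A diag(y)^2 A^T) dnu = grad
            dnu = np.linalg.solve(A @ ((y**2)[:, None] * A.T), grad)
            lam2 = grad @ dnu                     # Newton decrement squared
            if lam2 / 2 <= tol:
                return 1.0 / (A.T @ nu), nu       # x(nu), nu
            t = 1.0                               # backtracking line search
            while g(nu + t * dnu) < g(nu) + alpha * t * lam2:
                t *= beta
            nu = nu + t * dnu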
10.2 Newton’s method with equality constraints
In this section we describe an extension of Newton’s method to include equality constraints. The method is almost the same as Newton’s method without con- straints, except for two differences: The initial point must be feasible (i.e., satisfy x ∈ domf and Ax = b), and the definition of Newton step is modified to take the equality constraints into account. In particular, we make sure that the Newton step ∆xnt is a feasible direction, i.e., A∆xnt = 0.
10.2.1 The Newton step
Definition via second-order approximation
To derive the Newton step ∆xnt for the equality constrained problem

    minimize  f(x)
    subject to Ax = b,

at the feasible point x, we replace the objective with its second-order Taylor approximation near x, to form the problem

    minimize  f̂(x + v) = f(x) + ∇f(x)^T v + (1/2) v^T ∇2f(x) v
    subject to A(x + v) = b,                    (10.10)

with variable v. This is a (convex) quadratic minimization problem with equality constraints, and can be solved analytically. We define ∆xnt, the Newton step at x, as the solution of the convex quadratic problem (10.10), assuming the associated KKT matrix is nonsingular. In other words, the Newton step ∆xnt is what must be added to x to solve the problem when the quadratic approximation is used in place of f.
From our analysis in §10.1.1 of the equality constrained quadratic problem, the Newton step ∆xnt is characterized by
    [ ∇2f(x)  A^T ] [ ∆xnt ]   [ −∇f(x) ]
    [    A      0  ] [  w   ] = [    0   ],    (10.11)
where w is the associated optimal dual variable for the quadratic problem. The Newton step is defined only at points for which the KKT matrix is nonsingular.
As in Newton's method for unconstrained problems, we observe that when the objective f is exactly quadratic, the Newton update x + ∆xnt exactly solves the equality constrained minimization problem, and in this case the vector w is the optimal dual variable for the original problem. This suggests, as in the unconstrained case, that when f is nearly quadratic, x + ∆xnt should be a very good estimate of the solution x⋆, and w should be a good estimate of the optimal dual variable ν⋆.
Solution of linearized optimality conditions
We can interpret the Newton step ∆xnt, and the associated vector w, as the solutions of a linearized approximation of the optimality conditions

    Ax⋆ = b,    ∇f(x⋆) + A^T ν⋆ = 0.
We substitute x + ∆xnt for x⋆ and w for ν⋆, and replace the gradient term in the
second equation by its linearized approximation near x, to obtain the equations

    A(x + ∆xnt) = b,    ∇f(x + ∆xnt) + A^T w ≈ ∇f(x) + ∇2f(x)∆xnt + A^T w = 0.
Using Ax = b, these become
    A∆xnt = 0,    ∇2f(x)∆xnt + A^T w = −∇f(x),
which are precisely the equations (10.11) that define the Newton step.
The Newton decrement
We define the Newton decrement for the equality constrained problem as
    λ(x) = (∆xnt^T ∇2f(x) ∆xnt)^{1/2}.    (10.12)
This is exactly the same expression as (9.29), used in the unconstrained case, and the same interpretations hold. For example, λ(x) is the norm of the Newton step, in the norm determined by the Hessian.
Let
    f̂(x + v) = f(x) + ∇f(x)^T v + (1/2) v^T ∇2f(x) v
be the second-order Taylor approximation of f at x. The difference between f(x)
and the minimum of the second-order model satisfies
    f(x) − inf{f̂(x + v) | A(x + v) = b} = λ(x)²/2,    (10.13)
exactly as in the unconstrained case (see exercise 10.6). This means that, as in the unconstrained case, λ(x)²/2 gives an estimate of f(x) − p⋆, based on the quadratic model at x, and also that λ(x) (or a multiple of λ(x)²) serves as the basis of a good stopping criterion.
The Newton decrement comes up in the line search as well, since the directional derivative of f in the direction ∆xnt is

    (d/dt) f(x + t∆xnt) |_{t=0} = ∇f(x)^T ∆xnt = −λ(x)²,    (10.14)

as in the unconstrained case.

Feasible descent direction
Suppose that Ax = b. We say that v ∈ R^n is a feasible direction if Av = 0. In this case, every point of the form x + tv is also feasible, i.e., A(x + tv) = b. We say that v is a descent direction for f at x if, for small t > 0, f(x + tv) < f(x).

The Newton step is always a feasible descent direction (except when x is optimal, in which case ∆xnt = 0). Indeed, the second set of equations that define ∆xnt are A∆xnt = 0, which shows it is a feasible direction; that it is a descent direction follows from (10.14).

Affine invariance

Like the Newton step and decrement for unconstrained optimization, the Newton step and decrement for equality constrained optimization are affine invariant. Suppose T ∈ R^{n×n} is nonsingular, and define f̄(y) = f(Ty). We have

    ∇f̄(y) = T^T ∇f(Ty),    ∇2f̄(y) = T^T ∇2f(Ty) T,

and the equality constraint Ax = b becomes ATy = b. Now consider the problem of minimizing f̄(y), subject to ATy = b. The Newton step ∆ynt at y is given by the solution of

    [ T^T ∇2f(Ty) T   T^T A^T ] [ ∆ynt ]   [ −T^T ∇f(Ty) ]
    [       AT            0   ] [  w̄   ] = [      0       ].

Comparing with the Newton step ∆xnt for f at x = Ty, given in (10.11), we see that T∆ynt = ∆xnt (and w = w̄), i.e., the Newton steps at y and x are related by the same change of coordinates as Ty = x.

10.2.2 Newton's method with equality constraints

The outline of Newton's method with equality constraints is exactly the same as for unconstrained problems.

Algorithm 10.1 Newton's method for equality constrained minimization.

given starting point x ∈ dom f with Ax = b, tolerance ǫ > 0.

repeat
1. Compute the Newton step and decrement ∆xnt, λ(x).
2. Stopping criterion. quit if λ²/2 ≤ ǫ.
3. Line search. Choose step size t by backtracking line search.
4. Update. x := x + t∆xnt.
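Algorithm 10.1 translates almost line for line into code. The following NumPy sketch (ours; it solves the KKT system (10.11) with a generic dense solver rather than exploiting structure) takes f, its gradient, and its Hessian as callbacks, and assumes the starting point is feasible:

    import numpy as np

    def newton_eq(f, grad, hess, A, x0, eps=1e-10, alpha=0.1, beta=0.5):
        """Newton's method for minimize f(x) s.t. Ax = b, started at a
        feasible x0 (A x0 = b).  f, grad, hess are callables."""
        p, n = A.shape
        x = x0.copy()
        while True:
            g, H = grad(x), hess(x)
            K = np.block([[H, A.T], [A, np.zeros((p, p))]])
            rhs = np.concatenate([-g, np.zeros(p)])
            dx = np.linalg.solve(K, rhs)[:n]      # Newton step from (10.11)
            lam2 = dx @ H @ dx                    # Newton decrement squared
            if lam2 / 2 <= eps:
                return x
            t = 1.0                               # backtracking line search
            while f(x + t * dx) > f(x) + alpha * t * (g @ dx):
                t *= beta
            x = x + t * dx

Since A dx = 0 for every step, all iterates remain feasible, as the text notes.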
The method is called a feasible descent method, since all the iterates are feasible, with f(x(k+1)) < f(x(k)) (unless x(k) is optimal). Newton's method requires that the KKT matrix be invertible at each x; we will be more precise about the assumptions required for convergence in §10.2.4.

10.2.3 Newton's method and elimination

We now show that the iterates in Newton's method for the equality constrained problem (10.1) coincide with the iterates in Newton's method applied to the reduced problem (10.5). Suppose F satisfies R(F) = N(A) and rank F = n − p, and x̂ satisfies Ax̂ = b. The gradient and Hessian of the reduced objective function f̃(z) = f(Fz + x̂) are

    ∇f̃(z) = F^T ∇f(Fz + x̂),    ∇2f̃(z) = F^T ∇2f(Fz + x̂) F.

From the Hessian expression, we see that the Newton step for the equality constrained problem is defined, i.e., the KKT matrix

    [ ∇2f(x)  A^T ]
    [    A      0  ]

is invertible, if and only if the Newton step for the reduced problem is defined, i.e., ∇2f̃(z) is invertible. The Newton step for the reduced problem is

    ∆znt = −∇2f̃(z)^{−1} ∇f̃(z) = −(F^T ∇2f(x) F)^{−1} F^T ∇f(x),    (10.15)

where x = Fz + x̂. This search direction for the reduced problem corresponds to the direction

    F∆znt = −F (F^T ∇2f(x) F)^{−1} F^T ∇f(x)

for the original, equality constrained problem. We claim this is precisely the same as the Newton direction ∆xnt for the original problem, defined in (10.11). To show this, we take ∆xnt = F∆znt, choose

    w = −(AA^T)^{−1} A (∇f(x) + ∇2f(x)∆xnt),

and verify that the equations defining the Newton step,

    ∇2f(x)∆xnt + A^T w + ∇f(x) = 0,    A∆xnt = 0,    (10.16)

hold. The second equation, A∆xnt = 0, is satisfied because AF = 0. To verify the first equation, we observe that

    [ F^T ]                                    [ F^T ∇2f(x)∆xnt + F^T A^T w + F^T ∇f(x) ]
    [  A  ] (∇2f(x)∆xnt + A^T w + ∇f(x)) =     [ A∇2f(x)∆xnt + AA^T w + A∇f(x)         ] = 0.

Since the matrix on the left of the first line is nonsingular, we conclude that (10.16) holds. In a similar way, the Newton decrement λ̃(z) of f̃ at z and the Newton decrement of f at x turn out to be equal:

    λ̃(z)² = ∆znt^T ∇2f̃(z) ∆znt = ∆znt^T F^T ∇2f(x) F ∆znt = ∆xnt^T ∇2f(x) ∆xnt = λ(x)².

10.2.4 Convergence analysis

We saw above that applying Newton's method with equality constraints is exactly the same as applying Newton's method to the reduced problem obtained by eliminating the equality constraints. Everything we know about the convergence of Newton's method for unconstrained problems therefore transfers to Newton's method for equality constrained problems. In particular, the practical performance of Newton's method with equality constraints is exactly like the performance of Newton's method for unconstrained problems. Once x(k) is near x⋆, convergence is extremely rapid, with a very high accuracy obtained in only a few iterations.

Assumptions

We make the following assumptions.

• The sublevel set S = {x | x ∈ dom f, f(x) ≤ f(x(0)), Ax = b} is closed, where x(0) ∈ dom f satisfies Ax(0) = b. This is the case if f is closed (see §A.3.3).

• On the set S, we have ∇2f(x) ≼ MI, and

    ∥ [ ∇2f(x) A^T ; A 0 ]^{−1} ∥2 ≤ K,    (10.17)

i.e., the inverse of the KKT matrix is bounded on S. (Of course the inverse must exist in order for the Newton step to be defined at each point in S.)

• For x, x̃ ∈ S, ∇2f satisfies the Lipschitz condition ∥∇2f(x) − ∇2f(x̃)∥2 ≤ L∥x − x̃∥2.

Bounded inverse KKT matrix assumption

The condition (10.17) plays the role of the strong convexity assumption in the standard Newton method (§9.5.3, page 488).
When there are no equality constraints, (10.17) reduces to the condition ∥∇2f(x)^{−1}∥2 ≤ K on S, so we can take K = 1/m if ∇2f(x) ≽ mI on S, where m > 0. With equality constraints, the condition is not as simple as a positive lower bound on the minimum eigenvalue. Since the KKT matrix is symmetric, the condition (10.17) is that its eigenvalues, n of which are positive, and p of which are negative, are bounded away from zero.
Analysis via the eliminated problem
The assumptions above imply that the eliminated objective function f̃, together with the associated initial point z(0), where x(0) = x̂ + Fz(0), satisfy the assumptions required in the convergence analysis of Newton's method for unconstrained problems, given in §9.5.3 (with different constants m̃, M̃, and L̃). It follows that Newton's method with equality constraints converges to x⋆ (and ν⋆ as well).
To show that the assumptions above imply that the eliminated problem satisfies the assumptions for the unconstrained Newton method is mostly straightforward (see exercise 10.4). Here we show the one implication that is tricky: that the bounded inverse KKT condition, together with the upper bound ∇2f(x) ≼ MI, implies that ∇2f̃(z) ≽ mI for some positive constant m. More specifically, we will show that this inequality holds for

    m = σmin(F)² / (K²M),    (10.18)

which is positive, since F is full rank.

We show this by contradiction. Suppose that F^T H F ̸≽ mI, where H = ∇2f(x). Then we can find u, with ∥u∥2 = 1, such that u^T F^T H F u < m, i.e., ∥H^{1/2}Fu∥2 < m^{1/2}. Using AF = 0, we have

    [ H  A^T ] [ Fu ]   [ HFu ]
    [ A   0  ] [  0 ] = [  0  ],

and so

    ∥ [ H A^T ; A 0 ]^{−1} ∥2 ≥ ∥Fu∥2 / ∥HFu∥2.

Using ∥Fu∥2 ≥ σmin(F) and ∥HFu∥2 ≤ ∥H^{1/2}∥2 ∥H^{1/2}Fu∥2 < M^{1/2} m^{1/2}, we conclude

    ∥ [ H A^T ; A 0 ]^{−1} ∥2 ≥ ∥Fu∥2 / ∥HFu∥2 > σmin(F) / (M^{1/2} m^{1/2}) = K,

which contradicts (10.17), using our expression for m given in (10.18).
Convergence analysis for self-concordant functions
If f is self-concordant, then so is f̃(z) = f(Fz + x̂). It follows that if f is self-concordant, we have the exact same complexity estimate as for unconstrained problems: the number of iterations required to produce a solution within an accuracy ǫ is no more than

    ((20 − 8α) / (αβ(1 − 2α)²)) (f(x(0)) − p⋆) + log2 log2(1/ǫ),
where α and β are the backtracking parameters (see (9.56)).
10.3 Infeasible start Newton method
Newton’s method, as described in §10.2, is a feasible descent method. In this section we describe a generalization of Newton’s method that works with initial points, and iterates, that are not feasible.
10.3.1 Newton step at infeasible points
As in Newton’s method, we start with the optimality conditions for the equality constrained minimization problem:
    Ax⋆ = b,    ∇f(x⋆) + A^T ν⋆ = 0.
Let x denote the current point, which we do not assume to be feasible, but we do assume satisfies x ∈ dom f. Our goal is to find a step ∆x so that x + ∆x satisfies (at least approximately) the optimality conditions, i.e., x + ∆x ≈ x⋆. To do this
we substitute x + ∆x for x⋆ and w for ν⋆ in the optimality conditions, and use the first-order approximation
    ∇f(x + ∆x) ≈ ∇f(x) + ∇2f(x)∆x

for the gradient to obtain

    A(x + ∆x) = b,    ∇f(x) + ∇2f(x)∆x + A^T w = 0.

This is a set of linear equations for ∆x and w,
    [ ∇2f(x)  A^T ] [ ∆x ]     [ ∇f(x)  ]
    [    A      0  ] [ w  ] = − [ Ax − b ].    (10.19)
The equations are the same as the equations (10.11) that define the Newton step at a feasible point x, with one difference: the second block component of the righthand side contains Ax − b, which is the residual vector for the linear equality constraints. When x is feasible, the residual vanishes, and the equations (10.19) reduce to the equations (10.11) that define the standard Newton step at a feasible point x. Thus, if x is feasible, the step ∆x defined by (10.19) coincides with the Newton step described above (but defined only when x is feasible). For this reason we use the notation ∆xnt for the step ∆x defined by (10.19), and refer to it as the Newton step at x, with no confusion.
Interpretation as primal-dual Newton step
We can give an interpretation of the equations (10.19) in terms of a primal-dual method for the equality constrained problem. By a primal-dual method, we mean one in which we update both the primal variable x, and the dual variable ν, in order to (approximately) satisfy the optimality conditions.
We express the optimality conditions as r(x⋆, ν⋆) = 0, where r : Rn × Rp → Rn × Rp is defined as
    r(x, ν) = (rdual(x, ν), rpri(x, ν)).

Here

    rdual(x, ν) = ∇f(x) + A^T ν,    rpri(x, ν) = Ax − b

are the dual residual and primal residual, respectively. The first-order Taylor approximation of r, near our current estimate y, is
    r(y + z) ≈ r̂(y + z) = r(y) + Dr(y)z,
where Dr(y) ∈ R^{(n+p)×(n+p)} is the derivative of r, evaluated at y (see §A.4.1). We define the primal-dual Newton step ∆ypd as the step z for which the Taylor approximation r̂(y + z) vanishes, i.e.,
Dr(y)∆ypd = −r(y). (10.20)
Note that here we consider both x and ν as variables; ∆ypd = (∆xpd, ∆νpd) gives both a primal and a dual step.
Evaluating the derivative of r, we can express (10.20) as

    [ ∇2f(x)  A^T ] [ ∆xpd ]     [ rdual ]     [ ∇f(x) + A^T ν ]
    [    A      0  ] [ ∆νpd ] = − [ rpri  ] = − [    Ax − b     ].    (10.21)

Writing ν + ∆νpd as ν⁺, we can express this as

    [ ∇2f(x)  A^T ] [ ∆xpd ]     [ ∇f(x)  ]
    [    A      0  ] [  ν⁺  ] = − [ Ax − b ],    (10.22)
which is exactly the same set of equations as (10.19). The solutions of (10.19), (10.21), and (10.22) are therefore related as
    ∆xnt = ∆xpd,    w = ν⁺ = ν + ∆νpd.
This shows that the (infeasible) Newton step is the same as the primal part of the primal-dual step, and the associated dual vector w is the updated primal-dual variable ν+ = ν + ∆νpd.
The two expressions for the Newton step and dual variable (or dual step), given by (10.21) and (10.22), are of course equivalent, but each reveals a different feature of the Newton step. The equation (10.21) shows that the Newton step and the associated dual step are obtained by solving a set of equations, with the primal and dual residuals as the righthand side. The equation (10.22), which is how we originally defined the Newton step, gives the Newton step and the updated dual variable, and shows that the current value of the dual variable is not needed to compute the primal step, or the updated value of the dual variable.
Residual norm reduction property
The Newton direction, at an infeasible point, is not necessarily a descent direction for f. From (10.19), we note that

    (d/dt) f(x + t∆x) |_{t=0} = ∇f(x)^T ∆x
                              = −∆x^T (∇2f(x)∆x + A^T w)
                              = −∆x^T ∇2f(x)∆x + (Ax − b)^T w,

which is not necessarily negative (unless, of course, x is feasible, i.e., Ax = b). The primal-dual interpretation, however, shows that the norm of the residual decreases in the Newton direction, i.e.,

    (d/dt) ∥r(y + t∆ypd)∥2² |_{t=0} = 2 r(y)^T Dr(y)∆ypd = −2 r(y)^T r(y).

Taking the derivative of the square, we obtain

    (d/dt) ∥r(y + t∆ypd)∥2 |_{t=0} = −∥r(y)∥2.    (10.23)

This allows us to use ∥r∥2 to measure the progress of the infeasible start Newton method, for example, in the line search. (For the standard Newton method, we use the function value f to measure progress of the algorithm, at least until quadratic convergence is attained.)
Full step feasibility property

The Newton step ∆xnt defined by (10.19) has the property (by construction) that

    A(x + ∆xnt) = b.    (10.24)

It follows that, if a step length of one is taken using the Newton step ∆xnt, the following iterate will be feasible. Once x is feasible, the Newton step becomes a feasible direction, so all future iterates will be feasible, regardless of the step sizes taken.

More generally, we can analyze the effect of a damped step on the equality constraint residual rpri. With a step length t ∈ [0, 1], the next iterate is x⁺ = x + t∆xnt, so the equality constraint residual at the next iterate is

    rpri⁺ = A(x + t∆xnt) − b = (1 − t)(Ax − b) = (1 − t) rpri,

using (10.24). Thus, a damped step, with length t, causes the residual to be scaled down by a factor 1 − t. Now suppose that we have x(i+1) = x(i) + t(i)∆xnt(i), for i = 0, ..., k − 1, where ∆xnt(i) is the Newton step at the point x(i) ∈ dom f, and t(i) ∈ [0, 1]. Then we have

    r(k) = (∏_{i=0}^{k−1} (1 − t(i))) r(0),

where r(i) = Ax(i) − b is the residual of x(i). This formula shows that the primal residual at each step is in the direction of the initial primal residual, and is scaled down at each step. It also shows that once a full step is taken, all future iterates are primal feasible.

10.3.2 Infeasible start Newton method

We can develop an extension of Newton's method, using the Newton step ∆xnt defined by (10.19), with x(0) ∈ dom f, but not necessarily satisfying Ax(0) = b. We also use the dual part of the Newton step: ∆νnt = w − ν in the notation of (10.19), or equivalently, ∆νnt = ∆νpd in the notation of (10.21).

Algorithm 10.2 Infeasible start Newton method.

given starting point x ∈ dom f, ν, tolerance ǫ > 0, α ∈ (0, 1/2), β ∈ (0, 1).

repeat
1. Compute primal and dual Newton steps ∆xnt, ∆νnt.
2. Backtracking line search on ∥r∥2.
   t := 1.
   while ∥r(x + t∆xnt, ν + t∆νnt)∥2 > (1 − αt)∥r(x, ν)∥2,  t := βt.
3. Update. x := x + t∆xnt,  ν := ν + t∆νnt.
until Ax = b and ∥r(x, ν)∥2 ≤ ǫ.
This algorithm is very similar to the standard Newton method with feasible starting point, with a few exceptions. First, the search directions include the extra correction terms that depend on the primal residual. Second, the line search is carried out using the norm of the residual, instead of the function value f. Finally, the algorithm terminates when primal feasibility has been achieved, and the norm of the (dual) residual is small.
The line search in step 2 deserves some comment. Using the norm of the residual in the line search can increase the cost, compared to a line search based on the function value, but the increase is usually negligible. Also, we note that the line search must terminate in a finite number of steps, since (10.23) shows that the line search exit condition is satisfied for small t.
The equation (10.24) shows that if at some iteration the step length is chosen to be one, the next iterate will be feasible. Thereafter, all iterates will be feasible, and therefore the search direction for the infeasible start Newton method coincides, once a feasible iterate is obtained, with the search direction for the (feasible) Newton method described in §10.2.
There are many variations on the infeasible start Newton method. For example, we can switch to the (feasible) Newton method described in §10.2 once feasibility is achieved. (In other words, we change the line search to one based on f, and terminate when λ(x)2/2 ≤ ǫ.) Once feasibility is achieved, the infeasible start and the standard (feasible) Newton method differ only in the backtracking and exit conditions, and have very similar performance.
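For concreteness, here is a compact NumPy sketch of algorithm 10.2 (our illustration; for simplicity it assumes grad and hess are finite at the points the line search visits, or that grad returns +inf entries outside dom f so that the residual norm comparison rejects such points):

    import numpy as np

    def newton_infeasible(grad, hess, A, b, x0, nu0, eps=1e-10,
                          alpha=0.1, beta=0.5, iters=100):
        """Infeasible start Newton method (algorithm 10.2), a sketch.
        x0 need only lie in dom f; A x0 = b is not required."""
        p, n = A.shape
        x, nu = x0.copy(), nu0.copy()

        def resid(x, nu):
            return np.concatenate([grad(x) + A.T @ nu, A @ x - b])

        for _ in range(iters):
            r = resid(x, nu)
            K = np.block([[hess(x), A.T], [A, np.zeros((p, p))]])
            step = np.linalg.solve(K, -r)         # (dx_nt, dnu_nt) from (10.19)
            dx, dnu = step[:n], step[n:]
            t = 1.0                               # backtrack on ||r||_2
            while np.linalg.norm(resid(x + t * dx, nu + t * dnu)) > \
                    (1 - alpha * t) * np.linalg.norm(r):
                t *= beta
            x, nu = x + t * dx, nu + t * dnu
            if np.allclose(A @ x, b) and np.linalg.norm(resid(x, nu)) <= eps:
                return x, nu
        return x, nu

Once a full step t = 1 is accepted, A x = b holds (up to roundoff) and remains so for all later iterates, as shown by (10.24).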
Using infeasible start Newton method to simplify initialization
The main advantage of the infeasible start Newton method is in the initialization required. If domf = Rn, then initializing the (feasible) Newton method simply requires computing a solution to Ax = b, and there is no particular advantage, other than convenience, in using the infeasible start Newton method.
When dom f is not all of R^n, finding a point in dom f that satisfies Ax = b can itself be a challenge. One general approach, probably the best when dom f is complex and not known to intersect {z | Az = b}, is to use a phase I method (described in §11.4) to compute such a point (or verify that dom f does not intersect {z | Az = b}). But when dom f is relatively simple, and known to contain a point satisfying Ax = b, the infeasible start Newton method gives a simple alternative.
One common example occurs when dom f = R^n_{++}, as in the equality constrained analytic centering problem described in example 10.2. To initialize Newton's method for the problem

    minimize  −∑_{i=1}^n log x_i
    subject to Ax = b,                    (10.25)

we must find a point x(0) ≻ 0 with Ax(0) = b, which is equivalent to solving a standard form LP feasibility problem. This can be carried out using a phase I method, or alternatively, using the infeasible start Newton method, with any positive initial point, e.g., x(0) = 1.
The same trick can be used to initialize unconstrained problems where a starting point in dom f is not known. As an example, we consider the dual of the equality
constrained analytic centering problem (10.25),
    maximize  g(ν) = −b^T ν + n + ∑_{i=1}^n log(A^T ν)_i.
To initialize this problem for the (feasible start) Newton method, we must find a point ν(0) that satisfies A^T ν(0) ≻ 0, i.e., we must solve a set of linear inequalities. This can be done using a phase I method, or using an infeasible start Newton method, after reformulating the problem. We first express it as an equality constrained problem,

    maximize  −b^T ν + n + ∑_{i=1}^n log y_i
    subject to y = A^T ν,
with new variable y ∈ Rn. We can now use the infeasible start Newton method, starting with any positive y(0) (and any ν(0)).
The disadvantage of using the infeasible start Newton method to initialize problems for which a strictly feasible starting point is not known is that there is no clear way to detect that there exists no strictly feasible point; the norm of the residual will simply converge, slowly, to some positive value. (Phase I methods, in contrast, can determine this fact unambiguously.) In addition, the convergence of the infeasible start Newton method, before feasibility is achieved, can be slow; see §11.4.2.
10.3.3 Convergence analysis
In this section we show that the infeasible start Newton method converges to the optimal point, provided certain assumptions hold. The convergence proof is very similar to those for the standard Newton method, or the standard Newton method with equality constraints. We show that once the norm of the residual is small enough, the algorithm takes full steps (which implies that feasibility is achieved), and convergence is subsequently quadratic. We also show that the norm of the residual is reduced by at least a fixed amount in each iteration before the region of quadratic convergence is reached. Since the norm of the residual cannot be negative, this shows that within a finite number of steps, the residual will be small enough to guarantee full steps, and quadratic convergence.
Assumptions
We make the following assumptions.

• The sublevel set

    S = {(x, ν) | x ∈ dom f, ∥r(x, ν)∥2 ≤ ∥r(x(0), ν(0))∥2}    (10.26)

is closed. If f is closed, then ∥r∥2 is a closed function, and therefore this condition is satisfied for any x(0) ∈ dom f and any ν(0) ∈ R^p (see exercise 10.7).

• On the set S, we have
    ∥Dr(x, ν)^{−1}∥2 = ∥ [ ∇2f(x) A^T ; A 0 ]^{−1} ∥2 ≤ K,    (10.27)
for some K.
• For (x, ν), (x̃, ν̃) ∈ S, Dr satisfies the Lipschitz condition

    ∥Dr(x, ν) − Dr(x̃, ν̃)∥2 ≤ L∥(x, ν) − (x̃, ν̃)∥2.

(This is equivalent to ∇2f(x) satisfying a Lipschitz condition; see exercise 10.7.)
As we will see below, these assumptions imply that dom f and {z | Az = b} intersect, and that there is an optimal point (x⋆, ν⋆).

Comparison with standard Newton method
The assumptions above are very similar to the ones made in §10.2.4 (page 529) for the analysis of the standard Newton method. The second and third assumptions, the bounded inverse KKT matrix and Lipschitz condition, are essentially the same. The sublevel set condition (10.26) for the infeasible start Newton method is, however, more general than the sublevel set condition made in §10.2.4. As an example, consider the equality constrained maximum entropy problem

    minimize  f(x) = ∑_{i=1}^n x_i log x_i
    subject to Ax = b,
with dom f = Rn++. The objective f is not closed; it has sublevel sets that are not closed, so the assumptions made in the standard Newton method may not hold, at least for some initial points. The problem here is that the negative entropy function does not converge to ∞ as xi → 0. On the other hand the sublevel set condition (10.26) for the infeasible start Newton method does hold for this problem, since the norm of the gradient of the negative entropy function does converge to ∞ as xi → 0. Thus, the infeasible start Newton method is guaranteed to solve the equality constrained maximum entropy problem. (We do not know whether the standard Newton method can fail for this problem; we are only observing here that our convergence analysis does not hold.) Note that if the initial point satisfies the equality constraints, the only difference between the standard and infeasible start Newton methods is in the line searches, which differ only during the damped stage.
A basic inequality
We start by deriving a basic inequality. Let y = (x, ν) ∈ S with ∥r(y)∥2 ̸= 0, and let ∆ynt = (∆xnt, ∆νnt) be the Newton step at y. Define
    tmax = inf{t > 0 | y + t∆ynt ∉ S}.
If y + t∆ynt ∈ S for all t ≥ 0, we follow the usual convention and define tmax = ∞. Otherwise, tmax is the smallest positive value of t such that ∥r(y + t∆ynt)∥2 = ∥r(y(0))∥2. In particular, it follows that y + t∆ynt ∈ S for 0 ≤ t ≤ tmax.
We will show that
    ∥r(y + t∆ynt)∥2 ≤ (1 − t)∥r(y)∥2 + (K²L/2) t² ∥r(y)∥2²    (10.28)
for 0 ≤ t ≤ min{1, tmax}. We have
    r(y + t∆ynt) = r(y) + ∫₀¹ Dr(y + τt∆ynt) t∆ynt dτ
                 = r(y) + tDr(y)∆ynt + ∫₀¹ (Dr(y + τt∆ynt) − Dr(y)) t∆ynt dτ
                 = r(y) + tDr(y)∆ynt + e
                 = (1 − t) r(y) + e,

using Dr(y)∆ynt = −r(y), and defining

    e = ∫₀¹ (Dr(y + τt∆ynt) − Dr(y)) t∆ynt dτ.

Now suppose 0 ≤ t ≤ tmax, so y + τt∆ynt ∈ S for 0 ≤ τ ≤ 1. We can bound ∥e∥2 as follows:

    ∥e∥2 ≤ ∥t∆ynt∥2 ∫₀¹ ∥Dr(y + τt∆ynt) − Dr(y)∥2 dτ
         ≤ ∥t∆ynt∥2 ∫₀¹ L ∥τt∆ynt∥2 dτ
         = (L/2) t² ∥∆ynt∥2²
         = (L/2) t² ∥Dr(y)^{−1} r(y)∥2²
         ≤ (K²L/2) t² ∥r(y)∥2²,

using the Lipschitz condition on the second line, and the bound ∥Dr(y)^{−1}∥2 ≤ K on the last. Now we can derive the bound (10.28): for 0 ≤ t ≤ min{1, tmax},

    ∥r(y + t∆ynt)∥2 = ∥(1 − t) r(y) + e∥2
                    ≤ (1 − t)∥r(y)∥2 + ∥e∥2
                    ≤ (1 − t)∥r(y)∥2 + (K²L/2) t² ∥r(y)∥2².
Damped Newton phase
We first show that if ∥r(y)∥2 > 1/(K²L), one iteration of the infeasible start Newton method reduces ∥r∥2 by at least a certain minimum amount.

The righthand side of the basic inequality (10.28) is quadratic in t, and monotonically decreasing between t = 0 and its minimizer

    t̄ = 1/(K²L∥r(y)∥2) < 1.

We must have tmax > t̄, because the opposite would imply ∥r(y + tmax∆ynt)∥2 < ∥r(y)∥2, which is false. The basic inequality is therefore valid at t = t̄, and therefore

    ∥r(y + t̄∆ynt)∥2 ≤ ∥r(y)∥2 − 1/(2K²L)
                    ≤ ∥r(y)∥2 − α/(K²L)
                    = (1 − αt̄)∥r(y)∥2,

which shows that the step length t̄ satisfies the line search exit condition. Therefore we have t ≥ βt̄, where t is the step length chosen by the backtracking algorithm. From t ≥ βt̄ we have (from the exit condition in the backtracking line search)

    ∥r(y + t∆ynt)∥2 ≤ (1 − αt)∥r(y)∥2
                    ≤ (1 − αβt̄)∥r(y)∥2
                    = (1 − αβ/(K²L∥r(y)∥2)) ∥r(y)∥2
                    = ∥r(y)∥2 − αβ/(K²L).

Thus, as long as we have ∥r(y)∥2 > 1/(K²L), we obtain a minimum decrease in ∥r∥2, per iteration, of αβ/(K²L). It follows that a maximum of

    ∥r(y(0))∥2 K²L / (αβ)

iterations can be taken before we have ∥r(y(k))∥2 ≤ 1/(K²L).
Quadratically convergent phase
Now suppose ∥r(y)∥2 ≤ 1/(K²L). The basic inequality gives

    ∥r(y + t∆ynt)∥2 ≤ (1 − t + (1/2)t²)∥r(y)∥2    (10.29)

for 0 ≤ t ≤ min{1, tmax}. We must have tmax > 1, because otherwise it would follow from (10.29) that ∥r(y + tmax∆ynt)∥2 < ∥r(y)∥2, which contradicts the definition of tmax. The inequality (10.29) therefore holds with t = 1, i.e., we have

    ∥r(y + ∆ynt)∥2 ≤ (1/2)∥r(y)∥2 ≤ (1 − α)∥r(y)∥2.

This shows that the backtracking line search exit criterion is satisfied for t = 1, so a full step will be taken. Moreover, for all future iterations we have ∥r(y)∥2 ≤ 1/(K²L), so a full step will be taken for all following iterations. We can write the inequality (10.28) (for t = 1) as

    K²L∥r(y⁺)∥2 / 2 ≤ (K²L∥r(y)∥2 / 2)²,

where y⁺ = y + ∆ynt. Therefore, if r(y⁺ᵏ) denotes the residual k steps after an iteration in which ∥r(y)∥2 ≤ 1/(K²L), we have

    K²L∥r(y⁺ᵏ)∥2 / 2 ≤ (K²L∥r(y)∥2 / 2)^{2^k} ≤ (1/2)^{2^k},

i.e., we have quadratic convergence of ∥r(y)∥2 to zero.

To show that the sequence of iterates converges, we will show that it is a Cauchy sequence. Suppose y is an iterate satisfying ∥r(y)∥2 ≤ 1/(K²L), and y⁺ᵏ denotes the kth iterate after y. Since these iterates are in the region of quadratic convergence, the step size is one, so we have

    ∥y⁺ᵏ − y∥2 ≤ ∥y⁺ᵏ − y⁺⁽ᵏ⁻¹⁾∥2 + ··· + ∥y⁺ − y∥2
               = ∥Dr(y⁺⁽ᵏ⁻¹⁾)^{−1} r(y⁺⁽ᵏ⁻¹⁾)∥2 + ··· + ∥Dr(y)^{−1} r(y)∥2
               ≤ K (∥r(y⁺⁽ᵏ⁻¹⁾)∥2 + ··· + ∥r(y)∥2)
               ≤ K∥r(y)∥2 ∑_{i=0}^{k−1} (K²L∥r(y)∥2 / 2)^{2^i − 1}
               ≤ K∥r(y)∥2 ∑_{i=0}^{k−1} (1/2)^{2^i − 1}
               ≤ 2K∥r(y)∥2,

where in the third line we use the assumption that ∥Dr^{−1}∥2 ≤ K for all iterates. Since ∥r(y(k))∥2 converges to zero, we conclude y(k) is a Cauchy sequence, and therefore converges. By continuity of r, the limit point y⋆ satisfies r(y⋆) = 0. This establishes our earlier claim that the assumptions at the beginning of this section imply that there is an optimal point (x⋆, ν⋆).

10.3.4 Convex-concave games

The proof of convergence for the infeasible start Newton method reveals that the method can be used for a larger class of problems than equality constrained convex optimization problems. Suppose r : R^n → R^n is differentiable, its derivative satisfies a Lipschitz condition on S, and ∥Dr(x)^{−1}∥2 is bounded on S, where

    S = {x ∈ dom r | ∥r(x)∥2 ≤ ∥r(x(0))∥2}

is a closed set. Then the infeasible start Newton method, started at x(0), converges to a solution of r(x) = 0 in S. In the infeasible start Newton method, we apply this to the specific case in which r is the residual for the equality constrained convex optimization problem. But it applies in several other interesting cases. One interesting example is solving a convex-concave game. (See §5.4.3 and exercise 5.25 for discussion of other, related games.)

An unconstrained (zero-sum, two-player) game on R^p × R^q is defined by its payoff function f : R^{p+q} → R. The meaning is that player 1 chooses a value (or move) u ∈ R^p, and player 2 chooses a value (or move) v ∈ R^q; based on these choices, player 1 makes a payment to player 2, in the amount f(u, v). The goal of player 1 is to minimize this payment, while the goal of player 2 is to maximize it.

If player 1 makes his choice u first, and player 2 knows the choice, then player 2 will choose v to maximize f(u, v), which results in a payoff of sup_v f(u, v) (assuming the supremum is achieved). If player 1 assumes that player 2 will make this choice, he should choose u to minimize sup_v f(u, v). The resulting payoff, from player 1 to player 2, will then be

    inf_u sup_v f(u, v)    (10.30)

(assuming that the supremum is achieved).
On the other hand, if player 2 makes the first choice, the strategies are reversed, and the resulting payoff from player 1 to player 2 is

    sup_v inf_u f(u, v).    (10.31)

The payoff (10.30) is always greater than or equal to the payoff (10.31); the difference between the two payoffs can be interpreted as the advantage afforded the player who makes the second move, with knowledge of the other player's move. We say that (u⋆, v⋆) is a solution of the game, or a saddle-point for the game, if for all u, v,

    f(u⋆, v) ≤ f(u⋆, v⋆) ≤ f(u, v⋆).

When a solution exists, there is no advantage to making the second move; f(u⋆, v⋆) is the common value of both payoffs (10.30) and (10.31). (See exercise 3.14.)

The game is called convex-concave if for each v, f(u, v) is a convex function of u, and for each u, f(u, v) is a concave function of v. When f is differentiable (and convex-concave), a saddle-point for the game is characterized by ∇f(u⋆, v⋆) = 0.

Solution via infeasible start Newton method

We can use the infeasible start Newton method to compute a solution of a convex-concave game with twice differentiable payoff function. We define the residual as

    r(u, v) = ∇f(u, v) = (∇_u f(u, v), ∇_v f(u, v)),

and apply the infeasible start Newton method. In the context of games, the infeasible start Newton method is simply called Newton's method (for convex-concave games).

We can guarantee convergence of the (infeasible start) Newton method provided Dr = ∇2f has bounded inverse, and satisfies a Lipschitz condition on the sublevel set

    S = {(u, v) ∈ dom f | ∥r(u, v)∥2 ≤ ∥r(u(0), v(0))∥2},

where u(0), v(0) are the starting players' choices.

There is a simple analog of the strong convexity condition in an unconstrained minimization problem. We say the game with payoff function f is strongly convex-concave if for some m > 0, we have ∇2_{uu} f(u, v) ≽ mI and ∇2_{vv} f(u, v) ≼ −mI, for all (u, v) ∈ S. Not surprisingly, this strong convex-concave assumption implies the bounded inverse condition (exercise 10.10).
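Since the residual here is just r = ∇f and Dr = ∇2f, Newton's method for a convex-concave game is ordinary root-finding with the residual-norm backtracking of algorithm 10.2. A minimal sketch (ours; the callbacks operate on the stacked variable y = (u, v) and are assumed finite at the points the line search visits):

    import numpy as np

    def game_newton(grad_f, hess_f, u0, v0, eps=1e-10, alpha=0.01, beta=0.5):
        """Newton's method for a convex-concave game: drive
        r(u, v) = grad f(u, v) to zero."""
        y = np.concatenate([u0, v0])
        r = grad_f(y)
        while np.linalg.norm(r) > eps:
            dy = np.linalg.solve(hess_f(y), -r)   # Dr(y) dy = -r(y)
            t = 1.0                               # backtrack on ||r||_2
            while np.linalg.norm(grad_f(y + t * dy)) > \
                    (1 - alpha * t) * np.linalg.norm(r):
                t *= beta
            y = y + t * dy
            r = grad_f(y)
        return y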
10.3.5 Examples
A simple example
We illustrate the infeasible start Newton method on the equality constrained analytic center problem (10.25). Our first example is an instance with dimensions n = 100 and m = 50, generated randomly, for which the problem is feasible and bounded below. The infeasible start Newton method is used, with initial primal
and dual points x(0) = 1, ν(0) = 0, and backtracking parameters α = 0.01 and β = 0.5. The plot in figure 10.1 shows the norms of the primal and dual residuals separately, versus iteration number, and the plot in figure 10.2 shows the step lengths. A full Newton step is taken in iteration 8, so the primal residual becomes (almost) zero, and remains (almost) zero. After around iteration 9 or so, the (dual) residual converges quadratically to zero.
An infeasible example
We also consider a problem instance, of the same dimensions as the example above, for which domf does not intersect {z | Az = b}, i.e., the problem is infeasible. (This violates the basic assumption in the chapter that problem (10.1) is solvable, as well as the assumptions made in §10.2.4; the example is meant only to show what happens to the infeasible start Newton method when domf does not intersect {z | Az = b}.) The norm of the residual for this example is shown in figure 10.3, and the step length in figure 10.4. Here, of course, the step lengths are never one, and the residual does not converge to zero.
A convex-concave game
Our final example involves a convex-concave game on R100 × R100, with payoff function
    f(u, v) = u^T A v + b^T u + c^T v − log(1 − u^T u) + log(1 − v^T v),    (10.32)

defined on

    dom f = {(u, v) | u^T u < 1, v^T v < 1}.

The problem data A, b, and c were randomly generated. The progress of the (infeasible start) Newton method, started at u(0) = v(0) = 0, with backtracking parameters α = 0.01 and β = 0.5, is shown in figure 10.5.

10.4 Implementation

10.4.1 Elimination

To implement the elimination method, we have to calculate a full rank matrix F and an x̂ such that

    {x | Ax = b} = {Fz + x̂ | z ∈ R^{n−p}}.

Several methods for this are described in §C.5.

10.4.2 Solving KKT systems

In this section we describe methods that can be used to compute the Newton step or infeasible Newton step, both of which involve solving a set of linear equations with KKT form

    [ H  A^T ] [ v ]     [ g ]
    [ A   0  ] [ w ] = − [ h ].    (10.33)

Here we assume H ∈ S^n_+, and A ∈ R^{p×n} with rank A = p < n. Similar methods can be used to compute the Newton step for a convex-concave game, in which the bottom right entry of the coefficient matrix is negative semidefinite (see exercise 10.13).

Figure 10.1 Progress of infeasible start Newton method on an equality constrained analytic centering problem with 100 variables and 50 constraints. The figure shows ∥rpri∥2 (solid line) and ∥rdual∥2 (dashed line). Note that feasibility is achieved (and maintained) after 8 iterations, and convergence is quadratic, starting from iteration 9 or so.

Figure 10.2 Step length versus iteration number for the same example problem. A full step is taken in iteration 8, which results in feasibility from iteration 8 on.

Figure 10.3 Progress of infeasible start Newton method on an equality constrained analytic centering problem with 100 variables and 50 constraints, for which dom f = R^100_{++} does not intersect {z | Az = b}. The figure shows ∥rpri∥2 (solid line) and ∥rdual∥2 (dashed line). In this case, the residuals do not converge to zero.

Figure 10.4 Step length versus iteration number for the infeasible example problem. No full steps are taken, and the step lengths converge to zero.

Figure 10.5 Progress of (infeasible start) Newton method on a convex-concave game. Quadratic convergence becomes apparent after about 5 iterations.

Solving full KKT system

One straightforward approach is to simply solve the KKT system (10.33), which is a set of n + p linear equations in n + p variables. The KKT matrix is symmetric, but not positive definite, so a good way to do this is to use an LDL^T factorization (see §C.3.3). If no structure of the matrix is exploited, the cost is (1/3)(n + p)³ flops. This can be a reasonable approach when the problem is small (i.e., n and p are not too large), or when A and H are sparse.

Solving KKT system via elimination

A method that is often better than directly solving the full KKT system is based on eliminating the variable v (see §C.4). We start by describing the simplest case, in which H ≻ 0. The KKT equations are

    Hv + A^T w = −g,    Av = −h.

Starting from the first equation, we solve for v to obtain

    v = −H^{−1}(g + A^T w).

Substituting this into the second KKT equation yields AH^{−1}(g + A^T w) = h, so we have

    w = (AH^{−1}A^T)^{−1}(h − AH^{−1}g).

These formulas give us a method for computing v and w.
The matrix appearing in the formula for w is the Schur complement S of H in the KKT matrix:

    S = −AH^{−1}A^T.

Because of the special structure of the KKT matrix, and our assumption that A has rank p, the matrix S is negative definite.

Algorithm 10.3 Solving KKT system by block elimination.

given KKT system with H ≻ 0.

1. Form H^{−1}A^T and H^{−1}g.
2. Form Schur complement S = −AH^{−1}A^T.
3. Determine w by solving Sw = AH^{−1}g − h.
4. Determine v by solving Hv = −A^T w − g.

Step 1 can be done by a Cholesky factorization of H, followed by p + 1 solves, which costs f + (p + 1)s, where f is the cost of factoring H and s is the cost of an associated solve. Step 2 requires a p × n by n × p matrix multiplication. If we exploit no structure in this calculation, the cost is p²n flops. (Since the result is symmetric, we only need to compute the upper triangular part of S.) In some cases special structure in A and H can be exploited to carry out step 2 more efficiently. Step 3 can be carried out by Cholesky factorization of −S, which costs (1/3)p³ flops if no further structure of S is exploited. Step 4 can be carried out using the factorization of H already calculated in step 1, so the cost is 2np + s flops. The total flop count, assuming that no structure is exploited in forming or factoring the Schur complement, is

    f + ps + p²n + (1/3)p³

flops (keeping only dominant terms). If we exploit structure in forming or factoring S, the last two terms are even smaller.

If H can be factored efficiently, then block elimination gives us a flop count advantage over directly solving the KKT system using an LDL^T factorization. For example, if H is diagonal (which corresponds to a separable objective function), we have f = 0 and s = n, so the total cost is p²n + (1/3)p³ flops, which grows only linearly with n. If H is banded with bandwidth k ≪ n, then f = nk², s = 4nk, so the total cost is around

    nk² + 4nkp + p²n + (1/3)p³,

which still grows only linearly with n. Other structures of H that can be exploited are block diagonal (which corresponds to a block separable objective function), sparse, or diagonal plus low rank; see appendix C and §9.7 for more details and examples.

Example 10.3 Equality constrained analytic center. We consider the problem

    minimize  −∑_{i=1}^n log x_i
    subject to Ax = b.

Here the objective is separable, so the Hessian at x is diagonal:

    H = diag(x_1^{−2}, ..., x_n^{−2}).

If we compute the Newton direction using a generic method such as an LDL^T factorization of the KKT matrix, the cost is (1/3)(n + p)³ flops. If we compute the Newton step using block elimination, the cost is np² + (1/3)p³ flops. This is much smaller than the cost of the generic method.

In fact this cost is the same as that of computing the Newton step for the dual problem, described in example 10.2 on page 525. For the (unconstrained) dual problem, the Hessian is

    Hdual = −ADA^T,

where D is diagonal, with D_ii = (A^T ν)_i^{−2}. Forming this matrix costs np² flops, and solving for the Newton step by a Cholesky factorization of −Hdual costs (1/3)p³ flops.

Example 10.4 Minimum length piecewise-linear curve subject to equality constraints. We consider a piecewise-linear curve in R² with knot points (0, 0), (1, x_1), ..., (n, x_n). To find the minimum length curve that satisfies the equality constraints Ax = b, we form the problem

    minimize  (1 + x_1²)^{1/2} + ∑_{i=1}^{n−1} (1 + (x_{i+1} − x_i)²)^{1/2}
    subject to Ax = b,

with variable x ∈ R^n, and A ∈ R^{p×n}. In this problem, the objective is a sum of functions of pairs of adjacent variables, so the Hessian H is tridiagonal.
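Algorithm 10.3 can be transcribed directly; the following SciPy sketch (our illustration) uses Cholesky factorizations of H and −S, as the flop counts above assume:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def kkt_block_elim(H, A, g, h):
        """Solve [H A^T; A 0][v; w] = -[g; h] with H > 0 by block
        elimination (algorithm 10.3)."""
        Hf = cho_factor(H)                        # step 1: factor H
        HiAT = cho_solve(Hf, A.T)                 # H^{-1} A^T
        Hig = cho_solve(Hf, g)                    # H^{-1} g
        S = -A @ HiAT                             # step 2: S = -A H^{-1} A^T
        Sf = cho_factor(-S)                       # -S is positive definite
        w = -cho_solve(Sf, A @ Hig - h)           # step 3: S w = A H^{-1} g - h
        v = -cho_solve(Hf, A.T @ w + g)           # step 4: H v = -A^T w - g
        return v, w

For diagonal H, step 1 reduces to elementwise division and the p²n term for forming S dominates, matching the count in example 10.3.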
Using block elimination, we can compute the Newton step in around p²n + (1/3)p³ flops.

Elimination with singular H

The block elimination method described above obviously does not work when H is singular, but a simple variation on the method can be used in this more general case. The more general method is based on the following result: the KKT matrix is nonsingular if and only if H + A^T QA ≻ 0 for some Q ≽ 0, in which case H + A^T QA ≻ 0 for all Q ≻ 0. (See exercise 10.1.) We conclude, for example, that if the KKT matrix is nonsingular, then H + A^T A ≻ 0.

Let Q ≽ 0 be a matrix for which H + A^T QA ≻ 0. Then the KKT system (10.33) is equivalent to

    [ H + A^T QA  A^T ] [ v ]     [ g + A^T Qh ]
    [      A       0  ] [ w ] = − [      h     ],

which can be solved using elimination since H + A^T QA ≻ 0.

10.4.3 Examples

In this section we describe some longer examples, showing how structure can be exploited to efficiently compute the Newton step. We also include some numerical results.

Equality constrained analytic centering

We consider the equality constrained analytic centering problem

    minimize  f(x) = −∑_{i=1}^n log x_i
    subject to Ax = b.

(See examples 10.2 and 10.3.) We compare three methods, for a problem of size p = 100, n = 500.

The first method is Newton's method with equality constraints (§10.2). The Newton step ∆xnt is defined by the KKT system (10.11):

    [ H  A^T ] [ ∆xnt ]   [ −g ]
    [ A   0  ] [  w   ] = [  0 ],

where H = diag(1/x_1², ..., 1/x_n²), and g = −(1/x_1, ..., 1/x_n). As explained in example 10.3, page 546, the KKT system can be efficiently solved by elimination, i.e., by solving AH^{−1}A^T w = −AH^{−1}g, and setting ∆xnt = −H^{−1}(A^T w + g). In other words,

    ∆xnt = −diag(x)² A^T w + x,

where w is the solution of

    A diag(x)² A^T w = b.    (10.34)

Figure 10.6 shows the error versus iteration. The different curves correspond to four different starting points. We use a backtracking line search with α = 0.1, β = 0.5.

Figure 10.6 Error f(x(k)) − p⋆ in Newton's method, applied to an equality constrained analytic centering problem of size p = 100, n = 500. The different curves correspond to four different starting points. Final quadratic convergence is clearly evident.

The second method is Newton's method applied to the dual

    maximize  g(ν) = −b^T ν + ∑_{i=1}^n log(A^T ν)_i + n

(see example 10.2, page 525). Here the Newton step is obtained from solving

    A diag(y)² A^T ∆νnt = −b + Ay,    (10.35)

where y = (1/(A^T ν)_1, ..., 1/(A^T ν)_n). Comparing (10.35) and (10.34) we see that both methods have the same complexity. In figure 10.7 we show the error for four different starting points. We use a backtracking line search with α = 0.1, β = 0.5.

Figure 10.7 Error |g(ν(k)) − p⋆| in Newton's method, applied to the dual of the equality constrained analytic centering problem.

The third method is the infeasible start Newton method of §10.3, applied to the optimality conditions

    ∇f(x⋆) + A^T ν⋆ = 0,    Ax⋆ = b.

The Newton step is obtained by solving

    [ H  A^T ] [ ∆xnt ]     [ g + A^T ν ]
    [ A   0  ] [ ∆νnt ] = − [  Ax − b   ],

where H = diag(1/x_1², ..., 1/x_n²), and g = −(1/x_1, ..., 1/x_n). This KKT system can be efficiently solved by elimination, at the same cost as (10.34) or (10.35).

Figure 10.8 Residual ∥r(x(k), ν(k))∥2 in the infeasible start Newton method, applied to the equality constrained analytic centering problem.
For example, if we first solve

    A diag(x)² A^T w = 2Ax − b,

then ∆νnt and ∆xnt follow from

    ∆νnt = w − ν,    ∆xnt = x − diag(x)² A^T w.

Figure 10.8 shows the norm of the residual

    r(x, ν) = (∇f(x) + A^T ν, Ax − b)

versus iteration, for four different starting points. We use a backtracking line search with α = 0.1, β = 0.5.

The figures show that for this problem, the dual method appears to be faster, but only by a factor of two or three. It takes about six iterations to reach the region of quadratic convergence, as opposed to 12–15 in the primal method and 10–20 in the infeasible start Newton method.

The methods also differ in the initialization they require. The primal method requires knowledge of a primal feasible point, i.e., one satisfying Ax(0) = b, x(0) ≻ 0. The dual method requires a dual feasible point, i.e., A^T ν(0) ≻ 0. Depending on the problem, one or the other might be more readily available. The infeasible start Newton method requires no initialization; the only requirement is that x(0) ≻ 0.

Optimal network flow

We consider a connected directed graph or network with n edges and p + 1 nodes. We let x_j denote the flow or traffic on arc j, with x_j > 0 meaning flow in the
Optimal network flow

We consider a connected directed graph or network with n edges and p + 1 nodes. We let x_j denote the flow or traffic on arc j, with x_j > 0 meaning flow in the direction of the arc, and x_j < 0 meaning flow in the direction opposite the arc. There is also a given external source (or sink) flow s_i that enters (if s_i > 0) or leaves (if s_i < 0) node i. The flow must satisfy a conservation equation, which states that at each node, the total flow entering the node, including the external sources and sinks, is zero. This conservation equation can be expressed as Ãx = s, where Ã ∈ R^{(p+1)×n} is the node incidence matrix of the graph,

\[
\tilde{A}_{ij} = \begin{cases} 1 & \text{arc } j \text{ leaves node } i \\ -1 & \text{arc } j \text{ enters node } i \\ 0 & \text{otherwise.} \end{cases}
\]

The flow conservation equation Ãx = s is inconsistent unless 1^T s = 0, which we assume is the case. (In other words, the total of the source flows must equal the total of the sink flows.) The flow conservation equations Ãx = s are also redundant, since 1^T Ã = 0. To obtain an independent set of equations we can delete any one equation, to obtain Ax = b, where A ∈ R^{p×n} is the reduced node incidence matrix of the graph (i.e., the node incidence matrix with one row removed) and b ∈ R^p is the reduced source vector (i.e., s with the associated entry removed).

In summary, flow conservation is given by Ax = b, where A is the reduced node incidence matrix of the graph and b is the reduced source vector. The matrix A is very sparse, since each column has at most two nonzero entries (which can only be +1 or −1).

We will take the traffic flows x as the variables, and the sources as given. We introduce the objective function

f(x) = Σ_{i=1}^n φ_i(x_i),

where φ_i : R → R is the flow cost function for arc i. We assume that the flow cost functions are strictly convex and twice differentiable. The problem of choosing the best flow that satisfies the flow conservation requirement is then

minimize Σ_{i=1}^n φ_i(x_i)
subject to Ax = b. (10.36)

Here the Hessian H is diagonal, since the objective is separable.

We have several choices for computing the Newton step for the optimal network flow problem (10.36). The most straightforward is to solve the full KKT system, using a sparse LDL^T factorization. For this problem it is probably better to compute the Newton step using block elimination. We can characterize the sparsity pattern of the Schur complement S = −AH^{-1}A^T in terms of the graph: S_{ij} ≠ 0 if and only if node i and node j are connected by an arc. It follows that if the network is sparse, i.e., if each node is connected by an arc to only a few other nodes, then the Schur complement S is sparse. In this case we can exploit sparsity in forming S, and in the associated factorization and solve steps as well. We can expect the computational complexity of computing the Newton step to grow approximately linearly with the number of arcs (which is the number of variables).
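A rough sketch of this block elimination, assuming scipy.sparse; it works with AH^{-1}A^T = −S, which is positive definite, and the names and interface are ours.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def network_flow_newton_step(A, dphi, d2phi):
    """Newton step for (10.36) at a feasible flow x, by block elimination.
    A: sparse reduced node incidence matrix (p x n); dphi, d2phi: arrays
    of phi_i'(x_i) and phi_i''(x_i), so H = diag(d2phi) is diagonal.
    A H^{-1} A^T inherits the sparsity pattern of the graph."""
    S = (A @ sp.diags(1.0 / d2phi) @ A.T).tocsc()   # A H^{-1} A^T
    w = spla.spsolve(S, -(A @ (dphi / d2phi)))      # solve for dual w
    dx = -(A.T @ w + dphi) / d2phi                  # dx = -H^{-1}(A^T w + g)
    return dx, w
```

Here spsolve stands in for the sparse factorization and solve steps described above; with a fill-reducing ordering this gives the near-linear growth in the number of arcs.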
Optimal control

We consider the problem

minimize Σ_{t=1}^N φ_t(z(t)) + Σ_{t=0}^{N−1} ψ_t(u(t))
subject to z(t+1) = A_t z(t) + B_t u(t), t = 0, ..., N−1.

Here

• z(t) ∈ R^k is the system state at time t,
• u(t) ∈ R^l is the input or control action at time t,
• φ_t : R^k → R is the state cost function,
• ψ_t : R^l → R is the input cost function,
• N is called the time horizon for the problem.

We assume that the input and state cost functions are strictly convex and twice differentiable. The variables in the problem are u(0), ..., u(N−1) and z(1), ..., z(N); the initial state z(0) is given. The linear equality constraints are called the state equations or dynamic evolution equations. We define the overall optimization variable x as

x = (u(0), z(1), u(1), ..., u(N−1), z(N)) ∈ R^{N(k+l)}.
Since the objective is block separable (i.e., a sum of functions of z(t) and u(t)), the Hessian is block diagonal:

H = diag(R_0, Q_1, ..., R_{N−1}, Q_N),

where

R_t = ∇^2 ψ_t(u(t)), t = 0, ..., N−1, Q_t = ∇^2 φ_t(z(t)), t = 1, ..., N.

We can collect all the equality constraints (i.e., the state equations) and express them as Ax = b, where

\[
A = \begin{bmatrix}
-B_0 & I & 0 & 0 & 0 & \cdots & 0 & 0 & 0 \\
0 & -A_1 & -B_1 & I & 0 & \cdots & 0 & 0 & 0 \\
0 & 0 & 0 & -A_2 & -B_2 & \cdots & 0 & 0 & 0 \\
\vdots & & & & & & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & \cdots & -A_{N-1} & -B_{N-1} & I
\end{bmatrix},
\qquad
b = \begin{bmatrix} A_0 z(0) \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
\]

The number of rows of A (i.e., of equality constraints) is Nk. Directly solving the KKT system for the Newton step, using a dense LDL^T factorization, would cost

(1/3)(2Nk + Nl)^3 = (1/3)N^3(2k + l)^3 flops.

Using a sparse LDL^T factorization would give a large improvement, since the method would exploit the many zero entries in A and H. In fact we can do better by exploiting the special block structure of H and A, using block elimination to compute the Newton step. The Schur complement S = −AH^{-1}A^T turns out to be block tridiagonal, with k × k blocks:

\[
S = -AH^{-1}A^T =
\begin{bmatrix}
S_{11} & Q_1^{-1}A_1^T & 0 & \cdots & 0 & 0 \\
A_1 Q_1^{-1} & S_{22} & Q_2^{-1}A_2^T & \cdots & 0 & 0 \\
0 & A_2 Q_2^{-1} & S_{33} & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & S_{N-1,N-1} & Q_{N-1}^{-1}A_{N-1}^T \\
0 & 0 & 0 & \cdots & A_{N-1}Q_{N-1}^{-1} & S_{NN}
\end{bmatrix},
\]

where

S_{11} = −B_0 R_0^{-1} B_0^T − Q_1^{-1},
S_{ii} = −A_{i−1} Q_{i−1}^{-1} A_{i−1}^T − B_{i−1} R_{i−1}^{-1} B_{i−1}^T − Q_i^{-1}, i = 2, ..., N.

In particular, S is banded, with bandwidth 2k − 1, so we can factor it in order k^3 N flops. Therefore we can compute the Newton step in order k^3 N flops, assuming k ≪ N. Note that this grows linearly with the time horizon N, whereas for a generic method the flop count grows like N^3.

For this problem we could go one step further and exploit the block tridiagonal structure of S. Applying a standard block tridiagonal factorization method would result in the classic Riccati recursion for solving a quadratic optimal control problem. Still, using only the banded nature of S yields an algorithm of the same order.
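The block tridiagonal factorization mentioned above can be sketched as a block elimination (a block Thomas recursion). The following minimal illustration assumes numpy/scipy and is written for a positive definite block tridiagonal matrix, so for the Schur complement above one would pass −S; the interface is ours.

```python
from scipy.linalg import cho_factor, cho_solve

def block_tridiag_solve(diag, lower, rhs):
    """Solve M y = rhs for a symmetric positive definite block tridiagonal M,
    with k x k diagonal blocks diag[0..N-1] and subdiagonal blocks
    lower[0..N-2] (lower[i] = M_{i+1,i}), in O(N k^3) flops."""
    N = len(diag)
    dhat, rhat, facs = [diag[0]], [rhs[0]], []
    for i in range(1, N):                           # forward elimination
        c = cho_factor(dhat[-1]); facs.append(c)
        dhat.append(diag[i] - lower[i-1] @ cho_solve(c, lower[i-1].T))
        rhat.append(rhs[i] - lower[i-1] @ cho_solve(c, rhat[-1]))
    facs.append(cho_factor(dhat[-1]))
    y = [None] * N                                  # back substitution
    y[-1] = cho_solve(facs[-1], rhat[-1])
    for i in range(N - 2, -1, -1):
        y[i] = cho_solve(facs[i], rhat[i] - lower[i].T @ y[i+1])
    return y
```

Specialized to the optimal control blocks, this recursion is essentially the Riccati recursion referred to above.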
Analytic center of a linear matrix inequality

We consider the problem

minimize f(X) = −log det X
subject to tr(A_i X) = b_i, i = 1, ..., p, (10.37)

where X ∈ S^n is the variable, A_i ∈ S^n, b_i ∈ R, and dom f = S^n_{++}. The KKT conditions for this problem are

−X⋆^{-1} + Σ_{i=1}^p ν_i⋆ A_i = 0, tr(A_i X⋆) = b_i, i = 1, ..., p. (10.38)

The dimension of the variable X is n(n+1)/2. We could simply ignore the special matrix structure of X, consider it as a (vector) variable x ∈ R^{n(n+1)/2}, and solve the problem (10.37) using a generic method for a problem with n(n+1)/2 variables and p equality constraints. The cost of computing a Newton step would then be at least (1/3)(n(n+1)/2 + p)^3 flops, which is order n^6 in n. We will see that there are a number of far more attractive alternatives.

A first option is to solve the dual problem. The conjugate of f is

f∗(Y) = log det(−Y)^{-1} − n,

with dom f∗ = −S^n_{++} (see example 3.23, page 92), so the dual problem is

maximize −b^T ν + log det(Σ_{i=1}^p ν_i A_i) + n, (10.39)

with domain {ν | Σ_{i=1}^p ν_i A_i ≻ 0}. This is an unconstrained problem with variable ν ∈ R^p. The optimal X⋆ can be recovered from the optimal ν⋆ by solving the first (dual feasibility) equation in (10.38), i.e., X⋆ = (Σ_{i=1}^p ν_i⋆ A_i)^{-1}.

Let us work out the cost of computing the Newton step for the dual problem (10.39). We have to form the gradient and Hessian of g, and then solve for the Newton step. The gradient and Hessian are given by

∇g(ν)_i = tr(A^{-1} A_i) − b_i, i = 1, ..., p,
∇^2 g(ν)_{ij} = −tr(A^{-1} A_i A^{-1} A_j), i, j = 1, ..., p,

where A = Σ_{i=1}^p ν_i A_i. To form ∇^2 g(ν) and ∇g(ν) we proceed as follows. We first form A (pn^2 flops), and A^{-1} A_j for each j (2pn^3 flops). Then we form the matrix ∇^2 g(ν). Each of the p(p+1)/2 entries of ∇^2 g(ν) is the inner product of two matrices in S^n, each of which costs n(n+1) flops, so the total is (dropping dominated terms) (1/2)p^2 n^2 flops. Forming ∇g(ν) is cheap, since we already have the matrices A^{-1} A_i. Finally, we solve for the Newton step −∇^2 g(ν)^{-1} ∇g(ν), which costs (1/3)p^3 flops. All together, and keeping only the leading terms, the total cost of computing the Newton step is 2pn^3 + (1/2)p^2 n^2 + (1/3)p^3 flops. Note that this is order n^3 in n, which is far better than the simple primal method described above, which is order n^6.

We can also solve the primal problem more efficiently, by exploiting its special matrix structure. To derive the KKT system for the Newton step ∆X_nt at a feasible X, we replace X⋆ in the KKT conditions by X + ∆X_nt and ν⋆ by w, and linearize the first equation using the first-order approximation

(X + ∆X_nt)^{-1} ≈ X^{-1} − X^{-1} ∆X_nt X^{-1}.

This gives the KKT system

−X^{-1} + X^{-1} ∆X_nt X^{-1} + Σ_{i=1}^p w_i A_i = 0, tr(A_i ∆X_nt) = 0, i = 1, ..., p. (10.40)

This is a set of n(n+1)/2 + p linear equations in the variables ∆X_nt ∈ S^n and w ∈ R^p. If we solved these equations using a generic method, the cost would be order n^6.

We can use block elimination to solve the KKT system (10.40) far more efficiently. We eliminate the variable ∆X_nt, by solving the first equation to get

∆X_nt = X − X (Σ_{i=1}^p w_i A_i) X = X − Σ_{i=1}^p w_i X A_i X. (10.41)

Substituting this expression for ∆X_nt into the other equation gives

tr(A_j ∆X_nt) = tr(A_j X) − Σ_{i=1}^p w_i tr(A_j X A_i X) = 0, j = 1, ..., p.

This is a set of p linear equations in w,

Cw = d,

where C_{ij} = tr(A_i X A_j X) and d_i = tr(A_i X). The coefficient matrix C is symmetric and positive definite, so a Cholesky factorization can be used to find w. Once we have w, we can compute ∆X_nt from (10.41).

The cost of this method is as follows. We form the products A_i X (2pn^3 flops), and then form the matrix C. Each of the p(p+1)/2 entries of C is the inner product of two matrices in R^{n×n}, so forming C costs p^2 n^2 flops. Then we solve for w = C^{-1} d, which costs (1/3)p^3 flops. Finally we compute ∆X_nt; if we use the first expression in (10.41), i.e., first compute the sum and then pre- and post-multiply with X, the cost is approximately pn^2 + 3n^3 flops. All together, the total cost of computing the Newton step for the primal problem, using block elimination, is 2pn^3 + p^2 n^2 + (1/3)p^3 flops. This is far better than the generic method, which is order n^6. Note also that the cost is the same as that of computing the Newton step for the dual problem.
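A compact sketch of this primal block elimination, assuming numpy/scipy (the function and variable names are ours):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lmi_center_newton_step(As, X):
    """Newton step for minimize -log det X s.t. tr(A_i X) = b_i, at a
    feasible X > 0, by the block elimination (10.40): form C and d,
    solve Cw = d by Cholesky, then recover dX from (10.41)."""
    p = len(As)
    AX = [Ai @ X for Ai in As]                       # products A_i X
    C = np.array([[np.sum(AX[i] * AX[j].T) for j in range(p)]
                  for i in range(p)])                # C_ij = tr(A_i X A_j X)
    d = np.array([np.trace(M) for M in AX])          # d_i = tr(A_i X)
    w = cho_solve(cho_factor(C), d)                  # C is symmetric PD
    dX = X - X @ sum(w[i] * As[i] for i in range(p)) @ X
    return dX, w
```

Each entry of C is formed as an inner product (np.sum of an elementwise product) rather than a matrix product, matching the p^2 n^2 cost counted above.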
Bibliography

The two key assumptions in our analysis of the infeasible start Newton method (the derivative Dr has a bounded inverse and satisfies a Lipschitz condition) are central to most convergence proofs of Newton's method; see Ortega and Rheinboldt [OR00] and Dennis and Schnabel [DS96]. The relative merits of solving KKT systems via direct factorization of the full system, or via elimination, have been extensively studied in the context of interior-point methods for linear and quadratic programming; see, for example, Wright [Wri97, chapter 11] and Nocedal and Wright [NW99, §16.1-2]. The Riccati recursion from optimal control can be interpreted as a method for exploiting the block tridiagonal structure in the Schur complement S of the example on page 552. This observation was made by Rao, Wright, and Rawlings [RWR98, §3.3].

Exercises

Equality constrained minimization

10.1 Nonsingularity of the KKT matrix. Consider the KKT matrix

\[
\begin{bmatrix} P & A^T \\ A & 0 \end{bmatrix},
\]

where P ∈ S^n_+, A ∈ R^{p×n}, and rank A = p < n.

(a) Show that each of the following statements is equivalent to nonsingularity of the KKT matrix.
• N(P) ∩ N(A) = {0}.
• Ax = 0, x ≠ 0 ⟹ x^T P x > 0.
• F^T P F ≻ 0, where F ∈ R^{n×(n−p)} is a matrix for which R(F) = N(A).
• P + A^T Q A ≻ 0 for some Q ≽ 0.
(b) Show that if the KKT matrix is nonsingular, then it has exactly n positive and p negative eigenvalues.
10.2 Projected gradient method. In this problem we explore an extension of the gradient method to equality constrained minimization problems. Suppose f is convex and differentiable, and x ∈ dom f satisfies Ax = b, where A ∈ R^{p×n} with rank A = p < n. The Euclidean projection of the negative gradient −∇f(x) on N(A) is given by

∆x_pg = argmin_{Au=0} ∥−∇f(x) − u∥_2.

(a) Let (v, w) be the unique solution of

\[
\begin{bmatrix} I & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} v \\ w \end{bmatrix}
= \begin{bmatrix} -\nabla f(x) \\ 0 \end{bmatrix}.
\]

Show that v = ∆x_pg and w = argmin_y ∥∇f(x) + A^T y∥_2.
(b) What is the relation between the projected negative gradient ∆x_pg and the negative gradient of the reduced problem (10.5), assuming F^T F = I?
(c) The projected gradient method for solving an equality constrained minimization problem uses the step ∆x_pg, and a backtracking line search on f. Use the results of part (b) to give some conditions under which the projected gradient method converges to the optimal solution, when started from a point x(0) ∈ dom f with Ax(0) = b.

Newton's method with equality constraints

10.3 Dual Newton method. In this problem we explore Newton's method for solving the dual of the equality constrained minimization problem (10.1). We assume that f is twice differentiable, ∇^2 f(x) ≻ 0 for all x ∈ dom f, and that for each ν ∈ R^p, the Lagrangian L(x, ν) = f(x) + ν^T (Ax − b) has a unique minimizer, which we denote x(ν).

(a) Show that the dual function g is twice differentiable. Find an expression for the Newton step for the dual function g, evaluated at ν, in terms of f, ∇f, and ∇^2 f, evaluated at x = x(ν). You can use the results of exercise 3.40.
(b) Suppose there exists a K such that

\[
\left\| \begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix}^{-1} \right\|_2 \le K
\]

for all x ∈ dom f. Show that g is strongly concave, with ∇^2 g(ν) ≼ −(1/K)I.

10.4 Strong convexity and Lipschitz constant of the reduced problem. Suppose f satisfies the assumptions given on page 529. Show that the reduced objective function f̃(z) = f(Fz + x̂) is strongly convex, and that its Hessian is Lipschitz continuous (on the associated sublevel set S̃). Express the strong convexity and Lipschitz constants of f̃ in terms of K, M, L, and the maximum and minimum singular values of F.

10.5 Adding a quadratic term to the objective. Suppose Q ≽ 0. The problem

minimize f(x) + (Ax − b)^T Q(Ax − b)
subject to Ax = b

is equivalent to the original equality constrained optimization problem (10.1). Is the Newton step for this problem the same as the Newton step for the original problem?

10.6 The Newton decrement. Show that (10.13) holds, i.e.,

f(x) − inf{f̂(x + v) | A(x + v) = b} = λ(x)^2/2.

Infeasible start Newton method

10.7 Assumptions for infeasible start Newton method. Consider the set of assumptions given on page 536.
(a) Suppose that the function f is closed. Show that this implies that the norm of the residual, ∥r(x, ν)∥_2, is closed.
(b) Show that Dr satisfies a Lipschitz condition if and only if ∇^2 f does.

10.8 Infeasible start Newton method and initially satisfied equality constraints. Suppose we use the infeasible start Newton method to minimize f(x) subject to a_i^T x = b_i, i = 1, ..., p.
(a) Suppose the initial point x(0) satisfies the linear equality a_i^T x(0) = b_i. Show that the linear equality will remain satisfied for future iterates, i.e., a_i^T x(k) = b_i for all k.
(b) Suppose that one of the equality constraints becomes satisfied at iteration k, i.e., we have a_i^T x(k−1) ≠ b_i, a_i^T x(k) = b_i. Show that at iteration k, all the equality constraints are satisfied.

10.9 Equality constrained entropy maximization.
Consider the equality constrained entropy maximization problem

minimize f(x) = Σ_{i=1}^n x_i log x_i
subject to Ax = b, (10.42)

with dom f = R^n_{++} and A ∈ R^{p×n}. We assume the problem is feasible and that rank A = p < n.

(a) Show that the problem has a unique optimal solution x⋆.
(b) Find A, b, and feasible x(0) for which the sublevel set

{x ∈ R^n_{++} | Ax = b, f(x) ≤ f(x(0))}

is not closed. Thus, the assumptions listed in §10.2.4, page 529, are not satisfied for some feasible initial points.
(c) Show that the problem (10.42) satisfies the assumptions for the infeasible start Newton method listed in §10.3.3, page 536, for any feasible starting point.
(d) Derive the Lagrange dual of (10.42), and explain how to find the optimal solution of (10.42) from the optimal solution of the dual problem. Show that the dual problem satisfies the assumptions listed in §10.2.4, page 529, for any starting point.

The results of parts (b), (c), and (d) do not mean the standard Newton method will fail, or that the infeasible start Newton method or dual method will work better in practice. They only mean that our convergence analysis for the standard Newton method does not apply, while it does apply to the infeasible start and dual methods. (See exercise 10.15.)

10.10 Bounded inverse derivative condition for strongly convex-concave game. Consider a convex-concave game with payoff function f (see page 541). Suppose ∇^2_{uu} f(u, v) ≽ mI and ∇^2_{vv} f(u, v) ≼ −mI, for all (u, v) ∈ dom f. Show that

∥Dr(u, v)^{-1}∥_2 = ∥∇^2 f(u, v)^{-1}∥_2 ≤ 1/m.

Implementation

10.11 Consider the resource allocation problem described in example 10.1. You can assume the f_i are strongly convex, i.e., f_i''(z) ≥ m > 0 for all z.
(a) Find the computational effort required to compute a Newton step for the reduced problem. Be sure to exploit the special structure of the Newton equations.
(b) Explain how to solve the problem via the dual. You can assume that the conjugate functions fi∗, and their derivatives, are readily computable, and that the equation fi′(x) = ν is readily solved for x, given ν. What is the computational complexity of finding a Newton step for the dual problem?
(c) What is the computational complexity of computing a Newton step for the resource allocation problem? Be sure to exploit the special structure of the KKT equations.
10.12 Describe an efficient way to compute the Newton step for the problem

minimize tr(X^{-1})
subject to tr(A_i X) = b_i, i = 1, ..., p,

with domain S^n_{++}, assuming p and n have the same order of magnitude. Also derive the Lagrange dual problem and give the complexity of finding the Newton step for the dual problem.
10.13 Elimination method for computing Newton step for convex-concave game. Consider a convex-concave game with payoff function f : R^p × R^q → R (see page 541). We assume that f is strongly convex-concave, i.e., for all (u, v) ∈ dom f and some m > 0, we have ∇^2_{uu} f(u, v) ≽ mI and ∇^2_{vv} f(u, v) ≼ −mI.
(a) Show how to compute the Newton step using Cholesky factorizations of ∇^2_{uu} f(u, v) and −∇^2_{vv} f(u, v). Compare the cost of this method with the cost of using an LDL^T factorization of ∇^2 f(u, v), assuming ∇^2 f(u, v) is dense.
(b) Show how you can exploit diagonal or block diagonal structure in ∇^2_{uu} f(u, v) and/or ∇^2_{vv} f(u, v). How much do you save, if you assume ∇^2_{uv} f(u, v) is dense?
Numerical experiments
10.14 Log-optimal investment. Consider the log-optimal investment problem described in exer- cise 4.60, without the constraint x ≽ 0. Use Newton’s method to compute the solution,
with the following problem data: there are n = 3 assets, and m = 4 scenarios, with returns
\[
p_1 = \begin{bmatrix} 2 \\ 1.3 \\ 1 \end{bmatrix}, \quad
p_2 = \begin{bmatrix} 2 \\ 0.5 \\ 1 \end{bmatrix}, \quad
p_3 = \begin{bmatrix} 0.5 \\ 1.3 \\ 1 \end{bmatrix}, \quad
p_4 = \begin{bmatrix} 0.5 \\ 0.5 \\ 1 \end{bmatrix}.
\]

The probabilities of the four scenarios are given by π = (1/3, 1/6, 1/3, 1/6).
10.15 Equality constrained entropy maximization. Consider the equality constrained entropy maximization problem
minimize f(x) = Σ_{i=1}^n x_i log x_i
subject to Ax = b,
with dom f = R^n_{++} and A ∈ R^{p×n}, with p < n. (See exercise 10.9 for some relevant analysis.)

Generate a problem instance with n = 100 and p = 30 by choosing A randomly (checking that it has full rank), choosing x̂ as a random positive vector (e.g., with entries uniformly distributed on [0, 1]), and then setting b = Ax̂. (Thus, x̂ is feasible.) Compute the solution of the problem using the following methods.

(a) Standard Newton method. You can use initial point x(0) = x̂.
(b) Infeasible start Newton method. You can use initial point x(0) = x̂ (to compare with the standard Newton method), and also the initial point x(0) = 1.
(c) Dual Newton method, i.e., the standard Newton method applied to the dual problem.

Verify that the three methods compute the same optimal point (and Lagrange multiplier). Compare the computational effort per step for the three methods, assuming relevant structure is exploited. (Your implementation, however, does not need to exploit structure to compute the Newton step.)

10.16 Convex-concave game. Use the infeasible start Newton method to solve convex-concave games of the form (10.32), with randomly generated data. Plot the norm of the residual and step length versus iteration. Experiment with the line search parameters and initial point (which must satisfy ∥u∥_2 < 1, ∥v∥_2 < 1, however).

Chapter 11

Interior-point methods

11.1 Inequality constrained minimization problems

In this chapter we discuss interior-point methods for solving convex optimization problems that include inequality constraints,

minimize f_0(x)
subject to f_i(x) ≤ 0, i = 1, ..., m (11.1)
Ax = b,

where f_0, ..., f_m : R^n → R are convex and twice continuously differentiable, and A ∈ R^{p×n} with rank A = p < n. We assume that the problem is solvable, i.e., an optimal x⋆ exists. We denote the optimal value f_0(x⋆) as p⋆.

We also assume that the problem is strictly feasible, i.e., there exists x ∈ D that satisfies Ax = b and f_i(x) < 0 for i = 1, ..., m. This means that Slater's constraint qualification holds, so there exist dual optimal λ⋆ ∈ R^m, ν⋆ ∈ R^p, which together with x⋆ satisfy the KKT conditions

Ax⋆ = b, f_i(x⋆) ≤ 0, i = 1, ..., m
λ⋆ ≽ 0
∇f_0(x⋆) + Σ_{i=1}^m λ_i⋆ ∇f_i(x⋆) + A^T ν⋆ = 0 (11.2)
λ_i⋆ f_i(x⋆) = 0, i = 1, ..., m.

Interior-point methods solve the problem (11.1) (or the KKT conditions (11.2)) by applying Newton's method to a sequence of equality constrained problems, or to a sequence of modified versions of the KKT conditions. We will concentrate on a particular interior-point algorithm, the barrier method, for which we give a proof of convergence and a complexity analysis. We also describe a simple primal-dual interior-point method (in §11.7), but do not give an analysis.

We can view interior-point methods as another level in the hierarchy of convex optimization algorithms. Linear equality constrained quadratic problems are the simplest. For these problems the KKT conditions are a set of linear equations, which can be solved analytically. Newton's method is the next level in the hierarchy. We can think of Newton's method as a technique for solving a linear equality constrained optimization problem, with twice differentiable objective, by reducing it to a sequence of linear equality constrained quadratic problems. Interior-point methods form the next level in the hierarchy: They solve an optimization problem with linear equality and inequality constraints by reducing it to a sequence of linear equality constrained problems.
Examples

Many problems are already in the form (11.1), and satisfy the assumption that the objective and constraint functions are twice differentiable. Obvious examples are LPs, QPs, QCQPs, and GPs in convex form; another example is linear inequality constrained entropy maximization,

minimize Σ_{i=1}^n x_i log x_i
subject to Fx ≼ g
Ax = b,

with domain D = R^n_{++}.

Many other problems do not have the required form (11.1), with twice differentiable objective and constraint functions, but can be reformulated in the required form. We have already seen many examples of this, such as the transformation of an unconstrained convex piecewise-linear minimization problem

minimize max_{i=1,...,m} (a_i^T x + b_i)

(with nondifferentiable objective), to the LP

minimize t
subject to a_i^T x + b_i ≤ t, i = 1, ..., m

(which has twice differentiable objective and constraint functions).

Other convex optimization problems, such as SOCPs and SDPs, are not readily recast in the required form, but can be handled by extensions of interior-point methods to problems with generalized inequalities, which we describe in §11.6.

11.2 Logarithmic barrier function and central path

Our goal is to approximately formulate the inequality constrained problem (11.1) as an equality constrained problem to which Newton's method can be applied. Our first step is to rewrite the problem (11.1), making the inequality constraints implicit in the objective:

minimize f_0(x) + Σ_{i=1}^m I_−(f_i(x))
subject to Ax = b, (11.3)

where I_− : R → R is the indicator function for the nonpositive reals,

I_−(u) = 0 for u ≤ 0, and I_−(u) = ∞ for u > 0.

Figure 11.1 The dashed lines show the function I_−(u), and the solid curves show Î_−(u) = −(1/t) log(−u), for t = 0.5, 1, 2. The curve for t = 2 gives the best approximation.
The problem (11.3) has no inequality constraints, but its objective function is not (in general) differentiable, so Newton’s method cannot be applied.
11.2.1 Logarithmic barrier
The basic idea of the barrier method is to approximate the indicator function I_− by the function

Î_−(u) = −(1/t) log(−u), dom Î_− = −R_{++},

where t > 0 is a parameter that sets the accuracy of the approximation. Like I_−, the function Î_− is convex and nondecreasing, and (by our convention) takes on the value ∞ for u > 0. Unlike I_−, however, Î_− is differentiable and closed: it increases to ∞ as u increases to 0. Figure 11.1 shows the function I_−, and the approximation Î_−, for several values of t. As t increases, the approximation becomes more accurate.

Substituting Î_− for I_− in (11.3) gives the approximation

minimize f_0(x) + Σ_{i=1}^m −(1/t) log(−f_i(x))
subject to Ax = b. (11.4)
The objective here is convex, since −(1/t)log(−u) is convex and increasing in u, and differentiable. Assuming an appropriate closedness condition holds, Newton’s method can be used to solve it.
The function

φ(x) = −Σ_{i=1}^m log(−f_i(x)), (11.5)

with dom φ = {x ∈ R^n | f_i(x) < 0, i = 1, ..., m}, is called the logarithmic barrier or log barrier for the problem (11.1). Its domain is the set of points that satisfy the inequality constraints of (11.1) strictly. No matter what value the positive parameter t has, the logarithmic barrier grows without bound if f_i(x) → 0 for any i.

Of course, the problem (11.4) is only an approximation of the original problem (11.3), so one question that arises immediately is how well a solution of (11.4) approximates a solution of the original problem (11.3). Intuition suggests, and we will soon confirm, that the quality of the approximation improves as the parameter t grows.

On the other hand, when the parameter t is large, the function f_0 + (1/t)φ is difficult to minimize by Newton's method, since its Hessian varies rapidly near the boundary of the feasible set. We will see that this problem can be circumvented by solving a sequence of problems of the form (11.4), increasing the parameter t (and therefore the accuracy of the approximation) at each step, and starting each Newton minimization at the solution of the problem for the previous value of t.

For future reference, we note that the gradient and Hessian of the logarithmic barrier function φ are given by

∇φ(x) = Σ_{i=1}^m (1/(−f_i(x))) ∇f_i(x),
∇^2 φ(x) = Σ_{i=1}^m (1/f_i(x)^2) ∇f_i(x)∇f_i(x)^T + Σ_{i=1}^m (1/(−f_i(x))) ∇^2 f_i(x)

(see §A.4.2 and §A.4.4).
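For the special case of linear inequalities Ax ≼ b (worked out in example 11.1 below), the barrier, its gradient, and its Hessian can be evaluated as in the following minimal sketch, assuming numpy; the interface is ours.

```python
import numpy as np

def log_barrier(A, b, x):
    """Value, gradient, and Hessian of phi(x) = -sum_i log(b_i - a_i^T x)
    for the inequalities Ax <= b, per (11.5) and the formulas above
    (phi = +inf outside its domain)."""
    d = b - A @ x
    if np.any(d <= 0):
        return np.inf, None, None          # x violates Ax < b
    val = -np.sum(np.log(d))
    grad = A.T @ (1.0 / d)                 # sum_i a_i / (b_i - a_i^T x)
    hess = A.T @ (A / d[:, None]**2)       # sum_i a_i a_i^T / (b_i - a_i^T x)^2
    return val, grad, hess
```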
11.2.2 Central path

We now consider in more detail the minimization problem (11.4). It will simplify notation later on if we multiply the objective by t, and consider the equivalent problem

minimize tf_0(x) + φ(x)
subject to Ax = b, (11.6)

which has the same minimizers. We assume for now that the problem (11.6) can be solved via Newton's method, and, in particular, that it has a unique solution for each t > 0. (We will discuss this assumption in more detail in §11.3.3.)

For t > 0 we define x⋆(t) as the solution of (11.6). The central path associated with problem (11.1) is defined as the set of points x⋆(t), t > 0, which we call the central points. Points on the central path are characterized by the following necessary and sufficient conditions: x⋆(t) is strictly feasible, i.e., satisfies
Ax⋆(t) = b, f_i(x⋆(t)) < 0, i = 1, ..., m,

and there exists a ν̂ ∈ R^p such that

0 = t∇f_0(x⋆(t)) + ∇φ(x⋆(t)) + A^T ν̂
  = t∇f_0(x⋆(t)) + Σ_{i=1}^m (1/(−f_i(x⋆(t)))) ∇f_i(x⋆(t)) + A^T ν̂ (11.7)

holds.

Example 11.1 Inequality form linear programming. The logarithmic barrier function for an LP in inequality form,

minimize c^T x
subject to Ax ≼ b, (11.8)

is given by

φ(x) = −Σ_{i=1}^m log(b_i − a_i^T x), dom φ = {x | Ax ≺ b},

where a_1^T, ..., a_m^T are the rows of A. The gradient and Hessian of the barrier function are

∇φ(x) = Σ_{i=1}^m (1/(b_i − a_i^T x)) a_i, ∇^2 φ(x) = Σ_{i=1}^m (1/(b_i − a_i^T x)^2) a_i a_i^T,

or, more compactly, ∇φ(x) = A^T d, ∇^2 φ(x) = A^T diag(d)^2 A, where the elements of d ∈ R^m are given by d_i = 1/(b_i − a_i^T x). Since x is strictly feasible, we have d ≻ 0, so the Hessian of φ is nonsingular if and only if A has rank n.

The centrality condition (11.7) is

tc + Σ_{i=1}^m (1/(b_i − a_i^T x)) a_i = tc + A^T d = 0. (11.9)

We can give a simple geometric interpretation of the centrality condition. At a point x⋆(t) on the central path the gradient ∇φ(x⋆(t)), which is normal to the level set of φ through x⋆(t), must be parallel to −c. In other words, the hyperplane c^T x = c^T x⋆(t) is tangent to the level set of φ through x⋆(t). Figure 11.2 shows an example with m = 6 and n = 2.

Figure 11.2 Central path for an LP with n = 2 and m = 6. The dashed curves show three contour lines of the logarithmic barrier function φ. The central path converges to the optimal point x⋆ as t → ∞. Also shown is the point on the central path with t = 10. The optimality condition (11.9) at this point can be verified geometrically: The line c^T x = c^T x⋆(10) is tangent to the contour line of φ through x⋆(10).

Dual points from central path

From (11.7) we can derive an important property of the central path: Every central point yields a dual feasible point, and hence a lower bound on the optimal value p⋆. More specifically, define

λ_i⋆(t) = −1/(t f_i(x⋆(t))), i = 1, ..., m, ν⋆(t) = ν̂/t. (11.10)

We claim that the pair λ⋆(t), ν⋆(t) is dual feasible. First, it is clear that λ⋆(t) ≻ 0 because f_i(x⋆(t)) < 0, i = 1, ..., m. By expressing the optimality conditions (11.7) as

∇f_0(x⋆(t)) + Σ_{i=1}^m λ_i⋆(t)∇f_i(x⋆(t)) + A^T ν⋆(t) = 0,

we see that x⋆(t) minimizes the Lagrangian

L(x, λ, ν) = f_0(x) + Σ_{i=1}^m λ_i f_i(x) + ν^T (Ax − b)

for λ = λ⋆(t) and ν = ν⋆(t), which means that λ⋆(t), ν⋆(t) is a dual feasible pair. Therefore the dual function g(λ⋆(t), ν⋆(t)) is finite, and

g(λ⋆(t), ν⋆(t)) = f_0(x⋆(t)) + Σ_{i=1}^m λ_i⋆(t)f_i(x⋆(t)) + ν⋆(t)^T (Ax⋆(t) − b) = f_0(x⋆(t)) − m/t.

In particular, the duality gap associated with x⋆(t) and the dual feasible pair λ⋆(t), ν⋆(t) is simply m/t. As an important consequence, we have

f_0(x⋆(t)) − p⋆ ≤ m/t,

i.e., x⋆(t) is no more than m/t-suboptimal. This confirms the intuitive idea that x⋆(t) converges to an optimal point as t → ∞.

Example 11.2 Inequality form linear programming. The dual of the inequality form LP (11.8) is

maximize −b^T λ
subject to A^T λ + c = 0
λ ≽ 0.

From the optimality conditions (11.9), it is clear that

λ_i⋆(t) = 1/(t(b_i − a_i^T x⋆(t))), i = 1, ..., m,

is dual feasible, with dual objective value

−b^T λ⋆(t) = c^T x⋆(t) + (Ax⋆(t) − b)^T λ⋆(t) = c^T x⋆(t) − m/t.
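Putting the pieces together for the inequality form LP, a minimal Newton centering routine might look as follows; it returns x⋆(t) together with the dual point λ⋆(t) of (11.10), whose duality gap with x⋆(t) is m/t. This is an illustrative sketch under the assumptions above (numpy, strictly feasible starting point), not a tuned implementation.

```python
import numpy as np

def lp_central_point(A, b, c, t, x, tol=1e-10, alpha=0.01, beta=0.5):
    """Compute x*(t) for the LP (11.8) by Newton's method on
    t c^T x + phi(x), with backtracking; x must satisfy Ax < b."""
    def f(x):                                  # t c^T x + phi(x)
        return t * (c @ x) - np.sum(np.log(b - A @ x))
    while True:
        d = 1.0 / (b - A @ x)
        grad = t * c + A.T @ d                 # t c + A^T d, see (11.9)
        hess = A.T @ (A * d[:, None]**2)       # A^T diag(d)^2 A
        dx = np.linalg.solve(hess, -grad)
        if -(grad @ dx) / 2 <= tol:            # Newton decrement^2 / 2
            return x, d / t                    # lambda_i*(t) = d_i / t
        s = 1.0
        while np.min(b - A @ (x + s * dx)) <= 0:   # stay strictly feasible
            s *= beta
        while f(x + s * dx) > f(x) + alpha * s * (grad @ dx):
            s *= beta
        x = x + s * dx
```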
Interpretation via KKT conditions

We can also interpret the central path conditions (11.7) as a continuous deformation of the KKT optimality conditions (11.2). A point x is equal to x⋆(t) if and only if there exist λ, ν such that

Ax = b, f_i(x) ≤ 0, i = 1, ..., m
λ ≽ 0
∇f_0(x) + Σ_{i=1}^m λ_i ∇f_i(x) + A^T ν = 0 (11.11)
−λ_i f_i(x) = 1/t, i = 1, ..., m.

The only difference between the KKT conditions (11.2) and the centrality conditions (11.11) is that the complementarity condition −λ_i f_i(x) = 0 is replaced by the condition −λ_i f_i(x) = 1/t. In particular, for large t, x⋆(t) and the associated dual point λ⋆(t), ν⋆(t) 'almost' satisfy the KKT optimality conditions for (11.1).

Force field interpretation

We can give a simple mechanics interpretation of the central path in terms of potential forces acting on a particle in the strictly feasible set C. For simplicity we assume that there are no equality constraints. We associate with each constraint the force

F_i(x) = −∇(−log(−f_i(x))) = (1/f_i(x)) ∇f_i(x),

acting on the particle when it is at position x. The potential associated with the total force field generated by the constraints is the logarithmic barrier φ. As the particle moves toward the boundary of the feasible set, it is strongly repelled by the forces generated by the constraints.

Now we imagine another force acting on the particle, given by

F_0(x) = −t∇f_0(x),

when the particle is at position x. This objective force field acts to pull the particle in the negative gradient direction, i.e., toward smaller f_0. The parameter t scales the objective force, relative to the constraint forces.

The central point x⋆(t) is the point where the constraint forces exactly balance the objective force felt by the particle. As the parameter t increases, the particle is more strongly pulled toward the optimal point, but it is always trapped in C by the barrier potential, which becomes infinite as the particle approaches the boundary.

Example 11.3 Force field interpretation for inequality form LP. The force field associated with the ith constraint of the LP (11.8) is

F_i(x) = −a_i/(b_i − a_i^T x).

This force is in the direction of the inward pointing normal to the constraint plane H_i = {x | a_i^T x = b_i}, and has magnitude inversely proportional to the distance to H_i, i.e.,

∥F_i(x)∥_2 = ∥a_i∥_2/(b_i − a_i^T x) = 1/dist(x, H_i).

In other words, each constraint hyperplane has an associated repulsive force, given by the inverse distance to the hyperplane. The term tc^T x is the potential associated with a constant force −tc on the particle. This 'objective force' pushes the particle in the direction of low cost. Thus, x⋆(t) is the equilibrium position of the particle when it is subject to the inverse-distance constraint forces and the objective force −tc. When t is very large, the particle is pushed almost to the optimal point. The strong objective force is balanced by the opposing constraint forces, which are large because we are near the feasible boundary.

Figure 11.3 Force field interpretation of central path. The central path is shown as the dashed curve. The two points x⋆(1) and x⋆(3) are shown as dots in the left and right plots, respectively. The objective force, which is equal to −c and −3c, respectively, is shown as a heavy arrow. The other arrows represent the constraint forces, which are given by an inverse-distance law. As the strength of the objective force varies, the equilibrium position of the particle traces out the central path.

Figure 11.3 illustrates this interpretation for a small LP with n = 2 and m = 5. The lefthand plot shows x⋆(t) for t = 1, as well as the constraint forces acting on it, which balance the objective force.
The righthand plot shows x⋆(t) and the associated forces for t = 3. The larger value of the objective force moves the particle closer to the optimal point.

11.3 The barrier method

We have seen that the point x⋆(t) is m/t-suboptimal, and that a certificate of this accuracy is provided by the dual feasible pair λ⋆(t), ν⋆(t). This suggests a very straightforward method for solving the original problem (11.1) with a guaranteed specified accuracy ǫ: We simply take t = m/ǫ and solve the equality constrained problem

minimize (m/ǫ)f_0(x) + φ(x)
subject to Ax = b

using Newton's method. This method could be called the unconstrained minimization method, since it allows us to solve the inequality constrained problem (11.1) to a guaranteed accuracy by solving an unconstrained, or linearly constrained, problem. Although this method can work well for small problems, good starting points, and moderate accuracy (i.e., ǫ not too small), it does not work well in other cases. As a result it is rarely, if ever, used.

11.3.1 The barrier method

A simple extension of the unconstrained minimization method does work well. It is based on solving a sequence of unconstrained (or linearly constrained) minimization problems, using the last point found as the starting point for the next unconstrained minimization problem. In other words, we compute x⋆(t) for a sequence of increasing values of t, until t ≥ m/ǫ, which guarantees that we have an ǫ-suboptimal solution of the original problem. When the method was first proposed by Fiacco and McCormick in the 1960s, it was called the sequential unconstrained minimization technique (SUMT). Today the method is usually called the barrier method or path-following method. A simple version of the method is as follows.

Algorithm 11.1 Barrier method.

given strictly feasible x, t := t(0) > 0, μ > 1, tolerance ǫ > 0.
repeat
    1. Centering step. Compute x⋆(t) by minimizing tf_0 + φ, subject to Ax = b, starting at x.
    2. Update. x := x⋆(t).
    3. Stopping criterion. quit if m/t < ǫ.
    4. Increase t. t := μt.

At each iteration (except the first one) we compute the central point x⋆(t) starting from the previously computed central point, and then increase t by a factor μ > 1. The algorithm can also return λ = λ⋆(t) and ν = ν⋆(t), a dual ǫ-suboptimal point, or certificate for x.
We refer to each execution of step 1 as a centering step (since a central point is being computed) or an outer iteration, and to the first centering step (the com- putation of x⋆(t(0))) as the initial centering step. (Thus the simple algorithm with t(0) = m/ǫ consists of only the initial centering step.) Although any method for linearly constrained minimization can be used in step 1, we will assume that New- ton’s method is used. We refer to the Newton iterations or steps executed during the centering step as inner iterations. At each inner step, we have a primal fea- sible point; we have a dual feasible point, however, only at the end of each outer (centering) step.
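The outer loop of algorithm 11.1 is then only a few lines. In the sketch below, `center(x, t)` stands for any centering routine, e.g., a wrapper around the lp_central_point sketch above that discards the dual point; the names are ours.

```python
def barrier_method(center, m, x0, t0=1.0, mu=10.0, eps=1e-6):
    """Algorithm 11.1: outer loop of the barrier method. center(x, t)
    solves the centering problem (11.6) starting from x and returns
    x*(t); m is the number of inequality constraints."""
    x, t = x0, t0
    while True:
        x = center(x, t)       # 1. centering step, 2. update
        if m / t < eps:        # 3. stopping criterion: duality gap m/t
            return x
        t *= mu                # 4. increase t
```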

Accuracy of centering
We should make some comments on the accuracy to which we solve the centering problems. Computing x⋆(t) exactly is not necessary, since the central path has no significance beyond the fact that it leads to a solution of the original problem as t → ∞; inexact centering will still yield a sequence of points x(k) that converges to an optimal point. Inexact centering, however, means that the points λ⋆(t), ν⋆(t), computed from (11.10), are not exactly dual feasible. This can be corrected by adding a correction term to the formula (11.10), which yields a dual feasible point provided the computed x is near the central path, i.e., near x⋆(t) (see exercise 11.9).
On the other hand, the cost of computing an extremely accurate minimizer of tf0 + φ, as compared to the cost of computing a good minimizer of tf0 + φ, is only marginally more, i.e., a few Newton steps at most. For this reason it is not unreasonable to assume exact centering.
Choice of μ
The choice of the parameter μ involves a trade-off in the number of inner and outer iterations required. If μ is small (i.e., near 1) then at each outer iteration t increases by a small factor. As a result the initial point for the Newton process, i.e., the previous iterate x, is a very good starting point, and the number of Newton steps needed to compute the next iterate is small. Thus for small μ we expect a small number of Newton steps per outer iteration, but of course a large number of outer iterations since each outer iteration reduces the gap by only a small amount. In this case the iterates (and indeed, the iterates of the inner iterations as well) closely follow the central path. This explains the alternate name path-following method.
On the other hand if μ is large we have the opposite situation. After each outer iteration t increases a large amount, so the current iterate is probably not a very good approximation of the next iterate. Thus we expect many more inner iterations. This ‘aggressive’ updating of t results in fewer outer iterations, since the duality gap is reduced by the large factor μ at each outer iteration, but more inner iterations. With μ large, the iterates are widely separated on the central path; the inner iterates veer way off the central path.
This trade-off in the choice of μ is confirmed both in practice and, as we will see, in theory. In practice, small values of μ (i.e., near one) result in many outer iterations, with just a few Newton steps for each outer iteration. For μ in a fairly large range, from around 3 to 100 or so, the two effects nearly cancel, so the total number of Newton steps remains approximately constant. This means that the choice of μ is not particularly critical; values from around 10 to 20 or so seem to work well. When the parameter μ is chosen to give the best worst-case bound on the total number of Newton steps required, values of μ near one are used.
Choice of t(0)
Another important issue is the choice of initial value of t. Here the trade-off is simple: If t(0) is chosen too large, the first outer iteration will require too many it- erations. If t(0) is chosen too small, the algorithm will require extra outer iterations, and possibly too many inner iterations in the first centering step.
Since m/t(0) is the duality gap that will result from the first centering step, one

reasonable choice is to choose t(0) so that m/t(0) is approximately of the same order as f0(x(0)) − p⋆, or μ times this amount. For example, if a dual feasible point λ, ν is known, with duality gap η = f0(x(0)) − g(λ, ν), then we can take t(0) = m/η. Thus, in the first outer iteration we simply compute a pair with the same duality gap as the initial primal and dual feasible points.
Another possibility is suggested by the central path condition (11.7). We can interpret

inf_ν ∥t∇f_0(x(0)) + ∇φ(x(0)) + A^T ν∥_2 (11.12)

as a measure for the deviation of x(0) from the point x⋆(t), and choose for t(0) the value that minimizes (11.12). (This value of t and ν can be found by solving a least-squares problem.)

A variation on this approach uses an affine-invariant measure of deviation between x and x⋆(t) in place of the Euclidean norm. We choose t and ν that minimize

α(t, ν) = (t∇f_0(x(0)) + ∇φ(x(0)) + A^T ν)^T H_0^{-1} (t∇f_0(x(0)) + ∇φ(x(0)) + A^T ν),

where

H_0 = t∇^2 f_0(x(0)) + ∇^2 φ(x(0)).

(It can be shown that inf_ν α(t, ν) is the square of the Newton decrement of tf_0 + φ at x(0).) Since α is a quadratic-over-linear function of ν and t, it is convex.
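Minimizing (11.12) over t and ν is an ordinary least-squares problem in the stacked variable (t, ν); a sketch, assuming numpy (if the minimizing t came out nonpositive, some fallback such as t(0) = 1 would be needed):

```python
import numpy as np

def initial_t(grad_f0, grad_phi, A):
    """Choose t(0) by minimizing (11.12): the norm of
    t*grad_f0 + grad_phi + A^T nu over (t, nu)."""
    M = np.column_stack([grad_f0, A.T])            # M @ (t, nu)
    sol, *_ = np.linalg.lstsq(M, -grad_phi, rcond=None)
    return sol[0], sol[1:]                         # t(0) and minimizing nu
```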
Infeasible start Newton method
In one variation on the barrier method, an infeasible start Newton method (described in §10.3) is used for the centering steps. Thus, the barrier method is initialized with a point x(0) that satisfies x(0) ∈ dom f_0 and f_i(x(0)) < 0, i = 1, ..., m, but not necessarily Ax(0) = b. Assuming the problem is strictly feasible, a full Newton step is taken at some point during the first centering step, and thereafter the iterates are all primal feasible, and the algorithm coincides with the (standard) barrier method.

11.3.2 Examples

Linear programming in inequality form

Our first example is a small LP in inequality form,

minimize c^T x
subject to Ax ≼ b,

with A ∈ R^{100×50}. The data were generated randomly, in such a way that the problem is strictly primal and dual feasible, with optimal value p⋆ = 1.

The initial point x(0) is on the central path, with a duality gap of 100. The barrier method is used to solve the problem, and terminated when the duality gap is less than 10^{-6}. The centering problems are solved by Newton's method with backtracking, using parameters α = 0.01, β = 0.5. The stopping criterion for Newton's method is λ(x)^2/2 ≤ 10^{-5}, where λ(x) is the Newton decrement of the function tc^T x + φ(x).

The progress of the barrier method, for three values of the parameter μ, is shown in figure 11.4. The vertical axis shows the duality gap on a log scale. The horizontal axis shows the cumulative total number of inner iterations, i.e., Newton steps, which is the natural measure of computational effort.

Figure 11.4 Progress of barrier method for a small LP, showing duality gap versus cumulative number of Newton steps. Three plots are shown, corresponding to three values of the parameter μ: 2, 50, and 150. In each case, we have approximately linear convergence of duality gap.

Each of the plots has a staircase shape, with each stair associated with one outer iteration. The width of each stair tread (i.e., horizontal portion) is the number of Newton steps required for that outer iteration. The height of each stair riser (i.e., the vertical portion) is exactly equal to (a factor of) μ, since the duality gap is reduced by the factor μ at the end of each outer iteration.

The plots illustrate several typical features of the barrier method. First of all, the method works very well, with approximately linear convergence of the duality gap. This is a consequence of the approximately constant number of Newton steps required to re-center, for each value of μ. For μ = 50 and μ = 150, the barrier method solves the problem with a total number of Newton steps between 35 and 40.

The plots in figure 11.4 clearly show the trade-off in the choice of μ. For μ = 2, the treads are short; the number of Newton steps required to re-center is around 2 or 3. But the risers are also short, since the duality gap reduction per outer iteration is only a factor of 2. At the other extreme, when μ = 150, the treads are longer, typically around 7 Newton steps, but the risers are also much larger, since the duality gap is reduced by the factor 150 in each outer iteration.

The trade-off in the choice of μ is further examined in figure 11.5. We use the barrier method to solve the LP, terminating when the duality gap is smaller than 10^{-3}, for 25 values of μ between 1.2 and 200. The plot shows the total number of Newton steps required to solve the problem, as a function of the parameter μ.
Figure 11.5 Trade-off in the choice of the parameter μ, for a small LP. The vertical axis shows the total number of Newton steps required to reduce the duality gap from 100 to 10^{-3}, and the horizontal axis shows μ. The plot shows that the barrier method works well for values of μ larger than around 3, but is otherwise not sensitive to the value of μ.

This plot shows that the barrier method performs very well for a wide range of values of μ, from around 3 to 200. As our intuition suggests, the total number of Newton steps rises when μ is too small, due to the larger number of outer iterations required. One interesting observation is that the total number of Newton steps does not vary much for values of μ larger than around 3; as μ increases over this range, the decrease in the number of outer iterations is offset by an increase in the number of Newton steps per outer iteration. For even larger values of μ, the performance of the barrier method becomes less predictable (i.e., more dependent on the particular problem instance). Since the performance does not improve with larger values of μ, a good choice is in the range 10 – 100.

Geometric programming

We consider a geometric program in convex form,

minimize log(Σ_{k=1}^{K_0} exp(a_{0k}^T x + b_{0k}))
subject to log(Σ_{k=1}^{K_i} exp(a_{ik}^T x + b_{ik})) ≤ 0, i = 1, ..., m,

with variable x ∈ R^n, and associated logarithmic barrier

φ(x) = −Σ_{i=1}^m log(−log Σ_{k=1}^{K_i} exp(a_{ik}^T x + b_{ik})).

The problem instance we consider has n = 50 variables and m = 100 inequalities (like the small LP considered above). The objective and constraint functions all have K_i = 5 terms. The problem instance was generated randomly, in such a way that it is strictly primal and dual feasible, with optimal value one.

We start with a point x(0) on the central path, with a duality gap of 100. The barrier method is used to solve the problem, with parameters μ = 2, μ = 50, and μ = 150, and terminated when the duality gap is less than 10^{-6}. The centering problems are solved using Newton's method, with the same parameter values as in the LP example, i.e., α = 0.01, β = 0.5, and stopping criterion λ(x)^2/2 ≤ 10^{-5}.

Figure 11.6 shows the duality gap versus cumulative number of Newton steps. This plot is very similar to the plot for LP, shown in figure 11.4. In particular, we see an approximately constant number of Newton steps required per centering step, and therefore approximately linear convergence of the duality gap.

Figure 11.6 Progress of barrier method for a small GP, showing duality gap versus cumulative number of Newton steps. Again we have approximately linear convergence of duality gap.

The variation of the total number of Newton steps required to solve the problem, versus the parameter μ, is very similar to that in the LP example. For this GP, the total number of Newton steps required to reduce the duality gap below 10^{-3} is around 30 (ranging from around 20 to 40 or so) for values of μ between 10 and 200. So here, too, a good choice of μ is in the range 10 – 100.

A family of standard form LPs

In the examples above we examined the progress of the barrier method, in terms of duality gap versus cumulative number of Newton steps, for a randomly generated instance of an LP and a GP, with similar dimensions.
The results for the two examples are remarkably similar; each shows approximately linear convergence of duality gap with the number of Newton steps. We also examined the variation in performance with the parameter μ, and found essentially the same results in the two cases. For μ above around 10, the barrier method performs very well, requiring around 30 Newton steps to bring the duality gap down from 10^2 to 10^{-6}. In both cases, the choice of μ hardly affects the total number of Newton steps required (provided μ is larger than 10 or so).

In this section we examine the performance of the barrier method as a function of the problem dimensions. We consider LPs in standard form,

minimize c^T x
subject to Ax = b, x ≽ 0,

with A ∈ R^{m×n}, and explore the total number of Newton steps required as a function of the number of variables n and number of equality constraints m, for a family of randomly generated problem instances. We take n = 2m, i.e., twice as many variables as constraints.

The problems were generated as follows. The elements of A are independent and identically distributed, with zero mean, unit variance normal distribution N(0, 1). We take b = Ax(0), where the elements of x(0) are independent, and uniformly distributed in [0, 1]. This ensures that the problem is strictly primal feasible, since x(0) ≻ 0 is feasible. To construct the cost vector c, we first compute a vector z ∈ R^m with elements distributed according to N(0, 1) and a vector s ∈ R^n with elements from a uniform distribution on [0, 1]. We then take c = A^T z + s. This guarantees that the problem is strictly dual feasible, since A^T z ≺ c.

The algorithm parameters we use are μ = 100, and the same parameters for the centering steps as in the examples above: backtracking parameters α = 0.01, β = 0.5, and stopping criterion λ(x)^2/2 ≤ 10^{-5}. The initial point is on the central path with t(0) = 1 (i.e., gap n). The algorithm is terminated when the initial duality gap is reduced by a factor 10^4, i.e., after completing two outer iterations.

Figure 11.7 shows the duality gap versus iteration number for three problem instances, with dimensions m = 50, m = 500, and m = 1000. The plots look very much like the others, with approximately linear convergence of the duality gap. They show a small increase in the number of Newton steps required as the problem size grows from 50 constraints (100 variables) to 1000 constraints (2000 variables).

To examine the effect of problem size on the number of Newton steps required, we generate 100 problem instances for each of 20 values of m, ranging from m = 10 to m = 1000. We solve each of these 2000 problems using the barrier method, noting the number of Newton steps required. The results are summarized in figure 11.8, which shows the mean and standard deviation in the number of Newton steps, for each value of m.

The first comment we make is that the standard deviation is around 2 iterations, and appears to be approximately independent of problem size. Since the average number of steps required is near 25, this means that the number of Newton steps required varies only around ±10%. The plot shows that the number of Newton steps required grows only slightly, from around 21 to around 27, as the problem dimensions increase by a factor of 100. This behavior is typical for the barrier method in general: The number of Newton steps required grows very slowly with problem dimensions, and is almost always around a few tens.
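The generation recipe just described is easy to reproduce; a sketch assuming numpy (the seeding and function name are ours):

```python
import numpy as np

def random_standard_lp(m, seed=0):
    """Strictly primal and dual feasible standard form LP with n = 2m:
    A ~ N(0,1), b = A x0 with x0 uniform on [0,1]^n, c = A^T z + s."""
    rng = np.random.default_rng(seed)
    n = 2 * m
    A = rng.standard_normal((m, n))
    x0 = rng.uniform(0.0, 1.0, n)        # strictly feasible point, x0 > 0
    b = A @ x0
    z = rng.standard_normal(m)
    s = rng.uniform(0.0, 1.0, n)
    c = A.T @ z + s                      # makes A^T z < c (strict dual feas.)
    return A, b, c, x0
```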
Of course, the computational effort to carry out one Newton step grows with the problem dimensions.

Figure 11.7 Progress of barrier method for three randomly generated standard form LPs of different dimensions (m = 50, m = 500, and m = 1000), showing duality gap versus cumulative number of Newton steps. The number of variables in each problem is n = 2m. Here too we see approximately linear convergence of the duality gap, with a slight increase in the number of Newton steps required for the larger problems.

Figure 11.8 Average number of Newton steps required to solve 100 randomly generated LPs of different dimensions, with n = 2m. Error bars show standard deviation, around the average value, for each value of m. The growth in the number of Newton steps required, as the problem dimensions range over a 100:1 ratio, is very small.

11.3.3 Convergence analysis

Convergence analysis for the barrier method is straightforward. Assuming that tf_0 + φ can be minimized by Newton's method for t = t(0), μt(0), μ^2 t(0), ..., the duality gap after the initial centering step, and k additional centering steps, is m/(μ^k t(0)). Therefore the desired accuracy ǫ is achieved after exactly

⌈ log(m/(ǫt(0))) / log μ ⌉ (11.13)

centering steps, plus the initial centering step.

It follows that the barrier method works provided the centering problem (11.6) is solvable by Newton's method, for t ≥ t(0). For the standard Newton method, it suffices that for t ≥ t(0), the function tf_0 + φ satisfies the conditions given in §10.2.4, page 529: its initial sublevel set is closed, the associated inverse KKT matrix is bounded, and the Hessian satisfies a Lipschitz condition. (Another set of sufficient conditions, based on self-concordance, will be discussed in detail in §11.5.) If the infeasible start Newton method is used for centering, then the conditions listed in §10.3.3, page 536, are sufficient to guarantee convergence.

Assuming that f_0, ..., f_m are closed, a simple modification of the original problem ensures that these conditions hold: by adding a constraint of the form ∥x∥_2^2 ≤ R^2 to the problem, tf_0 + φ becomes strongly convex for every t ≥ 0, and in particular convergence of Newton's method, for the centering steps, is guaranteed. (See exercise 11.4.)

While this analysis shows that the barrier method does converge, under reasonable assumptions, it does not address a basic question: As the parameter t increases, do the centering problems become more difficult (and therefore take more and more iterations)? Numerical evidence suggests that for a wide variety of problems this is not the case; the centering problems appear to require a nearly constant number of Newton steps to solve, even as t increases. We will see (in §11.5) that this issue can be resolved, for problems that satisfy certain self-concordance conditions.
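As a quick numerical check of (11.13): with m = 100 constraints, ǫ = 10^{-6}, t(0) = 1, and μ = 10, exactly ⌈log 10^8 / log 10⌉ = 8 centering steps are needed after the initial one.

```python
import math

def num_centering_steps(m, eps, t0, mu):
    """Number of centering steps (11.13), excluding the initial one."""
    return math.ceil(math.log(m / (eps * t0)) / math.log(mu))

print(num_centering_steps(100, 1e-6, 1.0, 10.0))   # -> 8
```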
11.3.4 Newton step for modified KKT equations

In the barrier method, the Newton step ∆x_nt and associated dual variable ν_nt are given by the linear equations

\[
\begin{bmatrix} t\nabla^2 f_0(x) + \nabla^2\phi(x) & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x_{\mathrm{nt}} \\ \nu_{\mathrm{nt}} \end{bmatrix}
= -\begin{bmatrix} t\nabla f_0(x) + \nabla\phi(x) \\ 0 \end{bmatrix}. \quad (11.14)
\]

In this section we show how these Newton steps for the centering problem can be interpreted as Newton steps for directly solving the modified KKT equations

∇f_0(x) + Σ_{i=1}^m λ_i ∇f_i(x) + A^T ν = 0
−λ_i f_i(x) = 1/t, i = 1, ..., m (11.15)
Ax = b

in a particular way.

To solve the modified KKT equations (11.15), which form a set of n + p + m nonlinear equations in the n + p + m variables x, ν, and λ, we first eliminate the variables λ_i, using λ_i = −1/(t f_i(x)). This yields

∇f_0(x) + Σ_{i=1}^m (1/(−t f_i(x))) ∇f_i(x) + A^T ν = 0, Ax = b, (11.16)

which is a set of n + p equations in the n + p variables x and ν.

To find the Newton step for solving the set of nonlinear equations (11.16), we form the Taylor approximation for the nonlinear term occurring in the first equation. For v small, we have the Taylor approximation

∇f_0(x + v) + Σ_{i=1}^m (1/(−t f_i(x + v))) ∇f_i(x + v)
≈ ∇f_0(x) + Σ_{i=1}^m (1/(−t f_i(x))) ∇f_i(x) + ∇^2 f_0(x)v + Σ_{i=1}^m (1/(−t f_i(x))) ∇^2 f_i(x)v + Σ_{i=1}^m (1/(t f_i(x)^2)) ∇f_i(x)∇f_i(x)^T v.

The Newton step is obtained by replacing the nonlinear term in equation (11.16) by this Taylor approximation, which yields the linear equations

Hv + A^T ν = −g, Av = 0, (11.17)

where

H = ∇^2 f_0(x) + Σ_{i=1}^m (1/(−t f_i(x))) ∇^2 f_i(x) + Σ_{i=1}^m (1/(t f_i(x)^2)) ∇f_i(x)∇f_i(x)^T,
g = ∇f_0(x) + Σ_{i=1}^m (1/(−t f_i(x))) ∇f_i(x).

Now we observe that

H = ∇^2 f_0(x) + (1/t)∇^2 φ(x), g = ∇f_0(x) + (1/t)∇φ(x),

so, from (11.14), the Newton steps ∆x_nt and ν_nt in the barrier method centering step satisfy

tH∆x_nt + A^T ν_nt = −tg, A∆x_nt = 0.

Comparing this with (11.17) shows that

v = ∆x_nt, ν = (1/t)ν_nt.

This shows that the Newton step for the centering problem (11.6) can be interpreted, after scaling the dual variable, as the Newton step for solving the modified KKT equations (11.16).

In this approach, we first eliminated the variable λ from the modified KKT equations, and then applied Newton's method to solve the resulting set of equations. Another variation on this approach is to directly apply Newton's method to the modified KKT equations, without first eliminating λ. This method yields the so-called primal-dual search directions, discussed in §11.7.

11.4 Feasibility and phase I methods

The barrier method requires a strictly feasible starting point x(0). When such a point is not known, the barrier method is preceded by a preliminary stage, called phase I, in which a strictly feasible point is computed (or the constraints are found to be infeasible). The strictly feasible point found during phase I is then used as the starting point for the barrier method, which is called the phase II stage. In this section we describe several phase I methods.

11.4.1 Basic phase I method

We consider a set of inequalities and equalities in the variables x ∈ R^n,

f_i(x) ≤ 0, i = 1, ..., m, Ax = b, (11.18)

where f_i : R^n → R are convex, with continuous second derivatives. We assume that we are given a point x(0) ∈ dom f_1 ∩ ··· ∩ dom f_m, with Ax(0) = b. Our goal is to find a strictly feasible solution of these inequalities and equalities, or determine that none exists. To do this we form the following optimization problem:

minimize s
subject to f_i(x) ≤ s, i = 1, ..., m (11.19)
Ax = b

in the variables x ∈ R^n, s ∈ R. The variable s can be interpreted as a bound on the maximum infeasibility of the inequalities; the goal is to drive the maximum infeasibility below zero.

This problem is always strictly feasible, since we can choose x(0) as starting point for x, and for s, we can choose any number larger than max_{i=1,...,m} f_i(x(0)). We can therefore apply the barrier method to solve the problem (11.19), which is called the phase I optimization problem associated with the inequality and equality system (11.18).

We can distinguish three cases, depending on the sign of the optimal value p̄⋆ of (11.19).

1. If p̄⋆ < 0, then (11.18) has a strictly feasible solution.
11.4 Feasibility and phase I methods

The barrier method requires a strictly feasible starting point x(0). When such a point is not known, the barrier method is preceded by a preliminary stage, called phase I, in which a strictly feasible point is computed (or the constraints are found to be infeasible). The strictly feasible point found during phase I is then used as the starting point for the barrier method, which is called the phase II stage. In this section we describe several phase I methods.

11.4.1 Basic phase I method

We consider a set of inequalities and equalities in the variables x ∈ Rⁿ,

    fᵢ(x) ≤ 0, i = 1, . . . , m,   Ax = b,    (11.18)

where fᵢ : Rⁿ → R are convex, with continuous second derivatives. We assume that we are given a point x(0) ∈ dom f₁ ∩ · · · ∩ dom fₘ, with Ax(0) = b.

Our goal is to find a strictly feasible solution of these inequalities and equalities, or determine that none exists. To do this we form the following optimization problem:

    minimize s
    subject to fᵢ(x) ≤ s, i = 1, . . . , m    (11.19)
               Ax = b

in the variables x ∈ Rⁿ, s ∈ R. The variable s can be interpreted as a bound on the maximum infeasibility of the inequalities; the goal is to drive the maximum infeasibility below zero.

This problem is always strictly feasible, since we can choose x(0) as starting point for x, and for s, we can choose any number larger than maxᵢ₌₁,...,ₘ fᵢ(x(0)). We can therefore apply the barrier method to solve the problem (11.19), which is called the phase I optimization problem associated with the inequality and equality system (11.18).

We can distinguish three cases depending on the sign of the optimal value p̄⋆ of (11.19).

1. If p̄⋆ < 0, then (11.18) has a strictly feasible solution. Moreover if (x, s) is feasible for (11.19) with s < 0, then x satisfies fᵢ(x) < 0. This means we do not need to solve the optimization problem (11.19) with high accuracy; we can terminate when s < 0.

2. If p̄⋆ > 0, then (11.18) is infeasible. As in case 1, we do not need to solve the phase I optimization problem (11.19) to high accuracy; we can terminate when a dual feasible point is found with positive dual objective (which proves that p̄⋆ > 0). In this case, we can construct the alternative that proves (11.18) is infeasible from the dual feasible point.

3. If p̄⋆ = 0 and the minimum is attained at x⋆ and s⋆ = 0, then the set of inequalities is feasible, but not strictly feasible. If p̄⋆ = 0 and the minimum is not attained, then the inequalities are infeasible.
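For linear inequalities Ax ≼ b, the phase I problem (11.19) is itself an LP in (x, s). A small sketch, using scipy's generic LP solver in place of the book's barrier method (an assumption of convenience; any LP solver works):

    import numpy as np
    from scipy.optimize import linprog

    def phase1_max_infeasibility(A, b):
        # minimize s  subject to  A x - s 1 <= b; s < 0 at the optimum
        # certifies a strictly feasible x for A x <= b.
        m, n = A.shape
        c = np.zeros(n + 1); c[-1] = 1.0
        A_ub = np.hstack([A, -np.ones((m, 1))])
        res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(None, None)] * (n + 1))
        return res.x[:n], res.x[-1]

    # Any (x0, s0) with s0 > max_i (a_i^T x0 - b_i) is strictly feasible for
    # the phase I LP, e.g. x0 = 0, s0 = max(-b) + 1.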


In practice it is impossible to determine exactly that p̄⋆ = 0. Instead, an optimization algorithm applied to (11.19) will terminate with the conclusion that |p̄⋆| < ǫ for some small, positive ǫ. This allows us to conclude that the inequalities fᵢ(x) ≤ −ǫ are infeasible, while the inequalities fᵢ(x) ≤ ǫ are feasible.

Sum of infeasibilities

There are many variations on the basic phase I method just described. One method is based on minimizing the sum of the infeasibilities, instead of the maximum infeasibility. We form the problem

    minimize 1ᵀs
    subject to fᵢ(x) ≤ sᵢ, i = 1, . . . , m    (11.20)
               Ax = b
               s ≽ 0.

For fixed x, the optimal value of sᵢ is max{fᵢ(x), 0}, so in this problem we are minimizing the sum of the infeasibilities. The optimal value of (11.20) is zero and achieved if and only if the original set of equalities and inequalities is feasible.

This sum of infeasibilities phase I method has a very interesting property when the system of equalities and inequalities (11.18) is infeasible. In this case, the optimal point for the phase I problem (11.20) often violates only a small number, say r, of the inequalities. Therefore, we have computed a point that satisfies many (m − r) of the inequalities, i.e., we have identified a large subset of inequalities that is feasible. In this case, the dual variables associated with the strictly satisfied inequalities are zero, so we have also proved infeasibility of a subset of the inequalities. This is more informative than finding that the m inequalities, together, are mutually infeasible. (This phenomenon is closely related to ℓ1-norm regularization, or basis pursuit, used to find sparse approximate solutions; see §6.1.2 and §6.5.4.)

Example 11.4 Comparison of phase I methods. We apply two phase I methods to an infeasible set of inequalities Ax ≼ b with dimensions m = 100, n = 50. The first method is the basic phase I method

    minimize s
    subject to Ax ≼ b + 1s,

which minimizes the maximum infeasibility. The second method minimizes the sum of the infeasibilities, i.e., solves the LP

    minimize 1ᵀs
    subject to Ax ≼ b + s
               s ≽ 0.

Figure 11.9 shows the distributions of the infeasibilities bᵢ − aᵢᵀx for these two values of x, denoted x_max and x_sum, respectively. The point x_max satisfies 39 of the 100 inequalities, whereas the point x_sum satisfies 79 of the inequalities.

Figure 11.9 Distributions of the infeasibilities bᵢ − aᵢᵀx for an infeasible set of 100 inequalities aᵢᵀx ≤ bᵢ, with 50 variables. The vector x_max used in the left plot was obtained by the basic phase I algorithm; it satisfies 39 of the 100 inequalities. In the right plot the vector x_sum was obtained by minimizing the sum of the infeasibilities; this vector satisfies 79 of the 100 inequalities.
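A matching sketch of the sum-of-infeasibilities formulation (11.20) for linear inequalities, again with scipy's LP solver standing in for the barrier method; per the discussion above, zero entries of the optimal s identify a feasible subset of the inequalities.

    import numpy as np
    from scipy.optimize import linprog

    def phase1_sum_infeasibilities(A, b):
        # minimize 1^T s  subject to  A x <= b + s,  s >= 0
        m, n = A.shape
        c = np.concatenate([np.zeros(n), np.ones(m)])
        A_ub = np.hstack([A, -np.eye(m)])
        bounds = [(None, None)] * n + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds)
        x, s = res.x[:n], res.x[n:]
        return x, s   # inequalities with s_i = 0 are satisfied by x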
Termination near the phase II central path

A simple variation on the basic phase I method, using the barrier method, has the property that (when the equalities and inequalities are strictly feasible) the central path for the phase I problem intersects the central path for the original optimization problem (11.1). We assume a point x(0) ∈ D = dom f₀ ∩ dom f₁ ∩ · · · ∩ dom fₘ, with Ax(0) = b, is given. We form the phase I optimization problem

    minimize s
    subject to fᵢ(x) ≤ s, i = 1, . . . , m    (11.21)
               f₀(x) ≤ M
               Ax = b,

where M is a constant chosen to be larger than max{f₀(x(0)), p⋆}.

We assume now that the original problem (11.1) is strictly feasible, so the optimal value p̄⋆ of (11.21) is negative. The central path of (11.21) is characterized by

    Σᵢ₌₁ᵐ 1/(s − fᵢ(x)) = t̄,
    (1/(M − f₀(x))) ∇f₀(x) + Σᵢ₌₁ᵐ (1/(s − fᵢ(x))) ∇fᵢ(x) + Aᵀν = 0,

where t̄ is the parameter. If (x, s) is on the central path and s = 0, then x and ν satisfy

    t∇f₀(x) + Σᵢ₌₁ᵐ (1/(−fᵢ(x))) ∇fᵢ(x) + Aᵀν = 0

for t = 1/(M − f₀(x)). This means that x is on the central path for the original optimization problem (11.1), with associated duality gap

    m(M − f₀(x)) ≤ m(M − p⋆).    (11.22)

11.4.2 Phase I via infeasible start Newton method

We can also carry out the phase I stage using an infeasible start Newton method, applied to a modified version of the original problem

    minimize f₀(x)
    subject to fᵢ(x) ≤ 0, i = 1, . . . , m
               Ax = b.

We first express the problem in the (obviously equivalent) form

    minimize f₀(x)
    subject to fᵢ(x) ≤ s, i = 1, . . . , m
               Ax = b,  s = 0,

with the additional variable s ∈ R. To start the barrier method, we use an infeasible start Newton method to solve

    minimize t(0)f₀(x) − Σᵢ₌₁ᵐ log(s − fᵢ(x))
    subject to Ax = b,  s = 0.

This can be initialized with any x ∈ D, and any s > maxᵢ fᵢ(x). Provided the problem is strictly feasible, the infeasible start Newton method will eventually take an undamped step, and thereafter we will have s = 0, i.e., x strictly feasible.
The same trick can be applied if a point in D, the common domain of the functions, is not known. We simply apply the infeasible start Newton method to the problem
    minimize t(0)f₀(x + z₀) − Σᵢ₌₁ᵐ log(s − fᵢ(x + zᵢ))
    subject to Ax = b,  s = 0,  z₀ = 0, . . . , zₘ = 0,
with variables x, z₀, . . . , zₘ, and s ∈ R. We initialize zᵢ so that x + zᵢ ∈ dom fᵢ. The main disadvantage of this approach to the phase I problem is that there is no good stopping criterion when the problem is infeasible; the residual simply fails
to converge to zero.
11.4.3 Examples
We consider a family of linear feasibility problems,

    Ax ≼ b(γ),

where A ∈ R⁵⁰ˣ²⁰ and b(γ) = b + γ∆b. The problem data are chosen so that the inequalities are strictly feasible for γ > 0 and infeasible for γ < 0. For γ = 0 the problem is feasible but not strictly feasible.

Figure 11.10 shows the total number of Newton steps required to find a strictly feasible point, or a certificate of infeasibility, for 40 values of γ in [−1, 1]. We use the basic phase I method of §11.4.1, i.e., for each value of γ, we form the LP

    minimize s
    subject to Ax ≼ b(γ) + s1.

The barrier method is used with μ = 10, and starting point x = 0, s = −minᵢ bᵢ(γ) + 1. The method terminates when a point (x, s) with s < 0 is found, or a feasible solution z of the dual problem

    maximize −b(γ)ᵀz
    subject to Aᵀz = 0,  1ᵀz = 1,  z ≽ 0

is found with −b(γ)ᵀz > 0.
The plot shows that when the inequalities are feasible, with some margin, it
takes around 25 Newton steps to produce a strictly feasible point. Conversely, when the inequalities are infeasible, again with some margin, it takes around 35 steps to produce a certificate proving infeasibility. The phase I effort increases as the set of inequalities approaches the boundary between feasible and infeasible, i.e., γ near zero. When γ is very near zero, so the inequalities are very near the boundary between feasible and infeasible, the number of steps grows substantially. Figure 11.11 shows the total number of Newton steps required for values of γ near zero. The plots show an approximately logarithmic increase in the number of steps required to detect feasibility, or prove infeasibility, for problems very near the boundary between feasible and infeasible.
This example is typical: The cost of solving a set of convex inequalities and linear equalities using the barrier method is modest, and approximately constant, as long as the problem is not very close to the boundary between feasibility and infeasibility. When the problem is very close to the boundary, the number of Newton steps required to find a strictly feasible point or produce a certificate of infeasibility grows. When the problem is exactly on the boundary between strictly feasible and infeasible, for example, feasible but not strictly feasible, the cost becomes infinite.
Feasibility using infeasible start Newton method
We also solve the same set of feasibility problems using the infeasible start Newton method, applied to the problem
    minimize −Σᵢ₌₁ᵐ log sᵢ
    subject to Ax + s = b(γ).
We use backtracking parameters α = 0.01, β = 0.9, and initialize with x(0) = 0, s(0) = 1, ν(0) = 0. We consider only feasible problems (i.e., γ > 0) and terminate once a feasible point is found. (We do not consider infeasible problems, since in that case the residual simply converges to a positive number.) Figure 11.12 shows the number of Newton steps required to find a feasible point, as a function of γ.

Figure 11.10 Number of Newton iterations required to detect feasibility or infeasibility of a set of linear inequalities Ax ≼ b + γ∆b parametrized by γ ∈ R. The inequalities are strictly feasible for γ > 0, and infeasible for γ < 0. For γ larger than around 0.2, about 30 steps are required to compute a strictly feasible point; for γ less than −0.5 or so, it takes around 35 steps to produce a certificate proving infeasibility. For values of γ in between, and especially near zero, more Newton steps are required to determine feasibility.

Figure 11.11 Left. Number of Newton iterations required to find a proof of infeasibility versus γ, for γ small and negative. Right. Number of Newton iterations required to find a strictly feasible point versus γ, for γ small and positive.

Figure 11.12 Number of iterations required to find a feasible point for a set of linear inequalities Ax ≼ b + γ∆b parametrized by γ ∈ R. The infeasible start Newton method is used, and terminated when a feasible point is found. For γ = 10, the starting point x(0) = 0 happened to be feasible (0 iterations). The plot shows that for γ larger than 0.3 or so, it takes fewer than 20 Newton steps to find a feasible point. In these cases the method is more efficient than a phase I method, which takes a total of around 30 Newton steps. For smaller values of γ, the number of Newton steps required grows dramatically, approximately as 1/γ. For γ = 0.01, the infeasible start Newton method requires several thousand iterations to produce a feasible point. In this region the phase I approach is far more efficient, requiring only 40 iterations or so.

These results are quite typical. The infeasible start Newton method works very well provided the inequalities are feasible, and not very close to the boundary between feasible and infeasible. But when the feasible set is just barely nonempty (as is the case in this example with small γ), a phase I method is far better. Another advantage of the phase I method is that it gracefully handles the infeasible case; the infeasible start Newton method, in contrast, simply fails to converge.

11.5 Complexity analysis via self-concordance

Using the complexity analysis of Newton's method for self-concordant functions (§9.6.4, page 503, and §10.2.4, page 531), we can give a complexity analysis of the barrier method. The analysis applies to many common problems, and leads to several interesting conclusions: It gives a rigorous bound on the total number of Newton steps required to solve a problem using the barrier method, and it justifies our observation that the centering problems do not become more difficult as t increases.

11.5.1 Self-concordance assumption

We make two assumptions.

• The function tf₀ + φ is closed and self-concordant for all t ≥ t(0).
• The sublevel sets of (11.1) are bounded.

The second assumption implies that the centering problem has bounded sublevel sets (see exercise 11.3), and, therefore, the centering problem is solvable. The bounded sublevel set assumption also implies that the Hessian of tf₀ + φ is positive definite everywhere (see exercise 11.14).
While the self-concordance assumption restricts the complexity analysis to a particular class of problems, it is important to emphasize that the barrier method works well in general, whether or not the self-concordance assumption holds.

The self-concordance assumption holds for a variety of problems, including all linear and quadratic problems. If the functions fᵢ are linear or quadratic, then

    tf₀(x) + φ(x) = tf₀(x) − Σᵢ₌₁ᵐ log(−fᵢ(x))

is self-concordant for all values of t ≥ 0 (see §9.6). The complexity analysis given below therefore applies to LPs, QPs, and QCQPs.

In other cases, it is possible to reformulate the problem so the assumption of self-concordance holds. As an example, consider the linear inequality constrained entropy maximization problem

    minimize Σᵢ₌₁ⁿ xᵢ log xᵢ
    subject to Fx ≼ g
               Ax = b.

The function

    tf₀(x) + φ(x) = t Σᵢ₌₁ⁿ xᵢ log xᵢ − Σᵢ₌₁ᵐ log(gᵢ − fᵢᵀx),

where f₁ᵀ, . . . , fₘᵀ are the rows of F, is not closed (unless Fx ≼ g implies x ≽ 0), or self-concordant. We can, however, add the redundant inequality constraints x ≽ 0 to obtain the equivalent problem

    minimize Σᵢ₌₁ⁿ xᵢ log xᵢ
    subject to Fx ≼ g
               Ax = b
               x ≽ 0.    (11.23)

For this problem we have

    tf₀(x) + φ(x) = t Σᵢ₌₁ⁿ xᵢ log xᵢ − Σᵢ₌₁ⁿ log xᵢ − Σᵢ₌₁ᵐ log(gᵢ − fᵢᵀx),

which is self-concordant and closed, for any t ≥ 0. (The function ty log y − log y is self-concordant on R₊₊, for all t ≥ 0; see exercise 11.13.) The complexity analysis therefore applies to the reformulated linear inequality constrained entropy maximization problem (11.23).

As a more exotic example, consider the GP

    minimize f₀(x) = log( Σₖ₌₁^{K₀} exp(a₀ₖᵀx + b₀ₖ) )
    subject to log( Σₖ₌₁^{Kᵢ} exp(aᵢₖᵀx + bᵢₖ) ) ≤ 0, i = 1, . . . , m.

It is not clear whether or not the function

    tf₀(x) + φ(x) = t log( Σₖ₌₁^{K₀} exp(a₀ₖᵀx + b₀ₖ) ) − Σᵢ₌₁ᵐ log( −log Σₖ₌₁^{Kᵢ} exp(aᵢₖᵀx + bᵢₖ) )

is self-concordant, so although the barrier method works, the complexity analysis of this section need not hold.

We can, however, reformulate the GP in a form that definitely satisfies the self-concordance assumption. For each (monomial) term exp(aᵢₖᵀx + bᵢₖ) we introduce a new variable yᵢₖ that serves as an upper bound,

    exp(aᵢₖᵀx + bᵢₖ) ≤ yᵢₖ.

Using these new variables we can express the GP in the form

    minimize Σₖ₌₁^{K₀} y₀ₖ
    subject to Σₖ₌₁^{Kᵢ} yᵢₖ ≤ 1, i = 1, . . . , m
               aᵢₖᵀx + bᵢₖ − log yᵢₖ ≤ 0, i = 0, . . . , m, k = 1, . . . , Kᵢ
               yᵢₖ ≥ 0, i = 0, . . . , m, k = 1, . . . , Kᵢ.

The associated logarithmic barrier is

    Σᵢ₌₀ᵐ Σₖ₌₁^{Kᵢ} ( −log yᵢₖ − log(log yᵢₖ − aᵢₖᵀx − bᵢₖ) ) − Σᵢ₌₁ᵐ log( 1 − Σₖ₌₁^{Kᵢ} yᵢₖ ),

which is closed and self-concordant (example 9.8, page 500). Since the objective is linear, it follows that tf₀ + φ is closed and self-concordant for any t.

11.5.2 Newton iterations per centering step

The complexity theory of Newton's method for self-concordant functions, developed in §9.6.4 (page 503) and §10.2.4 (page 531), shows that the number of Newton iterations required to minimize a closed strictly convex self-concordant function f is bounded above by

    (f(x) − p⋆)/γ + c.    (11.24)

Here x is the starting point for Newton's method, and p⋆ = inf_x f(x) is the optimal value. The constant γ depends only on the backtracking parameters α and β, and is given by

    1/γ = (20 − 8α)/(αβ(1 − 2α)²).

The constant c depends only on the tolerance ǫ_nt,

    c = log₂ log₂(1/ǫ_nt),

and can reasonably be approximated as c = 6.
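The constants in (11.24) are explicit, so the bound is easy to compute. A small sketch (our helper, with typical backtracking parameters assumed):

    import math

    def newton_bound(f_gap, alpha=0.1, beta=0.7, eps_nt=1e-10):
        # (f(x) - p*)/gamma + c, with gamma = alpha*beta*(1-2*alpha)^2/(20-8*alpha)
        gamma = alpha * beta * (1 - 2 * alpha)**2 / (20 - 8 * alpha)
        c = math.log2(math.log2(1 / eps_nt))   # the text approximates c = 6
        return f_gap / gamma + c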
The expression (11.24) is a quite conservative bound on the number of Newton steps required, but our interest in this section is only to establish a complexity bound, concentrating on how it increases with problem size and algorithm parameters.

In this section we use this result to derive a bound on the number of Newton steps required for one outer iteration of the barrier method, i.e., for computing x⋆(μt), starting from x⋆(t). To lighten the notation we use x to denote x⋆(t), the current iterate, and we use x⁺ to denote x⋆(μt), the next iterate. We use λ and ν to denote λ⋆(t) and ν⋆(t), respectively.

The self-concordance assumption implies that

    ( μtf₀(x) + φ(x) − μtf₀(x⁺) − φ(x⁺) )/γ + c    (11.25)

is an upper bound on the number of Newton steps required to compute x⁺ = x⋆(μt), starting at x = x⋆(t). Unfortunately we do not know x⁺, and hence the upper bound (11.25), until we actually compute x⁺, i.e., carry out the Newton algorithm (whereupon we know the exact number of Newton steps required to compute x⋆(μt), which defeats the purpose). We can, however, derive an upper bound on (11.25), as follows:

    μtf₀(x) + φ(x) − μtf₀(x⁺) − φ(x⁺)
      = μtf₀(x) − μtf₀(x⁺) + Σᵢ₌₁ᵐ log(−μtλᵢfᵢ(x⁺)) − m log μ
      ≤ μtf₀(x) − μtf₀(x⁺) − μt Σᵢ₌₁ᵐ λᵢfᵢ(x⁺) − m − m log μ
      = μtf₀(x) − μt( f₀(x⁺) + Σᵢ₌₁ᵐ λᵢfᵢ(x⁺) + νᵀ(Ax⁺ − b) ) − m − m log μ
      ≤ μtf₀(x) − μtg(λ, ν) − m − m log μ
      = m(μ − 1 − log μ).

This chain of equalities and inequalities needs some explanation. To obtain the second line from the first, we use λᵢ = −1/(tfᵢ(x)). In the first inequality we use the fact that log a ≤ a − 1 for a > 0. To obtain the fourth line from the third, we use Ax⁺ = b, so the extra term νᵀ(Ax⁺ − b) is zero. The second inequality follows

from the definition of the dual function:

    g(λ, ν) = inf_z ( f₀(z) + Σᵢ₌₁ᵐ λᵢfᵢ(z) + νᵀ(Az − b) )
            ≤ f₀(x⁺) + Σᵢ₌₁ᵐ λᵢfᵢ(x⁺) + νᵀ(Ax⁺ − b).

The last line follows from g(λ, ν) = f₀(x) − m/t.

The conclusion is that

    m(μ − 1 − log μ)/γ + c    (11.26)

is an upper bound on (11.25), and therefore an upper bound on the number of Newton steps required for one outer iteration of the barrier method. The function μ − 1 − log μ is shown in figure 11.13. For small μ it is approximately quadratic; for large μ it grows approximately linearly. This fits with our intuition that for μ near one, the number of Newton steps required to center is small, whereas for large μ, it could well grow.

Figure 11.13 The function μ − 1 − log μ, versus μ. The number of Newton steps required for one outer iteration of the barrier method is bounded by (m/γ)(μ − 1 − log μ) + c.

The bound (11.26) shows that the number of Newton steps required in each centering step is bounded by a quantity that depends mostly on μ, the factor by which t is updated in each outer step of the barrier method, and m, the number of inequality constraints in the problem. It also depends, weakly, on the parameters α and β used in the line search for the inner iterations, and in a very weak way on the tolerance used to terminate the inner iterations. It is interesting to note that the bound does not depend on n, the dimension of the variable, or p, the number of equality constraints, or the particular values of the problem data, i.e., the objective and constraint functions (provided the self-concordance assumption in §11.5.1 holds). Finally, we note that it does not depend on t; in particular, as t → ∞, a uniform bound on the number of Newton steps per outer iteration holds.

11.5.3 Total number of Newton iterations
We can now give an upper bound on the total number of Newton steps in the barrier method, not counting the initial centering step (which we will analyze later, as part of phase I). We multiply (11.26), which bounds the number of Newton steps per outer iteration, by (11.13), the number of outer steps required, to obtain
    N = ⌈ log(m/(t(0)ǫ)) / log μ ⌉ ( m(μ − 1 − log μ)/γ + c ),    (11.27)
an upper bound on the total number of Newton steps required. This formula shows that when the self-concordance assumption holds, we can bound the number of Newton steps required by the barrier method, for any value of μ > 1.
If we fix μ and m, the bound N is proportional to log(m/(t(0)ǫ)), which is the log of the ratio of the initial duality gap m/t(0) to the final duality gap ǫ, i.e., the log of the required duality gap reduction. We can therefore say that the barrier method converges at least linearly, since the number of steps required to reach a given precision grows logarithmically with the inverse of the precision.
If μ, and the required duality gap reduction factor, are fixed, the bound N grows linearly with m, the number of inequalities. The bound N is independent of the other problem dimensions n and p, and the particular problem data or functions. We will see below that by a particular choice of μ, that depends on m, we can obtain a bound on the number of Newton steps that grows only as √m, instead of as m.
Finally, we analyze the bound N as a function of the algorithm parameter μ. As μ approaches one, the first term in N grows large, and therefore so does N. This is consistent with our intuition and observation that for μ near one, the number of outer iterations is very large. As μ becomes large, the bound N grows approximately as μ/ log μ, this time because the bound on the number of Newton iterations required per outer iteration grows. This, too, is consistent with our observations. As a result, the bound N has a minimum value as a function of μ.
The variation of the bound with the parameter μ is illustrated in figure 11.14, which shows the bound (11.27) versus μ for the values
    c = 6,  γ = 1/375,  m/(t(0)ǫ) = 10⁵,  m = 100.
The bound is qualitatively consistent with intuition, and our observations: it grows very large as μ approaches one, and increases, more slowly, as μ becomes large. The bound N has a minimum at μ ≈ 1.02, which gives a bound on the total number of Newton iterations around 8000. The complexity analysis of Newton’s method is conservative, but the basic trade-off in the choice of μ is reflected in the plot. (In practice, far larger values of μ, from around 2 to 100, work very well, and require a total number of Newton iterations on the order of a few tens.)
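The bound (11.27) and its minimizing μ can be checked numerically. A short sketch using the same constants as figure 11.14 below (illustrative values, not problem data); a grid search recovers a minimizer near μ ≈ 1.02 with a bound of roughly 8000.

    import math

    def total_newton_bound(mu, m=100, gamma=1/375., c=6, gap_reduction=1e5):
        # Bound (11.27): ceil(log(gap)/log(mu)) * (m*(mu - 1 - log mu)/gamma + c)
        outer = math.ceil(math.log(gap_reduction) / math.log(mu))
        return outer * (m * (mu - 1 - math.log(mu)) / gamma + c)

    mus = [1 + k / 1000.0 for k in range(1, 1000)]
    mu_best = min(mus, key=total_newton_bound)   # about 1.02, bound near 8000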
Figure 11.14 The upper bound N on the total number of Newton iterations, given by equation (11.27), for c = 6, γ = 1/375, m = 100, and a duality gap reduction factor m/(t(0)ǫ) = 10⁵, versus the barrier algorithm parameter μ.

Choosing μ as a function of m

When μ (and the required duality gap reduction) is fixed, the bound (11.27) grows linearly with m, the number of inequalities. It turns out we can obtain a better exponent for m by making μ a function of m. Suppose we choose

    μ = 1 + 1/√m.    (11.28)

Then we can bound the second term in (11.27) as

    μ − 1 − log μ = 1/√m − log(1 + 1/√m)
                  ≤ 1/√m − 1/√m + 1/(2m)
                  = 1/(2m)

(using −log(1 + a) ≤ −a + a²/2 for a ≥ 0). Using concavity of the logarithm, we also have

    log μ = log(1 + 1/√m) ≥ (log 2)/√m.

Using these inequalities we can bound the total number of Newton steps by

    N ≤ ⌈ log(m/(t(0)ǫ)) / log μ ⌉ ( m(μ − 1 − log μ)/γ + c )
      ≤ ⌈ √m log(m/(t(0)ǫ)) / log 2 ⌉ ( 1/(2γ) + c )
      = ⌈ √m log₂(m/(t(0)ǫ)) ⌉ ( 1/(2γ) + c )
      ≤ c₁ + c₂√m,    (11.29)

where

    c₁ = 1/(2γ) + c,  c₂ = log₂(m/(t(0)ǫ)) ( 1/(2γ) + c ).

Here c₁ depends (and only weakly) on algorithm parameters for the centering Newton steps, and c₂ depends on these and the required duality gap reduction. Note that the term log₂(m/(t(0)ǫ)) is exactly the number of bits of required duality gap reduction. For fixed duality gap reduction, the bound (11.29) grows as √m, whereas the bound N in (11.27) grows like m, if the parameter μ is held constant. For this reason the barrier method, with parameter value (11.28), is said to be an order √m method.

In practice, we would not use the value μ = 1 + 1/√m, which is far too small, or even decrease μ as a function of m. Our only interest in this value of μ is that it (approximately) minimizes our (very conservative) upper bound on the number of Newton steps, and yields an overall estimate that grows as √m, instead of m.

11.5.4 Feasibility problems

In this section we analyze the complexity of a (minor) variation on the basic phase I method described in §11.4.1, used to solve a set of convex inequalities,

    f₁(x) ≤ 0, . . . , fₘ(x) ≤ 0,    (11.30)

where f₁, . . . , fₘ are convex, with continuous second derivatives. (We will consider equality constraints later.) We assume that the phase I problem

    minimize s
    subject to fᵢ(x) ≤ s, i = 1, . . . , m    (11.31)

satisfies the conditions in §11.5.1. In particular we assume that the feasible set of the inequalities (11.30) (which of course can be empty) is contained in a Euclidean ball of radius R:

    {x | fᵢ(x) ≤ 0, i = 1, . . . , m} ⊆ {x | ∥x∥₂ ≤ R}.

We can interpret R as a prior bound on the norm of any point in the feasible set of the inequalities. This assumption implies that the sublevel sets of the phase I problem are bounded. Without loss of generality, we will start the phase I method at the point x = 0. We define F = maxᵢ fᵢ(0), which is the maximum constraint violation, assumed to be positive (since otherwise x = 0 satisfies the inequalities (11.30)).

We define p̄⋆ as the optimal value of the phase I optimization problem (11.31). The sign of p̄⋆ determines whether or not the set of inequalities (11.30) is feasible. The magnitude of p̄⋆ also has a meaning. If p̄⋆ is positive and large (say, near F, the largest value it can have) it means that the set of inequalities is quite infeasible, in the sense that for each x, at least one of the inequalities is substantially violated (by at least p̄⋆). On the other hand, if p̄⋆ is negative and large, it means that the set of inequalities is quite feasible, in the sense that there is not only an x for which fᵢ(x) are all nonpositive, but in fact there is an x for which fᵢ(x) are all quite negative (no more than p̄⋆). Thus, the magnitude |p̄⋆| is a measure of how clearly the set of inequalities is feasible or infeasible, and therefore related to the difficulty of determining feasibility of the inequalities (11.30). In particular, if |p̄⋆| is small, it means the problem is near the boundary between feasibility and infeasibility.

To determine feasibility of the inequalities, we use a variation on the basic phase I problem (11.31). We add a redundant linear inequality aᵀx ≤ 1, to obtain

    minimize s
    subject to fᵢ(x) ≤ s, i = 1, . . . , m    (11.32)
               aᵀx ≤ 1.

We will specify a later. Our choice will satisfy ∥a∥₂ ≤ 1/R, so ∥x∥₂ ≤ R implies aᵀx ≤ 1, i.e., the extra constraint is redundant.

We will choose a and s₀ so that x = 0, s = s₀ is on the central path of the problem (11.32), with a parameter value t(0), i.e., they minimize

    t(0)s − Σᵢ₌₁ᵐ log(s − fᵢ(x)) − log(1 − aᵀx).

Setting to zero the derivative with respect to s, we get

    t(0) = Σᵢ₌₁ᵐ 1/(s₀ − fᵢ(0)).    (11.33)

Setting to zero the gradient with respect to x yields

    a = −Σᵢ₌₁ᵐ (1/(s₀ − fᵢ(0))) ∇fᵢ(0).    (11.34)

So it remains only to pick the parameter s₀; once we have chosen s₀, the vector a is given by (11.34), and the parameter t(0) is given by (11.33). Since x = 0 and s = s₀ must be strictly feasible for the phase I problem (11.32), we must choose s₀ > F.

We must also pick s₀ to make sure that ∥a∥₂ ≤ 1/R. From (11.34), we have

    ∥a∥₂ ≤ Σᵢ₌₁ᵐ (1/(s₀ − fᵢ(0))) ∥∇fᵢ(0)∥₂ ≤ mG/(s₀ − F),

where G = maxᵢ ∥∇fᵢ(0)∥₂. Therefore we can take s₀ = mGR + F, which ensures ∥a∥₂ ≤ 1/R, so the extra linear inequality is redundant.

Using (11.33), we have

    t(0) = Σᵢ₌₁ᵐ 1/(mGR + F − fᵢ(0)) ≥ 1/(mGR),

since F = maxᵢ fᵢ(0). Thus x = 0, s = s₀ are on the central path for the phase I problem (11.32), with initial duality gap

    (m + 1)/t(0) ≤ (m + 1)mGR.
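The quantities s₀, a, and t(0) in (11.33) and (11.34) can be computed directly from the values and gradients of the fᵢ at x = 0. A sketch (our function; the inputs are assumed given):

    import numpy as np

    def phase1_central_start(f_at_0, grads_at_0, R):
        # s0 = mGR + F; then a via (11.34) and t0 via (11.33).
        f0 = np.asarray(f_at_0)
        m = f0.size
        F = f0.max()                                  # maximum violation at x = 0
        G = max(np.linalg.norm(g) for g in grads_at_0)
        s0 = m * G * R + F
        w = 1.0 / (s0 - f0)
        a = -sum(wi * gi for wi, gi in zip(w, grads_at_0))   # (11.34)
        t0 = w.sum()                                          # (11.33), >= 1/(mGR)
        return s0, a, t0   # norm(a) <= 1/R, so a^T x <= 1 is redundant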

To solve the original inequalities (11.30) we need to determine the sign of p̄⋆. We use the barrier method to solve (11.32), starting from a central point with duality gap no more than (m + 1)mGR, and terminating when (or before) the duality gap for (11.32) is less than |p̄⋆|. We can stop when either the primal objective value of (11.32) is negative, or the dual objective value is positive; one of these two cases must occur when the duality gap is less than |p̄⋆|. Using the results of the previous section, this requires no more than

    ⌈ √(m+1) log₂( (m+1)mGR / |p̄⋆| ) ⌉ ( 1/(2γ) + c )    (11.35)

Newton steps. (Here we take μ = 1 + 1/√(m+1), which gives a better complexity exponent for m than a fixed value of μ.)

The bound (11.35) grows only slightly faster than √m, and depends weakly on the algorithm parameters used in the centering steps. It is approximately proportional to log₂((GR)/|p̄⋆|), which can be interpreted as a measure of how difficult the particular feasibility problem is, or how close it is to the boundary between feasibility and infeasibility.

Feasibility problems with equality constraints

We can apply the same analysis to feasibility problems that include equality constraints, by eliminating the equality constraints. This does not affect the self-concordance of the problem, but it does mean that G and R refer to the reduced, or eliminated, problem.

11.5.5 Combined phase I/phase II complexity

In this section we give an end-to-end complexity analysis for solving the problem

    minimize f₀(x)
    subject to fᵢ(x) ≤ 0, i = 1, . . . , m
               Ax = b

using (a variation on) the barrier method. First we solve the phase I problem

    minimize s
    subject to fᵢ(x) ≤ s, i = 1, . . . , m
               f₀(x) ≤ M
               Ax = b
               aᵀx ≤ 1,

which we assume satisfies the self-concordance and bounded sublevel set assumptions of §11.5.1. Here we have added two redundant inequalities to the basic phase I problem. The constraint f₀(x) ≤ M is added to guarantee that the phase I central path intersects the central path for phase II, as described in §11.4.1 (see (11.21)). The number M is a prior bound on the optimal value of the problem. The second added constraint is the linear inequality aᵀx ≤ 1, where a is chosen
as described in §11.5.4. We use the barrier method to solve this problem, with μ = 1 + 1/√(m+2), and the starting points x = 0, s = s₀ given in §11.5.4.
To either find a strictly feasible point, or determine the problem is infeasible, requires no more than
    N_I = ⌈ √(m+2) log₂( (m+1)(m+2)GR / |p̄⋆| ) ⌉ ( 1/(2γ) + c )    (11.36)
Newton steps, where G and R are as given in §11.5.4. If the problem is infeasible we are done; if it is feasible, then we find a point in phase I, associated with s = 0, that lies on the central path of the phase II problem
    minimize f₀(x)
    subject to fᵢ(x) ≤ 0, i = 1, . . . , m
               Ax = b
               aᵀx ≤ 1.
The associated initial duality gap of this initial point is no more than (m + 1)(M − p⋆) (see (11.22)). We assume the phase II problem also satisfies the self-concordance and bounded sublevel set assumptions in §11.5.1.
We now proceed to phase II, again using the barrier method. We must reduce the duality gap from its initial value, which is no more than (m + 1)(M − p⋆), to some tolerance ǫ > 0. This takes at most
    N_II = ⌈ √(m+1) log₂( (m+1)(M − p⋆)/ǫ ) ⌉ ( 1/(2γ) + c )    (11.37)
Newton steps.
The total number of Newton steps is therefore no more than N_I + N_II. This bound grows with the number of inequalities m approximately as √m, and includes two terms that depend on the particular problem instance,

    log₂( GR / |p̄⋆| ),   log₂( (M − p⋆)/ǫ ).
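Both bounds are mechanical to evaluate; a sketch with illustrative inputs (γ and c as in the earlier numerical example):

    import math

    def combined_bounds(m, G, R, pbar_abs, M_minus_pstar, eps, gamma=1/375., c=6):
        per = 1 / (2 * gamma) + c
        NI = math.ceil(math.sqrt(m + 2) *
                       math.log2((m + 1) * (m + 2) * G * R / pbar_abs)) * per  # (11.36)
        NII = math.ceil(math.sqrt(m + 1) *
                        math.log2((m + 1) * M_minus_pstar / eps)) * per        # (11.37)
        return NI, NII   # total bound: NI + NII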
11.5.6 Summary

The complexity analysis given in this section is mostly of theoretical interest. In particular, we remind the reader that the choice μ = 1 + 1/√m, discussed in this section, would be a very poor one to use in practice; its only advantage is that it results in a bound that grows like √m instead of m. Likewise, we do not recommend adding the redundant inequality aᵀx ≤ 1 in practice.
The actual bounds obtained from the analysis given here are far higher than the numbers of iterations actually observed. Even the order in the bound appears to be conservative. The best bounds on the number of Newton steps grow like √m, whereas practical experience suggests that the number of Newton steps hardly grows at all with m (or any other parameter, in fact).
Still, it is comforting to know that when the self-concordance condition holds, we can give a uniform bound on the number of Newton steps required in each centering step of the barrier method. An obvious potential pitfall of the barrier method is the possibility that as t grows, the associated centering problems might become more difficult, requiring more Newton steps. While practical experience suggests that this is not the case, the uniform bound bolsters our confidence that it cannot happen.
Finally, we mention that it is not yet clear whether or not there is a practical advantage to formulating a problem so that the self-concordance condition holds. All we can say is that when the self-concordance condition holds, the barrier method will work well in practice, and we can give a worst case complexity bound.
11.6 Problems with generalized inequalities

In this section we show how the barrier method can be extended to problems with generalized inequalities. We consider the problem

    minimize f₀(x)
    subject to fᵢ(x) ≼_{Kᵢ} 0, i = 1, . . . , m    (11.38)
               Ax = b,

where f₀ : Rⁿ → R is convex, fᵢ : Rⁿ → R^{kᵢ}, i = 1, . . . , m, are Kᵢ-convex, and Kᵢ ⊆ R^{kᵢ} are proper cones. As in §11.1, we assume that the functions fᵢ are twice continuously differentiable, that A ∈ R^{p×n} with rank A = p, and that the problem is solvable.

The KKT conditions for problem (11.38) are

    Ax⋆ = b
    fᵢ(x⋆) ≼_{Kᵢ} 0, i = 1, . . . , m
    λ⋆ᵢ ≽_{Kᵢ⋆} 0, i = 1, . . . , m
    ∇f₀(x⋆) + Σᵢ₌₁ᵐ Dfᵢ(x⋆)ᵀλ⋆ᵢ + Aᵀν⋆ = 0
    λ⋆ᵢᵀfᵢ(x⋆) = 0, i = 1, . . . , m,    (11.39)

where Dfᵢ(x⋆) ∈ R^{kᵢ×n} is the derivative of fᵢ at x⋆. We will assume that problem (11.38) is strictly feasible, so the KKT conditions are necessary and sufficient conditions for optimality of x⋆.

The development of the method is parallel to the case with scalar constraints. Once we develop a generalization of the logarithm function that applies to general proper cones, we can define a logarithmic barrier function for the problem (11.38). From that point on, the development is essentially the same as in the scalar case. In particular, the central path, barrier method, and complexity analysis are very similar.

11.6.1 Logarithmic barrier and central path

Generalized logarithm for a proper cone

We first define the analog of the logarithm, log x, for a proper cone K ⊆ R^q. We say that ψ : R^q → R is a generalized logarithm for K if

• ψ is concave, closed, twice continuously differentiable, dom ψ = int K, and ∇²ψ(y) ≺ 0 for y ∈ int K.

• There is a constant θ > 0 such that for all y ≻_K 0, and all s > 0, ψ(sy) = ψ(y) + θ log s.

In other words, ψ behaves like a logarithm along any ray in the cone K.

We call the constant θ the degree of ψ (since exp ψ is a homogeneous function of degree θ). Note that a generalized logarithm is only defined up to an additive constant; if ψ is a generalized logarithm for K, then so is ψ + a, where a ∈ R. The ordinary logarithm is, of course, a generalized logarithm for R₊.
We will use the following two properties, which are satisfied by any generalized logarithm: if y ≻_K 0, then

    ∇ψ(y) ≻_{K⋆} 0,    (11.40)

which implies ψ is K-increasing (see §3.6.1), and

    yᵀ∇ψ(y) = θ.

The first property is proved in exercise 11.15. The second property follows immediately from differentiating ψ(sy) = ψ(y) + θ log s with respect to s.
Example 11.5 Nonnegative orthant. The function ψ(x) = Σᵢ₌₁ⁿ log xᵢ is a generalized logarithm for K = Rⁿ₊, with degree n. For x ≻ 0,

    ∇ψ(x) = (1/x₁, . . . , 1/xₙ),

so ∇ψ(x) ≻ 0, and xᵀ∇ψ(x) = n.
Example 11.6 Second-order cone. The function

    ψ(x) = log( x²ₙ₊₁ − Σᵢ₌₁ⁿ xᵢ² )

is a generalized logarithm for the second-order cone

    K = { x ∈ Rⁿ⁺¹ | (Σᵢ₌₁ⁿ xᵢ²)^{1/2} ≤ xₙ₊₁ },

with degree 2. The gradient of ψ at a point x ∈ int K is given by

    ∂ψ(x)/∂xⱼ = −2xⱼ / ( x²ₙ₊₁ − Σᵢ₌₁ⁿ xᵢ² ),  j = 1, . . . , n,
    ∂ψ(x)/∂xₙ₊₁ = 2xₙ₊₁ / ( x²ₙ₊₁ − Σᵢ₌₁ⁿ xᵢ² ).

The identities ∇ψ(x) ∈ int K⋆ = int K and xᵀ∇ψ(x) = 2 are easily verified.
Example 11.7 Positive semidefinite cone. The function ψ(X) = log det X is a generalized logarithm for the cone Sᵖ₊. The degree is p, since

    log det(sX) = log det X + p log s  for s > 0.

The gradient of ψ at a point X ∈ Sᵖ₊₊ is equal to

    ∇ψ(X) = X⁻¹.

Thus, we have ∇ψ(X) = X⁻¹ ≻ 0, and the inner product of X and ∇ψ(X) is equal to tr(XX⁻¹) = p.
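The defining properties of a generalized logarithm are easy to verify numerically. A sketch for ψ(X) = log det X on Sᵖ₊₊ (example 11.7), checking logarithmic homogeneity of degree p and yᵀ∇ψ(y) = θ; the test matrix is an arbitrary illustrative choice.

    import numpy as np

    p = 4
    rng = np.random.default_rng(0)
    B = rng.standard_normal((p, p))
    X = B @ B.T + p * np.eye(p)           # a random positive definite matrix

    def psi(X):                           # log det X, for X positive definite
        return np.linalg.slogdet(X)[1]

    s = 2.5
    assert abs(psi(s * X) - psi(X) - p * np.log(s)) < 1e-9   # degree theta = p
    assert abs(np.trace(X @ np.linalg.inv(X)) - p) < 1e-9    # tr(X grad psi(X)) = p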
Logarithmic barrier functions for generalized inequalities
Returning to problem (11.38), let ψ₁, . . . , ψₘ be generalized logarithms for the cones K₁, . . . , Kₘ, respectively, with degrees θ₁, . . . , θₘ. We define the logarithmic barrier function for problem (11.38) as

    φ(x) = −Σᵢ₌₁ᵐ ψᵢ(−fᵢ(x)),   dom φ = {x | fᵢ(x) ≺_{Kᵢ} 0, i = 1, . . . , m}.

Convexity of φ follows from the fact that the functions ψᵢ are Kᵢ-increasing, and the functions fᵢ are Kᵢ-convex (see the composition rule of §3.6.2).

The central path

The next step is to define the central path for problem (11.38). We define the central point x⋆(t), for t ≥ 0, as the minimizer of tf₀ + φ, subject to Ax = b, i.e., as the solution of

    minimize tf₀(x) − Σᵢ₌₁ᵐ ψᵢ(−fᵢ(x))
    subject to Ax = b

(assuming the minimizer exists, and is unique). Central points are characterized by the optimality condition

    t∇f₀(x) + ∇φ(x) + Aᵀν = t∇f₀(x) + Σᵢ₌₁ᵐ Dfᵢ(x)ᵀ∇ψᵢ(−fᵢ(x)) + Aᵀν = 0,    (11.41)

for some ν ∈ Rᵖ, where Dfᵢ(x) is the derivative of fᵢ at x.

Dual points on central path
As in the scalar case, points on the central path give dual feasible points for the problem (11.38). For i = 1, . . . , m, define

    λ⋆ᵢ(t) = (1/t) ∇ψᵢ(−fᵢ(x⋆(t))),    (11.42)

and let ν⋆(t) = ν/t, where ν is the optimal dual variable in (11.41). We will show that λ⋆₁(t), . . . , λ⋆ₘ(t), together with ν⋆(t), are dual feasible for the original problem (11.38).

First, λ⋆ᵢ(t) ≻_{Kᵢ⋆} 0, by the monotonicity property (11.40) of generalized logarithms. Second, it follows from (11.41) that the Lagrangian

    L(x, λ⋆(t), ν⋆(t)) = f₀(x) + Σᵢ₌₁ᵐ λ⋆ᵢ(t)ᵀfᵢ(x) + ν⋆(t)ᵀ(Ax − b)

is minimized over x by x = x⋆(t). The dual function g evaluated at (λ⋆(t), ν⋆(t)) is therefore equal to

    g(λ⋆(t), ν⋆(t)) = f₀(x⋆(t)) + Σᵢ₌₁ᵐ λ⋆ᵢ(t)ᵀfᵢ(x⋆(t)) + ν⋆(t)ᵀ(Ax⋆(t) − b)
                   = f₀(x⋆(t)) + (1/t) Σᵢ₌₁ᵐ ∇ψᵢ(−fᵢ(x⋆(t)))ᵀfᵢ(x⋆(t))
                   = f₀(x⋆(t)) − (1/t) Σᵢ₌₁ᵐ θᵢ,

where θᵢ is the degree of ψᵢ. In the last line, we use the fact that yᵀ∇ψᵢ(y) = θᵢ for y ≻_{Kᵢ} 0, and therefore

    λ⋆ᵢ(t)ᵀfᵢ(x⋆(t)) = −θᵢ/t,  i = 1, . . . , m.    (11.43)

Thus, if we define

    θ = Σᵢ₌₁ᵐ θᵢ,

then the primal feasible point x⋆(t) and the dual feasible point (λ⋆(t), ν⋆(t)) have duality gap θ/t. This is just like the scalar case, except that θ, the sum of the degrees of the generalized logarithms for the cones, appears in place of m, the number of inequalities.
Example 11.8 Second-order cone programming. We consider an SOCP with variable x ∈ Rⁿ:

    minimize fᵀx
    subject to ∥Aᵢx + bᵢ∥₂ ≤ cᵢᵀx + dᵢ, i = 1, . . . , m,    (11.44)

where Aᵢ ∈ R^{nᵢ×n}. As we have seen in example 11.6, the function

    ψ(y) = log( y²ₚ₊₁ − Σᵢ₌₁ᵖ yᵢ² )

is a generalized logarithm for the second-order cone in R^{p+1}, with degree 2. The corresponding logarithmic barrier function for (11.44) is

    φ(x) = −Σᵢ₌₁ᵐ log( (cᵢᵀx + dᵢ)² − ∥Aᵢx + bᵢ∥₂² ),    (11.45)

with dom φ = {x | ∥Aᵢx + bᵢ∥₂ < cᵢᵀx + dᵢ, i = 1, . . . , m}. The optimality condition on the central path is tf + ∇φ(x⋆(t)) = 0, where

    ∇φ(x) = −2 Σᵢ₌₁ᵐ ( (cᵢᵀx + dᵢ)cᵢ − Aᵢᵀ(Aᵢx + bᵢ) ) / ( (cᵢᵀx + dᵢ)² − ∥Aᵢx + bᵢ∥₂² ).

The associated dual logarithm is

    ψ∗(y) = log( y²ₚ₊₁ − Σᵢ₌₁ᵖ yᵢ² ) + 2 − log 4,

with dom ψ∗ = { y ∈ R^{p+1} | yₚ₊₁ > (Σᵢ₌₁ᵖ yᵢ²)^{1/2} } (see exercise 3.36). Except for a constant, it is the same as the original generalized logarithm for the second-order cone.
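A sketch evaluating the barrier (11.45) and its gradient for an SOCP, given the problem data as lists; the function name is ours, the loop mirrors the sum in (11.45), and the caller is assumed to supply a point in dom φ.

    import numpy as np

    def socp_barrier(x, As, bs, cs, ds):
        # phi(x) = -sum log((c_i^T x + d_i)^2 - ||A_i x + b_i||^2), and grad phi
        phi, grad = 0.0, np.zeros_like(x)
        for Ai, bi, ci, di in zip(As, bs, cs, ds):
            u = Ai @ x + bi
            v = ci @ x + di
            gap = v**2 - u @ u                 # positive on dom phi
            phi -= np.log(gap)
            grad -= 2.0 * (v * ci - Ai.T @ u) / gap
        return phi, grad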

Example 11.13 Positive semidefinite cone. The dual logarithm associated with ψ(X) = log det X, with dom ψ = Sᵖ₊₊, is

    ψ∗(Y) = log det Y + p,

with domain dom ψ∗ = Sᵖ₊₊ (see example 3.23). Again, it is the same generalized logarithm, except for a constant.
Derivation of the basic bound
To simplify notation, we denote x⋆(t) as x, x⋆(μt) as x⁺, λ⋆ᵢ(t) as λᵢ, and ν⋆(t) as ν. From tλᵢ = ∇ψᵢ(−fᵢ(x)) (in (11.42)) and property (11.43), we conclude that

    ψᵢ(−fᵢ(x)) + ψ∗ᵢ(tλᵢ) = −tλᵢᵀfᵢ(x) = θᵢ,    (11.51)

i.e., the inequality (11.50) holds with equality for the pair u = −fᵢ(x) and v = tλᵢ. The same inequality for the pair u = −fᵢ(x⁺), v = μtλᵢ gives

    ψᵢ(−fᵢ(x⁺)) + ψ∗ᵢ(μtλᵢ) ≤ −μtλᵢᵀfᵢ(x⁺),

which becomes, using logarithmic homogeneity of ψ∗ᵢ,

    ψᵢ(−fᵢ(x⁺)) + ψ∗ᵢ(tλᵢ) + θᵢ log μ ≤ −μtλᵢᵀfᵢ(x⁺).

Subtracting the equality (11.51) from this inequality, we get

    −ψᵢ(−fᵢ(x)) + ψᵢ(−fᵢ(x⁺)) + θᵢ log μ ≤ −θᵢ − μtλᵢᵀfᵢ(x⁺),

and summing over i yields

    φ(x) − φ(x⁺) + θ log μ ≤ −θ − μt Σᵢ₌₁ᵐ λᵢᵀfᵢ(x⁺).    (11.52)

We also have, from the definition of the dual function,

    f₀(x) − θ/t = g(λ, ν)
                ≤ f₀(x⁺) + Σᵢ₌₁ᵐ λᵢᵀfᵢ(x⁺) + νᵀ(Ax⁺ − b)
                = f₀(x⁺) + Σᵢ₌₁ᵐ λᵢᵀfᵢ(x⁺).

Multiplying this inequality by μt and adding to the inequality (11.52), we get

    φ(x) − φ(x⁺) + θ log μ + μtf₀(x) − μθ ≤ μtf₀(x⁺) − θ,

which when rearranged gives

    μtf₀(x) + φ(x) − μtf₀(x⁺) − φ(x⁺) ≤ θ(μ − 1 − log μ),

the desired inequality (11.48).

11.7 Primal-dual interior-point methods
In this section we describe a basic primal-dual interior-point method. Primal-dual interior-point methods are very similar to the barrier method, with some differences.

• There is only one loop or iteration, i.e., there is no distinction between inner and outer iterations as in the barrier method. At each iteration, both the primal and dual variables are updated.

• The search directions in a primal-dual interior-point method are obtained from Newton's method, applied to modified KKT equations (i.e., the optimality conditions for the logarithmic barrier centering problem). The primal-dual search directions are similar to, but not quite the same as, the search directions that arise in the barrier method.

• In a primal-dual interior-point method, the primal and dual iterates are not necessarily feasible.

Primal-dual interior-point methods are often more efficient than the barrier method, especially when high accuracy is required, since they can exhibit better than linear convergence. For several basic problem classes, such as linear, quadratic, second-order cone, geometric, and semidefinite programming, customized primal-dual methods outperform the barrier method. For general nonlinear convex optimization problems, primal-dual interior-point methods are still a topic of active research, but show great promise. Another advantage of primal-dual algorithms over the barrier method is that they can work when the problem is feasible, but not strictly feasible (although we will not pursue this).

In this section we present a basic primal-dual method for (11.1), without convergence analysis. We refer the reader to the references for a more thorough treatment of primal-dual methods and their convergence analysis.
11.7.1 Primal-dual search direction
As in the barrier method, we start with the modified KKT conditions (11.15), expressed as rₜ(x, λ, ν) = 0, where we define

    rₜ(x, λ, ν) = ( ∇f₀(x) + Df(x)ᵀλ + Aᵀν,  −diag(λ)f(x) − (1/t)1,  Ax − b ),    (11.53)

and t > 0. Here f : Rⁿ → Rᵐ and its derivative matrix Df are given by

    f(x) = ( f₁(x), . . . , fₘ(x) ),   Df(x) = [ ∇f₁(x)ᵀ; · · · ; ∇fₘ(x)ᵀ ].
If x, λ, ν satisfy rₜ(x, λ, ν) = 0 (and fᵢ(x) < 0), then x = x⋆(t), λ = λ⋆(t), and ν = ν⋆(t). In particular, x is primal feasible, and λ, ν are dual feasible, with duality gap m/t. The first block component of rₜ,

    r_dual = ∇f₀(x) + Df(x)ᵀλ + Aᵀν,

is called the dual residual, and the last block component, r_pri = Ax − b, is called the primal residual. The middle block,

    r_cent = −diag(λ)f(x) − (1/t)1,

is the centrality residual, i.e., the residual for the modified complementarity condition.

Now consider the Newton step for solving the nonlinear equations rₜ(x, λ, ν) = 0, for fixed t (without first eliminating λ, as in §11.3.4), at a point (x, λ, ν) that satisfies f(x) ≺ 0, λ ≻ 0. We will denote the current point and Newton step as

    y = (x, λ, ν),   ∆y = (∆x, ∆λ, ∆ν),

respectively. The Newton step is characterized by the linear equations

    rₜ(y + ∆y) ≈ rₜ(y) + Drₜ(y)∆y = 0,

i.e., ∆y = −Drₜ(y)⁻¹rₜ(y). In terms of x, λ, and ν, we have

    [ ∇²f₀(x) + Σᵢ₌₁ᵐ λᵢ∇²fᵢ(x)   Df(x)ᵀ        Aᵀ ] [ ∆x ]       [ r_dual ]
    [ −diag(λ)Df(x)               −diag(f(x))    0  ] [ ∆λ ]  = − [ r_cent ]    (11.54)
    [ A                            0             0  ] [ ∆ν ]      [ r_pri  ]

The primal-dual search direction ∆y_pd = (∆x_pd, ∆λ_pd, ∆ν_pd) is defined as the solution of (11.54).

The primal and dual search directions are coupled, both through the coefficient matrix and the residuals. For example, the primal search direction ∆x_pd depends on the current value of the dual variables λ and ν, as well as x. We note also that if x satisfies Ax = b, i.e., the primal feasibility residual r_pri is zero, then we have A∆x_pd = 0, so ∆x_pd defines a (primal) feasible direction: for any s, x + s∆x_pd will satisfy A(x + s∆x_pd) = b.

Comparison with barrier method search directions

The primal-dual search directions are closely related to the search directions used in the barrier method, but not quite the same. We start with the linear equations (11.54) that define the primal-dual search directions. We eliminate the variable ∆λ_pd, using

    ∆λ_pd = −diag(f(x))⁻¹diag(λ)Df(x)∆x_pd + diag(f(x))⁻¹r_cent,

which comes from the second block of equations. Substituting this into the first block of equations gives

    [ H_pd  Aᵀ ] [ ∆x_pd ]       [ r_dual + Df(x)ᵀdiag(f(x))⁻¹r_cent ]       [ ∇f₀(x) + (1/t) Σᵢ₌₁ᵐ (1/(−fᵢ(x))) ∇fᵢ(x) + Aᵀν ]
    [ A      0 ] [ ∆ν_pd ]  = − [ r_pri                              ]  = − [ r_pri                                            ],    (11.55)

where

    H_pd = ∇²f₀(x) + Σᵢ₌₁ᵐ λᵢ∇²fᵢ(x) + Σᵢ₌₁ᵐ (λᵢ/(−fᵢ(x))) ∇fᵢ(x)∇fᵢ(x)ᵀ.    (11.56)

We can compare (11.55) to the equation (11.14), which defines the Newton step for the centering problem in the barrier method with parameter t. This equation can be written as

    [ H_bar  Aᵀ ] [ ∆x_bar ]       [ t∇f₀(x) + ∇φ(x) ]       [ t∇f₀(x) + Σᵢ₌₁ᵐ (1/(−fᵢ(x))) ∇fᵢ(x) ]
    [ A      0  ] [ ν_bar  ]  = − [ r_pri            ]  = − [ r_pri                                 ],    (11.57)

where

    H_bar = t∇²f₀(x) + Σᵢ₌₁ᵐ (1/(−fᵢ(x))) ∇²fᵢ(x) + Σᵢ₌₁ᵐ (1/fᵢ(x)²) ∇fᵢ(x)∇fᵢ(x)ᵀ.    (11.58)

(Here we give the general expression for the infeasible Newton step; if the current x is feasible, i.e., r_pri = 0, then ∆x_bar coincides with the feasible Newton step ∆x_nt defined in (11.14).)

Our first observation is that the two systems of equations (11.55) and (11.57) are very similar. The coefficient matrices in (11.55) and (11.57) have the same structure; indeed, the matrices H_pd and H_bar are both positive linear combinations of the matrices

    ∇²f₀(x), ∇²f₁(x), . . . , ∇²fₘ(x), ∇f₁(x)∇f₁(x)ᵀ, . . . , ∇fₘ(x)∇fₘ(x)ᵀ.

This means that the same method can be used to compute the primal-dual search directions and the barrier method Newton step.
We can say more about the relation between the primal-dual equations (11.55) and the barrier method equations (11.57). Suppose we divide the first block of equation (11.57) by t, and define the variable ∆ν_bar = (1/t)ν_bar − ν (where ν is arbitrary). Then we obtain

    [ (1/t)H_bar  Aᵀ ] [ ∆x_bar ]       [ ∇f₀(x) + (1/t) Σᵢ₌₁ᵐ (1/(−fᵢ(x))) ∇fᵢ(x) + Aᵀν ]
    [ A           0  ] [ ∆ν_bar ]  = − [ r_pri                                            ].

In this form, the righthand side is identical to the righthand side of the primal-dual equations (evaluated at the same x, λ, and ν). The coefficient matrices differ only in the 1,1 block:

    H_pd = ∇²f₀(x) + Σᵢ₌₁ᵐ λᵢ∇²fᵢ(x) + Σᵢ₌₁ᵐ (λᵢ/(−fᵢ(x))) ∇fᵢ(x)∇fᵢ(x)ᵀ,
    (1/t)H_bar = ∇²f₀(x) + Σᵢ₌₁ᵐ (1/(−tfᵢ(x))) ∇²fᵢ(x) + Σᵢ₌₁ᵐ (1/(tfᵢ(x)²)) ∇fᵢ(x)∇fᵢ(x)ᵀ.

When x and λ satisfy −fᵢ(x)λᵢ = 1/t, the coefficient matrices, and therefore also the search directions, coincide.

11.7.2 The surrogate duality gap

In the primal-dual interior-point method the iterates x(k), λ(k), and ν(k) are not necessarily feasible, except in the limit as the algorithm converges. This means that we cannot easily evaluate a duality gap η(k) associated with step k of the algorithm, as we do in (the outer steps of) the barrier method. Instead we define the surrogate duality gap, for any x that satisfies f(x) ≺ 0 and λ ≽ 0, as

    η̂(x, λ) = −f(x)ᵀλ.    (11.59)

The surrogate gap η̂ would be the duality gap, if x were primal feasible and λ, ν were dual feasible, i.e., if r_pri = 0 and r_dual = 0. Note that the value of the parameter t that corresponds to the surrogate duality gap η̂ is m/η̂.

11.7.3 Primal-dual interior-point method

We can now describe the basic primal-dual interior-point algorithm.

Algorithm 11.2 Primal-dual interior-point method.

given x that satisfies f₁(x) < 0, . . . , fₘ(x) < 0, λ ≻ 0, μ > 1, ǫ_feas > 0, ǫ > 0.
repeat
1. Determine t. Set t := μm/η̂.
2. Compute primal-dual search direction ∆y_pd.
3. Line search and update. Determine step length s > 0 and set y := y + s∆y_pd.
until ∥r_pri∥₂ ≤ ǫ_feas, ∥r_dual∥₂ ≤ ǫ_feas, and η̂ ≤ ǫ.
In step 1, the parameter t is set to a factor μ times m/η̂, which is the value of t associated with the current surrogate duality gap η̂. If x, λ, and ν were central, with parameter t (and therefore with duality gap m/t), then in step 1 we would increase t by the factor μ, which is exactly the update used in the barrier method. Values of the parameter μ on the order of 10 appear to work well.

The primal-dual interior-point algorithm terminates when x is primal feasible and λ, ν are dual feasible (within the tolerance ǫ_feas) and the surrogate gap is smaller than the tolerance ǫ. Since the primal-dual interior-point method often has faster than linear convergence, it is common to choose ǫ_feas and ǫ small.
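Algorithm 11.2 can be sketched compactly in Python. The outline below is ours, not the book's: the residual and derivative callables are assumed supplied by the caller, the dense Newton system (11.54) is solved directly, and the backtracking line search described in the next subsection is used; no safeguards beyond those in the text are included.

    import numpy as np

    def primal_dual_ipm(x, lam, nu, f, Df, grad_f0, hess_f0, hess_fs, A, b,
                        mu=10.0, eps=1e-8, eps_feas=1e-8,
                        alpha=0.01, beta=0.5, max_iter=100):
        m, n, p = lam.size, x.size, b.size
        def r_t(x, lam, nu, t):               # residual (11.53)
            return np.concatenate([
                grad_f0(x) + Df(x).T @ lam + A.T @ nu,
                -lam * f(x) - 1.0 / t,
                A @ x - b])
        for _ in range(max_iter):
            t = mu * m / (-f(x) @ lam)        # step 1: t = mu * m / eta_hat
            H = hess_f0(x) + sum(li * Hi for li, Hi in zip(lam, hess_fs(x)))
            J = np.block([                    # step 2: Newton system (11.54)
                [H, Df(x).T, A.T],
                [-np.diag(lam) @ Df(x), -np.diag(f(x)), np.zeros((m, p))],
                [A, np.zeros((p, m)), np.zeros((p, p))]])
            dy = np.linalg.solve(J, -r_t(x, lam, nu, t))
            dx, dlam, dnu = dy[:n], dy[n:n + m], dy[n + m:]
            # step 3: backtracking line search keeping lam > 0 and f(x) < 0
            neg = dlam < 0
            s_max = min(1.0, (-lam[neg] / dlam[neg]).min()) if neg.any() else 1.0
            s = 0.99 * s_max
            while (f(x + s * dx) >= 0).any():
                s *= beta
            r0 = np.linalg.norm(r_t(x, lam, nu, t))
            while np.linalg.norm(r_t(x + s*dx, lam + s*dlam, nu + s*dnu, t)) > (1 - alpha*s) * r0:
                s *= beta
            x, lam, nu = x + s*dx, lam + s*dlam, nu + s*dnu
            if (np.linalg.norm(A @ x - b) <= eps_feas
                    and np.linalg.norm(grad_f0(x) + Df(x).T @ lam + A.T @ nu) <= eps_feas
                    and -f(x) @ lam <= eps):
                break
        return x, lam, nu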
Line search
The line search in the primal-dual interior-point method is a standard backtracking line search, based on the norm of the residual, and modified to ensure that λ ≻ 0 and f(x) ≺ 0. We denote the current iterate as x, λ, and ν, and the next iterate as x⁺, λ⁺, and ν⁺, i.e.,

    x⁺ = x + s∆x_pd,   λ⁺ = λ + s∆λ_pd,   ν⁺ = ν + s∆ν_pd.

The residual, evaluated at y+, will be denoted r+.
We first compute the largest positive step length, not exceeding one, that gives λ⁺ ≽ 0, i.e.,

    s_max = sup{ s ∈ [0, 1] | λ + s∆λ ≽ 0 }
          = min{ 1, min{ −λᵢ/∆λᵢ | ∆λᵢ < 0 } }.

We start the backtracking with s = 0.99 s_max, and multiply s by β ∈ (0, 1) until we have f(x⁺) ≺ 0. We continue multiplying s by β until we have

    ∥rₜ(x⁺, λ⁺, ν⁺)∥₂ ≤ (1 − αs)∥rₜ(x, λ, ν)∥₂.

Common choices for the backtracking parameters α and β are the same as those for Newton's method: α is typically chosen in the range 0.01 to 0.1, and β is typically chosen in the range 0.3 to 0.8.

One iteration of the primal-dual interior-point algorithm is the same as one step of the infeasible start Newton method, applied to solving rₜ(x, λ, ν) = 0, but modified to ensure λ ≻ 0 and f(x) ≺ 0 (or, equivalently, with dom rₜ restricted to λ ≻ 0 and f(x) ≺ 0). The same arguments used in the proof of convergence of the infeasible start Newton method show that the line search for the primal-dual method always terminates in a finite number of steps.

11.7.4 Examples

We illustrate the performance of the primal-dual interior-point method for the same problems considered in §11.3.2. The only difference is that instead of starting with a point on the central path, as in §11.3.2, we start the primal-dual interior-point method at a randomly generated x(0) that satisfies f(x) ≺ 0, and take λᵢ(0) = −1/fᵢ(x(0)), so the initial value of the surrogate gap is η̂ = 100. The parameter values we use for the primal-dual interior-point method are

    μ = 10,  β = 0.5,  ǫ = 10⁻⁸,  α = 0.01.

Small LP and GP

We first consider the small LP used in §11.3.2, with m = 100 inequalities and n = 50 variables. Figure 11.21 shows the progress of the primal-dual interior-point method. Two plots are shown: the surrogate gap η̂, and the norm of the primal and dual residuals,

    r_feas = ( ∥r_pri∥₂² + ∥r_dual∥₂² )^{1/2},

versus iteration number. (The initial point is primal feasible, so the plot shows the norm of the dual feasibility residual.) The plots show that the residual converges to zero rapidly, and becomes zero to numerical precision in 24 iterations. The surrogate gap also converges rapidly. Compared to the barrier method, the primal-dual interior-point method is faster, especially when high accuracy is required.

Figure 11.22 shows the progress of the primal-dual interior-point method on the GP considered in §11.3.2. The convergence is similar to the LP example.

Figure 11.21 Progress of the primal-dual interior-point method for an LP, showing surrogate duality gap η̂ and the norm of the primal and dual residuals, versus iteration number. The residual converges rapidly to zero within 24 iterations; the surrogate gap also converges to a very small number in about 28 iterations. The primal-dual interior-point method converges faster than the barrier method, especially if high accuracy is required.

Figure 11.22 Progress of primal-dual interior-point method for a GP, showing surrogate duality gap η̂ and the norm of the primal and dual residuals versus iteration number.
Figure 11.23 Number of iterations required to solve randomly generated standard LPs of different dimensions, with n = 2m. Error bars show standard deviation, around the average value, for 100 instances of each dimension. The growth in the number of iterations required, as the problem dimensions range over a 100:1 ratio, is approximately logarithmic.

A family of LPs

Here we examine the performance of the primal-dual method as a function of the problem dimensions, for the same family of standard form LPs considered in §11.3.2. We use the primal-dual interior-point method to solve the same 2000 instances, which consist of 100 instances for each value of m. The primal-dual algorithm is started at x(0) = 1, λ(0) = 1, ν(0) = 0, and terminated using tolerance ǫ = 10⁻⁸. Figure 11.23 shows the average, and standard deviation, of the number of iterations required versus m. The number of iterations ranges from 15 to 35, and grows approximately as the logarithm of m. Comparing with the results for the barrier method shown in figure 11.8, we see that the number of iterations in the primal-dual method is only slightly higher, despite the fact that we start at infeasible starting points, and solve the problem to a much higher accuracy.

11.8 Implementation

The main effort in the barrier method is computing the Newton step for the centering problem, which consists of solving sets of linear equations of the form

    [ H  Aᵀ ] [ ∆x_nt ]       [ g ]
    [ A  0  ] [ ν_nt  ]  = − [ 0 ],    (11.60)

where

    H = t∇²f₀(x) + Σᵢ₌₁ᵐ (1/fᵢ(x)²) ∇fᵢ(x)∇fᵢ(x)ᵀ + Σᵢ₌₁ᵐ (1/(−fᵢ(x))) ∇²fᵢ(x),
    g = t∇f₀(x) + Σᵢ₌₁ᵐ (1/(−fᵢ(x))) ∇fᵢ(x).

The Newton equations for the primal-dual method have exactly the same structure, so our observations in this section apply to the primal-dual method as well.

The coefficient matrix of (11.60) has KKT structure, so all of the discussion in §9.7 and §10.4 applies here. In particular, the equations can be solved by elimination, and structure such as sparsity or diagonal plus low rank can be exploited. Let us give some generic examples in which the special structure of the KKT equations can be exploited to compute the Newton step more efficiently.

Sparse problems

If the original problem is sparse, which means that the objective and every constraint function each depend on only a modest number of variables, then the gradients and Hessian matrices of the objective and constraint functions are all sparse, as is the coefficient matrix A. Provided m is not too big, the matrix H is then likely to be sparse, so a sparse matrix method can be used to compute the Newton step. The method will likely work well if there are a few relatively dense rows and columns in the KKT matrix, which would occur, for example, if there were a few equality constraints involving a large number of variables.

Separable objective and a few linear inequality constraints

Suppose the objective function is separable, and there are only a relatively small number of linear equality and inequality constraints. Then ∇²f₀(x) is diagonal, and the terms ∇²fᵢ(x) vanish, so the matrix H is diagonal plus low rank. Since H is easily inverted, we can solve the KKT equations efficiently. The same method can be applied whenever ∇²f₀(x) is easily inverted, e.g., banded, sparse, or block diagonal.

11.8.1 Standard form linear programming

We first discuss the implementation of the barrier method for the standard form LP

    minimize cᵀx
    subject to Ax = b,  x ≽ 0,

with A ∈ Rᵐˣⁿ. The Newton equations for the centering problem

    minimize tcᵀx − Σᵢ₌₁ⁿ log xᵢ
    subject to Ax = b

are given by

    [ diag(x)⁻²  Aᵀ ] [ ∆x_nt ]     [ −tc + diag(x)⁻¹1 ]
    [ A          0  ] [ ν_nt  ]  = [ 0                 ].
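(A numerical sketch of this Newton step appears after the block elimination derived next.)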
11.8 Implementation

The main effort in the barrier method is computing the Newton step for the centering problem, which consists of solving sets of linear equations of the form

[ H   A^T ] [ Δx_nt ]      [ g ]
[ A   0   ] [ ν_nt  ]  = − [ 0 ],        (11.60)

where

H = t∇²f0(x) + Σ_{i=1}^m (1/fi(x)²) ∇fi(x)∇fi(x)^T + Σ_{i=1}^m (1/(−fi(x))) ∇²fi(x),
g = t∇f0(x) + Σ_{i=1}^m (1/(−fi(x))) ∇fi(x).

The Newton equations for the primal-dual method have exactly the same structure, so our observations in this section apply to the primal-dual method as well.

The coefficient matrix of (11.60) has KKT structure, so all of the discussion in §9.7 and §10.4 applies here. In particular, the equations can be solved by elimination, and structure such as sparsity or diagonal plus low rank can be exploited. Let us give some generic examples in which the special structure of the KKT equations can be exploited to compute the Newton step more efficiently.

Sparse problems

If the original problem is sparse, which means that the objective and every constraint function each depend on only a modest number of variables, then the gradients and Hessian matrices of the objective and constraint functions are all sparse, as is the coefficient matrix A. Provided m is not too big, the matrix H is then likely to be sparse, so a sparse matrix method can be used to compute the Newton step. The method will likely work well even if there are a few relatively dense rows and columns in the KKT matrix, which would occur, for example, if there were a few equality constraints involving a large number of variables.

Separable objective and a few linear inequality constraints

Suppose the objective function is separable, and there are only a relatively small number of linear equality and inequality constraints. Then ∇²f0(x) is diagonal, and the terms ∇²fi(x) vanish, so the matrix H is diagonal plus low rank. Since H is easily inverted, we can solve the KKT equations efficiently. The same method can be applied whenever ∇²f0(x) is easily inverted, e.g., banded, sparse, or block diagonal.

11.8.1 Standard form linear programming

We first discuss the implementation of the barrier method for the standard form LP

minimize c^T x
subject to Ax = b, x ≽ 0,

with A ∈ R^{m×n}. The Newton equations for the centering problem

minimize tc^T x − Σ_{i=1}^n log xi
subject to Ax = b

are given by

[ diag(x)⁻²   A^T ] [ Δx_nt ]     [ −tc + diag(x)⁻¹1 ]
[ A           0   ] [ ν_nt  ]  =  [ 0                ].

These equations are usually solved by block elimination of Δx_nt. From the first equation,

Δx_nt = diag(x)²(−tc + diag(x)⁻¹1 − A^T ν_nt) = −t diag(x)²c + x − diag(x)²A^T ν_nt.

Substituting in the second equation yields

A diag(x)²A^T ν_nt = −tA diag(x)²c + b.

The coefficient matrix is positive definite since by assumption rank A = m. Moreover, if A is sparse, then usually A diag(x)²A^T is sparse, so a sparse Cholesky factorization can be used.
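As a concrete illustration, the block elimination above can be written in a few lines of dense NumPy. This is only a sketch of ours (the helper name `lp_newton_step` is not from the book); a practical implementation would keep A sparse and use a sparse Cholesky factorization.

```python
import numpy as np

def lp_newton_step(A, b, c, x, t):
    """Newton step for the LP centering problem, by block elimination.

    Solves A diag(x)^2 A^T nu = -t A diag(x)^2 c + b, then recovers
    dx = -t diag(x)^2 c + x - diag(x)^2 A^T nu.  Assumes x > 0, Ax = b.
    """
    d = x ** 2                          # diag(x)^2, stored as a vector
    K = (A * d) @ A.T                   # A diag(x)^2 A^T  (m x m)
    rhs = -t * (A @ (d * c)) + b
    nu = np.linalg.solve(K, rhs)        # dense stand-in for sparse Cholesky
    dx = -t * d * c + x - d * (A.T @ nu)
    return dx, nu
```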
11.8.2 ℓ1-norm approximation

Consider the ℓ1-norm approximation problem

minimize ∥Ax − b∥₁,

with A ∈ R^{m×n}. We will discuss the implementation assuming m and n are large, and A is structured, e.g., sparse, and compare it with the cost of the corresponding least-squares problem

minimize ∥Ax − b∥₂².

We start by expressing the ℓ1-norm approximation problem as an LP by introducing auxiliary variables y ∈ R^m:

minimize 1^T y
subject to [ A   −I ] [ x ]     [ b  ]
           [ −A  −I ] [ y ]  ≼  [ −b ].

The Newton equation for the centering problem is

[ A^T  −A^T ] [ D1  0  ] [ A   −I ] [ Δx_nt ]      [ A^T g1 ]
[ −I   −I   ] [ 0   D2 ] [ −A  −I ] [ Δy_nt ]  = − [ g2     ],

where

D1 = diag(b − Ax + y)⁻²,  D2 = diag(−b + Ax + y)⁻²

and

g1 = diag(b − Ax + y)⁻¹1 − diag(−b + Ax + y)⁻¹1,
g2 = t1 − diag(b − Ax + y)⁻¹1 − diag(−b + Ax + y)⁻¹1.

If we multiply out the lefthand side, this can be simplified as

[ A^T(D1 + D2)A   −A^T(D1 − D2) ] [ Δx_nt ]      [ A^T g1 ]
[ −(D1 − D2)A      D1 + D2      ] [ Δy_nt ]  = − [ g2     ].

Applying block elimination to Δy_nt, we can reduce this to

A^T DA Δx_nt = −A^T g,        (11.61)

where

D = 4D1D2(D1 + D2)⁻¹ = 2(diag(y)² + diag(b − Ax)²)⁻¹

and

g = g1 + (D1 − D2)(D1 + D2)⁻¹g2.

After solving for Δx_nt, we obtain Δy_nt from

Δy_nt = (D1 + D2)⁻¹(−g2 + (D1 − D2)A Δx_nt).

It is interesting to note that (11.61) are the normal equations of a weighted least-squares problem

minimize ∥D^{1/2}(AΔx + D⁻¹g)∥₂².

In other words, the cost of solving the ℓ1-norm approximation problem is the cost of solving a relatively small number of weighted least-squares problems with the same matrix A, and weights that change at each iteration. If A has structure that allows us to solve the least-squares problem fast (for example, by exploiting sparsity), then we can solve (11.61) fast.
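To make the weighted least-squares interpretation concrete, here is a minimal dense sketch (the helper name `l1_newton_step` is ours; a practical code would exploit structure in A when solving the normal equations).

```python
import numpy as np

def l1_newton_step(A, b, x, y, t):
    """Newton step for the centering problem of the l1-approximation LP.

    Uses the reduced system A^T D A dx = -A^T g of (11.61), then
    recovers dy.  Assumes strict feasibility: |b - Ax| < y componentwise.
    """
    u = b - A @ x + y                   # > 0
    v = -b + A @ x + y                  # > 0
    d1, d2 = 1.0 / u**2, 1.0 / v**2     # diagonals of D1, D2
    g1 = 1.0 / u - 1.0 / v
    g2 = t - 1.0 / u - 1.0 / v
    d = 4 * d1 * d2 / (d1 + d2)         # diagonal of D
    g = g1 + (d1 - d2) / (d1 + d2) * g2
    dx = np.linalg.solve((A.T * d) @ A, -(A.T @ g))
    dy = (-g2 + (d1 - d2) * (A @ dx)) / (d1 + d2)
    return dx, dy
```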
11.8.3 Semidefinite programming in inequality form

We consider the SDP

minimize c^T x
subject to Σ_{i=1}^n xiFi + G ≼ 0,

with variable x ∈ Rⁿ, and parameters F1, …, Fn, G ∈ S^p. The associated centering problem, using the log-determinant barrier function, is

minimize tc^T x − log det(−Σ_{i=1}^n xiFi − G).

The Newton step Δx_nt is found from HΔx_nt = −g, where the Hessian and gradient are given by

Hij = tr(S⁻¹Fi S⁻¹Fj),  i, j = 1, …, n,
gi = tci + tr(S⁻¹Fi),  i = 1, …, n,

where S = −Σ_{i=1}^n xiFi − G. One standard approach is to form H (and g), and then solve the Newton equation via Cholesky factorization.

We first consider the unstructured case, i.e., we assume all matrices are dense. We will also just keep track of the order in the flop count, with respect to the problem dimensions n and p. We first form S, which costs order np² flops. We then compute the matrices S⁻¹Fi, for each i, via Cholesky factorization of S, and then back substitution with the columns of Fi (or forming S⁻¹ and multiplying by Fi). This cost is order p³ for each i, so the total cost is order np³. Finally, we form Hij as the inner product of the matrices S⁻¹Fi and S⁻¹Fj, which costs order p² flops. Since we do this for n(n + 1)/2 such pairs, the cost is order n²p². Solving for the Newton direction costs order n³. The dominating order is thus max{np³, n²p², n³}.

It is not possible, in general, to exploit sparsity in the matrices Fi and G, since H is often dense, even when Fi and G are sparse. One exception is when Fi and G have a common block diagonal structure, in which case all the operations described above can be carried out block by block.

It is often possible to exploit (common) sparsity in Fi and G to form the (dense) Hessian H more efficiently. If we can find an ordering that results in S having a reasonably sparse Cholesky factor, then we can compute the matrices S⁻¹Fi efficiently, and form Hij far more efficiently.

One interesting example that arises frequently is an SDP with matrix inequality diag(x) ≼ B. This corresponds to Fi = Eii, where Eii is the matrix with i,i entry one and all others zero. In this case, the matrix H can be found very efficiently: Hij = (S⁻¹)ij², where S = B − diag(x). The cost of forming H is thus the cost of forming S⁻¹, which is at most (i.e., when no other structure is exploited) order n³.

11.8.4 Network rate optimization

We consider a variation on the optimal network flow problem described in §10.4.3 (page 550), which is sometimes called the network rate optimization problem. The network is described as a directed graph with L arcs or links. Goods, or packets of information, travel on the network, passing through the links. The network supports n flows, with (nonnegative) rates x1, …, xn, which are the optimization variables. Each flow moves along a fixed, or pre-determined, path (or route) in the network, from a source node to a destination node. Each link can support multiple flows passing through it. The total traffic on a link is the sum of the flow rates of the flows that travel over the link. Each link has a positive capacity, which is the maximum total traffic it can handle.

We can describe these link capacity limits using the flow-link incidence matrix A ∈ R^{L×n}, defined as

Aij = { 1   flow j passes through link i
      { 0   otherwise.

The total traffic on link i is then given by (Ax)i, so the link capacity constraints can be expressed as Ax ≼ c, where ci is the capacity of link i. Usually each path passes through only a small fraction of the total number of links, so the matrix A is sparse.

In the network rate problem the paths are fixed (and encoded in the matrix A, which is a problem parameter); the variables are the flow rates xi. The objective is to choose the flow rates to maximize a separable utility function U, given by

U(x) = U1(x1) + ··· + Un(xn).

We assume that each Ui (and hence, U) is concave and nondecreasing. We can think of Ui(xi) as the income derived from supporting the ith flow at rate xi; U(x) is then the total income associated with the flows. The network rate optimization problem is then

maximize U(x)
subject to Ax ≼ c, x ≽ 0,        (11.62)

which is a convex optimization problem.

Let us apply the barrier method to solve this problem. At each step we must minimize a function of the form

−tU(x) − Σ_{i=1}^L log(c − Ax)i − Σ_{j=1}^n log xj,

using Newton's method. The Newton step Δx_nt is found by solving the linear equations

(D0 + A^T D1 A + D2)Δx_nt = −g,

where

D0 = −t diag(U1″(x1), …, Un″(xn)),
D1 = diag(1/(c − Ax)1², …, 1/(c − Ax)L²),
D2 = diag(1/x1², …, 1/xn²)

are diagonal matrices, and g ∈ Rⁿ. We can describe the sparsity structure of this n × n coefficient matrix precisely:

(D0 + A^T D1 A + D2)ij ≠ 0 if and only if flow i and flow j share a link.

If the paths are relatively short, and each link has relatively few paths passing through it, then this matrix is sparse, so a sparse Cholesky factorization can be used. We can also solve the Newton system efficiently when some, but not too many, of the rows and columns are relatively dense. This occurs when a few of the flows intersect with a large number of the other flows, which might occur if a few flows are relatively long.

We can also use the matrix inversion lemma to compute the Newton step by solving a system with L × L coefficient matrix, of the form

(D1⁻¹ + A(D0 + D2)⁻¹A^T)y = −A(D0 + D2)⁻¹g,

and then computing

Δx_nt = −(D0 + D2)⁻¹(g + A^T y).

Here too we can precisely describe the sparsity pattern:

(D1⁻¹ + A(D0 + D2)⁻¹A^T)ij ≠ 0

if and only if there is a path that passes through link i and link j. If most paths are short, this matrix is sparse. This matrix will be sparse, with a few dense rows and columns, if there are a few bottlenecks, i.e., a few links over which many flows travel.
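A dense sketch of the Newton system assembly for the barrier subproblem of (11.62) follows. This is illustrative only (`rate_newton_step` is our name, and the gradient expression is a direct computation from the barrier objective above); a practical implementation would keep A sparse and use a sparse Cholesky factorization.

```python
import numpy as np

def rate_newton_step(A, c, x, t, Uprime, Uprimeprime):
    """Newton step for minimizing -t U(x) - sum log(c-Ax)_i - sum log x_j.

    Uprime and Uprimeprime are assumed to return the vectors U_i'(x_i)
    and U_i''(x_i).  Assumes strict feasibility: x > 0 and Ax < c.
    """
    s = c - A @ x                        # link slacks, all positive
    d0 = -t * Uprimeprime(x)             # diagonal of D0 (>= 0, U concave)
    d1 = 1.0 / s**2                      # diagonal of D1
    d2 = 1.0 / x**2                      # diagonal of D2
    # Gradient of the barrier objective.
    g = -t * Uprime(x) + A.T @ (1.0 / s) - 1.0 / x
    H = (A.T * d1) @ A + np.diag(d0 + d2)    # D0 + A^T D1 A + D2
    return np.linalg.solve(H, -g)
```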
Bibliography

The early history of the barrier method is described in detail by Fiacco and McCormick [FM90, §1.2]. The method was a popular algorithm for convex optimization in the 1960s, along with closely related techniques such as the method of centers (Liêũ and Huard [LH66]; see also exercise 11.11), and penalty (or exterior-point) methods [FM90, §4]. Interest declined in the 1970s amid concerns about the ill-conditioning of the Newton equations of the centering problem (11.6) for high values of t.

The barrier method regained popularity in the 1980s, after Gill, Murray, Saunders, Tomlin, and Wright [GMS+86] pointed out the close connections with Karmarkar's polynomial-time projective algorithm for linear programming [Kar84]. The focus of research throughout the 1980s remained on linear (and to a lesser extent, quadratic) programming, resulting in different variations of the basic interior-point methods, and improved worst-case complexity results (see Gonzaga [Gon92]). Primal-dual methods emerged as the algorithms of choice for practical implementations (see Mehrotra [Meh92], Lustig, Marsten, and Shanno [LMS94], Wright [Wri97]).

In their 1994 book, Nesterov and Nemirovski extended the complexity theory of linear programming interior-point methods to nonlinear convex optimization problems, using the convergence theory of Newton's method for self-concordant functions. They also developed interior-point methods for problems with generalized inequalities, and discussed ways of reformulating problems to satisfy the self-concordance assumption. The geometric programming reformulation on page 587, for example, is from [NN94, §6.3.1].

As mentioned on page 585, the complexity analysis shows that, contrary to what one might expect, the centering problems in the barrier method do not become more difficult as t increases, at least not in exact arithmetic. Practical experience, supported by theoretical results (Forsgren, Gill, and Wright [FGW02, §4.3.2], Nocedal and Wright [NW99, page 525]), also indicates that the effects of ill-conditioning on the computed solution of the Newton system are more benign than thought earlier.

Recent research on interior-point methods has concentrated on extending the primal-dual methods for linear programming, which converge faster and reach higher accuracies than (primal) barrier methods, to nonlinear convex problems. One popular approach, along the lines of the simple primal-dual method of §11.7, is based on linearizing modified KKT equations for a convex optimization problem in standard form, i.e., problem (11.1). More sophisticated algorithms of this type differ from algorithm 11.2 in the strategy used to select t (which is crucial to achieve superlinear asymptotic convergence), and the line search. We refer to Wright [Wri97, chapter 8], Ralph and Wright [RW97], den Hertog [dH93], Terlaky [Ter96], and the survey by Forsgren, Gill, and Wright [FGW02, §5] for details and references.

Other authors adopt the cone programming framework as starting point for extending primal-dual interior-point methods for linear programming to convex optimization (see for example, Nesterov and Todd [NT98]). This approach has resulted in efficient and accurate primal-dual methods for semidefinite and second-order programming (see the surveys by Todd [Tod01] and Alizadeh and Goldfarb [AG03]). As for linear programming, primal-dual methods for semidefinite programming are usually described as variations of Newton's method applied to modified KKT equations. Unlike in linear programming, however, the linearization can be carried out in many different ways, which lead to different search directions and algorithms; see Helmberg, Rendl, Vanderbei, and Wolkowicz [HRVW96], Kojima, Shindoh, and Hara [KSH97], Monteiro [Mon97], Nesterov and Todd [NT98], Zhang [Zha98], Alizadeh, Haeberly, and Overton [AHO98], and Todd, Toh, and Tütüncü [TTT98].

Great progress has also been made in the area of initialization and infeasibility detection. Homogeneous self-dual formulations provide an elegant and efficient alternative to the classical two-phase approach of §11.4; see Ye, Todd, and Mizuno [YTM94], Xu, Hung, and Ye [XHY96], Andersen and Ye [AY98], and Luo, Sturm, and Zhang [LSZ00] for details.

The primal-dual interior-point methods for semidefinite and second-order cone programming have been implemented in a number of software packages, including SeDuMi [Stu99], SDPT3 [TTT02], SDPA [FKN98], CSDP [Bor02], and DSDP [BY02]. A user-friendly interface to several of these codes is provided by YALMIP [Löf04].

The following books document the recent developments in this rapidly advancing field in greater detail: Vanderbei [Van96], Wright [Wri97], Roos, Terlaky, and Vial [RTV97], Ye [Ye97], Wolkowicz, Saigal, and Vandenberghe [WSV00], Ben-Tal and Nemirovski [BTN01], Renegar [Ren01], and Peng, Roos, and Terlaky [PRT02].

Exercises

The barrier method

11.1 Barrier method example. Consider the simple problem

minimize x² + 1
subject to 2 ≤ x ≤ 4,

which has feasible set [2, 4], and optimal point x⋆ = 2. Plot f0, and tf0 + φ, for several values of t > 0, versus x. Label x⋆(t).
11.2 What happens if the barrier method is applied to the LP
minimize x2
subject to x1 ≤ x2, 0 ≤ x2,
with variable x ∈ R2?
11.3 Boundedness of centering problem. Suppose the sublevel sets of (11.1),
minimize f0(x)
subject to fi(x) ≤ 0, i = 1,…,m
Ax = b,
are bounded. Show that the sublevel sets of the associated centering problem,
minimize tf0(x) + φ(x)
subject to Ax = b,
are bounded.
11.4 Adding a norm bound to ensure strong convexity of the centering problem. Suppose we
add the constraint xT x ≤ R2 to the problem (11.1):
minimize f0(x)
subject to fi(x) ≤ 0, i = 1,…,m
Ax = b
xT x ≤ R2.
Let φ̃ denote the logarithmic barrier function for this modified problem. Find a > 0 for which ∇²(tf0(x) + φ̃(x)) ≽ aI holds, for all feasible x.
11.5 Barrier method for second-order cone programming. Consider the SOCP (without equality constraints, for simplicity)

minimize f^T x
subject to ∥Ai x + bi∥₂ ≤ ci^T x + di, i = 1,…,m.        (11.63)

The constraint functions in this problem are not differentiable (since the Euclidean norm ∥u∥₂ is not differentiable at u = 0) so the (standard) barrier method cannot be applied. In §11.6, we saw that this SOCP can be solved by an extension of the barrier method that handles generalized inequalities. (See example 11.8, page 599, and page 601.) In this exercise, we show how the standard barrier method (with scalar constraint functions) can be used to solve the SOCP.

We first reformulate the SOCP as

minimize f^T x
subject to ∥Ai x + bi∥₂²/(ci^T x + di) ≤ ci^T x + di, i = 1,…,m        (11.64)
           ci^T x + di ≥ 0, i = 1,…,m.

The constraint function

fi(x) = ∥Ai x + bi∥₂²/(ci^T x + di) − ci^T x − di

is the composition of a quadratic-over-linear function with an affine function, and is twice differentiable (and convex), provided we define its domain as dom fi = {x | ci^T x + di > 0}. Note that the two problems (11.63) and (11.64) are not exactly equivalent. If ci^T x⋆ + di = 0 for some i, where x⋆ is the optimal solution of the SOCP (11.63), then the reformulated problem (11.64) is not solvable; x⋆ is not in its domain. Nevertheless we will see that the barrier method, applied to (11.64), produces arbitrarily accurate suboptimal solutions of (11.64), and hence also for (11.63).
(a) Form the log barrier φ for the problem (11.64). Compare it to the log barrier that arises when the SOCP (11.63) is solved using the barrier method for generalized inequalities (in §11.6).
(b) Show that if tf^T x + φ(x) is minimized, the minimizer x⋆(t) is 2m/t-suboptimal for the problem (11.63). It follows that the standard barrier method, applied to the reformulated problem (11.64), solves the SOCP (11.63), in the sense of producing arbitrarily accurate suboptimal solutions. This is the case even though the optimal point x⋆ need not be in the domain of the reformulated problem (11.64).
11.6 General barriers. The log barrier is based on the approximation −(1/t) log(−u) of the indicator function I₋(u) (see §11.2.1, page 563). We can also construct barriers from other approximations, which in turn yield generalizations of the central path and barrier method. Let h : R → R be a twice differentiable, closed, increasing convex function, with dom h = −R₊₊. (This implies h(u) → ∞ as u → 0.) One such function is h(u) = −log(−u); another example is h(u) = −1/u (for u < 0). Now consider the optimization problem (without equality constraints, for simplicity)

minimize f0(x)
subject to fi(x) ≤ 0, i = 1,…,m,

where fi are twice differentiable. We define the h-barrier for this problem as

φ_h(x) = Σ_{i=1}^m h(fi(x)),

with domain {x | fi(x) < 0, i = 1,…,m}. When h(u) = −log(−u), this is the usual logarithmic barrier; when h(u) = −1/u, φ_h is called the inverse barrier. We define the h-central path as

x⋆(t) = argmin tf0(x) + φ_h(x),

where t > 0 is a parameter. (We assume that for each t, the minimizer exists and is unique.)
(a) Explain why tf0(x) + φh(x) is convex in x, for each t > 0.
(b) Show how to construct a dual feasible λ from x⋆(t). Find the associated duality gap.
(c) For what functions h does the duality gap found in part (b) depend only on t and m (and no other problem data)?
11.7 Tangent to central path. This problem concerns dx⋆(t)/dt, which gives the tangent to the central path at the point x⋆(t). For simplicity, we consider a problem without equality constraints; the results readily generalize to problems with equality constraints.
(a) Find an explicit expression for dx⋆(t)/dt. Hint. Differentiate the centrality equa- tions (11.7) with respect to t.
(b) Show that f0(x⋆(t)) decreases as t increases. Thus, the objective value in the barrier method decreases, as the parameter t is increased. (We already know that the duality gap, which is m/t, decreases as t increases.)
11.8 Predictor-corrector method for centering problems. In the standard barrier method, x⋆(μt) is computed using Newton's method, starting from the initial point x⋆(t). One alternative that has been proposed is to make an approximation or prediction x̂ of x⋆(μt), and then start the Newton method for computing x⋆(μt) from x̂. The idea is that this should reduce the number of Newton steps, since x̂ is (presumably) a better initial point than x⋆(t). This method of centering is called a predictor-corrector method, since it first makes a prediction of what x⋆(μt) is, then corrects the prediction using Newton's method.

The most widely used predictor is the first-order predictor, based on the tangent to the central path, explored in exercise 11.7. This predictor is given by

x̂ = x⋆(t) + (dx⋆(t)/dt)(μt − t).

Derive an expression for the first-order predictor x̂. Compare it to the Newton update obtained, i.e., x⋆(t) + Δx_nt, where Δx_nt is the Newton step for μtf0(x) + φ(x), at x⋆(t). What can you say when the objective f0 is linear? (For simplicity, you can consider a problem without equality constraints.)
11.9 Dual feasible points near the central path. Consider the problem

minimize f0(x)
subject to fi(x) ≤ 0, i = 1,…,m,
with variable x ∈ Rn. We assume the functions fi are convex and twice differentiable. (We assume for simplicity there are no equality constraints.) Recall (from §11.2.2, page 565) that λi = −1/(tfi (x⋆ (t))), i = 1, . . . , m, is dual feasible, and in fact, x⋆ (t) minimizes L(x, λ). This allows us to evaluate the dual function for λ, which turns out to be g(λ) = f0(x⋆(t)) − m/t. In particular, we conclude that x⋆(t) is m/t-suboptimal.
In this problem we consider what happens when a point x is close to x⋆(t), but not quite centered. (This would occur if the centering steps were terminated early, or not carried out to full accuracy.) In this case, of course, we cannot claim that λi = −1/(tfi(x)), i = 1,…,m, is dual feasible, or that x is m/t-suboptimal. However, it turns out that a slightly more complicated formula does yield a dual feasible point, provided x is close enough to centered.
Let Δx_nt be the Newton step at x of the centering problem

minimize tf0(x) − Σ_{i=1}^m log(−fi(x)).
A formula that often gives a dual feasible point when ∆xnt is small (i.e., for x nearly
centered) is
λi = (1/(−tfi(x))) (1 + ∇fi(x)^T Δx_nt / (−fi(x))),  i = 1,…,m.
In this case, the vector x does not minimize L(x, λ), so there is no general formula for the dual function value g(λ) associated with λ. (If we have an analytical expression for the dual objective, however, we can simply evaluate g(λ).)
Verify that for a QCQP
minimize (1/2)xT P0x + q0T x + r0
subject to (1/2)xT Pix + qiT x + ri ≤ 0, i = 1,…,m,
the formula for λ yields a dual feasible point (i.e., λ ≽ 0 and L(x, λ) is bounded below) when ∆xnt is sufficiently small.

Hint. Define

x0 = x + Δx_nt,  xi = x − (1/(tλi fi(x))) Δx_nt,  i = 1,…,m.

Show that

∇f0(x0) + Σ_{i=1}^m λi∇fi(xi) = 0.

Now use fi(z) ≥ fi(xi) + ∇fi(xi)^T(z − xi), i = 0,…,m, to derive a lower bound on L(z, λ).

11.10 Another parametrization of the central path. We consider the problem (11.1), with central path x⋆(t) for t > 0, defined as the solution of

minimize tf0(x) − Σ_{i=1}^m log(−fi(x))
subject to Ax = b.
In this problem we explore another parametrization of the central path. For u > p⋆, let z⋆(u) denote the solution of

minimize −log(u − f0(x)) − Σ_{i=1}^m log(−fi(x))
subject to Ax = b.
Show that the curve defined by z⋆(u), for u > p⋆, is the central path. (In other words, for each u > p⋆, there is a t > 0 for which x⋆(t) = z⋆(u), and conversely, for each t > 0, there is an u > p⋆ for which z⋆(u) = x⋆(t)).
11.11 Method of analytic centers. In this problem we consider a variation on the barrier method, based on the parametrization of the central path described in exercise 11.10. For simplic- ity, we consider a problem with no equality constraints,
minimize f0 (x)
subject to fi(x) ≤ 0, i = 1,…,m.
The method of analytic centers starts with any strictly feasible initial point x(0), and any u(0) > f0(x(0)). We then set
u(1) = θu(0) + (1 − θ)f0(x(0)),
where θ ∈ (0, 1) is an algorithm parameter (usually chosen small), and then compute the next iterate as

x⁽¹⁾ = z⋆(u⁽¹⁾)

(using Newton's method, starting from x⁽⁰⁾). Here z⋆(s) denotes the minimizer of

−log(s − f0(x)) − Σ_{i=1}^m log(−fi(x)),

which we assume exists and is unique. This process is then repeated.

The point z⋆(s) is the analytic center of the inequalities

f0(x) ≤ s,  f1(x) ≤ 0, …, fm(x) ≤ 0,

hence the algorithm name.

Show that the method of centers works, i.e., x⁽ᵏ⁾ converges to an optimal point. Find a stopping criterion that guarantees that x is ǫ-suboptimal, where ǫ > 0.

Hint. The points x⁽ᵏ⁾ are on the central path; see exercise 11.10. Use this to show that

u⁺ − p⋆ ≤ ((m + θ)/(m + 1)) (u − p⋆),

where u and u⁺ are the values of u on consecutive iterations.

11.12 Barrier method for convex-concave games. We consider a convex-concave game with inequality constraints,

minimize_w maximize_z f0(w, z)
subject to fi(w) ≤ 0, i = 1,…,m
           f̃i(z) ≤ 0, i = 1,…,m̃.

Here w ∈ Rⁿ is the variable associated with minimizing the objective, and z ∈ Rⁿ̃ is the variable associated with maximizing the objective. The constraint functions fi and f̃i are convex and differentiable, and the objective function f0 is differentiable and convex-concave, i.e., convex in w, for each z, and concave in z, for each w. We assume for simplicity that dom f0 = Rⁿ × Rⁿ̃.

A solution or saddle-point for the game is a pair w⋆, z⋆, for which

f0(w⋆, z) ≤ f0(w⋆, z⋆) ≤ f0(w, z⋆)

holds for every feasible w and z. (For background on convex-concave games and functions, see §5.4.3, §10.3.4 and exercises 3.14, 5.24, 5.25, 10.10, and 10.13.) In this exercise we show how to solve this game using an extension of the barrier method, and the infeasible start Newton method (see §10.3).

(a) Let t > 0. Explain why the function

tf0(w, z) − Σ_{i=1}^m log(−fi(w)) + Σ_{i=1}^{m̃} log(−f̃i(z))

is convex-concave in (w, z). We will assume that it has a unique saddle-point, (w⋆(t), z⋆(t)), which can be found using the infeasible start Newton method.

(b) As in the barrier method for solving a convex optimization problem, we can derive a simple bound on the suboptimality of (w⋆(t), z⋆(t)), which depends only on the problem dimensions, and decreases to zero as t increases. Let W and Z denote the feasible sets for w and z,

W = {w | fi(w) ≤ 0, i = 1,…,m},  Z = {z | f̃i(z) ≤ 0, i = 1,…,m̃}.

Show that

f0(w⋆(t), z⋆(t)) ≤ inf_{w∈W} f0(w, z⋆(t)) + m/t,
f0(w⋆(t), z⋆(t)) ≥ sup_{z∈Z} f0(w⋆(t), z) − m̃/t,

and therefore

sup_{z∈Z} f0(w⋆(t), z) − inf_{w∈W} f0(w, z⋆(t)) ≤ (m + m̃)/t.
Self-concordance and complexity analysis
11.13 Self-concordance and negative entropy.
(a) Show that the negative entropy function x log x (on R++) is not self-concordant.
(b) Show that for any t > 0, tx log x − log x is self-concordant (on R++ ).
11.14 Self-concordance and the centering problem. Let φ be the logarithmic barrier function of problem (11.1). Suppose that the sublevel sets of (11.1) are bounded, and that tf0 + φ is closed and self-concordant. Show that t∇2 f0 (x) + ∇2 φ(x) ≻ 0, for all x ∈ dom φ. Hint. See exercises 9.17 and 11.3.
Barrier method for generalized inequalities
11.15 Generalized logarithm is K-increasing. Let ψ be a generalized logarithm for the proper
cone K. Suppose y ≻K 0.
(a) Show that ∇ψ(y) ≽K∗ 0, i.e., that ψ is K-nondecreasing. Hint. If ∇ψ(y) ̸≽K∗ 0, then there is some w ≻K 0 for which wT ∇ψ(y) ≤ 0. Use the inequality ψ(sw) ≤ ψ(y) + ∇ψ(y)T (sw − y), with s > 0.
(b) Now show that ∇ψ(y) ≻K∗ 0, i.e., that ψ is K-increasing. Hint. Show that ∇2ψ(y) ≺ 0, ∇ψ(y) ≽K∗ 0 imply ∇ψ(y) ≻K∗ 0.
11.16 [NN94, page 41] Properties of a generalized logarithm. Let ψ be a generalized logarithm for the proper cone K, with degree θ. Prove that the following properties hold at any y ≻K 0.
(a) ∇ψ(sy) = ∇ψ(y)/s for all s > 0.
(b) ∇ψ(y) = −∇²ψ(y)y.
(c) y^T∇²ψ(y)y = −θ.
(d) ∇ψ(y)^T∇²ψ(y)⁻¹∇ψ(y) = −θ.
11.17 Dual generalized logarithm. Let ψ be a generalized logarithm for the proper cone K, with degree θ. Show that the dual generalized logarithm ψ̄, defined in (11.49), satisfies

ψ̄(sv) = ψ̄(v) + θ log s,

for v ≻_{K*} 0, s > 0.

11.18 Is the function

ψ(y) = log(y_{n+1} − (Σ_{i=1}^n yi²)/y_{n+1}),

with dom ψ = {y ∈ R^{n+1} | y_{n+1} > (Σ_{i=1}^n yi²)^{1/2}}, a generalized logarithm for the second-order cone in R^{n+1}?

Implementation

11.19 Yet another method for computing the Newton step. Show that the Newton step for the barrier method, which is given by the solution of the linear equations (11.14), can be found by solving a larger set of linear equations with coefficient matrix

[ t∇²f0(x) + Σ_i (1/(−fi(x))) ∇²fi(x)   Df(x)^T          A^T ]
[ Df(x)                                 −diag(f(x))²     0   ]
[ A                                     0                0   ]

where f(x) = (f1(x),…,fm(x)).

For what types of problem structure might solving this larger system be interesting?

11.20 Network rate optimization via the dual problem. In this problem we examine a dual method for solving the network rate optimization problem of §11.8.4. To simplify the presentation we assume that the utility functions Ui are strictly concave, with dom Ui = R₊₊, and that they satisfy Ui′(xi) → ∞ as xi → 0 and Ui′(xi) → 0 as xi → ∞.

(a) Express the dual problem of (11.62) in terms of the conjugate utility functions Vi = (−Ui)*, defined as

Vi(λ) = sup_{x>0} (λx + Ui(x)).

Show that dom Vi = −R₊₊, and that for each λ < 0 there is a unique x with Ui′(x) = −λ.

(b) Describe a barrier method for the dual problem. Compare the complexity per iteration with the complexity of the method in §11.8.4. Distinguish the same two cases as in §11.8.4 (A^T A is sparse and AA^T is sparse).

Numerical experiments
11.21 Log-Chebyshev approximation with bounds. We consider an approximation problem: find x ∈ Rⁿ that satisfies the variable bounds l ≼ x ≼ u, and yields Ax ≈ b, where b ∈ Rᵐ. You can assume that l ≺ u, and b ≻ 0 (for reasons we explain below). We let ai^T denote the ith row of the matrix A.

We judge the approximation Ax ≈ b by the maximum fractional deviation, which is

max_{i=1,…,m} max{(ai^T x)/bi, bi/(ai^T x)} = max_{i=1,…,m} max{ai^T x, bi}/min{ai^T x, bi},

when Ax ≻ 0; we define the maximum fractional deviation as ∞ if Ax ⊁ 0.

The problem of minimizing the maximum fractional deviation is called the fractional Chebyshev approximation problem, or the logarithmic Chebyshev approximation problem, since it is equivalent to minimizing the objective

max_{i=1,…,m} |log ai^T x − log bi|.

(See also exercise 6.3, part (c).)
(a) Formulate the fractional Chebyshev approximation problem (with variable bounds) as a convex optimization problem with twice differentiable objective and constraint functions.
(b) Implement a barrier method that solves the fractional Chebyshev approximation problem. You can assume an initial point x(0), satisfying l ≺ x(0) ≺ u, Ax(0) ≻ 0, is known.
11.22 Maximum volume rectangle inside a polyhedron. Consider the problem described in exercise 8.16, i.e., finding the maximum volume rectangle R = {x | l ≼ x ≼ u} that lies in a polyhedron described by a set of linear inequalities, P = {x | Ax ≼ b}. Implement a barrier method for solving this problem. You can assume that b ≻ 0, which means that for small l ≺ 0 and u ≻ 0, the rectangle R lies inside P.

Test your implementation on several simple examples. Find the maximum volume rectangle that lies in the polyhedron defined by

A = [  0  −1 ]
    [  2  −4 ]
    [  2   1 ] ,    b = 1.
    [ −4   4 ]
    [ −4   0 ]

Plot this polyhedron, and the maximum volume rectangle that lies inside it.
11.23 SDP bounds and heuristics for the two-way partitioning problem. In this exercise we consider the two-way partitioning problem (5.7), described on page 219, and also in exercise 5.39:

minimize x^T W x
subject to xi² = 1, i = 1,…,n,        (11.65)

with variable x ∈ Rⁿ. We assume, without loss of generality, that W ∈ Sⁿ satisfies Wii = 0. We denote the optimal value of the partitioning problem as p⋆, and x⋆ will denote an optimal partition. (Note that −x⋆ is also an optimal partition.)

The Lagrange dual of the two-way partitioning problem (11.65) is given by the SDP

maximize −1^T ν
subject to W + diag(ν) ≽ 0,        (11.66)

with variable ν ∈ Rn. The dual of this SDP is
minimize tr(W X )
subject to X ≽ 0 (11.67)
Xii = 1, i = 1,…,n,
with variable X ∈ Sn. (This SDP can be interpreted as a relaxation of the two-way partitioning problem (11.65); see exercise 5.39.) The optimal values of these two SDPs are equal, and give a lower bound, which we denote d⋆, on the optimal value p⋆. Let ν⋆ and X⋆ denote optimal points for the two SDPs.
(a) Implement a barrier method that solves the SDP (11.66) and its dual (11.67), given the weight matrix W. Explain how you obtain nearly optimal ν and X, give formulas for any Hessians and gradients that your method requires, and explain how you compute the Newton step. Test your implementation on some small problem instances, comparing the bound you find with the optimal value (which can be found by checking the objective value of all 2ⁿ partitions). Try your implementation on a randomly chosen problem instance large enough that you cannot find the optimal partition by exhaustive search (e.g., n = 100).
(b) A heuristic for partitioning. In exercise 5.39, you found that if X⋆ has rank one, then it must have the form X⋆ = x⋆(x⋆)^T, where x⋆ is optimal for the two-way partitioning problem. This suggests the following simple heuristic for finding a good partition (if not the best): solve the SDPs above, to find X⋆ (and the bound d⋆). Let v denote an eigenvector of X⋆ associated with its largest eigenvalue, and let x̂ = sign(v). The vector x̂ is our guess for a good partition.
Try this heuristic on some small problem instances, and the large problem instance you used in part (a). Compare the objective value of your heuristic partition, x̂^T W x̂, with the lower bound d⋆.
(c) A randomized method. Another heuristic technique for finding a good partition, given the solution X⋆ of the SDP (11.67), is based on randomization. The method is simple: we generate independent samples x⁽¹⁾, …, x⁽ᴷ⁾ from a normal distribution on Rⁿ, with zero mean and covariance X⋆. For each sample we consider the heuristic approximate solution x̂⁽ᵏ⁾ = sign(x⁽ᵏ⁾). We then take the best among these, i.e., the one with lowest cost. Try out this procedure on some small problem instances, and the large problem instance you considered in part (a). (A small sketch of this sampling-and-rounding step appears after this exercise.)
(d) A greedy heuristic refinement. Suppose you are given a partition x, i.e., xi ∈ {−1, 1}, i = 1,…,n. How does the objective value change if we move element i from one set to the other, i.e., change xi to −xi? Now consider the following simple greedy algorithm: given a starting partition x, move the element that gives the largest reduction in the objective. Repeat this procedure until no reduction in objective can be obtained by moving an element from one set to the other.
Try this heuristic on some problem instances, including the large one, starting from various initial partitions, including x = 1, the heuristic approximate solution found in part (b), and the randomly generated approximate solutions found in part (c). How much does this greedy refinement improve your approximate solutions from parts (b) and (c)?
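For concreteness, the sampling-and-rounding step of part (c) can be sketched as follows. This is our illustration only (the function name `randomized_partition` is hypothetical), not a full solution of the exercise.

```python
import numpy as np

def randomized_partition(W, Xstar, K=100, seed=0):
    """Randomized rounding heuristic: sample from N(0, X*), round each
    sample with sign, and keep the partition of lowest cost."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    samples = rng.multivariate_normal(np.zeros(n), Xstar, size=K)
    best_x, best_val = None, np.inf
    for xk in np.sign(samples):
        xk[xk == 0] = 1.0               # avoid zero entries after sign()
        val = xk @ W @ xk
        if val < best_val:
            best_x, best_val = xk, val
    return best_x, best_val
```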
11.24 Barrier and primal-dual interior-point methods for quadratic programming. Implement a barrier method, and a primal-dual method, for solving the QP (without equality constraints, for simplicity)

minimize (1/2)x^T P x + q^T x
subject to Ax ≼ b,

with A ∈ R^{m×n}. You can assume a strictly feasible initial point is given. Test your codes on several examples. For the barrier method, plot the duality gap versus Newton steps. For the primal-dual interior-point method, plot the surrogate duality gap and the norm of the dual residual versus iteration number.

Appendices

Appendix A
Mathematical background
In this appendix we give a brief review of some basic concepts from analysis and linear algebra. The treatment is by no means complete, and is meant mostly to set out our notation.
A.1 Norms
A.1.1 Inner product, Euclidean norm, and angle
The standard inner product on Rⁿ, the set of real n-vectors, is given by

⟨x, y⟩ = x^T y = Σ_{i=1}^n xi yi,

for x, y ∈ Rⁿ. In this book we use the notation x^T y, instead of ⟨x, y⟩. The Euclidean norm, or l2-norm, of a vector x ∈ Rⁿ is defined as

∥x∥₂ = (x^T x)^{1/2} = (x1² + ··· + xn²)^{1/2}.

The Cauchy-Schwarz inequality states that |x^T y| ≤ ∥x∥₂∥y∥₂ for any x, y ∈ Rⁿ. The (unsigned) angle between nonzero vectors x, y ∈ Rⁿ is defined as

∠(x, y) = cos⁻¹(x^T y / (∥x∥₂∥y∥₂)),        (A.1)

where we take cos⁻¹(u) ∈ [0, π]. We say x and y are orthogonal if x^T y = 0.

The standard inner product on R^{m×n}, the set of m × n real matrices, is given by

⟨X, Y⟩ = tr(X^T Y) = Σ_{i=1}^m Σ_{j=1}^n Xij Yij,

for X, Y ∈ R^{m×n}. (Here tr denotes trace of a matrix, i.e., the sum of its diagonal elements.) We use the notation tr(X^T Y) instead of ⟨X, Y⟩. Note that the inner product of two matrices is the inner product of the associated vectors, in R^{mn}, obtained by listing the coefficients of the matrices in some order, such as row major.
The Frobenius norm of a matrix X ∈ R^{m×n} is given by

∥X∥_F = (tr(X^T X))^{1/2} = (Σ_{i=1}^m Σ_{j=1}^n Xij²)^{1/2}.        (A.2)

The Frobenius norm is the Euclidean norm of the vector obtained by listing the coefficients of the matrix. (The l2-norm of a matrix is a different norm; see §A.1.5.)
The standard inner product on Sⁿ, the set of symmetric n × n matrices, is given by

⟨X, Y⟩ = tr(XY) = Σ_{i=1}^n Σ_{j=1}^n Xij Yij = Σ_{i=1}^n Xii Yii + 2 Σ_{i<j} Xij Yij.

A.1.2 Norms, distance, and unit ball

A.2.1 Open and closed sets

Suppose C ⊆ Rⁿ. An element x ∈ C is called an interior point of C if there exists an ǫ > 0 for which

{y | ∥y − x∥₂ ≤ ǫ} ⊆ C,

i.e., there exists a ball centered at x that lies entirely in C. The set of all points interior to C is called the interior of C and is denoted intC. (Since all norms on Rn are equivalent to the Euclidean norm, all norms generate the same set of interior points.) A set C is open if int C = C, i.e., every point in C is an interior point. A set C ⊆ Rn is closed if its complement Rn \ C = {x ∈ Rn | x ̸∈ C} is open.
The closure of a set C is defined as
cl C = Rn \ int(Rn \ C),
i.e., the complement of the interior of the complement of C. A point x is in the closure of C if for every ǫ > 0, there is a y ∈ C with ∥x − y∥2 ≤ ǫ.
We can also describe closed sets and the closure in terms of convergent sequences and limit points. A set C is closed if and only if it contains the limit point of every convergent sequence in it. In other words, if x1 , x2 , . . . converges to x, and xi ∈ C , then x ∈ C. The closure of C is the set of all limit points of convergent sequences in C.
The boundary of the set C is defined as

bd C = cl C \ int C.

A boundary point x (i.e., a point x ∈ bd C) satisfies the following property: For all ǫ > 0, there exist y ∈ C and z ∉ C with

∥y − x∥₂ ≤ ǫ,  ∥z − x∥₂ ≤ ǫ,

i.e., there exist arbitrarily close points in C, and also arbitrarily close points not in C. We can characterize closed and open sets in terms of the boundary operation: C is closed if it contains its boundary, i.e., bd C ⊆ C. It is open if it contains no boundary points, i.e., C ∩ bd C = ∅.
A.2.2 Supremum and infimum

Suppose C ⊆ R. A number a is an upper bound on C if for each x ∈ C, x ≤ a. The set of upper bounds on a set C is either empty (in which case we say C is unbounded above), all of R (only when C = ∅), or a closed infinite interval [b, ∞). The number b is called the least upper bound or supremum of the set C, and is denoted sup C. We take sup ∅ = −∞, and sup C = ∞ if C is unbounded above. When sup C ∈ C, we say the supremum of C is attained or achieved.

When the set C is finite, sup C is the maximum of its elements. Some authors use the notation max C to denote supremum, when it is attained, but we follow standard mathematical convention, using max C only when the set C is finite.

We define lower bound, and infimum, in a similar way. A number a is a lower bound on C ⊆ R if for each x ∈ C, a ≤ x. The infimum (or greatest lower bound) of a set C ⊆ R is defined as inf C = −sup(−C). When C is finite, the infimum is the minimum of its elements. We take inf ∅ = ∞, and inf C = −∞ if C is unbounded below, i.e., has no lower bound.

A.3 Functions
A.3.1 Function notation
Our notation for functions is mostly standard, with one exception. When we write
f:A→B
we mean that f is a function on the set domf ⊆ A into the set B; in particular we can have dom f a proper subset of the set A. Thus the notation f : Rn → Rm means that f maps (some) n-vectors into m-vectors; it does not mean that f(x) is defined for every x ∈ Rn. This convention is similar to function declarations in computer languages. Specifying the data types of the input and output arguments of a function gives the syntax of that function; it does not guarantee that any input argument with the specified data type is valid.
As an example consider the function f : Sn → R, given by
f(X) = log det X, (A.3)
with domf = Sn++. The notation f : Sn → R specifies the syntax of f: it takes as argument a symmetric n × n matrix, and returns a real number. The notation dom f = Sn++ specifies which symmetric n × n matrices are valid input arguments for f (i.e., only positive definite ones). The formula (A.3) specifies what f(X) is, for X ∈ domf.
A.3.2 Continuity
A function f : Rn → Rm is continuous at x ∈ dom f if for all ǫ > 0 there exists a δ such that
y ∈ dom f, ∥y − x∥₂ ≤ δ  =⇒  ∥f(y) − f(x)∥₂ ≤ ǫ.
Continuity can be described in terms of limits: whenever the sequence x1,x2,… in dom f converges to a point x ∈ dom f, the sequence f(x1), f(x2), . . . converges to f(x), i.e.,
lim_{i→∞} f(xi) = f(lim_{i→∞} xi).
A function f is continuous if it is continuous at every point in its domain.

A.3.3 Closed functions

A function f : Rⁿ → R is said to be closed if, for each α ∈ R, the sublevel set

{x ∈ dom f | f(x) ≤ α}

is closed. This is equivalent to the condition that the epigraph of f,

epi f = {(x, t) ∈ R^{n+1} | x ∈ dom f, f(x) ≤ t},

is closed. (This definition is general, but is usually only applied to convex functions.)
If f : Rⁿ → R is continuous, and dom f is closed, then f is closed. If f : Rⁿ → R is continuous, with dom f open, then f is closed if and only if f converges to ∞ along every sequence converging to a boundary point of dom f. In other words, if lim_{i→∞} xi = x ∈ bd dom f, with xi ∈ dom f, we have lim_{i→∞} f(xi) = ∞.
Example A.1 Examples on R.

• The function f : R → R, with f(x) = x log x, dom f = R++, is not closed.
• The function f : R → R, with

  f(x) = { x log x   x > 0
         { 0         x = 0,

  dom f = R+, is closed.
• The function f(x) = −log x, dom f = R++, is closed.
A.4 Derivatives

A.4.1 Derivative and gradient

Suppose f : Rⁿ → Rᵐ and x ∈ int dom f. The function f is differentiable at x if there exists a matrix Df(x) ∈ R^{m×n} that satisfies

lim_{z ∈ dom f, z ≠ x, z → x} ∥f(z) − f(x) − Df(x)(z − x)∥₂ / ∥z − x∥₂ = 0,        (A.4)

in which case we refer to Df(x) as the derivative (or Jacobian) of f at x. (There can be at most one matrix that satisfies (A.4).) The function f is differentiable if dom f is open, and it is differentiable at every point in its domain.

The affine function of z given by

f(x) + Df(x)(z − x)

is called the first-order approximation of f at (or near) x. Evidently this function agrees with f at z = x; when z is close to x, this affine function is very close to f. The derivative can be found by deriving the first-order approximation of the function f at x (i.e., the matrix Df(x) that satisfies (A.4)), or from partial derivatives:

Df(x)ij = ∂fi(x)/∂xj,  i = 1,…,m,  j = 1,…,n.

Gradient

When f is real-valued (i.e., f : Rⁿ → R) the derivative Df(x) is a 1 × n matrix, i.e., it is a row vector. Its transpose is called the gradient of the function:

∇f(x) = Df(x)^T,

which is a (column) vector, i.e., in Rⁿ. Its components are the partial derivatives of f:

∇f(x)i = ∂f(x)/∂xi,  i = 1,…,n.

The first-order approximation of f at a point x ∈ int dom f can be expressed as (the affine function of z)

f(x) + ∇f(x)^T(z − x).

Examples
As a simple example consider the quadratic function f : Rⁿ → R,

f(x) = (1/2)x^T Px + q^T x + r,

where P ∈ Sⁿ, q ∈ Rⁿ, and r ∈ R. Its derivative at x is the row vector Df(x) = x^T P + q^T, and its gradient is

∇f(x) = Px + q.
As a more interesting example, we consider the function f : Sⁿ → R, given by

f(X) = log det X,  dom f = Sⁿ++.

One (tedious) way to find the gradient of f is to introduce a basis for Sⁿ, find the gradient of the associated function, and finally translate the result back to Sⁿ. Instead, we will directly find the first-order approximation of f at X ∈ Sⁿ++. Let Z ∈ Sⁿ++ be close to X, and let ΔX = Z − X (which is assumed to be small). We have

log det Z = log det(X + ΔX)
          = log det(X^{1/2}(I + X^{-1/2}ΔX X^{-1/2})X^{1/2})
          = log det X + log det(I + X^{-1/2}ΔX X^{-1/2})
          = log det X + Σ_{i=1}^n log(1 + λi),

where λi is the ith eigenvalue of X^{-1/2}ΔX X^{-1/2}. Now we use the fact that ΔX is small, which implies the λi are small, so to first order we have log(1 + λi) ≈ λi. Using this first-order approximation in the expression above, we get

log det Z ≈ log det X + Σ_{i=1}^n λi
          = log det X + tr(X^{-1/2}ΔX X^{-1/2})
          = log det X + tr(X⁻¹ΔX)
          = log det X + tr(X⁻¹(Z − X)),

where we have used the fact that the sum of the eigenvalues is the trace, and the property tr(AB) = tr(BA).

Thus, the first-order approximation of f at X is the affine function of Z given by

f(Z) ≈ f(X) + tr(X⁻¹(Z − X)).

Noting that the second term on the righthand side is the standard inner product of X⁻¹ and Z − X, we can identify X⁻¹ as the gradient of f at X. Thus, we can write the simple formula

∇f(X) = X⁻¹.

This result should not be surprising, since the derivative of log x, on R++, is 1/x.
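Since this gradient identity is easy to check numerically, here is a small sketch of ours (not from the book) that compares the inner product tr(X⁻¹ΔX) with a finite-difference estimate of log det:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
X = B @ B.T + n * np.eye(n)          # a positive definite X
dX = rng.standard_normal((n, n))
dX = (dX + dX.T) / 2                 # a symmetric direction

_, ld0 = np.linalg.slogdet(X)
eps = 1e-6
_, ld1 = np.linalg.slogdet(X + eps * dX)

# <grad f(X), dX> = tr(X^{-1} dX) should match the finite difference.
inner = np.trace(np.linalg.solve(X, dX))
print((ld1 - ld0) / eps, inner)      # the two numbers should agree closely
```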
A.4.2 Chain rule

Suppose f : Rⁿ → Rᵐ is differentiable at x ∈ int dom f and g : Rᵐ → Rᵖ is differentiable at f(x) ∈ int dom g. Define the composition h : Rⁿ → Rᵖ by h(z) = g(f(z)). Then h is differentiable at x, with derivative

Dh(x) = Dg(f(x))Df(x).        (A.5)

As an example, suppose f : Rⁿ → R, g : R → R, and h(x) = g(f(x)). Taking the transpose of Dh(x) = Dg(f(x))Df(x) yields

∇h(x) = g′(f(x))∇f(x).        (A.6)
Composition with affine function

Suppose f : Rⁿ → Rᵐ is differentiable, A ∈ R^{n×p}, and b ∈ Rⁿ. Define g : Rᵖ → Rᵐ as g(x) = f(Ax + b), with dom g = {x | Ax + b ∈ dom f}. The derivative of g is, by the chain rule (A.5), Dg(x) = Df(Ax + b)A.

When f is real-valued (i.e., m = 1), we obtain the formula for the gradient of a composition of a function with an affine function,

∇g(x) = A^T∇f(Ax + b).

For example, suppose that f : Rⁿ → R, x, v ∈ Rⁿ, and we define the function f̃ : R → R by f̃(t) = f(x + tv). (Roughly speaking, f̃ is f, restricted to the line {x + tv | t ∈ R}.) Then we have

Df̃(t) = f̃′(t) = ∇f(x + tv)^T v.

(The scalar f̃′(0) is the directional derivative of f, at x, in the direction v.)
Example A.2 Consider the function f : Rⁿ → R, with dom f = Rⁿ and

f(x) = log Σ_{i=1}^m exp(ai^T x + bi),

where a1,…,am ∈ Rⁿ, and b1,…,bm ∈ R. We can find a simple expression for its gradient by noting that it is the composition of the affine function Ax + b, where A ∈ R^{m×n} with rows a1^T,…,am^T, and the function g : Rᵐ → R given by g(y) = log(Σ_{i=1}^m exp yi). Simple differentiation (or the formula (A.6)) shows that

∇g(y) = (1/Σ_{i=1}^m exp yi) (exp y1, …, exp ym),        (A.7)

so by the composition formula we have

∇f(x) = (1/(1^T z)) A^T z,

where zi = exp(ai^T x + bi), i = 1,…,m.
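A quick numerical sanity check of this gradient formula, with made-up data (a sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = rng.standard_normal(n)

z = np.exp(A @ x + b)
grad = A.T @ z / z.sum()             # the formula (1/1^T z) A^T z

# Finite-difference comparison, one coordinate at a time.
f = lambda x: np.log(np.exp(A @ x + b).sum())
eps = 1e-6
fd = np.array([(f(x + eps * e) - f(x)) / eps for e in np.eye(n)])
print(np.allclose(grad, fd, atol=1e-4))   # expect True
```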
Example A.3 We derive an expression for ∇f(x), where

f(x) = log det(F0 + x1F1 + ··· + xnFn),

where F0,…,Fn ∈ Sᵖ, and

dom f = {x ∈ Rⁿ | F0 + x1F1 + ··· + xnFn ≻ 0}.

The function f is the composition of the affine mapping from x ∈ Rⁿ to F0 + x1F1 + ··· + xnFn ∈ Sᵖ, with the function log det X. We use the chain rule to evaluate

∂f(x)/∂xi = tr(Fi∇ log det(F)) = tr(F⁻¹Fi),

where F = F0 + x1F1 + ··· + xnFn. Thus we have

∇f(x) = (tr(F⁻¹F1), …, tr(F⁻¹Fn)).
A.4.3 Second derivative

In this section we review the second derivative of a real-valued function f : Rⁿ → R. The second derivative or Hessian matrix of f at x ∈ int dom f, denoted ∇²f(x), is given by

∇²f(x)ij = ∂²f(x)/∂xi∂xj,  i = 1,…,n,  j = 1,…,n,

provided f is twice differentiable at x, where the partial derivatives are evaluated at x. The second-order approximation of f, at or near x, is the quadratic function of z defined by

f̂(z) = f(x) + ∇f(x)^T(z − x) + (1/2)(z − x)^T∇²f(x)(z − x).

This second-order approximation satisfies

lim_{z ∈ dom f, z ≠ x, z → x} |f(z) − f̂(z)| / ∥z − x∥₂² = 0.

Not surprisingly, the second derivative can be interpreted as the derivative of the first derivative. If f is differentiable, the gradient mapping is the function ∇f : Rⁿ → Rⁿ, with dom ∇f = dom f, with value ∇f(x) at x. The derivative of this mapping is

D∇f(x) = ∇²f(x).
Examples

As a simple example consider the quadratic function f : Rⁿ → R,

f(x) = (1/2)x^T Px + q^T x + r,

where P ∈ Sⁿ, q ∈ Rⁿ, and r ∈ R. Its gradient is ∇f(x) = Px + q, so its Hessian is given by ∇²f(x) = P. The second-order approximation of a quadratic function is itself.
As a more complicated example, we consider again the function f : Sⁿ → R, given by f(X) = log det X, with dom f = Sⁿ++. To find the second-order approximation (and therefore, the Hessian), we will derive a first-order approximation of the gradient, ∇f(X) = X⁻¹. For Z ∈ Sⁿ++ near X ∈ Sⁿ++, and ΔX = Z − X, we have

Z⁻¹ = (X + ΔX)⁻¹
    = (X^{1/2}(I + X^{-1/2}ΔX X^{-1/2})X^{1/2})⁻¹
    = X^{-1/2}(I + X^{-1/2}ΔX X^{-1/2})⁻¹X^{-1/2}
    ≈ X^{-1/2}(I − X^{-1/2}ΔX X^{-1/2})X^{-1/2}
    = X⁻¹ − X⁻¹ΔX X⁻¹,

using the first-order approximation (I + A)⁻¹ ≈ I − A, valid for A small.

This approximation is enough for us to identify the Hessian of f at X. The Hessian is a quadratic form on Sⁿ. Such a quadratic form is cumbersome to describe in the general case, since it requires four indices. But from the first-order approximation of the gradient above, the quadratic form can be expressed as

−tr(X⁻¹UX⁻¹V),

where U, V ∈ Sⁿ are the arguments of the quadratic form. (This generalizes the expression for the scalar case: (log x)″ = −1/x².)

Now we have the second-order approximation of f near X:

f(Z) = f(X + ΔX)
     ≈ f(X) + tr(X⁻¹ΔX) − (1/2) tr(X⁻¹ΔX X⁻¹ΔX)
     ≈ f(X) + tr(X⁻¹(Z − X)) − (1/2) tr(X⁻¹(Z − X)X⁻¹(Z − X)).

A.4.4 Chain rule for second derivative
A general chain rule for the second derivative is cumbersome in most cases, so we will state it only for some special cases that we will need.
Composition with scalar function
Suppose f : Rn → R, g : R → R, and h(x) = g(f(x)). Simply working out the partial derivatives yields
∇²h(x) = g′(f(x))∇²f(x) + g″(f(x))∇f(x)∇f(x)^T.        (A.8)

Composition with affine function

Suppose f : Rⁿ → R, A ∈ R^{n×m}, and b ∈ Rⁿ. Define g : Rᵐ → R by g(x) = f(Ax + b). Then we have

∇²g(x) = A^T∇²f(Ax + b)A.
As an example, consider the restriction of a real-valued function f to a line, i.e., the function f̃(t) = f(x + tv), where x and v are fixed. Then we have

∇²f̃(t) = f̃″(t) = v^T∇²f(x + tv)v.
Example A.4 We consider the function f : Rⁿ → R from example A.2,

f(x) = log Σ_{i=1}^m exp(ai^T x + bi),

where a1,…,am ∈ Rⁿ, and b1,…,bm ∈ R. By noting that f(x) = g(Ax + b), where g(y) = log(Σ_{i=1}^m exp yi), we can obtain a simple formula for the Hessian of f. Taking partial derivatives, or using the formula (A.8), noting that g is the composition of log with Σ_{i=1}^m exp yi, yields

∇²g(y) = diag(∇g(y)) − ∇g(y)∇g(y)^T,

where ∇g(y) is given in (A.7). By the composition formula we have

∇²f(x) = A^T ((1/(1^T z)) diag(z) − (1/(1^T z)²) zz^T) A,

where zi = exp(ai^T x + bi), i = 1,…,m.
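The Hessian formula can be checked the same way as the gradient in example A.2; here is a small NumPy sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = rng.standard_normal(n)

def grad(x):
    w = np.exp(A @ x + b)
    return A.T @ (w / w.sum())

z = np.exp(A @ x + b)
s = z.sum()
H = A.T @ (np.diag(z) / s - np.outer(z, z) / s**2) @ A

# Finite-difference Jacobian of the gradient, column by column.
eps = 1e-6
H_fd = np.column_stack([(grad(x + eps * e) - grad(x)) / eps
                        for e in np.eye(n)])
print(np.allclose(H, H_fd, atol=1e-4))    # expect True
```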
A.5 Linear algebra

A.5.1 Range and nullspace
Let A ∈ R^{m×n} (i.e., A is a real matrix with m rows and n columns). The range of A, denoted R(A), is the set of all vectors in Rᵐ that can be written as linear combinations of the columns of A, i.e.,
R(A) = {Ax | x ∈ Rn}.
The range R(A) is a subspace of Rm, i.e., it is itself a vector space. Its dimension is the rank of A, denoted rankA. The rank of A can never be greater than the minimum of m and n. We say A has full rank if rank A = min{m, n}.
The nullspace (or kernel) of A, denoted N(A), is the set of all vectors x mapped into zero by A:
N (A) = {x | Ax = 0}. The nullspace is a subspace of Rn.
Orthogonal decomposition induced by A
If V is a subspace of Rn, its orthogonal complement, denoted V⊥, is defined as
V⊥ ={x|zTx=0forallz∈V}. (As one would expect of a complement, we have V⊥⊥ = V.)
A basic result of linear algebra is that, for any A ∈ Rm×n, we have N(A) = R(AT )⊥.
(Applying the result to AT we also have R(A) = N (AT )⊥ .) This result is often stated as
N(A) ⊕ R(A^T) = Rⁿ.        (A.9)

Here the symbol ⊕ refers to orthogonal direct sum, i.e., the sum of two subspaces that are orthogonal. The decomposition (A.9) of Rn is called the orthogonal de- composition induced by A.
A.5.2 Symmetric eigenvalue decomposition

Suppose A ∈ Sⁿ, i.e., A is a real symmetric n × n matrix. Then A can be factored as

A = QΛQ^T,        (A.10)

where Q ∈ R^{n×n} is orthogonal, i.e., satisfies Q^TQ = I, and Λ = diag(λ1, …, λn). The (real) numbers λi are the eigenvalues of A, and are the roots of the characteristic polynomial det(sI − A). The columns of Q form an orthonormal set of eigenvectors of A. The factorization (A.10) is called the spectral decomposition or (symmetric) eigenvalue decomposition of A.

We order the eigenvalues as λ1 ≥ λ2 ≥ ··· ≥ λn. We use the notation λi(A) to refer to the ith largest eigenvalue of A ∈ Sⁿ. We usually write the largest or maximum eigenvalue as λ1(A) = λ_max(A), and the least or minimum eigenvalue as λn(A) = λ_min(A).

The determinant and trace can be expressed in terms of the eigenvalues,

det A = Π_{i=1}^n λi,  tr A = Σ_{i=1}^n λi,

as can the spectral and Frobenius norms,

∥A∥₂ = max_{i=1,…,n} |λi| = max{λ1, −λn},  ∥A∥_F = (Σ_{i=1}^n λi²)^{1/2}.

The largest and smallest eigenvalues satisfy

λ_max(A) = sup_{x≠0} x^TAx/x^Tx,  λ_min(A) = inf_{x≠0} x^TAx/x^Tx.

In particular, for any x, we have

λ_min(A)x^Tx ≤ x^TAx ≤ λ_max(A)x^Tx,

with both inequalities tight for (different) choices of x.
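These variational characterizations are easy to try out numerically; a small sketch of ours using np.linalg.eigh:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                  # a symmetric matrix

lam, Q = np.linalg.eigh(A)         # eigh returns eigenvalues in ascending order
x = rng.standard_normal(n)
r = x @ A @ x / (x @ x)            # a Rayleigh quotient

# lambda_min <= Rayleigh quotient <= lambda_max, tight at eigenvectors.
assert lam[0] <= r <= lam[-1]
assert np.isclose(Q[:, -1] @ A @ Q[:, -1], lam[-1])
```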
Definiteness and matrix inequalities

A matrix A ∈ Sⁿ is called positive definite if for all x ≠ 0, x^TAx > 0. We denote this as A ≻ 0. By the inequality above, we see that A ≻ 0 if and only if all its eigenvalues are positive, i.e., λ_min(A) > 0. If −A is positive definite, we say A is negative definite, which we write as A ≺ 0. We use Sⁿ++ to denote the set of positive definite matrices in Sⁿ.

If A satisfies x^TAx ≥ 0 for all x, we say that A is positive semidefinite or nonnegative definite. If −A is nonnegative definite, i.e., if x^TAx ≤ 0 for all x, we say that A is negative semidefinite or nonpositive definite. We use Sⁿ+ to denote the set of nonnegative definite matrices in Sⁿ.

For A, B ∈ Sⁿ, we use A ≺ B to mean B − A ≻ 0, and so on. These inequalities are called matrix inequalities, or generalized inequalities associated with the positive semidefinite cone.
Symmetric squareroot

Let A ∈ Sⁿ+, with eigenvalue decomposition A = Q diag(λ1, …, λn)Q^T. We define the (symmetric) squareroot of A as

A^{1/2} = Q diag(λ1^{1/2}, …, λn^{1/2})Q^T.

The squareroot A^{1/2} is the unique symmetric positive semidefinite solution of the equation X² = A.
A.5.3 Generalized eigenvalue decomposition
The generalized eigenvalues of a pair of symmetric matrices (A, B) ∈ Sn × Sn are defined as the roots of the polynomial det(sB − A).

We are usually interested in matrix pairs with B ∈ Sn++. In this case the generalized eigenvalues are also the eigenvalues of B−1/2AB−1/2 (which are real). As with the standard eigenvalue decomposition, we order the generalized eigen- values in nonincreasing order, as λ1 ≥ λ2 ≥ ··· ≥ λn, and denote the maximum generalized eigenvalue by λmax(A, B).
When B ∈ Sn++, the pair of matrices can be factored as
A = V ΛV T , B = V V T , (A.11)
where V ∈ Rn×n is nonsingular, and Λ = diag(λ1, . . . , λn), where λi are the generalized eigenvalues of the pair (A,B). The decomposition (A.11) is called the generalized eigenvalue decomposition.
The generalized eigenvalue decomposition is related to the standard eigenvalue decomposition of the matrix B−1/2AB−1/2. If QΛQT is the eigenvalue decompo- sition of B−1/2AB−1/2, then (A.11) holds with V = B1/2Q.
A.5.4 Singular value decomposition

Suppose A ∈ R^{m×n} with rank A = r. Then A can be factored as

A = UΣV^T,        (A.12)

where U ∈ R^{m×r} satisfies U^TU = I, V ∈ R^{n×r} satisfies V^TV = I, and Σ = diag(σ1, …, σr), with

σ1 ≥ σ2 ≥ ··· ≥ σr > 0.

The factorization (A.12) is called the singular value decomposition (SVD) of A. The columns of U are called left singular vectors of A, the columns of V are right singular vectors, and the numbers σi are the singular values. The singular value decomposition can be written

A = Σ_{i=1}^r σi ui vi^T,

where ui ∈ Rᵐ are the left singular vectors, and vi ∈ Rⁿ are the right singular vectors.

The singular value decomposition of a matrix A is closely related to the eigenvalue decomposition of the (symmetric, nonnegative definite) matrix A^TA. Using (A.12) we can write

A^TA = VΣ²V^T = [V Ṽ] [ Σ²  0 ] [V Ṽ]^T,
                       [ 0   0 ]

where Ṽ is any matrix for which [V Ṽ] is orthogonal. The righthand expression is the eigenvalue decomposition of A^TA, so we conclude that its nonzero eigenvalues are the singular values of A squared, and the associated eigenvectors of A^TA are the right singular vectors of A. A similar analysis of AA^T shows that its nonzero eigenvalues are also the squares of the singular values of A, and the associated eigenvectors are the left singular vectors of A.
The first or largest singular value is also written as σ_max(A). It can be expressed as

σ_max(A) = sup_{x,y≠0} x^TAy/(∥x∥₂∥y∥₂) = sup_{y≠0} ∥Ay∥₂/∥y∥₂.

The righthand expression shows that the maximum singular value is the l2 operator norm of A. The minimum singular value of A ∈ R^{m×n} is given by

σ_min(A) = { σr(A)   r = min{m, n}
           { 0       r < min{m, n},

which is positive if and only if A is full rank.

The singular values of a symmetric matrix are the absolute values of its nonzero eigenvalues, sorted into descending order. The singular values of a symmetric positive semidefinite matrix are the same as its nonzero eigenvalues.

The condition number of a nonsingular A ∈ R^{n×n}, denoted cond(A) or κ(A), is defined as

cond(A) = ∥A∥₂∥A⁻¹∥₂ = σ_max(A)/σ_min(A).

Pseudo-inverse

Let A = UΣV^T be the singular value decomposition of A ∈ R^{m×n}, with rank A = r. We define the pseudo-inverse or Moore-Penrose inverse of A as

A† = VΣ⁻¹U^T ∈ R^{n×m}.

Alternative expressions are

A† = lim_{ǫ→0}(A^TA + ǫI)⁻¹A^T = lim_{ǫ→0} A^T(AA^T + ǫI)⁻¹,

where the limits are taken with ǫ > 0, which ensures that the inverses in the expressions exist. If rank A = n, then A† = (A^TA)⁻¹A^T. If rank A = m, then A† = A^T(AA^T)⁻¹. If A is square and nonsingular, then A† = A⁻¹.
The pseudo-inverse comes up in problems involving least-squares, minimum norm, quadratic minimization, and (Euclidean) projection. For example, A†b is a solution of the least-squares problem

minimize ∥Ax − b∥₂²

in general. When the solution is not unique, A†b gives the solution with minimum (Euclidean) norm. As another example, the matrix AA† = UU^T gives (Euclidean) projection on R(A). The matrix A†A = VV^T gives (Euclidean) projection on R(A^T).
The optimal value p⋆ of the (general, nonconvex) quadratic optimization problem

minimize (1/2)x^TPx + q^Tx + r,

where P ∈ Sⁿ, can be expressed as

p⋆ = { −(1/2)q^TP†q + r   P ≽ 0, q ∈ R(P)
     { −∞                 otherwise.

(This generalizes the expression p⋆ = −(1/2)q^TP⁻¹q + r, valid for P ≻ 0.)
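NumPy exposes the pseudo-inverse directly, so these facts are easy to try out; a small sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))        # full column rank with probability 1
b = rng.standard_normal(5)

x_pinv = np.linalg.pinv(A) @ b         # A^dagger b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_pinv, x_ls))       # both solve the least-squares problem

# Projection onto R(A): A A^dagger is symmetric and idempotent.
P = A @ np.linalg.pinv(A)
print(np.allclose(P @ P, P))
```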
A.5.5 Schur complement
Consider a matrix X ∈ S^n partitioned as

    X = [A, B; B^T, C],

where A ∈ S^k. If det A ≠ 0, the matrix

    S = C − B^T A^{-1} B
is called the Schur complement of A in X. Schur complements arise in several contexts, and appear in many important formulas and theorems. For example, we have
    det X = det A det S.
Inverse of block matrix
The Schur complement comes up in solving linear equations, by eliminating one block of variables. We start with

    [A, B; B^T, C] [x; y] = [u; v],

and assume that det A ≠ 0. If we eliminate x from the top block equation and substitute it into the bottom block equation, we obtain v = B^T A^{-1} u + S y, so

    y = S^{-1} (v − B^T A^{-1} u).

Substituting this into the first equation yields

    x = (A^{-1} + A^{-1} B S^{-1} B^T A^{-1}) u − A^{-1} B S^{-1} v.

We can express these two equations as a formula for the inverse of a block matrix:

    [A, B; B^T, C]^{-1} = [A^{-1} + A^{-1} B S^{-1} B^T A^{-1}, −A^{-1} B S^{-1}; −S^{-1} B^T A^{-1}, S^{-1}].
In particular, we see that the Schur complement is the inverse of the 2,2 block entry of the inverse of X.
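The block inverse formula is easy to check numerically; the sketch below (our addition) builds a random symmetric X with nonsingular A and compares against a direct inverse:

import numpy as np

rng = np.random.default_rng(2)
k, m = 3, 2
A = rng.standard_normal((k, k)); A = A + A.T + 5 * np.eye(k)   # A in S^k, det A != 0
B = rng.standard_normal((k, m))
C = rng.standard_normal((m, m)); C = C + C.T
X = np.block([[A, B], [B.T, C]])

Ai = np.linalg.inv(A)
S = C - B.T @ Ai @ B                          # Schur complement of A in X
Si = np.linalg.inv(S)
Xi = np.block([[Ai + Ai @ B @ Si @ B.T @ Ai, -Ai @ B @ Si],
               [-Si @ B.T @ Ai,              Si]])

print(np.allclose(Xi, np.linalg.inv(X)))                                  # True
print(np.isclose(np.linalg.det(X), np.linalg.det(A) * np.linalg.det(S)))  # True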
Minimization and definiteness
The Schur complement arises when you minimize a quadratic form over some of the variables. Suppose A ≻ 0, and consider the minimization problem
    minimize u^T A u + 2 v^T B^T u + v^T C v    (A.13)

with variable u. The solution is u = −A^{-1} B v, and the optimal value is

    inf_u [u; v]^T [A, B; B^T, C] [u; v] = v^T S v.    (A.14)
From this we can derive the following characterizations of positive definiteness or semidefiniteness of the block matrix X:
• X ≻ 0 if and only if A ≻ 0 and S ≻ 0.
• If A ≻ 0, then X ≽ 0 if and only if S ≽ 0.
Schur complement with singular A
Some Schur complement results have generalizations to the case when A is singular, although the details are more complicated. As an example, if A ≽ 0 and Bv ∈ R(A), then the quadratic minimization problem (A.13) (with variable u) is solvable, and has optimal value
    v^T (C − B^T A† B) v,

where A† is the pseudo-inverse of A. The problem is unbounded if Bv ∉ R(A) or if A is not positive semidefinite.
The range condition Bv ∈ R(A) can also be expressed as (I − AA†)Bv = 0, so we have the following characterization of positive semidefiniteness of the block matrix X:
    X ≽ 0  ⟺  A ≽ 0,  (I − A A†) B = 0,  C − B^T A† B ≽ 0.
Here the matrix C − BT A†B serves as a generalization of the Schur complement, when A is singular.
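A direct numerical translation of this characterization (our addition; the function name is ours) is:

import numpy as np

def is_psd_block(A, B, C, tol=1e-9):
    # Test X = [A, B; B^T, C] >= 0 via the generalized Schur complement.
    Ap = np.linalg.pinv(A)
    psd_A = np.linalg.eigvalsh(A).min() >= -tol
    range_ok = np.allclose((np.eye(len(A)) - A @ Ap) @ B, 0)   # (I - AA†)B = 0
    psd_S = np.linalg.eigvalsh(C - B.T @ Ap @ B).min() >= -tol
    return psd_A and range_ok and psd_S

A = np.diag([1.0, 0.0])                   # PSD and singular
B = np.array([[1.0], [0.0]])              # columns lie in R(A)
C = np.array([[2.0]])
X = np.block([[A, B], [B.T, C]])
print(is_psd_block(A, B, C))                    # True
print(np.linalg.eigvalsh(X).min() >= -1e-9)     # True, consistent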
Bibliography
Some basic references for the material in this appendix are Rudin [Rud76] for analysis, and Strang [Str80] and Meyer [Mey00] for linear algebra. More advanced linear algebra texts include Horn and Johnson [HJ85, HJ91], Parlett [Par98], Golub and Van Loan [GL89], Trefethen and Bau [TB97], and Demmel [Dem97].
The concept of closed function (§A.3.3) appears frequently in convex optimization, although the terminology varies. The term is used by Rockafellar [Roc70, page 51], Hiriart-Urruty and Lemaréchal [HUL93, volume 1, page 149], Borwein and Lewis [BL00, page 76], and Bertsekas, Nedić, and Ozdaglar [Ber03, page 28].
Appendix B
Problems involving two quadratic functions
In this appendix we consider some optimization problems that involve two quadratic, but not necessarily convex, functions. Several strong results hold for these problems, even when they are not convex.
B.1 Single constraint quadratic optimization
We consider the problem with one constraint

    minimize    x^T A0 x + 2 b0^T x + c0
    subject to  x^T A1 x + 2 b1^T x + c1 ≤ 0,    (B.1)

with variable x ∈ R^n, and problem parameters Ai ∈ S^n, bi ∈ R^n, ci ∈ R. We do not assume that Ai ≽ 0, so problem (B.1) is not a convex optimization problem.
The Lagrangian of (B.1) is

    L(x, λ) = x^T (A0 + λA1) x + 2 (b0 + λb1)^T x + c0 + λc1,

and the dual function is

    g(λ) = inf_x L(x, λ)
         = c0 + λc1 − (b0 + λb1)^T (A0 + λA1)† (b0 + λb1)    if A0 + λA1 ≽ 0 and b0 + λb1 ∈ R(A0 + λA1),
         = −∞    otherwise

(see §A.5.4). Using a Schur complement, we can express the dual problem as

    maximize    γ
    subject to  λ ≥ 0
                [A0 + λA1, b0 + λb1; (b0 + λb1)^T, c0 + λc1 − γ] ≽ 0,    (B.2)

an SDP with two variables γ, λ ∈ R.
The first result is that strong duality holds for problem (B.1) and its Lagrange dual (B.2), provided Slater's constraint qualification is satisfied, i.e., there exists an x with x^T A1 x + 2 b1^T x + c1 < 0. In other words, if (B.1) is strictly feasible, the optimal values of (B.1) and (B.2) are equal. (A proof is given in §B.4.)

Relaxation interpretation

The dual of the SDP (B.2) is

    minimize    tr(A0 X) + 2 b0^T x + c0
    subject to  tr(A1 X) + 2 b1^T x + c1 ≤ 0
                [X, x; x^T, 1] ≽ 0,    (B.3)

an SDP with variables X ∈ S^n, x ∈ R^n. This dual SDP has an interesting interpretation in terms of the original problem (B.1).

We first note that (B.1) is equivalent to

    minimize    tr(A0 X) + 2 b0^T x + c0
    subject to  tr(A1 X) + 2 b1^T x + c1 ≤ 0    (B.4)
                X = x x^T.

In this formulation we express the quadratic terms x^T Ai x as tr(Ai x x^T), and then introduce a new variable X = x x^T. Problem (B.4) has a linear objective function, one linear inequality constraint, and a nonlinear equality constraint X = x x^T. The next step is to replace the equality constraint by an inequality X ≽ x x^T:

    minimize    tr(A0 X) + 2 b0^T x + c0
    subject to  tr(A1 X) + 2 b1^T x + c1 ≤ 0    (B.5)
                X ≽ x x^T.

This problem is called a relaxation of (B.4), since we have replaced one of the constraints with a looser constraint. Finally we note that the inequality in (B.5) can be expressed as a linear matrix inequality by using a Schur complement, which gives (B.3).

A number of interesting facts follow immediately from this interpretation of (B.3) as a relaxation of (B.1). First, it is obvious that the optimal value of (B.3) is less than or equal to the optimal value of (B.1), since we minimize the same objective function over a larger set. Second, we can conclude that if X = x x^T at the optimum of (B.3), then x must be optimal in (B.1).

Combining the result above, that strong duality holds between (B.1) and (B.2) (if (B.1) is strictly feasible), with strong duality between the dual SDPs (B.2) and (B.3), we conclude that strong duality holds between the original, nonconvex quadratic problem (B.1), and the SDP relaxation (B.3), provided (B.1) is strictly feasible.

B.2 The S-procedure

The next result is a theorem of alternatives for a pair of (nonconvex) quadratic inequalities. Let A1, A2 ∈ S^n, b1, b2 ∈ R^n, c1, c2 ∈ R, and suppose there exists an x̂ with

    x̂^T A2 x̂ + 2 b2^T x̂ + c2 < 0.

Then there exists an x ∈ R^n satisfying

    x^T A1 x + 2 b1^T x + c1 < 0,    x^T A2 x + 2 b2^T x + c2 ≤ 0,    (B.6)

if and only if there exists no λ such that

    λ ≥ 0,    [A1, b1; b1^T, c1] + λ [A2, b2; b2^T, c2] ≽ 0.    (B.7)

In other words, (B.6) and (B.7) are strong alternatives. This result is readily shown to be equivalent to the result from §B.1, and a proof is given in §B.4. Here we point out that the two inequality systems are clearly weak alternatives, since (B.6) and (B.7) together lead to a contradiction:

    0 ≤ [x; 1]^T ([A1, b1; b1^T, c1] + λ [A2, b2; b2^T, c2]) [x; 1]
      = x^T A1 x + 2 b1^T x + c1 + λ (x^T A2 x + 2 b2^T x + c2) < 0.

This theorem of alternatives is sometimes called the S-procedure, and is usually stated in the following form: the implication

    x^T F1 x + 2 g1^T x + h1 ≤ 0  ⟹  x^T F2 x + 2 g2^T x + h2 ≤ 0,

where Fi ∈ S^n, gi ∈ R^n, hi ∈ R, holds if and only if there exists a λ such that

    λ ≥ 0,    [F2, g2; g2^T, h2] ≼ λ [F1, g1; g1^T, h1],

provided there exists a point x̂ with x̂^T F1 x̂ + 2 g1^T x̂ + h1 < 0. (Note that sufficiency is clear.)

Example B.1 Ellipsoid containment. An ellipsoid E ⊆ R^n with nonempty interior can be represented as the sublevel set of a quadratic function,

    E = {x | x^T F x + 2 g^T x + h ≤ 0},

where F ∈ S^n_{++} and h − g^T F^{-1} g < 0. Suppose Ẽ is another ellipsoid with a similar representation,

    Ẽ = {x | x^T F̃ x + 2 g̃^T x + h̃ ≤ 0},

with F̃ ∈ S^n_{++} and h̃ − g̃^T F̃^{-1} g̃ < 0. By the S-procedure, we see that E ⊆ Ẽ if and only if there is a λ > 0 such that

    [F̃, g̃; g̃^T, h̃] ≼ λ [F, g; g^T, h].
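Since λ is a scalar here, the containment condition can be tested numerically by a one-dimensional search over λ, checking the matrix inequality through its smallest eigenvalue. The sketch below is our addition (in practice one would pose the LMI to an SDP solver); it tests two balls:

import numpy as np

def mat(F, g, h):
    g = np.atleast_2d(g).T
    return np.block([[F, g], [g.T, np.array([[h]])]])

def contained(Ft, gt, ht, F, g, h, lambdas=np.logspace(-3, 3, 2000)):
    # E inside E~ iff [F~, g~; g~^T, h~] <= lam [F, g; g^T, h] for some lam > 0.
    M, Mt = mat(F, g, h), mat(Ft, gt, ht)
    return any(np.linalg.eigvalsh(lam * M - Mt).min() >= -1e-9 for lam in lambdas)

n = 2
F, g, h = np.eye(n), np.zeros(n), -1.0        # E: unit ball
Ft, gt, ht = np.eye(n), np.zeros(n), -4.0     # E~: ball of radius 2
print(contained(Ft, gt, ht, F, g, h))         # True:  E inside E~
print(contained(F, g, h, Ft, gt, ht))         # False: E~ not inside E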
B.3 The field of values of two symmetric matrices
The following result is the basis for the proof of the strong duality result in §B.1 and the S-procedure in §B.2. If A, B ∈ S^n, then for all X ∈ S^n_+, there exists an x ∈ R^n such that

    x^T A x = tr(AX),    x^T B x = tr(BX).    (B.8)

Remark B.1 Geometric interpretation. This result has an interesting interpretation in terms of the set

    W(A, B) = {(x^T A x, x^T B x) | x ∈ R^n},

which is a cone in R^2. It is the cone generated by the set

    F(A, B) = {(x^T A x, x^T B x) | ∥x∥_2 = 1},

which is called the 2-dimensional field of values of the pair (A, B). Geometrically, W(A, B) is the image of the set of rank-one positive semidefinite matrices under the linear transformation f : S^n → R^2 defined by

    f(X) = (tr(AX), tr(BX)).

The result that for every X ∈ S^n_+ there exists an x satisfying (B.8) means that W(A, B) = f(S^n_+). In other words, W(A, B) is a convex cone.
The proof is constructive and uses induction on the rank of X. Suppose it is true for all X ∈ S^n_+ with 1 ≤ rank X ≤ k, where k ≥ 2, that there exists an x such that (B.8) holds. Then the result also holds if rank X = k + 1, as can be seen as follows. A matrix X ∈ S^n_+ with rank X = k + 1 can be expressed as X = y y^T + Z, where y ≠ 0 and Z ∈ S^n_+ with rank Z = k. By assumption, there exists a z such that tr(AZ) = z^T A z and tr(BZ) = z^T B z. Therefore

    tr(AX) = tr(A (y y^T + z z^T)),    tr(BX) = tr(B (y y^T + z z^T)).

The rank of y y^T + z z^T is one or two, so by assumption there exists an x such that (B.8) holds.
It is therefore sufficient to prove the result if rank X ≤ 2. If rank X = 0 or rank X = 1 there is nothing to prove. If rank X = 2, we can factor X as X = V V^T, where V ∈ R^{n×2} has linearly independent columns v1 and v2. Without loss of generality we can assume that V^T A V is diagonal. (If V^T A V is not diagonal we replace V with V P, where V^T A V = P diag(λ) P^T is the eigenvalue decomposition of V^T A V.) We will write V^T A V and V^T B V as

    V^T A V = [λ1, 0; 0, λ2],    V^T B V = [σ1, γ; γ, σ2],

and define

    w = (tr(AX), tr(BX)) = (λ1 + λ2, σ1 + σ2).
We need to show that w = (x^T A x, x^T B x) for some x.

We distinguish two cases. First, assume (0, γ) is a linear combination of the vectors (λ1, σ1) and (λ2, σ2):

    0 = z1 λ1 + z2 λ2,    γ = z1 σ1 + z2 σ2,

for some z1, z2. In this case we choose x = α v1 + β v2, where α and β are determined by solving two quadratic equations in two variables

    α^2 + 2αβ z1 = 1,    β^2 + 2αβ z2 = 1.    (B.9)

This will give the desired result, since

    ((α v1 + β v2)^T A (α v1 + β v2), (α v1 + β v2)^T B (α v1 + β v2))
        = α^2 (λ1, σ1) + 2αβ (0, γ) + β^2 (λ2, σ2)
        = (α^2 + 2αβ z1)(λ1, σ1) + (β^2 + 2αβ z2)(λ2, σ2)
        = (λ1 + λ2, σ1 + σ2).

It remains to show that the equations (B.9) are solvable. To see this, we first note that α and β must be nonzero, so we can write the equations equivalently as

    α^2 (1 + 2(β/α) z1) = 1,    (β/α)^2 + 2(β/α)(z2 − z1) = 1.

The equation t^2 + 2t(z2 − z1) = 1 has a positive and a negative root. At least one of these roots (the root with the same sign as z1) satisfies 1 + 2t z1 > 0, so we can choose

    α = ±1/√(1 + 2t z1),    β = t α.

This yields two solutions (α, β) that satisfy (B.9). (If both roots of t^2 + 2t(z2 − z1) = 1 satisfy 1 + 2t z1 > 0, we obtain four solutions.)

Next, assume that (0, γ) is not a linear combination of (λ1, σ1) and (λ2, σ2). In particular, this means that (λ1, σ1) and (λ2, σ2) are linearly dependent. Therefore their sum w = (λ1 + λ2, σ1 + σ2) is a nonnegative multiple of (λ1, σ1), or (λ2, σ2), or both. If w = α^2 (λ1, σ1) for some α, we can choose x = α v1. If w = β^2 (λ2, σ2) for some β, we can choose x = β v2.
B.4 Proofs of the strong duality results
We first prove the S-procedure result given in §B.2. The assumption of strict feasibility of x̂ implies that the matrix

    [A2, b2; b2^T, c2]
has at least one negative eigenvalue. Therefore

    τ ≥ 0,  τ [A2, b2; b2^T, c2] ≽ 0  ⟹  τ = 0.
We can apply the theorem of alternatives for nonstrict linear matrix inequalities, given in example 5.14, which states that (B.7) is infeasible if and only if

    X ≽ 0,    tr(X [A1, b1; b1^T, c1]) < 0,    tr(X [A2, b2; b2^T, c2]) ≤ 0

is feasible. From §B.3 this is equivalent to feasibility of

    [v; w]^T [A1, b1; b1^T, c1] [v; w] < 0,    [v; w]^T [A2, b2; b2^T, c2] [v; w] ≤ 0.

If w ≠ 0, then x = v/w is feasible in (B.6). If w = 0, we have v^T A1 v < 0 and v^T A2 v ≤ 0, so x = x̂ + t v satisfies

    x^T A1 x + 2 b1^T x + c1 = x̂^T A1 x̂ + 2 b1^T x̂ + c1 + t^2 v^T A1 v + 2t (A1 x̂ + b1)^T v
    x^T A2 x + 2 b2^T x + c2 = x̂^T A2 x̂ + 2 b2^T x̂ + c2 + t^2 v^T A2 v + 2t (A2 x̂ + b2)^T v
                             < 2t (A2 x̂ + b2)^T v,

i.e., x becomes feasible as t → ±∞, depending on the sign of (A2 x̂ + b2)^T v.

Finally, we prove the result in §B.1, i.e., that the optimal values of (B.1) and (B.2) are equal if (B.1) is strictly feasible. To do this we note that γ is a lower bound for the optimal value of (B.1) if

    x^T A1 x + 2 b1^T x + c1 ≤ 0  ⟹  x^T A0 x + 2 b0^T x + c0 ≥ γ.

By the S-procedure this is true if and only if there exists a λ ≥ 0 such that

    [A0, b0; b0^T, c0 − γ] + λ [A1, b1; b1^T, c1] ≽ 0,

i.e., γ, λ are feasible in (B.2).

Bibliography

The results in this appendix are known under different names in different disciplines. The term S-procedure is from control; see Boyd, El Ghaoui, Feron, and Balakrishnan [BEFB94, pages 23, 33] for a survey and references. Variations of the S-procedure are known in linear algebra in the context of joint diagonalization of a pair of symmetric matrices; see, for example, Calabi [Cal64] and Uhlig [Uhl79]. Special cases of the strong duality result are studied in the nonlinear programming literature on trust-region methods (Stern and Wolkowicz [SW95], Nocedal and Wright [NW99, page 78]). Brickman [Bri61] proves that the field of values of a pair of matrices A, B ∈ S^n (i.e., the set F(A, B) defined in remark B.1) is a convex set if n > 2, and that the set W(A, B) is a convex cone (for any n). Our proof in §B.3 is based on Hestenes [Hes68]. Many related results and additional references can be found in Horn and Johnson [HJ91, §1.8] and Ben-Tal and Nemirovski [BTN01, §4.10.5].
Appendix C
Numerical linear algebra background
In this appendix we give a brief overview of some basic numerical linear algebra, concentrating on methods for solving one or more sets of linear equations. We focus on direct (i.e., noniterative) methods, and how problem structure can be exploited to improve efficiency. There are many important issues and methods in numerical linear algebra that we do not consider here, including numerical stability, details of matrix factorizations, methods for parallel or multiple processors, and iterative methods. For these (and other) topics, we refer the reader to the references given at the end of this appendix.
C.1 Matrix structure and algorithm complexity
We concentrate on methods for solving the set of linear equations
Ax = b (C.1)
where A ∈ Rn×n and b ∈ Rn. We assume A is nonsingular, so the solution is unique for all values of b, and given by x = A−1b. This basic problem arises in many optimization algorithms, and often accounts for most of the computation. In the context of solving the linear equations (C.1), the matrix A is often called the coefficient matrix, and the vector b is called the righthand side.
The standard generic methods for solving (C.1) require a computational effort that grows approximately like n3. These methods assume nothing more about A than nonsingularity, and so are generally applicable. For n several hundred or smaller, these generic methods are probably the best methods to use, except in the most demanding real-time applications. For n more than a thousand or so, the generic methods of solving Ax = b become less practical.
Coefficient matrix structure
In many cases the coefficient matrix A has some special structure or form that can be exploited to solve the equation Ax = b more efficiently, using methods tailored for the special structure. For example, in the Newton system ∇^2 f(x) Δx_nt = −∇f(x), the coefficient matrix is symmetric and positive definite, which allows us to use a solution method that is around twice as fast as the generic method (and also has better roundoff properties). There are many other types of structure that can be exploited, with computational savings (or algorithm speedup) that is usually far more than a factor of two. In many cases, the effort is reduced to something proportional to n^2 or even n, as compared to n^3 for the generic methods. Since these methods are usually applied when n is at least a hundred, and often far larger, the savings can be dramatic.
A wide variety of coefficient matrix structures can be exploited. Simple examples related to the sparsity pattern (i.e., the pattern of zero and nonzero entries in the matrix) include banded, block diagonal, or sparse matrices. A more subtle exploitable structure is diagonal plus low rank. Many common forms of convex optimization problems lead to linear equations with coefficient matrices that have these exploitable structures. (There are many other matrix structures that can be exploited, e.g., Toeplitz, Hankel, and circulant, that we will not consider in this appendix.)
We refer to a generic method that does not exploit any sparsity pattern in the matrices as one for dense matrices. We refer to a method that does not exploit any structure at all in the matrices as one for unstructured matrices.
C.1.1 Complexity analysis via flop count
The cost of a numerical linear algebra algorithm is often expressed by giving the total number of floating-point operations or flops required to carry it out, as a function of various problem dimensions. We define a flop as one addition, subtraction, multiplication, or division of two floating-point numbers. (Some authors define a flop as one multiplication followed by one addition, so their flop counts are smaller by a factor up to two.) To evaluate the complexity of an algorithm, we count the total number of flops, express it as a function (usually a polynomial) of the dimensions of the matrices and vectors involved, and simplify the expression by ignoring all terms except the leading (i.e., highest order or dominant) terms.
As an example, suppose that a particular algorithm requires a total of
As an example, suppose that a particular algorithm requires a total of m3 +3m2n+mn+4mn2 +5m+22
flops, where m and n are problem dimensions. We would normally simplify this flop count to
m3 +3m2n+4mn2
flops, since these are the leading terms in the problem dimensions m and n. If
in addition we assumed that m ≪ n, we would further simplify the flop count to 4mn2 .
Flop counts were originally popularized when floating-point operations were relatively slow, so counting the number gave a good estimate of the total computation time. This is no longer the case: issues such as cache boundaries and locality of reference can dramatically affect the computation time of a numerical algorithm. However, flop counts can still give us a good rough estimate of the computation time of a numerical algorithm, and of how the time grows with increasing problem size. Since a flop count no longer accurately predicts the computation time of an algorithm, we usually pay most attention to its order or orders, i.e., its largest exponents, and ignore differences in flop counts smaller than a factor of two or so. For example, an algorithm with flop count 5n^2 is considered comparable to one with a flop count 4n^2, but faster than an algorithm with flop count (1/3)n^3.
C.1.2 Cost of basic matrix-vector operations

Vector operations
To compute the inner product x^T y of two vectors x, y ∈ R^n we form the products xi yi, and then add them, which requires n multiplies and n − 1 additions, or 2n − 1 flops. As mentioned above, we keep only the leading term, and say that the inner product requires 2n flops, or even more approximately, order n flops. A scalar-vector multiplication αx, where α ∈ R and x ∈ R^n, costs n flops. The addition x + y of two vectors x, y ∈ R^n also costs n flops.
If the vectors x and y are sparse, i.e., have only a few nonzero terms, these basic operations can be carried out faster (assuming the vectors are stored using an appropriate data structure). For example, if x is a sparse vector with N nonzero entries, then the inner product x^T y can be computed in 2N flops.
Matrix-vector multiplication
A matrix-vector multiplication y = Ax, where A ∈ R^{m×n}, costs 2mn flops: we have to calculate m components of y, each of which is the product of a row of A with x, i.e., an inner product of two vectors in R^n.
Matrix-vector products can often be accelerated by taking advantage of structure in A. For example, if A is diagonal, then Ax can be computed in n flops, instead of 2n^2 flops for multiplication by a general n × n matrix. More generally, if A is sparse, with only N nonzero elements (out of mn), then 2N flops are needed to form Ax, since we can skip multiplications and additions with zero.
As a less obvious example, suppose the matrix A has rank p ≪ min{m, n}, and is represented (stored) in the factored form A = UV, where U ∈ R^{m×p}, V ∈ R^{p×n}. Then we can compute Ax by first computing Vx (which costs 2pn flops), and then computing U(Vx) (which costs 2mp flops), so the total is 2p(m + n) flops. Since p ≪ min{m, n}, this is small compared to 2mn.
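The saving is easy to observe; this NumPy sketch (our addition) compares the two ways of multiplying by a rank-p matrix:

import numpy as np
import timeit

rng = np.random.default_rng(3)
m, n, p = 2000, 2000, 10
U, V = rng.standard_normal((m, p)), rng.standard_normal((p, n))
x = rng.standard_normal(n)
A = U @ V                                 # dense m x n, formed for comparison only

print(np.allclose(A @ x, U @ (V @ x)))    # same product
print(timeit.timeit(lambda: A @ x, number=50))        # ~2mn flops per product
print(timeit.timeit(lambda: U @ (V @ x), number=50))  # ~2p(m+n) flops: much faster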
Matrix-matrix multiplication
The matrix-matrix product C = AB, where A ∈ R^{m×n} and B ∈ R^{n×p}, costs 2mnp flops. We have mp elements in C to calculate, each of which is an inner product of
two vectors of length n. Again, we can often make substantial savings by taking advantage of structure in A and B. For example, if A and B are sparse, we can accelerate the multiplication by skipping additions and multiplications with zero. If m = p and we know that C is symmetric, then we can calculate the matrix product in m^2 n flops, since we only have to compute the (1/2)m(m + 1) elements in the lower triangular part.
To form the product of several matrices, we can carry out the matrix-matrix multiplications in different ways, which have different flop counts in general. The simplest example is computing the product D = ABC, where A ∈ R^{m×n}, B ∈ R^{n×p}, and C ∈ R^{p×q}. Here we can compute D in two ways, using matrix-matrix multiplies. One method is to first form the product AB (2mnp flops), and then form D = (AB)C (2mpq flops), so the total is 2mp(n + q) flops. Alternatively, we can first form the product BC (2npq flops), and then form D = A(BC) (2mnq flops), with a total of 2nq(m + p) flops. The first method is better when 2mp(n + q) < 2nq(m + p), i.e., when

    1/n + 1/q < 1/m + 1/p.

This assumes that no structure of the matrices is exploited in carrying out matrix-matrix products. For products of more than three matrices, there are many ways to parse the product into matrix-matrix multiplications. Although it is not hard to develop an algorithm that determines the best parsing (i.e., the one with the fewest required flops) given the matrix dimensions, in most applications the best parsing is clear.

C.2 Solving linear equations with factored matrices

C.2.1 Linear equations that are easy to solve

We start by examining some cases for which Ax = b is easily solved, i.e., x = A^{-1}b is easily computed.

Diagonal matrices

Suppose A is diagonal and nonsingular (i.e., aii ≠ 0 for all i). The set of linear equations Ax = b can be written as aii xi = bi, i = 1, ..., n. The solution is given by xi = bi/aii, and can be calculated in n flops.

Lower triangular matrices

A matrix A ∈ R^{n×n} is lower triangular if aij = 0 for j > i. A lower triangular matrix is called unit lower triangular if the diagonal elements are equal to one. A lower triangular matrix is nonsingular if and only if aii ≠ 0 for all i.
Suppose A is lower triangular and nonsingular. The equations Ax = b are

    [a11, 0, ···, 0; a21, a22, ···, 0; ···; an1, an2, ···, ann] [x1; x2; ···; xn] = [b1; b2; ···; bn].

From the first row, we have a11 x1 = b1, from which we conclude x1 = b1/a11. From the second row we have a21 x1 + a22 x2 = b2, so we can express x2 as x2 = (b2 − a21 x1)/a22. (We have already computed x1, so every number on the righthand side is known.) Continuing this way, we can express each component of x in terms of previous components, yielding the algorithm

    x1 := b1/a11
    x2 := (b2 − a21 x1)/a22
    x3 := (b3 − a31 x1 − a32 x2)/a33
    ...
    xn := (bn − an1 x1 − an2 x2 − ··· − a_{n,n−1} x_{n−1})/ann.

This procedure is called forward substitution, since we successively compute the components of x by substituting the known values into the next equation.
Let us give a flop count for forward substitution. We start by calculating x1 (1 flop). We substitute x1 in the second equation to find x2 (3 flops), then substitute x1 and x2 in the third equation to find x3 (5 flops), etc. The total number of flops is

    1 + 3 + 5 + ··· + (2n − 1) = n^2.

Thus, when A is lower triangular and nonsingular, we can compute x = A^{-1}b in n^2 flops.
If the matrix A has additional structure, in addition to being lower triangular, then forward substitution can be more efficient than n^2 flops. For example, if A is sparse (or banded), with at most k nonzero entries per row, then each forward substitution step requires at most 2k + 1 flops, so the overall flop count is (2k + 1)n, or about 2kn after dropping the term n.
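A direct implementation of forward substitution (our addition; production codes would call an optimized triangular solver such as the one in LAPACK):

import numpy as np

def forward_substitution(A, b):
    # Solve Ax = b for lower triangular, nonsingular A in about n^2 flops.
    n = len(b)
    x = np.empty(n)
    for i in range(n):
        x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
    return x

A = np.array([[2.0, 0.0, 0.0], [1.0, 3.0, 0.0], [4.0, 5.0, 6.0]])
b = np.array([2.0, 7.0, 32.0])
print(forward_substitution(A, b))                                      # [1. 2. 3.]
print(np.allclose(forward_substitution(A, b), np.linalg.solve(A, b)))  # True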
Upper triangular matrices

A matrix A ∈ R^{n×n} is upper triangular if A^T is lower triangular, i.e., if aij = 0 for j < i. We can solve linear equations with nonsingular upper triangular coefficient matrix in a way similar to forward substitution, except that we start by calculating xn, then x_{n−1}, and so on. The algorithm is

    xn := bn/ann
    x_{n−1} := (b_{n−1} − a_{n−1,n} xn)/a_{n−1,n−1}
    x_{n−2} := (b_{n−2} − a_{n−2,n−1} x_{n−1} − a_{n−2,n} xn)/a_{n−2,n−2}
    ...
    x1 := (b1 − a12 x2 − a13 x3 − ··· − a1n xn)/a11.

This is called backward substitution or back substitution since we determine the coefficients in backward order. The cost to compute x = A^{-1}b via backward substitution is n^2 flops. If A is upper triangular and sparse (or banded), with at most k nonzero entries per row, then back substitution costs 2kn flops.

Orthogonal matrices

A matrix A ∈ R^{n×n} is orthogonal if A^T A = I, i.e., A^{-1} = A^T. In this case we can compute x = A^{-1}b by a simple matrix-vector product x = A^T b, which costs 2n^2 flops in general. If the matrix A has additional structure, we can compute x = A^{-1}b even more efficiently than 2n^2 flops. For example, if A has the form A = I − 2uu^T, where ∥u∥_2 = 1, we can compute

    x = A^{-1}b = (I − 2uu^T)^T b = b − 2(u^T b)u

by first computing u^T b, then forming b − 2(u^T b)u, which costs 4n flops.

Permutation matrices

Let π = (π1, ..., πn) be a permutation of (1, 2, ..., n). The associated permutation matrix A ∈ R^{n×n} is given by

    Aij = 1 if j = πi,    Aij = 0 otherwise.

In each row (or column) of a permutation matrix there is exactly one entry with value one; all other entries are zero. Multiplying a vector by a permutation matrix simply permutes its coefficients:

    Ax = (x_{π1}, ..., x_{πn}).

The inverse of a permutation matrix is the permutation matrix associated with the inverse permutation π^{-1}. This turns out to be A^T, which shows that permutation matrices are orthogonal.
If A is a permutation matrix, solving Ax = b is very easy: x is obtained by permuting the entries of b by π^{-1}. This requires no floating point operations, according to our definition (but, depending on the implementation, might involve copying floating point numbers). We can reach the same conclusion from the equation x = A^T b. The matrix A^T (like A) has only one nonzero entry per row, with value one. Thus no additions are required, and the only multiplications required are by one.

C.2.2 The factor-solve method

The basic approach to solving Ax = b is based on expressing A as a product of nonsingular matrices,

    A = A1 A2 ··· Ak,

so that

    x = A^{-1}b = Ak^{-1} A_{k−1}^{-1} ··· A1^{-1} b.

We can compute x using this formula, working from right to left:

    z1 := A1^{-1} b
    z2 := A2^{-1} z1 = A2^{-1} A1^{-1} b
    ...
    z_{k−1} := A_{k−1}^{-1} z_{k−2} = A_{k−1}^{-1} ··· A1^{-1} b
    x := Ak^{-1} z_{k−1} = Ak^{-1} ··· A1^{-1} b.

The ith step of this process requires computing zi = Ai^{-1} z_{i−1}, i.e., solving the linear equations Ai zi = z_{i−1}. If each of these equations is easy to solve (e.g., if Ai is diagonal, lower or upper triangular, a permutation, etc.), this gives a method for computing x = A^{-1}b.
The step of expressing A in factored form (i.e., computing the factors Ai) is called the factorization step, and the process of computing x = A^{-1}b recursively, by solving a sequence of problems of the form Ai zi = z_{i−1}, is often called the solve step. The total flop count for solving Ax = b using this factor-solve method is f + s, where f is the flop count for computing the factorization, and s is the total flop count for the solve step. In many cases, the cost of the factorization, f, dominates the total solve cost s.
In this case, the cost of solving Ax = b, i.e., computing x = A^{-1}b, is just f.

Solving equations with multiple righthand sides

Suppose we need to solve the equations

    A x1 = b1,  A x2 = b2,  ...,  A xm = bm,

where A ∈ R^{n×n} is nonsingular. In other words, we need to solve m sets of linear equations, with the same coefficient matrix, but different righthand sides. Alternatively, we can think of this as computing the matrix

    X = A^{-1} B,

where

    X = [x1 x2 ··· xm] ∈ R^{n×m},    B = [b1 b2 ··· bm] ∈ R^{n×m}.

To do this, we first factor A, which costs f. Then for i = 1, ..., m we compute A^{-1}bi using the solve step. Since we only factor A once, the total effort is f + ms. In other words, we amortize the factorization cost over the set of m solves. Had we (needlessly) repeated the factorization step for each i, the cost would be m(f + s). When the factorization cost f dominates the solve cost s, the factor-solve method allows us to solve a small number of linear systems, with the same coefficient matrix, at essentially the same cost as solving one. This is because the most expensive step, the factorization, is done only once.
We can use the factor-solve method to compute the inverse A^{-1} by solving Ax = ei for i = 1, ..., n, i.e., by computing A^{-1}I. This requires one factorization and n solves, so the cost is f + ns.

C.3 LU, Cholesky, and LDL^T factorization

C.3.1 LU factorization

Every nonsingular matrix A ∈ R^{n×n} can be factored as

    A = P L U,

where P ∈ R^{n×n} is a permutation matrix, L ∈ R^{n×n} is unit lower triangular, and U ∈ R^{n×n} is upper triangular and nonsingular. This is called the LU factorization of A. We can also write the factorization as P^T A = LU, where the matrix P^T A is obtained from A by re-ordering the rows.
The standard algorithm for computing an LU factorization is called Gaussian elimination with partial pivoting or Gaussian elimination with row pivoting. The cost is (2/3)n^3 flops if no structure in A is exploited, which is the case we consider first.

Solving sets of linear equations using the LU factorization

The LU factorization, combined with the factor-solve approach, is the standard method for solving a general set of linear equations Ax = b.

Algorithm C.1 Solving linear equations by LU factorization.

given a set of linear equations Ax = b, with A nonsingular.
1. LU factorization. Factor A as A = PLU ((2/3)n^3 flops).
2. Permutation. Solve Pz1 = b (0 flops).
3. Forward substitution. Solve Lz2 = z1 (n^2 flops).
4. Backward substitution. Solve Ux = z2 (n^2 flops).

The total cost is (2/3)n^3 + 2n^2, or (2/3)n^3 flops if we keep only the leading term. If we need to solve multiple sets of linear equations with different righthand sides, i.e., A xi = bi, i = 1, ..., m, the cost is

    (2/3)n^3 + 2mn^2,

since we factor A once, and carry out m pairs of forward and backward substitutions. For example, we can solve two sets of linear equations, with the same coefficient matrix but different righthand sides, at essentially the same cost as solving one. We can compute the inverse A^{-1} by solving the equations A xi = ei, where xi is the ith column of A^{-1}, and ei is the ith unit vector. This costs (8/3)n^3, i.e., about 3n^3 flops.
If the matrix A has certain structure, for example banded or sparse, the LU factorization can be computed in less than (2/3)n^3 flops, and the associated forward and backward substitutions can also be carried out more efficiently.
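The factor-solve pattern is exposed directly in SciPy; a sketch (our addition) amortizing one LU factorization over many righthand sides:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(4)
n, m = 500, 20
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))        # m righthand sides

lu, piv = lu_factor(A)                 # factor once: ~(2/3)n^3 flops
X = lu_solve((lu, piv), B)             # m solves: ~2mn^2 flops total
print(np.allclose(A @ X, B))           # True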
LU factorization of banded matrices

Suppose the matrix A ∈ R^{n×n} is banded, i.e., aij = 0 if |i − j| > k, where k < n − 1 is called the bandwidth of A. We are interested in the case where k ≪ n, i.e., the bandwidth is much smaller than the size of the matrix. In this case an LU factorization of A can be computed in roughly 4nk^2 flops. The resulting upper triangular matrix U has bandwidth at most 2k, and the lower triangular matrix L has at most k + 1 nonzeros per column, so the forward and back substitutions can be carried out in order 6nk flops. Therefore if A is banded, the linear equations Ax = b can be solved in about 4nk^2 flops.

LU factorization of sparse matrices

When the matrix A is sparse, the LU factorization usually includes both row and column permutations, i.e., A is factored as

    A = P1 L U P2,

where P1 and P2 are permutation matrices, L is lower triangular, and U is upper triangular. If the factors L and U are sparse, the forward and backward substitutions can be carried out efficiently, and we have an efficient method for solving Ax = b. The sparsity of the factors L and U depends on the permutations P1 and P2, which are chosen in part to yield relatively sparse factors.
The cost of computing the sparse LU factorization depends in a complicated way on the size of A, the number of nonzero elements, its sparsity pattern, and the particular algorithm used, but is often dramatically smaller than the cost of a dense LU factorization. In many cases the cost grows approximately linearly with n, when n is large. This means that when A is sparse, we can solve Ax = b very efficiently, often with an order approximately n.

C.3.2 Cholesky factorization

If A ∈ R^{n×n} is symmetric and positive definite, then it can be factored as

    A = L L^T,

where L is lower triangular and nonsingular with positive diagonal elements. This is called the Cholesky factorization of A, and can be interpreted as a symmetric LU factorization (with L = U^T). The matrix L, which is uniquely determined by A, is called the Cholesky factor of A. The cost of computing the Cholesky factorization of a dense matrix, i.e., without exploiting any structure, is (1/3)n^3 flops, half the cost of an LU factorization.

Solving positive definite sets of equations using Cholesky factorization

The Cholesky factorization can be used to solve Ax = b when A is symmetric positive definite.

Algorithm C.2 Solving linear equations by Cholesky factorization.

given a set of linear equations Ax = b, with A ∈ S^n_{++}.
1. Cholesky factorization. Factor A as A = LL^T ((1/3)n^3 flops).
2. Forward substitution. Solve Lz1 = b (n^2 flops).
3. Backward substitution. Solve L^T x = z1 (n^2 flops).

The total cost is (1/3)n^3 + 2n^2, or roughly (1/3)n^3 flops. There are specialized algorithms, with a complexity much lower than (1/3)n^3, for Cholesky factorization of banded and sparse matrices.

Cholesky factorization of banded matrices

If A is symmetric positive definite and banded with bandwidth k, then its Cholesky factor L is banded with bandwidth k, and can be calculated in nk^2 flops. The cost of the associated solve step is 4nk flops.

Cholesky factorization of sparse matrices

When A is symmetric positive definite and sparse, it is usually factored as

    A = P L L^T P^T,

where P is a permutation matrix and L is lower triangular with positive diagonal elements. We can also express this as P^T A P = L L^T, i.e., L L^T is the Cholesky factorization of P^T A P.
We can interpret this as first re-ordering the variables and equations, and then forming the (standard) Cholesky factorization of the resulting permuted matrix. Since P^T A P is positive definite for any permutation matrix P, we are free to choose any permutation matrix; for each choice there is a unique associated Cholesky factor L. The choice of P, however, can greatly affect the sparsity of the factor L, which in turn can greatly affect the efficiency of solving Ax = b. Various heuristic methods are used to select a permutation P that leads to a sparse factor L.

Example C.1 Cholesky factorization with an arrow sparsity pattern. Consider a sparse matrix of the form

    A = [1, u^T; u, D],

where D ∈ R^{n×n} is positive diagonal, and u ∈ R^n. It can be shown that A is positive definite if u^T D^{-1} u < 1. The Cholesky factorization of A is

    [1, u^T; u, D] = [1, 0; u, L] [1, u^T; 0, L^T],    (C.2)

where L is lower triangular with L L^T = D − u u^T. For general u, the matrix D − u u^T is dense, so we can expect L to be dense. Although the matrix A is very sparse (most of its rows have just two nonzero elements), its Cholesky factors are almost completely dense.

On the other hand, suppose we permute the first row and column of A to the end. After this re-ordering, we obtain the Cholesky factorization

    [D, u; u^T, 1] = [D^{1/2}, 0; u^T D^{-1/2}, √(1 − u^T D^{-1} u)] [D^{1/2}, D^{-1/2} u; 0, √(1 − u^T D^{-1} u)].

Now the Cholesky factor has a diagonal 1,1 block, so it is very sparse.

This example illustrates that the re-ordering greatly affects the sparsity of the Cholesky factors. Here it was quite obvious what the best permutation is, and all good re-ordering heuristics would select this re-ordering and permute the dense row and column to the end. For more complicated sparsity patterns, it can be very difficult to find the 'best' re-ordering (i.e., resulting in the greatest number of zero elements in L), but various heuristics provide good suboptimal permutations.
For the sparse Cholesky factorization, the re-ordering permutation P is often determined using only the sparsity pattern of the matrix A, and not the particular numerical values of the nonzero elements of A. Once P is chosen, we can also determine the sparsity pattern of L without knowing the numerical values of the nonzero entries of A. These two steps combined are called the symbolic factorization of A, and form the first step in a sparse Cholesky factorization. In contrast, the permutation matrices in a sparse LU factorization do depend on the numerical values in A, in addition to its sparsity pattern. The symbolic factorization is then followed by the numerical factorization, i.e., the calculation of the nonzero elements of L. Software packages for sparse Cholesky factorization often include separate routines for the symbolic and the numerical factorization. This is useful in many applications, because the cost of the symbolic factorization is significant, and often comparable to the numerical factorization.
Suppose, for example, that we need to solve m sets of linear equations

    A1 x = b1,  A2 x = b2,  ...,  Am x = bm,

where the matrices Ai are symmetric positive definite, with different numerical values, but the same sparsity pattern. Suppose the cost of a symbolic factorization is f_symb, the cost of a numerical factorization is f_num, and the cost of the solve step is s. Then we can solve the m sets of linear equations in f_symb + m(f_num + s) flops, since we only need to carry out the symbolic factorization once, for all m sets of equations.
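The effect of the re-ordering in example C.1 can be seen numerically; this sketch (our addition) counts nonzeros in the two Cholesky factors:

import numpy as np

rng = np.random.default_rng(5)
n = 6
d = rng.uniform(1.0, 2.0, n)
u = 0.1 * rng.standard_normal(n)       # small enough that u^T D^-1 u < 1

one = np.array([[1.0]])
A1 = np.block([[one, u[None, :]], [u[:, None], np.diag(d)]])  # dense row/col first
A2 = np.block([[np.diag(d), u[:, None]], [u[None, :], one]])  # permuted to the end

L1 = np.linalg.cholesky(A1)
L2 = np.linalg.cholesky(A2)
print(np.count_nonzero(np.abs(L1) > 1e-12))   # nearly full lower triangle
print(np.count_nonzero(np.abs(L2) > 1e-12))   # about 2n + 1 nonzeros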
If instead we carry out a separate symbolic factorization for each set of linear equations, the flop count is m(f_symb + f_num + s).

C.3.3 LDL^T factorization

Every nonsingular symmetric matrix A can be factored as

    A = P L D L^T P^T,

where P is a permutation matrix, L is lower triangular with positive diagonal elements, and D is block diagonal, with nonsingular 1 × 1 and 2 × 2 diagonal blocks. This is called an LDL^T factorization of A. (The Cholesky factorization can be considered a special case of the LDL^T factorization, with P = I and D = I.) An LDL^T factorization can be computed in (1/3)n^3 flops, if no structure of A is exploited.

Algorithm C.3 Solving linear equations by LDL^T factorization.

given a set of linear equations Ax = b, with A ∈ S^n nonsingular.
1. LDL^T factorization. Factor A as A = P L D L^T P^T ((1/3)n^3 flops).
2. Permutation. Solve Pz1 = b (0 flops).
3. Forward substitution. Solve Lz2 = z1 (n^2 flops).
4. (Block) diagonal solve. Solve Dz3 = z2 (order n flops).
5. Backward substitution. Solve L^T z4 = z3 (n^2 flops).
6. Permutation. Solve P^T x = z4 (0 flops).

The total cost is, keeping only the dominant term, (1/3)n^3 flops.

LDL^T factorization of banded and sparse matrices

As with the LU and Cholesky factorizations, there are specialized methods for calculating the LDL^T factorization of a sparse or banded matrix. These are similar to the analogous methods for Cholesky factorization, with the additional factor D. In a sparse LDL^T factorization, the permutation matrix P cannot be chosen only on the basis of the sparsity pattern of A (as in a sparse Cholesky factorization); it also depends on the particular nonzero values in the matrix A.

C.4 Block elimination and Schur complements

C.4.1 Eliminating a block of variables

In this section we describe a general method that can be used to solve Ax = b by first eliminating a subset of the variables, and then solving a smaller system of linear equations for the remaining variables. For a dense unstructured matrix, this approach gives no advantage. But when the submatrix of A associated with the eliminated variables is easily factored (for example, if it is block diagonal or banded) the method can be substantially more efficient than a general method.
Suppose we partition the variable x ∈ R^n into two blocks or subvectors,

    x = [x1; x2],

where x1 ∈ R^{n1}, x2 ∈ R^{n2}. We conformally partition the linear equations Ax = b as

    [A11, A12; A21, A22] [x1; x2] = [b1; b2],    (C.3)

where A11 ∈ R^{n1×n1}, A22 ∈ R^{n2×n2}. Assuming that the submatrix A11 is invertible, we can eliminate x1 from the equations, as follows. Using the first equation, we can express x1 in terms of x2:

    x1 = A11^{-1} (b1 − A12 x2).    (C.4)

Substituting this expression into the second equation yields

    (A22 − A21 A11^{-1} A12) x2 = b2 − A21 A11^{-1} b1.    (C.5)

We refer to this as the reduced equation obtained by eliminating x1 from the original equation. The reduced equation (C.5) and the equation (C.4) together are equivalent to the original equations (C.3). The matrix appearing in the reduced equation is called the Schur complement of the first block A11 in A:

    S = A22 − A21 A11^{-1} A12

(see also §A.5.5). The Schur complement S is nonsingular if and only if A is nonsingular.
The two equations (C.5) and (C.4) give us an alternative approach to solving the original system of equations (C.3). We first form the Schur complement S, then find x2 by solving (C.5), and then calculate x1 from (C.4).
We can summarize this method as follows.

Algorithm C.4 Solving linear equations by block elimination.

given a nonsingular set of linear equations (C.3), with A11 nonsingular.
1. Form A11^{-1} A12 and A11^{-1} b1.
2. Form S = A22 − A21 A11^{-1} A12 and b̃ = b2 − A21 A11^{-1} b1.
3. Determine x2 by solving S x2 = b̃.
4. Determine x1 by solving A11 x1 = b1 − A12 x2.

Remark C.1 Interpretation as block factor-solve. Block elimination can be interpreted in terms of the factor-solve approach described in §C.2.2, based on the factorization

    [A11, A12; A21, A22] = [A11, 0; A21, S] [I, A11^{-1} A12; 0, I],

which can be considered a block LU factorization. This block LU factorization suggests the following method for solving (C.3). We first do a 'block forward substitution' to solve

    [A11, 0; A21, S] [z1; z2] = [b1; b2],

and then solve

    [I, A11^{-1} A12; 0, I] [x1; x2] = [z1; z2]

by 'block backward substitution'. This yields the same expressions as the block elimination method:

    z1 = A11^{-1} b1
    z2 = S^{-1} (b2 − A21 z1)
    x2 = z2
    x1 = z1 − A11^{-1} A12 z2.

In fact, the modern approach to the factor-solve method is based on block factor and solve steps like these, with the block sizes optimally chosen for the processor (or processors), cache sizes, etc.
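A compact implementation of algorithm C.4 (our addition; the function name is ours), checked against a dense solve:

import numpy as np

def block_elim(A11, A12, A21, A22, b1, b2):
    # Solve [A11, A12; A21, A22] [x1; x2] = [b1; b2] by eliminating x1.
    Z = np.linalg.solve(A11, np.column_stack([A12, b1]))  # A11^-1 A12, A11^-1 b1
    W, w = Z[:, :-1], Z[:, -1]
    S = A22 - A21 @ W                        # Schur complement
    x2 = np.linalg.solve(S, b2 - A21 @ w)    # reduced equation (C.5)
    x1 = w - W @ x2                          # back-substitute via (C.4)
    return x1, x2

rng = np.random.default_rng(6)
n1, n2 = 4, 2
A11 = np.diag(rng.uniform(1, 2, n1))         # easily inverted block
A12, A21 = rng.standard_normal((n1, n2)), rng.standard_normal((n2, n1))
A22 = rng.standard_normal((n2, n2)) + 3 * np.eye(n2)
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

x1, x2 = block_elim(A11, A12, A21, A22, b1, b2)
x = np.linalg.solve(np.block([[A11, A12], [A21, A22]]), np.concatenate([b1, b2]))
print(np.allclose(np.concatenate([x1, x2]), x))   # True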
Eliminating a diagonal matrix If A11 is diagonal, no factorization is needed, and we can carry out a solve in n1 flops, so we have f = 0 and s = n1. Substituting these values into (C.6) and keeping only the leading terms yields 2n2n1 + (2/3)n32, flops, which is far smaller than (2/3)(n1 +n2)3, the cost using the standard method. In particular, the flop count of the standard method grows cubicly in n1, whereas for block elimination the flop count grows only linearly in n1. Eliminating a banded matrix If A11 is banded with bandwidth k, we can carry out the factorization in about f = 4k2n1 flops, and the solve can be done in about s = 6kn1 flops. The overall complexity of solving Ax = b using block elimination is 4k2n1 + 6n2kn1 + 2n2n1 + (2/3)n32 flops. Assuming k is small compared to n1 and n2, this simplifies to 2n2n1+(2/3)n32, the same as when A11 is diagonal. In particular, the complexity grows linearly in n1, as opposed to cubicly in n1 for the standard method. A matrix for which A11 is banded is sometimes called an arrow matrix since the sparsity pattern, when n1 ≫ n2, looks like an arrow pointing down and right. Block elimination can solve linear equations with arrow structure far more efficiently than the standard method. Eliminating a block diagonal matrix Suppose that A11 is block diagonal, with (square) block sizes m1,...,mk, where n1 = m1 + ··· + mk. In this case we can factor A11 by factoring each block separately, and similarly we can carry out the solve step on each block separately. Using standard methods for these we find f =(2/3)m31 +···+(2/3)m3k, s=2m21 +···+2m2k, so the overall complexity of block elimination is 􏰊k 􏰊k 􏰊k (2/3) m3i + 2n2 m2i + 2n2 mi + (2/3)n32. i=1 i=1 i=1 If the block sizes are small compared to n1 and n1 ≫ n2, the savings obtained by block elimination is dramatic. 676 C Numerical linear algebra background The linear equations Ax = b, where A11 is block diagonal, are called partially separable for the following reason. If the subvector x2 is fixed, the remaining equations decouple into k sets of independent linear equations (which can be solved separately). The subvector x2 is sometimes called the complicating variable since the equations are much simpler when x2 is fixed. Using block elimination, we can solve partially separable linear equations far more efficiently than by using a standard method. Eliminating a sparse matrix If A11 is sparse, we can eliminate A11 using a sparse factorization and sparse solve steps, so the values of f and s in (C.6) are much less than for unstructured A11. When A11 in (C.3) is sparse and the other blocks are dense, and n2 ≪ n1, we say that A is a sparse matrix with a few dense rows and columns. Eliminating the sparse block A11 provides an efficient method for solving equations which are sparse except for a few dense rows and columns. An alternative is to simply apply a sparse factorization algorithm to the entire matrix A. Most sparse solvers will handle dense rows and columns, and select a permutation that results in sparse factors, and hence fast factorization and solve times. This is more straightforward than using block elimination, but often slower, especially in applications where we can exploit structure in the other blocks (see, e.g., example C.4). Remark C.2 As already suggested in remark C.1, these two methods for solving sys- tems with a few dense rows and columns are closely related. 
Applying the elimination method by factoring A11 and S as

    A11 = P1 L1 U1 P2,    S = P3 L2 U2

can be interpreted as factoring A as

    [A11, A12; A21, A22] = [P1, 0; 0, P3] [L1, 0; P3^T A21 P2^T U1^{-1}, L2] [U1, L1^{-1} P1^T A12; 0, U2] [P2, 0; 0, I],

followed by forward and backward substitutions.

C.4.2 Block elimination and structure

Symmetry and positive definiteness

There are variants of the block elimination method that can be used when A is symmetric, or symmetric and positive definite. When A is symmetric, so are A11 and the Schur complement S, so a symmetric factorization can be used for A11 and S. Symmetry can also be exploited in the other operations, such as the matrix multiplies. Overall the savings over the nonsymmetric case is around a factor of two. Positive definiteness can also be exploited in block elimination. When A is symmetric and positive definite, so are A11 and the Schur complement S, so Cholesky factorizations can be used.

Exploiting structure in other blocks

Our complexity analysis above assumes that we exploit no structure in the matrices A12, A21, A22, and the Schur complement S, i.e., they are treated as dense. But in many cases there is structure in these blocks that can be exploited in forming the Schur complement, factoring it, and carrying out the solve steps. In such cases the computational savings of the block elimination method over a standard method can be even higher.

Example C.2 Block triangular equations. Suppose that A12 = 0, i.e., the linear equations Ax = b have block lower triangular structure:

    [A11, 0; A21, A22] [x1; x2] = [b1; b2].

In this case the Schur complement is just S = A22, and the block elimination method reduces to block forward substitution:

    x1 := A11^{-1} b1
    x2 := A22^{-1} (b2 − A21 x1).

Example C.3 Block diagonal and banded systems. Suppose that A11 is block diagonal, with maximum block size l × l, and that A12, A21, and A22 are banded, say with bandwidth k. In this case, A11^{-1} is also block diagonal, with the same block sizes as A11. Therefore the product A11^{-1} A12 is also banded, with bandwidth k + l, and the Schur complement, S = A22 − A21 A11^{-1} A12, is banded with bandwidth 2k + l. This means that forming the Schur complement S can be done more efficiently, and that the factorization and solve steps with S can be done efficiently. In particular, for fixed maximum block size l and bandwidth k, we can solve Ax = b with a number of flops that grows linearly with n.

Example C.4 KKT structure. Suppose that the matrix A has KKT structure, i.e.,

    A = [A11, A12; A12^T, 0],

where A11 ∈ S^p_{++}, and A12 ∈ R^{p×m} with rank A12 = m. Since A11 ≻ 0, we can use a Cholesky factorization. The Schur complement S = −A12^T A11^{-1} A12 is negative definite, so we can factor −S using a Cholesky factorization.

C.4.3 The matrix inversion lemma

The idea of block elimination is to remove variables, and then solve a smaller set of equations that involve the Schur complement of the original matrix with respect to the eliminated variables. The same idea can be turned around: when we recognize a matrix as a Schur complement, we can introduce new variables, and create a larger set of equations to solve. In most cases there is no advantage to doing this, since we end up with a larger set of equations. But when the larger set of equations has some special structure that can be exploited to solve it, introducing variables can lead to an efficient method.
The most common case is when another block of variables can be eliminated from the larger matrix. We start with the linear equations

    (A + BC) x = b,    (C.7)

where A ∈ R^{n×n} is nonsingular, and B ∈ R^{n×p}, C ∈ R^{p×n}. We introduce a new variable y = Cx, and rewrite the equations as

    A x + B y = b,    y = C x,

or, in matrix form,

    [A, B; C, −I] [x; y] = [b; 0].    (C.8)

Note that our original coefficient matrix, A + BC, is the Schur complement of −I in the larger matrix that appears in (C.8). If we were to eliminate the variable y from (C.8), we would get back the original equation (C.7). In some cases, it can be more efficient to solve the larger set of equations (C.8) than the original, smaller set of equations (C.7). This would be the case, for example, if A, B, and C were relatively sparse, but the matrix A + BC were far less sparse.
After introducing the new variable y, we can eliminate the original variable x from the larger set of equations (C.8), using x = A^{-1}(b − By). Substituting this into the second equation y = Cx, we obtain

    (I + C A^{-1} B) y = C A^{-1} b,

so that

    y = (I + C A^{-1} B)^{-1} C A^{-1} b.

Using x = A^{-1}(b − By), we get

    x = (A^{-1} − A^{-1} B (I + C A^{-1} B)^{-1} C A^{-1}) b.

Since b is arbitrary, we conclude that

    (A + BC)^{-1} = A^{-1} − A^{-1} B (I + C A^{-1} B)^{-1} C A^{-1}.    (C.9)

This is known as the matrix inversion lemma, or the Sherman-Woodbury-Morrison formula. The matrix inversion lemma has many applications. For example if p is small (or even just not very large), it gives us a method for solving (A + BC)x = b, provided we have an efficient method for solving Au = v.

Diagonal or sparse plus low rank

Suppose that A is diagonal with nonzero diagonal elements, and we want to solve an equation of the form (C.7). The straightforward solution would consist in first forming the matrix D = A + BC, and then solving Dx = b. If the product BC is dense, then the complexity of this method is 2pn^2 flops to form A + BC, plus (2/3)n^3 flops for the LU factorization of D, so the total cost is 2pn^2 + (2/3)n^3 flops.
The matrix inversion lemma suggests a more efficient method. We can calculate x by evaluating the expression (C.9) from right to left, as follows. We first evaluate z = A^{-1}b (n flops, since A is diagonal). Then we form the matrix E = I + C A^{-1} B (2p^2 n flops). Next we solve Ew = Cz, which is a set of p linear equations in p variables. The cost is (2/3)p^3 flops, plus 2pn to form Cz. Finally, we evaluate x = z − A^{-1}Bw (2pn flops for the matrix-vector product Bw, plus lower order terms). The total cost is 2p^2 n + (2/3)p^3 flops, dropping dominated terms. Comparing with the first method, we see that the second method is more efficient when p < n. In particular if p is small and fixed, the complexity grows linearly with n.
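A sketch of this right-to-left evaluation for diagonal A (our addition; the helper name is ours):

import numpy as np

def solve_diag_plus_low_rank(a, B, C, b):
    # Solve (diag(a) + B C) x = b via the matrix inversion lemma (C.9),
    # in about 2 p^2 n flops instead of (2/3) n^3.
    z = b / a                                        # A^-1 b, n flops
    E = np.eye(B.shape[1]) + C @ (B / a[:, None])    # I + C A^-1 B
    w = np.linalg.solve(E, C @ z)
    return z - (B @ w) / a

rng = np.random.default_rng(7)
n, p = 2000, 5
a = rng.uniform(1, 2, n)
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, n))
b = rng.standard_normal(n)

x = solve_diag_plus_low_rank(a, B, C, b)
print(np.allclose((np.diag(a) + B @ C) @ x, b))   # True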
Another important application of the matrix inversion lemma occurs when A is sparse and nonsingular, and the matrices B and C are dense. Again we can compare two methods. The first method is to form the (dense) matrix A + BC, and to solve (C.7) using a dense LU factorization. The cost of this method is 2pn^2 + (2/3)n^3 flops. The second method is based on evaluating the expression (C.9), using a sparse LU factorization of A. Specifically, suppose that f is the cost of factoring A as A = P1 L U P2, and s is the cost of solving the factored system P1 L U P2 x = d. We can evaluate (C.9) from right to left as follows. We first factor A, and solve p + 1 linear systems

    A z = b,    A D = B,

to find z ∈ R^n and D ∈ R^{n×p}. The cost is f + (p + 1)s flops. Next, we form the matrix E = I + CD, and solve Ew = Cz, which is a set of p linear equations in p variables w. The cost of this step is 2p^2 n + (2/3)p^3 plus lower order terms. Finally, we evaluate x = z − Dw, at a cost of 2pn flops. This gives us a total cost of

    f + ps + 2p^2 n + (2/3)p^3

flops. If f ≪ (2/3)n^3 and s ≪ 2n^2, this is much lower than the complexity of the first method.

Remark C.3 The augmented system approach. A different approach to exploiting sparse plus low rank structure is to solve (C.8) directly using a sparse LU-solver. The system (C.8) is a set of p + n linear equations in p + n variables, and is sometimes called the augmented system associated with (C.7). If A is very sparse and p is small, then solving the augmented system using a sparse solver can be much faster than solving the system (C.7) using a dense solver.
The augmented system approach is closely related to the method that we described above. Suppose A = P1 L U P2 is a sparse LU factorization of A, and

    I + C A^{-1} B = P3 L̃ Ũ

is a dense LU factorization of I + C A^{-1} B. Then

    [A, B; C, −I] = [P1, 0; 0, P3] [L, 0; P3^T C P2^T U^{-1}, −L̃] [U, L^{-1} P1^T B; 0, Ũ] [P2, 0; 0, I],    (C.10)

and this factorization can be used to solve the augmented system. It can be verified that this is equivalent to the method based on the matrix inversion lemma that we described above.
Of course, if we solve the augmented system using a sparse LU solver, we have no control over the permutations that are selected. The solver might choose a factorization different from (C.10), and more expensive to compute. In spite of this, the augmented system approach remains an attractive option. It is easier to implement than the method based on the matrix inversion lemma, and it is numerically more stable.

Low rank updates

Suppose A ∈ R^{n×n} is nonsingular, u, v ∈ R^n with 1 + v^T A^{-1} u ≠ 0, and we want to solve two sets of linear equations

    A x = b,    (A + u v^T) x̃ = b.

The solution x̃ of the second system is called a rank-one update of x. The matrix inversion lemma allows us to calculate the rank-one update x̃ very cheaply, once we have computed x. We have

    x̃ = (A + u v^T)^{-1} b
      = (A^{-1} − (1/(1 + v^T A^{-1} u)) A^{-1} u v^T A^{-1}) b
      = x − (v^T x/(1 + v^T A^{-1} u)) A^{-1} u.

We can therefore solve both systems by factoring A, computing x = A^{-1}b and w = A^{-1}u, and then evaluating

    x̃ = x − (v^T x/(1 + v^T w)) w.

The overall cost is f + 2s, as opposed to 2(f + s) if we were to solve for x̃ from scratch.

C.5 Solving underdetermined linear equations

To conclude this appendix, we mention a few important facts about underdetermined linear equations

    A x = b,    (C.11)

where A ∈ R^{p×n} with p < n. We assume that rank A = p, so there is at least one solution for all b. In many applications it is sufficient to find just one particular solution x̂. In other situations we might need a complete parametrization of all solutions as

    {x | Ax = b} = {F z + x̂ | z ∈ R^{n−p}},    (C.12)

where F is a matrix whose columns form a basis for the nullspace of A.

Inverting a nonsingular submatrix of A

The solution of the underdetermined system is straightforward if a p × p nonsingular submatrix of A is known. We start by assuming that the first p columns of A are independent. Then we can write the equation Ax = b as

    A x = [A1 A2] [x1; x2] = A1 x1 + A2 x2 = b,

where A1 ∈ R^{p×p} is nonsingular. We can express x1 as

    x1 = A1^{-1} (b − A2 x2) = A1^{-1} b − A1^{-1} A2 x2.

This expression allows us to easily calculate a solution: we simply take x̂2 = 0, x̂1 = A1^{-1} b.
C.5 Solving underdetermined linear equations

To conclude this appendix, we mention a few important facts about underdetermined linear equations

    Ax = b,                                                           (C.11)

where A ∈ R^{p×n} with p < n. We assume that rank A = p, so there is at least one solution for all b. In many applications it is sufficient to find just one particular solution x̂. In other situations we might need a complete parametrization of all solutions as

    {x | Ax = b} = {Fz + x̂ | z ∈ R^{n−p}},                           (C.12)

where F is a matrix whose columns form a basis for the nullspace of A.

Inverting a nonsingular submatrix of A

The solution of the underdetermined system is straightforward if a p × p nonsingular submatrix of A is known. We start by assuming that the first p columns of A are independent. Then we can write the equation Ax = b as

\[
Ax = \begin{bmatrix} A_1 & A_2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= A_1 x_1 + A_2 x_2 = b,
\]

where A₁ ∈ R^{p×p} is nonsingular. We can express x₁ as

    x₁ = A₁⁻¹(b − A₂x₂) = A₁⁻¹b − A₁⁻¹A₂x₂.

This expression allows us to easily calculate a solution: we simply take x̂₂ = 0, x̂₁ = A₁⁻¹b. The cost is equal to the cost of solving one square set of p linear equations, A₁x̂₁ = b.

We can also parametrize all solutions of Ax = b, using x₂ ∈ R^{n−p} as a free parameter. The general solution of Ax = b can be expressed as

\[
x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} -A_1^{-1} A_2 \\ I \end{bmatrix} x_2
+ \begin{bmatrix} A_1^{-1} b \\ 0 \end{bmatrix}.
\]

This gives a parametrization of the form (C.12) with

\[
F = \begin{bmatrix} -A_1^{-1} A_2 \\ I \end{bmatrix}, \qquad
\hat{x} = \begin{bmatrix} A_1^{-1} b \\ 0 \end{bmatrix}.
\]

To summarize, assume that the cost of factoring A₁ is f and the cost of solving one system of the form A₁x = d is s. Then the cost of finding one solution of (C.11) is f + s. The cost of parametrizing all solutions (i.e., calculating F and x̂) is f + s(n − p + 1).

Now we consider the general case, when the first p columns of A need not be independent. Since rank A = p, we can select a set of p columns of A that is independent, permute them to the front, and then apply the method described above. In other words, we find a permutation matrix P such that the first p columns of Ã = AP are independent, i.e.,

    Ã = AP = [A₁  A₂],

where A₁ is invertible. The general solution of Ãx̃ = b, where x̃ = Pᵀx, is then given by

\[
\tilde{x} = \begin{bmatrix} -A_1^{-1} A_2 \\ I \end{bmatrix} \tilde{x}_2
+ \begin{bmatrix} A_1^{-1} b \\ 0 \end{bmatrix}.
\]

The general solution of Ax = b is then given by

\[
x = P \tilde{x}
= P \begin{bmatrix} -A_1^{-1} A_2 \\ I \end{bmatrix} z
+ P \begin{bmatrix} A_1^{-1} b \\ 0 \end{bmatrix},
\]

where z ∈ R^{n−p} is a free parameter. This idea is useful when it is easy to identify a nonsingular or easily inverted submatrix of A, for example, a diagonal matrix with nonzero diagonal elements.

The QR factorization

If C ∈ R^{n×p} with p ≤ n and rank C = p, then it can be factored as

\[
C = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix}
\begin{bmatrix} R \\ 0 \end{bmatrix},
\]

where Q₁ ∈ R^{n×p} and Q₂ ∈ R^{n×(n−p)} satisfy

    Q₁ᵀQ₁ = I,        Q₂ᵀQ₂ = I,        Q₁ᵀQ₂ = 0,

and R ∈ R^{p×p} is upper triangular with nonzero diagonal elements. This is called the QR factorization of C. The QR factorization can be calculated in 2p²(n − p/3) flops. (The matrix Q is stored in a factored form that makes it possible to efficiently compute the matrix-vector products Qx and Qᵀx.)

The QR factorization can be used to solve the underdetermined set of linear equations (C.11). Suppose

\[
A^T = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix}
\begin{bmatrix} R \\ 0 \end{bmatrix}
\]

is the QR factorization of Aᵀ. Substituting in the equations, it is clear that x̂ = Q₁R⁻ᵀb satisfies the equations:

    Ax̂ = RᵀQ₁ᵀQ₁R⁻ᵀb = b.

Moreover, the columns of Q₂ form a basis for the nullspace of A, so the complete solution set can be parametrized as

    {x = x̂ + Q₂z | z ∈ R^{n−p}}.

The QR factorization method is the most common method for solving underdetermined equations. One drawback is that it is difficult to exploit sparsity: the factor Q is usually dense, even when C is very sparse.
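As a concrete illustration of the QR method (a sketch under the stated assumptions p < n and rank A = p; the function and variable names are ours, not the book's), the following NumPy/SciPy fragment computes the particular solution x̂ = Q₁R⁻ᵀb and a nullspace basis from a full QR factorization of Aᵀ.

```python
# QR-based solution of the underdetermined system Ax = b: factor A^T,
# read off one particular solution x^ = Q1 R^{-T} b and the nullspace
# basis Q2 (columns of F in the parametrization (C.12)).
import numpy as np
from scipy.linalg import qr, solve_triangular

def underdetermined_via_qr(A, b):
    p, n = A.shape                   # assumes p < n and rank A = p
    Q, R = qr(A.T)                   # full QR: Q is n x n, R is n x p
    Q1, Q2 = Q[:, :p], Q[:, p:]      # last n - p columns span the nullspace
    R1 = R[:p, :]                    # p x p upper triangular block
    y = solve_triangular(R1, b, trans='T')   # solve R^T y = b
    return Q1 @ y, Q2                # particular solution, nullspace basis

rng = np.random.default_rng(2)
p, n = 4, 10
A, b = rng.standard_normal((p, n)), rng.standard_normal(p)

x_hat, F = underdetermined_via_qr(A, b)
print(np.allclose(A @ x_hat, b))     # True: x^ solves Ax = b
print(np.allclose(A @ F, 0))         # True: columns of F lie in the nullspace
```

Note that scipy.linalg.qr returns the full n × n factor Q by default, so the nullspace basis Q₂ is simply its last n − p columns.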
LU factorization of a rectangular matrix

If C ∈ R^{n×p} with p ≤ n and rank C = p, then it can be factored as

    C = PLU,

where P ∈ R^{n×n} is a permutation matrix, L ∈ R^{n×p} is unit lower triangular (i.e., lᵢⱼ = 0 for i < j and lᵢᵢ = 1), and U ∈ R^{p×p} is nonsingular and upper triangular. The cost is (2/3)p³ + p²(n − p) flops if no structure in C is exploited. If the matrix C is sparse, the LU factorization usually includes row and column permutations, i.e., we factor C as

    C = P₁LUP₂,

where P₁ ∈ R^{n×n} and P₂ ∈ R^{p×p} are permutation matrices. The LU factorization of a sparse rectangular matrix can be calculated very efficiently, at a cost that is much lower than for dense matrices.

The LU factorization can be used to solve underdetermined sets of linear equations. Suppose Aᵀ = PLU is the LU factorization of the matrix Aᵀ in (C.11), and we partition L as

\[
L = \begin{bmatrix} L_1 \\ L_2 \end{bmatrix},
\]

where L₁ ∈ R^{p×p} and L₂ ∈ R^{(n−p)×p}. It is easily verified that the solution set can be parametrized as (C.12) with

\[
\hat{x} = P \begin{bmatrix} L_1^{-T} U^{-T} b \\ 0 \end{bmatrix}, \qquad
F = P \begin{bmatrix} -L_1^{-T} L_2^T \\ I \end{bmatrix}.
\]

Bibliography

Standard references for dense numerical linear algebra are Golub and Van Loan [GL89], Demmel [Dem97], Trefethen and Bau [TB97], and Higham [Hig96]. The sparse Cholesky factorization is covered in George and Liu [GL81]. Duff, Erisman, and Reid [DER86] and Duff [Duf93] discuss the sparse LU and LDLᵀ factorizations. The books by Gill, Murray, and Wright [GMW81, §2.2], Wright [Wri97, chapter 11], and Nocedal and Wright [NW99, §A.2] include introductions to numerical linear algebra that focus on problems arising in numerical optimization.

High-quality implementations of common dense linear algebra algorithms are included in the LAPACK package [ABB+99]. LAPACK is built upon the Basic Linear Algebra Subprograms (BLAS), a library of routines for basic vector and matrix operations that can be easily customized to take advantage of specific computer architectures. Several codes for solving sparse linear equations are also available, including SPOOLES [APWW99], SuperLU [DGL03], UMFPACK [Dav03], and WSMP [Gup00], to mention only a few.

References

[ABB+99] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users' Guide. Society for Industrial and Applied Mathematics, third edition, 1999. Available from www.netlib.org/lapack.

[AE61] K. J. Arrow and A. C. Enthoven. Quasi-concave programming. Econometrica, 29(4):779–800, 1961.

[AG03] F. Alizadeh and D. Goldfarb. Second-order cone programming. Mathematical Programming Series B, 95:3–51, 2003.

[AHO98] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton. Primal-dual interior-point methods for semidefinite programming: Convergence rates, stability and numerical results. SIAM Journal on Optimization, 8(3):746–768, 1998.

[Ali91] F. Alizadeh. Combinatorial Optimization with Interior-Point Methods and Semi-Definite Matrices. PhD thesis, University of Minnesota, 1991.

[And70] T. W. Anderson. Estimation of covariance matrices which are linear combinations or whose inverses are linear combinations of given matrices. In R. C. Bose et al., editor, Essays in Probability and Statistics, pages 1–24. University of North Carolina Press, 1970.

[APWW99] C. Ashcraft, D. Pierce, D. K. Wah, and J. Wu. The Reference Manual for SPOOLES Version 2.2: An Object Oriented Software Library for Solving Sparse Linear Systems of Equations, 1999. Available from www.netlib.org/linalg/spooles/spooles.2.2.html.

[AY98] E. D. Andersen and Y. Ye. A computational study of the homogeneous algorithm for large-scale convex optimization. Computational Optimization and Applications, 10:243–269, 1998.

[Bar02] A. Barvinok. A Course in Convexity, volume 54 of Graduate Studies in Mathematics. American Mathematical Society, 2002.

[BB65] E. F. Beckenbach and R. Bellman. Inequalities. Springer, second edition, 1965.

[BB91] S. Boyd and C. Barratt. Linear Controller Design: Limits of Performance. Prentice-Hall, 1991.

[BBI71] A. Berman and A. Ben-Israel. More on linear inequalities with applications to matrix theory. Journal of Mathematical Analysis and Applications, 33:482–496, 1971.

[BD77] P. J. Bickel and K. A. Doksum. Mathematical Statistics. Holden-Day, 1977.

[BDX04] S. Boyd, P. Diaconis, and L. Xiao. Fastest mixing Markov chain on a graph. SIAM Review, 46(4):667–689, 2004.

[BE93] S. Boyd and L. El Ghaoui.
Method of centers for minimizing generalized eigenvalues. Linear Algebra and Its Applications, 188:63–111, 1993. 686 References S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix In- equalities in System and Control Theory. Society for Industrial and Applied Mathematics, 1994. A. Berman. Cones, Matrices and Mathematical Programming. Springer, 1973. M. Berger. Convexity. The American Mathematical Monthly, 97(8):650–678, 1990. D.P. Bertsekas. Nonlinear Programming. Athena Scientific, second edition, 1999. D.P. Bertsekas. Convex Analysis and Optimization. Athena Scientific, 2003. With A. Nedi ́c and A. E. Ozdaglar. T. Bonnesen and W. Fenchel. Theorie der konvexen Ko ̈rper. Chelsea Pub- lishing Company, 1948. First published in 1934. R. Bellman and K. Fan. On systems of linear inequalities in Hermitian matrix variables. In V. L. Klee, editor, Convexity, volume VII of Proceedings of the Symposia in Pure Mathematics, pages 1–11. American Mathematical Society, 1963. R. G. Bland, D. Goldfarb, and M. J. Todd. The ellipsoid method: A survey. Operations Research, 29(6):1039–1091, 1981. A. Ben-Israel. Linear equations and inequalities on finite dimensional, real or complex vector spaces: A unified theory. Journal of Mathematical Analysis and Applications, 27:367–389, 1969. A. Bj ̈orck. Numerical Methods for Least Squares Problems. Society for In- dustrial and Applied Mathematics, 1996. A. Brooke, D. Kendrick, A. Meeraus, and R. Raman. GAMS: A User’s Guide. The Scientific Press, 1998. J. M. Borwein and A. S. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2000. O. Barndorff-Nielsen. Information and Exponential Families in Statistical Theory. John Wiley & Sons, 1978. J. V. Bondar. Comments on and complements to Inequalities: Theory of Ma- jorization and Its Applications. Linear Algebra and Its Applications, 199:115– 129, 1994. B. Borchers. CSDP User’s Guide, 2002. Available from www.nmt.edu/~borchers/csdp.html. A. Berman and R. J. Plemmons. Nonnegative Matrices in the Mathemati- cal Sciences. Society for Industrial and Applied Mathematics, 1994. First published in 1979 by Academic Press. L. Brickman. On the field of values of a matrix. Proceedings of the American Mathematical Society, 12:61–66, 1961. D. Bertsimas and J. Sethuraman. Moment problems and semidefinite opti- mization. In H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors, Hand- book of Semidefinite Programming, chapter 16, pages 469–510. Kluwer Aca- demic Publishers, 2000. M. S. Bazaraa, H. D. Sherali, and C. M. Shetty. Nonlinear Programming. Theory and Algorithms. John Wiley & Sons, second edition, 1993. D. Bertsimas and J. N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997. A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769–805, 1998. [BEFB94] [Ber73] [Ber90] [Ber99] [Ber03] [BF48] [BF63] [BGT81] [BI69] [Bj ̈o96] [BKMR98] [BL00] [BN78] [Bon94] [Bor02] [BP94] [Bri61] [BS00] [BSS93] [BT97] [BTN98] References [BTN99] [BTN01] [BY02] [BYT99] [Cal64] [CDS01] [CGGS98] [CH53] [CK77] [CT91] [Dan63] [Dav63] [Dav03] [DDB95] [Deb59] [Dem97] [DER86] [DGL03] [dH93] [DHS99] [Dik67] 687 A. Ben-Tal and A. Nemirovski. Robust solutions of uncertain linear programs. Operations Research Letters, 25(1):1–13, 1999. A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization. Analysis, Algorithms, and Engineering Applications. Society for Industrial and Applied Mathematics, 2001. S. J. Benson and Y. Ye. 
DSDP — A Software Package Implementing the Dual-Scaling Algorithm for Semidefinite Programming, 2002. Available from www-unix.mcs.anl.gov/~benson. E. Bai, Y. Ye, and R. Tempo. Bounded error parameter estimation: A se- quential analytic center approach. IEEE Transactions on Automatic control, 44(6):1107–1117, 1999. E. Calabi. Linear systems of real quadratic forms. Proceedings of the Amer- ican Mathematical Society, 15(5):844–846, 1964. S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001. S. Chandrasekaran, G. H. Golub, M. Gu, and A. H. Sayed. Parameter es- timation in the presence of bounded data uncertainties. SIAM Journal of Matrix Analysis and Applications, 19(1):235–252, 1998. R. Courant and D. Hilbert. Method of Mathematical Physics. Volume 1. Interscience Publishers, 1953. Tranlated and revised from the 1937 German original. B. D. Craven and J. J. Koliha. Generalizations of Farkas’ theorem. SIAM Journal on Numerical Analysis, 8(6), 1977. T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 1991. G. B. Dantzig. Linear Programming and Extensions. Princeton University Press, 1963. C. Davis. Notions generalizing convexity for functions defined on spaces of matrices. In V. L. Klee, editor, Convexity, volume VII of Proceedings of the Symposia in Pure Mathematics, pages 187–201. American Mathematical Society, 1963. T. A. Davis. UMFPACK User Guide, 2003. Available from www.cise.ufl.edu/research/sparse/umfpack. M. A. Dahleh and I. J. Diaz-Bobillo. Control of Uncertain Systems: A Linear Programming Approach. Prentice-Hall, 1995. G. Debreu. Theory of Value: An Axiomatic Analysis of Economic Equilib- rium. Yale University Press, 1959. J. W. Demmel. Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics, 1997. I. S. Duff, A. M. Erismann, and J. K. Reid. Direct Methods for Sparse Matrices. Clarendon Press, 1986. J. W. Demmel, J. R. Gilbert, and X. S. Li. SuperLU Users’ Guide, 2003. Available from crd.lbl.gov/~xiaoye/SuperLU. D. den Hertog. Interior Point Approach to Linear, Quadratic and Convex Programming. Kluwer, 1993. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, second edition, 1999. I. Dikin. Iterative solution of problems of linear and quadratic programming. Soviet Mathematics Doklady, 8(3):674–675, 1967. 688 References T. N. Davidson, Z.-Q. Luo, and K. M. Wong. Design of orthogonal pulse shapes for communications via semidefinite programming. IEEE Transactions on Signal Processing, 48(5):1433–1445, 2000. G. E. Dullerud and F. Paganini. A Course in Robust Control Theory: A Convex Approach. Springer, 2000. R. J. Duffin, E. L. Peterson, and C. Zener. Geometric Programming. Theory and Applications. John Wiley & Sons, 1967. J. E. Dennis and R. S. Schnabel. Numerical Methods for Unconstrained Opti- mization and Nonlinear Equations. Society for Industrial and Applied Math- ematics, 1996. First published in 1983 by Prentice-Hall. I. S. Duff. The solution of augmented systems. In D. F. Griffiths and G. A. Watson, editors, Numerical Analysis 1993. Proceedings of the 15th Dundee Conference, pages 40–55. Longman Scientific & Technical, 1993. J. G. Ecker. Geometric programming: Methods, computations and applica- tions. SIAM Review, 22(3):338–362, 1980. H. G. Eggleston. Convexity. Cambridge University Press, 1958. L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data. 
SIAM Journal of Matrix Analysis and Applications, 18(4):1035–1064, 1997. J. Elzinga and T. G. Moore. A central cutting plane algorithm for the convex programming problem. Mathematical Programming Studies, 8:134–145, 1975. L. El Ghaoui and S. Niculescu, editors. Advances in Linear Matrix Inequality Methods in Control. Society for Industrial and Applied Mathematics, 2000. L. El Ghaoui, F. Oustry, and H. Lebret. Robust solutions to uncertain semidefinite programs. SIAM Journal on Optimization, 9(1):33–52, 1998. I. Ekeland and R. T ́emam. Convex Analysis and Variational Inequalities. Classics in Applied Mathematics. Society for Industrial and Applied Mathe- matics, 1999. Originally published in 1976. J. Farkas. Theorie der einfachen Ungleichungen. Journal fu ̈r die Reine und Angewandte Mathematik, 124:1–27, 1902. J. P. Fishburn and A. E. Dunlop. TILOS: A posynomial programming ap- proach to transistor sizing. In IEEE International Conference on Computer- Aided Design: ICCAD-85. Digest of Technical Papers, pages 326–328. IEEE Computer Society Press, 1985. W. Fenchel. Convexity through the ages. In P. M. Gruber and J. M. Wills, editors, Convexity and Its Applications, pages 120–130. Birkh ̈auser Verlag, 1983. R. Fourer, D. M. Gay, and B. W. Kernighan. AMPL: A Modeling Language for Mathematical Programming. Duxbury Press, 1999. A. Forsgren, P. E. Gill, and M. H. Wright. Interior methods for nonlinear optimization. SIAM Review, 44(4):525–597, 2002. K. Fujisawa, M. Kojima, and K. Nakata. SDPA User’s Manual, 1998. Avail- able from grid.r.dendai.ac.jp/sdpa. M. Florenzano and C. Le Van. Finite Dimensional Convexity and Optimiza- tion. Number 13 in Studies in Economic Theory. Springer, 2001. A. V. Fiacco and G. P. McCormick. Nonlinear Programming. Sequential Unconstrained Minimization Techniques. Society for Industrial and Applied Mathematics, 1990. First published in 1968 by Research Analysis Corpora- tion. [DLW00] [DP00] [DPZ67] [DS96] [Duf93] [Eck80] [Egg58] [EL97] [EM75] [EN00] [EOL98] [ET99] [Far02] [FD85] [Fen83] [FGK99] [FGW02] [FKN98] [FL01] [FM90] References [Fre56] [FW56] [Gau95] [GI03a] [GI03b] [GKT51] [GL81] [GL89] [GLS88] [GLY96] [GMS+ 86] [GMW81] [Gon92] [Gow85] [Gup00] [GW95] [Han98] [HBL01] [Hes68] [Hig96] 689 R. J. Freund. The introduction of risk into a programming model. Econo- metrica, 24(3):253–263, 1956. M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3:95–110, 1956. C. F. Gauss. Theory of the Combination of Observations Least Subject to Errors. Society for Industrial and Applied Mathematics, 1995. Translated from original 1820 manuscript by G. W. Stewart. D. Goldfarb and G. Iyengar. Robust convex quadratically constrained pro- grams. Mathematical Programming Series B, 97:495–515, 2003. D. Goldfarb and G. Iyengar. Robust portfolio selection problems. Mathemat- ics of Operations Research, 28(1):1–38, 2003. D. Gale, H. W. Kuhn, and A. W. Tucker. Linear programming and the theory of games. In T. C. Koopmans, editor, Activity Analysis of Production and Allocation, volume 13 of Cowles Commission for Research in Economics Monographs, pages 317–335. John Wiley & Sons, 1951. A. George and J. W.-H. Liu. Computer solution of large sparse positive definite systems. Prentice-Hall, 1981. G. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Uni- versity Press, second edition, 1989. M. Gro ̈tschel, L. Lovasz, and A. Schrijver. Geometric Algorithms and Com- binatorial Optimization. Springer, 1988. J.-L. Goffin, Z.-Q. Luo, and Y. Ye. 
Complexity analysis of an interior cutting plane method for convex feasibility problems. SIAM Journal on Optimization, 6:638–652, 1996. P. E. Gill, W. Murray, M. A. Saunders, J. A. Tomlin, and M. H. Wright. On projected newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method. Mathematical Programming, 36:183–209, 1986. P. E. Gill, W. Murray, and M. H. Wright. Practical Optimization. Academic Press, 1981. C. C. Gonzaga. Path-following methods for linear programming. SIAM Re- view, 34(2):167–224, 1992. J. C. Gower. Properties of Euclidean and non-Euclidean distance matrices. Linear Algebra and Its Applications, 67:81–97, 1985. A. Gupta. WSMP: Watson Sparse Matrix Package. Part I — Direct Solution of Symmetric Sparse Systems. Part II — Direct Solution of General Sparse Systems, 2000. Available from www.cs.umn.edu/~agupta/wsmp. M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the Association for Computing Machinery, 42(6):1115–1145, 1995. P. C. Hansen. Rank-Deficient and Discrete Ill-Posed Problems. Numerical Aspects of Linear Inversion. Society for Industrial and Applied Mathematics, 1998. M. del Mar Hershenson, S. P. Boyd, and T. H. Lee. Optimal design of a CMOS op-amp via geometric programming. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20(1):1–21, 2001. M. R. Hestenes. Pairs of quadratic forms. Linear Algebra and Its Applications, 1:397–407, 1968. N. J. Higham. Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics, 1996. 690 References C. Hildreth. A quadratic programming procedure. Naval Research Logistics Quarterly, 4:79–85, 1957. R. A. Horn and C. A. Johnson. Matrix Analysis. Cambridge University Press, 1985. R. A. Horn and C. A. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991. G. H. Hardy, J. E. Littlewood, and G. P ́olya. Inequalities. Cambridge Uni- versity Press, second edition, 1952. R. Horst and P. Pardalos. Handbook of Global Optimization. Kluwer, 1994. C. Helmberg, F. Rendl, R. Vanderbei, and H. Wolkowicz. An interior- point method for semidefinite programming. SIAM Journal on Optimization, 6:342–361, 1996. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learn- ing. Data Mining, Inference, and Prediction. Springer, 2001. P. J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73–101, 1964. P. J. Huber. Robust Statistics. John Wiley & Sons, 1981. J.-B. Hiriart-Urruty and C. Lemar ́echal. Convex Analysis and Minimization Algorithms. Springer, 1993. Two volumes. J.-B. Hiriart-Urruty and C. Lemar ́echal. Fundamentals of Convex Analy- sis. Springer, 2001. Abridged version of Convex Analysis and Minimization Algorithms volumes 1 and 2. K. Isii. Inequalities of the types of Chebyshev and Cram ́er-Rao and math- ematical programming. Annals of The Institute of Statistical Mathematics, 16:277–293, 1964. F. Jarre. Optimal ellipsoidal approximations around the analytic center. Applied Mathematics and Optimization, 30:15–19, 1994. J. L. W. V. Jensen. Sur les fonctions convexes et les in ́egalit ́es entre les valeurs moyennes. Acta Mathematica, 30:175–193, 1906. F. John. Extremum problems with inequalities as subsidiary conditions. In J. Moser, editor, Fritz John, Collected Papers, pages 543–560. Birkh ̈auser Verlag, 1985. First published in 1948. L. V. Kantorovich. 
Functional Analysis and Applied Mathematics. National Bureau of Standards, 1952. Translated from Russian by C. D. Benster. First published in 1948. L. V. Kantorovich. Mathematical methods of organizing and planning pro- duction. Management Science, 6(4):366–422, 1960. Translated from Russian. First published in 1939. N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4):373–395, 1984. J. E. Kelley. The cutting-plane method for solving convex programs. Journal of the Society for Industrial and Applied Mathematics, 8(4):703–712, 1960. V. L. Klee, editor. Convexity, volume 7 of Proceedings of Symposia in Pure Mathematics. American Mathematical Society, 1963. V. Klee. What is a convex set? The American Mathematical Monthly, 78(6):616–631, 1971. M. G. Krein and A. A. Nudelman. The Markov Moment Problem and Ex- tremal Problems. American Mathematical Society, 1977. Translated from Russian. First published in 1973. [Hil57] [HJ85] [HJ91] [HLP52] [HP94] [HRVW96] [HTF01] [Hub64] [Hub81] [HUL93] [HUL01] [Isi64] [Jar94] [Jen06] [Joh85] [Kan52] [Kan60] [Kar84] [Kel60] [Kle63] [Kle71] [KN77] References [Koo51] [KS66] [KSH97] [KSH00] [KSJA91] [KT51] [Kuh76] [Las95] [Las02] [Lay82] [LH66] [LH95] [LMS94] [LO96] [L ̈of04] [L ̈ow34] [LSZ00] [Lue68] [Lue69] [Lue84] 691 T. C. Koopmans, editor. Activity Analysis of Production and Allocation, volume 13 of Cowles Commission for Research in Economics Monographs. John Wiley & Sons, 1951. S. Karlin and W. J. Studden. Tchebycheff Systems: With Applications in Analysis and Statistics. John Wiley & Sons, 1966. M. Kojima, S. Shindoh, and S. Hara. Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices. SIAM Journal on Optimization, 7(1):86–125, 1997. T. Kailath, A. H. Sayed, and B. Hassibi. Linear Estimation. Prentice-Hall, 2000. J. M. Kleinhaus, G. Sigl, F. M. Johannes, and K. J. Antreich. GORDIAN: VLSI placement by quadratic programming and slicing optimization. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 10(3):356–200, 1991. H. W. Kuhn and A. W. Tucker. Nonlinear programming. In J. Neyman, ed- itor, Proceedings of the Second Berkeley Symposium on Mathematical Statis- tics and Probability, pages 481–492. University of California Press, 1951. H. W. Kuhn. Nonlinear programming. A historical view. In R. W. Cottle and C. E. Lemke, editors, Nonlinear Programming, volume 9 of SIAM-AMS Proceedings, pages 1–26. American Mathematical Society, 1976. J. B. Lasserre. A new Farkas lemma for positive semidefinite matrices. IEEE Transactions on Automatic Control, 40(6):1131–1133, 1995. J. B. Lasserre. Bounds on measures satisfying moment conditions. The Annals of Applied Probability, 12(3):1114–1137, 2002. S. R. Lay. Convex Sets and Their Applications. John Wiley & Sons, 1982. B. Liˆeu ̃ and P. Huard. La m ́ethode des centres dans un espace topologique. Numerische Mathematik, 8:56–67, 1966. C. L. Lawson and R. J. Hanson. Solving Least Squares Problems. Society for Industrial and Applied Mathematics, 1995. First published in 1974 by Prentice-Hall. I. J. Lustig, R. E. Marsten, and D. F. Shanno. Interior point methods for linear programming: Computational state of the art. ORSA Journal on Computing, 6(1):1–14, 1994. A. S. Lewis and M. L. Overton. Eigenvalue optimization. Acta Numerica, 5:149–190, 1996. J. L ̈ofberg. YALMIP : A toolbox for modeling and optimization in MAT- LAB. 
In Proceedings of the IEEE International Symposium on Com- puter Aided Control Systems Design, pages 284–289, 2004. Available from control.ee.ethz.ch/~joloef/yalmip.php. K. L ̈owner. U ̈ber monotone Matrixfunktionen. Mathematische Zeitschrift, 38:177–216, 1934. Z.-Q. Luo, J. F. Sturm, and S. Zhang. Conic convex programming and self- dual embedding. Optimization Methods and Software, 14:169–218, 2000. D. G. Luenberger. Quasi-convex programming. SIAM Journal on Applied Mathematics, 16(5), 1968. D. G. Luenberger. Optimization by Vector Space Methods. John Wiley & Sons, 1969. D. G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, second edition, 1984. 692 References D. G. Luenberger. Investment Science. Oxford University Press, 1998. Z.-Q. Luo. Applications of convex optimization in signal processing and digital communication. Mathematical Programming Series B, 97:177–207, 2003. M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second- order cone programming. Linear Algebra and Its Applications, 284:193–228, 1998. O. Mangasarian. Linear and nonlinear separation of patterns by linear pro- gramming. Operations Research, 13(3):444–452, 1965. O. Mangasarian. Nonlinear Programming. Society for Industrial and Applied Mathematics, 1994. First published in 1969 by McGraw-Hill. H. Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952. H. Markowitz. The optimization of a quadratic function subject to linear constraints. Naval Research Logistics Quarterly, 3:111–133, 1956. W.-K. Ma, T. N. Davidson, K. M. Wong, Z.-Q. Luo, and P.-C. Ching. Quasi- maximum-likelihood multiuser detection using semi-definite relaxation with application to synchronous CDMA. IEEE Transactions on Signal Processing, 50:912–922, 2002. S. Mehrotra. On the implementation of a primal-dual interior point method. SIAM Journal on Optimization, 2(4):575–601, 1992. C. D. Meyer. Matrix Analysis and Applied Linear Algebra. Society for In- dustrial and Applied Mathematics, 2000. M. Marcus and L. Lopes. Inequalities for symmetric functions and Hermitian matrices. Canadian Journal of Mathematics, 9:305–312, 1957. A. W. Marshall and I. Olkin. Multivariate Chebyshev inequalities. Annals of Mathematical Statistics, 32(4):1001–1014, 1960. A. W. Marshall and I. Olkin. Inequalities: Theory of Majorization and Its Applications. Academic Press, 1979. R. D. C. Monteiro. Primal-dual path-following algorithms for semidefinite programming. SIAM Journal on Optimization, 7(3):663–678, 1997. MOSEK ApS. The MOSEK Optimization Tools. User’s Manual and Refer- ence, 2002. Available from www.mosek.com. T. Motzkin. Beitr ̈age zur Theorie der linearen Ungleichungen. PhD thesis, University of Basel, 1933. R. F. Meyer and J. W. Pratt. The consistent assessment and fairing of pref- erence functions. IEEE Transactions on Systems Science and Cybernetics, 4(3):270–278, 1968. R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, 1995. M. Morari and E. Zafiriou. Robust Process Control. Prentice-Hall, 1989. Y. Nesterov. Semidefinite relaxations and nonconvex quadratic optimization. Optimization Methods and Software, 9(1-3):141–160, 1998. Y. Nesterov. Squared functional systems and optimization problems. In J. Frenk, C. Roos, T. Terlaky, and S. Zhang, editors, High Performance Optimization Techniques, pages 405–440. Kluwer, 2000. H. Nikaidˆo. On von Neumann’s minimax theorem. Pacific Journal of Math- ematics, 1954. 
[Lue95] [Lue98] [Luo03] [LVBL98] [Man65] [Man94] [Mar52] [Mar56] [MDW+02] [Meh92] [Mey00] [ML57] [MO60] [MO79] [Mon97] [MOS02] [Mot33] [MP68] [MR95] [MZ89] [Nes98] [Nes00] [Nik54] D. G. Luenberger. Microeconomic Theory. McGraw-Hill, 1995. References [NN94] [NT98] [NW99] [NWY00] [NY83] [OR00] [Par71] [Par98] [Par00] [Par03] [Pet76] [Pin95] [Pol87] [Pon67] [Pr ́e71] [Pr ́e73] [Pr ́e80] [Pro01] [PRT02] [PS98] [PSU88] [Puk93] [Ren01] [Roc70] 693 Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Methods in Con- vex Programming. Society for Industrial and Applied Mathematics, 1994. Y. E. Nesterov and M. J. Todd. Primal-dual interior-point methods for self- scaled cones. SIAM Journal on Optimization, 8(2):324–364, 1998. J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999. Y. Nesterov, H. Wolkowicz, and Y. Ye. Semidefinite programming relaxations of nonconvex quadratic optimization. In H. Wolkowicz, R. Saigal, and L. Van- denberghe, editors, Handbook of Semidefinite Programming, chapter 13, pages 361–419. Kluwer Academic Publishers, 2000. A. Nemirovskii and D. Yudin. Problem Complexity and Method Efficiency in Optimization. John Wiley & Sons, 1983. J. M. Ortega and W. C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Society for Industrial and Applied Mathematics, 2000. First published in 1970 by Academic Press. V. Pareto. Manual of Political Economy. A. M. Kelley Publishers, 1971. Translated from the French edition. First published in Italian in 1906. B. N. Parlett. The Symmetric Eigenvalue Problem. Society for Industrial and Applied Mathematics, 1998. First published in 1980 by Prentice-Hall. P. A. Parrilo. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, California Institute of Technology, 2000. P. A. Parrilo. Semidefinite programming relaxations for semialgebraic prob- lems. Mathematical Programming Series B, 96:293–320, 2003. E. L. Peterson. Geometric programming. SIAM Review, 18(1):1–51, 1976. J. Pinter. Global Optimization in Action, volume 6 of Nonconvex Optimiza- tion and Its Applications. Kluwer, 1995. B. T. Polyak. Introduction to Optimization. Optimization Software, 1987. Translated from Russian. J. Ponstein. Seven kinds of convexity. SIAM Review, 9(1):115–119, 1967. A. Pr ́ekopa. Logarithmic concave measures with application to stochastic programming. Acta Scientiarum Mathematicarum, 32:301–315, 1971. A. Pr ́ekopa. On logarithmic concave measures and functions. Acta Scien- tiarum Mathematicarum, 34:335–343, 1973. A. Pr ́ekopa. Logarithmic concave measures and related topics. In M. A. H. Dempster, editor, Stochastic Programming, pages 63–82. Academic Press, 1980. J. G. Proakis. Digital Communications. McGraw-Hill, fourth edition, 2001. J. Peng, C. Roos, and T. Terlaky. Self-Regularity. A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, 2002. C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization. Algo- rithms and Complexity. Dover Publications, 1998. First published in 1982 by Prentice-Hall. A. L. Peressini, F. E. Sullivan, and J. J. Uhl. The Mathematics of Nonlinear Programming. Undergraduate Texts in Mathematics. Springer, 1988. F. Pukelsheim. Optimal Design of Experiments. Wiley & Sons, 1993. J. Renegar. A Mathematical View of Interior-Point Methods in Convex Op- timization. Society for Industrial and Applied Mathematics, 2001. R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970. 694 References R. T. 
Rockafellar. Conjugate Duality and Optimization. Society for Industrial and Applied Mathematics, 1989. First published in 1974. R. T. Rockafellar. Lagrange multipliers and optimality. SIAM Review, 35:183–283, 1993. L. Rudin, S. J. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268, 1992. J. B. Rosen. Pattern separation by convex programming. Journal of Mathe- matical Analysis and Applications, 10:123–134, 1965. S. M. Ross. An Introduction to Mathematical Finance: Options and Other Topics. Cambridge University Press, 1999. C. Roos, T. Terlaky, and J.-Ph. Vial. Theory and Algorithms for Linear Optimization. An Interior Point Approach. John Wiley & Sons, 1997. W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, 1976. A. W. Roberts and D. E. Varberg. Convex Functions. Academic Press, 1973. D. Ralph and S. J. Wright. Superlinear convergence of an interior-point method for monotone variational inequalities. In M. C. Ferris and J.-S. Pang, editors, Complementarity and Variational Problems: State of the Art, pages 345–385. Society for Industrial and Applied Mathematics, 1997. C. V. Rao, S. J. Wright, and J. B. Rawlings. Application of interior-point methods to model predictive control. Journal of Optimization Theory and Applications, 99(3):723–757, 1998. I. J. Schoenberg. Remarks to Maurice Fr ́echet’s article “Sur la d ́efinition axiomatique d’une classe d’espaces distanci ́es vectoriellement applicable sur l’espace de Hilbert”. Annals of Mathematics, 38(3):724–732, 1935. S. Schaible. Bibliography in fractional programming. Zeitschrift fu ̈r Opera- tions Research, 26:211–241, 1982. S. Schaible. Fractional programming. Zeitschrift fu ̈r Operations Research, 27:39–54, 1983. A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, 1986. L. L. Scharf. Statistical Signal Processing. Detection, Estimation, and Time Series Analysis. Addison Wesley, 1991. With C ́edric Demeure. G. Sigl, K. Doll, and F. M. Johannes. Analytical placement: A linear or quadratic objective function? In Proceedings of the 28th ACM/IEEE Design Automation Conference, pages 427–432, 1991. C. Scherer, P. Gahinet, and M. Chilali. Multiobjective output-feedback control via LMI optimization. IEEE Transactions on Automatic Control, 42(7):896–906, 1997. N. Sherwani. Algorithms for VLSI Design Automation. Kluwer Academic Publishers, third edition, 1999. N. Z. Shor. Minimization Methods for Non-differentiable Functions. Springer Series in Computational Mathematics. Springer, 1985. N. Z. Shor. The development of numerical methods for nonsmooth optimiza- tion in the USSR. In J. K. Lenstra, A. H. G. Rinnooy Kan, and A. Schri- jver, editors, History of Mathematical Programming. A Collection of Personal Reminiscences, pages 135–139. Centrum voor Wiskunde en Informatica and North-Holland, Amsterdam, 1991. G. Sonnevend. An ‘analytical centre’ for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In Lecture Notes in Control and Information Sciences, volume 84, pages 866–878. Springer, 1986. [Roc89] [Roc93] [ROF92] [Ros65] [Ros99] [RTV97] [Rud76] [RV73] [RW97] [RWR98] [Sch35] [Sch82] [Sch83] [Sch86] [Sch91] [SDJ91] [SGC97] [She99] [Sho85] [Sho91] [Son86] References [SPV99] [SRVK93] [SS01] [Str80] [Stu99] [SW70] [SW95] [TA77] [TB97] [Ter96] [Tib96] [Tik90] [Tit75] [TKE88] [Tod01] [Tod02] [TTT98] [TTT02] [Tuy98] [Uhl79] [Val64] 695 A. Seifi, K. Ponnambalam, and J. Vlach. 
A unified approach to statisti- cal design centering of integrated circuits with correlated parameters. IEEE Transactions on Circuits and Systems — I. Fundamental Theory and Appli- cations, 46(1):190–196, 1999. S. S. Sapatnekar, V. B. Rao, P. M. Vaidya, and S.-M. Kang. An exact solution to the transistor sizing problem for CMOS circuits using convex optimization. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 12(11):1621–1634, 1993. B. Sch ̈olkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001. G. Strang. Linear Algebra and its Applications. Academic Press, 1980. J. F. Sturm. Using SEDUMI 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11-12:625–653, 1999. Available from sedumi.mcmaster.ca. J. Stoer and C. Witzgall. Convexity and Optimization in Finite Dimensions I. Springer-Verlag, 1970. R. J. Stern and H. Wolkowicz. Indefinite trust region subproblems and non- symmetric eigenvalue perturbations. SIAM Journal on Optimization, 15:286– 313, 1995. A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-Posed Problems. V. H. Winston & Sons, 1977. Translated from Russian. L. N. Trefethen and D. Bau, III. Numerical Linear Algebra. Society for Industrial and Applied Mathematics, 1997. T. Terlaky, editor. Interior Point Methods of Mathematical Programming, volume 5 of Applied Optimization. Kluwer Academic Publishers, 1996. R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996. V. M. Tikhomorov. Convex analysis. In R. V. Gamkrelidze, editor, Analy- sis II: Convex Analysis and Approximation Theory, volume 14, pages 1–92. Springer, 1990. D. M. Titterington. Optimal design: Some geometrical aspects of D- optimality. Biometrika, 62(2):313–320, 1975. S. Tarasov, L. Khachiyan, and I. E`rlikh. The method of inscribed ellipsoids. Soviet Mathematics Doklady, 37(1):226–230, 1988. M. J. Todd. Semidefinite optimization. Acta Numerica, 10:515–560, 2001. M. J. Todd. The many facets of linear programming. Mathematical Program- ming Series B, 91:417–436, 2002. M. J. Todd, K. C. Toh, and R. H. Tu ̈tu ̈ncu ̈. On the Nesterov-Todd direction in semidefinite programming. SIAM Journal on Optimization, 8(3):769–796, 1998. K.C.Toh,R.H.Tu ̈tu ̈ncu ̈,andM.J.Todd.SDPT3.AMatlabsoft- ware for semidefinite-quadratic-linear programming, 2002. Available from www.math.nus.edu.sg/~mattohkc/sdpt3.html. H. Tuy. Convex Analysis and Global Optimization, volume 22 of Nonconvex Optimization and Its Applications. Kluwer, 1998. F. Uhlig. A recurring theorem about pairs of quadratic forms and extensions. A survey. Linear Algebra and Its Applications, 25:219–237, 1979. F. A. Valentine. Convex Sets. McGraw-Hill, 1964. 696 References G. N. Vanderplaats. Numerical Optimization Techniques for Engineering Design. McGraw-Hill, 1984. R. J. Vanderbei. Linear Programming: Foundations and Extensions. Kluwer, 1996. R. J. Vanderbei. LOQO User’s Manual, 1997. Available from www.orfe.princeton.edu/~rvdb. V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, second edition, 2000. S. A. Vavasis. Nonlinear Optimization: Complexity Issues. Oxford University Press, 1991. L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, pages 49–95, 1995. J. von Neumann. Discussion of a maximum problem. In A. H. Taub, editor, John von Neumann. Collected Works, volume VI, pages 89–95. Pergamon Press, 1963. 
Unpublished working paper from 1947. J. von Neumann. A model of general economic equilibrium. Review of Economic Studies, 13(1):1–9, 1945-46. J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, third edition, 1953. First published in 1944. J. van Tiel. Convex Analysis. An Introductory Text. John Wiley & Sons, 1984. A. Weber. Theory of the Location of Industries. Russell & Russell, 1971. Translated from German by C. J. Friedrich. First published in 1929. R. Webster. Convexity. Oxford University Press, 1994. P. Whittle. Optimization under Constraints. John Wiley & Sons, 1971. H. Wolkowicz. Some applications of optimization in matrix theory. Linear Algebra and Its Applications, 40:101–118, 1981. S. J. Wright. Primal-Dual Interior-Point Methods. Society for Industrial and Applied Mathematics, 1997. H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors. Handbook of Semidefinite Programming. Kluwer Academic Publishers, 2000. X. Xu, P. Hung, and Y. Ye. A simplified homogeneous and self-dual linear programming algorithm and its implementation. Annals of Operations Research, 62:151–172, 1996. Y. Ye. Interior Point Algorithms. Theory and Analysis. John Wiley & Sons, 1997. Y. Ye. Approximating quadratic programming with bound and quadratic constraints. Mathematical Programming, 84:219–226, 1999. Y. Ye, M. J. Todd, and S. Mizuno. An O(√nL)-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research, 19:53–67, 1994. C. Zener. Engineering Design by Geometric Programming. John Wiley & Sons, 1971. Y. Zhang. On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming. SIAM Journal on Optimization, 8(2):365–386, 1998. [Van84] [Van96] [Van97] [Vap00] [Vav91] [VB95] [vN63] [vN46] [vNM53] [vT84] [Web71] [Web94] [Whi71] [Wol81] [Wri97] [WSV00] [XHY96] [Ye97] [Ye99] [YTM94] [Zen71] [Zha98]

Notation

Some specific sets

  R             Real numbers.
  Rn            Real n-vectors (n × 1 matrices).
  Rm×n          Real m × n matrices.
  R+, R++       Nonnegative, positive real numbers.
  C             Complex numbers.
  Cn            Complex n-vectors.
  Cm×n          Complex m × n matrices.
  Z             Integers.
  Z+            Nonnegative integers.
  Sn            Symmetric n × n matrices.
  Sn+, Sn++     Symmetric positive semidefinite, positive definite, n × n matrices.

Vectors and matrices

  1                       Vector with all components one.
  ei                      ith standard basis vector.
  I                       Identity matrix.
  XT                      Transpose of matrix X.
  XH                      Hermitian (complex conjugate) transpose of matrix X.
  tr X                    Trace of matrix X.
  λi(X)                   ith largest eigenvalue of symmetric matrix X.
  λmax(X), λmin(X)        Maximum, minimum eigenvalue of symmetric matrix X.
  σi(X)                   ith largest singular value of matrix X.
  σmax(X), σmin(X)        Maximum, minimum singular value of matrix X.
  X†                      Moore-Penrose or pseudo-inverse of matrix X.
  x ⊥ y                   Vectors x and y are orthogonal: xTy = 0.
  V⊥                      Orthogonal complement of subspace V.
  diag(x)                 Diagonal matrix with diagonal entries x1, . . . , xn.
  diag(X, Y, . . .)       Block diagonal matrix with diagonal blocks X, Y, . . ..
  rank A                  Rank of matrix A.
  R(A)                    Range of matrix A.
  N(A)                    Nullspace of matrix A.

Norms and distances

  ∥ · ∥         A norm.
  ∥ · ∥∗        Dual of norm ∥ · ∥.
  ∥x∥2          Euclidean (or l2-) norm of vector x.
  ∥x∥1          l1-norm of vector x.
  ∥x∥∞          l∞-norm of vector x.
  ∥X∥2          Spectral norm (maximum singular value) of matrix X.
  B(c, r)       Ball with center c and radius r.
  dist(A, B)    Distance between sets (or points) A and B.

Generalized inequalities

  x ≼ y         Componentwise inequality between vectors x and y.
  x ≺ y         Strict componentwise inequality between vectors x and y.
  X ≼ Y         Matrix inequality between symmetric matrices X and Y.
  X ≺ Y         Strict matrix inequality between symmetric matrices X and Y.
  x ≼K y        Generalized inequality induced by proper cone K.
  x ≺K y        Strict generalized inequality induced by proper cone K.
  x ≼K∗ y       Dual generalized inequality.
  x ≺K∗ y       Dual strict generalized inequality.

Topology and convex analysis

  card C        Cardinality of set C.
  int C         Interior of set C.
  relint C      Relative interior of set C.
  cl C          Closure of set C.
  bd C          Boundary of set C: bd C = cl C \ int C.
  conv C        Convex hull of set C.
  aff C         Affine hull of set C.
  K∗            Dual cone associated with K.
  IC            Indicator function of set C.
  SC            Support function of set C.
  f∗            Conjugate function of f.

Probability

  E X           Expected value of random vector X.
  prob S        Probability of event S.
  var X         Variance of scalar random variable X.
  N(c, Σ)       Gaussian distribution with mean c, covariance (matrix) Σ.
  Φ             Cumulative distribution function of N(0, 1) random variable.

Functions and derivatives

  f : A → B     f is a function on the set dom f ⊆ A into the set B.
  dom f         Domain of function f.
  epi f         Epigraph of function f.
  ∇f            Gradient of function f.
  ∇2f           Hessian of function f.
  Df            Derivative (Jacobian) matrix of function f.
669 banded matrix, 670 sparse matrix, 670 circuit design, 2, 17, 432, 446 cl (closure), 638 classification, 422 Bayesian, 428 linear, 423 logistic, 427 nonlinear, 429 polynomial, 430 quadratic, 429 support vector, 425 closed function, 458, 529, 577, 639 set, 637 sublevel set assumption, 457, 529 closure, 638 combination affine, 22 conic, 25 convex, 24 communication channel capacity, 207 dual, 279 power allocation, 245 complementary slackness, 242 generalized inequalities, 267 complex norm approximation, 197 semidefinite program, 202 complexity barrier method, 585 generalized inequalities, 605 linear equations, 662 second-order cone program, 606 semidefinite program, 608 linear program, 616 second-order cone program, 601 semidefinite program, 602, 618 basis, 405 dictionary, 333 dual, 407 functions, 326 Lagrange, 326 over-complete, 333 pursuit, 310, 333, 580 well conditioned, 407 Bayesian classification, 428 detector, 367 estimation, 357 bd (boundary), 50, 638 best linear unbiased estimator, 176 binary hypothesis testing, 370 bisection method, 249, 430 quasiconvex optimization, 146 BLAS, 684 block elimination, 546, 554, 672 LU factorization, 673 matrix inverse, 650 separable, 552 tridiagonal, 553 Boolean linear program Lagrangian relaxation, 276 LP relaxation, 194 boundary, 638 bounding box, 433 bounds Chebyshev, 150, 374 Chernoff, 379 convex function values, 338 correlation coefficients, 408 expected values, 361 for global optimization, 11 probabilities, 361 box constraints, 129 cantilever beam, 163, 199 capacity of communication channel, 207 card (cardinality), 98 l1-norm heuristic, 310 Cauchy-Schwartz inequality, 633 ceiling, 96 center analytic, 419 Chebyshev, 148, 416 maximum volume ellipsoid, 418 central path, 564 duality, 565 generalized inequalities, 598 KKT conditions, 567 predictor-corrector, 625 Index componentwise inequality, 32, 43 composition, 83 affine function, 508, 642, 645 quasiconvexity, 102 self-concordance, 499 concave function, 67 maximization problem, 137 cond (condition number), 649 condition number, 203, 407, 649 ellipsoid, 461 gradient method, 473 Newton’s method, 495 set, 461 conditional probability, 42, 357, 428 cone, 25 barrier, 66 dual, 51 Euclidean, 449 hyperbolic, 39 in R2, 64 lexicographic, 64 Lorentz, 31 moments, 66 monotone nonnegative, 64 normal, 66 pointed, 43 positive semidefinite, 34, 64 program, 168 dual, 266 proper, 43 recession, 66 second-order, 31, 449 separation, 66 solid, 43 conic combination, 25 form problem, 168, 201 hull, 25 conjugate and Lagrange dual, 221 function, 90 self-concordance, 517 logarithm, 607 constraint active, 128 box, 129 explicit, 134 hyperbolic, 197 implicit, 134 kinematic, 247 qualifications, 226 redundant, 128 set, 127 consumer preference, 339 continuous function, 639 703 control model predictive, 17 optimal, 194, 303, 552 conv (convex hull), 24 convergence infeasible Newton method, 536 linear, 467 Newton method, 529 quadratic, 489, 539 convex combination, 24 cone, 25 equality constraints, 191 function, 67 bounded, 114 bounding values, 338 first-order conditions, 69 interpolation, 337 inverse, 114 level set, 113 over concave function, 103 product, 119 geometric program, 162 hull, 24 function, 119 minimizing over, 207 optimization, 2, 7, 136 abstract form, 137 set, 23 image under linear-fractional func- tion, 62 separating hyperplane, 403 separation from affine set, 49 convex-concave, 238 fractional problem, 191 function, 115 saddle-point property, 281 game, 540, 542, 560 barrier method, 627 bounded inverse derivative 
condition, 559 Newton method, 540 Newton step, 559 convexity first-order condition, 69 matrix, 110 midpoint, 60 second-order conditions, 71 strong, 459, 558 coordinate projection, 38 copositive matrix, 65, 202 correlation coefficient, 406 bounding, 408 cost, 127 random, 154 risk-sensitive, 155 704 Index covariance estimation, 355 estimation error, 384 incomplete information, 171 covering ellipsoid, 275 cumulant generating function, 106 cumulative distribution, 107 log-concavity, 124 curve minimum length piecewise-linear, 547 optimal trade-off, 182 piecewise-arc, 453 D-optimal experiment design, 387 damped Newton step, 489 data fitting, 2, 291 de-noising, 310 deadzone-linear penalty function, 295, 434 dual, 345 decomposition eigenvalue, 646 generalized eigenvalue, 647 orthogonal, 646 singular value, 648 deconvolution, 307 degenerate ellipsoid, 30 density function log-concave, 104, 124, 352 depth, 416 derivative, 640 chain rule, 642 directional, 642 pricing, 264 second, 643 descent direction, 463 feasible, 527 method, 463 gradient, 466 steepest, 475 design circuit, 2, 17, 432, 446 detector, 364 of experiments, 384 optimal, 292, 303 detector Bayes, 367 design, 364 MAP, 369 minimax, 367 ML, 369 randomized, 365 robust, 372 determinant, 73 derivative, 641 device sizing, 2 diagonal plus low rank, 511, 678 diagonal scaling, 163 dictionary, 333 diet problem, 148 differentiable function, 640 directional derivative, 642 Dirichlet density, 124 discrete memoryless channel, 207 discrimination, 422 dist (distance), 46, 634 distance, 46, 634 between polyhedra, 154, 403 between sets, 402 constraint, 443 maximum probability, 118 ratio function, 97 to farthest point in set, 81 to set, 88, 397 distribution amplitude, 294 Gaussian, 104 Laplacian, 352 maximum entropy, 362 Poisson, 353 Wishart, 105 dom (domain), 639 domain function, 639 problem, 127 dual basis, 407 cone, 51 logarithm, 607 properties, 64 feasibility equations, 521 feasible, 216 function, 216 geometric interpretation, 232 generalized inequality, 53 characterization of minimal points, 54 least-squares, 218 logarithm, 607 Newton method, 557 norm, 93, 637 problem, 223 residual, 532 spectral norm, 637 stopping criterion, 242 variable, 215 duality, 215 central path, 565 game interpretation, 239 gap, 241 optimal, 226 surrogate, 612 multicriterion interpretation, 236 Index price interpretation, 240 saddle-point interpretation, 237 strong, 226 weak, 225 dynamic activity planning, 149 E-optimal experiment design, 387 eccentricity, 461 ei (ith unit vector), 33 eigenvalue decomposition, 646 generalized, 647 interlacing theorem, 122 maximum, 82, 203 optimization, 203 spread, 203 sum of k largest, 118 electronic device sizing, 2 elementary symmetric functions, 122 elimination banded matrix, 675 block, 546 constraints, 132 equality constraints, 523, 542 variables, 672 ellipsoid, 29, 39, 635 condition number, 461 covering, 275 degenerate, 30 intersection, 262 L ̈owner-John, 410 maximum volume, 414 minimum volume, 410 separation, 197 via analytic center, 420 volume, 407 embedded optimization, 3 entropy, 72, 90, 117 maximization, 537, 558, 560, 562 dual function, 222 self-concordance, 497 epigraph, 75 problem, 134 equality constrained minimization, 521 constraint, 127 convex, 191 elimination, 132, 523, 542 equations KKT, 243 normal, 458 equivalent norms, 636 problems, 130 estimation, 292 Bayesian, 357 covariance, 355 705 least-squares, 177 linear measurements, 352 maximum a posteriori, 357 noise free, 303 nonparametric distribution, 359 statistical, 351 Euclidean 
ball, 29 distance matrix, 65 problems, 405 norm, 633 projection via pseudo-inverse, 649 exact line search, 464 exchange rate, 184 expanded set, 61 experiment design, 384 A-optimal, 387 D-optimal, 387 dual, 276 E-optimal, 387 explanatory variables, 353 explicit constraint, 134 exponential, 71 distribution, 105 matrix, 110 extended-value extension, 68 extrapolation, 333 extremal volume ellipsoids, 410 facility location problem, 432 factor-solve method, 666 factorization block LU, 673 Cholesky, 118, 546, 669 LDLT, 671 LU, 668 QR, 682 symbolic, 511 Farkas lemma, 263 fastest mixing Markov chain, 173 dual, 286 feasibility methods, 579 problem, 128 feasible, 127 descent direction, 527 dual, 216 point, 127 problem, 127 set, 127 Fenchel’s inequality, 94 first-order approximation, 640 condition convexity, 69 monotonicity, 109 706 Index quasiconvexity, 99, 121 fitting minimum norm, 331 polynomial, 331 spline, 331 floor planning, 438 geometric program, 444 flop count, 662 flow optimal, 193, 550, 619 forward substitution, 665 fractional program generalized, 205 Frobenius norm, 634 scaling, 163, 478 fuel use map, 194, 213 function affine, 36 barrier, 563 closed, 458, 529, 577, 639 composition, 83 concave, 67 conjugate, 90, 221 continuous, 639 convex, 67 convex hull, 119 convex-concave, 115 derivative, 640 differentiable, 640 domain, 639 dual, 216 elementary symmetric, 122 extended-value extension, 68 first-order approximation, 640 fitting, 324 gradient, 641 Huber, 345 interpolation, 324, 329 Lagrange dual, 216 Lagrangian, 215 Legendre transform, 95 likelihood, 351 linear-fractional, 41 log barrier, 563 log-concave, 104 log-convex, 104 matrix monotone, 108 monomial, 160 monotone, 115 notation, 14, 639 objective, 127 penalty, 294 perspective, 39, 89, 117 piecewise-linear, 119, 326 pointwise maximum, 80 posynomial, 160 projection, 397 projective, 41 quasiconvex, 95 quasilinear, 122 self-concordant, 497 separable, 249 support, 63 unimodal, 95 utility, 115, 211, 339 game, 238 advantage of going second, 240 barrier method, 627 bounded inverse condition, 559 continuous, 239 convex-concave, 540, 542, 560 Newton step, 559 duality, 231 duality interpretation, 239 matrix, 230 gamma function, 104 log-convexity, 123 Gauss-Newton method, 520 Gaussian distribution log-concavity, 104, 123 generalized eigenvalue decomposition, 647 minimization, 204 quasiconvexity, 102 fractional program, 205 geometric program, 200 inequality, 43 barrier method, 596, 601 central path, 598 dual, 53, 264 log barrier, 598 logarithm, 597 optimization problem, 167 theorem of alternatives, 269 linear-fractional program, 152 logarithm, 597 dual, 607 positive semidefinite cone, 598 second-order cone, 597 posynomial, 200 geometric mean, 73, 75 conjugate, 120 maximizing, 198 program, 160, 199 barrier method, 573 convex form, 162 dual, 256 floor planning, 444 sensitivity analysis, 284 unconstrained, 254, 458 Index global optimization, 10 bounds, 11 GP, see geometric program gradient, 641 conjugate, 121 log barrier, 564 method, 466 and condition number, 473 projected, 557 Gram matrix, 405 halfspace, 27 Voronoi description, 60 Hankel matrix, 65, 66, 170, 204 harmonic mean, 116, 198 log-concavity, 122 Hessian, 71, 643 conjugate, 121 Lipschitz continuity, 488 log barrier, 564 sparse, 511 H ̈older’s inequality, 78 Huber penalty function, 190, 299, 345 hull affine, 23 conic, 25 convex, 24 hybrid vehicle, 212 hyperbolic cone, 39 constraint, 197 set, 61 hyperplane, 27 separating, 46, 195, 423 supporting, 50 hypothesis testing, 364, 370 IID noise, 352 
implementation
    equality constrained methods, 542
    interior-point methods, 615
    line search, 508
    Newton's method, 509
    unconstrained methods, 508
implicit constraint, 134
    Lagrange dual, 257
indicator function, 68, 92
    linear approximation, 218
    projection and separation, 401
induced norm, 636
inequality
    arithmetic-geometric mean, 75, 78
    Cauchy-Schwarz, 633
    Chebyshev, 150, 154
    componentwise, 32, 43
    constraint, 127
    Fenchel's, 94
    form linear program, 147
        dual, 225
    generalized, 43
    Hölder's, 78
    information, 115
    Jensen's, 77
    matrix, 43, 647
    triangle, 634
    Young's, 94, 120
inexact line search, 464
infeasibility certificate, 259
infeasible
    barrier method, 571
    Newton method, 531, 534, 558
        convergence analysis, 536
        phase I, 582
    problem, 127
        weak duality, 273
infimum, 638
information inequality, 115
inner product, 633
input design, 307
interior, 637
    relative, 23
interior-point method, 561
    implementation, 615
    primal-dual method, 609
internal rate of return, 97
interpolation, 324, 329
    least-norm, 333
    with convex function, 337
intersection
    ellipsoids, 262
    sets, 36
int (interior), 637
inverse
    convex function, 114
    linear-fractional function, 62
investment
    log-optimal, 559
    return, 208
IRR (internal rate of return), 97
Jacobian, 640
Jensen's inequality, 77
    quasiconvex function, 98
Karush-Kuhn-Tucker, see KKT
kinematic constraints, 247
KKT
    conditions, 243
        central path, 567
        generalized inequalities, 267
        mechanics interpretation, 246
        modified, 577
        supporting hyperplane interpretation, 283
    matrix, 522
        bounded inverse assumption, 530
        nonsingularity, 523, 547
    system, 677
        nonsingularity, 557
        solving, 542
Kullback-Leibler divergence, 90, 115, 362
l1-norm
    approximation, 294, 353, 514
    barrier method, 617
    regularization, 308
    steepest descent method, 477
Lagrange
    basis, 326
    dual function, 216
    dual problem, 223
    multiplier, 215
        contact force interpretation, 247
        price interpretation, 253
Lagrangian, 215
    relaxation, 276, 654
LAPACK, 684
Laplace transform, 106
Laplacian distribution, 352
LDLT factorization, 671
least-norm
    interpolation, 333
    problem, 131, 302
least-penalty problem, 304
    statistical interpretation, 359
least-squares, 4, 131, 153, 177, 293, 304, 458
    convex function fit, 338
    cost as function of weights, 81
    dual function, 218
    regularized, 184, 205
    robust, 190, 300, 323
    strong duality, 227
Legendre transform, 95
length, 96, 634
level set
    convex function, 113
lexicographic cone, 64
likelihood function, 351
likelihood ratio test, 371
line, 21
    search, 464, 514
        backtracking, 464
        exact, 464
        implementation, 508
        pre-computation, 518
        primal-dual interior-point method, 612
    segment, 21
linear
    classification, 423
    convergence, 467
    discrimination, 423
    equality constraint
        eliminating, 132
    equations
        banded, 669
        block elimination, 672
        easy, 664
        factor-solve method, 666
        KKT system, 677
        LAPACK, 684
        least-squares, 304
        low rank update, 680
        lower triangular, 664
        multiple righthand sides, 667
        Newton system, 510
        orthogonal, 666
        Schur complement, 672
        software, 684
        solution set, 22
        solving, 661
        sparse solution, 304
        symmetric positive definite, 669
        underdetermined, 681
        upper triangular, 665
    estimation, 292
        best unbiased, 176
    facility location, 432
    inequalities
        alternative, 261
        analytic center, 458
        log-barrier, 499
        solution set, 27, 31
        theorem of alternatives, 50, 54
    matrix inequality, 38, 76, 82
        alternative, 270
        analytic center, 422, 459, 508, 553
        multiple, 169
        strong alternatives, 287
    program, 1, 6, 146
        barrier method, 571, 574
        Boolean, 194, 276
        central path, 565
        dual, 224, 274
        dual function, 219
        inequality form, 147
        primal-dual interior-point method, 613
        random constraints, 157
        random cost, 154
        relaxation of Boolean, 194
        robust, 157, 193, 278
        standard form, 146
        strong duality, 227, 280
    separation
        ellipsoids, 197
linear-fractional
    function, 41
        composition, 102
        image of convex set, 62
        inverse, 62
        quasiconvexity, 97
    program, 151
        generalized, 152
linearized optimality condition, 485
LMI, see linear matrix inequality
locally optimal, 9, 128, 138
location, 432
log barrier, 563
    generalized inequalities, 597, 598
    gradient and Hessian, 564
    linear inequalities, 499
    linear matrix inequality, 459
    penalty function, 295
log-Chebyshev approximation, 344, 629
log-concave
    density, 104, 352
    function, 104
log-convex function, 104
log-convexity
    Perron-Frobenius eigenvalue, 200
    second-order conditions, 105
log-determinant, 499
    function, 73
    gradient, 641
    Hessian, 644
log-likelihood function, 352
log-optimal investment, 209, 559
log-sum-exp function, 72, 93
    gradient, 642
logarithm, 71
    dual, 607
    generalized inequality, 597
    self-concordance, 497
logistic
    classification, 427
    function, 122
    model, 210
    regression, 354
Lorentz cone, 31
low rank update, 680
lower triangular matrix, 664
Löwner-John ellipsoid, 410
LP, see linear program
lp-norm, 635
    dual, 637
LU factorization, 668
manufacturing yield, 211
MAP, see maximum a posteriori probability
Markov chain
    equilibrium distribution, 285
    estimation, 394
    fastest mixing, 173
        dual, 286
Markowitz portfolio optimization, 155
matrix
    arrow, 670
    banded, 510, 546, 553, 669, 675
    block inverse, 650
    completion problem, 204
    condition number, 649
    convexity, 110, 112
    copositive, 65, 202
    detection probabilities, 366
    diagonal plus low rank, 511, 678
    Euclidean distance, 65
    exponential, 110
    factorization, 666
    fractional function, 76, 82, 89
    fractional minimization, 198
    game, 230
    Gram, 405
    Hankel, 65, 66, 170, 204
    Hessian, 643
    inequality, 43, 647
    inverse
        matrix convexity, 124
    inversion lemma, 515, 678
    KKT, 522
        nonsingularity, 557
    low rank update, 680
    minimal upper bound, 180
    monotone function, 108
    multiplication, 663
    node incidence, 551
    nonnegative, 165
    nonnegative definite, 647
    norm, 82
        approximation, 194
        minimization, 169
    orthogonal, 666
    P0, 202
    permutation, 666
    positive definite, 647
    positive semidefinite, 647
    power, 110, 112
    pseudo-inverse, 649
    quadratic function, 111
    sparse, 511
    square-root, 647
max function, 72
    conjugate, 120
max-min
    inequality, 238
    property, 115, 237
max-row-sum norm, 194, 636
maximal element, 45
maximization problem, 129
    concave, 137
maximum
    a posteriori probability estimation, 357
    determinant matrix completion, 204
    eigenvalue, 82, 203
    element, 45
    entropy, 254, 558
        distribution, 362
        dual, 248
        strong duality, 228
    likelihood
        detector, 369
        estimation, 351
    probability distance, 118
    singular value, 82, 649
        dual, 637
        minimization, 169
        norm, 636
    volume
        ellipsoid, 414
        rectangle, 449, 629
mean
    harmonic, 116
method
    analytic centers, 626
    barrier, 568
    bisection, 146
    descent, 463
    factor-solve, 666
    feasibility, 579
    Gauss-Newton, 520
    infeasible start Newton, 534
    interior-point, 561
    local optimization, 9
    Newton's, 484
    phase I, 579
    primal-dual, 609
    randomized, 11
    sequential unconstrained minimization, 569
    steepest descent, 475
midpoint convexity, 60
minimal
    element, 45
        via dual inequalities, 54
    surface, 159
minimax
    angle fitting, 448
    approximation, 293
    detector, 367
minimization
    equality constrained, 521
minimizing sequence, 457
minimum
    element, 45
        via dual inequalities, 54
    fuel optimal control, 194
    length piecewise-linear curve, 547
    norm fitting, 331
    singular value, 649
    variance linear unbiased estimator, 176
    volume ellipsoid
        dual, 222, 228
Minkowski function, 119
mixed strategy matrix game, 230
ML, see maximum likelihood
model predictive control, 17
moment, 66
    bounds, 170
    function
        log-concavity, 123
    generating function, 106
    multidimensional, 204
monomial, 160
    approximation, 199
monotone
    mapping, 115
    nonnegative cone, 64
    vector function, 108
monotonicity
    first-order condition, 109
Moore-Penrose inverse, 649
Motzkin's theorem, 447
multicriterion
    detector design, 368
    optimization, 181
    problem, 181
        scalarization, 183
multidimensional moments, 204
multiplier, 215
mutual information, 207
N (nullspace), 646
network
    optimal flow, 193, 550
    rate optimization, 619, 628
Newton
    decrement, 486, 515, 527
    infeasible start method, 531
    method, 484
        affine invariance, 494, 496
        approximate, 519
        convergence analysis, 529, 536
        convex-concave game, 540
        dual, 557
        equality constraints, 525, 528
        implementing, 509
        infeasible, 558
        self-concordance, 531
        trust region, 515
    step
        affine invariance, 527
        equality constraints, 526
        primal-dual, 532
    system, 510
Neyman-Pearson lemma, 371
node incidence matrix, 551
nonconvex
    optimization, 9
    quadratic problem
        strong duality, 229
nonlinear
    classification, 429
    facility location problem, 434
    optimization, 9
    programming, 9
nonnegative
    definite matrix, 647
    matrix, 165
    orthant, 32, 43
        minimization, 142
    polynomial, 44, 65
nonparametric distribution estimation, 359
norm, 72, 93, 634
    approximation, 291
        by quadratic, 636
        dual, 254
        dual function, 221
        weighted, 293
    ball, 30
    cone, 31
        dual, 52
    conjugate, 93
    dual, 637
    equivalence, 636
    Euclidean, 633
    Frobenius, 634
    induced, 636
    matrix, 82
    max-row-sum, 636
    maximum singular value, 636
    operator, 636
    quadratic, 635
        approximation, 413
    spectral, 636
    sum-absolute-value, 635
normal
    cone, 66
    distribution
        log-concavity, 104
    equations, 458, 510
    vector, 27
normalized entropy, 90
nuclear norm, 637
nullspace, 646
objective function, 127
open set, 637
operator norm, 636
optimal
    activity levels, 195
    allocation, 523
    consumption, 208
    control, 194, 303, 552
        hybrid vehicle, 212
        minimum fuel, 194
    design, 292, 303
    detector design, 364
    duality gap, 226
    input design, 307
    Lagrange multipliers, 223
    locally, 9
    network flow, 550
    Pareto, 57
    point, 128
        local, 138
    resource allocation, 559
    set, 128
    trade-off analysis, 182
    value, 127, 175
        bound via dual function, 216
optimality conditions, 241
    generalized inequalities, 266
    KKT, 243
    linearized, 485, 526
optimization
    convex, 7
    embedded, 3
    global, 10
    local, 9
    multicriterion, 181
    nonlinear, 9
    over polynomials, 203
    problem, 127
        epigraph form, 134
        equivalent, 130
        feasibility, 128
        feasible, 127
        generalized inequalities, 167
        maximization, 129
        optimal value, 127
        perturbation analysis, 249, 250
        sensitivity analysis, 250
        standard form, 127
        symmetry, 189
    recourse, 211, 519
    robust, 208
    two-stage, 211, 519
    variable, 127
    vector objective, 174
optimizing over some variables, 133
option pricing, 285
oracle problem description, 136
ordering
    lexicographic, 64
orthogonal
    complement, 27
    decomposition, 646
    matrix, 666
outliers, 298
outward normal vector, 27
over-complete basis, 333
parameter problem description, 136
parametric distribution estimation, 351
Pareto optimal, 57, 177, 206
partial
    ordering via cone, 43
    sum, 62
partitioning problem, 219, 629
    dual, 226
    dual function, 220
    eigenvalue bound, 220
    semidefinite program relaxation, 285
pattern recognition, 422
penalty function approximation, 294
    deadzone-linear, 295
    Huber, 299
    log barrier, 295
    robust, 299, 343
    statistical interpretation, 353
permutation matrix, 666
Perron-Frobenius eigenvalue, 165
    log-convexity, 200
perspective, 39, 89, 117
    conjugate, 120
    function, 207
    image of polyhedron, 62
perturbed optimization problem, 250
phase I method, 579
    complexity, 592
    infeasible start, 582
    sum of infeasibilities, 580
piecewise
    arc, 453
    polynomial, 327
piecewise-linear
    curve
        minimum length, 547
    function, 80, 119, 326
        conjugate, 120
    minimization, 150, 562
        dual, 275
pin-hole camera, 39
placement, 432
    quadratic, 434
point
    minimal, 45
    minimum, 45
pointed cone, 43
pointwise maximum, 80
Poisson distribution, 353
polyhedral uncertainty
    robust linear program, 278
polyhedron, 31, 38
    Chebyshev center, 148, 417
    convex hull description, 34
    distance between, 154, 403
    Euclidean projection on, 398
    image under perspective, 62
    volume, 108
    Voronoi description, 60
polynomial
    classification, 430
    fitting, 326, 331
    interpolation, 326
    log-concavity, 123
    nonnegative, 44, 65, 203
    piecewise, 327
    positive semidefinite, 203
    sum of squares, 203
    trigonometric, 116, 326
polytope, 31
portfolio
    bounding risk, 171
    diversification constraint, 279
    log-optimal, 209
    loss risk constraints, 158
    optimization, 2, 155
    risk-return trade-off, 185
positive
    definite matrix, 647
    semidefinite
        cone, 34, 36, 64
        matrix, 647
        matrix completion, 204
        polynomial, 203
posynomial, 160
    generalized, 200
    two-term, 200
power allocation, 196
    broadcast channel, 210
    communication channel, 210
    hybrid vehicle, 212
power function, 71
    conjugate, 120
    log-concavity, 104
pre-computation for line search, 518
predictor-corrector method, 625
preference relation, 340
present value, 97
price, 57
    arbitrage-free, 263
    interpretation of duality, 240
    option, 285
    shadow, 241
primal residual, 532
primal-dual method, 609
    geometric program, 613
    linear program, 613
    Newton step, 532
    search direction, 609
probability
    conditional, 42
    distribution
        convex sets, 62
        maximum distance, 118
    simplex, 33
problem
    conic form, 168
    control, 303
    convex, 136
    data, 136
    dual, 223
    equality constrained, 521
    estimation, 292
    Euclidean distance and angle, 405
    floor planning, 438
    Lagrange dual, 223
    least-norm, 302
    least-penalty, 304
    location, 432
    matrix completion, 204
    maximization, 129
    multicriterion, 181
    norm approximation, 291
    optimal design, 292, 303
    partitioning, 629
    placement, 432
    quasiconvex, 137
    regression, 291
    regressor selection, 310
    unbounded below, 128
    unconstrained, 457
    unconstrained quadratic, 458
product
    convex functions, 119
    inner, 633
production frontier, 57
program
    geometric, 160
    linear, 146
    quadratic, 152
    quadratically constrained quadratic, 152
    semidefinite, 168, 201
projected gradient method, 557
projection
    coordinate, 38
    Euclidean, 649
    function, 397
    indicator and support function, 401
    on affine set, 304
    on set, 397
    on subspace, 292
projective function, 41
proper cone, 43
PSD (positive semidefinite), 203
pseudo-inverse, 88, 141, 153, 177, 185, 305, 649
QCQP (quadratically constrained quadratic program), 152
QP (quadratic program), 152
QR factorization, 682
quadratic
    convergence, 489, 539
    discrimination, 429
    function
        convexity, 71
        gradient, 641
        Hessian, 644
        minimizing, 140, 514
    inequalities
        analytic center, 519
    inequality solution set, 61
    matrix function, 111
    minimization, 458, 649
        equality constraints, 522
    norm, 635
        approximation, 636
    norm approximation, 413
    optimization, 152, 196
    placement, 434
    problem
        strong duality, 229
    program, 152
        primal-dual interior-point method, 630
        robust, 198
    smoothing, 312
quadratic-over-linear function, 72, 76
    minimizing, 514
quadratically constrained quadratic program, 152, 196
    strong duality, 227
quartile, 62, 117
quasi-Newton methods, 496
quasiconvex
    function, 95
        convex representation, 103
        first-order conditions, 99, 121
        Jensen's inequality, 98
        second-order conditions, 101
    optimization, 137
        via convex feasibility, 145
quasilinear function, 122
R (range), 645
R (reals), 14
R+ (nonnegative reals), 14
R++ (positive reals), 14
Rn+ (nonnegative orthant), 32
randomized
    algorithm, 11
    detector, 365, 395
    strategy, 230
range, 645
rank, 645
    quasiconcavity, 98
ratio of distances, 97
recession cone, 66
reconstruction, 310
recourse, 211, 519
rectangle, 61
    maximum volume, 449, 629
redundant constraint, 128
regression, 153, 291
    logistic, 354
    robust, 299
regressor, 291
    selection, 310, 334
regularization, 5
    l1, 308
    smoothing, 307
    Tikhonov, 306
regularized
    approximation, 305
    least-squares, 184, 205
relative
    entropy, 90
    interior, 23
    positioning constraint, 439
residual, 291
    amplitude distribution, 296
    dual, 532
    primal, 532
resource allocation, 559
restricted set, 61
Riccati recursion, 553
Riesz-Fejér theorem, 348
risk-return trade-off, 185
risk-sensitive cost, 155
robust
    approximation, 318
    Chebyshev approximation, 323
    detector, 372
    least-squares, 190, 300, 323
    linear discrimination, 424
    linear program, 157, 193, 278
    optimization, 208
    penalty function, 299, 343
    quadratic program, 198
    regression, 299
Sn (symmetric n × n matrices), 34
    standard inner product, 633
Sn+ (positive semidefinite n × n matrices), 34
saddle-point, 115
    convex-concave function, 281
    duality interpretation, 237
    via Newton's method, 627
scalarization, 178, 206, 306, 368
    duality interpretation, 236
    multicriterion problem, 183
scaling, 38
Schur complement, 76, 88, 124, 133, 546, 650, 672
SDP, see semidefinite program
search direction, 463
    Newton, 484, 525
    primal-dual, 609
second derivative, 643
    chain rule, 645
second-order
    conditions
        convexity, 71
        log-convexity, 105
        quasiconvexity, 101
    cone, 31, 449
        generalized logarithm, 597
    cone program, 156
        barrier method, 601
        central path, 599
        complexity, 606
        dual, 287
segment, 21
self-concordance, 496, 516
    barrier method complexity, 585
    composition, 499
    conjugate function, 517
    Newton method with equality constraints, 531
semidefinite program, 168, 201
    barrier method, 602, 618
    central path, 600
    complex, 202
    complexity, 608
    dual, 265
    relaxation partitioning problem, 285
sensitivity analysis, 250
    geometric program, 284
separable
    block, 552
    function, 249
separating
    affine and convex set, 49
    cones, 66
    convex sets, 403, 422
    hyperplane, 46, 195, 423
        converse theorem, 50
        duality proof, 235
        polyhedra, 278
        theorem proof, 46
    point and convex set, 49, 399
    point and polyhedron, 401
    sphere, 195
    strictly, 49
set
    affine, 21
    boundary, 638
    closed, 637
    closure, 638
    condition number, 461
    convex, 23
    distance between, 402
    distance to, 397
    eccentricity, 461
    expanded, 61
    hyperbolic, 61
    intersection, 36
    open, 637
    projection, 397
    rectangle, 61
    restricted, 61
    slab, 61
    sublevel, 75
    sum, 38
    superlevel, 75
    wedge, 61
    width, 461
shadow price, 241, 253
signomial, 200
simplex, 32
    probability, 33
    unit, 33
    volume, 407
singular value, 82
    decomposition, 648
slab, 61
slack variable, 131
Slater's condition, 226
    generalized inequalities, 265
    proof of strong duality, 234
smoothing, 307, 310
    quadratic, 312
SOCP, see second-order cone program
solid cone, 43
solution set
    linear equations, 22
    linear inequality, 27
    linear matrix inequality, 38
    quadratic inequality, 61
    strict linear inequalities, 63
SOS (sum of squares), 203
sparse
    approximation, 333
    description, 334
    matrix, 511
        Cholesky factorization, 670
        LU factorization, 669
    solution, 304
    vectors, 663
spectral
    decomposition, 646
    norm, 636
        dual, 637
        minimization, 169
sphere
    separating, 195
spline, 327
    fitting, 331
spread of eigenvalues, 203
square-root of matrix, 647
standard form
    cone program, 168
        dual, 266
    linear program, 146
        dual, 224
standard inner product, 633
    Sn, 633
statistical estimation, 351
steepest descent method, 475
    l1-norm, 477
step length, 463
stopping criterion
    via duality, 242
strict
    linear inequalities, 63
    separation, 49
strong
    alternatives, 260
    convexity, 459, 558
    duality, 226
        linear program, 280
    max-min property, 238
        convex-concave function, 281
sublevel set, 75
    closedness assumption, 457
    condition number, 461
suboptimality
    certificate, 241
    condition, 460
substitution of variable, 130
sum
    of k largest, 80
        conjugate, 120
        solving via dual, 278
    of squares, 203
    partial, 62
    sets, 38
sum-absolute-value norm, 635
SUMT (sequential unconstrained minimization method), 569
superlevel set, 75
support function, 63, 81, 92, 120
    projection and separation, 401
support vector classifier, 425
supporting hyperplane, 50
    converse theorem, 63
    KKT conditions, 283
    theorem, 51
supremum, 638
surface area, 159
    optimal trade-off, 182
surrogate duality gap, 612
SVD (singular value decomposition), 648
symbolic factorization, 511
symmetry, 189
    constraint, 442
theorem
    alternatives, 50, 54, 258
        generalized inequalities, 269
    eigenvalue interlacing, 122
    Gauss-Markov, 188
    Motzkin, 447
    Perron-Frobenius, 165
    Riesz-Fejér, 348
    separating hyperplane, 46
    Slater, 226
    supporting hyperplane, 51
Tikhonov regularization, 306
time-frequency analysis, 334
total variation reconstruction, 312
trade-off analysis, 182
transaction fee, 155
translation, 38
triangle inequality, 634
triangularization, 326
trigonometric polynomial, 116, 326
trust region, 302
    Newton method, 515
    problem, 229
two-stage optimization, 519
two-way partitioning problem, see partitioning problem
unbounded below, 128
uncertainty ellipsoid, 322
unconstrained minimization, 457
    method, 568
underdetermined linear equations, 681
uniform distribution, 105
unimodal function, 95
unit
    ball, 634
    simplex, 33
upper triangular matrix, 665
utility function, 115, 130, 211, 339
variable
    change of, 130
    dual, 215
    elimination, 672
    explanatory, 353
    optimization, 127
    slack, 131
vector
    normal, 27
    optimization, 174
        scalarization, 178
verification, 10
volume
    ellipsoid, 407
    polyhedron, 108
    simplex, 407
von Neumann growth problem, 152
Voronoi region, 60
water-filling method, 245
weak
    alternatives, 258
    duality, 225
        infeasible problems, 273
    max-min inequality, 281
wedge, 61
weight vector, 179
weighted
    least-squares, 5
    norm approximation, 293
well-conditioned basis, 407
width, 461
wireless communication system, 196
Wishart distribution, 105
worst-case analysis, 10
    robust approximation, 319
yield function, 107, 211
Young's inequality, 94, 120
Z (integers), 697