MA629 Nonlinear Optimization
Course Catalog Description
Knowledge of Linear Algebra and Multivariate Calculus

Motivation
“Namely, because the shape of the whole universe is most perfect and, in fact, designed by the wisest creator, nothing in all of the world will occur in which no minimum or maximum rule is somehow shining forth.” Leonhard Euler (1744)
The objective of this course is to introduce students to the foundations of optimization. The first part of the class focuses on basic results of convex analysis and their application to the development of necessary and sufficient conditions of optimality and Lagrangian duality theory. Basic numerical methods of optimization and their convergence constitute the second portion of the class. Along with the theoretical results and methods, examples of optimization models in probability, statistics, and approximation theory will be discussed, and basic models from management, finance, and other practical settings will be introduced in order to illustrate the discussed notions and phenomena and to demonstrate the scope of applications. Linear optimization techniques will be treated as a special case. The numerical assignments will require the use of software. Non-differentiable problems are the subject of MA 630.
Topics covered: basic optimization models, separation, representation of convex sets, properties of convex functions, optimality conditions, saddle points, constraint qualifications, Lagrange duality, gradient and non-gradient methods for unconstrained optimization, primal and dual methods for constrained differentiable problems, conditioning, convergence rates.
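As a preview of the optimality-conditions and duality topics listed above, the Karush–Kuhn–Tucker (KKT) conditions for a differentiable inequality-constrained problem can be stated as follows (a standard formulation, sketched here for orientation only):

```latex
\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \quad i = 1,\dots,m.
% Under a constraint qualification, a local minimizer x^* admits
% multipliers \lambda_i \ge 0 such that:
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0, \qquad
\lambda_i \ge 0, \qquad \lambda_i\, g_i(x^*) = 0, \quad i = 1,\dots,m.
```

The last set of equations, complementary slackness, says a multiplier can be positive only on an active constraint; constraint qualifications, which guarantee such multipliers exist, are among the topics covered.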
Main text: Andrzej Ruszczynski, Nonlinear Optimization, Princeton University Press, 2006.
Supplementary: J.-F. Bonnans, J.C. Gilbert, C. Lemarechal, C.A. Sagastizabal, Numerical Optimization: Theoretical and Practical Aspects, Springer, 2006.
Neither textbook is required; both can be consulted in the library. All assignments and the final exam require only material discussed in class.
Attendance of lectures is expected, as well as participation in class discussions.

Graded work:
7 assignments, a project, and a final exam.

Grading Procedures:
Grades will be based on Class Participation 10%, Homework 35%, Project 20%, and Final Exam 35%.
Week 1: Examples of optimization models. Introduction to convex sets.
Week 2: Separation theorems; extreme points and representation of convex sets.
Week 3: Convex functions and convexity criteria.
Week 4: Tangent and normal cones.
Week 5: Unconstrained minima.
Week 6: Optimality conditions.
Week 7: Lagrangian duality.
Week 8: Line search. Steepest descent method.
Week 9: Conditioning. Newton’s method.
Week 10: Conjugate gradient method. Quasi-Newton methods.
Week 11: Non-gradient methods. Truncated Newton’s method.
Week 12: Penalty methods. Dual methods.
Week 13: Augmented Lagrangian methods.
Week 14: Sequential quadratic programming. Interior point (barrier) methods.
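As a taste of the numerical portion of the course (Weeks 8–9), here is a minimal sketch of the steepest descent method with a backtracking (Armijo) line search, applied to an illustrative ill-conditioned quadratic. The function, starting point, and parameters are hypothetical examples chosen for this sketch, not course assignments.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Minimize f by steepest descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop when the gradient is nearly zero
            break
        t = 1.0
        # Halve the step until the Armijo sufficient-decrease condition holds.
        while f(x - t * g) > f(x) - 0.5 * t * np.dot(g, g):
            t *= 0.5
        x = x - t * g
    return x

# Illustrative quadratic f(x) = 0.5 * x^T Q x with condition number 10;
# its unique minimizer is the origin.
Q = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

x_star = steepest_descent(f, grad, x0=[5.0, 1.0])
```

The eigenvalue ratio of Q (here 10) is exactly the conditioning issue discussed in Week 9: the larger it is, the more the iterates zigzag and the slower the linear convergence.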