Optimal Control Theory An Introduction

  optimal control theory an introduction: Optimal Control Theory Donald E. Kirk, 2012-04-26 Upper-level undergraduate text introduces aspects of optimal control theory: dynamic programming, Pontryagin's minimum principle, and numerical techniques for trajectory optimization. Numerous figures, tables. Solution guide available upon request. 1970 edition.
  optimal control theory an introduction: Introduction to Optimal Control Theory Jack Macki, Aaron Strauss, 2012-12-06 This monograph is an introduction to optimal control theory for systems governed by vector ordinary differential equations. It is not intended as a state-of-the-art handbook for researchers. We have tried to keep two types of reader in mind: (1) mathematicians, graduate students, and advanced undergraduates in mathematics who want a concise introduction to a field which contains nontrivial interesting applications of mathematics (for example, weak convergence, convexity, and the theory of ordinary differential equations); (2) economists, applied scientists, and engineers who want to understand some of the mathematical foundations of optimal control theory. In general, we have emphasized motivation and explanation, avoiding the definition-axiom-theorem-proof approach. We make use of a large number of examples, especially one simple canonical example which we carry through the entire book. In proving theorems, we often just prove the simplest case, then state the more general results which can be proved. Many of the more difficult topics are discussed in the Notes sections at the end of chapters and several major proofs are in the Appendices. We feel that a solid understanding of basic facts is best attained by at first avoiding excessive generality. We have not tried to give an exhaustive list of references, preferring to refer the reader to existing books or papers with extensive bibliographies. References are given by author's name and the year of publication, e.g., Waltman [1974].
  optimal control theory an introduction: Optimal Control Theory L.D. Berkovitz, 2013-03-14 This book is an introduction to the mathematical theory of optimal control of processes governed by ordinary differential equations. It is intended for students and professionals in mathematics and in areas of application who want a broad, yet relatively deep, concise and coherent introduction to the subject and to its relationship with applications. In order to accommodate a range of mathematical interests and backgrounds among readers, the material is arranged so that the more advanced mathematical sections can be omitted without loss of continuity. For readers primarily interested in applications, a recommended minimum course consists of Chapter I, the sections of Chapters II, III, and IV so recommended in the introductory sections of those chapters, and all of Chapter V. The introductory section of each chapter should further guide the individual reader toward material that is of interest to him. A reader who has had a good course in advanced calculus should be able to understand the definitions and statements of the theorems and should be able to follow a substantial portion of the mathematical development. The entire book can be read by someone familiar with the basic aspects of Lebesgue integration and functional analysis. For the reader who wishes to find out more about applications we recommend references [2], [13], [33], [35], and [50] of the Bibliography at the end of the book.
  optimal control theory an introduction: Optimal Control Theory with Applications in Economics Thomas A. Weber, 2011-09-30 A rigorous introduction to optimal control theory, with an emphasis on applications in economics. This book bridges optimal control theory and economics, discussing ordinary differential equations, optimal control, game theory, and mechanism design in one volume. Technically rigorous and largely self-contained, it provides an introduction to the use of optimal control theory for deterministic continuous-time systems in economics. The theory of ordinary differential equations (ODEs) is the backbone of the theory developed in the book, and chapter 2 offers a detailed review of basic concepts in the theory of ODEs, including the solution of systems of linear ODEs, state-space analysis, potential functions, and stability analysis. Following this, the book covers the main results of optimal control theory, in particular necessary and sufficient optimality conditions; game theory, with an emphasis on differential games; and the application of control-theoretic concepts to the design of economic mechanisms. Appendixes provide a mathematical review and full solutions to all end-of-chapter problems. The material is presented at three levels: single-person decision making; games, in which a group of decision makers interact strategically; and mechanism design, which is concerned with a designer's creation of an environment in which players interact to maximize the designer's objective. The book focuses on applications; the problems are an integral part of the text. It is intended for use as a textbook or reference for graduate students, teachers, and researchers interested in applications of control theory beyond its classical use in economic growth. The book will also appeal to readers interested in a modeling approach to certain practical problems involving dynamic continuous-time models.
  optimal control theory an introduction: Calculus of Variations and Optimal Control Theory Daniel Liberzon, 2012 This textbook offers a concise yet rigorous introduction to calculus of variations and optimal control theory, and is a self-contained resource for graduate students in engineering, applied mathematics, and related subjects. Designed specifically for a one-semester course, the book begins with calculus of variations, preparing the ground for optimal control. It then gives a complete proof of the maximum principle and covers key topics such as the Hamilton-Jacobi-Bellman theory of dynamic programming and linear-quadratic optimal control. Calculus of Variations and Optimal Control Theory also traces the historical development of the subject and features numerous exercises, notes and references at the end of each chapter, and suggestions for further study. It offers a concise yet rigorous introduction; requires limited background in control theory or advanced mathematics; provides a complete proof of the maximum principle; uses consistent notation in the exposition of classical and modern topics; traces the historical development of the subject; and has a solutions manual (available only to teachers). Leading universities that have adopted this book include: University of Illinois at Urbana-Champaign (ECE 553: Optimum Control Systems); Georgia Institute of Technology (ECE 6553: Optimal Control and Optimization); University of Pennsylvania (ESE 680: Optimal Control Theory); University of Notre Dame (EE 60565: Optimal Control).
  optimal control theory an introduction: Practical Methods for Optimal Control and Estimation Using Nonlinear Programming John T. Betts, 2010-01-01 A focused presentation of how sparse optimization methods can be used to solve optimal control and estimation problems.
  optimal control theory an introduction: Optimal Control Michael Athans, Peter L. Falb, 2013-04-26 Geared toward advanced undergraduate and graduate engineering students, this text introduces the theory and applications of optimal control. It serves as a bridge to the technical literature, enabling students to evaluate the implications of theoretical control work, and to judge the merits of papers on the subject. Rather than presenting an exhaustive treatise, Optimal Control offers a detailed introduction that fosters careful thinking and disciplined intuition. It develops the basic mathematical background, with a coherent formulation of the control problem and discussions of the necessary conditions for optimality based on the maximum principle of Pontryagin. In-depth examinations cover applications of the theory to minimum time, minimum fuel, and to quadratic criteria problems. The structure, properties, and engineering realizations of several optimal feedback control systems also receive attention. Special features include numerous specific problems, carried through to engineering realization in block diagram form. The text treats almost all current examples of control problems that permit analytic solutions, and its unified approach makes frequent use of geometric ideas to encourage students' intuition.
  optimal control theory an introduction: A Primer on the Calculus of Variations and Optimal Control Theory Mike Mesterton-Gibbons, 2009 The calculus of variations is used to find functions that optimize quantities expressed in terms of integrals. Optimal control theory seeks to find functions that minimize cost integrals for systems described by differential equations. This book is an introduction to both the classical theory of the calculus of variations and the more modern developments of optimal control theory from the perspective of an applied mathematician. It focuses on understanding concepts and how to apply them. The range of potential applications is broad: the calculus of variations and optimal control theory have been widely used in numerous ways in biology, criminology, economics, engineering, finance, management science, and physics. Applications described in this book include cancer chemotherapy, navigational control, and renewable resource harvesting. The prerequisites for the book are modest: the standard calculus sequence, a first course on ordinary differential equations, and some facility with the use of mathematical software. It is suitable for an undergraduate or beginning graduate course, or for self study. It provides excellent preparation for more advanced books and courses on the calculus of variations and optimal control theory.
  optimal control theory an introduction: Primer on Optimal Control Theory Jason L. Speyer, David H. Jacobson, 2010-05-13 A rigorous introduction to optimal control theory, which will enable engineers and scientists to put the theory into practice.
  optimal control theory an introduction: Optimal Control Theory Donald E. Kirk, 1976
  optimal control theory an introduction: Optimal Control Leslie M. Hocking, 1991 Systems that evolve with time occur frequently in nature and modelling the behavior of such systems provides an important application of mathematics. These systems can be completely deterministic, but it may also be possible to control their behavior by intervention through controls. The theory of optimal control is concerned with determining such controls which, at minimum cost, either direct the system along a given trajectory or enable it to reach a given point in its state space. This textbook is a straightforward introduction to the theory of optimal control with an emphasis on presenting many different applications. Professor Hocking has taken pains to ensure that the theory is developed to display the main themes of the arguments but without using sophisticated mathematical tools. Problems in this setting can arise across a wide range of subjects and there are illustrative examples of systems from fields as diverse as dynamics, economics, population control, and medicine. Throughout there are many worked examples, and numerous exercises (with solutions) are provided.
  optimal control theory an introduction: Optimal Control Arturo Locatelli, 2001-03 From the reviews: The style of the book reflects the author’s wish to assist in the effective learning of optimal control by suitable choice of topics, the mathematical level used, and by including numerous illustrated examples. . . .In my view the book suits its function and purpose, in that it gives a student a comprehensive coverage of optimal control in an easy-to-read fashion. —Measurement and Control
  optimal control theory an introduction: Nonlinear Optimal Control Theory Leonard David Berkovitz, Negash G. Medhin, 2012-08-25 Nonlinear Optimal Control Theory presents a deep, wide-ranging introduction to the mathematical theory of the optimal control of processes governed by ordinary differential equations and certain types of differential equations with memory. Many examples illustrate the mathematical issues that need to be addressed when using optimal control techniques in diverse areas. Drawing on classroom-tested material from Purdue University and North Carolina State University, the book gives a unified account of bounded state problems governed by ordinary, integrodifferential, and delay systems. It also discusses Hamilton-Jacobi theory. By providing a sufficient and rigorous treatment of finite dimensional control problems, the book equips readers with the foundation to deal with other types of control problems, such as those governed by stochastic differential equations, partial differential equations, and differential games.
  optimal control theory an introduction: The Calculus of Variations and Optimal Control George Leitmann, 2013-06-29 When the Tyrian princess Dido landed on the North African shore of the Mediterranean sea she was welcomed by a local chieftain. He offered her all the land that she could enclose between the shoreline and a rope of knotted cowhide. While the legend does not tell us, we may assume that Princess Dido arrived at the correct solution by stretching the rope into the shape of a circular arc and thereby maximized the area of the land upon which she was to found Carthage. This story of the founding of Carthage is apocryphal. Nonetheless it is probably the first account of a problem of the kind that inspired an entire mathematical discipline, the calculus of variations and its extensions such as the theory of optimal control. This book is intended to present an introductory treatment of the calculus of variations in Part I and of optimal control theory in Part II. The discussion in Part I is restricted to the simplest problem of the calculus of variations. The topic is entirely classical; all of the basic theory had been developed before the turn of the century. Consequently the material comes from many sources; however, those most useful to me have been the books of Oskar Bolza and of George M. Ewing. Part II is devoted to the elementary aspects of the modern extension of the calculus of variations, the theory of optimal control of dynamical systems.
  optimal control theory an introduction: Optimal Control Theory and Static Optimization in Economics Daniel Léonard, Ngo van Long, 1992-01-31 Optimal control theory is a technique being used increasingly by academic economists to study problems involving optimal decisions in a multi-period framework. This textbook is designed to make the difficult subject of optimal control theory easily accessible to economists while at the same time maintaining rigour. Economic intuitions are emphasized, and examples and problem sets covering a wide range of applications in economics are provided to assist in the learning process. Theorems are clearly stated and their proofs are carefully explained. The development of the text is gradual and fully integrated, beginning with simple formulations and progressing to advanced topics such as control parameters, jumps in state variables, and bounded state space. For greater economy and elegance, optimal control theory is introduced directly, without recourse to the calculus of variations. The connection with the latter and with dynamic programming is explained in a separate chapter. A second purpose of the book is to draw the parallel between optimal control theory and static optimization. Chapter 1 provides an extensive treatment of constrained and unconstrained maximization, with emphasis on economic insight and applications. Starting from basic concepts, it derives and explains important results, including the envelope theorem and the method of comparative statics. This chapter may be used for a course in static optimization. The book is largely self-contained. No previous knowledge of differential equations is required.
  optimal control theory an introduction: An Introduction to Optimal Control Problems in Life Sciences and Economics Sebastian Aniţa, Viorel Arnăutu, Vincenzo Capasso, 2011-05-05 Combining control theory and modeling, this textbook introduces and builds on methods for simulating and tackling concrete problems in a variety of applied sciences. Emphasizing learning by doing, the authors focus on examples and applications to real-world problems. An elementary presentation of advanced concepts, proofs to introduce new ideas, and carefully presented MATLAB® programs help foster an understanding of the basics, but also lead the way to new, independent research. With minimal prerequisites and exercises in each chapter, this work serves as an excellent textbook and reference for graduate and advanced undergraduate students, researchers, and practitioners in mathematics, physics, engineering, computer science, as well as biology, biotechnology, economics, and finance.
  optimal control theory an introduction: Applied Optimal Control A. E. Bryson, 2018-05-04 This best-selling text focuses on the analysis and design of complicated dynamic systems. CHOICE called it a high-level, concise book that could well be used as a reference by engineers, applied mathematicians, and undergraduates. The format is good, the presentation clear, the diagrams instructive, the examples and problems helpful... References and a multiple-choice examination are included.
  optimal control theory an introduction: Optimal Control Theory Suresh P. Sethi, Gerald L. Thompson, 2006 Optimal control methods are used to determine optimal ways to control a dynamic system. The theoretical work in this field serves as a foundation for the book, which the authors have applied to business management problems developed from their research and classroom instruction. Sethi and Thompson have provided management science and economics communities with a thoroughly revised edition of their classic text on Optimal Control Theory. The new edition has been completely refined with careful attention to the text and graphic material presentation. Chapters cover a range of topics including finance, production and inventory problems, marketing problems, machine maintenance and replacement, problems of optimal consumption of natural resources, and applications of control theory to economics. The book contains new results that were not available when the first edition was published, as well as an expansion of the material on stochastic optimal control theory.
  optimal control theory an introduction: Optimal and Robust Estimation Frank L. Lewis, Lihua Xie, Dan Popa, 2017-12-19 More than a decade ago, world-renowned control systems authority Frank L. Lewis introduced what would become a standard textbook on estimation, under the title Optimal Estimation, used in top universities throughout the world. The time has come for a new edition of this classic text, and Lewis enlisted the aid of two accomplished experts to bring the book completely up to date with the estimation methods driving today's high-performance systems. A Classic Revisited Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory, Second Edition reflects new developments in estimation theory and design techniques. As the title suggests, the major feature of this edition is the inclusion of robust methods. Three new chapters cover the robust Kalman filter, H-infinity filtering, and H-infinity filtering of discrete-time systems. Modern Tools for Tomorrow's Engineers This text overflows with examples that highlight practical applications of the theory and concepts. Design algorithms appear conveniently in tables, allowing students quick reference, easy implementation into software, and intuitive comparisons for selecting the best algorithm for a given application. In addition, downloadable MATLAB® code allows students to gain hands-on experience with industry-standard software tools for a wide variety of applications. This cutting-edge and highly interactive text makes teaching, and learning, estimation methods easier and more modern than ever.
  optimal control theory an introduction: Optimal Control Theory, 1967
  optimal control theory an introduction: Optimal Control Brian D. O. Anderson, John B. Moore, 2007-02-27 Numerous examples highlight this treatment of the use of linear quadratic Gaussian methods for control system design. It explores linear optimal control theory from an engineering viewpoint, with illustrations of practical applications. Key topics include loop-recovery techniques, frequency shaping, and controller reduction. Numerous examples and complete solutions. 1990 edition.
  optimal control theory an introduction: Optimal Control with Engineering Applications Hans P. Geering, 2007-03-23 This book introduces a variety of problem statements in classical optimal control, in optimal estimation and filtering, and in optimal control problems with non-scalar-valued performance criteria. Many example problems are solved completely in the body of the text. All chapter-end exercises are sketched in the appendix. The theoretical part of the book is based on the calculus of variations, so the exposition is very transparent and requires little mathematical rigor.
  optimal control theory an introduction: Optimal Control Theory for Applications David G. Hull, 2013-03-09 The published material represents the outgrowth of teaching analytical optimization to aerospace engineering graduate students. To make the material available to the widest audience, the prerequisites are limited to calculus and differential equations. It is also a book about the mathematical aspects of optimal control theory. It was developed in an engineering environment from material learned by the author while applying it to the solution of engineering problems. One goal of the book is to help engineering graduate students learn the fundamentals which are needed to apply the methods to engineering problems. The examples are from geometry and elementary dynamical systems so that they can be understood by all engineering students. Another goal of this text is to unify optimization by using the differential of calculus to create the Taylor series expansions needed to derive the optimality conditions of optimal control theory.
  optimal control theory an introduction: Optimal Control and the Calculus of Variations Enid R. Pinch, 1995 A paperback edition of this successful textbook for final year undergraduate mathematicians and control engineering students, this book contains exercises and many worked examples, with complete solutions and hints making it ideal not only as a class textbook but also for individual study. The introduction to optimal control begins by considering the problem of minimizing a function of many variables, before moving on to the main subject: the optimal control of systems governed by ordinary differential equations.
  optimal control theory an introduction: Introduction to Optimal Control Theory Aaron Strauss, 1982
  optimal control theory an introduction: Mathematical Control Theory Jerzy Zabczyk, 2008 In a mathematically precise manner, this book presents a unified introduction to deterministic control theory. It includes material on the realization of both linear and nonlinear systems, impulsive control, and positive linear systems.
  optimal control theory an introduction: An Introduction to Optimal Control Theory Aaron Strauss, 2012-12-06 This paper is intended for the beginner. It is not a state-of-the-art paper for research workers in the field of control theory. Its purpose is to introduce the reader to some of the problems and results in control theory, to illustrate the application of these results, and to provide a guide for his further reading on this subject. I have tried to motivate the results with examples, especially with one canonical, simple example described in §3. Many results, such as the maximum principle, have long and difficult proofs. I have omitted these proofs. In general I have included only the proofs which are either (1) not too difficult or (2) fairly enlightening as to the nature of the result. I have, however, usually attempted to draw the strongest conclusion from a given proof. For example, many existing proofs in control theory for compact targets and uniqueness of solutions also hold for closed targets and non-uniqueness. Finally, at the end of each section I have given references to generalizations and origins of the results discussed in that section. I make no claim of completeness in the references, however, as I have often been content merely to refer the reader either to an exposition or to a paper which has an extensive bibliography. These lecture notes are revisions of notes I used for a series of nine lectures on control theory at the International Summer School on Mathematical Systems and Economics held in Varenna, Italy, June 1967.
  optimal control theory an introduction: Mathematical Control Theory Eduardo D. Sontag, 2013-11-21 Geared primarily to an audience consisting of mathematically advanced undergraduate or beginning graduate students, this text may additionally be used by engineering students interested in a rigorous, proof-oriented systems course that goes beyond the classical frequency-domain material and more applied courses. The minimal mathematical background required is a working knowledge of linear algebra and differential equations. The book covers what constitutes the common core of control theory and is unique in its emphasis on foundational aspects. While covering a wide range of topics written in a standard theorem/proof style, it also develops the necessary techniques from scratch. In this second edition, new chapters and sections have been added, dealing with time optimal control of linear systems, variational and numerical approaches to nonlinear control, nonlinear controllability via Lie-algebraic methods, and controllability of recurrent nets and of linear systems with bounded controls.
  optimal control theory an introduction: Optimal Control in Thermal Engineering Viorel Badescu, 2017-03-14 This book is the first major work covering applications in thermal engineering and offering a comprehensive introduction to optimal control theory, which has applications in mechanical engineering, particularly aircraft and missile trajectory optimization. The book is organized in three parts: The first part includes a brief presentation of function optimization and variational calculus, while the second part presents a summary of the optimal control theory. Lastly, the third part describes several applications of optimal control theory in solving various thermal engineering problems. These applications are grouped in four sections: heat transfer and thermal energy storage, solar thermal engineering, heat engines and lubrication. Clearly presented and easy-to-use, it is a valuable resource for thermal engineers and thermal-system designers as well as postgraduate students.
  optimal control theory an introduction: Optimal Control Systems D. Subbaram Naidu, 2018-10-03 The theory of optimal control systems has grown and flourished since the 1960's. Many texts, written on varying levels of sophistication, have been published on the subject. Yet even those purportedly designed for beginners in the field are often riddled with complex theorems, and many treatments fail to include topics that are essential to a thorough grounding in the various aspects of and approaches to optimal control. Optimal Control Systems provides a comprehensive but accessible treatment of the subject with just the right degree of mathematical rigor to be complete but practical. It provides a solid bridge between traditional optimization using the calculus of variations and what is called modern optimal control. It also treats both continuous-time and discrete-time optimal control systems, giving students a firm grasp on both methods. Among this book's most outstanding features is a summary table that accompanies each topic or problem and includes a statement of the problem with a step-by-step solution. Students will also gain valuable experience in using industry-standard MATLAB and SIMULINK software, including the Control System and Symbolic Math Toolboxes. Diverse applications across fields from power engineering to medicine make a foundation in optimal control systems an essential part of an engineer's background. This clear, streamlined presentation is ideal for a graduate level course on control systems and as a quick reference for working engineers.
  optimal control theory an introduction: Optimal Control Theory with Economic Applications A. Seierstad, K. Sydsæter, 1987-02 This book serves not only as an introduction, but also as an advanced text and reference source in the field of deterministic optimal control systems governed by ordinary differential equations. It also includes an introduction to the classical calculus of variations. An important feature of the book is the inclusion of a large number of examples, in which the theory is applied to a wide variety of economics problems. The presentation of simple models helps illuminate pertinent qualitative and analytic points, useful when confronted with a more complex reality. These models cover: economic growth in both open and closed economies, exploitation of (non-) renewable resources, pollution control, behaviour of firms, and differential games. A great emphasis on precision pervades the book, setting it apart from the bulk of literature in this area. The rigorous techniques presented should help the reader avoid errors which often recur in the application of control theory within economics.
  optimal control theory an introduction: Deterministic and Stochastic Optimal Control Wendell H. Fleming, Raymond W. Rishel, 2012-12-06 This book may be regarded as consisting of two parts. In Chapters I-IV we present what we regard as essential topics in an introduction to deterministic optimal control theory. This material has been used by the authors for one semester graduate-level courses at Brown University and the University of Kentucky. The simplest problem in calculus of variations is taken as the point of departure, in Chapter I. Chapters II, III, and IV deal with necessary conditions for an optimum, existence and regularity theorems for optimal controls, and the method of dynamic programming. The beginning reader may find it useful first to learn the main results, corollaries, and examples. These tend to be found in the earlier parts of each chapter. We have deliberately postponed some difficult technical proofs to later parts of these chapters. In the second part of the book we give an introduction to stochastic optimal control for Markov diffusion processes. Our treatment follows the dynamic programming method, and depends on the intimate relationship between second order partial differential equations of parabolic type and stochastic differential equations. This relationship is reviewed in Chapter V, which may be read independently of Chapters I-IV. Chapter VI is based to a considerable extent on the authors' work in stochastic control since 1961. It also includes two other topics important for applications, namely, the solution to the stochastic linear regulator and the separation principle.
  optimal control theory an introduction: Differential Geometry and Its Applications John Oprea, 2007-09-06 This book studies the differential geometry of surfaces and its relevance to engineering and the sciences.
  optimal control theory an introduction: Optimal Control of Nonlinear Processes Dieter Grass, Jonathan P. Caulkins, Gustav Feichtinger, Gernot Tragler, Doris A. Behrens, 2008-07-24 Dynamic optimization is rocket science – and more. This volume teaches researchers and students alike to harness the modern theory of dynamic optimization to solve practical problems. These problems not only cover those in space flight, but also in emerging social applications such as the control of drugs, corruption, and terror. This volume is designed to be a lively introduction to the mathematics and a bridge to these hot topics in the economics of crime for current scholars. The authors celebrate Pontryagin’s Maximum Principle – that crowning intellectual achievement of human understanding. The rich theory explored here is complemented by numerical methods available through a companion web site.
  optimal control theory an introduction: Stochastic Optimal Control in Infinite Dimension Giorgio Fabbri, Fausto Gozzi, Andrzej Święch, 2017-06-22 Providing an introduction to stochastic optimal control in infinite dimension, this book gives a complete account of the theory of second-order HJB equations in infinite-dimensional Hilbert spaces, focusing on its applicability to associated stochastic optimal control problems. It features a general introduction to optimal stochastic control, including basic results (e.g. the dynamic programming principle) with proofs, and provides examples of applications. A complete and up-to-date exposition of the existing theory of viscosity solutions and regular solutions of second-order HJB equations in Hilbert spaces is given, together with an extensive survey of other methods, with a full bibliography. In particular, Chapter 6, written by M. Fuhrman and G. Tessitore, surveys the theory of regular solutions of HJB equations arising in infinite-dimensional stochastic control, via BSDEs. The book is of interest to both pure and applied researchers working in the control theory of stochastic PDEs, and in PDEs in infinite dimension. Readers from other fields who want to learn the basic theory will also find it useful. The prerequisites are: standard functional analysis, the theory of semigroups of operators and its use in the study of PDEs, some knowledge of the dynamic programming approach to stochastic optimal control problems in finite dimension, and the basics of stochastic analysis and stochastic equations in infinite-dimensional spaces.
  optimal control theory an introduction: Control Theory and Optimization I M.I. Zelikin, 2013-03-14 The only monograph on the topic, this book concerns geometric methods in the theory of differential equations with quadratic right-hand sides, closely related to the calculus of variations and optimal control theory. Based on the author’s lectures, the book is addressed to undergraduate and graduate students, and scientific researchers.
  optimal control theory an introduction: Counterexamples in Optimal Control Theory Semen Ya. Serovaiskii, 2011-12-01 This monograph deals with cases where optimal control either does not exist or is not unique, cases where optimality conditions are insufficient or degenerate, or where extremum problems in the sense of Tikhonov and Hadamard are ill-posed, and other situations. A formal application of classical optimisation methods in such cases either leads to wrong results or has no effect. The detailed analysis of these examples should provide a better understanding of the modern theory of optimal control and the practical difficulties of solving extremum problems.
  optimal control theory an introduction: Optimal Control of Partial Differential Equations Fredi Tröltzsch, 2024-03-21 Optimal control theory is concerned with finding control functions that minimize cost functions for systems described by differential equations. The methods have found widespread applications in aeronautics, mechanical engineering, the life sciences, and many other disciplines. This book focuses on optimal control problems where the state equation is an elliptic or parabolic partial differential equation. Included are topics such as the existence of optimal solutions, necessary optimality conditions and adjoint equations, second-order sufficient conditions, and main principles of selected numerical techniques. It also contains a survey on the Karush-Kuhn-Tucker theory of nonlinear programming in Banach spaces. The exposition begins with control problems with linear equations, quadratic cost functions and control constraints. To make the book self-contained, basic facts on weak solutions of elliptic and parabolic equations are introduced. Principles of functional analysis are introduced and explained as they are needed. Many simple examples illustrate the theory and its hidden difficulties. This start to the book makes it fairly self-contained and suitable for advanced undergraduates or beginning graduate students. Advanced control problems for nonlinear partial differential equations are also discussed. As prerequisites, results on boundedness and continuity of solutions to semilinear elliptic and parabolic equations are addressed. These topics are not yet readily available in books on PDEs, making the exposition also interesting for researchers. Alongside the main theme of the analysis of problems of optimal control, Tröltzsch also discusses numerical techniques. The exposition is confined to brief introductions into the basic ideas in order to give the reader an impression of how the theory can be realized numerically. After reading this book, the reader will be familiar with the main principles of the numerical analysis of PDE-constrained optimization.
  optimal control theory an introduction: Applications of Optimal Control Theory to Computer Controller Design William S. Widnall, 1968
  optimal control theory an introduction: Classical Mechanics with Calculus of Variations and Optimal Control Mark Levi, 2014-03-07 This is an intuitively motivated presentation of many topics in classical mechanics and related areas of control theory and calculus of variations. All topics throughout the book are treated with zero tolerance for unrevealing definitions and for proofs which leave the reader in the dark. Some areas of particular interest are: an extremely short derivation of the ellipticity of planetary orbits; a statement and an explanation of the tennis racket paradox; a heuristic explanation (and a rigorous treatment) of the gyroscopic effect; a revealing equivalence between the dynamics of a particle and statics of a spring; a short geometrical explanation of Pontryagin's Maximum Principle, and more. In the last chapter, aimed at more advanced readers, the Hamiltonian and the momentum are compared to forces in a certain static problem. This gives a palpable physical meaning to some seemingly abstract concepts and theorems. With minimal prerequisites consisting of basic calculus and basic undergraduate physics, this book is suitable for courses from an undergraduate to a beginning graduate level, and for a mixed audience of mathematics, physics and engineering students. Much of the enjoyment of the subject lies in solving almost 200 problems in this book.
Optimal Control Theory An Introduction - controlengineers
This book introduces three facets of optimal control theory (dynamic programming, Pontryagin's minimum principle, and numerical techniques for trajectory optimization) at a level appropriate …

An Introduction to Optimal Control - École Polytechnique
An Introduction to Optimal Control. Ugo Boscain. Benedetto Piccoli. The aim of these notes is to give an introduction to the Theory of Optimal Control for finite-dimensional systems and in particular …

Optimal Control Theory - University of Washington
Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice

Optimal Control Theory
These notes give a brief introduction to the theory of optimal control for mathematics students, with emphasis on both the underlying mathematical theory and numerical algorithms for …

Introduction to optimal control theory - KSU
Introduction to optimal control theory. Christiane P. Koch. Laboratoire Aimé Cotton, CNRS, France. The Hebrew University Jerusalem, Israel. Outline. 0. Terminology. Intuitive control …

Introduction to Optimal Control - Princeton University
Introduction to Optimal Control. ORF523 CONVEX AND CONIC OPTIMIZATION SUMEET SINGH, GOOGLE BRAIN ROBOTICS APRIL 22, 2021. Outline. Optimal Control Problem. …

Optimal Control Introduction - LTH, Lunds …
Optimal Control Introduction. Karl Johan Åström, Department of Automatic Control, LTH, Lund University. 1. Introduction 2. Calculus of Variations 3. Optimal …

Introduction to Control Theory - University of Utah
1 Optimal control. The theory of optimal control was developed starting from the 1950s to meet the needs of designing automatic control systems. The optimal control problem is essentially the …

LECTURES ON OPTIMAL CONTROL THEORY - Universitetet i Oslo
OPTIMAL CONTROL THEORY. INTRODUCTION. In the theory of mathematical optimization one tries to find maximum or minimum points of functions depending on real variables and of other …

1 Introduction to Optimal Control Theory - St. Francis Xavier …
Optimal Control Theory is a modern approach to dynamic optimization without being constrained to Interior Solutions; nonetheless, it still relies on differentiability. The approach …

Optimal Control Theory - Bryn Mawr College
Introduction. Optimal Control theory is an extension of Calculus of Variations that deals with finding a control law so that a certain optimality criterion is achieved. Here is a sample …

INTRODUCTION TO THE OPTIMAL CONTROL THEORY AND SOME …
This paper aims to give a brief introduction to the optimal control theory and attempts to derive some of the central results of the subject, including the Hamilton-Jacobi-Bellman PDE and the …

OPTIMAL CONTROL - EPFL
3.1 INTRODUCTION After more than three hundred years of evolution, optimal control theory has been formulated as an extension of the calculus of variations. Based on the theoretical …

Optimal control and the linear quadratic regulator
These notes represent an introduction to the theory of optimal control and the linear quadratic regulator (LQR). There exist two main approaches to optimal control:
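For orientation, a standard statement of the LQR problem that such notes treat can be sketched as follows; the notation (A, B, Q, R, Q_f) is assumed here for illustration and is not quoted from this particular source:

```latex
% Continuous-time, finite-horizon LQR (generic notation, illustrative sketch)
\begin{aligned}
\min_{u(\cdot)}\quad & J = \tfrac{1}{2}\,x(T)^{\top} Q_f\, x(T)
  + \tfrac{1}{2}\int_{0}^{T}\!\big(x(t)^{\top} Q\,x(t) + u(t)^{\top} R\,u(t)\big)\,dt \\
\text{subject to}\quad & \dot{x}(t) = A\,x(t) + B\,u(t),\qquad x(0)=x_0,
\end{aligned}
```

with Q and Q_f positive semidefinite and R positive definite; the optimal control is the linear state feedback u(t) = -R^{-1} B^T P(t) x(t), where P(t) solves a Riccati differential equation.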

Introduction to Optimal Control Theory - Warning
Optimal Control Theory by Donald E. Kirk. Introduction to the mathematical theory of control by Alberto Bressan. What do you mean by control of a system? Control of a system has a double meaning: (Weak sense): …

Introduction to Optimal Control - recherche.enac.fr
Introduction The application of optimal control theory to the practical design of multivariable control systems started in the 1960s: in 1957 R. Bellman applied dynamic programming to …

LECTURES ON OPTIMAL CONTROL THEORY - UiO
Optimal control theory is a modern extension of the classical calculus of variations. Euler and Lagrange developed the theory of the calculus of variations in the eighteenth century. Its main …

Introduction to optimal control in growth theory - unibas.ch
Introduction to optimal control. Ramsey-Cass-Koopmans Model. Inclusion of Technical Change. Main features. Multi-stage decision-making; Optimization of a dynamic process in time; …

Optimal Control: Introduction and Overview - University of British …
8 Jul 2020 · What is Optimal Control? We define Optimal Control as the active manipulation of dynamical systems to achieve a given engineering goal. Core Idea: Closed-Loop Feedback …

Geometric Optimal Control - uni-bonn.de
[AS] A.A. Agrachev, Y. Sachkov, Control theory from the geometric viewpoint. Springer, 2013. [BC] M. Bardi, I. Capuzzo-Dolcetta, Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations. Springer, 2008. ... Lecture notes of the course "An Introduction to Mathematical Optimal Control Theory", https://math.berkeley.edu/~evans ...

Calculus of Variations and Optimal Control Theory - GBV
Optimal Control Theory: A Concise Introduction. Daniel Liberzon. Princeton University Press, Princeton and Oxford. Contents: Preface xiii; 1 Introduction 1; ... 3.3 Optimal control problem formulation and assumptions 83; 3.3.1 Control …

INTRODUCTION TO OPTIMAL CONTROL - UP
Introduction to Optimal Control. Organization: 1. Introduction. General considerations. Motivation. Problem Formulation. Classes of problems. Issues in optimal control theory. 2. Necessary Conditions of Optimality - Linear Systems. Without and with state constraints. Minimum time. Linear quadratic regulator. 3.

Multi-Objective Optimal Control: A Direct Approach - Springer
the MONLP problem and return a partial reconstruction of the globally optimal Pareto set. An illustrative example concludes the chapter. Keywords: Multi-objective optimisation · Optimal control · Finite elements · Trajectory optimisation. 1 Introduction Optimal control theory is a branch of mathematical optimisation that searches for

Short notes on Optimal Control 1 Introduction to optimal control
1 Introduction to optimal control Various optimization problems appear in open- and closed-loop control, deterministic and stochastic control, and estimation theory. Optimal control is the intersection of these areas. In general, the objective is to choose an optimal input w.r.t. some performance index which gives a cost function
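As a concrete illustration of such a performance index, a generic finite-horizon problem in Bolza form can be written as follows (the notation is assumed for illustration, not taken from these notes):

```latex
% Generic finite-horizon optimal control problem, Bolza form (assumed notation)
\begin{aligned}
\min_{u(\cdot)}\quad & J(u) = \phi\big(x(t_f)\big) + \int_{t_0}^{t_f} L\big(x(t),u(t),t\big)\,dt \\
\text{subject to}\quad & \dot{x}(t) = f\big(x(t),u(t),t\big),\qquad x(t_0)=x_0,\qquad u(t)\in U .
\end{aligned}
```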

Optimal control: An introduction to the theory with applications, …
OPTIMAL CONTROL: AN INTRODUCTION TO THE THEORY WITH APPLICATIONS, L. M. Hocking, Clarendon Press, Oxford, 1991, ISBN 0-19-8596824, xiv + 254 pp., £16.00. The essential properties of an optimal control problem are that we have a system which evolves in time according to certain laws. There are two main branches of optimal control theory, …

MAE 598: LMIs in Optimal and Robust Control Syllabus
The following is an introduction to classical control and state-space theory. Franklin, Powell and Emami-Naeini, "Feedback Control of Dynamical Systems", Addison-Wesley, 1994. The following are references for LMI methods in control. Zhou, Doyle and Glover, "Robust and Optimal Control", Prentice Hall, 1996. Boyd, El Ghaoui, Feron and Balakrishnan.

OPTIMAL CONTROL THEORY - Middle East Technical University
Textbook: D.E. Kirk. Optimal Control Theory: An Introduction. Dover, 2004. Tentative course outline: I. Chapters 1-3 (Kirk) Optimal control problem (definition and applications) Principle of optimality and dynamic programming HJB equation LQR II. Chapter 4 …

Optimal Control Theory An Introduction Solution
Introduction to Optimal Control Theory Jack Macki, Aaron Strauss, 2012-12-06 This monograph is an introduction to optimal control theory for systems governed by vector ordinary differential equations. It is not intended as a state-of-the-art handbook for researchers. We have tried to keep two types of reader in mind: (1) mathematicians, graduate ...

Stochastic optimal control theory - tu-berlin.de
Stochastic optimal control theory. ICML, Helsinki 2008 tutorial. H.J. Kappen, Radboud University, Nijmegen, the Netherlands. July 4, 2008. Abstract: Control theory is a mathematical description of how to act optimally to gain future rewards. In this paper I give an introduction to …

Contents Introduction to differential game and optimal control
questions, we use control theory and viscosity solution approaches, appealing to the existence of the Isaacs' conditions: the relation between the upper and lower Hamiltonians. Contents 1. Introduction to differential game and optimal control 1 2. Strategies 4 3. Properties of the value function 5 4. Viscosity solutions and another verification ...

Optimal control theory and the linear Bellman Equation
Optimal control theory and the linear Bellman Equation Hilbert J. Kappen 1.1 Introduction Optimizing a sequence of actions to attain some future goal is the general topic of control theory Stengel (1993); Fleming and Soner (1992). It views an agent as an automaton that seeks to maximize expected reward (or minimize cost) over some future time ...
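For reference, the dynamic-programming view alluded to here rests on a Bellman recursion; a minimal discrete-time, cost-minimizing sketch in assumed notation (not Kappen's own) is:

```latex
% Finite-horizon Bellman (dynamic programming) recursion, cost-minimizing form (assumed notation)
V_t(x) = \min_{u \in U}\Big\{\ell(x,u) + \mathbb{E}\big[\,V_{t+1}(x')\;\big|\;x,u\,\big]\Big\},
\qquad V_T(x) = \phi(x),
```

where x' denotes the next state reached from x under control u, and the expectation is over the stochastic dynamics.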

NONLINEAR AND OPTIMAL CONTROL THEORY - Yale University
OPTIMAL CONTROL THEORY Lectures given at the C.I.M.E. Summer School held in Cetraro, Italy, June 19-29, 2004. Editors: P. Nistri and G. Stefani. Springer Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo

Introduction to Optimal and Robust Control - Wiley
industry, which helped the development of new control techniques, but optimal control and estimation were particularly encouraged by the applications in space. The most significant contributions were by Kalman [7] who introduced the state-space approach to optimal control and filtering theory. In this second stage of development the system

Optimal Control Theory An Introduction Solution
Optimal Control Theory An Introduction Solution (.pdf). By understanding the fundamentals of optimal control and the different solution techniques available, practitioners can effectively apply this theory to solve a wide range of real-world problems, leading to improved performance, efficiency, and …

Optimal Control Theory - IISER TVM
Optimal Control Theory Mrinal Kanti Ghosh Department of Mathematics Indian Institute of Science Bangalore 560 012 email : mkg@math.iisc.ernet.in 1 Introduction Optimal Control Theory deals with optimization problems involving a controlled dynamical system. A controlled dynamical system is a dynamical system in which the trajectory can be

The calculus of variations and optimal control: An introduction
George Leitmann: The Calculus of Variations and Optimal Control: An Introduction, Plenum Press, New York, 1981, 311 pp., $35 ($42 outside the U.S.) ... In Part II the theory of the optimal control of dynamical systems is developed. The approach is geometric rather than variational, being based on original work of the author done in conjunction ...

Calculus of variations and optimal control theory. A concise
A review of the book "Calculus of variations and optimal control theory. A concise introduction" by Daniel Liberzon, Princeton University Press, Princeton, NJ, 2012. This nicely and carefully written textbook collects lecture notes for a graduate course on optimal control given by the author at the University of Illinois.

Optimal Control: Introduction and Overview - University of British …
8 Jul 2020 · Optimal Control: Introduction and Overview. Jonathan Wilder Lavington, July 8, 2020, University of British Columbia, Department of Computer Science. ... Presentation 2: Control Theory From RL / Optimization Perspective. Read "Optimal Control Theory" (pg 1-23). Major Topics: Discrete Control / Dynamic Programming

Reinforcement Learning and Optimal Control

B15 Linear Dynamic Systems and Optimal Control - GitHub Pages
D. Liberzon, Calculus of Variations and Optimal Control Theory: A Concise Introduction, Princeton U.P., 2012. The course follows the recommended texts which, however, provide a more general treatment of the theory, targeted at a graduate level. It should be acknowledged that the exposition and worked out examples in these notes have been inspired

CALCULUS OF VARIATIONS AND OPTIMAL CONTROL THEORY
11 Optimal control problems linear in the state variables 240; 12 An extension of Theorem 10.1 245; 13 An approximation lemma 248. Chapter 6: A General Fixed Endpoint Problem. 1 Introduction 250; 2 A maximum principle 253; 3 A control problem of Lagrange with equality constraints 256; 4 Control problems of Lagrange with inequality constraints 260

LECTURES ON OPTIMAL CONTROL THEORY - Universitetet i Oslo
OPTIMAL CONTROL THEORY 1 INTRODUCTION In the theory of mathematical optimization one tries to find maximum or minimum points of functions depending on real variables and of other functions. Optimal control theory is a modern extension of the classical calculus of variations. Euler and Lagrange developed the theory of the calculus of

Optimal Control Theory An Introduction Solution
Optimal Control Theory An Introduction (controlengineers). This book introduces three facets of optimal control theory (dynamic programming, Pontryagin's minimum principle, and numerical techniques for trajectory optimization) at a level appropriate for a first- or second-year …

Stochastic Optimal Control in Finance - Durham
In this chapter, we will outline the basic structure of an optimal control problem. Then, this structure will be explained through several examples mainly from mathematical finance. Analysis and the solution to these problems will be provided later. 1.1 Optimal Control. In very general terms, an optimal control problem consists of the following

Optimal Control Theory and the Linear Quadratic Regulator
Optimal Control Theory and the Linear Quadratic Regulator. Lucas Janson and Sham Kakade. CS/Stat 184: Introduction to Reinforcement Learning, Fall 2022. ... Goal: stabilizing around the point (s = s2, a = 0). More generally: optimal control.

Calculus of Variations and Optimal Control Theory Exercises
Calculus of Variations and Optimal Control Theory Exercises. Athil George. Last updated 8/17/24. Contents: Solutions. CALCULUS OF VARIATIONS AND OPTIMAL CONTROL THEORY: A Concise Introduction, Daniel Liberzon.

Lecture Notes “Mathematical Systems and Control Theory”
… ‖u(t)‖ dt (energy-minimizing control) (1.12). Here ‖·‖ is an appropriate vector norm such as the Euclidean norm, but 1- and ∞-norms can be useful as well. Mixtures of these cost functionals appear quite often; in particular, we will have a closer look at combinations of (1.11) and (1.12), while (1.10) is the subject of optimal control theory. In ...

Optimal and Robust Estimation: With an Introduction to …
Optimal Control of Singularly Perturbed Linear Systems and Applications: High-Accuracy Techniques, Zoran Gajić ... Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory, Second Edition, Frank L. Lewis, Lihua Xie, and Dan Popa.

An introduction to optimal control theory and Hamilton-Jacobi …
An introduction to optimal control theory and Hamilton-Jacobi equations. Ariela Briani, LMPT, Université de Tours, France. email: ariela.briani@lmpt.univ-tours.fr Contents 1 Introduction 2 2 Some optimal control problems 2 3 The Dynamic Programming Principle 4 4 The Hamilton-Jacobi equation and the viscosity solutions 7

An Introduction To Mathematical Optimal Control Theory
18 Jan 2018 · An Introduction To Mathematical Optimal Control Theory Altannar Chinchuluun, Panos M. Pardalos, Rentsen Enkhbat, Ider Tseveendorj. Introduction to the Mathematical Theory of Control Alberto Bressan, Benedetto Piccoli, 2007. Mathematical Control Theory John B. Baillieul, J.C. Willems, 2012-12-06 This volume on mathematical control theory

Optimal Control Theory Applied to Ship Maneuvering in Restricted …
modern optimal control theory to such maneuvering scenarios in order to show that helmsmen may some day be replaced by modern controllers. The maneuvering equations of motion are cast ... 1 Introduction The problem of controlling ships during their passages at sea has a history as old as human maritime endeavors. The past century saw great ...

6 LINEAR OPTIMAL CONTROL THEORY FOR DISCRETE-TIME …
Discrete-time linear optimal control theory is of great interest because of ... 6.2.1 Introduction In this section the theory of linear discrete-time systems is briefly reviewed. The section is organized along the lines of Chapter 1. Many of the results stated in this section are more extensively discussed by Freeman (1965).

Optimal Control Theory: Introduction to the Special Issue
deterministic control models described by ordinary differential equations, the Pontryagin maximum principle is used as often as Bellman's dynamic programming method. An optimal control problem includes a calculation of the optimal control and the synthesis of the optimal control system. Optimal control, as a rule, is calculated by numerical
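A minimal sketch of the necessary conditions delivered by the Pontryagin principle, for a generic problem of minimizing φ(x(T)) + ∫ L(x,u,t) dt subject to ẋ = f(x,u,t), reads as follows (sign conventions vary between the maximum and minimum forms; the notation is assumed here for illustration):

```latex
% Pontryagin principle, first-order necessary conditions (assumed notation, minimum form)
H(x,u,p,t) = L(x,u,t) + p^{\top} f(x,u,t), \qquad
\dot{p} = -\frac{\partial H}{\partial x}, \qquad
p(T) = \frac{\partial \phi}{\partial x}\big(x(T)\big), \qquad
u^{*}(t) \in \arg\min_{u \in U} H\big(x^{*}(t),u,p(t),t\big).
```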

Calculus of Variations and Optimal Control Theory: A Concise ...
Technology & Engineering. Optimal Control Theory: An Introduction. Donald E. Kirk. 464 pages. ISBN 9780486135076. Apr 26, 2012. Geared toward upper-level undergraduates, this text introduces three aspects of optimal control theory: dynamic programming, Pontryagin's minimum principle, and numerical ...

Chapter 2: Optimal Control - mnwhowell.com
An optimal control of a dynamic system is designed to optimize (maximize or minimize) this performance index. This function reflects the quality of ... As stated in the introduction, this chapter will primarily restrict discussion of optimal control to performance indices that are formulated over all future time. This

August 9, 2011 - University of Illinois Urbana-Champaign
This book grew out of my lecture notes for a graduate course on optimal control theory which I taught at the University of Illinois at Urbana-Champaign during the period from 2005 to 2010. While preparing the lectures, I have accumulated an entire shelf of textbooks on calculus of variations and optimal control systems.

Introduction: Optimal Control Problems - Springer
... control or action variable, with values in some action space A; and ξt is a disturbance or perturbation in a disturbance set S. In most applications of control theory, the spaces X, A, and S are subsets of finite-dimensional spaces. However, for technical reasons (to be briefly discussed in Remark 1.7), it
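In such a setup the controlled dynamics are typically written as a recursion of the following form; this is an illustrative sketch in assumed notation, consistent with the spaces X, A, and S mentioned in the excerpt:

```latex
% Discrete-time controlled system with disturbances (assumed notation)
x_{t+1} = F(x_t, a_t, \xi_t), \qquad x_t \in X,\quad a_t \in A,\quad \xi_t \in S, \qquad t = 0,1,2,\ldots
```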

Optimal control and the linear quadratic regulator
Optimal control and the linear quadratic regulator Shankar Sastry, Forrest Laine, Claire Tomlin February 3, 2021 These notes represent an introduction to the theory of optimal control and the linear quadratic regulator (LQR). There exist two main approaches to optimal control: 1. via the Calculus of Variations (making use of the Maximum Principle);
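To complement the two approaches listed in these notes, the dynamic-programming route for the discrete-time, finite-horizon LQR special case can be sketched in a few lines of NumPy via the backward Riccati recursion. This is an illustrative sketch under assumed dynamics and weights, not code from the notes:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, horizon):
    """Backward Riccati recursion for discrete-time LQR (illustrative sketch).

    Assumed dynamics: x_{k+1} = A x_k + B u_k
    Assumed cost:     sum_k (x_k' Q x_k + u_k' R u_k) + x_N' Qf x_N
    Returns feedback gains K_k such that u_k = -K_k x_k.
    """
    P = Qf
    gains = []
    for _ in range(horizon):
        # K = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P = Q + A' P (A - B K)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()  # gains[k] is the gain applied at time step k
    return gains

# Example with hypothetical double-integrator data (illustrative values only)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[0.1]])
Qf = np.eye(2)

K = finite_horizon_lqr(A, B, Q, R, Qf, horizon=50)
x = np.array([[1.0], [0.0]])
for k in range(50):
    u = -K[k] @ x
    x = A @ x + B @ u
print("final state:", x.ravel())
```

The recursion runs backward from the terminal cost, and the resulting time-varying gains K_k give the optimal feedback u_k = -K_k x_k for the quadratic cost assumed in the comments.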

Calculus of Variations and Optimal Control Theory - De Gruyter
The words "control theory" are, of course, of recent origin, but the subject itself is much older, since it contains the classical calculus of variations as a special case, and the first calculus of variations

MA40061: NONLINEAR & OPTIMAL CONTROL THEORY
MA40061: NONLINEAR & OPTIMAL CONTROL THEORY Background Reading February 4, 2011 The following is a somewhat arbitrarily chosen collection of useful texts in the Library:

An Introduction To Mathematical Optimal Control Theory
Primer on Optimal Control Theory - SIAM Publications Library. The objective of the book is to make optimal control theory accessible to a large class of engineers and scientists who are not mathematicians, although they have a basic mathematical background, but who need to understand and want to appreciate the sophisticated material ...

Optimal Control Theory - University of Washington
Optimal Control Theory Emanuel Todorov University of California San Diego Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference …

ECE 821 Optimal Control and Variational Methods Lecture Notes
1 Introduction Optimal control theory is the study of dynamic systems, where an "input function" is sought to minimize a given "cost function". The input and state of the system may be constrained in a variety of ways. In most applications, a general solution is …

2 Introduction to Stochastic Control - School of Mathematics
2 Introduction to Stochastic Control ... In the first criterion, relying on the theory of choice in uncertainty, the agent compares random incomes for which he knows the probability distributions. Under some con- ... (2.5), if ν* is the optimal control, then we have the value function V(x) = sup ...
