
Polynomial Real Root Finding in Bernstein Form

A Dissertation Presented to the Department of Civil Engineering, Brigham Young University, in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

by Melvin R. Spencer
August 1994

This dissertation by Melvin R. Spencer is accepted in its present form by the Department of Civil Engineering of Brigham Young University as satisfying the dissertation requirement for the degree of Doctor of Philosophy.

Thomas W. Sederberg, Committee Chairman
Norman L. Jones, Committee Member
Rida T. Farouki, Committee Member
S. Olani Durrant, Department Chairman

ACKNOWLEDGEMENTS

I am deeply grateful to Dr. Thomas W. Sederberg for this dissertation, for serving as my advisory chair, for research assistantships, and for the lucid insight he provided throughout my graduate work. Without his constant supportive source of ideas, suggestions, encouragement, motivation, and dedication this work would not have come to fruition. I also thank him for his patient, considerate demeanor and grace under pressure. I am deeply indebted to Dr. Henry N. Christiansen for permitting an enthusiastic undergraduate to pursue knowledge and gain enlightenment through the many opportunities provided, and through the many talented colleagues associated with his graduate program at Brigham Young University. I thank the faculty, deans, and administrators of the Department of Civil Engineering, the College of Engineering and Technology, and the Office of Graduate Studies of Brigham Young University for their continual support in granting the many considerations requested for the fulfillment of this degree. In addition, thanks to the staff at the University Library and its Inter-Library Loan Department for their constant service and unending supply of reference material.

I extend my sincere appreciation and thanks to Dr. Thomas W. Sederberg and Dr. Rida T. Farouki for their significant and incessant contributions toward the completion of this document: their derivation and presentation of the chapter on preconditioning, which is the core of this work; their insights regarding real root finding for polynomials in Bernstein form; the many days sacrificed and dedicated to the composition, rewriting, and editing; and above all their considerate, good-natured personalities and much appreciated suggestions, which lessened the burden of the oral and written presentation of this thesis.

Special thanks are extended to my advisory committee, Dr. Thomas W. Sederberg, Dr. Norman L. Jones, and Dr. Rida T. Farouki, as well as my oral committee chairman, Dr. Henry N. Christiansen, and members Dr. Thomas W. Sederberg, Dr. Norman L. Jones, Dr. Rida T. Farouki, and Dr. C. Gregory Jensen, for their preparation, examination, and suggestions regarding the content of this dissertation.

I would like to recognize and thank my fellow graduate colleagues for their assistance in preparing this work: Dr. Alan K. Zundel, for his many contributions, including his insights into the multiple root-finding strategy and its implementation and testing in his algebraic surface rendering program, and particularly his last-minute efforts on the performance assessment and testing of the various algorithms; Kris Klimaszewski, for his initial implementation of the degree six isolator polynomial algorithm as well as his many programming insights and camaraderie; and Peisheng Gao and Hong Mu, for their unending and expert assistance in programming, illustrations, and testing, which they performed alongside their many other graduate duties and responsibilities. I would like to thank Dr. Anders N.
Grimsrud and my other associates at work for the additional responsibility they have shouldered in providing me a leave of absence to finish this academic goal. I would like to thank my parents, relatives, and friends for their emotional support and confidence regarding the completion of this work; especially my father, Melvin Reed Spencer, and my brother and his family, Robert and Shannon, Erin, and Chase Spencer, for their financial and emotional support and for providing a weekend refuge to refresh and collect my thoughts. And last but sincerely not least, I am grateful and thankful for the divine inspiration granted to all concerned, supporting the conception, development, and completion of this collaborative effort.

Contents

1 Introduction
1.1 Literature Search
1.2 Overview

2 Review of Polynomial Real Root Finding
2.1 Polynomial Representations and Properties
2.1.1 Power Form Polynomials
2.1.2 Bernstein Form Polynomials
    Explicit Bézier Curves
    Convex Hull Property
    Variation Diminishing Property
2.2 Polynomial Real Root Localization
2.2.1 Techniques for Bounding Real Roots
2.2.2 Isolation Techniques for Distinct Real Roots
    Polynomial Factors and Roots
    Location Principles
    Variation of Coefficient Signs
    Sturm Sequence
    Budan–Fourier Sequence
    Isolating Polynomials
2.2.3 Isolation Techniques Accounting for Multiple Real Roots
    Common Roots of Two Polynomials
    Separation of Multiple Roots
2.3 Polynomial Real Root Approximation
2.3.1 Closed Form Solvers
2.3.2 Basic Serial Iterative Methods
2.3.3 Hybrid Serial Iterative Methods
2.3.4 Simultaneous Iterative Methods
2.4 Power Form General Purpose Root Finders
    Jenkins–Traub's Method
    Dunaway's Composite Method
    Madsen–Reid's Newton-Based Method
    Laguerre's Method
2.5 Power Form Real Root Finders
2.5.1 Sturm Sequence Techniques
    Hook–McAree's Method
2.5.2 Differentiation Techniques
    Collins–Loos' Method
2.5.3 Variation of Signs Techniques
    Collins–Akritas' Method
2.5.4 Interval Techniques
    Hansen's Newton Interval Methods
    Dedieu–Yakoubsohn's Exclusion Method
2.6 Bernstein Form Real Root Finders
2.6.1 Recursive Subdivide Techniques
    Lane–Riesenfeld's Method
    Rockwood's Method
    Schneider's Method
2.6.2 Newton-Based Techniques
    Grandine's Method
    Marchepoil–Chenin's Method
2.6.3 Hull Approximation Techniques
    Rajan–Klinkner–Farouki's Method

3 Review of Numerical Considerations
3.1 Numerical Error
3.1.1 Sources and Types of Numerical Error
    Causes of Computational Error
3.1.2 Roundoff Error
    Floating-Point Computation
    Machine Epsilon and Roundoff Unit
    Roundoff Error Due to Floating-Point Operations
3.2 Estimating Bounds for Roundoff Error
3.2.1 Forward Error Analysis
3.2.2 Running Error Analysis
3.2.3 Backward Error Analysis
3.2.4 Interval Error Analysis
3.2.5 General Stochastic Error Analysis
3.2.6 Permutation–Perturbation Error Analysis
3.3 Numerical Stability and Condition
3.3.1 Preliminary Definitions
3.3.2 Condition of Perturbing Polynomial Coefficients
3.3.3 Condition of Bernstein and Power Forms
3.4 Conversion Between Bernstein and Power Forms
3.4.1 Example
3.4.2 Closed Form Expression
3.4.3 Numerical Ramifications of Basis Conversion
3.5 Performance of Polynomial Root Finders
3.5.1 Benchmarking Principles
    Polynomial Classes for Performance Assessment

4 Preconditioning for Bernstein Form Polynomials
4.1 An Illustrative Example
4.1.1 The Bernstein Form
4.1.2 Relative Errors in the Construction Process
4.2 Problems of Error Propagation and Amplification
4.2.1 Error Analysis of the de Casteljau Algorithm
4.2.2 Condition of the Subdivision Map
4.2.3 Backward Error Analysis
4.3 Application to Curve/Curve Intersections
4.4 Concluding Remarks

5 General Root Finding Concepts
5.0.1 Pseudo Code
5.1 Bernstein Subdivision
    The de Casteljau Algorithm
    Subdivision Coefficient Errors
    Subdivision Global Error Bound
5.1.1 Algorithms: Bsubdivleft and Bsubdivright
5.1.2 Numerical Stability
5.2 Bernstein Modified Horner Evaluation
5.2.1 Horner's Method Expressed in Power Form
5.2.2 Horner's Method Modified for Bernstein Form
5.2.3 Algorithm: Beval
5.2.4 Polynomial Evaluation Considerations
    Scaled Bernstein Polynomial Coefficients
5.3 Bernstein Deflation
5.3.1 Preliminary Concepts
5.3.2 Algorithms: Bdeflatet, Bdeflateleft, and Bdeflateright
5.3.3 Deflation Considerations
5.4 Polynomial Coefficient Normalization
5.4.1 Preliminary Concepts
5.4.2 Algorithms: NormPolybase and NormPolymax
5.4.3 Normalization Considerations
5.5 Bernstein End Control Point Root Approximation
5.5.1 Algorithms: Bend0left and Bend0right
5.5.2 Bernstein End Zero Considerations
5.6 Bernstein Differentiation
5.6.1 Algorithms: Bderiv and Bderivpseudo
5.6.2 Differentiation Considerations
5.7 Polynomial Real Root Bounds
5.8 Pseudo Bernstein Basis Conversion
5.8.1 Preliminary Concepts
5.8.2 Algorithms: ConvCoefsb2p̂ and ConvCoefsp2b̂
5.8.3 Algorithms: ConvRootsb̂ and ConvRootsp̂
5.9 Closed Form Real Root Solvers
    Quadratic Solution
    Cubic and Quartic Solutions
5.9.1 Algorithms: Psolver2, Psolver3, and Psolver4

6 Polynomial Real Root-Finding Algorithms
6.1 Bernstein Convex Hull Approximating Step (Bcha1)
6.1.1 Preliminary Concepts
6.1.2 Algorithm: Bcha1
6.2 Higher Degree Approximating Steps (Bcha2–4)
6.2.1 Preliminary Concepts
6.2.2 Higher Degree Approximating Step Considerations
6.3 Bernstein Combined Subdivide & Derivative (Bcom)
6.3.1 Preliminary Concepts
6.3.2 Algorithm: Bcom
6.3.3 Bcom Considerations
6.4 Bernstein Convex Hull Isolating Step (Bchi)
6.4.1 Preliminary Concepts
6.4.2 Algorithm: Bchi
6.5 Standard Isolator Polynomial Separation (Sips)
6.5.1 Preliminary Concepts
    Isolator Polynomials
6.5.2 Algorithm: Sips
6.5.3 Algorithm Considerations
    Restricting Input to Power Form Polynomials
    Tighter Root Bounds from the Convex Hull
    Pseudo-Basis Conversion
    Common Roots
6.5.4 Illustrative Examples of IPs

7 Numerical Results
7.1 Test Problems and Results for Distinct Real Roots
7.2 Discussion

8 Conclusions

List of Figures

2.1 Bernstein polynomial as an explicit Bézier curve
2.2 Convex hull property
2.3 Application of Rolle's theorem
4.1 Two degree-7 Bézier curves having 49 real intersections
5.1 Subdividing a cubic explicit Bézier curve
5.2 Explicit Bézier curves before and after deflating roots at t = a, b
5.3 Explicit Bézier curve y_[0,1](t) with a cluster of roots at t = b
5.4 Log-log plot of pseudo-mapping x(t) = t/(1-t) over the region t ∈ [0,1]
5.5 Log-log plot of pseudo-mapping t(x) = x/(1+x) over the region x ∈ [0,1]
6.1 Bcha1 root finding
6.2 P̂_[a,b](t) of degrees one through four
6.3 Bcom root isolation heuristic (a–d)
6.4 Bcom root isolation heuristic (e–h)
6.5 Bcom subdivision step heuristic
6.6 Root isolation using Bchi
6.7 IPs of degree 1 and 3 based on a degree 5 Wilkinson polynomial over I[0,1]
6.8 IPs of degree 1 and 3 based on a degree 5 polynomial over I[0,1] with roots at {.1, .2, .4, .6, .8}
6.9 IPs both of degree 2 based on a degree 5 Wilkinson polynomial over I[0,1]
6.10 IPs both of degree 2 based on a degree 5 polynomial over I[0,1] with roots at {.1, .2, .4, .6, .8}
6.11 IPs of degree 1 and 4 based on a degree 6 Wilkinson polynomial over I[0,1]
6.12 IPs of degree 1 and 4 based on a degree 6 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .7, .9}
6.13 IPs of degree 2 and 3 based on a degree 6 Wilkinson polynomial over I[0,1]
6.14 IPs of degree 2 and 3 based on a degree 6 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .7, .9}
6.15 IPs of degree 2 and 4 based on a degree 7 Wilkinson polynomial over I[0,1]
6.16 IPs of degree 2 and 4 based on a degree 7 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .5, .6, 1.}
6.17 IPs both of degree 3 based on a degree 7 Wilkinson polynomial over I[0,1]
6.18 IPs both of degree 3 based on a degree 7 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .5, .6, 1.}
6.19 IPs of degree 3 and 4 based on a degree 8 Wilkinson polynomial over I[0,1]
6.20 IPs of degree 3 and 4 based on a degree 8 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .5, .6, .7, 1.}

List of Tables

2.1 Global Bounding of Power Form Polynomial Real Roots
4.1 Comparison of the computed roots of P25(t), B25[0,1](t), and B25[.25,.75](u)
4.2 Condition numbers of subdivision matrices in various norms
7.1 Machine Floating-Point Constants
7.2 Relative Execution Times for Degree Five Polynomials
7.3 Relative Execution Times for Degree Ten Polynomials
7.4 Relative Execution Times for Degree Twenty Polynomials
7.5 Relative Execution Times for Wilkinson Polynomials in I[0,1]
7.6 Relative Execution Times for Degree Two Polynomials
7.7 Relative Execution Times for Degree Three Polynomials
7.8 Relative Execution Times for Degree Four Polynomials
7.9 Relative Execution Times for Degree 20 with Clustered Roots
7.10 Relative Execution Times for Degree 10 with Clustered Roots

Chapter 1

Introduction

This dissertation addresses the problem of approximating, in floating-point arithmetic, all real roots (simple, clustered, and multiple) over the unit interval of polynomials in Bernstein form with real coefficients. The Bernstein polynomial basis enjoys widespread use in the fields of computer aided geometric design (CAGD) and computer graphics.
The use of the Bernstein basis functions to express Bézier curves and surfaces allows many basic algorithms concerned with the processing of such forms to be reduced to the problem of computing the real roots of polynomials in Bernstein form. A typical example is the problem of computing the intersection points of two Bézier curves. Given two curves of degrees m and n, respectively, the problem of identifying their intersection points can be reformulated in terms of finding the real roots on the unit interval of a degree mn polynomial in Bernstein form [SP86]. Other examples from CAGD include finding a point on each loop of the intersection curve of two surfaces [Far86], and the "trimming" of offset and bisector curves [FJ94, FN90a]. Examples from computer graphics include ray tracing [Kaj82, Han83, SA84, Wij84, BDM84] and algebraic surface rendering [Sed85, Zun89].

In the early years of computer graphics and CAGD, the tendency was to convert from the Bernstein to the power basis for root finding (see, for example, [Kaj82]), since the power basis is more "familiar" and library routines are commonly available to solve for polynomial roots in the power basis. However, it has subsequently been demonstrated [FR87] that the Bernstein basis is inherently better conditioned than the power basis for finding real roots on the unit interval (and this is all that is of interest when dealing with Bézier curves anyway). Thus, root finding algorithms for polynomials in Bernstein form are not merely a convenience; they in fact offer significantly better accuracy than their power basis counterparts. Furthermore, by expressing Bernstein form polynomials as explicit Bézier curves, one can take advantage of geometric insights based on the control polygon in developing root-finding algorithms.
This can allow us to robustly find all real roots in the unit interval, typically much faster than a library polynomial root finder that finds all real and complex roots. Thus, a root finder that works in the Bernstein basis could provide enhanced speed and accuracy when applied to problems whose solution traditionally relies on the computation of real polynomial roots in the power basis.

The research reported in this dissertation falls in the domain of "experimental computer science and engineering". Many of the results are empirical. Fortunately, there is now growing recognition that proof of performance through such experimental means is an acceptable and even crucial form of computer research [ECS94]. Nearly 25 years ago, at a symposium on the numerical solution of nonlinear problems, Henrici remarked:

    I do not feel that the polynomial problem has been solved perfectly, not even from a practical point of view [Hen70].

Our study of root finding for polynomials in Bernstein form suggests that the field remains fruitful even now.

1.1 Literature Search

The first known work on our problem is an algorithm, due to Lane and Riesenfeld [LR81], which repeatedly applies the de Casteljau algorithm to isolate and refine roots of polynomials in Bernstein form. Several of the algorithms presented in this dissertation were first documented in an unpublished report drafted nearly ten years ago [SSd86]. The significant developments on the numerical stability of the Bernstein form that arose in the intervening decade justify our procrastination in submitting that report for publication; we always suspected there was much more waiting to be reported! [Gra89] presents an algorithm for finding roots of B-spline functions, which is similar to the convex hull root finding algorithm in [SSd86]. [Sch90a] describes basically the same algorithm.
An intriguing algorithm based on "parabolic hulls" is introduced in [RKF88]; the approach involves a more sophisticated method for isolating roots. [Roc90] presents a heuristic for accelerating the root isolation process. Each of these algorithms is reviewed more thoroughly in [§2.6] in Chapter 2.

This dissertation borrows concepts from the following four fields:

1. Computer Aided Geometric Design and Graphics [SP86, FR87, RKF88, FR88]
2. Numerical Analysis [Act70, Buc66, BFR81, Bor85, Bre73, Dod78, DM72, FMM77, IK66, Ham71, Hen64, Hen82, Hil74, Hou53, Hou70, Mac63, Mar66, McN73, Mer06, Mor83, Non84, Ost60, Pen70, Pet89, PC86, Piz75, PW83, PFTV90, RR78, Ric83, Sta70, Tra64]
3. Theory of Equations [Bor50, BP60, Caj04, Dic22, Mac54, Tur52, Usp48]
4. Higher Algebra [Boc64, Dav27, FS65, Kur80, MP65, Sal85]

1.2 Overview

Chapter 2 outlines some basic concepts pertaining to root finding for polynomials in power and Bernstein form. Chapter 3 discusses numerical considerations for implementing root finding schemes in floating-point arithmetic. Chapter 4 presents one of the most significant results of our research: the fact that root condition for polynomials in Bernstein form can often be significantly improved by shrinking the domain in the earliest stages of coefficient construction. For example, in the curve intersection algorithm mentioned earlier, if one of the curves is split in half before proceeding with the intersection algorithm, the accuracy of the final polynomial roots (which indicate the curve parameter values at the intersection points) can be greatly improved.

Chapter 6 outlines several different algorithms for root finding of polynomials in Bernstein form. Chapter 5 describes basic functions common to those root finding algorithms.
Chapter 7 compares the execution speed and numerical precision of the algorithms in Chapter 6 and of some general purpose power form polynomial root finding schemes. Chapter 8 concludes and summarizes this work.

Chapter 2

Review of Polynomial Real Root Finding

This chapter reviews basic concepts and algorithms concerned with computing the real roots of polynomials represented in the power and Bernstein bases.

2.1 Polynomial Representations and Properties

We begin by recalling some definitions and properties of polynomials represented in both power and Bernstein form.

2.1.1 Power Form Polynomials

A degree n polynomial in the power basis is defined by

    f(t) = \sum_{i=0}^{n} p_i t^i    (2.1.1.1)

with p_n \neq 0.

2.1.2 Bernstein Form Polynomials

A degree n polynomial in Bernstein form is given by

    y_{[0,1]}(t) = \sum_{k=0}^{n} y_k \binom{n}{k} (1-t)^{n-k} t^k    (2.1.2.2)

where \binom{n}{k} (1-t)^{n-k} t^k is the kth Bernstein basis function of degree n, and the y_k are the Bernstein coefficients. The subscript [0,1] indicates that the domain of interest is t ∈ [0,1]. A polynomial in Bernstein form can be defined over arbitrary domain intervals by introducing the change of variables

    u = \frac{t-a}{b-a}    (2.1.2.3)

that maps t ∈ [a,b] to u ∈ [0,1]. We then have

    y_{[a,b]}(t) = \sum_{k=0}^{n} y_k \binom{n}{k} (1-u)^{n-k} u^k = \sum_{k=0}^{n} y_k \binom{n}{k} \frac{(b-t)^{n-k}(t-a)^k}{(b-a)^n}    (2.1.2.4)

Explicit Bézier Curves

Useful geometric properties may be associated with a polynomial in Bernstein form by casting it as an explicit Bézier curve (see Figure 2.1):

    P_{[a,b]}(t) = (t, y_{[a,b]}(t)) = \sum_{k=0}^{n} P_k \binom{n}{k} (1-u)^{n-k} u^k    (2.1.2.5)

where the control points P_k are evenly spaced along the horizontal axis:

    P_k = \left( a + \frac{k}{n}(b-a),\; y_k \right).    (2.1.2.6)

The control polygon is formed by the set of line segments connecting adjacent control points. The phrase control polygon is not an entirely accurate label, since it is not actually a closed polygon: P_n and P_0 are not connected.

[Figure 2.1: Bernstein polynomial as an explicit Bézier curve]

Convex Hull Property

We will make extensive use of the convex hull property of Bézier curves, which states that a Bézier curve lies entirely within the convex hull of its control points, as illustrated in Figure 2.2. The convex hull property is assured because the Bernstein polynomial basis functions \binom{n}{k} (b-t)^{n-k}(t-a)^k / (b-a)^n sum to one and are non-negative for t ∈ [a,b].

Variation Diminishing Property

The variation diminishing property is really an expression of Descartes' Law of Signs in the Bernstein polynomial basis. It states that no line intersects a Bézier curve more times than it intersects the control polygon. More specifically, if a given line intersects a Bézier curve n_1 times and the same line intersects the control polygon n_2 times, then

    n_1 = n_2 - 2 n_3    (2.1.2.7)

where n_3 is a non-negative integer. For example, for the case of the t-axis in Figure 2.1, n_1 = n_2 = 3. Equation (2.1.2.7) requires us to count multiple intersections properly, so a simple tangent intersection accounts for two intersections.

[Figure 2.2: Convex hull property]

Two implications of the variation diminishing property are immediately apparent. First, if all coefficients of y_{[a,b]}(t) have the same sign, there can be no roots in [a,b]. Second, if the control polygon of P_{[a,b]}(t) crosses the t-axis exactly once, then y_{[a,b]}(t) has exactly one root in [a,b].
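These two implications can be tried numerically. The sketch below is illustrative only (the function names are mine, not the dissertation's own routines, which appear later in Chapter 5): it evaluates a Bernstein-form polynomial over [0,1] by repeated linear interpolation of its coefficients, in the style of the de Casteljau algorithm, and counts coefficient sign variations, which by the variation diminishing property bound the number of roots in [0,1].

```python
def de_casteljau(y, t):
    """Evaluate y_[0,1](t) = sum_k y_k C(n,k) (1-t)^(n-k) t^k from its
    Bernstein coefficients y_0..y_n by repeated linear interpolation."""
    b = list(y)
    for r in range(1, len(b)):
        for k in range(len(b) - r):
            b[k] = (1.0 - t) * b[k] + t * b[k + 1]
    return b[0]

def sign_variations(y):
    """Count sign changes in the coefficient sequence (zeros skipped).
    By the variation diminishing property, the number of roots of
    y_[0,1](t) in [0,1], counted with multiplicity, equals this count
    minus a non-negative even integer (the n_3 of equation 2.1.2.7)."""
    s = [c for c in y if c != 0.0]
    return sum(1 for a, b in zip(s, s[1:]) if a * b < 0.0)

# Coefficients [1, -1, 1] give y(t) = (1 - 2t)^2: a double root at t = 1/2.
coeffs = [1.0, -1.0, 1.0]
print(de_casteljau(coeffs, 0.5))   # the double root: evaluates to 0.0
print(sign_variations(coeffs))     # 2 variations -- the root counted twice
```

Note that if all coefficients share one sign, `sign_variations` returns 0, reproducing the first implication above: no roots can exist in [0,1].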
2 Polynomial Real Root Localization The first step for many root finding methods is to divide the domain into disjoint intervals which each contain zero or one root , a procedure known as root isolation . The more generic term localization [ Hou 7 0 ] refers to root isolation as well as the determination of global root bounds . Localization schemes are usually based on examining simple expressions involving only the polynomial coefficients . This section is limited to a discussion of real root localization methods . Complex root localization techniques are outlined in [ Hou 7 0 , Mar 6 6 ] . If a polynomial has real coefficients , CHAPTER 2 . REVIEW OF POLYNOMIAL REAL ROOT FINDING 9 then any complex roots must occur in conjugate pairs . The following three subsections review techniques for globally bounding , and locally isolating , distinct as well as multiple real roots . 2 . 2 . 1 Techniques for Bounding Real Roots This section reviews techniques for determining global bounds for real roots of polynomials in power form . Concepts relevant to bounding real roots of polynomials in Bernstein form are presented in [ x 5 . 7 ] . The real roots of polynomials expressed in power form can lie anywhere in the open interval ( ?? 1 ; + 1 ) . The only way a degree n polynomial in power form can have a root exactly at infinity is if its leading coefficient is pn = 0 . Here we review four of several classical methods for determining a tighter upper bound on the positive real roots of a power basis polynomial [ Kur 8 0 , MP 6 5 , BP 6 0 , Caj 0 4 ] . With some simple transformations , it is possible to use these bounding methods to also find lower bounds on positive roots , as well as upper and lower bounds on negative roots . Suppose a method exists which guarantees that all real roots of f ( t ) are less than an upper bound B 1 . To compute a lower bound for the positive real roots of f ( t ) , find the upper bound B 2 of the roots of 2 tnf ( 1 = t ) . 
Then a lower bound for the positive roots of f(t) is 1/B_2, since if α is a positive root of f(t), then 1/α will be a root of φ_2(t), and if 1/α < B_2, then α > 1/B_2. Likewise, if B_3 is an upper bound on the roots of φ_3(t) = f(−t), then −B_3 is a lower bound on the negative roots of f(t); and if B_4 is an upper bound on the roots of φ_4(t) = t^n f(−1/t), then −1/B_4 is an upper bound on the negative roots of f(t).

Listed below are several of the more convenient and less computationally intensive methods for finding positive upper bounds on the real roots of a polynomial. Others not cited can be found in [Hou70, Mar66, FS65]. Many of these theorems are simplified versions of their more general forms for bounding the complex roots of a polynomial found in [Hou70, Mar66].

Table 2.1: Global Bounding of Power Form Polynomial Real Roots
Positive roots of f(t) lie in [1/B_2, B_1]; negative roots of f(t) lie in [−B_3, −1/B_4].

    Polynomial              | Coefficients                                | Upper bound of roots
    φ_1(t) = f(t)           | f_0, f_1, …, f_n                            | B_1
    φ_2(t) = t^n f(1/t)     | f_n, f_{n−1}, …, f_0                        | B_2
    φ_3(t) = f(−t)          | f_0, −f_1, f_2, …, (−1)^n f_n               | B_3
    φ_4(t) = t^n f(−1/t)    | (−1)^n f_n, (−1)^{n−1} f_{n−1}, …, f_0      | B_4

In the following discussion, we assume that f(t) = Σ a_i t^i is a degree n monic polynomial (a_n = 1). The Cauchy bound [Kur80, MP65, Hou70] is usually too conservative when one is only interested in real roots [Kur80], although it does provide a basis for other methods.

Theorem 2.2.1.1 (Cauchy Bound) All real roots of f(t) ∈ R[t] are contained in the closed interval [−B, B], where

    B = 1 + max_i |a_i|.    (2.2.1.8)

The modified Cauchy bound [Mac54, Dav27, Mer06, Caj04] has the Cauchy bound as its worst case, and thus may give a closer bound.

Theorem 2.2.1.2 (Modified Cauchy Bound) Let N be the absolute value of the most negative coefficient of f(t) ∈ R[t]. Then

    B = 1 + N    (2.2.1.9)

is a positive upper bound for all real roots of f(t).

The negative inverse sum bound [BP60, Dic22, Caj04] and the Maclaurin bound [Kur80, MP65, FS65, BP60, Dic22, Mer06] are nearly always much better, although still rather conservative. Examples in [BP60, Dic22] illustrate that both of these bounds are polynomial dependent; either one may yield a better bound than the other.

Theorem 2.2.1.3 (Negative Inverse Sum Bound) For each negative coefficient a_j of f(t) ∈ R[t], let S_j be the sum of all positive coefficients following a_j in the sequence a_0, …, a_n. Then

    B = 1 + max_j ( |a_j| / S_j )    (2.2.1.10)

is a positive upper bound for all real roots of f(t).

Theorem 2.2.1.4 (Maclaurin Bound) Let N be the absolute value of the most negative coefficient of f(t) ∈ R[t], and let k be the index of the last negative coefficient in the sequence a_0, …, a_n. Then

    B = 1 + N^{1/(n−k)}    (2.2.1.11)

is a positive upper bound for all real roots of f(t).

The term grouping bound [MP65, BP60] and the Newton bound [Kur80, MP65, BP60, Mac54, Tur52] both require initial guesses, and are more computationally intensive, but they ordinarily provide closer bounds than any of the preceding methods [BP60], with the Newton bound providing the tightest bound on real roots.

Theorem 2.2.1.5 (Term Grouping Bound) Let f(t) ∈ R[t] be expressed in the form f_1(t) + f_2(t) + … + f_k(t), such that the ordered coefficients of each of the polynomials f_i(t) exhibit only one change of sign, the coefficients of the highest-degree terms being positive.
Then any positive number B that renders f_i(B) > 0 for i = 1, …, k is an upper bound for all real roots of f(t).

Theorem 2.2.1.6 (Newton Bound) Let f(t) ∈ R[t] and let B be a number such that f(B), f′(B), f″(B), …, f^(n)(B) are all positive. Then B is a positive upper bound for all real roots of f(t).

The values of f(B), f′(B), …, f^(n)(B) are easily found using Horner's algorithm [§5.2.1], [MP65, Mac54].

2.2.2 Isolation Techniques for Distinct Real Roots

The classical literature is abundant with root isolation methods that separate the distinct real roots of a polynomial. Root isolation is an important step in solving for the roots of a polynomial, since many iterative real root approximation methods require a "sufficiently close" starting value or a narrow inclusion interval to guarantee that their sequences of approximations converge efficiently to a root. We now review some of the salient concepts for isolating real roots.

Polynomial Factors and Roots

The result of substituting a number α for x into a polynomial f(x) is a number called the value f(α) of the polynomial at x = α [Usp48]. If f(α) is equal to zero the polynomial f is said to vanish, and α is called a zero or root of the polynomial f [Bor50]. An algebraic equation of degree n is an expression formed by equating a non-zero degree-n polynomial f(x) ∈ F[x] to zero [Caj04, Usp48, Bor50, Tur52],

    f(x) = 0.    (2.2.2.12)

Let f(x) ∈ F[x] and suppose that K is a field containing F, i.e., K is an extension of F such that F ⊆ K [Bor50]. The result of substituting any number α ∈ K for x into the polynomial f(x) yields

    f(α) = Σ_{i=0}^{n} a_i α^i = β,

where β ∈ K is the value of f at x = α. By definition, α is called a zero, root, or solution of the equation f(x) = 0 when β = 0 [BP60, Hou70].
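The repeated synthetic division behind Horner's algorithm can be sketched as follows, and gives a direct test of the Newton bound of Theorem 2.2.1.6. This is an illustrative Python sketch with hypothetical helper names, not code from this dissertation; the d-th synthetic-division remainder equals f^(d)(t)/d!, so scaling by d! recovers the derivative values.

```python
def horner_derivatives(a, t):
    """All values f(t), f'(t), ..., f^(n)(t) of the power-form polynomial
    with ascending coefficients a = [a_0, ..., a_n], via repeated synthetic
    division by (x - t): the d-th remainder is f^(d)(t)/d!."""
    n = len(a) - 1
    c = list(reversed(a))            # descending coefficients
    values, fact = [], 1
    for d in range(n + 1):
        b = [c[0]]                   # one synthetic-division pass
        for i in range(1, len(c)):
            b.append(b[-1] * t + c[i])
        values.append(b[-1] * fact)  # remainder * d!  ==  f^(d)(t)
        c = b[:-1] or [0.0]          # quotient becomes the next dividend
        fact *= d + 1
    return values

def is_newton_bound(a, B):
    """Theorem 2.2.1.6: B bounds the real roots if all f^(d)(B) > 0."""
    return all(v > 0 for v in horner_derivatives(a, B))

# f(t) = t^3 - 7t^2 + 7t + 15 = (t+1)(t-3)(t-5); its largest real root is 5,
# and B = 8 satisfies the Newton bound while B = 4 does not.
```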
The existence of a root of any algebraic equation, or non-zero polynomial, is guaranteed by the fundamental theorem of algebra [Dic22, Boc64, MP65, Ric83].

Theorem 2.2.2.1 (The Fundamental Theorem of Algebra) Any non-zero polynomial of degree n (a_n ≠ 0) in C[x] (and thus in R[x]) always has at least one complex¹ (real or imaginary) root.

In other words, every algebraic equation has at least one solution [Caj04, Usp48, Tur52, Hou70, Hil74]. This theorem only guarantees the existence of a root; determining the number and type of the roots of f(x) is more involved.

¹ A complex number a + ib, with a, b ∈ R, i = √−1, is real if b = 0, imaginary (or nonreal) if b ≠ 0, and pure imaginary if a = 0.

The definitions of value and root of a polynomial may also be established on the basis of the remainder theorem [Caj04, Dic22, Usp48, Bor50, Mac54, Hou70, Kur80], and its corollary the factor theorem [Caj04, Dic22, Bor50, Mac54, Hou70, Kur80], both a consequence of the polynomial division algorithm represented by equation (2.2.2.13) and the conditions (2.2.2.14) and (2.2.2.15).

Theorem 2.2.2.2 (Polynomial Division Algorithm) For any two polynomials f(x), f_1(x) ∈ F[x] (called the dividend and the divisor polynomials, respectively), where f_1(x) ≠ 0, there exists a unique pair of polynomials q(x), r(x) ∈ F[x] (the quotient and remainder polynomials) such that

    f(x) ≡ f_1(x) q(x) + r(x)    (2.2.2.13)

with either

    Deg(r) < Deg(f_1)    (2.2.2.14)

or

    r(x) = 0.    (2.2.2.15)

Theorem 2.2.2.3 (The Remainder Theorem) If the polynomial f(x) is divided by the linear term x − α, where α is any number, then the remainder is a constant, equal to the value f(α) of f(x) at x = α, i.e.,

    f(x) ≡ (x − α) q(x) + f(α).    (Deg(f) = n)    (2.2.2.16)

Theorem 2.2.2.4 (The Factor Theorem) The remainder f(α) vanishes iff x − α is a linear factor of the polynomial f(x). Equivalently, α is a root of f(x) = 0 iff x − α divides exactly (i.e., with zero remainder) into f(x), so that

    f(x) ≡ (x − α) q(x).    (Deg(f) = n, Deg(q) = n − 1)    (2.2.2.17)

Together the fundamental theorem and the factor theorem indicate the number of linear factors, and hence the number of roots, of a polynomial of degree n. These results are succinctly described by the following three theorems [Caj04, Bor50, Dic22, Ham71, Hen64, Hou53, Hou70, Kur80, Usp48].

Theorem 2.2.2.5 (Factored, or Product, Form Theorem) If Deg(f) = n ≥ 1, then f(x) ∈ C[x] is uniquely expressible in factored form as the product of n monic linear factors x − α_i and the constant a_n, where α_1, …, α_n denote the roots corresponding to the linear factors:

    f(x) ≡ a_n (x − α_1)(x − α_2) ⋯ (x − α_n)    (a_n ≠ 0).    (2.2.2.18)

Theorem 2.2.2.6 (Repeated Factored Form Theorem) If Deg(f) = n ≥ 1, then f(x) ∈ C[x] is uniquely expressible in repeated factored form as the product of k ≤ n distinct monic power factors (x − α_j)^{m_j} and the constant a_n, where the positive integers m_j sum to n and denote the multiplicities of the distinct roots α_1, …, α_k, i.e.,

    f(x) ≡ a_n (x − α_1)^{m_1} (x − α_2)^{m_2} ⋯ (x − α_k)^{m_k}    (a_n ≠ 0, n = Σ_{j=1}^{k} m_j).    (2.2.2.19)

Theorem 2.2.2.7 (Root Census Theorem) An algebraic equation of degree n may be regarded as having exactly n roots (not necessarily distinct), if the roots are counted according to their multiplicities.

Each factor (x − α_j)^{m_j} corresponds to an m_j-fold or m_j-tuple root or a multiple root of multiplicity m_j [Bor50, MP65].
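The division in equations (2.2.2.16) and (2.2.2.17) is exactly synthetic division by a linear term, and is the usual way a found root is deflated from a polynomial. The following is an illustrative Python sketch (hypothetical names, not part of this work):

```python
def deflate(a, r):
    """Synthetic division of f (ascending coefficients a = [a_0, ..., a_n])
    by the linear term (x - r). Returns (quotient, remainder). By the
    remainder theorem the remainder equals f(r); by the factor theorem it is
    zero exactly when r is a root of f."""
    n = len(a) - 1
    q = [0.0] * n
    acc = a[n]
    for i in range(n - 1, -1, -1):
        q[i] = acc
        acc = acc * r + a[i]
    return q, acc

# f(x) = x^3 - 7x^2 + 7x + 15 = (x - 5)(x^2 - 2x - 3);
# deflating at the root x = 5 leaves the quotient [-3, -2, 1].
```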
When m_j = 1, a root α_j is called a simple or distinct root. Roots of multiplicity m_j = 2, 3, 4, … are referred to as double, triple, quadruple, … roots. From a numerical perspective, groups of roots that are nearly multiple form a "cluster" of roots and are termed root clusters [Hou70]. Multiple roots are necessarily simultaneous roots of f(x) and successive derivatives of f(x) [Usp48], i.e.,

    f(α) = 0, f′(α) ≠ 0
    f(α) = f′(α) = 0, f″(α) ≠ 0
    f(α) = f′(α) = f″(α) = 0, f‴(α) ≠ 0
    …

for a simple, double, triple, … root.

The first derivative of the polynomial f(x) = Σ_{i=0}^{n} a_i x^i can be represented in the repeated factored form of equation (2.2.2.19) as

    f′(x) = Σ_{i=0}^{n−1} (i+1) a_{i+1} x^i = Σ_{j=1}^{k} m_j f(x) / (x − α_j)    (2.2.2.20)

and has as its first derivative f″(x), and so on [Caj04, Dic22, Kur80, Mac54].

For any polynomial in C[x], the number of roots is synonymous with the number of its linear factors. A discussion of polynomials in R[x] requires two additional theorems on conjugate pair roots and real factored forms [Bor50, Kur80, Mac54, McN73, Usp48].

Theorem 2.2.2.8 (Conjugate Pair Roots Theorem) If a polynomial in R[x] has an imaginary root α = a + ib of multiplicity m, it has also the conjugate value ᾱ = a − ib as a root of the same multiplicity, i.e., imaginary roots occur in conjugate pairs.

Theorem 2.2.2.9 (Real Factored Form Theorem) Every polynomial f(x) of degree n > 2 in R[x] can be expressed as a product of linear and/or quadratic factors that are irreducible monic polynomials in R[x], that is

    f(x) ≡ a_n (x − α_1)^{m_1} ⋯ (x − α_i)^{m_i} (x² + b_1 x + c_1)^{l_1} ⋯ (x² + b_j x + c_j)^{l_j},    (2.2.2.21)

where

    x² + b_j x + c_j = (x − α_j)(x − ᾱ_j).    (2.2.2.22)

Location Principles

Rolle's theorem [Tur52] relates the locations of the roots of f(x) and the roots of its derivatives.

Theorem 2.2.2.10 (Rolle's Theorem) Between any two consecutive real roots a and b of the polynomial f(x) there lies an odd number of roots (and therefore at least one root) of its derivative polynomial f′(x), a root of multiplicity m being counted as m roots.

Rolle's theorem provides the following concepts for isolating real roots of a polynomial f(x) based on the real roots of its derivative polynomial f′(x).

1. Between any two consecutive real roots c and d of f′(x) there lies at most one real root of f(x).
2. The smallest and largest real roots, α and ω, of f′(x) constrain the smallest and largest real roots of f(x) to lie in the intervals [−∞, α] and [ω, +∞], respectively.
3. Multiple roots of f(x) are necessarily roots of f′(x).
4. A method for separating the real roots of a polynomial f(x) whose roots are all simple, when all the real roots of f′(x) are known.

Variation of Coefficient Signs

The number of sign variations in the sequence of coefficients a_0, a_1, …, a_n of a degree-n polynomial f(x) is equal to the number of consecutive pairs of coefficients that are of opposite signs, after zero values have been deleted from the sequence.

Theorem 2.2.2.11 (Descartes' Rule of Signs) A real polynomial f(t) cannot have more positive real roots than there are changes of sign in its sequence of coefficients, nor more negative real roots than there are changes of sign in the sequence of coefficients of f(−t). Thus, the number of positive or negative roots will be even if there is an even number of variations of sign, and odd if there is an odd number of changes.

Sturm Sequence

Descartes' rule provides only an upper bound on the number of real roots of a polynomial.
A method that determines the exact number of distinct roots of a polynomial f(t) on an interval (a, b) is based on computing a Sturm sequence f_1(t), …, f_m(t) for f(t), defined by

    f_1(t) = f(t)
    f_2(t) = f′(t)
    f_i(t) = q_i(t) f_{i+1}(t) − f_{i+2}(t)    (i ≥ 1).

The sequence terminates upon encountering a polynomial f_m(t), possibly a constant, that does not vanish on the interval of interest.

Theorem 2.2.2.12 (Sturm's Theorem) Let f_1(t), f_2(t), …, f_m(t) be a Sturm sequence for the real polynomial f(t), and let the interval (a, b) be such that neither a nor b is a root of f(t). Then if S(a) and S(b) are the number of sign changes in the sequences of values f_1(a), f_2(a), …, f_m(a) and f_1(b), f_2(b), …, f_m(b), the number of distinct real roots of f(t) between a and b is given exactly by S(a) − S(b).

Note that Sturm's theorem does not count roots on (a, b) according to their multiplicities.

[Figure 2.3: Application of Rolle's theorem (plots of P(t), P′(t), and P″(t) for t ∈ [0, 1]).]

Budan-Fourier Sequence

A weaker result, which relies only on the polynomial and its derivatives rather than a division sequence, is the Budan-Fourier theorem:

Theorem 2.2.2.13 (Budan-Fourier) Let the interval (a, b) be such that neither a nor b is a root of the polynomial f(t) of degree n. Then if C(a) and C(b) denote the number of sign changes in the sequences of values f(a), f′(a), …, f^(n)(a) and f(b), f′(b), …, f^(n)(b), the number of roots between a and b, counted according to multiplicity, differs from C(a) − C(b) by at most a positive even amount.
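Sturm counting as just described can be sketched compactly in Python. The names below are illustrative, not from this dissertation, and the sketch assumes f is square-free (no remainder in the sequence vanishes identically) and uses a crude floating-point tolerance to trim zeroed leading coefficients:

```python
def polyval(p, t):
    """Horner evaluation of p = [p_0, ..., p_n] at t."""
    acc = 0.0
    for c in reversed(p):
        acc = acc * t + c
    return acc

def poly_mod(p, q):
    """Remainder of p divided by q (ascending coefficients, deg q >= 1)."""
    p = list(p)
    while len(p) >= len(q):
        factor = p[-1] / q[-1]
        shift = len(p) - len(q)
        for i in range(len(q)):
            p[shift + i] -= factor * q[i]
        p.pop()                       # leading term is now (numerically) zero
        while len(p) > 1 and abs(p[-1]) < 1e-12:
            p.pop()
    return p

def sturm_sequence(a):
    """f1 = f, f2 = f', f_{i+2} = -(f_i mod f_{i+1}); stop at a constant."""
    f = list(a)
    fp = [i * c for i, c in enumerate(a)][1:]
    seq = [f, fp]
    while len(seq[-1]) > 1:
        rem = poly_mod(seq[-2], seq[-1])
        seq.append([-c for c in rem])
    return seq

def sturm_count(a, lo, hi):
    """Distinct real roots of f in (lo, hi), by Theorem 2.2.2.12."""
    seq = sturm_sequence(a)
    def changes(t):
        signs = [v for v in (polyval(p, t) for p in seq) if v != 0]
        return sum(1 for u, v in zip(signs, signs[1:]) if (u > 0) != (v > 0))
    return changes(lo) - changes(hi)

# f(t) = t^3 - 7t^2 + 7t + 15 = (t+1)(t-3)(t-5)
```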
Isolating Polynomials

Another insightful approach to separating the real zeros of a given polynomial is to utilize the real roots of isolating polynomials. Rolle's theorem may be regarded as a corollary to the following theorem of Borofsky [Bor50]:

Theorem 2.2.2.14 Suppose f(x), g(x), and h(x) are real polynomials, such that a and b are consecutive real roots of f(x), and g(a) and g(b) have the same sign. Then the polynomial F(x) ≡ g(x) f′(x) + h(x) f(x) has an odd number of roots between a and b if each root is counted according to its multiplicity.

Uspensky [Usp48] describes a related theorem due to de Gua:

Theorem 2.2.2.15 (de Gua) Let f(x) be a real polynomial of degree n having distinct positive roots x_1, …, x_r with multiplicities m_1, …, m_r. Then if we define

    F(x) = λ x f′(x) + f(x),

where λ is an arbitrary non-zero constant, the following are true:

1. F(x) has at least one root in each of the intervals (x_1, x_2), …, (x_{r−1}, x_r);
2. for m_j > 1 each root x_j is also a root of F(x) with multiplicity m_j − 1; and
3. all roots of F(x) are real when λ > 0.

Similar considerations apply to the negative roots of f(x). A generalized version of the above theorem is presented by Sederberg and Chang in [SC94]; this will be discussed in detail in [§6.5]. Finally, Dedieu and Yakoubsohn [DY93] use an exclusion function to separate the real roots of a polynomial in their algorithm, which is briefly discussed in [§2.5.4].

2.2.3 Isolation Techniques Accounting for Multiple Real Roots

Multiple real roots lie, in a sense, at the limit between cases of distinct real roots and complex roots [BP60, Dic22, Mer06]. (Dickson [Dic22, p. 29] illustrates this principle for the roots of a real quadratic polynomial with a simple ruler-and-compass construction.)

Common Roots of Two Polynomials

The Euclidean algorithm for deriving the greatest common divisor (GCD) of two polynomials [Van70] provides for:

1. the determination of the multiple roots of a polynomial by reducing it to a polynomial with the same, but simple, roots; and thus
2. the determination of the number of roots of a polynomial over an interval of the real axis.

The GCD of f(x) and f′(x), denoted by

    g(x) = ( f(x), f′(x) ),    (2.2.3.23)

can be used to reduce the multiple roots of f(x) to simple ones by performing the division

    h(x) = f(x) / g(x),    (Deg(g(x)) ≠ 0).    (2.2.3.24)

The polynomial h(x) then possesses the same roots as f(x), but without multiplicity.

Separation of Multiple Roots

An extension of the GCD algorithm that separates all the multiple roots of a polynomial into distinct linear factors and simultaneously determines the multiplicity of each root is described in [Dun72, Dun74, MP65, Van70] and implemented by [Dun72] for this purpose. Define a GCD sequence by

    g_0(x) = f(x)
    g_1(x) = ( g_0(x), g_0′(x) )
    g_2(x) = ( g_1(x), g_1′(x) )
    …
    g_m(x) = ( g_{m−1}(x), g_{m−1}′(x) ),    Deg(g_m(x)) = 0

and set

    h_j(x) = g_{j−1}(x) / g_j(x),    j = 1, …, m,    (2.2.3.25)

and then

    F_k(x) = h_k(x) / h_{k+1}(x),    k = 1, …, m−1;    F_m(x) = h_m(x).    (2.2.3.26)

All the roots of the polynomials F_k(x) are then simple roots, and correspond to k-fold roots of f(x). Note that f(x) cannot possess roots of multiplicity greater than m.

2.3 Polynomial Real Root Approximation

This section reviews both closed-form algebraic solutions as well as numerical iterative approximations to polynomial equations expressed in power form. The first subsection [§2.3.1] discusses closed-form solutions for low-degree equations, with special attention to methods that are concerned only with real roots. The next three subsections classify iterative numerical schemes under three general groups: basic serial iterative methods [§2.3.2], combined or hybrid serial iterative methods [§2.3.3], and simultaneous iterative methods [§2.3.4]. Specific power form polynomial root finders that find all polynomial roots, as well as power form and Bernstein polynomial root finders that find only the real roots, are reviewed in the subsequent sections [§2.4], [§2.5] and [§2.6].

2.3.1 Closed Form Solvers

Direct root-finding algorithms for equations that are algebraically solvable by radicals (the roots may be expressed explicitly in terms of the coefficients by a general algebraic formula [Lod90, MP65, Bor50, Caj04]) are called exact, or closed-form, solvers. Any polynomial equation of degree ≤ 4 can be solved in closed form; for higher degrees a general closed form solution is impossible [Kur80, MP65]. A thorough discussion of classical solutions for quadratic, cubic, and quartic equations can be found in elementary texts on the theory of equations [BP60, Mac54, Tur52, Bor50, Usp48, Dic22, Mer06, Caj04], and higher algebra [Kur80, MP65, Dav27]. In principle, these methods yield exact results, although floating-point implementations are, of course, subject to round-off error and possible ill-conditioning effects. Although numerical implementations of direct methods are subject to accuracy loss and require square or cube root extractions, special purpose library functions for solving real quadratic and cubic equations have bested general purpose root finding methods by a factor of 7 [MR75].
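The accuracy loss mentioned above is commonly tamed for the quadratic by avoiding subtractive cancellation: compute the larger-magnitude root from the quadratic formula and recover the other from the product of the roots. The following is a minimal Python sketch of that standard device (illustrative names; not the solver developed later in this work):

```python
from math import sqrt

def real_quadratic_roots(a, b, c):
    """Real roots of a*t^2 + b*t + c = 0 (a != 0), in ascending order.
    The larger-magnitude root comes from q = -(b + sign(b)*sqrt(disc))/2,
    the other from c/q, avoiding cancellation when b^2 >> 4ac."""
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []                    # complex conjugate pair; no real roots
    if disc == 0:
        return [-b / (2.0 * a)]      # double real root
    s = sqrt(disc)
    q = -0.5 * (b + s) if b >= 0 else -0.5 * (b - s)
    return sorted([q / a, c / q])
```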
Closed form solvers restricted to finding only real roots are of particular interest for CAGD and graphics applications, as discussed in [§1]. Hanrahan recorded nearly a 25 percent reduction of image rendering CPU time using exact solutions as compared to general polynomial root finders [Han83]. Numerical algorithms returning non-complex solutions are given in [Sch90b]. Numerical considerations addressing the stable computation of closed form root expressions are presented in a report by Lodha [Lod90], and are addressed for the quadratic and cubic in [PFTV90]. Vignes [Vig78] also outlines a stable cubic real root solver developed by M. La Porte. Further implementation considerations are deferred to [§5.9]. Other references discussing numerical implementation of exact solvers include [HE86, PW83, Ric83, BFR81, McN73, Hen64, Hou70, Pen70, Buc66].

2.3.2 Basic Serial Iterative Methods

Methods referred to by [Pet89] as those that compute one zero at a time and require successive pre-isolation of the roots and post-deflation to remove the corresponding linear factors, and by Householder [Hou70] as local (convergent) methods that depend on close initial guesses to roots for convergence, will be called serial iterative methods here. Basic serial iterative methods are presented in a variety of numerical texts that discuss iterative solution of the real roots of an equation.
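The simplest of these serial schemes, bisection, can be sketched in a few lines of Python (an illustrative sketch, not code from this dissertation): the sign change guarantees the interval always traps a root, and each step halves the inclusion interval.

```python
def bisect(f, lo, hi, tol=1e-12):
    """Bracketing bisection: f(lo) and f(hi) must differ in sign, so the
    interval always contains a real root; each step halves it."""
    flo = f(lo)
    if flo == 0.0:
        return lo
    assert flo * f(hi) < 0, "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0:
            return mid
        if flo * fmid < 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)
```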
Rice [Ric83] outlines several of these basic iterative methods along with their corresponding iterative steps, which include the following schemes:

- Bisection method
- Regula falsi method
- Modified regula falsi method
- Secant method
- Muller's method (see [Dun72, Mul56, PFTV90])
- Fixed-point method
- Newton's method
- Higher-order Newton method

The bisection and regula falsi methods are frequently referred to as root trapping or bracketing methods because they require a bounding interval containing the real root in their operation. Bracketing methods always converge, at a linear or superlinear rate depending on what acceleration techniques are applied. Basic serial iterative schemes may also be characterized by their use of interpolating polynomials [Hil74, Hou70, Ost60, Tra64]:

- Linear interpolation, e.g., as in the secant method.
- Linear inverse interpolation, e.g., the regula falsi methods.
- Quadratic interpolation, e.g., Muller's method.
- Other interpolation-based schemes, e.g., Newton's method.

Any of these methods can be combined, extended, and applied to general purpose polynomial root finding schemes for polynomials in both power and Bernstein forms.

2.3.3 Hybrid Serial Iterative Methods

Hybrid serial methods combine basic iterative root bracketing techniques (which always converge, but usually at a slow rate) with the faster convergent schemes that occasionally exhibit non-convergence [PW83]. The result is a hybrid approximation strategy that always converges at a faster rate by using a higher order convergence technique while the iterative step remains within the specified interval, and the bracketing technique when the result steps out of the interval.
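The hybrid strategy just described can be sketched as follows (a minimal Python illustration with hypothetical names, not a specific published algorithm): a Newton step is accepted while it stays inside the current bracket, and is otherwise replaced by a bisection step, so the iteration cannot diverge yet converges quadratically near a simple root.

```python
def newton_bisect(f, df, lo, hi, tol=1e-12, max_iter=200):
    """Hybrid Newton/bisection root finder on a bracketing interval [lo, hi]."""
    flo, fhi = f(lo), f(hi)
    assert flo * fhi < 0, "root must be bracketed"
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        if flo * fx < 0:                 # keep the sign change inside [lo, hi]
            hi = x
        else:
            lo, flo = x, fx
        dfx = df(x)
        newton = x - fx / dfx if dfx != 0.0 else lo - 1.0
        # accept the Newton step only if it stays inside the bracket
        x_new = newton if lo < newton < hi else 0.5 * (lo + hi)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```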
Hybrid methods found in the literature include a secant-false position algorithm [PW83] in PASCAL, a Newton-bisection algorithm in both FORTRAN and C [PFTV86, PFTV90], a Newton-regula falsi algorithm in C by Blinn [Gla89], and an algorithm in FORTRAN and C [FMM77, PFTV86, PFTV90] by Brent [Bre71], based on previous work by Van Wijngaarden and Dekker, which combines root bracketing, bisection, and inverse quadratic interpolation to approximate a real root.

2.3.4 Simultaneous Iterative Methods

Methods referred to by Petkovic [Pet89] as those that simultaneously determine all zeros of a polynomial, and by Householder [Hou70] as global (convergent) methods that are not dependent on close initial guesses to roots for convergence, are designated here as simultaneous iterative methods. Such methods are not the focus of this study, but some of the more established algorithms are discussed in [§2.4] as a reference to their use as benchmarking algorithms compared against the algorithms presented in Chapter 6. Other simultaneous methods mentioned in the literature include Bernoulli's method [Ait26], Graeffe's method [Gra65], the quotient-difference method [Mor83, Hen82, BFR81, McN73, Hou70, Hen64], Petkovic's simultaneous inclusion method [Pet89, Pet81], Gargantini's method [Gar79], etc.

2.4 Power Form General Purpose Root Finders

This section reviews some general purpose root finding methods found in the literature that approximate all the roots of polynomials in power form. Root finders discussed below include Jenkins and Traub's Rpoly, Dunaway's Composite, Madsen and Reid's Newton-based, and Laguerre's methods.
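Laguerre's method, reviewed at the end of this section, takes steps of the form x ← x − n / (G ± √((n−1)(nH − G²))) with G = f′/f and H = G² − f″/f, choosing the sign that maximizes the denominator's magnitude. A minimal Python sketch of this classical iteration follows (illustrative names, no safeguards, and not the FORTRAN/C code of [PFTV86, PFTV90]); working in complex arithmetic keeps the square root defined.

```python
import cmath

def polyval_all(a, x):
    """f(x), f'(x), f''(x) for ascending power-form coefficients a."""
    f = fp = fpp = 0.0
    for c in reversed(a):
        fpp = fpp * x + fp
        fp = fp * x + f
        f = f * x + c
    return f, fp, 2.0 * fpp

def laguerre(a, x0=0.0, tol=1e-12, max_iter=100):
    """Classical Laguerre iteration toward one root of f."""
    n = len(a) - 1
    x = complex(x0)
    for _ in range(max_iter):
        f, fp, fpp = polyval_all(a, x)
        if f == 0:
            return x
        G = fp / f
        H = G * G - fpp / f
        s = cmath.sqrt((n - 1) * (n * H - G * G))
        d = G + s if abs(G + s) >= abs(G - s) else G - s
        step = n / d
        if abs(step) < tol:
            return x - step
        x = x - step
    return x
```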
Additional general purpose methods not outlined here, but referenced in other works, include the Lehmer-Schur method [PFTV90, Mor83, Dun72, Act70, Hou70, Hou53], Lin's linear and quadratic methods [Hil74, McN73, Act70, Buc66, Lin43, Lin41], Bairstow's Newton quadratic factor method [PFTV90, PC86, Mor83, Hen82, RR78, Piz75, Hil74, McN73, Dun72, Ham71, Hou70, Buc66, IK66, Hen64], as well as other higher order methods referenced in [HP77, Hil74, Dun72, Hou53].

Jenkins-Traub's Method

Jenkins' Rpoly FORTRAN program, listed in [Jen75], finds all the zeros of real polynomials in power form. Rpoly is based on a three-stage algorithm described in [JT70] which extracts an approximate linear or quadratic factor from a sequence of polynomials, of degree one less than the original polynomial, that are generated by various shifting operations. The real zero or quadratic factor is then deflated from the original polynomial and the process repeats on the reduced polynomial until all zeros are approximated. The first and second stages are linearly convergent, and the third stage is superquadratically convergent and thus usually requires only a few iterations. Termination criteria for stage three iterations are based on round-off error analysis. Due to its robust and efficient performance, Rpoly has become established as a de facto state-of-the-art algorithm that provides a basis of comparison for many root finding algorithms. Jenkins and Traub's algorithm implemented for real polynomials reported about 2n² milliseconds to calculate all the zeros for degree n ≤ 20 real polynomials, in contrast to about 8n² milliseconds for their complex version [JT70].
Other polynomial root finding publications that refer to these methods include [PFTV90, Ric83, BFR81, RR78, MR75, Dun72, Hou70].

Dunaway's Composite Method

Dunaway's Composite FORTRAN algorithm is included in [Dun72]. It computes all the roots of real power form polynomials, with particular emphasis on the problem of solving polynomials possessing multiple roots with greater accuracy and reasonable efficiency than had previously been available. This general purpose algorithm is actually applicable to any class of polynomials with real coefficients. In essence, the algorithm can be divided into two separate stages, finding the real roots first, and then the complex roots. The first part comprises several procedures which first isolate a GCD sequence of unique polynomial factors containing distinct real roots, and then solve for these roots using a Sturm-Newton strategy. The second part isolates the complex factors by forming interpolating polynomials as an alternative to deflating the real roots, and then uses a Lehmer-Schur method to approximate initial root estimates for a complex Newton method. The composite algorithm was compared against [JT70] for real polynomials and yielded a significant increase in accuracy and reasonable speed for the cases tested, which included randomly generated polynomials with multiplicities from one to nine [Dun74]. Other related works which refer to this composite algorithm include [GK90, FL85, LN79, Mat79, JT75].

Madsen-Reid's Newton-Based Method

Madsen and Reid's Newton-based FORTRAN algorithms for both complex and real polynomial coefficients are included in [MR75]. Based on Madsen's earlier work [Mad73], they utilize a two-stage global and local Newton step process to isolate and approximate all the zeros of a polynomial.
To ensure stable deflation, the algorithm repeatedly approximates the zero of smallest modulus by first successively searching and computing tentative Newton-based steps until the iterate is close enough to apply straightforward Newton approximation, which converges quadratically to the root. Error estimation methods expanded from Peters and Wilkinson [PW71] are implemented to enhance convergence criteria for both complex and real coefficient cases. The method reports a two-fold advantage over the Jenkins and Traub algorithm [JT70]: a 2-4 times increase in efficiency with at least as accurate results, and a simpler algorithm yielding less bulky code (object code about 2/3 as long). The FORTRAN code is set up to easily accommodate various precisions, checks for leading and trailing zero coefficients, and normalizes deflated coefficients. Other works referring to this method include [Wes81, RR78, SR77, Moo76].

Laguerre's Method

Laguerre's method FORTRAN and C algorithms [PFTV86, PFTV90] find all the roots of a complex polynomial by first isolating guaranteed initial approximations for any root (real, complex, simple, and multiple) using parabolas, and then use further refinement, or root polishing, techniques to converge to within the required precision. Laguerre's method is guaranteed to converge independent of any initial values and requires the calculation of f(t), f'(t), and f''(t) at each stage of the iteration, which provides cubic convergence near simple real roots and linear convergence near multiple zeros. Other works referring to this method include [Ric83, RR78, Act70, Hou70, Bod49].

2.5 Power Form Real Root Finders

This section reviews existing power form real root finding methods, which may be categorized according to how they isolate the real roots of the polynomial. We consider methods based on:
1. Sturm sequences,
2. Differentiation,
3. Variation of signs,
4. Interval techniques.

2.5.1 Sturm Sequence Techniques

The determination of the number of real roots on an interval using Sturm sequences has been described above. One root-finding technique using Sturm sequences is described in [Dun72]. Another is the method of Hook and McAree.

Hook-McAree's Method

Hook and McAree present a C implementation using Sturm sequences to bracket the real roots of power form polynomials for subsequent refinement by the modified regula falsi method [HM90]. The isolation phase builds a Sturm sequence using a pseudo-polynomial remainder sequence derived from the original polynomial and its first derivative, as outlined in [§2.2.2], and then applies a bisection technique to the sequence until a single sign variation is achieved. This algorithm was used by a ray tracing program to render algebraic surfaces of arbitrary order, an application which frequently encounters ill-conditioned polynomials.

2.5.2 Differentiation Techniques

Polynomial derivative sequences provide natural bounding intervals for the real roots of the given polynomial, as exemplified by the work of Collins and Loos.

Collins-Loos' Method

Collins and Loos [CL76] present a polynomial real root isolation method that utilizes recursive derivative sequences of the given power form polynomial to achieve root interval separation, applying Rolle's theorem, a simple tangent construction heuristic, and polynomial greatest common divisor calculations to determine the existence of two or no roots in an interval. The algorithm is compared both analytically and empirically to Heindel's Sturm sequence algorithm [Hei71], which uses integer arithmetic to compute the real zeros of a polynomial.
It performs significantly faster in general, which is attributed primarily to the smaller coefficients in the derivative sequence versus the Sturm sequence. Test cases consisted of both random and perturbed random product polynomials as well as Chebyshev and Legendre polynomials up to degree 25. Other references that cite this work include [CA76, CL82, MC90, Rum79].

2.5.3 Variation of Signs Techniques

Real root isolation strategies that use Descartes' rule of signs to bound the roots are contained in the work by Collins and Akritas.

Collins-Akritas' Method

Collins and Akritas [CA76] present a polynomial real root isolation method that combines Descartes' rule of signs with linear fractional transformations. This is a modification of Uspensky's algorithm based on a theorem by Vincent [Usp48]. The algorithm is 2 to 8 times more efficient than the Collins and Loos method described above. Other sources that cite this work include [CL82, MC90].

2.5.4 Interval Techniques

Real root isolation schemes that construct bounding intervals to contain and exclude the roots include Hansen's Newton interval and Dedieu and Yakoubsohn's exclusion interval methods, respectively.

Hansen's Newton Interval Methods

Hansen develops polynomial real root finding methods [Han78b, Han78a] that extend the interval analysis introduced by Moore [Moo66] to Newton's method. This guarantees isolation and approximation of all the real roots of a polynomial by bounding intervals. Other interval versions of Newton's method are cited in Hansen's works as well as in [AH83, Han69, Moo79]. Further sources extending these concepts to solutions of complex roots, as well as systems of non-linear equations, include [AH83, GH73, Han69, HG83, Moo77, Moo79].
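The basic interval Newton contraction can be illustrated with a minimal sketch (Python for brevity; the function names are mine, and this simplification assumes the derivative enclosure never contains zero, whereas a full implementation such as Hansen's must use outward rounding and extended interval division):

```python
def imul(p, q):
    """Interval product [p] * [q]."""
    prods = (p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1])
    return (min(prods), max(prods))

def ihorner(coeffs, x):
    """Interval Horner evaluation enclosing {p(t) : t in x};
    coeffs run from highest to lowest degree."""
    acc = (coeffs[0], coeffs[0])
    for a in coeffs[1:]:
        lo, hi = imul(acc, x)
        acc = (lo + a, hi + a)
    return acc

def horner(coeffs, t):
    acc = 0.0
    for a in coeffs:
        acc = acc * t + a
    return acc

def interval_newton(coeffs, lo, hi, tol=1e-12):
    """Contract [lo, hi] around a simple root via X' = X intersect
    (m - f(m)/F'(X)), assuming F'(X) excludes zero on the interval.
    Returns None if the interval is proven root-free."""
    n = len(coeffs) - 1
    dcoeffs = [a * (n - i) for i, a in enumerate(coeffs[:-1])]
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        dlo, dhi = ihorner(dcoeffs, (lo, hi))
        fm = horner(coeffs, m)
        steps = sorted((m - fm / dlo, m - fm / dhi))
        lo, hi = max(lo, steps[0]), min(hi, steps[1])
        if lo > hi:
            return None
    return 0.5 * (lo + hi)
```

For x^2 - 2 on [1, 2] this contracts rapidly onto sqrt(2), while on [3, 4] the first intersection is already empty, certifying the interval root-free.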
Dedieu-Yakoubsohn's Exclusion Method

Dedieu and Yakoubsohn's exclusion algorithm [DY93] is a real root isolation scheme that localizes all the real roots of a real polynomial, based on an exclusion function which defines intervals on which the original polynomial does not have any roots. The arrangement of the exclusion intervals provides the necessary bounding intervals for approximation by an appropriate root bracketing scheme [§2.3.2, 2.3.3]. The exclusion method guarantees convergence to an accuracy ε in O(|log ε|) steps and is stable under modifications of the initial iterate as well as rounding errors. The exclusion scheme proved stable for higher degree polynomials, as opposed to the unstable effects of a Sturm's method with identical step convergence order, although Sturm's method exhibits better efficiency for well-conditioned lower degree polynomial cases. The exclusion algorithm also extends to computing complex roots. The authors provide a C version of the isolating algorithm.

2.6 Bernstein Form Real Root Finders

This section briefly reviews existing Bernstein form polynomial real root finding methods, which are grouped according to how they isolate the real roots, using either:

1. Recursive subdivision,
2. Newton-based methods,
3. Hull approximation techniques.

2.6.1 Recursive Subdivision Techniques

Bernstein root finders that repeatedly subdivide to isolate and even approximate the real roots of a polynomial are designated as recursive subdivision techniques. Three methods developed by Lane and Riesenfeld, Rockwood, and Schneider are reviewed below. Another method is presented in this document in [§6.3] that utilizes subdivision and derivative heuristics to isolate each root. The roots are then further refined using a hybrid serial approximation scheme adapted to Bernstein form polynomials.
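The methods below share two pieces of machinery: midpoint subdivision of the Bernstein coefficients by de Casteljau's algorithm, and counting control-coefficient sign variations to detect roots. A minimal Python sketch of both (illustrative only, not any of the cited implementations):

```python
def decasteljau_split(b, t=0.5):
    """Subdivide Bernstein coefficients b over [0, 1] at parameter t,
    returning coefficients of the left and right subintervals."""
    left, right = [b[0]], [b[-1]]
    rows = list(b)
    while len(rows) > 1:
        rows = [(1 - t) * p + t * q for p, q in zip(rows, rows[1:])]
        left.append(rows[0])
        right.append(rows[-1])
    return left, right[::-1]

def sign_variations(b):
    """Count sign changes in the coefficient sequence; by the variation
    diminishing property, zero changes certify a root-free interval."""
    signs = [x for x in b if x != 0]
    return sum(1 for p, q in zip(signs, signs[1:]) if p * q < 0)

# p(t) = t^2 - t + 0.21 = (t - 0.3)(t - 0.7) has Bernstein coefficients
# [0.21, -0.29, 0.21] on [0, 1]: two sign variations, hence up to two roots.
l, r = decasteljau_split([0.21, -0.29, 0.21])
# each half now shows exactly one sign variation, isolating one root apiece
```

Recursing on each half until the variation count drops to zero (discard) or the interval is small enough (accept) is the essence of the subdivision root finders reviewed next.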
Lane-Riesenfeld's Method

Lane and Riesenfeld [LR81] couple recursive bisection with properties of the Bernstein form to isolate and eventually approximate real roots over a specified interval. The polynomial is recursively subdivided at its midpoint until either the root is approximated to a specified precision, or no root exists in the polynomial subinterval, whereupon the polynomial segment is eliminated. The variation diminishing property [§2.1.2] is utilized to indicate whether or not a root exists in the subinterval. Binary subdivision incurs O(n^2) steps and provides one bit of accuracy for each subdivision step. Other related works which reference this method include [Gra89, MC90, RKF88, Roc89, Sch90a].

Rockwood's Method

Rockwood's algorithm [Roc89] is a variation on the Lane and Riesenfeld method suitable for parallel and vector processing. Instead of binary subdivision, Rockwood uses a step heuristic that estimates the root using the linearly interpolated value between two points on the curve receiving the greatest influence from the control points nearest the crossing. This estimate eventually becomes a Newton step in the vicinity of the approximate root, exhibiting quadratic convergence. The ordered real roots are found to a specified tolerance. This work is also referenced in [MC90].

Schneider's Method

Schneider's algorithm [Sch90a] is also a variant of the Lane and Riesenfeld algorithm which uses recursive binary subdivision to approximate the ordered roots over the Bernstein interval. A root is determined when either the recursion depth limit is reached, or when the control polygon crosses the abscissa once and approximates a straight line, or is flat enough; the root in the corresponding region is then computed as the intersection of the t-axis with the chord joining the first and last control points.
The flat enough termination criterion is achieved when the maximum perpendicular distance of the control points to the chord is bounded by a specified tolerance.

2.6.2 Newton-Based Techniques

Bernstein root finding schemes that use Newton-type steps to isolate and approximate the real roots of a polynomial are exemplified by the algorithms of Grandine and of Marchepoil and Chenin.

Grandine's Method

Grandine [Gra89] computes all the zeros of a univariate spline function using a Newton interval method that requires bounds on the derivative of the function, without breaking the interval into its individual polynomial pieces. The method exhibits Newton convergence for simple roots, and thus at best linear convergence for multiple roots.

Marchepoil-Chenin's Method

A report by Marchepoil and Chenin, in French, [MC90] uses Bernstein subdivision techniques combined with Newton iterations to find the ordered set of real roots over an interval for ray tracing applications. Unfortunately, the author was unable to obtain a complete translation of this report and thus cannot provide a detailed comparison with other methods. The method is benchmarked against the Rockwood and Lane-Riesenfeld algorithms described above.

2.6.3 Hull Approximation Techniques

Bernstein root finding methods can also exploit the convex hull property to isolate and even approximate the real roots of a polynomial. A method by Rajan, Klinkner, and Farouki exemplifies this type of method. Two other methods are presented in this document that use optimal convex hulls to provide steps that in the first case [§6.4] isolate the ordered real roots (which are subsequently approximated using a hybrid serial approximation method adapted to Bernstein form polynomials), and in the second case [§6.1] successively approximate the ordered real roots.
Rajan-Klinkner-Farouki's Method

Rajan et al. [RKF88] use parabolic hulls to isolate and approximate simple real roots of a Bernstein form polynomial. A parabolic hull is a parabolic generalization of the convex hull property [§2.1.2] that bounds the polynomial from above and below. This method favors high degree polynomials (examples up to degree 2048) with few roots over the interval of interest, and was up to several orders of magnitude faster than other algorithms for this type of problem. The scheme exhibits cubic convergence when approximating a root. Generalization of the technique to higher-degree hulls is also outlined.

Chapter 3

Review of Numerical Considerations

Robust computer implementation of polynomial root finding methods requires, in addition to a formal mathematical description of the algorithm, a keen awareness of the limitations of the floating-point number system and of issues of numerical stability and error propagation. This chapter reviews key numerical considerations pertaining to the implementation of polynomial real root finding algorithms. Section [§3.1] reviews sources and types of numerical error noted in the literature. Section [§3.2] discusses general deterministic and indeterministic analysis techniques for estimating bounds for round-off error. Section [§3.3] reviews numerical stability and condition properties, including the condition of polynomial roots with respect to perturbations in the coefficients, with particular attention to the power and Bernstein representations. Section [§3.4] reviews and provides an example of the conversion process between Bernstein and power forms of a polynomial. Finally, section [§3.5] presents guidelines for benchmarking the performance of polynomial root finders, identifying various classes of polynomials appropriate for testing.
3.1 Numerical Error

The computer implementation of mathematical algorithms involves a variety of ways in which numerical errors may be generated, propagated, and amplified. This section outlines the various types and sources of numerical error discussed in the literature [Buc66, Dod78, Dun72, Hen82, Hil74, Hou53, McN73, Mor83, Pen70, PC86, Piz75, RR78, Sta70], with particular emphasis on floating-point round-off error, indicating guidelines to minimize the error whenever possible.

3.1.1 Sources and Types of Numerical Error

We may distinguish between two distinct types of error, which arise in the "modeling" and "computing" stages:

Modeling Error (as distinct from numerical error) reflects the deficiencies of, for example, a discrete approximation to a continuous mathematical problem, or the possible failure of an iterative method to have guaranteed convergence to the desired solution.

Computational Error measures the numerical errors due to implementing an algorithm in floating-point arithmetic on a computer. Computational error may be classified into the following categories:

Gross Error is caused by mistakes, oversights, or malfunctions of man or machine.

Truncation Error results from using a finite approximation to represent an ideal function, equation, or value that does not have a tractable finite representation.

Convergence Error arises from either a divergent iterative process, or premature termination of a convergent process.

Overflow-Underflow Error results from the finite dynamic range of the floating-point number system. It may be largely avoided by using appropriate data scaling and normalization techniques.

Round-off Error arises from the fact that the results of floating-point operations must be rounded to be representable within the floating-point number system.
This incurs errors at each arithmetic step of an algorithm, and the propagation and possible amplification of errors incurred in preceding steps.

Although distinct from computational error, the proper modeling and formulation of a mathematical problem is an important aspect of robust implementations [§3.3]. As a general guideline for minimizing numerical error, computations should be kept as concise as possible, although there are exceptions to even this rule.

Causes of Computational Error

Computational errors arise under a variety of circumstances, such as:

1. Inherent error in the input data of a computation. This may arise from physical measurement errors, the fact that the input data is itself the result of floating-point computations, or simply the need to round "exact" input to floating-point format.

2. Cancellation error in subtracting nearly equal numbers, which incurs an amplification of pre-existing relative errors in the operands. This may be ameliorated by rearranging sums so that differences close to zero are avoided, and by using reducible subtraction operations [Vig78] which detect and ensure the summation of like signed numbers.

3. Roundoff in adding or subtracting one large and one small number. Summing numbers in increasing magnitude helps remedy this source of error.

4. Simple roundoff due to the ordinary arithmetic operations of addition, subtraction, multiplication, and division.

5. Floating-point overflow or underflow (e.g., due to division by a very small number).
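Causes 2 and 3 are easy to demonstrate in IEEE double precision (an illustrative Python snippet; the specific numbers are my choice):

```python
# Cause 2: catastrophic cancellation. Both operands carry relative errors
# of about 1e-16; their small difference exposes those errors at full scale.
a = 1.0 + 1e-12
b = 1.0
diff = a - b                          # exact answer is 1e-12
rel_err = abs(diff - 1e-12) / 1e-12   # roughly 1e-4: only ~4 digits survive

# Cause 3: adding one large and many small numbers. Each small addend is
# below one unit in the last place of the large one and is simply lost.
big, small, n = 1e16, 1.0, 10
naive = big
for _ in range(n):
    naive += small                    # naive stays exactly 1e16

# Summing in increasing magnitude retains the small contributions:
sorted_sum = sum([small] * n) + big   # small terms first, then the big one
```

Here `naive` never moves off 1e16, while `sorted_sum` correctly equals 1e16 + 10.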
Other pertinent tips for minimizing round-off error include:

- avoid (if possible) expressions of the form a^b where b is a floating-point number,
- use double precision for floating-point calculations,
- rearrange formulas to use original data rather than derived data,
- rearrange formulas to minimize arithmetic operations,
- avoid chaining operations which involve round-off error, and
- work with integers whenever possible and economically feasible.

3.1.2 Round-off Error

Discussions regarding floating-point computation preface most numerical analysis texts in part, and the subject is exclusively defined and examined by Sterbenz [Ste74]. In addition, a text by Plauger [Pla92] provides extremely useful insight regarding the standardized floating-point constants for a UNIX-based machine, usually located in the file /usr/include/float.h.

Floating-Point Computation

A computer expression in F formed by the arithmetic set of computer operators

    {⊕, ⊖, ⊗, ⊘}    (3.1.2.1)

that numerically approximates an algebraic expression defined over the set of real numbers R and formed by the arithmetic set of exact operators

    {+, −, ×, /}    (3.1.2.2)

results in numerical rounding of the algebraic expression, where F and its respective operators define the set of floating-point numbers and operations. A floating-point number x has the form

    x = m β^e ∈ [x_min, x_max] ⊂ F    (3.1.2.3)

where the signed number m ∈ F is the normalized mantissa of x, having d significant digits in the base β of the machine, and e ∈ Z is the exponent of x, lying within some range [e_min, e_max].

Machine Epsilon and Roundoff Unit

The precision of floating-point arithmetic is characterized by the machine epsilon ε, which is defined as the smallest floating-point number such that

    1 ⊕ ε > 1,  or  ε = β^(1−d),    (3.1.2.4)
and its corresponding round-off error bound by the machine roundoff unit u [Mor83], which is defined as the largest floating-point number such that

    1 ⊕ u = 1,  or  u = ε/2 = (1/2) β^(1−d),    (3.1.2.5)

which reduces to u = 2^(−d) on a binary machine. A function that calculates the machine epsilon for any machine is given in the following algorithm [FMM77].

Algorithm 3.1.2.1 (ε = ComputeMachineEps()) Compute and return the machine epsilon.
1. ε = 1
2. While ((1 ⊕ ε) > 1)
3.   ε = ε/2
   EndWhile
End of ComputeMachineEps

Round-off Error Due to Floating-Point Operations

The absolute error of a computed value x due to roundoff of a binary floating-point operation ∘ ∈ {⊕, ⊖, ⊗, ⊘} is represented by

    ε_x    (3.1.2.6)

and the corresponding relative error is denoted by

    ε_x / x    (3.1.2.7)

which measures the precision, or the number of correct digits, in x. The formulas that account for the round-off error due to floating-point operations are

    x ⊕ y = (x + y)(1 + δ)
    x ⊖ y = (x − y)(1 + δ)
    x ⊗ y = (x × y)(1 + δ)
    x ⊘ y = (x / y)(1 + δ)

where δ is the relative round-off error introduced by a single floating-point operation, which satisfies the relation |δ| ≤ u [Dod78, Hil74, Hou53, IK66, KM81, Pen70, PW71, PW83, Ric83, Sta70]. Note that these formulas do not take into consideration inherent or pre-existing errors in the operands (see [§3.2.2], [PW83]).

3.2 Estimating Bounds for Round-off Error

There are a variety of approaches to monitoring the propagation of rounding errors in numerical computations so as to estimate error bounds for computed quantities. They may be broadly classified as either deterministic or indeterministic techniques [BC86, FV85, Ham71, Hen82, Mil75b, RR78, Ric83, Ste74, Wil63].
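Before turning to these techniques, note that Algorithm 3.1.2.1 translates directly into code; a Python rendering (doubling the final value back to the last ε that passed the test is my implementation choice):

```python
def compute_machine_eps():
    """Transcription of Algorithm 3.1.2.1: halve eps until 1 + eps is
    no longer distinguishable from 1 in floating-point arithmetic."""
    eps = 1.0
    while (1.0 + eps) > 1.0:
        eps = eps / 2.0
    # the loop exits one halving past the machine epsilon, so the last
    # value that still satisfied the test is 2 * eps
    return 2.0 * eps

# On an IEEE 754 double-precision machine (beta = 2, d = 53) this yields
# eps = 2**-52, matching beta**(1 - d) in equation (3.1.2.4).
```

The raw loop value, one halving further, is the roundoff unit u = ε/2 of equation (3.1.2.5).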
Deterministic techniques estimate an upper bound for the round-off error incurred during a computation, and may be divided into the following approaches:

1. Forward error analysis,
2. Running error analysis,
3. Backward error analysis,
4. Interval error analysis.

Indeterministic techniques estimate an average bound, based on certain measures of reliability, for the round-off error incurred during a computation, and separate into the following current approaches:

1. General stochastic error analysis,
2. Permutation-perturbation error analysis.

It is important to note that these methods yield estimates of the propagated round-off error at the expense of a loss of efficiency in execution and a more complicated implementation. Madsen and Reid, whose general purpose polynomial root finder is discussed in [§2.4], reported that their running error analysis overhead increased CPU time by about 25% for simple roots and up to 100% when many multiple roots were considered [MR75]. Thus, a good rule of thumb is to use methods that avoid round-off error analysis when possible, and to use error analysis judiciously when needed.

3.2.1 Forward Error Analysis

Forward error analysis bounds the absolute or relative round-off errors of the intermediate steps of a computation. Various perspectives on forward error analysis are presented in [Mil75b, PW71, RR78, Ric83, Ste74, Wil63]. This approach is often too conservative, yielding error bounds that can be many orders of magnitude larger than the actual error and thus virtually useless when used as a convergence criterion. In addition, forward error analysis is tedious and often difficult to apply to complex computations (although, applied skillfully to straightforward equations, this approach can produce a tight error bound on the computed value). Also, it does not account for cancellation effects; see [§4.2.1] below.
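The conservatism of a forward bound is easy to observe on a simple case. For summing n numbers by recursive addition, the operation model of §3.1.2 gives the standard first-order forward bound |fl(Σ x_i) − Σ x_i| ≤ (n − 1) u Σ |x_i|; a Python check (values chosen arbitrarily) shows the actual error falling far below it:

```python
import math
import random

u = 2.0**-53  # unit roundoff for IEEE 754 double precision

random.seed(1)
xs = [random.uniform(0.0, 1.0) for _ in range(10000)]

computed = 0.0
for x in xs:
    computed += x                 # plain recursive summation

exact = math.fsum(xs)             # correctly rounded sum, for reference
actual = abs(computed - exact)
bound = (len(xs) - 1) * u * sum(abs(x) for x in xs)
# bound is a guaranteed first-order ceiling; actual is typically orders
# of magnitude smaller, since the individual roundings partially cancel
```

For ten thousand uniform terms the bound is a few units in the ninth decimal place, while the observed error is usually hundreds of times smaller.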
3.2.2 Running Error Analysis

Peters and Wilkinson coined the term running error analysis in [PW71] to describe a method that computes error estimates "on-the-fly" for the case of Horner evaluation of f(x). The essence of this method is to obtain, for each intermediate step of the computed value, an absolute or relative bound on the error due to the effects of both the inherent and round-off error. Such methods are often termed linearized running error analyses, since they neglect higher-order terms in small quantities. Works that discuss concepts related to running error analysis include [Hen64, Hen82, Hil74, Hou53, Mor83, Pen70].

The absolute error of a computed value x, represented by

    ε_x = ε_x° + ε_x^ω,    (3.2.2.8)

accounts for both roundoff and inherent error when x is computed with a binary operator: it consists of a component ε_x° due to roundoff from the floating-point operation [§3.1.2] as well as a component ε_x^ω due to the inherent error which already exists in x from prior sources of error [PW83]. The corresponding relative error is measured and denoted by

    ε_x / x = ε_x° / x + ε_x^ω / x.    (3.2.2.9)

There are two main approaches presented in the literature that account for the inherent and round-off error terms in equations (3.2.2.8) and (3.2.2.9).

The Binary Operator approach is based on the fact that variables in any sequence of a computation are only processed two at a time by binary operators. Thus, the operations composing the entire computation are arranged into a sequence of binary operations. The error bound is obtained by accumulating the errors from each binary operation, such that the effects of each binary operation contribute to the next, and so forth. A calculation graph which maps the two operands of each binary operation to the corresponding nodes of a binary graph is presented in [DM72].
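For Horner evaluation this binary-operator accumulation takes a particularly simple form. The sketch below follows the linearized recurrence in the spirit of Peters and Wilkinson; the exact constants follow one common later presentation rather than being copied from [PW71]:

```python
def horner_with_running_error(coeffs, x, u=2.0**-53):
    """Evaluate p(x) by Horner's rule while accumulating, step by step,
    a linearized running bound on the accumulated round-off error.
    coeffs run from highest to lowest degree; u is the roundoff unit."""
    s = coeffs[0]
    mu = 0.5 * abs(s)
    for a in coeffs[1:]:
        s = s * x + a                # one Horner step: at most two roundings
        mu = mu * abs(x) + abs(s)    # running accumulation of their bounds
    return s, u * (2.0 * mu - abs(s))

# Example: p(t) = (t - 1)^3 expanded in power form, evaluated near the
# triple root, where cancellation makes the relative error large.
val, err_bound = horner_with_running_error([1.0, -3.0, 3.0, -1.0], 1.0001)
# the exact value is 1e-12; |val - 1e-12| stays below err_bound, which is
# itself only a few units of roundoff, far tighter than a worst-case
# forward bound over the whole computation
```

The two extra operations per step are exactly the "at least doubles the operations" overhead noted below.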
The Partial Derivative Operator approach is based on the idea that a computation can be expressed as the sum of a sequence of linear successive operators. A partial differential equation represents the sum of these linear successive operators, each of which represents an independent variable in the differential equation. The error bound is obtained by applying the chain rule to the partial differential equation for each linear successive operator, and thus is the sum of the effects of all the errors contributing to the computation. A calculation graph which maps each partial derivative operation to a corresponding node of a directed graph is presented in [Pen70, PW83].

Both methods work for simple expressions. Complex formulations need to be decomposed or factored into simple parts which are then composed and analyzed as a simple function. Although running error analysis provides reasonably tight bounds, it at least doubles the operations in solving a particular expression.

3.2.3 Backward Error Analysis

Rather than tracking the absolute or relative errors of computed values at each intermediate step of an algorithm, backward error analysis is concerned with showing that the computed result is exact for some "neighboring" problem, corresponding to perturbed input parameters. The approach indicates a range of input values which give results differing from the true solution by an amount that reflects the cumulative effects of intermediate computational errors. Sources describing this process include [Hen82, Mil75b, Mil75a, RR78, Ric83, Ste74, Wil63]; see also [§4.2.3] below.

3.2.4 Interval Error Analysis

Interval error analysis, introduced by Moore [Moo66] and also covered by [AH83, GL70, Han69, KM81, Moo66, Moo79, RL71, Ste74], replaces each computed value x with a scalar interval represented by

    [a, b] = {x | a ≤ x ≤ b}.    (3.2.4.10)

Interval arithmetic operations for two intervals [a, b] and [c, d] are represented by

    [a, b] + [c, d] = [a + c, b + d]
    [a, b] − [c, d] = [a − d, b − c]
    [a, b] × [c, d] = [Min(ac, ad, bc, bd), Max(ac, ad, bc, bd)]
    [a, b] / [c, d] = [a, b] × [1/d, 1/c]

where interval division is defined only for denominator intervals that do not contain 0. Although interval analysis gives a guaranteed bound on the error of the computed value, this bound is often too conservative, and can be very expensive due to interval arithmetic overhead if not judiciously applied [Ham71]. Interval analysis has been applied to algorithms involving Bernstein-form computations [Rok77, SF92], bounds on interval polynomials [Rok75, Rok77], and the determination of polynomial zeros [GH72, Han70].

3.2.5 General Stochastic Error Analysis

Stochastic error analysis techniques estimate the correctness of a computation by studying the statistical variance and mean values of the generated round-off error. These values are not upper (worst-case) error bounds, but rather approximations to an upper error bound to within a certain level of confidence. Various concepts and techniques for the general application of this process are presented in [Buc66, Ham71, Hil74, Hen64, Hou53, KL73, Pen70, Piz75, RR78, Ric83, Ste74, Wil63]. One well-tested approach is the permutation-perturbation stochastic method, which is reviewed in the next subsection.
3.2.6 Permutation-Perturbation Error Analysis

The permutation-perturbation or CESTAC technique of error analysis, introduced by Vignes and La Porte [VL74] and further developed in [FV85, Vig78], determines the precision of a floating-point computation by considering the stochastic effect that roundoff, due to the permutation of the computer arithmetic operations and the perturbation of their possible images, has on the computed values. The method essentially simplifies to the following three steps which provide an error estimate, with a computed number of significant digits, with a probability of 95% as a consequence of Student's law and the central limit theorem [Alt86].

1. Compute three values {x_i}, i = 1, 2, 3, via a (machine dependent) algorithm that randomly permutes the permutable operators and randomly perturbs the last bit of the mantissa of any intermediate value.

2. Compute the mean value x̄ and the standard deviation σ of the three values {x_i} by

    x̄ = (Σ_{i=1}^{3} x_i) / 3   and   σ = sqrt( Σ_{i=1}^{3} (x_i − x̄)² / 2 ).

3. Compute the number of significant digits d of x̄ from the formula

    d = log₁₀ |x̄ / σ|.    (3.2.6.11)

Although the computational overhead is increased by at least a factor of three (step 1), this method provides an extremely accurate estimate of the incurred round-off error. This method has been successfully applied to algorithms involving computational geometry [DS90], eigenvalues [Fay83], Fourier transforms [BV80], integration of ordinary differential equations [Alt83], linear and non-linear programming [Tol83], optimization [Vig84], and parallel and closed form polynomial root finding [Alt86, Vig78]. It has been recommended for scientific computing in general in [BC86, Vig88], with sample FORTRAN codes of the machine dependent permutation-perturbation function required in the initial step provided in [BV80, Vig78].
3.3 Numerical Stability and Condition

An important indication of whether or not computational errors are likely to be significant in a given problem is the condition, or "intrinsic stability", of that problem. If small changes in the input values induce large changes in the (exact) solution, the problem is said to be ill-conditioned, and it is unrealistic to expect high-accuracy solutions from finite-precision computations. A related, but distinct, issue is whether the particular algorithm used to address a given problem is stable (e.g., whether it incurs subtraction of near-identical quantities). This section reviews some basic definitions and concepts pertaining to numerical stability and conditioning and to the condition of perturbing power form polynomial coefficients, and compares the condition of the Bernstein and the power polynomial forms.

3.3.1 Preliminary Definitions

We summarize below the basic concepts and terminology concerning numerical stability and condition [FR87, Hen82, RR78, Ric83].

Numerical Stability is a property of an algorithm which measures its propensity for generating and propagating round-off error and inherent errors in the input data.

Numerical Instability results when the intermediate errors due to roundoff strongly influence the final result.

Numerical Condition is a mathematical property of a problem which measures the sensitivity of the solution to perturbations in the input parameters.

Condition Number is one or more scalar values that describe the condition of a problem, estimating how much uncertainty in the initial data of a problem is magnified in its solution; condition numbers may be formulated in terms of both absolute and relative perturbations of the input and output values.

Well-Conditioned describes a problem characterized by a small condition number, i.e.,
, changes of the initial data yield commensurate changes of the output data , so the problem can be solved to reasonable accuracy in finite { precision arithmetic . Ill { Conditioned describes a problem characterized by a large condition number , changes of the input leading to much larger changes of the output , so that accurate solutions CHAPTER 3 . REVIEW OF NUMERICAL CONSIDERATIONS 4 5 are di cult to compute in finite { precision arithmetic . Thus , it is almost impossible for an ill { conditioned problem to be numerically stable in implementation [ Ric 8 3 ] . On the other hand , an unstable method for a well { conditioned problem may at first produce accurate results until the propagation of roundo accumulates to yield erroneous intermediate and final results [ RR 7 8 ] . Since the condition of a problem is dependent upon its formulation , it may be possible to ameliorate effects of ill { conditioning by reformulating the problem , i . e . , restating it in terms of different input / output variables [ Ric 8 3 ] . 3 . 3 . 2 Condition of Perturbing Polynomial coefficients The numerical condition of polynomials with respect to root { finding is analyzed by perturb  ing the coefficients and observing the resulting effects on the roots . This is accomplished by establishing a relationship between the roots f ig of the original polynomial f ( x ) = n X i = 0 aixi and the roots f i + ig of the perturbed polynomial f ( x ) = n X i = 0 ( ai + ai ) xi ( 3 . 3 . 2 . 1 2 ) where for purposes of analysis the coefficient errors f aig are regarded as begin specified arbitrarily ( in practice they might re ect rounding errors due to earlier computations ) . For instance , the well { known Wilkinson polynomial [ Wil 6 3 ] represented in product form by f ( x ) = 2 0 X i = 1 ( x + i ) ; f ig = f ?? 1 ; ?? 2 ; : : : ; ?? 2 0 g ( 3 . 3 . 2 . 1 3 ) yields the following roots f i + ig ( correct to 9 decimal places ) : CHAPTER 3 . REVIEW OF NUMERICAL CONSIDERATIONS 4 6  1 . 
0 0 0 0 0 0 0 0 0  2 . 0 0 0 0 0 0 0 0 0  3 . 0 0 0 0 0 0 0 0 0  4 . 0 0 0 0 0 0 0 0 0  4 . 9 9 9 9 9 9 9 2 8  6 . 0 0 0 0 0 6 9 4 4  6 . 9 9 9 6 9 7 2 3 4  8 . 0 0 7 2 6 7 6 0 3  8 . 9 1 7 2 6 0 2 4 9  1 0 . 0 9 5 2 6 6 1 4 5 0 . 6 4 3 5 0 0 9 0 4 i  1 1 . 7 9 3 6 3 3 8 8 1 1 . 6 5 2 3 2 9 7 2 8 i  1 3 . 9 9 2 3 5 8 1 3 7 2 . 5 1 8 8 3 0 0 7 0 i  1 6 . 7 3 0 7 3 7 4 6 6 2 . 8 1 2 6 2 4 8 9 4 i  1 9 . 5 0 2 4 3 9 4 0 0 1 . 9 4 0 3 3 0 3 4 7 i  2 0 . 8 4 6 9 0 8 1 0 1 when the power coefficient a 1 9 is perturbed by as little as a 1 9 = 2 ?? 2 3 [ Hou 7 0 , PC 8 6 , Piz 7 5 , RR 7 8 , Wil 6 3 ] . These results illustrate the di culties in computing the roots of high degree polynomials represented in power form . 3 . 3 . 3 Condition of Bernstein and Power Forms Both analytical [ FR 8 7 , FR 8 8 ] and empirical [ SP 8 6 ] studies indicate that polynomials ex  pressed in Bernstein form can provide superior computational stability and root conditioning than in the power form . The following results outlined in [ FR 8 7 ] regarding the improved root condition of the Bernstein basis over the power basis are duplicated for convenience : Given a Bernstein basis defined over an arbitrary interval [ a ; b ] ( typically [ 0 , 1 ] ) that contains all the roots of interest , and a power basis defined about any point ( typically the origin ) excluding the interior of [ a ; b ] , then for any simple real root of an arbitrary polynomial the root condition number 1 . is smaller in the Bernstein basis than in the power basis , 2 . decreases monotonically under Bernstein degree elevation , 3 . decreases monotonically under Bernstein subdivision , and CHAPTER 3 . REVIEW OF NUMERICAL CONSIDERATIONS 4 7 4 . is smaller in the Bernstein basis than in any other basis which may be expressed as a non  negative combination of the Bernstein basis on that interval . 
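The Bernstein-versus-power comparison can be checked numerically. The sketch below uses the standard condition number C(ξ) = Σ_i |a_i φ_i(ξ)| / |f'(ξ)| of a simple root ξ with respect to a basis {φ_i}, in the spirit of [FR87]; the test polynomial (roots k/20 on [0, 1], examined at the root 0.55) and all variable names are our own choices, not the dissertation's code.

```python
from math import comb

n = 20
roots = [k / n for k in range(1, n + 1)]
xi = 11 / n                        # examine the simple root at t = 0.55

# Power coefficients (low to high) of P(t) = prod_k (t - k/n).
p = [1.0]
for r in roots:
    p = [0.0] + p
    for i in range(len(p) - 1):
        p[i] -= r * p[i + 1]

# Bernstein coefficients of the same polynomial (up to the factor n^n),
# built by convolving scaled coefficients of the factors k(1-t) + (k-n)t.
s = [1.0]
for k in range(1, n + 1):
    out = [0.0] * (len(s) + 1)
    for i, v in enumerate(s):
        out[i] += v * k            # (1-t) part of the factor
        out[i + 1] += v * (k - n)  # t part of the factor
    s = out
b = [s[k] / comb(n, k) for k in range(n + 1)]

# f'(xi) from the product form (no cancellation); note B(t) = n^n P(t).
fprime = 1.0
for k in range(1, n + 1):
    if k != 11:
        fprime *= xi - k / n

cond_power = sum(abs(c) * xi ** k for k, c in enumerate(p)) / abs(fprime)
cond_bernstein = sum(abs(b[k]) * comb(n, k) * (1 - xi) ** (n - k) * xi ** k
                     for k in range(n + 1)) / (float(n) ** n * abs(fprime))
```

Result 1 above predicts cond_bernstein < cond_power for any root inside (0, 1), and the power-basis condition of this polynomial is enormous, consistent with the Wilkinson perturbation experiment of Section 3.3.2.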
The same holds true for multiple roots if the condition number (which is formally infinite) is regarded as the constant factor of the non-linear growth rate of the root displacements with the coefficient perturbations.

3.4 Conversion Between Bernstein and Power Forms

Consider a polynomial in Bernstein form over the [0, 1] domain (writing C(n, i) for the binomial coefficient)

    B(t) = Σ_{i=0}^{n} b_i C(n, i) (1 - t)^{n-i} t^i

and the same polynomial in power form

    P(t) = Σ_{i=0}^{n} p_i t^i.

The problem we consider is: given the Bernstein coefficients b_i, find the corresponding power coefficients p_i (Bernstein to power basis conversion), or, given the p_i, find the b_i (power to Bernstein conversion). An elegant solution to this problem can be obtained by performing a Taylor series expansion of B(t):

    B(t) ≡ B(0) + B'(0) t + B''(0) t²/2! + ... + B^(n)(0) t^n/n!.

A power basis polynomial is equivalent to a Bernstein basis polynomial (P(t) ≡ B(t)) if and only if

    P^(i)(0) t^i / i! ≡ B^(i)(0) t^i / i!,    i = 0, ..., n.

But, for the power basis, P^(i)(0)/i! = p_i, so

    p_i = B^(i)(0) / i!,    i = 0, ..., n.    (3.4.0.14)

The terms B^(i)(0) can be found as follows. Recall from Section 5.6 that the coefficients of the derivative of a polynomial in Bernstein form are

    n(b_1 - b_0), n(b_2 - b_1), ..., n(b_n - b_{n-1}).

The coefficients of the second derivative are

    n(n-1)(b_2 - 2b_1 + b_0), n(n-1)(b_3 - 2b_2 + b_1), ..., n(n-1)(b_n - 2b_{n-1} + b_{n-2}).

Since B(0) = Σ_{i=0}^{n} b_i C(n, i) (1 - 0)^{n-i} 0^i = b_0, we have

    B'(0) = n(b_1 - b_0),
    B''(0) = n(n-1)(b_2 - 2b_1 + b_0),
    ...
    B^(i)(0) = n(n-1)···(n-i+1) Σ_{j=0}^{i} (-1)^{i-j} C(i, j) b_j.

This can be written more neatly if we define the recurrence

    b^j_i = b^{j-1}_{i+1} - b^{j-1}_i,    with b^0_i ≡ b_i.

Then

    B^(i)(0) = n(n-1)···(n-i+1) b^i_0 = (n!/(n-i)!) b^i_0.

From equation (3.4.0.14),

    p_i = (n! / ((n-i)! i!)) b^i_0 = C(n, i) b^i_0.

Thus, the problem reduces to one of finding the values b^i_0. This is easily done using a difference table:

    b^0_0 = b_0 = p_0                        b^0_1 = b_1                ...   b^0_n = b_n
    b^1_0 = b^0_1 - b^0_0 = p_1/C(n,1)       b^1_1 = b^0_2 - b^0_1      ...   b^1_{n-1} = b^0_n - b^0_{n-1}
    b^2_0 = b^1_1 - b^1_0 = p_2/C(n,2)       b^2_1 = b^1_2 - b^1_1      ...   b^2_{n-2} = b^1_{n-1} - b^1_{n-2}
    ...
    b^{n-1}_0 = b^{n-2}_1 - b^{n-2}_0 = p_{n-1}/C(n,n-1)                b^{n-1}_1 = b^{n-2}_2 - b^{n-2}_1
    b^n_0 = b^{n-1}_1 - b^{n-1}_0 = p_n

Thus, to perform Bernstein to power basis conversion, load the Bernstein coefficients into the top row, compute the difference table, and scale the left column by C(n, i) to obtain the power coefficients. To perform power to Bernstein conversion, divide the power coefficients by C(n, i), load them into the left column, compute the difference table backwards, and read the Bernstein coefficients off the top row.

3.4.1 Example

Convert to power basis the degree-4 Bernstein form polynomial with coefficients (1, 3, 4, 6, 8). This is done by setting up the difference table

    1    3    4    6    8
       2    1    2    2
         -1    1    0
            2   -1
              -3

so the power coefficients are taken from the left column, times the binomial coefficients:

    p_0 =  1 · C(4,0) =  1
    p_1 =  2 · C(4,1) =  8
    p_2 = -1 · C(4,2) = -6
    p_3 =  2 · C(4,3) =  8
    p_4 = -3 · C(4,4) = -3

3.4.2 Closed Form Expression

The conversion from Bernstein to power basis can be written concisely as follows:

    p_i = Σ_{k=0}^{i} b_k C(n, i) C(i, k) (-1)^{i-k}.

Power to Bernstein conversion is accomplished by the formula

    b_i = Σ_{k=0}^{i} [C(i, k) / C(n, k)] p_k.    (3.4.2.15)

3.4.3 Numerical Ramifications of Basis Conversion

The numerical condition of basis transformation has been studied in [FR88, DD88, Far91b].
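The difference-table procedure and the closed-form inverse (3.4.2.15) can be sketched directly; the function names are ours, and the final lines reproduce the worked example of Section 3.4.1.

```python
from math import comb

def bernstein_to_power(b):
    """Power coefficients p_i = C(n,i) times the i-th forward difference b^i_0."""
    n = len(b) - 1
    row = list(b)
    p = [row[0]]                           # b^0_0 = p_0
    for i in range(1, n + 1):
        row = [row[j + 1] - row[j] for j in range(len(row) - 1)]
        p.append(comb(n, i) * row[0])      # left column of the table, scaled by C(n,i)
    return p

def power_to_bernstein(p):
    """Inverse conversion, Eq. (3.4.2.15): b_i = sum_k C(i,k)/C(n,k) * p_k."""
    n = len(p) - 1
    return [sum(comb(i, k) / comb(n, k) * p[k] for k in range(i + 1))
            for i in range(n + 1)]

# Worked example of Section 3.4.1:
p = bernstein_to_power([1, 3, 4, 6, 8])    # -> [1, 8, -6, 8, -3]
b = power_to_bernstein(p)                  # recovers (1, 3, 4, 6, 8)
```

The round trip recovers the original Bernstein coefficients (up to floating-point rounding in the divisions by C(n, k)).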
The condition number for the basis transformation matrix gives an upper bound, for arbitrary polynomials, on the largest possible factor by which basis conversion amplifies the relative coefficient error. It grows exponentially with the polynomial degree n.

3.5 Performance of Polynomial Root Finders

Guidelines for performance comparisons of mathematical software are proposed in [CDM79, Ign71, Ric83], and more specifically for polynomial root finding algorithms by Jenkins and Traub in [JT74, JT75] and Wilkinson in [Wil63]. This section reviews principles for establishing and evaluating the overall performance of polynomial root finding algorithms based on the above references.

3.5.1 Benchmarking Principles

The criteria for testing root finding algorithms may be concisely summarized under the categories: 1) Robustness, 2) Convergence, 3) Deficiencies, and 4) Assessment.

Program Robustness is validated with pathological examples that test for specific program properties and underlying methods, such as:

1. Checking leading coefficients that approximate zero,
2. Checking trailing coefficients that approximate zero,
3. Proper handling of low-degree polynomials,
4. Proper scaling of polynomial coefficients to avoid floating-point overflow and underflow,
5. Assessing the condition of each zero and adjusting the precision of computation accordingly,
6. Specifying iteration limits, and
7. Providing a failure exit or testable failure flags.

Program Convergence is a computational problem and needs validation regarding the:

1. Starting conditions for proper convergence,
2. Convergence of iterate sequences to acceptable tolerances,
3. Slow convergence of iterate sequences, as in multiple-zero cases,
4. Divergence of iterate sequences, and
5. Program termination criteria.
Program Deficiencies, i.e., specific known implementation defects or weaknesses, are important to document and are tested by forcing:

1. A program to produce an undefined result,
2. The decision mechanisms of the program to fail, and
3. Roundoff error to destroy the accuracy of the solution, as in deflating zeros.

Program Assessment regarding performance is best analyzed by comparing program statistical information based on the establishment of meaningful measures:

1. Reliability of the solution.
2. Accuracy, or closeness to the exact solution.
3. Precision, or the number of significant digits in the answer.
4. Efficiency measurements, including: CPU time based on accurate timers and uniformly optimized code, function evaluations, arithmetic operations, and storage requirements regarding arrays, stacks, and object code size.
5. Portability of the program to different computer platforms.

Polynomial Classes for Performance Assessment

Classes of polynomials for performance assessment should include both well-conditioned and ill-conditioned polynomials. A few well-conditioned cases should be tested against other algorithms in the literature, with the major testing focusing on ill-conditioned cases.

1. Well-conditioned Polynomials

Uniformly Modulated Zeros yield similar zero cases. Polynomials constructed from uniformly distributed random zeros differ little from each other because the randomness tends to 'average out' in the coefficients.

Uniformly Modulated Coefficients yield uniformly distributed zeros. Polynomials constructed from uniformly distributed random coefficients tend to have their zeros uniformly distributed around the origin near the unit circle.

2. Ill-conditioned Polynomials

Dual Bands of Modulated Zeros yield clustered zeros.
Polynomials constructed from both a wide band and a much narrower band of uniformly distributed random zeros produce ill-conditioned polynomials with clustered zeros, where the severity of the condition is controlled by the amount the narrower band is decreased and its percentage of zeros is increased.

Widely Modulated Zeros yield widely varying sizes of zeros. Polynomials constructed from zeros whose mantissa and exponent are chosen from separate random uniform distributions yield zeros of widely varying size, which often reveal inefficiencies in a program not apparent from other tests; e.g., poorly chosen initial iterates may tend to 'wander' slowly to the area of the zero, and such behavior will be magnified on this class of polynomials.

Widely Modulated Coefficients yield widely varying intervals of zeros. Polynomials constructed from coefficients whose mantissa and exponent are chosen from separate random uniform distributions yield zeros with widely varying intervals, which typify wide applicability of the program.

Sources that define examples of the various generated polynomials include [Dun72, GK90].

Chapter 4

Preconditioning for Bernstein Form Polynomials

As mentioned in Chapter 1, several fundamental geometrical problems that arise in the processing of free-form curves and surfaces may be reduced computationally to the task of isolating and approximating the distinct real roots of univariate polynomials on finite intervals. Representative examples are: ray-tracing of surfaces [Han83, Kaj82, SA84]; computing intersections of plane curves [GSA84, SAG85, SP86]; finding a point on each loop of the intersection curve of two surfaces [Far86]; and the "trimming" of offset and bisector curves [FJ94, FN90a].
Such an approach is attractive in that it provides an algebraically precise formulation of the geometrical problem under consideration, given an "algorithmic" root finder, but it is widely perceived to be impractical in all but the simplest cases, due to the potentially poor numerical condition of the high-degree polynomials incurred. Thus, it has not found as widespread application as subdivision methods [CLR80, LR80] based on successively refined piecewise-linear approximations of curves and surfaces (generated recursively from their Bézier/B-spline control polygons).

In the aforementioned problems, the polynomials whose roots we seek are not specified ab initio, but rather must be "constructed" by a sequence of floating-point arithmetic operations on given numerical data, which may incur significant errors in the final polynomial coefficients. Our investigations suggest that the sensitivity of the roots to errors in the polynomial coefficients incurred during the construction stage typically dominates any error associated with the actual root-finding process. In this chapter we describe an approach to processing polynomials in Bernstein form that can significantly improve the accuracy of computed roots by minimizing adverse effects of the polynomial "construction" errors. In essence, the approach amounts to performing de Casteljau subdivision in the earliest stages of an algorithm, a procedure that we shall call "a priori subdivision." As is well known [FR87], the Bernstein form exhibits monotonically decreasing root condition numbers with respect to subdivision.
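De Casteljau subdivision, the primitive underlying both the a priori and a posteriori variants discussed in this chapter, can be sketched as follows. This is a standard formulation of the algorithm; the function name and the quadratic example are our own.

```python
def de_casteljau_subdivide(b, c):
    """Split Bernstein coefficients b on [0,1] at parameter c into the
    coefficients of the same polynomial on [0,c] and on [c,1]."""
    left, right = [b[0]], [b[-1]]
    row = list(b)
    while len(row) > 1:
        # One de Casteljau step: convex combinations of adjacent coefficients.
        row = [(1.0 - c) * row[i] + c * row[i + 1] for i in range(len(row) - 1)]
        left.append(row[0])      # collects the left-piece coefficients
        right.append(row[-1])    # collects the right-piece coefficients (reversed)
    return left, right[::-1]

# B(t) = t^2 has Bernstein coefficients (0, 0, 1) on [0,1]; splitting at 1/2
# gives (0, 0, 1/4) on [0, 1/2], whose right endpoint value is B(1/2) = 1/4.
left, right = de_casteljau_subdivide([0.0, 0.0, 1.0], 0.5)
```

Each step forms only convex combinations, which is one reason subdivided representations behave so well numerically.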
The practical meaning of this statement is that, if Bernstein representations are available to the same relative accuracy in the coefficients on both the nominal interval [0, 1] and a subset [a, b] thereof, then the latter form always exhibits smaller worst-case errors for those roots actually located on the smaller interval. Obviously, explicit floating-point subdivision of a polynomial with given initial coefficient errors cannot result in any diminution of the perturbations of its roots due to those initial errors; one cannot gain "something for nothing." Our key observation here, however, is that the consequences of errors incurred in the polynomial constructions for various geometrical operations can be ameliorated by supplying them with input polynomials that have been subdivided, even in floating-point arithmetic, a priori.

4.1 An Illustrative Example

A simple experiment on a well-known pathological example [Wil59] serves to motivate the proposed approach. Consider the degree-n Wilkinson polynomial having evenly-spaced roots k/n, k = 1, ..., n, between 0 and 1. In the power basis, we have

    P_n(t) = Π_{k=1}^{n} (t - k/n) = Σ_{k=0}^{n} p_k t^k.    (4.1.0.1)

Here the polynomial P_n(t) is "constructed" by explicitly multiplying out, in floating-point arithmetic, the linear factors in the above product expression to obtain the power coefficients {p_k}. Column 2 of Table 4.1 shows the results of attempting to approximate the roots in the case n = 25 using this "constructed" power representation. The cause of the very large inaccuracies in the computed roots is discussed in [Wil59].
A simple intuitive understanding may be gleaned from the following calculation, taken from [Far91a], for the value of the polynomial halfway between the roots t = 0.50 and t = 0.55 in the case n = 20 (the values are correct to the number of digits shown):

    p_0         = +0.000000023201961595
    p_1  t      = -0.000000876483482227
    p_2  t^2    = +0.000014513630989446
    p_3  t^3    = -0.000142094724489860
    p_4  t^4    = +0.000931740809130569
    p_5  t^5    = -0.004381740078100366
    p_6  t^6    = +0.015421137443693244
    p_7  t^7    = -0.041778345191908158
    p_8  t^8    = +0.088811127150105239
    p_9  t^9    = -0.150051459849195639
    p_10 t^10   = +0.203117060946715796
    p_11 t^11   = -0.221153902712311843
    p_12 t^12   = +0.193706822311568532
    p_13 t^13   = -0.135971108107894016
    p_14 t^14   = +0.075852737479877575
    p_15 t^15   = -0.033154980855819210
    p_16 t^16   = +0.011101552789116296
    p_17 t^17   = -0.002747271750190952
    p_18 t^18   = +0.000473141245866219
    p_19 t^19   = -0.000050607637503518
    p_20 t^20   = +0.000002530381875176
    --------------------------------------
    P_20(t)     =  0.000000000000003899    (4.1.0.2)

We see that the value of P_20(t) is an extremely small residual resulting from the addition of large terms of alternating sign.
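This cancellation can be reproduced in exact rational arithmetic, where the residual is free of any rounding error of its own. The sketch below (helper name ours) evaluates P_20 at t = 21/40 = 0.525 and compares the exact residual against the largest individual term.

```python
from fractions import Fraction

def wilkinson_power_coeffs(n):
    """Exact power coefficients, low to high, of P_n(t) = prod_{k=1..n} (t - k/n)."""
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        r = Fraction(k, n)
        coeffs = [Fraction(0)] + coeffs        # multiply by t ...
        for i in range(len(coeffs) - 1):
            coeffs[i] -= r * coeffs[i + 1]     # ... and subtract r times the rest
    return coeffs

n = 20
p = wilkinson_power_coeffs(n)
t = Fraction(21, 40)                   # halfway between the roots 0.50 and 0.55
terms = [p[k] * t ** k for k in range(n + 1)]
residual = sum(terms)                  # exact value of P_20(21/40)
amplification = max(abs(x) for x in terms) / abs(residual)
```

The exact residual is about 3.899e-15 while the largest term is about 0.22, an amplification factor exceeding 10^13, in line with the condition numbers discussed next.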
Owing to massive cancellations, the final value is some 10^13 times smaller than the individual terms p_k t^k, and thus any initial relative errors in the coefficients p_k (as incurred, say, by "constructing" them in floating-point) will be magnified by this enormous factor in determining the relative error of the value P_20(t), resulting in a commensurate loss of accuracy in the root-finding process. The above argument may be cast in more rigorous terms by computing the root condition numbers [FR87] for the polynomial P_20(t), which measure the displacement of roots due to given fractional errors in the coefficients; these condition numbers are, in fact, of order 10^13. One should subsequently bear in mind the lesson of this example, namely, that severe ill-conditioning generally arises from the "magnification" of initial (perhaps seemingly innocuous) relative coefficient errors through cancellation effects.

4.1.1 The Bernstein Form

We mention the power form of the Wilkinson polynomial only as a point of reference; henceforth we shall be concerned solely with the Bernstein representation

    B_n^[0,1](t) = Π_{k=1}^{n} [k(1 - t) + (k - n)t] = Σ_{k=0}^{n} b_k C(n, k) (1 - t)^{n-k} t^k    (4.1.1.3)

of this polynomial. Note that, apart from a multiplicative constant, each of the factors k(1 - t) + (k - n)t corresponds to the terms t - k/n in (4.1.0.1). As emphasized in [FR87], the Bernstein form is much better conditioned than the power form for roots in [0, 1]. See column 3 of Table 4.1 for the roots obtained using the Bernstein form, as constructed by multiplying out the linear factors in (4.1.1.3) using floating-point arithmetic. (Note that transformations between the power and Bernstein forms should be avoided [DD88, Far91b]; algorithms of interest may readily be formulated directly in the Bernstein basis [FR88] in lieu of the power basis.)

    exact   computed root      computed root        computed root
    root    of P_25(t)         of B_25^[0,1](t)     of B_25^[.25,.75](u)
    0.04    0.039999999999997  0.0400000000000001
    0.08    0.079999999997924  0.0799999999999781
    0.12    0.120000000039886  0.1200000000006346
    0.16    0.159999999495945  0.1599999999960494
    0.20    0.200000020778928  0.1999999999971261
    0.24    0.239999230565953  0.2400000000904563
    0.28    0.280016180829233  0.2799999998181714   0.28000000000000000
    0.32    0.319793958664642  0.3199999991463299   0.31999999999999999
    0.36    0.362001308252336  0.3600000057566014   0.35999999999999979
    0.40    0.391605636801344  0.3999999834411160   0.40000000000000060
    0.44    complex            0.4400000307234767   0.44000000000000140
    0.48    complex            0.4799999585120591   0.47999999999998849
    0.52    complex            0.5200000426509138   0.52000000000002370
    0.56    complex            0.5599999668961838   0.55999999999997959
    0.60    complex            0.6000000183938222   0.60000000000000710
    0.64    complex            0.6399999934951202   0.63999999999999929
    0.68    complex            0.6800000009245819   0.68000000000000000
    0.72    complex            0.7200000003141216   0.72000000000000000
    0.76    complex            0.7599999998045576
    0.80    complex            0.8000000000422774
    0.84    complex            0.8399999999963719
    0.88    complex            0.8799999999999752
    0.92    0.925606820606031  0.9200000000000135
    0.96    0.965436705906776  0.9600000000000007
    1.00    0.998812554595766  1.0000000000000000

    Table 4.1: Comparison of the computed roots of P_25(t), B_25^[0,1](t), and B_25^[.25,.75](u).

We now illustrate the effect of "a priori subdivision" on the root accuracy of (4.1.1.3) by constructing the Bernstein representation of this polynomial on a subinterval [a, b] ⊂ [0, 1] as follows. First, we define the change of variables

    u = (t - a) / (b - a)    (4.1.1.4)

that maps t ∈ [a, b] to u ∈ [0, 1]. We then represent each of the linear factors k(1 - t) + (k - n)t in (4.1.1.3) in terms of u, i.e., we re-write these factors in the equivalent form

    (k - na)(1 - u) + (k - nb)u,    (4.1.1.5)

and then multiply them together to obtain

    B_n^[a,b](u) = Π_{k=1}^{n} [(k - na)(1 - u) + (k - nb)u] = Σ_{k=0}^{n} b̃_k C(n, k) (1 - u)^{n-k} u^k.    (4.1.1.6)

For the case [a, b] = [0.25, 0.75] we find that the coefficients b̃_k in (4.1.1.6) are of much smaller magnitude than those b_k for the representation (4.1.1.3) on the full interval [0, 1]. Furthermore, as shown in column 4 of Table 4.1, when using this representation the twelve roots that actually lie on the interval t ∈ [0.25, 0.75] are computed to 13 accurate digits, almost twice as many as were obtained using the representation B_25^[0,1](t). For higher degree polynomials, this experiment yields even more impressive results.
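That the re-written factors (4.1.1.5) reproduce the original factors of (4.1.1.3) under the change of variables (4.1.1.4) is easy to check numerically; the function names below are ours.

```python
def factor_unit(k, n, t):
    """Linear factor of (4.1.1.3) on [0,1]: k(1-t) + (k-n)t = k - n*t."""
    return k * (1.0 - t) + (k - n) * t

def factor_sub(k, n, a, b, u):
    """The same factor re-written on [a,b] via u = (t-a)/(b-a), Eq. (4.1.1.5)."""
    return (k - n * a) * (1.0 - u) + (k - n * b) * u

n, a, b = 25, 0.25, 0.75
for k in range(1, n + 1):
    for u in (0.0, 0.3, 0.5, 1.0):
        t = a + (b - a) * u            # invert the change of variables (4.1.1.4)
        assert abs(factor_unit(k, n, t) - factor_sub(k, n, a, b, u)) < 1e-9
```

Expanding (k - na)(1 - u) + (k - nb)u gives k - n(a + (b - a)u) = k - nt, so the agreement is an identity, not merely numerical coincidence.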
For example, n = 37 is the highest degree for which the roots of B_n^[0,1](t) can be computed to at least one digit of accuracy in double-precision floating point: for n = 38, some of the computed roots become complex. Yet tests indicate that a priori subdivision allows us to approximate the roots of Wilkinson polynomials of very high degree to full double precision accuracy if the interval width b - a is sufficiently small. For the case n = 1000, for example, the 101 roots within the interval [a, b] = [0.45, 0.55] can be computed to the full 17 digits of accuracy if a priori subdivision is used when constructing the polynomial.

It should be emphasized that the dramatic improvement seen in Table 4.1 would not have occurred with "a posteriori subdivision", i.e., if we had adopted the approach of first constructing the representation (4.1.1.3) on [0, 1] and then explicitly subdividing this representation to [a, b] using (say) the de Casteljau algorithm. With such an approach, the quality of the computed roots is found to be comparable to, or even slightly worse than, those given in column 3 of Table 4.1. Using exact arithmetic, of course, one would find that the "a priori" and "a posteriori" subdivision paradigms yield identical results.

4.1.2 Relative Errors in the Construction Process

An empirical explanation for the improvements obtained using a priori subdivision can be given as follows. The construction of the Bernstein form of Wilkinson's polynomial was performed for degrees 10, and intervals [a, b] of width 1, 0.5, 0.25, 0.125, and 0.0625, in each case using both double and quadruple precision floating-point arithmetic.
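The construction (4.1.1.6) never needs to leave the Bernstein form: writing each degree-1 factor with scaled coefficients ĉ_i = c_i C(1, i), products of factors become ordinary convolutions, and the final scaled coefficients are divided by C(n, k). The sketch below is our own illustration of this scaled-coefficient trick, not the dissertation's implementation; it checks the degree-4 case on [0, 1], whose roots lie at u = 1/4, 2/4, 3/4, 1.

```python
from math import comb

def bernstein_construct(n, a, b):
    """Bernstein coefficients on u in [0,1] of the product in Eq. (4.1.1.6),
    prod_{k=1..n} [(k - n*a)(1 - u) + (k - n*b) u], built by convolving
    scaled coefficients c_k * C(n, k)."""
    scaled = [1.0]
    for k in range(1, n + 1):
        f = [k - n * a, k - n * b]          # scaled coefficients of one factor
        out = [0.0] * (len(scaled) + 1)
        for i, s in enumerate(scaled):      # plain convolution in scaled form
            out[i] += s * f[0]
            out[i + 1] += s * f[1]
        scaled = out
    return [scaled[k] / comb(n, k) for k in range(n + 1)]

def bernstein_eval(bcoef, u):
    """Direct evaluation of sum_k b_k C(n,k) (1-u)^(n-k) u^k."""
    n = len(bcoef) - 1
    return sum(bcoef[k] * comb(n, k) * (1.0 - u) ** (n - k) * u ** k
               for k in range(n + 1))

b4 = bernstein_construct(4, 0.0, 1.0)       # a = 0, b = 1: the full interval
```

Choosing a narrower [a, b] in `bernstein_construct` performs the construction-stage (a priori) subdivision directly from the factors, which is precisely what the experiment above exploits.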
The relative coefficient errors incurred by the construction process were then estimated by regarding the outcome of the quadruple-precision calculations as "exact" reference values. Characteristic measures of these errors were formulated in the 1, 2, and ∞ norms (see Section 4.2 below), and were found not to depend significantly on the chosen norm. For a given degree, the relative errors of the constructed coefficients showed no systematic dependence on the interval width, and in all cases were quite small and within a few orders of magnitude of each other (typically 10^-16 to 10^-14). Further, any tendency for the errors to increase with n was quite mild, and barely discernible within the two orders-of-magnitude randomness. These results are consistent with the commonly-accepted notion that each floating-point operation incurs only a small relative error [Wil60], bounded by the "machine unit", and the fact that the number of operations required to construct the polynomial is independent of [a, b] and grows only linearly with n. It cannot be overemphasized, however, that the machine unit is a valid bound on the relative error only for operands that are presumed to have exact initial floating-point representations. If, as is the case in practice, the relative error of interest is that defined by taking arbitrary real numbers, converting the
Permanent URL: http://hdl.lib.byu.edu/1877/2395
Title: Polynomial Real Root Finding in Bernstein Form
Creator: Spencer, Melvin R.
Description/Abstract: This dissertation addresses the problem of approximating, in floating-point arithmetic, all real roots (simple, clustered, and multiple) over the unit interval of polynomials in Bernstein form with real coefficients.
Date Original: 1994-08
Date Digital: 2010-08-31
Language: English (eng)
Publication Type: Dissertations
Type: Text
Format: application/pdf
Owning Institution: Brigham Young University
BYU College: Ira A. Fulton College of Engineering and Technology
BYU Department: Civil and Environmental Engineering
Copyright Status: (c) 1994 Melvin R. Spencer
Copyright Use Information: http://lib.byu.edu/about/copyright/generic.php
Farouki for their significant and incessant contributions towards the completion of this document, which include their derivation and presentation of the chapter on preconditioning, which is the core of this work; their insights regarding Bernstein form polynomial real root finding; the many days sacrificed and dedicated to the composition, rewriting, and editing; and above all their considerate, good-natured personalities and much appreciated and welcomed suggestions, which lessened the burden of the oral and written presentation of this thesis.

Special effort needs to be recognized and thanks extended to my advisory committee, Dr. Thomas W. Sederberg, Dr. Norman L. Jones, and Dr. Rida T. Farouki, as well as my oral committee chairman, Dr. Henry N. Christiansen, and members Dr. Thomas W. Sederberg, Dr. Norman L. Jones, Dr. Rida T. Farouki, and Dr. C. Gregory Jensen, for their preparation, examination, and suggestions regarding the content of this dissertation. I would like to recognize and thank my fellow graduate colleagues for their assistance in preparing this work: Dr. Alan K. Zundel, for his many contributions to this work, which include his insights into the multiple root-finding strategy and its implementation and testing in his algebraic surface rendering program, and particularly his last-minute efforts and contributions to the performance assessment and testing of the various algorithms; Kris Klimaszewski, for his initial implementation of the degree six isolator polynomial algorithm as well as his many programming insights and camaraderie; Peisheng Gao and Hong Mu, for their unending and expert assistance in programming, illustrations, and testing, which they performed simultaneously with their many other graduate duties and responsibilities. I would like to thank Dr. Anders N.
Grimsrud and my other associates at work for the additional responsibility they have shouldered by providing me a leave of absence from work to finish this academic goal and project. I would like to thank my parents, relatives, and friends for their emotional support and confidence regarding the completion of this work; especially my father, Melvin Reed Spencer, and my brother and his family, Robert and Shannon, Erin, and Chase Spencer, for their financial and emotional support, providing a weekend refuge to refresh and collect my thoughts. And last but sincerely not least, I am grateful and thankful for the divine inspiration empowered to all concerned for the motivation and exchange supporting the conception, development, and completion of this collaborative effort.

Contents

1 Introduction
  1.1 Literature Search
  1.2 Overview
2 Review of Polynomial Real Root Finding
  2.1 Polynomial Representations and Properties
    2.1.1 Power Form Polynomials
    2.1.2 Bernstein Form Polynomials
      Explicit Bézier Curves
      Convex Hull Property
      Variation Diminishing Property
  2.2 Polynomial Real Root Localization
    2.2.1 Techniques for Bounding Real Roots
    2.2.2 Isolation Techniques for Distinct Real Roots
      Polynomial Factors and Roots
      Location Principles
      Variation of Coefficient Signs
      Sturm Sequence
      Budan–Fourier Sequence
      Isolating Polynomials
    2.2.3 Isolation Techniques Accounting for Multiple Real Roots
      Common Roots of Two Polynomials
      Separation of Multiple Roots
  2.3 Polynomial Real Root Approximation
    2.3.1 Closed Form Solvers
    2.3.2 Basic Serial Iterative Methods
    2.3.3 Hybrid Serial Iterative Methods
    2.3.4 Simultaneous Iterative Methods
  2.4 Power Form General Purpose Root Finders
      Jenkins–Traub's Method
      Dunaway's Composite Method
      Madsen–Reid's Newton-Based Method
      Laguerre's Method
  2.5 Power Form Real Root Finders
    2.5.1 Sturm Sequence Techniques
      Hook–McAree's Method
    2.5.2 Differentiation Techniques
      Collins–Loos' Method
    2.5.3 Variation of Signs Techniques
      Collins–Akritas' Method
    2.5.4 Interval Techniques
      Hansen's Newton Interval Methods
      Dedieu–Yakoubsohn's Exclusion Method
  2.6 Bernstein Form Real Root Finders
    2.6.1 Recursive Subdivision Techniques
      Lane–Riesenfeld's Method
      Rockwood's Method
      Schneider's Method
    2.6.2 Newton-Based Techniques
      Grandine's Method
      Marchepoil–Chenin's Method
    2.6.3 Hull Approximation Techniques
      Rajan–Klinkner–Farouki's Method
3 Review of Numerical Considerations
  3.1 Numerical Error
    3.1.1 Sources and Types of Numerical Error
      Causes of Computational Error
    3.1.2 Roundoff Error
      Floating-Point Computation
      Machine Epsilon and Roundoff Unit
      Roundoff Error Due to Floating-Point Operations
  3.2 Estimating Bounds for Roundoff Error
    3.2.1 Forward Error Analysis
    3.2.2 Running Error Analysis
    3.2.3 Backward Error Analysis
    3.2.4 Interval Error Analysis
    3.2.5 General Stochastic Error Analysis
    3.2.6 Permutation–Perturbation Error Analysis
  3.3 Numerical Stability and Condition
    3.3.1 Preliminary Definitions
    3.3.2 Condition of Perturbing Polynomial Coefficients
    3.3.3 Condition of Bernstein and Power Forms
  3.4 Conversion Between Bernstein and Power Forms
    3.4.1 Example
    3.4.2 Closed Form Expression
    3.4.3 Numerical Ramifications of Basis Conversion
  3.5 Performance of Polynomial Root Finders
    3.5.1 Benchmarking Principles
      Polynomial Classes for Performance Assessment
4 Preconditioning for Bernstein Form Polynomials
  4.1 An Illustrative Example
    4.1.1 The Bernstein Form
    4.1.2 Relative Errors in the Construction Process
  4.2 Problems of Error Propagation and Amplification
    4.2.1 Error Analysis of the de Casteljau Algorithm
    4.2.2 Condition of the Subdivision Map
    4.2.3 Backward Error Analysis
  4.3 Application to Curve/Curve Intersections
  4.4 Concluding Remarks
5 General Root Finding Concepts
    5.0.1 Pseudo Code
  5.1 Bernstein Subdivision
      The de Casteljau Algorithm
      Subdivision Coefficient Errors
      Subdivision Global Error Bound
    5.1.1 Algorithms: Bsubdiv_left and Bsubdiv_right
    5.1.2 Numerical Stability
  5.2 Bernstein Modified Horner Evaluation
    5.2.1 Horner's Method Expressed in Power Form
    5.2.2 Horner's Method Modified for Bernstein Form
    5.2.3 Algorithm: Beval
    5.2.4 Polynomial Evaluation Considerations
      Scaled Bernstein Polynomial Coefficients
  5.3 Bernstein Deflation
    5.3.1 Preliminary Concepts
    5.3.2 Algorithms: Bdeflate_t, Bdeflate_left, and Bdeflate_right
    5.3.3 Deflation Considerations
  5.4 Polynomial Coefficient Normalization
    5.4.1 Preliminary Concepts
    5.4.2 Algorithms: NormPoly_base and NormPoly_max
    5.4.3 Normalization Considerations
  5.5 Bernstein End Control Point Root Approximation
    5.5.1 Algorithms: Bend0_left and Bend0_right
    5.5.2 Bernstein End Zero Considerations
  5.6 Bernstein Differentiation
    5.6.1 Algorithms: Bderiv and Bderiv_pseudo
    5.6.2 Differentiation Considerations
  5.7 Polynomial Real Root Bounds
  5.8 Pseudo Bernstein Basis Conversion
    5.8.1 Preliminary Concepts
    5.8.2 Algorithms: ConvCoefs_b2p̂ and ConvCoefs_p2b̂
    5.8.3 Algorithms: ConvRoots_b̂ and ConvRoots_p̂
  5.9 Closed Form Real Root Solvers
      Quadratic Solution
      Cubic and Quartic Solutions
    5.9.1 Algorithms: Psolver2, Psolver3, and Psolver4
6 Polynomial Real Root-Finding Algorithms
  6.1 Bernstein Convex Hull Approximating Step (Bcha1)
    6.1.1 Preliminary Concepts
    6.1.2 Algorithm: Bcha1
  6.2 Higher Degree Approximating Steps (Bcha2–4)
    6.2.1 Preliminary Concepts
    6.2.2 Higher Degree Approximating Step Considerations
  6.3 Bernstein Combined Subdivide & Derivative (Bcom)
    6.3.1 Preliminary Concepts
    6.3.2 Algorithm: Bcom
    6.3.3 Bcom Considerations
  6.4 Bernstein Convex Hull Isolating Step (Bchi)
    6.4.1 Preliminary Concepts
    6.4.2 Algorithm: Bchi
  6.5 Standard Isolator Polynomial Separation (Sips)
    6.5.1 Preliminary Concepts
      Isolator Polynomials
    6.5.2 Algorithm: Sips
    6.5.3 Algorithm Considerations
      Restricting Input to Power Form Polynomials
      Tighter Root Bounds from the Convex Hull
      Pseudo-basis Conversion
      Common Roots
    6.5.4 Illustrative Examples of IPs
7 Numerical Results
  7.1 Test Problems and Results for Distinct Real Roots
  7.2 Discussion
8 Conclusions

List of Figures

2.1 Bernstein polynomial as an explicit Bézier curve
2.2 Convex hull property
2.3 Application of Rolle's theorem
4.1 Two degree-7 Bézier curves having 49 real intersections
5.1 Subdividing a cubic explicit Bézier curve
5.2 Explicit Bézier curves before and after deflating roots at t = a, b
5.3 Explicit Bézier curve y_{[0,1]}(t) with a cluster of roots at t = b
5.4 Log-log plot of pseudo-mapping x(t) = t/(1 − t) over the region t ∈ [0, 1]
5.5 Log-log plot of pseudo-mapping t(x) = x/(1 + x) over the region x ∈ [0, 1]
6.1 Bcha1 root finding
6.2 P̂_{[a,b]}(t) of degrees one through four
6.3 Bcom root isolation heuristic (a–d)
6.4 Bcom root isolation heuristic (e–h)
6.5 Bcom subdivision step heuristic
6.6 Root isolation using Bchi
6.7 IPs of degree 1 and 3 based on a degree 5 Wilkinson polynomial over I[0,1]
6.8 IPs of degree 1 and 3 based on a degree 5 polynomial over I[0,1] with roots at {.1, .2, .4, .6, .8}
6.9 IPs both of degree 2 based on a degree 5 Wilkinson polynomial over I[0,1]
6.10 IPs both of degree 2 based on a degree 5 polynomial over I[0,1] with roots at {.1, .2, .4, .6, .8}
6.11 IPs of degree 1 and 4 based on a degree 6 Wilkinson polynomial over I[0,1]
6.12 IPs of degree 1 and 4 based on a degree 6 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .7, .9}
6.13 IPs of degree 2 and 3 based on a degree 6 Wilkinson polynomial over I[0,1]
6.14 IPs of degree 2 and 3 based on a degree 6 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .7, .9}
6.15 IPs of degree 2 and 4 based on a degree 7 Wilkinson polynomial over I[0,1]
6.16 IPs of degree 2 and 4 based on a degree 7 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .5, .6, 1.}
6.17 IPs both of degree 3 based on a degree 7 Wilkinson polynomial over I[0,1]
6.18 IPs both of degree 3 based on a degree 7 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .5, .6, 1.}
6.19 IPs of degree 3 and 4 based on a degree 8 Wilkinson polynomial over I[0,1]
6.20 IPs of degree 3 and 4 based on a degree 8 polynomial over I[0,1] with roots at {.1, .2, .3, .4, .5, .6, .7, 1.}

List of Tables

2.1 Global Bounding of Power Form Polynomial Real Roots
4.1 Comparison of the computed roots of P^{25}(t), B^{25}_{[0,1]}(t), and B^{25}_{[.25,.75]}(u)
4.2 Condition numbers of subdivision matrices in various norms
7.1 Machine Floating-Point Constants
7.2 Relative Execution Times for Degree Five Polynomials
7.3 Relative Execution Times for Degree Ten Polynomials
7.4 Relative Execution Times for Degree Twenty Polynomials
7.5 Relative Execution Times for Wilkinson Polynomials in I[0,1]
7.6 Relative Execution Times for Degree Two Polynomials
7.7 Relative Execution Times for Degree Three Polynomials
7.8 Relative Execution Times for Degree Four Polynomials
7.9 Relative Execution Times for Degree 20 with Clustered Roots
7.10 Relative Execution Times for Degree 10 with Clustered Roots

Chapter 1
Introduction

This dissertation addresses the problem of approximating, in floating-point arithmetic, all real roots (simple, clustered, and multiple) over the unit interval of polynomials in Bernstein form with real coefficients. The Bernstein polynomial basis enjoys widespread use in the fields of computer aided geometric design (CAGD) and computer graphics.
The use of the Bernstein basis functions to express Bézier curves and surfaces allows many basic algorithms concerned with the processing of such forms to be reduced to the problem of computing the real roots of polynomials in Bernstein form. A typical example is the problem of computing the intersection points of two Bézier curves. Given two curves of degrees m and n, respectively, the problem of identifying their intersection points can be reformulated in terms of finding the real roots on the unit interval of a degree mn polynomial in Bernstein form [SP86]. Other examples from CAGD include finding a point on each loop of the intersection curve of two surfaces [Far86], and the "trimming" of offset and bisector curves [FJ94, FN90a]. Examples from computer graphics include ray tracing [Kaj82, Han83, SA84, Wij84, BDM84] and algebraic surface rendering [Sed85, Zun89]. In the early years of computer graphics and CAGD, the tendency was to convert from the Bernstein to the power basis for root finding (see, for example, [Kaj82]), since the power basis is more "familiar" and library routines are commonly available to solve for polynomial roots in the power basis. However, it has subsequently been demonstrated [FR87] that the Bernstein basis is inherently better conditioned than the power basis for finding real roots on the unit interval (and this is all that is of interest when dealing with Bézier curves anyway). Thus, root finding algorithms for polynomials in Bernstein form are not merely a convenience; they in fact offer significantly better accuracy than their power basis counterparts. Furthermore, by expressing Bernstein form polynomials as explicit Bézier curves, one can take advantage of geometric insights based on the control polygon in developing root-finding algorithms.
This can allow us to robustly find all real roots in the unit interval, typically much faster than a library polynomial root finder that finds all real and complex roots can. Thus, a root finder that works in the Bernstein basis could provide enhanced speed and accuracy when applied to problems whose solution traditionally relies on the computation of real polynomial roots in the power basis.

The research reported in this dissertation falls in the domain of "experimental computer science and engineering". Many of the results are empirical. Fortunately, there is now growing recognition that proof of performance through such experimental means is an acceptable and even crucial form of computer research [ECS94]. Nearly 25 years ago, Henrici remarked, at a symposium on the numerical solutions of nonlinear problems: "I do not feel that the polynomial problem has been solved perfectly, not even from a practical point of view" [Hen70]. Our study of root finding for polynomials in Bernstein form suggests that the field remains fruitful even now.

1.1 Literature Search

The first known work on our problem is an algorithm, due to Lane and Riesenfeld [LR81], which repeatedly applies the de Casteljau algorithm to isolate and refine roots of polynomials in Bernstein form. Several of the algorithms presented in this dissertation were first documented in an unpublished report drafted nearly ten years ago [SSd86]. The significant developments on the numerical stability of the Bernstein form that arose in the intervening decade justify our procrastination in submitting that report for publication; we always suspected there was much more waiting to be reported! [Gra89] presents an algorithm for finding roots of B-spline functions which is similar to the convex hull root finding algorithm in [SSd86]. [Sch90a] describes basically the same algorithm.
An intriguing algorithm based on "parabolic hulls" is introduced in [RKF88]; the approach involves a more sophisticated method for isolating roots. [Roc90] presents a heuristic for accelerating the root isolation process. Each of these algorithms is reviewed more thoroughly in [§2.6] of Chapter 2.

This dissertation borrows concepts from the following four fields:

1. Computer Aided Geometric Design and Graphics [SP86, FR87, RKF88, FR88]
2. Numerical Analysis [Act70, Buc66, BFR81, Bor85, Bre73, Dod78, DM72, FMM77, IK66, Ham71, Hen64, Hen82, Hil74, Hou53, Hou70, Mac63, Mar66, McN73, Mer06, Mor83, Non84, Ost60, Pen70, Pet89, PC86, Piz75, PW83, PFTV90, RR78, Ric83, Sta70, Tra64]
3. Theory of Equations [Bor50, BP60, Caj04, Dic22, Mac54, Tur52, Usp48]
4. Higher Algebra [Boc64, Dav27, FS65, Kur80, MP65, Sal85]

1.2 Overview

Chapter 2 outlines some basic concepts pertaining to root finding for polynomials in power and Bernstein form. Chapter 3 discusses numerical considerations for implementing root finding schemes in floating-point arithmetic. Chapter 4 presents one of the most significant results of our research: the fact that root condition for polynomials in Bernstein form can often be significantly improved by shrinking the domain in the earliest stages of coefficient construction. For example, in the curve intersection algorithm mentioned earlier, if one of the curves is split in half before proceeding with the intersection algorithm, the accuracy of the final polynomial roots (which indicate the curve parameter values at the intersection points) can be greatly improved. Chapter 6 outlines several different algorithms for root finding of polynomials in Bernstein form. Chapter 5 describes basic functions common to those root finding algorithms.
Chapter 7 compares the execution speed and numerical precision of the algorithms in Chapter 6 and of some general purpose power form polynomial root finding schemes. Chapter 8 concludes and summarizes this work.

Chapter 2
Review of Polynomial Real Root Finding

This chapter reviews basic concepts and algorithms concerned with computing the real roots of polynomials represented in the power and Bernstein bases.

2.1 Polynomial Representations and Properties

We begin by recalling some definitions and properties of polynomials represented in both power and Bernstein form.

2.1.1 Power Form Polynomials

A degree n polynomial in the power basis is defined by

  f(t) = \sum_{i=0}^{n} p_i t^i    (2.1.1.1)

with p_n ≠ 0.

2.1.2 Bernstein Form Polynomials

A degree n polynomial in Bernstein form is given by

  y_{[0,1]}(t) = \sum_{k=0}^{n} y_k \binom{n}{k} (1-t)^{n-k} t^k    (2.1.2.2)

where \binom{n}{k} (1-t)^{n-k} t^k is the kth Bernstein basis function of degree n, and the y_k are the Bernstein coefficients. The subscript [0,1] indicates that the domain of interest is t ∈ [0,1]. A polynomial in Bernstein form can be defined over arbitrary domain intervals by introducing the change of variables

  u = (t - a)/(b - a)    (2.1.2.3)

that maps t ∈ [a,b] to u ∈ [0,1]. We then have

  y_{[a,b]}(t) = \sum_{k=0}^{n} y_k \binom{n}{k} (1-u)^{n-k} u^k = \sum_{k=0}^{n} y_k \binom{n}{k} \frac{(b-t)^{n-k} (t-a)^k}{(b-a)^n}    (2.1.2.4)

Explicit Bézier Curves

Useful geometric properties may be associated with a polynomial in Bernstein form by casting it as an explicit Bézier curve (see Figure 2.1):

  P_{[a,b]}(t) = (t, y_{[a,b]}(t)) = \sum_{k=0}^{n} P_k \binom{n}{k} (1-u)^{n-k} u^k    (2.1.2.5)

where the control points P_k are evenly spaced along the horizontal axis:

  P_k = (a + (k/n)(b - a), y_k).    (2.1.2.6)

The control polygon is formed by the set of line segments connecting adjacent control points. The phrase "control polygon" is not an entirely accurate label, since it is not actually a closed polygon: P_n and P_0 are not connected.

[Figure 2.1: Bernstein polynomial as an explicit Bézier curve]

Convex Hull Property

We will make extensive use of the convex hull property of Bézier curves, which states that a Bézier curve lies entirely within the convex hull of its control points, as illustrated in Figure 2.2. The convex hull property is assured because the Bernstein polynomial basis functions \binom{n}{k} (b-t)^{n-k} (t-a)^k / (b-a)^n sum to one and are non-negative for t ∈ [a,b].

[Figure 2.2: Convex hull property]

Variation Diminishing Property

The variation diminishing property is really an expression of Descartes' Law of Signs in the Bernstein polynomial basis. It states that no line intersects a Bézier curve more times than it intersects the control polygon. More specifically, if a given line intersects a Bézier curve n_1 times and the same line intersects the control polygon n_2 times, then

  n_1 = n_2 - 2 n_3    (2.1.2.7)

where n_3 is a non-negative integer. For example, for the case of the t-axis in Figure 2.1, n_1 = n_2 = 3. Equation (2.1.2.7) requires us to count multiple intersections properly, so a simple tangent intersection accounts for two intersections. Two implications of the variation diminishing property are immediately apparent. First, if all coefficients of y_{[a,b]}(t) have the same sign, there can be no roots in [a,b]. Second, if the control polygon of P_{[a,b]}(t) crosses the t-axis exactly once, then y_{[a,b]}(t) has exactly one root in [a,b].

2.2 Polynomial Real Root Localization

The first step for many root finding methods is to divide the domain into disjoint intervals which each contain zero or one root, a procedure known as root isolation. The more generic term localization [Hou70] refers to root isolation as well as the determination of global root bounds. Localization schemes are usually based on examining simple expressions involving only the polynomial coefficients. This section is limited to a discussion of real root localization methods. Complex root localization techniques are outlined in [Hou70, Mar66]. If a polynomial has real coefficients, then any complex roots must occur in conjugate pairs. The following three subsections review techniques for globally bounding, and locally isolating, distinct as well as multiple real roots.

2.2.1 Techniques for Bounding Real Roots

This section reviews techniques for determining global bounds for real roots of polynomials in power form. Concepts relevant to bounding real roots of polynomials in Bernstein form are presented in [§5.7]. The real roots of polynomials expressed in power form can lie anywhere in the open interval (-∞, +∞). The only way a degree n polynomial in power form can have a root exactly at infinity is if its leading coefficient is p_n = 0. Here we review four of several classical methods for determining a tighter upper bound on the positive real roots of a power basis polynomial [Kur80, MP65, BP60, Caj04]. With some simple transformations, it is possible to use these bounding methods also to find lower bounds on positive roots, as well as upper and lower bounds on negative roots. Suppose a method exists which guarantees that all real roots of f(t) are less than an upper bound B_1. To compute a lower bound for the positive real roots of f(t), find the upper bound B_2 of the roots of φ_2(t) = t^n f(1/t).
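Before turning to the bounding theorems, the Bernstein-form machinery of §2.1.2 can be made concrete with a short sketch: a direct evaluation of (2.1.2.2) from the coefficients y_k, and a count of coefficient sign variations, whose vanishing certifies (by the variation diminishing property) that there is no root in [0, 1]. The function names here are ours, not the dissertation's, and this naive evaluation is only illustrative; Chapter 5 develops the numerically preferred de Casteljau and modified Horner evaluations.

```python
from math import comb

def bernstein_eval(y, t):
    """Evaluate y(t) = sum_k y_k * C(n,k) * (1-t)^(n-k) * t^k  (eq. 2.1.2.2).

    y is the coefficient list [y_0, ..., y_n]; naive form, for illustration."""
    n = len(y) - 1
    return sum(yk * comb(n, k) * (1 - t) ** (n - k) * t ** k
               for k, yk in enumerate(y))

def sign_variations(y):
    """Count sign changes in the coefficient sequence, ignoring zeros.

    By the variation diminishing property, the number of roots of y(t) in
    [0, 1] equals this count minus an even non-negative integer; in
    particular, zero variations means no roots in [0, 1]."""
    signs = [1 if c > 0 else -1 for c in y if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

Note that the evaluation interpolates the end coefficients, y(0) = y_0 and y(1) = y_n, which is why the end control points of an explicit Bézier curve directly expose roots at the interval endpoints.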
Then a lower bound for the positive roots of f(t) is 1/B_2, since if α is a positive root of f(t), then 1/α will be a root of φ_2(t), and if 1/α < B_2, then α > 1/B_2. Likewise, if B_3 is an upper bound on the roots of φ_3(t) = f(-t), then -B_3 is a lower bound on the negative roots of f(t); and if B_4 is an upper bound on the roots of φ_4(t) = t^n f(-1/t), then -1/B_4 is an upper bound on the negative roots of f(t).

Listed below are several of the more convenient and less computationally intensive methods for finding positive upper bounds on the real roots of a polynomial. Others not cited can be found in [Hou70, Mar66, FS65]. Many of these theorems are simplified versions of their more general forms, attributed to bounding complex roots of a polynomial, found in [Hou70, Mar66].

Table 2.1: Global Bounding of Power Form Polynomial Real Roots. Positive roots of f(t) lie in [1/B_2, B_1]; negative roots of f(t) lie in [-B_3, -1/B_4].

  Polynomial              Coefficients                               Upper bound of roots
  φ_1(t) = f(t)           f_0, f_1, ..., f_n                         B_1
  φ_2(t) = t^n f(1/t)     f_n, f_{n-1}, ..., f_0                     B_2
  φ_3(t) = f(-t)          f_0, -f_1, f_2, ..., (-1)^n f_n            B_3
  φ_4(t) = t^n f(-1/t)    (-1)^n f_n, (-1)^{n-1} f_{n-1}, ..., f_0   B_4

In the following discussion, we assume that f(t) = Σ a_i t^i is a degree n monic polynomial (a_n = 1). The Cauchy bound [Kur80, MP65, Hou70] is usually too conservative when one is only interested in real roots [Kur80], although it does provide a basis for other methods.

Theorem 2.2.1.1 (Cauchy Bound) All real roots of f(t) ∈ ℝ[t] are contained in the closed interval [-B, B], where

  B = 1 + max_i |a_i|.    (2.2.1.8)

The modified Cauchy bound [Mac54, Dav27, Mer06, Caj04] has the Cauchy bound as its worst case, and thus may give a closer bound.

Theorem 2.2.1.2 (Modified Cauchy Bound) Let N be the absolute value of the most negative coefficient of f(t) ∈ ℝ[t]. Then

  B = 1 + N    (2.2.1.9)

is a positive upper bound for all real roots of f(t).

The negative inverse sum bound [BP60, Dic22, Caj04] and the Maclaurin bound [Kur80, MP65, FS65, BP60, Dic22, Mer06] are nearly always much better, although still rather conservative. Examples in [BP60, Dic22] illustrate that both of these bounds are polynomial dependent; either one may yield a better bound than the other.

Theorem 2.2.1.3 (Negative Inverse Sum Bound) For each negative coefficient a_j of f(t) ∈ ℝ[t], let S_j be the sum of all positive coefficients following a_j in the sequence a_0, ..., a_n. Then

  B = 1 + max_j ( |a_j| / S_j )    (2.2.1.10)

is a positive upper bound for all real roots of f(t).

Theorem 2.2.1.4 (Maclaurin Bound) Let N be the absolute value of the most negative coefficient of f(t) ∈ ℝ[t], and let k be the index of the last negative coefficient in the sequence a_0, ..., a_n. Then

  B = 1 + N^{1/(n-k)}    (2.2.1.11)

is a positive upper bound for all real roots of f(t).

The term grouping bound [MP65, BP60] and the Newton bound [Kur80, MP65, BP60, Mac54, Tur52] both require initial guesses, and are more computationally intensive, but they ordinarily provide closer bounds than any of the preceding methods [BP60], with the Newton bound providing the tightest bound on real roots.

Theorem 2.2.1.5 (Term Grouping Bound) Let f(t) ∈ ℝ[t] be expressed in the form f_1(t) + f_2(t) + ... + f_k(t), such that the ordered coefficients of each of the polynomials f_i(t) exhibit only one change of sign, the coefficients of the highest-degree terms being positive.
Then any positive number B that renders fi ( B ) > 0 for i = 1 ; : : : ; k is an upper bound for all real roots of f ( t ) . Theorem 2 . 2 . 1 . 6 ( Newton Bound ) Let f ( t ) 2 < [ t ] and let B be a number such that f ( B ) , f 0 ( B ) , f 0 0 ( B ) , : : : , f ( n ) ( B ) are all positive . Then B is a positive upper bound for all real roots of f ( t ) . The values for f ( B ) , f 0 ( B ) , : : : , fn ( B ) are easily found using Horner ' s algorithm [ x 5 . 2 . 1 ] , [ MP 6 5 , Mac 5 4 ] . 2 . 2 . 2 Isolation Techniques for Distinct Real Roots The classical literature is abundant with root isolation methods that separate the distinct real roots of a polynomial . Root isolation is an important step in solving for the roots of a polynomial , since many iterative real root approximations methods require a \ sufficiently CHAPTER 2 . REVIEW OF POLYNOMIAL REAL ROOT FINDING 1 2 close " starting value or a narrow inclusion interval to guarantee that their sequences of approximations converge efficiently to a root . We now review some of the salient concepts for isolating real roots . Polynomial Factors and Roots The result of substituting a number for x into a polynomial f ( x ) is a number called the value f ( ) of the polynomial at x = [ Usp 4 8 ] . If f ( ) is equal to zero the polynomial f is said to vanish , and is called a zero or root of polynomial f [ Bor 5 0 ] . An algebraic equation of degree n is an expression formed by equating a non { zero degree { n polynomial f ( x ) 2 F [ x ] to zero [ Caj 0 4 , Usp 4 8 , Bor 5 0 , Tur 5 2 ] , f ( x ) = 0 : ( 2 . 2 . 2 . 1 2 ) Let f ( x ) 2 F [ x ] and suppose that K is a field containing F , i . e . , K is an extension of F such that F K [ Bor 5 0 ] . The result of substituting any number 2 K for x into the polynomial f ( x ) yields f ( ) = n X i = 0 ai i = ; where 2 K is the value of f at x = . By definition , is called a zero , root , or solution of the equation f ( x ) = 0 when = 0 [ BP 6 0 , Hou 7 0 ] . 
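The global bounding rules of §2.2.1 can be sketched in a few lines. The following is an illustration, not an implementation from the references above; the function names are my own, the input is a coefficient list in ascending powers with nonzero leading and constant coefficients, and the Cauchy bound is used for each of the four transforms of Table 2.1 because it is always positive.

```python
def monic(a):
    """Scale coefficients a[0..n] (ascending powers) so the leading one is 1."""
    return [c / a[-1] for c in a]

def cauchy_bound(a):
    """Theorem 2.2.1.1 (monic case): all real roots of f lie in [-B, B]."""
    a = monic(a)
    return 1.0 + max(abs(c) for c in a[:-1])

def maclaurin_bound(a):
    """Theorem 2.2.1.4: B = 1 + N^(1/(n-k)); returns 0 when f has no
    negative coefficients (and hence no positive roots)."""
    a = monic(a)
    n = len(a) - 1
    negs = [i for i, c in enumerate(a) if c < 0]
    if not negs:
        return 0.0
    N = max(-a[i] for i in negs)
    k = max(negs)                      # index of the last negative coefficient
    return 1.0 + N ** (1.0 / (n - k))

def root_intervals(a):
    """Table 2.1: bound f and its three transforms to bracket the positive
    roots in [1/B2, B1] and the negative roots in [-B3, -1/B4].
    Assumes a[0] != 0, so the reversed lists remain valid polynomials."""
    f_neg = [c * (-1) ** i for i, c in enumerate(a)]      # f(-t)
    B1 = cauchy_bound(a)
    B2 = cauchy_bound(a[::-1])                            # t^n f(1/t)
    B3 = cauchy_bound(f_neg)
    B4 = cauchy_bound(f_neg[::-1])                        # t^n f(-1/t)
    return (1.0 / B2, B1), (-B3, -1.0 / B4)
```

For f(t) = t^3 − 7t + 6 = (t − 1)(t − 2)(t + 3), the positive interval brackets the roots 1 and 2 and the negative interval brackets −3.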
The existence of a root of any algebraic equation, or non-zero polynomial, is guaranteed by the fundamental theorem of algebra [Dic22, Boc64, MP65, Ric83].

Theorem 2.2.2.1 (The Fundamental Theorem of Algebra) Any non-zero polynomial of degree n ≥ 1 (a_n ≠ 0) in C[x] (and thus in R[x]) always has at least one complex¹ (real or imaginary) root.

¹ A complex number a + ib, with a, b ∈ R and i = √−1, is real if b = 0, imaginary (or nonreal) if b ≠ 0, and pure imaginary if a = 0.

In other words, every algebraic equation has at least one solution [Caj04, Usp48, Tur52, Hou70, Hil74]. This theorem only guarantees the existence of a root; determining the number and type of the roots of f(x) is more involved.

The definitions of value and root of a polynomial may also be established on the basis of the remainder theorem [Caj04, Dic22, Usp48, Bor50, Mac54, Hou70, Kur80], and its corollary the factor theorem [Caj04, Dic22, Bor50, Mac54, Hou70, Kur80], both a consequence of the polynomial division algorithm represented by equation (2.2.2.13) and the conditions (2.2.2.14) and (2.2.2.15).

Theorem 2.2.2.2 (Polynomial Division Algorithm) For any two polynomials f(x), f_1(x) ∈ F[x] (called the dividend and the divisor polynomials, respectively), where f_1(x) ≠ 0, there exists a unique pair of polynomials q(x), r(x) ∈ F[x] (the quotient and remainder polynomials) such that

    f(x) ≡ f_1(x) q(x) + r(x)    (2.2.2.13)

with either

    Deg(r) < Deg(f_1)    (2.2.2.14)

or

    r(x) = 0.    (2.2.2.15)

Theorem 2.2.2.3 (The Remainder Theorem) If the polynomial f(x) is divided by the linear term x − α, where α is any number, then the remainder is a constant, equal to the value f(α) of f(x) at x = α, i.e.,

    f(x) ≡ (x − α) q(x) + f(α).    (Deg(f) = n)    (2.2.2.16)

Theorem 2.2.2.4 (The Factor Theorem) The remainder f(α) vanishes iff x − α is a linear factor of the polynomial f(x). Equivalently, α is a root of f(x) = 0 iff x − α divides exactly (i.e., with zero remainder) into f(x), so that

    f(x) ≡ (x − α) q(x).    (Deg(f) = n, Deg(q) = n − 1)    (2.2.2.17)

Together the fundamental theorem and the factor theorem indicate the number of linear factors, and hence the number of roots, of a polynomial of degree n. These results are succinctly described by the following three theorems [Caj04, Bor50, Dic22, Ham71, Hen64, Hou53, Hou70, Kur80, Usp48].

Theorem 2.2.2.5 (Factored, or Product, Form Theorem) If Deg(f) = n ≥ 1, then f(x) ∈ C[x] is uniquely expressible in factored form as the product of n monic linear factors x − α_i and the constant a_n, where α_1, ..., α_n denote the roots corresponding to the linear factors:

    f(x) ≡ a_n (x − α_1)(x − α_2) ⋯ (x − α_n)    (a_n ≠ 0).    (2.2.2.18)

Theorem 2.2.2.6 (Repeated Factored Form Theorem) If Deg(f) = n ≥ 1, then f(x) ∈ C[x] is uniquely expressible in repeated factored form as the product of k ≤ n distinct monic power factors (x − α_j)^{m_j} and the constant a_n, where the positive integers m_j sum to n and denote the multiplicities of the distinct roots α_1, ..., α_k, i.e.,

    f(x) ≡ a_n (x − α_1)^{m_1} (x − α_2)^{m_2} ⋯ (x − α_k)^{m_k}    (a_n ≠ 0, n = Σ_{j=1}^{k} m_j).    (2.2.2.19)

Theorem 2.2.2.7 (Root Census Theorem) An algebraic equation of degree n may be regarded as having exactly n roots (not necessarily distinct), if the roots are counted according to their multiplicities.

Each factor (x − α_j)^{m_j} corresponds to an m_j-fold or m_j-tuple root, or a multiple root of multiplicity m_j [Bor50, MP65].
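The remainder and factor theorems (2.2.2.16) and (2.2.2.17) translate directly into synthetic division: dividing f(x) by x − α in a single Horner pass yields both the quotient q(x) and the remainder f(α). A minimal sketch (the function names are my own; coefficients are in ascending powers):

```python
def horner_divide(a, alpha):
    """Divide f(x) = a[0] + a[1]x + ... + a[n]x^n by (x - alpha).
    Returns (q, r): quotient coefficients (ascending) and remainder
    r = f(alpha), per the remainder theorem (2.2.2.16)."""
    q = [0.0] * (len(a) - 1)
    acc = a[-1]
    for i in range(len(a) - 2, -1, -1):
        q[i] = acc
        acc = a[i] + alpha * acc
    return q, acc

def taylor_shift_remainders(a, alpha):
    """Repeated synthetic division at alpha yields the Taylor coefficients
    [f(alpha), f'(alpha)/1!, f''(alpha)/2!, ...]; all positive exactly when
    alpha passes the Newton bound test of Theorem 2.2.1.6."""
    out = []
    while a:
        a, r = horner_divide(a, alpha)
        out.append(r)
    return out
```

For f(x) = x^3 − 7x + 6 and α = 2 the remainder vanishes, illustrating the factor theorem: the returned quotient is x^2 + 2x − 3 = (x − 1)(x + 3), the deflated polynomial.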
When m_j = 1, a root α_j is called a simple or distinct root. Roots of multiplicity m_j = 2, 3, 4, ... are referred to as double, triple, quadruple, ... roots. From a numerical perspective, groups of roots that are nearly multiple form a "cluster" of roots and are termed root clusters [Hou70]. Multiple roots are necessarily simultaneous roots of f(x) and successive derivatives of f(x) [Usp48], i.e.,

    f(α) = 0, f'(α) ≠ 0
    f(α) = f'(α) = 0, f''(α) ≠ 0
    f(α) = f'(α) = f''(α) = 0, f'''(α) ≠ 0
    ...

for a simple, double, triple, ... root α.

The first derivative of the polynomial f(x) = Σ_{i=0}^{n} a_i x^i can be represented in the repeated factored form of equation (2.2.2.19) as

    f'(x) = Σ_{i=0}^{n−1} (i+1) a_{i+1} x^i = Σ_{j=1}^{k} m_j f(x)/(x − α_j)    (2.2.2.20)

and has as its first derivative f''(x), and so on [Caj04, Dic22, Kur80, Mac54].

For any polynomial in C[x], the number of roots is synonymous with the number of its linear factors. A discussion of polynomials in R[x] requires two additional theorems on conjugate pair roots and real factored forms [Bor50, Kur80, Mac54, McN73, Usp48].

Theorem 2.2.2.8 (Conjugate Pair Roots Theorem) If a polynomial in R[x] has an imaginary root α = a + ib of multiplicity m, it has also the conjugate value ᾱ = a − ib as a root of the same multiplicity, i.e., imaginary roots occur in conjugate pairs.

Theorem 2.2.2.9 (Real Factored Form Theorem) Every polynomial f(x) of degree n > 2 in R[x] can be expressed as a product of linear and/or quadratic factors that are irreducible monic polynomials in R[x], that is,

    f(x) ≡ a_n (x − α_1)^{m_1} ⋯ (x − α_i)^{m_i} (x² + b_1 x + c_1)^{l_1} ⋯ (x² + b_j x + c_j)^{l_j},    (2.2.2.21)

where

    x² + b_j x + c_j = (x − α_j)(x − ᾱ_j).    (2.2.2.22)

Location Principles

Rolle's Theorem [Tur52] relates the locations of the roots of f(x) and the roots of its derivatives.

Theorem 2.2.2.10 (Rolle's Theorem) Between any two consecutive real roots a and b of the polynomial f(x) there lies an odd number of roots (and therefore at least one root) of its derivative polynomial f'(x), a root of multiplicity m being counted as m roots.

Rolle's theorem provides the following concepts for isolating real roots of a polynomial f(x) based on the real roots of its derivative polynomial f'(x):

1. Between any two consecutive real roots c and d of f'(x) there lies at most one real root of f(x).
2. The smallest and largest real roots, γ and ω, of f'(x) constrain the smallest and largest real roots of f(x) to lie in the intervals (−∞, γ] and [ω, +∞), respectively.
3. Multiple roots of f(x) are necessarily roots of f'(x).
4. A method for separating the real roots of a polynomial f(x) whose roots are all simple, when all the real roots of f'(x) are known.

Variation of Coefficient Signs

The number of sign variations in the sequence of coefficients a_0, a_1, ..., a_n of a degree-n polynomial f(x) is equal to the number of consecutive pairs of coefficients that are of opposite signs, after zero values have been deleted from the sequence.

Theorem 2.2.2.11 (Descartes' Rule of Signs) A real polynomial f(t) cannot have more positive real roots than there are changes of sign in its sequence of coefficients, nor more negative real roots than there are changes of sign in the sequence of coefficients of f(−t). Moreover, the number of positive (or negative) roots has the same parity as the corresponding variation count: it is even if there is an even number of sign variations, and odd if there is an odd number.

Sturm Sequence

Descartes' rule provides only an upper bound on the number of real roots of a polynomial.
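The sign-variation count behind Descartes' rule reduces to a few lines. The following is my own illustration of Theorem 2.2.2.11 (coefficients ascending, function names assumed):

```python
def sign_variations(a):
    """Count sign changes in a coefficient sequence, zeros deleted."""
    signs = [c > 0 for c in a if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def descartes_bounds(a):
    """Upper bounds (of the correct parity) on the number of positive and
    negative real roots of f, per Descartes' rule of signs."""
    pos = sign_variations(a)
    neg = sign_variations([c * (-1) ** i for i, c in enumerate(a)])
    return pos, neg
```

For f(t) = t^3 − 7t + 6, whose roots are 1, 2, and −3, the rule gives at most two positive and one negative real root, and here both bounds are attained.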
A method that determines the exact number of distinct roots of a polynomial f(t) on an interval (a, b) is based on computing a Sturm sequence f_1(t), ..., f_m(t) for f(t), defined by

    f_1(t) = f(t)
    f_2(t) = f'(t)
    f_i(t) = q_i(t) f_{i+1}(t) − f_{i+2}(t)    (i ≥ 1).

The sequence terminates upon encountering a polynomial f_m(t), possibly a constant, that does not vanish on the interval of interest.

Theorem 2.2.2.12 (Sturm's Theorem) Let f_1(t), f_2(t), ..., f_m(t) be a Sturm sequence for the real polynomial f(t), and let the interval (a, b) be such that neither a nor b is a root of f(t). Then if S(a) and S(b) are the number of sign changes in the sequences of values f_1(a), f_2(a), ..., f_m(a) and f_1(b), f_2(b), ..., f_m(b), the number of distinct real roots of f(t) between a and b is given exactly by S(a) − S(b).

Note that Sturm's theorem does not count roots on (a, b) according to their multiplicities.

Figure 2.3: Application of Rolle's theorem (plots of P(t), P'(t), and P''(t) for 0 ≤ t ≤ 1).

Budan-Fourier Sequence

A weaker result, which relies only on the polynomial and its derivatives rather than a division sequence, is the Budan-Fourier theorem:

Theorem 2.2.2.13 (Budan-Fourier) Let the interval (a, b) be such that neither a nor b is a root of the polynomial f(t) of degree n. Then if C(a) and C(b) denote the number of sign changes in the sequences of values f(a), f'(a), ..., f^(n)(a) and f(b), f'(b), ..., f^(n)(b), the number of roots between a and b, counted according to multiplicity, either equals C(a) − C(b) or falls short of it by a positive even amount.
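The Sturm sequence construction and the sign-change count of Theorem 2.2.2.12 can be sketched directly from the definitions above. This is an illustration with my own function names, using naive floating-point polynomial division (coefficients ascending); note that the last nonzero "remainder" in the division chain is, up to sign, the GCD of f and f', which ties this construction to the multiple-root reduction of §2.2.3.

```python
def poly_div(num, den):
    """Long division of polynomials in ascending coefficients.
    Assumes deg(num) >= deg(den); returns (quotient, remainder)."""
    num = num[:]
    q = [0.0] * (len(num) - len(den) + 1)
    for shift in range(len(num) - len(den), -1, -1):
        factor = num[shift + len(den) - 1] / den[-1]
        q[shift] = factor
        for i, d in enumerate(den):
            num[shift + i] -= factor * d
    return q, num[:len(den) - 1]

def sturm_sequence(a):
    """f, f', then the negated remainders of successive divisions."""
    seq = [a[:], [i * c for i, c in enumerate(a)][1:]]
    while len(seq[-1]) > 1:
        _, r = poly_div(seq[-2], seq[-1])
        r = [-c for c in r]
        while r and abs(r[-1]) < 1e-12:   # strip a vanishing leading term
            r.pop()
        if not r:                          # nonconstant gcd(f, f'): multiple root
            break
        seq.append(r)
    return seq

def count_roots(a, lo, hi):
    """Sturm's theorem: distinct real roots of f in (lo, hi),
    assuming neither endpoint is a root."""
    sturm = sturm_sequence(a)
    def variations(x):
        vals = [sum(c * x ** i for i, c in enumerate(p)) for p in sturm]
        signs = [v > 0 for v in vals if abs(v) > 1e-12]
        return sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return variations(lo) - variations(hi)
```

For f(t) = t^3 − 7t + 6, with roots −3, 1, and 2, the count is exact on any interval avoiding the roots themselves.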
Isolating Polynomials

Another insightful approach to separating the real zeros of a given polynomial is to utilize the real roots of isolating polynomials. Rolle's theorem may be regarded as a corollary to the following theorem of Borofsky [Bor50]:

Theorem 2.2.2.14 Suppose f(x), g(x), and h(x) are real polynomials, such that a and b are consecutive real roots of f(x), and g(a) and g(b) have the same signs. Then the polynomial

    F(x) ≡ g(x) f'(x) + h(x) f(x)

has an odd number of roots between a and b if each root is counted according to its multiplicity.

Uspensky [Usp48] describes a related theorem due to de Gua:

Theorem 2.2.2.15 (de Gua) Let f(x) be a real polynomial of degree n having distinct positive roots x_1, ..., x_r with multiplicities m_1, ..., m_r. Then if we define

    F(x) = λ x f'(x) + f(x),

where λ is an arbitrary non-zero constant, the following are true:

1. F(x) has at least one root in each of the intervals (x_1, x_2), ..., (x_{r−1}, x_r);
2. for m_j > 1 each root x_j is also a root of F(x) with multiplicity m_j − 1; and
3. all roots of F(x) are real when λ > 0.

Similar considerations apply to the negative roots of f(x). A generalized version of the above theorem is presented by Sederberg and Chang in [SC94]; this will be discussed in detail in [§6.5]. Finally, Dedieu and Yakoubsohn [DY93] use an exclusion function to separate the real roots of a polynomial in their algorithm, which is briefly discussed in [§2.5.4].

2.2.3 Isolation Techniques Accounting for Multiple Real Roots

Multiple real roots lie, in a sense, at the limit between cases of distinct real roots and complex roots [BP60, Dic22, Mer06]. (Dickson [Dic22, p. 29] illustrates this principle for the roots of a real quadratic polynomial with a simple ruler-and-compass construction.)

Common Roots of Two Polynomials

The Euclidean algorithm for deriving the greatest common divisor (GCD) of two polynomials [Van70] provides for:

1. the determination of the multiple roots of a polynomial by reducing it to a polynomial with the same, but simple, roots; and thus
2. the determination of the number of roots of a polynomial over an interval of the real axis.

The GCD of f(x) and f'(x), denoted by

    g(x) = GCD( f(x), f'(x) ),    (2.2.3.23)

can be used to reduce the multiple roots of f(x) to simple ones by performing the division

    h(x) = f(x) / g(x)    (Deg(g(x)) ≠ 0).    (2.2.3.24)

The polynomial h(x) then possesses the same roots as f(x), but without multiplicity.

Separation of Multiple Roots

An extension of the GCD algorithm that separates all the multiple roots of a polynomial into distinct linear factors and simultaneously determines the multiplicity of each root is described in [Dun72, Dun74, MP65, Van70] and implemented by [Dun72] for this purpose. Define a GCD sequence by

    g_0(x) = f(x)
    g_1(x) = GCD( g_0(x), g_0'(x) )
    g_2(x) = GCD( g_1(x), g_1'(x) )
    ...
    g_m(x) = GCD( g_{m−1}(x), g_{m−1}'(x) ),    Deg(g_m(x)) = 0,

and set

    h_j(x) = g_{j−1}(x) / g_j(x),    j = 1, ..., m,    (2.2.3.25)

and then

    F_k(x) = h_k(x) / h_{k+1}(x),    k = 1, ..., m−1;    F_m(x) = h_m(x).    (2.2.3.26)

All the roots of the polynomials F_k(x) are then simple roots, and correspond to k-fold roots of f(x). Note that f(x) cannot possess roots of multiplicity greater than m.

2.3 Polynomial Real Root Approximation

This section reviews both closed-form algebraic solutions as well as numerical iterative approximations to polynomial equations expressed in power form. The first subsection [§2.3.1] discusses closed-form solutions for low-degree equations, with special attention to methods that are concerned only with real roots. The next three subsections classify iterative numerical schemes under three general groups: basic serial iterative methods [§2.3.2], combined or hybrid serial iterative methods [§2.3.3], and simultaneous iterative methods [§2.3.4]. Specific power form polynomial root finders that find all polynomial roots, as well as power form and Bernstein polynomial root finders that find only the real roots, are reviewed in the subsequent sections [§2.4], [§2.5], and [§2.6].

2.3.1 Closed Form Solvers

Direct root-finding algorithms for equations that are algebraically solvable by radicals (the roots may be expressed explicitly in terms of the coefficients by a general algebraic formula [Lod90, MP65, Bor50, Caj04]) are called exact, or closed-form, solvers. Any polynomial equation of degree ≤ 4 can be solved in closed form; for higher degrees a general closed form solution is impossible [Kur80, MP65]. A thorough discussion of classical solutions for quadratic, cubic, and quartic equations can be found in elementary texts on the theory of equations [BP60, Mac54, Tur52, Bor50, Usp48, Dic22, Mer06, Caj04] and higher algebra [Kur80, MP65, Dav27]. In principle, these methods yield exact results, although floating-point implementations are, of course, subject to round-off error and possible ill-conditioning effects. Although numerical implementations of direct methods are subject to accuracy loss and require square or cube root extractions, special purpose library functions for solving real quadratic and cubic equations have bested general purpose root finding methods by a factor of 7 [MR75].
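One of the stability considerations for closed-form quadratic solvers is avoiding subtractive cancellation when b^2 greatly exceeds 4ac. A sketch of the standard remedy (my own illustration of the well-known technique, not the specific library routines cited in the literature):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 (a != 0), computed via
    q = -(b + sign(b) * sqrt(disc)) / 2 so that neither root is formed
    by subtracting two nearly equal quantities."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []                                  # no real roots
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    if q == 0.0:                                   # b == 0 and disc == 0
        return [0.0]                               # double root at zero
    return sorted({q / a, c / q})                  # x1 = q/a, x2 = c/q
```

The benefit shows on a polynomial like x^2 + 10^8 x + 1: the small root −10^{-8} would lose almost all its digits under the textbook formula, but c/q recovers it to full precision.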
Closed form solvers restricted to finding only real roots are of particular interest for CAGD and graphics applications, as discussed in [§1]. Hanrahan recorded nearly a 25 percent reduction of image rendering CPU time using exact solutions as compared to general polynomial root finders [Han83]. Numerical algorithms returning non-complex solutions are given in [Sch90b]. Numerical considerations addressing the stable computation of closed form root expressions are presented in a report by Lodha [Lod90], and are addressed for the quadratic and cubic in [PFTV90]. Vignes [Vig78] also outlines a stable cubic real root solver developed by M. La Porte. Further implementation considerations are deferred to [§5.9]. Other references discussing numerical implementation of exact solvers include [HE86, PW83, Ric83, BFR81, McN73, Hen64, Hou70, Pen70, Buc66].

2.3.2 Basic Serial Iterative Methods

Methods referred to by [Pet89] that compute one zero at a time and require successive pre-isolation of the roots and post-deflation to remove the corresponding linear factors, and by Householder [Hou70] as local (convergent) methods that are dependent on close initial guesses to roots for convergence, will be called serial iterative methods here. Basic serial iterative methods are presented in a variety of numerical texts that discuss iterative solution of the real roots of an equation.
Rice [Ric83] outlines several of these basic iterative methods along with their corresponding iterative steps, which include the following schemes:

    Bisection Method
    Regula Falsi Method
    Modified Regula Falsi Method
    Secant Method
    Muller's Method (see [Dun72, Mul56, PFTV90])
    Fixed-Point Method
    Newton's Method
    Higher-Order Newton Method

The bisection and regula falsi methods are frequently referred to as root trapping or bracketing methods because they require a bounding interval containing the real root in their operation. Bracketing methods always converge, but at a linear or superlinear rate, depending on what acceleration techniques are applied. Basic serial iterative schemes may also be characterized by their use of interpolating polynomials [Hil74, Hou70, Ost60, Tra64]:

    Linear interpolation, e.g., as in the secant method.
    Linear inverse interpolation, e.g., the regula falsi methods.
    Quadratic interpolation, e.g., Muller's method.
    Other interpolation-based schemes, e.g., Newton's method.

Any of these methods can be combined, extended, and applied to general purpose polynomial root finding schemes for polynomials in both power and Bernstein forms.

2.3.3 Hybrid Serial Iterative Methods

Hybrid serial methods combine basic iterative root bracketing techniques (which always converge, but usually at a slow rate) with the faster convergent schemes that occasionally exhibit non-convergence [PW83]. The result is a hybrid approximation strategy that always converges, and at a faster rate, by using a higher order convergence technique while the iterative step remains within the specified interval, and the bracketing technique when the result steps out of the interval.
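The safeguarding idea can be made concrete with a Newton-bisection hybrid: the Newton step is taken whenever it lands inside the current bracket, and a bisection step is substituted whenever it escapes. This is my own minimal sketch of the general strategy, not any of the published routines cited in the literature:

```python
def newton_bisect(f, df, lo, hi, tol=1e-12, max_iter=100):
    """Safeguarded Newton iteration on a bracket [lo, hi] with
    f(lo) * f(hi) < 0: accept the Newton step while it stays inside the
    shrinking bracket, otherwise bisect. Always converges."""
    flo, fhi = f(lo), f(hi)
    assert flo * fhi < 0, "root must be bracketed"
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol or hi - lo < tol:
            return x
        if flo * fx < 0:                 # keep the half with the sign change
            hi, fhi = x, fx
        else:
            lo, flo = x, fx
        d = df(x)
        x_new = x - fx / d if d != 0 else lo - 1.0
        if not (lo < x_new < hi):        # Newton left the bracket: bisect
            x_new = 0.5 * (lo + hi)
        x = x_new
    return x
```

Near a simple root the iteration behaves like pure Newton (quadratic convergence); in the worst case it degrades gracefully to bisection rather than diverging.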
Hybrid methods found in the literature include a secant-false position algorithm [PW83] in PASCAL, a Newton-bisection algorithm in both FORTRAN and C [PFTV86, PFTV90], a Newton-regula falsi algorithm in C by Blinn [Gla89], and an algorithm in FORTRAN and C [FMM77, PFTV86, PFTV90] by Brent [Bre71], based on previous work by Van Wijngaarden and Dekker, which combines root bracketing, bisection, and inverse quadratic interpolation to approximate a real root.

2.3.4 Simultaneous Iterative Methods

Methods referred to by Petkovic [Pet89] that simultaneously determine all zeros of a polynomial, and by Householder [Hou70] as global (convergent) methods that are not dependent on close initial guesses to roots for convergence, are designated here as simultaneous iterative methods. Such methods are not the focus of this study, but some of the more established algorithms are discussed in [§2.4] as a reference to their use as benchmarking algorithms compared against the algorithms presented in Chapter 6. Other simultaneous methods mentioned in the literature include Bernoulli's method [Ait26], Graeffe's method [Gra65], the quotient-difference method [Mor83, Hen82, BFR81, McN73, Hou70, Hen64], Petkovic's simultaneous inclusion method [Pet89, Pet81], and Gargantini's method [Gar79].

2.4 Power Form General Purpose Root Finders

This section reviews some general purpose root finding methods found in the literature that approximate all the roots of polynomials in power form. Root finders discussed below include Jenkins and Traub's Rpoly, Dunaway's Composite, Madsen and Reid's Newton-based, and Laguerre's methods.
Additional general purpose methods not outlined here, but referenced in other works, include the Lehmer-Schur method [PFTV90, Mor83, Dun72, Act70, Hou70, Hou53], Lin's linear and quadratic methods [Hil74, McN73, Act70, Buc66, Lin43, Lin41], and Bairstow's Newton quadratic factor method [PFTV90, PC86, Mor83, Hen82, RR78, Piz75, Hil74, McN73, Dun72, Ham71, Hou70, Buc66, IK66, Hen64], as well as other higher order methods referenced in [HP77, Hil74, Dun72, Hou53].

Jenkins-Traub's Method

Jenkins' Rpoly FORTRAN program, listed in [Jen75], finds all the zeros of real polynomials in power form. Rpoly is based on a three-stage algorithm described in [JT70] which extracts an approximate linear or quadratic factor from a sequence of polynomials, of degree one less than the original polynomial, that are generated by various shifting operations. The real zero or quadratic factor is then deflated from the original polynomial, and the process repeats on the reduced polynomial until all zeros are approximated. The first and second stages are linearly convergent, and the third stage is superquadratically convergent and thus usually requires only a few iterations. Termination criteria for stage three iterations are based on round-off error analysis. Due to its robust and efficient performance, Rpoly has become established as a de facto state-of-the-art algorithm that provides a basis of comparison for many root finding algorithms. Jenkins and Traub's algorithm implemented for real polynomials reported about 2n² milliseconds to calculate all the zeros of degree n ≤ 20 real polynomials, in contrast to about 8n² milliseconds for their complex version [JT70].
Other polynomial root finding publications that refer to these methods include [PFTV90, Ric83, BFR81, RR78, MR75, Dun72, Hou70].

Dunaway's Composite Method

Dunaway's Composite FORTRAN algorithm is included in [Dun72]. It computes all the roots of real power form polynomials, with particular emphasis on solving polynomials possessing multiple roots with greater accuracy and reasonable efficiency than had previously been available. This general purpose algorithm is actually applicable to any class of polynomials with real coefficients. In essence, the algorithm can be divided into two separate stages: finding the real roots first, and then the complex roots. The first part comprises several procedures which first isolate a GCD sequence of unique polynomial factors containing distinct real roots, and then solve for these roots using a Sturm-Newton strategy. The second part isolates the complex factors by forming interpolating polynomials as an alternative to deflating the real roots, and then uses a Lehmer-Schur method to approximate initial root estimates for a complex Newton method. The composite algorithm was compared against [JT70] for real polynomials and yielded a significant increase in accuracy and reasonable speed for the cases tested, which included randomly generated polynomials with root multiplicities from one to nine [Dun74]. Other related works which refer to this composite algorithm include [GK90, FL85, LN79, Mat79, JT75].

Madsen-Reid's Newton-Based Method

Madsen and Reid's Newton-based FORTRAN algorithms for both complex and real polynomial coefficients are included in [MR75]. Based on Madsen's earlier work [Mad73], they utilize a two-stage global and local Newton step process to isolate and approximate all the zeros of a polynomial.
To ensure stable deflation, the algorithm repeatedly approximates the zero of smallest modulus by first successively searching and computing tentative Newton-based steps until the iterate is close enough to apply straightforward Newton approximation to quadratically converge to the root. Error estimation methods expanded from Peters and Wilkinson [PW71] are implemented to enhance convergence criteria for both complex and real coefficient cases. The method reports a two-fold advantage over the Jenkins and Traub algorithm [JT70]: a 2-4 times increase in efficiency with at least as accurate results, and a simpler algorithm yielding less bulky code (object code about 2/3 as long). The FORTRAN code is set up to easily accommodate various precisions, checks for leading and trailing zero coefficients, and normalizes deflated coefficients. Other works referring to this method include [Wes81, RR78, SR77, Moo76].

Laguerre's Method

Laguerre's method FORTRAN and C algorithms [PFTV86, PFTV90] find all the roots of a complex polynomial by first isolating guaranteed initial approximations for any root (real, complex, simple, and multiple) using parabolas, and then applying further refinement, or root polishing, techniques to converge to within the required precision. Laguerre's method is guaranteed to converge independent of any initial values, and incurs the calculation of f(t), f'(t), and f''(t) at each stage of the iteration, which provides cubic convergence near simple real roots and linear convergence near multiple zeros. Other works referring to this method include [Ric83, RR78, Act70, Hou70, Bod49].

2.5 Power Form Real Root Finders

This section reviews existing power form real root finding methods, which may be categorized according to how they isolate the real roots of the polynomial. We consider methods based on:

1. Sturm sequences,
2. differentiation,
3. variation of signs,
4. interval techniques.

2.5.1 Sturm Sequence Techniques

The determination of the number of real roots on an interval using Sturm sequences has been described above. One root-finding technique using Sturm sequences is described in [Dun72]. Another is the method of Hook and McAree.

Hook-McAree's Method

Hook and McAree present a C implementation using Sturm sequences to bracket the real roots of power form polynomials for subsequent refinement by the modified regula falsi method [HM90]. The isolation phase builds a Sturm sequence using a pseudo-polynomial remainder sequence derived from the original polynomial and its first derivative as outlined in [§2.2.2], and then applies a bisection technique to the sequence until a single sign variation is achieved. A ray tracing program, which frequently encounters the solution of ill-conditioned polynomials, used this algorithm to render algebraic surfaces of arbitrary order.

2.5.2 Differentiation Techniques

Polynomial derivative sequences provide natural bounding intervals for the real roots of the given polynomial, as exemplified by the work of Collins and Loos.

Collins-Loos' Method

Collins and Loos [CL76] present a polynomial real root isolation method that utilizes recursive derivative sequences of the given power form polynomial to achieve root interval separation, applying Rolle's theorem, a simple tangent construction heuristic, and polynomial greatest common divisor calculations to determine the existence of two or no roots in an interval. The algorithm is compared both analytically and empirically to Heindel's Sturm sequence algorithm [Hei71], which uses integer arithmetic to compute the real zeros of a polynomial.
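The derivative-sequence principle (isolate the roots of f' first, then use them to split the axis for f, per Rolle's theorem in §2.2.2) can be illustrated with a short recursion. This sketch is my own illustration of that principle, not the Collins-Loos algorithm itself; it assumes all roots are simple, uses the Cauchy bound (Theorem 2.2.1.1) for the outer interval, and will miss a root where f touches the axis without changing sign.

```python
def eval_poly(a, x):
    """Horner evaluation of a[0] + a[1]x + ... (ascending coefficients)."""
    v = 0.0
    for c in reversed(a):
        v = v * x + c
    return v

def bisect(a, lo, hi, tol):
    """Refine the single sign change of f in [lo, hi] by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eval_poly(a, lo) * eval_poly(a, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def real_roots(a, tol=1e-10):
    """Rolle-based recursion: the real roots of f' partition the line into
    intervals each containing at most one (simple) root of f."""
    n = len(a) - 1
    if n == 1:
        return [-a[0] / a[1]]
    B = 1.0 + max(abs(c / a[-1]) for c in a[:-1])     # Cauchy bound
    deriv = [i * c for i, c in enumerate(a)][1:]
    crit = [r for r in real_roots(deriv, tol) if -B < r < B]
    pts = [-B] + crit + [B]
    return [bisect(a, lo, hi, tol)
            for lo, hi in zip(pts, pts[1:])
            if eval_poly(a, lo) * eval_poly(a, hi) < 0]
```

For f(t) = t^3 − 7t + 6 the recursion first locates the two critical points of f near ±1.53, then finds one root of f in each of the three resulting intervals.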
It performs significantly faster in general, which is attributed primarily to the smaller coefficients in the derivative sequence versus the Sturm sequence. Test cases consisted of both random and perturbed random product polynomials, as well as Chebyshev and Legendre polynomials up to degree 25. Other references that cite this work include [CA76, CL82, MC90, Rum79].

2.5.3 Variation of Signs Techniques

Real root isolation strategies that use Descartes' rule of signs to bound the roots are contained in the work by Collins and Akritas.

Collins-Akritas' Method

Collins and Akritas [CA76] present a polynomial real root isolation method that combines Descartes' rule of signs with linear fractional transformations. This is a modification of Uspensky's algorithm, which is based on a theorem by Vincent [Usp48]. The algorithm is 2 to 8 times more efficient than the Collins and Loos method described above. Other sources that cite this work include [CL82, MC90].

2.5.4 Interval Techniques

Real root isolation schemes that construct bounding intervals to contain and exclude the roots include Hansen's Newton interval method and Dedieu and Yakoubsohn's exclusion interval method, respectively.

Hansen's Newton Interval Methods

Hansen develops polynomial real root finding methods [Han78b, Han78a] from concepts introduced by Moore [Moo66] that extend interval analysis to Newton's method. This guarantees isolation and approximation of all the real roots of a polynomial by bounding intervals. Other interval versions of Newton's method are cited in Hansen's works as well as in [AH83, Han69, Moo79]. Further sources extending these concepts to solutions of complex roots, as well as to systems of non-linear equations, include [AH83, GH73, Han69, HG83, Moo77, Moo79].
Dedieu-Yakoubsohn's Exclusion Method

Dedieu and Yakoubsohn's exclusion algorithm [DY93] is a real root isolation scheme that localizes all the real roots of a real polynomial, based on an exclusion function which defines intervals on which the original polynomial does not have any roots. The arrangement of the exclusion intervals provides the necessary bounding intervals for approximation by an appropriate root bracketing scheme [§2.3.2, §2.3.3]. The exclusion method guarantees convergence to the accuracy ε in O(|log ε|) steps and is stable under modifications of the initial iterate as well as under rounding errors. The exclusion scheme proved stable for higher degree polynomials, as opposed to the unstable effects of a Sturm's method with identical step convergence order, although Sturm's method exhibits better efficiency for well-conditioned lower degree polynomial cases. The exclusion algorithm also extends to computing complex roots. The authors provide a C version of the isolating algorithm.

2.6 Bernstein Form Real Root Finders

This section briefly reviews existing Bernstein form polynomial real root finding methods, which are grouped according to how they isolate the real roots, using either: 1. recursive subdivision, 2. Newton-based methods, or 3. hull approximation techniques.

2.6.1 Recursive Subdivision Techniques

Bernstein root finders that repeatedly subdivide to isolate and even approximate the real roots of a polynomial are designated as recursive subdivision techniques. Three methods developed by Lane and Riesenfeld, Rockwood, and Schneider are reviewed below. Another method is presented in this document in [§6.3] that utilizes subdivision and derivative heuristics to isolate each root. The roots are then further refined using a hybrid serial approximation scheme adapted to Bernstein form polynomials.
Lane-Riesenfeld's Method

Lane and Riesenfeld [LR81] couple recursive bisection with properties of the Bernstein form to isolate and eventually approximate real roots over a specified interval. The polynomial is recursively subdivided at its midpoint until either the root is approximated to a specified precision, or no root exists in the polynomial subinterval, whereupon the polynomial segment is eliminated. The variation diminishing property [§2.1.2] is utilized to indicate whether or not a root may exist in the subinterval. Each binary subdivision step incurs O(n^2) operations and provides one bit of accuracy. Other related works which reference this method include [Gra89, MC90, RKF88, Roc89, Sch90a].

Rockwood's Method

Rockwood's algorithm [Roc89] is a variation on the Lane and Riesenfeld method suitable for parallel and vector processing. Instead of binary subdivision, Rockwood uses a stepping heuristic that estimates the root as the linearly interpolated crossing between the two points on the curve receiving the greatest influence from the control points nearest the crossing. This estimate eventually becomes a Newton step in the vicinity of the approximate root, exhibiting quadratic convergence. The ordered real roots are found to a specified tolerance. This work is also referenced in [MC90].

Schneider's Method

Schneider's algorithm [Sch90a] is also a variant of the Lane and Riesenfeld algorithm, using recursive binary subdivision to approximate the ordered roots over the Bernstein interval. A root is determined when either the recursion depth limit is reached, or when the control polygon crosses the abscissa once and approximates a straight line, or is "flat enough"; the root in the corresponding region is then computed as the intersection of the t-axis with the chord joining the first and last control points.
The "flat enough" termination criterion is achieved when the maximum perpendicular distance of the control points to the chord is bounded by a specified tolerance.

2.6.2 Newton-Based Techniques

Bernstein root finding schemes that use Newton-type steps to isolate and approximate the real roots of a polynomial are exemplified by the algorithms of Grandine and of Marchepoil and Chenin.

Grandine's Method

Grandine [Gra89] computes all the zeros of a univariate spline function using a Newton interval method that requires bounds on the derivative of the function, without breaking the interval into its individual polynomial pieces. The method exhibits Newton convergence for simple roots, and thus at best linear convergence for multiple roots.

Marchepoil-Chenin's Method

A report by Marchepoil and Chenin, in French [MC90], uses a combination of Bernstein subdivision techniques with Newton iterations to find the ordered set of real roots over an interval for ray tracing applications. Unfortunately, the author was unable to obtain a complete translation of this report and thus cannot provide a detailed comparison with other methods. The method is benchmarked against the Rockwood and Lane-Riesenfeld algorithms described above.

2.6.3 Hull Approximation Techniques

Bernstein root finding methods can also exploit the convex hull property to isolate and even approximate the real roots of a polynomial. A method by Rajan, Klinkner, and Farouki exemplifies this type of method. Two other methods are presented in this document that use optimal convex hulls to provide steps that in the first case [§6.4] isolate the ordered real roots (which are subsequently approximated using a hybrid serial approximation method adapted to Bernstein form polynomials), and in the second case [§6.1] successively approximate the ordered real roots.
Rajan-Klinkner-Farouki's Method

Rajan et al. [RKF88] use parabolic hulls to isolate and approximate simple real roots of a Bernstein form polynomial. A parabolic hull is a parabolic generalization of the convex hull property [§2.1.2] that bounds the polynomial from above and below. This method favors high degree polynomials (examples up to degree 2048) with few roots over the interval of interest, and was up to several orders of magnitude faster than other algorithms for this type of problem. The scheme exhibits cubic convergence when approximating a root. Generalization of the technique to higher-degree hulls is also outlined.

Chapter 3

Review of Numerical Considerations

Robust computer implementation of polynomial root finding methods requires, in addition to a formal mathematical description of the algorithm, a keen awareness of the limitations of the floating-point number system and of issues of numerical stability and error propagation. This chapter reviews key numerical considerations pertaining to the implementation of polynomial real root finding algorithms. Section [§3.1] reviews sources and types of numerical error noted in the literature. Section [§3.2] discusses general deterministic and indeterministic analysis techniques for estimating bounds for roundoff error. Section [§3.3] reviews numerical stability and condition properties, including the condition of polynomial roots with respect to perturbations in the coefficients, with particular attention to the power and Bernstein representations. Section [§3.4] reviews and provides an example of the conversion process between the Bernstein and power forms of a polynomial. Finally, section [§3.5] presents guidelines for benchmarking the performance of polynomial root finders, identifying various classes of polynomials appropriate for testing.
3.1 Numerical Error

The computer implementation of mathematical algorithms involves a variety of ways in which numerical errors may be generated, propagated, and amplified. This section outlines the various types and sources of numerical error discussed in the literature [Buc66, Dod78, Dun72, Hen82, Hil74, Hou53, McN73, Mor83, Pen70, PC86, Piz75, RR78, Sta70], with particular emphasis on floating-point roundoff error, indicating guidelines to minimize the error whenever possible.

3.1.1 Sources and Types of Numerical Error

We may distinguish between two distinct types of error, which arise in the "modeling" and "computing" stages:

Modeling Error (as distinct from numerical error) reflects the deficiencies of, for example, a discrete approximation to a continuous mathematical problem, or the possible failure of an iterative method to have guaranteed convergence to the desired solution.

Computational Error measures the numerical errors due to implementing an algorithm in floating-point arithmetic on a computer. Computational error may be classified into the following categories:

Gross Error is caused by mistakes, oversights, or malfunctions of man or machine.

Truncation Error results from using a finite approximation to represent an ideal function, equation, or value that does not have a tractable finite representation.

Convergence Error arises from either a divergent iterative process, or premature termination of a convergent process.

Overflow-Underflow Error results from the finite dynamic range of the floating-point number system. It may be largely avoided by using appropriate data scaling and normalization techniques.

Roundoff Error arises from the fact that the results of floating-point operations must be rounded to be representable within the floating-point number system.
This incurs errors at each arithmetic step of an algorithm, and the propagation and possible amplification of errors incurred in preceding steps.

Although distinct from computational error, the proper modeling and formulation of a mathematical problem is an important aspect of robust implementation [§3.3]. As a general guideline for minimizing numerical error, computations should be kept as concise as possible, although there are exceptions to even this rule.

Causes of Computational Error

Computational errors arise under a variety of circumstances, such as:

1. Inherent error in the input data of a computation. This may arise from physical measurement errors, the fact that the input data is itself the result of floating-point computations, or simply the need to round "exact" input to floating-point format.

2. Cancellation error in subtracting nearly equal numbers, which incurs an amplification of pre-existing relative errors in the operands. This may be ameliorated by rearranging sums so that differences close to zero are avoided, and by using reducible subtraction operations [Vig78], which detect and ensure the summation of like-signed numbers.

3. Roundoff in adding or subtracting one large and one small number. Summing numbers in increasing magnitude helps remedy this source of error.

4. Simple roundoff due to the ordinary arithmetic operations of addition, subtraction, multiplication, and division.

5. Floating-point overflow or underflow (e.g., due to division by a very small number).
Other pertinent tips for minimizing roundoff error include: avoid (if possible) expressions of the form a^b where b is a floating-point number; use double precision for floating-point calculations; rearrange formulas to use original data rather than derived data; rearrange formulas to minimize arithmetic operations; avoid chaining operations which involve round-off error; and work with integers whenever possible and economically feasible.

3.1.2 Roundoff Error

Discussions regarding floating-point computation preface most numerical analysis texts in part, and the subject is exclusively defined and examined by Sterbenz [Ste74]. In addition, a text by Plauger [Pla92] provides extremely useful insight regarding the standardized floating-point constants for a UNIX-based machine, usually located in the file /usr/include/float.h.

Floating-Point Computation

A computer expression in F formed by the set of computer arithmetic operators

{⊕, ⊖, ⊗, ⊘} (3.1.2.1)

numerically approximates an algebraic expression defined over the set of real numbers R and formed by the set of exact arithmetic operators

{+, −, ×, /}, (3.1.2.2)

and results in numerical rounding of the algebraic expression, where F and its respective operators define the set of floating-point numbers and operations. A floating-point number x has the form

x = m·β^e ∈ [x_min, x_max] ⊂ F (3.1.2.3)

where the signed number m ∈ F is the normalized mantissa of x, having d significant digits in the base β of the machine, and e ∈ Z is the exponent of x, lying within some range [e_min, e_max].

Machine Epsilon and Roundoff Unit

The precision of floating-point arithmetic is characterized by the machine epsilon ε, which is defined as the smallest floating-point number such that

1 ⊕ ε > 1, i.e., ε = β^(1−d), (3.1.2.4)
and its corresponding roundoff error bound by the machine roundoff unit u [Mor83], which is defined as the smallest floating-point number such that 1 ⊕ u = 1, namely

u = ε/2 = (1/2)·β^(1−d), (3.1.2.5)

which reduces to u = 2^(−d) on a binary machine. A function that calculates the machine epsilon for any machine is given in the following algorithm [FMM77].

Algorithm 3.1.2.1 (ε = ComputeMachineEps()) Compute and return the machine epsilon.
1. ε = 1
2. While ((1 + ε) > 1)
3.   ε = ε/2
   EndWhile
End of ComputeMachineEps

Roundoff Error Due to Floating-Point Operations

The absolute error of a computed value x due to roundoff of a binary floating-point operation {⊕, ⊖, ⊗, ⊘} is represented by

ε_x (3.1.2.6)

and the corresponding relative error is denoted by

ε_x / x, (3.1.2.7)

which measures the precision, or the number of correct digits, in x. The formulas that account for the roundoff error due to floating-point operations are

x ⊕ y = (x + y)(1 + δ)
x ⊖ y = (x − y)(1 + δ)
x ⊗ y = (x × y)(1 + δ)
x ⊘ y = (x / y)(1 + δ)

where δ is the relative roundoff error introduced by a single floating-point operation, which satisfies the relation |δ| ≤ u [Dod78, Hil74, Hou53, IK66, KM81, Pen70, PW71, PW83, Ric83, Sta70]. Note that these formulas do not take into consideration inherent or pre-existing errors in the operands (see [§3.2.2], [PW83]).

3.2 Estimating Bounds for Roundoff Error

There are a variety of approaches to monitoring the propagation of rounding errors in numerical computations so as to estimate error bounds for computed quantities. They may be broadly classified as either deterministic or indeterministic techniques [BC86, FV85, Ham71, Hen82, Mil75b, RR78, Ric83, Ste74, Wil63].
Deterministic techniques estimate an upper bound for the roundoff error incurred during a computation, and may be divided into the following approaches:

1. Forward Error Analysis
2. Running Error Analysis
3. Backward Error Analysis
4. Interval Error Analysis

Indeterministic techniques estimate an average bound, based on certain measures of reliability, for the roundoff error incurred during a computation, and separate into the following current approaches:

1. General Stochastic Error Analysis
2. Permutation-Perturbation Error Analysis

It is important to note that these methods yield estimates of the propagated roundoff error at the expense of a loss of efficiency in execution and a more complicated implementation. The general purpose polynomial root finder implemented by Madsen and Reid, discussed in [§2.4], reported that the running error analysis overhead increased CPU time by usually about 25% for simple roots and up to 100% when many multiple roots were considered [MR75]. Thus, a good rule of thumb is to use methods that avoid roundoff error analysis when possible, and to use error analysis judiciously when needed.

3.2.1 Forward Error Analysis

Forward error analysis bounds the absolute or relative roundoff errors of the intermediate steps of a computation. Various perspectives on forward error analysis are presented in [Mil75b, PW71, RR78, Ric83, Ste74, Wil63]. This approach is often too conservative, yielding error bounds that can be many orders of magnitude larger than the actual error and thus virtually useless when used as a convergence criterion. In addition, forward error analysis is tedious and often difficult to apply to complex computations (although, applied skillfully to straightforward equations, this approach can produce a tight error bound on the computed value). Also, it does not account for cancellation effects; see [§4.2.1] below.
3.2.2 Running Error Analysis

Peters and Wilkinson coined the term running error analysis in [PW71] to describe a method that computes error estimates "on-the-fly" for the case of Horner evaluation of f(x). The essence of this method is to obtain, for each intermediate step of the computed value, an absolute or relative bound on the error due to the effects of both the inherent and the roundoff error. Such methods are often termed linearized running error analyses, since they neglect higher-order terms in small quantities. Works that discuss concepts related to running error analysis include [Hen64, Hen82, Hil74, Hou53, Mor83, Pen70].

The absolute error of a computed value x, represented by

ε_x = ε_x^∘ + ε_x^ω, (3.2.2.8)

that accounts for both roundoff and inherent error when computed with a binary operator, consists of a component ε_x^∘ due to roundoff from the floating-point operation [§3.1.2] as well as a component ε_x^ω due to the inherent error which already exists in x from prior sources of error [PW83]. The corresponding relative error is measured and denoted by

ε_x / x = ε_x^∘ / x + ε_x^ω / x. (3.2.2.9)

There are two main approaches presented in the literature that account for the inherent and roundoff error terms in equations (3.2.2.8) and (3.2.2.9).

Binary Operator approach is based on the fact that variables in any sequence of a computation are only processed two at a time by binary operators. Thus, the operations composing the entire computation are arranged into a sequence of binary operations. The error bound is obtained by accumulating the errors from each binary operation, such that the effects of each binary operation contribute to the next, and so forth. A calculation graph which maps the two operands of each binary operation to the corresponding nodes of a binary graph is presented in [DM72].
Partial Derivative Operator approach is based on the idea that a computation can be expressed as the sum of a sequence of successive linear operators. A partial differential expression represents the sum of these successive linear operators, each of which corresponds to an independent variable. The error bound is obtained by applying the chain rule to the partial derivatives of each successive linear operator, and thus is the sum of the effects of all the errors contributing to the computation. A calculation graph which maps each partial derivative operation to a corresponding node of a directed graph is presented in [Pen70, PW83].

Both methods work for simple expressions. Complex formulations need to be decomposed or factored into simple parts, which are then composed and analyzed as simple functions. Although running error analysis provides reasonably tight bounds, it at least doubles the operations in evaluating a particular expression.

3.2.3 Backward Error Analysis

Rather than tracking the absolute or relative errors of computed values at each intermediate step of an algorithm, backward error analysis is concerned with showing that the computed result is exact for some "neighboring" problem, corresponding to perturbed input parameters. The approach indicates a range of input values which give results differing from the true solution by an amount that reflects the cumulative effects of the intermediate computational errors. Sources describing this process include [Hen82, Mil75b, Mil75a, RR78, Ric83, Ste74, Wil63]; see also [§4.2.3] below.

3.2.4 Interval Error Analysis

Interval error analysis, introduced by Moore [Moo66] and also covered by [AH83, GL70, Han69, KM81, Moo79, RL71, Ste74], replaces each computed value x with a scalar interval represented by

[a, b] = {x | a ≤ x ≤ b}. (3.2.4.10)
Interval arithmetic operations for two intervals [a, b] and [c, d] are represented by

[a, b] + [c, d] = [a + c, b + d]
[a, b] − [c, d] = [a − d, b − c]
[a, b] × [c, d] = [Min(ac, ad, bc, bd), Max(ac, ad, bc, bd)]
[a, b] / [c, d] = [a, b] × [1/d, 1/c]

where interval division is defined only for denominator intervals that do not contain 0. Although interval analysis gives a guaranteed bound on the error of the computed value, this bound is often too conservative and can be very expensive due to interval arithmetic overhead if not judiciously applied [Ham71]. Interval analysis has been applied to algorithms involving Bernstein-form computations [Rok77, SF92], bounds on interval polynomials [Rok75, Rok77], and the determination of polynomial zeros [GH72, Han70].

3.2.5 General Stochastic Error Analysis

Stochastic error analysis techniques estimate the correctness of a computation by studying the statistical variance and mean values of the generated roundoff error. These values are not upper (worst-case) error bounds, but rather approximations to an upper error bound to within a certain level of confidence. Various concepts and techniques for the general application of this process are presented in [Buc66, Ham71, Hil74, Hen64, Hou53, KL73, Pen70, Piz75, RR78, Ric83, Ste74, Wil63]. One well-tested approach is the permutation-perturbation stochastic method, which is reviewed in the next subsection.
3.2.6 Permutation-Perturbation Error Analysis

The permutation-perturbation, or CESTAC, technique of error analysis, introduced by Vignes and La Porte [VL74] and further developed in [FV85, Vig78], determines the precision of a floating-point computation by considering the stochastic effects that roundoff has on the computed values, due to the permutation of the computer arithmetic operations and the perturbation of their possible images. The method essentially simplifies to the following three steps, which provide an error estimate, with a computed number of significant digits, with a probability of 95% as a consequence of Student's law and the central limit theorem [Alt86].

1. Compute three values {x_i}, i = 1, 2, 3, via a (machine dependent) algorithm that randomly permutes the permutable operators and randomly perturbs the last bit of the mantissa of any intermediate value.

2. Compute the mean value x̄ and the standard deviation σ of the three values {x_i} by

x̄ = (Σ_{i=1}^{3} x_i) / 3  and  σ = sqrt( Σ_{i=1}^{3} (x_i − x̄)² / 2 ).

3. Compute the number of significant digits d of x̄ from the formula

d = log₁₀ |x̄ / σ|. (3.2.6.11)

Although the computational overhead is increased by at least a factor of three (step 1), this method provides an extremely accurate estimate of the incurred roundoff error. The method has been successfully applied to algorithms involving computational geometry [DS90], eigenvalues [Fay83], Fourier transforms [BV80], integration of ordinary differential equations [Alt83], linear and non-linear programming [Tol83], optimization [Vig84], and parallel and closed form polynomial root finding [Alt86, Vig78]. It has been recommended for scientific computing in general in [BC86, Vig88], with sample FORTRAN codes of the machine dependent permutation-perturbation function required in the initial step provided in [BV80, Vig78].
3.3 Numerical Stability and Condition

An important indication of whether or not computational errors are likely to be significant in a given problem is the condition, or "intrinsic stability," of that problem. If small changes in the input values induce large changes in the (exact) solution, the problem is said to be ill-conditioned, and it is unrealistic to expect high-accuracy solutions from finite-precision computations. A related, but distinct, issue is whether the particular algorithm used to address a given problem is stable (e.g., whether it incurs subtraction of near-identical quantities). This section reviews some basic definitions and concepts pertaining to numerical stability and conditioning, examines the condition of polynomial roots under perturbation of the coefficients, and compares the condition of the Bernstein and the power polynomial forms.

3.3.1 Preliminary Definitions

We summarize below the basic concepts and terminology concerning numerical stability and condition [FR87, Hen82, RR78, Ric83].

Numerical Stability is a property of an algorithm which measures its propensity for generating and propagating roundoff error and inherent errors in the input data.

Numerical Instability results when the intermediate errors due to roundoff strongly influence the final result.

Numerical Condition is a mathematical property of a problem which measures the sensitivity of the solution to perturbations in the input parameters.

Condition Number is one or more scalar values that describe the condition of a problem, estimating how much uncertainty in the initial data of a problem is magnified in its solution; condition numbers may be formulated in terms of both absolute and relative perturbations of the input and output values.

Well-Conditioned describes a problem characterized by a small condition number, i.e.,
changes of the initial data yield commensurate changes of the output data, so the problem can be solved to reasonable accuracy in finite-precision arithmetic.

Ill-Conditioned describes a problem characterized by a large condition number, changes of the input leading to much larger changes of the output, so that accurate solutions are difficult to compute in finite-precision arithmetic.

Thus, it is almost impossible for an ill-conditioned problem to be numerically stable in implementation [Ric83]. On the other hand, an unstable method applied to a well-conditioned problem may at first produce accurate results, until the propagation of roundoff accumulates to yield erroneous intermediate and final results [RR78]. Since the condition of a problem is dependent upon its formulation, it may be possible to ameliorate the effects of ill-conditioning by reformulating the problem, i.e., restating it in terms of different input/output variables [Ric83].

3.3.2 Condition of Perturbing Polynomial Coefficients

The numerical condition of polynomials with respect to root finding is analyzed by perturbing the coefficients and observing the resulting effects on the roots. This is accomplished by establishing a relationship between the roots {α_i} of the original polynomial

f(x) = Σ_{i=0}^{n} a_i x^i

and the roots {α_i + δα_i} of the perturbed polynomial

f̃(x) = Σ_{i=0}^{n} (a_i + δa_i) x^i, (3.3.2.12)

where for purposes of analysis the coefficient errors {δa_i} are regarded as being specified arbitrarily (in practice they might reflect rounding errors due to earlier computations). For instance, the well-known Wilkinson polynomial [Wil63], represented in product form by

f(x) = Π_{i=1}^{20} (x + i), {α_i} = {−1, −2, ..., −20}, (3.3.2.13)

yields the following roots {α_i + δα_i} (correct to 9 decimal places):
−1.000000000
−2.000000000
−3.000000000
−4.000000000
−4.999999928
−6.000006944
−6.999697234
−8.007267603
−8.917260249
−10.095266145 ± 0.643500904 i
−11.793633881 ± 1.652329728 i
−13.992358137 ± 2.518830070 i
−16.730737466 ± 2.812624894 i
−19.502439400 ± 1.940330347 i
−20.846908101

when the power coefficient a_19 is perturbed by as little as δa_19 = 2^−23 [Hou70, PC86, Piz75, RR78, Wil63]. These results illustrate the difficulties in computing the roots of high degree polynomials represented in power form.

3.3.3 Condition of Bernstein and Power Forms

Both analytical [FR87, FR88] and empirical [SP86] studies indicate that polynomials expressed in Bernstein form can provide superior computational stability and root conditioning compared with the power form. The following results regarding the improved root condition of the Bernstein basis over the power basis, outlined in [FR87], are duplicated here for convenience:

Given a Bernstein basis defined over an arbitrary interval [a, b] (typically [0, 1]) that contains all the roots of interest, and a power basis defined about any point (typically the origin) excluding the interior of [a, b], then for any simple real root of an arbitrary polynomial the root condition number

1. is smaller in the Bernstein basis than in the power basis,
2. decreases monotonically under Bernstein degree elevation,
3. decreases monotonically under Bernstein subdivision, and
4. is smaller in the Bernstein basis than in any other basis which may be expressed as a non-negative combination of the Bernstein basis on that interval.
The same holds true for multiple roots if the condition number (which is formally infinite) is regarded as the constant factor of the non-linear growth rate of the root displacements with the coefficient perturbations.

3.4 Conversion Between Bernstein and Power Forms

Consider a polynomial in Bernstein form over the [0, 1] domain

    B(t) = sum_{i=0}^{n} b_i C(n,i) (1-t)^(n-i) t^i,

where C(n,i) denotes the binomial coefficient, and the same polynomial in power form

    P(t) = sum_{i=0}^{n} p_i t^i.

The problem we consider is: given the Bernstein coefficients b_i, find the corresponding power coefficients p_i (Bernstein to power basis conversion), or, given the p_i, find the b_i (power to Bernstein conversion). An elegant solution to this problem can be obtained by performing a Taylor series expansion of B(t):

    B(t) = B(0) + B'(0) t + B''(0) t^2/2! + ... + B^(n)(0) t^n/n!.

A power basis polynomial is equivalent to a Bernstein basis polynomial (P(t) = B(t)) if and only if

    P^(i)(0) t^i / i! = B^(i)(0) t^i / i!,    i = 0, ..., n.

But, for the power basis, P^(i)(0)/i! = p_i, so

    p_i = B^(i)(0) / i!,    i = 0, ..., n.    (3.4.0.14)

The terms B^(i)(0) can be found as follows. Recall from Section 5.6 that the coefficients of the derivative of a polynomial in Bernstein form are

    n(b_1 - b_0), n(b_2 - b_1), ..., n(b_n - b_{n-1}).

The coefficients of the second derivative are

    n(n-1)(b_2 - 2b_1 + b_0), n(n-1)(b_3 - 2b_2 + b_1), ..., n(n-1)(b_n - 2b_{n-1} + b_{n-2}).

Since B(0) = sum_{i=0}^{n} b_i C(n,i) (1-0)^(n-i) 0^i = b_0, we have

    B'(0) = n(b_1 - b_0),
    B''(0) = n(n-1)(b_2 - 2b_1 + b_0),
    ...
    B^(i)(0) = n(n-1)...(n-i+1) sum_{j=0}^{i} (-1)^(i-j) C(i,j) b_j.

This can be written more neatly if we define the recurrence

    b^j_i = b^{j-1}_{i+1} - b^{j-1}_i,    with b^0_i = b_i.

Then

    B^(i)(0) = n(n-1)...(n-i+1) b^i_0 = [n!/(n-i)!] b^i_0.

From equation (3.4.0.14),

    p_i = [n!/((n-i)! i!)] b^i_0 = C(n,i) b^i_0.

Thus, the problem reduces to one of finding the values b^i_0. This is easily done using a difference table:

    b^0_0 = b_0 = p_0                     b^0_1 = b_1             ...  b^0_n = b_n
    b^1_0 = b^0_1 - b^0_0 = p_1/C(n,1)    b^1_1 = b^0_2 - b^0_1   ...  b^1_{n-1} = b^0_n - b^0_{n-1}
    b^2_0 = b^1_1 - b^1_0 = p_2/C(n,2)    b^2_1 = b^1_2 - b^1_1   ...  b^2_{n-2} = b^1_{n-1} - b^1_{n-2}
    ...
    b^{n-1}_0 = b^{n-2}_1 - b^{n-2}_0 = p_{n-1}/C(n,n-1)    b^{n-1}_1 = b^{n-2}_2 - b^{n-2}_1
    b^n_0 = b^{n-1}_1 - b^{n-1}_0 = p_n

Thus, to perform Bernstein to power basis conversion, load the Bernstein coefficients into the top row and compute the difference table; scaling the left column by C(n,i) then yields the power coefficients. To perform power to Bernstein conversion, divide the power coefficients by C(n,i), load them into the left column, compute the difference table backwards, and read the Bernstein coefficients off the top row.

3.4.1 Example

Convert to power basis the degree 4 Bernstein form polynomial with coefficients (1, 3, 4, 6, 8). This is done by setting up the difference table

    1    3    4    6    8
      2    1    2    2
       -1    1    0
          2   -1
           -3

so the power coefficients are taken from the left column, times the binomial coefficients:

    p_0 =  1 * C(4,0) =  1
    p_1 =  2 * C(4,1) =  8
    p_2 = -1 * C(4,2) = -6
    p_3 =  2 * C(4,3) =  8
    p_4 = -3 * C(4,4) = -3

3.4.2 Closed Form Expression

The conversion from Bernstein to power basis can be written concisely as follows:

    p_i = sum_{k=0}^{i} b_k C(n,i) C(i,k) (-1)^(i-k).

Power to Bernstein conversion is accomplished by the formula:

    b_i = sum_{k=0}^{i} [C(i,k)/C(n,k)] p_k.    (3.4.2.15)

3.4.3 Numerical Ramifications of Basis Conversion

The numerical condition of basis transformation has been studied in [FR88, DD88, Far91b].
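The difference-table procedure of Section 3.4 and the closed form (3.4.2.15) are straightforward to implement. The following sketch (plain Python; the function names are ours) performs both conversions and can be checked against the worked example of Section 3.4.1.

```python
from math import comb

def bernstein_to_power(b):
    """Bernstein -> power conversion by the difference table (Section 3.4):
    load the Bernstein coefficients into the top row, difference repeatedly,
    and scale the left column by the binomial coefficients C(n, i)."""
    n = len(b) - 1
    row = list(b)
    p = [row[0]]                              # p_0 = b_0
    for i in range(1, n + 1):
        row = [row[j + 1] - row[j] for j in range(len(row) - 1)]
        p.append(comb(n, i) * row[0])
    return p

def power_to_bernstein(p):
    """Power -> Bernstein conversion via the closed form (3.4.2.15):
    b_i = sum_{k=0}^{i} C(i, k) / C(n, k) * p_k."""
    n = len(p) - 1
    return [sum(comb(i, k) * p[k] / comb(n, k) for k in range(i + 1))
            for i in range(n + 1)]
```

For the Section 3.4.1 example, bernstein_to_power([1, 3, 4, 6, 8]) returns [1, 8, -6, 8, -3], and power_to_bernstein applied to that result recovers the original Bernstein coefficients.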
The condition number of the basis transformation matrix gives an upper bound, for arbitrary polynomials, on the largest possible factor by which basis conversion amplifies the relative coefficient error. It grows exponentially with the polynomial degree n.

3.5 Performance of Polynomial Root Finders

Guidelines for performance comparisons of mathematical software are proposed in [CDM79, Ign71, Ric83], and more specifically for polynomial root-finding algorithms by Jenkins and Traub in [JT74, JT75] and by Wilkinson in [Wil63]. This section reviews principles for establishing and evaluating the overall performance of polynomial root-finding algorithms based on the above references.

3.5.1 Benchmarking Principles

The criteria for testing root-finding algorithms may be concisely summarized under four categories: 1) Robustness, 2) Convergence, 3) Deficiencies, and 4) Assessment.

Program Robustness is validated with pathological examples that test for specific program properties and underlying methods, such as:

1. Checking leading coefficients that approximate zero,
2. Checking trailing coefficients that approximate zero,
3. Proper handling of low degree polynomials,
4. Proper scaling of polynomial coefficients to avoid floating-point overflow and underflow,
5. Assessing the condition of each zero and adjusting the precision of computation accordingly,
6. Specifying iteration limits, and
7. Providing a failure exit or testable failure flags.

Program Convergence is a computational problem and needs validation regarding the:

1. Starting conditions for proper convergence,
2. Convergence of iterate sequences to acceptable tolerances,
3. Slow convergence of iterate sequences, as in multiple-zero cases,
4. Divergence of iterate sequences, and
5. Program termination criteria.
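Several of the checklist items above (iteration limits, convergence tolerances, testable failure flags, a guard against a vanishing derivative) can be made concrete with a minimal Newton iteration. This is an illustrative sketch only, not any of the cited programs; the function name and status strings are ours.

```python
def newton_poly(coeffs, x0, tol=1e-12, max_iter=60):
    """Newton iteration on p(x) = sum coeffs[i] * x**i, with the safeguards
    discussed above: an iteration limit, a relative convergence tolerance,
    and a testable status flag instead of silent non-termination."""
    def horner(c, x):
        # Evaluate p(x) and p'(x) together by Horner's rule.
        p = dp = 0.0
        for a in reversed(c):
            dp = dp * x + p
            p = p * x + a
        return p, dp

    x = x0
    for _ in range(max_iter):
        p, dp = horner(coeffs, x)
        if dp == 0.0:
            return x, "derivative-vanished"   # failure flag, not an exception
        step = p / dp
        x -= step
        if abs(step) <= tol * max(1.0, abs(x)):
            return x, "converged"
    return x, "iteration-limit"               # e.g. slow convergence at a multiple zero
```

For example, newton_poly([-2.0, 0.0, 1.0], 1.0) converges to the square root of 2 with status "converged"; a caller can branch on the returned status rather than trusting the iterate unconditionally.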
Program Deficiencies, i.e., specific known implementation defects or weaknesses, are important to document and are tested by forcing:

1. A program to produce an undefined result,
2. The decision mechanisms of the program to fail, and
3. Roundoff error to destroy the accuracy of the solution, as in deflating zeros.

Program Assessment regarding performance is best analyzed by comparing program statistical information based on the establishment of meaningful measures:

1. Reliability of the solution.
2. Accuracy, or closeness to the exact solution.
3. Precision, or the number of significant digits in the answer.
4. Efficiency measurements, including: CPU time based on accurate timers and uniformly optimized code; function evaluations; arithmetic operations; and storage requirements regarding arrays, stacks, and object code size.
5. Portability of the program to different computer platforms.

Polynomial Classes for Performance Assessment

Classes of polynomials for performance assessment should include both well-conditioned and ill-conditioned polynomials. A few well-conditioned cases should be tested against other algorithms in the literature, with the major testing focusing on ill-conditioned cases.

1. Well-conditioned Polynomials

Uniformly Modulated Zeros yield similar zero cases. Polynomials constructed from uniformly distributed random zeros differ little from each other because the randomness tends to `average out' in the coefficients.

Uniformly Modulated Coefficients yield uniformly distributed zeros. Polynomials constructed from uniformly distributed random coefficients tend to have their zeros uniformly distributed around the origin near the unit circle.

2. Ill-conditioned Polynomials

Dual Bands of Modulated Zeros yield clustered zeros. Polynomials constructed from both a wide band and a much narrower band of uniformly distributed random zeros produce ill-conditioned polynomials with clustered zeros, where the severity of the condition is controlled by the amount the narrower band is decreased and its percentage of zeros is increased.

Widely Modulated Zeros yield widely varying sizes of zeros. Polynomials constructed from zeros whose mantissa and exponent are chosen from separate random uniform distributions yield zeros of widely varying size, which often illustrate inefficiencies in a program not apparent from other tests; e.g., poorly chosen initial iterates may tend to `wander' slowly to the area of the zero, and such behavior will be magnified on this class of polynomials.

Widely Modulated Coefficients yield widely varying intervals of zeros. Polynomials constructed from coefficients whose mantissa and exponent are chosen from separate random uniform distributions yield zeros with widely varying intervals, which typify wide applicability of the program.

Sources that define examples of the various generated polynomials include [Dun72, GK90].

Chapter 4

Preconditioning for Bernstein form polynomials

As mentioned in Chapter 1, several fundamental geometrical problems that arise in the processing of free-form curves and surfaces may be reduced computationally to the task of isolating and approximating the distinct real roots of univariate polynomials on finite intervals. Representative examples are: ray-tracing of surfaces [Han83, Kaj82, SA84]; computing intersections of plane curves [GSA84, SAG85, SP86]; finding a point on each loop of the intersection curve of two surfaces [Far86]; and the "trimming" of offset and bisector curves [FJ94, FN90a].
Such an approach is attractive in that it provides an algebraically precise formulation of the geometrical problem under consideration (given an "algorithmic" root finder), but it is widely perceived to be impractical in all but the simplest cases, due to the potentially poor numerical condition of the high-degree polynomials incurred. Thus, it has not found as widespread application as subdivision methods [CLR80, LR80] based on successively refined piecewise-linear approximations of curves and surfaces (generated recursively from their Bezier/B-spline control polygons).

In the aforementioned problems, the polynomials whose roots we seek are not specified ab initio, but rather must be "constructed" by a sequence of floating-point arithmetic operations on given numerical data, which may incur significant errors in the final polynomial coefficients. Our investigations suggest that the sensitivity of the roots to errors in the polynomial coefficients incurred during the construction stage typically dominates any error associated with the actual root-finding process. In this chapter we describe an approach to processing polynomials in Bernstein form that can significantly improve the accuracy of computed roots by minimizing adverse effects of the polynomial "construction" errors. In essence, the approach amounts to performing de Casteljau subdivision in the earliest stages of an algorithm, a procedure that we shall call "a priori subdivision." As is well known [FR87], the Bernstein form exhibits monotonically decreasing root condition numbers with respect to subdivision.
The practical meaning of this statement is that, if Bernstein representations are available to the same relative accuracy in the coefficients on both the nominal interval [0, 1] and a subset [a, b] thereof, then the latter form always exhibits smaller worst-case errors for those roots actually located on the smaller interval. Obviously, explicit floating-point subdivision of a polynomial with given initial coefficient errors cannot result in any diminution of the perturbations of its roots due to those initial errors; one cannot gain "something for nothing." Our key observation here, however, is that the consequences of errors incurred in the polynomial constructions for various geometrical operations can be ameliorated by supplying them with input polynomials that have been subdivided, even in floating-point arithmetic, a priori.

4.1 An Illustrative Example

A simple experiment on a well-known pathological example [Wil59] serves to motivate the proposed approach. Consider the degree-n Wilkinson polynomial having evenly spaced roots k/n, k = 1, ..., n, between 0 and 1. In the power basis, we have

    P_n(t) = prod_{k=1}^{n} (t - k/n) = sum_{k=0}^{n} p_k t^k.    (4.1.0.1)

Here the polynomial P_n(t) is "constructed" by explicitly multiplying out, in floating-point arithmetic, the linear factors in the above product expression to obtain the power coefficients {p_k}. Column 2 of Table 4.1 shows the results of attempting to approximate the roots in the case n = 25 using this "constructed" power representation. The cause of the very large inaccuracies in the computed roots is discussed in [Wil59].
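The construction just described is easy to reproduce. The sketch below (plain Python; variable names are ours) multiplies out the linear factors of P_n in floating point for n = 20 and evaluates the resulting power form at t = 0.525, halfway between the roots 0.50 and 0.55, comparing the summed power-form value against a stable evaluation of the product form.

```python
n = 20
t = 0.525                       # halfway between the roots 0.50 and 0.55

# "Construct" the power coefficients (lowest degree first) by multiplying
# out the factors (t - k/n) in floating-point arithmetic.
p = [1.0]
for k in range(1, n + 1):
    r = k / n
    nxt = [0.0] * (len(p) + 1)
    for i, ci in enumerate(p):
        nxt[i] += -r * ci       # constant part of the factor (t - r)
        nxt[i + 1] += ci        # t part of the factor
    p = nxt

terms = [p[i] * t ** i for i in range(n + 1)]
value_from_sum = sum(terms)

# Stable reference value, evaluated directly from the product form.
value_from_product = 1.0
for k in range(1, n + 1):
    value_from_product *= t - k / n

# Ratio of the largest individual term to the true value of P_20(t).
magnification = max(abs(x) for x in terms) / abs(value_from_product)
```

In this experiment magnification comes out on the order of 10^13: the largest terms p_k t^k are about 0.2 in magnitude, while the true value of P_20(t) is of order 10^-15, so relative coefficient errors are amplified by that factor.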
A simple intuitive understanding may be gleaned from the following calculation, taken from [Far91a], for the value of the polynomial halfway between the roots t = 0.50 and t = 0.55 in the case n = 20 (the values are correct to the number of digits shown):

    p_0        = +0.000000023201961595
    p_1 t      = -0.000000876483482227
    p_2 t^2    = +0.000014513630989446
    p_3 t^3    = -0.000142094724489860
    p_4 t^4    = +0.000931740809130569
    p_5 t^5    = -0.004381740078100366
    p_6 t^6    = +0.015421137443693244
    p_7 t^7    = -0.041778345191908158
    p_8 t^8    = +0.088811127150105239
    p_9 t^9    = -0.150051459849195639
    p_10 t^10  = +0.203117060946715796
    p_11 t^11  = -0.221153902712311843
    p_12 t^12  = +0.193706822311568532
    p_13 t^13  = -0.135971108107894016
    p_14 t^14  = +0.075852737479877575
    p_15 t^15  = -0.033154980855819210
    p_16 t^16  = +0.011101552789116296
    p_17 t^17  = -0.002747271750190952
    p_18 t^18  = +0.000473141245866219
    p_19 t^19  = -0.000050607637503518
    p_20 t^20  = +0.000002530381875176
    ------------------------------------
    P_20(t)    =  0.000000000000003899    (4.1.0.2)

We see that the value of P_20(t) is an extremely small residual resulting from the addition of large terms of alternating sign. Owing to massive cancellations, the final value is some 10^13 times smaller than the individual terms p_k t^k, and thus any initial relative errors in the coefficients p_k (as incurred, say, by "constructing" them in floating point) will be magnified by this enormous factor in determining the relative error of the value P_20(t), resulting in a commensurate loss of accuracy in the root-finding process. The above argument may be cast in more rigorous terms by computing the root condition numbers [FR87] for the polynomial P_20(t), which measure the displacement of roots due to given fractional errors in the coefficients; these condition numbers are, in fact, of order 10^13. One should subsequently bear in mind the lesson of this example, namely, that severe ill-conditioning generally arises from the "magnification" of initial (perhaps seemingly innocuous) relative coefficient errors through cancellation effects.

4.1.1 The Bernstein Form

We mention the power form of the Wilkinson polynomial only as a point of reference; henceforth we shall be concerned solely with the Bernstein representation

    B_n^[0,1](t) = prod_{k=1}^{n} [k(1-t) + (k-n)t] = sum_{k=0}^{n} b_k C(n,k) (1-t)^(n-k) t^k    (4.1.1.3)

of this polynomial. Note that, apart from a multiplicative constant, each of the factors k(1-t) + (k-n)t corresponds to the term t - k/n in (4.1.0.1). As emphasized in [FR87], the Bernstein form is much better conditioned than the power form for roots in [0, 1]. See column 3 of Table 4.1 for the roots obtained using the Bernstein form, as constructed by multiplying out the linear factors in (4.1.1.3) using floating-point arithmetic. (Note that transformations between the power and Bernstein forms should be avoided [DD88, Far91b]; algorithms of interest may readily be formulated directly in the Bernstein basis [FR88] in lieu of the power basis.)

    exact    computed root        computed root        computed root
    root     of P_25(t)           of B_25^[0,1](t)     of B_25^[.25,.75](u)
    0.04     0.039999999999997    0.0400000000000001
    0.08     0.079999999997924    0.0799999999999781
    0.12     0.120000000039886    0.1200000000006346
    0.16     0.159999999495945    0.1599999999960494
    0.20     0.200000020778928    0.1999999999971261
    0.24     0.239999230565953    0.2400000000904563
    0.28     0.280016180829233    0.2799999998181714   0.28000000000000000
    0.32     0.319793958664642    0.3199999991463299   0.31999999999999999
    0.36     0.362001308252336    0.3600000057566014   0.35999999999999979
    0.40     0.391605636801344    0.3999999834411160   0.40000000000000060
    0.44     complex              0.4400000307234767   0.44000000000000140
    0.48     complex              0.4799999585120591   0.47999999999998849
    0.52     complex              0.5200000426509138   0.52000000000002370
    0.56     complex              0.5599999668961838   0.55999999999997959
    0.60     complex              0.6000000183938222   0.60000000000000710
    0.64     complex              0.6399999934951202   0.63999999999999929
    0.68     complex              0.6800000009245819   0.68000000000000000
    0.72     complex              0.7200000003141216   0.72000000000000000
    0.76     complex              0.7599999998045576
    0.80     complex              0.8000000000422774
    0.84     complex              0.8399999999963719
    0.88     complex              0.8799999999999752
    0.92     0.925606820606031    0.9200000000000135
    0.96     0.965436705906776    0.9600000000000007
    1.00     0.998812554595766    1.0000000000000000

Table 4.1: Comparison of the computed roots of P_25(t), B_25^[0,1](t), and B_25^[.25,.75](u).

We now illustrate the effect of "a priori subdivision" on the root accuracy of (4.1.1.3) by constructing the Bernstein representation of this polynomial on a subinterval [a, b] of [0, 1] as follows. First, we define the change of variables

    u = (t - a)/(b - a)    (4.1.1.4)

that maps t in [a, b] to u in [0, 1]. We then represent each of the linear factors k(1-t) + (k-n)t in (4.1.1.3) in terms of u, i.e., we re-write these factors in the equivalent form

    (k - na)(1 - u) + (k - nb)u,    (4.1.1.5)

and then multiply them together to obtain

    B_n^[a,b](u) = prod_{k=1}^{n} [(k - na)(1 - u) + (k - nb)u] = sum_{k=0}^{n} b~_k C(n,k) (1 - u)^(n-k) u^k.    (4.1.1.6)

For the case [a, b] = [0.25, 0.75] we find that the coefficients b~_k in (4.1.1.6) are of much smaller magnitude than the b_k of the representation (4.1.1.3) on the full interval [0, 1]. Furthermore, as shown in column 4 of Table 4.1, when using this representation the twelve roots that actually lie on the interval t in [0.25, 0.75] are computed to 13 accurate digits, almost twice as many as were obtained using the representation B_25^[0,1](t). For higher degree polynomials, this experiment yields even more impressive results.
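The a priori subdivision experiment can be reproduced with a short program. The sketch below (plain Python; all function names are ours) builds B_n^[a,b](u) by multiplying the linear factors (4.1.1.5) in scaled-Bernstein form, evaluates the result with the de Casteljau algorithm, and isolates the roots on [0, 1] by sampling and bisection. Bisection here merely stands in for whatever root finder one prefers; the point is the accuracy of the a priori subdivided representation.

```python
from math import comb

def bernstein_coeffs(n, a, b):
    """Coefficients of B_n^[a,b](u) in (4.1.1.6), built by multiplying the
    linear factors (k - n a)(1 - u) + (k - n b) u.  The product is carried
    out on the scaled coefficients s_k = b_k * C(m, k), for which Bernstein
    multiplication is an ordinary convolution."""
    s = [1.0]
    for k in range(1, n + 1):
        f0, f1 = k - n * a, k - n * b
        nxt = [0.0] * (len(s) + 1)
        for i, si in enumerate(s):
            nxt[i] += si * f0
            nxt[i + 1] += si * f1
        s = nxt
    return [s[k] / comb(n, k) for k in range(n + 1)]

def de_casteljau(bez, u):
    """Evaluate a Bernstein-form polynomial at u by repeated lerps."""
    v = list(bez)
    for r in range(1, len(v)):
        for i in range(len(v) - r):
            v[i] = (1.0 - u) * v[i] + u * v[i + 1]
    return v[0]

def real_roots_01(bez, samples=1999, iters=100):
    """Isolate simple roots on [0, 1] by sampling, then refine by bisection."""
    grid = [i / samples for i in range(samples + 1)]
    vals = [de_casteljau(bez, u) for u in grid]
    roots = []
    for i in range(samples):
        if vals[i] * vals[i + 1] < 0.0:
            lo, hi, flo = grid[i], grid[i + 1], vals[i]
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                fm = de_casteljau(bez, mid)
                if flo * fm <= 0.0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            roots.append(0.5 * (lo + hi))
    return roots

# n = 25 Wilkinson polynomial, constructed a priori on [0.25, 0.75].
n, a, b = 25, 0.25, 0.75
bez = bernstein_coeffs(n, a, b)
roots_t = [a + (b - a) * u for u in real_roots_01(bez)]   # map u back to t
```

In this experiment roots_t contains the twelve roots 0.28, 0.32, ..., 0.72, and they agree with the exact values k/25 to well within 10^-8, consistent with column 4 of Table 4.1.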
For example, n = 37 is the highest degree for which the roots of B_n^[0,1](t) can be computed to at least one digit of accuracy in double-precision floating point: for n = 38, some of the computed roots become complex. Yet tests indicate that a priori subdivision allows us to approximate the roots of Wilkinson polynomials of very high degree to full double precision accuracy if the interval width b - a is sufficiently small. For the case n = 1000, for example, the 101 roots within the interval [a, b] = [0.45, 0.55] can be computed to the full 17 digits of accuracy if a priori subdivision is used when constructing the polynomial.

It should be emphasized that the dramatic improvement seen in Table 4.1 would not have occurred with "a posteriori subdivision", i.e., if we had adopted the approach of first constructing the representation (4.1.1.3) on [0, 1] and then explicitly subdividing this representation to [a, b] using (say) the de Casteljau algorithm. With such an approach, the quality of the computed roots is found to be comparable to, or even slightly worse than, those given in column 3 of Table 4.1. Using exact arithmetic, of course, one would find that the "a priori" and "a posteriori" subdivision paradigms yield identical results.

4.1.2 Relative Errors in the Construction Process

An empirical explanation for the improvements obtained using a priori subdivision can be given as follows. The construction of the Bernstein form of Wilkinson's polynomial was performed for degrees 10, ..., and intervals [a, b] of width 1, 0.5, 0.25, 0.125, and 0.0625, in each case using both double and quadruple precision floating-point arithmetic.
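The error-measurement methodology of this section can be imitated in plain Python, with exact rational arithmetic (fractions.Fraction) standing in for the quadruple-precision reference used in the text. The sketch below (function and variable names are ours) builds the scaled Bernstein coefficients of the degree-20 Wilkinson polynomial on two intervals, once in double precision and once exactly, and measures the infinity-norm relative coefficient error of the construction.

```python
from fractions import Fraction

def scaled_bernstein(n, a, b, num):
    """Scaled Bernstein coefficients s_k = b~_k * C(n, k) of the product
    prod_{k=1..n} [(k - n a)(1 - u) + (k - n b) u], built by convolution.
    `num` is the number type: float, or Fraction for an exact reference."""
    s = [num(1)]
    for k in range(1, n + 1):
        f0 = num(k) - num(n) * num(a)
        f1 = num(k) - num(n) * num(b)
        nxt = [num(0)] * (len(s) + 1)
        for i, si in enumerate(s):
            nxt[i] += si * f0
            nxt[i + 1] += si * f1
        s = nxt
    return s

# Infinity-norm relative construction error for n = 20 on two intervals,
# using exact rational arithmetic as the reference values.
n = 20
errors = []
for a, b in [(Fraction(0), Fraction(1)), (Fraction(1, 4), Fraction(3, 4))]:
    exact = scaled_bernstein(n, a, b, Fraction)
    approx = scaled_bernstein(n, float(a), float(b), float)
    abs_err = max(abs(Fraction(x) - e) for x, e in zip(approx, exact))
    norm = max(abs(e) for e in exact)
    errors.append(float(abs_err / norm))
```

In this setup the measured errors are tiny for both interval widths, in line with the 10^-16 to 10^-14 range reported above and with the fact that the construction requires only O(n^2) floating-point operations.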
The relative coefficient errors incurred by the construction process were then estimated by regarding the outcome of the quadruple-precision calculations as "exact" reference values. Characteristic measures of these errors were formulated in the 1, 2, and infinity norms (see Section 4.2 below), and were found not to depend significantly on the chosen norm. For a given degree, the relative errors of the constructed coefficients showed no systematic dependence on the interval width, and in all cases were quite small and within a few orders of magnitude of each other (typically 10^-16 to 10^-14). Further, any tendency for the errors to increase with n was quite mild, and barely discernible within the 2 orders-of-magnitude randomness. These results are consistent with the commonly accepted notion that each floating-point operation incurs only a small relative error [Wil60], bounded by the "machine unit", and the fact that the number of operations required to construct the polynomial is independent of [a, b] and grows only linearly with n. It cannot be overemphasized, however, that this is a valid bound on the relative error only for operands that are presumed to have exact initial floating-point representations. If, as is the case in practice, the relative error of interest is that defined by taking arbitrary real numbers, converting the


