A Two-Point Newton Method Suitable for Nonconvergent Cases and with Super-Quadratic Convergence

DOI: 10.1155/2013/687382


Abstract:

An iterative formula based solely on Newton's method is presented for the iterative solution of equations; it ensures convergence in cases where the traditional Newton method may fail to converge to the desired root. In addition, the method has super-quadratic convergence of order 2.414 (i.e., $1+\sqrt{2}$). Newton's method fails in certain cases, leading to oscillation, divergence to increasingly large values, overshooting to another root far from the desired domain, or overshooting into an invalid domain where the function may not be defined. Moreover, when the derivative at the iteration point is zero, Newton's method stalls. In most of these cases, hybrids of several methods, such as Newton, bisection, and secant methods, are suggested as substitutes, and Newton's method is either blended with other methods or abandoned altogether. This paper argues that a solution is still possible in most of these cases by applying Newton's method alone, without resorting to other methods and with the same computational effort (two function evaluations per iteration) as the traditional Newton method. In addition, the proposed modified formula based on Newton's method has better convergence characteristics than the traditional Newton method.

1. Introduction

Iterative procedures for the solution of equations are routinely employed in many science and engineering problems. Starting with the classical Newton method, a number of root-finding methods have come to exist, each with its own advantages and limitations. Newton's method of root finding is based on the iterative formula

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$

Newton's method displays fast quadratic convergence near the root, while it requires evaluation of the function and its derivative at each step of the iteration. However, when the evaluated derivative is zero, Newton's method stalls. For small values of the derivative, the Newton iteration overshoots away from the current point of iteration and may converge to a root far from the intended domain. For certain forms of equations, Newton's method diverges or oscillates and fails to converge to the desired root. In addition, the convergence of Newton's method can be slow near roots of higher multiplicity, although modifications can be made to increase the rate of convergence [1]. Modifications of Newton's method with higher-order convergence have been proposed that also require evaluation of the function and its derivatives. An example of such methods is the third-order convergent method of Weerakoon and Fernando [2], which requires evaluation of one function and two first derivatives per iteration.
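
The excerpt does not reproduce the paper's two-point formula, so the short Python sketch below only illustrates the classical Newton iteration given by the formula above and one of the failure modes described (an oscillating cycle); the names newton, f, and df are hypothetical and not taken from the paper.

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration: x_{n+1} = x_n - f(x_n) / f'(x_n).

    Returns the full list of iterates so that oscillation, divergence,
    or stalling can be inspected directly.
    """
    xs = [x0]
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if d == 0.0:          # zero derivative: the iteration stalls
            break
        x = x - f(x) / d      # one function and one derivative evaluation per step
        xs.append(x)
        if abs(f(x)) < tol:   # stop once the residual is small enough
            break
    return xs

# A classic non-convergent case: f(x) = x^3 - 2x + 2 started at x0 = 0
# cycles between 0 and 1 instead of reaching the real root near -1.7693.
f = lambda x: x**3 - 2*x + 2
df = lambda x: 3*x**2 - 2
print(newton(f, df, 0.0)[:6])   # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```

Returning the whole list of iterates makes oscillation, divergence, and stalling directly visible; these are the behaviours that the proposed two-point modification is intended to avoid.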

References

[1]  C. F. Gerald and P. O. Wheatley, Applied Numerical Analysis, 5th edition, 1994.
[2]  S. Weerakoon and T. G. I. Fernando, “A variant of Newton's method with accelerated third-order convergence,” Applied Mathematics Letters, vol. 13, no. 8, pp. 87–93, 2000.
[3]  J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
[4]  M. Grau-Sánchez and J. L. Díaz-Barrero, “A technique to composite a modified Newton's method for solving nonlinear equations,” Annals of the University of Bucharest, vol. 2, no. 1, pp. 53–61, 2011.
[5]  J. R. Sharma and R. K. Guha, “A family of modified Ostrowski methods with accelerated sixth order convergence,” Applied Mathematics and Computation, vol. 190, no. 1, pp. 111–115, 2007.
[6]  C. Chun, “Some improvements of Jarratt's method with sixth-order convergence,” Applied Mathematics and Computation, vol. 190, no. 2, pp. 1432–1437, 2007.
[7]  J. Kou and X. Wang, “Sixth-order variants of Chebyshev-Halley methods for solving non-linear equations,” Applied Mathematics and Computation, vol. 190, no. 2, pp. 1839–1843, 2007.
[8]  J. Kou, “On Chebyshev-Halley methods with sixth-order convergence for solving non-linear equations,” Applied Mathematics and Computation, vol. 190, no. 1, pp. 126–131, 2007.
[9]  J. Kou and Y. Li, “An improvement of the Jarratt method,” Applied Mathematics and Computation, vol. 189, no. 2, pp. 1816–1821, 2007.
[10]  J. Kou, Y. Li, and X. Wang, “Some modifications of Newton's method with fifth-order convergence,” Journal of Computational and Applied Mathematics, vol. 209, no. 2, pp. 146–152, 2007.
[11]  S. K. Parhi and D. K. Gupta, “A sixth order method for nonlinear equations,” Applied Mathematics and Computation, vol. 203, no. 1, pp. 50–55, 2008.
[12]  D. E. Muller, “A method for solving algebraic equations using an automatic computer,” Mathematical Tables and Other Aids to Computation, vol. 10, pp. 208–215, 1956.
[13]  W. R. Mekwi, Iterative methods for roots of polynomials [M.S. thesis], University of Oxford, 2001.
[14]  T. J. Dekker, “Finding a zero by means of successive linear interpolation,” in Constructive Aspects of the Fundamental Theorem of Algebra, B. Dejon and P. Henrici, Eds., Wiley-Interscience, London, UK, 1969.
[15]  R. P. Brent, Algorithms for Minimization without Derivatives, chapter 4, Prentice-Hall, Englewood Cliffs, NJ, USA, 1973.
[16]  A. B. Kasturiarachi, “Leap-frogging Newton's method,” International Journal of Mathematical Education in Science and Technology, vol. 33, no. 4, pp. 521–527, 2002.
