![Oscar Veliz](/img/default-banner.jpg)
- 50
- 2 047 554
Oscar Veliz
United States
Joined May 15, 2008
Numerical Analysis, Root-Finding, and Mathematics lessons. Code for the channel is hosted on GitHub (github.com/osveliz/numerical-veliz). You can leave a topic request in the video comments or preferably in the GitHub forum. Support the channel by becoming a GitHub Sponsor (github.com/sponsors/osveliz).
exp(x) explained
The algorithms behind the exponential function in the computer revealed. Lesson also includes discussion of the pow(x,y) function, historical digital computing of the exponential function, modern versions of the algorithm, and a high-precision approach to finding any number of digits for e.
Example code hosted on GitHub github.com/osveliz/numerical-veliz
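The argument-reduction strategy the lesson describes can be sketched in a few lines. This is an illustration under assumptions, not the fdlibm routine: fdlibm evaluates exp(r) with a tuned polynomial correction for near one-ulp accuracy, whereas this sketch uses a plain Taylor series.

```python
import math

def exp_approx(x):
    """Sketch of exp(x) via argument reduction (not the library code).

    Write x = k*ln(2) + r with |r| <= ln(2)/2, approximate exp(r) by a
    short Taylor series on that small interval, then scale by 2**k.
    """
    k = round(x / math.log(2))       # nearest integer so |r| stays small
    r = x - k * math.log(2)          # reduced argument
    term, total = 1.0, 1.0
    for i in range(1, 15):           # Taylor series for exp(r)
        term *= r / i
        total += term
    return math.ldexp(total, k)      # total * 2**k, exact binary scaling
```

The final scaling by 2**k is exact in binary floating point, which is why the reduction uses ln(2) rather than, say, ln(10).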
Chapters:
0:00 MATLAB
0:16 Euler's Number
0:38 Computing e^x
0:52 Computing pow(x,y)
1:19 pow(x,y) source code
1:56 Computing e^x continued
3:21 exp(x) source code
4:03 Long Story Short
4:29 High-precision e
4:54 e-spigot
5:58 Why does e-spigot work?
6:22 Mathemaniac
6:52 Oscar's Notes
7:28 Outro
Recommended Viewing:
@mathemaniac's exp video ua-cam.com/video/u1taDXNzFto/v-deo.html
Computing π: Machin-like formula ua-cam.com/video/M_fTdDx8IlY/v-deo.html
Reference links:
math.h github.com/openbsd/src/blob/master/include/math.h
exp(x) source code netlib.org/fdlibm/e_exp.c
pow(x,y) source code netlib.org/fdlibm/e_pow.c
"Algorithms for Digital Computers" by Hastings et. al. press.princeton.edu/books/paperback/9780691626949/approximations-for-digital-computers
"The Mathematical-Function Computation Handbook" by Beebe link.springer.com/book/10.1007/978-3-319-64110-2
"A Spigot Algorithm for the Digits of π" by Rabinowitz & Wagon www.tandfonline.com/doi/abs/10.1080/00029890.1995.11990560?journalCode=uamm20
Background music "Drifting at 432 Hz" by @UnicornHeads
#NumericalAnalysis #SoME3 #exp
Views: 8,193
Videos
Bairstow's Method
7K views · 1 year ago
Bairstow's Method for finding the roots of polynomials including complex roots. Discussion of method derivation, relation to synthetic division of two variables, stopping condition, selection of initial values, fractals, and historical context. Submission for Summer of Math Exposition 2 contest by @3blue1brown. Example code hosted on GitHub github.com/osveliz/numerical-veliz Chapters: 00:00 Int...
Graeffe's Method
3.6K views · 2 years ago
Graeffe's Root-Squaring Method (also called the Graeffe-Dandelin-Lobachevskiĭ or Dandelin-Lobachevsky-Graeffe method) for finding roots of polynomials. The method solves for all of the roots of a polynomial using only the coefficients and requires neither derivatives nor an iteration function. This lesson provides a history of the method, motivates "why" the method works, and walks through an e...
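The root-squaring step itself is short to state in code. A sketch under assumptions (mine, not the video's): a monic polynomial with real roots of distinct magnitudes, coefficients listed leading-first; it recovers only the magnitudes of the roots.

```python
def graeffe_step(a):
    """One root-squaring step: coefficients of the polynomial whose
    roots are the squares of the roots of a (leading coefficient first),
    computed from q(x^2) = (-1)^n * p(x) * p(-x)."""
    n = len(a) - 1
    pm = [c if (n - i) % 2 == 0 else -c for i, c in enumerate(a)]  # p(-x)
    prod = [0.0] * (2 * n + 1)
    for i, ci in enumerate(a):          # convolution p(x) * p(-x)
        for j, cj in enumerate(pm):
            prod[i + j] += ci * cj
    q = prod[0::2]                      # only even powers survive
    if n % 2 == 1:                      # fix the overall sign (-1)^n
        q = [-c for c in q]
    return q

def root_magnitudes(a, steps=6):
    """Approximate |roots| of a, largest first, via Graeffe's method."""
    q = a[:]
    for _ in range(steps):
        q = graeffe_step(q)
    N = 2 ** steps                      # roots have been raised to the N
    return [abs(q[k] / q[k - 1]) ** (1.0 / N) for k in range(1, len(q))]
```

After m squarings the roots are separated so violently that adjacent coefficient ratios isolate each magnitude; signs and complex pairs need the extra bookkeeping the video discusses.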
Generalized Bisection Method for Systems of Nonlinear Equations
2K views · 2 years ago
Generalization of the Bisection Method for solving systems of equations. This lesson explains the algorithm for a two-dimensional example based on Harvey-Stenger's approach using bisecting triangles. It includes a visualization of the method in action on an example nonlinear system. Other methods for solving in 3 dimensions and for larger systems are also discussed, as well as hybrid approaches. Exa...
Generalized False Position & Alternative Secant Methods
737 views · 2 years ago
False Position Method for Nonlinear Systems (aka Generalized Regula Falsi) along with two Alternative Secant Methods. Includes discussion of history and primary sources along with numeric examples and visualizations. Example code hosted on GitHub github.com/osveliz/numerical-veliz Chapters: 0:00 Scaffolding 0:25 Korganoff 1:02 Robinson 1:32 Some History 1:50 Robinson Continued 2:51 Robinson ver...
Global Newton's Method - It Always Converges
6K views · 2 years ago
Globally convergent modification of Newton's Method that backtracks whenever a test point would not shrink the function value in absolute terms, based on Armijo's search. Lesson also covers fractals using the Global Newton Method as well as solving systems of nonlinear equations. Example code hosted on GitHub github.com/osveliz/numerical-veliz Chapters: 0:00 Intro 0:30 Liter...
Halley's Method for Systems of Nonlinear Equations
2.9K views · 2 years ago
Halley's Method for Solving Systems of Nonlinear Equations. Submission for The Summer of Math Exposition. Lesson includes motivation & explanation of notation, description of the method, numerical example, discussion of order, and comparison with the Method of Tangent Hyperbolas. Example code hosted on GitHub github.com/osveliz/numerical-veliz Chapters: 0:00 Wikipedia 0:44 Intro 0:54 Recommende...
Broyden's Method
11K views · 3 years ago
Broyden's Method for solving systems of nonlinear equations. Lesson covers motivation, history, examples, discussion, and order of this Quasi-Newton Method. It also explains the "Good" and "Bad", as well as the third version of the method. Example code hosted on GitHub github.com/osveliz/numerical-veliz Chapters: 0:00 Intro 0:22 Newton's Method According to Broyden 1:08 Nonlinear System Example...
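A sketch of the "good" version of the rank-one update in numpy (the example system and starting Jacobian below are mine, not from the video):

```python
import numpy as np

def broyden(F, x0, B0=None, tol=1e-10, max_iter=50):
    """'Good' Broyden's method: a Newton-like iteration where the
    Jacobian estimate B gets a rank-one update instead of being
    recomputed at every step."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x)) if B0 is None else np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(B, -Fx)       # quasi-Newton step
        x = x + dx
        Fx_new = F(x)
        dF = Fx_new - Fx
        # rank-one update: afterwards B maps dx to dF exactly
        B = B + np.outer(dF - B @ dx, dx) / (dx @ dx)
        Fx = Fx_new
    return x

# example system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
```

Seeding B with the true (or finite-difference) Jacobian at the starting point usually converges much faster than starting from the identity.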
Approximating the Jacobian: Finite Difference Method for Systems of Nonlinear Equations
6K views · 3 years ago
Generalized Finite Difference Method for Simultaneous Nonlinear Systems by approximating the Jacobian using the limit of partial derivatives with the forward finite difference. Example code on GitHub www.github.com/osveliz/numerical-veliz Chapters 0:00 Intro 0:13 Prerequisites 0:32 Refresher 0:43 What is the Jacobian? 2:06 Approximating the Jacobian 3:00 Finite Differences 3:21 Note on Notation...
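The forward-difference approximation of the Jacobian is short to state in code; a minimal sketch in numpy (the step size h is illustrative):

```python
import numpy as np

def approx_jacobian(F, x, h=1e-7):
    """Approximate the Jacobian of F at x with forward differences:
    column j is (F(x + h*e_j) - F(x)) / h."""
    x = np.asarray(x, dtype=float)
    Fx = F(x)
    J = np.empty((len(Fx), len(x)))
    for j in range(len(x)):
        xh = x.copy()
        xh[j] += h            # perturb one coordinate at a time
        J[:, j] = (F(xh) - Fx) / h
    return J
```

The error of each column is O(h), so h trades truncation error against floating-point cancellation; values near the square root of machine epsilon are a common choice.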
Steffensen's Method for Systems of Nonlinear Equations
3.1K views · 3 years ago
Generalized Steffensen's Method for Simultaneous Nonlinear Systems originally credited to J. F. Traub. Video shows how to solve nonlinear systems by approximating the Jacobian. Example code on GitHub www.github.com/osveliz/numerical-veliz Chapters 0:00 Prerequisites 0:20 Intro 0:40 Traub 1:24 Soleymani et al 1:58 Explaining Notation 2:32 1D Example 3:06 Two Methods - Same Method 3:20 System of ...
Secant Method for Systems of Nonlinear Equations
5K views · 3 years ago
Generalized Secant Method for Simultaneous Nonlinear Systems originally credited to Wolfe and Bittner. Lesson shows how to solve nonlinear systems without the Jacobian, nor the need to approximate it, in a straightforward and visual manner. Example code on GitHub www.github.com/osveliz/numerical-veliz Chapters 0:00 Intro 0:15 Prerequisites 0:25 Secant Method Recap 0:45 Literature 1:00 Secant Me...
Newton's Method for Systems of Nonlinear Equations
16K views · 3 years ago
Generalized Newton's method for systems of nonlinear equations. Lesson goes over numerically solving multivariable nonlinear equations step-by-step with visual examples and explanation of the Jacobian, the backslash operator, and the inverse Jacobian. Example code in MATLAB / GNU Octave on GitHub: github.com/osveliz/numerical-veliz Chapters 0:00 Intro 0:12 Prerequisites 0:32 Background 0:58 Set...
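The core step, solving J(x)·d = −F(x) (MATLAB's backslash) instead of forming the inverse Jacobian, can be sketched in numpy as follows; the example system is mine, not the video's:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=25):
    """Newton's method for F(x) = 0: at each step solve J(x) d = -F(x)
    (the MATLAB backslash) and update x <- x + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(J(x), -Fx)   # never form inv(J)
    return x

# example: intersect the circle x^2 + y^2 = 4 with the line y = x
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-1.0, 1.0]])
```

Solving the linear system is both cheaper and numerically safer than inverting the Jacobian, which is the point the backslash discussion makes.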
Video Mistakes II: The Sequel
405 views · 3 years ago
This video corrects mistakes in my videos on Taylor Series Origin, Ternary Search, Dichotomous Search, Fixed Point Iteration, Fixed Point Iteration for Systems of Equations with Banach, and Wegstein's Method. Thanks to commenters who pointed these errors out. If you find other mistakes, feel free to comment or post in the GitHub Issues Forum for the code repository www.github.com/osveliz/numerical...
Brent's Minimization Method
8K views · 3 years ago
Hybrid minimization algorithm combining Golden-section Search and Successive Parabolic Interpolation (Jarratt's Method) that is guaranteed to locate a minimum, typically with superlinear convergence. Example code github.com/osveliz/numerical-veliz Chapters: 0:00 Intro 0:16 Scaffolding 0:31 Motivation 1:17 Parabolic Interpolation Review 1:48 Renaming Variables 2:40 Brent's Method Algorithm 3:19 SPI Beha...
Successive Parabolic Interpolation - Jarratt's Method
4K views · 3 years ago
Optimization method for finding extrema of functions using three points to create a parabola whose vertex gives the next approximation to the solution. This lesson visualizes the behavior of the method with numeric examples as well as its convergence through fractals. Based on the paper "An iterative method for locating turning points" by P. Jarratt. Example code github.com/osveliz/nume...
Subscriber Milestone - 5 Ways to Help the Channel
297 views · 4 years ago
Subscriber Milestone - 5 Ways to Help the Channel
I was 9 when you uploaded this video and now I've almost got my bachelor's in CS
GOAT
Please explain: I have a negative discriminant, yet the equation clearly has a root. What does this depend on, and how do I fix it?
how to find norm square of matrix In Jn kindly guide me
Very well explained , thanks !!
@5:00 the reality is that none of them discovered differential calculus. The person who discovered it was Madhava from India in 1300. It's well known that intellectual property was subtracted from Kerala and these dudes managed to understand it 200 or 300 years later. All credits go to India.
Go to 11:10
Hey, does all this convergence or divergence depend entirely upon the initial guess?
Both the initial guess and g(x).
Absolutely amazing video, thank you!
I would really like to see a proof of why Horner scheme gives the quotient. I understand why it gives the remainder of division, but it giving the quotient looks like magic. I saw some proofs by solving equations for polynomial coefficients, but I wonder if there's a quick and simple argument, at least for low degrees like 2 and 3.
Thanks for such a wonderful video, it made the topic easily understandable.
Thanks!
Thank you sooo much @guillaumeleclerc3346 for my first ever Super Thanks!!
@@OscarVeliz you deserve it for sure, this channel replaced reference textbook for me when I have to implement solvers 🙂
How did you get the 1+root5/2 from
so successful, thank you
Best stuff i could find on yt. Well done
You have no idea how many people you are helping from different places of the world. Thanks a lot.
Thank you! Was able to understand the method in one go! :)
I want a video on how regula falsi is mathematically the same as the secant method: how regula falsi can be converted to the secant method, why we call the secant method a separate contribution, and how the two differ.
Can't believe how few views this has for just how fantastic it is. I needed to find a simple and fast method for computing minima today, and this video explained exactly what was needed in a mere few minutes. Otherwise it would have taken much longer for me to be sure of what I was doing, thank you so much!
extremely helpful .. tysm
I find it ironic that you need the root to find out whether it'll converge at the root. Great video, very helpful!
I discuss this irony in my followup video ua-cam.com/video/FyCviw2ZA2o/v-deo.html
Most results on google for this method link to this video… is it not well known or not used much?
The method is not well known which leads to not being used much. Seems like every numerical methods textbook covers fixed point iteration, but then stops there without going over Wegstein or Steffensen.
thanks bro very helpful
Absolutely fantastic explanation. Even after 12 years. Huge thank you and much much much appreciated!!!
You elucidate the method effortlessly in such simple words. This channel is pretty helpful in helping me, a beginner easily grasp the steps of these methods, found it jus now.
You sure that’s not 10^50 factorial at 0:59 😂
Oscar, I wish your videos were more popular. We give millions of views to other YouTubers whose content is as captivating as it is useless. You, on the other hand, deal with actual problems that can be encountered in school or at work. You provide simple and effective explanations, resources, and code. This is a very pragmatic and scientist-oriented approach. No fricking fireworks and smoke, and this penalizes you. I hope that real life rewards you as you deserve. Keep up the good work!!
Thank you for this comment. It means a lot.
One of the best iteration theorems is Banach's fixed-point theorem, which is great for all the continued fractions and continued roots because they all satisfy the conditions of the theorem.
useless video, no coding
Code is provided in the GitHub repo for the channel. Link in description.
I finally figured out what I was doing wrong after I found this video. Thank you!!
Thank you so much! just within 4 minutes you open gates of the secant method. Much appreciated!
yeah
Please explain which software is used for making this video ❤
PowerPoint and Microsoft Mathematics 😁
Unable to find whether there exist any numerical iterations based on Laurent series or complex Fourier series
There are too few sources discussing error terms and error analysis, yet mid-terms and finals keep proposing these forgettable error problems.
Short yet quite adequate, Thanks a lot!!!!
How will you know the fixed point that Aitken acceleration took you to?
The root of what value??? someone plz explain 2:53
I have a follow up video answering this and other commonly asked questions ua-cam.com/video/FyCviw2ZA2o/v-deo.html
Simply put and straight to the point. If only Textbooks were this way.
my teacher explained it terribly, thank you so much for clearing up this muck!
dude this covers 4 lectures from trefethen bau ....covered in just 7 mins...amazing stuff!
Where do the a values come from 2:50
They come from Hastings Jr's paper (link in description). If I recall correctly, they were determined through interpolation.
Wonderful! I needed a way to approximate implicit relations robustly! I'll need to change it so that instead of finding a minimum/maximum, it will find the complex solution, but this is really useful thanks for the upload
Instead of dividing by 2, try dividing by 2i
@eliz Finally got it implemented! It was harder than i expected, because i didn't understand that even when globally convergent, newton's method is really sensitive to the initial guess. I had it converging on erroneous solutions for a bit because i didn't understand how to pick a good initial
It's 2024 and this is still one banger of a video. Great explanation as usual. Many thanks!
They switch to methods which do not guarantee convergence; strange approach
Great video. If you ever feel like getting back to this topic, I'd love to see your take on TOMS 748, which is considered(?) superior to Brent's method (albeit, some doubt has been cast from a very thorough review by Gregory W. Chicares).
Great Video! Thanks so much!!!
2:45 how did you arrive at the expression of alpha as the ratio of error ratios?
Check out my video on order of convergence ua-cam.com/video/JTinepDn1dI/v-deo.html
thanks!
There is a variant of this method for simultaneous solving of all roots, possibly with multiplicities > 1.