Homework #2: Quick Baked Pi
There are several iterative methods to calculate pi. A particularly fast method is the iteration:
P(1) = 3.0
P(n + 1) = P(n) + sin(P(n))
...where P(n) is the approximation of pi at iteration 'n'. Given a first estimate P(1) of pi as '3.0', this iteration converges to an approximation of pi correct to as many digits of precision as your sin() implementation allows.
P(2) = 3 + sin(3) = 3.1411200...
P(3) = 3.14112 + sin(3.14112) = 3.141592654...
After the second iteration, the error is very small (about 1.8*10^(-11)). And, on the next iteration, the error drops to about 10^(-32). Why does it work?
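The convergence claim is easy to check numerically. Note that with ordinary double-precision floats the error bottoms out near machine precision (about 2*10^(-16)), so the 10^(-32) step is only visible with higher-precision arithmetic. A minimal sketch:

```python
from math import sin, pi

# Iterate P(n+1) = P(n) + sin(P(n)), starting from P(1) = 3.0,
# and watch the error shrink until it hits machine precision.
p = 3.0
for n in range(2, 6):
    p = p + sin(p)
    print(f"P({n}) = {p!r}, error = {abs(p - pi):.3e}")
```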
This algorithm is a particular case of a method called fixed point iteration, and is used to find solutions to equations of the form:
x = f(x) [1]
In this particular case, we have f(x) = x + sin(x), and, as sin(n*pi) = 0 for any integer n, any multiple of pi is a solution of that equation.
An equation like [1] can sometimes (often?) be solved by iterating the formula:
x[n+1] = f(x[n])
where x[n] is the nth approximation of the solution. There is a famous theorem (the contraction mapping theorem) which states that the method will converge to the solution if the absolute value of the derivative f'(x) is at most some number L < 1 on some interval containing the solution, and if you start with a value in that interval. The smaller the derivative, the faster the sequence converges.
In the case of f(x) = x + sin(x), we have f'(x) = 1 + cos(x). If x is close to pi, cos(x) is close to -1, so f'(x) is close to 0, which explains why the method converges rapidly. We can also notice that the second derivative, f''(x) = -sin(x), is 0 at the root. This means you can start in a relatively large interval around pi and still get very fast convergence.
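To make the derivative argument concrete, here is a quick (informal) numerical check of f'(x) = 1 + cos(x) near and away from pi:

```python
from math import cos, pi

def fprime(x):
    # Derivative of f(x) = x + sin(x).
    return 1 + cos(x)

print(fprime(3.0))     # about 0.01: already a strong contraction at x = 3
print(fprime(pi))      # 0.0 at double precision: superlinear convergence
print(fprime(pi / 2))  # 1.0: no contraction guarantee this far from pi
```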
We can see this as a slight modification of Newton's method applied to solving sin(x) = 0. Newton's method gives the formula: x[n+1] = x[n] - sin(x[n])/cos(x[n]), and, as cos(x[n]) is close to -1 near pi, you recover the fixed-point formula by replacing cos(x[n]) with -1.
1. Determine if Newton's method or the fixed point method is better for computing pi (up to 15 digits of accuracy) by counting how many iterations each needs to converge.
2. Determine if Newton's method or the fixed point method is better for computing the Dottie number (0.739085...), the unique real fixed point of the cos() function.
3. Include comments that discuss your analysis of this problem.
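As a starting point for task 2, the derivative test from the theorem above already makes a prediction. For f(x) = cos(x), the contraction constant at the Dottie number is |f'(x)| = |-sin(x)|, roughly 0.674, so each fixed-point step gains only about 0.17 decimal digits, while Newton's method roughly doubles the number of correct digits per step. A rough check, using the Dottie value quoted in task 2:

```python
from math import sin, log10

dottie = 0.7390851332151607     # fixed point of cos(x)
rate = abs(-sin(dottie))        # |f'(x)| for f(x) = cos(x) at the fixed point
digits_per_step = -log10(rate)  # decimal digits gained per iteration
print(rate, digits_per_step)    # ~0.674 and ~0.17 digits per step
```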
N.B. You can use and modify the code for Newton's method from the textbook. You should write a higher-order function to compute a fixed-point iteration as follows.
def fixed_point_iteration(f, x=1.0):
    step = 0
    while not approx_fixed_point(f, x):
        x = f(x)
        step += 1
    return x, step
def approx_fixed_point(f, x) should return True if and only if f(x) is very close (distance < 1e-15) to x.
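For reference, here is one possible sketch of the two pieces the skeleton above leaves out; the newton_find_zero signature and its (value, step_count) return shape are inferred from the sample run, so your textbook's version may differ:

```python
def approx_fixed_point(f, x, tol=1e-15):
    """True if and only if f(x) is within tol of x."""
    return abs(f(x) - x) < tol

def newton_find_zero(f, df, x=1.0, tol=1e-15):
    """Newton's method: iterate x = x - f(x)/df(x) until f(x) is near 0.

    f is the function whose zero we seek, df its derivative.
    Returns the approximate root and the number of steps taken.
    """
    step = 0
    while abs(f(x)) >= tol:
        x = x - f(x) / df(x)
        step += 1
    return x, step
```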
Sample Run:
>>> from math import sin, cos
>>> print(fixed_point_iteration(lambda x: sin(x) + x, 3.0))
(3.141592653589793, 3)    # Fixed point pi
>>> print(newton_find_zero(lambda x: sin(x), lambda x: cos(x), 3.0))
(3.141592653589793, 3)    # Newton's pi
>>> print(fixed_point_iteration(lambda x: cos(x), 1.0))
(0.7390851332151611, 86)  # Fixed point Dottie
>>> print(newton_find_zero(lambda x: cos(x) - x, lambda x: -sin(x) - 1, 1.0))
(0.7390851332151606, 7)   # Newton's Dottie
What I have tried:
I am not sure where to begin, as I am not sure what the question is asking me to do.