Commit b528b34 (parent: 781c64b) showing 7 changed files with 2,694 additions and 2 deletions.
@@ -1,2 +1,112 @@
# MATH50003NumericalAnalysis (2022–2023)
Notes and course material for MATH50003 Numerical Analysis

Lecturer: [Dr Sheehan Olver](https://www.ma.imperial.ac.uk/~solver/)

## Course notes

## Assessment

1. [Practice computer-based exam](https://github.com/Imperial-MATH50003/MATH50003NumericalAnalysis/blob/main/exams/practice.ipynb) ([Solutions](https://github.com/Imperial-MATH50003/MATH50003NumericalAnalysis/blob/main/exams/practices.ipynb))
2. [2021–22 Computer-based exam](https://github.com/Imperial-MATH50003/MATH50003NumericalAnalysis/blob/main/exams/computerexam2122.ipynb) ([Solutions](https://github.com/Imperial-MATH50003/MATH50003NumericalAnalysis/blob/main/exams/computerexam2122s.ipynb))
3. Computer-based exam: TBA
4. [Practice final exam](https://github.com/Imperial-MATH50003/MATH50003NumericalAnalysis/blob/main/exams/practicefinal.pdf) ([Solutions](https://github.com/Imperial-MATH50003/MATH50003NumericalAnalysis/blob/main/exams/practicefinals.pdf))
5. Final exam (pen-and-paper): TBA

## Problem sheets

## Workbooks

## Reading List

1. Nicholas J. Higham, [Accuracy and Stability of Numerical Algorithms](https://epubs.siam.org/doi/book/10.1137/1.9780898718027?mobileUi=0), Chapters 1–3
2. Michael L. Overton, [Numerical Computing with IEEE Floating Point Arithmetic](https://epubs.siam.org/doi/book/10.1137/1.9780898718072), Chapters 2–6
3. Lloyd N. Trefethen & David Bau III, [Numerical Linear Algebra](https://my.siam.org/Store/Product/viewproduct/?ProductId=950/&ct=c257a1956367c57b599612fbf383d0d3c674af4f9181d827444b5cdaca95b0686d6d20467a7c1e3290fb5b31c310ce74f5b2ede375934b844b1171bc734358e2), Chapters 1–4
4. Lloyd N. Trefethen, [Approximation Theory and Approximation Practice](https://people.maths.ox.ac.uk/trefethen/ATAP/ATAPfirst6chapters.pdf), Chapters 1–4, 17–19
5. [The Julia Documentation](https://docs.julialang.org)
6. [The Julia–Matlab–Python Cheatsheet](https://cheatsheets.quantecon.org)
7. [Think Julia](https://benlauwens.github.io/ThinkJulia.jl/latest/book)

## Notes from lectures

## What is numerical analysis?

Broadly speaking, numerical analysis is the study of approximating
solutions to _continuous problems_ in mathematics, for example, integration, differentiation,
and solving differential equations. There are three key topics in numerical analysis:

1. Design of algorithms: the construction of algorithms and their implementation in
software.
2. Convergence of algorithms: proving under which conditions algorithms converge to the
true solution, and the rate of convergence.
3. Stability of algorithms: the study of robustness of algorithms to small perturbations in
data, for example, those that arise from measurement error, errors incurred if the data are themselves computed using
algorithms, or simply errors arising from the way computers represent real numbers (see the short example below).

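A minimal illustration of the third point, in plain Julia with no packages needed (this is standard IEEE floating-point behaviour, not specific to Julia): decimal numbers generally have no exact binary representation, so even trivial arithmetic incurs small rounding errors.

```julia
# 0.1, 0.2 and 0.3 are each rounded to the nearest Float64, so the computed sum
# does not match the rounded 0.3 exactly:
0.1 + 0.2 == 0.3      # false
abs(0.1 + 0.2 - 0.3)  # ≈ 5.6e-17, the size of the rounding error
```
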
The modern world is built on numerical algorithms:

1. [Fast Fourier Transform (FFT)](https://en.wikipedia.org/wiki/Fast_Fourier_transform): Gives a highly efficient way of computing Fourier coefficients from function samples,
and is used in many places, e.g., the mp3 format for compressing audio and the JPEG image format.
(Though, in a bizarre twist, GIF, a completely uncompressed format, has made a remarkable comeback.)
2. [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition): Allows for approximating matrices by those with low rank (a sketch is given below the list).
This is related to the [PageRank algorithm](https://en.wikipedia.org/wiki/PageRank) underlying Google.
3. [Stochastic Gradient Descent (SGD)](https://en.wikipedia.org/wiki/Stochastic_gradient_descent): Minima of high-dimensional functions can be effectively computed using gradients
in a randomised algorithm. This is used in the training of machine learning algorithms.
4. [Finite element method (FEM)](https://en.wikipedia.org/wiki/Finite_element_method):
used to solve many partial differential equations, including in aerodynamics and
weather prediction. [Firedrake](https://firedrakeproject.org) is a major project based at
Imperial that utilises the finite element method.

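As a concrete sketch of the second point, here is how the SVD from the standard `LinearAlgebra` library yields a best low-rank approximation (in the 2-norm); the matrix size and rank below are arbitrary choices for illustration only:

```julia
using LinearAlgebra

A = randn(100, 100)      # an arbitrary example matrix
U, σ, V = svd(A)         # A ≈ U * Diagonal(σ) * V'
r = 10                   # target rank
Aᵣ = U[:, 1:r] * Diagonal(σ[1:r]) * V[:, 1:r]'  # best rank-r approximation in the 2-norm
opnorm(A - Aᵣ)           # ≈ σ[r + 1], the first discarded singular value
```
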
This is not to say that numerical analysis is only important in applied mathematics.
It is playing an increasingly important role in pure mathematics, with important proofs based on numerical computations:

1. [The Kepler conjecture](https://en.wikipedia.org/wiki/Kepler_conjecture). This 400-year-old conjecture on optimal sphere packing
was finally proved in 2005 by Thomas Hales using numerical linear programming.
2. [Numerical verification of the Riemann Hypothesis](https://en.wikipedia.org/wiki/Riemann_hypothesis#Numerical_calculations).
It has been proved that there are no zeros of the Riemann zeta function off the critical line provided the imaginary part is
less than 30,610,046,000.
3. [Steve Smale's 14th problem](https://en.wikipedia.org/wiki/Lorenz_system) on the stability of the Lorenz system was solved
using interval arithmetic.

Note these proofs are _rigorous_: as we shall see, it is possible to obtain precise error bounds in numerical
calculations, and the computations themselves can be computer-verified
(à la [The Lean Theorem Prover](https://leanprover.github.io)).
As computer-verified proofs become increasingly important, the role of numerical analysis in
pure mathematics will also increase, as it provides the theory for rigorously controlling errors in
computations. Note that there are many computer-assisted proofs that do not fall under numerical analysis because
they do not involve errors in computation or approximation, or because they concern discrete problems, for
example, the proof of the [Four Colour Theorem](https://en.wikipedia.org/wiki/Four_color_theorem).

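To give a flavour of how such rigorous bounds can be obtained, here is a minimal sketch of interval arithmetic (the `Interval` type is hypothetical, written only for this illustration and not taken from a library): every operation returns an interval guaranteed to contain the exact result, obtained here by widening each computed endpoint by one floating-point step.

```julia
# A toy interval type: [lo, hi] is guaranteed to contain the exact real result.
struct Interval
    lo::Float64
    hi::Float64
end

# IEEE arithmetic rounds to within one floating-point step of the exact result,
# so widening each endpoint by one step preserves the enclosure.
Base.:+(x::Interval, y::Interval) = Interval(prevfloat(x.lo + y.lo), nextfloat(x.hi + y.hi))

x = Interval(prevfloat(0.1), nextfloat(0.1))  # a tiny interval enclosing the real number 1/10
y = x + x + x                                 # a guaranteed enclosure of 3/10
(y.lo, y.hi)                                  # both endpoints within a few floating-point steps of 0.3
```
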
## The Julia Programming Language

In this course we will use the programming language [Julia](https://julialang.org). This is a modern, compiled, high-level,
open-source language developed at MIT. It is becoming increasingly important in high-performance computing and
AI: it is used by AstraZeneca, Moderna and Pfizer in drug development and clinical trial acceleration, by IBM for medical
diagnosis, by MIT for robot locomotion, and elsewhere.

It is ideal for a course on numerical analysis because it allows
_fast_ implementation of algorithms and also has support for _fast_ automatic differentiation, a feature
of increasing importance in machine learning. It is also low-level enough that we can
really understand what the computer is doing. As a bonus, it is easy to read and fun to write.

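As a taster of automatic differentiation, here is a minimal sketch assuming the third-party [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) package has been installed with `] add ForwardDiff` (it is not required for the course):

```julia
using ForwardDiff

f(x) = cos(x^2) + x
# Computes f'(0.1) = -2*0.1*sin(0.1^2) + 1 to machine precision,
# without choosing a finite-difference step size.
ForwardDiff.derivative(f, 0.1)
```
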
To run Julia in a Jupyter notebook on your own machine:

1. Download [Julia v1.8.4](https://julialang.org/downloads/)
2. Open the Julia app, which will launch a new window
3. Install the needed packages by typing (`]` will change the prompt to a package manager):
```julia
] add IJulia Plots ColorBitstring SetRounding
```
4. Build Jupyter via
```julia
] build IJulia
```
5. Launch Jupyter by typing
```julia
using IJulia
notebook()
```
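
Once Jupyter opens in the browser, a quick way to check that everything installed correctly is to run a first cell like the following (a suggested sanity check, not an official step; it assumes `ColorBitstring` exports a `printlnbits` function):

```julia
using Plots, ColorBitstring

plot(sin, 0, 2π)   # a plot of sin should appear below the cell
printlnbits(1/3)   # prints the bit pattern of the Float64 closest to 1/3 (assumes ColorBitstring provides printlnbits)
```
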
@@ -0,0 +1,242 @@
{
"cells": [
{
"cell_type": "markdown",
"source": [
"# MATH50003 Numerical Analysis (2021–2022) Computer-based Exam\n\nInstructions for uploading and downloading:\n\n1. Rename the file to include your CID.\n2. You have 30 mins to download the exam beginning at 3pm on 18 March.\n2. You have 1 hour to complete the exam beginning at 3:30pm on 18 March.\n3. Deadline is 5pm on 18 March to upload the completed Jupyter notebook (`.ipynb`) to Blackboard. \nLate submissions received before 7pm will be capped at 40%.\n5. Once uploaded, re-download the file before the final submission time to confirm it is correct.\nYou are allowed to upload additional submissions but only last valid upload before 5pm will be used.\n6. If uploading via Blackboard fails you may e-mail the UG Office: [email protected]\n\nInstructions for the exam:\n\n1. For each problem, replace the `# TODO` to complete the question.\nThe unit tests are provided to help you test your answers.\n3. Problems are marked A/B/C to indicate difficulty (\"A\" being most difficult).\nPartial credit will be awarded for reasonable attempts even if the tests\nare not passed. A and B questions are worth 12 marks while C questions are worth 10 marks.\n3. If you have technical queries please email [email protected]. Any other queries\nshould be sent to the UG Office: [email protected]\n4. You may use existing code from anywhere\nbut you are **REQUIRED** to cite the source if it is not part of the module material,\nideally by including a weblink in a comment. \n5. You **MUST NOT** ask for help online or\ncommunicate with others within or outside the module.\nFailure to follow these rules will be considered misconduct.\n\n\n\nYou should use the following packages:"
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"using LinearAlgebra, Test"
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"(Note `SetRounding` is not necessary.)\n\n**WARNING** It may be necessary to restart the kernel if issues arise. Remember to reload the packages\nwhen you do so.\n\n## 1. Numbers\n\n**Problem 1.1 (C)** \nImplement the function `issub` that determines whether a `Float16` is a sub-normal number.\nDO NOT use the inbuilt routine `issubnormal`."
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"function issub(x::Float16)\n # TODO: return `true` if `x` is a sub-normal float. Otherwise return `false`\nend"
],
"metadata": {},
"execution_count": null
},
{
"outputs": [],
"cell_type": "code",
"source": [
"@test issub(Float16(0))\n@test issub(nextfloat(Float16(0)))\n@test issub(prevfloat(Float16(0)))\n@test !issub(Float16(1))\n@test !issub(reinterpret(Float16,0b0000010000000000))\n@test issub(reinterpret(Float16,0b0000001111111111))"
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"## 2. Differentiation\n\n**Problem 2.1 (C)** Use second-order finite differences\nwith an appropriately chosen $h$ to approximate the second derivative of\n$$\nf(x) = \\cos(x^2)\n$$\nat $x = 0.1$ to 5 digits accuracy."
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"function fd2(x)\n # TODO: implement a second-order finite-difference rule \n # to approximate f''(x)\n # for f(x) = cos(x^2)\n # with step-size h chosen to get sufficient accuracy\nend"
],
"metadata": {},
"execution_count": null
},
{
"outputs": [],
"cell_type": "code",
"source": [
"@test abs(fd2(0.1) + 2*sin(0.1^2) + 4*0.1^2*cos(0.1^2)) ≤ 1E-5"
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"**Problem 2.2 (A)** Consider a 2nd order version of a dual number:\n$$\na + b ϵ_1 + c ϵ_2\n$$\nsuch that\n$$\n\\begin{align*}\nϵ_1^2 &= ϵ_2 \\\\\nϵ_2^2 &= ϵ_1 ϵ_2 = 0.\n\\end{align*}\n$$\nComplete the following implementation supporting `+` and `*` (and\nassuming `a,b,c` are `Float64`). Hint: you may need to work out on paper\nhow to multiply `(s.a + s.b ϵ_1 + s.c ϵ_2)*(t.a + t.b ϵ_1 + t.c ϵ_2)` using the\nrelationship above."
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"import Base: *, +, ^\nstruct Dual2\n a::Float64\n b::Float64\n c::Float64\nend\n\nfunction +(s::Dual2, t::Dual2)\n ## TODO: Implement Dual2(...) + Dual2(...), returning a Dual2\nend\n\nfunction +(s::Dual2, c::Real)\n ## TODO: Implement Dual2(...) + c, returning a Dual2\nend\n\nfunction *(c::Number, s::Dual2)\n ## TODO: Implement c * Dual2(...), returning a Dual2\nend\n\nfunction *(s::Dual2, t::Dual2)\n ## TODO: Implement Dual2(...) * Dual2(...), returning a Dual2\nend"
],
"metadata": {},
"execution_count": null
},
{
"outputs": [],
"cell_type": "code",
"source": [
"f = x -> x*x*x + 2x + 1\nx = 0.1\n@test f(Dual2(x,1.,0.)) == Dual2(f(x), 3x^2+2, 6x / 2)\n\n# This has computed the first and second derivatives as\n# as f(x) + f'(x)*ϵ_1 + f''(x)/2*ϵ_2\n# == (x^3 + x) + (3x^2+1)*ϵ_1 + 6x/2*ϵ_2" | ||
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"## 3. Structured Matrices\n\n**Problem 3.1 (C)** Complete the implementation of `LowerTridiagonal` which represents a banded matrix with\nbandwidths $(l,u) = (2,0)$ by storing only its diagonal, sub-diagonal, and second-sub-diagonal as vectors."
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"import Base: getindex, size, *\n\nstruct LowerTridiagonal <: AbstractMatrix{Float64}\n d::Vector{Float64} # diagonal entries of length n\n dl::Vector{Float64} # sub-diagonal entries of length n-1\n dl2::Vector{Float64} # second-sub-diagonal entries of length n-2\nend\n\nsize(L::LowerTridiagonal) = (length(L.d),length(L.d))\n\nfunction getindex(L::LowerTridiagonal, k::Int, j::Int)\n d, dl, dl2 = L.d, L.dl, L.dl2\n # TODO: return L[k,j].\n # If `k == j` then it should be equal to `d[k]`.\n # If `k == j+1` then it should be equal to `dl[j]`.\n # If `k == j+2` then it should be equal to `dl2[j]`.\n # Otherwise, it should return 0.0\nend\n\nn = 10\nd, dl, dl2 = randn(n), randn(n-1), randn(n-2)\n@test LowerTridiagonal(d, dl, dl2) == diagm(0 => d, -1 => dl, -2 => dl2)"
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"**Problem 3.2 (B)** Complete the implementation of `*` for a `LowerTridiagonal` matrix\nso that it takes $O(n)$ operations."
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"function *(L::LowerTridiagonal, x::AbstractVector)\n ## TODO: Return L*x but computed in O(n) operations\nend\n\nn = 10\nd, dl, dl2 = randn(n), randn(n-1), randn(n-2)\nx = randn(n)\n@test LowerTridiagonal(d, dl, dl2)*x ≈ diagm(0 => d, -1 => dl, -2 => dl2)*x"
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"## 4. Decompositions\n\n**Problem 4.1 (C)** Approximate $\\exp x$ by a cubic polynomial by minimising\nthe least squares error when sampled at $n$ evenly spaced points in $[0,1]$,\nthat is, $x_k = (k-1)/(n-1)$,\nreturning the coefficients in the monomial basis."
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"function expfit(n)\n ## TODO: return the coefficients [c_0,c_1,c_2,c_3] of the polynomial\n # c_0 + c_1*x + c_2*x^2 + c_3*x^3 that minimises the L^2 error at `n`\n # evenly spaced samples\nend"
],
"metadata": {},
"execution_count": null
},
{
"outputs": [],
"cell_type": "code",
"source": [
"c₀,c₁,c₂,c₃ = expfit(1000)\n x = 0.1\n @test abs(c₀ + c₁*x + c₂*x^2 + c₃*x^3 - exp(x)) ≤ 1E-3"
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"**Problem 4.2 (A)** Complete the function `lq(A)` that\nreturns a LQ decomposition, that is, `A = LQ` where `L` is lower triangular and `Q` is an orthogonal\nmatrix. You may assume that `A`\nis a square `Matrix{Float64}`. Hint: think of how a Householder reflection\ncan be constructed such that, for $𝐱 ∈ ℝ^n$,\n$$\n𝐱^⊤ Q = \\|𝐱\\|𝐞_1^⊤.\n$$"
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"function lq(A)\n m,n = size(A)\n m == n || error(\"not square\")\n ## TODO: Create Q and L such that A = L*Q, Q'Q == I and L is lower triangular\nend"
],
"metadata": {},
"execution_count": null
},
{
"outputs": [],
"cell_type": "code",
"source": [
"A = [1.0 2 3; 1 4 9; 1 1 1]\nL,Q = lq(A)\n@test Q'Q ≈ I\n@test L*Q ≈ A\n@test L ≈ tril(L) # it is acceptable to have small non-zero entries in L"
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"## 5. Singular Value Decomposition\n\n**Problem 5.1 (B)** Implement `pseudoinv` that returns the pseudo-inverse $A^+$\nfor an arbitrary square matrix, assuming that any singular value less than\n$10^{-15}$ is in fact exactly zero. DO NOT use the inbuilt routine `pinv`."
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"function pseudoinv(A)\n m,n = size(A)\n m == n || error(\"A must be square\")\n tol = 1E-15 # threshold below which we assume a singular value is zero\n ## TODO: construct and return the pseudo inverse of A\nend"
],
"metadata": {},
"execution_count": null
},
{
"outputs": [],
"cell_type": "code",
"source": [
"A = [1 2 3; 4 5 6; 7 8 9]\nA⁺ = pseudoinv(A)\n@test A⁺*A*A⁺ ≈ A⁺\n@test A*A⁺*A ≈ A"
],
"metadata": {},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"## 6. Differential Equations\n\n**Problem 6.1 (B)** Complete the function `mathieu(n)` that returns a length-$n$ `Vector{Float64}`\n$$\n\\begin{bmatrix}\nu_1 \\\\\n⋮ \\\\\nu_n\n\\end{bmatrix}\n$$\nsuch that $u_k$ approximates the solution to the time-evolution equation\n$$\n\\begin{align*}\nu(0) &= 0 \\\\\nu'(0) &= 1 \\\\\nu''(t) &= cos(t) u(t)\n\\end{align*}\n$$\nat the point $t_k = (k-1)/(n-1)$ using the Forward Euler method, by first recasting the problem\nas a vector ODE." | ||
],
"metadata": {}
},
{
"outputs": [],
"cell_type": "code",
"source": [
"function mathieu(n)\n # TODO: return a Vector{Float64} approximating the solution to the ODE\n # at n evenly spaced points between 0 and 1.\nend"
],
"metadata": {},
"execution_count": null
},
{
"outputs": [],
"cell_type": "code",
"source": [
"u = mathieu(100_000)\n@test u isa Vector\n@test abs(u[1]) ≤ 1E-15\n# this compares against the exact formula\n@test abs(u[end] - 1.148783704310448) ≤ 2E-5"
],
"metadata": {},
"execution_count": null
}
],
"nbformat_minor": 2,
"metadata": {
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.7.0"
},
"kernelspec": {
"name": "julia-1.7",
"display_name": "Julia 1.7.0",
"language": "julia"
}
},
"nbformat": 4
}