No handwritten homework reports are accepted for this course. We work with Git/GitHub. Efficient and abundant use of Git, e.g., frequent and well-documented commits, is an important criterion for grading your homework.
Apply for the Student Developer Pack at GitHub using your UCLA email.
Create a private repository `biostat-m280-2018-spring` and add `Hua-Zhou` and `LuZhangstat` (TA) as your collaborators.
Top directories of the repository should be `hw1`, `hw2`, ... Create two branches, `master` and `develop`. The `develop` branch will be your main playground, the place where you develop solutions (code) to the homework problems and write up your reports. The `master` branch will be your presentation area. Put your homework submission files (IJulia notebook `.ipynb`, the `html` converted from the notebook, and all code and data sets needed to reproduce the results) in the `master` branch.
After each homework due date, the teaching assistant and instructor will check out your `master` branch for grading. Tag each of your homework submissions with the tag names `hw1`, `hw2`, ... The tagging time will be used as your submission time; that means if you tag your `hw1` submission after the deadline, points will be deducted as a late-submission penalty.
Read the style guide for Julia programming by John Myles White. The following rules in the style guide will be strictly enforced when grading: (4), (6), (7), (8), (9), (12), (13), and (16).
Let's check whether floating-point numbers obey certain algebraic rules. A minimal check of the first rule is sketched after this list.

1. The associative rule for addition says `(x + y) + z == x + (y + z)`. Check the associative rule using `x = 0.1`, `y = 0.1`, and `z = 1.0` in Julia. Explain what you find.
2. Do floating-point numbers obey the associative rule for multiplication: `(x * y) * z == x * (y * z)`?
3. Do floating-point numbers obey the distributive rule: `a * (x + y) == a * x + a * y`?
4. Is `0 * x == 0` true for all floating-point numbers `x`?
5. Is `x / a == x * (1 / a)` always true?
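For instance, a minimal check of the addition rule (assuming any recent Julia; the variable values are the ones from the problem):

```julia
x, y, z = 0.1, 0.1, 1.0

(x + y) + z == x + (y + z)  # false
(x + y) + z                 # 1.2
x + (y + z)                 # 1.2000000000000002
```

The two groupings disagree because `0.1` has no exact binary representation, so rounding error enters at different points in the two evaluation orders.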
Consider the Julia function

```julia
function g(k)
    for i = 1:10
        k = 5k - 1
    end
    k
end
```
1. Use `@code_llvm` to find the LLVM bitcode of compiled `g` with `Int64` input (a usage sketch appears after this list).
2. Use `@code_llvm` to find the LLVM bitcode of compiled `g` with `Float64` input.
3. Compare the bitcode from questions 1 and 2. What do you find?
4. Read the Julia documentation on `@fastmath` and repeat questions 1-3 on the function

    ```julia
    function g_fastmath(k)
        @fastmath for i = 1:10
            k = 5k - 1
        end
        k
    end
    ```

5. Explain what the macro `@fastmath` does.
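For example, assuming Julia ≥ 1.0, where `@code_llvm` is provided by the `InteractiveUtils` standard library (loaded automatically in the REPL):

```julia
using InteractiveUtils  # provides @code_llvm outside the REPL

function g(k)
    for i = 1:10
        k = 5k - 1
    end
    k
end

@code_llvm g(1)    # bitcode of the method compiled for Int64
@code_llvm g(1.0)  # bitcode of the method compiled for Float64
```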
1. Create the vector `x = (0.988, 0.989, 0.990, ..., 1.010, 1.011, 1.012)`.
2. Plot the polynomial `y = x^7 - 7x^6 + 21x^5 - 35x^4 + 35x^3 - 21x^2 + 7x - 1` at points `x`.
3. Plot the polynomial `y = (x - 1)^7` at points `x` (one possible setup for the vector and both plots is sketched after this list).
4. Explain what you found.
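One possible setup, assuming the `Plots.jl` package (the problem does not prescribe a plotting library):

```julia
using Plots  # assumed plotting package; any Julia plotting library works

x = 0.988:0.001:1.012  # the requested grid of points
y1 = x.^7 .- 7x.^6 .+ 21x.^5 .- 35x.^4 .+ 35x.^3 .- 21x.^2 .+ 7x .- 1
y2 = (x .- 1).^7

plot(x, [y1 y2], label = ["expanded form" "(x - 1)^7"])
```

Plotting both curves on one set of axes makes the comparison direct.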
Let the $n \times n$ matrix `H` have elements `H[i, j] = 1 / (i + j - 1)`.
1. Write a function `h(n)` that outputs the $n \times n$ matrix `H`. Try at least 3 ways, e.g., looping, comprehension, and vectorization (a sketch of all three appears after this list). Compute and print `H` for `n = 5`.
2. Compare the efficiency of the three methods for `n = 1000`.
3. Experiment with different rounding modes (`setrounding(Float64, RoundingMode)`) and report the entry `inv(H)[1, 1]` for `n = 15`.
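A sketch of three possible constructions (the helper names `h_loop`, `h_comp`, and `h_vec` are illustrative, not prescribed by the problem):

```julia
# 1. Looping over all entries
function h_loop(n)
    H = zeros(n, n)
    for j in 1:n, i in 1:n  # column-major order
        H[i, j] = 1 / (i + j - 1)
    end
    H
end

# 2. Array comprehension
h_comp(n) = [1 / (i + j - 1) for i in 1:n, j in 1:n]

# 3. Vectorization: broadcast over a column and a row of indices
h_vec(n) = 1 ./ ((1:n) .+ (1:n)' .- 1)

h_loop(5) == h_comp(5) == h_vec(5)  # the three constructions agree
```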
Show the Sherman-Morrison formula $$ (\mathbf{A} + \mathbf{u} \mathbf{u}^T)^{-1} = \mathbf{A}^{-1} - \frac{1}{1 + \mathbf{u}^T \mathbf{A}^{-1} \mathbf{u}} \mathbf{A}^{-1} \mathbf{u} \mathbf{u}^T \mathbf{A}^{-1}, $$ where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is nonsingular and $\mathbf{u} \in \mathbb{R}^n$. This formula supplies the inverse of a symmetric, rank-one perturbation of $\mathbf{A}$.
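Although the problem asks for a proof, the identity is easy to sanity-check numerically first (the particular `A` and `u` below are arbitrary):

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]  # an arbitrary nonsingular matrix
u = [1.0, 2.0]

lhs = inv(A + u * u')
rhs = inv(A) - (inv(A) * u) * (u' * inv(A)) / (1 + u' * inv(A) * u)
lhs ≈ rhs  # true
```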
Show the Woodbury formula $$ (\mathbf{A} + \mathbf{U} \mathbf{V}^T)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{U} (\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U})^{-1} \mathbf{V}^T \mathbf{A}^{-1}, $$ where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is nonsingular, $\mathbf{U}, \mathbf{V} \in \mathbb{R}^{n \times m}$, and $\mathbf{I}_m$ is the $m \times m$ identity matrix. In many applications $m$ is much smaller than $n$. The Woodbury formula generalizes Sherman-Morrison and is valuable because the smaller $m \times m$ matrix $\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U}$ is cheaper to invert than the larger $n \times n$ matrix $\mathbf{A} + \mathbf{U} \mathbf{V}^T$.
Show the binomial inversion formula $$ (\mathbf{A} + \mathbf{U} \mathbf{B} \mathbf{V}^T)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{U} (\mathbf{B}^{-1} + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U})^{-1} \mathbf{V}^T \mathbf{A}^{-1}, $$ where $\mathbf{A} \in \mathbb{R}^{n \times n}$ and $\mathbf{B} \in \mathbb{R}^{m \times m}$ are nonsingular. Setting $\mathbf{B} = \mathbf{I}_m$ recovers the Woodbury formula.
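A numerical sanity check covering this identity, and hence Woodbury by taking $\mathbf{B} = \mathbf{I}_m$ (the dimensions, seed, and diagonal shifts below are arbitrary choices that keep the random matrices safely nonsingular):

```julia
using LinearAlgebra, Random

Random.seed!(280)            # arbitrary seed for reproducibility
n, m = 100, 3
A = randn(n, n) + 10I        # diagonal shift keeps A well conditioned
B = randn(m, m) + 5I
U, V = randn(n, m), randn(n, m)

lhs = inv(A + U * B * V')
rhs = inv(A) - inv(A) * U * inv(inv(B) + V' * inv(A) * U) * V' * inv(A)
lhs ≈ rhs  # true, up to floating-point error
```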
Show the identity $$ \text{det}(\mathbf{A} + \mathbf{U} \mathbf{V}^T) = \text{det}(\mathbf{A}) \text{det}(\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U}). $$ This formula is useful for evaluating the density of a multivariate normal with covariance matrix $\mathbf{A} + \mathbf{U} \mathbf{V}^T$.
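The determinant identity can be checked the same way (reusing the arbitrary random inputs from the previous sketch):

```julia
using LinearAlgebra, Random

Random.seed!(280)
n, m = 100, 3
A = randn(n, n) + 10I
U, V = randn(n, m), randn(n, m)

det(A + U * V') ≈ det(A) * det(I + V' * inv(A) * U)  # true
```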