Biostat M280 Homework 1

Due Apr 19 @ 11:59PM

Q1

No handwritten homework reports are accepted for this course. We work with Git/GitHub. Efficient and abundant use of Git, e.g., frequent and well-documented commits, is an important criterion for grading your homework.

  1. If you don't have a GitHub account, apply for the Student Developer Pack at GitHub using your UCLA email.

  2. Create a private repository biostat-m280-2019-spring and add Hua-Zhou and chris-german (TA) as your collaborators.

  3. The top directories of the repository should be hw1, hw2, ... You may create other branches for developing your homework solutions, but the master branch will be your presentation area. Put your homework submission files (the IJulia notebook .ipynb, the HTML converted from the notebook, and all code and data sets needed to reproduce the results) in the master branch.

  4. After each homework due date, the teaching assistant and instructor will check out your master branch for grading. Tag each of your homework submissions with the tag names hw1, hw2, ... The tagging time will be used as your submission time; if you tag your hw1 submission after the deadline, penalty points will be deducted for late submission.

  5. Read the style guide for Julia programming by John Myles White. The following rules in the style guide will be strictly enforced when grading: (4), (6), (7), (8), (9), (12), (13), and (16).

Q2

Let's check whether floating-point numbers obey certain algebraic rules.

  1. The associative rule for addition says (x + y) + z == x + (y + z). Check the associative rule using x = 0.1, y = 0.1, and z = 1.0 in Julia. Explain what you find.

  2. Do floating-point numbers obey the associative rule for multiplication: (x * y) * z == x * (y * z)?

  3. Do floating-point numbers obey the distributive rule: a * (x + y) == a * x + a * y?

  4. Is 0 * x == 0 true for all floating-point numbers x?

  5. Is x / a == x * (1 / a) always true?
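
The following is a minimal sketch of how such checks might be run at the Julia REPL; apart from the values given in question 1, the particular numbers are only examples, and other choices are worth trying.

x, y, z = 0.1, 0.1, 1.0
(x + y) + z == x + (y + z)      # question 1: associativity of addition
(x * y) * z == x * (y * z)      # question 2: associativity of multiplication
a = 0.1
a * (x + y) == a * x + a * y    # question 3: distributive rule
0 * x == 0                      # question 4: also try special values such as Inf and NaN
x / a == x * (1 / a)            # question 5: division vs. multiplication by the reciprocal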

Q3

Consider the Julia function

function g(k)
    for i in 1:10
        k = 5k - 1
    end
    k
end

  1. Use @code_llvm to find the LLVM bitcode of compiled g with Int64 input.

  2. Use @code_llvm to find the LLVM bitcode of compiled g with Float64 input.

  3. Compare the bitcode from questions 1 and 2. What do you find?

  4. Read the Julia documentation on @fastmath and repeat questions 1-3 on the function

function g_fastmath(k)
    @fastmath for i in 1:10
        k = 5k - 1
    end
    k
end

Explain what the @fastmath macro does.
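
A minimal sketch of how these inspections might be invoked, assuming the two functions above have been defined; the literal arguments 1 and 1.0 are only examples of an Int64 and a Float64 input.

using InteractiveUtils          # provides @code_llvm outside the REPL

@code_llvm g(1)                 # LLVM bitcode for the Int64 method
@code_llvm g(1.0)               # LLVM bitcode for the Float64 method
@code_llvm g_fastmath(1.0)      # compare with the @fastmath version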

Q4

Create the vector x = (0.988, 0.989, 0.990, ..., 1.010, 1.011, 1.012).

  1. Plot the polynomial y = x^7 - 7x^6 + 21x^5 - 35x^4 + 35x^3 - 21x^2 + 7x - 1 at points x.

  2. Plot the polynomial y = (x - 1)^7 at points x.

  3. Explain what you found.
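
A minimal sketch of one way to build x and draw the two curves, assuming the Plots.jl package is installed; any plotting package works equally well.

using Plots

x = 0.988:0.001:1.012                                            # the requested grid
y1 = @. x^7 - 7x^6 + 21x^5 - 35x^4 + 35x^3 - 21x^2 + 7x - 1      # expanded polynomial
y2 = @. (x - 1)^7                                                 # factored form
plot(x, y1, label = "expanded")
plot!(x, y2, label = "(x - 1)^7")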

Q5

  1. Show the Sherman-Morrison formula $$ (\mathbf{A} + \mathbf{u} \mathbf{u}^T)^{-1} = \mathbf{A}^{-1} - \frac{1}{1 + \mathbf{u}^T \mathbf{A}^{-1} \mathbf{u}} \mathbf{A}^{-1} \mathbf{u} \mathbf{u}^T \mathbf{A}^{-1}, $$ where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is nonsingular and $\mathbf{u} \in \mathbb{R}^n$. This formula supplies the inverse of the symmetric, rank-one perturbation of $\mathbf{A}$.

  2. Show the Woodbury formula $$ (\mathbf{A} + \mathbf{U} \mathbf{V}^T)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{U} (\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U})^{-1} \mathbf{V}^T \mathbf{A}^{-1}, $$ where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is nonsingular, $\mathbf{U}, \mathbf{V} \in \mathbb{R}^{n \times m}$, and $\mathbf{I}_m$ is the $m \times m$ identity matrix. In many applications $m$ is much smaller than $n$. The Woodbury formula generalizes Sherman-Morrison and is valuable because the smaller matrix $\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U}$ is cheaper to invert than the larger matrix $\mathbf{A} + \mathbf{U} \mathbf{V}^T$.

  3. Show the binomial inversion formula $$ (\mathbf{A} + \mathbf{U} \mathbf{B} \mathbf{V}^T)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{U} (\mathbf{B}^{-1} + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U})^{-1} \mathbf{V}^T \mathbf{A}^{-1}, $$ where $\mathbf{A} \in \mathbb{R}^{n \times n}$ and $\mathbf{B} \in \mathbb{R}^{m \times m}$ are nonsingular.

  4. Show the identity $$ \text{det}(\mathbf{A} + \mathbf{U} \mathbf{V}^T) = \text{det}(\mathbf{A}) \text{det}(\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U}). $$ This formula is useful for evaluating the density of a multivariate normal with covariance matrix $\mathbf{A} + \mathbf{U} \mathbf{V}^T$.
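
These identities are to be derived analytically, but a quick numerical spot check in Julia can catch algebra slips. A minimal sketch, using randomly generated matrices with the arbitrary sizes n = 5 and m = 2 (the diagonal shift just keeps A comfortably nonsingular):

using LinearAlgebra

n, m = 5, 2
A = randn(n, n) + n * I                 # shifted so A is safely nonsingular
u = randn(n)
U, V = randn(n, m), randn(n, m)

# Sherman-Morrison (question 1)
inv(A + u * u') ≈ inv(A) - inv(A) * u * u' * inv(A) / (1 + u' * inv(A) * u)

# Woodbury (question 2)
inv(A + U * V') ≈ inv(A) - inv(A) * U * inv(I + V' * inv(A) * U) * V' * inv(A)

# determinant identity (question 4)
det(A + U * V') ≈ det(A) * det(I + V' * inv(A) * U)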

Q6

Show the following facts about triangular matrices. A unit triangular matrix is a triangular matrix whose diagonal entries are all 1.

  1. The product of two upper (lower) triangular matrices is upper (lower) triangular.

  2. The inverse of an upper (lower) triangular matrix is upper (lower) triangular.

  3. The product of two unit upper (lower) triangular matrices is unit upper (lower) triangular.

  4. The inverse of a unit upper (lower) triangular matrix is unit upper (lower) triangular.

  5. An orthogonal upper (lower) triangular matrix is diagonal.
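
The proofs are pencil-and-paper, but Julia's LinearAlgebra triangular types give a quick structural illustration of items 1-4 (an illustration only, not a proof); the 4 x 4 size and diagonal shift below are arbitrary choices to keep the matrices nonsingular.

using LinearAlgebra

L1 = LowerTriangular(randn(4, 4))
L2 = LowerTriangular(randn(4, 4))
L1 * L2                                  # product is again lower triangular
inv(LowerTriangular(randn(4, 4) + 4I))   # inverse is again lower triangular
U1 = UnitUpperTriangular(randn(4, 4))
U2 = UnitUpperTriangular(randn(4, 4))
U1 * U2                                  # unit diagonal is preserved under multiplication
inv(U1)                                  # and under inversion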

Q7

  1. Show that a symmetric matrix is positive semidefinite if and only if it is the covariance matrix of a random vector.

  2. Suppose two matrices $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n \times n}$ are positive semidefinite. Show that their Hadamard product (elementwise product) $\mathbf{A} \circ \mathbf{B} = (a_{ij} b_{ij})_{ij}$ is positive semidefinite.

  3. Suppose a symmetric matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ has entries $a_{ij} = i(n - j + 1)$ for $j \ge i$. Show that $\mathbf{A}$ is positive semidefinite.
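
A numerical illustration of questions 2 and 3, with the arbitrary choice n = 5; a smallest eigenvalue that is nonnegative up to rounding error is consistent with, though of course does not prove, positive semidefiniteness.

using LinearAlgebra

n = 5
X, Y = randn(n, n), randn(n, n)
A, B = X * X', Y * Y'                    # two random positive semidefinite matrices
eigmin(Symmetric(A .* B))                # Hadamard product (question 2)

# question 3: the entries for j >= i, extended symmetrically to the whole matrix
C = [min(i, j) * (n - max(i, j) + 1) for i in 1:n, j in 1:n]
eigmin(Symmetric(float.(C)))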