Matrices
Key highlights
Matrix representation
- Simple graph
- Only 0s and 1s
- Symmetric
- Compute the complement as $1 - A$ elementwise, then zero the diagonal so you don't introduce self-loops
- The row-wise (or, by symmetry, column-wise) sum over a given node's row or column gives that node's degree (see the first sketch after this list)
- Sum of all entries = $2|E|$
- Directed graph
- Not symmetric anymore
- Make sure you understand the author's convention: does $A_{u,v} = 1$ mean an edge from $u$ to $v$ or from $v$ to $u$?
- If the convention is that $A_{u,v} = 1$ means an edge from $u$ to $v$, then:
- Row-wise sum = out-degree
- Column-wise sum = in-degree
- Sum of all entries = $|E|$
- Weighted graphs
- Entries are no longer only 0s and 1s; each entry stores the edge weight
- Bipartite network
- Rows represent one node type
- Columns the other
- (Problem: the matrix is rectangular, so you lose the diagonal/symmetry properties of the square case)
- You can squarify the matrix by placing it and its transpose as the off-diagonal blocks of a larger square matrix - book page 71 - nice visualisation
- Multilayer network
- Three dimensional tensor
- Note the book's terminology: a two-dimensional tensor is a matrix, a one-dimensional tensor a vector, and a zero-dimensional tensor a number (numpy sketches of these representations follow below)
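
A minimal numpy sketch of the simple and directed cases above; the example graphs are made up for illustration:

```python
import numpy as np

# Simple undirected graph on 4 nodes: triangle 0-1-2 plus the edge 2-3.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])

assert (A == A.T).all()        # symmetric
degrees = A.sum(axis=1)        # row sums = degrees -> [2, 2, 3, 1]
assert A.sum() == 2 * 4        # sum of all entries = 2|E|, here |E| = 4

# Complement: flip 0s and 1s, then zero the diagonal to avoid self-loops.
A_complement = 1 - A - np.eye(4, dtype=int)

# Directed graph with convention A[u, v] = 1 iff there is an edge u -> v.
D = np.array([
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
])
out_degrees = D.sum(axis=1)    # row sums    -> out-degrees
in_degrees = D.sum(axis=0)     # column sums -> in-degrees
assert D.sum() == 3            # sum of all entries = |E|
```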
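And a sketch of the bipartite and multilayer representations, again with made-up shapes and edges:

```python
import numpy as np

# Bipartite network: 3 nodes of one type (rows) x 4 of the other (columns).
# The matrix is rectangular, so the diagonal/symmetry properties are gone.
B = np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
])
row_type_degrees = B.sum(axis=1)  # degrees of the row-type nodes
col_type_degrees = B.sum(axis=0)  # degrees of the column-type nodes
assert B.sum() == 6               # each edge counted exactly once: |E| = 6

# Multilayer network with 2 layers over the same 3 nodes:
# a three-dimensional tensor of shape (layers, nodes, nodes).
T = np.zeros((2, 3, 3), dtype=int)
T[0, 0, 1] = T[0, 1, 0] = 1       # edge 0-1 in layer 0
T[1, 1, 2] = T[1, 2, 1] = 1       # edge 1-2 in layer 1
```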
Linear algebra
Transpose
- Reverses the direction of all edges and swaps the matrix dimensions → it only matters for directed graphs (edges flip) and bipartite graphs (row and column node types swap); for an undirected simple graph $A^T = A$
- Why is it useful?
- Projection example: $AA^T$, where $A$ is the rectangular matrix of a bipartite network. Row-normalize both $A$ and $A^T$ first, so every entry becomes the probability of stepping from a node of one type to a node of the other type. Each entry of the product is then the probability of ending up at node $v_2$ after two steps, given that you start at node $v_1$. It is the dot product of two vectors (sketched in code below):
- First vector contains the probabilities of going from $v_1$ to each node $u$ of the other type
- Second vector contains the probabilities of going from each such node $u$ to $v_2$
- Since a product of row-stochastic matrices is row-stochastic, every row of the projection sums to one; the diagonal entry of a node is the probability of returning to it after two steps
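
A sketch of that stochastic projection, with a made-up bipartite matrix `B` (rows are one node type, columns the other):

```python
import numpy as np

# Made-up bipartite matrix: rows = nodes of type V, columns = nodes of type U.
B = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
])

# Row-normalize B (steps V -> U) and B^T (steps U -> V) separately.
P_vu = B / B.sum(axis=1, keepdims=True)
P_uv = B.T / B.T.sum(axis=1, keepdims=True)

# Entry (v1, v2) of the projection is the probability that a two-step
# random walk starting at v1 ends at v2: the dot product described above.
proj = P_vu @ P_uv

assert np.allclose(proj.sum(axis=1), 1.0)  # each row is a distribution
print(np.diag(proj))  # two-step return probabilities, not 1 in general
```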
Positive (Semi) Definite matrices
- A matrix $A$ is called positive semidefinite if $z^T A z \geq 0$ for every vector $z$ (assuming the dimensions make the product legal). If the inequality is strict, i.e., $z^T A z > 0$ for every nonzero $z$, then $A$ is positive definite.
- This is useful, for example, when computing the distance between two $m$-dimensional vectors $p, q$
- Specifically, the Euclidean distance can be written as $E = [(p - q)^T M (p - q)]^{\frac{1}{2}}$ with $M = I$ (the identity matrix)
- In general, you can replace $M$ with any positive definite matrix and still obtain a valid distance. For example, the Mahalanobis distance uses $M = \Sigma^{-1}$, the inverse of the covariance matrix of the data that $p$ and $q$ come from (sketched below)
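
A sketch of these generalized distances; the sample `X` used to estimate the covariance matrix is made up:

```python
import numpy as np

p = np.array([1.0, 2.0])
q = np.array([3.0, 1.0])
d = p - q

# M = I recovers the ordinary Euclidean distance.
euclidean = np.sqrt(d @ np.eye(2) @ d)
assert np.isclose(euclidean, np.linalg.norm(d))

# Mahalanobis distance: M is the inverse of the covariance matrix of the
# data the points are drawn from, estimated here from a made-up sample X.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 1.0], [3.0, 3.0], [4.0, 3.5]])
M = np.linalg.inv(np.cov(X, rowvar=False))
mahalanobis = np.sqrt(d @ M @ d)

# A covariance matrix is positive semidefinite; its inverse (when it
# exists) is too, so the expression under the square root is never negative.
assert (np.linalg.eigvalsh(M) >= 0).all()
```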