Bayesian Linear Regression part 3: Posterior
Now I have priors on the weights and observations. In this post, I’ll show a formula for finding the posterior on the weights, and show one plot using it. The next post will have more plots.
My goal is to find the distribution of the weights given the data. This is given by
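\[
p(\mathbf{w} \mid \mathbf{y}, \Phi)
= \frac{p(\mathbf{y} \mid \Phi, \mathbf{w})\, p(\mathbf{w})}{p(\mathbf{y} \mid \Phi)}
\propto p(\mathbf{y} \mid \Phi, \mathbf{w})\, p(\mathbf{w}),
\]

which is just Bayes’ rule: the posterior is proportional to the likelihood times the prior, and the denominator is a normalizing constant that doesn’t depend on the weights.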
Since I’m assuming the prior is Gaussian and likelihood is a combination of Gaussians, the posterior will also be Gaussian. That means there is a closed form expression for the mean and covariance of the posterior. Skipping ahead, I can use the equations from “Computing the Posterior” in the class notes:
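These are the standard conjugate-Gaussian updates for the posterior covariance \(V_n\) and mean \(\mathbf{w}_n\):

\[
V_n = \sigma_y^2 \left( \sigma_y^2 V_0^{-1} + \Phi^\top \Phi \right)^{-1}
\]
\[
\mathbf{w}_n = V_n V_0^{-1} \mathbf{w}_0 + \frac{1}{\sigma_y^2} V_n \Phi^\top \mathbf{y}
\]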
Code
I’ll convert this to code. Heads up: I know this isn’t the most efficient way to do it, and I’ll try to update this post when I find more tricks.
Variables
\( \mathbf{w}_0 \) and \( V_0 \) are the prior’s mean and covariance, which I defined back in priors on the weights. The code for that was
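roughly something like this (the exact numbers live in that post; here I’m just using a zero mean and a diagonal covariance as placeholders):

```python
import numpy as np

# Prior over the weights [bias, slope]: zero mean, diagonal covariance.
# The specific variances are placeholders for whatever the priors post picked.
w_0 = np.zeros(2)
V_0 = np.diag([1.0, 1.0])
```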
\(V_0^{-1}\) is the inverse of the prior’s covariance. It shows up a few times, so I’ll compute it once. It doesn’t look like I can use np.linalg.solve on it, so I’ll use np.linalg.inv:
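```python
# Invert the prior covariance once; it shows up in both the posterior
# covariance and the posterior mean below.
V_0_inv = np.linalg.inv(V_0)
```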
\(\Phi\) is the augmented input matrix. In this case, it’s the x values of the observations, with the column of 1s I add to deal with the bias term. So, from the last post, I had x as
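roughly this (the real sampling code is in the observations post; this is just a stand-in):

```python
# Stand-in for the x values from the observations post.
x = np.linspace(-1.0, 1.0, 10)
```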
Then \(\Phi\) is
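(assuming the bias weight comes first):

```python
# Column of 1s first (so the first weight is the bias), then the x values.
Phi = np.column_stack([np.ones_like(x), x])
```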
\(\textbf y\) is also from the last post. It’s the vector containing all the observations, which that post generated with a function. But since I already have \(\Phi\), I’ll skip the function and just build \(\textbf y\) directly from it.
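Something like this, assuming true_w holds the true bias and slope (placeholder numbers below) and true_sigma_y is the true noise level:

```python
# Placeholder stand-ins for the true parameters from the observations post.
true_w = np.array([0.5, 2.0])   # [true bias, true slope]
true_sigma_y = 0.3              # true observation noise

# Noisy observations of the true line, built directly from Phi.
y = Phi @ true_w + np.random.normal(0, true_sigma_y, size=len(x))
```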
\(\sigma_y\) is my guess of true_sigma_y. On a real dataset, I might not know the true \(\sigma_y\), so I keep separate true_sigma_y and sigma_y constants that I can use to explore what happens if my guess is off. I’ll start by imagining I know it:
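```python
# Start by pretending my guess of the noise level is exactly right.
sigma_y = true_sigma_y
```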
The rest is a matter of copying the equation over correctly and hoping I got it right!
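A direct transcription of the two update equations above:

```python
# Posterior covariance and mean (transcribed from the equations above).
V_n = sigma_y**2 * np.linalg.inv(sigma_y**2 * V_0_inv + Phi.T @ Phi)
w_n = V_n @ V_0_inv @ w_0 + (1 / sigma_y**2) * V_n @ Phi.T @ y
```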
Complete code
Putting it all together, I get
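a self-contained sketch that strings the snippets above together; the prior, the x values, and the true parameters are still placeholders standing in for the earlier posts:

```python
import numpy as np

# Prior over the weights [bias, slope] (placeholder values).
w_0 = np.zeros(2)
V_0 = np.diag([1.0, 1.0])
V_0_inv = np.linalg.inv(V_0)

# Stand-in observations (the real data comes from the earlier posts).
x = np.linspace(-1.0, 1.0, 10)
Phi = np.column_stack([np.ones_like(x), x])
true_w = np.array([0.5, 2.0])   # [true bias, true slope] -- placeholders
true_sigma_y = 0.3              # true observation noise -- placeholder
y = Phi @ true_w + np.random.normal(0, true_sigma_y, size=len(x))

# Noise guess: start by assuming I know the true value.
sigma_y = true_sigma_y

# Posterior covariance and mean.
V_n = sigma_y**2 * np.linalg.inv(sigma_y**2 * V_0_inv + Phi.T @ Phi)
w_n = V_n @ V_0_inv @ w_0 + (1 / sigma_y**2) * V_n @ Phi.T @ y

print("posterior mean:", w_n)
print("posterior covariance:\n", V_n)
```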
Sweet! After doing all that code and math, it seems like I should be rewarded with pretty graphs!
Plotting
In this post, I’ll just show one graph. w_n is the posterior mean of the weights, so I can plot the line it defines. I can also compare it to the weights from least squares and to the true weights.
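A sketch of what that plot could look like (not the post’s actual plotting code; the least-squares weights are recomputed here with np.linalg.lstsq):

```python
import matplotlib.pyplot as plt

# Least-squares weights, for comparison with the posterior mean.
w_lstsq, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Evaluate each line on a grid of x values.
grid = np.linspace(x.min(), x.max(), 100)
Phi_grid = np.column_stack([np.ones_like(grid), grid])

plt.scatter(x, y, color="k", label="observations")
plt.plot(grid, Phi_grid @ w_n, label="Bayesian (posterior mean)")
plt.plot(grid, Phi_grid @ w_lstsq, linestyle="--", label="least squares")
plt.plot(grid, Phi_grid @ true_w, linestyle=":", label="true line")
plt.legend()
plt.show()
```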
These are pretty similar! They differ at least in part because of the prior, which is centered at 0, meaning it expects most lines to go through the origin and have a slope of 0.
This is more obvious if I make the true bias very far from 0. Then the Bayesian fit might not even go through the points! This might remind you of the effects of regularization, which makes extreme values less likely, at the cost of sometimes having poorer fits.
Next
See Also
- Still thanks to MLPR!
- Wikipedia on Conjugate Priors