# Logistic Regression

Most of the machine learning concepts in my recent posts have had convenient closed-form solutions for optimal weights. Most problems in machine learning don’t! For example, neural networks usually have to use stochastic gradient descent to optimize weights.

In this post, I walk through logistic regression (which can also be thought of as a one-neuron neural network!). Finding the optimal weights in logistic regression requires an iterative optimizer.

See also:

- MLPR notes
- Section 17.4 of Barber’s Bayesian Reasoning and Machine Learning
- Section 4.3.2 of Bishop’s Pattern Recognition and Machine Learning

## Problem and data generation

Logistic regression can be used for classification. For example, if I have a bunch of observed data points that belong to class A and class B, I can pick a new point and ask which class the model thinks the point belongs to.

For this demo, I’ll generate two groups of points that correspond to the two classes.
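The generating code isn’t shown here, so as a sketch, two overlapping Gaussian blobs work (the centers, spread, and counts are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two fuzzy, overlapping clusters: class A around (1, 1) and
# class B around (-1, -1).
class_a = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(40, 2))
class_b = rng.normal(loc=[-1.0, -1.0], scale=1.0, size=(40, 2))

data = np.vstack([class_a, class_b])                  # shape (80, 2)
labels = np.concatenate([np.ones(40), np.zeros(40)])  # 1 = class A
```

Because the clusters overlap, no boundary classifies every point correctly, which is exactly the situation where a probabilistic answer is useful.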

Note: We could also learn this without iterative optimization using Bayes classifiers. But these require quadratically more parameters than logistic regression (see Bishop for more info).

## Logistic sigmoid

Logistic regression learns the weights of a logistic sigmoid function. The logistic sigmoid is given by

\begin{align}
\sigma(\textbf x) = \frac{1}{1 + \exp\left(-(\textbf w^\top \textbf x + b)\right)},
\end{align}

where \( \textbf w \) and \( b \) are the weights I want to learn and \( \textbf{x} \) is the input data. \( \sigma \) ranges from 0 to 1. In the case of logistic regression, its value is treated as the probability that a point \( \textbf{x} \) falls into one of the two classes.

I’ll define a `sigmoid_with_bias` function I can use. (Heads up, I combine the bias with the rest of the weights.)
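One way to write it (a sketch; I’m assuming the bias is appended as the last weight, acting on a column of ones):

```python
import numpy as np

def sigmoid_with_bias(weights, data):
    """Logistic sigmoid where the last entry of `weights` is the bias,
    applied to a column of ones appended to `data`."""
    data_with_bias = np.column_stack([data, np.ones(len(data))])
    return 1.0 / (1.0 + np.exp(-(data_with_bias @ weights)))
```

With all-zero weights the sigmoid returns 0.5 everywhere, which makes for a handy sanity check.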

*(Oh hey, my ipywidgets post has an interactive example of how the weights and bias influence the shape of the sigmoid.)*

### Sigmoid with two input variables

My data has two input variables, so I’d like a function that takes in two inputs and returns the probability that the point belongs to class A. The `sigmoid_with_bias` defined above can do this.

To demonstrate, I can plot the sigmoid with two input dimensions (and a third output dimension) using a contour plot.

Below I plot the contours for a \( \sigma \) using \( \textbf w = \begin{bmatrix}3/2 \\ -2\end{bmatrix} \) and \( b = 2 \). The graph shows a region where \( \sigma \) is close to 1, another region where \( \sigma \) is close to 0, and a sloping boundary in between. If I were to view the boundary from the side, I’d get something like the sigmoid shape shown above.
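Something like the following evaluates \( \sigma \) on a grid for those weights (a sketch; the grid range is chosen arbitrarily), ready to hand to `matplotlib`’s `contourf`:

```python
import numpy as np

w = np.array([1.5, -2.0])
b = 2.0

xs = np.linspace(-5.0, 5.0, 200)
ys = np.linspace(-5.0, 5.0, 200)
X, Y = np.meshgrid(xs, ys)

# Sigmoid evaluated at every grid point; plt.contourf(X, Y, S)
# then draws the contour plot.
S = 1.0 / (1.0 + np.exp(-(w[0] * X + w[1] * Y + b)))
```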

If a data point is in the area on this slope, I’d say things like “I’m 80% sure that this point is in class B”. This is useful where the data from the two classes overlap, such as in the example data.

If I drop this sigmoid on the Fuzzy Data, it looks like this \( \textbf w \) and \( b \) make a terrible boundary.

## Optimizers

Later I’ll need to use an optimizer to find the best weights, so here I’ll show how `minimize` works. I’ll try to minimize the function \( f(x) = x^2 - 2x \).

I also need to provide the gradient of \( f \). Because that can be tricky to get right, I’ll run `check_grad`, a function in `scipy` that numerically checks the gradient. I’ll also need an initial weight for the minimizer to start at.
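A sketch of how that looks with `scipy.optimize` (the gradient of \( x^2 - 2x \) is \( 2x - 2 \)):

```python
import numpy as np
from scipy.optimize import check_grad, minimize

def f(x):
    return float(x[0] ** 2 - 2 * x[0])

def f_grad(x):
    return np.array([2 * x[0] - 2])

# check_grad returns the norm of the difference between f_grad and a
# finite-difference estimate; it should be close to zero.
grad_error = check_grad(f, f_grad, np.array([0.5]))

init_w = np.array([0.0])
example_optimizer_result = minimize(f, init_w, jac=f_grad)
```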

When I run this, the minimum is in `example_optimizer_result.x`, and is \( x = 1 \).

## Maximum Likelihood of Logistic Regression

One way to learn a better value of \( \textbf w \) and \( b \) given the labeled observations is by using maximum likelihood. I’ll use the equations from Chapter 17 of Barber.

(Heads up, I moved the bias \( b \) into weights \( \textbf w \) by adding a column of 1’s to \( \textbf x \).)

To find \( \textbf w \) using maximum likelihood, I look for the \( \textbf w^* \) such that

\begin{align}
\textbf w^* = \operatorname*{argmax}_{\textbf w} \log P(\mathcal{D} \mid \textbf w),
\end{align}

where \( \mathcal{D} \) is the observed data. Using \( \textbf x_n \) as an input and \( y_n \) as a label from an observation, the log likelihood is the sum of the per-observation log probabilities, or

\begin{align}
\log P(\mathcal{D} \mid \textbf w) &= \sum_n \log P(\textbf x_n, y_n \mid \textbf w) .
\end{align}

For this logistic regression set-up, \( \log P(\textbf x_n, y_n \mid \textbf w) \) becomes

\begin{align}
\log P(\textbf x_n, y_n \mid \textbf w) = y_n \log \sigma(\textbf w^\top \textbf x_n) + (1 - y_n) \log\left(1 - \sigma(\textbf w^\top \textbf x_n)\right).
\end{align}

The goal is to find the value of \( \textbf w \) that maximizes \( \log P(\mathcal{D} \mid \textbf w) \). There is no closed-form solution, so I’ll use an iterative optimizer. As above, iterative optimizers sometimes require the gradient with respect to the weights, which for this logistic regression set-up is given by

\begin{align}
\nabla_{\textbf w} \log P(\mathcal{D} \mid \textbf w) &= \sum_n \left(y_n - \sigma(\textbf w^\top \textbf x_n)\right) \textbf x_n .
\end{align}

While there isn’t a closed-form solution, it turns out that \( \log P(\mathcal{D} \mid \textbf w) \) has a single maximum, so I don’t need to worry about local maxima. This makes logistic regression different from models where multiple runs give different results.

Finally, I’ll add a regularizer to keep the weights reasonable. With regularization constant \( \lambda \), the equations become

\begin{align}
L(\textbf w) &= \log P(\mathcal{D} \mid \textbf w) - \lambda \textbf w^\top \textbf w , \\
\nabla_{\textbf w} L(\textbf w) &= \sum_n \left(y_n - \sigma(\textbf w^\top \textbf x_n)\right) \textbf x_n - 2 \lambda \textbf w .
\end{align}

I return to why regularization is important below.

### Coding up equations

Now I translate the equations for \( \log P(\mathcal{D} \mid \textbf w) \) and \( \nabla_{\textbf w} \log P(\mathcal D \mid \textbf w) \) and the regularization terms into code.

In general:

- \( \log P(\mathcal{D} \mid \textbf w) \) should return a scalar
- \( \nabla_{\textbf w} \log P(\mathcal D \mid \textbf w) \) should return a vector the same size as \( \textbf w \).

There are a few catches:

- The optimizer I’m using, `minimize`, is a minimizer, so I’ll actually minimize the *negative* log likelihood.
- \( \log \) overflows when the sigmoid starts returning values that round to 0. There might be better ways to solve this, but I avoid it by adding a tiny offset.
- Because of how I’m dealing with the bias term, I have to explicitly make an \( \textbf x \) with a column of ones, here called `data_with_bias`.
- I’m returning the loss function as well so I can plot it. Eh.
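A sketch of those two functions (the names and the way \( \lambda \) is passed are my own; the tiny offset handles the overflow catch above):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def neg_log_likelihood(w, data_with_bias, labels, lam):
    # Negative of the regularized log likelihood (a scalar).
    s = sigmoid(data_with_bias @ w)
    eps = 1e-10  # tiny offset so log never sees an exact 0
    log_lik = np.sum(labels * np.log(s + eps)
                     + (1 - labels) * np.log(1 - s + eps))
    return -(log_lik - lam * w @ w)

def neg_log_likelihood_grad(w, data_with_bias, labels, lam):
    # Negative of the regularized gradient (same shape as w).
    s = sigmoid(data_with_bias @ w)
    return -(data_with_bias.T @ (labels - s) - 2 * lam * w)
```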

## Optimizing for logistic regression

Now I optimize those functions and plot the results.

```
fun: 15.919840119847468
hess_inv: array([[ 0.17848323, -0.05018526, 0.00905405],
[-0.05018526, 0.26530471, 0.17622941],
[ 0.00905405, 0.17622941, 0.44592668]])
jac: array([ -2.61269672e-06, 4.82700074e-06, -2.26616311e-06])
message: 'Optimization terminated successfully.'
nfev: 19
nit: 16
njev: 19
status: 0
success: True
x: array([ 0.1293201 , -2.03361611, -2.73868849])
```
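End-to-end, the fit looks something like this sketch (run on my own synthetic data, so the numbers won’t match the output above):

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def neg_log_likelihood(w, x, y, lam):
    s = sigmoid(x @ w)
    eps = 1e-10  # keeps log away from an exact 0
    return -(np.sum(y * np.log(s + eps) + (1 - y) * np.log(1 - s + eps))
             - lam * w @ w)

def neg_log_likelihood_grad(w, x, y, lam):
    s = sigmoid(x @ w)
    return -(x.T @ (y - s) - 2 * lam * w)

# Synthetic two-cluster data: class A (label 1) around (1, 1),
# class B (label 0) around (-1, -1).
rng = np.random.default_rng(0)
class_a = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(40, 2))
class_b = rng.normal(loc=[-1.0, -1.0], scale=1.0, size=(40, 2))
data = np.vstack([class_a, class_b])
labels = np.concatenate([np.ones(40), np.zeros(40)])

# Append the column of ones that folds the bias into the weights.
data_with_bias = np.column_stack([data, np.ones(len(data))])

result = minimize(neg_log_likelihood, np.zeros(3),
                  jac=neg_log_likelihood_grad,
                  args=(data_with_bias, labels, 0.01))
```

The fitted weights land in `result.x`, with the bias as the last entry.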

### Plotting the loss function

To visualize what the minimizer found, I’ll plot the loss function, or the negative log likelihood with regularization, and the optimal \( \textbf w \) found. While the real loss surface has three input dimensions, one for the bias and two for the weights, for visualization I’ll just vary one of the weights.

### Logistic sigmoid on the data with fitted weights

Plotting the logistic sigmoid with the fitted weights looks much better!

## Regularization

Regularization is important in logistic regression. One problem with logistic regression is that if the data is linearly separable, the boundary becomes super steep.

I’ll generate the data points, run logistic regression without regularization on it, and plot the boundary.

The main thing I want to show is that with no regularization constant, the boundary will become as steep as it can until the optimizer gives up. I also show the effect of a higher regularization constant.
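A sketch of the effect on separable 1-D data of my own making: with \( \lambda = 0 \) the fitted slope blows up, while a positive \( \lambda \) keeps it modest.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def neg_log_likelihood(w, x, y, lam):
    s = sigmoid(x @ w)
    eps = 1e-10
    return -(np.sum(y * np.log(s + eps) + (1 - y) * np.log(1 - s + eps))
             - lam * w @ w)

def neg_log_likelihood_grad(w, x, y, lam):
    s = sigmoid(x @ w)
    return -(x.T @ (y - s) - 2 * lam * w)

# Linearly separable 1-D data: class 0 sits entirely left of class 1.
xs = np.concatenate([np.linspace(-3, -1, 20), np.linspace(1, 3, 20)])
x = np.column_stack([xs, np.ones(len(xs))])  # second column is the bias
y = np.concatenate([np.zeros(20), np.ones(20)])

w_unreg = minimize(neg_log_likelihood, np.zeros(2),
                   jac=neg_log_likelihood_grad, args=(x, y, 0.0)).x
w_reg = minimize(neg_log_likelihood, np.zeros(2),
                 jac=neg_log_likelihood_grad, args=(x, y, 1.0)).x
```

The unregularized slope keeps growing until the optimizer gives up; the regularized one settles at a modest value.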