Generalized Linear Models
In my previous posts I have talked about linear and logistic regression. Today I will talk about the broader family of models to which both methods belong: Generalized Linear Models (GLMs). To work our way up to GLMs, we will begin by defining the exponential family.
The exponential family: A class of distributions is in the exponential family if it can be written in the form:
$$ p(y;\eta) = b(y) \exp(\eta^T T(y) - \alpha(\eta)) $$
Here $\eta$ is called the natural parameter, $T(y)$ the sufficient statistic, $\alpha(\eta)$ the log partition function (it normalizes the distribution), and $b(y)$ the base measure.
Linear regression as a GLM
Maybe you remember that the underlying assumption behind the least squares cost function of linear regression was that the conditional distribution of $y$ given $x$ is a Gaussian (normal) distribution. It turns out the Gaussian distribution is part of the exponential family and can be written as follows:
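$$ p(y;\mu) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}(y-\mu)^2\right) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{y^2}{2}\right) \cdot \exp\left(\mu y - \frac{1}{2}\mu^2\right) $$
Matching this to the exponential family form gives $b(y) = \frac{1}{\sqrt{2\pi}}\exp(-y^2/2)$, $\eta = \mu$, $T(y) = y$ and $\alpha(\eta) = \eta^2/2$.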
Note that the standard deviation $\sigma$ has been set to 1. This makes the derivation easier, and since $\sigma$ has no influence on our final choice of $\theta$ we are free to choose it arbitrarily.
One of the assumptions made when constructing a GLM is that the natural parameter $\eta$ and the inputs $x$ are related linearly: $\eta = \theta^T x$. To formulate the hypothesis function of a GLM we equate it to the expected value of the conditional distribution, which brings us back to the familiar hypothesis function of the linear regression algorithm:
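$$ h_\theta(x) = E[y \mid x; \theta] = \mu = \eta = \theta^T x $$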
Logistic regression as a GLM
The same procedure can be used to derive the logistic regression classifier as a GLM. The classifier assumes a Bernoulli conditional distribution of $y$ given $x$. Rewriting the Bernoulli distribution as a member of the exponential family gives:
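$$ p(y;\phi) = \phi^y (1-\phi)^{1-y} = \exp\left(y \log\frac{\phi}{1-\phi} + \log(1-\phi)\right) $$
Here $b(y) = 1$, $T(y) = y$, the natural parameter is the log-odds $\eta = \log\frac{\phi}{1-\phi}$ (so that $\phi = \frac{1}{1+e^{-\eta}}$), and $\alpha(\eta) = -\log(1-\phi) = \log(1+e^{\eta})$.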
Again, to formulate the hypothesis function of the logistic regression classifier we equate it to the expected value of the conditional distribution:
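$$ h_\theta(x) = E[y \mid x; \theta] = \phi = \frac{1}{1+e^{-\eta}} = \frac{1}{1+e^{-\theta^T x}} $$
This is exactly the sigmoid hypothesis used by logistic regression.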
Softmax regression
An example of a GLM that I have not mentioned before in other posts is the softmax regression algorithm. This GLM assumes a multinomial conditional distribution of $y$ given $x$, which is appropriate for classification with more than two classes.
To derive the hypothesis function for the softmax algorithm we will start by expressing the multinomial as an exponential family distribution. Up until now we have only seen $T(y) = y$; for the multinomial distribution, however, the sufficient statistic $T(y)$ is a vector of size equal to the number of classes $k$:
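$$ (T(y))_i = 1\{y = i\}, \qquad i = 1, \dots, k $$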
Here the notation $1\{\cdot\}$ takes on a value of 1 if its argument is true, and 0 otherwise. Also, $(T(y))_i$ denotes the $i$-th element of the vector $T(y)$.
Because $T(y)$ is a vector, the hypothesis function will also be a vector, containing a hypothesis for every class. The expectation for a single class is obtained from the natural parameters as follows:
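$$ \phi_i = E[(T(y))_i \mid x; \theta] = p(y = i \mid x; \theta) = \frac{e^{\eta_i}}{\sum_{j=1}^{k} e^{\eta_j}} $$
Here every class $i$ has its own parameter vector $\theta_i$, so that $\eta_i = \theta_i^T x$. The mapping from the $\eta_i$ to the $\phi_i$ is the softmax function that gives the algorithm its name.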
Finally, the entire hypothesis function looks as follows:
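$$ h_\theta(x) = \begin{bmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_k \end{bmatrix} = \frac{1}{\sum_{j=1}^{k} \exp(\theta_j^T x)} \begin{bmatrix} \exp(\theta_1^T x) \\ \exp(\theta_2^T x) \\ \vdots \\ \exp(\theta_k^T x) \end{bmatrix} $$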
Note that to use the softmax algorithm as a classifier all you have to do is output the class for which the hypothesis is highest.
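To make this concrete, here is a minimal sketch in Python of the softmax hypothesis and the resulting prediction rule. The parameter matrix `theta` (one row $\theta_i$ per class) and the input `x` are made-up placeholders, not values from a trained model:

```python
import numpy as np

def softmax_hypothesis(theta, x):
    """Return the vector of class probabilities phi for a single input x."""
    eta = theta @ x                 # natural parameters: eta_i = theta_i^T x
    eta = eta - np.max(eta)         # shift for numerical stability; does not change the result
    exp_eta = np.exp(eta)
    return exp_eta / exp_eta.sum()  # phi_i = exp(eta_i) / sum_j exp(eta_j)

def predict(theta, x):
    """Classify x by outputting the class whose hypothesis is highest."""
    return int(np.argmax(softmax_hypothesis(theta, x)))

# Example with made-up parameters: k = 3 classes, n = 2 features
theta = np.array([[ 1.0, -0.5],
                  [ 0.2,  0.8],
                  [-1.0,  0.3]])
x = np.array([0.5, 2.0])
print(softmax_hypothesis(theta, x))  # class probabilities, sums to 1
print(predict(theta, x))             # index of the most likely class
```

Subtracting the maximum of $\eta$ before exponentiating is a common trick to avoid numerical overflow; it leaves the resulting probabilities unchanged.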
We have not talked about the cost function and the update rule needed to actually train this classifier. The appropriate update rule can be derived using maximum likelihood estimation, but I will not go into detail about this here.
See … for more details.