Perceptron Learning Algorithm (PLA)
The Perceptron algorithm is a supervised learning algorithm that finds a binary classifier (Yes/No, True/False). Let $x \in \mathbb{R}^d$ and $y \in \{-1, +1\}$. Then a function that maps a $d$-dimensional vector to a single signed value can be defined as follows:
$$\begin{aligned}
f(x) & = \sum_{i=1}^{d}w_i x_i + b \\
& = w^\top x + b
\end{aligned}$$
where $w \in \mathbb{R}^d$ is the weight vector (the classifier), $b$ is the bias, and $\cdot$ denotes the dot product. If the input data are linearly separable, then there exists a hyperplane $w \cdot x + b = 0$ that separates the data into two groups.
To simplify the formula further, we can merge the bias term $b$ into the weight vector: let $w = [w_0, w_1, \ldots, w_d]$ with $w_0 = b$, and $x = [x_0, x_1, \ldots, x_d]$ with $x_0 = 1$, so that $w, x \in \mathbb{R}^{d+1}$. Then the function $f(x)$ becomes
$$f(x) = w^\top x$$
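The bias-merging trick above can be checked numerically. This is a minimal sketch with an arbitrarily chosen weight vector and bias (the specific numbers are illustrative, not from the text):

```python
import numpy as np

# Hypothetical 2-D example: weights and bias chosen arbitrarily for illustration.
w = np.array([2.0, -1.0])   # weight vector in R^2
b = 0.5                     # bias term
x = np.array([1.0, 3.0])    # input point

# Original form: f(x) = w . x + b
f_original = np.dot(w, x) + b

# Merged form: prepend w_0 = b to w and x_0 = 1 to x
w_aug = np.concatenate(([b], w))    # augmented weights [b, w_1, w_2]
x_aug = np.concatenate(([1.0], x))  # augmented input [1, x_1, x_2]
f_merged = np.dot(w_aug, x_aug)

print(f_original, f_merged)  # both evaluate to -0.5
```

Both forms produce the same value, which is why the augmented notation $f(x) = w^\top x$ loses no generality.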
The Perceptron Learning Algorithm (PLA) finds such a weight vector $w$ in a finite number of iterations, provided the data are linearly separable. At each step $t = 0, 1, 2, \ldots$, pick an example $x(t)$ that the current weights misclassify, i.e. $\operatorname{sign}(w(t)^\top x(t)) \neq y(t)$, and apply the update rule
$$w(t+1) = w(t) + y(t)\,x(t)$$
where $y(t) \in \{-1, +1\}$ is the label of the misclassified input vector $x(t)$. Each update moves $w$ toward classifying $x(t)$ correctly.
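The update rule can be sketched as a short NumPy implementation. The dataset below is a made-up linearly separable example for illustration; the function names are assumptions, not from the text:

```python
import numpy as np

def pla(X, y, max_iters=1000):
    """Perceptron Learning Algorithm on bias-augmented inputs.

    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    Returns the learned augmented weight vector [w_0, w_1, ..., w_d],
    where w_0 plays the role of the bias b.
    """
    n, d = X.shape
    X_aug = np.hstack([np.ones((n, 1)), X])  # prepend x_0 = 1 for the bias
    w = np.zeros(d + 1)                      # start from w(0) = 0
    for _ in range(max_iters):
        # Find points the current w misclassifies: sign(w . x) != y
        preds = np.sign(X_aug @ w)
        wrong = np.where(preds != y)[0]
        if wrong.size == 0:
            return w                         # all points classified correctly
        i = wrong[0]
        w = w + y[i] * X_aug[i]              # update rule: w(t+1) = w(t) + y(t) x(t)
    return w

# Tiny linearly separable dataset (hypothetical)
X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = pla(X, y)
X_aug = np.hstack([np.ones((len(X), 1)), X])
assert np.all(np.sign(X_aug @ w) == y)  # every point ends up on the correct side
```

Because the data are linearly separable, the loop is guaranteed to exit through the `wrong.size == 0` branch after finitely many updates, which is exactly the convergence claim above.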