Perceptron Lab

A theoretical explanation can be found in the MLP blog post.

Perceptrons are one of the simplest neural network architectures and were first proposed in the 1950s by Frank Rosenblatt. A perceptron consists of a single layer of neurons, where each neuron computes a weighted sum of its inputs and applies an activation function to the sum to produce an output.
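
In symbols, a single neuron with inputs x₁, …, xₙ, weights w₁, …, wₙ, and bias b computes:

ŷ = f(w₁x₁ + w₂x₂ + … + wₙxₙ + b)

where f is the activation function; for a classic perceptron, f is the step function that outputs 1 when its argument is non-negative and 0 otherwise.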

Single Perceptron (image by author)

The perceptron learning algorithm is a supervised learning algorithm that adjusts the weights of the neuron based on the errors between the predicted outputs and the actual outputs. The goal of the perceptron learning algorithm is to minimize the errors by adjusting the weights to better fit the training data.
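
Concretely, the standard perceptron learning rule nudges each weight in the direction that reduces the error. For a training example with inputs xᵢ, target output y, prediction ŷ, and learning rate η:

wᵢ ← wᵢ + η(y − ŷ)xᵢ
b ← b + η(y − ŷ)

When the prediction is correct, y − ŷ = 0 and the weights are left unchanged.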

In this tutorial, we will implement a perceptron for the OR gate using Python.

The OR gate is a logic gate that produces a high output (1) when at least one of its inputs is high (1). The truth table for the OR gate is shown below:

OR gate truth table (A and B are inputs, Y is the output):

A | B | Y
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

Our goal is to train a perceptron to learn the OR gate function by adjusting the weights of the neuron.

Perceptron Algorithm

The perceptron algorithm consists of the following steps:

1. Initialize the weights and bias (for example, to zeros or small random values).
2. For each training example, compute the weighted sum of the inputs plus the bias, and apply the step activation function to produce a prediction.
3. Compare the prediction with the actual output and update the weights and bias using the learning rule above.
4. Repeat steps 2–3, iterating over the training data until the error is minimized or a maximum number of iterations is reached.
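
Here is a minimal, self-contained sketch of this training loop in Python. The NumPy usage, function names, learning rate of 0.1, and 20-epoch cap are illustrative choices, not prescribed by the algorithm:

```python
import numpy as np

def step(z):
    # Heaviside step activation: 1 if z is non-negative, else 0
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    # Start with zero weights and bias; small random values also work
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            prediction = step(np.dot(w, xi) + b)
            error = target - prediction
            if error != 0:
                # Perceptron learning rule: w <- w + lr * (y - y_hat) * x
                w += lr * error * xi
                b += lr * error
                errors += 1
        if errors == 0:
            # All training examples classified correctly; training converged
            break
    return w, b

# OR gate training data: inputs A, B and target output Y
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w, b = train_perceptron(X, y)
for xi in X:
    print(f"input {xi.tolist()} output is {step(np.dot(w, xi) + b)}")
```

Since the OR function is linearly separable, the perceptron convergence theorem guarantees that this loop reaches zero training error after a finite number of updates.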

Result: after training, the perceptron produces the following outputs:

input [0, 0] output is 0
input [0, 1] output is 1
input [1, 0] output is 1
input [1, 1] output is 1

The perceptron has been accurately trained to mimic the OR gate.