What is Perceptron?

Understanding the perceptron and its intuition

Perceptron, what is this? No, guys, this is not a newly discovered sibling of electrons or neutrons. Have you ever wondered how our brain works? Scientists have taken this curiosity to another level and tried to mimic it: deep learning is inspired by the functioning of the brain.

The neuron is the basic unit of brain tissue; likewise, in deep learning, we have the perceptron model. A neural network, which defines a deep learning model's architecture (structure), comprises several layers, and each layer contains perceptrons (also called nodes). Each node computes a weighted sum of its inputs plus a bias, and then applies an activation function $f$:

$$z = w_1 x_1 + w_2 x_2 + \cdots + w_n x_n + b$$

$$y=f(z)$$

Note: $w_1, w_2, \dots, w_n$ are the weights, $b$ is the bias, and $f$ is the activation function for the node.
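To make this concrete, here is a minimal sketch of a single node in plain NumPy; the input values, the weights, and the choice of a step function for $f$ are all illustrative assumptions, not fixed parts of the model:

import numpy as np

# Illustrative inputs, weights, and bias (made-up numbers)
x = np.array([0.5, -1.0, 2.0])   # inputs x_1 ... x_n
w = np.array([0.4, 0.3, -0.1])   # weights w_1 ... w_n
b = 0.2                          # bias

z = np.dot(w, x) + b             # z = w_1*x_1 + ... + w_n*x_n + b
y = 1 if z >= 0 else 0           # f: here, a step activation (an assumption)
print(z, y)                      # z ≈ -0.1, so y = 0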

The Perceptron Trick: how does this perceptron get trained?

As you have seen, the above perceptron model will generally give you a linear combination as its output (yeah, whenever there is no activation function, as in regression models).

  • Training starts by assuming random values for the weights and the bias.

  • The model then randomly picks a point and checks whether it is classified correctly by the current line, comparing the prediction with the actual label.

  • If it is, the model picks another point to check; if not, the model/line adjusts itself (by applying a transformation) so that the checked point is predicted correctly.

  • The transformation adjusts the weights and the bias.

  • If the actual label is 1 but the prediction is 0, the point's coordinates (with a 1 appended for the bias term) are added to the weights, scaled by the learning rate.

  • If the actual label is 0 but the prediction is 1, the point's coordinates (with a 1 appended for the bias term) are likewise subtracted from the weights.

The eventual algorithm simply repeats this update rule in a loop for some k epochs:

$$W_{new} = W_{old} + \eta\,(y_i - \hat{y}_i)\,X_i$$

where $\eta$ is the learning rate, $y_i$ is the actual label, $\hat{y}_i$ is the prediction, and $X_i$ is the (bias-augmented) point.
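As a quick worked example (with made-up numbers), suppose $\eta = 0.1$, the true label is $y_i = 1$, the prediction is $\hat{y}_i = 0$, and the bias-augmented point is $X_i = (1, 2, 3)$. Then:

$$W_{new} = W_{old} + 0.1\,(1 - 0)\,(1, 2, 3) = W_{old} + (0.1,\ 0.2,\ 0.3)$$

If the point had been classified correctly, $(y_i - \hat{y}_i)$ would be 0 and the weights would not change; if $y_i = 0$ and $\hat{y}_i = 1$, the point would be subtracted instead.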

After the k epochs, you should end up with a model that predicts reasonably well.

Please find the code for the above algorithm below.

import numpy as np

def step(z):
    # Step activation: returns 1 if z >= 0, else 0
    return 1 if z >= 0 else 0

def perceptron(X, y, lr=0.1, epochs=1000):
    # Prepend a column of 1s so the bias is learned as weights[0]
    X = np.insert(X, 0, 1, axis=1)
    weights = np.ones(X.shape[1])

    for _ in range(epochs):
        # Pick a random training point
        j = np.random.randint(0, X.shape[0])
        # Predict with the current weights
        y_hat = step(np.dot(X[j], weights))
        # Perceptron update: W_new = W_old + lr * (y - y_hat) * X
        weights = weights + lr * (y[j] - y_hat) * X[j]

    # Return the bias and the remaining weights separately
    return weights[0], weights[1:]
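And here is a quick usage sketch on toy, linearly separable data; the data generation and seed are purely illustrative, not part of the algorithm itself:

# Toy 2-D data: label 1 where x1 + x2 > 1, else 0 (illustrative)
np.random.seed(42)
X = np.random.rand(100, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

bias, w = perceptron(X, y)
print("bias:", bias, "weights:", w)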

That's a wrap! Understanding what a perceptron does is extremely foundational in DL, and strong foundations are always the best start for learning anything.

Stay tuned for more blogs on the projects and other algorithms.
