Simple Perceptron Activation Function:
Prashant Basnet
Jul 2, 2024
We've seen how a single perceptron behaves; now let's expand on this concept to understand the idea of a neural network.
Let's see how to connect many perceptrons together, and then how to represent this mathematically.
What does a network of multiple perceptrons look like?
Here we can see several layers of single perceptrons connected to each other through their inputs and outputs.
(Image from Investopedia.)
Here we have an input layer, hidden layers, and an output layer.
Essentially, hidden layers are the layers between the input and output layers; they don't interact with the outside world directly.
As you go forward through more layers, the level of abstraction increases.
Let's discuss the activation function in a little more detail.
In the previous post, our activation function was just a simple function that output 0 or 1. We took the weighted sum of the inputs, and the activation function output either 0 or 1 based on whether that sum was positive or negative.
Our function is the weight times the input plus a bias:
z = wx + b
We will refer to this quantity as z and pass it through an activation function.
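To make this concrete, here is a minimal sketch of a single perceptron in Python; the input, weight, and bias values are made up purely for illustration:

```python
import numpy as np

def step(z):
    # Classic perceptron activation: 1 if z is positive, else 0.
    return 1 if z > 0 else 0

def perceptron(x, w, b):
    # Compute z = w . x + b, then pass z through the step activation.
    z = np.dot(w, x) + b
    return step(z)

# Illustrative values (not from any trained model).
x = np.array([1.0, 2.0])
w = np.array([0.5, -0.25])
b = 0.1
print(perceptron(x, w, b))  # z = 0.5 - 0.5 + 0.1 = 0.1 -> output 1
```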
Simple Perceptron Activation Function:
Example:
Imagine Z is calculated as follows: Z = WX + b
Scenario 1: Z = 0.1 (barely positive) → output 1.
Scenario 2: Z = 100 (hugely positive) → output 1.
Scenario 3: Z = -0.1 (barely negative) → output 0.
Scenario 4: Z = -100 (hugely negative) → output 0.
Key Point: The output is identical whether Z is barely positive or hugely positive, and likewise for negative values; only the sign of Z matters.
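A small sketch of the step activation on the scenario values above shows this directly (the values are illustrative):

```python
def step(z):
    # Output depends only on the sign of z, never its magnitude.
    return 1 if z > 0 else 0

for z in [0.1, 100.0, -0.1, -100.0]:
    print(f"Z = {z:7.1f} -> output {step(z)}")
# Z =     0.1 -> output 1
# Z =   100.0 -> output 1
# Z =    -0.1 -> output 0
# Z =  -100.0 -> output 0
```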
Why is this a Problem?
The simple perceptron activation function is not sensitive to the magnitude of Z; it only cares about the sign (positive or negative). This means small changes in the weights either have no effect on the output or flip it abruptly, so the model cannot learn gradually.
Solution: Using More Sophisticated Activation Functions
To make the model's output sensitive to changes in Z, we use smoother activation functions like the sigmoid function.
(Image from ResearchGate.)
The red curve shows that the output is no longer just 0 or 1. The sigmoid function is:
Formula: σ(x) = 1 / (1 + e^{-x})
Output Range: The sigmoid function outputs a value between 0 and 1.
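Here is a minimal sketch of the sigmoid in Python, showing how the output now varies smoothly with Z (the sample values are illustrative):

```python
import math

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x)); output always lies between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

for z in [-100.0, -0.1, 0.0, 0.1, 100.0]:
    print(f"Z = {z:7.1f} -> sigmoid(Z) = {sigmoid(z):.4f}")
# Z =  -100.0 -> sigmoid(Z) = 0.0000
# Z =    -0.1 -> sigmoid(Z) = 0.4750
# Z =     0.0 -> sigmoid(Z) = 0.5000
# Z =     0.1 -> sigmoid(Z) = 0.5250
# Z =   100.0 -> sigmoid(Z) = 1.0000
```

Unlike the step function, a barely positive Z and a hugely positive Z now produce different outputs, so small weight updates have a visible effect.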
So changing the activation function can be really beneficial depending on the particular task.
A few more activation functions:
(Image from Medium.)
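Common examples in such charts include tanh, ReLU, and leaky ReLU; here is a quick sketch of those three (this particular selection is an assumption, since the chart's contents aren't reproduced here):

```python
import math

def tanh(x):
    # Hyperbolic tangent: squashes x into the range (-1, 1).
    return math.tanh(x)

def relu(x):
    # Rectified Linear Unit: 0 for negative x, identity for positive x.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but lets a small slope through for negative x.
    return x if x > 0 else alpha * x

for z in [-2.0, 0.0, 2.0]:
    print(f"z={z:5.1f}  tanh={tanh(z):+.3f}  relu={relu(z):.3f}  leaky={leaky_relu(z):+.3f}")
```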
In a future post, we will see how to implement and build our own neural network models with Keras.