This document is about a new way of training artificial neural networks that tries to mimic how the brain might learn.
Traditional methods of training AI, like backpropagation, aren't considered biologically realistic. These researchers have created a model whose neurons and learning rules are more like those of the brain.
Designed like real neurons in the brain:
The model uses more complex artificial neurons with distinct parts, like real neurons.
Structural differences:
Pyramidal neurons: have 3 compartments
apical dendrite
basal dendrite
soma
Inhibitory interneurons: have 2 compartments
soma
dendrite
Connections:
They are connected, but in specific ways.
Pyramidal neurons connect to other pyramidal neurons through both feedforward and feedback pathways.
Inhibitory interneurons connect laterally to pyramidal neurons within the same layer.
Processing and activation:
Processing also differs between these two types of neurons.
Pyramidal neurons:
Basal dendrites receive feedforward input
Apical dendrites receive feedback and lateral inhibitory input
Soma integrates information from both dendrites.
Activation function (ϕ) is applied to the soma's membrane potential
Inhibitory interneurons:
Dendrites receive information from pyramidal neurons in the same layer.
Soma integrates the input
Activation function is applied to the soma's membrane potential.
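The processing described above can be sketched in code. This is a minimal rate-based sketch, not the paper's exact model: the tanh activation, the equal mixing of basal and apical potentials at the soma, and all function names are assumptions for illustration.

```python
import numpy as np

def phi(v):
    """Activation function applied to the somatic membrane potential (tanh is an assumption)."""
    return np.tanh(v)

def pyramidal_forward(x_ff, x_fb, W_basal, W_apical):
    """Three-compartment pyramidal neuron: basal dendrites integrate
    feedforward input, apical dendrites integrate feedback / lateral
    inhibitory input, and the soma combines both compartments."""
    v_basal = W_basal @ x_ff               # basal compartment potential
    v_apical = W_apical @ x_fb             # apical compartment potential
    v_soma = 0.5 * (v_basal + v_apical)    # assumed equal mixing at the soma
    return phi(v_soma)

def interneuron_forward(x_lateral, W_dend):
    """Two-compartment inhibitory interneuron: one dendrite integrates
    same-layer pyramidal activity, then the soma applies phi."""
    v_dend = W_dend @ x_lateral
    return phi(v_dend)
```

Feeding in same-shaped inputs and weight matrices yields bounded firing rates for each population.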
Function in the network:
Pyramidal neurons are the primary information-processing units.
Inhibitory interneurons provide lateral inhibition, helping to balance and regulate the network.
What is lateral inhibition?
Lateral inhibition refers to the process where active neurons suppress the activity of their neighboring neurons.
When a pyramidal neuron is activated, it excites nearby inhibitory interneurons; these interneurons then suppress the activity of other nearby pyramidal neurons.
What's the purpose of doing so?
It prevents over-excitation of the network. If one neuron becomes very active, it indirectly reduces the activity of its neighbors.
Active neurons stand out more against less active ones.
Sparse coding: only the most relevant neurons respond strongly to a given input, making the network's representation more efficient.
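The effect of lateral inhibition on sparsity can be illustrated with a toy computation. This is only a sketch: the subtractive form of inhibition, the strength value, and the clipping at zero are assumptions, not the paper's rule.

```python
import numpy as np

def lateral_inhibition(rates, strength=0.5):
    """Subtractive lateral inhibition: each neuron is suppressed in
    proportion to the summed activity of all *other* neurons in its
    layer. Rates are clipped at zero (firing rates are non-negative)."""
    total = rates.sum()
    inhibited = rates - strength * (total - rates)  # exclude self-inhibition
    return np.clip(inhibited, 0.0, None)

rates = np.array([0.9, 0.2, 0.1, 0.05])
out = lateral_inhibition(rates)  # only the strongest neuron stays active
```

After inhibition, only the strongest neuron survives with a positive rate, which is exactly the sparse, high-contrast coding described above.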
The lateral-inhibition mechanism is a key feature that makes this artificial neural network model more biologically plausible and potentially more powerful than traditional networks that lack this local regulation.
The network learns through a process called predictive plasticity, where neurons try to predict their own future activity.
This approach might solve some problems with how we usually train AI, making it more similar to how the real brain might learn.
The researchers ran an experiment to demonstrate that their method works for tasks like recognizing handwritten digits.
They are exploring how to make the model even more brain-like, using concepts from spiking neural networks.
In essence, it's an attempt to bridge the gap between AI and neuroscience, potentially leading to more brain-like AI.