1. Traditional Artificial Neurons:
In all of my other notes, wherever I've referred to neurons, those are traditional artificial neurons (e.g., in feedforward neural networks).
These are highly simplified models that don't capture the time-dependent nature of biological neurons.
They typically use a weighted sum of inputs, passed through an activation function like sigmoid or ReLU.
The output is usually a continuous value, e.g., between 0 & 1 (sigmoid) or -1 & 1 (tanh).
They don't have a concept of spiking or discrete firing events.
Time is not explicitly modeled; they process all inputs simultaneously.
The traditional neuron simply adds up inputs and produces a number.
Think of it like a simple scale. You put weights on one side (inputs), and it immediately shows a number on the other side (output).
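A minimal sketch of such a neuron in Python (the weights, bias & inputs below are made-up illustrative values, not from any particular library):

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def traditional_neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through an activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

# Three inputs, arbitrary weights -- the output is a single continuous
# value; there is no notion of time or of discrete spikes.
print(traditional_neuron([0.5, 1.0, -0.2], [0.8, -0.3, 0.5], bias=0.1))
```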
2. Integrate & Fire Neurons:
It's a simplified model of how neurons (brain cells) work. These are more biologically inspired models that capture some key temporal dynamics of real neurons. Here, temporal dynamics refers to how neurons behave & process information over time.
They integrate (sum up) inputs over time.
They introduce the concept of a membrane potential that changes dynamically.
They produce discrete spikes (action potentials) when a threshold is reached.
They have a reset mechanism after firing.
Time is explicitly modeled, allowing for the study of temporal coding in neural networks.
(Figure: the neuron drawn as a circle containing a graph of a line rising over time; when the line reaches a certain height (the threshold), an arrow with a lightning-bolt symbol shoots out, representing a spike.)
The IF neuron accumulates input over time until it reaches a threshold, then "fires" a spike.
Imagine a bucket being filled with water droplets (inputs over time). When the water reaches the top (threshold), the bucket tips over (fires a spike), empties, and starts filling again.
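A minimal sketch of the bucket analogy in code, assuming a constant threshold & a reset-to-zero after each spike (all values illustrative):

```python
def integrate_and_fire(input_currents, threshold=1.0):
    """Accumulate input over discrete time steps; emit a spike (1)
    and reset when the potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential += current        # integrate: fill the bucket
        if potential >= threshold:  # bucket full
            spikes.append(1)        # fire a spike
            potential = 0.0         # reset: empty the bucket
        else:
            spikes.append(0)
    return spikes

# A steady drip of input yields a regular spike train:
print(integrate_and_fire([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```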
Key Differences:
Temporal Dynamics: IF-neurons model how a neuron's state changes over time, while traditional artificial neurons don't have this temporal component.
Discrete vs Continuous Output: IF-neurons output discrete spikes, while traditional neurons output continuous values.
Discrete:
Distinct, separate values.
Have clear gaps between values
Often represented by integers
Examples:
Number of people in a room (e.g., 25, 40)
Days of the week (7)
Shoe sizes (e.g., 7, 7.5, 8)
Number of cars in a parking lot (e.g., 50)
Number of action potentials (spikes) in a given time window
Number of synapses between two neurons
Number of neurons in a brain region
Continuous:
Deal with smoothly varying quantities.
They are uncountable.
No gaps between possible values
Often represented by real numbers
Examples:
Height of a person (e.g., 5.7643... feet)
Weight of an object (e.g., 3.14159... kg)
Temperature (e.g., 22.5... °C)
Membrane potential of a neuron (before reaching threshold)
Strength of synaptic connections (synaptic weight)
Blood flow in brain tissue
Biological Realism: Integrate & fire (IF) neurons capture more biological features like membrane potential, threshold firing & refractory periods.
Information coding: IF-neurons can encode information in both spike rate & spike timing, while traditional neurons only use their activation level (a toy rate-coding sketch follows after this list).
Complexity: IF-neurons are generally more complex to simulate & train in networks compared to traditional artificial neurons.
Application: Traditional neurons are widely used in deep learning for tasks like image recognition, while IF-neurons are more common in computational neuroscience & neuromorphic computing.
Learning mechanisms: The learning mechanisms of IF-neuron networks differ from those of traditional neural networks due to the discrete nature of spiking.
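A toy illustration of rate coding (a minimal sketch; the spike train & bin width are made up for the example):

```python
def spike_rate(spike_train, dt=0.001):
    """Firing rate (Hz) = number of spikes / duration of the window."""
    duration = len(spike_train) * dt  # window length in seconds
    return sum(spike_train) / duration

# 1 s of activity sampled every 1 ms, with 5 spikes -> 5 Hz.
train = [0] * 1000
for t in (100, 300, 500, 700, 900):
    train[t] = 1
print(spike_rate(train))  # 5.0
```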
In conclusion, Integrate & Fire neurons are a step towards more biologically plausible neural network models, bridging the gap between abstract artificial neurons & the complex behaviour of real biological neurons.
They allow us to study aspects of neural computation that are difficult or impossible to capture with traditional artificial neurons, especially regarding timing & discrete event processing.
3. Leaky Integrate and Fire (LIF) Neurons:
RC circuits are fundamental to understanding leaky integrate-and-fire neurons. They consist of a resistor (R) and a capacitor (C) connected in parallel.
In the context of neurons:
The capacitor represents the neuron's membrane capacitance.
The resistor represents the leak in the membrane, allowing charge to dissipate over time.
This makes LIF neurons more biologically realistic and mathematically interesting.
The basic equation: τ·(dV/dt) = -V + R·I
This equation describes how the membrane voltage (V) changes over time (t) based on the input current (I) & the circuit's properties.
Capacitive Time Constant:
The time constant τ = RC is crucial in neuron models. It determines how quickly the neuron responds to inputs:
A larger τ means the neuron integrates inputs over a longer time.
A smaller τ means the neuron responds more quickly to changes in input.
This is important for understanding how neurons process information over time.
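For example (illustrative textbook-style values): with R = 100 MΩ & C = 100 pF, τ = RC = 10⁸ Ω × 10⁻¹⁰ F = 10⁻² s = 10 ms, so the neuron effectively integrates its inputs over a window of roughly 10 ms.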
Leaky neurons are more realistic than basic integrate and fire neurons because they include a leak term.
In real neurons, charge gradually leaks out through the cell membrane.
This leak is modeled by the resistor in the RC circuit.
It makes the neuron's response more dynamic & prevents indefinite charge accumulation.
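A minimal LIF sketch using simple Euler integration of τ·(dV/dt) = -V + R·I (the parameter values are illustrative, not from any particular paper):

```python
def lif_neuron(input_current, tau=10.0, R=1.0, threshold=1.0,
               v_reset=0.0, dt=0.1, t_max=100.0):
    """Euler integration of tau * dV/dt = -V + R*I with threshold & reset.
    Times are in ms; returns the spike times."""
    v = v_reset
    spike_times = []
    for step in range(round(t_max / dt)):
        t = step * dt
        # The leak (-v) pulls V back toward rest; the input pushes it up.
        dv = (-v + R * input_current) / tau
        v += dv * dt
        if v >= threshold:
            spike_times.append(round(t, 1))
            v = v_reset  # reset after firing
    return spike_times

# With constant input where R*I > threshold, the neuron fires regularly;
# a larger tau slows the climb between spikes and lowers the firing rate.
print(lif_neuron(input_current=1.5))
```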
In practice, when neuroscientists & engineers implement artificial spiking neurons, they almost always use the leaky version (LIF) because it's more realistic & has better computational properties. The basic IF neurons are mostly used in theoretical studies or as a stepping stone to understanding more complex models.
#neurons #integrateandfireneurons #discreteVsContinous #deeplearning #leakyneurons #lif