Auto Encoders | Unsupervised Learning Model
What are Auto Encoders?
An autoencoder is a directed neural network that learns to encode its own input: it takes some input, passes it through hidden layers, and produces an output, and it aims for the output to be identical to the input.
Autoencoders are not a pure type of unsupervised learning algorithm; they are better described as self-supervised, because the training target is derived from the input itself.
During training we compare the reconstructed values against the original inputs, so autoencoders sit on the verge between supervised and unsupervised learning: inputs get encoded, then decoded, and the reconstructions are compared with the inputs throughout training.
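To make that concrete, here is a minimal sketch in PyTorch (the framework choice, layer sizes, and toy data are my own illustration, not something fixed by the article): a 4-2-4 autoencoder trained so that its output reconstructs its input.

import torch
import torch.nn as nn

# Encoder compresses 4 inputs down to 2 hidden units; decoder expands back to 4.
encoder = nn.Sequential(nn.Linear(4, 2), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(2, 4), nn.Sigmoid())
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.rand(64, 4)  # toy data in [0, 1]; stands in for real inputs

for step in range(1000):
    recon = decoder(encoder(x))   # encode, then decode
    loss = loss_fn(recon, x)      # the target is the input itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()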
What are Auto Encoders used for?
How are biases represented?
How to Train an Auto Encoder:
Steps Summary:
Input Data Setup:
Overcomplete hidden layers are an underlying concept in most of the variations of autoencoders.
In the standard setup we might have 4 input nodes, 2 hidden nodes, and 4 output nodes, so the hidden layer acts as a bottleneck that forces compression.
What if we wanted the hidden layer to have more nodes than the input layer, for example the 4 inputs feeding a larger hidden layer?
Having a good number of hidden nodes would allow us to extract more features.
But we have a problem: with at least as many hidden nodes as inputs, the network can learn to simply pass each input value straight through to the output. That identity mapping could be the end state of this model, and such a model is just going to be useless, because it extracts no new information for us.
Sparse AutoEncoders:
Sparse autoencoders are used everywhere; the term is sometimes even used interchangeably with "autoencoder". A sparse autoencoder is an autoencoder whose hidden layer is larger than the input layer, but with a regularization technique applied that introduces sparsity.
Without that constraint, the information would just fly straight through the network, and the model would overfit by memorizing its inputs.
The technique puts a constraint, a penalty, on the loss function, which does not allow the autoencoder to use all of its hidden layer every single time. At any given time, the autoencoder can only use a certain number of nodes from its hidden layer.
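One common way to implement that penalty is to add the mean absolute value of the hidden activations to the reconstruction loss, a sketch of which follows (the L1 form and the weight 1e-3 are my illustrative choices; KL-divergence-based sparsity penalties are also common):

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid())  # overcomplete: 8 > 4
decoder = nn.Sequential(nn.Linear(8, 4), nn.Sigmoid())
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

x = torch.rand(64, 4)
sparsity_weight = 1e-3  # hypothetical value; tuned in practice

for step in range(1000):
    h = encoder(x)
    recon = decoder(h)
    # The L1 term pushes most hidden activations toward zero, so only a
    # few hidden nodes are "active" for any given input.
    loss = nn.functional.mse_loss(recon, x) + sparsity_weight * h.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()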
Denoising Auto Encoders:
Denoising is another regularization technique for combating the same problem: when we have more nodes in the hidden layer than in the input layer, the autoencoder can simply copy the values over without finding any meaning in them.
We take our input and randomly set some of its values to 0. Once we put this corrupted data through the autoencoder, we compare the outputs with the original values, not with the modified inputs.
Because the corruption is random, this is a stochastic type of autoencoder.
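Here is a sketch of that corruption-and-compare procedure (the ~30% masking rate is my illustrative assumption):

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(8, 4), nn.Sigmoid())
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

x = torch.rand(64, 4)

for step in range(1000):
    # Randomly zero out ~30% of the input values (fresh mask each step).
    mask = (torch.rand_like(x) > 0.3).float()
    noisy = x * mask
    recon = decoder(encoder(noisy))
    # Crucially, the loss compares against the ORIGINAL x, not `noisy`.
    loss = nn.functional.mse_loss(recon, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()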
Contractive Autoencoders:
Contractive autoencoders are another regularization technique, like sparse and denoising autoencoders. They leverage the training process itself: a penalty is added to the loss function (specifically, a penalty on how sensitive the hidden activations are to small changes in the input, the Frobenius norm of their Jacobian), and as it propagates back through the network it simply does not allow the autoencoder to copy the values straight across.
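For a single sigmoid encoder layer the Jacobian penalty has a simple closed form; here is a sketch under that assumption (the penalty weight lam is an illustrative value):

import torch
import torch.nn as nn

enc = nn.Linear(4, 8)
dec = nn.Linear(8, 4)
optimizer = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                             lr=1e-2)
x = torch.rand(64, 4)
lam = 1e-3  # hypothetical penalty weight

for step in range(1000):
    h = torch.sigmoid(enc(x))
    recon = torch.sigmoid(dec(h))
    # For sigmoid units, dh_j/dx_i = h_j * (1 - h_j) * W_ji, so the squared
    # Frobenius norm of the Jacobian factors into the expression below.
    w2 = (enc.weight ** 2).sum(dim=1)                      # shape: (hidden,)
    jacobian_pen = (((h * (1 - h)) ** 2) * w2).sum(dim=1).mean()
    loss = nn.functional.mse_loss(recon, x) + lam * jacobian_pen
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()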
What are Stacked AutoEncoders?
If we add two hidden layers to our autoencoder, we get two stages of encoding and one stage of decoding. This is a very powerful algorithm: these directed models can supersede the results achieved by deep belief networks (undirected networks), which was a very important breakthrough.
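A sketch of such a stack (the 4-8-2-4 layer sizes are illustrative): two encoding stages followed by a single decoding stage, trained exactly like the basic autoencoder above.

import torch.nn as nn

# Two stages of encoding (4 -> 8 -> 2), one stage of decoding (2 -> 4).
stacked = nn.Sequential(
    nn.Linear(4, 8), nn.Sigmoid(),   # encoding stage 1
    nn.Linear(8, 2), nn.Sigmoid(),   # encoding stage 2
    nn.Linear(2, 4), nn.Sigmoid(),   # decoding stage
)
# Trained by minimizing the reconstruction error between stacked(x) and x.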
Deep autoencoders:
Deep autoencoders are not the same thing as stacked autoencoders.
These are RBMs (restricted Boltzmann machines) that are stacked, pre-trained layer by layer, unrolled, and then fine-tuned with backpropagation; it is the unrolling step that gives you the directionality. Deep autoencoders come from RBMs.
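Here is a sketch of just the unrolling step, assuming the layer-by-layer RBM pre-training has already produced the weight matrices (the helper name unroll, the random stand-in weights, and the 784-400-100 sizes are my own illustration):

import torch
import torch.nn as nn

def unroll(rbm_weights):
    """Build a directed autoencoder from layer-wise pre-trained RBM weights.

    rbm_weights: list of (hidden x visible) tensors, one per stacked RBM.
    """
    layers = []
    # Encoder: apply each RBM's weights bottom-up.
    for W in rbm_weights:
        lin = nn.Linear(W.shape[1], W.shape[0])
        lin.weight.data = W.clone()
        layers += [lin, nn.Sigmoid()]
    # Decoder: transposed weights in reverse order -- the "unrolling".
    for W in reversed(rbm_weights):
        lin = nn.Linear(W.shape[0], W.shape[1])
        lin.weight.data = W.t().clone()
        layers += [lin, nn.Sigmoid()]
    return nn.Sequential(*layers)

# Stand-ins for pre-trained weights of a 784-400-100 stack:
net = unroll([torch.randn(400, 784) * 0.01, torch.randn(100, 400) * 0.01])
# `net` is then fine-tuned end-to-end with backpropagation, as above.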