Written by Prashant Basnet
<section class="bg-white dark:bg-gray-900 px-4 py-8 max-w-2xl mx-auto text-gray-800 dark:text-gray-200">
<h1 class="text-2xl sm:text-3xl font-signature italic font-semibold text-center mb-4">
👋 Welcome! You've Landed on My Signature Page
</h1>
<p class="text-base sm:text-lg mb-4">
Hey, I'm <strong class="text-black dark:text-white">Prashant Basnet</strong>, a software development engineer at
<a href="https://unisala.com" class="text-indigo-600 dark:text-indigo-400 underline hover:no-underline" target="_blank" rel="noopener noreferrer">
Unisala.com
</a>.
</p>
<p class="text-base sm:text-lg mb-6">
You're viewing my <strong>Signature</strong>, a digital space where I share what I'm learning, building, and reflecting on, all in one place.
</p>
<div class="border-l-4 border-indigo-400 dark:border-indigo-500 pl-4 italic mb-6 text-sm sm:text-base text-gray-700 dark:text-gray-400">
📌 Found this page via LinkedIn, my personal site, or a shared link?
<br />
This isn't a traditional portfolio. It's my public digital notebook where I document useful ideas, experiments, and lessons I've learned as I build.
</div>
<h2 class="text-lg font-semibold mb-2">What You'll Find Here:</h2>
<ul class="list-disc list-inside space-y-1 text-sm sm:text-base">
<li>✍️ Thoughts on algorithms, systems, and software design</li>
<li>🧠 Insights from building at Unisala</li>
<li>🔗 Direct links to everything I've published on Unisala</li>
</ul>
</section>
Input layer -> supplied to the 1st RBM -> its hidden activations become the inputs to the 2nd RBM -> whose activations in turn are the inputs to the 3rd RBM.
We also need to make sure directionality is in place for all the layers except the top two, which keep their undirected connections.
Greedy layer-wise training:
You train this network one layer at a time, treating each layer as an RBM.
The wake-sleep algorithm is basically: you train all the way up, then you train all the way down. The upward pass is the "wake" phase; the downward pass back through the network is the "sleep" phase.
You stack your RBMs and train them bottom-up; once you've got the weights, you make sure these connections only work downwards.
A deep belief network is a stack of RBMs; after training, all the layers except the top two become directed (top-down) layers.
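The greedy layer-wise procedure above can be sketched with scikit-learn's `BernoulliRBM`: each RBM is trained on the hidden activations of the one below it. The layer sizes and hyperparameters here are illustrative assumptions, not values from the text, and this only covers the unsupervised stacking step (not the wake-sleep fine-tuning).

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Toy binary data: 200 samples, 32 visible units (illustrative only)
rng = np.random.default_rng(0)
X = (rng.random((200, 32)) > 0.5).astype(float)

# Greedy layer-wise training: train the 1st RBM on the input,
# then feed its hidden activations as "visible" data to the 2nd RBM, etc.
layer_sizes = [24, 16, 8]  # assumed hidden-layer widths
rbms, data = [], X
for n_hidden in layer_sizes:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=10, random_state=0)
    rbm.fit(data)               # train this layer as a standalone RBM
    data = rbm.transform(data)  # hidden activations feed the next layer
    rbms.append(rbm)

# Each weight matrix maps the layer below to the layer above
print([r.components_.shape for r in rbms])
# -> [(24, 32), (16, 24), (8, 16)]
```

After this stacking step, a DBN would treat all but the top two layers' weights as directed, top-down generative connections.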
Deep Boltzmann Machine: we don't deprive the network of the undirectedness of its connections; every layer keeps undirected connections.