Written by Prashant Basnet
👋 Welcome to my Signature, a space between logic and curiosity.
I’m a Software Development Engineer who loves turning ideas into systems that work beautifully.
This space captures the process: the bugs, breakthroughs, and “aha” moments that keep me building.
Our newsfeed was taking 4 seconds to load. We added a cache. Now it takes 40 milliseconds.
Here's what happened.
The Problem
We ran a load test with just 50 concurrent users:
Every user action triggered heavy DB aggregations that we were computing repeatedly.
We were recalculating the exact same feed over and over. User opens the app, we hit the database. They refresh, we hit it again. Nothing changed, but we did all the work anyway.
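The fix for that waste is the classic get-or-compute (read-through) pattern: check the cache first, and only hit the database on a miss. A minimal sketch, with a hypothetical `computeFeedFromDb` standing in for the heavy aggregation (not Unisala's actual code):

```javascript
// Read-through cache sketch: return the cached feed if present,
// otherwise compute it once and store the result for later requests.
const feedCache = new Map();

async function getFeed(userId, computeFeedFromDb) {
  if (feedCache.has(userId)) {
    return feedCache.get(userId); // cache hit: no DB work at all
  }
  // Cache miss: do the expensive aggregation exactly once.
  const feed = await computeFeedFromDb(userId);
  feedCache.set(userId, feed);
  return feed;
}
```

With this in place, the refresh-after-refresh traffic described above stops reaching the database entirely, as long as the cached entry is still valid.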
The Solution: A Simple Cache
We built an in-memory LRU cache with a strategy pattern for eviction:
How it works:
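A minimal sketch of that design, with illustrative names (`Cache`, `LruStrategy`) rather than Unisala's actual classes: the cache delegates the "who gets evicted" decision to a strategy object, so the eviction policy can be swapped without touching the cache itself.

```javascript
// An eviction strategy decides which key to drop when the cache is full.
// LRU here is tracked with a Set: insertion order doubles as recency order.
class LruStrategy {
  constructor() { this.order = new Set(); }
  touch(key) { this.order.delete(key); this.order.add(key); } // mark as most recent
  evictKey() {
    // The least recently used key is the first one in iteration order.
    const key = this.order.values().next().value;
    this.order.delete(key);
    return key;
  }
}

class Cache {
  constructor(capacity, strategy) {
    this.capacity = capacity;
    this.strategy = strategy;
    this.store = new Map();
  }
  get(key) {
    if (!this.store.has(key)) return undefined;
    this.strategy.touch(key); // a read counts as "recently used"
    return this.store.get(key);
  }
  set(key, value) {
    if (!this.store.has(key) && this.store.size >= this.capacity) {
      this.store.delete(this.strategy.evictKey()); // make room first
    }
    this.store.set(key, value);
    this.strategy.touch(key);
  }
}
```

The strategy pattern pays off when requirements change: an LFU or TTL-based strategy can be dropped in behind the same `Cache` interface with no changes to the callers.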
The Results
Same load test, same 50 concurrent users:
Core Web Vitals Impact (Measured via Lighthouse)
Before caching: ~11.4 s
After caching: ~3.1 s
Improvement: ~72% faster
A big part of this optimisation was surprisingly rooted in algorithm practice.
While grinding LeetCode, I spent a lot of time implementing:
Those same principles became the backbone of Unisala’s caching layer:
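One staple of that practice is the LRU-cache exercise (LeetCode 146), which, assuming it was among those problems, maps almost directly onto the production cache above. In JavaScript the whole thing fits in a `Map`, because `Map` iterates keys in insertion order: deleting and re-inserting a key moves it to the "most recent" end in O(1).

```javascript
// Minimal LeetCode-146-style LRU cache using Map's insertion order.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return -1;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  put(key, value) {
    if (this.map.has(key)) {
      this.map.delete(key);    // updating a key also refreshes its recency
    } else if (this.map.size >= this.capacity) {
      // First key in iteration order is the least recently used: evict it.
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, value);
  }
}
```

The interview version and the production version differ mainly in packaging, not in principle: same recency bookkeeping, same O(1) get/put.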
#Engineering #Performance #Backend #NodeJS #TechThread