Making Caching Efficient
How to make the most out of caching
As I explained in the previous article in this series, caching shines when your data doesn't change very often. But how do you make your caching efficient?
Caching Design Patterns
You have a cache, yes, but how do you get data into it? Caching design patterns are the strategies you use to populate your cache. Each pattern has its advantages and disadvantages, so which one you choose depends on what your app or system requires. Let's dive in!
Cache Aside/Lazy Loading: In this design pattern, the app checks the cache first. On a cache hit, the cached result is returned directly. On a cache miss, the app queries the primary database, updates the cache with the result, and then returns it.
This is pretty simple to implement, and only data that is actually requested ends up in the cache. Should the cache fail, it isn't fatal; it just means increased latency until the cache is populated again.
But there are downsides: every cache miss costs three extra trips, one to the primary database, one back to the cache to store the result, and one back to the app. Cached data can also go stale, and it's the developer's responsibility to invalidate it (more on this later).
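The cache-aside read path can be sketched in a few lines. This is a minimal illustration, assuming a dictionary-like `cache` and a `db` mapping standing in for the primary database (both hypothetical stand-ins, not a specific library):

```python
def get_user(user_id, cache, db):
    """Cache-aside read: check the cache first, fall back to the database on a miss."""
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]   # cache hit: return straight from the cache
    user = db[user_id]      # cache miss: go to the primary database
    cache[key] = user       # populate the cache before returning
    return user
```

Note that the application code, not the cache, is responsible for keeping the two stores in sync, which is exactly where stale data can creep in.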
Write Through: In this design pattern, the cache is updated immediately whenever a write is made to the database. This reduces the number of trips needed to get data into the cache (two, as opposed to three with lazy loading), and data in the cache is never stale.
But there are downsides here too: cache churn is a real possibility (the cache fills with data that will never be read), and a piece of data may never make it into the cache until a write for it hits the database.
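The write path is even simpler than cache-aside. A minimal sketch, again assuming hypothetical dictionary-like `cache` and `db` stand-ins:

```python
def save_user(user_id, user, cache, db):
    """Write-through: write to the database, then mirror the write into the cache."""
    db[user_id] = user                  # 1. write to the primary database
    cache[f"user:{user_id}"] = user     # 2. immediately update the cache
```

Because every write lands in both stores, reads never see stale data, but every written record occupies cache space whether or not anyone ever reads it.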
Cache Eviction
A cache is designed to be small so that lookups are fast and efficient. But because of its small size, the cache is bound to fill up. It's our responsibility as developers to decide how data gets removed from the cache, whether the cache is full or not. This is called cache eviction. There are 3 main ways of implementing cache eviction, which are:
Explicit Deletion: This means explicitly removing a certain piece of data from the cache. This can be used whether the cache is full or not.
Removing the LRU data: LRU stands for Least Recently Used. This means evicting the data that was accessed the longest time ago. This strategy kicks in when the cache is full, and it can be implemented by tracking when each piece of data was last retrieved.
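In Python, LRU eviction falls out almost for free from `collections.OrderedDict`, which remembers insertion order and lets you move an entry to the end on each access. A sketch (the class name and capacity are illustrative choices, not a standard API):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used entry once the cache exceeds its capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None                      # cache miss
        self._store.move_to_end(key)         # mark as most recently used
        return self._store[key]

    def set(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # drop the least recently used entry
```

Production caches like Redis offer LRU (and approximations of it) as a built-in eviction policy, so you rarely implement this by hand, but the mechanism is the same.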
Using a TTL: TTL means Time-To-Live. Every piece of data in the cache lives only for a set amount of time, say 60 minutes. Once that time elapses, the data is removed from the cache. Adding TTLs to your cached data is very useful for combating cache churn in the write-through pattern, since it stops the cache from holding on to stale data, and it works whether the cache is full or not.
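One common way to implement TTLs is to store an expiry timestamp next to each value and treat expired entries as misses, evicting them lazily on read. A sketch under those assumptions (the class name is hypothetical):

```python
import time

class TTLCache:
    """Each entry lives for ttl_seconds; expired entries count as cache misses."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict data whose time has elapsed
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```

Caches like Redis support this natively (each key can be given an expiry when it's set), so in practice you usually just pass a TTL along with your writes.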
Lastly, if your cache evictions happen too frequently, you might want to consider scaling up or scaling out your cache. Scaling up means increasing the cache size while scaling out means adding more cache nodes to your system.
If you enjoyed this, don't forget to leave a like or a comment, and you can connect with me on LinkedIn or Twitter. Thanks for reading and I'll see you in the next one!