The Dark Side of Caching: Don't Hurt Your Laravel App

Published on Feb 19, 2025

Caching can be a highly beneficial tool in your belt for improving your customers' user experience: by storing frequently accessed data in memory, it reduces the need for expensive, repetitive computations or API calls.

This approach to caching usually improves response times, reduces server load, and enhances the scalability of our applications. However, this perception can lead to blind reliance on caching without fully considering its trade-offs. In some cases, caching creates more problems than it solves, especially when applied indiscriminately without analyzing whether a performance bottleneck actually exists.

Potential Issues From Over-Reliance on Caching

1️⃣ Stale Data and Invalidation Issues

Imagine an e-commerce store caches product details, including stock levels, for 30 minutes to improve performance. When a user visits a product page, they see "In Stock," but the last available unit was purchased 5 minutes ago. Since the cache hasn't been invalidated, the next customer checks out but receives an error when the system queries the database and finds the item is out of stock.

This happens because cache invalidation is complex. If stock updates aren’t handled correctly, race conditions can lead to overselling, and stale cache across multiple servers can cause inconsistencies. Event-driven invalidation (e.g., listening for orders, cancellations, and stock updates) is essential but tricky to implement without missing edge cases. A better approach is using short-lived caching, atomic Redis updates, and centralized cache broadcasting to ensure real-time accuracy while maintaining performance.
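
As a minimal sketch of the short-lived approach (the stock column and cache keys here are hypothetical), stock reads tolerate only a few seconds of staleness, and writes refresh the cache immediately:

use App\Models\Product;
use Illuminate\Support\Facades\Cache;

// Read path: a short TTL bounds how stale the stock figure can ever be
$stock = Cache::remember('product_stock_' . $productId, now()->addSeconds(10), function () use ($productId) {
    return Product::findOrFail($productId)->stock;
});

// Write path (e.g. in an order event listener): refresh the cache as soon as stock changes
Cache::put('product_stock_' . $product->id, $product->stock, now()->addSeconds(10));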

2️⃣ Cache Stampede

Imagine our e-commerce store is running a flash sale of items at the tail end of a season (think post-Christmas sale type of thing). Our product listing page for the sale is heavily cached to handle traffic spikes efficiently. However, as the sale goes live, thousands of users flood the site, all triggering a request for the same set of products. If the cache expires at this moment, all these requests will bypass the cache and hit the database at once, overwhelming it and slowing down the site, or even causing downtime. This is what's known as the Thundering Herd Problem, or a "cache stampede".

3️⃣ Increased Memory Usage 💾

Let's say our trusty e-commerce store has a massive product catalog and a large user base, much like Amazon. In an attempt to boost performance, we decide to cache not just stock numbers but entire product details (descriptions, reviews, images, etc.). While this may provide fast response times for users, this caching strategy becomes problematic. Let's take a look at why:
🖇️ It creates duplication
A lot of this data already exists in the database, so storing it in-memory is duplicating what is already available to us. Not only does this increase our memory usage, but it also puts unnecessary strain on our cache to always be in sync with the primary data store.
💰Increased costs
With cloud infrastructure like AWS or DigitalOcean, we pay for memory usage, so each byte of cached data increases our bill. As the cache and the business grow, so does the reliance on the cache, and subsequently our infrastructure bill!
📉Performance degradation
Large caches lead to frequent evictions as the cache approaches its maximum allowed memory size. Realistically, in production you should have a sensible eviction policy configured for your cache based on what it's used for. Here's an example table showing the various eviction policies and ideal usage scenarios:

Use Case                                      | Recommended Policy
General-purpose caching                       | allkeys-lru
Session storage                               | volatile-lru
API rate limiting                             | volatile-ttl
Background job queues (e.g., Laravel Horizon) | noeviction
Financial/critical data                       | noeviction
Randomized eviction (low-priority cache)      | allkeys-random

For Laravel apps with cache-heavy workloads:
✅ allkeys-lru (best for general-purpose caching)
✅ volatile-lru (if most cache keys have TTL)

For rate limiting & transient storage:
✅ volatile-ttl

For queue systems (Horizon, Sidekiq, Resque):
🚨 Use noeviction (Redis should not delete queue jobs unexpectedly).
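
The eviction policy itself is configured on the Redis server rather than in Laravel. A minimal sketch, with an illustrative memory limit:

# redis.conf: cap memory and evict the least recently used key across all keys
maxmemory 256mb
maxmemory-policy allkeys-lru

# Or apply and verify at runtime
redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG GET maxmemory-policy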

4️⃣ Security and Data Leakage

For repeat customers, say we cached the following things to speed up our checkout experience and improve conversion:

  • Customer details (name, email, postage address)
  • Cart information (products, quantities, promotion codes)
  • Previous payment information (card details, billing address)
Storing this data in a cache without sufficient authorisation and encryption techniques can risk exposing Personally Identifiable Information (PII) to unauthorised users. Even if the only thing we cached were coupons or promotional codes, we could potentially be giving unauthorised access to private offers (such as employee discounts).
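
As a minimal sketch of one mitigation (the key name and payload here are hypothetical), PII can be encrypted before it ever reaches the cache, with the key scoped to the authenticated user:

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Crypt;

// Encrypt checkout details and scope the cache key to the user
Cache::put(
    'checkout_' . $user->id,
    Crypt::encryptString(json_encode($checkoutDetails)),
    now()->addMinutes(15) // short TTL: sensitive data shouldn't linger
);

// Decrypt on read; a cache miss simply falls back to the database
if ($payload = Cache::get('checkout_' . $user->id)) {
    $checkoutDetails = json_decode(Crypt::decryptString($payload), true);
}

Raw card details, in particular, are best kept out of the cache entirely and left with your payment provider.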

5️⃣ Data Loss on Cache Eviction

If a user’s shopping cart or session data is cached but not backed up in the database, clearing or expiring the cache could result in the loss of that data. Since caching is typically used to improve performance, relying on it alone for essential data can cause problems when the cache is evicted or expires unexpectedly.

Typically, unexpected data loss occurs because of an incorrect cache configuration: setting the wrong expiration for items stored in the cache, or setting the wrong eviction policy in Redis, for example.

If the cache for our user’s cart gets evicted before they have a chance to complete their purchase, and we hadn't persisted it to our database, they might lose all the items they added and have to restart their shop.
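
A minimal sketch of the safer pattern (the Cart model and its items column are assumptions): persist the cart to the database first and treat the cache purely as a fast read layer:

use App\Models\Cart;
use Illuminate\Support\Facades\Cache;

// Write path: the database is the source of truth, the cache is an accelerator
$cart->save();
Cache::put('cart_' . $cart->user_id, $cart->items, now()->addHours(2));

// Read path: an evicted entry is now an inconvenience, not data loss
$items = Cache::remember('cart_' . $userId, now()->addHours(2), function () use ($userId) {
    return Cart::where('user_id', $userId)->firstOrFail()->items;
});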

Best Practices for Smart Caching

✅ Keep Storage to a Minimum

To help improve cache performance, there are a number of prerequisites you can put in place before writing any caching code. For starters, make sure you're using the appropriate driver for the data you're expecting to cache. For large data sets, consider using Redis or Memcached instead of file or database-based caching:

'default' => env('CACHE_DRIVER', 'redis'),

Only store the necessary data: smaller payloads reduce the overall memory usage of your cache and save you time. Rather than storing the entire product model, which can include images and reviews on top of sale information, consider the following:

Offload image storage to a CDN and store the URL to these images in the product's metadata.

Load reviews via a relationship when viewing a single product, rather than loading reviews for all products. If you need a review summary (e.g. average rating), recalculate that value in a queued job whenever a new rating is received, store it directly on the product model, and update the cached data in that process, as sketched below.
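
A minimal sketch of that job (the reviews() relationship, average_rating column, and choice of essential fields are assumptions for illustration):

use App\Models\Product;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Cache;

final class UpdateProductRatingSummary implements ShouldQueue
{
    public function __construct(private int $productId) {}

    public function handle(): void
    {
        $product = Product::find($this->productId);

        if (! $product) {
            return;
        }

        // Store the aggregate on the model so full review lists never enter the cache
        $product->update(['average_rating' => $product->reviews()->avg('rating')]);

        // Refresh the cached entry with only the essential fields
        Cache::put(
            'product_' . $product->id,
            $product->only(['id', 'name', 'price', 'average_rating']),
            now()->addMinutes(30)
        );
    }
}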

✅ Strict Eviction Policies

To combat data going stale, ensure you're effectively evicting data when there's a change that affects the cache. Use event-driven cache invalidation techniques by reacting to events such as ProductUpdated, ProductCreated and ProductDeleted:

use App\Models\Product;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Cache;

final class ReloadProductCache implements ShouldQueue
{
    public function __construct(
        private int $productId,
        private int $ttl,
    ) {}

    public function handle(): void
    {
        $product = Product::find($this->productId);

        // Nothing to cache if the product no longer exists
        if (! $product) {
            return;
        }

        $productData = $product->essentialDataAsArray();

        Cache::tags(['products'])->put('product_' . $this->productId, $productData, $this->ttl);
    }
}

// Elsewhere, in the update flow: a listener for this event dispatches the job above
event(new ProductUpdated($product)); // Clear and reload the relevant cache

✅ Background cache warming

Background cache warming is a caching strategy that pre-loads your cache with frequently accessed items. For example, if there was a Hot Deals page in our e-commerce store that was frequently accessed, we could utilise background cache warming to ensure that the latest deals are always cached, so we don't have to keep hitting the database when our users visit the page. We can do so via a queued job:

// WarmHotDealsJob

public function handle(): void
{
    // Fetch the first 50 products that are on sale
    $saleProducts = Product::query()
        ->where('sale_ends_at', '>', now())
        ->limit(50)
        ->get();

    foreach ($saleProducts as $product) {
        // Cache each product under the 'sale' tag until its sale ends
        Cache::tags(['sale'])->put('sale_product_' . $product->id, $product, $product->sale_ends_at);
    }
}

We can then dispatch that job in response to an event (for example, a new product being added to the sale) or we could have it run on a schedule.
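
A minimal sketch of the scheduled approach (the fifteen-minute cadence is arbitrary):

// app/Console/Kernel.php (or routes/console.php in Laravel 11+)
protected function schedule(Schedule $schedule): void
{
    // Re-warm the hot deals cache on a fixed cadence
    $schedule->job(new WarmHotDealsJob)->everyFifteenMinutes();
}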

If we have a large number of products in the sale, we can use chunking to ensure we're not loading a large number of products into our application's memory during the warming process:

Product::query()
    ->where('sale_ends_at', '>', now())
    ->chunk(100, function (Collection $products) {
        $products->each(function (Product $product) {
            Cache::tags(['sale'])->put('sale_product_' . $product->id, $product, $product->sale_ends_at);
        });
    });

✅ Locks and Conditional Caching

When large volumes of users need to access the same information from the cache, you can prevent cache stampedes by locking the cache entry. Cache locking prevents multiple processes from querying the database simultaneously for the same cache key. When one process is updating or accessing the data, the other requests will wait for it to complete, reducing the risk of multiple database hits. Here's how you'd implement it for accessing product information in our e-commerce example:

// Wait up to 5 seconds to acquire the lock, so only one process queries the database
$product = Cache::lock('product_' . $productId . '_lock', $ttl)->block(5, function () use ($productId) {
    return DB::table('products')->where('id', $productId)->first();
});

An alternative approach to cache locking is to conditionally cache data when it's not already present in the cache. In Laravel, this looks like using the Cache::remember method:

$data = Cache::remember('product_' . $productId, $ttl, function () use ($productId) {
    return DB::table('products')->where('id', $productId)->first();
});

✅ Use Tags for Fine-Grained Invalidations

In Laravel, we can leverage tagged cache stores to provide fine-grained control for invalidating cached results (note that tags require a driver that supports them, such as Redis or Memcached). For example, we can call Cache::tags to store a list of products by their category:

Cache::tags(['products', 'category_1'])->put('category_1_products', $products, now()->addMinutes(60));

We can then invalidate all products for a specific category when a product within that category is updated:

Cache::tags(['category_' . $product->category_id])->flush();

But we can also flush the cache for all products in every category:

Cache::tags(['products'])->flush();

Summary

Caching is a powerful tool for improving performance, but striking the right balance between speed and reliability is crucial. Over-reliance on caching can introduce significant issues, such as cache stampedes when multiple requests attempt to regenerate an expired cache simultaneously, leading to unnecessary load on the database. High memory usage and rising infrastructure costs can also become a problem when excessive or improperly managed caching leads to bloated storage. Stale data is another risk, especially when cache invalidation is not handled correctly, potentially causing outdated or incorrect information to persist longer than intended.

However, when implemented thoughtfully, caching can be a game-changer, reducing server load, lowering response times, and ultimately increasing conversion rates. Techniques like background cache warming ensure critical data is readily available without causing spikes in resource usage. Proper eviction strategies prevent unnecessary memory bloat while maintaining efficiency. Locks help mitigate stampedes by ensuring only one process regenerates a cache at a time, while conditional caching optimizes storage by only caching responses when beneficial. Tagging allows for fine-grained control over invalidation, making cache clearing more efficient and reducing the risk of serving stale data. By carefully balancing these strategies, developers can harness caching to deliver a faster, more cost-effective, and more reliable application without falling into the trap of excessive dependence.
