Is it possible to use Redis as the output cache for Sitecore 8.2-based websites instead of the standard in-memory cache? Can we also store the prefetch cache there too?

We have a big website and it takes a while to refill the cache of all sub-layouts. What we want to do is move caching out of the application/web server/IIS, so that a server going down would not cause unnecessary time spent refilling that cache.

  • The short answer is: yes, this is technically possible.
    All you need is to implement the BaseCacheManager contract (or override the DefaultCacheManager.GetHtmlCache API) and register the new implementation in the DI container.
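    A minimal sketch of that wiring, assuming Sitecore 8.2's IServicesConfigurator-based DI; RedisHtmlCache is a hypothetical class you would implement yourself on top of a Redis client, and the exact member signatures should be checked against your Sitecore assembly version:

    ```csharp
    // Hypothetical override: hand out a Redis-backed HtmlCache instead of
    // the in-proc one. RedisHtmlCache is not a real Sitecore type - it is
    // something you would implement against the HtmlCache contract.
    public class RedisCacheManager : Sitecore.Caching.DefaultCacheManager
    {
        public override Sitecore.Caching.HtmlCache GetHtmlCache(Sitecore.Sites.SiteContext site)
        {
            return new RedisHtmlCache(site);
        }
    }

    // Registered via a services configurator (Sitecore 8.2+ DI):
    public class RedisCacheConfigurator : Sitecore.DependencyInjection.IServicesConfigurator
    {
        public void Configure(Microsoft.Extensions.DependencyInjection.IServiceCollection services)
        {
            services.AddSingleton<Sitecore.Abstractions.BaseCacheManager, RedisCacheManager>();
        }
    }
    ```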

    There are a few global questions to think about:
    1. Who will be responsible for clearing it? Each server cleans its own cache, so using centralized storage would lead to multiple scavenges instead of one.

    Each server cleans its own cache as soon as it processes the PublishEndRemote event from the Event Queue.
    This event is raised after all other item-related events have been processed; in other words, the item caches contain 100% fresh data by then.
    Each server processes the EventQueue at its own speed, which depends heavily on how full its caches are.
    As a result, one server might clean its cache earlier than another.
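    For reference, a rough sketch of what happens on each server today when the publish:end:remote event is processed - every server clears only its own in-proc HTML caches (the site name here is illustrative; the real handler iterates the configured site list):

    ```csharp
    // Roughly what Sitecore's stock HtmlCacheClearer does per server
    // when it processes the publish:end:remote event.
    foreach (string siteName in new[] { "website" }) // configured per instance
    {
        Sitecore.Sites.SiteContext site = Sitecore.Configuration.Factory.GetSite(siteName);
        if (site != null)
        {
            Sitecore.Caching.CacheManager.GetHtmlCache(site)?.Clear();
        }
    }
    ```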

    Having one centralized storage might lead to this scenario:
    * Server A has finished cleaning all item caches and has cleared the HTML cache
    * Server B has not yet cleaned all item caches and needs to produce HTML markup; partially obsolete content will be used to produce it
    * Server A will use the results of Server B -> it shows partially stale content even though its own caches are fresh

    It will auto-recover as soon as Server B finishes event processing, but that might take a few minutes...

    2. What should be done if the HTML cache ends up holding a stale value? In other words, how do you completely scavenge the cache?
    3. The Redis C# driver is CPU-sensitive; high CPU usage can leave the driver unable to parse the received response in time -> timeout.
    4. Each Redis call costs time -> network access vs. memory access.
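    To make point 4 concrete: every cache lookup becomes a network round-trip. A hedged sketch assuming StackExchange.Redis as the client library (the host and the "html:..." key scheme are illustrative):

    ```csharp
    // An in-proc HtmlCache read is essentially a dictionary lookup
    // (nanoseconds); a Redis read is a network round-trip (typically
    // well under a millisecond on a LAN, but still orders of magnitude
    // slower), paid on every single sub-layout cache hit.
    using StackExchange.Redis;

    IDatabase db = ConnectionMultiplexer.Connect("redis-host:6379").GetDatabase();
    RedisValue markup = db.StringGet("html:/home/article-42");
    string html = markup.HasValue ? (string)markup : null; // null => render and store
    ```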

    To sum up - the idea of using Out-Of-Proc caching is good, but there are some challenges along the way.

    The default Sitecore HTML cache and its in-proc implementation should be able to handle a large volume of requests.

    I'd expect the main bottlenecks to be in other areas, so careful performance testing should help in your case.

    You can run a load test and capture 'mini' memory dumps every ~10 seconds to see what the threads are mostly doing.

  • In reply to NikolayMitikov_1581153661:

    Thanks a lot for your detailed answers and obviously further questions.

    Ignoring the updates to the respective servers, do you think it is technically possible to implement a Redis cache provider?

    1. Can we solve the race problem by adding detection/control while clearing the cache?
    1.1. A designated server could be given responsibility for clearing the cache, since clearing would be controlled through our own custom implementation and the cache would be shared among servers.
    1.2. It could work on a FIFO basis: the first request to clear the content is honored, and additional requests from other servers are ignored.
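    The "first clear wins" idea in 1.2 maps naturally onto an atomic Redis SET NX. A sketch assuming StackExchange.Redis (the key name and TTL are illustrative):

    ```csharp
    // Only the first server to acquire the lock performs the shared clear;
    // the others see acquired == false and skip it. The TTL guards against
    // a crashed server holding the lock forever.
    bool acquired = db.StringSet(
        "htmlcache:clear-lock",
        Environment.MachineName,
        expiry: TimeSpan.FromSeconds(30),
        when: When.NotExists);

    if (acquired)
    {
        // ... clear the shared Redis-backed HTML cache here ...
    }
    ```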

    2. When describing the "update problem" between multiple servers, you mentioned that one server can serve stale content due to syncing (which makes sense). However, if the content is mostly static articles and not time-sensitive, wouldn't that resolve itself anyway? (This problem could exist in a multi-server environment as well, where remote publish monitoring takes time, or in any multi-layer caching stack.)

    3. To avoid an unresponsive Redis server, we were thinking of a Redis Cluster, which should reduce timeouts. In the end, cache misses are always pushed to the origin servers, which are already serving a high volume.

    The current bottleneck for us is Sitecore's boot time when we have a server down. There are currently hundreds of sublayouts, and Sitecore takes time to load them all before that server becomes responsive.

    Therefore, we are wondering whether, by moving caching out of the web server, we can ensure that boot-up time is reduced. Obviously, if you can suggest some other solution, we can take that route.

    Is it possible to have a call with you to discuss this in detail?
  • In reply to Usman Bashir:

    Before reading my entire response, I suggest having a look at the following threads, which cover the same topic.

    I very much share Nikolay's sentiment. You would most likely end up creating more issues for yourself than you fix (as this is not a trivial task, as pointed out). Your proposed solution (Redis for the HTML cache) for handling concurrency has several issues:
    a) Clearing the Redis cache will lock your entire cache, causing all servers to wait, unless you add a timeout and try to serve either from a local file cache or bypass the cache altogether.
    b) A single cache-controller server becomes a single point of failure.
    c) For performance reasons, you may still want to cache Redis entries on the CD's local file system - not the entire cache, only per request; otherwise your servers will spend time downloading the entire Redis cache to local file storage during Sitecore warm-up.
    d) You will need to handle concurrency when populating the Redis cache during cache misses - this can become fairly expensive after any publish, as the Redis cache will be invalidated and every server serving requests will likely be subject to cache misses (no different from today's situation, except that today it happens in local storage, which is faster).
    e) It's probably not as cost-effective as using local storage.
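    Point d) is the classic cache-stampede problem. One common mitigation, sketched here with hypothetical TryGetFromRedis/StoreInRedis helpers, is a per-key gate so that on a miss only one request renders the markup while the rest wait and re-read:

    ```csharp
    // Per-key semaphores: on a miss, one request renders and stores the
    // markup; concurrent requests for the same key wait, then re-read,
    // instead of all hitting the renderer at once after a publish.
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Gates =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public string GetOrRender(string key, Func<string> render)
    {
        string cached = TryGetFromRedis(key); // hypothetical helper
        if (cached != null) return cached;

        SemaphoreSlim gate = Gates.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        gate.Wait();
        try
        {
            cached = TryGetFromRedis(key);    // re-check after acquiring
            if (cached != null) return cached;

            string markup = render();
            StoreInRedis(key, markup);        // hypothetical helper
            return markup;
        }
        finally
        {
            gate.Release();
        }
    }
    ```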

    There's a reason why the HTML cache is local to the delivery server - moving it to Redis will certainly impact its performance.

    If warming up your Sitecore instance is the concern, I'd rather suggest looking into your data caches - especially the prefetch cache - you may be loading far more than you need. The initial (Sitecore-shipped) settings are very generic, and you do need to optimize them for sites with large content trees - you may also want to tune them differently for CM and CD servers. I'd suggest checking the rest of the caches to see if you're getting too many misses due to insufficient cache sizes.

    You also mentioned hundreds of sublayouts - this should normally not be an issue, as the HTML cache is gradually populated, unlike the prefetch cache. If it does turn out to be an issue, one could look at the performance of those sublayouts.

    To sum up, before venturing into implementing Redis for the HTML cache (I'd leave that to Sitecore):
    1) Optimize your Sitecore data caches (use cache.aspx to check how they're performing)
    2) Use ASP.NET and Sitecore counters to monitor the performance of your CD, both when booting up and once it's warmed up.
    3) Check the performance of your disk - Disk Queue Length - during CD boot-up and once it's warmed up. Maybe you need to switch to faster disks (SSD)?
    4) Check the performance of your code and pages using pipelines.aspx and stats.aspx
    5) Consider using a CDN if you're able to cache so many sublayouts (depending on the granularity of your HTML cache settings)

    BTW: I believe the Sitecore data caches are plain in-process caches, not backed by the ASP.NET cache. That would make them harder to move out of process - again, the reason is speed.

  • There is currently a feature request in the Sitecore backlog to introduce distributed cache support based on Redis. Feel free to file a support ticket and ask to link it with feature #225170 to increase its visibility.