Caching Architectural Patterns in Microservices

Kumar Shivam
6 min read · Dec 20, 2020


Caching is a mechanism to enhance the performance of a system by avoiding round-trip calls (either to the DB or to external resources). It works on the principle that recently requested data is likely to be requested again. Following this principle, the primary reason to set up a cache outside of our DB is to reduce the load on the DB engine. Caching can increase responsiveness, performance, scalability and availability for an individual microservice. It reduces the latency and contention of handling large volumes of concurrent requests to a data store. We can implement a cache using an in-built approach or using external technology (e.g. Redis, Memcached, Apache Ignite, Ehcache, Hazelcast, Infinispan, Couchbase, Caffeine etc.).

Let’s explore the caching architectural patterns in microservices:

1) Embedded cache

The embedded cache always lives within the application.

Steps Involved:-

a) The request hits the load balancer

b) The load balancer forwards the request to the application service

c) The application receives the request and checks whether the same request was already executed

d) If yes, the cached value is returned; else the response is fetched from the data source

In Java, we can implement our own cache.

Ex:-

// backed by java.util.concurrent.ConcurrentHashMap
private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

private String processRequest(String request) {
    if (cache.containsKey(request)) {  // note: ConcurrentHashMap.contains() checks values, not keys
        return cache.get(request);
    }
    String response = process(request);
    cache.put(request, response);
    return response;
}

Many developers wouldn’t agree with this first approach to implementing a cache, for the reasons below:

· No statistics

· No built-in cache loader

· No eviction policies

· No expiration time

· No notification mechanism

· No max-size limit (risk of OutOfMemoryError)
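
Libraries such as Caffeine (mentioned earlier) cover all of these gaps. A minimal sketch, assuming a process(String) computation like the one above:

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.time.Duration;

public class CaffeineCacheExample {

    // stands in for the real computation, as in the snippet above
    private static String process(String request) {
        return "response-for-" + request;
    }

    public static void main(String[] args) {
        LoadingCache<String, String> cache = Caffeine.newBuilder()
                .maximumSize(10_000)                      // max-size limit: no OutOfMemoryError
                .expireAfterWrite(Duration.ofMinutes(10)) // expiration time
                .recordStats()                            // statistics, via cache.stats()
                .removalListener((key, value, cause) ->   // notification mechanism
                        System.out.println("Removed " + key + ": " + cause))
                .build(CaffeineCacheExample::process);    // built-in cache loader

        System.out.println(cache.get("some-request"));    // computes on miss, cached afterwards
        System.out.println(cache.stats());
    }
}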

Spring Boot provides an easy way to implement caching using the @Cacheable annotation.

@Service
public class TitleService {

    // requires caching to be enabled once, e.g. @EnableCaching on a configuration class
    @Cacheable("books")
    public String getTitleNameByTitleId(String titleId) {
        return findBookInSlowSource(titleId);
    }
}

With the annotation-based approach, Spring first checks whether this method was already executed with the given parameter. If yes, the cached value is returned rather than executing the method again.

Challenge

· Consider a scenario where the first request is sent to application server 1 and the value is cached there, and another request of the same type is sent to application server 2. Since the cached value is unavailable on application server 2, the method will execute again if it has not run there before; otherwise it will return an old cached value left by an earlier request that hit application server 2.

To mitigate this situation to a certain extent, we can combine custom logic with the sticky-session concept.

We can use Guava or Ehcache to implement an embedded cache.

Pros :

· Simple to configure

· Simple to deploy

· Low latency as it is embedded within the application

· No separate ops team is required to manage it

Cons :

· Data is co-located within the application, which gradually increases the application’s memory footprint

· Limited to JVM-based applications

· Not flexible

Use cases :

· Reference mapping

· Private value caching

· Metadata caching

2) Client-Server Cache


Steps Involved:-

a) The request hits the load balancer

b) The load balancer forwards the request to the application service

c) The application hits the cache to fetch the data; if the data is available it is returned, else the application executes the logic (or fetches the data from the DB) and returns the response.

Ex:-

@Configuration
public class CacheConfiguration {

    // assumes a Redis instance reachable through Jedis; defaults to localhost:6379
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    RedisTemplate<Object, Object> redisTemplate() {
        RedisTemplate<Object, Object> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }

    @Bean
    public CacheManager cacheManager() {
        // constructor form from Spring Data Redis 1.x; on 2.x use
        // RedisCacheManager.create(jedisConnectionFactory()) instead
        return new RedisCacheManager(redisTemplate());
    }
}

This pattern is used when we want to share the cache among different applications or APIs.
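
As a usage sketch (the service, keys and method names here are illustrative, not part of the original example), the shared template can then implement the same check-then-fetch flow as step (c):

@Service
public class ProductService {

    private final RedisTemplate<Object, Object> redisTemplate;

    public ProductService(RedisTemplate<Object, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public Object getProduct(String id) {
        Object cached = redisTemplate.opsForValue().get(id); // hit the shared cache first
        if (cached != null) {
            return cached;
        }
        Object product = loadFromDatabase(id);               // fall back to the data source
        redisTemplate.opsForValue().set(id, product);        // cache it for subsequent requests
        return product;
    }

    // hypothetical slow data-source call
    private Object loadFromDatabase(String id) {
        return "product-" + id;
    }
}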

Pros:-

· A cache server is separate and shared.

· Easy to manage, scale up/down, back up and secure separately

· The shared cache can support many applications (homogeneous or heterogeneous)

Cons:-

· A separate ops team is required to manage it

3) Cloud cache (Cache-as-a-Service)

The cache is hosted as a cloud-based backing service; the pattern shown here is the cache-aside pattern.

Steps Involved:-

a) The API Gateway receives the request

b) It queries the cache for a response. If found, the data is returned; otherwise the microservice is called to fetch the data.

c) Before sending the response, the cache is updated with the latest data for future requests.
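
A minimal sketch of steps (b) and (c); CacheClient and ServiceClient are hypothetical interfaces standing in for the cloud cache SDK and the downstream microservice call:

// hypothetical abstractions over the cloud cache and the microservice
interface CacheClient {
    String get(String key);
    void put(String key, String value);
}

interface ServiceClient {
    String fetch(String key);
}

class GatewayHandler {

    private final CacheClient cache;
    private final ServiceClient service;

    GatewayHandler(CacheClient cache, ServiceClient service) {
        this.cache = cache;
        this.service = service;
    }

    String handleRequest(String key) {
        String cached = cache.get(key);        // (b) query the cache first
        if (cached != null) {
            return cached;                     // cache hit: respond immediately
        }
        String response = service.fetch(key);  // (b) cache miss: call the microservice
        cache.put(key, response);              // (c) update the cache for future requests
        return response;
    }
}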

Pros :

· Data is separate from the microservice

· Can grow and shrink autonomously (or with little effort)

· A distributed implementation increases system responsiveness by returning cached data

· Polyglot microservice support

Cons:

· Separate ops effort is required

· The server network requires adjustment (same region, same VPC)

4) Reverse proxy cache


This caching solution is based on the HTTP protocol.

Steps Involved:-

a) The request comes to the reverse proxy (e.g. NGINX)

b) The reverse proxy checks whether the request is already cached

c) If yes, the cached response is returned and further propagation is stopped

Pros :

· HTTP protocol-based cache solution

· We can specify the cache as configuration, so we don’t have to change any code in the application

· Can grow and shrink autonomously (or with little effort)

· Polyglot microservice support

Cons:

· We can’t write any code to invalidate the cache; invalidation must be timeout-based

· Not Distributed

· Not highly available

NGINX is a mature reverse-proxy caching solution; it stores cached data on disk.
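
A minimal NGINX sketch of such a cache (the zone name, path and upstream are illustrative):

# cache storage on disk, with a shared-memory zone for the keys
proxy_cache_path /var/cache/nginx keys_zone=api_cache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_cache api_cache;           # enable caching for this location
        proxy_cache_valid 200 10m;       # timeout-based invalidation only
        proxy_pass http://app_backend;   # hypothetical upstream application
    }
}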

5) Side-car cache


Steps Involved:-

a) The request hits AKS and is forwarded to one of the pods

b) The request reaches the application container, and the application uses the cache client to connect to the cache container
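
A minimal sketch of step (b), assuming a Redis cache container in the same pod and the Jedis client (the key and value are illustrative):

import redis.clients.jedis.Jedis;

public class SideCarCacheClient {

    public static void main(String[] args) {
        // the cache container shares the pod's network namespace,
        // so it is reachable on localhost
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.setex("title:42", 600, "cached-response"); // write with a 10-minute TTL
            System.out.println(jedis.get("title:42"));       // low-latency local read
        }
    }
}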

Pros :

· Low latency, as the application and the cache are always on the same machine

· Cache cluster discovery is easy because both are hosted on the same machine

· Can grow and shrink autonomously (or with little effort)

· Polyglot microservice support

· Resource pool and management activities are shared between the cache and the application

Cons:

· Limited to container-based environments

· Data is localised within pods

· Separate ops effort required

· The server network requires adjustment (same region, same VPC)

6) Reverse-proxy (side-car) cache


In this pattern, the application container is unaware of the cache’s existence.

Steps Involved:-

a) The request hits AKS and is forwarded to one of the pods

b) Within the pod, the reverse-proxy cache container receives the request and checks whether it is already cached

c) If yes, it sends the cached response; else it forwards the request to the application container.
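
A minimal pod sketch of this layout (image names and ports are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-cache-proxy
spec:
  containers:
    - name: nginx-cache          # reverse-proxy cache side-car; receives traffic first
      image: nginx:1.19
      ports:
        - containerPort: 80
    - name: application          # unaware of the cache's existence
      image: my-app:1.0          # hypothetical application image
      ports:
        - containerPort: 8080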

Pros :

· Configuration-based

· Consistent with containers and microservices

· Polyglot microservice support

Cons:

· Protocol-based (e.g. HTTP)

· Difficult cache invalidation

Conclusion

Appropriate selection of a caching architecture pattern within a microservice architecture will enhance application responsiveness; a poor selection may be agony.
