
Lightning fast NoSQL with Spring Data Redis

Dr. Xinyu Liu | May 25, 2016
6 use cases for Redis in server-side Java applications

Listing 1 shows a Spring Data caching example.

Listing 1. Enabling caching in Spring-based applications


@Cacheable(value="USER_CACHE_REPOSITORY", key = "#id")
public User get(Long id) {
  return em.find(User.class, id);
}

@Caching(put = {@CachePut(value="USER_CACHE_REPOSITORY", key = "#user.getId()")})
public User update(User user) {
    em.merge(user);
    return user;
}

@Caching(evict = {@CacheEvict(value="USER_CACHE_REPOSITORY", key = "#user.getId()")})
public void delete(User user) {
    em.remove(user);
}

// Evicts the cached copy without touching the database
@Caching(evict = {@CacheEvict(value="USER_CACHE_REPOSITORY", key = "#user.getId()")})
public void evictCache(User user) {
}

Here the read operation is annotated with Spring's @Cacheable annotation, which is implemented as an AOP advisor under the hood. A time-to-live setting in Spring also specifies how long these objects will remain in the cache. When the get() method is invoked, Spring tries to fetch and return the object from the remote cache first. If the object isn't found, Spring executes the body of the method and places the database result in the remote cache before returning it.
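The time-to-live mentioned above is set when the cache manager is configured. The following is a minimal sketch assuming a recent Spring Data Redis version on the classpath; the 30-minute TTL is an arbitrary value for illustration:

```java
import java.time.Duration;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
@EnableCaching  // activates @Cacheable, @CachePut, and @CacheEvict processing
public class CacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory factory) {
        // Entries expire 30 minutes after being written to Redis
        RedisCacheConfiguration defaults = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(30));
        return RedisCacheManager.builder(factory)
                .cacheDefaults(defaults)
                .build();
    }
}
```

The RedisConnectionFactory bean comes from your connection setup (for example, Lettuce or Jedis); Spring injects it into the cache manager automatically.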

But what if the same object is updated in the database by another process (such as another server node), or even another thread in the same JVM? With just the @Cacheable annotation employed, you might receive a stale copy from the remote cache server.

To prevent this from happening, you could add a @CachePut annotation to all database update operations. Every time these methods are invoked, the return value replaces the old object in the remote cache. Updating the cache on both database reads and writes keeps the records in-sync between the cache server and the backend database.

Fault tolerance

This sounds perfect, right? Actually, no. With the config in Listing 1 you might not experience any issues under light load, but as you gradually increase the load on the server cluster you will start to see stale data in the remote cache. Be prepared for contention between server nodes, or worse. Even with a successful write in the database, you could end up with a failed PUT in the cache server due to a network glitch. Additionally, NoSQL stores generally don't support the full transaction semantics found in relational databases, which can lead to partial commits. In order to make your code fault tolerant, consider adding a version number to your data model for optimistic locking.
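The version-number idea can be sketched without any framework. In the toy in-memory store below, an update carrying a stale version is rejected, which is the same contract JPA enforces with its @Version column; the class and method names here are illustrative, not part of any library:

```java
import java.util.concurrent.ConcurrentModificationException;

// A record of a user plus the version of the row it was read from.
class VersionedUser {
    final long id;
    String name;
    final int version;               // bumped on every successful update

    VersionedUser(long id, String name, int version) {
        this.id = id;
        this.name = name;
        this.version = version;
    }
}

// Stands in for the database; rejects writes based on a stale version.
class UserStore {
    private VersionedUser current;

    UserStore(VersionedUser initial) {
        this.current = initial;
    }

    // Hand out a defensive copy, as a remote cache would.
    synchronized VersionedUser get() {
        return new VersionedUser(current.id, current.name, current.version);
    }

    // Optimistic lock check: fail if the caller read an older version.
    synchronized void update(VersionedUser candidate) {
        if (candidate.version != current.version) {
            throw new ConcurrentModificationException(
                "stale version " + candidate.version + ", expected " + current.version);
        }
        current = new VersionedUser(candidate.id, candidate.name, candidate.version + 1);
    }
}
```

Two callers that read the same version race to write; the second write fails fast instead of silently clobbering the first, which is exactly the signal the retry logic in Listing 2 reacts to.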

Upon receiving OptimisticLockingFailureException or ConcurrentModificationException (depending on your persistence solution), you would call a method annotated with @CacheEvict to purge the stale copy from the cache, then retry the same operation:

Listing 2. Resolving stale objects in the cache


User user = userDao.get(id);        // user may be served from the cache server
try {
    userDao.update(user, oldName, newName);
} catch (ConcurrentModificationException ex) {   // cached user object may be stale
    userDao.evictCache(user);
    user = userDao.get(id);         // refresh the user object from the database
    userDao.update(user, oldName, newName);   // retry; a legitimate ConcurrentModificationException can still occur
}

 
