
Our current caching implementation stores large amounts of data in report objects (50 MB in some cases).

We've moved from a memory cache to a file cache and use ProtoBuf to serialize and deserialize. This works well; however, we are now experimenting with a Redis cache. Below is an example of how much longer Redis takes than the file system. (Note: using ProtoBuf instead of JsonConvert improves the set time to 15 seconds and the get time to 4 seconds in the example below, when setting a byte array.)

// Extremely SLOW – caching using Redis (JsonConvert to serialize/deserialize)
// Connection is a shared StackExchange.Redis ConnectionMultiplexer instance
IDatabase cache = Connection.GetDatabase();

// 23 seconds!
cache.StringSet("myKey", JsonConvert.SerializeObject(bigObject));

// 5 seconds!
BigObject redisResult = JsonConvert.DeserializeObject<BigObject>(cache.StringGet("myKey"));

// FAST – caching using file system (ProtoBuf to serialize/deserialize)
IDataAccessCache fileCache = new DataAccessFileCache();

// 0.5 seconds
fileCache.SetCache("myKey", bigObject);

// 0.5 seconds
BigObject fileResult = fileCache.GetCache<BigObject>("myKey");
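
For reference, here is a sketch of the ProtoBuf-over-Redis variant mentioned in the note above (the ~15-second set / ~4-second get case). It assumes BigObject is decorated with protobuf-net's [ProtoContract] attributes:

// Serialize with protobuf-net to a byte array, then set the raw bytes in Redis
byte[] payload;
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, bigObject);
    payload = ms.ToArray();
}
cache.StringSet("myKey", payload); // ~15 seconds per the note above

// Read the bytes back and deserialize with protobuf-net
byte[] raw = cache.StringGet("myKey"); // ~4 seconds per the note above
using (var ms = new MemoryStream(raw))
{
    BigObject redisProtoResult = ProtoBuf.Serializer.Deserialize<BigObject>(ms);
}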

Thanks in advance for any help.

P.S. I didn't find an answer in these similar questions:

  • Caching large objects - LocalCache performance
  • Caching large objects, reducing impact of retrieval times

3 Comments
  • Can you separate the serialization from the cache insertion, to determine what is consuming the time? It's probably the JSON serialization. Try a different serialization method, e.g. BinaryFormatter. Commented Dec 22, 2015 at 22:50
  • Thanks for the quick response. The serialization is only about 1 second (of the 23). When we moved from in-memory to file storage we started with BinaryFormatter, but it was "slow", so we switched to ProtoBuf. I will give it a shot. Commented Dec 23, 2015 at 13:48
  • How big is the serialized object? Have you tried compression? (See the sketch after these comments.) Commented Dec 23, 2015 at 17:25
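
Following up on the compression suggestion, here is a minimal sketch of gzip-compressing the ProtoBuf payload before it goes to Redis. The CompressedSet/CompressedGet helper names are illustrative, not part of StackExchange.Redis, and the code assumes System.IO and System.IO.Compression are imported:

// Hypothetical helpers: gzip-compress the protobuf-net payload before caching
static void CompressedSet<T>(IDatabase cache, string key, T value)
{
    using (var buffer = new MemoryStream())
    {
        // Compress while serializing; close the gzip stream to flush it,
        // but leave the underlying buffer open so we can read the bytes
        using (var gzip = new GZipStream(buffer, CompressionLevel.Fastest, leaveOpen: true))
        {
            ProtoBuf.Serializer.Serialize(gzip, value);
        }
        cache.StringSet(key, buffer.ToArray());
    }
}

static T CompressedGet<T>(IDatabase cache, string key)
{
    byte[] payload = cache.StringGet(key);
    using (var buffer = new MemoryStream(payload))
    using (var gzip = new GZipStream(buffer, CompressionMode.Decompress))
    {
        return ProtoBuf.Serializer.Deserialize<T>(gzip);
    }
}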

2 Answers


Redis is actually not designed for storing large objects (many MB), because it is a single-threaded server. A single request will be fast enough, but a few concurrent requests will be slow, because they will all be processed by that one thread. Some optimizations have been made in recent versions.

From the Redis benchmarks page (https://redis.io/topics/benchmarks): speed of RAM and memory bandwidth seem less critical for global performance, especially for small objects. For large objects (>10 KB), it may become noticeable though. Usually, it is not really cost-effective to buy expensive fast memory modules to optimize Redis.

So you can use jumbo frames or buy faster memory if possible, but it won't actually help significantly. Consider using Memcached instead: it is multi-threaded and can be scaled out horizontally to support large amounts of data, whereas Redis can be scaled only with master-slave replication.
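
As a minimal sketch of that suggestion, using the EnyimMemcached client (the endpoint is an assumption, and since memcached's default item size limit is 1 MB, the server would need to be started with a larger limit, e.g. -I 64m, for objects this size):

// EnyimMemcached client; Memcached stores raw byte[] values, so the same
// ProtoBuf serialization discussed in the question still applies
var config = new MemcachedClientConfiguration();
config.AddServer("127.0.0.1", 11211); // assumed endpoint

using (var client = new MemcachedClient(config))
{
    client.Store(StoreMode.Set, "myKey", payloadBytes); // payloadBytes = serialized object
    byte[] cached = client.Get<byte[]>("myKey");
}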


2 Comments

  • This answer is outdated, since Redis Cluster now provides scale-out options 🙂
  • @jocull You should post an updated answer! I'm looking for a 2023 answer to the question.

Since Redis 6 there is threaded I/O: it retains a single-threaded core for data access, but network I/O is now handled by multiple threads.

Otherwise, you can try KeyDB, which is 'fully multithreaded'.
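
For the threaded I/O mentioned above, the relevant redis.conf directives (Redis 6+) look like this; the thread count is an example value:

# redis.conf (Redis 6+): enable threaded network I/O
io-threads 4            # number of I/O threads; command execution stays single-threaded
io-threads-do-reads yes # by default only writes are threaded; this also threads reads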

