This is already implemented in the open source reference architecture project for MongoDB called "Socialite". It's written in Java rather than node.js, so my answers are based on my experience stress- and load-testing that code.
As you can see from its status feed implementation, the feed has a fanoutOnWrite cache option which creates a cache (a limited-size document) for active users, capping the number of most recent entries kept in each cache document (the cap is configurable).
The key principles of that implementation are that content requirements are in fact different from timeline cache requirements, and that the write to the content database comes first, since that is the system of record for all content; only then do you update the cache (if it exists). The cache update can be done asynchronously, if desired. It uses "capped arrays", a.k.a. the $push with $slice update functionality, to atomically push a new value onto the array and chop off the oldest one in the same operation.
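As a sketch of that atomic capped-array update (the field name `timeline` and the cap of 50 are my assumptions, not Socialite's actual schema), the update document you'd send to MongoDB looks roughly like this:

```javascript
// Sketch of a capped-array cache update (assumed field names, not Socialite's schema).
// $push with $each + $slice appends the new entry and atomically trims the
// array to the newest CACHE_SIZE entries in a single update operation.
const CACHE_SIZE = 50; // configurable cap on cached timeline entries

function buildCacheUpdate(entry) {
  return {
    $push: {
      timeline: {
        $each: [entry],     // the new content entry (or entries)
        $slice: -CACHE_SIZE // negative slice keeps only the LAST (newest) N elements
      }
    }
  };
}

// With the node.js driver you would apply it only to an existing cache, e.g.:
// db.collection('timeline_cache').updateOne({ _id: userId }, buildCacheUpdate(entry));
```

Because the push and the trim happen in one update, the cache document can never grow past the cap, even under concurrent writers.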
Don't create a cache for a user if one doesn't already exist (if they never log in, you're wasting the effort). Optionally, you can expire caches based on some TTL parameter.
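One way to implement that TTL expiry in MongoDB is a TTL index on a last-accessed timestamp field (the field name `lastAccessed` and the one-week window here are assumptions for illustration):

```javascript
// Sketch of TTL-based cache expiry (field name and window are assumptions).
// A TTL index makes mongod automatically delete cache documents whose
// lastAccessed timestamp is older than expireAfterSeconds.
const TTL_SECONDS = 7 * 24 * 3600; // e.g. expire caches idle for a week

const ttlIndexSpec = { lastAccessed: 1 };
const ttlIndexOptions = { expireAfterSeconds: TTL_SECONDS };

// With the node.js driver:
// await db.collection('timeline_cache').createIndex(ttlIndexSpec, ttlIndexOptions);
// Each cache read/write should also refresh the timestamp, e.g.:
// { $set: { lastAccessed: new Date() } }
```

This keeps the active-user working set small without any application-side cleanup job.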
When you go to read a user's cache at login and it isn't there, fall back to "fanoutOnRead" (querying all the content of the users they follow) and then build their cache from that result.
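A minimal sketch of that fallback (the data shapes are assumed): gather the followed users' content, merge it newest-first, and keep the top N as the rebuilt cache document:

```javascript
// Sketch of a fanout-on-read cache rebuild (assumed data shapes).
// Given per-followee content lists, merge them by timestamp descending
// and keep only the newest CACHE_SIZE entries, matching the write-path cap.
const CACHE_SIZE = 50;

function rebuildCache(contentByFollowee) {
  return contentByFollowee
    .flat()                      // all followees' entries in one list
    .sort((a, b) => b.ts - a.ts) // newest first
    .slice(0, CACHE_SIZE);       // cap exactly like the fanout-on-write cache
}

// The equivalent query against the content collection would be roughly:
// find({ author: { $in: followedIds } }).sort({ ts: -1 }).limit(CACHE_SIZE)
```

The result is then written as the user's new cache document, after which normal fanout-on-write updates keep it fresh.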
The Socialite project used MongoDB for the entire back end, but when benchmarking it we found that the timeline cache did not need to be replicated or persisted, so its MongoDB servers were configured to be "in memory" only (no journal, no replication, no disk flushing), which is analogous to your use of Redis. If you lose the cache, it just gets rebuilt from the permanent content DB on demand.
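For illustration, a cache-only mongod of that era could be configured along these lines (exact options vary by MongoDB version and storage engine, so treat this as a sketch, not a definitive config):

```yaml
# Sketch: standalone mongod for the disposable timeline cache.
storage:
  journal:
    enabled: false   # no journal: losing the cache on crash is acceptable
  syncPeriodSecs: 0  # disable periodic background flushing to disk
# run as a standalone instance, i.e. no replica set configured
```

The durable content database, by contrast, keeps journaling and replication on, since it is the system of record.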