We use Redis as our caching layer because its set operations map naturally onto the logic in our API service. However, we've run into scaling problems with Redis, primarily due to its single-threaded nature and limited horizontal scaling capabilities. Memory was the first constraint; since background saves fork the process and can roughly double memory usage under heavy writes, we disabled saves so Redis could use all available memory on the box. Our current bottleneck is CPU: we are approaching the upper bound of what a single instance can handle.

To get past that, we looked at Redis clustering, but the open-source cluster implementation was still a work in progress. Instead, we stood up a new cluster and evaluated it with blue-green traffic testing using em-proxy, which let us exercise the new cluster with live production traffic without affecting users. The results showed a significant improvement: response times averaged 55ms on the new cluster, compared to intermittent spikes of up to 700ms on the standard open-source Redis server. Along the way we found that we had to disable certain commands and adjust our key naming conventions to resolve issues around sharding and object management.

The exercise demonstrated how powerful blue-green traffic testing is for validating infrastructure changes, and highlighted the significant performance gains available from Redis compared to our prior environment.
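A few sketches make the steps above concrete. First, the set-based logic that drew us to Redis; this is a minimal redis-py example with invented key names, not our actual API code:

```python
import redis

# Placeholder connection details; not our production configuration.
r = redis.Redis(host="localhost", port=6379)

# Each interest is stored as a Redis set of user ids.
r.sadd("interest:python", "u1", "u2", "u3")
r.sadd("interest:redis", "u2", "u3", "u4")

# SINTER computes "users matching both interests" server-side, which is the
# kind of query that makes Redis sets a good fit for API logic like ours.
print(r.sinter("interest:python", "interest:redis"))  # {b'u2', b'u3'}
```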
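Second, the "disable saves" change amounts to turning off RDB snapshots so fork-heavy background saves stop competing for memory. Here it is applied through redis-py's CONFIG SET; the eviction policy shown is an assumption rather than our exact configuration:

```python
import redis

# Placeholder connection details.
r = redis.Redis(host="localhost", port=6379)

# Disable RDB snapshots; equivalent to an empty `save ""` setting in redis.conf.
# Background saves fork the process, and copy-on-write can balloon memory under
# heavy writes, so turning them off frees the cache to use the box's RAM.
r.config_set("save", "")

# A common companion setting for a cache-only node (an assumption here, not a
# claim about our setup): evict least-recently-used keys instead of erroring
# once maxmemory is reached.
r.config_set("maxmemory-policy", "allkeys-lru")
```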
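Third, the traffic duplexing. em-proxy itself is a small Ruby/EventMachine library, so the following is not em-proxy but a bare-bones Python illustration of the same idea: every request is forwarded to the production backend and mirrored to the candidate cluster, while only production's responses go back to the caller, so the new cluster sees live traffic without ever affecting users. Hosts and ports are placeholders:

```python
import asyncio

PRIMARY = ("127.0.0.1", 6379)  # existing production Redis (placeholder address)
SHADOW = ("127.0.0.1", 7000)   # candidate cluster under test (placeholder address)

async def handle_client(client_reader, client_writer):
    prim_reader, prim_writer = await asyncio.open_connection(*PRIMARY)
    shadow_reader = shadow_writer = None
    try:
        shadow_reader, shadow_writer = await asyncio.open_connection(*SHADOW)
    except OSError:
        pass  # the shadow being unreachable must never break production traffic

    async def client_to_backends():
        # Every client request goes to production, plus a copy to the shadow.
        while data := await client_reader.read(4096):
            prim_writer.write(data)
            await prim_writer.drain()
            if shadow_writer is not None:
                shadow_writer.write(data)  # fire and forget
        prim_writer.close()
        if shadow_writer is not None:
            shadow_writer.close()

    async def primary_to_client():
        # Only production's responses are returned to the caller.
        while data := await prim_reader.read(4096):
            client_writer.write(data)
            await client_writer.drain()
        client_writer.close()

    async def discard_shadow_replies():
        # Drain whatever the shadow sends back so its socket buffers never fill.
        if shadow_reader is not None:
            while await shadow_reader.read(4096):
                pass

    await asyncio.gather(client_to_backends(), primary_to_client(),
                         discard_shadow_replies())

async def main():
    # Point the application at this proxy (port 6380 here) instead of Redis.
    server = await asyncio.start_server(handle_client, "0.0.0.0", 6380)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```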
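Finally, the key-naming change is about keeping related keys on the same shard. Assuming the cluster hashes keys the way Redis Cluster does (the key names below are invented), a hash tag in braces pins related keys to one slot so multi-key set commands keep working; keyspace-wide commands such as KEYS or FLUSHALL are the usual candidates for disabling, which a self-managed deployment can do with redis.conf's rename-command directive:

```python
import redis

# Placeholder; in practice this would be a cluster-aware client or proxy.
r = redis.Redis(host="localhost", port=6379)

# Without a shared hash tag, these two keys could hash to different shards,
# and a multi-key command such as SINTERSTORE would be refused.
r.sadd("{user:42}:followers", "alice", "bob")
r.sadd("{user:42}:following", "bob", "carol")

# Because all three keys share the "{user:42}" tag, they live on the same
# shard and the cross-key intersection still works after sharding.
r.sinterstore("{user:42}:mutual", "{user:42}:followers", "{user:42}:following")
print(r.smembers("{user:42}:mutual"))  # {b'bob'}
```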