Valkey 9.0 Is Faster Than Redis. Now What?

Infrastructure engineers have one core preference: boring. Fast is good. Fast and boring is the dream. So when Redis decided to renegotiate its relationship with the concept of “open source,” the community did what communities do. It forked the thing, renamed it, and started making it better. That’s Valkey. Born from spite, and somehow that worked.

On March 11, 2026, Google Cloud announced that Memorystore for Valkey 9.0 is generally available. This isn’t a rename exercise or a licensing escape hatch, though it’s those things too. Valkey 9.0 is genuinely, measurably faster than the Redis you’ve been running. Some of the most demanding workloads on the internet are already running on it. The “what’s next” question has been answered. This post is about what to do now that you know the answer.

Real World Success: Snap, Juspay, and Fubo

This isn’t a theoretical upgrade. Companies like Snap, Juspay, and Fubo are already putting Valkey 9.0 to the test in production. For Snap, a high-performance caching layer is critical infrastructure. They are leveraging architectural enhancements like SIMD optimizations to drive better throughput and lower latency for their global user base.

In the financial sector, where milliseconds determine success, Juspay is using Memorystore for Valkey to power the GPay stack for top Indian banks. They need to handle high-throughput transactional data with exceptionally low latency, and the performance gains from memory prefetching are providing the scale they require. Meanwhile, Fubo is using it to absorb the massive traffic spikes that come with live streaming major events. These aren’t minor use cases; these are “break the internet” levels of scale.

Beyond the Licensing Drama

Most teams are looking at Valkey for the open standard, but they are staying for the raw speed. Valkey 9.0 on Memorystore isn’t just “as fast” as legacy Redis. It’s significantly faster for the workloads that actually drive revenue in 2026. Google has integrated several key architectural enhancements that move the needle from incremental to transformative.

Let’s talk about pipeline memory prefetching. In a typical caching setup, network round-trip time is the primary bottleneck, which is why clients batch commands into pipelines in the first place. Valkey 9.0 re-architects how those pipelines execute: instead of processing each command’s memory lookups one at a time, the engine prefetches the data for upcoming commands in the batch while the current one runs, hiding memory-access latency rather than paying for it serially. The result is an increase in overall throughput of up to 40%. It’s like having a supercomputer connected to the internet via a high-speed fiber line instead of a wet piece of string. When you’re managing millions of operations per second like Fubo does during a championship game, that 40% is the difference between a happy audience and a social media nightmare.
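To make the round-trip arithmetic concrete, here’s a back-of-envelope model in plain Python (no server required) comparing one-command-per-round-trip against a pipelined batch. The numbers are illustrative assumptions for a same-region deployment, not Memorystore benchmarks.

```python
def total_latency_ms(n_commands: int, rtt_ms: float,
                     per_command_ms: float, pipelined: bool) -> float:
    """Estimate wall-clock time to run n_commands against a cache.

    Sequential: every command pays a full network round trip.
    Pipelined:  one round trip carries the whole batch, so the network
                cost is paid once and only server-side work scales.
    """
    if pipelined:
        return rtt_ms + n_commands * per_command_ms
    return n_commands * (rtt_ms + per_command_ms)

# Illustrative numbers: 0.5 ms RTT, 0.01 ms of server work per command.
sequential = total_latency_ms(1000, 0.5, 0.01, pipelined=False)  # ~510 ms
pipelined = total_latency_ms(1000, 0.5, 0.01, pipelined=True)    # ~10.5 ms
```

Even before any server-side prefetching, pipelining collapses a thousand round trips into one; Valkey 9.0’s prefetching then squeezes the remaining server-side term further.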

Developer “Quality of Life”

Valkey 9.0 introduces specific commands that solve long-standing pain points for developers. The standout is HEXPIRE. This allows for granular hash field expiration. Finally, we can stop treating our cache like a closet where we just shove everything and hope it disappears eventually.

Historically, if you had a “User Session” object with some fields that needed to last 30 minutes and others that needed to last all day, you had to manage that logic manually in your application. It was a tedious process that led to “temporary” code living for five years. HEXPIRE lets you set a TTL on an individual field within a hash. It sounds like a minor detail, but it allows you to keep your logical objects together while managing their lifecycles independently. Clean code wins every time.
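To see what HEXPIRE buys you, here’s a toy in-memory model of the semantics in plain Python; it is not a Valkey client, just an illustration of per-field TTLs living inside one logical hash. (Against a real server you would issue something like `HEXPIRE session:42 1800 FIELDS 1 auth_token` instead; the key name here is made up.)

```python
import time


class TTLHash:
    """Toy model of a hash whose fields can each carry their own TTL."""

    def __init__(self):
        self._data = {}    # field -> value
        self._expiry = {}  # field -> absolute deadline (monotonic seconds)

    def hset(self, field, value):
        self._data[field] = value
        self._expiry.pop(field, None)  # writing a field clears its TTL

    def hexpire(self, ttl_seconds, field):
        if field not in self._data:
            return False  # can't expire a field that doesn't exist
        self._expiry[field] = time.monotonic() + ttl_seconds
        return True

    def hget(self, field):
        deadline = self._expiry.get(field)
        if deadline is not None and time.monotonic() >= deadline:
            # Lazily evict the expired field, the way a cache would.
            del self._data[field]
            del self._expiry[field]
            return None
        return self._data.get(field)


# One logical session object, two independent lifecycles:
session = TTLHash()
session.hset("user_id", "42")         # lives as long as the hash does
session.hset("auth_token", "abc123")
session.hexpire(1800, "auth_token")   # the token alone expires in 30 min
```

The point is the shape of the API: the session stays one object, and only the short-lived field gets a deadline, which is exactly the manual bookkeeping HEXPIRE removes from your application code.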

The Community Built Something Better

There is a strategic lesson here. For the last decade, people assumed that proprietary versions of open-source tools would eventually diverge and offer “better” performance. Valkey 9.0 flips that script. Because it’s community-driven and backed by the world’s largest cloud providers, the innovation is happening in the open standard first. The open web wins again.

Google Cloud’s decision to lean into Valkey 9.0 is a signal to ISVs. The “Redis tax” is now optional. You can have the performance of a managed service without the vendor lock-in of a proprietary license. For any CTO looking at their 2027 roadmap, that is a very compelling starting position. Choose wisely, unless you enjoy explaining to your CFO why your infrastructure bill grew faster than your ARR.

If you’re still running legacy Redis clusters, the question isn’t whether you should migrate, but why you haven’t yet. Valkey 9.0 isn’t just a safe harbor. It’s what Redis should have been.

Want to Go Deeper?