When to use this
Anywhere your Worker calls an external API on the hot path — pricing lookups, GitHub stars, weather, an MDX corpus — and the upstream is rate-limited, slow, or too expensive to hit on every request. Wrap the fetcher with this and a cache hit becomes a fast edge-local KV read (typically single-digit milliseconds) instead of a ~200ms upstream round trip.
The code
```ts
import type { KVNamespace } from '@cloudflare/workers-types';

export async function kvCache<T>(
  kv: KVNamespace,
  key: string,
  ttl: number,
  fetcher: () => Promise<T>
): Promise<T> {
  const cached = await kv.get(key, 'json');
  if (cached !== null) return cached as T;
  const fresh = await fetcher();
  // Await the write deliberately: the next request in this POP should hit
  // the cache rather than race an in-flight put.
  await kv.put(key, JSON.stringify(fresh), { expirationTtl: ttl });
  return fresh;
}

// Usage (inside a fetch handler, with env.CACHE bound to a KV namespace):
const stars = await kvCache(env.CACHE, 'gh:stars', 3600, async () => {
  const r = await fetch('https://api.github.com/repos/foo/bar');
  return ((await r.json()) as { stargazers_count: number }).stargazers_count;
});
```
How it works
KV’s get(key, 'json') returns parsed JSON or null — that’s the
“is-it-cached” check. On miss, we call the fetcher, serialize the result,
and write with expirationTtl so KV evicts the key after ttl seconds.
The `await` on `kv.put` is intentional even though you could hand the write
to `ctx.waitUntil` — for read-heavy keys you want the next request in the
same edge POP to hit the cache, not race an in-flight write. (KV is
eventually consistent across locations, so other POPs may still serve a
miss briefly after the write regardless.)
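If a key tolerates a brief window of repeated upstream fetches, the deferred-write trade-off could look like the sketch below. Everything here is illustrative, not from the original: `kvCacheDeferred` is a hypothetical name, and `KVLike`/`CtxLike` are minimal structural stand-ins for the real `KVNamespace` and `ExecutionContext` types so the sketch stands alone.

```ts
// Minimal structural types so this sketch is self-contained; in a real
// Worker you'd use KVNamespace and ExecutionContext from
// @cloudflare/workers-types instead.
interface KVLike {
  get(key: string, type: 'json'): Promise<unknown>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}
interface CtxLike {
  waitUntil(promise: Promise<unknown>): void;
}

// Hypothetical variant: return the fresh value immediately and let the
// runtime finish the KV write in the background via waitUntil.
async function kvCacheDeferred<T>(
  kv: KVLike,
  ctx: CtxLike,
  key: string,
  ttl: number,
  fetcher: () => Promise<T>
): Promise<T> {
  const cached = await kv.get(key, 'json');
  if (cached !== null) return cached as T;
  const fresh = await fetcher();
  // The response is not blocked on the put; concurrent misses may each
  // call the fetcher until the write lands.
  ctx.waitUntil(kv.put(key, JSON.stringify(fresh), { expirationTtl: ttl }));
  return fresh;
}
```

The only change from the snippet above is swapping the `await` for `ctx.waitUntil` — which is exactly the race the prose warns about for read-heavy keys.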
The TTL is per-key, not per-call, so calling with the same key from different routes shares the same cache lifetime. Pick keys carefully.
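One way to make that key discipline explicit is a tiny builder — a hypothetical convention, not part of the original snippet — so shared lifetimes are deliberate and collisions are easy to spot:

```ts
// Hypothetical key convention: a `domain:id` prefix groups related entries
// and makes it obvious when two routes intentionally share one cache entry.
const cacheKey = (domain: string, id: string): string => `${domain}:${id}`;

// cacheKey('gh-stars', 'foo/bar') → 'gh-stars:foo/bar'
```

Two routes that both pass `cacheKey('gh-stars', 'foo/bar')` share one entry and one TTL; anything else gets its own.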
Treat this snippet as the starting point — adapt names, types, and edge-case handling to your project. The shape is reusable; the details are yours.