Overview
The TinyURL system splits into two flows: a low-traffic write path for creating short URLs and a high-traffic read path for redirects. The write path is straightforward — a Creator client sends the long URL to a server, which generates a short key, stores the mapping in a database, and returns the short URL. The redirect path must handle 100x more traffic, so it needs a Load Balancer to distribute requests across servers, and a Cache to serve the most popular URL mappings without touching the database on every request.
Explanation
Write flow (Create Short URL): The Creator client sends a POST request to an application server. The server generates a unique short key (e.g., via base62 encoding of an auto-increment ID or a hash), writes the mapping to the database, and returns the short URL to the client. At only ~4 requests/second, a single server and database handle this easily.
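The key-generation step can be sketched as base62 encoding of an auto-increment ID. This is a minimal illustration, not the only scheme the text allows (a hash would also work); the alphabet ordering below is an assumption.

```python
import string

# 62-character alphabet: 0-9, a-z, A-Z (ordering is an assumption for this sketch).
BASE62 = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode_base62(n: int) -> str:
    """Encode a non-negative auto-increment ID as a short base62 key."""
    if n == 0:
        return BASE62[0]
    chars = []
    while n > 0:
        n, rem = divmod(n, 62)      # peel off the least-significant base62 digit
        chars.append(BASE62[rem])
    return "".join(reversed(chars))
```

Because each ID maps to exactly one key, uniqueness falls out of the database's auto-increment counter rather than from collision handling.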
Redirect flow: The Consumer client hits a Load Balancer, which spreads the 400 req/s of redirect traffic across multiple application servers so that each one stays under its 140 req/s capacity. Each server first checks the Cache: with a 90% hit rate, only ~40 requests/second reach the database, well within its 50 req/s capacity. On a cache miss, the server reads from the database, populates the cache, and returns a 301/302 redirect to the client.
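The cache-then-database lookup on the redirect path is the classic cache-aside pattern. A minimal sketch, using in-process dicts as stand-ins for the real cache and database (a deployment would use something like Redis and a persistent store; the `resolve` name and return shape are assumptions):

```python
# Stand-ins for the real cache and database in this sketch.
cache: dict = {}
database: dict = {"abc123": "https://example.com/some/long/path"}

def resolve(short_key):
    """Return (HTTP status, Location header) for a redirect request."""
    long_url = cache.get(short_key)
    if long_url is None:                      # cache miss
        long_url = database.get(short_key)    # fall back to the database
        if long_url is None:
            return 404, None                  # unknown short key
        cache[short_key] = long_url           # populate cache for next time
    return 301, long_url                      # permanent redirect (302 if you want to count clicks)
```

The first request for a key pays the database read; every subsequent request for that key is served from the cache, which is what keeps the database at ~40 req/s instead of 400.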
Key design decisions:
• Cache is essential: without it, all 400 redirects/second would hit a single database whose capacity is only 50 req/s. A 90% cache hit rate cuts read-path DB load to ~40 req/s, just under that capacity.
• The Load Balancer distributes redirect traffic so no single server becomes a bottleneck.
• The write and read paths are intentionally asymmetric: the simple write path needs no LB or cache because its traffic is 100x lower.
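The capacity claims above are easy to verify with back-of-envelope arithmetic. A quick check using only the numbers stated in this section (400 req/s of redirects, 90% cache hit rate, 50 req/s database capacity, 140 req/s per-server capacity):

```python
import math

# All figures taken from the section above.
redirect_rps = 400          # read-path traffic
cache_hit_rate = 0.90       # fraction served from cache
db_capacity_rps = 50        # single-database capacity
server_capacity_rps = 140   # single-app-server capacity

# Only cache misses reach the database.
db_miss_rps = round(redirect_rps * (1 - cache_hit_rate))   # ~40 req/s
assert db_miss_rps <= db_capacity_rps                       # fits in one database

# Minimum server count so each stays under its per-server capacity.
min_servers = math.ceil(redirect_rps / server_capacity_rps) # 3 servers
```

This also shows why the write path needs none of this machinery: at ~4 req/s it is an order of magnitude below even the database's capacity.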