System Design · March 14, 2026 · 7 min read

Your Delivery Rider's GPS Never Writes to a Database

That little bike icon moving on your Zomato screen updates every second. But none of those GPS pings ever touch a traditional database. Here's how real-time location tracking actually works at scale.


Open Zomato. Order biryani. Watch the little bike icon crawl towards your apartment in real time.

That icon updates every second or two. Sometimes faster. Your rider turns a corner and the map reflects it almost immediately. It feels smooth, responsive, alive.

Now think about this from a systems perspective. Zomato has hundreds of thousands of active delivery riders at any given time during peak hours. Each one is sending GPS coordinates every 1–3 seconds. That's potentially millions of location pings every minute.
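A quick back-of-envelope check makes the scale concrete. The rider count and ping interval below are illustrative assumptions, not Zomato's real figures:

```python
# Back-of-envelope GPS ping volume at peak.
# Figures are illustrative assumptions, not real Zomato numbers.
riders = 300_000          # active riders at peak (assumed)
ping_interval_s = 2       # one GPS ping every ~2 seconds (assumed)

pings_per_second = riders / ping_interval_s
pings_per_minute = pings_per_second * 60

print(f"{pings_per_second:,.0f} pings/s, {pings_per_minute:,.0f} pings/min")
```

At these assumptions that is 150,000 writes per second, sustained, for hours. Even halving the rider count or doubling the interval leaves you far beyond comfortable territory for row-per-ping inserts.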

If every single ping wrote to a PostgreSQL row, the database would be on fire within seconds. The write throughput alone would overwhelm any traditional relational database.

So how does it actually work?

The Obvious Approach (That Doesn't Work)

What Actually Happens: Memory, Not Disk
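The core idea is that the "current position" is a single value per rider that gets overwritten, not a growing log. A minimal sketch of that access pattern, using a plain dict as a stand-in for Redis (all names here are hypothetical):

```python
import time

# In-memory "current position" store: one entry per rider, last write wins.
# Production systems use Redis for this; a dict just illustrates the pattern.
live_positions: dict[str, tuple[float, float, float]] = {}

def ingest_ping(rider_id: str, lat: float, lon: float) -> None:
    """Overwrite the rider's previous position -- no history kept here."""
    live_positions[rider_id] = (lat, lon, time.time())

def current_position(rider_id: str):
    """O(1) read of the freshest known position, or None if unseen."""
    entry = live_positions.get(rider_id)
    return (entry[0], entry[1]) if entry else None

ingest_ping("rider-42", 12.9716, 77.5946)   # first ping
ingest_ping("rider-42", 12.9720, 77.5950)   # newer ping overwrites it
```

Note what this buys you: memory stays bounded by the number of riders, not the number of pings, and reads are constant-time no matter how fast the pings arrive.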

But How Does the Customer's Map Update?

Most large food delivery platforms use a mix of both. WebSockets for active tracking screens, and smart polling as a fallback when WebSocket connections drop (which happens a lot on mobile networks).
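The fallback decision itself is simple client-side state. Here is a sketch of that logic; the failure threshold and all names are illustrative assumptions, not any platform's actual client code:

```python
# Sketch of the client-side fallback described above: prefer a WebSocket,
# drop to HTTP polling after repeated connection failures.
# Threshold and names are illustrative assumptions.

class TrackingTransport:
    MAX_WS_FAILURES = 3  # assumed threshold before falling back

    def __init__(self) -> None:
        self.mode = "websocket"
        self.ws_failures = 0

    def on_ws_error(self) -> None:
        """Mobile networks drop WebSockets often; count the failures."""
        self.ws_failures += 1
        if self.ws_failures >= self.MAX_WS_FAILURES:
            self.mode = "polling"  # fall back to periodic GET requests

    def on_ws_connected(self) -> None:
        """A successful reconnect restores real-time push."""
        self.ws_failures = 0
        self.mode = "websocket"

transport = TrackingTransport()
for _ in range(3):
    transport.on_ws_error()
```

The important property is that the customer's map degrades gracefully: polling every few seconds looks nearly identical to push at the update rates involved.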

The Ingestion Pipeline: Handling the Firehose

This separation is critical. The thing receiving data and the thing storing data are not the same thing. They scale independently and fail independently.
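A toy version of that separation, with a deque standing in for the Kafka topic and a dict for the position store (both stand-ins and all names are illustrative assumptions):

```python
from collections import deque

# Decoupling sketch: the ingester appends to a buffer (Kafka's role) and
# returns immediately; a separate consumer drains it into the position
# store (Redis's role) at its own pace.

buffer: deque = deque()                               # stands in for a Kafka topic
position_store: dict[str, tuple[float, float]] = {}   # stands in for Redis

def receive_ping(rider_id: str, lat: float, lon: float) -> None:
    """Ingestion side: accept and buffer. Never waits on storage."""
    buffer.append((rider_id, lat, lon))

def drain_once() -> int:
    """Storage side: consume whatever is buffered, independently."""
    n = 0
    while buffer:
        rider_id, lat, lon = buffer.popleft()
        position_store[rider_id] = (lat, lon)
        n += 1
    return n

receive_ping("rider-7", 19.0760, 72.8777)
receive_ping("rider-7", 19.0765, 72.8780)
drained = drain_once()
```

If the storage side slows down, pings pile up in the buffer instead of being dropped or blocking the ingesters, which is exactly the fault-isolation property the real Kafka tier provides.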

Geospatial Queries: "Show Me Riders Near This Restaurant"

What About History? ETAs? Analytics?

Why This Design Is Elegant

  • Redis holds the current position — fast reads, fast writes, always fresh

  • Kafka absorbs the firehose — buffering, fault tolerance, decoupling

  • WebSockets push updates to the customer — real-time, no polling

  • Data warehouse stores history — analytics, ETAs, route optimization

No single system does everything. Each one does one thing well. And the GPS ping never has to wait for a slow database write before the next one arrives.

That's the core insight: real-time location tracking is a streaming problem, not a storage problem. The moment you treat it like a storage problem and reach for a database, you've already lost.

The Numbers That Make This Real

Wrapping Up

FAQ

Why don't food delivery apps store GPS data in a relational database?

GPS pings arrive at extremely high volume — potentially hundreds of thousands per second during peak hours. Relational databases like PostgreSQL aren't designed for this write pattern. They'd choke on lock contention, WAL writes, and index updates. In-memory stores like Redis can handle this volume easily because they avoid disk I/O entirely for the hot path.

How does Redis handle geospatial queries for finding nearby riders?

Redis has a built-in geospatial index (implemented on top of Sorted Sets with geohash encoding). You add rider positions with GEOADD and query nearby riders with GEOSEARCH, specifying a radius. Redis handles the geospatial math in-memory, making it fast enough for real-time rider assignment.
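To see what such a radius query computes, here is the same idea in plain Python: great-circle (haversine) distance between each rider and the restaurant. This is a conceptual sketch, not Redis's actual implementation, and the coordinates are made up:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two (lat, lon) points, in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Illustrative rider positions (Bangalore-ish, plus one far away).
riders = {
    "rider-1": (12.9716, 77.5946),   # essentially at the restaurant
    "rider-2": (12.9352, 77.6245),   # a few km away
    "rider-3": (13.0827, 80.2707),   # different city entirely
}
restaurant = (12.9716, 77.5946)

def riders_within(radius_km: float) -> list[str]:
    """Plain-Python equivalent of a GEOSEARCH radius query."""
    return [rid for rid, (lat, lon) in riders.items()
            if haversine_km(*restaurant, lat, lon) <= radius_km]
```

Redis avoids scanning every rider like this sketch does: the geohash encoding lets it narrow the search to candidates in nearby grid cells before doing the distance math.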

How do food delivery platforms calculate ETAs if GPS data is ephemeral?

The real-time GPS path (Kafka → Redis) is separate from the analytics path. The same Kafka topic feeds a second pipeline that writes historical GPS data into a data warehouse. ETA models are ML services that consume historical routes, current traffic, restaurant prep times, and the rider's live speed to generate estimates.
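The fan-out pattern is the key bit: the same event stream feeds both paths. A minimal sketch, with plain Python structures standing in for the Kafka topic, Redis, and the warehouse (names are illustrative assumptions):

```python
# Fan-out sketch: every ping is delivered to two independent consumers,
# mirroring a Kafka topic read by both the real-time path and the
# warehouse path. Stand-ins and names are illustrative assumptions.

pings = [
    ("rider-9", 28.6139, 77.2090),
    ("rider-9", 28.6145, 77.2095),
]

live_view: dict[str, tuple[float, float]] = {}   # real-time path (Redis's role)
history: list[tuple[str, float, float]] = []     # analytics path (warehouse's role)

def realtime_consumer(event) -> None:
    rider_id, lat, lon = event
    live_view[rider_id] = (lat, lon)             # keep only the latest

def warehouse_consumer(event) -> None:
    history.append(event)                        # keep everything

for event in pings:                              # both consumers see each event
    realtime_consumer(event)
    warehouse_consumer(event)
```

In Kafka terms, the two consumers would simply belong to different consumer groups on the same topic, so neither path slows the other down.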

Filed under fieldnotes · March 14, 2026