How Blinkit, Zepto, Swiggy Instamart and BigBasket Deliver in 10 Minutes — and Why the Hard Part Has Nothing to Do with Speed
The real quick-commerce moat is not rider speed. It is the forecasting system that places the right SKU in the right dark store hours before a customer even opens the app.

The rider doing 15 kmph is not the product. The product is the forecasting model that put the right item in the right store 12 hours before you ordered it.
The Wrong Question Everyone Asks
What a Dark Store Actually Is
Average rider speed: ~15 kmph
Target delivery time: 10 minutes
Max distance in 10 min: 15 x (10/60) = 2.5 km
Subtract pack time (~2.5 min): effective riding time = 7.5 min
Maximum viable radius: 15 x (7.5/60) ~= 1.875 km ~= 2 km
Every quick-commerce player in India independently converged on roughly a 2 km radius. This is not coincidence. The math gives exactly one answer given Indian urban density and traffic conditions.
The Knapsack Problem Running in Production
Each SKU has a weight: shelf space it occupies
Each SKU has a value: expected demand x margin for that pin code
Constraint: total shelf space is fixed
Objective: maximize value under the space limit
This is NP-hard in the general case. In practice it is solved approximately with ML: a demand forecasting model trained per pin code, per time window.
The model runs continuously. It does not wait for orders to arrive. It pre-positions inventory based on predicted demand.
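The exact knapsack formulation is intractable at scale, but a value-density greedy pass gives a serviceable approximation of the shelf-planning step. A minimal sketch; the SKU names, shelf-space weights, and value numbers are all invented for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// SKU is one candidate product for a dark store's shelves.
type SKU struct {
	Name  string
	Space int     // shelf units the SKU occupies (the "weight")
	Value float64 // expected demand x margin for this pin code
}

// planShelves greedily fills `capacity` shelf units in descending
// value-per-space order - a classic knapsack approximation.
func planShelves(skus []SKU, capacity int) []SKU {
	sort.Slice(skus, func(i, j int) bool {
		return skus[i].Value/float64(skus[i].Space) >
			skus[j].Value/float64(skus[j].Space)
	})
	var picked []SKU
	used := 0
	for _, s := range skus {
		if used+s.Space <= capacity {
			picked = append(picked, s)
			used += s.Space
		}
	}
	return picked
}

func main() {
	skus := []SKU{
		{"milk", 4, 90},  // density 22.5
		{"eggs", 3, 75},  // density 25
		{"chips", 5, 40}, // density 8
		{"bread", 2, 60}, // density 30
	}
	for _, s := range planShelves(skus, 9) {
		fmt.Println(s.Name) // bread, eggs, milk fit; chips do not
	}
}
```

In production the Value field is itself the output of the forecasting model, which is why the two systems are inseparable.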
The Forecasting Signals
Historical Order Data Per Pin Code
Real-Time Weather
Festival and Local Calendar
Time-of-Day Demand Patterns
Redis: Why Not Postgres?
Redis is updated by the forecasting pipeline, not only by user orders. When the model decides that a store should carry 200 units of a product, that figure is written into the online inventory layer ahead of the physical replenishment, and per-order decrements keep it in sync, so the app reflects the ground truth of what is on that shelf right now.
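The write path matters as much as the read path: decrementing stock on each order must be atomic, or two concurrent orders can both read stock=1 and both succeed. In Redis this check-and-decrement is typically performed server-side in a single step (for example, a Lua script), because a separate GET followed by DECRBY races. The required semantics, sketched in-memory in Go with a mutex standing in for Redis's single-threaded execution:

```go
package main

import (
	"fmt"
	"sync"
)

// StockLedger mimics the semantics the online inventory layer needs:
// Reserve must check availability and decrement in one atomic step.
type StockLedger struct {
	mu    sync.Mutex
	stock map[string]int // key pattern: "stock:<storeID>:<itemID>"
}

func NewStockLedger() *StockLedger {
	return &StockLedger{stock: map[string]int{}}
}

func (l *StockLedger) Set(key string, n int) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.stock[key] = n
}

// Reserve atomically checks and decrements. Splitting the check and
// the decrement into two round trips would allow overselling.
func (l *StockLedger) Reserve(key string, qty int) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.stock[key] < qty {
		return false // never oversell
	}
	l.stock[key] -= qty
	return true
}

func main() {
	l := NewStockLedger()
	l.Set("stock:store-42:milk-1l", 1)
	fmt.Println(l.Reserve("stock:store-42:milk-1l", 1)) // true
	fmt.Println(l.Reserve("stock:store-42:milk-1l", 1)) // false: sold out
}
```

The store IDs and key layout above are illustrative; the point is only that the check and the decrement happen under one lock, which is what a Lua script gives you for free on a real Redis server.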
The Order Flow: What Happens in 10 Minutes
T+0:00 Customer taps "Place Order" -> Store Assignment Service runs haversine distance across active stores within 3 km, picks the nearest one with inventory, and writes the order to DynamoDB
T+0:05 Picker at the dark store gets a handheld notification showing exact shelf coordinates and an optimized pick path
T+2:30 Order is packed, sealed, and labelled while rider assignment runs in parallel
T+3:00 Rider picks up the order and leaves the store
T+8:00 Order is delivered
The 2.5 minute pack time is not humans being fast. It is a store layout designed by engineers, not merchandisers. Every SKU in a dark store is positioned to minimize the picker's walking distance across the most common order combinations. Shelf position is an optimization output, not a display decision.
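A crude version of that pick-path optimization: assign every shelf slot a coordinate, then visit an order's items in a serpentine sweep (ascending positions on even aisles, descending on odd ones) instead of in cart order, so the picker never doubles back within an aisle. The layout and item names here are invented:

```go
package main

import (
	"fmt"
	"sort"
)

// Slot is a shelf position: aisle number, then distance along the aisle.
type Slot struct {
	Aisle, Pos int
}

// orderPickPath sorts one order's items into a serpentine walk
// through the aisles, minimizing back-and-forth within each aisle.
func orderPickPath(items []string, layout map[string]Slot) []string {
	sorted := append([]string(nil), items...)
	sort.Slice(sorted, func(i, j int) bool {
		a, b := layout[sorted[i]], layout[sorted[j]]
		if a.Aisle != b.Aisle {
			return a.Aisle < b.Aisle
		}
		if a.Aisle%2 == 0 {
			return a.Pos < b.Pos // even aisle: walk forward
		}
		return a.Pos > b.Pos // odd aisle: walk back
	})
	return sorted
}

func main() {
	layout := map[string]Slot{
		"milk":  {0, 3},
		"bread": {0, 1},
		"soap":  {1, 2},
		"chips": {1, 5},
	}
	fmt.Println(orderPickPath([]string{"soap", "milk", "chips", "bread"}, layout))
	// [bread milk chips soap]
}
```

The real optimization runs the other way as well: shelf coordinates themselves are chosen so that the most common order combinations produce short walks, which is what "shelf position is an optimization output" means in practice.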
DynamoDB and Why Zepto Migrated
Single-digit millisecond latency at scale with no tuning
No schema migrations as order structure evolves
Auto-scaling during demand spikes
Native event-driven integration for order state transitions
Order state is modeled as a finite state machine: PLACED -> ASSIGNED -> PICKING -> PACKED -> DISPATCHED -> DELIVERED.
Each transition emits an event consumed by downstream services like customer notifications, analytics, rider payout calculation, and inventory decrement.
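That state machine is straightforward to make explicit: a transition table that rejects illegal jumps (say, PACKED straight to DELIVERED) and hands each legal transition to an event emitter. A minimal sketch; in production the emitter would publish to a stream or queue, not call a local function:

```go
package main

import "fmt"

type OrderState string

const (
	Placed     OrderState = "PLACED"
	Assigned   OrderState = "ASSIGNED"
	Picking    OrderState = "PICKING"
	Packed     OrderState = "PACKED"
	Dispatched OrderState = "DISPATCHED"
	Delivered  OrderState = "DELIVERED"
)

// next encodes the only legal transition out of each state.
var next = map[OrderState]OrderState{
	Placed:     Assigned,
	Assigned:   Picking,
	Picking:    Packed,
	Packed:     Dispatched,
	Dispatched: Delivered,
}

// Transition advances an order and emits an event for downstream
// consumers: notifications, analytics, payouts, inventory decrement.
func Transition(orderID string, from, to OrderState, emit func(string)) error {
	if next[from] != to {
		return fmt.Errorf("illegal transition %s -> %s", from, to)
	}
	emit(fmt.Sprintf("order=%s state=%s", orderID, to))
	return nil
}

func main() {
	emit := func(e string) { fmt.Println("event:", e) }
	fmt.Println(Transition("ord-1", Placed, Assigned, emit))  // <nil>
	fmt.Println(Transition("ord-1", Packed, Delivered, emit)) // error: illegal
}
```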
Zepto Maps: Why Google Maps Is Not Enough
Why All Four Companies Converged on the Same Stack
Real-time inventory: Redis for in-memory reads and atomic updates
Order state: DynamoDB or another NoSQL store for flexible schema and low-latency writes
Demand forecasting: ML per pin code because geographic demand variance is extreme
Store radius: ~2 km because rider speed and SLA leave no room for more
Pack target: under 90 seconds because transit time already consumes most of the SLA
Last-mile routing: proprietary hyperlocal mapping because generic maps miss the final 50 metres
This convergence is not industry coordination. It is the same math and the same constraints producing the same optimal answers independently.
The Go Prototype: Core Store Assignment
package main

import (
	"context"
	"fmt"
	"math"
	"sort"

	"github.com/redis/go-redis/v9"
)

type Store struct {
	ID   string
	Lat  float64
	Lng  float64
	Name string
}

// haversine returns the great-circle distance in km between two lat/lng points.
func haversine(lat1, lon1, lat2, lon2 float64) float64 {
	const R = 6371.0 // Earth radius in km
	dLat := (lat2 - lat1) * math.Pi / 180
	dLon := (lon2 - lon1) * math.Pi / 180
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(lat1*math.Pi/180)*math.Cos(lat2*math.Pi/180)*
			math.Sin(dLon/2)*math.Sin(dLon/2)
	return R * 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a))
}

// checkStock queries Redis for current stock of an item at a given store.
func checkStock(ctx context.Context, rdb *redis.Client,
	storeID, itemID string) (int, error) {
	key := fmt.Sprintf("stock:%s:%s", storeID, itemID)
	val, err := rdb.Get(ctx, key).Int()
	if err == redis.Nil {
		return 0, nil // missing key = out of stock
	}
	return val, err
}

// assignStore finds the nearest store within 2 km that has the item in stock.
func assignStore(ctx context.Context, userLat, userLng float64,
	itemID string, stores []Store, rdb *redis.Client) (*Store, float64, error) {
	// Sort stores by distance from the user.
	sort.Slice(stores, func(i, j int) bool {
		di := haversine(userLat, userLng, stores[i].Lat, stores[i].Lng)
		dj := haversine(userLat, userLng, stores[j].Lat, stores[j].Lng)
		return di < dj
	})
	for i := range stores {
		store := &stores[i]
		dist := haversine(userLat, userLng, store.Lat, store.Lng)
		if dist > 2.0 {
			break // beyond the 2 km radius: stop searching
		}
		stock, err := checkStock(ctx, rdb, store.ID, itemID)
		if err != nil {
			continue // skip stores whose stock read failed
		}
		if stock > 0 {
			return store, dist, nil
		}
	}
	return nil, 0, fmt.Errorf("item %s unavailable within 2 km", itemID)
}
This is the core of the entire system. Everything else in the stack, from the AI forecasting to the pack optimization to the proprietary maps, exists to make this store assignment return the right answer, reliably, in under a millisecond.