TL;DR

  • Scenario: E-commerce seckill/ticket-grabbing scenarios with instantaneous traffic peaks, high read/write concurrency
  • Conclusion: Pre-generate static pages plus layered rate limiting and queuing; write path uses Redis Lua atomic pre-deduction + MQ async persistence; read path uses a multi-level cache
  • Output: Seckill frontend/backend implementation checklist + Kafka produce/consume examples + Lua inventory deduction script framework

Application Scenarios

E-commerce Case - High Concurrency Seckill System Response Strategy

Pre-seckill Strategy:

  1. Page Static Processing

    • Pre-generate product detail page static HTML
    • Use CDN to distribute static resources
  2. Request Rate Limiting and Queuing

    • Layered rate limiting:
      • Frontend JS limits refresh frequency
      • Gateway layer limits IP requests
      • Service layer limits interface calls
  3. Cache Warm-up

    • Pre-load seckill product info into Redis cache
    • Use multi-level cache architecture
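The warm-up step above can be sketched as below. This is a minimal illustration, not a fixed API: the `ConcurrentHashMap` stands in for Redis, the `seckill:product:{id}` key convention and the `loadFromDb` stub are assumptions for the example; a real deployment would call something like `redisTemplate.opsForValue().set(key, json, ttl)`.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Cache warm-up sketch: push seckill SKUs into the cache before the sale
 * opens so the first wave of reads never hits the database. The map stands
 * in for Redis; key convention and DB stub are illustrative assumptions.
 */
public class CacheWarmer {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public void warmUp(List<String> productIds) {
        for (String id : productIds) {
            // In production: redisTemplate.opsForValue().set(key, json, ttl)
            cache.put("seckill:product:" + id, loadFromDb(id));
        }
    }

    String loadFromDb(String id) {              // stub for the real DB query
        return "{\"id\":\"" + id + "\"}";
    }

    public String get(String id) {
        return cache.get("seckill:product:" + id);
    }
}
```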

Core Seckill Processing:

  1. Inventory Deduction Optimization

    • Use Redis atomic operations for pre-deduction
    • Lua script guarantees atomicity
  2. Order Processing Diversion

    • Use message queue for peak cutting
    • Architecture: Request → Quick Validation → MQ → Async Order Processing
  3. Service Isolation and Degradation

    • Independently deploy seckill service
    • Key degradation strategy: Disable non-core features
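The atomic pre-deduction in step 1 can be sketched as follows. The Lua script is what would run inside Redis via `EVAL`, so the read-check-decrement happens as one atomic step; `deductLocal()` mirrors the same semantics in-process with CAS so the logic can be unit-tested. The key name `seckill:stock:{productId}` is an assumed convention, not part of any API.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Atomic inventory pre-deduction sketch. DEDUCT_LUA is executed in Redis
 * via EVAL (KEYS[1] = stock key, ARGV[1] = quantity); deductLocal() is a
 * pure-Java mirror of the same check-then-decrement semantics.
 */
public class StockDeduction {

    // Return codes: -1 = key missing (not warmed up), 0 = sold out, 1 = deducted.
    public static final String DEDUCT_LUA =
        "local stock = tonumber(redis.call('GET', KEYS[1]) or '-1')\n" +
        "if stock < 0 then return -1 end\n" +
        "local want = tonumber(ARGV[1])\n" +
        "if stock < want then return 0 end\n" +
        "redis.call('DECRBY', KEYS[1], want)\n" +
        "return 1";

    private final AtomicLong stock;

    public StockDeduction(long initialStock) {
        this.stock = new AtomicLong(initialStock);
    }

    /** Same check-then-decrement semantics as the Lua script, via CAS. */
    public boolean deductLocal(long want) {
        while (true) {
            long cur = stock.get();
            if (cur < want) return false;                 // insufficient stock
            if (stock.compareAndSet(cur, cur - want)) return true;
        }
    }

    public long remaining() { return stock.get(); }
}
```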

High Concurrency Read Request Processing

1. Cache Strategy Optimization

  • Multi-level Cache Architecture:
    • Client cache
    • CDN edge node cache
    • Application layer cache (Redis/Memcached)
    • Database query cache
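The multi-level lookup above can be sketched as a two-level read-through cache. This is a simplification under stated assumptions: L1 is an in-process map (standing in for a local cache like Caffeine), L2 a shared map (standing in for Redis), and eviction/TTL are omitted for brevity.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Two-level read-through cache sketch: check L1, then L2; on a full miss
 * call the loader (the database) once and back-fill both levels.
 */
public class TwoLevelCache<K, V> {
    private final Map<K, V> l1 = new ConcurrentHashMap<>();  // local cache
    private final Map<K, V> l2 = new ConcurrentHashMap<>();  // shared (Redis)
    private final Function<K, V> loader;                      // database query
    private int dbHits = 0;                                    // for illustration

    public TwoLevelCache(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        V v = l1.get(key);
        if (v != null) return v;          // L1 hit
        v = l2.get(key);
        if (v == null) {                  // miss at every level: hit the DB
            dbHits++;
            v = loader.apply(key);
            l2.put(key, v);               // back-fill L2
        }
        l1.put(key, v);                   // back-fill L1
        return v;
    }

    public int dbHits() { return dbHits; }
}
```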

2. Data Static Processing

  • Fully Static: Generate HTML files hosted on CDN
  • Semi-static: Static template + dynamic data fragments

3. Smart Rate Limiting

  • Rate Limiting Dimensions: User ID, IP, device fingerprint, interface level
  • Rate Limiting Algorithms: Token bucket, leaky bucket, sliding window counting
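Of the algorithms listed above, the token bucket can be sketched as below. Time is injected as a parameter so the behavior is deterministic; a real gateway would feed in `System.nanoTime()`.

```java
/**
 * Token-bucket rate limiter sketch. Tokens refill continuously at
 * ratePerSec up to capacity; a request is admitted only if a whole
 * token is available.
 */
public class TokenBucket {
    private final double capacity;
    private final double ratePerSec;
    private double tokens;
    private long lastNanos;

    public TokenBucket(double capacity, double ratePerSec, long nowNanos) {
        this.capacity = capacity;
        this.ratePerSec = ratePerSec;
        this.tokens = capacity;          // start with a full bucket
        this.lastNanos = nowNanos;
    }

    /** Returns true if the request arriving at nowNanos is admitted. */
    public synchronized boolean tryAcquire(long nowNanos) {
        double elapsedSec = (nowNanos - lastNanos) / 1e9;
        tokens = Math.min(capacity, tokens + elapsedSec * ratePerSec);
        lastNanos = nowNanos;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;                    // bucket empty: reject or queue
    }
}
```

The same shape works per user ID, per IP, or per interface: keep one bucket per key in a map, with capacity controlling the tolerated burst and ratePerSec the steady throughput.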

High Concurrency Write Request Processing

Message Queue Solution

  1. Traffic Peak Cutting

    • Store burst 10k QPS write requests in queue
    • Database consumes at its own pace (e.g., 2000 QPS)
  2. Async Processing

    • Fast Response Phase: Validate user qualification, generate pre-order, return immediately
    • Async Processing Phase: Actual inventory deduction, generate formal order, notify payment system
  3. System Decoupling

    • Order service only generates order messages
    • Inventory service independently consumes messages
    • Payment service listens for payment messages

Kafka Implementation Example

Producer:

public Response handleSeckillRequest(UserRequest request) {
    if (!validate(request)) {
        return Response.fail("Not eligible");
    }
    PreOrder preOrder = createPreOrder(request);
    kafkaTemplate.send("seckill_orders", preOrder);
    return Response.success("Seckill request received");
}

Consumer:

@KafkaListener(topics = "seckill_orders")
public void processOrder(PreOrder preOrder) {
    try {
        inventoryService.reduceStock(preOrder);
        orderService.createRealOrder(preOrder);
        paymentService.preparePayment(preOrder);
    } catch (Exception e) {
        retryOrCompensate(preOrder, e);
    }
}
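Because Kafka delivers at-least-once, the consumer above can receive the same pre-order twice (see "Duplicate orders" in the quick reference below), so the side effects need an idempotency guard. A minimal sketch, with `ConcurrentHashMap.putIfAbsent` standing in for a DB unique index or Redis `SET NX`, and the counter standing in for real order creation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Idempotent-consumption sketch: a dedup table keyed by pre-order id makes
 * the side effect run exactly once even under duplicate delivery.
 */
public class IdempotentConsumer {
    private final ConcurrentHashMap<String, Boolean> processed = new ConcurrentHashMap<>();
    private final AtomicInteger ordersCreated = new AtomicInteger();

    /** Returns true only for the first delivery of a given preOrderId. */
    public boolean processOrder(String preOrderId) {
        if (processed.putIfAbsent(preOrderId, Boolean.TRUE) != null) {
            return false;                 // duplicate delivery: skip silently
        }
        ordersCreated.incrementAndGet();  // stand-in for the real order creation
        return true;
    }

    public int ordersCreated() { return ordersCreated.get(); }
}
```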

Peak Cutting and Valley Filling

Implementation Flow

  1. Request Buffer Phase:

    • Generate unique request ID
    • Serialize to JSON and write to MQ
    • Return within 50ms
  2. Traffic Peak Cutting Mechanism:

    • Queue as buffer for instantaneous traffic (e.g., 100k QPS)
    • Set max queue backlog (e.g., 500k) as circuit breaker
  3. User Experience Balance:

    • Display “Estimated wait about 1 minute”
    • Use WebSocket to push results
    • Three final states: Success, Failure (sold out), Timeout
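The buffer-and-drain flow above can be simulated with a bounded in-JVM queue; the capacity of 3 below stands in for the 500k backlog cap, and a real system would of course use Kafka rather than an `ArrayBlockingQueue`.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Peak-cutting sketch: a bounded queue absorbs the burst, offer() fails
 * fast once the backlog cap is reached (the circuit breaker above), and
 * the consumer drains at its own pace.
 */
public class PeakCutter {
    private final BlockingQueue<String> queue;

    public PeakCutter(int maxBacklog) {
        this.queue = new ArrayBlockingQueue<>(maxBacklog);
    }

    /** Buffer a request; returns false when the backlog cap is hit. */
    public boolean accept(String requestId) {
        return queue.offer(requestId);    // non-blocking: fail fast on overflow
    }

    /** Consumer side: drain up to n requests per tick. */
    public int drain(int n) {
        int done = 0;
        while (done < n && queue.poll() != null) done++;
        return done;
    }

    public int backlog() { return queue.size(); }
}
```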

Error Quick Reference

| Symptom | Root Cause | Fix |
| --- | --- | --- |
| Overselling / negative inventory | Non-atomic inventory deduction | Redis Lua atomic pre-deduction; DB optimistic lock |
| Order succeeds but is lost/delayed | MQ backlog or insufficient consumers | Increase partitions/consumers; isolate retry queue |
| Duplicate orders | At-least-once delivery without idempotency | Consumer idempotency: unique index / deduplication table |
| Message loss | Producer not acknowledged; broker not persisted | Kafka: acks=all |
| 5xx errors / thread pool exhausted | Long synchronous chain; no fast fail | Keep only quick validation + enqueue synchronous; run the rest async |
| Redis CPU spike / latency | Hot key, complex Lua, too many connections | Shard hot keys; local cache; connection pool tuning |