eCommerce applications are usually read-intensive, due to the sheer number of product and category listings, and tend to optimize their scaling for a higher number of reads, for example by using replication and letting the slaves handle read traffic.

Write performance often bottlenecks in the checkout phase of the application, where new orders are registered, stock levels are adjusted, and so on.

This type of bottleneck is all the more visible in heavily cached applications, where most of the read-intensive information is served from memory while the checkout still needs a lot of concurrent write access to a single master database.

Replacing synchronous, on-demand processing with asynchronous message passing and processing should (see the publisher sketch after this list):

  1. Allow more simultaneous connections, since each connection is a plain TCP socket
  2. Decrease the number of processes used: no nginx, no php-fpm, just TCP kernel threads and RabbitMQ worker threads
  3. Decrease memory use, which now depends on the number of consumers processing the inbound data
  4. Decrease DB concurrency, which is now bounded by the number of consumers doing the work rather than by the number of buyers placing orders (orders of magnitude lower)
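
As a rough illustration of the publishing side, here is a minimal sketch assuming the php-amqplib client; the broker host, credentials, the `orders` queue name and the order payload are all invented for the example.

```php
<?php
// Minimal publisher sketch: the checkout front-end hands the order to the broker
// instead of writing to the database itself. Assumes the php-amqplib client;
// host, credentials, queue name and payload are invented for the example.
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('rabbitmq.local', 5672, 'guest', 'guest');
$channel    = $connection->channel();

// Durable queue so queued orders survive a broker restart.
$channel->queue_declare('orders', false, true, false, false);

$order = json_encode([
    'order_id' => 'hypothetical-1234',
    'items'    => [['sku' => 'ABC-1', 'qty' => 2]],
]);

$message = new AMQPMessage($order, [
    'delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT,
]);

// Publish to the default exchange, routed by queue name.
$channel->basic_publish($message, '', 'orders');

$channel->close();
$connection->close();
```

The front-end only serializes the order and hands it to the broker over a single TCP connection; all the expensive DB work happens later, in the workers.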

Messages with reply-queues could allow asynchronous responses to be received later, once the processor has finished its task. A TCP proxy in front of a cluster of RabbitMQ machines, with a few PHP worker machines behind them, should scale much better than receiving all the heavy-processing traffic in nginx + php-fpm processes.
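
A matching worker on one of those PHP machines could look roughly like the sketch below (php-amqplib again; `process_order()` is a hypothetical stand-in for the real order-registration and stock-adjustment logic). The prefetch setting is what keeps DB concurrency proportional to the worker count.

```php
<?php
// Minimal worker sketch, run on the PHP worker machines. Assumes php-amqplib;
// process_order() is a hypothetical stand-in for the real checkout logic.
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('rabbitmq.local', 5672, 'guest', 'guest');
$channel    = $connection->channel();

$channel->queue_declare('orders', false, true, false, false);

// Prefetch of 1: each worker handles one order at a time, so total DB concurrency
// is bounded by the number of workers, not by the number of buyers.
$channel->basic_qos(null, 1, null);

$callback = function (AMQPMessage $msg) {
    $order = json_decode($msg->getBody(), true);

    process_order($order); // hypothetical: registers the order, adjusts stock, etc.

    // Acknowledge only after the DB work is done, so the message is redelivered
    // if this worker dies mid-processing.
    $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
};

$channel->basic_consume('orders', '', false, false, false, false, $callback);

while (count($channel->callbacks)) {
    $channel->wait();
}
```

Scaling the write path then becomes a matter of adding or removing worker processes, independently of the web tier.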

Message brokers such as RabbitMQ can dynamically generate reply-queues on request, and those queues are exclusive to the connection that declared them, so their content is only accessible to the client that sent the request.
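
Concretely, the client lets the broker name an exclusive reply-queue and tags each request with a correlation id, roughly as sketched below (php-amqplib; the `rpc_checkout` queue name and the payload are assumptions). The worker would publish its result to the queue named in `reply_to`, copying the `correlation_id`.

```php
<?php
// Reply-queue sketch: the client asks the broker for an exclusive, auto-named
// queue and correlates the reply to its request. Assumes php-amqplib;
// the rpc_checkout queue and payload are invented.
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('rabbitmq.local', 5672, 'guest', 'guest');
$channel    = $connection->channel();

$channel->queue_declare('rpc_checkout', false, true, false, false);

// An empty name lets the broker generate one (e.g. amq.gen-...); the queue is
// exclusive to this connection, so only this client can read the replies.
list($replyQueue, ,) = $channel->queue_declare('', false, false, true, false);

$correlationId = uniqid('', true);

$request = new AMQPMessage(json_encode(['order_id' => 'hypothetical-1234']), [
    'reply_to'       => $replyQueue,
    'correlation_id' => $correlationId,
]);
$channel->basic_publish($request, '', 'rpc_checkout');

// Wait for the worker's reply, ignoring anything with a different correlation id.
$response = null;
$channel->basic_consume($replyQueue, '', false, true, false, false,
    function (AMQPMessage $msg) use (&$response, $correlationId) {
        if ($msg->get('correlation_id') === $correlationId) {
            $response = $msg->getBody();
        }
    }
);

while ($response === null) {
    $channel->wait();
}
echo "checkout result: {$response}\n";
```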

Security-wise, message brokers support TLS over the socket, but extra security measures can be layered on top, e.g. security tokens, message-digest checks and so on.
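
As a sketch of what that could look like with php-amqplib's AMQPSSLConnection (the hosts, certificate paths, header names, token and shared secret below are purely illustrative assumptions):

```php
<?php
// TLS connection sketch using php-amqplib's AMQPSSLConnection, plus illustrative
// application-level extras (token + digest). Hosts, paths, header names and the
// shared secret are all assumptions for the example.
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPSSLConnection;
use PhpAmqpLib\Message\AMQPMessage;
use PhpAmqpLib\Wire\AMQPTable;

// Standard PHP stream-context SSL options, passed through to the socket.
$sslOptions = [
    'cafile'      => '/etc/rabbitmq/ssl/ca.pem',
    'local_cert'  => '/etc/rabbitmq/ssl/client.pem',
    'verify_peer' => true,
];

// 5671 is the conventional AMQPS port.
$connection = new AMQPSSLConnection('rabbitmq.local', 5671, 'guest', 'guest', '/', $sslOptions);
$channel    = $connection->channel();

$channel->queue_declare('orders', false, true, false, false);

// Extra measures on top of TLS: a token plus an HMAC digest of the body.
$body   = json_encode(['order_id' => 'hypothetical-1234']);
$secret = 'shared-secret'; // illustrative only

$message = new AMQPMessage($body, [
    'application_headers' => new AMQPTable([
        'x-auth-token' => 'illustrative-token',
        'x-digest'     => hash_hmac('sha256', $body, $secret),
    ]),
]);
$channel->basic_publish($message, '', 'orders');
```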

A short example of the above principles is here.
