Benchmarking Message Queues: Comparison

Message queues are a way for different software components or applications to communicate with each other asynchronously by passing messages through a shared buffer. This allows a sender to send a message without needing to wait for an immediate response from the receiver, which can help to improve the system’s performance, reduce latency, and allow components to operate independently. This entry compares the performance of four popular message queues: Redis, ActiveMQ Artemis, RabbitMQ, and Apache Kafka.

  • message queue
  • Redis
  • ActiveMQ Artemis
  • RabbitMQ
  • Apache Kafka

1. Introduction

In the ever-evolving landscape of distributed systems, the demand for robust, scalable, and asynchronous communication among their intricate components is paramount. Asynchronous messaging allows system elements to interact without the immediate need for responses, empowering components to dispatch messages and proceed with other tasks concurrently. This not only enhances the overall system throughput but also reduces response times, enabling systems to tackle complex workloads effectively and deliver real-time results [1]. At the heart of distributed computing lie message queues, pivotal in providing such capabilities. By decoupling services and fostering resiliency, fault tolerance, and responsiveness, message queues play a vital role in modernizing and optimizing distributed systems. Nonetheless, selecting the optimal message queue remains a daunting challenge.
The vast array of available options, each with its own unique strengths and weaknesses, necessitates a thorough understanding of these systems to ensure the performance and efficiency of the overall architecture. Choosing the right message queue is critical for cost optimization, improved system performance, versatility, and scalability. It enables significant cost savings in infrastructure and operational expenses while enhancing system responsiveness through reduced latency, high throughput, and timely message delivery. Message queues with advanced features facilitate the implementation of complex messaging patterns, workflow orchestration, and seamless integration of components.
These message queues represent leading solutions in the field, each with its own distinctive characteristics and strengths. Redis, widely recognized as an open-source in-memory data store, not only excels in its primary role but also serves as a versatile and efficient message broker. ActiveMQ Artemis, specifically designed to cater to enterprise-level applications, offers high-performance and reliable messaging solutions, ensuring seamless communication in demanding and complex environments. RabbitMQ, with its extensive developer community and broad adoption, stands as a robust and feature-rich message broker, supporting multiple messaging protocols and providing a solid foundation for scalable and flexible communication infrastructures. Lastly, Apache Kafka, a distributed event streaming platform, is known for its high throughput and durable, log-based message storage.
It is essential to note that a message queue that excels in one scenario might falter in another. To address this challenge, we designed a series of experiments encompassing a diverse spectrum of workloads and scenarios. Our evaluation revolves around two fundamental performance metrics: latency and throughput. Latency measures the time taken for a message to traverse from sender to receiver, whereas throughput quantifies the number of messages processed within a given timeframe. By thoroughly examining the performance of each message queue across various conditions, we gained comprehensive insights into their respective strengths and weaknesses, empowering us to make well-informed decisions.
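To make these two metrics concrete, the following sketch computes throughput and latency percentiles from per-message send and receive timestamps. It is a minimal illustration only; the function name and the nearest-rank percentile method are our own choices, not part of any benchmarking tool discussed here.

```python
def compute_metrics(send_ts, recv_ts):
    """Derive throughput and latency percentiles from per-message
    send/receive timestamps (in seconds). Illustrative helper only."""
    # Per-message latency: receive time minus send time, sorted for percentiles.
    latencies = sorted(r - s for s, r in zip(send_ts, recv_ts))
    # Throughput: messages delivered over the total observed duration.
    duration = max(recv_ts) - min(send_ts)
    throughput = len(latencies) / duration if duration > 0 else float("inf")

    def percentile(p):
        # Nearest-rank percentile over the sorted latency list.
        idx = min(len(latencies) - 1, int(p / 100 * len(latencies)))
        return latencies[idx]

    return {
        "throughput_msgs_per_sec": throughput,
        "p50_latency_s": percentile(50),
        "p99_latency_s": percentile(99),
    }

# Example: 4 messages sent over ~0.77 s, each delivered in 10-20 ms.
send = [0.0, 0.25, 0.5, 0.75]
recv = [0.01, 0.26, 0.51, 0.77]
print(compute_metrics(send, recv))
```

Real benchmarking tools apply the same idea at scale, typically with high-resolution clocks and histogram-based percentile tracking rather than sorting every sample.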
To conduct our experiments, we utilized the OpenMessaging Benchmark Framework [2], a performance testing tool created by The Linux Foundation. This tool was specifically designed to measure the performance of messaging systems across various workloads and scenarios. Notably, it supports multiple messaging protocols and message queues, providing a unified testing framework for comprehensive evaluation. The tool offers detailed metrics for latency and throughput, allowing for precise performance analysis.
Our choice of this benchmarking tool was based on several compelling reasons. Firstly, it allowed us to maintain consistency by supporting the benchmarking of all four message queues. This ensured that the evaluation process was unbiased and devoid of any favoritism toward specific technologies. Secondly, the OpenMessaging Benchmark Framework was developed by The Linux Foundation, a widely respected organization known for its contributions to open-source technologies. This factor ensured the tool’s reliability and credibility. Lastly, the popularity of the tool among developers was evident from its impressive statistics on GitHub, including 295 stars, 183 forks, and contributions from 47 contributors at the time of writing.
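For reference, an OpenMessaging Benchmark run pairs a driver file (broker connection settings) with a workload file describing topics, message sizes, and rates. The sketch below is a representative workload definition; the field names follow the format used by the framework’s bundled workloads at the time of writing, but the exact schema and values are assumptions that should be checked against the repository.

```yaml
# Hypothetical OMB workload: 1 topic, 10 partitions, 1 KB messages
name: 1-topic-10-partitions-1kb
topics: 1
partitionsPerTopic: 10
messageSize: 1024
payloadFile: payload/payload-1Kb.data
subscriptionsPerTopic: 1
consumerPerSubscription: 1
producersPerTopic: 1
producerRate: 10000        # target publish rate in messages/s (hypothetical value)
consumerBacklogSizeGB: 0
testDurationMinutes: 15
```

Sweeping parameters such as `messageSize` and `producerRate` across runs is how the latency and throughput profiles of each broker are mapped out.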

2. Benchmarking Message Queues

In the realm of benchmarking message queue systems, several studies have contributed valuable insights into their performance characteristics. Piyush Maheshwari and Michael Pang [3] conducted a benchmarking study comparing TIBCO Rendezvous [4] and Progress SonicMQ [5] using SPECjms2007 [6], focusing on evaluating message delivery latency, throughput, program stability, and resource utilization. While informative for specific message-oriented middleware (MOM), their study differs from ours, which compares Redis, ActiveMQ Artemis, RabbitMQ, and Apache Kafka to provide a broader comparative analysis. In a related study, Kai Sachs et al. [7] conducted a performance analysis of Apache ActiveMQ using SPECjms2007 [6] and jms2009-PS [8], comparing its usage as a persistence medium with databases. In contrast, our research expanded the scope to include Redis, ActiveMQ Artemis, RabbitMQ, and Apache Kafka, extensively assessing their latency and throughput performance across a diverse range of workloads to enable practitioners to make informed decisions based on specific use cases. Furthermore, Stefan Appel et al. [9] proposed a unique approach for benchmarking AMQP implementations, whereas our study focused on a comparative analysis of different message queue technologies, offering insights into their respective strengths and weaknesses. Because SPECjms2007 and jms2009-PS are closed source, several benchmarking solutions have been developed to provide a fair comparison between message queues, with the OpenMessaging Benchmark Framework standing out as a notable choice.
Souza et al. [10] evaluated the performance of Apache Kafka and RabbitMQ in terms of throughput, latency, and resource utilization using the OpenMessaging Benchmark (OMB) tool. Their findings reveal that Apache Kafka outperformed RabbitMQ in throughput and scalability, particularly under heavy workloads with large messages, while RabbitMQ showed lower latency and resource utilization, suggesting its suitability for low-latency and resource-constrained environments. Fu et al. [11] proposed a framework for comparing the performance of popular message queue technologies, including Kafka, RabbitMQ, RocketMQ, ActiveMQ, and Apache Pulsar [12]. Their research evaluated factors such as message size, the number of producers and consumers, and the number of partitions, highlighting Kafka’s high throughput due to optimization techniques but noting its latency limitations with larger message sizes. John et al. [13] compared Apache Kafka [14] and RabbitMQ in terms of throughput and latency, exploring scenarios involving single and multiple publishers and consumers using the Flotilla [15] benchmarking tool. Their results indicate that Kafka exhibited superior throughput, whereas RabbitMQ prioritized reliability, especially in scenarios where data security was crucial. Valeriu Manuel Ionescu et al. [16] analyzed RabbitMQ and ActiveMQ, focusing on their publishing and subscribing rates; their study employed different-sized images as a real-world workload instead of traditional byte-string payloads and considered both single and multiple publisher–consumer scenarios. Milosavljevic et al. [17] studied message queueing technologies for flow control and load balancing in an IoT scenario, evaluating RabbitMQ and Apache Kafka within a smart home system cloud with different numbers of consumers. Their results highlight that Kafka exhibited stable data buffering and a lower average CPU usage, with no instances of reaching maximum CPU usage during testing.
Our study examined Redis, ActiveMQ Artemis, RabbitMQ, and Apache Kafka, shedding light on their respective performance characteristics. We assessed Redis’s publish/subscribe operations and evaluated the enhanced ActiveMQ Artemis rather than the traditional ActiveMQ. Notably, our findings highlight ActiveMQ Artemis’ advantageous latency performance in scenarios with low throughput, distinguishing it from RabbitMQ. Additionally, we provided comprehensive results featuring distinct graphs for throughput and latency, encompassing various percentiles. To ensure unbiased and consistent results, we utilized the OpenMessaging Benchmark tool from The Linux Foundation, a trusted and popular open-source solution.

References

  1. Goel, S.; Sharda, H.; Taniar, D. Message-Oriented-Middleware in a Distributed Environment. Innov. Internet Community Syst. 2003, 2877, 93–103.
  2. The Linux Foundation. OpenMessaging Benchmark Framework. Available online: https://openmessaging.cloud/docs/benchmarks/ (accessed on 17 December 2022).
  3. Maheshwari, P.; Pang, M. Benchmarking message-oriented middleware: TIB/RV versus SonicMQ. Concurr. Comput. Pract. Exp. 2005, 17, 1507–1526.
  4. TIBCO Software Inc. TIBCO Rendezvous. Available online: https://www.tibco.com/products/tibco-rendezvous (accessed on 7 June 2023).
  5. Progress Software Corporation. SonicMQ messaging System. Available online: https://docs.progress.com/bundle/openedge-application-and-integration-services-117/page/SonicMQ-Broker.html (accessed on 10 January 2023).
  6. Kounev, S. SPECjms2007 Benchmark Framework. Available online: https://www.spec.org/jms2007/ (accessed on 10 January 2023).
  7. Sachs, K.; Kounev, S.; Appel, S.; Buchmann, A. Benchmarking of Message-Oriented Middleware. In Proceedings of the DEBS ’09, Third ACM International Conference on Distributed Event-Based Systems, Nashville, TN, USA, 6–9 July 2009; Association for Computing Machinery: New York, NY, USA, 2009.
  8. Sachs, K.; Appel, S.; Kounev, S.; Buchmann, A. Benchmarking Publish/Subscribe-Based Messaging Systems. In Proceedings of the Database Systems for Advanced Applications, Tsukuba, Japan, 1–4 April 2010; Yoshikawa, M., Meng, X., Yumoto, T., Ma, Q., Sun, L., Watanabe, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 203–214.
  9. Appel, S.; Sachs, K.; Buchmann, A.P. Towards benchmarking of AMQP. In Proceedings of the Fourth ACM International Conference on Distributed Event-Based Systems, DEBS 2010, Cambridge, UK, 12–15 July 2010; Bacon, J., Pietzuch, P.R., Sventek, J., Çetintemel, U., Eds.; ACM: New York, NY, USA, 2010; pp. 99–100.
  10. De Araújo Souza, R. Performance Analysis between Apache Kafka and RabbitMQ. Available online: http://dspace.sti.ufcg.edu.br:8080/jspui/bitstream/riufcg/20339/1/RONAN%20DE%20ARAU%CC%81JO%20SOUZA%20-%20TCC%20CIE%CC%82NCIA%20DA%20COMPUTAC%CC%A7A%CC%83O%202020.pdf (accessed on 7 June 2023).
  11. Fu, G.; Zhang, Y.; Yu, G. A fair comparison of message queuing systems. IEEE Access 2020, 9, 421–432.
  12. Apache Software Foundation. Apache Pulsar. Available online: https://pulsar.apache.org/ (accessed on 11 January 2023).
  13. John, V.; Liu, X. A survey of distributed message broker queues. arXiv 2017, arXiv:1704.00411.
  14. Apache Software Foundation. Apache Kafka. Available online: https://kafka.apache.org/ (accessed on 24 May 2023).
  15. John, V. Flotilla. Available online: https://github.com/vineetjohn/flotilla (accessed on 11 January 2023).
  16. Ionescu, V.M. The analysis of the performance of RabbitMQ and ActiveMQ. In Proceedings of the 2015 14th RoEduNet International Conference—Networking in Education and Research (RoEduNet NER), Craiova, Romania, 24–26 September 2015; pp. 132–137.
  17. Milosavljevic, M.; Matic, M.; Jovic, N.; Antic, M. Comparison of Message Queue Technologies for Highly Available Microservices in IoT. Available online: https://www.etran.rs/2021/zbornik/Papers/105_RTI_2.6.pdf (accessed on 7 June 2023).