
Message Queue Selection Guide 2026: Kafka vs RabbitMQ vs Redis Streams vs SQS

The right message broker depends on your use case, scale requirements, and team capabilities. Here is a practical comparison to help you make an informed decision.


Last updated: 3/10/2026

Executive summary

Message queues are the backbone of asynchronous microservices architectures, enabling decoupling, scalability, and fault tolerance. But not all message brokers solve the same problem. Kafka, RabbitMQ, Redis Streams, and Amazon SQS each have distinct strengths, trade-offs, and operational requirements.

Choosing the wrong message broker leads to architectural frustration: Kafka's complexity for simple workloads, RabbitMQ's throughput limits for high-volume event streaming, Redis Streams' persistence limitations for critical data, or SQS's cloud lock-in for multi-cloud deployments.

This guide provides a practical framework for selecting the right message broker based on your specific requirements.

Comparison matrix: Key characteristics

| Characteristic | Apache Kafka | RabbitMQ | Redis Streams | Amazon SQS |
|---|---|---|---|---|
| Message model | Log-based streams | Queue + exchange | Log-based streams | Queue |
| Ordering | Per-partition | Per-queue | Per-stream | Best effort (FIFO queues ordered) |
| Delivery | At-least-once | Configurable | At-least-once | At-least-once |
| Persistence | Disk-based (configurable) | Disk-based (configurable) | Memory-based (optional AOF) | Disk-based (managed) |
| Throughput | Very high (millions/sec) | High (thousands/sec) | High (thousands/sec) | Nearly unlimited (standard queues) |
| Latency | 2-10 ms | 1-5 ms | <1 ms | 10-100 ms |
| Scalability | Horizontal | Vertical + clustering | Vertical | Horizontal (managed) |
| Complexity | High | Medium | Low | Very low |
| Operational overhead | High | Medium | Low | None (managed) |
| Cloud lock-in | None | None | None | AWS |
| Use-case fit | Event streaming, high volume | General purpose, routing | Simple, low-latency workloads | AWS-native, low ops |

Apache Kafka: Event streaming at scale

Architecture and strengths

Kafka is a distributed, partitioned, replicated log optimized for high-throughput event streaming. It stores messages in topics divided into partitions, allowing parallel consumption.

When Kafka is the right choice:

  1. High-volume event streaming
  • Millions of messages per second
  • Multiple consumers reading the same data
  • Event sourcing and replay requirements
  2. Real-time data pipelines
  • Stream processing (Kafka Streams, ksqlDB)
  • Log aggregation and monitoring
  • IoT data ingestion
  3. Durable event storage
  • Need to replay events from arbitrary positions
  • Long-term retention requirements (days to weeks)

Kafka implementation example:

```typescript
// Kafka producer implementation
import { Kafka, Producer, Consumer, ProducerRecord } from 'kafkajs';

// Handler invoked for each decoded message
type MessageHandler = (key: string | undefined, value: any) => Promise<void>;

class KafkaEventProducer {
  private producer: Producer;
  private topic: string;

  constructor(
    brokers: string[],
    clientId: string,
    topic: string
  ) {
    const kafka = new Kafka({
      clientId,
      brokers,
      retry: {
        initialRetryTime: 100,
        retries: 8
      }
    });

    this.producer = kafka.producer();
    this.topic = topic;

    // Fire-and-forget connect; production code may prefer an explicit connect()
    this.producer.connect().catch(console.error);
  }

  async sendEvent(key: string, value: any): Promise<void> {
    const record: ProducerRecord = {
      topic: this.topic,
      messages: [
        {
          key,
          value: JSON.stringify(value),
          timestamp: Date.now().toString() // kafkajs expects epoch ms as a string
        }
      ]
    };

    try {
      await this.producer.send(record);
      console.log(`Event sent to ${this.topic}:`, key);
    } catch (error) {
      console.error('Failed to send event:', error);
      throw error;
    }
  }

  async disconnect(): Promise<void> {
    await this.producer.disconnect();
  }
}

// Kafka consumer implementation
class KafkaEventConsumer {
  private consumer: Consumer;
  private topic: string;

  constructor(
    brokers: string[],
    clientId: string,
    groupId: string,
    topic: string,
    private messageHandler: MessageHandler
  ) {
    // groupId belongs on the consumer, not the Kafka client config
    const kafka = new Kafka({ clientId, brokers });

    this.consumer = kafka.consumer({ groupId });
    this.topic = topic;
  }

  async start(): Promise<void> {
    await this.consumer.connect();
    await this.consumer.subscribe({ topic: this.topic, fromBeginning: false });

    await this.consumer.run({
      eachMessage: async ({ topic, partition, message }) => {
        try {
          const key = message.key?.toString();
          const value = JSON.parse(message.value?.toString() || '{}');

          console.log(`Processing message from ${topic}[${partition}]:`, key);

          await this.messageHandler(key, value);

          // Offsets are committed automatically (autoCommit is on by default)
        } catch (error) {
          console.error('Failed to process message:', error);
          // Swallowing the error skips the message (its offset is still
          // committed); rethrow instead if the message must be retried
        }
      }
    });
  }

  async stop(): Promise<void> {
    await this.consumer.disconnect();
  }
}
```

Kafka operational considerations:

```yaml
# Kafka configuration for production
kafka:
  num.partitions: 12  # Increase for parallelism
  default.replication.factor: 3  # High availability
  min.insync.replicas: 2  # Durability guarantee
  auto.create.topics.enable: false  # Security
  log.retention.hours: 168  # 7 days retention
  log.segment.bytes: 1073741824  # 1GB segments
  log.retention.check.interval.ms: 300000  # 5 minutes

zookeeper:
  tickTime: 2000
  initLimit: 10
  syncLimit: 5
```
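The `num.partitions` value above is the primary parallelism lever: each partition is consumed by at most one consumer per group. A common back-of-the-envelope sizing approach is to divide the target throughput by the measured per-partition produce and consume rates and take the larger result. A minimal sketch (the function and parameter names are ours; treat this as a rule of thumb, not an official formula):

```typescript
// Rough partition-count heuristic: enough partitions to satisfy both the
// produce-side and consume-side throughput targets, given measured
// per-partition rates for your cluster, payloads, and consumers.
function recommendedPartitions(
  targetMsgsPerSec: number,
  perPartitionProduceRate: number,
  perPartitionConsumeRate: number
): number {
  const forProduce = targetMsgsPerSec / perPartitionProduceRate;
  const forConsume = targetMsgsPerSec / perPartitionConsumeRate;
  // At least one partition; round up so the target is actually met
  return Math.max(1, Math.ceil(Math.max(forProduce, forConsume)));
}

// Example: a 120k msgs/sec target at 10k/sec per partition on both sides
// yields 12 partitions, matching num.partitions in the config above.
```

Since repartitioning later breaks the key-to-partition mapping, it usually pays to overprovision modestly up front.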

Kafka trade-offs:

Pros:

  • Excellent throughput and scalability
  • Built-in partitioning and replication
  • Strong ordering guarantees per partition
  • Long-term message retention
  • Rich ecosystem (Kafka Connect, Streams)

Cons:

  • High operational complexity
  • Steep learning curve
  • Resource-intensive (requires a ZooKeeper ensemble or, on newer versions, a KRaft quorum)
  • Not ideal for simple queue use cases
  • Complex configuration for high availability

RabbitMQ: Feature-rich general-purpose broker

Architecture and strengths

RabbitMQ is a traditional message broker with exchanges, queues, and bindings. It supports flexible routing patterns and multiple messaging protocols (AMQP, MQTT, STOMP).

When RabbitMQ is the right choice:

  1. General-purpose messaging
  • Work queues, publish-subscribe
  • Request-reply patterns
  • Multiple routing requirements
  2. Complex routing
  • Topic exchanges with wildcards
  • Header-based routing
  • Dead-letter queues
  3. Mixed messaging patterns
  • Need for both queues and pub/sub
  • Different delivery guarantees per queue

RabbitMQ implementation example:

```typescript
// RabbitMQ publisher implementation
import { connect, Channel, Connection } from 'amqplib';

// Handler invoked for each decoded message
type MessageHandler = (message: unknown) => Promise<void>;

class RabbitMQPublisher {
  private connection: Connection | null = null;
  private channel: Channel | null = null;

  constructor(private uri: string) {}

  async connect(): Promise<void> {
    this.connection = await connect(this.uri);
    this.channel = await this.connection.createChannel();

    // Declare exchange
    await this.channel.assertExchange('events', 'topic', { durable: true });

    console.log('RabbitMQ publisher connected');
  }

  async publish(routingKey: string, message: unknown): Promise<void> {
    if (!this.channel) {
      throw new Error('RabbitMQ channel not initialized');
    }

    try {
      const published = this.channel.publish(
        'events',
        routingKey,
        Buffer.from(JSON.stringify(message)),
        {
          persistent: true,
          contentType: 'application/json',
          timestamp: Math.floor(Date.now() / 1000) // AMQP timestamps are numeric epoch seconds
        }
      );

      if (!published) {
        console.warn('Publish buffer full; wait for the "drain" event before retrying');
      }
    } catch (error) {
      console.error('Failed to publish message:', error);
      throw error;
    }
  }

  async disconnect(): Promise<void> {
    if (this.connection) {
      await this.connection.close();
    }
  }
}

// RabbitMQ consumer implementation
class RabbitMQConsumer {
  private connection: Connection | null = null;
  private channel: Channel | null = null;

  constructor(
    private uri: string,
    private queueName: string,
    private routingKey: string,
    private messageHandler: MessageHandler
  ) {}

  async start(): Promise<void> {
    this.connection = await connect(this.uri);
    const channel = await this.connection.createChannel();
    this.channel = channel;

    // Declare the main exchange, the dead-letter exchange, and the queue
    await channel.assertExchange('events', 'topic', { durable: true });
    await channel.assertExchange('dlx', 'direct', { durable: true });
    await channel.assertQueue(this.queueName, {
      durable: true,
      arguments: {
        'x-dead-letter-exchange': 'dlx',
        'x-dead-letter-routing-key': this.queueName
      }
    });

    // Bind queue to exchange
    await channel.bindQueue(this.queueName, 'events', this.routingKey);

    // Set prefetch
    await channel.prefetch(10);

    // Consume messages
    await channel.consume(this.queueName, async (msg) => {
      if (!msg) return;

      try {
        const message = JSON.parse(msg.content.toString());
        console.log('Processing message:', message);

        await this.messageHandler(message);

        channel.ack(msg);
      } catch (error) {
        console.error('Failed to process message:', error);

        // deliveryTag is a channel-wide counter, not a per-message retry
        // count, so it cannot bound retries. Requeue once; if the message
        // was already redelivered, send it to the dead-letter exchange.
        if (msg.fields.redelivered) {
          channel.reject(msg, false); // Dead-letter
        } else {
          channel.reject(msg, true); // Requeue once
        }
      }
    });

    console.log('RabbitMQ consumer started');
  }

  async stop(): Promise<void> {
    if (this.connection) {
      await this.connection.close();
    }
  }
}
```

RabbitMQ operational considerations:

```yaml
# RabbitMQ configuration for production
rabbitmq:
  default_pass: ${RABBITMQ_PASSWORD}
  default_user: admin
  vm_memory_high_watermark: 0.4
  disk_free_limit: 1000000000  # 1GB
  heartbeat: 60
  channel_max: 2048
  default_vhost: /

plugins:
  - rabbitmq_management
  - rabbitmq_prometheus
  - rabbitmq_shovel
  - rabbitmq_federation
```
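A note on retry counting: `deliveryTag` is a channel-wide counter, not a per-message retry count, so it cannot bound retries. When a message is dead-lettered (for example, cycling through a DLX-based retry queue), RabbitMQ appends an entry to its `x-death` header, which does carry a reliable per-message count. A minimal sketch of reading it (the `XDeathEntry` type is a simplified view of the actual header shape):

```typescript
// Count prior dead-letterings of a message from its x-death header.
// RabbitMQ appends an entry (with a running `count`) each time a message
// is dead-lettered from a queue, giving a reliable per-message retry
// count, unlike deliveryTag, which is a channel-wide counter.
type XDeathEntry = { count: number; queue: string; reason: string };

function retryCount(
  headers: Record<string, unknown> | undefined,
  queue: string
): number {
  const deaths = (headers?.['x-death'] as XDeathEntry[] | undefined) ?? [];
  // Only count rejections from the queue we are consuming
  const entry = deaths.find(d => d.queue === queue && d.reason === 'rejected');
  return entry ? entry.count : 0;
}

// In a consumer: dead-letter permanently once the count exceeds a limit, e.g.
// if (retryCount(msg.properties.headers, this.queueName) >= 3) { ... }
```

Checking the count before processing lets a consumer park poison messages permanently instead of looping them through the retry queue forever.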

RabbitMQ trade-offs:

Pros:

  • Flexible routing capabilities
  • Multiple messaging protocols
  • Good performance for most workloads
  • Mature ecosystem and tooling
  • Dead-letter queue support

Cons:

  • Limited horizontal scalability
  • Requires clustering for high availability
  • Not ideal for high-volume event streaming
  • Complex setup for clustering
  • Memory-intensive

Redis Streams: Lightweight and fast

Architecture and strengths

Redis Streams is an append-only log data structure introduced in Redis 5.0, providing basic streaming capabilities with Redis's performance and simplicity.

When Redis Streams is the right choice:

  1. Simple workloads
  • Low to moderate message volume
  • Simple pub/sub patterns
  • Already using Redis for caching
  2. Performance-critical
  • Sub-millisecond latency required
  • Simple message processing
  • Temporary data storage acceptable
  3. Quick prototyping
  • Fast development cycle
  • Minimal operational overhead
  • Don't need complex features

Redis Streams implementation example:

```typescript
// Redis Streams producer implementation
import { createClient } from 'redis';

// Handler invoked for each decoded message
type MessageHandler = (message: unknown) => Promise<void>;

class RedisStreamsProducer {
  private client: ReturnType<typeof createClient>;
  private streamName: string;

  constructor(url: string, streamName: string) {
    this.client = createClient({ url });
    this.streamName = streamName;
  }

  async connect(): Promise<void> {
    await this.client.connect();
    console.log('Redis Streams producer connected');
  }

  async sendEvent(field: string, value: unknown): Promise<void> {
    try {
      const result = await this.client.xAdd(
        this.streamName,
        '*', // let Redis assign the entry ID
        {
          [field]: JSON.stringify(value),
          timestamp: Date.now().toString()
        }
      );

      console.log(`Event sent to ${this.streamName}:`, result);
    } catch (error) {
      console.error('Failed to send event:', error);
      throw error;
    }
  }

  async disconnect(): Promise<void> {
    await this.client.disconnect();
  }
}

// Redis Streams consumer implementation
class RedisStreamsConsumer {
  private client: ReturnType<typeof createClient>;
  private streamName: string;
  private consumerGroup: string;
  private consumerName: string;
  private running = false;

  constructor(
    url: string,
    streamName: string,
    consumerGroup: string,
    consumerName: string,
    private messageHandler: MessageHandler
  ) {
    this.client = createClient({ url });
    this.streamName = streamName;
    this.consumerGroup = consumerGroup;
    this.consumerName = consumerName;
  }

  async start(): Promise<void> {
    await this.client.connect();

    // Create consumer group if it doesn't exist
    try {
      await this.client.xGroupCreate(this.streamName, this.consumerGroup, '0', {
        MKSTREAM: true
      });
      console.log(`Created consumer group: ${this.consumerGroup}`);
    } catch (error) {
      // BUSYGROUP means the group already exists; anything else is a real error
      if (!(error as Error).message?.includes('BUSYGROUP')) {
        throw error;
      }
    }

    console.log('Redis Streams consumer started');
    this.running = true;

    // Start consuming
    while (this.running) {
      try {
        const messages = await this.client.xReadGroup(
          this.consumerGroup,
          this.consumerName,
          [
            {
              key: this.streamName,
              id: '>' // only entries never delivered to this group
            }
          ],
          {
            COUNT: 10,
            BLOCK: 5000 // Block for up to 5 seconds
          }
        );

        if (messages) {
          for (const stream of messages) {
            for (const message of stream.messages) {
              try {
                // Skip the producer's timestamp field and decode the payload
                const { timestamp, ...fields } = message.message;
                const field = Object.keys(fields)[0];
                const value = JSON.parse(fields[field] as string);

                console.log('Processing message:', message.id);

                await this.messageHandler(value);

                // Acknowledge message
                await this.client.xAck(this.streamName, this.consumerGroup, message.id);
              } catch (error) {
                console.error('Failed to process message:', error);
                // Unacked entries stay in the pending entries list; they are
                // only retried if reclaimed with XCLAIM/XAUTOCLAIM
              }
            }
          }
        }
      } catch (error) {
        console.error('Error consuming messages:', error);
        await this.sleep(1000); // Wait before retry
      }
    }
  }

  private sleep(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  async stop(): Promise<void> {
    this.running = false;
    await this.client.disconnect();
  }
}
```
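One operational detail worth making explicit: an unacknowledged message stays in the consumer group's pending entries list (PEL) and is only retried if some process reclaims it via XPENDING plus XCLAIM or XAUTOCLAIM. The triage step of that recovery task can be kept as pure logic, sketched below (the `PendingEntry` shape is illustrative; the field names are ours, mirroring what XPENDING reports):

```typescript
// Decide which stale pending entries to reclaim for reprocessing and which
// have failed too many times and should be routed to a dead-letter stream.
interface PendingEntry {
  id: string;            // stream entry ID, e.g. "1693500000000-0"
  idleMs: number;        // time since last delivery
  deliveryCount: number; // how many times the entry was delivered
}

function triagePending(
  entries: PendingEntry[],
  minIdleMs: number,
  maxDeliveries: number
): { reclaim: string[]; deadLetter: string[] } {
  const reclaim: string[] = [];
  const deadLetter: string[] = [];
  for (const e of entries) {
    if (e.idleMs < minIdleMs) continue; // likely still being worked on
    if (e.deliveryCount >= maxDeliveries) deadLetter.push(e.id);
    else reclaim.push(e.id);
  }
  return { reclaim, deadLetter };
}
```

A periodic task can then XCLAIM the `reclaim` IDs for itself and XADD the `deadLetter` IDs to a separate stream before acknowledging them.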

Redis Streams trade-offs:

Pros:

  • Extremely fast (sub-millisecond latency)
  • Simple to implement and operate
  • Minimal resource requirements
  • Consumer groups for parallel processing
  • Works with existing Redis infrastructure

Cons:

  • Limited persistence (memory-based, optional AOF)
  • No advanced routing capabilities
  • Limited tooling compared to Kafka/RabbitMQ
  • Not suitable for long-term retention
  • Limited scalability (a single stream is not partitioned, even in Redis Cluster)

Amazon SQS: Managed and simple

Architecture and strengths

Amazon SQS is a fully managed message queue service that eliminates operational overhead. Standard queues offer nearly unlimited throughput with automatic scaling (FIFO queues are rate-limited).

When Amazon SQS is the right choice:

  1. AWS-native deployments
  • Already using AWS infrastructure
  • Want minimal operational overhead
  • Need automatic scaling
  2. Simple queue requirements
  • Basic FIFO or standard queues
  • Don't need advanced routing
  • Accept cloud lock-in
  3. Low operational complexity
  • Don't want to manage message brokers
  • Need high availability out of the box
  • Want predictable pricing

Amazon SQS implementation example:

```typescript
// SQS producer implementation
import {
  SQSClient,
  SendMessageCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand
} from '@aws-sdk/client-sqs';

// Handler invoked for each decoded message
type MessageHandler = (message: unknown) => Promise<void>;

class SQSProducer {
  private client: SQSClient;
  private queueUrl: string;

  constructor(
    region: string,
    queueUrl: string,
    credentials?: {
      accessKeyId: string;
      secretAccessKey: string;
    }
  ) {
    this.client = new SQSClient({ region, credentials });
    this.queueUrl = queueUrl;
  }

  async sendMessage(message: unknown): Promise<void> {
    try {
      const command = new SendMessageCommand({
        QueueUrl: this.queueUrl,
        MessageBody: JSON.stringify(message),
        MessageAttributes: {
          Timestamp: {
            DataType: 'Number',
            StringValue: Date.now().toString()
          },
          ContentType: {
            DataType: 'String',
            StringValue: 'application/json'
          }
        }
      });

      const response = await this.client.send(command);

      console.log('Message sent to SQS:', response.MessageId);
    } catch (error) {
      console.error('Failed to send message:', error);
      throw error;
    }
  }
}

// SQS consumer implementation
class SQSConsumer {
  private client: SQSClient;
  private queueUrl: string;
  private maxNumberOfMessages = 10;
  private waitTimeSeconds = 20; // Long polling
  private running = false;

  constructor(
    region: string,
    queueUrl: string,
    private messageHandler: MessageHandler,
    credentials?: {
      accessKeyId: string;
      secretAccessKey: string;
    }
  ) {
    this.client = new SQSClient({ region, credentials });
    this.queueUrl = queueUrl;
  }

  async start(): Promise<void> {
    console.log('SQS consumer started');
    this.running = true;

    while (this.running) {
      try {
        const command = new ReceiveMessageCommand({
          QueueUrl: this.queueUrl,
          MaxNumberOfMessages: this.maxNumberOfMessages,
          WaitTimeSeconds: this.waitTimeSeconds,
          AttributeNames: ['All'],
          MessageAttributeNames: ['All']
        });

        const response = await this.client.send(command);

        if (response.Messages && response.Messages.length > 0) {
          console.log(`Received ${response.Messages.length} messages`);

          for (const message of response.Messages) {
            try {
              const body = JSON.parse(message.Body || '{}');

              console.log('Processing message:', message.MessageId);

              await this.messageHandler(body);

              // Delete message after successful processing
              await this.client.send(new DeleteMessageCommand({
                QueueUrl: this.queueUrl,
                ReceiptHandle: message.ReceiptHandle
              }));
            } catch (error) {
              console.error('Failed to process message:', error);
              // Message becomes visible again after the visibility timeout
            }
          }
        }
      } catch (error) {
        console.error('Error consuming messages:', error);
        await this.sleep(5000); // Wait before retry
      }
    }
  }

  private sleep(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  stop(): void {
    this.running = false;
  }
}
```
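The consumer above leans on the queue's fixed visibility timeout for retry pacing. A common refinement is per-message exponential backoff: read the message's `ApproximateReceiveCount` system attribute and apply a growing timeout with `ChangeMessageVisibility` before the next retry. The backoff math is pure and easy to test (the base and cap values below are arbitrary example choices, not SQS defaults):

```typescript
// Exponential backoff for SQS retries: compute a new visibility timeout
// from the message's ApproximateReceiveCount attribute, then apply it with
// ChangeMessageVisibilityCommand instead of letting the queue-wide default
// govern every retry.
function retryVisibilityTimeout(
  approximateReceiveCount: number,
  baseSeconds = 5,
  capSeconds = 900 // SQS allows up to 12 hours; cap well below that
): number {
  const attempt = Math.max(1, approximateReceiveCount);
  // 5s, 10s, 20s, 40s, ... capped at capSeconds
  return Math.min(capSeconds, baseSeconds * 2 ** (attempt - 1));
}
```

Pair this with a redrive policy (`maxReceiveCount` plus a dead-letter queue) so poison messages eventually leave the queue instead of retrying forever.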

Amazon SQS trade-offs:

Pros:

  • Fully managed (no operational overhead)
  • Nearly unlimited throughput and automatic scaling (standard queues)
  • Built-in monitoring and metrics
  • High availability out of the box
  • Simple pricing model

Cons:

  • Cloud lock-in (AWS)
  • No advanced routing capabilities
  • Limited message size (256KB)
  • Not suitable for complex use cases
  • Higher latency compared to Redis/RabbitMQ

Decision framework

Questions to guide your selection

1. What is your message volume?

  • < 1K messages/sec → Redis Streams or RabbitMQ
  • 1K-10K messages/sec → RabbitMQ or SQS
  • > 10K messages/sec → Kafka

2. Do you need to replay messages?

  • Yes → Kafka or Redis Streams
  • No → RabbitMQ or SQS

3. What ordering guarantees do you need?

  • Strict ordering → Kafka (per partition), RabbitMQ (per queue), or SQS FIFO queues
  • Best effort acceptable → SQS standard queues

4. What is your operational capacity?

  • Want minimal ops → SQS or Redis Streams
  • Can manage infrastructure → RabbitMQ or Kafka

5. What is your cloud strategy?

  • Multi-cloud → Kafka or RabbitMQ
  • AWS-native → SQS
  • Cloud-agnostic → Kafka or RabbitMQ

6. What is your retention requirement?

  • Long-term (weeks/months) → Kafka
  • Medium-term (days) → RabbitMQ or SQS
  • Short-term (hours) → Redis Streams
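The six questions above can be collapsed into a first-pass shortlist function. This is purely illustrative: the thresholds mirror the lists in this guide, and a real decision should also weigh benchmarks, team experience, and existing infrastructure:

```typescript
// First-pass broker shortlist derived from the decision framework above.
interface Requirements {
  messagesPerSec: number;
  needsReplay: boolean;
  minimalOps: boolean;
  awsNative: boolean;
  retention: 'hours' | 'days' | 'weeks';
}

function shortlistBrokers(req: Requirements): string[] {
  // Very high volume or long retention points squarely at Kafka
  if (req.messagesPerSec > 10_000 || req.retention === 'weeks') {
    return ['Kafka'];
  }
  // AWS-native, low-ops, no replay: the managed option wins
  if (req.awsNative && req.minimalOps && !req.needsReplay) {
    return ['SQS'];
  }
  // Short-retention replay fits Redis Streams (Kafka as the heavier option)
  if (req.needsReplay && req.retention === 'hours') {
    return ['Redis Streams', 'Kafka'];
  }
  return req.minimalOps ? ['SQS', 'Redis Streams'] : ['RabbitMQ', 'Kafka'];
}
```

Treat the output as a shortlist to benchmark, not a verdict.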

Hybrid approaches

In complex systems, you may need multiple message brokers for different use cases:

```typescript
// Hybrid message broker architecture
class HybridMessageBroker {
  private kafkaProducer: KafkaEventProducer;
  private rabbitmqPublisher: RabbitMQPublisher;
  private redisStreamsProducer: RedisStreamsProducer;

  constructor() {
    this.kafkaProducer = new KafkaEventProducer(
      ['kafka-broker:9092'],
      'my-app',
      'high-volume-events'
    );

    this.rabbitmqPublisher = new RabbitMQPublisher(
      'amqp://rabbitmq:5672'
    );

    this.redisStreamsProducer = new RedisStreamsProducer(
      'redis://redis:6379',
      'fast-events'
    );
  }

  // Connect the producers that need an explicit connect before publishing
  // (the Kafka producer connects itself in its constructor)
  async connect(): Promise<void> {
    await Promise.all([
      this.rabbitmqPublisher.connect(),
      this.redisStreamsProducer.connect()
    ]);
  }

  async sendEvent(event: any): Promise<void> {
    switch (event.type) {
      case 'high_volume_analytics':
        // Use Kafka for high-volume events
        await this.kafkaProducer.sendEvent(event.id, event);
        break;

      case 'business_event':
        // Use RabbitMQ for business events with complex routing
        await this.rabbitmqPublisher.publish(event.routingKey, event);
        break;

      case 'low_latency':
        // Use Redis Streams for low-latency events
        await this.redisStreamsProducer.sendEvent('event', event);
        break;

      default:
        throw new Error(`Unknown event type: ${event.type}`);
    }
  }
}
```

Conclusion

The right message broker depends on your specific requirements, not on what's "popular" or what competitors are using.

  • Kafka for high-volume event streaming and replay requirements
  • RabbitMQ for general-purpose messaging with complex routing
  • Redis Streams for simple, low-latency workloads
  • Amazon SQS for AWS-native deployments with minimal ops

Start with the simplest solution that meets your requirements. You can always migrate to a more complex broker if needed—but the cost of premature complexity is high.


Need help designing an asynchronous microservices architecture? Talk to Imperialis about message broker selection, architecture design, and implementation for your production system.
