
Configuration Examples

Copy-paste ready YAML configurations for every common scenario. All examples use ${ENV_VAR} syntax for secrets — never hardcode credentials in config files.

Tip: Start with the Basic Local Backup for development, then adapt it to your production storage backend.


1. Basic Local Backup

Simple filesystem-based backup for development and testing.

config/basic-local.yaml
mode: backup
backup_id: "local-backup-001"

source:
  amqp_url: "amqp://guest:guest@localhost:5672/%2f"
  management_url: "http://localhost:15672"
  management_username: guest
  management_password: guest

storage:
  backend: filesystem
  path: ./backups

backup:
  compression: zstd
  compression_level: 3
  prefetch_count: 50
  max_concurrent_queues: 2
  include_definitions: true
  stop_at_current_depth: true

rabbitmq-backup backup --config config/basic-local.yaml

2. AWS S3 Backup

Production backup to S3 with IAM credentials and multi-cluster prefix.

config/aws-s3-backup.yaml
mode: backup
backup_id: "prod-backup-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.prod.internal:5672/%2f"
  management_url: "http://rabbitmq.prod.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

queues:
  include:
    - "orders-*"
    - "payments-*"
  exclude:
    - "*-dead-letter"
    - "*-dlq"

storage:
  backend: s3
  bucket: mycompany-rabbitmq-backups
  region: us-east-1
  prefix: prod-cluster-01/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

backup:
  segment_max_bytes: 134217728 # 128 MB
  compression: zstd
  compression_level: 3
  prefetch_count: 100
  max_concurrent_queues: 4
  include_definitions: true
  stop_at_current_depth: true

offset_storage:
  backend: sqlite
  db_path: ./offsets.db
  s3_key: state/offsets.db
  sync_interval_secs: 30

metrics:
  enabled: true
  port: 8080

export RMQ_USERNAME="backup_user"
export RMQ_PASSWORD="$(vault kv get -field=password secret/rmq/backup)"
export AWS_ACCESS_KEY_ID="$(vault kv get -field=access_key secret/aws/s3)"
export AWS_SECRET_ACCESS_KEY="$(vault kv get -field=secret_key secret/aws/s3)"

rabbitmq-backup backup --config config/aws-s3-backup.yaml

MinIO Compatibility

For MinIO or other S3-compatible storage, add path_style: true and allow_http: true (for non-TLS), and set endpoint to your MinIO URL.
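As a sketch, a MinIO-adapted storage section might look like the following; the endpoint URL, bucket name, and credential variable names are placeholders:

```yaml
storage:
  backend: s3
  bucket: rmq-backups                      # placeholder bucket
  endpoint: "http://minio.internal:9000"   # placeholder MinIO URL
  region: us-east-1
  path_style: true    # MinIO requires path-style addressing
  allow_http: true    # only for non-TLS endpoints
  access_key: ${MINIO_ACCESS_KEY}
  secret_key: ${MINIO_SECRET_KEY}
```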


3. Azure Blob Storage Backup

config/azure-blob-backup.yaml
mode: backup
backup_id: "azure-backup-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.eastus.internal:5672/%2f"
  management_url: "http://rabbitmq.eastus.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: azure
  account_name: rmqbackupstorage
  container_name: rabbitmq-backups
  prefix: prod/
  # Option 1: Account key
  account_key: ${AZURE_STORAGE_ACCOUNT_KEY}
  # Option 2: Workload Identity (for AKS) - uncomment and remove account_key
  # use_workload_identity: true
  # client_id: ${AZURE_CLIENT_ID}
  # tenant_id: ${AZURE_TENANT_ID}

backup:
  compression: zstd
  include_definitions: true
  stop_at_current_depth: true

export AZURE_STORAGE_ACCOUNT_KEY="$(az storage account keys list \
  --account-name rmqbackupstorage --query '[0].value' -o tsv)"

rabbitmq-backup backup --config config/azure-blob-backup.yaml

4. GCS Backup

config/gcs-backup.yaml
mode: backup
backup_id: "gcs-backup-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.internal:5672/%2f"
  management_url: "http://rabbitmq.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: gcs
  bucket: rmq-backups-prod
  service_account_path: /var/secrets/gcp/key.json
  prefix: cluster-us-central1/

backup:
  compression: zstd
  include_definitions: true
  stop_at_current_depth: true

export GOOGLE_APPLICATION_CREDENTIALS="/var/secrets/gcp/key.json"
rabbitmq-backup backup --config config/gcs-backup.yaml

5. Full Production Backup (TLS + Streams + Metrics)

Enterprise-grade configuration with AMQPS, queue filtering, stream support, and Prometheus metrics.

config/production-full.yaml
mode: backup
backup_id: "prod-full-001"

source:
  # AMQPS (TLS) connection
  amqp_url: "amqps://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-0.rabbitmq.svc:5671/%2f"
  management_url: "https://rabbitmq-0.rabbitmq.svc:15671"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}
  stream_port: 5552

tls:
  enabled: true
  ca_cert: /etc/rabbitmq/certs/ca.pem
  client_cert: /etc/rabbitmq/certs/client-cert.pem
  client_key: /etc/rabbitmq/certs/client-key.pem

queues:
  include:
    - "orders-*"
    - "payments-*"
    - "events-*"
  exclude:
    - "*-dead-letter"
    - "*-dlq"
    - "temp-*"
  vhosts:
    - "/"
  types:
    - classic
    - quorum
    - stream
  min_messages: 0

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

backup:
  segment_max_bytes: 268435456 # 256 MB
  segment_max_interval_ms: 120000 # 2 minutes
  compression: zstd
  compression_level: 6
  prefetch_count: 200
  requeue_strategy: cancel
  max_concurrent_queues: 8
  checkpoint_interval_secs: 5
  sync_interval_secs: 30
  include_definitions: true
  stop_at_current_depth: true
  stream_enabled: true

offset_storage:
  backend: sqlite
  db_path: ./offsets.db
  s3_key: state/offsets.db
  sync_interval_secs: 10

metrics:
  enabled: true
  port: 8080
  bind_address: "0.0.0.0"
  path: /metrics

6. Restore with PITR

Point-in-time recovery: restore only messages from a specific time window.

config/restore-pitr.yaml
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  # Restore messages from 00:00 to 06:00 UTC on 2025-04-11
  time_window_start: 1744329600000 # epoch millis
  time_window_end: 1744351200000

  # Remap queues to avoid conflicts
  queue_mapping:
    "orders-queue": "orders-queue-restored"
    "payments-queue": "payments-queue-restored"

  publish_mode: direct-to-queue
  publisher_confirms: true
  max_concurrent_queues: 4
  produce_batch_size: 100
  rate_limit_messages_per_sec: 10000
  restore_definitions: false
  dry_run: true

# Run dry-run first to preview
rabbitmq-backup restore --config config/restore-pitr.yaml
# Check output, then set dry_run: false and run again
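The epoch-millisecond boundaries can be derived from human-readable UTC timestamps; a sketch assuming GNU date is available (append 000 to convert seconds to milliseconds):

```shell
# Compute time_window_start / time_window_end in epoch milliseconds (GNU date).
start_ms="$(date -u -d '2025-04-11 00:00:00' +%s)000"
end_ms="$(date -u -d '2025-04-11 06:00:00' +%s)000"

echo "time_window_start: ${start_ms}"   # time_window_start: 1744329600000
echo "time_window_end: ${end_ms}"       # time_window_end: 1744351200000
```

On BSD/macOS, `date` takes different flags (`-j -f`), so adjust accordingly.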

7. Restore with Definitions

Import topology before messages for a complete cluster recovery.

config/restore-definitions.yaml
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-recovery.internal:5672/%2f"
  management_url: "http://rabbitmq-recovery.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  publish_mode: exchange
  publisher_confirms: true
  max_concurrent_queues: 4
  produce_batch_size: 500

  # Import topology FIRST, then messages
  restore_definitions: true
  definitions_dry_run: false # Set true to preview
  # Leave false in exchange mode: exchange restore expects topology bindings
  # from the definitions restore, not queue auto-creation alone.
  create_missing_queues: false

  dry_run: false

Two-Phase Restore
  1. Set definitions_dry_run: true to preview which queues and exchanges will be created
  2. Review the output, then set definitions_dry_run: false and run again
  3. Set dry_run: true to preview message counts per queue
  4. Finally, set dry_run: false for the actual message restore

8. Multi-Vhost Backup

Back up queues across multiple vhosts in a single operation.

config/multi-vhost-backup.yaml
mode: backup
backup_id: "multi-vhost-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.internal:5672/%2f"
  management_url: "http://rabbitmq.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

queues:
  include:
    - "*"
  exclude:
    - "temp-*"
  vhosts:
    - "/"
    - "/production"
    - "/staging"

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: multi-vhost/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

backup:
  compression: zstd
  max_concurrent_queues: 4
  include_definitions: true
  stop_at_current_depth: true

9. Restore with Missing Target Queue Creation

Use this when you want a message-only restore into target queues that may not exist yet.

config/restore-create-missing.yaml
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  publish_mode: direct-to-queue
  publisher_confirms: true
  restore_definitions: false

  # If the target queue is missing, declare it before publishing.
  # Requires target.management_url, target.management_username, and target.management_password.
  create_missing_queues: true

  queue_mapping:
    "orders-primary": "orders-dr"
  vhost_mapping:
    "/": "/"

Created queues use the queue type recorded in the source manifest: classic, quorum, or stream.


10. Selective Definitions Restore

Restore only selected topology from the backed-up definitions before restoring messages.

config/restore-selected-definitions.yaml
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  restore_definitions: true
  definitions_dry_run: false
  definitions_selection:
    queues:
      - "orders-primary"
    exchanges:
      - "orders-exchange"

  publish_mode: exchange
  publisher_confirms: true
Selecting a queue automatically retains the source exchanges and bindings it requires. Invalid selectors fail before any definitions are imported.


11. Resumable Restore

Use restore checkpoints for large restores or unreliable networks. Completed queues are skipped on rerun and partial queues resume after the last checkpointed record.

config/restore-resumable.yaml
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/

restore:
  publish_mode: direct-to-queue
  publisher_confirms: true
  checkpoint_state: /var/lib/rabbitmq-backup/restore-checkpoint.db
  produce_batch_size: 100

12. High-Performance Backup

Tuned for maximum throughput with large segments, LZ4 compression, and high concurrency.

config/high-performance.yaml
mode: backup
backup_id: "perf-backup-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.internal:5672/%2f"
  management_url: "http://rabbitmq.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: perf-backup/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

backup:
  # Large segments reduce per-segment overhead
  segment_max_bytes: 536870912 # 512 MB
  segment_max_interval_ms: 120000 # 2 minutes

  # LZ4: faster than zstd, slightly worse ratio
  compression: lz4

  # High concurrency
  prefetch_count: 500
  max_concurrent_queues: 16

  # Reduce checkpoint overhead
  checkpoint_interval_secs: 30
  sync_interval_secs: 60

  include_definitions: true
  stop_at_current_depth: true
  stream_enabled: true

Resource Impact

High concurrency (max_concurrent_queues: 16) with aggressive prefetch (500) increases memory usage and load on the RabbitMQ broker. Monitor cluster health during backup.
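The segment_max_bytes values used across these examples are plain powers-of-two byte counts (MB × 1024 × 1024), which you can sanity-check in any POSIX shell:

```shell
# segment_max_bytes values as megabytes times 1024 * 1024
echo $((128 * 1024 * 1024))   # 134217728  (AWS S3 example)
echo $((256 * 1024 * 1024))   # 268435456  (full production example)
echo $((512 * 1024 * 1024))   # 536870912  (high-performance example)
```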


13. Disaster Recovery Restore

Complete cluster recovery with comprehensive queue, vhost, and exchange remapping.

config/disaster-recovery.yaml
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_DR_USERNAME}:${RMQ_DR_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_DR_MGMT_USERNAME}
  management_password: ${RMQ_DR_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  # Comprehensive remapping for the DR cluster
  queue_mapping:
    "orders-primary": "orders-dr"
    "payments-primary": "payments-dr"
    "notifications-primary": "notifications-dr"

  vhost_mapping:
    "/production": "/dr-production"

  exchange_mapping:
    "events-v1": "events-v2"

  publish_mode: exchange
  publisher_confirms: true
  max_concurrent_queues: 8
  produce_batch_size: 1000
  rate_limit_messages_per_sec: 50000

  restore_definitions: true
  create_missing_queues: false
  checkpoint_state: /var/lib/rabbitmq-backup/dr-restore-checkpoint.db

  # ALWAYS dry-run first for DR
  dry_run: false

# Stage 1: Dry-run to preview
sed 's/dry_run: false/dry_run: true/' config/disaster-recovery.yaml > /tmp/dr-dryrun.yaml
rabbitmq-backup restore --config /tmp/dr-dryrun.yaml

# Stage 2: Actual restore
rabbitmq-backup restore --config config/disaster-recovery.yaml

# Stage 3: Validate
rabbitmq-backup validate --path s3://mycompany-rmq-backups/production \
  --backup-id prod-full-001 --deep

Environment Variable Management

HashiCorp Vault

export RMQ_USERNAME="$(vault kv get -field=username secret/prod/rmq)"
export RMQ_PASSWORD="$(vault kv get -field=password secret/prod/rmq)"
export AWS_ACCESS_KEY_ID="$(vault kv get -field=access_key secret/prod/aws)"
export AWS_SECRET_ACCESS_KEY="$(vault kv get -field=secret_key secret/prod/aws)"

AWS Secrets Manager

export RMQ_PASSWORD="$(aws secretsmanager get-secret-value \
  --secret-id rmq/prod/backup --query SecretString --output text | jq -r '.password')"

Kubernetes Secrets

kubectl create secret generic rmq-backup-creds \
  --from-literal=username=backup_user \
  --from-literal=password="$(openssl rand -base64 32)"
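The configs above read credentials from environment variables, so the secret still has to be surfaced into the backup container's environment. A minimal pod-spec sketch; the container name and image are placeholders:

```yaml
containers:
  - name: rabbitmq-backup                    # placeholder name
    image: example/rabbitmq-backup:latest    # placeholder image
    env:
      - name: RMQ_USERNAME
        valueFrom:
          secretKeyRef:
            name: rmq-backup-creds
            key: username
      - name: RMQ_PASSWORD
        valueFrom:
          secretKeyRef:
            name: rmq-backup-creds
            key: password
```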

Quick Reference

| Scenario | Backend | Key Features | Best For |
| --- | --- | --- | --- |
| Basic Local | Filesystem | Simple, no cloud deps | Development |
| AWS S3 | S3 | IAM, multi-cluster prefix | Production AWS |
| Azure Blob | Azure | Account key or Workload Identity | Production Azure |
| GCS | GCS | Service account | Production GCP |
| Full Production | S3 | TLS, filtering, streams, metrics | Enterprise |
| PITR Restore | Any | Time window, queue remapping | Incident recovery |
| Definitions Restore | Any | Topology-first, dry-run | Cluster rebuild |
| Create Missing Queues | Any | Direct-to-queue auto-create | Message-only restores |
| Selective Definitions | Any | Partial topology import | Tenant/app-scoped recovery |
| Resumable Restore | Any | Restore checkpoint DB | Large restores |
| Multi-Vhost | Any | Cross-vhost, per-vhost filtering | Multi-tenant |
| High-Performance | Any | LZ4, high concurrency, large segments | Large queues |
| Disaster Recovery | Any | Full remapping (queue/vhost/exchange) | Cross-cluster DR |