Configuration Examples
Copy-paste-ready YAML configurations for common scenarios. All examples use ${ENV_VAR} syntax for secrets — never hardcode credentials in config files.
Start with the Basic Local Backup for development, then adapt it to your production storage backend.
1. Basic Local Backup
Simple filesystem-based backup for development and testing.
mode: backup
backup_id: "local-backup-001"

source:
  amqp_url: "amqp://guest:guest@localhost:5672/%2f"
  management_url: "http://localhost:15672"
  management_username: guest
  management_password: guest

storage:
  backend: filesystem
  path: ./backups

backup:
  compression: zstd
  compression_level: 3
  prefetch_count: 50
  max_concurrent_queues: 2
  include_definitions: true
  stop_at_current_depth: true
rabbitmq-backup backup --config config/basic-local.yaml
2. AWS S3 Backup
Production backup to S3 with IAM credentials and multi-cluster prefix.
mode: backup
backup_id: "prod-backup-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.prod.internal:5672/%2f"
  management_url: "http://rabbitmq.prod.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

queues:
  include:
    - "orders-*"
    - "payments-*"
  exclude:
    - "*-dead-letter"
    - "*-dlq"

storage:
  backend: s3
  bucket: mycompany-rabbitmq-backups
  region: us-east-1
  prefix: prod-cluster-01/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

backup:
  segment_max_bytes: 134217728 # 128 MB
  compression: zstd
  compression_level: 3
  prefetch_count: 100
  max_concurrent_queues: 4
  include_definitions: true
  stop_at_current_depth: true

offset_storage:
  backend: sqlite
  db_path: ./offsets.db
  s3_key: state/offsets.db
  sync_interval_secs: 30

metrics:
  enabled: true
  port: 8080
export RMQ_USERNAME="backup_user"
export RMQ_PASSWORD="$(vault kv get -field=password secret/rmq/backup)"
export AWS_ACCESS_KEY_ID="$(vault kv get -field=access_key secret/aws/s3)"
export AWS_SECRET_ACCESS_KEY="$(vault kv get -field=secret_key secret/aws/s3)"
rabbitmq-backup backup --config config/aws-s3-backup.yaml
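Size comments like "# 128 MB" are easy to let drift from the actual segment_max_bytes value; a quick shell check (plain POSIX arithmetic, nothing tool-specific) keeps them honest:

```shell
# Confirm the power-of-two byte counts used for segment_max_bytes
# in these examples.
echo $((128 * 1024 * 1024))   # 134217728 (128 MB)
echo $((256 * 1024 * 1024))   # 268435456 (256 MB)
echo $((512 * 1024 * 1024))   # 536870912 (512 MB)
```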
For MinIO or other S3-compatible storage, add path_style: true and allow_http: true (for non-TLS), and set endpoint to your MinIO URL.
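As a sketch, a MinIO-flavored storage block might look like the following; the path_style, allow_http, and endpoint keys are the ones named above, while the endpoint URL, bucket name, and env var names are placeholders:

```yaml
storage:
  backend: s3
  endpoint: http://minio.internal:9000   # placeholder MinIO URL
  bucket: rabbitmq-backups
  path_style: true
  allow_http: true   # only needed for non-TLS endpoints
  access_key: ${MINIO_ACCESS_KEY}
  secret_key: ${MINIO_SECRET_KEY}
```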
3. Azure Blob Storage Backup
mode: backup
backup_id: "azure-backup-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.eastus.internal:5672/%2f"
  management_url: "http://rabbitmq.eastus.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: azure
  account_name: rmqbackupstorage
  container_name: rabbitmq-backups
  prefix: prod/
  # Option 1: Account key
  account_key: ${AZURE_STORAGE_ACCOUNT_KEY}
  # Option 2: Workload Identity (for AKS) — uncomment and remove account_key
  # use_workload_identity: true
  # client_id: ${AZURE_CLIENT_ID}
  # tenant_id: ${AZURE_TENANT_ID}

backup:
  compression: zstd
  include_definitions: true
  stop_at_current_depth: true
export AZURE_STORAGE_ACCOUNT_KEY="$(az storage account keys list \
--account-name rmqbackupstorage --query '[0].value' -o tsv)"
rabbitmq-backup backup --config config/azure-blob-backup.yaml
4. GCS Backup
mode: backup
backup_id: "gcs-backup-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.internal:5672/%2f"
  management_url: "http://rabbitmq.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: gcs
  bucket: rmq-backups-prod
  service_account_path: /var/secrets/gcp/key.json
  prefix: cluster-us-central1/

backup:
  compression: zstd
  include_definitions: true
  stop_at_current_depth: true
export GOOGLE_APPLICATION_CREDENTIALS="/var/secrets/gcp/key.json"
rabbitmq-backup backup --config config/gcs-backup.yaml
5. Full Production Backup (TLS + Streams + Metrics)
Enterprise-grade configuration with AMQPS, queue filtering, stream support, and Prometheus metrics.
mode: backup
backup_id: "prod-full-001"

source:
  # AMQPS (TLS) connection
  amqp_url: "amqps://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-0.rabbitmq.svc:5671/%2f"
  management_url: "https://rabbitmq-0.rabbitmq.svc:15671"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}
  stream_port: 5552
  tls:
    enabled: true
    ca_cert: /etc/rabbitmq/certs/ca.pem
    client_cert: /etc/rabbitmq/certs/client-cert.pem
    client_key: /etc/rabbitmq/certs/client-key.pem

queues:
  include:
    - "orders-*"
    - "payments-*"
    - "events-*"
  exclude:
    - "*-dead-letter"
    - "*-dlq"
    - "temp-*"
  vhosts:
    - "/"
  types:
    - classic
    - quorum
    - stream
  min_messages: 0

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

backup:
  segment_max_bytes: 268435456 # 256 MB
  segment_max_interval_ms: 120000 # 2 minutes
  compression: zstd
  compression_level: 6
  prefetch_count: 200
  requeue_strategy: cancel
  max_concurrent_queues: 8
  checkpoint_interval_secs: 5
  sync_interval_secs: 30
  include_definitions: true
  stop_at_current_depth: true
  stream_enabled: true

offset_storage:
  backend: sqlite
  db_path: ./offsets.db
  s3_key: state/offsets.db
  sync_interval_secs: 10

metrics:
  enabled: true
  port: 8080
  bind_address: "0.0.0.0"
  path: /metrics
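With metrics enabled, the tool serves Prometheus-format metrics on port 8080 at /metrics. A minimal scrape-job sketch for picking it up (the job name and target host are assumptions, not part of this tool's config):

```yaml
scrape_configs:
  - job_name: rabbitmq-backup            # assumed job name
    metrics_path: /metrics
    static_configs:
      - targets: ["backup-runner.internal:8080"]   # assumed host
```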
6. Restore with PITR
Point-in-time recovery: restore only messages from a specific time window.
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  # Restore messages from 08:00 to 14:00 UTC on 2026-04-10
  time_window_start: 1775808000000 # epoch millis
  time_window_end: 1775829600000
  # Remap queues to avoid conflicts
  queue_mapping:
    "orders-queue": "orders-queue-restored"
    "payments-queue": "payments-queue-restored"
  publish_mode: direct-to-queue
  publisher_confirms: true
  max_concurrent_queues: 4
  produce_batch_size: 100
  rate_limit_messages_per_sec: 10000
  restore_definitions: false
  dry_run: true # Preview first; set false for the actual restore
# Run dry-run first to preview
rabbitmq-backup restore --config config/restore-pitr.yaml
# Check output, then set dry_run: false and run again
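time_window_start and time_window_end are epoch milliseconds, which are easy to get wrong by hand. They can be derived from ISO timestamps instead; this sketch assumes GNU date (BSD/macOS date uses different flags):

```shell
# Convert the PITR window bounds to epoch milliseconds (GNU date).
# %s prints epoch seconds; the literal 000 suffix scales to milliseconds.
start_ms=$(date -u -d '2026-04-10T08:00:00Z' +%s000)
end_ms=$(date -u -d '2026-04-10T14:00:00Z' +%s000)
echo "time_window_start: $start_ms"
echo "time_window_end: $end_ms"
```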
7. Restore with Definitions
Import topology before messages for a complete cluster recovery.
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-recovery.internal:5672/%2f"
  management_url: "http://rabbitmq-recovery.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  publish_mode: exchange
  publisher_confirms: true
  max_concurrent_queues: 4
  produce_batch_size: 500
  # Import topology FIRST, then messages
  restore_definitions: true
  definitions_dry_run: false # Set true to preview
  # Leave false in exchange mode. Exchange restore expects topology bindings
  # from definitions restore, not queue auto-creation alone.
  create_missing_queues: false
  dry_run: false
- Set definitions_dry_run: true to preview what queues/exchanges will be created
- Review the output, then set definitions_dry_run: false and run again
- Set dry_run: true to preview message counts per queue
- Finally, set dry_run: false for the actual message restore
8. Multi-Vhost Backup
Back up queues across multiple vhosts in a single operation.
mode: backup
backup_id: "multi-vhost-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.internal:5672/%2f"
  management_url: "http://rabbitmq.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

queues:
  include:
    - "*"
  exclude:
    - "temp-*"
  vhosts:
    - "/"
    - "/production"
    - "/staging"

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: multi-vhost/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

backup:
  compression: zstd
  max_concurrent_queues: 4
  include_definitions: true
  stop_at_current_depth: true
9. Restore with Missing Target Queue Creation
Use this when you want a message-only restore into target queues that may not exist yet.
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  publish_mode: direct-to-queue
  publisher_confirms: true
  restore_definitions: false
  # If the target queue is missing, declare it before publishing.
  # Requires target.management_url, target.management_username, and target.management_password.
  create_missing_queues: true
  queue_mapping:
    "orders-primary": "orders-dr"
  vhost_mapping:
    "/": "/"
Created queues use the source manifest queue type: classic, quorum, or stream.
10. Selective Definitions Restore
Restore only selected topology from the backed-up definitions before restoring messages.
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  restore_definitions: true
  definitions_dry_run: false
  definitions_selection:
    queues:
      - "orders-primary"
    exchanges:
      - "orders-exchange"
  publish_mode: exchange
  publisher_confirms: true
Queue selection keeps the source exchanges and bindings required by the selected queues. Invalid selectors fail validation before any definitions are imported.
11. Resumable Restore
Use restore checkpoints for large restores or unreliable networks. Completed queues are skipped on rerun and partial queues resume after the last checkpointed record.
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/

restore:
  publish_mode: direct-to-queue
  publisher_confirms: true
  checkpoint_state: /var/lib/rabbitmq-backup/restore-checkpoint.db
  produce_batch_size: 100
12. High-Performance Backup
Tuned for maximum throughput with large segments, LZ4 compression, and high concurrency.
mode: backup
backup_id: "perf-backup-001"

source:
  amqp_url: "amqp://${RMQ_USERNAME}:${RMQ_PASSWORD}@rabbitmq.internal:5672/%2f"
  management_url: "http://rabbitmq.internal:15672"
  management_username: ${RMQ_MGMT_USERNAME}
  management_password: ${RMQ_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: perf-backup/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

backup:
  # Large segments reduce per-segment overhead
  segment_max_bytes: 536870912 # 512 MB
  segment_max_interval_ms: 120000 # 2 minutes
  # LZ4: faster than zstd, slightly worse ratio
  compression: lz4
  # High concurrency
  prefetch_count: 500
  max_concurrent_queues: 16
  # Reduce checkpoint overhead
  checkpoint_interval_secs: 30
  sync_interval_secs: 60
  include_definitions: true
  stop_at_current_depth: true
  stream_enabled: true
High concurrency (max_concurrent_queues: 16) with aggressive prefetch (500) increases memory usage and load on the RabbitMQ broker. Monitor cluster health during backup.
13. Disaster Recovery Restore
Complete cluster recovery with comprehensive queue, vhost, and exchange remapping.
mode: restore
backup_id: "prod-full-001"

target:
  amqp_url: "amqp://${RMQ_DR_USERNAME}:${RMQ_DR_PASSWORD}@rabbitmq-dr.internal:5672/%2f"
  management_url: "http://rabbitmq-dr.internal:15672"
  management_username: ${RMQ_DR_MGMT_USERNAME}
  management_password: ${RMQ_DR_MGMT_PASSWORD}

storage:
  backend: s3
  bucket: mycompany-rmq-backups
  region: us-west-2
  prefix: production/
  access_key: ${AWS_ACCESS_KEY_ID}
  secret_key: ${AWS_SECRET_ACCESS_KEY}

restore:
  # Comprehensive remapping for DR cluster
  queue_mapping:
    "orders-primary": "orders-dr"
    "payments-primary": "payments-dr"
    "notifications-primary": "notifications-dr"
  vhost_mapping:
    "/production": "/dr-production"
  exchange_mapping:
    "events-v1": "events-v2"
  publish_mode: exchange
  publisher_confirms: true
  max_concurrent_queues: 8
  produce_batch_size: 1000
  rate_limit_messages_per_sec: 50000
  restore_definitions: true
  create_missing_queues: false
  checkpoint_state: /var/lib/rabbitmq-backup/dr-restore-checkpoint.db
  # ALWAYS dry-run first for DR
  dry_run: false
# Stage 1: Dry-run to preview
sed 's/dry_run: false/dry_run: true/' config/disaster-recovery.yaml > /tmp/dr-dryrun.yaml
rabbitmq-backup restore --config /tmp/dr-dryrun.yaml
# Stage 2: Actual restore
rabbitmq-backup restore --config config/disaster-recovery.yaml
# Stage 3: Validate
rabbitmq-backup validate --path s3://mycompany-rmq-backups/production \
--backup-id prod-full-001 --deep
Environment Variable Management
HashiCorp Vault
export RMQ_USERNAME="$(vault kv get -field=username secret/prod/rmq)"
export RMQ_PASSWORD="$(vault kv get -field=password secret/prod/rmq)"
export AWS_ACCESS_KEY_ID="$(vault kv get -field=access_key secret/prod/aws)"
export AWS_SECRET_ACCESS_KEY="$(vault kv get -field=secret_key secret/prod/aws)"
AWS Secrets Manager
export RMQ_PASSWORD="$(aws secretsmanager get-secret-value \
--secret-id rmq/prod/backup --query SecretString --output text | jq -r '.password')"
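The SecretString comes back as JSON, which is why the command pipes through jq. A self-contained sanity check of that extraction, with a made-up payload standing in for the real secret:

```shell
# Simulate a SecretString payload and extract one field with jq,
# mirroring the pipeline above. This payload is fake.
secret_string='{"username":"backup_user","password":"example-only"}'
echo "$secret_string" | jq -r '.password'   # prints: example-only
```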
Kubernetes Secrets
kubectl create secret generic rmq-backup-creds \
--from-literal=username=backup_user \
--from-literal=password=$(openssl rand -base64 32)
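To make those secret keys visible as the RMQ_USERNAME/RMQ_PASSWORD variables the configs reference, the backup pod can map them with secretKeyRef. A sketch of the relevant pod-spec fragment (the container name and image are assumptions):

```yaml
# Pod spec fragment: map secret keys onto the env vars that the ${ENV_VAR}
# placeholders in the config expect. Image and container name are hypothetical.
containers:
  - name: rabbitmq-backup
    image: registry.internal/rabbitmq-backup:latest
    env:
      - name: RMQ_USERNAME
        valueFrom:
          secretKeyRef:
            name: rmq-backup-creds
            key: username
      - name: RMQ_PASSWORD
        valueFrom:
          secretKeyRef:
            name: rmq-backup-creds
            key: password
```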
Quick Reference
| Scenario | Backend | Key Features | Best For |
|---|---|---|---|
| Basic Local | Filesystem | Simple, no cloud deps | Development |
| AWS S3 | S3 | IAM, multi-cluster prefix | Production AWS |
| Azure Blob | Azure | Account key or Workload Identity | Production Azure |
| GCS | GCS | Service account | Production GCP |
| Full Production | S3 | TLS, filtering, streams, metrics | Enterprise |
| PITR Restore | Any | Time window, queue remapping | Incident recovery |
| Definitions Restore | Any | Topology-first, dry-run | Cluster rebuild |
| Create Missing Queues | Any | Direct-to-queue auto-create | Message-only restores |
| Selective Definitions | Any | Partial topology import | Tenant/app-scoped recovery |
| Resumable Restore | Any | Restore checkpoint DB | Large restores |
| Multi-Vhost | Any | Cross-vhost, per-vhost filtering | Multi-tenant |
| High-Performance | Any | LZ4, high concurrency, large segments | Large queues |
| Disaster Recovery | Any | Full remapping (queue/vhost/exchange) | Cross-cluster DR |