
Your First Backup

This tutorial walks through a complete backup-and-restore cycle. You will:

  1. Start a local RabbitMQ broker
  2. Create queues and publish test messages
  3. Run a non-destructive backup
  4. Verify that messages remain in the original queues
  5. Restore messages to a different queue
  6. Validate everything end to end

By the end, you will understand exactly how OSO RabbitMQ Backup works and have confidence that it does not consume your messages.

Time required: about 20 minutes.

Prerequisites

  • Docker and Docker Compose installed
  • The rabbitmq-backup binary on your PATH (see Quickstart for installation)
  • curl available in your terminal

Step 1: Start RabbitMQ

Use the project's Docker Compose file to start RabbitMQ with the management plugin and stream protocol enabled:

cd /path/to/rabbitmq-backup
docker compose up -d rabbitmq

Wait for the broker to accept connections:

# Poll until the management API responds
until curl -sf -u guest:guest http://localhost:15672/api/overview > /dev/null 2>&1; do
  echo "Waiting for RabbitMQ..."
  sleep 2
done
echo "RabbitMQ is ready."
tip

The default credentials are guest / guest. The management UI is at http://localhost:15672.

Step 2: Create Queues and an Exchange

Set up a small topology: one exchange, two queues, and a binding for each.

# Create an exchange
docker exec rabbitmq rabbitmqadmin declare exchange \
  name=tutorial-exchange type=direct durable=true

# Create two queues
docker exec rabbitmq rabbitmqadmin declare queue \
  name=tutorial-orders durable=true

docker exec rabbitmq rabbitmqadmin declare queue \
  name=tutorial-payments durable=true

# Bind queues to the exchange
docker exec rabbitmq rabbitmqadmin declare binding \
  source=tutorial-exchange destination=tutorial-orders routing_key=orders

docker exec rabbitmq rabbitmqadmin declare binding \
  source=tutorial-exchange destination=tutorial-payments routing_key=payments

Step 3: Publish Test Messages

Publish 20 messages to the orders queue and 10 to the payments queue:

# Publish to tutorial-orders
for i in $(seq 1 20); do
  docker exec rabbitmq rabbitmqadmin publish \
    exchange=tutorial-exchange \
    routing_key=orders \
    payload="{\"order_id\": $i, \"amount\": $((i * 100)), \"currency\": \"USD\"}"
done

# Publish to tutorial-payments
for i in $(seq 1 10); do
  docker exec rabbitmq rabbitmqadmin publish \
    exchange=tutorial-exchange \
    routing_key=payments \
    payload="{\"payment_id\": $i, \"status\": \"completed\"}"
done

Record the queue depths before backup:

docker exec rabbitmq rabbitmqadmin list queues name messages

Expected output:

+-------------------+----------+
| name              | messages |
+-------------------+----------+
| tutorial-orders   | 20       |
| tutorial-payments | 10       |
+-------------------+----------+
info

Write down these numbers. After the backup runs, you will check them again to confirm that no messages were consumed.
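If you would rather capture these depths programmatically than by eye, a small helper can parse the table that `rabbitmqadmin` prints. This parser is an illustrative sketch, not part of the tool, and it assumes the two-column ASCII layout shown above:

```python
# Parse the ASCII table printed by `rabbitmqadmin list queues name messages`
# into a {queue_name: depth} dict. Illustrative sketch only.
def parse_queue_depths(table: str) -> dict[str, int]:
    depths = {}
    for line in table.splitlines():
        # Data rows look like "| tutorial-orders   | 20       |"
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == 2 and cells[1].isdigit():
            depths[cells[0]] = int(cells[1])
    return depths

sample = """\
+-------------------+----------+
| name              | messages |
+-------------------+----------+
| tutorial-orders   | 20       |
| tutorial-payments | 10       |
+-------------------+----------+"""

before = parse_queue_depths(sample)
print(before)  # {'tutorial-orders': 20, 'tutorial-payments': 10}
```

Keeping the result in a dict makes the post-backup comparison in Step 6 a one-line equality check.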

Step 4: Create the Backup Configuration

Create a YAML configuration file for the backup:

cat > tutorial-backup.yaml << 'EOF'
mode: backup
backup_id: "tutorial-001"

source:
  amqp_url: "amqp://guest:guest@localhost:5672/%2f"
  management_url: "http://localhost:15672"
  management_username: guest
  management_password: guest

queues:
  include:
    - "tutorial-*"
  vhosts:
    - "/"

storage:
  backend: filesystem
  path: ./tutorial-data

backup:
  compression: zstd
  compression_level: 3
  prefetch_count: 100
  requeue_strategy: cancel
  max_concurrent_queues: 4
  include_definitions: true
  stop_at_current_depth: true
EOF

Key configuration options explained

Option                  Value        Why
----------------------  -----------  ----------------------------------------------------------------
requeue_strategy        cancel       Non-destructive: messages requeue when the consumer is cancelled
stop_at_current_depth   true         Only read the messages present when the backup starts; do not chase new arrivals
include_definitions     true         Also export the exchange/queue/binding topology
compression             zstd         Good balance of speed and compression ratio
queues.include          tutorial-*   Glob pattern matching both tutorial queues
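Note that `tutorial-*` is a glob pattern, not a regular expression. To preview which queue names a pattern would select, you can test it with shell-style glob semantics, which Python's `fnmatch` module also implements (whether the tool's matcher behaves identically is an assumption):

```python
from fnmatch import fnmatch

# Shell-style glob matching, as assumed for the queues.include pattern.
queues = ["tutorial-orders", "tutorial-payments", "audit-log"]
pattern = "tutorial-*"

selected = [q for q in queues if fnmatch(q, pattern)]
print(selected)  # ['tutorial-orders', 'tutorial-payments']
```

Both tutorial queues match; unrelated queues such as `audit-log` are left out of the backup.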

Step 5: Run the Backup

rabbitmq-backup backup --config tutorial-backup.yaml

Expected output:

2026-04-10T14:00:00.000Z  INFO rabbitmq_backup_core::backup: Starting backup tutorial-001
2026-04-10T14:00:00.050Z INFO rabbitmq_backup_core::backup: Connected to RabbitMQ at localhost:5672
2026-04-10T14:00:00.100Z INFO rabbitmq_backup_core::definitions: Exporting definitions from http://localhost:15672
2026-04-10T14:00:00.200Z INFO rabbitmq_backup_core::definitions: Definitions exported (12 exchanges, 4 queues, 4 bindings)
2026-04-10T14:00:00.250Z INFO rabbitmq_backup_core::backup::queue_reader: Backing up queue tutorial-orders (20 messages)
2026-04-10T14:00:00.400Z INFO rabbitmq_backup_core::backup::queue_reader: Queue tutorial-orders complete: 20 messages in 1 segment
2026-04-10T14:00:00.450Z INFO rabbitmq_backup_core::backup::queue_reader: Backing up queue tutorial-payments (10 messages)
2026-04-10T14:00:00.550Z INFO rabbitmq_backup_core::backup::queue_reader: Queue tutorial-payments complete: 10 messages in 1 segment
2026-04-10T14:00:00.600Z INFO rabbitmq_backup_core::backup: Backup tutorial-001 complete: 30 messages, 2 queues, 2 segments

Step 6: Verify Non-Destructive Behaviour

This is the most important step. Check that the queue depths are identical to what they were before the backup:

docker exec rabbitmq rabbitmqadmin list queues name messages

Expected output:

+-------------------+----------+
| name              | messages |
+-------------------+----------+
| tutorial-orders   | 20       |
| tutorial-payments | 10       |
+-------------------+----------+

The numbers match. No messages were consumed. The cancel-and-requeue strategy worked: the tool started a consumer, read all messages, then cancelled the consumer, causing RabbitMQ to automatically requeue every unacknowledged message.

tip

This is the fundamental difference between OSO RabbitMQ Backup and a regular consumer. A normal consumer would acknowledge messages, removing them from the queue. This tool never acknowledges -- it cancels the consumer instead.
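The lifecycle can be modelled with a toy in-memory queue. This is a simulation of the broker semantics described above, not the tool's actual code: messages that were delivered but never acknowledged return to the queue the moment the consumer is cancelled.

```python
# Toy model of AMQP cancel-and-requeue semantics. Not the tool's real code.
class ToyQueue:
    def __init__(self, messages):
        self.ready = list(messages)  # messages waiting for delivery
        self.unacked = []            # delivered but not yet acknowledged

    def deliver(self):
        msg = self.ready.pop(0)
        self.unacked.append(msg)
        return msg

    def cancel_consumer(self):
        # On cancel, the broker requeues every unacknowledged message.
        self.ready = self.unacked + self.ready
        self.unacked = []

queue = ToyQueue([f"order-{i}" for i in range(1, 21)])
backup = [queue.deliver() for _ in range(20)]  # read everything, never ack
queue.cancel_consumer()

print(len(backup), len(queue.ready))  # 20 20 -> copy made, depth unchanged
```

A normal consumer would call the equivalent of `ack` after each `deliver`, draining `ready` permanently; because the backup never acknowledges, the cancel restores the original depth.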

Step 7: Inspect the Backup

List backups

rabbitmq-backup list --path ./tutorial-data

Expected output:

Available backups:
tutorial-001 2026-04-10T14:00:00Z 2 queues 30 messages 2 segments

Describe the backup in detail

rabbitmq-backup describe --path ./tutorial-data --backup-id tutorial-001

Expected output:

Backup: tutorial-001
Created: 2026-04-10T14:00:00Z
Status: complete
Queues: 2
Messages: 30
Segments: 2
Compression: zstd
Definitions: included

Queues:
/tutorial-orders 20 messages 1 segment 8.1 KB
/tutorial-payments 10 messages 1 segment 3.9 KB

Get machine-readable output

rabbitmq-backup describe --path ./tutorial-data --backup-id tutorial-001 --format json
{
  "backup_id": "tutorial-001",
  "created_at": "2026-04-10T14:00:00Z",
  "status": "complete",
  "total_messages": 30,
  "total_segments": 2,
  "compression": "zstd",
  "definitions_included": true,
  "queues": [
    {
      "vhost": "/",
      "name": "tutorial-orders",
      "message_count": 20,
      "segment_count": 1,
      "size_bytes": 8100
    },
    {
      "vhost": "/",
      "name": "tutorial-payments",
      "message_count": 10,
      "segment_count": 1,
      "size_bytes": 3900
    }
  ]
}
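The JSON format is convenient for scripted sanity checks. As a sketch, the snippet below cross-checks the per-queue counts against the backup total, using field names taken from the output above (trimmed to the relevant fields):

```python
import json

# Trimmed sample of the `describe --format json` output shown above.
describe_output = """
{
  "backup_id": "tutorial-001",
  "total_messages": 30,
  "queues": [
    {"name": "tutorial-orders", "message_count": 20},
    {"name": "tutorial-payments", "message_count": 10}
  ]
}
"""

report = json.loads(describe_output)
per_queue_total = sum(q["message_count"] for q in report["queues"])
assert per_queue_total == report["total_messages"], "per-queue counts do not add up"
print(f"{report['backup_id']}: {per_queue_total} messages in {len(report['queues'])} queues")
```

In a real pipeline you would feed the command's stdout into `json.loads` instead of a hard-coded string.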

Validate integrity

rabbitmq-backup validate --path ./tutorial-data --backup-id tutorial-001 --deep

Expected output:

Validating backup tutorial-001...
Manifest: OK
Definitions: OK
Segments: 2/2 OK (checksums verified)
Messages: 30/30 OK

Validation passed.
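Deep validation re-reads every segment and compares checksums recorded at backup time. The exact algorithm the tool uses is not documented here; as an illustration of the idea, a SHA-256 based verification looks like this:

```python
import hashlib

# Illustrative checksum verification; the tool's actual algorithm may differ.
def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

segment = b'{"order_id": 1, "amount": 100, "currency": "USD"}'
recorded = checksum(segment)  # stored in the manifest at backup time

# At validation time, recompute over the bytes on disk and compare.
assert checksum(segment) == recorded
assert checksum(segment + b" ") != recorded  # any corruption changes the digest
print("Segment checksum OK")
```

This is why `--deep` is slower than a plain `validate`: it must read and hash every stored byte, not just the manifest.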

Step 8: Restore to a Different Queue

To demonstrate restore without interfering with the original queues, create a new target queue and restore into it.

Create the restore configuration

cat > tutorial-restore.yaml << 'EOF'
mode: restore
backup_id: "tutorial-001"

target:
  amqp_url: "amqp://guest:guest@localhost:5672/%2f"
  management_url: "http://localhost:15672"
  management_username: guest
  management_password: guest

storage:
  backend: filesystem
  path: ./tutorial-data

restore:
  restore_definitions: false
  publish_mode: exchange
  publisher_confirms: true
  max_concurrent_queues: 4
  produce_batch_size: 100
  dry_run: false
EOF
info

Setting restore_definitions: false skips topology restore since the queues already exist. Setting publisher_confirms: true ensures every restored message is acknowledged by the broker before proceeding to the next.

Run the restore

rabbitmq-backup restore --config tutorial-restore.yaml

Expected output:

2026-04-10T14:05:00.000Z  INFO rabbitmq_backup_core::restore: Starting restore from backup tutorial-001
2026-04-10T14:05:00.050Z INFO rabbitmq_backup_core::restore: Connected to RabbitMQ at localhost:5672
2026-04-10T14:05:00.100Z INFO rabbitmq_backup_core::restore::publisher: Restoring tutorial-orders: 20 messages
2026-04-10T14:05:00.300Z INFO rabbitmq_backup_core::restore::publisher: tutorial-orders complete: 20/20 messages confirmed
2026-04-10T14:05:00.350Z INFO rabbitmq_backup_core::restore::publisher: Restoring tutorial-payments: 10 messages
2026-04-10T14:05:00.450Z INFO rabbitmq_backup_core::restore::publisher: tutorial-payments complete: 10/10 messages confirmed
2026-04-10T14:05:00.500Z INFO rabbitmq_backup_core::restore: Restore complete: 30 messages published to 2 queues

Verify the restore

Check the queue depths after restore:

docker exec rabbitmq rabbitmqadmin list queues name messages

Expected output:

+-------------------+----------+
| name              | messages |
+-------------------+----------+
| tutorial-orders   | 40       |
| tutorial-payments | 20       |
+-------------------+----------+

Each queue now has double the original messages -- the 20 (or 10) originals plus the 20 (or 10) restored copies. This confirms that restore publishes messages back into the queues successfully.

warning

Remember that restore is additive. If you restore to a queue that already has messages, the restored messages are added on top. Purge the queue first if you need a clean slate.
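Because restore is additive, the expected post-restore depth is simply the original depth plus the restored count. A one-line check, using the queue names and counts from this tutorial:

```python
# Expected depth after an additive restore: original + restored.
before = {"tutorial-orders": 20, "tutorial-payments": 10}
restored = {"tutorial-orders": 20, "tutorial-payments": 10}

expected_after = {q: before[q] + restored[q] for q in before}
print(expected_after)  # {'tutorial-orders': 40, 'tutorial-payments': 20}
```

If the observed depths differ from `expected_after`, something consumed or produced messages during the restore window.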

Step 9: Dry-Run Mode

Before running a restore in production, use dry-run mode to validate without publishing anything:

cat > tutorial-restore-dryrun.yaml << 'EOF'
mode: restore
backup_id: "tutorial-001"

target:
  amqp_url: "amqp://guest:guest@localhost:5672/%2f"
  management_url: "http://localhost:15672"
  management_username: guest
  management_password: guest

storage:
  backend: filesystem
  path: ./tutorial-data

restore:
  restore_definitions: false
  publisher_confirms: true
  dry_run: true
EOF

rabbitmq-backup restore --config tutorial-restore-dryrun.yaml

Expected output:

2026-04-10T14:10:00.000Z  INFO rabbitmq_backup_core::restore: DRY RUN — no messages will be published
2026-04-10T14:10:00.050Z INFO rabbitmq_backup_core::restore: Would restore 20 messages to tutorial-orders
2026-04-10T14:10:00.100Z INFO rabbitmq_backup_core::restore: Would restore 10 messages to tutorial-payments
2026-04-10T14:10:00.150Z INFO rabbitmq_backup_core::restore: Dry run complete: 30 messages would be restored to 2 queues

No messages are published during a dry run.

Step 10: Export and Import Definitions Separately

Definitions (exchanges, queues, bindings, policies) can also be managed independently from message backup:

Export definitions

rabbitmq-backup definitions-export \
  --config tutorial-backup.yaml \
  --output tutorial-definitions.json

Inspect the exported file:

python3 -m json.tool tutorial-definitions.json | head -30

You will see the full RabbitMQ topology including all exchanges, queues, bindings, users, vhosts, and policies.
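Since the export is plain JSON, its top-level collections can be summarised in a few lines. The keys below follow the standard RabbitMQ definitions format (`exchanges`, `queues`, `bindings`); the sample data is made up for illustration rather than read from a real export:

```python
import json

# Minimal stand-in for an exported definitions file (standard RabbitMQ keys).
definitions = json.loads("""
{
  "exchanges": [{"name": "tutorial-exchange", "type": "direct"}],
  "queues": [{"name": "tutorial-orders"}, {"name": "tutorial-payments"}],
  "bindings": [
    {"source": "tutorial-exchange", "destination": "tutorial-orders"},
    {"source": "tutorial-exchange", "destination": "tutorial-payments"}
  ]
}
""")

for key in ("exchanges", "queues", "bindings"):
    print(f"{key}: {len(definitions.get(key, []))}")
```

Swapping the hard-coded string for `open("tutorial-definitions.json")` gives a quick audit of what an import would create.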

Import definitions to another cluster

rabbitmq-backup definitions-import \
  --config tutorial-restore.yaml \
  --input tutorial-definitions.json
tip

The definitions export/import commands are useful for replicating topology across environments (dev, staging, production) without touching messages.

Clean Up

Remove the tutorial data and stop the containers:

# Remove backup data
rm -rf ./tutorial-data
rm -f tutorial-backup.yaml tutorial-restore.yaml tutorial-restore-dryrun.yaml tutorial-definitions.json

# Stop RabbitMQ
docker compose down

What You Learned

In this tutorial, you:

  • Created a RabbitMQ topology with exchanges, queues, and bindings
  • Published test messages to multiple queues
  • Backed up messages non-destructively using the cancel-and-requeue strategy
  • Verified that queue depths remained unchanged after backup
  • Inspected backups using list, describe, and validate commands
  • Restored messages to queues with publisher confirms
  • Tested dry-run mode for safe pre-flight validation
  • Exported and imported definitions independently

What's Next?

Now that you understand the basics, explore production workflows: