# Docker Deployment

Run rabbitmq-backup as a Docker container. The image uses a multi-stage build with a minimal Debian runtime, producing a small, production-ready image.
## Pull or Build the Image

### Build from source

```bash
git clone https://github.com/osodevops/rabbitmq-backup.git
cd rabbitmq-backup
docker build -t rabbitmq-backup:latest .
```

### Pull a pre-built image

```bash
docker pull ghcr.io/osodevops/rabbitmq-backup:latest
```
## Run a One-Shot Backup

Mount your configuration file and run the backup command:

```bash
docker run --rm \
  -v $(pwd)/config/backup.yaml:/config/backup.yaml:ro \
  -e RABBITMQ_PASSWORD=changeme \
  -e AWS_ACCESS_KEY_ID=AKIAEXAMPLE \
  -e AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMIEXAMPLEKEY \
  rabbitmq-backup:latest \
  backup --config /config/backup.yaml
```

For filesystem storage, also mount a data volume:

```bash
docker run --rm \
  -v $(pwd)/config/backup.yaml:/config/backup.yaml:ro \
  -v rabbitmq-backup-data:/data \
  rabbitmq-backup:latest \
  backup --config /config/backup.yaml
```
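For reference, a filesystem-backed `backup.yaml` would point its storage section at the mounted volume. The key names below (`storage`, `type`, `path`) are illustrative assumptions, not the tool's confirmed schema; check the project's shipped example configs for the real layout:

```yaml
# Illustrative fragment -- key names are assumptions, see the example configs
storage:
  type: filesystem   # write backups to a local directory
  path: /data        # matches the -v rabbitmq-backup-data:/data mount above
```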
## Environment Variables

Pass credentials and overrides through environment variables rather than baking them into configuration files.
| Variable | Purpose |
|---|---|
| `RABBITMQ_URL` | AMQP connection URL |
| `RABBITMQ_MANAGEMENT_URL` | Management API URL |
| `RABBITMQ_PASSWORD` | Password for `${RABBITMQ_PASSWORD}` interpolation in config |
| `AWS_ACCESS_KEY_ID` | S3 access key |
| `AWS_SECRET_ACCESS_KEY` | S3 secret key |
| `AZURE_STORAGE_KEY` | Azure Blob account key |
| `AZURE_STORAGE_SAS_TOKEN` | Azure SAS token |
| `AZURE_CLIENT_ID` | Azure AD client ID |
| `AZURE_TENANT_ID` | Azure AD tenant ID |
| `AZURE_CLIENT_SECRET` | Azure AD client secret |
| `GOOGLE_APPLICATION_CREDENTIALS` | Path to GCS service account JSON (mount the file) |
| `RUST_LOG` | Log level override (`info`, `debug`, `trace`) |
| `S3_ENDPOINT` | Custom S3 endpoint (MinIO, Ceph RGW) |
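As a sketch of the `${RABBITMQ_PASSWORD}` interpolation mentioned above, a config file can reference the variable instead of embedding a literal secret. The surrounding keys (`rabbitmq`, `username`, `password`) are illustrative assumptions about the config schema:

```yaml
# Illustrative fragment -- surrounding key names are assumptions
rabbitmq:
  username: backup-user
  password: ${RABBITMQ_PASSWORD}   # substituted from the container environment at runtime
```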
## Docker Compose: Full Local Stack

This compose file starts RabbitMQ with the Management and Stream plugins, MinIO for S3-compatible storage, and the backup tool itself.

`docker-compose.yml`:
```yaml
networks:
  rabbitmq-net:
    driver: bridge

volumes:
  minio-data:

services:
  # RabbitMQ broker with management + stream plugins
  rabbitmq:
    image: rabbitmq:4.0-management
    hostname: rabbitmq
    container_name: rabbitmq-backup-rabbitmq
    restart: always
    networks:
      - rabbitmq-net
    ports:
      - "5672:5672"    # AMQP
      - "15672:15672"  # Management UI
      - "5552:5552"    # Stream protocol
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    command: >
      bash -c "rabbitmq-plugins enable rabbitmq_stream rabbitmq_stream_management &&
      rabbitmq-server"

  # MinIO (S3-compatible storage)
  minio:
    image: minio/minio:latest
    hostname: minio
    container_name: rabbitmq-backup-minio
    networks:
      - rabbitmq-net
    ports:
      - "19000:9000"
      - "19001:9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    command: server /data --console-address ":9001"
    volumes:
      - minio-data:/data

  # Create the backup bucket in MinIO
  minio-setup:
    image: minio/mc:latest
    networks:
      - rabbitmq-net
    depends_on:
      - minio
    restart: "no"
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      mc alias set local http://minio:9000 minioadmin minioadmin;
      mc mb local/rabbitmq-backups --ignore-existing;
      echo 'Bucket created';
      "

  # RabbitMQ backup tool
  rabbitmq-backup:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - rabbitmq-net
    environment:
      RABBITMQ_URL: amqp://guest:guest@rabbitmq:5672/%2f
      RABBITMQ_MANAGEMENT_URL: http://rabbitmq:15672
      S3_ENDPOINT: http://minio:9000
      S3_ACCESS_KEY: minioadmin
      S3_SECRET_KEY: minioadmin
      RUST_LOG: info
    depends_on:
      - rabbitmq
      - minio
    volumes:
      - ./config:/config
    profiles:
      - tools
```
### Usage

```bash
# Start RabbitMQ and MinIO
docker compose up -d rabbitmq minio minio-setup

# Wait for RabbitMQ to be ready (about 15 seconds), then run a backup
docker compose run --rm rabbitmq-backup backup --config /config/example-backup-s3.yaml

# Run a restore
docker compose run --rm rabbitmq-backup restore --config /config/example-restore.yaml

# List backups
docker compose run --rm rabbitmq-backup list --path s3://rabbitmq-backups
```
## Expose Prometheus Metrics

If you enable the metrics endpoint in your config, expose the port:

```bash
docker run --rm \
  -v $(pwd)/config/backup.yaml:/config/backup.yaml:ro \
  -p 8080:8080 \
  rabbitmq-backup:latest \
  backup --config /config/backup.yaml
```

Then scrape `http://localhost:8080/metrics` with Prometheus.
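A minimal Prometheus scrape job for that endpoint could look like the following; the job name and interval are arbitrary choices, and the target assumes the `-p 8080:8080` mapping shown above:

```yaml
# prometheus.yml fragment -- job name and interval are illustrative
scrape_configs:
  - job_name: rabbitmq-backup
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]  # host port mapped to the container's metrics endpoint
```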
## Image Details

The Dockerfile uses a multi-stage build:

- Builder stage (`rust:bookworm`) -- compiles the release binary
- Runtime stage (`debian:bookworm-slim`) -- minimal runtime with only `ca-certificates` and `libssl3`

The runtime image:

- Runs as non-root user `rabbitmq-backup` (UID 1000)
- Entrypoint is `rabbitmq-backup` -- pass subcommands directly
- Working directory is `/workspace`
- Data directory `/data` is pre-created and owned by the app user
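A Dockerfile matching that description could be sketched as follows. This is an assumption-laden sketch, not the project's actual Dockerfile; build paths and package commands may differ:

```dockerfile
# Illustrative sketch only -- the project's real Dockerfile may differ
FROM rust:bookworm AS builder
WORKDIR /build
COPY . .
RUN cargo build --release

FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates libssl3 \
    && rm -rf /var/lib/apt/lists/*
RUN useradd --uid 1000 --create-home rabbitmq-backup \
    && mkdir -p /data /workspace \
    && chown rabbitmq-backup /data /workspace
COPY --from=builder /build/target/release/rabbitmq-backup /usr/local/bin/rabbitmq-backup
USER rabbitmq-backup
WORKDIR /workspace
ENTRYPOINT ["rabbitmq-backup"]
```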