Automating Local MongoDB Backups from Docker Containers
4/20/2025
Backing up a production database isn’t just good practice; it’s critical. In this post, I’ll walk through how I automated daily MongoDB backups from a Docker container, compressed them, and implemented a retention strategy to keep things clean and efficient.
The goal was simple:
✅ Keep daily backups stored locally
✅ Automatically compress and rotate them
✅ Ensure the script runs without touching the database manually
🧠 The Setup
We’re using:
- A MongoDB instance running inside a Docker container
- A host machine with shell access (cron-compatible)
- A backup strategy to retain the last 31 days of `.tar.gz` files
This is especially useful for projects running in isolated Docker environments where direct filesystem access to the DB isn’t available.
🧾 The Final Script
Here’s the complete shell script that handles:
- Taking a `mongodump` from inside the container
- Compressing it into a `.tar.gz`
- Copying it to the host machine
- Cleaning up both inside the container and on the host
```bash
#!/bin/bash

# Set variables
BACKUP_DIR="/data/backup"
LOCAL_BACKUP_DIR="/home/Projects/Backups"
CONTAINER_NAME="mongodb"
DB_NAME="project-invoice"
DATE=$(date +"%Y%m%d") # YYYYMMDD format
BACKUP_PATH="$BACKUP_DIR/project-db-backup-${DATE}"
TAR_FILE="${BACKUP_PATH}.tar.gz"

# Ensure the backup directory exists on the host
mkdir -p "$LOCAL_BACKUP_DIR"

# Ensure the backup directory exists inside the container
docker exec "$CONTAINER_NAME" mkdir -p "$BACKUP_PATH"

# Run mongodump inside the MongoDB container
docker exec "$CONTAINER_NAME" mongodump --db "$DB_NAME" --out "$BACKUP_PATH" --username root --password root --authenticationDatabase admin

# Compress the backup
docker exec "$CONTAINER_NAME" tar -czf "$TAR_FILE" -C "$BACKUP_PATH" .

# Copy from Docker container to host
docker cp "$CONTAINER_NAME":"$TAR_FILE" "$LOCAL_BACKUP_DIR/"

# Remove the compressed backup file inside the container
docker exec "$CONTAINER_NAME" sh -c "[ -f '$TAR_FILE' ] && rm -f '$TAR_FILE'"

# Remove the raw backup folder inside the container
docker exec "$CONTAINER_NAME" sh -c "[ -d '$BACKUP_PATH' ] && rm -rf '$BACKUP_PATH'"

# Remove old backups from the host (older than 31 days)
find "$LOCAL_BACKUP_DIR" -type f -name "project-db-backup-*.tar.gz" -mtime +31 -exec rm -f {} \;

# Print completion message
echo "MongoDB backup completed: $TAR_FILE"
echo "Backup also copied to: $LOCAL_BACKUP_DIR"
```
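Before trusting the cron job, it’s worth sanity-checking that an archive produced this way is actually readable. Here’s a minimal sketch using scratch files (not a real dump): `tar -tzf` lists an archive’s contents and exits non-zero if the file is corrupt.

```shell
# Build a tiny stand-in archive the same way the script does (tar -czf),
# then list its members with tar -tzf to confirm it is readable.
SRC_DIR=$(mktemp -d)
echo "sample data" > "$SRC_DIR/collection.bson"
ARCHIVE="$SRC_DIR/demo-backup.tar.gz"
tar -czf "$ARCHIVE" -C "$SRC_DIR" collection.bson
tar -tzf "$ARCHIVE"   # a corrupt archive would make this exit non-zero
```

Running `tar -tzf` against the copied file in `$LOCAL_BACKUP_DIR` after the script finishes is a cheap way to confirm the backup is usable.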
🔄 Automating with Cron
To run this every night at 1:00 AM Sri Lanka time (7:30 PM UTC), you’d use:

```bash
30 19 * * * /bin/bash /home/Services/JanDis/Scripts/jandis_daily_backup.sh >> /home/Projects/Backups/project_backup.log 2>&1
```

🕒 Cron schedules use the server’s clock. This entry assumes the host runs on UTC, so 19:30 UTC is 1:00 AM Sri Lanka Time (UTC+5:30).
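If you’re ever unsure how a UTC cron time maps to your local zone, GNU `date` can confirm the conversion (assuming tzdata for `Asia/Colombo` is installed on the host):

```shell
# Convert 19:30 UTC to Sri Lanka time; Asia/Colombo is UTC+5:30 year-round.
TZ=Asia/Colombo date -d '2025-04-20 19:30 UTC' +'%H:%M'   # prints 01:00
```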
This ensures your database is backed up automatically each night, with no manual intervention needed.
🧹 Why We Keep 31 Days
We retain 31 days of backups for two main reasons:
- Recovery window: If something breaks or gets deleted, we have an entire month of data history.
- Disk space: Compressed .tar.gz backups are small, so storing a month’s worth is space-efficient.
Adjust this to your needs by changing the `-mtime +31` value in the `find` command.
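To see the retention rule in action without touching real backups, here’s a small sketch that artificially ages one file in a scratch directory (using GNU `touch -d`):

```shell
# Create a scratch dir with one "old" and one "fresh" backup file.
DEMO_DIR=$(mktemp -d)
touch -d '40 days ago' "$DEMO_DIR/project-db-backup-20250310.tar.gz"
touch "$DEMO_DIR/project-db-backup-20250420.tar.gz"

# Same retention command as the script: delete anything older than 31 days.
find "$DEMO_DIR" -type f -name "project-db-backup-*.tar.gz" -mtime +31 -exec rm -f {} \;

ls "$DEMO_DIR"   # only the fresh file remains
```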
🧼 Bonus: Log Rotation (Optional)
To keep logs under control, consider using `logrotate` or rotating logs weekly. Just point the cron job’s `>>` redirection at a named log file and rotate it periodically.
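As a sketch, a `logrotate` rule for the log file above might look like this (dropped into `/etc/logrotate.d/`; the weekly schedule and `rotate 8` count are just example values):

```
/home/Projects/Backups/project_backup.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```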
📦 Final Thoughts
This backup approach is:
🐳 Docker-friendly
💨 Fast and compressed
🔁 Fully automated
✅ Safe with cleanup
If your database lives in Docker and you’re not backing it up, I highly recommend setting something like this up. It could save your project one day.
Thanks for reading! 🙌
I’m Dhaneja, a software engineer living in Japan and building practical tools like this in my spare time.
If you enjoyed this post or found it helpful, feel free to reach out or follow along:
📸 Instagram
🐦 X / Twitter
🌐 dhaneja.com
Let’s connect!