Backup anxiety is the number one concern for developers moving to self-hosted infrastructure. When your database runs on a managed platform like Supabase or PlanetScale, backups happen automatically and you never think about them. When you run your own Dokploy instance, you need a backup strategy configured before anything goes wrong. The good news: Dokploy has built-in backup support that makes this straightforward, and when combined with infrastructure-level data protection, you get a more robust setup than most managed services provide.

Understanding Dokploy's Backup System

Dokploy includes a native backup feature for database services. It supports PostgreSQL, MySQL, MariaDB, and MongoDB out of the box. Under the hood, Dokploy uses the standard dump tools for each database engine: pg_dump for PostgreSQL, mysqldump for MySQL and MariaDB, and mongodump for MongoDB.

Backups are scheduled using cron expressions, giving you precise control over timing. Each backup job targets a destination: either local storage on the Dokploy server or an S3-compatible remote bucket.

Each backup is a compressed archive containing the full database dump. Dokploy manages retention automatically, deleting old backups based on the retention count you configure.
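Cron expressions use five fields: minute, hour, day of month, month, day of week. A few common backup schedules, for reference:

```
0 3 * * *     # every day at 03:00
0 */6 * * *   # every 6 hours, on the hour
0 4 * * 0     # every Sunday at 04:00
30 2 1 * *    # the 1st of each month at 02:30
```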

Setting Up PostgreSQL Backups

PostgreSQL is the most common database in the Dokploy ecosystem, so we will walk through this one in detail. The process for MySQL and MongoDB follows the same pattern with minor differences.

Step 1: Navigate to Your Database Service

In the Dokploy dashboard, go to your project and select the PostgreSQL database service. You will see several tabs: General, Environment, Monitoring, Backups, Logs, and Advanced.

Step 2: Open the Backups Tab

Click the Backups tab. If no backups are configured, you will see an empty state with an "Add Backup" button.

Step 3: Configure the Backup Schedule

Click Add Backup and fill in the configuration:

  • Schedule — a cron expression controlling when the backup runs
  • Destination — local storage or a configured S3 destination
  • Prefix — a filename prefix that identifies this job's backup files
  • Retention — the number of backups to keep before the oldest is deleted

Step 4: Set the Database Name

Specify the database name to back up. This is the PostgreSQL database name, not the service name. If you are using the default Dokploy PostgreSQL setup, this is typically the value you set in the POSTGRES_DB environment variable.

Step 5: Save and Verify

Save the configuration. You can trigger a manual backup immediately to verify everything works. Check that the backup file appears in your destination and that the file size is reasonable (an empty database produces a small dump, but it should not be 0 bytes).
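That manual check can be scripted. A minimal POSIX shell sketch — the function name is illustrative, and the file path is whatever you downloaded:

```shell
# Sanity-check a downloaded backup archive: non-empty and not corrupt.
verify_backup() {
  f="$1"
  # -s is true only if the file exists and is larger than 0 bytes
  [ -s "$f" ] || { echo "FAIL: $f is missing or empty"; return 1; }
  # gzip -t tests archive integrity without extracting anything
  gzip -t "$f" 2>/dev/null || { echo "FAIL: $f is corrupt"; return 1; }
  echo "OK: $f ($(du -h "$f" | cut -f1))"
}
```

Run `verify_backup path/to/backup.sql.gz` after downloading; a FAIL here means the backup job needs attention now, not during a restore.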

Configuring S3 Backup Destinations

Local backups are better than no backups, but they share a single point of failure with the database itself. For production systems, configure an S3-compatible destination so backups survive even if the entire server is lost.

Setting Up S3 Credentials in Dokploy

In Dokploy's settings, navigate to the S3 Destinations section. Click Add S3 Destination and provide:

  • A name to identify the destination
  • The access key ID and secret access key
  • The bucket name and region
  • The endpoint URL (the regional endpoint for AWS, or the provider's API URL for other S3-compatible services)

Security tip: Create dedicated IAM credentials with write-only access to the backup bucket. Do not reuse root credentials or credentials with broader permissions. For AWS, create an IAM policy that allows only s3:PutObject and s3:DeleteObject on the specific backup bucket. This limits damage if the credentials are ever exposed.
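As a sketch, the write-only policy described above might look like this; the bucket name `my-dokploy-backups` is a placeholder for your own:

```shell
# Write a minimal IAM policy allowing only uploads and rotation on one bucket.
# Deliberately omits s3:GetObject and s3:ListBucket: leaked credentials
# could not be used to read or enumerate existing backups.
cat > backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-dokploy-backups/*"
    }
  ]
}
EOF
```

Attach the policy to a dedicated IAM user created only for Dokploy backups.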

Using MinIO for Self-Hosted S3

If you want to keep backups entirely self-managed, you can run MinIO, ideally on a separate server (a second Dokploy instance works well). MinIO provides an S3-compatible API and can be deployed as a Docker container. This is useful if you have a second server and want cross-server backups without relying on a third-party cloud storage provider.

Deploy MinIO through Dokploy as a Docker Compose service, configure its access credentials, create a bucket, then use those details as the S3 destination for your database backups.
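A minimal Compose sketch for such a MinIO service — the image tag, ports, and credentials are assumptions to adapt:

```shell
# Write a minimal docker-compose.yml for a single-node MinIO service.
cat > docker-compose.yml <<'EOF'
services:
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: backup-admin        # placeholder — change
      MINIO_ROOT_PASSWORD: change-me-now   # placeholder — change
    ports:
      - "9000:9000"   # S3-compatible API
      - "9001:9001"   # web console
    volumes:
      - minio-data:/data
volumes:
  minio-data:
EOF
```

After deploying, create a bucket through the console on port 9001, then point Dokploy's S3 destination at `http://<server>:9000` with those credentials.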

MySQL and MongoDB Backups

The backup configuration process for MySQL, MariaDB, and MongoDB is nearly identical to PostgreSQL. The differences are in the underlying dump tools and a few configuration specifics.

MySQL and MariaDB

Dokploy uses mysqldump for MySQL and MariaDB backups. The configuration fields are the same: schedule, prefix, retention, and destination. Specify the database name matching your MYSQL_DATABASE environment variable.

One consideration: mysqldump locks tables by default during the dump to ensure consistency. For InnoDB tables (the default engine), this is usually fast and non-disruptive. If you have large MyISAM tables, schedule backups during low-traffic periods to minimize lock contention.
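If you want to reproduce a dump manually (for testing or ad-hoc copies), a sketch along these lines avoids InnoDB table locks; the container name `mysql` and database `myapp` are hypothetical, and credentials are assumed to come from an option file or environment:

```shell
DB=myapp   # hypothetical database name (your MYSQL_DATABASE value)
# --single-transaction: consistent InnoDB snapshot without table locks
# --routines --triggers: include stored procedures and triggers in the dump
DUMP_CMD="mysqldump --single-transaction --routines --triggers $DB"
echo "$DUMP_CMD"
# Run it inside the container and compress, e.g.:
#   docker exec mysql $DUMP_CMD | gzip > "myapp-$(date +%F).sql.gz"
```

Note that `--single-transaction` only helps for transactional engines; MyISAM tables still fall back to locking.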

MongoDB

Dokploy uses mongodump for MongoDB backups. MongoDB backups capture the specified database as a BSON dump. The backup includes all collections and indexes.

For MongoDB replica sets, mongodump can read from a secondary member to avoid impacting primary performance. If you are running a single MongoDB instance through Dokploy (the common case), this is not a concern.
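A manual mongodump along the same lines, with hypothetical container and database names:

```shell
DB=myapp                            # hypothetical database name
OUT="$DB-$(date +%F).archive.gz"    # e.g. myapp-2024-01-15.archive.gz
echo "$OUT"
# Dump one database to a single compressed archive on stdout:
#   docker exec mongo mongodump --db "$DB" --archive --gzip > "$OUT"
# On a replica set, add --readPreference=secondary to spare the primary.
```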

Backup Retention and Rotation

Setting the right retention count depends on your recovery requirements and storage budget. Here are practical guidelines:

| Backup Frequency | Recommended Retention | Coverage | Storage Impact |
|------------------|-----------------------|----------|----------------|
| Every 6 hours    | 28                    | 7 days   | High           |
| Daily            | 14                    | 2 weeks  | Medium         |
| Weekly           | 8                     | 2 months | Low            |
| Monthly          | 12                    | 1 year   | Very low       |

For most applications, a combination approach works well: daily backups with 14-day retention for operational recovery, plus weekly backups with 8-week retention for longer-term needs. You can configure multiple backup jobs for the same database with different schedules and destinations.

Monitor your backup storage usage, especially if your database grows over time. A 500 MB database backed up daily with 14-day retention occupies roughly 3-5 GB of backup storage at any given time (14 retained copies, assuming typical dump compression). As the database grows, so does backup storage. With MassiveGRID's independent storage scaling, you can increase SSD allocation specifically for backup archives without overpaying for CPU or RAM you do not need.
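The arithmetic behind that estimate, as a quick calculator you can adapt — the 50% compression ratio is an assumption; measure your own dumps:

```shell
DB_MB=500       # current database size in MB
RETAIN=14       # retention count
RATIO=50        # compressed dump size as a percent of raw size (assumed)
TOTAL_MB=$(( DB_MB * RATIO / 100 * RETAIN ))
echo "~${TOTAL_MB} MB of backup storage at steady state"   # → ~3500 MB
```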

Testing Your Backups

A backup you have never restored is not a backup. It is a hope. Test your backup restore process at least once before you need it in an emergency.

PostgreSQL Restore Test

  1. Download a recent backup file from your S3 destination or local storage.
  2. Decompress the archive to extract the SQL dump file.
  3. Spin up a temporary PostgreSQL container:
    docker run -d --name pg-restore-test \
      -e POSTGRES_PASSWORD=testpass \
      -e POSTGRES_DB=restore_test \
      -p 5433:5432 postgres:16
  4. Wait a few seconds for PostgreSQL to finish initializing, then restore the dump:
    cat backup.sql | docker exec -i pg-restore-test \
      psql -U postgres -d restore_test
  5. Connect and verify data:
    docker exec -it pg-restore-test \
      psql -U postgres -d restore_test -c "\dt"
  6. Clean up: docker rm -f pg-restore-test

If you see your tables and data, the backup is valid. If the restore fails, you have a problem to fix now rather than during a real emergency.

Schedule Regular Restore Tests

Add a calendar reminder to test a backup restore once per month. This catches issues like backup corruption, schema changes that break restore compatibility, or misconfigured retention policies that silently deleted all backups.

Infrastructure-Level Protection

Application-level backups through Dokploy are essential, but they are only one layer of data protection. The infrastructure underneath your Dokploy server provides another, fundamentally different layer.

MassiveGRID's storage infrastructure uses Ceph with 3x replication across independent storage nodes. This means every block of data on your server's disk, including Docker volumes, database files, and application data, is automatically replicated to three separate physical locations within the data center. If a single disk or storage node fails, your data remains intact and accessible with no interruption.

This is not the same as an application backup. Ceph replication protects against hardware failure. It does not protect against accidental DROP TABLE, application bugs that corrupt data, or ransomware. That is what Dokploy's application-level backups handle. The two protections are complementary:

  • Ceph replication — guards against disk and storage-node failure, transparently and with no interruption
  • Dokploy backups — guard against logical data loss: accidental deletes, bad migrations, application bugs, and ransomware

Together, they cover the full spectrum of data loss scenarios. Securing your Dokploy instance adds the third pillar: preventing the incidents that would require restore in the first place.

MassiveGRID for Dokploy Hosting

  • Ceph storage with 3x replication — every byte of data replicated across three independent storage nodes automatically
  • Independent storage scaling — increase SSD capacity for growing backup archives without paying for extra CPU or RAM
  • Cloud VPS from $1.99/mo — ideal for development and staging environments with backup testing
  • Dedicated VPS from $4.99/mo — guaranteed resources ensure backup jobs do not compete with application workloads
  • Cloud Dedicated Servers — HA infrastructure with automatic failover for production databases
  • 4 global locations — store backups in a different geographic region from your primary server
Explore Dokploy Hosting on MassiveGRID →

Putting It All Together

A complete backup strategy for a Dokploy-hosted application involves three steps. First, configure Dokploy's built-in backup feature with an S3 destination, a cron schedule that matches your recovery point objective, and a retention count that balances coverage with storage cost. Second, test the restore process at least once to verify your backups actually work. Third, ensure your infrastructure provides block-level redundancy underneath everything.

If you are running Supabase through Dokploy, this backup configuration is especially important since your PostgreSQL instance holds all your application data, auth tables, and storage metadata. The same principles apply to any database-backed application: get backups configured before your first user signs up, not after your first data loss.