When your Nextcloud deployment grows past a few hundred gigabytes of data, the default local filesystem storage starts showing its limitations. A single disk or RAID array has a finite capacity ceiling, scaling requires downtime to add or replace drives, and achieving redundancy means implementing your own replication strategy. Object storage -- specifically S3-compatible storage backed by Ceph -- solves these problems with a fundamentally different architecture that separates storage scaling from compute scaling.

This guide covers how to configure Nextcloud with S3-compatible object storage, focusing on Ceph's RADOS Gateway (RGW) as the S3 endpoint. We will walk through the two storage modes Nextcloud supports, the new ADA engine introduced in Nextcloud Hub 9, step-by-step Ceph RGW configuration, performance tuning, tiered storage strategies, and migration from filesystem-based storage. The guide assumes you have a working Nextcloud instance -- if you are starting from scratch, follow our production installation guide first.

Why Object Storage for Nextcloud

Traditional filesystem storage (ext4, XFS, ZFS on local disks or a SAN) stores files in a hierarchical directory structure. This works well at small scale, but introduces constraints as your data grows: capacity is capped by the disk or array, expanding it usually means downtime, and redundancy is something you must design and operate yourself.

S3-compatible object storage -- whether provided by Ceph, MinIO, or a cloud service like AWS S3 -- stores data as objects in a flat namespace (buckets). Each object is identified by a key (not a file path), and the storage system handles replication, distribution across nodes, and capacity management transparently. You never run out of space on a single node because the storage pool spans all nodes in the cluster.

For Nextcloud specifically, object storage provides effectively unlimited capacity, redundancy handled by the storage layer itself, and the ability to grow storage without touching the application servers.

Two Storage Modes: Primary vs. External

Nextcloud supports S3 object storage in two fundamentally different modes, and choosing the wrong one for your use case will cause problems. Understanding the distinction is critical before writing any configuration.

Primary Storage (objectstore)

In primary storage mode, S3 replaces the local filesystem as Nextcloud's default storage backend. Every file uploaded by every user is stored as an object in the configured S3 bucket. The local filesystem is used only for temporary files, the Nextcloud application code, and transient caches.

Key characteristics of primary storage mode:

- Every user file lives in the S3 bucket; the local disk holds only application code, temporary files, and caches.
- Objects are keyed by file ID rather than path, so the database holds the only path-to-object mapping -- database backups become as critical as the bucket itself.
- The change is invisible to users: files look and behave exactly as before.

Primary storage is configured in config.php:

'objectstore' => array(
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => array(
        'bucket' => 'nextcloud-primary',
        'hostname' => 'rgw.yourdomain.com',
        'port' => 443,
        'region' => 'default',
        'use_ssl' => true,
        'use_path_style' => true,
        'autocreate' => true,
        'key' => 'YOUR_S3_ACCESS_KEY',
        'secret' => 'YOUR_S3_SECRET_KEY',
    ),
),
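The object naming scheme is worth understanding before committing to this mode: Nextcloud stores primary-storage objects under keys derived from the numeric file ID (the urn:oid convention), not under the file's path. The sketch below is illustrative, not Nextcloud code -- it only demonstrates the key format and why the database mapping matters:

```python
# Sketch: Nextcloud primary storage keys objects by numeric file ID.
# The path-to-ID mapping lives in the database (oc_filecache), which is
# why database backups matter as much as the bucket contents.

def object_key(file_id: int) -> str:
    """Return the S3 object key Nextcloud uses for a given file ID."""
    return f"urn:oid:{file_id}"

# A path like /alice/files/report.pdf is first resolved to a file ID via
# the database; only then can the object be fetched from S3.
print(object_key(1234))  # urn:oid:1234
```

A consequence: browsing the bucket with an S3 client shows opaque urn:oid keys, so the bucket alone is not a usable backup without the database.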

External Storage (files_external)

External storage mode uses the "External storage" app to mount S3 buckets as additional storage locations within Nextcloud. Users see them as folders alongside their regular files. The local filesystem remains the primary storage for user files.

Key characteristics:

- Mounted buckets appear as ordinary folders alongside users' existing files.
- The local filesystem remains primary storage; mounts can be added or removed without migrating existing data.
- Mounts can be scoped to specific users or groups from the admin panel.

External storage is configured through the Nextcloud admin panel or via occ:

sudo -u www-data php /var/www/nextcloud/occ files_external:create \
  "Shared Storage" amazons3 amazons3::accesskey \
  --config bucket="nextcloud-shared" \
  --config hostname="rgw.yourdomain.com" \
  --config port="443" \
  --config region="default" \
  --config use_ssl="true" \
  --config use_path_style="true" \
  --config key="YOUR_S3_ACCESS_KEY" \
  --config secret="YOUR_S3_SECRET_KEY"

Which Mode to Choose

Choose primary storage if you want all user data on object storage for unlimited scaling, built-in redundancy, and simplified infrastructure. This is the right choice for new deployments at scale or organizations planning for significant growth.

Choose external storage if you want to add object storage capacity to an existing deployment without migrating existing files, or if you need different storage tiers for different types of data (e.g., fast local NVMe for active projects, S3 for archived data).

The ADA Engine in Nextcloud Hub 9

Nextcloud Hub 9 (released as "Hub 9 Winter") introduced the ADA (Asynchronous Data Access) engine, which fundamentally improves how Nextcloud interacts with object storage. In previous versions, every file operation -- including reading file metadata, generating thumbnails, and serving downloads -- went through PHP synchronously. This meant a slow S3 response would block a PHP-FPM worker for the entire duration of the request.

The ADA engine introduces asynchronous processing for object storage operations: metadata reads, thumbnail generation, and downloads are handled without blocking a PHP-FPM worker for the full duration of the S3 request.

The ADA engine is enabled automatically when using S3 primary storage on Hub 9 or later. No additional configuration is required beyond the standard objectstore configuration in config.php. If you are running an older Nextcloud version, upgrading to Hub 9 will provide immediate performance improvements for S3-backed deployments, as covered in our performance tuning guide.

Ceph RADOS Gateway (RGW) Configuration

Ceph's RADOS Gateway provides an S3-compatible HTTP interface to a Ceph storage cluster. For organizations running their own infrastructure, Ceph RGW gives you the scalability of S3 without depending on a cloud provider.

Ceph Cluster Prerequisites

This section assumes you have a running Ceph cluster with at least three OSD nodes. If you are running Nextcloud on MassiveGRID's infrastructure, the Ceph cluster is already deployed and managed -- you will be provided with the RGW endpoint and credentials. For self-managed Ceph clusters, ensure at minimum that you have three or more OSD nodes, that ceph -s reports HEALTH_OK, and that the Nextcloud server can reach the planned RGW endpoint over the network.

Deploy the RADOS Gateway

Install the RGW package on the node(s) that will serve S3 traffic:

sudo apt install -y radosgw

Configure the RGW instance in /etc/ceph/ceph.conf. Add the following section (replace rgw0 with your chosen instance name):

[client.rgw.rgw0]
host = rgw-node-hostname
rgw_frontends = "beast port=7480 ssl_port=7443 ssl_certificate=/etc/ceph/rgw.pem"
rgw_dns_name = rgw.yourdomain.com
rgw_thread_pool_size = 512
# rgw_num_rados_handles is deprecated and ignored on recent Ceph releases
rgw_num_rados_handles = 4
rgw_enable_usage_log = true

# S3 compatibility settings
rgw_s3_auth_use_keystone = false
rgw_enable_static_website = false
rgw_relaxed_s3_bucket_names = false

# Performance tuning
rgw_cache_enabled = true
rgw_cache_lru_size = 50000
rgw_bucket_index_max_aio = 128
rgw_max_chunk_size = 4194304
rgw_put_obj_min_window_size = 16777216

Start the RGW service:

sudo systemctl enable ceph-radosgw@rgw.rgw0
sudo systemctl start ceph-radosgw@rgw.rgw0

Create the S3 User and Bucket

Create an S3 user for Nextcloud with the radosgw-admin command:

radosgw-admin user create \
  --uid="nextcloud" \
  --display-name="Nextcloud S3 User" \
  --max-buckets=10

The command outputs the access key and secret key. Save these securely -- they go into Nextcloud's config.php.
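Since radosgw-admin emits JSON, the credentials can also be captured programmatically. The snippet below parses an abbreviated, illustrative output (the real output contains many more fields; the keys array with access_key/secret_key is the part that matters):

```python
import json

# Abbreviated, illustrative radosgw-admin output (real output has more fields)
raw = """{
  "user_id": "nextcloud",
  "display_name": "Nextcloud S3 User",
  "keys": [
    {"user": "nextcloud", "access_key": "EXAMPLEKEY", "secret_key": "EXAMPLESECRET"}
  ]
}"""

info = json.loads(raw)
access_key = info["keys"][0]["access_key"]
secret_key = info["keys"][0]["secret_key"]
print(access_key, secret_key)
```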

Set a quota for the user if desired:

radosgw-admin quota set --uid=nextcloud --quota-scope=user --max-size=5T
radosgw-admin quota enable --uid=nextcloud --quota-scope=user

Create the bucket using the AWS CLI (or any S3-compatible tool):

aws s3 mb s3://nextcloud-primary \
  --endpoint-url https://rgw.yourdomain.com \
  --region default

Set a lifecycle policy if you want to automatically clean up incomplete multipart uploads (which can accumulate and waste storage):

aws s3api put-bucket-lifecycle-configuration \
  --bucket nextcloud-primary \
  --endpoint-url https://rgw.yourdomain.com \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "cleanup-incomplete-uploads",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }]
  }'
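If you manage buckets from code rather than the CLI, the same rule can be expressed as a plain dict -- for example, the shape boto3's put_bucket_lifecycle_configuration expects. A sketch, reusing the rule from above:

```python
import json

# The abort-incomplete-multipart-uploads rule from the CLI example, as a dict
lifecycle = {
    "Rules": [{
        "ID": "cleanup-incomplete-uploads",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        # Abort multipart uploads left unfinished for a week
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }]
}

# Serialized form, suitable for the CLI's --lifecycle-configuration argument
print(json.dumps(lifecycle))
```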

Complete config.php Configuration

Here is a complete S3 primary storage configuration for Nextcloud with all recommended settings:

'objectstore' => array(
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => array(
        // Bucket and endpoint
        'bucket' => 'nextcloud-primary',
        'hostname' => 'rgw.yourdomain.com',
        'port' => 443,
        'region' => 'default',
        'use_ssl' => true,

        // Authentication
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',

        // Path-style URLs (required for Ceph RGW)
        'use_path_style' => true,

        // Auto-create bucket on first use
        'autocreate' => true,

        // Multipart upload settings
        'uploadPartSize' => 524288000,  // 500 MB per part

        // Connection settings
        'verify_bucket_exists' => false,  // Skip bucket check on every request
    ),
),

Key settings explained:

- use_path_style: Ceph RGW serves buckets at path-style URLs (https://rgw.yourdomain.com/bucket); virtual-hosted style requires wildcard DNS for the RGW endpoint, so path style is the safe choice.
- port: 443 assumes a TLS-terminating proxy or load balancer in front of RGW; if Nextcloud talks directly to the Beast frontend configured earlier, use its ssl_port (7443) instead.
- autocreate: lets Nextcloud create the bucket on first use; harmless to leave enabled even though we created the bucket explicitly.
- uploadPartSize: files larger than this are sent as multipart uploads in parts of this size; larger parts mean fewer requests but more memory per upload.
- verify_bucket_exists: disabling this skips the bucket-existence check, saving an S3 round-trip.

Performance Tuning

Object storage has different performance characteristics than local filesystems. Latency per operation is higher (network round-trip vs. local I/O), but throughput can be higher because operations are distributed across the cluster. Tuning focuses on minimizing the impact of per-operation latency and maximizing throughput.

Multipart Upload Optimization

Files larger than the uploadPartSize are split into chunks and uploaded as a multipart upload. Each chunk is uploaded in a separate S3 PUT request, and a final CompleteMultipartUpload request assembles them. The number of concurrent chunk uploads affects throughput:

// In config.php -- increase concurrent upload parts
'objectstore' => array(
    'arguments' => array(
        // ... other settings ...
        'concurrency' => 5,  // Upload 5 parts simultaneously
    ),
),

For deployments where users frequently upload large files (videos, design assets, datasets), increasing concurrency to 5-10 can significantly improve upload speeds, at the cost of higher memory usage per upload operation.
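Some rough arithmetic helps when sizing these two settings. The 10,000-part ceiling below is the standard S3 multipart limit (which Ceph RGW also enforces by default); the part size and concurrency are the values from the examples above:

```python
PART_SIZE = 500 * 1024 * 1024   # uploadPartSize from config.php (500 MiB)
MAX_PARTS = 10_000              # standard S3 multipart upload limit

# Largest single object storable with this part size
max_object_tib = PART_SIZE * MAX_PARTS / 1024**4
print(f"max object size: {max_object_tib:.2f} TiB")

# Parts needed for a 4 GiB upload
file_size = 4 * 1024**3
parts = -(-file_size // PART_SIZE)  # ceiling division
print(f"4 GiB file -> {parts} parts")

# Peak buffer with 'concurrency' => 5: five parts in flight at once
print(f"~{5 * PART_SIZE / 1024**3:.1f} GiB buffered at peak")
```

So 500 MiB parts allow objects up to roughly 4.8 TiB, and concurrency of 5 implies about 2.4 GiB of upload buffers per large transfer -- the memory cost mentioned above.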

Redis Caching for S3 Metadata

Redis caching is even more critical for S3-backed Nextcloud than for filesystem-based deployments. Every file listing, property check, and metadata lookup that hits S3 instead of cache adds network latency. Ensure Redis is configured with adequate memory and that Nextcloud's caching is properly set up:

'memcache.local' => '\\OC\\Memcache\\APCu',
'memcache.distributed' => '\\OC\\Memcache\\Redis',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' => array(
    'host' => '127.0.0.1',
    'port' => 6379,
    'timeout' => 0.0,
),

Increase the Redis memory allocation for S3-backed deployments. File metadata caching reduces S3 API calls significantly:

# In /etc/redis/redis.conf
maxmemory 512mb
maxmemory-policy allkeys-lru

PHP OPcache and APCu

With S3 adding latency to file operations, it is even more important that PHP itself runs efficiently. Ensure OPcache and APCu are properly configured to minimize the overhead of PHP code execution, so that the total request time is dominated by S3 latency (which you can control through caching) rather than PHP processing time.

Nginx Buffering

For large file downloads streamed from S3, Nginx's proxy buffering behavior matters. Disable buffering for file download requests to avoid Nginx writing the entire file to disk before sending it to the client:

location ~ \.php(?:$|/) {
    # ... standard fastcgi settings ...
    
    # Disable response buffering for downloads
    fastcgi_buffering off;
    fastcgi_request_buffering off;
}

Tiered Storage Strategy

Object storage does not have to be all-or-nothing. A tiered storage strategy uses different storage backends for different data types, optimizing cost and performance:

Tier | Storage Type  | Use Case                              | Characteristics
-----|---------------|---------------------------------------|-----------------------------------------------
Hot  | Local NVMe    | Active projects, databases, app code  | Lowest latency, highest cost per GB
Warm | Ceph SSD pool | User files (primary S3 storage)       | Good throughput, scalable, triple-replicated
Cold | Ceph HDD pool | Archived files, old versions, backups | High capacity, lower cost, adequate throughput

In Ceph, you can create separate pools backed by different device classes (SSD vs. HDD) and assign them to different S3 buckets. Nextcloud's external storage can mount the cold tier bucket as an "Archive" folder, giving users a clear distinction between active storage and archive storage.

Ceph's CRUSH rules control data placement:

# Create an SSD-only CRUSH rule
ceph osd crush rule create-replicated ssd-rule default host ssd

# Create an HDD-only CRUSH rule
ceph osd crush rule create-replicated hdd-rule default host hdd

# Create pools using these rules
ceph osd pool create nextcloud-primary 128 128 replicated ssd-rule
ceph osd pool create nextcloud-archive 64 64 replicated hdd-rule

This configuration keeps active user files on SSD-backed storage for optimal latency while moving archived data to cost-effective HDD storage -- all within the same Ceph cluster, managed through the same S3 interface.
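The PG counts in the pool-creation commands (128 and 64) should be sized to the cluster. A common rule of thumb from the Ceph documentation is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two; the sketch below applies it to hypothetical cluster sizes:

```python
def suggested_pg_count(num_osds: int, replicas: int = 3,
                       target_per_osd: int = 100) -> int:
    """Ceph's rule-of-thumb PG count, rounded up to the next power of two."""
    raw = num_osds * target_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

print(suggested_pg_count(4))    # small 4-OSD cluster
print(suggested_pg_count(12))   # 12 OSDs, 3x replication
```

Recent Ceph releases enable the pg_autoscaler by default, which adjusts these values automatically, so the initial number matters less than it used to.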

Backup Considerations

Object storage with triple replication protects against hardware failure, but it does not protect against accidental deletion, application bugs, or ransomware. Your backup strategy for S3-backed Nextcloud should address those threats, starting with bucket versioning plus lifecycle rules to bound its storage cost.

Enable bucket versioning:

aws s3api put-bucket-versioning \
  --bucket nextcloud-primary \
  --endpoint-url https://rgw.yourdomain.com \
  --versioning-configuration Status=Enabled

Set a lifecycle policy to expire old versions after a retention period (e.g., 90 days):

aws s3api put-bucket-lifecycle-configuration \
  --bucket nextcloud-primary \
  --endpoint-url https://rgw.yourdomain.com \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 90
      }
    }]
  }'

Migration from Filesystem to S3

Migrating an existing filesystem-based Nextcloud deployment to S3 primary storage requires moving all user files from the local data directory to the S3 bucket and updating the database references. This is a significant operation that requires downtime.

Pre-Migration Checklist

  1. Full backup: Back up the Nextcloud database, data directory, and config.php before starting.
  2. Verify S3 connectivity: Ensure the Nextcloud server can reach the Ceph RGW endpoint and authenticate with the configured credentials.
  3. Estimate migration time: The migration speed is limited by the slowest of local disk read speed, network bandwidth to Ceph, and Ceph write throughput. For 1 TB of data on a 1 Gbps connection, expect approximately 3-4 hours.
  4. Schedule maintenance window: Nextcloud must be in maintenance mode during migration. Users cannot access their files.
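The 3-4 hour figure comes from simple bandwidth arithmetic plus overhead; the overhead factor below is an assumption for planning purposes, not a measured value:

```python
data_tb = 1.0     # data to migrate, in TB
link_gbps = 1.0   # network bandwidth to the Ceph cluster

raw_seconds = data_tb * 8_000 / link_gbps  # 1 TB = 8,000 gigabits
print(f"theoretical minimum: {raw_seconds / 3600:.1f} h")

# Per-file request latency, protocol overhead, and database updates add
# to the wire time -- assume roughly 60% extra for mixed file sizes
print(f"planning estimate: {raw_seconds * 1.6 / 3600:.1f} h")
```

Many small files skew the estimate upward (per-object overhead dominates), while a few large files bring it closer to the theoretical minimum.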

Migration Steps

Enable maintenance mode:

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

Nextcloud provides an occ command for migrating files to object storage. The exact process depends on your Nextcloud version:

# Scan all files to ensure the database is in sync with the filesystem
sudo -u www-data php /var/www/nextcloud/occ files:scan --all

# Migrate to object storage (Nextcloud 28+)
sudo -u www-data php /var/www/nextcloud/occ files:move-to-object-store

For older Nextcloud versions that do not have the files:move-to-object-store command, the migration requires a more manual approach: configure the S3 backend in config.php, then use a migration script to copy files from the local data directory to S3 while updating the database entries to point to the new S3 object keys.

After migration, verify the data:

# Check for any missing files
sudo -u www-data php /var/www/nextcloud/occ files:scan --all

# Verify S3 bucket has objects
aws s3 ls s3://nextcloud-primary/ --endpoint-url https://rgw.yourdomain.com | head -20

Disable maintenance mode:

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off

Do not delete the old data directory immediately. Keep it for at least two weeks after migration as a rollback safety net. Verify that all users can access their files, that sharing works correctly, and that file versions are intact before removing the old data.

MassiveGRID Ceph Storage for Nextcloud

For organizations that want the scalability and redundancy of Ceph object storage without operating a Ceph cluster themselves, MassiveGRID's infrastructure provides Ceph-backed storage as a core component of the managed Nextcloud hosting platform.

Every Nextcloud deployment on MassiveGRID runs on Ceph distributed storage with:

The high-availability architecture ensures that even if an entire storage node fails, Nextcloud continues operating without interruption. Ceph serves data from the remaining replicas while automatically re-replicating to restore the configured redundancy level.

If your organization is ready to move beyond local filesystem limitations and deploy Nextcloud on enterprise-grade object storage, MassiveGRID's managed Nextcloud platform provides the infrastructure with Ceph distributed storage, production-optimized configuration, and ongoing management included.