xWiki performance tuning is the difference between a wiki that feels sluggish under load and one that responds instantly even with hundreds of concurrent users. The default settings that ship with xWiki are conservative -- designed to work on minimal hardware without crashing, not to deliver fast page loads at scale. As your wiki grows in content, users, and extensions, those defaults become the bottleneck. The good news is that systematic tuning of the JVM, PostgreSQL, Hibernate caching, and Solr indexing can transform a struggling xWiki instance into one that handles enterprise workloads without breaking a sweat. MassiveGRID's managed xWiki hosting includes these optimizations from day one, but the principles apply to any self-hosted deployment.

Performance tuning is not guesswork. Each layer of the xWiki stack -- Java runtime, database, cache, search engine, and frontend delivery -- has specific settings that measurably impact response times. The approach in this guide is sequential: start with the JVM because it affects everything, move to the database because it handles every read and write, then address caching to reduce the load on both, and finally optimize search and frontend delivery. If you are still in the planning stage and have not yet deployed xWiki, our production installation guide covers the initial setup with sensible defaults that this guide builds upon. For a broader comparison of xWiki's capabilities against Confluence, see our xWiki vs Confluence enterprise comparison.

JVM and Java Heap Tuning

The Java Virtual Machine is the foundation of xWiki's performance. Every page render, every database query result, every cached object lives in JVM memory. When the heap is too small, the garbage collector runs constantly, pausing the application to reclaim memory. When the heap is sized correctly, garbage collection is infrequent and fast, and xWiki spends its CPU cycles doing useful work instead of cleaning up after itself.

Heap Sizing Guidelines

The minimum heap for a production xWiki instance is 2 GB, which supports small teams of up to 50 users with moderate content. Once you exceed 100 concurrent users or accumulate more than 100,000 pages, increase the heap to 4 GB. Deployments serving 500 or more users, or those with extensive extension usage and large attachment libraries, should allocate 8 GB or more. The critical rule is to set -Xms and -Xmx to the same value. When these differ, the JVM wastes time growing and shrinking the heap, and the resizing process itself triggers full garbage collection pauses.

CATALINA_OPTS="-Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+ParallelRefProcEnabled -Djava.awt.headless=true"
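Where this variable lives depends on your install. On a typical package or tarball Tomcat, a setenv.sh file in Tomcat's bin directory is the conventional place (the exact path varies by distribution, so treat it as an assumption):

```shell
# $CATALINA_BASE/bin/setenv.sh -- sourced automatically by Tomcat's
# startup scripts if it exists; keep all JVM options in one place here.
CATALINA_OPTS="-Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
  -XX:+ParallelRefProcEnabled -Djava.awt.headless=true"
export CATALINA_OPTS
```

After editing the file, restart Tomcat and confirm the flags took effect with ps aux | grep java.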

Garbage Collector Selection

G1GC (Garbage-First Garbage Collector) is the recommended collector for xWiki deployments with heap sizes between 2 and 8 GB. It divides the heap into regions and collects garbage incrementally, targeting a maximum pause time that you specify with -XX:MaxGCPauseMillis. A value of 200 milliseconds works well for most xWiki workloads. For very large heaps of 16 GB or more, ZGC offers sub-millisecond pause times at the cost of slightly higher CPU overhead. ZGC is worth evaluating for large enterprise deployments where consistent response times matter more than raw throughput.
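If you trial ZGC, the switch is a single flag. ZGC has been production-ready since JDK 15, and no pause-time target is needed because it manages pauses itself:

```shell
# ZGC for 16 GB+ heaps; requires JDK 15 or later.
# No -XX:MaxGCPauseMillis needed -- ZGC targets sub-millisecond pauses on its own.
CATALINA_OPTS="-Xms16g -Xmx16g -XX:+UseZGC -Djava.awt.headless=true"
export CATALINA_OPTS
```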

GC Logging for Diagnostics

Enable garbage collection logging so you can diagnose performance issues without guessing. Modern JVMs use the unified logging framework, which writes GC events to a rotating log file. Review these logs when users report slowdowns -- a spike in full GC events or pause times exceeding your target is a clear signal that the heap needs more memory or that a memory leak exists in an extension.

-Xlog:gc*:file=/var/log/tomcat/gc.log:time,level,tags:filecount=5,filesize=10M

Tomcat Thread Pool Sizing

Tomcat's default thread pool of 200 threads is excessive for most xWiki deployments and wastes memory. Each thread consumes roughly 1 MB of stack space, so 200 threads consume 200 MB before they do any work. For a deployment with up to 100 concurrent users, a thread pool of 50 to 80 threads is sufficient. Set the maxThreads attribute in Tomcat's server.xml connector configuration to match your expected concurrency, and set minSpareThreads to about 25 percent of that value to keep warm threads ready for incoming requests.
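In server.xml, that sizing translates to connector attributes like these (port, protocol, and timeout are Tomcat's defaults; only maxThreads and minSpareThreads change):

```xml
<!-- server.xml: HTTP connector sized for roughly 100 concurrent users -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="80"
           minSpareThreads="20"
           redirectPort="8443" />
```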

PostgreSQL Optimization

PostgreSQL handles every piece of persistent data in xWiki -- page content, user profiles, permissions, attachment metadata, and extension configurations. The default PostgreSQL configuration is tuned for a machine with 512 MB of RAM and minimal concurrent connections. On a server dedicated to xWiki with 8 GB or more of RAM, the defaults leave enormous performance on the table.

Memory Configuration

The most impactful PostgreSQL setting is shared_buffers, which controls how much memory PostgreSQL uses for caching data pages. Set it to 25 percent of total system RAM. On an 8 GB server, that means 2 GB. The effective_cache_size parameter tells the query planner how much memory is available for caching across both PostgreSQL's own buffers and the operating system's file cache. Set this to 50 to 75 percent of total RAM -- it does not allocate memory but influences query planning decisions. The work_mem parameter controls memory allocated per sort or hash operation. Start with 64 MB for xWiki workloads and increase if you see temporary files being written to disk during complex queries. Set maintenance_work_mem to 512 MB to speed up vacuum, index creation, and other maintenance operations.

# postgresql.conf tuning for xWiki on 8 GB server
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 64MB
maintenance_work_mem = 512MB
max_connections = 100
wal_buffers = 64MB
checkpoint_completion_target = 0.9
random_page_cost = 1.1  # NVMe storage

WAL and Checkpoint Tuning

Write-Ahead Logging (WAL) is PostgreSQL's mechanism for ensuring data durability. The wal_buffers setting controls how much memory is available for WAL data before it is written to disk. Set it to 64 MB for xWiki workloads. The checkpoint_completion_target of 0.9 spreads checkpoint I/O over a longer period, preventing the I/O spikes that cause momentary slowdowns during heavy write operations.

Connection Pooling

xWiki needs a database connection for each active request. Without connection pooling, each new connection incurs the overhead of authentication and session setup. PgBouncer, a lightweight connection pooler, sits between xWiki and PostgreSQL and maintains a pool of warm connections. Configure it in transaction mode with a pool size of 20 to 50 connections, depending on your concurrency level. This reduces connection overhead and allows PostgreSQL's max_connections to stay low, which reduces memory consumption per connection.
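One caveat worth checking: transaction-mode pooling only handles server-side prepared statements cleanly on PgBouncer 1.21 or newer (via max_prepared_statements); on older versions you may need session mode instead. A minimal configuration sketch, with the database name, credentials file path, and pool sizes as assumptions to adjust:

```ini
; pgbouncer.ini -- transaction-mode pooling for xWiki
[databases]
xwiki = host=127.0.0.1 port=5432 dbname=xwiki

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 30
max_client_conn = 200
```

Point xWiki's JDBC URL at port 6432 instead of 5432 so that all traffic flows through the pooler.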

Indexing

PostgreSQL creates indexes automatically only for primary keys and unique constraints, so supplement them with manual indexes on columns that xWiki queries frequently. The xWiki schema ships with its own indexes by default, but custom applications built with xWiki's structured data features may benefit from additional indexes on the properties they query. Use EXPLAIN ANALYZE on slow queries to identify missing indexes, and monitor the pg_stat_user_indexes view to find unused indexes that waste write performance.
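Both checks are a few lines of SQL. The table in the first query comes from xWiki's default schema but is purely illustrative -- substitute whichever query your logs show as slow:

```sql
-- Check a slow query's plan for sequential scans an index would avoid
EXPLAIN ANALYZE
SELECT * FROM xwikistrings WHERE xws_value = 'some-value';

-- Find indexes that are never used (candidates for removal),
-- largest first, since big unused indexes cost the most on writes
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```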

Hibernate Second-Level Cache

xWiki uses Hibernate as its object-relational mapping layer, and Hibernate supports a second-level (L2) cache that stores frequently accessed database objects in memory. When the L2 cache is properly configured, xWiki can serve page metadata, user profiles, and permission structures from memory without hitting the database, dramatically reducing query load and response times.

Enabling the L2 Cache

xWiki supports Infinispan as its L2 cache provider by default in recent versions. The cache configuration lives in xWiki's Hibernate configuration file. Enable the cache and configure region-specific settings for different types of data. Entity caching stores complete database objects, query caching stores the results of frequently executed queries, and collection caching stores related sets of objects.
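In hibernate.cfg.xml, the switches below are standard Hibernate properties; the region factory value shown is a placeholder, since the provider shortcut or class name varies across xWiki and Hibernate versions -- verify it against the file shipped with your release:

```xml
<!-- hibernate.cfg.xml: enable the second-level and query caches -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>
<!-- Provider name/class is version-dependent; check your shipped file -->
<property name="hibernate.cache.region.factory_class">infinispan</property>
<!-- Expose hit/miss counters via JMX for cache monitoring -->
<property name="hibernate.generate_statistics">true</property>
```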

Cache Sizing and Eviction

The cache needs enough entries to hold your working set -- the data that is accessed most frequently. For a wiki with 50,000 pages and 500 active users, the document cache should hold at least 10,000 entries, the user cache should hold all active users, and the permission cache should hold entries for every space and page that is accessed during a typical session. Set maximum entries per region based on your content volume and monitor eviction rates. High eviction rates indicate that the cache is too small and is constantly discarding and reloading data.

Monitoring Cache Hit Rates

A well-configured L2 cache should achieve hit rates of 90 percent or higher for entity and query caches. Monitor hit rates through JMX or by enabling Hibernate statistics in xWiki's configuration. If hit rates are low, either the cache is too small (objects are being evicted before they are reused) or the application is querying data that changes too frequently to benefit from caching. Focus cache tuning on read-heavy data -- page content, user profiles, and permission structures -- rather than frequently modified data like recent activity feeds.

Solr Search Indexing

xWiki uses Apache Solr for full-text search. By default, xWiki runs an embedded Solr instance within the same JVM. This works for small wikis but creates problems at scale: Solr's memory requirements compete with xWiki's own heap, indexing operations cause garbage collection pressure, and a large search index can consume significant disk I/O.

When to Switch to External Solr

Once your wiki exceeds 100 concurrent users or 100,000 pages, move to an external Solr instance running in its own JVM. This separation allows you to tune Solr's memory independently of xWiki's heap, prevents indexing operations from degrading wiki performance, and enables Solr-specific optimizations like dedicated NVMe storage for the search index.

External Solr Configuration

Install Solr on the same server or a dedicated server, configure it with the xWiki schema, and update xWiki's configuration to point to the external instance. Allocate at least 2 GB of heap to Solr for a wiki with 100,000 pages, and increase proportionally for larger wikis. Store the Solr index on NVMe storage for fast read and write performance during indexing operations.
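Switching to the external instance is then a small change in xwiki.properties. The property names below follow xWiki's commented defaults, but they have changed across releases, so confirm against the comments in your own file:

```properties
# xwiki.properties: use a standalone Solr instead of the embedded one
solr.type = remote
# Older releases take the URL of the xwiki core, as shown here; newer
# releases expose solr.remote.baseURL pointing at the Solr root instead
solr.remote.url = http://localhost:8983/solr/xwiki
```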

Reindexing and Commit Intervals

After migrating to external Solr or importing a large batch of content, trigger a full reindex from xWiki's administration panel. Adjust the commit interval to balance search freshness against indexing overhead. A commit interval of 10 seconds provides near-real-time search updates without excessive I/O. For bulk import operations, temporarily increase the interval to 60 seconds to avoid overwhelming Solr with commits.
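In the Solr core's solrconfig.xml, those commit intervals map onto the autoCommit settings (values are in milliseconds):

```xml
<!-- solrconfig.xml: soft commits control when new documents become
     searchable; hard commits flush the index to stable storage -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>10000</maxTime>  <!-- raise to 60000 during bulk imports -->
</autoSoftCommit>
```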

Frontend and Rendering Optimization

Server-side tuning handles the backend, but frontend optimizations reduce the amount of work the server needs to do for each page view and improve the user's perceived performance.

Page Rendering Cache

xWiki can cache the rendered HTML output of pages so that subsequent views do not require re-rendering. Enable the rendering cache for pages that are read frequently but edited infrequently -- documentation pages, policy documents, and reference material. The cache is invalidated automatically when a page is edited, so content freshness is maintained.
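The rendering cache is controlled by the core.renderingcache.* properties in xwiki.properties; the space pattern below is an illustrative example, not a recommendation:

```properties
# xwiki.properties: cache rendered page content
core.renderingcache.enabled = true
# Only cache pages whose reference matches these regexes (example pattern)
core.renderingcache.patterns = /Documentation/.*
# Maximum cached entries, and entry lifetime in seconds
core.renderingcache.size = 1000
core.renderingcache.duration = 300
```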

Nginx Static Asset Caching

Configure Nginx to serve static assets -- JavaScript, CSS, images, and fonts -- with long cache lifetimes. These assets change only when xWiki is upgraded, so setting a cache lifetime of 30 days with versioned URLs eliminates redundant requests to the application server. Add gzip compression for text-based assets to reduce bandwidth consumption and improve load times for users on slower connections.

location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff2?)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
    access_log off;
}

gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
gzip_min_length 1000;

Extension Audit

Every installed xWiki extension adds rendering overhead, memory consumption, and potential database queries to each page load. Audit your installed extensions periodically and remove any that are not actively used. Extensions that add UI elements to every page -- sidebars, widgets, notification panels -- have the largest per-page impact. A wiki with 50 extensions will always be slower than one with 20, regardless of how well the JVM and database are tuned.

Storage I/O Considerations

Storage performance is the silent bottleneck that tuning guides often overlook. The fastest PostgreSQL configuration in the world cannot compensate for slow disk I/O. Every database read that misses the buffer cache, every Solr query that hits the index on disk, and every attachment download goes through storage.

NVMe SSD Is Non-Negotiable

NVMe SSDs provide the random read/write performance that xWiki's workload demands. PostgreSQL's random reads during complex queries, Solr's index lookups, and Tomcat's temporary file operations all benefit from NVMe's low latency and high IOPS. Traditional SATA SSDs are acceptable for small deployments, but spinning disks will create persistent performance problems regardless of how well the software is tuned. All MassiveGRID cloud servers include NVMe storage by default.

Separate Mount Points

For larger deployments, place PostgreSQL's data directory, the Solr index, and xWiki's attachment storage on separate mount points. This prevents I/O contention between database operations, search indexing, and file serving. On MassiveGRID infrastructure, Ceph distributed storage provides triple-replicated data with consistent performance regardless of the I/O pattern.

Monitoring and Continuous Tuning

Tuning is not a one-time activity. As your wiki grows and usage patterns change, the optimal settings shift. Monitoring infrastructure provides the data you need to identify bottlenecks before they become user-facing problems.

Prometheus and Grafana

Deploy Prometheus to collect metrics from Tomcat (via the JMX exporter), PostgreSQL (via the PostgreSQL exporter), and the host system (via the node exporter). Grafana provides dashboards that visualize these metrics over time, making it easy to spot trends like gradually increasing GC pause times, growing query latency, or cache hit rates declining as content volume increases.
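A minimal prometheus.yml for those three exporters might look like the sketch below. The ports are the exporters' conventional defaults -- the JMX exporter listens on whatever port you pass to its javaagent -- so adjust them to your deployment:

```yaml
# prometheus.yml -- scrape the exporters named above (ports are assumptions)
scrape_configs:
  - job_name: tomcat-jmx        # JMX exporter javaagent on the Tomcat JVM
    static_configs:
      - targets: ['localhost:9404']
  - job_name: postgres          # postgres_exporter, default port 9187
    static_configs:
      - targets: ['localhost:9187']
  - job_name: node              # node_exporter, default port 9100
    static_configs:
      - targets: ['localhost:9100']
```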

Key Metrics to Watch

Focus on a small set of metrics that directly correlate with user experience. JVM heap usage and GC pause duration indicate whether memory is sufficient. PostgreSQL's cache hit ratio (target: above 99 percent), active connections, and query latency reveal database health. Solr's query response time and index size show search performance. Tomcat's active threads and request processing time indicate whether the application tier is keeping up with demand. Nginx's request rate and response codes provide the user-facing view of performance.

Alerting

Configure alerts for conditions that require intervention: JVM heap usage above 85 percent for sustained periods, GC pause times exceeding 500 milliseconds, PostgreSQL cache hit ratio below 95 percent, Tomcat thread pool saturation, and disk usage above 80 percent. These alerts should reach your operations team before users start filing complaints about slow performance.
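As a sketch, two of these thresholds expressed as Prometheus alerting rules -- the metric names follow the JMX and postgres exporters' usual output and should be verified against your exporter versions:

```yaml
# alert-rules.yml: example thresholds matching the conditions above
groups:
  - name: xwiki
    rules:
      - alert: JvmHeapHigh
        expr: jvm_memory_bytes_used{area="heap"} / jvm_memory_bytes_max{area="heap"} > 0.85
        for: 15m
        labels: { severity: warning }
        annotations:
          summary: "JVM heap above 85% for 15 minutes"
      - alert: PgCacheHitLow
        expr: |
          sum(rate(pg_stat_database_blks_hit[5m]))
            / (sum(rate(pg_stat_database_blks_hit[5m]))
               + sum(rate(pg_stat_database_blks_read[5m]))) < 0.95
        for: 10m
        labels: { severity: warning }
        annotations:
          summary: "PostgreSQL cache hit ratio below 95%"
```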

For organizations that prefer to delegate tuning and monitoring to specialists, MassiveGRID's managed xWiki hosting includes all of these optimizations as standard. The infrastructure is pre-tuned for xWiki workloads, monitoring runs continuously, and the operations team proactively adjusts settings as your wiki grows. For deployments that need to scale beyond a single server, our xWiki clustering architecture guide covers multi-node deployments with load balancing and database replication.

If you are experiencing performance issues with your current xWiki deployment or planning a migration from Confluence that needs to handle a large user base, explore MassiveGRID's managed xWiki hosting or contact our team to discuss your specific requirements.

Written by MassiveGRID — As an official xWiki hosting partner, MassiveGRID provides managed xWiki hosting on high-availability infrastructure across data centers in Frankfurt, London, New York, and Singapore.