Programmatic Control Over Your Knowledge Base
Every mature knowledge management deployment eventually reaches a point where manual content management becomes unsustainable. When your wiki contains thousands of pages across dozens of spaces, when content must be synchronized with external systems, when documentation must be generated automatically from code repositories or project management tools, the browser-based editing interface is no longer sufficient. This is where xWiki's REST API transforms the platform from an interactive wiki into a programmable content management engine that can be orchestrated by scripts, pipelines, and external applications.
xWiki has provided RESTful API access for more than a decade, and the API has matured alongside the platform through over twenty years of continuous development. Today, the REST API exposes virtually every capability of the xWiki platform through standard HTTP endpoints, enabling external systems to create, read, update, and delete pages, spaces, attachments, users, groups, and workflow states. Over 800 teams rely on xWiki's extensibility, and for many of them, the REST API is the integration backbone that connects xWiki to the broader technology ecosystem.
When you deploy xWiki on MassiveGRID's managed hosting infrastructure, your API workloads benefit from enterprise-grade compute and network resources across our data centers in Frankfurt, London, New York, and Singapore. Our 100% uptime SLA ensures that the API endpoints your integrations depend on are always available, and our ISO 9001-certified operations guarantee consistent performance characteristics that your automation scripts can rely on.
REST API Architecture and Authentication
xWiki's REST API follows standard RESTful conventions. Resources are identified by URLs, operations are expressed through HTTP methods, and responses are formatted in XML or JSON. The API's URL structure mirrors the wiki's content hierarchy, making endpoints intuitive for anyone familiar with the wiki's organization. A page in the Main space called "ProjectPlan" is accessible at a URL like /rest/wikis/xwiki/spaces/Main/pages/ProjectPlan, and its attachments, objects, properties, and history are available at predictable sub-paths beneath that URL.
The API exposes endpoints for every major content type in xWiki. Pages and their content are the primary resources, but the API also provides access to spaces (the organizational containers for pages), attachments (files uploaded to pages), objects (structured data associated with pages), classes (the schemas that define object structures), users and groups, tags, comments, and annotations. This comprehensive coverage means that any operation that can be performed through the web interface can also be performed through the API.
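As an illustration of the sub-resource pattern, the structured objects attached to a page can be listed through a path beneath the page URL. This is a sketch using the default wiki name "xwiki", Basic auth, and the example host from earlier in this article:

```shell
# List the structured objects attached to a page; the /history and
# /attachments sub-paths follow the same pattern
OBJECTS_URL="https://wiki.example.com/rest/wikis/xwiki/spaces/Main/pages/ProjectPlan/objects"
curl -u "admin:password" "${OBJECTS_URL}" -H "Accept: application/json"
```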
Authentication is handled through multiple mechanisms to accommodate different integration scenarios. Basic authentication transmits the username and password with each request, encoded in the HTTP Authorization header. While straightforward to implement, Basic authentication should only be used over HTTPS connections to prevent credential exposure. For automated systems and CI/CD pipelines, API tokens provide a more secure alternative that avoids embedding user credentials in scripts. Tokens can be scoped to specific permissions and revoked independently without affecting the user account they were issued from.
# Basic authentication example
curl -u "admin:password" \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/Main/pages/ProjectPlan" \
  -H "Accept: application/json"

# Token-based authentication
curl -H "Authorization: Bearer xwiki_api_token_here" \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/Main/pages/ProjectPlan" \
  -H "Accept: application/json"
Session-based authentication is also supported for browser-based JavaScript applications that need to interact with the API from the client side. After authenticating through xWiki's login endpoint, the session cookie is included automatically in subsequent API requests from the same browser session. This approach is particularly useful for custom user interfaces and dashboards that are embedded within wiki pages.
Common CRUD Operations with Examples
The bread and butter of any REST API integration consists of create, read, update, and delete operations on content resources. xWiki's API implements these operations through standard HTTP methods: PUT for creation and updates, GET for reading, and DELETE for deletion. The following examples demonstrate the most common operations that integration developers encounter.
Creating a new page requires a PUT request to the desired page URL with the page content in the request body. The content can be provided in xWiki syntax, HTML, or plain text, and the API will store it according to the specified syntax identifier.
# Create a new page with xWiki syntax content
curl -u "admin:password" -X PUT \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/Projects/pages/Q1-Roadmap" \
  -H "Content-Type: application/xml" \
  -d '<page xmlns="http://www.xwiki.org">
  <title>Q1 2026 Product Roadmap</title>
  <syntax>xwiki/2.1</syntax>
  <content>= Q1 Roadmap =
This document outlines our product priorities for Q1 2026.
== Key Initiatives ==
* Platform scalability improvements
* API v2 development
* Mobile experience redesign</content>
</page>'
Reading page content is a straightforward GET request. The response includes the page's title, content, syntax, author, creation date, modification date, version number, and other metadata. By specifying the Accept header, you can request the response in either XML or JSON format.
# Read page content as JSON
curl -u "admin:password" \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/Projects/pages/Q1-Roadmap" \
  -H "Accept: application/json"
Updating existing content uses the same PUT method as creation. PUT requests follow upsert semantics: a PUT to an existing page URL updates the page, while a PUT to a non-existent page URL creates it, and repeating the same request leaves the page in the same state (the operation is idempotent). This simplifies client code by eliminating the need to check whether a page exists before writing to it.
Querying and searching across the wiki is one of the API's most powerful capabilities. The search endpoint accepts queries in multiple formats, including full-text search, Lucene query syntax, and xWiki's native query language. Search results include page titles, URLs, excerpts, and relevance scores, enabling external applications to build rich search experiences on top of xWiki content.
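A minimal full-text query against the search endpoint might look like the following sketch. The endpoint path follows xWiki's REST URL conventions; the q and number parameter names are assumptions to verify against your server version's API reference:

```shell
# Full-text search across the wiki, limited to 10 results
QUERY="roadmap"
SEARCH_URL="https://wiki.example.com/rest/wikis/xwiki/search?q=${QUERY}&number=10"
curl -u "admin:password" "${SEARCH_URL}" -H "Accept: application/json"
```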
Uploading attachments uses a PUT request to the attachment endpoint with the file content in the request body. The Content-Type header should reflect the file's MIME type, and the filename is specified in the URL path.
# Upload a PDF attachment to a page
curl -u "admin:password" -X PUT \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/Projects/pages/Q1-Roadmap/attachments/roadmap-diagram.pdf" \
  -H "Content-Type: application/pdf" \
  --data-binary @roadmap-diagram.pdf
For JavaScript-based integrations, the same operations can be performed using the Fetch API or libraries like Axios. The following example demonstrates creating a page from a browser-based application.
// Create a page using JavaScript Fetch API
const response = await fetch(
  'https://wiki.example.com/rest/wikis/xwiki/spaces/Projects/pages/NewPage',
  {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer your_api_token'
    },
    body: JSON.stringify({
      title: 'New Project Page',
      syntax: 'xwiki/2.1',
      content: '= New Project =\n\nProject description here.'
    })
  }
);
const page = await response.json();
console.log('Page created:', page.title);
Advanced Integration Patterns
Beyond basic CRUD operations, the xWiki REST API enables sophisticated integration patterns that connect the wiki to the broader software development and business process landscape.
CI/CD pipeline integration is one of the most impactful patterns for engineering organizations. By incorporating API calls into build and deployment pipelines, teams can ensure that wiki documentation stays synchronized with the code it describes. A pipeline stage might extract API documentation from source code annotations and publish it to xWiki pages automatically, update deployment logs with the results of each deployment run, or generate release notes from commit messages and merge request descriptions. These automations eliminate the documentation lag that plagues most engineering organizations and ensure that the wiki always reflects the current state of the systems it documents.
# CI/CD pipeline step: Append a row to the deployment log page in xWiki
DEPLOY_DATE=$(date -u +"%Y-%m-%d %H:%M UTC")
ROW="| ${DEPLOY_DATE} | ${BUILD_NUMBER} | ${GIT_COMMIT:0:8} | ${DEPLOY_STATUS} |"
# Fetch the current content first so the new row can be appended (requires jq);
# content containing XML special characters (&, <, >) must be escaped before re-submitting
EXISTING_CONTENT=$(curl -s -u "${WIKI_USER}:${WIKI_TOKEN}" \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/DevOps/pages/DeploymentLog" \
  -H "Accept: application/json" | jq -r '.content')
curl -u "${WIKI_USER}:${WIKI_TOKEN}" -X PUT \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/DevOps/pages/DeploymentLog" \
  -H "Content-Type: application/xml" \
  -d "<page xmlns='http://www.xwiki.org'>
<content>${EXISTING_CONTENT}
${ROW}</content>
</page>"
External data synchronization allows xWiki to serve as a centralized knowledge hub that aggregates information from multiple source systems. A scheduled script might pull customer data from a CRM and update account pages in the wiki, synchronize project milestones from a project management tool, import incident reports from a monitoring system, or replicate configuration documentation from an infrastructure-as-code repository. The API's comprehensive CRUD capabilities make xWiki an effective integration target for virtually any data source that can produce structured output.
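As a sketch of this pull-and-publish pattern, the following assumes a hypothetical CRM record and response shape; the Accounts space, the Account-42 page name, and the .name field are placeholders, not part of any real API:

```shell
# Pull one record from an external system and publish it as a wiki page (requires jq)
ACCOUNT_JSON='{"name":"Acme Corp"}'   # in practice, fetched from the CRM's API
NAME=$(printf '%s' "${ACCOUNT_JSON}" | jq -r '.name')
curl -s -u "${WIKI_USER}:${WIKI_TOKEN}" -X PUT \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/Accounts/pages/Account-42" \
  -H "Content-Type: text/plain" \
  --data "= ${NAME} =
Synchronized from the CRM."
```

Running such a script on a schedule (cron, or a pipeline job) keeps the wiki copy of the record current without manual editing.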
Webhooks enable event-driven integrations where external systems react to changes in the wiki rather than polling for updates. When configured, xWiki can send HTTP callbacks to specified URLs whenever pages are created, modified, or deleted. This pattern is particularly valuable for maintaining search indices, triggering notification workflows, invalidating caches, or replicating content to secondary systems. Webhook payloads include the affected page's identifier, the nature of the change, the user who made the change, and a timestamp, providing sufficient context for the receiving system to determine the appropriate response.
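A payload for a page-modified event might look like the following sketch; the exact field names depend on the webhook extension in use, so treat these as illustrative assumptions rather than a fixed schema:

```json
{
  "event": "page.modified",
  "wiki": "xwiki",
  "page": "Projects.Q1-Roadmap",
  "version": "3.2",
  "user": "xwiki:XWiki.jdoe",
  "timestamp": "2026-01-15T09:42:00Z"
}
```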
Rate Limiting and Performance
Large-scale API operations require careful attention to performance characteristics and rate management. While xWiki's REST API is designed to handle substantial workloads, the performance of any API is ultimately constrained by the underlying infrastructure and the complexity of the operations being performed.
Pagination is essential for operations that return large result sets. The API's search and listing endpoints support pagination through offset and limit parameters. Rather than requesting all pages in a space at once, which could return thousands of results and consume significant server resources, well-designed integrations request results in pages of 50 to 100 items and iterate through the result set progressively. The API response includes metadata indicating the total number of results and the current offset, enabling clients to calculate the number of remaining pages.
# Paginated listing of pages in a space
curl -u "admin:password" \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/Projects/pages?start=0&number=50" \
  -H "Accept: application/json"
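Iterating through the full result set can then be done with a small loop. This sketch requires jq, and the pageSummaries and fullName field names in the JSON response are assumptions to check against your server's actual output:

```shell
# Walk every page in the space, 50 at a time
START=0
while : ; do
  BATCH=$(curl -s -u "admin:password" \
    "https://wiki.example.com/rest/wikis/xwiki/spaces/Projects/pages?start=${START}&number=50" \
    -H "Accept: application/json")
  COUNT=$(printf '%s' "${BATCH}" | jq '.pageSummaries | length' 2>/dev/null)
  printf '%s' "${BATCH}" | jq -r '.pageSummaries[].fullName' 2>/dev/null
  # Stop when a batch comes back short (or empty, e.g. on a request error)
  if [ "${COUNT:-0}" -lt 50 ]; then break; fi
  START=$((START + 50))
done
```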
Batch operations can be optimized by structuring API calls to minimize round trips. While xWiki's REST API processes requests individually rather than supporting native batch endpoints, the performance impact can be mitigated through parallel request execution. Modern HTTP clients can maintain multiple concurrent connections to the server, allowing several API operations to proceed simultaneously. A reasonable concurrency level of 5 to 10 parallel requests balances throughput against server load, and the optimal level depends on the server's capacity and the complexity of the operations being performed.
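A simple way to get bounded parallelism from the command line is xargs -P. This sketch fans out at most 5 concurrent PUTs over a worklist of page names; WIKI_USER and WIKI_TOKEN are assumed environment variables, and the page names are placeholders:

```shell
# Build a worklist, then run at most 5 curl processes concurrently;
# -I{} substitutes each page name into the URL and request body
printf '%s\n' PageA PageB PageC PageD PageE PageF > page-list.txt
xargs -P 5 -I{} curl -s -u "${WIKI_USER}:${WIKI_TOKEN}" -X PUT \
  "https://wiki.example.com/rest/wikis/xwiki/spaces/Projects/pages/{}" \
  -H "Content-Type: text/plain" --data "Refreshed content for {}" \
  < page-list.txt
```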
Caching is another critical performance consideration for read-heavy integrations. The API supports standard HTTP caching headers, including ETag and Last-Modified, which enable clients to avoid re-fetching content that has not changed since the last request. For integrations that repeatedly query the same pages, implementing a client-side cache with conditional GET requests can reduce both network traffic and server load by an order of magnitude.
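A conditional GET with curl looks like the following sketch: the first request captures the ETag response header, and the second revalidates with If-None-Match, expecting 304 Not Modified when the page is unchanged (assuming the server emits ETags for this resource):

```shell
# Capture the ETag header from an initial request
PAGE_URL="https://wiki.example.com/rest/wikis/xwiki/spaces/Projects/pages/Q1-Roadmap"
ETAG=$(curl -s -D - -o /dev/null -u "admin:password" "${PAGE_URL}" \
  -H "Accept: application/json" | awk 'tolower($1)=="etag:" {print $2}' | tr -d '\r')
# Revalidate: a 304 status means the cached copy is still current
curl -s -o /dev/null -w "%{http_code}\n" -u "admin:password" "${PAGE_URL}" \
  -H "Accept: application/json" -H "If-None-Match: ${ETAG}"
```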
On MassiveGRID's managed hosting platform, API performance benefits from our high-performance compute instances, low-latency SSD storage, and optimized network infrastructure. Our monitoring systems track API response times and throughput, enabling proactive identification of performance trends before they affect integration reliability. For organizations with demanding API workloads, our infrastructure team can configure dedicated resources and caching layers to ensure consistent performance at scale. Our GDPR-compliant infrastructure across Frankfurt, London, New York, and Singapore ensures that API traffic is processed in accordance with data residency requirements, and our 24/7 support team is available to assist with API integration design and optimization.
For organizations comparing xWiki's API capabilities with those of proprietary platforms, our detailed xWiki versus Confluence enterprise comparison includes an analysis of integration and extensibility features. With over 900 extensions and support for more than 40 languages, xWiki's REST API is one component of a comprehensive extensibility architecture that has been refined over more than two decades of development.
Frequently Asked Questions
What authentication methods does the xWiki REST API support?
The xWiki REST API supports three primary authentication methods. Basic authentication encodes the username and password in the HTTP Authorization header and is the simplest method to implement, though it should always be used over HTTPS to protect credentials in transit. API tokens provide a more secure option for automated systems and CI/CD pipelines, as they can be scoped to specific permissions and revoked independently without affecting the user account. Session-based authentication uses cookies from an authenticated browser session, making it suitable for JavaScript applications running within the wiki's own pages. For production integrations, API tokens are the recommended approach because they avoid embedding user credentials in scripts and can be rotated on a regular schedule without disrupting the associated user account.
Can I create or update multiple pages in a single API call?
The xWiki REST API processes requests individually, so there is no native batch endpoint for creating or updating multiple pages in a single HTTP request. However, you can achieve efficient bulk operations by executing multiple API calls in parallel using concurrent HTTP connections. Most HTTP client libraries support configuring a connection pool size, and a concurrency level of 5 to 10 parallel requests provides a good balance between throughput and server load. For very large batch operations involving thousands of pages, consider implementing a throttled queue that processes requests at a controlled rate, monitors the server's response times, and adjusts the concurrency level dynamically to avoid overwhelming the server. Alternatively, for bulk content migration scenarios, xWiki's XAR import/export format may be more efficient than individual API calls.
Does the xWiki REST API have rate limits?
xWiki's REST API does not impose hard rate limits by default, but practical throughput is constrained by the server's available resources, including CPU, memory, and database connection pool capacity. On shared hosting environments, the hosting provider may implement rate limiting to ensure fair resource allocation across tenants. On dedicated infrastructure such as MassiveGRID's managed hosting, rate limits can be configured according to your specific requirements and the server's capacity. As a general best practice, integrations should implement client-side rate limiting with exponential backoff for retries, use pagination for large result sets, cache responses where possible, and monitor API response times to detect when the server is approaching its capacity limits. If your integration requires sustained high-throughput API access, consult with your hosting provider to ensure that the infrastructure is provisioned appropriately.