diff --git a/site2/website/versioned_docs/version-2.10.x/about.md b/site2/website/versioned_docs/version-2.10.x/about.md
new file mode 100644
index 0000000000000..478ac8dd053e8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/about.md
@@ -0,0 +1,56 @@
+---
+slug: /
+id: about
+title: Welcome to the doc portal!
+sidebar_label: "About"
+---
+
+import BlockLinks from "@site/src/components/BlockLinks";
+import BlockLink from "@site/src/components/BlockLink";
+import { docUrl } from "@site/src/utils/index";
+
+
+# Welcome to the doc portal!
+***
+
+This portal holds a variety of support documents to help you work with Pulsar. If you’re a beginner, there are tutorials and explainers to help you understand Pulsar and how it works.
+
+If you’re an experienced coder, review this page to learn the easiest way to access the specific content you’re looking for.
+
+## Get Started Now
+
+
+
+
+
+
+
+
+
+## Navigation
+***
+
+There are several ways to get around in the doc portal. The index navigation pane is a table of contents for the entire archive. The archive is divided into sections, like chapters in a book. Click the title of the topic to view it.
+
+In-context links provide an easy way to immediately reference related topics. Click the underlined term to view the topic.
+
+Links to related topics can be found at the bottom of each topic page. Click the link to view the topic.
+
+![Page Linking](/assets/page-linking.png)
+
+## Continuous Improvement
+***
+As you probably know, we are working on a new user experience for our documentation portal that will make learning about and building on top of Apache Pulsar a much better experience. Whether you need overview concepts, how-to procedures, curated guides or quick references, we’re building content to support it. This welcome page is just the first step. We will be providing updates every month.
+
+## Help Improve These Documents
+***
+
+You’ll notice an Edit button at the bottom and top of each page. Click it to open a landing page with instructions for requesting changes to posted documents. These are your resources. Participation is not only welcomed – it’s essential!
+
+## Join the Community!
+***
+
+The Pulsar community on GitHub is active, passionate, and knowledgeable. Join discussions, voice opinions, suggest features, and dive into the code itself. Find your Pulsar family here at [apache/pulsar](https://github.com/apache/pulsar).
+
+An equally passionate community can be found in the [Pulsar Slack channel](https://apache-pulsar.slack.com/). You’ll need an invitation to join, but many GitHub Pulsar community members are Slack members too. Join, hang out, learn, and make some new friends.
+
diff --git a/site2/website/versioned_docs/version-2.10.x/adaptors-kafka.md b/site2/website/versioned_docs/version-2.10.x/adaptors-kafka.md
new file mode 100644
index 0000000000000..e738f9d94b6a9
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/adaptors-kafka.md
@@ -0,0 +1,276 @@
+---
+id: adaptors-kafka
+title: Pulsar adaptor for Apache Kafka
+sidebar_label: "Kafka client wrapper"
+original_id: adaptors-kafka
+---
+
+
+Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
+
+## Using the Pulsar Kafka compatibility wrapper
+
+In an existing application, replace the regular Kafka client dependency with the Pulsar Kafka wrapper. First, remove the following dependency from `pom.xml`:
+
+```xml
+
+<dependency>
+  <groupId>org.apache.kafka</groupId>
+  <artifactId>kafka-clients</artifactId>
+  <version>0.10.2.1</version>
+</dependency>
+
+```
+
+Then include this dependency for the Pulsar Kafka wrapper:
+
+```xml
+
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka</artifactId>
+  <version>@pulsar:version@</version>
+</dependency>
+
+```
+
+With the new dependency, the existing code works without any changes. You only need to adjust the configuration so that it points the
+producers and consumers to a Pulsar service rather than a Kafka cluster, and uses a particular
+Pulsar topic.
+
+## Using the Pulsar Kafka compatibility wrapper together with existing Kafka client
+
+When migrating from Kafka to Pulsar, the application might use the original Kafka client
+and the Pulsar Kafka wrapper together during migration. In that case, consider using the
+unshaded Pulsar Kafka client wrapper.
+
+```xml
+
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka-original</artifactId>
+  <version>@pulsar:version@</version>
+</dependency>
+
+```
+
+When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
+instead of `org.apache.kafka.clients.producer.KafkaProducer`, and construct consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer` instead of `org.apache.kafka.clients.consumer.KafkaConsumer`.
+
+## Producer example
+
+```java
+
+// Topic needs to be a regular Pulsar topic
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+Producer<Integer, String> producer = new KafkaProducer<>(props);
+
+for (int i = 0; i < 10; i++) {
+    producer.send(new ProducerRecord<>(topic, i, "hello-" + i));
+    log.info("Message {} sent successfully", i);
+}
+
+producer.close();
+
+```
+
+## Consumer example
+
+```java
+
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("group.id", "my-subscription-name");
+props.put("enable.auto.commit", "false");
+props.put("key.deserializer", IntegerDeserializer.class.getName());
+props.put("value.deserializer", StringDeserializer.class.getName());
+
+Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
+consumer.subscribe(Arrays.asList(topic));
+
+while (true) {
+    ConsumerRecords<Integer, String> records = consumer.poll(100);
+    records.forEach(record -> {
+        log.info("Received record: {}", record);
+    });
+
+    // Commit last offset
+    consumer.commitSync();
+}
+
+```
+
+## Complete Examples
+
+You can find the complete producer and consumer examples [here](https://github.com/apache/pulsar-adapters/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
+
+## Compatibility matrix
+
+Currently, the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
+
+### Producer
+
+APIs:
+
+| Producer Method | Supported | Notes |
+|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record)` | Yes | |
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes | |
+| `void flush()` | Yes | |
+| `List<PartitionInfo> partitionsFor(String topic)` | No | |
+| `Map<MetricName, ? extends Metric> metrics()` | No | |
+| `void close()` | Yes | |
+| `void close(long timeout, TimeUnit unit)` | Yes | |
+
+Properties:
+
+| Config property | Supported | Notes |
+|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
+| `acks` | Ignored | Durability and quorum writes are configured at the namespace level |
+| `auto.offset.reset` | Yes | It uses a default value of `earliest` if you do not give a specific setting. |
+| `batch.size` | Ignored | |
+| `bootstrap.servers` | Yes | |
+| `buffer.memory` | Ignored | |
+| `client.id` | Ignored | |
+| `compression.type` | Yes | Allows `gzip` and `lz4`. No `snappy`. |
+| `connections.max.idle.ms` | Yes | Only supports up to 2,147,483,647,000 (`Integer.MAX_VALUE * 1000`) ms of idle time |
+| `interceptor.classes` | Yes | |
+| `key.serializer` | Yes | |
+| `linger.ms` | Yes | Controls the group commit time when batching messages |
+| `max.block.ms` | Ignored | |
+| `max.in.flight.requests.per.connection` | Ignored | In Pulsar ordering is maintained even with multiple requests in flight |
+| `max.request.size` | Ignored | |
+| `metric.reporters` | Ignored | |
+| `metrics.num.samples` | Ignored | |
+| `metrics.sample.window.ms` | Ignored | |
+| `partitioner.class` | Yes | |
+| `receive.buffer.bytes` | Ignored | |
+| `reconnect.backoff.ms` | Ignored | |
+| `request.timeout.ms` | Ignored | |
+| `retries` | Ignored | Pulsar client retries with exponential backoff until the send timeout expires. |
+| `send.buffer.bytes` | Ignored | |
+| `timeout.ms` | Yes | |
+| `value.serializer` | Yes | |
+
+
+### Consumer
+
+The following table lists consumer APIs.
+
+| Consumer Method | Supported | Notes |
+|:--------------------------------------------------------------------------------------------------------|:----------|:------|
+| `Set<TopicPartition> assignment()` | No | |
+| `Set<String> subscription()` | Yes | |
+| `void subscribe(Collection<String> topics)` | Yes | |
+| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)` | No | |
+| `void assign(Collection<TopicPartition> partitions)` | No | |
+| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)` | No | |
+| `void unsubscribe()` | Yes | |
+| `ConsumerRecords<K, V> poll(long timeoutMillis)` | Yes | |
+| `void commitSync()` | Yes | |
+| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)` | Yes | |
+| `void commitAsync()` | Yes | |
+| `void commitAsync(OffsetCommitCallback callback)` | Yes | |
+| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)` | Yes | |
+| `void seek(TopicPartition partition, long offset)` | Yes | |
+| `void seekToBeginning(Collection<TopicPartition> partitions)` | Yes | |
+| `void seekToEnd(Collection<TopicPartition> partitions)` | Yes | |
+| `long position(TopicPartition partition)` | Yes | |
+| `OffsetAndMetadata committed(TopicPartition partition)` | Yes | |
+| `Map<MetricName, ? extends Metric> metrics()` | No | |
+| `List<PartitionInfo> partitionsFor(String topic)` | No | |
+| `Map<String, List<PartitionInfo>> listTopics()` | No | |
+| `Set<TopicPartition> paused()` | No | |
+| `void pause(Collection<TopicPartition> partitions)` | No | |
+| `void resume(Collection<TopicPartition> partitions)` | No | |
+| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No | |
+| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)` | No | |
+| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)` | No | |
+| `void close()` | Yes | |
+| `void close(long timeout, TimeUnit unit)` | Yes | |
+| `void wakeup()` | No | |
+
+Properties:
+
+| Config property | Supported | Notes |
+|:--------------------------------|:----------|:------------------------------------------------------|
+| `group.id` | Yes | Maps to a Pulsar subscription name |
+| `max.poll.records` | Yes | |
+| `max.poll.interval.ms` | Ignored | Messages are "pushed" from broker |
+| `session.timeout.ms` | Ignored | |
+| `heartbeat.interval.ms` | Ignored | |
+| `bootstrap.servers` | Yes | Needs to point to a single Pulsar service URL |
+| `enable.auto.commit` | Yes | |
+| `auto.commit.interval.ms` | Ignored | With auto-commit, acks are sent immediately to broker |
+| `partition.assignment.strategy` | Ignored | |
+| `auto.offset.reset` | Yes | Only supports `earliest` and `latest`. |
+| `fetch.min.bytes` | Ignored | |
+| `fetch.max.bytes` | Ignored | |
+| `fetch.max.wait.ms` | Ignored | |
+| `interceptor.classes` | Yes | |
+| `metadata.max.age.ms` | Ignored | |
+| `max.partition.fetch.bytes` | Ignored | |
+| `send.buffer.bytes` | Ignored | |
+| `receive.buffer.bytes` | Ignored | |
+| `client.id` | Ignored | |
+
+
+## Customize Pulsar configurations
+
+You can configure the Pulsar authentication provider directly through the Kafka properties.
+
+### Pulsar client properties
+
+| Config property | Default | Notes |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.authentication.class`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-) | | The authentication provider class to use. For example, `org.apache.pulsar.client.impl.auth.AuthenticationTls`.|
+| [`pulsar.authentication.params.map`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-) | | Map which represents parameters for the Authentication-Plugin. |
+| [`pulsar.authentication.params.string`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-) | | String which represents parameters for the Authentication-Plugin, for example, `key1:val1,key2:val2`. |
+| [`pulsar.use.tls`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-) | `false` | Enable TLS transport encryption. |
+| [`pulsar.tls.trust.certs.file.path`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-) | | Path for the TLS trust certificate store. |
+| [`pulsar.tls.allow.insecure.connection`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers. |
+| [`pulsar.operation.timeout.ms`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. |
+| [`pulsar.stats.interval.seconds`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. |
+| [`pulsar.num.io.threads`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. |
+| [`pulsar.connections.per.broker`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connections to each broker. |
+| [`pulsar.use.tcp.nodelay`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. |
+| [`pulsar.concurrent.lookup.requests`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. |
+| [`pulsar.max.number.rejected.request.per.connection`](/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. |
+| [`pulsar.keepalive.interval.ms`](/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep alive interval for each client-broker-connection. |
+
+
+### Pulsar producer properties
+
+| Config property | Default | Notes |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.producer.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. |
+| [`pulsar.producer.initial.sequence.id`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) | | Specify baseline for sequence ID of this producer. |
+| [`pulsar.producer.max.pending.messages`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the queue of messages pending acknowledgment from the broker. |
+| [`pulsar.producer.max.pending.messages.across.partitions`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions. |
+| [`pulsar.producer.batching.enabled`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. |
+| [`pulsar.producer.batching.max.messages`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. |
+| [`pulsar.block.if.producer.queue.full`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Whether to block the producer when the queue is full. |
+| [`pulsar.crypto.reader.factory.class.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows producer to create CryptoKeyReader. |
+
+
+### Pulsar consumer Properties
+
+| Config property | Default | Notes |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.consumer.name`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. |
+| [`pulsar.consumer.receiver.queue.size`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. |
+| [`pulsar.consumer.acknowledgments.group.time.millis`](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum amount of group time for consumers to send the acknowledgments to the broker. |
+| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. |
+| [`pulsar.consumer.subscription.topics.mode`](/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. |
+| [`pulsar.crypto.reader.factory.class.name`](/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setCryptoKeyReader-org.apache.pulsar.client.api.CryptoKeyReader-) | | Specify the CryptoReader-Factory(`CryptoKeyReaderFactory`) classname which allows consumer to create CryptoKeyReader. |
diff --git a/site2/website/versioned_docs/version-2.10.x/adaptors-spark.md b/site2/website/versioned_docs/version-2.10.x/adaptors-spark.md
new file mode 100644
index 0000000000000..e14f13b5d4b07
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/adaptors-spark.md
@@ -0,0 +1,91 @@
+---
+id: adaptors-spark
+title: Pulsar adaptor for Apache Spark
+sidebar_label: "Apache Spark"
+original_id: adaptors-spark
+---
+
+## Spark Streaming receiver
+The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive raw data from Pulsar.
+
+An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming receiver and can process it in a variety of ways.
+
+### Prerequisites
+
+To use the receiver, include a dependency for the `pulsar-spark` library in your build configuration.
+
+#### Maven
+
+If you're using Maven, add this to your `pom.xml`:
+
+```xml
+
+<properties>
+  <pulsar.version>@pulsar:version@</pulsar.version>
+</properties>
+
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-spark</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+
+```
+
+#### Gradle
+
+If you're using Gradle, add this to your `build.gradle` file:
+
+```groovy
+
+def pulsarVersion = "@pulsar:version@"
+
+dependencies {
+ compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
+}
+
+```
+
+### Usage
+
+Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
+
+```java
+
+ String serviceUrl = "pulsar://localhost:6650/";
+ String topic = "persistent://public/default/test_src";
+ String subs = "test_sub";
+
+ SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
+
+ JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
+
+ ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<>();
+
+ Set<String> set = new HashSet<>();
+ set.add(topic);
+ pulsarConf.setTopicNames(set);
+ pulsarConf.setSubscriptionName(subs);
+
+ SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
+ serviceUrl,
+ pulsarConf,
+ new AuthenticationDisabled());
+
+ JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
+
+```
+
+For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java). This example counts the number of received messages that contain the string "Pulsar".
+
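+The counting step in that example boils down to a simple filter over the received strings. As a plain-Java sketch of that logic (independent of Spark, with made-up sample data):
+
+```java
+import java.util.Arrays;
+import java.util.List;
+
+public class CountContaining {
+    public static void main(String[] args) {
+        // Hypothetical sample of received messages
+        List<String> received = Arrays.asList(
+                "hello Pulsar", "hello world", "Pulsar rocks");
+        // Count the messages that contain the string "Pulsar"
+        long count = received.stream()
+                .filter(msg -> msg.contains("Pulsar"))
+                .count();
+        System.out.println(count);
+    }
+}
+```
+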
+Note that other Pulsar authentication classes can be used if needed. For example, to authenticate with a token, set the following parameters for the `SparkStreamingPulsarReceiver` constructor:
+
+```java
+
+SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
+ serviceUrl,
+ pulsarConf,
+ new AuthenticationToken("token:"));
+
+```
+
diff --git a/site2/website/versioned_docs/version-2.10.x/adaptors-storm.md b/site2/website/versioned_docs/version-2.10.x/adaptors-storm.md
new file mode 100644
index 0000000000000..76d507164777d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/adaptors-storm.md
@@ -0,0 +1,96 @@
+---
+id: adaptors-storm
+title: Pulsar adaptor for Apache Storm
+sidebar_label: "Apache Storm"
+original_id: adaptors-storm
+---
+
+Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
+
+An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
+
+## Using the Pulsar Storm Adaptor
+
+Include the dependency for the Pulsar Storm adaptor:
+
+```xml
+
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-storm</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+
+```
+
+## Pulsar Spout
+
+The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
+
+Tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (60 seconds by default) or a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. Here's an example construction of a spout:
+
+```java
+
+MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
+
+    @Override
+    public Values toValues(Message<byte[]> msg) {
+        return new Values(new String(msg.getData()));
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        // declare the output fields
+        declarer.declare(new Fields("string"));
+    }
+};
+
+// Configure a Pulsar Spout
+PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
+spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
+spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
+spoutConf.setSubscriptionName("my-subscriber-name1");
+spoutConf.setMessageToValuesMapper(messageToValuesMapper);
+
+// Create a Pulsar Spout
+PulsarSpout spout = new PulsarSpout(spoutConf);
+
+```
+
+For a complete example, click [here](https://github.com/apache/pulsar-adapters/blob/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/PulsarSpoutTest.java).
+
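+The exponential-backoff redelivery described above can be illustrated with plain Java. This is a simplified sketch of a capped doubling delay schedule, not the adaptor's actual retry code; the base delay and cap values are illustrative:
+
+```java
+public class BackoffSketch {
+    public static void main(String[] args) {
+        long baseMs = 100;        // hypothetical initial redelivery delay
+        long capMs = 60_000;      // comparable to the 60-second default timeout
+        for (int retry = 0; retry < 12; retry++) {
+            // Double the delay on each failed attempt, up to the cap
+            long delayMs = Math.min(baseMs * (1L << retry), capMs);
+            System.out.println("retry " + retry + ": " + delayMs + " ms");
+        }
+    }
+}
+```
+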
+## Pulsar Bolt
+
+The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
+
+A partitioned topic can also be used to publish messages. In the implementation of the `TupleToMessageMapper`, provide a "key" in each message; messages with the same key are routed to the same partition. Here's an example bolt:
+
+```java
+
+TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
+
+    @Override
+    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
+        String receivedMessage = tuple.getString(0);
+        // message processing
+        String processedMsg = receivedMessage + "-processed";
+        return msgBuilder.value(processedMsg.getBytes());
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        // declare the output fields
+    }
+};
+
+// Configure a Pulsar Bolt
+PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
+boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
+boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
+boltConf.setTupleToMessageMapper(tupleToMessageMapper);
+
+// Create a Pulsar Bolt
+PulsarBolt bolt = new PulsarBolt(boltConf);
+
+```
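+
+The same-key-to-same-partition behavior mentioned above can be illustrated with plain hashing. This is a simplified stand-in for the routing logic, not the wrapper's actual code; the key and partition count are made up:
+
+```java
+public class KeyRoutingSketch {
+    // Identical keys always map to the same partition index
+    static int partitionFor(String key, int numPartitions) {
+        return Math.floorMod(key.hashCode(), numPartitions);
+    }
+
+    public static void main(String[] args) {
+        int partitions = 4;
+        int first = partitionFor("device-42", partitions);
+        int second = partitionFor("device-42", partitions);
+        // Same key, same partition
+        System.out.println(first == second);
+        // Result is always a valid partition index
+        System.out.println(first >= 0 && first < partitions);
+    }
+}
+```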
+
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-brokers.md b/site2/website/versioned_docs/version-2.10.x/admin-api-brokers.md
new file mode 100644
index 0000000000000..2674c7da875f9
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-brokers.md
@@ -0,0 +1,286 @@
+---
+id: admin-api-brokers
+title: Managing Brokers
+sidebar_label: "Brokers"
+original_id: admin-api-brokers
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+> **Important**
+>
+> This page only shows **some frequently used operations**.
+>
+> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/).
+>
+> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+>
+> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+Pulsar brokers consist of two components:
+
+1. An HTTP server exposing a {@inject: rest:REST:/} interface for administration and [topic](reference-terminology.md#topic) lookup.
+2. A dispatcher that handles all Pulsar [message](reference-terminology.md#message) transfers.
+
+[Brokers](reference-terminology.md#broker) can be managed via:
+
+* The `brokers` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool
+* The `/admin/v2/brokers` endpoint of the admin {@inject: rest:REST:/} API
+* The `brokers` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md)
+
+In addition to being configurable when you start them up, brokers can also be [dynamically configured](#dynamic-broker-configuration).
+
+> See the [Configuration](reference-configuration.md#broker) page for a full listing of broker-specific configuration parameters.
+
+## Brokers resources
+
+### List active brokers
+
+Fetch all available active brokers that are serving traffic in a given cluster.
+
+````mdx-code-block
+<Tabs groupId="api-choice"
+  defaultValue="pulsar-admin"
+  values={[{"label":"pulsar-admin","value":"pulsar-admin"},{"label":"REST API","value":"REST API"},{"label":"Java","value":"Java"}]}>
+
+<TabItem value="pulsar-admin">
+
+```shell
+
+$ pulsar-admin brokers list use
+
+```
+
+```
+
+broker1.use.org.com:8080
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+{@inject: endpoint|GET|/admin/v2/brokers/:cluster|operation/getActiveBrokers?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java">
+
+```java
+
+admin.brokers().getActiveBrokers(clusterName)
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+### Get the information of the leader broker
+
+Fetch the information of the leader broker, for example, the service URL.
+
+````mdx-code-block
+<Tabs groupId="api-choice"
+  defaultValue="pulsar-admin"
+  values={[{"label":"pulsar-admin","value":"pulsar-admin"},{"label":"REST API","value":"REST API"},{"label":"Java","value":"Java"}]}>
+
+<TabItem value="pulsar-admin">
+
+```shell
+
+$ pulsar-admin brokers leader-broker
+
+```
+
+```
+
+BrokerInfo(serviceUrl=broker1.use.org.com:8080)
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+{@inject: endpoint|GET|/admin/v2/brokers/leaderBroker|operation/getLeaderBroker?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java">
+
+```java
+
+admin.brokers().getLeaderBroker()
+
+```
+
+For details about the code above, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/BrokersImpl.java#L80).
+
+</TabItem>
+
+</Tabs>
+````
+
+#### List of namespaces owned by a given broker
+
+Fetch all namespaces that are owned and served by a given broker.
+
+````mdx-code-block
+<Tabs groupId="api-choice"
+  defaultValue="pulsar-admin"
+  values={[{"label":"pulsar-admin","value":"pulsar-admin"},{"label":"REST API","value":"REST API"},{"label":"Java","value":"Java"}]}>
+
+<TabItem value="pulsar-admin">
+
+```shell
+
+$ pulsar-admin brokers namespaces use \
+  --url broker1.use.org.com:8080
+
+```
+
+```json
+
+{
+  "my-property/use/my-ns/0x00000000_0xffffffff": {
+    "broker_assignment": "shared",
+    "is_controlled": false,
+    "is_active": true
+  }
+}
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+{@inject: endpoint|GET|/admin/v2/brokers/:cluster/:broker/ownedNamespaces|operation/getOwnedNamespaes?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java">
+
+```java
+
+admin.brokers().getOwnedNamespaces(cluster, brokerUrl);
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+### Dynamic broker configuration
+
+One way to configure a Pulsar [broker](reference-terminology.md#broker) is to supply a [configuration](reference-configuration.md#broker) when the broker is [started up](reference-cli-tools.md#pulsar-broker).
+
+But since all broker configuration in Pulsar is stored in ZooKeeper, configuration values can also be dynamically updated *while the broker is running*. When you update broker configuration dynamically, ZooKeeper will notify the broker of the change and the broker will then override any existing configuration values.
+
+* The [`brokers`](reference-pulsar-admin.md#brokers) command for the [`pulsar-admin`](reference-pulsar-admin.md) tool has a variety of subcommands that enable you to manipulate a broker's configuration dynamically, enabling you to [update config values](#update-dynamic-configuration) and more.
+* In the Pulsar admin {@inject: rest:REST:/} API, dynamic configuration is managed through the `/admin/v2/brokers/configuration` endpoint.
+
+### Update dynamic configuration
+
+````mdx-code-block
+<Tabs groupId="api-choice"
+  defaultValue="pulsar-admin"
+  values={[{"label":"pulsar-admin","value":"pulsar-admin"},{"label":"REST API","value":"REST API"},{"label":"Java","value":"Java"}]}>
+
+<TabItem value="pulsar-admin">
+
+The [`update-dynamic-config`](reference-pulsar-admin.md#brokers-update-dynamic-config) subcommand updates an existing configuration. It takes two arguments: the name of the parameter and its new value, supplied with the `--config` and `--value` flags respectively. Here's an example for the [`brokerShutdownTimeoutMs`](reference-configuration.md#broker-brokerShutdownTimeoutMs) parameter:
+
+```shell
+
+$ pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+{@inject: endpoint|POST|/admin/v2/brokers/configuration/:configName/:configValue|operation/updateDynamicConfiguration?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java">
+
+```java
+
+admin.brokers().updateDynamicConfiguration(configName, configValue);
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+### List updatable values
+
+Fetch a list of all potentially updatable configuration parameters.
+````mdx-code-block
+<Tabs groupId="api-choice"
+  defaultValue="pulsar-admin"
+  values={[{"label":"pulsar-admin","value":"pulsar-admin"},{"label":"REST API","value":"REST API"},{"label":"Java","value":"Java"}]}>
+
+<TabItem value="pulsar-admin">
+
+```shell
+
+$ pulsar-admin brokers list-dynamic-config
+brokerShutdownTimeoutMs
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+{@inject: endpoint|GET|/admin/v2/brokers/configuration|operation/getDynamicConfigurationName?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java">
+
+```java
+
+admin.brokers().getDynamicConfigurationNames();
+
+```
+
+</TabItem>
+
+</Tabs>
+````
+
+### List all updated values
+
+Fetch a list of all parameters that have been dynamically updated.
+
+````mdx-code-block
+<Tabs groupId="api-choice"
+  defaultValue="pulsar-admin"
+  values={[{"label":"pulsar-admin","value":"pulsar-admin"},{"label":"REST API","value":"REST API"},{"label":"Java","value":"Java"}]}>
+
+<TabItem value="pulsar-admin">
+
+```shell
+
+$ pulsar-admin brokers get-all-dynamic-config
+brokerShutdownTimeoutMs:100
+
+```
+
+</TabItem>
+<TabItem value="REST API">
+
+{@inject: endpoint|GET|/admin/v2/brokers/configuration/values|operation/getAllDynamicConfigurations?version=@pulsar:version_number@}
+
+</TabItem>
+<TabItem value="Java">
+
+```java
+
+admin.brokers().getAllDynamicConfigurations();
+
+```
+
+</TabItem>
+
+</Tabs>
+````
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-clusters.md b/site2/website/versioned_docs/version-2.10.x/admin-api-clusters.md
new file mode 100644
index 0000000000000..53cd43187e069
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-clusters.md
@@ -0,0 +1,318 @@
+---
+id: admin-api-clusters
+title: Managing Clusters
+sidebar_label: "Clusters"
+original_id: admin-api-clusters
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+> **Important**
+>
+> This page only shows **some frequently used operations**.
+>
+> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/)
+>
+> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+>
+> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+Pulsar clusters consist of one or more Pulsar [brokers](reference-terminology.md#broker), one or more [BookKeeper](reference-terminology.md#bookkeeper)
+servers (aka [bookies](reference-terminology.md#bookie)), and a [ZooKeeper](https://zookeeper.apache.org) cluster that provides configuration and coordination management.
+
+Clusters can be managed via:
+
+* The `clusters` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool
+* The `/admin/v2/clusters` endpoint of the admin {@inject: rest:REST:/} API
+* The `clusters` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md)
+
+## Clusters resources
+
+### Provision
+
+New clusters can be provisioned using the admin interface.
+
+> Please note that this operation requires superuser privileges.
+
+````mdx-code-block
+
+
+
+You can provision a new cluster using the [`create`](reference-pulsar-admin.md#clusters-create) subcommand. Here's an example:
+
+```shell
+
+$ pulsar-admin clusters create cluster-1 \
+ --url http://my-cluster.org.com:8080 \
+ --broker-url pulsar://my-cluster.org.com:6650
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/clusters/:cluster|operation/createCluster?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+ClusterData clusterData = new ClusterData(
+ serviceUrl,
+ serviceUrlTls,
+ brokerServiceUrl,
+ brokerServiceUrlTls
+);
+admin.clusters().createCluster(clusterName, clusterData);
+
+```
+
+
+
+
+````
+
+### Initialize cluster metadata
+
+When provisioning a new cluster, you need to initialize that cluster's [metadata](concepts-architecture-overview.md#metadata-store). When initializing cluster metadata, you need to specify all of the following:
+
+* The name of the cluster
+* The local metadata store connection string for the cluster
+* The configuration store connection string for the entire instance
+* The web service URL for the cluster
+* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
+
+You must initialize cluster metadata *before* starting up any [brokers](admin-api-brokers.md) that will belong to the cluster.
+
+> **No cluster metadata initialization through the REST API or the Java admin API**
+>
+> Unlike most other admin functions in Pulsar, cluster metadata initialization cannot be performed via the admin REST API
+> or the admin Java client, as metadata initialization involves communicating with ZooKeeper directly.
+> Instead, you can use the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool, in particular
+> the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command.
+
+Here's an example cluster metadata initialization command:
+
+```shell
+
+bin/pulsar initialize-cluster-metadata \
+ --cluster us-west \
+ --metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+ --configuration-metadata-store zk:zk1.us-west.example.com:2181,zk2.us-west.example.com:2181/my-chroot-path \
+ --web-service-url http://pulsar.us-west.example.com:8080/ \
+ --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
+ --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
+ --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
+
+```
+
+You'll need to use `--*-tls` flags only if you're using [TLS authentication](security-tls-authentication.md) in your instance.
+
+### Get configuration
+
+You can fetch the [configuration](reference-configuration.md) for an existing cluster at any time.
+
+````mdx-code-block
+
+
+
+Use the [`get`](reference-pulsar-admin.md#clusters-get) subcommand and specify the name of the cluster. Here's an example:
+
+```shell
+
+$ pulsar-admin clusters get cluster-1
+{
+ "serviceUrl": "http://my-cluster.org.com:8080/",
+ "serviceUrlTls": null,
+ "brokerServiceUrl": "pulsar://my-cluster.org.com:6650/",
+  "brokerServiceUrlTls": null,
+  "peerClusterNames": null
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/clusters/:cluster|operation/getCluster?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.clusters().getCluster(clusterName);
+
+```
+
+
+
+
+````
+
+### Update
+
+You can update the configuration for an existing cluster at any time.
+
+````mdx-code-block
+
+
+
+Use the [`update`](reference-pulsar-admin.md#clusters-update) subcommand and specify new configuration values using flags.
+
+```shell
+
+$ pulsar-admin clusters update cluster-1 \
+ --url http://my-cluster.org.com:4081 \
+ --broker-url pulsar://my-cluster.org.com:3350
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/clusters/:cluster|operation/updateCluster?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+ClusterData clusterData = new ClusterData(
+ serviceUrl,
+ serviceUrlTls,
+ brokerServiceUrl,
+ brokerServiceUrlTls
+);
+admin.clusters().updateCluster(clusterName, clusterData);
+
+```
+
+
+
+
+````
+
+### Delete
+
+Clusters can be deleted from a Pulsar [instance](reference-terminology.md#instance).
+
+````mdx-code-block
+
+
+
+Use the [`delete`](reference-pulsar-admin.md#clusters-delete) subcommand and specify the name of the cluster.
+
+```
+
+$ pulsar-admin clusters delete cluster-1
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/clusters/:cluster|operation/deleteCluster?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.clusters().deleteCluster(clusterName);
+
+```
+
+
+
+
+````
+
+### List
+
+You can fetch a list of all clusters in a Pulsar [instance](reference-terminology.md#instance).
+
+````mdx-code-block
+
+
+
+Use the [`list`](reference-pulsar-admin.md#clusters-list) subcommand.
+
+```shell
+
+$ pulsar-admin clusters list
+cluster-1
+cluster-2
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/clusters|operation/getClusters?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.clusters().getClusters();
+
+```
+
+
+
+
+````
+
+### Update peer-cluster data
+
+Peer clusters can be configured for a given cluster in a Pulsar [instance](reference-terminology.md#instance).
+
+````mdx-code-block
+
+
+
+Use the [`update-peer-clusters`](reference-pulsar-admin.md#clusters-update-peer-clusters) subcommand and specify the list of peer-cluster names.
+
+```
+
+$ pulsar-admin clusters update-peer-clusters cluster-1 --peer-clusters cluster-2
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/clusters/:cluster/peers|operation/setPeerClusterNames?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.clusters().updatePeerClusterNames(clusterName, peerClusterList);
+
+```
+
+
+
+
+````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-functions.md b/site2/website/versioned_docs/version-2.10.x/admin-api-functions.md
new file mode 100644
index 0000000000000..8274a21d68008
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-functions.md
@@ -0,0 +1,830 @@
+---
+id: admin-api-functions
+title: Manage Functions
+sidebar_label: "Functions"
+original_id: admin-api-functions
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+> **Important**
+>
+> This page only shows **some frequently used operations**.
+>
+> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/).
+>
+> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+>
+> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+**Pulsar Functions** are lightweight compute processes that
+
+* consume messages from one or more Pulsar topics
+* apply user-supplied processing logic to each message
+* publish the results of the computation to another topic
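+
For orientation, here is what such a function can look like. Pulsar's Java runtime accepts, among other interfaces, Java's native `java.util.function.Function` for simple stateless functions; the sketch below is illustrative only and is not the packaged `api-examples` class used in the examples that follow.

```java
import java.util.function.Function;

// Minimal stateless function: consumes a String message and
// produces the same String with an exclamation mark appended.
class ExclamationFunction implements Function<String, String> {
    @Override
    public String apply(String input) {
        return input + "!";
    }
}
```

When deployed, the returned value is published to the function's configured output topic.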
+
+Functions can be managed via the following methods.
+
+Method | Description
+---|---
+**Admin CLI** | The `functions` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool.
+**REST API** |The `/admin/v3/functions` endpoint of the admin {@inject: rest:REST:/} API.
+**Java Admin API**| The `functions` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md).
+
+## Function resources
+
+You can perform the following operations on functions.
+
+### Create a function
+
+You can create a Pulsar function in cluster mode (deploy it on a Pulsar cluster) using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`create`](reference-pulsar-admin.md#functions-create) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions create \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --inputs test-input-topic \
+ --output persistent://public/default/test-output-topic \
+ --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+ --jar /examples/api-examples.jar
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName|operation/registerFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+FunctionConfig functionConfig = new FunctionConfig();
+functionConfig.setTenant(tenant);
+functionConfig.setNamespace(namespace);
+functionConfig.setName(functionName);
+functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
+functionConfig.setParallelism(1);
+functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction");
+functionConfig.setProcessingGuarantees(FunctionConfig.ProcessingGuarantees.ATLEAST_ONCE);
+functionConfig.setTopicsPattern(sourceTopicPattern);
+functionConfig.setSubName(subscriptionName);
+functionConfig.setAutoAck(true);
+functionConfig.setOutput(sinkTopic);
+admin.functions().createFunction(functionConfig, fileName);
+
+```
+
+
+
+
+````
+
+### Update a function
+
+You can update a Pulsar function that has been deployed to a Pulsar cluster using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`update`](reference-pulsar-admin.md#functions-update) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions update \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --output persistent://public/default/update-output-topic \
+ # other options
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v3/functions/:tenant/:namespace/:functionName|operation/updateFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+FunctionConfig functionConfig = new FunctionConfig();
+functionConfig.setTenant(tenant);
+functionConfig.setNamespace(namespace);
+functionConfig.setName(functionName);
+functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
+functionConfig.setParallelism(1);
+functionConfig.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction");
+UpdateOptions updateOptions = new UpdateOptions();
+updateOptions.setUpdateAuthData(updateAuthData);
+admin.functions().updateFunction(functionConfig, userCodeFile, updateOptions);
+
+```
+
+
+
+
+````
+
+### Start an instance of a function
+
+You can start a stopped function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand.
+
+```shell
+
+$ pulsar-admin functions start \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --instance-id 1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/start|operation/startFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().startFunction(tenant, namespace, functionName, Integer.parseInt(instanceId));
+
+```
+
+
+
+
+````
+
+### Start all instances of a function
+
+You can start all stopped function instances using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`start`](reference-pulsar-admin.md#functions-start) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions start \
+ --tenant public \
+ --namespace default \
+  --name (the name of Pulsar Functions)
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/start|operation/startFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().startFunction(tenant, namespace, functionName);
+
+```
+
+
+
+
+````
+
+### Stop an instance of a function
+
+You can stop a function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions stop \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --instance-id 1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stop|operation/stopFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().stopFunction(tenant, namespace, functionName, Integer.parseInt(instanceId));
+
+```
+
+
+
+
+````
+
+### Stop all instances of a function
+
+You can stop all function instances using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`stop`](reference-pulsar-admin.md#functions-stop) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions stop \
+ --tenant public \
+ --namespace default \
+  --name (the name of Pulsar Functions)
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/stop|operation/stopFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().stopFunction(tenant, namespace, functionName);
+
+```
+
+
+
+
+````
+
+### Restart an instance of a function
+
+Restart a function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions restart \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --instance-id 1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/restart|operation/restartFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().restartFunction(tenant, namespace, functionName, Integer.parseInt(instanceId));
+
+```
+
+
+
+
+````
+
+### Restart all instances of a function
+
+You can restart all function instances using Admin CLI, REST API or Java admin API.
+
+````mdx-code-block
+
+
+
+Use the [`restart`](reference-pulsar-admin.md#functions-restart) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions restart \
+ --tenant public \
+ --namespace default \
+  --name (the name of Pulsar Functions)
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/restart|operation/restartFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().restartFunction(tenant, namespace, functionName);
+
+```
+
+
+
+
+````
+
+### List all functions
+
+You can list all Pulsar functions running under a specific tenant and namespace using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`list`](reference-pulsar-admin.md#functions-list) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions list \
+ --tenant public \
+ --namespace default
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace|operation/listFunctions?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().getFunctions(tenant, namespace);
+
+```
+
+
+
+
+````
+
+### Delete a function
+
+You can delete a Pulsar function that is running on a Pulsar cluster using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`delete`](reference-pulsar-admin.md#functions-delete) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions delete \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions)
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v3/functions/:tenant/:namespace/:functionName|operation/deregisterFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().deleteFunction(tenant, namespace, functionName);
+
+```
+
+
+
+
+````
+
+### Get info about a function
+
+You can get information about a Pulsar function currently running in cluster mode using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`get`](reference-pulsar-admin.md#functions-get) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions get \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions)
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName|operation/getFunctionInfo?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().getFunction(tenant, namespace, functionName);
+
+```
+
+
+
+
+````
+
+### Get status of an instance of a function
+
+You can get the current status of a Pulsar function instance with `instance-id` using Admin CLI, REST API or Java Admin API.
+````mdx-code-block
+
+
+
+Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions status \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --instance-id 1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/status|operation/getFunctionInstanceStatus?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().getFunctionStatus(tenant, namespace, functionName, Integer.parseInt(instanceId));
+
+```
+
+
+
+
+````
+
+### Get status of all instances of a function
+
+You can get the current status of all instances of a Pulsar function using Admin CLI, REST API or Java Admin API.
+
+````mdx-code-block
+
+
+
+Use the [`status`](reference-pulsar-admin.md#functions-status) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions status \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions)
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/status|operation/getFunctionStatus?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().getFunctionStatus(tenant, namespace, functionName);
+
+```
+
+
+
+
+````
+
+### Get stats of an instance of a function
+
+You can get the current stats of a Pulsar Function instance with `instance-id` using Admin CLI, REST API or Java admin API.
+````mdx-code-block
+
+
+
+Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions stats \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --instance-id 1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/:instanceId/stats|operation/getFunctionInstanceStats?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().getFunctionStats(tenant, namespace, functionName, Integer.parseInt(instanceId));
+
+```
+
+
+
+
+````
+
+### Get stats of all instances of a function
+
+You can get the current stats of all instances of a Pulsar function using Admin CLI, REST API or Java admin API.
+
+````mdx-code-block
+
+
+
+Use the [`stats`](reference-pulsar-admin.md#functions-stats) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions stats \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions)
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/stats|operation/getFunctionStats?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().getFunctionStats(tenant, namespace, functionName);
+
+```
+
+
+
+
+````
+
+### Trigger a function
+
+You can trigger a specified Pulsar function with a supplied value using Admin CLI, REST API or Java admin API.
+
+````mdx-code-block
+
+
+
+Use the [`trigger`](reference-pulsar-admin.md#functions-trigger) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions trigger \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --topic (the name of input topic) \
+  --trigger-value "hello pulsar"
+ # or --trigger-file (the path of trigger file)
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/trigger|operation/triggerFunction?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().triggerFunction(tenant, namespace, functionName, topic, triggerValue, triggerFile);
+
+```
+
+
+
+
+````
+
+### Put state associated with a function
+
+You can put the state associated with a Pulsar function using Admin CLI, REST API or Java admin API.
+
+````mdx-code-block
+
+
+
+Use the [`putstate`](reference-pulsar-admin.md#functions-putstate) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions putstate \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --state "{\"key\":\"pulsar\", \"stringValue\":\"hello pulsar\"}"
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/putFunctionState?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+TypeReference<FunctionState> typeRef = new TypeReference<FunctionState>() {};
+FunctionState stateRepr = ObjectMapperFactory.getThreadLocal().readValue(state, typeRef);
+admin.functions().putFunctionState(tenant, namespace, functionName, stateRepr);
+
+```
+
+
+
+
+````
+
+### Fetch state associated with a function
+
+You can fetch the current state associated with a Pulsar function using Admin CLI, REST API or Java admin API.
+
+````mdx-code-block
+
+
+
+Use the [`querystate`](reference-pulsar-admin.md#functions-querystate) subcommand.
+
+**Example**
+
+```shell
+
+$ pulsar-admin functions querystate \
+ --tenant public \
+ --namespace default \
+ --name (the name of Pulsar Functions) \
+ --key (the key of state)
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/functions/:tenant/:namespace/:functionName/state/:key|operation/getFunctionState?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.functions().getFunctionState(tenant, namespace, functionName, key);
+
+```
+
+
+
+
+````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.10.x/admin-api-namespaces.md
new file mode 100644
index 0000000000000..eb8017a1142d0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-namespaces.md
@@ -0,0 +1,1267 @@
+---
+id: admin-api-namespaces
+title: Managing Namespaces
+sidebar_label: "Namespaces"
+original_id: admin-api-namespaces
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+> **Important**
+>
+> This page only shows **some frequently used operations**.
+>
+> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/).
+>
+> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+>
+> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+Pulsar [namespaces](reference-terminology.md#namespace) are logical groupings of [topics](reference-terminology.md#topic).
+
+Namespaces can be managed via:
+
+* The `namespaces` command of the [`pulsar-admin`](/tools/pulsar-admin/) tool
+* The `/admin/v2/namespaces` endpoint of the admin {@inject: rest:REST:/} API
+* The `namespaces` method of the `PulsarAdmin` object in the [Java API](client-libraries-java.md)
+
+## Namespaces resources
+
+### Create namespaces
+
+You can create new namespaces under a given [tenant](reference-terminology.md#tenant).
+
+````mdx-code-block
+
+
+
+Use the [`create`](reference-pulsar-admin.md#namespaces-create) subcommand and specify the namespace by name:
+
+```shell
+
+$ pulsar-admin namespaces create test-tenant/test-namespace
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace|operation/createNamespace?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().createNamespace(namespace);
+
+```
+
+
+
+
+````
+
+### Get policies
+
+You can fetch the current policies associated with a namespace at any time.
+
+````mdx-code-block
+
+
+
+Use the [`policies`](reference-pulsar-admin.md#namespaces-policies) subcommand and specify the namespace:
+
+```shell
+
+$ pulsar-admin namespaces policies test-tenant/test-namespace
+{
+ "auth_policies": {
+ "namespace_auth": {},
+ "destination_auth": {}
+ },
+ "replication_clusters": [],
+ "bundles_activated": true,
+ "bundles": {
+ "boundaries": [
+ "0x00000000",
+ "0xffffffff"
+ ],
+ "numBundles": 1
+ },
+ "backlog_quota_map": {},
+ "persistence": null,
+ "latency_stats_sample_rate": {},
+ "message_ttl_in_seconds": 0,
+ "retention_policies": null,
+ "deleted": false
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace|operation/getPolicies?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getPolicies(namespace);
+
+```
+
+
+
+
+````
+
+### List namespaces
+
+You can list all namespaces within a given Pulsar [tenant](reference-terminology.md#tenant).
+
+````mdx-code-block
+
+
+
+Use the [`list`](reference-pulsar-admin.md#namespaces-list) subcommand and specify the tenant:
+
+```shell
+
+$ pulsar-admin namespaces list test-tenant
+test-tenant/ns1
+test-tenant/ns2
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant|operation/getTenantNamespaces?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getNamespaces(tenant);
+
+```
+
+
+
+
+````
+
+### Delete namespaces
+
+You can delete existing namespaces from a tenant.
+
+````mdx-code-block
+
+
+
+Use the [`delete`](reference-pulsar-admin.md#namespaces-delete) subcommand and specify the namespace:
+
+```shell
+
+$ pulsar-admin namespaces delete test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace|operation/deleteNamespace?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().deleteNamespace(namespace);
+
+```
+
+
+
+
+````
+
+### Configure replication clusters
+
+#### Set replication cluster
+
+You can set replication clusters for a namespace to enable Pulsar to internally replicate the published messages from one colocation facility to another.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-clusters test-tenant/ns1 \
+ --clusters cl1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replication|operation/setNamespaceReplicationClusters?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setNamespaceReplicationClusters(namespace, clusters);
+
+```
+
+
+
+
+````
+
+#### Get replication cluster
+
+You can get the list of replication clusters for a given namespace.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-clusters test-tenant/ns1
+
+```
+
+```
+
+cl1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replication|operation/getNamespaceReplicationClusters?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getNamespaceReplicationClusters(namespace);
+
+```
+
+
+
+
+````
+
+### Configure backlog quota policies
+
+#### Set backlog quota policies
+
+Backlog quota helps the broker restrict the bandwidth/storage that a namespace's backlog consumes once it reaches a threshold limit. The admin can set the limit and choose one of the following actions to take after the limit is reached.
+
+  1. producer_request_hold: the broker holds the producer's send request without persisting the payload
+
+  2. producer_exception: the broker disconnects the client by throwing an exception
+
+  3. consumer_backlog_eviction: the broker starts discarding backlog messages
+
+The backlog quota restriction is applied by setting the backlog quota type to `destination_storage`.
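+
For a sense of scale, the `--limit` flag in the example below accepts size suffixes such as `10G`. The helper here is a hypothetical sketch of how such a suffix maps to bytes; it is not Pulsar's actual CLI parser.

```java
// Hypothetical sketch: interpret a size string such as "10G" as a byte count.
// Pulsar's own CLI parsing may differ; this only illustrates the units involved.
class SizeParser {
    static long toBytes(String value) {
        char suffix = Character.toUpperCase(value.charAt(value.length() - 1));
        String digits = value.substring(0, value.length() - 1);
        switch (suffix) {
            case 'K': return Long.parseLong(digits) * 1024L;
            case 'M': return Long.parseLong(digits) * 1024L * 1024L;
            case 'G': return Long.parseLong(digits) * 1024L * 1024L * 1024L;
            case 'T': return Long.parseLong(digits) * 1024L * 1024L * 1024L * 1024L;
            default:  return Long.parseLong(value); // plain byte count, no suffix
        }
    }
}
```

Under this reading, `--limit 10G` corresponds to 10 * 1024^3 bytes of backlog storage.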
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-backlog-quota --limit 10G --limitTime 36000 --policy producer_request_hold test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/setBacklogQuota?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setBacklogQuota(namespace, new BacklogQuota(limit, limitTime, policy));
+
+```
+
+
+
+
+````
+
+#### Get backlog quota policies
+
+You can get a configured backlog quota for a given namespace.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-backlog-quotas test-tenant/ns1
+
+```
+
+```json
+
+{
+ "destination_storage": {
+ "limit": 10,
+ "policy": "producer_request_hold"
+ }
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap|operation/getBacklogQuotaMap?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getBacklogQuotaMap(namespace);
+
+```
+
+
+
+
+````
+
+#### Remove backlog quota policies
+
+You can remove backlog quota policies for a given namespace.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces remove-backlog-quota test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/backlogQuota|operation/removeBacklogQuota?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().removeBacklogQuota(namespace, backlogQuotaType);
+
+```
+
+
+
+
+````
+
+### Configure persistence policies
+
+#### Set persistence policies
+
+Persistence policies allow you to configure the persistence level for all topic messages under a given namespace.
+
+  - bookkeeper-ack-quorum: Number of acks (guaranteed copies) to wait for each entry, default: 0
+
+  - bookkeeper-ensemble: Number of bookies to use for a topic, default: 0
+
+  - bookkeeper-write-quorum: Number of writes to make for each entry, default: 0
+
+  - ml-mark-delete-max-rate: Throttling rate of the mark-delete operation (0 means no throttle), default: 0.0
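+
These BookKeeper settings are interrelated: the ensemble must be at least the write quorum, which in turn must be at least the ack quorum. A quick sketch of that invariant (illustrative only, not Pulsar's actual validation code):

```java
// Sketch of the BookKeeper quorum invariant that persistence policies
// must satisfy: ensemble >= writeQuorum >= ackQuorum >= 0.
class PersistenceCheck {
    static boolean isValid(int ensemble, int writeQuorum, int ackQuorum) {
        return ensemble >= writeQuorum && writeQuorum >= ackQuorum && ackQuorum >= 0;
    }
}
```

The example command below (ensemble 3, write quorum 2, ack quorum 2) satisfies this invariant.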
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-persistence --bookkeeper-ack-quorum 2 --bookkeeper-ensemble 3 --bookkeeper-write-quorum 2 --ml-mark-delete-max-rate 0 test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setPersistence(namespace, new PersistencePolicies(bookkeeperEnsemble, bookkeeperWriteQuorum, bookkeeperAckQuorum, managedLedgerMaxMarkDeleteRate));
+
+```
+
+
+
+
+````
+
+#### Get persistence policies
+
+You can get the configured persistence policies of a given namespace.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-persistence test-tenant/ns1
+
+```
+
+```json
+
+{
+ "bookkeeperEnsemble": 3,
+ "bookkeeperWriteQuorum": 2,
+ "bookkeeperAckQuorum": 2,
+ "managedLedgerMaxMarkDeleteRate": 0
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getPersistence(namespace);
+
+```
+
+
+
+
+````
+
+### Configure namespace bundles
+
+#### Unload namespace bundles
+
+A namespace bundle is a virtual group of topics that belong to the same namespace. If a broker is overloaded by the number of bundles it serves, this command unloads a bundle from that broker so that another, less-loaded broker can serve it. The namespace bundle ID ranges from 0x00000000 to 0xffffffff.
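+
Conceptually, the namespace's full hash range is divided among its bundles, and each topic lands in the bundle whose key range contains the hash of the topic name. The toy model below illustrates the idea with equal-width ranges and a CRC32 hash; Pulsar's actual hash function and range layout differ.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Toy model: map a topic name to one of numBundles equal-width bundles
// covering the 32-bit key range [0x00000000, 0xffffffff].
class BundleSketch {
    static int bundleIndexFor(String topic, int numBundles) {
        CRC32 crc = new CRC32();
        crc.update(topic.getBytes(StandardCharsets.UTF_8));
        long hash = crc.getValue();              // in [0, 0xffffffff]
        long width = 0x100000000L / numBundles;  // equal-width key ranges
        return (int) Math.min(hash / width, numBundles - 1);
    }
}
```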
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces unload --bundle 0x00000000_0xffffffff test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/unload|operation/unloadNamespaceBundle?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().unloadNamespaceBundle(namespace, bundle)
+
+```
+
+
+
+
+````
+
+#### Split namespace bundles
+
+One namespace bundle can contain multiple topics but can be served by only one broker. If a single bundle is creating an excessive load on a broker, an admin can split the bundle using the command below, permitting one or more of the new bundles to be unloaded, thus balancing the load across the brokers.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces split-bundle --bundle 0x00000000_0xffffffff test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/:bundle/split|operation/splitNamespaceBundle?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().splitNamespaceBundle(namespace, bundle)
+
+```
+
+
+
+
+````
+
+### Configure message TTL
+
+#### Set message-ttl
+
+You can configure the time-to-live (TTL) duration, in seconds, for messages. In the example below, the message TTL is set to 100 seconds.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-message-ttl --messageTTL 100 test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/setNamespaceMessageTTL?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setNamespaceMessageTTL(namespace, messageTTL)
+
+```
+
+
+
+
+````
+
+#### Get message-ttl
+
+When the message-ttl for a namespace is set, you can use the command below to get the configured value. This example continues the `set message-ttl` example above, so the returned value is 100 (seconds).
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-message-ttl test-tenant/ns1
+
+```
+
+```
+
+100
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/getNamespaceMessageTTL?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getNamespaceMessageTTL(namespace)
+
+```
+
+```
+
+100
+
+```
+
+
+
+
+````
+
+#### Remove message-ttl
+
+Remove the message TTL of a configured namespace.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().removeNamespaceMessageTTL(namespace)
+
+```
+
+
+
+
+````
+
+
+### Clear backlog
+
+#### Clear namespace backlog
+
+This command clears the message backlog for all the topics that belong to a specific namespace. You can also clear the backlog for a specific subscription, as shown in the example below.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces clear-backlog --sub my-subscription test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/clearBacklog|operation/clearNamespaceBacklogForSubscription?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().clearNamespaceBacklogForSubscription(namespace, subscription)
+
+```
+
+
+
+
+````
+
+#### Clear bundle backlog
+
+This command clears the message backlog for all the topics that belong to a specific namespace bundle. You can also clear the backlog for a specific subscription, as shown in the example below.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces clear-backlog --bundle 0x00000000_0xffffffff --sub my-subscription test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/:bundle/clearBacklog|operation/clearNamespaceBundleBacklogForSubscription?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().clearNamespaceBundleBacklogForSubscription(namespace, bundle, subscription)
+
+```
+
+
+
+
+````
+
+### Configure retention
+
+#### Set retention
+
+Each namespace contains multiple topics, and the retention size (storage size) of each topic should not exceed a specific threshold, or its messages should be stored only for a certain period. This command configures the retention size and time of topics in a given namespace.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-retention --size 100 --time 10 test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/retention|operation/setRetention?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setRetention(namespace, new RetentionPolicies(retentionTimeInMin, retentionSizeInMB))
+
+```
+
+
+
+
+````
+
+#### Get retention
+
+This command shows the retention information of a given namespace.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-retention test-tenant/ns1
+
+```
+
+```json
+
+{
+ "retentionTimeInMinutes": 10,
+ "retentionSizeInMB": 100
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/retention|operation/getRetention?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getRetention(namespace)
+
+```
+
+
+
+
+````
+
+### Configure dispatch throttling for topics
+
+#### Set dispatch throttling for topics
+
+This command sets the message dispatch rate for all the topics under a given namespace.
+The dispatch rate can be restricted by the number of messages per period (`msg-dispatch-rate`) or by the number of message-bytes per period (`byte-dispatch-rate`).
+The rate period, in seconds, is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
+
+:::note
+
+- If neither `clusterDispatchRate` nor `topicDispatchRate` is configured, dispatch throttling is disabled.
+- If `topicDispatchRate` is not configured, `clusterDispatchRate` takes effect.
+- If `topicDispatchRate` is configured, `topicDispatchRate` takes effect.
+
+:::
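The precedence in the note above can be sketched as follows. This is an illustrative snippet only, not Pulsar's actual broker code; the class, method, and parameter names are made up:

```java
public class DispatchRatePrecedence {

    static final int DISABLED = -1;

    // Returns the effective dispatch rate given the (possibly unset)
    // topic-level and cluster-level settings, following the note above.
    static int effectiveRate(Integer topicDispatchRate, Integer clusterDispatchRate) {
        if (topicDispatchRate != null) {
            return topicDispatchRate;      // topicDispatchRate takes effect
        }
        if (clusterDispatchRate != null) {
            return clusterDispatchRate;    // otherwise clusterDispatchRate takes effect
        }
        return DISABLED;                   // neither configured: throttling disabled
    }

    public static void main(String[] args) {
        System.out.println(effectiveRate(1000, 500)); // prints 1000
        System.out.println(effectiveRate(null, 500)); // prints 500
        System.out.println(effectiveRate(null, null)); // prints -1
    }
}
```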
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-dispatch-rate test-tenant/ns1 \
+ --msg-dispatch-rate 1000 \
+ --byte-dispatch-rate 1048576 \
+ --dispatch-rate-period 1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+
+```
+
+
+
+
+````
+
+#### Get configured message-rate for topics
+
+This command shows the configured message-rate for the namespace (topics under this namespace can dispatch this many messages per second).
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-dispatch-rate test-tenant/ns1
+
+```
+
+```json
+
+{
+ "dispatchThrottlingRatePerTopicInMsg" : 1000,
+ "dispatchThrottlingRatePerTopicInByte" : 1048576,
+ "ratePeriodInSecond" : 1
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/dispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getDispatchRate(namespace)
+
+```
+
+
+
+
+````
+
+### Configure dispatch throttling for subscription
+
+#### Set dispatch throttling for subscription
+
+This command sets the message dispatch rate for all the subscriptions of topics under a given namespace.
+The dispatch rate can be restricted by the number of messages per period (`msg-dispatch-rate`) or by the number of message-bytes per period (`byte-dispatch-rate`).
+The rate period, in seconds, is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-subscription-dispatch-rate test-tenant/ns1 \
+ --msg-dispatch-rate 1000 \
+ --byte-dispatch-rate 1048576 \
+ --dispatch-rate-period 1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setSubscriptionDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+
+```
+
+
+
+
+````
+
+#### Get configured message-rate for subscription
+
+This command shows the configured subscription-level message-rate for the namespace (each subscription under this namespace can dispatch this many messages per second).
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-subscription-dispatch-rate test-tenant/ns1
+
+```
+
+```json
+
+{
+ "dispatchThrottlingRatePerTopicInMsg" : 1000,
+ "dispatchThrottlingRatePerTopicInByte" : 1048576,
+ "ratePeriodInSecond" : 1
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/subscriptionDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getSubscriptionDispatchRate(namespace)
+
+```
+
+
+
+
+````
+
+### Configure dispatch throttling for replicator
+
+#### Set dispatch throttling for replicator
+
+This command sets the message dispatch rate for all the replicators between replication clusters under a given namespace.
+The dispatch rate can be restricted by the number of messages per period (`msg-dispatch-rate`) or by the number of message-bytes per period (`byte-dispatch-rate`).
+The rate period, in seconds, is configured with `dispatch-rate-period`. The default value of both `msg-dispatch-rate` and `byte-dispatch-rate` is -1, which
+disables the throttling.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-replicator-dispatch-rate test-tenant/ns1 \
+ --msg-dispatch-rate 1000 \
+ --byte-dispatch-rate 1048576 \
+ --dispatch-rate-period 1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/setDispatchRate?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setReplicatorDispatchRate(namespace, new DispatchRate(1000, 1048576, 1))
+
+```
+
+
+
+
+````
+
+#### Get configured message-rate for replicator
+
+This command shows the configured replicator message-rate for the namespace (replicators under this namespace can dispatch this many messages per second).
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-replicator-dispatch-rate test-tenant/ns1
+
+```
+
+```json
+
+{
+ "dispatchThrottlingRatePerTopicInMsg" : 1000,
+ "dispatchThrottlingRatePerTopicInByte" : 1048576,
+ "ratePeriodInSecond" : 1
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/replicatorDispatchRate|operation/getDispatchRate?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getReplicatorDispatchRate(namespace)
+
+```
+
+
+
+
+````
+
+### Configure deduplication snapshot interval
+
+#### Get deduplication snapshot interval
+
+This command shows the configured `deduplicationSnapshotInterval` for a namespace (each topic under the namespace takes a deduplication snapshot according to this interval).
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces get-deduplication-snapshot-interval test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getDeduplicationSnapshotInterval(namespace)
+
+```
+
+
+
+
+````
+
+#### Set deduplication snapshot interval
+
+Set the `deduplicationSnapshotInterval` for a namespace. Each topic under the namespace takes a deduplication snapshot according to this interval.
+`brokerDeduplicationEnabled` must be set to `true` for this property to take effect.
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces set-deduplication-snapshot-interval test-tenant/ns1 --interval 1000
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().setDeduplicationSnapshotInterval(namespace, 1000)
+
+```
+
+
+
+
+````
+
+#### Remove deduplication snapshot interval
+
+Remove the configured `deduplicationSnapshotInterval` of a namespace (each topic under the namespace takes a deduplication snapshot according to this interval).
+
+````mdx-code-block
+
+
+
+```
+
+$ pulsar-admin namespaces remove-deduplication-snapshot-interval test-tenant/ns1
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().removeDeduplicationSnapshotInterval(namespace)
+
+```
+
+
+
+
+````
+
+### Namespace isolation
+
+You can use the [Pulsar isolation policy](administration-isolation.md) to allocate resources (broker and bookie) for a namespace.
+
+### Unload namespaces from a broker
+
+You can unload a namespace, or a [namespace bundle](reference-terminology.md#namespace-bundle), from the Pulsar [broker](reference-terminology.md#broker) that is currently responsible for it.
+
+#### pulsar-admin
+
+Use the [`unload`](reference-pulsar-admin.md#unload) subcommand of the [`namespaces`](reference-pulsar-admin.md#namespaces) command.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin namespaces unload my-tenant/my-ns
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/namespaces/:tenant/:namespace/unload|operation/unloadNamespace?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().unload(namespace)
+
+```
+
+
+
+
+````
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-non-partitioned-topics.md b/site2/website/versioned_docs/version-2.10.x/admin-api-non-partitioned-topics.md
new file mode 100644
index 0000000000000..e6347bb8c363a
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-non-partitioned-topics.md
@@ -0,0 +1,8 @@
+---
+id: admin-api-non-partitioned-topics
+title: Managing non-partitioned topics
+sidebar_label: "Non-partitioned topics"
+original_id: admin-api-non-partitioned-topics
+---
+
+For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-non-persistent-topics.md b/site2/website/versioned_docs/version-2.10.x/admin-api-non-persistent-topics.md
new file mode 100644
index 0000000000000..3126a6494c715
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-non-persistent-topics.md
@@ -0,0 +1,8 @@
+---
+id: admin-api-non-persistent-topics
+title: Managing non-persistent topics
+sidebar_label: "Non-Persistent topics"
+original_id: admin-api-non-persistent-topics
+---
+
+For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-overview.md b/site2/website/versioned_docs/version-2.10.x/admin-api-overview.md
new file mode 100644
index 0000000000000..408f1943fff18
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-overview.md
@@ -0,0 +1,144 @@
+---
+id: admin-api-overview
+title: Pulsar admin interface
+sidebar_label: "Overview"
+original_id: admin-api-overview
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+The Pulsar admin interface enables you to manage all important entities in a Pulsar instance, such as tenants, topics, and namespaces.
+
+You can interact with the admin interface via:
+
+- The `pulsar-admin` CLI tool, which is available in the `bin` folder of your Pulsar installation:
+
+ ```shell
+
+ bin/pulsar-admin
+
+ ```
+
+ > **Important**
+ >
+ > For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](/tools/pulsar-admin/).
+
+- HTTP calls, which are made against the admin {@inject: rest:REST:/} API provided by Pulsar brokers. For some RESTful APIs, they might be redirected to the owner brokers for serving with [`307 Temporary Redirect`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307), hence the HTTP callers should handle `307 Temporary Redirect`. If you use `curl` commands, you should specify `-L` to handle redirections.
+
+ > **Important**
+ >
+ > For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+
+- A Java client interface.
+
+ > **Important**
+ >
+ > For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+> **The REST API is the admin interface**. Both the `pulsar-admin` CLI tool and the Java client use the REST API. If you implement your own admin interface client, you should use the REST API.
+
+## Admin setup
+
+Each of the three admin interfaces (the `pulsar-admin` CLI tool, the {@inject: rest:REST:/} API, and the [Java admin API](/api/admin)) requires some special setup if you have enabled authentication in your Pulsar instance.
+
+````mdx-code-block
+
+
+
+If you have enabled authentication, you need to provide an auth configuration to use the `pulsar-admin` tool. By default, the configuration for the `pulsar-admin` tool is in the [`conf/client.conf`](reference-configuration.md#client) file. The following are the available parameters:
+
+|Name|Description|Default|
+|----|-----------|-------|
+|webServiceUrl|The web URL for the cluster.|http://localhost:8080/|
+|brokerServiceUrl|The Pulsar protocol URL for the cluster.|pulsar://localhost:6650/|
+|authPlugin|The authentication plugin.| |
+|authParams|The authentication parameters for the cluster, as a comma-separated string.| |
+|useTls|Whether or not TLS authentication will be enforced in the cluster.|false|
+|tlsAllowInsecureConnection|Accept untrusted TLS certificate from client.|false|
+|tlsTrustCertsFilePath|Path for the trusted TLS certificate file.| |
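+
+For example, a minimal `conf/client.conf` for a TLS- and token-secured cluster might look like the following sketch (the URLs, auth plugin, and file paths are placeholders for your own deployment):
+
+```
+
+webServiceUrl=https://broker.example.com:8443/
+brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
+authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
+authParams=file:///path/to/token.txt
+useTls=true
+tlsAllowInsecureConnection=false
+tlsTrustCertsFilePath=/path/to/ca.cert.pem
+
+```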
+
+
+
+
+You can find details for the REST API exposed by Pulsar brokers in this {@inject: rest:document:/}.
+
+
+
+
+To use the Java admin API, instantiate a {@inject: javadoc:PulsarAdmin:/admin/org/apache/pulsar/client/admin/PulsarAdmin} object, and specify a URL for a Pulsar broker and a {@inject: javadoc:PulsarAdminBuilder:/admin/org/apache/pulsar/client/admin/PulsarAdminBuilder}. The following is a minimal example using `localhost`:
+
+```java
+
+String url = "http://localhost:8080";
+// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
+String authPluginClassName = "com.org.MyAuthPluginClass";
+// Pass auth-param if auth-plugin class requires it
+String authParams = "param1=value1";
+boolean useTls = false;
+boolean tlsAllowInsecureConnection = false;
+String tlsTrustCertsFilePath = null;
+PulsarAdmin admin = PulsarAdmin.builder()
+        .authentication(authPluginClassName, authParams)
+        .serviceHttpUrl(url)
+        .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
+        .allowTlsInsecureConnection(tlsAllowInsecureConnection)
+        .build();
+
+```
+
+If you use multiple brokers, you can specify a comma-separated, multi-host service URL, in the same way as for the Pulsar service URL. For example,
+
+```java
+
+String url = "http://localhost:8080,localhost:8081,localhost:8082";
+// Pass auth-plugin class fully-qualified name if Pulsar-security enabled
+String authPluginClassName = "com.org.MyAuthPluginClass";
+// Pass auth-param if auth-plugin class requires it
+String authParams = "param1=value1";
+boolean useTls = false;
+boolean tlsAllowInsecureConnection = false;
+String tlsTrustCertsFilePath = null;
+PulsarAdmin admin = PulsarAdmin.builder()
+        .authentication(authPluginClassName, authParams)
+        .serviceHttpUrl(url)
+        .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
+        .allowTlsInsecureConnection(tlsAllowInsecureConnection)
+        .build();
+
+```
+
+
+
+
+````
+
+## How to define Pulsar resource names when running Pulsar in Kubernetes
+If you run Pulsar Functions or connectors on Kubernetes, you need to follow the Kubernetes naming convention to define the names of your Pulsar resources, whichever admin interface you use.
+
+Kubernetes requires a name that can be used as a DNS subdomain name as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). Pulsar supports more legal characters than the Kubernetes naming convention. If you create a Pulsar resource name with special characters that are not supported by Kubernetes (for example, including colons in a Pulsar namespace name), the Kubernetes runtime translates the Pulsar object names into Kubernetes resource labels which are in RFC 1123-compliant form. Consequently, you can run functions or connectors using the Kubernetes runtime. The rules for translating Pulsar object names into Kubernetes resource labels are as follows:
+
+- Truncate to 63 characters
+
+- Replace the following characters with dashes (-):
+
+ - Non-alphanumeric characters
+
+ - Underscores (_)
+
+ - Dots (.)
+
+- Replace beginning and ending non-alphanumeric characters with 0
+
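The translation rules above can be sketched as follows. This is an illustrative snippet only; the class and method names are made up, and it is not Pulsar's actual Kubernetes runtime implementation:

```java
public class KubernetesLabelSketch {

    public static String toLabel(String pulsarName) {
        // 1. Truncate to 63 characters.
        String label = pulsarName.length() > 63 ? pulsarName.substring(0, 63) : pulsarName;
        // 2. Replace non-alphanumeric characters (including '_' and '.') with dashes.
        label = label.replaceAll("[^a-zA-Z0-9]", "-");
        // 3. Replace beginning and ending non-alphanumeric characters with 0.
        label = label.replaceAll("^[^a-zA-Z0-9]", "0")
                     .replaceAll("[^a-zA-Z0-9]$", "0");
        return label;
    }

    public static void main(String[] args) {
        System.out.println(toLabel("my-tenant/my-ns/my_function.v1"));
        // prints: my-tenant-my-ns-my-function-v1
    }
}
```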
+:::tip
+
+- If you get an error in translating Pulsar object names into Kubernetes resource labels (for example, you may have a naming collision if your Pulsar object name is too long) or want to customize the translating rules, see [customize Kubernetes runtime](functions-runtime.md#customize-kubernetes-runtime).
+- For how to configure Kubernetes runtime, see [here](functions-runtime.md#configure-kubernetes-runtime).
+
+:::
+
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-packages.md b/site2/website/versioned_docs/version-2.10.x/admin-api-packages.md
new file mode 100644
index 0000000000000..608dfb7587daf
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-packages.md
@@ -0,0 +1,390 @@
+---
+id: admin-api-packages
+title: Manage packages
+sidebar_label: "Packages"
+original_id: admin-api-packages
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+> **Important**
+>
+> This page only shows **some frequently used operations**.
+>
+> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/).
+>
+> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+>
+> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+Package managers or package-management systems automatically manage packages in a consistent manner. These tools simplify the installation tasks, upgrade process, and deletion operations for users. A package is a minimal unit that a package manager deals with. In Pulsar, packages are organized at the tenant- and namespace-level to manage Pulsar Functions and Pulsar IO connectors (i.e., source and sink).
+
+## What is a package?
+
+A package is a set of elements that the user would like to reuse in later operations. In Pulsar, a package can be a group of functions, sources, and sinks. You can define a package according to your needs.
+
+The package management system in Pulsar stores the data and metadata of each package (as shown in the table below) and tracks the package versions.
+
+|Metadata|Description|
+|--|--|
+|description|The description of the package.|
+|contact|The contact information of a package. For example, an email address of the developer team.|
+|create_time|The time when the package is created.|
+|modification_time|The time when the package is lastly modified.|
+|properties|A user-defined key/value map to store other information.|
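+
+For example, the stored metadata for a package might look like the following sketch (all values are illustrative):
+
+```json
+
+{
+  "description": "A sink connector that writes events to MySQL",
+  "contact": "dev-team@example.com",
+  "create_time": 1640995200000,
+  "modification_time": 1640995200000,
+  "properties": {
+    "owner": "data-platform"
+  }
+}
+
+```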
+
+## How to use a package
+
+Packages allow you to efficiently reuse the same set of functions and IO connectors. For example, you can use the same function, source, and sink in multiple namespaces. The main steps are:
+
+1. Create a package in the package manager by providing the following information: type, tenant, namespace, package name, and version.
+
+ |Component|Description|
+ |-|-|
+ |type|Specify one of the supported package types: function, sink and source.|
+ |tenant|Specify the tenant where you want to create the package.|
+ |namespace|Specify the namespace where you want to create the package.|
+ |name|Specify the complete name of the package, using the format `tenant/namespace/package-name`.|
+ |version|Specify the version of the package using the format `MajorVersion.MinorVersion` in numerals.|
+
+ The information you provide creates a URL for the package, in the format `type://tenant/namespace/package-name@version`.
+
+2. Upload the elements to the package, i.e., the functions, sources, and sinks that you want to use across namespaces.
+
+3. Apply permissions to this package from various namespaces.
+
+Now, you can use the elements you defined in the package by calling this package from within the package manager. The package manager locates it by the URL. For example,
+
+```
+
+sink://public/default/mysql-sink@1.0
+function://my-tenant/my-ns/my-function@0.1
+source://my-tenant/my-ns/mysql-cdc-source@2.3
+
+```
+
+## Package management in Pulsar
+
+You can use the command line tools, REST API, or the Java client to manage your package resources in Pulsar. More specifically, you can use these tools to [upload](#upload-a-package), [download](#download-a-package), and [delete](#delete-a-package) a package, [get the metadata](#get-the-metadata-of-a-package) and [update the metadata](#update-the-metadata-of-a-package) of a package, [get the versions](#list-all-versions-of-a-package) of a package, and [get all packages of a specific type under a namespace](#list-all-packages-of-a-specific-type-under-a-namespace).
+
+### Upload a package
+
+You can use the following commands to upload a package.
+
+````mdx-code-block
+
+
+
+```shell
+
+bin/pulsar-admin packages upload function://public/default/example@v0.1 --path package-file --description package-description
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/upload?version=@pulsar:version_number@}
+
+
+
+
+Upload a package to the package management service synchronously.
+
+```java
+
+ void upload(PackageMetadata metadata, String packageName, String path) throws PulsarAdminException;
+
+```
+
+Upload a package to the package management service asynchronously.
+
+```java
+
+ CompletableFuture<Void> uploadAsync(PackageMetadata metadata, String packageName, String path);
+
+```
+
+
+
+
+````
+
+### Download a package
+
+You can use the following commands to download a package.
+
+````mdx-code-block
+
+
+
+```shell
+
+bin/pulsar-admin packages download function://public/default/example@v0.1 --path package-file
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/download?version=@pulsar:version_number@}
+
+
+
+
+Download a package from the package management service synchronously.
+
+```java
+
+ void download(String packageName, String path) throws PulsarAdminException;
+
+```
+
+Download a package from the package management service asynchronously.
+
+```java
+
+ CompletableFuture<Void> downloadAsync(String packageName, String path);
+
+```
+
+
+
+
+````
+
+### Delete a package
+
+You can use the following commands to delete a package.
+
+````mdx-code-block
+
+
+
+The following command deletes a package of version 0.1.
+
+```shell
+
+bin/pulsar-admin packages delete function://public/default/example@v0.1
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version|operation/delete?version=@pulsar:version_number@}
+
+
+
+
+Delete a specified package synchronously.
+
+```java
+
+ void delete(String packageName) throws PulsarAdminException;
+
+```
+
+Delete a specified package asynchronously.
+
+```java
+
+ CompletableFuture<Void> deleteAsync(String packageName);
+
+```
+
+
+
+
+````
+
+### Get the metadata of a package
+
+You can use the following commands to get the metadata of a package.
+
+````mdx-code-block
+
+
+
+```shell
+
+bin/pulsar-admin packages get-metadata function://public/default/test@v1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/getMeta?version=@pulsar:version_number@}
+
+
+
+
+Get the metadata of a package synchronously.
+
+```java
+
+ PackageMetadata getMetadata(String packageName) throws PulsarAdminException;
+
+```
+
+Get the metadata of a package asynchronously.
+
+```java
+
+ CompletableFuture<PackageMetadata> getMetadataAsync(String packageName);
+
+```
+
+
+
+
+````
+
+### Update the metadata of a package
+
+You can use the following commands to update the metadata of a package.
+
+````mdx-code-block
+
+
+
+```shell
+
+bin/pulsar-admin packages update-metadata function://public/default/example@v0.1 --description update-description
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v3/packages/:type/:tenant/:namespace/:packageName/:version/metadata|operation/updateMeta?version=@pulsar:version_number@}
+
+
+
+
+Update the metadata of a package synchronously.
+
+```java
+
+ void updateMetadata(String packageName, PackageMetadata metadata) throws PulsarAdminException;
+
+```
+
+Update the metadata of a package asynchronously.
+
+```java
+
+ CompletableFuture<Void> updateMetadataAsync(String packageName, PackageMetadata metadata);
+
+```
+
+
+
+
+````
+
+### List all versions of a package
+
+You can use the following commands to list all versions of a package.
+
+````mdx-code-block
+
+
+
+```shell
+
+bin/pulsar-admin packages list-versions type://tenant/namespace/packageName
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace/:packageName|operation/listPackageVersion?version=@pulsar:version_number@}
+
+
+
+
+List all versions of a package synchronously.
+
+```java
+
+ List<String> listPackageVersions(String packageName) throws PulsarAdminException;
+
+```
+
+List all versions of a package asynchronously.
+
+```java
+
+ CompletableFuture<List<String>> listPackageVersionsAsync(String packageName);
+
+```
+
+
+
+
+````
+
+### List all packages of a specific type under a namespace
+
+You can use the following commands to list all packages of a specific type under a namespace.
+
+````mdx-code-block
+
+
+
+
+```shell
+
+bin/pulsar-admin packages list --type function public/default
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v3/packages/:type/:tenant/:namespace|operation/listPackages?version=@pulsar:version_number@}
+
+
+
+
+List all packages of a specific type under a namespace synchronously.
+
+```java
+
+ List<String> listPackages(String type, String namespace) throws PulsarAdminException;
+
+```
+
+List all packages of a specific type under a namespace asynchronously.
+
+```java
+
+ CompletableFuture<List<String>> listPackagesAsync(String type, String namespace);
+
+```
+
+
+
+
+````
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-partitioned-topics.md b/site2/website/versioned_docs/version-2.10.x/admin-api-partitioned-topics.md
new file mode 100644
index 0000000000000..5ce182282e032
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-partitioned-topics.md
@@ -0,0 +1,8 @@
+---
+id: admin-api-partitioned-topics
+title: Managing partitioned topics
+sidebar_label: "Partitioned topics"
+original_id: admin-api-partitioned-topics
+---
+
+For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-permissions.md b/site2/website/versioned_docs/version-2.10.x/admin-api-permissions.md
new file mode 100644
index 0000000000000..5ace9d573bdaa
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-permissions.md
@@ -0,0 +1,189 @@
+---
+id: admin-api-permissions
+title: Managing permissions
+sidebar_label: "Permissions"
+original_id: admin-api-permissions
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+> **Important**
+>
+> This page only shows **some frequently used operations**.
+>
+> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/)
+>
+> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+>
+> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+Pulsar allows you to grant namespace-level or topic-level permission to users.
+
+- If you grant a namespace-level permission to a user, then the user can access all the topics under the namespace.
+
+- If you grant a topic-level permission to a user, then the user can access only the topic.
+
+The chapters below demonstrate how to grant namespace-level permissions to users. For how to grant topic-level permissions to users, see [manage topics](admin-api-topics.md/#grant-permission).
+
+## Grant permissions
+
+You can grant permissions to specific roles for lists of operations such as `produce` and `consume`.
+
+````mdx-code-block
+
+
+
+Use the [`grant-permission`](reference-pulsar-admin.md#grant-permission) subcommand and specify a namespace, actions using the `--actions` flag, and a role using the `--role` flag:
+
+```shell
+
+$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
+ --actions produce,consume \
+ --role admin10
+
+```
+
+Wildcard authorization can be performed when `authorizationAllowWildcardsMatching` is set to `true` in `broker.conf`.
+
+For example:
+
+```shell
+
+$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
+ --actions produce,consume \
+ --role 'my.role.*'
+
+```
+
+Then, roles `my.role.1`, `my.role.2`, `my.role.foo`, `my.role.bar`, etc. can produce and consume.
+
+```shell
+
+$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
+ --actions produce,consume \
+ --role '*.role.my'
+
+```
+
+Then, roles `1.role.my`, `2.role.my`, `foo.role.my`, `bar.role.my`, etc. can produce and consume.
+
+**Note**: Wildcard matching works **only at the beginning or end of the role name**.
+
+For example:
+
+```shell
+
+$ pulsar-admin namespaces grant-permission test-tenant/ns1 \
+ --actions produce,consume \
+ --role 'my.*.role'
+
+```
+
+In this case, only the role `my.*.role` has permissions.
+Roles `my.1.role`, `my.2.role`, `my.foo.role`, `my.bar.role`, etc. **cannot** produce and consume.
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/grantPermissionOnNamespace?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().grantPermissionOnNamespace(namespace, role, getAuthActions(actions));
+
+```
+
+
+
+
+````
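+
+The wildcard rules above can be sketched as a small standalone helper. This is an illustration of the documented behavior only (a `*` is honored at the beginning or end of the granted role name, and is literal anywhere else), not actual broker code:
+
+```java
+// Hypothetical sketch of wildcard role matching as enabled by
+// authorizationAllowWildcardsMatching=true. Illustrative only.
+public class WildcardRoleMatch {
+    public static boolean matches(String grantedRole, String clientRole) {
+        if (grantedRole.endsWith("*")) {
+            // e.g. 'my.role.*' matches my.role.1, my.role.foo, ...
+            return clientRole.startsWith(grantedRole.substring(0, grantedRole.length() - 1));
+        }
+        if (grantedRole.startsWith("*")) {
+            // e.g. '*.role.my' matches 1.role.my, bar.role.my, ...
+            return clientRole.endsWith(grantedRole.substring(1));
+        }
+        // A '*' in the middle (e.g. 'my.*.role') is not expanded, so the
+        // granted role only matches a client role with the same literal name.
+        return grantedRole.equals(clientRole);
+    }
+}
+```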
+
+## Get permissions
+
+You can see which permissions have been granted to which roles in a namespace.
+
+````mdx-code-block
+
+
+
+Use the [`permissions`](reference-pulsar-admin.md#permissions) subcommand and specify a namespace:
+
+```shell
+
+$ pulsar-admin namespaces permissions test-tenant/ns1
+{
+ "admin10": [
+ "produce",
+ "consume"
+ ]
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/permissions|operation/getPermissions?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().getPermissions(namespace);
+
+```
+
+
+
+
+````
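+
+The JSON returned above is a map from each role to its granted actions. As an illustration only (not a Pulsar API), a client-side check against such a map could look like:
+
+```java
+import java.util.List;
+import java.util.Map;
+
+// Hypothetical helper for inspecting a role -> actions map such as the
+// permissions output shown above. Illustrative only.
+public class PermissionCheck {
+    public static boolean canDo(Map<String, List<String>> permissions,
+                                String role, String action) {
+        return permissions.getOrDefault(role, List.of()).contains(action);
+    }
+}
+```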
+
+## Revoke permissions
+
+You can revoke permissions from specific roles, which means that those roles will no longer have access to the specified namespace.
+
+````mdx-code-block
+
+
+
+Use the [`revoke-permission`](reference-pulsar-admin.md#revoke-permission) subcommand and specify a namespace and a role using the `--role` flag:
+
+```shell
+
+$ pulsar-admin namespaces revoke-permission test-tenant/ns1 \
+ --role admin10
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/permissions/:role|operation/revokePermissionsOnNamespace?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.namespaces().revokePermissionsOnNamespace(namespace, role);
+
+```
+
+
+
+
+````
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-persistent-topics.md b/site2/website/versioned_docs/version-2.10.x/admin-api-persistent-topics.md
new file mode 100644
index 0000000000000..50d135b72f542
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-persistent-topics.md
@@ -0,0 +1,8 @@
+---
+id: admin-api-persistent-topics
+title: Managing persistent topics
+sidebar_label: "Persistent topics"
+original_id: admin-api-persistent-topics
+---
+
+For details of the content, refer to [manage topics](admin-api-topics.md).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-schemas.md b/site2/website/versioned_docs/version-2.10.x/admin-api-schemas.md
new file mode 100644
index 0000000000000..9ffe21f5b0f75
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-schemas.md
@@ -0,0 +1,7 @@
+---
+id: admin-api-schemas
+title: Managing Schemas
+sidebar_label: "Schemas"
+original_id: admin-api-schemas
+---
+
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-tenants.md b/site2/website/versioned_docs/version-2.10.x/admin-api-tenants.md
new file mode 100644
index 0000000000000..e962ed851e4f0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-tenants.md
@@ -0,0 +1,242 @@
+---
+id: admin-api-tenants
+title: Managing Tenants
+sidebar_label: "Tenants"
+original_id: admin-api-tenants
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+> **Important**
+>
+> This page only shows **some frequently used operations**.
+>
+> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/)
+>
+> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+>
+> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+Tenants, like namespaces, can be managed using the [admin API](admin-api-overview.md). There are currently two configurable aspects of tenants:
+
+* Admin roles
+* Allowed clusters
+
+## Tenant resources
+
+### List
+
+You can list all of the tenants associated with an [instance](reference-terminology.md#instance).
+
+````mdx-code-block
+
+
+
+Use the [`list`](reference-pulsar-admin.md#tenants-list) subcommand.
+
+```shell
+
+$ pulsar-admin tenants list
+my-tenant-1
+my-tenant-2
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/tenants|operation/getTenants?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.tenants().getTenants();
+
+```
+
+
+
+
+````
+
+### Create
+
+You can create a new tenant.
+
+````mdx-code-block
+
+
+
+Use the [`create`](reference-pulsar-admin.md#tenants-create) subcommand:
+
+```shell
+
+$ pulsar-admin tenants create my-tenant
+
+```
+
+When creating a tenant, you can optionally assign admin roles using the `-r`/`--admin-roles`
+flag, and clusters using the `-c`/`--allowed-clusters` flag. You can specify multiple values
+as a comma-separated list. Here are some examples:
+
+```shell
+
+$ pulsar-admin tenants create my-tenant \
+ --admin-roles role1,role2,role3 \
+ --allowed-clusters cluster1
+
+$ pulsar-admin tenants create my-tenant \
+  -r role1 \
+  -c cluster1
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/tenants/:tenant|operation/createTenant?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.tenants().createTenant(tenantName, tenantInfo);
+
+```
+
+
+
+
+````
+
+### Get configuration
+
+You can fetch the [configuration](reference-configuration.md) for an existing tenant at any time.
+
+````mdx-code-block
+
+
+
+Use the [`get`](reference-pulsar-admin.md#tenants-get) subcommand and specify the name of the tenant. Here's an example:
+
+```shell
+
+$ pulsar-admin tenants get my-tenant
+{
+ "adminRoles": [
+ "admin1",
+ "admin2"
+ ],
+ "allowedClusters": [
+ "cl1",
+ "cl2"
+ ]
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/tenants/:tenant|operation/getTenant?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.tenants().getTenantInfo(tenantName);
+
+```
+
+
+
+
+````
+
+### Delete
+
+Tenants can be deleted from a Pulsar [instance](reference-terminology.md#instance).
+
+````mdx-code-block
+
+
+
+Use the [`delete`](reference-pulsar-admin.md#tenants-delete) subcommand and specify the name of the tenant.
+
+```shell
+
+$ pulsar-admin tenants delete my-tenant
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/tenants/:tenant|operation/deleteTenant?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.tenants().deleteTenant(tenantName);
+
+```
+
+
+
+
+````
+
+### Update
+
+You can update a tenant's configuration.
+
+````mdx-code-block
+
+
+
+Use the [`update`](reference-pulsar-admin.md#tenants-update) subcommand.
+
+```shell
+
+$ pulsar-admin tenants update my-tenant
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/tenants/:tenant|operation/updateTenant?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.tenants().updateTenant(tenantName, tenantInfo);
+
+```
+
+
+
+
+````
diff --git a/site2/website/versioned_docs/version-2.10.x/admin-api-topics.md b/site2/website/versioned_docs/version-2.10.x/admin-api-topics.md
new file mode 100644
index 0000000000000..90baa7a120ee6
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/admin-api-topics.md
@@ -0,0 +1,2472 @@
+---
+id: admin-api-topics
+title: Manage topics
+sidebar_label: "Topics"
+original_id: admin-api-topics
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+> **Important**
+>
+> This page only shows **some frequently used operations**.
+>
+> - For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more, see [Pulsar admin doc](/tools/pulsar-admin/)
+>
+> - For the latest and complete information about `REST API`, including parameters, responses, samples, and more, see {@inject: rest:REST:/} API doc.
+>
+> - For the latest and complete information about `Java admin API`, including classes, methods, descriptions, and more, see [Java admin API doc](/api/admin/).
+
+Pulsar has persistent and non-persistent topics. A persistent topic is a logical endpoint for publishing and consuming messages. The topic name structure for persistent topics is:
+
+```shell
+
+persistent://tenant/namespace/topic
+
+```
+
+Non-persistent topics are used in applications that only consume real-time published messages and do not need a persistence guarantee. Skipping persistence reduces message-publish latency by removing the overhead of writing messages to storage. The topic name structure for non-persistent topics is:
+
+```shell
+
+non-persistent://tenant/namespace/topic
+
+```
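+
+As an illustration of this naming structure (not part of the Pulsar client, which provides its own `TopicName` class for this), a topic name can be split into its components like so:
+
+```java
+// Hypothetical parser for the {persistent|non-persistent}://tenant/namespace/topic
+// structure described above. Illustrative only.
+public class TopicNameParts {
+    public final String domain;     // "persistent" or "non-persistent"
+    public final String tenant;
+    public final String namespace;
+    public final String topic;
+
+    public TopicNameParts(String fullName) {
+        String[] domainAndRest = fullName.split("://", 2);
+        if (domainAndRest.length != 2) {
+            throw new IllegalArgumentException("Missing domain: " + fullName);
+        }
+        // The local topic name may itself contain '/', so split at most 3 times.
+        String[] parts = domainAndRest[1].split("/", 3);
+        if (parts.length != 3) {
+            throw new IllegalArgumentException("Expected tenant/namespace/topic: " + fullName);
+        }
+        this.domain = domainAndRest[0];
+        this.tenant = parts[0];
+        this.namespace = parts[1];
+        this.topic = parts[2];
+    }
+}
+```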
+
+## Manage topic resources
+Whether a topic is persistent or non-persistent, you can manage its resources with the `pulsar-admin` tool, the REST API, or the Java admin API.
+
+:::note
+
+In the REST API, `:schema` stands for persistent or non-persistent. `:tenant`, `:namespace`, and `:x` are variables; replace them with the actual tenant, namespace, and `x` names when using them.
+Take {@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@} as an example: to get the list of persistent topics through the REST API, use `https://pulsar.apache.org/admin/v2/persistent/my-tenant/my-namespace`; to get the list of non-persistent topics, use `https://pulsar.apache.org/admin/v2/non-persistent/my-tenant/my-namespace`.
+
+:::
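+
+As a concrete sketch of the substitution described in this note (illustrative only, not an official helper), expanding the variables in the `getList` path looks like:
+
+```java
+// Hypothetical expansion of the :schema/:tenant/:namespace variables in the
+// topic-list REST path. Illustrative only.
+public class AdminPath {
+    public static String topicListPath(boolean persistent, String tenant, String namespace) {
+        String schema = persistent ? "persistent" : "non-persistent";
+        return "/admin/v2/" + schema + "/" + tenant + "/" + namespace;
+    }
+}
+```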
+
+### List of topics
+
+You can get the list of topics under a given namespace in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics list \
+ my-tenant/my-namespace
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String namespace = "my-tenant/my-namespace";
+admin.topics().getList(namespace);
+
+```
+
+
+
+
+````
+
+### Grant permission
+
+You can grant permissions to a client role to perform specific actions on a given topic in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics grant-permission \
+ --actions produce,consume --role application1 \
+  persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/grantPermissionsOnTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String role = "test-role";
+Set<AuthAction> actions = Sets.newHashSet(AuthAction.produce, AuthAction.consume);
+admin.topics().grantPermission(topic, role, actions);
+
+```
+
+
+
+
+````
+
+### Get permission
+
+You can fetch the permissions on a topic in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics permissions \
+  persistent://test-tenant/ns1/tp1
+
+{
+ "application1": [
+ "consume",
+ "produce"
+ ]
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/permissions|operation/getPermissionsOnTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().getPermissions(topic);
+
+```
+
+
+
+
+````
+
+### Revoke permission
+
+You can revoke a permission granted on a client role in the following ways.
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics revoke-permission \
+ --role application1 \
+  persistent://test-tenant/ns1/tp1
+
+{
+ "application1": [
+ "consume",
+ "produce"
+ ]
+}
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/permissions/:role|operation/revokePermissionsOnTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String role = "test-role";
+admin.topics().revokePermissions(topic, role);
+
+```
+
+
+
+
+````
+
+### Delete topic
+
+You can delete a topic in the following ways. You cannot delete a topic if any active subscriptions or producers are connected to it.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics delete \
+  persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().delete(topic);
+
+```
+
+
+
+
+````
+
+### Unload topic
+
+You can unload a topic in the following ways.
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics unload \
+  persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/unload|operation/unloadTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().unload(topic);
+
+```
+
+
+
+
+````
+
+### Get stats
+
+You can check the following statistics of a given non-partitioned topic.
+
+ - **msgRateIn**: The sum of all local and replication publishers' publish rates (msg/s).
+
+ - **msgThroughputIn**: The sum of all local and replication publishers' publish rates (bytes/s).
+
+ - **msgRateOut**: The sum of all local and replication consumers' dispatch rates(msg/s).
+
+ - **msgThroughputOut**: The sum of all local and replication consumers' dispatch rates (bytes/s).
+
+ - **averageMsgSize**: The average size (in bytes) of messages published within the last interval.
+
+ - **storageSize**: The sum of the ledgers' storage size for this topic. The space used to store the messages for the topic.
+
+ - **earliestMsgPublishTimeInBacklogs**: The publish time of the earliest message in the backlog (ms).
+
+ - **bytesInCounter**: Total bytes published to the topic.
+
+ - **msgInCounter**: Total messages published to the topic.
+
+ - **bytesOutCounter**: Total bytes delivered to consumers.
+
+ - **msgOutCounter**: Total messages delivered to consumers.
+
+ - **msgChunkPublished**: Whether chunked messages have been published on this topic.
+
+ - **backlogSize**: Estimated total unconsumed or backlog size (in bytes).
+
+ - **offloadedStorageSize**: Space used to store the offloaded messages for the topic (in bytes).
+
+ - **waitingPublishers**: The number of publishers waiting in a queue in exclusive access mode.
+
+ - **deduplicationStatus**: The status of message deduplication for the topic.
+
+ - **topicEpoch**: The topic epoch or empty if not set.
+
+ - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges.
+
+ - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges.
+
+ - **publishers**: The list of all local publishers into the topic. The list ranges from zero to thousands.
+
+ - **accessMode**: The type of access to the topic that the producer requires.
+
+ - **msgRateIn**: The total rate of messages (msg/s) published by this publisher.
+
+ - **msgThroughputIn**: The total throughput (bytes/s) of the messages published by this publisher.
+
+ - **averageMsgSize**: The average message size in bytes from this publisher within the last interval.
+
+ - **chunkedMessageRate**: The total rate of chunked messages published by this publisher.
+
+ - **producerId**: The internal identifier for this producer on this topic.
+
+ - **producerName**: The internal identifier for this producer, generated by the client library.
+
+ - **address**: The IP address and source port for the connection of this producer.
+
+   - **connectedSince**: The timestamp when this producer was created or last reconnected.
+
+ - **clientVersion**: The client library version of this producer.
+
+ - **metadata**: Metadata (key/value strings) associated with this publisher.
+
+ - **subscriptions**: The list of all local subscriptions to the topic.
+
+ - **my-subscription**: The name of this subscription. It is defined by the client.
+
+ - **msgRateOut**: The total rate of messages (msg/s) delivered on this subscription.
+
+ - **msgThroughputOut**: The total throughput (bytes/s) delivered on this subscription.
+
+ - **msgBacklog**: The number of messages in the subscription backlog.
+
+ - **type**: The subscription type.
+
+ - **msgRateExpired**: The rate at which messages were discarded instead of dispatched from this subscription due to TTL.
+
+ - **lastExpireTimestamp**: The timestamp of the last message expire execution.
+
+ - **lastConsumedFlowTimestamp**: The timestamp of the last flow command received.
+
+   - **lastConsumedTimestamp**: The latest consume timestamp across all the consumers on this subscription.
+
+   - **lastAckedTimestamp**: The latest acknowledgment timestamp across all the consumers on this subscription.
+
+ - **bytesOutCounter**: Total bytes delivered to consumer.
+
+ - **msgOutCounter**: Total messages delivered to consumer.
+
+ - **msgRateRedeliver**: Total rate of messages redelivered on this subscription (msg/s).
+
+ - **chunkedMessageRate**: Chunked message dispatch rate.
+
+ - **backlogSize**: Size of backlog for this subscription (in bytes).
+
+ - **earliestMsgPublishTimeInBacklog**: The publish time of the earliest message in the backlog for the subscription (ms).
+
+   - **msgBacklogNoDelayed**: The number of messages in the subscription backlog, excluding delayed messages.
+
+ - **blockedSubscriptionOnUnackedMsgs**: Flag to verify if a subscription is blocked due to reaching threshold of unacked messages.
+
+ - **msgDelayed**: Number of delayed messages currently being tracked.
+
+ - **unackedMessages**: Number of unacknowledged messages for the subscription, where an unacknowledged message is one that has been sent to a consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement.
+
+   - **activeConsumerName**: The name of the active consumer for single-active-consumer subscriptions (for example, failover or exclusive).
+
+ - **totalMsgExpired**: Total messages expired on this subscription.
+
+ - **lastMarkDeleteAdvancedTimestamp**: Last MarkDelete position advanced timestamp.
+
+ - **durable**: Whether the subscription is durable or ephemeral (for example, from a reader).
+
+ - **replicated**: Mark that the subscription state is kept in sync across different regions.
+
+ - **allowOutOfOrderDelivery**: Whether out of order delivery is allowed on the Key_Shared subscription.
+
+ - **keySharedMode**: Whether the Key_Shared subscription mode is AUTO_SPLIT or STICKY.
+
+   - **consumersAfterMarkDeletePosition**: For a Key_Shared subscription, the recently joined consumers (recentJoinedConsumers).
+
+ - **nonContiguousDeletedMessagesRanges**: The number of non-contiguous deleted messages ranges.
+
+ - **nonContiguousDeletedMessagesRangesSerializedSize**: The serialized size of non-contiguous deleted messages ranges.
+
+ - **consumers**: The list of connected consumers for this subscription.
+
+ - **msgRateOut**: The total rate of messages (msg/s) delivered to the consumer.
+
+ - **msgThroughputOut**: The total throughput (bytes/s) delivered to the consumer.
+
+ - **consumerName**: The internal identifier for this consumer, generated by the client library.
+
+ - **availablePermits**: The number of messages that the consumer has space for in the client library's listen queue. `0` means the client library's queue is full and `receive()` isn't being called. A non-zero value means this consumer is ready for dispatched messages.
+
+ - **unackedMessages**: The number of unacknowledged messages for the consumer, where an unacknowledged message is one that has been sent to the consumer but not yet acknowledged. This field is only meaningful when using a subscription that tracks individual message acknowledgement.
+
+ - **blockedConsumerOnUnackedMsgs**: The flag used to verify if the consumer is blocked due to reaching threshold of the unacknowledged messages.
+
+     - **lastConsumedTimestamp**: The timestamp when the consumer last read a message.
+
+     - **lastAckedTimestamp**: The timestamp when the consumer last acknowledged a message.
+
+ - **address**: The IP address and source port for the connection of this consumer.
+
+     - **connectedSince**: The timestamp when this consumer was created or last reconnected.
+
+ - **clientVersion**: The client library version of this consumer.
+
+ - **bytesOutCounter**: Total bytes delivered to consumer.
+
+ - **msgOutCounter**: Total messages delivered to consumer.
+
+ - **msgRateRedeliver**: Total rate of messages redelivered by this consumer (msg/s).
+
+ - **chunkedMessageRate**: The total rate of chunked messages delivered to this consumer.
+
+     - **avgMessagesPerEntry**: The average number of messages per entry consumed by this consumer.
+
+     - **readPositionWhenJoining**: The read position of the cursor when the consumer joined.
+
+     - **keyHashRanges**: Hash ranges assigned to this consumer if the subscription is in Key_Shared mode.
+
+ - **metadata**: Metadata (key/value strings) associated with this consumer.
+
+ - **replication**: This section gives the stats for cross-colo replication of this topic.
+
+ - **msgRateIn**: The total rate (msg/s) of messages received from the remote cluster.
+
+ - **msgThroughputIn**: The total throughput (bytes/s) received from the remote cluster.
+
+ - **msgRateOut**: The total rate of messages (msg/s) delivered to the replication-subscriber.
+
+ - **msgThroughputOut**: The total throughput (bytes/s) delivered to the replication-subscriber.
+
+ - **msgRateExpired**: The total rate of messages (msg/s) expired.
+
+ - **replicationBacklog**: The number of messages pending to be replicated to remote cluster.
+
+ - **connected**: Whether the outbound replicator is connected.
+
+ - **replicationDelayInSeconds**: How long the oldest message has been waiting to be sent through the connection, if connected is `true`.
+
+ - **inboundConnection**: The IP and port of the broker in the remote cluster's publisher connection to this broker.
+
+ - **inboundConnectedSince**: The TCP connection being used to publish messages to the remote cluster. If there are no local publishers connected, this connection is automatically closed after a minute.
+
+ - **outboundConnection**: The address of the outbound replication connection.
+
+ - **outboundConnectedSince**: The timestamp of establishing outbound connection.
+
+The following is an example of a topic status.
+
+```json
+
+{
+ "msgRateIn" : 0.0,
+ "msgThroughputIn" : 0.0,
+ "msgRateOut" : 0.0,
+ "msgThroughputOut" : 0.0,
+ "bytesInCounter" : 504,
+ "msgInCounter" : 9,
+ "bytesOutCounter" : 2296,
+ "msgOutCounter" : 41,
+ "averageMsgSize" : 0.0,
+ "msgChunkPublished" : false,
+ "storageSize" : 504,
+ "backlogSize" : 0,
+ "earliestMsgPublishTimeInBacklogs": 0,
+ "offloadedStorageSize" : 0,
+ "publishers" : [ {
+ "accessMode" : "Shared",
+ "msgRateIn" : 0.0,
+ "msgThroughputIn" : 0.0,
+ "averageMsgSize" : 0.0,
+ "chunkedMessageRate" : 0.0,
+ "producerId" : 0,
+ "metadata" : { },
+ "address" : "/127.0.0.1:65402",
+ "connectedSince" : "2021-06-09T17:22:55.913+08:00",
+ "clientVersion" : "2.9.0-SNAPSHOT",
+ "producerName" : "standalone-1-0"
+ } ],
+ "waitingPublishers" : 0,
+ "subscriptions" : {
+ "sub-demo" : {
+ "msgRateOut" : 0.0,
+ "msgThroughputOut" : 0.0,
+ "bytesOutCounter" : 2296,
+ "msgOutCounter" : 41,
+ "msgRateRedeliver" : 0.0,
+ "chunkedMessageRate" : 0,
+ "msgBacklog" : 0,
+ "backlogSize" : 0,
+ "earliestMsgPublishTimeInBacklog": 0,
+ "msgBacklogNoDelayed" : 0,
+ "blockedSubscriptionOnUnackedMsgs" : false,
+ "msgDelayed" : 0,
+ "unackedMessages" : 0,
+ "type" : "Exclusive",
+ "activeConsumerName" : "20b81",
+ "msgRateExpired" : 0.0,
+ "totalMsgExpired" : 0,
+ "lastExpireTimestamp" : 0,
+ "lastConsumedFlowTimestamp" : 1623230565356,
+ "lastConsumedTimestamp" : 1623230583946,
+ "lastAckedTimestamp" : 1623230584033,
+ "lastMarkDeleteAdvancedTimestamp" : 1623230584033,
+ "consumers" : [ {
+ "msgRateOut" : 0.0,
+ "msgThroughputOut" : 0.0,
+ "bytesOutCounter" : 2296,
+ "msgOutCounter" : 41,
+ "msgRateRedeliver" : 0.0,
+ "chunkedMessageRate" : 0.0,
+ "consumerName" : "20b81",
+ "availablePermits" : 959,
+ "unackedMessages" : 0,
+ "avgMessagesPerEntry" : 314,
+ "blockedConsumerOnUnackedMsgs" : false,
+ "lastAckedTimestamp" : 1623230584033,
+ "lastConsumedTimestamp" : 1623230583946,
+ "metadata" : { },
+ "address" : "/127.0.0.1:65172",
+ "connectedSince" : "2021-06-09T17:22:45.353+08:00",
+ "clientVersion" : "2.9.0-SNAPSHOT"
+ } ],
+ "allowOutOfOrderDelivery": false,
+ "consumersAfterMarkDeletePosition" : { },
+ "nonContiguousDeletedMessagesRanges" : 0,
+ "nonContiguousDeletedMessagesRangesSerializedSize" : 0,
+ "durable" : true,
+ "replicated" : false
+ }
+ },
+ "replication" : { },
+ "deduplicationStatus" : "Disabled",
+ "nonContiguousDeletedMessagesRanges" : 0,
+ "nonContiguousDeletedMessagesRangesSerializedSize" : 0
+}
+
+```
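+
+As an illustration of how a few of these fields combine (a hypothetical health check, not a Pulsar API): a consumer can receive dispatched messages only when its `availablePermits` is non-zero and it is not blocked on unacknowledged messages.
+
+```java
+// Hypothetical consumer-health check built on the stats fields described
+// above. Illustrative only.
+public class ConsumerHealth {
+    public static boolean canReceive(int availablePermits, boolean blockedConsumerOnUnackedMsgs) {
+        // availablePermits == 0 means the client library's listen queue is full
+        // and receive() is not being called.
+        return availablePermits > 0 && !blockedConsumerOnUnackedMsgs;
+    }
+}
+```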
+
+To get the stats of a topic, you can use the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics stats \
+  persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().getStats(topic);
+
+```
+
+
+
+
+````
+
+### Get internal stats
+
+You can get the detailed statistics of a topic.
+
+ - **entriesAddedCounter**: Messages published since this broker loaded this topic.
+
+ - **numberOfEntries**: The total number of messages being tracked.
+
+ - **totalSize**: The total storage size in bytes of all messages.
+
+ - **currentLedgerEntries**: The count of messages written to the ledger that is currently open for writing.
+
+ - **currentLedgerSize**: The size in bytes of messages written to the ledger that is currently open for writing.
+
+ - **lastLedgerCreatedTimestamp**: The time when the last ledger is created.
+
+ - **lastLedgerCreationFailureTimestamp:** The time when the last ledger failed.
+
+ - **waitingCursorsCount**: The number of cursors that are "caught up" and waiting for a new message to be published.
+
+ - **pendingAddEntriesCount**: The number of messages whose (asynchronous) write requests are pending completion.
+
+ - **lastConfirmedEntry**: The ledgerid:entryid of the last message written successfully. If the entryid is `-1`, the ledger has been opened but no entries have been written yet.
+
+ - **state**: The state of this ledger for writing. The state `LedgerOpened` means that a ledger is open for saving published messages.
+
+ - **ledgers**: The ordered list of all ledgers for this topic holding messages.
+
+ - **ledgerId**: The ID of this ledger.
+
+ - **entries**: The total number of entries that belong to this ledger.
+
+ - **size**: The size of messages written to this ledger (in bytes).
+
+ - **offloaded**: Whether this ledger is offloaded.
+
+ - **metadata**: The ledger metadata.
+
+ - **schemaLedgers**: The ordered list of all ledgers for this topic schema.
+
+ - **ledgerId**: The ID of this ledger.
+
+ - **entries**: The total number of entries that belong to this ledger.
+
+ - **size**: The size of messages written to this ledger (in bytes).
+
+ - **offloaded**: Whether this ledger is offloaded.
+
+ - **metadata**: The ledger metadata.
+
+ - **compactedLedger**: The ledgers holding un-acked messages after topic compaction.
+
+ - **ledgerId**: The ID of this ledger.
+
+ - **entries**: The total number of entries that belong to this ledger.
+
+ - **size**: The size of messages written to this ledger (in bytes).
+
+ - **offloaded**: Whether this ledger is offloaded. The value is `false` for the compacted topic ledger.
+
+ - **cursors**: The list of all cursors on this topic. Each subscription in the topic stats has a cursor.
+
+ - **markDeletePosition**: All messages before the markDeletePosition are acknowledged by the subscriber.
+
+ - **readPosition**: The latest position of subscriber for reading message.
+
+ - **waitingReadOp**: This is true when the subscription has read the latest message published to the topic and is waiting for new messages to be published.
+
+ - **pendingReadOps**: The counter for how many outstanding read requests to the BookKeepers in progress.
+
+ - **messagesConsumedCounter**: The number of messages this cursor has acked since this broker loaded this topic.
+
+ - **cursorLedger**: The ledger being used to persistently store the current markDeletePosition.
+
+ - **cursorLedgerLastEntry**: The last entryid used to persistently store the current markDeletePosition.
+
+   - **individuallyDeletedMessages**: If acknowledgments are done out of order, this shows the ranges of messages acknowledged between the markDeletePosition and the read position.
+
+ - **lastLedgerSwitchTimestamp**: The last time the cursor ledger is rolled over.
+
+ - **state**: The state of the cursor ledger: `Open` means you have a cursor ledger for saving updates of the markDeletePosition.
+
+The following is an example of the detailed statistics of a topic.
+
+```json
+
+{
+ "entriesAddedCounter":0,
+ "numberOfEntries":0,
+ "totalSize":0,
+ "currentLedgerEntries":0,
+ "currentLedgerSize":0,
+ "lastLedgerCreatedTimestamp":"2021-01-22T21:12:14.868+08:00",
+ "lastLedgerCreationFailureTimestamp":null,
+ "waitingCursorsCount":0,
+ "pendingAddEntriesCount":0,
+ "lastConfirmedEntry":"3:-1",
+ "state":"LedgerOpened",
+ "ledgers":[
+ {
+ "ledgerId":3,
+ "entries":0,
+ "size":0,
+ "offloaded":false,
+ "metadata":null
+ }
+ ],
+ "cursors":{
+ "test":{
+ "markDeletePosition":"3:-1",
+ "readPosition":"3:-1",
+ "waitingReadOp":false,
+ "pendingReadOps":0,
+ "messagesConsumedCounter":0,
+ "cursorLedger":4,
+ "cursorLedgerLastEntry":1,
+ "individuallyDeletedMessages":"[]",
+ "lastLedgerSwitchTimestamp":"2021-01-22T21:12:14.966+08:00",
+ "state":"Open",
+ "numberOfEntriesSinceFirstNotAckedMessage":0,
+ "totalNonContiguousDeletedMessagesRange":0,
+ "properties":{
+
+ }
+ }
+ },
+ "schemaLedgers":[
+ {
+ "ledgerId":1,
+ "entries":11,
+ "size":10,
+ "offloaded":false,
+ "metadata":null
+ }
+ ],
+ "compactedLedger":{
+ "ledgerId":-1,
+ "entries":-1,
+ "size":-1,
+ "offloaded":false,
+ "metadata":null
+ }
+}
+
+```
+
+You can get the internal status of a topic in the following ways.
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics stats-internal \
+ persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().getInternalStats(topic);
+
+```
+
+
+
+
+````
+
+### Peek messages
+
+You can peek at a number of messages for a specific subscription of a given topic in the following ways.
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics peek-messages \
+ --count 10 --subscription my-subscription \
+ persistent://test-tenant/ns1/tp1
+
+Message ID: 315674752:0
+Properties: { "X-Pulsar-publish-time" : "2015-07-13 17:40:28.451" }
+msg-payload
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/position/:messagePosition|operation/peekNthMessage?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+int numMessages = 1;
+admin.topics().peekMessages(topic, subName, numMessages);
+
+```
+
+
+
+
+````
+
+### Get message by ID
+
+You can fetch the message with the given ledger ID and entry ID in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ ./bin/pulsar-admin topics get-message-by-id \
+ persistent://public/default/my-topic \
+ -l 10 -e 0
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/ledger/:ledgerId/entry/:entryId|operation/getMessageById?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+long ledgerId = 10;
+long entryId = 10;
+admin.topics().getMessageById(topic, ledgerId, entryId);
+
+```
+
+
+
+
+````
+
+### Examine messages
+
+You can examine a specific message on a topic by position relative to the earliest or the latest message.
+
+````mdx-code-block
+
+
+
+```shell
+
+./bin/pulsar-admin topics examine-messages \
+ persistent://public/default/my-topic \
+ -i latest -m 1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/examinemessage?initialPosition=:initialPosition&messagePosition=:messagePosition|operation/examineMessage?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().examineMessage(topic, "latest", 1);
+
+```
+
+
+
+
+````
+
+### Get message ID
+
+You can get the ID of the message that was published at or just after the given datetime.
+
+````mdx-code-block
+
+
+
+```shell
+
+./bin/pulsar-admin topics get-message-id \
+ persistent://public/default/my-topic \
+ -d 2021-06-28T19:01:17Z
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/messageid/:timestamp|operation/getMessageIdByTimestamp?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+long timestamp = System.currentTimeMillis();
+admin.topics().getMessageIdByTimestamp(topic, timestamp);
+
+```
+
+
+
+
+````
+
+
+### Skip messages
+
+You can skip a number of messages for a specific subscription of a given topic in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics skip \
+ --count 10 --subscription my-subscription \
+ persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip/:numMessages|operation/skipMessages?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+int numMessages = 1;
+admin.topics().skipMessages(topic, subName, numMessages);
+
+```
+
+
+
+
+````
+
+### Skip all messages
+
+You can skip all the old messages for a specific subscription of a given topic.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics skip-all \
+ --subscription my-subscription \
+ persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/skip_all|operation/skipAllMessages?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+admin.topics().skipAllMessages(topic, subName);
+
+```
+
+
+
+
+````
+
+### Reset cursor
+
+You can reset a subscription cursor back to the position recorded X minutes earlier. Pulsar calculates the time and cursor position from X minutes before and resets the cursor to that position. You can reset the cursor in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics reset-cursor \
+ --subscription my-subscription --time 10 \
+ persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/subscription/:subName/resetcursor/:timestamp|operation/resetCursor?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subName = "my-subscription";
+// For example, reset to the position recorded 10 minutes ago
+long timestamp = System.currentTimeMillis() - 10 * 60 * 1000;
+admin.topics().resetCursor(topic, subName, timestamp);
+
+```
+
+
+
+
+````
+
+### Look up topic's owner broker
+
+You can locate the owner broker of the given topic in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics lookup \
+ persistent://test-tenant/ns1/tp1
+
+ "pulsar://broker1.org.com:4480"
+
+```
+
+
+
+
+{@inject: endpoint|GET|/lookup/v2/topic/:topic-domain/:tenant/:namespace/:topic|operation/lookupTopicAsync?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.lookups().lookupTopic(topic);
+
+```
+
+
+
+
+````
+
+### Look up partitioned topic's owner broker
+
+You can locate the owner broker of the given partitioned topic in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics partitioned-lookup \
+ persistent://test-tenant/ns1/my-topic
+
+ "persistent://test-tenant/ns1/my-topic-partition-0 pulsar://localhost:6650"
+ "persistent://test-tenant/ns1/my-topic-partition-1 pulsar://localhost:6650"
+ "persistent://test-tenant/ns1/my-topic-partition-2 pulsar://localhost:6650"
+ "persistent://test-tenant/ns1/my-topic-partition-3 pulsar://localhost:6650"
+
+```
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.lookups().lookupPartitionedTopic(topic);
+
+```
+
+Look up the partitioned topic, sorting results by broker URL:
+
+```shell
+
+$ pulsar-admin topics partitioned-lookup \
+ persistent://test-tenant/ns1/my-topic --sort-by-broker
+
+ "pulsar://localhost:6650 [persistent://test-tenant/ns1/my-topic-partition-0, persistent://test-tenant/ns1/my-topic-partition-1, persistent://test-tenant/ns1/my-topic-partition-2, persistent://test-tenant/ns1/my-topic-partition-3]"
+
+```
+
+
+
+
+````
+
+### Get bundle
+
+You can get the range of the bundle that the given topic belongs to in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics bundle-range \
+ persistent://test-tenant/ns1/tp1
+
+ "0x00000000_0xffffffff"
+
+```
+
+
+
+
+{@inject: endpoint|GET|/lookup/v2/topic/:topic_domain/:tenant/:namespace/:topic/bundle|operation/getNamespaceBundle?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.lookups().getBundleRange(topic);
+
+```
+
+
+
+
+````
+
+### Get subscriptions
+
+You can check all subscription names for a given topic in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics subscriptions \
+ persistent://test-tenant/ns1/tp1
+
+ my-subscription
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().getSubscriptions(topic);
+
+```
+
+
+
+
+````
+
+### Last Message Id
+
+You can get the last committed message ID for a persistent topic. This feature is available since the 2.3.0 release.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics last-message-id topic-name
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/lastMessageId|operation/getLastMessageId?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().getLastMessageId(topic);
+
+```
+
+
+
+
+````
+
+### Get backlog size
+
+You can get the backlog size (in bytes) of a single partition of a partitioned topic or of a non-partitioned topic, starting from a given message ID.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics get-backlog-size \
+ -m 1:1 \
+ persistent://test-tenant/ns1/tp1-partition-0
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/backlogSize|operation/getBacklogSizeByMessageId?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+MessageId messageId = MessageId.earliest;
+admin.topics().getBacklogSizeByMessageId(topic, messageId);
+
+```
+
+
+
+
+````
+
+
+### Configure deduplication snapshot interval
+
+#### Get deduplication snapshot interval
+
+To get the topic-level deduplication snapshot interval, use one of the following methods.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics get-deduplication-snapshot-interval options
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/getDeduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().getDeduplicationSnapshotInterval(topic);
+
+```
+
+
+
+
+````
+
+#### Set deduplication snapshot interval
+
+To set the topic-level deduplication snapshot interval, use one of the following methods.
+
+> **Prerequisite** `brokerDeduplicationEnabled` must be set to `true`.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics set-deduplication-snapshot-interval options
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/setDeduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().setDeduplicationSnapshotInterval(topic, 1000);
+
+```
+
+
+
+
+````
+
+#### Remove deduplication snapshot interval
+
+To remove the topic-level deduplication snapshot interval, use one of the following methods.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics remove-deduplication-snapshot-interval options
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/deduplicationSnapshotInterval|operation/deleteDeduplicationSnapshotInterval?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().removeDeduplicationSnapshotInterval(topic);
+
+```
+
+
+
+
+````
+
+
+### Configure inactive topic policies
+
+#### Get inactive topic policies
+
+To get the topic-level inactive topic policies, use one of the following methods.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics get-inactive-topic-policies options
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/getInactiveTopicPolicies?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().getInactiveTopicPolicies(topic);
+
+```
+
+
+
+
+````
+
+#### Set inactive topic policies
+
+To set the topic-level inactive topic policies, use one of the following methods.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics set-inactive-topic-policies options
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/setInactiveTopicPolicies?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().setInactiveTopicPolicies(topic, inactiveTopicPolicies);
+
+```
+
+
+
+
+````
+
+#### Remove inactive topic policies
+
+To remove the topic-level inactive topic policies, use one of the following methods.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics remove-inactive-topic-policies options
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/inactiveTopicPolicies|operation/removeInactiveTopicPolicies?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().removeInactiveTopicPolicies(topic);
+
+```
+
+
+
+
+````
+
+
+### Configure offload policies
+
+#### Get offload policies
+
+To get the topic-level offload policies, use one of the following methods.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics get-offload-policies options
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/getOffloadPolicies?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().getOffloadPolicies(topic);
+
+```
+
+
+
+
+````
+
+#### Set offload policies
+
+To set the topic-level offload policies, use one of the following methods.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics set-offload-policies options
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/setOffloadPolicies?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().setOffloadPolicies(topic, offloadPolicies);
+
+```
+
+
+
+
+````
+
+#### Remove offload policies
+
+To remove the topic-level offload policies, use one of the following methods.
+
+````mdx-code-block
+
+
+
+```shell
+
+pulsar-admin topics remove-offload-policies options
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/topics/:tenant/:namespace/:topic/offloadPolicies|operation/removeOffloadPolicies?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().removeOffloadPolicies(topic);
+
+```
+
+
+
+
+````
+
+
+## Manage non-partitioned topics
+You can use Pulsar [admin API](admin-api-overview.md) to create, delete, and check the status of non-partitioned topics.
+
+### Create
+Non-partitioned topics must be explicitly created. When creating a new non-partitioned topic, you need to provide a name for the topic.
+
+By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value.
+
+For more information about the two parameters, see [here](reference-configuration.md#broker).
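+
+For example, a sketch of the relevant `broker.conf` settings (the parameter names are the two described above; the values are illustrative):
+
+```conf
+# Keep inactive topics instead of deleting them automatically
+brokerDeleteInactiveTopicsEnabled=false
+# Or, if automatic deletion stays enabled, check for inactive topics every 10 minutes
+brokerDeleteInactiveTopicsFrequencySeconds=600
+```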
+
+You can create non-partitioned topics in the following ways.
+````mdx-code-block
+
+
+
+When you create non-partitioned topics with the [`create`](reference-pulsar-admin.md#create-3) command, you need to specify the topic name as an argument.
+
+```shell
+
+$ bin/pulsar-admin topics create \
+ persistent://my-tenant/my-namespace/my-topic
+
+```
+
+:::note
+
+When you create a non-partitioned topic whose name carries the suffix '-partition-' followed by a numeric value, such as 'xyz-topic-partition-x', and a partitioned topic with the same suffix pattern, such as 'xyz-topic-partition-y', already exists, then the numeric value (x) of the non-partitioned topic must be larger than the number of partitions (y) of the partitioned topic. Otherwise, you cannot create such a non-partitioned topic.
+
+:::
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createNonPartitionedTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().createNonPartitionedTopic(topicName);
+
+```
+
+
+
+
+````
+
+### Delete
+You can delete non-partitioned topics in the following ways.
+````mdx-code-block
+
+
+
+```shell
+
+$ bin/pulsar-admin topics delete \
+ persistent://my-tenant/my-namespace/my-topic
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic|operation/deleteTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().delete(topic);
+
+```
+
+
+
+
+````
+
+### List
+
+You can get the list of topics under a given namespace in the following ways.
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics list tenant/namespace
+persistent://tenant/namespace/topic1
+persistent://tenant/namespace/topic2
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getList?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().getList(namespace);
+
+```
+
+
+
+
+````
+
+### Stats
+
+You can check the current statistics of a given topic. The following is an example. For a description of each stat, refer to [get stats](#get-stats).
+
+```json
+
+{
+ "msgRateIn": 4641.528542257553,
+ "msgThroughputIn": 44663039.74947473,
+ "msgRateOut": 0,
+ "msgThroughputOut": 0,
+ "averageMsgSize": 1232439.816728665,
+ "storageSize": 135532389160,
+ "publishers": [
+ {
+ "msgRateIn": 57.855383881403576,
+ "msgThroughputIn": 558994.7078932219,
+ "averageMsgSize": 613135,
+ "producerId": 0,
+ "producerName": null,
+ "address": null,
+ "connectedSince": null
+ }
+ ],
+ "subscriptions": {
+ "my-topic_subscription": {
+ "msgRateOut": 0,
+ "msgThroughputOut": 0,
+ "msgBacklog": 116632,
+ "type": null,
+ "msgRateExpired": 36.98245516804671,
+ "consumers": []
+ }
+ },
+ "replication": {}
+}
+
+```
+
+You can check the current statistics of a given topic and its connected producers and consumers in the following ways.
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics stats \
+ persistent://test-tenant/namespace/topic \
+ --get-precise-backlog
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().getStats(topic, false /* is precise backlog */);
+
+```
+
+
+
+
+````
+
+## Manage partitioned topics
+You can use Pulsar [admin API](admin-api-overview.md) to create, update, delete, and check the status of partitioned topics.
+
+### Create
+
+Partitioned topics must be explicitly created. When creating a new partitioned topic, you need to provide a name and the number of partitions for the topic.
+
+By default, 60 seconds after creation, topics are considered inactive and deleted automatically to avoid generating trash data. To disable this feature, set `brokerDeleteInactiveTopicsEnabled` to `false`. To change the frequency of checking inactive topics, set `brokerDeleteInactiveTopicsFrequencySeconds` to a specific value.
+
+For more information about the two parameters, see [here](reference-configuration.md#broker).
+
+You can create partitioned topics in the following ways.
+````mdx-code-block
+
+
+
+When you create partitioned topics with the [`create-partitioned-topic`](reference-pulsar-admin.md#create-partitioned-topic)
+command, you need to specify the topic name as an argument and the number of partitions using the `-p` or `--partitions` flag.
+
+```shell
+
+$ bin/pulsar-admin topics create-partitioned-topic \
+ persistent://my-tenant/my-namespace/my-topic \
+ --partitions 4
+
+```
+
+:::note
+
+If a non-partitioned topic with the suffix '-partition-' followed by a numeric value, such as 'xyz-topic-partition-10', already exists, you cannot create a partitioned topic named 'xyz-topic', because the partitions of the partitioned topic would override the existing non-partitioned topic. To create such a partitioned topic, you have to delete the non-partitioned topic first.
+
+:::
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/createPartitionedTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+int numPartitions = 4;
+admin.topics().createPartitionedTopic(topicName, numPartitions);
+
+```
+
+
+
+
+````
+
+### Create missed partitions
+
+When topic auto-creation is disabled, and you have a partitioned topic without any partitions, you can use the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command to create partitions for the topic.
+
+````mdx-code-block
+
+
+
+You can create missed partitions with the [`create-missed-partitions`](reference-pulsar-admin.md#create-missed-partitions) command and specify the topic name as an argument.
+
+```shell
+
+$ bin/pulsar-admin topics create-missed-partitions \
+ persistent://my-tenant/my-namespace/my-topic
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic|operation/createMissedPartitions?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().createMissedPartitions(topicName);
+
+```
+
+
+
+
+````
+
+### Get metadata
+
+Partitioned topics have metadata associated with them, which you can view as a JSON object. The following metadata field is available.
+
+Field | Description
+:-----|:-------
+`partitions` | The number of partitions into which the topic is divided.
+
+````mdx-code-block
+
+
+
+You can check the number of partitions in a partitioned topic with the [`get-partitioned-topic-metadata`](reference-pulsar-admin.md#get-partitioned-topic-metadata) subcommand.
+
+```shell
+
+$ pulsar-admin topics get-partitioned-topic-metadata \
+ persistent://my-tenant/my-namespace/my-topic
+{
+ "partitions": 4
+}
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/getPartitionedMetadata?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topicName = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().getPartitionedTopicMetadata(topicName);
+
+```
+
+
+
+
+````
+
+### Update
+
+You can update the number of partitions of an existing partitioned topic *if* the topic is non-global. However, you can only increase the number of partitions. Decreasing the number of partitions would require deleting the topic, which is not supported in Pulsar.
+
+Producers and consumers can find the newly created partitions automatically.
+
+````mdx-code-block
+
+
+
+You can update partitioned topics with the [`update-partitioned-topic`](reference-pulsar-admin.md#update-partitioned-topic) command.
+
+```shell
+
+$ pulsar-admin topics update-partitioned-topic \
+ persistent://my-tenant/my-namespace/my-topic \
+ --partitions 8
+
+```
+
+
+
+
+{@inject: endpoint|POST|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/updatePartitionedTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().updatePartitionedTopic(topic, numPartitions);
+
+```
+
+
+
+
+````
+
+### Delete
+You can delete partitioned topics with the [`delete-partitioned-topic`](reference-pulsar-admin.md#delete-partitioned-topic) command, REST API and Java.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ bin/pulsar-admin topics delete-partitioned-topic \
+ persistent://my-tenant/my-namespace/my-topic
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/:schema/:tenant/:namespace/:topic/partitions|operation/deletePartitionedTopic?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().deletePartitionedTopic(topic);
+
+```
+
+
+
+
+````
+
+### List
+You can get the list of partitioned topics under a given namespace in the following ways.
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics list-partitioned-topics tenant/namespace
+persistent://tenant/namespace/topic1
+persistent://tenant/namespace/topic2
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace|operation/getPartitionedTopicList?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().getPartitionedTopicList(namespace);
+
+```
+
+
+
+
+````
+
+### Stats
+
+You can check the current statistics of a given partitioned topic. The following is an example. For a description of each stat, refer to [get stats](#get-stats).
+
+Note that in the subscription JSON object, `chuckedMessageRate` is deprecated. Use `chunkedMessageRate` instead. Both fields are sent in the JSON for now.
+
+```json
+
+{
+ "msgRateIn" : 999.992947159793,
+ "msgThroughputIn" : 1070918.4635439808,
+ "msgRateOut" : 0.0,
+ "msgThroughputOut" : 0.0,
+ "bytesInCounter" : 270318763,
+ "msgInCounter" : 252489,
+ "bytesOutCounter" : 0,
+ "msgOutCounter" : 0,
+ "averageMsgSize" : 1070.926056966454,
+ "msgChunkPublished" : false,
+ "storageSize" : 270316646,
+ "backlogSize" : 200921133,
+ "publishers" : [ {
+ "msgRateIn" : 999.992947159793,
+ "msgThroughputIn" : 1070918.4635439808,
+ "averageMsgSize" : 1070.3333333333333,
+ "chunkedMessageRate" : 0.0,
+ "producerId" : 0
+ } ],
+ "subscriptions" : {
+ "test" : {
+ "msgRateOut" : 0.0,
+ "msgThroughputOut" : 0.0,
+ "bytesOutCounter" : 0,
+ "msgOutCounter" : 0,
+ "msgRateRedeliver" : 0.0,
+ "chuckedMessageRate" : 0,
+ "chunkedMessageRate" : 0,
+ "msgBacklog" : 144318,
+ "msgBacklogNoDelayed" : 144318,
+ "blockedSubscriptionOnUnackedMsgs" : false,
+ "msgDelayed" : 0,
+ "unackedMessages" : 0,
+ "msgRateExpired" : 0.0,
+ "lastExpireTimestamp" : 0,
+ "lastConsumedFlowTimestamp" : 0,
+ "lastConsumedTimestamp" : 0,
+ "lastAckedTimestamp" : 0,
+ "consumers" : [ ],
+ "isDurable" : true,
+ "isReplicated" : false
+ }
+ },
+ "replication" : { },
+ "metadata" : {
+ "partitions" : 3
+ },
+ "partitions" : { }
+}
+
+```
+
+You can check the current statistics of a given partitioned topic and its connected producers and consumers in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics partitioned-stats \
+ persistent://test-tenant/namespace/topic \
+ --per-partition
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/partitioned-stats|operation/getPartitionedStats?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().getPartitionedStats(topic, true /* per partition */, false /* is precise backlog */);
+
+```
+
+
+
+
+````
+
+### Internal stats
+
+You can check the detailed statistics of a topic. The following is an example. For a description of each stat, refer to [get internal stats](#get-internal-stats).
+
+```json
+
+{
+ "entriesAddedCounter": 20449518,
+ "numberOfEntries": 3233,
+ "totalSize": 331482,
+ "currentLedgerEntries": 3233,
+ "currentLedgerSize": 331482,
+ "lastLedgerCreatedTimestamp": "2016-06-29 03:00:23.825",
+ "lastLedgerCreationFailureTimestamp": null,
+ "waitingCursorsCount": 1,
+ "pendingAddEntriesCount": 0,
+ "lastConfirmedEntry": "324711539:3232",
+ "state": "LedgerOpened",
+ "ledgers": [
+ {
+ "ledgerId": 324711539,
+ "entries": 0,
+ "size": 0
+ }
+ ],
+ "cursors": {
+ "my-subscription": {
+ "markDeletePosition": "324711539:3133",
+ "readPosition": "324711539:3233",
+ "waitingReadOp": true,
+ "pendingReadOps": 0,
+ "messagesConsumedCounter": 20449501,
+ "cursorLedger": 324702104,
+ "cursorLedgerLastEntry": 21,
+ "individuallyDeletedMessages": "[(324711539:3134‥324711539:3136], (324711539:3137‥324711539:3140], ]",
+ "lastLedgerSwitchTimestamp": "2016-06-29 01:30:19.313",
+ "state": "Open"
+ }
+ }
+}
+
+```
+
+You can get the internal stats for the partitioned topic in the following ways.
+
+````mdx-code-block
+
+
+
+```shell
+
+$ pulsar-admin topics stats-internal \
+ persistent://test-tenant/namespace/topic
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/internalStats|operation/getInternalStats?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+admin.topics().getInternalStats(topic);
+
+```
+
+
+
+
+````
+
+
+## Publish to partitioned topics
+
+By default, Pulsar topics are served by a single broker, which limits the maximum throughput of a topic. *Partitioned topics* can span multiple brokers and thus allow for higher throughput.
+
+You can publish to partitioned topics using Pulsar client libraries. When publishing to partitioned topics, you must specify a routing mode. If you do not specify a routing mode when you create a new producer, the round-robin routing mode is used.
+
+### Routing mode
+
+You can specify the routing mode in the ProducerConfiguration object that you use to configure your producer. The routing mode determines which partition (internal topic) each message is published to.
+
+The following {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} options are available.
+
+Mode | Description
+:--------|:------------
+`RoundRobinPartition` | If no key is provided, the producer publishes messages across all partitions in round-robin fashion to achieve maximum throughput. Note that round-robin is not applied per individual message; it is applied at the same boundary as the batching delay to ensure that batching is effective. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition. This is the default mode.
+`SinglePartition` | If no key is provided, the producer picks a single partition randomly and publishes all messages to that partition. If a key is specified on the message, the partitioned producer hashes the key and assigns the message to a particular partition.
+`CustomPartition` | Use a custom message router implementation that is called to determine the partition for a particular message. You can create a custom routing mode by using the Java client and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
+
+The following is an example:
+
+```java
+
+String pulsarBrokerRootUrl = "pulsar://localhost:6650";
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+
+PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
+Producer producer = pulsarClient.newProducer()
+ .topic(topic)
+ .messageRoutingMode(MessageRoutingMode.SinglePartition)
+ .create();
+producer.send("Partitioned topic message".getBytes());
+
+```
+
+### Custom message router
+
+To use a custom message router, you need to provide an implementation of the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface, which has just one `choosePartition` method:
+
+```java
+
+public interface MessageRouter extends Serializable {
+ int choosePartition(Message msg);
+}
+
+```
+
+The following router routes every message to partition 10:
+
+```java
+
+public class AlwaysTenRouter implements MessageRouter {
+ public int choosePartition(Message msg) {
+ return 10;
+ }
+}
+
+```
+
+With that implementation in place, you can send messages as follows:
+
+```java
+
+String pulsarBrokerRootUrl = "pulsar://localhost:6650";
+String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic";
+
+PulsarClient pulsarClient = PulsarClient.builder().serviceUrl(pulsarBrokerRootUrl).build();
+Producer producer = pulsarClient.newProducer()
+ .topic(topic)
+ .messageRouter(new AlwaysTenRouter())
+ .create();
+producer.send("Partitioned topic message".getBytes());
+
+```
+
+### How to choose partitions when using a key
+If a message has a key, it supersedes the round robin routing policy. The following example illustrates how to choose the partition when using a key.
+
+```java
+
+// If the message has a key, it supersedes the round robin routing policy
+if (msg.hasKey()) {
+    return signSafeMod(hash.makeHash(msg.getKey()), topicMetadata.numPartitions());
+}
+
+if (isBatchingEnabled) { // if batching is enabled, choose partition on `partitionSwitchMs` boundary.
+    long currentMs = clock.millis();
+    return signSafeMod(currentMs / partitionSwitchMs + startPtnIdx, topicMetadata.numPartitions());
+} else {
+    return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), topicMetadata.numPartitions());
+}
+
+```
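+
+The `signSafeMod` helper used above maps a (possibly negative) key hash to a valid partition index. The following is a self-contained sketch of that behavior, assuming the usual non-negative-modulo semantics; treat it as an illustration rather than the exact Pulsar implementation.
+
+```java
+public class SignSafeMod {
+
+    // Modulo that always returns a non-negative result, so a negative
+    // key hash still maps into the range [0, numPartitions).
+    public static int signSafeMod(long dividend, int divisor) {
+        int mod = (int) (dividend % divisor);
+        if (mod < 0) {
+            mod += divisor;
+        }
+        return mod;
+    }
+
+    public static void main(String[] args) {
+        System.out.println(signSafeMod(-7, 4));  // prints 1
+        System.out.println(signSafeMod(10, 4));  // prints 2
+    }
+}
+```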
+
+## Manage subscriptions
+
+You can use [Pulsar admin API](admin-api-overview.md) to create, check, and delete subscriptions.
+
+### Create subscription
+
+You can create a subscription for a topic using one of the following methods.
+
+````mdx-code-block
+
+
+
+
+```shell
+
+pulsar-admin topics create-subscription \
+--subscription my-subscription \
+persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|PUT|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/createSubscriptions?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subscriptionName = "my-subscription";
+admin.topics().createSubscription(topic, subscriptionName, MessageId.latest);
+
+```
+
+
+
+
+````
+
+### Get subscription
+
+You can check all subscription names for a given topic using one of the following methods.
+
+````mdx-code-block
+
+
+
+
+```shell
+
+pulsar-admin topics subscriptions \
+persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/subscriptions|operation/getSubscriptions?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+admin.topics().getSubscriptions(topic);
+
+```
+
+
+
+
+````
+
+### Unsubscribe subscription
+
+When a subscription no longer processes messages, you can unsubscribe it using one of the following methods.
+
+````mdx-code-block
+
+
+
+
+```shell
+
+pulsar-admin topics unsubscribe \
+--subscription my-subscription \
+persistent://test-tenant/ns1/tp1
+
+```
+
+
+
+
+{@inject: endpoint|DELETE|/admin/v2/persistent/:tenant/:namespace/:topic/subscription/:subscription|operation/deleteSubscription?version=@pulsar:version_number@}
+
+
+
+
+```java
+
+String topic = "persistent://my-tenant/my-namespace/my-topic";
+String subscriptionName = "my-subscription";
+admin.topics().deleteSubscription(topic, subscriptionName);
+
+```
+
+
+
+
+````
diff --git a/site2/website/versioned_docs/version-2.10.x/administration-geo.md b/site2/website/versioned_docs/version-2.10.x/administration-geo.md
new file mode 100644
index 0000000000000..2d64f0b643f1e
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/administration-geo.md
@@ -0,0 +1,302 @@
+---
+id: administration-geo
+title: Pulsar geo-replication
+sidebar_label: "Geo-replication"
+original_id: administration-geo
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+## Enable geo-replication for a namespace
+
+You must enable geo-replication on a [per-tenant basis](#concepts-multi-tenancy) in Pulsar. For example, you can enable geo-replication between two specific clusters only when a tenant has access to both clusters.
+
+Geo-replication is managed at the namespace level, which means you only need to create and configure a namespace to replicate messages between two or more provisioned clusters that a tenant can access.
+
+Complete the following tasks to enable geo-replication for a namespace:
+
+* [Enable a geo-replication namespace](#enable-geo-replication-at-namespace-level)
+* [Configure that namespace to replicate across two or more provisioned clusters](admin-api-namespaces.md/#configure-replication-clusters)
+
+Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
+
+## Local persistence and forwarding
+
+When messages are produced on a Pulsar topic, messages are first persisted in the local cluster, and then forwarded asynchronously to the remote clusters.
+
+Under normal conditions, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the end-to-end delivery latency is defined by the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions.
+
+Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
+
+Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. By default, subscriptions are local to the cluster where they are created; once replicated subscriptions are enabled, subscription state is kept in sync and subscriptions can be transferred between clusters. Therefore, a topic can be asynchronously replicated across multiple geographical regions, and in case of failover, a consumer can restart consuming messages from the failure point in a different cluster.
+
+![A typical geo-replication example with full-mesh pattern](/assets/geo-replication.png)
+
+In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
+
+All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
+
+## Configure replication
+
+This section guides you through the steps to configure geo-replicated clusters.
+1. [Connect replication clusters](#connect-replication-clusters)
+2. [Grant permissions to properties](#grant-permissions-to-properties)
+3. [Enable geo-replication](#enable-geo-replication)
+4. [Use topics with geo-replication](#use-topics-with-geo-replication)
+
+### Connect replication clusters
+
+To replicate data among clusters, you need to configure each cluster to connect to the other. You can use the [`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/) tool to create a connection.
+
+**Example**
+
+Suppose that you have 3 replication clusters: `us-west`, `us-cent`, and `us-east`.
+
+1. Configure the connection from `us-west` to `us-east`.
+
+ Run the following command on `us-west`.
+
+```shell
+
+$ bin/pulsar-admin clusters create \
+ --broker-url pulsar://<DNS-OF-US-EAST>:<PORT> \
+ --url http://<DNS-OF-US-EAST>:<PORT> \
+ us-east
+
+```
+
+ :::tip
+
+ - If you want to use a secure connection for a cluster, you can use the flags `--broker-url-secure` and `--url-secure`. For more information, see [pulsar-admin clusters create](https://pulsar.apache.org/tools/pulsar-admin/).
+ - Different clusters may use different authentication configurations. You can use the authentication flags `--auth-plugin` and `--auth-parameters` together to set cluster authentication, which overrides `brokerClientAuthenticationPlugin` and `brokerClientAuthenticationParameters` if `authenticationEnabled` is set to `true` in `broker.conf` and `standalone.conf`. For more information, see [authentication and authorization](concepts-authentication.md).
+
+ :::
+
+2. Configure the connection from `us-west` to `us-cent`.
+
+ Run the following command on `us-west`.
+
+```shell
+
+$ bin/pulsar-admin clusters create \
+ --broker-url pulsar://<DNS-OF-US-CENT>:<PORT> \
+ --url http://<DNS-OF-US-CENT>:<PORT> \
+ us-cent
+
+```
+
+3. Run similar commands on `us-east` and `us-cent` to create connections among clusters.
+
+### Grant permissions to properties
+
+To replicate to a cluster, the tenant needs permission to use that cluster. You can grant the permission when you create the tenant, or grant it later.
+
+Specify all the intended clusters when you create a tenant:
+
+```shell
+
+$ bin/pulsar-admin tenants create my-tenant \
+ --admin-roles my-admin-role \
+ --allowed-clusters us-west,us-east,us-cent
+
+```
+
+To update permissions of an existing tenant, use `update` instead of `create`.
+
+### Enable geo-replication
+
+You can enable geo-replication at **namespace** or **topic** level.
+
+#### Enable geo-replication at namespace level
+
+You can create a namespace with the following command sample.
+
+```shell
+
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace
+
+```
+
+Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand:
+
+```shell
+
+$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
+ --clusters us-west,us-east,us-cent
+
+```
+
+#### Enable geo-replication at topic level
+
+You can set geo-replication at topic level using the command `pulsar-admin topics set-replication-clusters`. For the latest and complete information about `Pulsar admin`, including commands, flags, descriptions, and more information, see [Pulsar admin doc](https://pulsar.apache.org/tools/pulsar-admin/).
+
+```shell
+
+$ bin/pulsar-admin topics set-replication-clusters --clusters us-west,us-east,us-cent my-tenant/my-namespace/my-topic
+
+```
+
+:::tip
+
+- You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes.
+- Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace are replicated across clusters. Typically, each application uses the `serviceUrl` for the local cluster.
+- If you are using Pulsar version `2.10.x`, to enable geo-replication at topic level, you need to change the following configurations in the `conf/broker.conf` or `conf/standalone.conf` file to enable topic policies service.
+```shell
+systemTopicEnabled=true
+topicLevelPoliciesEnabled=true
+```
+:::
+
+### Use topics with geo-replication
+
+#### Selective replication
+
+By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list.
+
+The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object:
+
+```java
+
+List<String> restrictReplicationTo = Arrays.asList(
+        "us-west",
+        "us-east"
+);
+
+Producer producer = client.newProducer()
+        .topic("some-topic")
+        .create();
+
+producer.newMessage()
+        .value("my-payload".getBytes())
+        .setReplicationClusters(restrictReplicationTo)
+        .send();
+
+```
+
+#### Topic stats
+
+You can check topic-specific statistics for geo-replication topics using one of the following methods.
+
+````mdx-code-block
+
+
+
+Use the [`pulsar-admin topics stats`](https://pulsar.apache.org/tools/pulsar-admin/) command.
+
+```shell
+
+$ bin/pulsar-admin topics stats persistent://my-tenant/my-namespace/my-topic
+
+```
+
+
+
+
+{@inject: endpoint|GET|/admin/v2/:schema/:tenant/:namespace/:topic/stats|operation/getStats?version=@pulsar:version_number@}
+
+
+
+
+````
+
+Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs.
+
+#### Delete a geo-replication topic
+
+Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection.
+
+In Pulsar, a topic is automatically deleted when the topic meets all of the following conditions:
+- no producers or consumers are connected to it;
+- it has no subscriptions;
+- no more messages are kept for retention.
+
+For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe.
+
+You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker).
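+
+For example, to disable automatic deletion and manage topic deletion yourself, set the following in `conf/broker.conf` (shown for illustration):
+
+```conf
+
+# Disable the automatic garbage collection of inactive topics
+brokerDeleteInactiveTopicsEnabled=false
+
+```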
+
+To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic.
+
+## Replicated subscriptions
+
+Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions.
+
+In case of failover, a consumer can restart consuming from the failure point in a different cluster.
+
+### Enable replicated subscription
+
+Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer.
+
+```java
+
+Consumer<String> consumer = client.newConsumer(Schema.STRING)
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .replicateSubscriptionState(true)
+        .subscribe();
+
+```
+
+### Advantages
+
+ * The implementation logic is simple.
+ * You can choose to enable or disable replicated subscriptions.
+ * When you enable them, the overhead is low, and they are easy to configure.
+ * When you disable them, the overhead is zero.
+
+### Limitations
+
+* When you enable replicated subscriptions, you are creating a consistent distributed snapshot to establish an association between message IDs from different clusters. The snapshots are taken periodically; the default interval is `1 second`, which means a consumer failing over to a different cluster can potentially receive up to 1 second of duplicate messages. You can also configure the snapshot frequency in the `broker.conf` file.
+* Only the baseline cursor position is synced in replicated subscriptions; individual acknowledgments are not. As a result, messages that were acknowledged out of order could be delivered again after a cluster failover.
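+
+The snapshot frequency mentioned above can be tuned in `conf/broker.conf`; the key name below reflects recent broker versions and the value shown is the default:
+
+```conf
+
+# Interval between replicated-subscription snapshots, in milliseconds
+replicatedSubscriptionsSnapshotFrequencyMillis=1000
+
+```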
+
+## Migrate data between clusters using geo-replication
+
+Using geo-replication to migrate data between clusters is a special use case of the [active-active replication pattern](concepts-replication.md/#active-active-replication) when you don't have a large amount of data.
+
+1. Create your new cluster.
+2. Add the new cluster to your old cluster.
+
+```shell
+
+ bin/pulsar-admin clusters create new-cluster
+
+```
+
+3. Add the new cluster to your tenant.
+
+```shell
+
+ bin/pulsar-admin tenants update my-tenant --allowed-clusters old-cluster,new-cluster
+
+```
+
+4. Set the clusters on your namespace.
+
+```shell
+
+ bin/pulsar-admin namespaces set-clusters my-tenant/my-ns --clusters old-cluster,new-cluster
+
+```
+
+5. Update your applications using [replicated subscriptions](#replicated-subscriptions).
+6. Validate subscription replication is active.
+
+```shell
+
+ bin/pulsar-admin topics stats-internal public/default/t1
+
+```
+
+7. Move your consumers and producers to the new cluster by modifying the values of `serviceURL`.
+
+:::note
+
+* The replication starts from step 4, which means existing messages in your old cluster are not replicated.
+* If you have older messages to migrate, you can pre-create the replication subscription for each topic and set it to the earliest position by using `pulsar-admin topics create-subscription -s pulsar.repl.new-cluster -m earliest `.
+
+:::
+
diff --git a/site2/website/versioned_docs/version-2.10.x/administration-isolation.md b/site2/website/versioned_docs/version-2.10.x/administration-isolation.md
new file mode 100644
index 0000000000000..b176d1f14c20d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/administration-isolation.md
@@ -0,0 +1,124 @@
+---
+id: administration-isolation
+title: Pulsar isolation
+sidebar_label: "Pulsar isolation"
+original_id: administration-isolation
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+In an organization, a Pulsar instance provides services to multiple teams. When organizing the resources across multiple teams, you need a suitable isolation plan to avoid resource competition between different teams and applications and to provide high-quality messaging service. In this case, take resource isolation into consideration and weigh your intended actions against their expected and unexpected consequences.
+
+To enforce resource isolation, you can use the Pulsar isolation policy, which allows you to allocate resources (**broker** and **bookie**) for the namespace.
+
+## Broker isolation
+
+In Pulsar, when namespaces (more specifically, namespace bundles) are assigned dynamically to brokers, the namespace isolation policy limits the set of brokers that can be used for assignment. Before topics are assigned to brokers, you can set the namespace isolation policy with a primary or a secondary regex to select desired brokers.
+
+You can set a namespace isolation policy for a cluster using one of the following methods.
+
+````mdx-code-block
+
+
+
+
+```
+
+pulsar-admin ns-isolation-policy set options
+
+```
+
+For more information about the command `pulsar-admin ns-isolation-policy set options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/).
+
+**Example**
+
+```shell
+
+bin/pulsar-admin ns-isolation-policy set \
+--auto-failover-policy-type min_available \
+--auto-failover-policy-params min_limit=1,usage_threshold=80 \
+--namespaces my-tenant/my-namespace \
+--primary 10.193.216.* my-cluster policy-name
+
+```
+
+
+
+
+[POST /admin/v2/clusters/{cluster}/namespaceIsolationPolicies/{policyName}](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/createNamespaceIsolationPolicy)
+
+
+
+
+For how to set namespace isolation policy using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L251).
+
+
+
+
+````
+
+## Bookie isolation
+
+A namespace can be isolated into user-defined groups of bookies, which guarantees that all the data belonging to the namespace is stored on the desired bookies. The bookie affinity group uses the BookKeeper [rack-aware placement policy](https://bookkeeper.apache.org/docs/latest/api/javadoc/org/apache/bookkeeper/client/EnsemblePlacementPolicy.html), and it is a way to feed rack information, which is stored in JSON format in a znode.
+
+You can set a bookie affinity group using one of the following methods.
+
+````mdx-code-block
+
+
+
+
+```
+
+pulsar-admin namespaces set-bookie-affinity-group options
+
+```
+
+For more information about the command `pulsar-admin namespaces set-bookie-affinity-group options`, see [here](https://pulsar.apache.org/tools/pulsar-admin/).
+
+**Example**
+
+```shell
+
+bin/pulsar-admin bookies set-bookie-rack \
+--bookie 127.0.0.1:3181 \
+--hostname 127.0.0.1:3181 \
+--group group-bookie1 \
+--rack rack1
+
+bin/pulsar-admin namespaces set-bookie-affinity-group public/default \
+--primary-group group-bookie1
+
+```
+
+:::note
+
+- Do not set a bookie rack name to a slash (`/`) or an empty string (`""`) if you use a version of Pulsar earlier than 2.7.5, 2.8.3, or 2.9.2. If you use Pulsar 2.7.5, 2.8.3, 2.9.2, or a later version, such a rack name falls back to `/default-rack` or `/default-region/default-rack`.
+- When `RackawareEnsemblePlacementPolicy` is enabled, the rack name is not allowed to contain slash (`/`) except for the beginning and end of the rack name string. For example, rack name like `/rack0` is okay, but `/rack/0` is not allowed.
+- When `RegionawareEnsemblePlacementPolicy` is enabled, the rack name can only contain one slash (`/`) except for the beginning and end of the rack name string. For example, rack name like `/region0/rack0` is okay, but `/region0rack0` and `/region0/rack/0` are not allowed.
+For the bookie rack name restrictions, see [pulsar-admin bookies set-bookie-rack](https://pulsar.apache.org/tools/pulsar-admin/).
+
+:::
+
+
+
+
+[POST /admin/v2/namespaces/{tenant}/{namespace}/persistence/bookieAffinity](https://pulsar.apache.org/admin-rest-api/?version=master&apiversion=v2#operation/setBookieAffinityGroup)
+
+
+
+
+For how to set bookie affinity group for a namespace using Java admin API, see [here](https://github.com/apache/pulsar/blob/master/pulsar-client-admin/src/main/java/org/apache/pulsar/client/admin/internal/NamespacesImpl.java#L1164).
+
+
+
+
+````
diff --git a/site2/website/versioned_docs/version-2.10.x/administration-load-balance.md b/site2/website/versioned_docs/version-2.10.x/administration-load-balance.md
new file mode 100644
index 0000000000000..397c88c5dc0f7
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/administration-load-balance.md
@@ -0,0 +1,280 @@
+---
+id: administration-load-balance
+title: Load balance across brokers
+sidebar_label: "Load balance"
+original_id: administration-load-balance
+---
+
+
+Pulsar is a horizontally scalable messaging system, so a core requirement is that the traffic in a logical cluster be balanced across all the available Pulsar brokers as evenly as possible.
+
+You can use multiple settings and tools to control the traffic distribution, which requires a bit of context on how traffic is managed in Pulsar. In most cases, however, this core requirement is met out of the box and you do not need to worry about it.
+
+The following sections introduce how the load-balanced assignments work across Pulsar brokers and how you can leverage the framework to adjust.
+
+## Dynamic assignments
+
+Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster. The assignment of topics to brokers is not done at the topic level but at the **bundle** level (a higher level). Instead of individual topic assignments, each broker takes ownership of a subset of the topics for a namespace. This subset is called a bundle and effectively this subset is a sharding mechanism.
+
+In other words, each namespace is an "administrative" unit and sharded into a list of bundles, with each bundle comprising a portion of the overall hash range of the namespace. Topics are assigned to a particular bundle by taking the hash of the topic name and checking in which bundle the hash falls. Each bundle is independent of the others and thus is independently assigned to different brokers.
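+
+The hash-to-bundle mapping described above can be sketched as follows. This is an illustration of the idea only, not Pulsar's actual implementation, which uses a configurable hash function and stores bundle boundaries in metadata:
+
+```java
+
+public class BundleSketch {
+
+    // The namespace hash space is [0, 2^32); each bundle owns an equal slice of it.
+    static final long HASH_SPACE = 1L << 32;
+
+    static int bundleFor(String topicName, int numBundles) {
+        // Interpret the 32-bit hash as an unsigned value in [0, 2^32)
+        long hash = Integer.toUnsignedLong(topicName.hashCode());
+        long rangeSize = HASH_SPACE / numBundles;
+        // Guard the top edge in case HASH_SPACE is not an exact multiple of numBundles
+        return (int) Math.min(hash / rangeSize, numBundles - 1);
+    }
+
+    public static void main(String[] args) {
+        // Topics spread across bundles; each bundle is assigned to a broker independently
+        System.out.println(bundleFor("persistent://my-tenant/my-ns/orders", 16));
+        System.out.println(bundleFor("persistent://my-tenant/my-ns/payments", 16));
+    }
+}
+
+```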
+
+The benefit of the assignment granularity is to amortize the amount of information that you need to keep track of. Based on CPU, memory, traffic load, and other indexes, topics are assigned to a particular broker dynamically. For example:
+* When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best-suited broker to acquire ownership of these topics according to the load conditions.
+* If the broker owning a topic becomes overloaded, the topic is reassigned to a less-loaded broker.
+* If the broker owning a topic crashes, the topic is reassigned to another active broker.
+
+:::tip
+
+For partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.
+
+:::
+
+## Create namespaces with assigned bundles
+
+When you create a new namespace, a number of bundles are assigned to the namespace. You can set this number in the `conf/broker.conf` file:
+
+```conf
+
+# When a namespace is created without specifying the number of bundles, this
+# value will be used as the default
+defaultNumberOfNamespaceBundles=4
+
+```
+
+Alternatively, you can override the value when you create a new namespace using [Pulsar admin](/tools/pulsar-admin/):
+
+```shell
+
+bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
+
+```
+
+With the above command, you create a namespace with 16 initial bundles. Therefore, the topics for this namespace can immediately be spread across up to 16 brokers.
+
+In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.
+
+On the same note, it is beneficial to start with more bundles than the number of brokers, due to the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using 64 bundles achieves a good distribution of traffic across 16 brokers.
+
+
+## Split namespace bundles
+
+Since the load for the topics in a bundle might change over time and predicting the load might be hard, bundle split is designed to resolve these challenges. The broker splits a bundle into two and the new smaller bundles can be reassigned to different brokers.
+
+Pulsar supports the following two bundle split algorithms:
+* `range_equally_divide`: split the bundle into two parts with the same hash range size.
+* `topic_count_equally_divide`: split the bundle into two parts with the same number of topics.
+
+To enable bundle split, you need to configure the following settings in the `broker.conf` file, and set `defaultNamespaceBundleSplitAlgorithm` based on your needs.
+
+```conf
+
+loadBalancerAutoBundleSplitEnabled=true
+loadBalancerAutoUnloadSplitBundlesEnabled=true
+defaultNamespaceBundleSplitAlgorithm=range_equally_divide
+
+```
+
+You can configure more parameters for splitting thresholds. Any existing bundle that exceeds any of the thresholds is a candidate to be split. By default, the newly split bundles are immediately reassigned to other brokers, to facilitate the traffic distribution.
+
+```conf
+
+# maximum topics in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxTopics=1000
+
+# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxSessions=1000
+
+# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxMsgRate=30000
+
+# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxBandwidthMbytes=100
+
+# maximum number of bundles in a namespace (for auto-split)
+loadBalancerNamespaceMaximumBundles=128
+
+```
+
+## Shed load automatically
+
+The support for automatic load shedding is available in the load manager of Pulsar. This means that whenever the system recognizes a particular broker is overloaded, the system forces some traffic to be reassigned to less-loaded brokers.
+
+When a broker is identified as overloaded, it forcefully unloads a subset of its bundles, the ones with higher traffic, that account for the overload percentage.
+
+For example, the default threshold is 85%. If a broker is over quota at 95% CPU usage, the broker unloads the percentage difference plus a 5% margin: `(95% - 85%) + 5% = 15%`. Given that the selection of bundles to unload is based on traffic (as a proxy measure for CPU, network, and memory), the broker unloads bundles accounting for at least 15% of its traffic.
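+
+The arithmetic above can be written out explicitly (a sketch of the calculation only, not actual broker code):
+
+```java
+
+public class SheddingMath {
+
+    // Fraction of traffic to unload: overshoot above the threshold plus a fixed margin
+    static double trafficToShed(double usage, double threshold, double margin) {
+        return (usage - threshold) + margin;
+    }
+
+    public static void main(String[] args) {
+        // 95% usage against an 85% threshold with a 5% margin: shed 15% of traffic
+        System.out.println(trafficToShed(0.95, 0.85, 0.05));
+    }
+}
+
+```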
+
+:::tip
+
+* The automatic load shedding is enabled by default. To disable it, you can set `loadBalancerSheddingEnabled` to `false`.
+* Besides the automatic load shedding, you can [manually unload bundles](#unload-topics-and-bundles).
+
+:::
+
+Additional settings that apply to shedding:
+
+```conf
+
+# Load shedding interval. Broker periodically checks whether some traffic should be offload from
+# some over-loaded broker to other under-loaded brokers
+loadBalancerSheddingIntervalMinutes=1
+
+# Prevent the same topics to be shed and moved to other brokers more than once within this timeframe
+loadBalancerSheddingGracePeriodMinutes=30
+
+```
+
+Pulsar supports the following types of automatic load shedding strategies.
+* [ThresholdShedder](#thresholdshedder)
+* [OverloadShedder](#overloadshedder)
+* [UniformLoadShedder](#uniformloadshedder)
+
+:::note
+
+* From Pulsar 2.10, the **default** shedding strategy is `ThresholdShedder`.
+* You need to restart brokers if the shedding strategy is [dynamically updated](admin-api-brokers.md/#dynamic-broker-configuration).
+
+:::
+
+### ThresholdShedder
+
+This strategy tends to shed bundles if any broker's usage is above the configured threshold. It first computes the average resource usage per broker for the whole cluster. The resource usage for each broker is calculated with the `LocalBrokerData#getMaxResourceUsageWithWeight` method, and historical observations are folded into a running average according to the broker's `loadBalancerHistoryResourcePercentage` setting. Once the average resource usage is calculated, a broker's current/historical usage is compared to the average broker usage. If a broker's usage is greater than the average usage per broker plus the `loadBalancerBrokerThresholdShedderPercentage`, this load shedder proposes removing enough bundles to bring the unloaded broker 5% below the current average broker usage. Note that recently unloaded bundles are not unloaded again.
+
+![Shedding strategy - ThresholdShedder](/assets/shedding-strategy-thresholdshedder.svg)
+
+For example, assume you have three brokers: the average usage of broker1 is 40%, and the average usage of broker2 and broker3 is 10% each, so the cluster average usage is 20% ((40% + 10% + 10%) / 3). If you set `loadBalancerBrokerThresholdShedderPercentage` to `10`, only certain bundles of broker1 get unloaded, because only broker1's average usage is greater than the cluster average usage (20%) plus `loadBalancerBrokerThresholdShedderPercentage` (10%).
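+
+The example numbers can be checked with a small sketch (illustrative only; the real shedder also weighs resources and history as described above):
+
+```java
+
+public class ThresholdExample {
+
+    public static void main(String[] args) {
+        double[] brokerUsage = {0.40, 0.10, 0.10};  // broker1, broker2, broker3
+        double thresholdPercentage = 0.10;          // loadBalancerBrokerThresholdShedderPercentage
+
+        double avg = (brokerUsage[0] + brokerUsage[1] + brokerUsage[2]) / 3;  // cluster average: 20%
+        for (int i = 0; i < brokerUsage.length; i++) {
+            boolean sheds = brokerUsage[i] > avg + thresholdPercentage;
+            System.out.println("broker" + (i + 1) + " sheds: " + sheds);
+        }
+        // Only broker1 exceeds 20% + 10%, so only broker1 unloads bundles
+    }
+}
+
+```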
+
+To use the `ThresholdShedder` strategy, configure brokers with this value.
+`loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`
+
+You can configure the weights for each resource per broker in the `conf/broker.conf` file.
+
+```conf
+
+# The bandwidth-in usage weight when calculating new resource usage. The range is between 0 and 1.0.
+loadBalancerBandwithInResourceWeight=1.0
+
+# The bandwidth-out usage weight when calculating new resource usage. The range is between 0 and 1.0.
+loadBalancerBandwithOutResourceWeight=1.0
+
+# The CPU usage weight when calculating new resource usage. The range is between 0 and 1.0.
+loadBalancerCPUResourceWeight=1.0
+
+# The heap memory usage weight when calculating new resource usage. The range is between 0 and 1.0.
+loadBalancerMemoryResourceWeight=1.0
+
+# The direct memory usage weight when calculating new resource usage. The range is between 0 and 1.0.
+loadBalancerDirectMemoryResourceWeight=1.0
+
+```
+
+### OverloadShedder
+
+This strategy attempts to shed exactly one bundle on brokers which are overloaded, that is, whose maximum system resource usage exceeds [`loadBalancerBrokerOverloadedThresholdPercentage`](#broker-overload-thresholds); see [Broker overload thresholds](#broker-overload-thresholds) for which resources are considered when determining the maximum system resource usage. A bundle is recommended for unloading off that broker if and only if the following conditions hold: the broker has at least two bundles assigned, and the broker has at least one bundle that has not been unloaded recently according to `LoadBalancerSheddingGracePeriodMinutes`. The unloaded bundle is the most expensive bundle in terms of message rate that has not been recently unloaded. Note that this strategy does not take "underloaded" brokers into account when determining which bundles to unload. If you are looking for a strategy that spreads load evenly across all brokers, see [ThresholdShedder](#thresholdshedder).
+
+![Shedding strategy - OverloadShedder](/assets/shedding-strategy-overloadshedder.svg)
+
+To use the `OverloadShedder` strategy, configure brokers with this value.
+`loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder`
+
+#### Broker overload thresholds
+
+The determination of when a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers the shedding (if enabled).
+
+:::note
+
+The overload threshold `loadBalancerBrokerOverloadedThresholdPercentage` only applies to the [`OverloadShedder`](#overloadshedder) shedding strategy. By default, it is set to 85%.
+
+:::
+
+Pulsar gathers the CPU, network, and memory usage stats from the system metrics. In some cases of network utilization, the network interface speed that Linux reports is not correct and needs to be manually overridden. This is the case in AWS EC2 instances with 1Gbps NIC speed for which the OS reports 10Gbps speed.
+
+Because of the incorrect max speed, the load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.
+
+You can set `loadBalancerOverrideBrokerNicSpeedGbps` in the `conf/broker.conf` file to correct the max NIC speed. When the value is empty, Pulsar uses the value that the OS reports.
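+
+For example, on an AWS EC2 instance with a 1Gbps NIC, you could pin the speed in `conf/broker.conf` (the value below is illustrative — use your instance's actual NIC speed):
+
+```properties
+
+# Override the NIC speed that the OS reports (in Gbps).
+# Leave the value empty to use the OS-reported speed.
+loadBalancerOverrideBrokerNicSpeedGbps=1
+
+```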
+
+### UniformLoadShedder
+This strategy tends to distribute load uniformly across all brokers. It checks the load difference between the broker with the highest load and the broker with the lowest load. If the difference exceeds either of the configured thresholds, `loadBalancerMsgRateDifferenceShedderThreshold` or `loadBalancerMsgThroughputMultiplierDifferenceShedderThreshold`, it finds bundles that can be unloaded to distribute traffic evenly across all brokers.
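+
+The trigger can be sketched as follows (a simplified illustration with invented names, not the actual broker code; the default threshold values shown are assumptions — check your broker configuration):
+
+```python
+
+# Simplified sketch of the UniformLoadShedder trigger described above.
+def should_shed(msg_rates, throughputs,
+                msg_rate_threshold_pct=50,
+                throughput_multiplier_threshold=4):
+    """msg_rates: per-broker message rates (msgs/s);
+    throughputs: per-broker throughput (bytes/s)."""
+    max_rate, min_rate = max(msg_rates), min(msg_rates)
+    max_tp, min_tp = max(throughputs), min(throughputs)
+    # Message-rate gap as a percentage of the least-loaded broker's rate.
+    rate_gap_pct = (max_rate - min_rate) / max(min_rate, 1) * 100
+    # Throughput gap as a multiple of the least-loaded broker's throughput.
+    tp_multiplier = max_tp / max(min_tp, 1)
+    return (rate_gap_pct > msg_rate_threshold_pct
+            or tp_multiplier > throughput_multiplier_threshold)
+
+print(should_shed([1000, 400], [8_000_000, 7_000_000]))  # True: 150% rate gap
+print(should_shed([1000, 900], [8_000_000, 7_000_000]))  # False: below both thresholds
+
+```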
+
+![Shedding strategy - UniformLoadShedder](/assets/shedding-strategy-uniformLoadshedder.svg)
+
+To use the `UniformLoadShedder` strategy, configure brokers with this value:
+
+```properties
+
+loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.UniformLoadShedder
+
+```
+
+## Unload topics and bundles
+
+You can "unload" a topic in Pulsar through manual admin operations. Unloading means closing the topic, releasing ownership, and reassigning the topic to a new broker, based on the current load.
+
+When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
+
+Unloading is the mechanism that the load manager uses to perform the load shedding, but you can also trigger the unloading manually, for example, to correct the assignments and redistribute traffic even before having any broker overloaded.
+
+Unloading a topic has no effect on the assignment, but just closes and reopens the particular topic:
+
+```shell
+
+pulsar-admin topics unload persistent://tenant/namespace/topic
+
+```
+
+To unload all topics for a namespace and trigger reassignments:
+
+```shell
+
+pulsar-admin namespaces unload tenant/namespace
+
+```
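+
+You can also unload a single bundle instead of the whole namespace (the bundle range below is illustrative — list the actual ranges first with `pulsar-admin namespaces bundles tenant/namespace`):
+
+```shell
+
+pulsar-admin namespaces unload --bundle 0x00000000_0x40000000 tenant/namespace
+
+```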
+
+## Distribute anti-affinity namespaces across failure domains
+
+When your application has multiple namespaces and you want one of them available all the time to avoid any downtime, you can group these namespaces and distribute them across different [failure domains](reference-terminology.md#failure-domain) and different brokers. Thus, if one of the failure domains is down (due to a release rollout or broker restarts), it only disrupts the namespaces owned by that failure domain, while the namespaces owned by other domains remain available and unaffected.
+
+Such a group of namespaces has anti-affinity to each other, that is, all the namespaces in this group are [anti-affinity namespaces](reference-terminology.md#anti-affinity-namespaces) and are distributed to different failure domains in a load-balanced manner.
+
+As illustrated in the following figure, Pulsar has 2 failure domains (Domain1 and Domain2) and each domain has 2 brokers in it. You can create an anti-affinity namespace group that has 4 namespaces in it, and all the 4 namespaces have anti-affinity to each other. The load manager tries to distribute namespaces evenly across all the brokers in the same domain. Since each domain has 2 brokers, every broker owns one namespace from this anti-affinity namespace group, and you can see each domain owns 2 namespaces, and each broker owns 1 namespace.
+
+![Distribute anti-affinity namespaces across failure domains](/assets/anti-affinity-namespaces-across-failure-domains.svg)
+
+The load manager follows an even distribution policy across failure domains to assign anti-affinity namespaces. The following table outlines the even-distributed assignment sequence illustrated in the above figure.
+
+| Assignment sequence | Namespace | Failure domain candidates | Broker candidates | Selected broker |
+|:---|:------------|:------------------|:------------------------------------|:-----------------|
+| 1 | Namespace1 | Domain1, Domain2 | Broker1, Broker2, Broker3, Broker4 | Domain1:Broker1 |
+| 2 | Namespace2 | Domain2 | Broker3, Broker4 | Domain2:Broker3 |
+| 3 | Namespace3 | Domain1, Domain2 | Broker2, Broker4 | Domain1:Broker2 |
+| 4 | Namespace4 | Domain2 | Broker4 | Domain2:Broker4 |
+
+:::tip
+
+* Each namespace belongs to only one anti-affinity group. If a namespace with an existing anti-affinity assignment is assigned to another anti-affinity group, the original assignment is dropped.
+
+* If there are more anti-affinity namespaces than failure domains, the load manager distributes namespaces evenly across all the domains, and also every domain distributes namespaces evenly across all the brokers under that domain.
+
+:::
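+
+The even-distribution policy in the table above can be sketched as a greedy "least-loaded domain, then least-loaded broker" selection (an illustration of the idea only, with alphabetical tie-breaking assumed; not the actual load-manager implementation):
+
+```python
+
+# Greedy sketch of anti-affinity placement: choose the failure domain that
+# owns the fewest namespaces from the group, then the least-loaded broker
+# in that domain. Ties are broken alphabetically for determinism.
+def assign(namespaces, domains):
+    """domains: {domain name: [broker, ...]}"""
+    domain_load = {d: 0 for d in domains}
+    broker_load = {b: 0 for brokers in domains.values() for b in brokers}
+    placement = {}
+    for ns in namespaces:
+        least = min(domain_load.values())
+        domain = min(d for d, n in domain_load.items() if n == least)
+        broker = min(sorted(domains[domain]), key=broker_load.get)
+        placement[ns] = f"{domain}:{broker}"
+        domain_load[domain] += 1
+        broker_load[broker] += 1
+    return placement
+
+domains = {"Domain1": ["Broker1", "Broker2"], "Domain2": ["Broker3", "Broker4"]}
+for ns, broker in assign(["Namespace1", "Namespace2", "Namespace3", "Namespace4"],
+                         domains).items():
+    print(ns, "->", broker)  # reproduces the assignment sequence in the table
+
+```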
+
+### Create a failure domain and register brokers
+
+:::note
+
+One broker can only be registered to a single failure domain.
+
+:::
+
+To create a domain under a specific cluster and register brokers, run the following command:
+
+```bash
+
+pulsar-admin clusters create-failure-domain <cluster-name> --domain-name <domain-name> --broker-list <broker-list>
+
+```
+
+You can also view, update, and delete domains under a specific cluster. For more information, refer to [Pulsar admin doc](/tools/pulsar-admin/).
+
+### Create an anti-affinity namespace group
+
+An anti-affinity group is created automatically when the first namespace is assigned to the group. To assign a namespace to an anti-affinity group, run the following command. It sets an anti-affinity group name for a namespace.
+
+```bash
+
+pulsar-admin namespaces set-anti-affinity-group <tenant/namespace> --group <group-name>
+
+```
+
+For more information about `anti-affinity-group` related commands, refer to [Pulsar admin doc](/tools/pulsar-admin/).
diff --git a/site2/website/versioned_docs/version-2.10.x/administration-proxy.md b/site2/website/versioned_docs/version-2.10.x/administration-proxy.md
new file mode 100644
index 0000000000000..f45185dc45bfe
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/administration-proxy.md
@@ -0,0 +1,90 @@
+---
+id: administration-proxy
+title: Pulsar proxy
+sidebar_label: "Pulsar proxy"
+original_id: administration-proxy
+---
+
+Pulsar proxy is an optional gateway that you can use when direct connections between clients and Pulsar brokers are either infeasible or undesirable. For example, you can run a Pulsar proxy when you run Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform.
+
+## Configure the proxy
+
+Before using the proxy, you need to configure it with the brokers' addresses in the cluster. You can either configure the broker URLs in the proxy configuration, or configure the proxy to connect directly to the brokers using service discovery.
+
+> Service discovery is not recommended in a production environment.
+
+### Use broker URLs
+
+It is more secure to specify a URL to connect to the brokers.
+
+Proxy authorization requires access to ZooKeeper, so if you use broker URLs to connect to the brokers, you need to disable authorization at the proxy level. Brokers still authorize requests after the proxy forwards them.
+
+You can configure the broker URLs in `conf/proxy.conf` as follows.
+
+```properties
+
+brokerServiceURL=pulsar://brokers.example.com:6650
+brokerWebServiceURL=http://brokers.example.com:8080
+functionWorkerWebServiceURL=http://function-workers.example.com:8080
+
+```
+
+If you use TLS, configure the broker URLs in the following way:
+
+```properties
+
+brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
+brokerWebServiceURLTLS=https://brokers.example.com:8443
+functionWorkerWebServiceURL=https://function-workers.example.com:8443
+
+```
+
+The hostname in the URLs should be a DNS entry that points to multiple brokers, or a virtual IP address backed by multiple broker IP addresses, so that the proxy does not lose connectivity to the Pulsar cluster if a single broker becomes unavailable.
+
+The ports to connect to the brokers (6650 and 8080, or in the case of TLS, 6651 and 8443) should be open in the network ACLs.
+
+Note that if you do not use functions, you do not need to configure `functionWorkerWebServiceURL`.
+
+### Use service discovery
+
+Pulsar uses [ZooKeeper](https://zookeeper.apache.org) for service discovery. To connect the proxy to ZooKeeper, specify the following in `conf/proxy.conf`.
+
+```properties
+
+metadataStoreUrl=my-zk-0:2181,my-zk-1:2181,my-zk-2:2181
+configurationMetadataStoreUrl=my-zk-0:2184,my-zk-remote:2184
+
+```
+
+> To use service discovery, you need to open the network ACLs, so the proxy can connect to the ZooKeeper nodes through the ZooKeeper client port (port `2181`) and the configuration store client port (port `2184`).
+
+> However, using service discovery is not secure: if the network ACL is open and someone compromises a proxy, they gain full access to ZooKeeper.
+
+## Start the proxy
+
+To start the proxy:
+
+```bash
+
+$ cd /path/to/pulsar/directory
+$ bin/pulsar proxy \
+ --metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181 \
+ --configuration-metadata-store zk:my-zk-1:2181,my-zk-2:2181,my-zk-3:2181
+
+```
+
+> You can run multiple instances of the Pulsar proxy in a cluster.
+
+## Stop the proxy
+
+Pulsar proxy runs in the foreground by default. To stop the proxy, simply stop the process in which the proxy is running.
+
+## Proxy frontends
+
+You can run Pulsar proxy behind some kind of load-distributing frontend, such as an [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts) load balancer.
+
+## Use Pulsar clients with the proxy
+
+Once your Pulsar proxy is up and running, preferably behind a load-distributing [frontend](#proxy-frontends), clients can connect to the proxy via whichever address that the frontend uses. If the address is the DNS address `pulsar.cluster.default`, for example, the connection URL for clients is `pulsar://pulsar.cluster.default:6650`.
+
+For more information on Proxy configuration, refer to [Pulsar proxy](reference-configuration.md#pulsar-proxy).
diff --git a/site2/website/versioned_docs/version-2.10.x/administration-pulsar-manager.md b/site2/website/versioned_docs/version-2.10.x/administration-pulsar-manager.md
new file mode 100644
index 0000000000000..40c5a33da6da8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/administration-pulsar-manager.md
@@ -0,0 +1,216 @@
+---
+id: administration-pulsar-manager
+title: Pulsar Manager
+sidebar_label: "Pulsar Manager"
+original_id: administration-pulsar-manager
+---
+
+Pulsar Manager is a web-based GUI management and monitoring tool that helps administrators and users manage and monitor tenants, namespaces, topics, subscriptions, brokers, clusters, and so on, and supports dynamic configuration of multiple environments.
+
+:::note
+
+If you are monitoring your current stats with [Pulsar dashboard](administration-dashboard.md), we recommend you use Pulsar Manager instead. Pulsar dashboard is deprecated.
+
+:::
+
+## Install
+
+### Quick Install
+The easiest way to use the Pulsar Manager is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+```shell
+
+docker pull apachepulsar/pulsar-manager:v0.2.0
+docker run -it \
+ -p 9527:9527 -p 7750:7750 \
+ -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
+ apachepulsar/pulsar-manager:v0.2.0
+
+```
+
+* Pulsar Manager is divided into a front end and a back end; the front-end service port is `9527` and the back-end service port is `7750`.
+* `SPRING_CONFIGURATION_FILE`: the default configuration file for Spring.
+* By default, Pulsar Manager uses the `herddb` database. HerdDB is a distributed SQL database implemented in Java; see [herddb.org](https://herddb.org/) for more information.
+
+### Configure Database or JWT authentication
+#### Configure Database (optional)
+
+If you have a large amount of data, you can use a custom database. Otherwise, some display errors may occur; for example, topic information cannot be displayed when the number of topics exceeds 10,000.
+The following is an example using PostgreSQL.
+
+1. Initialize the database and table structures using this [SQL file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/META-INF/sql/postgresql-schema.sql).
+2. Download and modify the [configuration file](https://github.com/apache/pulsar-manager/blob/master/src/main/resources/application.properties), and then add the PostgreSQL configuration.
+
+```properties
+
+spring.datasource.driver-class-name=org.postgresql.Driver
+spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager
+spring.datasource.username=postgres
+spring.datasource.password=postgres
+
+```
+
+3. Add a configuration mount and start with the Docker image.
+
+```bash
+
+docker pull apachepulsar/pulsar-manager:v0.2.0
+docker run -it \
+ -p 9527:9527 -p 7750:7750 \
+ -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
+ -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
+ apachepulsar/pulsar-manager:v0.2.0
+
+```
+
+#### Enable JWT authentication (optional)
+
+If you want to turn on JWT authentication, configure the `application.properties` file.
+
+```properties
+
+backend.jwt.token=token
+
+jwt.broker.token.mode=PRIVATE
+jwt.broker.public.key=file:///path/broker-public.key
+jwt.broker.private.key=file:///path/broker-private.key
+
+# Alternatively:
+# jwt.broker.token.mode=SECRET
+# jwt.broker.secret.key=file:///path/broker-secret.key
+
+```
+
+* `backend.jwt.token`: token for the superuser. You need to configure this parameter during cluster initialization.
+* `jwt.broker.token.mode`: mode for generating the token; one of PUBLIC, PRIVATE, and SECRET.
+* `jwt.broker.public.key`: configure this option if you use the PUBLIC mode.
+* `jwt.broker.private.key`: configure this option if you use the PRIVATE mode.
+* `jwt.broker.secret.key`: configure this option if you use the SECRET mode.
+For more information, see [Token Authentication Admin of Pulsar](security-token-admin.md).
+
+Run the following Docker command to mount the configuration file and key files.
+
+```bash
+
+docker pull apachepulsar/pulsar-manager:v0.2.0
+docker run -it \
+ -p 9527:9527 -p 7750:7750 \
+ -v /your-path/application.properties:/pulsar-manager/pulsar-manager/application.properties \
+ -v /your-path/private.key:/pulsar-manager/private.key \
+ -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \
+ apachepulsar/pulsar-manager:v0.2.0
+
+```
+
+### Set the administrator account and password
+
+```bash
+
+CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token)
+curl \
+ -H "X-XSRF-TOKEN: $CSRF_TOKEN" \
+ -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
+ -H "Content-Type: application/json" \
+ -X PUT http://localhost:7750/pulsar-manager/users/superuser \
+ -d '{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}'
+
+```
+
+The request body in the curl command:
+
+```json
+
+{"name": "admin", "password": "apachepulsar", "description": "test", "email": "username@test.org"}
+
+```
+
+- `name` is the Pulsar Manager login username, here `admin`.
+- `password` is the password for this Pulsar Manager user, here `apachepulsar`. The password must be at least 6 characters long.
+
+
+
+### Configure the environment
+1. Log in to the system: visit http://localhost:9527 and log in. The default account is `admin/apachepulsar`.
+
+2. Click the "New Environment" button to add an environment.
+
+3. Input the "Environment Name". The environment name is used to identify an environment.
+
+4. Input the "Service URL". The Service URL is the admin service URL of your Pulsar cluster.
+
+
+## Other Installation
+### Bare-metal installation
+
+When using binary packages for direct deployment, you can follow these steps.
+
+- Download and unzip the binary package, which is available on the [Pulsar Download](/download) page.
+
+ ```bash
+
+ wget https://dist.apache.org/repos/dist/release/pulsar/pulsar-manager/pulsar-manager-0.2.0/apache-pulsar-manager-0.2.0-bin.tar.gz
+ tar -zxvf apache-pulsar-manager-0.2.0-bin.tar.gz
+
+ ```
+
+- Extract the back-end service binary package and place the front-end resources in the back-end service directory.
+
+ ```bash
+
+ cd pulsar-manager
+ tar -zxvf pulsar-manager.tar
+ cd pulsar-manager
+ cp -r ../dist ui
+
+ ```
+
+- Modify `application.properties` configuration on demand.
+
+ > If you don't want to modify the `application.properties` file, you can add the configuration to the startup parameters instead, for example `./bin/pulsar-manager --backend.jwt.token=token`. This is a capability of the Spring Boot framework.
+
+- Start Pulsar Manager
+
+ ```bash
+
+ ./bin/pulsar-manager
+
+ ```
+
+### Custom docker image installation
+
+You can find the Dockerfile in the [docker](https://github.com/apache/pulsar-manager/tree/master/docker) directory of the pulsar-manager repository and build an image from the source code as well:
+
+ ```bash
+
+ git clone https://github.com/apache/pulsar-manager
+ cd pulsar-manager/front-end
+ npm install --save
+ npm run build:prod
+ cd ..
+ ./gradlew build -x test
+ cd ..
+ docker build -f docker/Dockerfile --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=latest --build-arg VERSION=latest -t apachepulsar/pulsar-manager .
+
+ ```
+
+## Configuration
+
+
+
+| application.properties | System env on Docker Image | Desc | Example |
+| ----------------------------------- | -------------------------- | ------------------------------------------------------------ | ------------------------------------------------- |
+| backend.jwt.token | JWT_TOKEN | token for the superuser. You need to configure this parameter during cluster initialization. | `token` |
+| jwt.broker.token.mode | N/A | multiple modes of generating token, including PUBLIC, PRIVATE, and SECRET. | `PUBLIC` or `PRIVATE` or `SECRET`. |
+| jwt.broker.public.key | PUBLIC_KEY | configure this option if you use the PUBLIC mode. | `file:///path/broker-public.key` |
+| jwt.broker.private.key | PRIVATE_KEY | configure this option if you use the PRIVATE mode. | `file:///path/broker-private.key` |
+| jwt.broker.secret.key | SECRET_KEY | configure this option if you use the SECRET mode. | `file:///path/broker-secret.key` |
+| spring.datasource.driver-class-name | DRIVER_CLASS_NAME | the driver class name of the database. | `org.postgresql.Driver` |
+| spring.datasource.url | URL | the JDBC URL of your database. | `jdbc:postgresql://127.0.0.1:5432/pulsar_manager` |
+| spring.datasource.username | USERNAME | the username of database. | `postgres` |
+| spring.datasource.password | PASSWORD | the password of database. | `postgres` |
+| N/A | LOG_LEVEL | the level of log. | DEBUG |
+* For more information about backend configurations, see [here](https://github.com/apache/pulsar-manager/blob/master/src/README).
+* For more information about frontend configurations, see [here](https://github.com/apache/pulsar-manager/tree/master/front-end).
+
diff --git a/site2/website/versioned_docs/version-2.10.x/administration-stats.md b/site2/website/versioned_docs/version-2.10.x/administration-stats.md
new file mode 100644
index 0000000000000..ac0c03602f36d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/administration-stats.md
@@ -0,0 +1,64 @@
+---
+id: administration-stats
+title: Pulsar stats
+sidebar_label: "Pulsar statistics"
+original_id: administration-stats
+---
+
+## Partitioned topics
+
+|Stat|Description|
+|---|---|
+|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
+|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
+|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
+|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
+|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.|
+|storageSize| The sum of storage size of the ledgers for this topic.|
+|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
+|producerId| Internal identifier for this producer on this topic.|
+|producerName| Internal identifier for this producer, generated by the client library.|
+|address| IP address and source port for the connection of this producer.|
+|connectedSince| Timestamp when this producer was created or last reconnected.|
+|subscriptions| The list of all local subscriptions to the topic.|
+|my-subscription| The name of this subscription (client defined).|
+|msgBacklog| The count of messages in backlog for this subscription.|
+|type| This subscription type.|
+|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
+|consumers| The list of connected consumers for this subscription.|
+|consumerName| Internal identifier for this consumer, generated by the client library.|
+|availablePermits| The number of messages this consumer has space for in the listen queue of client library. A value of 0 means the queue of client library is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication| This section gives the stats for cross-colo replication of this topic.|
+|replicationBacklog| The outbound replication backlog in messages.|
+|connected| Whether the outbound replicator is connected.|
+|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
+|inboundConnection| The IP and port of the broker in the publisher connection of remote cluster to this broker. |
+|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
+
+
+## Topics
+
+|Stat|Description|
+|---|---|
+|entriesAddedCounter| Messages published since this broker loads this topic.|
+|numberOfEntries| Total number of messages being tracked.|
+|totalSize| Total storage size in bytes of all messages.|
+|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
+|currentLedgerSize| Size in bytes of messages written to ledger currently open for writing.|
+|lastLedgerCreatedTimestamp| Time when the last ledger was created.|
+|lastLedgerCreationFailureTimestamp| Time when the last ledger creation failed.|
+|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
+|pendingAddEntriesCount| How many messages have (asynchronous) write requests pending completion.|
+|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, the ledger is open or being opened but has no entries written yet.|
+|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
+|ledgers| The ordered list of all ledgers for this topic holding its messages.|
+|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
+|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
+|readPosition| The latest position of the subscriber for reading messages.|
+|waitingReadOp| This is true when the subscription reads the latest message that is published to the topic and waits on new messages to be published.|
+|pendingReadOps| The counter of outstanding read requests to BookKeeper currently in progress.|
+|messagesConsumedCounter| Number of messages this cursor acks since this broker loads this topic.|
+|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
+|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
+|individuallyDeletedMessages| If acks are received out of order, shows the ranges of messages acked between the markDeletePosition and the readPosition.|
+|lastLedgerSwitchTimestamp| The last time the cursor ledger is rolled over.|
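+
+These stats can be retrieved with the admin CLI (the topic name below is illustrative):
+
+```shell
+
+# Topic-level stats: msgRateIn, publishers, subscriptions, consumers, ...
+pulsar-admin topics stats persistent://public/default/my-topic
+
+# Internal storage stats: ledgers, cursors, markDeletePosition, ...
+pulsar-admin topics stats-internal persistent://public/default/my-topic
+
+```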
diff --git a/site2/website/versioned_docs/version-2.10.x/administration-upgrade.md b/site2/website/versioned_docs/version-2.10.x/administration-upgrade.md
new file mode 100644
index 0000000000000..72d136b6460f6
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/administration-upgrade.md
@@ -0,0 +1,168 @@
+---
+id: administration-upgrade
+title: Upgrade Guide
+sidebar_label: "Upgrade"
+original_id: administration-upgrade
+---
+
+## Upgrade guidelines
+
+Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. During an upgrade, you need to pay attention to bookies (stateful) and to brokers and proxies (stateless).
+
+The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
+
+- Backup all your configuration files before upgrading.
+- Read the guide entirely, make a plan, and then execute the plan. When you make the upgrade plan, take your specific requirements and environment into consideration.
+- Pay attention to the upgrading order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
+- If `autorecovery` is enabled, you need to disable `autorecovery` in the upgrade process, and re-enable it after completing the process.
+- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
+- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. After you upgrade the canary nodes, run them for a while to ensure that they work correctly.
+- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
+
+> Note: Currently, Apache Pulsar versions are compatible with one another.
+
+## Upgrade sequence
+
+To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
+
+1. Upgrade ZooKeeper (optional)
+- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.
+- Rolling upgrade: rollout the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
+2. Upgrade bookies
+- Canary test: test an upgraded version in one or a small set of bookies.
+- Rolling upgrade:
+ - a. Disable `autorecovery` with the following command.
+
+ ```shell
+
+ bin/bookkeeper shell autorecovery -disable
+
+ ```
+
+
+ - b. Rollout the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.
+ - c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
+
+ ```shell
+
+ bin/bookkeeper shell autorecovery -enable
+
+ ```
+
+3. Upgrade brokers
+- Canary test: test an upgraded version in one or a small set of brokers.
+- Rolling upgrade: rollout the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
+4. Upgrade proxies
+- Canary test: test an upgraded version in one or a small set of proxies.
+- Rolling upgrade: rollout the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.
+
+## Upgrade ZooKeeper (optional)
+When you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.
+
+### Canary test
+
+You can test an upgraded version in one of ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
+
+To upgrade a ZooKeeper server to a new version, complete the following steps:
+
+1. Stop a ZooKeeper server.
+2. Upgrade the binary and configuration files.
+3. Start the ZooKeeper server with the new binary files.
+4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify if it works as expected.
+5. Run the ZooKeeper server for a few days and observe it to make sure the ZooKeeper cluster runs well.
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.
+
+### Upgrade all ZooKeeper servers
+
+After you canary test the upgrade on one ZooKeeper server, you can upgrade all ZooKeeper servers in your cluster.
+
+You can upgrade all ZooKeeper servers one by one, following the steps in the canary test.
+
+## Upgrade bookies
+
+When you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
+For more details, see the Apache BookKeeper [upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
+
+### Canary test
+
+You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
+
+To upgrade a bookie to a new version, complete the following steps:
+
+1. Stop a bookie.
+2. Upgrade the binary and configuration files.
+3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for read workload.
+
+ ```shell
+
+ bin/pulsar bookie --readOnly
+
+ ```
+
+4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
+
+ ```shell
+
+ bin/pulsar bookie
+
+ ```
+
+5. Observe and make sure the cluster serves both write and read traffic.
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node through autorecovery.
+
+### Upgrade all bookies
+
+After you canary test the upgrade on some bookies in your cluster, you can upgrade all bookies in your cluster.
+
+Before upgrading, you have to choose between two scenarios: a rolling upgrade and a downtime upgrade.
+
+In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
+
+In both scenarios, the procedure for each bookie is the same.
+
+1. Stop the bookie.
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the bookie.
+
+> **Advanced operations**
+> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
+
+## Upgrade brokers and proxies
+
+The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
+
+### Canary test
+
+You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
+
+To upgrade to a new version, complete the following steps:
+
+1. Stop a broker (or proxy).
+2. Upgrade the binary and configuration file.
+3. Start a broker (or proxy).
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic broker (or proxy) node, revert to the old version, and restart the broker (or proxy).
+
+### Upgrade all brokers or proxies
+
+After you canary test the upgrade on some brokers or proxies in your cluster, you can upgrade all brokers or proxies in the cluster.
+
+Before upgrading, you have to choose between two scenarios: a rolling upgrade and a downtime upgrade.
+
+In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during upgrade.
+
+In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
+
+In both scenarios, the upgrade procedure is the same for each broker or proxy:
+
+1. Stop the broker or proxy.
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the broker or proxy.
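+
+The capacity caveat for batched rolling upgrades can be made concrete with a quick arithmetic check. This is an illustrative sketch only: the broker count, batch size, and peak-utilization figure are hypothetical, and the 80% threshold is an assumed safety margin.
+
+```shell
+
+check_upgrade_batch() {
+  # total brokers, brokers upgraded per batch, peak utilization in percent
+  local total=$1 batch=$2 peak_util_pct=$3
+  local remaining=$(( total - batch ))
+  # Load concentrates on the brokers that stay up during the batch.
+  local projected=$(( peak_util_pct * total / remaining ))
+  if [ "$projected" -lt 80 ]; then
+    echo "OK: ${remaining}/${total} brokers stay up, projected utilization ${projected}%"
+  else
+    echo "UNSAFE: projected utilization ${projected}% - use a smaller batch"
+  fi
+}
+
+check_upgrade_batch 12 3 50   # 12 brokers, 3 per batch, 50% peak load
+
+```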
diff --git a/site2/website/versioned_docs/version-2.10.x/administration-zk-bk.md b/site2/website/versioned_docs/version-2.10.x/administration-zk-bk.md
new file mode 100644
index 0000000000000..0530b258dca2c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/administration-zk-bk.md
@@ -0,0 +1,378 @@
+---
+id: administration-zk-bk
+title: ZooKeeper and BookKeeper administration
+sidebar_label: "ZooKeeper and BookKeeper"
+original_id: administration-zk-bk
+---
+
+Pulsar relies on two external systems for essential tasks:
+
+* [ZooKeeper](https://zookeeper.apache.org/) is responsible for a wide variety of configuration-related and coordination-related tasks.
+* [BookKeeper](http://bookkeeper.apache.org/) is responsible for [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data.
+
+ZooKeeper and BookKeeper are both open-source [Apache](https://www.apache.org/) projects.
+
+> Skip to the [How Pulsar uses ZooKeeper and BookKeeper](#how-pulsar-uses-zookeeper-and-bookkeeper) section below for a more schematic explanation of the role of these two systems in Pulsar.
+
+
+## ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum.
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
+
+To deploy a Pulsar instance, you need to stand up one local ZooKeeper cluster *per Pulsar cluster*.
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
+
+```properties
+
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+
+```
+
+On each host, you need to specify the node ID in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set the `myid` value like this:
+
+```shell
+
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+
+```
+
+On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and so on.
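+
+The `myid` assignment can be scripted so that each host derives its ID from its position in the quorum list. A minimal sketch, assuming the three example hosts above and the default `data/zookeeper` location (shown as a dry run that only prints the command to run):
+
+```shell
+
+myid_for_host() {
+  # A host's position in the quorum list determines its myid.
+  local host=$1 n=1 zk
+  for zk in zk1.us-west.example.com zk2.us-west.example.com zk3.us-west.example.com; do
+    if [ "$zk" = "$host" ]; then
+      echo "$n"
+      return 0
+    fi
+    n=$(( n + 1 ))
+  done
+  return 1
+}
+
+# Dry run: print the command you would run on zk2.us-west.example.com.
+host=zk2.us-west.example.com
+echo "mkdir -p data/zookeeper && echo $(myid_for_host "$host") > data/zookeeper/myid"
+
+```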
+
+Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+
+$ bin/pulsar-daemon start zookeeper
+
+```
+
+### Deploy configuration store
+
+The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you need to stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#deploy-local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+
+```
+
+As before, create the `myid` file for each server in `data/global-zookeeper/myid`.
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZooKeeper quorum members are spread across at least 3 regions, and that servers in the other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the clusters `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`, and that each cluster has its own local ZooKeeper servers named like this:
+
+```
+
+zk[1-3].${CLUSTER}.example.com
+
+```
+
+In this scenario, you want to pick the quorum participants from a few clusters and let all the others be ZooKeeper observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This guarantees that writes to the configuration store remain possible even if one of these regions is unreachable.
+
+The ZK configuration in all the servers looks like:
+
+```properties
+
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+
+```
+
+Additionally, ZK observers need to have:
+
+```properties
+
+peerType=observer
+
+```
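+
+As a sanity check on the 3/2/2 split of voting participants shown above, you can verify that losing any single region still leaves a ZooKeeper majority (observers do not vote). A minimal sketch with the participant counts hard-coded:
+
+```shell
+
+quorum_survives_region_loss() {
+  local total=7 majority lost
+  majority=$(( total / 2 + 1 ))   # 4 of 7 voting servers
+  # Voting participants per region: us-west=3, us-central=2, us-east=2.
+  for lost in 3 2 2; do
+    if [ $(( total - lost )) -lt "$majority" ]; then
+      echo "no"
+      return 1
+    fi
+  done
+  echo "yes"
+}
+
+quorum_survives_region_loss
+
+```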
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon)
+
+```shell
+
+$ bin/pulsar-daemon start configuration-store
+
+```
+
+### ZooKeeper configuration
+
+In Pulsar, ZooKeeper configuration is handled by two separate configuration files in the `conf` directory of your Pulsar installation:
+* The `conf/zookeeper.conf` file handles the configuration for local ZooKeeper.
+* The `conf/global-zookeeper.conf` file handles the configuration for the configuration store.
+See [parameters](reference-configuration.md#zookeeper) for more details.
+
+#### Configure batching operations
+Using the batching operations reduces the remote procedure call (RPC) traffic between ZooKeeper client and servers. It also reduces the number of write transactions, because each batching operation corresponds to a single ZooKeeper transaction, containing multiple read and write operations.
+
+The following figure demonstrates a basic benchmark of batching read/write operations that can be requested to ZooKeeper in one second:
+
+![Zookeeper batching benchmark](/assets/zookeeper-batching.png)
+
+To enable batching operations, set the [`metadataStoreBatchingEnabled`](reference-configuration.md#broker) parameter to `true` on the broker side.
+
+
+## BookKeeper
+
+BookKeeper stores all durable messages in Pulsar. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that guarantees read consistency of independent message logs called ledgers. Individual BookKeeper servers are also called *bookies*.
+
+> To manage message persistence, retention, and expiry in Pulsar, refer to the [retention and expiry cookbook](cookbooks-retention-expiry.md).
+
+### Hardware requirements
+
+Bookie hosts store message data on disk. To provide optimal performance, ensure that the bookies have a suitable hardware configuration. The following are two key dimensions of bookie hardware capacity:
+
+- Disk I/O capacity (read/write)
+- Storage capacity
+
+By default, message entries written to bookies are synced to disk before an acknowledgement is returned to the Pulsar broker. To ensure low write latency, BookKeeper is designed to use multiple devices:
+
+- A **journal** to ensure durability. For sequential writes, it is critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+- A **ledger storage device** stores data. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and ledger storage is read only when consumers drain a backlog. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
+
+### Configure BookKeeper
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. When you configure each bookie, ensure that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for local ZooKeeper of the Pulsar cluster.
+
+The minimum configuration changes required in `conf/bookkeeper.conf` are as follows:
+
+:::note
+
+Set `journalDirectory` and `ledgerDirectories` carefully. It is difficult to change them later.
+
+:::
+
+```properties
+
+# Change to point to journal disk mount point
+journalDirectory=data/bookkeeper/journal
+
+# Point to ledger storage disk mount point
+ledgerDirectories=data/bookkeeper/ledgers
+
+# Point to local ZK quorum
+zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+
+# It is recommended to set this parameter. Otherwise, BookKeeper can't start normally
+# in certain environments (for example, Huawei Cloud).
+advertisedAddress=
+
+```
+
+To change the ZooKeeper root path that BookKeeper uses, use `zkLedgersRootPath=/MY-PREFIX/ledgers` instead of `zkServers=localhost:2181/MY-PREFIX`.
+
+> For more information about BookKeeper, refer to the official [BookKeeper docs](http://bookkeeper.apache.org).
+
+### Deploy BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar. Each Pulsar cluster has its own ensemble of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Start bookies manually
+
+You can start a bookie in the foreground or as a background daemon.
+
+To start a bookie in the foreground, use the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool:
+
+```bash
+
+$ bin/bookkeeper bookie
+
+```
+
+To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start bookie
+
+```
+
+You can verify whether the bookie works properly with the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+
+```shell
+
+$ bin/bookkeeper shell bookiesanity
+
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, reads them back, and finally deletes the ledger.
+
+### Decommission bookies cleanly
+
+Before you decommission a bookie, you need to check your environment and meet the following requirements.
+
+1. Ensure that the state of your cluster supports decommissioning the target bookie. Check whether `EnsembleSize >= Write Quorum >= Ack Quorum` still holds with one less bookie.
+
+2. Ensure that the target bookie appears in the output of the `listbookies` command.
+
+3. Ensure that no other process (such as an upgrade) is ongoing.
+
+And then you can decommission bookies safely. To decommission bookies, complete the following steps.
+
+1. Log in to the bookie node and check whether there are under-replicated ledgers. The decommission command forces replication of any under-replicated ledgers.
+`$ bin/bookkeeper shell listunderreplicated`
+
+2. Stop the bookie by killing the bookie process. If you deploy in a Kubernetes environment, make sure that no liveness/readiness probes are set up for the bookies that would spin them back up.
+
+3. Run the decommission command.
+ - If you have logged in to the node to be decommissioned, you do not need to provide `-bookieid`.
+   - If you are running the decommission command for the target bookie node from another bookie node, pass the target bookie ID with the `-bookieid` argument.
+ `$ bin/bookkeeper shell decommissionbookie`
+ or
+ `$ bin/bookkeeper shell decommissionbookie -bookieid `
+
+4. Validate that no ledgers are on the decommissioned bookie.
+`$ bin/bookkeeper shell listledgers -bookieid `
+
+You can run the following command to check if the bookie you have decommissioned is listed in the bookies list:
+
+```bash
+
+./bookkeeper shell listbookies -rw -h
+./bookkeeper shell listbookies -ro -h
+
+```
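+
+The final check can be scripted by comparing the decommissioned bookie against the current bookie list. This is an illustrative sketch: the bookie addresses are hypothetical, and in practice `$bookies` would be parsed from the `listbookies` output above.
+
+```shell
+
+verify_decommissioned() {
+  local target=$1 bookies=$2
+  # grep -Fxq matches the whole line as a fixed string.
+  if printf '%s\n' "$bookies" | grep -Fxq "$target"; then
+    echo "still registered: ${target}"
+    return 1
+  fi
+  echo "decommission complete: ${target}"
+}
+
+# Hypothetical remaining bookie list.
+bookies="bk1.example.com:3181
+bk2.example.com:3181"
+verify_decommissioned "bk3.example.com:3181" "$bookies"
+
+```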
+
+## BookKeeper persistence policies
+
+In Pulsar, you can set *persistence policies* at the namespace level, which determines how BookKeeper handles persistent storage of messages. Policies determine four things:
+
+* The number of acks (guaranteed copies) to wait for each ledger entry.
+* The number of bookies to use for a topic.
+* The number of writes to make for each ledger entry.
+* The throttling rate for mark-delete operations.
+
+### Set persistence policies
+
+You can set persistence policies for BookKeeper at the [namespace](reference-terminology.md#namespace) level.
+
+#### Pulsar-admin
+
+Use the [`set-persistence`](reference-pulsar-admin.md#namespaces-set-persistence) subcommand and specify a namespace as well as any policies that you want to apply. The available flags are:
+
+Flag | Description | Default
+:----|:------------|:-------
+`-a`, `--bookkeeper-ack-quorum` | The number of acks (guaranteed copies) to wait on for each entry | 0
+`-e`, `--bookkeeper-ensemble` | The number of [bookies](reference-terminology.md#bookie) to use for topics in the namespace | 0
+`-w`, `--bookkeeper-write-quorum` | The number of writes to make for each entry | 0
+`-r`, `--ml-mark-delete-max-rate` | Throttling rate for mark-delete operations (0 means no throttle) | 0
+
+The following is an example:
+
+```shell
+
+$ pulsar-admin namespaces set-persistence my-tenant/my-ns \
+  --bookkeeper-ensemble 3 \
+  --bookkeeper-write-quorum 2 \
+  --bookkeeper-ack-quorum 2
+
+```
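+
+BookKeeper expects `ensemble >= write quorum >= ack quorum`, so it can help to validate a proposed policy before applying it. A minimal sketch of that check (the example values are hypothetical):
+
+```shell
+
+valid_persistence() {
+  local ensemble=$1 write=$2 ack=$3
+  # The ensemble must accommodate the write quorum, which must
+  # accommodate the ack quorum.
+  if [ "$ensemble" -ge "$write" ] && [ "$write" -ge "$ack" ]; then
+    echo valid
+  else
+    echo invalid
+  fi
+}
+
+valid_persistence 3 2 2   # a consistent policy
+valid_persistence 2 2 3   # ack quorum larger than the ensemble
+
+```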
+
+#### REST API
+
+{@inject: endpoint|POST|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/setPersistence?version=@pulsar:version_number@}
+
+#### Java
+
+```java
+
+int bkEnsemble = 3;
+int bkWriteQuorum = 2;
+int bkAckQuorum = 2;
+double markDeleteRate = 0.7;
+PersistencePolicies policies =
+  new PersistencePolicies(bkEnsemble, bkWriteQuorum, bkAckQuorum, markDeleteRate);
+admin.namespaces().setPersistence(namespace, policies);
+
+```
+
+### List persistence policies
+
+You can see which persistence policy currently applies to a namespace.
+
+#### Pulsar-admin
+
+Use the [`get-persistence`](reference-pulsar-admin.md#namespaces-get-persistence) subcommand and specify the namespace.
+
+The following is an example:
+
+```shell
+
+$ pulsar-admin namespaces get-persistence my-tenant/my-ns
+{
+ "bookkeeperEnsemble": 1,
+ "bookkeeperWriteQuorum": 1,
+ "bookkeeperAckQuorum", 1,
+ "managedLedgerMaxMarkDeleteRate": 0
+}
+
+```
+
+#### REST API
+
+{@inject: endpoint|GET|/admin/v2/namespaces/:tenant/:namespace/persistence|operation/getPersistence?version=@pulsar:version_number@}
+
+#### Java
+
+```java
+
+PersistencePolicies policies = admin.namespaces().getPersistence(namespace);
+
+```
+
+## How Pulsar uses ZooKeeper and BookKeeper
+
+This diagram illustrates the role of ZooKeeper and BookKeeper in a Pulsar cluster:
+
+![ZooKeeper and BookKeeper](/assets/pulsar-system-architecture.png)
+
+Each Pulsar cluster consists of one or more message brokers. Each broker relies on an ensemble of bookies.
diff --git a/site2/website/versioned_docs/version-2.10.x/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.10.x/client-libraries-cgo.md
new file mode 100644
index 0000000000000..feee2cac3bafb
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/client-libraries-cgo.md
@@ -0,0 +1,581 @@
+---
+id: client-libraries-cgo
+title: Pulsar CGo client
+sidebar_label: "CGo(deprecated)"
+original_id: client-libraries-cgo
+---
+
+> The CGo client has been deprecated since version 2.7.0. If possible, use the [Go client](client-libraries-go.md) instead.
+
+You can use the Pulsar CGo client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+
+Currently, the following Go clients are maintained in two repositories.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
+| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+
+> **API docs available as well**
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
+
+## Installation
+
+### Requirements
+
+Pulsar Go client library is based on the C++ client library. Follow
+the instructions for [C++ library](client-libraries-cpp.md) for installing the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb) or [Homebrew packages](client-libraries-cpp.md#macos).
+
+### Install go package
+
+> **Compatibility Warning**
+> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
+
+You can install the `pulsar` library locally using `go get`. Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client. You'll need a C++ client library that matches master.
+
+```bash
+
+$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
+
+```
+
+Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
+
+```bash
+
+$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v@pulsar:version@
+
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+
+import "github.com/apache/pulsar/pulsar-client-go/pulsar"
+
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+
+pulsar://localhost:6650
+
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+
+pulsar://pulsar.us-west.example.com:6650
+
+```
+
+If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
+
+```http
+
+pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+```go
+
+import (
+ "log"
+ "runtime"
+
+ "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+ client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+ OperationTimeoutSeconds: 5,
+ MessageListenerThreads: runtime.NumCPU(),
+ })
+
+ if err != nil {
+ log.Fatalf("Could not instantiate Pulsar client: %v", err)
+ }
+}
+
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
+`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
+`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
+`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
+`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
+`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
+`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
+`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
+`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
+`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: "my-topic",
+})
+
+if err != nil {
+ log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+msg := pulsar.ProducerMessage{
+ Payload: []byte("Hello, Pulsar"),
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+ log.Fatalf("Producer could not send message: %v", err)
+}
+
+```
+
+> **Blocking operation**
+> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
+
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
+`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | `(MessageID, error)`
+`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
+`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Sends a message in asynchronous mode. The callback reports back the published message ID and any error encountered while publishing. |
+`LastSequenceID()` | Gets the last sequence ID published by this producer. This represents either the automatically assigned or custom sequence ID (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
+`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
+`Schema()` | | Schema
+
+Here's a more involved example usage of a producer:
+
+```go
+
+import (
+ "context"
+ "fmt"
+ "log"
+
+ "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+ // Instantiate a Pulsar client
+ client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+ })
+
+ if err != nil { log.Fatal(err) }
+
+ // Use the client to instantiate a producer
+ producer, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: "my-topic",
+ })
+
+ if err != nil { log.Fatal(err) }
+
+ ctx := context.Background()
+
+ // Send 10 messages synchronously and 10 messages asynchronously
+ for i := 0; i < 10; i++ {
+ // Create a message
+ msg := pulsar.ProducerMessage{
+ Payload: []byte(fmt.Sprintf("message-%d", i)),
+ }
+
+ // Attempt to send the message
+ if err := producer.Send(ctx, msg); err != nil {
+ log.Fatal(err)
+ }
+
+ // Create a different message to send asynchronously
+ asyncMsg := pulsar.ProducerMessage{
+ Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+ }
+
+ // Attempt to send the message asynchronously and handle the response
+ producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
+ if err != nil { log.Fatal(err) }
+
+ fmt.Printf("the %s successfully published", string(msg.Payload))
+ })
+ }
+}
+
+```
+
+### Producer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
+`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method. If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
+`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats. |
+`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
+`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
+`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
+`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
+`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
+`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
+`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
+`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as arguments and returns a partition index, i.e. a function signature of `func(Message, TopicMetadata) int`. |
+`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
+`BatchingMaxPublishDelay` | Set the time period within which sent messages will be batched, if batching is enabled. If set to a non-zero value, messages will be queued until this time interval elapses or the batch is full (see `BatchingMaxMessages`). | 1ms
+`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. If set to a value greater than 1, messages will be queued until this threshold is reached or the batch interval has elapsed. | 1000
+
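To make the routing rows above concrete, here is a small, self-contained sketch (not the client library's actual implementation) of how a `pulsar.JavaStringHash`-style router could pick a partition for a keyed message. The `javaStringHash` and `choosePartition` names are illustrative only:

```go
package main

import "fmt"

// javaStringHash mimics Java's String.hashCode(): h = 31*h + c
// for each character (exact for ASCII keys).
func javaStringHash(s string) int32 {
	var h int32
	for _, c := range s {
		h = 31*h + int32(c)
	}
	return h
}

// choosePartition sketches keyed routing: hash the key and take it
// modulo the partition count. Messages without a key would instead
// fall back to the round-robin mode described above.
func choosePartition(key string, numPartitions int) int {
	h := javaStringHash(key)
	if h < 0 {
		h = -h // sign-safe modulo, simplified for this sketch
	}
	return int(h) % numPartitions
}

func main() {
	for _, key := range []string{"user-1", "user-2", "user-3"} {
		fmt.Printf("key %q -> partition %d\n", key, choosePartition(key, 4))
	}
}
```

Because the hash depends only on the key, all messages with the same key land on the same partition, which is what preserves per-key ordering.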
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+
+msgChannel := make(chan pulsar.ConsumerMessage)
+
+consumerOpts := pulsar.ConsumerOptions{
+ Topic: "my-topic",
+ SubscriptionName: "my-subscription-1",
+ Type: pulsar.Exclusive,
+ MessageChannel: msgChannel,
+}
+
+consumer, err := client.Subscribe(consumerOpts)
+
+if err != nil {
+ log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+for cm := range msgChannel {
+ msg := cm.Message
+
+ fmt.Printf("Message ID: %s", msg.ID())
+ fmt.Printf("Message value: %s", string(msg.Payload()))
+
+ consumer.Ack(msg)
+}
+
+```
+
+> **Blocking operation**
+> When you create a new Pulsar consumer, the operation blocks (on a Go channel) until either the consumer is successfully created or an error occurs.
+
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation fails. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
+`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method blocks until the ack has been sent to the broker. After that, the messages are *not* redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
+`AckCumulativeID(MessageID)` | Acknowledges *all* the messages in the stream, up to and including the message with the provided message ID. This method blocks until the ack has been sent to the broker. After that, the messages are *not* redelivered to the consumer. | `error`
+`Nack(Message)` | Acknowledges the failure to process a single message. | `error`
+`NackID(MessageID)` | Acknowledges the failure to process a single message, identified by its message ID. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
+`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
+`Seek(msgID MessageID)` | Resets the subscription associated with this consumer to a specific message ID. The message ID can either point to a specific message or represent the first or last message in the topic. | `error`
+
+#### Receive example
+
+Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
+
+```go
+
+import (
+ "context"
+ "log"
+
+ "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+ // Instantiate a Pulsar client
+ client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+ })
+
+ if err != nil { log.Fatal(err) }
+
+ // Use the client object to instantiate a consumer
+ consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topic: "my-golang-topic",
+ SubscriptionName: "sub-1",
+ Type: pulsar.Exclusive,
+ })
+
+ if err != nil { log.Fatal(err) }
+
+ defer consumer.Close()
+
+ ctx := context.Background()
+
+ // Listen indefinitely on the topic
+ for {
+ msg, err := consumer.Receive(ctx)
+ if err != nil { log.Fatal(err) }
+
+ // Do something with the message
+ err = processMessage(msg)
+
+ if err == nil {
+ // Message processed successfully
+ consumer.Ack(msg)
+ } else {
+ // Failed to process messages
+ consumer.Nack(msg)
+ }
+ }
+}
+
+```
+
+### Consumer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
+`Topics` | Specify a list of topics this consumer subscribes to. A topic, a list of topics, or a topics pattern is required when subscribing |
+`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. A topic, a list of topics, or a topics pattern is required when subscribing |
+`SubscriptionName` | The subscription name for this consumer |
+`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats |
+`Name` | The name of the consumer |
+`AckTimeout` | Set the timeout for unacked messages | 0
+`NackRedeliveryDelay` | The delay after which to redeliver messages that failed to be processed (see `Consumer.Nack()`) | 1 minute
+`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
+`SubscriptionInitPos` | The initial position of the cursor when subscribing | `Latest`
+`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
+`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
+`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
+
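The effect of `ReadCompacted` can be illustrated with a short, self-contained sketch (this simulates what compaction does to a backlog; it is not the client library's code). For the compacted portion of a topic, only the latest value per key survives:

```go
package main

import "fmt"

type entry struct {
	key     string
	payload string
}

// compact simulates topic compaction on a backlog: for each key,
// only the most recent payload is retained, and the relative order
// of the surviving (latest) entries is preserved.
func compact(backlog []entry) []entry {
	latest := make(map[string]int)
	for i, e := range backlog {
		latest[e.key] = i // remember the last index seen per key
	}
	var out []entry
	for i, e := range backlog {
		if latest[e.key] == i {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	backlog := []entry{
		{"user-1", "v1"},
		{"user-2", "v1"},
		{"user-1", "v2"}, // supersedes user-1 v1
	}
	// A consumer with ReadCompacted enabled would see only the
	// surviving entries for the compacted range of the topic.
	for _, e := range compact(backlog) {
		fmt.Printf("%s => %s\n", e.key, e.payload)
	}
}
```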
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+ Topic: "my-golang-topic",
+ StartMessageID: pulsar.LatestMessage,
+})
+
+```
+
+> **Blocking operation**
+> When you create a new Pulsar reader, the operation blocks (on a Go channel) until either the reader is successfully created or an error occurs.
+
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Checks whether there is a message available to read from the reader's current position | `(bool, error)`
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+#### "Next" example
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+
+import (
+ "context"
+ "log"
+
+ "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+ // Instantiate a Pulsar client
+ client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+ })
+
+ if err != nil { log.Fatalf("Could not create client: %v", err) }
+
+ // Use the client to instantiate a reader
+ reader, err := client.CreateReader(pulsar.ReaderOptions{
+ Topic: "my-golang-topic",
+ StartMessageID: pulsar.EarliestMessage,
+ })
+
+ if err != nil { log.Fatalf("Could not create reader: %v", err) }
+
+ defer reader.Close()
+
+ ctx := context.Background()
+
+ // Listen on the topic for incoming messages
+ for {
+ msg, err := reader.Next(ctx)
+ if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+ // Process the message
+ }
+}
+
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+
+// Read the last saved message ID from an external store as a []byte
+lastSavedID := readLastSavedID() // placeholder for your own lookup logic
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+ Topic: "my-golang-topic",
+ StartMessageID: pulsar.DeserializeMessageID(lastSavedID),
+})
+
+```
+
+### Reader configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
+`Name` | The name of the reader |
+`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
+`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
+`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
+`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+
+msg := pulsar.ProducerMessage{
+ Payload: []byte("Here is some message data"),
+ Key: "message-key",
+ Properties: map[string]string{
+ "foo": "bar",
+ },
+ EventTime: time.Now(),
+ ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+ log.Fatalf("Could not publish message due to: %v", err)
+}
+
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Value` | The value for schema-based messages (`Value interface{}`). `Value` and `Payload` are mutually exclusive.
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | Set the sequence id to assign to the current message
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+
+opts := pulsar.ClientOptions{
+ URL: "pulsar+ssl://my-cluster.com:6651",
+ TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+ Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+
+```
+
+## Schema
+
+This example shows how to create a producer and consumer with schema.
+
+```go
+
+var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
+ "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
+jsonSchema := NewJsonSchema(exampleSchemaDef, nil)
+// create producer
+producer, err := client.CreateProducerWithSchema(ProducerOptions{
+ Topic: "jsonTopic",
+}, jsonSchema)
+err = producer.Send(context.Background(), ProducerMessage{
+ Value: &testJson{
+ ID: 100,
+ Name: "pulsar",
+ },
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer producer.Close()
+//create consumer
+var s testJson
+consumerJS := NewJsonSchema(exampleSchemaDef, nil)
+consumer, err := client.SubscribeWithSchema(ConsumerOptions{
+ Topic: "jsonTopic",
+ SubscriptionName: "sub-2",
+}, consumerJS)
+if err != nil {
+ log.Fatal(err)
+}
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+ log.Fatal(err)
+}
+err = msg.GetValue(&s)
+if err != nil {
+ log.Fatal(err)
+}
+fmt.Println(s.ID) // output: 100
+fmt.Println(s.Name) // output: pulsar
+defer consumer.Close()
+
+```
+
diff --git a/site2/website/versioned_docs/version-2.10.x/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.10.x/client-libraries-cpp.md
new file mode 100644
index 0000000000000..f5b8ae3678de2
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/client-libraries-cpp.md
@@ -0,0 +1,765 @@
+---
+id: client-libraries-cpp
+title: Pulsar C++ client
+sidebar_label: "C++"
+original_id: client-libraries-cpp
+---
+
+You can use Pulsar C++ client to create Pulsar producers and consumers in C++.
+
+All the methods in producer, consumer, and reader of a C++ client are thread-safe.
+
+## Supported platforms
+
+Pulsar C++ client is supported on **Linux**, **MacOS**, and **Windows** platforms.
+
+[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).
+
+
+## Linux
+
+:::note
+
+You can choose one of the following installation methods based on your needs: Compilation, Install RPM, or Install Debian.
+
+:::
+
+### Compilation
+
+#### System requirements
+
+You need to install the following components before using the C++ client:
+
+* [CMake](https://cmake.org/)
+* [Boost](http://www.boost.org/)
+* [Protocol Buffers](https://developers.google.com/protocol-buffers/) >= 3
+* [libcurl](https://curl.se/libcurl/)
+* [Google Test](https://github.com/google/googletest)
+
+1. Clone the Pulsar repository.
+
+```shell
+
+$ git clone https://github.com/apache/pulsar
+
+```
+
+2. Install all necessary dependencies.
+
+```shell
+
+$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \
+ libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev
+
+```
+
+3. Compile and install [Google Test](https://github.com/google/googletest).
+
+```shell
+
+# libgtest-dev version is 1.18.0 or above
+$ cd /usr/src/googletest
+$ sudo cmake .
+$ sudo make
+$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/
+
+# less than 1.18.0
+$ cd /usr/src/gtest
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgtest.a /usr/lib
+
+$ cd /usr/src/gmock
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgmock.a /usr/lib
+
+```
+
+4. Compile the Pulsar client library for C++ inside the Pulsar repository.
+
+```shell
+
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+
+```
+
+After you install the components successfully, the files `libpulsar.so` and `libpulsar.a` are in the `lib` folder of the repository. The tools `perfProducer` and `perfConsumer` are in the `perf` directory.
+
+### Install Dependencies
+
+> Since 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can download and install those packages directly.
+
+After you download and install RPM or DEB, the `libpulsar.so`, `libpulsarnossl.so`, `libpulsar.a`, and `libpulsarwithdeps.a` libraries are in your `/usr/lib` directory.
+
+By default, they are built under `${PULSAR_HOME}/pulsar-client-cpp`. You can build them with the following command:
+
+`cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`
+
+These libraries rely on some other libraries. If you want to get detailed version of dependencies, see [RPM](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) or [DEB](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile) files.
+
+1. `libpulsar.so` is a shared library, containing statically linked `boost` and `openssl`. It also dynamically links all other necessary libraries. You can use this Pulsar library with the command below.
+
+```bash
+
+ g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include
+
+```
+
+2. `libpulsarnossl.so` is a shared library, similar to `libpulsar.so` except that the libraries `openssl` and `crypto` are dynamically linked. You can use this Pulsar library with the command below.
+
+```bash
+
+ g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib
+
+```
+
+3. `libpulsar.a` is a static library. You need to load dependencies before using this library. You can use this Pulsar library with the command below.
+
+```bash
+
+ g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz
+
+```
+
+4. `libpulsarwithdeps.a` is a static library based on `libpulsar.a` that additionally bundles the dependencies `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz`. You can use this Pulsar library with the command below.
+
+```bash
+
+ g++ --std=c++11 PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread -I/usr/local/ssl/include -L/usr/local/ssl/lib
+
+```
+
+`libpulsarwithdeps.a` does not include the OpenSSL-related libraries `libssl` and `libcrypto`, because these are security-sensitive. It is more reasonable, and makes upgrades easier, to use the versions provided by the local system to handle security issues.
+
+### Install RPM
+
+1. Download a RPM package from the links in the table.
+
+| Link | Crypto files |
+|------|--------------|
+| [client](@pulsar:dist_rpm:client@) | [asc](@pulsar:dist_rpm:client@.asc), [sha512](@pulsar:dist_rpm:client@.sha512) |
+| [client-debuginfo](@pulsar:dist_rpm:client-debuginfo@) | [asc](@pulsar:dist_rpm:client-debuginfo@.asc), [sha512](@pulsar:dist_rpm:client-debuginfo@.sha512) |
+| [client-devel](@pulsar:dist_rpm:client-devel@) | [asc](@pulsar:dist_rpm:client-devel@.asc), [sha512](@pulsar:dist_rpm:client-devel@.sha512) |
+
+2. Install the package using the following command.
+
+```bash
+
+$ rpm -ivh apache-pulsar-client*.rpm
+
+```
+
+After you install RPM successfully, Pulsar libraries are in the `/usr/lib` directory, for example:
+
+```bash
+
+lrwxrwxrwx 1 root root 18 Dec 30 22:21 libpulsar.so -> libpulsar.so.2.9.1
+lrwxrwxrwx 1 root root 23 Dec 30 22:21 libpulsarnossl.so -> libpulsarnossl.so.2.9.1
+
+```
+
+:::note
+
+If you get the error that `libpulsar.so: cannot open shared object file: No such file or directory` when starting Pulsar client, you may need to run `ldconfig` first.
+
+:::
+
+3. Install GCC and g++ using the following commands; otherwise, compilation errors may occur when building client applications.
+
+```bash
+
+$ sudo yum -y install gcc automake autoconf libtool make
+$ sudo yum -y install gcc-c++
+
+```
+
+### Install Debian
+
+1. Download a Debian package from the links in the table.
+
+| Link | Crypto files |
+|------|--------------|
+| [client](@pulsar:deb:client@) | [asc](@pulsar:dist_deb:client@.asc), [sha512](@pulsar:dist_deb:client@.sha512) |
+| [client-devel](@pulsar:deb:client-devel@) | [asc](@pulsar:dist_deb:client-devel@.asc), [sha512](@pulsar:dist_deb:client-devel@.sha512) |
+
+2. Install the package using the following command.
+
+```bash
+
+$ apt install ./apache-pulsar-client*.deb
+
+```
+
+After you install DEB successfully, Pulsar libraries are in the `/usr/lib` directory.
+
+### Build
+
+> If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository.
+
+There are recipes that build RPM and Debian packages containing a
+statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all required dependencies.
+
+To build the C++ library packages, you need to build the Java packages first.
+
+```shell
+
+mvn install -DskipTests
+
+```
+
+#### RPM
+
+To build the RPM inside a Docker container, use the command below. The RPMs are in the `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/` path.
+
+```shell
+
+pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
+
+```
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
+| pulsar-client-devel | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
+| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |
+
+#### Debian
+
+To build Debian packages, enter the following command.
+
+```shell
+
+pulsar-client-cpp/pkg/deb/docker-build-deb.sh
+
+```
+
+Debian packages are created in the `pulsar-client-cpp/pkg/deb/BUILD/DEB/` path.
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
+| pulsar-client-dev | Static library `libpulsar.a`, `libpulsarwithdeps.a` and C++ and C headers |
+
+## MacOS
+
+### Compilation
+
+1. Clone the Pulsar repository.
+
+```shell
+
+$ git clone https://github.com/apache/pulsar
+
+```
+
+2. Install all necessary dependencies.
+
+```shell
+
+# OpenSSL installation
+$ brew install openssl
+$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/
+$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/
+
+# Protocol Buffers installation
+$ brew install protobuf boost boost-python log4cxx
+# If you are using python3, you need to install boost-python3
+
+# Google Test installation
+$ git clone https://github.com/google/googletest.git
+$ cd googletest
+$ git checkout release-1.12.1
+$ cmake .
+$ make install
+
+```
+
+3. Compile the Pulsar client library in the repository that you cloned.
+
+```shell
+
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+
+```
+
+### Install `libpulsar`
+
+Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers.
+
+```shell
+
+brew install libpulsar
+
+```
+
+## Windows (64-bit)
+
+### Compilation
+
+1. Clone the Pulsar repository.
+
+```shell
+
+$ git clone https://github.com/apache/pulsar
+
+```
+
+2. Install all necessary dependencies.
+
+```shell
+
+cd ${PULSAR_HOME}/pulsar-client-cpp
+vcpkg install --feature-flags=manifests --triplet x64-windows
+
+```
+
+3. Build C++ libraries.
+
+```shell
+
+cmake -B ./build -A x64 -DBUILD_PYTHON_WRAPPER=OFF -DBUILD_TESTS=OFF -DVCPKG_TRIPLET=x64-windows -DCMAKE_BUILD_TYPE=Release -S .
+cmake --build ./build --config Release
+
+```
+
+> **NOTE**
+>
+> 1. For Windows 32-bit, you need to use `-A Win32` and `-DVCPKG_TRIPLET=x86-windows`.
+> 2. For MSVC Debug mode, you need to replace `Release` with `Debug` for both `CMAKE_BUILD_TYPE` variable and `--config` option.
+
+4. Client libraries are available in the following places.
+
+```
+
+${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.lib
+${PULSAR_HOME}/pulsar-client-cpp/build/lib/Release/pulsar.dll
+
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
+
+Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.
+
+```http
+
+pulsar://localhost:6650
+
+```
+
+In a Pulsar cluster in production, the URL looks as follows.
+
+```http
+
+pulsar://pulsar.us-west.example.com:6650
+
+```
+
+If you use TLS authentication, you need to use the `pulsar+ssl` scheme, and the default port is `6651`. The following is an example.
+
+```http
+
+pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+## Create a producer
+
+To publish messages to Pulsar, you need to create a producer with the C++ client. There are two main ways of using a producer:
+- [Blocking style](#simple-blocking-example): each call to `send` waits for an ack from the broker.
+- [Non-blocking asynchronous style](#non-blocking-example): `sendAsync` is called instead of `send`, and a callback is supplied for when the ack is received from the broker.
+
+### Simple blocking example
+
+This example sends 100 messages using the blocking style. While simple, it does not produce high throughput as it waits for each ack to come back before sending the next message.
+
+```c++
+
+#include <pulsar/Client.h>
+#include <iostream>
+#include <thread>
+
+using namespace pulsar;
+
+int main() {
+ Client client("pulsar://localhost:6650");
+
+ Producer producer;
+ Result result = client.createProducer("persistent://public/default/my-topic", producer);
+ if (result != ResultOk) {
+ std::cout << "Error creating producer: " << result << std::endl;
+ return -1;
+ }
+
+ // Send 100 messages synchronously
+ int ctr = 0;
+ while (ctr < 100) {
+ std::string content = "msg" + std::to_string(ctr);
+ Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
+ Result result = producer.send(msg);
+ if (result != ResultOk) {
+ std::cout << "The message " << content << " could not be sent, received code: " << result << std::endl;
+ } else {
+ std::cout << "The message " << content << " sent successfully" << std::endl;
+ }
+
+ std::this_thread::sleep_for(std::chrono::milliseconds(100));
+ ctr++;
+ }
+
+ std::cout << "Finished producing synchronously!" << std::endl;
+
+ client.close();
+ return 0;
+}
+
+```
+
+### Non-blocking example
+
+This example sends 100 messages using the non-blocking style, calling `sendAsync` instead of `send`. This allows the producer to have multiple messages in flight at a time, which increases throughput.
+
+The producer configuration `blockIfQueueFull` is useful here to avoid `ResultProducerQueueIsFull` errors when the internal queue for outgoing send requests becomes full. Once the internal queue is full, `sendAsync` becomes blocking, which can make your code simpler.
+
+Without this configuration, the result code `ResultProducerQueueIsFull` is passed to the callback, and you must decide how to deal with it (retry, discard, etc.).
+
+```c++
+
+#include <pulsar/Client.h>
+#include <atomic>
+#include <functional>
+#include <iostream>
+#include <thread>
+
+using namespace pulsar;
+
+std::atomic<int> acksReceived(0);
+
+void callback(Result code, const MessageId& msgId, std::string msgContent) {
+ // message processing logic here
+ std::cout << "Received ack for msg: " << msgContent << " with code: "
+ << code << " -- MsgID: " << msgId << std::endl;
+ acksReceived++;
+}
+
+int main() {
+ Client client("pulsar://localhost:6650");
+
+ ProducerConfiguration producerConf;
+ producerConf.setBlockIfQueueFull(true);
+ Producer producer;
+ Result result = client.createProducer("persistent://public/default/my-topic",
+ producerConf, producer);
+ if (result != ResultOk) {
+ std::cout << "Error creating producer: " << result << std::endl;
+ return -1;
+ }
+
+ // Send 100 messages asynchronously
+ int ctr = 0;
+ while (ctr < 100) {
+ std::string content = "msg" + std::to_string(ctr);
+ Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
+ producer.sendAsync(msg, std::bind(callback,
+ std::placeholders::_1, std::placeholders::_2, content));
+
+ std::this_thread::sleep_for(std::chrono::milliseconds(100));
+ ctr++;
+ }
+
+ // wait for 100 messages to be acked
+ while (acksReceived < 100) {
+ std::this_thread::sleep_for(std::chrono::milliseconds(100));
+ }
+
+ std::cout << "Finished producing asynchronously!" << std::endl;
+
+ client.close();
+ return 0;
+}
+
+```
+
+### Partitioned topics and lazy producers
+
+When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have also scaled out your producers so there are hundreds or even thousands of producers. This can put some strain on the Pulsar brokers, because when you create a producer on a partitioned topic, the client internally creates one internal producer per partition, each of which requires communication with the brokers. So for a topic with 1000 partitions and 1000 producers, the clients end up creating 1,000,000 internal producers across the producer applications, each of which has to find out which broker it should connect to and then perform the connection handshake.
+
+You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following:
+- use SinglePartition partition routing mode (this ensures that all messages are only sent to a single, randomly selected partition)
+- use non-keyed messages (when messages are keyed, routing is based on the hash of the key and so messages will end up being sent to multiple partitions)
+- use lazy producers (this ensures that an internal producer is only created on demand when a message needs to be routed to a partition)
+
+With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000.
+
+Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete.
+
+```c++
+
+ProducerConfiguration producerConf;
+producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition);
+producerConf.setLazyStartPartitionedProducers(true);
+
+```
+
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
+
+The message chunking feature is OFF by default. The following is an example about how to enable message chunking when creating a producer.
+
+```c++
+
+ProducerConfiguration conf;
+conf.setBatchingEnabled(false);
+conf.setChunkingEnabled(true);
+Producer producer;
+client.createProducer("my-topic", conf, producer);
+
+```
+
+> **Note:** To enable chunking, you need to disable batching (`setBatchingEnabled`=`false`) concurrently.
+
+## Create a consumer
+
+To consume messages from Pulsar, you need to create a consumer with the C++ client. There are two main ways of using the consumer:
+- [Blocking style](#blocking-example): synchronously calling `receive(msg)`.
+- [Non-blocking](#consumer-with-a-message-listener) (event-based) style: using a message listener.
+
+### Blocking example
+
+The benefit of this approach is that it is the simplest code. It simply keeps calling `receive(msg)`, which blocks until a message is received.
+
+This example starts a subscription at the earliest offset and consumes 100 messages.
+
+```c++
+
+#include <pulsar/Client.h>
+
+using namespace pulsar;
+
+int main() {
+ Client client("pulsar://localhost:6650");
+
+ Consumer consumer;
+ ConsumerConfiguration config;
+ config.setSubscriptionInitialPosition(InitialPositionEarliest);
+ Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
+ if (result != ResultOk) {
+ std::cout << "Failed to subscribe: " << result << std::endl;
+ return -1;
+ }
+
+ Message msg;
+ int ctr = 0;
+ // consume 100 messages
+ while (ctr < 100) {
+ consumer.receive(msg);
+ std::cout << "Received: " << msg
+ << " with payload '" << msg.getDataAsString() << "'" << std::endl;
+
+ consumer.acknowledge(msg);
+ ctr++;
+ }
+
+ std::cout << "Finished consuming synchronously!" << std::endl;
+
+ client.close();
+ return 0;
+}
+
+```
+
+### Consumer with a message listener
+
+You can avoid running a loop with blocking calls by using an event-based style instead: register a message listener that is invoked for each received message.
+
+This example starts a subscription at the earliest offset and consumes 100 messages.
+
+```c++
+
+#include <pulsar/Client.h>
+#include <atomic>
+#include <chrono>
+#include <thread>
+
+using namespace pulsar;
+
+std::atomic<int> messagesReceived(0);
+
+void handleAckComplete(Result res) {
+ std::cout << "Ack res: " << res << std::endl;
+}
+
+void listener(Consumer consumer, const Message& msg) {
+ std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
+ messagesReceived++;
+ consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
+}
+
+int main() {
+ Client client("pulsar://localhost:6650");
+
+ Consumer consumer;
+ ConsumerConfiguration config;
+ config.setMessageListener(listener);
+ config.setSubscriptionInitialPosition(InitialPositionEarliest);
+ Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
+ if (result != ResultOk) {
+ std::cout << "Failed to subscribe: " << result << std::endl;
+ return -1;
+ }
+
+ // wait for 100 messages to be consumed
+ while (messagesReceived < 100) {
+ std::this_thread::sleep_for(std::chrono::milliseconds(100));
+ }
+
+ std::cout << "Finished consuming asynchronously!" << std::endl;
+
+ client.close();
+ return 0;
+}
+
+```
+
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `setMaxPendingChunkedMessage` and `setAutoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later.
+
+The following is an example of how to configure message chunking.
+
+```c++
+
+ConsumerConfiguration conf;
+conf.setAutoAckOldestChunkedMessageOnQueueFull(true);
+conf.setMaxPendingChunkedMessage(100);
+Consumer consumer;
+client.subscribe("my-topic", "my-sub", conf, consumer);
+
+```
+
+## Enable authentication in connection URLs
+If you use TLS authentication when connecting to Pulsar, you need to use the `pulsar+ssl` scheme in the connection URL; the default port is `6651`. The following is an example.
+
+```cpp
+
+ClientConfiguration config = ClientConfiguration();
+config.setUseTls(true);
+config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
+config.setTlsAllowInsecureConnection(false);
+config.setAuth(pulsar::AuthTls::create(
+    "/path/to/client-cert.pem", "/path/to/client-key.pem"));
+
+Client client("pulsar+ssl://my-broker.com:6651", config);
+
+```
+
+For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).
+
+## Schema
+
+This section provides some schema examples. For more information about
+schemas, see [Pulsar schema](schema-get-started.md).
+
+### Avro schema
+
+- The following example shows how to create a producer with an Avro schema.
+
+ ```cpp
+
+ static const std::string exampleSchema =
+ "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
+ "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
+ Producer producer;
+ ProducerConfiguration producerConf;
+ producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
+ client.createProducer("topic-avro", producerConf, producer);
+
+ ```
+
+- The following example shows how to create a consumer with an Avro schema.
+
+ ```cpp
+
+ static const std::string exampleSchema =
+ "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
+ "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
+ ConsumerConfiguration consumerConf;
+ Consumer consumer;
+ consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
+  client.subscribe("topic-avro", "sub-2", consumerConf, consumer);
+
+ ```
+
+### ProtobufNative schema
+
+The following example shows how to create a producer and a consumer with a ProtobufNative schema.
+
+1. Generate the `User` class using Protobuf3.
+
+ :::note
+
+ You need to use Protobuf3 or later versions.
+
+ :::
+
+
+
+ ```protobuf
+
+ syntax = "proto3";
+
+ message User {
+ string name = 1;
+ int32 age = 2;
+ }
+
+ ```
+
+
+2. Include the `ProtobufNativeSchema.h` in your source code. Ensure the Protobuf dependency has been added to your project.
+
+
+ ```c++
+
+   #include <pulsar/ProtobufNativeSchema.h>
+
+ ```
+
+
+3. Create a producer to send a `User` instance.
+
+
+ ```c++
+
+ ProducerConfiguration producerConf;
+ producerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
+ Producer producer;
+ client.createProducer("topic-protobuf", producerConf, producer);
+ User user;
+ user.set_name("my-name");
+ user.set_age(10);
+ std::string content;
+ user.SerializeToString(&content);
+ producer.send(MessageBuilder().setContent(content).build());
+
+ ```
+
+
+4. Create a consumer to receive a `User` instance.
+
+
+ ```c++
+
+ ConsumerConfiguration consumerConf;
+ consumerConf.setSchema(createProtobufNativeSchema(User::GetDescriptor()));
+ consumerConf.setSubscriptionInitialPosition(InitialPositionEarliest);
+ Consumer consumer;
+ client.subscribe("topic-protobuf", "my-sub", consumerConf, consumer);
+ Message msg;
+ consumer.receive(msg);
+ User user2;
+ user2.ParseFromArray(msg.getData(), msg.getLength());
+
+ ```
+
diff --git a/site2/website/versioned_docs/version-2.10.x/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.10.x/client-libraries-dotnet.md
new file mode 100644
index 0000000000000..52b6200c478af
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/client-libraries-dotnet.md
@@ -0,0 +1,456 @@
+---
+id: client-libraries-dotnet
+title: Pulsar C# client
+sidebar_label: "C#"
+original_id: client-libraries-dotnet
+---
+
+You can use the Pulsar C# client (DotPulsar) to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe. The official documentation for DotPulsar is available [here](https://github.com/apache/pulsar-dotpulsar/wiki).
+
+## Installation
+
+You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install it through the dotnet CLI. For information about installing it through Visual Studio, see [this walkthrough](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
+
+### Prerequisites
+
+Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.
+
+### Procedures
+
+To install the Pulsar C# client library, follow these steps:
+
+1. Create a project.
+
+ 1. Create a folder for the project.
+
+ 2. Open a terminal window and switch to the new folder.
+
+ 3. Create the project using the following command.
+
+ ```
+
+ dotnet new console
+
+ ```
+
+ 4. Use `dotnet run` to test that the app has been created properly.
+
+2. Add the DotPulsar NuGet package.
+
+ 1. Use the following command to install the `DotPulsar` package.
+
+ ```
+
+ dotnet add package DotPulsar
+
+ ```
+
+ 2. After the command completes, open the `.csproj` file to see the added reference.
+
+ ```xml
+
+   <ItemGroup>
+     <PackageReference Include="DotPulsar" Version="..." />
+   </ItemGroup>
+
+ ```
+
+## Client
+
+This section describes some configuration examples for the Pulsar C# client.
+
+### Create client
+
+This example shows how to create a Pulsar C# client connected to localhost.
+
+```c#
+
+using DotPulsar;
+
+var client = PulsarClient.Builder().Build();
+
+```
+
+To create a Pulsar C# client by using the builder, you can specify the following options.
+
+| Option | Description | Default |
+| ---- | ---- | ---- |
+| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
+| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |
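+
+As a sketch of setting these options through the builder (this assumes the DotPulsar builder exposes `ServiceUrl` and `RetryInterval` methods; check the DotPulsar wiki for your version):
+
+```c#
+
+using System;
+using DotPulsar;
+
+var client = PulsarClient.Builder()
+    .ServiceUrl(new Uri("pulsar://localhost:6650")) // the default service URL
+    .RetryInterval(TimeSpan.FromSeconds(3))         // the default retry interval
+    .Build();
+
+```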
+
+### Create producer
+
+This section describes how to create a producer.
+
+- Create a producer by using the builder.
+
+ ```c#
+
+ using DotPulsar;
+ using DotPulsar.Extensions;
+
+  var producer = client.NewProducer()
+ .Topic("persistent://public/default/mytopic")
+ .Create();
+
+ ```
+
+- Create a producer without using the builder.
+
+ ```c#
+
+ using DotPulsar;
+
+ var options = new ProducerOptions("persistent://public/default/mytopic", Schema.ByteArray);
+ var producer = client.CreateProducer(options);
+
+ ```
+
+### Create consumer
+
+This section describes how to create a consumer.
+
+- Create a consumer by using the builder.
+
+ ```c#
+
+ using DotPulsar;
+ using DotPulsar.Extensions;
+
+ var consumer = client.NewConsumer()
+ .SubscriptionName("MySubscription")
+ .Topic("persistent://public/default/mytopic")
+ .Create();
+
+ ```
+
+- Create a consumer without using the builder.
+
+ ```c#
+
+ using DotPulsar;
+
+ var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic", Schema.ByteArray);
+ var consumer = client.CreateConsumer(options);
+
+ ```
+
+### Create reader
+
+This section describes how to create a reader.
+
+- Create a reader by using the builder.
+
+ ```c#
+
+ using DotPulsar;
+ using DotPulsar.Extensions;
+
+ var reader = client.NewReader()
+ .StartMessageId(MessageId.Earliest)
+ .Topic("persistent://public/default/mytopic")
+ .Create();
+
+ ```
+
+- Create a reader without using the builder.
+
+ ```c#
+
+ using DotPulsar;
+
+ var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic", Schema.ByteArray);
+ var reader = client.CreateReader(options);
+
+ ```
+
+### Configure encryption policies
+
+The Pulsar C# client supports four kinds of encryption policies:
+
+- `EnforceUnencrypted`: always use unencrypted connections.
+- `EnforceEncrypted`: always use encrypted connections.
+- `PreferUnencrypted`: use unencrypted connections, if possible.
+- `PreferEncrypted`: use encrypted connections, if possible.
+
+This example shows how to set the `EnforceEncrypted` encryption policy.
+
+```c#
+
+using DotPulsar;
+
+var client = PulsarClient.Builder()
+ .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
+ .Build();
+
+```
+
+### Configure authentication
+
+Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.
+
+If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps:
+
+1. Create an unencrypted and password-less pfx file.
+
+   ```shell
+
+ openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:
+
+ ```
+
+2. Use the admin.pfx file to create an X509Certificate2 and pass it to the Pulsar C# client.
+
+ ```c#
+
+ using System.Security.Cryptography.X509Certificates;
+ using DotPulsar;
+
+ var clientCertificate = new X509Certificate2("admin.pfx");
+ var client = PulsarClient.Builder()
+ .AuthenticateUsingClientCertificate(clientCertificate)
+ .Build();
+
+ ```
+
+## Producer
+
+A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.
+
+### Send data
+
+This example shows how to send data.
+
+```c#
+
+var data = Encoding.UTF8.GetBytes("Hello World");
+await producer.Send(data);
+
+```
+
+### Send messages with customized metadata
+
+- Send messages with customized metadata by using the builder.
+
+ ```c#
+
+ var messageId = await producer.NewMessage()
+ .Property("SomeKey", "SomeValue")
+ .Send(data);
+
+ ```
+
+- Send messages with customized metadata without using the builder.
+
+ ```c#
+
+ var data = Encoding.UTF8.GetBytes("Hello World");
+ var metadata = new MessageMetadata();
+ metadata["SomeKey"] = "SomeValue";
+  var messageId = await producer.Send(metadata, data);
+
+ ```
+
+## Consumer
+
+A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples for the consumer.
+
+### Receive messages
+
+This example shows how a consumer receives messages from a topic.
+
+```c#
+
+await foreach (var message in consumer.Messages())
+{
+ Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
+}
+
+```
+
+### Acknowledge messages
+
+Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).
+
+- Acknowledge messages individually.
+
+ ```c#
+
+ await consumer.Acknowledge(message);
+
+ ```
+
+- Acknowledge messages cumulatively.
+
+ ```c#
+
+ await consumer.AcknowledgeCumulative(message);
+
+ ```
+
+### Unsubscribe from topics
+
+This example shows how a consumer unsubscribes from a topic.
+
+```c#
+
+await consumer.Unsubscribe();
+
+```
+
+> **Note:** Once a consumer unsubscribes from a topic, it is disposed and can no longer be used.
+
+## Reader
+
+A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.
+
+This example shows how a reader receives messages.
+
+```c#
+
+await foreach (var message in reader.Messages())
+{
+ Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
+}
+
+```
+
+## Monitoring
+
+This section describes how to monitor the producer, consumer, and reader state.
+
+### Monitor producer
+
+The following table lists states available for the producer.
+
+| State | Description |
+| ---- | ----|
+| Closed | The producer or the Pulsar client has been disposed. |
+| Connected | All is well. |
+| Disconnected | The connection is lost and attempts are being made to reconnect. |
+| Faulted | An unrecoverable error has occurred. |
+| PartiallyConnected | Some of the sub-producers are disconnected. |
+
+This example shows how to monitor the producer state.
+
+```c#
+
+private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
+{
+ var state = ProducerState.Disconnected;
+
+ while (!cancellationToken.IsCancellationRequested)
+ {
+ state = (await producer.StateChangedFrom(state, cancellationToken)).ProducerState;
+
+ var stateMessage = state switch
+ {
+ ProducerState.Connected => $"The producer is connected",
+ ProducerState.Disconnected => $"The producer is disconnected",
+ ProducerState.Closed => $"The producer has closed",
+ ProducerState.Faulted => $"The producer has faulted",
+ ProducerState.PartiallyConnected => $"The producer is partially connected.",
+ _ => $"The producer has an unknown state '{state}'"
+ };
+
+ Console.WriteLine(stateMessage);
+
+ if (producer.IsFinalState(state))
+ return;
+ }
+}
+
+```
+
+### Monitor consumer state
+
+The following table lists states available for the consumer.
+
+| State | Description |
+| ---- | ----|
+| Active | All is well. |
+| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
+| Closed | The consumer or the Pulsar client has been disposed. |
+| Disconnected | The connection is lost and attempts are being made to reconnect. |
+| Faulted | An unrecoverable error has occurred. |
+| ReachedEndOfTopic | No more messages are delivered. |
+| Unsubscribed | The consumer has unsubscribed. |
+
+This example shows how to monitor the consumer state.
+
+```c#
+
+private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken)
+{
+ var state = ConsumerState.Disconnected;
+
+ while (!cancellationToken.IsCancellationRequested)
+ {
+ state = (await consumer.StateChangedFrom(state, cancellationToken)).ConsumerState;
+
+ var stateMessage = state switch
+ {
+ ConsumerState.Active => "The consumer is active",
+ ConsumerState.Inactive => "The consumer is inactive",
+ ConsumerState.Disconnected => "The consumer is disconnected",
+ ConsumerState.Closed => "The consumer has closed",
+ ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic",
+ ConsumerState.Faulted => "The consumer has faulted",
+ ConsumerState.Unsubscribed => "The consumer is unsubscribed.",
+ _ => $"The consumer has an unknown state '{state}'"
+ };
+
+ Console.WriteLine(stateMessage);
+
+ if (consumer.IsFinalState(state))
+ return;
+ }
+}
+
+```
+
+### Monitor reader state
+
+The following table lists states available for the reader.
+
+| State | Description |
+| ---- | ----|
+| Closed | The reader or the Pulsar client has been disposed. |
+| Connected | All is well. |
+| Disconnected | The connection is lost and attempts are being made to reconnect.
+| Faulted | An unrecoverable error has occurred. |
+| ReachedEndOfTopic | No more messages are delivered. |
+
+This example shows how to monitor the reader state.
+
+```c#
+
+private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken)
+{
+ var state = ReaderState.Disconnected;
+
+ while (!cancellationToken.IsCancellationRequested)
+ {
+ state = (await reader.StateChangedFrom(state, cancellationToken)).ReaderState;
+
+ var stateMessage = state switch
+ {
+ ReaderState.Connected => "The reader is connected",
+ ReaderState.Disconnected => "The reader is disconnected",
+ ReaderState.Closed => "The reader has closed",
+ ReaderState.ReachedEndOfTopic => "The reader has reached end of topic",
+ ReaderState.Faulted => "The reader has faulted",
+ _ => $"The reader has an unknown state '{state}'"
+ };
+
+ Console.WriteLine(stateMessage);
+
+ if (reader.IsFinalState(state))
+ return;
+ }
+}
+
+```
+
diff --git a/site2/website/versioned_docs/version-2.10.x/client-libraries-go.md b/site2/website/versioned_docs/version-2.10.x/client-libraries-go.md
new file mode 100644
index 0000000000000..d2f5dd5a13d0d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/client-libraries-go.md
@@ -0,0 +1,1064 @@
+---
+id: client-libraries-go
+title: Pulsar Go client
+sidebar_label: "Go"
+original_id: client-libraries-go
+---
+
+> **Tip:** The CGo client has been deprecated since version 2.7.0.
+
+You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+> **API docs available as well**
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar).
+
+
+## Installation
+
+### Install go package
+
+You can get the `pulsar` library by using `go get`, or use it with Go modules.
+
+Download the library of Go client to local environment:
+
+```bash
+
+$ go get -u "github.com/apache/pulsar-client-go/pulsar"
+
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+
+import "github.com/apache/pulsar-client-go/pulsar"
+
+```
+
+To use it with Go modules:
+
+```bash
+
+$ mkdir test_dir && cd test_dir
+
+```
+
+Write a sample script in the `test_dir` directory (such as `test_example.go`) and write `package main` at the beginning of the file.
+
+```bash
+
+$ go mod init test_dir
+$ go mod tidy && go mod download
+$ go build test_example.go
+$ ./test_example
+
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+
+pulsar://localhost:6650
+
+```
+
+If you have multiple brokers, you can set the URL as below.
+
+```
+
+pulsar://localhost:6650,localhost:6651,localhost:6652
+
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+
+pulsar://pulsar.us-west.example.com:6650
+
+```
+
+If you're using [TLS](security-tls-authentication.md) authentication, the URL looks something like this:
+
+```http
+
+pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+```go
+
+import (
+ "log"
+ "time"
+
+ "github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+ client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+ OperationTimeout: 30 * time.Second,
+ ConnectionTimeout: 30 * time.Second,
+ })
+ if err != nil {
+ log.Fatalf("Could not instantiate Pulsar client: %v", err)
+ }
+
+ defer client.Close()
+}
+
+```
+
+If you have multiple brokers, you can initiate a client object as below.
+
+```go
+
+import (
+ "log"
+ "time"
+ "github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+ client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650,localhost:6651,localhost:6652",
+ OperationTimeout: 30 * time.Second,
+ ConnectionTimeout: 30 * time.Second,
+ })
+ if err != nil {
+ log.Fatalf("Could not instantiate Pulsar client: %v", err)
+ }
+
+ defer client.Close()
+}
+
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| URL | Configure the service URL for the Pulsar service. If you have multiple brokers, you can set multiple Pulsar cluster addresses for a client. This parameter is **required**. | None |
+| ConnectionTimeout | Timeout for the establishment of a TCP connection | 30s |
+| OperationTimeout | Set the operation timeout. Producer-create, subscribe, and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed | 30s |
+| Authentication | Configure the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
+| TLSTrustCertsFilePath | Set the path to the trusted TLS certificate file | |
+| TLSAllowInsecureConnection | Configure whether the Pulsar client accepts an untrusted TLS certificate from the broker | false |
+| TLSValidateHostname | Configure whether the Pulsar client verifies the validity of the hostname from the broker | false |
+| ListenerName | Configure the net model for VPC users to connect to the Pulsar broker | |
+| MaxConnectionsPerBroker | Max number of connections to a single broker that is kept in the pool | 1 |
+| CustomMetricsLabels | Add custom labels to all the metrics reported by this client instance | |
+| Logger | Configure the logger used by the client | logrus.StandardLogger |
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: "my-topic",
+})
+
+if err != nil {
+ log.Fatal(err)
+}
+
+_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
+ Payload: []byte("hello"),
+})
+
+defer producer.Close()
+
+if err != nil {
+ fmt.Println("Failed to publish message", err)
+}
+fmt.Println("Published message")
+
+```
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
+`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Publishes a [message](#messages) to the producer's topic asynchronously. The provided callback is invoked once the message is acknowledged by the Pulsar broker or an error occurs. |
+`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
+`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. |
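+
+As an illustration of combining `SendAsync` and `Flush` (a sketch that assumes an existing `producer` and a running broker):
+
+```go
+
+// Queue several messages without waiting for each acknowledgment.
+for i := 0; i < 10; i++ {
+	producer.SendAsync(context.Background(), &pulsar.ProducerMessage{
+		Payload: []byte("async-hello"),
+	}, func(id pulsar.MessageID, msg *pulsar.ProducerMessage, err error) {
+		if err != nil {
+			log.Printf("Failed to publish: %v", err)
+			return
+		}
+		log.Printf("Published message: %v", id)
+	})
+}
+
+// Block until all buffered messages have been persisted by the broker.
+if err := producer.Flush(); err != nil {
+	log.Fatal(err)
+}
+
+```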
+
+### Producer Example
+
+#### How to use message router in producer
+
+```go
+
+client, err := NewClient(pulsar.ClientOptions{
+ URL: serviceURL,
+})
+
+if err != nil {
+ log.Fatal(err)
+}
+defer client.Close()
+
+// Only subscribe on the specific partition
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topic: "my-partitioned-topic-partition-2",
+ SubscriptionName: "my-sub",
+})
+
+if err != nil {
+ log.Fatal(err)
+}
+defer consumer.Close()
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: "my-partitioned-topic",
+ MessageRouter: func(msg *ProducerMessage, tm TopicMetadata) int {
+ fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
+ return 2
+ },
+})
+
+if err != nil {
+ log.Fatal(err)
+}
+defer producer.Close()
+
+```
+
+#### How to use schema interface in producer
+
+```go
+
+type testJSON struct {
+ ID int `json:"id"`
+ Name string `json:"name"`
+}
+
+```
+
+```go
+
+var (
+ exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
+ "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
+)
+
+```
+
+```go
+
+client, err := NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer client.Close()
+
+properties := make(map[string]string)
+properties["pulsar"] = "hello"
+jsonSchemaWithProperties := NewJSONSchema(exampleSchemaDef, properties)
+producer, err := client.CreateProducer(ProducerOptions{
+ Topic: "jsonTopic",
+ Schema: jsonSchemaWithProperties,
+})
+assert.Nil(t, err)
+
+_, err = producer.Send(context.Background(), &ProducerMessage{
+ Value: &testJSON{
+ ID: 100,
+ Name: "pulsar",
+ },
+})
+if err != nil {
+ log.Fatal(err)
+}
+producer.Close()
+
+```
+
+#### How to use delay relative in producer
+
+```go
+
+client, err := NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer client.Close()
+
+topicName := newTopicName()
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: topicName,
+ DisableBatching: true,
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer producer.Close()
+
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topic: topicName,
+ SubscriptionName: "subName",
+ Type: Shared,
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer consumer.Close()
+
+ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
+	Payload: []byte("test"),
+ DeliverAfter: 3 * time.Second,
+})
+if err != nil {
+ log.Fatal(err)
+}
+fmt.Println(ID)
+
+ctx, canc := context.WithTimeout(context.Background(), 1*time.Second)
+msg, err := consumer.Receive(ctx)
+if err != nil {
+ log.Fatal(err)
+}
+fmt.Println(msg.Payload())
+canc()
+
+ctx, canc = context.WithTimeout(context.Background(), 5*time.Second)
+msg, err = consumer.Receive(ctx)
+if err != nil {
+ log.Fatal(err)
+}
+fmt.Println(msg.Payload())
+canc()
+
+```
+
+#### How to use Prometheus metrics in producer
+
+Pulsar Go client registers client metrics using Prometheus. This section demonstrates how to create a simple Pulsar producer application that exposes Prometheus metrics via HTTP.
+
+1. Write a simple producer application.
+
+```go
+
+// Create a Pulsar client
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+if err != nil {
+ log.Fatal(err)
+}
+
+defer client.Close()
+
+// Start a separate goroutine for Prometheus metrics
+// In this case, Prometheus metrics can be accessed via http://localhost:2112/metrics
+go func() {
+ prometheusPort := 2112
+ log.Printf("Starting Prometheus metrics at http://localhost:%v/metrics\n", prometheusPort)
+ http.Handle("/metrics", promhttp.Handler())
+ err = http.ListenAndServe(":"+strconv.Itoa(prometheusPort), nil)
+ if err != nil {
+ log.Fatal(err)
+ }
+}()
+
+// Create a producer
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: "topic-1",
+})
+if err != nil {
+ log.Fatal(err)
+}
+
+defer producer.Close()
+
+ctx := context.Background()
+
+// Write your business logic here
+// In this case, you build a simple Web server. You can produce messages by requesting http://localhost:8082/produce
+webPort := 8082
+http.HandleFunc("/produce", func(w http.ResponseWriter, r *http.Request) {
+ msgId, err := producer.Send(ctx, &pulsar.ProducerMessage{
+		Payload: []byte("hello world"),
+ })
+ if err != nil {
+ log.Fatal(err)
+ } else {
+ log.Printf("Published message: %v", msgId)
+ fmt.Fprintf(w, "Published message: %v", msgId)
+ }
+})
+
+err = http.ListenAndServe(":"+strconv.Itoa(webPort), nil)
+if err != nil {
+ log.Fatal(err)
+}
+
+```
+
+2. To scrape metrics from applications, configure a local running Prometheus instance using a configuration file (`prometheus.yml`).
+
+```yaml
+
+scrape_configs:
+- job_name: pulsar-client-go-metrics
+ scrape_interval: 10s
+ static_configs:
+ - targets:
+ - localhost:2112
+
+```
+
+Now you can query Pulsar client metrics on Prometheus.
+
+### Producer configuration
+
+| Name | Description | Default |
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this producer publishes to. This argument is required when constructing the producer. | |
+| Name | Name specifies a name for the producer. If not assigned, the system generates a globally unique name which can be accessed with `Producer.ProducerName()`. | |
+| Properties | Properties attaches a set of application-defined properties to the producer. These properties are visible in the topic stats. | |
+| SendTimeout | SendTimeout sets the timeout for a message that is not acknowledged by the server. | 30s |
+| DisableBlockIfQueueFull | DisableBlockIfQueueFull controls whether `Send` and `SendAsync` block when the producer's message queue is full. | false |
+| MaxPendingMessages | MaxPendingMessages sets the maximum size of the queue holding messages pending an acknowledgment from the broker. | |
+| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
+| CompressionType | CompressionType sets the compression type for the producer. | not compressed |
+| CompressionLevel | Defines the desired compression level. Options: Default, Faster and Better. | Default |
+| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of `MessageRouter`. | |
+| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
+| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which sent messages are batched. | 1ms |
+| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 |
+| BatchingMaxSize | BatchingMaxSize sets the maximum number of bytes permitted in a batch. | 128KB |
+| Schema | Schema sets a custom schema type by passing an implementation of `Schema`. | bytes[] |
+| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ProducerInterceptor` interface. | None |
+| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum number of reconnection attempts to the broker. | unlimited |
+| BatcherBuilderType | BatcherBuilderType sets the batch builder type used to create a batch container when batching is enabled. Options: DefaultBatchBuilder and KeyBasedBatchBuilder. | DefaultBatchBuilder |
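+
+As an illustration of how several of these options combine, the following sketch creates a producer tuned for small, frequent messages. The topic name and tuning values are assumptions for the example, and a `pulsar.Client` named `client` is assumed to already exist:
+
+```go
+
+// Illustrative configuration only; the values are not recommendations.
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: "topic-1",
+ Name: "my-producer",
+ SendTimeout: 10 * time.Second,
+ CompressionType: pulsar.LZ4,
+ BatchingMaxPublishDelay: 10 * time.Millisecond,
+ BatchingMaxMessages: 100,
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer producer.Close()
+
+```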
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages on those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topic: "topic-1",
+ SubscriptionName: "my-sub",
+ Type: pulsar.Shared,
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer consumer.Close()
+
+for i := 0; i < 10; i++ {
+ msg, err := consumer.Receive(context.Background())
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
+ msg.ID(), string(msg.Payload()))
+
+ consumer.Ack(msg)
+}
+
+if err := consumer.Unsubscribe(); err != nil {
+ log.Fatal(err)
+}
+
+```
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the operation fails. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Chan()` | Chan returns a channel from which to consume messages. | `<-chan ConsumerMessage`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) |
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID |
+`ReconsumeLater(msg Message, delay time.Duration)` | Marks a message for redelivery after a custom delay |
+`Nack(Message)` | Acknowledges the failure to process a single message |
+`NackID(MessageID)` | Acknowledges the failure to process a single message by message ID |
+`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error`
+`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker |
+`Name()` | Returns the name of the consumer | `string`
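+
+For example, `Ack` and `Nack` are typically paired: messages that are processed successfully are acknowledged, while failures are negatively acknowledged so the broker redelivers them after `NackRedeliveryDelay`. A minimal sketch, assuming an existing `consumer` and a hypothetical `process` function:
+
+```go
+
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+ log.Fatal(err)
+}
+if procErr := process(msg.Payload()); procErr != nil {
+ // The broker redelivers the message after NackRedeliveryDelay
+ consumer.Nack(msg)
+} else {
+ consumer.Ack(msg)
+}
+
+```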
+
+### Receive example
+
+#### How to use a regex consumer
+
+```go
+
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer client.Close()
+
+// The topic must match the pattern subscribed to below;
+// the namespace is an assumed value for this example
+namespace := "public/default"
+topicInRegex := fmt.Sprintf("persistent://%s/foo-topic-1", namespace)
+
+p, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: topicInRegex,
+ DisableBatching: true,
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer p.Close()
+
+topicsPattern := fmt.Sprintf("persistent://%s/foo.*", namespace)
+opts := pulsar.ConsumerOptions{
+ TopicsPattern: topicsPattern,
+ SubscriptionName: "regex-sub",
+}
+consumer, err := client.Subscribe(opts)
+if err != nil {
+ log.Fatal(err)
+}
+defer consumer.Close()
+
+```
+
+#### How to use a multi-topic consumer
+
+```go
+
+topic1 := "topic-1"
+topic2 := "topic-2"
+
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer client.Close()
+topics := []string{topic1, topic2}
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topics: topics,
+ SubscriptionName: "multi-topic-sub",
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer consumer.Close()
+
+```
+
+#### How to use consumer listener
+
+```go
+
+import (
+ "fmt"
+ "log"
+
+ "github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+ client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ defer client.Close()
+
+ channel := make(chan pulsar.ConsumerMessage, 100)
+
+ options := pulsar.ConsumerOptions{
+ Topic: "topic-1",
+ SubscriptionName: "my-subscription",
+ Type: pulsar.Shared,
+ }
+
+ options.MessageChannel = channel
+
+ consumer, err := client.Subscribe(options)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ defer consumer.Close()
+
+ // Receive messages from channel. The channel returns a struct which contains message and the consumer from where
+ // the message was received. It's not necessary here since we have 1 single consumer, but the channel could be
+ // shared across multiple consumers as well
+ for cm := range channel {
+ msg := cm.Message
+ fmt.Printf("Received message msgId: %v -- content: '%s'\n",
+ msg.ID(), string(msg.Payload()))
+
+ consumer.Ack(msg)
+ }
+}
+
+```
+
+#### How to use consumer receive timeout
+
+```go
+
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer client.Close()
+
+topic := "test-topic-with-no-messages"
+ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
+defer cancel()
+
+// create consumer
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topic: topic,
+ SubscriptionName: "my-sub1",
+ Type: pulsar.Shared,
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer consumer.Close()
+
+msg, err := consumer.Receive(ctx)
+if err != nil {
+ log.Fatal(err)
+}
+fmt.Println(string(msg.Payload()))
+
+```
+
+#### How to use schema in consumer
+
+```go
+
+type testJSON struct {
+ ID int `json:"id"`
+ Name string `json:"name"`
+}
+
+```
+
+```go
+
+var (
+ exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
+ "\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
+)
+
+```
+
+```go
+
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer client.Close()
+
+var s testJSON
+
+consumerJS := pulsar.NewJSONSchema(exampleSchemaDef, nil)
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topic: "jsonTopic",
+ SubscriptionName: "sub-1",
+ Schema: consumerJS,
+ SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer consumer.Close()
+
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+ log.Fatal(err)
+}
+err = msg.GetSchemaValue(&s)
+if err != nil {
+ log.Fatal(err)
+}
+
+```
+
+#### How to use Prometheus metrics in consumer
+
+This section demonstrates how to create a simple Pulsar consumer application that exposes Prometheus metrics via HTTP.
+1. Write a simple consumer application.
+
+```go
+
+// Create a Pulsar client
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+if err != nil {
+ log.Fatal(err)
+}
+
+defer client.Close()
+
+// Start a separate goroutine for Prometheus metrics
+// In this case, Prometheus metrics can be accessed via http://localhost:2112/metrics
+go func() {
+ prometheusPort := 2112
+ log.Printf("Starting Prometheus metrics at http://localhost:%v/metrics\n", prometheusPort)
+ http.Handle("/metrics", promhttp.Handler())
+ err = http.ListenAndServe(":"+strconv.Itoa(prometheusPort), nil)
+ if err != nil {
+ log.Fatal(err)
+ }
+}()
+
+// Create a consumer
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topic: "topic-1",
+ SubscriptionName: "sub-1",
+ Type: pulsar.Shared,
+})
+if err != nil {
+ log.Fatal(err)
+}
+
+defer consumer.Close()
+
+ctx := context.Background()
+
+// Write your business logic here
+// In this case, you build a simple Web server. You can consume messages by requesting http://localhost:8083/consume
+webPort := 8083
+http.HandleFunc("/consume", func(w http.ResponseWriter, r *http.Request) {
+ msg, err := consumer.Receive(ctx)
+ if err != nil {
+ log.Fatal(err)
+ } else {
+ log.Printf("Received message msgId: %v -- content: '%s'\n", msg.ID(), string(msg.Payload()))
+ fmt.Fprintf(w, "Received message msgId: %v -- content: '%s'\n", msg.ID(), string(msg.Payload()))
+ consumer.Ack(msg)
+ }
+})
+
+err = http.ListenAndServe(":"+strconv.Itoa(webPort), nil)
+if err != nil {
+ log.Fatal(err)
+}
+
+```
+
+2. To scrape metrics from applications, configure a locally running Prometheus instance using a configuration file (`prometheus.yml`).
+
+```yaml
+
+scrape_configs:
+- job_name: pulsar-client-go-metrics
+ scrape_interval: 10s
+ static_configs:
+ - targets:
+ - localhost:2112
+
+```
+
+Now you can query Pulsar client metrics on Prometheus.
+
+### Consumer configuration
+
+| Name | Description | Default |
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this consumer subscribes to. Either a topic, a list of topics or a topics pattern is required when subscribing. | |
+| Topics | Specifies a list of topics this consumer subscribes to. Either a topic, a list of topics or a topics pattern is required when subscribing. | |
+| TopicsPattern | Specifies a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics or a topics pattern is required when subscribing. | |
+| AutoDiscoveryPeriod | Specifies the interval at which to poll for new partitions or new topics when using a `TopicsPattern`. | |
+| SubscriptionName | Specifies the subscription name for this consumer. This argument is required when subscribing. | |
+| Name | Sets the consumer name. | |
+| Properties | Properties attaches a set of application-defined properties to the consumer. These properties are visible in the topic stats. | |
+| Type | Selects the subscription type to be used when subscribing to the topic. | Exclusive |
+| SubscriptionInitialPosition | The initial position at which the cursor is set when subscribing. | Latest |
+| DLQ | Configuration for the dead letter queue consumer policy. | no DLQ |
+| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it is pushed to the channel for consumption. | |
+| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000 |
+| NackRedeliveryDelay | The delay after which to redeliver messages that failed to be processed. | 1min |
+| ReadCompacted | If enabled, the consumer reads messages from the compacted topic rather than the full message backlog of the topic. | false |
+| ReplicateSubscriptionState | Marks the subscription as replicated to keep it in sync across clusters. | false |
+| KeySharedPolicy | Configuration for the Key Shared consumer policy. | |
+| RetryEnable | Automatically retries sending messages to the default filled `DLQPolicy` topics. | false |
+| Interceptors | A chain of interceptors. These interceptors are called at some points defined in the `ConsumerInterceptor` interface. | |
+| MaxReconnectToBroker | MaxReconnectToBroker sets the maximum number of reconnection attempts to the broker. | unlimited |
+| Schema | Schema sets a custom schema type by passing an implementation of `Schema`. | bytes[] |
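+
+As a sketch of the `DLQ` and `NackRedeliveryDelay` options, the following hypothetical subscription routes messages that fail delivery more than three times to a dead letter topic. The topic and subscription names are assumptions, and a `pulsar.Client` named `client` is assumed to already exist:
+
+```go
+
+// Illustrative sketch: names and values are assumptions, not recommendations.
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+ Topic: "topic-1",
+ SubscriptionName: "my-sub",
+ Type: pulsar.Shared,
+ NackRedeliveryDelay: 10 * time.Second,
+ DLQ: &pulsar.DLQPolicy{
+ MaxDeliveries: 3,
+ DeadLetterTopic: "topic-1-dlq",
+ },
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer consumer.Close()
+
+```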
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+ Topic: "topic-1",
+ StartMessageID: pulsar.EarliestMessageID(),
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer reader.Close()
+
+```
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position | `(bool, error)`
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+`Seek(MessageID)` | Reset the subscription associated with this reader to a specific message ID | `error`
+`SeekByTime(time time.Time)` | Reset the subscription associated with this reader to a specific message publish time | `error`
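+
+For example, `SeekByTime` can rewind an existing reader to replay recent history. A minimal sketch, assuming a reader named `reader` already exists:
+
+```go
+
+// Rewind the reader to messages published in the last hour
+if err := reader.SeekByTime(time.Now().Add(-1 * time.Hour)); err != nil {
+ log.Fatal(err)
+}
+
+```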
+
+### Reader example
+
+#### How to use reader to read 'next' message
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+
+import (
+ "context"
+ "fmt"
+ "log"
+
+ "github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+ client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ defer client.Close()
+
+ reader, err := client.CreateReader(pulsar.ReaderOptions{
+ Topic: "topic-1",
+ StartMessageID: pulsar.EarliestMessageID(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer reader.Close()
+
+ for reader.HasNext() {
+ msg, err := reader.Next(context.Background())
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
+ msg.ID(), string(msg.Payload()))
+ }
+}
+
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`). The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+
+// Read the last saved message ID from an external store as []byte;
+// loadLastMessageID is a hypothetical helper for this example
+lastSavedId := loadLastMessageID()
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+ Topic: "my-golang-topic",
+ StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
+})
+
+```
+
+#### How to use reader to read specific message
+
+```go
+
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://localhost:6650",
+})
+
+if err != nil {
+ log.Fatal(err)
+}
+defer client.Close()
+
+topic := "topic-1"
+ctx := context.Background()
+
+// create producer
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+ Topic: topic,
+ DisableBatching: true,
+})
+if err != nil {
+ log.Fatal(err)
+}
+defer producer.Close()
+
+// send 10 messages
+msgIDs := [10]pulsar.MessageID{}
+for i := 0; i < 10; i++ {
+ msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
+ Payload: []byte(fmt.Sprintf("hello-%d", i)),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ msgIDs[i] = msgID
+}
+
+// create reader on 5th message (not included)
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+ Topic: topic,
+ StartMessageID: msgIDs[4],
+})
+
+if err != nil {
+ log.Fatal(err)
+}
+defer reader.Close()
+
+// receive the remaining 5 messages
+for i := 5; i < 10; i++ {
+ msg, err := reader.Next(context.Background())
+ if err != nil {
+ log.Fatal(err)
+ }
+ fmt.Printf("Received message: '%s'\n", string(msg.Payload()))
+}
+
+// create reader on 5th message (included)
+readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
+ Topic: topic,
+ StartMessageID: msgIDs[4],
+ StartMessageIDInclusive: true,
+})
+
+if err != nil {
+ log.Fatal(err)
+}
+defer readerInclusive.Close()
+
+```
+
+### Reader configuration
+
+| Name | Description | Default |
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this reader reads from. This argument is required when constructing the reader. | |
+| Name | Name sets the reader name. | |
+| Properties | Attaches a set of application-defined properties to the reader. These properties are visible in the topic stats. | |
+| StartMessageID | StartMessageID sets the initial reader position by specifying a message ID. | |
+| StartMessageIDInclusive | If true, the reader starts at the `StartMessageID`, included. By default (`false`), the reader starts from the "next" message. | false |
+| MessageChannel | MessageChannel sets a `MessageChannel` for the reader. When a message is received, it is pushed to the channel for consumption. | |
+| ReceiverQueueSize | ReceiverQueueSize sets the size of the reader receive queue. | 1000 |
+| SubscriptionRolePrefix | SubscriptionRolePrefix sets the subscription role prefix. | "reader" |
+| ReadCompacted | If enabled, the reader reads messages from the compacted topic rather than the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false |
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+
+msg := pulsar.ProducerMessage{
+ Payload: []byte("Here is some message data"),
+ Key: "message-key",
+ Properties: map[string]string{
+ "foo": "bar",
+ },
+ EventTime: time.Now(),
+ ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if _, err := producer.Send(context.Background(), &msg); err != nil {
+ log.Fatalf("Could not publish message due to: %v", err)
+}
+
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Value` | `Value` and `Payload` are mutually exclusive; use `Value interface{}` for schema messages.
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`OrderingKey` | OrderingKey sets the ordering key of the message.
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | Set the sequence id to assign to the current message
+`DeliverAfter` | Request to deliver the message only after the specified relative delay
+`DeliverAt` | Deliver the message only at or after the specified absolute timestamp
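+
+As an example of `DeliverAfter`, the following sketch publishes a message that is delivered to consumers only after one minute has elapsed. The payload is illustrative, a producer named `producer` is assumed to already exist, and note that delayed delivery takes effect only on shared subscriptions:
+
+```go
+
+msgID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
+ Payload: []byte("delayed hello"),
+ DeliverAfter: time.Minute,
+})
+if err != nil {
+ log.Fatal(err)
+}
+log.Printf("Published delayed message: %v", msgID)
+
+```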
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+
+opts := pulsar.ClientOptions{
+ URL: "pulsar+ssl://my-cluster.com:6651",
+ TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+ Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+
+```
+
+## OAuth2 authentication
+
+To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client accordingly.
+The following example shows how to configure OAuth2 authentication.
+
+```go
+
+oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
+ "type": "client_credentials",
+ "issuerUrl": "https://dev-kt-aa9ne.us.auth0.com",
+ "audience": "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
+ "privateKey": "/path/to/privateKey",
+ "clientId": "0Xx...Yyxeny",
+ })
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+ URL: "pulsar://my-cluster:6650",
+ Authentication: oauth,
+})
+
+```
+
diff --git a/site2/website/versioned_docs/version-2.10.x/client-libraries-java.md b/site2/website/versioned_docs/version-2.10.x/client-libraries-java.md
new file mode 100644
index 0000000000000..0b402f1cc456d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/client-libraries-java.md
@@ -0,0 +1,1542 @@
+---
+id: client-libraries-java
+title: Pulsar Java client
+sidebar_label: "Java"
+original_id: client-libraries-java
+---
+
+````mdx-code-block
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+````
+
+
+You can use a Pulsar Java client to create the Java [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of messages and to perform [administrative tasks](admin-api-overview.md). The current Java client version is **@pulsar:version@**.
+
+All the methods in [producer](#producer), [consumer](#consumer), [readers](#reader) and [TableView](#tableview) of a Java client are thread-safe.
+
+Javadoc for the Pulsar client is divided into two domains by package as follows.
+
+Package | Description | Maven Artifact
+:-------|:------------|:--------------
+[`org.apache.pulsar.client.api`](/api/client) | [The producer and consumer API](/api/client/) | [org.apache.pulsar:pulsar-client:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar)
+[`org.apache.pulsar.client.admin`](/api/admin) | The Java [admin API](admin-api-overview.md) | [org.apache.pulsar:pulsar-client-admin:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-admin%7C@pulsar:version@%7Cjar)
+`org.apache.pulsar.client.all` | Includes both `pulsar-client` and `pulsar-client-admin`. Both `pulsar-client` and `pulsar-client-admin` are shaded packages, and they shade dependencies independently. Consequently, applications using both `pulsar-client` and `pulsar-client-admin` contain redundant shaded classes, and it is easy to introduce new dependencies but forget to update the shading rules. In this case, you can use `pulsar-client-all`, which shades dependencies only once and reduces the size of dependencies. |[org.apache.pulsar:pulsar-client-all:@pulsar:version@](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client-all%7C@pulsar:version@%7Cjar)
+
+This document focuses only on the client API for producing and consuming messages on Pulsar topics. For how to use the Java admin client, see [Pulsar admin interface](admin-api-overview.md).
+
+## Installation
+
+The latest version of the Pulsar Java client library is available via [Maven Central](http://search.maven.org/#artifactdetails%7Corg.apache.pulsar%7Cpulsar-client%7C@pulsar:version@%7Cjar). To use the latest version, add the `pulsar-client` library to your build configuration.
+
+:::tip
+
+- [`pulsar-client`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client) and [`pulsar-client-admin`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-admin) shade dependencies via [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/) to avoid conflicts of the underlying dependency packages (such as Netty). If you do not want to manage dependency conflicts manually, you can use them.
+- [`pulsar-client-original`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-original) and [`pulsar-client-admin-original`](https://search.maven.org/artifact/org.apache.pulsar/pulsar-client-admin-original) **do not** shade dependencies. If you want to manage dependencies manually, you can use them.
+
+:::
+
+### Maven
+
+If you use Maven, add the following information to the `pom.xml` file.
+
+```xml
+
+
+<!-- in your <properties> block -->
+<pulsar.version>@pulsar:version@</pulsar.version>
+
+<!-- in your <dependencies> block -->
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+
+
+```
+
+### Gradle
+
+If you use Gradle, add the following information to the `build.gradle` file.
+
+```groovy
+
+def pulsarVersion = '@pulsar:version@'
+
+dependencies {
+ compile group: 'org.apache.pulsar', name: 'pulsar-client', version: pulsarVersion
+}
+
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+You can assign Pulsar protocol URLs to specific clusters and use the `pulsar` scheme. The default port is `6650`. The following is an example of `localhost`.
+
+```http
+
+pulsar://localhost:6650
+
+```
+
+If you have multiple brokers, the URL is as follows.
+
+```http
+
+pulsar://localhost:6650,localhost:6651,localhost:6652
+
+```
+
+A URL for a production Pulsar cluster is as follows.
+
+```http
+
+pulsar://pulsar.us-west.example.com:6650
+
+```
+
+If you use [TLS](security-tls-authentication.md) authentication, the URL is as follows.
+
+```http
+
+pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+## Client
+
+You can instantiate a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object using just a URL for the target Pulsar [cluster](reference-terminology.md#cluster) like this:
+
+```java
+
+PulsarClient client = PulsarClient.builder()
+ .serviceUrl("pulsar://localhost:6650")
+ .build();
+
+```
+
+If you have multiple brokers, you can initiate a PulsarClient like this:
+
+```java
+
+PulsarClient client = PulsarClient.builder()
+ .serviceUrl("pulsar://localhost:6650,localhost:6651,localhost:6652")
+ .build();
+
+```
+
+> ### Default broker URLs for standalone clusters
+> If you run a cluster in [standalone mode](getting-started-standalone.md), the broker is available at the `pulsar://localhost:6650` URL by default.
+
+If you create a client, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
+
+| Name | Type | Description | Default |
+|---|---|---|---|
+`serviceUrl` | String | Service URL provider for Pulsar service | None
+`authPluginClassName` | String | Name of the authentication plugin | None
+`authParams` | String | Parameters for the authentication plugin. **Example**: key1:val1,key2:val2 | None
+`operationTimeoutMs`|long|Operation timeout|30000
+`statsIntervalSeconds`|long|Interval between each stats info. Stats is activated with a positive `statsInterval`. Set `statsIntervalSeconds` to at least 1 second.|60
+`numIoThreads`| int| The number of threads used for handling connections to brokers | 1
+`numListenerThreads`|int|The number of threads used for handling message listeners. The listener thread pool is shared across all the consumers and readers using the "listener" model to get messages. For a given consumer, the listener is always invoked from the same thread to ensure ordering. If you want multiple threads to process a single topic, you need to create a [`shared`](concepts-messaging.md#shared) subscription and multiple consumers for this subscription. This does not ensure ordering.| 1
+`useTcpNoDelay`| boolean| Whether to use TCP no-delay flag on the connection to disable Nagle algorithm |true
+`enableTls` |boolean | Whether to use TLS encryption on the connection. Note that this parameter is **deprecated**. If you want to enable TLS, use `pulsar+ssl://` in `serviceUrl` instead. | false
+ `tlsTrustCertsFilePath` |string |Path to the trusted TLS certificate file|None
+`tlsAllowInsecureConnection`|boolean|Whether the Pulsar client accepts untrusted TLS certificate from broker | false
+`tlsHostnameVerificationEnable` |boolean | Whether to enable TLS hostname verification|false
+`concurrentLookupRequest`|int|The number of concurrent lookup requests allowed to send on each broker connection to prevent overload on broker|5000
+`maxLookupRequest`|int|The maximum number of lookup requests allowed on each broker connection to prevent overload on broker | 50000
+`maxNumberOfRejectedRequestPerConnection`|int|The maximum number of rejected requests of a broker in a certain time frame (30 seconds) after the current connection is closed and the client creates a new connection to connect to a different broker|50
+`keepAliveIntervalSeconds`|int|Seconds of keeping alive interval for each client broker connection|30
+`connectionTimeoutMs`|int|Duration of waiting for a connection to a broker to be established. If the duration passes without a response from a broker, the connection attempt is dropped|10000
+`requestTimeoutMs`|int|Maximum duration for completing a request |60000
+`defaultBackoffIntervalNanos`|int| Default duration for a backoff interval | TimeUnit.MILLISECONDS.toNanos(100);
+`maxBackoffIntervalNanos`|long|Maximum duration for a backoff interval|TimeUnit.SECONDS.toNanos(30)
+`socks5ProxyAddress`|SocketAddress|SOCKS5 proxy address | None
+`socks5ProxyUsername`|string|SOCKS5 proxy username | None
+`socks5ProxyPassword`|string|SOCKS5 proxy password | None
+
+Check out the Javadoc for the {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} class for a full list of configurable parameters.
+
+> In addition to client-level configuration, you can also apply [producer](#configure-producer) and [consumer](#configure-consumer) specific configuration as described in sections below.
+
+### Client memory allocator configuration
+You can set the client memory allocator configurations through Java properties.
+
+| Property | Type | Description | Default | Available values |
+|---|---|---|---|---|
+`pulsar.allocator.pooled` | String | If set to `true`, the client uses a direct memory pool. If set to `false`, the client uses heap memory without a pool. | true | true, false
+`pulsar.allocator.exit_on_oom` | String | Whether to exit the JVM when OOM happens. | false | true, false
+`pulsar.allocator.leak_detection` | String | The leak detection policy for the Pulsar bytebuf allocator. **Disabled**: no leak detection and no overhead. **Simple**: instruments 1% of the allocated buffers to track leaks. **Advanced**: instruments 1% of the allocated buffers to track leaks, reporting stack traces of places where the buffers are used. **Paranoid**: instruments 100% of the allocated buffers to track leaks, reporting stack traces of places where the buffers are used, and introduces significant overhead. | Disabled | Disabled, Simple, Advanced, Paranoid
+`pulsar.allocator.out_of_memory_policy` | String | When an OOM occurs, whether the client throws an exception or falls back to heap memory. | FallbackToHeap | ThrowException, FallbackToHeap
+
+**Example**:
+
+```
+
+-Dpulsar.allocator.pooled=true
+-Dpulsar.allocator.exit_on_oom=false
+-Dpulsar.allocator.leak_detection=Disabled
+-Dpulsar.allocator.out_of_memory_policy=ThrowException
+
+```
+
+### Cluster-level failover
+
+This chapter describes the concept, benefits, use cases, constraints, usage, working principles, and more information about the cluster-level failover. It contains the following sections:
+
+- [What is cluster-level failover?](#what-is-cluster-level-failover)
+
+ * [Concept of cluster-level failover](#concept-of-cluster-level-failover)
+
+ * [Why use cluster-level failover?](#why-use-cluster-level-failover)
+
+ * [When to use cluster-level failover?](#when-to-use-cluster-level-failover)
+
+ * [When cluster-level failover is triggered?](#when-cluster-level-failover-is-triggered)
+
+ * [Why does cluster-level failover fail?](#why-does-cluster-level-failover-fail)
+
+ * [What are the limitations of cluster-level failover?](#what-are-the-limitations-of-cluster-level-failover)
+
+ * [What are the relationships between cluster-level failover and geo-replication?](#what-are-the-relationships-between-cluster-level-failover-and-geo-replication)
+
+- [How to use cluster-level failover?](#how-to-use-cluster-level-failover)
+
+- [How does cluster-level failover work?](#how-does-cluster-level-failover-work)
+
+> #### What is cluster-level failover?
+
+This chapter helps you better understand the concept of cluster-level failover.
+> ##### Concept of cluster-level failover
+
+````mdx-code-block
+
+
+
+Automatic cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters automatically and seamlessly when it detects a failover event, based on the detection policy configured by **users**.
+
+![Automatic cluster-level failover](/assets/cluster-level-failover-1.png)
+
+
+
+
+Controlled cluster-level failover supports Pulsar clients switching from a primary cluster to one or several backup clusters. The switchover is manually set by **administrators**.
+
+![Controlled cluster-level failover](/assets/cluster-level-failover-2.png)
+
+
+
+
+````
+
+Once the primary cluster functions again, Pulsar clients can switch back to it. Most of the time, users do not even notice the switchover and can keep using their applications and services without interruptions or timeouts.
+
+> ##### Why use cluster-level failover?
+
+The cluster-level failover provides fault tolerance, continuous availability, and high availability together. It brings a number of benefits, including but not limited to:
+
+* Reduced cost: services can be switched and recovered automatically with no data loss.
+
+* Simplified management: businesses can operate on an “always-on” basis since no immediate user intervention is required.
+
+* Improved stability and robustness: it ensures continuous performance and minimizes service downtime.
+
+> ##### When to use cluster-level failover?
+
+The cluster-level failover protects your environment in a number of ways, including but not limited to:
+
+* Disaster recovery: cluster-level failover can automatically and seamlessly transfer the production workload on a primary cluster to one or several backup clusters, which ensures minimum data loss and reduced recovery time.
+
+* Planned migration: if you want to migrate production workloads from an old cluster to a new cluster, you can improve the migration efficiency with cluster-level failover. For example, you can test whether the data migration goes smoothly in case of a failover event, identify possible issues and risks before the migration.
+
+> ##### When cluster-level failover is triggered?
+
+````mdx-code-block
+
+
+
+Automatic cluster-level failover is triggered when Pulsar clients cannot connect to the primary cluster for a prolonged period of time. This can be caused by any number of reasons, including but not limited to:
+
+* Network failure: internet connection is lost.
+
+* Power failure: shutdown time of a primary cluster exceeds time limits.
+
+* Service error: errors occur on a primary cluster (for example, the primary cluster does not function because of time limits).
+
+* Crashed storage space: the primary cluster does not have enough storage space, but the corresponding storage space on the backup server functions normally.
+
+
+
+
+Controlled cluster-level failover is triggered when administrators set the switchover manually.
+
+
+
+
+````
+
+> ##### Why does cluster-level failover fail?
+
+Cluster-level failover does not succeed if the backup cluster is unreachable by active Pulsar clients. This can happen for many reasons, including but not limited to:
+
+* Power failure: the backup cluster is shut down or does not function normally.
+
+* Crashed storage space: primary and backup clusters do not have enough storage space.
+
+* The failover is initiated, but no cluster can assume the role of an available cluster due to errors, and the primary cluster is not able to provide service normally.
+
+* You manually initiate a switchover, but services cannot be switched to the backup cluster, so the system attempts to switch services back to the primary cluster.
+
+* Authentication or authorization fails between 1) the primary and backup clusters, or 2) two backup clusters.
+
+> ##### What are the limitations of cluster-level failover?
+
+Currently, cluster-level failover can perform probes to prevent data loss, but it cannot check the status of backup clusters. If backup clusters are not healthy, you cannot produce or consume data.
+
+> ##### What are the relationships between cluster-level failover and geo-replication?
+
+The cluster-level failover is an extension of [geo-replication](concepts-replication.md) that improves stability and robustness. Cluster-level failover depends on geo-replication, but the two have the following **differences**.
+
+Influence |Cluster-level failover|Geo-replication
+|---|---|---
+Do administrators have heavy workloads?|No or maybe.<br/>- For the **automatic** cluster-level failover, the cluster switchover is triggered automatically based on the policies set by **users**.<br/>- For the **controlled** cluster-level failover, the switchover is triggered manually by **administrators**.|Yes.<br/>If a cluster fails, immediate administrator intervention is required.
+Result in data loss?|No.<br/>For both **automatic** and **controlled** cluster-level failover, if the failed primary cluster doesn't replicate messages immediately to the backup cluster, the Pulsar client can't consume the non-replicated messages. After the primary cluster is restored and the Pulsar client switches back, the non-replicated data can still be consumed by the Pulsar client. Consequently, the data is not lost.<br/>- For the **automatic** cluster-level failover, services can be switched and recovered automatically with no data loss.<br/>- For the **controlled** cluster-level failover, services can be switched and recovered manually, and data loss may happen.|Yes.<br/>Pulsar clients and DNS systems have caches. When administrators switch the DNS from a primary cluster to a backup cluster, it takes some time for the caches to time out, which delays client recovery and causes failures to produce or consume messages.
+Result in Pulsar client failure? |No or maybe.<br/>- For **automatic** cluster-level failover, services can be switched and recovered automatically and the Pulsar client does not fail.<br/>- For **controlled** cluster-level failover, services can be switched and recovered manually, but the Pulsar client fails before administrators can take action.|Same as above.
+
+> #### How to use cluster-level failover?
+
+This section guides you through every step on how to configure cluster-level failover.
+
+**Tip**
+
+- You should configure cluster-level failover only when the cluster contains sufficient resources to handle all possible consequences. Workload intensity on the backup cluster may increase significantly.
+
+- Connect clusters to an uninterruptible power supply (UPS) unit to reduce the risk of unexpected power loss.
+
+**Requirements**
+
+* Pulsar client 2.10 or later versions.
+
+* For backup clusters:
+
+ * The number of BookKeeper nodes should be equal to or greater than the ensemble quorum.
+
+ * The number of ZooKeeper nodes should be equal to or greater than 3.
+
+* **Turn on geo-replication** between the primary cluster and any dependent cluster (primary to backup or backup to backup) to prevent data loss.
+
+* Set `replicateSubscriptionState` to `true` when creating consumers.
+
+````mdx-code-block
+
+
+
+This is an example of how to construct a Java Pulsar client to use automatic cluster-level failover. The switchover is triggered automatically.
+
+```java
+
+private PulsarClient getAutoFailoverClient() throws PulsarClientException {
+
+    ServiceUrlProvider failover = AutoClusterFailover.builder()
+            .primary("pulsar://localhost:6650")
+            .secondary(Arrays.asList("pulsar://other1:6650", "pulsar://other2:6650"))
+            .failoverDelay(30, TimeUnit.SECONDS)
+            .switchBackDelay(60, TimeUnit.SECONDS)
+            .checkInterval(1000, TimeUnit.MILLISECONDS)
+            .secondaryTlsTrustCertsFilePath("/path/to/ca.cert.pem")
+            .secondaryAuthentication("org.apache.pulsar.client.impl.auth.AuthenticationTls",
+                    "tlsCertFile:/path/to/my-role.cert.pem,tlsKeyFile:/path/to/my-role.key-pk8.pem")
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .build();
+
+    failover.initialize(pulsarClient);
+    return pulsarClient;
+}
+
+```
+
+Configure the following parameters:
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`primary`|N/A|Yes|Service URL of the primary cluster.
+`secondary`|N/A|Yes|Service URL(s) of one or several backup clusters.<br/>You can specify several backup clusters using a comma-separated list.<br/>Note that:<br/>- The backup cluster is chosen in the sequence shown in the list.<br/>- If all backup clusters are available, the Pulsar client chooses the first backup cluster.
+`failoverDelay`|N/A|Yes|The delay before the Pulsar client switches from the primary cluster to the backup cluster.<br/>Automatic failover is controlled by a probe task:<br/>1) The probe task first checks the health status of the primary cluster.<br/>2) If the probe task finds that the continuous failure time of the primary cluster exceeds `failoverDelay`, it switches the Pulsar client to the backup cluster.
+`switchBackDelay`|N/A|Yes|The delay before the Pulsar client switches from the backup cluster to the primary cluster.<br/>Automatic switchback is controlled by a probe task:<br/>1) After the Pulsar client switches from the primary cluster to the backup cluster, the probe task continues to check the status of the primary cluster.<br/>2) If the primary cluster functions well and continuously remains active longer than `switchBackDelay`, the Pulsar client switches back to the primary cluster.
+`checkInterval`|30s|No|Frequency of performing the probe task (in seconds).
+`secondaryTlsTrustCertsFilePath`|N/A|No|Path to the trusted TLS certificate file of the backup cluster.
+`secondaryAuthentication`|N/A|No|Authentication of the backup cluster.
+
+
+
+This is an example of how to construct a Java Pulsar client to use controlled cluster-level failover. The switchover is triggered by administrators manually.
+
+**Note**: you can have one or several backup clusters, but you can only specify one.
+
+```java
+
+public PulsarClient getControlledFailoverClient() throws IOException {
+    Map<String, String> header = new HashMap<>();
+    header.put("service_user_id", "my-user");
+    header.put("service_password", "tiger");
+    header.put("clusterA", "tokenA");
+    header.put("clusterB", "tokenB");
+
+    ServiceUrlProvider provider = ControlledClusterFailover.builder()
+            .defaultServiceUrl("pulsar://localhost:6650")
+            .checkInterval(1, TimeUnit.MINUTES)
+            .urlProvider("http://localhost:8080/test")
+            .urlProviderHeader(header)
+            .build();
+
+    PulsarClient pulsarClient = PulsarClient.builder()
+            .build();
+
+    provider.initialize(pulsarClient);
+    return pulsarClient;
+}
+
+```
+
+Parameter|Default value|Required?|Description
+|---|---|---|---
+`defaultServiceUrl`|N/A|Yes|Pulsar service URL.
+`checkInterval`|30s|No|Frequency of performing the probe task (in seconds).
+`urlProvider`|N/A|Yes|URL provider service.
+`urlProviderHeader`|N/A|No|`urlProviderHeader` is a map containing tokens and credentials.<br/>If you enable authentication or authorization between Pulsar clients and the primary and backup clusters, you need to provide `urlProviderHeader`.
+
+Here is an example of how `urlProviderHeader` works.
+
+![How urlProviderHeader works](/assets/cluster-level-failover-3.png)
+
+Assume that you want to connect Pulsar client 1 to cluster A.
+
+1. Pulsar client 1 sends the token *t1* to the URL provider service.
+
+2. The URL provider service returns the credential *c1* and the cluster A URL to the Pulsar client.
+
+   The URL provider service manages all tokens and credentials. It returns different credentials and target cluster URLs to different Pulsar clients based on their tokens.
+
+ **Note**: **the credential must be in a JSON file and contain parameters as shown**.
+
+   ```json
+
+   {
+     "serviceUrl": "pulsar+ssl://target:6651",
+     "tlsTrustCertsFilePath": "/security/ca.cert.pem",
+     "authPluginClassName": "org.apache.pulsar.client.impl.auth.AuthenticationTls",
+     "authParamsString": " \"tlsCertFile\": \"/security/client.cert.pem\", \"tlsKeyFile\": \"/security/client-pk8.pem\" "
+   }
+
+   ```
+
+3. Pulsar client 1 connects to cluster A using credential *c1*.
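The token-to-URL lookup such a URL provider service might perform can be sketched in plain Java (illustrative only — `UrlProviderService` is a hypothetical name; the real service is whatever HTTP endpoint you configure in `urlProvider`):

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of the token -> credential/URL lookup performed by the
// URL provider service. A real deployment serves this over HTTP.
class UrlProviderService {
    private final Map<String, String> tokenToServiceUrl = new HashMap<>();

    // Register the target cluster URL that a given client token maps to.
    void register(String token, String serviceUrl) {
        tokenToServiceUrl.put(token, serviceUrl);
    }

    // Returns the target cluster URL for the token, or null if the token is unknown.
    String resolve(String token) {
        return tokenToServiceUrl.get(token);
    }
}
```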
+
+
+
+
+````
+
+> #### How does cluster-level failover work?
+
+This chapter explains the working process of cluster-level failover. For more implementation details, see [PIP-121](https://github.com/apache/pulsar/issues/13315).
+
+````mdx-code-block
+
+
+
+In an automatic failover cluster, the primary and backup clusters are aware of each other's availability. The automatic failover cluster performs the following actions without administrator intervention:
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+
+2. If the probe task finds the failure time of the primary cluster exceeds the time set in the `failoverDelay` parameter, it searches backup clusters for an available healthy cluster.
+
+ 2a) If there are healthy backup clusters, the Pulsar client switches to a backup cluster in the order defined in `secondary`.
+
+ 2b) If there is no healthy backup cluster, the Pulsar client does not perform the switchover, and the probe task continues to look for an available backup cluster.
+
+3. The probe task checks whether the primary cluster functions well or not.
+
+ 3a) If the primary cluster comes back and the continuous healthy time exceeds the time set in `switchBackDelay`, the Pulsar client switches back to the primary cluster.
+
+ 3b) If the primary cluster does not come back, the Pulsar client does not perform the switchover.
+
+![Workflow of automatic failover cluster](/assets/cluster-level-failover-4.png)
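The decision rules above can be sketched in plain Java (illustrative only — `FailoverProbe` and its methods are hypothetical names, not the Pulsar API; the real logic lives inside `AutoClusterFailover`):

```java
import java.util.List;

// Simulates one probe task: switch to the first healthy backup after the
// primary has been unhealthy for failoverDelay, and switch back after the
// primary has been healthy again for switchBackDelay.
class FailoverProbe {
    private final String primaryUrl;
    private final List<String> secondaries;
    private final long failoverDelayMs;
    private final long switchBackDelayMs;
    private String activeUrl;
    private long primaryUnhealthySinceMs = -1; // -1 means "currently healthy"
    private long primaryHealthySinceMs = -1;

    FailoverProbe(String primaryUrl, List<String> secondaries,
                  long failoverDelayMs, long switchBackDelayMs) {
        this.primaryUrl = primaryUrl;
        this.activeUrl = primaryUrl;
        this.secondaries = secondaries;
        this.failoverDelayMs = failoverDelayMs;
        this.switchBackDelayMs = switchBackDelayMs;
    }

    // Called once per checkInterval with the current clock and observed health.
    String probe(long nowMs, boolean primaryHealthy, List<String> healthyBackups) {
        if (activeUrl.equals(primaryUrl)) {
            if (primaryHealthy) {
                primaryUnhealthySinceMs = -1;
            } else {
                if (primaryUnhealthySinceMs < 0) primaryUnhealthySinceMs = nowMs;
                // Step 2: continuous failure time exceeds failoverDelay -> switch.
                if (nowMs - primaryUnhealthySinceMs >= failoverDelayMs) {
                    for (String backup : secondaries) {
                        if (healthyBackups.contains(backup)) { // 2a: first healthy backup in order
                            activeUrl = backup;
                            primaryHealthySinceMs = -1;
                            break;
                        }
                    } // 2b: no healthy backup -> keep probing
                }
            }
        } else {
            // Step 3: switch back after the primary stays healthy for switchBackDelay.
            if (primaryHealthy) {
                if (primaryHealthySinceMs < 0) primaryHealthySinceMs = nowMs;
                if (nowMs - primaryHealthySinceMs >= switchBackDelayMs) {
                    activeUrl = primaryUrl;
                    primaryUnhealthySinceMs = -1;
                }
            } else {
                primaryHealthySinceMs = -1; // 3b: primary still down, stay on backup
            }
        }
        return activeUrl;
    }
}
```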
+
+
+
+
+1. The Pulsar client runs a probe task at intervals defined in `checkInterval`.
+
+2. The probe task fetches the service URL configuration from the URL provider service, which is configured by `urlProvider`.
+
+ 2a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
+
+ 2b) If the service URL configuration is not changed, the Pulsar client does not perform the switchover.
+
+3. If the Pulsar client switches to the target cluster, the probe task continues to fetch service URL configuration from the URL provider service at intervals defined in `checkInterval`.
+
+ 3a) If the service URL configuration is changed, the probe task switches to the target cluster without checking the health status of the target cluster.
+
+ 3b) If the service URL configuration is not changed, it does not perform the switchover.
+
+![Workflow of controlled failover cluster](/assets/cluster-level-failover-5.png)
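The controlled probe loop above can be sketched in plain Java (illustrative only; `ControlledProbe` is a hypothetical name, not the Pulsar API — the fetched URL stands in for the HTTP response from the `urlProvider` service):

```java
// Simulates one controlled-failover probe: the client switches whenever the
// URL returned by the provider changes, without checking target health.
class ControlledProbe {
    private String currentUrl;

    ControlledProbe(String defaultServiceUrl) {
        this.currentUrl = defaultServiceUrl;
    }

    // Called once per checkInterval with the URL fetched from the provider.
    String probe(String fetchedUrl) {
        if (fetchedUrl != null && !fetchedUrl.equals(currentUrl)) {
            currentUrl = fetchedUrl; // 2a / 3a: configuration changed -> switch
        }                            // 2b / 3b: unchanged -> no switchover
        return currentUrl;
    }
}
```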
+
+
+
+
+````
+
+## Producer
+
+In Pulsar, producers write messages to topics. Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object (as in the section [above](#client-configuration)), you can create a {@inject: javadoc:Producer:/client/org/apache/pulsar/client/api/Producer} for a specific Pulsar [topic](reference-terminology.md#topic).
+
+```java
+
+Producer producer = client.newProducer()
+ .topic("my-topic")
+ .create();
+
+// You can then send messages to the broker and topic you specified:
+producer.send("My message".getBytes());
+
+```
+
+By default, producers produce messages that consist of byte arrays. You can produce different types by specifying a message [schema](#schema).
+
+```java
+
+Producer stringProducer = client.newProducer(Schema.STRING)
+ .topic("my-topic")
+ .create();
+stringProducer.send("My message");
+
+```
+
+> Make sure that you close your producers, consumers, and clients when you do not need them.
+
+> ```java
+>
+> producer.close();
+> consumer.close();
+> client.close();
+>
+>
+> ```
+
+>
+> Close operations can also be asynchronous:
+
+> ```java
+>
+> producer.closeAsync()
+> .thenRun(() -> System.out.println("Producer closed"))
+> .exceptionally((ex) -> {
+> System.err.println("Failed to close producer: " + ex);
+> return null;
+> });
+>
+>
+> ```
+
+
+### Configure producer
+
+If you instantiate a `Producer` object by specifying only a topic name, as in the example above, the producer uses the default configuration.
+
+When you create a producer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
+
+Name|Type|Description|Default
+|---|---|---|---
+`topicName`|string|Topic name|null
+`producerName`|string|Producer name|null
+`sendTimeoutMs`|long|Message send timeout in ms.<br/>If a message is not acknowledged by a server before the `sendTimeout` expires, an error occurs.|30000
+`blockIfQueueFull`|boolean|If it is set to `true`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of the producer block, rather than failing and throwing errors.<br/>If it is set to `false`, when the outgoing message queue is full, the `Send` and `SendAsync` methods of the producer fail and `ProducerQueueIsFullError` exceptions occur.<br/>The `MaxPendingMessages` parameter determines the size of the outgoing message queue.|false
+`maxPendingMessages`|int|The maximum size of a queue holding pending messages.<br/>For example, a message waiting to receive an acknowledgment from a [broker](reference-terminology.md#broker).<br/>By default, when the queue is full, all calls to the `Send` and `SendAsync` methods fail **unless** you set `BlockIfQueueFull` to `true`.|1000
+`maxPendingMessagesAcrossPartitions`|int|The maximum number of pending messages across partitions.<br/>Use the setting to lower the max pending messages for each partition ({@link #setMaxPendingMessages(int)}) if the total number exceeds the configured value.|50000
+`messageRoutingMode`|MessageRoutingMode|Message routing logic for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics).<br/>Apply the logic only when setting no key on messages.<br/>Available options are as follows:<br/>`pulsar.RoundRobinDistribution`: round robin<br/>`pulsar.UseSinglePartition`: publish all messages to a single partition<br/>`pulsar.CustomPartition`: a custom partitioning scheme|`pulsar.RoundRobinDistribution`
+`hashingScheme`|HashingScheme|Hashing function determining the partition where you publish a particular message (**partitioned topics only**).<br/>Available options are as follows:<br/>`pulsar.JavastringHash`: the equivalent of `string.hashCode()` in Java<br/>`pulsar.Murmur3_32Hash`: applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function<br/>`pulsar.BoostHash`: applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library|`HashingScheme.JavastringHash`
+`cryptoFailureAction`|ProducerCryptoFailureAction|The action the producer takes when encryption fails.<br/>**FAIL**: if encryption fails, unencrypted messages fail to send.<br/>**SEND**: if encryption fails, unencrypted messages are sent.|`ProducerCryptoFailureAction.FAIL`
+`batchingMaxPublishDelayMicros`|long|Batching time period of sending messages.|TimeUnit.MILLISECONDS.toMicros(1)
+`batchingMaxMessages`|int|The maximum number of messages permitted in a batch.|1000
+`batchingEnabled`|boolean|Enable batching of messages.|true
+`chunkingEnabled`|boolean|Enable chunking of messages.|false
+`compressionType`|CompressionType|Message data compression type used by a producer.<br/>Available options:<br/>[`LZ4`](https://github.com/lz4/lz4)<br/>[`ZLIB`](https://zlib.net/)<br/>[`ZSTD`](https://facebook.github.io/zstd/)<br/>[`SNAPPY`](https://google.github.io/snappy/)|No compression
+`initialSubscriptionName`|string|Use this configuration to automatically create an initial subscription when creating a topic. If this field is not set, the initial subscription is not created.|null
+
+You can configure parameters if you do not want to use the default configuration.
+
+For a full list, see the Javadoc for the {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder} class. The following is an example.
+
+```java
+
+Producer producer = client.newProducer()
+ .topic("my-topic")
+ .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
+ .sendTimeout(10, TimeUnit.SECONDS)
+ .blockIfQueueFull(true)
+ .create();
+
+```
+
+### Message routing
+
+When using partitioned topics, you can specify the routing mode whenever you publish messages using a producer. For more information on specifying a routing mode using the Java client, see the [Partitioned Topics cookbook](cookbooks-partitioned.md).
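As a rough illustration of key-based routing, the `JavaStringHash` scheme maps a message key to a partition using the key's `String.hashCode()`. The following is a conceptual sketch only (`KeyRouter` is an illustrative name, not part of the Pulsar API):

```java
// Conceptual sketch: pick a partition from a message key, the way a
// hashCode()-based scheme would. Not the Pulsar client's internal code.
class KeyRouter {
    static int choosePartition(String key, int numPartitions) {
        // Math.abs alone is unsafe for Integer.MIN_VALUE, so mask the sign bit.
        int hash = key.hashCode() & Integer.MAX_VALUE;
        return hash % numPartitions;
    }
}
```

The point of hashing on the key is determinism: all messages with the same key land on the same partition, which preserves per-key ordering.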
+
+### Async send
+
+You can publish messages [asynchronously](concepts-messaging.md#send-modes) using the Java client. With async send, the producer puts the message in a blocking queue and returns immediately. Then the client library sends the message to the broker in the background. If the queue is full (max size configurable), the producer is blocked or fails immediately when calling the API, depending on arguments passed to the producer.
+
+The following is an example.
+
+```java
+
+producer.sendAsync("my-async-message".getBytes()).thenAccept(msgId -> {
+ System.out.println("Message with ID " + msgId + " successfully sent");
+});
+
+```
+
+As you can see from the example above, async send operations return a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId} wrapped in a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
+
+### Configure messages
+
+In addition to a value, you can set additional items on a given message:
+
+```java
+
+producer.newMessage()
+ .key("my-message-key")
+ .value("my-async-message".getBytes())
+ .property("my-key", "my-value")
+ .property("my-other-key", "my-other-value")
+ .send();
+
+```
+
+You can terminate the builder chain with `sendAsync()` and get a future in return.
+
+### Enable chunking
+
+Message [chunking](concepts-messaging.md#chunking) enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
+
+The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
+
+```java
+
+Producer producer = client.newProducer()
+ .topic(topic)
+ .enableChunking(true)
+ .enableBatching(false)
+ .create();
+
+```
+
+By default, the producer chunks a large message based on the maximum message size (`maxMessageSize`) configured at the broker (for example, 5 MB). However, the client can also configure the maximum chunk size using the producer configuration `chunkMaxMessageSize`.
+> **Note:** To enable chunking, you need to disable batching (`enableBatching`=`false`) at the same time.
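Conceptually, the producer splits a payload larger than the max message size into fixed-size chunks, and the consumer reassembles them in order. The following plain-Java sketch illustrates the idea only; it is not the Pulsar client's internal implementation, and `ChunkDemo` is an illustrative name:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustration of chunking: a payload of n bytes becomes
// ceil(n / maxMessageSize) chunks, reassembled in order on the other side.
class ChunkDemo {
    static List<byte[]> split(byte[] payload, int maxMessageSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += maxMessageSize) {
            int end = Math.min(offset + maxMessageSize, payload.length);
            chunks.add(Arrays.copyOfRange(payload, offset, end));
        }
        return chunks;
    }

    static byte[] join(List<byte[]> chunks) {
        int total = chunks.stream().mapToInt(c -> c.length).sum();
        byte[] out = new byte[total];
        int pos = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, pos, c.length);
            pos += c.length;
        }
        return out;
    }
}
```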
+
+## Consumer
+
+In Pulsar, consumers subscribe to topics and handle messages that producers publish to those topics. You can instantiate a new [consumer](reference-terminology.md#consumer) by first instantiating a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object and passing it a URL for a Pulsar broker (as [above](#client-configuration)).
+
+Once you've instantiated a {@inject: javadoc:PulsarClient:/client/org/apache/pulsar/client/api/PulsarClient} object, you can create a {@inject: javadoc:Consumer:/client/org/apache/pulsar/client/api/Consumer} by specifying a [topic](reference-terminology.md#topic) and a [subscription](concepts-messaging.md#subscription-types).
+
+```java
+
+Consumer consumer = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .subscribe();
+
+```
+
+The `subscribe` method automatically subscribes the consumer to the specified topic and subscription. One way to make the consumer listen on the topic is to set up a `while` loop. In this example loop, the consumer listens for messages, prints the contents of any received message, and then [acknowledges](reference-terminology.md#acknowledgment-ack) that the message has been processed. If the processing logic fails, you can use [negative acknowledgement](reference-terminology.md#acknowledgment-ack) to redeliver the message later.
+
+```java
+
+while (true) {
+ // Wait for a message
+ Message msg = consumer.receive();
+
+ try {
+ // Do something with the message
+ System.out.println("Message received: " + new String(msg.getData()));
+
+ // Acknowledge the message so that it can be deleted by the message broker
+ consumer.acknowledge(msg);
+ } catch (Exception e) {
+ // Message failed to process, redeliver later
+ consumer.negativeAcknowledge(msg);
+ }
+}
+
+```
+
+If you don't want to block your main thread and would rather listen constantly for new messages, consider using a `MessageListener`.
+
+```java
+
+MessageListener myMessageListener = (consumer, msg) -> {
+ try {
+ System.out.println("Message received: " + new String(msg.getData()));
+ consumer.acknowledge(msg);
+ } catch (Exception e) {
+ consumer.negativeAcknowledge(msg);
+ }
+};
+
+Consumer consumer = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .messageListener(myMessageListener)
+ .subscribe();
+
+```
+
+### Configure consumer
+
+If you instantiate a `Consumer` object by specifying only a topic and subscription name as in the example above, the consumer uses the default configuration.
+
+When you create a consumer, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
+
+Name|Type|Description|Default
+|---|---|---|---
+`topicNames`|Set&lt;String&gt;|Topic name|Sets.newTreeSet()
+`topicsPattern`|Pattern|Topic pattern|None
+`subscriptionName`|String|Subscription name|None
+`subscriptionType`|SubscriptionType|Subscription type.<br/>Four subscription types are available:<br/>Exclusive<br/>Failover<br/>Shared<br/>Key_Shared|SubscriptionType.Exclusive
+`receiverQueueSize`|int|Size of a consumer's receiver queue.<br/>For example, the number of messages accumulated by a consumer before an application calls `Receive`.<br/>A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000
+`acknowledgementsGroupTimeMicros`|long|Group a consumer acknowledgment for a specified time.<br/>By default, a consumer uses a 100ms grouping time to send out acknowledgments to a broker.<br/>Setting a group time of 0 sends out acknowledgments immediately.<br/>A longer ack group time is more efficient at the expense of a slight increase in message re-deliveries after a failure.|TimeUnit.MILLISECONDS.toMicros(100)
+`negativeAckRedeliveryDelayMicros`|long|Delay to wait before redelivering messages that failed to be processed.<br/>When an application uses {@link Consumer#negativeAcknowledge(Message)}, failed messages are redelivered after a fixed timeout.|TimeUnit.MINUTES.toMicros(1)
+`maxTotalReceiverQueueSizeAcrossPartitions`|int|The max total receiver queue size across partitions.<br/>This setting reduces the receiver queue size for individual partitions if the total receiver queue size exceeds this value.|50000
+`consumerName`|String|Consumer name|null
+`ackTimeoutMillis`|long|Timeout of unacked messages|0
+`tickDurationMillis`|long|Granularity of the ack-timeout redelivery.<br/>Using a higher `tickDurationMillis` reduces the memory overhead to track messages when setting the ack-timeout to a bigger value (for example, 1 hour).|1000
+`priorityLevel`|int|Priority level for a consumer to which a broker gives more priority while dispatching messages in the Shared subscription type.<br/>The broker follows descending priorities, for example, 0=max-priority, 1, 2,...<br/>In the Shared subscription type, the broker **first dispatches messages to the consumers on the highest priority level if they have permits**. Otherwise, the broker considers consumers on the next priority level.<br/>**Example 1**: if a subscription has consumerA with `priorityLevel` 0 and consumerB with `priorityLevel` 1, the broker **only dispatches messages to consumerA until it runs out of permits** and then starts dispatching messages to consumerB.<br/>**Example 2**: given consumers with (priority level, permits) C1 (0, 2), C2 (0, 1), C3 (0, 1), C4 (1, 2), and C5 (1, 1), the order in which the broker dispatches messages to consumers is: C1, C2, C3, C1, C4, C5, C4.|0
+`cryptoFailureAction`|ConsumerCryptoFailureAction|The action a consumer takes when it receives a message that cannot be decrypted.<br/>**FAIL**: this is the default option to fail messages until crypto succeeds.<br/>**DISCARD**: silently acknowledge and do not deliver the message to the application.<br/>**CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message.<br/>Decompression of the message fails.<br/>If messages contain batch messages, the client is not able to retrieve individual messages in the batch.<br/>A delivered encrypted message contains an {@link EncryptionContext} with the encryption and compression information, which the application can use to decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
+`properties`|SortedMap|A name or value property of this consumer.<br/>`properties` is application-defined metadata attached to a consumer.<br/>When getting topic stats, this metadata is associated with the consumer stats for easier identification.|new TreeMap()
+`readCompacted`|boolean|If enabling `readCompacted`, a consumer reads messages from a compacted topic rather than reading the full message backlog of a topic.<br/>A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog that has been compacted. Beyond that point, messages are sent as normal.<br/>Only enable `readCompacted` on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions).<br/>Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to a subscription call throwing a `PulsarClientException`.|false
+`subscriptionInitialPosition`|SubscriptionInitialPosition|Initial position at which to set the cursor when subscribing to a topic for the first time.|SubscriptionInitialPosition.Latest
+`patternAutoDiscoveryPeriod`|int|Topic auto discovery period when using a pattern for a topic's consumer.<br/>The default and minimum value is 1 minute.|1
+`regexSubscriptionMode`|RegexSubscriptionMode|When subscribing to a topic using a regular expression, you can pick a certain type of topics.<br/>**PersistentOnly**: only subscribe to persistent topics.<br/>**NonPersistentOnly**: only subscribe to non-persistent topics.<br/>**AllTopics**: subscribe to both persistent and non-persistent topics.|RegexSubscriptionMode.PersistentOnly
+`deadLetterPolicy`|DeadLetterPolicy|Dead letter policy for consumers.<br/>By default, some messages are probably redelivered many times, possibly without ever stopping.<br/>With the dead letter mechanism, messages have a max redelivery count. **When the maximum number of redeliveries is exceeded, messages are sent to the dead letter topic and acknowledged automatically.**<br/>You can enable the dead letter mechanism by setting `deadLetterPolicy`.<br/>The default dead letter topic name is `{TopicName}-{Subscription}-DLQ`.<br/>To set a custom dead letter topic name: `client.newConsumer().deadLetterPolicy(DeadLetterPolicy.builder().maxRedeliverCount(10).deadLetterTopic("your-topic-name").build()).subscribe();`<br/>When specifying the dead letter policy while not specifying `ackTimeoutMillis`, the ack timeout is set to 30000 milliseconds.|None
+`autoUpdatePartitions`|boolean|If `autoUpdatePartitions` is enabled, a consumer subscribes to partition increases automatically.<br/>**Note**: this is only for partitioned consumers.|true
+`replicateSubscriptionState`|boolean|If `replicateSubscriptionState` is enabled, the subscription state is replicated to geo-replicated clusters.|false
+`negativeAckRedeliveryBackoff`|RedeliveryBackoff|Interface for a custom redelivery backoff policy for messages that are negatively acknowledged. You can specify a `RedeliveryBackoff` for a consumer.|`MultiplierRedeliveryBackoff`
+`ackTimeoutRedeliveryBackoff`|RedeliveryBackoff|Interface for a custom redelivery backoff policy for messages that exceed the ack timeout. You can specify a `RedeliveryBackoff` for a consumer.|`MultiplierRedeliveryBackoff`
+`autoAckOldestChunkedMessageOnQueueFull`|boolean|Whether to automatically acknowledge pending chunked messages when the threshold of `maxPendingChunkedMessage` is reached. If set to `false`, these messages will be redelivered by the broker.|true
+`maxPendingChunkedMessage`|int|The maximum size of a queue holding pending chunked messages. When the threshold is reached, the consumer drops pending messages to optimize memory utilization.|10
+`expireTimeOfIncompleteChunkedMessageMillis`|long|The time interval to expire incomplete chunks if a consumer fails to receive all the chunks in the specified time period. The default value is 1 minute.|60000
+
+You can configure parameters if you do not want to use the default configuration. For a full list, see the Javadoc for the {@inject: javadoc:ConsumerBuilder:/client/org/apache/pulsar/client/api/ConsumerBuilder} class.
+
+The following is an example.
+
+```java
+
+Consumer consumer = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .ackTimeout(10, TimeUnit.SECONDS)
+ .subscriptionType(SubscriptionType.Exclusive)
+ .subscribe();
+
+```
+
+### Async receive
+
+The `receive` method receives messages synchronously (the consumer process is blocked until a message is available). You can also use [async receive](concepts-messaging.md#receive-modes), which returns a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) object immediately once a new message is available.
+
+The following is an example.
+
+```java
+
+CompletableFuture<Message> asyncMessage = consumer.receiveAsync();
+
+```
+
+Async receive operations return a {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} wrapped inside of a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture).
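
To consume continuously without blocking a thread, one common pattern is to chain `receiveAsync` calls so that each completed future schedules the next receive. The following is a minimal sketch; processing and error handling are omitted, and the method name is illustrative.

```java
import org.apache.pulsar.client.api.Consumer;

public class AsyncReceiveLoop {

    // Receive, process, and acknowledge messages one at a time without blocking.
    static void receiveLoop(Consumer<byte[]> consumer) {
        consumer.receiveAsync().thenAccept(message -> {
            // Process the message here, then acknowledge it asynchronously
            consumer.acknowledgeAsync(message);
            // Chain the next receive
            receiveLoop(consumer);
        });
    }
}
```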
+
+### Batch receive
+
+Use `batchReceive` to receive multiple messages for each call.
+
+The following is an example.
+
+```java
+
+Messages<byte[]> messages = consumer.batchReceive();
+for (Message<byte[]> message : messages) {
+    // do something
+}
+consumer.acknowledge(messages);
+
+```
+
+:::note
+
+Batch receive policy limits the number and bytes of messages in a single batch. You can specify a timeout to wait for enough messages.
+The batch receive is completed if any of the following conditions is met: enough number of messages, enough bytes of messages, or the wait timeout expires.
+
+```java
+
+Consumer consumer = client.newConsumer()
+    .topic("my-topic")
+    .subscriptionName("my-subscription")
+    .batchReceivePolicy(BatchReceivePolicy.builder()
+        .maxNumMessages(100)
+        .maxNumBytes(1024 * 1024)
+        .timeout(200, TimeUnit.MILLISECONDS)
+        .build())
+    .subscribe();
+
+```
+
+The default batch receive policy is:
+
+```java
+
+BatchReceivePolicy.builder()
+    .maxNumMessages(-1)
+    .maxNumBytes(10 * 1024 * 1024)
+    .timeout(100, TimeUnit.MILLISECONDS)
+    .build();
+
+```
+
+:::
+
+### Configure chunking
+
+You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the `maxPendingChunkedMessage` and `autoAckOldestChunkedMessageOnQueueFull` parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later. The `expireTimeOfIncompleteChunkedMessage` parameter decides the time interval to expire incomplete chunks if the consumer fails to receive all chunks of a message within the specified time period.
+
+The following is an example of how to configure message chunking.
+
+```java
+
+Consumer consumer = client.newConsumer()
+ .topic(topic)
+ .subscriptionName("test")
+ .autoAckOldestChunkedMessageOnQueueFull(true)
+ .maxPendingChunkedMessage(100)
+ .expireTimeOfIncompleteChunkedMessage(10, TimeUnit.MINUTES)
+ .subscribe();
+
+```
+
+### Negative acknowledgment redelivery backoff
+
+The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays based on the `redeliveryCount` of each message.
+
+```java
+
+Consumer consumer = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .negativeAckRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+ .minDelayMs(1000)
+ .maxDelayMs(60 * 1000)
+ .build())
+ .subscribe();
+
+```
+
+### Acknowledgement timeout redelivery backoff
+
+The `RedeliveryBackoff` interface introduces a redelivery backoff mechanism. You can redeliver messages with different delays based on the number of times the message is retried.
+
+```java
+
+Consumer consumer = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .ackTimeout(10, TimeUnit.SECONDS)
+ .ackTimeoutRedeliveryBackoff(MultiplierRedeliveryBackoff.builder()
+ .minDelayMs(1000)
+ .maxDelayMs(60000)
+ .multiplier(2)
+ .build())
+ .subscribe();
+
+```
+
+The message redelivery behavior should be as follows.
+
+Redelivery count | Redelivery delay
+:--------------------|:-----------
+1 | 10 + 1 seconds
+2 | 10 + 2 seconds
+3 | 10 + 4 seconds
+4 | 10 + 8 seconds
+5 | 10 + 16 seconds
+6 | 10 + 32 seconds
+7 | 10 + 60 seconds
+8 | 10 + 60 seconds
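
The delays in the table above can be reproduced with a short calculation. The sketch below assumes that the backoff applied on the n-th redelivery is `min(maxDelayMs, minDelayMs * multiplier^(n-1))`, added to the 10-second ack timeout; this formula is inferred from the table rather than taken from the `MultiplierRedeliveryBackoff` source.

```java
// Illustration only: reproduce the redelivery delays from the table above.
public class RedeliveryDelayTable {

    static final long MIN_DELAY_MS = 1000;
    static final long MAX_DELAY_MS = 60_000;
    static final double MULTIPLIER = 2.0;

    // Backoff (in ms) applied on the given redelivery attempt, capped at MAX_DELAY_MS.
    static long backoffMs(int redeliveryCount) {
        double raw = MIN_DELAY_MS * Math.pow(MULTIPLIER, redeliveryCount - 1);
        return (long) Math.min(MAX_DELAY_MS, raw);
    }

    public static void main(String[] args) {
        long ackTimeoutSec = 10;
        for (int count = 1; count <= 8; count++) {
            System.out.println(count + " | " + ackTimeoutSec + " + " + (backoffMs(count) / 1000) + " seconds");
        }
    }
}
```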
+
+:::note
+
+- The `negativeAckRedeliveryBackoff` does not work with `consumer.negativeAcknowledge(MessageId messageId)` because you are not able to get the redelivery count from the message ID.
+- If a consumer crashes, it triggers the redelivery of unacked messages. In this case, `RedeliveryBackoff` does not take effect and the messages might get redelivered earlier than the delay time from the backoff.
+
+:::
+
+### Multi-topic subscriptions
+
+In addition to subscribing a consumer to a single Pulsar topic, you can also subscribe to multiple topics simultaneously using [multi-topic subscriptions](concepts-messaging.md#multi-topic-subscriptions). To use multi-topic subscriptions, you can supply either a regular expression (regex) or a `List` of topics. If you select topics via regex, all topics must be within the same Pulsar namespace.
+
+The following are some examples.
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.regex.Pattern;
+
+ConsumerBuilder consumerBuilder = pulsarClient.newConsumer()
+ .subscriptionName(subscription);
+
+// Subscribe to all topics in a namespace
+Pattern allTopicsInNamespace = Pattern.compile("public/default/.*");
+Consumer allTopicsConsumer = consumerBuilder
+ .topicsPattern(allTopicsInNamespace)
+ .subscribe();
+
+// Subscribe to a subset of topics in a namespace, based on regex
+Pattern someTopicsInNamespace = Pattern.compile("public/default/foo.*");
+Consumer someTopicsConsumer = consumerBuilder
+ .topicsPattern(someTopicsInNamespace)
+ .subscribe();
+
+```
+
+In the above example, the consumer subscribes to the `persistent` topics that match the topic name pattern. If you want the consumer to subscribe to all `persistent` and `non-persistent` topics that match the topic name pattern, set `subscriptionTopicsMode` to `RegexSubscriptionMode.AllTopics`.
+
+```java
+
+Pattern pattern = Pattern.compile("public/default/.*");
+pulsarClient.newConsumer()
+ .subscriptionName("my-sub")
+ .topicsPattern(pattern)
+ .subscriptionTopicsMode(RegexSubscriptionMode.AllTopics)
+ .subscribe();
+
+```
+
+:::note
+
+By default, the `subscriptionTopicsMode` of the consumer is `PersistentOnly`. Available options of `subscriptionTopicsMode` are `PersistentOnly`, `NonPersistentOnly`, and `AllTopics`.
+
+:::
+
+You can also subscribe to an explicit list of topics (across namespaces if you wish):
+
+```java
+
+List<String> topics = Arrays.asList(
+ "topic-1",
+ "topic-2",
+ "topic-3"
+);
+
+Consumer multiTopicConsumer = consumerBuilder
+ .topics(topics)
+ .subscribe();
+
+// Alternatively:
+Consumer multiTopicConsumer = consumerBuilder
+ .topic(
+ "topic-1",
+ "topic-2",
+ "topic-3"
+ )
+ .subscribe();
+
+```
+
+You can also subscribe to multiple topics asynchronously using the `subscribeAsync` method rather than the synchronous `subscribe` method. The following is an example.
+
+```java
+
+consumerBuilder
+ .topics(topics)
+ .subscribeAsync()
+ .thenAccept(this::receiveMessageFromConsumer);
+
+private void receiveMessageFromConsumer(Object consumer) {
+ ((Consumer)consumer).receiveAsync().thenAccept(message -> {
+ // Do something with the received message
+ receiveMessageFromConsumer(consumer);
+ });
+}
+
+```
+
+### Subscription types
+
+Pulsar has various [subscription types](concepts-messaging.md#subscription-types) to match different scenarios. A topic can have multiple subscriptions with different subscription types. However, a subscription can only have one subscription type at a time.
+
+A subscription is identified by its subscription name; a subscription name can specify only one subscription type at a time. To change the subscription type, you should first stop all consumers of this subscription.
+
+Different subscription types have different message distribution types. This section describes the differences of subscription types and how to use them.
+
+To better describe their differences, assume that you have a topic named "my-topic" and a producer that has published 10 messages.
+
+```java
+
+Producer producer = client.newProducer(Schema.STRING)
+ .topic("my-topic")
+ .enableBatching(false)
+ .create();
+// 3 messages with "key-1", 3 messages with "key-2", 2 messages with "key-3" and 2 messages with "key-4"
+producer.newMessage().key("key-1").value("message-1-1").send();
+producer.newMessage().key("key-1").value("message-1-2").send();
+producer.newMessage().key("key-1").value("message-1-3").send();
+producer.newMessage().key("key-2").value("message-2-1").send();
+producer.newMessage().key("key-2").value("message-2-2").send();
+producer.newMessage().key("key-2").value("message-2-3").send();
+producer.newMessage().key("key-3").value("message-3-1").send();
+producer.newMessage().key("key-3").value("message-3-2").send();
+producer.newMessage().key("key-4").value("message-4-1").send();
+producer.newMessage().key("key-4").value("message-4-2").send();
+
+```
+
+#### Exclusive
+
+Create a new consumer and subscribe with the `Exclusive` subscription type.
+
+```java
+
+Consumer consumer = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .subscriptionType(SubscriptionType.Exclusive)
+ .subscribe();
+
+```
+
+Only the first consumer is allowed to attach to the subscription; other consumers receive an error. The first consumer receives all 10 messages, and the consuming order is the same as the producing order.
+
+:::note
+
+If the topic is a partitioned topic, the first consumer subscribes to all of its partitions; other consumers are not assigned any partitions and receive an error.
+
+:::
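
A sketch of the failure mode for the second consumer: attaching another consumer to an `Exclusive` subscription causes `subscribe()` to throw a `PulsarClientException`. The error handling shown here is illustrative.

```java
try {
    // A second consumer on the same Exclusive subscription is rejected by the broker
    Consumer<byte[]> consumer2 = client.newConsumer()
            .topic("my-topic")
            .subscriptionName("my-subscription")
            .subscriptionType(SubscriptionType.Exclusive)
            .subscribe();
} catch (PulsarClientException e) {
    System.err.println("Could not attach to subscription: " + e.getMessage());
}
```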
+
+#### Failover
+
+Create new consumers and subscribe with the `Failover` subscription type.
+
+```java
+
+Consumer consumer1 = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .subscriptionType(SubscriptionType.Failover)
+ .subscribe();
+Consumer consumer2 = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .subscriptionType(SubscriptionType.Failover)
+ .subscribe();
+// consumer1 is the active consumer; consumer2 is the standby consumer.
+// consumer1 receives 5 messages and then crashes; consumer2 takes over as the active consumer.
+
+```
+
+Multiple consumers can attach to the same subscription, yet only the first consumer is active while the others are on standby. When the active consumer disconnects, messages are dispatched to one of the standby consumers, which then becomes the active consumer.
+
+If the first active consumer disconnects after receiving 5 messages, the standby consumer takes over as the active consumer. consumer1 receives:
+
+```
+
+("key-1", "message-1-1")
+("key-1", "message-1-2")
+("key-1", "message-1-3")
+("key-2", "message-2-1")
+("key-2", "message-2-2")
+
+```
+
+consumer2 will receive:
+
+```
+
+("key-2", "message-2-3")
+("key-3", "message-3-1")
+("key-3", "message-3-2")
+("key-4", "message-4-1")
+("key-4", "message-4-2")
+
+```
+
+:::note
+
+If a topic is a partitioned topic, each partition has only one active consumer, messages of one partition are distributed to only one consumer, and messages of multiple partitions are distributed to multiple consumers.
+
+:::
+
+#### Shared
+
+Create new consumers and subscribe with the `Shared` subscription type.
+
+```java
+
+Consumer consumer1 = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .subscriptionType(SubscriptionType.Shared)
+ .subscribe();
+
+Consumer consumer2 = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .subscriptionType(SubscriptionType.Shared)
+ .subscribe();
+//Both consumer1 and consumer2 are active consumers.
+
+```
+
+In the `Shared` subscription type, multiple consumers can attach to the same subscription, and messages are delivered in a round-robin distribution across consumers.
+
+If a broker dispatches only one message at a time, consumer1 receives the following information.
+
+```
+
+("key-1", "message-1-1")
+("key-1", "message-1-3")
+("key-2", "message-2-2")
+("key-3", "message-3-1")
+("key-4", "message-4-1")
+
+```
+
+consumer2 receives the following information.
+
+```
+
+("key-1", "message-1-2")
+("key-2", "message-2-1")
+("key-2", "message-2-3")
+("key-3", "message-3-2")
+("key-4", "message-4-2")
+
+```
+
+The `Shared` subscription type differs from the `Exclusive` and `Failover` subscription types: it provides better flexibility, but cannot guarantee message order.
+
+#### Key_Shared
+
+The `Key_Shared` subscription type was introduced in the 2.4.0 release. Create new consumers and subscribe with the `Key_Shared` subscription type.
+
+```java
+
+Consumer consumer1 = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .subscriptionType(SubscriptionType.Key_Shared)
+ .subscribe();
+
+Consumer consumer2 = client.newConsumer()
+ .topic("my-topic")
+ .subscriptionName("my-subscription")
+ .subscriptionType(SubscriptionType.Key_Shared)
+ .subscribe();
+//Both consumer1 and consumer2 are active consumers.
+
+```
+
+Just like in the `Shared` subscription type, all consumers in the `Key_Shared` subscription type can attach to the same subscription. But the `Key_Shared` subscription type differs from `Shared` in that messages with the same key are delivered to only one consumer, in order. By default, you do not know in advance which keys are assigned to which consumer, but a key is only assigned to one consumer at a time. The following shows one possible distribution of messages between the consumers.
+
+consumer1 receives the following information.
+
+```
+
+("key-1", "message-1-1")
+("key-1", "message-1-2")
+("key-1", "message-1-3")
+("key-3", "message-3-1")
+("key-3", "message-3-2")
+
+```
+
+consumer2 receives the following information.
+
+```
+
+("key-2", "message-2-1")
+("key-2", "message-2-2")
+("key-2", "message-2-3")
+("key-4", "message-4-1")
+("key-4", "message-4-2")
+
+```
+
+If batching is enabled on the producer side, messages with different keys are added to the same batch by default. The broker dispatches the whole batch to one consumer, so the default batch mechanism may break the guaranteed message distribution semantics of the `Key_Shared` subscription. The producer needs to use the `KeyBasedBatcher` instead.
+
+```java
+
+Producer producer = client.newProducer()
+ .topic("my-topic")
+ .batcherBuilder(BatcherBuilder.KEY_BASED)
+ .create();
+
+```
+
+Or the producer can disable batching.
+
+```java
+
+Producer producer = client.newProducer()
+ .topic("my-topic")
+ .enableBatching(false)
+ .create();
+
+```
+
+:::note
+
+If the message key is not specified, messages without keys are dispatched to one consumer in order by default.
+
+:::
+
+## Reader
+
+With the [reader interface](concepts-clients.md#reader-interface), Pulsar clients can "manually position" themselves within a topic and read all messages from a specified message onward. The Pulsar API for Java enables you to create {@inject: javadoc:Reader:/client/org/apache/pulsar/client/api/Reader} objects by specifying a topic and a {@inject: javadoc:MessageId:/client/org/apache/pulsar/client/api/MessageId}.
+
+The following is an example.
+
+```java
+
+byte[] msgIdBytes = // Some message ID byte array
+MessageId id = MessageId.fromByteArray(msgIdBytes);
+Reader<byte[]> reader = pulsarClient.newReader()
+ .topic(topic)
+ .startMessageId(id)
+ .create();
+
+while (true) {
+ Message<byte[]> message = reader.readNext();
+ // Process message
+}
+
+```
+
+In the example above, a `Reader` object is instantiated for a specific topic and message (by ID); the reader iterates over each message in the topic after the message is identified by `msgIdBytes` (how that value is obtained depends on the application).
+
+The code sample above shows pointing the `Reader` object to a specific message (by ID), but you can also use `MessageId.earliest` to point to the earliest available message on the topic or `MessageId.latest` to point to the most recent available message.
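
For example, to read the whole existing backlog of a topic and stop when it is exhausted, you can combine `MessageId.earliest` with `hasMessageAvailable()`. The topic name below is illustrative.

```java
Reader<byte[]> reader = pulsarClient.newReader()
        .topic("my-topic")
        .startMessageId(MessageId.earliest)
        .create();

// Read only the messages that already exist, then stop
while (reader.hasMessageAvailable()) {
    Message<byte[]> message = reader.readNext();
    // Process message
}
reader.close();
```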
+
+### Configure reader
+When you create a reader, you can use the `loadConf` configuration. The following parameters are available in `loadConf`.
+
+| Name | Type | Description | Default |
+|---|---|---|---|
+`topicName`|String|Topic name. |None
+`receiverQueueSize`|int|Size of a consumer's receiver queue. For example, the number of messages that can be accumulated by a consumer before an application calls `Receive`. A value higher than the default value increases consumer throughput, though at the expense of more memory utilization.|1000
+`readerListener`|ReaderListener<T>|A listener that is called for each message received.|None
+`readerName`|String|Reader name.|null
+`subscriptionName`|String| Subscription name|When there is a single topic, the default subscription name is `"reader-" + 10-digit UUID`. When there are multiple topics, the default subscription name is `"multiTopicsReader-" + 10-digit UUID`.
+`subscriptionRolePrefix`|String|Prefix of subscription role. |null
+`cryptoKeyReader`|CryptoKeyReader|Interface that abstracts the access to a key store.|null
+`cryptoFailureAction`|ConsumerCryptoFailureAction|The action a consumer takes when it receives a message that cannot be decrypted. **FAIL**: this is the default option to fail messages until crypto succeeds. **DISCARD**: silently acknowledge the message and do not deliver it to the application. **CONSUME**: deliver encrypted messages to the application. It is the application's responsibility to decrypt the message. The message decompression fails. If messages contain batch messages, a client is not able to retrieve individual messages in the batch. A delivered encrypted message contains an `EncryptionContext` that holds encryption and compression information, which the application can use to decrypt the consumed message payload.|ConsumerCryptoFailureAction.FAIL
+`readCompacted`|boolean|If `readCompacted` is enabled, a consumer reads messages from a compacted topic rather than the full message backlog of a topic. A consumer only sees the latest value for each key in the compacted topic, up until the point in the topic message backlog where compaction took place. Beyond that point, messages are sent as normal. `readCompacted` can only be enabled on subscriptions to persistent topics that have a single active consumer (for example, failover or exclusive subscriptions). Attempting to enable it on subscriptions to non-persistent topics or on shared subscriptions leads to the subscription call throwing a `PulsarClientException`.|false
+`resetIncludeHead`|boolean|If set to true, the first message to be returned is the one specified by `messageId`. If set to false, the first message to be returned is the one next to the message specified by `messageId`.|false
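
The following sketch shows how these parameters can be passed through `loadConf`; the map keys correspond to the parameter names in the table above, and the topic name and values are illustrative.

```java
// Sketch: configure a reader with loadConf instead of individual builder methods
Map<String, Object> readerConf = new HashMap<>();
readerConf.put("topicName", "my-topic");
readerConf.put("receiverQueueSize", 2000);
readerConf.put("readCompacted", true);

Reader<byte[]> reader = pulsarClient.newReader()
        .loadConf(readerConf)
        .startMessageId(MessageId.earliest)
        .create();
```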
+
+### Sticky key range reader
+
+In a sticky key range reader, the broker only dispatches messages whose message-key hash falls within the specified key hash range. You can specify multiple key hash ranges on a reader.
+
+The following is an example to create a sticky key range reader.
+
+```java
+
+pulsarClient.newReader()
+ .topic(topic)
+ .startMessageId(MessageId.earliest)
+ .keyHashRange(Range.of(0, 10000), Range.of(20001, 30000))
+ .create();
+
+```
+
+Total hash range size is 65536, so the max end of the range should be less than or equal to 65535.
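
The range check itself is easy to illustrate. The sketch below maps a key into the 65536-slot hash space and tests membership in a range. Note that the Pulsar client hashes keys with Murmur3 internally; `String.hashCode()` is used here only to demonstrate the idea, not to reproduce the client's actual slot assignment.

```java
// Illustration only: map keys into Pulsar's 65536-slot key hash space.
public class KeyHashRangeDemo {

    static final int TOTAL_HASH_RANGE_SIZE = 65536;

    // Slot for a key; the real client uses a Murmur3 32-bit hash instead of hashCode().
    static int slot(String key) {
        return Math.floorMod(key.hashCode(), TOTAL_HASH_RANGE_SIZE);
    }

    // Whether a slot falls inside an inclusive [start, end] range, as in Range.of(start, end).
    static boolean inRange(int slot, int start, int end) {
        return slot >= start && slot <= end;
    }

    public static void main(String[] args) {
        int s = slot("my-key");
        System.out.println("slot=" + s + " in [0,10000]? " + inRange(s, 0, 10000));
    }
}
```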
+
+
+## TableView
+
+The TableView interface serves an encapsulated access pattern, providing a continuously updated key-value map view of the compacted topic data. Messages without keys will be ignored.
+
+With TableView, Pulsar clients can fetch all the message updates from a topic and construct a map with the latest values of each key. These values can then be used to build a local cache of data. In addition, you can register consumers with the TableView by specifying a listener to perform a scan of the map and then receive notifications when new messages are received. Consequently, event handling can be triggered to serve use cases, such as event-driven applications and message monitoring.
+
+> **Note:** Each TableView uses one Reader instance per partition, and reads the topic starting from the compacted view by default. It is highly recommended to enable automatic compaction by [configuring the topic compaction policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically) for the given topic or namespace. More frequent compaction results in shorter startup times because less data is replayed to reconstruct the TableView of the topic.
+
+The following figure illustrates the dynamic construction of a TableView updated with newer values of each key.
+![TableView](/assets/tableview.png)
+
+### Configure TableView
+
+The following is an example of how to configure a TableView.
+
+```java
+
+TableView<String> tv = client.newTableViewBuilder(Schema.STRING)
+ .topic("my-tableview")
+ .create();
+
+```
+
+You can use the available parameters in the `loadConf` configuration or related [API](/api/client/2.10.0-SNAPSHOT/org/apache/pulsar/client/api/TableViewBuilder.html) to customize your TableView.
+
+| Name | Type | Required? | Description | Default |
+|---|---|---|---|---|
+| `topic` | string | yes | The topic name of the TableView. | N/A
+| `autoUpdatePartitionInterval` | int | no | The interval to check for newly added partitions. | 60 (seconds)
+
+### Register listeners
+
+You can register listeners for both existing messages on a topic and new messages coming into the topic by using `forEachAndListen`, and perform operations on all existing messages by using `forEach`.
+
+The following is an example of how to register listeners with TableView.
+
+```java
+
+// Register listeners for all existing and incoming messages
+tv.forEachAndListen((key, value) -> /*operations on all existing and incoming messages*/)
+
+// Register action for all existing messages
+tv.forEach((key, value) -> /*operations on all existing messages*/)
+
+```
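
Once created, the TableView can also be queried like a read-only map. The key below is illustrative.

```java
// Latest value for a given key, or null if the key has not been seen
String latest = tv.get("my-key");

// Number of distinct keys currently in the view
int numKeys = tv.size();

// Iterate over the whole view
tv.forEach((key, value) -> System.out.println(key + " -> " + value));
```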
+
+## Schema
+
+In Pulsar, all message data consists of byte arrays "under the hood." [Message schemas](schema-get-started.md) enable you to use other types of data when constructing and handling messages (from simple types like strings to more complex, application-specific types). If you construct, say, a [producer](#producer) without specifying a schema, then the producer can only produce messages of type `byte[]`. The following is an example.
+
+```java
+
+Producer producer = client.newProducer()
+ .topic(topic)
+ .create();
+
+```
+
+The producer above is equivalent to a `Producer<byte[]>` (in fact, you should *always* explicitly specify the type). If you'd like to use a producer for a different type of data, you'll need to specify a **schema** that informs Pulsar which data type will be transmitted over the [topic](reference-terminology.md#topic).
+
+### AvroBaseStructSchema example
+
+Let's say that you have a `SensorReading` class that you'd like to transmit over a Pulsar topic:
+
+```java
+
+public class SensorReading {
+ public float temperature;
+
+ public SensorReading(float temperature) {
+ this.temperature = temperature;
+ }
+
+ // A no-arg constructor is required
+ public SensorReading() {
+ }
+
+ public float getTemperature() {
+ return temperature;
+ }
+
+ public void setTemperature(float temperature) {
+ this.temperature = temperature;
+ }
+}
+
+```
+
+You could then create a `Producer<SensorReading>` (or `Consumer<SensorReading>`) like this:
+
+```java
+
+Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
+ .topic("sensor-readings")
+ .create();
+
+```
+
+The following schema formats are currently available for Java:
+
+* No schema or the byte array schema (which can be applied using `Schema.BYTES`):
+
+ ```java
+
+ Producer<byte[]> bytesProducer = client.newProducer(Schema.BYTES)
+ .topic("some-raw-bytes-topic")
+ .create();
+
+ ```
+
+ Or, equivalently:
+
+ ```java
+
+ Producer<byte[]> bytesProducer = client.newProducer()
+ .topic("some-raw-bytes-topic")
+ .create();
+
+ ```
+
+* `String` for normal UTF-8-encoded string data. Apply the schema using `Schema.STRING`:
+
+ ```java
+
+ Producer<String> stringProducer = client.newProducer(Schema.STRING)
+ .topic("some-string-topic")
+ .create();
+
+ ```
+
+* Create JSON schemas for POJOs using `Schema.JSON`. The following is an example.
+
+ ```java
+
+ Producer<MyPojo> pojoProducer = client.newProducer(Schema.JSON(MyPojo.class))
+ .topic("some-pojo-topic")
+ .create();
+
+ ```
+
+* Generate Protobuf schemas using `Schema.PROTOBUF`. The following example shows how to create the Protobuf schema and use it to instantiate a new producer:
+
+ ```java
+
+ Producer<MyProtobuf> protobufProducer = client.newProducer(Schema.PROTOBUF(MyProtobuf.class))
+ .topic("some-protobuf-topic")
+ .create();
+
+ ```
+
+* Define Avro schemas with `Schema.AVRO`. The following code snippet demonstrates how to create and use Avro schema.
+
+ ```java
+
+ Producer<MyAvro> avroProducer = client.newProducer(Schema.AVRO(MyAvro.class))
+ .topic("some-avro-topic")
+ .create();
+
+ ```
+
+### ProtobufNativeSchema example
+
+For an example of `ProtobufNativeSchema`, see [`SchemaDefinition` in `Complex type`](schema-understand.md#complex-type).
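
For instance, assuming a Protobuf-generated class `MyProtobufMessage` (a hypothetical name), a producer using the native Protobuf schema can be created with `Schema.PROTOBUF_NATIVE`:

```java
// Sketch: MyProtobufMessage stands in for any protoc-generated message class
Producer<MyProtobufMessage> protobufNativeProducer = client
        .newProducer(Schema.PROTOBUF_NATIVE(MyProtobufMessage.class))
        .topic("some-protobuf-native-topic")
        .create();
```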
+
+## Authentication
+
+Pulsar currently supports three authentication schemes: [TLS](security-tls-authentication.md), [Athenz](security-athenz.md), and [Oauth2](security-oauth2.md). You can use the Pulsar Java client with all of them.
+
+### TLS Authentication
+
+To use [TLS](security-tls-authentication.md), note that the `enableTls` method is deprecated. Instead, use a `pulsar+ssl://` URL in `serviceUrl` to enable TLS, point your Pulsar client to a trusted TLS cert path, and provide paths to the cert and key files.
+
+The following is an example.
+
+```java
+
+Map<String, String> authParams = new HashMap<>();
+authParams.put("tlsCertFile", "/path/to/client-cert.pem");
+authParams.put("tlsKeyFile", "/path/to/client-key.pem");
+
+Authentication tlsAuth = AuthenticationFactory
+ .create(AuthenticationTls.class.getName(), authParams);
+
+PulsarClient client = PulsarClient.builder()
+ .serviceUrl("pulsar+ssl://my-broker.com:6651")
+ .tlsTrustCertsFilePath("/path/to/cacert.pem")
+ .authentication(tlsAuth)
+ .build();
+
+```
+
+### Athenz
+
+To use [Athenz](security-athenz.md) as an authentication provider, you need to [use TLS](#tls-authentication) and provide values for four parameters in a hash:
+
+* `tenantDomain`
+* `tenantService`
+* `providerDomain`
+* `privateKey`
+
+You can also set an optional `keyId`. The following is an example.
+
+```java
+
+Map<String, String> authParams = new HashMap<>();
+authParams.put("tenantDomain", "shopping"); // Tenant domain name
+authParams.put("tenantService", "some_app"); // Tenant service name
+authParams.put("providerDomain", "pulsar"); // Provider domain name
+authParams.put("privateKey", "file:///path/to/private.pem"); // Tenant private key path
+authParams.put("keyId", "v1"); // Key id for the tenant private key (optional, default: "0")
+
+Authentication athenzAuth = AuthenticationFactory
+ .create(AuthenticationAthenz.class.getName(), authParams);
+
+PulsarClient client = PulsarClient.builder()
+ .serviceUrl("pulsar+ssl://my-broker.com:6651")
+ .tlsTrustCertsFilePath("/path/to/cacert.pem")
+ .authentication(athenzAuth)
+ .build();
+
+```
+
+> #### Supported pattern formats
+> The `privateKey` parameter supports the following three pattern formats:
+> * `file:///path/to/file`
+> * `file:/path/to/file`
+> * `data:application/x-pem-file;base64,`
+
+### Oauth2
+
+The following example shows how to use [Oauth2](security-oauth2.md) as an authentication provider for the Pulsar Java client.
+
+You can use the factory method to configure authentication for Pulsar Java client.
+
+```java
+
+PulsarClient client = PulsarClient.builder()
+ .serviceUrl("pulsar://broker.example.com:6650/")
+ .authentication(
+ AuthenticationFactoryOAuth2.clientCredentials(this.issuerUrl, this.credentialsUrl, this.audience))
+ .build();
+
+```
+
+In addition, you can also use the encoded parameters to configure authentication for Pulsar Java client.
+
+```java
+
+Authentication auth = AuthenticationFactory
+ .create(AuthenticationOAuth2.class.getName(), "{\"type\":\"client_credentials\",\"privateKey\":\"...\",\"issuerUrl\":\"...\",\"audience\":\"...\"}");
+PulsarClient client = PulsarClient.builder()
+ .serviceUrl("pulsar://broker.example.com:6650/")
+ .authentication(auth)
+ .build();
+
+```
+
diff --git a/site2/website/versioned_docs/version-2.10.x/client-libraries-node.md b/site2/website/versioned_docs/version-2.10.x/client-libraries-node.md
new file mode 100644
index 0000000000000..a023b51d8ceb0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.10.x/client-libraries-node.md
@@ -0,0 +1,652 @@
+---
+id: client-libraries-node
+title: The Pulsar Node.js client
+sidebar_label: "Node.js"
+original_id: client-libraries-node
+---
+
+The Pulsar Node.js client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Node.js.
+
+All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Node.js client are thread-safe.
+
+For 1.3.0 or later versions, [type definitions](https://github.com/apache/pulsar-client-node/blob/master/index.d.ts) used in TypeScript are available.
+
+## Installation
+
+You can install the [`pulsar-client`](https://www.npmjs.com/package/pulsar-client) library via [npm](https://www.npmjs.com/).
+
+### Requirements
+The Pulsar Node.js client library is based on the C++ client library.
+Follow [these instructions](client-libraries-cpp.md#compilation) to install the Pulsar C++ client library.
+
+### Compatibility
+
+Compatibility between each version of the Node.js client and the C++ client is as follows:
+
+| Node.js client | C++ client |
+| :------------- | :------------- |
+| 1.0.0 | 2.3.0 or later |
+| 1.1.0 | 2.4.0 or later |
+| 1.2.0 | 2.5.0 or later |
+
+If an incompatible version of the C++ client is installed, you may fail to build or run this library.
+
+### Installation using npm
+
+Install the `pulsar-client` library via [npm](https://www.npmjs.com/):
+
+```shell
+
+$ npm install pulsar-client
+
+```
+
+:::note
+
+Also, this library works only in Node.js 10.x or later because it uses the [`node-addon-api`](https://github.com/nodejs/node-addon-api) module to wrap the C++ library.
+
+:::
+
+## Connection URLs
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here is an example for `localhost`:
+
+```http
+
+pulsar://localhost:6650
+
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+
+pulsar://pulsar.us-west.example.com:6650
+
+```
+
+If you are using [TLS encryption](security-tls-transport.md) or [TLS Authentication](security-tls-authentication.md), the URL looks like this:
+
+```http
+
+pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+## Create a client
+
+In order to interact with Pulsar, you first need a client object. You can create a client instance using the `new` operator and the `Client` constructor, passing in a client options object (more on configuration [below](#client-configuration)).
+
+Here is an example:
+
+```JavaScript
+
+const Pulsar = require('pulsar-client');
+
+(async () => {
+ const client = new Pulsar.Client({
+ serviceUrl: 'pulsar://localhost:6650',
+ });
+
+ await client.close();
+})();
+
+```
+
+### Client configuration
+
+The following configurable parameters are available for Pulsar clients:
+
+| Parameter | Description | Default |
+| :-------- | :---------- | :------ |
+| `serviceUrl` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info. | |
+| `authentication` | The authentication provider to use. See [TLS Authentication](security-tls-authentication.md) for more info. | No authentication |
+| `operationTimeoutSeconds` | The timeout for Node.js client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries occur until this threshold is reached, at which point the operation fails. | 30 |
+| `ioThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker). | 1 |
+| `messageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)). | 1 |
+| `concurrentLookupRequest` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 50000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 50000 |
+| `tlsTrustCertsFilePath` | The file path for the trusted TLS certificate. | |
+| `tlsValidateHostname` | Whether to enable TLS hostname verification. | `false` |
+| `tlsAllowInsecureConnection` | Whether the Pulsar client accepts untrusted TLS certificates from brokers. | `false` |
+| `statsIntervalInSeconds` | The interval (in seconds) between stats updates. Stats collection is activated only when this value is positive. The minimum allowed value is 1 second. | 600 |
+| `log` | A function that is used for logging. | `console.log` |
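+
+For example, a client configured for a TLS-encrypted connection might look like the following sketch. The service URL and certificate path are placeholders that you must replace with your own values:
+
+```JavaScript
+
+const Pulsar = require('pulsar-client');
+
+(async () => {
+  const client = new Pulsar.Client({
+    serviceUrl: 'pulsar+ssl://pulsar.us-west.example.com:6651', // placeholder cluster URL
+    operationTimeoutSeconds: 30,
+    tlsTrustCertsFilePath: '/path/to/ca.cert.pem', // placeholder certificate path
+    tlsValidateHostname: true,
+  });
+
+  await client.close();
+})();
+
+```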
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Node.js producers using a producer configuration object.
+
+Here is an example:
+
+```JavaScript
+
+const producer = await client.createProducer({
+ topic: 'my-topic', // or 'my-tenant/my-namespace/my-topic' to specify topic's tenant and namespace
+});
+
+await producer.send({
+ data: Buffer.from("Hello, Pulsar"),
+});
+
+await producer.close();
+
+```
+
+> #### Promise operation
+> When you create a new Pulsar producer, the operation returns a `Promise` object, and you get the producer instance or an error through its executor function.
+> In this example, the `await` operator is used instead of an executor function.
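+
+The two styles are equivalent. The following runnable sketch uses a hypothetical stand-in for `client.createProducer()` (so no broker is needed) to show the executor-function style next to the `await` style:
+
+```JavaScript
+
+// Stand-in for client.createProducer(); hypothetical, used only so
+// this sketch runs without a Pulsar broker.
+function createProducer(options) {
+  return Promise.resolve({
+    topic: options.topic,
+    close: () => Promise.resolve(),
+  });
+}
+
+// Style 1: handle the result through .then()/.catch() callbacks.
+createProducer({ topic: 'my-topic' })
+  .then((producer) => {
+    console.log(`created producer for ${producer.topic}`);
+    return producer.close();
+  })
+  .catch((err) => console.error(err));
+
+// Style 2: the same flow with await inside an async function.
+(async () => {
+  const producer = await createProducer({ topic: 'my-topic' });
+  console.log(`created producer for ${producer.topic}`);
+  await producer.close();
+})();
+
+```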
+
+### Producer operations
+
+Pulsar Node.js producers have the following methods available:
+
+| Method | Description | Return type |
+| :----- | :---------- | :---------- |
+| `send(Object)` | Publishes a [message](#messages) to the producer's topic. When the message is successfully acknowledged by the Pulsar broker, or an error is thrown, the `Promise` object whose result is the message ID runs the executor function. | `Promise