
Rocksplicator


Rocksplicator is a set of C++ libraries and tools for building large-scale RocksDB-based stateful services. Its goal is to help application developers solve common difficulties of building large-scale stateful services, such as data replication, request routing and cluster management. With Rocksplicator, application developers just need to focus on their application logic, and won't need to deal with data replication, request routing or cluster management.

Rocksplicator includes:

  1. RocksDB replicator (a library for real-time RocksDB data replication. It supports three replication modes: async replication, semi-sync replication, and sync replication.)
  2. Helix powered automated cluster management and recovery
  3. Async fbthrift client pool and fbthrift request router
  4. A stats library for maintaining & reporting server stats
  5. A set of other small tool classes for building C++ services.

Online introduction videos

An introduction to Rocksplicator can be found in our presentation at the 2016 Annual RocksDB meetup at FB HQ and our @Scale presentation (starting from 17:30).

Use cases

Currently, we have 9 different online services based on Rocksplicator running at Pinterest, which together comprise nearly 30 clusters and over 4000 hosts, and process tens of PBs of data per day.

Prerequisites

The third-party dependencies of Rocksplicator can be found in docker/Dockerfile.

Get Started

Install docker

Docker is used for building Rocksplicator. Follow the Docker installation instructions to get Docker running on your system.

Build docker image

You can build your own docker image (if you want to change the Dockerfile and test it locally).

cd docker && docker build -t rocksplicator-build .

Or pull the one we uploaded.

docker pull gopalrajpurohit/rocksplicator-build:librdkafka_1_4_0

Initialize submodules

cd rocksplicator && git submodule update --init

Build the libraries & tools

Get into the docker build environment. We are assuming the rocksplicator repo is under $HOME/code/, and $HOME/docker-root is an existing directory.

docker run -v <SOURCE-DIR>:/rocksplicator -v $HOME/docker-root:/root -ti gopalrajpurohit/rocksplicator-build:librdkafka_1_4_0 bash
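For example, if the repo is cloned at $HOME/code/rocksplicator, the command becomes:

docker run -v $HOME/code/rocksplicator:/rocksplicator -v $HOME/docker-root:/root -ti gopalrajpurohit/rocksplicator-build:librdkafka_1_4_0 bash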

Run the following command in the docker bash to build Rocksplicator:

cd /rocksplicator && mkdir -p build && cd build && cmake .. && make -j

Run Tests

Run the following command in the docker bash:

cd /rocksplicator && mkdir -p build && cd build && cmake .. && make -j && make test

How to build your own service based on the RocksDB replicator & cluster management libraries

There is an example counter service under examples/counter_service/, which demonstrates a typical usage pattern for the RocksDB replicator; a minimal sketch of that pattern follows.
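The pattern is roughly: open (or create) a local RocksDB instance for each shard, register it with the replicator as Master or Slave, then serve application reads/writes against it while replication runs in the background. The sketch below only illustrates this flow; the replicator types and method names mentioned in the comments are placeholders, not the library's exact API. See examples/counter_service/ for the real code.

// Sketch only: the RocksDB calls below are real, but the replicator API
// referenced in the comments is an illustrative placeholder.
#include <memory>
#include <string>

#include "rocksdb/db.h"

int main() {
  // 1. Open (or create) the local RocksDB instance that will be replicated.
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::DB* raw_db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/counter_shard_0", &raw_db);
  if (!s.ok()) {
    return 1;
  }
  std::shared_ptr<rocksdb::DB> db(raw_db);

  // 2. Register the DB with the replicator, stating whether this host is the
  //    Master or a Slave for the shard, and (for Slaves) where the upstream
  //    Master lives. Placeholder call, not the actual API:
  //
  //    replicator.addDB("counter_shard_0", db, Role::SLAVE,
  //                     /* upstream = */ "192.168.0.101:9090");
  //
  // 3. Serve application reads/writes against `db` as usual; the replicator
  //    ships updates from the Master to the Slaves in the background.
  return 0;
}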

Automated cluster management and recovery

Please check cluster_management directory for Helix powered automated cluster management and recovery.

Commands for cluster management (the following is for script-driven cluster management, which has been deprecated)

The cluster management tool rocksdb_admin.py is under rocksdb_admin/tool/.

Before using the tool, we need to generate the Python client code for the Admin interface as follows.

cd /rocksplicator/rocksdb_admin/tool/ && ./sync.sh

Create a config file for a newly launched cluster.

host_file is a text file containing all hosts in the cluster. Each line describes one host in the format "ip:port:zone", for example "192.168.0.101:9090:us-east-1c".
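For example, a host_file for a three-host cluster spread across availability zones might look like this (IPs, ports and zones are illustrative):

192.168.0.101:9090:us-east-1c
192.168.0.102:9090:us-east-1d
192.168.0.103:9090:us-east-1e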

python rocksdb_admin.py new_cluster_name config --host_file=./host_file --segment=test --shard_num=1000 --overwrite

Ping all hosts in a cluster

python rocksdb_admin.py cluster_name ping

Remove a host from a cluster

python rocksdb_admin.py cluster_name remove_host "ip:port:zone"

Promote Masters for shards currently having Slaves only

python rocksdb_admin.py cluster_name promote

Add a host to a cluster

python rocksdb_admin.py cluster_name add_host "ip:port:zone"

Rebalance a cluster (Evenly distribute Masters)

python rocksdb_admin.py cluster_name rebalance

Load SST files from S3 to the cluster

python rocksdb_admin.py "cluster" load_sst "segment" "s3_bucket" "s3_prefix" --concurrency 64 --rate_limit_mb 64

Typical cluster management workflows (this has been deprecated; please check the new Helix-powered solution in the cluster_management directory)

Replacing a dead host

python rocksdb_admin.py cluster_name remove_host old_ip:old_port:zone_a
python rocksdb_admin.py cluster_name promote
python rocksdb_admin.py cluster_name add_host new_ip:new_port:zone_a
python rocksdb_admin.py cluster_name rebalance

License

Apache License, Version 2.0
