////
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
////

= Synchronous Replication
:doctype: book
:numbered:
:toc: left
:icons: font
:experimental:
:source-language: java

The current <<Cluster Replication, replication>> in HBase is asynchronous, so if the master cluster crashes, the slave cluster may not have the
newest data. If users want strong consistency, they cannot switch to the slave cluster.

Please see the design doc on link:https://issues.apache.org/jira/browse/HBASE-19064[HBASE-19064]

== Operation and maintenance

Case.1 Set up two synchronous replication clusters::

* Add a synchronous peer in both the source cluster and the peer cluster.

For the source cluster:

[source,ruby]
----
hbase> add_peer '1', CLUSTER_KEY => 'lg-hadoop-tst-st01.bj:10010,lg-hadoop-tst-st02.bj:10010,lg-hadoop-tst-st03.bj:10010:/hbase/test-hbase-slave', REMOTE_WAL_DIR=>'hdfs://lg-hadoop-tst-st01.bj:20100/hbase/test-hbase-slave/remoteWALs', TABLE_CFS => {"ycsb-test"=>[]}
----

For the peer cluster:

[source,ruby]
----
hbase> add_peer '1', CLUSTER_KEY => 'lg-hadoop-tst-st01.bj:10010,lg-hadoop-tst-st02.bj:10010,lg-hadoop-tst-st03.bj:10010:/hbase/test-hbase', REMOTE_WAL_DIR=>'hdfs://lg-hadoop-tst-st01.bj:20100/hbase/test-hbase/remoteWALs', TABLE_CFS => {"ycsb-test"=>[]}
----

NOTE: For synchronous replication, the current implementation requires that the source and peer cluster use the same peer id. Another thing that
needs attention is: the peer does not support cluster-level, namespace-level, or cf-level replication; only table-level replication is supported for now.

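The peer can also be added through the Java `Admin` API instead of the shell. The following is a minimal sketch for the source cluster, assuming a release that ships synchronous replication (so that `ReplicationPeerConfigBuilder#setRemoteWALDir` is available); the class name `AddSyncReplicationPeer` is just for illustration, and the cluster key, remote WAL directory and table list should be adapted to your environment.

[source,java]
----
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class AddSyncReplicationPeer {
  public static void main(String[] args) throws Exception {
    // Reads the hbase-site.xml of the source cluster from the classpath.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Only table-level replication is supported, so list the tables explicitly.
      // An empty column-family list means all families of the table.
      Map<TableName, List<String>> tableCfs = Collections.singletonMap(
          TableName.valueOf("ycsb-test"), Collections.<String>emptyList());
      ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
          .setClusterKey("lg-hadoop-tst-st01.bj:10010,lg-hadoop-tst-st02.bj:10010,"
              + "lg-hadoop-tst-st03.bj:10010:/hbase/test-hbase-slave")
          .setRemoteWALDir("hdfs://lg-hadoop-tst-st01.bj:20100/hbase/test-hbase-slave/remoteWALs")
          .setReplicateAllUserTables(false)
          .setTableCFsMap(tableCfs)
          .build();
      // The same peer id ('1') must be used on both the source and the peer cluster.
      admin.addReplicationPeer("1", peerConfig);
    }
  }
}
----
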
* Transit the peer cluster to STANDBY state

[source,ruby]
----
hbase> transit_peer_sync_replication_state '1', 'STANDBY'
----

* Transit the source cluster to ACTIVE state

[source,ruby]
----
hbase> transit_peer_sync_replication_state '1', 'ACTIVE'
----

Now, synchronous replication has been set up successfully. HBase clients can only send requests to the source cluster; if a request is sent to the
peer cluster, the peer cluster, which is now in STANDBY state, will reject the read/write request.

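The state transitions can likewise be driven from Java. The sketch below assumes the `Admin#transitReplicationPeerSyncReplicationState` method and the `SyncReplicationState` enum introduced by the HBASE-19064 work; `standbyConf` and `activeConf` are hypothetical `Configuration` objects built from the peer cluster's and the source cluster's `hbase-site.xml` respectively.

[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.SyncReplicationState;

public class TransitSyncReplicationState {

  // Transit peer 'peerId' on the cluster described by 'conf' to the given state.
  static void transit(Configuration conf, String peerId, SyncReplicationState state)
      throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.transitReplicationPeerSyncReplicationState(peerId, state);
    }
  }

  // Case.1 order: first make the peer cluster STANDBY, then make the source cluster ACTIVE.
  public static void setup(Configuration standbyConf, Configuration activeConf) throws Exception {
    transit(standbyConf, "1", SyncReplicationState.STANDBY); // run against the peer cluster
    transit(activeConf, "1", SyncReplicationState.ACTIVE);   // run against the source cluster
  }
}
----
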
Case.2 How to operate when the standby cluster crashes::

If the standby cluster has crashed, the active cluster will fail to write its remote WALs. So we need to transit
the source cluster to DOWNGRADE_ACTIVE state, which means the source cluster will stop writing remote WALs, but
normal (asynchronous) replication still works: it queues the newly written WALs, and replication stays blocked until the peer cluster comes back.

[source,ruby]
----
hbase> transit_peer_sync_replication_state '1', 'DOWNGRADE_ACTIVE'
----

Once the peer cluster comes back, we can transit the source cluster to ACTIVE again, to ensure that the replication will be
synchronous.

[source,ruby]
----
hbase> transit_peer_sync_replication_state '1', 'ACTIVE'
----

Case.3 How to operate when the active cluster crashes::

If the active cluster has crashed (it may be unreachable now), transit the standby cluster to
DOWNGRADE_ACTIVE state, and after that, redirect all client requests to the DOWNGRADE_ACTIVE cluster.

[source,ruby]
----
hbase> transit_peer_sync_replication_state '1', 'DOWNGRADE_ACTIVE'
----

If the crashed cluster comes back, we just need to transit it to STANDBY directly. Do not transit it to DOWNGRADE_ACTIVE:
the original ACTIVE cluster may have redundant data compared to the current ACTIVE cluster, because the design writes the
source cluster WALs and the remote cluster WALs concurrently, so it is possible that the source cluster WALs contain more
data than the remote cluster, which would result in data inconsistency. Transiting the original ACTIVE cluster to STANDBY
is not a problem, because replaying its original WALs is skipped.

[source,ruby]
----
hbase> transit_peer_sync_replication_state '1', 'STANDBY'
----

After that, we can promote the DOWNGRADE_ACTIVE cluster to ACTIVE, to ensure that the replication will be synchronous.

[source,ruby]
----
hbase> transit_peer_sync_replication_state '1', 'ACTIVE'
----

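Putting Case.3 together, the failover and the later recovery could look like the following illustrative Java sketch, using the same assumed `transitReplicationPeerSyncReplicationState` API as above; `standbyConf` points at the surviving (previously STANDBY) cluster and `oldActiveConf` at the crashed cluster once it is reachable again.

[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.SyncReplicationState;

public class ActiveClusterFailover {

  static void transit(Configuration conf, String peerId, SyncReplicationState state)
      throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.transitReplicationPeerSyncReplicationState(peerId, state);
    }
  }

  public static void failoverAndRecover(Configuration standbyConf, Configuration oldActiveConf)
      throws Exception {
    // 1. The active cluster crashed: downgrade the standby cluster and point clients at it.
    transit(standbyConf, "1", SyncReplicationState.DOWNGRADE_ACTIVE);

    // 2. When the crashed cluster comes back, transit it to STANDBY (never DOWNGRADE_ACTIVE,
    //    since its WALs may contain data the new active cluster never received).
    transit(oldActiveConf, "1", SyncReplicationState.STANDBY);

    // 3. Finally promote the surviving cluster to ACTIVE so replication is synchronous again.
    transit(standbyConf, "1", SyncReplicationState.ACTIVE);
  }
}
----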