////
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
////
= Apache HBase Coprocessors

HBase Coprocessors are modeled after Google BigTable's coprocessor implementation
(http://research.google.com/people/jeff/SOCC2010-keynote-slides.pdf, pages 41-42).

The coprocessor framework provides mechanisms for running your custom code directly on
the RegionServers managing your data. Efforts are ongoing to bridge gaps between HBase's
implementation and BigTable's architecture. For more information see
link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].
The information in this chapter is primarily sourced and heavily reused from the following
resources:

. Mingjie Lai's blog post
link:https://blogs.apache.org/hbase/entry/coprocessor_introduction[Coprocessor Introduction].
. Gaurav Bhardwaj's blog post
link:http://www.3pillarglobal.com/insights/hbase-coprocessors[The How To Of HBase Coprocessors].
[WARNING]
.Use Coprocessors At Your Own Risk
====
Coprocessors are an advanced feature of HBase and are intended to be used by system
developers only. Because coprocessor code runs directly on the RegionServer and has
direct access to your data, they introduce the risk of data corruption, man-in-the-middle
attacks, or other malicious data access. Currently, there is no mechanism to prevent
data corruption by coprocessors, though work is underway on
link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].

In addition, there is no resource isolation, so a well-intentioned but misbehaving
coprocessor can severely degrade cluster performance and stability.
====
== Coprocessor Overview

In HBase, you fetch data using a `Get` or `Scan`, whereas in an RDBMS you use a SQL
query. In order to fetch only the relevant data, you filter it using an HBase
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html[Filter],
whereas in an RDBMS you use a `WHERE` predicate.
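For concreteness, the following is a minimal client-side sketch (not from the original text) of such a filtered fetch; it reuses the `users` table and `salaryDet` family from the <<cp_example,Examples>> section and assumes the HBase 2.x `CompareOperator` API:

[source,java]
----
// Client-side analogue of "SELECT ... WHERE gross = 10000": the filter is
// shipped to the RegionServers, but every matching row still travels back
// to the client for any further computation.
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("users"))) {
    Scan scan = new Scan();
    scan.setFilter(new SingleColumnValueFilter(Bytes.toBytes("salaryDet"),
        Bytes.toBytes("gross"), CompareOperator.EQUAL, Bytes.toBytes(10000L)));
    try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
            System.out.println(Bytes.toString(r.getRow()));
        }
    }
}
----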
After fetching the data, you perform computations on it. This paradigm works well
for "small data" with a few thousand rows and several columns. However, when you scale
to billions of rows and millions of columns, moving large amounts of data across your
network will create bottlenecks at the network layer, and the client needs to be powerful
enough and have enough memory to handle the large amounts of data and the computations.
In addition, the client code can grow large and complex.

In this scenario, coprocessors might make sense. You can put the business computation
code into a coprocessor which runs on the RegionServer, in the same location as the
data, and returns the result to the client.

This is only one scenario where using coprocessors can provide benefit. Following
are some analogies which may help to explain some of the benefits of coprocessors.
[[cp_analogies]]
=== Coprocessor Analogies

Triggers and Stored Procedures::
An Observer coprocessor is similar to a trigger in an RDBMS in that it executes
your code either before or after a specific event (such as a `Get` or `Put`)
occurs. An Endpoint coprocessor is similar to a stored procedure in an RDBMS
because it allows you to perform custom computations on the data on the
RegionServer itself, rather than on the client.

MapReduce::
MapReduce operates on the principle of moving the computation to the location of
the data. Coprocessors operate on the same principle.

AOP::
If you are familiar with Aspect Oriented Programming (AOP), you can think of a coprocessor
as applying advice by intercepting a request and then running some custom code,
before passing the request on to its final destination (or even changing the destination).
=== Coprocessor Implementation Overview

. Your class should implement one of the Coprocessor interfaces -
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/Coprocessor.html[Coprocessor],
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver],
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorService.html[CoprocessorService] - to name a few.

. Load the coprocessor, either statically (from the configuration) or dynamically,
using HBase Shell. For more details see <<cp_loading,Loading Coprocessors>>.

. Call the coprocessor from your client-side code. HBase handles the coprocessor
transparently.

The framework API is provided in the
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/coprocessor/package-summary.html[coprocessor]
package.
== Types of Coprocessors

=== Observer Coprocessors

Observer coprocessors are triggered either before or after a specific event occurs.
Observers that run before an event override methods with a `pre` prefix,
such as link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#prePut-org.apache.hadoop.hbase.coprocessor.ObserverContext-org.apache.hadoop.hbase.client.Put-org.apache.hadoop.hbase.wal.WALEdit-org.apache.hadoop.hbase.client.Durability-[`prePut`]. Observers that run just after an event override methods
with a `post` prefix, such as link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#postPut-org.apache.hadoop.hbase.coprocessor.ObserverContext-org.apache.hadoop.hbase.client.Put-org.apache.hadoop.hbase.wal.WALEdit-org.apache.hadoop.hbase.client.Durability-[`postPut`].
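As a minimal sketch (the class name is illustrative, and the HBase 2.x API from the links above is assumed), an observer that reacts after every `Put` on its region looks like this:

[source,java]
----
public class LoggingObserver implements RegionCoprocessor, RegionObserver {
    @Override
    public Optional<RegionObserver> getRegionObserver() {
        return Optional.of(this);
    }

    @Override
    public void postPut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
            WALEdit edit, Durability durability) throws IOException {
        // Called by the framework after the Put has been applied to the region.
        System.out.println("Put observed for row " + Bytes.toString(put.getRow()));
    }
}
----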
==== Use Cases for Observer Coprocessors

Security::
Before performing a `Get` or `Put` operation, you can check for permission using
`preGet` or `prePut` methods.

Referential Integrity::
HBase does not directly support the RDBMS concept of referential integrity, also known
as foreign keys. You can use a coprocessor to enforce such integrity. For instance,
if you have a business rule that every insert to the `users` table must be followed
by a corresponding entry in the `user_daily_attendance` table, you could implement
a coprocessor to use the `prePut` method on `users` to insert a record into
`user_daily_attendance` (see the sketch after this list).

Secondary Indexes::
You can use a coprocessor to maintain secondary indexes. For more information, see
link:https://cwiki.apache.org/confluence/display/HADOOP2/Hbase+SecondaryIndexing[SecondaryIndexing].
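The referential-integrity idea can be sketched as follows, assuming the HBase 2.x observer API; the attendance table's column layout is hypothetical and error handling is omitted:

[source,java]
----
public class AttendanceObserver implements RegionCoprocessor, RegionObserver {
    private static final TableName ATTENDANCE = TableName.valueOf("user_daily_attendance");

    @Override
    public Optional<RegionObserver> getRegionObserver() {
        return Optional.of(this);
    }

    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
            WALEdit edit, Durability durability) throws IOException {
        // For every row written to `users`, write a companion row into
        // `user_daily_attendance` through the server-side connection.
        try (Table attendance = c.getEnvironment().getConnection().getTable(ATTENDANCE)) {
            Put companion = new Put(put.getRow());
            companion.addColumn(Bytes.toBytes("d"), Bytes.toBytes("present"), Bytes.toBytes(0L));
            attendance.put(companion);
        }
    }
}
----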
==== Types of Observer Coprocessor

RegionObserver::
A RegionObserver coprocessor allows you to observe events on a region, such as `Get`
and `Put` operations. See
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver].

RegionServerObserver::
A RegionServerObserver allows you to observe events related to the RegionServer's
operation, such as starting, stopping, or performing merges, commits, or rollbacks. See
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.html[RegionServerObserver].

MasterObserver::
A MasterObserver allows you to observe events related to the HBase Master, such
as table creation, deletion, or schema modification. See
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/MasterObserver.html[MasterObserver].

WalObserver::
A WalObserver allows you to observe events related to writes to the Write-Ahead
Log (WAL). See
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/WALObserver.html[WALObserver].

<<cp_example,Examples>> provides working examples of observer coprocessors.
=== Endpoint Coprocessor

Endpoint coprocessors allow you to perform computation at the location of the data.
See <<cp_analogies, Coprocessor Analogies>>. An example is the need to calculate a running
average or summation for an entire table which spans hundreds of regions.

In contrast to observer coprocessors, where your code is run transparently, endpoint
coprocessors must be explicitly invoked using the
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html#coprocessorService-java.lang.Class-byte:A-byte:A-org.apache.hadoop.hbase.client.coprocessor.Batch.Call-[CoprocessorService()]
method available in
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html[Table]
or
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTable.html[HTable].

Starting with HBase 0.96, endpoint coprocessors are implemented using Google Protocol
Buffers (protobuf). For more details on protobuf, see Google's
link:https://developers.google.com/protocol-buffers/docs/proto[Protocol Buffer Guide].
Endpoint Coprocessors written in version 0.94 are not compatible with version 0.96 or later
(see link:https://issues.apache.org/jira/browse/HBASE-5448[HBASE-5448]). To upgrade your
HBase cluster from 0.94 or earlier to 0.96 or later, you need to reimplement your
coprocessor.
Coprocessor Endpoints should make no use of HBase internals and
only avail of public APIs; ideally a CPEP should depend on Interfaces
and data structures only. This is not always possible, but be aware
that depending on internals makes the Endpoint brittle, liable to breakage as HBase
internals evolve. HBase internal APIs annotated as private or evolving
do not have to respect semantic versioning rules or general java rules on
deprecation before removal. While generated protobuf files are
absent the hbase audience annotations -- they are created by the
protobuf protoc tool which knows nothing of how HBase works --
they should be considered `@InterfaceAudience.Private` and so are liable to
change.

<<cp_example,Examples>> provides working examples of endpoint coprocessors.
[[cp_loading]]
== Loading Coprocessors

To make your coprocessor available to HBase, it must be _loaded_, either statically
(through the HBase configuration) or dynamically (using HBase Shell or the Java API).

=== Static Loading

Follow these steps to statically load your coprocessor. Keep in mind that you must
restart HBase to unload a coprocessor that has been loaded statically.
. Define the Coprocessor in _hbase-site.xml_, with a <property> element with a <name>
and a <value> sub-element. The <name> should be one of the following:
+
- `hbase.coprocessor.region.classes` for RegionObservers and Endpoints.
- `hbase.coprocessor.wal.classes` for WALObservers.
- `hbase.coprocessor.master.classes` for MasterObservers.
+
<value> must contain the fully-qualified class name of your coprocessor's implementation
class.
+
For example, to load a Coprocessor (implemented in class SumEndPoint.java) you have to create
the following entry in the RegionServer's 'hbase-site.xml' file (generally located under the 'conf' directory):
+
[source,xml]
----
<property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.myname.hbase.coprocessor.endpoint.SumEndPoint</value>
</property>
----
+
If multiple classes are specified for loading, the class names must be comma-separated.
The framework attempts to load all the configured classes using the default class loader.
Therefore, the jar file must reside on the server-side HBase classpath.
+
Coprocessors which are loaded in this way will be active on all regions of all tables.
These are also called system Coprocessors.
The first listed Coprocessor will be assigned the priority `Coprocessor.Priority.SYSTEM`.
Each subsequent coprocessor in the list will have its priority value incremented by one (which
reduces its priority, because priorities have the natural sort order of Integers).
+
When calling out to registered observers, the framework executes their callback methods in the
sorted order of their priority. +
Ties are broken arbitrarily.
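+
For example, a configuration that loads two coprocessors in priority order might look like the following (the class names reuse examples from this chapter and are illustrative):
+
[source,xml]
----
<property>
    <name>hbase.coprocessor.region.classes</name>
    <!-- SumEndPoint gets Coprocessor.Priority.SYSTEM; RegionObserverExample gets SYSTEM + 1 -->
    <value>org.myname.hbase.coprocessor.endpoint.SumEndPoint,org.myname.hbase.coprocessor.RegionObserverExample</value>
</property>
----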
. Put your code on HBase's classpath. One easy way to do this is to drop the jar
(containing your code and all the dependencies) into the `lib/` directory of the
HBase installation.

. Restart HBase.

=== Static Unloading

. Delete the coprocessor's <property> element, including sub-elements, from `hbase-site.xml`.

. Restart HBase.

. Optionally, remove the coprocessor's JAR file from the classpath or HBase's `lib/`
directory.

=== Dynamic Loading
You can also load a coprocessor dynamically, without restarting HBase. This may seem
preferable to static loading, but dynamically loaded coprocessors are loaded on a
per-table basis, and are only available to the table for which they were loaded. For
this reason, dynamically loaded coprocessors are sometimes called *Table Coprocessors*.

In addition, dynamically loading a coprocessor acts as a schema change on the table,
and the table must be taken offline to load the coprocessor.
There are three ways to dynamically load a Coprocessor.

[NOTE]
.Assumptions
====
The instructions below make the following assumptions:

* A JAR called `coprocessor.jar` contains the Coprocessor implementation along with all of its
dependencies.
* The JAR is available in HDFS in some location like
`hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar`.
====
[[load_coprocessor_in_shell]]
==== Using HBase Shell

. Load the Coprocessor, using a command like the following:
+
[source]
----
hbase alter 'users', METHOD => 'table_att', 'Coprocessor'=>'hdfs://<namenode>:<port>/
user/<hadoop-user>/coprocessor.jar| org.myname.hbase.Coprocessor.RegionObserverExample|1073741823|
arg1=1,arg2=2'
----
+
The Coprocessor framework will try to read the class information from the coprocessor table
attribute value.
The value contains four pieces of information which are separated by the pipe (`|`) character.
* File path: The jar file containing the Coprocessor implementation must be in a location where
all region servers can read it. +
You could copy the file onto the local disk on each region server, but it is recommended to store
it in HDFS. +
link:https://issues.apache.org/jira/browse/HBASE-14548[HBASE-14548] allows a directory containing the jars
or some wildcards to be specified, such as: hdfs://<namenode>:<port>/user/<hadoop-user>/ or
hdfs://<namenode>:<port>/user/<hadoop-user>/*.jar. Please note that if a directory is specified,
all jar files (.jar) in the directory are added. It does not search for files in sub-directories.
Do not use a wildcard if you would like to specify a directory. This enhancement applies to the
usage via the Java API as well.
* Class name: The full class name of the Coprocessor.
* Priority: An integer. The framework will determine the execution sequence of all configured
observers registered at the same hook using priorities. This field can be left blank. In that
case the framework will assign a default priority value.
* Arguments (Optional): This field is passed to the Coprocessor implementation. This is optional.
. Verify that the coprocessor loaded:
+
[source]
----
hbase(main):04:0> describe 'users'
----
+
The coprocessor should be listed in the `TABLE_ATTRIBUTES`.
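+
The output should contain a `coprocessor$1` entry similar to the following (a hypothetical, abbreviated rendering; exact formatting varies by HBase version):
+
[source]
----
users, {TABLE_ATTRIBUTES => {coprocessor$1 =>
'hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar|org.myname.hbase.Coprocessor.RegionObserverExample|1073741823|arg1=1,arg2=2'}}
----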
==== Using the Java API (all HBase versions)

The following Java code shows how to use the `setValue()` method of `HTableDescriptor`
to load a coprocessor on the `users` table.

[source,java]
----
TableName tableName = TableName.valueOf("users");
String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
hTableDescriptor.setValue("COPROCESSOR$1", path + "|"
    + RegionObserverExample.class.getCanonicalName() + "|"
    + Coprocessor.PRIORITY_USER);
admin.modifyTable(tableName, hTableDescriptor);
----
==== Using the Java API (HBase 0.96+ only)

In HBase 0.96 and newer, the `addCoprocessor()` method of `HTableDescriptor` provides
an easier way to load a coprocessor dynamically.

[source,java]
----
TableName tableName = TableName.valueOf("users");
Path path = new Path("hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar");
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
hTableDescriptor.addCoprocessor(RegionObserverExample.class.getCanonicalName(), path,
    Coprocessor.PRIORITY_USER, null);
admin.modifyTable(tableName, hTableDescriptor);
----
WARNING: There is no guarantee that the framework will load a given Coprocessor successfully.
For example, the shell command neither guarantees a jar file exists at a particular location nor
verifies whether the given class is actually contained in the jar file.
=== Dynamic Unloading

==== Using HBase Shell

. Alter the table to remove the coprocessor.
+
[source]
----
hbase> alter 'users', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
----
==== Using the Java API

Reload the table definition without setting the coprocessor value, calling neither the
`setValue()` nor the `addCoprocessor()` method. This will remove any coprocessor
attached to the table.

[source,java]
----
TableName tableName = TableName.valueOf("users");
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
columnFamily1.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily1);
HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
columnFamily2.setMaxVersions(3);
hTableDescriptor.addFamily(columnFamily2);
admin.modifyTable(tableName, hTableDescriptor);
----
In HBase 0.96 and newer, you can instead use the `removeCoprocessor()` method of the
`HTableDescriptor` class.
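For example, the unload could be written as follows (a sketch reusing the `admin` and `tableName` variables from the previous listing):

[source,java]
----
// Fetch the current descriptor, drop just the coprocessor attribute, and
// push the modified schema back.
HTableDescriptor hTableDescriptor = new HTableDescriptor(admin.getTableDescriptor(tableName));
hTableDescriptor.removeCoprocessor(RegionObserverExample.class.getCanonicalName());
admin.modifyTable(tableName, hTableDescriptor);
----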
[[cp_example]]
== Examples

HBase ships examples for Observer Coprocessor.
A more detailed example is given below.

These examples assume a table called `users`, which has two column families `personalDet`
and `salaryDet`, containing personal and salary details. Below is the graphical representation
of the `users` table.
[width="100%",cols="7",options="header,footer"]
|====================
| 3+|personalDet 3+|salaryDet
|*rowkey* |*name* |*lastname* |*dob* |*gross* |*net* |*allowances*
|admin |Admin |Admin | 3+|
|cdickens |Charles |Dickens |02/07/1812 |10000 |8000 |2000
|jverne |Jules |Verne |02/08/1828 |12000 |9000 |3000
|====================
=== Observer Example

The following Observer coprocessor prevents the details of the user `admin` from being
returned in a `Get` or `Scan` of the `users` table.

. Write a class that implements the
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionCoprocessor.html[RegionCoprocessor] and
link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver]
interfaces.

. Override the `preGetOp()` method (the `preGet()` method is deprecated) to check
whether the client has queried for the rowkey with value `admin`. If so, return an
empty result. Otherwise, process the request as normal.

. Put your code and dependencies in a JAR file.

. Place the JAR in HDFS where HBase can locate it.

. Load the Coprocessor.

. Write a simple program to test it.
The following code implements the above steps:

[source,java]
----
public class RegionObserverExample implements RegionCoprocessor, RegionObserver {

    private static final byte[] ADMIN = Bytes.toBytes("admin");
    private static final byte[] COLUMN_FAMILY = Bytes.toBytes("details");
    private static final byte[] COLUMN = Bytes.toBytes("Admin_det");
    private static final byte[] VALUE = Bytes.toBytes("You can't see Admin details");

    @Override
    public Optional<RegionObserver> getRegionObserver() {
        return Optional.of(this);
    }

    @Override
    public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get, final List<Cell> results)
            throws IOException {
        if (Bytes.equals(get.getRow(), ADMIN)) {
            // Return a placeholder cell and skip normal processing for the admin row.
            Cell c = CellUtil.createCell(get.getRow(), COLUMN_FAMILY, COLUMN,
                System.currentTimeMillis(), (byte)4, VALUE);
            results.add(c);
            e.bypass();
        }
    }
}
----
Overriding the `preGetOp()` will only work for `Get` operations. You also need to override
the `preScannerOpen()` method to filter the `admin` row from scan results.

[source,java]
----
@Override
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
        final RegionScanner s) throws IOException {
    Filter filter = new RowFilter(CompareOp.NOT_EQUAL, new BinaryComparator(ADMIN));
    scan.setFilter(filter);
    return s;
}
----
This method works but there is a _side effect_. If the client has used a filter in
its scan, that filter will be replaced by this filter. Instead, you can explicitly
remove any `admin` results from the scan:

[source,java]
----
@Override
public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s,
        final List<Result> results, final int limit, final boolean hasMore) throws IOException {
    Iterator<Result> iterator = results.iterator();
    while (iterator.hasNext()) {
        Result result = iterator.next();
        if (Bytes.equals(result.getRow(), ADMIN)) {
            iterator.remove();
            break;
        }
    }
    return hasMore;
}
----
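Another option (a sketch, not part of the original example) is to combine the row filter with whatever filter the client supplied, using a `FilterList` with `MUST_PASS_ALL`, so the client's filter is honored rather than replaced:

[source,java]
----
@Override
public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
        final RegionScanner s) throws IOException {
    // Combine the admin-row filter with any client-supplied filter so both apply.
    Filter adminFilter = new RowFilter(CompareOp.NOT_EQUAL, new BinaryComparator(ADMIN));
    if (scan.getFilter() != null) {
        scan.setFilter(new FilterList(FilterList.Operator.MUST_PASS_ALL, scan.getFilter(), adminFilter));
    } else {
        scan.setFilter(adminFilter);
    }
    return s;
}
----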
=== Endpoint Example

Still using the `users` table, this example implements a coprocessor to calculate
the sum of all employee salaries, using an endpoint coprocessor.

. Create a '.proto' file defining your service.
+
[source]
----
option java_package = "org.myname.hbase.coprocessor.autogenerated";
option java_outer_classname = "Sum";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;
message SumRequest {
    required string family = 1;
    required string column = 2;
}

message SumResponse {
    required int64 sum = 1 [default = 0];
}

service SumService {
    rpc getSum(SumRequest)
        returns (SumResponse);
}
----
. Execute the `protoc` command to generate the Java code from the above '.proto' file.
+
[source]
----
$ protoc --java_out=src ./sum.proto
----
+
This will generate a class called `Sum.java`.

. Write a class that extends the generated service class, implement the `Coprocessor`
and `CoprocessorService` interfaces, and override the service method.
WARNING: If you load a coprocessor from `hbase-site.xml` and then load the same coprocessor
again using HBase Shell, it will be loaded a second time. The same class will
exist twice, and the second instance will have a higher ID (and thus a lower priority).
The effect is that the duplicate coprocessor is effectively ignored.
[source,java]
----
public class SumEndPoint extends Sum.SumService implements Coprocessor, CoprocessorService {

    private RegionCoprocessorEnvironment env;

    @Override
    public Service getService() {
        return this;
    }

    @Override
    public void start(CoprocessorEnvironment env) throws IOException {
        if (env instanceof RegionCoprocessorEnvironment) {
            this.env = (RegionCoprocessorEnvironment) env;
        } else {
            throw new CoprocessorException("Must be loaded on a table region!");
        }
    }

    @Override
    public void stop(CoprocessorEnvironment env) throws IOException {
        // nothing to do when the coprocessor is shut down
    }

    @Override
    public void getSum(RpcController controller, Sum.SumRequest request, RpcCallback<Sum.SumResponse> done) {
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes(request.getFamily()));
        scan.addColumn(Bytes.toBytes(request.getFamily()), Bytes.toBytes(request.getColumn()));

        Sum.SumResponse response = null;
        InternalScanner scanner = null;
        try {
            scanner = env.getRegion().getScanner(scan);
            List<Cell> results = new ArrayList<>();
            boolean hasMore = false;
            long sum = 0L;

            do {
                hasMore = scanner.next(results);
                for (Cell cell : results) {
                    sum = sum + Bytes.toLong(CellUtil.cloneValue(cell));
                }
                results.clear();
            } while (hasMore);

            response = Sum.SumResponse.newBuilder().setSum(sum).build();
        } catch (IOException ioe) {
            ResponseConverter.setControllerException(controller, ioe);
        } finally {
            if (scanner != null) {
                try {
                    scanner.close();
                } catch (IOException ignored) {}
            }
        }

        done.run(response);
    }
}
----
[source,java]
----
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
TableName tableName = TableName.valueOf("users");
Table table = connection.getTable(tableName);

final Sum.SumRequest request = Sum.SumRequest.newBuilder().setFamily("salaryDet").setColumn("gross").build();
try {
    Map<byte[], Long> results = table.coprocessorService(
        Sum.SumService.class,
        null,  /* start key */
        null,  /* end key */
        new Batch.Call<Sum.SumService, Long>() {
            @Override
            public Long call(Sum.SumService aggregate) throws IOException {
                BlockingRpcCallback<Sum.SumResponse> rpcCallback = new BlockingRpcCallback<>();
                aggregate.getSum(null, request, rpcCallback);
                Sum.SumResponse response = rpcCallback.get();

                return response.hasSum() ? response.getSum() : 0L;
            }
        }
    );

    for (Long sum : results.values()) {
        System.out.println("Sum = " + sum);
    }
} catch (ServiceException e) {
    e.printStackTrace();
} catch (Throwable e) {
    e.printStackTrace();
}
----

. Load the Coprocessor.

. Write client code to call the Coprocessor.
== Guidelines For Deploying A Coprocessor

Bundling Coprocessors::
You can bundle all classes for a coprocessor into a
single JAR on the RegionServer's classpath, for easy deployment. Otherwise,
place all dependencies on the RegionServer's classpath so that they can be
loaded during RegionServer start-up. The classpath for a RegionServer is set
in the RegionServer's `hbase-env.sh` file.
Automating Deployment::
You can use a tool such as Puppet, Chef, or
Ansible to ship the JAR for the coprocessor to the required location on your
RegionServers' filesystems and restart each RegionServer, to automate
coprocessor deployment. Details for such set-ups are out of scope of this
document.
Updating a Coprocessor::
Deploying a new version of a given coprocessor is not as simple as disabling it,
replacing the JAR, and re-enabling the coprocessor. This is because you cannot
reload a class in a JVM unless you delete all the current references to it.
Since the current JVM has a reference to the existing coprocessor, you must restart
the JVM, by restarting the RegionServer, in order to replace it. This behavior
is not expected to change.
Coprocessor Logging::
The Coprocessor framework does not provide an API for logging beyond standard Java
logging.
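+
For example, a coprocessor can use any standard logger on the RegionServer's classpath; the sketch below assumes SLF4J, which recent HBase versions ship with (adjust to your cluster's logging setup):
+
[source,java]
----
private static final Logger LOG = LoggerFactory.getLogger(RegionObserverExample.class);

// then, inside a callback such as preGetOp():
// LOG.info("preGetOp invoked for row {}", Bytes.toString(get.getRow()));
----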
Coprocessor Configuration::
If you do not want to load coprocessors from the HBase Shell, you can add their configuration
properties to `hbase-site.xml`. In <<load_coprocessor_in_shell>>, two arguments are
set: `arg1=1,arg2=2`. These could have been added to `hbase-site.xml` as follows:
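+
[source,xml]
----
<property>
  <name>arg1</name>
  <value>1</value>
</property>
<property>
  <name>arg2</name>
  <value>2</value>
</property>
----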
Then you can read the configuration using code like the following:
[source,java]
----
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
TableName tableName = TableName.valueOf("users");
Table table = connection.getTable(tableName);

Get get = new Get(Bytes.toBytes("admin"));
Result result = table.get(get);
for (Cell c : result.rawCells()) {
    System.out.println(Bytes.toString(CellUtil.cloneRow(c))
        + "==> " + Bytes.toString(CellUtil.cloneFamily(c))
        + "{" + Bytes.toString(CellUtil.cloneQualifier(c))
        + ":" + Bytes.toLong(CellUtil.cloneValue(c)) + "}");
}

Scan scan = new Scan();
ResultScanner scanner = table.getScanner(scan);
for (Result res : scanner) {
    for (Cell c : res.rawCells()) {
        System.out.println(Bytes.toString(CellUtil.cloneRow(c))
            + " ==> " + Bytes.toString(CellUtil.cloneFamily(c))
            + " {" + Bytes.toString(CellUtil.cloneQualifier(c))
            + ":" + Bytes.toLong(CellUtil.cloneValue(c)) + "}");
    }
}
----
== Restricting Coprocessor Usage

Restricting arbitrary user coprocessors can be a big concern in multitenant environments. HBase provides a continuum of options for ensuring only expected coprocessors are running:

* `hbase.coprocessor.enabled`: Enables or disables all coprocessors. This will limit the functionality of HBase, as disabling all coprocessors will disable some security providers. An example coprocessor so affected is `org.apache.hadoop.hbase.security.access.AccessController`.
* `hbase.coprocessor.user.enabled`: Enables or disables loading coprocessors on tables (i.e. user coprocessors).
* One can statically load coprocessors via the following tunables in `hbase-site.xml`:
** `hbase.coprocessor.regionserver.classes`: A comma-separated list of coprocessors that are loaded by region servers
** `hbase.coprocessor.region.classes`: A comma-separated list of RegionObserver and Endpoint coprocessors
** `hbase.coprocessor.user.region.classes`: A comma-separated list of coprocessors that are loaded by all regions
** `hbase.coprocessor.master.classes`: A comma-separated list of coprocessors that are loaded by the master (MasterObserver coprocessors)
** `hbase.coprocessor.wal.classes`: A comma-separated list of WALObserver coprocessors to load
* `hbase.coprocessor.abortonerror`: Whether to abort the daemon which has loaded the coprocessor if the coprocessor fails with an error other than `IOError`. If this is set to false and an access controller coprocessor has a fatal error, the coprocessor will be circumvented, so in secure installations this is advised to be `true`. However, one may override this on a per-table basis for user coprocessors, to ensure they do not abort their running region server and are instead unloaded on error.
* `hbase.coprocessor.region.whitelist.paths`: A comma-separated list available for those loading `org.apache.hadoop.hbase.security.access.CoprocessorWhitelistMasterObserver` whereby one can use the following options to white-list paths from which coprocessors may be loaded:
** Coprocessors on the classpath are implicitly white-listed
** `*` to wildcard all coprocessor paths
** An entire filesystem (e.g. `hdfs://my-cluster/`)
** A wildcard path to be evaluated by link:https://commons.apache.org/proper/commons-io/javadocs/api-release/org/apache/commons/io/FilenameUtils.html[FilenameUtils.wildcardMatch]
** Note: Path can specify scheme or not (e.g. `file:///usr/hbase/lib/coprocessors` or for all filesystems `/usr/hbase/lib/coprocessors`)
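For example, a restrictive baseline for a multitenant cluster might look like the following sketch in `hbase-site.xml` (values are illustrative; system coprocessors stay enabled while user coprocessors are disabled):

[source,xml]
----
<property>
  <name>hbase.coprocessor.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.user.enabled</name>
  <value>false</value>
</property>
<property>
  <name>hbase.coprocessor.abortonerror</name>
  <value>true</value>
</property>
----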