4 * Licensed to the Apache Software Foundation (ASF) under one
5 * or more contributor license agreements. See the NOTICE file
6 * distributed with this work for additional information
7 * regarding copyright ownership. The ASF licenses this file
8 * to you under the Apache License, Version 2.0 (the
9 * "License"); you may not use this file except in compliance
10 * with the License. You may obtain a copy of the License at
12 * http://www.apache.org/licenses/LICENSE-2.0
14 * Unless required by applicable law or agreed to in writing, software
15 * distributed under the License is distributed on an "AS IS" BASIS,
16 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17 * See the License for the specific language governing permissions and
18 * limitations under the License.
23 = Troubleshooting and Debugging Apache HBase
[[trouble.general]]
== General Guidelines

Always start with the master log (TODO: Which lines?). Normally it's just printing the same lines over and over again.
If not, then there's an issue.
Google or link:http://search-hadoop.com[search-hadoop.com] should return some hits for those exceptions you're seeing.
An error rarely comes alone in Apache HBase; usually when something goes wrong, what follows may be hundreds of exceptions and stack traces coming from all over the place.
The best way to approach this type of problem is to walk the log back to where it all began. For example, one trick with RegionServers is that they will print some metrics when aborting, so grepping for _Dump_ should get you close to the start of the problem.
RegionServer suicides are 'normal', as this is what they do when something goes wrong.
For example, if ulimit and max transfer threads (the two most important initial settings, see <<ulimit>> and <<dfs.datanode.max.transfer.threads>>) aren't changed, at some point DataNodes will be unable to create new threads, which from the HBase point of view looks as if HDFS were gone.
Think about what would happen if your MySQL database was suddenly unable to access files on your local file system; it's the same with HBase and HDFS.
43 Another very common reason to see RegionServers committing seppuku is when they enter prolonged garbage collection pauses that last longer than the default ZooKeeper session timeout.
44 For more information on GC pauses, see the link:https://blog.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/[3 part blog post] by Todd Lipcon and <<gcpause>> above.
[[trouble.log]]
== Logs

The key process logs are as follows... (replace <user> with the user that started the service, and <hostname> with the machine name)
51 NameNode: _$HADOOP_HOME/logs/hadoop-<user>-namenode-<hostname>.log_
53 DataNode: _$HADOOP_HOME/logs/hadoop-<user>-datanode-<hostname>.log_
55 JobTracker: _$HADOOP_HOME/logs/hadoop-<user>-jobtracker-<hostname>.log_
57 TaskTracker: _$HADOOP_HOME/logs/hadoop-<user>-tasktracker-<hostname>.log_
59 HMaster: _$HBASE_HOME/logs/hbase-<user>-master-<hostname>.log_
61 RegionServer: _$HBASE_HOME/logs/hbase-<user>-regionserver-<hostname>.log_
[[trouble.log.locations]]
=== Log Locations

For stand-alone deployments the logs are obviously going to be on a single machine; however, this is a development configuration only.
Production deployments need to run on a cluster.
[[trouble.log.locations.namenode]]
==== NameNode

The NameNode log is on the NameNode server.
The HBase Master is typically run on the NameNode server, as well as ZooKeeper.
For smaller clusters the JobTracker/ResourceManager is typically run on the NameNode server as well.
[[trouble.log.locations.datanode]]
==== DataNode

Each DataNode server will have a DataNode log for HDFS, as well as a RegionServer log for HBase.
Additionally, each DataNode server will also have a TaskTracker/NodeManager log for MapReduce task execution.
[[trouble.log.levels]]
=== Log Levels
90 ==== Enabling RPC-level logging
92 Enabling the RPC-level logging on a RegionServer can often give insight on timings at the server.
93 Once enabled, the amount of log spewed is voluminous.
94 It is not recommended that you leave this logging on for more than short bursts of time.
95 To enable RPC-level logging, browse to the RegionServer UI and click on _Log Level_.
Set the log level to `DEBUG` for the package `org.apache.hadoop.ipc` (that's right, `hadoop.ipc`, NOT `hbase.ipc`). Then tail the RegionServer's log.
To disable, set the logging level back to `INFO`.
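If you prefer to enable this statically rather than through the UI, a one-line sketch of the equivalent entry for the Log4j 1.x _log4j.properties_ file that HBase ships with (remove it, or set it back to `INFO`, when done):

[source]
----
log4j.logger.org.apache.hadoop.ipc=DEBUG
----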
102 === JVM Garbage Collection Logs
106 All example Garbage Collection logs in this section are based on Java 8 output. The introduction of Unified Logging in Java 9 and newer will result in very different looking logs.
HBase is memory intensive, and using the default GC you can see long pauses in all threads, including the _Juliet Pause_, aka "GC of Death". To help debug this, or to confirm it is happening, GC logging can be turned on in the Java virtual machine.
To enable, in _hbase-env.sh_, uncomment one of the below lines:
116 # This enables basic gc logging to the .out file.
117 # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
119 # This enables basic gc logging to its own file.
120 # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"
122 # This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
123 # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
# If <FILE-PATH> is not replaced, the log file (.gc) will be generated in the HBASE_LOG_DIR.
128 At this point you should see logs like so:
133 64898.952: [GC [1 CMS-initial-mark: 2811538K(3055704K)] 2812179K(3061272K), 0.0007360 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
134 64898.953: [CMS-concurrent-mark-start]
135 64898.971: [GC 64898.971: [ParNew: 5567K->576K(5568K), 0.0101110 secs] 2817105K->2812715K(3061272K), 0.0102200 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
138 In this section, the first line indicates a 0.0007360 second pause for the CMS to initially mark.
139 This pauses the entire VM, all threads for that period of time.
141 The third line indicates a "minor GC", which pauses the VM for 0.0101110 seconds - aka 10 milliseconds.
142 It has reduced the "ParNew" from about 5.5m to 576k.
143 Later on in this cycle we see:
148 64901.445: [CMS-concurrent-mark: 1.542/2.492 secs] [Times: user=10.49 sys=0.33, real=2.49 secs]
149 64901.445: [CMS-concurrent-preclean-start]
150 64901.453: [GC 64901.453: [ParNew: 5505K->573K(5568K), 0.0062440 secs] 2868746K->2864292K(3061272K), 0.0063360 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
151 64901.476: [GC 64901.476: [ParNew: 5563K->575K(5568K), 0.0072510 secs] 2869283K->2864837K(3061272K), 0.0073320 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
152 64901.500: [GC 64901.500: [ParNew: 5517K->573K(5568K), 0.0120390 secs] 2869780K->2865267K(3061272K), 0.0121150 secs] [Times: user=0.09 sys=0.00, real=0.01 secs]
153 64901.529: [GC 64901.529: [ParNew: 5507K->569K(5568K), 0.0086240 secs] 2870200K->2865742K(3061272K), 0.0087180 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
154 64901.554: [GC 64901.555: [ParNew: 5516K->575K(5568K), 0.0107130 secs] 2870689K->2866291K(3061272K), 0.0107820 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
155 64901.578: [CMS-concurrent-preclean: 0.070/0.133 secs] [Times: user=0.48 sys=0.01, real=0.14 secs]
156 64901.578: [CMS-concurrent-abortable-preclean-start]
157 64901.584: [GC 64901.584: [ParNew: 5504K->571K(5568K), 0.0087270 secs] 2871220K->2866830K(3061272K), 0.0088220 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
158 64901.609: [GC 64901.609: [ParNew: 5512K->569K(5568K), 0.0063370 secs] 2871771K->2867322K(3061272K), 0.0064230 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
159 64901.615: [CMS-concurrent-abortable-preclean: 0.007/0.037 secs] [Times: user=0.13 sys=0.00, real=0.03 secs]
160 64901.616: [GC[YG occupancy: 645 K (5568 K)]64901.616: [Rescan (parallel) , 0.0020210 secs]64901.618: [weak refs processing, 0.0027950 secs] [1 CMS-remark: 2866753K(3055704K)] 2867399K(3061272K), 0.0049380 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
161 64901.621: [CMS-concurrent-sweep-start]
The first line indicates that the CMS concurrent mark (finding garbage) has taken 2.4 seconds.
But this is a _concurrent_ 2.4 seconds; Java has not been paused at any point in time.
There are a few more minor GCs, and then there is a pause at the second-to-last line:
171 64901.616: [GC[YG occupancy: 645 K (5568 K)]64901.616: [Rescan (parallel) , 0.0020210 secs]64901.618: [weak refs processing, 0.0027950 secs] [1 CMS-remark: 2866753K(3055704K)] 2867399K(3061272K), 0.0049380 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
174 The pause here is 0.0049380 seconds (aka 4.9 milliseconds) to 'remark' the heap.
176 At this point the sweep starts, and you can watch the heap size go down:
181 64901.637: [GC 64901.637: [ParNew: 5501K->569K(5568K), 0.0097350 secs] 2871958K->2867441K(3061272K), 0.0098370 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
182 ... lines removed ...
183 64904.936: [GC 64904.936: [ParNew: 5532K->568K(5568K), 0.0070720 secs] 1365024K->1360689K(3061272K), 0.0071930 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
184 64904.953: [CMS-concurrent-sweep: 2.030/3.332 secs] [Times: user=9.57 sys=0.26, real=3.33 secs]
At this point, the CMS sweep took 3.332 seconds, and the heap went from about 2.8 GB down to 1.3 GB.
The key point here is to keep all these pauses low.
CMS pauses are always low, but if your ParNew starts growing, you can see minor GC pauses approach 100ms, exceed 100ms, and hit as high as 400ms.
This can be due to the size of the ParNew, which should be relatively small.
If your ParNew is very large after running HBase for a while (in one example a ParNew was about 150MB), then you might have to constrain the size of ParNew (the larger it is, the longer the collections take, but if it's too small, objects are promoted to old gen too quickly). In the example below we constrain new gen size to 64m.
195 Add the below line in _hbase-env.sh_:
199 export SERVER_GC_OPTS="$SERVER_GC_OPTS -XX:NewSize=64m -XX:MaxNewSize=64m"
202 Similarly, to enable GC logging for client processes, uncomment one of the below lines in _hbase-env.sh_:
207 # This enables basic gc logging to the .out file.
208 # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
210 # This enables basic gc logging to its own file.
211 # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"
213 # This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
214 # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
# If <FILE-PATH> is not replaced, the log file (.gc) will be generated in the HBASE_LOG_DIR.
219 For more information on GC pauses, see the link:https://blog.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/[3 part blog post] by Todd Lipcon and <<gcpause>> above.
[[trouble.resources]]
== Resources
224 [[trouble.resources.searchhadoop]]
225 === search-hadoop.com
227 link:http://search-hadoop.com[search-hadoop.com] indexes all the mailing lists and is great for historical searches.
Search here first when you have an issue, as it's more than likely someone has already had your problem.
[[trouble.resources.lists]]
=== Mailing Lists
233 Ask a question on the link:https://hbase.apache.org/mail-lists.html[Apache HBase mailing lists].
234 The 'dev' mailing list is aimed at the community of developers actually building Apache HBase and for features currently under development, and 'user' is generally used for questions on released versions of Apache HBase.
235 Before going to the mailing list, make sure your question has not already been answered by searching the mailing list archives first.
236 Use <<trouble.resources.searchhadoop>>.
237 Take some time crafting your question.
238 See link:http://www.mikeash.com/getting_answers.html[Getting Answers] for ideas on crafting good questions.
239 A quality question that includes all context and exhibits evidence the author has tried to find answers in the manual and out on lists is more likely to get a prompt response.
[[trouble.resources.slack]]
=== Slack

See the http://apache-hbase.slack.com channel on Slack.
[[trouble.resources.irc]]
=== IRC

(You will probably get a more prompt response on the Slack channel)

#hbase on irc.freenode.net
[[trouble.resources.jira]]
=== JIRA
254 link:https://issues.apache.org/jira/browse/HBASE[JIRA] is also really helpful when looking for Hadoop/HBase-specific issues.
[[trouble.tools]]
== Tools

[[trouble.tools.builtin]]
=== Builtin Tools
262 [[trouble.tools.builtin.webmaster]]
263 ==== Master Web Interface
The Master starts a web interface on port 16010 by default.
267 The Master web UI lists created tables and their definition (e.g., ColumnFamilies, blocksize, etc.). Additionally, the available RegionServers in the cluster are listed along with selected high-level metrics (requests, number of regions, usedHeap, maxHeap). The Master web UI allows navigation to each RegionServer's web UI.
269 [[trouble.tools.builtin.webregion]]
270 ==== RegionServer Web Interface
RegionServers start a web interface on port 16030 by default.
274 The RegionServer web UI lists online regions and their start/end keys, as well as point-in-time RegionServer metrics (requests, regions, storeFileIndexSize, compactionQueueSize, etc.).
See <<hbase_metrics>> for more information on metric definitions.
[[trouble.tools.builtin.zkcli]]
==== zkcli
281 `zkcli` is a very useful tool for investigating ZooKeeper-related issues.
285 ./hbase zkcli -server host:port <cmd> <args>
288 The commands (and arguments) are:
295 set path data [version]
296 delquota [-n|-b] path
299 create [-s] [-e] path data acl
310 delete path [version]
311 setquota -n|-b val path
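For example, to list the znodes under HBase's root znode using the synopsis above (the ZooKeeper host and port here are hypothetical):

[source,bash]
----
./hbase zkcli -server zk-node1.example.com:2181 ls /hbase
----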
[[trouble.tools.external]]
=== External Tools

[[trouble.tools.tail]]
==== tail
320 `tail` is the command line tool that lets you look at the end of a file.
321 Add the `-f` option and it will refresh when new data is available.
It's useful when you are wondering what's happening, for example when a cluster is taking a long time to shut down or start up, as you can just fire up a new terminal and tail the master log (and maybe a few RegionServers).
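For example, to follow the master log, using the log locations listed in <<trouble.log.locations>>:

[source,bash]
----
tail -f $HBASE_HOME/logs/hbase-<user>-master-<hostname>.log
----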
[[trouble.tools.top]]
==== top
327 `top` is probably one of the most important tools when first trying to see what's running on a machine and how the resources are consumed.
Here's an example from a production system:
332 top - 14:46:59 up 39 days, 11:55, 1 user, load average: 3.75, 3.57, 3.84
333 Tasks: 309 total, 1 running, 308 sleeping, 0 stopped, 0 zombie
334 Cpu(s): 4.5%us, 1.6%sy, 0.0%ni, 91.7%id, 1.4%wa, 0.1%hi, 0.6%si, 0.0%st
335 Mem: 24414432k total, 24296956k used, 117476k free, 7196k buffers
336 Swap: 16008732k total, 14348k used, 15994384k free, 11106908k cached
338 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
339 15558 hadoop 18 -2 3292m 2.4g 3556 S 79 10.4 6523:52 java
340 13268 hadoop 18 -2 8967m 8.2g 4104 S 21 35.1 5170:30 java
341 8895 hadoop 18 -2 1581m 497m 3420 S 11 2.1 4002:32 java
Here we can see that the system load average during the last minute is 3.75, which very roughly means that on average 3.75 threads were waiting for CPU time during that minute.
In general, the _perfect_ utilization equals the number of cores; under that number the machine is underutilized, and over it the machine is overutilized.
This is an important concept; see this article to understand it more: http://www.linuxjournal.com/article/9001.
Apart from load, we can see that the system is using almost all its available RAM, but most of it is used for the OS cache (which is good). The swap only has a few KBs in it, and this is desired; high numbers would indicate swapping activity, which is the nemesis of performance for Java systems.
Another way to detect swapping is when the load average goes through the roof (although this could also be caused by things like a dying disk, among others).
The list of processes isn't super useful by default; all we know is that 3 java processes are using about 111% of the CPUs.
To know which is which, simply type `c` and each line will be expanded.
354 Typing `1` will give you the detail of how each CPU is used instead of the average for all of them like shown here.
[[trouble.tools.jps]]
==== jps
359 `jps` is shipped with every JDK and gives the java process ids for the current user (if root, then it gives the ids for all users). Example:
363 hadoop@sv4borg12:~$ jps
In order, we see:

* Hadoop TaskTracker, manages the local Childs
377 * HBase RegionServer, serves regions
* Child, it's a MapReduce task; cannot tell which type exactly
379 * Hadoop TaskTracker, manages the local Childs
380 * Hadoop DataNode, serves blocks
381 * HQuorumPeer, a ZooKeeper ensemble member
* Jps, well... it's the current process
* ThriftServer, a special one that will be running only if Thrift was started
* jmx, a local process that's part of our monitoring platform (poorly named, maybe). You probably don't have that.
386 You can then do stuff like checking out the full command line that started the process:
390 hadoop@sv4borg12:~$ ps aux | grep HRegionServer
391 hadoop 17789 155 35.2 9067824 8604364 ? S<l Mar04 9855:48 /usr/java/jdk1.6.0_14/bin/java -Xmx8000m -XX:+DoEscapeAnalysis -XX:+AggressiveOpts -XX:+UseConcMarkSweepGC -XX:NewSize=64m -XX:MaxNewSize=64m -XX:CMSInitiatingOccupancyFraction=88 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/export1/hadoop/logs/gc-hbase.log -Dcom.sun.management.jmxremote.port=10102 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hbase/conf/jmxremote.password -Dcom.sun.management.jmxremote -Dhbase.log.dir=/export1/hadoop/logs -Dhbase.log.file=hbase-hadoop-regionserver-sv4borg12.log -Dhbase.home.dir=/home/hadoop/hbase -Dhbase.id.str=hadoop -Dhbase.root.logger=INFO,DRFA -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64 -classpath /home/hadoop/hbase/bin/../conf:[many jars]:/home/hadoop/hadoop/conf org.apache.hadoop.hbase.regionserver.HRegionServer start
[[trouble.tools.jstack]]
==== jstack
397 `jstack` is one of the most important tools when trying to figure out what a java process is doing apart from looking at the logs.
It has to be used in conjunction with `jps` in order to give it a process id.
It shows a list of threads, each one has a name, and they appear in the order that they were created (so the top ones are the most recent threads). Here are a few examples:
401 The main thread of a RegionServer waiting for something to do from the master:
405 "regionserver60020" prio=10 tid=0x0000000040ab4000 nid=0x45cf waiting on condition [0x00007f16b6a96000..0x00007f16b6a96a70]
406 java.lang.Thread.State: TIMED_WAITING (parking)
407 at sun.misc.Unsafe.park(Native Method)
408 - parking to wait for <0x00007f16cd5c2f30> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
409 at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
410 at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
411 at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:395)
412 at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:647)
413 at java.lang.Thread.run(Thread.java:619)
416 The MemStore flusher thread that is currently flushing to a file:
420 "regionserver60020.cacheFlusher" daemon prio=10 tid=0x0000000040f4e000 nid=0x45eb in Object.wait() [0x00007f16b5b86000..0x00007f16b5b87af0]
421 java.lang.Thread.State: WAITING (on object monitor)
422 at java.lang.Object.wait(Native Method)
423 at java.lang.Object.wait(Object.java:485)
424 at org.apache.hadoop.ipc.Client.call(Client.java:803)
425 - locked <0x00007f16cb14b3a8> (a org.apache.hadoop.ipc.Client$Call)
426 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
427 at $Proxy1.complete(Unknown Source)
428 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
429 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
430 at java.lang.reflect.Method.invoke(Method.java:597)
431 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
432 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
433 at $Proxy1.complete(Unknown Source)
434 at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3390)
435 - locked <0x00007f16cb14b470> (a org.apache.hadoop.hdfs.DFSClient$DFSOutputStream)
436 at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3304)
437 at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
438 at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
439 at org.apache.hadoop.hbase.io.hfile.HFile$Writer.close(HFile.java:650)
440 at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:853)
441 at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:467)
442 - locked <0x00007f16d00e6f08> (a java.lang.Object)
443 at org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:427)
444 at org.apache.hadoop.hbase.regionserver.Store.access$100(Store.java:80)
445 at org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.flushCache(Store.java:1359)
446 at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:907)
447 at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:834)
448 at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:786)
449 at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:250)
450 at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:224)
451 at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:146)
454 A handler thread that's waiting for stuff to do (like put, delete, scan, etc.):
458 "IPC Server handler 16 on 60020" daemon prio=10 tid=0x00007f16b011d800 nid=0x4a5e waiting on condition [0x00007f16afefd000..0x00007f16afefd9f0]
459 java.lang.Thread.State: WAITING (parking)
460 at sun.misc.Unsafe.park(Native Method)
461 - parking to wait for <0x00007f16cd3f8dd8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
462 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
463 at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
464 at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
465 at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1013)
468 And one that's busy doing an increment of a counter (it's in the phase where it's trying to create a scanner in order to read the last value):
472 "IPC Server handler 66 on 60020" daemon prio=10 tid=0x00007f16b006e800 nid=0x4a90 runnable [0x00007f16acb77000..0x00007f16acb77cf0]
473 java.lang.Thread.State: RUNNABLE
474 at org.apache.hadoop.hbase.regionserver.KeyValueHeap.<init>(KeyValueHeap.java:56)
475 at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:79)
476 at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1202)
477 at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2209)
478 at org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1063)
479 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1055)
480 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1039)
481 at org.apache.hadoop.hbase.regionserver.HRegion.getLastIncrement(HRegion.java:2875)
482 at org.apache.hadoop.hbase.regionserver.HRegion.incrementColumnValue(HRegion.java:2978)
483 at org.apache.hadoop.hbase.regionserver.HRegionServer.incrementColumnValue(HRegionServer.java:2433)
484 at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
485 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
486 at java.lang.reflect.Method.invoke(Method.java:597)
487 at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:560)
488 at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1027)
491 A thread that receives data from HDFS:
495 "IPC Client (47) connection to sv4borg9/10.4.24.40:9000 from hadoop" daemon prio=10 tid=0x00007f16a02d0000 nid=0x4fa3 runnable [0x00007f16b517d000..0x00007f16b517dbf0]
496 java.lang.Thread.State: RUNNABLE
497 at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
498 at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
499 at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
500 at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
501 - locked <0x00007f17d5b68c00> (a sun.nio.ch.Util$1)
502 - locked <0x00007f17d5b68be8> (a java.util.Collections$UnmodifiableSet)
503 - locked <0x00007f1877959b50> (a sun.nio.ch.EPollSelectorImpl)
504 at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
505 at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:332)
506 at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
507 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
508 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
509 at java.io.FilterInputStream.read(FilterInputStream.java:116)
510 at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:304)
511 at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
512 at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
513 - locked <0x00007f1808539178> (a java.io.BufferedInputStream)
514 at java.io.DataInputStream.readInt(DataInputStream.java:370)
515 at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:569)
516 at org.apache.hadoop.ipc.Client$Connection.run(Client.java:477)
519 And here is a master trying to recover a lease after a RegionServer died:
523 "LeaseChecker" daemon prio=10 tid=0x00000000407ef800 nid=0x76cd waiting on condition [0x00007f6d0eae2000..0x00007f6d0eae2a70]
525 java.lang.Thread.State: WAITING (on object monitor)
526 at java.lang.Object.wait(Native Method)
527 at java.lang.Object.wait(Object.java:485)
528 at org.apache.hadoop.ipc.Client.call(Client.java:726)
529 - locked <0x00007f6d1cd28f80> (a org.apache.hadoop.ipc.Client$Call)
530 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
531 at $Proxy1.recoverBlock(Unknown Source)
532 at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2636)
533 at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2832)
534 at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:529)
535 at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:186)
536 at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:530)
537 at org.apache.hadoop.hbase.util.FSUtils.recoverFileLease(FSUtils.java:619)
538 at org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1322)
539 at org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1210)
540 at org.apache.hadoop.hbase.master.HMaster.splitLogAfterStartup(HMaster.java:648)
541 at org.apache.hadoop.hbase.master.HMaster.joinCluster(HMaster.java:572)
542 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:503)
[[trouble.tools.opentsdb]]
==== OpenTSDB
548 link:http://opentsdb.net[OpenTSDB] is an excellent alternative to Ganglia as it uses Apache HBase to store all the time series and doesn't have to downsample.
549 Monitoring your own HBase cluster that hosts OpenTSDB is a good exercise.
551 Here's an example of a cluster that's suffering from hundreds of compactions launched almost all around the same time, which severely affects the IO performance: (TODO: insert graph plotting compactionQueueSize)
553 It's a good practice to build dashboards with all the important graphs per machine and per cluster so that debugging issues can be done with a single quick look.
554 For example, at StumbleUpon there's one dashboard per cluster with the most important metrics from both the OS and Apache HBase.
You can then go down to the machine level and get even more detailed metrics.
[[trouble.tools.clustersshtop]]
==== clusterssh+top

clusterssh+top is like a poor man's monitoring system, and it can be quite useful when you have only a few machines, as it's very easy to set up.
561 Starting clusterssh will give you one terminal per machine and another terminal in which whatever you type will be retyped in every window.
This means that you can type `top` once and it will start it for all of your machines at the same time, giving you a full view of the current state of your cluster.
563 You can also tail all the logs at the same time, edit files, etc.
[[trouble.client]]
== Client

For more information on the HBase client, see <<architecture.client,client>>.
570 [[trouble.client.scantimeout]]
571 === ScannerTimeoutException or UnknownScannerException
573 This is thrown if the time between RPC calls from the client to RegionServer exceeds the scan timeout.
For example, if `Scan.setCaching` is set to 500, then there will be an RPC call to fetch the next batch of rows every 500 `.next()` calls on the ResultScanner, because data is being transferred in blocks of 500 rows to the client.
Reducing the setCaching value may be an option, but setting this value too low makes for inefficient processing across large numbers of rows.
577 See <<perf.hbase.client.caching>>.
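As a sketch of what dialing caching down looks like in the Java client (the `Table` handle would come from your own connection setup; the values are illustrative):

[source,java]
----
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanCachingExample {
  // Use a modest caching value so that slow per-row processing still
  // issues the next RPC well within the scanner timeout.
  static void scanWithModestCaching(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setCaching(100); // 100 rows per RPC instead of, say, 500
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {
        // process the result; if this work is slow, keep caching low
      }
    }
  }
}
----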
579 === Performance Differences in Thrift and Java APIs
581 Poor performance, or even `ScannerTimeoutExceptions`, can occur if `Scan.setCaching` is too high, as discussed in <<trouble.client.scantimeout>>.
582 If the Thrift client uses the wrong caching settings for a given workload, performance can suffer compared to the Java API.
583 To set caching for a given scan in the Thrift client, use the `scannerGetList(scannerId, numRows)` method, where `numRows` is an integer representing the number of rows to cache.
584 In one case, it was found that reducing the cache for Thrift scans from 1000 to 100 increased performance to near parity with the Java API given the same queries.
586 See also Jesse Andersen's link:http://blog.cloudera.com/blog/2014/04/how-to-use-the-hbase-thrift-interface-part-3-using-scans/[blog post] about using Scans with Thrift.
588 [[trouble.client.lease.exception]]
589 === `LeaseException` when calling `Scanner.next`
591 In some situations clients that fetch data from a RegionServer get a LeaseException instead of the usual <<trouble.client.scantimeout>>.
592 Usually the source of the exception is `org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)` (line number may vary). It tends to happen in the context of a slow/freezing `RegionServer#next` call.
593 It can be prevented by having `hbase.rpc.timeout` > `hbase.regionserver.lease.period`.
594 Harsh J investigated the issue as part of the mailing list thread link:https://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E[HBase, mail # user - Lease does not exist exceptions]
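For example, in _hbase-site.xml_ (the millisecond values below are illustrative only; what matters is that the RPC timeout stays above the lease period):

[source,xml]
----
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value>
</property>
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>60000</value>
</property>
----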
596 [[trouble.client.scarylogs]]
597 === Shell or client application throws lots of scary exceptions during normal operation
Since 0.20.0 the default log level for `org.apache.hadoop.hbase.*` is DEBUG.
601 On your clients, edit _$HBASE_HOME/conf/log4j.properties_ and change this: `log4j.logger.org.apache.hadoop.hbase=DEBUG` to this: `log4j.logger.org.apache.hadoop.hbase=INFO`, or even `log4j.logger.org.apache.hadoop.hbase=WARN`.
603 [[trouble.client.longpauseswithcompression]]
604 === Long Client Pauses With Compression
606 This is a fairly frequent question on the Apache HBase dist-list.
607 The scenario is that a client is typically inserting a lot of data into a relatively un-optimized HBase cluster.
608 Compression can exacerbate the pauses, although it is not the source of the problem.
610 See <<precreate.regions>> on the pattern for pre-creating regions and confirm that the table isn't starting with a single region.
See <<perf.configurations>> for cluster configuration, particularly `hbase.hstore.blockingStoreFiles`, `hbase.hregion.memstore.block.multiplier`, `MAX_FILESIZE` (region size), and `MEMSTORE_FLUSHSIZE`.
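The first two are _hbase-site.xml_ properties; a sketch with placeholder values, for illustration only, not recommendations:

[source,xml]
----
<property>
  <!-- updates block when a Store reaches this many StoreFiles -->
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>16</value>
</property>
<property>
  <!-- updates block when a memstore reaches multiplier * flush size -->
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>4</value>
</property>
----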
A slightly longer explanation of why pauses can happen is as follows: Puts are sometimes blocked on the MemStores, which are blocked by the flusher thread, which is blocked because there are too many files to compact, because the compactor is given too many small files to compact and has to compact the same data repeatedly.
615 This situation can occur even with minor compactions.
616 Compounding this situation, Apache HBase doesn't compress data in memory.
617 Thus, the 64MB that lives in the MemStore could become a 6MB file after compression - which results in a smaller StoreFile.
618 The upside is that more data is packed into the same region, but performance is achieved by being able to write larger files - which is why HBase waits until the flushsize before writing a new StoreFile.
619 And smaller StoreFiles become targets for compaction.
620 Without compression the files are much bigger and don't need as much compaction, however this is at the expense of I/O.
622 For additional information, see this thread on link:http://search-hadoop.com/m/WUnLM6ojHm1/Long+client+pauses+with+compression&subj=Long+client+pauses+with+compression[Long client pauses with compression].
624 [[trouble.client.security.rpc.krb]]
625 === Secure Client Connect ([Caused by GSSException: No valid credentials provided...])
627 You may encounter the following error:
630 Secure Client Connect ([Caused by GSSException: No valid credentials provided
631 (Mechanism level: Request is a replay (34) V PROCESS_TGS)])
634 This issue is caused by bugs in the MIT Kerberos replay_cache component, link:http://krbdev.mit.edu/rt/Ticket/Display.html?id=1201[#1201] and link:http://krbdev.mit.edu/rt/Ticket/Display.html?id=5924[#5924].
These bugs caused the old version of krb5-server to erroneously block subsequent requests sent from a Principal.
This caused krb5-server to block the connections sent from one Client (one HTable instance with multi-threaded connection instances for each RegionServer); messages such as `Request is a replay (34)` are logged in the client log.
You can ignore the messages, because HTable will retry 5 * 10 (50) times for each failed connection by default.
HTable will throw IOException if any connection to the RegionServer fails after the retries, so that the user client code for the HTable instance can handle it further.
638 NOTE: `HTable` is deprecated in HBase 1.0, in favor of `Table`.
640 Alternatively, update krb5-server to a version which solves these issues, such as krb5-server-1.10.3.
641 See JIRA link:https://issues.apache.org/jira/browse/HBASE-10379[HBASE-10379] for more details.
643 [[trouble.client.zookeeper]]
644 === ZooKeeper Client Connection Errors
651 11/07/05 11:26:41 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
652 unexpected error, closing socket connection and attempting reconnect
653 java.net.ConnectException: Connection refused: no further information
654 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
655 at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
656 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
657 11/07/05 11:26:43 INFO zookeeper.ClientCnxn: Opening socket connection to
658 server localhost/127.0.0.1:2181
659 11/07/05 11:26:44 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
660 unexpected error, closing socket connection and attempting reconnect
661 java.net.ConnectException: Connection refused: no further information
662 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
663 at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
664 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
665 11/07/05 11:26:45 INFO zookeeper.ClientCnxn: Opening socket connection to
666 server localhost/127.0.0.1:2181
669 ...are either due to ZooKeeper being down, or unreachable due to network issues.
671 The utility <<trouble.tools.builtin.zkcli>> may help investigate ZooKeeper issues.
673 [[trouble.client.oome.directmemory.leak]]
674 === Client running out of memory though heap size seems to be stable (but the off-heap/direct heap keeps growing)
676 You are likely running into the issue that is described and worked through in the mail thread link:http://search-hadoop.com/m/ubhrX8KvcH/Suspected+memory+leak&subj=Re+Suspected+memory+leak[HBase, mail # user - Suspected memory leak] and continued over in link:http://search-hadoop.com/m/p2Agc1Zy7Va/MaxDirectMemorySize+Was%253A+Suspected+memory+leak&subj=Re+FeedbackRe+Suspected+memory+leak[HBase, mail # dev - FeedbackRe: Suspected memory leak].
677 A workaround is passing your client-side JVM a reasonable value for `-XX:MaxDirectMemorySize`.
By default, the `MaxDirectMemorySize` is equal to your `-Xmx` max heapsize setting (if `-Xmx` is set). Try setting it to something smaller (for example, one user had success setting it to `1g` when they had a client-side heap of `12g`). If you set it too small, it will bring on `FullGCs`, so keep it a bit hefty.
You want to make this setting client-side only, especially if you are running the new experimental server-side off-heap cache, since this feature depends on being able to use big direct buffers (you may have to keep separate client-side and server-side config dirs).
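One way to pass the flag, assuming clients are launched via the `hbase` script with a separate client-side _hbase-env.sh_ (the `1g` value is just the example from above):

[source,bash]
----
export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=1g"
----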
681 [[trouble.client.security.rpc]]
682 === Secure Client Cannot Connect ([Caused by GSSException: No valid credentials provided(Mechanism level: Failed to find any Kerberos tgt)])
684 There can be several causes that produce this symptom.
686 First, check that you have a valid Kerberos ticket.
687 One is required in order to set up communication with a secure Apache HBase cluster.
688 Examine the ticket currently in the credential cache, if any, by running the `klist` command line utility.
689 If no ticket is listed, you must obtain a ticket by running the `kinit` command with either a keytab specified, or by interactively entering a password for the desired principal.
691 Then, consult the link:http://docs.oracle.com/javase/1.5.0/docs/guide/security/jgss/tutorials/Troubleshooting.html[Java Security Guide troubleshooting section].
692 The most common problem addressed there is resolved by setting `javax.security.auth.useSubjectCredsOnly` system property value to `false`.
694 Because of a change in the format in which MIT Kerberos writes its credentials cache, there is a bug in the Oracle JDK 6 Update 26 and earlier that causes Java to be unable to read the Kerberos credentials cache created by versions of MIT Kerberos 1.8.1 or higher.
695 If you have this problematic combination of components in your environment, to work around this problem, first log in with `kinit` and then immediately refresh the credential cache with `kinit -R`.
696 The refresh will rewrite the credential cache without the problematic formatting.
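A sketch of the sequence (the principal and keytab path are hypothetical):

[source,bash]
----
klist                                            # inspect the current credential cache
kinit -kt /etc/security/keytabs/user.keytab user@EXAMPLE.COM
kinit -R                                         # rewrite the cache without the problematic formatting
----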
Prior to JDK 1.4, the JCE was an unbundled product, and as such, the JCA and JCE were regularly referred to as separate, distinct components.
As the JCE is now bundled in JDK 7.0, the distinction is becoming less apparent. Since the JCE uses the same architecture as the JCA, the JCE should be more properly thought of as a part of the JCA.
You may need to install the link:https://docs.oracle.com/javase/1.5.0/docs/guide/security/jce/JCERefGuide.html[Java Cryptography Extension], or JCE, if you are using JDK 1.5 or an earlier version.
Ensure the JCE jars are on the classpath on both server and client systems.
704 You may also need to download the link:http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html[unlimited strength JCE policy files].
705 Uncompress and extract the downloaded file, and install the policy jars into _<java-home>/lib/security_.
[[trouble.mapreduce]]
== MapReduce
710 [[trouble.mapreduce.local]]
711 === You Think You're On The Cluster, But You're Actually Local
The following stacktrace happened using `ImportTsv`, but things like this can happen on any job with a misconfiguration.
717 WARN mapred.LocalJobRunner: job_local_0001
718 java.lang.IllegalArgumentException: Can't read partitions file
719 at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
720 at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
721 at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
722 at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
723 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
724 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
725 at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
726 Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
727 at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:383)
728 at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
729 at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:776)
730 at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
731 at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
732 at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:296)
735 ...see the critical portion of the stack? It's...
739 at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
742 LocalJobRunner means the job is running locally, not on the cluster.
744 To solve this problem, you should run your MR job with your `HADOOP_CLASSPATH` set to include the HBase dependencies.
745 The "hbase classpath" utility can be used to do this easily.
746 For example (substitute VERSION with your HBase version):
750 HADOOP_CLASSPATH=`hbase classpath` hadoop jar $HBASE_HOME/hbase-mapreduce-VERSION.jar rowcounter usertable
753 See <<hbase.mapreduce.classpath,HBase, MapReduce, and the CLASSPATH>> for more information on HBase MapReduce jobs and classpaths.
755 [[trouble.hbasezerocopybytestring]]
756 === Launching a job, you get java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString or class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString
758 See link:https://issues.apache.org/jira/browse/HBASE-10304[HBASE-10304 Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString] and link:https://issues.apache.org/jira/browse/HBASE-11118[HBASE-11118 non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"].
The issue can also show up when trying to run Spark jobs.
760 See link:https://issues.apache.org/jira/browse/HBASE-10877[HBASE-10877 HBase non-retriable exception list should be expanded].
[[trouble.namenode]]
== NameNode

For more information on the NameNode, see <<arch.hdfs>>.
767 [[trouble.namenode.disk]]
768 === HDFS Utilization of Tables and Regions
To determine how much space HBase is using on HDFS, use the `hadoop` shell commands from the NameNode.
776 hadoop fs -dus /hbase/
778 ...returns the summarized disk utilization for all HBase objects.
783 hadoop fs -dus /hbase/myTable
785 ...returns the summarized disk utilization for the HBase table 'myTable'.
790 hadoop fs -du /hbase/myTable
792 ...returns a list of the regions under the HBase table 'myTable' and their disk utilization.
794 For more information on HDFS shell commands, see the link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/FileSystemShell.html[HDFS FileSystem Shell documentation].
796 [[trouble.namenode.hbase.objects]]
797 === Browsing HDFS for HBase Objects
799 Sometimes it will be necessary to explore the HBase objects that exist on HDFS.
800 These objects could include the WALs (Write Ahead Logs), tables, regions, StoreFiles, etc.
801 The easiest way to do this is with the NameNode web application that runs on port 50070.
The NameNode web application will provide links to all the DataNodes in the cluster so that they can be browsed seamlessly.
804 The HDFS directory structure of HBase tables in the cluster is...
/hbase
    /<Table>                (Tables in the cluster)
        /<Region>           (Regions for the table)
            /<ColumnFamily> (ColumnFamilies for the Region for the table)
                /<StoreFile> (StoreFiles for the ColumnFamily for the Regions for the table)
The HDFS directory structure of HBase WAL is...

/hbase
    /.logs
        /<RegionServer> (RegionServers)
            /<WAL>      (WAL files for the RegionServer)
825 See the link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html[HDFS User Guide] for other non-shell diagnostic utilities like `fsck`.
827 [[trouble.namenode.0size.hlogs]]
828 ==== Zero size WALs with data in them
830 Problem: when getting a listing of all the files in a RegionServer's _.logs_ directory, one file has a size of 0 but it contains data.
832 Answer: It's an HDFS quirk.
A file that's currently being written to will appear to have a size of 0, but once it's closed it will show its true size.
[[trouble.namenode.uncompaction]]
==== Use Cases

Two common use cases for querying HDFS for HBase objects concern researching the degree of compaction of a table.
If there are a large number of StoreFiles for each ColumnFamily, it could indicate the need for a major compaction.
Additionally, if after a major compaction the resulting StoreFile is "small", it could indicate the need for a reduction of ColumnFamilies for the table.
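A rough sketch of such a query, using the directory layout shown above to count the StoreFiles under a hypothetical table 'myTable':

[source,bash]
----
# file entries (StoreFiles) begin with '-'; directories begin with 'd'
hadoop fs -lsr /hbase/myTable | grep '^-' | wc -l
----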
842 === Unexpected Filesystem Growth
844 If you see an unexpected spike in filesystem usage by HBase, two possible culprits
845 are snapshots and WALs.
.Snapshots
When you create a snapshot, HBase retains everything it needs to recreate the table's
state at the time of the snapshot. This includes deleted cells or expired versions.
850 For this reason, your snapshot usage pattern should be well-planned, and you should
851 prune snapshots that you no longer need. Snapshots are stored in `/hbase/.snapshots`,
852 and archives needed to restore snapshots are stored in
853 `/hbase/.archive/<_tablename_>/<_region_>/<_column_family_>/`.
855 *Do not* manage snapshots or archives manually via HDFS. HBase provides APIs and
856 HBase Shell commands for managing them. For more information, see <<ops.snapshots>>.
.WAL
Write-ahead logs (WALs) are stored in subdirectories of the HBase root directory,
860 typically `/hbase/`, depending on their status. Already-processed WALs are stored
861 in `/hbase/oldWALs/` and corrupt WALs are stored in `/hbase/.corrupt/` for examination.
862 If the size of one of these subdirectories is growing, examine the HBase
863 server logs to find the root cause for why WALs are not being processed correctly.
865 If you use replication and `/hbase/oldWALs/` is using more space than you expect,
866 remember that WALs are saved when replication is disabled, as long as there are peers.
868 *Do not* manage WALs manually via HDFS.
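When diagnosing unexpected _oldWALs_ growth, a quick way to check for lingering replication peers is from the HBase shell (output will vary by cluster):

[source]
----
hbase> list_peers
----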
[[trouble.network]]
== Network

[[trouble.network.spikes]]
=== Network Spikes
876 If you are seeing periodic network spikes you might want to check the `compactionQueues` to see if major compactions are happening.
878 See <<managed.compactions>> for more information on managing compactions.
[[trouble.network.loopback]]
=== Loopback IP
883 HBase expects the loopback IP Address to be 127.0.0.1.
885 [[trouble.network.ints]]
886 === Network Interfaces
888 Are all the network interfaces functioning correctly? Are you sure? See the Troubleshooting Case Study in <<trouble.casestudy>>.
[[trouble.rs]]
== RegionServer

For more information on the RegionServers, see <<regionserver.arch>>.
[[trouble.rs.startup]]
=== Startup Errors
898 [[trouble.rs.startup.master_no_region]]
899 ==== Master Starts, But RegionServers Do Not
901 The Master believes the RegionServers have the IP of 127.0.0.1 - which is localhost and resolves to the master's own localhost.
903 The RegionServers are erroneously informing the Master that their IP addresses are 127.0.0.1.
905 Modify _/etc/hosts_ on the region servers, from...
909 # Do not remove the following line, or various programs
910 # that require network functionality will fail.
911 127.0.0.1 fully.qualified.regionservername regionservername localhost.localdomain localhost
912 ::1 localhost6.localdomain6 localhost6
915 \... to (removing the master node's name from localhost)...
919 # Do not remove the following line, or various programs
920 # that require network functionality will fail.
921 127.0.0.1 localhost.localdomain localhost
922 ::1 localhost6.localdomain6 localhost6
925 [[trouble.rs.startup.compression]]
926 ==== Compression Link Errors
Since compression algorithms such as LZO need to be installed and configured on each cluster node, this is a frequent source of startup error.
929 If you see messages like this...
934 11/02/20 01:32:15 ERROR lzo.GPLNativeCodeLoader: Could not load native gpl library
935 java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
936 at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
937 at java.lang.Runtime.loadLibrary0(Runtime.java:823)
938 at java.lang.System.loadLibrary(System.java:1028)
941 \... then there is a path issue with the compression libraries.
942 See the Configuration section on link:[LZO compression configuration].
[[trouble.rs.runtime]]
=== Runtime Errors
947 [[trouble.rs.runtime.hang]]
948 ==== RegionServer Hanging
Are you running an old JVM (< 1.6.0_u21?)? When you look at a thread dump, does it look like threads are BLOCKED but no one holds the lock they are all blocked on? See link:https://issues.apache.org/jira/browse/HBASE-3622[HBASE 3622 Deadlock in HBaseServer (JVM bug?)].
Adding `-XX:+UseMembar` to the HBase `HBASE_OPTS` in _conf/hbase-env.sh_ may fix it.
954 [[trouble.rs.runtime.filehandles]]
955 ==== java.io.IOException...(Too many open files)
957 If you see log messages like this...
962 2010-09-13 01:24:17,336 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
963 Disk-related IOException in BlockReceiver constructor. Cause is java.io.IOException: Too many open files
964 at java.io.UnixFileSystem.createFileExclusively(Native Method)
965 at java.io.File.createNewFile(File.java:883)
968 \... see the Getting Started section on link:[ulimit and nproc configuration].
970 [[trouble.rs.runtime.xceivers]]
971 ==== xceiverCount 258 exceeds the limit of concurrent xcievers 256
973 This typically shows up in the DataNode logs.
975 See the Getting Started section on link:[xceivers configuration].
977 [[trouble.rs.runtime.oom_nt]]
==== System instability, and the presence of "java.lang.OutOfMemoryError: unable to create new native thread" exceptions in HDFS DataNode logs or those of any system daemon
980 See the Getting Started section on ulimit and nproc configuration.
981 The default on recent Linux distributions is 1024 - which is far too low for HBase.
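To sanity-check the limits in effect for the user that runs the HBase and HDFS daemons:

[source,bash]
----
ulimit -n   # max open files; 1024 is far too low for HBase
ulimit -u   # max user processes (nproc)
----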
983 [[trouble.rs.runtime.gc]]
984 ==== DFS instability and/or RegionServer lease timeouts
986 If you see warning messages like this...
991 2009-02-24 10:01:33,516 WARN org.apache.hadoop.hbase.util.Sleeper: We slept xxx ms, ten times longer than scheduled: 10000
992 2009-02-24 10:01:33,516 WARN org.apache.hadoop.hbase.util.Sleeper: We slept xxx ms, ten times longer than scheduled: 15000
993 2009-02-24 10:01:36,472 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: unable to report to master for xxx milliseconds - retrying
\... or see full GC compactions, then you may be experiencing full GCs.
998 [[trouble.rs.runtime.nolivenodes]]
999 ==== "No live nodes contain current block" and/or YouAreDeadException
1001 These errors can happen either when running out of OS file handles or in periods of severe network problems where the nodes are unreachable.
1003 See the Getting Started section on ulimit and nproc configuration and check your network.
1005 [[trouble.rs.runtime.zkexpired]]
1006 ==== ZooKeeper SessionExpired events
1008 Master or RegionServers shutting down with messages like those in the logs:
1013 WARN org.apache.zookeeper.ClientCnxn: Exception
1014 closing session 0x278bd16a96000f to sun.nio.ch.SelectionKeyImpl@355811ec
1015 java.io.IOException: TIMED OUT
1016 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:906)
1017 WARN org.apache.hadoop.hbase.util.Sleeper: We slept 79410ms, ten times longer than scheduled: 5000
1018 INFO org.apache.zookeeper.ClientCnxn: Attempting connection to server hostname/IP:PORT
1019 INFO org.apache.zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/IP:PORT remote=hostname/IP:PORT]
1020 INFO org.apache.zookeeper.ClientCnxn: Server connection successful
1021 WARN org.apache.zookeeper.ClientCnxn: Exception closing session 0x278bd16a96000d to sun.nio.ch.SelectionKeyImpl@3544d65e
1022 java.io.IOException: Session Expired
1023 at org.apache.zookeeper.ClientCnxn$SendThread.readConnectResult(ClientCnxn.java:589)
1024 at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:709)
1025 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:945)
1026 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expired
The JVM is doing a long-running garbage collection which pauses every thread (aka "stop the world"). Since the RegionServer's local ZooKeeper client cannot send heartbeats, the session times out.
By design, we shut down any node that isn't able to contact the ZooKeeper ensemble after getting a timeout, so that it stops serving data that may already be assigned elsewhere.
* Make sure you give plenty of RAM (in _hbase-env.sh_); the default of 1GB won't be able to sustain long-running imports.
* Make sure you don't swap; the JVM never behaves well under swapping.
1034 * Make sure you are not CPU starving the RegionServer thread.
1035 For example, if you are running a MapReduce job using 6 CPU-intensive tasks on a machine with 4 cores, you are probably starving the RegionServer enough to create longer garbage collection pauses.
1036 * Increase the ZooKeeper session timeout
1038 If you wish to increase the session timeout, add the following to your _hbase-site.xml_ to increase the timeout from the default of 60 seconds to 120 seconds.
<property>
  <name>zookeeper.session.timeout</name>
  <value>120000</value>
</property>
<property>
  <name>hbase.zookeeper.property.tickTime</name>
  <value>6000</value>
</property>
1052 Be aware that setting a higher timeout means that the regions served by a failed RegionServer will take at least that amount of time to be transferred to another RegionServer.
For a production system serving live requests, we would instead recommend setting it lower than 1 minute and over-provisioning your cluster in order to lower the memory load on each machine (hence having less garbage to collect per machine).
1055 If this is happening during an upload which only happens once (like initially loading all your data into HBase), consider bulk loading.
1057 See <<trouble.zookeeper.general>> for other general information about ZooKeeper troubleshooting.
1059 [[trouble.rs.runtime.notservingregion]]
1060 ==== NotServingRegionException
1062 This exception is "normal" when found in the RegionServer logs at DEBUG level.
1063 This exception is returned back to the client and then the client goes back to `hbase:meta` to find the new location of the moved region.
However, if the NotServingRegionException is logged at ERROR, then the client ran out of retries and something is probably wrong.
1067 [[brand.new.compressor]]
==== Logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor' messages
1070 We are not using the native versions of compression libraries.
1071 See link:https://issues.apache.org/jira/browse/HBASE-1900[HBASE-1900 Put back native support when hadoop 0.21 is released].
Copy the native libs from Hadoop under the HBase lib dir or symlink them into place and the message should go away.
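A sketch of the symlink approach (the paths assume a layout like the one in the `ps` output shown earlier; adjust for your install):

[source,bash]
----
ln -s $HADOOP_HOME/lib/native/Linux-amd64-64 $HBASE_HOME/lib/native/Linux-amd64-64
----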
1074 [[trouble.rs.runtime.client_went_away]]
1075 ==== Server handler X on 60020 caught: java.nio.channels.ClosedChannelException

If you see this type of message it means that the RegionServer was trying to read from or send data to a client, but that client has already gone away.
Typical causes are that the client was killed (you see a storm of messages like this when a MapReduce job is killed or fails) or that the client received a SocketTimeoutException.
These messages are harmless, but you should consider digging in a bit more if you aren't doing something to trigger them.

=== Snapshot Errors Due to Reverse DNS

Several operations within HBase, including snapshots, rely on properly configured reverse DNS.
Some environments, such as Amazon EC2, have trouble with reverse DNS.
If you see errors like the following on your RegionServers, check your reverse DNS configuration:

[source]
----
2013-05-01 00:04:56,356 DEBUG org.apache.hadoop.hbase.procedure.Subprocedure: Subprocedure 'backup1'
coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator.
----

In general, the hostname reported by the RegionServer needs to be the same as the hostname the Master is trying to reach.
You can see a hostname mismatch by looking for the following type of message in the RegionServer's logs at start-up.

[source]
----
2013-05-01 00:03:00,614 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us hostname
to use. Was=myhost-1234, Now=ip-10-55-88-99.ec2.internal
----
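
To spot-check DNS from a RegionServer host, standard tools work; the name and address below are placeholders modeled on the example log above:

[source,bash]
----
# Forward lookup: what address does the hostname resolve to?
host ip-10-55-88-99.ec2.internal

# Reverse lookup: what hostname does that address map back to?
dig +short -x 10.55.88.99
----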

[[trouble.rs.shutdown]]
=== Shutdown Errors

[[trouble.master]]
== Master

For more information on the Master, see <<architecture.master,master>>.

[[trouble.master.startup]]
=== Startup Errors

[[trouble.master.startup.migration]]
==== Master says that you need to run the HBase migrations script

Upon running that, the HBase migrations script says no files in root directory.

HBase expects the root directory to either not exist, or to have already been initialized by HBase running a previous time.
If you create a new directory for HBase using Hadoop DFS, this error will occur.
Make sure the HBase root directory does not currently exist or has been initialized by a previous run of HBase.
A surefire solution is to just use Hadoop dfs to delete the HBase root and let HBase create and initialize the directory itself.
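
For example, if `hbase.rootdir` points to _/hbase_ (an assumption; check your own configuration), the reset could look like this. Be aware that this permanently deletes all HBase data:

[source,bash]
----
# DESTRUCTIVE: removes everything under the HBase root directory
hadoop fs -rm -r /hbase
# On the next start, HBase recreates and initializes the directory itself
----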

[[trouble.master.startup.zk.buffer]]
==== Packet len6080218 is out of range!

If you have many regions on your cluster and you see an error like the one reported above in this section's title in your logs, see link:https://issues.apache.org/jira/browse/HBASE-4246[HBASE-4246 Cluster with too many regions cannot withstand some master failover scenarios].

[[trouble.master.shutdown]]
=== Shutdown Errors

[[trouble.zookeeper]]
== ZooKeeper

[[trouble.zookeeper.startup]]
=== Startup Errors

[[trouble.zookeeper.startup.address]]
==== Could not find my address: xyz in list of ZooKeeper quorum servers

A ZooKeeper server wasn't able to start and throws that error; xyz is the name of your server.

This is a name lookup problem.
HBase tries to start a ZooKeeper server on some machine but that machine isn't able to find itself in the `hbase.zookeeper.quorum` configuration.

Use the hostname presented in the error message instead of the value you used.
If you have a DNS server, you can set `hbase.zookeeper.dns.interface` and `hbase.zookeeper.dns.nameserver` in _hbase-site.xml_ to make sure it resolves to the correct FQDN.
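
For example, a minimal _hbase-site.xml_ sketch; the interface name and nameserver address are placeholders for your environment:

[source,xml]
----
<property>
  <name>hbase.zookeeper.dns.interface</name>
  <value>eth0</value>
</property>
<property>
  <name>hbase.zookeeper.dns.nameserver</name>
  <value>10.0.0.2</value>
</property>
----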

[[trouble.zookeeper.general]]
=== ZooKeeper, The Cluster Canary

ZooKeeper is the cluster's "canary in the mineshaft". It will be the first to notice issues, if any, so making sure it is happy is the shortcut to a humming cluster.

See the link:https://wiki.apache.org/hadoop/ZooKeeper/Troubleshooting[ZooKeeper Operating Environment Troubleshooting] page.
It has suggestions and tools for checking disk and networking performance; i.e., the operating environment in which your ZooKeeper and HBase are running.

Additionally, the utility <<trouble.tools.builtin.zkcli>> may help investigate ZooKeeper issues.
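
For example, assuming the default znode layout, you can poke at HBase's state in ZooKeeper like this:

[source,bash]
----
# Launch the ZooKeeper shell against the quorum HBase is configured to use
hbase zkcli

# Then, inside the shell, for example:
#   ls /hbase            # list HBase's znodes
#   stat /hbase/master   # check that a master has registered
----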

[[trouble.ec2]]
== Amazon EC2

[[trouble.ec2.zookeeper]]
=== ZooKeeper does not seem to work on Amazon EC2

HBase does not start when deployed as Amazon EC2 instances.
Exceptions like the below appear in the Master and/or RegionServer logs:

[source]
----
2009-10-19 11:52:27,030 INFO org.apache.zookeeper.ClientCnxn: Attempting
connection to server ec2-174-129-15-236.compute-1.amazonaws.com/10.244.9.171:2181
2009-10-19 11:52:27,032 WARN org.apache.zookeeper.ClientCnxn: Exception
closing session 0x0 to sun.nio.ch.SelectionKeyImpl@656dc861
java.net.ConnectException: Connection refused
----

Security group policy is blocking the ZooKeeper port on a public address.
Use the internal EC2 host names when configuring the ZooKeeper quorum peer list.
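
A hedged _hbase-site.xml_ sketch; the internal hostnames are placeholders modeled on the log above:

[source,xml]
----
<property>
  <name>hbase.zookeeper.quorum</name>
  <!-- internal names, not the public ec2-*.compute-1.amazonaws.com names -->
  <value>ip-10-244-9-171.ec2.internal,ip-10-244-9-172.ec2.internal,ip-10-244-9-173.ec2.internal</value>
</property>
----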

[[trouble.ec2.instability]]
=== Instability on Amazon EC2

Questions on HBase and Amazon EC2 come up frequently on the HBase dist-list.
Search for old threads using link:http://search-hadoop.com/[Search Hadoop].

[[trouble.ec2.connection]]
=== Remote Java Connection into EC2 Cluster Not Working

See Andrew's answer here, up on the user list: link:http://search-hadoop.com/m/sPdqNFAwyg2[Remote Java client connection into EC2 instance].

[[trouble.versions]]
== HBase and Hadoop version issues

[[trouble.wrong.version]]
=== ...cannot communicate with client version...

If you see something like the following in your logs [computeroutput]+... 2012-09-24 10:20:52,168 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate with client version 4 ...+ ...are you trying to talk to Hadoop 2.0.x from an HBase that has a Hadoop 1.0.x client? Use the HBase built against Hadoop 2.0, or rebuild your HBase by passing the +-Dhadoop.profile=2.0+ attribute to Maven (see <<maven.build.hadoop>> for more).
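
For example, the rebuild might look like this; the profile flag comes from the text above, and `-DskipTests` is just a common convenience:

[source,bash]
----
mvn clean install -DskipTests -Dhadoop.profile=2.0
----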

[[trouble.hdfs]]
== HBase and HDFS

General configuration guidance for Apache HDFS is out of the scope of this guide.
Refer to the documentation available at https://hadoop.apache.org/ for extensive information about configuring HDFS.
This section deals with HDFS in terms of HBase.

In most cases, HBase stores its data in Apache HDFS.
This includes the HFiles containing the data, as well as the write-ahead logs (WALs) which store data before it is written to the HFiles and protect against RegionServer crashes.
HDFS provides reliability and protection to data in HBase because it is distributed.
To operate with the most efficiency, HBase needs data to be available locally.
Therefore, it is a good practice to run an HDFS DataNode on each RegionServer.

.Important Information and Guidelines for HBase and HDFS
HBase is a client of HDFS.::
HBase is an HDFS client, using the HDFS `DFSClient` class, and references to this class appear in HBase logs with other HDFS client log messages.

Configuration is necessary in multiple places.::
Some HDFS configurations relating to HBase need to be done at the HDFS (server) side.
Others must be done within HBase (at the client side), and still other settings need to be set at both the server and client sides.

Write errors which affect HBase may be logged in the HDFS logs rather than HBase logs.::
When writing, HDFS pipelines communications from one DataNode to another.
HBase communicates to both the HDFS NameNode and DataNode, using the HDFS client classes.
Communication problems between DataNodes are logged in the HDFS logs, not the HBase logs.

HBase communicates with HDFS using two different ports.::
HBase communicates with DataNodes using the `ipc.Client` interface and the `DataNode` class.
References to these will appear in HBase logs.
Each of these communication channels uses a different port (50010 and 50020 by default). The ports are configured in the HDFS configuration, via the `dfs.datanode.address` and `dfs.datanode.ipc.address` parameters.

Errors may be logged in HBase, HDFS, or both.::
When troubleshooting HDFS issues in HBase, check logs in both places for errors.

HDFS takes a while to mark a node as dead. You can configure HDFS to avoid using stale DataNodes.::
By default, HDFS does not mark a node as dead until it is unreachable for 630 seconds.
In Hadoop 1.1 and Hadoop 2.x, this can be alleviated by enabling checks for stale DataNodes, though this check is disabled by default.
You can enable the check for reads and writes separately, via the `dfs.namenode.avoid.read.stale.datanode` and `dfs.namenode.avoid.write.stale.datanode` settings (see the combined configuration sketch after this list).
A stale DataNode is one that has not been reachable for `dfs.namenode.stale.datanode.interval` (default is 30 seconds). Stale DataNodes are avoided, and marked as the last possible target for a read or write operation.
For configuration details, see the HDFS documentation.

Settings for HDFS retries and timeouts are important to HBase.::
You can configure settings for various retries and timeouts.
Always refer to the HDFS documentation for current recommendations and defaults.
Some of the settings important to HBase are listed here.
Defaults are current as of Hadoop 2.3.
Check the Hadoop documentation for the most current values and recommendations.

The HBase Balancer and HDFS Balancer are incompatible::
The HDFS balancer attempts to spread HDFS blocks evenly among DataNodes. HBase relies
on compactions to restore locality after a region split or failure. These two types
of balancing do not work well together.
+
In the past, the generally accepted advice was to turn off the HDFS load balancer and rely
on the HBase balancer, since the HDFS balancer would degrade locality. This advice
is still valid if your HDFS version is lower than 2.7.1.
+
link:https://issues.apache.org/jira/browse/HDFS-6133[HDFS-6133] provides the ability
to exclude favored-nodes (pinned) blocks from the HDFS load balancer, by setting the
`dfs.datanode.block-pinning.enabled` property to `true` in the HDFS service configuration.
+
HBase can be enabled to use the HDFS favored-nodes feature by switching the HBase balancer
class (conf: `hbase.master.loadbalancer.class`) to `org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer`
which is documented link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/favored/FavoredNodeLoadBalancer.html[here].
+
NOTE: HDFS-6133 is available in HDFS 2.7.0 and higher, but HBase does not support
running on HDFS 2.7.0, so you must be using HDFS 2.7.1 or higher to use this feature.
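
As a hedged sketch pulling together the settings named above (the property names appear in this section; the values are example choices, not recommendations):

[source,xml]
----
<!-- hdfs-site.xml: prefer non-stale DataNodes and keep pinned blocks
     out of the HDFS balancer's reach -->
<property>
  <name>dfs.namenode.avoid.read.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.avoid.write.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.block-pinning.enabled</name>
  <value>true</value>
</property>

<!-- hbase-site.xml: switch to the favored-node balancer -->
<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer</value>
</property>
----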

.Connection Timeouts
Connection timeouts occur between the client (HBase) and the HDFS DataNode.
They may occur when establishing a connection, attempting to read, or attempting to write.
The two settings below are used in combination, and affect connections between the `DFSClient` and the DataNode, the `ipc.Client` and the DataNode, and communication between two DataNodes.

`dfs.client.socket-timeout` (default: 60000)::
The amount of time before a client connection times out when establishing a connection or reading.
The value is expressed in milliseconds, so the default is 60 seconds.

`dfs.datanode.socket.write.timeout` (default: 480000)::
The amount of time before a write operation times out.
The default is 8 minutes, expressed as milliseconds.
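
If you need to tune these, a minimal _hdfs-site.xml_ sketch looks like the following; the values shown simply restate the defaults given above:

[source,xml]
----
<property>
  <name>dfs.client.socket-timeout</name>
  <value>60000</value>   <!-- 60 seconds -->
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value>  <!-- 8 minutes -->
</property>
----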

The following types of errors are often seen in the logs.

`INFO HDFS.DFSClient: Failed to connect to /xxx:50010, add to deadNodes and continue java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/region-server-1:50010]`::
All DataNodes for a block are dead, and recovery is not possible.
Here is the sequence of events that leads to this error:

`INFO org.apache.hadoop.HDFS.DFSClient: Exception in createBlockOutputStream java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/xxx:50010]`::
This type of error indicates a write issue.
In this case, the master wants to split the log.
It does not have a local DataNode, so it tries to connect to a remote DataNode, but the DataNode is dead.

[[trouble.tests]]
== Running unit or integration tests

[[trouble.hdfs_2556]]
=== Runtime exceptions from MiniDFSCluster when running tests

If you see something like the following

[source]
----
java.lang.NullPointerException: null
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster
..
----

or

[source]
----
java.io.IOException: Shutting down
at org.apache.hadoop.hbase.MiniHBaseCluster.init
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster
..
----

\... then try issuing the command +umask 022+ before launching tests.
This is a workaround for link:https://issues.apache.org/jira/browse/HDFS-2556[HDFS-2556].
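
For example, assuming a standard Maven test run:

[source,bash]
----
umask 022   # permissive file-creation mask for this shell
mvn test    # then launch the tests as usual
----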

[[trouble.casestudy]]
== Case Studies

For Performance and Troubleshooting Case Studies, see <<casestudies>>.

[[trouble.crypto]]
== Cryptographic Features

[[trouble.crypto.hbase_10132]]
=== sun.security.pkcs11.wrapper.PKCS11Exception: CKR_ARGUMENTS_BAD

This problem manifests as exceptions ultimately caused by:

[source]
----
Caused by: sun.security.pkcs11.wrapper.PKCS11Exception: CKR_ARGUMENTS_BAD
at sun.security.pkcs11.wrapper.PKCS11.C_DecryptUpdate(Native Method)
at sun.security.pkcs11.P11Cipher.implDoFinal(P11Cipher.java:795)
----

This problem appears to affect some versions of OpenJDK 7 shipped by some Linux vendors.
NSS is configured as the default provider.
If the host has an x86_64 architecture, and the vendor packages contain the defect, the NSS provider will not function correctly.

To work around this problem, find the JRE home directory and edit the file _lib/security/java.security_.
Edit the file to comment out the line:

[source]
----
security.provider.1=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg
----

Then renumber the remaining providers accordingly.
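
As an illustration, the edit looks roughly like the following; the second provider line is typical for OpenJDK 7 but may differ in your build:

----
# Before:
#   security.provider.1=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg
#   security.provider.2=sun.security.provider.Sun

# After: the PKCS11/NSS line is commented out and the rest renumbered
#security.provider.1=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg
security.provider.1=sun.security.provider.Sun
----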

== Operating System Specific Issues

=== Page Allocation Failure

NOTE: This issue is known to affect CentOS 6.2 and possibly CentOS 6.5.
It may also affect some versions of Red Hat Enterprise Linux, according to https://bugzilla.redhat.com/show_bug.cgi?id=770545.

Some users have reported seeing the following error:

----
kernel: java: page allocation failure. order:4, mode:0x20
----

Raising the value of `min_free_kbytes` was reported to fix this problem.
This parameter, expressed in kilobytes, is typically sized relative to the total RAM on your system, and is described in more detail at http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s3-proc-sys-vm.html.

To find the current value on your system, run the following command:

----
[user@host]# cat /proc/sys/vm/min_free_kbytes
----

Next, raise the value.
Try doubling, then quadrupling the value.
Note that setting the value too low or too high could have detrimental effects on your system.
Consult your operating system vendor for specific recommendations.

Use the following command to modify the value of `min_free_kbytes`, substituting _<value>_ with your intended value:

----
[user@host]# echo <value> > /proc/sys/vm/min_free_kbytes
----
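
To make the new value survive a reboot, the usual approach is _/etc/sysctl.conf_; the number below is only an example:

----
[user@host]# echo "vm.min_free_kbytes = 131072" >> /etc/sysctl.conf
[user@host]# sysctl -p
----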

== JDK Issues

=== NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet

If you see this in your logs:

----
Caused by: java.lang.NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
at org.apache.hadoop.hbase.master.ServerManager.findServerWithSameHostnamePortWithLock(ServerManager.java:393)
at org.apache.hadoop.hbase.master.ServerManager.checkAndRecordNewServer(ServerManager.java:307)
at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:244)
at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:304)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:7910)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2020)
----

then check if you compiled with JDK 8 and tried to run it on JDK 7.
If so, this won't work.
Run on JDK 8 or recompile with JDK 7.
See link:https://issues.apache.org/jira/browse/HBASE-10607[HBASE-10607 JDK8 NoSuchMethodError involving ConcurrentHashMap.keySet if running on JRE 7].
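
The underlying cause is that JDK 8 changed the declared return type of `ConcurrentHashMap.keySet()` to the new `KeySetView` class, so bytecode compiled against JDK 8 records a method descriptor that a JRE 7 does not have. A minimal sketch of the failure mode, and a source-level workaround if you must compile on JDK 8 but run on JDK 7:

[source,java]
----
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KeySetExample {
  public static void main(String[] args) {
    ConcurrentHashMap<String, Integer> chm = new ConcurrentHashMap<>();

    // Compiled on JDK 8, this call site records the return type
    // ConcurrentHashMap$KeySetView, which does not exist on JRE 7,
    // producing the NoSuchMethodError above at runtime.
    Set<String> keys = chm.keySet();

    // Calling through the Map interface records Map.keySet()Ljava/util/Set;
    // instead, which resolves on both JDK 7 and JDK 8.
    Map<String, Integer> asMap = chm;
    Set<String> portableKeys = asMap.keySet();

    System.out.println(keys.size() + " / " + portableKeys.size());
  }
}
----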