ERROR [main] index.IndexTool: hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOu

Hi,

I am using Apache Phoenix (version 4.14) and I have a problem building an ASYNC index.

I want to create an index on the ALITEST_UPPERCASE view, so I ran this statement in the Phoenix shell:

CREATE LOCAL INDEX ASYNC_INDEX_ALITEST_UPPERCASE ON ALITEST_UPPERCASE ("personal_data"."name") ASYNC;
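
For what it's worth, this is roughly how I check the index state afterwards (just a sketch against SYSTEM.CATALOG; I am assuming the index lives in the default schema, and if I read the state codes right, 'b' means BUILDING and 'a' means ACTIVE):

-- sketch: show the INDEX_STATE that Phoenix keeps in SYSTEM.CATALOG for this index
SELECT TABLE_NAME, INDEX_STATE
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'ASYNC_INDEX_ALITEST_UPPERCASE'
AND INDEX_STATE IS NOT NULL;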

 

The index gets created and its state shows as BUILDING. To populate it, I ran this IndexTool command:

hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table ALITEST_UPPERCASE --index-table ASYNC_INDEX_ALITEST_UPPERCASE --output-path /home/ali/
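
If I am reading the Phoenix secondary-indexing docs correctly, the IndexTool invocation for an ASYNC index has this general shape (MY_SCHEMA, MY_TABLE, ASYNC_IDX and ASYNC_IDX_HFILES are just the placeholder names from that example, not my objects; as far as I understand, the output path is a directory where the generated HFiles are written before the bulk load):

hbase org.apache.phoenix.mapreduce.index.IndexTool --schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX --output-path ASYNC_IDX_HFILES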

 

But the job fails with the output below:

 

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/phoenix-4.14.2-HBase-1.4-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2019-08-27 13:45:21,706 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-08-27 13:45:22,245 INFO [main] log.QueryLoggerDisruptor: Starting QueryLoggerDisruptor for with ringbufferSize=8192, waitStrategy=BlockingWaitStrategy, exceptionHandler=org.apache.phoenix.log.QueryLoggerDefaultExceptionHandler@35ef1869...
2019-08-27 13:45:22,292 INFO [main] query.ConnectionQueryServicesImpl: An instance of ConnectionQueryServices was created.
2019-08-27 13:45:22,448 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5bf0d49 connecting to ZooKeeper ensemble=192.168.1.4:2181,192.168.1.5:2181,192.168.1.1:2181,192.168.1.6:2181,192.168.1.7:2181
2019-08-27 13:45:22,459 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2019-08-27 13:45:22,459 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=hmaster.datak.ir
2019-08-27 13:45:22,459 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_172
2019-08-27 13:45:22,459 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2019-08-27 13:45:22,459 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.8.0_172-amd64/jre
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: .6.8.jar:/home/hadoop/hbase/lib/jsch-0.1.54.jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/junit-4.12.jar:/home/hadoop/hbase/lib/leveldbjni-all-1.8.jar:/home/hadoop/hbase/lib/libthrift-0.9.3.jar:/home/hadoop/hbase/lib/log4j-1.2.17.jar:/home/hadoop/hbase/lib/metrics-core-2.2.0.jar:/home/hadoop/hbase/lib/metrics-core-3.1.2.jar:/home/hadoop/hbase/lib/netty-all-4.1.8.Final.jar:/home/hadoop/hbase/lib/paranamer-2.3.jar:/home/hadoop/hbase/lib/phoenix-4.14.2-HBase-1.4-client.jar:/home/hadoop/hbase/lib/phoenix-4.14.2-HBase-1.4-server.jar:/home/hadoop/hbase/lib/protobuf-java-2.5.0.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/slf4j-api-1.7.7.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.7.10.jar:/home/hadoop/hbase/lib/snappy-java-1.0.5.jar:/home/hadoop/hbase/lib/spymemcached-2.11.6.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/xz-1.0.jar:/home/hadoop/hbase/lib/zookeeper-3.4.10.jar:
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-957.12.1.el7.x86_64
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hadoop
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
2019-08-27 13:45:22,460 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
2019-08-27 13:45:22,462 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.1.4:2181,192.168.1.5:2181,192.168.1.1:2181,192.168.1.6:2181,192.168.1.7:2181 sessionTimeout=600000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@198b6731
2019-08-27 13:45:22,491 INFO [main-SendThread(192.168.1.5:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.1.5/192.168.1.5:2181. Will not attempt to authenticate using SASL (unknown error)
2019-08-27 13:45:22,503 INFO [main-SendThread(192.168.1.5:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.1.5/192.168.1.5:2181, initiating session
2019-08-27 13:45:22,520 INFO [main-SendThread(192.168.1.5:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.1.5/192.168.1.5:2181, sessionid = 0x200a3bd3afa0040, negotiated timeout = 600000
2019-08-27 13:45:23,875 INFO [main] query.ConnectionQueryServicesImpl: org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2499)
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:147)
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
java.sql.DriverManager.getConnection(DriverManager.java:664)
java.sql.DriverManager.getConnection(DriverManager.java:208)
org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:58)
org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:46)
org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:585)
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:846)

2019-08-27 13:45:27,573 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4c0884e8 connecting to ZooKeeper ensemble=192.168.1.4:2181,192.168.1.5:2181,192.168.1.1:2181,192.168.1.6:2181,192.168.1.7:2181
2019-08-27 13:45:27,573 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.1.4:2181,192.168.1.5:2181,192.168.1.1:2181,192.168.1.6:2181,192.168.1.7:2181 sessionTimeout=600000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@231baf51
2019-08-27 13:45:27,597 INFO [main-SendThread(192.168.1.1:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.1.1/192.168.1.1:2181. Will not attempt to authenticate using SASL (unknown error)
2019-08-27 13:45:27,599 INFO [main-SendThread(192.168.1.1:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.1.1/192.168.1.1:2181, initiating session
2019-08-27 13:45:27,603 INFO [main-SendThread(192.168.1.1:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.1.1/192.168.1.1:2181, sessionid = 0x300a3c7c1200041, negotiated timeout = 600000
2019-08-27 13:45:27,628 INFO [main] mapreduce.HFileOutputFormat2: bulkload locality sensitive enabled
2019-08-27 13:45:27,628 INFO [main] mapreduce.HFileOutputFormat2: Looking up current regions for table ALITEST_**bleep**_UPPERCASE
2019-08-27 13:45:27,648 INFO [main] mapreduce.HFileOutputFormat2: Configuring 1 reduce partitions to match current region count
2019-08-27 13:45:27,649 INFO [main] mapreduce.HFileOutputFormat2: Writing partition information to /user/hadoop/hbase-staging/partitions_d0b15883-996b-4140-b351-791e7195f23d
2019-08-27 13:45:27,674 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2019-08-27 13:45:27,674 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x300a3c7c1200041
2019-08-27 13:45:27,679 INFO [main] zookeeper.ZooKeeper: Session: 0x300a3c7c1200041 closed
2019-08-27 13:45:27,685 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x300a3c7c1200041
2019-08-27 13:45:27,693 ERROR [main] index.IndexTool: hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:673)
at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:517)
at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:476)
at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.configureIncrementalLoad(HFileOutputFormat.java:91)
at org.apache.phoenix.mapreduce.index.IndexTool$JobFactory.configureRunnableJobUsingBulkLoad(IndexTool.java:523)
at org.apache.phoenix.mapreduce.index.IndexTool$JobFactory.configureJobForAysncIndex(IndexTool.java:473)
at org.apache.phoenix.mapreduce.index.IndexTool$JobFactory.getJob(IndexTool.java:290)
at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:642)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:846)

 

 
1 REPLY


@toanns35 wrote:

Hi, I am using Apache Phoenix (version 4.14) and I have a problem building an ASYNC index. ...

I am getting a similar issue here too. Any help is appreciated.

Thanks in advance.

Regards,

Oliver