HBase installation and configuration, using an independent ZooKeeper

Keywords: Big Data HBase Zookeeper xml Hadoop

1. HBase installation and configuration, using an independent ZooKeeper

2. Modify environment variables:
The first machine is planned to be the master and the second machine a RegionServer; a RegionServer is also started on the first machine, making a cluster of:
1 master
2 RegionServers

Execute vi /etc/profile on each machine and add the following:
export HBASE_HOME=/usr/local/src/hbase
export PATH=$PATH:$HBASE_HOME/bin
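
To apply the change in the current shell and confirm that HBase is on the PATH (a quick check, assuming the HBase tarball is already unpacked at /usr/local/src/hbase):

source /etc/profile
hbase version    # should print the HBase version if $HBASE_HOME/bin is on the PATH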

3. Modify the configuration file:

vi /hbase/conf/hbase-site.xml

<configuration>
    <property>
        <!-- Host and port of the HBase master -->
        <name>hbase.master</name>
        <value>hostip2:60000</value>
    </property>
    <property>
        <!-- Maximum allowed clock skew between nodes, in milliseconds -->
        <name>hbase.master.maxclockskew</name>
        <value>180000</value>
    </property>
    <property>
        <!-- Shared directory where HBase persists its data;
             hdfs://hostip2:9000 is the HDFS cluster built earlier -->
        <name>hbase.rootdir</name>
        <value>hdfs://hostip2:9000/hbase</value>
    </property>
    <property>
        <!-- Whether to run distributed; false means stand-alone mode -->
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <!-- ZooKeeper quorum addresses -->
        <name>hbase.zookeeper.quorum</name>
        <value>hostip3,hostip5</value>
    </property>
    <property>
        <!-- Directory for ZooKeeper snapshot and configuration data;
             the default is /tmp, which is lost after a restart -->
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/hadoop/hbase/tmp/zookeeper</value>
    </property>
    <property>
        <!-- 2181 is the clientPort of ZooKeeper -->
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
</configuration>
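
Before starting HBase it is worth confirming that the HDFS named in hbase.rootdir is up and reachable (a quick sanity check; HBase creates the /hbase directory there automatically on first start):

hdfs dfs -ls hdfs://hostip2:9000/    # should list the HDFS root without errors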

vi /hbase/conf/hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_45   # JDK installation directory
export HBASE_CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar   # extra entries appended to the HBase classpath
export HBASE_MANAGES_ZK=false   # false when using an independently installed ZooKeeper; true to use the ZooKeeper bundled with HBase

vi /hbase/conf/regionservers

hostip3
hostip5
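
Since HBASE_MANAGES_ZK=false, the independent ZooKeeper ensemble on hostip3 and hostip5 must already be running before HBase starts. A quick check, assuming zkServer.sh is on the PATH of those machines:

zkServer.sh status    # run on hostip3 and hostip5; should report Mode: leader or follower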

4. Copy Hadoop's hdfs-site.xml and core-site.xml into hbase/conf:

hdfs-site.xml:

<configuration>
    <!-- Specify the number of HDFS replicas -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hostip2:9000</value>
    </property>
    <!-- Specify the directory where Hadoop stores files generated at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/src/hadoop-2.6.4/hdpdata</value>
    </property>
</configuration>
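
A minimal way to copy the two files, assuming Hadoop is installed under /usr/local/src/hadoop-2.6.4 (as in core-site.xml above) and HBase under /usr/local/src/hbase:

cp /usr/local/src/hadoop-2.6.4/etc/hadoop/hdfs-site.xml /usr/local/src/hbase/conf/
cp /usr/local/src/hadoop-2.6.4/etc/hadoop/core-site.xml /usr/local/src/hbase/conf/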

5. Copy the hbase directory to the other machines, then start the master and the RegionServers respectively
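
A minimal distribution sketch, assuming passwordless ssh and the same install path on every node (host names as in the configuration above):

scp -r /usr/local/src/hbase hostip3:/usr/local/src/
scp -r /usr/local/src/hbase hostip5:/usr/local/src/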

./hbase-daemon.sh start master          # run on the master machine

./hbase-daemon.sh start regionserver    # run on each RegionServer machine

6. View the web UI: http://hostip2:16010/

Check the processes with jps:

3201 Main
9092 HRegionServer
29125 HMaster
13240 Jps
16012 DataNode
12766 ZooKeeperMain
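
The cluster state can also be checked from the HBase shell; a quick sketch:

hbase shell
status    # prints the number of active and backup masters and of live/dead RegionServers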

7. View through ZooKeeper
Connect with the ZooKeeper client:
./zkCli.sh

[zk: localhost:2181(CONNECTED) 1] ls /
[zookeeper, hbase]

[zk: localhost:2181(CONNECTED) 2] ls /hbase
[replication, meta-region-server, rs, splitWAL, backup-masters, 
table-lock, flush-table-proc, region-in-transition, online-snapshot, 
master, running, recovering-regions, draining, namespace, hbaseid, table]

[zk: localhost:2181(CONNECTED) 5] ls /hbase/rs
[host1,16020,1543922854774, host2,16201,1543834531685]

8. Extension 1
If you want to run a second (backup) master, hbase-daemon.sh start master alone will not work, because hbase-site.xml already contains:

<property>
    <!-- Host and port of the HBase master -->
    <name>hbase.master</name>
    <value>IP1:60000</value>
</property>

You need to use local-master-backup.sh instead:

/hbase/bin> ./local-master-backup.sh start 2
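
The argument is a port offset for the backup master (a note based on the standard behaviour of local-master-backup.sh): each offset is added to the default master ports, so offset 2 gives RPC port 16000+2 and web UI port 16010+2 in recent HBase versions. Several backup masters can be started at once, for example:

./local-master-backup.sh start 2 3 5    # three local backup masters with port offsets 2, 3 and 5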

View in ZooKeeper:

[zk: localhost:2181(CONNECTED) 26] ls /hbase/backup-masters
[host1,1543924655750]
