Hadoop & HBase Install

Installing Hadoop & HBase on macOS

Hadoop: 2.8.2
HBase: 1.2.6

Installing HBase on a Mac

Hadoop installation

  1. Install
# install Hadoop with brew
➜ brew install hadoop

# Hadoop settings in .zshrc
➜ export HADOOP_HOME=/usr/local/Cellar/hadoop/2.8.2/libexec

➜ echo $HADOOP_HOME
/usr/local/Cellar/hadoop/2.8.2/libexec
  2. Enter the Hadoop configuration directory
➜ cd ${HADOOP_HOME}/etc/hadoop
  • Edit the core-site.xml file

    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <!-- base for permanently stored tmp files -->
        <value>/Users/niufeiy/Hadoop/opt/data/tmp/hadoop-${user.name}</value>
        <description>A base for other temporary directories.</description>
      </property>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:8020</value>
      </property>
    </configuration>
  • Edit the hdfs-site.xml file

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>
  • Format HDFS

    hdfs namenode -format
  • Configure YARN

    ➜ cd ${HADOOP_HOME}/etc/hadoop
    ➜ cp mapred-site.xml.template mapred-site.xml
  • Edit mapred-site.xml

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>
  • Edit yarn-site.xml

    <configuration>
      <!-- Site specific YARN configuration properties -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
      </property>
    </configuration>

    YARN's web UI listens on port 8088: http://localhost:8088/

  • Check the Java processes

    ➜  conf jps
    42420 ResourceManager
    42100 NameNode
    42516 NodeManager
    42299 SecondaryNameNode
    42188 DataNode
    48669 Jps
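The `jps` check above can be scripted. A minimal sketch follows; the sample output is hard-coded from the listing above for illustration, so on a live node you would replace it with `jps_out="$(jps)"`:

```shell
#!/bin/sh
# Hard-coded sample jps output (illustration only); on a real node use:
#   jps_out="$(jps)"
jps_out="42420 ResourceManager
42100 NameNode
42516 NodeManager
42299 SecondaryNameNode
42188 DataNode"

# Collect any expected daemon that does not appear in the output.
missing=""
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  printf '%s\n' "$jps_out" | grep -q "$daemon" || missing="$missing $daemon"
done

if [ -z "$missing" ]; then
  echo "all expected Hadoop daemons are running"
else
  echo "missing daemons:$missing"
fi
```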

Commands

# start / stop everything (deprecated in Hadoop 2.x; start-dfs.sh and start-yarn.sh are preferred)
start-all.sh
stop-all.sh

# start individual HDFS daemons
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
hadoop-daemon.sh start secondarynamenode

# start an individual YARN daemon
yarn-daemon.sh start nodemanager

HBase installation

Pseudo-distributed mode

Configure hbase-env.sh

export JAVA_HOME="$(/usr/libexec/java_home)"
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.8.2
export HBASE_MANAGES_ZK=true
# true = use HBase's bundled ZooKeeper

Configure hbase-site.xml (/usr/local/Cellar/hbase/1.2.6/libexec/conf)

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

Note

hbase.rootdir must use the same address as Hadoop's fs.default.name.
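A quick way to catch a mismatch is to pull both values out of the two config files and compare the NameNode address. The sketch below works on throwaway sample files in a temp directory (the real files live in the etc/hadoop and HBase conf directories shown earlier), and the `get_value` helper is only an illustration, not a Hadoop tool:

```shell
#!/bin/sh
# Sample stand-ins for core-site.xml and hbase-site.xml (illustration only).
workdir="$(mktemp -d)"

cat > "$workdir/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
EOF

cat > "$workdir/hbase-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
</configuration>
EOF

# Crude extractor: grab the <value> on the line after a given <name>.
# Good enough for the flat layout Hadoop-style configs use.
get_value() {
  grep -A1 "<name>$2</name>" "$1" | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p'
}

fs_addr="$(get_value "$workdir/core-site.xml" fs.default.name)"
hbase_root="$(get_value "$workdir/hbase-site.xml" hbase.rootdir)"

case "$hbase_root" in
  "$fs_addr"/*) echo "consistent: $hbase_root" ;;
  *)            echo "MISMATCH: $hbase_root vs $fs_addr" ;;
esac
```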

Standalone mode

Configure hbase-env.sh

export JAVA_HOME="$(/usr/libexec/java_home)"
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.8.2
export HBASE_MANAGES_ZK=true
# true = use HBase's bundled ZooKeeper

Configure hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:/Users/pc-009/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/Users/pc-009/zk_data</value>
  </property>
</configuration>
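In standalone mode both paths point at the local filesystem, so they only need to exist and be writable before the first start. A small sketch; `HBASE_DATA_BASE` is an illustrative variable standing in for /Users/pc-009 above, not something HBase itself reads:

```shell
#!/bin/sh
# Pre-create the local rootdir and ZooKeeper dataDir.
# "base" stands in for /Users/pc-009 from the config above; falling back to
# a temp directory keeps the sketch runnable anywhere.
base="${HBASE_DATA_BASE:-$(mktemp -d)}"

for dir in "$base/hbase" "$base/zk_data"; do
  mkdir -p "$dir"
  if [ -w "$dir" ]; then
    echo "ok: $dir"
  else
    echo "NOT writable: $dir"
  fi
done
```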

Start

start-all.sh
start-hbase.sh
jps
hbase shell


➜ conf jps
42420 ResourceManager
42100 NameNode
42516 NodeManager
48055 HMaster
48167 HRegionServer
42299 SecondaryNameNode
48876 Jps
42188 DataNode
47998 HQuorumPeer

➜ conf hbase shell
2018-08-08 17:33:08,579 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/2.8.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):001:0> list
TABLE
0 row(s) in 0.1900 seconds

=> []
hbase(main):002:0>

Common errors

  1. org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Con

    Cause: HBase was not started; run start-hbase.sh first.

  2. ERROR:org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

    Cause: clocks are out of sync, or ZooKeeper is unhealthy.

ERROR

ERROR: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.

The cause is that the port is already occupied: shut down the ZooKeeper you installed yourself and use the one bundled with HBase.
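To see in advance whether another ZooKeeper already holds the default client port, a bash sketch using the /dev/tcp pseudo-device can probe it (running `lsof -i :2181` instead would also show which process owns the port):

```shell
#!/bin/bash
# Probe ZooKeeper's default client port (2181). A successful connect means
# some process -- e.g. a separately installed ZooKeeper -- already owns it.
port=2181
if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
  status="in use"
else
  status="free"
fi
echo "port $port is $status"
```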

Web UIs:

http://localhost:8042/node (NodeManager)
http://localhost:8088/cluster (YARN ResourceManager)
http://localhost:50070/dfshealth.html#tab-overview (HDFS NameNode)