
Hadoop 2.7.2 HA Setup

Tags: setup, HA


Node Layout

        m117    s118    s119
NN      1       1
DN      1       1       1
JN      1       1       1
ZK      1       1       1
RM      1       1
NM      1       1       1
ZKFC    1       1

Note: the abbreviations in the table mean the following: NN: NameNode; DN: DataNode; JN: JournalNode; ZK: ZooKeeper; RM: ResourceManager; NM: NodeManager; ZKFC: ZooKeeper Failover Controller. A 1 means the node runs that role; an empty cell means it does not.

Configuration Files

hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_CONF_DIR=/home/hadoop/hadoop/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"

slaves

m117
s118
s119
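
The slaves file lists the worker hosts; start-dfs.sh and start-yarn.sh use it to launch the DataNode and NodeManager daemons, which is why all three nodes appear here.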

core-site.xml

<configuration>
    <!--###############HADOOP TMP DIR###############################-->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/storage/hadoop/hadoop/tmp</value>
    </property>
    <!--####################DEFAULT FILESYSTEM####################-->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!--####################HDFS HA FOR ZOOKEEPER QUORUM#########-->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>m117,s118,s119</value>
    </property>
</configuration>
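
Note: ha.zookeeper.quorum is the ZooKeeper connect string used by the failover controllers; entries without an explicit port default to 2181, so this value should be equivalent to m117:2181,s118:2181,s119:2181.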

hdfs-site.xml

<configuration>
    <!--########################Namenode Address########################-->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>m117:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>m117:50070</value>
    </property>
    <property>
        <name>dfs.namenode.https-address.ns1.nn1</name>
        <value>m117:50470</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>s118:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>s118:50070</value>
    </property>
    <property>
        <name>dfs.namenode.https-address.ns1.nn2</name>
        <value>s118:50470</value>
    </property>
    <!--######################Namenode TMP DIR########################-->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/storage/hadoop/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/storage/hadoop/hadoop/tmp/dfs/data</value>
    </property>
    <!--#######################SecondaryNameNode Address################-->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>m117:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>m117:50091</value>
    </property>
    <!--######################HDFS####################################-->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
    <!--#######################dfs permissions###########################-->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <!--#########################HDFS HA###########################-->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!--###################HDFS Failover Proxy###########################-->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!--#########################SSH Setting###########################-->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!--####################Journalnode Setting###########################-->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://m117:8485;s118:8485;s119:8485/ns1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/hadoop/tmp/dfs/jn</value>
    </property>
</configuration>
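
Note: in an HA cluster the Standby NameNode already checkpoints the namespace, so a SecondaryNameNode is not needed; the dfs.namenode.secondary.* properties above are unused as long as no SecondaryNameNode daemon is started.

The sshfence method only works if each NameNode host can SSH to the other with the key listed above. A quick sanity check (assuming the hadoop user and the hostnames from this guide's layout):
# From m117: sshfence uses this key to log in to the peer and kill a misbehaving NameNode.
ssh -i /home/hadoop/.ssh/id_rsa hadoop@s118 hostname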

mapred-site.xml

<configuration>
<!--#######The runtime framework for executing MapReduce jobs#######-->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
<!--##############JOBHISTORY ADDRESS##########################-->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>m117:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>m117:19888</value>
    </property>
</configuration>

Note: m117 does not need to be changed to ns1 here. ns1 is the logical HDFS nameservice name; the JobHistory Server is a single daemon bound to a concrete host, so these addresses stay m117.

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!--################ResourceManager HA############################-->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>ns1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>m117</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>s118</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>m117:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>s118:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>m117:2181,s118:2181,s119:2181</value>
    </property>
</configuration>

Starting the Hadoop HA Cluster

1. Create the directories the cluster needs (on every node)

mkdir -p /storage/hadoop/hadoop/tmp/dfs/data
mkdir -p /storage/hadoop/hadoop/tmp/dfs/name
mkdir -p /home/hadoop/hadoop/tmp/dfs/jn
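Running these by hand on every host is tedious; a small loop like the following (a sketch, assuming passwordless SSH as the hadoop user) creates them everywhere in one shot:
# Create the storage directories on all three nodes.
for host in m117 s118 s119; do
    ssh "$host" "mkdir -p /storage/hadoop/hadoop/tmp/dfs/data /storage/hadoop/hadoop/tmp/dfs/name /home/hadoop/hadoop/tmp/dfs/jn"
done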

2. Start the ZooKeeper ensemble (on each ZooKeeper node)

${ZOOKEEPER_HOME}/bin/zkServer.sh start
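Once a quorum has formed, each node should report itself as leader or follower:
${ZOOKEEPER_HOME}/bin/zkServer.sh status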

3. Write the initial HDFS HA state to ZooKeeper (run once, on one NameNode host)

${HADOOP_HOME}/bin/hdfs zkfc -formatZK
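-formatZK creates a znode for the nameservice under /hadoop-ha; you can confirm it from any node in the quorum:
# Expect the listing to contain [ns1].
${ZOOKEEPER_HOME}/bin/zkCli.sh -server m117:2181 ls /hadoop-ha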

4. Start the JournalNode on each node

${HADOOP_HOME}/sbin/hadoop-daemon.sh start journalnode
The corresponding command to stop a JournalNode:
${HADOOP_HOME}/sbin/hadoop-daemon.sh stop journalnode
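Before formatting the NameNode, confirm that a JournalNode process is up on all three hosts:
# Run on each of m117, s118 and s119; a JournalNode line should appear.
jps | grep JournalNode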

5. Format the NameNode (on m117 only; the JournalNodes from step 4 must be running)

${HADOOP_HOME}/bin/hdfs namenode -format -clusterID ns1

6. Start this NameNode

${HADOOP_HOME}/sbin/hadoop-daemon.sh start namenode
The corresponding command to stop the NameNode:
${HADOOP_HOME}/sbin/hadoop-daemon.sh stop namenode

7. Synchronize metadata on the other NameNode (run on s118)

${HADOOP_HOME}/bin/hdfs namenode -bootstrapStandby

Problem encountered

16/07/01 10:59:49 INFO common.Storage: Storage directory /storage/hadoop/hadoop/tmp/dfs/name has been successfully formatted.
16/07/01 10:59:49 WARN common.Storage: writeTransactionIdToStorage failed on Storage Directory /storage/hadoop/hadoop/tmp
java.io.FileNotFoundException: /storage/hadoop/hadoop/tmp/current/seen_txid.tmp (No such file or directory)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
    at org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58)
    at org.apache.hadoop.hdfs.util.PersistentLongFile.writeFile(PersistentLongFile.java:78)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.writeTransactionIdFile(NNStorage.java:438)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.writeTransactionIdFileToStorage(NNStorage.java:479)
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.downloadImage(BootstrapStandby.java:315)
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.doRun(BootstrapStandby.java:204)
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.access$000(BootstrapStandby.java:76)
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:114)
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:110)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:110)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)

Solution

As the stack trace shows, bootstrapStandby tries to write seen_txid into a current subdirectory that does not exist yet, so create it first:
mkdir -p /storage/hadoop/hadoop/tmp/current

8. Start the NameNode on the other node (s118)

${HADOOP_HOME}/sbin/hadoop-daemon.sh start namenode
At this point, both NameNodes are in Standby state.

9. Start the zkfc process on both NameNode nodes

${HADOOP_HOME}/sbin/hadoop-daemon.sh start zkfc
The NameNode whose zkfc starts first goes from Standby to Active; the other stays Standby.
The corresponding command to stop the zkfc process:
${HADOOP_HOME}/sbin/hadoop-daemon.sh stop zkfc
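You can query each NameNode's HA state at any time:
# One of these should report "active" and the other "standby".
${HADOOP_HOME}/bin/hdfs haadmin -getServiceState nn1
${HADOOP_HOME}/bin/hdfs haadmin -getServiceState nn2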

10. Start the DataNode on each DataNode node

${HADOOP_HOME}/sbin/hadoop-daemon.sh start datanode
The corresponding command to stop a DataNode:
${HADOOP_HOME}/sbin/hadoop-daemon.sh stop datanode
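Once all three DataNodes are up, the cluster report should list them:
# Expect "Live datanodes (3)" covering m117, s118 and s119.
${HADOOP_HOME}/bin/hdfs dfsadmin -report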

11. Start the ResourceManager and NodeManager

${HADOOP_HOME}/sbin/yarn-daemon.sh start resourcemanager    # start the ResourceManager
The corresponding command to stop the ResourceManager:
${HADOOP_HOME}/sbin/yarn-daemon.sh stop resourcemanager
${HADOOP_HOME}/sbin/yarn-daemon.sh start nodemanager        # start the NodeManager
The corresponding command to stop the NodeManager:
${HADOOP_HOME}/sbin/yarn-daemon.sh stop nodemanager

12. Alternative startup scripts

${HADOOP_HOME}/sbin/start-dfs.sh    # starts the NameNodes, DataNodes, JournalNodes and ZK Failover Controllers
${HADOOP_HOME}/sbin/start-yarn.sh   # starts the ResourceManager and NodeManagers
The corresponding shutdown commands:
${HADOOP_HOME}/sbin/stop-dfs.sh     # stops the NameNodes, DataNodes, JournalNodes and ZK Failover Controllers
${HADOOP_HOME}/sbin/stop-yarn.sh    # stops the ResourceManager and NodeManagers

Start the standby ResourceManager

start-yarn.sh only starts the ResourceManager on the node where it is run, so rm2 must be started manually on s118:
${HADOOP_HOME}/sbin/yarn-daemon.sh start resourcemanager
The corresponding stop command:
${HADOOP_HOME}/sbin/yarn-daemon.sh stop resourcemanager
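The ResourceManager HA state can be checked the same way as the NameNodes:
# One ResourceManager should be "active", the other "standby".
${HADOOP_HOME}/bin/yarn rmadmin -getServiceState rm1
${HADOOP_HOME}/bin/yarn rmadmin -getServiceState rm2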

Testing

Stress testing is not covered here.
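
A quick functional check of the HA setup (a sketch; the example jar path matches a stock Hadoop 2.7.2 install, and the commands assume nn1 on m117 is currently active):
# 1. Run a sample job against the cluster to exercise HDFS and YARN end to end.
${HADOOP_HOME}/bin/yarn jar ${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 2 10
# 2. Simulate a failure by stopping the active NameNode (run on m117)...
${HADOOP_HOME}/sbin/hadoop-daemon.sh stop namenode
# 3. ...and confirm that zkfc promoted the other NameNode to Active.
${HADOOP_HOME}/bin/hdfs haadmin -getServiceState nn2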
