@zhangyy 2020-07-20T11:02:48Z

Hadoop distributed environment setup

Outline:

  • 1: Environment configuration
  • 2: System environment initialization
  • 3: Installing and configuring Hadoop
  • 4: Environment testing

1: Environment configuration

  1. OS: CentOS 6.4 x64
  2. Software: Hadoop-2.5.2.tar.gz
  3. native-2.5.2.tar.gz
  4. jdk-7u67-linux-x64.tar.gz
  5. Upload all of the packages to /home/hadoop/yangyang/
  6. Hostname configuration (a /etc/hosts sketch follows this list):
  7. 192.168.3.1 master.hadoop.com
  8. 192.168.3.2 slave1.hadoop.com
  9. 192.168.3.3 slave2.hadoop.com
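
A minimal sketch of the hostname mapping, assuming the standard /etc/hosts mechanism; run as root on all three nodes:

    # append the cluster hostname mappings to /etc/hosts on every node
    cat >> /etc/hosts <<'EOF'
    192.168.3.1 master.hadoop.com
    192.168.3.2 slave1.hadoop.com
    192.168.3.3 slave2.hadoop.com
    EOF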

2: System environment initialization

  1. 2.1 Configure master.hadoop.com as the NTP server.
  2. master.hadoop.com first syncs its own time from the Internet (see the sketch below).

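Judging from the rc.local entry added below, the one-time sync was presumably:

    # sync once against the public NTP server (inferred from the rc.local line below)
    ntpdate -u 202.112.10.36
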
# Add the sync to the boot sequence:

  1. # echo "ntpdate -u 202.112.10.36" >> /etc/rc.d/rc.local
  2. # vim /etc/ntp.conf


  1. # Uncomment the following two lines (a typical pair is sketched below):

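In a typical CentOS 6 NTP-server setup, the two uncommented lines look like this (values assumed, not confirmed by the source):

    # /etc/ntp.conf: lines to uncomment (assumed typical values)
    restrict 192.168.3.0 mask 255.255.255.0 nomodify notrap
    server 202.112.10.36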

  1. # vim /etc/sysconfig/ntpd
  2. Add (a commonly used value is sketched below):

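A setting commonly added here so that ntpd also updates the hardware clock is (assumed):

    # /etc/sysconfig/ntpd: commonly added setting (assumed)
    SYNC_HWCLOCK=yes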

  1. #service ntpd restart
  2. #chkconfig ntpd on


  1. On slave1.hadoop.com and slave2.hadoop.com, schedule a cron job that syncs time from master.hadoop.com (a manual check follows below):
  2. crontab -e
  3. */10 * * * * /usr/sbin/ntpdate master.hadoop.com

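To confirm the slaves can reach the NTP server, an illustrative one-off check on each slave:

    # run once on each slave; it should step the clock from the master
    /usr/sbin/ntpdate master.hadoop.com
    date
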
2.2 Configure the JDK on all three VMs

  1. Install the JDK:
  2. tar -zxvf jdk-7u67-linux-x64.tar.gz
  3. mv jdk1.7.0_67 jdk
  4. Configure the environment variables:
  5. # vim .bash_profile
  6. Append at the end:
  7. export JAVA_HOME=/home/hadoop/yangyang/jdk
  8. export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
  9. export HADOOP_HOME=/home/hadoop/yangyang/hadoop
  10. export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${HADOOP_HOME}/bin
  11. Once all of the software has been installed and deployed, apply the profile and verify (see the check below):
  12. source .bash_profile
  13. java -version
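
A quick sanity check after sourcing the profile; the expected values follow from the exports above:

    # confirm the environment took effect
    source ~/.bash_profile
    java -version          # should report java version "1.7.0_67"
    echo $JAVA_HOME        # /home/hadoop/yangyang/jdk
    echo $HADOOP_HOME      # /home/hadoop/yangyang/hadoop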


  1. 2.3 Passwordless SSH: run ssh-keygen and press Enter through every prompt (same on all three servers).
  2. On slave1 and slave2 (the first scp on slave1, the second on slave2):
  3. cd .ssh
  4. scp id_rsa.pub hadoop@192.168.3.1:/home/hadoop/.ssh/slave1.pub
  5. scp id_rsa.pub hadoop@192.168.3.1:/home/hadoop/.ssh/slave2.pub
  6. On master:
  7. cat id_rsa.pub >> authorized_keys
  8. cat slave1.pub >> authorized_keys
  9. cat slave2.pub >> authorized_keys
  10. chmod 600 authorized_keys
  11. scp authorized_keys hadoop@slave1.hadoop.com:/home/hadoop/.ssh/
  12. scp authorized_keys hadoop@slave2.hadoop.com:/home/hadoop/.ssh/

Test: logging in from master to each node should no longer prompt for a password.
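
An illustrative check from master; each command should print the remote hostname without asking for a password:

    ssh slave1.hadoop.com hostname
    ssh slave2.hadoop.com hostname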

3: Installing and configuring Hadoop

  1. 3.1 Install Hadoop and open its configuration directory:
  2. tar -zxvf hadoop-2.5.2.tar.gz
  3. mv hadoop-2.5.2 hadoop
  4. cd /home/hadoop/yangyang/hadoop/etc/hadoop
  5. 3.2 Replace the native libraries (run from /home/hadoop/yangyang/):
  6. rm -rf hadoop/lib/native/*
  7. tar -zxvf hadoop-native-2.5.2.tar.gz -C hadoop/lib/native
  8. cd hadoop/lib/native/

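To confirm the replacement, list the directory that was just populated:

    # list the freshly unpacked native libraries
    ls -l /home/hadoop/yangyang/hadoop/lib/native/
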
Edit core-site.xml:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master.hadoop.com:8020</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/yangyang/hadoop/data</value>
        <description>hadoop_temp</description>
      </property>
    </configuration>

Edit hdfs-site.xml:

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.namenode.http-address</name>
        <value>master.hadoop.com:50070</value>
      </property>
      <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave2.hadoop.com:50090</value>
      </property>
    </configuration>

Edit mapred-site.xml:

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>slave2.hadoop.com:10020</value>
      </property>
      <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>slave2.hadoop.com:19888</value>
      </property>
    </configuration>

Edit yarn-site.xml:

    <configuration>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>slave1.hadoop.com</value>
      </property>
      <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
      </property>
    </configuration>

Edit hadoop-env.sh:

    export JAVA_HOME=/home/hadoop/yangyang/jdk
    export HADOOP_PID_DIR=/home/hadoop/yangyang/hadoop/data/tmp
    export HADOOP_SECURE_DN_PID_DIR=/home/hadoop/yangyang/hadoop/data/tmp

Edit mapred-env.sh:

    export JAVA_HOME=/home/hadoop/yangyang/jdk
    export HADOOP_MAPRED_PID_DIR=/home/hadoop/yangyang/hadoop/data/tmp

Edit yarn-env.sh:
vim yarn-env.sh

    export JAVA_HOME=/home/hadoop/yangyang/jdk

Edit the slaves file
vim slaves

    master.hadoop.com
    slave1.hadoop.com
    slave2.hadoop.com

3.3 Sync to the other nodes, slave1 and slave2 (an unpack step is sketched after this list)

  1. cd /home/hadoop/yangyang/
  2. tar zcvf hadoop.tar.gz hadoop
  3. scp hadoop.tar.gz hadoop@192.168.3.2:/home/hadoop/yangyang/
  4. scp hadoop.tar.gz hadoop@192.168.3.3:/home/hadoop/yangyang/
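
The copied tarball still has to be unpacked on each node; a sketch of that implied follow-up step:

    # on slave1 and slave2
    cd /home/hadoop/yangyang/
    tar -zxvf hadoop.tar.gz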

3.4 Format the HDFS filesystem

  1. On master.hadoop.com (run once, before the first start):
  2. cd hadoop/bin/
  3. ./hdfs namenode -format

3.5 Start HDFS

  1. On master.hadoop.com:
  2. cd hadoop/sbin/
  3. ./start-dfs.sh

3.6 Start YARN with start-yarn.sh (on slave1.hadoop.com, the ResourceManager host configured above)

  1. On slave1.hadoop.com:
  2. cd hadoop/sbin/
  3. ./start-yarn.sh

3.7 Start the job history server:

  1. On slave2.hadoop.com (the host set in mapreduce.jobhistory.address):
  2. cd hadoop/sbin/
  3. ./mr-jobhistory-daemon.sh start historyserver

3.8 Check each host against the planned daemon allocation

The daemon layout implied by the configuration above (verify with jps on each host):

  • master.hadoop.com: NameNode, DataNode, NodeManager
  • slave1.hadoop.com: ResourceManager, DataNode, NodeManager
  • slave2.hadoop.com: SecondaryNameNode, JobHistoryServer, DataNode, NodeManager


4: Environment testing

Web UIs to check (ports 50070 and 19888 come from the configs above; 8088 is YARN's default):

  • HDFS on master.hadoop.com: http://master.hadoop.com:50070
  • YARN on slave1.hadoop.com: http://slave1.hadoop.com:8088
  • JobHistory on slave2.hadoop.com: http://slave2.hadoop.com:19888
Testing and checking the Hadoop environment:
Create a directory, upload a file, and run the wordcount example (a sketch follows).
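
A minimal sketch of that check; the HDFS paths and the sample input file are illustrative, and the examples jar ships with Hadoop 2.5.2:

    # run from /home/hadoop/yangyang/hadoop on master.hadoop.com
    bin/hdfs dfs -mkdir -p /input                        # create a directory in HDFS
    bin/hdfs dfs -put etc/hadoop/core-site.xml /input    # upload a sample file
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar \
        wordcount /input /output                         # run the bundled example
    bin/hdfs dfs -cat /output/part-r-00000               # inspect the result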
