@tsing1226 · 2015-12-23
An Oozie workflow consists of three parts: job.properties, workflow.xml, and a lib directory holding dependency jars. The job.properties file defines nameNode, jobTracker, queueName, oozieAppsRoot, oozieDataRoot, oozie.wf.application.path, inputDir, and outputDir; the key entry is oozie.wf.application.path, which points to the HDFS location of workflow.xml.
job.properties
Key point: points to the HDFS location of the workflow.xml file.
workflow.xml (this file must live on HDFS)
It contains the following nodes:
- start
- action
  MapReduce, Hive, Sqoop, Shell
- ok
- error
- kill
- end
lib directory (this directory must live on HDFS)
Dependency jars.
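Taken together, a workflow application is just a directory laid out like this (a sketch; mr-wc.jar is the WordCount jar copied in below):

map-reduce/
├── job.properties   (stays on the client; read at submit time)
├── workflow.xml     (must be on HDFS)
└── lib/
    └── mr-wc.jar    (dependency jars, uploaded alongside workflow.xml)

Create the local application directory from the bundled example and drop the jar into lib/: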
mkdir oozie-apps/
cp -r examples/apps/map-reduce/ oozie-apps/
cp /opt/softwares/mr-wc.jar /opt/cdh3.5.6/oozie-4.0.0-cdh5.3.6/oozie-apps/map-reduce/lib/
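A quick check that the jar landed where the workflow expects it:

ls /opt/cdh3.5.6/oozie-4.0.0-cdh5.3.6/oozie-apps/map-reduce/lib/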
Note: next we configure workflow.xml. To make filling in its properties easier, first run the corresponding program directly; this section covers the MapReduce action, so running the WordCount job itself is not described here. Open the job history page of that WordCount run, select the configuration tab, and search for the properties of the five MapReduce phases: input, mapper, shuffle, reducer, and output. First, job.properties:
nameNode=hdfs://hadoop-senior01.grc.com:8020
jobTracker=hadoop-senior01.grc.com:8032
queueName=default
oozieAppsRoot=user/grc/oozie-apps
oozieDataRoot=user/grc/oozie-datas
oozie.wf.application.path=${nameNode}/${oozieAppsRoot}/map-reduce/workflow.xml
inputDir=map-reduce/input
outputDir=map-reduce/output
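With these values, the parameterized paths in the workflow resolve to:

oozie.wf.application.path = hdfs://hadoop-senior01.grc.com:8020/user/grc/oozie-apps/map-reduce/workflow.xml
input:  hdfs://hadoop-senior01.grc.com:8020/user/grc/oozie-datas/map-reduce/input
output: hdfs://hadoop-senior01.grc.com:8020/user/grc/oozie-datas/map-reduce/output

The complete workflow.xml follows.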
<start to="mr-node"/>
<action name="mr-node">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/${oozieDataRoot}/${outputDir}"/>
</prepare>
<configuration>
<property>
<name>mapreduce.job.queuename</name>
<value>${queueName}</value>
</property>
<!--use new mapper /reducer api -->
<property>
<name>mapred.mapper.new-api</name>
<value>true</value>
</property>
<property>
<name>mapred.reducer.new-api</name>
<value>true</value>
</property>
<!-- input mapper shuffle reducer output-->
<!-- input-->
<property>
<name>mapreduce.input.fileinputformat.inputdir</name>
<value>${nameNode}/${oozieDataRoot}/${inputDir}</value>
</property>
<!-- mapper-->
<property>
<name>mapreduce.job.map.class</name>
<value>com.ibeifeng.bigdata.senior.hadoop.mapreduce.WordCountMapReduce$WordCountMapper</value>
</property>
<property>
<name>mapreduce.map.output.key.class</name>
<value>org.apache.hadoop.io.Text</value>
</property>
<property>
<name>mapreduce.map.output.value.class</name>
<value>org.apache.hadoop.io.IntWritable</value>
</property>
<!--mapper compress-->
<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.map.output.compress.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<!-- reducer-->
<property>
<name>mapreduce.job.reduce.class</name>
<value>com.ibeifeng.bigdata.senior.hadoop.mapreduce.WordCountMapReduce$WordCountReducer</value>
</property>
<property>
<name>mapreduce.job.output.key.class</name>
<value>org.apache.hadoop.io.Text</value>
</property>
<property>
<name>mapreduce.job.output.value.class</name>
<value>org.apache.hadoop.io.IntWritable</value>
</property>
<!-- output-->
<property>
<name>mapreduce.output.fileoutputformat.outputdir</name>
<value>${nameNode}/${oozieDataRoot}/${outputDir}</value>
</property>
<!--reducer compress-->
<property>
<name>mapreduce.output.fileoutputformat.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
</configuration>
</map-reduce>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
NOTE: in workflow.xml we should pick the newer schema version, 0.5. For the new-API settings to take effect, both mapred.mapper.new-api and mapred.reducer.new-api must be set to true; otherwise the new-API property names used above are silently ignored.
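Before uploading, the file can be checked against that schema with the Oozie CLI's validate subcommand:

bin/oozie validate oozie-apps/map-reduce/workflow.xml

Then upload the application directory to HDFS and create the input directory: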
/opt/cdh3.5.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put oozie-apps oozie-apps
/opt/cdh3.5.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -mkdir -p /user/grc/oozie-datas/map-reduce/input
Upload the WordCount input data into /user/grc/oozie-datas/map-reduce/input on HDFS.
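For example, with a hypothetical sample file wc.input (any plain-text file works as WordCount input):

/opt/cdh3.5.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put wc.input /user/grc/oozie-datas/map-reduce/input

Finally, submit the workflow: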
bin/oozie job -config oozie-apps/map-reduce/job.properties -run
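This assumes the OOZIE_URL environment variable points at the Oozie server; otherwise pass it with -oozie (the host below is this cluster's, and 11000 is Oozie's default port). The command prints a job id whose progress can be checked with -info:

export OOZIE_URL=http://hadoop-senior01.grc.com:11000/oozie
bin/oozie job -info <job-id>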