@tsing1226
2016-09-21T12:52:03.000000Z
hue
Hue is a web interface for analyzing data on Hadoop; it integrates multiple Hadoop ecosystem components into a single web UI.
Extract the release tarball:
tar -zxf hue-3.7.0-cdh5.3.6.tar.gz -C /opt/cdh3.5.6/
OS-specific dependencies
I'm using CentOS here, so the following packages need to be installed first:
yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libtidy libxml2-devel libxslt-devel make maven mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel
Build command: make apps
Configure hue.ini
# Set this to a random string, the longer the better.
# This is used for secure hashing in the session store.
secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o
# Webserver listens on this address and port
http_host=hadoop-senior01.grc.com
http_port=8888
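The example key above shouldn't be reused as-is. One way to generate your own random value, assuming openssl is installed:

```shell
# Generate a 60-character random secret for hue.ini.
# tr strips '=', '+', '/' and newlines so the value is safe in an ini file.
secret=$(openssl rand -base64 64 | tr -d '=+/\n' | cut -c1-60)
echo "secret_key=${secret}"
```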
Start Hue:
build/env/bin/supervisor
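After starting the supervisor it can take a few seconds before the web UI answers. A small polling helper; the URL below is my host/port from hue.ini, adjust for yours:

```shell
# Poll a URL until it responds or attempts run out; returns 0 on success.
wait_for_url() {
  url=$1; attempts=$2; i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -s --max-time 5 -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1)); sleep 1
  done
  return 1
}

if wait_for_url "http://hadoop-senior01.grc.com:8888" 3; then
  echo "Hue is up"
else
  echo "Hue not reachable yet"
fi
```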
Enable WebHDFS in hdfs-site.xml so Hue can talk to HDFS over HTTP:
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
Configure core-site.xml so the hue user can act as a proxy (impersonate) for other users:
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
Configure Hadoop in hue.ini
# Enter the filesystem uri
fs_defaultfs=hdfs://hadoop-senior01.grc.com:8020
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
webhdfs_url=http://hadoop-senior01.grc.com:50070/webhdfs/v1
# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false
# Default umask for file and directory creation, specified in an octal value.
## umask=022
#This is the home of your Hadoop HDFS installation
hadoop_hdfs_home=/opt/cdh3.5.6/hadoop-2.5.0-cdh5.3.6
#Use this as the HDFS Hadoop launcher script
hadoop_bin=/opt/cdh3.5.6/hadoop-2.5.0-cdh5.3.6/bin
# Directory of the Hadoop configuration
hadoop_conf_dir=/opt/cdh3.5.6/hadoop-2.5.0-cdh5.3.6/etc/hadoop
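A quick way to confirm the webhdfs_url above is reachable is to request a directory listing over HTTP. The host and the user.name parameter below are assumptions for my setup:

```shell
# Build the WebHDFS LISTSTATUS request from the hue.ini value above.
WEBHDFS_URL="http://hadoop-senior01.grc.com:50070/webhdfs/v1"
REQUEST="${WEBHDFS_URL}/?op=LISTSTATUS&user.name=hue"
# A JSON FileStatuses payload means WebHDFS is enabled and reachable.
if curl -s --max-time 5 "$REQUEST"; then
  echo "webhdfs responded"
else
  echo "webhdfs not reachable"
fi
```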
Configure YARN in hue.ini
# Enter the host on which you are running the ResourceManager
resourcemanager_host=hadoop-senior01.grc.com
# The port where the ResourceManager IPC listens on
resourcemanager_port=8032
# Whether to submit jobs to this cluster
submit_to=True
# URL of the ResourceManager API
resourcemanager_api_url=http://hadoop-senior01.grc.com:8088
# URL of the ProxyServer API
proxy_api_url=http://hadoop-senior01.grc.com:8088
# URL of the HistoryServer API
history_server_api_url=http://hadoop-senior01.grc.com:19888
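The resourcemanager_api_url can be sanity-checked the same way via the YARN REST API; the /ws/v1/cluster/info endpoint is standard, the host is my setup:

```shell
# /ws/v1/cluster/info returns cluster state as JSON if the RM web app is up.
RM_API="http://hadoop-senior01.grc.com:8088"
if curl -s --max-time 5 "${RM_API}/ws/v1/cluster/info"; then
  echo "resourcemanager responded"
else
  echo "resourcemanager not reachable"
fi
```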
Configure Hive in hue.ini
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=hadoop-senior01.grc.com
# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/opt/cdh3.5.6/hive-0.13.1-cdh5.3.6/conf
# Timeout in seconds for thrift calls to Hive service
server_conn_timeout=120
Running select count(1) from emp; in Hue produced this error:
org.apache.hadoop.security.AccessControlException(Permission denied: user=admin, access=EXECUTE, inode="/tmp":grc:supergroup:drwxrwx---)
Fix: grant execute permission on /tmp to other users:
bin/hdfs dfs -chmod -R o+x /tmp/
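To see what -chmod -R o+x changes, here is the same symbolic mode applied to a local directory (HDFS permissions follow the POSIX model; this sketch uses GNU stat):

```shell
# Reproduce the /tmp situation locally: drwxrwx--- denies EXECUTE to others.
d=$(mktemp -d)
chmod 770 "$d"       # mirrors drwxrwx--- from the error message
chmod -R o+x "$d"    # grant traversal (execute) to others, as in the fix
stat -c %a "$d"      # now 771: other users can enter the directory
```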
Configure hive-site.xml with the metastore URI:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop-senior01.grc.com:9083</value>
</property>
Start the Hive metastore service:
nohup bin/hive --service metastore &
Configure the MySQL database in hue.ini
# sqlite configuration.
[[[sqlite]]]
# Name to show in the UI.
nice_name=SQLite
# For SQLite, name defines the path to the database.
name=/opt/cdh3.5.6/hue-3.7.0-cdh5.3.6/desktop/desktop.db
# Database backend to use.
engine=sqlite
# Database options to send to the server when connecting.
# https://docs.djangoproject.com/en/1.4/ref/databases/
## options={}
# mysql, oracle, or postgresql configuration.
[[[mysql]]]
# Name to show in the UI.
nice_name="My SQL DB"
# For MySQL and PostgreSQL, name is the name of the database.
# For Oracle, Name is instance of the Oracle server. For express edition
# this is 'xe' by default.
name=db_track
# Database backend to use. This can be:
# 1. mysql
# 2. postgresql
# 3. oracle
engine=mysql
# IP or hostname of the database to connect to.
host=hadoop-senior01.grc.com
# Port the database server is listening to. Defaults are:
# 1. MySQL: 3306
# 2. PostgreSQL: 5432
# 3. Oracle Express Edition: 1521
port=3306
# Username to authenticate with when connecting to the database.
user=root
# Password matching the username to authenticate with when
# connecting to the database.
password=123456
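Before pointing Hue at MySQL, the database has to exist and Hue's tables have to be created in it. A sketch, assuming the credentials above and a Hue 3.7 tree (syncdb is the Django 1.x management command this release ships):

```shell
# Create the backing database, then let Hue build its schema in it.
# Guarded so the sketch only runs where both tools are actually present.
DB_NAME=db_track
SQL="CREATE DATABASE IF NOT EXISTS ${DB_NAME} DEFAULT CHARACTER SET utf8;"
if command -v mysql >/dev/null 2>&1 && [ -x build/env/bin/hue ]; then
  mysql -uroot -p123456 -e "$SQL"
  build/env/bin/hue syncdb --noinput
else
  echo "would run: $SQL"
fi
```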
Running the Hive editor
The Hive editor requires HiveServer2 to be running:
nohup bin/hiveserver2 &
Then run a test query from the editor to verify.
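With HiveServer2 up, connectivity can also be verified outside Hue with beeline, using the host and port from the Hive settings above (guarded in case beeline isn't on the PATH):

```shell
# Connect to HiveServer2 over JDBC and run a trivial statement.
JDBC_URL="jdbc:hive2://hadoop-senior01.grc.com:10000"
if command -v beeline >/dev/null 2>&1; then
  beeline -u "$JDBC_URL" -e "show databases;"
else
  echo "beeline not found; would connect to $JDBC_URL"
fi
```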
Reference: http://archive.cloudera.com/cdh5/cdh/5/hue-3.7.0-cdh5.3.6/manual.html#_hadoop_configuration