@zhangyy 2021-10-12T11:55:08.000000Z · 6,276 characters · 211 reads

ClickHouse cluster deployment and configuration with RPM packages

ClickHouse Series



1. ClickHouse Overview

1.1 ClickHouse Database Introduction

ClickHouse is an MPP (massively parallel processing) columnar database open-sourced by Russia's Yandex in 2016, aimed at large-scale data analytics (OLAP). Its strengths include fast queries, linear scalability, a rich feature set, efficient hardware utilization, fault tolerance, and high reliability.

Typical ClickHouse use cases:

  - Data storage and statistics in the telecom industry
  - User behavior recording and analysis
  - Security log analysis
  - Business intelligence and advertising-network value mining
  - Data processing and analysis for online games and IoT

Benchmarks against other databases: https://clickhouse.tech/benchmark/dbms/

2. ClickHouse Deployment

2.1 Environment Preparation

OS: CentOS 7.9 x86_64
A non-root user with sudo privileges, e.g. clickhouse

cat /etc/hosts
----
192.168.100.11 node01.flyfish.cn
192.168.100.12 node02.flyfish.cn
192.168.100.13 node03.flyfish.cn
192.168.100.14 node04.flyfish.cn
192.168.100.15 node05.flyfish.cn
192.168.100.16 node06.flyfish.cn
192.168.100.17 node07.flyfish.cn
192.168.100.18 node08.flyfish.cn
----
The single-node install uses the first host; the cluster install uses four nodes.
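Since the eight host entries follow a regular pattern, they can also be generated with a small loop rather than typed by hand (a sketch; the subnet and hostname pattern are taken from the table above):

```shell
#!/usr/bin/env bash
# Generate the /etc/hosts entries listed above (192.168.100.11-18 -> node01-08).
gen_hosts() {
    for i in $(seq 1 8); do
        printf '192.168.100.%d node0%d.flyfish.cn\n' "$((10 + i))" "$i"
    done
}
gen_hosts   # append the output to /etc/hosts on every node
```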

useradd -m clickhouse && echo clickhouse | passwd clickhouse --stdin

Grant sudo privileges to the clickhouse user (use visudo rather than editing /etc/sudoers directly):
visudo
---
clickhouse ALL=(ALL) NOPASSWD:ALL
---
su - clickhouse
sudo su
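An alternative to editing /etc/sudoers itself is a drop-in file under /etc/sudoers.d, which is easier to manage and remove. A minimal sketch (it writes into a temp directory so the example runs unprivileged; on a real host the target would be /etc/sudoers.d/clickhouse, validated with visudo -c):

```shell
#!/usr/bin/env bash
# Sketch: grant sudo via a sudoers drop-in instead of editing /etc/sudoers.
# Writes to a temp dir so the example runs without root; the real target
# is /etc/sudoers.d/clickhouse, checked afterwards with `visudo -c`.
set -eu
dropin_dir=$(mktemp -d)              # stand-in for /etc/sudoers.d
echo 'clickhouse ALL=(ALL) NOPASSWD:ALL' > "$dropin_dir/clickhouse"
chmod 0440 "$dropin_dir/clickhouse"  # sudoers files must not be group/world-writable
cat "$dropin_dir/clickhouse"
```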



2.2 Single-Node Installation

sudo yum install yum-utils
sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/clickhouse.repo
sudo yum install clickhouse-server clickhouse-client


Server config files: /etc/clickhouse-server/
Data directory: /var/lib/clickhouse/
Log directory: /var/log/clickhouse-server/
The installer reports that the ClickHouse server and client were installed successfully.

Configure the address ClickHouse listens on:
vim +159 /etc/clickhouse-server/config.xml
----
<listen_host>0.0.0.0</listen_host>
----
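The same edit can be scripted with sed instead of opening vim. A sketch run against a temp copy (it assumes, as in stock configs of this era, that the file carries a commented-out <listen_host> example to anchor on; on a real node the target is /etc/clickhouse-server/config.xml):

```shell
#!/usr/bin/env bash
# Sketch: enable listening on all interfaces without an interactive editor.
# Runs against a temp copy; point cfg at the real config.xml in practice.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<yandex>
    <!-- <listen_host>::</listen_host> -->
</yandex>
EOF
# Insert an active listen_host right after the commented example:
sed -i '/<!-- <listen_host>/a <listen_host>0.0.0.0</listen_host>' "$cfg"
grep '<listen_host>0.0.0.0</listen_host>' "$cfg"
```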

Start ClickHouse in the foreground:
sudo -u clickhouse clickhouse-server --config-file=/etc/clickhouse-server/config.xml



Or manage it as a systemd service:
sudo systemctl start clickhouse-server
sudo systemctl stop clickhouse-server
sudo systemctl status clickhouse-server


clickhouse-client



2.3 ClickHouse Cluster Deployment

2.3.1 Installing the JDK on the ZooKeeper Nodes

ZooKeeper runs on node02.flyfish.cn / node03.flyfish.cn / node04.flyfish.cn; install the JDK on each first:
mkdir -p /opt/bigdata/
tar -zxvf jdk1.8.0_201.tar.gz
mv jdk1.8.0_201 /opt/bigdata/jdk
vim /etc/profile
---
### jdk
export JAVA_HOME=/opt/bigdata/jdk
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
---
source /etc/profile
java -version
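The profile additions can be checked before touching the real /etc/profile by staging them in a temp file and sourcing that (a sketch; the JDK path is the one used above):

```shell
#!/usr/bin/env bash
# Sketch: stage the /etc/profile additions in a temp file and source it,
# so JAVA_HOME can be verified before editing the real profile.
profile=$(mktemp)
cat > "$profile" <<'EOF'
### jdk
export JAVA_HOME=/opt/bigdata/jdk
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
EOF
. "$profile"
echo "JAVA_HOME=$JAVA_HOME"
```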



2.3.2 Deploying ZooKeeper

Deploy ZooKeeper:
tar -zxvf zookeeper-3.4.14.tar.gz
mv zookeeper-3.4.14 /opt/bigdata/zookeeper
mkdir -p /opt/bigdata/zookeeper/data
mkdir -p /opt/bigdata/zookeeper/log
cd /opt/bigdata/zookeeper/data/
echo 1 > myid
----
cd /opt/bigdata/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
----
# Heartbeat interval (ms)
tickTime=2000
# Time allowed for followers to connect and sync with the leader at startup, in ticks
initLimit=10
# Max time for leader/follower message exchange; a follower that cannot reach the leader within it is dropped (in ticks)
syncLimit=5
# Client connection port
clientPort=2181
# Data directory; create it in advance and place the myid file there to identify each server
dataDir=/opt/bigdata/zookeeper/data
dataLogDir=/opt/bigdata/zookeeper/log
server.1=192.168.100.12:2888:3888
server.2=192.168.100.13:2888:3888
server.3=192.168.100.14:2888:3888
----
# Copy the installation to the other two nodes (into /opt/bigdata/, matching the paths above):
scp -r /opt/bigdata/zookeeper root@192.168.100.13:/opt/bigdata/
scp -r /opt/bigdata/zookeeper root@192.168.100.14:/opt/bigdata/
# On 192.168.100.13, set myid:
cd /opt/bigdata/zookeeper/data/
echo 2 > myid
# On 192.168.100.14, set myid:
cd /opt/bigdata/zookeeper/data/
echo 3 > myid
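Since each myid must match a server.N line in zoo.cfg, the value can be derived from the node's IP so all three hosts run the same script (a sketch; the mapping mirrors the config above, and detecting the local IP, e.g. via hostname -I, is left out):

```shell
#!/usr/bin/env bash
# Sketch: derive myid from the node IP, mirroring the server.N lines in zoo.cfg.
myid_for_ip() {
    case "$1" in
        192.168.100.12) echo 1 ;;
        192.168.100.13) echo 2 ;;
        192.168.100.14) echo 3 ;;
        *) echo "not a zookeeper node: $1" >&2; return 1 ;;
    esac
}
# On each node:  myid_for_ip "<local ip>" > /opt/bigdata/zookeeper/data/myid
myid_for_ip 192.168.100.13
```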

Start ZooKeeper on each of the three nodes:
cd /opt/bigdata/zookeeper/bin/
./zkServer.sh start



2.3.3 Deploying the ClickHouse Cluster

Install ClickHouse on every node, following the single-node steps above, then verify each install:
clickhouse-client



Create the ClickHouse data directory (on all nodes):
mkdir /data/clickhouse -p
chmod 777 /data/clickhouse



vim /etc/clickhouse-server/config.xml
---
<yandex>
    <!-- Include metrika.xml -->
    <include_from>/etc/clickhouse-server/config.d/metrika.xml</include_from>
    <!-- Path to data directory, with trailing slash. -->
    <path>/data/clickhouse/</path>
    <!-- Path to temporary data for processing hard queries. -->
    <tmp_path>/data/clickhouse/tmp/</tmp_path>
    <!-- <tmp_policy>tmp</tmp_policy> -->
    <!-- Storage paths -->
    <storage_configuration>
        <disks>
            <disk_name_0>
                <path>/data/clickhouse/</path>
            </disk_name_0>
            <!-- add one entry per additional disk -->
        </disks>
        <policies>
            <policy_name_1>
                <volumes>
                    <volume_name_0>
                        <disk>disk_name_0</disk> <!-- must match a disk name defined above -->
                    </volume_name_0>
                </volumes>
            </policy_name_1>
        </policies>
    </storage_configuration>

Remove the built-in localhost shard so the default local cluster is not shown:
    <remote_servers incl="clickhouse_remote_servers" >
        ...... <!-- comment out everything inside -->
    </remote_servers>
    <!-- Reference the ZooKeeper configuration -->
    <zookeeper incl="zookeeper-servers" optional="true"/>
    <macros incl="macros" optional="true" />
---


cd /etc/clickhouse-server/config.d/
vim metrika.xml
---
<yandex>
    <clickhouse_remote_servers>
        <!-- Cluster name; ClickHouse supports multiple clusters -->
        <clickhouse_cluster>
            <!-- Four shards, each with a single replica (the node itself) -->
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node01.flyfish.cn</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node02.flyfish.cn</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node03.flyfish.cn</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node04.flyfish.cn</host>
                    <port>9000</port>
                </replica>
            </shard>
        </clickhouse_cluster>
    </clickhouse_remote_servers>
    <!-- ZooKeeper cluster connection info -->
    <zookeeper-servers>
        <node index="1">
            <host>node02.flyfish.cn</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>node03.flyfish.cn</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>node04.flyfish.cn</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
    <!-- Macros used later in replicated table definitions -->
    <macros>
        <replica>node04.flyfish.cn</replica> <!-- set to this host's own hostname -->
    </macros>
    <!-- Do not restrict client source IPs -->
    <networks>
        <ip>::/0</ip>
    </networks>
    <!-- Data compression; lz4 by default -->
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
---
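Before shipping metrika.xml to the other nodes, a quick tag-balance check catches the most common copy/paste mistakes (a sketch run against a trimmed temp copy; on a real host point it at /etc/clickhouse-server/config.d/metrika.xml, or use xmllint if it is installed):

```shell
#!/usr/bin/env bash
# Sketch: verify that <shard>/<replica> open and close tags pair up.
# Uses a trimmed temp copy; point f at the real metrika.xml in practice.
set -eu
f=$(mktemp)
cat > "$f" <<'EOF'
<yandex>
  <clickhouse_remote_servers>
    <clickhouse_cluster>
      <shard>
        <internal_replication>true</internal_replication>
        <replica><host>node01.flyfish.cn</host><port>9000</port></replica>
      </shard>
    </clickhouse_cluster>
  </clickhouse_remote_servers>
</yandex>
EOF
balanced() {  # balanced <tag>: succeed if open and close counts match in $f
    [ "$(grep -o "<$1>" "$f" | wc -l)" -eq "$(grep -o "</$1>" "$f" | wc -l)" ]
}
balanced shard && balanced replica && echo "tags balanced"
```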

Sync the configuration to all hosts:
scp /etc/clickhouse-server/config.xml root@node02.flyfish.cn:/etc/clickhouse-server/
scp /etc/clickhouse-server/config.xml root@node03.flyfish.cn:/etc/clickhouse-server/
scp /etc/clickhouse-server/config.xml root@node04.flyfish.cn:/etc/clickhouse-server/
scp /etc/clickhouse-server/config.d/metrika.xml root@node02.flyfish.cn:/etc/clickhouse-server/config.d/
scp /etc/clickhouse-server/config.d/metrika.xml root@node03.flyfish.cn:/etc/clickhouse-server/config.d/
scp /etc/clickhouse-server/config.d/metrika.xml root@node04.flyfish.cn:/etc/clickhouse-server/config.d/

Then edit the macros section on each host:
<!-- Macros used later in replicated table definitions -->
<macros>
    <replica>node04.flyfish.cn</replica> <!-- set to this host's own hostname; node04.flyfish.cn here, adjust on the other nodes -->
</macros>
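The six scp commands can be collapsed into a loop. The sketch below prints the commands in dry-run mode (drop DRY_RUN to actually execute, assuming root SSH access as above):

```shell
#!/usr/bin/env bash
# Sketch: sync config.xml and metrika.xml to the other cluster nodes.
# DRY_RUN=1 prints each command instead of running it.
DRY_RUN=1
sync_conf() {
    local node f cmd
    for node in node02 node03 node04; do
        for f in config.xml config.d/metrika.xml; do
            cmd="scp /etc/clickhouse-server/$f root@$node.flyfish.cn:/etc/clickhouse-server/$f"
            if [ "${DRY_RUN:-0}" = 1 ]; then echo "$cmd"; else $cmd; fi
        done
    done
}
sync_conf
```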



Start and stop:
systemctl stop clickhouse-server.service      # stop on all nodes
systemctl start clickhouse-server.service     # start on all nodes
systemctl status clickhouse-server.service    # check status on all nodes


Log in from any node and check the cluster:
clickhouse-client
clickhouse-client -m -h 10.10.10.6
select * from system.clusters;
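To confirm every node sees the same topology, the check can be wrapped so it queries all four servers in turn (a sketch; it assumes clickhouse-client is on PATH and the hostnames from this document):

```shell
#!/usr/bin/env bash
# Sketch: run the system.clusters check against every node in turn.
# Assumes clickhouse-client is installed and the hosts are reachable.
check_cluster() {
    local host
    local sql='select cluster, shard_num, host_name from system.clusters'
    for host in node01 node02 node03 node04; do
        echo "== $host =="
        clickhouse-client -h "$host.flyfish.cn" --query "$sql"
    done
}
# check_cluster   # run from any node once all servers are started
```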


