@guodong
2020-06-08T19:48:08.000000Z
Word count: 5302
Reads: 1335
Elasticsearch
To set up the Elasticsearch cluster, three servers were prepared:

| Server IP | OS Version |
|---|---|
| 192.168.1.107 | CentOS 6.5 |
| 192.168.1.108 | CentOS 6.5 |
| 192.168.1.109 | CentOS 6.5 |
vim /etc/sysctl.conf
# Add the following lines
fs.file-max = 65536
vm.max_map_count = 262144
# Apply the changes
sysctl -p
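After `sysctl -p`, the effective values can be read back to confirm the change took effect (a quick sanity check, not part of the original steps):

```shell
# Print the current kernel parameters; they should match the values set above
sysctl -n fs.file-max
sysctl -n vm.max_map_count
```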
vim /etc/security/limits.conf
# Add or modify the following
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
vim /etc/security/limits.d/90-nproc.conf
Find the following line:
* soft nproc 1024
# and change it to
* soft nproc 4096
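The limits above only apply to new sessions; after logging in again (as the user that will run Elasticsearch), they can be verified with `ulimit` (a verification sketch, not part of the original steps):

```shell
# Effective per-session limits; should reflect limits.conf and 90-nproc.conf
ulimit -n   # open files, expected 65536
ulimit -u   # max user processes, expected 4096
ulimit -l   # locked memory; "unlimited" is needed for bootstrap.memory_lock
```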
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/elasticsearch.repo
Add the following:
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
# List all available versions
sudo yum --showduplicates list elasticsearch
# Install a specific version
sudo yum install elasticsearch-7.6.2-1
sudo chkconfig --add elasticsearch
# Edit the service environment file
vim /etc/sysconfig/elasticsearch
# Point JAVA_HOME at the JDK
JAVA_HOME=/opt/jdk1.8.0_181/
vim /etc/elasticsearch/elasticsearch.yml
# Cluster name; must be identical on every node
cluster.name: elasticsearch_production
# Node name; must be unique per node
node.name: node-1
# Data paths; list every data disk used by the node
path.data:
- /data/dfs/dfs00/dfs/elasticsearch
- /data/dfs/dfs01/dfs/elasticsearch
- /data/dfs/dfs02/dfs/elasticsearch
# Log file path
path.logs: /var/log/elasticsearch
# Lock the process memory. Elasticsearch slows down badly once the JVM starts
# swapping, so swap must be avoided: set the minimum and maximum heap
# (ES_MIN_MEM and ES_MAX_MEM) to the same value, leave enough RAM for
# Elasticsearch, and allow the process to lock memory (on Linux:
# ulimit -l unlimited, as configured in limits.conf above).
bootstrap.memory_lock: true
# CentOS 6 does not support seccomp, while Elasticsearch (since 6.1.2) defaults
# bootstrap.system_call_filter to true; the failed check would prevent startup,
# so disable it here.
bootstrap.system_call_filter: false
# This node's IP
network.host: 192.168.1.107
# Seed nodes for cluster discovery
discovery.seed_hosts:
- 192.168.1.107:9300
- 192.168.1.108:9300
- 192.168.1.109:9300
# Initial master-eligible nodes; use the node.name values
cluster.initial_master_nodes:
- node-1
- node-2
- node-3
# Start recovery only once at least 2 nodes (data or master) have joined
gateway.recover_after_nodes: 2
# Recover once 3 nodes are up, or after waiting 10 minutes, whichever comes first
gateway.expected_nodes: 3
gateway.recover_after_time: 10m
# Maximum number of aggregation buckets allowed in a single response
search.max_buckets: 200000
# Destructive operations must name indices explicitly (no wildcards)
action.destructive_requires_name: true
vim /etc/elasticsearch/jvm.options
# Size the heap to the machine; as a rule of thumb, no more than half of total RAM
-Xms16g
-Xmx16g
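The half-of-RAM rule above (commonly combined with staying below ~31 GB so the JVM can keep using compressed object pointers) can be turned into a quick starting-point calculation; this one-liner is only an illustration:

```shell
# Print half of total RAM in GB (rounded down), a starting point for -Xms/-Xmx
awk '/MemTotal/ {printf "%d\n", $2 / 1024 / 1024 / 2}' /proc/meminfo
```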
sudo -i service elasticsearch start
# Check node status
curl -XGET '192.168.1.107:9200/_cat/nodes?v'
# Check cluster health
curl -XGET '192.168.1.107:9200/_cat/health?v'
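For a scripted check, the status field can also be pulled out of the `_cluster/health` JSON response (a sketch; the host and timeout are assumptions for this setup):

```shell
# Extract the "status" field from a _cluster/health JSON response
extract_status() { grep -o '"status":"[a-z]*"' | cut -d'"' -f4; }

# Query node-1 and report the cluster status (green/yellow/red)
health=$(curl -s --max-time 5 '192.168.1.107:9200/_cluster/health' | extract_status)
echo "cluster status: ${health:-unreachable}"
```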
Kibana is installed on 192.168.1.109.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/kibana.repo
Add the following:
[kibana]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
# List all available versions
sudo yum --showduplicates list kibana
# Install a specific version
sudo yum install kibana-7.6.2-1
sudo chkconfig --add kibana
vim /etc/kibana/kibana.yml
# This server's IP
server.host: "192.168.1.109"
# Elasticsearch endpoints to connect to
elasticsearch.hosts: ["http://192.168.1.107:9200","http://192.168.1.108:9200","http://192.168.1.109:9200"]
sudo -i service kibana start
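Kibana serves a status API on port 5601; an HTTP 200 from it means the instance is up (a quick check, not part of the original steps; it can take a minute after first start):

```shell
# Print the HTTP status code from Kibana's status endpoint on the Kibana node
curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 'http://192.168.1.109:5601/api/status'
```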
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/logstash.repo
Add the following:
[logstash]
name=logstash repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
# The stock java here is Java 7, which Logstash cannot run on;
# point /usr/bin/java at the JDK 8 install instead
mv /usr/bin/java /usr/bin/java7
ln -s $JAVA_HOME/bin/java /usr/bin/java
sudo yum --showduplicates list logstash
sudo yum install logstash-7.6.2-1
vim /etc/logstash/logstash.yml
# Reload pipeline configs automatically when they change
config.reload.automatic: true
# Use the persistent (disk-backed) queue
queue.type: persisted
queue.max_bytes: 8gb
Create a pipeline config, e.g. xxx.conf, under /etc/logstash/conf.d with the processing logic.
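For illustration, a minimal pipeline matching the Kafka output configured for Filebeat below might look like this (the file name, topic, broker addresses, and index pattern are assumptions, not part of the original setup):

```conf
# /etc/logstash/conf.d/log_api.conf -- a minimal sketch
input {
  kafka {
    bootstrap_servers => "kafka01.bitnei.cn:9092,kafka02.bitnei.cn:9092,kafka03.bitnei.cn:9092"
    topics => ["log_api"]
    codec => "json"
  }
}
filter {
  # parsing/enrichment goes here, e.g. grok or mutate
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.107:9200", "http://192.168.1.108:9200", "http://192.168.1.109:9200"]
    index => "log_api-%{+YYYY.MM.dd}"
  }
}
```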
nohup /usr/share/logstash/bin/logstash --path.settings /etc/logstash >/dev/null 2>&1 &
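Once the background process is running, Logstash's monitoring API (bound to 127.0.0.1:9600 by default, an assumption for this setup) can confirm the pipeline is loaded and processing events:

```shell
# Pipeline statistics: event counts, queue usage, per-plugin timings
curl -s --max-time 5 'http://127.0.0.1:9600/_node/stats/pipelines?pretty'
```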
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/filebeat.repo
Add the following:
[filebeat]
name=filebeat repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
sudo yum install filebeat-7.6.2-1
sudo chkconfig --add filebeat
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/web_app/openservice/elk/elk*.log
    #- c:\programdata\elasticsearch\logs\*
  tags: ["log_api"]
  fields:
    log_topic: log_api
#----------------------------- Kafka output --------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka01.bitnei.cn:9092", "kafka02.bitnei.cn:9092", "kafka03.bitnei.cn:9092"]
  # message topic selection + partitioning
  topic: '%{[fields.log_topic]}'
  #key: '%{[interfaceName]}'
  required_acks: -1
  compression: gzip
  # (bytes) This value should be equal to or less than the broker's message.max.bytes.
  max_message_bytes: 10000000
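Before starting the service, Filebeat can validate the configuration and the connection to the Kafka brokers (a pre-flight check, not part of the original steps):

```shell
# Validate the syntax of filebeat.yml
filebeat test config -c /etc/filebeat/filebeat.yml
# Check connectivity to the configured Kafka output
filebeat test output -c /etc/filebeat/filebeat.yml
```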
service filebeat start