@saltyang 2022-03-29

Local Deploy Webackup SOP

mbk docker deploy


Software Version

  • Centos 6.x
  • Docker Engine Version: 1.7.1
  • Docker Compose Version: 1.5.2
  • Docker Redis Image Version: 3.2
  • Docker Rabbitmq Image Version: 3.6.2
  • Docker Cassandra Image Version: 3.5
  • Docker Mysql Image Version: 5.6
  • Docker Centos Image Version: 6.7

Install Docker on Linux

# Install docker engine

  • yum install docker-io or curl -fsSL https://get.docker.com/ | sh

# Start the docker service

  • service docker start

# Start the docker daemon on boot

  • chkconfig docker on

Note: for installing Docker on CentOS 7, see the sketch below.
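
# A minimal CentOS 7 sketch based on the official Docker CE repository (this installs a much newer engine than the 1.7.1 listed above; adjust if the legacy docker-io package is required):

  • yum install -y yum-utils
  • yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  • yum install -y docker-ce
  • systemctl start docker && systemctl enable docker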

Load Docker images

# Copy compose-mbk.tag.gz to your machine (compose-mbk.tag.gz is stored on Baidu Cloud under 我的应用数据 > bypy > compose-mbk.tag.gz)

  • tar -xzvf compose-mbk.tag.gz

# load images

  • cd compose-mbk
  • docker load -i mbkcentos6.7.tar
  • docker load -i cassandra.tar
  • docker load -i mysql.tar
  • docker load -i rabbitmq.tar
  • docker load -i redis.tar

# Check whether the images loaded successfully. There should be five images: mbkcentos6.7, cassandra, mysql, rabbitmq, redis

  • docker images

Run docker-compose

# Install docker-compose (Note: docker-compose version is 1.5.2)
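
# A minimal install sketch, assuming pip is available on the host (the pinned version matches the one listed above):

  • pip install docker-compose==1.5.2
  • docker-compose --version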

# Run docker-compose (Note: please stop the mysqld/rabbitmq-server/redis/cassandra services on the host machine to make sure the needed ports are not already in use.)

  • docker-compose up -d

# Check whether the docker containers have started successfully. (There are five running containers: mbk_server, mbk_worker, mbk_cassandra, mbk_mysql, mbk_redis; the state of each container should be "Up".)

  • docker-compose ps
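
# If any container is not "Up", the compose logs usually show why (a small troubleshooting sketch; append a service name to narrow the output):

  • docker-compose logs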

Prepare db for deploy

Mysql:

# If you don't have a mysql client, you can run this command to connect to the mbk_mysql container:

  • docker run -it --link mbk_mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    (Note: this is a single command)

# If you have a mysql client installed, connect to mbk_mysql with it; you can get the password from docker-compose.yml in the compose-mbk folder

  • mysql -h your_host_ip -uroot -p

# After connecting to mbk_mysql, change the default charset:

  • alter database mbackup character set utf8;
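
# A quick check that the charset change took effect (run in the same mysql session; it should report utf8 for mbackup):

  • select default_character_set_name from information_schema.SCHEMATA where schema_name = 'mbackup';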

Cassandra:

# connect cassandra with cqlsh

  • docker run -it --link mbk_cassandra:cassandra --rm cassandra cqlsh cassandra

# create mbk keyspace

  • create keyspace mbackup with replication={'class': 'SimpleStrategy', 'replication_factor': 1};
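
# A quick check that the keyspace exists (run in the same cqlsh session):

  • describe keyspace mbackup;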

Redis:

# connect redis with redis-cli

  • Via container link: docker run -it --link mbk_redis:redis --rm redis redis-cli -h redis -p 6379 -a passwd
  • Via exposed port: redis-cli -h host -a passwd
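
# A quick connectivity check (a hedged sketch; the actual password comes from docker-compose.yml, as with mysql):

  • docker run -it --link mbk_redis:redis --rm redis redis-cli -h redis -p 6379 -a passwd ping
    (expect PONG)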

Deploy

  1> Clone the code (*if you have already cloned it, skip this step*)
     git clone git@www.cloudraid.com.cn:puya/mbackup.git
  2> Enter the `mbackup/deploy` folder and modify `deployConfig_local.json`
     Note: The sshd ports are mbkserver: 50001 and mbkworker: 50002. The addresses of mysql/redis/rabbitmq/cassandra use the service names in `docker-compose.yml`. For example, the mysql address should be `mbk_mysql`, which is mapped to an IP address via the hosts file in the container.
     Example file content:
       server: "192.168.1.191:50001"
       worker: "192.168.1.191:50002"
       portal: "192.168.1.191:50001"
       mysql database address: "mbk_mysql"
       mysql password: "puyacn#1.."
       redis address: "mbk_redis"
       cassandra address: "mbk_cassandra"
       rabbitmq-server: "mbk_rabbitmq"
  3> Deploy steps:
     a> Deploy mbkserver
        python deployV2.py mbkserver -c deploy -a -l local
     b> Deploy mbkworker
        python deployV2.py mbkworker -c deploy -a -l local
     c> Deploy mbkportal
        python deployV2.py mbkportal -c deploy -a -l local
  4> Finish
     Client silent install command line:
     C:\Users\Salt\Desktop\wbksetup.exe /S /server_host=192.168.1.155 /ar=1

RPM OVA BUILD

  • Modify the CD-ROM ResourceSubType to vmware.cdrom.remotepassthrough
  • Re-run sha1sum on the ovf file and change the SHA1 value in the mf file
    (Notes on building the OVA)
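    A minimal sketch of the re-hashing step, assuming the package files are named mbk.ovf / mbk.mf (hypothetical names):
    NEW_SHA1=$(sha1sum mbk.ovf | awk '{print $1}')   # recompute the digest of the edited ovf
    # the manifest holds a line of the form "SHA1(mbk.ovf)= <old hash>"; replace it with the new digest
    sed -i "s|^SHA1(mbk.ovf)= .*|SHA1(mbk.ovf)= ${NEW_SHA1}|" mbk.mf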

Note:

  • Configure mysql master-slave replication

    Master: 192.168.1.177
    Slave:  192.168.1.176
    1 Configure the master
    a> Create the user account used for replication:
    mysql> CREATE USER 'backup'@'192.168.1.176' IDENTIFIED BY 'backupcn1..';   # create the user
    mysql> GRANT REPLICATION SLAVE ON *.* TO 'backup'@'192.168.1.176';         # grant the privilege
    mysql> flush privileges;   # reload privileges
    b> Add the following two lines to the config file /etc/my.cnf:
    [mysqld]
    log-bin=mysql-bin
    server-id=1
    c> Restart mysql, check the master status, and record the binary log file name and position.
    mysql> SHOW MASTER STATUS;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000003 |       73 | test         | manual,mysql     |
    +------------------+----------+--------------+------------------+
    2 Configure the slave
    a> Import the data from the master database
    mysql> create database mbackup default charset utf8;
    mysql> use mbackup;
    mysql> source /root/backup.sql;
    b> Modify the mysql config /etc/my.cnf:
    [mysqld]
    server-id=2   # set server-id; it must be unique
    c> Restart mysql, open a mysql session, and run the replication SQL statement
    mysql> CHANGE MASTER TO
        -> MASTER_HOST='192.168.1.177',
        -> MASTER_USER='backup',
        -> MASTER_PASSWORD='backupcn1..',
        -> MASTER_LOG_FILE='mysql-bin.000003',
        -> MASTER_LOG_POS=73;
    d> Start the slave replication thread
    mysql> start slave;
    e> Check the slave status
    mysql> show slave status\G
    *************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
    Master_Host: 182.92.172.80
    Master_User: rep1
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000013
    Read_Master_Log_Pos: 11662
    Relay_Log_File: mysqld-relay-bin.000022
    Relay_Log_Pos: 11765
    Relay_Master_Log_File: mysql-bin.000013
    Slave_IO_Running: Yes    # both should be Yes; if Slave_IO_Running shows Connecting,
    Slave_SQL_Running: Yes   # check the network and whether port 3306 on the master is open
    Replicate_Do_DB:
    Replicate_Ignore_DB:
  • Install redis

    yum install epel-release
    yum install redis
    # edit the redis conf file
    vim /etc/redis.conf
    # comment out "bind 127.0.0.1"
    # start the service
    service redis start
    # set a password in /etc/redis.conf
    requirepass 111111
    # restart the service
    service redis restart
  • Install rabbitmq

    wget http://www.rabbitmq.com/releases/rabbitmq-server/v3.6.6/rabbitmq-server-3.6.6-1.el6.noarch.rpm
    yum -y install rabbitmq-server-3.6.6-1.el6.noarch.rpm
    service rabbitmq-server start
    rabbitmqctl add_user mbk mbkpwd
    rabbitmqctl add_vhost vhost_mbk
    rabbitmqctl set_permissions -p vhost_mbk mbk ".*" ".*" ".*"
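    # To confirm the user, vhost, and permissions were created (a quick verification sketch):
    rabbitmqctl list_users                     # should list the mbk user
    rabbitmqctl list_vhosts                    # should list vhost_mbk
    rabbitmqctl list_permissions -p vhost_mbk  # should show mbk with ".*" ".*" ".*"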
  • Install cassandra

    pass
  • Install ElasticSearch

    # download and install elasticsearch
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.1.rpm
    rpm -ivh elasticsearch-6.4.1.rpm
    service elasticsearch start
    # (Optional) you can download kibana to post to the api with dev-tools
    wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.1-x86_64.rpm
    rpm -ivh kibana-6.4.1-x86_64.rpm
    service kibana start
    # config elasticsearch yml
    vim /etc/elasticsearch/elasticsearch.yml
    '''
    cluster.name: mbk
    http.port: 9200
    network.host: 0.0.0.0
    '''
    service elasticsearch restart
    # config kibana yml
    vim /etc/kibana/kibana.yml
    '''
    server.host: "0.0.0.0"
    server.port: 5601
    elasticsearch.url: "http://localhost:9200"
    '''
    service kibana restart
    # open the firewall ports
    firewall-cmd --zone=public --add-port=9200/tcp --permanent
    firewall-cmd --zone=public --add-port=5601/tcp --permanent
    service firewalld reload
    # add the elasticsearch client to pip
    pip install elasticsearch
    # test
    curl http://192.168.1.144:9200/
    curl http://192.168.1.144:5601/app/kibana
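    # An extra check that the node is up with the expected cluster name (a small sketch; the address follows the test commands above):
    curl http://192.168.1.144:9200/_cluster/health?pretty   # expect "cluster_name": "mbk" and status green or yellow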
  • Deploy code

    copy env and 3rdParty to /opt/mbk
    python deployV2.py mbkserver -a -c deploy -l vpc
    # Note: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory
    cp libmysqlclient_r.so.16 /usr/lib64/
    # Note: nginx start -> libpcre.so.0: cannot open shared object file: No such file or directory
    ln -s /lib64/libpcre.so.1 /lib64/libpcre.so.0
    # open port 443
    yum install firewalld firewall-config
    firewall-cmd --zone=public --add-port=443/tcp --permanent
    service firewalld restart
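    # To confirm port 443 is open after the restart (443/tcp should appear in the output):
    firewall-cmd --zone=public --list-ports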
  • Storage configuration

    # separate the system disk from the data disk (at least two disks need to be added)
    fdisk -l                            # check whether the disk has already been partitioned
    fdisk /dev/sdb                      # a new disk is found at /dev/sdb; use fdisk to create a partition on it
    # enter the commands in sequence: n -> p -> 1 -> default -> default -> w
    fdisk -l                            # check again; the partition should now exist
    mkfs.xfs -f /dev/sdb1               # format the new partition; keep the filesystem consistent with the other partitions
    mount /dev/sdb1 /opt/mbk/storage    # once the filesystem is created, mount it under /opt/mbk/storage
    chown -R mbk:mbk /opt/mbk/storage   # change the directory ownership
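    # The mount above does not persist across reboots; if it should, an /etc/fstab entry along these lines could be added (a hedged sketch reusing the device and mount point from the commands above):
    echo '/dev/sdb1 /opt/mbk/storage xfs defaults 0 0' >> /etc/fstab
    mount -a   # verify the fstab entry mounts cleanly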
    # after the system is installed the home partition turns out to be too large; take 100G from /home and give it to /
    # for XFS filesystems
    umount /home/
    lvreduce -L -100G /dev/mapper/centos-home
    mkfs.xfs /dev/mapper/centos-home -f
    mount /dev/mapper/centos-home /home/
    df -hT       # check the partitions again; /home is now 100G smaller, but all of its previous data is gone
    vgdisplay    # then give the 100G taken from /home to the / partition
    lvextend -L +100G /dev/mapper/centos-root
    xfs_growfs /dev/mapper/centos-root
    df -hT
    # for ext2/ext3/ext4 filesystems
    umount /home/
    resize2fs -p /dev/mapper/vg_weidianserver2-lv_home 20G
    mount /home
    df -h
    lvextend -L +812G /dev/mapper/vg_weidianserver2-lv_root
    resize2fs -p /dev/mapper/vg_weidianserver2-lv_root
    df -h
    [How to resize the home and root partitions on CentOS][3]
    # mount the cifs share automatically at boot (fstab entry)
    //192.168.1.33/webackup_share /opt/mbk/test_storage/bucket0/1 cifs user=Everyone,password=,vers=2.0,gid=994,uid=996 0 0