@abelsu7 · 2020-03-03

Quickly Building a Ceph (jewel 10.x) Cluster on Virtual Machines with virt-manager on Linux

Tags: Ceph, KVM, virt-manager


0. Prerequisites

  1. Download the CentOS-7-x86_64-Minimal-1908.iso image, which will be used to install the operating system on the virtual machines
  2. At least 200 GB of free disk space (a rough estimate, not a hard requirement; anything that covers your actual needs is fine)

My local host environment is as follows:

    > cat /etc/redhat-release
    Fedora release 31 (Thirty One)
    > uname -r
    5.4.8-200.fc31.x86_64
    > df -hT /
    Filesystem              Type  Size  Used Avail Use% Mounted on
    /dev/mapper/fedora-root ext4  395G  123G  254G  33% /
    > virt-manager --version
    2.2.1
    > free -h
                  total    used    free   shared  buff/cache  available
    Mem:           15Gi   2.4Gi    10Gi    434Mi       2.3Gi       12Gi
    Swap:            0B      0B      0B
    > screenfetch
    root@ins-7590
    OS: Fedora 31 ThirtyOne
    Kernel: x86_64 Linux 5.4.8-200.fc31.x86_64
    Uptime: 40m
    Packages: 2237
    Shell: zsh 5.7.1
    Resolution: 3600x1080
    DE: GNOME
    WM: GNOME Shell
    WM Theme:
    GTK Theme: Adwaita-dark [GTK2/3]
    Icon Theme: Adwaita
    Font: Cantarell 11
    CPU: Intel Core i7-9750H @ 12x 4.5GHz
    GPU: Mesa DRI Intel(R) UHD Graphics 630 (Coffeelake 3x8 GT2)
    RAM: 2469MiB / 15786MiB

1. Setting Up the Virtual Machine Environment

The plan is to build a three-node Ceph cluster out of three virtual machines:

hostname     IP               roles
ceph-node1   192.168.200.101  deploy, 1 mon, 2 osd
ceph-node2   192.168.200.102  1 mon, 2 osd
ceph-node3   192.168.200.103  1 mon, 2 osd

1.1 Creating the Disk Images

Create a directory (for example /mnt/ceph/) to hold the disk images of the three virtual machines:

    ~ > mkdir -p /mnt/ceph
    ~ > cd /mnt/ceph/
    /mnt/ceph > mkdir ceph-node1 ceph-node2 ceph-node3
    /mnt/ceph > ls -l
    total 12
    drwxr-xr-x 2 root root 4096 Mar  2 15:23 ceph-node1
    drwxr-xr-x 2 root root 4096 Mar  2 15:23 ceph-node2
    drwxr-xr-x 2 root root 4096 Mar  2 15:23 ceph-node3

Note: only ceph-node1 needs to be prepared for now; the other two VMs will be cloned from it later with virt-clone.

Use qemu-img to create a 100 GB system disk for ceph-node1, plus two 2 TB data disks, all in qcow2 format:

    /mnt/ceph > qemu-img create -f qcow2 ceph-node1/ceph-node1.qcow2 100G
    Formatting 'ceph-node1/ceph-node1.qcow2', fmt=qcow2 size=107374182400 cluster_size=65536 lazy_refcounts=off refcount_bits=16
    /mnt/ceph > qemu-img create -f qcow2 ceph-node1/disk-1.qcow2 2T
    Formatting 'ceph-node1/disk-1.qcow2', fmt=qcow2 size=2199023255552 cluster_size=65536 lazy_refcounts=off refcount_bits=16
    /mnt/ceph > qemu-img create -f qcow2 ceph-node1/disk-2.qcow2 2T
    Formatting 'ceph-node1/disk-2.qcow2', fmt=qcow2 size=2199023255552 cluster_size=65536 lazy_refcounts=off refcount_bits=16
    /mnt/ceph > tree -h
    .
    ├── [4.0K]  ceph-node1
    │   ├── [194K]  ceph-node1.qcow2
    │   ├── [224K]  disk-1.qcow2
    │   └── [224K]  disk-2.qcow2
    ├── [4.0K]  ceph-node2
    └── [4.0K]  ceph-node3
    3 directories, 3 files
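
The 2 TB images occupy only a few hundred KiB on disk because qcow2 allocates space lazily; if you want to double-check this, qemu-img info shows the difference (a quick sketch):

    /mnt/ceph > qemu-img info ceph-node1/disk-1.qcow2   # 'virtual size' should read 2 TiB while 'disk size' stays tiny until data is written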

1.2 Creating the Virtual Network

In virt-manager, create a virtual network named ceph-net in NAT mode, forwarding through the host's internet-facing interface. The planned subnet is 192.168.200.0/24, with a DHCP range of 192.168.200.101–254.

The XML definition automatically generated for ceph-net looks like this:

    <network>
      <name>ceph-net</name>
      <uuid>cc046613-11c4-4db7-a478-ac6d568e69ec</uuid>
      <forward dev="wlo1" mode="nat">
        <nat>
          <port start="1024" end="65535"/>
        </nat>
        <interface dev="wlo1"/>
      </forward>
      <bridge name="virbr1" stp="on" delay="0"/>
      <mac address="52:54:00:86:86:e8"/>
      <domain name="ceph-net"/>
      <ip address="192.168.200.1" netmask="255.255.255.0">
        <dhcp>
          <range start="192.168.200.101" end="192.168.200.254"/>
        </dhcp>
      </ip>
    </network>
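
If you prefer the command line to the virt-manager dialog, the same network could be defined from this XML with virsh (a sketch; it assumes the XML above has been saved to /mnt/ceph/ceph-net.xml):

    virsh net-define /mnt/ceph/ceph-net.xml
    virsh net-start ceph-net
    virsh net-autostart ceph-net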

1.3 Installing the Operating System

Create a new virtual machine in virt-manager:

Select the previously downloaded CentOS-7-x86_64-Minimal-1908.iso as the installation media:

2 vCPUs and 1 GB of RAM, the defaults, are enough:

Select the ceph-node1.qcow2 image created earlier as the system disk:

Name the virtual machine ceph-node1, choose ceph-net as its network, and tick Customize configuration before install so the two 2 TB disks can be added:

Add disk-1.qcow2 and disk-2.qcow2 as VirtIO Disks.

Finally, under Boot Options confirm that the CDROM is the first boot device, then start the installation:

This drops you into the familiar CentOS installer. Choose the 100 GB disk as the system disk, and take the opportunity to check that the two 2 TB disks have been recognized:

Turn on the network toggle and confirm that a DHCP address in the 192.168.200.0/24 range has been assigned; the exact address can be changed later inside the system. Also, in the lower-left corner, change the hostname to ceph-node1 and click Apply:

Now start the installation, remembering to set a root password. When it finishes, reboot into the system:

    [root@ceph-node1 ~]$ lsblk
    NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sr0                            11:0    1 1024M  0 rom
    vda                           252:0    0  100G  0 disk
    ├─vda1                        252:1    0    1G  0 part /boot
    └─vda2                        252:2    0   99G  0 part
      ├─centos_ceph--node1-root   253:0    0   50G  0 lvm  /
      ├─centos_ceph--node1-swap   253:1    0    2G  0 lvm  [SWAP]
      └─centos_ceph--node1-home   253:2    0   47G  0 lvm  /home
    vdb                           252:16   0    2T  0 disk
    vdc                           252:32   0    2T  0 disk
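
For reference, the whole wizard flow in this section could also be scripted with virt-install instead of clicking through virt-manager (a sketch under the same sizing; the ISO path is an assumption):

    virt-install \
      --name ceph-node1 \
      --memory 1024 --vcpus 2 \
      --disk path=/mnt/ceph/ceph-node1/ceph-node1.qcow2,bus=virtio \
      --disk path=/mnt/ceph/ceph-node1/disk-1.qcow2,bus=virtio \
      --disk path=/mnt/ceph/ceph-node1/disk-2.qcow2,bus=virtio \
      --cdrom /path/to/CentOS-7-x86_64-Minimal-1908.iso \
      --network network=ceph-net \
      --os-variant centos7.0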

1.4 Preparation Before Cloning

As seen during installation, eth0 picked up a DHCP-assigned address. We now change it to the planned static IP address, 192.168.200.101.

Edit the interface configuration file:

    vi /etc/sysconfig/network-scripts/ifcfg-eth0
    # change/add the following settings
    BOOTPROTO="static"
    IPADDR=192.168.200.101
    NETMASK=255.255.255.0
    GATEWAY=192.168.200.1
    DNS1=192.168.200.1
    ONBOOT="yes"

Restart networking, then check connectivity to both the internal network and the internet:

    [root@ceph-node1 ~]$ systemctl restart network   # restart networking
    [root@ceph-node1 ~]$ ip addr list eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:32:af:61 brd ff:ff:ff:ff:ff:ff
        inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::5054:ff:fe32:af61/64 scope link
           valid_lft forever preferred_lft forever
    [root@ceph-node1 ~]$ nmcli
    eth0: connected to eth0
            "Red Hat Virtio"
            ethernet (virtio_net), 52:54:00:32:AF:61, hw, mtu 1500
            ip4 default
            inet4 192.168.200.101/24
            route4 192.168.200.0/24
            route4 0.0.0.0/0
            inet6 fe80::916d:cd1f:bb97:ca22/64
            route6 fe80::/64
            route6 ff00::/8
    lo: unmanaged
            "lo"
            loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536
    DNS configuration:
            servers: 192.168.200.1
            interface: eth0
    [root@ceph-node1 ~]$ ping 192.168.200.1   # ping the gateway
    [root@ceph-node1 ~]$ ping baidu.com       # ping the internet

For simplicity, disable firewalld and SELinux:

    # stop and disable firewalld
    [root@ceph-node1 ~]$ systemctl stop firewalld
    [root@ceph-node1 ~]$ systemctl disable firewalld
    # put SELinux into permissive mode for the current boot
    [root@ceph-node1 ~]$ getenforce
    Enforcing
    [root@ceph-node1 ~]$ setenforce 0
    [root@ceph-node1 ~]$ getenforce
    Permissive
    # disable SELinux permanently (takes effect after reboot)
    [root@ceph-node1 ~]$ sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Switch the yum repositories to the Aliyun mirrors and add a ceph repository:

    [root@ceph-node1 ~]$ yum clean all
    [root@ceph-node1 ~]$ curl http://mirrors.aliyun.com/repo/Centos-7.repo >/etc/yum.repos.d/CentOS-Base.repo
    [root@ceph-node1 ~]$ curl http://mirrors.aliyun.com/repo/epel-7.repo >/etc/yum.repos.d/epel.repo
    [root@ceph-node1 ~]$ sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
    [root@ceph-node1 ~]$ sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
    [root@ceph-node1 ~]$ vim /etc/yum.repos.d/ceph.repo
    # add the following
    [ceph]
    name=ceph
    baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
    gpgcheck=0
    [ceph-noarch]
    name=cephnoarch
    baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
    gpgcheck=0
    [root@ceph-node1 ~]$ yum makecache
    [root@ceph-node1 ~]$ yum repolist
    Loaded plugins: fastestmirror, priorities
    Loading mirror speeds from cached hostfile
    repo id           repo name                                        status
    base/7/x86_64     CentOS-7 - Base - mirrors.aliyun.com             10,097
    ceph              ceph                                                499
    ceph-noarch       cephnoarch                                           16
    epel/x86_64       Extra Packages for Enterprise Linux 7 - x86_64   13,196
    extras/7/x86_64   CentOS-7 - Extras - mirrors.aliyun.com              323
    updates/7/x86_64  CentOS-7 - Updates - mirrors.aliyun.com           1,478
    repolist: 25,609

Install the ceph client packages, ntpdate, and a few other tools:

    # install the ceph client packages
    [root@ceph-node1 ~]$ yum install ceph ceph-radosgw
    [root@ceph-node1 ~]$ ceph --version
    ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
    # required: ntpdate is used to keep the clocks in sync
    [root@ceph-node1 ~]$ yum install ntp ntpdate ntp-doc
    # optional: handy tools for monitoring performance
    [root@ceph-node1 ~]$ yum install wget vim htop dstat iftop tmux

Run ntpdate once now, and have it run automatically at boot so the clock stays in sync:

    [root@ceph-node1 ~]$ ntpdate ntp.sjtu.edu.cn
    2 Mar 09:49:39 ntpdate[1915]: adjust time server 84.16.73.33 offset 0.051449 sec
    [root@ceph-node1 ~]$ echo ntpdate ntp.sjtu.edu.cn >> /etc/rc.d/rc.local
    [root@ceph-node1 ~]$ chmod +x /etc/rc.d/rc.local
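
If you'd rather not rely on rc.local, either of the following would keep the clocks in sync as well (an alternative sketch, not what this walkthrough uses):

    # option 1: run the ntpd daemon that was installed above
    systemctl enable --now ntpd
    # option 2: an hourly ntpdate via cron
    echo '0 * * * * root /usr/sbin/ntpdate ntp.sjtu.edu.cn' >> /etc/crontab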

Add all node hostnames to /etc/hosts, generate an SSH key, and set up passwordless login:

    [root@ceph-node1 ~]$ echo ceph-node1 >/etc/hostname
    [root@ceph-node1 ~]$ vim /etc/hosts
    # add the following entries
    192.168.200.101 ceph-node1
    192.168.200.102 ceph-node2
    192.168.200.103 ceph-node3
    # press Enter three times to generate the key
    [root@ceph-node1 ~]$ ssh-keygen
    # set up passwordless login to this node itself;
    # because the clones will share the same key pair,
    # this single ssh-copy-id is enough for all three
    # virtual machines to log in to each other without a password
    [root@ceph-node1 ~]$ ssh-copy-id root@ceph-node1

On the host, add the same three entries to /etc/hosts and run ssh-copy-id against the virtual machine so that the host can log in without a password as well:

    [root@ceph-host ~]$ vim /etc/hosts
    # add the following entries
    192.168.200.101 ceph-node1
    192.168.200.102 ceph-node2
    192.168.200.103 ceph-node3
    [root@ceph-host ~]$ ssh-copy-id root@ceph-node1

The virtual machine is now fully prepared. Run shutdown -h now to power it off, and remember to remove the CDROM in virt-manager and make sure VirtIO Disk 1 is the boot device.

1.5 Cloning the Virtual Machines

Use virt-clone to clone ceph-node2 and ceph-node3 from ceph-node1. It generates new MAC addresses for the NICs and copies the disk images to the given paths:

    > virsh domblklist ceph-node1
     Target   Source
    -------------------------------------------------
     vda      /mnt/ceph/ceph-node1/ceph-node1.qcow2
     vdb      /mnt/ceph/ceph-node1/disk-1.qcow2
     vdc      /mnt/ceph/ceph-node1/disk-2.qcow2
    > virt-clone --original ceph-node1 --name ceph-node2 \
        --file /mnt/ceph/ceph-node2/ceph-node2.qcow2 \
        --file /mnt/ceph/ceph-node2/disk-1.qcow2 \
        --file /mnt/ceph/ceph-node2/disk-2.qcow2
    > virt-clone --original ceph-node1 --name ceph-node3 \
        --file /mnt/ceph/ceph-node3/ceph-node3.qcow2 \
        --file /mnt/ceph/ceph-node3/disk-1.qcow2 \
        --file /mnt/ceph/ceph-node3/disk-2.qcow2
    > tree -h /mnt/ceph/
    /mnt/ceph
    ├── [4.0K]  ceph-node1
    │   ├── [1.8G]  ceph-node1.qcow2
    │   ├── [224K]  disk-1.qcow2
    │   └── [224K]  disk-2.qcow2
    ├── [4.0K]  ceph-node2
    │   ├── [1.8G]  ceph-node2.qcow2
    │   ├── [224K]  disk-1.qcow2
    │   └── [224K]  disk-2.qcow2
    └── [4.0K]  ceph-node3
        ├── [1.8G]  ceph-node3.qcow2
        ├── [224K]  disk-1.qcow2
        └── [224K]  disk-2.qcow2
    3 directories, 9 files
    > virsh list --all
     Id   Name         State
    ------------------------------------
     -    ceph-node1   shut off
     -    ceph-node2   shut off
     -    ceph-node3   shut off

Log in to ceph-node2 and change its hostname and IP address:

    # change the hostname (shows up after logging in again)
    [root@ceph-node2 ~]$ hostname ceph-node2
    [root@ceph-node2 ~]$ echo ceph-node2 > /etc/hostname
    # log out and back in so the new hostname takes effect
    [root@ceph-node2 ~]$ exit
    # change the IP address to the planned one
    [root@ceph-node2 ~]$ vim /etc/sysconfig/network-scripts/ifcfg-eth0
    IPADDR=192.168.200.102
    # restart networking
    [root@ceph-node2 ~]$ systemctl restart network

Log in to ceph-node3 and do the same. All three virtual machines are now ready; a quick way to verify them is sketched below.
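
Since the clones share ceph-node1's key pair and authorized_keys, passwordless SSH should already work between the nodes; a quick check from ceph-node1 (a sketch):

    for n in ceph-node1 ceph-node2 ceph-node3; do
        ssh root@$n 'hostname; ip -4 addr show eth0 | grep inet'
    done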

2. Installing the Ceph Cluster

To recap, the plan is a three-node Ceph cluster on the three virtual machines:

hostname     IP               roles
ceph-node1   192.168.200.101  deploy, 1 mon, 2 osd
ceph-node2   192.168.200.102  1 mon, 2 osd
ceph-node3   192.168.200.103  1 mon, 2 osd

The ceph-deploy tool makes it easy to deploy the whole ceph cluster from a single node (ceph-node1 in this case), so the following steps are all performed on ceph-node1.

2.1 Installing ceph-deploy

Note: run the following commands on ceph-node1.

Install ceph-deploy:

    [root@ceph-node1 ~]$ yum info ceph-deploy
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    Available Packages
    Name        : ceph-deploy
    Arch        : noarch
    Version     : 1.5.39
    Release     : 0
    Size        : 284 k
    Repo        : ceph-noarch
    Summary     : Admin and deploy tool for Ceph
    URL         : http://ceph.com/
    License     : MIT
    Description : An easy to use admin tool for deploy ceph storage clusters.
    [root@ceph-node1 ~]$ yum install ceph-deploy -y
    [root@ceph-node1 ~]$ ceph-deploy --version
    1.5.39

2.2 Starting the Deployment

Note: run the following commands on ceph-node1.

Create a deployment directory, my-cluster:

    [root@ceph-node1 ~]$ mkdir my-cluster
    [root@ceph-node1 ~]$ cd my-cluster/
    # generate the initial configuration files
    [root@ceph-node1 my-cluster]$ ceph-deploy new ceph-node1 ceph-node2 ceph-node3
    [root@ceph-node1 my-cluster]$ ls -hl
    total 12K
    -rw-r--r-- 1 root root  203 Mar  2 09:18 ceph.conf
    -rw-r--r-- 1 root root 3.0K Mar  2 09:18 ceph-deploy-ceph.log
    -rw------- 1 root root   73 Mar  2 09:18 ceph.mon.keyring
    [root@ceph-node1 my-cluster]$ cat ceph.conf
    [global]
    fsid = 86537cd8-270c-480d-b549-1f352de6c907
    mon_initial_members = ceph-node1, ceph-node2, ceph-node3
    mon_host = 192.168.200.101,192.168.200.102,192.168.200.103
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx

Note: at any point, if you get stuck and want to redeploy from scratch, run the following commands to remove the ceph packages and wipe the data and configuration on every node:

    [root@ceph-node1 my-cluster]$ ceph-deploy purge ceph-node1 ceph-node2 ceph-node3
    [root@ceph-node1 my-cluster]$ ceph-deploy purgedata ceph-node1 ceph-node2 ceph-node3
    [root@ceph-node1 my-cluster]$ ceph-deploy forgetkeys
    [root@ceph-node1 my-cluster]$ rm ceph.*

Based on the IP plan above, add public_network to ceph.conf, and relax the allowed clock drift between the mons a little (the default is 0.05 s; here it is raised to 2 s):

    [root@ceph-node1 my-cluster]$ echo public_network=192.168.200.0/24 >> ceph.conf
    [root@ceph-node1 my-cluster]$ echo mon_clock_drift_allowed = 2 >> ceph.conf
    [root@ceph-node1 my-cluster]$ cat ceph.conf
    [global]
    fsid = 86537cd8-270c-480d-b549-1f352de6c907
    mon_initial_members = ceph-node1, ceph-node2, ceph-node3
    mon_host = 192.168.200.101,192.168.200.102,192.168.200.103
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    public_network=192.168.200.0/24
    mon_clock_drift_allowed = 2

Deploy the monitors:

    [root@ceph-node1 my-cluster]$ ceph-deploy mon create-initial
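
If this succeeds, ceph-deploy should also have gathered the cluster keyrings into the working directory, which is a quick way to confirm that the monitors reached quorum (a sketch):

    [root@ceph-node1 my-cluster]$ ls -l *.keyring   # expect ceph.client.admin.keyring and the bootstrap-* keyrings alongside ceph.mon.keyring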

Check the cluster status. health is HEALTH_ERR at this point simply because no OSDs have been deployed yet:

    [root@ceph-node1 my-cluster]$ ceph -s
        cluster 86537cd8-270c-480d-b549-1f352de6c907
         health HEALTH_ERR
                no osds
         monmap e2: 3 mons at {ceph-node1=192.168.200.101:6789/0,ceph-node2=192.168.200.102:6789/0,ceph-node3=192.168.200.103:6789/0}
                election epoch 6, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
         osdmap e1: 0 osds: 0 up, 0 in
                flags sortbitwise,require_jewel_osds
          pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating

Deploy the OSDs. prepare partitions each disk (a data partition plus a journal partition), which is why activate then refers to /dev/vdb1 and /dev/vdc1:

    [root@ceph-node1 my-cluster]$ ceph-deploy --overwrite-conf osd prepare \
        ceph-node1:/dev/vdb ceph-node1:/dev/vdc \
        ceph-node2:/dev/vdb ceph-node2:/dev/vdc \
        ceph-node3:/dev/vdb ceph-node3:/dev/vdc --zap-disk
    [root@ceph-node1 my-cluster]$ ceph-deploy --overwrite-conf osd activate \
        ceph-node1:/dev/vdb1 ceph-node1:/dev/vdc1 \
        ceph-node2:/dev/vdb1 ceph-node2:/dev/vdc1 \
        ceph-node3:/dev/vdb1 ceph-node3:/dev/vdc1

Check the cluster status again:

    [root@ceph-node1 my-cluster]$ ceph -s
        cluster 86537cd8-270c-480d-b549-1f352de6c907
         health HEALTH_OK
         monmap e2: 3 mons at {ceph-node1=192.168.200.101:6789/0,ceph-node2=192.168.200.102:6789/0,ceph-node3=192.168.200.103:6789/0}
                election epoch 6, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
         osdmap e30: 6 osds: 6 up, 6 in
                flags sortbitwise,require_jewel_osds
          pgmap v72: 64 pgs, 1 pools, 0 bytes data, 0 objects
                646 MB used, 12251 GB / 12252 GB avail
                      64 active+clean

At this point the cluster deployment is complete; a small smoke test is sketched below.
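
As a quick sanity check, you can write an object into the default rbd pool (the "1 pools" shown in ceph -s above) and read it back (a sketch; the object and file names are arbitrary):

    echo hello-ceph > /tmp/hello.txt
    rados -p rbd put test-object /tmp/hello.txt   # store the file as an object
    rados -p rbd ls                               # the object should be listed
    rados -p rbd get test-object /tmp/hello.out   # read it back
    cat /tmp/hello.out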

2.3 Disabling cephx Authentication

First, on ceph-node1, edit ceph.conf in the my-cluster directory:

    [root@ceph-node1 my-cluster]$ vim ceph.conf
    # change all three cephx settings to none
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none

Then push the updated configuration file to all three nodes with ceph-deploy:

    [root@ceph-node1 my-cluster]$ ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2 ceph-node3

Finally, restart the mon and osd daemons on each of the three nodes (a loop version is sketched after the block):

    # ceph-node1
    [root@ceph-node1 ~]$ systemctl restart ceph-mon.target
    [root@ceph-node1 ~]$ systemctl restart ceph-osd.target
    # ceph-node2
    [root@ceph-node2 ~]$ systemctl restart ceph-mon.target
    [root@ceph-node2 ~]$ systemctl restart ceph-osd.target
    # ceph-node3
    [root@ceph-node3 ~]$ systemctl restart ceph-mon.target
    [root@ceph-node3 ~]$ systemctl restart ceph-osd.target
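
Since passwordless SSH between the nodes is already in place, the same restarts can also be driven from ceph-node1 in one loop (a sketch):

    for n in ceph-node1 ceph-node2 ceph-node3; do
        ssh root@$n 'systemctl restart ceph-mon.target ceph-osd.target'
    done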

Shortly afterwards the cluster can be seen to recover:

    [root@ceph-node1 ~]$ ceph -s
        cluster 86537cd8-270c-480d-b549-1f352de6c907
         health HEALTH_OK
         monmap e2: 3 mons at {ceph-node1=192.168.200.101:6789/0,ceph-node2=192.168.200.102:6789/0,ceph-node3=192.168.200.103:6789/0}
                election epoch 12, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
         osdmap e42: 6 osds: 6 up, 6 in
                flags sortbitwise,require_jewel_osds
          pgmap v98: 64 pgs, 1 pools, 0 bytes data, 0 objects
                647 MB used, 12251 GB / 12252 GB avail
                      64 active+clean
    [root@ceph-node1 ~]$ ceph osd tree
    ID WEIGHT   TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 11.96457 root default
    -2  3.98819     host ceph-node1
     0  1.99409         osd.0            up  1.00000          1.00000
     5  1.99409         osd.5            up  1.00000          1.00000
    -3  3.98819     host ceph-node2
     1  1.99409         osd.1            up  1.00000          1.00000
     2  1.99409         osd.2            up  1.00000          1.00000
    -4  3.98819     host ceph-node3
     3  1.99409         osd.3            up  1.00000          1.00000
     4  1.99409         osd.4            up  1.00000          1.00000

2.4 Connecting to the Ceph Cluster from the Host via libvirt

Back on the host, first install the ceph client packages:

    [root@ceph-host ~]$ yum install ceph ceph-radosgw
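
One step this walkthrough does not show explicitly: the rbd and qemu-img commands used further down read /etc/ceph/ceph.conf to locate the monitors, so the host needs a copy of the cluster configuration as well (a sketch; copying it from ceph-node1 is the simplest option, and with cephx disabled no keyring is required):

    [root@ceph-host ~]$ scp root@ceph-node1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf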

Then create /mnt/ceph/rbd-pool.xml:

    <pool type='rbd'>
      <name>rbd</name>
      <source>
        <host name='ceph-node1' port='6789'/>
        <name>rbd</name>
      </source>
    </pool>

Define the rbd storage pool and start it:

    [root@ceph-host ~]$ virsh pool-define /mnt/ceph/rbd-pool.xml
    Pool rbd defined from /mnt/ceph/rbd-pool.xml
    [root@ceph-host ~]$ virsh pool-start rbd
    Pool rbd started
    [root@ceph-host ~]$ virsh pool-info rbd
    Name:           rbd
    UUID:           0e3115e5-87c8-41c6-979b-3b8277deef78
    State:          running
    Persistent:     yes
    Autostart:      no
    Capacity:       11.96 TiB
    Allocation:     1.32 KiB
    Available:      11.96 TiB
    [root@ceph-host ~]$ virsh pool-dumpxml rbd
    <pool type='rbd'>
      <name>rbd</name>
      <uuid>0e3115e5-87c8-41c6-979b-3b8277deef78</uuid>
      <capacity unit='bytes'>13155494166528</capacity>
      <allocation unit='bytes'>1349</allocation>
      <available unit='bytes'>13154814738432</available>
      <source>
        <host name='ceph-node1' port='6789'/>
        <name>rbd</name>
      </source>
    </pool>
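
pool-info reports Autostart: no; if you want libvirt to bring the pool back up on its own after the host reboots, you could optionally enable it:

    [root@ceph-host ~]$ virsh pool-autostart rbd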

Try creating an rbd image with qemu-img:

    [root@ceph-host ~]$ qemu-img create -f rbd rbd:rbd/test-from-host 10G
    Formatting 'rbd:rbd/test-from-host', fmt=rbd size=10737418240
    [root@ceph-host ~]$ qemu-img info rbd:rbd/test-from-host
    image: json:{"driver": "raw", "file": {"pool": "rbd", "image": "test-from-host", "driver": "rbd"}}
    file format: raw
    virtual size: 10 GiB (10737418240 bytes)
    disk size: unavailable
    cluster_size: 4194304

Inspect the image with the rbd command and with virsh:

    [root@ceph-host ~]$ rbd ls
    test-from-host
    [root@ceph-host ~]$ rbd du
    NAME           PROVISIONED USED
    test-from-host      10 GiB  0 B
    [root@ceph-host ~]$ virsh vol-list rbd
     Name             Path
    --------------------------------------
     test-from-host   rbd/test-from-host
    [root@ceph-host ~]$ virsh vol-info rbd/test-from-host
    Name:           test-from-host
    Type:           network
    Capacity:       10.00 GiB
    Allocation:     10.00 GiB
    [root@ceph-host ~]$ virsh vol-dumpxml rbd/test-from-host
    <volume type='network'>
      <name>test-from-host</name>
      <key>rbd/test-from-host</key>
      <source>
      </source>
      <capacity unit='bytes'>10737418240</capacity>
      <allocation unit='bytes'>10737418240</allocation>
      <target>
        <path>rbd/test-from-host</path>
        <format type='raw'/>
      </target>
    </volume>
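
To actually use such an image from a guest, a network disk pointing at the RBD image can be attached to an existing domain (a sketch; the target guest, device name, and temporary file path are arbitrary, and with cephx disabled no <auth> element is needed):

    # write a libvirt disk definition for the RBD image ...
    cat > /tmp/rbd-disk.xml <<'EOF'
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/test-from-host'>
        <host name='ceph-node1' port='6789'/>
      </source>
      <target dev='vdd' bus='virtio'/>
    </disk>
    EOF
    # ... and attach it to a running guest, persisting it in the domain XML
    [root@ceph-host ~]$ virsh attach-device ceph-node1 /tmp/rbd-disk.xml --persistent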

If the host needs to be shut down, simply power off the three virtual machines first. After the host comes back up, start the three VMs again and the cluster will automatically return to HEALTH_OK.
