@dyj2017
2017-10-20T03:11:43.000000Z
Tags: ceph, operations (运维), osd
After a power outage, once power was restored, all OSDs under host node3 in the cluster were found to be down. Restarting the ceph-osd service on node3 did not bring the OSDs up, and activating all OSDs across the cluster did not help either.
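For reference, the attempts were along these lines (a sketch assuming a systemd-managed deployment; the device paths for node3 are assumptions, not the exact ones used):

On node3, restart all OSD daemons on the host:
# systemctl restart ceph-osd.target
From the admin node, attempt to re-activate node3's OSDs:
# ceph-deploy osd activate node3:/dev/sdb1 node3:/dev/sdc1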
[root@node1 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.05878 root default
-3       0.01959     host node1
 0   hdd 0.00980         osd.0      up  1.00000 1.00000
 3   hdd 0.00980         osd.3      up  1.00000 1.00000
-5       0.01959     host node2
 1   hdd 0.00980         osd.1      up  1.00000 1.00000
 4   hdd 0.00980         osd.4      up  1.00000 1.00000
-7       0.01959     host node3
 2   hdd 0.00980         osd.2    down        0 1.00000
 5   hdd 0.00980         osd.5    down        0 1.00000
All OSDs under the host were removed using the shell script from the earlier post “删除osd的shell脚本” (a shell script for deleting OSDs).
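That script boils down to the standard manual removal procedure. A minimal sketch for node3's two OSDs (ids 2 and 5, per the tree above):
# for id in 2 5; do
>   ceph osd out osd.$id           # mark the OSD out of data placement
>   ceph osd crush remove osd.$id  # remove it from the CRUSH map
>   ceph auth del osd.$id          # delete its cephx key
>   ceph osd rm osd.$id            # remove it from the OSD map
> done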
Then rebuild the OSDs by executing the following commands from the admin node:
# ceph-deploy osd create node3:/dev/sdb2 node3:/dev/sdc2
# ceph-deploy osd activate node1:/dev/sdb1 node2:/dev/sdb1 node3:/dev/sdb2 node1:/dev/sdc1 node2:/dev/sdc1 node3:/dev/sdc2
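(Note: this node:device syntax belongs to the older ceph-deploy 1.5.x line; ceph-deploy 2.x dropped the "osd activate" subcommand and uses, e.g., "ceph-deploy osd create --data /dev/sdb2 node3" instead.)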
Verify that the OSDs are up:
[root@node1 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.05878 root default
-3       0.01959     host node1
 0   hdd 0.00980         osd.0      up  1.00000 1.00000
 3   hdd 0.00980         osd.3      up  1.00000 1.00000
-5       0.01959     host node2
 1   hdd 0.00980         osd.1      up  1.00000 1.00000
 4   hdd 0.00980         osd.4      up  1.00000 1.00000
-7       0.01959     host node3
 2   hdd 0.00980         osd.2      up  1.00000 1.00000
 5   hdd 0.00980         osd.5      up  1.00000 1.00000
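Both OSDs on node3 are back up. As a final check, confirm the cluster returns to health once backfill finishes, e.g.:
# ceph -s          (overall status; wait for HEALTH_OK and active+clean PGs)
# ceph osd stat    (should report 6 osds: 6 up, 6 in)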