@tony-yin
2017-11-30T11:15:44.000000Z
Ceph
After reading 徐小胖's article 大话Cephx, I put its guesses and open questions to the test. This post verifies some of the original article's claims and conclusions, then extends them with a series of further experiments, speculation, and summaries. The payoff was substantial: besides confirming several of the original conclusions, I also found a few problems in them, and, above all, hands-on work surfaced some wonderful scenarios and discoveries of my own.
The hands-on tasks in this post, and how they went, are as follows:
Delete client.admin.keyring and verify the behaviour under cephx
Switch the cluster between auth = cephx and auth = none
Monitor keyring: which file is used, and whether its content matters
OSD keyring: recover the correct keyring through the Mon
client.admin.keyring: recover the correct keyring through the Mon
Mon caps
OSD caps
Delete every keyring file, then restore ceph.conf and re-enable cephx
Re-enable cephx without restarting the OSDs (osd keyring)
User permissions for accessing cluster RBD
To begin, the keyring is present on the admin node, so the cluster can be accessed normally:
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log rbdmap
ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring
[root@node1 ceph]# ceph -s
cluster:
id: 99480db2-f92f-481f-b958-c03c261918c6
health: HEALTH_WARN
no active mgr
Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
application not enabled on 3 pool(s)
clock skew detected on mon.node2, mon.node3
services:
mon: 3 daemons, quorum node1,node2,node3
mgr: no daemons active
osd: 6 osds: 5 up, 5 in
rgw: 1 daemon active
rgw-nfs: 1 daemon active
data:
pools: 10 pools, 444 pgs
objects: 257 objects, 36140 kB
usage: 6256 MB used, 40645 MB / 46901 MB avail
pgs: 63.288% pgs not active
311/771 objects degraded (40.337%)
158 undersized+degraded+peered
158 active+undersized+degraded
65 down
58 incomplete
5 active+clean+remapped
Move the keyring file elsewhere, which is effectively the same as deleting it; accessing the cluster now fails:
[root@node1 ceph]# mv ceph.client.admin.keyring /tmp/
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-mgr.keyring ceph.bootstrap-osd.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph-deploy-ceph.log ceph.mon.keyring rbdmap
[root@node1 ceph]# ceph -s
2017-11-23 18:07:48.685028 7f63f6935700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2017-11-23 18:07:48.685094 7f63f6935700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2017-11-23 18:07:48.685098 7f63f6935700 0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster
Copy it back, and the cluster is accessible again:
[root@node1 ceph]# mv /tmp/ceph.client.admin.keyring ./
[root@node1 ceph]# ceph -s
cluster:
id: 99480db2-f92f-481f-b958-c03c261918c6
health: HEALTH_WARN
no active mgr
Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
application not enabled on 3 pool(s)
clock skew detected on mon.node2, mon.node3
On node3 there is no keyring file under /etc/ceph/, so it cannot connect to the cluster either:
[root@node3 ceph]# ls
ceph.conf ceph-deploy-ceph.log rbdmap
[root@node3 ceph]# ceph -s
2017-11-23 17:59:16.659034 7fbe34678700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2017-11-23 17:59:16.659085 7fbe34678700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2017-11-23 17:59:16.659089 7fbe34678700 0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster
Conclusion: when auth in ceph.conf is configured as cephx, a key file is required to access the cluster.
Working in /etc/ceph/ on node3: first delete the ceph.client.admin.keyring file, then change the auth settings from cephx to none, then restart the monitor, then the OSDs. At this point the cluster still cannot be accessed, because cephx applies to the whole cluster, not to a single node. Next, make the same change (cephx to none) on every other node and restart their monitors and OSDs; only then can the cluster be accessed without any keyring file.
# Delete the keyring file
[root@node3 ~]# cd /etc/ceph/
[root@node3 ceph]# ls
ceph.client.admin.keyring ceph.conf ceph-deploy-ceph.log rbdmap
[root@node3 ceph]# mv ceph.client.admin.keyring /tmp/
# Change the cephx settings
[root@node3 ceph]# cat ceph.conf
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.1.0/24
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[root@node3 ceph]# vim ceph.conf
[root@node3 ceph]# cat ceph.conf
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
public network = 192.168.1.0/24
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[root@node3 ceph]# systemctl restart ceph-mon.target
[root@node3 ceph]# systemctl restart ceph-osd.target
# After changing only a single node, the cluster is still inaccessible
[root@node3 ceph]# ceph -s
2017-11-27 23:05:23.022571 7f5200c2f700 0 librados: client.admin authentication error (95) Operation not supported
[errno 95] error connecting to the cluster
# After making the same change on the other nodes and restarting, cluster access works again
[root@node3 ceph]# ceph -s
cluster:
id: 99480db2-f92f-481f-b958-c03c261918c6
health: HEALTH_WARN
...
Conclusion:
When auth is configured as cephx, accessing the cluster requires a key file; when auth is none, no key file is needed any more. (The configuration change must be made on every node in the cluster to take effect, not on a single node.)
/etc/ceph/ and /var/lib/ceph/mon/ceph-node1/ each contain a mon keyring:
[root@node1 ceph-node1]# cd /etc/ceph/
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log rbdmap
ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring
[root@node1 ceph]# cd /var/lib/ceph/mon/ceph-node1/
[root@node1 ceph-node1]# ls
done keyring kv_backend store.db systemd
First delete /etc/ceph/ceph.mon.keyring; the cluster is still accessible:
[root@node1 ceph]# rm ceph.mon.keyring
rm: remove regular file ‘ceph.mon.keyring’? y
[root@node1 ceph]# systemctl restart ceph-mon@node1.service
[root@node1 ceph]# ceph -s
cluster:
id: 99480db2-f92f-481f-b958-c03c261918c6
health: HEALTH_WARN
no active mgr
Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
application not enabled on 3 pool(s)
clock skew detected on mon.node2
...
...
Now delete /var/lib/ceph/mon/ceph-node1/keyring as well:
[root@node1 ceph-node1]# rm keyring
rm: remove regular file ‘keyring’? y
[root@node1 ceph-node1]# systemctl restart ceph-mon@node1.service
[root@node1 ceph-node1]# ceph -s
Accessing the cluster now times out, and the log file shows that the Mon failed to initialize:
2017-11-24 00:33:55.812955 7fa16f995e40 -1 auth: error reading file: /var/lib/ceph/mon/ceph-node1/keyring: can't open /var/lib/ceph/mon/ceph-node1/keyring: (2) No such file or directory
2017-11-24 00:33:55.812991 7fa16f995e40 -1 mon.node1@-1(probing) e1 unable to load initial keyring /etc/ceph/ceph.mon.node1.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,
2017-11-24 00:33:55.812999 7fa16f995e40 -1 failed to initialize
OK, now let's try the reverse: delete /var/lib/ceph/mon/ceph-node1/keyring but copy /etc/ceph/ceph.mon.keyring back. Here something unexpected happens: the mon still fails to initialize.
Conclusion:
Monitor startup requires a keyring file for key authentication, and it must be the one under /var/lib/ceph/mon/ceph-node1/; the ceph.mon.keyring under /etc/ceph/ plays no role.
[root@node1 ceph-node1]# rm keyring
rm: remove regular file ‘keyring’? y
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log rbdmap
ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring
[root@node1 ceph]# ceph -s
// timeout
...
What mon.log shows:
2017-11-24 00:44:26.534865 7ffaf5117e40 -1 auth: error reading file: /var/lib/ceph/mon/ceph-node1/keyring: can't open /var/lib/ceph/mon/ceph-node1/keyring: (2) No such file or directory
2017-11-24 00:44:26.534901 7ffaf5117e40 -1 mon.node1@-1(probing) e1 unable to load initial keyring /etc/ceph/ceph.mon.node1.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,
2017-11-24 00:44:26.534916 7ffaf5117e40 -1 failed to initialize
At this point we can conclude that the file the monitor depends on at initialization is /var/lib/ceph/mon/ceph-node1/keyring, not /etc/ceph/ceph.mon.keyring.
[root@node1 ceph-node1]# cat keyring
[mon.]
key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph-node1]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph-node1]# vim keyring
[root@node1 ceph-node1]# cat keyring
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
Expected result:
[root@node1 ceph-node1]# systemctl restart ceph-mon.target
[root@node1 ceph-node1]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
The puzzling reality:
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
Sometimes the pre-modification keyring comes back, sometimes the modified one. Faced with this, let's use the logs to observe where the keyring is actually fetched from.
Log entries in node1's mon.log:
2017-11-24 09:30:08.697047 7f9b73e09700 0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:08.697106 7f9b73e09700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1169357136' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:10.020571 7f9b73e09700 0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:10.020641 7f9b73e09700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/2455152702' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:11.393391 7f9b73e09700 0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:11.393452 7f9b73e09700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1704778092' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:12.669987 7f9b73e09700 0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:12.670049 7f9b73e09700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/275069695' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:14.113077 7f9b73e09700 0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:14.113147 7f9b73e09700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/3800873459' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:15.742038 7f9b73e09700 0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:15.742106 7f9b73e09700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1908944728' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:17.629681 7f9b73e09700 0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:17.629729 7f9b73e09700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/2193002591' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Log entries in node2's mon.log:
2017-11-24 09:29:23.799402 7fdb3c0ae700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4284881078' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:29:26.030516 7fdb3c0ae700 0 mon.node2@1(peon) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:29:26.030588 7fdb3c0ae700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4157525590' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:29:38.637677 7fdb3c0ae700 0 mon.node2@1(peon) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:29:38.637748 7fdb3c0ae700 0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4028820259' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Conclusion:
Even if the Monitor's key has been tampered with, the Monitor still starts. In other words, the Monitor only needs the keyring file to exist at startup; its content is ignored and does not matter.
Which keyring is read when querying is apparently random and not necessarily the current node's; the exact selection mechanism will have to wait for a look at the source code.
An OSD, by contrast, needs a key to log in to the cluster at startup. That key is stored in the Monitor's database, so at login the local keyring is matched against the one held by the Monitor, and startup only succeeds if they match.
Below, let's deliberately corrupt the local OSD keyring and restart the OSD to see the effect.
# Corrupt the key file
[root@node3 ceph]# cd /var/lib/ceph/osd/ceph-2
[root@node3 ceph-2]# ls
activate.monmap active block bluefs ceph_fsid fsid keyring kv_backend magic mkfs_done ready systemd type whoami
[root@node3 ceph-2]# cat keyring
[osd.2]
key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# vim keyring
[root@node3 ceph-2]# cat keyring
[osd.2]
key = BBBp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# systemctl restart ceph-osd@2.service
# After the restart, the OSD's state is down
[root@node3 ceph-2]# ceph osd tree | grep osd.2
2 hdd 0.00980 osd.2 down 1.00000 1.00000
The log shows that init failed, caused by an auth error:
2017-11-27 23:52:18.069207 7fae1e8d2d00 -1 auth: error parsing file /var/lib/ceph/osd/ceph-2/keyring
2017-11-27 23:52:18.069285 7fae1e8d2d00 -1 auth: failed to load /var/lib/ceph/osd/ceph-2/keyring: (5) Input/output error
...
2017-11-27 23:52:41.232803 7f58d15ded00 -1 ** ERROR: osd init failed: (5) Input/output error
We can query the Monitor database for the correct keyring, fix the broken one, and restart the OSD:
# Query the osd keyring stored in the Monitor database
[root@node3 ceph-2]# ceph auth get osd.2
exported keyring for osd.2
[osd.2]
key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
caps mgr = "allow profile osd"
caps mon = "allow profile osd"
caps osd = "allow *"
# Fix the keyring
[root@node3 ceph-2]# vim keyring
[root@node3 ceph-2]# cat keyring
[osd.2]
key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# systemctl restart ceph-osd@2.service
# After restarting the OSD, osd.2's state is back to up
[root@node3 ceph-2]# ceph osd tree | grep osd.2
2 hdd 0.00980 osd.2 up 1.00000 1.00000
Conclusion:
An OSD needs the correct keyring to start; with a wrong one it will not come up. The authoritative copy of the keyring is stored in the Monitor's database.
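The manual check above — comparing the local file against `ceph auth get osd.2` — can be expressed in a few lines. This is an illustrative sketch of my own, not anything Ceph ships; the parser handles just the simple `[entity]` / `key = ...` layout seen in this post:

```python
import re

def parse_keyring(text):
    """Parse the INI-like keyring format into {entity: key}."""
    entries, entity = {}, None
    for line in text.splitlines():
        line = line.strip()
        section = re.match(r"\[(.+)\]$", line)
        if section:
            entity = section.group(1)
        elif entity and re.match(r"key\s*=", line):
            # keep everything after the first '=', since base64 keys end in '=='
            entries[entity] = line.split("=", 1)[1].strip()
    return entries

def keyring_matches(local_text, exported_text, entity):
    """True if the local daemon keyring agrees with `ceph auth get` output."""
    return (parse_keyring(local_text).get(entity)
            == parse_keyring(exported_text).get(entity))
```

For osd.2 this returns False while the local key starts with BBB, and True once the key is restored from the Mon database.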
Earlier, by deleting the client keyring, we verified that with auth=cephx a client needs a keyring to access the cluster. But is its content ignored, as with the Monitor, or must it match exactly, as with the OSD?
# Modify ceph.client.admin.keyring
[root@node3 ceph-2]# cd /etc/ceph/
[root@node3 ceph]# ls
ceph.client.admin.keyring ceph.conf ceph-deploy-ceph.log rbdmap
[root@node3 ceph]# cat ceph.client.admin.keyring
[client.admin]
key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
[root@node3 ceph]# vim ceph.client.admin.keyring
[root@node3 ceph]# cat ceph.client.admin.keyring
[client.admin]
key = BBBB7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
# Accessing the cluster now fails
[root@node3 ceph]# ceph -s
2017-11-28 00:06:05.771604 7f3a69ccf700 -1 auth: error parsing file /etc/ceph/ceph.client.admin.keyring
2017-11-28 00:06:05.771622 7f3a69ccf700 -1 auth: failed to load /etc/ceph/ceph.client.admin.keyring: (5) Input/output error
2017-11-28 00:06:05.771634 7f3a69ccf700 0 librados: client.admin initialization error (5) Input/output error
[errno 5] error connecting to the cluster
So accessing the cluster requires the correct keyring. How do we repair it? As you can probably guess, the mechanism is the same as for the OSD: the correct keyring is also stored in the Monitor's database.
# Fetching client.admin directly fails
[root@node3 ceph]# ceph auth get client.admin
2017-11-28 00:08:19.159073 7fcabb297700 -1 auth: error parsing file /etc/ceph/ceph.client.admin.keyring
2017-11-28 00:08:19.159079 7fcabb297700 -1 auth: failed to load /etc/ceph/ceph.client.admin.keyring: (5) Input/output error
2017-11-28 00:08:19.159090 7fcabb297700 0 librados: client.admin initialization error (5) Input/output error
[errno 5] error connecting to the cluster
# The monitor's keyring file must be supplied to fetch client.admin.keyring
[root@node3 ceph]# ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-node3/keyring
exported keyring for client.admin
[client.admin]
key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
# Fix the keyring
[root@node3 ceph]# vim ceph
ceph.client.admin.keyring ceph.conf ceph-deploy-ceph.log
[root@node3 ceph]# vim ceph.client.admin.keyring
[root@node3 ceph]# cat ceph.client.admin.keyring
[client.admin]
key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
# Cluster access succeeds
[root@node3 ceph]# ceph -s
cluster:
id: 99480db2-f92f-481f-b958-c03c261918c6
health: HEALTH_WARN
...
A surprising detail appeared above: fetching the OSD's keyring with ceph auth worked directly, while fetching client.admin.keyring required passing the monitor keyring. The error messages explain why: ceph auth itself has to connect to the cluster as a client first.
Conclusion:
Client access works like the OSD's: the correct keyring must match the corresponding entry in the Monitor's database. And when client.admin.keyring is broken, reading keyrings via ceph auth requires adding the monitor keyring options.
The Monitor's r capability grants read permission, but which operations does that cover? Here, read permission means permission to read the information in the Monitor's database. As the keeper of cluster state, the MON stores in its database (/var/lib/ceph/mon/ceph-$hostname/store.db) the cluster's state maps (Cluster Map), which include but are not limited to:
CRUSH Map
OSD Map
MON Map
MDS Map
PG Map
So next, let's create a new user with only read permission and run some operations to verify what read permission actually allows:
ceph auth get-or-create client.mon_r mon 'allow r' >> /root/key
[root@node3 ceph]# ceph auth get client.mon_r
exported keyring for client.mon_r
[client.mon_r]
key = AQABvRxaBS6BBhAAz9uwjYCT4xKavJhobIK3ig==
caps mon = "allow r"
ceph --name client.mon_r --keyring /root/key -s // ok
ceph --name client.mon_r --keyring /root/key osd crush dump // ok
ceph --name client.mon_r --keyring /root/key osd getcrushmap -o crushmap.map // ok
ceph --name client.mon_r --keyring /root/key osd dump // ok
ceph --name client.mon_r --keyring /root/key osd tree // ok
ceph --name client.mon_r --keyring /root/key osd stat // ok
ceph --name client.mon_r --keyring /root/key pg dump // ok
ceph --name client.mon_r --keyring /root/key pg stat // ok
Two write operations were attempted; both fail with permission denied:
[root@node3 ceph]# rados --name client.mon_r --keyring /root/key -p testpool put crush crushmap.map
error putting testpool/crush: (1) Operation not permitted
[root@node3 ceph]# ceph --name client.mon_r --keyring /root/key osd out osd.0
Error EACCES: access denied
Note: although the outputs above include osd and pg information, all of it falls within the scope of the cluster maps, so this state data is served by the Monitor.
Conclusion:
The Monitor's read permission means fetching the various maps from the Monitor database, as detailed above. It can only read state information: it cannot fetch actual object data, and it cannot perform write operations on daemons such as OSDs.
The w permission is only effective in combination with r; a cap with w alone is refused with access denied for every command. So to test w, we have to grant r alongside it:
ceph auth get-or-create client.mon_rw mon 'allow rw' >> /root/key
With w, non-read operations on cluster components become possible, for example:
# Mark an OSD out
ceph osd out
# Remove an OSD
ceph osd rm
# Repair a PG
ceph pg repair
# Replace the CRUSH map
ceph osd setcrushmap
# Remove a MON
ceph mon rm
...
# and many more operations along these lines
Conclusion:
The Mon's r permission can read the state of every cluster component but cannot modify it; the w permission can.
Note: the write access granted by w here only covers modifying component state; it does not include read or write access to cluster objects. Component state lives in the Mon, while object data lives in the OSDs, and this w is only the Mon's write permission, so this is exactly what you would expect.
The MON's x permission is very narrow: it only relates to auth, covering commands such as ceph auth list and ceph auth get. Like w, the x permission only takes effect when combined with r:
# Using the rw user created above, auth list fails with an auth error
[root@node3 ~]# ceph --name client.mon_rw --keyring /root/key auth list
2017-11-28 21:28:10.620537 7f0d15967700 0 librados: client.mon_rw authentication error (22) Invalid argument
InvalidArgumentError does not take keyword arguments
# With a user holding rx permissions, auth list succeeds
[root@node3 ~]# ceph --name client.mon_rx --keyring /root/key auth list
installed auth entries:
osd.0
key: AQDaTgBav2MgDBAALE1GEEfbQN73xh8V7ISvFA==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
...
...
Note that this appears to be a typo in 徐小胖's original post, which runs the command as client.mon.rw; doing the experiment yourself turns up many things that reading alone never would.
Conclusion:
The x permission is only effective together with r, and it only covers auth-related operations.
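The observed behaviour of the three mon caps can be summed up in a toy checker. This is my own simplification, not Ceph's real MonCap parser: r reads state, w changes state but is only effective alongside r, and x covers auth commands and likewise needs r:

```python
def mon_cap_allows(cap, op):
    """Toy model of the mon cap behaviour observed above.

    cap: a mon cap string such as 'allow r', 'allow rw', 'allow *'
    op:  'r' (read state), 'w' (change state), 'x' (auth commands)
    """
    perms = cap.replace("allow", "").strip()
    if perms == "*":
        return True
    if op in ("w", "x"):
        # w and x are only effective in combination with r
        return "r" in perms and op in perms
    return op in perms
```

Under this model, mon_cap_allows('allow w', 'w') is False, matching the access denied seen when w is granted on its own.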
rwx: not much to say here; as you can guess, it is simply all of the rwx permissions combined.
The chapter on OSD caps needs another round of research before it can be published.
If every key is deleted, can they really all be recovered? "Every key" here means:
MON: /var/lib/ceph/mon/ceph-$hostname/keyring
OSD: /var/lib/ceph/osd/ceph-$id/keyring
Client: /etc/ceph/ceph.client.admin.keyring
# Delete the mon keyring
[root@node1 ceph-node1]# mv keyring /root/
# Delete ceph.conf
[root@node1 ceph-node1]# mv /etc/ceph/ceph.conf /root/
# Delete client.admin.keyring
[root@node1 ceph-node1]# mv /etc/ceph/ceph.client.admin.keyring /root
# Attempting to access the cluster fails
[root@node1 ceph-node1]# ceph -s
2017-11-29 23:57:14.195467 7f25dc4cc700 -1 Errors while parsing config file!
2017-11-29 23:57:14.195571 7f25dc4cc700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:14.195579 7f25dc4cc700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:14.195580 7f25dc4cc700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
# Attempting to fetch auth list fails too
[root@node1 ceph-node1]# ceph auth list
2017-11-29 23:57:27.037435 7f162c5a7700 -1 Errors while parsing config file!
2017-11-29 23:57:27.037450 7f162c5a7700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:27.037452 7f162c5a7700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:27.037453 7f162c5a7700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
OK, let's start the repair.
In Ceph, the keys of every account except mon. are stored in the Mon's database (leveldb). The mon. user's key is not in the database; it is read at MON startup from the keyring file in the Mon's data directory, which is exactly the conclusion we reached earlier. So we can forge an arbitrary keyring, drop it into the Mon directory, sync it to every Mon node, and then restart all three Mons.
[root@node1 ceph-node1]# cd /var/lib/ceph/mon/ceph-node1/
[root@node1 ceph-node1]# ls
done kv_backend store.db systemd
[root@node1 ceph-node1]# vim keyring
# Forge a keyring; note the "tony" in the middle, an obvious giveaway that it is forged
[root@node1 ceph-node1]# cat keyring
[mon.]
key = AQCtonyZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
caps mon = "allow *"
# Restart the mon
[root@node1 ceph-node1]# service ceph-mon@node1 restart
Redirecting to /bin/systemctl restart ceph-mon@node1.service
The result:
# The monitor log shows mon.node1@0 initializing successfully and winning the leader election
2017-11-30 00:15:04.042157 7f8c4e28a700 0 log_channel(cluster) log [INF] : mon.node1 calling new monitor election
2017-11-30 00:15:04.042299 7f8c4e28a700 1 mon.node1@0(electing).elector(934) init, last seen epoch 934
2017-11-30 00:15:04.048498 7f8c4e28a700 0 log_channel(cluster) log [INF] : mon.node1 calling new monitor election
2017-11-30 00:15:04.048605 7f8c4e28a700 1 mon.node1@0(electing).elector(937) init, last seen epoch 937, mid-election, bumping
2017-11-30 00:15:04.078454 7f8c4e28a700 0 log_channel(cluster) log [INF] : mon.node1@0 won leader election with quorum 0,1,2
Note (important): although the mon reads its keyring at startup without caring whether the content is correct, that does not mean the keyring can be modified arbitrarily. It must follow a certain format: in my experiments the key had to begin with the uppercase letters AQC, and there are presumably other requirements, for instance whether it must end with == and whether the length is fixed. There are too many possibilities to brute-force by hand; reading the source code would settle it, and interested readers might find some fun surprises there. Does that mean forging is hard? Not really: the safest approach is simply to copy a Mon keyring from another cluster. A carelessly hand-forged key makes startup fail like this:
2017-11-29 23:49:50.134137 7fcab3e23700 -1 cephx: cephx_build_service_ticket_blob failed with error invalid key
2017-11-29 23:49:50.134140 7fcab3e23700 0 mon.node1@0(probing) e1 ms_get_authorizer failed to build service ticket
2017-11-29 23:49:50.134393 7fcab3e23700 0 -- 192.168.1.58:6789/0 >> 192.168.1.61:6789/0 conn(0x7fcacd15d800 :-1 s=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=0 cs=0 l=0).handle_connect_reply connect got BADAUTHORIZER
Without /etc/ceph/ceph.conf we cannot run any ceph commands, so we need to reconstruct it as faithfully as possible. The fsid can be read from the ceph_fsid file in any OSD's directory (/var/lib/ceph/osd/ceph-$num/), and mon_initial_members and mon_host are just the hostnames and IPs of the cluster nodes, which we already know.
# Rebuild ceph.conf
[root@node1 ceph-node1]# cat /var/lib/ceph/osd/ceph-0/ceph_fsid
99480db2-f92f-481f-b958-c03c261918c6
[root@node1 ceph-node1]# vim /etc/ceph/ceph.conf
[root@node1 ceph-node1]# cat /etc/ceph/ceph.conf
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.1.0/24
# Accessing cluster status via the mon keyring succeeds
[root@node1 ceph-node1]# ceph -s --name mon. --keyring /var/lib/ceph/mon/ceph-node1/keyring
cluster:
id: 99480db2-f92f-481f-b958-c03c261918c6
health: HEALTH_OK
services:
mon: 3 daemons, quorum node1,node2,node3
mgr: node1_mgr(active)
osd: 6 osds: 6 up, 6 in
With the Mon keyring in place and ceph commands working again, we can use ceph auth get to fetch any keyring from the Monitor's leveldb:
# Fetch client.admin.keyring via the Mon
[root@node1 ceph-node1]# ceph --name mon. --keyring /var/lib/ceph/mon/ceph-node1/keyring auth get client.admin
exported keyring for client.admin
[client.admin]
key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
# Create /etc/ceph/ceph.client.admin.keyring and put the content above into it
[root@node1 ceph-node1]# vim /etc/ceph/ceph.client.admin.keyring
[root@node1 ceph-node1]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
# Test with a plain ceph -s: normal access is restored
[root@node1 ceph-node1]# ceph -s
cluster:
id: 99480db2-f92f-481f-b958-c03c261918c6
health: HEALTH_OK
services:
mon: 3 daemons, quorum node1,node2,node3
mgr: node1_mgr(active)
osd: 6 osds: 6 up, 6 in
First, thanks to 徐小胖 for giving me a way into cephx; I hope he keeps writing good articles, and I will keep reading them. This post took a long time: as the log timestamps show, it spans several days. Much of this practice cannot be done in one sitting; it takes repeated attempts and reflection to reach the final result. With Ceph you really have to get your hands dirty. Reading other people's articles is good, but remember to put them into practice, otherwise even the best article remains wishful thinking: you follow the author's reasoning without ever knowing how much time went into each short sentence and conclusion. A command that simply succeeds, or appears at exactly the right step, may be the distillation of someone else's countless failures. Verifying things ourselves not only confirms or refutes the original conclusions, but often turns up other useful knowledge along the way.
This write-up was very rewarding, and my understanding of cephx has gone up another level. The post sorted out the roles cephx plays in the different components and the dependencies between them, then examined each component's caps, and finally gave detailed step-by-step guides for recovering each keyring. Two tasks remain unfinished and will be completed when time allows!