@tony-yin
2017-11-30T03:15:44.000000Z
Ceph

After reading 徐小胖's 大话Cephx, I ran a series of hands-on experiments to test my guesses and doubts, verified the claims and conclusions of the original post, and followed up with a number of extended experiments, conjectures, and summaries. The payoff was substantial: besides confirming several of the original conclusions, I also found a few problems in them, and ran into some fascinating scenarios and discoveries of my own along the way.
The tasks attempted in this post are:

- client.admin.keyring
- the cephx configuration switch
- the Monitor keyring
- the OSD keyring
- recovering a correct client.admin.keyring through the Mon
- Mon caps
- OSD caps
- recovery after the keyring files are deleted
- recovery after ceph.conf is deleted
- CephX without restarting the OSDs
- accessing the cluster with osd.keyring
- RBD user permissions

To begin with, the keyring exists on the primary node, so the cluster can be accessed normally:
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log  rbdmap
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring
[root@node1 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
            no active mgr
            Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
            Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
            application not enabled on 3 pool(s)
            clock skew detected on mon.node2, mon.node3
  services:
    mon:     3 daemons, quorum node1,node2,node3
    mgr:     no daemons active
    osd:     6 osds: 5 up, 5 in
    rgw:     1 daemon active
    rgw-nfs: 1 daemon active
  data:
    pools:   10 pools, 444 pgs
    objects: 257 objects, 36140 kB
    usage:   6256 MB used, 40645 MB / 46901 MB avail
    pgs:     63.288% pgs not active
             311/771 objects degraded (40.337%)
             158 undersized+degraded+peered
             158 active+undersized+degraded
             65  down
             58  incomplete
             5   active+clean+remapped
Move the keyring file elsewhere, which is effectively the same as deleting it; accessing the cluster now fails:
[root@node1 ceph]# mv ceph.client.admin.keyring /tmp/
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap
[root@node1 ceph]# ceph -s
2017-11-23 18:07:48.685028 7f63f6935700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2017-11-23 18:07:48.685094 7f63f6935700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2017-11-23 18:07:48.685098 7f63f6935700  0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster
Copy it back and the cluster is reachable again:
[root@node1 ceph]# mv /tmp/ceph.client.admin.keyring ./
[root@node1 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
            no active mgr
            Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
            Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
            application not enabled on 3 pool(s)
            clock skew detected on mon.node2, mon.node3
node3 has no keyring file under /etc/ceph/, so it cannot connect to the cluster either:
[root@node3 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  rbdmap
[root@node3 ceph]# ceph -s
2017-11-23 17:59:16.659034 7fbe34678700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2017-11-23 17:59:16.659085 7fbe34678700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2017-11-23 17:59:16.659089 7fbe34678700  0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster
Conclusion: when auth in ceph.conf is set to cephx, a keyring file is required to access the cluster.
Working in /etc/ceph/ on node3: first delete ceph.client.admin.keyring, change the auth settings from cephx to none, then restart the monitor and then the OSDs. At this point the cluster still cannot be accessed, because cephx applies to the whole cluster, not to a single node. The same change has to be made on the other nodes: switch cephx to none and restart the monitors and OSDs. Only then can the cluster be accessed without a keyring file.
# delete the keyring file
[root@node3 ~]# cd /etc/ceph/
[root@node3 ceph]# ls
ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  rbdmap
[root@node3 ceph]# mv ceph.client.admin.keyring /tmp/
# change the cephx settings
[root@node3 ceph]# cat ceph.conf
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.1.0/24
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[root@node3 ceph]# vim ceph.conf
[root@node3 ceph]# cat ceph.conf
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
public network = 192.168.1.0/24
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[root@node3 ceph]# systemctl restart ceph-mon.target
[root@node3 ceph]# systemctl restart ceph-osd.target
# after changing only this one node the cluster is still unreachable
[root@node3 ceph]# ceph -s
2017-11-27 23:05:23.022571 7f5200c2f700  0 librados: client.admin authentication error (95) Operation not supported
[errno 95] error connecting to the cluster
# make the same change on the other nodes and restart; the cluster is reachable again
[root@node3 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
...
Conclusion: when auth is set to cephx, accessing the cluster requires a keyring file; when auth is set to none, the cluster can be accessed without one. (The change only takes effect when it is made on every node in the cluster, not on a single node.)
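The three auth_*_required options have to agree across all nodes. As a quick sanity check before restarting daemons, a small script (a hypothetical helper, not part of Ceph) can parse a ceph.conf and report whether cephx is still required anywhere:

```python
import configparser

CEPH_CONF = """
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = none
"""

def cephx_enabled(conf_text: str) -> bool:
    """Return True if any of the three auth options still requires cephx."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    opts = ("auth_cluster_required", "auth_service_required", "auth_client_required")
    # Ceph defaults to cephx when an option is absent, so treat absence as cephx.
    return any(cp.get("global", o, fallback="cephx").strip() == "cephx" for o in opts)

print(cephx_enabled(CEPH_CONF))  # -> True: switching only auth_client_required is not enough
```

Running this over the conf of each node would catch the "changed one node, forgot the rest" situation demonstrated above.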
There is a mon keyring in each of /etc/ceph and /var/lib/ceph/mon/ceph-node1:
[root@node1 ceph-node1]# cd /etc/ceph/
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log  rbdmap
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring
[root@node1 ceph]# cd /var/lib/ceph/mon/ceph-node1/
[root@node1 ceph-node1]# ls
done  keyring  kv_backend  store.db  systemd
First delete /etc/ceph/ceph.mon.keyring; the cluster is still accessible:
[root@node1 ceph]# rm ceph.mon.keyring
rm: remove regular file ‘ceph.mon.keyring’? y
[root@node1 ceph]# systemctl restart ceph-mon@node1.service
[root@node1 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
            no active mgr
            Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
            Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
            application not enabled on 3 pool(s)
            clock skew detected on mon.node2
......
Now delete /var/lib/ceph/mon/ceph-node1/keyring:
[root@node1 ceph-node1]# rm keyring
rm: remove regular file ‘keyring’? y
[root@node1 ceph-node1]# systemctl restart ceph-mon@node1.service
[root@node1 ceph-node1]# ceph -s
Accessing the cluster now times out, and the log file shows that the Mon failed to initialize:
2017-11-24 00:33:55.812955 7fa16f995e40 -1 auth: error reading file: /var/lib/ceph/mon/ceph-node1/keyring: can't open /var/lib/ceph/mon/ceph-node1/keyring: (2) No such file or directory
2017-11-24 00:33:55.812991 7fa16f995e40 -1 mon.node1@-1(probing) e1 unable to load initial keyring /etc/ceph/ceph.mon.node1.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,
2017-11-24 00:33:55.812999 7fa16f995e40 -1 failed to initialize
OK, so let's try deleting /var/lib/ceph/mon/ceph-node1/keyring while copying /etc/ceph/ceph.mon.keyring back. Here a surprise happened: the mon failed to initialize anyway.
Conclusion: Monitor startup requires a keyring file for key authentication, and it must be the one under /var/lib/ceph/mon/ceph-node1/; the ceph.mon.keyring under /etc/ceph/ plays no role.
[root@node1 ceph-node1]# rm keyring
rm: remove regular file ‘keyring’? y
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log  rbdmap
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring
[root@node1 ceph]# ceph -s
// timeout
...
What mon.log shows:
2017-11-24 00:44:26.534865 7ffaf5117e40 -1 auth: error reading file: /var/lib/ceph/mon/ceph-node1/keyring: can't open /var/lib/ceph/mon/ceph-node1/keyring: (2) No such file or directory
2017-11-24 00:44:26.534901 7ffaf5117e40 -1 mon.node1@-1(probing) e1 unable to load initial keyring /etc/ceph/ceph.mon.node1.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,
2017-11-24 00:44:26.534916 7ffaf5117e40 -1 failed to initialize
At this point we can conclude that monitor initialization depends on /var/lib/ceph/mon/ceph-node1/keyring, not on /etc/ceph/ceph.mon.keyring.
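The error message above actually lists the paths the mon tries, in order. The fallback behaviour can be sketched as a first-existing-path search (a simplified model with the candidate list copied from the log, not Ceph's actual code); note that plain /etc/ceph/ceph.mon.keyring is not in the list, which is exactly why copying it back did not help:

```python
import os
import tempfile

def mon_keyring_candidates(mon_data_dir: str, mon_name: str) -> list:
    """Search order as printed in the mon log: data-dir keyring first,
    then the /etc/ceph fallbacks for mon.<name>."""
    return [
        os.path.join(mon_data_dir, "keyring"),
        f"/etc/ceph/ceph.mon.{mon_name}.keyring",
        "/etc/ceph/ceph.keyring",
        "/etc/ceph/keyring",
        "/etc/ceph/keyring.bin",
    ]

def find_keyring(candidates: list):
    """Return the first existing candidate, or None (the mon then fails to init)."""
    for path in candidates:
        if os.path.exists(path):
            return path
    return None

# Demo with temp paths so we do not touch the real /etc/ceph:
with tempfile.TemporaryDirectory() as d:
    data_keyring = os.path.join(d, "keyring")
    open(data_keyring, "w").close()
    print(find_keyring([data_keyring, os.path.join(d, "other")]))  # the data-dir keyring wins
```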
[root@node1 ceph-node1]# cat keyring
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph-node1]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph-node1]# vim keyring
[root@node1 ceph-node1]# cat keyring
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
The expected result:
[root@node1 ceph-node1]# systemctl restart ceph-mon.target
[root@node1 ceph-node1]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
The puzzling reality:
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
Sometimes we get the pre-modification keyring and sometimes the post-modification one. Faced with this, let's watch the logs to see how the keyring is actually fetched.
The log in node1's mon.log:
2017-11-24 09:30:08.697047 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:08.697106 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1169357136' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:10.020571 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:10.020641 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/2455152702' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:11.393391 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:11.393452 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1704778092' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:12.669987 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:12.670049 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/275069695' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:14.113077 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:14.113147 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/3800873459' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:15.742038 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:15.742106 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1908944728' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:17.629681 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:17.629729 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/2193002591' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
The log in node2's mon.log:
2017-11-24 09:29:23.799402 7fdb3c0ae700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4284881078' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:29:26.030516 7fdb3c0ae700  0 mon.node2@1(peon) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:29:26.030588 7fdb3c0ae700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4157525590' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:29:38.637677 7fdb3c0ae700  0 mon.node2@1(peon) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:29:38.637748 7fdb3c0ae700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4028820259' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Conclusions:

- Even if the Monitor's key has been modified, the Monitor still starts. In other words, at startup the mon only needs the keyring file to exist; its content is effectively ignored.
- Which keyring a query returns appears random and is not necessarily the current node's: the logs show the `auth get` requests being dispatched to different monitors. The exact selection mechanism is something to dig out of the source code later.
- An OSD, by contrast, needs a valid key to join the cluster. That key is stored in the Monitor's database, so at login the local keyring is matched against the keyring held by the Monitor, and the OSD only starts successfully if they match.
Next, let's deliberately corrupt the local OSD keyring and restart the OSD to see what happens:
# corrupt the keyring file
[root@node3 ceph]# cd /var/lib/ceph/osd/ceph-2
[root@node3 ceph-2]# ls
activate.monmap  active  block  bluefs  ceph_fsid  fsid  keyring  kv_backend  magic  mkfs_done  ready  systemd  type  whoami
[root@node3 ceph-2]# cat keyring
[osd.2]
    key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# vim keyring
[root@node3 ceph-2]# cat keyring
[osd.2]
    key = BBBp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# systemctl restart ceph-osd@2.service
# after the restart the OSD is down
[root@node3 ceph-2]# ceph osd tree | grep osd.2
2   hdd 0.00980         osd.2     down  1.00000 1.00000
The log shows that init failed because auth authentication failed:
2017-11-27 23:52:18.069207 7fae1e8d2d00 -1 auth: error parsing file /var/lib/ceph/osd/ceph-2/keyring
2017-11-27 23:52:18.069285 7fae1e8d2d00 -1 auth: failed to load /var/lib/ceph/osd/ceph-2/keyring: (5) Input/output error
...
2017-11-27 23:52:41.232803 7f58d15ded00 -1 ** ERROR: osd init failed: (5) Input/output error
We can query the Monitor database for the correct keyring, fix the broken one, and restart the OSD:
# query the osd keyring stored in the Monitor database
[root@node3 ceph-2]# ceph auth get osd.2
exported keyring for osd.2
[osd.2]
    key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
    caps mgr = "allow profile osd"
    caps mon = "allow profile osd"
    caps osd = "allow *"
# fix the keyring
[root@node3 ceph-2]# vim keyring
[root@node3 ceph-2]# cat keyring
[osd.2]
    key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# systemctl restart ceph-osd@2.service
# after the restart osd.2 is back up
[root@node3 ceph-2]# ceph osd tree | grep osd.2
2   hdd 0.00980         osd.2       up  1.00000 1.00000
Conclusion: an OSD needs the correct keyring to start; with a wrong one it cannot start. The correct keyring is stored in the Monitor's database.
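The check the OSD fails can be pictured as comparing the key in its local keyring file against the key registered for the same entity in the Mon database. A toy model (plain string comparison over a parsed keyring; real cephx proves key possession cryptographically rather than sending the key):

```python
import configparser

# The corrupted local keyring from the experiment above.
LOCAL_KEYRING = """\
[osd.2]
key = BBBp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
"""

# What `ceph auth get osd.2` returned from the Mon database.
MON_DB = {"osd.2": "AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q=="}

def key_for(keyring_text: str, entity: str):
    """Extract the key for one entity section of a keyring."""
    cp = configparser.ConfigParser()
    cp.read_string(keyring_text)
    return cp.get(entity, "key", fallback=None)

def would_authenticate(keyring_text: str, entity: str) -> bool:
    """True only when the local key matches the Mon-registered key."""
    return key_for(keyring_text, entity) == MON_DB.get(entity)

print(would_authenticate(LOCAL_KEYRING, "osd.2"))  # -> False: the corrupted key is rejected
```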
Earlier, by deleting the client keyring, we verified that when auth=cephx a client needs a keyring to access the cluster. Is the client like the Monitor, where the content is not cared about, or like the OSD, where the keyring must match exactly?
# modify ceph.client.admin.keyring
[root@node3 ceph-2]# cd /etc/ceph/
[root@node3 ceph]# ls
ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  rbdmap
[root@node3 ceph]# cat ceph.client.admin.keyring
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
[root@node3 ceph]# vim ceph.client.admin.keyring
[root@node3 ceph]# cat ceph.client.admin.keyring
[client.admin]
    key = BBBB7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
# accessing the cluster fails
[root@node3 ceph]# ceph -s
2017-11-28 00:06:05.771604 7f3a69ccf700 -1 auth: error parsing file /etc/ceph/ceph.client.admin.keyring
2017-11-28 00:06:05.771622 7f3a69ccf700 -1 auth: failed to load /etc/ceph/ceph.client.admin.keyring: (5) Input/output error
2017-11-28 00:06:05.771634 7f3a69ccf700  0 librados: client.admin initialization error (5) Input/output error
[errno 5] error connecting to the cluster
So accessing the cluster requires the correct keyring. How do we repair it? As you can probably guess, the principle is the same as for the OSD: the correct keyring is also stored in the Monitor's database.
# fetching client.admin directly fails
[root@node3 ceph]# ceph auth get client.admin
2017-11-28 00:08:19.159073 7fcabb297700 -1 auth: error parsing file /etc/ceph/ceph.client.admin.keyring
2017-11-28 00:08:19.159079 7fcabb297700 -1 auth: failed to load /etc/ceph/ceph.client.admin.keyring: (5) Input/output error
2017-11-28 00:08:19.159090 7fcabb297700  0 librados: client.admin initialization error (5) Input/output error
[errno 5] error connecting to the cluster
# the monitor keyring must be supplied to fetch client.admin.keyring
[root@node3 ceph]# ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-node3/keyring
exported keyring for client.admin
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
# fix the keyring
[root@node3 ceph]# vim ceph.client.admin.keyring
[root@node3 ceph]# cat ceph.client.admin.keyring
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
# accessing the cluster succeeds
[root@node3 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
...
A surprising detail appeared here: fetching the OSD keyring with ceph auth above worked directly, while fetching client.admin.keyring required supplying the monitor keyring. The reason is visible in the error output: ceph auth itself must first connect to the cluster as a client.
Conclusion: like an OSD, a client accessing the cluster needs a correct keyring that matches the corresponding entry in the Monitor's database, and when client.admin.keyring is wrong, reading keyrings with ceph auth requires adding the monitor keyring options.
The Monitor's r capability grants read permission, but which operations does that cover? Read permission here means permission to read the information in the Monitor's database. As the maintainer of cluster state, the MON keeps a series of cluster maps in its database (/var/lib/ceph/mon/ceph-$hostname/store.db), including but not limited to:
- CRUSH Map
- OSD Map
- MON Map
- MDS Map
- PG Map

So next we can create a new user that has only read permission and use it to verify exactly which operations the read capability allows:
ceph auth get-or-create client.mon_r mon 'allow r' >> /root/key
[root@node3 ceph]# ceph auth get client.mon_r
exported keyring for client.mon_r
[client.mon_r]
    key = AQABvRxaBS6BBhAAz9uwjYCT4xKavJhobIK3ig==
    caps mon = "allow r"
ceph --name client.mon_r --keyring /root/key -s                               // ok
ceph --name client.mon_r --keyring /root/key osd crush dump                   // ok
ceph --name client.mon_r --keyring /root/key osd getcrushmap -o crushmap.map  // ok
ceph --name client.mon_r --keyring /root/key osd dump                         // ok
ceph --name client.mon_r --keyring /root/key osd tree                         // ok
ceph --name client.mon_r --keyring /root/key osd stat                         // ok
ceph --name client.mon_r --keyring /root/key pg dump                          // ok
ceph --name client.mon_r --keyring /root/key pg stat                          // ok
Trying two write operations, both fail with permission denied:
[root@node3 ceph]# rados --name client.mon_r --keyring /root/key -p testpool put crush crushmap.map
error putting testpool/crush: (1) Operation not permitted
[root@node3 ceph]# ceph --name client.mon_r --keyring /root/key osd out osd.0
Error EACCES: access denied
Note: although osd and pg information appears above, it all falls under the cluster-map umbrella, so all of this state data is fetched from the Monitor.
Conclusion: the Monitor's read capability corresponds to fetching the various maps from the Monitor database, as detailed above. It can only read state information: it cannot access actual object data, and it cannot perform write operations against the OSDs or other daemons.
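The behaviour above can be modelled as simple capability-letter matching (a toy model of cap checks; Ceph's real MonCap grammar is richer, with profiles and per-command restrictions):

```python
def parse_cap(cap: str) -> set:
    """'allow rw' -> {'r', 'w'}; 'allow *' -> all three letters."""
    spec = cap.removeprefix("allow").strip()
    return {"r", "w", "x"} if spec == "*" else set(spec)

def permitted(cap: str, required: str) -> bool:
    """An operation succeeds only when every required letter is granted."""
    return set(required) <= parse_cap(cap)

print(permitted("allow r", "r"))   # e.g. `ceph osd tree` needs only read: True
print(permitted("allow r", "w"))   # e.g. `ceph osd out` needs write: False
print(permitted("allow rw", "w"))  # True
print(permitted("allow rw", "x"))  # e.g. `ceph auth list` needs x: False
```

This also captures the sections below: w and x only become usable in practice alongside r, and `allow *` subsumes all three.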
The w capability only takes effect in combination with r; commands run with a bare w capability are always denied with access denied. So to test w we must grant r alongside it:
ceph auth get-or-create client.mon_rw mon 'allow rw' >> /root/key
With w we can perform non-read operations on the cluster components, for example:
# mark an OSD out
ceph osd out
# remove an OSD
ceph osd rm
# repair a PG
ceph pg repair
# replace the CRUSH map
ceph osd setcrushmap
# remove a MON
ceph mon rm
...
# many more operations, not listed one by one
Conclusion: the Mon's r capability can read the state of each cluster component but cannot modify it; the w capability can.
Note: the write access that w grants here only covers modifying component state. It does not include read or write access to the cluster's objects, because component state is stored in the Mon while object data lives in the OSDs, and this w is only a write capability on the Mon. Seen that way, it is easy to understand.
The MON's x capability is very narrow: it only concerns auth-related commands such as ceph auth list and ceph auth get. Like w, the x capability needs to be combined with r to have any effect:
# the rw user created above fails when accessing auth list
[root@node3 ~]# ceph --name client.mon_rw --keyring /root/key auth list
2017-11-28 21:28:10.620537 7f0d15967700  0 librados: client.mon_rw authentication error (22) Invalid argument
InvalidArgumentError does not take keyword arguments
# a user created with rx caps accesses auth list successfully
[root@node3 ~]# ceph --name client.mon_rx --keyring /root/key auth list
installed auth entries:
osd.0
    key: AQDaTgBav2MgDBAALE1GEEfbQN73xh8V7ISvFA==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
......
Note that 徐小胖's original post appears to contain a typo here: he ran the command as client.mon.rw. It goes to show that hands-on practice reveals things that reading alone never will.
Conclusion: the x capability likewise only works in combination with r, and it only covers auth-related operations.
Nothing much to say here; as you would guess, it simply grants all of r, w, and x.
This chapter needs more research before it can be published.
If all the keys are deleted, can they really be recovered? "All keys" means:

MON: /var/lib/ceph/mon/ceph-$hostname/keyring
OSD: /var/lib/ceph/osd/ceph-$id/keyring
Client: /etc/ceph/ceph.client.admin.keyring
# delete the mon keyring
[root@node1 ceph-node1]# mv keyring /root/
# delete ceph.conf
[root@node1 ceph-node1]# mv /etc/ceph/ceph.conf /root/
# delete client.admin.keyring
[root@node1 ceph-node1]# mv /etc/ceph/ceph.client.admin.keyring /root
# accessing the cluster fails
[root@node1 ceph-node1]# ceph -s
2017-11-29 23:57:14.195467 7f25dc4cc700 -1 Errors while parsing config file!
2017-11-29 23:57:14.195571 7f25dc4cc700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:14.195579 7f25dc4cc700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:14.195580 7f25dc4cc700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
# fetching the auth list fails too
[root@node1 ceph-node1]# ceph auth list
2017-11-29 23:57:27.037435 7f162c5a7700 -1 Errors while parsing config file!
2017-11-29 23:57:27.037450 7f162c5a7700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:27.037452 7f162c5a7700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:27.037453 7f162c5a7700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
OK, now let's start the repair.
In Ceph, the credentials of every account except mon. are stored in the Mon's leveldb database. The mon. user's key is not in the database; the MON reads it from the keyring file in its data directory at startup, which is exactly the conclusion we verified earlier. So we can forge an arbitrary keyring, place it in the Mon data directory, sync it to every Mon node, and restart the three Mons.
[root@node1 ceph-node1]# cd /var/lib/ceph/mon/ceph-node1/
[root@node1 ceph-node1]# ls
done  kv_backend  store.db  systemd
[root@node1 ceph-node1]# vim keyring
# forge a keyring; the "tony" in the middle makes it obviously fake
[root@node1 ceph-node1]# cat keyring
[mon.]
    key = AQCtonyZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
# restart the mon
[root@node1 ceph-node1]# service ceph-mon@node1 restart
Redirecting to /bin/systemctl restart ceph-mon@node1.service
And it works:
# the monitor log shows mon.node1@0 initialized successfully and was elected monitor leader
2017-11-30 00:15:04.042157 7f8c4e28a700  0 log_channel(cluster) log [INF] : mon.node1 calling new monitor election
2017-11-30 00:15:04.042299 7f8c4e28a700  1 mon.node1@0(electing).elector(934) init, last seen epoch 934
2017-11-30 00:15:04.048498 7f8c4e28a700  0 log_channel(cluster) log [INF] : mon.node1 calling new monitor election
2017-11-30 00:15:04.048605 7f8c4e28a700  1 mon.node1@0(electing).elector(937) init, last seen epoch 937, mid-election, bumping
2017-11-30 00:15:04.078454 7f8c4e28a700  0 log_channel(cluster) log [INF] : mon.node1@0 won leader election with quorum 0,1,2
Note (important): although the mon reads its keyring at startup without caring whether the content is correct, that does not mean the keyring can be edited arbitrarily. The keyring has to follow a certain format: in my experiments the key had to begin with the uppercase letters AQC, and there are surely other constraints. Must it end with ==? Is the length fixed? There are too many possibilities to brute-force by hand; the details are something to check in the source code later, and interested readers who try may well discover some fun behaviour. Does that make forging difficult? Not really: the simplest approach is to copy a Mon keyring from another cluster. A sloppily forged one makes startup fail like this:
2017-11-29 23:49:50.134137 7fcab3e23700 -1 cephx: cephx_build_service_ticket_blob failed with error invalid key
2017-11-29 23:49:50.134140 7fcab3e23700  0 mon.node1@0(probing) e1 ms_get_authorizer failed to build service ticket
2017-11-29 23:49:50.134393 7fcab3e23700  0 -- 192.168.1.58:6789/0 >> 192.168.1.61:6789/0 conn(0x7fcacd15d800 :-1 s=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=0 cs=0 l=0).handle_connect_reply connect got BADAUTHORIZER
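The AQ prefix is probably no accident: by my reading, the printed key is the base64 encoding of a small binary structure (a 2-byte key type, an 8-byte creation time, a 2-byte length, then the 16-byte AES secret; this layout is my assumption, to be verified against the CryptoKey encoding in the Ceph source). Since the type is 1 (AES) stored little-endian, every such key starts with "AQ", while the third character varies with the creation timestamp. A sketch of assembling such a blob:

```python
import base64
import os
import struct
import time

def make_cephx_key(secret: bytes = None, created: float = None) -> str:
    """Assemble a key blob: u16 type (1 = AES), u32 sec + u32 nsec, u16 len, secret.
    Layout is an assumption based on reading cephx key dumps, not an official API."""
    secret = secret or os.urandom(16)
    created = created if created is not None else time.time()
    sec, nsec = int(created), int((created % 1) * 1e9)
    blob = struct.pack("<HIIH", 1, sec, nsec, len(secret)) + secret
    return base64.b64encode(blob).decode()

key = make_cephx_key(created=1509400000)
print(key[:2])  # -> "AQ": the little-endian type field 0x0001 fixes the first two characters
```

A 16-byte secret gives a 28-byte blob, which base64 encodes to exactly 40 characters ending in "==", matching the shape of the keys seen throughout this post.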
Without /etc/ceph/ceph.conf we cannot run any ceph commands, so we need to reconstruct it as faithfully as possible. The fsid can be read from the ceph_fsid file in any OSD directory (/var/lib/ceph/osd/ceph-$num/), and mon_initial_members and mon_host are just the hostnames and IPs of the cluster nodes, which we know.
# restore ceph.conf
[root@node1 ceph-node1]# cat /var/lib/ceph/osd/ceph-0/ceph_fsid
99480db2-f92f-481f-b958-c03c261918c6
[root@node1 ceph-node1]# vim /etc/ceph/ceph.conf
[root@node1 ceph-node1]# cat /etc/ceph/ceph.conf
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.1.0/24
# accessing cluster status with the mon keyring succeeds
[root@node1 ceph-node1]# ceph -s --name mon. --keyring /var/lib/ceph/mon/ceph-node1/keyring
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1_mgr(active)
    osd: 6 osds: 6 up, 6 in
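The manual reconstruction above can also be scripted. A sketch (the node list is hard-coded from what we already know about this cluster, and the paths match this deployment; adapt both for any other cluster):

```python
from pathlib import Path

# Any OSD data dir still on disk carries the cluster fsid.
FSID_FILE = Path("/var/lib/ceph/osd/ceph-0/ceph_fsid")
NODES = {"node1": "192.168.1.58", "node2": "192.168.1.61", "node3": "192.168.1.62"}

def minimal_ceph_conf(fsid: str, nodes: dict) -> str:
    """Render the smallest ceph.conf that lets the ceph CLI find the mons."""
    return (
        "[global]\n"
        f"fsid = {fsid}\n"
        f"mon_initial_members = {', '.join(nodes)}\n"
        f"mon_host = {','.join(nodes.values())}\n"
        "auth_cluster_required = cephx\n"
        "auth_service_required = cephx\n"
        "auth_client_required = cephx\n"
        "public network = 192.168.1.0/24\n"
    )

if FSID_FILE.exists():
    print(minimal_ceph_conf(FSID_FILE.read_text().strip(), NODES))
```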
With the Mon keyring in place and the ability to run ceph commands, we can use ceph auth get to fetch any keyring from the Monitor leveldb:
# fetch client.admin.keyring via the Mon
[root@node1 ceph-node1]# ceph --name mon. --keyring /var/lib/ceph/mon/ceph-node1/keyring auth get client.admin
exported keyring for client.admin
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
# create /etc/ceph/ceph.client.admin.keyring and paste the content above into it
[root@node1 ceph-node1]# vim /etc/ceph/ceph.client.admin.keyring
[root@node1 ceph-node1]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
# test with a plain ceph -s: normal access is back
[root@node1 ceph-node1]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1_mgr(active)
    osd: 6 osds: 6 up, 6 in
First, thanks to 徐小胖 for providing the starting points on cephx; I hope more good posts follow, and I keep reading these quality articles. This post took me a long time, as the log timestamps show: the work spans several days. Much of this practice cannot be done in one sitting; it takes repeated attempts and reflection to reach the final result. With Ceph you really have to get hands-on. Reading other people's articles is good, but remember to put them into practice; otherwise even the best article remains armchair knowledge, with you simply following the author's train of thought. You never know how much time went into someone's short statement or conclusion, and the one command that succeeds at a given step may be the distillation of countless failures. Verifying things ourselves not only tests whether the original conclusions hold, it often uncovers other useful knowledge along the way.
This round of work was very rewarding and took my understanding of cephx up another level. The post sorted out the roles cephx plays in the different components and their dependency relationships, then examined each component's caps, and finally gave detailed steps for recovering each keyring. Two tasks remain unfinished; I'll complete them when I have time!