@tony-yin
2017-12-07T06:58:52.000000Z
Ceph
The OSD is the most basic and most heavily used component of Ceph, so OSDs are constantly being created and deleted. Neither process, however, is as simple as running one or two commands. This article breaks OSD creation and deletion down into separate steps and explains each one; at the end there are one-click scripts for creating an OSD and for deleting a specified one. First, the creation steps.
ceph osd create [uuid]   # if no uuid argument is given, one is generated automatically; the command allocates and prints a new osd-number
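The command prints the allocated osd-number, so in a script it is convenient to capture it in a variable (the name osd_number is illustrative and is reused in the sketches below):

osd_number=$(ceph osd create)
echo "allocated osd.${osd_number}"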
mkdir -p /var/lib/ceph/osd/ceph-{osd-number}   # create the OSD working directory
mkfs.xfs -f /dev/vde                                 # format the new disk
mount /dev/vde /var/lib/ceph/osd/ceph-{osd-number}   # mount it on the OSD working directory
ceph-osd -i {osd-number} --mkfs --mkkey   # initialize the OSD data directory and generate its keyring
Note: the new OSD working directory must be empty before running the command above.
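A minimal guard for this requirement, assuming the allocated id is held in the osd_number variable from earlier:

osd_dir=/var/lib/ceph/osd/ceph-${osd_number}
# refuse to run --mkfs if the working directory already has contents
if [ -n "$(ls -A "$osd_dir" 2>/dev/null)" ]; then
    echo "$osd_dir is not empty" >&2
    exit 1
fi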
ceph auth add osd.{osd-number} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-{osd-number}/keyring   # register the OSD's keyring with the cluster
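To confirm the key was registered, the monitor can echo it back (a quick optional check):

ceph auth get osd.{osd-number}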
ceph osd crush add osd.{osd-number} {weight} [{bucket-type}={bucket-name} ...]   # place the new OSD in the CRUSH map
Alternatively, you can create the bucket first and then add the OSD to it, as shown below:
ceph osd crush add-bucket node5 host                                          # create a bucket named node5
ceph osd crush move node5 root=default                                        # move the new bucket under root
ceph osd crush create-or-move osd.{osd-number} 1.0 root=default host=node5   # place the new OSD under node5
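The {weight} is conventionally the disk's capacity in TiB (1.0 per TiB). A sketch that derives it from the block device size; the convention and the use of bc are assumptions, so adjust to your own policy:

size_bytes=$(blockdev --getsize64 /dev/vde)
weight=$(echo "scale=4; $size_bytes / 1024^4" | bc)   # bytes -> TiB
ceph osd crush create-or-move osd.${osd_number} ${weight} root=default host=node5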
Next, add the new OSD's section to /etc/ceph/ceph.conf:
[osd.{osd-number}]
host = {hostname}
devs = /dev/vde
/etc/init.d/ceph start osd.{osd-number}   # start the new OSD daemon (sysvinit)
At this point, the cluster status command ceph -s shows that the OSD count and the numbers of up and in OSDs have changed; watching with ceph -w, you can see the cluster go through the peering state and finally reach active+clean.
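To check the new OSD specifically rather than the whole cluster, ceph osd tree lists every OSD with its CRUSH position, weight, and up/down status:

ceph osd tree | grep "osd.${osd_number}"   # the new OSD should show as up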
Deletion reverses these steps. Assume the OSD's id is 1:
ceph osd out osd.1   # mark the OSD out; data begins migrating off it
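Marking the OSD out starts backfilling its data to other OSDs; a cautious script waits for the cluster to return to health before stopping the daemon (the polling interval is arbitrary):

until ceph health | grep -q HEALTH_OK; do
    sleep 10
done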
/etc/init.d/ceph stop osd.1   # stop the OSD daemon
ceph osd crush remove osd.1    # remove the specified OSD from the CRUSH map
ceph osd crush remove node1    # remove the bucket that held the OSD (this step is optional)
ceph auth del osd.1   # delete the OSD's authentication key
ceph osd rm 1         # remove the OSD from the cluster map
Finally, delete the OSD's section from /etc/ceph/ceph.conf:
[osd.1]
host = {hostname}
...
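Editing the file by hand works; a standalone sketch of automating it with sed, assuming each [osd.N] section ends at the next section header or at end of file:

sed -i '/^\[osd\.1\]/,/^\[/{/^\[osd\.1\]/d;/^\[/!d}' /etc/ceph/ceph.conf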
The one-click OSD creation script promised in the introduction. Note that this variant formats the disk as ext4 and mounts it under /data, and starts the daemon through a site-specific service wrapper, unlike the generic xfs steps above:

#! /bin/bash
start_time=`date +%s`
echo "start time: `date -d @$start_time "+%Y-%m-%d %H:%M:%S"`"

disk=/dev/$1
osd_id=`ceph osd create`   # allocate a new osd id
osd_dir=/data/osd.$osd_id
host=10.16.100.99
bucket=default_$host
echo "osd $osd_id is created ..."

# prepare and mount the data directory (ext4 on the raw disk)
mkdir -p $osd_dir
echo "osd directory: /data/osd.$osd_id is created ..."
mkfs -t ext4 -m 0 $disk
echo "disk $disk is built with ext4 file system ..."
mount -o noatime,user_xattr $disk $osd_dir
echo "device: $disk is mounted on directory: $osd_dir ..."

# initialize the OSD's data directory and journal
ceph mon getmap -o /tmp/monmap
ceph-osd -i $osd_id --monmap /tmp/monmap --mkfs --mkjournal
echo "osd $osd_id is initialized ..."

# append the OSD's section to ceph.conf
osd_uuid=`ceph-osd -i $osd_id --get-osd-fsid`
cat >> /etc/ceph/ceph.conf <<EOF
[osd.$osd_id]
host = $host
public addr = $host
cluster addr = $host
osd uuid = $osd_uuid
post stop command = python /usr/local/bin/syncfs.py -f /data/osd.$osd_id/ceph_fsid && /opt/MegaRAID/MegaCli/MegaCli64 -AdpCacheFlush -Aall
EOF
echo 'ceph config file is configured ...'

mcs3-ha service_ceph start osd.$osd_id   # site-specific service wrapper
echo "osd $osd_id start ..."
ceph osd crush add $osd_id 0 pool=default host=$bucket   # add to CRUSH with an initial weight of 0
echo "osd $osd_id is added in crush ..."
echo 'all works done ...'

end_time=`date +%s`
echo "end time: `date -d @$end_time "+%Y-%m-%d %H:%M:%S"`"
time_consuming=$(($end_time - $start_time))
echo "The total time consuming is $time_consuming s"
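Usage sketch (the file name create_osd.sh is an assumption; adjust the hardcoded host address and the mcs3-ha wrapper to your environment first):

./create_osd.sh vde   # builds an OSD on /dev/vde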
And the one-click script for deleting a specified OSD, whose id is passed as the first argument. The original left the ceph.conf cleanup as a placeholder comment; the sed at the end is a sketch that assumes each [osd.N] section ends at the next section header or at end of file:

#! /bin/bash
osd_id=$1
ceph osd out osd.$osd_id
/etc/init.d/ceph stop osd.$osd_id
ceph osd crush remove osd.$osd_id
ceph auth del osd.$osd_id
ceph osd rm $osd_id
# remove the [osd.$osd_id] section from ceph.conf
# (sketch: assumes the section ends at the next "[" header or EOF)
sed -i "/^\[osd\.$osd_id\]/,/^\[/{/^\[osd\.$osd_id\]/d;/^\[/!d}" /etc/ceph/ceph.conf
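Usage sketch (the file name remove_osd.sh is an assumption):

./remove_osd.sh 1   # removes osd.1 end to end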