@tony-yin
2017-12-07T14:58:52.000000Z
Ceph
The OSD is the most basic and most frequently used building block of Ceph, so OSDs get created and deleted all the time. Neither operation is as simple as running one or two commands, however, so this article breaks OSD creation and deletion down into separate steps and explains each one; at the end there is a one-click script for deleting a specified OSD.
ceph osd create [uuid] # if no uuid is supplied, one is generated automatically; the command prints a new osd-number
mkdir -p /var/lib/ceph/osd/ceph-{osd-number}
mkfs.xfs -f /dev/vde
mount /dev/vde /var/lib/ceph/osd/ceph-{osd-number}
ceph-osd -i {osd-number} --mkfs --mkkey
Note: the new OSD's working directory must be empty before the command above is run.
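Since the later commands all reference the osd-number printed in the first step, it can be convenient to capture it in a variable and to check the empty-directory requirement before formatting. A minimal sketch, not part of the original procedure:
# Sketch only: capture the id returned by "ceph osd create" and make sure the
# working directory is empty before running ceph-osd --mkfs.
osd_id=$(ceph osd create)
osd_dir=/var/lib/ceph/osd/ceph-$osd_id
mkdir -p "$osd_dir"
if [ -n "$(ls -A "$osd_dir")" ]; then
    echo "$osd_dir is not empty, refusing to run --mkfs" >&2
    exit 1
fi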
ceph auth add osd.{osd-number} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-{osd-number}/keyring
ceph osd crush add osd.{osd-number} {weight} [{bucket-type}={bucket-name} ...]
This step can also be done by creating the bucket first and then adding the OSD to it:
ceph osd crush add-bucket node5 host # create a bucket of type host named node5
ceph osd crush move node5 root=default # move the new bucket under the default root
ceph osd crush create-or-move osd.{osd-number} 1.0 root=default host=node5 # place the new OSD under node5
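Either way, the resulting placement can be double-checked in the CRUSH tree before the daemon is started (node5 is the example bucket name from above):
# Verify that node5 and the new OSD show up under the default root.
ceph osd tree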
Next, add a section for the new OSD to /etc/ceph/ceph.conf:
[osd.{osd-number}]
host = {hostname}
devs = /dev/vde
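For reference, the same section can be appended non-interactively with a heredoc (the one-click creation script further down uses the same pattern). The id and hostname below are placeholders, not values from the article:
osd_id=12                      # assumed: the number returned by "ceph osd create"
cat >> /etc/ceph/ceph.conf <<EOF
[osd.$osd_id]
host = $(hostname -s)
devs = /dev/vde
EOF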
/etc/init.d/ceph start osd.{osd-number}
At this point the cluster status command ceph -s shows that the OSD count and the numbers of up and in OSDs have changed, and watching ceph -w shows the cluster go through the peering state and finally reach active+clean.
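Instead of watching ceph -w by hand, a small polling loop can wait for the cluster to settle. A minimal sketch, assuming no unrelated health warnings are present:
# Poll overall health until peering/backfill has finished and the cluster is clean again.
until ceph health | grep -q HEALTH_OK; do
    sleep 10
done
echo "cluster has returned to HEALTH_OK"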
Assume the OSD to be removed has id 1.
ceph osd out osd.1
/etc/init.d/ceph stop osd.1
ceph osd crush remove osd.1 # remove the specified OSD from the CRUSH map
ceph osd crush remove node1 # remove the bucket the OSD lived in (this step is optional)
ceph auth del osd.1
ceph osd rm 1
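One caveat: "ceph osd out osd.1" (the first command above) starts migrating data off the OSD, so it is safer to let that rebalance finish before stopping the daemon. A rough sketch of such a wait, assuming the only non-clean PG states at that point are the ones caused by the drain; this is not part of the original procedure:
# Wait until no PGs are still being remapped/backfilled/recovered after "ceph osd out".
while ceph pg stat | grep -Eq 'remapped|backfill|recover|degraded|peering'; do
    sleep 10
done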
Finally, remove the corresponding section from /etc/ceph/ceph.conf:
[osd.1]
host = {hostname}
...
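The section above still has to be deleted from /etc/ceph/ceph.conf by hand. A minimal sed sketch for automating that, assuming every [osd.N] section is terminated by a blank line:
# Assumption: osd sections in ceph.conf are separated by blank lines.
sed -i '/^\[osd\.1\]/,/^$/d' /etc/ceph/ceph.conf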
Below is a one-click script that strings all of the creation steps above together; it takes the bare device name (for example vde) as its only argument.
#! /bin/bash
start_time=`date +%s`
echo "start time: `date -d @$start_time "+%Y-%m-%d %H:%M:%S"`"
disk=/dev/$1
osd_id=`ceph osd create`
osd_dir=/data/osd.$osd_id
host=10.16.100.99
bucket=default_$host
echo "osd $osd_id is created ..."
mkdir -p $osd_dir
echo "osd directory: /data/osd.$osd_id is created ..."
mkfs -t ext4 -m 0 $disk
echo "disk $disk is built with ext4 file system ..."
mount -o noatime,user_xattr $disk $osd_dir
echo "device: $disk is mounted on directory: $osd_dir ..."
ceph mon getmap -o /tmp/monmap
ceph-osd -i $osd_id --monmap /tmp/monmap --mkfs --mkjournal
echo "osd $osd_id is initialized ..."
osd_uuid=`ceph-osd -i $osd_id --get-osd-fsid`
cat >> /etc/ceph/ceph.conf <<EOF
[osd.$osd_id]
host = $host
public addr = $host
cluster addr = $host
osd uuid = $osd_uuid
post stop command = python /usr/local/bin/syncfs.py -f /data/osd.$osd_id/ceph_fsid && /opt/MegaRAID/MegaCli/MegaCli64 -AdpCacheFlush -Aall
EOF
echo 'ceph config file is configured ...'
mcs3-ha service_ceph start osd.$osd_id
echo "osd $osd_id start ..."
ceph osd crush add $osd_id 0 pool=default host=$bucket
echo "osd $osd_id is added in crush ..."
echo 'all works done ...'
end_time=`date +%s`
echo "end time: `date -d @$end_time "+%Y-%m-%d %H:%M:%S"`"
time_consuming=$(($end_time - $start_time))
echo "The total time consuming is $time_consuming s"
And the one-click script for removing a specified OSD, which takes the osd id as its argument:
#! /bin/bash
osd_id=$1
ceph osd out osd.$osd_id
/etc/init.d/ceph stop osd.$osd_id
ceph osd crush remove osd.$osd_id
ceph auth del osd.$osd_id
ceph osd rm $osd_id
# also remove the [osd.$osd_id] section from /etc/ceph/ceph.conf
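A hypothetical invocation for the deletion script, again with an assumed file name:
chmod +x remove_osd.sh
./remove_osd.sh 1      # marks osd.1 out, stops it, and removes it from CRUSH, auth and the osd map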