@cdmonkey
2017-05-11T19:43:54.000000Z
First, check the physical volumes:
[root@PBSNFS01 ~]# pvscan
PV /dev/emcpowera VG vg_nfs lvm2 [2.93 TiB / 0 free]
PV /dev/emcpowerb1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
PV /dev/emcpowerc1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
PV /dev/emcpowerd1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
Total: 4 [6.06 TiB] / in use: 4 [6.06 TiB] / in no VG: 0 [0 ]
http://www.cnblogs.com/kerrycode/p/4569515.html
The so-called "safe removal" (note the warnings when the VG still has missing PVs):
[root@PBSNFS01 ~]# vgchange -a n vg_data
...
0 logical volume(s) in volume group "vg_data" now active
[root@PBSNFS01 ~]# vgremove vg_data
...
WARNING: 3 physical volumes are currently missing from the system.
Do you really want to remove volume group "vg_data" containing 1 logical volumes? [y/n]: y
Logical volume "lv_data" successfully removed
Volume group "vg_data" not found, is inconsistent or has PVs missing.
Consider vgreduce --removemissing if metadata is inconsistent.
Check the volume groups:
[root@PBSNFS01 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg_nfs" using metadata type lvm2
[root@PBSNFS01 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_nfs 4 1 0 wz--n- 6.06t 0
[root@PBSNFS01 ~]# lvscan
ACTIVE '/dev/vg_nfs/lv_nfs' [6.06 TiB] inherit
http://www.cnblogs.com/mchina/p/linux-centos-logical-volume-manager-lvm.html
As you can see, volume group vg_nfs has no free space left, so the first step is to extend the volume group. The most common approach is to add a new PV to the existing VG.
The "NFS" server that holds the order-review images is full and needs to be expanded. Here is the state before the expansion:
[root@PBSNFS01 ~]# pvscan
PV /dev/emcpowera VG vg_nfs lvm2 [2.93 TiB / 0 free]
PV /dev/emcpowerb1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
PV /dev/emcpowerc1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
PV /dev/emcpowerd1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
Total: 4 [6.06 TiB] / in use: 4 [6.06 TiB] / in no VG: 0 [0 ]
Zhiguo has already allocated a new disk to this server in advance; we need to scan for it so the system can recognize the new device.
[root@PBSNFS01 ~]# ls /sys/class/fc_host
# You will see host1 through host4, so all four hosts need to be scanned:
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
echo "- - -" > /sys/class/scsi_host/host3/scan
echo "- - -" > /sys/class/scsi_host/host4/scan # Takes about five minutes for the devices to show up, but this last command hangs.
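The four echo commands above can also be generated with a small loop; this is a sketch (the sysfs paths are assumed from the listing above, not part of the original commands), and printing the commands first lets you review them before running them as root.

```shell
# Print one SCSI-rescan command per FC host found under the given sysfs
# directory (defaults to /sys/class/fc_host).
fc_rescan_cmds() {
    dir="${1:-/sys/class/fc_host}"
    for h in "$dir"/host*; do
        [ -e "$h" ] || continue   # no hosts -> glob did not expand
        printf 'echo "- - -" > /sys/class/scsi_host/%s/scan\n' "$(basename "$h")"
    done
}

# Review first, then execute as root:
#   fc_rescan_cmds          # prints the commands
#   fc_rescan_cmds | sh     # runs them
```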
State after the scan completes (no other action taken yet):
[root@PBSNFS01 ~]# pvscan
Couldn't find device with uuid 7EzxJZ-iWPC-3eFF-Cows-LthP-AwiE-lUdeXK.'
Couldn't find device with uuid aUQ2oC-JRUz-l6xl-CwRj-idPs-WFQT-ikvhLG.'
Couldn't find device with uuid yeZuAV-ciGH-w7AL-p3Gn-0akv-gWxM-fyRjKA.'
PV /dev/emcpowere1 VG vg_data lvm2 [1.04 TiB / 0 free]
PV unknown device VG vg_data lvm2 [1.04 TiB / 0 free]
PV unknown device VG vg_data lvm2 [1.04 TiB / 0 free]
PV unknown device VG vg_data lvm2 [1.04 TiB / 0 free]
PV /dev/emcpowera VG vg_nfs lvm2 [2.93 TiB / 0 free]
PV /dev/emcpowerb1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
PV /dev/emcpowerc1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
PV /dev/emcpowerd1 VG vg_nfs lvm2 [1.04 TiB / 0 free]
Total: 8 [10.23 TiB] / in use: 8 [10.23 TiB] / in no VG: 0 [0 ]
This shows that the new devices carry leftover "LVM" metadata from a previous setup, and that metadata is incomplete. So first remove the PVs in unknown state from the stale VG, then remove the stale VG itself.
# Drop the missing PVs from the stale volume group:
[root@PBSNFS01 ~]# vgreduce --removemissing vg_data
Couldn't find device with uuid 7EzxJZ-iWPC-3eFF-Cows-LthP-AwiE-lUdeXK.'
Couldn't find device with uuid aUQ2oC-JRUz-l6xl-CwRj-idPs-WFQT-ikvhLG.'
Couldn't find device with uuid yeZuAV-ciGH-w7AL-p3Gn-0akv-gWxM-fyRjKA.'
Wrote out consistent volume group vg_data
After this completes, only one physical volume is left in the stale volume group, so deactivate it for a safe removal:
[root@PBSNFS01 ~]# vgchange -a n vg_data
0 logical volume(s) in volume group "vg_data" now active
Finally, remove the stale volume group:
[root@PBSNFS01 ~]# vgremove vg_data
Volume group "vg_data" successfully removed
After the removal, check the physical volumes again:
[root@PBSNFS01 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/emcpowera vg_nfs lvm2 a-- 2.93t 0
/dev/emcpowerb1 vg_nfs lvm2 a-- 1.04t 0
/dev/emcpowerc1 vg_nfs lvm2 a-- 1.04t 0
/dev/emcpowerd1 vg_nfs lvm2 a-- 1.04t 0
/dev/emcpowere1 lvm2 --- 1.04t 1.04t
# There is no need to run pvcreate, because emcpowere1 is already a physical volume.
# Extend the volume group directly:
[root@PBSNFS01 ~]# vgextend vg_nfs /dev/emcpowere1
Volume group "vg_nfs" successfully extended
With the volume group extended, extend the logical volume:
[root@PBSNFS01 ~]# lvextend -l +100%FREE /dev/mapper/vg_nfs-lv_nfs
Size of logical volume vg_nfs/lv_nfs changed from 6.06 TiB (1588245 extents) to 7.10 TiB (1861663 extents).
Logical volume lv_nfs successfully resized
After the space is extended, the filesystem itself must be resized with resize2fs; this takes quite a while, so be patient:
[root@PBSNFS01 ~]# resize2fs /dev/mapper/vg_nfs-lv_nfs
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg_nfs-lv_nfs is mounted on /home/app/images; on-line resizing required
old desc_blocks = 388, new_desc_blocks = 455
Performing an on-line resize of /dev/mapper/vg_nfs-lv_nfs to 1906342912 (4k) blocks.
The filesystem on /dev/mapper/vg_nfs-lv_nfs is now 1906342912 blocks long.
This completes the expansion.
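As an arithmetic sanity check using only the numbers printed above: the 1906342912 four-KiB blocks reported by resize2fs should equal the 7.10 TiB that lvextend reported.

```shell
# 1 block = 4 KiB (4096 bytes); convert the block count to TiB.
blocks=1906342912
tib=$(awk -v b="$blocks" 'BEGIN { printf "%.2f", b * 4096 / 1024^4 }')
echo "$tib TiB"   # -> 7.10 TiB
```

The two tools agree, which confirms the filesystem now spans the full logical volume.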