I’m really bummed when I provision a boot volume that’s too large.
[rgibson@centos7 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       50G  6.5G   44G  13% /
At the current GP2 price of $0.05/GB, I’m paying three or four dollars more per month than I really should be. It’s really easy to make boot volumes bigger, but there’s not a lot of instruction out there on making them smaller. This method also has the advantage of removing the marketplace codes from the volume, making it easier to attach to a running instance later.
Start by shutting down the instance whose boot volume you want to shrink. You’ll need a second instance running a different OS, or at least one built from a different AMI, since we can’t reliably boot if multiple attached volumes share the same UUID. I used an Ubuntu instance (16.04).
First, detach the root volume you want to shrink from the original instance:
(Side note, this would be a good time to take a snapshot of this volume just in case you make a mistake in this process, such as wiping the partition table or running mkfs on the wrong volume…)
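If you’d rather script these steps than click through the console, the equivalent AWS CLI calls would look roughly like this (the volume ID is a placeholder, not from my actual run):

aws ec2 create-snapshot --volume-id vol-0aaaaaaaaaaaaaaaa \
    --description "pre-shrink backup of the CentOS root volume"
aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa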
Attach it as a volume on your powered-off Ubuntu instance. I used sdf:
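Again, a hedged CLI sketch if you’re not using the console (IDs are placeholders):

aws ec2 attach-volume --volume-id vol-0aaaaaaaaaaaaaaaa \
    --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sdf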
Create a smaller replacement volume in the correct availability zone. This would be a good time to encrypt it as well…
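From the CLI, creating a 20 GB encrypted GP2 volume might look like this (the availability zone is an example; use the one your instances are in):

aws ec2 create-volume --size 20 --volume-type gp2 \
    --availability-zone us-east-1a --encrypted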
Attach this volume as sdg to the Ubuntu instance and power it on. SSH in and create a new empty partition on the new volume. Make sure you’re working on the NEW volume and not touching your source volume. Since we attached it as sdg, it will be visible as /dev/xvdg:
ubuntu@ubuntu:~$ sudo -i
root@ubuntu:~# fdisk /dev/xvdg

Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x1db62587.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039):

Created a new partition 1 of type 'Linux' and of size 20 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@ubuntu:~#
Format the new partition as XFS:
Edit: If you use an OS that’s generations newer than the one you’re fixing, you could get into a bind here if its version of mkfs.xfs writes a newer on-disk format than your original kernel supports. Double-check this now.
root@ubuntu:~# mkfs.xfs /dev/xvdg1
meta-data=/dev/xvdg1             isize=512    agcount=4, agsize=1310656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=5242624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@ubuntu:~#
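If you’re worried about that version-mismatch edit note above, here’s a hedged workaround (my addition, not part of the original run): check your xfsprogs version, and, assuming it’s 3.2 or newer so it accepts the -m crc option, reformat with the older v4 on-disk layout that early CentOS 7 kernels understand:

root@ubuntu:~# mkfs.xfs -V
root@ubuntu:~# mkfs.xfs -f -m crc=0 /dev/xvdg1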
Make mount points for the original and new volumes, and mount them. I mounted the original drive read only:
root@ubuntu:~# mkdir /mnt/xvdf1
root@ubuntu:~# mkdir /mnt/xvdg1
root@ubuntu:~# mount -o ro /dev/xvdf1 /mnt/xvdf1
root@ubuntu:~# mount /dev/xvdg1 /mnt/xvdg1
Now we need to clone the filesystem from the old drive to the new one. Originally I did this with rsync, but because of sparse file handling the sizes didn’t match up exactly on the destination drive, so now I use a piped tar command and it comes out perfect. We have to start in the source directory for this to work:
root@ubuntu:~# cd /mnt/xvdf1
root@ubuntu:/mnt/xvdf1# tar cSf - . | cat | (cd ../xvdg1/ && tar xSBf -)
There will be some warnings from tar about ‘socket ignored’ and this is expected.
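(The S flags preserve sparse files on both ends of the pipe, and B reblocks the piped stream on extraction.) As an optional sanity check I’m adding here, you can compare file counts on both sides; the numbers should match:

root@ubuntu:/mnt/xvdf1# find . | wc -l
root@ubuntu:/mnt/xvdf1# (cd ../xvdg1 && find . | wc -l)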
After all of the files are copied, we need to fix the volume UUID. While we could update the UUID of the new drive to match the old one, there are advantages to using a new UUID. For example, if you ever had to perform an offline fix on this drive, you could attach it to one of your other running CentOS instances and mount it; if you kept the old UUID, the root UUIDs would match and the OS would get confused. Either way, we need a list of the UUIDs on the attached disks, which we get with the blkid command:
root@ubuntu:/mnt/xvdf1# cd ../xvdg1/
root@ubuntu:/mnt/xvdg1# blkid
/dev/xvda1: LABEL="cloudimg-rootfs" UUID="567ab888-a3b5-43d4-a92a-f594e8653924" TYPE="ext4" PARTUUID="1a7d4c6a-01"
/dev/xvdf1: UUID="0f790447-ebef-4ca0-b229-d0aa1985d57f" TYPE="xfs" PARTUUID="000aec37-01"
/dev/xvdg1: UUID="6094350f-7d18-4256-b52e-6dbf5f196219" TYPE="xfs" PARTUUID="1db62587-01"
xvda1 is the Ubuntu system, so forget about that UUID. xvdf1 has the original UUID that came with the CentOS AMI. xvdg1’s UUID was created when we ran mkfs.xfs. While you *could* use xfs_admin to assign the UUID from xvdf1 onto xvdg1 and skip ahead to installing grub (there’s a sketch of that after the sed commands below), I think it’s better to put the new UUID in all the places it needs to go, so that’s what I’ll describe next. There are four files to modify, and we can use sed to edit each one of them. The format of the command is:
# sed -i -e 's/old_UUID/new_UUID/g' /path/to/file
We’re already in /mnt/xvdg1/ from when we ran the tar pipe, so the file paths are relative to this point:
root@ubuntu:/mnt/xvdg1# sed -i -e 's/0f790447-ebef-4ca0-b229-d0aa1985d57f/6094350f-7d18-4256-b52e-6dbf5f196219/g' etc/fstab
root@ubuntu:/mnt/xvdg1# sed -i -e 's/0f790447-ebef-4ca0-b229-d0aa1985d57f/6094350f-7d18-4256-b52e-6dbf5f196219/g' boot/grub2/grub.cfg
root@ubuntu:/mnt/xvdg1# sed -i -e 's/0f790447-ebef-4ca0-b229-d0aa1985d57f/6094350f-7d18-4256-b52e-6dbf5f196219/g' boot/grub/grub.conf
root@ubuntu:/mnt/xvdg1# sed -i -e 's/0f790447-ebef-4ca0-b229-d0aa1985d57f/6094350f-7d18-4256-b52e-6dbf5f196219/g' boot/grub/menu.lst
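For reference, the xfs_admin shortcut mentioned above would look something like this. This is a sketch, not something from my actual run: xfs_admin needs the filesystem unmounted, and on a CRC-enabled (v5) filesystem changing the UUID requires a fairly recent xfsprogs, so treat it with caution:

root@ubuntu:/mnt/xvdg1# cd /
root@ubuntu:/# umount /mnt/xvdg1
root@ubuntu:/# xfs_admin -U 0f790447-ebef-4ca0-b229-d0aa1985d57f /dev/xvdg1
root@ubuntu:/# mount /dev/xvdg1 /mnt/xvdg1
root@ubuntu:/# cd /mnt/xvdg1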
I really don’t think the last two files (boot/grub/grub.conf and boot/grub/menu.lst) are necessary, but the old UUID is in there so I changed them anyway. The last step is to chroot into the new volume and install grub. For that to work, we need to bind mount a few places from the running Ubuntu. Grub needs to be installed on the new boot drive (/dev/xvdg), NOT the partition (/dev/xvdg1).
root@ubuntu:/mnt/xvdg1# mount --bind /dev dev
root@ubuntu:/mnt/xvdg1# mount --bind /proc proc
root@ubuntu:/mnt/xvdg1# mount --bind /sys sys
root@ubuntu:/mnt/xvdg1# chroot .
[root@centos /]# grub2-install /dev/xvdg
Installing for i386-pc platform.
Installation finished. No error reported.
[root@centos /]# exit
exit
root@ubuntu:/mnt/xvdg1#
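Before shutting down, I’d suggest unmounting everything cleanly. This is a step I’m adding here; shutting the instance down would also take care of it:

root@ubuntu:/mnt/xvdg1# cd /
root@ubuntu:/# umount /mnt/xvdg1/dev /mnt/xvdg1/proc /mnt/xvdg1/sys
root@ubuntu:/# umount /mnt/xvdg1 /mnt/xvdf1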
At this point, we are just about done. Shut down the Ubuntu instance, detach the new drive we created, and attach it to the original instance as /dev/sda1.
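The CLI version of that detach/attach, if you prefer it (IDs are placeholders again):

aws ec2 detach-volume --volume-id vol-0cccccccccccccccc
aws ec2 attach-volume --volume-id vol-0cccccccccccccccc \
    --instance-id i-0dddddddddddddddd --device /dev/sda1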
That’s it! Start your original instance, log in and verify the size:
[rgibson@centos ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  6.5G   13G  33% /
If your instance is unreachable or you otherwise can’t log in, diagnose it with AWS’s “view instance screenshot” which is probably the coolest feature they’ve added to EC2 lately. Delete your old volume and any snapshots when you’re comfortable doing so.
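That screenshot is also available from the CLI, something like this (the instance ID is a placeholder):

aws ec2 get-console-screenshot --instance-id i-0dddddddddddddddd \
    --query ImageData --output text | base64 --decode > console.jpg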
Happy shrinking!
Everything was all right until the last step, where I have to reinstall grub.
I got the following error:
# grub2-install /mnt/xvdg
Installing for i386-pc platform.
grub2-install: error: unknown filesystem
My disk configuration:
# sudo fdisk -l
Disk /dev/xvda: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b15ec
Device Boot Start End Blocks Id System
/dev/xvda1 * 2048 16777215 8387584 83 Linux
Disk /dev/xvdg: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xcdb0e72a
Device Boot Start End Blocks Id System
/dev/xvdg1 2048 16777215 8387584 83 Linux
Disk /dev/xvdf: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ae09f
Device Boot Start End Blocks Id System
/dev/xvdf1 * 2048 419430366 209714159+ 83 Linux
What is missing, or what is my error installing grub and booting the disk? I verified that /dev/xvdg1 is not marked with the boot flag. These disks are on an AWS EC2 medium instance.
Waiting for your help, thanks.
I wonder if this Red Hat article is related? It says there was an issue in 7.3 with the XFS UUID/CRC… Another possibility is that if you used a newer Ubuntu, it may have made the XFS version too new on the destination drive. You may want to build the destination drive from an equivalent of the OS that you’re cloning and then attach it here for the file copy and grub install. I’m sorry I can’t be more help, good luck.
Thank you! Thank you! Thank you! 24 hours wasted on this until I came across this tutorial! I owe you a beer!
You are a genius. This guide saved me a lot of time and it’s working fine for me.
Everything worked perfectly until attaching the new drive to the instance.
The instance starts, but I am not able to log in. I’m getting the following error:
/bin/bash: Permission denied
Maybe it has to do with SELinux? I wrote this assuming it was disabled. Check the contents of /etc/selinux/config; if it’s set to enforcing, either change it to permissive/disabled, or run “touch /.autorelabel” to relabel the new files, allowing the system read access again.
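If you still have the volume attached to the Ubuntu helper instance, something like this should do it (a sketch; the path assumes the mount point from earlier in the post):

root@ubuntu:~# grep SELINUX= /mnt/xvdg1/etc/selinux/config
root@ubuntu:~# touch /mnt/xvdg1/.autorelabel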
Blindly did stuff.
Shrank the SSD from 200 GB to 45 GB, nearly a 5x cost saving.
Thank you! Thank you! Followed the tutorial, works perfectly!