In this guide I will show you how to resize the root partition of a FreeBSD VPS using gpart(8). This is quite convenient, since it allows us to increase a virtual machine's disk size without reinstalling the whole operating system. Even though this process is safe and well tested, I strongly recommend trying it on a development server first and creating multiple snapshots of the target virtual machine before proceeding. Since the resize procedure is a bit different when using ZFS, I will cover that case in the last part of the tutorial.

# Resize a UFS partition

First, let's figure out which disk we want to resize. Run # gpart show to list the available drives. Since I only have one disk, this is the output:

root@bsdvm4:~ # gpart show
=>      40  52428720  vtbd0  GPT  (25G)
        40      1024      1  freebsd-boot  (512K)
      1064  48234496      2  freebsd-ufs  (23G)
  48235560   4193200      3  freebsd-swap  (2.0G)
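Each row of gpart show's output lists the start sector, the size in sectors, the partition index, the type, and a human-readable size. As a sanity check, the human-readable sizes can be reproduced from the sector counts (a quick sketch; the numbers are copied from the output above, and 512-byte sectors are assumed, which is what gpart(8) uses on this disk):

```shell
#!/bin/sh
# Reproduce the human-readable sizes from the sector counts above.
# Assumption: 512-byte sectors, as reported by gpart(8) for this disk.
sector=512
ufs_sectors=48234496    # size column of the freebsd-ufs entry
swap_sectors=4193200    # size column of the freebsd-swap entry
echo "$((ufs_sectors * sector / 1024 / 1024 / 1024))G"   # → 23G
echo "$((swap_sectors * sector / 1024 / 1024))M"         # → 2047M, shown by gpart as 2.0G
```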


Before proceeding, shut down the system with # poweroff and resize the physical disk image. Since I'm using KVM as my hypervisor and the disk image is in qcow2 format, I'm going to use qemu-img(1):

$> qemu-img resize bsdvm4.qcow2 +20G


Your disk should now look something like this:

root@bsdvm4:~ # gpart show
=>      40  52428720  vtbd0  GPT  (45G) [CORRUPT]
        40      1024      1  freebsd-boot  (512K)
      1064  48234496      2  freebsd-ufs  (23G)
  48235560   4193200      3  freebsd-swap  (2.0G)


If the disk was formatted with the GPT partition scheme, you may see a CORRUPT flag. This happens because the GPT backup table is no longer at the end of the drive. To fix it, simply run:

root@bsdvm4:~ # gpart recover vtbd0
vtbd0 recovered
root@bsdvm4:~ # gpart show
=>      40  94371760  vtbd0  GPT  (45G)
        40      1024      1  freebsd-boot  (512K)
      1064  48234496      2  freebsd-ufs  (23G)
  48235560   4193200      3  freebsd-swap  (2.0G)
  52428760  41943040         - free -  (20G)


You should now see the free space at the end of the disk. Since a partition can only grow into a contiguous chunk of free space, resizing the second partition (i.e., the root partition) requires deleting the third one (which is safe, since a swap partition only holds temporary data), resizing the second one, and then recreating the swap partition.
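Before touching anything, it's worth sanity-checking the target size: the 43G passed to gpart resize below, plus the recreated 2.0G swap, must fit on the grown 45G disk. A quick sketch using the sector numbers from gpart show (512-byte sectors assumed):

```shell
#!/bin/sh
# Sanity-check that a 43G root partition plus the 2.0G swap still fit
# on the grown disk. All numbers come from the gpart show output above;
# 512-byte sectors are assumed.
part_start=1064                                 # first sector of freebsd-ufs
new_root=$((43 * 1024 * 1024 * 1024 / 512))     # 43G in sectors = 90177536
swap=4193200                                    # sectors of the old swap partition
disk_end=$((40 + 94371760))                     # first sector past the usable area
echo $((part_start + new_root + swap <= disk_end))   # → 1 (it fits, exactly)
```

In this case the fit is exact: 1064 + 90177536 + 4193200 lands precisely on the last usable sector boundary, which is why the recreated swap ends up the same 2.0G as before.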
1 - Disable the swap partition:

root@bsdvm4:~ # swapoff /dev/vtbd0p3


2 - Delete the swap partition:

root@bsdvm4:~ # gpart delete -i 3 vtbd0
vtbd0p3 deleted
root@bsdvm4:~ # gpart show
=>      40  94371760  vtbd0  GPT  (45G)
        40      1024      1  freebsd-boot  (512K)
      1064  48234496      2  freebsd-ufs  (23G)
  48235560  46136240         - free -  (22G)


3 - Resize the partition:

root@bsdvm4:~ # gpart resize -i 2 -s 43G -a 4k vtbd0
vtbd0p2 resized
root@bsdvm4:~ # gpart show
=>      40  94371760  vtbd0  GPT  (45G)
        40      1024      1  freebsd-boot  (512K)
      1064  90177536      2  freebsd-ufs  (43G)
  90178600   4193200         - free -  (2.0G)


4 - Recreate the swap partition:

root@bsdvm4:~ # gpart add -t freebsd-swap -a 4k vtbd0
vtbd0p3 added
root@bsdvm4:~ # gpart show
=>      40  94371760  vtbd0  GPT  (45G)
        40      1024      1  freebsd-boot  (512K)
      1064  90177536      2  freebsd-ufs  (43G)
  90178600   4193200      3  freebsd-swap  (2.0G)


5 - Grow the UFS file system:

root@bsdvm4:~ # growfs /dev/vtbd0p2
Device is mounted read-write; resizing will result in temporary write suspension for /.
It's strongly recommended to make a backup before growing the file system.
OK to grow filesystem on /dev/vtbd0p2, mounted on /, from 23GB to 43GB? [yes/no] yes
super-block backups (for fsck_ffs -b #) at:
 48657216, 49937664, 51218112, 52498560, 53779008, 55059456, 56339904,
 57620352, 58900800, 60181248, 61461696, 62742144, 64022592, 65303040,
 66583488, 67863936, 69144384, 70424832, 71705280, 72985728, 74266176,
 75546624, 76827072, 78107520, 79387968, 80668416, 81948864, 83229312,
 84509760, 85790208, 87070656, 88351104, 89631552


6 - Reactivate the swap partition:

root@bsdvm4:~ # swapon /dev/vtbd0p3


The disk is now resized.

# Resize a ZFS partition

In this last part of the tutorial we will do the same thing, but using ZFS instead of UFS. While the process is quite similar, there are some aspects related to ZFS pools that may cause confusion.
So let's get started by listing the available disks in our system:

root@bsdvm4:~ # gpart show
=>      40  52428720  vtbd0  GPT  (25G)
        40      1024      1  freebsd-boot  (512K)
      1064   4194304      2  freebsd-swap  (2.0G)
   4195368  48233392      3  freebsd-zfs  (23G)


We want to extend the last partition of the disk (i.e., the root partition), but first let's extend the physical disk image. As always, shut down your virtual machine and then use your hypervisor's tools to edit the disk. Since I'm using QEMU/KVM, I will use qemu-img(1):

$> qemu-img resize bsdvm4.qcow2 +20G


Now start your machine and run # gpart show; you should get something like this:

root@bsdvm4:~ # gpart show
=>      40  52428720  vtbd0  GPT  (45G) [CORRUPT]
        40      1024      1  freebsd-boot  (512K)
      1064   4194304      2  freebsd-swap  (2.0G)
   4195368  48233392      3  freebsd-zfs  (23G)


As before, since the disk uses the GPT partition scheme and the GPT backup table is no longer at the end of the disk, the disk is flagged as corrupted. To fix this, simply run:

root@bsdvm4:~ # gpart recover vtbd0
vtbd0 recovered
root@bsdvm4:~ # gpart show
=>      40  94371760  vtbd0  GPT  (45G)
        40      1024      1  freebsd-boot  (512K)
      1064   4194304      2  freebsd-swap  (2.0G)
   4195368  48233392      3  freebsd-zfs  (23G)
  52428760  41943040         - free -  (20G)


Now we can extend the last partition:

root@bsdvm4:~ # gpart resize -i 3 -s 44031M vtbd0
vtbd0p3 resized
root@bsdvm4:~ # gpart show
=>      40  94371760  vtbd0  GPT  (45G)
        40      1024      1  freebsd-boot  (512K)
      1064   4194304      2  freebsd-swap  (2.0G)
   4195368  90175488      3  freebsd-zfs  (43G)
  94370856       944         - free -  (472K)
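The -s 44031M argument isn't arbitrary: it is all remaining space from the partition's start sector to the end of the disk, rounded down to whole mebibytes. A sketch of the arithmetic, using the sector counts from the gpart show output (512-byte sectors assumed):

```shell
#!/bin/sh
# Derive the -s 44031M argument: everything from the partition's start
# sector up to the end of the disk, rounded down to whole mebibytes.
# Numbers are taken from the gpart show output above; 512-byte sectors assumed.
part_start=4195368                 # first sector of the freebsd-zfs partition
disk_end=$((40 + 94371760))        # first sector past the usable area
avail=$((disk_end - part_start))   # sectors the partition can occupy
echo "$((avail * 512 / 1024 / 1024))M"   # → 44031M
```

The rounding down is what leaves the tiny 472K of free space visible at the end of the disk.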


Since no errors were reported, the partition was resized successfully. Let's check the pool:

root@bsdvm4:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  22.5G   939M  21.6G        -         -     0%     4%  1.00x    ONLINE  -


As you can see, the pool size has not been updated automatically. To expand it manually, we can use the following command:

root@bsdvm4:~ # zpool online -e zroot vtbd0p3


If we check again:

root@bsdvm4:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  42.5G   940M  41.6G        -         -     0%     2%  1.00x    ONLINE  -


We can see that the size now matches what we expected.
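As a closing note, the manual `zpool online -e` step can be avoided on future resizes by turning on the pool's autoexpand property, which tells ZFS to claim new device space automatically. A small sketch (autoexpand is a standard zpool property, off by default; note it only affects expansions that happen after it is set):

```shell
# Let zroot grow automatically whenever its underlying device grows.
zpool set autoexpand=on zroot
# Check the current value:
zpool get autoexpand zroot
```

You still need to grow the GPT partition with gpart resize; autoexpand only covers the pool-level step.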