Direct migration of Synology RAID-0 ext4fs volume to larger pair of drives using dd
Why this article
You might think that since Synology is a widely-used and battle-tested product, it should surely be possible to do a lift-and-shift migration of its volume from one set of drives to another.
And for most cases it is, but there are several for which the official solution is "use HyperBackup", which requires you to have at least one drive larger than all the data you have. And old little me here wanted to migrate around 5 TB of data from RAID-0 ext4fs on 2 x 3 TB drives to 2 x 4 TB drives.
After traversing five rings of hell (block devices, partitioning, software RAID, LVM pool, volume and filesystem expansion), I am here to tell the story. Get a coffee, and happy reading.
What is needed
- Some way to connect 2 drives at once to a Linux machine (for my laptop: 2x USB-to-SATA connectors)
- A system with mdadm installed (restart the system after installation)
- Time, mostly depending on drive size. Plan for days rather than hours.
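If mdadm is missing, installing it is quick; a minimal example assuming a Debian/Ubuntu-based system (adjust the package manager for your distribution):

sudo apt install mdadm   # provides the RAID assembly tools used below
sudo reboot              # so the md modules and udev rules are fully loaded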
On the Linux PC
- Take the drives out of their trays in the Synology and mark them left/right.
- Copy data from the respective OLD drives to the NEW ones using dd if=/dev/sdX of=/dev/sdY bs=4M status=progress. I kept mine in the same order, so mark those as well. Depending on drive size, prepare for several hours of waiting. For the sake of all that is holy, CHECK the if AND of PARAMETERS TWICE before running. You are welcome.
- Keep the OLD drives securely stored and do not touch the data on them. There is still a lot that can go wrong.
- Connect both NEW drives to the PC and see what device names (e.g. /dev/sdX) they get.
- Just to be sure, check whether a RAID array has been assembled automatically (cat /proc/mdstat, look for an mdX device name); if there is one, stop it using mdadm -S /dev/mdX.
- On each NEW drive, increase the size of the last (which should be the largest) partition, e.g. using cfdisk /dev/sdX, and note the last partition number (e.g. sdX9).
- Reassemble the RAID using mdadm -A /dev/mdX /dev/sdX9 /dev/sdY9 --update=devicesize.
The RAID size should now grow to cover all the space available on both drives' partitions. For reference, a consolidated sketch of the above commands follows.
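Here is the whole Linux-PC sequence in one place. This is a minimal sketch with hypothetical device names (sdb/sdc for the NEW drives, md127 for the array); verify and substitute your own before running anything:

# Identify devices first; double-check models and serials before any dd!
lsblk -o NAME,SIZE,MODEL,SERIAL

# Clone each OLD drive onto its NEW counterpart (run once per pair).
sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync

# Stop any array that auto-assembled from the cloned drives.
cat /proc/mdstat              # look for an active mdX entry
sudo mdadm -S /dev/md127      # stop it if one appeared (name varies)

# Grow the last partition on each NEW drive (interactive).
sudo cfdisk /dev/sdb
sudo cfdisk /dev/sdc

# Reassemble so md picks up the larger partitions, then verify.
sudo mdadm -A /dev/md127 /dev/sdb9 /dev/sdc9 --update=devicesize
sudo mdadm -D /dev/md127      # array size should reflect the new space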
At this point I moved the drives back into the Synology, hoping for the magic to happen. However...
On the Synology
The system should start normally. Storage Manager in DSM may inform you that there is additional space available for the storage pool, but in my case clicking "Expand" resulted in an error.
Back to the console we go.
Verify your mount point using df -h /volume1 and ensure it is on LVM (in my case there is a vg1 device in the path, /dev/vg1/volume_1):
marcin@minerva:~$ df -h /volume1
Filesystem Size Used Avail Use% Mounted on
/dev/vg1/volume_1 5.5T 4.4T 1.1T 81% /volume1
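To learn which /dev/mdX backs that volume group (you will need it in a moment), the standard LVM listing tools will show it; for example, assuming the vg1 naming from above:

sudo pvs        # lists physical volumes; shows which /dev/mdX is in vg1
sudo vgs vg1    # volume group summary, including free space
sudo lvs vg1    # logical volumes in vg1, e.g. volume_1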
Using the paths you now know, run the following commands:
sudo pvresize /dev/mdX
sudo lvextend -l +100%FREE /dev/vgX/volume_X
sudo reboot
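pvresize grows the LVM physical volume to fill the enlarged array, and lvextend then grows the logical volume into that new free space. You can sanity-check the result before (or after) the reboot, again using your own device names:

sudo pvs /dev/mdX              # PSize should now match the grown array
sudo lvs /dev/vgX/volume_X     # LSize should show the extended volume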
After the reboot, DSM shows the increased pool size, but Volume1 will still stay the same size. Synology will alert that it detected an incomplete volume expansion, but clicking "expand now" is still a trap. Sigh. Back to the console again.
sudo resize2fs /dev/vg1/volume_1
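resize2fs grows the ext4 filesystem to fill the extended logical volume. Re-running the earlier check should now report the larger size:

df -h /volume1    # Size should show the expanded capacity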
No restart necessary. Look into DSM and congratulate yourself, you seasoned self-proclaimed home-lab admin.