Softraid on OpenBSD

Softraid

Inspired by MWL's book OpenBSD Mastery: Filesystems, here are some notes.

Target

Build a RAID with three disks, add some data, destroy one disk, and rebuild the RAID (and its data).

Requirements

  • OpenBSD 7.2 running
  • 3 disks with 20 GB each added: sd0, sd1, sd2

Find Disks

# dmesg | grep -i sec

wd0: 64-sector PIO, LBA, 20480MB, 41943040 sectors
sd0: 20480MB, 512 bytes/sector, 41943040 sectors
sd1: 20480MB, 512 bytes/sector, 41943040 sectors
sd2: 20480MB, 512 bytes/sector, 41943040 sectors

sd0, sd1 and sd2 are the new disks for the RAID.

Write GPT Table

fdisk -gy sd0
fdisk -gy sd1
fdisk -gy sd2
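
To double-check that each disk now carries a GPT, plain fdisk prints the partition table; a quick sketch:

for d in sd0 sd1 sd2; do fdisk $d; done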

Set FS Type to Raid

# disklabel -E sd0

Label editor (enter '?' for help at any prompt)
sd0> a
partition: [a]
offset: [64]
size: [41942943]
FS type: [4.2BSD] raid
sd0*> w
sd0> q
No label changes.
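
The same edit can be scripted by feeding the label editor its commands on stdin; a sketch that may need adjusting (check the result with disklabel sd0 before copying it to other disks):

printf 'a a\n\n\nraid\nw\nq\n' | disklabel -E sd0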

Dump the Disklabel and apply it to the other Disks

If your disks have different sizes, you need to find the smallest one and apply its layout to the other disks (see the sketch after the commands below).

disklabel sd0 > disklabel.sd0.softraid
disklabel -R sd1 disklabel.sd0.softraid
disklabel -R sd2 disklabel.sd0.softraid
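
If the disks differ in size, the smallest one can be found by comparing their total sector counts; a small sketch:

for d in sd0 sd1 sd2; do echo -n "$d: "; disklabel $d | grep 'total sectors'; done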

Create Raid5

# bioctl -c 5 -l sd0a,sd1a,sd2a softraid0

softraid0: RAID 5 volume attached as sd3
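
The new volume is just another disk to the system and shows up in the kernel's disk list:

sysctl hw.disknames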

Zeroize Raid

Zeroing the first megabyte of the new volume clears out any leftover boot record or filesystem header, so nothing stale is picked up when the volume is partitioned.

# dd if=/dev/zero bs=1m count=1 of=/dev/sd3c

1+0 records in
1+0 records out
1048576 bytes transferred in 0.068 secs (15227879 bytes/sec)

Make a Partition on the Raid

# fdisk -gy sd3
Writing GPT.
# disklabel -E sd3

Label editor (enter '?' for help at any prompt)
sd3> a
partition: [a]
offset: [64]
size: [83884703]
FS type: [4.2BSD]
sd3*> w
sd3> q
No label changes.

Build the new Filesystem

# newfs sd3a
/dev/rsd3a: 40959.3MB in 83884672 sectors of 512 bytes
203 cylinder groups of 202.50MB, 12960 blocks, 25920 inodes each
super-block backups (for fsck -b #) at:
 160, 414880, 829600, 1244320, 1659040, 2073760, 2488480, 2903200, 3317920, 3732640, 4147360, 4562080, 4976800, 5391520,
 5806240, 6220960, 6635680, 7050400, 7465120, 7879840, 8294560, 8709280, 9124000, 9538720, 9953440, 10368160, 10782880,
 11197600, 11612320, 12027040, 12441760, 12856480, 13271200, 13685920, 14100640, 14515360, 14930080, 15344800, 15759520,
 16174240, 16588960, 17003680, 17418400, 17833120, 18247840, 18662560, 19077280, 19492000, 19906720, 20321440, 20736160,
 21150880, 21565600, 21980320, 22395040, 22809760, 23224480, 23639200, 24053920, 24468640, 24883360, 25298080, 25712800,
 26127520, 26542240, 26956960, 27371680, 27786400, 28201120, 28615840, 29030560, 29445280, 29860000, 30274720, 30689440,
 31104160, 31518880, 31933600, 32348320, 32763040, 33177760, 33592480, 34007200, 34421920, 34836640, 35251360, 35666080,
 36080800, 36495520, 36910240, 37324960, 37739680, 38154400, 38569120, 38983840, 39398560, 39813280, 40228000, 40642720,
 41057440, 41472160, 41886880, 42301600, 42716320, 43131040, 43545760, 43960480, 44375200, 44789920, 45204640, 45619360,
 46034080, 46448800, 46863520, 47278240, 47692960, 48107680, 48522400, 48937120, 49351840, 49766560, 50181280, 50596000,
 51010720, 51425440, 51840160, 52254880, 52669600, 53084320, 53499040, 53913760, 54328480, 54743200, 55157920, 55572640,
 55987360, 56402080, 56816800, 57231520, 57646240, 58060960, 58475680, 58890400, 59305120, 59719840, 60134560, 60549280,
 60964000, 61378720, 61793440, 62208160, 62622880, 63037600, 63452320, 63867040, 64281760, 64696480, 65111200, 65525920,
 65940640, 66355360, 66770080, 67184800, 67599520, 68014240, 68428960, 68843680, 69258400, 69673120, 70087840, 70502560,
 70917280, 71332000, 71746720, 72161440, 72576160, 72990880, 73405600, 73820320, 74235040, 74649760, 75064480, 75479200,
 75893920, 76308640, 76723360, 77138080, 77552800, 77967520, 78382240, 78796960, 79211680, 79626400, 80041120, 80455840,
 80870560, 81285280, 81700000, 82114720, 82529440, 82944160, 83358880, 83773600,

Mount Raid

# mkdir /raid
# mount /dev/sd3a /raid

Make the Mount persistent

echo "/dev/$(bioctl softraid0 |awk '/softraid0/{ print $5 }')a /raid ffs rw,softdep,noatime,nodev,nosuid 1 2" >> /etc/fstab
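
The /dev/sd3a path breaks if the device gets renumbered (see Problems below). fstab also accepts the disklabel DUID, which sticks to the volume; a sketch with a placeholder DUID:

disklabel sd3 | grep duid
# replace the placeholder DUID below with the one printed above
echo "0011223344556677.a /raid ffs rw,softdep,noatime,nodev,nosuid 1 2" >> /etc/fstab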

df

# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/wd0a      600M    224M    346M    39%    /
/dev/wd0l      3.7G   63.7M    3.4G     2%    /home
/dev/wd0d      848M    1.7M    804M     0%    /tmp
/dev/wd0f      2.3G    1.2G    1.0G    54%    /usr
/dev/wd0g      643M    271M    339M    44%    /usr/X11R6
/dev/wd0h      2.3G    1.7G    493M    78%    /usr/local
/dev/wd0k      5.2G    2.0K    4.9G     0%    /usr/obj
/dev/wd0j      1.6G    2.0K    1.5G     0%    /usr/src
/dev/wd0e      1.2G   49.8M    1.1G     4%    /var
/dev/sd3a     38.7G    2.0K   36.8G     0%    /raid

Create Data

# tar czf /raid/all.tar.gz / &

# df -h /raid/
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     38.7G    2.2G   34.6G     6%    /raid

# ls -al /raid/
drwxr-xr-x   2 root  wheel         512 Feb  4 11:18 .
drwxr-xr-x  14 root  wheel         512 Feb  4 11:18 ..
-rw-r--r--   1 root  wheel  2374055353 Feb  4 11:22 all.tar.gz
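
To be able to verify the data after degrading and rebuilding the array, a checksum can be recorded now (storage path chosen arbitrarily):

sha256 /raid/all.tar.gz | tee /root/all.tar.gz.sha256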

Raid Status

# bioctl softraid0
Volume      Status               Size Device
softraid0 0 Online        42949017600 sd3     RAID5
          0 Online        21474516480 0:0.0   noencl <sd0a>
          1 Online        21474516480 0:1.0   noencl <sd1a>
          2 Online        21474516480 0:2.0   noencl <sd2a>

Let's reboot, remove one disk and replace it with an empty, bigger one …

New Disk

# dmesg | grep -i sect
wd0: 64-sector PIO, LBA, 20480MB, 41943040 sectors
sd0: 20480MB, 512 bytes/sector, 41943040 sectors
sd1: 20480MB, 512 bytes/sector, 41943040 sectors
sd2: 30720MB, 512 bytes/sector, 62914560 sectors
sd3: 40959MB, 512 bytes/sector, 83884800 sectors

Check Raid Status

# bioctl softraid0
Volume      Status               Size Device
softraid0 0 Degraded      42949017600 sd3     RAID5
          0 Online        21474516480 0:0.0   noencl <sd0a>
          1 Offline                 0 0:1.0   noencl <>
          2 Online        21474516480 0:2.0   noencl <sd1a>

Data still Safe?

# ll /raid/
total 4639136
-rw-r--r--  1 root  wheel  2374055353 Feb  4 11:22 all.tar.gz
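
Reading the whole file on the degraded array forces parity reconstruction; with the checksum recorded earlier, the data can be verified:

sha256 -c /root/all.tar.gz.sha256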

Rebuild Raid
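
The replacement disk first needs a GPT and an 'a' partition of FS type raid, just like the original members; a short sketch, assuming the fresh disk came up as sd2:

fdisk -gy sd2
disklabel -E sd2    # create partition 'a' with FS type 'raid', as shown above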

# bioctl -R /dev/sd2a sd3
softraid0: sd2a partition too large, wasting 10737418240 bytes
softraid0: rebuild of sd3 started on sd2a

Rebuild Progress

# bioctl sd3
Volume      Status               Size Device
softraid0 0 Rebuild       42949017600 sd3     RAID5 10% done
          0 Online        21474516480 0:0.0   noencl <sd0a>
          1 Rebuild       32211934720 0:1.0   noencl <sd2a>
          2 Online        21474516480 0:2.0   noencl <sd1a>
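
The rebuild runs in the background; its progress can be polled with a simple loop, for example:

while :; do bioctl sd3 | grep RAID5; sleep 60; done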

Seems to be working fine!

Problems

  • Produced more than one bluescreen while reading from and writing to the RAID

  • Added a 4th disk to the VM -> the RAID volume (softraid0 -> sd3) got renamed to sd4, while the new disk got inserted as sd3

-> The RAID was broken until I removed the new disk and rebooted :(

softraid0: volume sd4 is roaming, it used to be sd3, updating metadata
