mdadm: Working with RAID in Ubuntu
Sometimes when configuring servers you have to set up RAID.
On branded servers the RAID is usually hardware, but nonetheless you often have to deal with software RAID as well.
Building a RAID
Let's build a RAID 1 array on the server.
First, create identical partitions on the sdb and sdc drives:
# fdisk /dev/sdb
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-16777215, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215): +5G
Command (m for help): t
Hex code (type L to list codes): 83
Changed system type of partition 1 to 83
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Do the same for the sdc disk.
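As an alternative to repeating the interactive fdisk session, you can copy the partition table from sdb to sdc non-interactively with sfdisk (a quick sketch, assuming sdc is empty; double-check the device names before running it):
# sfdisk -d /dev/sdb | sfdisk /dev/sdc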
Install the RAID management utility:
# apt-get install mdadm
Now build the RAID 1 array:
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
After the array has been assembled, you can check its state with the command:
# cat /proc/mdstat
The output should look similar to:
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
5238720 blocks super 1.2 [2/2] [UU]
unused devices: <none>
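For a more detailed view of the array (state, UUID, list of member devices) you can also query mdadm directly; the exact output varies with the mdadm version:
# mdadm --detail /dev/md0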
Now you can create a file system on the new RAID device and mount it in the system:
# mkfs.ext4 /dev/md0
# mkdir /mnt/raid
# mount /dev/md0 /mnt/raid
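If you want the RAID partition mounted automatically at boot, you can add an entry for it to /etc/fstab. A minimal sketch, assuming the /mnt/raid mount point used above (the nofail option lets the system boot even if the array is unavailable):
# echo '/dev/md0 /mnt/raid ext4 defaults,nofail 0 2' >> /etc/fstab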
Also, to be able to check later that the RAID works correctly, create a file on the RAID partition:
# touch /mnt/raid/test.txt
A common error when working with RAID
When we created the RAID we defined it as /dev/md0, but after a reboot this device may no longer be present and /dev/md127 may appear instead. You can either use that device name from now on, or tell the system that our RAID partition should be /dev/md0 and nothing else. To do this, run:
# mdadm -Db /dev/md0 > /etc/mdadm/mdadm.conf
As a result of this command, the /etc/mdadm/mdadm.conf file will contain a line like:
ARRAY /dev/md0 metadata=1.2 name=ub-4:0 UUID=7da67e34:3d29e3a1:bdf36edd:6be26e60
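Note that the > redirection overwrites any existing contents of /etc/mdadm/mdadm.conf. If your configuration file already contains other settings you want to keep, append the ARRAY line instead:
# mdadm -Db /dev/md0 >> /etc/mdadm/mdadm.conf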
After that, you need to update the initramfs image:
# update-initramfs -u
Now, after a reboot, our RAID device will be defined as /dev/md0.
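After rebooting you can quickly confirm that the expected device name is present, for example:
# ls -l /dev/md0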
RAID degradation and recovery
Let's see how a RAID can become degraded. Naturally, in a real system a drive usually fails on its own and there is no need to declare it faulty by hand. But here we will use the mdadm utility to mark one of the RAID members, /dev/sdb1, as failed.
# mdadm /dev/md0 --fail /dev/sdb1
Now let's look at the RAID state:
# cat /proc/mdstat
We should see that /dev/sdb1 is marked as failed and the RAID is degraded:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[2](F) sdc1[1]
5238720 blocks super 1.2 [2/1] [_U]
unused devices: <none>
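The failed member also shows up in the detailed array view; for example, the following (the exact wording differs between mdadm versions) should report a degraded state and the faulty device:
# mdadm --detail /dev/md0 | grep -iE 'state|faulty'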
Now, using fdisk, create a partition /dev/sdd1 on the /dev/sdd drive with the same size as /dev/sdc1. Then remove /dev/sdb1 from the RAID:
# mdadm /dev/md0 --remove /dev/sdb1
And add the new partition /dev/sdd1:
# mdadm /dev/md0 --add /dev/sdd1
If we look at the RAID state right away
# cat /proc/mdstat
we will see that the RAID again has two member disks and that they are currently being synchronized:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdd1[2] sdc1[1]
5238720 blocks super 1.2 [2/1] [_U]
[=>...................] recovery = 6.2% (329984/5238720) finish=1.2min speed=65996K/sec
unused devices: <none>
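To follow the resynchronization progress continuously, you can, for example, re-run the status command every couple of seconds with watch:
# watch -n 2 cat /proc/mdstat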
If we now mount our RAID
# mount /dev/md0 /mnt/raid/
we'll see that the file we created earlier is still present and nothing is missing:
# ls /mnt/raid/
lost+found test.txt
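Once the resynchronization has finished, the array status should again show both members as active, similar to the output we saw right after the initial build (with the replacement disk in place of sdb1):
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdd1[2] sdc1[1]
5238720 blocks super 1.2 [2/2] [UU]
unused devices: <none>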