Removing a RAID Array on CentOS 7.x

References:
6.3.5. Removing a RAID Device
mdadm软RAID的删除方法和注意事项 – 猴叔的博客 – 51CTO技术博客

1. Show the current RAID status
# mdadm --detail /dev/md0 | tail -n 4
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sdb1
       1       8       17        1      active sync   /dev/sdc1
       2       8       33        2      active sync   /dev/sdd1

2. Stop the RAID array
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0

3. Remove the RAID device
# mdadm --remove /dev/md0

4. Zero the superblocks
# mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
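
To double-check that the metadata is really gone, the members can be examined again with mdadm --examine (the same command used later in these notes); each device should now report that no md superblock is detected. A quick check, assuming the member devices above:
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1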

Creating RAID 5 on CentOS 7.x

1. Install the mdadm package
# yum install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev/sd'
Disk /dev/sda: 10.7 GB, 10737418240 bytes
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
Disk /dev/sde: 21.5 GB, 21474836480 bytes

3. Create disk partitions
# fdisk /dev/sdb


Repeat the steps above for the remaining disks (an sfdisk shortcut is sketched after these commands):
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde
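
Instead of walking through fdisk interactively on every disk, the partition table of the first disk can also be copied to the others with sfdisk. This is only a sketch of an alternative shortcut, not what the original steps used:
# sfdisk -d /dev/sdb | sfdisk /dev/sdc
# sfdisk -d /dev/sdb | sfdisk /dev/sdd
# sfdisk -d /dev/sdb | sfdisk /dev/sde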

4. Create the /dev/md0 RAID array
# mdadm --create --verbose --auto=yes /dev/md0 --level=5 --raid-devices=3 /dev/sd[b-d]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: size set to 20955136K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
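
The newly created array starts its initial parity build in the background. Its progress can be followed with /proc/mdstat (used throughout these notes), for example:
# watch -n 5 cat /proc/mdstat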

5. Check the build result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 42.9 GB, 42916118528 bytes, 83820544 sectors

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Jun 30 03:31:44 2016
     Raid Level : raid5
     Array Size : 41910272 (39.97 GiB 42.92 GB)
  Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Jun 30 03:36:27 2016
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 47% complete

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : 40801919:fa833719:77db4a5b:bd3e0c50
         Events : 10

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      spare rebuilding   /dev/sdd

6. Format the partition (CentOS 7 uses XFS instead)
# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=256    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
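
mkfs.xfs reads the RAID geometry from the md device and sets sunit/swidth accordingly (visible in the output above). To confirm the stripe alignment later, xfs_info can print the same values once the filesystem is mounted in step 8; a quick check, not part of the original procedure:
# xfs_info /mnt/raid5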

7. Check the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="fe9ac611-bb4a-4d85-87af-2998f6213cb8" TYPE="xfs"

8. Create a mount point and mount the filesystem
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5
# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.5G  1.3G  7.2G  16% /
devtmpfs                 983M     0  983M   0% /dev
tmpfs                    993M     0  993M   0% /dev/shm
tmpfs                    993M  8.7M  985M   1% /run
tmpfs                    993M     0  993M   0% /sys/fs/cgroup
/dev/sda1                497M  153M  345M  31% /boot
tmpfs                    199M     0  199M   0% /run/user/0
/dev/md0                  40G   33M   40G   1% /mnt/raid5

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=fe9ac611-bb4a-4d85-87af-2998f6213cb8 /mnt/raid5                   xfs     defaults        0 0
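
Before relying on the new /etc/fstab entry at boot, it can be tested without rebooting; a simple sanity check, assuming the filesystem from step 8 is still mounted:
# umount /mnt/raid5
# mount -a
# df -h | grep /mnt/raid5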

10. Generate the /etc/mdadm.conf configuration file
# mdadm --detail --scan --verbose > /etc/mdadm.conf
# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=40801919:fa833719:77db4a5b:bd3e0c50
   devices=/dev/sdb,/dev/sdc,/dev/sdd
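
If the array is not assembled automatically on the next boot, a common extra step on CentOS 7 (not shown in the original steps) is to rebuild the initramfs so it picks up the new /etc/mdadm.conf:
# dracut -f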

Creating RAID 5 on CentOS 6.x

1. Install the mdadm package
# yum install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev/sd'
Disk /dev/sda: 10.7 GB, 10737418240 bytes
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
Disk /dev/sde: 21.5 GB, 21474836480 bytes

3. Create disk partitions
# fdisk /dev/sdb


Repeat the steps above for the remaining disks
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create --verbose --auto=yes /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: /dev/sde appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sde but will be lost or
       meaningless after creating array
mdadm: size set to 20955136K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the build result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 64.4 GB, 64374177792 bytes

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      62865408 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  2.8% (607044/20955136) finish=6.7min speed=50587K/sec

unused devices: <none>

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jun 29 08:16:51 2016
     Raid Level : raid5
     Array Size : 62865408 (59.95 GiB 64.37 GB)
  Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Jun 29 08:17:14 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 6% complete

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : 17b9df4d:e3542df5:34c1a172:298a07a5
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      spare rebuilding   /dev/sde

6. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
3932160 inodes, 15716352 blocks
785817 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
480 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

7. Create a mount point and mount the filesystem
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      8.3G 1022M  6.9G  13% /
tmpfs                 931M     0  931M   0% /dev/shm
/dev/sda1             477M   63M  389M  14% /boot
/dev/md0               59G   52M   56G   1% /mnt/raid5

8. Check the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="a24bbe2b-c0f1-4417-99d9-866ea1f2a33d" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=a24bbe2b-c0f1-4417-99d9-866ea1f2a33d /mnt/raid5                   ext4    defaults        1 1

10. Generate the /etc/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=localhost.localdomain:0 UUID=232bc54c:6583d975:ab90c836:78be7854
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde

11. Reboot the system
# reboot

12. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sda: 10.7 GB, 10737418240 bytes
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        1306     9972736   8e  Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1               1        2610    20964793+  fd  Linux raid autodetect
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
/dev/sdc1               1        2610    20964793+  fd  Linux raid autodetect
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
/dev/sdd1               1        2610    20964793+  fd  Linux raid autodetect
Disk /dev/sde: 21.5 GB, 21474836480 bytes
/dev/sde1               1        2610    20964793+  fd  Linux raid autodetect

13. Verify the filesystem is mounted correctly
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      8.3G 1022M  6.9G  13% /
tmpfs                 931M     0  931M   0% /dev/shm
/dev/sda1             477M   63M  389M  14% /boot
/dev/md0               59G   52M   56G   1% /mnt/raid5

Creating RAID 10 on Debian Linux

References:
Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6

OS: Debian Linux 5.8.0
HDD:
10 GB x 1 (Debian Linux system)
20 GB x 4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create disk partitions
# fdisk /dev/sdb


It is not strictly necessary to change the partition type to fd.

Repeat the steps above for the remaining disks
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the RAID 10 status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:26:44 2016
     Raid Level : raid10
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:27:37 2016
          State : clean, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

  Resync Status : 27% complete

           Name : debian:0  (local to host debian)
           UUID : b0c27dbd:1ddbb962:4bc7fbd4:e072ba41
         Events : 4

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1

# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41908224 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [=>...................]  resync =  5.7% (2400000/41908224) finish=3.2min speed=200000K/sec

unused devices: <none>

# fdisk -l | grep /dev/md0
Disk /dev/md0: 40 GiB, 42914021376 bytes, 83816448 sectors

6. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 66244a88-5af2-4ab8-a274-2256649d0413
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

7. Create a mount point and mount the filesystem
# mkdir /mnt/raid10
# mount /dev/md0 /mnt/raid10
# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

8. Check the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="66244a88-5af2-4ab8-a274-2256649d0413" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=66244a88-5af2-4ab8-a274-2256649d0413 /mnt/raid10               ext4    errors=remount-ro 0       0

10. Generate the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=debian:0 UUID=b0c27dbd:1ddbb962:4bc7fbd4:e072ba41
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
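
On Debian, after appending to /etc/mdadm/mdadm.conf it is also common to refresh the initramfs so the array is assembled early during boot; an extra step, not part of the original list:
# update-initramfs -u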

11. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G fd Linux raid autodetect

12. Verify the filesystem is mounted correctly
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

An alternative approach: first create two RAID 1 arrays, then combine the two RAID 1 arrays into a RAID 0

1. Create the RAID 1 arrays
# mdadm --create --verbose /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
mdadm: size set to 20954112K
mdadm: array /dev/md1 started.
# mdadm --create --verbose /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
mdadm: size set to 20954112K
mdadm: array /dev/md2 started.

# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      20954112 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc1[1] sdb1[0]
      20954112 blocks super 1.2 [2/2] [UU]

unused devices: <none>

2. Create the RAID 0 array
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

# cat /proc/mdstat
Personalities : [raid1] [raid0]
md0 : active raid0 md2[1] md1[0]
      41875456 blocks super 1.2 512k chunks

md2 : active raid1 sde1[1] sdd1[0]
      20954112 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc1[1] sdb1[0]
      20954112 blocks super 1.2 [2/2] [UU]

unused devices: <none>

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 11:15:34 2016
     Raid Level : raid0
     Array Size : 41875456 (39.94 GiB 42.88 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:15:34 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : a55dec26:5dcd723f:4c8d15d4:2de2d739
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       9        1        0      active sync   /dev/md1
       1       9        2        1      active sync   /dev/md2

3. Generate the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=debian:1 UUID=ceac80b2:8ed44990:9927f0ab:03db076a
   devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=1.2 name=debian:2 UUID=2bca9bb2:b520fedb:d23a38da:7572c357
   devices=/dev/sdd1,/dev/sde1
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=1.2 name=debian:0 UUID=a55dec26:5dcd723f:4c8d15d4:2de2d739
   devices=/dev/md1,/dev/md2

4. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10468864 4k blocks and 2621440 inodes
Filesystem UUID: be4f4dc6-3729-4bb0-ab86-9fbd654eb882
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

5. Create a mount point and mount the filesystem
# mkdir /mnt/raid10
# mount /dev/md0 /mnt/raid10
# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

6. Check the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="be4f4dc6-3729-4bb0-ab86-9fbd654eb882" TYPE="ext4"

7. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=be4f4dc6-3729-4bb0-ab86-9fbd654eb882 /mnt/raid10               ext4    errors=remount-ro 0       0

Creating RAID 6 on Debian Linux: Adding a Spare Disk

References:
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5

OS: Debian Linux 5.8.0
HDD:
10 GB x 1 (Debian Linux system)
20 GB x 5 (/dev/sdb, sdc, sdd, sde, sdf)

1. Check the current RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:39:54 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 27

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

2. Create a partition on /dev/sdf
# fdisk /dev/sdf

Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x8d30e7ab.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048): press Enter twice to accept the defaults
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039):

Created a new partition 1 of type 'Linux' and of size 20 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): wq
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

3. Add /dev/sdf1 to /dev/md0
# mdadm --add /dev/md0 /dev/sdf1
mdadm: added /dev/sdf1

4. Check the RAID status again; there is now an additional spare device
# mdadm -D /dev/md0
/dev/md0:

        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:44:37 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 28

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1

5. Simulate a failure of /dev/sdd1
# mdadm --manage --fail /dev/md0 /dev/sdd1
# mdadm --manage --set-faulty /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0

6. Check the RAID status; the array automatically puts /dev/sdf1 into service
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:47:46 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 35% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 35

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       81        2      spare rebuilding   /dev/sdf1
       3       8       65        3      active sync   /dev/sde1

       2       8       49        -      faulty   /dev/sdd1

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdd1[2](F) sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]
      [================>....]  recovery = 80.8% (16943104/20954112) finish=0.4min speed=161494K/sec

unused devices: <none>

7. Remove the failed disk from the array
# mdadm --manage --remove /dev/md0 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

8. Alternatively, the spare can be specified directly when creating the array
# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 -x 1 /dev/sdf1
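
When the spare is supplied at creation time like this, it appears in /proc/mdstat with an (S) suffix (for example sdf1[4](S)) and in mdadm --detail as a spare; a quick way to confirm, using the same tools as above:
# cat /proc/mdstat
# mdadm -D /dev/md0 | grep -i spare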

SoftRAID Failure and Recovery

References:
磁碟管理:SoftRAID 與 LVM 綜合實做應用(上)

The following uses the Debian Linux RAID 6 array as an example.
1. Original RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:36:08 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

2. Simulate a failure of /dev/sdd1
# mdadm --manage --fail /dev/md0 /dev/sdd1
# mdadm --manage --set-faulty /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0

3. Check the current RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:12:01 2016
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       0        0        4      removed
       3       8       65        3      active sync   /dev/sde1

       2       8       49        -      faulty   /dev/sdd1

The (F) after sdd1[2] marks that disk as failed; [4/3] and [UU_U] on the md0 line below show that one of the four member disks is down.
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdb1[0] sde1[3] sdd1[2](F) sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]

unused devices: <none>

4. Remove the failed disk from the array
# mdadm --manage --remove /dev/md0 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0

5. Check the RAID status after /dev/sdd1 has been removed
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:20:14 2016
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       0        0        4      removed
       3       8       65        3      active sync   /dev/sde1

6. Create a partition on the newly added /dev/sdf disk
# fdisk /dev/sdf

7. Add the new /dev/sdf1 to the RAID array
# mdadm --manage --add /dev/md0 /dev/sdf1
mdadm: added /dev/sdf1

8. Check the RAID status again
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:29:08 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 49% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       81        2      spare rebuilding   /dev/sdf1
       3       8       65        3      active sync   /dev/sde1

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]
      [=============>......]  recovery = 67.1% (14066688/20954112) finish=0.9min speed=124729K/sec

unused devices: <none>

mdadm Commands

# mdadm --help
mdadm is used for building, managing, and monitoring
Linux md devices (aka RAID arrays)
Usage: mdadm --create device options...
            Create a new array from unused devices.
       mdadm --assemble device options...
            Assemble a previously created array.
       mdadm --build device options...
            Create or assemble an array without metadata.
       mdadm --manage device options...
            make changes to an existing array.
       mdadm --misc options... devices
            report on or modify various md related devices.
       mdadm --grow options device
            resize/reshape an active array
       mdadm --incremental device
            add/remove a device to/from an array as appropriate
       mdadm --monitor options...
            Monitor one or more array for significant changes.

Commonly used commands:
# mdadm --create --help
Usage:  mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
 --level=      -l   : raid level: 0,1,4,5,6,10,linear,multipath and synonyms
  --raid-devices= -n : number of active devices in array
Example:
# mdadm --create /dev/md0 --level=stripe --raid-devices=4 /dev/sd[b-e]1
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# mdadm -E /dev/sd[b-e]
# mdadm --examine /dev/sd[b-e]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
/dev/sdb:
   MBR Magic : aa55
Partition[0] :     41940992 sectors at         2048 (type fd)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :     41940992 sectors at         2048 (type fd)
/dev/sdd:
   MBR Magic : aa55
Partition[0] :     41940992 sectors at         2048 (type fd)
/dev/sde:
   MBR Magic : aa55
Partition[0] :     41940992 sectors at         2048 (type fd)

Show RAID information
# mdadm --detail /dev/md0
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:36:08 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

# mdadm --detail --scan --verbose
ARRAY /dev/md/0 level=raid6 num-devices=4 metadata=1.2 name=debian:0 UUID=8f039d29:9179c09a:17a76417:e54c9dfa
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

Stop a RAID array
# mdadm --manage --stop /dev/md0

Start (assemble) a RAID array
# mdadm --assemble --run /dev/md0
# mdadm -A --run /dev/md0
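
If the arrays are already recorded in mdadm.conf, they can all be assembled in one go; standard mdadm usage, though not shown in the original notes:
# mdadm --assemble --scan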

Simulate a disk failure
# mdadm --manage /dev/md0 --fail /dev/sdd1
# mdadm --manage /dev/md0 --set-faulty /dev/sdd1

Add a disk to a RAID array
# mdadm --manage /dev/md0 --add /dev/sdf1

Remove a disk from a RAID array
# mdadm --manage /dev/md0 --remove /dev/sdd1
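
The --grow mode listed in the help output above can, for example, turn an added disk into an active member by reshaping the array onto it; a sketch only, and the reshape progress should be watched in /proc/mdstat:
# mdadm --manage /dev/md0 --add /dev/sdf1
# mdadm --grow /dev/md0 --raid-devices=5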

Creating RAID 6 on Debian Linux

References:
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5

OS: Debian Linux 5.8.0
HDD:
10 GB x 1 (Debian Linux system)
20 GB x 4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create disk partitions
# fdisk /dev/sdb


It is not strictly necessary to change the partition type to fd.

Repeat the steps above for the remaining disks
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the build result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 40 GiB, 42914021376 bytes, 83816448 sectors
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:25:52 2016
          State : clean, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 32% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

6. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 25c4c294-0b13-4e71-928e-47e1b69f1219
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

7. Create a mount point and mount the filesystem
# mkdir /mnt/raid6
# mount /dev/md0 /mnt/raid6
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid6

8. Check the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="25c4c294-0b13-4e71-928e-47e1b69f1219" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=25c4c294-0b13-4e71-928e-47e1b69f1219 /mnt/raid6               ext4    errors=remount-ro 0       0

10. Generate the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid6 num-devices=4 metadata=1.2 name=debian:0 UUID=8f039d29:9179c09a:17a76417:e54c9dfa
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

11. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G fd Linux raid autodetect

12. Verify the filesystem is mounted correctly
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid6

Creating RAID 5 on Debian Linux

References:
Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4
Debian RAID-5 (效能與備份) | JR 的網路日誌
[筆記]Linux 軟體 RAID 5 實作 @ Paul’s Blog :: 痞客邦 PIXNET ::

OS: Debian Linux 5.8.0
HDD:
10 GB x 1 (Debian Linux system)
20 GB x 4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create disk partitions
# fdisk /dev/sdb


It is not strictly necessary to change the partition type to fd.

Repeat the steps above for the remaining disks
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# mdadm -C /dev/md0 -l=5 -n=4 /dev/sd[b-e]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the build result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 60 GiB, 64371032064 bytes, 125724672 sectors

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jun 27 19:12:21 2016
     Raid Level : raid5
     Array Size : 62862336 (59.95 GiB 64.37 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jun 27 19:14:47 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 40% complete

           Name : debian:0  (local to host debian)
           UUID : 432ac899:b8c0fceb:26f9df48:bba894aa
         Events : 7

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      spare rebuilding   /dev/sde1

6. Format the partition
# mkfs -t ext4 /dev/md0
Creating filesystem with 15715584 4k blocks and 3932160 inodes
Filesystem UUID: c416cc70-98ea-4eb5-b997-b93fd2410d35
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

7. Create a mount point and mount the filesystem
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         59G   52M   56G   1% /mnt/raid5

8. Check the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="c416cc70-98ea-4eb5-b997-b93fd2410d35" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=c416cc70-98ea-4eb5-b997-b93fd2410d35 /mnt/raid5               ext4    errors=remount-ro 0       0

10. Generate the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=debian:0 UUID=432ac899:b8c0fceb:26f9df48:bba894aa
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

11. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G fd Linux raid autodetect

12. Verify the filesystem is mounted correctly
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         59G   52M   56G   1% /mnt/raid5

Creating RAID 0 on Debian Linux

References:
Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2
Debian RAID-0 (等量模式) | JR 的網路日誌

OS: Debian Linux 5.8.0
HDD:
10 GB x 1 (Debian Linux system)
20 GB x 4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm


2. Check the current disk status
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create disk partitions
# fdisk /dev/sdb


It is not strictly necessary to change the partition type to fd.

Repeat the steps above for the remaining disks
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create /dev/md0 --level=stripe --raid-devices=4 /dev/sd[b-e]1

# mdadm -C /dev/md0 -l raid0 -n 4 /dev/sd[b-e]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the build result
# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      83816448 blocks super 1.2 512k chunks

unused devices: <none>

# mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 7d559d15:91cc1bec:54dcd941:8f10e5ff
           Name : debian:0  (local to host debian)
  Creation Time : Mon Jun 27 19:09:38 2016
     Raid Level : raid0
   Raid Devices : 4

 Avail Dev Size : 41908224 (19.98 GiB 21.46 GB)
    Data Offset : 32768 sectors
   Super Offset : 8 sectors
   Unused Space : before=32680 sectors, after=0 sectors
          State : clean
    Device UUID : b77c8a2f:aad8c146:6da755a5:6f3db3e3

    Update Time : Mon Jun 27 19:09:38 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 37037bb0 - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

# fdisk -l | grep /dev/md0
Disk /dev/md0: 80 GiB, 85828042752 bytes, 167632896 sectors

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jun 27 19:09:38 2016
     Raid Level : raid0
     Array Size : 83816448 (79.93 GiB 85.83 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jun 27 19:09:38 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 7d559d15:91cc1bec:54dcd941:8f10e5ff
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

6. Partition the RAID device
# fdisk /dev/md0

7. Format the partition
# mkfs.ext4 /dev/md0p1
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 20953600 4k blocks and 5242880 inodes
Filesystem UUID: a89c1629-75b4-4660-b5cd-cbcf72595fe8
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

8. Create a mount point and mount the filesystem
# mkdir /mnt/raid0
# mount /dev/md0p1 /mnt/raid0
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0p1         79G   56M   75G   1% /mnt/raid0

9. Check the filesystem UUID
# blkid | grep /dev/md0p1
/dev/md0p1: UUID="b57de29c-9210-48bc-9ba6-1f5224feb42f" TYPE="ext4"

10. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=b57de29c-9210-48bc-9ba6-1f5224feb42f /mnt/raid0      ext4    errors=remount-ro 0       0

11. Generate the /etc/mdadm/mdadm.conf configuration file
# mdadm -E -s -v >> /etc/mdadm/mdadm.conf
or
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
# cat /etc/mdadm/mdadm.conf
ARRAY /dev/md/0  level=raid0 metadata=1.2 num-devices=4 UUID=7d559d15:91cc1bec:54dcd941:8f10e5ff name=debian:0
   devices=/dev/sde1,/dev/sdd1,/dev/sdc1,/dev/sdb1

12. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G 83 Linux
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G 83 Linux
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G 83 Linux
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G 83 Linux

13. Verify the filesystem is mounted correctly
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0p1       79G   56M   75G   1% /mnt/raid0