Using RAID 5 on CentOS 7.x

1. Install the mdadm package
# yum install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev/sd'
Disk /dev/sda: 10.7 GB, 10737418240 bytes
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
Disk /dev/sde: 21.5 GB, 21474836480 bytes

3. Create the disk partitions
# fdisk /dev/sdb


Repeat the steps above for the remaining disks:
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create --verbose --auto=yes /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: size set to 20955136K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
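
The array starts rebuilding (syncing parity) in the background as soon as it is created. To watch the progress while it runs, /proc/mdstat can be polled, for example:
# cat /proc/mdstat
# watch -n 5 cat /proc/mdstat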

5. Check the result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 42.9 GB, 42916118528 bytes, 83820544 sectors

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Jun 30 03:31:44 2016
     Raid Level : raid5
     Array Size : 41910272 (39.97 GiB 42.92 GB)
  Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Jun 30 03:36:27 2016
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 47% complete

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : 40801919:fa833719:77db4a5b:bd3e0c50
         Events : 10

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      spare rebuilding   /dev/sdd

6. Format the partition (CentOS 7 uses xfs)
# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=256    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

7. Check the UUID of /dev/md0
# blkid | grep /dev/md0
/dev/md0: UUID="fe9ac611-bb4a-4d85-87af-2998f6213cb8" TYPE="xfs"

8. Create the mount point and mount it
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5
# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.5G  1.3G  7.2G  16% /
devtmpfs                 983M     0  983M   0% /dev
tmpfs                    993M     0  993M   0% /dev/shm
tmpfs                    993M  8.7M  985M   1% /run
tmpfs                    993M     0  993M   0% /sys/fs/cgroup
/dev/sda1                497M  153M  345M  31% /boot
tmpfs                    199M     0  199M   0% /run/user/0
/dev/md0                  40G   33M   40G   1% /mnt/raid5

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=fe9ac611-bb4a-4d85-87af-2998f6213cb8 /mnt/raid5                   xfs     defaults        0 0
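
Before relying on the new entry at the next boot, one simple way to test it is to unmount the array and remount everything from fstab:
# umount /mnt/raid5
# mount -a
# df -h | grep raid5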

10. Edit the /etc/mdadm.conf configuration file
# mdadm --detail --scan --verbose > /etc/mdadm.conf
# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=40801919:fa833719:77db4a5b:bd3e0c50
   devices=/dev/sdb,/dev/sdc,/dev/sdd

Using vsftpd FTP Server on CentOS 6.x

1. Install the vsftpd FTP Server
# yum install -y vsftpd

2. Edit the /etc/vsftpd/vsftpd.conf configuration file
# grep -v ^# /etc/vsftpd/vsftpd.conf
anonymous_enable=No
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd/chroot_list
listen=YES
pasv_enable=YES
pasv_min_port=5000
pasv_max_port=6000
use_localtime=YES

pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES

3. Allow root to log in
# sed -i 's/root/#root/' /etc/vsftpd/ftpusers
# sed -i 's/root/#root/' /etc/vsftpd/user_list

4. Keep users chrooted to their home directories; root is still allowed to move around
# echo root > /etc/vsftpd/chroot_list

5. SELinux settings for the vsftpd FTP Server
# setsebool -P ftp_home_dir  on
# setsebool -P allow_ftpd_full_access  on

6. Enable the service at boot
# chkconfig --level 3 vsftpd on

7. Start the vsftpd FTP Server
# service vsftpd start

8. Check that the FTP Server started correctly
# netstat -ant | grep :21
tcp        0      0 0.0.0.0:21                  0.0.0.0:*                   LISTEN

9. Firewall settings
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5000:6000 -j ACCEPT
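
The rule above only opens the passive-mode data ports. Assuming the INPUT chain otherwise blocks new connections, the FTP control port needs a rule as well, and the rules can be saved so they survive a reboot:
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
# service iptables save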

Using Samba 4 on CentOS 6.x

1. Search for the Samba packages
# yum search samba | grep ^samba
samba-client.x86_64 : Samba client programs
samba-common.i686 : Files used by both Samba servers and clients
samba-common.x86_64 : Files used by both Samba servers and clients
samba-doc.x86_64 : Documentation for the Samba suite
samba-glusterfs.x86_64 : Samba VFS module for GlusterFS
samba-swat.x86_64 : The Samba SMB server Web configuration program
samba-winbind.x86_64 : Samba winbind
samba-winbind-clients.i686 : Samba winbind clients
samba-winbind-clients.x86_64 : Samba winbind clients
samba-winbind-krb5-locator.x86_64 : Samba winbind krb5 locator
samba4-client.x86_64 : Samba client programs
samba4-common.x86_64 : Files used by both Samba servers and clients
samba4-devel.x86_64 : Developer tools for Samba libraries
samba4-libs.x86_64 : Samba libraries
samba4-python.x86_64 : Samba Python libraries
samba4-test.x86_64 : Testing tools for Samba servers and clients
samba4-winbind.x86_64 : Samba winbind
samba4-winbind-clients.x86_64 : Samba winbind clients
samba4-winbind-krb5-locator.x86_64 : Samba winbind krb5 locator
samba.x86_64 : Server and Client software to interoperate with Windows machines
samba-domainjoin-gui.x86_64 : Domainjoin GUI
samba-winbind-devel.i686 : Developer tools for the winbind library
samba-winbind-devel.x86_64 : Developer tools for the winbind library
samba4.x86_64 : Server and Client software to interoperate with Windows machines
samba4-dc.x86_64 : AD Domain Controller placeholder package.
samba4-dc-libs.x86_64 : AD Domain Controller libraries placeholder package.
samba4-pidl.x86_64 : Perl IDL compiler

2. Install Samba 4
# yum install -y samba4

3. Edit the /etc/samba/smb.conf configuration file
# cat /etc/samba/smb.conf | grep -E -v '^#|^;'
[global]
        workgroup = HOME
        server string = Samba Server Version %v
        # log files split per-machine:
        log file = /var/log/samba/log.%m
        # maximum size of 50KB per log file, then rotate:
        max log size = 50

        security = user
        passdb backend = tdbsam

[homes]
        comment = Home Directories
        browseable = no
        writable = yes
        valid users = %S
        create mode = 0664
        directory mode = 0775
        veto files=/.*/

4. Test the configuration file
# testparm
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[homes]"
Loaded services file OK.
Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

# Global parameters
[global]
        workgroup = HOME
        server string = Samba Server Version %v
        security = USER
        log file = /var/log/samba/log.%m
        max log size = 50
        idmap config * : backend = tdb

[homes]
        comment = Home Directories
        valid users = %S
        read only = No
        create mask = 0664
        directory mask = 0775
        veto files = /.*/
        browseable = No

5. Create the user's Samba password
# /usr/bin/pdbedit -a t850008
new password:
retype new password:
Unix username:        t850008
NT username:
Account Flags:        [U          ]
User SID:             S-1-5-21-1562595748-815096285-1647261660-1000
Primary Group SID:    S-1-5-21-1562595748-815096285-1647261660-513
Full Name:
Home Directory:       \\localhost\t850008
HomeDir Drive:
Logon Script:
Profile Path:         \\localhost\t850008\profile
Domain:               LOCALHOST
Account desc:
Workstations:
Munged dial:
Logon time:           0
Logoff time:          Wed, 06 Feb 2036 23:06:39 CST
Kickoff time:         Wed, 06 Feb 2036 23:06:39 CST
Password last set:    Wed, 29 Jun 2016 09:06:19 CST
Password can change:  Wed, 29 Jun 2016 09:06:19 CST
Password must change: never
Last bad password   : 0
Bad password count  : 0
Logon hours         : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
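
A quick way to confirm the account can authenticate is to list the shares with smbclient (assuming the samba4-client package shown in the search above is installed; t850008 is just the example account created here):
# smbclient -L localhost -U t850008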

6. SELinux settings for the Samba Server
# setsebool -P samba_enable_home_dirs on
# chcon -R -t samba_share_t /home/homework
# chcon -R -t samba_share_t /home/share
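
chcon only changes the label until the next filesystem relabel. If the label should persist, the rule can instead be recorded with semanage (from policycoreutils-python, if installed), using /home/share from above as the example path:
# semanage fcontext -a -t samba_share_t "/home/share(/.*)?"
# restorecon -R /home/share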

7. Start the Samba Server
# /etc/init.d/smb start
# /etc/init.d/nmb start

8. Check that the Samba Server started correctly
# netstat -an | grep -E ':137|:138|:139|:445'
tcp        0      0 0.0.0.0:445                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:139                 0.0.0.0:*                   LISTEN
tcp        0      0 :::445                      :::*                        LISTEN
tcp        0      0 :::139                      :::*                        LISTEN
udp        0      0 0.0.0.0:137                 0.0.0.0:*
udp        0      0 0.0.0.0:138                 0.0.0.0:*

9. Enable the Samba Server at boot
# chkconfig --level 3 smb on
# chkconfig --level 3 nmb on

10. Firewall settings
Allow the internal network 192.168.1.0/24
# iptables -A INPUT -s 192.168.1.0/24 -m state --state NEW -m udp -p udp --dport 137 -j ACCEPT
# iptables -A INPUT -s 192.168.1.0/24 -m state --state NEW -m udp -p udp --dport 138 -j ACCEPT
# iptables -A INPUT -s 192.168.1.0/24 -m state --state NEW -m tcp -p tcp --dport 139 -j ACCEPT
# iptables -A INPUT -s 192.168.1.0/24 -m state --state NEW -m tcp -p tcp --dport 445 -j ACCEPT
# iptables -A INPUT -s 192.168.1.0/24 -m state --state NEW -m udp -p udp --dport 445 -j ACCEPT
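
These rules only live in the running kernel; on CentOS 6 they can be made persistent across reboots with:
# service iptables save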

Using RAID 5 on CentOS 6.x

1. Install the mdadm package
# yum install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev/sd'
Disk /dev/sda: 10.7 GB, 10737418240 bytes
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
Disk /dev/sde: 21.5 GB, 21474836480 bytes

3. Create the disk partitions
# fdisk /dev/sdb


Repeat the steps above for the remaining disks:
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create --verbose --auto=yes /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: /dev/sde appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sde but will be lost or
       meaningless after creating array
mdadm: size set to 20955136K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 64.4 GB, 64374177792 bytes

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      62865408 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  2.8% (607044/20955136) finish=6.7min speed=50587K/sec

unused devices: <none>

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jun 29 08:16:51 2016
     Raid Level : raid5
     Array Size : 62865408 (59.95 GiB 64.37 GB)
  Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Jun 29 08:17:14 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 6% complete

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : 17b9df4d:e3542df5:34c1a172:298a07a5
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      spare rebuilding   /dev/sde

6. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
3932160 inodes, 15716352 blocks
785817 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
480 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

7. Create the mount point and mount it
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      8.3G 1022M  6.9G  13% /
tmpfs                 931M     0  931M   0% /dev/shm
/dev/sda1             477M   63M  389M  14% /boot
/dev/md0               59G   52M   56G   1% /mnt/raid5

8. Check the UUID of /dev/md0
# blkid | grep /dev/md0
/dev/md0: UUID="a24bbe2b-c0f1-4417-99d9-866ea1f2a33d" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=a24bbe2b-c0f1-4417-99d9-866ea1f2a33d /mnt/raid5                   ext4    defaults        1 1
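
Since the next steps reboot the machine, the entry can be checked first; one way, re-using the mount point created above:
# umount /mnt/raid5
# mount -a
# df -h | grep raid5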

10. Edit the /etc/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=localhost.localdomain:0 UUID=232bc54c:6583d975:ab90c836:78be7854
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde

11. Reboot the machine
# reboot

12. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sda: 10.7 GB, 10737418240 bytes
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        1306     9972736   8e  Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1               1        2610    20964793+  fd  Linux raid autodetect
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
/dev/sdc1               1        2610    20964793+  fd  Linux raid autodetect
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
/dev/sdd1               1        2610    20964793+  fd  Linux raid autodetect
Disk /dev/sde: 21.5 GB, 21474836480 bytes
/dev/sde1               1        2610    20964793+  fd  Linux raid autodetect

13. Check that the array mounted correctly
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      8.3G 1022M  6.9G  13% /
tmpfs                 931M     0  931M   0% /dev/shm
/dev/sda1             477M   63M  389M  14% /boot
/dev/md0               59G   52M   56G   1% /mnt/raid5

Making the FreeBSD Terminal Colorful, Another Chapter

In the earlier post 頭城國小資訊組 | 讓 FreeBSD 的終端機也多彩多姿 the terminal was set up to show colors, but the directory color is quite dark and hard to read.

Searching around the web turned up these articles for reference:
FreeBSD: Enable Colorized ls Output
How To Enable ls Color?
color ls in FreeBSD (in the Bash shell) | Jared Evans Global Microbrand
ls 顏色設定 (in Bash shell) – Tsung's Blog

How to set it up:
Using the bash shell as an example,
add the following lines to /etc/profile:
# vim /etc/profile
export LSCOLORS="gxfxcxdxcxegedabagacad"
alias ls='ls -GF'

Make the settings take effect
# source /etc/profile
After the change:

I never quite understood how the LSCOLORS value works until I read Mac 讓終端機 ls 有顏色 « Soda Hau's Note, which finally made it clear!

The string after LSCOLORS assigns colors in a fixed order (11 entries in total); the order is:
    directory
    symbolic link
    socket
    pipe
    executable
    block special
    character special
    executable with setuid bit set
    executable with setgid bit set
    directory writable to others, with sticky bit
    directory writable to others, without sticky bit

The corresponding color codes are:
    a -> black
    b -> red
    c -> green
    d -> brown
    e -> blue
    f -> magenta
    g -> cyan
    h -> light grey

    A -> bold black, usually shows up as dark grey
    B -> bold red
    C -> bold green
    D -> bold brown, usually shows up as yellow
    E -> bold blue
    F -> bold magenta
    G -> bold cyan
    H -> bold light grey; looks like bright white
    x -> default foreground or background

Each file type takes two characters: the text color and the background color. Taking LSCOLORS="gxfxcxdxcxegedabagacad" as an example,
the leading gx sets directories to cyan text on the default foreground/background colors;
            fx is the symbolic link setting: magenta text, default background;
            cx is the socket setting: green text, default background;
            dx is the pipe setting: brown text, default background;
            cx is the executable setting: green text, default background;
            eg is the block special setting: blue text on a cyan background;
            ed is the character special setting: blue text on a brown background;
            ab is the setting for executables with the setuid bit set: black text on a red background;
            ag is the setting for executables with the setgid bit set: black text on a cyan background;
            ac is the setting for directories writable to others, with sticky bit: black text on a green background;
            ad is the setting for directories writable to others, without sticky bit: black text on a brown background.

Alternatively, the string can be generated on the LSCOLORS Generator website, which is simpler and more convenient!
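
The same idea carries over to FreeBSD's default csh/tcsh shell; a minimal sketch, assuming the lines go into the user's ~/.cshrc instead of /etc/profile:
setenv LSCOLORS gxfxcxdxcxegedabagacad
alias ls ls -GF

Then run source ~/.cshrc to apply it to the current session.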

Creating RAID 10 on Debian Linux

Reference:
Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6

OS: Debian Linux 5.8.0
HDD:
10G*1 Debian Linux System
20G*4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create the disk partitions
# fdisk /dev/sdb


Changing the partition type to fd is not strictly required

Repeat the steps above for the remaining disks:
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the RAID 10 status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:26:44 2016
     Raid Level : raid10
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:27:37 2016
          State : clean, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

  Resync Status : 27% complete

           Name : debian:0  (local to host debian)
           UUID : b0c27dbd:1ddbb962:4bc7fbd4:e072ba41
         Events : 4

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1

# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41908224 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [=>...................]  resync =  5.7% (2400000/41908224) finish=3.2min speed=200000K/sec

unused devices: <none>

# fdisk -l | grep /dev/md0
Disk /dev/md0: 40 GiB, 42914021376 bytes, 83816448 sectors

6. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 66244a88-5af2-4ab8-a274-2256649d0413
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

7. Create the mount point and mount it
# mkdir /mnt/raid10
# mount /dev/md0 /mnt/raid10
# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

8. Check the UUID of /dev/md0
# blkid | grep /dev/md0
/dev/md0: UUID="66244a88-5af2-4ab8-a274-2256649d0413" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=66244a88-5af2-4ab8-a274-2256649d0413 /mnt/raid10               ext4    errors=remount-ro 0       0

10. Edit the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=debian:0 UUID=b0c27dbd:1ddbb962:4bc7fbd4:e072ba41
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
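
Debian keeps a copy of mdadm.conf inside the initramfs, so after editing the file the image is usually regenerated so the array is assembled with the expected name at boot; a hedged follow-up step:
# update-initramfs -u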

11. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G fd Linux raid autodetect

12. Check that the array mounted correctly
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

Alternative approach: build two RAID 1 arrays first, then combine them into a RAID 0

1. Create the RAID 1 arrays
# mdadm --create --verbose /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
mdadm: size set to 20954112K
mdadm: array /dev/md1 started.
# mdadm --create --verbose /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
mdadm: size set to 20954112K
mdadm: array /dev/md2 started.

# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      20954112 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc1[1] sdb1[0]
      20954112 blocks super 1.2 [2/2] [UU]

unused devices: <none>

2. Create the RAID 0 array
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

# cat /proc/mdstat
Personalities : [raid1] [raid0]
md0 : active raid0 md2[1] md1[0]
      41875456 blocks super 1.2 512k chunks

md2 : active raid1 sde1[1] sdd1[0]
      20954112 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc1[1] sdb1[0]
      20954112 blocks super 1.2 [2/2] [UU]

unused devices: <none>

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 11:15:34 2016
     Raid Level : raid0
     Array Size : 41875456 (39.94 GiB 42.88 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:15:34 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : a55dec26:5dcd723f:4c8d15d4:2de2d739
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       9        1        0      active sync   /dev/md1
       1       9        2        1      active sync   /dev/md2

3. Edit the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=debian:1 UUID=ceac80b2:8ed44990:9927f0ab:03db076a
   devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=1.2 name=debian:2 UUID=2bca9bb2:b520fedb:d23a38da:7572c357
   devices=/dev/sdd1,/dev/sde1
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=1.2 name=debian:0 UUID=a55dec26:5dcd723f:4c8d15d4:2de2d739
   devices=/dev/md1,/dev/md2

4. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10468864 4k blocks and 2621440 inodes
Filesystem UUID: be4f4dc6-3729-4bb0-ab86-9fbd654eb882
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

5. Create the mount point and mount it
# mkdir /mnt/raid10
# mount /dev/md0 /mnt/raid10
# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

6. Check the UUID of /dev/md0
# blkid | grep /dev/md0
/dev/md0: UUID="be4f4dc6-3729-4bb0-ab86-9fbd654eb882" TYPE="ext4"

7. Edit /etc/fstab
# vim /etc/fstab
Add the following line (using the UUID reported by blkid in step 6):
UUID=be4f4dc6-3729-4bb0-ab86-9fbd654eb882 /mnt/raid10               ext4    errors=remount-ro 0       0

Creating RAID 6 on Debian Linux: adding a spare disk

Reference:
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5

OS: Debian Linux 5.8.0
HDD:
10G*1 Debian Linux System
20G*5 (/dev/sdb, sdc, sdd, sde, sdf)

1. Check the current RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:39:54 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 27

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

2. Create a partition on /dev/sdf
# fdisk /dev/sdf

Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x8d30e7ab.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048): (press Enter twice to accept the defaults)
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039):

Created a new partition 1 of type 'Linux' and of size 20 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): wq
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

3. Add /dev/sdf1 to /dev/md0
# mdadm --add /dev/md0 /dev/sdf1
mdadm: added /dev/sdf1

4. Check the RAID status again; there is now one extra spare device
# mdadm -D /dev/md0
/dev/md0:

        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:44:37 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 28

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1

5. Simulate a failure of /dev/sdd1
# mdadm --manage --fail /dev/md0 /dev/sdd1
# mdadm --manage --set-faulty /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0

6. Check the current RAID status; the array automatically brings the spare /dev/sdf1 into service
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:47:46 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 35% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 35

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       81        2      spare rebuilding   /dev/sdf1
       3       8       65        3      active sync   /dev/sde1

       2       8       49        -      faulty   /dev/sdd1

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdd1[2](F) sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]
      [================>....]  recovery = 80.8% (16943104/20954112) finish=0.4min speed=161494K/sec

unused devices: <none>

7. Remove the failed disk
# mdadm --manage --remove /dev/md0 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

8. The spare can also be specified directly when the array is created
# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 -x 1 /dev/sdf1
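
Here -x 1 marks one of the listed devices as a spare. A quick way to confirm the spare was registered afterwards:
# mdadm -D /dev/md0 | grep -i spare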

SoftRAID failure and recovery

Reference:
磁碟管理:SoftRAID 與 LVM 綜合實做應用(上)

Using the Debian Linux RAID 6 array as the example
1. The original RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:36:08 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

2. Simulate a failure of /dev/sdd1
# mdadm --manage --fail /dev/md0 /dev/sdd1
# mdadm --manage --set-faulty /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0

3. Check the current RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:12:01 2016
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       0        0        4      removed
       3       8       65        3      active sync   /dev/sde1

       2       8       49        -      faulty   /dev/sdd1

The (F) after sdd1[2] marks the failed disk; the [4/3] and [UU_U] at the end of the md0 line show that one of the four disks is down
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdb1[0] sde1[3] sdd1[2](F) sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]

unused devices: <none>

4. Remove the failed disk
# mdadm --manage --remove /dev/md0 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0

5. Check the RAID status after removing /dev/sdd1
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:20:14 2016
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       0        0        4      removed
       3       8       65        3      active sync   /dev/sde1

6. Create a partition on the newly added /dev/sdf disk
# fdisk /dev/sdf

7. Add the new /dev/sdf1 to the RAID
# mdadm --manage --add /dev/md0 /dev/sdf1
mdadm: added /dev/sdf1

8. Check the RAID status again
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:29:08 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 49% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       81        2      spare rebuilding   /dev/sdf1
       3       8       65        3      active sync   /dev/sde1

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]
      [=============>.......]  recovery = 67.1% (14066688/20954112) finish=0.9min speed=124729K/sec

unused devices: <none>

mdadm commands

# mdadm --help
mdadm is used for building, managing, and monitoring
Linux md devices (aka RAID arrays)
Usage: mdadm --create device options...
            Create a new array from unused devices.
       mdadm --assemble device options...
            Assemble a previously created array.
       mdadm --build device options...
            Create or assemble an array without metadata.
       mdadm --manage device options...
            make changes to an existing array.
       mdadm --misc options... devices
            report on or modify various md related devices.
       mdadm --grow options device
            resize/reshape an active array
       mdadm --incremental device
            add/remove a device to/from an array as appropriate
       mdadm --monitor options...
            Monitor one or more array for significant changes.

Commonly used commands:
# mdadm --create --help
Usage:  mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
 --level=        -l   : raid level: 0,1,4,5,6,10,linear,multipath and synonyms
 --raid-devices= -n   : number of active devices in array
Example:
# mdadm --create /dev/md0 --level=stripe --raid-devices=4 /dev/sd[b-e]1
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# mdadm -E /dev/sd[b-e]
# mdadm --examine /dev/sd[b-e]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
/dev/sdb:
   MBR Magic : aa55
Partition[0] :     41940992 sectors at         2048 (type fd)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :     41940992 sectors at         2048 (type fd)
/dev/sdd:
   MBR Magic : aa55
Partition[0] :     41940992 sectors at         2048 (type fd)
/dev/sde:
   MBR Magic : aa55
Partition[0] :     41940992 sectors at         2048 (type fd)

Show RAID information
# mdadm --detail /dev/md0
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:36:08 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

# mdadm --detail --scan --verbose
ARRAY /dev/md/0 level=raid6 num-devices=4 metadata=1.2 name=debian:0 UUID=8f039d29:9179c09a:17a76417:e54c9dfa
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

Stop the RAID
# mdadm --manage --stop /dev/md0

Start (assemble) the RAID
# mdadm --assemble --run /dev/md0
# mdadm -A --run /dev/md0

Simulate a disk failure
# mdadm --manage /dev/md0 --fail /dev/sdd1
# mdadm --manage /dev/md0 --set-faulty /dev/sdd1

Add a disk to the RAID
# mdadm --manage /dev/md0 --add /dev/sdf1

Remove a disk from the RAID
# mdadm --manage /dev/md0 --remove /dev/sdd1
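
Grow an array by adding a device and then reshaping it to use the extra member (a sketch; /dev/sdg1 is an illustrative device name, and the filesystem still has to be enlarged separately afterwards):
# mdadm --manage /dev/md0 --add /dev/sdg1
# mdadm --grow /dev/md0 --raid-devices=5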

Creating RAID 6 on Debian Linux

Reference:
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5

OS: Debian Linux 5.8.0
HDD:
10G*1 Debian Linux System
20G*4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm

2. Check the current disk status
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create the disk partitions
# fdisk /dev/sdb


Changing the partition type to fd is not strictly required

Repeat the steps above for the remaining disks:
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 RAID array
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 40 GiB, 42914021376 bytes, 83816448 sectors
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:25:52 2016
          State : clean, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 32% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

6. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 25c4c294-0b13-4e71-928e-47e1b69f1219
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

7. Create the mount point and mount it
# mkdir /mnt/raid6
# mount /dev/md0 /mnt/raid6
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid6

8. Check the UUID of /dev/md0
# blkid | grep /dev/md0
/dev/md0: UUID="25c4c294-0b13-4e71-928e-47e1b69f1219" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line (mounting at the /mnt/raid6 directory created in step 7):
UUID=25c4c294-0b13-4e71-928e-47e1b69f1219 /mnt/raid6               ext4    errors=remount-ro 0       0

10. Edit the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid6 num-devices=4 metadata=1.2 name=debian:0 UUID=8f039d29:9179c09a:17a76417:e54c9dfa
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

11. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G fd Linux raid autodetect

12. Check that the array mounted correctly
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid6