Trying Out the Banana Pi R1 – Switching Bananian Linux to the isc-dhcp DHCP Server

I originally installed dnsmasq as the DHCP server. With no firewall configured, Windows clients obtained IPs normally, but once the firewall was enabled, clients could no longer get an IP automatically. After a long round of troubleshooting with no luck — and having confirmed the firewall ruleset itself was fine — I simply switched to the isc-dhcp DHCP server.

1. Install the isc-dhcp-server package
# apt-get update
# apt-get install isc-dhcp-server

Setting up isc-dhcp-server (4.3.1-6+deb8u2) ...
Generating /etc/default/isc-dhcp-server...
[FAIL] Starting ISC DHCP server: dhcpd[....] check syslog for diagnostics. ... failed!
invoke-rc.d: initscript isc-dhcp-server, action "start" failed.

The failed message at the end of the installation appears simply because nothing has been configured yet.

2. Stop the running dnsmasq
# /etc/init.d/dnsmasq stop

3. Keep dnsmasq from starting at boot
# update-rc.d dnsmasq remove

4. Set the network interface isc-dhcp-server hands out IPs on
# vim /etc/default/isc-dhcp-server
INTERFACES="br0"

5. Edit the /etc/dhcp/dhcpd.conf configuration file
# cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.conf.$(date +%F)
# grep -vE '^$|^#' /etc/dhcp/dhcpd.conf
ddns-update-style none;
option domain-name "lwrt.org";
option domain-name-servers 168.95.1.1,140.111.66.1;
default-lease-time 7200;
max-lease-time 10800;
log-facility local7;
subnet 192.168.84.0 netmask 255.255.255.0 {
  range 192.168.84.101 192.168.84.120;
  option routers 192.168.84.1;
  option subnet-mask 255.255.255.0;
  option broadcast-address 192.168.84.255;
}
host passacaglia {
  hardware ethernet 61:62:63:64:d7:cc;
  fixed-address 192.168.84.101;
}

6. Test whether the configuration file is valid
# /usr/sbin/dhcpd -t
Internet Systems Consortium DHCP Server 4.3.1
Copyright 2004-2014 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /etc/dhcp/dhcpd.conf
Database file: /var/lib/dhcp/dhcpd.leases
PID file: /var/run/dhcpd.pid

7. Start the isc-dhcp DHCP server
# /etc/init.d/isc-dhcp-server start

8. Start isc-dhcp-server automatically at boot
# update-rc.d isc-dhcp-server defaults
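Once clients start requesting addresses, the active leases end up in /var/lib/dhcp/dhcpd.leases. A minimal sketch of listing the leased IPs with sed, run against a hypothetical sample lease rather than the live file:

```shell
# A hypothetical sample of the lease format dhcpd writes (illustration only)
cat > /tmp/dhcpd.leases.sample <<'EOF'
lease 192.168.84.105 {
  starts 4 2016/06/30 02:00:00;
  ends 4 2016/06/30 04:00:00;
  hardware ethernet 61:62:63:64:d7:cc;
}
EOF

# Keep only the "lease <ip> {" lines and strip everything but the address
sed -n 's/^lease \([0-9.]*\) {/\1/p' /tmp/dhcpd.leases.sample
```

Point the same sed at /var/lib/dhcp/dhcpd.leases on the router to see the real assignments.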

Comparing Compression Tools on Linux

Original file
$ ls -l UNUBeaconLogo128.png
-rw-r--r-- 1 bananapi bananapi 25102 Oct 12 17:16 UNUBeaconLogo128.png

Using the zip format
Compress
# zip UNUBeaconLogo128.png.zip UNUBeaconLogo128.png
Decompress
# unzip UNUBeaconLogo128.png.zip

Using the 7z format
Compress
# 7z a UNUBeaconLogo128.png.7z UNUBeaconLogo128.png
Decompress
# 7z x UNUBeaconLogo128.png.7z

Using the gz format
Compress (the original file is removed afterwards)
# gzip UNUBeaconLogo128.png
Decompress (the archive is removed afterwards)
# gzip -d UNUBeaconLogo128.png.gz
# gunzip UNUBeaconLogo128.png.gz

Using the xz format
Compress (the original file is removed afterwards)
# xz -z UNUBeaconLogo128.png
Decompress (the archive is removed afterwards)
# xz -d UNUBeaconLogo128.png.xz
# unxz UNUBeaconLogo128.png.xz

Overall comparison, using each tool's defaults with no extra compression options
$ ls -l UNUBeaconLogo128.png*
-rw-r--r-- 1 bananapi bananapi 25102 Oct 12 17:16 UNUBeaconLogo128.png
-rw-rw-r-- 1 bananapi bananapi 11322 Dec 24 13:54 UNUBeaconLogo128.png.7z
-rw-r--r-- 1 bananapi bananapi 11378 Oct 12 17:16 UNUBeaconLogo128.png.gz
-rw-r--r-- 1 bananapi bananapi 11244 Oct 12 17:16 UNUBeaconLogo128.png.xz
-rw-rw-r-- 1 bananapi bananapi 11529 Dec 24 13:54 UNUBeaconLogo128.png.zip

Compression ratio, best to worst: xz > 7z > gz > zip
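The ranking above can be sanity-checked on any file. A small sketch using only gzip and xz, which are assumed to be installed (-k, which keeps the original file, needs gzip 1.6 or newer):

```shell
# Build a reproducible test file, then compress it with gzip and xz.
# -k keeps the original; -f overwrites stale archives from earlier runs.
seq 1 1000 > /tmp/cmp_test.txt
gzip -kf /tmp/cmp_test.txt
xz   -kf /tmp/cmp_test.txt

# Print size and name for each result
for f in /tmp/cmp_test.txt /tmp/cmp_test.txt.gz /tmp/cmp_test.txt.xz; do
  printf '%7d %s\n' "$(wc -c < "$f")" "$f"
done
```

7z and zip can be added to the loop the same way when those packages are present.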

Using gz / bz2 / xz together with tar
Compress
# tar cvzf filename.tar.gz dirname
# tar cvjf filename.tar.bz2 dirname
# tar cvJf filename.tar.xz dirname

Decompress
# tar xvzf filename.tar.gz
# tar xvjf filename.tar.bz2
# tar xvJf filename.tar.xz
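A quick round-trip with the gzip variant, using a throwaway directory so no real data is at risk (-C avoids having to cd into the directory first):

```shell
# Pack a directory, delete it, then restore it from the archive
mkdir -p /tmp/tardemo/dirname
echo "hello" > /tmp/tardemo/dirname/a.txt
tar -C /tmp/tardemo -czf /tmp/tardemo/filename.tar.gz dirname   # c=create, z=gzip, f=file
rm -r /tmp/tardemo/dirname
tar -C /tmp/tardemo -xzf /tmp/tardemo/filename.tar.gz           # x=extract
cat /tmp/tardemo/dirname/a.txt
```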

sed Notes

References:
[轉貼] SED單行腳本快速參考 @ 胖虎的祕密基地 :: 痞客邦 PIXNET ::
sed 工具
阿旺的 Linux 開竅手冊
sed, a stream editor Examples
Sed – An Introduction and Tutorial

1. Delete blank lines
# sed -i '/^$/d' testfile
# sed -i '/./!d' testfile

2. Delete everything after the first blank line
# sed -i '/^$/q' testfile

3. Delete everything up to and including the first blank line
# sed -i '1,/^$/d' testfile

4. Delete lines containing a pattern
# sed -i '/pattern/d' testfile

# cat /tmp/testfile
1
2
3
4
5
# sed -i '/2/,/4/d' /tmp/testfile
# cat /tmp/testfile
1
5

5. Delete the first 10 lines of a file
# sed -i '1,10d' testfile

6. Delete the last line of a file
# sed -i '$d' testfile

7. Print lines 8-12
# sed -n '8,12p' testfile

8. Print only lines matching a pattern
# sed -n '/pattern/p' testfile
# sed '/pattern/!d' testfile
# grep pattern testfile

9. Print only lines NOT matching a pattern
# sed -n '/pattern/!p' testfile
# sed '/pattern/d' testfile
# grep -v pattern testfile

10. Replace several patterns at once
# sed -i 's/mysql/red/g;s/php/black/g' testfile

11. Insert 5 spaces at the start of every line
# sed -i 's/^/     /' testfile

12. Replace a string only on line n (e.g. 3s/php/red/ for line 3)
# sed -i 'ns/php/red/' testfile

13. Insert before a given line
# sed -i '2i 1234567890' testfile

14. Insert after a given line
# sed -i '2a 1234567890' testfile

15. Append after the last line
# sed -i '$a 1234567890' testfile

16. String replacement using a backreference
# sed -i 's/^\(anonymous_enable=\).*$/\1NO/' /etc/vsftpd/vsftpd.conf
# sed -i 's/^\(SELINUX=\).*$/\1disabled/' /etc/selinux/config
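Item 16's backreference substitution can be tried safely on a throwaway copy instead of the real configuration files:

```shell
# Work on a throwaway copy instead of the real /etc/vsftpd/vsftpd.conf
printf 'anonymous_enable=YES\n' > /tmp/vsftpd.conf.demo
# \(...\) captures "anonymous_enable=", and \1 puts it back with the new value
sed -i 's/^\(anonymous_enable=\).*$/\1NO/' /tmp/vsftpd.conf.demo
cat /tmp/vsftpd.conf.demo
```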

17. String replacement (the empty s// reuses the /foo/ address pattern)
# sed -i '/foo/ s//bar/g' testfile

18. String replacement within a given line range
# sed -i '34,38 s/ACCEPT/DROP/' /etc/ufw/before.rules

19. Extract the IP address
# ifconfig eth0
inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0
# ifconfig eth0 | grep 'inet ' | sed 's/^.*inet addr://g' | sed 's/ *Bcast.*$//g'
192.168.1.12
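The same pipeline can be tested offline by feeding it a saved copy of the ifconfig output line, so the result does not depend on a live interface:

```shell
# A saved copy of the old-style ifconfig line, so the result is reproducible
line='          inet addr:192.168.1.12  Bcast:192.168.1.255  Mask:255.255.255.0'
echo "$line" | sed 's/^.*inet addr://; s/ *Bcast.*$//'
```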

20. Multiple commands in one call
# sed -i 's/123/234/; s/四忠/四義/' list

21. Put the edit commands in a file
# cat sedscr
s/123/234/
s/四忠/四義/
# sed -i -f sedscr list

22. Delete every line from the first %post match to the end of the file
# sed -i '/%post/,$d' /tmp/anaconda-ks.cfg

23. Insert a line after a matching string
Insert a line after the one containing #PIDFILE:
# sed -i '/#PIDFILE/ a PIDFILE=/var/chroot/bind9/var/run/named/named.pid' /etc/init.d/bind9

24. Multiple substitutions with -e
# sed -i -e 's/123/234/' -e 's/四忠/四義/' list

25. Delete the last three characters of every line (one . per character)
# sed -i 's/...$//' testfile

26. Insert a blank line after every line
# sed -i G testfile

27. Append a string to the end of every line
# sed -i 's/$/@smail.ilc.edu.tw/' class3

Building RAID 10 on Debian Linux

Reference:
Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6

OS: Debian Linux 5.8.0
HDD:
10G x1: Debian Linux system disk
20G x4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm

2. Check the current disk layout
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create the disk partitions
# fdisk /dev/sdb

Changing the partition type to fd (Linux raid autodetect) is optional.

Repeat for the remaining disks:
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 array
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the RAID 10 status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:26:44 2016
     Raid Level : raid10
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:27:37 2016
          State : clean, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

  Resync Status : 27% complete

           Name : debian:0  (local to host debian)
           UUID : b0c27dbd:1ddbb962:4bc7fbd4:e072ba41
         Events : 4

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1

# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41908224 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [=>..................]  resync =  5.7% (2400000/41908224) finish=3.2min speed=200000K/sec

unused devices: <none>

# fdisk -l | grep /dev/md0
Disk /dev/md0: 40 GiB, 42914021376 bytes, 83816448 sectors
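The 40 GiB reported here follows from RAID 10 keeping two copies of every block: usable capacity is (number of disks / 2) × disk size. A quick arithmetic check:

```shell
# RAID 10 stores two copies of every block, so only half the raw space is usable
disks=4
size_gib=20
raid10_gib=$(( disks / 2 * size_gib ))
echo "RAID 10 usable capacity: ${raid10_gib} GiB"
```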

6. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 66244a88-5af2-4ab8-a274-2256649d0413
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

7. Create the mount point and mount
# mkdir /mnt/raid10
# mount /dev/md0 /mnt/raid10
# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

8. Look up the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="66244a88-5af2-4ab8-a274-2256649d0413" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=66244a88-5af2-4ab8-a274-2256649d0413 /mnt/raid10               ext4    errors=remount-ro 0       0

10. Edit the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=debian:0 UUID=b0c27dbd:1ddbb962:4bc7fbd4:e072ba41
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

11. Partition details
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G fd Linux raid autodetect

12. Verify the mount
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

Alternative approach: build two RAID 1 arrays first, then combine them into a RAID 0

1. Create the two RAID 1 arrays
# mdadm --create --verbose /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
mdadm: size set to 20954112K
mdadm: array /dev/md1 started.
# mdadm --create --verbose /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
mdadm: size set to 20954112K
mdadm: array /dev/md2 started.

# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      20954112 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc1[1] sdb1[0]
      20954112 blocks super 1.2 [2/2] [UU]

unused devices: <none>

2. Create the RAID 0 on top
# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

# cat /proc/mdstat
Personalities : [raid1] [raid0]
md0 : active raid0 md2[1] md1[0]
      41875456 blocks super 1.2 512k chunks

md2 : active raid1 sde1[1] sdd1[0]
      20954112 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc1[1] sdb1[0]
      20954112 blocks super 1.2 [2/2] [UU]

unused devices: <none>

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 11:15:34 2016
     Raid Level : raid0
     Array Size : 41875456 (39.94 GiB 42.88 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:15:34 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : a55dec26:5dcd723f:4c8d15d4:2de2d739
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       9        1        0      active sync   /dev/md1
       1       9        2        1      active sync   /dev/md2

3. Edit the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=debian:1 UUID=ceac80b2:8ed44990:9927f0ab:03db076a
   devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=1.2 name=debian:2 UUID=2bca9bb2:b520fedb:d23a38da:7572c357
   devices=/dev/sdd1,/dev/sde1
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=1.2 name=debian:0 UUID=a55dec26:5dcd723f:4c8d15d4:2de2d739
   devices=/dev/md1,/dev/md2

4. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10468864 4k blocks and 2621440 inodes
Filesystem UUID: be4f4dc6-3729-4bb0-ab86-9fbd654eb882
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

5. Create the mount point and mount
# mkdir /mnt/raid10
# mount /dev/md0 /mnt/raid10
# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid10

6. Look up the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="be4f4dc6-3729-4bb0-ab86-9fbd654eb882" TYPE="ext4"

7. Edit /etc/fstab
# vim /etc/fstab
Add the following line (the UUID must match the one blkid reported above):
UUID=be4f4dc6-3729-4bb0-ab86-9fbd654eb882 /mnt/raid10               ext4    errors=remount-ro 0       0

Building RAID 6 on Debian Linux – Adding a Spare Disk

Reference:
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5

OS: Debian Linux 5.8.0
HDD:
10G x1: Debian Linux system disk
20G x5 (/dev/sdb, sdc, sdd, sde, sdf)

1. Check the current RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:39:54 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 27

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

2. Partition the new /dev/sdf disk
# fdisk /dev/sdf

Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x8d30e7ab.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048): (press Enter to accept the defaults)
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039):

Created a new partition 1 of type 'Linux' and of size 20 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): wq
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

3. Add /dev/sdf1 to /dev/md0
# mdadm --add /dev/md0 /dev/sdf1
mdadm: added /dev/sdf1

4. Check the RAID status again; there is now an extra spare device
# mdadm -D /dev/md0
/dev/md0:

        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:44:37 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 28

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1

5. Simulate a failure of /dev/sdd1 (--fail and --set-faulty are equivalent)
# mdadm --manage --fail /dev/md0 /dev/sdd1
# mdadm --manage --set-faulty /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0

6. Check the RAID status; the array automatically brings the spare /dev/sdf1 into service
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:47:46 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 35% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 35

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       81        2      spare rebuilding   /dev/sdf1
       3       8       65        3      active sync   /dev/sde1

       2       8       49        -      faulty   /dev/sdd1

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdd1[2](F) sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]
      [================>....]  recovery = 80.8% (16943104/20954112) finish=0.4min speed=161494K/sec

unused devices: <none>

7. Remove the failed disk
# mdadm --manage --remove /dev/md0 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

8. The spare can also be specified directly when creating the array
# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 -x 1 /dev/sdf1

SoftRAID Failure and Recovery

Reference:
磁碟管理:SoftRAID 與 LVM 綜合實做應用(上)

Using the Debian Linux RAID 6 array as the example
1. The original RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:36:08 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

2. Simulate a failure of /dev/sdd1 (--fail and --set-faulty are equivalent)
# mdadm --manage --fail /dev/md0 /dev/sdd1
# mdadm --manage --set-faulty /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0

3. Check the current RAID status
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:12:01 2016
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       0        0        4      removed
       3       8       65        3      active sync   /dev/sde1

       2       8       49        -      faulty   /dev/sdd1

The (F) after sdd1[2] marks the failed member; the [4/3] and [UU_U] on md0's last line show that one disk is down.
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdb1[0] sde1[3] sdd1[2](F) sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]

unused devices: <none>

4. Remove the failed disk
# mdadm --manage --remove /dev/md0 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0

5. Check the RAID status after removing /dev/sdd1
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:20:14 2016
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       0        0        4      removed
       3       8       65        3      active sync   /dev/sde1

6. Partition the newly added /dev/sdf disk
# fdisk /dev/sdf

7. Add the new /dev/sdf1 to the RAID
# mdadm --manage --add /dev/md0 /dev/sdf1
mdadm: added /dev/sdf1

8. Check the RAID status once more
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 11:29:08 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 49% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       81        2      spare rebuilding   /dev/sdf1
       3       8       65        3      active sync   /dev/sde1

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf1[4] sdb1[0] sde1[3] sdc1[1]
      41908224 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]
      [=============>.......]  recovery = 67.1% (14066688/20954112) finish=0.9min speed=124729K/sec

unused devices: <none>

Building RAID 6 on Debian Linux

Reference:
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5

OS: Debian Linux 5.8.0
HDD:
10G x1: Debian Linux system disk
20G x4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm

2. Check the current disk layout
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create the disk partitions
# fdisk /dev/sdb

Changing the partition type to fd (Linux raid autodetect) is optional.

Repeat for the remaining disks:
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 array
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 40 GiB, 42914021376 bytes, 83816448 sectors
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 28 10:25:06 2016
     Raid Level : raid6
     Array Size : 41908224 (39.97 GiB 42.91 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jun 28 10:25:52 2016
          State : clean, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 32% complete

           Name : debian:0  (local to host debian)
           UUID : 8f039d29:9179c09a:17a76417:e54c9dfa
         Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

6. Format the partition
# mkfs -t ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 25c4c294-0b13-4e71-928e-47e1b69f1219
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

7. Create the mount point and mount
# mkdir /mnt/raid6
# mount /dev/md0 /mnt/raid6
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  968M  7.9G  11% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.7M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid6

8. Look up the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="25c4c294-0b13-4e71-928e-47e1b69f1219" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line (the mount point in this section is /mnt/raid6):
UUID=25c4c294-0b13-4e71-928e-47e1b69f1219 /mnt/raid6               ext4    errors=remount-ro 0       0

10. Edit the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid6 num-devices=4 metadata=1.2 name=debian:0 UUID=8f039d29:9179c09a:17a76417:e54c9dfa
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

11. Partition details
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G fd Linux raid autodetect

12. Verify the mount
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         40G   48M   38G   1% /mnt/raid6

Building RAID 5 on Debian Linux

References:
Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4
Debian RAID-5 (效能與備份) | JR 的網路日誌
[筆記]Linux 軟體 RAID 5 實作 @ Paul’s Blog :: 痞客邦 PIXNET ::

OS: Debian Linux 5.8.0
HDD:
10G x1: Debian Linux system disk
20G x4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm

2. Check the current disk layout
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create the disk partitions
# fdisk /dev/sdb

Changing the partition type to fd (Linux raid autodetect) is optional.

Repeat for the remaining disks:
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde

4. Create the /dev/md0 array
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Or, using the short options:
# mdadm -C /dev/md0 -l 5 -n 4 /dev/sd[b-e]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the result
# fdisk -l | grep /dev/md0
Disk /dev/md0: 60 GiB, 64371032064 bytes, 125724672 sectors
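The 60 GiB here, versus 40 GiB for the RAID 6 array in the previous section, reflects the parity overhead: RAID 5 gives up one disk's worth of capacity and RAID 6 gives up two. A quick arithmetic check:

```shell
# RAID 5 spends one disk's worth of space on parity, RAID 6 spends two
disks=4
size_gib=20
raid5_gib=$(( (disks - 1) * size_gib ))
raid6_gib=$(( (disks - 2) * size_gib ))
echo "RAID 5 usable: ${raid5_gib} GiB, RAID 6 usable: ${raid6_gib} GiB"
```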

# mdadm –detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jun 27 19:12:21 2016
     Raid Level : raid5
     Array Size : 62862336 (59.95 GiB 64.37 GB)
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jun 27 19:14:47 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 40% complete

           Name : debian:0  (local to host debian)
           UUID : 432ac899:b8c0fceb:26f9df48:bba894aa
         Events : 7

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      spare rebuilding   /dev/sde1

6. Format the partition
# mkfs -t ext4 /dev/md0
Creating filesystem with 15715584 4k blocks and 3932160 inodes
Filesystem UUID: c416cc70-98ea-4eb5-b997-b93fd2410d35
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

7. Create the mount point and mount
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         59G   52M   56G   1% /mnt/raid5

8. Look up the filesystem UUID
# blkid | grep /dev/md0
/dev/md0: UUID="c416cc70-98ea-4eb5-b997-b93fd2410d35" TYPE="ext4"

9. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=c416cc70-98ea-4eb5-b997-b93fd2410d35 /mnt/raid5               ext4    errors=remount-ro 0       0

10. Edit the /etc/mdadm/mdadm.conf configuration file
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=debian:0 UUID=432ac899:b8c0fceb:26f9df48:bba894aa
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

11. Partition details
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G fd Linux raid autodetect
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G fd Linux raid autodetect

12. Verify the mount
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0         59G   52M   56G   1% /mnt/raid5
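The ~59G that df reports is expected: RAID 5 dedicates one member's worth of space to parity, so usable capacity is (n − 1) × member size. With four 20 GB members:

```shell
# RAID 5 usable capacity: one disk's worth of space holds parity.
disks=4
size_gb=20
usable=$(( (disks - 1) * size_gb ))
echo "${usable}G usable before filesystem overhead"
```

The remaining gap between 60G and the 59G/56G shown by df is ext4 metadata and the reserved-blocks percentage.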

Creating RAID 0 on Debian Linux

References:
Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux – Part 2
Debian RAID-0 (striping) | JR 的網路日誌
OS: Debian Linux 5.8.0
HDD:
10G × 1 – Debian Linux system disk
20G × 4 (/dev/sdb, sdc, sdd, sde)

1. Install the mdadm package
# apt-get install mdadm


2. Check the current disk layout
# fdisk -l | grep '^Disk /dev'
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors

3. Create the disk partitions
# fdisk /dev/sdb


Changing the partition type to fd (Linux raid autodetect) is optional.

Repeat the same steps for the remaining disks:
# fdisk /dev/sdc
# fdisk /dev/sdd
# fdisk /dev/sde
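Instead of repeating the interactive fdisk session, the partition table can be cloned from the first disk with sfdisk. This is a dry-run sketch that only prints the commands; remove the `echo` (and double-check the device names) to execute them:

```shell
# Dry run: print the sfdisk commands that would copy /dev/sdb's
# partition table onto each remaining member disk.
cmds=$(for disk in /dev/sdc /dev/sdd /dev/sde; do
  echo "sfdisk -d /dev/sdb | sfdisk $disk"
done)
echo "$cmds"
```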

4. Create the /dev/md0 array
# mdadm --create /dev/md0 --level=stripe --raid-devices=4 /dev/sd[b-e]1

Or equivalently, with short options:
# mdadm -C /dev/md0 -l raid0 -n 4 /dev/sd[b-e]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

5. Check the result
# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      83816448 blocks super 1.2 512k chunks

unused devices: <none>
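A quick cross-check of the size reported in /proc/mdstat: 83816448 1K blocks should come out at roughly four striped 20G members, matching the 79.93 GiB that `mdadm --detail` prints below:

```shell
# Convert the 1K block count from /proc/mdstat to GiB (integer math).
blocks_1k=83816448
gib=$(( blocks_1k / 1024 / 1024 ))
echo "${gib} GiB"
```

Unlike RAID 5, RAID 0 has no parity, so the full capacity of all four members is usable.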

# mdadm -E /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 7d559d15:91cc1bec:54dcd941:8f10e5ff
           Name : debian:0  (local to host debian)
  Creation Time : Mon Jun 27 19:09:38 2016
     Raid Level : raid0
   Raid Devices : 4

 Avail Dev Size : 41908224 (19.98 GiB 21.46 GB)
    Data Offset : 32768 sectors
   Super Offset : 8 sectors
   Unused Space : before=32680 sectors, after=0 sectors
          State : clean
    Device UUID : b77c8a2f:aad8c146:6da755a5:6f3db3e3

    Update Time : Mon Jun 27 19:09:38 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 37037bb0 – correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA (‘A’ == active, ‘.’ == missing, ‘R’ == replacing)

# fdisk -l | grep /dev/md0
Disk /dev/md0: 80 GiB, 85828042752 bytes, 167632896 sectors

# mdadm –detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jun 27 19:09:38 2016
     Raid Level : raid0
     Array Size : 83816448 (79.93 GiB 85.83 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jun 27 19:09:38 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 7d559d15:91cc1bec:54dcd941:8f10e5ff
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

6. Partition the RAID device
# fdisk /dev/md0

7. Format the partition
# mkfs.ext4 /dev/md0p1
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 20953600 4k blocks and 5242880 inodes
Filesystem UUID: a89c1629-75b4-4660-b5cd-cbcf72595fe8
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
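Optionally, ext4 can be aligned to the RAID geometry at mkfs time. A sketch of the arithmetic, using the array's 512K chunk size, the 4K filesystem block size, and all four members carrying data (RAID 0 has no parity disk):

```shell
# stride = chunk size / filesystem block size
# stripe-width = stride * number of data-bearing disks
chunk_kb=512
block_kb=4
data_disks=4
stride=$(( chunk_kb / block_kb ))
stripe_width=$(( stride * data_disks ))
echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0p1"
```

`-E stride=...,stripe-width=...` are standard mkfs.ext4 extended options; modern mke2fs often detects these values itself, so this is a refinement rather than a requirement.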

8. Create the mount point and mount the filesystem
# mkdir /mnt/raid0
# mount /dev/md0p1 /mnt/raid0
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0p1         79G   56M   75G   1% /mnt/raid0

9. Check the filesystem UUID
# blkid | grep /dev/md0p1
/dev/md0p1: UUID="b57de29c-9210-48bc-9ba6-1f5224feb42f" TYPE="ext4"

10. Edit /etc/fstab
# vim /etc/fstab
Add the following line:
UUID=b57de29c-9210-48bc-9ba6-1f5224feb42f /mnt/raid0      ext4    errors=remount-ro 0       0

11. Edit the /etc/mdadm/mdadm.conf configuration file
Use either of the following (both append to the file, so run only one):
# mdadm -E -s -v >> /etc/mdadm/mdadm.conf
or
# mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
# cat /etc/mdadm/mdadm.conf
ARRAY /dev/md/0  level=raid0 metadata=1.2 num-devices=4 UUID=7d559d15:91cc1bec:54dcd941:8f10e5ff name=debian:0
   devices=/dev/sde1,/dev/sdd1,/dev/sdc1,/dev/sdb1
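Because both commands above append rather than replace, running them more than once leaves duplicate ARRAY lines, which mdadm will complain about at assembly time. A small awk check for duplicates, demonstrated here against an inline sample (point it at /etc/mdadm/mdadm.conf in real use):

```shell
# Count ARRAY device names that appear more than once. The sample
# below deliberately duplicates one entry to show the detection.
conf=$(printf 'ARRAY /dev/md/0 UUID=aaaa\nARRAY /dev/md/0 UUID=aaaa\n')
dups=$(echo "$conf" | awk '/^ARRAY/ {seen[$2]++}
  END {n = 0; for (d in seen) if (seen[d] > 1) n++; print n}')
echo "duplicated arrays: $dups"
```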

12. Partition information
# fdisk -l | grep /dev/sd
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdb1        2048 41943039 41940992  20G 83 Linux
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
/dev/sda1  *        2048 20013055 20011008  9.6G 83 Linux
/dev/sda2       20015102 20969471   954370  466M  5 Extended
/dev/sda5       20015104 20969471   954368  466M 82 Linux swap / Solaris
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdc1        2048 41943039 41940992  20G 83 Linux
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sdd1        2048 41943039 41940992  20G 83 Linux
Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
/dev/sde1        2048 41943039 41940992  20G 83 Linux

13. Verify the mount
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.3G  1.1G  7.8G  12% /
udev             10M     0   10M   0% /dev
tmpfs           400M  5.9M  394M   2% /run
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/md0p1       79G   56M   75G   1% /mnt/raid0

Installing ProFTPD FTP Server on Debian Linux

1. Search for the package
# apt-cache search proftpd | grep ^proftpd
proftpd-basic – Versatile, virtual-hosting FTP daemon – binaries
proftpd-dev – Versatile, virtual-hosting FTP daemon – development files
proftpd-doc – Versatile, virtual-hosting FTP daemon – documentation
proftpd-mod-geoip – Versatile, virtual-hosting FTP daemon – GeoIP module
proftpd-mod-ldap – Versatile, virtual-hosting FTP daemon – LDAP module
proftpd-mod-mysql – Versatile, virtual-hosting FTP daemon – MySQL module
proftpd-mod-odbc – Versatile, virtual-hosting FTP daemon – ODBC module
proftpd-mod-pgsql – Versatile, virtual-hosting FTP daemon – PostgreSQL module
proftpd-mod-sqlite – Versatile, virtual-hosting FTP daemon – SQLite3 module
proftpd-mod-autohost – ProFTPD module mod_autohost
proftpd-mod-case – ProFTPD module mod_case
proftpd-mod-dnsbl – ProFTPD module mod_dnsbl
proftpd-mod-fsync – ProFTPD module mod_fsync
proftpd-mod-msg – ProFTPD module mod_msg
proftpd-mod-tar – ProFTPD module mod_tar

2. Install the package
# apt-get install proftpd
When the installer asks, choose standalone mode; it performs better than launching from inetd.
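For unattended installs, the standalone/inetd question can be answered in advance via debconf. A hedged sketch — the owner and question key below are the ones I believe Debian's proftpd-basic package uses; verify with `debconf-get-selections` on an installed system before relying on it:

```shell
# Build the preseed line that selects standalone mode so
# 'apt-get install proftpd' needs no interactive prompt.
preseed="proftpd-basic shared/proftpd/inetd_or_standalone select standalone"
echo "$preseed"
# Feed it in with:  echo "$preseed" | debconf-set-selections
```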
3. Edit the /etc/proftpd/proftpd.conf configuration file
# grep -E -v '^#|^$' /etc/proftpd/proftpd.conf
Include /etc/proftpd/modules.conf
UseIPv6                         on
IdentLookups                    off
ServerName                      “Debian”
ServerType                      standalone
DeferWelcome                    off
DefaultAddress                  192.168.1.12
MultilineRFC2228                on
DefaultServer                   on
ShowSymlinks                    on
TimeoutNoTransfer               600
TimeoutStalled                  600
TimeoutIdle                     1200
DisplayLogin                    welcome.msg
DisplayChdir                    .message true
ListOptions                     “-l”
DenyFilter                      *.*/
UseFtpUsers off
RootLogin on
DefaultRoot                     ~ !root
Port                            21
PassivePorts                  49152 65534
<IfModule mod_dynmasq.c>
</IfModule>
MaxInstances                    30
User                            proftpd
Group                           nogroup
Umask                           022  022
AllowOverwrite                  on
TransferLog /var/log/proftpd/xferlog
SystemLog   /var/log/proftpd/proftpd.log
<IfModule mod_quotatab.c>
QuotaEngine off
</IfModule>
<IfModule mod_ratio.c>
Ratios off
</IfModule>
<IfModule mod_delay.c>
DelayEngine on
</IfModule>
<IfModule mod_ctrls.c>
ControlsEngine        off
ControlsMaxClients    2
ControlsLog           /var/log/proftpd/controls.log
ControlsInterval      5
ControlsSocket        /var/run/proftpd/proftpd.sock
</IfModule>
<IfModule mod_ctrls_admin.c>
AdminControlsEngine off
</IfModule>
Include /etc/proftpd/conf.d/

4. Start the ProFTPD FTP Server
# /etc/init.d/proftpd start

5. Verify that the FTP server is listening
# netstat -an | grep :21
tcp6       0      0 :::21                   :::*                    LISTEN