
mdadm RAID5 Creation Issues with SATA Controllers

Just a quick post in case someone else hits the same issue I ran into when setting up a 4x3TB RAID5 array in my homeserver: faulty and failing drives on a brand-new installation! The problem turned out to be that my array drives were split across different SATA controllers.

My create command was:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sd[befg]

Watching /proc/mdstat, I could see the array being built, but the creation would eventually fail (after about 2 hours) with no indication of what went wrong.

My /proc/mdstat looked like this:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdg[4](F) sdf[2](F) sde[1](F) sdb[0]
      8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [UU__]
      bitmap: 0/22 pages [0KB], 65536KB chunk

and the details of my array looked like this:

mdadm --detail /dev/md0
        Version : 1.2
  Creation Time : Thu Jan  4 23:56:59 2018
     Raid Level : raid5
     Array Size : 8790405120 (8383.18 GiB 9001.37 GB)
  Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Jan  5 09:21:15 2018
          State : clean, FAILED
 Active Devices : 1
Working Devices : 1
 Failed Devices : 3
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : homeserver:0  (local to host homeserver)
           UUID : eb23d162:55e07e9f:2dfd08eb:cd7227b7
         Events : 1164

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed

       1       8       64        -      faulty   /dev/sde
       2       8       80        -      faulty   /dev/sdf
       4       8       96        -      faulty   /dev/sdg

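Nothing in mdadm's output said *why* the drives were being marked faulty. When that happens, the kernel log and the drives' SMART data are the usual next stops; here's a sketch of the commands I could have used (assuming the smartmontools package is installed for smartctl):

```shell
# Follow the build as it progresses (Ctrl-C to exit)
watch cat /proc/mdstat

# The kernel log usually records the underlying ata/SCSI errors
# that made md kick each drive out of the array
dmesg | grep -iE 'ata|md0|fail|error'

# Ask each member drive for its SMART health summary
for d in /dev/sd[befg]; do
    smartctl -H "$d"
done
```

On a hardware problem like mine, dmesg is where the link resets and timeouts actually show up.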

After checking my drives in BIOS and eventually looking up the manual for my motherboard, I realized that the top 4 SATA ports were actually on the Marvell legacy SATA controller, and only 1 of my array drives was plugged into the main Intel controller. While the Marvell ports are still SATAIII, they sit behind the PCIe bus, which limits them to slower speeds than even the Intel SATAII controller. Plugging all 4 of my new hard drives into the bottom 4 Intel ports allowed mdadm to create the array properly. It might also have worked with all 4 array drives on the Marvell controller, but after researching the speeds I decided to put them on SATAII.
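You don't actually need to reboot into BIOS to see which controller a disk hangs off: each block device's sysfs path runs through its host controller's PCI address, and lspci (from pciutils) can turn that address into a name. A sketch using my device names, which will differ on other machines:

```shell
# Map each array member to the PCI device of its SATA controller.
# The sysfs symlink for a disk contains the controller's PCI
# address (e.g. 0000:00:17.0); lspci -s resolves it to a name.
for d in sdb sde sdf sdg; do
    pci=$(readlink -f "/sys/block/$d" \
          | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]' \
          | head -n1)
    echo "$d -> $(lspci -s "$pci")"
done
```

Drives on the Marvell controller and the Intel controller show up with different PCI devices, so a mixed array is obvious at a glance.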

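For anyone in the same spot: after moving the drives, the half-built array has to be torn down and recreated from scratch. A sketch using my original device names (careful: --zero-superblock destroys the old RAID metadata on each disk):

```shell
# Stop the failed array, wipe the stale md superblocks from each
# member, and recreate. WARNING: this discards the old array.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sd[befg]
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sd[befg]

# Watch the initial sync run to completion this time
cat /proc/mdstat
```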
Hopefully this helps someone with their RAID troubles in the future! Good luck!