Slackware 14 (Software) RAID Installation on HDD with Advanced Format (4KB sectors)

0. Foreword

This article explains how to create a software RAID system based on Slackware 14 on HDDs with Advanced Format (4KB physical sectors). The explanation mainly revolves around a RAID 1 installation, because that is what I did and verified. However, some of the techniques also apply to other forms of RAID with similar characteristics, such as RAID 5 or RAID 6. This article is probably not as straightforward as you would like, because I simply reproduce the official guide from the Slackware 14 installation media and comment on what didn't work for me. Nonetheless, I hope it is of some use to others, as I struggled for several days to get my RAID 1 setup to work. Hopefully, others won't have to, because the steps have been documented here.


What follows is a pristine copy of README_RAID.TXT by Amritpal Bath from the Slackware 14.0 installation media. This readme helps a lot, even though it doesn't solve all of the problems. Thanks to Amritpal Bath for the excellent write-up.

Slackware RAID HOWTO

Version 1.01

by Amritpal Bath 


- Changelog
- Introduction
- Warnings
- Planning
- Setup
- Using the generic kernel
- Troubleshooting
- Appendices
- Acknowledgements/References


Changelog

1.01 (2011/03/15):
  - Added Robby Workman's --metadata edits per James Davies' tip.
1.00 (2008/04/09):
  - Initial release.


Introduction

This document explains how to install Slackware 13.0 (and beyond) on a
software RAID root filesystem.  It is meant to cover only software RAID.
If you are using a RAID expansion card, or the RAID functionality that came
with your motherboard, this document will not be useful for you.

In order to follow this document, your computer must have two or more empty
hard drives.  While it is possible to be creative and create RAID arrays on
drives that already contain data, it can be error prone, so it is not
covered in this document.


Warnings

If you wish to perform these operations on hard drives that hold data of
any importance, you MUST BACK UP YOUR DATA.  The procedure below will
destroy all of the data on your hard drives, so any important data will
need to be restored from your backups.


If you don't backup your data and end up losing it, it will be your fault.
There is nothing I can do to help you in that case.

Now, on with the show... :)


Planning

The first step is to determine which RAID level you want to use.

It is recommended that you familiarize yourself with basic RAID concepts,
such as the various RAID levels that are available to you.  You can read
about these in various places - consult your favorite search engine about
"raid levels", or see the References section.

Here's a quick summary of the more common RAID levels:

 - RAID 0: Requires 2 drives, can use more.  Offers no redundancy, but
   improves performance by "striping", or interleaving, data between all
   drives.  This RAID level does not help protect your data at all.
   If you lose one drive, all of your data will be lost.

 - RAID 1: Requires 2 drives, can use more.  Offers data redundancy by
   mirroring data across all drives.  This RAID level is the simplest way
   to protect your data, but is not the most space-efficient method.  For
   example, if you use 3 drives in a RAID 1 array, you gain redundancy, but
   you still have only 1 disk's worth of space available for use.

 - RAID 5: Requires 3 drives, can use more.  Offers data redundancy by
   storing parity data on each drive.  Exactly one disk's worth of space
   will be used to hold parity data, so while this RAID level is heaviest
   on the CPU, it is also the most space efficient way of protecting your
   data.  For example, if you use 5 drives to create a RAID 5 array, you
   will only lose 1 disk's worth of space (unlike RAID 1), so you will
   end up with 4 disks' worth of space available for use.  While simple to
   set up, this level is not quite as straightforward as setting up RAID 1.
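The space arithmetic behind these levels can be sketched with a bit of shell (the drive count and size here are arbitrary examples, not recommendations):

```shell
# Usable capacity for an array of N drives, S GB each:
N=5
S=750
raid0=$(( N * S ))          # striping: all space usable, no redundancy
raid1=$S                    # mirroring: one disk's worth, however many drives
raid5=$(( (N - 1) * S ))    # parity: exactly one disk's worth is lost
echo "RAID 0: ${raid0} GB"  # prints: RAID 0: 3750 GB
echo "RAID 1: ${raid1} GB"  # prints: RAID 1: 750 GB
echo "RAID 5: ${raid5} GB"  # prints: RAID 5: 3000 GB
```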


Setup

=== Partition hard drives ===

Once you have booted the Slackware installer CD, the first step is to
partition the hard drives that will be used in the RAID array(s).

I will assume that your first RAID hard drive is /dev/sda.  If it is
/dev/hda or something similar, adjust the following commands appropriately.

You can see your drives by running: cat /proc/partitions

 - /boot: RAID 0 and RAID 5 users will require a separate boot partition, as
   the computer's BIOS will not understand striped devices.  For
   simplicity's sake, we will make /boot a small RAID 1 (mirror) array.
   This means that in the case of RAID 0, it will not matter which drive
   your BIOS attempts to boot, and in the case of RAID 5, losing one drive
   will not result in losing your /boot partition.

   I recommend at least 30MB for this partition, to give yourself room to
   play with multiple kernels in the future, should the need arise.  I tend
   to use 100MB, so I can put all sorts of bootable images on the partition,
   such as MemTest86, for example.

   Go ahead and create a small boot partition now on /dev/sda, via cfdisk
   (or fdisk, if you prefer).

   Ensure that the partition type is Linux RAID Autodetect (type FD).

 - /: Every setup will require a root partition. :)  You will likely want to
   create a partition that takes up most of the rest of the drive.  Unless you
   are using LVM (not covered in this document), remember to save some space
   after this partition for your swap partition! (see below)

   If you are not creating a swap partition, I recommend leaving 100MB of
   unused space at the end of the drive.  (see "safety" for explanation)

   Go ahead and create your main partition now on /dev/sda, via cfdisk
   (or fdisk, if you prefer).

   Ensure that the partition type is Linux RAID Autodetect (type FD).

 - swap: Swap space is where Linux stores data when you're running low on
   available RAM.  For fairly obvious reasons, building this on RAID 0 could
   be painful (if that array develops a bad sector, for example), so I tend
   to build swap on RAID 1 as well.  If you understand the danger and still
   want to build swap on RAID 0 to eke out as much performance as possible,
   go for it.

   For RAID 1 swap, create a partition that is the exact size that you want
   your swap space to be (for example, 2GB, if you can't decide).

   For RAID 0 swap (not recommended), create a partition that is equivalent
   to the swap size you want, divided by the number of drives that will be
   in the array.

   For example, 2GB / 3 drives = 683MB swap partition on /dev/sda.

   Ensure that the partition type is Linux RAID Autodetect (type FD).

   I recommend leaving 100MB of unused space at the end of the drive.
   (see "safety" for explanation)

   See also: Appendix A - Striping swap space without RAID 0.

 - safety!  I highly recommend leaving 100MB of unpartitioned space at the
   end of each drive that will be used in the RAID array(s).

   In the event that you need to replace one of the drives in the array,
   there is no guarantee that the new drive will be exactly the same size as
   the drive that you are replacing.  For example, even if both drives are
   750GB, they may be different revisions or manufacturers, and thus have a
   size difference of some small number of megabytes.

   This is, however, enough to throw a wrench in your drive-replacement
   plans - you cannot replace a failed RAID drive with one of a smaller size,
   for obvious reasons.  Having that small 100MB buffer just may save you.

=== Copy and review partitions ===

Now that /dev/sda is partitioned as appropriate, copy the partitions to all
the other drives to be used in your RAID arrays.

An easy way to do this is:
  sfdisk -d /dev/sda | sfdisk /dev/sdb

This will destroy all partitions on /dev/sdb, and replicate /dev/sda's
partition setup onto it.

After this, your partitions should look something like the following:

 - RAID 0:
     /dev/sda1  30MB      /dev/sdb1  30MB   
     /dev/sda2  100GB     /dev/sdb2  100GB  
     /dev/sda3  2GB       /dev/sdb3  2GB    

 - RAID 1:
     /dev/sda1  100GB     /dev/sdb1  100GB  
     /dev/sda2  2GB       /dev/sdb2  2GB    

 - RAID 5:
     /dev/sda1  30MB      /dev/sdb1  30MB     /dev/sdc1  30MB 
     /dev/sda2  100GB     /dev/sdb2  100GB    /dev/sdc2  100GB
     /dev/sda3  2GB       /dev/sdb3  2GB      /dev/sdc3  2GB  

All partition types should be Linux RAID Autodetect (type fd).

=== Create RAID arrays ===

Now it's time to create the actual RAID arrays based on the partitions that
were created.

The parameters for each of these RAID commands specify, in order:
 - the RAID device node to create (--create /dev/mdX)
 - the RAID level to use for this array (--level X)
 - how many devices (partitions) to use in the array (--raid-devices X)
 - the actual list of devices (/dev/sdaX /dev/sdbX /dev/sdcX)
 - this is not present on all of them, but "--metadata=0.90" tells mdadm
   to use the older version 0.90 metadata instead of the newer version;
   you must use this for any array from which LILO will be loading a 
   kernel image, or else LILO won't be able to read from it.

Start by creating the RAID array for your root filesystem.

 - RAID 0:
     mdadm --create /dev/md0 --level 0 --raid-devices 2 \
       /dev/sda2 /dev/sdb2

 - RAID 1:
     mdadm --create /dev/md0 --level 1 --raid-devices 2 \
       /dev/sda1 /dev/sdb1 --metadata=0.90

 - RAID 5:
     mdadm --create /dev/md0 --level 5 --raid-devices 3 \
       /dev/sda2 /dev/sdb2 /dev/sdc2

Next, let's create the array for the swap partition.  This will be RAID 1
regardless of which RAID level your root filesystem uses, but given our
partition layouts, each command will still be slightly different.

 - RAID 0:
     mdadm --create /dev/md1 --level 1 --raid-devices 2 \
       /dev/sda3 /dev/sdb3

 - RAID 1:
     mdadm --create /dev/md1 --level 1 --raid-devices 2 \
       /dev/sda2 /dev/sdb2

 - RAID 5:
     mdadm --create /dev/md1 --level 1 --raid-devices 3 \
       /dev/sda3 /dev/sdb3 /dev/sdc3

Finally, RAID 0 and RAID 5 users will need to create their /boot array.
RAID 1 users do not need to do this.

 - RAID 0:
     mdadm --create /dev/md2 --level 1 --raid-devices 2 \
       /dev/sda1 /dev/sdb1 --metadata=0.90

 - RAID 5:
     mdadm --create /dev/md2 --level 1 --raid-devices 3 \
       /dev/sda1 /dev/sdb1 /dev/sdc1 --metadata=0.90

We're all done creating our arrays!  Yay!
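Before moving on, you can sanity-check the new arrays by looking at /proc/mdstat. For the two-disk RAID 1 layout above, the output looks roughly like this while the initial sync is running (device names, sizes, and progress here are illustrative only):

```
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
      2096064 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      104856064 blocks [2/2] [UU]
      [==>.................]  resync = 12.4% (13074944/104856064)

unused devices: <none>
```

You can safely proceed with the installation while the arrays resync in the background.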

=== Run Slackware setup ===

First, let's format our swap array, so the installer recognizes it:
  mkswap /dev/md1

Now run 'setup' as normal.

When you choose to set up your swap partitions, /dev/md1 will show up.
Continue with this selected.

When asked for the target partition, choose the root array (/dev/md0).

You may choose the format method and filesystem of your choice.

RAID 0 and RAID 5 users must also set up /boot.  When asked about setting up
extra partitions, choose /dev/md2.  When asked where to mount this device,
enter "/boot".

After this, continue installation as normal.

For LILO configuration:
 - When asked about LILO, choose the "simple" setup.
 - When asked about additional "append=" parameters, RAID 0 and
   RAID 5 users should type in "root=/dev/md0", to ensure that the proper
   array is mounted on / at bootup.
 - When asked about where to install LILO, choose MBR.

You may see some warnings scroll by.  This is OK.

=== Finishing touches ===

After exiting the installer, we have just a few settings to tweak.

Start by switching into your actual installation directory:
 - chroot /mnt

Let's make sure LILO boots from the RAID arrays properly.  Using your
favorite editor (vim/nano/pico), edit /etc/lilo.conf:
 - add a new line (add it anywhere, but don't indent it):
     raid-extra-boot = mbr-only
 - You will need to change the following line:
     boot = 
   RAID 0 and RAID 5 users, change it to:
     boot = /dev/md2
   RAID 1 users, change it to:
     boot = /dev/md0
 - Save the file and exit your editor.
 - run "lilo".
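Put together, the boot-related part of /etc/lilo.conf for the RAID 1 case would end up looking something like this (a sketch; the image stanza and label come from the "simple" setup and will vary on your system):

```
boot = /dev/md0
raid-extra-boot = mbr-only

image = /boot/vmlinuz
  root = /dev/md0
  label = Linux
  read-only
```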

When that's done, let's exit the installation and reboot:
  - exit
  - reboot


Using the generic kernel

The official Slackware recommendation is to switch to the "generic"
Slackware kernel after installation has been completed.  If you wish to use
the generic kernel, you must create an initrd.  This section gives a quick
example of booting a RAID system in this fashion.

If you require more information on initrds, please read /boot/README.initrd.

Typically, a user switches to a generic kernel by booting the system, and
afterwards running the following:
 - cd /boot
 - rm vmlinuz System.map config
 - ln -s vmlinuz-generic-smp-*     vmlinuz
 - ln -s System.map-generic-smp-*  System.map
 - ln -s config-generic-smp-*      config

Don't run lilo yet, we'll do that soon.

Next, edit (create, if necessary) /etc/mkinitrd.conf and add:

  MODULE_LIST="ext3"

Obviously, this assumes that you are using the EXT3 filesystem.  If you are
using another filesystem, adjust the module appropriately (reiserfs or xfs,
for example).  If you wish to read more about the MODULE_LIST variable,
consult "man mkinitrd.conf".

Note: If the module for your hard drive controller is not compiled into the
generic kernel, you will want to add that module to the MODULE_LIST variable
in mkinitrd.conf.  For example, my controller requires the mptspi module, so
my /etc/mkinitrd.conf looks like:

  MODULE_LIST="ext3:mptspi"

We're almost done.

Edit /etc/lilo.conf, and find the line at the very end that says:
  image = /boot/vmlinuz
Add a new line after it that says:
  initrd = /boot/initrd.gz
In this case, be sure to indent the line you've added!

Next, create the initrd based on the config file created earlier.
  mkinitrd -F

Finally, run "lilo" to make the new settings take effect, give yourself a
pat on the back, and reboot your finished system. :)


Troubleshooting

Any number of typos can result in a system that does not boot on its own,
but all is not lost.  Put the rubber chicken and the lemon away...

Booting your Slackware media (DVD, for example) can make it very easy to
switch into your installed system and make repairs:

 - Boot Slackware CD/DVD.
 - Login to installer as normal.
 - Scan for, and then assemble the RAID arrays:
     mdadm -Es > /etc/mdadm.conf
     mdadm -As
 - Mount root partition:
     mount /dev/md0 /mnt
 - Switch to installed OS:
     chroot /mnt
 - Mount remaining filesystems:
     mount /boot (RAID 0 and RAID 5 users only)
     mount /proc
     mount sys /sys -t sysfs

At this point, you can bring up your favorite editor, tweak config files,
re-run mkinitrd/lilo/etc as you wish, or anything else you need to do to
make your system bootable again.

When you're finished making your changes, rebooting is simple:

 - cd /
 - umount boot proc sys
 - exit
 - reboot

If you are having issues that you're unable to resolve, shoot me an email.
Perhaps the answer will make it into this section. :)


=== Appendix A: Striping swap space without RAID 0 ===

For completeness' sake, I should mention that swap space can be striped to
improve performance without creating a RAID 0 array.

To accomplish this, start by forgetting about any instructions having to do
with /dev/md1, which would be our swap array - create the swap partitions on
the hard drives, but do not create this particular array.

When creating the swap partitions, ensure that the partition type is set to
Linux Swap (type 82).

During setup, the installer will recognize the swap partitions.  Ensure that
all of them are selected, and continue as normal.

After installation is complete, go ahead and boot your system - we can
finish this once the system is booted, in the interest of simplicity.

When the system boots, edit /etc/fstab with your favorite editor.  Find the
lines that describe your swap partitions - they say "swap" in the second
column.

Each of these lines says "defaults" in the fourth column.  Simply change that
to "defaults,pri=0" for each line.
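With three swap partitions, as in the RAID 0/RAID 5 layouts earlier, the resulting fstab lines would look something like this (device names are illustrative):

```
/dev/sda3   swap   swap   defaults,pri=0   0   0
/dev/sdb3   swap   swap   defaults,pri=0   0   0
/dev/sdc3   swap   swap   defaults,pri=0   0   0
```

Giving every swap partition the same priority is what makes the kernel stripe writes across them.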

After saving the file, either reboot, or simply run:
  swapoff -a
  swapon -a

To confirm that the setting has taken effect, you can run:
  swapon -s

Verify that the Priority column reads 0 for each partition, and we're done!


Acknowledgements/References

 - In depth explanation of RAID levels:
   "LasCon Storage - Different types of RAID"

- Thanks to John Jenkins (mrgoblin) for some tips in:
  "Installing with Raid on Slackware 12.0+"

 - Thanks to Karl Magnus Kolstø (karlmag) for his original writeup, ages
   ago, on installing Slackware on a "RAID level 0 DEVICE"

- Of course, thanks to Patrick "The Man" Volkerding for creating Slackware!

- Also thanks to the rest of the guys that proofread, tested, and suggested!
  Eric Hameleers (alienBOB), Robby Workman, Alan Hicks,
  Piter Punk, Erik Jan Tromp (alphageek)...

- My contact info:
  Primary email:
  Secondary email:
  On certain IRC networks: "amrit" (or some variation :) )

- This latest version of this document can be found at:

2. Bug Fixing a.k.a. What's Missing From README_RAID.TXT

Before delving into the differences, here are the details of my system. The machine I'm working with has the following configuration:

  • AMD Athlon X2 4000+ CPU
  • 2 GB of RAM
  • ECS AM690GM-M2 motherboard. This motherboard uses the AMD690GM chipset. The southbridge part of the chipset is AMD SB600.
  • 2 x Western Digital Caviar Blue 1 TB SATA w/ 64MB cache (WD10EZEX). NOTE: These HDDs use Advanced Format (4KB physical sectors) by default. 512-byte sector support is provided only through HDD firmware emulation.

Because of the "peculiarity" of these HDDs, you want to align your partitions accordingly, to prevent the performance degradation caused by unaligned sector accesses. In fact, that's the sole reason I migrated to Slackware 14 from Slackware 13. Slackware 14 ships GNU parted >= 2.1, which makes it possible to align the partitions without too much hassle.
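A quick bit of shell arithmetic shows why alignment matters on these drives: the old DOS partitioning convention started the first partition at logical sector 63, which lands in the middle of a 4KB physical sector, while the modern 1MiB starting offset is cleanly aligned:

```shell
# 512-byte logical sectors on a drive with 4KB physical sectors:
# a partition start is aligned only if (start_sector * 512) % 4096 == 0.
old_start=63     # historical DOS/fdisk default
new_start=2048   # modern 1MiB default, what "-a optimal" will pick
echo $(( old_start * 512 % 4096 ))   # prints 3584 -> starts mid physical sector
echo $(( new_start * 512 % 4096 ))   # prints 0    -> cleanly aligned
```

A misaligned partition forces the drive to read-modify-write two physical sectors for many single-sector writes, which is where the performance loss comes from.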

Western Digital has a nice write-up on optimizing partition layout in Linux with GNU parted. There's also an article about Advanced Format HDDs:

2.1. Using GNU Parted to Optimize Partition Layout

Armed with the guide from the official Western Digital (WD) user guide, we boot the target system with the Slackware 14 installation media. In my case, I boot from the installation DVD. At the root prompt, I use the following command to start creating partitions on the first SATA HDD:

parted -a optimal /dev/sda 

Then, I proceed to create the partition table with the mklabel command (inside parted). I choose to use "msdos" for maximum compatibility.

mklabel msdos

Then, proceed to create the partitions without specifying the filesystem type. fdisk/cfdisk will be used later to set the partition type of each partition.

mkpart part-type [fs-type] start end 

Replace part-type with the partition type you want (in my case, primary), leave fs-type unset, and set the start and end parameters to the start and end of the partition in MB or GB. Also remember to leave at least 100 MB at the end of the drive for future contingencies, as explained in README_RAID.TXT.
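For example, on a 1 TB drive, a session creating a large root partition and a small swap partition, while leaving roughly 100 MB free at the end, might look like this (the sizes are illustrative, not what you must use; with -a optimal, parted snaps each boundary to an aligned sector):

```
(parted) mklabel msdos
(parted) mkpart primary 1MB 998GB
(parted) mkpart primary 998GB 999.9GB
(parted) print
```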

Now, we have the main partitioning task completed. We have to double-check the sector alignment to be sure we haven't made a mistake. I use the following command (in parted) for that, once for each partition number:

align-check optimal 1

The command displays whether the partition is aligned correctly. If everything is OK, we can leave GNU parted with the "quit" command.

2.2. Setting-Up Partition Types with fdisk/cfdisk

So far, the HDDs are already partitioned, but the partitions have yet to be given the correct partition type (Linux RAID Autodetect, type FD). I used cfdisk to set the partition types in /dev/sda. Afterwards, I copied this partitioning scheme to /dev/sdb with sfdisk, as explained in README_RAID.TXT.

2.3. RAID Setup

Follow the instructions in README_RAID.TXT to create the RAID array (in /dev/mdX). You should customize the steps according to your own RAID setup, do not just follow the steps in README_RAID.TXT blindly. After this, you should have a working RAID array(s).

2.4. Slackware Installation

Slackware installation proceeds as usual; the difference is only in the target block device. Instead of selecting a real physical partition such as /dev/sdX, you now have to choose the RAID array /dev/mdX. Follow the instructions in README_RAID.TXT and customize them according to your RAID setup.

2.5. Setting-up Initrd

After completing the Slackware installation, DO NOT reboot your machine right away, because it won't be able to boot properly. You would end up with a kernel panic if you did. You first have to set up an initrd image that can bring up the software RAID array. Follow the instructions in the "Using the generic kernel" section of README_RAID.TXT to create /boot/initrd.gz.

There's a catch, though: the instructions in the "Using the generic kernel" section of README_RAID.TXT don't mention that you should activate UDEV support in the initrd. Without UDEV support, your system won't be able to boot into the real root filesystem. Instead, the boot will stop during initrd execution, complaining that the root filesystem doesn't exist. This happens because the root device node is never created in /dev: if your root filesystem is /dev/md0, then /dev/md0 won't exist, because udev is not there to create and initialize it.

Therefore, you have to copy /etc/mkinitrd.conf.sample to /etc/mkinitrd.conf and then modify the lines corresponding to the following options:

MODULE_LIST="ext4"
RAID="1"
UDEV="1"

The MODULE_LIST option lists the modules that need to be loaded before the root filesystem is accessed; in my case, the root filesystem is ext4, which is why I have to include ext4 in MODULE_LIST. The second line indicates that mdadm and other RAID-related support must be loaded, because the root filesystem and other parts of the filesystem are on RAID. The last line is where I added UDEV support to the initrd image. Once you have updated /etc/mkinitrd.conf, you can invoke mkinitrd to update the initrd image as follows:

mkinitrd -F

The "-F" option tells mkinitrd to use the parameters in /etc/mkinitrd.conf when creating the initrd image.

2.6. Modifying lilo.conf

Section "Finishing touches" in README_RAID.TXT explains how to modify lilo.conf. Also, don't forget to add initrd support to the entry that boots the kernel you're going to use; the lilo.conf section in README_RAID.TXT explains the steps. It's also a good idea to add this line to lilo.conf:

lba32

This prevents lilo from complaining about missing LBA32 support.

2.7. Reboot to Your New RAID System

If you've made it this far, you can now reboot into your brand new Linux RAID machine. See for yourself whether everything works as expected.

3. Closing

Hopefully what I explained here is helpful to those looking to install RAID 1 on systems with HDDs using Advanced Format (4KB sectors).

4. References


  • README_RAID.TXT by Amritpal Bath from Slackware installation media. (For example, see: