RAID arrays in (Puppy) Linux
-
- Posts: 5464
- Joined: Fri 10 Jun 2005, 05:12
- Location: Australia
And here's an alternative approach - since you don't want the host-RAID functionality of the Promise controller, you can avoid using IDE3/4 altogether and just use IDE2 instead -
connect your two storage drives to IDE2, using a single IDE cable, with one drive set as MASTER, the other as SLAVE.
Software RAID will work just as well this way.
Thanks tempestuous.
I formatted the drives as regular drives in GParted and I'm just going to use them as-is without any RAIDing for the time being. (configuring the software raid at the command line is a bit beyond my current comfort level anyway - maybe I'll give it a try in the future)
Can't use IDE 1 & 2 because they are already in use! Got this box filled with old IDE drives. Hey, gotta do something with them.
BTW, I'm using ext4. Why do you prefer ext3?
I was wary of ext4 when it was first introduced, because there seemed to be quite a few bugs with it, but it should be fine these days.
Well I just updated my instructions in the third post with more detail, but here's exactly what I think you need.

toronado wrote: "configuring the software raid at the command line is a bit beyond my current comfort level"

First install the mdadm-3.1.4 dotpet.
Assuming you want RAID0 - the two drives striped for maximum capacity and speed - run this command:

mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1

Wait until this initialization is completed, then save the RAID configuration with:

mdadm --detail --scan >> /etc/mdadm.conf

Format the new RAID array (md0) with ext4, using some special formatting options suited to RAID operation:

mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0

Finally, mount the RAID array:

mkdir /mnt/md0
mount /dev/md0 /mnt/md0

See if you can transfer some files to/from /mnt/md0.
That's it.
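A note on those mkfs.ext4 numbers (editorial explanation, not from the original post): stride is the RAID chunk size expressed in filesystem blocks, and stripe-width is stride multiplied by the number of data disks. The stride=32 above implies a 128 KiB chunk; the mdadm run later in this thread actually reports a 512 KiB default chunk, which would imply stride=128 and stripe-width=256. A sketch of the arithmetic:

```shell
# Derive ext4 RAID tuning values from the array geometry.
# Assumptions: the RAID0 array in this thread -- 2 data disks,
# 4 KiB filesystem blocks, and the 512 KiB chunk that mdadm
# reported as its default (the post's stride=32 would instead
# correspond to a 128 KiB chunk).
chunk_kib=512
block_kib=4
data_disks=2

stride=$((chunk_kib / block_kib))       # chunk size in fs blocks
stripe_width=$((stride * data_disks))   # one full stripe in fs blocks

echo "stride=$stride stripe-width=$stripe_width"
# prints: stride=128 stripe-width=256
```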
OK, I'll give it a try. But one question about this first command:

mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1

Before running this command, should the drives have only unallocated space, or should I have already created the partitions and formatted them?
tempestuous wrote: "i) yes, the drives need to be partitioned first - each as a primary partition. I understand that you have done this already.
ii) it doesn't matter whether you format or not prior to the array creation. As the array is created, any existing formatting is destroyed anyway."

OK, thanks for explaining. For the purpose of this exercise, I used GParted to create primary partitions (unformatted) on both drives, then proceeded with the commands you gave me.
Here is a copy/paste of the terminal session:
sh-4.1# mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
sh-4.1# mdadm --detail --scan >> /etc/mdadm.conf
sh-4.1# mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
mke2fs 1.41.14 (22-Dec-2010)
fs_types for mke2fs.conf resolution: 'ext4'
Calling BLKDISCARD from 0 to 160048349184 failed.
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=32 blocks, Stripe width=64 blocks
9773056 inodes, 39074304 blocks
39074 blocks (0.10%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
1193 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
sh-4.1# mkdir /mnt/md0
sh-4.1# mount /dev/md0 /mnt/md0
sh-4.1#
EDIT: I tried:
sh-4.1# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
sh-4.1# mount /dev/md0 /mnt/md0
toronado wrote:
sh-4.1# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.

This should only be necessary if the udev rule that I included in the mdadm dotpet fails to automatically restore the array at each bootup. This needs investigation.

toronado wrote:
sh-4.1# mount /dev/md0 /mnt/md0

This command appears to run without error - which means that Puppy can see the array, and mount it.
Don't be concerned that /mnt/md0 doesn't show up on the desktop, nor that Pmount doesn't see the array. Both of these probably require extra code written into them to understand software arrays.
You should browse to /mnt/md0 with ROX, and see if you can transfer files to/from it. If this fails, I suspect that the ext4 filesystem has failed to properly create within the array. In this case I suggest you reformat with ext3. To format, the array must first be unmounted -

umount /mnt/md0

now go ahead with the reformatting -

mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
tempestuous wrote: "Don't be concerned that /mnt/md0 doesn't show up on the desktop, nor that Pmount doesn't see the array. Both of these probably require extra code written into them to understand software arrays."

OK.

tempestuous wrote: "This should only be necessary if the udev rule that I included in the mdadm dotpet fails to automatically restore the array at each bootup. This needs investigation."

It is necessary on my system. Not sure why.

tempestuous wrote: "This command appears to run without error - which means that Puppy can see the array, and mount it. You should browse to /mnt/md0 with ROX, and see if you can transfer files to/from."

It works. I just added the commands to assemble and mount the RAID array to my Samba Auto-Start script, and I added a Samba share on md0 - both work great.
So basically, everything seems to be working. md0 isn't showing up on the desktop with this method, but as you said, that isn't really needed.
One thing is puzzling to me though... ROX seems to show empty directories in /mnt/ for partitions that don't exist. So, if I don't mount md0, there is an (empty) directory there anyway. (Once I mount md0, the directory shows the actual contents of the partition.) I found many other "dummy" directories there for previously created partitions and whatnot that no longer exist. I deleted these and they haven't reappeared, so I think it's ok.
toronado wrote: "ROX seems to show empty directories in /mnt/ for partitions that don't exist. So, if I don't mount md0, there is an (empty) directory there anyway. I found many other "dummy" directories there for previously created partitions and whatnot that no longer exist."

Yes, it seems to be a quirk of the mdadm application that it creates additional "dummy" or "ghost" device nodes, typically of the format "/dev/md_d0" or similar. As long as these devices/arrays don't interfere with the correct device/array, there's no problem, and you can just ignore them. You can manually stop these dummy arrays like so -

mdadm --stop /dev/whatever

toronado wrote: "I just added the commands to assemble and mount the RAID array to my Samba Auto-Start script and it works great."

Sure, that startup script is fine, but generally the correct place for such additional commands is /etc/rc.d/rc.local
I have just updated the instructions in the third post.
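That rc.local suggestion could look something like the following fragment (a sketch only: the guard tests are an editorial addition, and the device and mount-point names are simply the ones used in this thread):

```shell
# Fragment for /etc/rc.d/rc.local: assemble and mount the array at
# each bootup, in case the udev rule doesn't restore it automatically.
grep -q '^md0' /proc/mdstat 2>/dev/null || mdadm --assemble /dev/md0
mkdir -p /mnt/md0
grep -q ' /mnt/md0 ' /proc/mounts || mount /dev/md0 /mnt/md0
```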
tempestuous wrote: "Yes, it seems to be a quirk of the mdadm application that it creates additional "dummy" or "ghost" device nodes, typically of the format "/dev/md_d0" or similar ... as long as these devices/arrays don't interfere with the correct device/array, there's no problem, and you can just ignore them."

This is probably getting off-topic for this thread, but I don't think this issue pertains to mdadm. (It might not have anything to do with ROX either.) For example, well before installing the mdadm pet I noticed empty directories in /mnt/ for partitions such as sda2, sdb2, sdc4, sdd2, etc., and none of these partitions actually existed (or at least they didn't exist at the time I was browsing /mnt/). And while some of these partitions I actually created in the past and subsequently deleted (such as sdc4), others (like sda2, sdb2, sdd2) I don't recall ever creating in the first place. It's an old computer, so maybe it's haunted.

tempestuous wrote: "Sure, that startup script is fine, but generally the correct place for such additional commands is /etc/rc.d/rc.local"

OK, thanks.
Sorry, I know this thread is old, but I figured this is probably the best place to put this question.
I have succeeded in setting up my RAID (RAID1, btw). I'm now trying - and failing - to install Grub.
From the menu-entry GUI program, when I get to the step asking something like "which disk or whatever do you want to put it on?" (it suggests "/dev/sda(1?)"), I tried /dev/md0, /dev/sda (and/or /dev/sda1), and /dev/sdb (and the same). I am not currently on that computer, so I don't remember if that step asked for the disk or the partition, but whichever it asked for, I put, ok?
It said that /dev/md0 "is not a valid Linux" something or other, and it couldn't mount sda or sdb to do its stuff there - which I figured as much.
Anyways, I tried the command line. The below is an approximation of what it said: (Yes, I know I'm being overly descriptive.)

# grub
Blah blah something about BIOS this will take a while. ...
grub > find /boot/grub/stage1
Error 15: File not found

Ok, so I copied the files from /usr/sbin/grub (or wherever they are) to md0's /boot/grub that I created ... so it should be present on *both physical drives*, right? And grub won't find it when I do the above yet again.
What am I missing? Am I just totally lost?
Ah, you're trying to boot from a RAID array, and that's complicated. Personally, I would avoid this, and install Puppy on a separate (non-RAID) drive - even a small USB flash drive. Then just use the RAID array for your user data.
But if you're determined to persist, you will need to do some research and experimentation, and then rebuild Puppy's initrd.
Let me explain it in principle: a software RAID array can generally only be understood, and thus accessed, by a running operating system. It's difficult (but not impossible) to access files on the array at bootup.
What you need to do is include all necessary drivers, utilities, and configuration logic in Puppy's initial ramdisk.
In your case, that means rebuilding the initrd image to include the mdadm application and the raid1 kernel module, and also modifying the initrd startup scripts to assemble and mount the RAID array right at the start of the boot sequence.
I have no experience in this, so cannot help with the fine details.
Once this is all achieved, yes, you can put the grub configuration files onto the array ...
but as I understand it, the Master Boot Record must still be installed onto a single physical drive. I don't believe it's possible to share the MBR on an array - that would effectively be sharing two boot sectors. I can't imagine that any motherboard BIOS would be able to recognise this.
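In principle, the initrd change described above amounts to a fragment like this early in the initrd's init script (a sketch, not Puppy's actual script: the device names, the ext4 filesystem, and the exact hook point are all assumptions, and Puppy's initrd layout varies between releases):

```shell
# Hypothetical initrd init-script fragment: load the RAID1 driver,
# assemble the mirror from its member partitions, and mount it
# before the boot sequence goes looking for the root filesystem.
modprobe raid1
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
mkdir -p /mnt/md0
mount -t ext4 /dev/md0 /mnt/md0
```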
I've installed a new version of Puppy (PhatSlacko 5.5.02 for its easy Samba config) and I installed mdadm-3.2.5 from the PPM and rebooted, but there's no md0 in /mnt.
I tried:

# mdadm --assemble /dev/md0
mdadm: /dev/md0 not identified in config file.

So before I go messing things up further, what should I try next?
So as I understand it, you have installed a new version of Puppy, but you want to access the software RAID array that you previously created?

toronado wrote:
mdadm: /dev/md0 not identified in config file.

Oops, it sounds like you didn't keep a copy of your configuration file (from your earlier installation) - /etc/mdadm.conf
tempestuous wrote: "Oops, it sounds like you didn't keep a copy of your configuration file (from your earlier installation) - /etc/mdadm.conf"

I haven't erased the previous install (it's on a separate partition). Are you saying all I need to do is copy over the config file?
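Worth noting here (standard mdadm behaviour, not stated in the thread): the array's identity is stored in superblocks on the member drives themselves, so even without the old file, the configuration can usually be regenerated by scanning. A sketch, assuming the mdadm 3.x used earlier in this thread:

```shell
# Regenerate /etc/mdadm.conf from the superblocks on the member
# drives, then assemble whatever arrays the scan finds. The scan
# appends ARRAY lines (device name, metadata version, UUID) to
# the config file.
mdadm --examine --scan >> /etc/mdadm.conf
mdadm --assemble --scan
```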