Sunday, September 11, 2011

RAID configuration example

http://ascend4.org/Installing_Raid_1_on_Existing_Ubuntu_Server

copied from above:

Useful web sites

* how to set up a raid disk - really good instructions
* benchmarks comparing fstype options
* how to set up fstab entries
* what mount and umount are really all about

(NOTE: I have used these instructions for Ubuntu 7.04 and 8.04 also, and they work without any problems. AWW)
 
(NOTE April 27, 2009: I attempted to use these instructions when updating the operating system on my computer to Ubuntu 9.04 from 8.10. The RAID 1 disk already existed, and the install process found it and set it up automatically. It called the RAID disk md_d0 rather than md0, however. I had to mount it. I also looked in /etc/fstab and corrected the automount entry that was there.)
 
The first web site above is really excellent. It points you to the tool mdadm, which is for setting up RAID disks in Linux. I basically followed the instructions on this web site. May God bless Mário, its creator. As you encounter each computer instruction in his guide, it should pay handsomely to look at the man pages for it. If an instruction was complex or a term confusing, I also searched the web for better explanations (giving rise to those above for fstype, fstab and mount).

Background

I am a real novice at doing all this, so I shall put in my thoughts as we go along. I purchased a PowerEdge 2950 rack-mounted server computer from Dell in February 2007. I bought it without any OS installed and without a hardware RAID controller. It shipped with a single 160 GB SATA disk but with slots for several more disks. I installed the Ubuntu 6.10 (Edgy) Linux OS from a disk downloaded from the Ubuntu web site. We later ordered from Dell and received two additional 250 GB SATA disks, which I wished to install in a RAID 1 configuration AFTER having an "up and running" server. We could not obtain only the rails to install the disks into the front slots; we had to purchase the disks, too, from Dell.


Before starting, and with the computer running, execute the following instruction:


cat /proc/diskstats 
and note the disks that are detected. I detected sda and its partitions only - i.e., my 160 GB disk. 
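
If the full diskstats output is hard to scan, a quick filter such as this lists just the detected device names (a convenience on my part; the third column of /proc/diskstats is the device name):

awk '{print $3}' /proc/diskstats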


To install the disks, we halted the computer and installed them by simply slipping them into two slots on the front of the computer. We powered up the computer.



Creating the RAID 1 configuration

Go to the heading "The hard work" in the guiding web page. Working from a remote computer via ssh, my intention now was to configure these disks in a RAID 1 configuration. The first instruction told me to check whether the system identified the disks, which I did by again running

cat /proc/diskstats 
At the bottom of the output were the two new disks: sdb and sdc.
The next instruction tells us to partition these two disks (fdisk writes a partition table; the actual formatting comes later with mkfs). If you are using disks that were previously in a RAID configuration, they are already partitioned, and you can skip this step. Mine were new. I partitioned the first one using


sudo fdisk /dev/sdb 
The fdisk instruction expects you to run it interactively. Typing m will give you the menu. As noted in the guiding web page, first type n to create a new partition (I wrote down the number of cylinders, as I was told to do). Then I typed t and was presented with a list of type options. I selected fd, as this was to be part of a RAID 1 disk. I typed w, as instructed, to save this information.
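
For reference, here is a sketch of the keystrokes for one such session (the details are my assumptions for a single whole-disk partition; the prompts vary a little with the fdisk version):

sudo fdisk /dev/sdb
# n        - create a new partition
# p        - make it a primary partition
# 1        - partition number 1
# <Enter>  - accept the default first cylinder
# <Enter>  - accept the default last cylinder (use the whole disk)
# t        - change the partition type
# fd       - type "Linux raid autodetect"
# w        - write the table to disk and exit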
I repeated the partitioning for the second disk, sdc.
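
To double-check both disks before going on, you can print their partition tables; each should show one partition of type fd ("Linux raid autodetect"):

sudo fdisk -l /dev/sdb
sudo fdisk -l /dev/sdc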


The guiding web page now instructed me to use mdadm, which I did have to install 


sudo apt-get install mdadm 
I then read the man pages for this instruction carefully. Doing so, one realizes this package is the secret behind setting up a RAID disk system; I wonder how anyone would do it without it. In the version of the man pages I read, the instruction has the form:


mdadm [mode] <raiddevice> [options] <component-devices> 
The mode part is really meant to be the place to put the mode selection options. In the following instruction, --create is such an option.
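
For orientation, here are a few of the mode options you will meet in those man pages (the one-line glosses are mine; see man mdadm for the full story):

# --create     build a brand-new array from fresh devices
# --assemble   re-assemble an array that was created earlier
# --manage     add, fail, or remove devices in a running array
# --monitor    watch arrays and report events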
I then ran the instruction as advised 


sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 
and expected a miracle would occur and it would work. It did not. The error message said something about md0 not existing. Of course it did not exist; I was "creating" it. Well, back to the web and a lot of not-too-helpful "help me" messages and their responses, until I came across a response that said "use the --auto option." I had read about that option and had wondered about using it but lacked the courage. The response explained that --auto is there to break the circular problem of mdadm refusing to run because md0 does not yet exist, even though creating md0 is the very point of the command. It did not suggest which value to give the option, so I ran the above instruction again with --auto=yes, as follows:


sudo mdadm --create --verbose --auto=yes /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 
and now the miracle did occur. It worked. 
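
At this point you can confirm that the array really exists and inspect its state with two standard facilities:

sudo mdadm --detail /dev/md0
cat /proc/mdstat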


Our next instruction is to restart the computer and then to wait until the disks sync. Mário suggested getting a cup of coffee. Good idea, except you can do a lot more than that in the one to two hours you will be waiting. That they are syncing will not be obvious, so you should monitor it, as noted by Mário, by issuing the command


watch cat /proc/mdstat 
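
While they sync, the watch output will look roughly like the following (the numbers here are made up purely for illustration; the progress line disappears once the resync is done, leaving [2/2] [UU] for a healthy mirror):

Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      244195904 blocks [2/2] [UU]
      [=>...................]  resync =  6.4% (15663104/244195904) finish=92.0min speed=41371K/sec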

After it has synced, we are now told to format md0 with the instruction (if you are reinstalling disks that were in a previous RAID 1 configuration and you do not wish to destroy their content, skip this step):


sudo mkfs -t reiserfs /dev/md0 
Okay, so what is the option -t reiserfs all about? On to the man pages for mkfs. Ah, reiserfs is an fstype. The man pages tell us there are lots of different types of formatting from which we can choose. Questions that occur: what are the options, and which should I choose? Using Google again on mkfs, I found Q&A pages noting that the available types correspond to the installed programs of the form mkfs.fstype, so I ran


sudo locate mkfs\. 
to see what was on my system of that form. A list popped up that included ext2, ext3, and so forth, but no reiserfs. Back to Google. The best page I found to give me clues as to which to choose is in the useful web sites at the start of this page. First of all, one wants a type that has journaling. Journaling means that if the system powers down suddenly and not cleanly, it can recover the disk directly on the next power-up, without my having to ask it to fix all the inconsistencies it finds. Yes, I want that. ext3 and reiserfs both have journaling. I finally found a statement that said reiserfs is a good choice and a newer type than ext3. So, I chose reiserfs. Now, did it exist on my system?
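
(As an aside: if mkfs really cannot find mkfs.reiserfs - my locate list was probably just a stale database - the helper lives in the reiserfsprogs package on Ubuntu, so something like this should supply it and let you verify:)

sudo apt-get install reiserfsprogs
ls /sbin/mkfs.*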


I ran the instruction as advised


sudo mkfs -t reiserfs /dev/md0 
and sure enough reiserfs was supported, and the format ran successfully. Okay, I could have just done what I was advised, but I felt better knowing why.


The next instruction is to edit the /etc/fstab file, with the apology that he was not going to tell us how to do that. Okay, his instructions have been great, and I can figure this out myself. Back again to Google, and up pops a really excellent web site on fstab (see above). It directed me to a page on mount, which I also read (I read it first, as advised). I found these two pages to be to the point and very easy to follow. All of a sudden I understood the mount instruction, and fstab no longer seemed a mystery. Basically, for our needs here, you can mount a partition on a disk at a mount point that is any existing directory in your system (apparently even a directory that has things in it - not tested). That directory will be the root of the mounted partition, and, while it is mounted, the previous content of the directory is hidden from view. md0 is a partition. sdb1 is a partition on the hard disk sdb, and so forth.


To check out my understanding, I decided to mount md0 manually at /back (I have my reasons for this name, but they are irrelevant here). /back has to exist, so I first ran


sudo mkdir /back 
and then I mounted md0 with the instruction 


sudo mount /dev/md0 /back 
Running the instruction 


df 
listed md0 as having the mount point /back, so everything seemed to have worked. Armed with this general feeling of success, I added the following line to my /etc/fstab. You need to read about fstab in the above web page to see what auto, defaults, and the two zeros are all about. The first two arguments are those for running the mount instruction.


/dev/md0        /back           auto    defaults        0       0 
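
For reference, here is the same line with each field labeled (the glosses are mine; the device and mount point are of course specific to my setup):

# <device>  <mount point>  <fstype>  <options>  <dump>  <fsck pass>
/dev/md0    /back          auto      defaults   0       0
# "auto" lets mount probe the filesystem type (you could write reiserfs
# explicitly); "defaults" is the standard option set; the first 0 disables
# dump backups and the second 0 excludes it from boot-time fsck ordering.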
fstab is a file that tells the operating system which disks to mount automatically when the system restarts. The mount instruction it will run is equivalent to


sudo mount /dev/md0 /back 
So, if you are really curious whether this restarting works, and if you will not spoil things for anyone else who may be on your computer at this time, you could restart your computer and then run


df 

again.  md0 should be there at the mount point called /back.


Putting things on your RAID disk

Now it is up to you to put things on md0 that you wish to have on your RAID 1 disk. It seems most people put /home there (which makes complete sense). You can do this easily by tarring up the contents inside /home as the superuser (you need to be, and doing so guarantees file and directory ownership stays unchanged), unmounting md0, remounting md0 at the mount point called /home, and untarring the archive inside this directory; a sketch of the sequence follows.
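Here is that sequence sketched out, assuming md0 is still mounted at /back as above (work with sudo, with no users logged in and nothing else using /home):

cd /
sudo tar -C /home -cpf /back/home.tar .    # archive /home, preserving ownership and permissions
sudo umount /back
sudo mount /dev/md0 /home                  # the array now shadows the old /home
sudo tar -C /home -xpf /home/home.tar      # restore the contents onto the array
sudo rm /home/home.tar

Remember also to change the mount point in /etc/fstab from /back to /home, or the array will come back at /back on the next restart.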

As we are running a server with MySQL on it, it made sense to grab the files in /var/lib/mysql and put them there, too. But a word of warning: DO NOT put the whole directory /var onto the RAID disk. The sudo instruction, which you need in order to move things around as the superuser, first looks in /var for information, and it also writes back there. You will almost certainly "mv" /var to put it aside for a moment, and /var will no longer exist. sudo will no longer work. Ouch!! And you need to be the superuser to "mv" it back, while attempting in full panic to rescue yourself from your not-so-clever misstep. Yes, I got bitten, and I had to take my Ubuntu 6.10 install disk to the machine and use the rescue facilities on it to put /var back in place.
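
For the MySQL files, here is a sketch of one way to do it (the destination /back is an assumption - adjust it to wherever your array is mounted; the MySQL paths are Ubuntu's defaults of that era; stop the server first so the files are quiescent, and keep the original directory until you have verified the copy):

sudo /etc/init.d/mysql stop
sudo tar -C /var/lib -cpf - mysql | sudo tar -C /back -xpf -   # copy, preserving ownership
sudo mv /var/lib/mysql /var/lib/mysql.orig                     # set the original aside - never /var itself!
sudo ln -s /back/mysql /var/lib/mysql
sudo /etc/init.d/mysql start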

Note: I did not partition the RAID 1 disk so as to have several mount points. As I have not played with that option (I did not want it), I offer no comments on how to do it.
