Category: Hardware
Seagate 750GB Momentus XT Performance
I recently installed a Seagate 7200rpm 750GB Momentus XT in my M11x R2 after a friend told me about the hybrid technology. It has an 8GB SSD integrated into the 750GB disk drive, giving you a balance of SSD-like read/write performance without sacrificing storage space.
It is hard to quantify how fast this drive actually is, but both Ubuntu and Windows boot and feel faster in day-to-day usage. The drive’s firmware claims to cache commonly used files and automatically adapt to your usage, and it does seem to learn in this manner.
Copying files about and running the standard Disk Utility drive benchmark gives read speeds averaging about 100 MB/second. I didn’t do the write test, as it requires the drive to be unmounted and I was not totally convinced that it would be non-destructive.
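On the command line, hdparm’s timed buffered read test gives a rough equivalent figure and is non-destructive, so it is safe to run on a mounted drive (the device name here is an assumption; check which device is yours first):
sudo hdparm -t /dev/sda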
I’ve got the ST750LX003-1AC154 (SM12) model and I am very impressed. Thanks to Binky for the recommendation 😉
Atom 330 Video encoding
As you probably know by now, I purchased the Tranquil PC BBS2, an Atom 330 based box with RAID. I am running Ubuntu Server 64-bit.
I have quite a lot of DVD images on it, and access them from my media center, but to save space I wanted to encode them as MP4, so I installed the 64-bit HandBrakeCLI on it, not being bothered if it was going to take a few hours to re-encode the DVDs.
Now, my Acer Core Duo 2.0GHz laptop encodes at about 80 frames/second, which I have always been happy with, so I was very surprised to see that the 1.6GHz Atom 330, with its two hyperthreaded cores, managed a very respectable 65 frames/sec!
Back to the Future III took 42 minutes to encode as 720×576 MP4.
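For reference, the kind of HandBrakeCLI invocation I mean looks like this (the file names and x264 quality setting are illustrative, not the exact command I ran):
HandBrakeCLI -i backtothefuture3.iso -o backtothefuture3.mp4 -e x264 -q 20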
Growing a RAID5 Array
Again, I’m using VirtualBox to test this. I have a single OS drive with Ubuntu Server (Intrepid) installed, and I’ve added two 2GB virtual disks to it, which will be the starting point of the RAID5. Most places on the net say you need at least 3 disks to run RAID5, but let’s see what happens.
Let’s create the RAID:
mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdb /dev/sdc
The RAID gets created! We can monitor it with:
cat /proc/mdstat
When it has finished initialising, create a file system on the RAID array (ext3):
mke2fs -j /dev/md0
Create a mount point (/raid) and mount it:
mkdir /raid
mount /dev/md0 /raid
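It’s also worth asking mdadm what it thinks it has built; this sanity check (an extra step, not part of my original notes) reports the array’s level, size and state:
mdadm --detail /dev/md0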
df then reports it as having 2GB free. Both my VM drives, sdb and sdc, are 2GB, so the assumption is that it is simply mirroring the data in two-drive mode. This is exactly what we want: when I get a new drive later on, I want to add it to the RAID and see an increase in disk space. So let’s test that; shut down the machine and add a new drive.
Add the drive to the array
mdadm --manage --add /dev/md0 /dev/sdd
Now when I run cat /proc/mdstat it says there are 3 drives in the array, but sdd is marked (S) for spare.
Let’s now grow the array:
mdadm --grow --raid-devices=3 /dev/md0
Watch the progress with cat /proc/mdstat and when complete we can mount it. (Growing the array by 2GB took about 5 minutes! Eeek!)
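If a reshape like this crawls, the kernel’s resync speed floor can be raised (a knob I didn’t actually try here; the value is just an example, in KB/s):
echo 50000 > /proc/sys/dev/raid/speed_limit_min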
After completion, /proc/mdstat now reports 4GB available, but the file system on the RAID still thinks it’s 2GB.
So let’s resize it (with no size argument, resize2fs grows the file system to fill the device):
e2fsck -f /dev/md0
resize2fs /dev/md0
Remount /dev/md0 and df now reports 4GB.
Success.
Re-installing the OS and Re-assembling the RAID
Following on from my article on Setting up and Managing RAID1 on Ubuntu Server, I have been testing the RAID using a VirtualBox VM.
Today I took the VM, consisting of an OS drive (with Ubuntu Server installed) and two drives set up in a RAID1 configuration, and created a new OS drive, replacing the existing OS. The intention is to test inserting a new OS drive, re-installing the OS and getting the RAID working again without losing any data.
Useful reference: the mdadm man page.
I installed the OS and installed mdadm as per the instructions in my previous post.
All commands are issued as root / sudo
So let’s see if we can obtain any information about the RAID; we’ll see what it knows about sdb1, which was part of the RAID1 with sdc1:
mdadm --examine /dev/sdb1
It successfully detects that the drive has a superblock and knows that it was part of a RAID with /dev/sdc1 (see the last two lines of the output). So let’s try to re-assemble the RAID:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
It responds saying /dev/md0 has been started.
So now we only need to create a mount point
mkdir /raid
and mount it
mount /dev/md0 /raid
We have successfully mounted the RAID, so let’s now put an entry in fstab so it mounts at startup.
nano /etc/fstab
add
/dev/md0 /raid auto defaults 0 0
Reboot, and all should be working.
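One extra step worth considering here (my suggestion, not part of the original steps): record the array in mdadm’s config and refresh the initramfs, so the array is assembled automatically at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u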
Setting up and Managing RAID1 on Ubuntu Server
In preparation for getting my Tranquil PC BBS2, on which I plan to install Ubuntu Server on the “OS disk” and initially have two 1TB drives in a RAID1 configuration (adding a further two later as my storage needs increase), I decided to investigate how to install and configure the RAID in such a configuration.
Note: in my configuration I am setting up a NAS / home server. I have a single drive for the OS that is not RAIDed, as I don’t mind having to re-install the OS if that drive fails (I will test in the near future that I can re-attach an existing RAID to a new install). The RAIDed drives are the ones that will store the data shared on the NAS.
I did the test using VirtualBox, creating an OS virtual disk and two virtual disks for the RAID. I initially only mounted the OS disk and performed a usual Ubuntu Server install.
So, with Ubuntu installed and the two drives to be RAIDed added to the VM:
All the following commands should be run with sudo or as root.
Creating the RAID array
First we need to install mdadm (I think it stands for multi-disk admin), the utility for managing RAID arrays.
Unfortunately, when I tried the expected sudo apt-get install mdadm, there were some weird package dependencies (a known issue) that also installed citadel-server, which prompts for loads of unexpected configuration. To get round this, do a download-only of mdadm, then run the install with dpkg:
sudo apt-get --download-only --yes install mdadm
sudo dpkg --install /var/cache/apt/archives/mdadm_2.6.7...deb
For each drive in your RAID array, run fdisk or cfdisk and create a primary partition that uses the whole drive. These partitions should be the same size; if not, the smallest will determine the size of the RAID array. The partition type needs to be set to type ‘fd’ (Linux raid autodetect).
fdisk /dev/sdb
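For reference, the fdisk keystrokes are roughly: n for a new partition, p for primary, accept the defaults for the size, t to set the type to fd, and w to write the table and exit. (That’s a from-memory summary; fdisk’s prompts vary a little between versions.)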
Next, run mdadm to create a RAID device, /dev/md0 (that’s md followed by zero; you have to call it mdX where X gives an md device not in use). We set the RAID level to raid1 (mirroring) and the number of devices to be included in the RAID to 2, followed by a list of the disk partitions to be used.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
The RAID array will be created and you can monitor its progress by typing:
watch cat /proc/mdstat
Once complete, we now have a single device that can be mounted; however, it does not yet have a file system on it. I chose to format it as an ext3 fs.
mkfs -t ext3 /dev/md0
Create a folder to mount the device in (I chose /raid), and mount it:
mkdir /raid
mount /dev/md0 /raid
The raid drive is now mounted and available. To get it to be mounted at system startup, we need to add an entry into the fstab.
nano /etc/fstab
add
/dev/md0 /raid auto defaults 0 0
Reboot, and all should be working.
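A small aside (my suggestion rather than part of the original setup): md device numbers can occasionally change between boots, so a more robust fstab entry uses the file system’s UUID instead of /dev/md0. You can look it up with:
blkid /dev/md0
and then use UUID=<the-reported-value> as the first field in fstab.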
Examining the state of the RAID
Whilst the RAID is performing operations such as initialising, you can see the status with:
cat /proc/mdstat
mdadm can also be used to examine a hard disk partition and return any RAID state information, including failed devices, etc.:
mdadm --examine /dev/sdb1
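Note that --examine reads the RAID superblock on a member partition; for the array-level view (state, active and failed device counts, and so on) the complementary command is:
mdadm --detail /dev/md0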
Breaking the Array (Replacing a drive)
Building a RAID array and not testing it, let alone not knowing how to fix it should a drive go faulty, is just stupid, so I decided to put the array through its paces using the wonderful VirtualBox. So, I shut the machine down and removed the second RAID drive, sdc, from the VM.
During boot-up I noticed a [Fail] on mounting file systems, and after logging in, the /raid mount was not available. This was my first surprise: I expected that, as one drive of the array was still plugged in and available, the device would just be mounted with some form of notification of the RAID not being correct. I have not yet investigated whether changing the mount options in fstab would enable this, so if you know, please comment.
After logging in, the RAID device had been stopped, so I tried starting it (-R is short for --run, which starts the array even though it is degraded):
mdadm --manage -R /dev/md0
This was successful, and I could even mount the RAID device and access the files on it; however, it is now running with only one drive.
So I shut down the VM, created a brand new disk in VirtualBox and added it to the VM, emulating replacing the drive with a new one. I started the machine up, logged in and ran mdadm as above to start the array.
Faulty devices can be removed with the following command, replacing sdc1 with the partition to remove:
mdadm /dev/md0 -r /dev/sdc1
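If the dodgy drive were still attached rather than physically removed, mdadm would refuse to remove an active device, so you would mark it as failed first (a step I didn’t need here because the drive was already gone):
mdadm /dev/md0 -f /dev/sdc1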
However, as I had removed the physical VM drive (a bit oxymoronic, I know), the device was not classed as part of the array, so now I had to prepare the new drive ready for addition to the array.
So, create a primary partition of the required size (type fd again) on the new drive using fdisk.
We don’t need to format it: as soon as we add it to the array, the existing drive’s contents will be replicated onto it.
mdadm --manage --add /dev/md0 /dev/sdc1
Run watch cat /proc/mdstat to see it rebuilding the array.
I am now going to have a play with extending the array, and see if I can start off with a RAID5 in two-drive mode; if that can mirror until I add a 3rd and 4th drive, then that might mean a change in my approach to extending the storage in the future. I hope this all helps some other relative newbies to Ubuntu and RAID.