Android, Linux, Ubuntu

< waiting for device > – fastboot

My TF201 is well and truly out of warranty now, so I thought I might try to get a dual boot to Ubuntu running on it. I’m still trying to find some up-to-date instructions, and I’m kind of feeling my own way by amalgamating approaches from various sources. I’ll let you know how I get on.

In the meantime, when you are trying to install an image via fastboot and you keep getting stuck at:

< waiting for device >

with no response; try running it with sudo!
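
For example, a typical session flashing a recovery image (recovery.img here is just a stand-in name for whatever image you are actually flashing):

fastboot devices                            # hangs at < waiting for device >
sudo fastboot devices                       # now the device shows up
sudo fastboot flash recovery recovery.img   # and the flash proceeds

The underlying cause is usually that your normal user doesn’t have permission on the USB device node; a udev rule is the permanent fix, but sudo gets you unblocked.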

Linux, Ubuntu

The most TOP utilities for Ubuntu Server

I know this will apply to various Linux distros, but continuing the theme of blog posts about installing Ubuntu Server on my BBS2, I give you some Top hints for some Top utilities on the command line.

  • top – You most probably already know that running ‘top’ will show you a list of processes and their CPU usage alongside the total CPU usage. If you press 1 and you have multiple processors, it will break the stats down by processor.
  • iotop – You’ll probably have to apt-get install iotop; this little utility shows you the top disk I/O processes, detailing the read and write transfer rates.
  • iftop – You’ll probably have to apt-get install iftop; this one shows you the network usage, nicely broken down by destination, sent / received, with totals etc. (see the sketch below).
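
Putting all three together (eth0 below is just an example interface; adjust to suit your box):

sudo apt-get install iotop iftop
sudo iotop -o          # only show processes actually doing I/O
sudo iftop -i eth0     # per-connection bandwidth on a given interface

The -o flag on iotop filters out idle processes, which makes it much easier to spot the culprit on a busy server.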

So think of these Top utilities next time you need to monitor your server in Ubuntu.

Ubuntu

Installing Ubuntu on the BBS2


So I had a bit of experimentation with the BBS2. I had purchased the storage drives (2x Samsung HD103UJ SpinPoint F 1TB hard drives) but had decided that I did not want the OS to be installed on the RAID. So after scrounging a 160GB Maxtor off a friend, I installed Ubuntu Jaunty and successfully got the RAID going, mounted and working. However, at the time Jaunty was only in beta, and there seemed to be some stability problems (which would be expected); over 48 hours I had two kernel panics that completely halted the system, and I was not able to determine the cause.

[Photo: the BBS2]

Realising that I need this to be as stable as possible, I then opted for Ubuntu Hardy LTS, which has been around for a year now and should be much more stable; plus it’s supported with fixes for the next couple of years as well. Leaving the server to run, I then noticed that the core temperature was getting a lot higher than expected: the Maxtor DiamondMax Plus 9 was running at close to 50 degrees, and when the Samsung drives were moved adjacent to the Maxtor, their temperature went up to 42–45 degrees. That is within the operating range, but I didn’t like the idea of them running at 10 degrees above their normal operating temperature.

So I purchased an old 8GB Corsair Voyager GT from the same friend (it has a 10-year replacement guarantee), stuffed it in a free USB port, installed Ubuntu Hardy to that, and it works a treat. Boot-up time is as fast as, if not faster than, when the OS was installed on the Maxtor SATA drive; it’s plenty big enough for the OS; I mounted it with noatime and nodiratime; and the drives are now sitting nicely at about 33 degrees C.
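
For reference, the relevant /etc/fstab entry looks something like this (the UUID is a placeholder; run blkid to find the real one for your stick):

# root filesystem on the USB stick, no access-time updates
UUID=1234-abcd  /  ext3  noatime,nodiratime,errors=remount-ro  0  1

Skipping access-time updates matters more than usual on flash, since it saves a write for every read.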

The BBS is acting as a print server, a Samba server for the wife’s Vista laptop, and an NFS server for me, and has Apache installed and exposed over HTTP and HTTPS for providing my SVN server (still to do). All the data is on the RAID, which is currently defined as RAID 5 but only has two drives and is therefore effectively mirrored. As my storage needs increase, I now have 3 free slots, bringing the system to a maximum of 4TB if needed, sticking with the current 1TB drive size (RAID 5 usable capacity is one drive less than the total, so five 1TB drives gives 4TB). If I need more than that, I can even purchase a second drive-only box to get another 5 bays attached by eSATA.

Some useful resources and tips when setting up the server

Setting up automatic updates in Ubuntu Hardy

Adding users, new users to groups and new groups

Creating self signed certificates for apache

Getting system temperature and sensors information:

  • apt-get install lm-sensors, run sudo sensors-detect and answer yes to everything. Reboot, then run sensors to see all the info.
  • apt-get install hddtemp; don’t bother running it as a daemon, just run sudo hddtemp /dev/sdX (substituting your drive letter, e.g. /dev/sda).
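
As a single sequence (sda below is just an example device):

sudo apt-get install lm-sensors hddtemp
sudo sensors-detect        # answer yes to the prompts, then reboot
sensors                    # core temperatures, fan speeds, voltages
sudo hddtemp /dev/sda      # per-drive temperature; substitute your drive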

Easily enabling remote access for CUPS:

  • install lynx (the command line browser)
  • run lynx localhost:631
  • choose the Administration menu, then Basic Server Settings; check ‘Share published printers’ and ‘Allow remote administration’, then Change Settings and quit. Then in your nice GUI browser go to http://machinename:631 and add printers, etc. (or see the one-liner below).
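
If you’d rather skip lynx altogether, CUPS has a command-line tool that flips the same two switches (this assumes cupsctl, which ships with the CUPS version in Hardy):

sudo cupsctl --remote-admin --share-printers

That enables remote administration and printer sharing in one go; check the web interface afterwards if you want to confirm the boxes are now ticked.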

Install ebox, a great web admin interface: add its repository line to /etc/apt/sources.list, then apt-get install it.

Install the Ubuntu profiles for screen (screen allows you to start multiple bash sessions / run applications that stay resident, so you can re-attach to them in case of network dropout; it’s CLI-based and can be run in an SSH connection): add its repository line to /etc/apt/sources.list, then install the package.
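
If you haven’t used screen before, the basic workflow looks like this (the session name ‘work’ is arbitrary):

screen -S work     # start a named session and run your long job in it
                   # detach with Ctrl-A d; the session keeps running
screen -ls         # list sessions still alive on the box
screen -r work     # re-attach, e.g. after your SSH connection dropped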

Hardware, Linux, Ubuntu

Ubuntu Server – Creating an expandable Raid5 Array starting with 2 disks

Again, I’m using VirtualBox to test this. I have a single OS drive with Ubuntu Server (Intrepid) installed, and I’ve added two 2GB virtual disks to it, which will be the starting point of the RAID 5. Most places on the net say you need at least 3 disks to run RAID 5, but let’s see what happens.

Let’s create the RAID:

mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdb /dev/sdc

The RAID gets created! And we can monitor it with:

cat /proc/mdstat

When it has finished initialising, create a file system on the RAID array (ext3):

mke2fs -j /dev/md0

Create a mount point (/raid) and mount it:

mkdir /raid
mount /dev/md0 /raid

df then reports it as having 2GB free. Both my VM drives sdb and sdc are 2GB, so the assumption is that in two-drive mode it is simply mirroring the data. This is exactly what we want: when I get a new drive later on, I want to add it to the RAID and see an increase in disk space. So let’s test that: shut down the machine and add a new drive.

Add the drive to the array

mdadm --manage /dev/md0 --add /dev/sdd

Now when I run cat /proc/mdstat it says there are 3 drives in the array, but sdd is marked (S), i.e. it’s only sitting there as a spare at this point.

Let’s now grow the array:

mdadm --grow /dev/md0 --raid-devices=3

Watch the progress with cat /proc/mdstat; when it’s complete we can mount it. (Growing the array by 2GB took about 5 minutes, eek!)

After completion /proc/mdstat now reports 4GB available, but the file system on the RAID still thinks it’s 2GB.

So let’s resize it (unmount the array first; e2fsck needs the file system offline):

e2fsck -f /dev/md0
resize2fs /dev/md0

Remount /dev/md0 and df now reports 4GB.
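
One follow-up worth doing, sketched here on the assumption that your distro reads /etc/mdadm/mdadm.conf (Ubuntu does): record the array so it assembles itself on boot, and add it to fstab so it mounts automatically too.

mdadm --detail --scan >> /etc/mdadm/mdadm.conf          # persist the array definition
echo '/dev/md0 /raid ext3 defaults 0 2' >> /etc/fstab   # mount it at boot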

Success.