Friday, July 04, 2008

LTSP configuration (Gutsy) - Episode 2

LTSP is the Linux Terminal Server Project. Because it's popular with schools it has seen quite a bit of development, and it has been adopted by Ubuntu as part of the Edubuntu package. It's generally used to let one server provide the horsepower for a bunch of thin clients. We'll be expanding it to serve other useful purposes.

We're going to use it to help clone a bunch of Windows XP computers.

In our first episode we built an LTSP server. If you haven't read that article yet, or you don't have a spare LTSP server (not one in production!) to work with, it would be good to go back there and follow the steps so you are better able to follow along.

There are several steps to perform here. We have to select an imaging platform that's bootable with LTSP. It has to copy only the blocks that contain data, not the whole drive, and it has to be reasonably fast. We have to select a method of getting the large image to the clients -- probably file sharing, but possibly multicasting. Then we have to tie it all together. One advantage we have is a large number of machines lying around with 40 GB drives and gigabit Ethernet to use as servers; they're surplus from a prior installation.

To get the image size down we need a tool built on ntfsclone, since ntfsclone understands the NTFS format and can copy only the blocks that contain data. It also has to work with LTSP while allowing us some flexibility in how we use it. I chose Clonezilla, a subproject of Diskless Remote Boot in Linux (DRBL), which is itself a project similar to LTSP. Both DRBL and Clonezilla come from Taiwan's National Center for High-Performance Computing. Clonezilla has handy installers, comes in bootable CD and pendrive formats, and a version is available for network booting. Although it claims to support multicasting, that process is as yet unwieldy, so we'll use the LTSP server as a DHCP and file server and run multiple servers to meet our bandwidth requirements. Since we're using Ubuntu for the LTSP server, I decided to go with Clonezilla Live Experimental (Hardy). Download the .iso and burn it to a CD.
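To see why ntfsclone matters here, this is roughly the kind of pipeline Clonezilla drives under the hood -- a sketch, not its exact invocation. The partition name /dev/sda1 and the output filename are assumptions:

```shell
# Save only the used blocks of an NTFS partition to a compressed image.
# Run as root on the client; /dev/sda1 is an assumed partition name.
ntfsclone --save-image --output - /dev/sda1 | gzip -1 > sda1.img.gz

# Restoring is the reverse pipeline (this overwrites the target partition):
# gunzip -c sda1.img.gz | ntfsclone --restore-image --overwrite /dev/sda1 -
```

Because only in-use blocks go through the pipe, a mostly empty 40 GB drive produces an image closer to the size of its actual contents.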

Before we put a lot of work into making it netbootable, I should probably validate that it makes a good copy in a reasonable period of time. I'll be using a recently imaged old laptop that won't be a disaster if I mangle its image, just in case Clonezilla does not work as advertised. My actual target clients are dual-core notebooks with faster hard drives, but their image is also five times the size. I'm looking for scalability on the server (serving many clients simultaneously) and on the network.

Boot the Clonezilla CD on a client you would like to clone that's connected to your LTSP local network.


Choose the boot-to-RAM option, because we'll need to run from RAM when we PXE boot later. After some text scrolls past you see the language selection screen.


We'll choose the English version and leave the keymap untouched.

Start clonezilla

device image disk/partition to/from image

We'll use the SSH server option because we don't yet have Samba or NFS set up on our LTSP server.

It will automatically detect our NIC and network.

DHCP is set up so we'll use that to get our address.

It detects our server and offers it as the default.

Port 22 is the default for ssh.

The default account is root. We don't allow remote access from a root account, so we'll enter an ordinary user account instead.

Here we select a directory on the host. This is a good time to create that directory and make sure it is owned by the user you entered on the previous screen.
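On the server side that's just two commands. A minimal sketch, assuming the /home/partimag path we'll use later; run it as root (or prefix with sudo), and substitute the account you actually entered for the default:

```shell
# Create the image directory on the LTSP server and hand ownership to the
# account entered on the clonezilla user screen. Both defaults are assumptions.
IMG_DIR=${IMG_DIR:-/home/partimag}
CLONE_USER=${CLONE_USER:-$(id -un)}   # replace with your chosen account
mkdir -p "$IMG_DIR"
chown "$CLONE_USER": "$IMG_DIR"
```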

We're warned that we're about to be asked for a password.

Are we sure we want to connect to a new server? Of course the answer is yes here.

Here's a prompt with no useful information for what we're doing. Press enter.

We're going to choose savedisk here to take a snapshot of the hard drive in this computer. When we restore we choose restoredisk instead.

We're going to use ntfsclone, so choose the first option here.

The default here is only -c, wait for confirmation. We're going to clear that and set no options on this screen.

The hard drive on this PC has 5.4 GB in use. Using -z1 we can bring that down to 2.1 GB, which is better for our networking. -z2 is much slower for little improvement in image size; net of the extra CPU time, it's probably a loss in overall speed.
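The tradeoff is easy to demonstrate on sample data. As I understand it, clonezilla's -z1 is gzip-based while -z2 uses bzip2; here gzip's own -1 versus -9 stands in to show that the heavier setting buys only a modest size reduction for noticeably more CPU time:

```shell
# Build ~1 MB of compressible sample data (real savings depend on disk contents).
yes "the quick brown fox jumps over the lazy dog" | head -c 1000000 > sample.bin
fast=$(gzip -1 -c sample.bin | wc -c)
best=$(gzip -9 -c sample.bin | wc -c)
echo "gzip -1: $fast bytes   gzip -9: $best bytes"
```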

Here we choose an image name. This will actually be a subdirectory in the folder chosen previously, with various files in it.

There's only one drive in this machine (it's a laptop). As soon as we confirm this last entry, it begins taking the image and storing it on the server. The first run with 10/100 networking took 445 seconds; a second copy to the server took 442 seconds. Download with 10/100 took 382 seconds, and at gigabit speed 365 seconds. Obviously bandwidth isn't our bottleneck. One thing to watch out for: on the server, storing one image pushes both CPUs to about 50%, considerably more than their 20% baseline. This is likely the encryption overhead of the SSH connection. Network usage comes in spikes of about 4-12 MB/s with gigabit networking. To improve this we'll need to use a different network protocol to serve the images.
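A quick sanity check of that bandwidth claim, using the numbers measured above: even the best gigabit run moves the 2.1 GB image at only a few MiB/s against a theoretical link ceiling of roughly 119 MiB/s, so the protocol, not the wire, is the limit.

```shell
# Effective throughput of the 365-second gigabit run vs. the link's ceiling.
awk 'BEGIN {
  img_mib = 2.1 * 1024                                # 2.1 GB image, in MiB
  printf "effective: %.1f MiB/s\n", img_mib / 365     # measured gigabit run
  printf "link ceiling: %.1f MiB/s\n", 1e9 / 8 / 1048576  # 1 Gbit/s in MiB/s
}'
```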

Now we check the image size on the network.

ltsp:/$ ls -hal /home/partimag/2008-07-05-00-img/
total 2.1G

That's good. Now we repeat the process but choose to download the image.

Test the image thoroughly. Are all the files there? Perform a chkdsk. No errors? Then we've got a viable copy but the speed needs work.

I'll try Samba next. We'll stick with the gigabit connection since it's already up. I would go over the way I configured Samba, but you can figure it out from this useful page.
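For reference, a minimal sketch of the kind of share I mean. The share name, path, and user are assumptions; on the real server you would append this stanza to /etc/samba/smb.conf and reload Samba (e.g. /etc/init.d/samba reload on this era of Ubuntu):

```shell
# Append a share for the image directory. SMB_CONF defaults to a local file
# so you can inspect the result first; point it at /etc/samba/smb.conf when ready.
SMB_CONF=${SMB_CONF:-smb.conf.partimag}
cat >> "$SMB_CONF" <<'EOF'
[partimag]
   path = /home/partimag
   read only = no
   valid users = cloneuser
EOF
```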

Testing with Samba reveals that the server's processor overhead for a single gigabit connection goes from the 20% baseline to only about 22%, so we've removed the processor bottleneck. We're still only using about a tenth of the gigabit link on average. We have plenty of machines available, so we'll probably go with six or eight clients per server, depending on how much the load slows them down.
