Libvirt vs Virt-Manager on Lenny and Lucid

I ran into this random problem with virtualisation recently. For some reason, I just could not manage the LVM storage pools on my virtualisation server from my workstation. The workstation was running Kubuntu 10.04 with virt-manager, while the server was running Debian 5.0.4 (Lenny) with libvirt.

This was a very weird problem because I could access the LVM storage pools as long as they contained no logical volumes. The moment there was anything in them, virt-manager would fail to start the storage pool. Stranger still, I did not have this problem on some of my other installations.

After spending days digging into it, I found out the cause of the problem.

It seems that the libvirt people changed the protocol in version 0.5.0 and swapped the colon delimiter for a comma delimiter. The workstation had a newer version of virt-manager while the server had the older version of libvirt. So, all I had to do was upgrade libvirt from lenny-backports and that fixed the problem entirely.
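For the record, the upgrade itself is just standard backports usage. A minimal sketch, assuming the stock backports mirror and the Debian package name libvirt-bin:

    # Add the lenny-backports archive (mirror URL is an assumption; adjust to taste)
    echo 'deb http://backports.debian.org/debian-backports lenny-backports main' \
        >> /etc/apt/sources.list
    apt-get update

    # Explicitly pull libvirt from backports; on Debian the daemon ships as libvirt-bin
    apt-get -t lenny-backports install libvirt-bin
    /etc/init.d/libvirt-bin restart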

The reason why I had not seen this on some other machines is because the hardware was different. On this particular server, the hard disk was not seen as /dev/sdaX but parked under /dev/blocks/XXX:X instead. So, that is why the confusion with the “:” (colon) came into the picture between the two different versions.

Stress.


Little Big Computer

Wow!

After taking the customary bow of respect to the person who designed this level on LBP, I’d just like to say that this was not that difficult to do – albeit very time-consuming.

He did not actually design a complete computer but focused on the main computational component – the arithmetic unit. In this particular case, the arithmetic unit can perform two functions – addition and subtraction.

What this video actually shows is an exemplification of the Church-Turing thesis. In this case, the PS3 has successfully simulated an arithmetic unit within the confines of a simulated virtual world. Nice recursion.

Also, it might seem weird that this computer was constructed from moving parts instead of, say, electrons. However, before the world was inundated with electronic digital computers as it is today, we once used mechanical computers too, such as the Zuse Z1 currently on display in Berlin. Our history of computing is filled with all kinds of computers.

Now, if only it could be turned into a really interesting gaming level.

Multi-core Parallelism

As I mentioned in a previous blog post, I am running a six-core machine – it is actually a virtual machine. Regardless, I noticed one thing while doing it: as I went from a single core to six cores, performance improved accordingly, but when I went up to eight cores, things deteriorated. The reason for this is probably the arrangement of the processors. I am running a dual-socket AMD system with six cores per socket. So, for an eight-core VM to work, it would need to cross sockets, which is not a good idea due to the communications hierarchy – cross-socket memory access is much slower than staying within a single socket.
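If you want to verify this on your own hardware, you can inspect the socket/NUMA layout and pin the VM’s vCPUs to a single socket. A rough sketch, assuming libvirt’s virsh, a domain named “myvm”, and that cores 0-5 sit on the first socket (check the numactl output before pinning):

    # Show how cores are distributed across sockets/NUMA nodes
    numactl --hardware

    # Pin each of the six vCPUs to a core on the first socket
    # ("myvm" and the core numbering are assumptions)
    for vcpu in 0 1 2 3 4 5; do
        virsh vcpupin myvm $vcpu $vcpu
    done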

So, that’s why I ended up using a six-core VM.

Prefork vs Worker

Apache2 comes with several process/threading models, known as MPMs, to handle requests. The traditional method is maintained as prefork while the newer method is worker. The main difference is that the former relies on forking processes while the latter uses multiple threads. The result is that worker is faster than prefork under load.
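On Debian and Ubuntu of that era, the MPM is chosen at the package level and tuned in /etc/apache2/apache2.conf. A sketch of the worker knobs – the numbers here are illustrative defaults-style values, not my actual tuning:

    # Pick the MPM by installing one package or the other
    apt-get install apache2-mpm-worker   # or apache2-mpm-prefork

    # /etc/apache2/apache2.conf
    <IfModule mpm_worker_module>
        # MaxClients must not exceed ServerLimit x ThreadsPerChild
        StartServers          2
        ServerLimit          20
        ThreadsPerChild      25
        MaxClients          500
        MinSpareThreads      25
        MaxSpareThreads      75
        MaxRequestsPerChild   0
    </IfModule>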

I tested this by setting up two servers, one with prefork and the other with worker, and used ApacheBench (ab) to hit both systems. At a low concurrency of around 200 connections, both of them worked equally well because, in both cases, the main bottleneck was the CPU. After I increased the CPU from a single core to six cores, the load was handled equally well until I upped the ante to 500 concurrent connections.
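For reference, this is the kind of ApacheBench invocation involved; the hostname and request count are placeholders, not my exact test:

    # 10,000 requests at a concurrency of 500
    ab -n 10000 -c 500 http://server.example.com/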

At this point, the worker model would still run fine while the prefork model would start to fail periodically, as if it had trouble handling the sudden spike. Once it had been hit a few times, it could also handle the load, but the ramp-up was less graceful. This is pretty obvious once you think about it a bit: forking processes carries a larger overhead than spawning threads.

The only advantage of preforking is when handling non-thread-safe code. However, since my Apache serves only as a reverse-proxy front-end, it does not need to handle any non-thread-safe scripts; those are all handled at the back-end by other servers. So, there is really no necessity for the prefork model in my setup.
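For context, a reverse-proxy front-end of this sort needs nothing thread-unsafe loaded at all. A minimal sketch – the backend address is a placeholder:

    # Enable the proxy modules first
    a2enmod proxy proxy_http

    # Then a bare-bones front-end vhost
    <VirtualHost *:80>
        ProxyRequests Off
        ProxyPass / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>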

SSL Key Size

I have been doing some Apache optimisation recently and I thought I would note down the various tweaks for documentation purposes. These tweaks will be written down as I find them.

Using larger RSA keys slows things down.

On my benchmark system, I only get around 500-600 connections/second with a 4096-bit RSA key but 800-900 connections/second with a 1024-bit RSA key. Seeing that our server certificates are reissued annually, there is no reason to use a 4k RSA key. Although a 768-bit RSA key is breakable today, a 1024-bit one still stands for now and will probably do so for a couple more years.
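You can see the gap without a web server at all, since the cost sits in the RSA private-key operation performed on each full handshake. A quick sketch – the file names are placeholders:

    # Compare raw RSA sign speed; signing is what the server does per handshake
    openssl speed rsa1024 rsa4096

    # Generate a 1024-bit key and a CSR for the annual certificate
    openssl genrsa -out server.key 1024
    openssl req -new -key server.key -out server.csr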

So, stick to using 1024-bit RSA keys for SSL until it gets broken.