Upgrading to KDE 3.4 -- differently

I've been waiting to upgrade to KDE 3.4 for some time now, ever since it was released. KDE is one of the most popular desktop environments available on GNU/Linux, mostly due to the innovative features added with each release, along with overall speed improvements.

Within a matter of hours of the KDE 3.4 release, it was (as one would expect) available on Gentoo. But the package was masked for testing, as is usually the case with most new packages, before being deemed stable for the distribution.

So I waited and waited, checking almost daily to see if it had been marked stable, until... I ran out of patience. After about a month of waiting, and after reading a review article on KDE 3.4 and its new features and improvements, I finally decided it was time to make the switch.

Installing something as huge as KDE from sources, as is the case with Gentoo, requires hours and hours of painstaking compilation. Things get worse when there are updates to a few applications contained in one of the large packages, since the whole package must be recompiled whenever such a change lands. To solve this, Gentoo has recently started using split packages, starting with the 3.4 branch. This introduces sub-packages consisting of 270+ individual KDE applications packaged separately, plus meta packages that are used to group the split packages. (You can read more on this here.) This makes future upgrades easier, but with a price to pay -- the initial compilation takes even longer than the old method. I wanted to try the new method but didn't want to wait 20-25% longer.
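
For readers new to the split ebuilds, here is a minimal sketch of what the new layout looks like from the command line; the meta package names follow Gentoo's KDE 3.4 split ebuild naming, and --ask is only there so you can review the list first:

    # pull in the whole desktop through the top-level meta package
    emerge --ask kde-meta
    # or only the base desktop plus a couple of individual applications
    emerge --ask kdebase-meta konsole kate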

Reader discretion: the following content might lead to dizziness or feelings of nausea due to explicit g(r)eek content. Continue reading at your own risk...

Day 1: Welcome to distributed compilation

Initially I just wanted to speed up the compilation process, and read that it could be done with a compiler cache called ccache, which hashes the preprocessed (cpp) source and reuses previous compilation results. This was good and it seemed to improve things (I didn't bother benchmarking), but I knew it was still going to take a long time to compile some 270+ packages. There was another solution from the same good old people behind ccache and Samba, called distributed cc (C compiler), or distcc for short.
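
On Gentoo both tools can be switched on through Portage's FEATURES variable; the snippet below is only a sketch of my make.conf, and the cache size is an arbitrary figure:

    # /etc/make.conf
    FEATURES="ccache distcc"
    # how much disk space the compiler cache may use
    CCACHE_SIZE="2G"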

Distcc is a neat method where you can use a couple of machines as a compile farm to speed up the compilation process by distributing some of the work to other hosts. Getting distcc to work was a bit challenging in the beginning, when I tried to use my desktop machine running Debian Sarge as a distcc helper. The problem was that it would compile a small part, only to fail miserably stating that i686-pc-linux-gnu-g++ was missing. This puzzled me for a while, since according to the documentation all you need is the same gcc compiler version on all participating nodes and nothing more. After following that avenue and failing, I decided to use a friend's Gentoo box instead, which was running inside a VMware session. That finally seemed to work, and I felt a fresh breath of relief.
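
For the record, the basic setup on Gentoo is short; the addresses below are hypothetical placeholders for the Sarge desktop and the VMware Gentoo box:

    # on the machine doing the emerge: tell distcc which helpers to use
    distcc-config --set-hosts "192.168.1.10 192.168.1.20 localhost"
    # on each helper: run the distcc daemon so it accepts incoming jobs
    /etc/init.d/distccd start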

I couldn't complete the compilation of all the packages, since I was spending more time learning about and experimenting with the technology, and I came home with another 175 packages left to compile. Once home, I faced a new dilemma. Since my home machine ran a 64-bit version of Gentoo, not only did I need to use distcc, I also had to do it using a cross compiler. The cross compiling instructions looked way too complex to try after a long day, so I kept trying other things and digging around the net until I came across an article by someone who had gone through a similar experience. It seemed like I had to emerge a package called emul-linux-x86-compat and recompile gcc and glibc with support for multilib, but by now it was already too late (2:45AM), so I decided to call it a day and hit the sack.
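
For completeness, this is roughly the route that article described; I never finished it, so treat it as an unverified sketch rather than tested instructions:

    # 32-bit compatibility libraries for a 64-bit Gentoo host
    emerge app-emulation/emul-linux-x86-compat
    # rebuild the toolchain with multilib support so it can emit 32-bit code
    USE="multilib" emerge gcc glibc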

Day 2: The saga continues

I hit the jackpot with the Sarge box after I followed an intuitive feeling and created a symlink called i686-pc-linux-gnu-g++ pointing to the g++ binary. After that worked, I created another for i686-pc-linux-gnu-gcc.
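
In other words, something along these lines on the Sarge helper; the target directory is my assumption, anywhere on distccd's PATH should do:

    # distcc asks for the Gentoo-style tool names, so fake them with symlinks
    ln -s /usr/bin/g++ /usr/local/bin/i686-pc-linux-gnu-g++
    ln -s /usr/bin/gcc /usr/local/bin/i686-pc-linux-gnu-gcc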

I spent most of the time trying to optimize the compilation process further by changing parameters. First I changed the order of the hosts participating in distcc, placing the more powerful Sarge machine as the first entry, followed by localhost, and at last the VMware Gentoo box. After reading IBM's article on distcc, I also increased the -j option, which specifies how many compile jobs to run in parallel.
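
The changes boiled down to something like the following; the host addresses are again placeholders, and the -j value follows the common rule of thumb of roughly twice the total number of CPUs in the farm:

    # distcc prefers hosts earlier in the list, so the fastest box goes first
    distcc-config --set-hosts "192.168.1.10 localhost 192.168.1.20"
    # in /etc/make.conf: run more jobs in parallel so the helpers stay busy
    MAKEOPTS="-j6"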

By the time I was leaving for home, there were only another 70 packages left. I didn't feel like spending time fixing the cross compiling issue on the 64-bit barebone, so I left the machine to compile the rest on its own and watched a movie instead. All the KDE 3.4 packages finally finished compiling, and although it still took a long time compared to compiling the official large KDE source packages, that was mainly due to the various interruptions and experiments I did in between.

I look forward to the next KDE release, just so that I can feel good about myself for not killing time upgrading it.
