Journalism sucks when it comes to computing

A very popular site in my country, rediff.com, is pretty much filled with journalists who lack any authenticity. To add to the pain, they seem to be people with terribly low IQ levels. Let's dig into the many recent articles on Bill Gates and his retirement saga.

It is hilarious that the journalist seems to be an ill-informed computer monkey who does not know anything about computers beyond a keyboard and perhaps a few Word document tricks. Since when did Gates become the Ultimate Geek? Ask Gates and I am sure he'll say no. He is a shrewd businessman rather than some uber-cool geek. What he did may be great, but let's not worship anyone blindly. And the same goes for any other person related to computing, be it Linus, Prof. Don Knuth, the great Alan Turing or John McCarthy. In fact, this whole concept of blind following is not only pointless but also dangerous. People spread this stupidity around, and the result is a steady unrest among different sects of computing: GPL vs BSD, Linux vs Windows, procedural languages vs functional languages, and what not. Be the sole decider for yourself. Why should you follow some shoddy journalist (who does not even know the basics of computers) just because he dazzles you with a few slides and an impressive write-up?

Another rather “big” example: VMware vs Microsoft's Hyper-V. Okay, MS has released a hypervisor and has cleverly forged facts in its own favour, but they are in no way a match for VMware's better products. All this artificially creates hype around a lousy product to make sure they can make inroads into datacenters and workstations. Well, what will happen remains to be seen, but Hyper-V still needs to do a lot of catching up. It would be great if MS developers started improving technically rather than spreading the usual marketing shit for which MS is world renowned.

Mr Gates, are you listening?… Oh, never mind.

I want to use Linux (or not)

One of the interviewers during my recent appearance at a pretty big company landed me in a rather awkward situation. He gave me a big lecture on how the GPL is not good for protecting the IP of the company, and on how patents will be revealed if they release the source code of their software. BTW, isn't your patent already available via Google patent search? :)

Actually I felt like, “Gosh! Is this guy living in some prehistoric era or what?” Seriously, since when have people jeopardised their work by making their code GPL? For example, see FSMLabs' RTLinux, or for that matter the RCU mechanism in the Linux kernel. Sure, the original companies generously donated the code, but that makes it even more difficult to copy the patentable idea. Surprisingly, they still want to use the Linux kernel, so they run the kernel but keep their closed drivers in userspace. A neat but horrible trick. It can be viewed as a classic example of what happens when braindead management people start taking charge of technical predicaments.

This brings us to yet another debate: “Are software patents valid?” My answer is a big “NO”. Software patents are idiotic. Software and algorithms are logically mathematics, and patenting mathematics is ridiculous.

Imagine your kid comes to you and says, “See Dad! I found a way to solve this problem using my own algorithm.” And you reply, “Sorry Son, forget it… my company has a patent on this algorithm, so you cannot use it, else you will be sued.” “How do you know, Dad?” … “Because I filed it, Son.” Trust me, he will curse you for filing a patent on something which is nothing but mathematics.

Think over this.

Engineers Or Programmers?

Sometimes when I appear for a job interview, something that pisses me off is the quality of questions some people ask. At times it is evident that they do not have a clue what they are asking, but they still go ahead to show that “I am the boss now, because I am the one sitting in the chair.”

That's really disappointing. These so-called technical interviewers are looking for programmers, and even that would be a misnomer. What they are looking for are code monkeys who cannot think. I want to work as an engineer, not as a code monkey. Engineering is a subtle art which takes a lot of thinking and perseverance, while programming (no offence to any programmers out there), though an art, is different from engineering. Programming is like assembling a car; engineering is like building the parts for a good car.

Unfortunately, what people in my country expect is employees who can write stupid, tricky one-liners under stress. Sounds gross, doesn't it? Since when did programming become a Twenty20 match of cricket? It is more like test cricket: beautiful and packed with finesse. What these people are looking for are hotshots who have mugged up a few code snippets for weird puzzles. That's not what gauges somebody's true potential. I think the best way to gauge someone's potential is to give them a real-life problem and ask the candidate to find, or rather debate, a solution. That is what I would call engineering, to some extent. Implementing ideas doesn't take too much time; it is the ideas which are hard to come by.

I only wish companies would realise this in their frenzy of hiring code monkeys.

Where is Ubuntu lagging behind?

Disclaimer: I am a kernel developer and I use Ubuntu a lot: at home, at work, and on my central repository and backup server.

This is not a rant, but a genuine flaw I have found in Ubuntu's armour.

The meteoric rise of Ubuntu is phenomenal. It is good for the FOSS community, good for people because it gives them a choice, and good for the average Joe user because it works without much fuss (mostly).

All is cool, Ubuntu works great(tm), no problems.

Ubuntu seems to have got everything right for average users.

But it falls short when it comes to users who do not fall into the average category. Don't get me wrong, I am not whining about stability issues on the desktop. I am talking about people who use, or want to use, Ubuntu on servers or as a development environment. The assumption that Ubuntu follows Debian's legacy of stability turns out to be wrong in some places.

These places are dark corners that very few venture into. I am one of those few, and I found Ubuntu not up to the mark.

Ubuntu's builds turn out to be buggy, and this includes a lot of packages which would be deployed on a server.

Has Ubuntu forgotten that Linux itself (including the kernel and userspace tools) is pretty stable?

I see this as a small part of a bigger picture. Ubuntu does not have the privilege of employing the same calibre of hackers that Red Hat or SUSE has. That does not mean, of course, that Red Hat and SUSE builds are bug-free.

Why do Intel's processors still suck

I am no fan of AMD, VIA or any other processor manufacturer. I use AMD at home and Intel at work. In my experience, AMD's processors may have lost to the reborn, NetBurst-architecture-based Intel processors, but they are still technically good for *me*. Why?

Here is the rant. Intel makes some really shitty processors as far as virtualisation is concerned (read: VT technology in Intel lingo). Intel's architecture allows only protected-mode instructions to be virtualized, not real-mode instructions. This may sound like, “Okay, who cares about real-mode instructions anymore? We are all using 32/64-bit protected-mode instructions these days.” That's right, but thanks to IBM and brethren we still have to rely on real-mode instructions during early boot. Many bootloaders which use real-mode instructions will not be able to do anything inside a guest virtual machine booting under a VMM (e.g. while installing an HVM guest under Xen). The initial part of the setup that switches to protected mode is done with the help of real-mode instructions and thus cannot complete, resulting in a blank screen or an endless wait. Example: you cannot install a guest from a standard Linux ISO if the loader on the ISO uses real-mode instructions (BTW, isolinux does use some, AFAIR).

Fret not, there is a solution: emulate. Yes, emulate every real-mode instruction rather than virtualize it on Intel's architecture. Sounds like suckage? Yes, it does. So some things may work and some may not, depending on whether the instruction has been emulated or not.

FWIW, AMD has a better and cleaner virtualization capability: all instructions can be virtualized irrespective of real/protected/legacy mode.

Installing a Linux HVM guest on Xen

This is for Ubuntu 7.10, but it should work for other distros as well.

Preliminaries

1. Install a Linux distribution (Ubuntu 7.10, codenamed ‘Gutsy Gibbon’) on the machine.
2. Boot into the Linux installed in 1 above.
3. The Xen version used is 3.1.0. Kindly note that xen-3.2-rc2 is available in the Mercurial repos, and it looks like by the end of this month we will have a final release of xen-3.2. Therefore, while downloading Xen, please make sure you are downloading xen-3.1.0 and not xen-3.2 [because I worked with 3.1.0 for this howto :)].
4. A VT/SVM-enabled processor, e.g. the latest Intel Core 2 Duo processors and AMD Athlon X2, Barcelona, Phenom etc. It is worth mentioning that a VT/SVM-enabled processor alone is not enough; you also need a motherboard which supports the VT/SVM extensions of the processor. To check if a processor is capable of supporting HVM guests, do the following (a slightly longer check is sketched after this list):
$ grep -E 'vmx|svm' /proc/cpuinfo

Any output at all means the processor is VT/SVM-capable (vmx is the Intel flag, svm the AMD one).
5. If the BIOS has virtualization disabled, kindly enable it so that Xen can make use of it.
6. Enable virtualization as in 5 above before compiling Xen-3.1, else Xen cannot make use of the VT/SVM extensions.
7. Ubuntu by default runs in non-root mode. Anything which requires root access can be run using sudo, e.g. sudo vi /boot/grub/menu.lst.
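
Expanding a bit on the check in point 4, here is a tiny sketch that reports which hardware virtualization flag, if any, the CPU advertises. It only reads /proc/cpuinfo, so there is nothing Xen-specific in it:

#!/bin/sh
# Rough check for hardware virtualization support (sketch; reads /proc/cpuinfo only).
if grep -qw vmx /proc/cpuinfo; then
    echo "Intel VT (vmx) capable processor"
elif grep -qw svm /proc/cpuinfo; then
    echo "AMD SVM (svm) capable processor"
else
    echo "No vmx/svm flag found; HVM guests will not work on this CPU"
fi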

Actual Procedure –

1. Install all the dependencies required for compiling the Xen-3.1.0 source, using apt-get/synaptic/yum/whatnot (a sample apt-get line is sketched right after this list), viz.
* GCC v3.4 or later
* GNU Make
* GNU Binutils
* Development install of zlib (e.g., zlib-dev)
* Development install of Python v2.3 or later (e.g., python-dev)
* Development install of curses (e.g., libncurses-dev)
* Development install of openssl (e.g., openssl-dev)
* Development install of x11 (e.g. xorg-x11-dev)
* bridge-utils package (/sbin/brctl)
* iproute package (/sbin/ip)
* hotplug or udev

* gettext
* g++
* Development install of libstdc++
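
For reference, on Ubuntu 7.10 a single apt-get line along these lines should pull in most of the list above (gcc, g++, make and binutils come via build-essential). The exact package names are my best guesses for Gutsy and may need adjusting:

$ sudo apt-get install build-essential zlib1g-dev python-dev \
      libncurses5-dev libssl-dev xorg-dev gettext \
      bridge-utils iproute udev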

2. Install xen-ioemu-3.1 from synaptic/apt-get. This package will install the xen-3.1 hypervisor from the Ubuntu repositories into /boot and will also update /boot/grub/menu.lst. Doing this now is important to avoid the problem of configuration overwrites by the Ubuntu repositories: if this step is carried out after the installation of Xen-3.1.0 from source, we are at risk of the Ubuntu package overwriting the configuration files installed by the vanilla xen-3.1.0 source.
3. Compile the Xen-3.1.0 source, making sure that you build two kernels instead of a common one. In other words, build linux-2.6-xen0 and linux-2.6-xenU instead of linux-2.6-xen. This can be done by passing trivial argument(s) to make on the command line, viz. make world KERNELS="linux-2.6-xen0 linux-2.6-xenU" (a rough command sketch covering this and the following steps appears after the step list). Kindly note that in case of confusion, exact help can be found in the README file in the xen-3.1.0-src directory.
4. Also, if the machine is going to use more than 4GB of physical RAM, please make sure that you compile a PAE-enabled Xen. This too can be done by passing a trivial argument to make on the command line, viz. add XEN_TARGET_X86_PAE=y to the make command line given as an example in 3 above. Please note that a PAE-enabled Xen is required only when you are building a 32-bit Xen hypervisor; there is no need to pass this argument for a 64-bit Xen.
5. After compilation, there are 3 main files worth taking note of, viz. linux-2.6.18-xen0, linux-2.6.18-xenU and xen-3.1.0.gz.
– xen-3.1.0.gz is the main Xen hypervisor binary.
– linux-2.6.18-xen0 is the domain0 kernel a.k.a the host kernel.
– linux-2.6.18-xenU is the domainU kernel a.k.a the guest kernel.
6. We can ignore the domainU kernel for this howto.
7. Remove the xen-3.1.0 hypervisor installed by synaptic in step 2 and install the freshly built ELF binaries using make on the command line.
8. Make an initrd image for the domain0 kernel using mkinitramfs. Please do not use mkinitrd, to avoid any surprises.
9. Delete the entries made in menu.lst by the xen-ioemu-3.1 package installed in 2 above.
10. Make a new entry in /boot/grub/menu.lst for the just-installed Xen-3.1, as required (a sample entry is sketched after the step list). The same can be done with $ update-grub, but so far we have been doing everything by hand. Please make sure that you put the entry after the “Other operating systems” line in menu.lst, so that update-grub does not create lots of stale entries of its own later. This will also keep the initial GRUB screen uncluttered and clean.
11. Make the default boot entry in menu.lst the Xen-3.1 entry and not the stock Ubuntu kernel.
12. Reboot, and chances are the booting kernel will crash or hang.
13. If this is the case, reboot into the original Ubuntu kernel from the GRUB boot menu.
14. The main reason(s) for boot failure –
– the SATA disk driver was not built into the Xen domain0 kernel by default.
– the root filesystem type was not built statically into the kernel.
– or, in the unluckiest of cases, we have bleeding-edge hardware which is not supported by the 2.6.18 Linux kernel. [There is a non-trivial solution to that too, which we can ignore for now; more on this later.]
– there can be other reasons too, which can safely be ignored for now.
15. To solve the problems stated in 14 above, a kernel recompilation is required. The kernel to be recompiled is the domain0 kernel, with the SATA controller driver built statically into the kernel; the same should be done for the root filesystem type (viz. ext3, mostly). We can ignore recompiling the domainU kernel here.
16. Make sure that the driver(s) for the NIC(s) are compiled too, either statically into the kernel or as modules.
17. Enable or disable anything else in the kernel configuration menu which may be required for the proper functioning of the machine.
18. Build this configured domain0 kernel.
19. Install this configured domain0 kernel.
20. Optionally rebuild the initrd image (step 8).
21. IMPORTANT: Please do $ sudo mv /lib/tls /lib/tls.original. It is very, very important to do this, else we may get problems running applications that use TLS (thread-local storage) libraries.
22. Reboot into xen-3.1.0.
23. Once in the newly booted domain0, start xend (the Xen daemon) if it is not already running.
24. Create two partitions using fdisk/sfdisk/cfdisk/gparted [a reboot may be required to re-read the new partition table].
25. Modify the guest configuration file(s) to boot from Fedora's ISO image. Also make the partition on which the domainU is to be installed visible to the HVM domainU. Both of these are done in the configuration file (a sample configuration sketch appears after the step list).
26. Once the configuration files are tailored to your needs, start the HVM domain (a Fedora 8 domain here) using the xm tool, e.g. xm create f8.hvm (the xm session is sketched after the step list too).
27. Check the status with xm list. An ‘r’ denotes running, ‘b’ blocked, ‘p’ paused, etc. [the ‘r’ state is the one of importance here].
28. Connect to the Fedora installation using vncviewer 127.0.0.1.
29. A VNC window will pop up, and the Fedora installation on the disk partition can be completed as usual.
30. After completion, power off the Fedora 8 domainU using xm, i.e. xm shutdown f8 (the name of the domainU).
31. Modify the configuration file so that it no longer uses the ISO image but boots from the physical partition where Fedora 8 is now installed.
32. Start the f8 domainU using xm.
33. Connect to it using vncviewer as shown in 28.
34. Boot into the installed Fedora 8 image and have fun.
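
For reference, here is a rough command sketch of the build-and-boot steps above (roughly steps 3, 4, 7, 8 and 21). Treat it as a sketch under this howto's assumptions: the source lives in xen-3.1.0-src, you want a 32-bit PAE build, and the resulting kernel version string is 2.6.18-xen0. Adjust paths and versions to your setup.

$ cd xen-3.1.0-src
$ make world KERNELS="linux-2.6-xen0 linux-2.6-xenU" XEN_TARGET_X86_PAE=y   # steps 3 and 4
# (first remove the Ubuntu-packaged hypervisor from step 2 via synaptic/apt, as step 7 says)
$ sudo make install                                                # step 7: install the freshly built hypervisor and kernels
$ sudo mkinitramfs -o /boot/initrd.img-2.6.18-xen0 2.6.18-xen0     # step 8
$ sudo mv /lib/tls /lib/tls.original                               # step 21
$ sudo reboot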
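
And this is the kind of menu.lst entry step 10 talks about, assuming GRUB legacy with /boot on the root partition, root on /dev/sda1 and the 2.6.18-xen0 kernel name; change the device names and version strings to match your machine.

title   Xen 3.1.0 / Ubuntu 7.10 (2.6.18-xen0)
root    (hd0,0)
kernel  /boot/xen-3.1.0.gz
module  /boot/vmlinuz-2.6.18-xen0 root=/dev/sda1 ro console=tty0
module  /boot/initrd.img-2.6.18-xen0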
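
Here is a minimal sketch of what the f8.hvm configuration from step 25 might look like. The memory size, partition (/dev/sda3), ISO path and bridge name are assumptions for illustration; only the general shape matters.

# f8.hvm -- minimal HVM guest config sketch for Xen 3.1
kernel       = "/usr/lib/xen/boot/hvmloader"
builder      = 'hvm'
device_model = '/usr/lib/xen/bin/qemu-dm'
name         = "f8"
memory       = 512
vif          = [ 'type=ioemu, bridge=xenbr0' ]
disk         = [ 'phy:/dev/sda3,hda,w',
                 'file:/home/user/Fedora-8-i386-DVD.iso,hdc:cdrom,r' ]
boot         = "d"          # "d" = boot from the CD-ROM image for the install (step 25)
vnc          = 1            # export the guest console over VNC (steps 28/29)
vnclisten    = "127.0.0.1"
sdl          = 0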
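
And the xm session for steps 26 onwards would look roughly like this, assuming the config sketch above:

$ sudo xm create f8.hvm      # step 26: start the HVM guest
$ sudo xm list               # step 27: wait for the 'r' (running) state
$ vncviewer 127.0.0.1        # step 28: attach to the guest console and install Fedora
$ sudo xm shutdown f8        # step 30: power the guest off once the install finishes
# step 31: edit f8.hvm to boot from the partition (boot = "c", drop the cdrom line), then
$ sudo xm create f8.hvm      # step 32
$ vncviewer 127.0.0.1        # step 33: boot into the installed Fedora 8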

Using Gmail with claws-mail

claws-mail is certainly worth looking at if you do not want to wander into the frenzy of procmail, fetchmail, ssmtp and mutt.

Pretty lightweight, fast, with nice support for a *lot* of features. The plugins are really good stuff and give a lot of flexibility. Freedom from the HTML disease is soothing.

The good thing is that it is snappy too, though it looks a little slow while fetching and especially while sending messages. But overall it is a good GUI mail client, better than the bloated Evolution and Thunderbird.
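
For anyone setting it up, the standard Gmail server settings (these are Gmail facts rather than anything claws-mail specific) are roughly:

IMAP : imap.gmail.com, port 993, SSL (enable IMAP in Gmail's web settings first)
POP3 : pop.gmail.com, port 995, SSL
SMTP : smtp.gmail.com, port 587 with STARTTLS (or 465 with SSL)
User : your full Gmail address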

Git Vs Mercurial (Updated)

I have used Git at work and while working on the Linux kernel for more than a year now. Apart from this, my work gives me an opportunity to work with Git's close cousin, Mercurial, a.k.a. Hg.

My experiences with Git have been good, not at all bad, I must admit. Initially the learning curve was huge for me, I must confess. But once I started getting used to the whole concept of distributed SCMs, it felt nice, easy and pretty logical to work in a distributed mode. Though I insisted on using Git at work (yes, that's correct, at work) for internal projects, I was faced with the challenge of producing fewer muck-ups and more smiling faces around. Sadly, the reality-bite count disappointed me. Most of the developers were spellbound by what Git was doing and, more importantly, how. Well, others were really too dumb to ever understand Git. Looks like Git needs a minimum IQ level to understand its working principle and usage. Anyway, once it was through, at least some of the developers were impressed with Git. I must also admit higher management was skeptical and less than supportive… aah, the usual corporate stuff.

I am enjoying using Git every day. I am hooked on cheap branches, excellent merging capability, blazing speed and the distributed nature. Some things which I felt were really commendable were git-revert and the hard-reset feature. Wow! I was able to undo a bad commit which showed up many commits after a faulty merge by the team. Branching is supercool and simple: just a git-branch state-23-7-07 abe56ff12…, and you have a pristine branch from the commit with id abe56ff12…. Extremely handy. Seriously, as Linus mentioned somewhere, Git is more of a filesystem than an SCM tool.
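
To make the branching and undo bits concrete, here is a quick sketch of the commands I am talking about; the branch name comes from the example above and the commit ids are placeholders.

$ git branch state-23-7-07 abe56ff12   # a pristine branch starting at that commit
$ git checkout state-23-7-07           # switch to it
$ git revert <bad-commit>              # undo a bad commit by adding a reverting commit
$ git reset --hard <good-commit>       # or drop everything after a known-good commit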

Mercurial has been a pleasant surprise, though I had my share of problems using Hg after having used Git for pretty much everything I wrote at work and at home.

Okay, here is the updated entry. Short answer: Git is better than Mercurial, hands down. The only place where Mercurial wins is ease of use for newcomers. But once you get over this happy feeling of easiness and start asking for more from Mercurial, it falls flat on its face. With all apologies to Matt Mackall and the wonderful Mercurial team, I still find Mercurial to be a little behind Git. No doubt it is easy and fast, but Git is more robust and rather mind-boggling when it comes to saving a developer's life.

I hope I am wrong in my interpretation of Mercurial, but Mercurial's named-branch support within one repository sucks big time. Perhaps I am doing it wrong because I come from Git land(?). But it is a little clumsy to work with named branches in a single Mercurial repository, unlike Git, where it is perfectly nice. hg revert disappointed me when working with multiple named branches in the same directory.
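
For comparison, this is roughly what the named-branch dance looks like on the Mercurial side; branch and revision names here are placeholders.

$ hg branch state-23-7-07            # mark the working dir as a new named branch
$ hg commit -m "open the branch"     # the branch exists once this commit lands
$ hg update default                  # switch back to the default branch, same directory
$ hg revert --all -r default         # revert files to another branch's revision; the part that kept biting me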

Well, what I still like about Mercurial, apart from the ease of use and the usually cited stuff, is hg export. It is nice and quite handy for generating a patch for my changesets. Hg is indeed cool software which I like and use, but I still think Hg can learn a thing or two from Git, and vice versa.
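
A typical hg export run, with a made-up revision number for illustration:

$ hg export -o my-change.patch 1234   # write changeset 1234 as a patch, commit message included
$ hg export tip > tip.patch           # or dump the latest changeset to stdout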

GPL Vs BSD License

Disclaimer: I do not follow the religion (read: Linux or BSD). I use Linux as an operating system by choice, not as a religion.

A constant rift between BSD proponents and GPL proponents shows up almost every second week on Slashdot, OSNews and wherever not (even on freenode channels). As someone who has used both GPL- and BSD-licensed software for more than 4 years, I guess I am suitable enough to give a clear view to a newcomer.

The basic objection: BSD is more free than GPL, and vice versa. Frankly, that's ridiculous, because it depends on how you define “free”. If you want the software itself to remain free, the GPL is better than BSD. If you want the use of the software to be free, BSD is better. So you see, the whole point that licence X is better than licence Y is moot here, unless you define the term “free”, or for that matter any term which makes one license better than the other.

By the same reasoning, a proprietary license is better than GPL/BSD/MIT/Apache/XYZ in terms of “closedness” and “trade secrets”. The whole point boils down to the fundamental question of how you want a license to be employed.

Some people who write code get confused: under which license should I publish my code, GPL or BSD? Well, the answer is simple. If you want it to be usable by everyone and do not want your license to be a problem when it is used in a closed-source environment, choose BSD. But if you want the code to remain free, that is, if any modifications to it or anything linked against it should also be made available the way you made it available to the world, choose the GPL.

I have primarily given my views on the GPL and BSD, so I am purposely not including the LGPL, AGPL, MIT, Apache License, etc.

As a user, I never cared what the license was as long as the application was available to me and performed well. But once you start modifying the code, you ought to take licensing issues into consideration.

So, next time you see a troll or a heated argument between GPL and BSD licensing proponents, you know what to do.