This is more of a critique than a commentary!
Getting an older version of a piece of software's source code to compile, link and run on a new Linux system is excruciating. Those who have dealt with it know all too well!
Any Linux binary should be runnable on every Linux distro of any version. That is “the requirement”! It is not currently possible, and we live with that. But it bites us when a product goes through system version upgrades, and we end up maintaining parallel lines of versions of libraries, compilers and linkers, just to keep the old stuff running.
Linux distros have seriously underestimated the gravity of portability and packaging of executable binaries. No, I am not referring to manageability through apt-get install or yum install; I am talking at a lower level, where those tools are of no aid. It is a horrendous task to get an older version of software running once it is no longer maintained as part of the distro's packaging. More often than not, an older version of the source cannot even be built with the latest gcc. Between compiling, hunting down the right shared libraries and linking against them, one might end up creating a new private distro!
Products ship binaries for Mac and Windows that will run for years without any portability issues, but not for Linux, because on Linux it is a drudgery! The “portability” here is not about processor architecture; it is about the hundred billion versions of shared libraries from which each distro picks at its own discretion. Some distros even reject software that is statically compiled. That may be good for keeping the system lean, but it inevitably ties a binary to a specific version of each shared library, which, as time passes, starts to feel like an outrageous requirement. Even glibc is no exception in breaking binary compatibility. The glibc community does approve of changes that break old binaries for the sake of a better ABI or improved standards conformance. That may be well within the guidelines. But going by the book to alter the behavior of a library that other people have relied upon for years is not really justifiable.
On the other hand, I have great regard for the way the Linux kernel community religiously preserves binary compatibility. The practice of breaking userspace to fix age-old bugs in the kernel is shunned. Linux is about pragmatism, not idealism; ideal things never materialize.
“If it is a bug people rely on, it’s not a bug, it’s a feature.” — Linus Torvalds
I believe this philosophy has its roots in why Linux came into existence in the first place, and why Linus wrote and published the very first version of the kernel. It was not to prove that a certain design or implementation was better than another, that monolithic kernels could beat microkernels at performance, or that UNIX systems were superior. The intention was to solve his own problem of not having access to a cheap or free OS, and, maybe because other people faced the same problem, to save them the effort by sharing it. Linux has been about solving problems rather than being righteous: trying to be pragmatic, not ideal!
However, this philosophy seems to take a hit when it comes to portable binaries across distros, or even across different versions of the same distro. As more closed-source software from private companies gets ported to Linux, there will be a stronger inclination to ship statically linked binaries; the Go toolchain, for example, already produces statically linked binaries by default. But disk and memory are not free, so this will not always be favorable for systems. Another way to cope is through the cloud: Virtual Desktop Infrastructure (VDI) can shield us from the problem by executing the binaries on centrally managed remote systems. However, that merely treats the symptom rather than curing the real ailment, and VDI and other cloud services bring constraints of their own.
One compiled binary should run on all Linux distros. Building binaries the right way and streamlining portability across versions is the onus of the distros, and it should not be ignored.