Yet another GNOME/GDM 3 idiocy: dropping ~/.dmrc support

I am so sorry I have to blog about GNOME 3 once again. But this time, as a distributor, I feel really upset.

As long as we had GNOME 2 in our repositories, we stuck to gdm-2.20, the last GDM release providing complete theming support and all the bells and whistles a complete Login Manager should have.

Unfortunately, we had to move to the somewhat incomplete thing GNOME 3 is, and the same happened to GDM. No more XML-based theming support (unlike the current KDM 4.7), no more fancy features and, worst of all, no more ~/.dmrc handling.

For those who don’t know what .dmrc is: in short, it’s the place where the default xsession can be declared. This is really important at provisioning/boot time, especially when you want to load non-standard xsessions (for instance XBMC, in our Media Center boot profile).
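To make the mechanism concrete: ~/.dmrc is a tiny INI-style file. Here is a sketch of reading the default session out of one with Python’s configparser (the file contents are inlined for clarity; the XBMC session name is just an example):

```python
import configparser

# A typical ~/.dmrc as written by most display managers.
# "Session" names a .desktop file under /usr/share/xsessions.
DMRC_SAMPLE = """\
[Desktop]
Session=xbmc
Language=en_US.utf8
"""

parser = configparser.ConfigParser()
parser.read_string(DMRC_SAMPLE)
session = parser.get("Desktop", "Session")
print(session)  # -> xbmc
```

A display manager reads exactly this key at login time to pick which xsession to start.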

Funnily enough, this is still the suggested method at GNOME. And it is a de-facto standard: any DM out there supports ~/.dmrc.

But it turns out GNOME developers are smarter than everybody else in deciding that standards (even de-facto ones) just suck and are for noobs. In fact, they happily decided to “fuck the user” (yeah, it’s a verb) and implement a GNOME-centric, D-Bus-based service (oh MY GOD!) called accountsservice, just to mess with DM default user settings, like the language and the default xsession. How a GNOME-related tool, barely considered by the rest of the world, got hosted where it is, is a mystery.

Anyway, you may think “ok, let’s use this thing then”. And here comes the even funnier part. There is no tool in gdm nor in accountsservice to do that. How come? They moved a feature somewhere else, dropped compatibility and killed its usefulness completely? Hello? Is there anybody at GNOME?

Creating a custom configuration file doesn’t work, due to a bug in GDM/AccountsService which resets the default xsession at every boot. Doing the whole thing through D-Bus at boot time is just ridiculous and hurts performance badly. Now you may wonder how distros tried to cope with it. The answer is right there in the open: GNOME Bugzilla. Oh look, it’s yet another bug rotting since 2009.
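For reference, this is roughly what the D-Bus route involves. accountsservice exposes one object per user on the system bus, with a SetXSession method on the org.freedesktop.Accounts.User interface. The sketch below only assembles the call (actually invoking it needs a running system bus, a D-Bus binding and polkit authorization, which is exactly the boot-time overhead I’m complaining about):

```python
import pwd

def xsession_call(username, session):
    """Assemble the D-Bus call accountsservice expects for setting
    a user's default xsession (pure string building, no bus I/O)."""
    uid = pwd.getpwnam(username).pw_uid
    return (
        "org.freedesktop.Accounts",                # bus name
        "/org/freedesktop/Accounts/User%d" % uid,  # per-user object path
        "org.freedesktop.Accounts.User",           # interface
        "SetXSession",                             # method
        (session,),                                # arguments
    )

print(xsession_call("root", "xbmc"))
```

With dbus-python you would then get_object() on that bus name and path and invoke SetXSession. Compare that machinery with writing two lines into ~/.dmrc.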

WTF, GNOME guys? Are you done breaking stuff? (Don’t get me started on notification-daemon…) Sick of all this.


Entropy API tutorial #2: calculating dependencies

After seeing how to match a package inside Entropy Client repositories, let’s see how to calculate install dependencies in order to prepare an installation on the live system. As usual, Python code follows.
Let’s say we want to install two packages: media-sound/audacious and app-office/libreoffice. How do we build up an install schedule?
First of all we need to match the two packages against the available repositories and build up a list of package matches.

from entropy.client.interfaces import Client

entropy_client = Client()
package_matches = []
for dep in ("media-sound/audacious", "app-office/libreoffice"):
    pkg_id, repo_id = entropy_client.atom_match(dep)
    if pkg_id != -1:
        package_matches.append((pkg_id, repo_id))

Now that we filled package_matches, we need to ask Entropy to calculate the install queue (or schedule):

install_queue, conflicts_queue, exit_st = \
    entropy_client.get_install_queue(package_matches, False, False)
if exit_st == 0:
    print("This is the queue:")
    for pkg_id, repo_id in install_queue:
        repo = entropy_client.open_repository(repo_id)
        print(repo.retrieveAtom(pkg_id))
elif exit_st == -2:
    print("conflicts detected in the queue:")
    print(", ".join(install_queue))
elif exit_st == -3:
    print("dependencies not found:")
    print(", ".join(install_queue))
else:
    print("unknown error")

“conflicts_queue” is a list of package identifiers, coming from the installed packages repository, that should be removed. It is purely informative, since anything listed there will be removed automatically upon package install, during the execution of the schedule (not yet covered).
The core code is just about 3 lines; the rest is error checking. I’ll show you how to actually install a schedule in the next “quick&dirty” tutorial.

Entropy API tutorial #1: package matching

Entropy is written in pure Python. So here I’m going to introduce a Python script that uses the Entropy Client API to match a package across the available repositories. The Entropy architecture is divided into three main chunks: Client, Server and Services. You can have a look at the full Entropy API here. You can type the following commands directly into a Python shell (open a terminal and type “python”) or save them all into a .py file.

from entropy.client.interfaces import Client

entropy_client = Client()
# replace the "sys-libs/zlib" string with your target package
pkg_id, repository_id = entropy_client.atom_match("sys-libs/zlib")
if pkg_id != -1:
    repo = entropy_client.open_repository(repository_id)

So simple.

Amazon EC2, kernels and S3-backed images

Some time ago I started playing with Amazon EC2, trying to get Sabayon running there as part of the process of learning how to make EC2-ready images. It ended up being relatively easy, and the required changes have been integrated into our beloved distribution and upstreamed to Gentoo (genkernel.git), thanks also to Rich and his guidance through the platform via G+.

So, for S3-backed AMIs (Amazon Machine Images), what is required is a Xen domU kernel inside a simple ext3/4 (or another Amazon-supported) filesystem image. Sounds easy, doesn’t it?

Since I wanted to provide pre-made kernel binaries through Entropy, the very first thing to do was to make genkernel able to compile Xen-based kernels. Due to build system interface changes, you cannot just run “make bzImage”: that target doesn’t exist for Xen kernels. So I jumped in and fixed genkernel so that the kernel image build target can be overridden through --kernel-target=.

Once that was fixed, I moved on to the kernel sources and binaries ebuilds, and produced sys-kernel/xen-dom0-sources, sys-kernel/xen-domU-sources, sys-kernel/linux-xen-dom0 and sys-kernel/linux-xen-domU: respectively, the Xen dom0/domU kernel sources (shipping a kernel configuration that works out of the box on Amazon EC2, just the domU one of course) and the dom0/domU kernel binaries. The same are available on our Portage overlay in ebuild form (Gentoo users, woot) and in the Sabayon Entropy repositories as install-and-go packages.

The final step was to write a molecule for Molecule.
Molecule is an image build tool I wrote in 2009 (sick of dev-util/catalyst!) that makes it possible to build different kinds of images (ISO, tarballs, filesystem images, etc.; the whole interface is pluggable) through a specification file (the molecule). I am proud to say that Molecule has played a key role in Sabayon’s recent success by making the problem of building ISO images as easy as spawning a command. Back on topic, that’s what I did here and here.

I haven’t hooked the EC2 molecules into our automated nightly image build system yet. But if you are interested in cooking one, the procedure is pretty simple: just clone molecules.git into /sabayon, install dev-util/molecule (it’s in both Portage and Entropy), download “Sabayon_Linux_SpinBase_DAILY_amd64.iso” from one of our mirrors, place it into /sabayon/iso, and type (for amd64):

# molecule /sabayon/molecules/sabayon-amd64-spinbase-amazon-ami-template.spec

This will generate an S3-backed AMI for Amazon EC2. You will have to pick the right kernel AKI (just take the one for x86_64 and S3 and you’re set) and use the EC2 AMI tools to upload it to Amazon S3.

If you browse through all the available molecules, you will realize that you could virtually build all the Sabayon images yourself, just like we do on our ISO build server. Ain’t that something?

LXDM: the wannabe Login Manager

I love the idea behind LXDM: providing a lightweight, NOT freakishly bloated (in terms of dependencies, i.e. it doesn’t pull in half of GNOME) Login Manager.
If only it worked properly. Until yesterday night, at least.

While we all know that LXDM (the LXDE Login Manager) is in its early stages of development (kudos to its devs), that doesn’t mean the XDG specifications don’t deserve proper attention, and implementation.

Until yesterday, in Sabayon land, LXDM wasn’t able to load Desktop Environments correctly, for this reason (in lxdm.c): lxdm_do_login() is in charge of reading the user configuration ($HOME/.dmrc or whatever) and fork()ing the DE loader away.

Little problem: .dmrc is a tiny file inside your home directory that contains the last selected DE session. This is a “vital” file for your favourite Login Manager (like LXDM). For Fluxbox, the file contains something like this:

[Desktop]
Session=fluxbox

This tells the login manager to use the session file related to fluxbox, which in turn must translate into reading the /usr/share/xsessions/fluxbox.desktop configuration, in particular its Exec= value, which evaluates to /usr/bin/startfluxbox for Fluxbox.
The problem is that lxdm_get_session_info(), called from inside lxdm_do_login(), is handed (again, for fluxbox) the string “fluxbox”. So far so good, you may think: this is what is read from .dmrc. Nope. Here is the problem.

lxdm_get_session_info() checks whether char *session ends with “.desktop”, which it doesn’t (perhaps the same function is used to parse full .desktop file paths, anyway…). In our case the “else” branch is taken and char *exec doesn’t get set (it stays NULL). So the fallback condition at the end of the function kicks in, “if (name && !exec)”, and of course neither the Fluxbox nor the E17 DE is listed there (yeah, that stuff is hardcoded). The result is that “fluxbox” is used as the char *exec value. Which is totally wrong, since Fluxbox must be started using (a) the content of its .desktop file in the xsessions dir, that is (b) “startfluxbox”, otherwise ~/.fluxbox/startup is not executed (among other things).

So, the missing piece here is that, whenever char *session is passed as just a “session name”, LXDM should look for a matching .desktop file inside /usr/share/xsessions and read the Exec= and Name= values from there.
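The fix really boils down to a .desktop lookup. Here is a rough Python equivalent of that logic (the real LXDM patch does this in C against /usr/share/xsessions/<session>.desktop; the sample file content is inlined here):

```python
def parse_xsession_desktop(text):
    """Extract the Name= and Exec= keys from an xsessions .desktop file."""
    name = exec_cmd = None
    for line in text.splitlines():
        if line.startswith("Name=") and name is None:
            name = line[len("Name="):]
        elif line.startswith("Exec=") and exec_cmd is None:
            exec_cmd = line[len("Exec="):]
    return name, exec_cmd

# What /usr/share/xsessions/fluxbox.desktop typically looks like.
FLUXBOX_DESKTOP = """\
[Desktop Entry]
Name=Fluxbox
Exec=startfluxbox
Type=XSession
"""

print(parse_xsession_desktop(FLUXBOX_DESKTOP))  # -> ('Fluxbox', 'startfluxbox')
```

With this lookup in place, “fluxbox” from .dmrc correctly resolves to startfluxbox instead of being executed verbatim.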

This patch did the trick. Go LXDM, go!

Spindown your hard drive when hdparm doesn’t work

On several Western Digital drives, hdparm doesn’t play so well. That’s why the hd-idle project was created. But hey, that’s C, and not very flexible actually. Have a look at my Python rewrite here. It uses the sg_start tool from sg3_utils and has been successfully tested on SATA (over the SCSI layer) devices. I don’t know whether it works on pure SCSI devices (given that they might not come back from spindown automatically, IIRC).
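The core idea behind hd-idle (and my rewrite) is simple: watch per-disk I/O counters and spin the disk down once they stop changing for a while. A minimal sketch of the detection half, assuming Linux’s /proc/diskstats layout (the spindown itself would then shell out to something like sg_start --stop /dev/sdX):

```python
def read_io_counters(diskstats_text, device):
    """Return (sectors_read, sectors_written) for a device from
    /proc/diskstats content (1-indexed fields 6 and 10 of each line).
    Poll this periodically: if the pair hasn't changed since the last
    poll, the disk has been idle and can be spun down."""
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) >= 11 and fields[2] == device:
            return int(fields[5]), int(fields[9])
    return None

# A sample /proc/diskstats line (values are made up for illustration).
SAMPLE = "   8       0 sda 1200 34 45678 900 210 12 3456 150 0 800 1050"
print(read_io_counters(SAMPLE, "sda"))  # -> (45678, 3456)
```

In the real script you would read /proc/diskstats in a loop, compare snapshots, and only issue the sg_start stop command after a configurable idle timeout.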

Feel free to deploy it on your home server, it really saves a lot of money on the electricity bill.

Improving the Sulfur, Entropy experience

During the past two weeks, I spent several hours optimizing the Sulfur architecture without going for a complete “clean room” rewrite, yet. The outcome: Sulfur is now about 40% faster. Several “critical” code regions went through heavy (and dirty) optimization, and Sulfur can now be used without hiccups (or a blocking UI). There are still a few places that need optimizing, but it’s already good enough.

Sulfur inherited its code architecture from YumEx, which ended up being quite sub-optimal. I’ll probably wait for GTK+ 3 before reworking the codebase (the timing would actually be very good). On a more general note, I am very happy to tell you that many other code paths have undergone the same treatment, and general Entropy (equo, package installation) speed is now excellent (and there’s still room).

Side note: Gentoo 11.0 has been released today!

Sabayon “App Store”, abbr: Entropy Store. Ain’t that something?

So, ladies (and gentlemen, but mostly ladies): after four years of hard work on Entropy (the Sabayon Package Manager), I’ve been able to deploy what I had in mind even before the Apple AppStore, Google Market, etc. And that’s not all (anyway).

What do we have here, then? Friday night I had the magic intuition: how can I provide click-and-install “executables” for a world as fragmented as the FLOSS one? On a rolling distro, even? Where the dependency graph complexity could eat you in just one bite? The magic words are “metadata embedding”. I won’t discuss the internal details here (have a look at the Entropy git repo log and you’ll get it), but rather focus on the WOW part. I’ve been able to pack the metadata of 300-400 packages (with a flattened dep graph) into a very small space (~1 MB). The result? The ability to install Sabayon applications (for Sabayon-based systems only, of course) via the online “Entropy Store”.
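“Flattened dep graph” just means the dependency graph is pre-resolved into a linear install order that ships along with the package reference. A toy sketch of that flattening step (hypothetical package names, not the actual Entropy wire format), using a post-order walk so every dependency lands before its dependents:

```python
def flatten_deps(graph, root):
    """Return an install order where every dependency precedes
    its dependent (depth-first post-order traversal)."""
    order, seen = [], set()

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in graph.get(pkg, ()):
            visit(dep)
        order.append(pkg)

    visit(root)
    return order

# Hypothetical slice of a dependency graph.
GRAPH = {
    "media-sound/audacious": ["media-libs/libmad", "dev-libs/glib"],
    "media-libs/libmad": ["dev-libs/glib"],
}
print(flatten_deps(GRAPH, "media-sound/audacious"))
# -> ['dev-libs/glib', 'media-libs/libmad', 'media-sound/audacious']
```

Embedding the already-flattened list is what lets a web click turn into an install without the client having to recompute the whole graph first.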

Yeah, clicky links called “Install (beta)”, on a rolling Linux distro. Ain’t that something?

Next step: figuring out whether it’s possible to poll the local Entropy installed packages repository from a web browser. But that turns into a security matter, plus JavaScript limitations (I would like to avoid using or writing any Flash/Java applet).

Next milestone (after the 5.5 XFCE, LXDE, E17, OpenVZ and ServerBase releases, of course): a Web-based Sabayon release (ISO image) generator. How about that?

AppStream, I did it in 2008

It’s the same old story. The stupid one has an intuition; the smart one comes along and steals the idea. The stupid one keeps struggling to make a living, while the smart one wins prizes and zillions of $ (best case).

In this case, the stupid is me, and the smartest guys are these.

Exactly four years ago, even before the Apple AppStore, Android Market and Nokia Ovi, I had the intuition that applications should be brought to users along with “content” generated directly by other users. I called this “the Web 2.0 of software”. Thanks to NLnet funds, I was able to implement some of my ideas and wrote the packages website and its “UGC” (user-generated content) part. But when I tried to push the idea out into the wild, I only got cold feedback.

This happens when you’re a nobody, cannot count on offshore money, and are probably too young.

That’s life, they say. We will support AppStream anyway, for sure.

The ~200 lines patch that does wonders? We have it

Given the hype generated by this Phoronix article, I decided the bloody patch (:D) was worth a try. An updated 2.6.36 kernel package is now available in our testing (sabayon-limbo) repository and will be moved to mainline within a dozen hours.

It looks like we’re the first to provide these responsiveness patches for production usage. To learn how to use the sabayon-limbo repository, please read our wiki.

Then install the sys-kernel/linux-sabayon-2.6.36 package and reboot into the new kernel.

Attention: make sure to update your video drivers and other external modules you may have on your system before rebooting. DO NOT try this if you don’t know what you’re doing. The risk of turning your hard drive into a paperweight is close to 100%.

hello, twitter

