Entropy API Tutorial #3: installing packages

Building on my previous Entropy API Tutorial #2, which covered how to calculate package dependencies through the Entropy Python API, let’s move forward and see how to actually install packages from a repository.

Let’s say we have a list called “package_matches” containing all the package matches we want to install (dependencies have been expanded already). At this point we’re ready to ask Entropy to fetch the packages from mirrors and merge them into the live filesystem. The following example is kept very simple, so it doesn’t cover all the available functionality, such as multi-fetching or fetching of package source code.

import os
from entropy.client.interfaces import Client

entropy_client = Client()
# this is our package_matches list (fake content)
package_matches = [(12345, 'sabayon-weekly'),
    (12343, 'sabayon-weekly')]

# first fetch, then merge
exit_st = os.EX_OK
for pkg_match in package_matches:
    pkg = entropy_client.Package()
    pkg.prepare(pkg_match, "fetch", {})
    rc = pkg.run()
    if rc != os.EX_OK:
        exit_st = rc
        break
if exit_st != os.EX_OK:
    entropy_client.shutdown()
    raise SystemExit(exit_st)

# now merge into our live filesystem
for pkg_match in package_matches:
    pkg = entropy_client.Package()
    pkg.prepare(pkg_match, "install", {})
    rc = pkg.run()
    if rc != os.EX_OK:
        exit_st = rc
        break

entropy_client.shutdown()
raise SystemExit(exit_st)

Some curiosities. For http{s,}://, ftp{s,}:// and file://, Entropy uses Python urllib2 with a custom User-Agent. For rsync://, Entropy uses the external rsync executable.
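Just to illustrate the first point, a fetch with a custom User-Agent through urllib2 boils down to something like the following minimal sketch (the URL and User-Agent string are made up for illustration; this is not the actual Entropy fetcher code):

import urllib2

# hypothetical package URL and User-Agent string, for illustration only
request = urllib2.Request(
    "http://mirror.example.com/packages/foo.tbz2",
    headers={"User-Agent": "Entropy/1.0"})
response = urllib2.urlopen(request)
with open("foo.tbz2", "wb") as pkg_f:
    # write the downloaded package to disk
    pkg_f.write(response.read())
response.close()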
As some of you pointed out, exit statuses are a legacy of the early Entropy codebase and are going to be replaced by exceptions in the future. The Entropy Client refactoring is going to happen in a few months. But for now, enjoy your hacking ;-).

In the next tutorial, before starting to show you the power of Entropy Services, I’ll continue my excursus through Entropy Client, focusing on metadata retrieval from repositories.

Quick and dirty: why sudo is bad for security

I always hated sudo. It’s like trying to fix a tyre with chewing gum: close your eyes and hope it works.
Here is just a quick and dirty, straightforward explanation of why it is bad for your systems’ security.
Let’s say you have a password-less ssh keypair attached to a remote user that is allowed to run anything as root by prefixing commands with “sudo”. *cough* This is the default setup for any Amazon EC2 instance running Amazon Linux AMIs (and also Ubuntu AMIs I guess, and perhaps even Fedora ones?).
What happens if your keypair slips out, gets leaked somehow, or somebody steals it from your hard drive? The result is simple: the attacker automatically gains root access anywhere you have the above setup.
That’s quite dangerous, especially if you’re paranoid. Funny enough, many distros are forcing users to use sudo against their will, in a password-less setup.
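To be clear, this is the kind of sudoers entry I’m talking about (“ec2-user” is just the usual Amazon Linux example name):

ec2-user ALL=(ALL) NOPASSWD:ALL

Drop the NOPASSWD: part and the attacker at least has to steal one more secret before owning your box.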

Of course, this also happens in the unlikely (ahaha) case where your user account gets compromised. Having two levels of passwords is always better than one.

sudo FTL!

Yet another GNOME/GDM 3 idiocy: dropping ~/.dmrc support

I am so sorry I have to blog about GNOME3 once again. But this time, as a distributor, I feel really upset.

As long as we had GNOME2 in our repositories, we stuck with gdm-2.20, the last GDM release providing complete theming support and all the bells and whistles a complete Login Manager should have.

Unfortunately, we had to move to the somewhat incomplete thing GNOME3 is, and the same happened with GDM. No more XML-based theming support (unlike the current KDM 4.7), no more fancy features, but also no more ~/.dmrc handling.

For those who don’t know what ~/.dmrc is: in short, it’s a file where the default xsession can be declared. This is really important at provisioning/boot time, especially when you want to load non-standard xsessions (for instance XBMC, in our Media Center boot profile).
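A minimal ~/.dmrc looks like this (using our XBMC session as the example; the Session value is the name of the session’s .desktop file):

[Desktop]
Session=xbmc
Language=en_US.utf8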

Funny enough, this is still the suggested method at GNOME. And this is a de-facto standard. Any DM out there supports ~/.dmrc.

But it turns out GNOME developers are smarter than anybody else in deciding that standards (even de-facto ones) just suck and are for noobs. In fact, they happily decided to “fuck the user is stupid” (yeah, it’s a verb) and implement a GNOME-centric dbus-based service (oh MY GOD!) called accountsservice, just to mess with DM default user settings, like the language and the default XSession. How a GNOME-related tool, not really considered by the rest of the world, can be hosted at freedesktop.org is a mystery.

Anyway, you may think “ok, let’s use this thing then”. And here comes the even funnier part: there is no tool in gdm nor in accountsservice to do that. How come? They moved a functionality somewhere else, dropped compatibility and killed its usefulness completely? Hello? Is there anybody at GNOME?

Creating a custom configuration file doesn’t work either, due to a bug in GDM/AccountsService which always resets the default xsession at every boot. Doing the whole thing through dbus at boot time is just ridiculous and kills performance badly. Now you may wonder how distros have tried to cope with it. The answer is right there in the open, on GNOME Bugzilla. Oh look, it’s yet another rotting bug from 2009.
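For the record, poking AccountsService over D-Bus looks more or less like this. Consider it a hedged sketch: the object path depends on the user’s UID, SetXSession is the method name as far as I can tell, and “xbmc” is just our usual example session:

# dbus-send --system --print-reply --dest=org.freedesktop.Accounts \
    /org/freedesktop/Accounts/User1000 \
    org.freedesktop.Accounts.User.SetXSession string:xbmc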

WTF GNOME guys? Are you done breaking stuff? (don’t get me started with notification-daemon…). Sick of all this.

Entropy API tutorial #2: calculating dependencies

After having introduced how to match a package inside Entropy Client repositories, let’s see how to calculate install dependencies in order to prepare the installation on the live system. As usual, Python code will follow.
Let’s say we want to install two packages: media-sound/audacious and app-office/libreoffice. How do we build up an install schedule?
First of all, we need to match the two packages against the available repositories and build up a list of package matches.

from entropy.client.interfaces import Client
entropy_client = Client()
package_matches = []
for dep in ("media-sound/audacious", "app-office/libreoffice"):
    pkg_id, repo_id = entropy_client.atom_match(dep)
    if pkg_id != -1:
        package_matches.append((pkg_id, repo_id))

Now that we filled package_matches, we need to ask Entropy to calculate the install queue (or schedule):

install_queue, conflicts_queue, exit_st = \
    entropy_client.get_install_queue(package_matches, False, False)
if exit_st == 0:
    print("This is the queue:")
    for pkg_id, repo_id in install_queue:
        repo = entropy_client.open_repository(repo_id)
        print(repo.retrieveAtom(pkg_id))
elif exit_st == -2:
    print("conflicts detected in the queue:")
    # str() for safety: the queue may not contain plain strings here
    print(", ".join(str(x) for x in install_queue))
elif exit_st == -3:
    print("dependencies not found:")
    print(", ".join(str(x) for x in install_queue))
else:
    print("unknown error")
entropy_client.shutdown()

“conflicts_queue” is a list of package identifiers, coming from the installed packages repository, that should be removed. It is purely informative, since anything listed there will be removed automatically upon package install, during the execution of the schedule (not covered yet).
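For completeness, here is how you could print what “conflicts_queue” contains. I am assuming the installed packages repository is reachable through entropy_client.installed_repository(); double-check that against the API reference:

# hedged sketch: resolve conflicts_queue identifiers to package atoms
# (installed_repository() is assumed here, verify it against the API docs)
inst_repo = entropy_client.installed_repository()
for pkg_id in conflicts_queue:
    print(inst_repo.retrieveAtom(pkg_id))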
The core code is just about 3 lines; the rest is error checking. I’ll show you how to actually install a schedule in the next “quick&dirty” tutorial.

Entropy API tutorial #1: package matching

Entropy is written in pure Python, so here I’m going to introduce a Python script that uses the Entropy Client API to match a package through the available repositories. The Entropy architecture is divided into three main chunks: Client, Server and Services. You can have a look at the full Entropy API here. You can type the following commands into a Python shell directly (open a terminal and type “python”) or save them all into a .py file.

from entropy.client.interfaces import Client
entropy_client = Client()
# replace the "sys-libs/zlib" string with your target package
pkg_id, repository_id = entropy_client.atom_match("sys-libs/zlib")
if pkg_id != -1:
    repo = entropy_client.open_repository(repository_id)
    print(repo.retrieveAtom(pkg_id))
entropy_client.shutdown()

So simple.

Amazon EC2, kernels and S3-backed images

Some time ago I started playing with Amazon EC2 (I love working at lightstreamer.com), trying to get Sabayon running there as part of the process of learning how to make EC2-ready images. It ended up being relatively easy, and the required changes have been integrated into our beloved distribution and upstreamed to Gentoo (genkernel.git), also thanks to Rich and his guidance through the platform via G+.

So, for S3-backed AMIs (Amazon Machine Images), what is required is a Xen domU kernel inside a simple ext3/4 (or another filesystem supported by Amazon) filesystem image. Sounds easy, doesn’t it?

Since I wanted to provide pre-made kernel binaries through Entropy, the very first thing to do was to make genkernel able to compile Xen-based kernels. Due to build system interface changes, you cannot just “make bzImage”: that target doesn’t actually exist for Xen kernels. So, I jumped in and fixed genkernel so that it is possible to override the build target for the kernel image through --kernel-target=.
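For example, something along these lines (hedged: the “vmlinuz” target here is just an assumption of mine, use whatever image target your Xen kernel sources actually provide):

# genkernel --kernel-target=vmlinuz all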

Once that was fixed, I moved on to kernel sources and binaries ebuilds, and produced sys-kernel/xen-dom0-sources, sys-kernel/xen-domU-sources, sys-kernel/linux-xen-dom0 and sys-kernel/linux-xen-domU: respectively, the Xen dom0/domU kernel sources (shipping with a kernel configuration that works out of the box on Amazon EC2, the domU one of course) and the dom0/domU kernel binaries. The same are available in our Portage overlay in ebuild form (for Gentoo users, woot) and in the Sabayon Entropy repositories as install-and-go packages.

The final step was to write a molecule for Molecule.
Molecule is an image build tool I wrote in 2009 (sick of dev-util/catalyst!) that makes it possible to build different kinds of images (ISO, tarballs, filesystem images, etc.; the whole interface is pluggable) through a specification file (the molecule). I am proud to say that Molecule has played a key role in Sabayon’s latest successes by making the problem of building ISO images as easy as spawning a command. Back on topic, that’s what I did here and here.

I haven’t hooked the EC2 molecules into our automated nightly image build system yet. But if you are interested in cooking one, the procedure is pretty simple: just clone molecules.git into /sabayon, install dev-util/molecule (it’s in both Portage and Entropy), download “Sabayon_Linux_SpinBase_DAILY_amd64.iso” from one of our mirrors and place it into /sabayon/iso, then type (for amd64):

# molecule /sabayon/molecules/sabayon-amd64-spinbase-amazon-ami-template.spec

This will generate an S3-backed AMI for Amazon EC2. You will have to pick the right kernel AKI (just take the one for x86_64 and S3 and you’re set) and use the EC2 AMI tools to upload it into Amazon S3.
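If you have never used the AMI tools, the dance is more or less the following. Consider it a hedged sketch: file names, the bucket and the credentials are placeholders, so check the ec2-bundle-image and ec2-upload-bundle documentation for the real thing.

# ec2-bundle-image -i Sabayon_amd64.img -k pk.pem -c cert.pem -u <account-id>
# ec2-upload-bundle -b my-sabayon-bucket \
    -m /tmp/Sabayon_amd64.img.manifest.xml \
    -a <access-key> -s <secret-key>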

If you browse through all the available molecules, you will realize that you could virtually build all the Sabayon images yourself like we do on our ISO build server housed at unimib.it. Ain’t that something?

Introducing Fusion Sources

Mitch and I just started working on Fusion Sources. Like the glorious Zen Sources, ours are Sabayon-flavoured Linux kernel sources on steroids, containing the Con Kolivas patchset (BFS included), the BFQ I/O scheduler, Reiser4, experimental btrfs patches, experimental drm patches and experimental wireless-next drivers.

The first version of Fusion Sources has been tagged here. Sources and binaries are now available in our Entropy repositories (currently sabayon-limbo) and in our Portage overlay (layman -fa sabayon). This early version doesn’t contain the wireless-next, experimental btrfs and drm stuff, but we’ll get there in the next round.

If you are interested in joining in and helping out with the patchwork, just jump into #sabayon on irc.freenode.net.

Enjoy bleeding edge kernels from Sabayon.

We have to let GNOME2 go…

Some, maybe several, of you won’t like it, but the time to let GNOME2 go is slowly coming. We expect to start upgrading our build chroots to GNOME3 within a week and push everything to the sabayon-limbo Entropy testing repository, to iron out the migration path, fix up artwork, etc. Mainline users probably won’t be affected for another good month, but after that, hell will freeze over and everybody will get the oh-so-cute-to-run-in-failsafe-mode GNOME3 “Desktop”.

We will try to make the migration as comfortable as possible. We have to let GNOME 2.32 go to its destiny. R.I.P.

Sabayon 6 LXDE, XFCE, E17 releases

And these are the last 3 planned releases before the summer break (wait, what break?). Have fun with them, guys. Sabayon 7 is set for October 2011, with exciting new technologies inside (hopefully :-)). Of course, we’re always looking for developers wanting to join our crazy garage band. Come on, we’re waiting for you.

Anyway, back on-topic, the Press Release is here.

We need the Google+ of Desktop Environments

This is what I came up with after having slept 9 hours straight, that’s it. I wouldn’t even call it a “Desktop Environment” anymore; I would call it a Human Environment. Take Google+, or Google products in general, bring that philosophy and style to a Linux Desktop, and you’re done. So simple that even my grandma could use it, yet complete enough for everyday tasks. And no, I’m not talking about having a browser as a Desktop Environment; maybe I’m unconsciously just talking about Android? Ah well, I need to think about it a little bit more.

