Thanks, it’s time to push Sabayon farther

I want to take a few moments out of my well-deserved Christmas break to thank all the donors who contributed to our last fundraiser. After a year and a half, we’ve been able to hit our €5000 goal. This is a big, I mean really big, achievement for such a small (though I’m not so sure about “small” anymore) but awesome distro like ours.

We’ve always wanted to bring Gentoo to everyone, to make this awesome distro available on laptops, servers and, of course, desktops without the need to compile, without the need for a compiler! It turns out that we’re getting there.

So, the biggest part of the “getting there” strategy was to implement a proper binary package manager and to start automating the distro’s development, maintenance and release processes.
Even though Entropy is in continuous development mode, we’ve gotten to the point where it’s reliable enough. Now we must push Sabayon even farther.

Let me save the development ideas I have for a separate blog post, and tell you here what’s been done, what we’re going to do and what we still need in 2013.

First things first: last year we bought a shiny new build server, which is kindly hosted by the University of Trento, Italy. It’s a 2U rack machine with two octo-core Opteron 6128s, 48GB of RAM and, since earlier last year, 2x240GB Samsung 830 SSDs. In order to save (a lot of) money, I built the server myself and spent something like €2500 (SSDs included). Take into consideration that hardware prices in the EU are much higher than in the US.

Now we’re left with something like €3000 or more, and we’re planning another round of infrastructure upgrades, saving some money for hardware replacements in case of failures, buying t-shirts and DVDs to give out at local events, and so on.

So far, the whole Sabayon infrastructure is spread across three Italian universities and TOP-IX (more details at the bottom) and consists of four 1U rack servers and one 2U.
Whenever there’s a problem, I jump in the car and fix things myself (PSU, RAM and HDD/SSD failures, for instance) or kindly delegate the task to friends who live closer.

As you can imagine, it’s easy to burn €200-300 whenever there’s a problem, and while we have failover plans (to EC2), these come with a cost as well.
As you may have already realized, free software does not really come for free, especially for those who actually maintain it. Automation and scaling out across multiple people (the individuals involved in developing this distro) are the key, particularly the former, because it reduces the impact of human error on the whole workflow.

As I mentioned above, I will prepare a separate blog post about what I mean by “automation”. For now, enjoy your Christmas holidays, the NYE celebrations and, why not, some gaming with Steam on Sabayon.

Secretly({Plan, Code, Think}) && PublishLater()

Over the last years I have started several open source projects. Some turned out to be useful, maybe successful; many were just rubbish. Nothing new so far.

Every time I start a new project, I usually don’t really know where I am headed or what my long-term goals are. My excitement and motivation typically come from solving simple everyday, personal problems or just addressing {short,mid}-term goals. This is enough for me to just hack hack hack all night long. There is no big picture, no pressure from the outside world, no commitment requirements. It’s just me and my compiler/interpreter having fun together. I call this the “initial grace period”.

During this period, I usually never share my idea with other people, ever. I keep the project in a locked pod, away from hostile eyes. Should I share the idea at this stage, the project might get seriously injured and my excitement severely dented. People would only see the outcome of my thinking, not the thought process itself nor the detailed plans behind it, because I just don’t have them! Although this might be considered against both basic Software Engineering rules and some exotic “free software” principles, it works for me.

I don’t want my idea to be polluted before I have something that resembles it in the form of a consistent codebase. And until then, I don’t want others to see my work and judge its usefulness based on incomplete or simply inconsistent pieces of information.

At the very same time, writing documents about the idea and its goals beforehand is also a no-go, because I have “no clue” myself, as mentioned earlier.

This is why revision control systems, and the implicit development model they force on individuals, are so important, especially for me.
The ability to work on your own stuff, changes and improvements, without caring about the external world until you are really, really done with it, is what I ended up needing so much.
Every time I forgot to follow this “secrecy” strategy, I ended up spending more time discussing my (still confused?) idea, the {why,what,how} of what I was doing, than actually coding. Round trips are always expensive, no matter what you’re talking about!

Many of the internal tools we successfully use at Sabayon have gone through this development process. Other staffers sometimes say things like “he’s been quiet over the last few days, he must be working on some new feature”, and it turns out that most of the time this is true.

Anyway, this is what I wanted to share with you today. Don’t wait for your idea to become clearer in your mind; it won’t happen by itself. Just take a piece of paper (or your text editor), start writing down your own secret goals (don’t make the mistake of calling them “functional requirements” like I sometimes did), divide them into modest/expected and optimistic/crazy, and start coding as soon as possible on your own version/branch of the repo. Then go back to your list of goals, see if they need to be tweaked, and go back to coding again. Iterate until you’re satisfied with the result, and then, eventually, let your code fly away to some public site.

But, until then, don’t tell anybody what you’re doing! Don’t expect any constructive feedback during the “initial grace period”; it is very likely to just be destructive.

Git, I love ya!

Equo rewrite, Sabayon 10 and Google

The following months are expected to be really exciting (and scary, heh), for many reasons. Explanation below.

My life is going to change rapidly in roughly one month, and when these things happen, you feel scared and excited at the same time. I have always tried to cope with such events by just being myself: an error-prone human being (my tech English teacher doesn’t like me using “human being”, but where’s the poetry otherwise!) who always tries to enjoy life and computer science with a big smile on his face.

So, let’s start in reverse order. I have the opportunity to do my university internship at Google starting in October, more precisely at Google Ireland, in Dublin. I think many Googlers had the same feelings before they started that I have now: scared and excited at the same time, with questions like “do I deserve this?” and “am I good enough?”. As I wrote above, the only answer I have found so far is that, well, it will be challenging, but do I like boredom after all? Relying on professionalism and humility is probably what makes you a good teammate all the time. Individuals cannot scale up infinitely, which is why scaling out (as in teamwork) is a much better approach.

It’s been two years since I started working at Weswit, the company behind the award-winning Lightstreamer push technology, and next month is going to be my last one there. Still, you never know what will happen next year, once I’m back from the internship at Google. What’s sure is that I will need a job again, and that I will eventually graduate (yay!).
So yeah, throughout my whole time at university I kept working, and although it’s been tough, it really helped me in both directions. In the end, I kept accumulating real-world experience the whole time.
Nothing in my life has been risk-free, and I took the risk of leaving a great job to pursue something I would otherwise have regretted for the rest of my life, I’m sure. On the other hand, I’m sure that at the end of the day it will be a win-win situation. Weswit is a great company, with great people (whom I want to thank for the trust they placed in me), and I’m almost sure that next month might not be my last one there in absolute terms. You never know what is going to happen in your life, and I believe there’s always a balance between bad things and good things. Patience, passion and dedication are the best approach to life, by the way.

Before leaving for Dublin, we (as in the Sabayon team) are planning to release Sabayon 10: improved ZFS support, an improved Entropy & Rigo experience (all the features users asked me about have been implemented!), out-of-the-box KMS improvements, BFQ as the default I/O scheduler (I am a big fan of Paolo Valente’s work), a load of new updates (from the Linux kernel to X.Org, from GNOME to KDE through MATE) and, if we have time, more Gentoo Hardened features.

Let me mention one really nice Entropy feature I implemented last month. Entropy has used SQLite3 as its repository model engine since day one (and it’s been a big win!); however, the actual implementation has always been abstracted away, so that the upper layers never had to deal with it directly (nothing exciting up to here). Given that a file-based database like SQLite is almost impossible to scale out [1], and given that I’ve been digging into MySQL for some time now, I decided it was time to write an entropy.db connector/adapter for MySQL, specifically designed for the InnoDB storage engine. And around 1000 LOC just did it [2]!

As you may have noticed if you’re using Sabayon and updating it daily, the Entropy version has been bumped from 1.0_rcXXX to just XXX. As of today, the latest Entropy version is 134. It might sound odd or even funny, but I was sick of seeing that 1.0_rc prefix, which was starting to look ridiculous. Entropy is all about continuous development and improvement; when I fully realized this, it became clear that there would never be any “final”, “one-point-oh”, “one-size-fits-all, done && done” version. Version numbers have always been overrated, so f**k formally defined version numbers, welcome monotonically increasing sequences (users won’t care anyway; they just want the latest and greatest).

I know, I mention the “Equo rewrite” in the blog post title. Here we go. The Equo codebase is one of the first and longest-living parts of Entropy I wrote; some of the code has been there since 2007, and even though it went through several rounds of refinement, the core structure is still the same (crap). Let me roll the clock back a bit: when the Eit codebase [3] replaced the old equo-community, reagent and activator tools, it was clear that I was eventually going to do the same thing with Equo, so I wrote the whole thing in an extremely modular way, to the point that extra features (or “commands” in this case) can be plugged in by third parties without touching the Eit kernel at all. After almost one year, Eit has proven so powerful and solid that its architecture is now landing in the much more visible next-gen Equo app.
I tell you, migrating the Equo codebase over will take a long time. It is actually one of the many background tasks I work on during rainy weekends. But still, expect me to experiment with new (crazy, arguable, you name it) ideas while I make progress on it. The new Equo is codenamed “Solo”, but that’s just a way to avoid file name clashes while I port the code over. You can find the first commits in the entropy.git repo, under the “solo” branch [4].

Make sure not to miss the whole picture: we’re a team, and Sabayon lives on incremental improvements (continuous development, agile!). This has the big advantage that we can implement and deploy features without temporal constraints. And in the end, it’s just our (beloved) hobby!

[1] Imagine a web service cluster, for instance. I know, SQL in general is known for not scaling out well without sharding or other techniques, but that is outside the scope of this paragraph, and I think NoSQL is sometimes overrated as well.
[3] Eit is the server-side (and community-repo side) command line tool, “Eit” stands for “Entropy Infrastructure Toolkit” and it exposes repo management in a git-like fashion.

Going to the Tizen Developer Conference

I am really happy to announce that I’ve been invited to the Tizen Developer Conference, in San Francisco, CA next month (May 7-9). I’m going with a friend of mine, Michele Tameni, who seems to have won the Intel consolation prize (damn you!).
I really need to thank Giovanni Martinelli and Mauro Fantechi for sponsoring my trip there and across the whole San Francisco Bay Area (I’ve been told that it’s full of Gentooers, gonna catch you!).

This is a great opportunity for me to meet a good deal of hardcore FLOSS devs and have a beer together (just one?). There are also several exciting talks I couldn’t really miss.

Feel free to contact me if you want to meet up for a drink. A post to the gentoo-core ML will follow next week.

Entropy & Rigo at Transifex

I created two projects on Transifex, aiming to get better localization support for Entropy (and Rigo, which has split .po files).

Sabayon Entropy Transifex Project Page:

Sabayon Rigo Transifex Project Page:

If you speak a language other than English, then help us out!

I’m going to create the Anaconda (Sabayon Installer) translation project later today.

Tech Preview: Sabayon on ARMv7

One week ago, the BeagleBone I ordered eventually landed on my desk. As you may know, it’s an ARMv7-A OMAP device with 256MB of RAM, USB 2.0 and 10/100 Fast Ethernet. The board doesn’t come with HDMI output, but daughter boards are expected to ship soon, it seems.

It ships with Angstrom Linux, an embedded distribution that is nowhere close to Gentoo, at least in my opinion. I found it kind of broken and slow; opkg, its package manager, is a nightmare. I had no choice: that thing had to be dumped ASAP. And that’s what I did.

The first thing I did was copy the Angstrom kernel configuration (stashing /proc/config.gz in a safe place) and start merging the BeagleBoard kernel tree into my Linux 3.1 branch; this is necessary because the AM335x is not yet supported by the vanilla kernel. Last I heard, there are plans to merge the patches during the 3.3 merge window. You can find two ebuilds in the “sabayon-distro” overlay: sys-kernel/beagleboard-sources and sys-kernel/linux-beagleboard, providing sources and binaries respectively.
Introducing ARM support in sabayon-kernel.eclass (the eclass that builds kernel binaries using genkernel) was quite straightforward: it now builds uImages directly!

The boot strategy works like this: u-boot.img looks for /boot/uImage in the root filesystem (ext4 doesn’t seem to work with my image). In our case, /boot/uImage is a symlink pointing to a versioned file (the one installed by sys-kernel/linux-beagleboard). You can manage the symlink using eselect-uimage, from the “sabayon” overlay and already shipped with the Sabayon images. This means you can change the boot kernel at runtime without even touching the boot partition!
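
Stripped to its essence, the symlink switch described above boils down to repointing /boot/uImage at a versioned kernel image (this is an illustrative sketch, not the actual eselect-uimage code; the file name is hypothetical, and the demo runs against a scratch directory instead of a real /boot):

```shell
#!/bin/sh
# Sketch: repoint the uImage symlink so u-boot loads the selected kernel.

set_uimage() {
    boot_dir=$1
    kernel=$2
    # -s symlink, -n treat an existing link as a plain file, -f replace it
    ln -snf "$kernel" "$boot_dir/uImage"
}

# Demo against a scratch directory instead of a real /boot:
demo=/tmp/uimage-demo
mkdir -p "$demo"
touch "$demo/uImage-3.1.0-beagleboard"   # hypothetical versioned image
set_uimage "$demo" uImage-3.1.0-beagleboard
readlink "$demo/uImage"
```

Because the link is replaced atomically, u-boot always finds a consistent /boot/uImage, which is why the kernel can be switched at runtime.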

The second thing was setting up chroots, both the Entropy build chroot (for pushing binary packages out to the armv7l repo) and the “image” chroot (the one the images are generated from), using qemu-user to emulate armv7l. In order to be able to prepare disk images using loop devices, I also completely rewrote the famous “mkcard.txt” script, dropping the bc dependency (hey, bash can do math already!). You can find it here, along with the molecules we use to build the ARMv7a images for the Beagle{Bone,Board}.
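
To give an idea of the bc-free math, here is a hedged sketch (not the actual rewritten script) of the kind of geometry arithmetic mkcard-style scripts do, using the shell’s native `$(( ))` arithmetic instead of piping expressions through bc. The card size is an example; 255 heads x 63 sectors is the usual fdisk default:

```shell
#!/bin/sh
# Integer math with $(( )) replaces the old `echo "..." | bc` calls.

SIZE_BYTES=$((4 * 1024 * 1024 * 1024))   # a pretend 4GB card
HEADS=255
SECTORS=63

# What used to be: CYLINDERS=$(echo "$SIZE/$HEADS/$SECTORS/512" | bc)
CYLINDERS=$(( SIZE_BYTES / HEADS / SECTORS / 512 ))

echo "cylinders: $CYLINDERS"
```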

If you are interested in knowing more about how I got Sabayon onto ARM, have a look at the “Hitchhiker’s Guide to the BeagleBone” on our wiki.

I uploaded the Sabayon ARMv7 images to our mirrors this morning, under the iso/daily directory, in different sizes (depending on your MMC card size): 4GB, 8GB and 16GB.

a75fd2a7d9cae17762034c5c049a08fc Sabayon_Linux_DAILY_armv7a_Base_16GB.img.xz

0e17081050fa19c7f769318e3235ebaa Sabayon_Linux_DAILY_armv7a_Base_4GB.img.xz

56606bc906715cdebce02611ae03285c Sabayon_Linux_DAILY_armv7a_Base_8GB.img.xz

Installing them onto your MMC card is as easy as running:

xzcat <image file>.xz > /dev/sdX

where /dev/sdX is your memory card device (it might be mmcblk0).
The images come in different sizes; make sure the advertised image size matches that of your MMC device.
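
It’s also worth checking the download against the MD5 list above before writing anything to the card. The pattern is sketched below on a scratch file so it runs anywhere; substitute the real image name and the matching hash from the list:

```shell
#!/bin/sh
# Verify a file against a known MD5 before flashing it.
img=/tmp/demo.img.xz
echo hello > "$img"                         # stand-in for the download
expected=b1946ac92492d2347c6235b4d2611184   # hash of this demo file
actual=$(md5sum "$img" | awk '{print $1}')
[ "$actual" = "$expected" ] && echo "checksum OK, safe to flash"
# then, as root:  xzcat "$img" > /dev/sdX
```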

If you have a 32GB or 64GB MMC, you have two choices: either use the 16GB version and create a separate partition later, or take the bootfs and rootfs tarballs (they come in different sizes but the content is the same) from the same directory:

93960b28cde8dde51f2d29cc0c76f6bb Sabayon_Linux_DAILY_armv7a_Base_16GB.img.bootfs.tar.xz

e85fe8d344dacf79eec94562a59c6750 Sabayon_Linux_DAILY_armv7a_Base_16GB.img.rootfs.tar.xz

93960b28cde8dde51f2d29cc0c76f6bb Sabayon_Linux_DAILY_armv7a_Base_4GB.img.bootfs.tar.xz

1a6bce6f585d52f2b50806bd2bd69578 Sabayon_Linux_DAILY_armv7a_Base_4GB.img.rootfs.tar.xz

93960b28cde8dde51f2d29cc0c76f6bb Sabayon_Linux_DAILY_armv7a_Base_8GB.img.bootfs.tar.xz

13b2a47a88c55c8692ce61fc2fd42022 Sabayon_Linux_DAILY_armv7a_Base_8GB.img.rootfs.tar.xz

This way you can create your own partition layout and then unpack the contents into the respective partitions. So easy: no GRUB or MBR nightmare!
If you are as lazy as me, here is the download link to the directory (using the GARR mirror), but you are encouraged to use our download page to find the mirror closest to you.

The root password is: root. The OS is set to boot automatically and bring up eth0 and sshd (so you can connect to it via ssh). During the first boot, a script configures a few things and reboots your device automatically (so the very first boot will just reboot). If you want to change the locale, edit /etc/env.d/02locale and run locale-gen. The system is already configured to allow serial login (this works out of the box on the BeagleBone, at least).

That’s it: you have a great distro (Gentoo) with a great package manager, Entropy (Sabayon), on a credit-card-sized device.
If you appreciate our efforts on the ARM architecture, please consider donating either hardware or MONEY to buy it! (Yeah, we always need money!)
If you’re a developer and interested in ARM stuff, why don’t you join us? We can improve both Gentoo and Sabayon together!

Please note: I only have a BeagleBone for now, so I wasn’t able to test the images on a BeagleBoard. Moreover, the boot partition contains Beagle*-specific boot binaries that won’t work on other OMAP devices out of the box (but again, we provide split boot and root filesystem images).

Please note 2: I consider this a tech preview because, at the time of writing, we only have around 400 binary packages available for install. You can browse them using


We need the Google+ of Desktop Environments

This is what I came up with after sleeping nine hours straight. That’s it: I wouldn’t even call it a “Desktop Environment” anymore. I would call it a Human Environment. Take Google+, or Google products in general, bring their philosophy and style to a Linux desktop, done. So simple that even my grandma could use it, yet complete enough for everyday tasks. And no, I’m not talking about having a browser as the Desktop Environment. Maybe I’m unconsciously just talking about Android? Ah well, I need to think about it a little more.


Open Source projects have gone crazy, period

You all know that I like to summarize my thinking in a few sentences. I don’t really like very long boring blog posts, and microblogging is already eating my will.

GRUB2: I’ve never understood the idiocy behind using bash scripts in bootloader configuration/setup tools. The wannabe GRand Unified Bootloader isn’t unified, or grand, even after a decade. And one of the sane points of GRUB1, a simple configuration file (which could have been improved, sure) that users were able to understand, was just thrown to the sharks. The same goes for having to regenerate grub.cfg from scratch at every kernel install/removal; that is really asking for trouble. Congrats!

KDE4: they planned to dominate the world with their outstanding ideas (and they were outstanding, at the time), and they ended up with a crashy, fishy Desktop Environment that gives downstream distributors big headaches at every minor release, with configuration, ABI and API changes, yeah. And we, as a distro, as usual take all the blame for things breaking so often. Hello KDE upstream: just test your stuff a bit more before feeding it to the crowd. Oh, and of course I have to mention the super-KDE-fanboy anecdote I keep hearing at every new release: KDE 4.(x+1) will be much better than KDE 4.x (and will blow GNOME 3 away!). The part in brackets is a recent addition.

GNOME3: yeah, sooner (most likely) or later we will have to switch to it, since GNOME2 is no longer being developed, as you may have happily seen. But when that happens, I am sure many of our users will get promptly upset and start flooding our Bugzilla and forums with WTFs (at a very high WTF/s rate). Many people (the majority?) just want desktop icons they can click, some sort of taskbar and a systray where the annoying shit is placed. It’s been like that for 15 years, so why do these bright minds called “Desktop Environment developers” pretend to know what users want and what users are FOR SURE not suited for? On a desktop (not mobile or handheld) system. WTF? Can’t you guys stop pretending to hold all the knowledge and sit down with us simple human beings? And perhaps shut the fuck up for a few minutes and just listen to what users want? Hasn’t the recent KDE3->KDE4 migration taught you anything useful?

I’m done for today, but it’s certainly not all about GRUB2, KDE4 and GNOME3. Everybody seems to have gone crazy, desperately trying to be revolutionary while people just want simple things that work without too much annoyance.

Update: when I wrote this blog post it was quite late here, and I feel really guilty for having forgotten Mozilla and its products, Chromium’s versioning, and the many other examples that support my theory. Is the Apocalypse of FLOSS near?

Profiling my home server power consumption

I just bought a Voltcraft EL4000 device which, among other things, displays power consumption information such as amperes, watts, VA and the infamous kWh.

Of course, the aim was to spot power-hungry devices and kill them (when possible) or replace them with more eco-friendly alternatives. So I started with my Atom-based home server (an N270 on the power-hungry i945CF), running two super-power-hungry 750GB Western Digital drives in RAID1 plus an external 2TB Western Digital eSATA drive (you never know: if the world falls over, you need to be able to take all your data with you quickly). Here is what I found:

  • Initial consumption was around 90W; no fancy power-saving thingies were configured (besides, hdparm didn’t work on those WD drives).
  • The freaking power strip’s I/O button LED was consuming 2W. I ripped it off immediately. My room at night is very close to the Milky Way.
  • The bloody PSU is quite inefficient (<80%), so I’ll have to replace it with a more efficient one as soon as I have some spare cash (sigh).
  • The chipset on this Atom board sucks 22W, while the Atom itself draws *only* 4W (2.4W average). I knew that when I bought it, but back then I didn’t care too much about energy costs. Does anybody know a good, cheap, power-friendly Atom board?
  • The bloody WD RAID1 setup sucks 20W at idle (idle != spun down). I just ordered two new WD “Green Power” drives, which will save around 14W at idle. Interestingly, once spun down, these drives consume almost nothing. So, after discovering that hdparm didn’t work, I tried the hard way, sending raw SCSI commands, and that worked; then I found sg3_utils’ sg_start –stop /dev/sdX and started writing a small, memory-resident daemon in Python that performs the spindown every 7.5 minutes if certain conditions are met.
  • Neither the p4-clockmod nor the acpi-cpufreq driver is supported on this CPU (by design, AFAIR). Anyway, the delta between C5 (and C6) and C0 is minimal. Only the “powersave” and “performance” governors work, but running the Atom at a fixed 200MHz clock is not a good idea.
  • The external hard drive has, of course, a separate DC power converter, which also turned out to be very inefficient, but I can tolerate 1W.
  • Last but not least: the damn UPSes. I used to have two, one for the server and the other for my workstation. There’s no reason to have 14W constantly sucked (even with the batteries fully charged) by the APC 800VA RS (the other is a shitty 1000VA Trust). So I decided to keep the shitty Trust, drop the APC, and connect both computers to the same UPS.

So, my goal (and I’m already very close) is to cut power consumption from 90W to 45W, considering that the server does some torrent seeding and backups overnight (when energy costs less) and for 95% of the time sits idle waiting for daddy to come home.

The things that made a big difference in reducing spin-up events (mainly due to “pdflushing” data to disk, caused by the cron daemon, nfs, samba, apache and mysql) to close to zero were the following:

  • Moving /var/log, /var/cache/samba, /var/lib/samba and /var/lib/nfs to a separate, power-friendly USB memory stick. This way, the RAID1 drives no longer had a reason to spin up every odd minute.
  • Using my own script to put the hard drives to sleep (an infinite while loop that checks the drive stats under /sys/block to figure out whether the drives need to go back to sleep).
  • Using powertop for advice, like the tweaks below (just google them to see what they’re about):

echo 1 > /sys/module/snd_hda_intel/parameters/power_save
echo 2500 > /proc/sys/vm/dirty_writeback_centisecs
echo 5 > /proc/sys/vm/laptop_mode
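
The stat-watching loop from the second bullet might look roughly like this (a hedged sketch, not the actual script: the device name and interval are examples, and the daemon loop itself is shown as comments since it runs forever and needs root):

```shell
#!/bin/sh
# Idle check: if the I/O counters in /sys/block/<dev>/stat haven't moved
# between two snapshots, the drive has been idle and can be stopped with
# sg3_utils' sg_start --stop.

snapshot() {
    # fields 1 and 5 of /sys/block/<dev>/stat: reads and writes completed
    awk '{print $1, $5}' "/sys/block/$1/stat"
}

idle_since() {
    # true when the two snapshots are identical (no I/O in between)
    [ "$1" = "$2" ]
}

# The daemon loop would be something like (illustrative, run as root):
#   prev=$(snapshot sda)
#   while sleep 450; do                      # 7.5 minutes per check
#       cur=$(snapshot sda)
#       idle_since "$prev" "$cur" && sg_start --stop /dev/sda
#       prev=$cur
#   done
idle_since "120 34" "120 34" && echo "would spin down now"
```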

And, freaking kill all the swap partitions! Completely prevent the kernel from moving anonymous pages (and other swappable ones) to disk! If you have more than 2GB of RAM and are running a home server, there’s no reason to have swap.
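
Retiring swap amounts to two steps: releasing it now and keeping it from coming back at boot. A sketch (the live `swapoff -a` needs root, so it is shown as a comment and the fstab edit is demonstrated on a copy; the fstab line is an example):

```shell
#!/bin/sh
# Step 1 (root): release all active swap immediately:
#   swapoff -a
# Step 2: comment the swap entries out of /etc/fstab so they don't
# return at the next boot. Demonstrated on a demo copy:

fstab=/tmp/fstab.demo            # the real file would be /etc/fstab
printf '%s\n' '/dev/sda2 none swap sw 0 0' > "$fstab"
sed -i '/[[:space:]]swap[[:space:]]/s/^/#/' "$fstab"
cat "$fstab"
```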


Sabayon “App Store”, abbr: Entropy Store. Ain’t that something?

So, ladies (and gentlemen, but mostly ladies): after four years of hard work on Entropy (the Sabayon Package Manager), I’ve been able to deploy what I had in mind even before the Apple AppStore, Google Market, etc. existed. And that’s not all (anyway).

What do we have here, then? Friday night I had the magic intuition: how can I provide click-and-install “executables” for a world as fragmented as the FLOSS one? On a rolling distro, even? Where the dependency graph’s complexity could eat you in just one bite? The magic words are “metadata embedding”. I won’t discuss the internal details here (have a look at the Entropy git repo log and you’ll get it), but rather focus on the WOW factor. I’ve been able to pack the metadata of up to 300-400 packages (a flattened dependency graph) into a very small space (~1MB). The result? The ability to install Sabayon applications (on Sabayon-based systems only, of course) via the online “Entropy Store”.

Yeah, clicky links called “Install (beta)”, on a rolling Linux distro. Ain’t that something?

The next step would be figuring out whether it’s possible to poll the local Entropy installed-packages repository from a web browser. But that turns into a security matter and runs into JavaScript limitations (I’d like to avoid using/writing any Flash/Java applet).

Next milestone (after the 5.5 XFCE, LXDE, E17, OpenVZ and ServerBase releases, of course): a web-based Sabayon release (ISO image) generator. How about that?

hello, twitter
