Secretly({Plan, Code, Think}) && PublishLater()

Over the last few years I have started several open source projects. Some turned out to be useful, maybe even successful; many were just rubbish. Nothing new so far.

Every time I start a new project, I usually don’t really know where I am headed or what my long-term goals are. My excitement and motivation typically come from solving simple everyday, personal problems, or just from addressing {short,mid}-term goals. This is enough for me to just hack hack hack all night long. There is no big picture, no pressure from the outside world, no commitment requirements. It’s just me and my compiler/interpreter having fun together. I call this the “initial grace period”.

During this period, I never share my idea with other people, ever. I keep the project in a locked pod, away from hostile eyes. Should I share my idea at this stage, the project might get seriously injured and my excitement severely affected. People would only see the outcome of my thought, not the thought process itself, nor the detailed plans behind it, because I just don’t have them! Even though this might be considered against basic Software Engineering rules, or against some exotic “free software” principles, it works for me.

I don’t want my idea to be polluted as long as I don’t have something that resembles it in the form of a consistent codebase. And until then, I don’t want others to see my work and judge its usefulness based on incomplete or inconsistent pieces of information.

At the very same time, writing documents about my idea and its goals beforehand is also a no-go, because, as mentioned earlier, I have “no clue” myself.

This is why revision control systems, and the implicit development model they force on individuals, are so important, especially for me. The ability to work on your stuff, your changes and improvements, without caring about the external world until you are really, really done, is what I ended up needing so much. Every time I forgot to follow this “secrecy” strategy, I had to spend more time discussing my (still confused?) idea, the {why,what,how} of what I am doing, than actually coding. Round trips are always expensive, no matter what you’re talking about!

Many of the internal tools we use at Sabayon have successfully gone through this development process. Other staffers sometimes say things like “he’s been quiet over the last few days, he must be working on some new feature”, and it turns out that most of the time it’s true.

This is what I wanted to share with you today. Don’t wait for your idea to become clearer in your mind; it won’t happen by itself. Just take a piece of paper (or your text editor), start writing down your own secret goals (don’t make the mistake of calling them “functional requirements”, like I sometimes did), divide them into modest/expected and optimistic/crazy, and start coding as soon as possible on your own version/branch of the repo. Then go back to your list of goals, see if they need to be tweaked, and go back to coding again. Iterate until you’re satisfied with the result, and then, eventually, let your code fly away to some public site.

But, until then, don’t tell anybody what you’re doing! Don’t expect any constructive feedback during the “initial grace period”; it is very likely that it will just be destructive.

Git, I love ya!

Equo code refactoring: mission accomplished

Apparently it’s been a while since my last blog post. This, however, does mean that I’ve been busy on the coding side, which is what you may prefer, I guess.

The new Equo code is hitting the main Sabayon Entropy repository as I write. But what’s it about?

Refactoring

First things first: the old codebase was ugly, as in, really ugly. Most of it was originally written in 2007 and maintained throughout the years since. It wasn’t modular, object oriented, bash-completion friendly or man-pages friendly, and most importantly, it did not use any standard argument-parsing library (because at the time there was no argparse module and optparse was about to be deprecated).

Modularity

Equo subcommands are now just stand-alone modules. This means that adding new functionality to Equo is only a matter of writing a new module containing a subclass of “SoloCommand” and registering it against the “command dispatcher” singleton object. Also, the internal Equo library now has its own name: Solo.
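For the curious, here is a toy model of the architecture; this is not Solo’s actual API (“SoloCommand” and the command dispatcher are mentioned above, everything else is made up):

# Toy model of the Solo subcommand architecture described above.
# Class and registration names are illustrative, not Entropy's real ones.

class SoloCommand(object):

    NAME = None

    def __init__(self, args):
        self._args = args

    def parse(self):
        """Parse self._args and return the callable executing the command."""
        raise NotImplementedError()

    def bashcomp(self, last_argument_str):
        """Return bash completion candidates for this subcommand."""
        return []


class CommandDispatcher(object):
    """Singleton-style registry mapping subcommand names to classes."""

    _commands = {}

    @classmethod
    def register(cls, command_class):
        cls._commands[command_class.NAME] = command_class

    @classmethod
    def dispatch(cls, argv):
        command_class = cls._commands[argv[0]]
        return command_class(argv[1:]).parse()()


class SoloPing(SoloCommand):
    """Adding an "equo ping" feature would just be this module."""

    NAME = "ping"

    def parse(self):
        return self._run

    def _run(self):
        print("pong")
        return 0

    def bashcomp(self, last_argument_str):
        return [o for o in ("--verbose",) if o.startswith(last_argument_str)]


CommandDispatcher.register(SoloPing)

CommandDispatcher.dispatch(["ping"]) then resolves, parses and runs the command, without the dispatcher knowing anything about it beforehand.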

Backward compatibility

In terms of the command line exposed to the user, there are no substantial changes. During the refactoring process I tried not to break the current “equo” syntax. However, syntax that was deprecated more than 3 years ago is gone (for instance, stuff like “equo world”). In addition, several commands are now sporting new arguments (have a look at “equo match”, for example).

Man pages

All the equo subcommands are provided with a man page, available through “man equo-<subcommand name>”. The information required to generate the man page is tightly coupled with the module code itself, and the page is automatically generated via some (Python + a2x)-fu. As you can imagine, maintaining both the code and its documentation becomes easier this way.
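A rough idea of what that (Python + a2x)-fu could look like; the metadata attribute names here (SYNOPSIS, DESCRIPTION) are my invention, while a2x is the real asciidoc toolchain command:

# Sketch: render a command's metadata to asciidoc, then let a2x
# build the roff man page from it.
import subprocess

def write_man_page(command_class):
    title = "equo-%s(1)" % command_class.NAME
    text = "\n".join([
        title,
        "=" * len(title),  # asciidoc two-line document title
        "",
        "NAME",
        "----",
        "equo-%s - %s" % (command_class.NAME, command_class.SYNOPSIS),
        "",
        "DESCRIPTION",
        "-----------",
        command_class.DESCRIPTION,
        "",
    ])
    source = "equo-%s.txt" % command_class.NAME
    with open(source, "w") as txt:
        txt.write(text)
    # a2x ships with asciidoc; this drops equo-<name>.1 next to the source
    subprocess.check_call(["a2x", "-f", "manpage", "-d", "manpage", source])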

Bash completion

Bash completion code lives together with the rest of the business logic. Each subcommand exposes its bash completion options through a class instance method, “bashcomp(last_argument_str)”, overridden from SoloCommand. In layman’s terms, you’ve got working bashcomp awesomeness for every equo command available.
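Bash then only needs something to ask for candidates. A hypothetical bridge, assuming the toy dispatcher sketched earlier (the real Entropy glue is different):

# Hypothetical completion helper: bash (via "complete -C" or a wrapper
# function) invokes this, which prints one candidate per line.
import sys

def bashcomp_main(argv):
    command_class = CommandDispatcher._commands.get(argv[0]) if argv else None
    if command_class is None:
        # still completing the subcommand name itself
        prefix = argv[0] if argv else ""
        candidates = [name for name in CommandDispatcher._commands
                      if name.startswith(prefix)]
    else:
        last_argument = argv[-1] if len(argv) > 1 else ""
        candidates = command_class([]).bashcomp(last_argument)
    sys.stdout.write("\n".join(candidates) + "\n")

if __name__ == "__main__":
    bashcomp_main(sys.argv[1:])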

Where to go from here

Tests, we need more tests (especially regression tests). And I have this crazy idea of placing tests directly in the subcommand module code.
Also: testing! Please install Entropy 149 and play with it, try to break it, and report bugs!

Equo rewrite, Sabayon 10 and Google

The following months are expected to be really exciting (and scary, eheh), for many reasons. Explanation below.

My life is going to change rapidly in roughly one month, and when these things happen, you feel scared and excited at the same time. I have always tried to cope with such events by just being myself: an error-prone human being (my technical English teacher doesn’t like me using “human being”, but where’s the poetry otherwise!) who always tries to enjoy life and computer science with a big smile on his face.

So, let’s start in reverse order. I have the opportunity to do my university internship at Google starting from October, more precisely at Google Ireland, located in Dublin. I think many Googlers had the same feelings I currently have before joining: scared and excited at the same time, with questions like “do I deserve this?”, “am I good enough?”. As I wrote above, the only answer I have found so far is that, well, it will be challenging, but do I like boredom after all? Professionalism and humility are probably what make you a good team-mate all the time. Individuals cannot scale up infinitely; that is why scaling out (as in team work) is a much better approach.

It’s been two years since I started working at Weswit, the company behind the award-winning Lightstreamer push technology, and next month is going to be my last one there. Still, you never know what will happen next year, once I’m back from the internship at Google. Sure thing is, I will need a job again, and I will eventually graduate (yay!).
So yeah, during the whole university period I kept working and, although it’s been tough, it really helped me in both directions: in the end, I kept accumulating real-world expertise the whole time.
Nothing in my life has been risk-free, and I took the risk of leaving a great job position to pursue something I would otherwise have regretted missing for the rest of my life, I’m sure. On the other hand, I’m sure that at the end of the day it will be a win-win situation. Weswit is a great company, with great people (whom I want to thank for the trust they gave me), and I’m almost sure that next month might not be my last one there (in absolute terms, I mean). You never know what is going to happen in your life, and I believe there’s always a balance between bad and good things. Patience, passion and dedication are the best approach to life, by the way.

Before leaving for Dublin, we (as in the Sabayon team) are planning to release Sabayon 10: improved ZFS support, an improved Entropy & Rigo experience (all the features users asked me about have been implemented!), out-of-the-box KMS improvements, BFQ as the default I/O scheduler (I am a big fan of Paolo Valente’s work), a load of new updates (from the Linux kernel to X.Org, from GNOME to KDE through MATE) and, if we have time, more Gentoo-hardened features.

Let me mention one really nice Entropy feature I implemented last month. Entropy has used SQLite3 as its repository model engine since day one (and it’s been a big win!); even so, the actual implementation has always been abstracted away, so that upper layers never had to deal with it directly (and up to here, there is nothing exciting). Given that a file-based database like SQLite is almost impossible to scale out [1], and given that I’ve been digging into MySQL for some time now, I decided it was time to write an entropy.db connector/adapter for MySQL, specifically designed for the InnoDB storage engine. And 1000 LOC just did it [2]!
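The adapter idea boils down to something like this; just the spirit of it, not Entropy’s real entropy.db interface (the 1000 LOC obviously also deal with schema, transactions and locking):

# Minimal adapter sketch: one contract, two engines underneath.
# Illustrative names, not Entropy's real classes.
import sqlite3

class RepositoryConnector(object):
    """What the upper layers see, regardless of the engine."""

    def connect(self):
        raise NotImplementedError()

    def execute(self, sql, params=()):
        raise NotImplementedError()


class SQLiteConnector(RepositoryConnector):

    def __init__(self, path):
        self._path = path
        self._conn = None

    def connect(self):
        self._conn = sqlite3.connect(self._path)

    def execute(self, sql, params=()):
        return self._conn.execute(sql, params)


class MySQLConnector(RepositoryConnector):
    """Same contract, backed by MySQLdb and InnoDB tables."""

    def __init__(self, host, user, passwd, db):
        self._dsn = dict(host=host, user=user, passwd=passwd, db=db)
        self._conn = None

    def connect(self):
        import MySQLdb  # MySQL-python, the era-appropriate driver
        self._conn = MySQLdb.connect(**self._dsn)

    def execute(self, sql, params=()):
        cursor = self._conn.cursor()
        cursor.execute(sql, params)
        return cursor

One of the small annoyances such an adapter has to hide, by the way, is the paramstyle: sqlite3 wants “?” placeholders while MySQLdb wants “%s”.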

As you may have seen if you’re using Sabayon and updating it daily, the Entropy version has been bumped from 1.0_rcXXX to just XXX. As of today, the latest Entropy version is 134. It might sound odd or even funny, but I was sick of seeing that 1.0_rc prefix, which was just starting to look ridiculous. Entropy is all about continuous development and improvement; once I fully realized this, it was clear that there would never be any “final”, “one-point-oh”, “one-size-fits-all done && done” version, ever. Version numbers have always been overrated, so f**k formally defined version numbers, welcome monotonically increasing sequences (users won’t care anyway, they just want the latest and greatest).

I know, I mention the “Equo rewrite” in the blog post title. And here we go. The Equo codebase was one of the first and longest-living parts of Entropy I wrote; some of the code has been there since 2007, and even though it went through several refinement processes, the core structure is still the same (crap). Let me roll back the clock a little bit first: when the Eit codebase [3] replaced the old equo-community, reagent and activator tools, it was clear that I was going to do exactly the same thing with Equo, so I wrote the whole code in an extremely modular way, to the point that extra features (or “commands”, in this case) could be plugged in by 3rd parties without touching the Eit kernel at all. After almost one year, Eit has proven to be really powerful and solid, to the extent that its architecture is now landing in the much more visible next-gen Equo app.
I tell you, the process of migrating the Equo codebase over will be long. It is actually one of the many background tasks I usually work on during rainy weekends. But still, expect me to experiment with new (crazy, arguable, you name it) ideas while I make progress on this task. The new Equo is codenamed “Solo”, but that’s just a way to avoid file name clashes while I port the code over. You can find the first commits in the entropy.git repo, under the “solo” branch [4].

Make sure not to miss the whole picture: we’re a team, and Sabayon lives on incremental improvements (continuous development, agile!). This has the big advantage that we can implement and deploy features without temporal constraints. And in the end, it’s just our (beloved) hobby!

[1] imagine a web service cluster, etc — I know, SQL in general is known for not scaling out well without sharding or other techniques, but this is outside the scope of this paragraph, and I think NoSQL is sometimes overrated as well.
[2] http://git.sabayon.org/entropy.git/tree/lib/entropy/db/mysql.py
[3] Eit is the server-side (and community-repo side) command line tool, “Eit” stands for “Entropy Infrastructure Toolkit” and it exposes repo management in a git-like fashion.
[4] http://git.sabayon.org/entropy.git/log/?h=solo

Sabayon on Amazon EC2

During the last week, while enjoying my vacation, I also had a lot of fun preparing a new EC2-friendly kernel (and sources) based off our kernel git repository (which is based on Linus’ kernel tree plus some patches, like the BFQ scheduler, fbcondecor and aufs3).

The outcome of my puzzle game (trying to figure out why an instance doesn’t boot on EC2 is like solving puzzles at times) is that sys-kernel/ec2-sources and sys-kernel/linux-ec2 (precompiled binaries) are now available in the sabayon-distro overlay and in the “sabayonlinux.org” Sabayon Entropy repository.

As you may expect, I rapidly started to get bored again. For this very simple reason, and since I always wanted to have a fallback website/webservices infrastructure ready on EC2 (in case of a disaster), I started cooking an EBS-backed AMI, replicating the current virtual machine snapshots from our backup server.

As you may expect, I rapidly started to get bored once again. So I prepared a Molecule .spec file that automatically creates a ready-to-go, ext4-based Sabayon Server filesystem image tarball, ready to be dumped onto a spare EBS volume. Once you have such an EBS volume, you just need to snapshot it and create the AMI from there (fyi).

As you may expect, I was getting bored, of course. So I started preparing a “BuildBot” AMI that can be launched programmatically (say, from a cronjob) and, once started (the @reboot cronjob target is <3), attaches an existing EBS volume containing a Sabayon chroot, runs equo update && equo upgrade and other stuff, then detaches the volume, makes a snapshot of it and creates a versioned AMI.
Yes, boring stuff deserves a lot of bash scripting, it can’t be otherwise. This way, I can continuously build updated Sabayon AMIs for EC2 without much human intervention (of course, the BuildBot AMI mails the upgrade result back to me, both stderr and stdout).
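My scripts are plain bash, but just to show the moving parts of the last step, here is the snapshot-then-register dance sketched with boto, the Python AWS library (the volume id, AMI name and devices below are made up):

# Sketch of the "snapshot the volume, register a versioned AMI" step.
import time
import boto.ec2
from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType

conn = boto.ec2.connect_to_region("us-east-1")

# snapshot the EBS volume holding the freshly upgraded Sabayon chroot
snapshot = conn.create_snapshot("vol-12345678",
                                "Sabayon chroot after equo upgrade")
while snapshot.status != "completed":
    time.sleep(10)
    snapshot.update()

# register a versioned AMI whose root device points at that snapshot
bdm = BlockDeviceMapping()
bdm["/dev/sda1"] = BlockDeviceType(snapshot_id=snapshot.id,
                                   delete_on_termination=True)
ami_id = conn.register_image(
    name="sabayon-server-%s" % time.strftime("%Y%m%d"),
    architecture="x86_64",
    root_device_name="/dev/sda1",
    block_device_map=bdm)
print(ami_id)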
If anybody is interested in my “BuildBot” scripts, just drop a line here.

I don’t know yet where to go from here, but you may be interested in reading this wiki entry: “Sabayon on EC2”. Moreover, you may also be interested in knowing that the aforementioned filesystem image tarballs are already available on the Sabayon mirrors, inside the iso/daily directory.

You can have a look at the currently available Sabayon AMIs there.

sysadm for a weekend

Probably many of you do not realize how much time it takes to maintain a healthy distro infrastructure. By “distro infrastructure” I mean all the servers and virtual machines serving Sabayon users and random website visitors.

We currently have 5 servers spread across several Italian universities and TOP-IX; all of them run Sabayon, and all of them need to be kept up to date and rebuilt from time to time to accommodate new services or new management architectures (like automated content provisioning for throw-away virtual machines, etc.). On top of two of them, there is a bunch of OpenVZ virtual machines, sandboxing several critical components.

During the last weekend, and the weekend before it, I’ve been busy rebuilding the WWW virtual machines (OpenVZ) in order to transform these simple, static virtual machines into “clusterable” ones (for instance, implementing boot-time content provisioning, decoupling static data from webapps, etc.). Moreover, I eventually found some time to decouple the database server (MySQL, at this time) from the WWW virtual machine. At some point we’ll be able to migrate some of the infrastructure to the cloud with almost no effort and, most importantly, we will be able to scale out quite easily during post-release periods. Unfortunately, at this time, cloud computing is still quite expensive, and we would need a lot more donations to be able to pay for it, so we need to squeeze all the RAM and CPU available out of the current infra.

On an unrelated note, I am really proud of my little monster, Entropy, which is doing a great job on our servers. Time to get back to Rigo coding now…

Sabayon 9: released

After more than 3 months of hard work, I am pleased to announce the immediate availability of Sabayon 9.

Have a look at the press release here and enjoy Sabayon.

Going to the Tizen Developer Conference

I am really happy to announce that I’ve been invited to the Tizen Developer Conference in San Francisco, CA next month (May 7-9). I’m going with a friend of mine, Michele Tameni, who seems to have won the Intel consolation prize (damn you!).
I really need to thank Giovanni Martinelli and Mauro Fantechi for sponsoring my trip there and across the whole San Francisco Bay Area (I’ve been told it’s full of Gentooers, gonna catch you!).

This is a great opportunity for me to meet a good deal of hardcore FLOSS devs and have a beer together (just one?). There are also several exciting talks I really can’t miss.

Feel free to contact me if you want to meet up for a drink. A post to the gentoo-core ML will follow next week.

Entropy & Rigo at Transifex

I created two projects at Transifex, aiming to get better localization support for Entropy (and Rigo, which has split .po files).

Sabayon Entropy Transifex Project Page:
https://www.transifex.net/projects/p/sabayon-entropy/

Sabayon Rigo Transifex Project Page:
https://www.transifex.net/projects/p/sabayon-rigo/

If you speak a non-English language, help us out!

I’m going to create the Anaconda (Sabayon Installer) translation project later today.

Sabayon runs Android Market / Google Play Apps natively

I’m sure this is going to blow your mind completely.

During the last two months I’ve been busy re-thinking the way package management works in Linux distros in terms of user interaction, and I ended up designing the new Sabayon Entropy GUI with a “less is more” approach. In layman’s terms, the idea is to carry out all the activities through two, and only two, main “interaction points”: a search bar and a notification bar (for sending feedback to the user).

But why not go further and support Android Market / Google Play applications directly in Rigo?
This is exactly what I did, and it’s been even easier than implementing the whole Entropy Package Manager, thanks to the fact that Android apps are self-contained and no dependency management is required.

So, yeah, Sabayon now supports Android Market Applications natively, both installation and runtime execution (through the Dalvik Java VM).

Rigo Application Browser, getting closer

Another two weeks have passed since my last blog post about Rigo. I have been able to implement several features since then, mainly thanks to Red Bull, I’d say. Jokes apart, I usually work on Rigo in the evening and in the early (6am) morning and, although hackers are all used to nightly coding marathons (me included, actually), the best comes out of me when the birds start singing.

git-log would tell you what changes have been made in this timeframe without me wasting time here instead of coding, but I think communicating them explicitly will help me get some feedback from all of you, in order to enrich my next UI design iteration. I’m not an interaction designer myself, and I don’t really want to be, but I learned a lot working with them in the past. Although we sometimes see interaction designers as kids with a pencil abusing erasers (I’m not serious!), they are vital to the success of your app (I learned that the hard way as well).

Dbus Service

RigoDaemon is the new Dbus-powered service that handles privileged operations on behalf of (concurrent) Rigo clients, such as Repositories Update, Application Management (install/remove/update) and System Upgrade. These are, in fact, the three main tasks RigoDaemon is going to carry out. The hard part has been implementing a way to do resource passing between Rigo clients (holding shared Entropy Resources locks) and RigoDaemon (wanting to acquire exclusive access to the same Entropy Resources); I called the whole process “rendezvous” (as that is what it is, in fact). The rendezvous has to be fault tolerant in the sense that, since it is about Rigo clients both releasing and later re-acquiring shared locks, those clients can go away at any time, and RigoDaemon is required to make sure it does not keep holding resources when no Rigo instances are connected.
RigoDaemon is going to replace the Magneto Dbus service, once Rigo replaces Sulfur.
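To give you an idea of the fault-tolerance half of the rendezvous, here is a heavily simplified sketch; bus, method and signal names are invented, this is not RigoDaemon’s real interface:

# Simplified rendezvous sketch: a D-Bus service that can ask clients to
# drop their shared locks, and that releases everything once they all leave.
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

IFACE = "org.example.RendezvousDaemon"  # made-up interface name

class RendezvousDaemon(dbus.service.Object):

    def __init__(self, bus):
        dbus.service.Object.__init__(self, bus, "/")
        self._clients = set()

    @dbus.service.method(IFACE, in_signature="s")
    def hello(self, client_id):
        self._clients.add(str(client_id))

    @dbus.service.method(IFACE, in_signature="s")
    def goodbye(self, client_id):
        self._clients.discard(str(client_id))
        if not self._clients:
            # the last client is gone: never keep holding resources
            # that nobody will come back for
            self._release_exclusive()

    @dbus.service.signal(IFACE)
    def resources_needed(self):
        """Emitted to ask every connected client to release its locks."""

    def _release_exclusive(self):
        pass  # the exclusive Entropy Resources lock would be dropped here

DBusGMainLoop(set_as_default=True)
daemon = RendezvousDaemon(dbus.SessionBus())
GLib.MainLoop().run()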

Concurrency

Multiple Rigo clients can be used concurrently, as the images below show. The IPC architecture allows RigoDaemon to “arbitrate” resource access, for example by kindly asking all the connected clients to release resources (due to the beginning of a specific Activity, see below). At the same time, it is possible to launch an activity (like a repository update or a system upgrade), close Rigo, reopen it and have it resume to the current activity state. This makes Rigo fault tolerant against bugs and critical system updates where GTK3 or other Rigo dependencies could (but shouldn’t) be subject to potential instabilities. Running GTK3 unprivileged is another indirect goal achieved.

Activities

Both RigoDaemon and Rigo implement their daemon and UI state through a Finite State Machine, moving from one activity to another atomically. This makes it possible to aggressively use multithreading without worrying too much about UI state and the like. I don’t want to start describing the boring details here, but since I introduced this way of seeing the whole User <-> Rigo interaction story, everything started to make sense: spawning a repo update event now means “hey, I want to carry out this activity, gimme the UI control” and “hey dude [dude being another Rigo client process], the other guy here wants to do Activity-X, could you please lock down and start listening to his events?”.
The concept is the same you find when developing for Android, and it is about how human beings deal with tools: you cannot use a hammer and a chainsaw at the same time in real life!
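The skeleton of such a state machine is tiny. A toy version of the atomic activity switch (names invented):

# Toy activity state machine: one activity at a time, atomic transitions.
import threading

class ActivityStates(object):
    READY, UPDATING_REPOSITORIES, MANAGING_APPLICATIONS, \
        UPGRADING_SYSTEM = range(4)

class ActivityStateMachine(object):

    def __init__(self):
        self._mutex = threading.Lock()
        self._activity = ActivityStates.READY

    def busy(self, activity):
        """Try to enter an activity; fail if another one is running."""
        with self._mutex:
            if self._activity != ActivityStates.READY:
                return False
            self._activity = activity
            return True

    def unbusy(self):
        with self._mutex:
            self._activity = ActivityStates.READY

A thread that fails to enter an activity just tells the user that something else is in progress, instead of deadlocking the UI.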

PolicyKit

Every privileged activity inside RigoDaemon is controlled by PolicyKit actions. There are three main actions (matching the tasks listed above): Repository Update, System Upgrade and Application Management. This way, administrators could even set up policies allowing users to update, upgrade or install apps with no privileges at all (read: *could*). PolicyKit gives us this fantastic flexibility.
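For the curious, this is roughly how a daemon interrogates PolicyKit about its D-Bus caller. The org.freedesktop.PolicyKit1 API below is the real one; the action id is made up:

# Ask PolicyKit whether the D-Bus caller may perform a given action.
import dbus

def is_authorized(sender_bus_name,
                  action_id="org.example.rigodaemon.update-repositories"):
    bus = dbus.SystemBus()
    authority = dbus.Interface(
        bus.get_object("org.freedesktop.PolicyKit1",
                       "/org/freedesktop/PolicyKit1/Authority"),
        "org.freedesktop.PolicyKit1.Authority")
    # identify the caller by its unique bus name
    subject = ("system-bus-name", {"name": sender_bus_name})
    flags = dbus.UInt32(1)  # AllowUserInteraction: may pop up an auth dialog
    granted, _challenge, _details = authority.CheckAuthorization(
        subject, action_id, {}, flags, "")
    return bool(granted)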

Repositories Update

The Activity I’ve been working on is Repositories Update. Even if this is usually carried out by Magneto (automatically updating repositories on our behalf), there are cases in which it is still important for users: they might not have Magneto loaded, repositories could have just been added to the list and Rigo would need to download them, etc.
This Activity is also a good testbed for the RigoDaemon <-> Rigo client IPC architecture. Fact is, it took me 5 iterations before getting everything working the way I wanted (I’m a picky guy, you know…).

The Repositories Update Activity, like all the other Activities that are going to be implemented soon, requires RigoDaemon to be running. For now, there’s a trick to start it in devel mode and then launch Rigo (you need pygobject-cairo and all the GLib introspection stuff, including introspection support in PolicyKit):

$ git clone git://git.sabayon.org/projects/entropy.git
$ cd entropy/rigo/RigoDaemon && sudo sh devel-start-daemon.sh &
$ cd ../ && python2 rigo_app.py --debug

What’s missing

The UI is not 100% complete, first of all.
The System Upgrade and Application Management activities are not implemented yet.
The Rigo preferences view is not there either (it will contain handy buttons to launch secondary activities, as well as to force repositories to update).
If all goes well, I expect to have everything working a month from now, not least because I’ll need to go back to studying by then.

This blog post took me one whole evening (split across several), so if you think it’s been worth it, don’t think twice and donate to Sabayon now: http://www.sabayon.org/donate

hello, twitter
