An argument for popcon

There is a package called popularity-contest that’s available in both Debian and Ubuntu (and likely other Debian derivatives). It grabs the list of packages installed on the machine and submits it to the Debian or Ubuntu popularity contests.

There you can see which are the most popular packages in Debian and Ubuntu. Unsurprisingly, dpkg, the package manager, is rather popular.

Why should you enable it? Popcon results give you solid numbers as to how many users you may have. Although the absolute numbers may not be too accurate, it’s a sample set, and if you examine the results over time you can start to get an idea of whether your software is growing in popularity or not.

But there’s more to it than that: if you can prove that a lot of people are installing your software on Debian, then you’re likely going to be able to argue for more work time being spent on improving the packaging for Debian.

Quite simply, enabling popcon is a way to help people like me argue for more time being spent on making Debian better.

The MySQL Cluster storage engine

This is one close to my heart. I’ve recently written on other storage engines: Where are they now: MySQL Storage Engines, The MERGE storage engine: not dead, just resting…. or forgotten, and The MEMORY storage engine. Today, it’s the turn of MySQL Cluster.

Like InnoDB, MySQL Cluster started outside of MySQL. Those of you paying attention at home may notice a correlation between storage engines not written exclusively for MySQL and being at all successful.

NDB (for Network DataBase) started inside Ericsson, originally written in a language called PLEX, which was internal to Ericsson and used in the AXE telephone switches. Mikael Ronstrom’s PhD thesis covered NDB and even covered things that were (at least at the time) yet to be implemented (it’s been quite a few years since I leafed through it last). The project at Ericsson (IIRC) was shelved a couple of times, but eventually got spun out into an Ericsson Business Innovation company called Alzato.

Some remnants of PLEX can still be found in the NDB source code (if you look really hard that is). At some point the code was fed through a PLEX to C++ converter and development continued from there. Some of the really, really old parts of the source may seem weird either due to this or some hand optimization for SPARC processors in the 1990s.

In 2003, MySQL AB acquired Alzato and work on a storage engine plugin for MySQL to interface to the (C++ API only) NDB was underway. Seeing as the storage engine interface was so simple, easy and modular, it would only take several years for the interface to NDB to become mature.

The biggest problem: NDB itself worked really well if your workload fit exactly what it was good at… if you deviated, horrific performance and/or crashes were not as uncommon as we’d have liked. This was a source of strain for many years with the developers and support team on one side and some of the less-than-careful sales team on the other. That being said, there have been some absolutely awesome sales people selling NDB into markets it truly fits, and this is why there’s barely a place in the world where placing a mobile phone call doesn’t go through MySQL Cluster at some point.

You should read Tomas Ulin’s post Celebrating 10 years @MySQL for a bit of an insight into how Alzato became part of MySQL AB (which later became part of Sun which became part of Oracle).

I joined the MySQL Cluster team at MySQL in December 2004, not too long after Alzato was acquired, but certainly when the NDB storage engine in MySQL 4.1 was in its very early stages – it was then by no means a general purpose database.

Over the years, MySQL Cluster gained both traction and features, making it useful for more applications. One of the biggest marketing successes of MySQL was the storage engine architecture and how you could just “plug in” different engines. The reality (of course) was far different and even though MySQL Cluster did just “plug in” to MySQL, it was certainly not a drop in replacement.

In MySQL 5.0, a bunch of neat new features were added:

  • Engine condition pushdown
    This enabled conditions on non-indexed columns to be evaluated on the data nodes rather than having every row pulled up to the SQL node to be evaluated.
  • Batched read interface
    So that queries like SELECT FOO FROM BAR WHERE A IN (1,2,3) were executed as a single network round trip rather than 3 round trips (see the sketch after this list).
  • Query cache
    Although the query cache should die, hey, at least it worked with NDB now…. in a way.
  • Reduced IndexMemory usage
    Remember, NDB is an in-memory database, so saving a bunch of bytes for secondary indexes was a big thing.
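
As a rough illustration of the first two of these (a sketch only: the table and column names are made up, and exact behaviour varies by version):

    -- Hypothetical NDB table for illustration.
    CREATE TABLE bar (
      a   INT PRIMARY KEY,
      foo VARCHAR(32)    -- non-indexed column
    ) ENGINE=NDBCLUSTER;

    -- Batched read interface: all three lookups go to the data nodes
    -- in a single network round trip instead of three.
    SELECT foo FROM bar WHERE a IN (1,2,3);

    -- Engine condition pushdown (a session variable in the 5.0/5.1
    -- era): the predicate on the non-indexed column is evaluated on
    -- the data nodes, and EXPLAIN reports "Using where with pushed
    -- condition" when it applies.
    SET engine_condition_pushdown = ON;
    EXPLAIN SELECT * FROM bar WHERE foo = 'baz';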

The first release with things I really worked on was MySQL 5.1. My first talk (to a packed room) at the MySQL User Conference in 2006 was on new features in MySQL Cluster 5.1. I’m still quite proud of that talk even though I know I am a much better speaker than I was then (it would have been great to have had more guidance… but hey, learning from experience is good too).

We added a lot in 5.1:

  • Integration with replication
    This is where row based replication was born. It was a real team effort, with the NDB kernel part (going from memory and bzr logs) having been written by Tomas, and Jonas seems to have a bunch of code there too. I worked a bunch on the NDB Injector thread in mysqld; Mats worked on the core row based code (at the time the most C++-like code in the entire MySQL world). You could now have a cluster replicate to another cluster, with the giant bottleneck that is MySQL replication.
  • Disk data
    You could store non-indexed columns on disk (see the sketch after this list). I implemented the INFORMATION_SCHEMA.FILES table for this; I was young and naive enough to think that the InnoDB guys would also fill out this table and all would be happy with the world (I’m lucky I haven’t been holding my breath on this one).
  • Variable sized columns
    A VARCHAR(255) would no longer always use 255 bytes if you just stored a single character in it. The catch? Only for in-memory columns.
  • User defined partitioning
    Because NDB desperately needed more options, we let the user choose how they wanted to partition up their data (per table).
  • Autodiscovery of schema changes
    This was a giant workaround to the epic mess that is FRM files and data dictionary things inside the MySQL Server. It is because of all this code that when I went to rewrite the whole thing for Drizzle I took the approach of “just pass it down to the engines, the server must not attempt to know better”. FWIW, I’m still right: if the server tries to be clever you now have two places for bugs to be, not just one.
  • Distribution awareness
    i.e. better selection of which data node to talk to for a particular query, reducing latency.
  • Online add/drop index.
    How long did it take for other engines to get this? Let’s not think about that :)
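
To give a flavour of two of these, here’s roughly what disk data and user defined partitioning looked like from the SQL side (a sketch from memory; all names and the partition count are made up):

    -- Disk data: a logfile group and tablespace have to exist first.
    CREATE LOGFILE GROUP lg1
      ADD UNDOFILE 'undo1.log'
      ENGINE=NDBCLUSTER;
    CREATE TABLESPACE ts1
      ADD DATAFILE 'data1.dat'
      USE LOGFILE GROUP lg1
      ENGINE=NDBCLUSTER;

    -- Non-indexed columns of t1 can now be stored on disk.
    CREATE TABLE t1 (
      id      INT PRIMARY KEY,
      payload VARCHAR(255)
    ) TABLESPACE ts1 STORAGE DISK ENGINE=NDBCLUSTER;

    -- User defined partitioning: choose (per table) how rows are
    -- spread across the data nodes.
    CREATE TABLE t2 (
      id INT PRIMARY KEY,
      b  INT
    ) ENGINE=NDBCLUSTER
      PARTITION BY KEY (id) PARTITIONS 4;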

After that the really interesting stuff started to happen, that is, the first major fork of MySQL: MySQL Cluster Carrier Grade Edition (CGE). Why? We had customers that simply couldn’t wait for MySQL 6.0 (after all, they’d still be waiting).

We had MySQL Cluster CGE 6.1, 6.2, 6.3 and now we’re into 7.0, 7.1 and 7.2. It is without doubt the longest serving and surviving MySQL fork. There were non-trivial changes inside the MySQL server too, which caused enough of a merge problem for the (small) Cluster team.

One big thing that you’re probably still all waiting for? Replication conflict detection and resolution in circular/multi-master replication setups. It was an NDB first and has been used in production for a decent amount of time.

I remember a hack while on an airplane that led to the CompressedBackup and CompressedLCP options (using zlib when writing out checkpoints/backups) – something that took more time than you’d think to go from prototype to production ready code.

The last few things I worked on in MySQL Cluster before going and working full time on Drizzle were the Windows port, online add/drop node and NDBINFO.

I’ve left out so many cool MySQL Cluster things that were worked on over the years (e.g. online add/drop column, rewriting of LCP code, micro GCPs, crash-safe DDL, the test suite). I really should mention the test suite: in lines of code it was over three times the size of MyISAM… and it was probably six years ago that I worked that out.

One thing to think about: when Innobase Oy was bought by Oracle and there was this effort to have a transactional storage engine that was inside MySQL AB rather than another company, I pointed out that I thought it would take less time adding the needed features to NDB and integrating it inside the MySQL server binary (and with the addition of online add node, you could go from a standalone DB server to a full cluster with no downtime) than it would for any of the alternatives to get to a suitable level of maturity.

I wish I’d put money on this… I put money on the MySQL 5.1 GA release date (which I was happy to lose), but in the years since you can see that InnoDB is still reigning supreme, with all that came to replace it having fallen away for one reason or another. MySQL Cluster is still on track to be the only real alternative (now also, funnily enough, owned by Oracle). I have to say, it’s kind of a hollow victory though; it would have been nice to see Falcon and PBXT be serious players in today’s market.

A few points on talking about the internet

There are a few things you should keep in mind when talking about the internet:

  • Use of the word “cyber” is not cool.
  • Whenever you hear the word “cyber” substitute it with “Information Super-Highway”… yes, it sounds that dated.
  • Use of the word “cyber” is applicable only in discussions relating to Doctor Who.
  • Whenever you see “” just think “AOL Keyword”. If you don’t know what AOL was, I likely have 437 trial CDs you can have.
  • There is no differentiation between life and online-life – just about everything is internet connected now. This very much counts for speech vs online speech.

A reminder: Leah and I are running in the upcoming MS Fun Run raising money for Australians affected by Multiple Sclerosis (MS). If you don’t sponsor us you’re going to die poor and alone. Really, I have an arrangement with all known deities to make it happen. Sponsor us here:

The MERGE storage engine: not dead, just resting…. or forgotten.

Following on from my fun post on Where are they now: MySQL Storage Engines, I thought I’d cover the few storage engines that are really just interfaces to a collection of things. In this post, I’m talking about MERGE.

The MERGE engine was basically a multiplexer down to a number of MyISAM tables. They all had to have the same structure, there was no parallel query execution, and it saw fairly limited use. One of the main benefits was that you could actually put more rows in a MyISAM table than your “files up to 2/4GB” file system allowed. With the advent of partitioning, this really should have instantly gone away and been replaced by it. It wasn’t.
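
For those who never used it, a minimal sketch (table names made up):

    -- Identically defined MyISAM tables...
    CREATE TABLE log2012 (msg VARCHAR(255)) ENGINE=MyISAM;
    CREATE TABLE log2013 (msg VARCHAR(255)) ENGINE=MyISAM;

    -- ...multiplexed behind a single MERGE table. INSERTs go to the
    -- last table in the UNION list; reads fan out over all of them,
    -- serially.
    CREATE TABLE logs (msg VARCHAR(255))
      ENGINE=MERGE UNION=(log2012, log2013) INSERT_METHOD=LAST;

    SELECT COUNT(*) FROM logs;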

It is another MySQL feature that likely exists due to customer demand at the time. It’s not a complete solution by any means: PARTITIONING is way more complete and universal…. and much harder to get right inside the MySQL server – which is why MERGE exists. It was easier to write a storage engine that wrapped MyISAM than it was to have any form of partitioning in the server.

One advantage of MERGE tables is that you could parallelize myisamchk to repair your broken MyISAM tables after a crash. One step better than no crash safety is at least parallel recovery. The disadvantage being that you’re using MERGE and MyISAM tables.

There is also the great security problem of MRG_MYISAM (the other name for MERGE tables): if you create a MyISAM table t1 and have a user able to access it, and they create a MERGE table that accesses t1 (say, m1), then if you revoke their access to t1 they’ll still be able to access t1 through m1.
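
A sketch of the hole (user, database and table definitions invented for illustration; assume t1 has a single INT column):

    -- The user can read t1 and create/read m1:
    GRANT SELECT ON db.t1 TO 'bob'@'localhost';
    GRANT SELECT, CREATE ON db.m1 TO 'bob'@'localhost';

    -- As bob: wrap t1 in a MERGE table.
    CREATE TABLE db.m1 (a INT) ENGINE=MERGE UNION=(t1);

    -- Later, direct access to t1 is revoked...
    REVOKE SELECT ON db.t1 FROM 'bob'@'localhost';

    -- ...but MERGE doesn't re-check privileges on its underlying
    -- tables, so bob can still read t1's rows:
    SELECT * FROM db.m1;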

MERGE still seems to exist in MySQL 5.6 without even a warning that it’ll go away… which I suspect it will. We long since got rid of it in Drizzle as, well, what you really want is a query rewrite engine that does views, partitioning, etc.

Can anyone think of a reason why you should still use MERGE tables in 2013? I can’t.

More on the FRM file format (and the minimum maximum number of columns in MySQL)

Over at the work blog, I wrote about what the true maximum number of columns in MySQL is, as well as the minimum maximum. Basically, the FRM file is ass and places bizarre, arbitrary limits on things due to what can only be seen as limitations in 1980s computing and automatically generated interfaces for entering rows on an 80×24 character VT100 terminal.

Full post here:

Refactoring Internal temporary tables (another stab at it)

A few weekends ago, I started to again look at the code in Drizzle for producing internal temporary tables. Basically, we have a few types of tables:

  • Standard
  • Temporary (from CREATE TEMPORARY TABLE)
  • Temporary (from ALTER TABLE)
  • Internal temporary (to help with query execution)

If you’re lucky enough to be creating one of the first three types, you go through an increasingly lovely pile of code that constructs a nice protobuf message about what the table should look like and hands all responsibility over to the storage engine as to how to do that. The basic idea is that Drizzle gets the heck out of the way and lets the storage engine do its thing. This code path looks rather different from what we inherited from MySQL. For a start, we actually have a StorageEngine object rather than just lumping everything into the handler (which we correctly name a Cursor). However… the final part, the internal temporary table code, is a bit closer to what we inherited from MySQL. There is a good reason for that: it’s ass.

For a start, the table::Singular object is still abused by Item_sum_distinct (see the setup() method) as a tuple (a table with no actual table). This is not ideal and just throws a spanner in the works for refactoring a bunch of code.

The second big problem is that create_tmp_table() doesn’t actually use any normal API calls, instead it manually sets up the table::Singular object. This includes setting up the fields for the table::Singular object in a slightly different way depending on which bit of code called create_tmp_table().

The third big problem is that it’s not storage engine agnostic. Instead of using any existing and sensible way to go and create a temporary table through the storage engine API, it instead creates a series of MI_COLUMNDEF structures which, as you may be able to guess, are MyISAM-specific internal data structures.

The fourth big problem is that if we end up using HEAP (again, like MyISAM, hard coded) we don’t even call the create table method on the engine. The HEAP (or MEMORY engine as it’s now known) is magic in that it can create tables on open()!

All of these issues make it really, really hard to have another engine with the ability to handle internal temporary tables. You may recall that MariaDB does include the ability to use the Aria engine for internal temporary tables. No, they did not refactor any of this code; they just made a copy of the code and put in Aria where MyISAM was, along with some #ifdefs for the feature.

Over the past several years I’ve tried a few times to tease this code out and start the process of turning it into something that is palatable. Every one of those times I’ve either failed or gotten sufficiently frustrated that I’ve given up.

I now have a new strategy though. After looking at the code for a good few hours a few weekends ago, I think I have an idea of where to start…. (now just for a few more free weekends to implement it).

Other MySQL branch code sizes

Continuing on from my previous posts, MySQL code size over releases and MariaDB code size, I’ve decided to also look into some other code branches. I’ve used the same methodology as in my previous few posts: sloccount, counting C and C++ code only.

There are also other branches around in pretty widespread use (if only within a single company). I grabbed the Google, Facebook and Twitter patches and examined them too, along with Percona Server 5.1 and 5.5.

Codebase                   LoC (C, C++)   +/- from MySQL
Google v4 patch 5.0.37     970,110        +26,378 (from MySQL 5.0.37)
MySQL@Facebook             1,087,715      +15,768 (from MySQL 5.1.52)
Twitter 5.5.29.t10         1,192,718      +3,624
Percona Server 5.1 trunk   1,066,418      +14,878 (from MySQL 5.1.66)
Percona Server 5.5 trunk   1,208,577      +19,483 (from MySQL 5.5.29), +142,159 (from PS 5.1)
Drizzle trunk              334,810

The Google patch has always had a reputation for being large, and with an extra 26kLOC of code it certainly is the biggest of any of the more current branches – it’s actually a surprise to me that it adds this much code.

The Facebook and Percona Server 5.1 branches are amazingly similar in how much extra code they add, and they’re not carbon copies of each other. The Twitter patch is quite notable for how little extra code it adds.

For giggles, I included Drizzle – which is (even with all the plugins) less than a third of the size of MySQL 5.1.

It’s clear that the Percona Server and Facebook patches introduce much less code than MariaDB does, which does go with the general wisdom of them being closer to Oracle MySQL than MariaDB is.

If we look at Percona Server, we see that with Percona Server 5.5 there is indeed a bunch more code than there was in Percona Server 5.1, with roughly 5,000 more lines of code than we’d expect from a simple port from MySQL 5.1 to MySQL 5.5. This feels about right: we’ve added new things to Percona Server 5.5 that weren’t in Percona Server 5.1.

MariaDB code size

Continuing on from my previous post, MySQL code size over releases.

I wanted to look at the different branches/patch sets of MySQL out there and work out how far from upstream they deviated. I’m just going to compare against whatever upstream version the most easily accessible version is based on (be it 5.0.x, 5.1.x or whatever).

For MariaDB versions, I removed innodb_plugin and replaced it with xtradb for stats purposes as the MariaDB innodb_plugin is essentially the same as upstream and I don’t want to artificially inflate the diff size.

The first three major versions of MariaDB were all based on MySQL 5.1. I used sloccount and only counted C and C++ code.

So, let’s look at some of the MySQL patch sets/branches that are around. Firstly, let’s look at MariaDB:

Codebase      LoC (C, C++)   +/- from MySQL              +/- from previous major version
MariaDB 5.1   1,210,168      +157,532                    0
MariaDB 5.2   1,227,434      +174,798                    +17,266 (since MariaDB 5.1)
MariaDB 5.3   1,264,995      +212,359                    +37,561 (since MariaDB 5.2)
MariaDB 5.5   1,377,405      +187,658 (from MySQL 5.5)   +112,410 (since MariaDB 5.3)

From my previous post on lines of code in MySQL versions, we learned that with MySQL 5.6 we saw a 354kLOC increase over MySQL 5.5. What is quite surprising is how close some of the MariaDB differences are to this. With MariaDB 5.5, we’re looking at a 187kLOC difference, which is roughly two thirds that of MySQL 5.6. What’s also interesting is that each incremental MariaDB release has not added nearly as much code as the MySQL 5.1 to 5.5 and 5.5 to 5.6 jumps did.

MariaDB LoC over major versions

The MariaDB code size has also been increasing; if we look at the graph above, you can really see the jump in code size over the past few releases.

If we look at the delta between MariaDB and MySQL, the first MariaDB release (MariaDB 5.1) was certainly a large jump. Each incremental MariaDB release (5.2 and 5.3) has been a smaller delta than the initial one. With MariaDB 5.5 we actually see a decrease in the delta from MySQL, which is something that’s interesting to look at.

If we were looking at a straight port of MariaDB 5.3 to be based off MySQL 5.5, we’d expect the delta to be around 137kLOC (what MySQL 5.1 to 5.5 is), but it isn’t. The difference from MariaDB 5.3 to MariaDB 5.5 is only ~112kLOC, and on the whole the delta decreases.

But what makes up this big initial jump for MariaDB? Let’s look at some of the MariaDB 5.1 only modules and what’s left:

MariaDB 5.1 component   LoC (MariaDB 5.1)
PBXT                    45,107
FederatedX              3,076
IBM DB2i                13,486
Total                   61,669
Other                   95,863

So the MariaDB delta is not large just because they included some existing modules; there’s more code in there than that, about as much as any major MySQL version bump.

Tomorrow we look at other MySQL branches, and we see that the MariaDB delta truly is significantly larger than any other MySQL branch.

Being highly irresponsible (or, HOWTO DoS nearly all RDBMSs)

In my 2013 talk, I had a big slide telling the audience how to do a simple Denial of Service attack against a MySQL server (post login). This was only one example of many others I could give, but I think it’s the simplest, and only requires the mysql command line tool and a single command. FYI, this also applies to PostgreSQL but I’ll leave the specifics up to somebody else to write.

There is a fundamental flaw in just about all MVCC databases that leaves a giant Denial of Service attack hole. It is the following: START TRANSACTION WITH CONSISTENT SNAPSHOT followed by a bunch of waiting. Since the database server has to maintain this read view, InnoDB will continue to grow UNDO until it has to extend the ibdata1 file (system table space).
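
In its simplest form (some_big_table is a stand-in for whatever write traffic your server normally sees):

    -- The attacker's connection: open a read view, then do nothing.
    START TRANSACTION WITH CONSISTENT SNAPSHOT;
    -- ...and simply stay connected.

    -- Meanwhile, on every other connection, normal writes continue:
    UPDATE some_big_table SET payload = REPEAT('x', 100);
    -- Purge can't advance past the open read view, so UNDO keeps
    -- growing and InnoDB keeps extending ibdata1 to hold it.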

It’s important to remember that you cannot shrink the system table space (unlike with file-per-table, where you can just run ALTER TABLE on any individual table and have it suddenly find itself a lot smaller).

As UNDO grows, InnoDB will faithfully expand the system table space until ENOSPC and then everything will fall in a heap.

In theory, you could have a system table space that doesn’t auto-extend, but then you’re relying on code paths to error out gracefully that I can pretty much bet are completely untested.

The only real way to avoid this is to do both of the following:

  1. Use the kill-idle-transactions feature from Percona Server
  2. Have a script that checks for long running transactions and just kills them (see the sketch after this list).
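
A minimal sketch of the second option, using the INNODB_TRX table (available from the InnoDB plugin and MySQL 5.5 onwards; the ten minute threshold is an arbitrary example):

    -- Find transactions that have been open for over ten minutes:
    SELECT trx_mysql_thread_id, trx_started
      FROM information_schema.INNODB_TRX
     WHERE trx_started < NOW() - INTERVAL 10 MINUTE;

    -- Then kill each offending connection (id from the query above):
    KILL 12345;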

Similar things affect just about any MVCC database system. You’ll also see similar things with file system and volume manager snapshots.

So is it highly irresponsible pointing this out? Of course it isn’t; this should be pretty well known to most DBAs already, and so should a whole bunch of other things. Remember all the things you saw in production and then went to hit your developers over the head for? Well, they’re all in this same category.

Go run a giant UPDATE, DELETE or ALTER TABLE on a big table in a replication setup and you’ll pretty much DoS your app, as nothing can get up-to-date read-only information from the slaves.

Considering that this merely scratches the surface of ways to DoS a database server, keeping post-authentication crashing bugs secret just seems… well… futile, even if you do accept security through obscurity as valid.

Further reading:

Those who do not know the future are doomed to repeat it

A couple of weeks ago, I attended the Open Source Developers Conference (OSDC) in Sydney where I gave the dinner keynote. I had previously given the dinner keynote at OSDC 2010 in Melbourne, where I explored a number of interesting topics “that I wasn’t really qualified to talk about.”

In writing the dinner keynote for 2010, I took the idea that people come to conferences to hear from experts in the field and decided that I should instead do the opposite of that: talk about all the things that I think are interesting but I’m not an expert in. So in 2010 I covered: the Drizzle database server (the only thing I was actually qualified to talk about), developing your own film, how much effort it takes to write a book, brewing your own beer, Bluehackers (and mental health in general), security (it was the time of Stuxnet and oppressive border security), censorship (and how government claims that the internet is both different to and just like publishing a book at the same time), Wikileaks, how perhaps we should go after child pornographers rather than waste money on totalitarian filters, feminism, codes of conduct at conferences, homophobia, a history of marriage and the notion of ‘traditional marriage’, the concept of freedom itself and a few pictures of vegetables made to look like faces. In the words of some attendees, “there was something to make everyone at least slightly uncomfortable at some point”.

My 2010 talk went really well; there was much applause and it inspired at least one person to go and brew their own beer (in itself a victory). Many thanks to Donna for spending a non-trivial amount of time helping me polish the final talk and ensure some of my most important points were communicated properly.

So for 2012, I felt I had some big shoes to fill. Picking a topic (and writing the talk itself) for a dinner keynote is tricky. You have a captive audience with a wide variety of interests (and likely a few partners of attendees who aren’t at all technically minded). I wanted a topic that could have a good amount of humour (after all, we’re at dinner, relaxing and chatting) as well as a serious message that would speak to all the developers in the room (after all, this was the Open Source Developers Conference). Needing a title for the talk much in advance of when I would start writing the talk, I started thinking along the lines of “Those who do not know UNIX are doomed to re-implement it – poorly” and “Those who do not know the past are doomed to repeat it” – thinking that there must be some good lessons that I’ve learned over the past years that could be turned into a dinner talk. I ended up settling on “Those who do not know the future are doomed to repeat it”.

I, of course, left most of the specifics to be determined much closer to the conference itself, as procrastination seems to be an integral part of writing a talk. Fast forward a while, and if you were nearby you would have heard me exclaim “Who had this dumb-ass idea for a talk?” and “well, it seemed like a good idea at the time.” Setting yourself constraints is good, and at least it narrowed the search space for constructing something that’d go down well. Next came “How on earth do I construct a cohesive narrative around that?”, as a whole bunch of fun anecdotes about what people in the past considered the future is great, but how do you weave a story around it? In thinking about what used to be the future (and indeed, researching it), I had the realisation that this in itself is a really good story and vehicle to talk about how to produce better software.

And so, I solidified a set of laws, and for mostly humorous purposes, I’ve called these “Stewart’s Laws”. So, we started with:

Those who do not know the future are doomed to repeat it.

Stewart’s 0th Law

Because in computing, we start counting from zero.

I then went on a grand tour of how we got to have the PC. Early personal computers being iterative improvements on technology that came before, and how packaging technology as an appealing product helps adoption and that no matter how good something is, if it’s too expensive, it’s never going to be mainstream. This last point was a homage to the great Hitchhiker’s Guide to the Galaxy, which was successful over the great Encyclopaedia Galactica for two reasons, one of which was “it was slightly cheaper”.

A platform which is more open will eventually succeed over ones that are more closed. (This really should have been a law… but I missed the opportunity). One example was Mozilla. The initial source release was way back in 1998 and this “quirky open source project” took a very long time to deliver a useful web browser (excluding all the internal Netscape development on this complete rewrite of the browser).

All complete rewrites of any sufficiently complex software take at least 5 years to be remotely usable.

Stewart’s 1st Law

With the insight that the more free platforms (the PC, Windows, the web, Mozilla) eventually win out, and this being a talk about the future, I could not possibly not cover “The Year of the Linux Desktop”. This was useful to cover the install and user experience of Debian 2.2 (potato). This was Linux in the year 2000 (with IPv6 support, and with World IPv6 Day only six months ago, this is certainly the future). It was not friendly.

But there was KNOPPIX, which built on what came before and showed the way, so that other distributions could create a situation where there are now many distributions of Linux that make running a free desktop no longer masochistic, but something that can be decidedly pleasant.

I (of course) had to cover the freedom in your pants. The cell phone. Specifically, how there is more free software running on a computer that fits in our pants pockets than there was storage in the computers we grew up with. It doesn’t matter whether Android is better than an iPhone or not; the more open, free and cheaper platforms always win. But really, it’s just iterative improvement on what came before.

All innovation is really just iterative improvement.

Stewart’s 2nd Law

Very rarely (if ever) is there a “eureka” moment that doesn’t build upon the work of others. Find your giant to stand upon so that you can see further.

We can, of course, get it wrong. I used the example of New Coke and wondered if Unity or GNOME3 are our “New Coke”, or if Windows 8 is the new Vista. But really, it’s not making a mistake that is bad; it’s not realising it and correcting course that is. What we need is CI. Not Continuous Integration (although that is part of it); I’m talking about Continuous Improvement.

Anybody who took a “Software Engineering” course at university will have read about, studied, and parroted things about “the waterfall model” and “software prototyping” and “incremental build model” and “spiral model” and maybe even “SCRUM” or XP (which seems to be jumping off cliffs and yelling at fish). You probably had to do an assignment where you wrote “We’re going to do X model” and then had to stick to it, quickly finding that it just didn’t quite work that way.

This is because all these static models of software development methodology are a bunch of dairy production byproduct – otherwise known as BOOLSHIT. There is no static way written in stone, and there certainly isn’t “one true way.”

The best battle plans don’t survive first contact with the compiler

Stewart’s 3rd Law

This law is obviously stolen, which leads me to:

Stealing good ideas is itself a good idea, that you should steal.

Stewart’s 4th Law

Software development is evolution by natural selection. Mutations in software battle it out and the fittest survive. This is even more true in free software development, as anyone is free to fork the product, mutate it and compete. In this way, free software accelerates the free market – it forces companies to continually add value rather than rely on vendor lock-in.

Our development processes also evolve. We try new things and keep what works. There may be a “state of the art” that we think exists, but really what matters is continuing to improve your development process. You don’t have to suddenly catch up, just improve.

  • Revision Control
    We’ve had RCS, CVS, Subversion. We’ve had bzr, hg and git. Distributed is obviously the current state of the art.
  • Code review
    And improving how we do code review. Could you review code better? Could we have automated code review?
  • assert(), make the compiler do the work, defensive coding
    Write code to do some of your code review for you.
  • Explode at compile time rather than runtime (i.e. not user visible)
    Detecting problems earlier is better.
  • Extensive Unit testing
    Test each component, have components be components, not spaghetti.
  • Extensive testing
    Test the system as a whole
  • Running the test suite
    Actually run the test suite
  • Reliable test suite
    Have the test suite be reliable, so that a failure really is a failure and not a false alarm.
  • Continuous Integration
    Always test how things go together
  • Test before integration
    Test before pushing to trunk, ensuring even further that trunk is always releasable.
  • Merge captain
    Takes approved code, merges it. This is a variant on the Linus model.
  • Automated merges
    Take the manual steps out, we can automate them (who needs to type 10 version control commands in when one will do)
  • Always releasable trunk
    “Release early, release often” refined to “release something that isn’t crap”
  • Release checklists
    There are probably different things you want to do upon release, check that you do them. For adding awesome new features, you want your marketing department to know about it. For awesome bug fixes, you want your support staff to know about it.
  • Continuous Deployment
    There is no environment like production environment.

This led me to two more laws:

Any system of sufficient size will have several versions of each component deployed simultaneously.

Stewart’s 5th Law

Constructing software is itself a system of sufficient size.

Stewart’s 5th Law, part B

This applies both to software you deploy yourself and software you release as a tarball (or however you do it). Even if we don’t like to think about it, when we release a software package we are slightly involved in deploying it. We can certainly make it easier or harder to deploy. There are always OS and library updates that will be out there, so there will always be your software running in different environments.

Not only will people using your software use it in different environments, the people developing your software will be too. No two developers have the same development environment setup. One will use a different editor, different shell, slightly different version of the compiler (maybe they haven’t applied an update yet) etc etc. We can’t program to the “one true environment” because no such thing exists.

So, what’s next?

Some of our older problems have good solutions, but many of the newer ones do not. How do we get the state of the art in software development to more people? What’s the next step to explore?

I encourage you to constantly think about your development process and what the future holds for it. After all, it is adapt or perish – the past is littered with the technological corpses of things that were “the future” but failed to innovate any further.

Photos from BarCampMel 2012

Just thought I’d post a couple of photos I took today at BarCampMel. Actually, this is technically 4 photos, as I’ve used a Fuji Instax shot in each one. The first is Ben making coffee: in the morning and the afternoon. The second shot is of an awesome partially automated brewing setup.



My first Perry

I’m pretty sure this was the first time I’ve ever had Perry. I’ve had plenty of beer and cider over the years, but never Perry. I’d like to try more of them. I can’t really relate this to anything else, except to say that it’s nice, not overwhelming and not too sweet. At 7.3% it packs a decent amount of alcohol too.

This one is a light yellow colour, and in front of the bottle you can see the little plush dolphin that we got around the time Sun acquired MySQL. Typically, you’d see photos of it next to Salmiakki and not something with as low an alcohol content as this.


Contributor Agreements kill Contributions

A good while ago now, there was a bit of activity from others discussing the impact that contributor agreements have on contributions. Most notably, from Simon Phipps and Michael Meeks:

I’ve been around this Free and Open Source Software thing for a while now and I’ve noticed a pattern. People who hack on free software seldom have a desire to spend time speaking to the company lawyers instead of doing their jobs.

If you require a contributor agreement signed, it doesn’t involve an individual. It involves a legal department. Being a free software developer is a different mindset than working on proprietary software. No longer are you just working on one bit of software, you can work on any bit of software. Find a bug in your compiler? Well, you can fix that. Find a bug in a library you’re using? You can both work around it and submit a patch that fixes it.

There is a difference between an open source project and an open source product. An open source product is easy – you just publish your source/binary packages with an open source license and do all your development and discussion inside the company. Easy and simple for most places to execute.

What is harder is having an open source project. This is where your company participates in the project, and is a trickier thing to get going – especially within large organisations that have historically been proprietary software houses. There are very few companies that have gotten this right, and even fewer that have gotten this right across the board. I would probably have to say that Red Hat is the company that consistently does this the best.

A low barrier to entry is what has made the largest, most successful free software projects what they are today. If you want your project to be an open source project and not an open source product, then you too must set a low barrier to entry. Contributor Agreements significantly raise the barrier to entry. Suddenly my 10 line patch to fix a bug turns into a discussion with my company’s lawyers and your company’s lawyers, and goes from taking an extra 5 minutes to send an email with a patch to a mailing list to something that takes hours and hours of my time.

Any time I spend speaking to lawyers is time not spent improving the world. By having a Contributor Agreement you turn a 5 minute task into one taking several hours, a task involving lawyers. You know what tasks involving lawyers are? Expensive. You now have developers not contributing to your project not because a lawyer said no, but because they worked out that the time and money that would be spent on contributing to your project isn’t worth it for their company.

The only people who enjoy talking business with lawyers are other lawyers and the mentally ill[1]. We’re developers, not lawyers – don’t make us talk to them for orders of magnitude more time than it takes to fix some bit of code.


[1] I have lawyer friends, and that’s social time, not work. I’ve also worked with great lawyers, but I do wish the world was more sane to begin with so that I didn’t need to spend as much time talking to them.

There is a story….

I have a friend who is fond of telling a story from way back in November 2008, at the OpenSQL Camp in Charlottesville, Virginia. This was relatively shortly after we had announced to the public that we’d started something called Drizzle (we did that at OSCON) and was even closer to the date I started working on Drizzle full time (which was November 1st). Compared to what it is now, the Drizzle code base was in its infancy. One of the things we hadn’t yet sorted out was the rewrite of the replication code.

So, I had my laptop plugged into a projector, and somebody suggested opening up some random source file… so I did. It was a bit of the replication code that we’d inherited from MySQL. Immediately we spotted a bug. In fact, between myself and Brian I think we worked out that none of the error handling in this code path ever even remotely worked.

Fast forward a bunch of years, and recently I had part of the replication code in MySQL 5.5 open and (again) instantly spotted a bug. Well.. the code is correct in 2 out of 3 situations…

It is rather impressive that the MySQL Replication team has managed to add the features they have in MySQL 5.6.

I’m also really happy with what we managed to do inside Drizzle for replication. Ripping out all the MySQL legacy code was a big step to take, and for a while it seemed like possibly the wrong one – but ultimately, it was very much the right thing to do. I love going and looking at the Drizzle replication code. I simply love it.


Many people may know that I’m a bit of a coffee fan. I do quite like a good espresso. These are, unfortunately, more rare than I would like. I know, I live in Melbourne and the average coffee quality is pretty damn high… but still, perhaps I’m just a bit of a coffee snob (oh wait, that’s where I buy my beans from).

This is a photo of the espresso I got at a place near Leah’s work the other week.