popcon-historical: a tool for monitoring package popularity in debian/ubuntu

I’ve just uploaded (where ‘just’ is defined as “a little while ago”) popcon-historical to github. It’s a rather rudimentary way to look at the popcon data from Debian and Ubuntu over time. It loads all the data into a Drizzle database and then has a small Perl web app to generate graphs (and CSV).

Github: https://github.com/stewartsmith/popcon-historical

I’ve also put up a project page on it: https://flamingspork.com/popcon-historical/

An example graph is this one of Percona Toolkit vs Maatkit installs in Ubuntu over time:

You can actually get it to graph any package and, unlike the graphs on debian.org, the package doesn’t have to be in the Debian archive to be graphed over time – it can be a package from third-party repos.

“We open source it, and then developers show up and do work for free”

Those who have been around the free and open source software world long enough have heard “We open source it, and then developers show up and do work for free” at least once and have called bullshit on it at least once.

It turns out that people don’t go and work on software for free. They either modify software to scratch their own itch (in which case they’re getting 99+% of the code for nothing, so contributing a small bit back is the equivalent of paying for it – with their time rather than money) or they do it because it’s a good bit of fun.

So why do software projects that are dual licensed with a commercial license get fewer outside contributions? I think it’s quite simple: people don’t tend to spend their spare time making other people money while making none for themselves. Such projects are left with contributions only from those being paid to work on them (usually by the company selling the commercial license) and from people/companies scratching an itch. Projects that aren’t dual licensed are more likely to have contributors from several companies, as then it’s not all-but-one company spending time and money to make another company money.

Stewart’s dot twenty rule

I realised I haven’t written on this for a while and I was asked about it again today.

Stewart’s dot twenty rule is that a piece of software is never really mature until a dot twenty release.

This was a variant of “never use a dot zero release” which has been around the industry for a long time (i.e. always wait for X.0.1).

My first written observation on my variant on this rule was back in 2006:

This is a really stupid metric of software maturity. It is, however, disturbingly accurate.

It seems to continue to be both really stupid and disturbingly accurate. The first few point releases are still going to have rough edges; once you get to about dot five you likely have something that’s intensely usable for a good number of people, by dot ten the more complex use cases should start to be okay, and once you get to dot twenty you could say it’s mature.

A topic for another time is how releasing often is one thing but maintaining a release is quite another.

An old note on the Storage Engine API

Whenever I stick my head into the MySQL storage engine API, I’m reminded of a MySQL User Conference from several years ago now.

Specifically, I’m reminded of a slide from an early talk there by Paul McCullagh describing developing PBXT. His “How to write a Storage Engine for MySQL” went something like this:

  1. Develop basic INSERT (write_row) support – INSERT INTO t1 VALUES (42)
  2. Develop full table scan (rnd_init, rnd_next, rnd_end) – SELECT * FROM t1
  3. If you’re sane, stop here.

A lot of people stop at step 3. It’s a really good place to stop too. It avoids most of the tricky parts that are unexpected, undocumented and unlogical (yes, I’m inventing words here).
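For flavour, here’s roughly what those first two steps look like against the handler API. The method names and HA_ERR_END_OF_FILE are the real ones; the ha_sketch class and its helper functions are hypothetical, and the bodies are only a sketch of what a real engine would do:

  // Step 1: INSERT support. 'buf' holds one row in the server's internal
  // row format; a real engine would encode and persist it here.
  int ha_sketch::write_row(uchar *buf)
  {
    return encode_and_store_row(buf);         // hypothetical engine internals
  }

  // Step 2: full table scan. rnd_init() positions us before the first row...
  int ha_sketch::rnd_init(bool scan)
  {
    scan_position= 0;
    return 0;
  }

  // ...rnd_next() fills 'buf' with each row in turn until we run out...
  int ha_sketch::rnd_next(uchar *buf)
  {
    if (!fetch_row_at(scan_position++, buf))  // hypothetical engine internals
      return HA_ERR_END_OF_FILE;              // tells the server the scan is over
    return 0;
  }

  // ...and rnd_end() releases whatever the scan needed.
  int ha_sketch::rnd_end()
  {
    return 0;
  }

Get those working and you can already INSERT and SELECT; everything beyond that is where the undocumented fun begins.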

MySQL vs Drizzle plugin APIs

There’s a big difference in how plugins are treated in MySQL and how they are treated in Drizzle. The MySQL way has been to create a C API in front of the C++-like (I call it C- as it manages to take the worst of both worlds) internal “API”. The Drizzle way is to have plugins be first class citizens and use exactly the same API as if they were inside the server.

This means that MySQL attempts to maintain API stability. This isn’t something worth trying for. Any plugin that isn’t trivial quickly surpasses what is exposed via the C API and has to work around it, or, it’s a storage engine and instead you have this horrible mash of C and C++. The byproduct of this is that no core server features are being re-implemented as plugins. This means the API is being developed in a vacuum devoid of usefulness. At least, this was the case… The authentication plugin API seems to be an exception, and it’s interesting to note that semisync replication is in fact a plugin.

So times may be changing… sort of. Yesterday I noted that some storage engine API features are only available if you’re InnoDB, and I’ve voiced my general disappointment in the audit API being unsuitable for implementing various forms of query logging already in the server (general query log, slow query log).

One thing to note: when the API is the same for both inside the server and a plugin, it makes initial refactoring very easy, and you quickly see the bits that could be improved.

Some storage engine features you only get if you’re InnoDB

I had reason to look into the extended secondary index code in MariaDB and MySQL recently, and there was one bit that I really didn’t like.

MariaDB:

share->set_use_ext_keys_flag(legacy_db_type == DB_TYPE_INNODB);

MySQL:

use_extended_sk= (legacy_db_type == DB_TYPE_INNODB);

In case you were wondering what “legacy_db_type” actually does, let me tell you: it’s not legacy at all, it’s kind of key to how the whole “metadata” system in MySQL works. For example, to drop a table, this magic number is used to work out what storage engine to call to drop the table.
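To make that concrete, here’s roughly the shape of the dance for a drop (paraphrased and heavily simplified – the function names are real 5.5-era server calls, but the flow here is a sketch, not the verbatim source):

  // All the server has is an FRM file: read the engine's magic number out
  // of it, resolve that to a handlerton, then instantiate a handler from
  // that engine so it can go and drop its own table.
  enum legacy_db_type db_type;
  dd_frm_type(thd, path, &db_type);                  // peek at the FRM's magic number
  handlerton *hton= ha_resolve_by_legacy_type(thd, db_type);
  handler *file= get_new_handler(NULL, thd->mem_root, hton);
  int error= file->ha_delete_table(path);            // that engine drops the table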

Now, those two extended-key snippets above basically kiss goodbye to the idea of a “pluggable storage engine” architecture. If you’re not InnoDB, you don’t get to have certain features. This isn’t exactly MySQL or MariaDB encouraging an open storage engine ecosystem (quite the opposite, really).

Having the MySQL server have this incredibly basic, busy and incomplete understanding of metadata has always been a bit of a mess. The code for reading a table definition out of the FRM file really does show its age, and has fingers all through the server.

If somebody was serious about refactoring server code, you’d certainly be looking here, as this code is a major source of arbitrary limitations. However, if you have the server and the engine(s) both having separate views of what is the “correct” state of metadata you end up with a mess (anyone who has had InnoDB be out of sync with FRMs knows this one). I worry that the FRM code will be replaced with something even less understandable by humans, again making the mistake that the server knows the state of the engine better than the engine does.

New libeatmydata release

Good news everyone! There’s a new libeatmydata release! I’ve put a source tarball up on the launchpad page: release-79.

This version packs:

  • RPM and Debian packaging in tree
  • A bug fix so that O_SYNC and O_DSYNC are properly discarded on 32-bit machines, both with and without _FILE_OFFSET_BITS being set (see the sketch below).
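For the curious, here’s a minimal sketch – in the spirit of libeatmydata, not its actual source – of how the LD_PRELOAD trick works: interpose open(2) and quietly strip the synchronous-I/O flags before handing off to the real libc call:

  // Build with: g++ -shared -fPIC -o eat_sketch.so eat_sketch.cc -ldl
  #include <fcntl.h>
  #include <dlfcn.h>
  #include <cstdarg>

  extern "C" int open(const char *pathname, int flags, ...)
  {
    // Find the next open() along the link chain (normally libc's).
    typedef int (*open_fn)(const char *, int, ...);
    static open_fn real_open= (open_fn) dlsym(RTLD_NEXT, "open");

    mode_t mode= 0;
    if (flags & O_CREAT)
    {
      va_list ap;
      va_start(ap, flags);
      mode= va_arg(ap, mode_t);
      va_end(ap);
    }

    // The whole point: quietly eat the synchronous-write flags.
    flags&= ~(O_SYNC | O_DSYNC);

    return real_open(pathname, flags, mode);
  }

The real library also has to interpose open64() (the _FILE_OFFSET_BITS variant this release fixes on 32-bit), plus fsync(), fdatasync() and friends.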

I’d love to hear any feedback and receive any patches (hopefully things still work well on MacOS X and Solaris). So far, libeatmydata has had contributions from the following people, and many thanks to them:

  • Stewart Smith
  • Alexey Bychko
  • Blair Zajac
  • Phillip Susi
  • Modestas Vainius
  • Monty Taylor
  • Olly Betts
  • Pavel Pushkarev
  • Elliot Murphy
  • Eric Wong
  • Tamas TEVESZ
  • Joachim Berdal Haga
  • Mohsen Hariri

The EXAMPLE storage engine

The Example storage engine is meant to serve mainly as a code example of the stub of a storage engine for example purposes only (or so the code comment at the start of ha_example.cc reads). In reality, however, it’s not very useful. It likely was back in 2004, when it could be used as a starting point for some simple new engines (my guess would be that more than a few of the simpler engines started from ha_example.cc).

The sad reality is that the complex, non-obvious bits of the storage engine API you actually care about are documented in ha_ndbcluster.cc, ha_myisam.cc and ha_innodb.cc. If you’re doing something that isn’t already done by one of those three engines: good luck.

Whenever I looked at ha_example.cc I always wished there was something more behind it… basically hoping that InnoDB would get a better and cleaner API with the server and would use that rather than the layering violations it has to do the interesting stuff.

That all being said, as a starting point, it probably helped spawn at least a dozen storage engines.

The ARCHIVE Storage Engine

I wonder how much longer the ARCHIVE storage engine is going to ship with MySQL…. I think I’m the last person to actually fix a bug in it, and that was, well, a good number of years ago now. It was created to solve a simple problem: write once, read hardly ever. Useful for logs and the like. A zlib stream of rows in a file.

You can actually easily beat ARCHIVE for INSERT speed with a non-indexed MyISAM table, and with things like TokuDB around you can probably get pretty close to its compression while at the same time having these things known as “indexes”.

ARCHIVE for a long time held this niche though and was widely and quietly used (and likely still is). It has the great benefit of being fairly lightweight – it’s only about 2500 lines of code (1130 if you exclude azio.c, the slightly modified gzio.c from zlib).

It also uses the table discovery mechanism that NDB uses. If you remove the FRM file for an ARCHIVE table, the ARCHIVE storage engine will extract the copy it keeps to replace it. You can also do consistent backups with ARCHIVE as it’s an append-only engine. The ARCHIVE engine was certainly the simplest example code of this and a few other storage engine API things.
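The discovery hook itself is pleasingly small. The signature below is the real 5.x handlerton discover one; the body is a sketch of what ARCHIVE does rather than the actual ha_archive.cc code:

  // When the server can't find db/name.frm, it asks each discovery-capable
  // engine for the table definition. ARCHIVE keeps a copy of the FRM in its
  // data file, so it can simply hand that copy back.
  static int archive_discover(handlerton *hton, THD *thd,
                              const char *db, const char *name,
                              uchar **frmblob, size_t *frmlen)
  {
    if (read_frm_copy_from_data_file(db, name, frmblob, frmlen))  // hypothetical
      return 0;   // definition recovered; the server recreates the FRM
    return 1;     // not ours, or no copy to give back
  }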

I’d love to see someone compare storage space and performance of ARCHIVE against TokuDB and InnoDB (hint hint, the Internet should solve this for me).

The MySQL Cluster storage engine

This is one close to my heart. I’ve recently written on other storage engines: Where are they now: MySQL Storage Engines, The MERGE storage engine: not dead, just resting…. or forgotten and The MEMORY storage engine. Today, it’s the turn of MySQL Cluster.

Like InnoDB, MySQL Cluster started outside of MySQL. Those of you paying attention at home may notice a correlation between storage engines not written exclusively for MySQL and being at all successful.

NDB (for Network DataBase) started inside Ericsson, originally written in a language called PLEX, which was internal to Ericsson and used in the AXE telephone switches. Mikael Ronstrom’s PhD thesis covered NDB and even covered things that (at least at the time) were yet to be implemented (it’s been quite a few years since I leafed through it last). The project at Ericsson (IIRC) was shelved a couple of times, but eventually got spun out into an Ericsson Business Innovation company called Alzato.

Some remnants of PLEX can still be found in the NDB source code (if you look really hard that is). At some point the code was fed through a PLEX to C++ converter and development continued from there. Some of the really, really old parts of the source may seem weird either due to this or some hand optimization for SPARC processors in the 1990s.

In 2003, MySQL AB acquired Alzato and work on a storage engine plugin for MySQL to interface to the (C++ API only) NDB was underway. Seeing as the storage engine interface was so simple, easy and modular, it would only take several years for the interface to NDB to become mature.

The biggest problem: NDB itself worked really well if your workload fit exactly what it was good at… if you deviated, horrific performance and/or crashes were not as uncommon as we’d have liked. This was a source of strain for many years with the developers and support team on one side and some of the less-than-careful sales team on the other. That being said, there have been some absolutely awesome sales people selling NDB into markets it truly fits, and this is why there’s barely a place in the world where placing a mobile phone call doesn’t go through MySQL Cluster at some point.

You should read Tomas Ulin’s post Celebrating 10 years @MySQL for a bit of an insight into how Alzato became part of MySQL AB (which later became part of Sun which became part of Oracle).

I joined the MySQL Cluster team at MySQL in December 2004, not too long after Alzato was acquired, but certainly when the NDB storage engine in MySQL 4.1 was in its very early stages – it was then by no means a general purpose database.

Over the years, MySQL Cluster gained both traction and features, making it useful for more applications. One of the biggest marketing successes of MySQL was the storage engine architecture and how you could just “plug in” different engines. The reality (of course) was far different and even though MySQL Cluster did just “plug in” to MySQL, it was certainly not a drop in replacement.

In MySQL 5.0, a bunch of neat new features were added:

  • Engine condition pushdown
    This enabled conditions on non-indexed columns to be evaluated on the data nodes rather than having every row pulled up to the SQL node to be evaluated (the server-side hook for this is sketched just after this list).
  • Batched read interface
    So that queries like SELECT FOO FROM BAR WHERE A IN (1,2,3) were executed as a single network round trip rather than 3 round trips.
  • Query cache
    Although the query cache should die, hey, at least it worked with NDB now…. in a way.
  • Reduced IndexMemory usage
    Remember, NDB is an in-memory database, so saving a bunch of bytes for secondary indexes was a big thing.
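As promised above, here’s the server-side hook that makes engine condition pushdown possible. cond_push() is the real handler method (see handler.h); the body and helper names below are purely illustrative:

  // The server offers the engine a condition; the engine hands back the
  // part it could NOT take (returning NULL means "all of it pushed down").
  const COND *ha_ndbcluster::cond_push(const COND *cond)
  {
    if (condition_evaluable_on_data_nodes(cond))   // hypothetical check
    {
      remember_pushed_condition(cond);             // hypothetical: attach to the scan
      return NULL;    // nothing left for the SQL node to re-evaluate
    }
    return cond;      // can't push it; the server filters rows as usual
  }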

The first release with things I really worked on was MySQL 5.1. My first talk (to a packed room) at the MySQL User Conference in 2006 was on new features in MySQL Cluster 5.1. I’m still quite proud of that talk, even though I know I am a much better speaker than I was then (it would have been great to have had more guidance… but hey, learning from experience is good too).

We added a lot in 5.1:

  • Integration with replication
    This is where row based replication was born. It was a real team effort, with the NDB kernel part (going from memory and bzr logs) having been written by Tomas; Jonas seems to have a bunch of code there too. I worked a bunch on the NDB injector thread in mysqld, and Mats worked on the core row based code (at the time the most C++-like code in the entire MySQL world). You could now have a cluster replicate to another cluster with the giant bottleneck that is MySQL replication.
  • disk data
    You could store non-indexed columns on disk. I implemented the INFORMATION_SCHEMA.FILES table for this; I was young and naive enough to think that the InnoDB guys would also fill out this table and all would be happy with the world (I’m lucky I haven’t been holding my breath on this one).
  • Variable Sized columns
    A VARCHAR(255) would actually not always use more than 255 bytes if you just stored a single character in it. The catch? Only for in-memory columns.
  • User defined partitioning
    Because NDB desperately needed more options, we let the user choose how they wanted to partition up their data (per table).
  • Autodiscovery of schema changes
    This was a giant workaround to the epic mess that is FRM files and data dictionary things inside the MySQL Server. It is because of all this code that when I went to rewrite the whole thing for Drizzle I took the approach of “just pass it down to the engines, the server must not attempt to know better”. FWIW, I’m still right: if the server tries to be clever you now have two places for bugs to be, not just one.
  • Distribution awareness
    i.e. better selection of which data node to talk to for a particular query, reducing latency.
  • Online add/drop index.
    How long did it take for other engines to get this? Let’s not think about that :)

After that the really interesting stuff started to happen, that is, the first major fork of MySQL: MySQL Cluster Carrier Grade Edition (CGE). Why? We had customers that simply couldn’t wait for MySQL 6.0 (after all, they’d still be waiting).

We had MySQL Cluster CGE 6.1, 6.2, 6.3 and now we’re into 7.0, 7.1 and 7.2. It is without doubt the longest-serving and longest-surviving MySQL fork. There were non-trivial changes inside the MySQL server too, which caused enough of a merge problem for the (small) Cluster team.

One big thing that you’re probably still all waiting for? Replication conflict detection and resolution in circular/multi-master replication setups. It was an NDB first and has been used in production for a decent amount of time.

I remember a hack while on an airplane that led to the CompressedBackup and CompressedLCP options (which used zlib when writing out checkpoints/backups) – something that took more time than you’d think to go from prototype to production-ready code.

The last few things I worked on in MySQL Cluster before going and working full time on Drizzle were the Windows port, online add/drop node and NDBINFO.

I’ve left out so many cool MySQL Cluster things that were worked on over the years (e.g. online add/drop column, rewriting of LCP code, micro GCPs, crash-safe DDL, the test suite). I really should mention the test suite: in lines of code it was over three times the size of MyISAM… and it was probably six years ago that I worked that out.

One thing to think about: when Innobase Oy was bought by Oracle and there was this effort to have a transactional storage engine that was inside MySQL AB rather than another company, I pointed out that I thought it would take less time to add the needed features to NDB and integrate it inside the MySQL server binary (and with the addition of online add node, you could go from a standalone DB server to a full cluster with no downtime) than it would for any of the alternatives to get to a suitable level of maturity.

I wish I’d put money on this… I did put money on the MySQL 5.1 GA release date (which I was happy to lose), but in the years since you can see that InnoDB still reigns supreme, with all that came to replace it having fallen away for one reason or another. It’s still on track for MySQL Cluster to be the only real alternative (now also, funnily enough, owned by Oracle). I have to say, it’s kind of a hollow victory though; it would have been nice to see Falcon and PBXT be serious players in today’s market.

TokuDB

Big news at Percona Live MySQL Conference and Expo this week: Tokutek open sourced TokuDB, thus making my previous post Where are they now: MySQL Storage Engines out of date in just a few days.

In this case, I really don’t mind. It’s rather exciting that they’ve gone ahead and done this – and it’s not just a code drop: https://github.com/Tokutek/ft-engine is where things are at, and recent commits were 2 and 18 hours ago, which means that this is being maintained. This is certainly a good way to grow a developer community.

While being a MySQL engine is really interesting, the MongoDB integration is certainly something to watch – and may be something quite huge for Tokutek (after all, it’s the database engine parts of Mongo that are the most troublesome – along with the client library license).

A few notes on InnoDB in MySQL 5.7.1

I’ve started poking around the MySQL 5.7.1 source tree (although just from the tarball, as I don’t see a bzr tree yet). I thought I’d share a few thoughts:

  • InnoDB temporary tables. Not REDO logged. What does this mean? It’s a huge step in removing the dependency on MEMORY and MyISAM engines for temporary tables used in query execution. With InnoDB temporary tables there is no reason for MEMORY engine to continue to exist, there is absolutely no way in which it is better.
  • InnoDB temp tables aren’t insert buffered
    This probably doesn’t really matter as you’re not going to be doing REDO logging for them (plus things are generally short lived)… but it could be a future area for performance improvement
  • The NO_REDO log mode appears to be implemented fairly neatly.
  • Improvements in InnoDB read-only mode. What does this mean? Maybe we can finally get rid of the oddity of compressed read-only MyISAM tables on read-only media (on the other hand, CDs and DVDs aren’t exactly a modern form of software distribution).
  • Some of the source code comments have improved… it’s getting easier to understand InnoDB. I’d still make the argument that if you need source code comments your code isn’t clear enough… but any step is an improvement. (That being said, InnoDB was always easier than the server.)
  • There is some pretty heavy refactoring of lock0lock.cc – I really need to sit down and poke at it a bunch.
  • The shared tablespace code (innodb system tablespace) has been heavily refactored. This also introduces tablespaces for temporary tables – and it appears to be implemented in the correct way.

I need to look into things a bunch more, and it’ll be really useful to see a bzr tree to better understand some of the changes.

More to come later, but that’s my quick look.

Where are they now: MySQL Storage Engines

There was once a big hooplah about the MySQL Storage Engine Architecture and how it was easy to just slot in some other method of storage instead of the provided ones. Over the years I’ve repeatedly mentioned how this wasn’t really the case and that it was remarkably non-trivial.

Over the years there have been many storage engines crop up and then disappear. So… where are they now?

  • ISAM
    This became MyISAM…. you know you’ve been around MySQL a long time if you’ve ever had to deal with an ISAM table.
  • Gemini
    This was the first big test of the GPL in court. Basically, you have to obey the GPL (see Wikipedia for more info). The code was released as GPL and development stopped. This has been dead since ca. 2002.
  • Amira – http://launchpad.net/amira
    Antony first mentioned this in 2008 on his blog. This was a continuation of the Gemini engine; you can actually go over to launchpad and get the code. This was one of the projects to have a transactional storage engine not owned by Oracle after Innobase Oy was acquired by them. It went nowhere special, as Netfrastructure was acquired instead, which became Falcon.
  • BDB
    Otherwise known as the BerkeleyDB engine. It was seldom used and never gained much of a userbase. It was unceremoniously dropped back in 2006, and both of its users barely noticed.
  • PBXT - http://pbxt.blogspot.com/
    I think we can credit PBXT with at least half of the features and performance improvements to InnoDB since it first emerged back in 2006. It got attention very quickly. Why? Because it was different. It had the very rare ability to outperform InnoDB in some places. You can still find PBXT in MariaDB, but sadly it can be hard to fund development of a MySQL storage engine, especially one as tied to MySQL as PBXT is, and it’s no longer under active development. Closely related was the Blob Streaming project which was way ahead of its time as an AlsoSQL access method. The good news is that the code was released under a BSD license in 2012 (was previously GPL). We even had PBXT in Drizzle for a while.
  • Blob Streaming (PBMS) - http://bpbdev.blogspot.com/
    This project was closely related to (but not depending exclusively on) PBXT. It embedded a HTTP server inside the database and could use it to read and write BLOBs. This was not only fairly cool but way ahead of its time. We owe the existence of both HandlerSocket and the memcached interface to InnoDB to PBMS (it was also an inspiration for the JSON server plugin for Drizzle, to address some of the use cases of the PBMS plugin).
  • Federated
    It’s still there… but is effectively unmaintained and dead. There’s even FederatedX in MariaDB which is an improvement, but still, the MySQL server really doesn’t lend itself kindly to this type of engine… it’s always been an oddity only suitable for very specific tasks.
  • Archive
    Although useful, effectively unmaintained. I kinda don’t want to say dead… but if it went away, I wouldn’t exactly be surprised.
  • CSV
    Currently used to access the log tables in MySQL… and hardly used otherwise. It’s odd that the same code doesn’t deal with SELECT INTO OUTFILE and LOAD DATA INFILE, and I doubt this will ever change. I’d say effectively niche/dead.
  • SolidDB
    Purchased by IBM, abandoned.
  • DB2
    Only ever on System i. Useful for very very few people… but you can still find it around if you’re one of them.
  • Infobright
    OMG it exists! This is probably because they’re largely just using the MySQL server as a way to implement the MySQL network protocol and all of the heavy lifting is done by their own code.
  • Xeround
    I’m quite surprised these guys are still around, as they’re a proprietary storage engine as a service, and initial testing wasn’t entirely promising.
  • TokuDB
    I cannot emphasize enough how much more interesting TokuDB would be if it were open source. It actually holds some promise… and with their recent work with mongo, perhaps this is a good way forward for them…
  • Maria/Aria
    Another “OMG Oracle just bought Innobase Oy” engine. This was a project to take MyISAM and turn it into a lean, mean, transactional storage engine machine. It’s still not there and I don’t think it ever will be.
  • Falcon
    This was the hot new thing. It came out of Netfrastructure, which MySQL AB acquired in order to help get a transactional storage engine after Innobase Oy was acquired by Oracle. If you’re keeping count, that’s three projects for a transactional storage engine. Falcon was the star though, receiving all the press and publicity (well before it was ready). There are many reasons why Falcon isn’t around today – the chief one probably being that Oracle bought Sun who had bought MySQL and thus a need for an “InnoDB replacement” instantly vanished. There was also immense management pressure for performance to be greater than InnoDB, without any allowance for or focus on correctness…. and this showed. This was quite disappointing as Falcon had a lot of good architectural things going for it.
  • BlitzDB - https://launchpad.net/blitzdb
    I had hoped we’d replace MyISAM with BlitzDB in Drizzle. It was a wrapper around Tokyo Cabinet to the storage engine API in Drizzle. Unfortunately, the ties to MyISAM are incredibly deep (see my recent post on internal temporary tables) and we never quite got there.

I think this is all the notable engines that were aimed at widespread adoption… what ones have I forgotten?

It’s interesting to note that only Archive, CSV, Xeround, TokuDB and Infobright can still be obtained anywhere, with the latter two only in their own distributions (one proprietary) and Xeround only as a service.

30 configuration options and counting

While Domas may have rather effectively trolled the discussion with his post on how to configure table/user statistics (which gave me a good chuckle, I do have to say), it’s at least incorrect for Percona Server, as you have to enable the “userstat” server option :)

That being said, once enabled there are no extra configuration variables to think about. This is a huge advantage over configuring PERFORMANCE_SCHEMA – which has a total of THIRTY configuration options (31 if you include the global enable/disable option).

Some of these thirty-odd configuration variables are only going to matter if you’re loading your own plugins, and even then, it’s probably only going to matter if they use the MySQL mutex implementations rather than, say, the standard pthread ones or even other synchronization primitives. It helps that the vast majority of non-InnoDB storage engines are dead. Go on – name one that’s in any form of usage (MyISAM doesn’t count – it’s effectively on death row).

This really makes me want to go and resurrect and finish the perf integration with Drizzle. The operating system provides a whole bunch of performance monitoring tools already, just expose them via SQL and be done with it.

Drizzle and Google Summer of Code 2013

Those interested in hacking on Drizzle for the Google Summer of Code this year should certainly jump on the mailing list and IRC channel and work out what they may want to do. I (and others) are happy to supervise this year.

Drizzle is a relational database server with an accessible C++ code base that has been involved with GSoC for a number of years now.

Refactoring Internal temporary tables (another stab at it)

A few weekends ago, I started looking again at the code in Drizzle for producing internal temporary tables. Basically, we have a few types of tables:

  • Standard
  • Temporary (from CREATE TEMPORARY TABLE)
  • Temporary (from ALTER TABLE)
  • Internal temporary (to help with query execution)

If you’re lucky enough to be creating one of the first three types, you go through an increasingly lovely pile of code that constructs a nice protobuf message about what the table should look like and hands all responsibility over to the storage engine as to how to do that. The basic idea is that Drizzle gets the heck out of the way and lets the storage engine do its thing. This code path looks rather different from what we inherited from MySQL. For a start, we actually have a StorageEngine object rather than just lumping everything into the handler (which we correctly name a Cursor). However… the final part, the internal temporary table code, is a bit closer to what we inherited from MySQL. There is a good reason for that: it’s ass.

For a start, the table::Singular object is still abused by Item_sum_distinct (see the setup() method) as a tuple (a table with no actual table). This is not ideal and just throws a spanner in the works for refactoring a bunch of code.

The second big problem is that create_tmp_table() doesn’t actually use any normal API calls, instead it manually sets up the table::Singular object. This includes setting up the fields for the table::Singular object in a slightly different way depending on which bit of code called create_tmp_table().

The third big problem is that it’s not storage engine agnostic. Instead of using any existing and sensible way to go and create a temporary table through the storage engine API, it creates a series of MI_COLUMNDEF structures which, as you may be able to guess, are MyISAM-specific internal data structures.

The fourth big problem is that if we end up using HEAP (again, like MyISAM, hard coded) we don’t even call the create table method on the engine. The HEAP (or MEMORY engine as it’s now known) is magic in that it can create tables on open()!

All of these issues make it really, really hard to have another engine with the ability to handle internal temporary tables. You may recall that MariaDB does include the ability to use the Aria engine for internal temporary tables. No, they did not refactor any of this code; they just made a copy of it, put Aria in where MyISAM was, and added some #ifdefs for the feature.

Over the past several years I’ve tried a few times to tease this code out and start the process of turning it into something that is palatable. Every one of those times I’ve either failed or gotten sufficiently frustrated that I’ve given up.

I now have a new strategy though. After looking at the code for a good few hours a few weekends ago, I think I have an idea of where to start…. (now just for a few more free weekends to implement it).
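For what it’s worth, the rough shape I have in mind looks something like this – entirely hypothetical, with invented names rather than current Drizzle code – describing the internal temporary table with the same table protobuf message the other three table types use, and handing creation to the engine through the normal StorageEngine API:

  // Describe the internal temporary table the same way CREATE TABLE does...
  message::Table proto;
  proto.set_name(make_internal_tmp_name());        // hypothetical helper
  proto.set_type(message::Table::INTERNAL);        // assuming such a type exists
  add_fields_for_select_list(proto, select_list);  // hypothetical helper

  // ...then let the chosen engine create it, exactly as for any other table.
  plugin::StorageEngine *engine= chooseTmpTableEngine(session);  // hypothetical
  engine->doCreateTable(session, proto, identifier);             // hypothetical signature

No MI_COLUMNDEF, no magic HEAP create-on-open, and nothing where the server pretends to know the engine’s business better than the engine does.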

diffstat of MySQL 5.6 versus 5.5

Yesterday I wrote about what the diffstat between MySQL 5.5 and MariaDB 5.5 was, and previously to that, about the MariaDB code size as reported by sloccount. Let’s look at MySQL 5.6.

A naive wc-based “lines of code” count for the MySQL 5.6 sql/ directory is ~490kLOC, which contrasts with MySQL 5.5 being ~375kLOC by the same measure. If we diffstat the sql/ directory like I did for MariaDB 5.5, we get:

 357 files changed, 172871 insertions(+), 67922 deletions(-)

Versus, as you remember from yesterday for MariaDB 5.5 over MySQL 5.5:

 250 files changed, 83639 insertions(+), 23090 deletions(-)

The MySQL 5.5 to 5.6 sql/ changes line up with what I found in my post MySQL modularity, are we there yet? in that the core server code for MySQL has grown by about 100,000 lines of code.

The jump from MySQL 5.5 to MariaDB 5.5 is a smaller one than jumping from MySQL 5.5 to MySQL 5.6, at least in terms of changed server code.

Any judgement on whether a smaller diff is a safer jump will rest more on the quality of that code than anything else. As we’ve seen previously, modularity isn’t coming to the MySQL code base any time soon.

So what about the diffstat of MariaDB compared to MySQL?

So, I’ve looked at what sloccount says about the differences between versions of Oracle MySQL and the various MySQL branches around. What I haven’t looked at is the diffstat. Firstly, let’s look at MariaDB.

I’m going to look at MariaDB 5.5.29 as compared to MySQL 5.5.29, both checked out from bzr. A naive diffstat would give us:

 5261 files changed, 1086165 insertions(+), 122751 deletions(-)

And this looks like an awful lot of changed code: some 1,086,165 lines! This actually includes a whole other copy of InnoDB in the form of XtraDB. If we take that into account we get:

 5032 files changed, 864997 insertions(+), 125099 deletions(-)

Which is still incredibly high. Let’s look at what’s changed though. We actually see a bunch of changes in the test suite, some of which are relatively harmless, while others, like the change to rpl_tests/rpl_innodb.test that adds a “--replace_result MyISAM InnoDB” line, are awfully odd (possibly legitimate, but it stuck out).

In the end, I came up with this diff command, which I think gives the best picture of the code difference between MySQL 5.5 and MariaDB 5.5:

 diff -Nru --exclude=BUILD* --exclude=.bzr* --exclude debian* \
--exclude=man* --exclude=mysql-test* --exclude=win* \
--exclude=unittest* --exclude=test* \
--exclude=support-files* --exclude=README \
--exclude=Docs --exclude=CMakeLists.txt \
--exclude=COPYING.LESSER --exclude=INSTALL* \
--exclude=KNOWN_BUGS.txt \
--exclude=cmake* mysql-5.5.29/ mariadb-5.5.29/

This is not to discount the build and test changes that MariaDB have made, but in this case I feel they distort the numbers a bit and I’ve previously just been counting C and C++ code, so it’s probably fairer this way.

We end up with a diffstat of:

 1156 files changed, 326081 insertions(+), 42751 deletions(-)

If we then exclude the copyright notice changes and any whitespace by changing the start of the diff command to this:

diff -NruiEbwB --ignore-matching-lines='Copyright.*Monty' \
--ignore-matching-lines='Copyright.*Oracle'

We end up with a diffstat of:

 1129 files changed, 322821 insertions(+), 39588 deletions(-)

Which is a little different to what I found in my previous post (MariaDB code size) that just used sloccount. There we found that MariaDB 5.5 was 187,000 more lines of code than MySQL 5.5, while here we find the difference to be 283,000 lines of code. I suspect these differences lie in how diff and sloccount count things. If you do a naive count of the number of lines in source files in the sql/ directory you get 375kLOC, while sloccount says 256kLOC.

There is still some noise in this number, as there are some copyright notices for some of the strings code that change, but this doesn’t seem to be too much. What about server code though? If we just diffstat the sql/ directory (core server code), then we get:

 250 files changed, 83639 insertions(+), 23090 deletions(-)

Which is still nothing to sneeze at: sloccount tells me that MySQL 5.5.29 only has 256kLOC in the sql/ directory to begin with, and a naive wc count puts it at about 375kLOC.

Which is bigger: MySQL or PostgreSQL?

From my previous posts, we have some numbers (excluding NDB) for the size of MySQL, so what about PostgreSQL? Here, I used PostgreSQL git trunk and classed things in the contrib/ directory as plugins. I put the number of lines of code in the src/backend/storage directory down as storage engines LoC but did not count it as non-kernel code.

 Version            Total LoC   Plugin LoC   Storage Engines LoC   Remaining (kernel)
 MySQL 5.5.30         858,441        2,706               171,009   684,726 (79% kernel)
 MySQL 5.6.10       1,049,344       29,122               236,067   784,155 (74% kernel)
 MariaDB 5.5        1,142,118       11,781               304,015   826,322 (72% kernel)
 Drizzle trunk        334,810       31,150               130,727   172,933 (51% kernel)
 PostgreSQL trunk     648,691       61,934                17,802   586,757 (90% kernel)

What we can see is that the PostgreSQL kernel size is actually smaller than any recent MySQL version (5.1 was slightly smaller). This is rather interesting, as it is generally thought that PostgreSQL does more than MySQL. What’s more telling is total code size: PostgreSQL is about half of MySQL 5.6 or MariaDB 5.5. Only Drizzle ends up being smaller, which makes sense as it “does less”.