Ghosts of MySQL Past, Part 8: The First Fork.

This is the 8th installment in the rather long series that started with Part 1 about a month ago.

Back in 2006, we were in a situation where MySQL 5.0 had taken forever, and the first “GA” release was not suitable for production. Looking towards MySQL 5.1, it was also unlikely to be out any time soon. The MySQL Cluster team had customers who needed new features in a stable release. The majority of those users didn’t use the MySQL server at all: they used the C++ NDB API directly for the vast majority of queries, so the vast majority of release-blocker bugs in the MySQL server would not affect the production readiness of MySQL Cluster for these customers.
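For those who haven’t seen it, the NDB API is a C++ interface that talks straight to the data nodes, bypassing the SQL layer entirely. Here’s a minimal sketch of a primary-key read; the connectstring, table and column names are made up for illustration, and error handling is trimmed right down.

// Minimal NDB API primary-key read. The connectstring and the table/column
// names ("subscriber", "imsi", "tower_id") are illustrative only.
#include <NdbApi.hpp>
#include <cstdio>

int main()
{
    ndb_init();
    Ndb_cluster_connection conn("mgmhost:1186");   // assumed management server
    if (conn.connect() != 0 || conn.wait_until_ready(30, 0) != 0)
        return 1;

    Ndb ndb(&conn, "test_db");
    ndb.init();

    const NdbDictionary::Dictionary *dict = ndb.getDictionary();
    const NdbDictionary::Table *tab = dict->getTable("subscriber");

    NdbTransaction *trans = ndb.startTransaction();
    NdbOperation *op = trans->getNdbOperation(tab);
    op->readTuple(NdbOperation::LM_Read);
    op->equal("imsi", 123456789);                  // primary key lookup
    NdbRecAttr *tower = op->getValue("tower_id", NULL);

    if (trans->execute(NdbTransaction::Commit) == 0)
        printf("tower_id = %u\n", tower->u_32_value());

    ndb.closeTransaction(trans);
    ndb_end(0);
    return 0;
}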

So, the decision was wisely made to do separate releases from a separate tree for MySQL Cluster. This was named MySQL Cluster: Carrier Grade Edition and exists to this day.

The main use case for MySQL Cluster at this time was running telephone networks, specifically the Home Location Register databases of GSM phone networks. Basically, you need to keep a database of which tower each subscriber is associated with, so that when you go to make a phone call (or send an SMS) the network can properly route the call. This means there are some real-time response requirements and hardcore availability requirements, which demand a special type of database.

NDB has a long history (some of it detailed in a previous post), but for those who are kind of interested in internals, I’ll quote Frazer Clement (now a long-time MySQL Cluster developer, although I completely forget which year he joined the team, which is just slightly embarrassing):

…Erlang and Ndb Cluster share some Plex heritage, which can still be seen in their architectures today. Since Plex, Erlang has mated with Prolog, and Ndb Cluster was involved in a car crash with C++.

The customers for MySQL Cluster were not the ones who bought the $5000 support option… Typically, paying for the addition of a relatively major feature was considered quite okay and even “normal”. After all, the effort that goes into constructing the entire software stack of a large cell phone network is rather large, and MySQL Cluster would end up being a very small part of that.

Many major features went into MySQL Cluster first, sometimes years before they made it into the general MySQL Server: row-based replication, circular replication with conflict resolution, and online DDL (add/drop index and column). In fact, it still incredibly frustrates me that we solved online add column in NDB so many years ago and you still can’t add a column to an InnoDB table without some serious planning.

The key thing about MySQL Cluster releases? They happened. They were also regular, they addressed customer issues, and new major versions brought features that worked for the use cases of those who needed them.

Interestingly enough, you may recognize the person who ran the MySQL Cluster team as the same person who has overseen the now regular release cycle of the MySQL Server itself (and has now been in the MySQL world for over ten years).

NDB$INFO with SQL hits beta

Bernhard blogged over at http://ocklin.blogspot.com/2010/02/mysql-cluster-711-is-there.html that MySQL Cluster 7.1.1 Beta has been released. The big feature (from my point of view) is the SQL interface on top of NDB$INFO. This means there is now full infrastructure from the NDB data nodes right out to SQL in the MySQL Server for adding monitoring to any bit of the internals of the data nodes.
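To give a feel for what this enables: the data node internals become just another set of tables you can query from any MySQL client. A small sketch using the MySQL C API is below; the table and column names (ndbinfo.memoryusage with node_id, memory_type, used, total) are assumptions based on the 7.1-era ndbinfo schema, so adjust to whatever your release actually ships.

// Query an ndbinfo table via the MySQL C API. Table/column names are assumed.
#include <mysql.h>
#include <cstdio>

int main()
{
    MYSQL *conn = mysql_init(NULL);
    if (!mysql_real_connect(conn, "127.0.0.1", "monitor", "secret",
                            "ndbinfo", 3306, NULL, 0))
        return 1;

    if (mysql_query(conn,
            "SELECT node_id, memory_type, used, total FROM memoryusage") == 0)
    {
        MYSQL_RES *res = mysql_store_result(conn);
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)))
            printf("node %s: %s %s/%s bytes\n", row[0], row[1], row[2], row[3]);
        mysql_free_result(res);
    }

    mysql_close(conn);
    return 0;
}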

Drizzle FRM replacement: the table proto

Drizzle originally inherited the FRM file from MySQL (which inherited it from UNIREG). The FRM file stores metadata about a table: what columns it has, what type those columns are, what indexes exist, any default values, comments, etc. In the days of MyISAM, this worked relatively well. The row data was stored in table.MYD, indexes on top of it in table.MYI, and information about the format of the row in table.FRM. Since MyISAM itself wasn’t crash safe, it didn’t really matter that creating/deleting the FRM file along with the table wasn’t either.

As more sophisticated engines were introduced (e.g. InnoDB) that had their own data dictionary, there started to be more of a problem. There were now two places storing information about a table: the FRM file and the data dictionary specific to the engine. Even if the data dictionary of the storage engine was crash safe, the FRM file was not plugged into that, so you could end up in a situation where the storage engine
recovered from a crash okay, but the FRM was incorrect for what the engine recovered to. This would always require manual intervention to find out what went wrong and then fix it (in some rather unusual ways).

When the MySQL Cluster (NDB) engine was introduced, a new set of problems arose. Now the MySQL server was connecting to an existing database, where tables could be created on other nodes connected to the cluster. You not only had the problems of crash recovery, but also the problem of keeping the FRM files in sync across many nodes, requiring all sorts of interesting solutions that, for the most part, do work.

The “obvious” solution to some of these problems would be for an engine to write out an FRM file itself. This is much easier said than done. The file format was never designed to be read and written by multiple pieces of software; the code that did the reading and writing inside the server was not reusable elsewhere; and the only documentation (that isn’t a decent chunk of the MySQL source tree) is the rather incomplete definition on the MySQL Internals wiki (http://forge.mysql.com/wiki/MySQL_Internals_File_Formats) – not nearly enough to write a correct FRM file, as the specifics are very, very odd.

Our goals for reworking the metadata system in Drizzle were to allow engines to own their own metadata (removing any opportunity for inconsistencies between the engine and the ‘FRM’) and, for engines without their own data dictionary, to replace the FRM file format with something simple and well documented.

One option was to use SQL as the standard storage format, but SQL is rather non-trivial and expensive to parse, especially if we were to use it as the preferred way of communicating table definitions with storage engines. We had been looking at the protobuf library (http://code.google.com/p/protobuf/) ever since its first release, and it has a number of very nice characteristics: a description language for a data structure that is used to generate APIs for reading and writing it in a number of programming languages, and a standard (documented) way to serialize the data structure.

After a bit of discussion, we arrived at a good outline for the table definition proto. The current one can always be found in the Drizzle source tree at drizzled/message/table.proto. The current format is very close to final (i.e. one that we’ll support upgrades from).
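To illustrate what this buys us, here’s a sketch of the pattern. The message definition below is a simplified, hypothetical stand-in rather than the real drizzled/message/table.proto, but the generated C++ API (set_name, add_field, SerializeToString, ParseFromString) is the standard one protoc produces.

// Sketch only: a simplified, hypothetical table definition message.
// Assume this lives in table_sketch.proto:
//
//   message Table {
//     required string name   = 1;
//     required string engine = 2;
//     message Field {
//       required string name = 1;
//       required string type = 2;
//     }
//     repeated Field field = 3;
//   }
//
// protoc generates table_sketch.pb.h / .cc; the rest is standard protobuf API.
#include "table_sketch.pb.h"
#include <string>

int main()
{
    Table t;
    t.set_name("t1");
    t.set_engine("ARCHIVE");
    Table::Field *f = t.add_field();
    f->set_name("a");
    f->set_type("INT");

    // Serialize to a string (or straight to a file or stream)...
    std::string blob;
    t.SerializeToString(&blob);

    // ...and read it back: anything linking the generated class can do this.
    Table copy;
    copy.ParseFromString(blob);
    return copy.field_size() == 1 ? 0 : 1;
}

Because the description language is documented and the serialization is standard, any tool in any supported language can read or write the same definition, which is exactly what the FRM format never offered.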

The process of modifying the Drizzle code base so that it would write (and read) a file format different to the FRM isn’t worth going into too much here, although there were some interesting hurdles to overcome. One was that the FRM file contains a binary image of the default row for the table (in the row format that the server uses); we now store the default value for each column in the proto and generate the default row when we read the proto. Another was removing and refactoring “pack_flag” – the details of which should only be extracted from Jay or Stewart with a liberal application of fine ale.
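A rough illustration of that first hurdle, reusing the simplified message sketched above and assuming an extra optional string default_value field on Field (the real proto expresses defaults differently):

// Build an in-memory default row from per-column defaults carried in the
// proto. Purely illustrative: assumes the hypothetical Table/Field messages
// from the earlier sketch plus an optional string default_value on Field.
#include "table_sketch.pb.h"
#include <string>
#include <vector>

std::vector<std::string> build_default_row(const Table &t)
{
    std::vector<std::string> row;
    row.reserve(t.field_size());
    for (int i = 0; i < t.field_size(); ++i)
    {
        const Table::Field &f = t.field(i);
        // Empty string stands in for NULL/engine default here; the real code
        // packs this into the server's internal row format instead.
        row.push_back(f.has_default_value() ? f.default_value() : "");
    }
    return row;
}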

The end result is that we now have storage engines that are completely responsible for their own metadata. One example is the ARCHIVE engine. In the CREATE TABLE code path, the ARCHIVE storage engine gets the table definition in an object that represents the table proto. It can examine the parameters it needs and then either store the proto directly or convert it into its own format. Since ARCHIVE is simple, it just serialises the table proto (using a standard function provided by the protobuf library) and stores it in the .ARZ data file for the table. This instantly makes the ARCHIVE storage engine crash safe for CREATE and DROP TABLE, as there is only one file on disk, so there are no two files to get out of sync.

If an engine does not have its own data dictionary, it can still use the default implementation which just stores the serialised table proto in a file on disk.

We can also now use this interface to move INFORMATION_SCHEMA into its own storage engine. This means we can remove a lot of special-case code throughout the server for INFORMATION_SCHEMA and instead just have an INFORMATION_SCHEMA storage engine that says which tables it provides in the INFORMATION_SCHEMA database. Because the table definition is now in a documented format with a standard API, this becomes a relatively trivial exercise.

What we’re all looking forward to is when the InnoDB data dictionary is linked into the new interface and we can have a truly crash safe database server.

Another wonderful side effect is that since we now have a standard data structure for representing a table definition, we can integrate this with the replication system. In the “near” future, we can represent a CREATE TABLE in the replication stream as a table proto rather than the raw SQL. If you wanted to apply the replication stream to a different database server, you would then only have to write a table proto to SQL converter. If the target database system doesn’t do SQL at all, you could generate API calls to create the table.
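As a rough illustration of how small such a converter can be (again using the simplified, hypothetical message from the sketch above rather than the real table proto):

// Turn a (simplified, hypothetical) table proto back into a CREATE TABLE
// statement. A real converter would handle indexes, defaults, options, etc.
#include "table_sketch.pb.h"
#include <sstream>
#include <string>

std::string table_proto_to_sql(const Table &t)
{
    std::ostringstream sql;
    sql << "CREATE TABLE " << t.name() << " (";
    for (int i = 0; i < t.field_size(); ++i)
    {
        if (i) sql << ", ";
        sql << t.field(i).name() << " " << t.field(i).type();
    }
    sql << ") ENGINE=" << t.engine();
    return sql.str();
}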

So we now have a rather flexible system in place, with the code implementing it becoming increasingly simple and able to be “obviously correct”.

Things that easily fall out of this work that people have written about:
– CREATE TABLE LIKE with ENGINE clause
http://krow.livejournal.com/671235.html
– table_raw_reader – looking at the raw representation of table metadata
http://www.flamingspork.com/blog/2009/10/01/table_raw_reader-reading-the-table-proto-from-disk-and-examining-everything/
– Table discovery
http://www.flamingspork.com/blog/2009/07/29/table-discovery-for-drizzle-take-2-now-merged/

Some more info:
http://krow.livejournal.com/642329.html

Feedback from MySQL Cluster tutorial

Way back on Monday (at the MySQL Conference and Expo), I gave a full day tutorial on MySQL Cluster. I awoke early in the morning to an “oh ha ha” URL in an IM; but no, it wasn’t jetlag playing tricks on me. Luckily, this didn’t take much (if anything) away from the purpose of the day: teaching people about NDB.

Distracting-and-this-time-really-annoying-thing-of-the-day-2: it seems that O’Reilly had cut back on power this year, and there were no power boards in the room. A full day interactive tutorial, and nowhere to plug in laptops. Hrrm.. Luckily, over the many years I’ve been speaking at this event I’ve gotten to know the AV guys reasonably well, so I asked them. They totally deserve a medal. The tutorial started at 8:30, I noticed the problem at 7:30, and it was all fixed by 7:45. The front half of the room (enough for everyone coming) had power for everyone. It was quite okay to bunch everybody up – it means I have to run around less.

This year’s tutorial was modified from last year’s (and that does take time, even though I’ve given it many times before). I wanted to remove out-of-date things, trim bits down (to better fit into the time we have, based on more experience of how long it takes to get the interactive parts done) and add a bit.

When we got to the end of the day (yes, I ran over… and everybody stayed, so either I’m really scary or the material is really interesting) I pleaded for feedback. It’s amazingly scary doing an interactive tutorial. You’re placing the success of the session not so much on you, but on everyone who’s come to it.

Sometimes I’ve gotten not much feedback at all; this time was different. I spoke to a number of people afterwards (and some via email) and got some really good suggestions for small changes that would have greatly enhanced the day for them. I was pleased that they also really enjoyed the tutorial and liked the interactivity. I (and it seems a great many others) do not much like tutorials that are just long talks.

People walked out of my tutorial with a good overview of what MySQL Cluster is, how to set one up, how to use it, how to do a bit of admin, and some idea of how it works.

I even dragged Jonas up to explain in great detail the two-phase commit protocol used for transactions. Of course, this is detail you don’t ever need to know to deploy – but people are interested in internals.

So far the session has received an average of 4 stars in evaluations (four five-star, two four-star and one two-star). I’d be really interested in feedback from the person who gave two stars, as it may mean I missed getting something done for them (e.g. providing information, help, etc). Even though it is hard to spread yourself around a room of 60-ish-plus people, I do like to do it well. There is also the possibility of people not coming prepared, which may mean they’re bored for a lot of the day if they don’t jump in with another group and learn that way.

So, I’m rather happy with how my first session went.