Amounts of RAM for devices so that I no longer have to worry about it.

I think this is my current “okay, I don’t have to worry about RAM” list:

  • Phone/Tablet: 2GB
  • Laptop: 8GB (although more is better… 4GB is tolerable IFF you have an SSD)
  • Development server: 16GB (32GB if shared) (emacs+gcc)
  • Box for testing things: 128GB (virtualization, databases)

This is… kind of mind bending.

CyanogenMod with encryption on a Sony Z1 Compact

So, new personal surveillance device (it’s pink! No more BORING black phone!).

It needed to let me load my own firmware on it and have encryption. It turns out I had to go and do things like this: http://forum.cyanogenmod.org/topic/82292-cm102-encryption-does-not-start-stuck-at-splash-screen/page__hl__+encryption#entry460839 – which is, in fact, repartitioning my phone.

It’s been a while since I’ve had to do math on partitions to get Linux installed somewhere… but if you don’t change where the filesystem is, you can’t run with encryption.

Basically, if enabling encryption isn’t working, run “adb logcat” on your computer and look for “E/Cryptfs ( 1890): Orig filesystem overlaps crypto footer region.  Cannot encrypt in place.” If you see that, you’re going to need to:

  • boot into recovery and “adb shell” in;
  • find which block device /data is (check the output of “mount”);
  • look up how many blocks it has in /proc/partitions;
  • run mkfs.ext4 on it, giving it an explicit filesystem size a few KB smaller than the device (I think I picked 16KB less).

After doing that, everything “just worked”.

How do I deal with OTA updates? Quite easily – copy the zip to the SD card and install from there.

Tyan OpenPower

Good news everyone! Tyan has announced the availability of their first OpenPOWER system! They call this a Customer Reference System, which means it’s an excellent machine to start poking at OpenPOWER and POWER8 (or deploying applications on).

Because it’s an OpenPOWER machine, it runs the open source OpenPOWER firmware (all up on GitHub) and will happily run Linux (feel free to port your other operating system kernels). I’ll be writing more on the OpenPOWER firmware soon, as, well, technical details are fun!

Ubuntu 14.10 is listed as recommended, as not only have they been building for POWER8, they’ve also spent some time ensuring things work fairly well out of the box (both as a KVM guest and running native on the bare metal). Or you can always just boot whatever the mainline kernel is at – build for the POWERNV (POWER non-virtualized) platform (be sure to include all the required drivers) and have fun!

MySQL 5.6 on POWER (patch available)

The following sentence is brought to you by IBM Legal. The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.

Okay, now that is out of the way….

If you’re the kind of person who follows the MySQL bugs database closely or subscribes to the MySQL Internals mailing list, you may have worked out that I’ve spent a small amount of time poking at MySQL on modern POWER systems.

Unlike Intel CPUs, which have a comparatively strong memory ordering model, POWER CPUs require explicit memory barriers in more places to synchronize memory state between different CPUs. This means that when you’re implementing synchronization primitives, you have one extra thing to get right.

Luckily, if you use straight pthread mutexes, this is already taken care of. Unluckily, there are some optimizations in MySQL that don’t use straight pthread mutexes and so may be problematic on non-Intel CPUs. A few of these issues have sneaked into MySQL over the past few years. The most problematic area was around the optimized mutexes in InnoDB (you can use the pthread_mutex fallback code, but it’s less performant).
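
To make the problem concrete, here’s a minimal sketch of the classic publish/consume pattern (illustrative only, not MySQL’s actual code). On x86 the hardware’s strong ordering makes it work even without the barriers; on POWER the lwsync barriers (or something equivalent) are required:

    #include <stdint.h>

    static int data;
    static volatile int ready;

    void producer(void)
    {
        data = 42;
    #if defined(__powerpc64__)
        /* Make the store to data visible before the store to ready. */
        __asm__ __volatile__("lwsync" ::: "memory");
    #endif
        ready = 1;
    }

    int consumer(void)
    {
        while (!ready)
            ; /* spin until the producer publishes */
    #if defined(__powerpc64__)
        /* Don't let the load of data be satisfied before the load of ready. */
        __asm__ __volatile__("lwsync" ::: "memory");
    #endif
        return data;
    }

Straight pthread mutexes hide all of this from you, which is why the plain pthread paths were never the problem.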

Luckily, I knew where to look, and there are good asserts throughout the InnoDB code to help spot any other areas that I may not have initially thought to look at. Coding defensively with a good number of asserts is a good thing.

After not too much work, I have a set of patches that I’m fairly confident is correct and performs nearly as well as possible. Initially, I had a different patch that used heavyweight memory barriers in a lot of places, but big kudos to Yasufumi for posting a better patch than mine to bug 47213 – using the lighter weight barriers gives a decent performance boost.

One of the key patches is in the InnoDB mutex code to change the thread priority – i.e. a POWER equivalent to the x86 pause instruction. These are hints to the CPU that the thread being executed is in a spinloop and that CPU resources should be allocated to other threads to make better forward progress.
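
For the curious, the mechanism looks roughly like this (a sketch under macro names I made up, not the actual patch). On POWER, special no-op forms of the “or” instruction change the SMT thread priority; dropping priority while spinning hands execution resources to the other threads on the core:

    #include <stdatomic.h>

    #if defined(__powerpc64__)
    # define SPIN_LOW_PRIORITY()    __asm__ __volatile__("or 1,1,1") /* HMT low: cede SMT resources */
    # define SPIN_NORMAL_PRIORITY() __asm__ __volatile__("or 2,2,2") /* HMT medium: take them back */
    #elif defined(__x86_64__) || defined(__i386__)
    # define SPIN_LOW_PRIORITY()    __asm__ __volatile__("pause")
    # define SPIN_NORMAL_PRIORITY() do { } while (0)
    #else
    # define SPIN_LOW_PRIORITY()    do { } while (0)
    # define SPIN_NORMAL_PRIORITY() do { } while (0)
    #endif

    void spin_lock(atomic_int *lock)
    {
        for (;;) {
            int expected = 0;
            if (atomic_compare_exchange_weak_explicit(lock, &expected, 1,
                    memory_order_acquire, memory_order_relaxed))
                return;              /* got the lock */
            SPIN_LOW_PRIORITY();     /* we're only spinning: tell the core */
            while (atomic_load_explicit(lock, memory_order_relaxed) != 0)
                ;                    /* wait at low priority until it looks free */
            SPIN_NORMAL_PRIORITY();  /* contend again at normal priority */
        }
    }

The real patch, of course, has to fit into InnoDB’s existing spin/sleep/yield machinery rather than a standalone lock like this.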

After dragging Anton in to have a look and a think, this code may have motivated him to have a go at getting kernel support for adaptive mutexes, thus removing the need for this spin/sleep/yield/eep loop in InnoDB (at least on Linux).

So… I’ve spent the appropriate time filing bugs in the MySQL bug tracker for the things I’ve found. Feel free to track them yourself, they are:

  • Bug 72715: character set code endianness dependent on CPU type rather than endianness of CPU
    • I don’t think this is an issue for us… or it could be that this is actually just incredibly untested code in the MySQL Server. It’s also not POWER specific, although it was caught by the Migration Assistant, which is part of the Advanced Toolchain from IBM.
  • Bug 72718: CACHE_LINE_SIZE in innodb should be 128 on POWER
    • I contributed a patch that’s a simple #ifdef on CPU type (a sketch of the idea is below, after this list). Those who care about other CPU architectures should chime in with the correct value for them.
    • There are other places in InnoDB with padding that doesn’t use this define; I need to file a bug for that.
  • Bug 72754: Set thread priority in InnoDB mutex spinloop
    • This makes a big difference when you have mutex contention and SMT (Simultaneous Multi-Threading) enabled (on POWER, you can dynamically change SMT levels at runtime).
    • I’ve contributed a preliminary patch that isn’t generic. I should go and fix that.
  • Bug 72755: InnoDB mutex spin loop is missing GCC barrier
    • This also applies to x86 (and indeed all platforms). If GCC gets a bit smarter, the current code could compile down to nothing, which is exactly what you don’t want from a spinloop. The correct thing to do is to have a GCC memory barrier (not a CPU one) to ensure that the compiler doesn’t optimize away the spinning (see the sketch after this list).
    • I’ve contributed a patch; it may need an #ifdef for GCC added.
  • Bug 72809: InnoDB Linux native aio setup missing barrier after setup
    • This appears to be a “POWER8 is fast” related bug :)
    • Patch contributed.
  • Bug 72811: Set NUMA mempolicy for optimum mysqld performance
    • Not POWER specific.
    • I’ve contributed a patch that sets the NUMA memory allocation policy inside mysqld rather than having to run “numactl” manually (see the sketch after this list).
  • Bug 47213: InnoDB mutex/rw_lock should be conscious about memory ordering other than Intel
    • Originally filed by Yasufumi back in 2009.
    • Some good discussion going on here to ensure the patch is correct. This is the kind of patch that requires more review than it takes to write.
    • This patch would fix the majority of problems for non-Intel CPU architectures.
    • Thanks to Yasufumi for providing an updated patch, it helped a lot!
  • Bug 72544: Incorrect locking for global_query_id
    • I found a bug. Rather benign and not POWER specific.
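
Since a few of the bullets above promise sketches, here they are. For CACHE_LINE_SIZE (bug 72718), the fix is about as simple as you’d expect – this is the idea, not necessarily InnoDB’s exact define:

    #if defined(__powerpc__) || defined(__powerpc64__)
    # define CACHE_LINE_SIZE 128  /* POWER cache lines are 128 bytes */
    #else
    # define CACHE_LINE_SIZE 64   /* typical for x86 */
    #endif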
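
The compiler barrier from bug 72755 deserves a closer look, since it bites on every platform. A minimal sketch (the struct is hypothetical, not InnoDB’s actual type):

    /* Emits no instructions at all; it only stops GCC from caching memory
     * values in registers across it or reordering accesses at compile time.
     * It is not a CPU barrier. */
    #define COMPILER_BARRIER() __asm__ __volatile__("" ::: "memory")

    struct spin_mutex { int lock_word; };  /* hypothetical, for illustration */

    /* Wait until the mutex looks free. Without the barrier, GCC would be
     * entitled to load lock_word once and turn this into an empty or
     * infinite loop. */
    static void spin_wait(struct spin_mutex *m)
    {
        while (m->lock_word != 0)
            COMPILER_BARRIER();
    }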
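
And for the NUMA mempolicy (bug 72811), the idea is to do the equivalent of “numactl --interleave=all” from inside mysqld before the buffer pool is allocated. Roughly, using libnuma (a sketch; the actual patch may differ – link with -lnuma):

    #include <numa.h>

    static void set_numa_interleave(void)
    {
        if (numa_available() == -1)
            return;  /* no NUMA support on this kernel/hardware */

        /* Interleave future allocations (such as the buffer pool) across
         * all nodes rather than filling up the node mysqld started on. */
        numa_set_interleave_mask(numa_all_nodes_ptr);
    }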

Want to run MySQL 5.6.17 on POWER? Get my MySQL 5.6.17 patch here: https://flamingspork.com/mysql/mysql-5.6.17-POWER.patch

My accumulation of 5.6 patches seems fairly reliable. I’d test before putting it into production, and I’d certainly love to hear about any problems you hit.

Get the quilt series of patches here: https://flamingspork.com/mysql/mysql-5.6.17-POWER-patches.tar.gz

I have, of course, done the legal wrangling for the Oracle Contributor Agreement (remarkably painless) and am working on making the patches completely acceptable to be merged into MySQL.

Ghosts of MySQL Past, Part 7: PBXT

Recently, I’ve been writing based on my linux.conf.au 2014 talk, which you can watch the recording of. Also see Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6. My feed fell off Planet MySQL for a bit, so you may have missed those posts – feel free to go on a trip down memory lane before returning here for, well, more of a trip down memory lane.

At the start of 2005, Paul McCullagh started working on a new transactional storage engine for MySQL. He announced it in early 2006, so the majority of initial development was done before Oracle bought InnoBase Oy.

It’s at this point I should correct myself from Part 5, where I was talking about when Maria started – it was actually at the start of 2005 that the project started, about 10 months before InnoDB Friday. I think it was mostly small scale work at first, with a larger focus coming months later.

But anyway, let’s talk about PBXT! It had a different architecture than InnoDB and thus could have an interesting set of performance characteristics. The shortest way to describe the original architecture is “kind of log based”. Rows are written to a log file and there’s a handle file that points to where the rows are in the log.

Also, the design split each row into a fixed size part and a variable sized part. The fixed size part was stored alongside the handle in the record data file, so getting at the fixed size columns of a row was incredibly quick.
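
In other words, something like this (the field names are mine for illustration, not PBXT’s actual structures):

    #include <stdint.h>

    struct pbxt_row_handle {
        uint64_t log_id;        /* which data log holds the variable size part */
        uint64_t log_offset;    /* where in that log it starts */
        uint8_t  fixed_part[];  /* fixed size columns, stored inline with the handle */
    };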

Not having to write data twice and having really quick access to some columns gave PBXT a quite decent performance advantage for many workloads.

The BLOB Streaming feature that was also worked on (originally just for PBXT and then for any engine) was quite ahead of its time. Think HandlerSocket, but done more correctly. HandlerSocket is an awful binary protocol that supports very limited operations and uses two TCP ports (one for reads, one for writes). BLOB streaming used HTTP, something that anyone could use, manipulate, proxy and whatever else you’d want.

If we look at Paul’s Top 5 Wishes for storage engines, an engine test suite and sane (or at least documented) APIs would clearly have helped. Having to do API tracing to discover when you should start a transaction is probably not the best way to attract developers – the amount of time that could have been saved, given better interfaces in the server, was immense (and not just for PBXT developers).

So, why aren’t we all using PBXT now? Well… if MySQL had shipped with PBXT, it would possibly have been a different story. There was (and indeed is) a very uneven playing field for MySQL storage engines. If you’re in the main MySQL tree then you’re everywhere. If not, then you’ve got a heck of a lot of work on packaging and keeping up to date with various API changes (not to mention the ABI, which, since we’re talking C++ and since we’re talking MySQL, changed a lot). With InnoDB being “good enough” for many, and actually being forced to improve due to challengers such as PBXT (as well as being in the tree), it just continued to have the majority of the users… and it’s not easy making money as a storage engine vendor – and developing a storage engine does cost money.

With no more than a couple of people working on it at any one time, it’s amazing that PBXT had such a big influence – and we can likely credit many InnoDB performance improvements to PBXT being so much better in some areas.

An interesting side story: I was sending some build fixes/compiler warning fixes and other minimal patches to Paul when MySQL belonged to Sun, and he mentioned that perhaps it was time he had a contributor agreement. I pointed out the Sun contributor agreement as an example of one that could be used; maybe he could just replace “Sun Microsystems” with “PrimeBase” and I could submit that to the Sun system – at the very least, it would be amusing. Paul said that sounded like a great idea, so I submitted Sun’s own contributor agreement (which it expected outside contributors to agree to) to Sun for them to agree to.

Well… a few weeks later I got a response, and Sun wasn’t exactly keen on it – until I pointed out a few times that it was in fact their agreement and then finally it was all okay.

If your company wants people to agree to a contributor agreement: see if they’d agree to it if it was for someone else. It was an interesting exercise.


Google Tests Homegrown Power8 Servers

http://www.enterprisetech.com/2014/02/13/google-tests-homegrown-power8-servers/

Having joined IBM now and working on Linux on Power, I’m allowed to be all happy and gleeful about a non x86 CPU architecture again, and one where Linux and free software really is a big deal.

Some of my now-colleagues talked about POWER8 related things at linux.conf.au, so you should go and check out their talks!

Ghosts of MySQL Past: Part 3

See Part 1 and Part 2.

We rejoin our story with a lawsuit. While MySQL suing Progress NuSphere is perhaps not the first GPL lawsuit that comes to mind, it was the first time the GPL was tested in court. Basically, the GEMINI storage engine was a proprietary storage engine bundled with a copy of MySQL. Guess what? The GPL was found to be valid, GEMINI was eventually GPLed, and it didn’t really go anywhere after that. Why? Probably some business reasons, and also that InnoDB was actually rather good and didn’t need a lawsuit to enforce the GPL, which makes business relationships remarkably easier.

In 2003 there was a second round of VC funding. The development team increased in size. One thing that MySQL AB did was invest heavily in technology. I think this is what gave the company a lot of value: you need to spend money developing technology if you wish to be seen as a giant and if you wish to be able to provide a high quality of service to customers.

MySQL 4.0 went GA in March 2003, while at the same time there were 4.1 and 5.0 development trees. Three concurrent development trees may seem like too many – and of course, it was. But these were heady days of working on features that MySQL was missing, ever wanting to gain users and market share. Could all these extra features actually be added to MySQL? Time would tell…

The big news of 2003 for MySQL? A partnership with SAP. There was this idea – “run SAP on MySQL” – which would push the MySQL Server in a bit of an odd direction. For a start, the bootstrap SQL script for SAP created something like 10,000 tables and loaded gigabytes of data – before you even started setting it up. In 2003, on MySQL 4.0, this didn’t go so well. Why was SAP interested? Well, then you’d be able to run SAP without paying for Oracle licenses!

Ghosts of MySQL Past: Part 2

This continues on from my post yesterday and also contains content from my linux.conf.au 2014 talk (view video here).

Way back in May of the year 2000, a feature was added to MySQL that would keep many people employed for many years – replication. In 3.23.15 you could replicate from one MySQL instance to another. This is commonly cited as the result of two weeks of work by one developer. The idea is simple: keep a log of all the SQL queries that modify the database and then replay them on a slave. Remember, this was before there was concurrency and everything was ISAM or MyISAM, so this worked (for certain definitions of worked).

The key things to remember about MySQL replication are: it was easy to use, it was easy to set up, and it was built into the MySQL Server. This is why it won. You have to fast forward to September 2010 before PostgreSQL caught up! It was only with PostgreSQL 9.0 that you could have queryable read-only slaves with stock standard PostgreSQL.

If you want to know why MySQL was so much bigger than PostgreSQL, this built in and easy to use replication was a huge reason. There is the age of a decent scotch between read-only slaves for MySQL and PostgreSQL (although I don’t think I’ve ever pointed that out to my PostgreSQL friends when having scotch with them… I shall have to!)

In 2001, when space was an odyssey, the first GA (General Availability) release of MySQL 3.23 hit the streets (quite literally, this was back in the day of software that came in actual physical boxes, so it quite probably was literally hitting the streets).

For a good piece of trivia, it’s 3.23.22-beta that is the first release in the current bzr tree, which means that it was around this time that BitKeeper first came into use for MySQL source code.

We also saw the integration of InnoDB in 2001. What was supremely interesting is that the transactional storage engine was not from MySQL AB, it was from Innobase Oy. The internals of the MySQL server were certainly not set up for transactions, and for many years (in fact, to this day) we talk about how a transactional engine was shoehorned in there. Every transactional engine since has had to do the same odd things to, say, find out when a transaction was being started. The exception here is in Drizzle, where we finally cleaned up a bunch of this mess.

Having a major component of the MySQL server owned and controlled by another company was an interesting situation, and one that would prove interesting in a few years time.

We also saw Mårten Mickos become CEO in 2001, a role he would hold through the Sun acquisition – an acquisition that definitively proved you can build an open source company and sell it for a *lot* of money. It was also the year that saw MySQL AB accept its first round of VC funding, and this would (of course) have interesting implications: some good, some less ideal.

(We’ll continue tomorrow with Part 3!)

Hong Kong (OpenStack Summit)

I’ll be in Hong Kong for the upcoming OpenStack Summit Nov 5-8. I’d be thrilled to talk database things with others present, especially around Trove DBaaS (DataBase as a Service) and high availability MySQL for OpenStack deployments.

I was last in Hong Kong in 2010, when I worked for Rackspace. The closest office to me was in Hong Kong, so that’s where I did my HR onboarding training. I remember telling friends on the Sunday night before leaving that I might be able to make dinner later in the week, depending purely on whether somebody got back to me about whether I was going to Hong Kong that week. I was, and I went. I took some photos while there.

Walking from the hotel where we were staying to the Rackspace office could be done pretty much entirely through buildings without going outside. There were bits of art around too, which is just kind of awesome – I’m always in favour of random art.
Statues in walkways

The photo below was the view from my hotel room. The OpenStack Summit is just by the airport rather than in the middle of town, so the views will be decidedly different to this, but still probably quite spectacular if you’re around the right place (I plan to take camera gear, so shout if you want to journey too).
Hotel Window (Hong Kong)

There are some pretty awesome markets around Hong Kong offering just about everything you’d want, including a lot just out on the street.
Java Road
Hong Kong Street Market

Nighttime was pretty awesome; having people from around the world journey out into the night was great.
Rackers walking Hong Kong at Night

I was there during the World Cup, and the streets were wonderfully decorated. I’m particularly proud of this photo as it was handheld, at night, after beer.
Hong Kong streetlife

“We open source it, and then developers show up and do work for free”

Those who have been around the free and open source software world long enough have heard “We open source it, and then developers show up and do work for free” at least once and have called bullshit on it at least once.

It turns out that people don’t go and work on software for free. They are either modifying software to scratch their own itch (in which case they’re getting 99+% of the code for nothing, so contributing a small bit back is the equivalent of paying for it – with their time rather than money) or it’s a good bit of fun.

So why do software projects that are dual licensed with a commercial license get fewer outside contributions? I think it’s quite simple: people don’t tend to spend their spare time making money for other people while making none for themselves. Simply put, these projects are left with contributions only from those being paid to work on them (usually by the company that sells the commercial license) and from people/companies scratching an itch. Projects that aren’t dual licensed are more likely to have contributors from several companies, as then it’s not all-but-one company spending time and money to make another company money.

An argument for popcon

There is a package called popularity-contest that’s available in both Debian and Ubuntu (and likely other Debian derivatives). It grabs the list of packages installed on the machine and submits it to the Debian or Ubuntu popularity contests.

There you can see which are the most popular packages in Debian and Ubuntu. Unsurprisingly, dpkg, the package manager, is rather popular.

Why should you enable it? Popcon results give solid numbers as to how many users you may have. Although the absolute numbers may not be too accurate, it’s a sample set, and if you examine the results over time you can start to get an idea of whether your software is growing in popularity or not.

But there’s more to it than that: if you can prove that a lot of people are installing your software on Debian, then you’re likely going to be able to argue for more work time being spent on improving the packaging for Debian.

Quite simply, enabling popcon is a way to help people like me argue for more time being spent on making Debian better.

The MySQL Cluster storage engine

This is one close to my heart. I’ve recently written on other storage engines: Where are they now: MySQL Storage Engines, The MERGE storage engine: not dead, just resting…. or forgotten and The MEMORY storage engine. Today, it’s the turn of MySQL Cluster.

Like InnoDB, MySQL Cluster started outside of MySQL. Those of you paying attention at home may notice a correlation between storage engines not written exclusively for MySQL and being at all successful.

NDB (for Network DataBase) started inside Ericsson, originally written in a language called PLEX, which was internal to Ericsson and used in the AXE telephone switches. Mikael Ronström’s PhD thesis covered NDB and even covered things that (at least at the time) were yet to be implemented (it’s been quite a few years since I last leafed through it). The project at Ericsson was (IIRC) shelved a couple of times, but eventually got spun out into an Ericsson Business Innovation company called Alzato.

Some remnants of PLEX can still be found in the NDB source code (if you look really hard that is). At some point the code was fed through a PLEX to C++ converter and development continued from there. Some of the really, really old parts of the source may seem weird either due to this or some hand optimization for SPARC processors in the 1990s.

In 2003, MySQL AB acquired Alzato and work began on a storage engine plugin for MySQL to interface with the (C++ API only) NDB. Seeing as the storage engine interface was so simple, easy and modular, it would only take several years for the interface to NDB to become mature.

The biggest problem: NDB itself worked really well if your workload fit exactly what it was good at… if you deviated, horrific performance and/or crashes were not as uncommon as we’d have liked. This was a source of strain for many years with the developers and support team on one side and some of the less-than-careful sales team on the other. That being said, there have been some absolutely awesome sales people selling NDB into markets it truly fits, and this is why there’s barely a place in the world where placing a mobile phone call doesn’t go through MySQL Cluster at some point.

You should read Tomas Ulin’s post Celebrating 10 years @MySQL for a bit of an insight into how Alzato became part of MySQL AB (which later became part of Sun which became part of Oracle).

I joined the MySQL Cluster team at MySQL in December 2004, not too long after Alzato was acquired, but certainly when the NDB storage engine in MySQL 4.1 was in its very early stages – it was then by no means a general purpose database.

Over the years, MySQL Cluster gained both traction and features, making it useful for more applications. One of the biggest marketing successes of MySQL was the storage engine architecture and how you could just “plug in” different engines. The reality (of course) was far different and even though MySQL Cluster did just “plug in” to MySQL, it was certainly not a drop in replacement.

In MySQL 5.0, a bunch of neat new features were added:

  • Engine condition pushdown
    This enabled conditions on non-indexed columns to be evaluated on the data nodes rather than having every row pulled up to the SQL node to be evaluated.
  • Batched read interface
    So that queries like SELECT FOO FROM BAR WHERE A IN (1,2,3) were executed as a single network round trip rather than 3 round trips.
  • Query cache
    Although the query cache should die, hey, at least it worked with NDB now…. in a way.
  • Reduced IndexMemory usage
    Remember, NDB is an in-memory database, so saving a bunch of bytes for secondary indexes was a big thing.

The first release with things I really worked on was MySQL 5.1. My first talk (to a packed room) at the MySQL User Conference in 2006 was on new features in MySQL Cluster 5.1. I’m still quite proud of that talk, even though I know I am a much better speaker now than I was then (it would have been great to have had more guidance… but hey, learning from experience is good too).

We added a lot in 5.1:

  • Integration with replication
    This is where row based replication was born. It was a real team effort: the NDB kernel part was written by Tomas (going from memory and bzr logs), and Jonas seems to have a bunch of code in there too. I worked a bunch on the NDB injector thread in mysqld, and Mats worked on the core row based code (at the time the most C++ like code in the entire MySQL world). You could now have a cluster replicate to another cluster, with the giant bottleneck that is MySQL replication.
  • Disk data
    You could store non-indexed columns on disk. I implemented the INFORMATION_SCHEMA.FILES table for this; I was young and naive enough to think that the InnoDB guys would also fill out this table and all would be happy with the world (I’m lucky I haven’t been holding my breath on this one).
  • Variable Sized columns
    A VARCHAR(255) would no longer always use 255 bytes if you just stored a single character in it. The catch? Only for in-memory columns.
  • User defined partitioning
    Because NDB desperately needed more options, we let the user choose how they wanted to partition up their data (per table).
  • Autodiscovery of schema changes
    This was a giant workaround to the epic mess that is FRM files and data dictionary things inside the MySQL Server. It is because of all this code that when I went to rewrite the whole thing for Drizzle I took the approach of “just pass it down to the engines, the server must not attempt to know better”. FWIW, I’m still right: if the server tries to be clever you now have two places for bugs to be, not just one.
  • Distribution awareness
    i.e. better selection of which data node to talk to for a particular query, reducing latency.
  • Online add/drop index.
    How long did it take for other engines to get this? Let’s not think about that :)

After that the really interesting stuff started to happen, that is, the first major fork of MySQL: MySQL Cluster Carrier Grade Edition (CGE). Why? We had customers that simply couldn’t wait for MySQL 6.0 (after all, they’d still be waiting).

We had MySQL Cluster CGE 6.1, 6.2, 6.3 and now we’re into 7.0, 7.1 and 7.2. It is without doubt the longest serving and surviving MySQL fork. There were non-trivial changes inside the MySQL server too, which caused enough of a merge problem for the (small) Cluster team.

One big thing that you’re probably still all waiting for? Replication conflict detection and resolution in circular/multi-master replication setups. It was an NDB first and has been used in production for a decent amount of time.

I remember a hack while on an airplane that led to the CompressedBackup and CompressedLCP options (using zlib when writing out checkpoints/backups) – something that took more time than you’d think to go from prototype to production ready code.

The last few things I worked on in MySQL Cluster before going and working full time on Drizzle was the Windows port, online add/drop node and NDBINFO.

I’ve left out so many cool MySQL Cluster things that were worked on over the years (e.g. online add/drop column, rewriting of the LCP code, micro GCPs, crash-safe DDL, the test suite). I really should mention the test suite: in lines of code it was over three times the size of MyISAM… and it was probably six years ago that I worked that out.

One thing to think about: when Innobase Oy was bought by Oracle and there was this effort to have a transactional storage engine owned by MySQL AB rather than by another company, I pointed out that I thought it would take less time to add the needed features to NDB and integrate it inside the MySQL server binary (and with the addition of online add node, you could go from a stand alone DB server to a full cluster with no downtime) than it would for any of the alternatives to get to a suitable level of maturity.

I wish I’d put money on this… I put money on the MySQL 5.1 GA release date (which I was happy to lose), but in the years since, you can see that InnoDB still reigns supreme, with all that came to replace it having fallen away for one reason or another. It’s still on track for MySQL Cluster to be the only real alternative (now also, funnily enough, owned by Oracle). I have to say, it’s kind of a hollow victory though; it would have been nice to see Falcon and PBXT be serious players in today’s market.

A few points on talking about the internet

There are a few things you should keep in mind when talking about the internet:

  • Use of the word “cyber” is not cool.
  • Whenever you hear the word “cyber” substitute it with “Information Super-Highway”… yes, it sounds that dated.
  • Use of the word “cyber” is applicable only in discussions relating to Doctor Who.
  • Whenever you see “facebook.com/BLAH” just think “AOL Keyword”. If you don’t know what AOL was, I likely have 437 trial CDs you can have.
  • There is no differentiation between life and online-life – just about everything is internet connected now. This very much counts for speech vs online speech.

A reminder: Leah and I are running in the upcoming MS Fun Run raising money for Australians affected by Multiple Sclerosis (MS). If you don’t sponsor us you’re going to die poor and alone. Really, I have an arrangement with all known deities to make it happen. Sponsor us here: http://register.mswalk.org.au/2013-MS-Walk-and-Fun-Run-Melbourne/Meep

The MERGE storage engine: not dead, just resting…. or forgotten.

Following on from my fun post on Where are they now: MySQL Storage Engines, I thought I’d cover the few storage engines that are really just interfaces to a collection of things. In this post, I’m talking about MERGE.

The MERGE engine was basically a multiplexer down to a number of MyISAM tables. They all had to have the same structure, there was no parallel query execution, and it saw fairly limited use. One of the main benefits was that you could then actually put more rows in a MyISAM table than your “files up to 2/4GB” file system allowed. With the advent of partitioning, this really should have instantly gone away and been replaced by it. It wasn’t.

It is another MySQL feature that likely exists due to customer demand at the time. It’s not a complete solution by any means: PARTITIONING is way more complete and universal… and much harder to get right inside the MySQL server – which is why MERGE exists. It was easier to write a storage engine that wrapped MyISAM than it was to have any form of partitioning in the server.

One advantage of MERGE tables is that you could parallelize myisamchk to repair your broken MyISAM tables after a crash. One step better than no crash safety is at least parallel recovery. The disadvantage is that you’re using MERGE and MyISAM tables.

There is also the great security problem of MRG_MYISAM (the other name for MERGE tables): if you create a MyISAM table t1 that a user can access, and they create a MERGE table (say m1) that accesses t1, then even after you revoke their access to t1 they’ll still be able to access t1 through m1.

MERGE still seems to exist in MySQL 5.6, without even a warning that it’ll go away… which I suspect it will. We long since got rid of it in Drizzle, as, well, what you really want is a query rewrite engine that does views, partitioning etc etc.

Can anyone think of a reason why you should still use MERGE tables in 2013? I can’t.

More on the FRM file format (and the minimum maximum number of columns in MySQL)

Over at the work blog, I wrote about what the true maximum number of columns in MySQL is, as well as the minimum maximum. Basically, the FRM file is ass and places bizarre arbitrary limits on things due to what can only be seen as “limitations” in 1980s computing and automatically generated interfaces for entering rows on an 80×24 character VT100 terminal.

Full post here: https://www.percona.com/blog/understanding-the-maximum-number-of-columns-in-a-mysql-table/

Refactoring Internal temporary tables (another stab at it)

A few weekends ago, I started to again look at the code in Drizzle for producing internal temporary tables. Basically, we have a few types of tables:

  • Standard
  • Temporary (from CREATE TEMPORARY TABLE)
  • Temporary (from ALTER TABLE)
  • Internal temporary (to help with query execution)

If you’re lucky enough to be creating one of the first three types, you go through an increasingly lovely pile of code that constructs a nice protobuf message about what the table should look like and hands all responsibility over to the storage engine as to how to do that. The basic idea is that Drizzle gets the heck out of the way and lets the storage engine do its thing. This code path looks rather different from what we inherited from MySQL. For a start, we actually have a StorageEngine object rather than just lumping everything into the handler (which we correctly name a Cursor). However… the final part, the internal temporary table code, is a bit closer to what we inherited from MySQL. There is a good reason for that: it’s ass.

For a start, the table::Singular object is still abused by Item_sum_distinct (see the setup() method) as a tuple (a table with no actual table). This is not ideal and just throws a spanner in the works for refactoring a bunch of code.

The second big problem is that create_tmp_table() doesn’t actually use any normal API calls, instead it manually sets up the table::Singular object. This includes setting up the fields for the table::Singular object in a slightly different way depending on which bit of code called create_tmp_table().

The third big problem is that it’s not storage engine agnostic. Instead of using any existing and sensible way to create a temporary table via the storage engine API, it instead creates a series of MI_COLUMNDEF structures, which, as you may be able to guess, are MyISAM-specific internal data structures.

The fourth big problem is that if we end up using HEAP (again, like MyISAM, hard coded) we don’t even call the create table method on the engine. The HEAP (or MEMORY engine as it’s now known) is magic in that it can create tables on open()!

All of these issues make it really, really hard to have another engine with the ability to handle internal temporary tables. You may recall that MariaDB does include the ability to use the Aria engine for internal temporary tables. No, they did not refactor any of this code; they just made a copy of it and put in Aria where MyISAM was, along with some #ifdefs for the feature.

Over the past several years I’ve tried a few times to tease this code out and start the process of turning it into something that is palatable. Every one of those times I’ve either failed or gotten sufficiently frustrated that I’ve given up.

I now have a new strategy though. After looking at the code for a good few hours a few weekends ago, I think I have an idea of where to start…. (now just for a few more free weekends to implement it).