Ghosts of MySQL Past, Part 7: PBXT

Recently, I’ve been writing based on my linux.conf.au 2014 talk, which you can watch the recording of. Also see Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6. My feed fell off Planet MySQL for a bit, so you may have missed those posts – feel free to go on a trip down memory lane before returning here for, well, more of a trip down memory lane.

At the start of 2005, Paul McCullagh started working on a new transactional storage engine for MySQL. He announced it in early 2006, so the majority of the initial development was done before Oracle bought Innobase Oy.

It’s at this point I should correct myself from Part 5, where I was talking about when Maria started – it was actually at the start of 2005, about 10 months before InnoDB Friday. I think it was mostly small-scale work at first, with a larger focus on it coming months later.

But anyway, let’s talk about PBXT! It had a different architecture from InnoDB and thus an interesting set of performance characteristics. The shortest way to describe the original architecture is “kind of log based”: rows are written to a log file, and a handle file points to where each row is in the log.

The design also split each row into a fixed-size part and a variable-sized part. The fixed-size part was stored alongside the handle in the record data file, so retrieving the fixed-size columns of a row was incredibly quick.
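To make that concrete, here’s a rough sketch of the idea in C++ (the names and layout here are mine, purely for illustration – not PBXT’s actual on-disk format):

    #include <cstdint>

    // Illustrative only: a PBXT-like split between a fixed-size record
    // stored in the record/handle file and variable-size data appended
    // to a data log.
    struct FixedPart {
        uint32_t log_id;         // which data log holds the variable part
        uint64_t log_offset;     // where in that log it was appended
        uint8_t  fixed_cols[64]; // fixed-size columns stored in place
    };

    // Reading only fixed-size columns never touches the data log, and
    // a write is a single append to the log plus a handle update.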

Not having to write data twice and having really quick access to some columns gave PBXT a quite decent performance advantage for many workloads.

The BLOB Streaming feature that was also worked on (originally just for PBXT, then for any engine) was quite ahead of its time. Think HandlerSocket, but done more correctly. HandlerSocket is an awful binary protocol that supports very limited operations and uses two TCP ports (one for reads, one for writes). BLOB streaming used HTTP, something that anyone could use, manipulate and proxy however they wanted.
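The idea was that a BLOB becomes just another web resource. Something like this (an illustrative exchange – I’m not claiming this is the exact URL scheme the BLOB streaming engine used):

    GET /blobs/mydb/mytable/some-blob-reference HTTP/1.1
    Host: db-server:8080

    HTTP/1.1 200 OK
    Content-Type: image/jpeg
    Content-Length: 1048576

    ...blob bytes...

Every HTTP client, proxy and cache on the planet already speaks that, versus needing custom client code for a pair of bespoke binary TCP ports.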

If we look at Paul’s Top 5 Wishes for storage engines, an engine test suite and sane (or at least documented) APIs would clearly have helped. Having to do API tracing to discover when you should start a transaction is probably not the best way to attract developers – the amount of time that better interfaces in the server could have saved was immense (and not just for PBXT developers).

So, why aren’t we all using PBXT now? Well… if MySQL had shipped with PBXT, it would possibly have been a different story. There was (and indeed is) a very uneven playing field for MySQL storage engines. If you’re in the main MySQL tree, you’re everywhere. If not, you’ve got a heck of a lot of work on packaging and on keeping up with various API changes (not to mention the ABI, which – since we’re talking C++ and since we’re talking MySQL – changed a lot). With InnoDB being “good enough” for many, being forced to improve by challengers such as PBXT, and being in the tree, it simply kept the majority of the users. And it’s not easy making money as a storage engine vendor, while developing a storage engine certainly does cost money.

With no more than a couple of people working on it at any one time, it’s amazing that PBXT had such a big influence – and we can likely credit many InnoDB performance improvements to PBXT being so much better in some areas.

An interesting side story: back when MySQL belonged to Sun, I was sending Paul some build fixes, compiler warning fixes and other minimal patches, and he mentioned that perhaps it was time he had a contributor agreement. I pointed to the Sun contributor agreement as an example of one he could use – he could just replace “Sun Microsystems” with “PrimeBase” and I could submit it to the Sun system; at the very least, it would be amusing. Paul said that sounded like a great idea, so I submitted Sun’s own contributor agreement (the one it expected outside contributors to agree to) to Sun for them to agree to.

Well… a few weeks later I got a response, and Sun wasn’t exactly keen on it – until I pointed out a few times that it was in fact their agreement and then finally it was all okay.

If your company wants people to agree to a contributor agreement: see if they’d agree to it if it was for someone else. It was an interesting exercise.

Ghosts of MySQL Past, Part 6: The engine revs

This week I’ve been writing based on my linux.conf.au 2014 talk, which you can watch the recording of. Also see Part 1, Part 2, Part 3, Part 4 and Part 5. My feed fell off Planet MySQL for a bit, so you may have missed those posts.

Netfrastructure was many things apart from just a database engine, but the one part that was interesting to MySQL was the storage engine. One interesting thing about Netfrastructure was the built-in Java VM, which meant you could do all sorts of neat tricks right near the data. And with Jim and Ann coming from much more feature-complete databases, some parts of MySQL were going to be a bit of a shock.

There was still work to be done on the Falcon engine itself; it wasn’t completely finished and ready to just drop in, even though you may have heard things like “to be completed later this year”.

There was also another experiment to see if MySQL could get a storage engine that wasn’t owned by somebody else: resurrect Gemini. Remember, it was GPL licensed now, and it could conceivably be updated to the latest MySQL versions. You can go and see the code for the Amira project on Launchpad. Ultimately this went nowhere, though it could have been an interesting story had the project got more resources. Instead, it was Falcon that got all the attention and effort.

This was going to be the ultimate triumph of the Pluggable Storage Engine Architecture, something that had been touted in presentations and marketing material for years. What really happened? Well, it took NDB (now MySQL Cluster) somewhere between two and four years before you could really say it was stable and free of many “obvious” bugs/unintended deviations in behavior. Would Falcon be any different?

In reality, the Falcon project unearthed all of the warts of the storage engine interface. There were many exchanges of “Is it really like that?”, “Yes, yes it is, sorry.”

The APIs (and I use the term loosely) between a storage engine and the database server were originally used to plug in ISAM and then MyISAM; the first transactional engine (InnoDB) required a bunch of hacks to work, and the API was never cleaned up. When the third transactional engine came along (MySQL Cluster) – which was also a distributed engine – there were again no real fixes in the API, just more of the same workarounds and oddness. So by the time Falcon came along, if you were lucky the API was merely not what you’d expect. There were many, many situations where not only was the official documentation inaccurate, but you’d have to commit several gross layering violations just to get something rather fundamental to a transactional database working.
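The canonical example is the one mentioned above: there is no explicit “begin transaction” call down into the engine, so each transactional engine ends up inferring transaction boundaries from table locking calls. A rough sketch of the pattern (my_trx and the helper functions are made up for illustration; real engines also juggle start_stmt(), handlerton registration and more):

    // Sketch only: inferring "a transaction has started" from
    // handler::external_lock(), because the API has no begin hook.
    int ha_example::external_lock(THD *thd, int lock_type) {
        my_trx *trx = get_or_create_trx(thd);   // engine-private state
        if (lock_type != F_UNLCK) {             // a table is locked in
            if (!trx->started) {
                trx->started = true;            // first lock == trx start
                engine_begin(trx);
            }
            trx->lock_count++;
        } else if (--trx->lock_count == 0) {
            // Statement finished: commit now if autocommit, otherwise
            // wait for the commit() call on the handlerton.
            if (is_autocommit(thd))
                engine_commit(trx);
        }
        return 0;
    }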

Arguably the worst thing was foreign keys, a requirement for the Falcon project: you had to implement your own SQL parser to get any information on foreign keys at all. Let’s not even get started on auto_increment, where the correct behavior is pretty much completely undocumented.
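Since the server wouldn’t hand foreign key definitions down through the API, an engine that wanted them was reduced to scanning the CREATE TABLE text itself. A minimal sketch of that indignity (illustrative only – real DDL needs a real parser):

    #include <algorithm>
    #include <cctype>
    #include <iostream>
    #include <string>
    #include <vector>

    // Fish FOREIGN KEY clauses out of raw CREATE TABLE text, because
    // the handler API never passed foreign key metadata to the engine.
    std::vector<std::string> find_foreign_keys(const std::string &sql) {
        std::string upper(sql);
        std::transform(upper.begin(), upper.end(), upper.begin(), ::toupper);
        std::vector<std::string> clauses;
        size_t pos = 0;
        while ((pos = upper.find("FOREIGN KEY", pos)) != std::string::npos) {
            size_t end = pos;
            for (int depth = 0; end < sql.size(); ++end) {
                char c = sql[end];
                if (c == '(') depth++;
                else if (c == ')' && depth-- == 0) break; // table def ends
                else if (c == ',' && depth == 0) break;   // next column/key
            }
            clauses.push_back(sql.substr(pos, end - pos));
            pos = end;
        }
        return clauses;
    }

    int main() {
        std::string ddl = "CREATE TABLE child (id INT, parent_id INT, "
                          "FOREIGN KEY (parent_id) REFERENCES parent(id))";
        for (const std::string &fk : find_foreign_keys(ddl))
            std::cout << fk << '\n';  // the clause Falcon had to dig out
    }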

Next, we’ll look at PBXT.

Google Tests Homegrown Power8 Servers

http://www.enterprisetech.com/2014/02/13/google-tests-homegrown-power8-servers/

Having joined IBM and now working on Linux on Power, I’m allowed to be all happy and gleeful about a non-x86 CPU architecture again – and one where Linux and free software really are a big deal.

Some of my now-colleagues talked about things related to Power8 at linux.conf.au, so you should go and check out their talks!

Ghosts of MySQL Past Part 5: The Era of Acquisitions

This week I’ve been writing based on my linux.conf.au 2014 talk, which you can watch the recording of.

Also see Part 1, Part 2, Part 3 and Part 4. My feed fell off Planet MySQL for a bit, so you may have missed those posts.

Now we head into the era of acquisitions… there have been a few in MySQL history, and in 2005 came the second (the first was MySQL AB acquiring Alzato for NDB). In what was to become known as “InnoDB Friday”, the maker of InnoDB – Innobase Oy – was acquired by Oracle. That very same month….

MySQL 5.0 GA. The first GA release of MySQL 5.0 is infamous. It was nowhere near ready and everybody who tried to use 5.0 in the early GA days has a story about something obvious that was broken. Basically, the majority of the new features simply didn’t work. It took many point releases before people would consider 5.0 ready.

The real measure of 5.0 quality was that it took MySQL AB over a year before we started to use it for our support database.

At the end of 2005, the Maria project was started: a project to create a transactional storage engine. This should not be confused with MariaDB, which would come years later. This is Maria, now called Aria. The basic idea was to fork MyISAM and work on adding features. In hindsight, it’s easy to see that when you have a quality problem with your main product, you should probably not take a bunch of senior engineers and have them work on a different project. IIRC there was some initial estimate of a GA by the end of 2007. It’s now eight years since the project started and there’s still no stable release.

There were other efforts to get a transactional storage engine not owned by Oracle, and in 2006 MySQL AB acquired Netfrastructure; along with it, Jim Starkey and Ann Harrison came to work for MySQL AB.

Originally named JSTAR, this would become known as Falcon (probably something to do with the Swedish beer by the same name).

Ghosts of MySQL Past: Part 4, A million features for Enterprise

Continuing on from Part 3….

SAP is all about enterprises and, as such, used all the enterprise features of databases. This was a very different application from anything MySQL users had built so far (often web applications, and increasingly used at scale). This was “MySQL focuses on Enterprise”.

This is 2003: before VIEWS, before stored procedures, before triggers, before cursors, before prepared statements, before precision math, before SQL access to logs, before any real deadlock info, before you could use a / in a table name, before any performance statistics, before full Unicode, and before partitioning, row-level binary logs, fractional seconds in temporal types and EXPLAIN for INSERT/UPDATE/DELETE (which we still don’t have). There was a long way to go before MySQL would fit with SAP. A long, long way.

But how were we going to run our cell phone networks on MySQL? There was an answer to that: MySQL AB acquired an Ericsson spinoff, Alzato, for its clustered database server, NDB, which would become MySQL Cluster. This is separate from replication and was originally designed for GSM HLR databases. Early versions had two types of availability: five nines (99.999%) or zero nines (0.000%) – it either worked or it exploded. Taking a database engine that wasn’t fully mature and integrating it with MySQL is not a task for the faint of heart. It wouldn’t be the last time this was attempted, and it would prove to be the most successful.

In October of 2004, MySQL 4.1 went GA – the first version with MySQL Cluster and, incidentally, just before I joined MySQL AB on the MySQL Cluster team.

By this time, MySQL AB had hired just about all the outside contributors, as, well, what better job interview than a patch? The struggle to have a good relationship with outside contributors started a long time ago; it’s certainly nothing new.

2005 was the year of MySQL 5.0 instability. 5.0 was the development version at the time, and it was really hard for development teams (such as the SAP team) to work on deliverables when, every time they rebased their work on a new 5.0 trunk version, the server would explode in new and interesting ways.

In fact, the situation was so bad that Continuous Integration was re-invented. Basically, it was feature++; stability--. The fact that even less stable development trees existed at the same time did not bode well.

Ghosts of MySQL Past: Part 3

See Part 1 and Part 2.

We rejoin our story with a lawsuit. While MySQL suing Progress NuSphere is perhaps not the first GPL lawsuit that comes to mind, it was the first time the GPL was tested in court. Basically, the GEMINI storage engine was a proprietary storage engine bundled with a copy of MySQL. Guess what? The GPL was found to be valid, GEMINI was eventually GPLed, and it didn’t really go anywhere after that. Why? Probably some business reasons, and also because InnoDB was actually rather good – and no lawsuit was needed to enforce the GPL there, which made business relationships remarkably easier.

In 2003 there was a second round of VC funding. The development team increased in size. One thing MySQL AB did was invest heavily in technology. I think this is what gave the company a lot of value: you need to spend money developing technology if you wish to be seen as a giant, and if you wish to be able to provide a high level of quality service to customers.

MySQL 4.0 went GA in March 2003, while at the same time there were 4.1 and 5.0 development trees. Three concurrent development trees may seem like too many – and of course, they were. But these were heady days of working on features that MySQL was missing and of ever wanting to gain users and market share. Would all these extra features be able to be added to MySQL? Time would tell…

The big news of 2003 for MySQL? A partnership with SAP. There was this idea – “run SAP on MySQL” – which would push the MySQL Server in a bit of an odd direction. For a start, the bootstrap SQL script for SAP created something like 10,000 tables and loaded gigabytes of data – before you even started setting it up. In 2003, on MySQL 4.0, this didn’t go so well. Why was SAP interested? Well, then you’d be able to run SAP without paying for Oracle licenses!

Ghosts of MySQL Past: Part 2

This continues on from my post yesterday and also contains content from my linux.conf.au 2014 talk (view video here).

Way back in May of the year 2000, a feature was added to MySQL that would keep many people employed for many years – replication. In 3.23.15 you could replicate from one MySQL instance to another. This is commonly cited as the result of two weeks of work by one developer. The idea is simple: create a log of all the SQL statements that modify the database, and then replay them on a slave. Remember, this is before there was concurrency and everything was ISAM or MyISAM, so this worked (for certain definitions of worked).
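In spirit – glossing over the binlog format, log positions and the slave’s I/O/SQL thread split that came later – the mechanism is not much more than this sketch (execute_sql() here is a stand-in for the server actually running a statement):

    #include <fstream>
    #include <iostream>
    #include <string>

    // Conceptual statement-based replication, circa 3.23.
    void execute_sql(const std::string &sql) {
        std::cout << "executing: " << sql << '\n';  // stand-in
    }

    // Master: append every modifying statement to a log.
    void master_log_statement(const std::string &sql) {
        std::ofstream log("binlog.txt", std::ios::app);
        log << sql << '\n';  // one statement per line; the real binlog
    }                        // is binary, with positions

    // Slave: replay the log in order. Correct only while statements
    // are deterministic (NOW(), RAND() etc. need special handling).
    void slave_apply_log() {
        std::ifstream log("binlog.txt");
        std::string sql;
        while (std::getline(log, sql))
            execute_sql(sql);
    }

    int main() {
        master_log_statement("INSERT INTO t VALUES (42)");
        slave_apply_log();
    }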

The key things to remember about MySQL replication are: it was easy to use, it was easy to set up, and it was built into the MySQL Server. This is why it won. You have to fast-forward to September 2010 before PostgreSQL caught up! It was only with PostgreSQL 9.0 that you could have queryable read-only slaves with stock-standard PostgreSQL.

If you want to know why MySQL was so much bigger than PostgreSQL, this built in and easy to use replication was a huge reason. There is the age of a decent scotch between read-only slaves for MySQL and PostgreSQL (although I don’t think I’ve ever pointed that out to my PostgreSQL friends when having scotch with them… I shall have to!)

In 2001, when space was an odyssey, the first GA (General Availability) release of MySQL 3.23 hit the streets (quite literally, this was back in the day of software that came in actual physical boxes, so it quite probably was literally hitting the streets).

For a good piece of trivia, it’s 3.23.22-beta that is the first release in the current bzr tree, which means that it was around this time that BitKeeper first came into use for MySQL source code.

We also saw the integration of InnoDB in 2001. What was supremely interesting was that the transactional storage engine was not from MySQL AB; it was from Innobase Oy. The internals of the MySQL server were certainly not set up for transactions, and for many years (in fact, to this day) we have talked about how a transactional engine was shoehorned in there. Every transactional engine since has had to do the same odd things to, say, find out when a transaction was being started. The exception here is Drizzle, where we finally cleaned up a bunch of this mess.

Having a major component of the MySQL server owned and controlled by another company was an interesting situation, and one that would prove interesting in a few years’ time.

We also saw Mårten Mickos become CEO in 2001, a role he would hold through the Sun acquisition – an acquisition that definitively proved you can build an open source company and sell it for a *lot* of money. It was also the year MySQL AB accepted its first round of VC funding, and this would (of course) have interesting implications: some good, some less ideal.

(We’ll continue tomorrow with Part 3!)

Past, Present and future of MySQL and variants Part 1: Ghosts of MySQL Past

You can watch the video of my linux.conf.au 2014 talk here: http://mirror.linux.org.au/linux.conf.au/2014/Wednesday/28-Past_Present_and_future_of_MySQL_and_variants_-_Stewart_Smith.mp4

But let’s talk about things in blog form rather than video form :)

Back in 1979, there was UNIREG: a text UI to records (rows) in a database (err, table). The reason I mention UNIREG is that it had FoRMs, which, as you may have guessed from my capitalization there, is where the FRM file comes from.

In 1986, UNIREG came to UNIX. That’s right, kids: the 80×24 VT100 interface to ISAM (Indexed Sequential Access Method – basically, rows are written in insert order and indexes point to them) came to UNIX. There was no generic query language, just FoRMs and reports. In fact, to this day, that 80×24 text interface is stored in the FRM file by MySQL and never ever used (I’ve written about this before).
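That parenthetical really is the whole storage model. A conceptual sketch (illustrative only, not the actual ISAM/MyISAM on-disk layout):

    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    // Conceptual ISAM: records live in a data file in insert order,
    // and each index maps a key to the record's position in that file.
    struct IsamTable {
        std::vector<std::string> data_file;          // insert order
        std::multimap<std::string, uint64_t> index;  // key -> record no.

        void insert(const std::string &key, const std::string &row) {
            data_file.push_back(row);                // always appended
            index.emplace(key, data_file.size() - 1);
        }
        const std::string *lookup(const std::string &key) const {
            auto it = index.find(key);
            return it == index.end() ? nullptr : &data_file[it->second];
        }
    };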

Then, around the 1990s, there was this mSQL thing: a small SQL server (with source available) but not FOSS. Originally, Monty W plugged his ISAM engine into it, but it wasn’t quite the right fit… so in 1995 we had MySQL 1.0 and MySQL AB was founded.

Fast forward a bit and in 1996 we had MySQL 3.19 and development continued. It managed to gain features, performance, ports to different operating systems and CPU architectures and, of course, stability.

It wasn’t until the year 2000 that MySQL adopted the GPL. This turned out to be a huge step in the right direction for increased adoption. At the time it was also a huge risk, essentially betting all of the company’s revenue on making the software more free.

This was the birth of the dual licensing business model. You see, the client library (libmysql) was also GPL, which meant it was easy to use if your application was also GPL; but if you were going to distribute your application and it wasn’t under a GPL-compatible license (there was also a FOSS exception so that things like PHP could use it), then you needed a license.

Revenue from licensing was to be significant throughout the entire history of MySQL AB.

(We’ll continue this in part 2 tomorrow)

The accurately named “best tofu dish ever”: Mapo Tofu

I have been having a craving for Mapo Tofu for a few days now, and I finally could be bothered to attempt making it. It’s odd that, for a dish I really do quite like, this was my very first attempt at making it.

Well… I was googling around and found this recipe: http://www.kitchenriffs.com/2011/03/best-darn-tofu-dish-ever-mapo-tofu.html. I didn’t have fermented black beans, so I left them out. I used the broth from the dried shiitake mushrooms (with a tiny bit of Massel stock added) and did take the option of adding pepper (chilli) flakes.

It was absolutely delicious and really quite simple to make.

The simmering of the tofu (I just used the Woolworths Macro Classic Tofu that you can pick up at any Woolworths store… because I’m often just plain lazy when it comes to acquiring tofu) was something I hadn’t done before. It turns out that it makes the outside more likely to keep its shape, which it rather did. Neat trick!

Did I mention this was delicious? I wish there was more!

$3 Kaiser Baas keyring photo frame and linux

I picked one up for $3 from OfficeWorks and wanted to put pictures on it.

So… I could list (and get) the default photos with gphoto2 (the version shipped with Fedora 20 just worked): gphoto2 -L to list them, gphoto2 -P to get them all.

It turns out that the images are 128×128 PNGs! Easy: change the resolution of a picture I want in GIMP, and then “gphoto2 -u filename.png” sends it to the device.
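So the whole round trip, using just the flags mentioned above (from the gphoto2 shipped with Fedora 20):

    gphoto2 -L                # list the files on the frame
    gphoto2 -P                # download all of them
    # resize an image to 128x128 in GIMP, save as PNG, then:
    gphoto2 -u filename.png   # upload it to the frame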

Wheee, it works! Not bad for $3