Pluggable Metadata stores (or… the revenge of table discovery)

Users of the ARCHIVE or NDB storage engines in MySQL may be aware of a MySQL feature known as “table discovery”. For ARCHIVE, you can copy the archive data file around between servers and it magically works (you don’t need to copy the FRM). For MySQL Cluster (NDB), it works so that when you CREATE TABLE on one MySQL server, the other MySQL servers can get the FRM for that table from the cluster.

With my work to replace the FRM with a protobuf structure in Drizzle and clean up parts of the API around it, this feature didn’t really survive in any working state.

Instead, I’m now doing things closer to the right way: pluggable metadata stores. The idea is that the whole “table proto on disk” code (in MySQL it’s the FRM; in Drizzle we’re now using a protobuf structure) is pluggable and could be replaced by an implementation specific to an engine (e.g. the InnoDB or NDB data dictionaries) or by a different generic one.
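
To make that concrete, here’s a rough sketch (in C++, with entirely made-up class and method names, so not the actual Drizzle API) of the kind of interface a pluggable metadata store could expose:

#include <string>

/* The serialized table definition. In Drizzle this is the protobuf
   Table message; here it's just an opaque blob of bytes. */
typedef std::string TableProtoBlob;

/* Hypothetical interface a metadata store plugin might implement. */
class MetadataStore
{
public:
  virtual ~MetadataStore() {}

  /* Persist the serialized definition of schema.table (CREATE/ALTER). */
  virtual bool storeTableProto(const std::string &schema,
                               const std::string &table,
                               const TableProtoBlob &proto)= 0;

  /* Fetch the serialized definition; returns false if this store
     has never heard of the table. */
  virtual bool fetchTableProto(const std::string &schema,
                               const std::string &table,
                               TableProtoBlob *proto)= 0;

  /* Forget the table ever existed (DROP TABLE). */
  virtual bool dropTableProto(const std::string &schema,
                              const std::string &table)= 0;
};

An engine-specific plugin (say, one backed by the NDB dictionary) would answer the fetch by asking the cluster, which is table discovery done properly rather than bolted on the side.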

Currently, the default plugin does things the same way we’ve been doing them forever: one file per table on disk, in a directory named after the database. The API has a nasty bit at the moment (mmmm… table name encoding), but that’ll be fixed in the future.
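
Purely for illustration (this isn’t the actual Drizzle code, the “.tableproto” extension is invented, and the ugly table name encoding is deliberately glossed over), the default file-per-table store boils down to something like:

#include <fstream>
#include <iterator>
#include <string>

/* Builds something like /var/lib/drizzle/test/t1.tableproto */
static std::string protoPath(const std::string &data_home,
                             const std::string &schema,
                             const std::string &table)
{
  return data_home + "/" + schema + "/" + table + ".tableproto";
}

/* Write the serialized table proto out as the whole file. */
static bool writeTableProto(const std::string &path,
                            const std::string &proto)
{
  std::ofstream out(path.c_str(), std::ios::binary | std::ios::trunc);
  out.write(proto.data(), static_cast<std::streamsize>(proto.size()));
  return out.good();
}

/* Read the whole file back as the serialized proto. */
static bool readTableProto(const std::string &path, std::string *proto)
{
  std::ifstream in(path.c_str(), std::ios::binary);
  if (! in)
    return false;
  proto->assign(std::istreambuf_iterator<char>(in),
                std::istreambuf_iterator<char>());
  return true;
}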

The rest of this week will be dedicated to plugging this into all the bits in the server that manipulate the files manually.

With luck, I’ll have modified the ARCHIVE engine by then too so that there’ll just be the archive data file on disk with the table metadata stored in it.
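
As a sketch of what “metadata stored in the data file” could mean (this is not the real ARCHIVE/azio format, just an assumed layout for illustration), think of a length-prefixed blob in a reserved header region:

#include <cstdio>
#include <stdint.h>
#include <string>

/* Write the serialized table proto as [4-byte length][blob] into a
   reserved header region at the start of the (already open) data file. */
static bool embedTableProto(std::FILE *data_file, const std::string &proto)
{
  uint32_t len= static_cast<uint32_t>(proto.size());
  if (std::fseek(data_file, 0, SEEK_SET) != 0)
    return false;
  if (std::fwrite(&len, sizeof(len), 1, data_file) != 1)
    return false;
  return std::fwrite(proto.data(), 1, proto.size(), data_file) == proto.size();
}

/* Read it back -- this is all "table discovery" needs to do once the
   metadata travels with the data. */
static bool extractTableProto(std::FILE *data_file, std::string *proto)
{
  uint32_t len;
  if (std::fseek(data_file, 0, SEEK_SET) != 0)
    return false;
  if (std::fread(&len, sizeof(len), 1, data_file) != 1)
    return false;
  proto->resize(len);
  return len == 0 ||
         std::fread(&(*proto)[0], 1, len, data_file) == len;
}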

One thought on “Pluggable Metadata stores (or… the revenge of table discovery)”

  1. YES.

    This will make my wormhole engine a lot less tricky. I provide the table structure from a scripting language. In MySQL you have to take the internal table structure, write it to disk and let the discover-handler read the FRM back again, just because that’s how it is exposed. Ugliness.
