FreeBSD on OpenPower

There’s been some work on porting FreeBSD over to run natively on top of OPAL, that is, on bare metal OpenPower machines (not just under KVM).

This is one of four possible things to run natively on an OPAL system:

  1. Linux
  2. hello_world (in skiboot tree)
  3. ppc64le_hello (as I wrote about yesterday)
  4. FreeBSD

It’s great to see that another fully featured OS is getting ported to POWER8 and OPAL. It’s not yet at a stage where you could say it was finished or anything (PCI support is pretty preliminary for example, and fancy things like disks and networking live on PCI).

hello world as ppc64le OPAL payload!

While the in-tree hello_world kernel (originally by me, and Mikey managed to CUT THE BLOAT of a whole SEVENTEEN instructions down to a tiny ten) is very, very dumb (it does exactly one thing: print “Hello World” to the console), there’s now an alternative for those who’d like to play with a more feature-rich Hello World rather than booting a more “real” OS such as Linux. In case you’re wondering, we use the hello world kernel as a tiny test that we haven’t completely and utterly broken things when merging/developing code.

https://github.com/andreiw/ppc64le_hello is a wonderful example of a small (INTERACTIVE!) starting point for a PowerNV (as it’s called in Linux) or “bare metal” (i.e. non-virtualised) OS on POWER.

What’s more impressive is that this was all developed using the simulator rather than real hardware (although I think somebody has tried it on some now).

Kind of neat!

gcov code coverage for OpenPower firmware

For skiboot (which provides the OPAL boot and runtime firmware for OpenPower machines), I’ve been pretty interested in getting some automated code coverage data for booting on real hardware (as well as in a simulator). Why? Well, it’s useful to see that various test suites are actually testing what you think they are, and it helps you define more tests to increase what you’re covering.

The typical way to do code coverage is to make GCC build your program with GCOV, which is pretty simple if you’re a userspace program. You build with gcov enabled, run the program, and at the end you’re left with files on disk that contain all the coverage information for a tool such as lcov to consume. For the Linux kernel, you can also do this, and then extract the GCOV data out of debugfs and get code coverage for all/part of your kernel. It’s a little bit more involved for the kernel, but not too much so.

To achieve this, the kernel has to implement a bunch of stub functions itself (rather than linking to the gcov library), as well as parse the GCOV data structures that GCC generates and emit the gcda files in debugfs when they’re read. Basically, you replace the part of the GCC-generated code that writes the files out. This works really nicely as Linux has fancy things like a VFS and debugfs.

For skiboot, we have no such things. We are firmware, we don’t have a damn file system interface. So, what do we do? Write a userspace utility to parse a dump of the appropriate region of memory, easy! That’s exactly what I did: a (relatively) simple user space app to parse out the gcov gcda files from a skiboot memory image – something we can easily dump out of the simulator, a little more slowly from the FSP on an IBM POWER system, and even directly out of a running system (if you boot a Linux kernel with the appropriate config).
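
If you’re curious what that looks like, here’s a very cut-down sketch of the idea (not the actual tool in the skiboot tree – the real gcov_info layout depends on your GCC version, the struct below is made up, and a big-endian dump parsed on a little-endian host needs byte-swapping too):

/* Cut-down sketch only: SKIBOOT_BASE, the list head address and the
 * struct layout are all assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define SKIBOOT_BASE 0x30000000ULL      /* assumed skiboot load address */

struct gcov_info_stub {                 /* hypothetical; GCC-version dependent */
        uint32_t version;
        uint64_t next;                  /* address of the next gcov_info */
        uint64_t filename;              /* address of the .gcda path string */
};

/* Translate an address inside skiboot into a pointer into the dump. */
static void *dump_ptr(char *dump, long len, uint64_t addr)
{
        if (addr < SKIBOOT_BASE || addr - SKIBOOT_BASE >= (uint64_t)len)
                return NULL;
        return dump + (addr - SKIBOOT_BASE);
}

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <dump> <gcov_info_list_addr>\n", argv[0]);
                return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f)
                return 1;
        fseek(f, 0, SEEK_END);
        long len = ftell(f);
        fseek(f, 0, SEEK_SET);
        char *dump = malloc(len);
        if (fread(dump, 1, len, f) != (size_t)len)
                return 1;
        fclose(f);

        /* Walk the linked list of gcov_info structures, printing the
         * .gcda filename each one would be written out as. */
        uint64_t addr = strtoull(argv[2], NULL, 0);
        while (addr) {
                struct gcov_info_stub *info = dump_ptr(dump, len, addr);
                if (!info)
                        break;
                char *name = dump_ptr(dump, len, info->filename);
                printf("gcov_info at %#llx -> %s\n",
                       (unsigned long long)addr, name ? name : "(bad pointer)");
                addr = info->next;
        }

        free(dump);
        return 0;
}

The real tool then goes on to emit proper gcda files for lcov to chew on; the point is just that once you have a flat memory image and the symbol map, it’s all fairly pedestrian pointer-chasing.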

So, we can now get a (mostly automated) code coverage report simply for the act of booting to petitboot: https://open-power.github.io/skiboot/boot-coverage-report/ along with our old coverage report which was just for the unit tests (https://open-power.github.io/skiboot/coverage-report/). My current boot-coverage-report is just on POWER7 and POWER8 IBM FSP based systems – but you can see that a decent amount of code both is (and isn’t) touched simply from the act of booting to the bootloader.

The numbers we get are only approximate for any code run on more than one CPU as GCC just generates code that does a load/add/store rather than using an atomic increment.
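
To make that concrete, here’s roughly what the instrumentation boils down to (a hand-written sketch, not GCC’s actual output): a plain read-modify-write on a per-arc counter, so two CPUs hitting the same arc at the same time can lose increments. The second function is what exact SMP counts would need – and it’s exactly what GCC doesn’t emit, because it would slow things down.

#include <stdint.h>

uint64_t arc_counter;                   /* one of these per branch/arc */

void hit_arc_as_gcc_does_it(void)
{
        /* load, add, store: concurrent increments on other CPUs can be lost */
        arc_counter = arc_counter + 1;
}

void hit_arc_exactly(void)
{
        /* exact on SMP, but gcov instrumentation doesn't do this */
        __atomic_fetch_add(&arc_counter, 1, __ATOMIC_RELAXED);
}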

One interesting observation was that (at least on smaller systems, which are still quite large by many people’s standards), boot time was not really noticeably increased.

For more information on running with gcov, see the in-tree documentation: https://github.com/open-power/skiboot/blob/master/doc/gcov.txt

Preliminary results from POWER8 optimized CRC32 for MySQL

So, Anton got some useful code working that I could patch into a MySQL server for testing purposes – a POWER8 optimized CRC32 implementation.

I went with a pretty stock MySQL 5.6.22 (one patch) with sysbench preparing a single 2GB table (10,000,000 rows). I then hacked up innochecksum so that it would only do the correct CRC32 (rather than trying each checksum type). Using the standard CRC32 algorithm it took around three seconds to verify all of the checksums. With a POWER8 optimized CRC32: 0.4-0.5 seconds. Useful speed-up!
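
If you want a rough feel for this kind of number on your own hardware, a toy harness like the one below will do (this is not innochecksum – it uses zlib’s crc32() as a stand-in, and InnoDB actually uses CRC-32C, so absolute numbers will differ): it checksums a table file in 16KB pages and tells you how long that took.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

#define PAGE_SIZE 16384         /* InnoDB's default page size */

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <table file>\n", argv[0]);
                return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f)
                return 1;

        unsigned char *page = malloc(PAGE_SIZE);
        struct timespec start, end;
        unsigned long sum = 0;
        size_t n;

        clock_gettime(CLOCK_MONOTONIC, &start);
        while ((n = fread(page, 1, PAGE_SIZE, f)) > 0)
                sum ^= crc32(0L, page, n);      /* stand-in for InnoDB's crc32c */
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("xor of page crcs: %lx, %.2f seconds\n", sum,
               (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9);

        free(page);
        fclose(f);
        return 0;
}

Build it with -lz; a 2GB table is roughly 130,000 such pages.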

I then ran sysbench read/write with 16 threads with oltp-table-size=10000 (on the larger table) to see if there would be an improvement in a “real world” workload. I got about 30% better performance on read/write operations!

Using perf to see where CPU was going, CPU time spent doing CRC32 calculations went down from ~2.5% to ~0.25%!

In theory, we should be able to get about 52GiB/sec of CRC32 out of a 4.1GHz POWER8 core. I don’t think we’ll be hitting this in MySQL any time soon.

Give us another week or two and we’ll likely have a patch that’s ready to merge.

Initial benchmarks look promising though!

Building OpenPower firmware for use in POWER8 Simulator

Previously, I blogged on how to Run skiboot (OPAL) on the POWER8 Simulator. If you want to build the full Open Power firmware environment, including the Petitboot bootloader and kernel, you can now do so!

My pull request for an op-build target for the simulator has been merged, so you can now do the following to compile a kernel+initramfs to use with your built skiboot for development purposes:

git clone --recursive git@github.com:open-power/op-build.git
cd op-build
. op-build-env
op-build mambo_defconfig && op-build

Then you wait for a whole bunch of time while everything compiles! Afterwards, you should be left with a zImage.epapr in output/images/ that you can copy into your skiboot directory.

With zImage.epapr in your skiboot directory, when you run “make check”, the skiboot test suite will actually launch the simulator to verify that your skiboot code boots all the way to the petitboot prompt!

We now have two boot tests as part of “make check” for skiboot!

More OpenPower Firmware code released: OCC

Inside the IBM POWER8 chip there’s another processor! That’s right folks, you get another CPU for no extra cost (it’s a lot funnier if you say these previous two sentences as if you were presenting an infomercial for a special TV offer).

It is, however, not what you’d consider a general purpose processor. It is, in fact, a PowerPC 405 – so your POWER8 processor also has another PowerPC chip in it. What’s the purpose of this chip? It’s named the On Chip Controller and it has the job of helping make the main processor (the POWER8) work.

It has two jobs:

  • Monitor temperature and keep the system thermally safe
  • Monitor power usage and keep the system power safe

It runs a hard real-time OS, which has just been released up on github.com/open-power/occ.

There’s more complete documentation on OCC here.

It’s fairly exciting to see more of the software that runs on every POWER8 system make it out into the world.

skiboot-4.1

I just posted this to the mailing list, but I’ve tagged skiboot-4.1, so we have another release! There’s a good amount of changes since 4.0 nearly a month ago and this is the second release since we hit github back in July.

For the full set of changes, “git log” is your friend, but a summary of them follows:

  • We now build with -fstack-protector and -Werror
  • Stack checking extensions when built with STACK_CHECK=1
  • Reduced stack usage in some areas, -Wstack-usage=1024 now.
    • Some functions could use 2kb stack, now all are <1kb
  • Unsafe libc functions such as sprintf() have been removed
  • Symbolic backtraces
  • expose skiboot symbol map to OS (via device-tree)
  • removed machine check interrupt patching in OPAL
  • occ/hbrt: Call stopOCC() for implementing reset OCC command from FSP
  • occ: Fix the low level ACK message sent to FSP on receiving {RESET/LOAD}_OCC
  • hardening to errors of various FSP code
    • fsp: Avoid NULL dereference in case of invalid class_resp bits
    • abort if device tree parsing fails
    • FSP: Validate fsp_msg in fsp_queue_msg
    • fsp-elog: Add various NULL checks
  • Finessing of when to use error log vs prerror()
  • More i2c work
  • Can now run under Mambo simulator (see external/mambo/skiboot.tcl) (commonly known as “POWER8 Functional Simulator”)
  • Document skiboot versioning scheme
  • opal: Handle more TFAC errors.
    • TB_RESIDUE_ERR, FW_CONTROL_ERR and CHIP_TOD_PARITY_ERR
  • ipmi: populate FRU data
  • rtc: Add a generic rtc cache
  • ipmi/rtc: use generic cache
  • Error Logging backend for bmc based machines
  • PSI: Drive link down on HIR
  • occ: Fix clearing of OCC interrupt on remote fix

So, who worked on this release? We had 84 csets from 17 developers. A total of 3271 lines were added, 1314 removed (delta 1957).

Developers with the most changesets
Stewart Smith 24 28.6%
Benjamin Herrenschmidt 17 20.2%
Alistair Popple 8 9.5%
Vasant Hegde 6 7.1%
Ananth N Mavinakayanahalli 5 6.0%
Neelesh Gupta 4 4.8%
Mahesh Salgaonkar 4 4.8%
Cédric Le Goater 3 3.6%
Wei Yang 3 3.6%
Anshuman Khandual 2 2.4%
Shilpasri G Bhat 2 2.4%
Ryan Grimm 1 1.2%
Anton Blanchard 1 1.2%
Shreyas B. Prabhu 1 1.2%
Joel Stanley 1 1.2%
Vaidyanathan Srinivasan 1 1.2%
Dan Streetman 1 1.2%
Developers with the most changed lines
Benjamin Herrenschmidt 1290 35.1%
Alistair Popple 963 26.2%
Stewart Smith 344 9.4%
Mahesh Salgaonkar 308 8.4%
Ananth N Mavinakayanahalli 198 5.4%
Neelesh Gupta 186 5.1%
Vasant Hegde 122 3.3%
Shilpasri G Bhat 39 1.1%
Vaidyanathan Srinivasan 24 0.7%
Joel Stanley 21 0.6%
Wei Yang 20 0.5%
Anshuman Khandual 15 0.4%
Cédric Le Goater 12 0.3%
Shreyas B. Prabhu 9 0.2%
Ryan Grimm 3 0.1%
Anton Blanchard 2 0.1%
Dan Streetman 2 0.1%
Developers with the most lines removed
Mahesh Salgaonkar 287 21.8%
Developers with the most signoffs (total 54)
Stewart Smith 44 81.5%
Vasant Hegde 4 7.4%
Benjamin Herrenschmidt 4 7.4%
Vaidyanathan Srinivasan 2 3.7%
Developers with the most reviews (total 2)
Vasant Hegde 2 100.0%

Running skiboot (OPAL) on the POWER8 Simulator

skiboot is open source boot and runtime firmware for OpenPOWER. On real POWER8 hardware, you will also need HostBoot to do this (basically, to make the chip work) but in a functional simulator (such as this one released by IBM) you don’t need a bunch of hardware procedures to make hardware work, so we can make do with just skiboot.

The POWER8 Functional Simulator is free to use but not open source and is only supported on limited platforms. But you can always run it all in a VM! I have it running this way on my laptop right now.

To go from a bare Ubuntu 14.10 VM on x86_64 to running skiboot in the simulator, I did the following:

  • apt-get install vim git emacs wget xterm # xterm is needed by the simulator. wget and editors are useful things.
  • (download systemsim-p8…deb from above URL)
  • dpkg -i systemsim-p8*deb # now the simulator is installed
  • git clone https://github.com/open-power/skiboot.git # get skiboot source
  • wget https://www.kernel.org/pub/tools/crosstool/files/bin/x86_64/4.8.0/x86_64-gcc-4.8.0-nolibc_powerpc64-linux.tar.xz # get a compiler to build it with
  • apt-get install make gcc valgrind # get build tools (skiboot unittests run on the host, so get a gcc and valgrind)
  • tar xfJ x86_64-gcc-4.8.0-nolibc_powerpc64-linux.tar.xz
  • mkdir -p /opt/cross
  • mv gcc-4.8.0-nolibc /opt/cross/ # now you have a powerpc64 cross compiler
  • export PATH=/opt/cross/gcc-4.8.0-nolibc/powerpc64-linux/bin/:$PATH # add cross compiler to path
  • cd skiboot
  • make # this should build a bunch of things, leaving you with skiboot.lid (and other things). If you have many CPUs, feel free to make -j128.
  • make check # run the unit tests. Everything should pass.
  • cd external/mambo
  • /opt/ibm/systemsim-p8/run/pegasus/power8 -f skiboot.tcl # run the simulator

The last step there will barf, as you’re unlikely to have a /tmp/zImage.epapr sitting around that’s suitable. If you use op-build to build a full set of OpenPower foo, you’ll likely be able to extract it from there. Basically, the skiboot.tcl script is adding a payload for skiboot to execute. On real hardware, this ends up being a Linux kernel with a small userspace and petitboot (link is to IBM documentation for IBM POWER8 systems). For the simulator, you could boot any tiny zImage.epapr you like; it should detect OPALv3 and boot!

Even if you cannot be bothered building a kernel or petitboot environment, if you comment out the associated lines in skiboot.tcl, you should be able to run the simulator and see the skiboot console message come up that says we couldn’t load a kernel.

At this point, congratulations, you can now become an OpenPower firmware hacker without even possessing any POWER8 hardware!

C bitfields considered harmful

In C (and C++) you can specify that a variable should take a specific number of bits of storage by doing “uint32_t foo:4;” rather than just “uint32_t foo”. In this example, the former uses 4 bits while the latter uses 32 bits. This can be useful to pack many bit fields together.

Or, that’s what they’d like you to think.

In reality, the C spec allows the compiler to do just about anything it wants with these bitfields – which usually means it’s something you didn’t expect.

For a start, in a struct – e.g. “struct foo { uint32_t foo:4; uint32_t blah; uint32_t blergh:20; }” – the compiler could go and combine foo and blergh into a single uint32_t and place it somewhere… or it could not. In this case, sizeof(struct foo) isn’t fixed by the standard and may vary based on compiler, platform, compiler version, phases of the moon or whether you’ve washed your hands recently.
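
You can see this for yourself with a toy like the one below – whatever number it prints on your machine, don’t go writing code that depends on it:

#include <stdio.h>
#include <stdint.h>

struct foo {
        uint32_t foo:4;
        uint32_t blah;
        uint32_t blergh:20;
};

int main(void)
{
        /* How the 4 bit and 20 bit members get packed (and padded) around
         * 'blah' is up to the implementation, so this is not a number you
         * can rely on across compilers, platforms or compiler versions. */
        printf("sizeof(struct foo) = %zu\n", sizeof(struct foo));
        return 0;
}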

Where this can get interesting is in network protocols (OMG DO NOT DO IT), APIs (OMG DO NOT DO IT), protecting different parts of a struct with different mutexes (EEP, don’t do it!) and performance.

I recently filed MySQL bug 74831, which relates to InnoDB performance on POWER8. InnoDB uses C bitfields that are themselves packed full of bit flags (urgh), for things like the flag to say if this table is compressed. At various parts of the code, this flag is checked.

When you apply this simple patch:

--- mysql-5.7.5-m15.orig/storage/innobase/include/dict0mem.h
+++ mysql-5.7.5-m15/storage/innobase/include/dict0mem.h
@@ -1081,7 +1081,7 @@ struct dict_table_t {
        Use DICT_TF_GET_COMPACT(), DICT_TF_GET_ZIP_SSIZE(),
        DICT_TF_HAS_ATOMIC_BLOBS() and DICT_TF_HAS_DATA_DIR() to parse this
        flag. */
-       unsigned                                flags:DICT_TF_BITS;
+       unsigned                                flags;

I get 10,000 key lookups/sec more than without it!

Why is this? If you go and read the bug, you’ll see that the amount of CPU time spent on the instruction checking the bit flag is actually about the same… and this puzzled me for a while. That is, until Anton reminded me that the PMU can be approximate and perhaps I should look at the loads.

Sure enough, the major difference is that with the bitfield in place (i.e. MySQL 5.7.5 as it stands today), there is an ld instruction doing the load – which is a 64-bit load. In my patched version, it’s an lwz instruction – which is a 32-bit load.

So, basically, we were loading 8 bytes instead of 4 every time we were checking if it was a compressed table.
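
If you want to poke at this yourself, a toy along these lines (just the shape of the InnoDB code, with a made-up flag bit and field width) compiled with -O2 and fed to objdump is enough to see how the compiler’s choice of load changes once the flag stops being a bitfield. Exactly which instruction you get depends on what else lives in the struct, so treat it as a starting point rather than a faithful reproduction:

#include <stdint.h>

#define FLAG_COMPRESSED (1U << 5)       /* made-up flag bit, for illustration */

struct table_bitfield {
        unsigned flags:26;              /* stand-in for flags:DICT_TF_BITS */
        unsigned other:6;
};

struct table_plain {
        unsigned flags;                 /* the patched version */
};

int is_compressed_bitfield(const struct table_bitfield *t)
{
        /* the compiler has to extract the field: possibly a wider load
         * than you expect, plus masking */
        return (t->flags & FLAG_COMPRESSED) != 0;
}

int is_compressed_plain(const struct table_plain *t)
{
        /* a plain 32-bit load plus the test */
        return (t->flags & FLAG_COMPRESSED) != 0;
}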

So, along with yesterday’s lesson of never, ever, ever use volatile, today’s lesson is never, ever, ever use bitfields.

volatile considered harmful

While playing with MySQL 5.7.5 on POWER8, I came across a rather interesting bug (74775 – and this is not the only one… I think I have a decent amount of auditing and patching to do now) which made me want to write a bit on memory barriers and the volatile keyword.

Memory barriers are hard.

Like, super hard. It’s the kind of thing that makes you curse hardware designers, probably because they’re not magically solving all your problems for you. Basically, as you get more CPU cores and each of them have caches, it gets more expensive to keep everything in sync. It’s quite obvious that with *ahem* an eventually consistent model, you could save a bunch of time and effort at the expense of shifting some complexity into software.

Those in the MySQL world should recognize this – we’ve been dealing with asynchronous replication for well over a decade as a good way to scale.

On some CPU architectures (POWER for example) not all loads are created equal. When you load a value from memory, it will be consistent with your thread of execution. That is, with any stores that you have done in this thread of execution. If another thread updates that memory location you may not see that update even if your load occurs after that thread updates that memory location. Think eventually consistent.

If you want up-to-date reads (and not to clobber writes), then you get to do memory barriers! (a topic for elsewhere – the PowerISA document has good explanations of what we have on POWER though, and how load with reserve works).
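
For the curious, load with reserve is how you build atomic read-modify-write operations on POWER. An atomic increment looks something like the sketch below (POWER-only inline asm, much the same shape as what the Linux kernel does) – and note it only gives you atomicity; you still need the appropriate sync/lwsync/isync around it if you’re building locks:

static inline void atomic_inc(unsigned int *v)
{
        unsigned int tmp;

        __asm__ __volatile__(
"1:     lwarx   %0,0,%2\n"      /* load word and reserve         */
"       addi    %0,%0,1\n"      /* increment                     */
"       stwcx.  %0,0,%2\n"      /* store only if still reserved  */
"       bne-    1b\n"           /* reservation lost? try again   */
        : "=&r" (tmp), "+m" (*v)
        : "r" (v)
        : "cc");
}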

What the volatile keyword does is generate load and store instructions. It is useful when talking to hardware, as the load and store instructions are actually doing something there that the compiler doesn’t know about and thus shouldn’t optimize away.

The volatile keyword does not add any memory barriers. This is important to realize – volatile just makes loads and stores happen for your thread, not in relation to any other threads of execution. Thus, you cannot use volatile as a thread synchronization mechanism at all. It is completely and totally wrong.

Basically, if you have a volatile variable and you do stores to it in one thread and loads in another, after the store happens, it could be quite a long time before the thread doing the loads sees it! For some applications this may be okay (although I can’t really think of any beyond very very inaccurate status variables)… but if it matters at all for application correctness, volatile is the wrong thing to use.
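
To put some code to it, here’s the classic wrong pattern next to what you actually want (a sketch using C11 atomics; with release/acquire the compiler inserts the required barriers for you on POWER):

#include <stdatomic.h>
#include <stdbool.h>

/* Wrong: volatile makes the loads and stores happen, but says nothing
 * about ordering or visibility between threads. */
volatile bool ready_broken;
int payload_broken;

/* Right(er): release/acquire atomics give you both. */
atomic_bool ready;
int payload;

void producer(void)
{
        payload = 42;
        atomic_store_explicit(&ready, true, memory_order_release);
}

int consumer(void)
{
        while (!atomic_load_explicit(&ready, memory_order_acquire))
                ;                       /* spin until the flag is set */
        return payload;                 /* guaranteed to see 42 */
}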


Preliminary MySQL Cluster benchmark results on POWER8

Yesterday, I got the basics going for MySQL Cluster on POWER. Today, I finished up a couple more patches to improve performance and ran some benchmarks.

This is on a 3.7GHz POWER8 machine with non-balanced memory (only 2 of the 4 NUMA nodes have memory, so we have less total memory bandwidth than we could have, plus I’m going to bind ndbmtd to the CPUs in these NUMA nodes).

With a setup of a single replica and two data nodes on the one machine (each bound to a specific NUMA node), running the flexAsync benchmark on MySQL Cluster 7.3.7, I could get around:

  • 3.2 million reads/sec
  • 2.6 million deletes/sec
  • 2.4 million updates/sec
  • 2.4 million inserts/sec.

So, that’s at least in the right ballpark for a first go.

(I’m running this on a big endian host kernel, some random kernel I booted on the box and built with gcc 4.8 with whatever build options the MySQL Cluster cmake foo chooses by default)

MySQL Cluster on POWER8

So, I’ve written previously on MySQL on POWER, and today is a quick bit of news about MySQL Cluster on POWER – specifically MySQL Cluster 7.3.7.

I ran into three main issues in getting some flexAsync benchmark results. One of them was the fact that I wanted to do this right in the middle of the POWER8 machines I usually use being moved between buildings (it’s hard to run benchmarks when the computers are packed up in boxes on a truck).

The next issue was that ndbmtd (the multi-threaded data node) needs memory barriers for the magic message passing stuff between threads. So, that’s pretty easy (about an eight line patch).
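
I won’t reproduce the patch here, but the shape of it is the usual set of PowerPC barrier wrappers, roughly like this (a sketch – go read the Power ISA before trusting my one-line summaries):

/* Full barrier: orders all loads and stores (expensive). */
#define mb()    __asm__ __volatile__ ("sync"   : : : "memory")

/* Lightweight barrier: orders everything except store-then-load. */
#define wmb()   __asm__ __volatile__ ("lwsync" : : : "memory")
#define rmb()   __asm__ __volatile__ ("lwsync" : : : "memory")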

The next issue was in the results from flexAsync itself: it turns out 32-bit math is a bad idea with the kind of numbers my POWER8 box produces.
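
The failure mode is the boring one – multiply or add enough operations together in a 32-bit integer and it silently wraps. A toy version (made-up numbers, not the flexAsync code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t ops_per_sec = 3200000;         /* ~3.2M ops/sec    */
        uint32_t seconds = 3600;                /* an hour-long run */

        uint32_t total32 = ops_per_sec * seconds;           /* wraps at 2^32 */
        uint64_t total64 = (uint64_t)ops_per_sec * seconds;

        printf("32-bit total: %u (wrapped, wrong)\n", total32);
        printf("64-bit total: %llu\n", (unsigned long long)total64);
        return 0;
}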

My preliminary performance numbers are fairly promising (actually… what is the world record for a single machine and NDB these days? Single data node?). I think there’s a bit more low hanging fruit and a couple more things that are a bit more involved.

Bugs with patches:

  • Bug 74782 – compile fix (memory barriers for POWER)
  • Bug 74781 – flexAsync uses 32bit math, leading to incorrect summary on POWER8

New libeatmydata release: 105

Over on the project page and on launchpad you can now download libeatmydata 105.

This release fixes a couple of bugs that came in via the Debian project, including a rather interesting one where some binaries don’t run .so ctors to properly init libeatmydata, and the code path in the libeatmydata open() didn’t really deal with being called first in that situation.

Enjoy!

CFP for Developer, Testing, Release and Continuous Integration Automation Miniconf at linux.conf.au 2015

This is the Call for Papers for the Developer, Testing, Release and Continuous Integration Automation Miniconf at linux.conf.au 2015 in Auckland. See http://linux.conf.au

This miniconf is all about improving the way we produce, collaborate, test and release software.

We want to cover tools and techniques to improve the way we work together to produce higher quality software:

– code review tools and techniques (e.g. gerrit)
– continuous integration tools (e.g. jenkins)
– CI techniques (e.g. gated trunk, zuul)
– testing tools and techniques (e.g. subunit, fuzz testing tools)
– release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.
– applying CI in your workplace/project

We’re looking for talks about open source technology *and* the human side of things.

Speakers at this miniconf must be registered for the main conference (although there are a limited number of miniconf-only tickets available for miniconf speakers if required).

There will be a projector, and there is a possibility the talk will be recorded (depending on whether the conference A/V is up and running) – if recorded, talks will be posted in the same place, with the same CC license, as the main LCA talks.

CFP is open until midnight November 21st 2014.

By submitting a presentation, you’re agreeing to the following:

I allow Linux Australia to record my talk.

I allow Linux Australia to release any recordings of my presentations, tutorials and miniconfs under the Creative Commons Attribution-Share Alike License.

I allow Linux Australia to release any other material (such as slides) from my presentations, tutorials and miniconfs under the Creative Commons Attribution-Share Alike License.

I confirm that I have the authority to allow Linux Australia to release the above material. That is, if your talk includes any information about your employer, or another person’s copyrighted material, that person has given you authority to release this information.
Any questions? Contact me: stewart@flamingspork.com

 

http://goo.gl/forms/KZI1YDDw8n

Is Python the new BASIC

Today I managed to finally find a way to express what I’ve been thinking for a while: “Python is the new BASIC”. Think about it: it’s easy to get started in, there’s books and tutorials on it everywhere, a bunch of real world software is actually written in it and with all the different versions and modules (and versions of modules) there’s a billion subtle differences to trip you up.

There’s also the group of people (like me) who don’t particularly like it, for a bunch of quite valid reasons. The lack of static typing is a huge barrier for me.

I am of the opinion that the ideal language with the ideal compiler would not let buggy code compile. It may not be as easy to program in this hypothetical language, but seeing as code has to exist and be debugged for orders of magnitude more time than it takes to write it, making it harder to write bugs is a good thing. After all, my experience with Python apps is that bugs manifest themselves at run time, to the user, rather than to the developer at the time of writing. Also, a compiler error is better than a unit test failure.

Discuss.

OpenPower firmware up on github!

With the whole OpenPower thing, a lot of low level firmware is being open sourced, which is really exciting for the platform – the less proprietary code sitting in memory the better in my books.

If you go to https://github.com/open-power you’ll see code for a bunch of the low level firmware for OpenPower and POWER8.

Hostboot is the bit of code that brings up the CPU, while skiboot both sets up hardware and provides runtime services to Linux (such as talking to the service processor, if one is present).

Patches to https://github.com/open-power/skiboot/blob/master/doc/overview.txt are (of course) really quite welcome. It shouldn’t be too hard to get your head around the basics.

To see the Linux side of the OPAL interface, go check out linux/arch/powerpc/platforms/powernv – there you can see how we ask OPAL to do things for us.

If you buy a POWER8 system from IBM running PowerKVM you’re running this code.

Update on MySQL on POWER8

About 1.5 months ago I blogged on MySQL 5.6 on POWER and talked about what I had to poke at to make modern MySQL versions run and run well on shiny POWER8 systems.

One of those bugs, MySQL bug 47213 (InnoDB mutex/rw_lock should be conscious of memory ordering other than Intel) was recently marked as CLOSED by the Oracle MySQL team and the upcoming 5.6.20 and 5.7.5 releases should have the fix!

This is excellent news for those wanting to run MySQL on SMP systems that don’t have an Intel-like memory model (e.g. POWER and MIPS64).

This was the most major and invasive patch in the patchset for MySQL on POWER. It’s absolutely fantastic that this has made it into 5.6.20 and 5.7.5 and may mean that these new versions will work out-of-the-box on POWER (I haven’t checked… but from glancing back at my patchset there was only one other patch that could be related to correctness rather than performance).

Performance impact of MySQL query cache on modern hardware

Recently, Morgan has been writing on deprecating some MySQL features and inspired by that while working on MySQL on POWER, I wondered “What is the impact of the MySQL query cache on modern hardware?”

We’ve known for over six years (since before we started Drizzle) that the query cache hurt performance. It was for that reason that the query cache was one of the early things to be removed from Drizzle; it just didn’t scale on the multi-core systems we were targeting.

So what about modern hardware? While working on MySQL 5.6 on POWER8, I enabled the query cache and ran a benchmark. Enabling the query cache reduced performance by an order of magnitude. I suspect the performance impact is even higher on MySQL 5.7.

My vote? Deprecate it. Deprecate it now, print a giant warning if it’s enabled and at some point just remove it. A single global mutex just doesn’t scale to 4 cores, let alone 24 cores at 8 threads per core.

1 million SQL Queries Per Second: MySQL 5.7 on POWER8

I’ve previously covered MySQL 5.6 on POWER (with patch), MySQL 5.6 Performance on POWER8 (spoiler: new performance record) and MySQL 5.7 on POWER.

Of course, the postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions. Also, these numbers should be considered preliminary, but trust me – I did get them and it’s not April 1st.

From my last post, you saw that with my preliminary patch for MySQL 5.7 to work on POWER, we could easily match the previous record for sysbench point select queries per second (i.e. key lookups). In fact, we could exceed the published record by a little bit which is kind of nice. At around 630kQPS, one could be rather happy.

But we still had 30-40% idle CPU on POWER8. This led me to file the following bug report:

  • Bug 72829: LOCK_grant is major contention point, leaves 30-40% idle CPU.

What’s going on is that there’s a rwlock in the MySQL Server that ensures that writers don’t collide with readers to the data structures describing the GRANTs (i.e. who has access to what). If you run a GRANT statement, it gets a writer lock, and nobody can read (i.e. check permissions) while everything is being updated. If you run a normal SQL statement, you get a read lock (non-exclusive) and can check permissions appropriately.

It’s been known for a long time that LOCK_grant was a bottleneck. Typically, some people have run with skip-grant-tables to help shorten the time the lock is held (as in MySQL you still take the mutex even though you’ve started the server with skip-grant-tables).

In Drizzle, we fixed that – moving authentication and authorization completely behind plugin APIs and if you didn’t load plugins for them, you executed near enough to zero instructions that it didn’t matter.

In my experiments, enabling skip-grant-tables actually hurt performance rather than helped. More investigation is needed, but it seems that simply the act of acquiring and releasing the rdlock is now a major bottleneck in some benchmarks (such as sysbench point select).

It turns out that this is a well known problem in other pieces of software (e.g. Linux kernel) and is pretty much what RCU (Read Copy Update) is best at. As far back as 2006 I remember attempting to get my head around RCU so that one day we could use it in MySQL or MySQL Cluster.

Another simpler method is simply splitting the mutex, with readers able to acquire any one of N mutexes and writers needing to acquire them all. This penalizes writers, but unless you’re executing a lot of GRANTs, you’re probably safe.
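
The split-lock trick looks roughly like the sketch below (the technique in general, not a patch against LOCK_grant – the names and the partition count are made up). Readers take one lock out of N, picked by something cheap like a hash of the thread id, so they stop fighting over a single cache line; writers pay by taking all N, which is fine because GRANT statements are rare:

#include <pthread.h>

#define N_LOCKS 32      /* arbitrary; more partitions = less reader contention */

static pthread_rwlock_t grant_locks[N_LOCKS];

static void grant_locks_init(void)
{
        for (int i = 0; i < N_LOCKS; i++)
                pthread_rwlock_init(&grant_locks[i], NULL);
}

/* Read side: normal statements checking privileges. */
static void grant_read_lock(unsigned thd_id)
{
        pthread_rwlock_rdlock(&grant_locks[thd_id % N_LOCKS]);
}

static void grant_read_unlock(unsigned thd_id)
{
        pthread_rwlock_unlock(&grant_locks[thd_id % N_LOCKS]);
}

/* Write side: GRANT/REVOKE. Always take the locks in the same order
 * so two writers can't deadlock against each other. */
static void grant_write_lock(void)
{
        for (int i = 0; i < N_LOCKS; i++)
                pthread_rwlock_wrlock(&grant_locks[i]);
}

static void grant_write_unlock(void)
{
        for (int i = N_LOCKS - 1; i >= 0; i--)
                pthread_rwlock_unlock(&grant_locks[i]);
}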

So… what is the theoretical maximum performance if this bottleneck went away?

I wrote a quick patch that just commented out the rdlock acquisition of LOCK_grant in the hot codepath of sysbench point selects. I wasn’t running GRANT statements at runtime so this was “safe”.

This patch is not production ready, it’s merely useful for demonstrating where we could be with MySQL 5.7 on POWER8 if one last bottleneck is fixed.

My results? Slightly over ONE MILLION QUERIES PER SECOND!

This is roughly twice the previous record.

This is with a dual socket 24 core POWER8 with SMT8 and DSCR=1, on 8 tables, with sysbench 0.4.8. Sysbench itself is using a non-trivial amount of CPU and I could probably decently beat this number if I rewrote sysbench using the nonblocking API in libdrizzle (back when we made the Drizzle performance regression tests use a libdrizzle-ified sysbench, we got a double-digit percentage improvement in our sysbench numbers).

There’s still around 7-10% idle CPU time… so there’s more room to grow.

Lacking a physical gauntlet to throw down, I’ll just have to submit a conference paper somewhere so that I can do that in person.

I really hope that we’re able to fix this bottleneck in MySQL 5.7 so that MySQL 5.7 will ship being able to do over a million queries per second. From SQL.