Dodge Avenger furthers the stereotypes of American cars

“Every time it goes around a corner you are going to die” would roughly be an accurate statement.

This is the car I’ve been driving for the past week. Absolutely no feeling in the steering at all (in fact less feeling when cornering).

It also bongs at you seemingly randomly. It especially likes not turning off the headlights when you get out of the car – even after you’ve locked it.

I’m sure it has torque… but only one of them.

Horsepower… sure, just not sure about plural. Foot to the floor and it kind of sits there wondering what you mean.

All this talk of “US carmakers failing” seems to be not a moment too soon.

I think I’m going to have to put a ban on driving American-made cars… except perhaps the Ford GT… I wonder if MySQL Sun Oracle would have a problem with that :)

Feedback from MySQL Cluster tutorial

Way back on Monday (at the MySQL Conference and Expo), I gave a full-day tutorial on MySQL Cluster. I awoke early in the morning to an “oh ha ha” URL in an IM; but no, it wasn’t jetlag playing tricks with me. Luckily, this didn’t take much (if anything) away from the purpose of the day: teaching people about NDB.

Distracting-and-this-time-really-annoying-thing-of-the-day-2: It seems that O’Reilly had cut back on power this year, and there were no power boards in the room. A full-day interactive tutorial, and nowhere to plug in laptops. Hrrm… Luckily, over the many years I’ve been speaking at this event I’ve gotten to know the AV guys, so I asked them. They totally deserve a medal. The tutorial started at 8:30, I noticed the problem at 7:30, and it was all fixed by 7:45. The front half of the room had enough power for everyone coming, and bunching everybody up was quite okay – it meant I had to run around less.

This year’s tutorial was modified from last year’s (and that does take time, even though I’ve given it many times before). I wanted to remove out-of-date material, trim bits down (to better fit the time we have, based on more experience of how long the interactive parts take) and add a bit.

When we got to the end of the day (yes, I ran over… and everybody stayed, so either I’m really scary or the material is really interesting) I pleaded for feedback. It’s amazingly scary doing an interactive tutorial. You’re placing the success of the session not so much on you, but on everyone who’s come to it.

Sometimes I haven’t gotten much feedback at all; this time was different. I spoke to a number of people afterwards (and some via email) and got some really good suggestions for small changes that would have greatly enhanced the day for them. I was pleased that they also really enjoyed the tutorial and liked the interactivity. I (and it seems a great many others) don’t much like tutorials that are just long talks.

People walked out of my tutorial with a good overview of what MySQL Cluster was, how to set one up, use one, do a bit of admin and some of how it works.

I even dragged Jonas up to explain in great detail the two-phase commit protocol for transactions. Of course, this is detail you don’t ever need to know to deploy – but people are interested in internals.

So far the session has received an average of 4 stars in evaluations (four five-star, two four-star and one two-star). I’d be really interested in feedback from the person who gave two stars, as it may mean I missed getting something done for them (e.g. providing information, help etc). Even though it is hard to spread yourself around a room of 60-ish-plus people, I do like to do it well. There is also the possibility of people not coming prepared, which means they may be bored for a lot of the day if they don’t jump in with another group and learn that way.

So, I’m rather happy with how my first session went.

Drizzle low-hanging-fruit

We have an ongoing Drizzle milestone called low-hanging-fruit. The idea is that when there’s something that could be done, but we don’t quite have the time to do it immediately, we add a low-hanging-fruit blueprint so that people looking to get a start on the codebase and contribute code to Drizzle have a place to go to find things to do.

Some of my personal favourites are:

Writing some plugins is also relatively low-hanging fruit. Some simple plugin types include:

  • Authentication
    Got somewhere that you could authenticate against for connecting to a DB? Write a plugin for it! Current auth plugins are auth_http and auth_pam.

    • Perhaps you want to authenticate against a central DB, checking in memcached first?
    • Perhaps an htaccess-style method
  • Functions
    Apply some function to a column. These are pretty simple to write (see the md5 and compress examples). Perhaps interfaces to encryption/decryption, or a hashing function? There’s a small sketch of the idea just after this list.

    • ROT13
    • 3DES
    • AES
      Bonus points if you get any of these to use the T2000 crypto accelerator stuff
    • ID3 tag decoding
    • file type detection (well.. BLOB)
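
As a taste of how small these function plugins can be, here’s a rough, hypothetical sketch of the core of a ROT13 function – just the string transform itself. The actual Drizzle plugin glue (the function class and its registration in the plugin initializer) is left out; the md5 and compress plugins in the tree show the real boilerplate.

/* rot13_core.cc - hypothetical core of a ROT13 function plugin (sketch only) */
#include <string>

static std::string rot13(const std::string &in)
{
  std::string out(in);
  for (std::string::size_type i= 0; i < out.size(); i++)
  {
    char c= out[i];
    if (c >= 'a' && c <= 'z')
      out[i]= static_cast<char>('a' + (c - 'a' + 13) % 26);
    else if (c >= 'A' && c <= 'Z')
      out[i]= static_cast<char>('A' + (c - 'A' + 13) % 26);
  }
  return out;
}

Inside a real plugin this transform would sit in the function’s string-returning evaluation method, with the registration boilerplate making it callable as ROT13() from SQL.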

So there’s a fair bit you can do to get started. Best of all, you can chat with the Drizzle developers next week at the MySQL Conference and Expo and Drizzle Developer Day.

Drizzle Developer Day reminder

We’re having a Drizzle Developer Day just after the MySQL Conference and Expo next week. You don’t have to be attending the conference to come to the Drizzle Developer Day. Just bring your enthusiasm for free databases, Drizzle and good software. Spaces are limited, so head on over to the signup page and fill in your name if you haven’t already.

If you’re coming from the Hyatt Regency Santa Clara (where the MySQL Conference and Expo is), I at least will be driving from there, so let me know if you want a lift.

Using DTrace to find out if the hardware or Solaris is slow (but really just working around the problem)

A little while ago, I was the brave soul tasked with making sure Drizzle was working properly and passing all tests on Solaris and OpenSolaris. Brian recently blogged about some of the advantages of also running on Solaris and the SunStudio compilers – more warnings from the compiler is a good thing. Many kudos go to Monty Taylor for being the brave soul who fixed most of the compiler warnings for the SunStudio compilers (and for us, warnings=errors – so we have to fix them) before I got to making the tests work.

So, I got to the end of it all and got pointed to an OpenSolaris x86 box where the drizzleslap test was timing out. The timeout for tests is some amazingly long amount of time – 15 minutes. All the drizzle-test-run tests are rather short tests.

To make running the tests quick, I usually LD_PRELOAD libeatmydata – a simple way of disabling pesky things like fsync that take a long time (rumors that I nickname it libmacosxsimulation are entirely true). It’s pretty simple to build libeatmydata on Solaris too (I periodically do this and always intend to check in the associated Makefile but never do).
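
For the curious, the core of the libeatmydata trick is tiny. Here’s a rough sketch (my own, not the real library – which also intercepts sync(), O_SYNC opens and friends): an LD_PRELOAD shared object that makes fsync() and fdatasync() return success immediately without doing anything.

/* eatmydata-sketch.cc
   build: g++ -shared -fPIC -o eatmydata-sketch.so eatmydata-sketch.cc
   use:   LD_PRELOAD=./eatmydata-sketch.so ./some-program */
#include <unistd.h>

extern "C" int fsync(int)
{
  return 0;  /* claim the data is on disk without ever syncing it */
}

extern "C" int fdatasync(int)
{
  return 0;
}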

Unfortunately, on OpenSolaris a bunch of things are built 32bit and others 64bit and just doing “LD_PRELOAD=libeatmydata.so ./dtr” doesn’t work – I’d have to modify the test script to only do the LD_PRELOAD for drizzled – which is annoying.

On my T1000 running Debian, the drizzleslap test takes 42 seconds to complete with libeatmydata, or 393 seconds when it’s really doing fsyncs. So for it to be timing out on this OpenSolaris x86 box – i.e. taking more than 15 minutes, was strange.

So… what was going on? Step 1: is anything actually going on? One way to test this is to see whether disk IO is being generated. On Linux, we can use “iostat”. On Solaris, we can use “zpool iostat”. Things were going to disk for the whole time of the test. Time to work out what the difference between the platforms is.

Well… a typical way tests end up taking forever is lots of transactions: i.e. lots of fsync() calls. You are then dependent on fsync() performance.
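
If you just want a rough feel for fsync() cost on a particular box (without dragging the whole test suite into it), a tiny microbenchmark like this is enough – this is my own hypothetical sketch, not part of drizzle-test-run, and the file name and iteration count are arbitrary:

/* fsync-bench.cc - time a run of small write()+fsync() pairs */
#include <sys/time.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
  const int iterations= 100;
  int fd= open("fsync-test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0600);
  if (fd < 0)
  {
    perror("open");
    return 1;
  }

  char buf[512];
  memset(buf, 0, sizeof(buf));

  struct timeval start, end;
  gettimeofday(&start, NULL);
  for (int i= 0; i < iterations; i++)
  {
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf) || fsync(fd) != 0)
    {
      perror("write/fsync");
      return 1;
    }
  }
  gettimeofday(&end, NULL);

  double elapsed_ms= (end.tv_sec - start.tv_sec) * 1000.0
                     + (end.tv_usec - start.tv_usec) / 1000.0;
  printf("%d write()+fsync() pairs in %.1fms (%.2fms per fsync)\n",
         iterations, elapsed_ms, elapsed_ms / iterations);
  close(fd);
  return 0;
}

Run it on the same filesystem the test data lives on; comparing its output between the Linux and OpenSolaris boxes makes it pretty obvious whether you’re looking at a filesystem or driver problem before you even reach for DTrace.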

If we look at “iostat -x” and the avgrq-sz field on Linux, we’ll see that the average request size is 10-12 sectors (512-byte blocks), i.e. about 5 or 6KB.

If we look at “zpool iostat 1” on OpenSolaris, we see a bit of a different story, but similar enough that you could safely assume lots of small synchronous IOs were going on. After a bit of reading of the ZFS on-disk format documents, I had a slightly better idea of what could be causing the larger average request size I was seeing on ZFS compared to XFS on Linux.

So… perhaps it’s the speed of these syncs? Ordinarily, I’d just write up a quick LD_PRELOAD library that wraps fsync() and times it (perhaps writing to a file so I could do analysis on it later). Since I was working on Solaris… I thought I’d try DTrace. Some google-foo and dtrace hacking later, I tried this:

stewart@drizzle-dev:~/drizzle/sparc$ time pfexec dtrace -n 'syscall::fdsync:entry /execname == "drizzled"/ { self->ts[self->stack++] = timestamp; } syscall::fdsync:return /self->ts[self->stack - 1]/ { this->elapsed = timestamp - self->ts[--self->stack]; @[probefunc] = count(); @a[probefunc] = quantize(this->elapsed); self->ts[self->stack] = 0; }'

dtrace: description 'syscall::fdsync:entry ' matched 2 probes
^C

  fdsync                                                         1600
  fdsync
           value  ------------- Distribution ------------- count
        33554432 |                                         0
        67108864 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   1520
       134217728 |@@                                       79
       268435456 |                                         1
       536870912 |                                         0        

real	4m26.837s
user	0m0.657s
sys	0m0.566s

That did seem like an awfully long time for an fsync() to take: the quantize buckets are in nanoseconds, so the bulk of those fdsync() calls were taking somewhere between 67ms and 134ms each. Although the filesystem was on a single disk, the disk was supposedly made relatively recently and it’s sitting behind a Sun controller… so it should do a bit better than that. From reading some of the ZFS on-disk spec, it could be some bug that means we’re waiting for a checkpoint to be written instead of forcing the sync out when we call fsync() – but I sought another solution (on other Solaris/OpenSolaris systems this wasn’t a problem, so perhaps it’s fixed in newer kernels, or it’s a driver issue).

So I went and added “--commit=100” to a bunch of places in the drizzleslap test to batch things into transactions, the idea being to greatly reduce the number of fsync() calls and bring the execution time of the drizzleslap test on that machine below 15 minutes. A bit of jiggerypokery later (some tests needed to not have the --commit to avoid various locking foo) and I had something that should run.

Now it’s ~113 seconds on the T1000 on Linux (with a single SATA disk, down from the original 393 seconds) and ~437 seconds on the OpenSolaris box. For giggles, I tried it on a Solaris box running UFS on a 10k RPM SAS drive: ~44 seconds.

In Summary:

  • T1000, Linux, libeatmydata, XFS: ~42 seconds (before the --commit changes)
  • T1000, Linux, 7200RPM SATA, XFS: ~113 seconds
  • T5240, Solaris 10, 10k RPM SAS, UFS: ~44 seconds
  • 16 core Xeon, OpenSolaris, 7200RPM, ZFS: ~437 seconds

So, on that hardware setup – something is strange. The 10k SAS drive on UFS on the CoolThreads box is really nice though…. makes me want that kind of disk here.

This page was useful, and I used it as a basis for some of my DTrace scripts: http://fav.or.it/post/1146360/dtrace-and-the-mighty-hercules

Also thanks to several people on #opensolaris on Freenode who helped me out with various Solaris specific commands in tracking this down.