{"id":1719,"date":"2009-10-10T10:14:35","date_gmt":"2009-10-10T00:14:35","guid":{"rendered":"http:\/\/www.flamingspork.com\/blog\/?p=1719"},"modified":"2014-10-08T09:14:35","modified_gmt":"2014-10-07T23:14:35","slug":"how-many-cpu-cycles-does-a-sql-query-take-or-pagefaults-caused-or-l2-cache-misses-or-cpu-migrations","status":"publish","type":"post","link":"https:\/\/www.flamingspork.com\/blog\/2009\/10\/10\/how-many-cpu-cycles-does-a-sql-query-take-or-pagefaults-caused-or-l2-cache-misses-or-cpu-migrations\/","title":{"rendered":"How many CPU cycles does a SQL query take? (or pagefaults caused&#8230; or L2 cache misses&#8230; or CPU migrations&#8230;)"},"content":{"rendered":"<p>I like profilers. I use them when trying to make software (such as Drizzle) faster. Many profilers suck &#8211; and pretty much all of them are impossible to attach to a running system. Two notable exceptions are oprofile and dtrace (for Linux and Solaris respectively). The downside of oprofile is that it is non trivial to configure and get running and is pretty much all or nothing. Dtrace has the major disadvantage of that it is Solaris specific, so is only available to a minority of our users (and developers).<\/p>\n<p>The new Linux Performance Events interface (perf_event) presents to userspace a nice abstraction of the hardware Performance Monitoring Unit inside the CPU. Typically these are processor specific (i.e. the one in a Core is different than the one in a Core 2) and can only be used by one thing at a time. The perf_events interface lets multiple applications\/threads use the PMU (switching state at context switch as needed), even giving us ratios of how much time we got events for so we can do realistic estimates. 
It also provides some common defines for asking for things like CPU cycles (the value you program into the PMU for that differs per CPU architecture, so an abstraction is rather welcome: it means we don't need `#ifdef __powerpc__` in our userspace code to support PowerPC, just a kernel that does).

Since perf_events gives us an interface to get counts for just our thread, we can map this onto connected Drizzle sessions (with a pool-of-threads type scheduler in Drizzle we'd need a bit of extra code to get things right, but with a thread-per-connection scheduler we get it for free). A simple use of this is to answer the question "How many CPU cycles does this SQL query take?" without counting the CPU cycles spent executing other things (e.g. the 1,000 other SQL queries currently being executed on your database server).

Many of you will now point at the x86 RDTSC instruction and ask why I'm not just using it. RDTSC only tells you "how many cycles since reboot", so taking two TSC readings and computing the difference tells you how many cycles elapsed between the two reads, not how many cycles were spent executing your thread/process. The "cycles executed" value gathered with RDTSC therefore varies between a loaded and an unloaded system. With perf_events, it does not.

So… after talking to paulus about perf_events, I decided to see how I could plug this into Drizzle to start getting interesting numbers out. Sure enough, a little bit of work later I have my first proof-of-concept implementation over in [lp:~stewart-flamingspork/drizzle/perf-events](https://code.launchpad.net/~stewart/drizzle/perf-events). That tree has a `perf_udf()` function which is like `BENCHMARK` except that it returns the number of CPU cycles spent executing the expression N times.
For example, how many CPU cycles does it take to evaluate the SQL expression MD5('pants')?

    drizzle> select perf_udf(1, MD5('pants'));
    +---------------------------+
    | perf_udf(1, MD5('pants')) |
    +---------------------------+
    |                     43540 |
    +---------------------------+
    1 row in set (0 sec)

So on my laptop it's about 40,000 cycles (over multiple runs I saw it vary between about 39,000 and 44,000). The really neat thing about using the perf_events interface is that if you run this on a different CPU architecture whose kernel has perf_events available, it "just works". E.g. if I ran this on a POWER5 box, I'd find out how many cycles it took there! No special code in Drizzle, yay!

The next really neat thing I tried was running the same perf_udf query while drizzleslap was completely hammering the same database server. I get **exactly the same result** (within normal variance)!

That isn't the best part though.
The best part is the other bits of information we can get out of the PMU:

- PERF_COUNT_HW_CPU_CYCLES
- PERF_COUNT_HW_INSTRUCTIONS
- PERF_COUNT_HW_CACHE_REFERENCES
- PERF_COUNT_HW_CACHE_MISSES
- PERF_COUNT_HW_BRANCH_INSTRUCTIONS
- PERF_COUNT_HW_BRANCH_MISSES
- PERF_COUNT_HW_BUS_CYCLES

So in the same way we can use the new 'perf' utility to see what a process is doing to the machine, we can let people do exactly the same thing for specific SQL queries (and, with a bit of extra code, aggregate per user or application). Not only that, but we could write a plugin for Drizzle to occasionally sample queries running through the database server and build up a quite complete profile of what's going on over time.

We can also get software events out of perf_events, such as:

- PERF_COUNT_SW_CPU_CLOCK
- PERF_COUNT_SW_TASK_CLOCK
- PERF_COUNT_SW_PAGE_FAULTS
- PERF_COUNT_SW_CONTEXT_SWITCHES
- PERF_COUNT_SW_CPU_MIGRATIONS
- PERF_COUNT_SW_PAGE_FAULTS_MIN
- PERF_COUNT_SW_PAGE_FAULTS_MAJ

So for engines that memory-map files on disk, we can find out which of your SQL queries are causing the page faults! We should also be able to find out whether the operating system kernel is bouncing your execution threads around between CPUs a lot.

The biggest possibility for awesomeness comes from the perf_event ability to get periodic call traces (at a frequency you specify), including down into the kernel. This means that we could, on demand and only when we enable it, profile where time is being spent for one specific SQL query. The most important thing to note is that when this gathering is not enabled, the overhead is **zero**. Even when enabled for one query, it should have minimal impact on everything else currently running (you're going to use some extra CPU doing the profile, but we're only profiling that one SQL query, not the 1,000 others executing at the same time). So we could tell you that your query spends 9/10ths of its time in filesort() without ever adding any extra instructions to the filesort() codepath (or anywhere else you may want to know how much CPU time was spent). We could even tell you how much time was being spent in the kernel doing specific IO operations!

So I'm thinking this could be used to make something pretty useful that could easily be run on production systems, thanks to the zero overhead when not enabled and the small overhead when enabled.