Benchee 0.14.0 – Micro Benchmarks? Pah, how about Nano Benchmarks!

Long time since the last benchee release, heh? Well, this one really packs a punch to compensate! It brings you higher precision when measuring run times as well as a better way to specify formatter options. Let’s dive into the most notable changes here; the full list of changes can be found in the Changelog.

Of course, all formatters are also released in compatible versions.

Nanosecond precision measurements

Or in other words making measurements 1000 times more precise 💥

This new version gives you much more precision which matters especially if you benchmark very fast functions. It even enables you to see when the compiler might completely optimize an operation away. Let’s take a look at this in action:
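
A minimal sketch of such a benchmark (job names and values are illustrative, not the exact code from the release):

    range = 1..10
    integer1 = :rand.uniform(100)
    integer2 = :rand.uniform(100)

    Benchee.run(%{
      # constant at compile time – the compiler can optimize this away
      "Integer addition (wrong)" => fn -> 1 + 1 end,
      # operands only known at runtime, so the work has to happen
      "Integer addition" => fn -> integer1 + integer2 end,
      # a small but real amount of work
      "Enum.map (10)" => fn -> Enum.map(range, fn i -> i end) end
    })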

You can see that the averages aren’t 0 ns because sometimes the measured run time is very high – garbage collection and such. That’s also why the standard deviation is huge (big difference from 0 to 23000 or so). However, if you look at the median (basically, if you sort all measured values, it’s the value in the middle) and the mode (the most common value) you see that both of them are 0. Even the accompanying memory measurements are 0. Seems like there isn’t much happening there.

So why is that? The compiler optimizes these “benchmarks” away, because they evaluate to one static value that can be determined at compile time. If you write 1 + 1 – the compiler knows you probably mean 2. Smart compilers. To avoid these optimizations, we have to trick the compiler by randomizing the values, so that they’re not known at compile time (see the “right” integer addition).

That’s one thing we see thanks to our more accurate measurements; the other is that we can now measure how long a map over a range with 10 elements takes – around 355 ns for me (I trust the mode and median more here than the average).

How did we accomplish this? Well, it all started with looking into why measurements on Windows seemed to be weird. We noticed that the implementation of :timer.tc/1 hard codes the measured values to microseconds:
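
The gist of it, translated to Elixir for illustration (the original lives in OTP’s timer module and is written in Erlang):

    defmodule TimerTc do
      # What :timer.tc/1 essentially does – note the hard-coded :microsecond,
      # which throws away precision the BEAM can actually provide
      def tc(fun) do
        t1 = :erlang.monotonic_time()
        value = fun.()
        t2 = :erlang.monotonic_time()
        time = :erlang.convert_time_unit(t2 - t1, :native, :microsecond)
        {time, value}
      end
    end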

But, in fact, nanoseconds are supported! So we now have our own simple time measuring code. This is operating system dependent though, as the BEAM knows about native time units. To the best of our knowledge, nanosecond precision is available on Linux and macOS – not on Windows.
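
A simplified sketch of the idea (not benchee’s actual source):

    defmodule Measure do
      # same idea as :timer.tc/1, but keeping the full nanosecond precision
      def measure(fun) do
        t1 = :erlang.monotonic_time()
        value = fun.()
        t2 = :erlang.monotonic_time()
        time = :erlang.convert_time_unit(t2 - t1, :native, :nanosecond)
        {time, value}
      end
    end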

It wasn’t enough to just switch to nanosecond precision, though. See, once you get down to nanoseconds the overhead of simply invoking an anonymous function (which benchee needs to do a lot) becomes noticeable. On my system this overhead is 78 nanoseconds. To compensate, benchee now measures the function call overhead and deducts it from the measured times. That’s how we can achieve measurements of 0 ns above – all the code does is return a constant, since the compiler optimized the actual computation away as the value can be determined at compile time.
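
To illustrate the deduction idea (a rough sketch building on the Measure module above; benchee’s real implementation is more careful about how it determines the overhead):

    # time a no-op lots of times to estimate the pure function call overhead...
    noop = fn -> nil end
    samples = for _ <- 1..10_000, do: elem(Measure.measure(noop), 0)
    overhead = Enum.sum(samples) / length(samples)

    # ...then deduct it from real measurements, never going below 0
    {raw, _result} = Measure.measure(fn -> Enum.map(1..10, fn i -> i end) end)
    adjusted = max(raw - round(overhead), 0)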

A nice side effect is that the overhead-heavy function repetition is practically no longer used on Linux and macOS, as (almost) no function is faster than a nanosecond. Hence, no more imprecise measurements due to repeating the function just to make it measurable at all (on Windows we still repeat the function call, for instance 100 times, and then divide the measured time by the number of repetitions).

Formatter Configuration

This is best shown with an example. Up until now, if you wanted to pass options to any of the formatters you had to do it like this:
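
Roughly like this (reconstructed; option names follow the old formatter_options API as I remember it):

    Benchee.run(
      %{"something" => fn -> Enum.map(1..100, &(&1 * 2)) end},
      formatters: [Benchee.Formatters.Console, Benchee.Formatters.HTML],
      formatter_options: [html: [file: "output/my_benchmark.html"]]
    )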

This always felt awkward to me, but it really hit hard when I watched a benchee video tutorial. There the presenter said “…here we configure the formatter to be used and then down here we configure where it should be saved to…” – why would that be in 2 different places? They could be far apart in the code. There is no immediately visible connection between Benchee.Formatters.HTML and the html: key down in formatter_options:. Makes no sense.

That API was never really well thought out, sadly.
So, what can we do instead? Well of course, bring the options closer together:
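
Something along these lines (the HTML formatter’s file option shown for illustration):

    Benchee.run(
      %{"something" => fn -> Enum.map(1..100, &(&1 * 2)) end},
      formatters: [
        Benchee.Formatters.Console,
        {Benchee.Formatters.HTML, file: "output/my_benchmark.html"}
      ]
    )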

So, if you want to pass along options instead of just specifying the module, you specify a tuple of module and options. Easy as pie. You know exactly what formatter the options belong to.

Road to 1.0?

Honestly, 1.0 should have happened many versions ago. Right now the plan is for this to be the last release with user facing features. We’ll tweak the data structure a bit more (see the PR if interested), then put in deprecation warnings for functionality we’ll remove and call it 0.99. Then we’ll remove the deprecated functionality and call it 1.0. So, this time indeed – it should be soon ™. I have a track record of sneaking in just one more thing before 1.0 though 😅. You can track our 1.0 progress here.

Why did this take so long?

Looking at this release, it’s pretty packed. It should have been two releases (one for each major feature described above), and they should have happened much sooner.

It’s definitely sad. I double-checked: measuring with the best available precision landed on the 21st of May, function call overhead measurement was basically done on the 27th of June, and the formatter options landed on the 10th of August. Keeping those out of your hands for so long really saddens me 😖.

Basically, these features required updating the formatters, which isn’t particularly fun but necessary, as I want all formatters to be ready for release alongside a new benchee version. In addition, we put in even more work (Devon in big parts, specifically) and added support for memory measurements to all the formatters.

Beyond this? Well, I think life. Life happened. I moved apartments, which is a bunch of work. Then a lot of things happened at work, leading to me eventually quitting my job. Sometimes there’s just no time or head space for open source. I’m happy though that I’m as confident as one can be that benchee is robust and bug-free software, so that I don’t have to worry about it breaking all the time. I can already see this statement haunting me if this release features numerous weird bugs 😉

In that vein, hope you enjoy the new benchee version – happy to hear feedback, bugs or feature ideas!

And because you made it so far, you deserve an adorable bunny picture:


Slides: Elixir, Your Monolith and You (Elixir Berlin Version)

I was supposed to give this talk at ElixirConf.EU, but sadly fell ill. These are the slides (still titled alpha-1) that I used to give it at Elixir Berlin, where it was met with a great reception. Which is also why I was so looking forward to giving it again and having it recorded… Anyhow, if you saw the talk and want to go through the slides again, or you were looking forward to the slides – here they are.

Slides can be viewed here or on speakerdeck, slideshare or PDF

Abstract

Elixir is great, so clearly we’ll all rewrite our applications in Elixir. Mostly, you can’t and shouldn’t do that. This presentation will show you another path. You’ll see how at Liefery, we started with small steps instead of rewriting everything. This allowed us to reap the benefits earlier and get comfortable before getting deeper into it. We’ll examine in detail the tactics we used to create two Elixir apps for new requirements, and how we integrated them with our existing Rails code base.

Join us on our tale of adopting Elixir and Phoenix and see what we learned, what we loved, and what bumps we hit along the road.

edit: slightly updated version from devday.io – PDF slideshare

benchee is now called bunny!

edit: This was an April Fools’ joke. However, bunny will remain functional. It’s only implemented as a thin wrapper around benchee, so unless we completely break the API (which I don’t see coming) it’ll remain functional. Continue reading for cute bunny pictures.

It is time for benchee to take the next step in its evolution as one of the prime benchmarking libraries. Going forward benchee will be called bunny!

Al likes the naming change!

We waited for this very special day to announce this very special naming change – what better day to announce something is being named bunny than Easter Sunday?

It is available on hex.pm now!

But why?

We think this is an abstraction that’s really going to offer us all the flexibility that we’re going to need for future development. As we approach 1.0, we wanted to get the API just right.

This is true courage.

We also haven’t been exactly subtle dropping hints that this naming change was coming. For one, I have described benchmarking as bunnies eating food on numerous occasions (each bunny is a function that tries to eat its input as fast as it can!). Other than that, the frequently occurring bunny pictures (or even gifs) in benchee Pull Requests could have been a hint.

Also, eating is what they do best:

Yum yum we like benchmarking

For now bunny still works a lot like benchee. However, it exposes a better and more expressive API for your pleasure. You know, bunny doesn’t only run like good old benchee. No! Bunny can also sleep, hop, eat and jump!
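
Since bunny is only a thin wrapper, that expressive API could be as simple as delegating everything to benchee (a hypothetical sketch – consult the actual bunny source for the real thing):

    defmodule Bunny do
      # all roads lead to Benchee.run/2
      defdelegate run(jobs, config), to: Benchee
      defdelegate sleep(jobs, config), to: Benchee, as: :run
      defdelegate hop(jobs, config), to: Benchee, as: :run
      defdelegate eat(jobs, config), to: Benchee, as: :run
      defdelegate jump(jobs, config), to: Benchee, as: :run
    end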

This all comes with your own personal bunny assistant that helps you benchmark:

After all this hard work, the bunny needs to sleep a bit though:


This is clearly better than any other (benchmarking) library out there. What are you waiting for? Go and get bunny now. Also, I mean… just LOOK AT THEM!


Video & Slides: Stop Guessing and Start Measuring – Benchmarking in Practice (Lambda days)

I managed to get into Lambda days this year and got a chance to present my benchmarking talk. You can watch the video here and check out the slides.

Sadly the bunny video isn’t working in the recording 😥

You can see the slides here or at speakerdeck, slideshare or PDF.

Abstract

“What’s the fastest way of doing this?” – you might ask yourself during development. Sure, you can guess – but how do you know? How long would that function take with a million elements? Is that tail-recursive function always faster?

Benchmarking is here to give you the answers, but there are many pitfalls in setting up a good benchmark and analyzing the results. This talk will guide you through, introduce best practices, and surprise you with some results along the way. You didn’t think that the order of arguments could influence its performance…or did you?

The curious case of the query that gets slower the fewer elements it affects

I wrote a nice blog post for the company I’m working at (Liefery) called “The curious case of the query that gets slower the fewer elements it affects“, which goes through a real world benchmarking with benchee. It involves a couple of things that can go wrong, and how combined indexes and PostgreSQL’s EXPLAIN ANALYZE can help you overcome these problems. It’s honestly one of the best blog posts I think I ever wrote, so head over and read it if that sounds interesting to you 🙂

Slides: Where do Rubyists go?

I gave my first ever keynote yesterday at Ruby on Ice, which was a lot of fun. A lot of the talk is based on my “Where do Rubyists go?” survey, but also on researching and looking into the languages themselves. The talk looks into what programming languages Ruby developers learn for work or in their free time, what the major features of those languages are and how they compare to Ruby. What does it tell us about Ruby and our community?

Slides can be viewed here or on speakerdeck, slideshare or PDF

Abstract

Many Rubyists branch out and take a look at other languages. What are the similarities between those languages and Ruby? What are the differences? How does Ruby influence these languages?

Surprises with Nested Transactions, Rollbacks and ActiveRecord

Lately I acquired a new hobby. I went around and asked experienced Rails developers, whom I respect and value a lot, how many users the following script would create:
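
It was along the lines of the example from the official Rails documentation that this post builds on (a reconstruction – names as in the docs):

    User.transaction do
      User.create(username: "Kotori")
      User.transaction do
        User.create(username: "Nemu")
        raise ActiveRecord::Rollback
      end
    end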

The result should be the same on pretty much any database and any Rails version. For the sake of argument you can assume Rails 5.1 and Postgres 9.6 (what I tested it with).

So, how many users does it create? Of the more than a handful of people I asked, no one got the answer right (including myself).

The answer is 2.

Wait, WHAT?

Yup, you read that right. It creates 2 users; the rollback is effectively useless here. Ideally this should create one user (Kotori), but as some people know, nested transactions aren’t really a thing that databases support (save for MS-SQL apparently). People I asked who knew this then guessed 0, because, well, if you can’t roll back part of it, better safe than sorry and roll all of it back, right?

Well, sadly the inner transaction rescues the rollback and then the outer transaction happily commits all of it. 😦

Before you get all worried – if an exception is raised and not caught the outer transaction can’t commit and hence 0 users are created as expected.

A fix

So, what can we do? When opening a transaction, we can pass requires_new: true to the transaction, which will emulate a “real” nested transaction using savepoints:
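
Reconstructed along the same lines as above:

    User.transaction do
      User.create(username: "Kotori")
      # requires_new: true emulates a real nested transaction via a savepoint,
      # so the inner rollback only undoes the inner block
      User.transaction(requires_new: true) do
        User.create(username: "Nemu")
        raise ActiveRecord::Rollback
      end
    end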

As you’d expect this creates just one user.

Nah, doesn’t concern me I’d never write code like this!

Sure, you probably straight up won’t write code like this in a single file. However, split across multiple files? I think so. You have one unit of business logic that you want to run in a transaction, and then you start reusing it in another method that’s also wrapped in a transaction. Happens more often than you think.

Plus, it can happen even more often than that, as every save operation is wrapped in its own transaction (for good reasons). That means, as soon as you save anything inside a transaction or you save/update records as part of a callback, you might run into this problem.

Here’s a small example highlighting the problem:
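
A hedged reconstruction of the kind of example meant here (the rollback attribute and callback are illustrative, not the post’s exact code):

    class User < ApplicationRecord
      attribute :rollback, :boolean, default: false
      after_save :maybe_rollback

      private

      def maybe_rollback
        raise ActiveRecord::Rollback if rollback
      end
    end

    User.transaction do
      User.create(username: "Kotori")
      # create wraps the save in its own transaction, which swallows the
      # rollback raised in the callback – the outer transaction commits both
      User.create(username: "Nemu", rollback: true)
    end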

As you probably expect by now this creates 2 users. And yes, I checked – if you run create with rollback: true outside of the transaction no user is created. Of course, you shouldn’t raise rollbacks in callbacks but I’m sure that someone somewhere does it.

In case you want to play with this, all of these examples (+ more) are up at my rails playground.

The saddest part of this surprise…

Unless you stumbled across this before, chances are this is at least somewhat surprising to you. If you knew this before, kudos to you. The saddest part, though, is that this shouldn’t be a surprise to anyone. A lot of what is written here is part of the official documentation, including the exact example I used. The documentation introduces the example with the following wonderful sentence:

For example, the following behavior may be surprising:

As far as I can tell this documentation with the example has been there for more than 9 years, and fxn added the above sentence about 7 years ago.

Why do I even blog about this when it’s been in the official documentation all along? I think this deserves more attention, and more people should know about it to avoid truly bad surprises. The fact that nobody I asked knew the answer encouraged me to write this. We should all take more care to read the documentation of the software we use – we might find something interesting, you know.

What do we learn from this?

READ THE DOCUMENTATION!!!!

Are comments a code smell? Yes! No? It Depends.

Most people are either firmly on the “Yes!” or the “No!” side when it comes to discussing comments and their status as a code smell. But, as with most questions worth asking, the correct answer is rather “It depends”.

I got to re-examine this topic lately triggered by a tweet and a discussion with Devon:

So, let’s start unwrapping these layers, shall we?

Important distinction: Comments vs. Documentation

One of the first points on the list is understanding what a comment is and what it is not. For me, documentation isn’t a comment; in most languages (unfortunately) documentation just happens to be represented as a comment. Thankfully some languages, such as Elixir, Clojure and Rust, have a separate construct for documentation to make this obvious and facilitate working with documentation.

I don’t think everything should be documented. However, libraries definitely need documentation (if you want people to use them, that is). I’ve also grown increasingly fond of documentation in application code, especially as projects grow. At Liefery, core modules have a top level “module” comment describing the business context, language, important collaborators etc. It has proven invaluable. One of my favorites is the description of the shipment state machine, which briefly summarizes what each state means – keeping all those in your head has proven quite difficult. Plus, it’s a gift for new developers getting into the code base.

Of course documentation still suffers from one of the major drawbacks of comments – it can become outdated. Much less so if the documentation provides context rather than describing in detail what happens.

So, documentation for me isn’t a comment. Next up – what’s this code smell thing?

What’s a Code Smell?

In short, a code smell is an indication that something could be wrong with the code. Or to let the creators of the term, Kent Beck (whose idea the term was) and Martin Fowler, tell it in Refactoring:

(…) describing the “when” of refactoring in terms of smells. (…) we have learned to look for certain structures in the code that suggest (sometimes they scream for) the possibility of refactoring.

Does this description fit comments? Well, comments made the “original” list of code smells, with the following reasoning:

(…) comments often are used as a deodorant. It’s surprising how often you look at thickly commented code and notice that the comments are there because the code is bad.

They go on to explain what should be done instead of comments:

When you feel the need to write a comment, first try to refactor the code so that any comment becomes superfluous.

That is exactly in line with my view of code comments. There is so much more that you can do to make your code more readable instead of resorting to a comment. Comments should be a last resort.

To further explore this, let’s take a look at one of my favorite distinctions when it comes to “good” comments versus “bad” comments.

WHAT versus WHY comments

I like to think of comments in 2 categories:

  • WHAT comments describe what the code does, these can be high level but sometimes they also tell you every little thing the code does (“iterates over, then… uses result to”)
  • WHY comments clarify why some code is like it is giving you a peek into the past why a decision was made

Let’s start with the WHAT – WHAT comments can almost always be replaced by more expressive code. Most of this has to do with proper naming and concepts, which is why it isn’t uncommon for me to spend an extended period of time on these. Hell, (coincidentally) Devon and I even spent hours on defining “Scenarios” in benchee.

Variables, methods, classes, modules… all of these communicate through their names. So spending a good amount of time naming them helps a lot. Often it is also the right call to extract one of these to keep the line count small and manageable, while naming the concept you just extracted to help the understanding of the overall code.

Let’s take a look at one of my favorite examples:
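
Picture something like this (illustrative code – TAX_RATE and OrderMailer are made-up stand-ins):

    def handle_order(order)
      # validate the order
      raise ArgumentError, "empty order" if order.items.empty?

      # calculate the total
      total = order.items.sum(&:price) * (1 + TAX_RATE)

      # notify the customer
      OrderMailer.confirmation(order, total).deliver_later
    end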

Let this stand in for every long method you ever came across where the method body was broken into sections by comments. Extract 3 methods, name them somewhat like the comments. Enjoy shorter methods, meaningful names, concepts and reusability.

I’ve even seen people advocate for this style of long methods with comments. Needless to say, I’m not a fan. The article says “The more complex the code, the more comments it should have.” and my colleague Tiago probably responded best to that:

You should make the code less complex not add more comments.

Another example I wish I made up, but it’s real (I only ported it from JavaScript to Ruby):
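
Its shape was roughly like this (a hypothetical reconstruction – only the l / “time per step” pairing below is taken from the original):

    # n - number of steps
    # l - time per step
    # c - callback to invoke after each step
    def animate(n, l, c)
      n.times do |step|
        sleep(l)
        c.call(step)
      end
    end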

As a first step, just rename your parameters to whatever understandable name was commented above (also, how does l translate to “time per step”?). Afterwards, look for a bigger concept you might be missing and aggregate the needed data into it to trim the number of parameters down.

All in all, a WHAT style comment is, to my mind, a declaration of defeat – an “I tried everything but I can’t make this code readable by itself”. You can be sure that if I get there, I’ll first consult a colleague about it, and if we can’t come up with something I’ll isolate the complexity and then be sad about my defeat.

With all of that said about WHAT comments, how about WHY comments?

They can help us with things that can hardly be expressed in code. Let’s take a little example from the great shoes project:
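
A hypothetical sketch of its shape (names and wording are illustrative – the real code lives in the shoes repo):

    def run_event_loop(app)
      app.loop
    rescue => e
      # We have to rescue everything here: if an exception escapes, it takes
      # the native event loop down with it and the app dies without any
      # feedback to the user. See the event loop discussion in the repo.
      puts "FATAL: event loop crashed – #{e.message}"
      puts e.backtrace
    end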

While the puts statements communicate some of it, it is important to emphasize how dangerous not rescuing here is. The comment also helps establish context and points to where one could find more information about this.

This is an excellent use case for a comment and thankfully Kent Beck and Martin Fowler agree (again from the Refactoring book):

A comment is a good place to say why you did something. This kind of information helps future modifiers, especially forgetful ones.

There is an argument to be made that such information should be kept in the version control system and not in a comment. It is true: the commit message should definitely reflect this, ideally with an easy to produce link both to the ticket and the pull request. However, a commit message alone is not enough to my mind. Tracking down the commit that introduced a change in an older code base can be quite hard (ever tried changing all strings from single quotes to double quotes? 😉 ) and you can’t expect everyone to always look at the history of every line of code they change. A comment acts as a warning sign in places like these.

In short: WHY comments “yay“! WHAT comments “nay“!

Context matters

Before we get to the final “verdict” there’s one more aspect I’d like to examine: the context of your application. That context might greatly influence the need for comments. Another CRUD application like the ones you built before? Probably doesn’t need many comments. That new machine learning micro service written in Python and deployed with Docker, while no one in your team has done any of these things before? Yup, that probably needs a couple more comments.

New business domain, new framework, new language, something out of your comfort zone, the experience level of the developers – all of these can justify writing more comments. Those can give context, link to resources, or be WHAT comments describing on a high level what’s going on, and so on. For instance, our route planning code has quite a few more comments explaining the used algorithms and data structures on a high level than the rest of the code base.

Yadda yadda – are comments a code smell or not?

As already established – it’s not as black and white as some people make it seem. To get back to the original twitter conversation that started all this:

For a shorter answer, I think Robert Martin also puts it quite well and succinctly in Clean Code:

The proper use of comments is to compensate for our failure to express ourself in code.

What about me? Well, if you asked me “Are comments a code smell?” on the street, the answer would probably be “Yes”, the better answer would be “It depends.”, and the good answer, short of this blog post, would be something along the lines of:

There’s a difference between documentation, which is often good, and comments. WHY comments highlighting reasoning are valuable. WHAT comments explaining the code itself can often be replaced by more expressive code. Only when I admit defeat will I write a WHAT comment.

(these days this even fits in a single tweet 😉 )

edit: As friends happily pointed out, documentation is also a construct separate from code comments in Clojure and Rust. Added that in.

Released: benchee 0.10, HTML, CSV and JSON plugins

It’s been a little while since the last benchee release – have we been lazy? Au contraire mes amis! We’ve been hard at work, greatly improving the internals, adding a full system for hooks (before_scenario, before_each, after_each, after_scenario) and making some other great improvements thanks to many contributions. The releases are benchee 0.10.0 (CHANGELOG), benchee_csv 0.7.0 (CHANGELOG), benchee_html 0.4.0 (CHANGELOG) and benchee_json 0.4.0 (CHANGELOG).

Sooo… what’s up? Why did it take so long?

benchee

Before we take a look at the exciting new features, here’s a small summary of major things that happened in previous releases that I didn’t manage to blog about due to lack of time:

0.7.0 added mainly convenience features, but benchee_html 0.2.0 split up the HTML reports, which made it easier to find what you’re looking for and also alleviated problems with rendering huge data sets (the graphing library was reaching its limits with that many graphs and input values)

0.8.0 added type specs for the major public functions; the configuration is now a struct, so it errors out on unrecognized options

0.9.0 is one of my favorite releases, as it now gathers and shows system data like the number of cores, operating system, memory and CPU speed. I love this because normally when I benchmark and write about it, I need to note all of that down in the blog post. Now with benchee I can just copy & paste the output and I get all the information that I need! This version also facilitates calling benchee from Erlang, so benchee:run is in the cards.

Now, on to the truly new stuff:

Scenarios

In benchee, each processing step used to have its own main key in the main data structure (suite): run_times, statistics, jobs etc. Philosophically, that was great. However, it got more cumbersome in the formatters, especially after the introduction of inputs, as access now required an additional level of indirection (namely, the input). As a result, to get all the data for a combination of job and input you wanted to format, you had to merge data from multiple different sources. Not exactly ideal. To make matters worse, we want to add memory measurements in the future… even more to merge.

Long story short, Devon and I sat down in person for 2 hours to discuss how to best deal with this, how to name it and all accompanying fields. We decided to keep all the data together from now on – for every entry of the result. That means each combination of a job you defined and an input. The data structure now keeps that along with its raw run times, statistics etc. After some research we settled on calling it a scenario.
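
Roughly, a scenario bundles everything about one job/input combination (field names from memory – see benchee’s actual Scenario struct for the authoritative shape):

    %{
      job_name: "flat_map",
      function: fn input -> Enum.flat_map(input, fn i -> [i, i] end) end,
      input_name: "small list",
      input: Enum.to_list(1..100),
      run_times: [512.0, 498.0, 534.0],
      # computed later in the pipeline by Benchee.Statistics
      run_time_statistics: nil
    }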

This was a huge refactoring but we really like the improvements it yielded. Devon wrote about the refactoring process in more detail.

It took a long time, but it didn’t add any new features – so no reason for a release yet. Plus, of course all formatters also needed to get updated.

Hooks

Another huge chunk of work went into a hooks system that is pretty fully featured. It allows you to execute code before and after invoking the benchmark, as well as setup code before a scenario starts running and teardown code after a scenario has stopped running.

That seems weird, as most of the time you won’t need hooks. We could have released with part of the system ready, but I didn’t want to (potentially) break the API again so soon if we added arguments or found that it wasn’t quite working to our liking. So, we took some time to get everything in.

So what did we want to enable you to do?

  • Load a record from the database in before_each and pass it to the benchmarking function, to perform an operation with it without counting the time for loading the record towards the benchmarking results (see the sketch after this list)
  • Start up a process/service in before_scenario that you need for your scenario to run, and then…
  • …shut it down again in after_scenario, or bust a cache
  • Or if you want your benchmarks to run without a cache all the time, you can also bust it in before_each or after_each
  • after_each is also passed the return value of the benchmarking function so you can run assertions on it – for instance for all the jobs to see if they are truly doing the same thing
  • before_each could also be used to randomize the input a bit to benchmark a more diverse set of inputs without the randomizing counting towards the measured times
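
A small sketch of the first use case from the list above (Repo, Shipment and process_shipment are stand-ins for your application’s code, not benchee API):

    Benchee.run(
      %{
        # the benchmarking function receives whatever before_each returned
        "shipment processing" => fn record -> process_shipment(record) end
      },
      # runs before every single invocation; its time isn't measured
      before_each: fn _input -> Repo.get!(Shipment, Enum.random(1..1_000)) end
    )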

All of these hooks can be configured either globally, so that they run for all the benchmarking jobs, or on a per job basis. The documentation for hooks over at the repo is a little blog post by itself and I won’t repeat it here 😉

As a little example, here is me benchmarking hound:
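
A hedged reconstruction of its shape (Hound.start_session/0, Hound.end_session/0 and navigate_to/1 are real Hound API; the job and URL are illustrative):

    use Hound.Helpers

    Benchee.run(
      %{
        "visiting a page" => fn -> navigate_to("http://localhost:4000") end
      },
      # before_scenario runs in the same process as the benchmark itself,
      # which is exactly what hound needs
      before_scenario: fn input ->
        Hound.start_session()
        input
      end,
      after_scenario: fn _input -> Hound.end_session() end
    )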

Hound needs to start before we can benchmark it. However, hound seems to remember the started process by the pid of self() at that time. That’s a problem because each benchee scenario runs in its own process, so you couldn’t just start it before invoking Benchee.run. I found no way to make the benchmark work with good old benchee 0.9.0, which is also what finally brought me to implement this feature. Now in benchee 0.10.0, with before_scenario and after_scenario, it is perfectly feasible!

Why no 1.0?

With all the major improvements one could easily call this a 1.0. Or 0.6.0 could have been a 1.0, and then we’d be at 2.0 now – wow, that sounds mature!

Well, I see 1.0 as a promise – a promise to plugin developers and others that compatibility won’t be broken easily or soon. I can’t promise that when we just broke plugin compatibility in a major way. That said, I really feel good about the new structure, partly because we put so much time and thought into figuring it out, but also because it has greatly simplified some implementations – and thinking about some future features, it makes those a lot easier to implement as well.

Of course, we didn’t break compatibility for users. That has been stable since 0.6.0 and to a (quite big) extent beyond that.

So, 1.0 will of course be coming some time. We might get some more bigger features in that could break compatibility (although I don’t think they will, it will just be new fields):

  • Measuring memory consumption
  • Recording and loading benchmarking results
  • … ?

Also, before a 1.0 release I probably want to extract more of the not directly benchmarking related functionality from benchee and provide it as general purpose libraries. We have some sub systems that we built for ourselves but that would provide value to other applications:

  • Unit: convert units (durations, counts, memory etc.), scale them to a “best fit” unit, format them accordingly, find a best fit unit for a collection of values
  • Statistics: All the statistics we provide including not so easy/standard ones like nth percentile and mode
  • System: gather system data like elixir/erlang version, CPU, Operating System, memory, number of cores

Thanks to the design of benchee these are all already fairly separate, so extracting them is more a matter of when, not how. Meaning, those libraries should contain all the functionality we need, so that we don’t have to make a coordinated release for new features across n libraries.

benchee_html


Especially due to many great community contributions (maybe because of Hacktoberfest?) there’s a number of stellar improvements!

  • System information is now also available and you can toggle it with the link in the top right
  • unit scaling from benchee “core” is now also used, so it’s not all in microseconds as before but rather in an appropriate unit
  • reports are automatically opened in your browser after the formatter is done (can of course be deactivated)
  • there is a default file name now so you don’t HAVE to supply it

What’s next?

Well, this release took long – I hope the next one won’t take as long. There’s a couple of improvements that didn’t quite make it into the release, so there might be a smaller new release relatively soon. Other than that, work on either serializing or the often requested “measure memory consumption” will probably start some time. But first, we rest a bit 😉

Hope you enjoy benchmarking, and if you are missing a feature or getting hit by a bug, please open an issue!