Slides: Elixir & Phoenix – Fast, Concurrent and Explicit (Øredev)

I had the great pleasure of finally speaking at Øredev! I had wanted to speak there for so long, not only because it’s a great conference but also because it’s in the city of Malmö. A city that I quite like and that I’m happy to have friends in 🙂

Anyhow, all went well although I’d have loved to spread the word more.

And yes, at its core it’s a presentation I gave a while ago, but I reworked and repurposed it for both the audience and this day and age. Of course, it now also includes bunny pics 😉

Slides can be viewed here, on speaker deck, slideshare or PDF

Abstract

Key takeaways
  • What are Elixir and Phoenix? What makes them stand out among programming languages and frameworks?
  • Why would I want to use Functional Programming, what are the benefits and why does it work so well for the web?
  • How capable is Erlang performance- and reliability-wise (WhatsApp example) and why would I consider it for a project?
  • How does explicitness help in system design?

Elixir and Phoenix are known for their speed, but that’s far from their only benefit. Elixir isn’t just a fast Ruby and Phoenix isn’t just Rails for Elixir. Through pattern matching, immutable data structures and new idioms your programs can not only become faster but more understandable and maintainable. This talk will take a look at what’s great, what you might miss and augment it with production experience and advice.

Looking for a job!

NO LONGER (REALLY) LOOKING FOR A JOB

I’m no longer looking for a job, or at least not really. I decided to go freelancing for now and I have a project until the end of October if nothing falls through. So if you’ve got new freelance projects or a really great CTO/VP/Head of position, please still feel free to contact me 😉

(decision to be explained in more detail in a future post)

Original post for historical reasons:


It’s that time: I’m looking for a job! You can find my CV and all relevant links over at my website: CV Web, CV PDF

Quick links that might interest you: website, github, twitter and blog

Who am I and why would you want to hire me?

My name is Tobi, but online I’m better known as PragTob. I am a full-stack engineer & leader deeply interested in agile methodologies, web technologies, software crafting and teaching. I love creating products that help people and am fascinated with the human side of software development.

To have a look at some of the technical topics that I’m involved with you can check out this blog but the core technologies I’ve worked with are Ruby, Elixir, PostgreSQL and JavaScript. I enjoy pair programming, mentoring, TDD, naming discussions and clean code bases. I also have a passion for performance and eliminating bottlenecks. I maintain a variety of open source projects mostly in Ruby (f.ex. simplecov) and Elixir (f.ex. benchee).

While I have this technical background, I believe that the so-called “soft”/people/social skills are more important for developers, as we all work together in teams. That is even more vital for any kind of lead or management position, as mentoring, communicating and collaborating are at the heart of what people in these positions should do. I’m deeply interested in topics such as motivation and communication. In my last job I was responsible for a team of ~15 people (together with the CTO). We relied on constant feedback through retrospectives to adapt our processes and grow the team – introducing the concept of seasons, going remote first, enhancing our onboarding and making many more improvements. I also did regular one-on-ones, interviewed candidates, facilitated retrospectives and other meetings, and ran knowledge sharing sessions for the whole team.

Beyond that, I’ve been mentoring at Rails Girls Berlin/code curious for more than 7 years and have been coaching my Rails Girls project group, the rubycorns, for more than 6 years. I also run the Ruby User Group Berlin and give presentations at various conferences.

My CV goes into more detail about what I’ve done.

What am I looking for?

I’m looking for a company where we build something useful together and where I can have an impact on product, people and process. With more concrete points this amounts to:

Position: I’m primarily looking for lead positions with responsibility – CTO/VP of Engineering/Head of Engineering/Team Lead – all depending on company size and culture. In the right circumstances, I can also imagine working as a Senior Developer. I want to be in an environment where my impact goes beyond code, as I love to help people and improve processes. I like to be where I can help the team & the company the most; sometimes that’s mentoring and sharing knowledge, sometimes that’s fixing critical performance bugs and sometimes that’s participating in an all-day workshop at the headquarters of a potential client.

Location: Berlin or potentially remote. I’m not looking to relocate currently.

Employment/Freelance: I’m looking for full time employment (reduced hours would also be ok) but am also open to freelance work.

Field of Work: I’m looking for a company that helps people solve real problems. I want a purpose I can get behind. For me that means no crypto currencies, advertisement or fintech for the rich. In a similar vein I’m not particularly interested in consultancy/agency work, as I don’t like to travel every week and would rather work at a product company where I can really dive into the domain.

Diversity: I believe that the best products and work environments are created by diverse teams. Hence, this should be a core value applied through all levels of an organization. Or at the very least, the problem should be recognized and actively worked against.

Company Culture: I’m looking for a company that trusts its employees and is open. It should also support a sustainable pace. Regular overtime is nothing to be proud of. Knowledge sharing should be key and therefore asking questions should be encouraged.

Time Allocation: I love companies that trust their employees and allow them flexible working hours and locations. Meaning it’s ok to work remotely for a couple of days or from home. Historically, I did some of my best work checking out emails and pull requests at home in the morning and then biking to work afterwards. Beloved refactorings have also emerged while dog sitting.

Benefits: Giving people time and room to grow is important to me. In particular, I usually speak at a couple of conferences a year and think attending them should count as work, not vacation time.

Languages: I have expert knowledge of Ruby and Elixir. Working with Elixir would be a plus, but not a must. I also like working with JavaScript and am not afraid to do CSS, although my CSS has sadly gotten rather rusty. I’m naturally open to other languages as well; while I’m certainly most effective in the languages mentioned, the right company with the right culture and purpose is more important. Particularly Rust would be very interesting to me, granted I’m not too good at it (yet).

Open Source: An employer that values contributing back to open source or sharing their own creations would be a big plus, even more so if it happened to be full OSS work.

I’m aware that it’s hard to tick all of these boxes, and if you do, hey, we might be a great match! I just wanted to communicate my wishes and values clearly as a starting point.

Getting in touch

You can send me an email at pragtob@gmail.com – otherwise my Twitter direct messages are also open.

Please don’t cold call me 🙂

What’s the Fastest Data Structure to Implement a Game Board in Elixir?

Ever wanted to implement something board game like in Elixir? Chess? Go? Islands? Well, then you’re gonna need a board!

But what data structure would be the most efficient one to use in Elixir? Conventional wisdom for a lot of programming languages is to use some sort of array. However, most programming languages with immutable data structures don’t have a “real” array data structure (we’ll talk about Erlang’s array later; it’s not really like the arrays in non-functional languages). Elixir is one of those languages.

As I like board games this was one of the first questions I ever asked the community. It’s also an interesting and relatable example to see and understand the performance trade-offs of different data structures.

Complete sources can be found in my elixir_boards_benchmark repo.

Benchmark Design

For this benchmark I didn’t have a very specific board game in mind so I settled for a board size of 9×9. It’s a bit bigger than a normal chess board (8×8), it’s exactly the size of the smallest “normal” Go board and it’s one smaller than the board used in Islands implemented in Functional Web Development with Elixir, OTP and Phoenix, so it seemed like a good compromise. Different sizes are likely to sport different performance characteristics.

Without a concrete usage scenario in mind I settled on a couple of different benchmarks:

  • getting a value at the coordinates (0, 0), (4, 4) and (8, 8). This is a fairly nano/micro benchmark for data access and provides a good balance of values at the beginning/middle/end when thinking in list terms.
  • setting a value at the coordinates (0, 0), (4, 4) and (8, 8).
  • a still nano/micro benchmark that combines the two previous benchmarks by getting and setting all three mentioned values. I call this “mixed bag”.
  • Why stop at the previous one? The last benchmark just sets and gets every possible coordinate once (first it sets (0, 0) then gets it, then it sets (0, 1), then gets it and so forth). This also simulates the board filling up, which can be important for some data structures. Completely filling a board is unrealistic for most board games however, as most games finish before this stage. This one is called “getting and setting full board”.

Something that is notably not benchmarked is the creation of boards. For (almost) all of the board implementations the initial board could be precomputed as a constant, so creation should take a similar amount of time across them. I wasn’t overly interested in that property and didn’t want to make the code less readable by inlining the constant after creation when I didn’t need to.

Also noteworthy is that these benchmarks mostly treat reading and writing equally, while in my experience most AIs/bots are much more read-heavy than write-heavy.

Take all these caveats of the benchmark design into consideration when looking at the results, and if in doubt it’s of course best to write your own benchmark taking into account the concrete usage patterns of your domain.

Without further ado then let’s look at the different implementations I have benchmarked so far:

Contenders

All boards need to implement a simple Board behaviour:
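
A minimal sketch of what such a behaviour could look like (the callback names and signatures here are illustrative; the exact definition lives in the repo):

# Illustrative sketch of the Board behaviour - see the repo for the real thing
defmodule Board do
  @type board :: any
  @type coordinate :: non_neg_integer

  @callback new() :: board
  @callback get(board, coordinate, coordinate) :: any
  @callback set(board, coordinate, coordinate, any) :: board
end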

All boards are built so that accessing a previously unset field will return nil. No assumptions about the data stored in the board have been made, which rules out String as an implementation type. In the benchmarks atoms are used as values.

In the descriptions of the data types below (x, y) is used to mark where what value is stored.

  • List2D: A 2 dimensional list representing rows and columns: [[(0, 0), (0, 1), (0, 2), ...], [(1, 0), (1, 1), ..], ..., [..., (8, 8)]]
  • List1D: Using the knowledge of a constant board size you can encode it into a one-dimensional list, resolving the index as dimension * x + y (see the sketch after this list): [(0, 0), (0, 1), (0, 2), ..., (1, 0), (1, 1), ..., (8, 8)]
  • Tuple2D: Basically like List2D but with tuples instead of lists: {{(0, 0), (0, 1), (0, 2), ...}, {(1, 0), (1, 1), ..}, ..., {..., (8, 8)}}
  • Tuple1D: Basically like List1D but with a tuple instead of a list: {(0, 0), (0, 1), (0, 2), ..., (1, 0), (1, 1),... (8, 8)}
  • Array2D: erlang arrays aren’t exactly a common sight; even Learn You Some Erlang basically skims over them and advises caution when using them. I even forgot about them for the first version of this post 😅. They internally map to tuple usage in an interesting way that will be discussed/illustrated further below. With that out of the way, conceptually this is much like Tuple2D.
  • Array1D: see above for the data structure in general, otherwise conceptually like Tuple1D.
  • MapTuple: A map that takes the tuple of the coordinates {x, y} as the key, with the value being whatever is on the board: %{{0, 0} => (0, 0), {0, 1} => (0, 1), ..., {8, 8} => (8, 8)}. It’s a bit unfair compared to the others shown so far as it can start with an empty map, which is not only a much smaller data structure but usually also faster to retrieve values from. As the benchmarks start with an empty board that’s a massive advantage, so I also included a full map in the benchmark, see next.
  • MapTupleFull: Basically the same as above but initialized to already hold all key/value pairs, set to nil. Serves not only to see how this performs but also to see how MapTuple performs once it has “filled up”.
  • MapTupleHalfFull: Only looking at completely full and completely empty performance didn’t seem good either, so I added another one initialized from 0 to 4 on all columns (a bit more than half the board, totalling 45 key/value pairs).
  • MapTupleQuarterFull: Another one of these, this time with 27 key/value pairs. Why? Because there is an interesting performance characteristic, read on to find out 🙂
  • Map2D: Akin to List2D etc. a map of maps: %{0 => %{0 => (0, 0), 1 => (0, 1), ...}, 1 => %{0 => (1, 0), ...}, ..., 8 => %{..., 8 => (8, 8)}}
  • ETSSet: erlang ETS storage with table type set. Storage layout wise it’s basically the same as MapTuple, with a tuple of coordinates pointing at the stored value.
  • ETSOrderedSet: Same as above but with table type ordered_set.
  • ProcessDictionary: On special request from Michał 😉 This is probably not a great default variant as you’re practically creating (process-) global state, which means you can’t have two boards within the same process without causing mayhem. It might also accidentally conflict with other code using the process dictionary. Still, it might be worth considering if you want to always run a board in its own process.
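
To make the 1D indexing scheme concrete, here is an illustrative sketch of how Tuple1D-style access could work (module and function names are mine, not necessarily the exact code from the repo):

# Sketch of a flat tuple board with (x, y) resolved to dimension * x + y
defmodule Tuple1DSketch do
  @dimension 9

  # one flat tuple of dimension * dimension fields, all nil initially
  def new, do: Tuple.duplicate(nil, @dimension * @dimension)

  def get(board, x, y), do: elem(board, @dimension * x + y)

  def set(board, x, y, value), do: put_elem(board, @dimension * x + y, value)
end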

It’s important to point out that all mentioned data types except for ETS and the process dictionary are immutable. This means that, especially for those two mutable ones, a new board is created in a before_each hook (which does not count towards measured time) to avoid “contamination”.
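
In benchee that setup looks roughly like this (a sketch reusing the Tuple1DSketch module from above; the full benchmarking script is in the repo):

# Rough sketch of the benchmark shape - before_each provides a fresh
# board to every invocation and its runtime is not measured
Benchee.run(
  %{
    "Tuple1D get/set" => fn board ->
      board = Tuple1DSketch.set(board, 4, 4, :value)
      Tuple1DSketch.get(board, 4, 4)
    end
  },
  before_each: fn _input -> Tuple1DSketch.new() end
)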

Another notable absence (besides String, for the aforementioned reasons) is Record. Records are internally represented as tuples but give you the key/value access of maps; however, in Elixir it is more common to use Structs (which are backed by maps). As both maps and tuples are already present in the benchmark, including these likely wouldn’t lead to new insights.

System Setup

Operating System Linux
CPU Information Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
Number of Available Cores 8
Available Memory 15.61 GB
Elixir Version 1.8.2
Erlang Version 22.0

Benchmarking Results

Benchmarks of course were run with benchee and the benchmarking script is here (nothing too fancy).

You can check them out in the repo as markdown (thanks to benchee_markdown) or HTML reports (benchee_html). Careful though if you’re on mobile: some of the HTML reports contain the raw measurements and hence go up to 9MB in size and can take a while to load, also due to the JS drawing the graphs!

The results of getting and setting full board:

(Chart: getting and setting full board iterations per second – higher is better)

It’s a tight race at the top when it comes to run time! Tuple1D, Tuple2D and MapTuple are all within striking range of each other and then there’s a sharp fall off.

Also there is a fair bit of variance involved as shown by the black “whiskers” (this is usual for benchmarks that finish in nanoseconds or microseconds because of garbage collection, interference etc.). Which one of these is best? To get a better picture let’s look at the whole table of results:

Name IPS Average Deviation Median Mode Minimum Maximum
Tuple1D 133.95 K 7.47 μs ±23.29% 6.93 μs 6.88 μs 6.72 μs 492.37 μs
Tuple2D 132.16 K 7.57 μs ±29.17% 7.21 μs 7.16 μs 7.03 μs 683.60 μs
MapTuple 126.54 K 7.90 μs ±25.69% 7.59 μs 7.56 μs 7.43 μs 537.56 μs
ProcessDictionary 64.68 K 15.46 μs ±14.61% 15.12 μs 15.05 μs 14.89 μs 382.73 μs
ETSSet 60.35 K 16.57 μs ±9.17% 16.04 μs 15.95 μs 15.79 μs 161.51 μs
Array2D 56.76 K 17.62 μs ±17.45% 17.15 μs 17.04 μs 16.54 μs 743.46 μs
MapTupleFull 55.44 K 18.04 μs ±11.00% 16.92 μs 16.59 μs 16.43 μs 141.19 μs
MapTupleHalfFull 53.70 K 18.62 μs ±8.36% 17.96 μs 17.87 μs 17.67 μs 160.86 μs
Array1D 50.74 K 19.71 μs ±10.60% 19.29 μs 18.99 μs 18.81 μs 469.97 μs
ETSOrderedSet 39.53 K 25.30 μs ±10.51% 24.82 μs 24.57 μs 24.34 μs 390.32 μs
Map2D 36.24 K 27.59 μs ±8.32% 27.71 μs 25.90 μs 25.12 μs 179.98 μs
List2D 29.65 K 33.73 μs ±4.12% 33.31 μs 33.04 μs 31.66 μs 218.55 μs
MapTupleQuarterFull 28.23 K 35.42 μs ±3.86% 34.96 μs 34.61 μs 34.39 μs 189.68 μs
List1D 15.41 K 64.90 μs ±2.84% 64.91 μs 64.14 μs 62.41 μs 175.26 μs

Median and Mode are good values to look at when unsure what is usually fastest. These are the “middle value” and the most common value respectively, and as such they are much less likely to be impacted by outliers (garbage collection and such). They seem to reinforce that Tuple1D is really the fastest, if only by a negligible margin.

MapTuple is very fast, but its sibling MapTupleFull, which already starts “full”, is more than 2 times slower. Whether this is significant for you depends on whether you start with a truly empty board (Go starts with an empty board, chess doesn’t, for instance).

Somewhat expectedly, List1D does worst: to get values towards the end it has to traverse the entire list, which is incredibly slow.

As an aside, it’s easy to see in the box plot that the high deviation is mainly caused by some very big outliers:

(Boxplot of getting and setting full board – dots are outliers)

The dots denote outliers, and they are so big (but few) that the rest of the chart is practically unreadable – all that remains of the actual box is basically a thick line.

What about memory consumption?

(Chart: getting and setting full board memory usage – lower is better)

Here we can see the immediate drawback of Tuple1D – its memory consumption is many times worse than that of the others. My (educated) guess is that this is because it has to replace/copy/update the whole tuple with its 9 * 9 = 81 values for every update operation. Tuple2D is much more economical here, as it only needs to update the tuple holding the columns and the one holding the specific column we’re updating (2 * 9 = 18 values), to the best of my understanding.
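
You can see these copy-on-update semantics directly in iex (a quick illustration of mine, not from the benchmark code):

iex> board = Tuple.duplicate(nil, 81)
iex> new_board = put_elem(board, 40, :stone)
iex> elem(board, 40)     # the old tuple is unchanged...
nil
iex> elem(new_board, 40) # ...because put_elem returned a copy of all 81 slots
:stone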

Big Tuples like this are relatively uncommon in “the real world” in my experience though as their fixed size nature makes them inapplicable for a lot of cases. Luckily, our case isn’t one of them.

MapTuple does amazingly well overall – it’s probably the structure quite some people would have intuitively reached for, promising fast constant-time access. Its memory consumption is also impressively low.

ProcessDictionary is very memory efficient and also constantly in the top 4 when it comes to run time. However, at least run time wise there’s quite the margin between ~15 μs and ~7 μs, which doesn’t seem to make the risks worth it overall.

Other Observations

Let’s take a look at some other things that seem noteworthy:

ETS isn’t the winner

This surprised me a bit (however, I haven’t used ETS much). ETS was always tagged as the go-to option for performance in my mind. Looking at the docs and intended use cases it makes sense though – we’re likely to see benefits for much larger data sets, as ours is relatively small:

These (ETS) provide the ability to store very large quantities of data in an Erlang runtime system, and to have constant access time to the data.

81 values hardly qualifies as “very large”.

Don’t blindly follow conventional “wisdom” – always benchmark! 💪

get(0,0) vs. get(8,8)

Let’s have a look at some of the time it takes to retrieve a value – usually a much more common operation than writing:

get(0,0)

Name IPS Average Deviation Median Mode Minimum Maximum
Tuple1D 44.12 M 22.66 ns ±842.77% 20 ns 20 ns 9 ns 35101 ns
Tuple2D 42.46 M 23.55 ns ±846.67% 20 ns 19 ns 7 ns 36475 ns
Array1D 30.38 M 32.92 ns ±84.61% 32 ns 32 ns 20 ns 8945 ns
MapTuple 29.09 M 34.38 ns ±111.15% 32 ns 31 ns 19 ns 10100 ns
MapTupleQuarterFull 18.86 M 53.03 ns ±37.27% 50 ns 49 ns 38 ns 2579 ns
Array2D 18.62 M 53.70 ns ±67.02% 50 ns 49 ns 34 ns 10278 ns
List1D 18.26 M 54.75 ns ±56.06% 53 ns 52 ns 42 ns 8358 ns
ProcessDictionary 17.19 M 58.18 ns ±1393.09% 52 ns 51 ns 39 ns 403837 ns
Map2D 15.79 M 63.34 ns ±25.86% 60 ns 54 ns 41 ns 388 ns
MapTupleHalfFull 10.54 M 94.87 ns ±27.72% 91 ns 89 ns 76 ns 2088 ns
MapTupleFull 10.29 M 97.16 ns ±18.01% 93 ns 89 ns 70 ns 448 ns
ETSSet 9.74 M 102.63 ns ±26.57% 100 ns 99 ns 78 ns 2629 ns
List2D 9.04 M 110.57 ns ±69.64% 105 ns 109 ns 82 ns 4597 ns
ETSOrderedSet 6.47 M 154.65 ns ±19.27% 152 ns 149 ns 118 ns 1159 ns

get(8, 8)

Name IPS Average Deviation Median Mode Minimum Maximum
Tuple2D 42.47 M 23.55 ns ±788.60% 21 ns 20 ns 7 ns 33885 ns
Tuple1D 40.98 M 24.40 ns ±725.07% 22 ns 21 ns 10 ns 34998 ns
Array1D 29.67 M 33.70 ns ±161.51% 33 ns 32 ns 21 ns 18301 ns
MapTuple 28.54 M 35.03 ns ±986.95% 32 ns 32 ns 20 ns 230336 ns
ProcessDictionary 19.71 M 50.73 ns ±1279.45% 47 ns 47 ns 34 ns 377279 ns
Array2D 17.88 M 55.92 ns ±85.10% 52 ns 51 ns 35 ns 13720 ns
Map2D 13.28 M 75.31 ns ±32.34% 73 ns 65 ns 56 ns 2259 ns
MapTupleHalfFull 12.12 M 82.53 ns ±31.49% 80 ns 80 ns 60 ns 1959 ns
ETSSet 9.90 M 101.05 ns ±16.04% 99 ns 95 ns 78 ns 701 ns
MapTupleFull 9.85 M 101.53 ns ±19.29% 99 ns 90 ns 70 ns 487 ns
ETSOrderedSet 5.59 M 178.80 ns ±41.70% 169 ns 170 ns 135 ns 4970 ns
MapTupleQuarterFull 4.09 M 244.65 ns ±16.85% 242 ns 240 ns 226 ns 9192 ns
List2D 3.76 M 265.82 ns ±35.71% 251 ns 250 ns 231 ns 9085 ns
List1D 1.38 M 724.35 ns ±10.88% 715 ns 710 ns 699 ns 9676 ns

The top 3 remain relatively unchanged. What is very illustrative to look at is List1D and List2D though. For get(0, 0) List1D vastly outperforms its 2D sibling, even coming closest to the top group. That is easy to explain, because it basically translates to looking at the first element of the list, which is very fast for a linked list. However, looking at the last element is very slow, and that is what get(8, 8) translates to: all elements have to be traversed until the end is reached. As such, the whole thing is almost 16 times slower for List1D. List2D is still very slow but through its 2-dimensional structure it only needs to look at 18 elements instead of 81.

MapTuple vs. MapTupleQuarterFull vs. MapTupleHalfFull vs. MapTupleFull

In most scenarios, including the biggest scenario, MapTupleQuarterFull performs worse than MapTuple (expected), MapTupleHalfFull (unexpected) and MapTupleFull (unexpected). I had expected its performance to be worse than MapTuple but better than MapTupleFull and MapTupleHalfFull. Why is that?

I had no idea, but Johanna had one: it might have to do with the “magic” limit at which a map “really” becomes a map and not just a list that is linearly searched. That limit is defined as 32 entries in the erlang source code (link also provided by Johanna). Our quarter full implementation is below that limit (27 entries) and hence often shows performance characteristics more akin to List1D (see the good get(0, 0) performance but bad get(8, 8) performance) than to its “real” map cousins.

To the best of my understanding this “switch the implementation at size 32” is a performance optimization. With such a small data set a linear search often beats hashing with its lookup overhead. You can also see that the trade-off pays off, as in the big benchmark where the whole board is filled incrementally MapTuple (which is initially empty and grows) still provides top performance.
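
If you want to poke at this yourself, here is a sketch using :erts_debug.flat_size/1 (an internal, undocumented Erlang function – sizes are in words and vary by OTP version, so treat the numbers as illustrative):

# up to 32 entries, maps are stored as a "flatmap" (a sorted key tuple
# plus values, searched linearly); from 33 entries on they become a
# hash array mapped trie
small = Map.new(1..32, fn i -> {{i, i}, :value} end)
big   = Map.new(1..33, fn i -> {{i, i}, :value} end)

# compare the memory footprint (in words) of the two representations
:erts_debug.flat_size(small)
:erts_debug.flat_size(big)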

What I still don’t fully understand is that sometimes MapTupleFull seems to still outperform MapTupleHalfFull – but only by a very negligible margin (most notably in the “big” getting and setting full board benchmark). The difference however is so small that it doesn’t warrant further investigation I believe, unless you have an idea of course.

Performance difference of Array vs. Tuple

In the introduction I said arrays are backed by tuples – how come their performance is way worse then? Well, let’s have a look at what an array actually looks like:

iex(3)> mine = :array.new(81, default: nil)
{:array, 81, 0, nil, 100}
iex(4)> :array.set(13, :boom, mine)
{:array, 81, 0, nil,
{10, {nil, nil, nil, :boom, nil, nil, nil, nil, nil, nil}, 10, 10, 10, 10, 10,
10, 10, 10, 10}}

It cleverly doesn’t even initialize all the fields but uses some kind of length encoding saying “the value is the default value of nil for the next 100 fields” but also saving its set size limit of 81 (fun fact: these arrays can be configured to also dynamically grow!).

Once we set a value (at index 13) the representation changes, still showing some length encoding (“there is nothing here for the first 10 entries”) but then the indexes 10..19 are expanded as a whole tuple that holds our value. So, to the best of my understanding, arrays work by adding “stretches” of tuples of size 10 as needed.

In general this is a performance optimization especially making writes/updates faster as compared to huge tuples as mainly the 10-tuple holding the concrete value needs to get updated instead of the whole thing.

However, our custom tuple implementations are perfectly sized to begin with and not too huge. Moreover, their whole size being set at compile time probably enables some optimizations (or so I believe). Hence the tuple implementations outperform the arrays, while the arrays still don’t do too shabby (especially with read access) compared to the other implementations.

Conclusion

Tuples can be very good for the use case of collections with a size known at compile time that need fast access, and a simple flat map performs amazingly well. All that at least for the relatively small board size (9×9 = 81 fields) benchmarked against here. There is a big caveat for the map though – it is only this fast if we can start with an empty map and grow it in size as new pieces are set. The completely initialized map (MapTupleFull) performs way worse; tuples are the clear winners then.

Missing a data structure? Please do a PR! There’s a behaviour to implement and then just two lists to add your module name to – more details.

Update 1 (2019-06-17): Fixed MapTupleHalfFull. Before the update it was actually just quarter full 😅, which has wildly different performance characteristics for reasons now described along with the MapTupleQuarterFull implementation. Thanks go to Johanna for pointing that out. The process dictionary has also been added as another possible implementation on a suggestion from Michał 😉. A run time box plot was also added to show outliers more clearly and visually.

Update 2 (2019-06-18): Added and investigated Arrays thanks to /u/Hauleth over on reddit. Also added a remark about records thanks to /u/friendlysock over on lobste.rs.

You may not need GenServers and Supervision Trees

The thought has been brewing in my mind for quite some time that GenServers and supervision trees are seen as so essential in the elixir/erlang world that it deters people from using these languages. In fact I have written about it before in the Elixir Forum. This is a summary and extended version of that (including some choice replies) to give it more visibility.

It feels like we talk so much about GenServers etc. that people who come to Elixir feel like they need to use them or they are not “really” using Elixir. I fear that it keeps people out of the elixir community because all this just seems so complicated. However, you can write great applications without ever writing them yourself. I emphasize yourself because a lot of the libraries you’ll use (phoenix, ecto etc.) already have their supervision trees and their processes – you’re truly standing on the shoulders of giants. Still, I hear people say something to the tune of “We’re still using this like Rails – we should use GenServers” – without any need or concrete reasoning. Having something as simple as rails (in parts, let’s not derail about rails here) but at the same time fully parallel and more efficient is a huge boon to my mind.

I feel like hardly anyone ever highlights this. I’ve been programming elixir since ~2015. I’ve never written a production GenServer or supervision tree (and I have multiple applications in production and multiple published libraries). I simply didn’t encounter these problems, or the eco system took care of them for me. Still I felt somewhat inadequate because of it, as everyone seemed to already be talking about these. Which is also understandable: they are unique, they solve hard problems – there’s lots to learn and knowledge to share.

I’ve had this thought that often you don’t really need GenServers and supervision trees in my head for a long time (~2016) but was too afraid to voice it. Maybe I just don’t understand them enough? Maybe I need to learn more about them? Everyone always talks about them, I’m sure enlightenment will come to me some time soon! I finally, nervously, wrote the aforementioned Elixir Forum thread in 2018. My nervousness went down a notch after José Valim (creator of elixir, somewhat hyper-productive and omnipresent) liked the post within ~15 minutes of its creation. Also most people were really supportive, hence I’m happy to share this more openly now – sorry for the delay, a lot had been going on in my life 😉

Be mindful that I don’t say you don’t need them – I’m merely saying that you can build significant and great elixir applications without writing your own GenServers and supervision trees. It’s of course still a topic worth learning.

GenServers? Supervision Trees? Processes?

If you’ve ever watched a talk about elixir or erlang you’re likely to have heard about these. They’re one of the “killer features” of erlang and elixir. They give you that famed parallelism backed by the actor model, reliability and that whole “Let it crash! (and restart in a known good state)”. Both GenServer and Supervisor are behaviours showing a process “how to behave” and they both ship with Erlang/OTP.

So what’s the problem with GenServers and Supervision Trees?

GenServers, supervisors etc. are great technologies that help you solve problems. They’re one of the things that is most special & unique about elixir & erlang. As a result lots of conference talks, blog posts etc. focus on them and it seems everyone wants to use them. Through the big focus on them in the community it sometimes feels like you can’t be a “real” elixir/erlang programmer until you’ve used and mastered them.

However, do you need them all the time? At least while using a framework (like Phoenix), chances are you don’t. The hidden detail of course is that you are using GenServers and friends without even knowing it – Phoenix runs every request and every channel[1] in their own processes. Ecto has its own pool for your database connections. It’s already parallelized and you don’t need to take care of it. That’s the beauty of it. What I’m saying is that in the standard situation the eco system takes care of you.

Building a relatively standard CRUD web application with Phoenix? No need.
Just using channels for a chat-like application in Phoenix? You’re good.

Of course, you don’t need them until you really do. I like how Jordan put it:

I agree that you, 99% of the time you do not need to use anything otp related but when you are in that 1% its really a game changer that makes elixir as great as it is.

The Bad and the Ugly

It’s not like using GenServers and friends makes everything instantly better. Quite the opposite, it can make your code both harder to understand and slower.

Sometimes the introduction of processes just complicates the code: what could have been a simple interaction is now obfuscated through a bunch of GenServer calls. By trying to use these concepts you can also essentially grind your application to a halt performance wise. To take an example from the excellent Adopting Elixir, which also covers this topic:

A new developer team started building their Phoenix applications. They had always heard GenServers could be treated like microservices but even tinier. This “wisdom” led them to push all of their database access control to GenServers.
(…)
Performance was abysmal. Under high-enough load, some pages took 3 seconds to render because they built a bottleneck where none existed. They defeated Ecto connection pools because all access happened through a single process.
In essence, they made it easy to create global, mutable variables in Elixir. They essentially crippled the single biggest advantage of functional languages, for no gain whatsoever.

When to GenServer?

Adopting Elixir again provides some guidance as to what to best use processes for:

  • Model state accessed by multiple processes.
  • Run multiple tasks concurrently.
  • Gracefully handle clean startup and exit concerns.
  • Communicate between servers.

They especially highlight that GenServers aren’t to be used for code organization.

Robert Virding (one of the creators of Erlang) also chimed in and his response is so measured that I want to quote it in full:

I think the most important thing to understand and to use properly is the concurrency in the problem/solution/system. Using GenServers and other behaviours is just one way of doing the concurrency but it is not the only way. They are tools and like all tools they need to be used in the right way. The problem is to get the right level of concurrency which suites your problem and your solution to that problem. Too much concurrency means you will be doing excess work to no real gain, and too little concurrency means you will be making your system too sequential.

Now, as has been pointed out, many packages like Phoenix already provide a pretty decent level of concurrency which is suitable for many types of applications, at least the ones they were intended for. They will do this automatically so you don’t have to think about it in most cases, but it is still there. Understanding that is necessary so you can work out how much concurrency you need to explicitly add if any. Unfortunately because it is all managed for you “invisibly underneath” many don’t realise that is it there.

While people agree in general, there are also some that say that most systems can benefit from a GenServer and supervision tree. As Saša Jurić points out:

With a lot of hand waving, I’d say that GenServers are OTPs built-in building block for building responsive services, Tasks are the same for non-responsive ones, and supervision tree is the built-in service manager like systemd or upstart. In the past 10+ years of my backend side experience, I’ve worked on small to medium systems, and all of them needed all of these technical approaches.

He also has a fairly extensive blog post on the topic of when to reach for these tools and when not to: To spawn, or not to spawn.

In the end, I’m also not an expert on GenServers and Supervision Trees – as I said I never wrote a production one. Still learning and still growing. I think knowing them well gives you a good basis to make informed decisions on when to use them and when not to.

Abstractions

But you came to Elixir for the parallelism! So you need GenServers right? No.

Elixir core and the community have been very good at providing easy to use solutions for writing fully parallel programs without having to write your own GenServers and supervision trees. There are gen_stage, flow and broadway, but somewhat more importantly there are built-ins like Task (do something in parallel easily) and Agent (share state through a process).
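
Agent, for instance, makes shared state a couple of one-liners (a quick sketch of mine, not from the original post – under the hood Agent runs a GenServer for you):

# start a process holding a counter, then update and read it
{:ok, counter} = Agent.start_link(fn -> 0 end)
Agent.update(counter, &(&1 + 1))
Agent.get(counter, & &1) # => 1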

Want to geocode the pick-up and drop-off addresses of a shipment in parallel and then wait until both have finished? Say no more:
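
With Task this can be as simple as the following sketch (Geocoder and the shipment fields are placeholder names of mine):

# kick off both geocoding requests in parallel...
pickup_task  = Task.async(fn -> Geocoder.geocode(shipment.pickup_address) end)
dropoff_task = Task.async(fn -> Geocoder.geocode(shipment.dropoff_address) end)

# ...and block until both have finished
pickup  = Task.await(pickup_task)
dropoff = Task.await(dropoff_task)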

Learning

Although I’m saying you don’t need it and can write applications without it just fine, it’s a fascinating and interesting topic that can make you a better programmer, even without ever writing your own supervision trees. And as I said, education is key to knowing when to reach for them and when not to. So here are a couple of books I can recommend to up your knowledge:

  • The Little Elixir & OTP Guide Book – my personal favorite. In it the author takes you through writing implementations of poolboy, starting with a simple one, showing its shortcomings and then extending it into a complicated supervision tree – with the reasoning to go along with it, so you can see what level of complexity each feature needs. A fascinating read for me.
  • Adopting Elixir – covers many aspects as mentioned before but is especially good at what to use these concepts for and what not to use them for (and is an interesting read overall if you’re considering getting your company into elixir)
  • Elixir in Action – Saša Jurić is one of the most knowledgeable people I can think of about this topic, hence I quoted him before, and a significant part of the book is dedicated to it too. A second edition came out earlier this year so it’s also all up to date.
  • Functional Web Development with Elixir, OTP, and Phoenix – build a simple game, first with simple functions, then develop a GenServer interface to it with some supervisors and then wire it all up in a Phoenix App – good introduction and especially good at showing some solid debugging work.
  • Programming Phoenix – introductory book I wish everyone writing a phoenix application read first, as it also covers why things are done in a certain way and it’s by the authors of the framework which gives a unique perspective. It also has the aforementioned information of what is already parallelized etc. It also includes a pretty cool use case for a supervised GenServer (getting suggestions from an external service and ranking them).
  • Designing Elixir Systems with OTP – this is shaping up to be a great resource on GenServers, Supervision Trees and OTP in general. I haven’t read it yet, but James, Bruce and PragProg have all my trust plus I read some early praise already.

Final Thoughts

Well, you don’t need GenServers and supervision trees to start with writing elixir applications! Go out there, write an application, play with it, have fun, call yourself an elixir programmer (because you are!). Still, learn about OTP to expand your mind and to know where to look when you encounter a problem where a supervision tree could help you.

When discussing this in Elixir Forum, Dimitar also came up with a good thought: Maybe the OTP is what pulls people in and helps them discover all those other nice things?

I came for OTP. I stayed for the functional programming.

As a community, I think we should make it clearer that you don’t have to use GenServers and that doing so might actually be harmful. Of course all those conference talks about how to use them, distributed systems etc. are very cool, but every now and then give me a talk about how a business succeeded by writing a fairly standard Phoenix application. Don’t overcomplicate things.

I’m not saying you shouldn’t learn about GenServers. You should. But know when to use them and when not to.

Lastly, if you disagree I want you to scream at me and teach me the error of my ways 😃


[1] Technically Phoenix’s web server, Cowboy, doesn’t use GenServers and supervision trees for normal HTTP request handling but its own thing; it offers similar functionality though, so it still holds true that you don’t need to roll your own. Thanks to Hubert for pointing that out.

edit1: Correctly mention the new edition of Elixir in Action, be more specific about Cowboy processes and include a thought in the closing paragraph. Thanks go to Saša, Hubert and Dimitar.

Video & Slides: Stop Guessing and Start Measuring – Benchmarking in Practice (Lambda days)

I managed to get into Lambda Days this year and got a chance to present my benchmarking talk. You can watch the video here and check out the slides.

Sadly the bunny video isn’t working in the recording 😥

You can see the slides here or at speakerdeck, slideshare or PDF.

Abstract

“What’s the fastest way of doing this?” – you might ask yourself during development. Sure, you can guess – but how do you know? How long would that function take with a million elements? Is that tail-recursive function always faster?

Benchmarking is here to give you the answers, but there are many pitfalls in setting up a good benchmark and analyzing the results. This talk will guide you through, introduce best practices, and surprise you with some results along the way. You didn’t think that the order of arguments could influence its performance…or did you?

Benchmarking a Go AI in Ruby: CRuby vs. Rubinius vs. JRuby vs. Truffle – a year later

A little more than a year ago I published a blog post benchmarking different ruby implementations on a bot that plays Go, which I wrote. Now, a little more than a year later (~13.5 months), let’s see how the different contestants have improved in the time passed.

This question becomes increasingly interesting as Ruby 3.0 aims to be 3 times as fast as Ruby 2.0.

As last time, the benchmarks will be run on my Go bot rubykon, which has barely changed since then. The important question for Monte Carlo Tree Search (MCTS) bots is how many simulations can I run, as more simulations improve the quality of play. You can check out the old blog post for more rationale on this.

Setup

The benchmarks were run on the 16th of January 2017 with the following concrete Ruby versions (versions slightly abbreviated in the rest of the post):

  • CRuby 2.0.0p648
  • CRuby 2.2.3p173
  • Rubinius 2.5.8
  • JRuby 9.0.3.0
  • JRuby 9.0.3.0 in server mode and with invoke dynamic enabled (denoted as + id)
  • Truffleruby with master from 2015-11-08 and commit hash fd2c179, running on graalvm-jdk1.8.0
  • CRuby 2.4.0p0
  • Rubinius 3.69
  • JRuby 9.1.7.0
  • JRuby 9.1.7.0 in server mode and with invoke dynamic enabled (denoted as + id)
  • Truffleruby on truffle-head from 2017-01-16 with commit hash 4ad402a54cf, running on graal-core master from 2017-01-16 with commit hash 8f1ad406d78f2f built with a JVMCI enabled jdk8 (check out the install script)

As you might notice I prefer to say CRuby over MRI, and very old versions are gone – e.g. I dropped benchmarking CRuby 1.9.x and JRuby 1.7.x. I also added CRuby 2.0 – as it is the comparison standard for Ruby 3.0. The next 5 versions are the remaining rubies from the original benchmark, the other five are their most up-to-date versions.

All of this is run on my Desktop PC running Linux Mint 18 (based on Ubuntu 16.04 LTS) with 16 GB of memory and an i7-4790 (3.6 GHz, 4 GHz boost). Also running on openjdk 8.


tobi@speedy ~ $ uname -a
Linux speedy 4.4.0-59-generic #80-Ubuntu SMP Fri Jan 6 17:47:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
tobi@speedy ~ $ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)


Full Monte Carlo Tree Search with 1000 playouts

I cut out the first benchmark from last year’s edition due to some trouble getting benchmark-ips running – so we’ll stick with the more macro benchmark that performs a full Monte Carlo Tree Search using UCT on a 19×19 board doing 1000 playouts and see how fast we can get here. This is really the whole package of what we need to make fast for the Go bot to be fast! The benchmark uses benchmark-avg, which I wrote to support more macro benchmarks than benchmark-ips.

The benchmarking code is quite simple:

Benchmark.avg do |benchmark|
  game_state_19 = Rubykon::GameState.new Rubykon::Game.new(19)
  mcts = MCTS::MCTS.new

  benchmark.config warmup: 180, time: 180

  benchmark.report "19x19 1_000 iterations" do
    mcts.start game_state_19, 1_000
  end
end

As you can see we run plenty of warmup – 3 minutes of it – and then 3 minutes of benchmarking time. So let’s see how many iterations per minute our contestants manage here:

(Chart: iterations per minute – higher is better)

As one can see, truffleruby is leading the pack by quite a margin, followed by JRuby – yet truffleruby is still over 2 times faster than JRuby. Truffleruby is also an impressive 7 times faster than CRuby 2.4.0.

Of course, as the new benchmark was inspired by Ruby 3.0 aiming to be 3 times as fast as Ruby 2.0 – how are we doing? Do we maybe already have a 3 times faster Ruby? Well, there is a graph for that!

(Chart: speedup as compared to CRuby 2.0 – higher is better)

As we can see, JRuby 9.1.7.0 run in server mode and with invoke dynamic enabled is the first one to be 3 times faster than CRuby 2.0. Also, both the old version of truffleruby and the newest are more than 3 times faster than our baseline – the new one even 8 times faster! CRuby 2.4 on the other hand is at about a 14% improvement as compared to 2.0.

Another metric that intrigues me is how the implementations improved in the time between the two benchmarks, to gauge where the journey is going. Therefore, the next chart compares the newest version of each Ruby implementation benchmarked here against its older sibling from last time (Ruby 2.4.0 against 2.2.3, JRuby 9.1.7.0 vs. 9.0.3.0 etc.):

(Chart: speedup against older version – higher is better)

CRuby improved by about 11%, JRuby with invokedynamic about 18% while truffleruby, already leading the pack last time, managed another 2x performance improvement!

The odd one out clearly is Rubinius, which only manages about 20% of the performance of its former version (or a 5x decrease, if you will). This seemed like a setup error on my part at first, but it is not: Rubinius removed their JIT. As this benchmark is a prime example of a pretty hot loop running, the hit of removing the JIT naturally is pretty hard.

The slight decrease in JRuby performance without invokedynamic is a bit weird, but it’s so small that it might as well be measurement inaccuracy.

Of course, for the data fans here is the raw table:

Ruby ipm average time (s) standard deviation Speedup to 2.0
2.0.0p648 4.54 13.22 0.44% 1
2.2.3p173 4.68 12.83 1.87% 1.0308370044
rbx-2.5.8 4.67 12.84 1.91% 1.0286343612
JRuby 9.0.3.0 7.75 7.74 0.47% 1.7070484581
JRuby 9.0.3.0 + id 12.81 4.68 0.80% 2.8215859031
truffleruby old 16.93 3.54 10.70% 3.7290748899
2.4.0p0 5.2 11.53 2.18% 1.1453744493
rbx-3.69 1.01 59.4 0.30% 0.2224669604
JRuby 9.1.7.0 7.34 8.17 2.12% 1.6167400881
JRuby 9.1.7.0 + id 15.12 3.97 0.62% 3.3303964758
truffleruby 36.65 1.64 1.25% 8.0726872247

Thoughts on different Ruby implementations

Let’s wrap this up with a couple of thoughts on the different implementations:

TruffleRuby

TruffleRuby is making steady and great progress, which I’m thoroughly impressed with. To be honest, I was wondering if its performance increased since the last benchmark as I was worried that implementing new Ruby features would lead to decreased performance. Seeing that it still managed a 2x performance improvement is mind boggling.

Raw speed is one thing, but if you’re familiar with TruffleRuby, one of the more noticeable downsides is the big warmup time it needs to do all of its fancy optimizations – so the peak performance you see here is only achieved after a certain time, before which it is much slower. Still, I’m happy to say that warmup also improved since last time! Where the old truffleruby, in my benchmarks, took about 101 seconds or 13 iterations to reach peak performance (hence the very long warmup time, to make sure every implementation is warm), the new one took around 52 seconds or 7 iterations. Still – the first of those warmup iterations took 27 seconds, so if you can’t deal with some warmup time to start with, this might be a deal breaker.

Warmup is an important topic here – rubykon has no external dependencies so there’s not much code that needs to be JITed and also TruffleRuby can probably do its type optimizations of specific methods rather efficiently.

Of course, the team is working on that – there is a really noteworthy post about the state of TruffleRuby in early 2017. There, further plans are detailed, e.g. C-extension support, improving startup time (drastically!) and running Rails.

It shall also be mentioned here that setting up TruffleRuby took by far the most time and some bugs had crept in that needed fixing for Rubykon to run again. But after all, this is a pre-1.0 project, so these problems are to be expected. And with that in mind I want to explicitly thank Chris Seaton and Benoit Daloze for helping me with my setup troubles, fixing bugs and being wonderfully nice and responsive in general. Benoit even wrote a script to install the current graal-core master to run TruffleRuby with, which I was struggling with and which is needed at the moment to run rubykon on TruffleRuby without optfails.

JRuby

JRuby is coming along nicely, it’s the only Ruby implementation that runs this benchmark at a 3x speed of Ruby 2.0 while able to run whole Rails applications at the same time. It’s still my little personal favorite that I’d love to see more adoption of in the general ruby scene. Any time I see a talk or blog post about “Moving from ruby to the JVM for performance/Java interop” that doesn’t mention JRuby but goes straight to Java/Clojure/Scala & friends it makes me sad (nothing against those technologies though, they’re great!).

JRuby at the moment also sits sort of in the middle of CRuby and TruffleRuby in other concerns – it takes more warmup time than CRuby but a lot less than TruffleRuby, while still reaching nice peak performance. The latest release also brought along some nice performance improvements and we can only expect more of those in the future.

CRuby/MRI

CRuby is coming along nicely and steadily – we get nice improvements to the language, and a 14% performance improvement over 2.0 isn’t negligible either. It’s still a long way from the targeted 3x though. To be fair, the team is still in the process of defining the benchmarks by which “Ruby 3×3” will be measured (the current plan afaik is to have 9 of those, because 3×3 = 9). This is the ground work to start the optimization efforts, and Ruby 3 is still far in the future with an estimated release in 2020.

Rubinius

Sadly, this is the bummer of this benchmarking round for me. A 5x performance decrease as compared to the previous version of this benchmark was quite surprising; as noted before, this is due to the removed JIT. Comment courtesy of Brian (maintainer of Rubinius) from the issue I opened:

@PragTob the just-in-time compiler (JIT) has been removed to make way for a new interpreter and JIT infrastructure. That is the reason you’re seeing the performance degradation (and illustrates how important JIT is to making Ruby fast). The JIT was removed because it had a number of bugs and was too complicated, resulting in almost no contributors making improvements.

If I do a next version of these benchmarks and Rubinius by then doesn’t have a JIT again or some other performance improvements, then I’ll probably drop benchmarking it. It’s far behind the others as of now and if Rubinius’s goal isn’t speed but developer experience or whatever then there also isn’t much sense in benchmarking it 🙂

Final Thoughts

CRuby and JRuby did mostly what I expected them to do – improve at a steady and good pace. TruffleRuby truly surprised me with 2x improvements in run time and warmup. I’m still a bit skeptical about warmup time once it’s running a full fledged Rails application, but happy to try that out once they get there 🙂 It makes me wonder though: if I ported Rubykon to Crystal, how would the performance compare to Truffle? Ah, time…

Almost forgot the usual disclaimer, so here it goes: Always run your own benchmarks! This is a very CPU intensive AI problem typically solved by much more performant languages. I did it for fun and to see how far I could get. Also, this benchmark most certainly isn’t indicative of the performance of running a Rails application – the parts heavily used by Rails are most likely way different from what this does. E.g. we have no I/O here and little to no String operations, which play a bigger role in Rails. It might point in the right direction and the speed improvements might be the same, but they don’t have to be.

Finally, this took me WAY longer than I expected. I started this over a month ago while I still had time off – compilation and running problems with both old and very shiny new rubies were mostly to blame. So I’m not sure if I’ll do this again in a year’s time – if you’d like to see it and like this sort of thing, please let me know 🙂


Slides: Introducing elixir the easy way (Rubyconf Portugal)

A small lightning talk I gave at Rubyconf Portugal that is complementary to my “elixir & phoenix – fast, concurrent and explicit” talk, in that it goes into how we integrated a Phoenix application with our existing Rails application and clients.

Slides are available as PDF, speakerdeck and slideshare.

Abstract

Small lightning talk with some practical advice on how we integrated a Phoenix application in our general application landscape with a rails monolith and some frontend clients.

Slides: Elixir & Phoenix – fast, concurrent and explicit (Rubyconf Portugal)

And here go the slides for my elixir and phoenix talk focusing on the great features that both bring to the table and make your development experience nicer.

It is similar to the version presented at Codemotion Berlin, save for some minor tweaks and a hopefully more readable and stronger shade of green 😀

So you can get the slides as PDF, speakerdeck and slideshare.

Abstract

Elixir and Phoenix are known for their speed, but that’s far from their only benefit. Elixir isn’t just a fast Ruby and Phoenix isn’t just Rails for Elixir. Through pattern matching, immutable data structures and new idioms your programs can not only become faster but more understandable and maintainable. This talk will take a look at what’s great, what you might miss and augment it with production experience and advice.

Slides: What did AlphaGo do to beat the strongest human Go player?

A talk about AlphaGo and the techniques it used, with no prior knowledge required. Second talk of the Codemotion Berlin series, mostly the same talk I gave at Full Stack Fest. Some things were cut/adjusted. A full recording of the Full Stack Fest version is available here.

You can get the slides via PDF, Speakerdeck and Slideshare.

Abstract

This year AlphaGo shocked the world by decisively beating the strongest human Go player, Lee Sedol. An accomplishment that wasn’t expected for years to come. How did AlphaGo do this? What algorithms did it use? What advances in AI made it possible? This talk will briefly introduce the game of Go, followed by the techniques and algorithms used by AlphaGo to answer these questions.