Author: PragTob

Benchmarking a Go AI in Ruby: CRuby vs. Rubinius vs. JRuby vs. Truffle – a year later

A little more than a year ago I published a blog post benchmarking different Ruby implementations against a bot that plays Go which I wrote. Now, a little more than a year later (~13.5 months), let’s see how the different contestants have improved in the meantime.

This question becomes increasingly interesting as Ruby 3.0 aims to be 3 times as fast as Ruby 2.0.

As last time, the benchmarks are run on my Go bot rubykon, which has barely changed since then. The important question for Monte Carlo Tree Search (MCTS) bots is how many simulations they can run, as more simulations improve the quality of play. You can check out the old blog post for more of the rationale.

Setup

The benchmarks were run on the 16th of January 2017 with the following concrete Ruby versions (versions slightly abbreviated in the rest of the post):

  • CRuby 2.0.0p648
  • CRuby 2.2.3p173
  • Rubinius 2.5.8
  • JRuby 9.0.3.0
  • JRuby 9.0.3.0 in server mode and with invokedynamic enabled (denoted as + id)
  • Truffleruby with master from 2015-11-08 and commit hash fd2c179, running on graalvm-jdk1.8.0
  • CRuby 2.4.0p0
  • Rubinius 3.69
  • JRuby 9.1.7.0
  • JRuby 9.1.7.0 in server mode and with invokedynamic enabled (denoted as + id)
  • Truffleruby on truffle-head from 2017-01-16 with commit hash 4ad402a54cf, running on graal-core master from 2017-01-16 with commit hash 8f1ad406d78f2f built with a JVMCI-enabled jdk8 (check out the install script)

As you might notice, I prefer to say CRuby over MRI, and very old versions are gone – e.g. I dropped benchmarking CRuby 1.9.x and JRuby 1.7.x. I also added CRuby 2.0, as it is the comparison baseline for Ruby 3.0. The next five versions are the remaining Rubies from the original benchmark; the other five are their most up-to-date versions.

All of this is run on my desktop PC running Linux Mint 18 (based on Ubuntu 16.04 LTS) with 16 GB of memory and an i7-4790 (3.6 GHz, 4 GHz boost). The JVM-based Rubies run on OpenJDK 8:


tobi@speedy ~ $ uname -a
Linux speedy 4.4.0-59-generic #80-Ubuntu SMP Fri Jan 6 17:47:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
tobi@speedy ~ $ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)


Full Monte Carlo Tree Search with 1000 playouts

I cut the first benchmark from last year’s edition due to some trouble getting benchmark-ips to run – so we’ll stick with the more macro benchmark that performs a full Monte Carlo Tree Search using UCT on a 19×19 board doing 1000 playouts, and see how fast we can get here. This is really the whole package of what we need to make fast for the Go bot to be fast! The benchmark uses benchmark-avg, which I wrote to support more macro benchmarks than benchmark-ips does.

The benchmarking code is quite simple:

Benchmark.avg do |benchmark|
  game_state_19 = Rubykon::GameState.new Rubykon::Game.new(19)
  mcts = MCTS::MCTS.new

  benchmark.config warmup: 180, time: 180

  benchmark.report "19x19 1_000 iterations" do
    mcts.start game_state_19, 1_000
  end
end

As you can see we run plenty of warmup – 3 minutes of it – and then 3 minutes of benchmarking time. So let’s see how many iterations per minute our contestants manage here:

Iterations per minute – higher is better

As one can see, truffleruby is leading the pack by quite a margin, followed by JRuby – yet truffleruby is still over 2 times faster than it. Truffleruby is also an impressive 7 times faster than CRuby 2.4.0.

Of course, as the new benchmark was inspired by Ruby 3.0 aiming to be 3 times as fast as Ruby 2.0 – how are we doing? Do we maybe already have a 3 times faster Ruby? Well, there is a graph for that!

Speedup as compared to CRuby 2.0 – higher is better

As we can see, JRuby 9.1.7.0 run in server mode with invokedynamic enabled is the first one to be 3 times faster than CRuby 2.0. Also, both the old version of truffleruby and the newest one are more than 3 times faster than our baseline – the new one even more than 8 times faster! CRuby 2.4, on the other hand, sits at about a 14% improvement compared to 2.0.

Another metric that intrigues me is how much each implementation improved in the time between the two benchmark runs, to gauge where the journey is going. Therefore, the next chart compares the newest version of each Ruby implementation benchmarked here against its older sibling from last time (CRuby 2.4.0 against 2.2.3, JRuby 9.1.7.0 vs. 9.0.3.0 etc.):

Speedup against older version (higher is better)

CRuby improved by about 11% and JRuby with invokedynamic by about 18%, while truffleruby, already leading the pack last time, managed another 2x performance improvement!

The odd one out clearly is Rubinius, which only manages about 20% of the performance of its former version (or a 5x decrease, if you will). This seemed like a setup error on my part at first, but it is not – Rubinius removed their JIT. As this benchmark is a prime example of a hot loop running over and over, the hit from removing the JIT is naturally pretty hard.

The slight decrease in JRuby performance without invokedynamic is a bit odd, but it is small enough that it might well be down to measurement inaccuracy.

Of course, for the data fans here is the raw table:

Ruby                  ipm     average time (s)  standard deviation  Speedup to 2.0
2.0.0p648             4.54    13.22             0.44%               1
2.2.3p173             4.68    12.83             1.87%               1.0308370044
rbx-2.5.8             4.67    12.84             1.91%               1.0286343612
JRuby 9.0.3.0         7.75    7.74              0.47%               1.7070484581
JRuby 9.0.3.0 + id    12.81   4.68              0.80%               2.8215859031
truffleruby old       16.93   3.54              10.70%              3.7290748899
2.4.0p0               5.2     11.53             2.18%               1.1453744493
rbx-3.69              1.01    59.4              0.30%               0.2224669604
JRuby 9.1.7.0         7.34    8.17              2.12%               1.6167400881
JRuby 9.1.7.0 + id    15.12   3.97              0.62%               3.3303964758
truffleruby           36.65   1.64              1.25%               8.0726872247

Thoughts on different Ruby implementations

Let’s wrap this up with a couple of thoughts on the different implementations:

TruffleRuby

TruffleRuby is making steady and great progress, which I’m thoroughly impressed with. To be honest, I was wondering whether its performance had increased at all since the last benchmark, as I was worried that implementing new Ruby features would lead to decreased performance. Seeing that it still managed a 2x performance improvement is mind-boggling.

Raw speed is one thing, but if you’re familiar with TruffleRuby, one of the more noticeable downsides is the long warmup time it needs to do all of its fancy optimizations – so the peak performance you see here is only reached after a warmup period during which it is much slower. Still, I’m happy to say that warmup also improved since last time! Where the old truffleruby, in my benchmarks, took about 101 seconds or 13 iterations to reach peak performance (hence the very long warmup time, to make sure every implementation is warm), the new one took around 52 seconds or 7 iterations. Still – the first of those warmup iterations took 27 seconds, so if you can’t deal with some initial warmup time this might be a deal breaker.

Warmup is an important topic here – rubykon has no external dependencies, so there isn’t much code that needs to be JITed, and TruffleRuby can probably do its type optimizations of the specific methods rather efficiently.

Of course, the team is working on that – there is a really noteworthy post about the state of TruffleRuby in early 2017. In it, further plans are detailed, e.g. C extension support, improving startup time (drastically!) and running Rails.

It shall also be mentioned here that setting up TruffleRuby took by far the most time, and some bugs had crept in that needed fixing for rubykon to run again. But then, this is a pre-1.0 project, so such problems are to be expected. With that in mind I want to explicitly thank Chris Seaton and Benoit Daloze for helping me with my setup troubles, fixing bugs and being awfully nice and responsive in general. Benoit even wrote a script to install the current graal-core master to run TruffleRuby with, which I was struggling with and which is needed at the moment to run rubykon on TruffleRuby without optfails.

JRuby

JRuby is coming along nicely; it’s the only Ruby implementation that runs this benchmark at 3x the speed of Ruby 2.0 while being able to run whole Rails applications at the same time. It’s still my little personal favorite that I’d love to see more adoption of in the general Ruby scene. Any time I see a talk or blog post about “moving from Ruby to the JVM for performance/Java interop” that doesn’t mention JRuby but goes straight to Java/Clojure/Scala & friends it makes me sad (nothing against those technologies though, they’re great!).

JRuby at the moment also sits somewhere in the middle between CRuby and TruffleRuby in other respects – it takes more warmup time than CRuby but a lot less than TruffleRuby, while still reaching nice peak performance. The latest release also brought along some nice performance improvements, and we can only expect more of those in the future.

CRuby/MRI

CRuby is coming along nicely and steadily – we get nice improvements to the language, and a 14% performance improvement over 2.0 isn’t negligible either. It’s still a long way from the targeted 3x, though. To be fair, the team is still in the process of defining the benchmarks by which “Ruby 3×3” will be measured (the current plan, afaik, is to have 9 of those because 3×3 = 9). This is the groundwork needed to start the optimization work, and Ruby 3 is still far in the future, with an estimated release in 2020.

Rubinius

Sadly, this is the big bummer of this benchmarking round for me. A 5x performance decrease compared to the previous version of this benchmark was quite surprising; as noted before, this is due to the removed JIT. Comment courtesy of Brian (maintainer of Rubinius) from the issue I opened:

@PragTob the just-in-time compiler (JIT) has been removed to make way for a new interpreter and JIT infrastructure. That is the reason you’re seeing the performance degradation (and illustrates how important JIT is to making Ruby fast). The JIT was removed because it had a number of bugs and was too complicated, resulting in almost no contributors making improvements.

If I do a next version of these benchmarks and Rubinius by then doesn’t have a JIT again or some other performance improvements, then I’ll probably drop benchmarking it. It’s far behind the others as of now and if Rubinius’s goal isn’t speed but developer experience or whatever then there also isn’t much sense in benchmarking it 🙂

Final Thoughts

CRuby and JRuby did mostly what I expected them to do – improve at a steady and good pace. TruffleRuby truly surprised me with 2x improvements in run time and warmup. I’m still a bit skeptical about warmup time when it’s running a full-fledged Rails application, but I’m happy to try that out once they get there 🙂 It makes me wonder, though: if I ported rubykon to Crystal, how would the performance compare to Truffle? Ah, time…

Almost forgot the usual disclaimer, so here it goes: always run your own benchmarks! This is a very CPU-intensive AI problem typically solved by much more performant languages. I did it for fun and to see how far I could get. Also, this benchmark most certainly isn’t indicative of the performance of a Rails application – the parts heavily used by Rails are most likely quite different from what this exercises. E.g. we have no I/O here and little to no String operations, which play a bigger role in Rails. It might point in the right direction and the speed improvements might be similar, but they don’t have to be.

Finally, this took me WAY longer than I expected. I started over a month ago while I still had time off – compilation and running problems with old and very shiny new Rubies are mostly to blame. So I’m not sure if I’ll do this again in a year’s time – if you’d like to see it and like this sort of thing, please let me know 🙂

 

Mastery comes from failure

In software development, and many other disciplines, people strive for mastery – you want to get better, to be great at something. For some reason failure is often seen as the opposite of mastery – after all, masters don’t fail. Or do they?

We often see talks and read great blog posts about the achievements of amazing people, but almost never about major failures. So how are failures important and useful? Well, let me tell you a little story from back when I was teaching an introductory web development course:

We were in the practice stage and I was doing my usual rounds, looking at what the students did and helping them resolve their problems. After glancing at a screen and saying that the problem could be resolved by running the migrations, the student looked at me in disbelief and said:

Tobi, how can you just look at the screen and see what’s wrong in a matter of seconds? I’ve been trying to fix this for almost 15 minutes!

My reply was simple:

I’ve probably made the same mistake a thousand times – I learned to recognize it.

And that’s what this post is about and where, in my mind, a lot of mastery comes from: failure. We learn as we fail. What is one of the biggest differences between a novice and a master? Quite simply, the master has failed many more times than the novice has ever tried. Through these failures the now-master learned and achieved a great level of expertise – but you don’t see these past failures now. This is shown quite nicely in this excellent web comic:

“Be Friends with Failure”, taken (with permission) from doodlealley. It focuses on art, but I believe it applies to programming just as much and I recommend reading the whole comic 🙂

For this to work properly we can’t program/work by coincidence, though – when something goes wrong we must take the right measures to determine why it failed and what we could do better next time. Post-mortems are relatively popular for bigger failures: in a post-mortem you identify the root cause of a problem, show how it led to the observed misbehavior of the system and ideally identify steps to prevent such faults in the future. Service providers often even share them publicly.

I think we should always do our own little “post-mortems” – there doesn’t have to be a big service outage, data leak or whatever. Why did this ticket take longer than expected? Which of our assumptions were wrong? Could we change anything in our process to identify potential problems earlier in the future? This interaction with my co-worker didn’t go as well as it could have – how could we communicate better and more effectively? This piece of code is hard to deal with – what makes it hard, and how could it be made easier?

Of course, we can’t only learn from our own failures (although those tend to work best!) but from the mistakes of others as well – we observe their situation and the impact it had, see what could have been done better, what they learned from it and what we can ultimately learn from it.

Therein lies the crux, though – people are often afraid to share their failures. And when nobody shares their failures, how are we supposed to learn? Often it seems that admitting to failure is a “sign of weakness”, that you are not good enough, so you’d rather keep silent about it. Maybe you tell your closest friends about it, but no one else should know that YOU made a mistake! Deplorable!

I think we should rather be more open about it, share failures, share learnings and get better as a group.

This hit home for me when, a couple of years back, a friend asked me if it was OK to give a talk (I organize a local user group) about some mistakes made in a recent project. I thought it was a great idea to share these mistakes, along with some learnings, with everyone else. My friend seemed concerned about what others might think – after all, it is not common to share stories like this. We had the talk at a meetup and it was a great success. It made me wonder, though: how many people are out there with great stories of failures and learnings to share who decide not to share them?

I’ve heard whispers of some few meetups (or even conferences?!) that focus on failure stories but they don’t seem to have reached the mainstream. I’d love to hear more failure stories! Tried Microservices/GraphQL/Elm/Elixir/Docker/React/HypeXY in your project and it all blew up? Tell me about it! Your Rails monolith basically exploded? Tell me more! You had an hour long outage due to a simple problem a linter could have detected? You have my attention!

What I’m saying is: please go ahead and share your failures! By sharing them you learn more about them, as you need to articulate and analyze them; everyone else benefits, learns something and might have some input. Last but not least, people see that mistakes happen – it demystifies this image we have of these great people who never make a mistake and who were just always great, and instead shows us where they are coming from and what still happens to them.

My Failures + Lessons learned

Of course, a blog post like this would feel empty, hollow and wrong without sharing a couple of my own failures, or some that I observed and that shaped me. These are not detailed post-mortems but rather short bullet points of a failure/mistake and what I learned from it. They sound general but are ultimately situational and more nuanced than this – they are kept short in favor of brevity, so please keep that in mind.

  • Reading a whole Ruby book from start to finish without doing any exercise taught me that this won’t teach me a programming language and that I can’t even write a basic program afterwards so I really should listen to the author and do the exercises
  • Trying to send a secret encryption key as a parameter through GET while working under pressure taught me that this is a bad idea (the parameter is in the URL –> the URL is not encrypted –> security FAIL), that working under pressure indeed makes me worse and that I never want to miss a code review again, as this was thankfully caught during our code review
  • Finally diving into meta programming after regarding the topic as too magic for too long, I learned that I can learn almost anything and getting into it is mostly faster than I think – it’s the fear of it that keeps you away for too long
  • Overusing meta programming taught me that I should seek the simplest workable solution first and only reach for meta programming as a last resort, as it is easy to end up with a code base that is harder to maintain and understand than necessary – sometimes it’s even better to have some duplication than that meta programming
  • Overusing meta programming also taught me about the negative performance implications especially if methods are called often
  • Being lied to in an interview taught me not to ask “Do you do TDD?” but rather “How do you work?”
  • Doing too much in my free time taught me that I should say “No” sometimes and that a “No” can be a “Yes” to yourself
  • Working on a huge Rails application taught me the dangers of fat models and all their validations, callbacks etc.
  • Letting a client push in more features late in the process of a feature taught me the value of splitting up tickets, finishing smaller work packages and again decisively saying “No!”
  • Feeling very uncomfortable in situations and not speaking up because I thought I was the only one affected taught me that chances are I’m not the only one – others may be affected way more, so I should speak up
  • Having a Code of Conduct violation at one of my meetups showed me that I should proactively inform all speakers about the CoC weeks before the talks in our communication and not just have it on the meetup page
  • Blindly sticking to practices and failing with it taught me to always keep an open mind and question what I’m doing and why I’m doing it
  • Doing two talks in the same week (while being woefully unprepared for the second) taught me that if I ever do that again, neither of them can be original
  • Working on a project that started with microservices and an inexperienced team showed me the overhead involved and how wrongly sliced services can be worse than any monolith
  • Building my first bigger project (in university, thankfully) in a team and completely messing it up at first showed me the value of design patterns
  • Skipping acceptance testing in a (university) project and then having the live demo error out on something we could have only caught in acceptance/end-to-end testing showed me how important those tests really are
  • Writing too many acceptance/end-to-end tests clarified to me that tests should really be written at the right level of the testing pyramid in order to save test execution time, test writing time and test refactoring time
  • Seeing how I get less effective when panic strikes during production problems and how panic spreads to the rest of the team highlighted the importance of staying calm and collected especially during urgent problems
  • Also during urgent problems it is especially important to delegate and trust my co-workers, no single person can handle and fix all that – it’s a team effort
  • Accidentally breaking a crucial algorithm in edge cases (while fixing another bug) made me really appreciate our internal and external fallbacks/other algorithms and options so that the system was still operational
  • Working with overly (performance) optimized code showed me that premature optimization truly is the root of all evil and the enemy of readability – measure and monitor where the performance bottlenecks and hot spots are, and only then go ahead and look for performance improvements there!
  • Using only variable names like a, b, c, d (wayyy back when I started programming) and then not being able to understand how my program worked a week later and having to completely rewrite it (a couple of days before the hand-in of the competition…) forever engraved the importance of readable and understandable names into my brain
  • Giving a talk that had a lot of information that I found interesting but ultimately wasn’t crucial for the understanding of the main topic taught me to cut down on additional content and streamline the experience towards the learning goals of the presentation
  • Working in a team where people yelled at each other taught me that I don’t want to deal with behavior like this and that intervention is hard – often it’s best to leave the room and let the situation cool down
  • Being in many different situations failing to act in a good way taught me that every situation is unique and that you can’t always act based on your previous experience or advice
  • Trying to contribute to an open source project for the first time, never hearing back from the maintainers and ultimately having my patch rejected half a year after I asked if it was cool to work on showed me the value of timely, clear communication, especially to support open source newcomers and keep their spirits high
  • Just recently I failed at creating a proper API for my Elixir benchmarking library: I used a map for configuration and passed it in as an optional first argument (ouch!), and the main data structure was a list of two-tuples instead of a map as the second argument – thankfully fixed in the latest release
  • probably a thousand more but that I can’t think of right now 😉

Closing

“Be Friends with Failure”, taken (with permission) from doodlealley.

We can also look at this from another angle – when we’re not failing, we’re probably doing things that we’re already good at, not something new where we’re learning and growing. There’s nothing wrong with doing something you’re good at – but when you venture out to learn something new, failure is part of the game, at least in the small.

I guess what I’m saying is – look at failures as an opportunity to improve. For you, your team, your friends and potential listeners. Analyze them. What could have prevented this? How could this have been handled better? Could the impact have been smaller? I mean this in the small (“I’ve been trying to fix this for the past hour, but the fault was over here in this other file”), in the big (“Damn, we just leaked customer secrets”) and everywhere in between.

We all make mistakes. Yes, even our idols – we sadly don’t talk about them as much. What’s important in my opinion is not that we made a mistake, but how we handle it and how we learn from it. I’d like us to be more open about it and share these stories so that others can avoid falling into the same trap.

Slides: How fast is it really? Benchmarking in Elixir

I’m at Elixirlive in Warsaw right now and just gave a talk. The talk is about benchmarking – the greater concepts, but the concrete examples are in Elixir and it works with my very own library benchee to also show some surprising Elixir benchmarks. The concepts are applicable in general, and it also gets into categorizing benchmarks into micro/macro/application etc.

If you were there and have feedback – positive or negative – please tell me 🙂

Slides are available as PDF, speakerdeck and slideshare.

Abstract

“What’s the fastest way of doing this?” – you might ask yourself during development. Sure, you can guess what’s fastest or how long something will take, but do you know? How long does it take to sort a list of 1 Million elements? Are tail-recursive functions always the fastest?

Benchmarking is here to answer these questions. However, there are many pitfalls around setting up a good benchmark and interpreting the results. This talk will guide you through, introduce best practices and show you some surprising benchmarking results along the way.

Released: benchee 0.6.0, benchee_csv 0.5.0, benchee_json and benchee_html – HTML reports and nice graphs!

The last few days I’ve been hard at work polishing up and finishing releases of benchee (0.6.0 – Changelog) and benchee_csv (0.5.0 – Changelog) as well as the initial releases of benchee_html and benchee_json!

I’m proudest and happiest of finally getting benchee_html out the door, along with great HTML reports including plenty of graphs and the ability to export them! You can check out the example online report or glance at this screenshot of it:

While benchee_csv merely had some compatibility updates and benchee_json just transforms the general suite to JSON (which is then used in the HTML formatter), I’m particularly excited about the big new features in benchee and of course benchee_html!

Benchee

0.6.0 is probably the biggest release of the “core” benchee library yet, with some needed API changes and great features.

New run API – options last as keyword list

The “old” way was to optionally pass options into run as a map in the first argument and then define the jobs to benchmark in another map. I did this because in my mind the configuration comes first, and maps are much easier to work with through pattern matching than keyword lists are. However, having an optional first argument already felt kind of weird…

Thing is, that’s not the most idiomatic Elixir way to do this. It is conventional to pass options in as the last argument, as a keyword list. After voicing my concerns on the elixirforum, the solution was to allow passing in options as a keyword list but convert it to a map internally, to keep the advantage of good pattern matching among other benefits.

The old style still works (thanks to pattern matching!) – but it might get deprecated in the future. In the process, though, the run interface of the very first version, which used a list of tuples, doesn’t work anymore 😦
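
To make the change concrete, here is a minimal sketch of the two call styles – the job, its implementation and the option values are just for illustration:

list = Enum.to_list(1..10_000)

# old style: optional configuration map as the first argument
Benchee.run(%{time: 3}, %{
  "flat_map" => fn -> Enum.flat_map(list, fn i -> [i, i * 2] end) end
})

# new style: jobs first, options last as a keyword list
Benchee.run(%{
  "flat_map" => fn -> Enum.flat_map(list, fn i -> [i, i * 2] end) end
}, time: 3, warmup: 1)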

Multiple inputs

The great new feature is that benchee now supports multiple inputs – so in one suite you can run the same functions against several different inputs. That is important, as functions can behave very differently on inputs of different sizes or with a different structure, so it’s good to check them against multiple inputs. The feature was inspired by a discussion on an Elixir issue with José Valim.

So what does this look like? Here it goes:

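A minimal sketch of the feature (input names and sizes here are just for illustration) – you pass an inputs map in the options and each benchmarking function receives the currently benchmarked input as its argument:

map_fun = fn i -> [i, i * 2] end

Benchee.run(%{
  "flat_map"    => fn input -> Enum.flat_map(input, map_fun) end,
  "map.flatten" => fn input -> input |> Enum.map(map_fun) |> List.flatten() end
}, inputs: %{
  "Small" => Enum.to_list(1..1_000),
  "Big"   => Enum.to_list(1..1_000_000)
})
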
The hard thing about it was that it changed how benchmarking results had to be represented internally, as another level was needed to represent the different inputs. This led to quite some work both in benchee and in the plugins – but in the end it was all worth it 🙂

benchee_html

This has been in the making for way too long – I should have released it a month or two ago. But now it’s here! It provides a nice HTML table and four different graphs – two for comparing the different benchmarking jobs and two for each individual job, to take a closer look at the distribution of run times of that particular job. There is a wiki page at benchee_html to discern between the different graphs, highlighting what they might be useful for. You can also export PNG images of the graphs at the click of a simple icon 🙂

Wonder how to use it? Well, it was already shown earlier in this post when showing off the new API. You just specify the formatters and the file where the report should be written to 🙂
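
Roughly along these lines – a minimal sketch where the job, the file path and the exact option values are just for illustration (the benchee_html README is the authoritative source for the options):

Benchee.run(%{
  "something" => fn -> :timer.sleep(10) end
},
  formatters: [
    &Benchee.Formatters.HTML.output/1,
    &Benchee.Formatters.Console.output/1
  ],
  html: [file: "samples_output/report.html"]
)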

But without further ado you can check out the sample report or just take a look at these images 🙂

IPS comparison, boxplot, histogram and raw run times graphs

Closing Thoughts

I hope you enjoy benchmarking with different inputs and then looking at great reports of the results. Let me know what you like about benchee, or what you don’t like about it and what could be better.

Video: Elixir & Phoenix – fast, concurrent and explicit

And here goes the video from Rubyconf Portugal – which was a blast! This talk mainly focuses on the latter, explicit, part of the title and how Elixir and Phoenix help with readable and maintainable code. It is also an introduction, quickly glancing at several topics that could each be the topic of a separate talk. This was at a Ruby conference and I’m a Ruby programmer, so parts of it are tailored to compare with Ruby, Object Oriented Programming and Functional Programming, as well as likenesses and differences between Rails and Phoenix. Hope you enjoy!

You can also have a look at the slides right here or as PDF, speakerdeck and slideshare.

Abstract

Elixir and Phoenix are known for their speed, but that’s far from their only benefit. Elixir isn’t just a fast Ruby and Phoenix isn’t just Rails for Elixir. Through pattern matching, immutable data structures and new idioms your programs can not only become faster but more understandable and maintainable. This talk will take a look at what’s great, what you might miss and augment it with production experience and advice.

Released: deep_merge 0.1.0 for Elixir

As you might have seen on this blog or on twitter, I’m thoroughly enjoying Elixir. One thing that I found to be sorely missing is deep_merge – given two maps, merge them together, and if a merge conflict occurs where both values are maps, go on and recursively merge those as well. In Ruby it is provided by ActiveSupport – you don’t need it THAT often, but when you need it, it’s really great to have. The most common use case for me is merging a user-specified configuration into some sort of default configuration to get the final configuration.

At first no one else seemed to have the need for deep_merge – strange, huh? About 1.5 months later it turned out others were having the same problem/question, as in this stackoverflow question or the gist linked here:

So others do want it! Time to propose it on the elixir-core mailing list! Lots of people seemed to like the idea and were in favor of it – none of the core team members though, and the discussion soon turned to implementation details. So after some time I thought “might as well draft up an implementation in a PR so we can discuss this” – and so it was.

As you might have guessed from the blog post title, this PR didn’t get through: the core team thought it was a bit too specific a use case, the implementation had its flaws (not handling structs properly – I learned something!) and it wasn’t general enough as it didn’t handle keyword lists. During the discussion José also mentioned that using protocols might be the way to go:

Using protocols seems to be the most correct but it feels a very niche feature to justify adding a new protocol to the language.

While I’d have liked to see it in Elixir core, I’ve got to commend the Elixir maintainers on rejecting features – I know it can sometimes be hard, but in the end it keeps the language focused and maintenance low. And one can always write a library, so people can pick and choose to get the functionality.

So what do you do? Well, implement it as a library of course! Meet deep_merge 0.1.0!

Why would you want to use deep_merge?

  • It handles both maps and keyword lists
  • It does not merge structs or maps with structs…
  • …but you can implement the simple DeepMerge.Resolver protocol for types/structs of your choice to also make them deep mergable
  • It provides a deep_merge/3 variant that takes a function, similar to Map.merge/3, to modify the merging behavior – for instance if you don’t want keyword lists to be merged or you want all lists to be appended (see the sketch below)
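
In practice it looks roughly like this – a quick sketch with made-up values, just to show the shape of the API:

# nested maps get merged recursively instead of the inner map being replaced
DeepMerge.deep_merge(%{a: 1, b: %{x: 10, y: 9}}, %{b: %{y: 20, z: 30}, c: 4})
# => %{a: 1, b: %{x: 10, y: 20, z: 30}, c: 4}

# keyword lists work the same way
DeepMerge.deep_merge([a: 1, b: [x: 10]], [b: [y: 20]])
# => [a: 1, b: [x: 10, y: 20]]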

All of these features – especially pitfalls like the struct merging – are reasons why it might be profitable to adopt a library rather than implement it yourself. So go on, check it out on github, hex and hexdocs.

Slides: Introducing elixir the easy way (Rubyconf Portugal)

A small lightning talk I gave at Rubyconf Portugal that is complementary to my “Elixir & Phoenix – fast, concurrent and explicit” talk, in that it goes into how we integrated a Phoenix application with our existing Rails application and clients.

Slides are available as PDF, speakerdeck and slideshare.

Abstract

Small lightning talk with some practical advice on how we integrated a Phoenix application in our general application landscape with a rails monolith and some frontend clients.

Slides: Elixir & Phoenix – fast, concurrent and explicit (Rubyconf Portugal)

And here go the slides for my elixir and phoenix talk focusing on the great features that both bring to the table and make your development experience nicer.

It is similar to the version presented at Codemotion Berlin, save for some minor tweaks and a hopefully more readable and stronger shade of green 😀

So you can get the slides as PDF, speakerdeck and slideshare.

Abstract

Elixir and Phoenix are known for their speed, but that’s far from their only benefit. Elixir isn’t just a fast Ruby and Phoenix isn’t just Rails for Elixir. Through pattern matching, immutable data structures and new idioms your programs can not only become faster but more understandable and maintainable. This talk will take a look at what’s great, what you might miss and augment it with production experience and advice.

Slides: What did AlphaGo do to beat the strongest human Go player?

A talk about AlphaGo and the techniques it used, with no prior knowledge required. This was the second talk of the Codemotion Berlin series, mostly the same talk I gave at Full Stack Fest – some things were cut/adjusted. A full recording of the Full Stack Fest version is available here.

You can get the slides via PDF, Speakerdeck and Slideshare.

Abstract

This year AlphaGo shocked the world by decisively beating the strongest human Go player, Lee Sedol. An accomplishment that wasn’t expected for years to come. How did AlphaGo do this? What algorithms did it use? What advances in AI made it possible? This talk will briefly introduce the game of Go, followed by the techniques and algorithms used by AlphaGo to answer these questions.

Slides: Elixir & Phoenix – fast, concurrent and explicit (Codemotion Berlin 2016)

This is a remixed and extended version (40 minutes) of the Elixir and Phoenix talk I gave at the beginning of the year at the Ruby User Group Berlin and Vilnius.rb. It is an introductory talk about Elixir and Phoenix that briefly dips into the performance aspect but then switches over to features of the Elixir programming language, its principles and how it all comes together nicely in Phoenix. Also known as: why do I want to program with Elixir and Phoenix – performance/fault tolerance aside.

This talk was the first talk I gave at Codemotion Berlin 2016.

Slides are embedded here or you can get the PDF, Speakerdeck or Slideshare.

There is no video, sadly – however, there is a voice recording which you can play while clicking through the slides: voice recording.

Abstract

Elixir and Phoenix are known for their speed, but that’s far from their only benefit. Elixir isn’t just a fast Ruby and Phoenix isn’t just Rails for Elixir. Through pattern matching, immutable data structures and new idioms your programs can not only become faster but more understandable and maintainable. This talk will take a look at what’s great, what you might miss and augment it with production experience and advice.