Are comments a code smell? Yes! No? It Depends.

Most people are either firmly on the “Yes!” or the “No!” side when it comes to discussing comments and their status as a code smell. But, as with most questions worth asking, the correct answer is rather an “It depends”.

I got to re-examine this topic lately, triggered by a tweet and a discussion with Devon:

So, let’s start unwrapping these layers, shall we?

Important distinction: Comments vs. Documentation

One of the first points on the list is understanding what a comment is and what it is not. For me, documentation isn’t a comment; in most languages (unfortunately) documentation just happens to be represented as a comment. Thankfully some languages, such as Elixir, Clojure and Rust, have a separate construct for documentation to make this obvious and to facilitate working with documentation.

I don’t think everything should be documented. However, libraries definitely need documentation (if you want people using them, that is). I’ve also grown increasingly fond of documentation in application code, especially as projects grow. At Liefery, core modules have a top level “module” comment describing the business context, language, important collaborators etc. It has proven invaluable. One of my favorites is the description of the shipment state machine, which briefly summarizes what each state means – keeping all of those in your head has proven quite difficult. Plus, it’s a gift for new developers getting into the code base.

Of course documentation still suffers from one of the major drawbacks of comments – it can become outdated. Much less so if the documentation provides context rather than describing in detail what happens.

So, documentation for me isn’t a comment. Next up – what’s this code smell thing?

What’s a Code Smell?

In short, a code smell is an indication that something could be wrong with the code. Or to let Kent Beck (whose idea the term was) and Martin Fowler tell it in Refactoring:

(…) describing the “when” of refactoring in terms of smells. (…) we have learned to look for certain structures in the code that suggest (sometimes they scream for) the possibility of refactoring.

Does this description fit comments? Well, comments made the “original” list of code smells, with the following reasoning:

(…) comments often are used as a deodorant. It’s surprising how often you look at thickly commented code and notice that the comments are there because the code is bad.

They go on to explain what should be done instead of comments:

When you feel the need to write a comment, first try to refactor the code so that any comment becomes superfluous.

That is exactly in line with my view of code comments. There is so much more that you can do to make your code more readable instead of resorting to a comment. Comments should be a last resort.

To further explore this, let’s take a look at one of my favorite distinctions when it comes to “good” comments versus “bad” comments.

WHAT versus WHY comments

I like to think of comments in 2 categories:

  • WHAT comments describe what the code does. These can be high level, but sometimes they also tell you every little thing the code does (“iterates over, then… uses result to”)
  • WHY comments clarify why some code is the way it is, giving you a peek into the past and the reason a decision was made

Let’s start with the WHAT – WHAT comments can almost always be replaced by more expressive code. Most of this has to do with proper naming and concepts, which is why it isn’t uncommon for me to spend an extended period of time on these. Hell, (coincidentally) Devon and I even spent hours on defining “Scenarios” in benchee.

Variables, methods, classes, modules… all of these communicate through their name. So spending a good amount of time naming them helps a lot. Often it is also the right call to extract one of these to keep the line count small and manageable, while the name of the concept you just extracted helps the understanding of the overall code.

Let’s take a look at one of my favorite examples:


# do one thing
# do another thing
# do something more

Let this stand in for every long method you ever came across where the method body was broken into sections by comments. Extract 3 methods, name them somewhat like the comments. Enjoy shorter methods, meaningful names, concepts and reusability.
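To make that concrete, here’s a minimal sketch in Elixir (module, function and step names are made-up stand-ins for whatever the comments actually said):

defmodule Report do
  # One small public function composed of three well-named steps
  # instead of one long body broken into sections by comments.
  def generate(data) do
    data
    |> do_one_thing()
    |> do_another_thing()
    |> do_something_more()
  end

  # formerly "# do one thing"
  defp do_one_thing(data), do: Enum.reject(data, &is_nil/1)

  # formerly "# do another thing"
  defp do_another_thing(data), do: Enum.map(data, &(&1 * 2))

  # formerly "# do something more"
  defp do_something_more(data), do: Enum.sum(data)
end

Each extracted function now carries in its name the information the comment used to, and it can be reused and tested on its own.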

I’ve even seen people advocating for this style of long methods with comments. Needless to say, I’m not a fan. The article says “The more complex the code, the more comments it should have.” and my colleague Tiago probably responded best to that:

You should make the code less complex not add more comments.

Another example I wish I made up, but it’s real (I only ported it from JavaScript to Ruby):


# context, outlet, times, time per step, state, data
def pattern(c, o, t, l, s, d)
  # …
end


As a first step, just rename your parameters to the understandable names that were commented above (also, how does l translate to time per step?). Afterwards, look for a bigger concept you might be missing and aggregate the needed data into it, so you trim the number of parameters down.
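A hedged sketch of where that could end up, here in Elixir (the struct and its fields are invented for illustration):

defmodule Pattern do
  # One named concept instead of six cryptic parameters
  defstruct [:context, :outlet, :times, :time_per_step, :state, :data]

  def pattern(%Pattern{context: context, outlet: outlet} = settings) do
    # … work with clearly named fields instead of c, o, t, l, s, d
    {context, outlet, settings.times}
  end
end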

All in all, a WHAT style comment to my mind is a declaration of defeat – it’s an “I tried everything but I can’t make this code readable by itself”. You can be sure that if I get there, I first consult a colleague about it, and if we can’t come up with something I’ll isolate the complexity and then be sad about my defeat.

With all of that said about WHAT comments, how about WHY comments?

They can help us with things that can hardly be expressed in code. Let’s take a little example from the great shoes project:


def paint_control(event)
  # some painting code
rescue => e
  # Really important to rescue here. Failures that escape this method
  # cause odd-ball hangs with no stacktraces. See #559 for an example.
  puts "SWALLOWED PAINT EXCEPTION ON #{@obj} – go take care of it: " + e.to_s
  puts 'Unfortunately we have to swallow it because it causes odd failures :('
end


While the puts statements communicate some of it, it is important to emphasize how dangerous not rescuing here is. The comment also helps establish context and points to where one could find more information about this.

This is an excellent use case for a comment and thankfully Kent Beck and Martin Fowler agree (again from the Refactoring book):

A comment is a good place to say why you did something. This kind of information helps future modifiers, especially forgetful ones.

There is an argument to be made that such information should be kept in the version control system and not in a comment. It is true: the commit message should definitely reflect this, ideally with an easy to produce link both to the ticket and the pull request. However, a commit message alone is not enough to my mind. Tracking down the commit that introduced a change in an older code base can be quite hard (ever tried changing all strings from single quotes to double quotes? 😉 ) and you can’t expect everyone to always look at the history of every line of code they change. A comment acts as a warning sign in places like these.

In short: WHY comments “yay“! WHAT comments “nay“!

Context matters

Before we get to the final “verdict” there’s one more aspect I’d like to examine: the context of your application. That context might greatly influence the need for comments. Another CRUD application like the ones you built before? It probably doesn’t need many comments. That new machine learning micro service written in Python and deployed with Docker, while no one in your team has done any of these things before? Yup, that probably needs a couple more comments.

New business domain, new framework, new language, something out of your comfort zone, the experience level of the developers – all of these can justify writing more comments. Those can give context, link to resources, be WHAT comments describing on a high level what’s going on, and so on. For instance, our route planning code has quite a few more comments explaining the used algorithms and data structures on a high level than the rest of the code base.

Yadda yadda – are comments a code smell or not?

As already established – it’s not as black and white as some people make it seem. To get back to the original twitter conversation that started all this:

For a shorter answer, I think Robert Martin also puts it quite well and succinctly in Clean Code:

The proper use of comments is to compensate for our failure to express ourself in code.

What about me? Well, if you asked me “Are comments a code smell?” on the street, the answer would probably be “Yes”, the better answer would be “It depends.” and the good answer, short of this whole blog post, would be something along the lines of:

There’s a difference between documentation, which is often good, and comments. WHY comments highlighting reasoning are valuable. WHAT comments explaining the code itself can often be replaced by more expressive code. Only when I admit defeat will I write a WHAT comment.

(these days this even fits in a single tweet 😉 )

edit: As friends happily pointed out, documentation is also a construct different from code comments in Clojure and Rust. Added that in.

Choosing Elixir for the Code, not the Performance

People like to argue about programming languages: “This one is better!” “No, this one!”. In these discussions, the performance card is often pulled. This language is so much faster in these benchmarks, or this company just needs that many servers now. Performance shouldn’t matter that much in my opinion, and Nate Berkopec makes a good point about that in his blog post “Is Ruby too slow for web scale?” (TLDR: we can add more servers, and developer time often costs more than servers):

The better conversation, the more meaningful and impactful one, is which framework helps me write software faster, with more quality, and with more happiness.

I agree with lots of the points Nate makes and I like him and the post, but it still rubbed me the wrong way a bit. While the post also states the above, it makes it seem like people switch languages just for the performance gains. And that brought up a topic that has been bugging me for a while: if you’re switching your main language purely for performance, there’s a high chance you’re doing it wrong. Honestly, if all we cared about was performance we’d all still be writing Assembly, or C/C++ at least.

It’s also true that performance is often hyped a lot around new languages, and specifically Elixir can also be guilty of that. And sure, performance is great and we read some amazing stories about it. The two most prominent adoption stories I can recall are usually cited and referred to for their great performance numbers: there is Pinterest (“our API responses are in microseconds now”) and there is Bleacher Report (“we went from 150 servers to 5”). If you re-read the articles though, other benefits of Elixir are mentioned as well, or are even discussed more than the performance.

The Pinterest article focuses first on Elixir as a good language; performance is arguably secondary in the post. Before we ever talk about microseconds there is lots of talk such as:

The language makes heavy use of pattern matching, a technique that prevents *value* errors which are much more common than *type* errors. It also has an innovative pipelining operator, which allows data to flow from one function to the next in a clear and easy to read fashion.

Then the famous microseconds number drops in one paragraph, and then the article immediately turns around and talks about code clarity again:

We’ve also seen an improvement in code clarity. We’re converting our notifications system from Java to Elixir. The Java version used an Actor system and weighed in at around 10,000 lines of code. The new Elixir system has shrunk this to around 1000 lines.

The Bleacher Report article has performance in its headline and at its heart, but it also mentions different benefits of Elixir:

The new language has led to cleaner code base and much less technical debt, according to Marx. It has also increased the speed of development(…)

So why do people rave about performance so much? Performance numbers are “objective”, they are “rational”, they are impressive and people like that. It’s an easy argument to make that Bleacher Report uses just 5 servers instead of the 150 of the old Ruby stack. That’s a fact. It’s easy to remember and easy to put into a headline. Discussing the advantages of immutable data structures, pattern matching and the “let it crash” philosophy is much more subjective, personal and nuanced.

Before we jump in: this blog post is general, but some specific points might resonate best with a Ruby crowd, as that is my main programming language and where I’m coming from. So, coming from other languages, some of the points I’ll make may feel like “meh, I already have this”, while I might miss out on obvious cool things both Ruby and Elixir have.

Hence, after this lengthy introduction let’s focus on something different – what makes Elixir a language worth learning, and how can it make day to day coding more productive, regardless of performance?

Let’s get some performance stuff out of the way first…

(The irony of starting the first section of a blog post decisively not about performance by discussing performance is not lost on me)

First, I wanna touch on the topic of performance again really quickly – does it really not matter? Can we seamlessly scale horizontally? Does performance not impact productivity?

Well it certainly does, as remarked by Devon:

In a more general sense, if my runtime is already fast enough I don’t need to bother with more complex algorithms and extra concepts. I can just leave it as is. No extra engineering spent on “making it faster” – just on to the next thing. That’s a good thing. Caching, especially, can be just “wonderful” to debug.

What about the performance of big tasks? Like data processing, or, in the case of the company I’m working for, solving a vehicle routing problem[1]? You can’t just scale those up by throwing servers at them. You might be able to parallelize it, but that’s not too easy in Ruby and in general is often a bigger engineering effort. Some languages make that easier as well, like Elixir’s Flow.
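As a rough sketch of how low the ceremony can be (this assumes the flow hex package as a dependency; the numbers are arbitrary):

# Squares a million numbers, spread across multiple stages/cores
1..1_000_000
|> Flow.from_enumerable()
|> Flow.map(fn i -> i * i end)
|> Enum.sum()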

Horizontal scaling has its limits there. It works fine for serving more web requests or working on more background jobs, but it gets more complicated when you have a big problem to solve that isn’t easily parallelizable, especially if it needs to be done within a given time frame.

Also, I might not be one of the cool Docker + Kubernetes kids, but if you tell me that there’s no overhead to managing 125 servers versus 5 servers, I tend not to believe it. If only because the chance of any one of your servers failing at any given time is much bigger, simply because you have more of them.

Well then, finally enough performance chatter in a post not about performance. Let’s look at the code and how it can make your life easier! I swear I try to keep these sections short and sweet, although admittedly that’s not exactly my strength (who would have guessed by now? 😉 )

Pattern Matching


defmodule Patterns do
  def greet(%{name: name, age: age}) do
    IO.puts "Hi there #{name}, what's up at #{age}?"
  end
  def greet(%{name: "José Valim"}) do
    IO.puts "Hi José, thanks for elixir! <3"
  end
  def greet(%{name: name}) do
    IO.puts "Hi there #{name}"
  end
  def greet(_) do
    IO.puts "Hi"
  end
end
Patterns.greet %{name: "Tobi", age: 27} # Hi there Tobi, what's up at 27?
Patterns.greet %{name: "José Valim"} # Hi José, thanks for elixir! ❤
Patterns.greet %{name: "dear Reader"} # Hi there dear Reader
Patterns.greet ["Mop"] # Hi


Pattern Matching is my single favorite feature. If I could pick a single feature to be adopted in other programming languages it would be pattern matching. I find myself writing pattern matching code in Ruby, then sighing… “Ugh right I can’t do this”. It changed the way I think.

Enough “this is soo great”. With pattern matching you basically make assertions on the structure of the data and can pull values out of a deeply nested map directly into variables. It runs deeper than that though. You also get something like function overloading: Elixir will try to match the function clauses from top to bottom, which means you can have different function definitions based on the structure of your input data.

It’s not just maps you can use it on, though. You can use it on lists as well, so you can have a separate function clause for an empty or a one-element list, which is really great for recursion and catching edge cases.
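A tiny sketch of that (a hypothetical sum/1 with one clause per list shape):

defmodule MyList do
  # Base case: the empty list
  def sum([]), do: 0
  # Recursive case: first element plus the sum of the rest
  def sum([head | tail]), do: head + sum(tail)
end

MyList.sum([])        # 0
MyList.sum([1, 2, 3]) # 6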

One of the most fascinating uses I’ve seen was for parsing files, as you can also use it on binaries and so can separate the data and the different headers of mp3 files, all in just a couple of lines of Elixir:


mp3_byte_size = byte_size(binary) - 128
<< _ :: binary-size(mp3_byte_size), id3_tag :: binary >> = binary
<< "TAG",
   title   :: binary-size(30),
   artist  :: binary-size(30),
   album   :: binary-size(30),
   year    :: binary-size(4),
   comment :: binary-size(30),
   _rest   :: binary >> = id3_tag


Immutable Data Structures and Pure Functions

If you’re unfamiliar with immutable data structures you might wonder how the hell one ever gets anything done. Well, you have to rebind variables to the return values of functions if you want any sort of change. You get pure functions, which means no side effects. The only thing that “happens” is the return value of the function. But how does that help anyone?
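Before answering that, here’s a tiny sketch of what the rebinding looks like in practice:

map = %{count: 1}
Map.put(map, :count, 2) # returns a new map, the original is untouched
map                     # still %{count: 1}

map = Map.put(map, :count, 2) # rebind the variable to the new value
map                           # now %{count: 2}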

Well, it means you have all your dependencies and their effects right there – there is no hidden state on which execution could depend. Everything that the function depends on is a parameter. That makes for superior understandability, a better debugging experience and easier testing.

Just months into my Elixir journey I noticed that I was seemingly much better at debugging library code than I was in Ruby. The reason, I believe, is the above. When I debug something in Ruby, what a method does often depends on one or more instance variables. So, if you wanna understand why that method “misbehaves” you gotta figure out which code sets those instance variables, which might depend on other instance variables being set, and so on… Similarly, a method might have the side effect of changing some instance variable. What is the effect in the end? You might never know.

With pure functions I can see all the dependencies of a function at a glance, I can see what they return and how that new return value is used in further function calls. It reads more like a straight up book and less like an interconnected net where I might not know where to start or stop looking.

The Pipeline Operator


config
|> Benchee.init
|> Benchee.system
|> Benchee.benchmark("job", fn -> magic end)
|> Benchee.measure
|> Benchee.statistics
|> Benchee.Formatters.Console.output
|> Benchee.Formatters.HTML.output


How does a simple operator make it into this list?

Well, it’s about the code that it leads you to. The pipeline operator passes the value of the previous expression into the next function as the first argument. That gives you a couple of guidelines. First, when determining the order of arguments, think about which one is the main data structure and put that one first. Secondly, it leads you to a design with one main data structure per function, which can turn out really nice.

The above is an actual interface of my benchmarking library benchee. One of the design goals was to be “pipable” in Elixir. This led me to a design with a main Suite data structure in which all the important information is stored. As a result, implementing formatters is super easy, as they are just functions that take the suite and pick the information they want to take into account. Moreover, each and every one of those steps is interchangeable and well suited for plugins. As long as you provide the needed data for later processing steps, there is nothing stopping you from replacing a function in that pipe with your own.

Lastly, the pipeline operator represents very well how I once learned to think about functional programming: it’s a transformation of inputs. The pipeline operator perfectly mirrors this – we start with some data structure and through a series of transformations we get some other data structure. We start with a configuration and end up with a complete benchmarking suite. We start with a URL and some parameters, which we transform into some HTML to send to the user.

Railway Oriented Programming


with {:ok, record} <- validate_data(params),
     {:ok, record} <- validate_in_other_system(record),
     {:ok, record} <- Repo.insert(record) do
  {:ok, record}
else
  {:error, changeset} -> {:error, changeset}
end


I’d love to ramble on about Railway Oriented Programming, but there are already good blog posts about that out there. Basically, instead of always checking whether an error has already occurred earlier, we can just branch out to the error track at any point.

It doesn’t seem all that magical until you use it for the first time. I remember suggesting using it to a colleague on a pull request (without ever using it before) and my colleague came back like “It’s amazing”!

It’s been a pattern in the application ever since. It goes a bit like this:

  1. Check the basic validity of data that we have (all fields present/sensible data)
  2. Validate that data with another system (business logic rules in some external service)
  3. Insert record into database

Any one of those steps could fail, and if one fails, executing the remaining steps makes no sense. So, as soon as a function doesn’t return {:ok, something} we branch out to the error track, and otherwise we stay on the happy track.

Explicit Code

The Python folks were right all along.

Implicit code feels like magic. It just works without writing any code. My controller instance variables are just present in the view? The name of my view is automatically inferred, so I don’t have to write anything? Such magic, many wow.

Phoenix, the most popular elixir web framework, takes another approach. You have to specify a template (which is like a Rails view) by name and explicitly pass parameters to it:


def new(conn, _params) do
  changeset = User.new_changeset(%User{})
  render conn, "new.html", changeset: changeset
end

No magic. What happens is right there, you can see it. No accidentally making something accessible to views (or partials) anymore! You know what else is there? The connection and the parameters so we can make use of them, and pattern match on them.
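For instance, a hypothetical action could pattern match the id straight out of the params (the schema and template names are made up):

def show(conn, %{"id" => id}) do
  user = Repo.get!(User, id)
  render conn, "show.html", user: user
end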

Another place where the Elixir ecosystem is more explicit is when loading the associations of a record:


iex(13)> user = Repo.get_by(User, name: "Homer")
iex(14)> user.videos
#Ecto.Association.NotLoaded<association :videos is not loaded>
iex(17)> user = Repo.preload(user, :videos)
iex(18)> user.videos
# all the videos

This is Ecto, the “database access layer” of the Elixir world. As you can see, we have to explicitly preload associations we want to use. Seems awfully bothersome, doesn’t it?

I love it!

No more N+1 queries from Rails magically lazy loading things for me. Also, I get more control. I know which queries my application fires against the database. Just the other day I fixed a severe performance degradation where our app was loading countless records from the database and instantiating them for what should have been a simple count query. Long story short, it was a presenter object, so .association loaded all the objects, put them into presenters and then let my .size be executed on that. I would never have explicitly preloaded that data, and hence would have found out much earlier that something was wrong here.
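In Ecto the count would have been an explicit query from the start – a sketch along these lines (the Video schema and user_id field are made up for illustration):

import Ecto.Query

# Count the videos in the database instead of loading and instantiating them
video_count =
  Repo.one(from v in Video, where: v.user_id == ^user.id, select: count(v.id))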

Speaking of explicitness and ecto…

Ecto Changesets


def new_changeset(model, params \\ %{}) do
  model
  |> cast(params, ~w(name username))
  |> validate_required(~w(name username))
  |> unique_constraint(:username)
  |> validate_length(:username, min: 1, max: 20)
end

def registration_changeset(model, params) do
  model
  |> new_changeset(params)
  |> cast(params, ~w(password))
  |> validate_required(~w(password))
  |> validate_length(:password, min: 6, max: 100)
  |> put_pass_hash()
end


Callbacks and validations are my nemesis.

The problem with them is a topic for another post entirely, but in short: validations and callbacks are executed all the time (before save, validation, create, whatever), yet lots of them are only added for one feature that is maybe used in 2 places. Think about the user password. Validating it and hashing/salting it is only ever relevant when a user registers or changes the password. But that code sits there and is executed in an order that is not exactly trivial to determine. Sometimes that gets in the way of new features or tests, so you start to throw a bunch of ifs at it.

Ecto changesets were one of the strangest things for me to get used to when coming to Elixir. Basically, they just encapsulate a change operation, saying which parameters may take part, what should be validated and what other code to execute. You can also easily combine changesets; in the code above, the registration_changeset uses the new_changeset and just adds password functionality on top.

I only deal with password stuff when I explicitly want to. I know which parameters were allowed to go in/change, so I just need to validate those. I know exactly when which step happens, so it’s easy to debug and understand.

Beautiful.

Optional Type-Checking


@type job_name :: String.t | atom
@spec benchmark(Suite.t, job_name, fun, module) :: Suite.t
def benchmark(suite = %Suite{scenarios: scenarios}, job_name, function, printer \\ Printer) do
  # …
end


Want to try typing, but not all the time? Elixir has got something for you! It’s cool, but there’s not enough space here to explain it. dialyxir makes the dialyzer tool quite usable, and it also goes beyond “just” type checking and includes other static analysis features as well. Still, in case you don’t like types, it’s optional.

Parallelism

“Wait, you swore that was it about performance! This is outrageous!”

Relax. While parallelism can be used for better performance, that’s not all it’s good for. What’s great about parallelism in Elixir/the Erlang VM in general is how low cost and seamless it is. Spawning a new process (not an operating system process – they are more like actors) is super easy and has very low overhead, unlike starting a new thread. You can have millions of them on one machine, no problem.

Moreover, thanks to the immutability guarantees and every process being isolated, you don’t have to worry about processes messing with each other. So, if I want to geocode the pick up and drop off addresses in parallel, I can just do that with a bit of Task.async and Task.await. I’d never just trust whatever Ruby gems I use for geocoding to be thread safe, thanks to global state et al.
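A minimal sketch of that (geocode/1 stands in for whatever geocoding function you use):

pickup_task  = Task.async(fn -> geocode(pickup_address) end)
dropoff_task = Task.async(fn -> geocode(dropoff_address) end)

pickup_coordinates  = Task.await(pickup_task)
dropoff_coordinates = Task.await(dropoff_task)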

How does this help? Well, I have something that is easily parallelizable and I can just do that. Benchee generates statistics for different scenarios in parallel just because I can easily do so. That’s nice, because for lots of samples it might actually take a couple of seconds per scenario.

Another point is that there’s less need for background workers. Let’s take web sockets as an example. To the best of my knowledge, in an evented architecture it is recommended to offload all bigger tasks in the communication to background workers, as otherwise we’d block our thread/operating system process from handling other events. In Phoenix every connection already runs in its own Elixir process, which means they are already executed in parallel and doing some more work in one won’t block the others.

This ultimately makes applications simpler, as you don’t have to deal with background workers, offloading work etc.

OTP

Defining what OTP really is, is hard. It’s somewhat of a set of tools for concurrent programming, and it includes everything from an in-memory database to the dialyzer tool mentioned earlier. It is probably most notorious for its “behaviours”, like supervisors, that help you build concurrent and distributed systems in the famous “Let it crash! (and restart it in a known good state, maybe)” philosophy.

There are big books written about this, so I’m not gonna try to explain it. Just this much: years of experience with highly available systems are in here. There is so much to learn here that can change how you see programming. It might be daunting, but don’t worry. People have built some nice abstractions that are easier to use, and often it’s the job of a library or a framework to set these up. Phoenix and Ecto do this for you (for web requests/database connections respectively). I’ll out myself right now: I’ve never written a Supervisor or a GenServer for production purposes. I used abstractions or relied on what my framework brought with it.

If this has gotten you interested, I warmly recommend “The Little Elixir & OTP Guidebook”. It walks you through building a complete worker pool application, from a simple version to a more complex, fully featured one.

Doctests


@doc """
## Examples
iex> {:ok, pid} = Agent.start_link(fn -> 42 end)
iex> Agent.get_and_update(pid, fn(state) -> {state, state + 1} end)
42
iex> Agent.get(pid, fn(state) -> state end)
43
"""


Imo the most underrated feature of Elixir. Doctests allow you to write iex example sessions in the documentation of a function. These will be executed during test runs and checked to make sure they still return the same values/still pass. They are also part of the awesome generated documentation. No more out of date/slightly wrong code samples in the documentation!

I have entire modules that rely only on their doctests for testing, which is pretty awesome if you ask me. Also, contributing doctests to libraries is a pretty great way to provide both documentation and tests. E.g. once upon a time I wanted to learn about the Agent module, but it didn’t click right away, so I made a PR to Elixir with some nice doctests to help future generations.
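For completeness: doctests are run by pointing a regular ExUnit test module at the documented module, roughly like this (the test module name is just for illustration):

defmodule AgentTest do
  use ExUnit.Case, async: true

  # Runs every iex> example from Agent's @doc strings as a test
  doctest Agent
end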

A good language to learn

In the end, Elixir is a good language to learn. It contains many great concepts that can make your coding life easier and more enjoyable, and all of that lives in a nice and accessible syntax. Even if you can’t use it at work straight away, learning these concepts will influence and improve your code. I experienced the same when I learned and read a couple of books about Clojure; I never wrote it professionally, but it improved my Ruby code.

You might ask “Should we all go and write Elixir now?”. No. There are many great languages and there are no silver bullets. The ecosystem is still growing, for instance. I’m also not a fan of rewriting all applications. Start small. See if you like it and if it works for you.

Lastly, if this has piqued your interest I have a whole talk up that focuses on the great explicit features of Elixir and explains them in more detail: “Elixir & Phoenix – fast, concurrent and explicit”

edit1: Clarified that the bleacher report blog post is mostly about performance with little else

edit2: Fixed that you gotta specify the template by name, not the view

[1] It’s sort of like Traveling Salesman, put together with Knapsack and then you also need to decide what goes where. In short: a very hard problem to solve.

Slides: Optimizing For Readability (Codemotion Berlin 2015)

Yesterday I gave a talk at Codemotion Berlin: “Optimizing For Readability” – an updated version of my “Code is read many more times than written” talk. It features new insights and new organizational practices to keep the code base clean and nice. Abstract:

What do software engineers do all day long? Write code? Of course! But what about reading code, about understanding what’s happening? Aren’t we doing that even more? I believe we do. Because of that code should be as readable as possible! But what does that even mean? How do we achieve readable code? This talk will introduce you to coding principles and techniques that will help you write more readable code, be more productive and have more fun!

(pictures by Raluca Badoi)

And here you can see the slides, sadly there is no video 😦 Slides are CC BY-NC-SA.

Hope you like the code, please leave some feedback. I’d especially love suggestions for a better talk title 🙂

Open Source isn’t just about code – other ways in which you can contribute!

Talking to developers and reading about open source I often get the feeling that the general notion is that open source is just about code and commits. Put another way, “If you don’t make commits for a project you are not contributing to it”. Or so they say. That notion is far from the truth in my eyes. Let me tell you why.

Sure, code is what ultimately ships and has a direct impact on the users of an open source project, so yes, commits and code are important. But they’re by no means the only way you can contribute to a project. Projects are mostly a whole ecosystem, which is about more than just code. Here are a couple of other ways you can contribute to a project.

Report issues

If maintainers don’t know about issues, they cannot fix them. Therefore, it is crucial that you report issues you encounter and don’t just abandon the project or only build a workaround. Most projects are happy to receive issue reports. Don’t take reporting issues lightly either – often a substantial amount of time goes into writing a good issue report. Ideally an issue report contains code to reproduce the issue, information about the expected outcome and the actual outcome, system information, version information and maybe a stack trace or similar artifacts. I also like to include a little note of appreciation for the maintainers, but that’s optional. Keep in mind that issues don’t have to be about bugs – they may also be about possible improvements or desired features. GitHub even acknowledges the importance of issues by giving you contribution points for opened issues – yay!

Write Documentation

Documentation is extremely important but often lacking, as a lot of people really don’t enjoy writing it. Improving it is a great way to help a project out, by making it easier for other people to get into. Also, if you found it hard to get into a project, try improving the documentation so the next person will have it easier than you did. I actually have commits on Ruby – they are all documentation commits 🙂

Improve the website

Many open source projects have their own websites. Sometimes the information is outdated, and sometimes it’s just plain ugly. I remember the old Shoes website – it was really ugly and looked dead (at least those were my thoughts when I first saw it). But look at it now! It looks nice and presentable. And most of that is thanks to wpp – he has never pushed a commit to Shoes (that I am aware of), but this certainly was a great contribution to Shoes.

Offer to help with art/design

A lot of projects would love to have their logo updated, or to get some illustrations for their website or similar things. So if design or illustration is your thing, maybe go to your favorite project and ask them if they want some help with that? I know I’d be more than happy about that!

Trying out preview versions

Developers need feedback on whether their software works. Therefore, alpha, preview or release candidate releases are often made. Go grab one of those and try it out. If everything works – great, you just made sure it works on your system! If you find a bug – report it! That’s a great help for the project.

Weigh in on Discussions

Sometimes there are discussions about API changes or ways an implementation could be improved (among other things). Comments are very welcome there; maintainers want the input of their users. I once spent a whole day discussing some architectural issues I had identified in a project. It was fun. Other work might be setting up a road map – Eric Watson did that for Shoes 4 one day. He’s a great coder, but that road map helped the project more than any code he could have written in a similar time frame. It went on to be a very helpful guide and reference point.

Answer Questions

Questions about a project pop up all over the place, be it on Stack Overflow or just the project’s issue tracker. By answering them you help other people have a better experience with the project overall. Also don’t forget that a question might hint at a problem with the project. Maybe the documentation for this part could be improved, or there is a common task that might be automated or deserves a better API? Maybe you could jump in to do this?

Give a presentation about a project

There are many great projects out there, but developers can only adopt them if they know about them! If you really like a project, consider giving a talk about it at a local user group or handing in a talk for a conference. This way the adoption of the project may increase, bringing more people to the project and making it a better and more stable product overall – benefiting everyone.

Closing

If you have already done any of the above: thank you! You contributed to open source. Keep doing that if you like it, and if you haven’t, give it a shot. If you want to get started contributing to open source, this post of mine might come in handy. Personally, contributing to open source has been an amazing journey for me so far. I enjoy it very much and have made quite a few friends that way :).