Aug 21, 2012
 

The age of the single-CPU computer is drawing to a close.  Symmetric multiprocessing used to be found only in high-performance servers or scientific computing, but these days it’s in tablets and phones.  Hardware designers have been making multi-core parts for years now to keep up with the expected pace of performance improvement.  It would be surprising if this trend did anything but continue.

If you’re a programmer and you’re not already writing concurrent software, you should start.  100+ cores could be common ten years from now (or possibly even sooner).  At that point, single-threaded software might as well come with a warning label that reads “runs at 1% speed.”  This should come as no surprise, as people have been pointing this out for years.

Despite its benefits, and the inevitability that it will become increasingly necessary, I hear a lot of concern about making use of shared-memory multi-threading (the most common form of concurrent programming).  Much of it comes from colleagues’ comments and vague on-line discussions.  However, there are also plenty of well-reasoned arguments against using threads from smart people who have clearly spent a lot of time thinking about this.  A few examples:

As someone who has been writing and maintaining multi-threaded code for over 10 years, I find this sort of position puzzling.  Multi-threaded programming isn’t trivial, but neither are plenty of other issues that programmers routinely have to deal with.

What are some common problems people are concerned about which can be caused by bugs in a multi-threaded program?

  • Data corruption
  • Non-deterministic behavior
  • Deadlock

Let’s compare that to some common problems encountered due to bugs in programs using pointers and dynamic memory management:

  • Data corruption
  • Non-deterministic behavior
  • Segfault crashes / null pointer exceptions
  • Memory leaks

I’m not convinced that threading is inherently more problematic.  For the most part, both categories of problems can be avoided through a combination of following simple rules and diligent attention to detail.  (And aren’t those the foundations of all good engineering?)  Just as there are free and commercial tools to help you check for memory usage errors, there are also free and commercial tools to check for threading problems.

I believe multi-threaded programming is simply a set of skills that you need to learn.  You have to learn a new set of tools (threads, locks, etc.).  You need to understand the pitfalls and methods for avoiding them.  You need to know how to think about the behavior of your program in a different way.  It can be daunting until you become accustomed to it.  So can using a functional language if you’re used to global variables and other kinds of shared state (e.g. moving from Python to Haskell).  So can explicit memory management if you’re used to garbage collection (e.g. moving from Java to C++).
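
For concreteness, here is a minimal sketch of those basic tools in C++11: a few threads updating a shared counter, with a std::mutex making the access safe.  The names and counts are purely illustrative, not taken from any particular codebase.

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    int main() {
        long counter = 0;          // shared mutable state
        std::mutex counter_mutex;  // protects counter

        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i) {
            workers.emplace_back([&] {
                for (int j = 0; j < 100000; ++j) {
                    std::lock_guard<std::mutex> lock(counter_mutex);
                    ++counter;  // without the lock this would be a data race
                }
            });
        }
        for (auto& t : workers) t.join();

        std::cout << counter << "\n";  // always 400000 when the lock is held
        return 0;
    }

Remove the lock_guard and the program still compiles and usually runs, but the final count becomes unpredictable; spotting that kind of pitfall is exactly the skill to be learned.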

There are some good reasons to be concerned, but they have more to do with insufficient understanding than with the nature of shared-memory multi-threading.

It can be difficult to get a good education in how to program with threads.  Even today, concurrent programming remains somewhat esoteric.  However I believe that will change before long.  As the typical number of CPUs increases, concurrency is changing from a beneficial option to a necessity.  That will increase demand for understanding of concurrent programming, and change concurrency from an ancillary topic to part of the core knowledge every programmer needs.  That should lead to a greater supply of quality educational resources, as well as a larger community of programmers comfortable with concurrent programming.

It is also more difficult to incorporate multi-threading into existing software than it is to write software from the ground up using threads.  You need to have a clear understanding of when shared mutable state is accessed by multiple threads and take steps to make this safe (e.g. protecting all such accesses with locks, giving each thread its own copy, etc.).  This of course includes any calls to libraries which aren’t thread-safe.  Unless you have a detailed understanding of all the code in the software you intend to make multi-threaded, it’s easy for bugs to slip through the cracks.  Diligent programmers can succeed at this, but it takes careful attention to detail.  There are some clever tricks which can help, such as transparently replacing non-thread-safe POSIX library calls (described here), but those will only get you so far.  Anyone attempting such a transition for an existing piece of software should plan for thorough code audits, as well as re-writing a non-negligible portion of the code.
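
As a rough sketch (the names g_config and format_buffer are made up for illustration), these are the two retrofitting strategies mentioned above: guarding legacy shared state with a lock, and giving each thread its own copy of state that never needed to be shared in the first place.

    #include <map>
    #include <mutex>
    #include <string>

    // Strategy 1: protect legacy shared mutable state with a lock.
    std::map<std::string, std::string> g_config;  // formerly accessed freely
    std::mutex g_config_mutex;                    // now guards every access

    std::string get_config(const std::string& key) {
        std::lock_guard<std::mutex> lock(g_config_mutex);
        auto it = g_config.find(key);
        return it == g_config.end() ? std::string() : it->second;
    }

    // Strategy 2: give each thread its own copy of per-call scratch state.
    thread_local std::string format_buffer;

    const std::string& format_id(long id) {
        format_buffer = "obj-" + std::to_string(id);  // safe: each thread has its own buffer
        return format_buffer;
    }

The hard part in existing software is not writing code like this; it is finding every place that touches g_config (or its equivalent) in the first place.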

There are of course alternatives to shared memory multi-threading, and they are often touted as “solutions” to the “problems” of multi-threading.  There’s software transactional memory built into languages such as Haskell and Clojure.  (There has even been a credible attempt to add it to C++.)  There’s also the message-passing actor model which is implemented by Erlang as well as several C++ libraries (e.g. Theron, libcppa).  I’m wary of people offering silver bullets, and I think there’s good reason to be skeptical here.  Without going into the gory details, there are some credible arguments that the typical “retry on conflicting access” method of implementing STM performs poorly when contention for shared data is frequent (which will tend to happen as a system scales up).  Message passing systems typically copy data sent between actors, which can also be a performance issue. These approaches may have their uses (and some implementations show more promise than others).  However they are not without their own problems, and they’re not a concurrency panacea.
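
For illustration only, here is a toy message-passing channel in C++11. It is not Theron or libcppa, just enough to show where the copying cost comes from: each message is copied or moved into the receiver’s queue rather than shared in place.

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    class Mailbox {
    public:
        void send(std::string msg) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(msg));  // message copied/moved into the queue, not shared
            }
            cv_.notify_one();
        }
        std::string receive() {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !queue_.empty(); });
            std::string msg = std::move(queue_.front());
            queue_.pop();
            return msg;
        }
    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<std::string> queue_;
    };

    int main() {
        Mailbox box;
        std::thread consumer([&] { std::cout << box.receive() << "\n"; });
        box.send("hello from another thread");
        consumer.join();
        return 0;
    }

When the messages are small this is cheap; when each message carries a large chunk of the data set, the copies (or the serialization, in a distributed setting) start to matter.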

The bottom line for me is that I can’t believe that shared-memory multi-threading is inherently incomprehensible or otherwise too difficult to use.  I’ve written quality multi-threaded code, and I’ve seen others do it.  The necessary skills may still be uncommon, but I don’t see how they can stay that way in the future.

Of course maybe I’m wrong and most programmers will never be able to handle multi-threaded programming.  It could be even worse than that, as some people believe concurrent programming will never see widespread use in any form.  If programmers can’t muster the ability to make use of the parallel processing capacity in future machines, our personal computers may seem to stop getting faster.

  3 Responses to “Don’t Fear Multithreading”

  1. Or, even easier, embrace Scala’s/Akka’s (for Java and Scala 2.10) Actor and supporting concurrent libraries and (mostly) don’t worry about low-level threads anymore. 🙂

    • Akka is clearly a complex toolkit, and I can’t comment on it in any detail. It does merit a closer look (as does Scala, as I said in another response).

      In general my concerns with the actor / message-passing / “shared nothing” paradigm have to do with performance and scalability. For the kind of software I’m used to working with, I have a hard time seeing how to implement it in the actor model in a way that doesn’t result in performance or scalability problems from at least one of: certain actors becoming a bottleneck, the size of messages sent between actors becoming a bottleneck, or an extremely large number of actors.

      Suppose we have a large data set made up of discrete objects with relationships between them. There are concurrent incoming requests from clients which either read from this object data, modify existing objects, or create or destroy objects. Read requests dominate over modify/create/delete requests. Assume you have millions to billions of objects.

      How do you handle this in the actor model? One actor per object seems like the “right” approach, but would clearly be a scalability problem both at start-up and during run-time. You could distribute objects across a smaller set of actors, but you need to be careful about creating hot spots. You could have some method of passing ownership of the objects between actors to re-balance them, but then you create potentially expensive messages within the system and you complicate the process of looking up which actor to contact for an operation on a given object. You could use an external method for managing shared state (an external database, Erlang’s term storage, etc.) but then you’re basically breaking out of the actor model to solve the problem.

      To implement this using multi-threading, I would use a hierarchy of read/write locks that allow multiple readers or a single writer. This would allow an arbitrary number of request-servicing threads to proceed in parallel, only stalling when they contend for access to particular objects (e.g. concurrent read and modify requests for the same object).
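
      As a rough sketch of that idea (a single C++17 std::shared_mutex shown for brevity, where the real design would use a hierarchy of them, and with made-up names):

          #include <shared_mutex>
          #include <string>
          #include <unordered_map>

          class ObjectStore {
          public:
              std::string read(long id) const {
                  // shared lock: any number of readers proceed in parallel
                  std::shared_lock<std::shared_mutex> lock(mutex_);
                  auto it = objects_.find(id);
                  return it == objects_.end() ? std::string() : it->second;
              }
              void modify(long id, std::string value) {
                  // exclusive lock: one writer at a time; create/destroy would
                  // take this lock as well
                  std::unique_lock<std::shared_mutex> lock(mutex_);
                  objects_[id] = std::move(value);
              }
          private:
              mutable std::shared_mutex mutex_;
              std::unordered_map<long, std::string> objects_;
          };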

      Maybe there is an elegant way to solve this with actors that I’m not seeing. (Maybe I lack sufficient understanding of how to think in the actor paradigm.) I certainly agree that actors can be a good solution in plenty of cases. However I’m not convinced that actors are always the right choice for concurrency.

  2. Based on Erlang.