
Why is parallel programming so hard to express?


by Johannes Frey, Hannes Buchwald, Stephan Soller, Walter Kriha, Benjamin Binder

While trying to understand Hoare's famous paper Communicating Sequential Processes, we stumbled upon some interesting problems and concepts related to parallel programming. In this post we want to share some of the insights we gained while discussing the matter.

Parallelism is difficult

Humans are not really made for multitasking. We can do more than one task at the same time if the different tasks serve the same goal and can be performed without paying much attention to them, driving a car being one example. But it gets harder if the tasks are similar to each other and each demands the same amount of attention, like calculating different sums.

Take for example this simple calculation:

(2+6) × (7-3)

One person can only focus on calculating one of the two sub-expressions at a time, so real parallelism isn't a natural way of thinking for us humans. Two people, however, can each calculate one sub-expression and then multiply their results. Real parallelism only works with more than one person.

If we want to implement this example we can copy the principle of having two people: each sub-expression is calculated in a different thread, and after both threads have finished we multiply the results.
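
A minimal sketch of this two-worker idea in Python (the compute helper and the names below are ours, not part of the original example):

import threading

results = {}

def compute(name, task):
    # each worker calculates one sub-expression and stores its result
    results[name] = task()

t1 = threading.Thread(target=compute, args=("sum", lambda: 2 + 6))
t2 = threading.Thread(target=compute, args=("difference", lambda: 7 - 3))
t1.start()
t2.start()
t1.join()   # wait for both workers to finish ...
t2.join()
print(results["sum"] * results["difference"])   # ... then multiply: 32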

In this simple example it's not a problem to understand what is happening. But in a more complex scenario it gets harder to understand who interacts with whom at what time.

As a quick reminder: concurrency is about dealing with several tasks at the same time, while parallelism is about actually executing multiple tasks at the same time. A program built in a concurrent way can execute in parallel but doesn't have to. It will run just fine sequentially on a single CPU core, too.

Expressing concurrency

In a concurrent program multiple things can happen at the same time. This is easy to visualize as a few boxes connected by arrows:

+-----+
| GUI |-------+-------------+
+-----+       ↓             ↓
   ↑     +----------+ +----------+
   |     |   read   | | download |
   |     | metadata | | content  |
   |     +----------+ +----------+
   |          ↓             |
   |     +-----------+      |
   +-----|  create   |←-----+
         | thumbnail |
         +-----------+

But when written as text it’s more difficult to understand the flow of events and the dependencies between the different processes:

gui_thread:
    on click:
        disk_thread.send read_file, ...
        net_thread.send download_file, ...
    on work_done:
        # show result to user

disk_thread:
    on read_file:
        # read requested file from disk
        compute_thread.send process_loaded_file, ...

net_thread:
    on download_file:
        # download requested file from server
        compute_thread.send process_downloaded_file, ...

compute_thread:
    on process_loaded_file and process_downloaded_file:
        # do time consuming processing of files
        gui_thread.send work_done, ...

This stems from the basic structure of text and, by extension, programming languages: they are structured one line after another, one statement after the other. Text and programming languages express everything as a single linear stream. This is a restriction of the text medium itself.

Text as a medium is extremely useful in programming. From a young age we're trained to read, write and express our thoughts in language. Programming languages tap into that training and let us express complex thoughts in a compact way.

But to represent the parallel structure of a program, which is basically a graph, text might not be the best choice. Maybe a simple “box and arrows” visualization serves this purpose better. There are already some examples of visual programming, like Blueprints in UE4 or LabVIEW. This is something traditional editors and IDEs might want to pick up, albeit in a coarser way, e.g. one box per thread.

Concepts like futures and event-loops are approaches to coping with the same problem: mapping a graph into linear text. In some situations they work well, in others they don't. But more on that later.

Helping the programmer with concepts

The two concepts we will take a closer look at are event-loops and futures. They can be found in programming languages like Clojure or in frameworks like Node.js. Even though both concepts share the same ground (concurrent programming), they differ vastly in their approach, their style and the restrictions they enforce.

In essence both approaches allow us to break up the rather atomic chain of definition, execution and immediate result.

Futures

Futures allow us to fire up a specific task on another thread while the main program keeps running. If, at some point in the future ;-), we are interested in the actual value, we have to explicitly ask the future reference for it. As long as the value is “ready” when we ask, we are good. Using futures blends perfectly with a linear programming style. Because you don't have to deal with callbacks, the program stays very clean and readable.
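
A rough sketch of this pattern, here using Python's concurrent.futures rather than Clojure's futures (slow_square is a made-up placeholder task):

from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # stands in for any time-consuming calculation
    return x * x

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_square, 7)   # fire up the task on another thread
    # ... the main program keeps running here ...
    print(future.result())                 # explicitly ask the future for its value: 49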

Event-Loops

The event-loop approach also gives away control, but with the help of callbacks. At its center is a single-threaded loop that runs “forever”: the event loop.

If an event someone is interested in is fired, a message is pushed into a queue together with the callback associated with that event. Whenever there is computing time left (i.e. the call stack is empty), messages are pulled from the queue and processed. The processing of a message is never interrupted, so there is no pre-emption and you can always be sure your callback runs to completion.
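
A toy sketch of such a loop using Python's asyncio (the callbacks on_click and on_done are invented for illustration):

import asyncio

def on_click():
    # a callback pulled from the queue; it runs to completion, uninterrupted
    print("button clicked")
    loop.call_soon(on_done)     # push the next callback into the queue

def on_done():
    print("work done")
    loop.stop()

loop = asyncio.new_event_loop()
loop.call_soon(on_click)        # enqueue the first event
loop.run_forever()              # the single-threaded loop processes one callback at a time
loop.close()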

Blocking and non-blocking calls

In the realm of CPUs, a single-core processor deals with concurrency by interleaving tasks with the help of the scheduler. By doing so, the CPU gives each program the illusion of having the processor to itself, which makes multitasking possible. A multi-core processor, however, deals with concurrency by actually executing tasks on multiple cores, thereby yielding “true” parallelism.

So why do we have to wrap our head around this low-level concept?

Because we have to deal with either blocking or non-blocking operations. Ideally we want our two concepts to handle this problem for us. However, this doesn't always work, and we still have to keep a few things in mind.

Futures and blocking function calls

When dereferencing the result of a future, we can run into a problem. As long as the value is “ready” when we ask for it, we are good. But if the value still has to be calculated, the dereference blocks the caller's execution.
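
In Python terms, the dereference is the result() call; the sleep below merely simulates a value that isn't ready yet (slow_task is a made-up example):

from concurrent.futures import ThreadPoolExecutor
import time

def slow_task():
    time.sleep(2)               # the value is not ready for two seconds
    return 42

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_task)
    value = future.result()     # blocks the caller until the worker is done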

Event-loops and blocking function calls

Because programs using an event-loop are highly decoupled, the impact of using blocking function calls doesn’t seem that big from the perspective of the program.

The event-loop itself, however, is highly dependent on non-blocking function calls. Because each callback is guaranteed to run to completion, it can also use as much computing time as it wants. This makes starvation of other callbacks a huge problem: if one call takes a very long time, the entire event loop is blocked. Therefore it is crucial that callbacks processed by an event-loop only use non-blocking function calls.
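
In asyncio terms the difference looks roughly like this (the handler names are ours):

import asyncio
import time

async def bad_handler():
    time.sleep(2)               # blocking call: freezes the whole event loop for two seconds
    return "done"

async def good_handler():
    await asyncio.sleep(2)      # non-blocking: the loop can process other callbacks meanwhile
    return "done"

asyncio.run(good_handler())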

You have to choose

So essentially the main differences between these two approaches are the composition of the program and the way the outcome of the concurrent calls is retrieved.

Futures give the programmer explicit control by allowing her/him to spawn “independent workers” which can be logically composed in a graph-like manner.

Event-loops, on the other hand, invert the control by following an event-driven approach. The programmer does not explicitly request a result at a given time but waits for it through asynchronous events which call back to him/her.

If you have to choose between the two, take a closer look at what you want to accomplish. If you are only using non-blocking calls, you can choose freely between them. If you are planning on using blocking calls, however, you should stick with futures.

