The Programming Principle

Douglas C. Schmidt
C++ Report
October 1996

I celebrated my 34th birthday recently. I thought 34 would be another one of those mundane birthdays between 30 and retirement. But this year was different -- my parents gave me a copy of Scott Adams's "The Dilbert Principle" as a gift. Unless you've spent the '90s programming in Lower Elbonia, I'm sure you know Dilbert as the poster child for all the foibles we experience in today's techno-eccentric workplace.

One of my favorite Dilbert comic strips in the book is one where Dilbert's boss (the guy with the satanic hair) says to Dilbert: "I saw the code for your computer program yesterday. It looked easy. It's just a bunch of typing, and half the words were spelled wrong. And don't get me started about your over-use of colons." His "insights" reminded me of all the times I've listened to pompous theorists and managers (who've never written a line of production software in their lives) dismiss the accomplishments and contributions of programmers.

This "fond" memory started me thinking about what makes programming hard. There are many culprits, ranging from the chronic lack of stable requirements, to the buggy tools and components we layer our software upon, to the subtle and pernicious tradeoffs between forces like performance and correctness. This editor's corner focuses on the latter theme, using a recent experience to illustrate some of the foibles in my techno-eccentric workplace.

When I'm not teaching courses, advising students, or proof-reading submissions to the C++ Report, I develop concurrent OO communication software frameworks and applications in C++ for my research. One of the fundamental components in a concurrent network programmer's toolkit is a thread-safe message queue.

Thread-safe message queues are commonly used in concurrent programs to exchange information (e.g., work requests or data buffers) between suppliers and consumers. For instance, Steve Vinoski and I presented a thread-safe message queue implementation in our April '96 Object Interconnections column. We used this queue to exchange messages between a supplier thread and a pool of consumer threads in a stock quoter server. In our example, the supplier reads requests from network clients and inserts them into the message queue. The consumer threads block on the queue in "hungry puppy" fashion, waiting to process request messages from the supplier.
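To make that structure concrete, here's a minimal, self-contained sketch of such a consumer pool in standard C++ (the Simple_Queue and Request types, and the consumer function, are illustrative stand-ins, not the actual code from that column):

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Illustrative stand-in for a request message, with a sentinel
// flag that tells a consumer thread to shut down.
struct Request { bool shutdown; int id; };

// Minimal thread-safe queue so the consumer loop below is runnable.
class Simple_Queue {
public:
  void insert (Request r) {
    std::lock_guard<std::mutex> guard (mutex_);
    queue_.push (r);
    not_empty_cond_.notify_one ();
  }
  Request remove () {
    std::unique_lock<std::mutex> guard (mutex_);
    while (queue_.empty ())
      not_empty_cond_.wait (guard);
    Request r = queue_.front ();
    queue_.pop ();
    return r;
  }
private:
  std::mutex mutex_;
  std::condition_variable not_empty_cond_;
  std::queue<Request> queue_;
};

// A "hungry puppy" consumer: block on the shared queue, process
// whatever request this thread wins, and exit on a shutdown sentinel.
void consumer (Simple_Queue &queue, std::atomic<int> &processed) {
  for (;;) {
    Request r = queue.remove ();
    if (r.shutdown)
      break;
    ++processed; // Stand-in for real request processing.
  }
}
```

Note that insert() here signals unconditionally on every insertion; as we'll see below, that unconditional signal is exactly what tempts you to "optimize."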

The trick to making the message queue work for multi-threaded programs is to use synchronization mechanisms like mutual exclusion (mutex) locks and condition variables. Using these mechanisms, the basic logic for safely inserting a message into a queue that can be accessed concurrently by multiple threads looks like this:

void Message_Queue::insert (Message *message)
{
  // Make sure we own the lock before
  // entering the critical section.
  this->mutex_.lock ();

  // Wait until there's room in the queue.
  while (/* queue is full */)
    this->not_full_cond_.wait (this->mutex_);

  // Insert message at end of queue (omitted).

  // Inform one waiting consumer that there's a message.
  this->not_empty_cond_.signal ();

  // Let another thread acquire the lock.
  this->mutex_.unlock ();
}

Likewise, the basic logic for safely removing a message from a queue that can be accessed concurrently by multiple threads looks like this:

Message *
Message_Queue::remove (void)
{
  this->mutex_.lock ();

  // Wait until there's a message in the queue.
  while (/* queue is empty */)
    this->not_empty_cond_.wait (this->mutex_);

  // Remove message from front of queue (omitted).

  // Inform one waiting supplier that there's room.
  this->not_full_cond_.signal ();
  this->mutex_.unlock ();
}

Although this code works correctly, it can perform poorly if the underlying threads implementation incurs a context switch on every this->not_{full,empty}_cond_.signal() operation. Since context switches are one of the major performance costs in concurrent programs, it's worthwhile to minimize them.

If you think about how to optimize the message queue for a while, you might arrive at the conclusion that you can omit the not_empty_cond_.signal() if the message wasn't inserted into an empty queue. In this case, the consumer wouldn't have been blocked in the not_empty_cond_.wait(), so there's no point in signaling the consumer. Likewise, it might appear that there's no point in signaling the supplier if it didn't remove a message from a full queue. In this case, the supplier wouldn't have been blocked in the not_full_cond_.wait() either.

Armed with this insight, you might be tempted to rewrite the code as follows:

void Message_Queue::insert (Message *message)
{
  // ...
  // Only inform one waiting consumer that there's
  // a message if the queue was previously empty.
  if (queue_was_empty)
    this->not_empty_cond_.signal ();
  // ...
}

Message *Message_Queue::remove (void)
{
  // ...
  // Only inform one waiting supplier that there's room
  // in the queue if the queue was previously full.
  if (queue_was_full)
    this->not_full_cond_.signal ();
  // ...
}

Unfortunately, if you did this, you'd have just fallen prey to Knuth's Law: "Premature optimization is the root of all evil" [Knuth]! The reason, of course, is that this particular optimization works correctly only if there's a single supplier and a single consumer. If there are multiple consumers, however, it's very easy to end up in a situation where all consumers block indefinitely.

For instance, consider the case where 5 consumers in our stock quoter server's thread pool are blocked in calls to not_empty_cond_.wait(), waiting for requests to show up from the clients. If the supplier thread then inserts 5 "shutdown" messages into the queue in rapid succession all of them may be enqueued before any of the consumer threads wake up. However, in the "optimized" implementation above, the supplier only invokes the not_empty_cond_.signal() method once, i.e., when the first shutdown message is inserted into the previously empty queue. Hence, if the one awakened consumer thread terminates immediately after receiving the shutdown message, all the other threads will remain blocked indefinitely.

It turns out that the "Right Way[TM]" to optimize this code is to keep track of the number of waiting consumers and to only invoke signal() if the number is greater than 0. Moreover, if the condition variable implementation does this for you already, you can achieve this optimization without having to change the original code!
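To sketch what that looks like (using standard C++ std::mutex and std::condition_variable rather than the wrapper classes above; the waiting_consumers_ and waiting_suppliers_ members are illustrative names, not ACE's actual implementation):

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Bounded thread-safe queue that tracks how many threads are blocked
// on each condition variable and signals only when someone is
// actually waiting. Names are illustrative, not ACE's actual code.
class Message_Queue {
public:
  explicit Message_Queue (std::size_t max_size) : max_size_ (max_size) {}

  void insert (int message) {
    std::unique_lock<std::mutex> guard (mutex_);
    // Wait until there's room in the queue, counting ourselves
    // as a waiter only while we're actually blocked.
    while (queue_.size () >= max_size_) {
      ++waiting_suppliers_;
      not_full_cond_.wait (guard);
      --waiting_suppliers_;
    }
    queue_.push_back (message);
    // Signal only if a consumer is actually blocked in remove().
    if (waiting_consumers_ > 0)
      not_empty_cond_.notify_one ();
  }

  int remove () {
    std::unique_lock<std::mutex> guard (mutex_);
    while (queue_.empty ()) {
      ++waiting_consumers_;
      not_empty_cond_.wait (guard);
      --waiting_consumers_;
    }
    int message = queue_.front ();
    queue_.pop_front ();
    // Signal only if a supplier is actually blocked in insert().
    if (waiting_suppliers_ > 0)
      not_full_cond_.notify_one ();
    return message;
  }

private:
  std::mutex mutex_;
  std::condition_variable not_empty_cond_, not_full_cond_;
  std::deque<int> queue_;
  std::size_t max_size_;
  std::size_t waiting_consumers_ = 0;
  std::size_t waiting_suppliers_ = 0;
};
```

Unlike the queue_was_empty version, this works for any number of suppliers and consumers: with five consumers blocked, waiting_consumers_ is 5, so each of five rapidly inserted shutdown messages triggers its own signal.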

So what's the moral of this story? First, I think it illustrates one reason why programming is so hard: we're constantly trying to balance various forces that lure us toward either the Scylla of performance or the Charybdis of correctness (or any of the myriad other software quality factors that conflict with performance). As usual, the best defense is a deep knowledge of good concurrency patterns and techniques, plus a large quantity of Diet Coke...

Second, it's essential to get feedback from other developers who have solved these types of problems before. The example of "premature optimization" I showed here is based on software I wrote recently. I'd made the optimization because I was initially dealing with only one supplier and one consumer. Unfortunately, the code crept into a class library that was then used to create thread pools, which triggered the bug. Fortunately, a bright colleague, Karlheinz Dorn of Siemens, pointed out the error of my ways, and Tim Harrison and I quickly fixed the bug. This type of substantive feedback is far more valuable than having your boss point out the overuse of colons...

Incidentally, if you'd like to learn more about multi-threaded programming, I highly recommend a new book called "Programming with Threads" by Steve Kleiman, Devang Shah and Bart Smaalders (ISBN 0-13-172389-8). Although most of their examples are in C, the book is a masterpiece of good, practical concurrency techniques. If you'd like to obtain C++ components for multi-threading (such as the *correct* thread-safe message queue), check out the ACE release, which is a freely available C++ network programming framework obtainable via the WWW. Finally, to bring this full circle, Dilbert and his pals are also online. The Dilbert Principle really is an excellent book -- I highly recommend it. But don't wait until your 34th birthday to get it ;-).


[Knuth] Donald Knuth, "Structured Programming with go to Statements," Computing Surveys, Vol. 6, No. 4, December 1974, p. 268.
