Thoughts on Slack and WIP

The other day, I was reading this excellent post by Matt Heusser on the dangers and consequences of having too much work in progress (WIP). It mirrors my own experiences over the past few months.

I have a number of techniques in place to manage my own work and keep WIP at a productive level, but I’ve had an unanticipated number of requests from colleagues for assistance. And I’m always happy to help… You can easily see where that leads. Before I knew it, I was overwhelmed.

After a stimulating conversation with Adam Yuret last night, I realized that Matt’s post tells only part of the story. It uses physical systems, like traffic and networks, to illustrate the negative results of having too much WIP. I do the same when I talk about WIP; it makes the concept readily accessible and works really well. But it misses something: it doesn’t look at the benefits of slack.

We humans are not mechanical; the costs of high WIP are even greater for us than they are for physical systems. This is because our brains continue to work on and digest varied ideas and experiences subconsciously. When we have too much to do, when we’re too focused on the task at hand, there’s too little time to step away from problems and allow these ideas to find their way to the forefront of our minds. This creates stress and tension.

When I took time away last night to have that conversation with Adam, I created slack time. I took my mind off of the topics I’d been working with for several days. I forgot my own challenges for a little while. And when the talk was over, I was hit with a wave of creativity. New ideas bubbled up; I started considering potential solutions for problems I’d been mulling over for months. I had at least one epiphany, and what I hope will be a few other good ideas.

Without taking the time to make some slack, I don’t think those ideas would ever have made it to my conscious mind. I needed that slack. I think all of us do.

So there are two sides to the high-WIP problem. The first is that it pushes us beyond our capacity and bogs us down. The second is that it suppresses our creativity. Either one can be crippling, but when they combine, the challenges can seem insurmountable.

How Can We Learn When Lessons Take So Long?

Alfred Thayer Mahan

Last week I was discussing the idea of software rewrites with a good friend. It was a relevant topic; different teams that we work with are being asked to, or are in the process of, rewriting various pieces of software. But our conversation wasn’t about the mechanics of rewriting applications; it was about the decision to do so, and whether that decision was the appropriate one.

Rewrites are costly. The costs are almost always larger than anticipated—both in time and effort—and failure to anticipate them correctly provides competing businesses with an opportunity. While the organization focuses on the rewrite, and tries to build to parity with their “legacy” solution, new features get pushed lower on the priority list. This creates a gap with evolving customer expectations; the longer the rewrite takes, the larger this gap tends to become. If competing businesses can step into it, they can seize market share while the rewriting organization is busy working towards “feature parity.” Both of us had seen this happen. It seems to be a common theme with major software rewrites.

My friend and I had learned from this pattern. Since both of us had been party to rewrites that took longer than anticipated and had important business consequences, we were wary of them and argued for approaches that accounted for these risks. Our experiences made us wonder whether organizations—not just ours, but organizations in general—could effectively learn such lessons.

The question is an important one. The tenure of engineering managers and their immediate superiors is relatively brief, at least compared to the cycle of software rewrites. Software systems can last for a generation, twenty years or more. A software engineer might see one or two major rewrites during a career. Would that give enough experiential knowledge to avoid a poor decision? We thought it unlikely, especially if managers and decision-makers were moving on to new responsibilities about once every five years.

This wasn’t an inspiring conclusion. We couldn’t help but wonder if software organizations might have real difficulties learning important lessons because of these dynamics. If average leadership tenure is less than ten years and the feedback cycle from a rewrite is double that, how can an organization be expected to learn from one?

In a world where Agile approaches and fast feedback loops have become so common, there are still aspects to our systems that have long cycles, and these can inhibit effective learning.

Watching President Obama’s speech to the U.N. the other day, as he laid out the case for a campaign against ISIL, I wondered if the same might not be true for the U.S. government. It is dangerous to draw specific parallels between American involvement in Southeast Asia—or more specifically Vietnam—and the recent entanglements in the Middle East, but for students of history, it is almost impossible not to. Similar themes reemerge, such as overconfidence in military force, an emphasis on winning tactical victories rather than defining strategic goals, and relative ignorance of the importance of historic and cultural contexts. Presidents, just like software managers, can have difficulty with long feedback loops because of their limited tenure.

In the late nineteenth and early twentieth centuries, with limited experience in fighting naval wars, the U.S. Navy attempted to solve this problem through the study of history. This was a core aspect of the approach of Alfred Thayer Mahan and the work of his colleagues at the Naval War College. Historical study augmented experiential knowledge and was used to illustrate broad themes. These themes became principles that formed the foundation of the Navy’s approach to tactics and doctrine in the early part of the twentieth century. If the performance of the Navy in World War Two is any indication, Mahan’s approach was successful.

Do software teams need something similar? Does the U.S. Government?

What and When to Automate?

This post springs out of a Twitter conversation with Marc Burgauer and Kim B. They will also be sharing their thoughts on what and when to automate (here and here, respectively).

My simple answer is that automation is most valuable when it provides rapid feedback that informs the decisions people make.

When the question came up, I immediately thought about my experiences developing software, and the automation of testing cycles. I have developed an ingrained assumption that some types of automated testing are inherently “good.” It was fortunate that Kim was so pointed in her questioning. I was forced to revisit my assumptions and come at the question another way before I could give a considered answer.

I believe the development of the U.S. Navy’s early surface fire control systems is a useful illustration of effective automation. These systems were intended to allow a moving ship to fire its guns accurately and hit another ship at ranges of five to ten miles or more. At the time these systems were developed—between 1905 and 1918—these were significant distances; hitting a moving target at these ranges was not easy.

The core of these systems was a representative model of the movements of the target. At first, this model was developed manually. Large rangefinders observed the target and estimated its range. Other instruments tracked the target and recorded its bearing. These two—bearing and range—if observed and recorded over time, could be combined to develop a plot of the target’s movements. The U.S. Navy’s preferred approach was to plot the movements of the firing ship and the target separately. This produced a bird’s-eye plot that could be used to predict the future location of the target, the point where the guns would have to be aimed to secure a hit.
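As a rough illustration of what the manual plotting involved, here is a minimal sketch in Python. All the numbers, names, and the flat-earth geometry are invented for illustration; this is not a reconstruction of the Navy’s actual plotting procedure.

```python
import math

def to_position(own_x, own_y, range_yds, bearing_deg):
    """Convert one range-and-bearing observation (bearing measured
    clockwise from true north) into an absolute target position."""
    theta = math.radians(bearing_deg)
    return (own_x + range_yds * math.sin(theta),
            own_y + range_yds * math.cos(theta))

# Hypothetical observations taken one minute apart:
# (own ship x, own ship y, observed range in yards, observed bearing)
observations = [
    (0.0,    0.0, 18000, 45.0),
    (0.0,  600.0, 17650, 46.5),
    (0.0, 1200.0, 17300, 48.1),
]

track = [to_position(*obs) for obs in observations]

# With two or more plotted positions, the target's course and speed
# fall out of simple differences; this is the "model" of its movement.
(x0, y0), (x1, y1) = track[-2], track[-1]
dx, dy = x1 - x0, y1 - y0
speed_yds_per_min = math.hypot(dx, dy)
course_deg = math.degrees(math.atan2(dx, dy)) % 360
print(f"course ~{course_deg:.0f} deg true, speed ~{speed_yds_per_min:.0f} yds/min")
```

The difficulty of the manual system is visible even here: every entry in the observations list took real people and real minutes to produce, so the plot was slow to build and slow to correct.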

Feedback was what made the system work. At first there was only a single feedback loop. A “spotter” high in the masts of the ship watched the movement of the target and observed the splashes of the shells that missed. To make this process easier, the Navy preferred “salvo fire”: firing all available guns in a battery at once to maximize the number of splashes. Depending on where these shells landed, the spotter would call for corrections, which were fed back into the model to improve it.

The process did not work well. Building the model manually required numerous observations and took a lot of time. A different approach was adopted: measuring rates of change—particularly the rate at which the range was changing—and aiming the guns based on that. This was less desirable, as it was not a comprehensive “model” of the target’s movements. However, once the current rate of change was known, automatic devices could be used to roughly predict future ranges, allowing the future position of the target to be estimated more rapidly.

These “Range Clocks” were a simple form of automation. They took two inputs—the current range and the rate at which it was changing—and gave an output based on simple timing. They reduced workload, but they did not provide feedback, and they could not account for situations where the rate of change was itself changing. Automation would have been better focused on something else, and ultimately it was.
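The range clock’s entire logic amounts to a single line of linear extrapolation. A sketch, with invented numbers:

```python
def range_clock(current_range, range_rate, elapsed_minutes):
    """Dead-reckon a future range from the current range and its rate
    of change. Like the mechanical range clock, this assumes the rate
    itself stays constant -- which is exactly its limitation."""
    return current_range + range_rate * elapsed_minutes

# Closing at 200 yards per minute from 18,000 yards:
print(range_clock(18000, -200, 5))  # -> 17000 yards predicted
```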

The early fire control systems reached maturity when the model of the target’s movements was automated. The Navy introduced its first system of this type in 1916. Called a “Rangekeeper,” this device was a mechanical computer that used the same basic observations of the target (range and bearing, along with estimates of course and speed) to develop a model of its movements.

The great advantage of this approach over previous systems was that the Rangekeeper’s embedded model allowed another level of feedback to be introduced. The face of the device provided a representation of the target, graphically displaying the computed target heading and speed. Overlaid above this representation were two lines that indicated the observed target bearing and the observed target range.

If the model computed by the Rangekeeper was accurate, the two lines indicating observed bearing and range would meet above the representation of the target course and speed. This meant that if the model was not accurate—due to a change of course by the target or bad inputs—the operator could recognize it and make the necessary corrections. This made for faster and more accurate refinements of the model. Automation in this case led to faster feedback, better decisions, and ultimately more accurate gunfire.
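As a toy sketch of that feedback loop (not the Rangekeeper’s actual mechanism; the correction gains and all numbers are invented), the idea looks something like this:

```python
import math

def predict(pos, course_deg, speed, dt):
    """Dead-reckon the modeled target forward along its estimated
    course and speed, as the Rangekeeper did mechanically."""
    theta = math.radians(course_deg)
    x, y = pos
    return (x + speed * dt * math.sin(theta),
            y + speed * dt * math.cos(theta))

def residuals(pos, obs_range, obs_bearing_deg):
    """How far the latest observation sits from the model's prediction.
    On the Rangekeeper's face this gap was visual: the observation
    lines either met over the modeled target or they did not."""
    x, y = pos
    pred_range = math.hypot(x, y)
    pred_bearing = math.degrees(math.atan2(x, y)) % 360
    return obs_range - pred_range, obs_bearing_deg - pred_bearing

# Crude stand-in for the human operator: nudge the estimates in
# whatever direction shrinks the residuals. The gains are arbitrary.
est_pos, est_course, est_speed = (12000.0, 12000.0), 90.0, 400.0
for obs_range, obs_bearing in [(17100, 44.0), (17250, 43.1), (17420, 42.3)]:
    est_pos = predict(est_pos, est_course, est_speed, dt=1.0)
    range_err, bearing_err = residuals(est_pos, obs_range, obs_bearing)
    est_speed += 0.1 * range_err     # closing faster/slower than modeled
    est_course += 0.5 * bearing_err  # drifting left/right of the model
```

The essential property is the one the Rangekeeper’s face provided: every observation immediately tells the operator whether the model is still trustworthy, rather than waiting for shell splashes to reveal it.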

When we think about automating in software, I believe it is better to concentrate on this type of automation—the kind that leads to more rapid feedback and better decision-making. Automated unit tests can do this by telling us immediately when a build is broken, and many teams use them exactly this way.
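A trivial example of what that looks like in practice; the function and tests here are hypothetical, and any test runner (pytest, in this sketch) would do:

```python
# test_pricing.py -- run with `pytest`. The moment a change breaks this
# behavior, the suite fails and tells us before the build ships.
def apply_discount(price, percent):
    """Hypothetical production code: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_discount_is_applied():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_changes_nothing():
    assert apply_discount(59.99, 0) == 59.99
```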

When we approach the problem this way, we’re not just providing an automated mechanism for a time-consuming, repetitive task. There is some value in that—this is the approach the Navy took with the range clock—but it is more valuable if our automation enables better decisions through faster feedback. Decisions are difficult; often there is less information than we would like. The more we can leverage automation to improve decision-making, the better off we will be. This is the approach the Navy took with the Rangekeeper, and I think it’s a valuable lesson for us today.

Making Sense of “The Good, the Bad, and the Ugly”

I recently attended a Cynefin and Sense-Making Workshop given by Dave Snowden and Michael Cheveldave of Cognitive Edge. It was an excellent course and a useful introduction to how to apply concepts from complex adaptive systems, biology, and anthropology to better understand human approaches to problem solving.

The Cynefin framework is an elegant expression of these ideas. It posits five domains that reflect the three types of systems we encounter in the world: ordered systems, in which outcomes are predictable and repeatable; chaotic systems, which are inherently unpredictable and temporary; and complex systems, in which the system and the actors within it interact to shape an unpredictable future.

We can use the Cynefin framework to help us make sense of our current situation and understand what course of action might be best at a given moment. If we are dealing with an ordered system, then we are in one of the ordered domains, either “Obvious” or “Complicated.” In either of these circumstances, we can reason our way to the right answer, provided we have the necessary experience and expertise. The predictability of the system permits this.

If, however, we are in the “Chaotic” domain, the system is wholly unpredictable. The “Complex” domain embraces complex adaptive systems: those that are governed by some level of constraint yet remain unpredictable. Think of the foot traffic in your local shopping mall, and you can get some idea of how these systems manifest: you can purposefully walk from one end to the other, but if the mall is crowded, you can’t predict the course you’ll have to take to get there.

A fifth domain, “Disorder,” exists for those times when we do not know which of the other domains applies.

To increase our familiarity with how to use the Cynefin framework, we performed a number of exercises. In one of them, my tablemates (including Adam Yuret and Marc Burgauer) and I tried to make sense of the final, climactic scene of “The Good, the Bad, and the Ugly.” Spoilers follow, so if you haven’t seen it, now’s a good time to bail out.

The scene involves a three-way standoff between “Blondie” (Clint Eastwood), “Angel Eyes” (Lee Van Cleef), and “Tuco” (Eli Wallach). The three gunslingers stand in a rough triangle at the center of a graveyard. Blondie has written the location of the treasure on the bottom of a rock and placed it at the center of the triangle. None of them wants to share the treasure.

At first blush, it seems to be an ideal example of a complex system. As soon as any one of them acts, the others will fire, and the standoff will end, but no one can predict how. That’s why each of them stands there, eyeing one another cautiously, as the tension builds to Ennio Morricone’s music.

But that’s not the truth of the matter. Blondie is no fool. He’d gotten the drop on Tuco and had time to unload Tuco’s weapon. As we watch the scene, we don’t know this, but for Blondie, the situation is well ordered. All he needs to do is pick the right time to gun Angel Eyes down. Blondie knows Tuco’s not a threat.

The other two must deal with more unknowns. It’s not a chaotic system for them; there is a certain level of predictability. Someone will shoot. But the details of who that will be—and when he will fire—are uncertain. What happens after that is anyone’s guess. Both Tuco and Angel Eyes want to trigger a specific outcome—their own survival and the death of the other two—but exactly how to bring this outcome about is impossible to predict given the other elements of the system. It’s a perfect example of a complex adaptive system.

We thought this was an extremely useful example to help us “make sense” of Cynefin and the concepts it embraces. I hope you do too.

Kanban Seder

Last week my wife and I hosted a Passover Seder. We have entertained together a number of times, but this was our first real attempt at a coordinated, sit-down meal. Most of our gatherings have been buffets, which are less dependent on timing.

We knew getting the timing of the Seder right would be a challenge. The meal is served in the middle of the Seder, not just when the guests arrive. We also knew there would be a lot of uncertainty. Potatoes don’t always cook the way you want them to, and guests never arrive all together. It would be difficult to plan everything perfectly. However, we knew we would both feel a lot more comfortable if there was a plan, to help keep us on the same page when we started making adjustments… and we always have to make adjustments.

The night before the Seder, we went through all the things we’d have to do. We’d used Kanban-style visualization techniques before (to pack for trips and to track jobs around the house), and there are three cabinets in our kitchen that work great as Ready, Doing, and Done lanes. So I started listing each major task on a sticky note, but I quickly felt this would be inadequate.

Time was an essential component of everything we had to do. We couldn’t just pull tasks when we were ready. The brisket had to cook for hours; potatoes had to be peeled and seasoned before going in the oven; matzo balls had to chill before going in the water to boil; and different things had to be ready at different times, paced to the rhythm of the Seder. Coordinating the timing of each task was one of the main reasons for our planning exercise.

The solution we used was to write the time required for each task in the upper right of its sticky note. This gave us a good picture of the overall flow. Working backward from the end, we determined when we wanted each task to complete and recorded that on the sticky notes as well. Finally, with a little quick subtraction, we determined when each task needed to start, and put that on the sticky notes too. We tried to keep the number of concurrent tasks low—thus limiting our work in progress—by staggering tasks where we could. Once this was done, we had a plan. We went to bed, comfortable and confident.
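In effect, we were doing simple backward scheduling by hand. The arithmetic on each sticky note amounts to a few lines of Python (the tasks and times below are invented for illustration; ours were on paper):

```python
from datetime import datetime, timedelta

# (task, minutes required, time it must be finished by) -- invented examples
tasks = [
    ("brisket",        240, "17:30"),
    ("roast potatoes",  75, "18:00"),
    ("matzo balls",     90, "18:15"),
]

for name, minutes, done_by in tasks:
    end = datetime.strptime(done_by, "%H:%M")
    start = end - timedelta(minutes=minutes)
    # This subtraction is what went on each sticky note.
    print(f"{name}: start by {start:%H:%M} to be done at {end:%H:%M}")
```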

The next morning, things started to go awry. The Seder’s co-host, who was expected to arrive early to help with the preparations, had a family emergency and couldn’t come. This was a significant problem. The plan had assumed she would be there. She was also bringing the Haggadah, so my wife started looking for alternatives online, a task we hadn’t anticipated, and one that took a long time to complete.

The brisket pan turned out to be too large for the meat and its sauce. At some point, all the sauce cooked down and started to burn. We caught it in time, but reworking the brisket dish in the middle of the other preparations was an emergency we hadn’t counted on.

But our visualizations were resilient. We were able to absorb these unforeseen issues into our plan without disrupting our overall flow. One reason for this was the slack we’d built in the night before: we had arranged the plan to keep the number of concurrent tasks low, and this helped. The act of planning had also allowed us to see what needed to happen at specific times and what could be flexed. To make up for the time and hands we’d lost, we started pulling flexible tasks ahead when we had a little slack. Having the start time on each task made it easy to identify what to do next; when either one of us came free, we could grab the next task and get started. The visualization also allowed us to talk about what we were doing and where we were with it. We didn’t have to waste time discussing what to do next and could quickly help each other when necessary. The Kanban was our shared view of our work, an effective form of distributed cognition.

The final result was a fun Seder, an excellent meal, and nearly perfect timing, in spite of the inevitable hiccups. Our Kanban Seder was wonderful, and I’m looking forward to doing it again next year.


“The Rules of the Game”

The subtitle of the July 2013 edition of “The Scrum Guide” is “The Rules of the Game.”1 This is an ironic choice. The Rules of the Game is also the title of Andrew Gordon’s in-depth analysis of the Royal Navy’s performance during the Battle of Jutland, a performance that failed to meet expectations and led to bitter recriminations. It is not the kind of performance software teams would wish to emulate.

Jutland was the great naval battle of World War One. In the late afternoon of 31 May 1916, the main battle fleets of Great Britain and Imperial Germany found each other in the North Sea. They fought on and off through the fading light and darkness for the rest of the day and into the night.

For the Royal Navy, the battle offered great promise. Victory over the German fleet would have opened communications with Russia through the Baltic, and permitted offensive action against the German coast. Together, these might have shortened the war.2 And victory was expected. Since Admiral Horatio Nelson’s victory at Trafalgar in 1805, the Royal Navy had enjoyed a preeminent position; no other naval force could compare in size and power.

The promise of victory grew more certain during Jutland’s opening moves. Signals intelligence gave the Royal Navy early warning of German movements, allowing the British to concentrate overwhelming force at the anticipated contact point. British scouting forces successfully located the German battle fleet, and led it toward the Royal Navy’s battle line. The Germans soon came under the largest concentration of naval gunfire in history, far away from their bases, outnumbered, and outgunned. Defeat seemed certain. But the promise was not fulfilled; the German fleet not only survived, but managed to inflict more punishment than it received.3

The failure of the Royal Navy to win a decisive victory is the dominant theme of Jutland. Most assign blame to the fleet commander, Admiral John R. Jellicoe, or his chief subordinate, Admiral David R. Beatty. Gordon’s analysis goes beyond personal explanations and examines the Royal Navy’s system of command. Gordon illustrates how the Royal Navy’s command mechanisms—the “rules” that had been established to guide the behavior of officers in battle—hindered rapid decision-making, crippled individual initiative, and thwarted success at this most critical juncture.4

The primary problem was an overreliance on orders and instructions from above; this created an environment in which subordinates were hesitant to act on their own initiative, even when inaction endangered their forces or their mission.5 Both Beatty and Jellicoe were forced to assume the burden of commanding the bulk of their forces directly. They shouldered this responsibility quite well, but the challenge of coordinating the movements of a large battle fleet, in fading light and darkness, while maneuvering to intercept a fleeing enemy, was too great for any one person, or even a small group. Jellicoe and Beatty needed greater initiative from their subordinates in order to deliver on Jutland’s promise.

This was not something the Royal Navy was prepared to deliver. The limited initiative displayed by subordinates was an unintentional—but wholly predictable—consequence of the system of rules that governed their behavior. The rules took the place of intelligent action. Instead of focusing on using every available means to defeat the enemy, the Royal Navy adhered to the “rules of the game.”

The Scrum Guide, by creating a similar system of rules, risks nearly identical, unintended side effects. Scrum teams will often hesitate when confronted with situations that the rules do not anticipate or account for, rather than addressing the problem creatively on their own initiative. This is common, for example, when access to the Product Owner is limited. With no one to groom or prioritize the backlog, the influx of work slows, and progress begins to stall.

A more insidious problem is that rules can frequently hinder learning, particularly when situations that contradict the rules are encountered. Because the rules provide a context for framing the problem, the most common response is to conclude that the rules have not been implemented properly. The team convinces itself that if they could only be “good enough” the problem would be solved. This view can blind a team to alternative approaches and can hinder the customization of Scrum for their own context.

If problems do arise, wasteful arguments about the correct interpretation and enforcement of the rules are likely, particularly in stressful situations or where failure has occurred. This can easily divide the team and shift focus away from the main goal of delivering software.

Gordon’s analysis illustrates all three of these negative outcomes. Limited individual initiative was a key component of the Royal Navy’s failure to decisively defeat the Germans at Jutland. In the years before the battle, alternative approaches to command were evaluated and discarded; their value was missed because the existing framework—the existing system of rules—prevented a fair assessment of them. And, most visibly, the aftermath of the battle saw a split between Beatty and Jellicoe, which led to a “Jutland controversy,” centered on their different approaches to leadership and their interpretation of the “rules.”6

Rules are necessary to help guide behaviors and align the work of teams. The performance of the Royal Navy at Jutland offers a salient example of the problems that can develop when too much emphasis is placed on adhering to them. This is relevant for software teams because software teams—like navies—make it their business to capitalize on dynamic and changing environments. Success in such circumstances requires individual initiative and low-level decision-making. The Scrum Guide, by emphasizing “rules of the game,” risks hindering the ability of teams to capitalize on the initiative of their members and to learn from unanticipated circumstances, both of which are goals of the Scrum framework.


1. The Scrum Guide, Ken Schwaber and Jeff Sutherland, July 2013

2. Commander Holloway H. Frost, The Battle of Jutland, (United States Naval Institute, 1936), p. 108-116

3. Keith Yates, Flawed Victory: Jutland, 1916, (Naval Institute Press, 2000)

4. Andrew Gordon, The Rules of the Game, (Naval Institute Press, 1996)

5. The best examples of this are the handling of the 5th Battle Squadron early in the battle (Gordon, p. 81-101) and the failure of the destroyer flotillas to report encounters with the Germans during the night (Gordon, p. 472-499)

6. Gordon, p. 537-561; Yates, p. 257-275

Strategic Limitations of the German General Staff

It has become common for those who study business organizations to embrace military analogies and military models when they think about strategy and organizational complexity.1 This is a good thing; cross-disciplinary approaches can offer new perspectives and help seed new ideas. However, limited knowledge of the subject matter can lead to overly optimistic interpretations of historical examples and restrict our ability to learn from them. This is particularly true in the case of the German General Staff (GGS) in general and Helmuth von Moltke (the elder) in particular.

Moltke is considered one of the greatest military minds of the nineteenth century. He was appointed chief of the Prussian General Staff in 1857 and led the Prussian Army through the wars of German Unification, including victories over Denmark (1864), Austria (1866), and France (1870-71). Those who believe in flexibility, learning, and adaptability in the face of uncertainty find appeal in his famous quote about the nature of war: “No plan survives contact with the enemy.”2

Moltke built the Prussian Army—and later the German one—on this assumption about the nature of war. Rather than issuing detailed instructions, he stressed adaptability and flexibility. Officers were given high-level objectives and guidance; they were expected to develop specific plans based on the circumstances of the moment, without consulting higher headquarters for approval. This shortened the decision cycle of Moltke’s armies. In modern business terms, they lowered the cost of decisions by placing the authority to make them at lower levels. They got inside the “decision cycle” of their opponents.

There are extremely salient lessons to be gained from this experience. Distributing decision-making more broadly throughout the organization is an effective reaction to increased complexity and uncertainty. Education, training, and practice will give better results in the field of knowledge work than detailed instructions that become obsolete at the first unanticipated circumstance. However, if we consider this one aspect of Moltke’s approach worth emulating, we must also be conscious of his limitations.

The most effective critique of the GGS and its approach springs from another Prussian military thinker, Carl von Clausewitz, and his famous dictum, “War is the continuation of policy by other means.”3 This concept has been interpreted in numerous ways, but there is no escaping its fundamental essence: nations (and would-be nations4) wage war to achieve political ends.

Moltke would have agreed with this, and the political objective of his famous victories is readily apparent from their name, the “Wars of German Unification.” It is unfortunate that in our praise for Moltke, the essential political side of these wars is often forgotten. The political side was dominated by the Minister President of Prussia, Otto von Bismarck. Without Bismarck’s skill and acumen, it is unlikely that Moltke’s battlefield victories would have achieved lasting fame.

This is primarily because Moltke’s battlefield emphasis was the encirclement and destruction of the opposing army. Moltke focused on crafting the quintessential military victory: the annihilation of the enemy forces. The approach worked for two reasons. First, the era of total war, where nations mobilized their entire economies in pursuit of victory, had not yet come. Military victory could, in such an environment, deliver political victory. Second, the circumstances that allowed the Prussian state to achieve battlefield success—the delicate management of alliances, the choice of the right moment in time, and the selection of willing allies—had been put in place by the diligent Bismarck. His deft hand provided the context for Moltke’s triumphs.

The necessity of an effective interplay between the military and political spheres is illustrated by what happened after Moltke and Bismarck retired. Moltke left the GGS in 1888; Bismarck retired in 1890. The delicate balance of alliances Bismarck had brokered for Germany’s benefit fell apart. France and Russia, long potential enemies, entered into an alliance in 1892. This left Germany in a difficult strategic position, with powerful opponents to the east and the west.

This challenge required a balance of political and military thinking, but the voids left by Moltke and Bismarck were filled by less-capable individuals. By the time of the Franco-Russian Alliance, Alfred von Schlieffen had assumed leadership of the GGS. Schlieffen was not a politically astute individual; he had less influence over other governmental departments than Moltke had. Rather than seeking to develop a collaborative solution to Germany’s strategic dilemma, Schlieffen “responded to this challenge… by focusing inward on areas he could control and influence.”5 The result was a purely military solution to Germany’s strategic problem: the infamous Schlieffen Plan.

The Schlieffen Plan was a reckless attempt to use military means to achieve political ends while ignoring the political consequences of those means. Schlieffen—and his successor, the younger Helmuth von Moltke (cousin of the elder Moltke)—embraced the approach of the elder Moltke and placed an emphasis on quick battlefield victories. They expected to defeat France with a huge flanking movement that would enter northern France through neutral Belgium. Germany’s eastern border would be secured by a small force; Russia’s mobilization was expected to be slow and cumbersome. The delay would allow the bulk of the German Army to defeat France before turning eastward and defeating Russia. As the world witnessed in the fall of 1914, it didn’t work.

There were numerous flaws in the plan; two were crucial. First, the violation of Belgian neutrality made Germany a global pariah and brought Great Britain into the war on the side of France and Russia. While there is some question as to whether Britain would have intervened anyway, the military plan guaranteed this political result. Second, the plan assumed that swift military victories were still possible in an era of national mobilization. This was no longer the case. The elder Moltke and the General Staff had seen evidence of this in the latter stages of the Franco-Prussian War, when new French armies appeared after the initial German triumphs. Schlieffen and the younger Moltke both had sufficient evidence to anticipate these flaws. That they did not is a weighty indictment of the approach of the GGS and, through it, the legacy of the elder Moltke.

When we praise Moltke for developing a potent framework for overcoming uncertainty and achieving high performance in the dynamic environment of the battlefield, we praise his work within the German Army. But it is essential to remember that Moltke’s framework was successful because of the political circumstances Bismarck brokered. The two of them, along with a broader supporting cast, created the system in which Moltke’s battlefield triumphs brought political success. Without that broader system, Moltke’s work—while impressive—could not guarantee victory. Germany’s performance in World War One proves that point.

What lessons then should the modern student take from Moltke? His system of decentralized decision-making is laudable, certainly, and has been a model of effective military leadership for over a century. However, if we adopt such an approach, we need to ensure that the end goals remain at the forefront. Too often, like Schlieffen, we seek to optimize the work within the spheres we can control, and ignore the challenges outside of them.

The end goal for the GGS should have been political victory for Germany. Instead, it became victory on the battlefield. The two were not the same thing. A more modern example would be a software team that makes its end goal the creation of features and ignores the work of validating that those features will solve its customers’ business problems. Moltke was like a software manager who, having developed an effective rapport with his peers, focused on optimizing the work of his team. After his departure, his successors continued to optimize and refine their work, but they lost the rapport, and in the process lost the system that allowed their work to be valuable. This is the great lesson we should take from the elder Moltke.

Further Reading:

German Strategy and the Path to Verdun, Robert T. Foley, (Cambridge University Press, 2005)

The Principles of Product Development Flow, Donald G. Reinertsen, (Celeritas Publishing, 2009)

On War, Carl von Clausewitz, Edited and Translated by Michael Howard and Peter Paret, (Princeton University Press, 1989)

Moltke, Schlieffen and Prussian War Planning, Arden Bucholz, (Berg Publishers, 1993)

The Marne, 1914, Holger H. Herwig, (Random House, 2009)

The Ideology of the Offensive: Military Decision-Making and the Disasters of 1914, Jack Snyder, (Cornell University Press, 1984)


1. Perhaps the most effective example of this is Don Reinertsen’s The Principles of Product Development Flow, (Celeritas Publishing, 2009). See Chapter 9, p. 243-266

2. Moltke’s Wikipedia entry is a good starting point to learn more.

3. On War, Carl von Clausewitz, Edited and Translated by Michael Howard and Peter Paret, (Princeton University Press, 1989). Clausewitz’s Wikipedia entry is another good starting point.

4. It is useful to think of the terrorist network of Al Qaeda and its affiliates this way.

5. Robert T. Foley, German Strategy and the Path to Verdun, (Cambridge University Press, 2005), p. 64