What and When to Automate?

This post springs out of a Twitter conversation with Marc Burgauer and Kim B. They will also be sharing their thoughts on what and when to automate (here and here, respectively).

My simple answer is that automation is most valuable when it can provide rapid feedback into the decisions people make.

When the question came up, I immediately thought about my experiences developing software, and the automation of testing cycles. I have developed an ingrained assumption that some types of automated testing are inherently “good.” It was fortunate that Kim was so pointed in her questioning. I was forced to revisit my assumptions and come at the question in another way in order to respond with a considered answer.

I believe the development of the US Navy’s early surface fire control systems is a useful illustration of effective automation. These systems were intended to allow a moving ship to fire its guns accurately and hit another ship at ranges of five to ten miles or more. At the time these systems were developed—between 1905 and 1918—these were significant distances; hitting a moving target at these ranges was not easy.

The core of these systems was a representative model of the movements of the target. At first, this model was developed manually. Large rangefinders observed the target and estimated its range. Other instruments tracked the target and recorded its bearing. These two—bearing and range—observed and recorded over time, could be combined to develop a plot of the target’s movements. At first, the US Navy’s preferred approach was to plot the movements of the firing ship and the target separately. This produced a bird’s-eye plot that could be used to predict the future location of the target, which is where the guns would have to be aimed to secure a hit.
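Concretely, the plotting amounted to converting each range-and-bearing pair into a position, accumulating positions into a track, and extrapolating. Here is a minimal sketch of that idea in Python, with invented numbers (the actual work was done by hand on paper plots, not in code):

```python
import math

# A hypothetical sketch (not the Navy's actual plotting procedure) of
# turning periodic range-and-bearing observations into a bird's-eye
# track and extrapolating a future aiming point. All numbers invented.

def to_xy(range_yds, bearing_deg):
    """Convert a range/bearing observation to x-y coordinates,
    with bearing measured clockwise from north."""
    theta = math.radians(bearing_deg)
    return (range_yds * math.sin(theta), range_yds * math.cos(theta))

def predict(track, dt_ahead):
    """Estimate target velocity from the first and last points of the
    track and extrapolate dt_ahead seconds into the future."""
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt_ahead, y1 + vy * dt_ahead)

# Observations one minute apart: (time_s, range_yds, bearing_deg)
raw = [(0, 18000, 45.0), (60, 17700, 46.5), (120, 17400, 48.0)]
track = [(t, *to_xy(r, b)) for t, r, b in raw]
print(predict(track, 60))  # where to aim the guns one minute from now
```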

Feedback was built into the system to make it work. At first there was only a single feedback loop. A “spotter” high in the masts of the ship watched the movement of the target and observed the splashes of the shells that missed. To make this process easier, the Navy preferred “salvo fire,” firing all available guns in a battery at once to maximize the number of splashes. Depending on where the shells landed, the spotter would call for corrections, and these corrections were fed back into the model to improve it.

The process did not work well. Building the model manually required numerous observations and took a lot of time. A different approach was adopted, based on measuring rates of change—particularly the rate at which the range was changing—and aiming the guns accordingly. This was less desirable, as it was not a comprehensive “model” of the target’s movements. However, once the current rate of change was known, automatic devices could be used to roughly predict future ranges, allowing the future position of the target to be predicted more rapidly.

These “Range Clocks” were a simple form of automation. They took two inputs—the current range and the rate at which it was changing—and gave an output based on simple timing. They reduced workload, but they did not provide feedback, and they could not account for situations where the range rate was itself changing. Automation would have been better focused elsewhere, and ultimately it was.
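A rough sketch of the range clock’s logic, with hypothetical numbers, shows both its simplicity and its blind spot: it integrates a single, constant rate, so its output drifts as soon as the true rate changes.

```python
# A minimal sketch of a range clock's logic (hypothetical values): it
# integrates one constant range rate over time. If the true rate is
# itself changing, the prediction silently drifts off.

def range_clock(initial_range, range_rate, elapsed_s):
    """Predicted range, assuming the range rate never changes."""
    return initial_range + range_rate * elapsed_s

# Opening range 18,000 yards, closing at 10 yards per second.
print(range_clock(18000, -10.0, 60))  # 17400 -- fine while the rate holds

# But if the target turns away and the true rate drops to -4 yd/s,
# the clock still reports 17400 while the true range is 17760.
```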

The early fire control systems reached maturity when the model of the target’s movements was automated. The Navy introduced its first system of this type in 1916. Called a “Rangekeeper,” this device was a mechanical computer that used the same basic observations of the target (range and bearing, along with estimates of course and speed) to develop a model of its movements.

The great advantage of this approach over previous systems was that the model embedded in the Rangekeeper allowed for the introduction of another level of feedback into the system. The face of the device provided a representation of the target. This representation graphically displayed the computed target heading and speed. Overlaid above this representation were two lines that indicated observed target bearing and observed target range.

If the model computed by the Rangekeeper was accurate, the two lines indicating observed bearing and range would meet above the representation of the target course and speed. This meant that if the model was not accurate—due to a change of course by the target or bad inputs—the operator could recognize it and make the necessary corrections. This made for faster and more accurate refinements of the model. Automation in this case led to faster feedback, better decisions, and ultimately more accurate gunfire.
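In software terms, the Rangekeeper closed a predict-compare-correct loop. The sketch below illustrates that idea loosely, as a simple alpha-beta-style tracker; the names, gain, and numbers are all invented, and the real device was a mechanical computer, not software.

```python
# A loose illustration of the Rangekeeper's feedback idea: predict what
# we should be observing, compare with what we actually observe, and
# let the mismatch drive the correction. All values are invented.

def correct(model, observed_range, elapsed_s, gain=0.5):
    """Predict the current range from the model, compare it with the
    observation, and nudge the model toward the truth."""
    predicted = model["range"] + model["range_rate"] * elapsed_s
    error = observed_range - predicted
    model["range"] = predicted + gain * error        # corrected range
    model["range_rate"] += gain * error / elapsed_s  # corrected rate
    return error

# The model starts out wrong: the target is actually closing at 12 yd/s.
model = {"range": 18000.0, "range_rate": -10.0}
for elapsed, observed in [(60, 17280), (60, 16560)]:
    err = correct(model, observed, elapsed)
    print(f"error={err:+.0f} yds, corrected rate={model['range_rate']:.1f} yd/s")
```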

When we think about automating in software, I believe it is better to concentrate on this type of automation—the kind that leads to more rapid feedback and better decision-making. Automated unit tests can do this by telling us immediately when a build is broken, and many teams use them exactly this way.
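As a concrete (and entirely invented) example, a test like this runs with every build and fails the moment a change breaks the behavior it pins down, so the feedback arrives in minutes rather than weeks:

```python
import unittest

# The function under test is invented purely for illustration.
def compute_discount(total):
    """Orders of $100 or more get 10% off."""
    return total * 0.9 if total >= 100 else total

class DiscountTest(unittest.TestCase):
    def test_large_order_is_discounted(self):
        self.assertAlmostEqual(compute_discount(200), 180)

    def test_small_order_is_unchanged(self):
        self.assertAlmostEqual(compute_discount(50), 50)

if __name__ == "__main__":
    unittest.main()
```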

When we approach the problem this way, we’re not just providing an automated mechanism for a time-consuming, repetitive task. There is some value in that—it is the approach the Navy took with the range clock—but it is more valuable if our automation enables better decisions through faster feedback. Decisions are difficult; often there is less information than we would like. The more we can leverage automation to improve decision-making, the better off we will be. This is the approach the Navy took with the Rangekeeper, and I think it’s a valuable lesson for us today.
