
Data, displays and the critical few

By Jerry L. Harbour

Executive summary
Developing an effective performance measurement system is not for the faint of heart. It takes hard work to identify which data – the useful measures – matter most to your processes and operations. Then you must figure out how to build usable displays so that end users can readily understand those measures and translate them into actions that improve performance.

As the old adage holds, you can't manage what you don't measure; performance measurement is therefore a critical enabler of sound business management. But what constitutes an effective performance measurement system? Is it the quantity of measures collected? Or is it the feature-centric richness of the information technology dashboard used to display those collected measures?

It is suggested that an effective performance measurement system exhibits two key traits:

  1. The system contains useful measures.
  2. Those useful measures are displayed in a usable manner.

Yet the reader may wonder what actually defines a useful measure and what determines a usable display. The following paragraphs explore, and hopefully answer, these two questions.

Useful measures

Somewhat ironically, the most overlooked part in developing any performance measurement system is often ensuring that truly useful measures are identified and collected. All too often we forgo the admittedly hard work of identifying and collecting the measures that represent the critical few, instead collecting only what is easy to collect and immediately available, irrespective of relevance or value. Unfortunately, by collecting and displaying countless measures, the truly valuable often become lost in a sea of the trivial many.

Below are some key characteristics of useful measures.

A useful measure is aligned with and supports a key strategic performance objective. Many organizations lack strategic clarity. As a result, no one really knows what to measure (or even what to manage). Instead, organizations assemble a haphazard collection of measures that exhibit little strategic relevance or forethought. If an organization can't articulate what it is trying to accomplish, then it also can't know what it should measure. Accordingly, the first step in developing useful measures is crafting a set of defined strategic performance objectives that clearly articulate what an organization is attempting to accomplish and in what manner. A well-written strategic performance objective contains one or more action verbs (describing what the organization is attempting to do), a subject (identifying what it is attempting to act on), and a set of performance goals (defining the desired parameters of success).
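
As an illustration only – the field names and example content below are a hypothetical Python sketch of the author's three components, not anything prescribed in the article – a strategic performance objective can be captured as a small record holding its action verbs, subject and performance goals:

```python
# Hypothetical sketch: a strategic performance objective as a structured record.
# Field names and the example content are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class StrategicObjective:
    action_verbs: list[str]                 # what the organization is attempting to do
    subject: str                            # what it is attempting to act on
    goals: dict[str, str] = field(default_factory=dict)  # desired parameters of success

# Loosely echoing the offshore-drilling illustration used later in the article.
process_safety = StrategicObjective(
    action_verbs=["maintain"],
    subject="process safety across all offshore drilling operations",
    goals={"critical safety equipment availability": "at least 98 percent"},
)
print(process_safety)
```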

A useful measure is linked specifically to a critical performance driver. Performance drivers represent tactical enablers or the real “oomph factors” that determine strategic outcome performance. Almost always few in number, performance drivers capture what really counts. For example, if maintaining process safety is a critical strategic performance objective for an offshore oil drilling company, then critical performance drivers would include well control, well integrity and critical safety equipment operability and availability. Such identified drivers help ensure that desired levels of process safety performance are maintained. A service-oriented company might identify product quality, delivery timeliness and cost as its key performance drivers. By first identifying critical performance drivers, the “what to measure” question becomes fairly obvious: You measure critical performance drivers. If security force readiness performance is a function of staffing, training and equipment availability levels (representing the key performance drivers of readiness), then you measure staffing, training and equipment availability levels. As illustrated, strategic performance objectives are linked to key performance drivers that, in turn, are linked to specific measures of performance.
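
A minimal, hypothetical sketch of that linkage – strategic objective to performance drivers to specific measures – using the security-force example from the paragraph above (the structure and measure wording are assumptions):

```python
# Hypothetical linkage: strategic objective -> performance drivers -> specific measures.
# Driver names echo the security-force example; the structure itself is an assumption.
readiness_model = {
    "objective": "Maintain security force readiness",
    "drivers": {
        "staffing": ["percent of authorized positions filled"],
        "training": ["percent of staff current on required training"],
        "equipment availability": ["percent of critical equipment operable and available"],
    },
}

# The "what to measure" question answers itself: you measure the drivers.
for driver, measures in readiness_model["drivers"].items():
    for measure in measures:
        print(f"{driver}: {measure}")
```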

A useful measure helps explain current performance outcomes. Often called descriptive measures, such measures help explain a significant amount of the variance in any given outcome. For example, in baseball only 65 percent of the variation in the number of runs a team scores can be explained by the team's batting average. Conversely, OPS, which combines on-base percentage (the rate at which a player reaches base via a hit, walk or hit-by-pitch) and slugging percentage (total bases per at-bat), explains 89 percent of the variation in runs scored. Obviously, in this case OPS does a much better job of explaining current performance levels than does the more traditional measure of batting average. What the sports world teaches us about performance measurement is that although an incredible number of "stats" are collected routinely throughout any given game and season, only a small number actually describe a significant amount of performance-related outcome variance. Thus, when developing any performance measurement system, it is always important to identify the critical subset of measures that accounts for the greatest variation in a defined performance outcome.
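
For readers who want the arithmetic behind the OPS example, a minimal sketch follows; the formulas are the standard baseball definitions, while the season line fed into them is made up purely for illustration:

```python
# OPS = on-base percentage (OBP) plus slugging percentage (SLG).
# The example season line below is invented for illustration only.
def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sac_flies):
    """Rate at which a batter reaches base via hit, walk or hit-by-pitch."""
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)

def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """Total bases per at-bat."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

obp = on_base_percentage(hits=150, walks=60, hit_by_pitch=5, at_bats=500, sac_flies=5)
slg = slugging_percentage(singles=90, doubles=35, triples=5, home_runs=20, at_bats=500)
print(f"OBP={obp:.3f}  SLG={slg:.3f}  OPS={obp + slg:.3f}")
```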

A useful measure has some predictive power, helping to forecast what may happen but to date has not happened. Also called leading indicators, predictive measures are in truth more probabilistic than deterministic in nature. That is, they are more apt to suggest what may happen than to forecast specifically what will happen. Often a group or set of interrelated measures is better at predicting a future performance outcome state than any single measure by itself. Returning to the sports world once again, assessing the actual future performance of a current draft pick in professional sports is fraught with danger. Although numerous techniques are used, all with varying degrees of success, once again we find that a small but critical subset of measures usually tops a larger and more diverse set. For example, in professional basketball it has been found that assessing a college player in three categories – two-point shooting efficiency, rebounds and steals – can provide a fair indication of how well that player eventually will perform at the professional level. Knowing that in professional basketball teams win because they score when they have the ball and prevent their opponent from doing likewise, these three measures make sense. Once again, a surprisingly small subset of measures trumps the many.
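
As a purely hypothetical sketch of how such a small set of predictive measures might be combined into a single draft indicator – the prospects, their numbers and the equal weighting are all invented for illustration and imply nothing about any actual draft model:

```python
# Hypothetical draft-evaluation sketch: standardize the three college measures cited
# above and sum them into an equally weighted composite. All numbers are made up.
prospects = {
    "Prospect A": {"two_pt_eff": 0.58, "rebounds": 9.5, "steals": 1.8},
    "Prospect B": {"two_pt_eff": 0.49, "rebounds": 6.0, "steals": 1.2},
    "Prospect C": {"two_pt_eff": 0.54, "rebounds": 7.8, "steals": 2.1},
}

def z_scores(values):
    """Standardize so measures with different units can be combined."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

composite = {name: 0.0 for name in prospects}
for key in ("two_pt_eff", "rebounds", "steals"):
    for name, z in zip(prospects, z_scores([p[key] for p in prospects.values()])):
        composite[name] += z   # equal weights, purely by assumption

for name, score in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.2f}")   # higher suggests (probabilistically) a better outlook
```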

Useful measures are sensitive enough to the conditions being assessed that they can detect subtle changes in performance. Frequently, subtle changes in performance portend a major and unwanted shift in system performance. In truth, systems rarely transition from a "green" to a "red" state without first undergoing some form of intermediate degradation. Being able to measure and detect such subtle changes is critical to taking action before an unwanted event occurs.
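
One simple, hypothetical way to build that sensitivity into a measure is to smooth it with an exponentially weighted moving average and compare the smoothed value against a baseline band; the readings, baseline and band below are assumptions for illustration:

```python
# Hypothetical drift-detection sketch: an exponentially weighted moving average (EWMA)
# smooths day-to-day noise and surfaces gradual degradation before a hard "red" limit.
def ewma(values, alpha=0.3):
    smoothed, s = [], None
    for v in values:
        s = v if s is None else alpha * v + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

# Made-up daily equipment-availability readings drifting slowly downward.
availability = [0.99, 0.98, 0.99, 0.97, 0.97, 0.96, 0.95, 0.95, 0.94, 0.93]
baseline, warning_band = 0.98, 0.02   # assumed baseline and warning band

for day, s in enumerate(ewma(availability), start=1):
    if baseline - s > warning_band:
        print(f"Day {day}: smoothed availability {s:.3f} has drifted below the baseline band")
```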

Useful measures are often normalized, allowing valuable comparisons between and among differing entities. Although similar measures may be collected, they often have different units of measurement, making direct comparisons extremely difficult. Where appropriate, standardize units of measure by identifying the basic "currency" of each measure, thus allowing direct apples-to-apples comparisons.
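
A minimal example of the kind of normalization described above: raw incident counts from two facilities of very different sizes are converted to a common rate per 200,000 hours worked so they can be compared directly. The figures are invented; the 200,000-hour basis is a widely used safety-statistics convention.

```python
# Hypothetical normalization sketch: raw incident counts are not comparable across
# sites of different sizes, but a rate per 200,000 hours worked (a common safety
# "currency") puts them on the same footing. All figures are made up.
sites = {
    "Site A": {"incidents": 12, "hours_worked": 1_500_000},
    "Site B": {"incidents": 4,  "hours_worked": 300_000},
}

for name, data in sites.items():
    rate = data["incidents"] / data["hours_worked"] * 200_000
    print(f"{name}: {data['incidents']} incidents -> {rate:.2f} per 200,000 hours worked")

# Despite recording fewer incidents, Site B's normalized rate is actually the higher one.
```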

Useful measures are believable and have veracity. If the accuracy of a measure is not believed, then it will have little to no value. Ensuring data quality throughout the performance measurement process is of critical importance. A senior manager once noted that he worries more about measures in the “green” than he does if they are in the “red.” His reasoning was that much attention is always given to measures in the red. But he worried about the veracity of measures in the green. In short, could he trust that green really meant green?

In summary, useful measures have strategic importance and are linked to and measure critical performance drivers. They also describe current performance outcomes and aid in the prediction of future outcome states. Finally, useful measures typically form an all-important but surprisingly small set, truly representing the critical few as opposed to the trivial many. In performance measurement, quality tops quantity.
 

Usable displays

The second component of an effective performance measurement system is the actual usability of the measurement display. When developing any performance measurement display, always remember that the end goal is to allow the user to translate displayed performance-based data into actionable, performance-based knowledge.

Increasingly, organizations are using software-generated “performance dashboards” to display their measurement data. Stephen Few, author of Information Dashboard Design, does an excellent job in describing different types of performance dashboards and the varying elements needed for their effective design. Wayne Eckerson, author of Performance Dashboards, defines a performance dashboard as a “layered information delivery system that parcels out information, insights and alerts to users on demand so they can measure, monitor and manage business performance more effectively.”

Below are some key characteristics of usable displays.

Usable displays display useful measures. Although this may seem obvious and not worth repeating, it is surprising how many organizations focus on IT-related collection and display issues while neglecting the heart of the matter: developing the measures themselves. The usability of a performance dashboard display is predicated on many things, but first and foremost it is predicated on the quality and "rightness" of the underlying performance data model, represented here by the concept of useful measures. Consequently, always attempt to get the "useful measure" part of the formula right before worrying about final IT-related collection and display considerations.

Usable displays provide useful measures when and where needed. Almost any performance measure has a specific time value, and understanding that timeframe value and the user's need is critical. To be truly useful, a measure must be displayed when and where it is needed. Operational data commonly have a much shorter timeframe (often counted in minutes to hours) than more strategic data, which are normally required only on a weekly or monthly basis. Knowing the immediate timeframe value of a measure is of critical importance in determining how and when that measure should be displayed. If a performance dashboard cannot display data when, where and to whom it is needed, it will have little practical value.
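
As a hypothetical illustration of matching display refresh to the time value of each measure – the measure names and cadences below are assumptions, not recommendations:

```python
# Hypothetical refresh-cadence map: operational measures are refreshed far more often
# than strategic ones, reflecting their shorter time value. All entries are assumptions.
REFRESH_MINUTES = {
    "well control parameters": 5,             # operational: minutes
    "critical equipment availability": 60,    # operational: hourly
    "training backlog": 7 * 24 * 60,          # tactical: weekly
    "process safety trend": 30 * 24 * 60,     # strategic: roughly monthly
}

def is_stale(measure, minutes_since_update):
    """A displayed measure loses practical value once its refresh window has passed."""
    return minutes_since_update > REFRESH_MINUTES[measure]

print(is_stale("well control parameters", 12))   # True: already too old to act on
print(is_stale("process safety trend", 12))      # False
```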

Usable displays communicate information clearly, rapidly and compellingly. A performance dashboard should provide relevant and meaningful data that is quickly and easily assimilated and understood by the intended user. The user's focus should always be on grasping the meaning of the data itself, not on deciphering the meaning of the display. It is always interesting to ask someone to interpret a displayed performance measurement graph. One of two things is almost always observed. Either the person instantly grasps the meaning of the graphical display and describes it clearly – the sign of a well-constructed graph that passes what the author calls his 20-second test – or the person spends an inordinate amount of time trying to grasp the graph's meaning, frequently failing altogether, which is obviously the sign of a bad and "unusable" display.

Usable displays are simple in their design. They display performance data as clearly and simply as possible, avoiding unnecessary and distracting on-screen decoration and clutter. In short, usable displays achieve elegance through simplicity. Such elegance is achieved in a number of ways, but chief among them is the reduction or outright elimination of nondata pixels (distracting and unnecessary backgrounds, labels and images). Just because a dashboard software package permits the developer to import fancy objects for use as background images does not make doing so right. Always remember that it is about the data. Although a fancy background may appear catchy when first viewed, it will quickly become an unwanted and bothersome irritant to the user. Simplicity always trumps complexity in the design of any performance dashboard display. Avoid the glitz, and focus on displaying useful data in the best way possible.
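
A small matplotlib sketch of what stripping nondata pixels can look like in practice – plain background, no decorative borders or images, attention on the trend line itself. The choice of library and the backlog numbers are assumptions made for illustration.

```python
# Hypothetical decluttering sketch: remove nondata pixels (colored backgrounds, extra
# borders, decoration) so the viewer's attention stays on the data. Data are made up.
import matplotlib
matplotlib.use("Agg")            # render off-screen so the sketch runs anywhere
import matplotlib.pyplot as plt

weeks = range(1, 11)
backlog = [4, 5, 5, 6, 8, 9, 9, 11, 12, 14]    # made-up weekly backlog counts

fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(weeks, backlog, color="0.2", linewidth=2)

for side in ("top", "right"):                  # drop borders that carry no data
    ax.spines[side].set_visible(False)
ax.set_facecolor("white")                      # no decorative background
fig.patch.set_facecolor("white")
ax.set_title("Preventive maintenance backlog (items)", loc="left")
ax.set_xlabel("Week")

fig.savefig("backlog_trend.png", dpi=150, bbox_inches="tight")
```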

Usable displays incorporate a standardized library of graphs, icons, text and so on. Standardization greatly assists the user in rapidly assimilating information. It also aids in achieving the simplicity described above. If graph formats differ even slightly from one to another, the viewer is forced to focus on deciphering the graphical display instead of on the meaning of the data itself. Although it is tempting to be creative given the many features embedded in most performance dashboard software programs, standardization is absolutely critical in developing a usable display. Select a small number of graph types, standardize their format and stick with them. For example, when displaying a single key measure as opposed to multiple measures, bullet, thermometer and speedometer graphs are commonly used. In this case, pick one (many prefer the bullet graph), standardize it (that is, make all bullet graphs look essentially the same), and maintain the same consistent format throughout the entire dashboard. The viewer will greatly appreciate this standardization and quickly be able to gather information from the same graphical display with only a passing glance. Additionally, users will immediately be able to detect subtle differences in the data that may otherwise be masked if differing formats of the same graph are constantly interchanged.
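
A hypothetical sketch of that standardization: one small function draws every bullet graph in exactly the same format and is simply reused for each single-value measure on the dashboard. The measure names, values, targets and the use of matplotlib are all assumptions for illustration.

```python
# Hypothetical standardized bullet graph: one function, one fixed format, reused for
# every single-value measure. Values, targets and qualitative bands are made up.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def bullet_graph(ax, label, value, target, bands=(0.6, 0.8, 1.0)):
    """Draw a bullet graph in the same dashboard-wide format every time."""
    for upper, shade in zip(reversed(bands), ("0.85", "0.75", "0.65")):
        ax.barh(0, upper, color=shade, height=0.6)      # qualitative background bands
    ax.barh(0, value, color="0.15", height=0.2)          # the measure itself
    ax.axvline(target, color="black", linewidth=2)       # target marker
    ax.set_xlim(0, bands[-1])
    ax.set_yticks([])
    ax.set_title(label, loc="left", fontsize=9)

fig, axes = plt.subplots(3, 1, figsize=(5, 3))
bullet_graph(axes[0], "Staffing level", value=0.92, target=0.95)
bullet_graph(axes[1], "Training currency", value=0.78, target=0.90)
bullet_graph(axes[2], "Equipment availability", value=0.96, target=0.98)
fig.tight_layout()
fig.savefig("bullet_graphs.png", dpi=150)
```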

Usable displays facilitate the viewing and understanding of patterns of measures. A single measure rarely captures a current or potential performance outcome by itself. Rather, a family of measures is needed. As such, usable displays depict a family of measures in a manner that aids the viewer in deciphering and understanding changing patterns of performance. Grouping a family of associated measures on the same screen is extremely important in that it better facilitates the assessment of emergent patterns. For example, backlogs for needed operator training, preventive maintenance on critical safety equipment, and new procedure updates may be increasing. Such unwanted increases represent an emerging pattern that, in turn, may potentially affect overall process safety performance. Ideally, such patterns should be displayed and highlighted in a holistic manner, drawing the user’s attention to an emerging and potentially dangerous condition.
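
A hypothetical sketch of flagging such an emergent pattern across a family of measures: each backlog is checked for an upward trend, and attention is drawn only when several are rising together. The data, trend window and two-measure threshold are assumptions.

```python
# Hypothetical pattern check across a family of related measures: a single rising
# backlog may not matter much, but several rising together may signal an emerging
# process safety concern. All data and thresholds are illustrative assumptions.
backlogs = {
    "operator training":      [10, 11, 13, 15],
    "preventive maintenance": [22, 22, 25, 27],
    "procedure updates":      [5, 5, 4, 6],
}

def is_rising(series, window=3):
    """True if the last `window` observations are non-decreasing and end higher."""
    recent = series[-window:]
    return all(b >= a for a, b in zip(recent, recent[1:])) and recent[-1] > recent[0]

rising = [name for name, series in backlogs.items() if is_rising(series)]
if len(rising) >= 2:
    print("Emerging pattern - rising backlogs:", ", ".join(rising))
```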

Usable displays highlight interrelationships between and among measures. Somewhat similar to the above criterion, usable displays quickly draw attention to possible interrelationships between and among various performance measures via synchronous highlighting or other linking mechanisms. In some instances, measures may be multiplicative, and such relationships, where important, should be highlighted. For example, staffing levels and staff qualification levels are highly interrelated. A staffing level of 90 percent and a staff qualification level of 90 percent mean that only 81 percent of on-hand personnel are fully qualified (90 percent multiplied by 90 percent equals 81 percent). Where important, usable displays should draw attention to such relationships, helping the user understand that in this case two “90s” actually equal an “81.”
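
The arithmetic of that staffing example, as a tiny sketch:

```python
# The multiplicative relationship from the staffing example: two measures at
# "90 percent" combine to an effective 81 percent of qualified, on-hand personnel.
staffing_level = 0.90        # share of positions filled
qualification_level = 0.90   # share of on-hand staff fully qualified

effective_qualified_staffing = staffing_level * qualification_level
print(f"{effective_qualified_staffing:.0%}")   # 81%
```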

Usable displays facilitate "user" intelligence. Much has been written about the relationship between business intelligence and performance dashboards, and about the implied suggestion that dashboards actually create intelligence. It is argued here, however, that such systems really don't generate intelligence per se. Instead, a well-designed and usable dashboard facilitates user understanding, thus aiding the user in translating the displayed data into actionable knowledge. Software developers should leave the intelligence part to the viewer and instead focus their development efforts on embedding good display techniques that facilitate "intelligent viewing."

As described, useful measures must be displayed in a usable manner. Usable displays communicate performance data clearly, rapidly and compellingly, when and where needed. They achieve elegance through simplicity by eliminating unnecessary decoration and clutter and by standardizing displayed graphics as much as possible. Usable displays also present families of measures, allowing users to better identify emergent patterns and associated interrelationships. In short, usable displays allow users to view performance data intelligently, enabling them to visually mine and rapidly assimilate the truly important.

Summary

Performance measurement represents a critical enabler of sound business management practice. Admittedly, the development and continued maintenance of an effective performance measurement system is not an easy task and requires sustained effort and commitment. Fortunately, the potential value of such efforts far outweighs the associated costs.

Two keys to successfully developing an effective performance measurement system are:

  • Identifying, developing and collecting useful measures – the critical few
  • Then displaying them in a usable manner, allowing performance-based data to be communicated clearly, rapidly, compellingly and ultimately to be translated into actionable, performance-based knowledge.

Always remember that performance measurement is only one element – albeit a very important one – of effective management. As Albert Einstein is quoted as saying: “Not everything that can be counted counts, and not everything that counts can be counted.” The goal of effective performance measurement, therefore, is to count only what counts and to display what counts in a manner that counts.

Jerry L. Harbour combines more than 35 years of domestic and international work experience in varied, technologically complex, highly hazardous operational settings, including offshore oil exploration and production; nuclear weapon dismantlement and maintenance; hazardous materials processing, disposition, transport and long-term management; underground mining; unmanned vehicle (air and ground) system development; and security force training. Harbour has written four books: The Basics of Performance Measurement (now in its second edition), The Performance Paradox — Understanding the Real Drivers that Critically Affect Outcomes, Cycle Time Reduction — Designing and Streamlining Work for High Performance, and The Process Reengineering Workbook. He holds a Ph.D. in applied behavioral studies from Oklahoma State University and a B.A. and M.S. in geology. He is a senior consultant with Vector Resources Inc.
