Monday, June 20, 2011

The 10 Biggest Mistakes People Make Managing Organisational Performance


Mistake #1: Rely just on financial statements

Profit and loss, revenue and expenses: these are measures of important things to a business. But they are too little information, and it arrives too late. Too little, in the sense that other results matter too, such as customer satisfaction, customer loyalty and customer advocacy. Too late, in the sense that by the time you see bad results, the damage is already done. Wouldn't it be better to know that profit was likely to fall before it actually did fall, and in time to prevent it from falling?

Mistake #2: Look only at this month, last month, year to date
Most financial performance reports summarise your financial results in four values: 1) actual this month; 2) actual last month; 3) % variance between them; and 4) year to date. Even if you are measuring and monitoring non-financial results, you may still be using this format. It encourages you to react to % variances (differences between this month and last month) that seem to show performance has declined, typically any variation greater than an arbitrarily set threshold of 5 or 10 percent. Do you honestly expect the % variance to always show improvement? And if it doesn't, does that really mean things have gotten bad and you have to fix them? What about the natural and unavoidable variation that affects everything, the fact that no two things are ever exactly alike? Relying on % variances runs a great risk that you will react to problems that aren't really there, or fail to react to problems that are there but that you didn't see. Wouldn't you rather have your reports reliably tell you when there really was a problem that needed your attention, instead of wasting your time and effort chasing every single variation?
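To make the contrast concrete, here is a minimal sketch in Python. The monthly figures and the 5% trigger are invented for illustration; the only non-arbitrary number is 2.66, the standard XmR-chart constant used to compute natural process limits from the average month-to-month moving range.

```python
# A minimal sketch: routine variation vs. a genuine signal.
# The monthly figures and the 5% trigger are invented for illustration;
# 2.66 is the standard XmR-chart constant for natural process limits.

monthly_profit = [102, 98, 105, 97, 101, 94, 103, 99, 100, 96, 104, 70]

mean = sum(monthly_profit) / len(monthly_profit)
moving_ranges = [abs(b - a) for a, b in zip(monthly_profit, monthly_profit[1:])]
avg_moving_range = sum(moving_ranges) / len(moving_ranges)
upper_limit = mean + 2.66 * avg_moving_range
lower_limit = mean - 2.66 * avg_moving_range

for i, value in enumerate(monthly_profit):
    notes = []
    if i > 0:
        pct = (value - monthly_profit[i - 1]) / monthly_profit[i - 1] * 100
        if abs(pct) > 5:
            notes.append(f"{pct:+.1f}% vs last month (the naive rule reacts)")
    if value < lower_limit or value > upper_limit:
        notes.append("outside natural limits (a signal worth investigating)")
    print(f"month {i + 1:2d}: {value:5.1f}  {'; '.join(notes)}")
```

With these made-up numbers, the 5% rule would trigger a reaction in six of the twelve months, while the natural process limits flag only the final month, where something genuinely unusual has happened.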

Mistake #3: Set goals without ways to measure and monitor them
Business planning is a well-established process in most organisations, which means they generally have a set of goals or objectives (sometimes cascaded down through the different management levels of the organisation). What is interesting, though, is that the majority of these goals or objectives are not measured well. Where measures have been nominated for them, they are usually something like this: "Implement a customer relationship management system into the organisation by June 2006" (for a goal of improving customer loyalty). This is not a measure at all; it is an activity. Measures are ongoing feedback of the degree to which something is happening. If this goal were measured well, the measure would be evidence of how much customer loyalty the organisation had, such as tracking repeat business from customers. How will you know if your goals, the changes you want to make in your organisation, are really happening, and that you are not wasting your valuable effort and money, without real feedback?
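As a rough sketch of the difference, the snippet below (in Python, with invented customer IDs and purchase records) computes a repeat-business rate each month: ongoing evidence of customer loyalty, rather than a one-off activity like implementing a CRM system.

```python
# A minimal sketch: a measure is ongoing evidence, not a one-off activity.
# The purchase records and customer IDs are invented for illustration.

purchases = [
    ("C001", "2006-01"), ("C002", "2006-01"), ("C001", "2006-02"),
    ("C003", "2006-02"), ("C001", "2006-03"), ("C002", "2006-03"),
    ("C004", "2006-03"),
]

def repeat_business_rate(records, month):
    """Share of this month's customers who have bought in any earlier month."""
    earlier = {cust for cust, m in records if m < month}
    current = {cust for cust, m in records if m == month}
    if not current:
        return None
    return len(current & earlier) / len(current)

for month in ("2006-01", "2006-02", "2006-03"):
    rate = repeat_business_rate(purchases, month)
    print(month, "repeat-business rate:", "n/a" if rate is None else f"{rate:.0%}")
```

A real measure would of course be defined against your own sales data, but the point is the same: it is tracked continuously, so it can tell you whether loyalty is actually improving.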

Mistake #4: Use brainstorming (or other poor methods) to select measures

Brainstorming, looking at available data, and adopting other organisations' measures are among the main reasons we end up with measures that aren't useful and usable. Brainstorming produces too much information and therefore too many measures; it rarely encourages a strong enough focus on the specific goal to be measured; everyone's understanding of the goal is not sufficiently tested; and the bigger picture is not taken into account (such as unintended consequences and relationships to other objectives or goals). Looking at available data means that important and valuable new data will never be identified and collected, and organisational improvement is constrained by the knowledge you already have. Adopting other organisations' measures, or industry-accepted measures, is like adopting their goals, and ignoring the unique strategic direction that sets your organisation apart from the pack. Wouldn't you rather know that the measures you select are the most useful and feasible evidence of your organisation's goals?

Mistake #5: Rely on scorecard technology as the performance measure fix
You can (and maybe you did) spend millions of dollars on technology to solve your performance measurement problems. The business intelligence, data mining and 'scorecarding' software available today promises many things: "comprehensive business intelligence reporting", "award-winning data visualization", "balanced scorecards" and "an information flow that transcends organizational silos, diverse computing platforms and niche tools ... and delivers access to the insights that drive shareholder value". Wow! But there's a problem lurking in the shadows of these promises. You still need to be able to clearly articulate what you want to know, what you want to measure and what kinds of signals you need those measures to flag for you. The software is amazing at automating the reporting of the measures to you, but it just won't do the thinking about what it should report to you.

Mistake #6: Use tables, instead of graphs, to report performance
Tables are a very common way to present performance measures, no doubt in part a legacy of the original financial reports that management accountants provided (and still provide today) to decision makers. They are familiar, but they are ineffective. Tables encourage you to focus on the points of data, which is the same as not seeing the forest for the trees. As a manager, you aren't just managing performance today or this month. You are managing performance over the medium to long term. And the power to do that well comes from focusing on the patterns in your data, not the individual points: patterns like gradual trends over time, sudden shifts, and events that stand apart from the normal variation in performance. Graphs are the best way to display those patterns.
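For example, here is a minimal sketch (assuming Python with matplotlib installed, and invented monthly figures) that turns the same twelve numbers you would otherwise scan in a table into a simple run chart, where a gradual decline is hard to miss:

```python
# A minimal sketch: the same numbers as a run chart rather than a table.
# Assumes matplotlib is installed; the monthly figures are invented.
import statistics
import matplotlib.pyplot as plt

months = list(range(1, 13))
on_time_delivery = [91, 92, 90, 93, 91, 92, 88, 87, 86, 85, 84, 83]  # hypothetical %

plt.plot(months, on_time_delivery, marker="o")
plt.axhline(statistics.mean(on_time_delivery), linestyle="--", label="average")
plt.title("On-time delivery, last 12 months")
plt.xlabel("Month")
plt.ylabel("% delivered on time")
plt.legend()
plt.show()  # the decline from month 7 onwards is obvious here, easy to miss in a table
```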


Mistake #7: Fail to identify how performance measures relate to one another
A group of decision makers sit around the meeting room table and one by one they go over the performance measure results. They look at the result, decide if it is good or bad, agree on an action to take, then move on to the next measure. They might as well be having a series of independent discussions, one for each measure. Performance measures might track different parts of the organisation, but because organisations are systems made up of lots of different but very inter-related parts, the measures must be inter-related too. One measure cannot be improved without affecting or changing another area of the organisation. Without knowing how measures relate to one another and using this knowledge to interpret measure results, decision makers will fail to find the real, fundamental causes of performance results.
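As a small illustration of what using that knowledge might look like, the sketch below (invented series; statistics.correlation requires Python 3.10 or later) checks how two measures move together before either one is interpreted on its own:

```python
# A minimal sketch: checking how two measures move together
# before interpreting either one in isolation. The series are invented.
import statistics

overtime_hours = [120, 135, 150, 160, 175, 190, 210, 230]
defect_rate    = [2.1, 2.3, 2.2, 2.8, 3.1, 3.4, 3.9, 4.2]  # defects per 100 units

r = statistics.correlation(overtime_hours, defect_rate)
print(f"Correlation between overtime and defect rate: {r:+.2f}")
# A strong positive correlation is a prompt to look for a common cause,
# not proof of one, but it stops the two measures being discussed separately.
```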


Mistake #8: Exclude staff from performance analysis and improvement
One of the main reasons that staff get cynical about collecting performance data is that they never see any value come from that data. More often than not, managers will sit in their meeting rooms and come up with the measures they want, then delegate the job of bringing those measures to life to staff: staff who weren't involved in the discussion to design those measures, who weren't able to get a deeper understanding of why those measures matter, what they really mean and how they will be used, and who weren't able to contribute their knowledge about the best types of data to use or the availability and integrity of the data required. And usually the same staff producing the measures never get to see how the managers use those measures and what decisions come from them. When people aren't part of the design process for measures, they find it near impossible to feel a sense of ownership of the process to bring those measures to life. When people don't get feedback about how the measures are used, they can do little more than believe they wasted their time and energy.


Mistake #9: Collect too much useless data, and not enough relevant data
Data collection is certainly a cost. If it isn't consuming the time of people employed to get the work done, then it is some kind of technological system consuming money. And data is also an asset, part of the structural foundation of organisational knowledge. But too many organisations haven't made the link between the knowledge they need to have and the data they actually collect. They collect data because it has always been collected, or because other organisations collect the same data, or because it is easy to collect, or because someone once needed it for a one-off analysis and so they might as well keep collecting it in case it is needed again. They are overloaded with data, they don't have the data they really need, and they are exhausted and cannot cope with the idea of collecting any more. Performance measures that are well designed are an essential part of streamlining the scope of data your organisation collects, by linking the knowledge your organisation needs with the data it ought to be collecting.

Mistake #10: Use performance measures to reward and punish people
One practice that a lot of organisations still follow is using performance measures as the basis for rewarding and punishing people. By refusing to tolerate mistakes and by focusing on failure, they fail to support a culture of learning. It is very rare that a single person can have complete control over any single area of performance. In organisations of more than 5 or 6 people, the results are undeniably a team's product, not an individual's product. When people are judged by performance measures, they will do what they can to reduce the risk of embarrassment, of missing a promotion, of being disciplined or even of being given the sack. They will modify or distort the data, they will report the measures in a way that shows a more favourable result (yes, you can lie with statistics), they will not learn about what really drives organisational performance, and they will not know how to best invest the organisation's resources to get the best improvements in performance.

