Measurement is an inescapable element of many aspects of our lives. To assess failure or success, we must first measure. Unfortunately, the resulting yardstick can often become either something to beat something else with, or something to grasp the wrong end of. So how can we get the measure of measurement?
The first step is to understand what can go wrong. One contentious example in the UK is school performance league tables, which some argue have had two main outcomes: schools focused on grooming pupils for exam results rather than educating them per se (hard to prove, but plausible enough), and a property bonanza – at least in pre-crunch times – in postcodes surrounding schools where demand for places far outstrips supply (undeniable).
Some authorities are now allocating school places by lottery as a result, which seems to defeat the point of publishing tables – even where care is taken to factor in the school’s ability to transform the ‘raw’ performance of those entering it – in the first place. What started as ‘cause’ (a thing to be argued and fought for) has become ‘cause and effect’, and the effect looks uncomfortably unintentional. As Peter Wilby once argued in a long but informative New Statesman article:
“I agree information of this sort should not be kept secret. But it need not be published centrally. Each school should hold standardised information, more detailed than is now published, and give it to parents of prospective pupils on request. Ministers object that journalists will collect all the results and publish less reliable league tables. So be it. All school league tables mislead, and official publication gives them an undeserved authority.”
At least schoolchildren are still being taught. Some attempts to measure and assess come far closer to killing the proverbial golden goose. Form 696 was introduced by the Metropolitan Police to assess the risk of disorder arising from public events. Although one might argue that a local constabulary should already know where its likely ‘trouble hotspots’ are and might be able to keep an eye on their events listings, event organisers in 21 London boroughs now have to complete an extensive form at least 14 days before an event, giving detailed information about each performer and the likely ethnic make-up of the audience. Unsurprisingly, this has met with indignant opposition and trenchant comment from the Musicians’ Union, the Music Producers Guild (UK) and the British Academy of Composers and Songwriters, whose website commented:
“… the imposition of Form 696 on live music is likely to discourage the existence and growth of live music. Music has long been a positive form of free expression, for people from all walks of life to create and enjoy.”
Perhaps it’s the influence of technology: IT makes the collection and collation of immense amounts of data plausible. The vast majority of the information collected is probably never looked at, particularly by human eyes, but – where we feel it’s inappropriate – it can still arouse suspicion. (Jazzwise, a UK-based music magazine, drew a comparison with “the 160km of paper files accumulated by the Stasi”: not a comparison most journalists would make without recognising the impact of the remark.)
Ever greater quantities of data don’t lead with any certainty to greater information – let alone to greater knowledge, wisdom or insight (which are in any case attributes of human beings, with all their fallibilities, rather than of databases). There are concerns in the knowledge management community that IT has distorted the purpose and direction of their discipline: this may also be initially unintentional – where IT systems can commoditise knowledge and make it readily available to everyone, organisations no longer need to pay higher salaries to those with the “knowledge” in their heads – but this is a separate debate.
Yet at least with school performance league tables and Form 696, some people are vocally concerned that the wrong information is being gathered, or gathered in the wrong way, or interpreted unhelpfully. In training and development, evaluation practices would benefit from a greater degree – and wider incidence – of outrage, polite or otherwise. Unless, of course, we find the development of our professional capacities, the performance of our businesses and institutions (people in the public sector go to work too, remember) – and the future of both – less important than our children’s schooling and the future of live music in a country with a proud musical and cultural history?
A US benchmarking survey by Bersin & Associates showed that 72% of participating organisations rated the evaluation of the business impact of training and development (Kirkpatrick’s Level 4) as “extremely valuable”; yet only 10% routinely measured their training activities at this level. On the other hand, 81% routinely measured at Level 1 (Satisfaction – the ‘happy sheet’ we’re all familiar with from any event) – something that only 41% thought was important.
Let’s unpack that: 62% (the 72% who valued Level 4 minus the 10% who measured it) rarely if ever measured what they said mattered most, while 40% (81% minus 41%) routinely measured something they didn’t think mattered. I don’t know about you, but two quotations sprang to my mind:
“True genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.”
“Good critical writing is measured by the perception and evaluation of the subject; bad critical writing by the necessity of maintaining the professional standing of the critic.”
Surely if HR departments, L&D managers and their suppliers persist in measuring at Level 1 – easy information to collect and “analyse” (people were happy or they weren’t; they liked the trainer/materials/venue/coffee or they didn’t) – they are justifying themselves rather than really serving the learner or the learner’s organisation. If there is a real business improvement, not only will it not have occurred at this stage – real change doesn’t happen just after the final tea-break: it takes months or years – but a “happy sheet” won’t capture it anyway.
If organisations can become serious and committed about evaluating beyond Level 1 – with the implications this will have for designing and delivering training geared to effective transfer and application, as these critical elements will then be part of what is measured – then we might not arrive at “true genius”, but we’ll certainly be closer to Churchill than to Chandler.
Measurement does matter: unless we try to monitor our progress, how can we be sure we are even making any? As long as we keep a watchful eye on what – and how and why – we measure, there is so much that can be improved in the world of learning and development that a yardstick to beat ourselves with might not be such a bad idea.