Why “intangibles” vanish the moment we learn to ask better questions
“When you can measure what you are speaking about, and express it in numbers, you know something about it.” — Lord Kelvin
Most of us nod when we hear Kelvin’s quip, yet the moment a colleague proposes quantifying employee morale, brand strength or innovation, we shrug and mutter: “Some things just can’t be measured.” Chapter 3 of Douglas Hubbard’s How to Measure Anything dismantles that reflex. Below I unpack its big ideas, sprinkle in real-world examples, and show how a measurement mindset turns fuzzy debates into clear bets—without drowning anyone in spreadsheets or sucking the soul out of people-centred work.
1. Measurement ≠ Precision—It’s Merely Uncertainty Reduction
The single biggest misconception Hubbard tackles is conceptual. We unconsciously treat measurement as a quest for certainty: if the number isn’t exact and error-free, it’s useless. Wrong. In information-theory terms (Claude Shannon, 1948), measurement is any observation that shrinks the cloud of what we don’t know. If I tell a project sponsor that a new feature will lift revenue somewhere between 0% and 30%, I’ve conveyed almost nothing. Narrow that to 8%–12% and she can place a far better bet—even though there’s still error.
This definition is wonderfully liberating. It turns measurement from a perfectionist hurdle into a pragmatic habit. You don’t need an MRI machine to gauge customer excitement about a prototype; you just need an observation that moves you from “no clue” to “slightly less clueless.”
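Shannon’s framing can be made concrete. If your belief about a quantity is a uniform range, the information gained by narrowing it is simply the log of the shrink factor. A minimal sketch in Python (the revenue-lift figures are the hypothetical ones from the paragraph above):

```python
import math

def bits_gained(width_before, width_after):
    """Bits of information gained when a uniform uncertainty
    interval narrows (log base 2 of the shrink factor)."""
    return math.log2(width_before / width_after)

# Narrowing a revenue-lift estimate from 0%-30% to 8%-12%:
gain = bits_gained(30, 4)
print(f"{gain:.2f} bits of uncertainty removed")  # → 2.91 bits of uncertainty removed
```

Going from a 30-point range to a 4-point range buys nearly three bits, and each bit is one full halving of what you didn’t know.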
2. The .COM Model—Concept, Object, Method
Why do otherwise smart people label things immeasurable? Three blind spots:
- Concept: We misuse the word measurement.
- Object: We never pin down what the slippery term (e.g. strategic alignment) really means.
- Method: We’re unaware of the simple statistical devices that already exist.
Remember “.COM”: nail the concept, clarify the object, then pick a method. Do it in that order and the “immeasurable” evaporates.
Try this at your next meeting. When someone says “quality” or “engagement,” ask, “What would we actually see improve if quality went up next quarter?” Keep drilling (Hubbard calls it a clarification chain) until you arrive at observable consequences—fewer defects, faster approvals, repeat purchases. By then the room will be awash in things that can, in fact, be counted.
3. Tiny Samples, Huge Insight
Managers routinely overestimate how much data is required to learn something useful. Two heuristics prove otherwise.
The Rule of Five
Take any random sample of five. There’s a 93.75% probability that the true median of the whole population lies between the smallest and largest sample values. The arithmetic is simple: the bracket fails only if all five draws land on the same side of the median, which happens with probability 2 × (1/2)⁵ = 6.25%. That’s not a typo: five observations can bracket the middle of thousands. Suddenly, quick-and-dirty pulse surveys don’t feel so dirty.
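You don’t have to take the 93.75% on faith; it falls out of a short simulation. This sketch uses a made-up population of skewed task durations (the numbers are illustrative, not from Hubbard):

```python
import random

def rule_of_five_hit_rate(population, trials=100_000, seed=1):
    """Fraction of trials in which the population median lands between
    the smallest and largest of a random sample of five."""
    rng = random.Random(seed)
    true_median = sorted(population)[len(population) // 2]
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, 5)
        if min(sample) <= true_median <= max(sample):
            hits += 1
    return hits / trials

# Hypothetical population: 10,000 skewed "task durations" in hours.
pop_rng = random.Random(42)
population = [pop_rng.lognormvariate(1.0, 0.5) for _ in range(10_000)]
rate = rule_of_five_hit_rate(population)
print(f"empirical rate: {rate:.4f} (theory: 1 - 2 * 0.5**5 = 0.9375)")
```

The empirical rate hovers right around 0.9375 regardless of how skewed the population is, which is the quietly remarkable part: the rule needs no assumption about the distribution’s shape.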
The Urn of Mystery
Imagine an urn with an unknown mix of red and green marbles (any proportion is equally likely). Draw one marble at random; the colour you pull has a 75% chance of being the majority colour in the urn. One data point, three-quarters confidence. Translated to business: the first customer you interview or the first log file you scan is not gospel, but it’s dramatically better than blind guessing.
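The same goes for the urn: assuming every red/green mix is equally likely, a quick Monte Carlo confirms the 75% figure:

```python
import random

def urn_draw_accuracy(trials=100_000, seed=2):
    """Each trial: the red fraction p is uniform on (0, 1); draw one
    marble; success if its colour matches the urn's majority colour."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        p = rng.random()              # unknown proportion of red
        drew_red = rng.random() < p
        red_majority = p > 0.5
        if drew_red == red_majority:
            successes += 1
    return successes / trials

acc = urn_draw_accuracy()
print(f"single-draw majority accuracy: {acc:.3f} (theory: 0.75)")
```

The 75% depends on that uniform prior over mixes; the lopsided urns, where a single draw is most informative, do the heavy lifting.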
These rules slice neatly through the “we’d need a huge study” objection. Small N can still swing big decisions—as long as you acknowledge the margin of error.
4. Why Objections Crumble
“It’s too expensive.”
Maybe—but have you priced the risk of flying blind? Hubbard’s Applied Information Economics (AIE) shows that, in most models, only a handful of variables dominate the outcome. Measuring even one of them often pays for itself many times over.
“You can prove anything with statistics.”
No, you can lie with statistics. You can also lie with words. The antidote is competence, not cynicism. When someone trots out the cliché, offer this wager: Prove that ‘you can prove anything with statistics’ using statistics and I’ll buy lunch for a year. So far, no takers.
“It’s unethical to put a value on human life / creativity / reputation.”
Public policy does this every day. The Environmental Protection Agency uses the value of a statistical life to prioritise regulations. Hospitals perform cost-effectiveness analysis on treatments. Refusing to measure doesn’t make trade-offs disappear; it just buries them where biases and politics decide instead of evidence.
5. You Already Have More Data Than You Think
Most firms sit on oceans of under-used logs, surveys, transcripts and sensor feeds. Chicago Virtual Charter School believed it lacked data to evaluate teaching quality—until it realised every online lesson was already recorded. By sampling tiny, random slices of those videos, administrators could score engagement and differentiation without watching hundreds of hours of footage.
Action step: Make a “data scavenger hunt” part of project kick-off. Ask, “What’s already being captured that we’ve never analysed?” You’ll be stunned how often the answers are hiding in plain text.
6. When Data Truly Don’t Exist—Create Them Cheaply
Eratosthenes measured Earth’s circumference with sticks, shadows and walking pace. Nine-year-old Emily Rosa demolished “therapeutic touch” by flipping coins and masking therapists’ vision. Neither had a grant.
Modern equivalents:
- Wizard-of-Oz tests: Fake the backend manually to see if users will even click the button.
- A/B smoke tests on landing pages: Gauge demand for a feature before writing code.
- Expert-elicitation workshops: Calibrate subject-matter experts to produce probabilistic estimates that beat gut feel.
The bar isn’t perfection; it’s “better than our current guess.”
7. The Ethical High Ground of Quantification
Meehl’s haunting analogy: releasing a suicidal patient without assessing risk is like handing him a revolver with one live round while refusing to check which chamber comes up first. Pretending you can’t put numbers on uncertainty doesn’t make the danger go away; it makes you responsible for ignoring it.
Whether you’re allocating cybersecurity budget or triaging climate projects, measurements shine a light on the real stakes. That’s not dehumanising; it’s humane stewardship.
8. Practical Playbook
- Name the decision. What bet are we trying to place?
- List uncertainties. What variables could swing that bet?
- Estimate current ranges (even wild guesses).
- Compute information value. Which range, if tightened, changes the decision the most?
- Pick the cheapest observation that would narrow that range—often small samples or simple experiments.
- Update the model; repeat until additional info is worth less than its cost.
Stop when better data no longer changes the choice. Anything further is academic.
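Step 4 of the playbook, computing information value, is the one that stops most people, yet a toy version fits in a few lines. This is a sketch of the expected value of perfect information for a single go/no-go bet, not Hubbard’s full AIE procedure; all the business numbers are hypothetical:

```python
import random

def evpi(payoff, scenarios):
    """Expected value of perfect information for a go/no-go bet:
    the gain from deciding per scenario rather than making one
    decision based on the average payoff."""
    n = len(scenarios)
    with_info = sum(max(payoff(s), 0.0) for s in scenarios) / n
    without_info = max(sum(payoff(s) for s in scenarios) / n, 0.0)
    return with_info - without_info

# Hypothetical bet: a feature costs $100k to build; the uncertain
# variable is revenue lift, somewhere between 0% and 30% of a $1M base.
rng = random.Random(7)
lifts = [rng.uniform(0.0, 0.30) for _ in range(100_000)]

def payoff(lift):
    return lift * 1_000_000 - 100_000

value = evpi(payoff, lifts)
print(f"perfect information on the lift is worth about ${value:,.0f}")
```

In this toy setup perfect information comes out around $16–17k, so any observation that narrows the lift range and costs less than that is a bargain; anything pricier fails the stopping rule above.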
9. The Mindset Shift
The point isn’t that you should start drowning operations in KPIs. In fact, Hubbard’s research shows that most metrics organisations track have essentially zero information value; they simply aren’t tied to pivotal decisions. The mindset shift is recognising that the few uncertainties that do matter are almost always measurable—and usually with absurdly modest effort.
Next time someone in the room sighs, “Well, that’s an intangible,” smile and ask:
“Sure, but if it suddenly doubled tomorrow, what exactly would we notice first?”
Congratulations—you’ve just opened the door to measurement.
Closing Thought
We live in a golden age of cheap sensors, public datasets and cloud math. The only thing rarer than data is the managerial courage to abandon the comfort of anecdotes and start placing informed bets. As Hubbard makes clear, the gap between ignorance and insight is often a single well-designed observation.
So go ahead—measure something today that you swore last week was impossible. Your future self (and your stakeholders) will thank you.