The Measurement Problem

Why knowledge has resisted being governed until now.

If knowledge is so clearly valuable, why has no framework for measuring and governing it yet emerged? Not for lack of interest. Knowledge presents genuinely hard problems that no existing system has been designed to solve. The cost of that gap is well documented, even if the solutions remain absent.

Observed impact
20%
of the working week spent by knowledge workers searching for internal information and tracking down colleagues who can help.
McKinsey Global Institute, The Social Economy, 2012
48%
of executives agree that employees take critical knowledge about procedures and strategy with them when they leave the organisation.
KMWorld Survey Report, 2024
75%
of leaders say they do not trust their own data enough to use it confidently for decision-making.
Dataversity / Gartner, 2025
Value decay under no intervention
Visualisation: knowledge value decay over time without governance intervention - an ungoverned curve declines from high to low, while a governed curve is restored by review, renew, and validate interventions.
Evidence from practice
Founding Statement

Most enterprise value now sits in assets that are invisible on the balance sheet and vulnerable to silent decay.

Tommy Lowe - Knowledge Intelligence
84-91%

Estimated share of S&P 500 market value represented by intangible assets, where knowledge is a core value driver.

Ocean Tomo analyses (2015-2020)
Why volatility matters

Knowledge value does not fail all at once. It leaks through attrition, drifts through outdated assumptions, and degrades when confidence is not governed. The governance problem is not storage. It is continuous validity.

Attrition Risk

Critical know-how leaves with individuals unless transfer is instrumented and owned.

Context Drift

Previously valid knowledge becomes wrong as market, regulation, and strategy change.

Confidence Blindness

Ungoverned sources receive equal weight, making low-confidence claims look decision-safe.
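The decay-and-intervention dynamic above can be sketched in a few lines. This is an illustrative model only: the half-life, review interval, and recovery factor are hypothetical parameters chosen to show the shape of the two curves, not figures from any study.

```python
# Illustrative sketch: knowledge value decay with and without governance.
# All parameters (half_life, review_every, recovery) are hypothetical.

import math

def ungoverned_value(t, half_life=24.0):
    """Value of an unvalidated knowledge asset after t months."""
    return math.exp(-math.log(2) * t / half_life)

def governed_value(t, half_life=24.0, review_every=6.0, recovery=0.9):
    """Value when the asset is re-validated every `review_every` months,
    each review restoring the asset to `recovery` of full value."""
    months_since_review = t % review_every
    base = recovery if t >= review_every else 1.0
    return base * math.exp(-math.log(2) * months_since_review / half_life)

for t in (0, 6, 12, 24):
    print(f"month {t:2d}: ungoverned {ungoverned_value(t):.2f}, "
          f"governed {governed_value(t):.2f}")
```

By month 24 the ungoverned asset has lost half its value, while the periodically validated asset still sits near full value - the gap the "continuous validity" framing points at.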

Why current frameworks break down
01

Knowledge has no agreed unit of measure

A decade of domain expertise, a library of validated research, and a set of informal but critical relationships - how do you compare these? How do you decide which to invest in, which to protect, which is at risk? Without a shared way of measuring knowledge, these decisions remain matters of instinct rather than evidence. Governance without measurement is just guesswork.

02

Tacit knowledge is invisible to existing systems

The enterprise content stack can only work with what has been written down and captured. The expertise in people's heads, the institutional memory embedded in practice, the relationships that determine how decisions really get made: none of these show up in any current platform. They are the largest and most valuable part of the knowledge estate, and they are entirely unmeasured. APQC's 2024 research identified tacit knowledge capture as the most urgent priority for knowledge management teams precisely because the gap is now widely felt, even if it is not yet closed (APQC, 2024).

03

Knowledge value is context-dependent

A piece of knowledge can be highly valuable in one context and irrelevant in another. Its value is also dynamic. It depreciates as conditions change, and appreciates as it becomes more applicable. Static asset valuation frameworks are not designed for assets whose value shifts continuously in response to context.

04

Confidence is absent from all current frameworks

Even where knowledge has been captured, no current framework distinguishes between what an organisation knows with high confidence and what it merely believes, assumes, or has recorded without validation. All explicit knowledge is treated as equally credible. A document from five years ago carries the same implicit authority as one validated last week. An unverified assumption carries the same weight as a tested finding. This means none of it is truly governed - and when AI operates on ungoverned knowledge, low-confidence outputs become indistinguishable from high-confidence ones.

Tacit concentration risk topology
Visualisation: tacit concentration risk by knowledge domain

Domains where critical knowledge is held tacitly by few individuals - the invisible risk topology most organisations cannot currently see.

                          Strategy &  Technical  Client         Regulatory  Process
                          Direction   Expertise  Relationships  Knowledge   Memory
High value, High tacit    Critical    Critical   High           High        Critical
High value, Med tacit     High        Med        Critical       Med         High
Med value, Low tacit      Med         Low        Med            Low         Med
Low value, Low tacit      -           Low        -              Low         -

Critical risk - govern immediately · High risk - active monitoring · Moderate risk - scheduled review · Low risk
Research baseline

It has never been more necessary for knowledge managers to demonstrate the value of their organisations' investments in knowledge and learning initiatives. But achieving this is more difficult than it looks. It is difficult to build direct cause-and-effect links between specific knowledge management practices and aspects of organisational performance.

Henley KMF - Thinking Differently About Evaluating Knowledge Management, Issue 28, 2013  ·  25 practitioners, 12 major public and private sector organisations

This was not a 2013 finding alone. Henley's own prior research from 2004-2005 found that even the most rigorous KM measurement frameworks broke down the moment tacit knowledge or organisational change entered the picture. The measurement gap is not waiting to be closed by better tooling. It is a structural condition that requires an architectural response.

Why Now

Three conditions have converged to make this both necessary and possible.

AI can only be as good as the knowledge it runs on. Right now, many organisations have the models but not the knowledge quality or governance needed for reliable output. That is why Knowledge Intelligence is needed now.

01 · Scale

Content estates have reached meaningful scale

Large organisations now hold content estates of sufficient volume and variety that patterns within them are genuinely significant. The signal-to-noise challenge is no longer a niche concern. It is a primary operational and strategic condition, and the estate is large enough that intelligence extracted from it carries real strategic weight.

02 · Quality

AI performance is downstream of knowledge quality

AI now extracts meaning from large estates with far less manual effort. But faster processing does not fix poor sources, missing context, or stale knowledge. 78% of organisations now use AI in at least one business function (McKinsey, 2025), yet many still struggle to scale trusted use because output quality is limited by knowledge quality. The organisations struggling to realise value from their AI investment are, in most cases, struggling because the knowledge foundations beneath their AI are ungoverned.

03 · Confidence

Decision stakes have raised the bar for confidence

In a fast-moving environment, wrong answers are expensive. Leaders need more than speed. They need justified confidence: evidence they can inspect, challenge, and defend before acting. 75% of leaders say they do not trust their own data enough to act on it confidently (Dataversity / Gartner, 2025). The problem is not access to information. It is the absence of any framework for knowing whether to trust it.

The AI Inflection

AI performance is downstream of knowledge quality.

Generative AI did not create the knowledge problem. It exposed it. AI systems inherit the condition of the knowledge estate beneath them. Knowledge Intelligence makes that condition measurable and governable.

What AI is doing
  • Moving from search responses to generated responses.
  • Producing output faster than teams can manually check.
  • Increasing the volume of decision support in daily work.
  • Reducing time to answer, but not automatically improving trust.
What the knowledge estate looks like
  • Fragmented content, inconsistent structure, low signal-to-noise.
  • Taxonomies decay. Context and provenance are often unclear.
  • Tacit expertise is concentrated and invisible to systems.
  • Trust is implicit - and therefore unmanaged at scale.
What happens without KI
  • Inconsistent answers delivered with convincing fluency.
  • Decision risk amplified by the scale AI enables.
  • Low confidence becomes compounding operational drag.
  • AI value stalls exactly where knowledge foundations are weakest.
Knowledge Intelligence adds the missing layer: confidence signals, traceable provenance, lifecycle governance, and clear rules for what knowledge is allowed to influence.
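One way to picture that missing layer is as a gate between knowledge and decisions. The sketch below is a minimal, hypothetical illustration: the field names, the 0.7 confidence threshold, and the one-year freshness window are assumptions for demonstration, not part of any published Knowledge Intelligence specification.

```python
# Hypothetical sketch: each knowledge item carries a confidence score and
# provenance, and a simple governance rule decides whether it may
# influence a decision. All field names and thresholds are illustrative.

from dataclasses import dataclass
from datetime import date

@dataclass
class KnowledgeItem:
    claim: str
    confidence: float     # 0.0-1.0, assigned at last validation
    last_validated: date  # when the claim was last checked
    provenance: str       # where the claim came from

def may_influence(item, today, min_confidence=0.7, max_age_days=365):
    """Governance rule: only recent, high-confidence knowledge
    is allowed to feed decision support."""
    age_days = (today - item.last_validated).days
    return item.confidence >= min_confidence and age_days <= max_age_days

items = [
    KnowledgeItem("Pricing policy v3", 0.9, date(2025, 1, 10), "validated workshop"),
    KnowledgeItem("Legacy onboarding notes", 0.4, date(2019, 3, 2), "unknown author"),
]
usable = [i.claim for i in items if may_influence(i, date(2025, 6, 1))]
print(usable)  # only the validated, recent item passes the gate
```

The point of the sketch is the asymmetry it creates: without such a rule, both items above would carry equal weight, which is exactly the confidence blindness described earlier.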