Making the case for a new strategic capability - and what it takes to finally see, measure, and
govern everything an organisation knows.
Knowledge is the most valuable yet volatile asset in any organisational portfolio,
driving innovation while demanding continuous adaptation.
Tommy Lowe - Founding Statement
84-91% of market value now sits in intangibles
48% of executives report critical knowledge loss on departure
Opening Statement
Organisations have never been able to see everything they know.
Founding definition
Knowledge Intelligence
The enterprise capability to identify, measure, and govern everything an
organisation knows - across both explicit and tacit forms - so that knowledge
becomes decision-relevant, confidence-weighted, and continuously
managed over time rather than merely stored.
Every organisation runs on knowledge. The decisions that shape its direction, the expertise that defines its
capability, the memory that holds its history together - all of it is knowledge. And yet no organisation can
tell you what it knows, how much of it is reliable, where it is concentrated, or how fast it is disappearing.
This is not a technology failure. It is a conceptual one. Knowledge is not one thing - it is
an enormous, varied universe: documents and data, yes, but also the judgement accumulated over decades of
practice, the institutional memory that lives only in people's heads, the relationships that determine how
decisions really get made, the patterns only visible to those who have seen enough to recognise them. No
existing framework accounts for all of this. Most don't try.
I believe this gap is not incidental. It is the central unsolved problem of the knowledge economy - and the
reason organisations continue to lose what they know, act on what they merely assume, and make decisions without
understanding the quality of the knowledge beneath them.
The capability that closes this gap is Knowledge Intelligence: the ability to treat everything
an organisation knows - in all its forms - as a governed, measurable, and decision-relevant asset. It has no
established category, no recognised framework, and no dominant platform. This paper is its founding statement.
What Knowledge Actually Is
Knowledge is not a thing.
It is everything an organisation knows - in every form it takes.
Before any intelligence framework can be built, there has to be clarity about what knowledge actually is. Not
knowledge as a vague concept, but knowledge as a practical reality - the full range of what an organisation knows,
holds, and depends on.
That range is vast. It spans the highly visible and the entirely invisible. And critically, the most visible part
- the documents, reports, and records that fill enterprise systems - is also the smallest part.
Knowledge expressed in words and numbers represents only the tip of the iceberg. The larger part is tacit:
highly personal, hard to formalise, and deeply rooted in action, experience, and individual values.
Ikujiro Nonaka and Hirotaka Takeuchi - The Knowledge-Creating
Company, Oxford University Press, 1995 · Henley
KMF, Identifying Valuable Knowledge, Issue 6, 2007
Knowledge model
Knowledge Iceberg
Explicit assets are visible and governable. Tacit assets are larger, harder to
capture, and carry most concentration risk.
Enterprise systems manage the fraction of knowledge that has been written down. Knowledge Intelligence governs
the universe it sits within.
How Knowledge Intelligence differs from traditional KM
Knowledge Intelligence sits inside the Knowledge Management lineage, but it is not a rebrand or a tool category.
It is a shift in scope, in method, and in outcome.
Dimension | Traditional KM | Knowledge Intelligence
Primary objective | Capture, organise, and enable retrieval and reuse of knowledge. | Measure and govern knowledge quality so it becomes decision-relevant and safe to act on.
Confidence handling | Implicit trust, or manual review on a case-by-case basis. | Explicit confidence weighting with governance thresholds applied systematically.
Change over time | Periodic clean-up. Taxonomy decay is the norm, not the exception. | Continuous monitoring of drift, decay, and concentration. Renewal is systematic, not reactive.
Tacit knowledge | Addressed through cultural programmes, often separate from systems and rarely measured. | Treated as a governable estate: surfaced via risk mapping, elicitation, and network signals.
Decision linkage | Often indirect. Value is difficult to evidence and easy to question. | Direct. KI produces confidence-weighted insight artefacts with traceable, auditable decision inputs.
The Measurement Problem
Why knowledge has resisted being governed until now.
If knowledge is so clearly valuable, why has no framework for measuring and governing it yet emerged? Not for
lack of interest. Knowledge presents genuinely hard problems that no existing system has been designed to solve.
The cost of that gap is well documented, even if the solutions remain absent.
Observed impact
20%
of the working week spent by knowledge workers searching for internal information and
tracking down colleagues who can help.
McKinsey Global Institute, The Social Economy, 2012
48%
of executives agree that employees take critical knowledge about procedures and strategy
with them when they leave the organisation.
KMWorld Survey Report, 2024
75%
of leaders say they do not trust their own data enough to use it confidently for
decision-making.
Dataversity / Gartner, 2025
Value decay under no intervention
Visualisation
Knowledge value decay over time - without governance intervention
Evidence from practice
Founding Statement
Most enterprise value now sits in assets that are invisible on the balance sheet and vulnerable to silent
decay.
Tommy Lowe - Knowledge Intelligence
84-91%
Estimated share of S&P 500 market value represented by intangible assets, where
knowledge is a core value driver.
Ocean Tomo analyses (2015-2020)
48%
of executives agree critical procedural and strategic knowledge walks out the door when
people leave.
KMWorld, 2024
75%
of leaders report they do not trust their data enough for confident decision-making at
the point of action.
Dataversity / Gartner, 2025
Why volatility matters
Knowledge value does not fail all at once. It
leaks through attrition, drifts through outdated assumptions, and degrades when confidence is not governed. The
governance problem is not storage. It is continuous validity.
Attrition Risk
Critical know-how leaves with individuals unless transfer is instrumented and owned.
Context Drift
Previously valid knowledge becomes wrong as market, regulation, and strategy change.
Confidence Blindness
Ungoverned sources receive equal weight, making low-confidence claims look decision-safe.
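The volatility described above can be pictured with a simple half-life model. This is an illustrative sketch of the decay dynamic, not an empirical claim: the function, the half-life values, and the idea of a domain-specific half-life are all assumptions introduced here.

```python
def knowledge_value(initial: float, years: float, half_life: float) -> float:
    """Illustrative exponential decay of ungoverned knowledge value.

    `half_life` models how quickly unvalidated knowledge loses half its
    decision value in a given domain - an assumption, not a measured law.
    """
    return initial * 0.5 ** (years / half_life)

# A fast-moving domain (assumed half-life 2 years) vs a stable one (8 years)
fast = knowledge_value(100.0, years=4, half_life=2.0)   # 25.0
slow = knowledge_value(100.0, years=4, half_life=8.0)   # ~70.7
```

Governance intervention - revalidation, transfer, renewal - resets the clock; without it, the decay compounds silently.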
Why current frameworks break down
01
Knowledge has no agreed unit of measure
A decade of domain expertise, a library of validated research, and a set of informal but critical
relationships - how do you compare these? How do you decide which to invest in, which to protect, which is
at risk? Without a shared way of measuring knowledge, these decisions remain matters of instinct rather than
evidence. Governance without measurement is just guesswork.
02
Tacit knowledge is invisible to existing systems
The enterprise content stack can only work with what has been written down and captured. The expertise in
people's heads, the institutional memory embedded in practice, the relationships that determine how
decisions really get made: none of these show up in any current platform. They are the largest and most
valuable part of the knowledge estate, and they are entirely unmeasured. APQC's 2024 research identified
tacit knowledge capture as the most urgent priority for knowledge management teams precisely because the gap
is now widely felt, even if it is not yet closed (APQC, 2024).
03
Knowledge value is context-dependent
A piece of knowledge can be highly valuable in one context and irrelevant in another. Its value is also
dynamic. It depreciates as conditions change, and appreciates as it becomes more applicable. Static asset
valuation frameworks are not designed for assets whose value shifts continuously in response to context.
04
Confidence is absent from all current frameworks
Even where knowledge has been captured, no current framework distinguishes between what an organisation
knows with high confidence and what it merely believes, assumes, or has recorded without validation. All
explicit knowledge is treated as equally credible. A document from five years ago carries the same implicit
authority as one validated last week. An unverified assumption carries the same weight as a tested finding.
This means none of it is truly governed - and when AI operates on ungoverned knowledge, low-confidence
outputs become indistinguishable from high-confidence ones.
Tacit concentration risk topology
Visualisation
Tacit concentration risk by knowledge domain
Domains where critical knowledge is held tacitly by few individuals - the invisible risk topology most
organisations cannot currently see.

Risk profile | Strategy & Direction | Technical Expertise | Client Relationships | Regulatory Knowledge | Process Memory
High value, High tacit | Critical | Critical | High | High | Critical
High value, Med tacit | High | Med | Critical | Med | High
Med value, Low tacit | Med | Low | Med | Low | Med
Low value, Low tacit | - | Low | - | Low | -
Critical risk - govern immediately
High risk - active monitoring
Moderate risk - scheduled review
Low risk
Research baseline
It has never been more necessary for knowledge managers to demonstrate the value of their organisations'
investments in knowledge and learning initiatives. But achieving this is more difficult than it looks. It is
difficult to build direct cause-and-effect links between specific knowledge management practices and aspects of
organisational performance.
Henley KMF - Thinking Differently About Evaluating Knowledge
Management, Issue 28, 2013 · 25 practitioners, 12 major public and private sector
organisations
This was not a 2013 finding alone.
Henley's own prior research from 2004-2005 found that even the most rigorous KM measurement frameworks broke down
the moment tacit knowledge or organisational change entered the picture. The measurement gap is not waiting to be
closed by better tooling. It is a structural condition that requires an architectural response.
Why Now
Three conditions have converged to make this both necessary and possible.
AI can only be as good as the knowledge it runs on. Right now, many organisations have the models but not the
knowledge quality or governance needed for reliable output. That is why Knowledge Intelligence is needed now.
01 · Scale
Content estates have reached meaningful scale
Large organisations now hold content estates of sufficient volume and variety that patterns within them are
genuinely significant. The signal-to-noise challenge is no longer a niche concern. It is a primary
operational and strategic condition, and the estate is large enough that intelligence extracted from it
carries real strategic weight.
02 · Quality
AI performance is downstream of knowledge quality
AI now extracts meaning from large estates with far less manual effort. But faster processing does not fix
poor sources, missing context, or stale knowledge. 78% of organisations now use AI in at least one business
function (McKinsey, 2025), yet many still struggle to scale trusted use because
output quality is limited by knowledge quality. The organisations struggling to realise value from their AI
investment are, in most cases, struggling because the knowledge foundations beneath their AI are ungoverned.
03 · Confidence
Decision stakes have raised the bar for confidence
In a fast-moving environment, wrong answers are expensive. Leaders need more than speed. They need
justified confidence: evidence they can inspect, challenge, and defend before acting. 75% of leaders say they
do not trust their own data enough to act on it confidently (Dataversity / Gartner, 2025). The problem is not
access to information. It is the absence of any framework for knowing whether to trust it.
The AI Inflection
AI performance is downstream of knowledge quality.
Generative AI did not create the knowledge problem. It exposed it. AI systems inherit the condition of the
knowledge estate beneath them. Knowledge Intelligence makes that condition measurable and governable.
AI Output Quality = Model Capability × Knowledge Quality × Governance Discipline

Most organisations already have the model capability. Knowledge quality and governance discipline are the
missing factors.
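Because the relationship is multiplicative, a weak factor caps the whole product - a strong model cannot compensate for an ungoverned estate. A minimal sketch, with factor values assumed purely for illustration:

```python
def ai_output_quality(model: float, knowledge: float, governance: float) -> float:
    """Multiplicative model: the weakest factor caps overall output quality."""
    return model * knowledge * governance

# A strong model over an ungoverned estate still yields weak output
strong_model_weak_estate = ai_output_quality(0.9, 0.4, 0.5)   # ~0.18
governed_estate = ai_output_quality(0.9, 0.8, 0.8)            # ~0.58
```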
What AI is doing
Moving from search responses to generated responses.
Producing output faster than teams can manually check.
Increasing the volume of decision support in daily work.
Reducing time to answer, but not automatically improving trust.
AI value stalls exactly where knowledge foundations are weakest.
Knowledge Intelligence adds the missing layer: confidence signals, traceable
provenance, lifecycle governance, and clear rules for what knowledge is allowed to influence.
The Knowledge Intelligence Capability
From content management to governed knowledge - a structural shift.
Knowledge Intelligence spans both the explicit and tacit dimensions of organisational knowledge: governed,
confidence-weighted, and designed to support decisions that depend on the full range of what an organisation
knows.
Any practical path toward this capability has to start with what is most legible: the explicit layer.
Content Intelligence is the discipline of extracting meaning from an organisation's content
estate at scale, assessing signal strength, evaluating reliability, and producing insight that can justify
decisions rather than merely inform them. It is not the whole answer, but it is the necessary first layer.
Dimension | The existing stack asks | Content Intelligence asks
Purpose | How should content be organised and found? | What does this content collectively indicate?
Reliability | Is it accessible and compliant? | How confident should we be in it?
Action | Has it been reviewed? | When does it justify action, and when does it counsel restraint?
Quality | Is it complete and correctly formatted? | Does it strengthen or weaken decision confidence?
Evolution | Is it up to date? | Is the concept it represents still valid?
Content Intelligence is a pipeline,
not a feature. Raw content enters, passes through meaning extraction and confidence weighting, and emerges as a
governed insight - or is withheld when confidence is insufficient. Every output carries a weight. The system
governs what each confidence level is permitted to influence.
Threshold | Confidence band | Example signal | Governance action
0.95+ | High Confidence | 95% | Auto-validated: trusted to inform decisions without human review
0.60-0.94 | Moderate Confidence | 72% | Human review: flagged for expert validation before influencing decisions
Below 0.60 | Low Confidence | 32% | Withheld: logged but not permitted to influence any decision
Confidence thresholds are illustrative of the framework architecture, not empirically derived
benchmarks. Organisations set thresholds relative to their domain risk profile.
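The threshold logic above can be sketched as a small gate function. The band boundaries mirror the illustrative table; the function and action names are assumptions, and a real deployment would calibrate the thresholds to its domain risk profile.

```python
def governance_action(confidence: float) -> str:
    """Map a confidence score to the action the framework permits."""
    if confidence >= 0.95:
        return "auto-validated"   # trusted to inform decisions without review
    if confidence >= 0.60:
        return "human-review"     # flagged for expert validation first
    return "withheld"             # logged, but barred from influencing decisions

action = governance_action(0.72)   # "human-review"
```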
Bringing tacit knowledge into the framework
The most difficult part of Knowledge Intelligence is tacit knowledge: what people know but have not written down.
This cannot be fully captured. It can be managed through a metadata-first approach that does not require blanket
reading of private content - reducing tacit risk without eliminating the need for human judgement.
01
Digital Signals
Use metadata already produced by work systems: authorship, edit history, usage, and query
patterns.
02
Inferred Expertise Map
Map who knows what by combining contribution history, interaction patterns, and
domain-level confidence outcomes.
03
Targeted Elicitation
Use short interviews only where risk is high or signals are weak. This is exception work,
not the default method.
04
Exit Governance
Treat knowledge transfer as a required control at transitions, with accountable ownership
and completion checks.
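The metadata-first approach can be sketched as a simple aggregation over work-system signals. The event format and weights below are hypothetical; the point is that expertise is inferred from metadata - authorship, edits, query patterns - never from reading private content.

```python
from collections import defaultdict

def infer_expertise(events):
    """Aggregate work-system metadata into a domain -> person strength map.

    `events` are (person, domain, weight) tuples derived from authorship,
    edit history, and query patterns - no private content is read.
    """
    strength = defaultdict(float)
    for person, domain, weight in events:
        strength[(person, domain)] += weight
    expertise = defaultdict(dict)
    for (person, domain), total in strength.items():
        expertise[domain][person] = total
    return expertise

events = [
    ("ana", "regulatory", 3.0),   # authored a validated policy
    ("ana", "regulatory", 1.5),   # answered colleague queries
    ("ben", "regulatory", 0.5),   # minor edits
]
emap = infer_expertise(events)
# A domain dominated by one contributor is a concentration-risk signal
dominant = max(emap["regulatory"], key=emap["regulatory"].get)
```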
The Knowledge Health Score
Every knowledge asset within a KI framework produces a single composite indicator - the Knowledge Health Score -
combining five dimensions into one signal that tells the organisation whether to trust, review, or retire what it
knows.
Composite indicator
Knowledge Health Score
A governance signal that continuously combines five dimensions, then classifies
each asset into an explicit operating state: trust, review, or retire.
Confidence
How well-evidenced and validated the knowledge is, distinguishing justified
understanding from assumption.
Currency
How recently the knowledge was validated relative to the domain's observed
rate of change.
Usage
Whether the knowledge is actively informing decisions, or sitting unused in
the estate.
Concentration
Whether critical knowledge is held by too few people or repositories,
creating governance risk.
Alignment
Whether the knowledge remains relevant to current priorities, strategy, and
operating context.
Output signal
Trust - decision-safe with routine monitoring
Review - use with validation and owner attention
Retire - do not influence decisions without remediation
The state is continuously recalculated as evidence changes, so degraded assets
stop silently influencing high-stakes decisions.
Illustrative profile
Confidence 0.86
Currency 0.76
Usage 0.82
Concentration 0.61
Alignment 0.71
Radar shape makes asymmetry visible instantly. A single weak dimension can force
review even when aggregate health appears acceptable.
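A minimal sketch of how the composite and its classification might work. The aggregation (a plain mean), the thresholds, and the weakest-dimension rule are all assumptions introduced for illustration; the dimension values are the illustrative profile above.

```python
def health_score(dims: dict) -> tuple[float, str]:
    """Combine the five dimensions into one signal, then classify.

    Illustrative rule: classify on the aggregate, but let any single weak
    dimension force at least a Review state - the asymmetry the radar
    shape makes visible.
    """
    aggregate = sum(dims.values()) / len(dims)
    weakest = min(dims.values())
    if aggregate >= 0.75 and weakest >= 0.65:
        return aggregate, "Trust"
    if aggregate >= 0.50:
        return aggregate, "Review"
    return aggregate, "Retire"

profile = {  # the illustrative profile above
    "confidence": 0.86, "currency": 0.76, "usage": 0.82,
    "concentration": 0.61, "alignment": 0.71,
}
score, state = health_score(profile)   # aggregate ~0.75, state "Review"
```

Note the outcome: the aggregate alone would clear the Trust bar, but the weak concentration dimension forces Review.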
The Living Taxonomy
Every knowledge management system ever built has the same structural failure: a taxonomy that someone designed,
published, and then watched decay. As the organisation changed, the classification didn't. Knowledge accumulated
in categories that no longer fit, under concepts that had been superseded, in structures built for a version of
the business that no longer existed. Maintaining it required manual effort that no one had time for. So it was
left. And the estate quietly became ungovernable again.
A Knowledge Intelligence framework breaks this cycle through a fundamentally different model. Classification is
not designed once and imposed on the estate - it emerges continuously from the estate itself. The intelligence
engines that process incoming knowledge read patterns across the full estate, identify where concepts are
converging or diverging, detect when new themes are forming before they have been named, and update the taxonomy
accordingly. The result is a classification system that evolves in step with the organisation - not months or
years behind it.
Humans set the parameters, review proposed changes, and retain authority over the taxonomy's direction. But the
intelligence layer does the continuous work of observation and proposal. The taxonomy becomes a living model - not
a published artefact that someone has to remember to update.
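One way to picture the observe-and-propose loop: compare term usage across time windows and flag accelerating concepts as taxonomy candidates for human review. This toy sketch uses raw term counts; a production engine would work over embeddings and concept clusters, and every name and threshold here is an assumption.

```python
from collections import Counter

def emerging_terms(old_docs, new_docs, min_growth=3.0, min_count=3):
    """Flag terms whose usage is accelerating across the estate -
    candidate concepts for the taxonomy before anyone has named them.
    """
    old = Counter(w for d in old_docs for w in d.lower().split())
    new = Counter(w for d in new_docs for w in d.lower().split())
    flagged = {}
    for term, count in new.items():
        # A term qualifies if it is frequent now and growing fast
        if count >= min_count and count / (old[term] + 1) >= min_growth:
            flagged[term] = count
    return flagged

old_docs = ["quarterly review of supplier contracts"]
new_docs = ["genai pilot for supplier review", "genai governance draft",
            "genai risk assessment for contracts"]
candidates = emerging_terms(old_docs, new_docs)   # {"genai": 3}
```

The engine proposes; humans retain authority over whether a flagged term becomes a taxonomy concept.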
The Knowledge Intelligence Framework
How Knowledge Intelligence actually works.
Knowledge Intelligence is not a philosophy - it is a capability with a defined architecture. At its core is a
deceptively simple idea: every piece of knowledge an organisation holds should exist as a governed asset with a
known identity, a measurable quality, and a traceable history. Not just documents. Not just data. Everything the
organisation knows, in whatever form it takes.
This is what a Common Knowledge Asset is: a governed record defined by eight core
attributes, organised into three operating dimensions. Ingestion, classification, and
baseline scoring are automated. People step in for exceptions: conflicts, threshold breaches, and high-impact
disputes.
Confidence is calculated from clear inputs: source quality, corroboration with other trusted assets, recency
relative to domain change, usage in successful decisions, and detected conflicts. This is a governance signal, not
a sentiment score. Confidence, coverage, and consistency are treated as separate governance signals. Confidence
asks whether an asset is trustworthy, coverage asks whether relevant assets exist, and consistency asks whether
key
assets agree with each other.
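A sketch of how the listed inputs might combine into a confidence score. The weights, the normalisation to [0, 1], and the treatment of detected conflicts as a penalty are all assumptions - the text specifies the inputs, not the formula.

```python
def compute_confidence(source_quality: float, corroboration: float,
                       recency: float, usage_success: float,
                       conflict_penalty: float,
                       weights=(0.3, 0.25, 0.2, 0.15, 0.1)) -> float:
    """Weighted blend of the five confidence inputs named above.

    All inputs are normalised to [0, 1]; `conflict_penalty` is inverted
    so detected conflicts pull confidence down. Weights are assumptions.
    """
    inputs = (source_quality, corroboration, recency,
              usage_success, 1.0 - conflict_penalty)
    return round(sum(w * x for w, x in zip(weights, inputs)), 3)

score = compute_confidence(0.9, 0.8, 0.7, 0.6, 0.1)   # 0.79
```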
Visualisation - CKA attribute model
Eight governed attributes surrounding one computed record.
The Common Knowledge Asset
Every knowledge asset carries a structured record across eight governed attributes,
organised into three operating dimensions. Together this model shows what the organisation
knows, how reliable it is, and who is accountable when confidence drops.
8 governed attributes · 3 operating dimensions · 1 CKA record
Classification (4 attributes)
Identity
Asset type, domain, transferability, and reach of the knowledge unit.
Provenance
Source authority, authorship chain, and evidence origin history.
Content
Governed structured knowledge in executable, retrievable form.
Taxonomy
Living classification and concept alignment to current strategy.
Measurement (3 attributes)
Quality
Confidence, consistency, and evidence integrity signals over time.
Value
Decision relevance, reuse performance, and measurable impact trail.
Lineage
Downstream decision history and dependency chain influenced by this asset.
Governance (1 attribute)
Governance
Ownership, exception handling, approval thresholds, and strategic alignment rules
defining what this asset is permitted to influence.
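The eight-attribute record might be represented as a simple typed structure. Field contents are simplified to dicts and strings for illustration, and the field names are assumptions; the grouping comments mirror the three operating dimensions above.

```python
from dataclasses import dataclass, field

@dataclass
class CommonKnowledgeAsset:
    """One governed record: eight attributes across three dimensions."""
    # Classification
    identity: dict
    provenance: dict
    content: str
    taxonomy: list
    # Measurement
    quality: dict = field(default_factory=dict)
    value: dict = field(default_factory=dict)
    lineage: list = field(default_factory=list)
    # Governance
    governance: dict = field(default_factory=dict)

cka = CommonKnowledgeAsset(
    identity={"type": "policy", "domain": "regulatory"},
    provenance={"author": "ana", "source": "validated-review"},
    content="Retention period is 7 years.",
    taxonomy=["compliance", "records-management"],
)
```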
The knowledge lifecycle
A Common Knowledge Asset tracks knowledge through its entire life - from ingestion to
retirement - updating its health score at every stage.
01
Ingestion
Captured
Knowledge enters the system through automated ingestion. Identity, type, and
provenance are recorded.
→
02
Processing
Classified
Domain, reach, and transferability are automatically classified. Initial confidence
score is computed.
→
03
Active Use
Governed
Assets are cited and challenged in practice. Conflicts and exceptions route to
human adjudication.
→
04
Decay
Reviewed
Currency drops below threshold. Owner is alerted. Revalidation, override, or
retirement is required.
→
05
Resolution
Retired or Renewed
Asset is superseded, archived, or refreshed. Lineage preserved for future
reference.
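The five stages can be read as a small state machine. The transition set below is an interpretation of the stage descriptions - in particular, that Decay can resolve back to Active Use after revalidation and that retirement is terminal - and all identifiers are assumptions.

```python
# Allowed transitions in the five-stage lifecycle above
TRANSITIONS = {
    "ingestion":  {"processing"},
    "processing": {"active_use"},
    "active_use": {"decay"},
    "decay":      {"active_use", "resolution"},  # revalidated, or escalated
    "resolution": {"active_use"},                # renewed; retirement is terminal
}

def advance(state: str, target: str) -> str:
    """Move an asset to `target`, rejecting transitions the lifecycle forbids."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

state = "ingestion"
for step in ("processing", "active_use", "decay", "resolution"):
    state = advance(state, step)
```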
Decision Cases
Five decisions that Knowledge Intelligence makes newly possible.
The value of Knowledge Intelligence becomes concrete when anchored to decisions that were previously unavailable,
or available only through costly, one-off, and unreliable manual effort.
01
Strategic Timing
Detecting emerging themes before they are named
Themes are typically only recognised once someone names them. Knowledge Intelligence enables early
visibility of persistent, converging signals across both the documented content estate and tacit knowledge
networks, allowing leaders to see change forming before it becomes obvious - without being asked to react to
noise.
Earlier, better-timed strategic awareness without overreaction.
02
Risk
Measuring concentration risk in the knowledge estate
When critical expertise is held by one person, or institutional memory exists nowhere but in the minds of a
small team, the organisation is carrying knowledge risk it cannot currently see or measure. Nearly half of
executives already recognise this: 48% agree that employees take critical procedural and strategic knowledge
with them when they leave (KMWorld, 2024). Knowledge Intelligence makes this risk
visible and measurable before it becomes a crisis - treating knowledge concentration the same way a finance
team treats exposure in a portfolio.
Knowledge risk becomes a manageable condition, not an invisible one.
03
Concept Health
Evidence-based evolution of strategic concepts
Organisations accumulate frameworks and models that persist long after their usefulness declines. Knowledge
Intelligence provides visibility into how concepts perform across the full knowledge estate, in documents,
in practice, and in expert judgement, enabling deliberate and evidenced evolution rather than conceptual
drift.
Concepts change based on evidence, not politics or fatigue.
04
Confidence
Governing confidence across the knowledge estate
No current framework distinguishes between what an organisation knows with high confidence and what it
merely assumes. 75% of leaders say they do not trust their own data for decision-making (Dataversity /
Gartner, 2025). Knowledge Intelligence makes confidence a first-class
governance dimension, ensuring the knowledge underpinning high-stakes decisions can be interrogated, not
assumed.
Confidence becomes governable. Assumptions become visible.
05
External Alignment
Deliberate alignment between internal knowledge and external context
Internal knowledge - both explicit and tacit - drifts relative to external reality. Knowledge Intelligence
treats external context as weighted evidence, allowing internal understanding to be reinforced, challenged,
or updated deliberately rather than by crisis. This applies to documented knowledge and to the assumptions
embedded in expert practice.
Strategic choices grounded in real alignment, not assumption.
Measuring Knowledge Intelligence
How you know it's working.
A strategic capability without a measurement framework is an act of faith. The most common reason Knowledge
Intelligence programmes lose momentum is not that they fail to deliver - it is that no one agreed in advance how
to recognise success. The organisations that have deployed this capability at scale share a common discipline:
they defined what they were trying to move, established a baseline before they began, and tracked a small number
of meaningful metrics rather than a large number of available ones.
The framework below organises measurement into four tiers. Together they show quality, operational effect,
financial efficiency, and estate health. Each metric is grounded in operational signals rather than abstract
sentiment.
01
Knowledge Quality
Is what we know reliable enough to act on?
Retrieval Success Rate
The proportion of knowledge queries that return a result the user accepts as useful -
defined not by whether a result appeared, but whether the user acted on it.
What movement looks like
Baseline typically 40-60% in ungoverned estates. Governed estates with hybrid
retrieval reach 75-90%.
Zero-Results Rate
The proportion of queries that return no usable answer. High zero-results rates signal
gaps in taxonomy coverage, ingestion failure, or misalignment between how knowledge is classified and how
it is searched.
Why it matters
Every zero-result query is a decision made without support. Tracked over time,
it directly maps to coverage gaps the taxonomy must close.
Knowledge Freshness SLA
The proportion of active knowledge assets reviewed within their defined validity
window. Varies by domain - a regulatory policy may require 90-day review; a product specification may be
annual.
The failure mode
Without ownership and review incentives, content becomes stale regardless of
tooling. Freshness SLA makes staleness visible before it becomes a decision risk.
Confidence Distribution
The spread of Knowledge Health Scores across the active estate - the proportion of
assets in Trust, Review, and Retire states. Tracks whether estate quality is improving or degrading over
time.
Target direction
Trust proportion should rise over time as governance matures. A static or
declining Trust proportion signals that ingestion is outpacing governance.
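The first two quality metrics are directly computable from a query log. A sketch, assuming a minimal log format with `returned` and `acted_on` flags - the acceptance-based definition above, where a result only counts as a success if the user acted on it.

```python
def retrieval_metrics(queries):
    """Compute retrieval success and zero-results rates from a query log.

    `queries` are dicts with `returned` (any result shown) and `acted_on`
    (the user actually used it).
    """
    n = len(queries)
    success = sum(q["acted_on"] for q in queries) / n
    zero_results = sum(not q["returned"] for q in queries) / n
    return {"retrieval_success": success, "zero_results": zero_results}

log = [
    {"returned": True,  "acted_on": True},
    {"returned": True,  "acted_on": False},  # returned but not useful
    {"returned": False, "acted_on": False},  # coverage gap
    {"returned": True,  "acted_on": True},
]
m = retrieval_metrics(log)   # 0.5 success, 0.25 zero-results
```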
02
Operational Impact
Is it improving the work decisions are made from?
Time to Insight
The elapsed time from a question being asked to an answer being accepted and acted
upon. Applies across functions - from a service agent resolving a case, to a strategist validating an
assumption before a board presentation.
Evidence from production
UCB (biopharma) reduced regulatory response assembly time by 90% - from three
weeks to two days - after governing retrieval over a large clinical and R&D content estate.
Resolution Time Reduction
Reduction in time taken to resolve cases, incidents, or requests that require
knowledge retrieval. Measured across service, legal, compliance, and HR workflows where knowledge
retrieval is embedded in the resolution path.
Evidence from production
A knowledge graph-enhanced retrieval deployment at LinkedIn reduced median
per-issue resolution time by 28.6% after six months, with retrieval quality improvement of +77.6% MRR.
Effort Efficiency
The reduction in time staff spend searching for, validating, or reconstructing
knowledge they need. Captures the upstream friction that precedes a decision or action, including search
time, re-escalation, and rework.
Evidence from production
A financial services deployment reported an 86% reduction in time spent
searching for knowledge, an 11.25% reduction in average handle time, and 2.5 days saved per new hire in
knowledge proficiency development.
Knowledge Reuse Rate
The proportion of decisions, actions, or resolutions that drew on existing governed
knowledge rather than requiring new retrieval, reconstruction, or escalation. High reuse rates indicate
that classification is working and the estate is genuinely navigable.
What it indicates
Low reuse despite high coverage signals taxonomy misalignment. High reuse with
declining freshness signals the estate is being used but not maintained - a compounding risk.
03
Financial Efficiency
Is it reducing avoidable cost and external spend?
Reinvention Cost Avoidance
Estimated cost avoided when teams reuse existing governed assets instead of rebuilding
the same analysis or deliverable. Calculated as reuse events multiplied by average asset creation time and
blended rate - calibrated to the organisation.
The signal to watch
Rising reuse with flat external spend indicates the savings are real. Flat
reuse with rising external spend indicates coverage gaps that cost money.
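The calculation described above is a straightforward product. The input values below - reuse events, rebuild time, blended rate - are invented for illustration and would be calibrated to the organisation.

```python
def reinvention_cost_avoided(reuse_events: int,
                             avg_creation_hours: float,
                             blended_rate: float) -> float:
    """Cost avoided = reuse events x average creation time x blended hourly rate."""
    return reuse_events * avg_creation_hours * blended_rate

# 120 reuse events x 16 hours to rebuild x 95/hour blended rate
avoided = reinvention_cost_avoided(120, 16.0, 95.0)   # 182400.0
```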
Onboarding Velocity
Time from start date to independent contribution in role-relevant workflows that
depend on governed knowledge access.
Failure signal
If onboarding time stays flat while retrieval quality improves, local adoption
or workflow integration is failing.
External Spend Rationalisation
Reduction in vendor or consultancy spend for questions that can be answered from
internal governed knowledge.
04
Estate Health
Is the knowledge estate resilient over time?
Knowledge Concentration Rate
The proportion of critical knowledge domains where fewer than a defined threshold of
people hold the relevant expertise. A domain where only one or two people are the recognised source of
truth is a concentration risk - regardless of whether that knowledge has been documented.
When it becomes urgent
Concentration risk is invisible until it crystallises - a departure, a health
event, a reorg. Tracking it continuously enables deliberate mitigation before the loss occurs.
Taxonomy Alignment Rate
The proportion of ingested assets successfully classified by the living taxonomy
without manual intervention. A declining rate signals the estate is evolving faster than the
classification model.
The signal to watch
High alignment with low retrieval success means the taxonomy is consistent but
wrong. Both must be tracked together.
Knowledge Loss Rate
The rate at which knowledge leaves the estate without capture - through departures,
project completions, or system retirements where no elicitation or transfer has occurred.
The compounding risk
Knowledge loss is almost never measured in real time. By the time it registers
as a problem, the knowledge is already gone. This metric is most valuable as a lead indicator.
Attribution Coverage Rate
The proportion of material decisions that include a complete attribution trail showing
which assets and contributors influenced the decision.
Failure signal
Low coverage means value cannot be credited and accountability cannot be
audited, which weakens both incentives and governance.
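The coverage calculation itself is simple; the work is in defining what counts as a complete trail. The sketch below assumes, purely for illustration, that a trail is complete when both asset and contributor attributions are present - the decision records and field names are hypothetical.

```python
# Attribution Coverage Rate: proportion of material decisions carrying a
# complete attribution trail. Records and fields are illustrative.

def attribution_coverage(decisions):
    """A decision counts as covered only if both its asset and contributor
    attributions are present and non-empty."""
    if not decisions:
        return 0.0
    covered = sum(1 for d in decisions
                  if d.get("assets") and d.get("contributors"))
    return covered / len(decisions)

decisions = [
    {"id": "D-101", "assets": ["playbook-v3"], "contributors": ["A. Chen"]},
    {"id": "D-102", "assets": [], "contributors": ["B. Osei"]},    # no asset trail
    {"id": "D-103", "assets": ["risk-memo"], "contributors": []},  # no contributor trail
]

print(attribution_coverage(decisions))  # 1 of 3 covered
```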
◆
The measurement principle
No single metric tells the full story. Retrieval success without freshness tracking means you are measuring
how well people find information that may be wrong. Freshness SLA without reuse rate means you are maintaining
a library no one is reading. The four tiers work together: knowledge quality sets the standard, operational
impact proves workflow value, financial efficiency proves cost impact, and estate health protects long-term
reliability.
Start with three or four metrics that cover all tiers, and establish baselines before deployment. The most
common error is attempting to measure everything at once and measuring nothing with rigour.
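The "small set, all tiers, baseline first" principle can be sketched in a few lines. The metric names, baseline values, and readings below are assumptions chosen to show one metric per tier - not recommended targets.

```python
# One metric per tier, compared against a pre-deployment baseline.
# All names and numbers are hypothetical.

BASELINE = {  # captured before deployment
    "retrieval_success": 0.54,   # knowledge quality
    "onboarding_days": 45,       # operational impact
    "external_spend_k": 320,     # financial efficiency
    "taxonomy_alignment": 0.81,  # estate health
}

LOWER_IS_BETTER = {"onboarding_days", "external_spend_k"}

def trajectory(current):
    """Flag each metric's direction of travel relative to its baseline."""
    report = {}
    for name, base in BASELINE.items():
        delta = current[name] - base
        improved = delta < 0 if name in LOWER_IS_BETTER else delta > 0
        report[name] = "improving" if improved else "regressing"
    return report

print(trajectory({
    "retrieval_success": 0.61,
    "onboarding_days": 38,
    "external_spend_k": 340,
    "taxonomy_alignment": 0.79,
}))
```

The point of the structure is the cross-check: in this hypothetical reading, rising external spend alongside falling taxonomy alignment would flag exactly the kind of coverage gap no single metric reveals on its own.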
What Leadership Must Do
Three shifts that cannot be delegated.
Knowledge Intelligence is a strategic capability, which means it needs strategic ownership. Three shifts in
thinking are necessary for it to take root and build value over time.
From
Content as output
↓
Knowledge as asset
Organisations need to start treating knowledge - in all its forms, tacit and explicit - as something with a
measurable state, a direction of travel, and a governance obligation. That means investing in the frameworks
that make it possible to see and manage the full knowledge universe, not just the documented surface of it.
From
Procedural compliance
↓
Confidence governance
Governance needs to go beyond compliance and cover the quality of understanding - whether the knowledge being
relied upon is current, validated, and well-founded. Confidence must become a first-class concern with clear
ownership and accountability at the executive level.
From
Snapshots
↓
Trajectories
The most valuable intelligence is not the current state but what has changed and why. Leaders need
to invest in understanding over time - including the ability to see how the organisation's knowledge estate is
evolving, where it is concentrating, and where it is at risk of being lost.
What Makes This Hard
The obstacles are real. They are not reasons to stop.
A case this clear deserves an honest account of what stands in the way. Knowledge Intelligence is not a difficult
idea to understand. It is a difficult capability to build - and organisations attempting it will encounter
resistance that is predictable, significant, and worth naming directly.
01
Culture
Knowledge hoarding is rational behaviour
In most organisations, knowledge is power. Expertise that is shared is expertise that is no longer
exclusively yours. Until that incentive structure changes - and contribution is visible through attribution
trails - the tacit estate will continue to resist capture. This is not only a technology problem. It is a
cultural and leadership problem.
The technology is the easier part. The culture is the work.
02
Complexity
Knowledge estates are vast and varied
The same heterogeneity that makes Knowledge Intelligence valuable also makes it hard to implement.
Knowledge exists in hundreds of forms, across dozens of systems, held by people at every level of the
organisation. There is no clean starting point, no tidy dataset to work from. The practical path is to begin
with what is most legible - the explicit layer - and build governance habits there before extending into the
tacit domain.
Start with what you can see. Build toward what you cannot.
03
Investment
The returns are real but not immediate
Knowledge Intelligence is infrastructure. Like any infrastructure investment, its value accumulates over
time rather than delivering immediate return. Organisations accustomed to measuring technology ROI in months
will need a different frame - one that treats knowledge governance as a long-term strategic capability
rather than a project with a defined end state. The cost of not investing, however, compounds silently and
is rarely visible until it becomes a crisis.
The cost of inaction is real. It is just harder to see.
04
AI Dependency
AI capability is necessary but not sufficient
Knowledge Intelligence depends on AI to extract meaning at scale - but AI alone does not deliver it.
Without a governing framework, AI produces outputs that cannot be trusted, traced, or acted on with
confidence. The organisations currently struggling to realise value from AI investment are, in most cases,
struggling precisely because the knowledge foundations beneath their AI are ungoverned. More AI without
better knowledge governance makes the problem larger, not smaller.
AI amplifies what is already there - including the gaps.
The Structural Failure Mode
Lessons-learned systems: where
the chain breaks
01
Capture
Knowledge is documented - after-action reviews, project retrospectives, incident
reports.
02
Store
Knowledge enters a system - a wiki, a repository, a shared drive. It is filed and
forgotten.
03
Retrieve
Nobody finds it. Nobody trusts it. The same mistake is made again - elsewhere, by
someone else.
Three failure scenarios to actively govern
A
Technical
Confidence drift from stale or conflicting sources
If source quality degrades or conflicts are ignored, confidence scores drift and low-quality assets begin
influencing decisions without detection.
Mitigation owner: Platform and Knowledge Engineering leads. Monthly signal audits and
conflict resolution SLAs.
B
Governance
Automation without clear thresholds or owners
If exceptions do not have named owners and defined response windows, issues accumulate and teams stop
trusting governance decisions.
C
Cultural
Contribution without visible recognition
If contributors cannot see the impact of sharing, contribution drops and tacit concentration risk rises.
This also increases dependency on external vendors for knowledge the organisation already holds.
Mitigation owner: Business unit leadership and HR. Tie recognition to attribution
coverage and reuse impact.
Closing Statement
The capability is ready to be built. The moment is now.
Organisations have spent decades building systems to manage what they've written down. Those systems are
valuable and will remain so. But they address only the visible surface of what organisations actually know -
leaving the vast majority of their knowledge unmeasured, unmonitored, and ungoverned.
Knowledge Intelligence changes that. Not by adding another layer to the existing content stack, but by building
a genuinely new capability - one that treats the full universe of what an organisation knows as something that
can be classified, measured, and governed. That requires a different way of thinking about knowledge itself: not
as a category of documents, but as everything an organisation knows, in every form it takes.
This capability does not sit apart from the broader digital and AI agenda - it is its foundation. Every AI
strategy depends on the quality of the knowledge it operates on. Every data governance programme leaves the
largest part of the knowledge estate untouched. Every transformation initiative runs on assumptions that have
never been tested. Knowledge Intelligence is what changes the foundation, not just the surface.
The conditions have converged. The knowledge estates are large enough. The AI capability is sufficient. The
decision stakes are high enough. The cost of ungoverned knowledge is no longer acceptable.
This paper is a founding statement, not a final answer. The work of building the
capability and proving its value is what comes next - and it's work I intend to do through the Kore platform I
am building.
Kore Platform (In Development)
Kore is the platform I am building to operationalise Knowledge Intelligence with measurable governance,
trusted retrieval, and stronger decision confidence.
Sources and References
1
Nonaka, I. and Takeuchi, H. - The Knowledge-Creating Company: How Japanese
Companies Create the Dynamics of Innovation. Oxford University Press, 1995.
2
Henley Knowledge Management Forum - Identifying Valuable Knowledge.
Knowledge in Action, Issue 6. Henley Business School, 2007. Research conducted with GlaxoSmithKline, QinetiQ,
Defence Logistics Organisation, Unisys, and Nissan. Co-ordinated by Dr Judy Payne.
3
Henley Forum for Organisational Learning and Knowledge Strategies - Thinking
Differently About Evaluating Knowledge Management. Knowledge in Action, Issue 28. Henley Business School,
2013. Research drawn from a workshop of 25 practitioners across 12 major public and private sector
organisations. Co-ordinated by Dr Christine van Winkelen.
4
McKinsey Global Institute - The Social Economy: Unlocking Value and
Productivity Through Social Technologies. McKinsey and Company, 2012.
5
KMWorld - Toward Greater Visibility in Today's Knowledge World: 2024 Survey
on Information Sharing and Transparency. KMWorld Research, 2024.
6
Dataversity / Gartner - Data Management Trends 2025: Moving Beyond Awareness
to Action. Dataversity, 2025.
7
APQC - 2024 Knowledge Management Priorities and Predictions Survey Report.
American Productivity and Quality Center, 2024.
8
McKinsey and Company - The State of AI in 2025: Agents, Innovation, and
Transformation. McKinsey Global Survey, 2025.