WHO GUARDS THE GATEKEEPERS OF INSIGHT?
Brian Walker

On insight inflation, institutional failure, and the politics of knowing in an age of artificial intelligence
The Problem Behind the Problem
John Elkington is one of the more reliable early-warning systems in international sustainability discourse. He coined the Triple Bottom Line. He has been mapping the terrain between business, society and environment for longer than most current commentators have been professionally active. When he identifies a new structural phenomenon, the observation is worth taking seriously.
In a recent essay written following the EcoVadis SUSTAIN 2026 conference in Paris, Elkington introduced the concept of “insight inflation.” The argument, stated plainly, is this: we built artificial intelligence to solve the problem of data overload, and in doing so we have created a second-order version of the same problem. Instead of drowning in raw data, we now face the prospect of drowning in conclusions.
It is a well-constructed argument and it rests on a legitimate intellectual foundation. The progression Elkington traces is coherent. The industrial age suffered from scarcity of data. The digital age suffered from abundance of data and the cognitive overload that followed. The AI age now threatens to produce abundance of conclusions, each arriving with the apparent authority of analysis behind it, each demanding a response from decision-makers who are already overwhelmed.
The concept is well named. Inflation, in its classical economic sense, describes what happens when the supply of a medium outpaces the underlying value it is meant to represent. When insight is produced faster than wisdom can absorb it, something very similar occurs. The currency of knowledge depreciates.
Elkington is right about all of this. And yet the essay stops at a point where the deeper implications remain only partially explored.
In clinical medicine, the experienced practitioner learns to distrust the presenting symptom as the full account of the problem. A patient who arrives complaining of fatigue may be describing anaemia, or depression, or malignancy, or simple overwork. The symptom is real. It points toward a pathology. But treating the symptom without understanding the pathology produces, at best, temporary relief and, at worst, dangerous delay.
Insight inflation is a genuine symptom. The pathology it points toward is something rather more serious. The pathology is power.
A Progression Nobody Planned
To understand where we are, it helps to understand how we arrived here.
Alvin Toffler’s Future Shock, published in 1970, diagnosed a psychological condition he called “information overload”: the trauma produced not by ignorance but by the acceleration of change and the proliferation of stimuli beyond the capacity of the human mind to process them. A decade later, in The Third Wave, Toffler provided the structural explanation: industrial civilisation was giving way to an information civilisation, and the transition would be neither smooth nor painless.
Toffler’s predictions have held up with remarkable accuracy. What he perhaps underestimated was the degree to which the internet, which lay some distance in the future when he wrote, would reassemble concentrated power rather than distributing it. He imagined fragmentation and personalisation. He got those things. He did not fully anticipate that fragmentation and personalisation would be orchestrated by a small number of platform monopolies whose commercial interest lay precisely in maximising the volume and velocity of information flowing through their systems.
The data produced by those systems grew at a rate that few anticipated. Total captured and accessible global data reached approximately sixty zettabytes by 2020, with industry analyst projections, which should be read as directional estimates rather than precisely measured quantities, suggesting figures in the range of one hundred and sixty to one hundred and eighty zettabytes by 2025. The subjective experience of that growth is visible in survey data. The Reuters Institute Digital News Report for 2024 found that thirty-nine per cent of respondents across forty-seven countries reported feeling worn out by the volume of news, compared with twenty-eight per cent in 2019. In the workplace, the research firm OpenText reported in 2025 that eighty per cent of global workers now experience information overload, up from sixty per cent in 2020.
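For a rough sense of scale, the short sketch below shows the compound growth rate implied by those estimates. The zettabyte figures are the directional industry projections quoted above, not measured quantities.

```python
# Compound annual growth rate implied by the directional estimates
# quoted above (industry projections, not measurements).

def implied_cagr(start: float, end: float, years: int) -> float:
    """Annual growth rate that turns `start` into `end` over `years`."""
    return (end / start) ** (1 / years) - 1

start_zb = 60.0                # approximate global data volume, 2020
for end_zb in (160.0, 180.0):  # range of analyst projections for 2025
    rate = implied_cagr(start_zb, end_zb, years=5)
    print(f"{start_zb:.0f} ZB to {end_zb:.0f} ZB implies about {rate:.0%} growth per year")

# Roughly 22 to 25 per cent compound growth per year: a data
# environment that doubles in volume every three to four years.
```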
The World Health Organisation, confronting the co-occurrence of the COVID-19 pandemic with a proliferation of competing and often false health information, adopted the term “infodemic” to describe an excess of information that made it harder rather than easier for populations to make sound decisions. The psychologist David Lewis had identified the same phenomenon in professional contexts a generation earlier, in research commissioned by Reuters in the 1990s, which he described as Information Fatigue Syndrome: a condition characterised by anxiety, impaired decision-making, memory difficulties, shortened attention spans, and damaged working relationships.
The progression can now be stated clearly.
The first stage was ignorance: the limiting factor in human civilisation was access to knowledge. The second stage was information overload: the limiting factor became the capacity to process knowledge. The third stage, the one Elkington correctly identifies, is insight inflation: the limiting factor is the capacity to evaluate competing interpretations of knowledge.
There is, however, a fourth stage that the insight inflation frame gestures toward without quite naming. It is the stage at which the production of interpretation becomes so abundant, and the cognitive costs of evaluation so high, that most individuals and most institutions are compelled to delegate the filtering of insight to others.
That stage is not a technical problem. It is a political one.
The Question Elkington Did Not Ask
The question Elkington did not ask, but which the logic of his argument makes unavoidable, is this: when insight becomes cheap and abundant, who controls the filters? This is not an abstract question. It has immediate institutional and geopolitical consequences.
Consider the contemporary landscape of great power competition and military-strategic decision-making. Intelligence analysis, in its traditional form, was always a filtering problem: how to distil enormous quantities of raw signals, human intelligence, and open-source information into a manageable set of assessments that could reach a principal decision-maker in time to influence action. The resources required for that process were large, the expertise scarce, and the institutions that performed the function, the intelligence agencies of major states, wielded significant influence precisely because they controlled what reached the desk of the decision-maker.
Artificial intelligence does not dissolve that problem. It intensifies it.
When AI systems can generate hundreds of plausible scenarios, risk assessments, and strategic recommendations from the same underlying data, the bottleneck does not disappear. It migrates. The constraint is no longer the production of analysis. It is the selection of which analysis is presented, to whom, at what moment, and in what framing.
The analyst who once spent weeks synthesising a judgment now supervises a system capable of producing fifty competing judgments in an afternoon. The briefer who once carried a single carefully constructed assessment to the minister now faces the question of which of the machine’s outputs to include, which to suppress, and how to characterise the degree of consensus or uncertainty within the analytical product.
These are not technical decisions. They are decisions saturated with interpretive, institutional, and political consequence.
The current international environment illustrates the stakes with uncomfortable clarity. In periods of acute geopolitical tension, whether in the context of great power confrontation, nuclear brinkmanship, or proxy conflict, the most consequential question facing democratic governments is not whether sufficient information exists. Information is now superabundant. The question is whose filtering of that information reaches the principal, and what institutional incentives shaped that filtering.
History offers repeated examples of catastrophic decision-making not from information scarcity but from information curation failure: the selective presentation of intelligence, the suppression of contradictory assessments, the cognitive pressure on analysts to produce conclusions consistent with the preferences of those commissioning the analysis. The Iraq War intelligence failures of 2002 and 2003 were not primarily a problem of insufficient data. They were a problem of institutional filtering under political pressure.
It should be noted that the full historical record of that episode remains contested across multiple official inquiries, and scholars continue to debate the relative weight of analytical failure, politicised presentation, and genuine intelligence ambiguity. What is documented across those inquiries, however, including the Butler Review and the Chilcot Inquiry, is that the filtering layer between raw intelligence and political decision-maker was subject to pressures that narrowed the range of interpretations reaching those with authority to act. That structural finding is the relevant one for the argument made here.
AI does not make that problem less likely. It makes the filtering layer more opaque, more technically complex, and therefore less susceptible to the kinds of scrutiny that democratic oversight mechanisms were designed to provide.
Elkington notes, correctly, that the internet “reassembled centralised power rather than dissolving it.” The same structural pattern is already visible in AI. The systems capable of producing strategic-grade insight at scale are controlled by a small number of technology firms and state actors. The markets for AI-generated analysis are consolidating rapidly. The institutions that gain early dominance over AI insight pipelines will acquire a structural advantage in shaping the informational environment within which their clients, whether corporations, governments, or electorates, make decisions.
That is not insight inflation as a cognitive challenge. That is insight inflation as a mechanism of power.
The most substantial objection to this argument deserves direct engagement. It is frequently observed that artificial intelligence democratises analytical capacity: a researcher, a journalist, or a citizen with internet access can now run queries that would previously have required a research team and significant institutional resource. That observation is accurate. It does not, however, address the relevant question. Democratisation of access is not the same as democratisation of interpretive authority. The systems performing the underlying analysis, setting the training parameters, weighting the outputs, and determining what the model treats as credible evidence, are controlled by a small number of actors whose architectural choices are neither transparent nor democratically accountable. What is distributed to the many is the interface. What is retained by the few is the infrastructure of interpretation. The distinction matters enormously.
What Medicine Already Knows
There is a domain in which the pathology of analytical abundance has been studied with rigour over several decades. It is medicine.
The phenomenon known as diagnostic inflation describes what happens when the availability of tests outpaces the capacity of clinical judgment to interpret them wisely. When imaging technology became widely accessible, the incidental finding became a new category of clinical problem: an abnormality detected not because a patient was symptomatic but because the technology was looking. Many incidental findings are benign. Some require investigation. A small proportion are genuinely significant. But the volume of findings produced by modern imaging creates pressure for further investigation of everything, which produces further findings, which creates further pressure, in a cascade that increases cost, patient anxiety, and the probability of iatrogenic harm from investigation and treatment that would not otherwise have been initiated.
The false positive is a direct analogue of the AI-generated insight in Elkington’s account. It is a conclusion produced by a process that is technically sound but applied in a context where the prior probability of genuine significance is low. And because it arrives from a legitimate diagnostic system, it carries authority that makes it cognitively difficult to discard.
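The arithmetic behind that point deserves to be explicit. The sketch below, using illustrative figures chosen for clarity rather than drawn from any particular study, shows how a test that is technically excellent still produces mostly false positives when the prior probability of disease is low.

```python
# Bayes' theorem applied to screening: a technically sound test,
# used where prior probability is low, yields mostly false positives.
# The figures below are illustrative, not drawn from any study.

def positive_predictive_value(prevalence: float,
                              sensitivity: float,
                              specificity: float) -> float:
    """Probability that a positive result reflects genuine disease."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A test that is right 95 per cent of the time in both directions,
# applied where only 1 per cent of those examined have the condition:
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.95, specificity=0.95)
print(f"Chance a positive result is genuine: {ppv:.0%}")   # about 16%
```

Roughly five of every six positive results in that scenario are false alarms, each arriving with the full authority of a legitimate diagnostic process. The same logic applies to machine-generated conclusions: analytical fluency says nothing about the prior probability that the pattern detected is real.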
The clinical literature on this problem has identified several principles relevant to the broader challenge of insight inflation.
The first is the importance of a prior hypothesis. Experienced clinicians do not order investigations without a clinical question. They do not generate data in the hope that something useful will emerge. They reason from the patient’s presentation toward a differential diagnosis and use investigations to adjudicate between possibilities they have already identified as plausible. This is not a limitation of ambition. It is a disciplined approach to the management of analytical resources and cognitive bandwidth.
The second principle is the value of the explicitly sceptical role. In high-stakes clinical contexts, the second opinion is not a courtesy. It is a structural safeguard. The consultant who reviews the imaging report with a different clinical lens, the pathologist who examines the tissue with a different set of questions, the pharmacist who reviews the prescription with knowledge of interactions the prescribing clinician may not have considered: these are institutionalised forms of designated scepticism.
The third principle, perhaps the most important, is that clinical judgment cannot be automated out of existence by the abundance of data. The physician who receives fifty diagnostic suggestions from different algorithms must still decide which hypothesis matters most for this patient, at this moment, given this history and these values. The responsibility for that decision is not transferred to the algorithm. It remains with the clinician. And the clinician’s accountability to the patient is not diminished by the sophistication of the tools available.
Each of these principles maps directly onto the governance problem that Elkington identifies but does not resolve.
Organisations, whether corporations, governments, or regulatory bodies, that wish to navigate insight inflation without being captured by it will need prior hypotheses about what questions matter before they engage AI systems. They will need formally designated sceptical functions whose role is not to consume AI output but to interrogate it. And they will need to maintain, clearly and unambiguously, that accountability for decisions based on AI-generated analysis rests with identifiable human beings, not with the systems that produced the analysis.
These are not novel principles. They are extensions of intellectual practices that professions committed to high-stakes decision-making have developed over generations. The challenge is that most institutional structures were not built around them, and the arrival of AI at scale is exposing that gap with considerable speed.
What Institutions Were Not Designed For
Governance structures, almost universally, were designed for a world in which analysis was scarce, expensive, and slow to produce.
The committee system of a modern parliament was designed for a world in which a minister received a briefing document prepared over days or weeks by officials who had themselves synthesised available information through extended professional effort. The document was scarce. It was therefore presumed to have been worth preparing. It was read carefully because it represented a significant investment of institutional resource.
The board of a major corporation received quarterly reports prepared by teams of analysts over extended periods, audited by external professionals, and presented by executives who had spent time understanding the material they were presenting. The report was scarce. The meeting time allocated to reviewing it was therefore considered proportionate.
The intelligence community presented its principal assessments, the carefully calibrated documents that summarise national intelligence on a given subject, to senior decision-makers at intervals that reflected the time required to synthesise, verify, and express analytical judgment responsibly.
None of these institutional forms was designed for a world in which an AI system can generate fifty plausible scenarios from a given data set in the time it once took an analyst to draft a single paragraph.
The volume problem is obvious. Less obvious, but more consequential, is the authority problem. When analysis was scarce, its scarcity was itself a quality signal. A briefing document that had survived institutional preparation processes carried implicit assurance that it had been considered worth producing. When analysis becomes abundant, that quality signal disappears. The board or the committee or the minister must now evaluate the credibility of conclusions rather than simply receiving conclusions whose credibility was embedded in the process that produced them.
Most institutions have not yet confronted this change explicitly. They continue to receive AI-generated analysis through the same channels, in the same formats, and with the same procedural trappings as the analysis that preceded it. The form gives an impression of continuity. The underlying epistemics have changed entirely.
This has a particular resonance for Western Australia, which serves here not as a parochial case but as a precise illustration of how insight inflation operates in a jurisdiction structurally representative of many peripheral democracies: prosperous, globally integrated, and almost entirely dependent on the analytical judgments of institutions it does not control. The mechanism is not theoretical. When Chinese steel demand projections were systematically misread in the period leading to the 2015 to 2016 iron ore price collapse, the consequences arrived in Western Australia with considerable precision: state budget revenues fell by billions of dollars, infrastructure programmes were deferred, and household conditions in communities dependent on the resources sector deteriorated in ways that took years to recover. The proximate cause was a pricing correction. The upstream cause was a sustained misreading of policy intentions within a major economy, filtered through multiple layers of institutional interpretation before reaching the markets and governments that depended on it.
Western Australia sits at considerable remove from the primary centres of geopolitical and economic decision-making. The strategic intelligence assessments that inform Australian federal policy are produced in Canberra, filtered through the architecture of the Five Eyes intelligence alliance, and shaped by the analytical frameworks of institutions in Washington, London, and Langley. By the time a strategic picture reaches the policy environment of Perth, it has passed through multiple layers of institutional curation.
Under conditions of insight inflation, each of those layers becomes a potential point at which the quality of the filtering matters enormously. The question is not merely whether Western Australia receives accurate information about the world. It is whether the institutions that mediate that information, federal agencies, intelligence assessors, departmental briefing chains, are themselves equipped to maintain the quality of judgment that genuine filtering requires, rather than simply transmitting whatever the upstream AI system has generated with most confidence.
The economic stakes are direct. Western Australia’s prosperity is structurally connected to commodity markets that are acutely sensitive to geopolitical conditions. The iron ore trade, the LNG industry, the growing defence and technology investment associated with AUKUS commitments: each of these depends on an accurate reading of international conditions that are themselves increasingly shaped by the informational environment in which great power decisions are made.
When strategic miscalculation occurs, in part because AI-generated assessments have displaced the more cautious and qualified judgments that experienced analysts might have offered, the consequences do not remain abstractions in distant capitals. They arrive, with economic and social precision, in the communities of a state that has historically been the last to be consulted and the first to bear the consequences of decisions made elsewhere.
The case for robust institutional judgment at the state level is therefore not merely a matter of good governance in the abstract. It is a concrete requirement of self-interest in an environment where the quality of filtering at higher levels of the system cannot be assumed.
The Scarce Resource
There is a productive paradox at the centre of this analysis. When insight becomes abundant, judgment becomes scarce.
The value of any resource is a function of its relative scarcity and its utility. Data was once valuable because it was scarce. In the current environment, data is superabundant and its value as a commodity has declined accordingly. Processed information followed the same trajectory with some delay. Insight, in Elkington’s formulation, is now beginning the same cycle: produced at scale by AI systems, losing marginal value with each additional unit generated, eventually becoming a noise problem rather than a signal resource.
What AI cannot easily produce is accountable decision-making. It can generate scenarios. It can produce recommendations. It can synthesise evidence. What it cannot do is bear responsibility for the decision that follows.
Responsibility remains human even when reasoning is machine-assisted.
And because responsibility remains human, the quality of human judgment, the capacity to decide which interpretation matters, which risk is real, which recommendation is sound, and which conclusion is an artefact of the system’s training data and incentive structure rather than a genuine reflection of the world, becomes the genuinely scarce resource in the AI age.
Elkington captures part of this when he observes that human conviction may recover some of its lost value precisely because it is harder to manufacture than machine-generated analysis. That observation is correct but understated. What recovers value is not conviction alone. It is trained discernment: the capacity for judgment that has been developed through extended engagement with complex problems, real consequences, and the kind of intellectual accountability that comes from having been wrong in ways that mattered.
The professions that have historically commanded social authority, medicine, law, the senior civil service, the judiciary, did not do so merely because they possessed knowledge that others lacked. They commanded authority because they were accountable for judgments made under conditions of genuine uncertainty, and because the training required to make those judgments responsibly was long, demanding, and not easily replicated.
AI does not eliminate the value of those judgments. It creates a new context in which their value is more visible, because the alternative to human judgment is no longer ignorance but the potentially worse condition of insight without accountability.
The institutional response to this recognition is what might be called epistemic infrastructure: systems and roles whose function is not to generate more analysis but to adjudicate between competing analyses. The designated sceptic that Elkington mentions briefly is one element of this infrastructure. But the concept is broader.
It encompasses the parliamentary committee operating as a formal house of review, interrogating the analytical foundations of legislation rather than simply receiving them. It encompasses the clinical governance board that reviews diagnostic pathways for evidence of systematic bias rather than simply processing individual cases. It encompasses the intelligence oversight committee that scrutinises the analytical assumptions embedded in the assessments it receives rather than simply noting their conclusions.
In practical terms, a designated sceptic role within a board or parliamentary committee would carry defined authority to commission independent assessments of AI-generated analytical products, to require that key assumptions be made explicit and documented, and to record formally when conclusions have been discarded and on what grounds. This is not a novel institutional concept. It draws on existing models in clinical governance, financial audit, and intelligence oversight.
These institutions are not new. What is new is the urgency of their function in an environment where the volume and velocity of analytically credible conclusions make unexamined acceptance the path of least resistance.
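To make the record-keeping element of that role concrete, the sketch below imagines what a formal adjudication record for a single AI-generated analytical product might contain. The structure and field names are hypothetical illustrations of the principle, not a description of any existing system.

```python
# A hypothetical record of how a designated sceptic adjudicated one
# AI-generated analytical product. All fields are illustrative only.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AdjudicationRecord:
    product_id: str                 # which analytical product was reviewed
    reviewed_on: date
    accountable_officer: str        # the identifiable human who bears responsibility
    stated_question: str            # the prior hypothesis the analysis was meant to test
    key_assumptions: list[str] = field(default_factory=list)
    independent_checks: list[str] = field(default_factory=list)
    accepted: bool = False
    grounds_if_discarded: str = ""  # recorded formally when a conclusion is set aside

# Example entry, entirely invented for illustration:
record = AdjudicationRecord(
    product_id="scenario-batch-044",
    reviewed_on=date(2026, 5, 1),
    accountable_officer="Chair, Audit and Risk Committee",
    stated_question="Does projected demand justify deferring the capital programme?",
    key_assumptions=["Demand model trained only on pre-2024 trade data"],
    independent_checks=["External review of demand assumptions commissioned"],
    accepted=False,
    grounds_if_discarded="Training data predates the relevant policy change.",
)
```

Nothing here is technically demanding. The difficulty, as with most governance, lies in giving someone the standing to fill such a record in honestly, and the obligation to do so before the conclusion is acted upon.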
A Philosophical Problem Before a Technical One
There is a tendency in contemporary discourse to treat the challenges associated with artificial intelligence as technical problems awaiting technical solutions. More sophisticated models, better training data, more rigorous alignment processes, improved regulatory frameworks: these are the proposed remedies, and they are not without merit.
But the problem that insight inflation reveals is not primarily technical. It is philosophical. The fundamental question it poses is not how to produce better analysis. It is how to decide what questions deserve to be asked in the first place.
Elkington gestures toward this when he writes that good judgment in the age of insight overload may look very much like what good judgment has always looked like: knowing what you are trying to understand before you begin looking. That observation is correct. But it does not go far enough.
Knowing what questions matter is not a computational skill. It is a product of values, experience, institutional memory, and the kind of engagement with consequence that comes from having lived with the results of decisions over time. It requires familiarity with what has been tried before and why it failed. It requires the capacity to hold uncertainty without resolving it prematurely into a false confidence. It requires, in a word, wisdom.
Information describes reality. Insight interprets reality. Wisdom chooses how to act within reality. Artificial intelligence expands the first two categories with extraordinary efficiency. It does not expand the third.
The civilisational challenge of the current moment is not that we lack information or interpretation. It is that we risk constructing institutional and political cultures in which the appetite for wisdom has been displaced by the appetite for the rapid, confident conclusion, the algorithmic certainty that feels like an answer even when the underlying question has not been properly formed.
Democratic institutions, at their best, are mechanisms for negotiating between competing interpretations of reality in ways that preserve accountability and allow for revision. A legislature that deliberates slowly, a judiciary that reasons from precedent, a scientific community that requires replication before acceptance: these are not inefficiencies awaiting optimisation. They are structural features of systems designed to resist the seductive authority of premature certainty.
The insight inflation problem does not threaten democracy only through disinformation, though that threat is real. It threatens democracy through the subtler mechanism of authoritative consensus: the production, at scale and at speed, of conclusions that carry the epistemological markings of careful analysis but were generated without the institutional accountability that genuine analysis requires.
Elkington has opened a genuinely important intellectual door. What lies beyond it is a set of questions about the distribution of interpretive power, the design of institutions capable of maintaining the quality of judgment under conditions of analytical abundance, and the preservation of the specifically human capacity for accountable wisdom in a world that increasingly rewards the rapid and the confident over the considered and the honest.
Those are not questions for technologists to answer alone. They are for legislators, clinicians, jurists, educators, and citizens. They are, in the most direct sense, political questions. And they require political answers.
The Walker Briefing is published by Hon Dr Brian Walker MLC, Leader of Legalise Cannabis Western Australia and Member of the Western Australian Legislative Council. Dr Walker is a practising general practitioner and Deputy Chair of Committees in the Western Australian Parliament.