Invisible people
The dark matter of public policy is why the models don’t work
Last week, an appeal panel ruled that NICE had made a mistake. The NHS spending watchdog had rejected two new Alzheimer’s treatments - lecanemab and donanemab - on the grounds that they weren’t cost-effective. The appeal found that NICE had failed to properly account for the impact on unpaid carers when calculating what the drugs were worth.
This sounds like a small mistake. But for me, it’s a symptom of a blind spot so embedded in how we measure value that we keep reproducing it across almost every domain of public policy, usually without noticing. We build systems for the things we can see, instead of the things that are there.
Informal dementia care costs the economy over £20 billion a year. That figure represents millions of people - partners, children, siblings - who have reorganised their lives, reduced their working hours, exhausted their savings, and quietly absorbed the kind of stress and loss that doesn’t appear in any clinical trial.
The reason is simple: our databases record individuals, not relationships. Every person has a record. Nobody’s record is connected to information about the people whose lives are shaped by their illness. Carers are invisible because the instruments we use to count things were not built to see them.
But care is not the only thing we’re missing. This kind of blind spot is everywhere.
The maths doesn’t work
You have probably heard of the astrophysics concept of ‘dark matter’ - an invisible, unknown substance that makes up about 27% of the universe’s mass-energy content and roughly 85% of its total matter.
It was not discovered by observing it. It was inferred: the observable universe didn’t behave as it should if only visible matter existed. The gravitational calculations didn’t add up. Stars moved in ways that made no sense unless there was something there that the telescopes couldn’t detect. Dark matter was hypothesised not as a curiosity but as a necessity: the model required it.
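The two percentages quoted above are consistent with each other, which is easy to miss. On the standard cosmological budget, ordinary (baryonic) matter makes up roughly 5% of the universe's mass-energy (a commonly cited figure, not stated above), and dark energy is not matter at all - so dark matter's share of all matter follows from simple arithmetic. A minimal sketch:

```python
# Consistency check on the two dark-matter figures quoted above,
# using commonly cited mass-energy shares:
#   ~27% dark matter, ~5% ordinary (baryonic) matter, ~68% dark energy.
dark_matter = 0.27
ordinary_matter = 0.05  # assumed value, not stated in the text

# Dark energy is not matter, so the matter budget is just these two terms.
share_of_matter = dark_matter / (dark_matter + ordinary_matter)
print(f"Dark matter as a share of all matter: {share_of_matter:.0%}")  # ~84%
```

So "about 27% of mass-energy" and "roughly 85% of total matter" are two views of the same inventory, not two separate claims.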
I want to suggest that something similar is happening across large parts of public policy and economic life. The model doesn’t predict observed behaviour. The numbers don’t add up. And the reason, in each case, is the same: there is something there - doing real work, bearing real weight, shaping real outcomes - that our instruments were not built to see.
The NICE decision is one example. Let me give you two more.
Our standard methodology for understanding the prevalence of gambling harm is to ask people whether they are showing symptoms of problem gambling. We do not, as a rule, ask the partners and children and parents and friends - the people who might be experiencing the harm most acutely, who might be able to see it when the addict themselves cannot, and who are certainly bearing costs that the official count never captures. So we get the estimates wrong, systematically and in the same direction. The dark matter of gambling harm - the affected network around each addict - is there, is real, is measurable in principle, and is almost entirely absent from our policy models.
The social care funding debate is the largest version of the same problem. The national conversation is almost entirely about formal care: the packages, the assessments, the funding thresholds, the funding gaps. But most care is informal. It is provided by families and neighbours and friends, at no cost to the public purse, in ways that the state neither funds nor counts nor, for the most part, even sees. Crucially, the size of the demand on formal care varies enormously according to the capacity of the informal care networks that surround each person. A person with a strong family network and a supportive community needs less from the state. A person who is isolated and alone needs more. But our models treat need as a property of the individual, not of the relational system they are embedded in. So our projections for what social care will cost are consistently wrong, in a consistent direction, for a consistent reason.
These are not three different problems. They are the same problem. I’ve seen it in every policy area I’ve studied in depth - financial services, employment support, health care, transport, housing, energy markets. Our systems are calibrated to see organisations, not relationships; formal structures, not informal ones; individuals, not the networks they inhabit.
The institutional monoculture
These care examples are about relationships: people whose costs and contributions don’t appear in the model. But there is a second kind of invisibility, one level up: entire organisational forms that don’t register because the instruments were built to see something else.
My friend James Plunkett, whose work on the relational state I’ve written about before, has been thinking about why good ideas at the edges of the system struggle to spread. His diagnosis is sharp: bureaucracy has changed the soil conditions. Over decades, public systems have built measurement frameworks, funding mechanisms, accountability structures, and policy discourse all optimised for a particular organisational form: the large, formal, hierarchical institution. Everything that doesn’t look like that becomes, in a practical sense, invisible.
Land trusts, care cooperatives, mutual aid networks, community enterprises, neighbourhood health initiatives - these are not experimental or unproven. Many are mature, well-evidenced, and extraordinarily effective at doing things the formal system struggles with: building trust, sustaining long-term relationships, reaching people that institutions can't find, doing preventative work that reduces demand downstream. They do the work, but they remain almost entirely invisible to the national systems that allocate resources, because a commissioning team cannot easily fund something without a standard organisational form, a regulator cannot easily oversee something that operates through relationships rather than rules, and a policy model cannot incorporate something it has no data on.
This is dark matter of a different kind, but the effect is the same: the model underestimates what is there, misattributes outcomes, and systematically underfunds the thing that is actually doing a huge chunk of the work.
The same problem in science
The innovation system has a version of this too - one I have been working on directly. The policy conversation about research commercialisation has, quite reasonably, focused on university spin-outs - the companies that emerge from academic labs, supported by technology transfer offices, into a venture ecosystem that has grown considerably in recent years. There has been good policy work here, and it has produced real results.
But university spin-outs represent perhaps a third of the UK’s deep tech ecosystem at most. The majority of science-driven ventures - the companies doing the riskiest translational work between academic discovery and industrial deployment - begin outside universities altogether. They are founded by former academics frustrated by institutional constraints, by clinicians who couldn’t get the NHS to move on innovations they knew would work, by engineers from large firms who got fed up watching ideas get shelved because they didn’t fit the product roadmap.
One founder I know waited a year for her university’s technology transfer office to agree licensing terms, gave up, invented a new solution outside the system entirely, and spent eighteen months doing what she called “spare-bedroom science”: borrowing friends’ labs, scraping together grant funding, building something real without any of the infrastructure that exists to help people like her. She was, for most of that period, entirely invisible to the public innovation system - not because what she was doing was unimportant, but because the instruments for supporting innovation were built around institutions, not around people.
A paper I’ve been working on with my colleagues at Zinc tries to map this population and think seriously about what it would take to support it. The core argument is the same as the argument I am making here: the dark matter of the innovation system is not a niche. It is most of the system. The model doesn’t work without it, and the model will not improve until we build instruments that can see it.
What different instruments would look like
I am not going to pretend this is easy. The reason our systems count what they count is not stupidity. It is that counting relationships is harder than counting individuals; that funding informal networks is harder than funding organisations; that building accountability frameworks for things that work through trust rather than rules is harder than applying standard governance.
And I don’t think there’s a simple choice between “build better models” (shine a light onto the dark matter) or “chill out” (put money into it and hope); we need a bit of both.
Computational capacity to build better models has been transformed in recent years; the big challenge is identifying the underlying data that would enable us to understand how lives are really lived, time really spent, and relationships really formed. Meta, the RSA and Neighbourly Lab dipped their toes into this recently and produced a compelling report - Friends with Benefits - that mostly reminds me how little we know. But that is no reason not to seek out new data sources that can underpin a more granular understanding of how social capital is formed and the work it does in our lives.
However, unless (or until…) we know what everyone thinks, feels and does at every moment of the day, our models will always be limited. Modelling protein folding feels easier than modelling human society. So if we want to take the brave leap of putting more money into the dark matter, we will need to change how we think about risk.
Public funding systems are almost entirely oriented around process risk: the risk that the steps we wanted to happen didn’t happen. This is what accountability and audit systems are built to detect, which means the organisations we fund have to be ones capable of producing the data, and the people we fund need to be ones who can prove prior delivery. In practice: people who have already been funded, doing things that have already been done.
We are oddly relaxed about outcome risk - the risk that we did all the steps and got none of the results. A programme can proceed through every stage of a logic chain, tick every compliance box, produce every required report, and deliver almost nothing of value to the people it was meant to serve.
Rebalancing this is a precondition for making the dark matter fundable. A funding culture that took outcome risk as seriously as process risk - that asked “did it work?” as urgently as it asks “did the receipts arrive?” - would find the invisible infrastructure much easier to see, and much easier to support. It would also, over time, be less likely to fund the things that do all the steps and arrive nowhere in particular.
The dark matter is doing work
I want to end by going back to the metaphor.
Dark matter is not passive. It is not an absence. It is matter, exerting gravitational force, shaping the structure of everything around it. The universe has the shape it has partly because of something we cannot directly observe. Remove it from the model and the maths stops working.
The same is true of society’s dark matter, at every level. It’s the carers, the friends, the second cousins, the pub landlord who always knows which businesses are hiring, the aunty at the library who knows where to get debt advice, housing support or a great cheap meal. It’s the peculiar housing co-operative with its informal rules and incredible track record, the community association that does flood prevention as well as running a food bank for reasons that have been lost in time, the dance class that’s really a refuge for dozens of teenage girls. And it’s the scientist tinkering on a borrowed lab bench, the engineer inventing in his garage, determined to solve problems but eligible for no support.
They all emit energy that could drive change, but instead they bounce off bureaucracies that were designed to control the visible, and that are far less curious than they should be about why it isn't working.
I know that when I say state capacity needs to expand to reach these less legible, less institutional, more relational forms, I sound a bit woo-woo. But it's the science, not the sentiment, that takes me there.
Come and join me.
PS: I mentioned a paper I’ve been working on. It’s called Innovation without Walls: Supporting Venture Science in the UK, written with my colleague Rachel Carey at Zinc Innovation Partners. It maps the population of venture scientists working outside universities, explains why they are systematically underserved by existing policy infrastructure, and proposes three new models for supporting them: a demand-led Technology Transfer Office that works across institutional boundaries, a place-based Civic Enterprise Building model that uses public procurement to call new firms into existence, and a Slingshot Network of nimble, time-limited innovation hubs organised around specific challenges rather than geographies. If any of that sounds relevant to your work, I’d love to hear from you.

