Incubator
These are businesses we believe need to exist. Some are early concepts, others are actively being explored. If you want to help build one — as a co-founder, early employee, or advisor — we want to hear from you.
The legal and regulatory backbone of AI usage at scale
Governance & compliance
As AI systems begin making decisions across an organisation, companies face immediate and escalating pressure to explain, audit, and justify those decisions to regulators, boards, and customers. We're building the operating system for AI compliance: a platform that records decision pathways, tracks reasoning chains, and generates audit trails for internal and external stakeholders. Over time, it evolves into an always-on compliance engine that continuously monitors AI behaviour against regulatory frameworks and internal policies.
The problem
Without structured governance infrastructure, large-scale AI deployment becomes legally and operationally impossible. Most organisations have no coherent way to demonstrate what their AI systems decided, why, or under what constraints.
Who we're looking for
The central nervous system for organisations running thousands of AI agents
Orchestration
AI will not exist as a single system inside an organisation — it will exist as thousands of agents operating simultaneously across functions. This creates a fundamentally new problem: ensuring independently optimising agents don't produce globally suboptimal or contradictory outcomes. We're building the coordination layer that manages dependencies, resolves conflicts, and enforces alignment between local agent actions and global organisational objectives. This category will evolve into something akin to an organisational operating system, sitting above all execution layers.
The problem
There is no shared infrastructure for multi-agent alignment today. Teams build fragile bespoke wiring between agents that breaks silently, scales poorly, and is invisible to anyone responsible for the outcomes.
Who we're looking for
Identity, permissions, and trust scoring for autonomous AI agents
Trust infrastructure
As AI agents act autonomously — browsing the web, sending emails, executing transactions, modifying systems — organisations need to know which agent did what, what permissions it had, and how reliable it has been historically. We're building the identity and attribution layer for AI agents: a system that assigns persistent identities, tracks behaviour across contexts, and generates trust scores based on performance history. It underpins accountability, security, and governance across all AI operations.
The problem
Without identity, there is no control. Without control, there is no scale. Organisations deploying autonomous agents today have almost no visibility into agent behaviour and no principled way to assign accountability when things go wrong.
Who we're looking for
Continuous risk evaluation for AI decision patterns at machine speed
Risk management
Even with governance systems in place, companies require continuous evaluation of their AI risk exposure. These platforms analyze AI decision patterns, detect anomalies, and estimate potential downside across financial, legal, and operational domains. They function similarly to internal audit and risk management teams, but at machine speed and scale. Over time, they integrate directly with insurance markets and regulatory systems — becoming the standard by which AI risk is priced and managed.
The problem
Organisations deploying AI at scale have no principled visibility into their aggregate risk exposure. Individual decisions may look fine; patterns across thousands of decisions may be catastrophic. No one is watching.
Who we're looking for
The arbiter of truth inside AI-native organisations
Trust infrastructure
When intelligence becomes abundant, credibility becomes scarce. These systems evaluate AI outputs based on accuracy, consistency, and historical performance. They compare outputs across multiple models, assign confidence scores, and detect hallucinations or unreliable reasoning before those outputs influence decisions. They effectively become the arbiter of truth within an organisation — and in the long term, may extend beyond individual companies into shared trust networks across industries.
The problem
Organisations using AI at scale have no systematic way to know which outputs to trust. Confident-sounding hallucinations are indistinguishable from reliable outputs without infrastructure specifically designed to evaluate and calibrate AI credibility.
Who we're looking for
Ensuring AI decisions are grounded in real, unmanipulated information
Data integrity
As AI-generated content comes to dominate the information environment, distinguishing real from synthetic becomes a critical operational challenge. These systems track the origin of data, verify authenticity, and flag manipulated or low-integrity inputs before they reach AI systems. They ensure that decisions are grounded in reliable information rather than corrupted or fabricated signals — a problem that will grow dramatically as synthetic content becomes cheap to produce.
The problem
Without data provenance infrastructure, analytics, forecasting, and decision-making degrade rapidly. Organisations have no principled mechanism to know whether the information their AI systems are acting on is real.
Who we're looking for
Tracking rights and provenance across AI training and inference
Data governance
AI systems rely on vast amounts of data, much of which carries legal and ethical constraints. These platforms track data origins, licensing rights, and usage permissions across training and inference processes — providing a complete chain of custody for data as it flows through AI systems. They will become essential for compliance with intellectual property laws and emerging data regulations, and over time may form the foundation for data marketplaces and licensing ecosystems.
The problem
Most organisations have no clear picture of where their training and inference data came from, what rights attach to it, or how its usage might create legal exposure. This problem compounds as AI systems interact with each other.
Who we're looking for
Structured persistence for decisions, reasoning, and institutional knowledge
Knowledge management
AI-driven organisations will generate enormous volumes of decisions and reasoning — and without structured memory, all of it is lost. We're building systems that store not just outputs, but the decisions, assumptions, reasoning chains, and outcomes that define how an organisation thinks and operates. This allows organisations to query their own history, understand why past decisions were made, and build compounding institutional intelligence over time rather than starting from zero with each new AI session.
The problem
Every AI assistant starts stateless. Organisations lose enormous value when context, reasoning, and institutional knowledge evaporate between sessions. The problem compounds with every new tool, model, and agent deployment.
Who we're looking for
Epistemic quality control for AI-generated organisational knowledge
Knowledge management
Beyond storing knowledge, organisations must ensure that it remains accurate over time. These systems validate AI-generated knowledge, cross-check sources, detect inconsistencies, and surface content likely to have become stale. They act as an internal epistemic quality control layer — preventing the gradual degradation of organisational understanding as AI generates content faster than humans can review it.
The problem
AI-generated knowledge accumulates silently in internal systems. Without continuous validation, organisations end up confidently acting on outdated, contradictory, or fabricated information at scale.
Who we're looking for
HR infrastructure for AI agents — task assignment, performance, cost
Agent infrastructure
AI agents will function similarly to employees: they take on tasks, perform work, and produce outputs. Organisations will need platforms to manage them at scale — assigning tasks, tracking performance, allocating resources, and optimising cost. These systems effectively become HR and management infrastructure for AI workers, ensuring efficiency and accountability across an AI-native workforce that may outnumber human employees by orders of magnitude.
The problem
Organisations deploying AI agents today have no principled system for managing them. Work is assigned ad hoc, performance is invisible, costs are opaque, and there is no mechanism for continuous improvement.
Who we're looking for
Dynamic routing of AI workloads optimised for latency, cost, and performance
Orchestration
As organisations run thousands of AI workloads simultaneously, the question of how to distribute them efficiently across models and infrastructure becomes critical. These systems manage task scheduling and routing at the infrastructure level — optimising dynamically for latency, cost, and performance based on real-time system conditions. This layer is the operational backbone of AI-at-scale, analogous to what load balancers and container orchestration did for web infrastructure.
The problem
AI workloads are expensive, latency-sensitive, and highly heterogeneous. Without intelligent scheduling, organisations overpay, underperform, and have no visibility into how resources are being consumed.
Who we're looking for
Model versioning, lineage tracking, and regression detection for continuously evolving AI
Model infrastructure
AI systems are not static — they evolve continuously through retraining, fine-tuning, and adaptation. Managing this evolution at scale requires dedicated infrastructure for model versioning, training pipeline management, lineage tracking, and performance regression detection. This ensures that improvements do not introduce instability or misalignment, and that organisations can reason clearly about how their AI systems have changed over time and why.
The problem
Without robust evolution infrastructure, organisations lose track of how and why their AI systems changed. Regressions are discovered in production. Successful versions can't be reproduced. The intellectual property of the AI system is opaque even to its operators.
Who we're looking for
Protecting AI reasoning from adversarial manipulation and attack
Security
Traditional cybersecurity protects systems and data. Cognitive security protects reasoning itself. These platforms detect adversarial inputs, prompt injection attacks, data poisoning, and manipulation of AI outputs — monitoring reasoning flows and ensuring integrity at every stage of the decision-making process. This is a major new security category, parallel to but distinct from cybersecurity, that will become mandatory infrastructure as AI takes on consequential decisions.
The problem
AI systems are vulnerable to attacks that have no equivalent in traditional security — prompt injections, adversarial examples, data poisoning, output manipulation. Organisations have almost no defenses against these vectors today.
Who we're looking for
Detecting AI drift before misalignment becomes a production incident
Security
AI systems can drift away from intended goals over time in ways that are subtle, hard to detect, and compounding. These platforms continuously test and monitor AI behaviour against organisational objectives, ethical constraints, and risk tolerances — simulating edge cases and detecting misalignments before they scale into major issues. They function as a continuous testing and monitoring layer for AI behaviour, not just for bugs but for goal alignment.
The problem
AI systems don't announce when they've drifted. By the time misalignment becomes visible, it has often already caused significant harm. There is no standard infrastructure for continuously verifying that AI systems are behaving as intended.
Who we're looking for
Giving humans effective supervision over large numbers of autonomous AI agents
Control systems
Despite increasing AI autonomy, humans remain accountable for outcomes. The challenge is that supervising thousands of agents through traditional interfaces is impossible. These systems provide visibility into AI decision-making, structured approval workflows, escalation mechanisms, and intervention capabilities — allowing a small number of humans to supervise a large number of agents effectively, without becoming bottlenecks or losing genuine oversight.
The problem
Human oversight today is either a fiction or a bottleneck. Either agents are effectively unsupervised, or every decision requires manual review and the efficiency of AI is negated. Neither is acceptable at scale.
Who we're looking for
Standardized protocols for negotiation, contracting, and value exchange between AI agents
Economic infrastructure
As AI agents begin transacting with each other across organisational and system boundaries, standardised protocols will emerge to govern how they negotiate, form contracts, and exchange value. These protocols will underpin machine-to-machine commerce — defining the rules of the agent economy in the same way that TCP/IP defined the rules of the internet and SWIFT defined the rules of interbank settlement.
The problem
Agent-to-agent transactions today are entirely ad hoc, with no shared standards for negotiation, verification, or dispute resolution. As the volume of autonomous transactions scales, the absence of protocol-level infrastructure will become a critical failure point.
Who we're looking for
Treasury and financial governance infrastructure for AI-driven organisations
Financial control
As AI agents are empowered to spend money and allocate resources autonomously, organisations need systems to govern those transactions — enforcing budgets, approval thresholds, and compliance requirements automatically. These systems effectively become treasury infrastructure for AI-driven organisations: ensuring that autonomous financial actions remain within sanctioned boundaries and creating a complete audit trail of every expenditure.
The problem
Organisations handing financial authority to AI agents today have almost no structured controls. Budgets are exceeded, approvals are bypassed, and compliance is an afterthought. The financial exposure from autonomous agents without proper controls is substantial.
Who we're looking for
Discovery, procurement, and evaluation of AI capabilities across organisational boundaries
Marketplace
As AI agents begin interacting across organisational boundaries, marketplaces will emerge where agents can offer services, companies can procure AI capabilities, and models can be discovered, evaluated, and compared. This infrastructure will reduce the friction of AI adoption, create liquidity for specialized AI capabilities, and over time may evolve into fully autonomous economic ecosystems where agents transact with each other continuously.
The problem
Today, discovering and procuring AI capabilities requires significant manual effort — evaluating models, negotiating contracts, building integrations. There is no liquid market for AI services and no neutral infrastructure to support it.
Who we're looking for
Testing strategies in virtual environments before committing real resources
Decision systems
Before executing decisions, organisations will increasingly want to simulate outcomes — modelling market dynamics, customer behaviour, and operational constraints in virtual environments before committing resources. These systems compress the cycle time between strategic insight and confident action, allowing leadership teams to explore far more options than would be possible through analysis alone, and to stress-test decisions against adversarial scenarios.
The problem
Strategic decision-making today is constrained by the cost of exploration. Most options are never seriously evaluated because analysis is expensive and slow. AI changes the economics of simulation, but the platforms to harness this don't exist.
Who we're looking for
Turning AI-generated option overload into actionable choices
Decision systems
AI will generate far more options, ideas, and recommendations than humans can evaluate. The bottleneck shifts from generating choices to making sense of them. These systems cluster similar ideas, rank opportunities against organisational priorities, surface contradictions, and reduce overwhelming complexity to a set of actionable choices. They transform the abundance of AI cognition into focused human action.
The problem
The output of AI systems is already overwhelming for most teams. Without infrastructure to compress and prioritize, organisations end up either ignoring AI recommendations entirely or drowning in them — neither of which captures the value.
Who we're looking for
Combining human intuition and AI simulation for genuinely new strategic thinking
Decision systems
AI excels at optimization within a defined problem space, but humans remain critical for reframing the problem itself — for creative leaps, value judgments, and the kind of insight that changes the question rather than just answering it. These platforms combine human intuition with AI simulation and exploratory scenario generation in a single collaborative workflow, enabling entirely new forms of strategic thinking that neither humans nor AI systems can achieve alone.
The problem
Current AI tools either automate decisions — reducing human agency — or present humans with analysis they still have to interpret manually. There is no platform designed specifically for the collaborative mode where human and AI thinking genuinely augment each other.
Who we're looking for
Shared validation frameworks that reduce friction in AI adoption across industries
Standards
As AI becomes critical infrastructure, industries will require standardised validation of AI system safety, reliability, and compliance. These organisations will develop and administer certification frameworks that give buyers confidence, give sellers a credentialing path, and give regulators a defensible standard to reference. They reduce friction in adoption across industries and create shared expectations that enable the broader AI economy to function.
The problem
Without shared standards, every AI procurement becomes a bespoke evaluation exercise. Buyers cannot easily compare vendors. Regulators have no reference point. The cost of validation falls entirely on individual organisations, slowing adoption and increasing risk.
Who we're looking for
Shared reputation and verification infrastructure for the AI-enabled economy
Trust networks
Individual organisations will build internal trust systems for their AI agents. But as agents begin operating across organisational boundaries — collaborating, transacting, and negotiating — there will be a need for shared infrastructure that allows companies to verify external agents, share reputation signals, and coordinate securely. This becomes the foundation for a broader AI-enabled economy: the trust layer that makes cross-organisational AI interaction possible at scale.
The problem
Organisations have no way to verify the identity, permissions, or reliability of external AI agents. As agent-to-agent interactions become common, this absence becomes a fundamental barrier to cross-organisational AI adoption.
Who we're looking for
If you want to help build one of these ideas — as a co-founder, early employee, or advisor — get in touch. Tell us who you are and what you bring.