
Incubator

Ideas we want to build

These are businesses we believe need to exist. Some are early concepts, others are actively being explored. If you want to help build one — as a co-founder, early employee, or advisor — we want to hear from you.

Exploring

AI Governance & Compliance OS

The legal and regulatory backbone of AI usage at scale

Governance & compliance

As AI systems begin making decisions across an organisation, companies face immediate and escalating pressure to explain, audit, and justify those decisions to regulators, boards, and customers. We're building the operating system for AI compliance: a platform that records decision pathways, tracks reasoning chains, and generates audit trails for internal and external stakeholders. Over time, it evolves into an always-on compliance engine that continuously monitors AI behaviour against regulatory frameworks and internal policies.
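
To make this concrete, here is a minimal sketch of the kind of append-only audit record such a platform might keep. Everything here, from the field names to the hash-chaining scheme, is an illustrative assumption rather than a spec:

```typescript
import { createHash } from "node:crypto";

// One entry in an append-only audit log: what was decided, by which
// system, under which policy, and a hash chaining it to the prior entry.
interface AuditRecord {
  decisionId: string;
  model: string;            // which AI system produced the decision
  inputsDigest: string;     // hash of the inputs, not the raw data
  reasoningSummary: string; // recorded reasoning chain or trace reference
  policyVersion: string;    // the compliance policy in force at the time
  timestamp: string;
  prevHash: string;         // links each record to its predecessor
}

// Chain a new record to the log so retroactive edits are detectable.
function appendRecord(
  log: AuditRecord[],
  entry: Omit<AuditRecord, "prevHash">,
): AuditRecord[] {
  const prevHash = log.length
    ? createHash("sha256").update(JSON.stringify(log[log.length - 1])).digest("hex")
    : "genesis";
  return [...log, { ...entry, prevHash }];
}
```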

The problem

Without structured governance infrastructure, large-scale AI deployment becomes legally and operationally impossible. Most organisations have no coherent way to demonstrate what their AI systems decided, why, or under what constraints.

Who we're looking for

  • Engineer with experience in append-only audit systems or compliance tooling
  • Lawyer or compliance professional with enterprise AI governance exposure
  • Operator who has shipped regulated software in financial services, healthcare, or legal
Exploring

Multi-Agent Coordination Platform

The central nervous system for organisations running thousands of AI agents

Orchestration

AI will not exist as a single system inside an organisation — it will exist as thousands of agents operating simultaneously across functions. This creates a fundamentally new problem: ensuring independently optimising agents don't produce globally suboptimal or contradictory outcomes. We're building the coordination layer that manages dependencies, resolves conflicts, and enforces alignment between local agent actions and global organisational objectives. This category will evolve into something akin to an organisational operating system, sitting above all execution layers.
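
One way to picture the core primitive: agents declare intents over shared resources, and the coordination layer detects collisions before execution. A toy sketch, with all names assumed for illustration:

```typescript
// An agent's declared intent: what it wants to do to which shared resource.
interface Intent {
  agentId: string;
  resource: string;        // e.g. "crm:customer:123"
  action: "read" | "write";
}

// Flag pairs of intents that touch the same resource when at least one
// is a write: the simplest possible conflict rule.
function findConflicts(intents: Intent[]): [Intent, Intent][] {
  const conflicts: [Intent, Intent][] = [];
  for (let i = 0; i < intents.length; i++) {
    for (let j = i + 1; j < intents.length; j++) {
      const a = intents[i], b = intents[j];
      if (a.resource === b.resource && (a.action === "write" || b.action === "write")) {
        conflicts.push([a, b]);
      }
    }
  }
  return conflicts;
}
```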

The problem

There is no shared infrastructure for multi-agent alignment today. Teams build fragile bespoke wiring between agents that breaks silently, scales poorly, and is invisible to anyone responsible for the outcomes.

Who we're looking for

  • Distributed systems engineer with background in message brokering or workflow orchestration
  • AI engineer who has operated multi-agent pipelines in production at scale
  • Product leader with experience in enterprise developer or operations tooling
Concept

AI Identity & Attribution

Identity, permissions, and trust scoring for autonomous AI agents

Trust infrastructure

As AI agents act autonomously — browsing the web, sending emails, executing transactions, modifying systems — organisations need to know which agent did what, what permissions it had, and how reliable it has been historically. We're building the identity and attribution layer for AI agents: a system that assigns persistent identities, tracks behaviour across contexts, and generates trust scores based on performance history. It underpins accountability, security, and governance across all AI operations.
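
As a rough sketch of the primitive, consider a persistent identity record whose trust score is updated from observed outcomes via an exponentially weighted average. The schema and the update rule are assumptions, not a design commitment:

```typescript
// A persistent identity for an agent, plus a trust score derived from
// its tracked behaviour over time.
interface AgentIdentity {
  agentId: string;
  permissions: string[]; // e.g. ["email:send", "payments:initiate"]
  trustScore: number;    // 0..1, from historical performance
}

const ALPHA = 0.1; // how quickly recent behaviour outweighs history

// Fold one observed outcome into the running trust score.
function recordOutcome(agent: AgentIdentity, success: boolean): AgentIdentity {
  const observation = success ? 1 : 0;
  return {
    ...agent,
    trustScore: (1 - ALPHA) * agent.trustScore + ALPHA * observation,
  };
}
```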

The problem

Without identity, there is no control. Without control, there is no scale. Organisations deploying autonomous agents today have almost no visibility into agent behaviour and no principled way to assign accountability when things go wrong.

Who we're looking for

  • Security or identity engineer with experience in IAM, OAuth, or zero-trust architectures
  • AI engineer who has built agentic systems and understands their failure modes
  • Enterprise software founder or operator with experience selling to security or IT buyers
Concept

AI Audit & Risk Assessment

Continuous risk evaluation for AI decision patterns at machine speed

Risk management

Even with governance systems in place, companies require continuous evaluation of their AI risk exposure. These platforms analyse AI decision patterns, detect anomalies, and estimate potential downside across financial, legal, and operational domains. They function similarly to internal audit and risk management teams, but at machine speed and scale. Over time, they integrate directly with insurance markets and regulatory systems — becoming the standard by which AI risk is priced and managed.
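
A deliberately simple illustration of the pattern-level idea: flag decisions whose risk metric sits far outside the recent baseline. Real systems would go well beyond a z-score, but the shape of the check is this:

```typescript
// Return the indices of decisions whose risk metric deviates sharply
// from the mean of the batch: a toy stand-in for anomaly detection.
function anomalies(values: number[], threshold = 3): number[] {
  const mean = values.reduce((s, v) => s + v, 0) / values.length;
  const variance = values.reduce((s, v) => s + (v - mean) ** 2, 0) / values.length;
  const std = Math.sqrt(variance) || 1; // guard against a zero-variance batch
  return values
    .map((v, i) => ({ z: Math.abs(v - mean) / std, i }))
    .filter((d) => d.z > threshold)
    .map((d) => d.i);
}
```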

The problem

Organisations deploying AI at scale have no principled visibility into their aggregate risk exposure. Individual decisions may look fine; patterns across thousands of decisions may be catastrophic. No one is watching.

Who we're looking for

  • Data engineer or statistician with anomaly detection or risk modelling background
  • Risk professional from financial services, insurance, or management consulting
  • Founder with experience in compliance, internal audit, or enterprise risk tooling
Concept

Cognitive Trust Infrastructure

The arbiter of truth inside AI-native organisations

Trust infrastructure

When intelligence becomes abundant, credibility becomes scarce. These systems evaluate AI outputs based on accuracy, consistency, and historical performance. They compare outputs across multiple models, assign confidence scores, and detect hallucinations or unreliable reasoning before those outputs influence decisions. They effectively become the arbiter of truth within an organisation — and in the long term, may extend beyond individual companies into shared trust networks across industries.
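
One crude but illustrative credibility signal is cross-model agreement: ask several independent models the same question and score the majority answer by how many concur. A sketch, with assumed names throughout:

```typescript
// Score a claim by how many independent models agree with the majority
// answer. Assumes at least one answer is provided.
function agreementConfidence(answers: string[]): { answer: string; confidence: number } {
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  let best = answers[0], bestCount = 0;
  for (const [answer, count] of counts) {
    if (count > bestCount) { best = answer; bestCount = count; }
  }
  return { answer: best, confidence: bestCount / answers.length };
}
```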

The problem

Organisations using AI at scale have no systematic way to know which outputs to trust. Confident-sounding hallucinations are indistinguishable from reliable outputs without infrastructure specifically designed to evaluate and calibrate AI credibility.

Who we're looking for

  • ML researcher with background in uncertainty quantification, calibration, or hallucination detection
  • Engineer who has built evaluation or observability infrastructure for language models
  • Product leader with experience in enterprise data quality or information management
Concept

Reality Verification & Data Provenance

Ensuring AI decisions are grounded in real, unmanipulated information

Data integrity

As AI-generated content comes to dominate the information environment, distinguishing real from synthetic becomes a critical operational challenge. These systems track the origin of data, verify authenticity, and flag manipulated or low-integrity inputs before they reach AI systems. They ensure that decisions are grounded in reliable information rather than corrupted or fabricated signals — a problem that will grow dramatically as synthetic content becomes cheap to produce.
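
At its simplest, provenance means a digest taken at ingestion that downstream consumers can re-check before relying on the content. A minimal sketch, with illustrative field names:

```typescript
import { createHash } from "node:crypto";

// Where a piece of content came from, and a digest taken at capture
// time so later tampering is detectable.
interface ProvenanceRecord {
  contentId: string;
  source: string;     // origin system or URL
  capturedAt: string;
  sha256: string;     // digest of the content at capture time
}

// Re-hash the current content and compare it to the recorded digest.
function verifyIntegrity(record: ProvenanceRecord, content: string): boolean {
  const digest = createHash("sha256").update(content).digest("hex");
  return digest === record.sha256;
}
```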

The problem

Without data provenance infrastructure, analytics, forecasting, and decision-making degrade rapidly. Organisations have no principled mechanism to know whether the information their AI systems are acting on is real.

Who we're looking for

  • Engineer with experience in cryptographic content authentication or digital watermarking
  • Data scientist or researcher with background in synthetic media detection
  • Operator with experience in information security or content integrity at scale
Concept

Data Lineage & Licensing

Tracking rights and provenance across AI training and inference

Data governance

AI systems rely on vast amounts of data, much of which carries legal and ethical constraints. These platforms track data origins, licensing rights, and usage permissions across training and inference processes — providing a complete chain of custody for data as it flows through AI systems. They will become essential for compliance with intellectual property laws and emerging data regulations, and over time may form the foundation for data marketplaces and licensing ecosystems.
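
A chain of custody is naturally a graph problem: each dataset records its licence terms and its parents, and permissions propagate down derivations. A toy sketch under those assumptions:

```typescript
// A node in a data lineage graph: licence terms plus the datasets this
// one was derived from.
interface DatasetNode {
  id: string;
  licence: "commercial-ok" | "research-only" | "unknown";
  derivedFrom: string[]; // parent dataset ids
}

// Commercial use is only safe if every ancestor permits it. Assumes the
// derivation graph is acyclic.
function commercialUseAllowed(id: string, graph: Map<string, DatasetNode>): boolean {
  const node = graph.get(id);
  if (!node || node.licence !== "commercial-ok") return false;
  return node.derivedFrom.every((parent) => commercialUseAllowed(parent, graph));
}
```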

The problem

Most organisations have no clear picture of where their training and inference data came from, what rights attach to it, or how its usage might create legal exposure. This problem compounds as AI systems interact with each other.

Who we're looking for

  • Engineer with background in metadata systems, data catalogs, or knowledge graphs
  • IP lawyer or compliance professional with data licensing expertise
  • Founder with experience in data infrastructure or legal tech
Exploring

Organisational Memory

Structured persistence for decisions, reasoning, and institutional knowledge

Knowledge management

AI-driven organisations will generate enormous volumes of decisions and reasoning — and without structured memory, all of it is lost. We're building systems that store not just outputs, but the decisions, assumptions, reasoning chains, and outcomes that define how an organisation thinks and operates. This allows organisations to query their own history, understand why past decisions were made, and build compounding institutional intelligence over time rather than starting from zero with each new AI session.
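
For flavour, a minimal memory entry and a naive recall function. A production system would use embeddings and retrieval rather than keyword matching, and every name here is illustrative:

```typescript
// A decision memory entry: not just the output, but the assumptions and
// reasoning behind it, so the organisation can interrogate its own past.
interface DecisionMemory {
  id: string;
  question: string;
  decision: string;
  assumptions: string[];
  reasoning: string;
  decidedAt: string;
  outcome?: string; // filled in later, enabling retrospective learning
}

// Naive keyword recall over past decisions.
function recall(memory: DecisionMemory[], query: string): DecisionMemory[] {
  const q = query.toLowerCase();
  return memory.filter(
    (m) => m.question.toLowerCase().includes(q) || m.decision.toLowerCase().includes(q),
  );
}
```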

The problem

Every AI assistant starts stateless. Organisations lose enormous value when context, reasoning, and institutional knowledge evaporate between sessions. The problem compounds with every new tool, model, and agent deployment.

Who we're looking for

  • ML engineer with experience in retrieval systems, embeddings, or vector databases
  • Backend engineer with distributed systems or append-only storage background
  • Enterprise software operator who has felt the organisational cost of lost context
Concept

Knowledge Integrity & Validation

Epistemic quality control for AI-generated organisational knowledge

Knowledge management

Beyond storing knowledge, organisations must ensure that it remains accurate over time. These systems validate AI-generated knowledge, cross-check sources, detect inconsistencies, and surface content likely to have become stale. They act as an internal epistemic quality control layer — preventing the gradual degradation of organisational understanding as AI generates content faster than humans can review it.
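
The simplest useful check is a review horizon: flag claims that are unsourced or older than their trust window. A sketch, with assumed fields:

```typescript
// A knowledge item with provenance and a review horizon.
interface KnowledgeItem {
  claim: string;
  sources: string[];
  verifiedAt: Date;
  maxAgeDays: number; // how long this kind of claim stays trustworthy
}

// Items past their horizon, or lacking sources, get queued for review.
function needsRevalidation(item: KnowledgeItem, now = new Date()): boolean {
  const ageDays = (now.getTime() - item.verifiedAt.getTime()) / 86_400_000;
  return item.sources.length === 0 || ageDays > item.maxAgeDays;
}
```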

The problem

AI-generated knowledge accumulates silently in internal systems. Without continuous validation, organisations end up confidently acting on outdated, contradictory, or fabricated information at scale.

Who we're looking for

  • Engineer with graph database or knowledge representation background
  • AI researcher with experience in fact verification or hallucination detection
  • Product manager who has owned internal knowledge tooling at a fast-growing company
Concept

AI Workforce Management

HR infrastructure for AI agents — task assignment, performance, cost

Agent infrastructure

AI agents will function similarly to employees: they take on tasks, perform work, and produce outputs. Organisations will need platforms to manage them at scale — assigning tasks, tracking performance, allocating resources, and optimising cost. These systems effectively become HR and management infrastructure for AI workers, ensuring efficiency and accountability across an AI-native workforce that may outnumber human employees by orders of magnitude.
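
The core allocation decision reduces to performance per unit cost among qualified agents. A stylised sketch, where the profile shape and fields are assumptions:

```typescript
// An agent's tracked profile: what it can do, how well, at what cost.
interface AgentProfile {
  agentId: string;
  skills: string[];
  successRate: number; // 0..1 from tracked performance
  costPerTask: number; // in currency units
}

// Assign a task to the qualified agent with the best value for money.
function assign(task: { skill: string }, agents: AgentProfile[]): AgentProfile | undefined {
  return agents
    .filter((a) => a.skills.includes(task.skill))
    .sort((a, b) => b.successRate / b.costPerTask - a.successRate / a.costPerTask)[0];
}
```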

The problem

Organisations deploying AI agents today have no principled system for managing them. Work is assigned ad hoc, performance is invisible, costs are opaque, and there is no mechanism for continuous improvement.

Who we're looking for

  • Product or engineering leader who has built workforce or operations management software
  • AI engineer with production experience managing agent fleets
  • Operator from HR tech, staffing, or enterprise operations with an eye for category creation
Concept

AI Workload Scheduling & Orchestration

Dynamic routing of AI workloads optimised for latency, cost, and performance

Orchestration

As organisations run thousands of AI workloads simultaneously, the question of how to distribute them efficiently across models and infrastructure becomes critical. These systems manage task scheduling and routing at the infrastructure level — optimising dynamically for latency, cost, and performance based on real-time system conditions. This layer is the operational backbone of AI-at-scale, analogous to what load balancers and container orchestration did for web infrastructure.
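
A minimal router might blend latency, cost, and quality into a single weighted score per backend. The weights and field names below are placeholders, not recommendations:

```typescript
// A candidate backend for a workload, with its live characteristics.
interface Backend {
  name: string;
  p50LatencyMs: number;
  costPer1kTokens: number;
  qualityScore: number; // 0..1 from offline evals
}

// Route to whichever backend scores best under the current weighting.
function route(
  backends: Backend[],
  weights = { latency: 0.3, cost: 0.3, quality: 0.4 },
): Backend {
  const score = (b: Backend) =>
    weights.quality * b.qualityScore -
    weights.latency * (b.p50LatencyMs / 1000) -
    weights.cost * b.costPer1kTokens;
  return backends.reduce((best, b) => (score(b) > score(best) ? b : best));
}
```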

The problem

AI workloads are expensive, latency-sensitive, and highly heterogeneous. Without intelligent scheduling, organisations overpay, underperform, and have no visibility into how resources are being consumed.

Who we're looking for

  • Infrastructure engineer with background in distributed task scheduling or container orchestration
  • ML platform engineer who has optimised inference at scale
  • Founder with experience in cloud cost optimisation or developer infrastructure
Concept

AI Training & Evolution Infrastructure

Model versioning, lineage tracking, and regression detection for continuously evolving AI

Model infrastructure

AI systems are not static — they evolve continuously through retraining, fine-tuning, and adaptation. Managing this evolution at scale requires dedicated infrastructure for model versioning, training pipeline management, lineage tracking, and performance regression detection. This ensures that improvements do not introduce instability or misalignment, and that organisations can reason clearly about how their AI systems have changed over time and why.
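
One concrete piece is a release gate that compares a candidate version's eval metrics against its baseline and blocks promotion on regressions. A sketch under assumed names:

```typescript
// A model version with its lineage and eval suite results.
interface ModelVersion {
  version: string;
  parent?: string;                 // which version it was trained from
  metrics: Record<string, number>; // eval suite results
}

// Names of metrics that regressed beyond tolerance; empty means the
// candidate is safe to promote.
function regressions(
  candidate: ModelVersion,
  baseline: ModelVersion,
  tolerance = 0.01,
): string[] {
  return Object.keys(baseline.metrics).filter(
    (m) => (candidate.metrics[m] ?? -Infinity) < baseline.metrics[m] - tolerance,
  );
}
```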

The problem

Without robust evolution infrastructure, organisations lose track of how and why their AI systems changed. Regressions are discovered in production. Successful versions can't be reproduced. The intellectual property of the AI system is opaque even to its operators.

Who we're looking for

  • ML platform engineer with experience in training infrastructure and model lifecycle management
  • Backend engineer with background in versioning systems or data pipelines
  • Technical founder with MLOps or AI infrastructure experience
Concept

Cognitive Security

Protecting AI reasoning from adversarial manipulation and attack

Security

Traditional cybersecurity protects systems and data. Cognitive security protects reasoning itself. These platforms detect adversarial inputs, prompt injection attacks, data poisoning, and manipulation of AI outputs — monitoring reasoning flows and ensuring integrity at every stage of the decision-making process. This is a major new security category, parallel to but distinct from cybersecurity, that will become mandatory infrastructure as AI takes on consequential decisions.
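
To show the shape of the check (scan untrusted input before it ever reaches the model), here is a deliberately naive pattern screen. Real defence requires far more than regexes; this is illustration only:

```typescript
// Injection-style phrases to screen for in untrusted input. Production
// systems need semantic detection, not pattern matching.
const INJECTION_PATTERNS = [
  /ignore (all )?previous instructions/i,
  /you are now/i,
  /reveal your system prompt/i,
];

// Screen input before it is passed to a model; report what matched.
function screenInput(untrusted: string): { allowed: boolean; matched: string[] } {
  const matched = INJECTION_PATTERNS.filter((p) => p.test(untrusted)).map(String);
  return { allowed: matched.length === 0, matched };
}
```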

The problem

AI systems are vulnerable to attacks that have no equivalent in traditional security — prompt injections, adversarial examples, data poisoning, output manipulation. Organisations have almost no defences against these vectors today.

Who we're looking for

  • Security researcher with background in adversarial ML or prompt injection
  • Engineer with experience building security monitoring or detection systems
  • Founder with cybersecurity background who understands both the technical and enterprise security buying process
Concept

Alignment & Behavioural Monitoring

Detecting AI drift before misalignment becomes a production incident

Security

AI systems can drift away from intended goals over time in ways that are subtle, hard to detect, and compounding. These platforms continuously test and monitor AI behaviour against organisational objectives, ethical constraints, and risk tolerances — simulating edge cases and detecting misalignments before they scale into major issues. They function as a continuous testing and monitoring layer for AI behaviour, not just for bugs but for goal alignment.
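
A minimal drift monitor compares an agent's recent behaviour profile against a frozen baseline and alerts on any dimension that moves beyond tolerance. All names below are assumed:

```typescript
// A behaviour profile: per-dimension rates such as refusal rate,
// tool-use mix, or escalation frequency.
type Profile = Record<string, number>;

// Dimensions where recent behaviour has moved beyond tolerance from the
// reference baseline.
function driftAlerts(baseline: Profile, recent: Profile, tolerance = 0.15): string[] {
  return Object.keys(baseline).filter(
    (k) => Math.abs((recent[k] ?? 0) - baseline[k]) > tolerance,
  );
}
```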

The problem

AI systems don't announce when they've drifted. By the time misalignment becomes visible, it has often already caused significant harm. There is no standard infrastructure for continuously verifying that AI systems are behaving as intended.

Who we're looking for

  • ML researcher with background in alignment, RLHF, or model evaluation
  • Engineer who has built behavioural testing or monitoring systems
  • Technical founder who has grappled with AI reliability in high-stakes production environments
Exploring

Human Oversight & Control Interfaces

Giving humans effective supervision over large numbers of autonomous AI agents

Control systems

Despite increasing AI autonomy, humans remain accountable for outcomes. The challenge is that supervising thousands of agents through traditional interfaces is impossible. These systems provide visibility into AI decision-making, structured approval workflows, escalation mechanisms, and intervention capabilities — allowing a small number of humans to supervise a large number of agents effectively, without becoming bottlenecks or losing genuine oversight.
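
The essential mechanic is risk-tiered triage: auto-approve the routine, queue the consequential for human review, block the out-of-policy. A sketch with illustrative thresholds:

```typescript
// An action an agent proposes to take, scored by an upstream risk model.
interface ProposedAction {
  agentId: string;
  description: string;
  riskScore: number; // 0..1
}

type Verdict = "auto-approve" | "human-review" | "block";

// Only the middle band consumes human attention, keeping oversight
// genuine without making it a bottleneck.
function triage(action: ProposedAction): Verdict {
  if (action.riskScore < 0.2) return "auto-approve";
  if (action.riskScore < 0.8) return "human-review";
  return "block";
}
```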

The problem

Human oversight today is either a fiction or a bottleneck. Either agents are effectively unsupervised, or every decision requires manual review and the efficiency of AI is negated. Neither is acceptable at scale.

Who we're looking for

  • Product designer or engineer with strong HCI and workflow tooling experience
  • Engineer with background in monitoring, alerting, or operations tooling
  • Operator who has managed large-scale human-in-the-loop systems and understands the failure modes
Concept

AI Economic Protocols

Standardised protocols for negotiation, contracting, and value exchange between AI agents

Economic infrastructure

As AI agents begin transacting with each other across organisational and system boundaries, standardised protocols will emerge to govern how they negotiate, form contracts, and exchange value. These protocols will underpin machine-to-machine commerce — defining the rules of the agent economy in the same way that TCP/IP defined the rules of the internet and SWIFT defined the rules of interbank settlement.
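
For intuition, here is what the message vocabulary of a toy negotiation protocol could look like. An actual standard would add authentication, nonces, and dispute hooks; everything here is assumed:

```typescript
// The message types of a toy agent-to-agent negotiation protocol.
type NegotiationMessage =
  | { kind: "offer"; id: string; from: string; to: string; service: string; price: number }
  | { kind: "counter"; id: string; from: string; to: string; price: number }
  | { kind: "accept"; id: string; from: string; to: string }
  | { kind: "reject"; id: string; from: string; to: string; reason?: string };

// A buyer-side policy: accept any offer at or under its reserve price.
function respond(msg: NegotiationMessage, reservePrice: number): "accept" | "reject" {
  return msg.kind === "offer" && msg.price <= reservePrice ? "accept" : "reject";
}
```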

The problem

Agent-to-agent transactions today are entirely ad hoc, with no shared standards for negotiation, verification, or dispute resolution. As the volume of autonomous transactions scales, the absence of protocol-level infrastructure will become a critical failure point.

Who we're looking for

  • Protocol designer or engineer with background in distributed systems, networking standards, or cryptographic protocols
  • Economist or game theorist with experience in mechanism design or market microstructure
  • Founder who has worked at the intersection of fintech, crypto, and developer infrastructure
Concept

Autonomous Transaction & Financial Control

Treasury and financial governance infrastructure for AI-driven organisations

Financial control

As AI agents are empowered to spend money and allocate resources autonomously, organisations need systems to govern those transactions — enforcing budgets, approval thresholds, and compliance requirements automatically. These systems effectively become treasury infrastructure for AI-driven organisations: ensuring that autonomous financial actions remain within sanctioned boundaries and creating a complete audit trail of every expenditure.
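
The primitive is a spend guard evaluated before any agent-initiated payment. A minimal sketch with hypothetical policy fields:

```typescript
// Spending policy for an agent: a hard budget ceiling plus a threshold
// above which single transactions need human sign-off.
interface SpendPolicy {
  budgetRemaining: number;
  singleTxnApprovalLimit: number;
}

type SpendDecision = "approve" | "needs-approval" | "deny";

// Evaluated before every agent-initiated payment; every call would also
// be written to the audit trail.
function checkSpend(amount: number, policy: SpendPolicy): SpendDecision {
  if (amount > policy.budgetRemaining) return "deny";
  if (amount > policy.singleTxnApprovalLimit) return "needs-approval";
  return "approve";
}
```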

The problem

Organisations handing financial authority to AI agents today have almost no structured controls. Budgets are exceeded, approvals are bypassed, and compliance is an afterthought. The financial exposure from autonomous agents without proper controls is substantial.

Who we're looking for

  • Engineer with fintech or payments infrastructure background
  • Finance professional with treasury, FP&A, or enterprise controls experience
  • Founder with experience in spend management, corporate cards, or financial compliance software
Concept

AI Marketplaces & Exchange Infrastructure

Discovery, procurement, and evaluation of AI capabilities across organisational boundaries

Marketplace

As AI agents begin interacting across organisational boundaries, marketplaces will emerge where agents can offer services, companies can procure AI capabilities, and models can be discovered, evaluated, and compared. This infrastructure will reduce the friction of AI adoption, create liquidity for specialized AI capabilities, and over time may evolve into fully autonomous economic ecosystems where agents transact with each other continuously.
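
Discovery might start as simply as a ranked query over capability listings. A sketch, with assumed fields:

```typescript
// A capability offered on the marketplace by a provider.
interface Listing {
  provider: string;
  capability: string; // e.g. "invoice-extraction"
  pricePerCall: number;
  rating: number;     // 0..5 from verified usage
}

// Rank matching listings by rating, breaking ties on price.
function discover(listings: Listing[], capability: string): Listing[] {
  return listings
    .filter((l) => l.capability === capability)
    .sort((a, b) => b.rating - a.rating || a.pricePerCall - b.pricePerCall);
}
```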

The problem

Today, discovering and procuring AI capabilities requires significant manual effort — evaluating models, negotiating contracts, building integrations. There is no liquid market for AI services and no neutral infrastructure to support it.

Who we're looking for

  • Marketplace or platform engineer with experience in two-sided market dynamics
  • Business development or partnerships operator with experience building developer ecosystems
  • Technical founder with experience in API-first businesses or AI tooling
Concept

Strategic Simulation Platforms

Testing strategies in virtual environments before committing real resources

Decision systems

Before executing decisions, organisations will increasingly want to simulate outcomes — modelling market dynamics, customer behaviour, and operational constraints in virtual environments before committing resources. These systems compress the cycle time between strategic insight and confident action, allowing leadership teams to explore far more options than would be possible through analysis alone, and to stress-test decisions against adversarial scenarios.
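
As a taste of the mechanics, a Monte Carlo loop over a toy demand model, reporting the downside percentile a leadership team actually cares about. The demand model and all names are placeholders:

```typescript
// Run a strategy many times under uncertain demand and summarise the
// outcome distribution: the median and the 5th-percentile downside.
function simulate(
  runs: number,
  strategy: (demand: number) => number, // payoff given realised demand
): { p5: number; median: number } {
  const outcomes = Array.from({ length: runs }, () =>
    strategy(Math.random() * 2), // toy demand: uniform between 0 and 2x plan
  ).sort((a, b) => a - b);
  return {
    p5: outcomes[Math.floor(runs * 0.05)],
    median: outcomes[Math.floor(runs * 0.5)],
  };
}

// e.g. a launch earning demand-scaled revenue against a fixed cost:
// simulate(10_000, (demand) => demand * 1_000_000 - 600_000)
```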

The problem

Strategic decision-making today is constrained by the cost of exploration. Most options are never seriously evaluated because analysis is expensive and slow. AI changes the economics of simulation, but the platforms to harness this don't exist.

Who we're looking for

  • Engineer with background in simulation, agent-based modelling, or game theory
  • Strategy consultant or operator who has run scenario planning processes at scale
  • Technical founder with experience in decision intelligence or forecasting tools
Concept

Decision Compression & Prioritisation

Turning AI-generated option overload into actionable choices

Decision systems

AI will generate far more options, ideas, and recommendations than humans can evaluate. The bottleneck shifts from generating choices to making sense of them. These systems cluster similar ideas, rank opportunities against organisational priorities, surface contradictions, and reduce overwhelming complexity to a set of actionable choices. They transform the abundance of AI cognition into focused human action.
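
In its simplest form, compression is weighted scoring against stated priorities plus a hard cut. A sketch under assumed names:

```typescript
// An AI-generated option, scored 0..1 against each organisational priority.
interface Option {
  title: string;
  scores: Record<string, number>;
}

// Keep only the top handful of options for human review, ranked by a
// weighted blend of priority scores.
function shortlist(
  options: Option[],
  weights: Record<string, number>,
  topN = 5,
): Option[] {
  const total = (o: Option) =>
    Object.entries(weights).reduce((s, [k, w]) => s + w * (o.scores[k] ?? 0), 0);
  return [...options].sort((a, b) => total(b) - total(a)).slice(0, topN);
}
```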

The problem

The output of AI systems is already overwhelming for most teams. Without infrastructure to compress and prioritise, organisations end up either ignoring AI recommendations entirely or drowning in them — neither of which captures the value.

Who we're looking for

  • ML engineer with background in clustering, ranking, or recommendation systems
  • Product leader with experience in enterprise analytics or business intelligence
  • Operator who has built decision frameworks or prioritisation processes in fast-scaling environments
Concept

Human-AI Strategic Design

Combining human intuition and AI simulation for genuinely new strategic thinking

Decision systems

AI excels at optimization within a defined problem space, but humans remain critical for reframing the problem itself — for creative leaps, value judgments, and the kind of insight that changes the question rather than just answering it. These platforms combine human intuition with AI simulation and exploratory scenario generation in a single collaborative workflow, enabling entirely new forms of strategic thinking that neither humans nor AI systems can achieve alone.

The problem

Current AI tools either automate decisions — reducing human agency — or present humans with analysis they still have to interpret manually. There is no platform designed specifically for the collaborative mode where human and AI thinking genuinely augment each other.

Who we're looking for

  • Product designer or researcher with background in decision support or collaborative intelligence
  • Engineer with experience in interactive visualization or collaborative software
  • Strategic thinker with consulting, product strategy, or organisational design background
Concept

AI Standards & Certification

Shared validation frameworks that reduce friction in AI adoption across industries

Standards

As AI becomes critical infrastructure, industries will require standardised validation of AI system safety, reliability, and compliance. These organisations will develop and administer certification frameworks that give buyers confidence, give sellers a credentialing path, and give regulators a defensible standard to reference. They reduce friction in adoption across industries and create shared expectations that enable the broader AI economy to function.

The problem

Without shared standards, every AI procurement becomes a bespoke evaluation exercise. Buyers cannot easily compare vendors. Regulators have no reference point. The cost of validation falls entirely on individual organisations, slowing adoption and increasing risk.

Who we're looking for

  • Standards body operator, former regulator, or industry association leader
  • Technical researcher with background in model evaluation, safety benchmarks, or certification processes
  • Founder with experience building trust infrastructure in regulated industries
Concept

Cross-Company Trust Networks

Shared reputation and verification infrastructure for the AI-enabled economy

Trust networks

Individual organisations will build internal trust systems for their AI agents. But as agents begin operating across organisational boundaries — collaborating, transacting, and negotiating — there will be a need for shared infrastructure that allows companies to verify external agents, share reputation signals, and coordinate securely. This becomes the foundation for a broader AI-enabled economy: the trust layer that makes cross-organisational AI interaction possible at scale.
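
One plausible building block is weighted aggregation of reputation attestations from other organisations. A sketch; the schema and the weighting scheme are both assumptions:

```typescript
// A reputation attestation: one organisation vouching for an agent.
interface Attestation {
  attester: string;
  agentId: string;
  score: number; // 0..1 reported reliability
}

// Aggregate attestations about an external agent, weighting each
// attester by how much the verifying organisation trusts them.
// Agents with no weighted attestations score zero.
function externalTrust(
  agentId: string,
  attestations: Attestation[],
  attesterWeight: Map<string, number>,
): number {
  let weighted = 0, totalWeight = 0;
  for (const a of attestations) {
    if (a.agentId !== agentId) continue;
    const w = attesterWeight.get(a.attester) ?? 0;
    weighted += w * a.score;
    totalWeight += w;
  }
  return totalWeight ? weighted / totalWeight : 0;
}
```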

The problem

Organisations have no way to verify the identity, permissions, or reliability of external AI agents. As agent-to-agent interactions become common, this absence becomes a fundamental barrier to cross-organisational AI adoption.

Who we're looking for

  • Engineer with background in federated identity, PKI, or decentralised reputation systems
  • Network or ecosystem operator with experience building cross-organisational trust infrastructure
  • Founder with experience in identity, verification, or trust-based marketplace businesses

Express interest

If you want to help build one of these ideas — as a co-founder, early employee, or advisor — get in touch. Tell us who you are and what you bring.