An AI ethics & governance lab.
Built for the agentic era.

We help organisations work with AI thoughtfully across agentic systems, middleware, governance, and the human consequences, drawing on original research into agent safety, consent, and applied AI policy in regulated, high-stakes settings.

Book a discovery call

What we work on

Agentic AI safety

Identity containment, bounded autonomy, prompt-injection defence, and agent-to-agent hygiene in production systems. The questions that matter when one model can call another, and when an agent acts on a person's behalf at speed.

AI governance & policy

Policy that holds up under inspection and use. We write and review AI policy for organisations operating under real regulatory pressure — including data protection, safeguarding, and sector-specific compliance.

Middleware & system architecture

The plumbing between models, tools, agents, and humans. Where consent is captured, where data flows are bounded, where audit trails live. Most AI risk hides here, not in the model itself.

Applied ethics in high-stakes settings

What changes when AI is deployed around vulnerable people, regulated data, or operations where mistakes are expensive? We work with organisations whose mistakes have real-world consequences, not just user-experience ones.

Frameworks & published thinking

The lab's research is open and in active use. The frameworks below are how we think, and how we work.

Verse-ality

An agent-safety framework covering identity non-capture, consent gates, prompt-injection defence, and bounded autonomy in multi-agent systems. Used to design and audit production agent deployments.

verse-ality.com →

The Diamond AI Policy frame

A structured approach to writing AI-use policy in high-stakes, regulated environments — where data protection, safeguarding, pedagogy and operational need all have to be reconciled. Currently deployed across two online schools.

Consent infrastructure

How systems should ask for, record, respect and revoke meaningful human consent — especially when agents are acting on humans' behalf, faster than humans can read terms of service.

consent-infrastructure.com →

How we work with clients

Engagements are scoped to outcome, not headcount. Most fall into one of four shapes.

Strategic AI advisory

A thinking partner for the questions that don't fit neatly into a vendor's deck.

Ongoing or fixed-term advisory for executive teams making AI bets — where to deploy, what to refuse, how to position with regulators and customers, where the genuine risk lives. Particularly useful where AI strategy intersects with safeguarding, public trust, or regulated data.

Agentic system review & safety audit

Find what's going to go wrong before someone else does.

A structured review of an existing or planned agent deployment: identity containment, prompt-injection surfaces, consent design, audit trails, fail-safe behaviour. Output is a written report and a prioritised remediation plan, applied to your stack.

AI policy & governance design

Policy that's actually load-bearing, not theatre.

Drafting and stress-testing AI-use policy for organisations under real regulatory pressure — schools, healthcare-adjacent, public sector, regulated industry. Built using the Diamond frame and informed by what inspectors and auditors actually look for.

Sector-specific applied work

Expertise across the messy edges where AI meets the physical world.

Recent and ongoing conversations span industrial IoT and edge computing, premium beverages and agri-data, education and safeguarding, and public-sector consultation. Bring us a problem; we'll tell you whether it's the kind we can usefully shape.

Discuss your project

About the lab

Practitioner research

The Novacene's frameworks aren't speculative. They're stress-tested daily in two online schools — The Haven (a neurodivergent-affirming online school) and Nudge Education Online (alternative provision launching September 2026). When the stakes are children's safety and regulated data, you find out quickly whether your AI policy is real or aspirational.

Independent & opinionated

We're not reselling anyone's models, platforms or training. The lab's recommendations are unbought. That means we'll sometimes tell you to do less with AI, not more — and we'll tell you why in writing.

Founder-led

The Novacene is founded and led by Kirstin Stevens, who runs the lab alongside the schools. Engagements are senior throughout — you work directly with the person whose name is on the work, not an account team.

Bring us a question

The most useful first conversation is usually a 30-minute call about what you're trying to do, what's holding you back, and whether the lab is the right fit. No pitch deck, no obligation.

Book a discovery call

Or email kirstin@thenovacene.com