We help organisations work with AI thoughtfully — across agentic systems, middleware, governance, and the human consequences. We draw on original research into agent safety, consent, and applied AI policy in regulated, high-stakes settings.
Book a discovery call

Identity containment, bounded autonomy, prompt-injection defence, and agent-to-agent hygiene in production systems. These are the questions that matter when one model can call another, and when an agent acts on a person's behalf at speed.
Policy that holds up under inspection and use. We write and review AI policy for organisations operating under real regulatory pressure — including data protection, safeguarding, and sector-specific compliance.
The plumbing between models, tools, agents and humans. Where consent is captured, where data flows are bounded, where audit trails live. Most AI risk hides here, not in the model itself.
What changes when AI is deployed around vulnerable people, regulated data, or operations where mistakes are expensive? We work with organisations whose mistakes have real-world consequences, not just user-experience ones.
The lab's research is open and used. The frameworks below are how we think — and how we work.
An agent-safety framework covering identity non-capture, consent gates, prompt-injection defence, and bounded autonomy in multi-agent systems. Used to design and audit production agent deployments.
A structured approach to writing AI-use policy in high-stakes, regulated environments — where data protection, safeguarding, pedagogy and operational need all have to be reconciled. Currently deployed across two online schools.
How systems should ask for, record, respect and revoke meaningful human consent — especially when agents are acting on humans' behalf, faster than humans can read terms of service.
Engagements are scoped to outcome, not headcount. Most fall into one of four shapes.
A thinking partner for the questions that don't fit neatly into a vendor's deck.
Ongoing or fixed-term advisory for executive teams making AI bets — where to deploy, what to refuse, how to position with regulators and customers, where the genuine risk lives. Particularly useful where AI strategy intersects with safeguarding, public trust, or regulated data.
Find what's going to go wrong before someone else does.
A structured review of an existing or planned agent deployment: identity containment, prompt-injection surfaces, consent design, audit trails, fail-safe behaviour. Output is a written report and a prioritised remediation plan, applied to your stack.
Policy that's actually load-bearing, not theatre.
Drafting and stress-testing AI-use policy for organisations under real regulatory pressure — schools, healthcare-adjacent, public sector, regulated industry. Built using the Diamond frame and informed by what inspectors and auditors actually look for.
Expertise across the messy edges where AI meets the physical world.
Recent and ongoing conversations span industrial IoT and edge computing, premium beverages and agri-data, education and safeguarding, and public-sector consultation. Bring us a problem; we'll tell you whether it's the kind we can usefully shape.
The Novacene's frameworks aren't speculative. They're stress-tested daily in two online schools — The Haven (a neurodivergent-affirming online school) and Nudge Education Online (alternative provision launching September 2026). When the stakes are children's safety and regulated data, you find out quickly whether your AI policy is real or aspirational.
We're not reselling anyone's models, platforms or training. The lab's recommendations are unbought. That means we'll sometimes tell you to do less with AI, not more — and we'll tell you why in writing.
The Novacene is founded and led by Kirstin Stevens, who runs the lab alongside the schools. Engagements are senior throughout — you work directly with the person whose name is on the work, not an account team.
The most useful first conversation is usually a 30-minute call about what you're trying to do, what's holding you back, and whether the lab is the right fit. No pitch deck, no obligation.
Book a discovery call

Or email kirstin@thenovacene.com