Case Study · 72-Hour Build · Agentic AI · Children's IP

From Children's Play Table
to Agentic Play Platform

Three AI-powered play experiences. One original IP. Built in 72 hours. This is the technical story.

72h
Concept to Live
3
AI Experiences
0
Fine-tuning Required
100%
Original IP
Tech Stack

Production-grade from day one

Every layer of the stack was chosen for speed of iteration without sacrificing the architecture a studio would need to scale.

Frontend

React 19 + TypeScript + Tailwind CSS 4

Type-safe components with utility-first styling. Framer Motion for micro-interactions. shadcn/ui for accessible primitives. Zero custom CSS frameworks — everything composable.

React 19 · TypeScript · Tailwind 4 · Framer Motion

AI Procedures

Server-side tRPC + JSON Schema validation

All LLM calls are server-side procedures — no API keys exposed to the client. Structured output via JSON Schema ensures every AI response is parseable, typed, and renderable without string parsing.

tRPC · JSON Schema · Structured Output · Server-side only
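The server-side procedure shape described above can be sketched in plain TypeScript. This is an illustrative sketch, not the project's actual code: `buildGuideJsonSchema`, `parseBuildGuide`, and the `BuildGuide` type are assumed names, and a real implementation would validate the full schema rather than spot-checking two fields.

```typescript
// Hypothetical sketch of "structured output via JSON Schema".
// The schema is passed to the model as its response_format, so the
// reply is guaranteed to be machine-parseable JSON, never prose.

type BuildStep = { title: string; bricks: string[]; fact: string };
type BuildGuide = { character: string; steps: BuildStep[] };

const buildGuideJsonSchema = {
  type: "object",
  required: ["character", "steps"],
  properties: {
    character: { type: "string" },
    steps: {
      type: "array",
      items: {
        type: "object",
        required: ["title", "bricks", "fact"],
        properties: {
          title: { type: "string" },
          bricks: { type: "array", items: { type: "string" } },
          fact: { type: "string" },
        },
      },
    },
  },
} as const;

// The browser never sees the raw LLM response; a server procedure
// parses and shape-checks it, then returns already-typed data.
function parseBuildGuide(raw: string): BuildGuide {
  const parsed = JSON.parse(raw) as BuildGuide;
  if (typeof parsed.character !== "string" || !Array.isArray(parsed.steps)) {
    throw new Error("LLM response did not match the build guide schema");
  }
  return parsed;
}
```

Because the client only ever receives a `BuildGuide`, the UI can render swatches and step cards directly, with no string parsing.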

Backend & Data

Express + Drizzle ORM + MySQL/TiDB

Schema-first database design with type-safe queries. Drizzle migrations keep the schema in version control. TiDB provides horizontal scale without operational overhead.

Express · Drizzle ORM · MySQL/TiDB · Superjson

Agentic Pipeline

Structured prompting + context injection

Each experience uses a purpose-built system prompt that injects character context, historical facts, and output schema. The AI is a structured generator, not a freeform chatbot — outputs are deterministic in shape, creative in content.

Context Engineering · System Prompts · Schema-first · No fine-tuning
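A purpose-built system prompt of the kind described above might be assembled like this. The sketch is illustrative and assumes hypothetical names (`CharacterContext`, `buildSystemPrompt`); the point is that character facts live in data, not in the model's weights.

```typescript
// Illustrative sketch of context injection: every call re-seeds the
// model with the IP's canonical facts, so no fine-tuning is needed.

interface CharacterContext {
  name: string;
  period: string;
  traits: string[];
  canonicalFacts: string[];
}

const vishpala: CharacterContext = {
  name: "Vishpala",
  period: "Vedic era",
  traits: ["warrior", "resilient"],
  canonicalFacts: ["First recorded user of a prosthetic limb (Rigveda)."],
};

// Deterministic in shape, creative in content: the prompt fixes the
// facts and the output format, leaving only the story to the model.
function buildSystemPrompt(c: CharacterContext): string {
  return [
    `You are a creative generator for the character ${c.name} (${c.period}).`,
    `Personality: ${c.traits.join(", ")}.`,
    `You must include at least one of these verified facts:`,
    ...c.canonicalFacts.map((f) => `- ${f}`),
    `Respond only with JSON matching the provided schema.`,
  ].join("\n");
}
```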

Request Flow

Browser → trpc.play.generateBuildGuide.useMutation()
protectedProcedure → input validated by Zod schema
invokeLLM() → system prompt + user context
JSON Schema response_format → typed, parseable output
Superjson serialisation → React state → visual render
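The request flow above can be sketched end to end with the model call stubbed out. This is a minimal sketch under assumptions: `invokeLLM` and `generateBuildGuide` stand in for the real tRPC procedure, and the hand-written length check stands in for the Zod schema.

```typescript
// End-to-end sketch of the request flow, LLM stubbed. Assumed names
// throughout; the real pipeline runs inside tRPC with Zod validation.

type GuideInput = { character: string; prompt: string };
type Guide = { character: string; steps: { title: string }[] };

// Stand-in for the real model call. In production, a JSON Schema is
// passed as response_format, so the reply is always parseable.
async function invokeLLM(_system: string, _user: string): Promise<string> {
  return JSON.stringify({
    character: "Vishpala",
    steps: [{ title: "Lay the fortress base" }],
  });
}

// Server-side procedure: validate input, call the model, return
// typed data. The client never sees a key, model name, or raw reply.
async function generateBuildGuide(input: GuideInput): Promise<Guide> {
  if (input.prompt.length === 0 || input.prompt.length > 280) {
    throw new Error("Invalid prompt length"); // mirrors the Zod constraint
  }
  const raw = await invokeLLM(
    `System prompt for ${input.character}`,
    input.prompt,
  );
  return JSON.parse(raw) as Guide; // typed, renderable output
}
```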
Design Decisions

Every choice was a deliberate constraint

The design philosophy was to make AI feel like a creative collaborator, not a search engine. Three principles drove every decision.

01

Structured output over freeform text

The single most important technical decision. Every AI response is validated against a JSON Schema before it reaches the UI. This means the output is always renderable — colour swatches, step cards, minifigure panels — not a text blob. It also means the AI can never hallucinate structure; it can only hallucinate content, which is far easier to catch and constrain.

Eliminated all parsing errors. Enabled rich visual rendering. Made outputs shareable as structured data.
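The "validated before it reaches the UI" gate can be illustrated with a single renderable unit, the colour swatch. The type guard below is a hypothetical sketch, not the project's validator: a response that hallucinates structure fails the guard and never renders.

```typescript
// Sketch of the validate-before-render gate for one output type.
// A structurally wrong value is caught here, not in the UI.

type Swatch = { name: string; hex: string };

function isSwatch(v: unknown): v is Swatch {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  // A swatch must carry a name and a renderable 6-digit hex colour.
  return typeof o.name === "string" && /^#[0-9a-fA-F]{6}$/.test(String(o.hex));
}

// A well-formed swatch passes; a "creative" but unrenderable one fails.
const good: unknown = JSON.parse('{"name":"parchment","hex":"#f5e9d0"}');
const bad: unknown = JSON.parse('{"name":"parchment","hex":"beige-ish"}');
```

The model can still be wrong about content, but it can never hand the UI something the UI does not know how to draw.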

02

IP-first context injection

Each AI procedure is seeded with the Bad Girls IP — character names, historical periods, personality traits, and canonical facts. The AI doesn't know about 'Bad Girls of Ancient India' from training data; it knows because we tell it, precisely, in every call. This is the architecture that makes any IP licensable: swap the character context, keep the pipeline.

The same pipeline works for any IP. No retraining. No fine-tuning. Just context.
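The licensing argument reduces to a function parameterised by a context object. This sketch uses assumed names (`IPContext`, `makeStoryPrompt`): swapping the IP means swapping one data object, while the generation pipeline is untouched.

```typescript
// Sketch of "swap the character context, keep the pipeline".
// Only the IPContext object is IP-specific; the function is not.

interface IPContext {
  ipName: string;
  characters: string[];
  facts: Record<string, string>;
}

const badGirls: IPContext = {
  ipName: "Bad Girls of Ancient India",
  characters: ["Vishpala", "Gargi"],
  facts: { Vishpala: "First recorded prosthetic limb user." },
};

// The same function serves any licensed IP: no retraining, no
// fine-tuning, just a different context object per licence.
function makeStoryPrompt(ip: IPContext, character: string): string {
  const fact = ip.facts[character] ?? "";
  return `Write a ${ip.ipName} story about ${character}. Include this fact: ${fact}`;
}
```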

03

Children-first interaction model

The UI was designed so a child can use it without reading instructions. Quick-start prompts on the Builder, visual grid on the Studio, character portraits on the Story Engine. Every interaction is a single action that produces a complete, satisfying result — no multi-turn conversation, no prompt engineering required from the user.

Reduced time-to-first-result to under 10 seconds. Zero onboarding required.

04

Historical accuracy as a design constraint

Every AI output is instructed to include at least one verifiable historical fact. This is not a nice-to-have — it is a system prompt constraint. The result is that play becomes education without feeling like it. A child building Vishpala's fortress learns that she was the world's first recorded prosthetic limb user. The history is in the play.

Aligns with curriculum requirements. Differentiates from generic AI toy experiences.

72-Hour Build

Concept to live product in three days

The timeline is the proof of concept. Not just that the technology works — but that the development model works.

72h
Total Build Time
Concept to live URL
3
AI Experiences
Builder, Story, Studio
15+
AI Procedures
Server-side, typed
1
Developer
+ agentic pipeline
Hour 0–8

Architecture & IP Setup

  • Schema design: characters, settings, elements
  • System prompt architecture for all three experiences
  • tRPC router structure and Zod input validation
  • Brand design system: parchment palette, typography, motion tokens
Hour 8–24

Agentic Builder + Story Engine

  • Builder: prompt input, character selector, quick-start seeds
  • JSON Schema for build guide (steps, bricks, colours, facts)
  • Story Engine: character × setting matrix, set concept schema
  • Visual output: colour swatches, step cards, LEGO box mockup
Hour 24–48

Creator Studio + Visual Upgrade

  • 6×4 drag-and-drop world canvas with 24 placeable elements
  • Context-aware story generation from scene composition
  • Storybook output: chapter panels, Bad Girl Moment highlight
  • Visual upgrade across all three experiences: swatches, cards, panels
Hour 48–72

Polish, Safety & Launch

  • Content safety layer on all AI procedures
  • Mobile-responsive layout across all experiences
  • Navigation integration on main site
  • Live deployment to badgirlsofancientindia.com/play

What this timeline proves

The 72-hour build is not a hackathon trick. It is a demonstration of what becomes possible when an agentic development pipeline is applied to a well-defined IP. The same pipeline — structured prompting, JSON Schema output, server-side procedures, typed UI rendering — can be applied to any licensed IP in a fraction of the time traditional game development requires. This is the architecture that changes the economics of digital play.

Safety & Security

Children-first by architecture, not afterthought

Safety for a children's product is not a feature you add at the end. It is a constraint that shapes every layer of the stack.

Architecture

No API keys on the client

All AI calls are server-side tRPC procedures. The browser never sees an API key, a model name, or a raw LLM response. The client receives typed, validated data — nothing more.

Prompt Layer

Content safety via system prompt constraints

Every AI procedure includes an explicit safety instruction: age-appropriate language, no violence beyond historical context, no content that could distress a child. The JSON Schema output format further constrains what the model can produce.

Input Layer

Input validation on every procedure

All user inputs are validated by Zod schemas before reaching the AI. Maximum lengths, allowed character sets, and enum constraints mean a child (or a bad actor) cannot inject unexpected content into the AI context.
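A hand-rolled sketch of that input gate, showing the three constraint types named above: enum membership, maximum length, and allowed character set. The real code uses Zod; `validatePlayInput` and the specific limits here are illustrative assumptions.

```typescript
// Hypothetical sketch of the input-validation layer (the project
// uses Zod schemas; this mimics the same constraints by hand).

const CHARACTERS = ["Vishpala", "Gargi"] as const;
type Character = (typeof CHARACTERS)[number];

function validatePlayInput(raw: { character: string; prompt: string }) {
  // Enum constraint: only canonical characters reach the AI context.
  if (!(CHARACTERS as readonly string[]).includes(raw.character)) {
    throw new Error("Unknown character");
  }
  // Length constraint: bounded prompt size (limit is illustrative).
  if (raw.prompt.length === 0 || raw.prompt.length > 200) {
    throw new Error("Prompt length out of bounds");
  }
  // Character-set constraint: letters, digits, spaces, light punctuation.
  if (!/^[\p{L}\p{N}\s.,!?'-]+$/u.test(raw.prompt)) {
    throw new Error("Disallowed characters in prompt");
  }
  return raw as { character: Character; prompt: string };
}
```

Anything a child (or a bad actor) types is reduced to this bounded shape before it is interpolated into the AI context.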

Data Privacy

No user data stored from play sessions

Play sessions are stateless by design. No child's prompts, generated stories, or world compositions are persisted to the database. The AI generates and the client renders — nothing is logged beyond standard server telemetry.

Content Layer

Historical accuracy as a safety mechanism

Grounding AI output in verifiable historical facts reduces hallucination risk. When the AI is instructed to include a real fact about Vishpala or Gargi, it has less room to invent problematic content. Accuracy and safety are aligned.

Compliance

COPPA & GDPR-ready architecture

The stateless play session design means no personal data is collected from children during play. Future authentication flows use Manus OAuth with explicit parental consent gates. The architecture is compliant by default, not by retrofit.

The safety architecture in one sentence

"A child can only give the AI a character name, a setting, and a short prompt — and the AI can only respond with a typed JSON object that the UI knows how to render. There is no freeform channel between a child and the model."

Community Strategy

From play sessions to a living world

The architecture already supports community. What follows is a phased roadmap from individual play to collective world-building.

Phase 1 — Now

Individual play, shareable outputs

Each generated build guide, set concept, and story is a shareable artefact — a unit of community content that exists before a community does.

Historical facts embedded in every output create a shared vocabulary. Children who play with the same character discover the same history.

The /play URL is designed to be shared. Every experience is a conversation starter: 'I built Vishpala's fortress — what did you build?'

Phase 2 — Near-term

Gallery & social proof

A public /play/gallery where users can save and share their generated worlds, stories, and set concepts. The AI output becomes user-generated content.

Character leaderboards: which Bad Girl has been featured in the most stories? Which setting generates the most adventures? Data as community signal.

Parent and teacher showcase: curated gallery of classroom builds and home play sessions, creating social proof for the educational use case.

Phase 3 — Scale

Collaborative world-building

Multi-player Studio: two children build the same world simultaneously, the AI generates a story that incorporates both their contributions.

Serialised story arcs: a child's story from Monday becomes the prologue for Tuesday's adventure. Persistent narrative identity across sessions.

Creator credentials: children earn recognition for historical accuracy, creative world-building, and community contributions — a reputation system built on learning.

The community thesis

"The most powerful community mechanic is a shared mythology.
Bad Girls of Ancient India already has one."

Every character has a canonical story. Every story has a historical grounding. Every play session adds a new chapter to that mythology. The community doesn't need to be built — it needs to be given the tools to build itself. That is what the agentic pipeline enables.

Ready to build this together?

This is a prototype.
Imagine what a team could do.

Three experiences. One IP. Seventy-two hours. The architecture is ready to scale. The question is what IP we build it on next.