The Problem
Design thinking is powerful in theory but fragile in practice. Teams skip phases, conflate problem exploration with solutioning, lose research artefacts between sessions, and forget insights from earlier work. Facilitators running multiple teams simultaneously can't provide the continuous, phase-appropriate guidance each team needs.
I built a platform where AI agents enforce the methodology — architecturally, not just through instructions.
What It Does
Augment manages the full lifecycle of a Double Diamond design project — from initial research through to delivery specification — with AI agents that behave differently in each phase.
- Teams chat with phase-specific AI agents that have hard behavioural constraints. The Discover agent will never suggest solutions. The Develop agent will never critique ideas prematurely. These aren't suggestions — they're architectural boundaries baked into separate system prompts.
- Conversations produce structured artefacts — sticky notes, empathy maps, problem statements, concept cards, prototype specs — that are automatically extracted and displayed on collaborative canvas tools in real time.
- An orchestrator agent manages phase transitions by evaluating whether the team has genuinely completed the current phase, compressing their artefacts into a handoff brief that carries context forward to the next agent.
- Facilitators monitor all teams from a dashboard with live phase status, message activity, and the ability to send system messages or manually trigger transitions.
Discover → Define → Develop → Deliver
The AI Architecture
The core design decision is using eight separate AI services rather than a single model with different instructions. A single AI drifts — ask it to explore and it starts defining. Ask it to ideate and it starts evaluating. Separate agents make phase discipline an architectural constraint, not a suggestion the model can ignore.
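The separation can be sketched as a registry mapping each phase to its own agent definition, so the constraint lives in the system prompt the model actually sees rather than in a runtime instruction it can drift away from. This is an illustrative sketch, not the production prompts — the names and wording are paraphrased from the agent descriptions below.

```python
# Hypothetical per-phase agent registry. Each phase gets a separate system
# prompt with its hard constraint baked in; prompt text is illustrative.
PHASE_AGENTS = {
    "discover": {
        "name": "Explorer",
        "system_prompt": (
            "You are a warm design researcher. Ask about users, experiences, "
            "and observations. Surface tensions and surprises. "
            "Hard constraint: never suggest solutions or define the problem."
        ),
    },
    "define": {
        "name": "Analyst",
        "system_prompt": (
            "You are an incisive strategist. Demand evidence for every claim "
            "and push toward a sharp, testable problem statement. "
            "Hard constraint: never generate solution concepts."
        ),
    },
    "develop": {
        "name": "Catalyst",
        "system_prompt": (
            "You are an enthusiastic creative catalyst. Use yes-and framing "
            "and push for 5-10 concepts before convergence. "
            "Hard constraint: never evaluate or critique prematurely."
        ),
    },
    "deliver": {
        "name": "Builder",
        "system_prompt": (
            "You are a pragmatic delivery lead. Drive toward specificity and "
            "testability; ask how each decision will be tested. "
            "Hard constraint: never reopen the problem definition."
        ),
    },
}

def system_prompt_for(phase: str) -> str:
    """Resolve the system prompt for the current phase; unknown phases fail loudly."""
    try:
        return PHASE_AGENTS[phase]["system_prompt"]
    except KeyError:
        raise ValueError(f"unknown phase: {phase}")
```

Because each phase resolves to a distinct prompt object, swapping agents at a phase transition is a lookup, not a re-instruction of a shared model.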
Discover — Explorer
Warm design researcher
Asks about users, experiences, observations. Surfaces tensions and surprises. Will never suggest solutions or define the problem.
Define — Analyst
Incisive strategist
Demands evidence for every claim. Pushes toward a sharp, testable problem statement. Will never generate solution concepts.
Develop — Catalyst
Enthusiastic creative catalyst
Uses "yes-and" framing, pushes for 5–10 concepts before convergence. Will never evaluate or critique prematurely.
Deliver — Builder
Pragmatic delivery lead
Drives toward specificity and testability. Asks "how will you test that?" repeatedly. Will never reopen the problem definition.
Orchestrator
Phase boundary evaluator
Runs only at phase transitions. Evaluates whether a phase is genuinely complete, identifies gaps, and compresses artefacts into a handoff brief for the next agent.
Artefact Extractor
Background extraction pipeline
Runs on Claude Haiku after every agent response. 23 tool-specific extractors catch artefacts mentioned in natural conversation that the agent didn't explicitly structure.
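The dispatch shape of that pipeline — one registry of tool-specific extractors, all run over each agent response — might look like the sketch below. In production each extractor is a Claude Haiku call with a tool-specific prompt; here two toy regex extractors stand in so the contract is runnable offline, and all names are hypothetical.

```python
import re
from typing import Callable

# Registry of tool-specific extractors (assumption: the real ones are
# Haiku prompts, not regexes; this only illustrates the dispatch shape).
EXTRACTORS: dict[str, Callable[[str], list]] = {}

def extractor(tool: str):
    """Decorator that registers an extractor under its canvas-tool name."""
    def register(fn):
        EXTRACTORS[tool] = fn
        return fn
    return register

@extractor("sticky_note")
def extract_stickies(text: str) -> list:
    # Toy rule: quoted fragments become candidate sticky notes.
    return [{"tool": "sticky_note", "text": m} for m in re.findall(r'"([^"]+)"', text)]

@extractor("hmw_card")
def extract_hmw(text: str) -> list:
    # Toy rule: "How might we ...?" phrases become HMW cards.
    return [{"tool": "hmw_card", "text": m}
            for m in re.findall(r"(How might we [^?]+\?)", text, re.IGNORECASE)]

def run_extractors(agent_response: str) -> list:
    """Run every registered extractor over one agent response."""
    artefacts = []
    for fn in EXTRACTORS.values():
        artefacts.extend(fn(agent_response))
    return artefacts
```

Keeping extraction behind a uniform registry is what lets 23 extractors run as one background step after each response, with each extractor free to evolve independently.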
Session Summariser
Context continuity engine
Triggers on disconnect or after 30 minutes of inactivity. Generates a 3–5 sentence summary on Claude Haiku so the next session picks up where the team left off.
Nudge Generator
Team re-engagement prompt
When teams seem stuck, generates a first-person example reply on Claude Haiku that sounds like a real team member, not an instruction to the agent.
Context Window Management
- Conversation windowing — only the last 12 message turns are sent to the API; older context is covered by session summaries
- Artefact compression — current phase artefacts are injected as one-line summaries (capped at 30 items) rather than full JSON objects
- Session summarisation — on disconnect or after 30 minutes of inactivity, Claude Haiku generates a 3–5 sentence summary stored for the next session
- Orchestrator handoff briefs — compressed artefact bundles carry context between phases without the next agent needing to read raw conversation history
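The first two techniques — windowing and artefact compression — compose into a single context-assembly step, sketched below under stated assumptions: the function name is hypothetical, and the "keep the most recent 30 artefacts" choice is my assumption, since the document only says the list is capped at 30.

```python
MAX_TURNS = 12       # message turns sent to the API (older context lives in summaries)
MAX_ARTEFACTS = 30   # cap on one-line artefact summaries injected into context

def build_context(messages: list, artefacts: list, summary: str = "") -> dict:
    """Assemble a bounded prompt context: windowed messages plus compressed artefacts.

    Hypothetical sketch — field names are illustrative, not the platform's schema.
    """
    # Conversation windowing: only the last 12 turns go to the API.
    window = messages[-MAX_TURNS:]

    # Artefact compression: one line per artefact, capped at 30 items
    # (assumption: keep the most recent), instead of full JSON objects.
    recent = artefacts[-MAX_ARTEFACTS:]
    lines = [f"- [{a['tool']}] {a['text'][:80]}" for a in recent]

    parts = []
    if summary:
        parts.append(f"Previous session: {summary}")
    if lines:
        parts.append("Current phase artefacts:\n" + "\n".join(lines))
    return {"messages": window, "system_suffix": "\n\n".join(parts)}
```

The effect is a hard upper bound on prompt size regardless of how long a team has been working, with summaries and handoff briefs covering everything the window drops.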
Technical Architecture
The platform is a three-service architecture deployed on Railway, with the backend split between Node.js (real-time collaboration) and Python (AI processing) — each optimised for its I/O profile.
Frontend
React 18 + TypeScript + Vite
Zustand for state, Socket.io for real-time sync, 30+ canvas tool components, CSS Modules
App Server
Node.js 20 + Express + Socket.io
REST API, WebSocket rooms, JWT auth, real-time artefact broadcast
AI Service
Python 3.12 + FastAPI + Anthropic SDK
Background worker for AI streaming, artefact extraction, orchestration
Database
PostgreSQL 16 + Drizzle ORM
ACID compliance, JSONB for flexible artefact schemas, type-safe queries
Cache & Queue
Redis
AI task queue (BLPOP), pub/sub relay for streaming, Socket.io adapter, presence TTLs
AI Model
Claude Sonnet 4 (Anthropic)
Phase agents use Sonnet for depth; extraction and summaries use Haiku for speed and cost
Security
- bcrypt password hashing (cost 12)
- JWT dual-token system with refresh token rotation; HttpOnly/Secure/SameSite cookies
- Role-based access control on every route; team membership middleware for data isolation
- Zod schema validation on all request bodies; Helmet.js security headers
- Two-tier rate limiting (20/15 min on auth, 100/min on API)
- Invite-only registration with cryptographically random single-use tokens
- Email enumeration prevention on password reset; full session invalidation on password change
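The refresh-token-rotation part of the dual-token system can be sketched as follows. This is a minimal stdlib illustration, not the platform's implementation: an in-memory dict stands in for the database, and in the real system each rotation would also mint a new short-lived JWT access token.

```python
import hashlib
import secrets

# Hypothetical store: hash of the refresh token -> user id.
# Storing only the hash means a database leak doesn't leak usable tokens.
_store: dict = {}

def _hash(token: str) -> str:
    return hashlib.sha256(token.encode()).hexdigest()

def issue_refresh_token(user_id: str) -> str:
    """Mint a cryptographically random refresh token and record its hash."""
    token = secrets.token_urlsafe(32)
    _store[_hash(token)] = user_id
    return token

def rotate(old_token: str) -> str:
    """Exchange a refresh token for a fresh one; a spent or unknown token is rejected.

    Popping the old hash makes every token single-use, so replay of a
    stolen-but-already-rotated token fails loudly.
    """
    user_id = _store.pop(_hash(old_token), None)
    if user_id is None:
        raise PermissionError("refresh token invalid or already used")
    return issue_refresh_token(user_id)
```

Full session invalidation on password change falls out naturally: delete every stored hash for that user and all outstanding refresh tokens die at once.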
The AI service communicates with the App Server exclusively through Redis — it has no public HTTP endpoints except a health check. All AI task payloads and responses flow through internal pub/sub channels.
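The queue contract between the two services might look like the sketch below. It is an offline stand-in: in the real deployment the App Server pushes onto a Redis list that the AI service blocks on with BLPOP, and results stream back over pub/sub; here `queue.Queue` plays the Redis list, and the payload field names are my assumptions.

```python
import json
import queue

# Stand-in for the Redis task list (assumption: production uses BLPOP on a
# Redis list; the task schema shown here is illustrative, not the real one).
ai_tasks: queue.Queue = queue.Queue()

def enqueue_task(team_id: str, phase: str, message: str) -> None:
    """App Server side: serialise the task and push it onto the queue
    (RPUSH onto the Redis list in production)."""
    ai_tasks.put(json.dumps({
        "team_id": team_id,
        "phase": phase,
        "message": message,
    }))

def worker_step() -> dict:
    """AI service side: block until a task arrives (BLPOP in production),
    then hand it to the phase agent."""
    task = json.loads(ai_tasks.get(timeout=1))
    # ...invoke the phase agent here and publish streamed tokens
    # back to the App Server over a pub/sub channel...
    return task
```

Because every payload crosses this boundary as serialised JSON, the AI service needs no inbound HTTP surface beyond its health check.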
30+ Collaborative Canvas Tools
The workspace is a split-panel layout: AI chat on the left (40%), dynamic canvas on the right (60%). The canvas switches between tools based on the AI agent's structured output, and teams can also manually select any tool they've unlocked. Every artefact change is broadcast to all team members in real time via Socket.io.
Discover
- Freeform sticky note board with drag-to-cluster
- Empathy map — four quadrants (Says, Thinks, Does, Feels) plus Pains and Gains
- Stakeholder map — concentric circles with drag-drop proximity placement
- Assumption map, AEIOU board, trend timeline, extreme user profiles
Define
- Affinity clustering — drag stickies from Discover into theme groups
- Five Whys chain, fishbone diagram, Jobs to be Done sentence builder
- HMW card generator, persona builder, problem statement canvas with evidence links
Develop
- Brainstorm board with idea counter, concept cards, reversed brainstorm flip-board
- Analogies board, 2×2 matrix with configurable axes, dot voting with limited budgets
- Impact/feasibility scatter plot, concept storyboard panels
Deliver
- Prototype spec form with fidelity selector, service blueprint swim-lanes
- Test script builder, feedback grid, MVP feature sorter (In MVP / Later / Cut)
- Success metrics KPI builder, iteration log, pitch canvas, implementation roadmap
Research Team
Augment originated from an idea by Timothy Hor, whose vision for AI-augmented design thinking set the direction for the project.
Project Lead — Lecturer, Management and Technology
Timothy Hor
Senior Lecturer, Management (Entrepreneurship)
Ashenafi Biru
Senior Lecturer
Joseph Kim
Senior Lecturer & Deputy Dean, Learning & Teaching
Jessica Helmi
Associate Lecturer & Chief AI Architect*
Thomas Bierly
*self-appointed