
Augment

A multi-agent AI platform that guides teams through the Double Diamond design process — eight specialised AI services, 30+ collaborative canvas tools, real-time multi-user collaboration.

Launch the platform →

The Problem

Design thinking is powerful in theory but fragile in practice. Teams skip phases, conflate problem exploration with solutioning, lose research artefacts between sessions, and forget insights from earlier work. Facilitators running multiple teams simultaneously can't provide the continuous, phase-appropriate guidance each team needs.

I built a platform where AI agents enforce the methodology — architecturally, not just through instructions.

What It Does

Augment manages the full lifecycle of a Double Diamond design project — from initial research through to delivery specification — with AI agents that behave differently in each phase.

  1. Teams chat with phase-specific AI agents that have hard behavioural constraints. The Discover agent will never suggest solutions. The Develop agent will never critique ideas prematurely. These aren't suggestions — they're architectural boundaries baked into separate system prompts.
  2. Conversations produce structured artefacts — sticky notes, empathy maps, problem statements, concept cards, prototype specs — that are automatically extracted and displayed on collaborative canvas tools in real time.
  3. An orchestrator agent manages phase transitions by evaluating whether the team has genuinely completed the current phase, compressing their artefacts into a handoff brief that carries context forward to the next agent.
  4. Facilitators monitor all teams from a dashboard with live phase status, message activity, and the ability to send system messages or manually trigger transitions.

The AI Architecture

The core design decision is using eight separate AI services rather than a single model with different instructions. A single AI drifts — ask it to explore and it starts defining. Ask it to ideate and it starts evaluating. Separate agents make phase discipline an architectural constraint, not a suggestion the model can ignore.
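A minimal sketch of what this separation looks like in code, assuming a simple phase-to-config map (the production prompts are far longer; only the hard-constraint line is illustrated here):

```typescript
// Each phase gets its own agent with its own system prompt, so phase
// discipline is structural: the Discover agent's prompt simply contains
// no instructions for defining or solving. Prompts below are illustrative.
type Phase = "discover" | "define" | "develop" | "deliver";

interface AgentConfig {
  name: string;
  systemPrompt: string; // full prompt in production; constraint line shown here
}

const AGENTS: Record<Phase, AgentConfig> = {
  discover: {
    name: "Explorer",
    systemPrompt:
      "You are a warm design researcher. Hard constraint: never suggest solutions or define the problem.",
  },
  define: {
    name: "Analyst",
    systemPrompt:
      "You are an incisive strategist. Hard constraint: never generate solution concepts.",
  },
  develop: {
    name: "Catalyst",
    systemPrompt:
      "You are an enthusiastic creative catalyst. Hard constraint: never evaluate or critique ideas.",
  },
  deliver: {
    name: "Builder",
    systemPrompt:
      "You are a pragmatic delivery lead. Hard constraint: never reopen the problem definition.",
  },
};

// A team's current phase selects the agent; there is no shared prompt to drift from.
function agentFor(phase: Phase): AgentConfig {
  return AGENTS[phase];
}
```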

Discover — Explorer
Warm design researcher
Asks about users, experiences, observations. Surfaces tensions and surprises. Will never suggest solutions or define the problem.
Define — Analyst
Incisive strategist
Demands evidence for every claim. Pushes toward a sharp, testable problem statement. Will never generate solution concepts.
Develop — Catalyst
Enthusiastic creative catalyst
Uses "yes-and" framing, pushes for 5–10 concepts before convergence. Will never evaluate or critique prematurely.
Deliver — Builder
Pragmatic delivery lead
Drives toward specificity and testability. Asks "how will you test that?" repeatedly. Will never reopen the problem definition.
Orchestrator
Phase boundary evaluator
Runs only at phase transitions. Evaluates whether a phase is genuinely complete, identifies gaps, and compresses artefacts into a handoff brief for the next agent.
Artefact Extractor
Background extraction pipeline
Runs after every agent response on Claude Haiku. 23 tool-specific extractors catch artefacts mentioned in natural conversation that the agent didn't explicitly structure.
Session Summariser
Context continuity engine
Triggers on disconnect or 30-minute inactivity. Generates a 3–5 sentence summary on Claude Haiku so the next session picks up where the team left off.
Nudge Generator
Team re-engagement prompt
When teams seem stuck, generates a first-person example reply on Claude Haiku that sounds like a real team member, not an instruction to the agent.
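To make the orchestrator's role concrete, here is a hypothetical sketch of a phase-completion gate. In the real system the evaluation is done by an LLM at the phase boundary; the artefact-type thresholds below are assumptions for illustration, not the actual criteria:

```typescript
// Illustrative gate: before allowing a transition, check the phase has
// produced the artefacts it exists to produce. Thresholds are assumed.
type Phase = "discover" | "define" | "develop" | "deliver";

interface Artefact {
  type: string;
}

// Hypothetical minimum artefact counts per phase.
const REQUIRED: Record<Phase, Record<string, number>> = {
  discover: { sticky_note: 5, empathy_map: 1 },
  define: { problem_statement: 1 },
  develop: { concept_card: 5 },
  deliver: { prototype_spec: 1 },
};

function phaseLooksComplete(phase: Phase, artefacts: Artefact[]): boolean {
  const counts: Record<string, number> = {};
  for (const a of artefacts) counts[a.type] = (counts[a.type] ?? 0) + 1;
  return Object.entries(REQUIRED[phase]).every(
    ([type, min]) => (counts[type] ?? 0) >= min,
  );
}
```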
Structured Output Protocol
Every agent response → conversational text + <structured_output> JSON block

The JSON drives three behaviours: uiMode triggers a canvas tool switch, artefactsToAdd are persisted and broadcast to all team members in real time, and handoffReady enables the phase transition button for team leads.
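A sketch of how a response might be split into its two halves. The field names (uiMode, artefactsToAdd, handoffReady) follow the protocol described above; the exact tag format and artefact shape are assumptions:

```typescript
// Split an agent response into conversational text and the structured JSON
// block. Artefact shape below is illustrative.
interface StructuredOutput {
  uiMode?: string;
  artefactsToAdd?: Array<{ type: string; content: string }>;
  handoffReady?: boolean;
}

function parseAgentResponse(raw: string): {
  text: string;
  structured: StructuredOutput;
} {
  const match = raw.match(/<structured_output>([\s\S]*?)<\/structured_output>/);
  const text = raw
    .replace(/<structured_output>[\s\S]*?<\/structured_output>/, "")
    .trim();
  let structured: StructuredOutput = {};
  if (match) {
    try {
      structured = JSON.parse(match[1]);
    } catch {
      // Malformed JSON: fall back to plain text so the chat still renders.
    }
  }
  return { text, structured };
}
```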

A background extraction pipeline (Claude Haiku) also runs after each response, catching artefacts mentioned in natural conversation that the agent didn't explicitly structure — 23 tool-specific extractors across all four phases, with fuzzy deduplication against existing artefacts.
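As an illustration of the deduplication step, here is a minimal token-overlap (Jaccard) similarity check. The pipeline's actual similarity measure and threshold are not shown here; 0.8 is an assumed cutoff:

```typescript
// Fuzzy dedup sketch: compare a newly extracted artefact against existing
// ones by token overlap. Threshold is illustrative.
function tokens(s: string): Set<string> {
  return new Set(s.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

function similarity(a: string, b: string): number {
  const ta = tokens(a);
  const tb = tokens(b);
  if (ta.size === 0 && tb.size === 0) return 1;
  let shared = 0;
  for (const t of ta) if (tb.has(t)) shared++;
  return shared / (ta.size + tb.size - shared); // Jaccard index
}

function isDuplicate(
  candidate: string,
  existing: string[],
  threshold = 0.8,
): boolean {
  return existing.some((e) => similarity(candidate, e) >= threshold);
}
```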

Technical Architecture

The platform is a three-service architecture deployed on Railway, with the backend split between Node.js (real-time collaboration) and Python (AI processing) — each optimised for its I/O profile.

Frontend
React 18 + TypeScript + Vite
Zustand for state, Socket.io for real-time sync, 30+ canvas tool components, CSS Modules
App Server
Node.js 20 + Express + Socket.io
REST API, WebSocket rooms, JWT auth, real-time artefact broadcast
AI Service
Python 3.12 + FastAPI + Anthropic SDK
Background worker for AI streaming, artefact extraction, orchestration
Database
PostgreSQL 16 + Drizzle ORM
ACID compliance, JSONB for flexible artefact schemas, type-safe queries
Cache & Queue
Redis
AI task queue (BLPOP), pub/sub relay for streaming, Socket.io adapter, presence TTLs
AI Model
Claude Sonnet 4 (Anthropic)
Phase agents use Sonnet for depth; extraction and summaries use Haiku for speed and cost
Security

bcrypt password hashing (cost 12)
JWT dual-token system with refresh token rotation
HttpOnly/Secure/SameSite cookies
Role-based access control on every route
Team membership middleware for data isolation
Zod schema validation on all request bodies
Helmet.js security headers
Two-tier rate limiting (20/15 min on auth, 100/min on API)
Invite-only registration with cryptographically random single-use tokens
Email enumeration prevention on password reset
Full session invalidation on password change
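The two-tier rate limits can be sketched as a fixed-window counter. The production service applies these as Express middleware; this standalone class just illustrates the policy:

```typescript
// Fixed-window rate limiter sketch for the two tiers described above.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private max: number, private windowMs: number) {}

  // Returns true if the request is within the limit for this key.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count++;
    return entry.count <= this.max;
  }
}

const authLimiter = new RateLimiter(20, 15 * 60 * 1000); // 20 per 15 min
const apiLimiter = new RateLimiter(100, 60 * 1000); // 100 per min
```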

The AI service communicates with the App Server exclusively through Redis — it has no public HTTP endpoints except a health check. All AI task payloads and responses flow through internal pub/sub channels.
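Because both sides agree on plain JSON over Redis, the contract can be expressed as one type plus encode/decode helpers. The field names below are assumptions for illustration, not the documented payload schema:

```typescript
// Assumed shape of an AI task payload: pushed onto a Redis list by the App
// Server, consumed by the Python AI service with a blocking pop.
interface AiTask {
  taskId: string;
  teamId: string;
  phase: "discover" | "define" | "develop" | "deliver";
  kind: "chat" | "extract" | "summarise" | "orchestrate";
  payload: Record<string, unknown>;
}

const encodeTask = (t: AiTask): string => JSON.stringify(t);
const decodeTask = (s: string): AiTask => JSON.parse(s) as AiTask;

// In production: the App Server RPUSHes encodeTask(t) onto the queue; the AI
// service blocks on BLPOP and streams results back over pub/sub channels.
```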

30+ Collaborative Canvas Tools

The workspace is a split-panel layout: AI chat on the left (40%), dynamic canvas on the right (60%). The canvas switches between tools based on the AI agent's structured output, and teams can also manually select any tool they've unlocked. Every artefact change is broadcast to all team members in real time via Socket.io.

Tools are organised by phase: Discover, Define, Develop, Deliver.

Try It

The demo is pre-loaded with a team (Alpha Team) in the Discover phase. Three accounts let you experience different roles — from team member to facilitator to platform admin. All accounts share the same password.

Team Lead — password: password123
Facilitator — password: password123
Administrator — password: password123

Start as the team lead to chat with the Discover agent and see artefacts appear on the canvas in real time. Switch to the facilitator account to see the cohort dashboard with team status, message counts, and intervention tools. Use the admin account to see cohort management and invite generation. Open two browser windows to see real-time sync between team members.

Launch Augment →

Research Team

Augment originated from an idea by Timothy Hor, whose vision for AI-augmented design thinking set the direction for the project.

Timothy Hor — Project Lead; Lecturer, Management and Technology
Ashenafi Biru — Senior Lecturer, Management (Entrepreneurship)
Joseph Kim — Senior Lecturer
Jessica Helmi — Senior Lecturer & Deputy Dean, Learning & Teaching
Thomas Bierly — Associate Lecturer & Chief AI Architect*

*self-appointed