The Problem
Every semester, RMIT University's course coordinators spend days manually matching hundreds of sessional tutors to class slots — cross-referencing availability spreadsheets, avoiding double-bookings, ensuring workload fairness, and chasing acceptances via email. It's a process that's error-prone, time-consuming, and doesn't scale.
I built a platform to automate it entirely.
What It Does
The platform automates the complete tutor-to-class assignment workflow.
- Tutors self-submit availability, course interests, and credentials via a portal, and rank their preferred classes across all courses in a global drag-and-drop list.
- School administrators review applicants and set per-course tutor preferences (Preferred / OK / Not Preferred).
- Course coordinators independently rank tutors for each of their specific classes.
- School administrators trigger the optimization algorithm for their school's semester, producing conflict-free assignments that respect a 36 hr/week workload cap and institutional workforce rates.
- Course coordinators review results, make bounded adjustments within their own course, and approve the schedule.
- Tutors accept or reject offers through the portal with email notifications at each stage.
The Optimisation Engine
The core is a Python service built on CP-SAT, the constraint programming solver from Google's open-source OR-Tools suite — the same tooling Google uses for its own large-scale scheduling and routing problems.
Hard Constraints
- Each class receives exactly one tutor
- No tutor double-booked across overlapping slots (including simultaneous courses)
- Per-course minimum sessions: tutors assigned to a course teach either 0 or ≥ minimum sessions — preventing isolated one-off assignments that don't justify the preparation cost
- Configurable maximum load per tutor (default: 6 sessions)
- Tutors only assigned to classes matching their stated availability
- 36 hr/week cap: existing commitments + teaching hours + marking hours ≤ maximum weekly hours
- RMIT workforce framework teaching rates applied within the cap: first delivery of a course in a week = 3× contact hours; repeat delivery of the same subject within 7 days = 2× contact hours
- Marking hours included in the weekly cap, calculated via a peak-window method — a sliding window across assessment due dates finds the worst-case simultaneous marking load, scaled by enrolled students per class
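The peak-window idea in the last bullet can be sketched in pure Python. The 14-day turnaround, the per-student marking rate, and the even spread of hours across the window are illustrative assumptions:

```python
from datetime import date, timedelta

def peak_marking_hours(assessments, turnaround_days=14):
    """Worst-case simultaneous weekly marking load for one tutor.

    assessments: list of (due_date, enrolled_students, hours_per_student).
    Each assessment occupies the `turnaround_days` window after its due
    date, its total hours spread evenly across that window's weeks. A
    sliding 7-day window then finds the heaviest-overlap week.
    """
    weeks = turnaround_days / 7
    loads = []  # (window_start, window_end, weekly_hours) per assessment
    for due, students, rate in assessments:
        total = students * rate
        loads.append((due, due + timedelta(days=turnaround_days), total / weeks))

    peak = 0.0
    for due, _, _ in assessments:
        window_end = due + timedelta(days=7)
        week_load = sum(w for start, end, w in loads
                        if start < window_end and end > due)
        peak = max(peak, week_load)
    return peak

# Two assessments due in the same fortnight stack; their windows overlap.
clash = peak_marking_hours([
    (date(2025, 3, 10), 50, 0.25),   # 12.5 h over 2 weeks -> 6.25 h/week
    (date(2025, 3, 14), 40, 0.25),   # 10.0 h over 2 weeks -> 5.00 h/week
])
# clash == 11.25: both loads land in the same worst-case week
```

The returned peak, scaled per class, is what feeds into the 36 hr/week cap above.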
Relaxation Cascade
- When no fully constrained solution exists, the solver progressively relaxes constraints across six stages, each stage inheriting all relaxations of the previous ones. Applied relaxations are recorded and returned with the result.
- Stage 1: Per-course minimum sessions relaxed to 1
- Stage 2: Global maximum sessions per tutor removed (36 hr cap still enforced)
- Stage 3: Marking turnaround window extended from 2 to 3 weeks, reducing weekly marking hours attributed to each tutor
- Stage 4: Marking load may be shared — teaching tutors who would exceed the cap when marking is included are relieved; a greedy post-pass assigns orphaned marking to the eligible tutor with the most remaining capacity, preferring tutors already teaching in the same course
- Stage 5: CC-ranked NOT_PREFERRED tutors become eligible — a coordinator's explicit ranking overrides the admin preference tier
- Stage 6: Per-course minimum removed entirely (last resort)
Marking Assignment
- After optimisation, each teaching tutor marks their own class's student submissions by default
- Only tutors whose total weekly hours (teaching + existing commitments + marking) would exceed the 36 hr cap are relieved of marking
- Unassigned marking is greedily allocated to the eligible tutor with the most remaining capacity, preferring tutors already teaching in the same course for content familiarity
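The relieve-then-reallocate logic reduces to two greedy passes. The tutor record shape below is an illustrative assumption, not the platform's actual schema:

```python
def reassign_marking(tutors, cap=36.0):
    """Relieve over-cap tutors of their marking, then greedily re-home it.

    `tutors` maps name -> {"teach": h, "other": h, "marking": h,
    "course": str, "eligible_courses": set}. Illustrative record shape.
    """
    def spare(t):
        return cap - (t["teach"] + t["other"] + t["marking"])

    # Pass 1: strip marking from anyone whose full load exceeds the cap.
    orphaned = []
    for name, t in tutors.items():
        if spare(t) < 0:
            orphaned.append((t["marking"], t["course"]))
            t["marking"] = 0.0

    # Pass 2: greedy -- same-course teachers first, then most spare capacity.
    moves = {}
    for hours, course in orphaned:
        candidates = [(n, t) for n, t in tutors.items()
                      if course in t["eligible_courses"] and spare(t) >= hours]
        if not candidates:
            continue  # left unassigned; surfaced to admins in the real system
        candidates.sort(key=lambda nt: (nt[1]["course"] == course, spare(nt[1])),
                        reverse=True)
        name, t = candidates[0]
        t["marking"] += hours
        moves[course] = name
    return moves
```

Sorting on `(same_course, spare_capacity)` encodes the stated preference order: content familiarity first, remaining capacity as the tie-breaker.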
Performance
- School-wide runs across hundreds of tutors and classes spanning multiple courses
- Pre-filtering on availability, course interest, and the hourly cap cuts the number of active binary variables by roughly an order of magnitude from the worst case (every tutor × every class) before the solver begins
- Solver time limit: 300 seconds for school-wide runs
- Relaxation cascade ensures a result is always returned, with applied relaxations clearly reported
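The pre-filtering step amounts to generating solver variables only for viable pairs. The field names below are illustrative assumptions about the record shape:

```python
def build_pairs(tutors, classes):
    """Generate only the (tutor, class) pairs worth a solver variable.

    Each tutor dict holds "available" slots, "interested" courses, and
    "spare_hours" under the weekly cap; each class dict holds "slot",
    "course", and "hours". Illustrative field names. The worst case is
    len(tutors) * len(classes) variables; filtering prunes most of them.
    """
    pairs = []
    for ti, t in enumerate(tutors):
        for ci, c in enumerate(classes):
            if c["slot"] not in t["available"]:
                continue          # availability filter
            if c["course"] not in t["interested"]:
                continue          # course-interest filter
            if c["hours"] > t["spare_hours"]:
                continue          # can never fit under the weekly cap
            pairs.append((ti, ci))
    return pairs
```

Because CP-SAT's search space grows with the number of Boolean variables, pruning impossible pairs up front matters far more than tuning the solver afterwards.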
Technical Architecture
The platform is a production-grade TypeScript monorepo deployed on Railway as separate services with a clean separation of concerns:
| Layer | Technology | Notes |
| --- | --- | --- |
| Frontend | Next.js 15 (App Router) | SSR, file-based routing, TypeScript-first |
| Backend API | Express.js + Prisma ORM | Clean REST separation, type-safe DB queries |
| Database | PostgreSQL 16 | Relational integrity, ACID compliance |
| Optimiser | Python 3.13 + FastAPI + OR-Tools | CP-SAT solver, private network only |
| Job Queue | BullMQ + Redis | Async solver runs — 120s jobs can't be synchronous |
| Email | Resend | Transactional offer notifications and confirmations |
Security
- bcrypt password hashing (cost factor 12)
- JWT access tokens in httpOnly, SameSite=Strict cookies
- Role-based access control on every route
- Row-level data isolation — coordinators can only access their own courses
- Zod validation on all request bodies
- Helmet.js security headers
- Redis-backed rate limiting on authentication endpoints
- Full immutable audit log
The optimiser microservice runs on Railway's private internal network and is not accessible from the public internet — only the backend API can call it.