Designing an Agentic Parent Experience for GEMS SRI
At GEMS SRI, I worked on a concept for reducing parent effort in everyday school tasks. Instead of forcing parents to navigate multiple portal modules, we designed an intent-first layer that could handle common requests end-to-end when safe, and hand off cleanly when not.
Client
GEMS School of Research & Innovation (SRI)
Product
ParentXP Platform
Platform
Mobile (iOS-first)
Timeline
3–4 Months
Role
Product Designer (UX + UI + AI Interaction)
Collaborators
PM, Engineering, AI Team

Context & Problem
Parent systems existed, but everyday workflows were still fragmented
The existing ParentXP portal operated as a digital filing cabinet. Parents were overwhelmed by fragmented modules - tuition, attendance, transport, and circulars - each requiring a distinct navigation path. Cognitive load was high, and the intent-to-completion time for simple tasks averaged over 4 minutes.
IA Mismatch
Users had to understand the school's internal hierarchy to find the correct task path, even for basic actions such as fee payments or leave requests.
Impact
High drop-off rates during fee and leave-related task cycles.
Interaction Mismatch
Dashboards mostly showed static information and did not anticipate the next logical action parents needed to take.
Impact
Increased support and call-centre volume for repetitive how-to questions.
Context Mismatch
Even when the system already held the relevant context, it still asked parents to re-enter details, adding unnecessary friction before task completion.
Impact
Erosion of trust and perceived product apathy toward parent urgency.
Outcome Mismatch
Action confirmations were often buried in email threads and not reflected clearly in the product UI in real time.
Impact
Duplicate task submissions by anxious parents trying to verify completion.
Why Portals Fail
Portal UX and parent intent are structurally mismatched
The platform provided access, but not a reliable path to completion.
Parents often started with a question, not a destination screen
Task-critical context such as child/campus/date was resolved too late
Simple tasks required cross-navigation across pages and channels
Completion confidence was low because action states were unclear
My Contribution
What I personally drove
I led the interaction model, defined clarify-versus-execute orchestration boundaries, shaped trust/failure handling patterns, and established multimodal response rules across use cases.
Research
Parents navigate by intent - the portal navigates by structure
Contextual interviews and task shadowing with parents across GEMS SRI campuses revealed a consistent pattern: the mental model of parents and the information architecture of the portal were fundamentally misaligned.
Insight 01
Parents think in tasks, not modules
Every parent interviewed opened with a goal, not a destination. "I need to report my child sick" - not "I need to go to the attendance module." Navigation architecture was invisible to their mental model.
Insight 02
Repeated context is the biggest friction point
Parents repeatedly had to re-enter child name, campus, and grade across every task. The system had the data but never used it proactively, creating a perceived indifference to their urgency.
Insight 03
Completion confidence was the missing signal
After submitting a request, parents had no reliable in-app signal of success. Confirmation emails were delayed or missed, leading to duplicate submissions and unnecessary support calls.
Patterns
Supporting signals from usage analytics
4+ min
Average intent-to-completion time for fee or leave tasks
3–5×
Average number of screens touched per simple task
67%
Support queries that were task-navigation questions, not real issues
Early Validation Signals
Concept-testing with 8 parents confirmed the direction
Parents preferred a single message box over a structured form for task initiation
Context pre-fill (child name, campus) reduced perceived effort even when form length was identical
Clear success states - not email confirmations - restored completion confidence
Edge cases like multi-child households needed explicit disambiguation, not silent defaults
Strategy
From Navigate → Act to Ask → Confirm
How Might We
Design an experience where a parent can express any school-related intent and get to task completion - without learning the system's structure?
Intent-First Entry
Replace module navigation with a single message input. Parse the parent's natural language intent and route to the right task path - no upfront form, no destination selection.
Progressive Clarification
Ask only what the system cannot infer. If the parent has one child, don't ask which child. Surface clarification inline, not as a gating pre-step that adds perceived friction.
Execution with Boundaries
Execute autonomously for safe, reversible actions. For high-stakes tasks - fee payments, record changes - confirm intent explicitly before acting. Define the boundary, don't blur it.
Structured UI for Confirmation
After the agent executes, return a structured summary card - not a chat message. Status, timestamp, and a clear action trail. Confidence through design, not through prose.
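The clarify-versus-execute boundary described above can be sketched as a small decision routine. This is a minimal illustration in Python - the action names, slot keys, and `Intent` shape are assumptions for the sketch, not the shipped GEMS SRI implementation:

```python
from dataclasses import dataclass, field

# Hypothetical action registry - names and risk tiers are illustrative.
LOW_STAKES = {"report_absence", "query_assignments", "pickup_info", "fee_policy"}
HIGH_STAKES = {"pay_fees", "change_record"}

@dataclass
class Intent:
    action: str                                 # parsed task, e.g. "report_absence"
    slots: dict = field(default_factory=dict)   # child, campus, date, ...
    required: tuple = ()                        # slots the task cannot proceed without

def next_step(intent: Intent, profile: dict) -> str:
    """Decide the agent's next move: CLARIFY, CONFIRM, or EXECUTE."""
    # Progressive clarification: fill what the profile can infer,
    # ask only for what remains.
    for slot in intent.required:
        if slot in intent.slots:
            continue
        inferred = profile.get(slot)
        if isinstance(inferred, list):
            # One child can be inferred silently; multiple children
            # require an explicit scoped question - never a silent default.
            if len(inferred) == 1:
                intent.slots[slot] = inferred[0]
            else:
                return "CLARIFY"
        elif inferred is not None:
            intent.slots[slot] = inferred
        else:
            return "CLARIFY"
    # Execution boundary: high-stakes actions always confirm first.
    if intent.action in HIGH_STAKES:
        return "CONFIRM"
    if intent.action in LOW_STAKES:
        return "EXECUTE"
    return "CLARIFY"  # out of scope: never guess
```

The point of encoding this as one routine is that the boundary stays explicit and auditable - a single-child absence report executes, a fee payment always stops at confirmation, and a multi-child household always triggers one scoped question.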
Trade-offs
Key design decisions and why
Challenge
Chat-first vs. structured navigation
Decision
Chose a hybrid: chat entry point, structured UI output. Pure chat creates ambiguity about task status and completion. Structured confirmations anchor completion confidence.
Challenge
How much autonomy to give the agent
Decision
Scoped execution to informational and low-stakes actions only. Fee payments and record changes always require a parent confirmation step - trust is built gradually, not assumed.
Use Cases
Four flows that shaped the interaction model
These use cases were selected because they represent the highest-frequency, highest-friction tasks in the existing portal. Each flow tests a different capability of the intent layer - from pure execution to clarification to information retrieval.
Parent Intent
“How do I let the school know my child is sick?”
Agent Flow
Parent types or says intent - absence reason optional
Agent infers child, date, campus from profile
Clarifies only if multiple children or ambiguous date
Executes submission and shows structured confirmation card
Outcome
Task complete in under 90 seconds with zero navigation required.
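The confirmation at the end of this flow is structured data, not chat prose. A minimal sketch of what such a card might carry - the field names are illustrative assumptions, not the actual ParentXP schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConfirmationCard:
    task: str           # e.g. "Absence reported"
    status: str         # "submitted" | "confirmed" | "failed"
    child: str
    timestamp: str      # ISO 8601, rendered on the card
    action_trail: list  # steps the agent took, for transparency

def absence_confirmation(child: str, date: str) -> ConfirmationCard:
    """Build the structured card shown in place of a chat reply."""
    return ConfirmationCard(
        task="Absence reported",
        status="submitted",
        child=child,
        timestamp=datetime.now(timezone.utc).isoformat(),
        action_trail=[
            f"Inferred child: {child}",
            f"Absence date: {date}",
            "Submitted to attendance system",
        ],
    )
```

Rendering status, timestamp, and an action trail as typed fields (rather than free-form text) is what lets the UI anchor completion confidence consistently across tasks.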

Parent Intent
“Is there an assignment due soon?”
Agent Flow
Agent queries upcoming assignments for detected child
Groups by subject and due date in a scannable card list
Surfaces overdue or high-priority items first
Offers to set a reminder or notify the teacher inline
Outcome
Reduced context-switching between subject portals and notification apps.

Parent Intent
“Where do I pick up my child?”
Agent Flow
Agent detects campus from child profile
Returns today's pickup zone, timing, and gate number
Flags any same-day schedule changes (event days, drills)
Option to notify school of late pickup inline
Outcome
Eliminated need to call school reception for routine pickup logistics.

Parent Intent
“Does SRI offer early payment discounts?”
Agent Flow
Agent retrieves current fee schedule and policy docs
Surfaces relevant policy directly - no PDF download
Flags upcoming fee deadlines for the child's year group
Provides payment initiation CTA with pre-filled context
Outcome
Policy queries resolved in-app without escalation to finance team.

Edge Cases
Trust is built in the failure states
Agentic UX only earns parent trust if it handles uncertainty and failure gracefully. Designing these patterns was as important as the happy path flows - perhaps more so.
Ambiguous Intent
Example Trigger
"What time does school start?" (which campus? term time or exam week?)
Agent Response
Agent surfaces the most likely answer based on profile context, then offers a correction tap. Does not block with a disambiguation modal.
Low Confidence Parse
Example Trigger
Slang, partial sentences, or a topic outside agent scope
Agent Response
Agent surfaces a clarification prompt with 2–3 suggested interpretations as tappable chips. Falls back to a structured form if second attempt also fails.
Permission Boundary
Example Trigger
Parent attempts to change a grade record or access another child's data
Agent Response
Agent declines clearly with a one-sentence reason and routes to the correct escalation path. No error states - just a transparent handoff.
System or API Failure
Example Trigger
Agent cannot reach the attendance module, or the fee gateway is down
Agent Response
Task state is preserved. Agent acknowledges the failure explicitly, shows a retry option, and offers an alternative channel (call, email) if retry fails twice.
Multi-Child Household
Example Trigger
Parent has two children in different year groups or at different campuses
Agent Response
Agent asks a single scoped question - "For Aiden or Sara?" - before executing. Never silently defaults to the first-listed child.
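The fallback rules above - two clarification attempts before a structured form, two retries before offering another channel - can be expressed as a small, testable policy. A hedged sketch; the confidence threshold and function names are assumptions made for illustration:

```python
# Hypothetical fallback policy for low-confidence parses and system failures.
# The two-attempt limits mirror the edge-case patterns described above.

CONFIDENCE_FLOOR = 0.6  # assumed threshold, the kind of value tuned in testing

def handle_parse(confidence: float, attempt: int) -> str:
    """Low-confidence parse: suggest interpretations, then stop guessing."""
    if confidence >= CONFIDENCE_FLOOR:
        return "proceed"
    if attempt < 2:
        return "suggest_chips"    # 2-3 tappable interpretations
    return "structured_form"      # second failure: fall back to a form

def handle_api_failure(retries: int) -> str:
    """System failure: preserve task state, retry, then hand off."""
    if retries < 2:
        return "retry_with_notice"  # acknowledge the failure, offer retry
    return "offer_alt_channel"      # call or email after two failed retries
```

Keeping these limits in one place makes the trust behaviour predictable: the agent never loops indefinitely on clarification, and a parent is never stranded after a backend outage.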
Impact
Concept outcomes from testing
This was a concept-stage project. Outcomes below reflect qualitative findings from contextual testing with parents and internal stakeholder reviews - not shipped metrics.
Task Completion Speed
Efficiency
Parents completed absence and fee-related tasks in under 2 minutes in concept testing - down from a 4+ minute baseline on the existing portal.
Navigation Drop-off
Simplicity
Zero cross-module navigation required for all four primary use cases. Intent was resolved at the entry point in every tested scenario.
Completion Confidence
Trust
Structured confirmation cards replaced email follow-ups as the primary signal of task success. Parents expressed immediate confidence upon seeing the confirmation state.
Support Query Reduction
Deflection
In concept testing, all four use cases previously addressed by support calls were fully resolved by the agent - without escalation.
Learnings
What this project changed about how I design for AI
Scope reliability matters more than breadth
Parents trusted an agent that did fewer things reliably over one that claimed wide coverage but failed unpredictably. Scope the first version tightly - earn trust before expanding.
Clarification is not friction if it’s scoped
A single, well-framed question ("For Aiden or Sara?") did not feel like friction - it felt like the system was paying attention. The UX of asking matters as much as when to ask.
Chat is not always the right UI for the output
Natural language input worked well for intent entry, but structured cards and tables outperformed chat prose for confirmations and lists. Mixing modalities was the right call.
System logic is core UX, not backend detail
The orchestration rules - when to execute, when to clarify, when to escalate - were the most consequential UX decisions. They needed to be designed, not delegated to engineering.