Prototyping an AI support engine - Case study - Part 1

How I designed an LLM-powered triage system to transform customer support for a national sandwich chain

Project Overview

Company: SubWay Express (hypothetical national sandwich chain)

Challenge: Handle 15,000+ monthly customer complaints across a fragmented support system

Solution: AI-powered triage assistant that routes, categorizes, and resolves issues intelligently

The Problem Space

A hypothetical fast food chain “SubWay Express” operates 2,500+ locations nationwide with a mobile app used by 3.2 million active customers. Their customer support was drowning in volume:

Current State Challenges

  • 12-hour average response time for non-urgent issues

  • 35% of tickets misrouted to wrong departments

  • Agent burnout from repetitive questions (order status, refund requests)

  • Inconsistent experience across email, in-app chat, and phone

  • Poor first-time user retention due to unresolved complaints

The Data That Mattered

Using Claude, I researched public support-ticket repositories that outline common support issues in the food app category. I found:

  • 68% were routine issues (missing items, wrong orders, app bugs) that could be resolved automatically

  • 22% required human empathy but were simple (refunds, credits, apologies)

  • 10% were complex (allergen concerns, legal, recurring problems)
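Applied to the roughly 15,000 complaints per month, this split gives a back-of-the-envelope sense of volume per bucket (bucket names here are just labels for illustration):

```python
# Rough monthly volumes implied by the split above,
# applied to ~15,000 monthly complaints.
monthly_tickets = 15_000
split = {"routine": 0.68, "simple_human": 0.22, "complex": 0.10}

for bucket, share in split.items():
    print(f"{bucket}: ~{round(monthly_tickets * share):,} tickets/month")
```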

The insight: Most customers didn't need a "conversation" - they needed fast resolution. But the current system treated every complaint the same way.


Research and Personas

To build out key persona archetypes for this type of simulation, I used Manus and Claude to research customer service personas in the food tech space and understand their pain points. Using that research, I built out four high-level personas to help guide decision making and scoping:

Persona 1: The Loyal Regular (Sarah, 34)

  • Orders 3-4x per week via app

  • Issue: Missing item in curbside pickup

  • Need: Quick acknowledgment and immediate resolution, no friction

  • Frustration: "I just want my $3 back, why do I need to write an essay?"

Persona 2: The First-Timer (Marcus, 22)

  • Downloaded app for a promotional deal

  • Issue: 45-minute wait time, cold food

  • Need: Feeling heard, compensated generously, convinced to try again

  • Frustration: "This was my first order and it was terrible. Why would I come back?"

Persona 3: The Serious Concern (Jennifer, 41)

  • Has a food allergy, received wrong order

  • Issue: Safety risk, wants accountability

  • Need: Immediate escalation to management, documentation, reassurance

  • Frustration: "A chatbot can't handle this. I need to speak to a person NOW."

Agent Perspective (David, Support Team Lead)

  • Pain Point: "We waste time asking for order numbers, locations, basic details customers already gave us"

  • Need: Context before they even open a ticket

  • Frustration: "Half my day is copy-pasting the same apology for app crashes"

Based on the research, I also established core principles to guide the design:

  • Transparency over deception - Never pretend the AI is human

  • Speed over conversation - Resolve, don't chat

  • Graceful degradation - When uncertain, route to human immediately

  • Empower agents - Give them superpowers; don't replace them

  • Safety first - Serious issues (allergens, injuries) bypass AI entirely
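The "safety first" principle is the easiest to make concrete. A minimal sketch, assuming a simple keyword screen (the keyword list and function name are hypothetical), might gate escalation before any AI handling:

```python
# Hypothetical "safety first" gate: serious issues (allergens, injuries,
# legal threats) are detected up front and bypass the AI entirely.
SAFETY_KEYWORDS = {
    "allergy", "allergic", "allergen",
    "injury", "sick", "hospital",
    "lawyer", "lawsuit", "legal",
}

def requires_human_escalation(message: str) -> bool:
    """Return True if the complaint should skip the AI and go
    straight to a human agent."""
    text = message.lower()
    return any(keyword in text for keyword in SAFETY_KEYWORDS)

print(requires_human_escalation("I have a peanut allergy and got the wrong order"))
```

A production system would pair this with a real classifier and err toward escalation when uncertain, but even a crude screen enforces the bypass.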


The Solution

High Level Architecture

Three Resolution Paths

Based on the tree above, I defined three broad resolution paths that would cover most cases. The goal was to start broad and then refine these paths further as we learned more:

Path 1: Instant Resolution

Issues the LLM can resolve without human intervention

Examples: Refunds under $15, order status, app troubleshooting

Response time: Under 60 seconds

Path 2: Assisted Human Support

LLM extracts context, suggests resolution, routes to appropriate agent

Agent receives pre-filled case summary

Response time: Under 2 hours

Path 3: Priority Escalation

Safety concerns, legal issues, high-value customers

Bypass AI, immediate human connection

Response time: Under 15 minutes
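The three paths above can be sketched as a simple routing function. This is a minimal illustration, assuming hypothetical category names and treating the $15 refund limit from Path 1 as an auto-approval threshold:

```python
from dataclasses import dataclass

# Assumed category labels for illustration; a real system would get
# these from the LLM's classification step.
INSTANT_CATEGORIES = {"order_status", "app_troubleshooting"}
PRIORITY_CATEGORIES = {"allergen", "legal", "safety"}

@dataclass
class Ticket:
    category: str
    refund_amount: float = 0.0
    high_value_customer: bool = False

def resolution_path(ticket: Ticket) -> str:
    # Path 3: safety/legal/high-value concerns bypass the AI entirely.
    if ticket.category in PRIORITY_CATEGORIES or ticket.high_value_customer:
        return "priority_escalation"      # human within 15 minutes
    # Path 1: routine issues the LLM can close on its own,
    # including refunds under the $15 auto-approval limit.
    if ticket.category in INSTANT_CATEGORIES or \
       (ticket.category == "refund" and ticket.refund_amount < 15):
        return "instant_resolution"       # resolved in under 60 seconds
    # Path 2: everything else gets an LLM-prepared summary for an agent.
    return "assisted_human_support"       # agent responds within 2 hours

print(resolution_path(Ticket("refund", refund_amount=3)))  # instant_resolution
```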

 

The Interface

As this is a high-level prototype, I wanted to keep the initial interface as simple as possible. I also challenged myself to rely solely on Gemini for the overall design implementation in Gradio, since that was where I planned to host the project.

To keep the design process quick and easy, I created high-level images to give Gemini guidance on layout as well as the copy and components I expected to see on the interface.

Initial Contact Screen

I wanted to stay away from a traditional chat interface. These interactions imply a level of interactivity that often fails users: they expect a human-style exchange but, at the end of the day, are simply talking to an AI agent. To avoid humanizing the AI and to make a clear delineation between when a human is responding (in the negative paths) versus when the agent is, I designed a simple guided input experience. This also makes it more efficient for the user to enter all their details at once.

I then mocked up a quick post it wireframe for Gemini to work with:

Understanding Confirmation

In order to provide a sense of transparency and safety, it was important to show the user what the agent understood from their input. Providing a quick high-level summary and asking them to confirm its accuracy ensures they retain control over their submission while giving them visual confirmation that their case is covered entirely.
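A minimal sketch of that confirm-before-submit flow, assuming hypothetical field names extracted from the guided input:

```python
# Hypothetical confirmation step: extracted fields are echoed back as a
# short summary, and the case is only submitted once the user confirms.
def build_summary(fields: dict) -> str:
    return (f"Order #{fields['order_number']} at {fields['location']}: "
            f"{fields['issue']}. Is this correct?")

def submit_case(fields: dict, user_confirmed: bool) -> str:
    if not user_confirmed:
        return "edit_requested"   # send the user back to amend their input
    return "case_submitted"       # route to the appropriate resolution path

fields = {"order_number": "48213", "location": "Main St", "issue": "missing item"}
print(build_summary(fields))
```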

Resolution Screen

The resolution path chosen by the agent determines whether a resolution screen is needed. For an instant or easily solved inquiry, the basic resolution screen highlights the actions the agent has taken and asks the user whether they are satisfied with the outcome. They would also still have the ability to get additional support if needed.

Next

Echo chambers: Moving Beyond the "Yes" Trap