AP+ Testing Portal
The AP+ Testing Portal is the first web application built to replace manual, spreadsheet-heavy certification workflows.
It enables banks, fintechs, and merchants to self-serve certification against AP+ payment products, while giving AP+ admins structured tools for workflow templates, attestations, exceptions, and reporting.
By digitising certification, the portal reduced manual effort, improved transparency, and accelerated member onboarding across the AP+ ecosystem.
Approach
User Journey Mapping
Facilitated stakeholder workshops to align on goals, constraints, and user needs.
Pain-Point Gathering
Interviewed internal testers to uncover inefficiencies (e.g., lack of version control, limited visibility).
Information Architecture
Designed a multi-level certification hierarchy (steps, tasks, sub-tasks).
Design System Implementation
Introduced a design-to-code pipeline (Figma MCP server, GitHub Copilot, Claude Code, Supabase).
Standardised design variables and best practices, enabling a smooth handover to engineers.
Scope
UX facilitation
Cross-functional collaboration
Content strategy
User interface design and detailing
Solution
For Phase 2 of the Testing Portal, we delivered two key pillars:
Certification Workflow Template Management
Admins can create, edit, and manage templates with steps, tasks, test case linkages, dependencies, and reports.
Member Certification
Participants can view assigned workflows, complete attestations, raise exceptions, and track progress.
Built secure access hierarchies for managing permissions and documentation.
Solving user problems
Multi-level certification hierarchies
Designed a scalable structure for steps, tasks, and sub-tasks that could flex across different certification scenarios (baseline, product, recertification).
Balanced clarity for users with system scalability, ensuring workflows didn’t overwhelm participants while still capturing necessary detail.
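To make the shape of this hierarchy concrete, here is a minimal sketch of how such a structure might be modelled; the type and field names are illustrative assumptions, not the portal's actual schema.

```typescript
// Illustrative sketch only: type and field names are assumptions,
// not the portal's actual data model.
type CertificationScenario = "baseline" | "product" | "recertification";

interface SubTask {
  id: string;
  name: string;
  testCaseIds: string[]; // test cases attach at the lowest level of a branch
}

interface Task {
  id: string;
  name: string;
  subTasks: SubTask[];
  testCaseIds: string[]; // only used when subTasks is empty
}

interface Step {
  id: string;
  name: string;
  tasks: Task[];
  dependsOn: string[]; // ids of steps that must complete first
}

interface WorkflowTemplate {
  id: string;
  scenario: CertificationScenario;
  steps: Step[];
}
```

Modelling dependencies as step ids keeps one template structure flexible enough to cover baseline, product, and recertification scenarios.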
Multi-stepper UI
The problems:
High cognitive load: Admins had to navigate hundreds of test cases and decide where to link them in the workflow.
Complex hierarchy: Linking rules depended on the structure; if a step had sub-tasks, test cases could not be attached at that step level (see the validation sketch after the solutions below).
Risk of errors: Without guardrails, test cases could be duplicated, mislinked, or missed entirely.
User frustration: The process was previously managed in spreadsheets, making it error-prone and hard to validate.
The solutions:
Introduced a guided stepper interface that broke the task into manageable stages.
Provided progress indicators so users always knew where they were in the process.
Added a review screen where admins could confirm all associations before finalising, reducing mistakes.
Differentiated informative error messages from alerts, so admins could tell blocking issues apart from warnings.
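To illustrate the kind of guardrail behind these validations, here is a hedged sketch of how the linking rule could be checked, reusing the illustrative types sketched earlier; the function and message wording are assumptions, not the portal's implementation.

```typescript
// Hypothetical guardrail for the linking rule described above: if a node
// has children (here, a task with sub-tasks), test cases must be linked
// to the children rather than to the node itself. Reuses the illustrative
// WorkflowTemplate types from the earlier sketch.
interface LinkError {
  taskId: string;
  message: string;
}

function validateTestCaseLinks(template: WorkflowTemplate): LinkError[] {
  const errors: LinkError[] = [];
  for (const step of template.steps) {
    for (const task of step.tasks) {
      if (task.subTasks.length > 0 && task.testCaseIds.length > 0) {
        errors.push({
          taskId: task.id,
          message:
            `Task "${task.name}" has sub-tasks; link its test cases to the ` +
            `sub-tasks instead.`,
        });
      }
    }
  }
  return errors;
}
```

A review screen like the one described above could surface this list before finalising, and a duplicate-id check in the same pass would catch the duplication risk noted earlier.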
Facilitating Feedback to Inform Phase 2
As part of Phase 2, I facilitated feedback and pain-point workshops with internal testers to learn from their use of the Phase 1 features (Test Case Management and Test Suite Management).
The goal was to uncover real-world frustrations and repetitive tasks from Phase 1 that could guide the design of similar but more complex workflows in Certification Workflow Management.
From these sessions, I captured:
Bulk actions and efficiency gaps: testers wanted faster ways to clone, copy/paste, or delete multiple test steps.
Navigation and usability pain points: the system resetting to the top after edits, or difficulty scrolling through large lists of test cases.
Information visibility: need to see test names in list view, and toggle the visibility of certain fields.
Performance expectations: pagination and load times needed to be smoother when working with large data sets.
Validation clarity: clearer separation between blocking errors vs warnings or alerts.
At the same time, testers shared what worked well — e.g., test case cloning and test suite creation were seen as efficient and intuitive.
By capturing these insights, I ensured that Phase 2 designs built on proven successes while addressing gaps, creating a seamless bridge between test case/test suite management (Phase 1) and certification workflows (Phase 2).
Impact
20–30% reduction in manual effort for internal testers by replacing spreadsheets with structured workflows.
20–30% faster onboarding for member organisations through streamlined certification flows and access management.
Improved cross-functional collaboration by aligning design outcomes with team OKRs.
Positive user feedback from both external members and internal testers.
AI-enabled workflows within the design team:
Custom UX QA agent using Playwright to test acceptance criteria (a sketch follows below).
AI-optimised Context Repository for project documentation.
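As an illustration of what one of the agent's checks might look like, here is a minimal Playwright sketch asserting a single acceptance criterion from the stepper work described above; the URL, selectors, and the criterion itself are hypothetical, not taken from the actual test suite.

```typescript
import { test, expect } from "@playwright/test";

// Hypothetical acceptance-criteria check of the kind the UX QA agent runs.
// The URL, selectors, and criterion are illustrative assumptions.
test("stepper presents a review screen before finalising", async ({ page }) => {
  await page.goto("https://portal.example.com/templates/new"); // placeholder URL

  // Advance through the guided stepper (bounded to avoid looping forever).
  const next = page.getByRole("button", { name: "Next" });
  for (let stage = 0; stage < 10 && (await next.isVisible()); stage++) {
    await next.click();
  }

  // Acceptance criterion: a review screen summarises all associations
  // before the admin can finalise the workflow template.
  await expect(page.getByRole("heading", { name: "Review" })).toBeVisible();
  await expect(page.getByRole("button", { name: "Finalise" })).toBeEnabled();
});
```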
Achievements (Highlights)
Pioneered a design-to-code process using Figma MCP server, bridging design and engineering.
Delivered production-ready Figma files, ensuring smooth handover to front-end engineers.
Standardised design variables and system practices across teams, improving CSS alignment and reducing front-end rework.
Coached engineers on Figma Dev Mode and shared best practices to optimise front-end outputs.
Designed end-to-end certification workflows supporting regulatory compliance.
Enabled self-service certification for banks, fintechs, and merchants, reducing onboarding friction.
Introduced AI-assisted workflows (GitHub Copilot, Claude Code, Playwright, custom UX QA agent).
Structured project documentation in a Context Repository optimised for AI collaboration.