Service Provider Metrics Dashboard is an internal dashboard within GEICO’s PATH platform that helps infrastructure teams track cloud capacity, usage, and cost across services like Kubernetes, VMs, Kafka, databases, and storage.

//Role
Senior Product Designer
//Duration
March 2025 to August 2025
//Context
GEICO, PATH Platform
//Industry
Enterprise / Cloud Infrastructure
I worked on the project as a Senior Product Designer, helping define the experience, organize requirements, and shape the dashboard structure through close collaboration with engineering and service provider teams.
Problem & Opportunity
The Problem
Infrastructure teams lacked a clear way to understand how allocated cloud capacity was being used. A first version of the dashboard existed, but it had significant gaps in cost visibility, usage visibility, and service-specific needs.
That made it harder for teams to monitor utilization, identify inefficiencies, and make informed decisions about capacity planning.
The Opportunity
There was an opportunity to turn the dashboard into a more useful decision-making tool by focusing on the right signals, organizing the information better, and creating views that reflected how different teams actually work.
Goals & Success Criteria
Improve visibility into cloud capacity allocation and usage
Enable tracking of budget, TAC, and TAS by environment
Help teams identify overuse, underuse, and quota risks
Support better planning and forecasting across service providers
Success was based on requirement clarity, stakeholder alignment, and whether the dashboard structure reflected real user needs.

Context & Constraints
This dashboard lives inside PATH, GEICO’s Platform and Technology Hub, which centralizes SDLC and infrastructure workflows.
Within PATH’s Capacity Management system:
system owners submit capacity requests
org owners manage spend
capacity and provisioning teams review and fulfill requests
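To make these roles concrete, here is a minimal TypeScript sketch of how a capacity request might be modeled as it moves through the system. Every type, field, and status name below is a hypothetical illustration for this case study, not PATH's actual schema.

```typescript
// Hypothetical sketch of a capacity request lifecycle inside PATH's
// Capacity Management system. All names are illustrative assumptions.
type RequestStatus = "submitted" | "in_review" | "approved" | "fulfilled";

interface CapacityRequest {
  id: string;
  service: "kubernetes" | "vm" | "kafka" | "database" | "storage";
  environment: string;    // e.g. dev, staging, prod (assumed tiers)
  systemOwner: string;    // submits the capacity request
  orgOwner: string;       // accountable for spend against budget
  requestedUnits: number; // in service-specific capacity units
  status: RequestStatus;  // reviewed and fulfilled by provisioning teams
}
```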
The project also had a few strong constraints:
HUE existed as a design system, but PATH had its own inconsistent version of it
component documentation was weak
the data was dense and technically complex
performance had to stay manageable
some required data depended on systems like CCRM and CMDB

Users & Research
Target Users
Service provider owners and related infrastructure teams, including:
Kubernetes
Virtual Machines
Kafka
Database
Storage
Serverless and infrastructure capacity teams

Research Approach
user interview feedback captured across multiple service teams
collaborative sessions with engineering and service provider stakeholders
review of the existing dashboard and its gaps
structured FigJam workshops to gather pain points, priorities, and desired functionality
Key Insights
teams needed clearer visibility into actual usage, not just requests
cost and budget tracking were inconsistent or missing
each service provider needed somewhat different views
one shared dashboard was too broad to serve everyone well
the amount of data made prioritization critical
Problem Statement
How might we help infrastructure teams monitor and understand cloud capacity usage in a way that supports both shared metrics and service-specific needs?
Strategy & Approach
The approach was to bring structure and simplicity to a highly technical problem.
Instead of trying to make one dashboard do everything, I helped reorganize the experience around shared patterns plus service-specific views. That meant identifying which requirements applied to everyone, which ones were unique to certain services, and which data points were actually useful for decision-making.
The focus stayed on actionable insights, not just exposing more data.
Information Architecture & Flows
The dashboard was structured around a few core needs:
filtering by region, environment, and organizational metadata
understanding usage and demand
tracking cost and budget consumption
identifying risk, inefficiency, and forecasting needs
The architecture shifted from a single broad dashboard toward a more modular model where users could access views tailored to their service area while still working within a consistent framework.
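One way to picture the modular model is a shared base of filters and metrics that every view inherits, extended with service-specific signals. The sketch below is a hypothetical TypeScript data model; the interfaces, fields, and per-service metrics are my own illustrative assumptions, not the production schema.

```typescript
// Hypothetical sketch of the shared-plus-specialized dashboard model.
// All interfaces and fields are illustrative, not the real data model.

interface SharedFilters {
  region: string;
  environment: string; // assumed environment tiers, e.g. dev/staging/prod
  orgUnit?: string;    // organizational metadata
}

interface SharedMetrics {
  allocatedCapacity: number; // what was granted
  usedCapacity: number;      // what is actually consumed
  budgetSpent: number;       // cost consumed to date
  budgetTotal: number;
}

// Each service provider layers its own signals on the shared base.
interface KubernetesView extends SharedMetrics {
  nodeCount: number;
  podDensity: number;
}

interface KafkaView extends SharedMetrics {
  partitionCount: number;
  throughputMBps: number;
}

type ServiceDashboard =
  | { kind: "kubernetes"; filters: SharedFilters; metrics: KubernetesView }
  | { kind: "kafka"; filters: SharedFilters; metrics: KafkaView };
```

The design intent this sketch captures is that adding a new service provider means extending the shared base, not rebuilding the dashboard.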

Design System & Visual Direction
The work used HUE, GEICO’s design system, but in practice PATH had its own poorly documented implementation. That made consistency harder than it should have been.
To keep the experience coherent, I worked closely with developers to understand what was actually feasible, which components were reliable, and how to handle the large number of graphs and custom visualizations the dashboard required.
The visual direction focused on clarity, structure, and practicality in a dense data environment.
Wireframes to Prototype

This project was highly iterative. From March through early August 2025, I worked through more than 10 high-fidelity iterations with constant feedback from engineering and stakeholders.
Because the same teams helping define the requirements were also building and using the tool, the work involved frequent reviews, fast feedback loops, and repeated adjustments to support performance, usability, and technical constraints.
Usability Testing & Iteration
We ran collaborative FigJam sessions with the development and infrastructure teams, who were both building the product and would be its primary users.
We asked questions about decision-making, priorities, missing information, and ideal future functionality, then gathered feedback through timed exercises and sticky-note responses.

Problem: A single dashboard could not support all service providers effectively
Solution: We split the experience into service-specific dashboards such as K8s, Kafka, VM, storage, and others

Problem: Different service teams needed different data views and structures
Solution: We kept a shared dashboard foundation while allowing specialized views per service

Problem: Some data points, like total number of demands, were not actually useful
Solution: We deprioritized or removed low-value metrics and focused on more actionable signals

Problem: Users needed better prioritization of what mattered most
Solution: We emphasized usage, cost, forecasting, and quota-related insights over broad aggregate reporting
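As a concrete example of the kind of actionable signal we prioritized, a simple utilization ratio with thresholds can flag overuse, underuse, and quota risk at a glance. The function and threshold values below are a hypothetical sketch, not the dashboard's actual logic.

```typescript
// Hypothetical utilization check; thresholds are illustrative only.
type QuotaSignal = "underused" | "healthy" | "at_risk" | "over_quota";

function classifyUtilization(used: number, allocated: number): QuotaSignal {
  if (allocated <= 0) return used > 0 ? "over_quota" : "underused";
  const ratio = used / allocated;
  if (ratio < 0.3) return "underused"; // candidate for reclaiming capacity
  if (ratio < 0.8) return "healthy";
  if (ratio <= 1.0) return "at_risk"; // approaching quota, forecast needed
  return "over_quota";
}

// Example: 92 cores used of 100 allocated → "at_risk"
console.log(classifyUtilization(92, 100));
```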
Outcome & Impact
The project resulted in a clearer requirement set and a stronger dashboard direction across multiple service teams. It helped define what each team actually needed to see, what should stay shared, and what should vary by service.
The work created a more realistic and scalable foundation for future design and development.
Reflection & Learnings
Challenges
aligning multiple technical teams with different priorities
designing inside a fragmented design-system environment
balancing ideal UX with real performance constraints
managing ongoing iteration across a highly technical product
What I Learned
complex platforms often need modular solutions, not one universal view
data-heavy products need prioritization more than they need more data
close collaboration with engineering is critical in technically constrained environments
a design system only helps if it is actually documented and consistently implemented
Next Steps
continue refining the final dashboard designs
strengthen integrations with systems like CCRM and CMDB
expand forecasting capabilities
explore cross-service navigation and insights where useful