Insights Dashboard Design
2025 • Cisco
Design challenges I tackled and the approach I took across a SaaS UX initiative and an insights dashboard revamp
analytics_dashboard
user_centered_solutions
ui_visual_design
information_architecture
interactive_prototype
Defining the Purpose and Understanding Needs
This project aimed to improve visibility into how resources were being utilized within a Lab as a Service (LaaS) platform. To ground the work, I analyzed a PRD provided by the Analytics department, aligning business objectives with user needs. My role focused on uncovering pain points and designing solutions to support decision-making around device availability and reservations.
Current Context and Pain Points
Users struggled with limited visibility into device utilization, reservation queues, and failure rates. Metrics were too high-level to provide actionable insights, leaving gaps in understanding real usage patterns.
Problem Statement
The current system lacks clear visibility into device usage versus reservations, queue times, failure rates, and device availability. This results in inefficient resource allocation and more failed test runs caused by device unavailability.
User Needs
To capture the diverse perspectives of stakeholders, I synthesized user needs into clear questions and challenges that reflected how different roles interact with the system. Lab Owners sought ways to manage peak demand and optimize inventory, Engineers required visibility into queues and device availability to avoid failed runs, and Executives needed quick insights into utilization patterns and bottlenecks. These needs formed the foundation for design decisions, ensuring the solution addressed both day-to-day workflows and higher-level strategic goals.
Mapping the Existing Experience
Current Site Map
The existing site map provided a high-level overview of the platform’s structure, but key insights were buried deep within the interface. Users could only reach meaningful information after opening the devices list, and even then had to interpret and manually cross-reference platform numbers to understand utilization patterns and bottlenecks. This revealed a lack of clarity and efficiency in the navigation, making it difficult to quickly extract actionable insights.
Current Pages
There is little value in seeing these numbers and then opening a drawer with table data; the view should already present a comparison across platforms, models, and pins.
Ideating the Core Experience
Core Modules and Features
To guide the redesign, I broke down the system into its core modules and features, ensuring alignment with both the PRD requirements and user priorities.
Definitions and Measurements
I collaborated with the Analytics department to establish clear definitions and measurement standards for inventory and reservations. These covered availability, utilization, maintenance, and reservation outcomes, creating a shared framework to analyze efficiency and performance. Alongside these definitions, I outlined key areas of investigation and guiding questions to uncover root causes behind bottlenecks, failures, and timeouts—ensuring that insights were consistent, actionable, and tied to real user needs.
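To make these measurement standards concrete, here is a minimal sketch of how the core ratios could be expressed, assuming a simplified data model. It is illustrative only: the field names (total_hours, reserved_hours, active_use_hours, and so on) and the exact formulas are assumptions for this example, not the platform’s actual definitions.

```python
from dataclasses import dataclass

@dataclass
class DeviceWindow:
    total_hours: float        # hours in the reporting window
    maintenance_hours: float  # hours the device was under maintenance
    reserved_hours: float     # hours covered by reservations
    active_use_hours: float   # reserved hours with actual activity

def availability(d: DeviceWindow) -> float:
    """Share of the window in which the device could be reserved at all."""
    return (d.total_hours - d.maintenance_hours) / d.total_hours

def utilization(d: DeviceWindow) -> float:
    """Share of available time that was actually reserved."""
    available = d.total_hours - d.maintenance_hours
    return d.reserved_hours / available if available else 0.0

def efficiency(d: DeviceWindow) -> float:
    """Share of reserved time that saw real use; low values flag idle reservations."""
    return d.active_use_hours / d.reserved_hours if d.reserved_hours else 0.0

# Example: a device available most of the week, half reserved, mostly used.
w = DeviceWindow(total_hours=168, maintenance_hours=8,
                 reserved_hours=80, active_use_hours=60)
print(f"availability {availability(w):.0%}, utilization {utilization(w):.0%}, "
      f"efficiency {efficiency(w):.0%}")
```

Separating utilization from efficiency was the key move here: a device can be fully reserved yet barely used, and only the second ratio exposes that.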
User Flows
To ensure the experience addressed real challenges, I mapped user flows that reflected how Lab Owners, Engineers, and Executives would navigate the platform to uncover insights. These flows highlighted the paths users would take to analyze utilization, investigate bottlenecks, and resolve issues, making it easier to validate whether the solution aligned with their goals and daily workflows.
Flow #1 - Analyzing Device Utilization
This flow illustrates the core actions users take to understand device resources within the platform. Starting from availability states (free, reserved, maintenance), users can analyze utilization against capacity and drill down by device type for deeper insights. The flow also highlights how efficiency is derived by comparing availability with actual utilization, ensuring that resource allocation is both transparent and actionable.
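As a rough illustration of this drill-down, the sketch below rolls the availability states up by device type and derives utilization from them; the device records, type names, and hour values are invented for the example.

```python
from collections import defaultdict

# Hypothetical per-device state rollups (hours per week).
devices = [
    {"type": "router", "free": 40, "reserved": 50, "maintenance": 10},
    {"type": "router", "free": 10, "reserved": 85, "maintenance": 5},
    {"type": "switch", "free": 70, "reserved": 25, "maintenance": 5},
]

rollup = defaultdict(lambda: {"free": 0, "reserved": 0, "maintenance": 0})
for d in devices:
    for state in ("free", "reserved", "maintenance"):
        rollup[d["type"]][state] += d[state]

for device_type, hours in rollup.items():
    available = hours["free"] + hours["reserved"]  # capacity not in maintenance
    utilization = hours["reserved"] / available if available else 0.0
    print(f"{device_type}: {utilization:.0%} of available capacity reserved")
```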
Flow #2 - Investigating Reservation Activity
This flow highlights sessions that end without being used, giving visibility into wasted reservations. By tracking timeouts and identifying usage gaps, users can spot inefficiencies, adjust scheduling strategies, and ensure that resources are actively contributing to productivity instead of sitting idle.
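A minimal sketch of how such wasted sessions could be flagged, assuming a hypothetical session record whose last_activity field is empty when a reservation was never used:

```python
from datetime import datetime, timedelta

# Hypothetical session records; last_activity is None when a reservation
# ended (or timed out) without ever being used.
sessions = [
    {"id": "r-101", "start": datetime(2025, 3, 1, 9),
     "end": datetime(2025, 3, 1, 12), "last_activity": None},
    {"id": "r-102", "start": datetime(2025, 3, 1, 10),
     "end": datetime(2025, 3, 1, 11),
     "last_activity": datetime(2025, 3, 1, 10, 40)},
]

# A reservation is "wasted" if it ended with no recorded activity at all.
wasted = [s for s in sessions if s["last_activity"] is None]
wasted_hours = sum((s["end"] - s["start"]) / timedelta(hours=1) for s in wasted)
print(f"{len(wasted)} wasted reservation(s), {wasted_hours:.1f} device-hours idle")
```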
Flow #3 - Investigating Queue Bottlenecks
This flow demonstrates how users identify resources with long wait times and uncover the root causes of bottlenecks. By analyzing queue durations across platforms, models, and pins, users gain visibility into where demand consistently exceeds supply, enabling more effective resource planning and scheduling.
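To illustrate, the sketch below groups wait times by platform, model, and pin and flags resources whose median wait crosses a threshold; the resource names, wait values, and 30-minute cutoff are all assumptions for the example.

```python
from collections import defaultdict
from statistics import median

# Hypothetical queue events: (platform, model, pin, minutes waited).
waits = [
    ("cat9k", "c9300", "pin-a", 12),
    ("cat9k", "c9300", "pin-a", 95),
    ("cat9k", "c9300", "pin-a", 88),
    ("asr",   "asr1k", "pin-b", 5),
]

by_resource = defaultdict(list)
for platform, model, pin, minutes in waits:
    by_resource[(platform, model, pin)].append(minutes)

THRESHOLD_MIN = 30  # assumed cutoff; tune to what the lab considers acceptable
for resource, samples in sorted(by_resource.items(),
                                key=lambda kv: median(kv[1]), reverse=True):
    wait = median(samples)
    flag = "  <-- bottleneck" if wait > THRESHOLD_MIN else ""
    print(f"{'/'.join(resource)}: median wait {wait:.0f} min "
          f"over {len(samples)} request(s){flag}")
```

Using the median rather than the mean keeps a single extreme wait from masking or exaggerating a resource’s typical queue time.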
Flow #4 - Investigating Reservation Failures
This flow focuses on surfacing patterns behind failed reservations. Users can review failure causes, such as conflicts or unavailable configurations, to understand why test runs did not proceed as expected. These insights help both engineers and lab owners reduce friction, improve reliability, and optimize reservation success rates.
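A small sketch of surfacing those patterns, assuming a hypothetical failure log where each entry records a cause; counting and ranking the causes points to the biggest sources of friction first.

```python
from collections import Counter

# Hypothetical failure log; the causes mirror the examples above.
failures = [
    {"device": "sw-01", "cause": "reservation conflict"},
    {"device": "sw-01", "cause": "configuration unavailable"},
    {"device": "rt-07", "cause": "reservation conflict"},
]

# Rank causes so the most common sources of failed runs surface first.
for cause, count in Counter(f["cause"] for f in failures).most_common():
    print(f"{cause}: {count}")
```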
Delivering the Solution
High Fidelity Wireframes
These wireframes translated research insights and user flows into detailed, visually precise designs. They showcase the redesigned dashboards, navigation, and core interactions, illustrating how the platform presents actionable data and supports efficient decision-making.
Celebrating Recognition
Capturing Leader Feedback
Recently, I received positive feedback from leadership, who noted improved visibility into system behavior and decision-making after one year of working on this platform. This feedback confirmed the long-term impact of the design strategies I introduced to the project.