
AI Canvas & Geomap — Designing a Map Widget for Two AI Surfaces

Designed a map visualization that serves two fundamentally different contexts — an interactive AI Canvas widget and a concise MCP-delivered image for AI IDEs — while exploring 11 tile styles and authoring custom map tiles that expand the org’s geomap capabilities.
 
Role    Design Systems Lead
Timeline    Q2–Q3 2026 (in progress)
Team    Design Systems (me), 1 Product Manager, 1 Engineer
Key Outcome    Established a design pattern for components that serve both interactive and AI-agent contexts, while contributing custom map tiles usable across the entire product.

Problem

ThousandEyes is contributing widgets to a larger Cisco-wide AI Canvas — an interactive surface where AI-generated widgets answer user prompts about network monitoring. One of those widgets is a map: “What locations am I monitoring Salesforce from?” or “Which offices have Wi-Fi agents with issues?”

 

The design challenge had two layers. First, the map widget needed to work on two fundamentally different surfaces. The AI Canvas version is interactive — users hover for tooltips, click to drill down, and explore visually. The MCP version is delivered as a static image inside AI IDEs like Cursor or Claude, where users dig deeper through follow-up prompts rather than interacting with the visual. Same data, different interaction models.

 

Second, the existing geomap component had no dark mode tiles, no custom theming, and no precedent for being delivered outside the product UI. The tile work needed to happen in parallel with the widget design — and whatever I built would be available for any team using maps across the product.

Process

Discovery

I started from the PRD and immediately translated requirements into persona-based use cases. Samantha, a network engineer, needs to verify geographic coverage. Gene, a network administrator, needs to assess performance across regions. Aydin, a helpdesk engineer, needs to triage connectivity issues by location. Each persona shaped a different map interaction — discovery, assessment, and investigation — which kept the design grounded in real user intent rather than abstract feature lists.

Define

I scoped two parallel workstreams: the widget itself (node types, clustering, tooltips, drill-down hierarchy) and the map tiles (integrating MapTiler vector tiles with custom dark-mode styling). The tile work fed into the widget, but had its own value — any product area using TeGeomap could benefit from the expanded visual design.
 

Design

This is where the process gets interesting, because it wasn’t linear. I worked between a Vue prototype and Figma, looping back and forth depending on what I was solving. The prototype handled interactions, API-connected data, and things that needed to move — clustering behavior, zoom transitions, drill-down mechanics. Figma handled high-fidelity specs, tooltip templates, and widget templatization where precision mattered.

AI was a collaborator throughout, but not the decision-maker. For example, when defining drill-down behavior and node color logic, I reviewed multiple options AI generated — but the decision that a single-issue node should be red (not amber, not a warning state) came from human assumptions about how network engineers interpret severity at a glance. AI proposed; I decided.
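To make the clustering behavior concrete, here is a minimal grid-based sketch of the kind of logic the prototype explored — illustrative only, not the production TeGeomap implementation. Points that land in the same screen-space grid cell at a given zoom merge into one cluster; zooming in shrinks the cells’ geographic footprint, so clusters split apart.

```typescript
// Illustrative grid-based clustering sketch — NOT the production TeGeomap
// logic. Points in the same screen-space grid cell at a zoom level are
// merged into a single cluster positioned at their centroid.

interface AgentPoint {
  id: string;
  lng: number;
  lat: number;
}

interface Cluster {
  points: AgentPoint[];
  lng: number; // centroid longitude
  lat: number; // centroid latitude
}

// Project lng/lat to Web Mercator "world pixels" at a zoom (256px base tile).
function project(lng: number, lat: number, zoom: number): [number, number] {
  const scale = 256 * Math.pow(2, zoom);
  const sin = Math.sin((lat * Math.PI) / 180);
  const x = ((lng + 180) / 360) * scale;
  const y = (0.5 - Math.log((1 + sin) / (1 - sin)) / (4 * Math.PI)) * scale;
  return [x, y];
}

function clusterPoints(points: AgentPoint[], zoom: number, cellPx = 64): Cluster[] {
  const cells = new Map<string, AgentPoint[]>();
  for (const p of points) {
    const [x, y] = project(p.lng, p.lat, zoom);
    const key = `${Math.floor(x / cellPx)}:${Math.floor(y / cellPx)}`;
    let bucket = cells.get(key);
    if (!bucket) {
      bucket = [];
      cells.set(key, bucket);
    }
    bucket.push(p);
  }
  // One cluster per occupied cell, positioned at the member centroid.
  return [...cells.values()].map((pts) => ({
    points: pts,
    lng: pts.reduce((s, p) => s + p.lng, 0) / pts.length,
    lat: pts.reduce((s, p) => s + p.lat, 0) / pts.length,
  }));
}
```

At a continental zoom, San Francisco and Oakland agents merge into one cluster; a few zoom levels deeper, they separate — the transition the prototype had to make feel smooth.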

 

For the map tiles, I explored 11 styles — custom vector styles I authored from scratch alongside MapTiler defaults — to find the right visual surface for network data. I evaluated them across three tools: QGIS for geographic data, MapTiler Cloud for style configuration, and directly in Cursor with a prototype using MapLibre GL. Two custom styles emerged from this: CNV Dark (minimal ocean/land/borders, matching Cisco’s network visualization aesthetic) and TE Theme (a warm light palette designed for ThousandEyes product contexts). I selected CNV Dark for the AI Canvas — its minimal aesthetic keeps data-layer elements like agents, clusters, and health status in focus without visual noise. Both custom styles are available org-wide for any team using TeGeomap.
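For a sense of what a minimal dark style looks like in practice, here is a sketch in the MapLibre style-spec format, reduced to the three layers CNV Dark prioritizes (ocean, land, borders). The colors, layer names, and tile URL are illustrative placeholders, not the actual CNV Dark values.

```typescript
// Sketch of a minimal dark basemap in the MapLibre style-spec format.
// All values are placeholders — the real CNV Dark style lives in
// MapTiler Cloud with its own palette and source configuration.
const cnvDarkSketch = {
  version: 8,
  name: "cnv-dark-sketch",
  sources: {
    base: {
      type: "vector",
      // Placeholder — a real style points at a MapTiler tiles URL + API key.
      url: "https://example.com/tiles.json",
    },
  },
  layers: [
    // Ocean: a flat dark background with no texture.
    { id: "background", type: "background", paint: { "background-color": "#0b0e14" } },
    // Landmass: slightly lighter fill so coastlines read without labels.
    {
      id: "land",
      type: "fill",
      source: "base",
      "source-layer": "land",
      paint: { "fill-color": "#161b26" },
    },
    // Country borders: thin, low-contrast lines that never compete
    // with the data layer (agents, clusters, health status).
    {
      id: "borders",
      type: "line",
      source: "base",
      "source-layer": "boundary",
      paint: { "line-color": "#2a3142", "line-width": 0.5 },
    },
  ],
};
```

The design choice the sketch encodes is subtraction: the basemap contributes three low-contrast surfaces and nothing else, so every high-contrast pixel belongs to the data layer.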

Validate

I’m currently exploring two tooltip directions: one with detailed grouped data when nodes are clustered (which risks scroll fatigue), and one with truncated summaries. I’ve already consulted AI on the trade-offs, but I plan to run user testing to let human users tell me what information density they actually need.

My Role

  • Initiated the persona-based use case approach — translating PRD requirements into three distinct user scenarios that shaped the design
  • Made key design decisions by evaluating AI-generated options through human judgment — drill-down hierarchy, node color logic, clustering behavior
  • Designed across two tools for different purposes — prototype for interactions and API access, Figma for high-fidelity specs and templates
  • Explored 11 map tile styles across QGIS, MapTiler, and Cursor prototypes — authored two custom vector styles (CNV Dark and TE Theme) and selected CNV Dark for the AI Canvas, now available org-wide
  • Defined the MCP visualization format — sizing, typography, and scaling behavior for delivery inside AI IDEs
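One piece of that scaling behavior can be illustrated with a standard Web Mercator fit-bounds calculation — a hypothetical sketch, not the actual MCP format spec: given the geographic bounding box of the monitored locations and a fixed output size, choose the largest integer zoom at which the box still fits inside the image with some padding.

```typescript
// Hypothetical sketch of static-render scaling: pick the largest integer
// zoom at which a lat/lng bounding box fits a fixed image size
// (Web Mercator, 256px base tile). Function and field names are
// illustrative, not from the actual MCP spec.

interface Bounds {
  west: number;
  south: number;
  east: number;
  north: number;
}

// Web Mercator y as a fraction of world height (0 at the north pole side).
function mercatorY(lat: number): number {
  const sin = Math.sin((lat * Math.PI) / 180);
  return 0.5 - Math.log((1 + sin) / (1 - sin)) / (4 * Math.PI);
}

function zoomToFit(b: Bounds, widthPx: number, heightPx: number, padding = 24): number {
  const worldW = (b.east - b.west) / 360;                 // fraction of world width
  const worldH = mercatorY(b.south) - mercatorY(b.north); // fraction of world height
  const zx = Math.log2((widthPx - 2 * padding) / (256 * worldW));
  const zy = Math.log2((heightPx - 2 * padding) / (256 * worldH));
  return Math.max(0, Math.floor(Math.min(zx, zy)));
}
```

For example, a continental-US bounding box rendered into an 800×450 image resolves to zoom 4 — coarse enough that every agent location is visible at once, which matters when the user can’t pan or zoom and must drill down via follow-up prompts instead.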

Outcome & Impact

This project is in progress, but the work already enables several things that weren’t possible before:
 
  • A map widget that answers location-based monitoring questions in the AI Canvas, with three node types, clustering, and contextual tooltips
  • A concise MCP version that extends design system visualizations into AI IDEs — a new delivery surface the design system hadn’t served before
  • Custom dark-mode map tiles (CNV style via MapTiler + MapLibre GL) available for any product team using TeGeomap, expanding map visual design across the org
  • A reusable pattern for designing components that work in both interactive and AI-agent contexts

Reflection

What I learned: The most interesting part of this project is the design process itself. AI is genuinely useful as a collaborator — generating options, exploring edge cases, proposing interaction patterns. But every decision that matters still comes down to human judgment: what users will expect, what severity means at a glance, what information density is useful versus overwhelming. The process is not automated. It’s a conversation.
 

What I’d do differently: I would have started the tile exploration earlier. The custom dark tiles ended up being a dependency for the Figma specs, and running the two workstreams more sequentially (tiles first, then widget) would have reduced context-switching.

 
How this informs future work: Designing for AI-agent delivery (MCP) is a new surface that requires its own constraints — sizing, information density, interaction model. As more design system components get delivered through MCPs, this project establishes the first pattern for how to think about that.