Turning AI Outputs into Actionable Insights

Prototyped an analytics tool that helps product teams review AI outputs at scale and surfaces performance patterns with actionable fixes.

Motivation

As a PM working on AI products, I repeatedly ran into the same bottleneck: diagnosing poor outputs. Every week my team and I spent hours manually reviewing hundreds of cases, trying to understand whether failures came from missing internal knowledge, insufficient guardrails in our prompts that allowed hallucination, or edge-case inputs that broke downstream logic.

The process was inefficient and rarely revealed clear next steps. This motivated me to prototype an analytics tool that could review AI outputs at scale, cluster failure patterns, and recommend targeted fixes.

Process

Mapping The Workflow

I mapped the exact steps of my weekly case review: connecting to the data source, defining the judgment criteria, scanning through outputs to diagnose vague or wrong responses, categorizing failure reasons, and deciding what to change. That workflow became the backbone of the tool.
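As a minimal sketch, the workflow can be written down as ordered stages; the `ReviewStage` names below are my own labels for illustration, not identifiers from the prototype.

```python
from enum import Enum, auto

class ReviewStage(Enum):
    """Ordered stages of the weekly case-review workflow the tool automates."""
    CONNECT_DATA_SOURCE = auto()   # pull prompts, outputs, and expected outcomes
    DEFINE_CRITERIA = auto()       # set judgment criteria and custom rules
    SCAN_OUTPUTS = auto()          # flag vague or wrong responses
    CATEGORIZE_FAILURES = auto()   # tag each failure with a reason
    DECIDE_FIXES = auto()          # choose prompt edits, data additions, etc.

# The tool walks every batch of cases through these stages in order.
WORKFLOW = list(ReviewStage)
```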

Defining Product Requirements

The biggest bottleneck was manually scanning every case. I scoped the tool to prioritize surfacing failures that were recurring and high-impact, allowing teams to focus on the most valuable problems instead of reviewing every example.
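One simple way to surface recurring, high-impact failures first is a frequency-times-severity score per failure tag. The tags and severity weights below are illustrative assumptions, not the prototype's actual taxonomy.

```python
from collections import Counter

# Illustrative severity weights per failure tag (assumed, not the real taxonomy).
SEVERITY = {"hallucination": 3, "missing_knowledge": 2, "edge_case_input": 1}

def prioritize(failure_tags: list[str]) -> list[tuple[str, int]]:
    """Rank failure tags by how often they recur, weighted by impact."""
    counts = Counter(failure_tags)
    scored = {tag: counts[tag] * SEVERITY.get(tag, 1) for tag in counts}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Example: hallucination recurs most and is weighted heaviest, so it surfaces first.
print(prioritize(["hallucination", "hallucination", "edge_case_input", "missing_knowledge"]))
```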

Building The Tool

I then built a working prototype based on the defined requirements to automate my workflow:

  • Collected prompts, model outputs, and expected outcomes into a centralized record with templates for evaluation criteria and custom rules (see the sketch after this list).
  • Clustered similar cases using embeddings and automatic evaluators to highlight the most representative failures.
  • Extended human judgments across the dataset to reduce redundant reviews.
  • Aggregated results into dashboards that grouped failure modes and suggested targeted fixes like refining prompts or adding golden datasets.
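A rough sketch of what the centralized record and a rule-style evaluation template could look like; the field names and the exact-match rule are assumptions for illustration, not the prototype's actual schema.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ReviewCase:
    """One reviewed example: the prompt, what the model said, and what we expected."""
    case_id: str
    prompt: str
    model_output: str
    expected_outcome: str
    failure_tag: Optional[str] = None      # filled in by a human or an evaluator
    human_verdict: Optional[bool] = None   # True = pass, False = fail

@dataclass
class EvalRule:
    """An evaluation 'template' is just a named rule applied to every case."""
    name: str
    check: Callable[[ReviewCase], bool]

# Example custom rule: the output must contain the expected outcome verbatim.
contains_expected = EvalRule(
    name="contains_expected_answer",
    check=lambda case: case.expected_outcome.lower() in case.model_output.lower(),
)

def run_rules(cases: list[ReviewCase], rules: list[EvalRule]) -> dict[str, list[str]]:
    """Return, per rule, the IDs of the cases that fail it."""
    return {
        rule.name: [c.case_id for c in cases if not rule.check(c)]
        for rule in rules
    }
```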

Failure Pattern Clustering

Automated clustering of similar cases with the same failure pattern
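To illustrate the idea, here is a sketch of greedy cosine-similarity clustering over precomputed case embeddings, which also picks the case nearest each cluster centroid as its representative. The threshold and the greedy strategy are assumptions for this sketch; the prototype's actual clustering may differ.

```python
import numpy as np

def cluster_failures(embeddings: np.ndarray, threshold: float = 0.8):
    """Greedy cosine clustering: each case joins the first cluster whose centroid
    it matches above `threshold`, otherwise it starts a new cluster.
    Returns (cluster label per case, representative case index per cluster)."""
    # Normalize rows so a dot product equals cosine similarity.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    centroids: list[np.ndarray] = []
    members: list[list[int]] = []
    labels = np.empty(len(X), dtype=int)

    for i, vec in enumerate(X):
        if centroids:
            sims = np.array([vec @ c for c in centroids])
            best = int(sims.argmax())
            if sims[best] >= threshold:
                labels[i] = best
                members[best].append(i)
                # Update and renormalize the cluster centroid.
                centroids[best] = X[members[best]].mean(axis=0)
                centroids[best] /= np.linalg.norm(centroids[best])
                continue
        labels[i] = len(centroids)
        centroids.append(vec)
        members.append([i])

    # The most representative failure in each cluster is the case nearest its centroid.
    representatives = [
        idx[int(np.argmax([X[j] @ centroids[k] for j in idx]))]
        for k, idx in enumerate(members)
    ]
    return labels, representatives
```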

Prompt Suggestion System

LLM-powered recommendations for prompt improvements and optimizations
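A sketch of how a failure cluster can be turned into a recommendation request. The `llm` callable stands in for whichever model API the tool calls, and the prompt wording is illustrative, not the prototype's actual template.

```python
from typing import Callable

def suggest_prompt_fixes(
    system_prompt: str,
    failure_examples: list[dict],   # each: {"prompt": ..., "output": ..., "expected": ...}
    failure_tag: str,
    llm: Callable[[str], str],      # stand-in for the LLM API used by the tool
) -> str:
    """Ask an LLM to recommend concrete edits to the system prompt for one failure cluster."""
    examples = "\n\n".join(
        f"User prompt: {ex['prompt']}\nModel output: {ex['output']}\nExpected: {ex['expected']}"
        for ex in failure_examples[:5]   # a few representative cases are enough
    )
    request = (
        f"The following cases share the failure pattern '{failure_tag}'.\n\n"
        f"Current system prompt:\n{system_prompt}\n\n"
        f"Representative failures:\n{examples}\n\n"
        "Explain why these outputs fail and propose specific edits to the system prompt "
        "(added guardrails, clarified instructions, or few-shot examples) to prevent them."
    )
    return llm(request)
```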

Outcome

Once I had a working prototype, I shared it with peers working on AI products. What resonated most with them was the narrative layer. Instead of producing raw scores or pass/fail labels, the tool explained why outputs failed and pointed to concrete next steps.

By turning debugging into a more structured process, the tool helped me and my peers move efficiently from vague “bad outputs” to clear and actionable fixes.