Project Story

Unravel — AI research teammate for product teams

Status: Live pilots

AI voice interview agents that turn qualitative research into decisions teams can actually execute.

Context

For most product teams, talking to users is either too slow, because research becomes a bottleneck, or too shallow, because surveys miss nuance. The real pain is not just collecting feedback; it is turning messy human stories into clear decisions.

That pain gets worse when stakeholders challenge conclusions and teams cannot quickly trace insights back to source evidence. Without that traceability, good research often dies in decision meetings.

Intent

Unravel started as a self-serve qualitative research platform: configure an AI interview agent, deploy interviews through a shareable link, collect everything in a portal, and generate interview summaries quickly.

From there, the direction expanded toward a teammate model: help teams decide what to research, shape a better guide, run interviews, and synthesize insights with quote-level grounding so decisions are defensible.

Build

We built the full flow end-to-end: onboarding, agent setup, interview deployment, transcript and summary output, and human-in-the-loop synthesis to protect decision-making quality. The product had to be fast in execution but strict in evidence handling.
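As a rough illustration of what "strict in evidence handling" implies, here is a minimal sketch of quote-level grounding as a data shape. The names and fields (`Quote`, `Insight`, `transcript_id`) are hypothetical, not Unravel's actual schema: the point is only that every synthesized insight carries pointers back to the verbatim quotes and transcripts that support it.

```python
from dataclasses import dataclass, field

@dataclass
class Quote:
    transcript_id: str   # which interview this excerpt came from
    start_s: float       # timestamp in the recording, in seconds
    text: str            # verbatim excerpt from the transcript

@dataclass
class Insight:
    summary: str
    evidence: list[Quote] = field(default_factory=list)

    def is_grounded(self) -> bool:
        # An insight with no supporting quotes cannot be defended
        # when stakeholders challenge it.
        return len(self.evidence) > 0

insight = Insight(
    summary="Onboarding friction drives early drop-off",
    evidence=[Quote("t-014", 132.0, "I gave up on setup after ten minutes")],
)
```

With a structure like this, "insight to original quote in seconds" is just a lookup along the evidence list rather than a manual search through raw transcripts.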

We iterated through pilots across different domains, including product research and a corporate internal-audit setting at Hines. Each loop improved setup clarity, synthesis depth, and confidence in outputs.

Outcome

The strongest result was not raw automation speed; it was confidence. Teams moved faster when they could go from an insight to the original quote in seconds and defend decisions without hand-waving.

That changed the product thesis: the real advantage is speed plus depth plus evidence trails. We are now refining where this confidence advantage matters most.

Lessons

Building research leverage is as much a product-design challenge as a model challenge. Trust, explainability, and frictionless setup determine whether teams use outputs in real decisions.

The core differentiator is owning the end-to-end learning loop, not a single feature. When the loop is coherent, teams learn faster and decide with higher confidence.