Context
In healthcare process improvement work, surveys were fast but often too shallow. Important root causes stayed hidden because static forms could not ask clarifying follow-up questions.
Stakeholders needed richer qualitative signal without increasing interview overhead for teams.
Intent
I aimed to prototype a chatbot that could run adaptive interviews, probe unclear answers, and produce structured insights useful for decision-making.
The key idea was to preserve conversation depth while keeping the throughput benefits of digital data collection.
Build
I built a conversational prototype that generated context-aware follow-up questions, captured structured outputs, and supported synthesis across multiple responses.
The system was tested with Dutch general practitioner (GP) stakeholders to evaluate whether depth and relevance improved compared to fixed survey formats.
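The follow-up logic described above can be sketched in miniature. This is an illustrative, rule-based stand-in for the actual model-driven probing: the names (`Turn`, `needs_probe`, `follow_up`, `VAGUE_MARKERS`) and the vagueness heuristic are assumptions, not the prototype's real implementation.

```python
# Illustrative sketch of adaptive follow-up logic.
# A simple heuristic (short or hedged answers) stands in for the
# real context-aware question generation; all names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

# Hedging phrases that suggest an answer needs probing (illustrative set).
VAGUE_MARKERS = {"maybe", "sometimes", "it depends", "not sure", "kind of"}

@dataclass
class Turn:
    """One question-answer exchange, with any probes asked afterwards."""
    question: str
    answer: str
    probes: List[str] = field(default_factory=list)

def needs_probe(answer: str) -> bool:
    """Flag answers that are very short or hedged as follow-up candidates."""
    text = answer.lower()
    return len(text.split()) < 8 or any(m in text for m in VAGUE_MARKERS)

def follow_up(turn: Turn) -> Optional[str]:
    """Generate one clarifying probe for a vague answer, else None."""
    if not needs_probe(turn.answer):
        return None
    probe = f"Could you give a concrete example of what you mean by: '{turn.answer}'?"
    turn.probes.append(probe)
    return probe
```

For example, an answer like "It depends" triggers a probe, while a detailed, specific answer passes through without one; a structured record of probes per turn is what later makes the synthesis step auditable.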
Outcome
The prototype validated the core hypothesis: adaptive dialogue produced more actionable qualitative detail than static questions alone.
It also highlighted quality requirements around interview pacing, prompt control, and synthesis transparency.
Lessons
Conversation quality is product quality in research tools. Better follow-up logic is only valuable when outputs remain interpretable and decision-ready.
Trust grows when stakeholders can inspect why a conclusion was generated, not just read a summary sentence; conclusions need a visible trail back to the answers that produced them.
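One way to make that inspectability concrete is to attach source evidence to every synthesized claim. A minimal sketch, assuming a hypothetical `Finding`/`Evidence` structure (not the prototype's actual data model), where each conclusion carries the quotes it was derived from:

```python
# Illustrative sketch: evidence-linked findings for synthesis transparency.
# The Finding/Evidence names and trace() format are assumptions for
# illustration, not the prototype's real schema.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Evidence:
    """A source quote tied back to a specific respondent."""
    respondent: str
    quote: str

@dataclass
class Finding:
    """A synthesized claim plus the interview evidence supporting it."""
    claim: str
    evidence: List[Evidence]

    def trace(self) -> str:
        """Render the claim with its supporting quotes for stakeholder review."""
        lines = [self.claim]
        lines += [f'  - {e.respondent}: "{e.quote}"' for e in self.evidence]
        return "\n".join(lines)
```

A stakeholder reviewing `finding.trace()` sees not just the summary sentence but the respondents and quotes behind it, which is the kind of synthesis transparency the prototype surfaced as a quality requirement.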