Rethinking How Data Workers Revisit Analytical Conversations and Communicate Insights
Tableau Research presents SyncSense, a structured interface that helps data workers navigate, understand, and communicate complex analytical conversations powered by AI. This research was conducted by Ken Gu during his internship at Tableau Research, in collaboration with Vidya Setlur and Srishti Palani. The work is being presented at the ACM CHI Conference on Human Factors in Computing Systems, the premier venue for research in human-computer interaction, where it received an Honorable Mention Best Paper award.
In today’s AI-driven analytics landscape, conversational tools like ChatGPT are transforming how people explore data. These systems lower barriers to analysis, enabling users to express complex questions in natural language and iterate rapidly. But while they make analysis more accessible, they also introduce a new challenge: how do you make sense of long, messy analytical conversations after the fact? Data workers often find themselves scrolling endlessly through transcripts, trying to reconstruct their reasoning or retrieve a specific result, thinking: “I need to find that one chart.”
From linear chats to structured understanding
The challenge of revisiting analytical conversations reflects a deeper mismatch between how analytical work unfolds and how conversational systems represent it. While conversational interfaces capture the full history of an analysis, they do so as a linear sequence of turns, obscuring the structure, branching, and iteration that characterize real-world workflows.
In practice, data analysis is iterative, nonlinear, and full of branching ideas, refinements, and dead ends. Yet conversational tools flatten these dynamics into a single continuous scroll, making it difficult to recover context, follow analytical threads, or identify key outcomes.
SyncSense is motivated by this gap. Rather than treating conversations as unstructured logs, it introduces layered structure over existing transcripts to support revisitation and interpretation, helping users reorient themselves, navigate related lines of inquiry, and extract insights for communication.
What is SyncSense?
SyncSense is a research prototype and design probe that reimagines how data workers interact with AI-driven analytical conversations. Rather than treating conversations as flat, linear text, SyncSense structures them into interpretable components: threads (grouped analytical goals), speech acts (the intent behind queries), artifacts (charts, tables, and code), and insights (key findings). These elements are surfaced through an interactive, multi-panel interface that allows users to move seamlessly between high-level overviews and detailed conversational context, making it easier to reconstruct and communicate analytical workflows.
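To make the layered representation concrete, here is a minimal sketch of how such a structure might be modeled in code. The class names, fields, and speech-act labels are illustrative assumptions, not the paper's actual schema: the key idea is that threads, speech acts, artifacts, and insights are layered over the transcript by referencing its turns, leaving the raw conversation intact.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

# Hypothetical speech-act labels; SyncSense's actual taxonomy may differ.
class SpeechAct(Enum):
    ASK = "ask"          # pose a new analytical question
    REFINE = "refine"    # adjust a previous query
    CLARIFY = "clarify"  # request explanation of a result

@dataclass
class Artifact:
    kind: str            # e.g. "chart", "table", "code"
    title: str

@dataclass
class Turn:
    index: int           # position in the original transcript
    speaker: str         # "user" or "assistant"
    text: str
    act: Optional[SpeechAct] = None
    artifacts: list = field(default_factory=list)

@dataclass
class Thread:
    goal: str            # the grouped analytical goal
    turn_indices: list   # references into the preserved transcript
    insights: list = field(default_factory=list)

# The structure layers over the transcript rather than replacing it:
# threads, acts, artifacts, and insights all point back to raw turns.
turns = [
    Turn(0, "user", "Plot monthly sales by region", SpeechAct.ASK),
    Turn(1, "assistant", "Here is the chart.",
         artifacts=[Artifact("chart", "Monthly sales by region")]),
    Turn(2, "user", "Filter to 2024 only", SpeechAct.REFINE),
]
sales_thread = Thread(
    goal="Understand regional sales trends",
    turn_indices=[0, 1, 2],
    insights=["West region drives most 2024 growth"],
)
```

Because every element references transcript positions rather than copying text, an interface built on this model can move between the high-level overview (threads and insights) and the full conversational context on demand.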
Figure 1: SyncSense surfaces structure in analytical conversations through multiple coordinated views, helping users navigate and interpret complex analyses.
At its core, SyncSense is grounded in a critical insight: revisiting analysis and communicating it are deeply intertwined activities. When analysts return to prior work, they are often preparing reports, explaining results to stakeholders, or reconstructing their reasoning. To support these needs, SyncSense introduces three key design principles. First, it layers structured representations over raw conversations, preserving the original transcript while making intent and outcomes easier to interpret. Second, it enables overview and detail on demand, allowing users to zoom out for context or drill down into specific steps. Third, it supports modular composition, letting users extract and recombine parts of a conversation into summaries tailored to different audiences.
These ideas come together in SyncSense’s three-panel interface: an Outline panel for a high-level map of threads, insights, and artifacts; a Detail Explorer for structured, mid-level navigation of analytical flow; and an Annotated Conversation view that preserves the full chat with added context (Figure 1).
Beyond navigation, SyncSense also supports a powerful summary authoring workflow, where users can drag and reorganize conversation elements, add narrative context, and refine outputs with AI assistance (Figure 2). Crucially, this process keeps users in control, positioning AI as a collaborator that enhances clarity and communication, rather than replacing human judgment.
Figure 2: Users can compose summaries by extracting and reorganizing conversation elements, with AI assisting in refining tone and structure.
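The modular composition idea behind this workflow can be sketched in a few lines. The dictionaries, function names, and the stubbed AI-refinement step below are assumptions for illustration, not SyncSense's implementation: the point is that the user selects and orders conversation elements and interleaves their own narrative, with AI refinement as an explicit, user-invoked step.

```python
def compose_summary(elements, narrative_by_position):
    """Assemble a summary from user-ordered elements plus user narrative.

    `elements` are hypothetical dicts standing in for extracted insights
    and artifacts; `narrative_by_position` maps element positions to
    user-authored context that precedes them.
    """
    lines = []
    for i, el in enumerate(elements):
        if i in narrative_by_position:
            lines.append(narrative_by_position[i])
        if el["type"] == "insight":
            lines.append(f"Finding: {el['text']}")
        elif el["type"] == "artifact":
            lines.append(f"[{el['kind'].title()}] {el['title']}")
    return "\n".join(lines)

def refine_with_ai(draft, instruction):
    # Placeholder for an LLM call (e.g. adapting tone for an audience).
    # Kept as a separate, explicit step so the user reviews the result
    # before accepting it, rather than receiving a fully automated output.
    return draft  # no-op stand-in

selected = [
    {"type": "insight", "text": "West region drives 2024 growth"},
    {"type": "artifact", "kind": "chart", "title": "Monthly sales by region"},
]
draft = compose_summary(selected, {0: "Q1 review for the sales team:"})
```

Keeping composition and refinement as separate functions mirrors the design principle above: structure comes from the user's selections, and the AI only polishes a draft the user has already shaped.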
What we learned from data workers
In evaluating SyncSense with 10 experienced analysts, we found that revisiting analytical conversations is an active sensemaking process shaped by three core needs: reorienting within the conversation; recalling specific charts, results, or steps; and prioritizing what information truly matters. Participants navigated using a combination of strategies, often starting with sequential reading to rebuild context, relying on visual artifacts as memory anchors, and moving fluidly between high-level summaries and detailed views. While they appreciated AI support for refining tone, improving clarity, and adapting summaries for different audiences, they resisted fully automated outputs and were wary of losing control over narrative structure. Overall, the findings reinforce that AI is most effective when it assists rather than replaces analytical storytelling.
Why this matters
As conversational AI becomes a primary interface for data analysis, a new challenge emerges: the bottleneck is no longer generating insights but making sense of them afterward. SyncSense highlights a critical shift in how we think about AI-powered analytics: from generating answers to organizing reasoning, from chatting with AI to navigating analytical memory, and from isolated findings to clear, communicable narratives.
Beyond being a tool, SyncSense also serves as a research probe, revealing how people interact with AI-generated analyses and where current systems fall short. It points toward future directions such as richer structuring of conversations, stronger support for collaboration, and more transparent, controllable AI-assisted summarization. Ultimately, SyncSense suggests that the future of analytics isn't just about more powerful LLMs; it's about designing better interfaces for human understanding. As analytical conversations grow in scale and complexity, tools must help users revisit their thinking, reconstruct their reasoning, and communicate their insights effectively.
Interested in exploring our work in human-AI data analysis? Visit the Tableau Research website to check out the SyncSense paper and our latest work on LLM benchmarking, both of which will be presented at the ACM CHI Conference (April 13–17, 2026).
