At Tableau, a key cultural value is to “eat our own dogfood”. Over the last several years, we’ve had an ongoing internal project (“Project ALPO”) focused on connecting Tableau to our own operational data, ranging from Salesforce CRM data to Fogbugz defect data to data from our Cisco router. Not only has this given us the typical benefits of improving the team’s familiarity with the product, getting usability feedback, and finding bugs, but it has also let us reap the benefits of analyzing our business’s core metrics.

As a developer, usability fanatic, and Tableau co-founder, I’ve been fascinated by several aspects of Project ALPO and our use of our own products. One of the most interesting has been watching how each employee’s perspective on data, and its utility to their day-to-day job, changes as they use the product. Two findings stand out with most new users:

1. People don’t know what questions to ask of their data. Most tools for asking questions of data are difficult to use and require a heavy investment for each new question. As a result, people rarely venture to ask new questions and invest little time thinking about what they would ask if they could. People stumble a bit with the possibilities: the blank palette can be intimidating at first. But then they start asking simple questions (“How many bugs do I have open?”). When the answers come easily, they venture into more sophisticated questions, evolving into rapid-fire Q&A sessions with their data (“How rapidly are bugs verified and closed after being resolved?”, “What percentage of revenue this quarter is due to new business compared to previous quarters?”). It is an exciting process to witness.

2. Visual analytics? I’ll take a text table, thanks. The transition to visual analytics is incremental. Tableau is really two tools in one: (a) an easy-to-use tool for Q&A with your data and (b) a data visualization tool. Most people start with the former and incrementally venture into the latter. They begin by simply recreating the reports they can already generate, gaining the benefits of quicker and easier answers. But almost every user slowly ventures into visual analytics. One day they take their traditional text table and drop a metric on color, highlighting the anomalies. A week later, they tweak the table into a tabular bar chart, communicating the same data as before, but new information starts to jump out. Then they create a new sheet with a whole new view of their data, and soon they are experts in the process of visual analytics, seeing their data like never before.

Over the next several weeks, I’ll author several blog posts outlining how we’ve connected Tableau to numerous common data sources. For this post, I’m focusing on a data source that I care deeply about: our defect-tracking system, Fogbugz. I’ll follow up soon with a post about Salesforce.com and our evolving use of visualization in our analytics process.

Visual Analytics for Fogbugz

Fogbugz is a defect management system built on some of the same principles we care about at Tableau: usability and simplicity. Our development group started using it about four years ago, when we matured from a couple of guys in a basement with a passion for data into a real software company with thousands of users. Fogbugz stores its data in a fairly intuitive (and well documented!) schema with clearly named primary and foreign keys, in any of several databases all supported by Tableau: MySQL, Microsoft Access, or SQL Server. This made it an easy initial target for Project ALPO.

Connecting Tableau to Fogbugz is simple: connect to the database as an appropriate user and model the core schema. I focused on just looking at defects and their evolution over time, so I modeled this schema within Tableau:

FogBugz Data Schema
Figure 1: Fogbugz schema for the core defect tracking tables (from the Fogbugz Support forum)

After getting connected, the interesting part started: What questions to ask? We often have multiple releases being developed simultaneously – partly due to an expanding product line and partly due to our partnership with Oracle-Hyperion. So, I started with the obvious question: What bugs are open against which release and assigned to whom?

data visualization of bugs by release
Figure 2: The number of open (Active) bugs broken down by release, to whom the bug is assigned, and the bug priority. The developers are sorted in descending order by the number of bugs they have active.

A simple question and a simple answer – but immediately useful. I could pop this open each morning (as could the developers) and get an immediate sense of where we stood. By the way, I randomized and anonymized the data (can you guess the theme for the people’s names?).

Once I asked this obvious question, I have to admit I got stuck for a while. What other questions should or could one ask of a defect management system? I didn’t want to fall into the trap of monitoring useless metrics or using them to evaluate our QA or development team (that can only lead to bad behavior). But I also had the intuition that the data had stories to tell that would help me and the team do our jobs better.

A useful question for me (and typically useful for all data) was: What is changing? For this data, that meant “What new bugs are being filed?”, “What bugs are being closed?”, “What resolutions are most bugs getting?”, or “Who is closing and opening bugs?” That led to a number of views. The following is one nice example of seeing the daily change in bugs:

visual analysis of changes to bugs database
Figure 3: A view into the day-to-day change in defects showing what bugs are being opened and closed and by whom. Different hues indicate different roles within the development team: shades of orange for members of the QA team and shades of green for developers.
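The “what is changing?” questions behind Figure 3 reduce to grouping bugs by the day they were opened or closed. A minimal sketch, again using an in-memory SQLite stand-in; Fogbugz records open and close timestamps on the Bug table (the `dtOpened`/`dtClosed` names follow its schema docs, though the sample rows here are invented):

```python
import sqlite3

# Hypothetical slice of the Bug table: just the open/close timestamps.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Bug (ixBug INTEGER PRIMARY KEY, dtOpened TEXT, dtClosed TEXT);
INSERT INTO Bug VALUES
    (1, '2008-06-01', '2008-06-02'),
    (2, '2008-06-01', NULL),
    (3, '2008-06-02', NULL);
""")

# One row per day: how many bugs were opened, and how many were closed.
opened = dict(conn.execute(
    "SELECT date(dtOpened), COUNT(*) FROM Bug GROUP BY date(dtOpened)"))
closed = dict(conn.execute(
    "SELECT date(dtClosed), COUNT(*) FROM Bug "
    "WHERE dtClosed IS NOT NULL GROUP BY date(dtClosed)"))
```

Plotting those two daily series against each other (colored by the role of the person doing the opening or closing) gives a view like Figure 3.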

Tableau as a Fogbugz Interface

In many of the views we started authoring, a single mark corresponded to a single bug. That meant we could use Tableau’s ability to associate data-driven hyperlinks with marks to jump directly from outliers on a graph to the bug entry in Fogbugz. Tableau started to be about more than asking questions about the data – it was becoming our interface for the data!
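A data-driven hyperlink in Tableau is just a URL template with field placeholders that get filled in per mark, so the equivalent logic is plain string substitution. In this sketch, both the host name and the `default.asp?<ixBug>` URL pattern are assumptions; check your own Fogbugz install for the exact form of its bug URLs.

```python
# Hypothetical URL template; fogbugz.example.com and the default.asp
# pattern are placeholders, not a documented endpoint.
URL_TEMPLATE = "http://fogbugz.example.com/default.asp?{ixBug}"

def bug_link(ix_bug):
    """Build the hyperlink a single mark (one bug) would carry."""
    return URL_TEMPLATE.format(ixBug=ix_bug)

link = bug_link(12345)
```

Because each mark maps to exactly one bug, following the mark's link lands you on that bug's page, which is what turns the view into an interface rather than just a report.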

A nice example is the next view: Our process for bug resolution was for the bug to be resolved back to the person who opened it, they would verify the fix, and then the bug would be assigned to QA to be verified again and closed. But despite email notifications, people often didn’t notice when a bug was resolved back to them and bugs would sit resolved but not verified for quite a while.

Then someone on the team authored and shared the following view. It let us quickly see what resolved bugs were assigned to the team, how they were resolved, and how long they had been sitting waiting for verification. Even better, one could simply right-click on any mark and jump to the bug in Fogbugz to start the verification process:

visual analytic of bugs' resolution status

Figure 4: A view of resolved bugs assigned to the person who opened them and awaiting verification. A quick glance shows bugs that have been sitting idle a long time – and which might not be correctly fixed. The color shows the resolution (duplicate, fixed, not reproducible, etc.). Users can right-click on a mark to jump directly to the corresponding bug in Fogbugz.

Time-based behavior

Soon, we became very interested in the time-based behavior of defects during a release as we experimented with different approaches to software development and release management. We wanted to graph the number of open bugs for any day during the release and see how events like stabilization periods were influencing the bugs.

Unfortunately, Fogbugz doesn’t keep historical data, but it was easy to add. We authored a simple stored procedure that ran nightly on our SQL Server instance collecting summary statistics. We were then able to construct the following type of view for each release:

data analysis of time-based behavior
Figure 5: A view of open bugs over time. Views like this are used to compare the different software-development approaches the team has tried, such as planned stabilizations and phased deliverables. Reference lines indicate key milestones. The view shows a less-than-ideal sprint release where all of the bugs accumulate until near the end of the release.
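The nightly summary job can be sketched like this. The real version was a T-SQL stored procedure on SQL Server; treat this SQLite/Python version, with its hypothetical `BugHistory` table and column names, as an illustration of the idea rather than our actual code:

```python
import sqlite3
from datetime import date

# Stand-in database: a simplified Bug table plus a history table the
# nightly job appends to (both hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Bug (ixBug INTEGER PRIMARY KEY, ixFixFor INTEGER, sStatus TEXT);
CREATE TABLE BugHistory (dtSnapshot TEXT, ixFixFor INTEGER, nOpen INTEGER);
INSERT INTO Bug VALUES (1, 1, 'Active'), (2, 1, 'Active'), (3, 2, 'Resolved');
""")

def snapshot_open_bugs(conn, snapshot_day):
    """Append today's open-bug count per release to the history table."""
    conn.execute("""
        INSERT INTO BugHistory (dtSnapshot, ixFixFor, nOpen)
        SELECT ?, ixFixFor, COUNT(*)
        FROM Bug WHERE sStatus = 'Active'
        GROUP BY ixFixFor
    """, (snapshot_day.isoformat(),))
    conn.commit()

snapshot_open_bugs(conn, date(2008, 6, 1))
history = conn.execute("SELECT * FROM BugHistory").fetchall()
```

Once the history table has a row per release per night, the Figure 5 trend is just open-bug count plotted against snapshot date, with reference lines at the milestones.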

The above was a really quick peek at our evolving use of Tableau to understand and question our Fogbugz data. The team is creating new views every day to answer a wide range of useful questions. I’ll finish up with a screenshot of a dashboard most of the team uses to check the state of any release.

Any questions about how we’ve connected Tableau to Fogbugz? Please don’t hesitate to send me an email at (chris at tableausoftware.com).

dashboard of overall FogBugz data visualization
Figure 6: A dashboard one of our development leads authored to provide multiple perspectives on the state of a release. It shows the recently opened bugs, graphical views of the short- and long-term history of the release, and the current bug count by person.
