Clinical Trial Dashboard Redesign

Organization
Product Redesigned: IBM Clinical Development, an electronic data capture (EDC) system used for clinical trial management
Research Methods: Open-text survey, literature review, design thinking (diverging & converging)
Skills: Domain & context-of-use analysis, What-Why-How framework, information visualization, storytelling, prototyping
Tools Used: Tableau, Microsoft Excel, pen & paper
Role: UX Researcher, Designer
Time Period: Jan 2020 – May 2020

PROBLEM STATEMENT: The drug development process is highly complex and expensive. The Electronic Data Capture (EDC) system is a database that enables patient data collection, data query management, and source data verification. Dashboard visualizations have become increasingly popular and are widely used to summarize and track essential information from such EDCs. However, many dashboards are poorly designed due to factors such as a lack of useful metrics, limited drill-down capability, and unappealing visual design. Dashboards for EDC systems therefore present a compelling opportunity for redesign.

OBJECTIVE: The goal of this project was to develop a comprehensive solution that enhances the visibility of operational metrics and supports organizational strategy in the multifaceted domain of clinical trials. We redesigned the IBM Clinical Development dashboard to increase the visibility of operational metrics and help clinical trial professionals make informed decisions.

Results

Tableau Dashboard

Prototype Link: Click here

Original Dashboard

Research Method

1. Domain & Task Analysis

Users: There are many users within the clinical trial domain, including clinical site staff, clinical research organization (CRO) staff, sponsors, and stakeholders. For this project, I chose to focus on CRO staff: research professionals responsible for ensuring timely data entry and data integrity before exporting data for statistical analysis and reporting to the FDA.

  • Project Managers: Identify global trends, risks, and issues; summarize ongoing project status for executive management
  • Site Managers: Compare site performance metrics and identify unfavorable trends to guide resource allocation
  • Field Monitors, or Clinical Research Associates (CRAs): Inspect and prioritize data pending source data verification; write up visit reports
  • Data Managers: Determine areas for data cleaning via query metrics; ensure data readiness prior to database lock

Context-of-use Analysis: I also conducted a context-of-use analysis to further understand the features of the work domain and the anticipated use cases of the product.

2. Literature Review

A literature review was conducted on current visualization strategies employed for clinical trial dashboards. 

  • Donut charts: Commonly used for part-to-whole relationships
  • Bar charts: Mostly used to visualize enrollment
    • Stacked bar glyphs
    • Layered bar graphs
  • A range of operational metrics are displayed
    • Missing eCRF pages, enrollment status (Lodha, 2017)
    • Enrollment progress, visit status, query status
  • Gaps identified
    • Dashboards in the literature were developed for different user groups
    • Operational metrics are used inconsistently across the literature
    • No dashboard or visualization provides a comprehensive overview of trial status

3. User Research

Because the literature showed many inconsistencies and a wide range of operational metrics, I conducted brief user research to further narrow down the metrics to use. I did this by administering an open-text survey via e-mail to three IBM Clinical Development users (N = 3).

Based on the user research findings, I identified three main opportunities for redesign:

  1. Include additional critical variables that users perceived as essential but that were not previously visualized
  2. Clarify subject status and show exactly how many subjects were randomized vs. enrolled
  3. Rearrange enrollment bar graph to enable more efficient site comparisons

These findings were subsequently incorporated into a What-Why-How analysis.

4. What-Why-How Analysis

The What-Why-How framework (Munzner, 2014) was applied to clarify the visual design goals. By removing domain jargon and translating research goals into domain-independent language, I was able to draw inspiration and identify alternative approaches for visualizing the new dashboard.

My user research revealed two major steps the visualization should support. 

Step 1: Produce and derive quantitative attributes, which are the key performance metrics. I identified five KPIs, plus additional categorical attributes that were not available in the previous visualization.
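
As an illustration of the derivation in Step 1, the sketch below computes two such KPIs in pandas. The table and its column names (screened, enrolled, screen_failed, queries_open) are hypothetical stand-ins for an EDC export, not the actual IBM Clinical Development schema:

```python
import pandas as pd

# Hypothetical per-site extract from the EDC; real export fields will differ.
sites = pd.DataFrame({
    "site_id":       ["101", "115", "120"],
    "screened":      [40, 55, 30],
    "enrolled":      [28, 31, 24],
    "screen_failed": [12, 24, 6],
    "queries_open":  [5, 18, 2],
})

# Derive quantitative attributes (KPIs) that the dashboard consumes downstream.
sites["screen_failure_rate"] = sites["screen_failed"] / sites["screened"]
sites["enrollment_rate"]     = sites["enrolled"] / sites["screened"]
```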

Step 2: Using the information consumed in the first step, users can then engage in discovering trends, finding outliers, and detecting features. Mainly, they are looking for sites that appear off-target. Hence, the visualization should support querying and searching actions on distributions, extremes, and similarities. Users are especially interested in comparing site-level information with the study overall.
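
Continuing the hypothetical sites table above, a minimal sketch of the Step 2 task: flagging off-target sites by comparing each site against the study-wide baseline (the 1.5x threshold is an illustrative choice, not a project requirement):

```python
# Study-wide baseline for each KPI (simple mean across sites).
overall = sites[["screen_failure_rate", "enrollment_rate"]].mean()

# Flag "off-target" sites: screen failure rate well above the study average.
off_target = sites[sites["screen_failure_rate"] > 1.5 * overall["screen_failure_rate"]]
print(off_target[["site_id", "screen_failure_rate"]])
```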

5. Results

The final clickable prototype was created in Tableau. Below are a few examples of the types of tasks the visualization supports.

  • Summarize and present changes over time – using the KPIs organized above, Project Managers can rapidly summarize today's metrics and see the changes in comparison to yesterday, last week, or last month.
  • Compare enrollment metrics and assess screen failure ratio – Site Managers can detect off-target sites and gauge the screen failure rate by comparing the lengths of the gray bars: the longer the bar, the worse the site is performing.
  • Browse and look up sites with aging queries: Depending on their role, users can filter for the specific query type or status of interest, then hover to see additional information (site, number of records, query age group). Trial managers can quickly identify the sites that require additional attention, then compare the current metrics with the study overall (see the sketch after this list). In the screenshot below, Site 115 has the most outstanding open queries. Using the side-by-side bar charts on the right, users can then compare the current status with the overall average.
  • Summarize site status for visit-report write-ups: After conducting a site visit, CRAs or field monitors need to summarize the site's current status. Instead of pulling separate reports, users can now quickly filter for the site and locate the total number of subjects accrued, enrolled, and screen-failed using the nested bar chart. They can also locate the total number of adverse events (AEs) and protocol deviations (PDs) reported using the treemaps below.
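
The browse-and-lookup task above can be approximated with the same kind of filtering the dashboard exposes. As before, the queries table and its columns are hypothetical, and the 30-day cutoff is an assumed definition of an "aging" query:

```python
import pandas as pd

# Hypothetical query-level extract; one row per data query.
queries = pd.DataFrame({
    "site_id":  ["101", "115", "115", "120", "115"],
    "status":   ["open", "open", "open", "answered", "open"],
    "age_days": [3, 45, 60, 10, 35],
})

# Filter by status and age, then rank sites by outstanding load,
# mirroring the filter/hover interaction in the Tableau prototype.
aging = queries[(queries["status"] == "open") & (queries["age_days"] > 30)]
print(aging.groupby("site_id").size().sort_values(ascending=False))
```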

Iterations

  • Gallery of iterations

Maroon annotations highlight adopted design solutions, whereas pink annotations highlight abandoned ones.

Design Justifications

  1. Weber’s Law: Frame or align objects to allow precise judgment 
  2. Layering and separating: use layering to create visual hierarchy
  3. Redundant coding: use multiple channels to make symbols visually distinct
  4. Limited number of discriminable bins: keep color and size bins few enough to remain distinguishable
  5. Gestalt Principles: Group conceptually similar items together in same region
  6. Eye beats memory: use side-by-side comparisons to reduce cognitive demands (illustrated in the sketch after this list)
  7. Overview first, zoom and filter, details-on-demand: allow for controlled exploration to zoom in or filter for additionally relevant items and details 
  8. Higher saturation for high numbers: saturation level commands greater attention 
  9. Length over area for quantity: use area glyphs selectively when ranges are large
  10. Extra “window” to show magnified areas of larger space 
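
To make principles 1 and 6 concrete, the sketch below (matplotlib, with made-up numbers) places a site's bars next to the study average on a shared, zero-aligned axis: the common baseline supports precise length judgments, and the adjacent placement replaces memory with direct visual comparison:

```python
import matplotlib.pyplot as plt
import numpy as np

metrics   = ["Enrolled", "Screen failed", "Open queries"]
site_115  = [31, 24, 18]       # made-up values for a single site
study_avg = [27.7, 14.0, 8.3]  # made-up study-wide averages

x = np.arange(len(metrics))
fig, ax = plt.subplots()
ax.bar(x - 0.2, site_115, width=0.4, label="Site 115")
ax.bar(x + 0.2, study_avg, width=0.4, label="Study average")

# Shared zero baseline -> precise length judgments (Weber's Law);
# adjacent bars -> direct comparison without memory load (eye beats memory).
ax.set_xticks(x)
ax.set_xticklabels(metrics)
ax.set_ylim(bottom=0)
ax.legend()
plt.show()
```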

Conclusion

I deeply enjoyed the process of this project, as I was able to (1) learn a new tool, Tableau, (2) use real-world data from a current protocol and perform a deep analysis on data I was familiar with, and (3) create a design solution that received positive feedback from my colleagues. My colleagues even commented on the potential of using the prototype for Risk-Based Monitoring, which is a challenging task for many clinical research professionals. Part of the satisfaction also stems from being able to fully realize the impact of a good visualization.

The biggest limitation of the current visualization is potential visual clutter: judged by the data-ink ratio, there were redundancies that could have been removed. There are certainly other techniques for visualizing the data, such as superimposing, partitioning, or small multiples, but these solutions have not yet been explored in Tableau due to the additional steps required to blend and process the data.

Also, because the prototype was built in Tableau, it would be considered an external tool, which is a disadvantage for EDC systems. EDCs may have built-in reporting functionality, but we are unsure whether it is technically feasible to recreate the proposed prototype natively. In addition, Tableau imposed several design constraints, and my design process was certainly influenced by what I could technically achieve in the software. With more software proficiency, I would have been able to iterate on the design and reduce some of the clutter.

Last but not least, because the user research was conducted on a very small sample (N = 3), I cannot conclude that the metrics selected here are indeed the most critical, and the findings are difficult to generalize. Different protocols might be interested in other types of metrics.

As such, I advocate for more user research, not only to validate the metrics presented in the current design solution, but also to test other possible configurations of the displayed metrics, assessing both the value and usability of the final prototype.