Finding Agility in a Clinical Data Repository

A clinical data repository consolidates data from various clinical sources, such as an EMR (electronic medical record) system or a lab system, to provide a full picture of the care a patient has received. The types of data found in a clinical data repository include demographics, lab results, radiology images, admissions, transfers, and diagnoses.

While the data in a clinical repository is valuable because it captures a patient’s clinical history, the design is not an adequate solution for health systems, for several reasons. The primary reason is this: clinical data repositories don’t offer the flexible analytics that analysts need as they work to improve patient care. These repositories function simply as databases that hold clinical data. In most cases, they also can’t integrate with non-clinical source systems, eliminating the chance to follow patient care across the care continuum. Because of this major limitation, clinical data repositories can’t provide a true picture of the cost per case for each patient. They also can’t show patient satisfaction scores for each visit, which makes them inadequate for quality and cost improvement projects. There are other limitations as well.

Clinical data repositories are inefficient: It’s important for clinicians to be able to access their data to generate reports. But when clinicians request many reports all at once, the IT team in charge of the system turns into a report factory rather than functioning as an experienced analytics team. As a result, these highly skilled, highly paid IT employees end up spending their time tracking down the data, pulling it into the repository, producing reports, and moving on to the next request, rather than working with the clinical teams to refine each report so it shows valid data and meets their needs and expectations.

There’s a large margin for costly errors: Clinical data repositories often use complex, normalized data models. Because of this complexity, the report writer must join many different tables in a single report, which increases both the margin for error during coding and the time it takes to build the report. For example, a code field, such as ICD-9 code 453.2, may exist in one table while the descriptions for all of the codes exist in a lookup table. For the report writer to see that 453.2 is the code for “other venous embolism and thrombosis of inferior vena cava,” they need to join the lookup table to the original table. In addition, the normalized approach means extra SQL work to get reports to look the way you want and to make the data in each field easy to understand.
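To make the lookup-table problem concrete, here is a minimal sketch of the join described above. The table and column names (patient_diagnosis, icd9_lookup, and so on) are hypothetical; an actual repository schema will differ, but the pattern is the same:

    -- Hypothetical schema: patient_diagnosis stores the raw code;
    -- icd9_lookup maps each code to its human-readable description.
    SELECT
        pd.patient_id,
        pd.diagnosis_code,   -- e.g., '453.2'
        icd.description      -- e.g., 'other venous embolism and thrombosis of inferior vena cava'
    FROM patient_diagnosis AS pd
    JOIN icd9_lookup AS icd
        ON icd.code = pd.diagnosis_code;

Every additional coded field in a report (procedures, medications, lab tests) adds another lookup join like this one, and each join is another place where a coding error can creep in.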

Reports aren’t standardized: When data is pulled from clinical data repositories and different visualization tools are used to build the reports, each report will look and function differently. Without a centralized reporting tool across the organization, reports will continue to have a different look and feel by department or functional area, making them less efficient to read.

Tools aren’t standardized: When tools aren’t standardized, users such as clinicians or analysts must learn each tool separately to generate their reports. This lack of standardization is frustrating, and the learning curve for each tool adds to the time and cost of reporting.

Data isn’t always secure: When data is spread across many clinical data repositories, there is no way to audit who is looking at it, which can be disastrous for maintaining a secure organization. Even the built-in safeguards within those systems are limited: the minute someone copies data from a system to a shared drive or another unprotected database, often with the best intentions of using it alongside data from another system, that data becomes extremely vulnerable, exposing the hospital or health system to needless risk.