
How to Better Manage CRO Assay Data

Automate, standardize, validate, and search across results produced by external partners.

Challenge

It is increasingly common for early-stage biotechs to leverage the ever-growing network of contract research organizations (CROs). See our blog post, Collaborative Research Organizations are your new CROs, to learn about this trend and which CROs stand out. The industry's construction of various firewalls is justified, but it makes data-sharing a tiresome process, even for companies that have well-defined assays and relationships.

Take, for example, an early-stage biotech running DMPK studies through three external vendors. Key fields (IC50, peak area, % inhibition, etc.) are highlighted in a standard template with corresponding values. Those assays are repeated, sometimes twice and sometimes several dozen times, across a number of different compounds.

What’s next? There are several options, although they all have their faults:

  • For storage: A file drive or an ELN seems like the natural repository, but it's also an easy way to lose results. If a single person manages those files, what happens when that person is out sick, unavailable, or leaves the company and a colleague needs information ASAP? Scientists will admit that it's sometimes easier to repeat an experiment than to hunt for data that's impossible to find. Not the best use of $$$.
  • For visualization/analysis: Want to analyze the results through INSERT TOOL OF YOUR CHOICE HERE? Well, clear the deck for the next hour (or four), because all of those data points will have to be entered manually. And if the results look a bit funky, it may be due to a few typos made along the way.
  • For automation/machine learning: With all the time and money sunk into generating data, why not point a computer at a database and see what insights you can glean? The challenge, if we're talking multiple sources, is that this relies on a person or team to standardize those files. Hiring an engineer to build those integrations is not out of the question, but the question to ask beforehand is, "Do I really want to pay this great consultant or team lead just to build and then maintain conversion pipelines?" (A sketch of what one such conversion step might look like follows this list.)

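To make that last point concrete, here is a minimal, purely hypothetical sketch of one such conversion step, written in Python with pandas. The file names, column aliases, and schema below are invented for illustration; they are not any particular CRO's format or the TetraScience implementation.

```python
# Purely illustrative: harmonizing result files from multiple CROs into one
# shared schema. File names, column aliases, and fields are hypothetical.
import pandas as pd

# Each CRO labels the same fields differently.
COLUMN_ALIASES = {
    "Compound ID": "compound_id",
    "Cmpd": "compound_id",
    "IC50 (nM)": "ic50_nm",
    "IC50": "ic50_nm",
    "% Inhibition": "pct_inhibition",
    "Percent Inhibition": "pct_inhibition",
}
STANDARD_COLUMNS = ["compound_id", "ic50_nm", "pct_inhibition"]

def standardize(path, cro_name):
    """Read one CRO report and map its columns onto the shared schema."""
    df = pd.read_csv(path).rename(columns=COLUMN_ALIASES)
    missing = [c for c in STANDARD_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"{cro_name} report {path} is missing fields: {missing}")
    out = df[STANDARD_COLUMNS].copy()
    out["cro"] = cro_name  # keep track of which vendor produced each row
    return out

# Combine reports from three vendors into one analyzable table.
reports = [("cro_a_dmpk.csv", "CRO A"),
           ("cro_b_dmpk.csv", "CRO B"),
           ("cro_c_dmpk.csv", "CRO C")]
combined = pd.concat([standardize(path, name) for path, name in reports],
                     ignore_index=True)
```

Even this toy version hints at the real cost: someone has to write and maintain an alias map for every vendor and update it each time a report format changes.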
And these problems all exist before scaling CRO work. We work with an informaticist who saw the time he spent managing CRO data increase from 2 hours a week to 8 hours within a six-month period; that's a fifth of a standard work week, every week of the year.

Solution

At TetraScience, we see that many of these challenges repeat across a wide range of biotechs, so we work with companies to standardize both how they receive CRO data and how their scientists consume it.

[Figure: CRO assay data architecture]

Using the above example, TetraScience helps with...

...storage by automatically transporting the dataset to its desired repository, be it an ELN, a relational database, or a file drive. Further, because key metadata (compound name, CRO, experiment #, etc.) is pulled out automatically, scientists can use TetraScience data lake queries to quickly find datasets.
...visualization/analysis by exposing data directly to analytics applications and tools. TetraScience Pipelines have built-in logic to understand the dataset and restructure the raw data into a reusable format, which the platform's API then makes available to a number of analytical tools, such as Spotfire, Vortex, and more. Rather than spending hours uploading datasets, scientists can search parameters within their tool of choice and have those projects loaded in seconds.
...automation/machine learning by converting datasets into an interoperable format. Data passed through TetraScience becomes standardized, so regardless of source or stage - internal vs. external, ADME vs. in vivo - data can be linked and analyzed easily. A hypothetical sketch of what a standardized record might look like follows.
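As a purely hypothetical illustration of why that interoperability matters, consider an assay record once key metadata has been pulled out. This is not the TetraScience schema; the class, field names, and values below are invented for illustration.

```python
# Purely illustrative: a standardized, queryable assay record. Field names
# and values are hypothetical, not the TetraScience data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssayRecord:
    compound_id: str                       # internal registry ID
    cro: str                               # which external partner produced the result
    experiment_number: str
    assay_type: str                        # e.g. "ADME" or "in vivo"
    ic50_nm: Optional[float] = None
    pct_inhibition: Optional[float] = None

records = [
    AssayRecord("CMPD-0042", "CRO A", "EXP-117", "ADME", ic50_nm=12.5),
    AssayRecord("CMPD-0042", "CRO B", "EXP-201", "ADME", pct_inhibition=87.0),
    AssayRecord("CMPD-0099", "CRO C", "EXP-305", "in vivo", ic50_nm=430.0),
]

# Because every record shares the same fields, a question like "all ADME
# results for CMPD-0042, regardless of which CRO ran them" becomes a
# one-line filter rather than a hunt through folders and email attachments.
hits = [r for r in records
        if r.compound_id == "CMPD-0042" and r.assay_type == "ADME"]
```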

Result

Value materializes at different points for different teams, but scaling CRO work is often the "ah-ha" moment for TetraScience users. The aforementioned informaticist estimates that he now spends about 1 hour a week managing CRO data, and much of that is spent communicating human errors caught by TetraScience back to the CRO (for example, a Compound ID left off a report). Those time and resource savings come despite that biotech's externalized research quadrupling!

"One informaticist has cut time spent managing CRO data from 8 hours/week to 1 hour/week."

The project manager at a larger (~75-person) biotech says she has cut the time she spends looking for studies from about seven hours a week to an hour at most. TetraScience is saving not only time but also hard dollars that would otherwise have gone to running experiments again. Additionally, most of her team members can now search for datasets with ease.

If you’re interested in learning more about managing CRO data or any of our additional integrations, please reach out to us.

Steve McCoy

Steve McCoy is an Account Executive at TetraScience. With a background in enterprise sales, he specializes in ensuring his partners find value and innovation through IoT. Fun fact: he is a real McCoy.
