What’s Slowing Speed to Market in Life Sciences?
Considering the increasingly high costs of bringing drugs and medical devices to market, set against dwindling approval rates, life sciences organizations must squeeze the most value out of their data. Because being first to market confers strong brand recognition and product loyalty, expediting speed-to-market continues to be a top priority. A conversation with Matt Ferrari, Co-Founder & Former CTO at ClearDATA, reveals the challenges that are slowing down life sciences organizations’ product pipelines and the solutions available to overcome them.
Q: What dangers do internal data centers place on life sciences organizations?
Ferrari: The danger is in how healthcare and life sciences organizations are treating data inside their facilities. One: are they keeping up with the pace of innovation? And, two: what is happening either on the dark web or in the security vulnerability space? Bear in mind that just one public cloud provider made more than 1,600 changes in one year—most of them security changes. So, simply deploying third-party applications is not going to help you keep pace inside a private data center. Many healthcare organizations can still use their third-party applications—vulnerability scanning, penetration testing, or anti-malware—in the public cloud, as many of those tools have native public cloud offerings via a marketplace. But the danger lies in creating vulnerabilities and deep security risks by not taking advantage of all the innovation happening on the security side.
Q: And how is this do-it-yourself approach impacting speed-to-market?
Ferrari: Beyond the security risk is the pace of innovation. The challenge life sciences organizations have is that oftentimes it takes them weeks, if not months, to deploy what they need to solve life science-related problems in the space, when they could ideally use a cloud service and spin that up in minutes versus months. The pace of that innovation could allow them to do their jobs faster, accelerating speed to market.
Q: Life sciences organizations deal with data from many sources—and in many formats—so they often lack a single view into that data. How is this clogging the product pipeline?
Ferrari: Disparate data is a massive problem in life sciences. You’ll find that many life sciences organizations have tried to work together, building consortiums with other life sciences organizations to consolidate and de-identify sensitive data. The hope is that greater access to different data sets will increase pharmaceutical development velocity. From a disparate data perspective, the challenge is getting access to all that data in a secure fashion. Most life sciences organizations still have their data in different formats, living in both structured and unstructured databases. And the challenge there, of course, is: how do they aggregate all of that data into the same format? Do they securely move that data into some kind of data lake? By that, I mean taking all that structured and unstructured data and ingesting it into the same data set. Some life sciences organizations are doing it, which provides a couple of benefits: First, it allows their developers and data scientists to leverage a single data source. Second, it reduces traditional operating system, structured database, and software licensing costs, which can be very expensive when you’re running enterprise-level software in a 24/7 environment. So data scrubbing is a big focus for life sciences organizations today in order to develop faster.
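As a rough sketch of that ingestion step (the field names and sources here are hypothetical; a real pipeline would read from live databases and write to cloud object storage, with de-identification applied along the way), structured rows and unstructured notes can be normalized into one common record format:

```python
import json

def from_structured(row):
    # Map a relational row (hypothetical column names) to the common record envelope.
    return {"patient_id": row["patient_id"], "source": "ehr_db",
            "kind": "structured", "payload": {"lab": row["lab"], "value": row["value"]}}

def from_unstructured(doc_id, text):
    # Wrap free-text clinical notes in the same envelope so both land in one data set.
    return {"patient_id": doc_id.split("-")[0], "source": "notes",
            "kind": "unstructured", "payload": {"text": text}}

def ingest(structured_rows, notes):
    # Aggregate both sources into one newline-delimited JSON "data lake" partition.
    records = [from_structured(r) for r in structured_rows]
    records += [from_unstructured(d, t) for d, t in notes]
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)

lake = ingest(
    [{"patient_id": "p1", "lab": "HbA1c", "value": 6.1}],
    [("p1-note-001", "Patient reports improved fasting glucose.")],
)
```

Once both kinds of records share one envelope, a single query layer can serve developers and data scientists alike—the single-data-source benefit Ferrari describes.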
Q: What other factors are preventing life sciences IT organizations from moving to the cloud or extending their capabilities within it?
Ferrari: One is the Business Associate Agreement (BAA): a legal commitment between a cloud provider and the life sciences organization on how they will secure and safeguard PHI or PII across storing, processing, and transmitting, to make sure it remains GxP compliant in the life sciences world, or HIPAA compliant in the healthcare world.
Life sciences organizations sometimes struggle to get a BAA signed directly with a public cloud provider. One of the most common reasons is the breach notification period: how quickly the public cloud provider must tell the life sciences organization that there could be a security incident or a breach. The notification period the organization requires may not match the one the public cloud provider offers. In that case, the life sciences organization is unable to use the public cloud for identifiable sensitive data.
The second issue with BAAs is typically the limitation of liability. Most public cloud providers, if not all of them, offer BAAs that do not insure based on the number of identifiable records from a PII perspective. That means that if there were a legal declaration of breach and the organization were fined by the Office for Civil Rights (OCR) for a material breach, it would not have, let’s say, an “insurance policy” through the public cloud provider that covers the number of breached records. That restriction often prevents a life sciences organization from signing a BAA with a public cloud provider.
Q: Many IT professionals pride themselves on being do-it-yourselfers. Does this time-intensive approach make sense when so many native compliance and security tools are available in the cloud, managed by DevOps talent?
Ferrari: Do-it-yourself is still the number one competitor for managed service providers.
But can your life sciences organization keep up with the pace of innovation from the public cloud provider?
The answer is, “No!” Is running infrastructure, storage, and encryption core to your mission? Your mission is likely improving patient outcomes, backed by developing x number of drugs in y amount of time. Maintaining your infrastructure and keeping the application up 24/7 are integral to your operations, but are they something that will differentiate you as a life sciences organization? In every case I’ve seen, the answer is “no.” It’s important for the life sciences IT professional to move from being someone who watches and monitors data to someone who actually gleans insights from it.
Q: Misconfiguration is a common security and compliance risk. How can automation reduce the chances of this, while streamlining the processes that push products to market?
Ferrari: Automation is core to keeping services compliant inside a public cloud. Anyone can set up alarms, but the whole concept of GxP in a life sciences organization (the “good practice” quality guidelines and regulations that apply to the pharmaceutical and food industries) is that any system you utilize must ensure the product is safe, meets its intended use, and runs the exact same way every single time. That means traceability and accountability.
From a GxP perspective, it’s incredibly important to use automation whenever possible.
When there is a misconfiguration in the cloud, it typically comes in through change management. Automation identifies that a misconfiguration has occurred. The most common example in healthcare is storage that loses its encryption through change management or the deployment of new life science code. Identifying when a bucket has become unencrypted and then automatically ensuring that environment is re-deployed with encryption is critical. But most important from a GxP perspective is traceability: we identified a misconfiguration, we logged it, we automatically remediated it, and finally, we notified somebody that this took place so it can be prevented in the future. Automation is incredibly important for GxP because it has to address both the traceability and accountability of public cloud use.
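That detect, log, remediate, notify loop can be sketched as follows. This is a minimal illustration using an in-memory bucket configuration rather than a real cloud API; in production the check would be triggered by the provider’s configuration-monitoring or event service, and the notification would page an on-call engineer:

```python
from datetime import datetime, timezone

audit_log = []  # GxP traceability: every action is recorded with a timestamp

def log(event, bucket):
    audit_log.append({"time": datetime.now(timezone.utc).isoformat(),
                      "event": event, "bucket": bucket["name"]})

def notify(bucket):
    log("notified", bucket)  # stand-in for a paging or ticketing integration

def check_and_remediate(bucket):
    """Detect an unencrypted storage bucket, re-enable encryption, and log each step."""
    if bucket.get("encryption") == "AES256":
        return False                  # compliant, nothing to do
    log("detected", bucket)           # 1. identify the misconfiguration
    bucket["encryption"] = "AES256"   # 2. automatically remediate
    log("remediated", bucket)         # 3. record the fix for accountability
    notify(bucket)                    # 4. tell a human so it can be prevented next time
    return True

# A bucket left unencrypted by a change-management mistake:
bucket = {"name": "trial-data", "encryption": None}
check_and_remediate(bucket)
```

The key design point for GxP is that remediation and record-keeping are one atomic workflow: the fix never happens without the audit trail.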
Q: Considering that the enormous amounts of data from clinical trials require access to massive compute power, pharma companies need analytics solutions that can scale with ease. Pharma companies conducting trials the traditional way are sometimes delayed in finding a statistically significant N (a population of participants meeting the correct criteria). Is there a movement to modernize clinical trials with machine learning in the cloud, and can it help accelerate speed to market?
Ferrari: There are technologies that support the de-identification of health information (the Google Cloud Healthcare API, for example) that can detect sensitive data, or PII, and then apply a de-identification transformation to obscure it. Allowing life sciences organizations to de-identify patient data sets, so they can then run clinical trials against them, accelerates results. It also allows life sciences organizations to take different types of data sets, move them into a data lake, de-identify them, run their clinical trials using machine learning algorithms, and then re-identify the patient data.
This is a massive compute need that the cloud can dial up or down based on how much compute is required—making working with this kind of big data feasible. That has significantly reduced clinical trial times.
Q: How long does it take to bring a drug to market? The average time seems to be decreasing. Can the cloud’s ability to easily scale analytics take any credit for that?
Ferrari: That’s the goal. It takes multiple years to develop a new drug, on average. The pharmaceutical organizations I work with are specifically focused on taking it from years to months, and they believe they can cut their R&D costs in half. Those are the prescriptive goals for using the public cloud services to do clinical trials.
Q: Tell me a horror story—an example of how a security incident at a vulnerable life sciences organization slowed its product’s momentum?
Ferrari: A life sciences organization I know had a ransomware incident that started with phishing. The bad guys sent a phishing email and someone in the organization clicked on the link. That link provided access to data sets inside a structured database, and the attackers used that access to deploy ransomware. From a development-slowdown perspective, the incident became so serious—because this was inside their own data center—that they had to delay their contractors’ pay, since they had paused access to all of their databases. They couldn’t even pay their employees during that time because they had to call in third-party forensics teams to determine whether there had been a material breach. So life at this organization was on pause for at least two weeks while the investigation ran. Most organizations cannot afford to lose that kind of time for themselves or their employees and contractors.
Matt Ferrari’s experiences with life sciences organizations illustrate the issues that can hamstring a company’s ability to streamline its product initiatives. At the same time, he makes it very clear that an experienced, healthcare-dedicated cloud provider can help meet many of these challenges and increase an organization’s speed-to-market.