
Using Big Data in Precision Medicine

1 February 2019
Jonathan Chainey, Roche; Victoria Gamerman, Boehringer Ingelheim Pharmaceuticals, Inc.; John Quackenbush, Dana-Farber Cancer Institute; Cecilia Schott, AstraZeneca; Jane Wilkinson, Broad Institute; Marc S. Williams, Geisinger; Kelly H. Zou, Pfizer Inc.

Editor’s Note: The authors and panelists are employees of their respective organizations. Views and opinions expressed here are their own and do not necessarily reflect those of their employers.

The ultimate goal of precision medicine (PM) is “right patient, right medicine, right time.” To identify the “right patient,” both cross-sectional and longitudinal real-world data (RWD) are useful. Examples of RWD include electronic health records (EHRs), claims, patient-reported outcomes (surveys, preferences), genomic data, images, and laboratory results.

Recently, the US Food and Drug Administration (FDA) established a strategic framework to advance the use of real-world evidence (RWE) to support development of drugs and biologics.

Here, several experts from pharma, biotech, government, academia, and technology providers discuss the opportunity to use such big data to find the right drug for the right target at the right time, an area ripe for collaborative partnerships.

To encourage collaboration among diverse organizations, what is the high-level strategy? What can our resources achieve, and how do we need to partner in terms of value and culture?

The key to collaboration is establishing a shared vision and compatible cultures. RWD, typically in the form of big data, are characterized by big data’s five Vs: volume, velocity, variety, veracity, and value. A value proposition must be developed that delivers the fifth V, value, to every collaborating partner.

To do so, the need for collaboration should be identified across all involved parties, starting with letting go of the belief that any single party could achieve the same deliverable on its own. Value should be placed on the synergy of the partners working together, such that the knowledge and insights gained from the collaboration exceed what each partner could gain by pursuing a piece on its own.

Value should also be placed on standardizing, to the degree possible, both data and data capture, as we are often faced not with data that are overly “big” (relative to data in other domains), but rather with data that are incomplete and “messy.”

Even with well-standardized data, the challenge is synthesizing these diverse data sources in the context of a given clinical scenario. To develop the “right medicine,” especially in disease areas such as rare diseases, administrative claims data may help assess and optimize health care providers’ therapeutic decisions, monitor adherence, assess gaps in therapies, and evaluate switches among different dose levels and therapeutic options.
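To make one of the claims-based tasks above concrete, here is a minimal sketch of monitoring adherence, computed as the proportion of days covered (PDC) from pharmacy claims. The column names (patient_id, fill_date, days_supply), the records, and the 0.8 adherence threshold mentioned in the comment are illustrative assumptions, not a reference to any particular claims database.

```python
# Minimal sketch: proportion of days covered (PDC) from pharmacy claims.
# Column names and records are hypothetical; real claims feeds will differ.
from datetime import date, timedelta
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "fill_date": pd.to_datetime(
        ["2018-01-01", "2018-02-05", "2018-03-10", "2018-01-15", "2018-04-01"]),
    "days_supply": [30, 30, 30, 30, 30],
})

def pdc(group, start, end):
    """Fraction of days in [start, end] covered by at least one fill."""
    covered = set()
    for _, row in group.iterrows():
        for d in range(int(row["days_supply"])):
            day = row["fill_date"].date() + timedelta(days=d)
            if start <= day <= end:
                covered.add(day)
    return len(covered) / ((end - start).days + 1)

window_start, window_end = date(2018, 1, 1), date(2018, 6, 30)
adherence = claims.groupby("patient_id").apply(
    pdc, start=window_start, end=window_end)
print(adherence)  # patients with PDC below ~0.8 are often flagged as non-adherent
```

The same per-patient coverage sets could be extended to flag gaps in therapy (long uncovered stretches) or switches, by tracking the product dispensed on each fill.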

The health care industry puts patients first to drive innovations and develop the most appropriate medicines and treatment courses. To capture the “right timing” for treatment, both real-time and longitudinal data are necessary.

Based on collaborative research to advance precision medicine with -omics and phenotype data, can you describe some specific examples of successful partnerships?

One example is the collaboration between Geisinger and Regeneron, which has been positive. The two organizations were able to develop a shared vision of combining genomic data with electronic health record data to support discovery, clinical care, and population health.

The value proposition demonstrated success for Regeneron by allowing it to identify new drug targets at lower cost, and for Geisinger by making available large amounts of genomic data that can be returned to patients to improve care. Both partners have had success in discovery research leading to high-impact joint publications.

With the shared vision in mind, long-term sustainable collaborations also result from a clear understanding of each partner’s currency (i.e., what is important to the organization) throughout the lifetime of the collaboration, which is best achieved by pooling complementary cross-functional expertise and resources.

Considering operationalization and implementation, how can we create incentives that encourage clinical data to be standardized and shared?

Progress is being made in the standardization of clinical trial data within the pharmaceutical industry, where regulators play a key role. Since December 2016, the FDA has required that trial sponsors submit data in accordance with standards developed by the Clinical Data Interchange Standards Consortium (CDISC) as part of the electronic submission process for regulatory approval. Data in CDISC format are also accepted by the Pharmaceuticals and Medical Devices Agency (PMDA) in Japan, where they will become mandatory in 2020. Other major regulators across the globe also endorse CDISC.
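As a toy illustration of what submitting data in a CDISC-conformant layout looks like in practice, the sketch below assembles a few rows in the style of an SDTM demographics (DM) domain. The variable names (STUDYID, DOMAIN, USUBJID, AGE, SEX, ARM) follow SDTM conventions, but the study identifier and records are invented, and the table is far from a complete, submission-ready domain.

```python
# Toy illustration of a CDISC SDTM-style demographics (DM) domain.
# Variable names follow the SDTM convention; the study and subjects are invented.
import pandas as pd

dm = pd.DataFrame({
    "STUDYID": ["ABC-101"] * 3,                                  # study identifier
    "DOMAIN":  ["DM"] * 3,                                       # two-letter domain code
    "USUBJID": ["ABC-101-001", "ABC-101-002", "ABC-101-003"],    # unique subject ID
    "AGE":     [54, 61, 47],
    "SEX":     ["F", "M", "F"],
    "ARM":     ["Drug X 10 mg", "Placebo", "Drug X 10 mg"],      # planned treatment arm
})

print(dm)
```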

While there is still progress to be made in consistency of implementation, CDISC is uniquely positioned to drive this international standardization of clinical trial data, and the progress it has made to date would simply not have been possible without the ‘regulatory incentive’ for trial sponsors to adopt it.

While closely related, data sharing presents a unique set of challenges, with patient privacy and patient consent of paramount importance. Regulators must play a crucial role in putting incentives, and requirements, in place for trial sponsors to share their clinical trial data.

For example, the European Medicines Agency’s Policy 0070 will lead to the publication of redacted clinical study reports. Such policies can lead to win-win scenarios in which trial sponsors can access each other’s data or researchers have access to trial data, provided those researchers agree to share their own insights with the trial sponsor.

This more open approach has started to bear fruit, as can be seen in consortium-led initiatives such as ClinicalStudyDataRequest.com (CSDR) and Project Data Sphere.

In collaborations, the incentives to standardize data for sharing are evident. Negotiations around which standards to use for the purposes of the collaboration can yield an agreed-upon strategy. The challenge is creating incentives to drive national or international standardization of these data.

As more use cases emerge, and with the work of groups such as the eMERGE (Electronic Medical Records and Genomics) network and the Global Alliance for Genomics and Health (GA4GH), we anticipate more useful and universal standards will emerge in the near future. That will provide the foundation needed to truly accelerate the use of these data in research and clinical care.

Big data and machine learning/artificial intelligence are, in many ways, no different from quantitative and statistical methods that have long been used in health and biomedical research and operations research. All these methods attempt to fit available data to predictive models that ultimately can help improve performance and outcomes of the system.

What has limited past and present approaches to these problems hasn’t been the lack of methods, but the lack of outcomes data in sufficient quantities to build useful models. When we consider the five Vs of big data, the increases in volume, velocity, and variety in health care are obvious. The release of more comprehensive outcomes data is what will lead to value, since outcomes are what we need to match the right patient to the right drug. And it is continued access to outcomes data that will help us assure the veracity of our data and the models we build on them.
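As a minimal sketch of what fitting available data to a predictive model of outcomes looks like, the example below trains a logistic regression on synthetic patient features to predict a binary treatment response. The features, outcome, and choice of model are assumptions made purely for illustration; the point is simply that no such model can be built without the outcome labels.

```python
# Minimal sketch: fit a predictive model to (synthetic) outcomes data.
# Features, labels, and the choice of logistic regression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))                      # e.g., age, lab value, dose, biomarker
logits = 1.2 * X[:, 3] - 0.5 * X[:, 1]           # outcome driven mostly by the biomarker
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))   # 1 = responded to treatment

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")  # without the outcome labels y, there is nothing to fit
```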

We clearly see the potential value of big data in building precision medicine. It is up to all of us to work together to see this potential realized to advance science, drug discovery, clinical practice of medicine, and—most importantly—outcomes for the patients we care for.

How are the regulatory landscape and directions in the US or globally incorporating big data into their approval process?

RWE was defined in the US 21st Century Cures Act. RWE, RWD, and big data have received considerable interest, with great potential in health care policy and data science. Large volume is only one aspect of the data that providers and patients deal with; thus, the right analytic strategies will require increased resources and expertise.

The path from big data to precision medicine was recently discussed by a group of panelists sharing best practices and experiences in fostering a collaborative approach to evaluating -omics and phenotype data. Geisinger’s project is embedded in a real-world health care delivery system. The project includes clinical data from the EHR, supplemented by patient-reported outcomes, to collect evidence and assess the value of returning genomic results.

Moving away from the traditional blockbuster model, how does PM present new opportunities for collaborations in the health care industry?

PM relies on a patient’s genomic data and RWE. Big data analytics and artificial intelligence technologies are key to unlocking the power of clinical data and thereby accelerating clinical development. Realizing the potential of these data and using them effectively can mean the difference between a successful trial and a failed one. Big data has the potential to drive new insights, but the ultimate value will need to be discussed by the stakeholders collaborating to advance precision medicine.

