For pharmaceutical companies, the safety of their products is of paramount concern. Safeguarding the wellbeing of patients and mitigating the risk of punitive fines from regulatory agencies, such as the FDA in the US and EMA in Europe, are essential elements of day-to-day strategy. Considerable sums of money are spent on rigorously testing drugs and medicinal products, as well as carrying out ongoing monitoring once a drug reaches the market and is used by patients. But what happens when one safe and effective drug interferes with another, because a patient or clinician puts the two together, not knowing that a potentially harmful drug-drug interaction (DDI) might occur?
By some estimates, DDIs account for more than 30 percent of all adverse drug reactions. With over 59 percent of the US population taking at least one prescription drug, and 15 percent taking more than five prescriptions at a time – coupled with an aging population in the Western world – the potential for DDIs has increased dramatically. DDI exposure is responsible for up to 0.17 percent of the nearly 130 million emergency department visits that occur annually in the United States. In addition, DDIs are a leading contributor to drug failures and market withdrawals. For the sake of patients, the health care system, and a company’s bottom line, prevention of DDIs is key – and this is where the CIO has a role to play.
The practice of monitoring the effects of medical drugs for adverse or harmful events is called pharmacovigilance (PV). Today, pharmaceutical companies are increasingly adopting “proactive PV” to help ensure the safety of a new entity as early as possible in the discovery/development process. It falls to the CIO to ensure that pharmaceutical companies can adopt and integrate comprehensive in silico (computer simulated) systems that can help detect or predict potential DDIs at the various stages of the drug discovery and development process. However, implementing these systems is easier said than done.
"For the sake of patients, the health care system, and a company’s bottom line, prevention of DDIs is key"
The reliability of in silico modeling for predicting or detecting DDIs depends directly on the quality of the data sources used, and because there are currently no universal standards for data input or analysis, the CIO’s job is much harder. In addition, the type and number of potential DDIs is also a challenge. Not only can interactions occur among ‘traditional’ drugs, but also between other substances and therapies that patients take – such as food, and folk and herbal medicines. Chinese medicine, for example, includes mixtures of many chemicals with different biological properties. Given the potential scope of the DDI problem, and the fact that pharmaceutical companies use different IT and informatics systems which are not interoperable, it’s not surprising there is no single complete source of DDI information for either doctors or patients to consult.
Efforts are also underway to use technology to reveal previously undetected or “unseen” DDIs. This approach involves using text and data mining, backed by techniques perfected in big data analysis, to identify possible DDIs. For example, one study used sophisticated algorithms to analyze the FDA’s Adverse Event Reporting System. The algorithms looked for DDIs that might prolong the QT interval – a disturbance of the heart’s electrical cycle that can lead to a potentially fatal arrhythmia. The resulting DDI predictions were then validated against electrocardiogram data from patient electronic health records (EHRs). The result was the discovery of eight distinct drug pairs, previously unknown, that increase the risk of acquired long QT syndrome (LQTS).
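The core idea behind this kind of signal mining can be illustrated with a small sketch. The code below is a deliberately simplified, hypothetical example – the drug names and reports are invented, and the score is a basic observed-versus-expected ratio rather than the sophisticated algorithms the study actually used – but it shows the principle: flag drug pairs whose co-occurrence in adverse event reports is disproportionately associated with the event of interest.

```python
from collections import Counter
from itertools import combinations

# Hypothetical, simplified adverse-event reports: each entry is the set of
# drugs on the report plus a flag for whether the adverse event (e.g., QT
# prolongation) was reported. All names and data here are illustrative.
reports = [
    ({"drugA", "drugB"}, True),
    ({"drugA", "drugB"}, True),
    ({"drugA"}, False),
    ({"drugB"}, False),
    ({"drugA", "drugC"}, False),
    ({"drugB", "drugC"}, True),
    ({"drugC"}, False),
    ({"drugA", "drugB"}, False),
]

def pair_signal(reports, min_reports=2):
    """Score each drug pair by how much more often the adverse event is
    reported when the two drugs co-occur than in reports overall."""
    baseline = sum(event for _, event in reports) / len(reports)
    pair_total, pair_events = Counter(), Counter()
    for drugs, event in reports:
        for pair in combinations(sorted(drugs), 2):
            pair_total[pair] += 1
            pair_events[pair] += event
    return {
        pair: (pair_events[pair] / n) / baseline
        for pair, n in pair_total.items()
        if n >= min_reports  # ignore pairs with too few reports to judge
    }

signals = pair_signal(reports)
# Pairs scoring well above 1.0 are candidates for follow-up validation,
# e.g., against ECG data in EHRs, as in the study described above.
for pair, score in sorted(signals.items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))
```

In practice such a score would only be a starting point: real systems correct for confounders and report volume, which is exactly why the study validated its candidates against independent EHR data.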
Much like any big data analysis, ‘real world’ data and evidence are critical to improving the accuracy of results. In drug safety, one of the most important sources of this real world evidence is EHRs. Within these records, clinicians document their findings from patient evaluation, as well as their prescribing of a drug and monitoring of the patient afterward. This is important because whether or not a potential DDI becomes an actual DDI depends on patient-specific factors – such as age, gender and what other medications the patient takes. This is valuable real world evidence, and as shown in the LQTS example mentioned above, the availability of accurate EHRs – and systems that can mine these records – will enable companies to identify and reduce DDIs, and ensure drug safety.
Other approaches to capturing real world evidence are already being tested, including informatics-driven methods that process input from multiple big data sources – social media among them – and even machine learning techniques. Currently, these approaches are hard to replicate for many pharma CIOs, as they require deep analytics skills and capabilities. In addition, data standardization remains a challenge. Many pharma CIOs look forward to the day when DDI data is standardized and compiled into a universally available resource. But, as this day may be some way off, CIOs must evaluate what they can do to improve PV today. Drug safety is absolutely critical to both patients and manufacturers, so CIOs must embrace new techniques to improve the efficiency and accuracy of DDI monitoring and prediction.
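One common flavor of these predictive techniques is similarity-based inference: if a drug’s biological profile closely resembles that of a drug already known to interact with some partner, the new combination is flagged for review. The sketch below is a minimal, hypothetical illustration of that idea – the drug names, enzyme profiles, and threshold are all invented for the example, and real systems use far richer features and learned models.

```python
# Hypothetical drug profiles: the enzymes/targets each drug is known to
# affect (e.g., CYP450 isoforms). All names here are illustrative.
profiles = {
    "drugA": {"CYP3A4", "CYP2D6"},
    "drugB": {"CYP3A4"},
    "drugX": {"CYP3A4", "CYP2D6"},  # profile resembles drugA's
    "drugY": {"CYP2C9"},
}
known_interactions = {("drugA", "drugB")}

def jaccard(a, b):
    """Similarity of two feature sets: shared features over all features."""
    return len(a & b) / len(a | b)

def predict_interaction(drug, partner, threshold=0.5):
    """Flag a possible DDI if `drug` resembles a drug already known to
    interact with `partner` (simple similarity-based inference)."""
    for d1, d2 in known_interactions:
        for known, other in ((d1, d2), (d2, d1)):
            if other == partner and jaccard(profiles[drug], profiles[known]) >= threshold:
                return True
    return False

print(predict_interaction("drugX", "drugB"))  # drugX resembles drugA -> flagged
print(predict_interaction("drugY", "drugB"))  # no resemblance -> not flagged
```

A flag from a system like this is a prompt for expert review and pharmacovigilance follow-up, not a clinical verdict – which is consistent with the article’s framing of these tools as aids to proactive PV rather than replacements for it.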