How does a clinical data repository contribute to managing and monitoring patients' medical histories and records?

eHealth NSW developed a single clinical data repository to receive and store patient information from multiple different organisations regardless of their electronic medical system.

Known as the Clinical Health Information Exchange (CHIE), the repository allows clinicians to view eMR records for the same patient across different organisations. Only information from hospitals signed up to the CHIE is available to view via their local eMR.

Through CHIE, clinicians can view vital patient information including:

  • Demographics
  • Allergies and alerts
  • Problems and diagnoses
  • Past medical history
  • Encounters
  • Medications (on discharge)
  • Vital signs and observations (top 20)
  • Family history and social history, and
  • A range of clinical documents such as discharge summaries, progress notes, operation reports and management plans.
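As an illustration only (the field names below are hypothetical, not the actual CHIE schema), a consolidated record of this kind can be sketched as a simple structure keyed by patient, with entries contributed by multiple organisations and viewable by category:

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalEntry:
    """One item contributed by a participating organisation (hypothetical model)."""
    source_org: str   # hospital or health service that supplied the data
    category: str     # e.g. "allergy", "medication", "discharge_summary"
    detail: str

@dataclass
class RepositoryRecord:
    """A patient's consolidated view across organisations (hypothetical model)."""
    patient_id: str
    entries: list[ClinicalEntry] = field(default_factory=list)

    def view(self, category: str) -> list[ClinicalEntry]:
        """Return all entries of one category, regardless of which org supplied them."""
        return [e for e in self.entries if e.category == category]

record = RepositoryRecord("MRN-001")
record.entries.append(ClinicalEntry("Hospital A", "allergy", "penicillin"))
record.entries.append(ClinicalEntry("Hospital B", "medication", "metformin 500 mg"))
print([e.detail for e in record.view("allergy")])  # ['penicillin']
```

The point of the sketch is the shape of the data, not the implementation: one record per patient, many sources, one unified view.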

This ensures patients receive better continuity of care no matter where they are treated, improving a patient's healthcare experience.

Medical professionals are dedicated to providing their patients with the best care possible. This is an obvious fact, but it’s easy to lose track of when delving into complex subjects like data warehousing and business intelligence. Yet, at its core, a clinical data repository (CDR) is a database that helps medical professionals perform their jobs better, faster and with greater confidence. In layman’s terms, a CDR is a database that collects information relating to individual patients from a variety of sources. Below, we’ll catalog five significant benefits that hospitals and other medical clinics can expect to receive when they integrate a CDR:

Better Care & Treatment

Plain and simple, CDRs are vital to medical facilities because they contain crucial information regarding individual patients. With CDRs, for instance, professionals can determine if a given patient has a certain allergy, or if they require a certain medication. Plus, CDRs don’t just assist with the processing of new patients or transfers. They also allow medical facilities to track potentially contagious diseases and monitor their own use of certain drugs or medications. All of this adds up to better care and treatment on a broad scale.

Real-Time Data from Diverse Sources

Two features define a CDR: the ability to draw data from different sources, and the ability to do so in real time. Rather than waiting days or weeks to receive accurate information about a patient, doctors with access to a CDR can use it immediately. The ability to make updates in real time is a key factor that separates CDRs from other data-storage methods, and it's something administrators shouldn't underestimate. The capacity to access information from multiple sources also ensures that transfers between facilities occur smoothly.
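A minimal sketch of that "latest data from any source" idea, with invented feed and field names (real CDR interfaces are far more elaborate): each source system emits timestamped updates, and for every field the most recent update wins, whichever system it came from.

```python
from datetime import datetime

def merge_feeds(feeds):
    """Merge updates from several source systems into one record per patient.

    Each update is (patient_id, field, value, timestamp); for every field the
    newest timestamp wins, so the consolidated view always reflects the latest
    information from any source. Purely illustrative, not a real CDR interface.
    """
    merged = {}  # patient_id -> field -> (timestamp, value)
    for patient_id, fname, value, ts in feeds:
        fields = merged.setdefault(patient_id, {})
        if fname not in fields or ts > fields[fname][0]:
            fields[fname] = (ts, value)
    # Drop the timestamps for the final consolidated view.
    return {pid: {f: v for f, (_, v) in fields.items()}
            for pid, fields in merged.items()}

feeds = [
    ("p1", "ward", "ICU", datetime(2024, 1, 1, 9)),        # hospital system
    ("p1", "allergy", "latex", datetime(2024, 1, 1, 10)),  # clinic system
    ("p1", "ward", "Recovery", datetime(2024, 1, 1, 17)),  # later update wins
]
print(merge_feeds(feeds)["p1"]["ward"])  # Recovery
```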

Improved Efficiency

Inefficiency is a problem that plagues virtually all businesses. However, medical professionals really can’t afford to deal with redundancies or duplications in treatment or healthcare plans. CDRs cut down on these internal inefficiencies because they store real-time data. This means that medical pros won’t waste time dealing with an issue that has already been resolved.

Streamlined Encounters

Naturally, when doctors and other medical professionals don’t have to spend their time double-checking facts and backtracking, they’re able to address patient needs much quicker. This, in turn, can lead to streamlined encounters, and on a broader scale, improve a facility’s ability to reduce wait times and enhance turnover rates (among other KPIs). CDRs also give medical facilities a better grasp of their demographics.

Improved Clinical Trials

CDRs are perfect for labs looking to run clinical trials. Since CDRs allow researchers to log, save and share data quickly, they eliminate a lot of the “busywork” that serves to bog medical pros down.

Final Thoughts

Data analytics and warehousing are intricate processes, but they’re crucial for businesses operating in the medical and healthcare fields. At Amitech, we specialize in offering data analytics services to companies just like yours. We have years of experience in the medical industry, and we can help your organization. Contact us here for more information.


The development of new data utility builds on the considerable past progress in health care. As summarized in Chapter 6, both theoretical perspectives and specific ideas for practice were presented at the workshop. Reviewed first are workshop presentations that identified lessons learned on important components or building blocks for a next-generation data utility. The chapter concludes with a summary of comments on emerging, practical opportunities to align policy developments with improved data access and evidence development offered by a discussion panel of key policy makers.

Important strategic priorities for the development of an architecture for a next-generation data utility emerged from workshop discussion of collaborative models that offered insights into collaborative clinical data system management, and suggested a framework for expectations, purposes, incentives, priorities, structures, roles and responsibilities, and principles for data entry, access, linkage, and use. Similarly, presentation of efforts to aggregate clinical data from multiple institutions raised considerable technical, organizational, and operational challenges that need to be addressed. Finally, economic incentives and legal issues were considered as important levers to realize the full potential of health data.

Building on collaborative models. The Institute to Transform and Advance Children’s Healthcare at Children’s Hospital of Philadelphia (CHOP) is spearheading a novel effort to harness clinical and business information to improve children’s health, make their health care more efficient, and transform the delivery system. The Institute has developed a data system that links the full spectrum of information about a child’s health needs, from genomics to clinical to environmental data, in order to build out a vision of personalized pediatrics. Christopher Forrest, professor of pediatrics and senior vice president and chief transformation officer at CHOP, described the hospital’s approach to data. He discussed issues related to the collaborative relationships needed to realize a vision of personalized pediatrics, including forming linkages with multiple pediatric institutions, giving patients and families access to their data, obtaining information from them, and creating provider–payer collaborations.

CHOP’s model is predicated on giving care at the right time by the right person in the right setting, minimizing waste, and shifting services from specialty care to primary care. In evolving into a data-driven organization, CHOP developed a concept of personalized pediatrics, which relies on collaborations with other pediatric institutions, public institutions, payers, patients, and families. The concept of personalized pediatrics focuses on outcomes, changes in health, and reductions in costs, both financial and nonfinancial. Apart from issues related to establishing and then sustaining strong collaborations with CHOP’s partners, other challenges include communications and changing cultural assumptions. Changing the culture of providers so that they collect data in a high-quality way is extremely difficult: providers can be added to an EHR, but getting them to change what they do with the EHR requires education and time. Communication across the board is important, according to Forrest, especially in regard to engaging families in a dialogue about how they can partner in personalized pediatrics. CHOP’s model of care is family centered and designed in partnership with families. Forrest also suggested that none of the programs will work without the support and participation of families.

Technical and operational challenges. Efforts to aggregate clinical data from multiple institutions for the purposes of gaining insights on clinical effectiveness or drug/device safety face many technical, operational, and organizational challenges. Drawing on experiences from previous pilot projects and other work in this area, Brian J. Kelly, the executive director of the Health & Sciences Division at Accenture, provided an on-the-ground, real-life implementation perspective on the challenges with aggregating data from multiple sources for secondary use. He also discussed the impact of current privacy regulations, based on work to prototype the Nationwide Health Information Network, in which researchers aimed to aggregate data from 15 completely separate organizations in four states.

Among the challenges to optimizing the use of data, both in patient care and for secondary uses, are converting the data into equivalent standards and terms and finding ways to draw data into one repository from multiple systems. Systems for such data are in place and to some extent entrenched, and changing them will be incremental. Kelly drew on experiences in the area to suggest that a sophisticated approach to information governance is needed. Another consideration is who owns the data: the patient, or the entity that enters the data into a database. Approaches to the technological and architectural challenges are needed to best support the envisioned goals for the data. Because states can place restrictions on data sharing beyond the HIPAA rules, even when the standard notification of privacy practices states that deidentified data may be used for secondary purposes in clinical research, aggregating data across institutions will remain difficult. To share data among delivery organizations, a different approach to privacy notification may be needed, which Kelly identified as one of the biggest policy areas that must be addressed.
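The deidentification step mentioned above can be sketched very simply. The field names here are invented for illustration; real de-identification follows the HIPAA Safe Harbor list of eighteen identifier categories (or an expert-determination process), not a five-entry set:

```python
# Hypothetical field names; the real HIPAA Safe Harbor list covers 18
# identifier categories (names, geography, dates, contact details, IDs, ...).
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "mrn"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen date of birth to year only."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in out:              # keep only the year, per Safe Harbor
        out["birth_year"] = out.pop("birth_date")[:4]
    return out

record = {"name": "Jane Doe", "mrn": "12345",
          "birth_date": "1980-05-17", "diagnosis": "type 2 diabetes"}
print(deidentify(record))
# {'diagnosis': 'type 2 diabetes', 'birth_year': '1980'}
```

Even a sketch like this makes Kelly's point concrete: the clinically useful content survives, while the fields that would let separate institutions link the record back to a person are exactly the ones removed, which is why cross-institution aggregation remains hard.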

There is a growing need for advocacy for using data as a public utility. Many organizations have started marketing campaigns to educate patients and their families on the importance of participation in clinical trials and related research endeavors. Kelly pointed out that we need to do the same thing to educate people on how important it is to be able to use data for secondary purposes. Such efforts would have to contend with security and privacy issues, but those factors can be addressed.

Economic incentives and legal issues. If we wish to change behavior, then we must directly address incentives, argues Eugene Steuerle, senior fellow at the Urban Institute. Steuerle suggested that existing incentive structures discourage information sharing, giving great weight to possible errors in protecting privacy relative to errors deriving from failing to take advantage of ways to improve public and often individual health. In addition, incentives internal to the bureaucracy also discourage optimal use of information, even functions such as merging already existing datasets. Because government now controls nearly three fifths of the health budget, including tax subsidies, it bears substantial responsibility to improve these incentives. Some incentive changes are possible now, through reimbursement and payment systems. Others require examining the reward structure internal to the bureaucracy. In the end, however, the primary incentive needs to come from consumer demand, operating either directly on providers and insurers or on the voters’ elected representatives.

In many cases the benefits of clinical data are shared by all, and the benefits to the individual flow from clinical data being treated as a public good. Accordingly, data-sharing solutions should examine and change the incentive structure and manage the tensions between the privacy and confidentiality of data and their use to improve well-being. Steuerle also highlighted failures to realize this public good: we lack data sharing for individual care; we lack data sharing for an early warning system (e.g., through the Centers for Disease Control and Prevention [CDC] or other organizations); and we lack data sharing for solving problems and finding cures or better treatments for various health problems. In the end, however, Steuerle encouraged engaging the public to support data initiatives. Another incentive problem is the lack of bureaucratic incentives to share datasets or to allow datasets under an agency’s purview to be shared. Notwithstanding the good will of many public servants, there are strong disincentives in the bureaucracy to share data; consideration is needed of how to introduce incentives into the bureaucracy that reward people for enabling the sharing of data.

For solutions to some of these issues, Steuerle outlined several opportunities through which the government can leverage its position, for example, higher reimbursement of drugs prescribed electronically instead of through traditional methods. It could differentially pay for lab tests put into electronic form for sharing with patients or the CDC. It could pay for electronic filing of information on diagnoses and treatment. Government could also provide more incentives for participation in clinical trials. Steuerle also suggested that people working in the public sector need incentives to encourage data sharing.

Also summarized in Chapter 6 are discussions of a stakeholder panel charged with moving the conversation about data utility to an action agenda by offering practical ideas on strategies or incentives that advance the development of an improved data utility, and on what might be necessary to make that happen. As session chair David Blumenthal observed, the environment for clinical data is much more distributed than ever, a phenomenon that overrides policy makers' traditional instinct to develop solutions by identifying roles and responsibilities for local, state, and federal governments. In a distributed environment, such an approach is too narrowly framed. For example, a conversation that engages consumers directly and focuses on the personal health record is a very different policy environment from one that could be addressed through a centralized authority. At the same time, the federal government is a big stakeholder and player in the collection of health-related data. However, the environment surrounding data differs from one part of the government to another: the NIH, for example, has the capacity to focus on promoting data sharing and has a broad mandate for data collection and sharing, whereas Medicare operates in a much more restrictive environment. With these observations as context, panelists offered comments on decisions and actions that could best enable access to and use of clinical data as a means of advancing learning and improving the value delivered in health care.

Government-sponsored clinical and claims data. Steve E. Phurrough, director of coverage and analysis at the Centers for Medicare & Medicaid Services, provided an inventory of the data that Medicare collects, what it does with those data, and some of its challenges in data utility. Medicare currently collects data in each of the four parts of the program: A, B, C, and D. Collected data are used as the basis for paying claims, to help improve the quality of health care, and to develop pay-for-performance qualitative information. Another set of data collection programs is in Medicare demonstration projects, which look at a variety of issues and, generally, examine how different payment systems may affect outcomes versus clinical issues. Data are also collected in the interest of evidence development.

Given the limits of its authority, Medicare has had to be somewhat innovative. One example is linking clinical data collection to the coverage of particular technologies. One carrot Medicare has developed is requiring the delivery of clinical data beyond the typical claims data as a condition of payment for certain services; a few years ago, for example, the system required additional clinical information for the insertion of implantable defibrillators. Such an approach has the potential to provide significant amounts of information if, in fact, we can learn what to do with the data that have been collected and merge them with other sources of data so that data collection can inform clinical practice.

Government-sponsored research data. The molecular biology revolution was founded on the commonality of DNA and the genetic code among living things. Discoveries at the molecular level provide unprecedented insight into the mechanisms of human disease. This understanding has developed into an expectation of wide data sharing in molecular biology and molecular genetics. Now that powerful genomewide molecular methods are being applied to populations of individuals, the necessity of broad data sharing is being brought to clinical and large cohort studies. This has prompted considerable discussion at the NIH, which has resulted in the NIH Genome Wide Association Study Policy for data sharing and a new database at the NIH’s National Center for Biotechnology Information (NCBI) called the Database of Genotypes and Phenotypes (dbGaP).

James M. Ostell, chief of the NCBI Information Engineering Branch, heads the group that provides resources such as PubMed and GenBank, an annotated collection of all publicly available DNA sequences. He observed that in the course of collecting and distributing terabytes of data, the branch has wrestled with questions concerning which data are worth centralizing versus which should be kept distributed. Although technical and policy requirements sometimes dictate answers to those questions, nature sometimes directs information engineers to pursue certain tactics. For example, the commonality of molecular data might drive the desire to have all related information in one data pool, so that a researcher could search all the data comprehensively, perhaps not even with a specific goal in mind. This could lead to the kind of serendipitous connection that is fundamental to the nature of discovery. At the same time, however, there must be a balance toward collecting only those pieces of data that make sense in a universal way.

The NIH has required researchers to pool data collected under NIH grants so that other investigators might benefit from those data. NIH created dbGaP to archive and distribute the results of studies that have investigated the interaction of genotype and phenotype. Such studies include genome-wide association studies, medical sequencing, and molecular diagnostic assays, as well as association between genotype and nonclinical traits. The advent of high-throughput, cost-effective methods for genotyping and sequencing has provided powerful tools that allow for the generation of the massive amounts of genotypic data required to make these analyses possible. dbGaP incorporates phenotype data collected in different studies into a single common pool so the data can be available to all researchers. Dozens of studies are now in the database, and by the end of 2008, the database was expected to hold data on more than 100,000 individuals and tens of thousands of measured attributes.
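Pooling phenotype data from studies that name the same measurement differently requires mapping each study's variables onto a common dictionary. As a toy illustration (the study names, variable names, and mappings below are invented, not dbGaP's actual data dictionary), the core of that harmonization step might look like:

```python
# Hypothetical variable mappings; real dbGaP harmonization is far richer
# and documented per study in its data dictionaries.
STUDY_DICTIONARIES = {
    "study_a": {"sbp_mmhg": "systolic_bp", "ldl": "ldl_cholesterol"},
    "study_b": {"sys_bp": "systolic_bp", "ldl_mgdl": "ldl_cholesterol"},
}

def harmonize(study: str, record: dict) -> dict:
    """Rename one study's variables to the common dictionary, tagging provenance."""
    mapping = STUDY_DICTIONARIES[study]
    out = {mapping.get(k, k): v for k, v in record.items()}
    out["source_study"] = study  # keep track of where each record came from
    return out

pooled = [
    harmonize("study_a", {"sbp_mmhg": 128, "ldl": 110}),
    harmonize("study_b", {"sys_bp": 135, "ldl_mgdl": 98}),
]
print([r["systolic_bp"] for r in pooled])  # [128, 135]
```

Once records from different studies share one vocabulary, a researcher can query the pooled set as a single resource, which is precisely what makes the common pool useful.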

Hundreds of researchers have already begun using the resource. There is also a movement on the part of the major scientific and medical journals to require deposition accession numbers when they publish the types of studies alluded to above, the same as required for DNA sequence data. The publications recognize the importance of other people being able to confirm or deny a paper’s conclusions, which requires investigators to review the data that informed the paper. To further encourage secondary use of data, other accession numbers are used when people take data out of a database, reanalyze the data, and then publish their analysis.

Professional organization-sponsored data. Guidelines and performance measures in cardiology developed by the American College of Cardiology (ACC), often in association with the American Heart Association, typically are adopted worldwide. ACC Chief Executive Officer Jack Lewin described ongoing efforts to ensure that ACC guidelines, performance measures, and technology appropriateness criteria are adopted in clinical care, where they can benefit individual patients. Although most guidelines are currently available on paper, the vision is to have clinical decision support integrated into EHRs.

The ACC’s National Cardiovascular Data Registry (NCDR) was designed to improve the quality of cardiovascular patient care by providing information, knowledge, and tools; benchmarks for quality improvement; updated programs for quality assurance; platforms for outcomes research; and solutions for postmarket surveillance. The NCDR strives to standardize data and to provide data that are relevant, credible, timely, and actionable, and to represent real-life outcomes that help providers improve care and that help participants meet consumer, payer, and regulator demands for quality care. The NCDR’s flagship registry, the national CathPCI Registry, is considered the gold standard for measuring quality in the catheterization laboratory. Other NCDR registries collect data on acute coronary syndrome, percutaneous coronary interventions, implantable cardioverter defibrillators, and carotid artery revascularizations. The ACC is currently working to standardize registry data to be able to measure gaps in performance and adherence to guidelines, with an ultimate goal of being able to teach how to fill those gaps and thus create a cycle of continuous quality improvement.

Mandates from Medicare and states have pushed hospitals to use the ACC registries, but there is room for wider adoption. The ACC is working to alleviate barriers such as the need for standardization, the expense of collecting needed data, and the lack of clinical decision support processes built into EHRs. The ACC would also like to see a national patient identifier that would enable the tracking of an individual’s overall health continuum while preserving patient privacy; such an identifier would bolster longitudinal studies. The ACC believes wider adoption of data sharing via registries is within reach, should be encouraged, and would ultimately result in better health care overall, but that strategies need to be developed and implemented that foster systems of care versus development of data collection mechanisms specific to a single hospital. Toward the development of business strategies needed to develop the clinical decision support capacity, standardization, and interoperability, the ACC wants to collaborate with other medical specialties, EHR vendors, the government, insurers, employers, and other interested parties. Going forward, the ACC supports investment in rigorous measurement programs, advocating for government endorsements of a limited number of data collection programs, allowing professional societies to help providers meet mandated reporting requirements, and implementing systematic change designed to engage physicians and track meaningful measures.

Product development and testing data. The pharmaceutical industry collects and shares a great deal of clinical data. Because the industry is heavily regulated, the data it collects are voluminous and made available publicly under strict regulations that, it is hoped, ensure their accuracy and the accuracy of their interpretations. Eve Slater, senior vice president for worldwide policy at Pfizer, noted that the pharmaceutical industry is interested in ensuring the widespread availability of data to support research at the point of patient care and care at the point of research. In the pursuit of that goal, the industry is interested in pursuing the alignment of data quality, accessibility, integrity, and comprehensiveness. An influx of regulations and an acknowledged need for transparency are prompting the appearance of product development and testing data in the public domain. Nonetheless, attention is needed to ensure data standards, integrity, and appropriate, individualized interpretation.

Although significant amounts of product development data are required by law to be in the public domain, roadblocks prevent the effective sharing of clinical data. In the area of clinical trials posted on www.clinicaltrials.gov, for example, shared information can be incomplete, duplicative, and hard to search, and nomenclature is not always standardized. The information also needs to be translated into language that patients can understand. The lack of an acceptable format for providing data summaries for the public is linked to concerns about disseminating data in the absence of independent scientific oversight; once data are in the public domain, controlling quality assurance and the accuracy with which the information is translated to patients become difficult. Policies to address some of these issues lag behind the actual availability of data.

These issues argue in support of the data-sharing and standardization principles that the IOM has articulated. The Clinical Data Interchange Standards Consortium (CDISC) and other organizations are currently focused on the issues of standardizing electronic data.

Regulatory policies to promote sharing. Although large repositories now exist for controlled clinical trial data, including primary data, Janet Woodcock, deputy commissioner and chief medical officer at the FDA, observed that much of that information unfortunately resides on paper in various archives, not in an electronic form that would readily enable sharing. The FDA’s Critical Path Initiative is an aggressive attempt to be able to combine research data from the various clinical trials in different ways and to extend learning beyond a particular research program. The FDA has been working with the CDISC to try to standardize as many data elements as possible.

Several years ago, the FDA established the ECG Warehouse, an annotated electrocardiogram (ECG) waveform data storage and review system, for which a standard was established for a digital ECG. The FDA asked companies engaged in cardiac safety trials to use that standard. Today the ECG Warehouse holds more than 500,000 digital ECGs along with the clinical data, and the FDA is collaborating with the academic community to analyze those data and generate new knowledge that would not have been accessible before the development of a standardized dataset.

The FDA is constructing quantitative disease models from clinical trials data, building electronic models that incorporate the natural history of the disease, performance of all the different biomarkers about the disease over time, and results from interventions. Given multiple interventions, the approach allows researchers to model quantitatively. The FDA expects more of these models to evolve in the future.

Within the Critical Path Initiative, the FDA worked with various pharmaceutical companies to pool all their animal data for different drug-induced toxicities, before the drugs are given to people. This groundbreaking consortium worked to cross-validate all the relevant biomarkers in each other’s laboratories. The first dataset, on drug-induced kidney toxicity in animals, has been submitted to the FDA and is under review. Similar approaches could be undertaken with humans; pooling those data from various sources could lead to new knowledge.

The FDA also plans to build a distributed network for pharmacovigilance. The Sentinel Network seeks to integrate, collect, analyze, and disseminate medical product (e.g., human drugs, biologics, and medical devices) safety information to healthcare practitioners and patients at the point of care. Required under the 2007 Food and Drug Administration Amendments Act (FDAAA), the Sentinel Network is currently the focus of discussions by many stakeholders about how best to proceed. One approach is to build a secure distributed network in which data stay with the data owners, but are accessible to others.

Legislative change to allow sharing. The Center for Medical Consumers, a nonprofit advocacy organization, was founded in 1976 to provide access to accurate, science-based information so that consumers could participate more meaningfully in medical decisions that often have profound effects on their health. Arthur Levin, the center’s cofounder and director, believes government has a role to play in regulating the healthcare sector; key questions in this arena concern what government can and cannot do, and what it should and should not do.

Legislatively, most of the action concerning data sharing is currently in the states. Levin noted that we may face a scenario similar to that with managed care legislation, where in the absence of federal legislation, states moved ahead on their own, for better or worse. Currently states are moving ahead rapidly with HIT and health information exchange. Issues of privacy and confidentiality are very much in the forefront and driving state legislation. In terms of legislation covering data sharing, we need to make sure that whatever policy is developed moves things in an agreed-upon direction that does not create new obstacles and barriers. A first step will be to develop a much better understanding of what barriers exist in the states and federal government to aggregating data for research, quality improvement, and similar goals.

Another issue is that data sharing is, in essence, a social contract between individuals and researchers who want to use their data. Patients are told there will be some payoff from sharing data, but perhaps patients do not hear enough about how that is supposed to happen. Where does the payoff come? How does the other side of that contract deliver? What are the deliverables? Is there a time line for those deliverables? Is there accountability for those deliverables? As part of the social contract, there should be a burden on collecting data, a requirement that the collector do something specific with the data being collected. Privacy and confidentiality rules and remedies can be legislated; however, trust must be built. All who believe that data represent a public good—and that data sharing is a public responsibility to advance the public interest in improving healthcare quality, safety, and efficacy—also understand that such a message may not resonate so readily with the public. The public has not yet been brought up to that level, and more is needed to engage consumers in this enterprise.