Monday, February 25, 2013

SHARP Focus: Secondary Use of EHR Data (SHARPn)

As part of the HITECH Act, the ONC funded the Strategic Health IT Advanced Research Projects (SHARP) Program. The SHARP Program was created to fund research focused on achieving breakthrough advances that address well-documented problems impeding the adoption of health IT. The hope is that, with scalpel-like precision, this research will accelerate progress toward nationwide meaningful use of health IT in support of a high-performing, continuously learning health care system. Under the program, the Mayo Clinic College of Medicine received $15 million to focus on Secondary Use of EHR Data.

The project, AREA 4 - Secondary Use of EHR Data (SHARPn), is a collaboration of 14 academic and industry partners to develop tools and resources that influence and extend secondary uses of clinical data. The project will enhance patient safety and improve patient medical outcomes through the use of an electronic health record (EHR). Traditionally, a patient's medical information, such as medical history, exam data, hospital visits, and physician notes, is stored inconsistently and in multiple locations, both electronically and non-electronically.

Area four's mission is to enable the use of EHR data for secondary purposes, such as clinical research and public health. By creating tangible, scalable, and open-source tools, services, and software for large-scale health record data sharing, this project will ultimately help improve the quality and efficiency of patient care through the use of an electronic health record. One year into the design and development of the SHARPn framework, the team demonstrated end-to-end data flow on a prototype SHARPn platform, using thousands of patient electronic records sourced from two large healthcare organizations: Mayo Clinic and Intermountain Healthcare. The platform was deployed to:
(1) receive source EHR data in several formats,
(2) generate structured data from EHR narrative text, and
(3) normalize the EHR data using common detailed clinical models and Consolidated Health Informatics standard terminologies, which were
(4) accessed by a phenotyping service using normalized data specifications.
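The four-stage flow above can be sketched as a minimal pipeline. All function names, the toy terminology table, and the string-matching logic here are illustrative assumptions, not the actual SHARPn implementation:

```python
# Minimal sketch of the SHARPn end-to-end flow (hypothetical names).

def receive(raw_record):
    """Stage 1: accept source EHR data in whatever format it arrives."""
    return {"format": raw_record.get("format", "unknown"),
            "payload": raw_record["payload"]}

def extract_structured(record):
    """Stage 2: pull structured mentions out of narrative text
    (a crude stand-in for real NLP)."""
    record["mentions"] = [w for w in record["payload"].split() if w.isupper()]
    return record

def normalize(record, terminology):
    """Stage 3: map local codes/abbreviations to standard terminology."""
    record["normalized"] = [terminology.get(m, m) for m in record["mentions"]]
    return record

def phenotype(record, target):
    """Stage 4: a phenotyping query over the normalized data."""
    return target in record["normalized"]

terminology = {"DM": "diabetes mellitus"}
rec = receive({"format": "text", "payload": "57yo male with DM on metformin"})
rec = extract_structured(rec)
rec = normalize(rec, terminology)
print(phenotype(rec, "diabetes mellitus"))  # True
```

The point of the sketch is the staging, not the logic: each stage consumes the previous stage's output, so source format, text extraction, and terminology can each be swapped out independently.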

The program is working to assemble modular services and agents from existing open-source software to improve the utilization of EHR data for a spectrum of use-cases, focusing on three themes: Normalization, Phenotypes, and Data Quality/Evaluation. The program was assembled into six projects that each span one or more of these themes and together constitute a coherent ensemble of related research and development. Finally, these services will have open-source deployments as well as commercially supported implementations.

Below are some videos of leaders in the project discussing some of the work.



Charles P. Friedman, PhD.; Former Chief Scientific Officer for Information Technology at the Office of the National Coordinator for Health Information Technology (ONC) in the U.S. Department of Health and Human Services speaks about the Strategic Health IT Advanced Research Projects (SHARP) Programs.

Standardize health data elements and ensure data integrity - Patient information can be stored using several different abbreviations and representations for the same piece of data. For example, “diabetes mellitus” (more commonly referred to as “diabetes”), can be referred to in a patient’s medical record alternately as “diabetic,” “249.00” and “DM.” The first phase of Mayo Clinic’s project, called “Clinical Data Normalization,” will work toward transforming this non-standardized patient data into one unified set of terminology. In this case, “diabetes mellitus,” “diabetic,” “249.00” and “DM” would all be re-named “diabetes.”
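The normalization step described above can be illustrated with a toy synonym table. The mappings mirror the blog's diabetes example; a production system would use standard terminologies rather than a hand-built dictionary:

```python
# Toy terminology-normalization table (illustrative mappings only).
SYNONYMS = {
    "diabetes mellitus": "diabetes",
    "diabetic": "diabetes",
    "249.00": "diabetes",
    "DM": "diabetes",
}

def normalize_term(term):
    """Map a local abbreviation or code to its unified term.

    Unknown terms fall back to themselves, so unmapped data
    passes through unchanged.
    """
    return SYNONYMS.get(term, term)

print(normalize_term("DM"))      # diabetes
print(normalize_term("249.00"))  # diabetes
print(normalize_term("asthma"))  # asthma (unmapped, passed through)
```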



Stanley M. Huff, M.D.; SHARPn Co-Principal Investigator; Professor (Clinical) - Biomedical Informatics at University of Utah - College of Medicine and Chief Medical Informatics Officer Intermountain Healthcare. Dr. Huff discusses that to provide patient care at the lowest cost with advanced decision support requires structured and coded data.

Evaluate the progress and efficiency of Mayo Clinic’s project - Mayo Clinic will use an “Evaluation Framework” using the Nationwide Health Information Network, an Office of the National Coordinator for Health Information Technology program. Nationwide Health Information Network Exchange is a set of standards, services, and policies that enable secure health information exchange over the internet.



Calvin Beebe; SHARPn Chief Architect; Senior Technical Specialist at Mayo Clinic. Beebe discusses the 'tracer-shot' pilot conducted in the SHARPn program, in which de-identified data from Intermountain Healthcare and Mayo Clinic was normalized in a comparable and consistent manner, so that secondary-use information could be derived from it.

Find processes to make clinical data normalization, NLP and high-throughput phenotyping more efficient using fewer resources - This part of the process will focus on building adequate computing resources and infrastructures to accomplish the previous steps. Called “Performance Optimization,” this system will allow those seeking patient information to receive it quickly, increasing the efficiency of patient care while using fewer resources.



Marshall I. Schor, SHARPn Co-Investigator-Apache UIMA framework; Senior Technical Staff at TJ Watson Research Lab, IBM; Marshall describes the use of IBM Research: Unstructured Information Management Architecture (UIMA) in the SHARPn program.

Seek physically observable patient traits for further study - Physically observable traits, or phenotypes, can include growth and development, absorption and processing of nutrients, and the functioning of different tissues and organs. These traits result from interactions between a patient’s genes and environmental conditions. Mayo Clinic will use a process called “High-Throughput Phenotyping”, which uses clinical data normalization and NLP to identify and group a particular phenotype, such as Type 2 diabetes. This process will enhance a physician’s ability to identify and study individual phenotypes or groups of phenotypes.
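A phenotyping algorithm over normalized data can be sketched as a simple rule. The criteria below (a coded diagnosis plus supporting medication or lab evidence) are hypothetical, chosen only to show the shape of such a rule, and are not the actual SHARPn Type 2 diabetes algorithm:

```python
# Hedged sketch of rule-based phenotyping (hypothetical criteria).

def has_type2_diabetes(patient):
    """Classify one patient record against simple illustrative criteria."""
    has_dx = "type 2 diabetes mellitus" in patient["diagnoses"]
    on_med = bool(set(patient["medications"]) & {"metformin", "glipizide"})
    high_a1c = patient.get("hba1c", 0.0) >= 6.5
    # Require the coded diagnosis plus medication or lab support.
    return has_dx and (on_med or high_a1c)

patients = [
    {"diagnoses": ["type 2 diabetes mellitus"],
     "medications": ["metformin"], "hba1c": 7.1},
    {"diagnoses": ["asthma"], "medications": ["albuterol"]},
]
cohort = [p for p in patients if has_type2_diabetes(p)]
print(len(cohort))  # 1
```

Because the rule runs over already-normalized data, the same definition can be executed at any participating site, which is what makes phenotyping "high-throughput."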



Christopher G. Chute, M.D., Dr. P.H., SHARPn Principal Investigator; Professor of Medical Informatics and Associate Professor of Epidemiology at Mayo Clinic College of Medicine; Dr. Chute discusses phenotype characteristics for identifying patient cohorts (clinical trials, clinical decision support, quality numerator/denominators, etc).



Jyotishman Pathak, PhD.; SHARPn Co-Investigator; Assistant Professor of Medical Informatics at Mayo Clinic College of Medicine discusses the phenotyping tool.

Merge and standardize patient data from non-electronic forms with the EHR - Some important information, such as that from a physician's radiology and pathology notes, is stored in "free text" form. Mayo Clinic's project will first work to merge the patient information in free text with that in the EHR. The next step of this project, called "Natural Language Processing" (NLP), will work toward classifying certain tags, such as "diabetic," "DM" and "57 year old male" under specific categories, such as "disease" or "demographics." NLP, in addition to clinical data normalization, will help improve patient care by reducing inconsistencies in patient data, providing physicians with more accurate and uniform information in a centralized location.
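The tagging step described above can be sketched with a hand-built lexicon. Real clinical NLP systems (such as the UIMA-based pipelines discussed elsewhere in this post) use far richer models and standard terminologies; the lexicon and matching here are illustrative assumptions:

```python
# Minimal sketch of NLP-style tagging: assign free-text mentions
# to categories (hypothetical lexicon).
LEXICON = {
    "diabetic": "disease",
    "DM": "disease",
    "57 year old male": "demographics",
}

def tag_mentions(note):
    """Return (phrase, category) pairs for lexicon phrases found in a note."""
    return [(phrase, category)
            for phrase, category in LEXICON.items()
            if phrase in note]

note = "57 year old male, diabetic, presents for follow-up"
print(tag_mentions(note))
# [('diabetic', 'disease'), ('57 year old male', 'demographics')]
```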



Guergana Savova, PhD; SHARPn Co-Investigator; Assistant Professor at Children's Hospital Boston, Harvard Medical School. Dr. Savova discusses the steps of natural language processing, or information extraction, over clinical narrative.
