When developing and implementing a Health IT project, it is easy to allow the evaluation stage to take a back seat. However, your evaluation plan should be created early in the development process. Evaluations allow you to test your predictions about your project and to understand its effectiveness. They also help everyone involved in the health IT project improve.

Using the Health Information Technology Evaluation Toolkit to guide you, complete the Evaluation Plan worksheet and then write a report following the instructions at the end of the worksheet.

EVALUATION PLAN

Resource for this worksheet: Agency for Healthcare Research and Quality. (2009). Health Information Technology Evaluation Toolkit. U.S. Department of Health and Human Services. https://digital.ahrq.gov/sites/default/files/docs/page/health-information-technology-evaluation-toolkit-2009-update.pdf

PROJECT:
PREPARED BY:
DATE:

Brief Project Description:

Project Goals: (What are the goals of your stakeholders for this project? What needs to happen for the project to be deemed a success by your stakeholders?)
Example: To improve patient safety; to improve the financial position of the hospital; to be seen by our patients as making patient safety a priority.
•
•
•

Evaluation Goals: (To share lessons learned? To demonstrate the project's return on investment? To prepare a report for stakeholders and funders? Are the goals internal or external?)
•
•
•

Evaluation Measures: What needs to be measured to demonstrate that the project has met project goals? (Check potential measure categories.)
[ ] Clinical Outcomes
[ ] Clinical Process
[ ] Patient Adoption, Knowledge, & Attitudes
[ ] Provider Adoption & Attitudes
[ ] Workflow Impact
[ ] Financial Impact

Evaluation Measures Example: (1) Goal: To improve patient safety. Measurement: The number of preventable adverse drug events is reduced post-implementation. (2) Goal: To improve the hospital's financial position. Measurement: The number of claims rejected is reduced post-implementation.

Possible qualitative measures to consider:
Evaluation of project barriers: (organizational, financial, legal, other)
Evaluation of project facilitators: (strong leadership, training, community buy-in)

Rating Evaluation Measures: (Rate each measure in order of importance to your stakeholders, i.e., CEO, clinicians, or patients. This will help you begin to filter out measures that are interesting to you but will not provide information of interest to your stakeholders.)
Very Important:
•
•
Moderately Important:
•
•
Not Important:
•
•

Rate Measurement Feasibility: (Be realistic about available resources. Will the project be labor-intensive and expensive? Focus on what is achievable and what needs to be measured to determine if the implementation has met its goals.)
Feasible:
•
•
Moderate Effort:
•
•
Not Feasible:
•
•

Place measures in an Importance-Feasibility grid to determine which measures to undertake and which measures to avoid.

Chosen Measures to Evaluate: (Narrow your list down to four or five measures)
1. ____________________________________________________________
2. ____________________________________________________________
3. ____________________________________________________________
4. ____________________________________________________________
5. ____________________________________________________________

Evaluation Study Design: (Use this table to organize your evaluation study options. See Health Information Technology Evaluation Toolkit, pp. 13-15.)

Data Collection Strategies                   | Types of Study Designs: Case-Control | RCT | Time-Motion | Pre-Post
Manual Chart Review                          |
Electronic Data Mining of EMR/Registry Data  |
Instrument the EMR/Registry                  |
Surveys (Paper/Electronic)                   |
Expert Review                                |
Phone Interview                              |
Focus Group                                  |
Direct Observation                           |

Consider the Impact of Study Design on Relative Cost and Feasibility. See Health Information Technology Evaluation Toolkit, pp. 15-17.
Choose Your Final Measures: (This may be the same list as above or a shorter version of the list)
1. ____________________________________________________________
2. ____________________________________________________________
3. ____________________________________________________________
4. ____________________________________________________________
5. ____________________________________________________________

Write Your Final Evaluation Plan:
1. Short Description of the Project
2. Goals of the Project
3. Questions to be Answered by the Evaluation Effort
4. First Measure to be Evaluated – Quantitative
   A. Overview – General Considerations
   B. Timeframe
   C. Study Design/Comparison Group
   D. Data Collection Plan
   E. Analysis Plan
5. Second Measure to be Evaluated – Qualitative
   A. Overview – General Considerations
   B. Timeframe
   C. Study Design
   D. Data Collection Plan
   E. Analysis Plan
6. Subsequent Measures to be Evaluated in the Same Format

Health Information Technology Evaluation Toolkit: 2009 Update

Prepared for:
Agency for Healthcare Research and Quality
U.S. Department of Health and Human Services
540 Gaither Road
Rockville, MD 20850
www.ahrq.gov
Contract No. 290-04-0016

Prepared by:
Caitlin M. Cusack, M.D., M.P.H., NORC at the University of Chicago
Colene M. Byrne, Ph.D., Center for IT Leadership, Partners HealthCare System
Julie M. Hook, M.A., M.P.H., John Snow, Inc.
Julie McGowan, Ph.D., F.A.C.M.I., Indiana University School of Medicine
Eric Poon, M.D., M.P.H., Division of General Medicine and Primary Care, Brigham and Women's Hospital
Atif Zafar, M.D., Regenstrief Institute Inc.

AHRQ Publication No. 09-0083-EF
June 2009

This document is in the public domain and may be used and reprinted without permission except those copyrighted materials that are clearly noted in the document. Further reproduction of those copyrighted materials is prohibited without the specific permission of copyright holders.

Suggested Citation: Cusack CM, Byrne C, Hook JM, McGowan J, Poon EG, Zafar A. Health Information Technology Evaluation Toolkit: 2009 Update (Prepared for the AHRQ National Resource Center for Health Information Technology under Contract No. 290-04-0016.) AHRQ Publication No. 09-0083-EF. Rockville, MD: Agency for Healthcare Research and Quality. June 2009.

Acknowledgments

The authors would like to thank numerous members of the AHRQ National Resource Center's Value and Evaluation Team for their invaluable input and feedback: Davis Bu, M.D., M.A. (Center for IT Leadership); Karen Cheung, M.P.H. (National Opinion Research Center); Dan Gaylin, M.P.A. (National Opinion Research Center); Julie McGowan, Ph.D. (Indiana University School of Medicine); Adil Moiduddin, M.P.P. (National Opinion Research Center); Anita Samarth (eHealth Initiative); Jan Walker, R.N., M.B.A. (Center for IT Leadership); and Atif Zafar, M.D. (Indiana University School of Medicine). Thank you also to Mary Darby, Burness Communications, for editorial review.

The authors of this report are responsible for its content. Statements in the report should not be construed as endorsement by the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.

Contents

Introduction
Section I: Developing an Evaluation Plan
   I. Develop Brief Project Description
   II. Determine Project Goals
   III. Set Evaluation Goals
   IV. Choose Evaluation Measures
   V. Consider Both Quantitative and Qualitative Measures
   VI. Consider Ongoing Evaluation of Barriers, Facilitators, and Lessons Learned
   VII. Search for Other Easily Accessible Measures
   VIII. Consider Project Impacts on Potential Measures
   IX. Rate Your Chosen Measures in Order of Importance to Your Stakeholders
   X. Determine Which Measurements Are Feasible
   XI. Determine Your Sample Size
   XII. Rank Your Choices on Both Importance and Feasibility
   XIII. Choose the Measures You Want to Evaluate
   XIV. Determine Your Study Design
   XV. Consider the Impact of Study Design on Relative Cost and Feasibility
   XVI. Choose Your Final Measures
   XVII. Draft Your Plan Around Each Measure
   XVIII. Write Your Evaluation Plan
Section II: Examples of Measures That May Be Used to Evaluate Your Project
Section III: Examples of Projects
Appendixes
   Appendix A: Sample Size Example
   Appendix B: Health IT Evaluation Resources
   Appendix C: Statistics Resources

Introduction

We are pleased to present this updated version of the Agency for Healthcare Research and Quality (AHRQ) National Resource Center for Health Information Technology (NRC) Evaluation Toolkit. This toolkit provides step-by-step guidance for project teams who are developing evaluation plans for their health information technology (health IT) projects.

You might ask: "Why evaluate?" For years, health IT has been implemented with the goals of improving clinical care processes, health care quality, and patient safety, without questioning the evidence base behind the true impact of these systems. In short, these systems were implemented because they were viewed as the right thing to do. In the early days of health IT implementation, evaluations took a back seat to project work and frequently were not performed at all, at a tremendous loss to the health IT field. Imagine how much easier it would be for you to implement your project if you had solid cost and impact data at your fingertips.
Health IT projects require large investments, and, increasingly, stakeholders are demanding information about both the actual and future value of these projects. As a result, we as a field are moving away from talking about theoretical value to a place where we measure real value. We have reached a point where isolated studies and anecdotal evidence are not enough – not for our stakeholders, nor for the health care community at large. Evaluations must be viewed as an integral piece of every project, not as an afterthought.

It is difficult to predict a project's impact, or even to determine impact once a project is completed. Evaluations allow us to analyze our predictions about our projects and to understand what has worked and what has not. Lessons learned from evaluations help everyone involved in health IT implementation and adoption improve upon what they are doing. In addition, evaluations help justify investment in health IT projects by demonstrating project impacts. This is exactly the type of information needed to convert late adopters and others resistant to health IT. We can also share such information with our communities, raising awareness of efforts in the health IT field on behalf of patient safety and increasing quality of care.

Thus, the question posed today is no longer why we do evaluations, but how we do them. This toolkit will assist you through the process of planning an evaluation. Section I walks you and your team step-by-step through the process of determining the goals of your project, what is important to your stakeholders, what needs to be measured to satisfy stakeholders, what is realistic and feasible to measure, and how to measure these items.

Section II includes a list of measures that you may use to evaluate your project. In this latest version, new measures have been added to each of the domains, and a new domain has been added around quality measures. For each domain, we include a table of possible measures, suggested data sources, cost considerations, potential risks, and general notes. A new column has been added to this updated version of the toolkit, with links to sources that expand on how these measures can be evaluated and with references in the literature.

Section III contains examples of a range of implementation projects with suggested evaluation methodologies for each. In this latest version, two examples have been added on computerized provider order entry (CPOE) and picture archiving and communication systems (PACS).

We invite and encourage your feedback on the content, organization, and usefulness of this toolkit as we continue to expand and improve it. Please send your comments or questions about the evaluation toolkit or the National Resource Center to NRC-HealthIT@ahrq.hhs.gov.

Section I: Developing an Evaluation Plan

I. Develop Brief Project Description

This may come straight out of your project plan or proposal.

II. Determine Project Goals

What does your team hope to gain from this implementation? What are the goals of your stakeholders (CEO, CMO, CFO, clinicians, patients, and so on) for this project? What needs to happen for the project to be deemed a success by your stakeholders?

Example: To improve patient safety; to improve the financial position of the hospital; to be seen by our patients as making patient safety an organizational priority.
III. Set Evaluation Goals

Who is the audience for your evaluation? Do you intend to prepare a report for your stakeholders? Are you required to prepare a report for your funders? Will you use the evaluation to convince late adopters of the value of your implementation? To share lessons learned? To demonstrate the project's return on investment? To improve your standing and competitive edge in your community? Or are your goals more external? Would you like to share your experiences with a wider audience and publish your findings? If you plan to publish your findings, this may affect your approach to your evaluation.

Example: To prepare a report for the stakeholders and funders of the project.

IV. Choose Evaluation Measures

Take a good look at your project goals. What needs to be measured in order to demonstrate that the project has met those goals? Brainstorm with your team on everything that could be measured, without regard to feasibility. Section II provides a wide range of potential measures in the following categories:

• Clinical Outcomes Measures
• Clinical Process Measures
• Provider Adoption and Attitudes Measures
• Patient Adoption, Knowledge, and Attitudes Measures
• Workflow Impact Measures
• Financial Impact Measures

Your team might find it helpful to break down your measures in similar categories. Keep in mind that measures should map back to your original project goals, and that they may include both quantitative and qualitative data.

Example: (1) Goal: To improve patient safety. Measurement: The number of preventable adverse drug events is reduced post-implementation. (2) Goal: To improve the hospital's financial position. Measurement: The number of claims rejected is reduced post-implementation. (3) Goal: To be seen by our patients as making patient safety an organizational priority. Measurement: In patient surveys, patients answer "yes" to the question, "Do you believe this hospital takes your safety seriously?"

V. Consider Both Quantitative and Qualitative Measures

Many people feel more comfortable in the realm of numbers and, as a result, frequently design their evaluations solely around quantitative data. But this approach provides only a partial picture of your project. Quantitative data can lead to conclusions about your project that miss the larger picture. For example:

A hospital implements a new clinical reminder system with the goal of increasing compliance with health maintenance recommendations. An evaluation study is devised to measure the percentage change in the number of patients discharged from the facility who receive influenza vaccines, as recommended. The study is carried out, and, to the disappointment of the research team, the rates of vaccinated patients discharged pre- and post-implementation do not change. The team concludes that their implementation goals have not been met, and that the money spent on the system was a poor investment.

But a qualitative study of the behaviors of the clinicians using the new system would have reached different conclusions. In this scenario, the qualitative study reveals that clinicians, bombarded with a number of alerts and health maintenance reminders, click through the alerts without reading them.
The influenza vaccine reminders are not read; thus the rates of influenza vaccination remain unchanged. The study also notes that a significant number of clinicians are distracted by and frustrated with the frequent alerts generated by the new system, with no way to distinguish the more important alerts from the less important ones. In addition, some clinicians are unaware of the evidence supporting this vaccine reminder and of the financial (pay-for-performance) implications for the hospital if too few patients receive this vaccine. One clinician had the idea that the vaccine reminder could be added to the common admission order sets. These findings could be used to refocus the design, education, and implementation efforts for this intervention. However, lacking a qualitative evaluation, these insights are lost on the project team.

Qualitative studies add another important dimension to an evaluation study: they allow evaluators to understand how users interact with a new system. In addition, qualitative studies speak to a larger audience because they generally are easier to understand than quantitative studies. They often generate anecdotes and stories that resonate with audiences. Therefore, it is important to consider both quantitative and qualitative data in your evaluation plan. Please add any qualitative measures you would like to consider.

The National Resource Center has developed a Compendium of Health IT Surveys that may be found on the NRC Web site at Health IT Survey Compendium. This tool allows a user to search for publicly available surveys by survey type, technology, care setting, and targeted respondent. These surveys can then be used as is, or can be modified to suit a user's needs.

VI. Consider Ongoing Evaluation of Barriers, Facilitators, and Lessons Learned

Lessons learned are important measures of your project and typically are captured using qualitative techniques. These lessons may reflect the facilitators and barriers you encountered at various phases of your project. Barriers may be organizational, financial, or legal, among many other areas. Facilitators might include strong leadership, training, and community buy-in. This type of information is extremely valuable not only to you but also to others undertaking similar projects.

In formulating a plan for capturing this information, consider scheduling regular meetings with your project team to discuss the issues at hand openly and to record these discussions. In addition, you could conduct focus groups with appropriate individuals to capture this information more formally. For example, you could ask nurses who are using a new technology about what has gone well, what has gone poorly, and what the unexpected consequences of the project have been. Another way to capture valuable lessons learned is to conduct real-time observations on how users interact with the new technology. Consider how you could incorporate these analysis techniques into your evaluation plan. Clearly state what you want to learn, how you plan to collect the necessary data, and how you would analyze the data.

VII. Search for Other Easily Accessible Measures

Hospitals collect a tremendous amount of data for multiple purposes: to satisfy various Federal and State requirements, to conduct ongoing quality assurance evaluations, and to measure patient and staff satisfaction.
Therefore, there are teams within your facility already collecting data that might be useful to you. Reach out to these groups to learn what information they are currently collecting and to determine whether those data can be used as an evaluation measure. In addition, contact the various departments in your facility to learn the reporting capabilities of their current software programs as well as current data collection methods. There may be opportunities to leverage these reporting capabilities and data collection methods for your project. For example, does the billing department already measure the number of claims rejected? Is there a team already abstracting charts for information that your team would like to examine? Could your team piggy-back with another group to abstract a bit of additional information? Are there useful measurements that could be taken from existing reports?

Likewise, you may find that activities you are planning as part of your evaluation would be helpful to other teams within your facility. Cooperation in these activities can increase goodwill on both sides. Section II outlines several potential measures and provides sources where you may find those measures.

Example: The finance department's billing system can report the number of emergency department encounters that are coded as levels I, II, III, IV, and V. These reports are simple to run, and the finance department is willing to run them for you. You already know that many visits are downcoded because a visit was not sufficiently documented – an oversight that can lead to large revenue losses. A new evaluation measure is added to determine whether the new implementation improves documentation so that visits are coded appropriately and revenues are increased.
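The kind of departmental report described in this example lends itself to a quick scripted summary. Below is a minimal Python sketch, not part of the toolkit, that tallies ED visit coding levels before and after go-live from a billing export; the file name, column names, and go-live date are all illustrative assumptions.

```python
# Hypothetical billing export with visit_date and code_level columns;
# a shift toward higher levels post go-live suggests visits are being
# documented well enough to code appropriately.
import pandas as pd

visits = pd.read_csv("ed_billing_export.csv", parse_dates=["visit_date"])
go_live = pd.Timestamp("2009-01-01")  # assumed implementation date

visits["period"] = visits["visit_date"].map(lambda d: "pre" if d < go_live else "post")

# Share of visits at each coding level (I-V), before vs. after go-live.
print(pd.crosstab(visits["period"], visits["code_level"], normalize="index"))
```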
VIII. Consider Project Impacts on Potential Measures

A project may have many impacts on a facility, but often these impacts depend on where the project is implemented – for example, across groups of hospitals versus across a single facility versus within a single department. In addition, impacts may vary according to the group that is using a new technology – for example, all facility clinicians versus nurses only. Consider the potential measures on your list and how your project might impact those measures. You may find that this exercise eliminates some measures from your list if you are trying to measure outcomes that will not be impacted by your project.

IX. Rate Your Chosen Measures in Order of Importance to Your Stakeholders

Now that your team has a list of measures to evaluate, rate each measure in order of importance to your stakeholders, i.e., your CEO, clinicians, patients, and so on. You could use a scale such as: 1 = Very Important, 2 = Moderately Important, 3 = Not Important. This will help you begin to filter out those measures that are interesting to you but will not provide you with information of interest to your stakeholders.

1. Very Important: ____________________________________________________
2. Moderately Important: ______________________________________________
3. Not Important: _____________________________________________________

X. Determine Which Measurements Are Feasible

Now examine your list to determine which measures are feasible for you to measure. Be realistic about the resources available to you. Teams frequently are forced to abandon evaluation projects that are labor-intensive and expensive. Instead, focus on what is achievable and on what needs to be measured to determine whether your implementation has met its goals. For example, you might want to know whether your implementation reduces adverse drug events (ADEs). While this is a terrific evaluation project, if you have neither the money nor the individuals needed for chart abstraction, the project will likely fail. Keep focused on what can be achieved. Again, you can use a ranking scale: 1 = Feasible, 2 = Feasible with Moderate Effort, 3 = Not Feasible.

1. Feasible: __________________________________________________________
2. Moderate Effort: ___________________________________________________
3. Not Feasible: ______________________________________________________

XI. Determine Your Sample Size

A second, extremely important, facet of feasibility is sample size. An evaluation effort can hinge on the number of observations planned or on the frequency of events to be observed. The less frequently the event occurs, the less feasible the planned measure becomes. If a measurement requires a large amount of resources—for example, to directly observe clinicians at work or to conduct manual chart review—or if you are observing very rare events, such as patient deaths, your plan may not be feasible at all. In planning how to study your measure, determine the number of observations you will need to make. Generally, you need enough observations to feel confident about the conclusions you want to draw from the data collected. If you have never estimated a sample size, you should consult a statistician to help you do this correctly or utilize the resources on the AHRQ NRC Web site. Appendix A offers a hypothetical example of determining sample size. Estimate the number of observations you will need for each measure. You may find that this exercise eliminates further measures from being feasible.
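As a concrete illustration of the sample-size question (the toolkit's own worked example is in Appendix A), the Python sketch below estimates the observations needed per group to detect a change between two proportions. The event rates and the use of the statsmodels package are assumptions for illustration, not part of the toolkit; a statistician should still review any real calculation.

```python
# Minimal sketch: observations needed per group to detect a drop in an
# event rate from 5% to 3% (illustrative rates) at standard settings.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05   # assumed pre-implementation event rate
target_rate = 0.03     # assumed post-implementation event rate

effect_size = proportion_effectsize(baseline_rate, target_rate)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # two-sided significance level
    power=0.80,   # 80% chance of detecting the change if it is real
    ratio=1.0,    # equally sized pre and post samples
)
print(f"About {n_per_group:.0f} observations per group")
```

Rare events (such as patient deaths) yield tiny effect sizes and therefore enormous required samples, which is exactly why they can make a measure infeasible.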
XII. Rank Your Choices on Both Importance and Feasibility

Place your remaining measures into the appropriate box in the grid below.

                         Feasibility Scale
Importance Scale         1-Feasible   2-Moderate Effort   3-Not Feasible
1-Very Important         (1)          (2)
2-Moderately Important   (3)          (4)
3-Not Important          (5)

Those measures that fall within the green zone (Most Important, Most Feasible) are ones you should definitely undertake; the measures in the yellow zones are ones you can undertake in the order listed; and those measures in the red zone should be avoided.
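When a team has many candidate measures, the grid logic is mechanical enough to script. A minimal Python sketch follows; the measures and their (importance, feasibility) ratings are invented for illustration, and the zone rules simply restate the paragraph above.

```python
# Illustrative only: ratings use the toolkit's scales
# (1 = Very Important / Feasible ... 3 = Not Important / Not Feasible).
measures = {
    "Preventable ADEs reduced": (1, 2),
    "Claims rejected reduced": (1, 1),
    "Clinician satisfaction": (2, 1),
    "Inpatient mortality": (1, 3),
}

for name, (importance, feasibility) in measures.items():
    if feasibility == 3 or (importance == 3 and feasibility == 2):
        zone = "red: avoid"
    elif importance == 1 and feasibility == 1:
        zone = "green: definitely undertake"
    else:
        zone = "yellow: undertake in priority order"
    print(f"{name} -> {zone}")
```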
XIII. Choose the Measures You Want to Evaluate

You now have a list of measures ranked by importance and feasibility. Narrow that list down to four or five primary measures. If you want to evaluate other measures and you believe that you will have the required resources available to you, list those as secondary measures.

XIV. Determine Your Study Design

Now that you know which measures you are going to undertake, consider the study design you will use. Listed below are the types of study designs that may be used in your evaluation. Remember that each type of design has attributes of "timing" and "data collection strategy." Timing can be either retrospective, looking at data from the past, or prospective, looking at new data as it is collected. The data collection strategies include chart reviews, interviews (phone, in-person), focus groups, data mining from electronic databases, observational data collection (time-motion studies), randomized control trials (RCTs), case-control data collection, cohort data collection, automatic data collection (from EMRs), and expert reviews. This is by no means a substitute for hands-on guidance from a trained statistician. It is only meant to be a ten-thousand-foot view of evaluation methods. The outline below depicts one way of organizing these types of studies:

1. Retrospective Studies
   A. Data Collection Strategies
      i. Manual Chart Review
      ii. Electronic Data Mining of EMR/Registry Data
      iii. Instrument the EMR/Registry (Real-Time Data Collection)
      iv. Surveys (Paper/Electronic)
      v. Expert Review
      vi. Phone Interview
      vii. Focus Group
   B. Study Designs
      i. Case Series
      ii. Case Control Study

2. Prospective Studies
   A. Data Collection Strategies
      i. Manual Chart Review
      ii. Electronic Data Mining of EMR/Registry Data
      iii. Instrument the EMR/Registry (Real-Time Data Collection)
      iv. Surveys (Paper/Electronic)
      v. Expert Review
      vi. Phone Interview
      vii. Focus Group
      viii. Direct Observation
   B. Study Designs
      i. Randomized Control Trial (RCT)
      ii. Time-Motion Study
      iii. Pre-Post Study
      iv. Meta-Analysis

Use this table to organize the studies. The shaded areas in the toolkit indicate which strategy fits which design:

Data Collection Strategies                   | Types of Study Designs: Case-Control | RCT | Time-Motion | Pre-Post
Manual Chart Review                          |
Electronic Data Mining of EMR/Registry Data  |
Instrument the EMR/Registry                  |
Surveys (Paper/Electronic)                   |
Expert Review                                |
Phone Interview                              |
Focus Group                                  |
Direct Observation                           |

3. Data Sources – As you think through your study design, you will need to consider where you will obtain your data. Potential sources of data include:
   A. Study Databases (data entered from surveys, focus groups, time-motion studies, and so on)
   B. Paper Charts
   C. Electronic Data Repositories and EMR Databases
      i. Lab System
      ii. Pharmacy System
      iii. Billing System
      iv. Registration System
      v. Radiology Information System
      vi. Pathology Information System
      vii. Health Information Exchange
      viii. Personal Health Record
      ix. EMR Data (ICD/Procedures)
      x. Administrative
   D. Pharmacy Logs
   E. Disease Registries
   F. Prescription Review Databases
   G. Direct Observation Databases
   H. Real-Time Capture from Medical Devices (barcoders, and so on)
   I. Hospital Quality Control Program (the hospital may already be collecting this information for quality reporting)

XV. Consider the Impact of Study Design on Relative Cost and Feasibility

How you have chosen to design your study will impact the feasibility of evaluating a given measure in terms of both the relative cost and the challenges you are likely to encounter. Below we list known caveats around study methodologies and their relative cost considerations, as well as alert you to possible solutions. You may find additional measures you will want to drop from your evaluation plan once you carefully consider these issues. Appendix B includes more resources on health IT evaluation.

1. Developing your own survey can be time consuming. If you are conducting randomized trials or other rigorous evaluations, you also will need to validate the survey, especially if it is scored, which can add additional time and expense. Some resources on survey design can be found here:
   A. Doyle JK. Introduction to survey methodology and design. In: Woods DW. Handbook for IQP advisors and students. Chap. 10. Worcester, MA: Worcester Polytechnic Institute; 2006.
   B. AHRQ National Resource Center for Health IT. Health IT survey compendium.
   C. California Health Interview Survey. Survey design and methods.
   D. Hinkin TR. A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods 1998;1(1):104-21.

2. Focus groups require planning, and the logistics can become complicated when busy stakeholders are asked to convene. The methodology for data analysis from focus groups requires the expertise of a qualitative researcher to analyze free-text narratives for themes and common principles. This can also increase the cost of your evaluation quickly.
   A. Iowa State University. Focus group fundamentals. Methodology Brief (PM 1969b) 2004 May.
   B. Kitzinger J. Qualitative research: introducing focus groups. BMJ 1995 Jul 29;311(7000):299-302.
   C. Robert Wood Johnson Foundation. Focus groups.
   D. Dawson S, Manderson L, Tallo VL. A manual for the use of focus groups (Methods for social research in disease). Boston, MA: International Nutrition Foundation for Developing Countries; 1993.

3. Manual chart reviews are time consuming and expensive, depending on how many charts you need to review or how many data elements are abstracted. Common pitfalls with chart reviews include unintentional data omission, data entry problems, and charts that are incomplete or have missing information. In addition, reviewers can fatigue easily from the tediousness of the work.

4. Some prospective studies can be done fairly efficiently and quickly. For example, time-motion studies (also known as work-sampling or observational studies) can be quickly performed by motivated research assistants or students at reasonable costs. However, these studies require the development of a list of tasks that the subjects will perform and also require that you have a data collection tool (personal digital assistant-based timer tool, paper-based tool, and so on) where you can record the times for the completion of each task. One could also automate the process by directly "instrumenting" an EMR, meaning specific programming is added to an EMR to capture data. For example, if evaluators want to evaluate the "usefulness of an alert," programming is added to automatically track every time an alert is fired and every time that alert is followed. In another example, if evaluators want to capture use of e-prescribing, the system will automatically track and aggregate the number of times users prescribe medications electronically.
   A. Finkler SA, Knickman JR, Hendrickson G, Lipkin M Jr, Thompson WG. A comparison of work-sampling and time-motion techniques for studies in health services research. Health Serv Res 1993 Dec;28(5):577-97.
   B. Caughey MR, Chang BL. Computerized data collection: example of a time-motion study. West J Nurs Res 1998 Apr;20(2):251-6.
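To make the "instrumenting" idea in item 4 concrete, here is a minimal Python sketch of the alert example: a hypothetical logging helper that an alert pathway could call each time an alert fires and each time a clinician acts on it, from which a follow rate can be computed. Everything here is invented for illustration; a real EMR would require vendor-specific hooks.

```python
# Hypothetical instrumentation: the alert pathway calls
# log_alert_event(id, "fired") when an alert displays and
# log_alert_event(id, "followed") when the clinician acts on it.
import csv
from datetime import datetime

LOG_PATH = "alert_log.csv"

def log_alert_event(alert_id: str, event: str) -> None:
    """Append one timestamped alert event to the log file."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), alert_id, event])

def alert_follow_rate(path: str = LOG_PATH) -> float:
    """Share of fired alerts that were followed, according to the log."""
    with open(path, newline="") as f:
        events = [row[2] for row in csv.reader(f)]
    fired = events.count("fired")
    return events.count("followed") / fired if fired else 0.0
```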
5. Other types of prospective studies (randomized controlled trials) and before-after type observational studies are more complicated and expensive. They require modeling of the outcome variables using advanced statistical techniques (generalized linear models, logistic regression, analysis of variance (ANOVA), and so on). While they may provide the most accurate and valid data of all the study designs, they are also the most expensive to undertake. Appendix C includes more resources on statistics.
   A. Sibbald B, Roland M. Understanding controlled trials: why are randomized trials important? BMJ 1998;316(7126):201.
   B. Green S, Raley P. What to look for in a randomized controlled trial. Science Editor 2000 Sept-Oct;23(5):157.
   C. Concato J, Shah N, Horwitz R. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000;342:1887-92.
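For the simplest before-after comparison of a proportion (for example, the share of claims rejected pre- and post-implementation, as in the earlier example), a two-proportion z-test is a common starting point. The Python sketch below uses hypothetical counts; as item 5 notes, a real before-after study usually needs adjusted models (logistic regression, ANOVA, and so on), not just an unadjusted test.

```python
# Illustrative counts only: did the claims-rejection rate fall?
from statsmodels.stats.proportion import proportions_ztest

rejected = [240, 180]     # rejected claims: pre, post (hypothetical)
submitted = [4000, 4100]  # claims submitted: pre, post (hypothetical)

z_stat, p_value = proportions_ztest(count=rejected, nobs=submitted)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```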
6. For retrospective data analysis or case-control studies, you will need cohorts of matched cases and controls in order to then evaluate the outcome in question. The challenge in these studies is trying to identify the matched cases and controls.
   A. Barlow WE, Ichikawa L, Rosner D, Izumi S. Analysis of case-cohort designs. J Clin Epidemiol 1999 Dec;52(12):1165-72.
   B. Schenker M. Case control studies. Department of Public Health Sciences, UC Davis.
   C. Meirik O. Cohort and case control studies. Geneva Foundation for Medical Education and Research.
   D. Ernster VL. Nested case-control studies. Prev Med 1994 Sep;23(5):587-90.

7. Data mining refers to the use of sophisticated statistical techniques in the analysis of existing data within a given database. You may need to have access to experienced statisticians to model and analyze patterns within a dataset that can indicate certain conditions or outcomes.
   A. Moore A. Statistical data mining tutorials.
   B. Palace B. Data mining. Technology notes prepared for Management 274A; 1996.

XVI. Choose Your Final Measures

Based on your study design choices and their relative costs, you may have eliminated additional measures from your evaluation plan. You should now be left with a final list of measures that you want to evaluate as part of your evaluation plan.

XVII. Draft Your Plan Around Each Measure

Map out how you will measure each measure. What is the timeframe for your study? What is your comparison group? If you are doing a quantitative study, what statistical analysis will you use? Having a statistician review your plan at this point may save you time later in your evaluation. If you plan to deploy a survey or conduct a time-motion study as part of your evaluation, you may want to conduct a small pilot to save time later as well. Below is a template to walk you through these questions. Section III contains example plans for your reference.

For each measure (1st, 2nd, 3rd, 4th), answer:
• Briefly describe the intervention.
• Describe the expected impact of the intervention and how you think your project will exert this impact.
• What questions do you want to ask to evaluate this impact? These will likely reflect the expected impact (either positive or negative) of your intervention.
• What will you measure in order to answer your questions?
• How will you make your measurements?
• How will you design your study? For a quantitative study, you might consider what comparison group you will use. For a qualitative study, you might consider whether you will make observations or interview users.
• For quantitative measurements only: What types of statistical analysis will you perform on your measurements? Estimate the number of observations you need to make in order to demonstrate that the measure has changed statistically.
• How would the answers to your questions change future decisionmaking and/or implementation?
• What is the planned timeframe for your project?
• Who will take the lead for the project? For data collection? Data analysis? Presentation of the findings? Final write-up?

XVIII. Write Your Evaluation Plan

You now have everything you need to write your evaluation plan: project description, goals, measures, and methodology for your evaluation.

1. Short Description of the Project
2. Goals of the Project
3. Questions to be Answered by the Evaluation Effort
4. First Measure to be Evaluated – Quantitative
   A. Overview – General Considerations
   B. Timeframe
   C. Study Design/Comparison Group
   D. Data Collection Plan
   E. Analysis Plan
   F. Power/Sample Size Calculations
5. Second Measure to be Evaluated – Qualitative
   A. Overview – General Considerations
   B. Timeframe
   C. Study Design
   D. Data Collection Plan
   E. Analysis Plan
6. Subsequent Measures to be Evaluated in the Same Format

Section II: Examples of Measures That May Be Used to Evaluate Your Project

The following section outlines potential measures for evaluation. For each domain (Clinical Outcomes; Clinical Process Measures; Provider Adoption and Attitudes Measures; Patient Adoption, Knowledge and Attitude Measures; Workflow Impact Measures; and Financial Impact Measures), we include a table of possible measures, suggested data sources, cost considerations, potential risks, and general notes. In addition, we include links to sources that expand on how these measures can be measured, with references in the literature.

Table 1: Clinical outcomes measures

Measure: Preventable adverse drug events (ADEs)
Quality Domain(s): Patient Safety; Quality of Care
Data Source(s): Chart review; prescription review; direct observations; may also consider patient phone interviews; instrumenting the study database/EMR
Notes: Need to distinguish between ADEs and medication errors (MEs). MEs can be divided by stage of the medication process: ordering, transcribing, dispensing, administering, monitoring. Can be assessed in both inpatient and outpatient settings. ADEs are idiosyncratic reactions and drug-diagnosis interactions.
Potential Risks: Preventable ADEs are relatively common, especially if there is no clinical decision support (CDS) at the time of drug ordering. Many drug-drug and drug-diagnosis interactions can be avoided if CDS tools are available at the time of ordering of medications.
Keep track of alerts that fire in a system with CDS; understanding that in a system without CDS those alerts will not be available, we can get an upper bound for preventable ADEs. It is hard to define what is meant by a "preventable ADE." Some idiosyncratic reactions are not preventable, and it is impossible to predict who will get what reaction.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 43, for a detailed definition and evaluation method for this measure: Infoway Report

Measure: Inpatient mortality
Quality Domain(s): Patient Safety; Effectiveness
Data Source(s): Medical records; billing data; discharge summaries; coroner's office records; chart review; EMR; data repository (administrative)
Notes: Need to risk-adjust. Need to distinguish between people who die in the ED and true inpatient mortality.
Potential Risks: May be very difficult to find statistically significant differences in mortality rates, since death rates tend to be relatively low.
Links: http://content.nejm.org/cgi/content/abstract/317/26/1674 and http://www.thedeltagroup.com/assets/PDF/Publications/RiskWhitePaper.pdf?phpMyAdmin=nfZdFeMdPJ2KIC1b%2C3lUEJeCkO7

Measure: Hospital complication rates
Quality Domain(s): Patient Safety
Data Source(s): Some can be obtained from ICD-9 codes, although a chart review sample is preferable; some measures may already be collected for external reporting purposes (i.e., quality and HEDIS data); instrumenting the EMR to automatically detect and keep count of key terms related to complication rates; chart review; EMR; check with your facility's quality assurance team
Notes: Common targets: nosocomial infections; PE/DVT (post-op, or if it develops in the hospital in a patient without external risk factors such as cancer, hypercoagulable state, and so on); falls; pressure ulcers; catheter-related infections; post-op infections; operative organ/vessel/nerve injury; post-op MI; post-op respiratory distress; post-op shock; pneumothorax; intracranial hemorrhage
Potential Risks: Watch out for documentation effect (e.g., falls may become more reliably documented because the measure makes it easier to document falls). Need to make sure that the event is really a complication and not a predictable outcome of the patient's intrinsic disease process: for example, a pneumothorax in a patient who has bullous emphysema is not a hospital complication, but a pneumothorax in a patient who just had a thoracentesis done is a hospital complication.
Links: http://www.thedeltagroup.com/assets/PDF/Publications/RiskWhitePaper.pdf?phpMyAdmin=nfZdFeMdPJ2KIC1b%2C3lUEJeCkO7

Measure: Length of stay
Quality Domain(s): Patient Safety; Efficiency
Data Source(s): Medical records, especially discharge summaries; billing data; hospital quality measures data (HEDIS, and so on); chart review; data repository (administrative); check on data being collected by your facility's quality assurance team
Notes: Need to adjust for disease severity and diagnosis. Consider external issues (e.g., financial pressures to discharge patients early, other concurrent QI programs, and so on).
Links: See the NRC's Health IT Evaluation Measure Briefing Sheet: "Length of Stay"
Measure: Readmission rates after discharge
Quality Domain(s): Patient Safety; Effectiveness; Efficiency; Patient Centeredness
Data Source(s): Medical records; billing data; ED visit histories; discharge summaries; chart review; data repository (administrative); check on data being collected by your facility's quality assurance team
Notes: Need to define the time period for the readmission. For many organizations, this standard is 7 days and/or 30 days after inpatient discharge.
Potential Risks: Need to adjust for changes in patient diagnosis mix over time. Need to consider the reason for readmission and correlate it with a previous diagnosis – i.e., whether it is a complication of or inadequate treatment of a previous diagnosis. This is quite difficult. For example, consider the following scenario: a patient is admitted for workup of a new tumor and has a biopsy and diagnosis made. The patient is then discharged and readmitted a week later for initiation of chemotherapy. This has no bearing on patient safety, efficiency, and so on; it is a planned admission.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 85, for detailed definition and evaluation method for this measure: Infoway Report
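As one illustration of mining administrative data for this measure, the Python sketch below flags 30-day readmissions from a hypothetical admissions extract (the file and column names are assumptions). Note that it deliberately ignores the planned-readmission problem described above, which raw log data cannot resolve without chart review or diagnosis-code logic.

```python
# Hypothetical extract: one row per admission, with patient_id,
# admit_date, and discharge_date columns.
import pandas as pd

adm = pd.read_csv("admissions.csv", parse_dates=["admit_date", "discharge_date"])
adm = adm.sort_values(["patient_id", "admit_date"])

# Days from each discharge to the same patient's next admission.
adm["next_admit"] = adm.groupby("patient_id")["admit_date"].shift(-1)
adm["days_to_readmit"] = (adm["next_admit"] - adm["discharge_date"]).dt.days
adm["readmit_30d"] = adm["days_to_readmit"].le(30)  # NaN counts as False

print(f"30-day readmission rate: {adm['readmit_30d'].mean():.1%}")
```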
Measure: Inpatient admission rates/ED visits for populations with chronic diseases
Quality Domain(s): Patient Safety; Effectiveness; Efficiency; Patient Centeredness
Data Source(s): Medical records; billing data; patient registries; ER visit data; chart review; data repository (administrative); check on data being collected by your facility's quality assurance team
Notes: Common targets: CHF, asthma, DM, ESRD, CAD, COPD
Potential Risks: Watch out for secular trends (e.g., change in admission criteria). Be mindful that chronic diseases invariably require extra ED visits, not because of primary care but because these diseases invariably will have symptoms that require clinical attention beyond current primary care settings.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 88, for detailed definition and evaluation method for this measure: Infoway Report

Table 2: Clinical process measures

Measure: Potential adverse drug events ("near misses")
Quality Domain(s): Patient Safety
Data Source(s): Chart review; prescription review; direct observations; may also consider patient phone interviews; instrumenting EMRs; expert review
Notes: Errors can be divided by stage of medication use: ordering, transcribing, dispensing, administering, monitoring. Can be assessed in both inpatient and outpatient settings.
Potential Risks: Chart reviews do not capture all errors (especially dispensing and administration errors). Therefore evaluators may need to conduct patient interviews to back up chart reviews, especially in the outpatient setting, as documentation of adverse events in the ambulatory setting typically is not very reliable.

Measure: Medication errors
Quality Domain(s): Patient Safety
Data Source(s): Chart review; prescription review; direct observations; may also consider patient phone interviews; instrumenting EMRs; expert review
Potential Risks: Chart reviews do not capture all errors (especially dispensing and administration errors). Therefore evaluators may need to conduct patient interviews to back up chart reviews, especially in the outpatient setting, as documentation of adverse events in the ambulatory setting typically is not very reliable.

Measure: Number of pharmacist interventions per medication order
Quality Domain(s): Patient Safety; Efficiency
Data Source(s): Pharmacy intervention logs; EMR verbal orders for providers
Notes: If you have CDS with ePrescribing, you can reduce the number of pharmacy interventions. A pre-post design would be appropriate.
Potential Risks: Might change the threshold for pharmacy intervention. For example, if a pharmacist assumes a system is catching a particular type of error, that pharmacist may not look as hard for those errors.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 51, for detailed definition and evaluation method for this measure: Infoway Report

Measure: Percentage of verbal orders
Quality Domain(s): Patient Safety
Data Source(s): Medical records; pharmacy records; EMR data
Notes: Health IT will likely not change this significantly, unless corollary orders are addressed; in this case you should test corollary orders specifically and not the number of verbal orders.
Potential Risks: Evaluation, particularly for a pre-implementation baseline, will depend on whether orders are documented clearly as verbal orders in the medical or pharmacy record. Any manual chart review is resource intensive in terms of space, time, and costs.
Links: See the NRC's Health IT Evaluation Measure Briefing Sheet: "Percentage of Verbal Orders"

Measure: Time to complete cosignature of verbal orders
Quality Domain(s): Patient Safety; Efficiency
Data Source(s): Medical records
Notes: Check reliability of time measurements on paper records.
Potential Risks: Time-to-cosignature should not be a surrogate for order completion. Some systems may allow providers to cosign orders months to years after they were ordered and potentially completed.

Measure: Chronic disease management targets
Quality Domain(s): Effectiveness; Patient Centeredness
Data Source(s): Electronic data repository (if available); chart reviews; chronic disease registries; EMR data
Notes: DM: A1c within goals, LDL within goals, annual foot exam, annual nephropathy screening, annual ophthalmological exam. HTN: percent of patients controlled, medication use within guidelines. Depression: appropriate monitoring after starting an SSRI. ESRD/chronic kidney diseases: care consistent with K-DOQI guidelines. CAD: aspirin use, beta-blocker use, smoking cessation counseling. CHF: ACE inhibitor use, appropriate beta-blocker use. Asthma: smoking cessation counseling. Childhood ADHD. Childhood obesity.
Potential Risks: Check for documentation effect of measure (e.g., smoking cessation might be better documented than before even though it is not more commonly performed). Check for inaccuracies in problem and/or medication lists. A common issue with problem lists is that they are seldom up to date, even if a problem was resolved a long time ago. Therefore, be very careful to make sure a problem is "current" before assuming a target was not met. For example, a woman who had pregnancy-induced diabetes is not diabetic now that she has had her baby. Thus, checking A1c values in these patients regularly is not indicated and can be misconstrued as suboptimal care.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 88, for detailed definition and evaluation method for this measure; includes measures for asthma, diabetes, heart failure, and hypertension: Infoway Report. Also look at HEDIS measures: http://www.ncqa.org/tabid/784/Default.aspx
HEDIS measures: Need to assess and monitor quality of data used to trigger the alerts and reminders. See the NRC’s Health IT Evaluation Measure Briefing Sheet: “Percentage of Alerts or Reminders That Resulted in Desired Action” http://www.ncqa.org/t abid/784/Default.asp x Counseling (e.g., smoking cessation) Appropriate actions/usage: Patient Safety • Percent of alerts or reminders that resulted in desired action • Percent of tests ordered inappropriately (for target tests) • Percent of blood products used appropriately Effectiveness Electronic data repository CPOE usage logs Medical records Chart reviews Best to let the alerts trigger equally for both the intervention and control groups, and then prevent the alerts from being displayed to users in the control group. By doing this, you can track opportunities to carry out the desired action equally between the intervention and control groups. What you should look for is documentation of exceptions, i.e., why an alert was not acted on? However, this is exceedingly hard to do. Be very careful of how you are defining appropriate and inappropriate actions. For example, what is meant by an “inappropriately ordered test?” There are no accepted definitions of this. In different settings, patient circumstances and diagnoses, an otherwise inappropriately ordered test may be appropriate to order. The same thing applies with percent of alerts that result in desired action. Clinician judgment supersedes all computer alerts. Health Information Technology Evaluation Toolkit: 2009 Update 27 Me a s u re Documentation of key clinical data elements Qu a lity Do m a in (s ) Patient Safety Quality of Care Da ta S o u rc e (s ) Likely will need chart reviews for paperrecords group. No te s Examples include: • Allergy on admission • Follow-up plan on discharge • Care plan for next phase of care • Complete pre- and postadmission medication list P o te n tia l Ris ks Lin ks May need to look in different places to get this, for example, paper charts versus EMRs. Some practices may enter orders online but hand-write a note in the paper chart. See Canada Health Infoway’s Benefits Evaluation Indicators Technical Report, page 37, for detailed definition and evaluation method for this measure for medication information only: Infoway Report Should also assess clinician perception of data quality. Medical chart/patient medication agreement Patient Safety Patient Centeredness Compare EMR data with patient report Compare EMR data with PHR data Need to understand how patients manage medications via PHR – request refills, or report side effects. Need to understand what features of “patient portals” are useful – medication refills, documenting side effects, setting up appointments, and so on Health Information Technology Evaluation Toolkit: 2009 Update Be careful here: accessing clinical data does not imply that the patient “understands” what is meant by it. There are many examples of slightly abnormal tests that clinicians would not pay attention to, while patients may jump to incorrect conclusions about them. 28 Ta b le 3: P ro vid e r Ad o p tio n a n d Attitu d e s Me a s u re s Me a s u re Percent of orders entered by authorized providers on CPOE Qu a lity Do m a in (s ) Patient Safety Da ta S o u rc e (s ) No te s CPOE usage logs (including laboratory and radiology orders) Pharmacy logs P o te n tia l Ris ks Lin ks This can get complicated because a physician may not be the one entering orders – it may be a nurse or a clerk. 
If the order the physician called in does not match the computer understood order exactly, errors may occur. See the NRC’s Health IT Evaluation Measure Briefing Sheet: “Percentage of Orders Entered by Authorized Providers using CPOE” Correlate with “verbal orders” and also look for discrepancies between orders “called in” and the actual order entered into a system. Frequency of order set use Efficiency CPOE usage logs Patient Safety Order system logs Effectiveness Percent of outpatient prescriptions generated electronically Patient Safety EMR data Effectiveness Chart reviews Percent of notes online Patient Safety EMR data Would be helpful to present data in context of how many times order sets could have been used in the same period (e.g., number of patients admitted with CHF). Order sets may not be electronic. In many hospitals, order sets are PDF files printed on paper. The clinician may check off the orders and a clerk enters them into a computer. Therefore, tracking them from the EMR data alone would be difficult. Could do a pre-post study and estimate this by querying the pharmacist. Electronic prescriptions would be typed out. Getting the denominator will require chart review. Chart reviews Health Information Technology Evaluation Toolkit: 2009 Update Getting the denominator may require chart review. 29 Me a s u re Percent of practices or patient units that have gone paperless Qu a lity Do m a in (s ) Efficiency Da ta S o u rc e (s ) No te s EMR usage logs P o te n tia l Ris ks Lin ks Likely a gradual progress that takes many months, if not years. Training logs The term “paperless” is hard to define. No one is ever “totally paperless” – you have to have very clear guidelines for what you mean by paperless. For example, paperless may mean: • Use of CPOE for all orders • Use of ePrescriptions • Use of electronic notes Percent of physicians and nurses who have undergone voluntary training for target IT intervention N/A Training logs Use of help desk N/A Help desk logs If training is mandatory, the percentages are not reflective of attitude or willingness to adopt. May be confounded by quality of up-front training, continued support, or usability of application. Also may be confounded by the training level of the user: the novice user will require more support, while someone with more experience with technology may solve many problems on their own. Health Information Technology Evaluation Toolkit: 2009 Update 30 Me a s u re Time to resolution of reported problems Qu a lity Do m a in (s ) N/A Da ta S o u rc e (s ) No te s Help desk logs P o te n tia l Ris ks Lin ks May be confounded by nature of reported problems. You have to adjust for reported problem types and the time it takes to solve them – some can be fixed quickly, while others are system wide issues that may take years to resolve. Provider satisfaction towards specific interventions N/A Satisfaction surveys and interviews that assess: Difficult to achieve good response rates from physicians. Ease of use Creating satisfactions surveys is not easy and takes time. Usefulness Impact on quality and time savings Suggestions for improvement Provider satisfaction towards own job N/A Direct surveys (human resources may administer already) Consider using an existing survey. Review existing surveys using the Health IT Survey Compendium on the AHRQ Health IT Web site. Many potential confounders. 
Measure: Turnover of staff
Quality Domain(s): N/A
Data Source(s): Human resources logs
Potential Risks: Many potential confounders.

Measure: EHR adoption
Quality Domain(s): Patient Safety; Efficiency
Data Source(s): Provider surveys; focus groups
Notes: Many surveys of EHR adoption exist; you may wish to use one. It may be helpful to correlate patient clinical outcomes with adoption of the measure, either at the physician or practice unit level. Need to collect baseline data for comparison.
Potential Risks: Need to be careful to document reasons for, and against, adopting; there may be very legitimate reasons for failure to adopt.
Links: Consider using an existing survey; review existing surveys using the Health IT Survey Compendium on the AHRQ Health IT Web site.

Table 4: Patient Adoption, Knowledge, and Attitudes Measures

Measure: Patient knowledge
Quality Domain(s): Patient Centeredness
Data Source(s): Patient surveys and interviews; patient focus groups
Notes: Includes knowledge of own medications (regimen, indications, potential side effects) and other prescribed care; knowledge of own health maintenance schedules; knowledge of own medical history; knowledge of own family's medical history; comfort level; barriers and facilitators for use.
Potential Risks: It is important to do iterative cognitive testing and piloting of surveys developed internally. Methodologies leading to good survey response rates may be expensive.
Links: Consider using an existing survey; review existing surveys using the Health IT Survey Compendium on the AHRQ Health IT Web site.

Measure: Patient attitudes
Quality Domain(s): Patient Centeredness
Data Source(s): Patient surveys; patient interviews; focus groups and other qualitative methodologies
Potential Risks: On-line surveys might lower cost but may bias results, because on-line patients may differ from the general population.
Links: Consider using an existing survey; review existing surveys using the Health IT Survey Compendium on the AHRQ Health IT Web site.

Measure: Patient satisfaction
Quality Domain(s): Patient Centeredness
Data Source(s): External surveys (CAHPS, commercial); internally developed surveys
Notes: May be able to add customized questions to standard surveys such as the Consumer Assessment of Healthcare Providers and Systems (CAHPS®).
Links: Consider using an existing survey; review existing surveys using the Health IT Survey Compendium on the AHRQ Health IT Web site.

Measure: Patient use of secure messaging
Quality Domain(s): Patient Centeredness
Data Source(s): Patient surveys; focus groups; logs of EMR/PHR systems and RHIOs
Notes: Need to understand how messages are communicated to providers, for example via an EMR or PHR.
Links: Consider using an existing survey; review existing surveys using the Health IT Survey Compendium on the AHRQ Health IT Web site.
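The survey-based measures above ultimately reduce to response rates and score summaries. A minimal sketch of that arithmetic for a single 5-point item; the file name, column name, item coding, and sampling-frame size are invented, and surveys from the AHRQ Health IT Survey Compendium would define their own scales:

```python
# Sketch: summarizing a hypothetical 5-point Likert satisfaction item.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical export
item = "q_satisfaction"  # assumed coding: 1 = strongly disagree ... 5 = strongly agree

n_invited = 420          # assumed size of the sampling frame
answered = responses[item].dropna()
print(f"response rate: {len(answered) / n_invited:.1%}")
print(f"mean score:    {answered.mean():.2f}")
# A "top-box" share (agree/strongly agree) is often easier for
# stakeholders to interpret than a mean of ordinal codes.
print(f"top-box (4-5): {(answered >= 4).mean():.1%}")
```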
Measure: Patient utilization of the PHR portal
Quality Domain(s): Patient Centeredness
Data Source(s): Portal and PHR logs; focus groups; surveys
Notes: Would be helpful to identify which "functions" of the PHR are being utilized. Need to consider differences between true PHR functions and those that are just "patient portals."
Potential Risks: Looking at raw numbers may not give the type of information you are interested in; collecting data on the numbers of new users versus recurring users may be more informative.
Links: Consider using an existing survey; review existing surveys using the Health IT Survey Compendium on the AHRQ Health IT Web site.

Measure: Patient utilization of functions within a PHR
Quality Domain(s): Patient Centeredness
Data Source(s): Portal and PHR logs; focus groups; surveys
Notes: Would be useful to keep track of which functions patients are using or looking at.
Links: Consider using an existing survey; review existing surveys using the Health IT Survey Compendium on the AHRQ Health IT Web site.

Measure: Patient compliance with medications
Quality Domain(s): Patient Centeredness
Data Source(s): Pharmacy and billing logs (number of medications prescribed and number of medications dispensed or refills requested); focus groups; surveys
Potential Risks: Just because a medication is documented does not mean it has been taken, or taken correctly. Patients often take their medications in ways not authorized by their providers. Therefore, if you are looking for effects of "proper" medication reconciliation on quality and safety outcomes, make sure you question whether medications are being taken properly.
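Refill-based proxies are one common way to quantify the medication-compliance measure above from the pharmacy logs it names. The medication possession ratio (MPR) sketched here is a standard adherence proxy but is not something the toolkit itself prescribes, and the fill data are fabricated:

```python
# Sketch: medication possession ratio (MPR) from dispensing records for
# one patient and one drug. Dates and days supplied are made up.
from datetime import date

fills = [  # (fill_date, days_supplied)
    (date(2009, 1, 5), 30),
    (date(2009, 2, 9), 30),
    (date(2009, 3, 20), 30),
]

period_start, period_end = date(2009, 1, 5), date(2009, 4, 5)
days_in_period = (period_end - period_start).days
days_covered = sum(days for _, days in fills)

mpr = min(days_covered / days_in_period, 1.0)  # cap early refills at 100%
print(f"MPR = {mpr:.2f}")  # values below ~0.8 are often flagged as poor adherence
```

As the risks column warns, even a perfect MPR only shows that medication was dispensed, not that it was taken, or taken correctly.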
Table 5: Workflow Impact Measures

Measure: Time measures: time spent per patient; time spent placing orders
Quality Domain(s): Efficiency
Data Source(s): Time-motion studies (PDA and tablet programs are available from the National Resource Center); instrumenting the EMR to automatically capture these times
Notes: Should focus on measuring time spent on activities that may be affected.
Potential Risks: Observers need to understand basic clinician workflow and be familiar with the applications. Be careful with usage logs, since usage logs typically do not capture interruptions while users interact with the system. You may need to adjust for patient care unit, severity of illness, time of day, or patient volume to account for possible confounding.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 48, for a detailed definition and evaluation method for time spent per patient: Infoway Report

Measure: Medication turnaround time
Quality Domain(s): Efficiency
Data Source(s): Time-motion studies (PDA and tablet programs are available from the National Resource Center)
Notes: You also need to consider the type of medication order placed (routine versus stat versus recurring) and stratify your results by these categories. For example, a medication administered on a recurring basis may have an order placed several days earlier; if this is not considered, there will appear to be a long interval between time of order and time of administration that is not due to a delay.
Potential Risks: Confounding based on type of order. If conducting a time-motion study, observers need to understand basic provider workflow and processes, as well as be familiar with the technology being used.
Links: See the NRC's Health IT Evaluation Measure Briefing Sheet: "Medication Turnaround Time in the Inpatient Setting"

Measure: Percentage of orders or prescriptions that require a pharmacy callback
Quality Domain(s): Efficiency
Data Source(s): Pharmacy logs
Potential Risks: Observers need to understand the difference between a "callback episode" and a single callback. A callback episode involves back-and-forth vetting in which multiple callbacks occur.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 54, for a detailed definition and evaluation method for this measure: Infoway Report

Measure: Patient throughput
Quality Domain(s): Efficiency
Data Source(s): Billing and administrative data
Notes: Could be patient volume in the ED, hospital, or practice, or OR turnover.
Potential Risks: Concurrent interventions may have an effect.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 92, for a detailed definition and evaluation method for this measure: Infoway Report

Measure: Patient wait time in ED
Quality Domain(s): Efficiency
Data Source(s): ED administrative data
Notes: This may already be captured in many ED settings, so you may be able to measure it with minimal effort.
Potential Risks: Confounded by many other factors (e.g., patient volume or demand).
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report, page 92, for a detailed definition and evaluation method for this measure: Infoway Report

Measure: End users' job tasks or workflow
Quality Domain(s): Efficiency; Patient Centeredness
Data Source(s): Process redesign templates; time-motion studies (PDA and tablet programs are available from the National Resource Center)
Notes: Should have a preliminary phase in which all workflow stages are documented. Need to create taxonomies of workflows and time each one.
Potential Risks: Observers need to understand end users' workflow and be trained on workflow documentation.

Measure: Nurses' time spent on direct patient care
Quality Domain(s): Efficiency
Data Source(s): Time and date information from a direct observation study (e.g., time-motion study or work sampling); time-motion studies (PDA and tablet programs are available from the National Resource Center)
Notes: Extensive work to categorize nurse tasks in inpatient settings has already been conducted and developed into a time-motion observation instrument, publicly available on the NRC Health IT Web site.
Potential Risks: Observers need to understand basic nursing workflow and processes in the setting of implementation, as well as be familiar with the technology being used.

Measure: Documentation time
Quality Domain(s): Efficiency
Data Source(s): Usage logs; time-motion studies
Notes: Could configure the EMR to record when a user enters and leaves a "note" field to estimate documentation time.
Potential Risks: Need trained observers to record when documentation happens and whether it occurs as a continuous activity or intermittently.
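Once order and administration events are timestamped (for example, by instrumenting the EMR as suggested above), medication turnaround time can be computed directly, with the stratification by order type that the table warns about. A minimal sketch over hypothetical extracts; all file and column names, including the priority field, are assumptions:

```python
# Sketch: medication turnaround (order placed -> first dose given),
# stratified by order priority (e.g., stat vs. routine vs. recurring).
import pandas as pd

orders = pd.read_csv("med_orders.csv", parse_dates=["ordered_at"])
admins = pd.read_csv("med_admins.csv", parse_dates=["administered_at"])

# Keep only the first administration per order; recurring doses given days
# later would otherwise look like enormous delays.
first_admin = (admins.sort_values("administered_at")
                     .drop_duplicates("order_id"))

merged = orders.merge(first_admin, on="order_id", how="inner")
merged["turnaround_min"] = (
    (merged["administered_at"] - merged["ordered_at"]).dt.total_seconds() / 60
)

print(merged.groupby("priority")["turnaround_min"]
            .agg(n="count", median="median"))
```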
Measure: Compliance rate for outpatient follow-up appointments (for all outpatients in a practice, or for specific conditions or diagnoses where there is continued treatment and maintenance)
Quality Domain(s): Patient Centeredness; Effectiveness
Data Source(s): Registration system logs
Notes: This measure gives a sense of how well patients comply with scheduled or recommended follow-up appointments within the recommended timeframe. It can be affected by health IT through patient reminders and clinical alerts for follow-up appointments, and it can help monitor patient care utilization, such as whether compliance with follow-up appointments reduces hospitalizations and ED visits. Compliance by specific condition or diagnosis (e.g., follow-up post-natal visit after delivery) is usually based on guidelines or protocols for continued care that specify the number of, and timeline for, follow-up visits.
Potential Risks: Unavoidable missed appointments, such as provider-cancelled appointments, hospitalizations, or care provided in other settings, should be excluded from this measure. If possible, document the "reason" for a missed appointment, which can be challenging because there can be many potential reasons.

Measure: Prescribing patterns of preferred or formulary medications
Quality Domain(s): Efficiency
Data Source(s): E-prescribing and CPOE logs
Notes: You may want to consider the patient as the unit of analysis, since the same physician may see a mix of patients supported by a myriad of payers, and the formulary for each payer will be different. Another way to handle this is to consider each patient's preferred formulary, based on their payer, when analyzing the data.
Links: See the NRC's Health IT Evaluation Measure Briefing Sheet: "Prescribing Patterns of Preferred or Formulary Medications"

Table 6: Financial Impact Measures

Measure: Percent claims denials
Quality Domain(s): Efficiency (only from the providers' perspective)
Data Source(s): Billing data
Notes: Could measure this pre-post when implementing a CPOE system; without a CPOE system this is likely not going to change.
Potential Risks: Watch for secular trends as payer policies change while you roll out a CPOE system over several years.

Measure: "P4P" (pay-for-performance) increments from payers
Quality Domain(s): N/A
Data Source(s): Billing and administrative data
Potential Risks: Difficult to measure; you have to account for things like inflation and cost-of-care increases. Likely slow to react to interventions.

Measure: Utilization: prescribing patterns of cost-effective drugs; duplicate testing; radiology utilization
Quality Domain(s): Efficiency
Data Source(s): Billing and administrative data
Potential Risks: You have to define what is meant by a duplicate test; in many cases repeat testing is necessary as the standard of care. May not be easy to capture, especially if clinical information is on paper. Evaluators will need to adjust for drug categories to account for possible confounding, since different categories of drugs may differ significantly in cost. Cost data are often very difficult to analyze properly and may need expert analysis for proper interpretation.
Links: See Canada Health Infoway's Benefits Evaluation Indicators Technical Report for a detailed definition and evaluation method for this measure; for laboratory testing, see page 68, and for radiology, see page 32: Infoway Report

Measure: Prescribing patterns of cost-effective drugs
Quality Domain(s): Efficiency
Data Source(s): Pharmacy claims or billing data
Notes: Consider only those prescriptions that were ordered using the health IT. Verbal orders would not be affected by CDSS applications, and their inclusion in the analysis would therefore dilute the measured impact.
Potential Risks: Cost data are often very difficult to analyze properly and may need expert analysis for proper interpretation. If all formularies from all available insurance carriers have not been integrated, a provider may end up prescribing a drug that is higher cost from the insurer's perspective, inadvertently reducing the impact of the application.
Links: See the NRC's Health IT Evaluation Measure Briefing Sheet: "Prescribing Patterns of Cost-Effective Drugs"
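For "percent claims denials," the pre-post comparison suggested above reduces to a 2x2 table of denied versus paid claims in each period. A minimal sketch with fabricated counts; a real analysis should also watch for the payer-policy changes flagged in the risks column:

```python
# Sketch: pre-post claims denial rates compared with a chi-squared test.
# All counts are made up for illustration.
from scipy.stats import chi2_contingency

#       denied, paid
pre  = [412, 9_588]   # hypothetical pre-implementation claims
post = [301, 9_699]   # hypothetical post-implementation claims

chi2, p, dof, expected = chi2_contingency([pre, post])
print(f"pre denial rate:  {pre[0] / sum(pre):.1%}")
print(f"post denial rate: {post[0] / sum(post):.1%}")
print(f"chi2 = {chi2:.1f}, p = {p:.4g}")
```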
Measure: Cost of maintaining paper medical records
Quality Domain(s): Efficiency
Data Source(s): Administrative data from medical records
Notes: Measure the cost of pulling charts, medical records office costs, and so on. This cost is the sum of the costs of FTEs for medical records, and so on.

Measure: Forms costs
Quality Domain(s): Efficiency
Data Source(s): Administrative data
Notes: The cost of paper forms is what is being addressed here.
Potential Risks: Likely to be overwhelmed by other cost savings. EMRs may not reduce paper forms: in some settings a CPOE system only allows providers to enter orders, which are then taken "out of the system" by a clerk and "filled in on a paper-based form."

Measure: Staffing costs: nursing; pharmacy; physician
Quality Domain(s): Efficiency
Data Source(s): Billing and administrative data
Potential Risks: Have to relate these specifically to your health IT implementation; many concurrent initiatives might confound this measure. Not very elastic.

Measure: FTE measures: training physicians; supporting applications; managing medical knowledge (rules, order sets); subject matter experts
Quality Domain(s): Efficiency
Data Source(s): Training logs; IS administrative data
Notes: Realize that any health IT implementation incurs additional maintenance costs that would not exist if there were no health IT system in place.
Potential Risks: May be influenced by the quality of the vendor or the tools the vendor provides. May also be influenced by the resources at your disposal and your funding for the implementation process.

Measure: Risk reduction measures: CMS fines for readmission
Quality Domain(s): Patient Safety; Efficiency
Data Source(s): Billing and administrative data
Potential Risks: Very hard to define what is meant by "readmission." For example, in many cases a readmission may be the result of the natural history of a disease and not attributable to the health IT system.

Measure: Financial indicators: accounts receivable; HARA measures
Quality Domain(s): N/A
Data Source(s): Financial accounting systems
Notes: The Hospital Accounts Receivable Analysis (HARA) is a published synopsis of statistical data related to hospital receivables. Improved billing compliance and reduced claims denials may improve the accounts receivable on the balance sheet.

Section III: Examples of Projects

The following section contains examples of implementation projects with suggested evaluation methodologies for each. They include two barcode medication implementation projects, a telemedicine project, a computerized provider order entry (CPOE) implementation, and a picture archiving and communication systems (PACS) project.
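Several of the examples that follow plan pre-post comparisons of proportions (scan rates, error rates, denial rates). Before committing to such a design, it can help to check roughly how many observations each period must contribute. A minimal sketch using the standard two-proportion sample-size approximation; the baseline and target rates are illustrative, not from the toolkit:

```python
# Sketch: approximate sample size per group for detecting a change in a
# proportion, using the usual normal-approximation formula.
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group to detect a change from p1 to p2 (two-sided)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# e.g., hoping to see an error-like event rate fall from 5% to 3%
print(n_per_group(0.05, 0.03))  # observations needed in each period
```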
Example 1: Pharmacy Project

Briefly describe the intervention.
The inpatient pharmacy of a 735-bed tertiary care hospital is converting to a barcode-assisted medication dispensing and distribution system. All medications that do not have a barcode at the unit-dose level will be repackaged. All medications dispensed will be verified by barcode, with scanning prior to dispensing to the unit.

Describe the expected impact of the intervention and briefly describe how you think your project will exert this impact.
1. Pharmacy staff will barcode scan all medications before the doses leave the pharmacy, because this will be made mandatory after extensive educational efforts.
2. Dispensing errors will decrease, since these errors will be caught during the dispensing process.
3. Medications will be available more often when nurses need them, due to the increased efficiency of the new distribution system, and resources will be better targeted toward medications that need to be filled quickly.
4. Staffing level at the pharmacy will not be affected, because there is no extra budget for staff.
5. There will be resistance from the pharmacy staff in the first 3 months, but this resistance will be overcome when they see the benefits of the system.

What questions do you want to ask to evaluate this impact? These will likely reflect the expected impact (either positive or negative) of your intervention.
1. Are medication doses scanned during dispensing? Are the scans bypassed or manually overridden during scanning?
2. Will the various types of dispensing errors decrease with the implementation of the system?
3. How do nurses feel about the timeliness of medication delivery?
4. How has the staffing level changed with the implementation of the new system?
5. What are the barriers to barcode implementation in the pharmacy, and how can these barriers be overcome?

What will you measure in order to answer your questions?
1. (a) The proportion of medication doses approved by the pharmacist for dispensing that are scanned prior to delivery. (b) The proportion of scans that are entered manually, or bypassed because the pharmacy technician stated that the "barcode was not available" or the "barcode would not scan."
2. The proportion of medications leaving the pharmacy containing errors: wrong medication, wrong dose, wrong strength or form, wrong quantity, or a safety violation.
3. Nursing satisfaction with the availability of medications when needed.
4. Pharmacy technician and pharmacist staffing levels.
5. Qualitative assessment of barriers and facilitators.

How will you make your measurements?
1. (a) Denominator: the number of medication doses (by medication type) approved for dispensing by pharmacists; numerator: the medication doses (by medication type) logged into the system as scanned within a 1-week period. (b) Denominator: the number of doses scanned; numerator: the number of overrides within a 1-week period.
2. Have a pharmacist visually inspect 200 medication doses prior to delivery once a week and log all errors by type.
3. Nursing satisfaction survey: ask nurses, on a Likert scale, how much they agree with the statement "Medications are available in the units when my patients are due for them."
4. Pharmacy payroll.
5. Implementation teams will review and document issues and lessons learned once a month.

How will you design your study? What comparison group will you use?
1. Trend measurement starting at go-live for 1 year, to compare use over time.
2. Measure before go-live, and then at regular intervals after go-live.
3. Measure pre-implementation and then six months after go-live.
4. Before-and-after comparison.
5. Iterative review of notes.

For quantitative measures only: What types of statistical analysis will you perform on your measurements?
1. Compare the difference in proportions across two time points with a chi-squared test; graph trends.
2. Compare error rates pre- and post-implementation with a chi-squared test; graph the error rate.
3. T-test comparing pre- and post-implementation satisfaction levels.
4. Compare expenditures on payroll before and after implementation, adjusting for inflation; compare the number of technician and pharmacist FTEs pre- and post-implementation.
5. N/A
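The first measure's weekly numerator and denominator can be rolled up mechanically once the dispensing system logs one row per approved dose. A minimal sketch of the trend measurement described above, assuming a hypothetical extract; the file and column names are invented:

```python
# Sketch: weekly scan-rate trend for doses approved for dispensing,
# assuming a boolean scanned_before_delivery flag per dose.
import pandas as pd

doses = pd.read_csv("dispensing_log.csv", parse_dates=["approved_at"])

weekly = (doses.set_index("approved_at")
               .resample("W")["scanned_before_delivery"]
               .agg(["count", "sum"])
               .rename(columns={"count": "approved", "sum": "scanned"}))
weekly["scan_rate"] = weekly["scanned"] / weekly["approved"]
print(weekly)  # graph this trend for the first year after go-live
```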
How would the answers to your questions change future decision-making and/or implementation?
1. Will help identify workarounds, and will help define the length of time needed to overcome resistance (may correlate with Impact 5).
2. Define the safety value of this system; estimate the number of adverse events avoided.
3. Understand the impact of this technology on overall hospital efficiency and non-pharmacy staff satisfaction.
4. Understand the financial impact of this technology on the pharmacy budget.
5. Lessons learned will make implementation easier for the next hospital.

Example 2: Barcoding Nursing Evaluation

Briefly describe the intervention.
A 735-bed tertiary care hospital is converting to a barcode medication administration (BCMA) system. The paper medication administration record (MAR) will be eliminated and driven electronically by pharmacy-approved physician orders. Each nurse will be given a laptop running a medication administration application that helps manage the medications his or her patients are due for. Before medications are given to patients, the patient's barcoded wristband, the medication, and the nurse's ID badge will be scanned to ensure the "five rights."

Describe the expected impact of the intervention and briefly describe how you think your project will exert this impact.
1. Nursing staff will barcode scan all medications before doses are administered to the patient, because there will be extensive training before, during, and after implementation, and use of barcode scanning will become part of the new nursing policy.
2. Use of barcode scanning will catch a significant number of errors ("near misses").
3. Medication transcribing errors will be eliminated.
4. Medication administration errors will decrease.
5. Nursing efficiency will not be adversely affected.
6. Nursing satisfaction will remain stable after implementation.
7. There will be resistance from the nursing staff in the first 3 months, but this resistance will be overcome once they see the benefits of the system.

What questions do you want to ask to evaluate this impact? These will likely reflect the expected impact (either positive or negative) of your intervention.
1. Are medication doses scanned during administration? Are the scans bypassed or manually overridden during scanning?
2. For units that have implemented BCMA, what kinds of alerts are generated when nurses scan medication doses? Of the alerts generated, what proportion is overridden by nurses?
3. How much does BCMA reduce the incidence of transcribing errors? Of the errors eliminated, how many are serious and have the potential to lead to adverse events?
4. To what extent do nurses feel that BCMA improves patient safety? To what extent do patients feel that BCMA improves the accurate and timely administration of medications?
5. Do nurses spend more or less time on medication administration after the introduction of BCMA?
6. How does BCMA affect nursing satisfaction with their jobs? How does BCMA affect nurse turnover?
7. What are the barriers to barcode implementation on the nursing units, and how can these barriers be overcome?
What will you measure in order to answer your questions?
1. (a) Of the doses recorded as administered in the eMAR, the proportion of medication doses scanned prior to administration. (b) Of the medications recorded in the eMAR, the proportion entered manually or bypassed because the nurse stated that the "barcode was not available" or the "barcode would not scan."
2. (a) The type and number of alerts generated during scanning. (b) Of the alerts generated during scanning, the proportion associated with a medication given (in spite of the alert) within 30 minutes of the alert.
3. (a) The number of transcribing errors on the paper MAR prior to the introduction of BCMA. (b) The proportion of transcribing errors that led to at least one erroneous medication administration.
4. (a) Nursing satisfaction with the efficacy of BCMA for patient safety. (b) Patient satisfaction with the accuracy and timeliness of medication administration.
5. Nursing attitudes toward the impact of BCMA on their workflow, asking explicitly whether BCMA has affected their time spent on medication administration (versus other nursing professional activities).
6. (a) Overall nurse satisfaction. (b) Nurse turnover rates.
7. Qualitative assessment of barriers and facilitators.

How will you make your measurements?
1. Reports from the BCMA software. (a) Denominator: the total number of medication doses recorded as administered in a 1-week period; numerator: the medication doses recorded as scanned prior to administration. (A secondary analysis would look at the proportion of due medication doses that are scanned.) (b) Denominator: the number of doses scanned; numerator: the number of overrides within a 1-week period.
2. Reports from the BCMA software, with the outcomes discussed above expressed as a proportion of all medications administered.
3. (a) Compare the paper MAR with orders approved by pharmacy to find discrepancies. (b) Review the MAR after a transcribing error occurs and before correction, looking for erroneous medication administration.
4. (a) Develop a nursing satisfaction survey. (b) Leverage the existing hospital-sponsored patient satisfaction survey to ask patients about their satisfaction with the accuracy and timeliness of medication administration.
5. Develop a nursing attitude survey and administer it 6 months and 1 year after go-live.
6. (a) Develop a nursing satisfaction survey and administer it pre-implementation and at 6 months and 1 year after go-live. (b) Human resources records for turnover.
7. Implementation teams will review issues and lessons learned once a month and document them.
How will you design your study? What comparison group will you use?
1. Trend measurement starting at go-live for 1 year, to compare use over time.
2. Trend measurement starting at go-live for 1 year, to compare errors over time.
3. Measure before go-live; assume the transcription error rate is zero after implementation of BCMA.
4. Measure pre-implementation (patient satisfaction only) and at 6 months and 1 year after go-live.
5. Trend measurement across two time points.
6. Pre-implementation versus post-implementation comparison; formal interviews with representative nurses pre-implementation and at 6 months and 1 year post-implementation.
7. Iterative review of meeting minutes.

For quantitative measurements only: What types of statistical analysis will you perform on your measurements?
1. Graph trends; compare the difference in proportions across two time points with a chi-squared test.
2. Graph trends; compare the difference in proportions across two time points with a chi-squared test.
3. Compare error rates pre- and post-implementation (assumed to be zero) with a chi-squared test.
4. Graph trends; t-test comparing pre- and post-implementation satisfaction levels across three time points.
5. Graph trends; t-test comparing satisfaction levels across two time points.
6. Graph trends; t-test comparing satisfaction levels across three time points.
7. N/A

How would the answers to your questions change future decision-making and/or implementation?
1. Will help identify workarounds; will help define the length of time needed to overcome resistance (may correlate with Impact 7).
2. Will help identify workarounds; will help define the length of time needed to overcome resistance (may correlate with Impact 7).
3. Define the safety value of this system; estimate the number of adverse events avoided through the elimination of the transcription step.
4. Understand the impact of this technology on perceived safety; help with nursing recruitment and retention; help with patient marketing.
5. Understand the perceived impact of BCMA on workflow.
6. Understand the impact of the technology on nurses' professional satisfaction; diffuse opposition to change; help with nursing recruitment and retention.
7. Lessons learned will make implementation easier for the next hospital.
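The satisfaction comparisons above call for t-tests across time points. A minimal sketch with fabricated 5-point Likert scores; Welch's variant is used here on the assumption that group variances may differ:

```python
# Sketch: comparing mean satisfaction at two time points with an
# independent-samples t-test. Scores are made up for illustration.
from scipy.stats import ttest_ind

pre  = [3, 4, 2, 3, 4, 3, 2, 4, 3, 3]   # hypothetical pre-implementation
post = [4, 4, 3, 5, 4, 3, 4, 4, 5, 3]   # hypothetical 6-month follow-up

t, p = ttest_ind(post, pre, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
# Treating ordinal Likert codes as interval data is a common but debatable
# simplification; a rank-based test is a reasonable alternative.
```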
Example 3: Telemedicine

Briefly describe the intervention.
One tertiary medical center in a small state is the primary source for all pathology referrals. Referring pathologists have indicated a number of problems with the current system of mailing slides to the tertiary site, including slow turnaround time and a general lack of confidence in the consultants' reports. To address these issues, a synchronous telepathology system will be implemented between the pathology department at the tertiary site and four rural referring pathologists.

Describe the expected impact of the intervention and briefly describe how you think your project will exert this impact.
1. Image quality, when compared to prepared slides, will be as good or better using telepathology.
2. Turnaround time between specimen collection and consultation will decrease.
3. There will be a better understanding among pathologists about the nature of the referral request.
4. Referring pathologists will gain knowledge through synchronous pathology consultation.
5. Satisfaction with the pathology consultation process will improve.

What questions do you want to ask to evaluate this impact? These will likely reflect the expected impact (either positive or negative) of your intervention.
1. What are the attributes that affect image quality?
2. What are the current turnaround times? What is the optimal turnaround time to improve patient care?
3. What are the issues behind the expressed lack of confidence in the consulting? What can be done through telepathology to address these issues?
4. Do synchronous consultations between consulting and referring providers lead to continuing education on the part of the referring providers?
5. What are the attributes of referring-provider dissatisfaction with the consultation process? Will telepathology decrease dissatisfaction?

What will you measure in order to answer your questions?
1. (a) Clarity of image. (b) Resolution as enhanced by filtering.
2. (a) Current turnaround times. (b) Turnaround times using the telepathology system. (c) Time from consultation to patient action.
3. Referring-provider confidence in the consultation.
4. Referring-provider feedback on education.
5. Provider feelings about the consultation process.

How will you make your measurements?
1. Compare digital images with slides for clarity and for resolution as enhanced by filtering.
2. (a) Prior to implementing the telepathology system, collect turnaround times for the various sites. (b) Collect automatic times of electronic consultation. (c) Solicit the time from consultation to patient action from referring providers.
3. Using structured interviews prior to implementation, ask providers why they expressed a lack of confidence in the consultations provided by the tertiary care center; re-interview the providers post-implementation.
4. Use a Likert-type survey instrument to collect expectations for learning transfer through the telepathology program, and follow up using structured interviews with both consulting and referring providers.
5. Interview all referring providers prior to implementation to determine the components of dissatisfaction, and use a Likert-type survey instrument to collect the attributes of both expectations and dissatisfaction with the two types of pathology consultations.

How will you design your study? What comparison group will you use for your measurements?
1. Use two pathology residents to review duplicative slides, commenting on both clarity and filtered resolution, with a consulting pathologist serving as the gold standard for disagreements.
2. (a) Time-to-task measurement during a random period prior to implementation. (b) Capture of time to task in the telepathology consultation, factoring in technology access time, and so on. (c) Survey of referring providers on time to patient action following the consultation, regardless of the type of pathology consultation.
3. Create structured interview questions to be administered after the pilot period.
4. Design a Likert-type survey instrument to ascertain specific learning objectives, and create structured interview questions to be administered after the pilot period.
5. Design a Likert-type survey instrument to ascertain the attributes of both expectations and dissatisfaction with the two types of pathology consultations, and follow up using structured interviews with both consulting and referring providers.

For quantitative measurements only: What types of statistical analysis will you perform on your measurements?
1. Descriptive statistics.
2. T-test comparing turnaround time before and after telepathology program implementation.
3. Analysis of interviews.
4. Analysis of interviews and comparison with data captured on the Likert-type survey instrument.
5. Analysis of interviews and comparison with data captured on the Likert-type survey instrument.

How would the answers to your questions change future decision-making and/or implementation?
1. A finding that the image quality does not meet standard comparisons will eliminate the program.
2. A lack of time improvement will result in process reengineering and reevaluation of system efficacy.
3. Provider satisfaction is the main objective of this component; if the telepathology project fails, look at workflow redesign and other ways to address the findings and mitigate dissatisfaction.
4. This is one of the projected value-added benefits of the system; negative findings will not adversely impact the project.
5. Provider satisfaction is the main objective of this component; if the telepathology project fails, look at workflow redesign and other ways to address the findings and mitigate dissatisfaction.
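Because two residents rate the same slides, it may be worth quantifying their agreement before escalating disagreements to the consulting pathologist. A sketch using Cohen's kappa, which goes beyond the descriptive statistics the example itself specifies; the rating categories and values are fabricated:

```python
# Sketch: inter-rater agreement between the two residents' image-quality
# ratings on the same duplicative slides.
from sklearn.metrics import cohen_kappa_score

resident_a = ["good", "good", "poor", "good", "fair", "poor", "good"]
resident_b = ["good", "fair", "poor", "good", "fair", "good", "good"]

kappa = cohen_kappa_score(resident_a, resident_b)
print(f"kappa = {kappa:.2f}")  # roughly: 0.4-0.6 moderate, >0.8 near-perfect
```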
Example 4: CPOE Implementation Project

Briefly describe the intervention.
Your community hospital is installing a new EMR with CPOE and CDS. You wish to evaluate the impact of the CPOE from the viewpoint of several stakeholders, including clinicians, patients, and your CFO, in order to document value for each of these stakeholders.

Describe the expected impact of the intervention and briefly describe how you think your project will exert this impact.
1. Efficiency: It will be more efficient to route orders directly to the location of service than to have a clerk take the order out of a system, make phone calls, or enter it into a separate ordering system.
2. Patient Safety: The CDS module will alert clinicians to potential interactions with other medications and to potential adverse effects in the presence of abnormal labs or a given diagnosis.
3. Quality of Care: The CDS module will allow clinicians to comply with practice guidelines better and on a timelier basis.
4. Cost Reduction: The CDS module can help reduce length of stay, allow clinicians to choose less costly medications, and reduce avoidable ED visits.
5. User Satisfaction: The CDS module will increase satisfaction (patients, clinicians, and others, e.g., ward clerks).

What questions do you want to ask to evaluate this impact? These will likely reflect the expected impact (either positive or negative) of your intervention.
1. What is the current workflow for orders, and how will the various responsibilities change with CPOE?
2. What is the current rate of medication interactions, and how are alerts being responded to by clinicians?
3. Are clinicians complying with guidelines?
4. Does CDS reduce the length of stay, reduce medication costs, and reduce avoidable ED visits?
5. How is the system being accepted by clinicians and patients?

What will you measure in order to answer your questions?
1. (a) Time spent entering orders by the clerk (pre). (b) Time spent writing orders by the clinician (pre). (c) Time spent entering orders by the clinician (post). (d) Time to action on an order (pre and post).
2. (a) Numbers and types of alerts fired. (b) Numbers and types of alerts responded to and ignored; the rate of responded-to alerts will indicate potential interactions averted. (c) Prior to implementation, conduct a screening of medication lists to see how they interact with diagnoses, labs, and other medications.
3. (a) Numbers of guidelines responded to and ignored, per visit and overall for a time period (1 year or so). (b) Reasons for noncompliance. (c) The pre-implementation rate of compliance with guidelines, from chart reviews.
4. Current length of stay, average cost of medications, and numbers of "avoidable" ED visits (medication side effects, and so on), measured pre- and post-implementation.
5. Qualitative assessment of barriers and facilitators.

How will you make your measurements?
1. Time-motion study pre- and post-implementation.
2. By instrumenting your CPOE implementation you can track these automatically; you will need to do chart reviews to measure pre-implementation.
3. By instrumenting your CPOE implementation you can track these automatically; you will need to do chart reviews to measure pre-implementation.
4. Data analysis and chart reviews pre- and post-implementation.
5. Implementation teams will review and document issues and lessons learned periodically.
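The cost questions above will eventually be tested pre-post. The example itself specifies a t-test; length-of-stay data are typically right-skewed, so a rank-based alternative is sketched here as one cautious option rather than the toolkit's prescribed method, with fabricated day counts:

```python
# Sketch: pre-post length-of-stay comparison with a Mann-Whitney U test,
# a nonparametric alternative for skewed LOS distributions.
from scipy.stats import mannwhitneyu

los_pre  = [3, 5, 4, 8, 2, 6, 12, 4, 5, 3]   # hypothetical stays (days)
los_post = [3, 4, 3, 7, 2, 5,  9, 4, 4, 3]

u, p = mannwhitneyu(los_pre, los_post, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3f}")
```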
How will you design your study? What comparison group will you use?
1. Time-motion study pre- and post-implementation.
2. Pre-post design with chart reviews, then tracking of instrumented data.
3. Pre-post design with chart reviews, then tracking of instrumented data.
4. Before-and-after comparison.
5. Iterative review of notes.

For quantitative measures only: What types of statistical analysis will you perform on your measurements?
1. T-test comparing the means of the time-motion data before and after.
2. Graph error rates; compare error rates pre- and post-implementation with a chi-squared test.
3. Compare pre- and post-implementation guideline compliance numbers using a chi-squared test.
4. Compare expenditures on length of stay, medications, and ED visits before and after using a t-test.
5. N/A

How would the answers to your questions change future decision-making and/or implementation?
1. Help identify factors that can enhance workflow and lead to quicker turnaround of orders.
2. Define the safety value of this system; estimate the number of adverse events avoided.
3. Understand the reasons for noncompliance with guidelines and how guidelines can be optimized for better compliance.
4. Understand the financial impact of this technology on the hospital budget.
5. Lessons learned will make implementation easier for the next hospital.

Example 5: Ongoing Cost Savings From a PACS Implementation...
