Antimicrobial Stewardship

AMS program evaluation

Measurement and evaluation are fundamental components of an effective AMS program. An AMS program with good measurement practices embedded from the outset is well placed to:



  • Keep track of whether the program is performing as planned and reaching its target audience.
  • Monitor trends in prescribing.
  • Identify areas that require improvement and determine the extent to which efforts have been effective.
  • Formulate and implement robust initiatives where the need for intervention has been identified from sound data.



Evaluation techniques can be applied to the AMS program as a whole, as well as to individual initiatives within the program (e.g., a specific IV-to-oral switch campaign). Evaluation should not simply come at the end of a program’s implementation, but should inform all stages of a program’s life cycle, including:



  • formation and planning;
  • implementation; and
  • analysis of results (including evaluation of outcomes, effectiveness and long-term impacts).

Evaluation during formation and planning



The evaluation that occurs during the formative or planning stages of a program, also known as formative evaluation, can have a crucial impact on the success of a program. While it may be tempting to move quickly through this stage, the investment of a little time and thought will improve the design of the intervention, thereby improving the chances of a successful outcome. The following should be considered:


  • Conduct a needs assessment to determine the gap between the current state and the desired state, and whether there is a clear need for the planned intervention.
  • Identify and consult the relevant stakeholders.
  • Collect baseline data; this can be crucial in pinpointing the specific areas of need and ensuring that the impact of the interventions can be measured and compared.
  • Articulate the problem/s and define goals and priorities.
  • Articulate the proposed solution.
  • Identify the causal pathways involved in the problem being addressed. This includes identifying internal and external factors, how they affect the problem, how they can be rectified, and whether any of them are beyond your control.
  • Even at this early stage, it is vital that the AMS team consider how the results of the intervention will be reported and communicated. This may have a large impact on how the intervention is delivered, and it is important to ensure that the appropriate measurements can be made along the way. It is very dispiriting to reach the end of a program only to discover that key questions cannot be answered because the necessary data were not collected correctly.



Key questions to be answered:



  • What do I want this program to achieve? What does success mean?
  • What measures will I use to know if it has been successful?



Implementation evaluation



The aim of implementation evaluation (or process evaluation) is to evaluate how the program is performing under real conditions with real people. Note that this may need to be done frequently during implementation.



The purpose of this type of evaluation is to: 



  • gain information to improve the program;
  • facilitate learning and decision-making;
  • verify that the program is running according to plan and reaching the right people; and
  • contextualise the outcomes. Obtaining a good understanding of how a program was implemented will assist in understanding the outcome. For example, an evaluation that finds that a program was poorly implemented may explain why certain anticipated outcomes were not achieved. Accurate identification of implementation difficulties means that they can be avoided in future interventions.



Key questions to be answered:



The key questions that should be asked are: ‘what?’, ‘how?’, ‘why?’ and ‘to what extent?’.



Answering these questions requires qualitative analysis, which allows the AMS team to gain a deeper understanding of the issues, perceptions, barriers and enablers. It also provides an opportunity for the team to further engage with hospital staff and build a sense of rapport and understanding.



These questions should be focused around 4 key areas:



  • Reach: Did you reach your target population? 
  • Extent: Have all the components been implemented? What were the barriers and facilitators (internal and external)?
  • Quality: Is the program content of high quality?
  • Satisfaction: Are hospital staff satisfied with the program?



Outcome (summative) evaluation



Outcome (or summative) evaluation should describe both the outputs and effects of the program (both intended and unintended), and make a value judgement about the program’s achievements. Whilst conducting this evaluation, it is also important to consider how the program’s implementation affected the outcome.



Key questions to be answered:



The key questions for outcome evaluation should be focused around 3 main areas:


1. Effectiveness: Has my program worked? Were there unintended effects?



Examples:



  • To what extent have I improved documentation?
  • Is there evidence of improved prescribing for pneumonia?



2. Adoption: Who participated in this AMS activity?



Examples:



  • What is the uptake of the pre-approval system?
  • How many medical staff used this checklist?
  • How many times have the new guidelines been accessed?



3. Maintenance: Has my program been embedded in the organisation?



Examples:



  • Do I have enough senior support to sustain this?
  • Are any learnings transferable to the next program/project?



Case example: a campaign to improve documentation



The following describes some key evaluation questions that the AMS team should ask when designing and implementing a campaign to improve documentation of the antimicrobial management plan in the patient’s clinical notes (documenting the indication and a review-date or stop-date).



Formation and planning evaluation



  1. Define.

a) What is to be documented and how. For example, the minimum standards should include that:

  • the antimicrobial name, route, dose and frequency are documented in a safe and legible manner;
  • the reason for the antimicrobial is documented in the clinical notes; and
  • the start-date, planned review-date or cessation-date of the antimicrobial is clearly visible and easy to find.

b) Where it should be documented: e.g., in the clinical notes, on the medication chart, on the administration chart.

c) Who should be documenting it: e.g., the doctor, nurse, clinical pharmacist.

  2. Collect baseline data. This will help to quantify your hospital’s current practice and the gap that needs to be addressed. This information will also be useful to communicate to clinicians during the campaign’s implementation.

a) Create a data collection form (this can be very simple).

b) Determine which wards will be included in the data collection. This will depend on your level of resources – you may decide to focus on a few key wards and specialties, or sample a random selection of patients from all wards.

  3. Identify barriers and enablers.

a) What are the barriers to implementation, and how might you overcome or mitigate these?

b) Are there enablers, and how could these be harnessed or incorporated into your campaign?

  4. Identify and communicate with relevant stakeholders.

a) It is useful to collect qualitative data by speaking with clinicians to determine why they do or do not document, whether they consider it important, whether they perceive any barriers to documentation, etc.

b) Discuss at relevant committee meetings.

c) Determine if any face-to-face meetings need to be held during the campaign, and ensure that the time slots are booked and the necessary people are available.

d) Identify key champions and ask for their support in driving this initiative and leading by example.

e) Draft emails or letters to key clinical leads communicating why this campaign is needed and why documentation is important.

  5. Prepare for the campaign.

a) Define your goals (what does “success” look like?).

b) Develop your campaign materials, including posters, stickers in the medical notes, articles in hospital newsletters, screensavers for hospital computers, lanyard cards, etc.

c) Determine which areas the campaign will focus on, e.g., the whole hospital or a particular ward or specialty.

d) Update any relevant policy or procedure documents to reflect the new requirements.

e) Determine how you will measure the impact of the campaign, to whom you will report the results, and what sorts of information they will want to know.

f) Determine a launch date.

g) Determine at what time-points re-auditing will take place to monitor any improvements in practice.
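The baseline data collection described above can be kept very lightweight. As an illustrative sketch only, the tally below shows one way a simple audit form could be summarised in Python; the field names, ward names and records are all hypothetical, not part of any mandated form:

```python
# Minimal baseline-audit sketch: each record mirrors one row of a simple
# data collection form (one audited antimicrobial order per row).
baseline_audit = [
    {"ward": "Med A",  "indication_documented": True,  "review_or_stop_date": False},
    {"ward": "Med A",  "indication_documented": False, "review_or_stop_date": False},
    {"ward": "Surg B", "indication_documented": True,  "review_or_stop_date": True},
    {"ward": "Surg B", "indication_documented": True,  "review_or_stop_date": False},
]

def compliance_rate(records, field):
    """Percentage of audited orders meeting one documentation standard."""
    return 100 * sum(r[field] for r in records) / len(records)

for field in ("indication_documented", "review_or_stop_date"):
    print(f"{field}: {compliance_rate(baseline_audit, field):.0f}%")
# → indication_documented: 75%
# → review_or_stop_date: 25%
```

The same tallies, broken down by ward, give concrete gap figures to feed back to clinicians during the campaign’s implementation.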



Implementation evaluation



  1. After a certain amount of time has elapsed (e.g., 2 weeks, 1 month, etc.), perform some interim evaluations:

a) Have all areas received the necessary education or communications?

b) Consider conducting a small audit to determine which areas have had improvements in documentation and which have not. For areas that have not improved, try to determine the possible reasons for this.

c) Talk to some clinicians and ask whether they are aware of the new requirements and why they have or have not changed practice.

d) Talk to your key champions to determine their perspective on the implementation and whether any feedback has been communicated to them. Note that your findings from this will determine whether aspects of the program’s implementation need to be revised.



Summative evaluation



  1. Re-audit to determine the impact of the campaign. To what extent has documentation improved? Which areas/specialties saw the greatest improvement and which saw minimal or no improvement?
  2. Perform some qualitative analysis by talking to some clinicians to determine why they have or have not changed practice.
  3. Talk to your key champions to determine their perspective on the campaign.
  4. Think about how the campaign was implemented and whether this affected the results.
  5. Consider whether the program has been embedded in the organisation and what ongoing work/monitoring/re-education is required.
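The re-audit comparison above can be expressed as a simple per-ward change in compliance, which makes it easy to see which areas improved and which need follow-up. The figures, ward names and 5-point threshold below are invented purely for illustration:

```python
# Sketch of the re-audit comparison: documentation compliance (%) before and
# after the campaign, broken down by ward. All figures are illustrative.
baseline = {"Med A": 45.0, "Surg B": 60.0, "ICU": 70.0}
post_campaign = {"Med A": 80.0, "Surg B": 62.0, "ICU": 71.0}

def change_by_ward(before, after):
    """Percentage-point change per ward, largest improvement first."""
    deltas = {ward: after[ward] - before[ward] for ward in before}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

for ward, delta in change_by_ward(baseline, post_campaign):
    flag = "" if delta >= 5 else "  <- minimal/no improvement: follow up"
    print(f"{ward}: {delta:+.0f} percentage points{flag}")
```

Pairing a summary like this with the qualitative feedback from clinicians and champions helps explain why particular wards did or did not change practice.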



Reporting and feedback of results



  1. Determine which groups are going to receive feedback on the audit results. Contact these groups and schedule a time to discuss the results with them.
  2. Note that feedback should consist of both positive feedback and recommendations for areas of improvement.
  3. Congratulate staff and acknowledge any areas of good practice.
  4. Present results on poor practice. It is useful to provide general summary results as well as specific case scenarios to illustrate the areas of deficiency.
  5. Formulate a plan for how to improve the areas of deficiency that have been identified.
  6. Ensure that any reports are in an appropriate format for the recipient. For example, the AMS Committee may be interested in more detailed reports, whereas patient-facing clinicians may only want the key take-home messages.
  7. Determine if and when to re-audit to determine changes in practice over time.