Author: Celia Popovic
Ensuring and Measuring Impact
Educational Development centres can be vulnerable to the question of impact. How do we measure our impact? How do we convince others of our worth? How do we know if we are doing a good job?
The EDC Guide on Centre Reviews provides pragmatic advice, particularly for Centre Directors and leads.
What can the average developer do to ensure and maintain impact?
When planning an event, be sure to assess the need. It is not uncommon for an enthusiastic faculty member, for example, to suggest a topic that they believe will be of value to others. However, unless the need is clear and the value of the workshop is agreed upon, this may not result in high participation.
While ad hoc workshops have a role, many find far greater impact when events are tailored to the specific needs of an identified group. Events arranged for an identified group can use discipline- or context-specific content, can be timed to fit the group's schedules, and can be marketed directly to them. For example, I have had success in 'piggy-backing' on existing events, such as a department or faculty meeting, which resulted in a higher turnout than when the same workshop was run ad hoc.
The need may be identified by the group themselves, such as dealing with a particular issue within their course or program. It can also be led by institutional priorities, tailored to the context of the group. For example, at a time when my home institution was prioritizing an increase in online and blended learning, a department head asked for a personalized event for his colleagues. The resulting half-day workshop was based on the generic workshop we offered to all faculty but incorporated discipline-specific examples. The group was able to discuss the impact of the policy on their own circumstances, which led to a highly productive event.
Identify the aim of the event or resource, and find out if it was achieved.
Many centres make use of feedback forms distributed immediately or shortly after an event. These are sometimes called 'happy sheets'. They merely assess whether the event met the immediate needs of the participant. While they provide useful feedback, they do not explore the deeper levels of impact that eventually lead to a change in the student experience, which is the ultimate aim of our work.
Kirkpatrick sets out an invaluable guide to measuring impact. He suggests impact should be measured at four levels, not merely the initial reaction.
Reaction - how did the participant find the experience? For example, a workshop "happy sheet" typically asks this question.
Learning - what did the participant learn from the experience? How often do we test participants on the content of a workshop or their use of a resource? We know from learning theories that recall and application are necessary to embed learning, but this is rare in educational development activities. How might you do this in your practice?
Behaviour - to what extent has the participant changed their behaviour as a result of the experience? Does the instructor use a different approach? If they do, how do we know? It can be difficult to assess this level of change, but some centres send out follow-up feedback forms weeks or months after an event asking participants this very question.
Results - has there been a change in the organization as a result of the experience? In the case of educational development, the ultimate recipients of a change are the students. Is it possible to measure teaching success in terms of impact on students? This is a tricky area, as there are so many variables. Kirkpatrick would argue, though, that unless there is change at this level, there is little point in trying at all!
References and resources
EDC guide no. 3 Centre Reviews: Strategies for Success (2018) https://edc.stlhe.ca/edguides#guide3
*************************************************************************************************
This work is licensed under a Creative Commons Attribution 4.0 International License.