9.1 Assessment for Learning

Author: Kay Sambell

Assessment for learning: an agenda for change

The Assessment for Learning (AfL) movement is gaining momentum in tertiary education (Sambell et al., 2013; Carless et al., 2017). Its advocates promote this agenda for educational change on the premise that assessment can valuably be designed to promote and support students’ learning, as well as to measure and quality-assure it. An established and relatively long-standing literature provides the evidence base for this agenda (see, for example, Brown and Knight, 1994; Boud and Falchikov, 2007; Sambell et al., 1997) but, despite considerable support from the educational development community and appreciable moves in the right direction, progress still seems frustratingly slow.

Assessment as a wicked issue

One of the issues facing those who seek to promote AfL is that assessment in higher education represents a “wicked issue” (Deeley et al, 2019), which means it is far from simple or straightforward to bring about assessment change. A wicked issue or problem is one that, amongst other features, is unique, complex, interwoven, ill-defined, frequently shifting, and it has multiple stakeholders with potentially conflicting values. Above all, there is no single correct solution (Norton, 2018).

Ramaley (2014) suggests that wicked problems in higher education (HE) call for ‘boundary-spanners’ (p.8) who can build a culture of engagement and foster new ways of working collaboratively, enabling HE communities to rethink current challenges and explore ways to provide a coherent and meaningful educational experience in the face of the turbulence, uncertainty and fragmentation that characterise much of HE today.

When we think of assessment in our universities, it is clear that there are many stakeholders, all deeply invested in assessment practices, but each likely to have different priorities, preoccupations and agendas when it comes to defining a ‘good assessment’ regime from their particular standpoint. So, even setting aside the complexities that inevitably emanate from the diverse perspectives, values and ideological assumptions any individual brings to bear on the topic, it stands to reason that different stakeholders (such as academics, students, quality assessors, university managers and professional bodies) tend to look at assessment through very different lenses, largely because they, often tacitly, view it as fulfilling markedly different purposes.

The conundrum the AfL movement faces is that all these purposes are simultaneously true, and assessment designs must, to some degree, fulfil them all. At one end of the continuum assessment is all about accurate measurement, and this view (where the main purpose is the assessment of learning) focuses, understandably, on summation, certification, marks, grades, scores and associated ideas about, say, the reliability that these represent. At the other end of the continuum, assessment where the main purpose is to promote and support learning (that is, assessment for or as learning), prioritises the formative, developmental purposes of assessment. From this viewpoint the focus tends to be on issues of validity and the future impact of assessment on students’ approaches to learning, either within the academy or in the longer term, beyond graduation. It also tends to focus on assessment’s capacity to review progress in order to improve it.

This means, at a pragmatic level, all these varying purposes need to be managed, balanced and, as far as possible, fulfilled in and by any assessment design. Boud (2000) referred to this as the ‘double duty’ that assessment must perform. In practice, then, it means that assessment designs are rather more akin to a juggling act in which competing priorities are, as far as possible, all simultaneously held metaphorically aloft.

As Boud (2000) states, “Acts of assessment need both to meet the specific and immediate goals of a course as well as establishing a basis for students to undertake their own assessment activities in future. To draw attention to the importance of this, the idea that assessment always has to do double duty is introduced” (p. 151).

Of course, this isn’t easy, making it one of the most important wicked issues educational developers face. In short, it means there is no perfect solution (Sambell and McDowell, 1998) or perfect system. One dilemma is that many stakeholders assume there is a perfect solution, and that educational developers can simply point it out and tell people to make it happen!

A question of balance

Assessment for learning, then, is a movement which seeks to foreground the educational dimension of assessment, such that assessment is, first and foremost, designed to promote and support learning (Carless, 2015). This can be made to sound very easy, but in practice it is a complex and challenging balancing act: sensitively balancing the assessment of learning with AfL.

Further complications arise because tactics and strategies promoted in the name of assessment for learning in one area cannot simply be dropped in or scaled up (Carless et al., 2017) in a straightforward way in others. Formative activities based on the provision of detailed expert feedback, which might be perfectly manageable in small groups led by a single tutor, may prove challenging to resource and manage effectively and consistently with huge cohorts and large teaching teams, for instance. Disciplinary ways of thinking and practising also strongly influence the ways in which knowledge is viewed, valued and applied, so there are likely to be important divergences in how authentic assessment practices manifest themselves across subject areas. Signature pedagogies (such as studio-based practices in the creative industries) clearly diverge significantly from lecture-based courses in other disciplinary areas, with a knock-on effect on staff and student assumptions, expectations, and so on.

Identifying key principles

That is why, instead of providing off-the-shelf recipes or solutions, AfL ideally builds on the bedrock of an outlook or philosophy. It is therefore as much about winning over hearts and minds, challenging tired prior assumptions or misconceptions, and enabling local communities to develop assessment-for-learning literacies, as it is about dispensing advice about ‘what works’.

This is why most large-scale AfL initiatives have developed a framework of principles; ours is one example.

In England, our Centre for Excellence in Assessment for Learning at Northumbria University (Sambell et al., 2013) developed six evidence-based conditions. These were intended to act as key questions which academics, and the stakeholders who worked in partnership with them (professional services, study skills advisers, librarians and, of course, students as consultants, partners and so on), could ask themselves in relation to their own local practices. The six conditions were thus used to redesign assessment practices by prompting reflection and discussion, encouraging practitioners to join a wider community engaging with the issues in an evidence-informed way. Building on a long track record of research and expertise, and harnessing the energy of several pioneering enthusiasts, individual practitioners and teaching teams across the institution were supported to recognise the (often deleterious) impact assessment had on learning, and to devise their own strategies for improving the situation. In other words, the six conditions supported stakeholders, often at the module (course) level but even better at the programme level, to review their existing designs and adjust them if need be, rethinking assessment to ensure that, as far as possible, it acts as a catalyst for learning (McDowell et al., 2011).

In our approach, AfL is all about involving students, meaningfully and appropriately, as responsible, engaged partners in assessment practices, rather than casting them as passive recipients. This represents a shift of power which is encapsulated by the AfL manifesto with which we conclude Assessment for Learning in Higher Education (Sambell et al., 2013).

This book, which was specifically aimed at busy practitioners, contains many worked examples of how this all panned out in a range of contexts across our own institution. The examples are often welcomed by academics who are keen to improve assessment and feedback processes in their courses, but it’s worth emphasising that they are not recipes to follow, but food for thought to consider and potentially adapt.

In what follows I will simply outline the key conditions, in the hope that they help you to spark discussions and raise key questions in the minds of the academics, students and others with whom you work.

The model of AfL we developed called for an overall curriculum design and the development of productive learning and teaching environments that:

  1. Emphasises authenticity and complexity in the content and methods of assessment rather than reproduction of knowledge and reductive measurement;
  2. Uses high stakes summative assessment rigorously but sparingly rather than as the main driver for learning;
  3. Offers students extensive opportunities to engage in the kinds of tasks that develop and demonstrate their learning, thus building their confidence and capabilities before they are summatively assessed;
  4. Is rich in feedback derived from formal mechanisms (e.g. tutor comments on assignments, clickers in class);
  5. Is rich in informal feedback (e.g. peer discussions of work-in-progress, collaborative project work), which provides students with a continuous flow of feedback on ‘how they are doing’;
  6. Develops students’ abilities to direct their own learning, evaluate their own progress and attainments, and support the learning of others.

Conclusion: Your mission, should you choose to accept it!

I’ve tried to indicate why AfL leadership is far from easy, but also why it is so important a mantle for educational developers to pick up. The reflection and power-sharing it entails requires a culture-shift in stakeholders’ thinking about the potential meanings of assessment. This flies in the face of simple techno-rationalist ‘solutions’ and is particularly challenging when working with frenetically busy practitioners who are struggling to meet the competing demands and external drivers in the so-called era of performativity we are seeing in higher education. Students, too, need strong support to appreciate the practices that underpin AfL and the important role they have to play within effective assessment and feedback processes.

AfL is, then, not for the faint hearted. But, as they say, nothing worthwhile or important is ever easy! Good luck with your AfL endeavours.

References

Bloxham, S., Reimann, N. and Rust, C., 2019. Calibrating standards: what, why and how? (previously titled ‘Degree Standards Project: Calibration synthesis report’). Advance HE. Available at: www.heacademy.ac.uk/calibration-of-academic-standards

Boud, D., 2000. Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22(2), pp.151-167.

Boud, D. and Falchikov, N. eds., 2007. Rethinking assessment in higher education: Learning for the longer term. Routledge.

Brown, S. and Knight, P., 1994. Assessing learners in higher education. Routledge.

Carless, D., Joughin, G. and Liu, N.F., 2006. How assessment supports learning: Learning-oriented assessment in action (Vol. 1). Hong Kong University Press.

Carless, D. (2015) Excellence in University Assessment: Learning from award-winning practice. Routledge.

Carless, D., Bridges, S.M., Chan, C.K.Y. and Glofcheski, R. (eds), 2017. Scaling up assessment for learning in higher education. Springer, Singapore.

Carless, D., 2017. Scaling up assessment for learning: progress and prospects. In Scaling up assessment for learning in higher education (pp. 3-17). Springer, Singapore.

Deeley, S.J., Fischbacher-Smith, M., Karadzhov, D. and Koristashevskaya, E., 2019. Exploring the ‘wicked’ problem of student dissatisfaction with assessment and feedback in higher education. Higher Education Pedagogies, 4(1), pp.385-405.

McDowell, L., Wakelin, D., Montgomery, C. and King, S., 2011. Does assessment for learning make a difference? The development of a questionnaire to explore the student response. Assessment & Evaluation in Higher Education, 36(7), pp.749-765.

Norton, L., 2018. Action research in teaching and learning: A practical guide to conducting pedagogical research in universities. Routledge.

Ramaley, J.A., 2014. The changing role of higher education: Learning to deal with wicked problems. Journal of Higher Education Outreach and Engagement, 18(3), pp.7-22.

Rust, C., 2020 (revised). Re-thinking assessment: a programme leader’s guide. Available at: http://ocsld.brookesblogs.net/2017/12/22/re-thinking-assessment-a-programme-leaders-guide/

Sambell, K. and Brown, S., 2021. Covid-19 Assessment Collection. Available at: https://sally-brown.net/kay-sambell-and-sally-brown-covid-19-assessment-collection/

Sambell, K., Brown, S. and McDowell, L., 1997. ‘But is it fair?’: An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation, 23(4), pp.349-371.

Sambell, K. and McDowell, L., 1998. The construction of the hidden curriculum: messages and meanings in the assessment of student learning. Assessment & Evaluation in Higher Education, 23(4), pp.391-402.

Sambell, K., McDowell, L. and Montgomery, C., 2013. Assessment for learning in higher education. Routledge.

This work is licensed under a Creative Commons Attribution 4.0 International License.