I suppose it’s tedious to keep emphasising the extent to which assessment is a critical activity in the lives of staff and students, but I’m going to carry on doing it until we think that all of our systems are appropriately centred on it. Assessment affects individual progression, workloads, institutional performance and reputation and it dominates the planning of the annual academic cycle. And yet it only seems to feature as a bit player in systems design.
In HE, assessment is pretty much the ultimate multi-professional activity – almost everyone in the organisation has some involvement with it at some time or other, especially if you count the Hall of Residence reception staff mopping tears or providing an incident report to support a mitigating circumstances claim, the library staff helping students to find sources, the technical staff rushing to deal with complicated equipment or to fix IT or print services just before a deadline, or the student life team coping with an unexpected problem.
Mark Stubbs and I had a paper accepted at the AUA annual conference in March 2013, about our experiences of reviewing processes which have multi-professional input. We used two assessment-related examples to show how we would like to achieve the right input to projects, but don’t always manage it, and I thought it would be worth writing some further notes here.
1: Coursework receipting
MMU has a brilliant coursework receipting system which Mark has blogged about before. It does the job it was designed to do really well. It finds out when things need to be handed in (well, it doesn’t do that on its own, but it provides an incentive for academic staff to tell administrators this information). It provides information which can be fed to the VLE, providing a constant reminder to staff and students of when assignments are due. It allows submissions to be safely ‘posted’ in collection boxes rather than queuing at deadline time. It generates bar codes for the submission cover sheets. It sends an automated email to students to reassure them of safe receipt. It copes with registering information about submissions from students who have been given extensions, and with late submissions.
There is nothing ‘wrong’ with the Coursework Receipting System. It does exactly what it said it would. It deals with a series of problems which were identified by professional staff.
But when we started reviewing our overall assessment management systems, hoping to be able to incorporate the CRS, we identified a few issues:
- It was written in-house using database software which the university no longer supports routinely
- It doesn’t provide the same full service for ephemeral or electronic submissions, or exams
- It doesn’t provide any system for returning work to students
This may be a slight over-simplification, but these issues are basically due to the fact that the process was designed and managed by one group of people, when the process itself actually must involve two other important groups: students, and academic staff. These two groups have other needs which aren’t currently being met by the system. So we need to do some redesigning.
2: Academic Appeals
As part of our baseline report for the TRAFFIC project, we were asked to review academic appeals. Academic appeals are the last resort for students who’ve failed a module. They are a quasi-legal process and there are only two grounds for appeal: that the student had mitigating circumstances which couldn’t be declared at the time, or that there was a material irregularity in the conduct of assessment. I won’t go into the report here, because it hasn’t yet been signed off by those who commissioned it, but it is a matter of public record that our institution has a lot of appeals, and that a very substantial number of these are upheld. The question posed to me before I began the review was whether there was something wrong with our appeals process.
A colleague, Helen Jones, and I reviewed dozens of appeal cases and interviewed academic, Students’ Union, and administrative staff. The conclusion we came to was that it was actually a good process, managed and supported by people who care deeply about fairness. The reasons for the number of appeals came earlier in the assessment management process. And without going into detail here, what was really interesting was that lots of people involved in appeals knew EXACTLY how earlier processes could be improved, but didn’t know how to change the systems. So Students’ Union staff might be able to tell you why students were having difficulty with something, but because they didn’t have regular contact with that something, they didn’t know how to effect change. Administrative staff might have very shrewd ideas about changing some parts of the support system, but feel unable to suggest them. And academic staff may have little idea of what happens to bits of paper once they are ‘in the system’: one member of academic staff said “Thank you for asking me about the administrative systems; nobody has done that before.” Whilst the working environment may be friendly and collegiate, there is still a sense of ‘that’s someone else’s domain’ whenever academic and administrative issues collide.
Celia Whitchurch has written extensively about the difficulties of being a ‘third space’ professional, inhabiting a grey area between academia and administration, but I think we could do with more third space. Why don’t/can’t we have shared ownership of the whole process?
This is not news to anyone interested in change management, but solving it is surprisingly difficult. Workshops with representatives from a variety of services should be effective, but there can be issues with one or two people dominating, even when you manage to timetable them. We’ve tried to work around this in the past by using ‘levelling’ activities such as competitive games, but not everyone likes these. Doing what Helen and I did with the appeals report, or our baseline report, is another way of approaching it: one or two people carrying out interviews and synthesising the outcomes. This has the benefit of ensuring that voices are heard, and not drowned out. It’s not ideal, though, because the chosen person may bring their own perspectives too strongly to the final result, and it may be hard to challenge. Sending documents round for comment is inclusive, but hit and miss in terms of replies, and again, the collator may be more biased than they realise.
Basically, I’m still wrestling with the problem of reliably and effectively extracting ideas from multi-professional teams. I thought about turning everyone upside down and shaking them, but apparently that’s just a metaphor for sharing best practice, not a real process. Who’s managed to solve this effectively in an HE context? What am I missing?
PS Disclaimer: I cause just as much irritation to professional services staff, in the matter of assessment administration, as any other academic. I’m trying to do better!