“Run away” (a post about Marking Criteria)

“On second thought, let’s not go to Camelot. It is a silly place.” (King Arthur, in Monty Python and the Holy Grail)

Which is rather a silly way to introduce a post on Marking Criteria and Grade Descriptors, but the quest to agree them has been something of a grail at our institution for several years, and King Arthur’s other catchphrase, “run away”, has at times been a tempting proposition.

To explain why our team has even considered straying into this dangerous area, I have to begin by referring you to our last QAA institutional audit report. In the baseline report for the TRAFFIC project, we said:

“The most recent QAA institutional audit for MMU recommended that the university should “establish a set of comprehensive university-wide assessment criteria to help maintain consistent standards across all provision, both on and off-campus” (QAA 2010). This recommendation has resulted in considerable discussion, particularly in the assessment community of practice, which has considered this issue regularly since 2007, but no acceptable structure has yet been found which covers awards from Fine Art to Nursing”

The existing approach at institutional level was simply to provide marking bands:

Programme teams then determined their own marking criteria, i.e. what it meant to produce an excellent, very good, or good piece of work in their own discipline for a particular assignment. This has never caused problems: external examiners have never raised it as a concern, and there were no issues with competence.

The photo at the top of this post is titled ‘outstanding’. I’m sure we’d all agree. But why? We’d start talking about colour, composition, and so on. We do need more words than the ones in this bare set of bands.

Believe me, we have put a lot of thought into a better set of descriptors over the last few years. We always got stuck on the principle of institution-wide criteria: there is just too much variation in the type and purpose of assessments – how do you compare an essay with an exhibition with a treatment plan with a lesson observation? Once we started getting any more descriptive than the bald grading bands listed above, we got locked into endless discussions about sources, data, threshold concepts and more.

However, having a common language to discuss grading would be useful, particularly for new staff and for those contemplating assessment change. Differentiators like ‘good’, ‘very good’ and ‘excellent’ depend heavily on an implicit shared understanding. That dependence can undermine people’s confidence in discussing marking decisions, and can make the marking process take longer than it really needs to.

More pressingly, getting a consistent approach to marking criteria was needed for the purposes of responding to institutional audit. But it needed to be an approach which worked across the institution.

We have to ensure that any institutional systems that we design for electronic assessment management are able to accommodate the use of marking criteria and their directly-related feedback strategies. If programme teams can’t use the institutional criteria and have to constantly translate their own across to a compulsory electronic system, we’re all going to be wasting huge amounts of time. Obviously I’m talking hypothetically here, nobody would ever be that daft, would they?

So we’ve had another go at this.

The Centre for Academic Standards and Quality Enhancement managed to gather a group together with appropriate representation from each of our eight faculties. This group decided that it was probably possible to agree a set of standard descriptors which would work across the whole institution and to produce guidance on how they could be interpreted into specific marking criteria for each assignment.

Approach

The University has recently introduced an Employability Curriculum Framework which includes a set of Graduate Outcomes which should be assessed in every programme. Each unit (module) description at undergraduate level now indicates which of these Outcomes are addressed in the unit.

So, we decided to use these Outcomes to provide a basis for our institutional grade descriptors. For each Graduate Outcome, we’ve written a threshold (pass) descriptor for each of the academic levels at which we provide taught courses: levels 3–7 on the Framework for Higher Education Qualifications (FHEQ).

[Image: descriptors for each level and each Graduate Outcome at MMU]

So this gave us descriptors for a threshold pass for each of the Graduate Outcomes. Next, we wrote descriptors to differentiate performance at each level in very generic terms. For example, the structure of presented work moves from ‘recognisable’ through ‘coherent’, ‘careful’ to ‘creative’ as performance improves through the bands for level 3.

This process gave us a full set of descriptors for each grade band, at each level – rather a long document, but it will be split up into levels to make it easier to use. In the process of doing this, we decided to split the top and bottom bands to provide guidance for differentiating truly outstanding and poor performances – something that external examiners here, as at many other institutions, have mentioned frequently in reports.

The main value of the descriptors seems to me to be in the provision of more specific language for differentiating performance, both within levels and as students proceed up the levels. We’ve started a list here, but I’m sure this will develop as the project continues, and there may well be intense discussions about which band particular words should be in. But I think we’ll all agree that they are more helpful than ‘good’, ‘very good’ and ‘excellent’!

[Image: descriptive language for grading bands]

Each programme team will be encouraged to replace words from the standards descriptors such as ‘primary and secondary sources’, ‘theory’, ‘community’, ‘audience’, ‘professional values and standards’ and so on with language which is specific to their discipline.

I hope the existence of these descriptors will improve confidence in marking judgements, particularly for new staff, and make it easier for people to discuss marking decisions. We are now at the stage of having a set of descriptors. The next stage is to write some guidance and provide examples across the disciplines taught in the university, and then present the descriptors to Faculty Student Experience committees during Spring 2013. I’m sure there will be some tweaks to the descriptors during this process. (tweaks = robust discussion)

Coming back to the JISC Assessment and Feedback programme, this process ties into institutional electronic assessment management systems in the following ways:

  1. It gives us a framework for electronic marking rubrics and grids.
  2. It ties our Graduate Outcomes recognisably into each assignment, strengthening our Employability Curriculum Framework and giving us the potential to include better information in the Higher Education Achievement Report without extra work on the part of staff.
  3. It has the potential to provide a bank of standard feedback comments (e.g. GradeMark QuickMarks) for each band at each level, because of the shared agreement about descriptive language.
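To make the framework idea above concrete, here is a minimal sketch (not MMU’s actual system) of how descriptors indexed by Graduate Outcome, FHEQ level and grade band could back both an electronic rubric grid and a bank of reusable feedback comments. The outcome name and the function names are illustrative assumptions; only the level 3 descriptor words (‘recognisable’ through ‘creative’) come from the post itself.

```python
# Hypothetical sketch: grade descriptors keyed by
# (graduate_outcome, fheq_level, band). The outcome name
# "Communication" is invented for illustration; the level 3
# structure words are the ones quoted in the post.
DESCRIPTORS = {
    ("Communication", 3, "pass"):      "recognisable structure",
    ("Communication", 3, "good"):      "coherent structure",
    ("Communication", 3, "very good"): "careful structure",
    ("Communication", 3, "excellent"): "creative structure",
}

def rubric_row(outcome: str, level: int) -> dict:
    """One row of an electronic marking grid: the band
    descriptors for a single outcome at a single level."""
    return {
        band: text
        for (o, lvl, band), text in DESCRIPTORS.items()
        if o == outcome and lvl == level
    }

def feedback_comment(outcome: str, level: int, band: str) -> str:
    """Turn a descriptor into a standard feedback comment,
    in the spirit of a GradeMark QuickMark."""
    text = DESCRIPTORS[(outcome, level, band)]
    return f"{outcome} (level {level}, {band}): the work shows a {text}."
```

Because every comment is generated from the shared descriptor table, markers across disciplines would draw on the same agreed language, which is exactly the consistency point (3) is after.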

We’ll be reporting back on progress…

 
