Safety in numbers?

Maybe it’s just me, but there is something about quantitative data that I find vaguely comforting. There are numbers here! They must measure something useful! We can do things with them, like make a bar chart or, if there are enough variables, look for correlations. They don’t need to be interpreted. We don’t need to go back to the transcript and wonder what was meant by them or whether we asked the question in a leading way.

The institution has invested hugely in new systems to help us manage information about assignments and other aspects of curriculum design. So it seemed quite exciting to be able to download a list of all the assignments which enrolled students at MMU will take during 2012/13. We were able to do this last year too, but the information had not been logged very consistently before the EQAL process began, so it was difficult to make much comparative use of the data. Since we moved to a new unit proforma database two years ago, information has been captured more consistently. Each assignment is classed as either exam or coursework, and each item of coursework gets a one- or two-word description. In theory, it should be simple to sort these and build a picture of the kinds of assignments being set. For the TRAFFIC project, this might give us an idea of which assignments are most popular, so that we know where to focus system development efforts, and help us to identify unusual but interesting assignments which might make good case studies for testing the assignment management systems and for sharing good practice.

It turns out that even though the information is in better shape, it still isn’t as clear as we might have expected. A quick sort in Excel by ‘assignment name’ is quite useful: it shows that ‘exam’ and ‘examination’ are both in use, but you have to look carefully to see that ‘unseen exam’ and ‘seen exam’ also appear in the list. Essays are often logged by their size or weighting (‘20% essay’, ‘essay 20%’, ‘2500 word essay’ or ‘essay 2500 words’ might all be used, for instance). ‘Multiple-choice’, ‘multiple choice’ or ‘computer-marked’ tests might all refer to the same kind of thing. What’s the difference between a ‘critical evaluation’ and a ‘critical analysis’ (discuss)? Is a ‘class test’ the same as an ‘in-class test’, or just a ‘test’? Is there a difference between ‘coursework’ and ‘assignments’? Should a ‘group presentation’ be counted as a presentation, or as groupwork? Are ‘coursework’ or ‘groupwork’ actually useful names for assignment tasks at all? They could cover almost any kind of assignment!

When you are counting the frequency of individual items, these kinds of interpretations can have a big impact on the data. There’s probably no problem with merging ‘exam’ and ‘examination’, but would it be useful to know the frequency of unseen exams compared to seen ones, or ‘open book’ ones? And if it would, do we know whether all of those marked just as ‘exam’ are seen or unseen?
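For anyone wrestling with a similar export, the kind of tidying involved might look something like the following sketch in Python. The synonym map and the sample labels here are purely illustrative, not our actual data or cleaning rules:

```python
import re
from collections import Counter

# Purely illustrative synonym map; the real merging decisions need human judgement.
SYNONYMS = {
    "examination": "exam",
    "multiple-choice": "multiple choice test",
    "multiple choice": "multiple choice test",
    "computer-marked test": "multiple choice test",
}

def normalise(label):
    """Lower-case a raw assignment label and strip weightings and word counts."""
    label = label.strip().lower()
    label = re.sub(r"\d+\s*%", "", label)       # drop weightings such as '20%'
    label = re.sub(r"\d+\s*words?", "", label)  # drop sizes such as '2500 words'
    label = re.sub(r"\s+", " ", label).strip()  # tidy leftover spacing
    return SYNONYMS.get(label, label)

raw_labels = ["20% essay", "essay 20%", "2500 word essay",
              "essay 2500 words", "exam", "examination", "unseen exam"]
print(Counter(normalise(l) for l in raw_labels))
# Counter({'essay': 4, 'exam': 2, 'unseen exam': 1})
```

Even then, a script like this only mechanises merges we have already decided on; whether ‘unseen exam’ should be folded into ‘exam’ is exactly the kind of judgement the data cannot make for us.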

So, useful though it is to have this data, it can only be regarded as indicative; despite the apparent confidence that numbers bring, they need to be treated with caution. And with this long preamble, I include a bar chart showing the top 7 assignment types being offered to our students at levels 4 and 5 this year. These assignments represent around two-thirds of the 1300 or so tasks being set each year for our 36,000 students.

Now, this pattern will not surprise anyone; these are good, solid, well-tried assignment techniques. The information is useful in giving us a notion of which kinds of item need to be tested to ensure that any changes to assignment management processes work properly, because it would be disastrous to implement a new system which didn’t take all of them into account.

Level 6 assignments still use our ‘old’, pre-EQAL programme structure, with 20-credit units, up to nine assignments allowed per unit, and vaguer descriptions, so it is meaningless even to attempt a like-for-like comparison. 34% of the 3775 assignment tasks are simply described as ‘coursework’, which could cover any or all of the non-exam items above. In the remaining 66%, however, the same kinds of assignments are still popular, with essays, portfolios, presentations and reports all featuring in the top ten.

Perhaps more interesting is the range of ‘other’ assignment types – articles, blogs, bibliographies, exhibitions, moots and so on – which clearly address key graduate outcomes but are barely used: there are 75 different types of assignment at level 4 which each appear in nine or fewer units. Replacing just one of the usual suspects in the chart with one of these might add a lot of value to other programmes, if colleagues felt confident in taking them up. What kind of information might colleagues need to find out about these types of assignment and make informed choices about using them? Can we learn anything from this kind of information, or is it just there ‘because we can’?

What do you think? Is it faddish, or restrictive, to move away from general descriptions of assignment tasks to more precise names such as ‘personal dossier’ (sinister?) or ‘health and safety report’? Is it useful to students to have more detail in the assignment title? What would you do with the information to inform assessment practice? Ideas and comments are very welcome, or you can tweet them to @mmutraffic.
