What do you do with feedback?

Piles of uncollected feedback in an office

Discussion about the purpose of feedback has a distinguished academic history (try Young 2000; Race 2001; Higgins, Hartley et al. 2002; Winter and Dye 2004; Carless 2006; Weaver 2006; Huxham 2007; Prowse, Duncan et al. 2007; Poulos and Mahony 2008; Rae and Cochrane 2008; Varlander 2008; Lunt and Curran 2009; Carless, Salter et al. 2011 for some introduction to the debate). In my experience of talking about feedback with academic staff, the topic often leads to much wringing of hands and a fairly good list of complaints about what students do or don’t do with feedback, starting with ‘they never pick it up, they just want the mark’.

Whilst we’ve tried to address this in the past with an extensive feedback resource, which includes the FAQ ‘Why don’t some students collect their feedback?’, there is a nagging suspicion that perhaps we’re looking at the whole thing from the wrong, teacher-centred perspective. In recent months I’ve been starting to ask the question “What do you do with the feedback you produce?”, which leads to much shorter conversations, but better plans for action. We’ve tried to incorporate reflection on feedback in the assessment lifecycle, but there is more thinking to be done about practical advice.

I have been thinking about this a bit more since EAS12 last week, at which David Boud (Boud and Molloy 2012) identified three ‘generations’ of feedback: Mark 0, Mark 1 and Mark 2.

photo by Sheila MacNeill, taken at EAS12

According to Boud, Mark 0 feedback is an adjunct to marking: it’s done by teachers to students, teachers hope it is used, and no direct response is required or expected. I think I might characterise this as the ‘I write feedback for the external examiner’ approach, or perhaps that’s feedback Mark -1?

Mark 1 feedback is an improvement, as it is focused on helping students to improve: teachers write it to change student behaviour. There might be lots of it, and an open invitation to come and discuss it. Boud points out that this model is unsustainable. Students remain dependent on the ‘Fat Controller’ teacher (his choice of analogy, not mine!) and don’t develop skills to make use of the feedback by themselves. They need the teacher to keep feeding them more of this feedback, and will make only incremental improvements linked closely to the assignment itself.

In Boud’s view, Mark 2 feedback is needed. Such feedback is ‘agentic and open’: it will contain illustrations, answers and explanations, but it won’t tell students what to do. They need to form their own ideas, or as Boud put it at EAS12, “only the learner can learn”.

How can we convert these ideas to meaningful action plans for busy teachers and their students? As we develop new systems as part of this project, we can certainly try to embed a good variety of approaches to giving feedback. But I think we need to ask the question ‘What do you do with the feedback?’ for every single assignment that’s set. This simple question does seem to work in getting people to think about the value of what they do.

And I had better start at home… I commit to a) thinking about what will happen to my feedback before I start writing it, and b) systematically discussing it with students in a feed-forward way, starting with the 20 credit Research Methods for Academic Practice unit which starts next week.

The thumbnail photo at the beginning of this post shows a pile of uncollected student assignments. All that time spent writing feedback which nobody will read…

Boud, D. and Molloy, E., Eds. (2012). Feedback in Higher and Professional Education: Understanding it and doing it well. Routledge.

Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education 31/2: 219-233.

Carless, D., Salter, D., Yang, M. and Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education 36/4: 395-407. http://www.informaworld.com/10.1080/03075071003642449

Higgins, R., Hartley, P. and Skelton, A. (2002). The Conscientious Consumer: reconsidering the role of assessment feedback in student learning. Studies in Higher Education 27/1: 53-64. http://www.informaworld.com/10.1080/03075070120099368

Huxham, M. (2007). Fast and effective feedback: are model answers the answer? Assessment & Evaluation in Higher Education 32/6: 601-611. http://www.informaworld.com/10.1080/02602930601116946

Lunt, T. and Curran, J. (2009). ‘Are you listening please?’ The advantages of electronic audio feedback compared to written feedback. Assessment & Evaluation in Higher Education 35/7: 759-769. http://dx.doi.org/10.1080/02602930902977772

Poulos, A. and Mahony, M. J. (2008). Effectiveness of feedback: the students’ perspective. Assessment & Evaluation in Higher Education 33/2: 143-154. http://www.informaworld.com/10.1080/02602930601127869

Prowse, S., Duncan, N., Hughes, J. and Burke, D. (2007). ‘… do that and I’ll raise your grade’. Innovative module design and recursive feedback. Teaching in Higher Education 12/4: 437-445. http://www.informaworld.com/10.1080/13562510701415359

Race, P. (2001). Using feedback to help students to learn. Available at http://www.heacademy.ac.uk/resources/detail/resource_database/id432_using_feedback.

Rae, A. M. and Cochrane, D. K. (2008). Listening to students: How to make written assessment feedback useful. Active Learning in Higher Education 9/3: 217-230. http://alh.sagepub.com/cgi/content/abstract/9/3/217

Varlander, S. (2008). The role of students’ emotions in formal feedback situations. Teaching in Higher Education 13/2: 145-156. http://www.informaworld.com/10.1080/13562510801923195

Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education 31/3: 379-394. http://www.informaworld.com/10.1080/02602930500353061

Winter, C. and Dye, V. (2004). An investigation into the reasons why students do not collect marked assignments and the accompanying feedback. Accessed on 8/1/8 from http://wlv.openrepository.com/wlv/bitstream/2436/3780/1/An%20investigation%20pgs%20133-141.pdf

Young, P. (2000). ‘I Might as Well Give Up’: self-esteem and mature students’ feelings about feedback on assignments. Journal of Further and Higher Education 24/3: 409-418. http://www.ingentaconnect.com/content/routledg/cjfh/2000/00000024/00000003/art00010

Posted in Feedback | Leave a comment

EAS12 conference, Dundee

The eAssessment Scotland conference in Dundee was a good way to begin the new academic year. With keynotes from David Boud, Russell Stannard and Cristina Costa, and no conference fee, the conference was very popular, attracting over 300 delegates. David Boud gave a great keynote on feedback, pulling no punches about potentially complacent attitudes to it (“Sometimes students manage our feelings so that we don’t realise what a crappy job we’re doing”) and encouraging the audience to think about doing things differently rather than simply doing more, with feedback strategies in which “students can learn to calibrate their own judgements”. (For more on developing a feedback strategy at MMU, see this article from Learning and Teaching in Action.) David suggested that we need to judge feedback in terms of its effects and focus on what learners, not teachers, do, to ensure that we get better. He also has a new book on feedback coming out shortly: Feedback in Higher and Professional Education: Understanding it and doing it well.

I then went to workshops by the FASTECH team and by people from York St John and Leeds Met. FASTECH is a JISC Assessment and Feedback project, like TRAFFIC, looking at technologies to achieve changes in assessment and feedback. It was interesting to hear from students taking part in their project about how they saw their role, and to try out an activity they use with programme teams for problem-solving around feedback. My group looked at ‘how we could speed up the process of giving feedback’ – it was interesting to work through some of these issues with the group of medical educators at my table. I look forward to catching up more with the project at the Programme meeting in October.

Mark Dransfield and Nikki Swift talked about how they’d supported the introduction of on-screen marking. This was probably the session of most interest to me, as on-screen marking has generated fairly fierce debate for us. I thought they were very crafty to offer people a choice of technologies tailored to their needs: iPad, Kindle, digital pen, laptop or a second screen for their desktop. The best (and cheapest) solution was the second screen – well worth knowing! You can read their interim project report here.

Graham Hibbert from Leeds Met described a really effective (and beautiful) home-grown system for feedback/dialogue/personal tutoring in the School of Art. I really liked it, but I can’t see how it would scale to the number of assignments we have at MMU.

PS – We were invited to send in a poster at short notice, to be printed by the organising committee. For some reason it didn’t print properly and had big black blobs instead of our attractively coloured assessment lifecycle – for those who would like to see the proper version, here it is.

PPS – the photo is from the riverside in Dundee, taken on the morning of the conference.

PPPS – Sheila MacNeill has done a much better critical summary of the conference over on her blog; well worth a read!

Posted in News | 1 Comment

TRAFFIC assessment info goes mobile

MMU has developed a “Core Plus” VLE based on Moodle, which presents information relevant to each student’s studies, such as their timetable, reading list and assignment deadlines. The “mega-mashup” web-service developed to deliver this personalised information is described in our JISC Distributed VLE project W2C.

Our consultation work with students has revealed a growing appetite for receiving this information via mobile phones, so TRAFFIC project team members have been working with MMU’s mobile phone partner (oMbiel) to consume the “mega-mashup” web-service from its CampusM App.

As the “mega-mashup” was designed to be called from Moodle, a companion web-service was developed to present each student with a list of Moodle areas for the Programme and Units they are studying. The App picks up the student ID and generates a short-lived security token which it uses to call the web-service that returns a list of Moodle areas (and single-sign-on links for email, timetable, coursework and library systems). This list is returned as an ATOM feed, which is transformed by the App to display a personalised list of Moodle areas with icons for email, timetable, coursework and library. Clicking any of the Moodle areas calls the “mega-mashup” web-service as if it had been called from that Moodle area, which returns an ATOM feed that is transformed for optimised display on a mobile.
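To make that flow a little more concrete, here is a minimal sketch of the client side of such an exchange. It is illustrative only: the endpoint URLs, parameter names, authentication header and feed structure below are assumptions made up for the example, not the actual CampusM, oMbiel or MMU web-service API.

    # Illustrative sketch only: endpoints, parameters and auth scheme are
    # hypothetical stand-ins, not the real CampusM / MMU web-service API.
    import requests
    import xml.etree.ElementTree as ET

    ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}

    def get_short_lived_token(student_id):
        # Exchange the student ID for a short-lived security token
        # (hypothetical endpoint, standing in for the real auth step).
        resp = requests.post("https://example.mmu.ac.uk/auth/token",
                             data={"student_id": student_id})
        resp.raise_for_status()
        return resp.json()["token"]

    def list_moodle_areas(token):
        # Call the companion web-service, which returns an ATOM feed listing
        # the student's Moodle areas (plus single-sign-on links).
        resp = requests.get("https://example.mmu.ac.uk/ws/moodle-areas",
                            headers={"Authorization": "Bearer " + token})
        resp.raise_for_status()
        feed = ET.fromstring(resp.text)
        areas = []
        for entry in feed.findall("atom:entry", ATOM_NS):
            title = entry.findtext("atom:title", namespaces=ATOM_NS)
            link = entry.find("atom:link", ATOM_NS)
            # Following this link would call the "mega-mashup" service for
            # that Moodle area, returning another ATOM feed for display.
            areas.append((title, link.get("href") if link is not None else None))
        return areas

    token = get_short_lived_token("12345678")
    for title, href in list_moodle_areas(token):
        print(title, href)

In practice the App transforms the returned ATOM feeds for display rather than printing them, but the token-then-feed sequence is the part the paragraph above describes.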

After some great rapid development between MMU’s LRT team and oMbiel, the MyMMU App was released in the App Store, Google Play and BlackBerry World, and as a mobile browser site, on Tuesday, August 21. Students can use the following download link to get the App. The App currently has over 9,500 registrations and we look forward to discovering how students react to a personalised list of assignment deadlines on their mobile!

Posted in News | 1 Comment

Why I mark on-screen

The issue of on-screen marking is emotive and, in my experience, polarising. It doesn’t seem to be something colleagues can easily take or leave, as evidenced by a recent lively exchange between members of the assessment community of practice at MMU on the pros and cons of on-screen marking. Following this really stimulating sharing of views (a welcome and much needed one), Rachel asked me to write a blog post titled “Why I mark on-screen”. When I sat down to write, the first thing that occurred to me is that I don’t always mark on-screen. Rather, I use the technologies for on-screen marking when there is a good educational reason to do so. I think it is important to set out my stall and make it clear that I see marking and provision of feedback as elements of a joined-up assessment strategy (for my units/programmes). I’m not going to get into the issues around designing valid, robust, engaging, plagiarism-deterring assessment practice in this post (vitally important though they may be); I’m simply going to explore my experience of the basic mechanics of marking on screen when I have designed its use into my assessment strategies. This is a personal experience, yours may vary considerably, and in the interests of opening up the debate I offer it as an Aunt Sally. So roll up, it’s three balls for a shilling; try and knock her head off! :mrgreen:

For starters, this is my handwriting. I hope that you are not dyslexic (as my handwriting is barely readable even to me) or a visually impaired colleague accessing this post using a screen reader or other assistive technology (because your assistive technology almost certainly won't be able to read this).

Actually, if you click on the image to view it in full, I have added alternative digital text so you can decode my handwriting (and a screen reader can use it as well ;-) ).

Forgive me for stating the obvious, but isn’t this text much easier to read? I’ve made as many spelling mistakes typing this as I did in the handwritten section, but you’d never know it (thanks to my spell checker). I’m not a touch typist, but I’ve produced this paragraph as quickly as the handwritten one (much quicker if you take into account how long it took me to get the handwriting into this post). Which is quicker, handwriting or typing, isn’t really the point anyway. What’s important, to me, is that the faster I write by hand the more unreadable it becomes (there’s quite a bit of research that supports this in general terms). So when I’m marking to a deadline, and time gets tight, my handwriting gets worse and worse. My typing might get less accurate as I type faster, but I find it is easily and quickly corrected (I can’t do that on paper). I’m also currently on a moving train, which doesn’t impede my typing; however, I had to wait until I got home to handwrite the preceding paragraph. Yes, that is my very bestest handwriting, completed at my desk, and it would have been even more spider-like had I tried to do it on the train. To get my handwriting into this post I had to write it out, scan it into my computer, crop it to a suitable size, upload it to the blog and embed it in the post.

So reason one is simple. I mark on-screen because I believe it delivers a better product for my students. Not only is the product of my on-screen endeavour more readable, more accessible and more inclusive, but it is also in a useful and versatile format, ready to go straight into an email, the VLE, a blog post or an ePortfolio system, or to be printed out and taken to the classroom.

For my sins I suffer from a chronic back problem, so thank heaven for on-screen marking! I have to manage my condition carefully, and what causes me more grief than anything is sitting on the sofa for too long, be that doing my marking or watching the telly. Marking on-screen requires me to go and sit at my computer desk, which has been properly set up with my condition in mind and has been a godsend in terms of my recovery and ongoing management of the condition.

So reason two is also simple. Marking on-screen requires me to be disciplined about where I do my marking and to adopt a healthier approach to this aspect of my work. It sometimes takes a bit of planning, for sure, but that is also a good thing as it makes me get organised.

My favoured approach to on-screen marking is to use track changes and comments under the Review tab in MS Word to annotate submissions, and then to go into more depth in the typed feedback proforma that is the norm on my programme (I also use audio recordings and video screen captures for this purpose, depending on the type of assignment – but that is another story). I find that annotating the submission first is a good way of getting to grips with the work and helps me to organise my thoughts for the more in-depth feedback in the proforma.

I’ve also learned a few tricks. There are always common issues which arise when marking a set of scripts, so I copy and paste these into a separate document and then use that to paste common feedback into later submissions as and when required. I’ve got very skilled and very quick at doing this (I’ve practised ;-) ). I know that you can do a similar sort of thing with Turnitin GradeMark, but I’m not as proficient with that (and I’m not so keen on the TII rubrics, which do not fit with the way we, on my programme, produce assessment criteria). For now I’ll stick with MS Word, as it suits the way I work, and I can still use TII to produce originality reports if I choose to do so.

So reason three is also very simple. I can do the job a lot quicker on-screen than I can by hand on paper.

A final point I’d like to make is that I sort of feel obliged to do this. I want my students to submit work that I can read, so on the whole they are required to type up their work. I don’t know how long they sit at the computer typing up their assignments for me, but even if it is only a couple of hours, I think it only right and proper that I respond in kind with feedback that is typed so that they can read it too. For me it is just good manners.

So to sum up my overall experience: I believe that appropriate use of on-screen marking results in better, more versatile marking and feedback, produced efficiently, in a healthy, organised way that shows my students I’m not asking them to work in a way that I’m not prepared to work myself.

Posted in News | Leave a comment

Students – we need to hear from you

As part of the TRAFFIC project, we are having a special look at coursework (assignment) guidelines and feedback at the moment, and this post is to make contact with students who would be prepared to share their views and experiences.

Iqra Ali, the assessment project assistant, has spent a huge amount of time looking at all the relevant comments made by students about assignment briefs and guidelines, and we’ve got a good overall picture of what students find useful, as well as what you find confusing. However, we’d like to develop that with some more in-depth responses from you. If you would be prepared to be interviewed face to face, or by phone or by email, or to complete a questionnaire, then please get in touch with Iqra at i.ali@mmu.ac.uk.

For more information about the project, read more of this blog, follow us on Twitter #mmutraffic or see our static pages.

Posted in Briefs | Leave a comment

Continuing the Conversations

Image of different assignment task outlines

I mentioned in an earlier blog post that dissemination around the baseline report was still going on and that it was a key part of our engagement strategy. This post summarises a few of the activities which have taken place in the last few weeks.

Firstly, Rod Cullen organised a webinar on the TRAFFIC project as part of a series loosely based on learning technologies. It was a good opportunity to present the project to colleagues from across the institution, get their feedback and to gauge their interest in participating in a pilot of electronic submissions. You can view the webinar, but just a warning that it does seem to need a lot of bandwidth.

Other dissemination of the baseline report continues, with a session for the School of Maths and Computing, followed by a robust discussion about assessment management, and a presentation to senior staff in the Faculty of Science and Engineering which led to some interesting discussions about managing portfolios.

The ‘assignment lifecycle’ is proving to be very useful in highlighting issues and getting discussion going, as well as helping people to visualise the connections between parts of the assessment management process and how they are interdependent. Of course they know this already, but it can be useful to be reminded of it when considering particular issues such as electronic submissions, or the prevalence of portfolios, for instance.

We also had a workshop at the HEA Annual Conference, where participants used some early versions of our assignment planning tools to discuss specifying and supporting assignment tasks in particular scenarios. We did get a graveyard slot of ‘after lunch on the last day, just before the final plenary session’, but we still had four tables of assignment planners at work, and it was good to meet those select participants who had forgone rushing off for their trains to come along and participate.

Next week I’ve got a slot at our internal conference for Student and Academic Services staff, which I’m looking forward to – pretty much everyone in that department has some involvement with assessment, whether it’s an obvious one, like receiving coursework submissions, helping students to make Exceptional Factors claims or entering assessment specifications into student record systems, or a less obvious one, such as dealing with upset students who are close to their deadlines and can’t get hold of the person they need. We’ve found from previous change management projects that it’s essential to involve people in a wide range of roles, so I’m also hoping that the session helps us to recruit a few more volunteers to get involved with the project and give their opinions on how different proposals will affect their work.

And after that, it’ll be time for some consolidation of all the ideas we’ve hoovered up during this process and some careful specification of the electronic assessment management processes.

Posted in Baseline, News | Leave a comment

HEA/HeLF seminar on Institutional Transformations on Electronic Submission

The Heads of e-Learning Forum (HeLF) and the HEA Flexible Learning team got together to organise a really useful meeting on ‘institutional transformation on electronic submission’ on Friday 8 June 2012. There was a real sense of zeitgeist about the meeting, with 50 participants and another 30 or so on the waiting list. The first presentation, by Barbara Newland, gave the results of a survey by HeLF members about various aspects of electronic submission. The full results will be made available, but my takeaway points were:

  • Few institutions have special policies on electronic submission
  • There is a sense of a ‘top-down’ push
  • Academics seem to be keen on electronic submissions, less so on on-screen marking and feedback
  • Students are very keen
  • Many, many building blocks are needed to make it successful.

Alice Bird spoke next about how LJMU has gone about electronic submission. She described a four-year process which moved from a feasibility study through a pilot to early stage implementation, and then a ‘third way’ implementation intended to meet the needs of as many stakeholders as possible. This third way requires assignments meeting certain criteria to be submitted electronically (a single file of fewer than 2,000 words, in Word or PDF format – I did think that anyone who didn’t want to use the system would just change their assignments so that they didn’t meet these criteria!). She talked about what issues each of the stages had highlighted and what support had been needed. The Sheffield Hallam assignment handler is being used with Blackboard. She said that this works really well and the only major problem has been that some staff may have had difficulties working with files and folders, which isn’t a problem with the software itself.

I think the main thing we all seized on was Alice’s description of the shifting power bases which some people perceived as an outcome of this project: away from academic staff and towards students and administrative staff. This is clearly a very important issue which needs to be tackled head-on in any innovation, and different institutions may have different approaches. The expression that Alice used about dealing with difficult issues also struck a chord, if Twitter is a good guide to interest levels: she said the best response to concerns from stakeholders is “Just do it” – set policy, then provide training and support and make it happen. Although she also emphasised the need for contingency plans! Again, I guess that this approach may depend on the institution; ‘just doing it’ may compound the issues of power shift which she identified, and some institutions may wish to work more slowly with a more consensual approach. (Note: Alice’s approach has echoes of something Mark Russell said at the iTeam webinar earlier in the year: “Just because it’s messy and clumsy shouldn’t stop us at least trying to do it”.)

After lunch, Neil Ringan presented the TRAFFIC project – as readers of this blog will be pretty familiar with this, I won’t summarise it here, but I did enjoy hearing someone else talking about what genuinely is an interesting project.

The final presentation was Matt Newcombe talking about the Exeter Online Coursework Management system. I was really interested in this, as it’s a system we are planning to pilot next year. It links into Moodle and has many of the features we want (note to the team – we’d like an offline marking option), including good support for moderating and second marking. Matt showed a process map which generated a lot of interest and talked about the outcomes of the project, which have been real improvements in the efficiency of handling assignments and great support from students and administrative staff. A really great feature is that personal tutors can pull up all of a student’s previous work and feedback via the VLE, which is a fantastic tool to improve support. (As Mark S pointed out to me, though, you would need a definitive list of tutors and tutees… we’ll add that to our ‘nice to have’ list!) Matt also emphasised that the most important part of the project had been effective communication with all stakeholders – surely a key message for all JISC assessment and feedback projects, and something we’ve been thinking about a lot recently for this project.

Finally we spent some time talking about actions – I did write down a few for myself (1: get Matt Newcombe’s process map!) but a full list will be coming out shortly. One thing for all of the JISC Assessment and Feedback projects to think about might be terminology: Cath Ellis from the eBeam project at Huddersfield made a plea for us to use the term ‘Electronic Assignment Management’, or EAM, in a consistent way. Worth thinking about. And it’s worth noting that there seemed to be a clear message that students really like electronic submission (or EAM).

The tweets from the day have been storified for those who like that kind of thing.

Update (26/06/12): the presentations from the workshop are now available on the HEA website.

Posted in News | 1 Comment

Coding and decoding

QR code for the TRAFFIC blog
My bookshelves contain a number of books on qualitative analysis, and yet somehow none of them ever quite seems to deliver what I want. Part of the reason for this dissatisfaction is that I teach a research methods module and I’m always on the lookout for the single ‘right book’ for participants in the module. However, I also have my suspicions that it says something about me and my approaches to social science research. Nobody ever told me to read up on scientific method when I was doing my PhD – my ‘methods’ chapter described how I designed my lab apparatus (a handy little device to bend pieces of PMMA (Perspex™) by an easily measurable angle, since you ask). But without going into a lot of autoethnographic reflection on what it means to have trained as a scientist and to find oneself doing social science research, I’ve decided on a new set of criteria for selecting books on qualitative analysis. In future, whenever I look at one, I’m going straight to the index to see if it has any sections on coding and, if so, whether they cover actually developing a code. If not, I’ll read no further.

Coding is preoccupying me at the moment, in terms of the analysis of large quantities of qualitative data. Every year, we get around 9,000 comments from the National Student Survey. We’ve generally used these comments to identify and develop institutional development themes, and have left departments and course teams to worry about the meanings of the few comments which can be attributed to them individually. But this year, everything has stepped up several notches in the ‘difficult to do’ area. I suppose it’s logical that, in the throes of other large-scale institutional change, we should ask students in more detail what they think of their experiences, so we now ask every student to complete a survey for every module that they take. Twice a year. The response rate has been good, and the result is that there are now 72,000 comments to look at for this academic year alone.

The intention was that these comments would be very useful at module level, and they are, but when you are doing a big institutional change project you might also want to see if there are any patterns to be seen on a larger scale. In order to find patterns, you need to code the responses. And in order to code the responses, you need a reliable code. This code is not available in a book or another paper (probably), unless someone else has done a very similar study before, although the literature does point to certain themes which are important. So, the first job is to write the code. You can have a rough go at this based on the aims and objectives of your project, but you can’t get into the detail without reading some of the comments first. It’s an iterative process: write code, start coding, add to code, go back and check that the original comments are still accurately coded, continue coding, and so on. So one thing I want from a qualitative analysis textbook is a good explanation of what kinds of codes are valid and what kind of thinking goes into making that judgement*.

So all of that was to introduce the fact that I have developed a rough code for considering student comments about assessment from our internal surveys and that it would be quite good to have some feedback on its validity or otherwise. For this particular part of the study, we are trying to work out how much information we need to capture about assignments but there may be other useful assessment information which we want to extract at the same time, so the themes are fairly broad. We can link individual comments back to particular units, so these codes will also help us to locate examples of good practice identified by students.

  • Engagement: interesting, fun or enjoyable, pointless
  • Relevance to course / alignment to learning outcomes: learned a lot from assignment, well aligned to course/relevant, assignment explicitly supported by classwork, would like more on assignments in classes
  • Organisational: well timed, badly timed, too complicated, good information/briefing provided, not enough information provided, clear information/briefing, unclear information/briefing
  • Feedback: feedback was useful, feedback was not useful
  • Choice of task: liked type of assessment, didn’t like type of assessment
  • Not included in analysis: there were no comments relevant to assessment
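
As a rough illustration of how a scheme like this might be applied to a large set of comments, here is a minimal sketch in which each comment is tagged with any theme whose keyword list it matches. It is only a sketch: the keyword lists are invented stand-ins for the real code book, and the actual coding described in this post was done by reading the comments, not by keyword matching.

    # Illustrative sketch only: the keyword lists below are invented stand-ins
    # for the real code book, which was developed and applied by hand.
    CODE_BOOK = {
        "Engagement": ["interesting", "fun", "enjoyable", "pointless"],
        "Relevance to course": ["learned a lot", "relevant", "aligned",
                                "supported by classwork"],
        "Organisational": ["well timed", "badly timed", "too complicated",
                           "briefing", "information"],
        "Feedback": ["feedback"],
        "Choice of task": ["type of assessment"],
    }

    def code_comment(comment):
        """Return the list of themes a single free-text comment touches on."""
        text = comment.lower()
        themes = [theme for theme, keywords in CODE_BOOK.items()
                  if any(keyword in text for keyword in keywords)]
        return themes or ["Not included in analysis"]

    comments = [
        "The assignment was really interesting and well timed",
        "Feedback was not useful and arrived too late",
        "Lectures were great",
    ]
    for c in comments:
        print(code_comment(c), "-", c)

The iterative part described above (write code, start coding, add to code, re-check earlier comments) would correspond to revising CODE_BOOK and re-running the pass over comments already coded.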

The story of how the 70,000 comments were sifted to identify those relevant to assessment is for another day. For the moment, any comments on these codes? What obvious themes have I missed?

*For the record, Rapley, T. (2011), ‘Some Pragmatics of Data Analysis’, in Silverman, D. (ed.), Qualitative Research: Issues of Theory, Method and Practice. London: SAGE, meets my ‘makes sense to a scientist’ test.

Posted in Briefs | 3 Comments

Send up three and fourpence, we’re going to a dance (*)

A large audience, by LeWEB on Flickr, CC licensed for reuse

It seems a long time since we completed the baseline report, but we are still working through all of the dissemination activities we planned around it. This is partly because of the timescales of the committee system, but also just because this is a big institution full of busy people who are already managing all the usual business of teaching and research alongside a huge academic change project. It has been interesting trying to balance the dissemination so that particular audiences don’t get overwhelmed with more information than they need, while also making sure that we spark enough connections to find out about all the good practice which may inform the project – much of which is buried in an institution this size (did we already mention the 600,000 pieces of assessment per year?).

So we have found ourselves talking about the project in a variety of different contexts: one-to-one meetings with key people, formal committees such as the Academic Development Committee and its Faculty equivalents, groups of senior managers, ‘captive’ groups of staff at professional development events, online with our Community of Practice for Assessment, and so on. We’ve also put up a simple web page with a link to the report (MMU only) and encouragement to contact us and to get involved. The main questions are always: “Is the baseline report accurate? Does it reflect your experiences of assessment at MMU? Do you think the proposals will take us where we want to go?”

I was braced for weary responses from colleagues (“not another change project…”), but actually the responses to the questions have been that the report does reflect their experiences (with a lot of useful accompanying stories of how the potential for inconsistency is mitigated at local level) and that they are happy with the proposals. In fact, some colleagues have even gone so far as to welcome them (steady…). We’ve also got plenty of volunteers for various pilots.

So at this point it looks as though our strategy of putting a lot of effort into dissemination of the baseline report has been a good one. A lot of people know about the project, although I can’t believe everyone has had time to read the full report, and there is a general undercurrent of support. This is a good start, even if we are now eight months into the project and perhaps on other projects we would have moved on from the baseline by now. In reality, of course, the project team has in parallel been getting on with some of the underpinning work to support the next phases. Neil and Rod have developed a clear model for the processes of assessment (Word file) and everything we now do will relate to the eight stages they’ve modelled. We’re using the term ‘mini-project’ to describe different parts of the main project and we’ve got three of these up and running, with others following closely behind. It’s going to be really important to keep telling people about progress on these, too – we must make time for it.

As well as the internal dissemination, as Rod noted last month, we presented some of the findings of the baseline report at the Association of Business Schools Learning and Teaching conference. We will also be talking about the project at an HEA/HeLF (Heads of e-Learning Forum) seminar in June, at the HEA conference in July and at ALT-C in September (all of which are in Manchester, so no potential for exotic travel, but we don’t mind because obviously Manchester is a fantastic destination).

(*) Younger readers may find this an obscure cultural reference, which it is. See this page for a short explanation.

Posted in Baseline | 2 Comments

Of course….

Bee on flower, image by Moosealope on Flickr

I was at a meeting about something else the other day and one of the Deans present said something which resonated strongly with the TRAFFIC project: “whatever new things you ask us to do, they must take work away from staff”.

This got me thinking. There is an inevitable defensive reaction to this kind of comment, which is “well, of course we wouldn’t do that”, but a lot of the issues around academic workload are clouded by your perspective. A Dean is besieged by people telling him or her how busy they are, and constantly requested to get those staff doing more of the things other than teaching and assessment (third stream! International! Consultancy! REF! Outreach!). It may also seem axiomatic, from this perspective, that any developments imposed by ‘The Centre’ must add to workload and ought to be treated with appropriate suspicion.

In general, of course those of us working on the TRAFFIC project want the whole process of assignment management to be as efficient as possible – that is to say, that staff shouldn’t have to spend longer than necessary on it, and students should receive feedback as quickly as possible. I’m sure there are systems developments which can help with this, and of course nobody wants to make extra work for the sake of it.

However, from a quality and student experience perspective, I think there is a certain minimum set of assessment-related activities which need to be done in order to achieve assessment for learning. Many academic staff do spend hours and hours doing these activities, to tight deadlines and to a high standard. But these activities are rarely ‘budgeted’ for in academic workload planning, which tends to be based on the number of hours of contact with students, with some rounding up to include preparation and marking, and in some cases an allowance for the number of students on the course. From that perspective, it isn’t rational to spend too much time on marking, feedback and moderation, because nobody is counting this time when they consider your workload. (Fortunately for the functioning of the institution, most teachers seem to be irrational about this aspect of their work.)

What I’m getting to is that there isn’t going to be any ‘of course’ about the final recommendations of the project. Implementing better assessment principles across the institution is unlikely to ‘take work away’ from everyone. Some people may end up ‘getting permission’ from the better principles we hope to develop to reduce their workload, because some of what they do now turns out to be ineffective in supporting assessment for learning (if you want examples now, then let’s say feedback written for the external examiner, copious feedback on a huge piece of work of a type which will never be repeated, complicated assignments which students struggle to understand…). But I think it’s possible that some people may need to do extra work, simply because that work needs to be done and isn’t done now. So how is the TRAFFIC project going to manage that issue? It’s something we need to give very careful thought to as we move to detailed proposals.

 

Comments welcome, and the image of the bee is intended to symbolise the hard and generally effective work most academic staff do around assessment, much of which goes unrecognised.

Posted in Baseline, e-submission, News | Leave a comment