Working with partners: Talis Aspire and Equella

A distinguishing feature of our distributed VLE is the integration of a range of complementary learning technologies (Moodle, Equella, Talis Aspire, campusM, etc.), based on:

  • web services; and
  • consistent naming of data within different systems.

In consultation with library colleagues, we devised a workflow for including digitized materials (stored in Equella) in reading lists held in Talis Aspire and publicised in Moodle.
Digitized Materials Workflow
As part of the new Unit (module) approval procedures introduced to support MMU’s EQAL curriculum transformation initiative, tutors are required to provide reading lists that distinguish between items to buy, essential reading and further reading. Where essential or further reading can be delivered digitally, tutors are encouraged to use electronic sources and, within the terms of the institution’s Copyright License Agreement, tutors and library staff have been identifying chapters and articles for digitization.

Library colleagues are agreeing a format for the “notes for librarians” field that will enable clear digitization instructions to be captured against item entries on Unit reading lists. All Talis Aspire lists are reviewed prior to publication. If digitization requests are encountered, the chapter or article is scanned, uploaded to Equella, tagged with the Unit code and, finally, the Talis Aspire list item is updated with the Equella URL.

To make the outcome of this workflow as easy as possible for students, we wanted the Talis Aspire link presented in Moodle to be single-sign-on.

In an earlier post, we described a Talis Aspire integration scenario that went beyond the sample code provided for Talis Aspire. We wanted to supply a Unit (module) code and an academic year identifier and retrieve titles and URLs for the items on the reading list for that unit and display the content in Moodle. We prototyped a solution for this which initially parsed the RDF for the unit code to identify the URI for the relevant list and then parsed its XHTML representation. As prototyping began to move towards production deployment we started looking for performance improvements in our code.
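The two-step lookup our prototype performed can be sketched as follows. This is an illustrative sketch only: the RDF predicate (`resourcelist:contains`) and the XHTML markup pattern (`class="item-link"`) are simplified assumptions, not the exact formats Talis Aspire emits.

```javascript
// Step 1: scan the unit's RDF for the URI of its reading list resource.
// The predicate name below is an assumption for illustration.
function findListUri(rdf) {
  var match = rdf.match(/<resourcelist:contains\s+rdf:resource="([^"]+)"/);
  return match ? match[1] : null;
}

// Step 2: scan the list's XHTML representation for item titles and URLs.
// The anchor markup pattern below is likewise an assumption.
function extractItems(xhtml) {
  var items = [];
  var re = /<a\s+class="item-link"\s+href="([^"]+)"[^>]*>([^<]+)<\/a>/g;
  var m;
  while ((m = re.exec(xhtml)) !== null) {
    items.push({ url: m[1], title: m[2] });
  }
  return items;
}
```

In production the two fetches would be chained: retrieve the RDF for the unit code, call `findListUri`, fetch that URI's representation, then call `extractItems` to build the Moodle display.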

On Tuesday, May 24, colleagues from Talis Aspire were on site for a project catch-up. We agreed a new naming convention for identifying versions of lists for particular academic years, provided data for a new Unit hierarchy that reflected this convention, and raised our two technical challenges with Chris and Ian:

  • providing single-sign-on links to Equella resources on Talis Aspire lists when those lists are displayed in Moodle
  • speeding up access to list item titles and URLs for a given Unit code and academic year

Chris demonstrated a new feature of Talis Aspire that allows javascript widgets to be added to Talis Aspire pages, where they can interact with the page’s content or extract data from its calling URL.

After 35 minutes of agile development, Chris had produced, and deployed to our Talis Aspire tenancy, a widget that could retrieve a single-sign-on token from the querystring of a list item page and append it to the reading list item URL used to access material in Equella. Meanwhile, Alex had modified the web service used to publish data in Moodle from Talis Aspire and Equella so that links to Talis Aspire items were appended with a short-lived token granting single-sign-on access to Equella.
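The widget’s job can be sketched roughly as below. Function names, the `token` parameter name and the URL shapes are illustrative assumptions, not the actual Talis Aspire or Equella formats:

```javascript
// Read the short-lived SSO token from the page's query string
// (the "token" parameter name is an assumption for illustration).
function getTokenFromQuery(queryString) {
  var m = queryString.match(/[?&]token=([^&]+)/);
  return m ? decodeURIComponent(m[1]) : null;
}

// Append the token to an Equella resource URL so the student is
// already signed in when they follow the link.
function appendToken(equellaUrl, token) {
  if (!token) return equellaUrl;
  var sep = equellaUrl.indexOf("?") === -1 ? "?" : "&";
  return equellaUrl + sep + "token=" + encodeURIComponent(token);
}
```

The key design point is that the token is minted server-side and is short-lived, so the rewritten link grants access only briefly and only to the holder of the page.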

Steve, Chris + Alex in agile dev mode

In just 35 minutes, powered by enthusiasm, coffee and a large tub of chocolate, we had achieved our aim of seamless access from Moodle via Talis Aspire to digitized journals and chapters stored in Equella!

Chris also mentioned that Talis Aspire supported CSV representations of lists, as well as the XHTML representations we had been parsing in our web service. We found the CSV representation saved over 200ms per call, a noticeable improvement, and have incorporated this change into the production codebase.
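Consuming the CSV representation is also much simpler than parsing XHTML, as this sketch suggests. The column headings (“Title”, “Web Address”) and the flat comma-split are simplifying assumptions; the real Talis Aspire columns may differ, and a production parser must handle quoted fields containing commas.

```javascript
// Pull item titles and URLs out of a CSV list representation.
// Assumes header row with "Title" and "Web Address" columns and
// no embedded commas (naive split, for illustration only).
function parseListCsv(csv) {
  var lines = csv.trim().split(/\r?\n/);
  var headers = lines[0].split(",");
  var titleIdx = headers.indexOf("Title");
  var urlIdx = headers.indexOf("Web Address");
  return lines.slice(1).map(function (line) {
    var cols = line.split(",");
    return { title: cols[titleIdx], url: cols[urlIdx] };
  });
}
```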

Our distributed VLE depends on integrating solutions from a number of different partners, and the kind of working relationship described in this post is a key enabler for delivering our DVLE vision.

Responses from 100 device-led student interviews

Between January and March 2011, one hundred students were interviewed about their use of technology and places of study at Manchester Metropolitan University (MMU). The interviews took place at locations across MMU and, as far as possible, students were selected in order to give a course and gender balance that reflected MMU as a whole. Students were asked to talk about the technology that they had with them. Those who took part were rewarded with a £5 print credit voucher.

Detailed analysis of the results and reflection on questions will follow, but this post presents preliminary analysis of headline results for students’ use of technology in their studies. Please note accuracy of data is yet to be validated by the research team, and use of specialist equipment, for instance Apple Mac computers available to Design students, is yet to be factored in. These results are presented as a preliminary indication only and should not be considered safe for citation.

A link to the interview schedule is available here.

Of the 100 students interviewed, 98 had brought a mobile phone with them and 45 had brought a laptop (or netbook). Questions concerned use of the devices *for* learning and study.

Frequency of Technologies Used In Learning 20110610

Used in learning    Mobile  Laptop  Desktop
10+ times/day           15       4        0
6-9 times/day            2       4        0
3-5 times/day           13      13        2
1-2 times/day           17       6        7
5-6 times/wk             6       8        8
3-4 times/wk            14       6       10
1-2 times/wk            15       1       17
less than 1/wk           6       2        6
Total                   88      44       50

Locations for Technology Use In Learning 20110610
Accessed from   Mobile  Laptop  Desktop
uni                 79      31       40
home                81      43       21
work                32       4        1
train/bus           67      13      n/a
café/pub            57      15      n/a

Technologies Used In Learning 20110610
Supporting learning with   Mobile  Laptop  Desktop
calls                          62       2        0
texts                          76       0        0
e-mail                         49      42       42
social networking              45      37       31
web                            47      43       44
uni portal                     31      43       40
uni VLE                        25      42       38
e-books/journals               11      18       22
blogs                          13      17       15
YouTube                        21      36       25
podcasts                        9      17        9
music                          11      14        7
films                           2      20        7
TV                              3      24       12
Apps                           21       0        0
flickr                          4       5        1
taking photos                  28       2        1
taking video                   10       1        0
games                           5       1        0
dictionary                      2       0        0

Initial Observations:

If the number of mobiles used for email and web is taken as an indicator of smartphone ownership amongst the students interviewed, then the figure of 49/98 (50%) is practically identical to the figure obtained in the online survey undertaken in October 2010 (496/982 = 50.5%).

Despite popular opinion that students use social networking rather than email for communication, more of those interviewed used email for learning-related communication:

Supporting learning with   Mobile  Laptop  Desktop
e-mail                         49      42       42
social networking              45      37       31

Interestingly, email use was more common on mobiles than on laptops or desktops.

Reflection on questions and responses in the online survey and interviews:

For future studies, it could be useful for research instruments to elicit responses for different (but sometimes overlapping) categories of technology-use for learning:

  1. Discussing and arranging course work
  2. Accessing course deadlines, timetables, briefs and feedback
  3. Discovering and accessing learning materials
  4. Producing and submitting course work
  5. Maintaining a personal study environment

An initial skim of the qualitative interview data suggests that mobile messaging, particularly texts and BlackBerry Messenger (BBM), is used extensively for category#1.

Category#2 access to e-admin information to support learning emerged as the top priority for institutional mobile development in the larger (982-respondent) online survey.

Prompts in this questionnaire could have done more to elicit responses about category#3 and category#4 use, but it is interesting to see, in both the quantitative data and the qualitative responses to technology and study-space questions, that a number of students place importance on playing music while studying (category#5).

As ever with W2C, we look forward to feedback and further ideas generated by this post.

CETIS Widget Bash

On returning home from a productive and enjoyable two days in Bolton, I’m rather pleased to see that our PC Availability widget displays the free PCs in the MMU drop-ins in a different order now that I’m in South Manchester. Our widget became location-aware at 13:47 GMT on March 24 and, since popular culture would counsel against pulling the plug, I’ll try to share how we moved things on from the version we described in our previous post, which is currently running in WordPress to the right of this article.

On the first day of the CETIS event, wookie champion and JISC OSS Watch Service Manager Ross Gardler described how widgets could be enhanced using open source javascript libraries, such as geo-location-javascript. While Ross was presenting I added a reference to the geo-location-javascript library to the html and some sample geo code to the pc.js file in our PC Availability widget and found that most browsers would disclose latitude and longitude after checking first with the user.

[codesyntax lang=”javascript” title=”Geo-location code added to pc.js”]

// Updated the Controller.init function to request a position fix via
// geo-location-javascript, falling back to a plain update if unavailable
init:function() {
	if (geo_position_js.init()) {
		geo_position_js.getCurrentPosition(Controller.success_callback, Controller.error_callback);
	}
	else {
		Controller.update();
	}
},

// Added Controller.success_callback to capture the coordinates
success_callback:function(p) {
	Controller.coords = '?latitude=' + p.coords.latitude.toFixed(2) + '&longitude=' + p.coords.longitude.toFixed(2);
	Controller.update();
},

// Added Controller.error_callback to carry on without coordinates
error_callback:function(p) {
	Controller.update();
}


Having found that I could get the latitude and longitude of a mobile device, I was keen to see if these values could be used to find the nearest available drop-in PCs. Whilst the processing could be done client side, I decided it would be more flexible for the future if our PC Availability web-service were able to receive latitude and longitude as query parameters and order its results based on proximity.

We already had geo-location data for our drop-in facilities from our work with oMbiel’s campusM mobile app, so Kieron kindly extended the table that holds the list of drop-in facilities (an MS SQL Server table known as stu_services) to include two extra fields: lat and long. I then needed to modify the C# code for our web-service to:

  1. Modify the Windows Communication Framework web-service definition to take latitude and longitude params
  2. Set the default sort order as alphabetic by location
  3. Test if the service had been called with valid latitude and longitude values and, if it had, set the sort value based on a Pythagoras calculation (which I know is less accurate than the Haversine formula, but should be adequate for our purpose)
  4. Modify the SQL query to incorporate the sort criteria
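The adequacy of the Pythagoras simplification in step 3 can be sanity-checked with a quick sketch: over campus-sized distances, the planar calculation orders points the same way as the more accurate Haversine great-circle distance. The coordinates below are arbitrary example points, not real MMU locations.

```javascript
// Plain-degree Pythagoras (squared), matching the style of the SQL sort:
// only relative ordering matters, so the square root can even be skipped.
function planarDistSq(lat1, lon1, lat2, lon2) {
  return Math.pow(lat1 - lat2, 2) + Math.pow(lon1 - lon2, 2);
}

// Haversine great-circle distance in metres, for comparison.
function haversine(lat1, lon1, lat2, lon2) {
  var R = 6371000; // mean Earth radius in metres
  var toRad = Math.PI / 180;
  var dLat = (lat2 - lat1) * toRad;
  var dLon = (lon2 - lon1) * toRad;
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

Note the planar version treats a degree of longitude as equal to a degree of latitude, which overweights east-west separation at Manchester’s latitude; for ranking a handful of drop-ins a few hundred metres apart, both formulas produce the same order.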

The C# for step 1 was:

[codesyntax lang=”csharp”]

        [OperationContract, WebGet(UriTemplate = "getPcAvailability?latitude={latitude}&longitude={longitude}", ResponseFormat = WebMessageFormat.Xml)]
        PcAvailability getPcAvailability(string latitude, string longitude);


The C# code I used within the getPcAvailability method for steps 2 and 3 was:
[codesyntax lang=”csharp”]

double dblLatitude = 0;
double dblLongitude = 0;
string sort = ", location ";

// Only switch to a proximity sort if both parameters parse as doubles;
// the TryParse check also guarantees the strings embedded in the SQL
// fragment below are purely numeric
if (Double.TryParse(latitude, out dblLatitude) && Double.TryParse(longitude, out dblLongitude))
   sort = ", SQRT(POWER(CAST(longitude AS REAL) - CAST('" + longitude + "' AS REAL),2) + " +
            " POWER(CAST(latitude AS REAL) - CAST('" + latitude + "' AS REAL),2)) ";


And the modified SQL for Step 4 was:

[codesyntax lang=”csharp”]

                SqlCommand myCommand = new SqlCommand("" +
                "SELECT location " +
                ", 	rid " +
                ", 	info " +
                ",  	latitude " +
                ",  	longitude " + sort +
                ", 	SUM(free) as 'free' " +
                ", 	SUM(pool) as 'pool' " +
                "FROM " +
                "( " +
                "SELECT usage.rid as rid " +
                ",	stu_services.location as location " +
                ", 	stu_services.info as info " +
                ",  	stu_services.lat as latitude " +
                ",  	stu_services.long as longitude " +
                ", 	count(*) as 'free' " +
                ", 	0 as 'pool' " +
                "FROM 	usage " +
                "INNER JOIN stu_services " +
                "ON usage.rid = stu_services.rid " +
                "WHERE InUse='NO' " +
                "GROUP BY usage.rid " +
                ",	stu_services.location " +
                ", " +
                ", " +
                ",  	stu_services.long " +
                "UNION " +
                "SELECT usage.rid as rid " +
                ",	stu_services.location as location " +
                ", 	stu_services.info as info " +
                ",  	stu_services.lat as latitude " +
                ",  	stu_services.long as longitude " +
                ", 	0 as 'free' " +
                ", 	count(*) as 'pool' " +
                "FROM 	usage " +
                "INNER JOIN stu_services " +
                "ON usage.rid = stu_services.rid " +
                "GROUP BY usage.rid " +
                ",	stu_services.location " +
                ", " +
                ", " +
                ",  	stu_services.long " +
               ") status " +
                "GROUP BY location " +
                ",	rid " +
                ",	info " +
                ",  	latitude " +
                ",  	longitude " + sort +
                "ORDER BY " +
                " 6,1 " +
                "", myConnection);


… and it worked (on our test box at least)!

Now to modify the pc.js file of our widget to append the latitude and longitude query string parameters to the web-service URL if the device can provide them. After experiencing the benefits of the Firebug Firefox add-on as a javascript debugger, I eventually ended up with a re-worked pc.js file that worked when zipped as a widget and deployed to Wookie:

[codesyntax lang=”javascript” title=”Our location-aware pc.js widget file”]

var Controller = {

	coords: "",

	init:function() {
		// Ask geo-location-javascript for a position fix; fall back to a
		// plain update if geo-location is unavailable
		if (geo_position_js.init()) {
			geo_position_js.getCurrentPosition(Controller.success_callback, Controller.error_callback);
		}
		else {
			Controller.update();
		}
	},

	update:function() {
		// Proxify the web-service URL, with coordinates appended if we have them
		var loc = Widget.proxify("" + Controller.coords);
		$.ajax({
			type: "GET",
			url: loc,
			dataType: "xml",
			timeout: 1000,
			complete: Controller.parseResponse
		});
	},

	parseResponse:function(response) {
		var rooms = $("#rooms-listview");
		rooms.empty();
		$(response.responseXML).find("room").each(function () {
			rooms.append($("<li/>").text($(this).attr("location") + ": "
				+ $(this).attr("free") + "/"
				+ $(this).attr("seats") + " free"));
		});
		rooms.listview("refresh");
	},

	success_callback:function(p) {
		Controller.coords = '?latitude=' + p.coords.latitude.toFixed(2) + '&longitude=' + p.coords.longitude.toFixed(2);
		Controller.update();
	},

	error_callback:function(p) {
		Controller.update();
	}
};

This code needs tidying – the web-service URL should be read from a properties file, etc – but hopefully this quick post will help maintain the excellent spirit of community development that everyone enjoyed at the CETIS Widget Bash. Thanks to Sheila, Li, Sarah, Ross and Scott for organizing and to all who attended for making it such a valuable event.

Running with Wookie

When we first looked into getting involved with the Widget revolution we wanted scalable widgets that could enhance the student learning experience and be deployed on a range of different platforms.

Walking with Wookie

Er… thought you said Running with…

Well, when we first looked into getting involved with the Widget revolution we wanted scalable widgets that could enhance the student learning experience and be deployed on a range of different platforms. Indeed, we still do. We are in the process of deploying Moodle, hosted by the University of London Computing Centre (ULCC), as our institutional VLE, and were attracted to the potential of widgets as a way to enhance the VLE and be available on mobile devices. We realized that if widgets were to be part of our core offering, we’d need a widget server that could handle multi-thousands of hits in a short time interval, so began exploring Wookie with that in mind.

I proceeded to put a test platform together on our existing webserver and opted for the Wookie install that utilises Tomcat and MySQL so that we could potentially load-test the platform at a similar level to our existing web platforms.

I have to say the process wasn’t attractive or easy. “Dependency hell” took over fairly soon: trying to determine which JDK to run out of the three that lived on our system (an Ubuntu derivative), and making sure we had the right JDBC driver and that it could talk to MySQL (which also had to be set up correctly). That said, much of this really depends on the platform you wish to run Wookie on, the variant or distro and so on. Having revisited the Wookie trunk in recent weeks, I can say great strides have been made in only a year in making Wookie easier to install. You still really have to beware of the various Java JDKs out there – sticking with the Oracle one would seem to be safe advice.

The development team are fantastic, frenetic and focused – I cannot recall many other software or platform development projects I have been involved with that have released so many upgrades, patches and fixes over such a short period. Wookie itself is still a project under Apache incubation – in theory a kind of beta state. This means you need to expect a certain amount of work to get the software going on a system of your own. In my struggles I was able to visit Paul and Scott (and also Scott’s interesting personal page) in Bolton and get a couple of errors sorted out to enable our install to function. This actually fed back into the process at the time: a script hadn’t performed as it should, was corrected, and the nice new working version made its way back into the build. Just goes to show how fast communities can rectify problems!

Result – a working copy of Wookie on a server base.

Problems with the Server-based install

At some point I opted to patch our system (as you should, once everyone else has done it and found the bits that break!). ANT, an essential component, climbed a couple of revision levels and promptly broke the Wookie install script. This is kind of expected behaviour when working at the beta end of things, but it came at a time when we needed to start working on our widgets: not good! The team have been working to sort out all sorts of similar issues throughout the year, but with increasing interest in developing our own widgets to deploy into our enhanced VLE platform (check out Mark Stubbs’ recent post Writing Widgets) we revisited a server-side copy of Wookie. ANT is still not working – what to do?

Running with Wookie – use the Wookie standalone install

Given that our project really needs to focus on the development of widgets, a rethink was needed: apply our efforts to writing widgets, not to messing with Wookie. Whilst visiting the developers, Scott and Paul, in Bolton about some issues with our server-based installation, I noticed that the folks there are essentially all developing against local standalone copies of Wookie.

I decided to give it a try. Simply put, it’s the fastest and easiest way to get Wookie going. I have no stats to tell you how robust the standalone copy is when operated within a typical server situation, but thus far, for our tests, it has remained up and viable.

So – what are the crucial benefits of using the standalone copy?

Firstly, ease of install. It can be as quick as this:

[codesyntax lang=”bash” lines=”no”]

svn co
ant run


an example of a Running Wookie Instance in MSDOS cmd shell

Several lines of text later, after SVN has copied you the latest trunk build of Wookie and ant has performed a great many tasks (including downloading Ivy), Wookie will instantiate and start logging to the screen.


There are some pre-requisites here though:

  • An accessible JRE (Java Runtime Environment) on your machine
  • a copy of Apache ANT (for windows – you can check out WinANT which takes out some of the grief in getting this going and setting environment paths and so on, and has the added bonus that you can get an archived copy of WinANT 4.1 – the last 1.7.x release before ANT migrated to the 1.8 series that leaves your Wookie broken and downhearted).
  • Subversion
Wookie's first menu

Once it’s up and running you will have a nice fast localised copy of Wookie running with all the default settings (so the Admin user and password are java & java respectively).

You can fiddle with some settings in the config files that reside at the ‘root’ level of the trunk directory. The best place to get information on these is the Wookie site.


Flushed with this success, I determined to remove the schizoid and out-of-date copy of Wookie we have running on the LRT server to see if a standalone copy might work better. The LRT server – like others at MMU – is behind the MMU firewall. At the time I built it, we were not allowed to be added to the bypass group that permits servers to communicate with the WWW without going via a proxy – which usually makes a mess of all sorts of things at server level. So the original install of Wookie had entries in the config to add the proxy by hand and, down the line, that was quite possibly getting in the way of examining accurate widget behaviour.

Nowadays the whole server is in the bypass group, so in theory a fresh install and a standalone server should have widgets that can communicate with resources out on the web without judicious proxy messing.

Wookie's Demo Widgets

Et voilà – it works. We now have a ‘standalone copy’ running on a server, hosting widgets that now actually tell me what the weather is in Manchester (and no, it’s actually sunny!).

Running with Wookie – Tips for maintaining a standalone install

The problem is that we are running this on an actual server. A server is designed to sit there without someone nurse-maiding an instance of a program running in a terminal or shell; any reboot, or any premature death of that shell (like killing the SSH session from my machine to the server in which I actually launched Wookie), is going to see Wookie die a death. On a reboot, it’s not going to come back alive.

The standalone doesn’t act like a standard daemon in Linux. Once you hit ant run, it’s going to run and stay resident in the shell window you typed that command in – logging to that window, with no escape from it without killing Wookie with a control-C.

So – how do we manage?

SCREEN is a wonderful tool that has, after some years, now wormed its way onto standard distros for *NIX and even MacOS X. It allows you to run virtual terminals that stay resident in memory, have full TTY access and a witty set of escape sequences that, at a base level, allow you to essentially kick a terminal into life, run a command and virtually exit the terminal. Next time you log in you can reuse the Screen command and attach to that very same terminal you left – and, barring a reboot of the box – it will still be there with your program running merrily away and not the least bit bothered that you wandered off and left it.

Check this command out:

[codesyntax lang=”bash” lines=”no” blockstate=”expanded”]

screen -S "wookie-running" -d -m ant run -Drun.args="initDB=false"


Whoa neddy! That’s a lot of arguments; here is an explanation:

  • screen – the command we need
  • -S “wookie-running” – our instance of screen will have a nice English-language name that we can refer to it by
  • -d -m as a couplet – these commands allow screen to do its stuff, run your command, and automatically come back out to the terminal you are currently in – auto-detaching itself in other words. The cunning part here is that your stuff is still running in the background in memory
  • all the rest is the standard wookie command to run without destroying our existing DB

You can nip back into your virtual terminal by the following command:

[codesyntax lang=”bash” lines=”no” blockstate=”expanded”]

screen -r wookie-running


except you need to swap wookie-running out for whatever name you gave the screen session with the -S command. Hey presto – you will shoot right into the stream of log lines Wookie has been dutifully dumping to the standard IO of the terminal.

Getting back out? A doddle: hold control-A and hit d.

This will detach the session and take you back to your original terminal without killing it. If you want to kill the session, hit control-C to kill Wookie and then control-D to exit the virtual screen.

You might take to this method of doing things. I’ll stick in some links to good resources relating to Screen as soon as possible; it is extremely powerful and indeed programmable too. I will leave you with one final very useful command in case you do decide to run a few screens:

How to see how many screens you have established:

[codesyntax lang=”bash” lines=”no” blockstate=”expanded”]

lrt ~ # screen -ls

There are screens on:

20486.wookie-running    (03/21/2011 10:58:06 PM)        (Detached)

17838.stats     (03/01/2011 10:25:58 AM)        (Detached)

15863.backup    (02/24/2011 09:30:26 AM)        (Detached)

3 Sockets in /var/run/screen/S-root.


Finally, we haven’t really solved the issue of making sure Wookie is running following a server reboot. This isn’t a massive problem, so for the moment I will just suggest that the screen command can be put into a standard init shell script. It must either be properly pathed, or execute a ‘cd’ command to the Wookie trunk before executing the screen portion. I will trial this shortly and put the additional steps into this article.

Lessons Learned

Really, if you are testing something, it’s not always best to go straight for the platform you think you will eventually need to run it on in anger. Luckily for us we had time to adapt, chuck out the stuff we didn’t need and come back to the problem from a fresh angle.

If running the server itself is your main angle (maybe you are a sysadmin), then invest some time on the Wookie pages and the email group finding out the best combination of dependencies – JDK, ANT, JDBC etc. – and really consider whether you even need MySQL versus the built-in Derby engine. These choices may change depending on the extent of your target audience and the number of hits you expect Wookie to cope with. If you expect to hang a major system off the back of this, remember: it’s in beta/incubation, and hiccups are going to happen from time to time despite the best efforts of the developers.

If, on the other hand, you are really more interested in developing the widgets themselves, just go with the easiest option and use the Wookie standalone. It’s far less messy, and you can be up and running developing widgets within ten minutes or so.

Caveats and other Witty Considerations

I have used the term standalone here to indicate the difference in how the platform behaves, rather than to suggest that you could use Wookie in this way off the net. When you type ant run, ANT is going to try to nip off and find updates to Ivy, and it will fail miserably if you aren’t on the net, or are sitting behind a proxy and haven’t set the correct proxy variable. Here are a couple of pointers:

Running ANT in offline mode

[codesyntax lang=”bash” lines=”no”]

ant run -Doffline="true"



Setting the Proxy variable for ANT (behind a proxy)

[codesyntax lang=”bash” lines=”no”]

ant run -autoproxy


Read a whole lot more about Ant and proxy here

Due Thanks to…

Scott, Ross, Paul and Co

Please dip in with comments, this post was put together over a few days including two great days at the Bolton CETIS Widget Bash, so most likely there are some lapses of concentration! Corrections gratefully received.

Now, give it a go, give the guys a go and get active. Get hold of Wookie, read about Widgets, grab some, play with them, join the mailing list [info further down the page].

Widget development

Over the last few months, W2C has been laying foundations for its extensible VLE by developing web services to publish frequently-asked-for information from university systems. We are now ready to test the potential of W3C Widgets as a mechanism for publishing the output of these web-services on a range of platforms, and have just written our first widget. This post describes our experience.

To de-scope security issues from our first development, we decided to develop a widget to publish public data available from our PC Availability web-service. First we reviewed published examples of widgets and found WIDE’s step-by-step guide to developing a Calendar web-service and Scott Wilson’s recent post on progressive enhancement of mobile content particularly helpful. Based on this initial review, we decided that our widget should use:

  • jQuery – javascript library (to consume our web-service)
  • jQuery-mobile – javascript library and CSS (to render the content)
  • A simple Controller model to organise our javascript functions

and we knew we’d need:

  • a Wookie server – Steve’s working on a post about lessons learned from getting this going!
  • a folder structure in which to develop our PC Availability widget, comprising
    • config.xml
    • pc.html
    • scripts folder, containing:
      • latest copy of jQuery.js (minimized version)
      • latest copy of jQuery-mobile.js (minimized version)
      • our javascript: pc.js
    • style folder, containing:
      • latest copy of jQuery-mobile.css (minimized version)
      • folder of jQuery-mobile images

Our files contained the following:

[codesyntax lang=”xml” title=”config.xml”]

<widget xmlns="http://www.w3.org/ns/widgets" width="320" height="480">
  <name>PC Availability</name>
  <description>A sample widget to display PC availability</description>
  <content src="pc.html"/>
  <author>Mark Stubbs</author>
</widget>

Our config.xml file defines a simple “PC Availability” widget that will be loaded from pc.html. Default size has been set as 320×480 for mobile rendering.

[codesyntax lang=”xml” title=”pc.html”]

<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>PC Availability</title>
    <link rel="stylesheet" href="scripts/jquery.mobile-1.0a3.min.css" type="text/css" />
    <script type="text/javascript" src="scripts/jquery-1.5.1.min.js"></script>
    <script type="text/javascript" src="scripts/jquery.mobile-1.0a3.min.js"></script>
    <script type="text/javascript" src="scripts/pc.js"></script>
  </head>
  <body onLoad="Controller.init()">
    <div data-role="page" id="home">
      <div data-role="header">
        <h4>PC Availability</h4>
      </div><!-- /header -->
      <div data-role="content" class="ui-content">
        <ul data-role="listview" id="rooms-listview" data-theme="d" data-inset="true">
        </ul>
      </div><!-- /content -->
    </div><!-- /page -->
  </body>
</html>

Our pc.html file loads the jQuery-mobile 1.0a3 stylesheet, the jQuery 1.5.1 minimized library, the jQuery-mobile 1.0a3 minimized library and our own pc.js javascript. The body tag contains an onLoad handler that calls a javascript function we have defined for the Controller class called init(). The xhtml within the page is organized to present information as a single page (using the div data-role=”page” tag), which has a header (div data-role=”header”) and a content (div data-role=”content”) section. Within the header section, the h4 tag is used for the page title. Within the content section, an unordered list tag is included (with an id of “rooms-listview”), which will contain list items for current PC availability at each drop-in space within the university. The ul tag is styled with some jQuery-mobile markup: data-role=”listview”, data-theme=”d” and data-inset=”true”. Further information about controlling the presentation of list data is available in the jQuery-mobile documentation.


[codesyntax lang=”javascript” title=”pc.js”]

var Controller = {

	init:function() {
		Controller.update();
	},

	update:function() {
		// Proxify the web-service URL so the call stays within the widget's domain
		var loc = Widget.proxify("");
		$.ajax({
			type: "GET",
			url: loc,
			dataType: "xml",
			timeout: 1000,
			complete: Controller.parseResponse
		});
	},

	parseResponse:function(response) {
		var rooms = $("#rooms-listview");
		rooms.empty();
		$(response.responseXML).find("room").each(function () {
			rooms.append($("<li/>").text($(this).attr("location") + ": "
				+ $(this).attr("free") + "/"
				+ $(this).attr("seats") + " free"));
		});
		rooms.listview("refresh");
	}
};

Our scripts/pc.js file defines the Controller class referenced in the body onLoad handler of pc.html. The Controller class has three functions:

  1. init()

    initializes by simply calling the update function

  2. update()

    uses Wookie’s Widget.proxify to get a URL within the same domain (using Wookie’s proxy to avoid cross-domain scripting restrictions) that can be used to call the PC Availability web-service, which it then does using the jQuery $.ajax function; this calls the Controller’s parseResponse function when it completes, passing across the httpResponse received from the web-service call.

  3. parseResponse(response)

    gets a handle for the rooms-listview element, clears its content and then iterates over the “room” elements in the xml returned from the web-service. A function is defined for handling each room element that is found. That function appends to the rooms-listview element a new list-item and sets the text of the list-item to be the location followed by the number of seats free, followed by the total. The parseResponse function completes by refreshing the rooms-listview with this new content.

On our Windows 7 development machine, we selected the pc.html, config.xml and the scripts and styles folders, right-clicked and chose the “Send to > Compressed (zipped) folder” option, then renamed the resulting file “pc.wgt”. From the Wookie Administration menu we clicked “Add new widget”, browsed for our pc.wgt file and clicked the “Publish” button to add it. Our PC Availability widget (minus an icon, as we hadn’t specified one in the config.xml file) was then available in the widget gallery for testing. We added the address of our PC Availability web service to the Wookie white list (so that the Widget.proxify call would work for the URL we’d specified) and then clicked the demo button in the gallery, which produced this in Firefox:

Screen-shot of PC Availability widget running in Wookie using Firefox 3.6.15

We then tried IE8: the widget rendered without any CSS and threw a JavaScript error. After much searching we found a post about IE support in jQuery-mobile, which suggests that work still needs to be done for IE 7/8 and Windows Phone 7, but that Scott Jehl had produced an “at your own risk” workaround that must be loaded after the jQuery library and before the jQuery-mobile library. We inserted the JavaScript inline between the two library calls, and the widget then rendered in IE8:

Screen-shot of PC Availability Widget rendered in Wookie using IE8.0.7600

Apart from the missing rounded corners, there was no obvious difference between the renderings in the two browsers (content differs, as expected from a dynamic feed).

We then tried in a number of other browsers:

  • Safari worked on the desktop and on the iPhone
  • the Android browser worked on a phone and a Galaxy Tab
  • the native browser failed to display anything on a Blackberry or a Windows Phone 7 handset
  • Opera Mobile worked on a Blackberry

Our initial foray into the world of widgets was pretty positive, although support for popular Windows browsers would need to move beyond an “at your own risk” hack for the interesting progressive enhancement approach to be viable for production deployment. We hope this post will encourage others to have a go and share their thoughts.

Understanding how our fees web-services are being used


Alex’s earlier post described how LRT developers worked with colleagues in Financial and Legal Services (FLS) to give students access to a personalised traffic light summary of their financial standing across three categories: tuition, accommodation and other fees. Information displayed in the myMMU SharePoint portal was provided by a getFeeStatus web-service and used in a WebPart that enabled students to use a second web-service getFeeEmail if they wished to receive a detailed financial statement by email.

After intensive testing, fee status information was released via the myMMU portal to all first year students on November 17, 2010. Naturally, there has been considerable interest in understanding the impact of this new system in terms of take-up and in the broader context of feedback from students and front-line FLS staff dealing with financial queries.

So, here are some initial statistics gathered from the server running the REST web-services:

Total accesses, 17 November 2010 to 15 January 2011:

  • All objects and pages on the web-services server (all logs from November on): 1,754,573 hits
  • Students who viewed the Traffic Lights summary page: 9,084
  • Students who followed through from the Traffic Lights summary page to request a detailed financial statement: 1,132

How did we go about getting these stats?

The getFeeStatus and getFeeEmail REST web-services run on a Microsoft IIS server which, like all other MS IIS servers, keeps hit logs of the people and machines accessing the pages hosted on it: URL u accessed at time t using browser b from internet address i. The really good news is that, these being REST services, not only is the name of the web-service called logged, but all the querystring calling parameters also appear in the URL … that’s going to come in handy later.

So the process of determining usage of the traffic light web-services starts and even ends (for this part of the analysis) on the IIS server, and by necessity it involves getting grubby with raw web server log files, rooting out nuggets of information pertinent to the things we’re interested in.

Sure – we could run the logs (all x Gigabytes of them) through a good log analyser like AWStats, or the freeware Funnel Web Analyzer that Kieron used for the graphs in a previous post.

These are great at providing generic page and object hits, but they tend to deal with ‘top 10 links served in November’ type scenarios. Useful, but not really what we need to discover how many students made use of a particular service with particular parameters within a particular timeframe.
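For example, counting the raw hits on one service within one month needs nothing more than grep. This is a hypothetical sketch – the file name and the two sample lines are invented here to match the IIS log format shown later in this post:

```shell
# Write two sample IIS log lines (ex1011.log is a made-up file name):
# one hit in November 2010, one in December 2010.
cat > ex1011.log <<'EOF'
2010-11-18 09:00:01 W3SVC1797328370 GET /finance/Service1.svc/getFeeStatus person=5503xxxx&format=rss 80 - - 200 0 0
2010-12-01 09:00:02 W3SVC1797328370 GET /finance/Service1.svc/getFeeStatus person=5503xxxx&format=rss 80 - - 200 0 0
EOF

# Select November 2010 records by date prefix, then count getFeeStatus hits
grep '^2010-11-' ex1011.log | grep -c 'getFeeStatus'    # prints 1
```

Swap the date prefix and service name and the same one-liner answers any “hits on service S in month M” question.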

To do this we decided to use a suite of geek-level power tools on a Linux box. Of course, we could have used a suite of geek-level tools on a Windows box, but we’d have had to find them first. Yes, yes… there is PowerShell, but read on – and let me know if PowerShell could cope!

Prelims – pour yourself a good strong coffee!

Firstly, we had to copy the raw logs from the IIS box to an ancillary box. Why? First, so we could use the aforementioned Linux power tools (bash commands). Secondly, log processing is inherently processor- and memory-intensive – why ruin the service you are trying to report on? Admittedly we perhaps shouldn’t have used the web server hosting this blog, as it slowed it down a bit! But we did.

Now, let’s take a look at a few typical lines from an IIS log file. (A caution if you try this on an Apache-based log file: all of these servers adhere to the Common Log Format, which, alas, allows for many variations in what is actually stored against certain agreed keys – so examine the fields in your file really carefully to make sure you are storing the right stuff. Apache normally does this out of the box; IIS has certain interesting data turned off by default. No idea why – just take some time to configure the logging correctly on your server, or stats could be dribbling away into the ether!)

[codesyntax lang="apache"]

2011-01-14 11:56:07 W3SVC1797328370 GET /convertid/Service1.svc/getIdByMmuId8 id=5503xxxx 80 - - 200 0 0
2011-01-14 11:56:07 W3SVC1797328370 GET /finance/Service1.svc/getFeeStatus person=550xxxx0&dtm=1295006167&developer=mymmu&format=rss&token=ee5b3e4bc033f093bd2eecec7331812f 80 - - 200 0 0
2011-01-14 11:56:09 W3SVC1797328370 GET /srs/Service1.svc/getCurrentEnrolments person=0838xxxx 80 - - 200 0 0
2011-01-14 11:56:13 W3SVC1797328370 GET /vle/Service1.svc/getWebCtAreas format=rss&dtm=12950xxxx2&developer=mymmu&token=057ee448560dedadbddfc842eaf838f1&person=08186066 80 - - 200 0 0
2011-01-14 11:56:13 W3SVC1797328370 GET /convertid/Service1.svc/getIdByMmuId8 id=0818xxxx 80 - - 200 0 0

[/codesyntax]


Gibberish, isn’t it? Well, no. It just looks complicated because it has really been designed to be read by machines and witty processing routines. The fields are delineated (made distinct from other fields) by spaces. So – the first field on each line is the date field containing ‘2011-01-14’, the fifth field is the web-service path starting ‘/convertid’, and so on. Each line is a record: an instance of a single request to the server for a single object or action. This is an important point to remember – each record is not a record of a single person hitting your page, as practically all pages on a server contain more objects than just the page itself.

(NB: student IDs in the example data have been anonymised by replacing half of the ID number with xxxx.)

Another important thing to remember for this exercise is that the records are distinct, regular and structured. In effect, this is the same as having a database of material that we can process, looking for regular pieces of information that match a pattern we are interested in. Because the data is regular, we know we shouldn’t get any weird anomalies that would distort the processing and give errant results. Because the records are distinct, we know we can count up the instances we find and that the count actually represents something.
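As a hedged sketch of the kind of pattern-matching pipeline this enables, the following counts distinct students who called getFeeStatus. The file name and student ids are invented for illustration; the field positions follow the sample lines above (the web-service path is the fifth space-delimited field, the querystring the sixth):

```shell
# Three invented getFeeStatus hits: two from the same student, one from another
cat > access.log <<'EOF'
2011-01-14 11:56:07 W3SVC1797328370 GET /finance/Service1.svc/getFeeStatus person=5503xxxx&dtm=1295006167&developer=mymmu&format=rss&token=aa 80 - - 200 0 0
2011-01-14 11:58:07 W3SVC1797328370 GET /finance/Service1.svc/getFeeStatus person=5503xxxx&dtm=1295006290&developer=mymmu&format=rss&token=bb 80 - - 200 0 0
2011-01-14 11:59:07 W3SVC1797328370 GET /finance/Service1.svc/getFeeStatus person=0838yyyy&dtm=1295006351&developer=mymmu&format=rss&token=cc 80 - - 200 0 0
EOF

# Keep getFeeStatus records, pull the person= parameter out of the
# querystring (field 6), de-duplicate, and count what's left
grep 'Service1.svc/getFeeStatus' access.log \
  | awk '{ n = split($6, q, "&"); for (i = 1; i <= n; i++) if (q[i] ~ /^person=/) print q[i] }' \
  | sort -u \
  | wc -l    # two distinct person= values in the sample data
```

Because the querystring parameters are logged in full, the same pipeline – with a different grep and a different parameter picked out of field 6 – answers most “how many distinct users did X” questions against these logs.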


Web Services Update

Back in December 2010 Mark blogged about the Web Services we use in MMU to feed our Portal and mobile devices (see his post). At the end of November, web service usage was:

Web Services Hits
Usage at November 2010

This post is by way of an update on usage, and to explain how usage has grown.

Graph showing Web Services usage
Usage at January 2011

The current position can be seen below November’s graph. While the order has not changed – getWebctAnnouncements is still the most popular – the number of hits has grown from just over 800,000 to over 2.1 million! Of particular interest are the 391,000 hits on the PC availability web service: all of these are from mobile devices using the campusM myMMU mobile App. Of more surprise to me are the 700,000 hits on getFeeStatus.

I should clarify that the figures are for the current academic year, 2010/11.

Our DVLE model?

Sheila and Wilbert’s Distributed Learning Environments briefing paper describes five possible integration scenarios and invites consideration of their relative strengths and weaknesses. We have given some initial thought to where our W2C work would be positioned, and it seems our emphasis on re-usable web-services is producing a hybrid that straddles the JISC-CETIS DVLE categories.

Our approach has aspects of Model 1 in that we have a collection of services gathered in one place, consumed from a range of platforms. However, for W2C the service collection is at the web-service rather than the widget/IMS tool level. We wish to use a subset of these services within our Moodle VLE to deliver our vision of convenient, integrated and extensible learning systems, and this potentially introduces aspects of Model 2 (see below).

Feeding the VLE

Previous focus group and survey evidence has made a strong case for consistency as a valued aspect of VLE interaction, and we are keen to use our collection of web-services to enhance the student experience in this regard. We are therefore pursuing a strategy of making our VLE an aggregation point for the consistent display of relevant data from a number of university systems. To realise this strategy we have argued for a university-wide tagging convention in which the code used to identify an offering of a Unit of Study in the Student Records System is used to tag material of relevance to that Unit. We have approved a university-wide Moodle policy that states that areas will be created to support every Unit of Study and will be identified by their Student Record System codes. As Moodle blocks can pick up the identifier of the current course and authenticated user we have all the information we need to make calls to our collection of web-services to transform Moodle courses into hubs for presenting a raft of consistent, personalised information, such as:

  • the authenticated user’s upcoming timetable for the Unit
  • the authenticated user’s assessment deadlines and any preliminary marks for the Unit
  • any podcasts tagged as relevant for the Unit
  • the reading list for the Unit
  • any chapters or articles digitised to support the Unit
  • any past papers for the Unit

We have written these features into our Moodle policy as threshold standards for a Unit of Study presence:

Moodle Unit of Study course area

We source the information from a number of corporate systems: Unit4 (formerly Agresso)’s QLS, Scientia, Apple Podcast Producer, Talis Aspire and Equella:

Aggregating tagged data from corporate systems into the Moodle Unit hub

We now face some architectural decisions about how best to deliver this information into Moodle:

  1. A single custom block that incorporates all the calls?
  2. Multiple custom blocks, one for each call?
  3. A single Widget that runs in the Wookie-Moodle block and incorporates all the calls?
  4. Multiple Widgets that run in Wookie-Moodle blocks, one for each call?

Our decisions will be influenced by usability and accessibility issues – for instance the ease with which the Block or Blocks inherit Moodle’s CSS and pick up any high-contrast, large-font variants. Performance and ease of maintenance will also be factors as this needs to be live for all MMU students (34,000+) from September 2011. Our decisions will be informed by some intense prototyping but we would really value thoughts from the community on the best way to go.

Reading List Web Services

MMU’s learning technologies review (2010) selected the Talis Aspire hosted reading list system to play a key role in delivering its vision of an integrated and extensible VLE.

Following the review, policy statements were drafted and approved that set out threshold standards for VLE content at Unit (Module) and Programme level. A policy was also developed to govern the structure and normal length of reading lists. Together these policies set out a requirement that each instance of a Unit of study should provide links to reading materials organised in terms of three categories: recommendations for purchase; essential reading available through the library; and further reading.

Colleagues at Talis have already described some integration scenarios between VLEs and Aspire (using JSON and RDF APIs), and have developed some sample code for Moodle that demonstrates the potential of Aspire APIs for blurring the boundaries between systems to create a more seamless user experience.

In order to meet our policy requirements we realised that we would need something more than the functionality provided by the sample code. Our requirement is to present all the Aspire reading list items within Moodle as clickable links organised in terms of the three categories from our reading list policy. It might not be a scenario that Talis originally anticipated, but our consistent tagging policy based on curriculum codes and their commitment to semantic web principles and open architectures offered a way to achieve this.

Each area created in Moodle to support an offering of a particular Unit of study has a course id formed by concatenating the Unit Code, an underscore, the academic year of delivery, a second underscore, and the occurrence code within the year. For instance, 2CP3D011_1011_3 refers to the March-starting instance of Unit 2CP3D011 within the 2010/11 academic year.

In Aspire we have chosen to create Reading Lists for Units per academic year (rather than Unit instances per academic year), as reading list items are consistent for all offerings of a Unit within a given academic year. The Aspire Reading List for the Unit mentioned is thus created with the tag 2CP3D011_1011, which can be derived easily from the Moodle course id. The consistency of our coding convention gives us the basis for a standard block within Moodle for bringing in content from Aspire.
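The derivation is simple enough to sketch in a couple of shell commands (illustrative only – our Moodle block does the equivalent in its own code): keep the Unit code and academic year, drop the occurrence, and lower-case the result to match Aspire’s tag URLs.

```shell
# Derive the Aspire list tag from a Moodle course id:
# 2CP3D011_1011_3  ->  2cp3d011_1011
courseid="2CP3D011_1011_3"
tag=$(echo "$courseid" | cut -d'_' -f1-2 | tr '[:upper:]' '[:lower:]')
echo "$tag"    # prints 2cp3d011_1011
```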

Aspire supports URLs that reference its tag hierarchy. For instance, requesting the tag URL for the (lower case) 2CP3D011_1011 unit tag displays the lists attached to that tag. The XHTML returned could be scraped to get the URL for the list; however, it is easier to use the RDF version:

The RDF representation provides a detailed data set including a reference to the list being described:
<rdf:Description rdf:about="">

The contents of the HTML representation of this list provide all the items to display in Moodle (including any category information indicating whether it is an item recommended for purchase etc):

To meet our policy requirements we prototyped a .NET web-service that used the RDF to identify the XHTML representation of the list for a given Unit tag, and transformed the response to produce an RSS feed that can be requested for a given Unit.

We are developing a Moodle block that picks up the course id and appends the appropriate parts to create the URL to feed a modified RSS reader that distinguishes our 3 categories of list item.

We are interested to know: would this light-weight RSS integration approach be of interest to others as a VLE-Reading List integration scenario?

Device led student interviews

During January and February the Project Team (members of Learning Research & Information Services and the Centre for Research in Library & Information Management) will be carrying out 100 short impromptu interviews with MMU students. The aim of the research is to find out about students’ use of technology in their studies. A pilot has already been run and the team are currently tweaking some of the questions based on the responses and feedback received. What’s distinctive about the interviews is that the questions are device-led, i.e. we ask students to show us the technology device(s) they are carrying with them there and then (whether a notebook, Kindle, mobile phone etc.) and ask how these devices are used in their learning (including what they use them for, where, and how frequently). We’ll also be asking how well students feel MMU supports their use of technology and how it could be improved. By using this method we anticipate gathering some very meaningful research, to be utilised at the next stage of the W2C research… student design workshops (more info to follow!)