
Whither ERA?

2015/02/16

The ARC released the composition of the ERA’15 Research Evaluation Committees (RECs) a few days ago. The one relevant to us is the Mathematics, Information and Computing Sciences (MIC) REC. So I was a bit surprised when I looked at it and recognised almost no names.

For those living outside Australia, outside academia, or with their heads firmly burrowed in the sand: ERA is the Excellence in Research for Australia exercise that the Australian Research Council (ARC, the Oz equivalent of the NSF) has been running since 2010. It aims to evaluate the quality of research done at Australian universities. I was involved in the previous two rounds: in 2010 as a member of the MIC panel, and in 2012 as a peer reviewer.

The ERA exercise is considered extremely important: universities take it very seriously, and a lot of time and effort goes into it. The outcomes are closely watched, universities use them to identify their strengths and weaknesses, and everyone expects that government funding for universities will increasingly be tied to ERA rankings.

The panel is really important, as it makes the assessment decisions. Assessment is done for “units of evaluation” – the Cartesian product of universities and 4-digit field of research (FOR) codes. The 4-digit FORs relevant to computer science and information systems are the various sub-codes of the 2-digit (high-level) code 08 – Information and Computing Sciences.
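To make the “units of evaluation” idea concrete, here is a toy sketch; the university names and FOR sub-codes below are purely illustrative, not the actual ERA submission data:

```python
from itertools import product

# Toy illustration: a unit of evaluation is a (university, 4-digit FOR code) pair.
# The names and codes below are examples only.
universities = ["UNSW", "USyd", "Monash"]
for_codes = ["0801", "0803", "0806"]  # illustrative sub-codes of 08 - Information and Computing Sciences

units_of_evaluation = list(product(universities, for_codes))
for uni, code in units_of_evaluation:
    print(f"{uni} / FOR {code}")  # each pair is assessed separately
```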

For most other science and engineering disciplines, assessment is relatively straightforward: you look at journal citation data, which is a pretty clear indication of research impact, which in turn is a not unreasonable proxy for research quality. In CS, where some 80% of publications appear in conferences, this doesn’t work (as I experienced first-hand in the ERA’10 round): the official citation providers don’t understand CS, they index conferences not at all (or only haphazardly), they don’t count citations of journal papers by conference papers, and the resulting impact factors are useless. As a result, the ARC moved to peer review for CS in 2012 (as was already used by Pure Maths and a few other disciplines in 2010).

Yes, the obvious (to any CS person) answer is to use Google Scholar. But for some reason or other, this doesn’t seem to work for the ARC.

Peer review works by institutions nominating 30% of their publications for peer review (the better ones, of course); several peer reviewers each review a subset of those (I think the recommended subset is about 20%). Each peer reviewer then writes a report, and the panel uses those to come up with a final assessment. (Panelists typically do a share of the peer reviewing themselves.)
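To get a feel for the volume of work this implies, here is a back-of-the-envelope sketch; the 1,000-output figure is hypothetical, and only the 30% and ~20% fractions come from the process described above:

```python
# Hypothetical unit size; the 30% nomination rate and the ~20% read by each
# reviewer are the fractions described in the post.
total_outputs = 1000                      # outputs of a unit of evaluation (made up)
nominated = round(total_outputs * 0.30)   # institutions nominate 30% for peer review
per_reviewer = round(nominated * 0.20)    # each reviewer reads roughly 20% of those

print(f"nominated for review: {nominated}")     # 300
print(f"read per reviewer:    {per_reviewer}")  # 60
```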

Peer review is inevitably much more subjective than looking at impact data. You’d like to think that the people doing this are leaders in the field, able to objectively assess the quality of the work of others. A mediocre researcher is likely to emphasise factors that would make their own work look good (although they are, of course, excluded from any discussion of their own university). Basically, I’d trust the judgment of someone with an ordinary research track record much less than that of a star in the field.

So, how does the MIC panel fare? Half of its members are mathematicians, and I’m going to ignore those, as I wouldn’t be qualified to say anything about their standing. But for CS folks, citation counts and h-indices as per Google Scholar, seen in the context of the number of years since their PhD, are a very good indication. So let’s look at the rest of the MIC panellists, i.e. the people from computer science, information systems or IT in general.

| Name | Institution | Years since PhD | Cites | h-index |
| --- | --- | --- | --- | --- |
| Leon Sterling (Chair) | Swinburne | ~25 | 5,800 | 28 |
| Deborah Bunker | USyd | 15? | max cite = 45 | – |
| David Green | Monash | ~30 | 3,400 | 30 |
| Jane Hunter | UQ | 21 | 3,400 | 29 |
| Michael Papazoglou | Tilburg | 32 | 13,200 | 49 |
| Paul Roe | QUT | 24 | <1,000 | 17 |
| Markus Stumptner | UniSA | ~17 | 2,900 | 28 |
| Yun Yang | Swinburne | ~20 | 3,800 | 30 |

[Note that Prof Bunker has no public Scholar profile, but according to Scholar, her highest-cited paper has 45 citations. Prof Sterling’s public Scholar profile includes as its top-cited publication (3.3k cites) a book written by someone else; subtracting this leads to the 5.8k cites I put in the table. Note also that his most-cited genuine publication is a textbook; subtracting this as well leaves 3.2k cites.]
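For readers unfamiliar with the metric: a researcher’s h-index is the largest h such that h of their papers have at least h citations each. A minimal sketch of the computation, with a made-up citation list:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts, for illustration only.
print(h_index([120, 80, 45, 45, 30, 12, 9, 3, 1]))  # prints 7
```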

Even without looking at the citation data, one notices that only three of the group are from the research-intensive Group of Eight (Go8) universities, plus one from overseas. That in itself seems a bit surprising.

Looking at the citation data, one person is clearly in the “star” category: the international member, Michael Papazoglou. None of the others strike me as overly impressive; an h-index of around 30 is good but not great, and similarly for citations around the 3,000 mark. And in two cases I can really only wonder how they could possibly have been selected. Can we really not come up with a more impressive field of Australian CS researchers?

Given the importance of ERA, I’m honestly worried. Those folks have the power to do a lot of damage to Australian CS research, by not properly distinguishing between high- and low-quality research.

But maybe I’m missing something. Let me know if you spot what I’ve missed.


12 Comments
  1. Joe permalink

    What you missed is that h-index is not a good measure of someone’s competence as an ERA reviewer.

    • The h-index is not the only criterion, but it’s generally understood that research excellence is best judged by those who’ve demonstrated excellence in their own research. The h-index measures research impact, which is generally considered a reasonable proxy for research excellence (and the only one that can be measured objectively). In fact, ERA rankings in journal-based disciplines are mainly driven by citation impact. The only reason this isn’t happening for CS is that the ARC isn’t happy with Google Scholar, and the “trusted” citation providers totally butcher CS.

      • Joe permalink

        Consider this possibility: the objective measures are so inaccurate that subjective measures are superior. In that circumstance, you can’t then go and use objective measures to choose who will do the subjective measuring!

  2. UKDude permalink

    As a mere PhD student with a citation count of 0, I find these sorts of discussions well above my “pay grade”. Nevertheless, I am aware that the UK recently went through its “REF” process, which as far as I can tell is roughly the same thing. In the UK the outcome of the REF is directly linked to university funding, so people care perhaps a little more about the result.

    One of the surprising results of this process was that, on the Computer Science and Informatics sub-panel out of 7500 outputs graded, only 100 had 2 people (out of at least 3) with significantly different scores, and in 75% of these cases the discrepancies were resolved by discussion. That’s a pretty impressive amount of agreement.
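    For scale, here is a quick back-of-the-envelope check of those figures, using only the numbers quoted above (a sketch, not official REF analysis):

    ```python
    # Sanity check of the REF agreement figures quoted above:
    # 7,500 outputs, 100 with significantly different scores, 75% of those resolved.
    outputs, discrepant, resolved_frac = 7500, 100, 0.75

    print(f"initial disagreement rate:   {discrepant / outputs:.2%}")                        # 1.33%
    print(f"unresolved after discussion: {discrepant * (1 - resolved_frac) / outputs:.2%}")  # 0.33%
    ```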

    One major difference: the panel had about 25 members on it, which is significantly more than the 8 members on the Australian equivalent. A quick skim of the panellist list (available here: http://www.ref.ac.uk/media/ref/content/expanel/member/Main%20Panel%20B%20-%20Final%20List%20(Jan%202015).pdf) reveals that members have wildly varying citation counts, ranging from nearly 30,000 (Prof. Jon Crowcroft – https://scholar.google.co.uk/citations?user=qnMs-XYAAAAJ&hl=en) to just over 1,500 (Prof. Joseph Sventek – https://scholar.google.co.uk/citations?user=U_VRZucAAAAJ&hl=en). This suggests two things to me: 1) the process can work if done rigorously, and 2) the citation counts/h-indexes/whatever don’t seem to matter /that/ much. Although, I would be intrigued to see if the real factor that matters is the panelist size, in which case I fear that the Australian process is in for a rough ride.

    • UKDude permalink

      And by “panelist size” what I mean is panel size.

      • Thanks for your insightful comments.

        I’m broadly aware of the UK scheme; the Oz one was modelled (to a degree) after it, and when I was on the panel in 2010, we had a member from the UK who was very familiar with it.

        Note also that the ERA MIC panel actually has 16 members. Half of them are mathematicians, and I ignore those in my blog, as I don’t feel qualified to comment on their qualifications.

        Also, I don’t mean to imply that citation impact is everything, sorry if I created that impression. But I would definitely expect that a significant fraction (actually the majority) would have outstanding citation impact. Which isn’t the case for our panel.

        Looking at the UK panel (subpanel 11), my immediate observation is that I recognise several names – unlike the ERA panel, where I recognised almost none. You’d naively expect the opposite, wouldn’t you?

        But then, naive is probably what I am…

        Gernot

  3. Alan Fekete permalink

    Consider the pragmatics. There are very few people in Australia who have the sort of citation counts you are seeking. At Sydney Uni, only 3 of the School of IT have h-index much over 30 (Eades, Zomaya and Kay). At UNSW, I think there are also 3 (Benatallah, yourself and Lin). Adding in the people who, like myself, have h-index around 30 widens the pool enough to make a panel feasible, while still respecting the necessary diversity of field, type of institution, gender, etc.

    As an extra remark, I don’t think research excellence is really the issue here, either in the panel, or in the whole activity. After all, since “world-class” in ERA terminology means “like the average of the world” rather than “like the best in the world”, the whole ERA is to my mind really about finding where/whether respectable, adequate research is happening.

    • Hi Alan,

      I don’t disagree with what you’re saying. I would have just thought that, given the pool of highly-cited folks is non-empty even in Oz, one or two of them might end up on the panel. And I really wonder what the qualifications of people are who have never had a single highly-cited paper in their life…

  4. Pete permalink

    You don’t feel qualified to comment on the qualifications of the mathematicians? I don’t understand. You just look at cites and h-index, and could do that just as well for any subject.

    • Maybe, maybe not. I know what a particular h-index means in CS, I have no clue what it means in Maths. From the beginning, the pure mathematicians insisted on using peer review instead of citation counts, as they (as a community) feel that citations aren’t a good measure for quality in that field. I suspect they know better than I.

      Anyway, I’m thrilled to see that this post has generated so much discussion.

  5. Nice post Gernot. I know about half that panel and share your impressions. I’ve spent 80% of my 25 years post PhD in industry/R&D labs, not academia, and have a h-index and citation count that rivals or surpasses most of the panel. Good luck 😉
