Thursday, July 15, 2004

On Selective Conferences

I was having a conversation with a colleague today about the selectivity of academic conferences and wanted to share some of my thoughts on this issue.

In the research area of wireless networks, as in other EE/CS areas, there are a number of academic conferences that are regarded as highly selective. These include Infocom, Mobicom, Mobihoc, and Sensys. They tend to have very low acceptance rates, often on the order of 20% or less. There are also other, non-selective conferences where relevant papers may appear, e.g., the IEEE Vehicular Technology Conference, the IEEE International Conference on Communications, and the annual SPIE conference.

This effectively divides academic conferences into two classes: the prestigious "high quality" conferences, and the run-of-the-mill conferences with papers of uncertain quality. I want to examine carefully the impact of this situation on the research area and the research community.

In the following discussion, I will make the idealized assumption that the quality of a peer-reviewed paper is not a purely subjective notion: there are papers that are objectively "good" (in part or in full) in terms of commonly accepted measures such as novelty, interest, and correctness. This is itself a debatable point, but it is certainly implicit in the whole notion of peer review, and it will make my discussion somewhat easier.

There are, of course, many positive outcomes. This categorization speeds the dissemination of good papers that appear in top conferences, because they get widely read and cited. It also helps create a hierarchy within the community - those with a large number of selective conference publications are perceived as top researchers and receive attention and fame. A top conference also attracts top researchers to its program committee, ensuring high-quality reviews and thus a positive feedback loop (well, at least so long as the number of submissions is not large - more on this below).

However, what I worry about is that this classification of conferences creates an unfair prejudice on the part of researchers reading and citing papers. It encourages a presumption about the quality of a paper based solely on the venue in which it appears, and discourages careful consideration of papers on their own merits. A paper in a prestigious conference is assumed to be of good quality. This is often true, but not always. In particular, with conferences as large as Infocom (on the order of 1500-2000 submissions a year), there can be high variability: good papers may not get in and bad papers may, since the quality of reviews and of the review process is harder to ensure when submissions are large in number. Many researchers use the conference itself as a filter and are more inclined to read and cite papers from selective conferences. Thus many excellent papers that appear in non-selective conferences are overlooked, which can slow and hinder the dissemination of important new results.
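To make the variability point concrete, here is a minimal toy simulation in Python (a sketch, not a model of any real conference): each paper has a "true" quality, each reviewer observes that quality plus random noise, and the top-scoring fraction is accepted. The pool size, review count, noise level, and acceptance rate below are all illustrative assumptions.

```python
import random

random.seed(0)

N_PAPERS = 2000      # an Infocom-sized submission pool (illustrative)
N_REVIEWS = 3        # reviews per paper (assumption)
NOISE = 1.0          # std. dev. of a single reviewer's error (assumption)
ACCEPT_RATE = 0.18   # roughly the ~20% acceptance rate discussed above

# True quality of each paper; its score is the average of noisy reviews.
quality = [random.gauss(0, 1) for _ in range(N_PAPERS)]
score = [q + sum(random.gauss(0, NOISE) for _ in range(N_REVIEWS)) / N_REVIEWS
         for q in quality]

k = int(ACCEPT_RATE * N_PAPERS)
best = set(sorted(range(N_PAPERS), key=quality.__getitem__, reverse=True)[:k])
accepted = set(sorted(range(N_PAPERS), key=score.__getitem__, reverse=True)[:k])

print(f"Fraction of the truly top {k} papers accepted: {len(best & accepted) / k:.2f}")
```

Even with the noise averaged over three reviews, a noticeable fraction of the best papers falls below the cutoff; adding more reviews per paper would reduce the noise, but that is precisely what becomes hard when submissions number in the thousands.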

The main solution to this dilemma is to make researchers aware of this bias, and to encourage them to be more inclusive and critical in their reading and citations.

The two-tier classification also creates pressure on researchers to submit only to the top conferences. But that may exacerbate the problem. Increased submissions competing for a limited number of slots increase the selectivity even further, but also the frustration of authors whose good papers don't get in because of limited room and poor, "noisy" reviews. This can result in significant publication delay, and key results may either never get shared widely with the community or become obsolete (defeating the whole purpose of conference publication).

More subtle issues that one should consider include the psychological impact of creating an overly competitive (as opposed to a cooperative) academic community; this should be balanced, though, against the fact that academia is meant to be a meritocracy.

I am by no means arguing against high-quality, limited-room conferences; indeed, I have a distinct preference for them myself when I am submitting papers; I gladly serve on the program and organizing committees of such workshops and conferences; and on the whole, as a new faculty member, I rely on them considerably for the many positives I listed above (as indicators of paper and researcher quality, and as a means of disseminating and publicizing one's work). I just wish there were greater discussion and awareness within the community of the potential negatives.

3 comments:

Bhaskar said...

To add to my earlier remarks, I had a conversation with another colleague about this yesterday that gives some more to think about. What is the significance of a workshop, a conference, and a journal?

In electrical engineering, as in many traditional disciplines, workshops and conferences are treated about the same: papers appearing there are essentially meant to bring work to the attention of others, give a talk, and get direct, quick feedback. Only journal publications are considered archival and guarantors of high quality.

In computer science, generally, workshops are more informal, while conferences are expected to be the guarantors of high quality. This is partly because, it is claimed, the demands of a rapidly evolving field require fast publication to prevent results from becoming "obsolete".

So, in networks research, where the two disciplines overlap considerably, is it surprising if chaos ensues?

Anonymous said...

Interesting discussion. I wanted to add my thoughts. First, to address the problem of good results in not-so-selective conferences not being widely visible: the key problem is that there are so many conferences that it is nearly impossible for a person to keep track of the current state of research in a very specific area - say, localization in sensor networks - let alone in the whole area of sensor networks.
One idea to get around this problem would be to have a community-based rating system. ACM actually provides this: readers can review and post comments on what they think of an article. It is very sparsely used, however. Along with this, if there were a numerical rating, say on a scale of 5, it would serve as a quick estimate of how good a paper is.
An analogy is IMDb's movie ratings. They might not exactly cater to every person's taste, but they serve as a good average estimate of how much people liked a particular movie.
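As a rough sketch of how such a rating could be aggregated - assuming, hypothetically, a 1-5 scale and a list of votes per paper - one simple approach is a weighted average that shrinks sparse ratings toward a global mean, in the spirit of the weighted formula IMDb has published for its Top 250. The `global_mean` and `min_votes` values below are made-up tuning knobs, not taken from any real system.

```python
def community_rating(ratings, global_mean=3.0, min_votes=10):
    """Rating on a 1-5 scale, shrunk toward the global mean.

    With few votes the estimate stays near `global_mean`; with many
    votes it approaches the plain average. Both parameters are
    illustrative assumptions.
    """
    v = len(ratings)
    if v == 0:
        return global_mean
    avg = sum(ratings) / v
    return (v * avg + min_votes * global_mean) / (v + min_votes)

print(community_rating([5, 5, 4]))       # 3 votes: pulled toward 3.0 (~3.38)
print(community_rating([5, 5, 4] * 20))  # 60 votes: moves toward 4.67 (~4.43)
```

This damping keeps a paper with two enthusiastic votes from outranking one with hundreds of solid ones, which matters when the rating feature is as sparsely used as the ACM one mentioned above.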
As far as reviewing for the premier conferences goes, I think it's more of an elimination process than a selection. What that guarantees is that most papers that appear are above a certain quality (this is highly subjective and others might disagree, but by and large it's true). It in no way guarantees that papers of similar quality haven't been rejected. That's just one of the banes of being a premier, highly selective conference, I guess. Also, double-blind reviews, lots of feedback, etc., help better the process. One thought I have always had is that authors should be given a chance to rebut the comments and criticisms of the reviewers, and this rebuttal should be taken into consideration before the decision to accept or reject a paper. Granted, the program committee is chosen from among the top researchers in the field, but it sometimes happens that a reviewer has not paid enough attention and/or has not seen the idea from the perspective that the author did.

It would complicate the review process, though.

KAR.

Bhaskar said...

Nice comment.

Journals, of course, do offer multi-step reviews, with the possibility of rebuttals and improvement. In theory, with enough iterations to address reviewer comments from different conferences, one ought to be able to make similar improvements, but consistency is a major issue: reviewers at conference B may not be happy with changes made in response to reviewers at conference A.