Monday, July 26, 2004

We regret to inform you...

These are the most dreaded words, always prefacing a note indicating that a paper or proposal has been rejected. Sometimes the note cushions the fall a little by adding something like "unfortunately a lot of good submissions could not be accepted due to lack of room."

The sting of seeing a paper or proposal rejected has diminished a little for me over time, but it has by no means disappeared completely. I find it's more demoralizing for students, though, particularly those writing their first papers.

I guess all we can do is take the criticism in stride, try to learn from it and improve.

On the whole, I find it's worth attempting at least one revised re-submission before moving on.

But occasionally you get a rejection where your gut tells you that the reviewers are mistaken. This is particularly likely to happen when attempting some new direction that is not an incremental improvement of someone else's work. In that case, it's worth trying harder to get the work published. Failing all else, put it up as a tech report so that it can still be cited.

Wednesday, July 21, 2004

On, Off, Quick, Slow

Perhaps the most exciting and energizing aspect of my job is the opportunity to guide some really talented graduate students. I'm still figuring much of it out, but it has been a great experience so far. The following are some thoughts that have occurred to me about being an advisor.

A key issue is deciding whether to be a more hands-on or a more hands-off advisor. To illustrate these roles, consider the activity of problem formulation, which is at the heart of the research enterprise. A hands-on relationship would imply assigning well-formulated problems to the student; a very hands-off relationship would imply waiting for the student to come up with new problems and ideas; somewhere in between would be working with the student to formulate problems together.

On this spectrum, I'm probably closer to being hands-on than hands-off in general (perhaps understandable, given that I am a new faculty member trying to establish a distinct identity in my field through a coherent research program). However, while many faculty members are static in their roles, my own belief is that there's no one-size-fits-all approach. An advisor has to be flexible enough to adapt to each individual advisee's capabilities and drive, which also vary with time. I prefer to be as hands-on as possible early on, when students are likely to need the most guidance, but I also believe it's important to give them much more independence and flexibility over time.

Another key issue pertains to when the student should start doing research. There are again two schools of thought on this -- 1. as soon as possible; 2. only after sufficient preparation through course-work. There are pros and cons to each approach. Starting the student off with a research problem early on is good because it gives them motivation and a feel for how research differs from coursework (which, in many cases, is all they've ever encountered before). On the other hand, sophistication and maturity require a good amount of time spent taking challenging and useful courses. It is perhaps a sweeping generalization, but I believe that for EE/CS students, the start-quick approach may be better suited to more applied or experimental research, while the go-slow approach is particularly well suited to highly theoretical research. The key reason is that incoming graduate students are already equipped to do programming and some simple analysis, but generally need time to pick up sophisticated analytical tools and deep theory.

Because of my own prior experiences and style of research (which I would call applied theory), I've primarily chosen the start-quick approach to date. But I certainly see the advantages of the go-slow approach. I will ask my students to take formative, challenging courses to continually hone their abilities. It's crucial that the quality of their work improve over time, as they become more knowledgeable and mature through coursework and their own growing research experience.

The Research Spectrum

We all know the political spectrum. There are conservatives on the right and liberals on the left.

There is a similar spectrum in engineering and computer science research, but the nomenclature is up for grabs (as far as I know). We have Theoreticians on one end and Experimentalists* on the other. Simulationists might sit near the center but towards the experimentalist end, while Applied Theorists would be near the center but towards the theorist end of the spectrum.

My interest is in knowing whether I should say I'm just left of center or just right of center. I see myself very clearly as an applied theorist of sorts. My work balances equations and analysis with simulations or modeling/curve-fits based on real data. I crave the cleanliness and generalizable insights you get from analytical formulations, but at the same time I care deeply about grounding this work in practical problems and realistic settings. Sometimes, in moments of doubt, I wonder whether this is dangerous ground to occupy, in terms of how one is perceived by the community; but I have to say all my instincts and passions place me squarely at this position on the spectrum.

So how should we describe this spectrum:  Theoreticians on the left and Experimentalists on the right, or the other way round? What is your view? 

* Note: Experimentalists are sometimes referred to as Systems researchers in Computer Science parlance, so there is a well-known dichotomy between Theory and Systems. But the term has a different connotation in Electrical Engineering, where a different distinction is drawn between Electrophysics/Circuits (which comprises electrophysics, semiconductors, and circuit design) and Systems (which covers communications, computer engineering, signal processing, and controls); and from the EE point of view, though this is a bit of a stereotype, "Systems" has a connotation of being the more theoretical of the two.


Thursday, July 15, 2004

On Selective Conferences

I was having a conversation with a colleague today about the selectivity of academic conferences and wanted to share some of my thoughts on this issue.

In the research area of wireless networks, as in other EE/CS areas, there are a number of academic conferences that are regarded as highly selective. These include Infocom, Mobicom, Mobihoc, and Sensys. They tend to have very low acceptance rates, often on the order of 20% or less. There are also other, non-selective conferences where relevant papers may appear, e.g. the IEEE Vehicular Technology Conference, the IEEE International Conference on Communications, and the annual SPIE conference.

This effectively divides academic conferences into two classes: the prestigious "high quality" conferences, and the run-of-the-mill conferences with papers of uncertain quality. I want to examine carefully how this situation affects the research area and the research community.

In the following discussion, I will make the idealized assumption that the quality of a peer-reviewed paper is not a purely subjective notion: there are papers that are objectively "good" (in part or in full) in terms of commonly accepted measures such as novelty, interest, and correctness. This is itself a debatable point, but it is certainly implicit in the whole notion of peer-reviewed papers, and it will make my discussion somewhat easier.

There are, of course, many positive outcomes. This categorization helps the rapid dissemination of good papers that appear in top conferences, because they get widely read and cited. It also helps create a hierarchy within the community - those with a large number of selective conference publications are perceived as top researchers and receive attention and fame. A top conference also attracts top researchers to its program committee, ensuring high-quality reviews and thus a positive feedback loop (well, at least so long as the number of submissions is not large - more on this below).

However, what I worry about is that the classification of conferences creates an unfair prejudice on the part of researchers reading and citing papers. This classification encourages a presumption about the quality of a paper based solely on the venue in which it appears, and discourages careful consideration of papers on their own merits. A paper in a prestige conference is assumed to be of good quality. This is often true, but not always. In particular, with conferences as large as Infocom (on the order of 1500-2000 submissions a year), there can be high variability: good papers may not get in and bad papers may, since the quality of reviews and the review process is harder to ensure when submissions are so numerous. Many researchers use the conference itself as a filter and are more inclined to read and cite papers from selective conferences. Thus many excellent papers that appear in non-selective conferences are overlooked, which can slow and hinder the dissemination of important new results.
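As a rough back-of-the-envelope illustration of how review noise interacts with a hard acceptance cutoff, here is a toy simulation (my own sketch; the submission count, acceptance rate, number of reviews, and noise level are all assumptions chosen for illustration, not data from any actual conference):

```python
import random

# Toy model (assumptions, not real conference data): each submission
# has a hidden "true quality"; each of three reviews observes it with
# independent Gaussian noise; the conference accepts the top 20% by
# average review score.
random.seed(0)
N = 2000            # assumed submission count, roughly Infocom-scale
ACCEPT_RATE = 0.2   # assumed acceptance rate
REVIEW_NOISE = 1.0  # assumed per-review noise, same scale as quality spread
REVIEWS = 3         # assumed number of reviews per paper

true_quality = [random.gauss(0, 1) for _ in range(N)]
# averaging three independent reviews shrinks the noise by sqrt(3)
scores = [q + random.gauss(0, REVIEW_NOISE / REVIEWS ** 0.5)
          for q in true_quality]

n_accept = int(ACCEPT_RATE * N)
accepted = set(sorted(range(N), key=lambda i: scores[i], reverse=True)[:n_accept])
best = set(sorted(range(N), key=lambda i: true_quality[i], reverse=True)[:n_accept])

missed = len(best - accepted)
print(f"{missed} of the {n_accept} truly-best papers were rejected "
      f"({100 * missed / n_accept:.0f}%)")
```

Under these made-up parameters, a sizeable fraction of the truly-best papers miss the cut purely because of review noise. The point is qualitative, not the particular number: the tighter the cutoff and the noisier the reviews, the more the accepted set diverges from the "best" set.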

The main solution to this dilemma is to make researchers aware of this bias, and to encourage them to be more inclusive and critical in their reading and citations.

The two-tier classification also creates pressure for a researcher to submit only to the top conferences. But that may exacerbate the problem: increased submissions competing for a limited number of slots increase the selectivity even further, but also increase the frustration of authors whose good papers don't get in because of limited room and noisy reviews. This can result in significant publication delay, and key results may either never get shared widely with the community or may become obsolete (defeating the whole purpose of conference publication).

More subtle issues that one should consider include the psychological impact of creating an overly competitive (as opposed to a cooperative) academic community; this should be balanced, though, against the fact that academia is meant to be a meritocracy.

I am by no means arguing against high-quality limited-room conferences; indeed, I have a distinct preference for them myself when I am submitting papers; I gladly serve on the program and organizing committees of such workshops/conferences; and on the whole, as a new faculty member, I rely on them considerably for the many positives I listed above (as indicators of paper and researcher quality, and as a means of disseminating and publicizing one's work). I just wish there were greater discussion and awareness within the community of the potential negatives.