
Monday, August 09, 2004

What makes for good research?

Claim 1: Any strong paper should possess at least some of the following characteristics:
  • Solve a significant/important problem (such as being of practical relevance, having a broad impact, solving a long-standing problem)
  • Come up with a significant conjecture (whose solution would be important)
  • Use a sophisticated/surprising solution method
  • Say something non-obvious/counter-intuitive
  • Correct a popular misconception
  • Shed new light/initiate a new line of research
  • Demonstrate significant improvement over state of the art
  • Make a valuable contribution to further research (e.g. present new metrics/models, provide new experimental data, present observations that clearly merit further study)
  • Present convincing results (e.g. by validating simplified analysis with realistic experiments)

Claim 2: The following features are must-haves in a good paper:

  • Demonstrate thoroughness and effort
  • Be technically rigorous
  • Present material in a clear, compelling manner

It would be nice to have a checklist like the above when writing a paper, so that one can gauge its quality for oneself before submitting it for external review...
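
A minimal sketch of what such a self-check might look like, written purely for illustration: the item wording mirrors Claims 1 and 2 above, but the pass rule (all must-haves plus at least one of the Claim 1 strengths) is my own assumption, not an established standard.

    # Hypothetical self-review script; criteria follow Claims 1 and 2 above.
    # The pass rule (all must-haves plus at least one strength) is an assumption.

    STRENGTHS = [
        "Solves a significant/important problem",
        "Poses a significant conjecture",
        "Uses a sophisticated or surprising solution method",
        "Says something non-obvious or counter-intuitive",
        "Corrects a popular misconception",
        "Sheds new light / initiates a new line of research",
        "Demonstrates significant improvement over the state of the art",
        "Makes a valuable contribution to further research (metrics, models, data)",
        "Presents convincing results (e.g. analysis validated by experiments)",
    ]
    MUST_HAVES = [
        "Demonstrates thoroughness and effort",
        "Is technically rigorous",
        "Presents the material in a clear, compelling manner",
    ]

    def count_yes(items):
        # Ask the author for a y/n answer on each criterion and count the yeses.
        return sum(input(item + "? [y/n] ").strip().lower().startswith("y")
                   for item in items)

    if __name__ == "__main__":
        strengths = count_yes(STRENGTHS)
        musts = count_yes(MUST_HAVES)
        if musts == len(MUST_HAVES) and strengths >= 1:
            print("Passes the self-check: %d strength(s), all must-haves." % strengths)
        else:
            print("Not yet: every must-have plus at least one strength is needed.")

Of course, the point is the list itself rather than the script; any honest pre-submission review along these lines would serve the same purpose.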

Tuesday, August 03, 2004

Building Knowledge, Together

"In the long history of humankind (and animalkind, too) those who learned to collaborate and improvise most effectively have prevailed." - Charles Darwin

Even a casual glance over my publication list will reveal that I am a great believer in both intra- and inter-disciplinary research collaborations. In my first two years as an assistant professor, I have actively developed collaborations with faculty members in several departments and institutes at USC: Electrical Engineering, Computer Science, the Information Sciences Institute, and Industrial and Systems Engineering. My collaborators span a wide range of disciplines and perspectives, including experimental networking, signal processing, optimization, data management, distributed computing, and algorithms. My own research focus is on applying theoretical techniques (including modeling, performance analysis, and algorithms) to practical problems in sensor networks.

In nearly every case, I have found that the complementary nature of interests between my collaborators and me has yielded interesting new problems and helped bring relevant techniques to bear on them. For instance, working with more experimental colleagues, I am able to understand practical considerations and determine which empirical observations need to be better understood through modeling and analysis. At other times, I help translate practical problems into a formal setting so that I can work with colleagues who have expertise in particular theoretical tools (e.g. estimation theory, random graphs, network flow optimization) to develop solutions for them.

Such collaborations are becoming increasingly fruitful because they bridge the gap between theory and practice, between independent yet related disciplines. Indeed, in recent years, the National Science Foundation (NSF) as well as other funding agencies have expressed a clear preference for funding collaborative research.

This may all be well and good, but I am now having second thoughts, because a strong focus on collaborative research runs counter to the conventional "bean-counting" perspective on evaluating scholarship. Come tenure time, how can the individual contributions of a faculty member like myself be assessed "quantitatively" if most papers are written in collaboration with others? One base assumption in this perspective is that all papers have the same "value." Under this assumption, it follows that the individual credit for a genuinely significant collaborative paper with three authors (say, one student and two senior collaborating faculty members) is worth less than that for a paper with two authors (say, just the student and his advisor).

The alternative perspective, which I would advocate strongly but genuinely fear is not widely accepted in the academic world today, is expressed beautifully by Keith Dorwick in a thoughtful essay titled "The Ways We Build Knowledge." Although Dorwick's focus is on literary fields, the following comments from his essay apply equally well to engineering disciplines:

"Currently, there is only one dominant model for the creation of knowledge that we call research -- that model is a solitary one in which, generally speaking and especially among literary critics, scholarship is seen as a personal possession, one that is owned by the person who has done the work necessary to write the books and journals that bear her name and that, for those who find themselves in the professoriate, is not just a creative expression of one's talents, but the means necessary to keep one's job in a market that is only now possibly beginning to open up.

In this model, collaboration is seen as a problem, not an opportunity -- since both initial employment and tenure depend on the production of scholarly articles that are disseminated in peer-reviewed journals and books that are published by university presses that also depend on peer reviews, it is very necessary to know exactly who did what work. Collaboration becomes nothing less than an administrative problem -- how can tenure and promotion committees and deans, for instance, know whether or not to promote a local candidate who has spent much of her career in collaboration. The problem is simple, from this viewpoint: who owns the work, and who, therefore, should benefit from it?

Of course, the simple answer is this: "we did it together." A collaboration ought to be judged as the collective work of the individuals involved and tenure and promotion committees ought to see good work as something of which the department, college, and university can be proud. In fact, of course, as anyone who has worked in a good collaboration knows, the fact is that the work is often the stronger for being the product of two or more people... "

Monday, July 26, 2004

We regret to inform you...

These are the most dreaded words, always prefacing a note indicating that a paper or proposal has been rejected. Sometimes the note cushions the fall a little by adding something like "unfortunately a lot of good submissions could not be accepted due to lack of room."

The sting of seeing a paper/proposal rejected has diminished a little for me over time, but has by no means disappeared completely. I do find it's more demoralizing for students, though, particularly when writing their first papers.

I guess all we can do is take the criticism in stride, try to learn from it and improve.

On the whole, I find it's generally worth attempting at least one revised resubmission before moving on.

But occasionally you get a rejection where your gut tells you that the reviewers are mistaken. This is particularly likely to happen when attempting a new direction that is not an incremental improvement of someone else's work. In that case, it's worth trying harder to get the work published. Failing all else, put it up as a technical report so that it can still be cited.

Wednesday, July 21, 2004

On, Off, Quick, Slow

Perhaps the most exciting and energizing aspect of my job is the opportunity to guide some really talented graduate students. I'm still figuring much of it out, but it has been a great experience so far. The following are some thoughts that have occurred to me about being an advisor.

A key issue is deciding whether to be a more hands-on or a more hands-off advisor. To illustrate these roles, consider the activity of problem formulation, which is at the heart of the research enterprise. A hands-on relationship would imply assigning well-formulated problems to the student; a very hands-off relationship would imply waiting for the student to come up with new problems and ideas; somewhere in between would be working with the student to formulate problems together.

On this spectrum, I'm probably closer to being hands-on than hands-off in general (perhaps understandable, given that I am a new faculty member trying to establish a distinct identity in my field through a coherent research program). However, while there are many faculty members who are static in their roles, my own belief is that there is no one-size-fits-all approach. An advisor has to be flexible enough to adapt to each individual advisee's capabilities and drive, which also vary with time. I would prefer to be as hands-on as possible early on, when students are likely to need the most guidance, but I also believe it's important to give them much more independence and flexibility over time.

Another key issue pertains to when the student should start doing research. I think there are again two schools of thought on this: 1. as soon as possible; 2. only after sufficient preparation through coursework. There are pros and cons to each approach. Starting the student off with a research problem early on is good because it gives them motivation and a feeling for how research is different from coursework (which, in many cases, is all they have ever encountered before). On the other hand, sophistication and maturity require a good amount of time spent taking challenging and useful courses. It is perhaps a sweeping generalization, but I believe that for EE/CS students the start-quick approach may be better suited to more applied or experimental research, while the go-slow approach is particularly well suited to highly theoretical research. The key reason is that incoming graduate students are already equipped to do programming and some simple analysis, but generally need time to pick up sophisticated analytical tools and deep theory.

Because of my own prior experiences and style of research (which I would describe as applied theory), I've primarily chosen the start-quick approach to date. But I certainly see the advantages of the go-slow approach, and I will ask my students to take formative, challenging courses to continually hone their abilities. It's crucial that the quality of their work improve over time, as they become more knowledgeable and mature through coursework and their own growing research experience.

The Research Spectrum

We all know the political spectrum. There are conservatives on the right and liberals on the left.

There is a similar spectrum in engineering and computer science research, but the nomenclature is up for grabs (as far as I know). We have Theoreticians on one end and Experimentalists* on the other. Simulationists might be somewhere near the center, towards the experimentalist end; Applied Theorists would perhaps be near the center, towards the theoretician end of the spectrum.

My interest is in knowing whether I should say I'm just left of center or just right of center. I see myself very clearly as an applied theorist of sorts. My work balances equations and analysis with simulations or modeling/curve fits based on real data. I crave the cleanliness and generalizable insights you get from analytical formulations, but at the same time I care deeply about grounding this work in practical problems and realistic settings. Sometimes, in moments of doubt, I wonder if this is dangerous ground to occupy, in terms of how one is perceived by the community; but I have to say all my instincts and passions place me squarely at this position on the spectrum.

So how should we describe this spectrum:  Theoreticians on the left and Experimentalists on the right, or the other way round? What is your view? 

 
* Note: Experimentalists are sometimes referred to as Systems researchers in Computer Science parlance, so there is a well-known dichotomy between Theory and Systems. The term has a different connotation in Electrical Engineering, where another distinction is drawn between Electrophysics/Circuits (electrophysics, semiconductors, circuit design) and Systems (communications, computer engineering, signal processing, controls); and from the EE point of view, though this is a bit of a stereotype, "Systems" has a connotation of being the more theoretical of the two.


Thursday, July 15, 2004

On Selective Conferences

I was having a conversation with a colleague today about the selectivity of academic conferences and wanted to share some of my thoughts on this issue.

In the research area of wireless networks, as in other EE/CS areas, there are a number of academic conferences that are regarded as highly selective, including Infocom, Mobicom, Mobihoc, and Sensys. They tend to have very low acceptance rates, often on the order of 20% or less. There are also non-selective conferences where relevant papers may appear, e.g. the IEEE Vehicular Technology Conference, the IEEE International Conference on Communications, and the annual SPIE conference.

This effectively divides academic conferences into two classes: the prestigious "high quality" conferences, and the run-of-the-mill conferences with papers of uncertain quality. I want to examine carefully the impact of this situation on the research area and on the research community.

In the following discussion, I will make the idealized assumption that the quality of a peer-reviewed paper is not a purely subjective notion: there are papers that are objectively "good" (in part or in full) in terms of commonly accepted measures such as novelty, interest, and correctness. This is itself a debatable point, but it is certainly implicit in the whole notion of peer-reviewed papers, and it will make my discussion somewhat easier.

There are, of course, many positive outcomes. This categorization helps the rapid dissemination of good papers that appear in top conferences, because they get widely read and cited. It also helps create a hierarchy within the community: those with a large number of selective conference publications are perceived as top researchers and receive attention and fame. A top conference also attracts top researchers to its program committee, ensuring high-quality reviews and thus a positive feedback loop (well, at least so long as the number of submissions is not too large; more on this below).

However, what I worry about is that the classification of conferences creates an unfair prejudice on the part of researchers reading and citing papers. This classification encourages a presumption about the quality of a paper based solely on the venue in which it appears, and discourages careful consideration of papers on their own merits. A paper in a prestige conference is assumed to be of good quality. This is often true, but not always. In particular, with conferences as large as Infocom (on the order of 1500-2000 submissions a year), there can be high variability: good papers may not get in and bad papers may, since the quality of reviews and the review process is harder to ensure when submissions are large in number. Many researchers use the conference itself as a filter and are more inclined to read and cite papers from selective conferences. Thus many excellent papers that appear in non-selective conferences are often overlooked. This can slow and hinder the dissemination of important new results.

The main solution to this dilemma is to make researchers aware of this bias, and to encourage them to be more inclusive and critical in their reading and citations.

The two-tier classification also creates pressure on researchers to submit only to the top conferences. But that may exacerbate the problem. Increased submissions with a limited number of acceptance slots increase the selectivity even further, but also the frustration of authors whose good papers don't get in because of limited room and poor, "noisy" reviews. This can result in significant publication delay, and key results may either never get shared widely with the community or may become obsolete (defeating the whole purpose of conference publication).

More subtle issues that one should consider include the psychological impact of creating an overly competitive (as opposed to a cooperative) academic community; this should be balanced, though, against the fact that academia is meant to be a meritocracy.

I am by no means arguing against high-quality, limited-room conferences. Indeed, I have a distinct preference for them myself when I am submitting papers; I gladly serve on the program and organizing committees of such workshops and conferences; and on the whole, as a new faculty member, I rely on them considerably for the many positives I listed above (as indicators of paper and researcher quality, and as vehicles for disseminating and publicizing one's work). I just wish there were greater discussion and awareness within the community about the potential negatives.