Linda Northrop reports that "federating" the program committee this way worked for some other ACM conference. -rpg
I have a hunch that making up a committee from people early or late in their careers will be better than packing it with mid-career folks who are likely to be too conservative or interested in preserving their own paths. Maybe this is how to make a CoolProgramCommittee. But I could be wrong. -rpg
I agree that people early or late in their careers will be less likely to be conservative. Also, tell the people you pick that you were looking for people who were especially creative and accepting of new ideas, and you thought of them. Nearly everybody thinks that they are accepting of new ideas, but if you let people know that that is an important criterion for the p.c., you'll encourage them to be more so. -RalphJohnson
I like doing something to stir up the PC. Breaking it up might help. Other ideas are:
Require each advocate to write one sentence describing the contribution of the paper, and/or one sentence describing who this paper will benefit and how, and/or one sentence about why we should reject a paper.
Set a policy of rejecting papers that are only one year's worth of work better than the previous paper published in the field.
-- DavidUngar
As a 'mid-career folk who is protecting his patch' I think that only people who have had a paper accepted to OOPSLA should be on the PC :-) (OK, I'm still bitter about this year's rejections.) I do notice that this year has a more diverse selection than the traditional two sessions of type theory, two of JVMs, and one on Smalltalk. Nonetheless, I do think there is a discipline in active researchers being the majority of the PC. But then, I don't think the PC makeup (or indeed the technical programme) is what is most broken about OOPSLA. -- JamesNoble
We could also use a much larger spread of external reviewers than just the PC.
I reviewed a bunch of papers for SIGCHI, but none for OOPSLA this year. They seem to have a hierarchical structure, with reviewers at large bidding for reviews, and then one or two PC members summarising the reviews for each paper. -- JamesNoble
In my opinion, one of the main problems with the program committee is that almost all the papers are programming language papers. This was not true ten years ago. Most of the people who come to OOPSLA are not interested in programming languages. Hardly any of the tutorials are about programming languages. Why are the topics of tutorials so different from the topics of the papers?
If the program committee is divided into groups, ONE of the groups should be Programming Languages, which should include language design and language implementation. No more than half the papers should be in this category. After all, if good papers get rejected, they can go to PLDI. But when papers on design get rejected, there is no place else for them to go.

If you look at the tutorials, you will see that patterns and software architecture are two popular topics. There should be more papers on these subjects accepted than there are now.

My experience with program committees (and I've been on more than I care to remember) is that junior people are great at seeing the trees and not so great at seeing the forest, while senior people are great at seeing the forest but not so great at seeing the trees. The junior people come in with a full command of all the details and are perfectly willing to reject every paper that is not perfectly executed. The senior people look more at the big idea but are much more likely to miss something. My ideal program committee meeting is to have the junior people explain the papers to the senior people, who then make sure the junior people don't reject all the interesting papers because they can be perceived to have a flaw.

Another program committee bug is that one negative person is usually enough to kill a paper. One way to avoid this problem is to let every program committee member unilaterally accept one paper even if everyone else wants to reject it. This will eliminate the problem that if you need to reach consensus to accept a paper, you will only accept papers that everyone can see are worthwhile. When the acceptance rates get as low as they are getting, this means that only very well executed incremental papers get in.
New papers are inevitably more difficult to appreciate, if only because they take more precious time to understand and evaluate (which is a sure way to annoy a time-strapped PC member). I'd like to see conferences accept more new, interesting, and controversial papers, which will inevitably require more variance in the selection process.
If you back off of requiring consensus to accept papers, you can immediately solve orthodoxy and distribution problems by putting the right people on the committee. (This assumes, of course, that you can get submissions in the areas you'd like to accept papers in). With this kind of policy in place, you can afford to have a single PC meeting and still have a range of papers.
You also stand a chance of at least addressing the problem that sometimes people who have worked in an area inevitably apply a much higher standard to papers in that area than in other areas, which can make it really difficult to accept papers in that area. This requires someone on the PC to have enough guts to basically disregard the advice of the expert in the area.
Another tension inherent in this (and other) !PCs is the desire for "high quality" papers (traditionally academic with lots of proofs or performance statistics) vs. exploratory "off the wall" ideas that engender lots of discussion and argument. The first tend to encourage !LPUs (least publishable units) while the latter encourage some really BAD papers. This puts added stress on PC members either to keep to the old standards or to get some exciting items accepted.
Uncannily relevant article in Computer, June 2004 -- pp 92-90 -- on the economics of International Conferences: balancing quality and the need to accept papers to attract an audience.
- CeciliaHaskins

As with James Noble, I have been involved in reviewing HCI papers for CHI and national conferences in the UK. Having a panel of reviewers who 'apply' to review means that the reviews come more 'from the community' as a whole. There is an issue of quality associated with opening the review process to anyone who wants to sign up, though.
I'm still not sure if the technical program is the key problem with OOPSLA. OOPSLA is still a highly-regarded academic conference, and so maybe academics attend in about the same numbers as before. Making the technical program relevant to the broader software community may help, but it is hard: ideally we would publish lots of solid work that is relevant to a broad audience, .... and the papers should have great new ideas as well. How many of these can we get in a year? More likely a paper is weak in some area, or is really solid but not very novel, or somewhat narrow. What are the criteria? I would err on the side of trying to publish papers that matter. Perhaps we should give authors more help, both before and after the PC meeting. Fixing up a weakness in a paper that is important is easier than adding real fundamental significance to an otherwise bland but solid paper. Yet the latter is more likely to be accepted.
I agree with MartinRinard that requiring consensus is bad: but perhaps two PC members must get together to accept a paper in the face of objections. I think that strong support from a few PC members is more important than lack of objections from the entire PC. - WilliamCook
What we saw in this year's PC meeting was that some papers that several people passionately argued in favor of were rejected because one respected person objected (often with some vigor). There were at best rare (I can't think of one, actually) cases where the objector would compromise by allowing a passionate person to accept a paper, but numerous cases where a passionate objector was granted his (gender intended) wish to veto. That is, lots of black balls, no white balls (gender not intended). I took this as a sign of caution. The Technical Program description included this sentence:
Some technical papers fall short of acceptance for the conference proceedings because they are premature or rhetorical but are still considered relevant to the future of computing and thus to OOPSLA.
I take this as saying that somehow the Onward! papers are second class, because papers that "fall short" of the regular program may be "relevant." This reflects an attitude about academic papers.
Another interesting phenomenon was that one person strenuously objected to anything like shepherding, even in the face of people in the room volunteering. There were several papers (passionately argued for) that relatively minor shepherding could have fixed, but these were generally rejected (we took one or two with some shepherding, I think).
On Cook's main point, I think the main technical program is ok - that is, that the academic papers pull in a reasonably-sized academic audience - but that we need to add things for people who don't care for the academic papers and have little to do once the workshops have shut down and there are no exhibits to go hang out around. Well, there are the riotous special events and posters.
-rpg

Consensus is always desirable, but never required. There is a very simple solution to the problem of a paper that has several supporters and one loud objector: after a reasonable amount of discussion has occurred, the program chair has the power to call for a vote. (The program chair should point this out at the start of the PC meeting.) - GuySteele
And, ultimately, to use their casting vote if there is a tie. I always thought this was why the programme chair was the editor of the proceedings: ultimately, they decide.
- JamesNoble

I appreciate Martin Rinard's remarks above, but there is a minor problem with the suggestion "One way to avoid this problem is let every program committee member unilaterally accept one paper even if everyone else wants to reject it." If enough PC members exercise this whiteball, the result can be acceptance of more papers than will fit in the conference. Then you're back to horse-trading over which papers have to be cut after all, and maybe it's not quite fair to have certain papers protected by the whiteball in this situation.

I am running a small internal conference where I work. I'm making up a program committee, but the way we will work is that they will read and review, but I will make all the decisions. This is partly because of the problems of getting consensus on a program committee, but mostly because the primary goal of the conference is pedagogical (though the attendees don't know that). -rpg
I share the perception that especially the experts in a field tend to be too stringent with some papers. It seems that papers with nobody in the PC being a real expert are more likely to be accepted.
Whiteball might be too much power for a single member, but CyberChair shows all the ratings, and maybe the reviewers of a controversial paper should discuss it offline before the PC meeting and work out the points in favour and against, together with recommendations for improvement. The result can be presented at the PC meeting, and the meeting can decide according to the prepared material more easily.

What might be a reason to be against shepherding?
As I recall, the main reason was that to offer shepherding properly, the committee had to be put together knowing that shepherding was going to be done, the call for papers had to announce shepherding, and there had to be a way to ensure that shepherding and the final decision would be done properly. Since none of that had been done, someone objected to offering shepherding on the spur of the moment.
I have also heard people comment that conferences don't do shepherding, journals do. -rpg
Getting high-quality shepherding is hard enough at PLoP: how would it work at OOPSLA? Is having a mate of mine on the PC agree to shepherd the paper the way we decide what is accepted? If not, how are the final decisions to be made? Do we re-review the paper?
If OOPSLA is serious about giving more help to authors, let's follow CHI and
run a formal mentoring scheme before the deadlines, not patch things up afterwards. But don't the practitioner reports do this already?

There were several papers this year I hated to see rejected. Having left the academic path 13 years ago, I'm sometimes excited by papers that are just well-written without boosting the whole of research and industry by ten years (or maybe the PC just fails to recognize that). During the discussions I sometimes wondered whether this PC would have accepted papers by Einstein or Heisenberg if we had been a hundred years earlier. To be more constructive: I think any help for the authors (shepherding or mentoring or whatever) would help authors who have interesting ideas but a writing style that is not up to academic standards. In addition we may define more than one track, e.g. a track of high-quality papers that really boost academia or industry (the kind of papers many PC members were looking for this year, in my perception), a track for papers that are only a minor progress but are exceptionally well written, a track for weird ideas that may exhibit their value in ten or twenty years, and so on.
I think the academic papers are OK, but I'm missing an alternative. The practitioner reports are a good start. What I'm missing are lessons learned from the field; this could be practitioner reports, but I think they are too narrow. What I really like is what, e.g., JAOO offers: they have 'talks' of about 45 minutes (or 1 hour) in length, so these are more like mini-tutorials, which allow the speaker to offer a lot of knowledge. These talks are hardly ever academic, but very pragmatic and directly usable. So I suggest that, to keep the academic "touch" of OOPSLA, we keep the academic papers but also offer something different. This will definitely require a different PC as well.
I totally agree - as I mentioned in the CallForPapers discussion, I think exploratory empirical work on OO applications and practitioners can bridge this gap. I gave a talk at an AOSD on some investigation that some of us did on how programmers deal with aspects, and it was incredibly well attended. Practitioners love hearing about themselves, and hearing validation and evidence for the issues they've been intuiting for a while. Researchers also came away hearing about open problems that might help drive further technology or more investigation. The thing with SE studies, though, is that they're difficult to make large, and hence believable. This places a huge onus on the researcher to argue well for the generalizability of their results, and also means that a PC member will have to be familiar with the expectations for empirical work of this style.