Some of the abstract reviews I got

Daniil M. Ozernyi

At the time of writing this sentence, I have submitted more than three dozen abstracts to conferences, some of which were accepted, and some of which were not. Some of the abstracts got uniformly favorable reviews, some got uniformly unfavorable ones, and some got contradictory ones. I think some of the contradictions are rather instructive, and some are funny – so they are here for everyone to see.

Chicago Linguistic Society 58

LSRL 52

Penn Linguistics Society

One particular case

Most reviewers do not spend a whole lot of time writing reviews and explaining their point of view, but there was one particular case when I did submit a rather lousy paper to the Society for Computation in Linguistics – and I knew it was lousy. It was rejected, but one of the reviews is given below. It is the most thorough review I have ever gotten, or, I think, will ever get, and I am incredibly grateful to the reviewer for it. (I should point out that what I submitted was a paper, not a two-page abstract.)

The review

Score: -2 (reject)

I think it’s a good goal to provide a concrete metric in the 3LA domain (i.e., the metric of algorithmic efficiency), but I found the implementation confusing in several places (also, where was algorithmic efficiency actually defined?) so it’s hard for me to tell how much to believe the conclusions. To be fair, the paper itself notes in the conclusive remarks that this is a preliminary proposal meant to spur discussion, but I worry that it’s still too preliminary (perhaps due to length considerations as well).

Specific comments:

Also, why should the learner only be able to adjust one level up in the tree, if they have access to the whole tree and can parse with whatever parts of the tree seem to work? Is this meant as just one illustrative example of the kind of adjustment that can be made by using a parsing-to-learn strategy?