I’ve decided to write more on this topic as, having come in the top 20% of failed applicants for last year’s research grants, I was persuaded to try again, with a revised version of the same project. As I noted before, much of the art of success comes down to making (literally) incredible claims, so with valuable input from the university’s research support office I “beefed up” the project with even bigger assertions about how it would change the field, and what sort of research results I’d obtain. Once again, I was failed. This time I’ll give the results in detail; if 100 people did this, we might gain a little more insight into the system.
Three judges (who, as I wrote before, may very well know the applicant, or the applicant’s rivals) assess the project from various points of view and grade these on a 1-to-4 scale, 4 being very good. An average of around 3 across all the grading criteria generally seems to be enough to obtain funding. One of the categories roughly translates as the “academic validity” of the project. Last year, the less ambitious version of my project scored 2.67 here, probably meaning that two judges scored it “3” and one “2.” This year, despite the bigger claims and supposed improvements, the score slipped to 2.33. On the other hand, another category is “the ability of the candidate to conduct the research.” Last year I scored just 2.67 here, but this year I was rated far higher, at 3.67 (i.e. one raw point short of a perfect score!). It would be a fool’s errand to try to find anything objective or scientific in this, but if even a modicum of objectivity can be teased out, the conclusion is apparently that my abilities and my judgement are increasingly at odds: I’m a good researcher who just can’t find an appropriately “valid” project! Conversely, of course, less talented researchers are apparently having more brilliant ideas. Yet another category assesses the international impact of the research. Here I scored 3 last year, but 2.67 this year.
What’s most revealing about this is that I actually added a book to the expected research results! I said I would write (among other things) a pioneering study of the fascinating writer-composer Mary Linwood (1783–1862), and that Cambridge University Press (CUP) had expressed interest in publishing such a study. Perhaps the judges thought that was just boasting, and who has heard of Mary Linwood anyway? But in March this year, CUP really did issue me with a contract for such a book. Would the result have been different if, at the application stage, I could have said I had a book under contract? Probably not, because, though it’s hard to credit, the idea of academic “validity” is remarkably nebulous in Japan and (I would argue) deliberately kept that way. Many Japanese academics would swear till they’re blue in the face that my ideas about Linwood, whether presented to the world in a CUP volume or not, are no more “valid” than someone else’s ideas presented in one of the thousands of in-house, unrefereed journals published by Japanese universities. By extension, a past record of publishing in top-class journals and with internationally recognized publishers won’t influence where the money goes, and the ultimate result is that Japanese universities will continue to slide in the international rankings despite all the money thrown at academic research.