Award season
I had always been faintly impressed with the X-Men movies for featuring Oscar winners Halle Berry and Anna Paquin alongside Oscar nominee Ian McKellen.
But I just channel-surfed past Mars Attacks! with:
Jack Nicholson (12 nominations, three Oscars)
Glenn Close (five Oscar nominations)
Natalie Portman (one nomination)
Annette Bening (three nominations)
Which movies that couldn't possibly have been Oscar contenders themselves have had the biggest concentration of heavy-hitting actors, whether measured by Oscar nominations or by wins? I hope the answer is some lesser Altman pic with a sprawling cast, or some high-ambition catastrophe like Ishtar or Cleopatra, or else some legendary monstrosity like Caligula, or at best something quirky like Murder By Death. But I fear that it'll be some goofy cameo-heavy thing akin to Mars Attacks!-- maybe a Cannonball Run. Wouldn't that be awful?
Godfather III of course doesn't count-- it was, after all, nominated for Best Director and Best Picture. And I'll resist the temptation to count Titanic, since everyone else in the world seemed to think it was a great movie, even though I think it was a high-ambition catastrophe that happened to draw a couple of brilliant young actors in the making (it took me a couple of years to be able to see DiCaprio and Winslet as actors again, notwithstanding Gilbert Grape and Heavenly Creatures-- really, for DiCaprio it took until this year, when The Departed and Blood Diamond successfully applied a sledgehammer to my head and forced me to see past "I'm king of the world!"), as well as Kathy Bates.
And no, that judgment is not just because I now live in a city whose most famous resident is Celine Dion.
Saturday, January 13, 2007
Monday, January 08, 2007
Fascinating.
Via The Chronicle, news of a for-profit, annually updated ranking service for doctoral programs-- one without a purely reputational component (subscription needed). Unsurprisingly, it seems that Washington University in St. Louis is a big beneficiary of rankings that measure research productivity without being confounded by name recognition-- that's a kind of face-validity check on the service's plausibility.
It looks like the Political Science rankings (again, subscription probably required) use the following data (a rough sketch of how these inputs might be represented appears after the list):
Number of faculty
Percentage of faculty with a book publication
Books per faculty
Percentage of faculty with a journal publication
Journal publications per faculty
Percentage of faculty with journal publication cited by another work
Citations per faculty
Citations per paper
Percentage of faculty getting a new grant
New grants per faculty
Total value of new grants per faculty
Average amount of grant
Percentage of faculty with an award
Awards per faculty
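For concreteness, here is a minimal sketch of those inputs as a data structure. The field names and types are my own illustration; the service's actual schema isn't public.

```python
from dataclasses import dataclass

@dataclass
class DepartmentMetrics:
    """One department's productivity inputs, mirroring the list above."""
    faculty_count: int
    pct_with_book: float              # % of faculty with a book publication
    books_per_faculty: float
    pct_with_article: float           # % of faculty with a journal publication
    articles_per_faculty: float
    pct_with_cited_article: float     # % with a journal publication cited by another work
    citations_per_faculty: float
    citations_per_paper: float
    pct_with_new_grant: float
    new_grants_per_faculty: float
    grant_dollars_per_faculty: float  # total value of new grants per faculty
    average_grant_amount: float
    pct_with_award: float
    awards_per_faculty: float
```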
And the top ten departments:
Wash U
Harvard
Yale
SUNY Stony Brook
UIUC
U Kansas
U Maryland College Park
Princeton
UCSB
UVA
Update:
Chris Lawrence observes: "I’m not going to say that they’re implausible, but the fact that there’s one UC school ranked in the top ten and it’s not located in Berkeley or San Diego makes me a mite skeptical."
True enough. I treat Wash U as intuitive confirmation; Wash U in general, and its political science department in particular, has turned into the kind of place that's much better than its reputation, because reputation is such a lagging indicator. I certainly won't say that the list as a whole conforms to my intuitions. Of course, if all the rankings did was confirm intuitions, no one would be paying $30,000 a year to subscribe. But there's plausibly counterintuitive and... less plausibly counterintuitive.
A few quirks:
Maryland seems to make the top-10 list on the basis of very high "faculty with a book" and "books per faculty" results. Stony Brook, on the other hand, has no books but lots of articles. We're used to poli sci rankings that track American politics rankings, which are more journal-dependent; Maryland is more of a theory/ public law/ APD kind of place, where books matter more. It's to the ranking system's credit that it recognizes poli sci as a hybrid discipline; books don't show up in, e.g., the chemistry rankings. We're not told how heavily the two categories are weighted, nor what counts as a relevant "book publication" (probably *not* only a peer-reviewed monograph from an academic press). UVA also seems to be pulled up considerably by the book measure.
The citation measure has a huge range, from more than 9 citations per faculty member at Harvard to 1-1.25 at Kansas, Maryland, and Virginia. Insofar as that means the latter three are producing a lot of work that doesn't get read in the discipline, that's a bad sign, and maybe an underweighted one. I suspect that if we saw the top 20 on this measure alone, it would correspond much more closely to informed intuitions; and that's a vote in favor of the informed intuitions.
Grants are a funny category. On the one hand, they're inputs, not outputs, and so in some sense shouldn't be counted at all-- but that ship has long since sailed. And they're inputs that are directly relevant to grad student support. On the other hand, I look at the tiny Princeton figures and think, "well, yeah, why bother so often with big bureaucratic grant programs when your institution's so rich that it can routinely provide research accounts comparable to a smallish NSF grant?" And money already sloshing around the institution is just as good for grad students as money coming in on government checks.
FINAL UPDATE: So, it turns out that the formula is 60% publications and citations, 30% grants, and 10% awards (from Fulbrights to Nobels), which is arbitrary but fine. But for hybrid book-journal fields, the formula within "publications" is 5:1 books:articles. I'm all for books, and am well within the bookish part of the discipline; for political theory a 5:1 ratio is probably fine. (Always assuming that "books" means "peer-reviewed monographs.") But it clearly underweights journal articles in American and methods in particular and maybe in IR as well. Given the dominance of American in both numbers and disciplinary centrality, that's a problem.
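For a sense of what those weights imply, here's a back-of-the-envelope sketch. The 60/30/10 split and the 5:1 books-to-articles ratio come from the formula described above; the z-score normalization, the even split of the 60% between publications and citations, and the field names are my own assumptions for illustration.

```python
from statistics import mean, pstdev

def zscores(values):
    """Standardize one raw measure across departments."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

def composite_scores(depts):
    """depts: list of dicts of per-faculty figures. Returns one composite
    score per department under the weighting described above."""
    # Books count 5:1 against journal articles within the publication measure.
    pubs = zscores([5 * d["books"] + d["articles"] for d in depts])
    cites = zscores([d["citations"] for d in depts])
    grants = zscores([d["grant_dollars"] for d in depts])
    awards = zscores([d["awards"] for d in depts])
    # 60% publications and citations (split evenly here -- an assumption),
    # 30% grants, 10% awards.
    return [0.3 * p + 0.3 * c + 0.3 * g + 0.1 * a
            for p, c, g, a in zip(pubs, cites, grants, awards)]

# A bookish department vs. a journal-heavy one (made-up per-faculty figures):
departments = [
    {"books": 0.8, "articles": 1.5, "citations": 2.0, "grant_dollars": 40_000, "awards": 0.3},
    {"books": 0.1, "articles": 4.0, "citations": 9.0, "grant_dollars": 90_000, "awards": 0.5},
]
print(composite_scores(departments))
```

On these made-up figures, the bookish department wins the publication component thanks to the 5:1 weight, but the journal-heavy department still comes out ahead overall on citations and grants.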