Somewhat to my surprise, especially given the slow time of year, the RHSU 2010 Public Presence Rankings seem to have struck quite a chord. I mostly noticed this due to the rash of questions about how I could have omitted scholar X or Y. I'd indicated in the initial post that the list was intended as an illustrative cross-section of faculty "from various disciplines, institutions, generations, and areas of inquiry," but that didn't seem to fully satisfy. So, given my accommodating nature and interest in seeing how some of the additional folks would score out, we've supersized the rankings.
The rankings are now bigger and badder than ever (or at least bigger and badder than they were on Tuesday), with 35 additional scholars rated. The indomitable Daniel Lautzenheiser has now calculated scores for UVA's Bob Pianta and Dan Willingham, the omnipresent Diane Ravitch, school-budgeting ace Marguerite Roza, Pedro Noguera, Gary Orfield, U. Wisconsin's Adam Gamoran and Sara Goldrick-Rab, Stanford's Larry Cuban and Mike Kirst, and more than two dozen others.
And, truth be told, some of the omissions were based on mistaken information. I'd thought Roza had left for Gates, but I'm now informed that she's only on temporary leave and remains on the faculty at U. Washington; given how large she's loomed this year in discussions of shrinking budgets, I was intrigued to see her score. Similarly, I had thought of Ravitch more as a personality or think tanker than as a university-based researcher, but Diane let me know that she has an NYU faculty card and office and that her salary is paid by NYU. So, while my gut feeling was that it might be more fitting to compare Diane's public presence in 2010 to that of a well-known edu-pundit like Checker Finn (or even to that of Michelle Rhee), I was happy to include her in the expanded edu-scholar rankings—where she promptly posted the ridiculous, curve-crushing score you'd expect from her action-packed year.
Now, while we've got scores for 89 faculty at 37 institutions, I'll remind everyone that there are thousands of faculty who might be rated, that there was little science in determining the composition of the list, that inclusion or omission shouldn't be taken as any kind of statement, and that the methodology we published on Monday means anyone with an Internet connection can rate any scholar in about 15-20 minutes. And we won't be adding more this year. So, if you've got suggestions, feel free to pass 'em along—but just know it'll be with an eye to 2011.
In response to several of the queries that came in, I'll touch again on the question of how I decided whom to score. Given that the point of the exercise is to nudge the academy to do more in acknowledging scholars engaged in translating research into policy and practice, I focused on active university scholars. This meant that think tankers, emeritus faculty, researchers with only a nominal university affiliation, and independent authors were generally not scored. These determinations can be murky, individuals may have multiple affiliations, and Daniel and I did this as an engaging but unfunded exercise—so I'm sure some of our judgment calls are eminently contestable. In response to popular requests, we've gone ahead and scored some of the folks we omitted on those grounds from the initial list.
Several folks also asked what I think they should make of the results. My answer: That's really up to them. As I said on Monday, I think these kinds of metrics are relevant because I believe it's the scholars who do these kinds of things "who can cross boundaries, foster crucial collaborations, and bring research into the world of policy in smart and useful ways." If you disagree, or think some of these metrics are interesting and others are not, that's cool with me. Again, as I wrote earlier in the week, the aim is to "urge universities, foundations, and professional associations to consider the merits of doing more to cultivate, encourage, and recognize contributions to the public debate that the academy may overlook or dismiss."
Finally, as I noted on Tuesday, "If readers want to argue the relevance, construction, reliability, or validity of the metrics or the rankings, I'll be happy as a clam. I'm not at all sure I've got the measures right, that categories have been normed in the smartest ways, or even how much these results can or should tell us. That said, I think the same can be said about U.S. News college rankings, NFL quarterback ratings, or international scorecards of human rights... For all their imperfections, I think these systems convey real information—and do an effective job of sparking discussion." That's the aim here.