I have complained for quite some time now about inconsistencies in the way Pitchfork assigns its ever-important Best New Music tag. As you've probably noticed, there's no set score at which it's a given: one album that gets an 8.2 will receive it, while another with the exact same score won't. There is no conceivable way this could make any sense at all, obviously.
But this very topic is addressed by a reader's email in Pitchfork's new feature, Inbox, the first installment of which I only came across last night even though it was published all the way back on March 1st. Aurora Nuncio writes:
How does Pitchfork's Best New Music system work? I've read reviews where two albums get a score of, say, 8.2, but one is BNM and the other isn't.
The truth of it is breathtakingly simple: Editors choose Best New Music albums based on the records that we think are the cream of the crop. These are excellent records that we feel transcend their scene and genre. When an album gets Best New Music, we think there’s a very good chance that someone who doesn’t generally follow this specific sphere of music will find a lot to enjoy in it.
So, as I understand this, an album can earn an 8.2 for being a really good example of a particular genre and still not be considered Best New Music, while another album gets an 8.2 and Best New Music because it transcends its genre, even though logic would dictate that this ability to transcend should demand a higher numerical score in the first place? This is an exhausting, unpleasant thing to think about on the Monday morning after SXSW (or any morning!), I know, and perhaps it's even become a moot point: so far this year, no album has received BNM with a score lower than an 8.4.
Follow Mike Conklin on Twitter @LMagMusic.