One of the topics the Election Review Committee has discussed is how best to score the interview portion. Because of the nature of the interview, it isn't something that can really be put out for review by the world at large. If you're wondering why not, imagine you just had a job interview, and it went terribly. As in, "It doesn't matter what my resume says, I won't be hired" bad. Would you want that put out there for the whole world to see? We know some folks don't interview well, but that doesn't mean they have poor verbal communication skills. Nervousness plays a huge part in the interview process.
The question then is how to score the interview itself. There are two basic methods available:
- By Ranking
- By Component
By Ranking:
In this scenario, all candidates are ranked from first to last. First place gets the maximum number of points, with a sliding scale from there. So if the interview is worth 50 points, 1st gets 50, 2nd gets maybe 40, 3rd a little lower than that, and so on. There's also a question of whether the Nominating Committee (NomCom) has to come to consensus on the ranking, or whether it works like a lot of Hall of Fame voting: each member awards so many points for 1st, fewer for 2nd, and so on, and first place is determined from the aggregated points of all the NomCom members. The first approach is harder, because consensus can be hard to reach, but it ensures that the whole NomCom is behind the rankings and, therefore, the points.
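To make the two variants concrete, here's a minimal sketch in Python. The point scale (50, 40, and so on) is just an assumed example, not a committee decision:

```python
# Illustrative sketch of rank-based scoring. The point scale below is an
# assumption for the example, not anything the ERC has settled on.
RANK_POINTS = [50, 40, 32, 25, 20]  # 1st place, 2nd place, ...

def score_by_rank(ranked_candidates):
    """Assign points to candidates ordered from 1st to last (consensus ranking)."""
    return {name: RANK_POINTS[i] if i < len(RANK_POINTS) else 0
            for i, name in enumerate(ranked_candidates)}

def score_by_aggregated_ranks(rankings):
    """Hall-of-Fame-style variant: each NomCom member submits their own
    ranking, and each member's points are summed per candidate."""
    totals = {}
    for ranking in rankings:
        for name, pts in score_by_rank(ranking).items():
            totals[name] = totals.get(name, 0) + pts
    return totals
```

Note that in the aggregated variant no consensus is needed; the final ordering simply falls out of the summed points.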
By Component:
Let me approach this from my experience competing for the All-* bands (All-County, All-Regional, etc.) and from when I was a judge for an All-County Band (flutes). There were several sections we were to score:
- major scales
- music terminology
- prepared piece
- sight reading
Each component was worth a certain number of points. If I remember right, music terminology was worth 10 points, or 2 points per music term asked. The prepared piece was worth the most, because it was more difficult than the major scales and you had an opportunity to work on it before you walked into the room. Each component had a point total, and you could earn some or all of those points (technically you could get zero; the only time I saw that was when a competitor came in and didn't know a single music term). All of these points were added up for each judge, then the judges' totals were combined, and chairs were determined accordingly. For the interview, you'd go through a similar process: the component points would add up and count as part of the overall whole.
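The component process above can be sketched the same way. The component names follow the All-County example, but the maximum point values here are assumptions made up for illustration:

```python
# Illustrative sketch of component-based scoring. The maximum point values
# are assumptions loosely based on the All-County example, not real figures.
COMPONENT_MAX = {
    "major scales": 10,
    "music terminology": 10,  # e.g. 2 points per term, 5 terms asked
    "prepared piece": 20,     # weighted highest, per the example
    "sight reading": 10,
}

def score_candidate(component_scores):
    """Sum one judge's component scores, capping each at its maximum."""
    return sum(min(component_scores.get(c, 0), cap)
               for c, cap in COMPONENT_MAX.items())

def total_across_judges(judge_scores):
    """Combine every judge's total into the candidate's overall score."""
    return sum(score_candidate(scores) for scores in judge_scores)
```

An interview version would swap in interview components (with whatever caps the committee chose) and fold the result into the candidate's overall total.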
Pros and Cons:
The first method, by ranking, allows for a definite point scale that's easy to determine. The issue comes in ranking the candidates. A second issue is that if two candidates are neck and neck, one of them will still receive fewer points because of the scale. On the plus side, even a candidate who does poorly on the interview still walks away with some points.
The second method, by component, adds a lot more subjectivity, but it allows for more granular scoring. If two candidates are really close on the interview, they'll get about the same score. It also means that, since you're scoring individual components, it's hard to do so badly on the interview that you get no points at all. The bigger issue is that, because the interview isn't available to the public, the component breakdown can become a flashpoint for arguments over why a candidate did or did not make the slate.
We'd Love to Hear From You:
Because interview scoring can be contentious, and because of the lack of transparency around the interview, we'd love to hear your opinion on the two methods. If you would, please visit the forum thread on the ERC site devoted to this topic.