I was on a committee to help rate abstracts for the PASS Summit this year. It was an interesting and challenging experience. I learned some things, and I can better appreciate that this is a tough job. It's hard to try to choose what people want, what will interest them, and what makes the Summit an easier sell to potential attendees. Or at least to their managers.
I found some flaws in the process, or at least things that made it a difficult set of decisions for me. I wanted to list them out, not to blame anyone, but to give some insight into the process, and perhaps gather ideas on how to better serve the community in the future.
Here’s the basic process:
- People submit abstracts. This year they could see what else had been submitted prior to their entry. I believe they could edit theirs, but I'm not sure.
- All abstracts were put in an XLS and sent to committee members.
- We had a tool on the PASS site that allowed us to rate each abstract from 1-10 in four areas: abstract, topic, speaker, and subjective.
- Once all the sessions in our area were rated, the committee scheduled a call to review the overall ratings (roughly the kind of averaging sketched below).
- We picked a certain number of sessions in various tracks along with alternates.
- People were notified.
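To make the math concrete, here's a minimal sketch of how those per-area ratings might roll up into an overall score, assuming a simple unweighted average. PASS didn't share the actual formula with us, and every name and number in this snippet is made up for illustration.

```python
# A rough sketch of rolling four 1-10 area ratings up into an overall score.
# This assumes a plain unweighted average; the real PASS calculation may differ.

CATEGORIES = ("abstract", "topic", "speaker", "subjective")

def reviewer_score(ratings):
    """Average one reviewer's four 1-10 ratings for a single abstract."""
    return sum(ratings[c] for c in CATEGORIES) / len(CATEGORIES)

def overall_score(all_ratings):
    """Average the per-reviewer scores across the whole committee."""
    return sum(reviewer_score(r) for r in all_ratings) / len(all_ratings)

# Hypothetical example: three committee members rating one abstract.
committee_ratings = [
    {"abstract": 8, "topic": 7, "speaker": 9, "subjective": 8},
    {"abstract": 7, "topic": 8, "speaker": 6, "subjective": 7},
    {"abstract": 9, "topic": 8, "speaker": 8, "subjective": 9},
]

print(round(overall_score(committee_ratings), 2))  # 7.83
```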
Here's a quick list of the issues I saw; I'll give more detailed thoughts on each of these. However, I'm also curious about feedback from other people who are interested.
- My first issue was that I didn't really have a set of guidelines for what each rating meant. What is abstract vs. topic vs. subjective? Speaker I could figure out, but what do you do if you don't know the speaker or haven't seen them talk? We actually had people rating this differently.
- I find it difficult to rate things on a 1-10 scale. What's a 7 vs. an 8 vs. a 9? I found myself struggling and probably rated things inconsistently. I might give someone an 8 in one area, then find a very similar item and give it a 7. This is a hard one.
- I didn’t have much feedback on how other people in the community felt about speakers.
- I didn’t have much feedback on how other people in the community felt about topics, or these specific sessions.
- It was hard to tell whether we were covering the full range of SQL Server topics. I saw only one replication submission in the spotlight sessions, and I have no idea if there were any in the regular sessions.
- I had no insight into what other groups were doing for their tracks. For all I know we all picked SSIS ETL sessions and everyone left out fuzzy matching or data mining.
- I have no idea what speakers submitted in other tracks or areas.
Some of these worked themselves out, so they aren't major complaints. As an example, we discussed our ratings and didn't necessarily just pick the top xx in each area. We moved things around, and sometimes picked sessions that were rated much lower.
As I mentioned, I'll post some more detailed notes on each of these areas, assuming I can disclose things. Please feel free to comment on what you think we should do to pick sessions that are more valuable to everyone.