Before I go ahead and jump into questions or individual products I'd like to take a moment to discuss the actual process I went through to arrive at my nominees, as well as a few thoughts on what I'd change if I did it again.
First, when receiving a product I would usually give it a quick flip-through to get a feel for what it's about, much like what you might do when encountering something new and interesting at the FLGS. It's not a strictly necessary part of the process, but I believe it got me thinking about the book. Here I'm not looking for anything in particular, just impressions. Those can come from anything: chapter titles, opening fiction, interior art...
After that the book gets separated into one of three boxes that are still sitting around my living room (sorry, dear): Read Now, Read Later, or Already Read. Any number of factors can play into which of the first two boxes a product gets placed in. If it's for a setting I'm unfamiliar with (like Helios Rising was), I set it aside until I can do a little research. If it's for a game I'm unfamiliar with (like the WFRP products), it gets set aside until I can get my hands on a core book. And if it's a type of product I'd been overdoing (like any number of adventures were), I set it aside until the products stop bleeding together. Otherwise, it goes into the Read Now box.
From here it's a bit random as to when a book gets read. At any given point in the process I was usually reading two to four books at a time. Usually that would be one big one, a couple of small to average ones, and a little adventure or something so I could actually feel like I was accomplishing something. Books were generally read in the order they were submitted, but a couple caught my interest so much during that initial glance-through that they got moved to the head of the line.
The actual reading is, of course, the most time consuming part. For the most part I would get the first book in the Read Now box and just go through it beginning to end. That wasn't always practical, so often I would have books scattered all about. Like I said, I read more than one at a time. I usually kept a big book by the bed so I could make some progress before going to sleep, a couple of mid-sized books in the car for reading at work (and often a backpack full in the back seat during the later portions of the process), and something small at hand wherever I was so I could read during my free time.
During the reading I took notes. These might be neat rules or cool setting features I liked, perhaps some artwork that really clicked with me, or, more often, how I felt about the book's writing and mechanics as a whole. If there was anything I didn't like, that got written down as well. The notes for each book were then folded and stuffed inside the front cover for easy reference, and I would turn to the Excel spreadsheet I used to track the entries and give each item a numerical score in every category it qualified for.
After that it was time for the leg-work. Wherever possible I would search for reviews, actual play posts, or commentary online. This helped me see things I might have missed in a hasty read-through, and often pointed out what made a game click for the people who enjoyed it. It got me through some games I was struggling with, and really served to inspire me in a lot of ways.
At that point the book got moved into the Already Read box, and I moved on to the next product.
So, that's the review process from my end. I'm sure every judge has their own way of doing it, but that's what seemed to work best for me. From here we move on to the group efforts.
The judging forums started off pretty slow. There were three new judges this year, and I think most of us fumbled around a bit at first. Here is where I credit Jeff for really pushing things forward. He got us talking, and kept us going. It really broke the ice and got us discussing the entries in an open manner before we had to start really pronouncing judgments.
Basically someone would just hop online and make a post about an individual product. Not so much focusing on the awards at this point, just our impressions. If it was something we liked, we said why; if it was something we didn't get, we asked for help. That, for me at any rate, was what started the bonding process between the judges and served to solidify some of our favorites early on.
As the end drew near, discussion turned to the categories themselves. That usually started with some informal lists: things that might qualify, maybe one or two favorites, that sort of thing. As time passed those lists firmed up into something more structured, which eventually evolved into our top 5/7. Here each judge posted the top 5 products they wanted to see in each category, along with two alternates. At that point time was running short and things began to move quickly. Left with little time to haggle, we started tossing out alternative methods for choosing the finalists. When the other judges talk about the work I did, I can only guess this is what they meant. I started by asking everyone to choose their top 2 must-haves in each category. I still think this was a good idea, but there simply wasn't time to implement it.
Stuart then stepped in and suggested the method that really set the stage for our final deliberations. We compiled each judge's top 5/7 lists for each category into a single list showing every nomination, ordered from most nominations to least, with the names of the nominating judges beside each product. Anything that all five judges nominated was locked into a category; everything else was open to discussion. We had an excellent fall-back plan for voting that Stuart suggested, but I think we were all committed to working this out amongst ourselves.
This is where the compromises started, which was both necessary and the weakest link in the process. Sometimes this was handled very well, with individual judges sacrificing some of their choices for the choices that other judges were passionate about. Other times, especially as time grew short, we were left with little option but to back the choices that had the support of the most judges. I feel like this was the weakest part of the process simply because it wasn't hugely different from actual voting, and I think that, given time, we probably could have worked it out better.
That's not to say there was anything wrong with it. Like I said, it was necessary, but it was a little disappointing because it wasn't what I felt we were shooting for.
That's about it really. We spent the last three days going through the lists and narrowing them down into what you have seen posted. In the end a surprising number made the list by unanimous decision, but a couple managed to fall through the cracks because there simply wasn't enough room for everything.
So, what would I change?
It's all about time. I was notified that I was elected back in mid-February. I received my first products at the end of March. That's five weeks wasted. From there we had until the end of June to read the 239 entries, and less than a week more to discuss the final nominations. That's a little over three months to get it all done.
It simply wasn't enough time.
I know that there are plans, at least, to fix this in the future, which I find very exciting.
Second, I think I would have started my lists sooner. It would have been better, I think, to have a sort of active list for each category that I could compare each book to as I read it. It would be more time-consuming, but I think we'd end up with a more varied list in the end. It would make it a lot easier to judge the individual merits of every product in each category if the books were judged piece by piece, instead of as a whole.
Finally, I think I'd be a bit more passionate about products. I tried to remain more-or-less clinical about the process, but I think that hurt the chances of a couple books that I really loved but was afraid to push on the other judges too hard. I used a terrible numerical rating system instead, which I wish I had just dropped. I was afraid to trust my gut feelings because I didn't want to allow room for favoritism, but really that's what it's all about. There simply is no good objective way to do this, and I should have just embraced that. It all comes down to opinion in the end anyway. All the rating system did was screw up my thought process. If I gave something a high score I was forced to measure everything that came after against that, and the scores ended up meaning different things from the beginning to the end. Halfway through I switched to a less granular system to try to fix the damage I had done, but I should have just thrown it out altogether.
Given more time I would have really liked to try out the idea of each judge listing a couple must-haves for each category. I think that we would see a little more diversity, and more room for the products we really loved instead of just the products we agreed on.
So, what does this mean for the results?
It means that I made the whole thing a lot harder than it needed to be. It doesn't invalidate the final decisions; it just means that I went the long way around to get there. There's something to be said for voting for the things that have the most universal appeal, but I think the process could also be well served by giving in to the things we're passionate about a bit more. It might not change the results a huge amount, but it might lead to a little more personal satisfaction with the decisions I made on individual products.