Getting feedback during play testing is central to board game development. I’ve found that the most useful feedback from play testers comes right after I’ve played my budding new game myself for a while.
Once I’ve got sufficient feedback, I’ll move on to graphic design, which in my case means hiring someone to do it. But how do I know when I’ve had enough feedback, or when the feedback is no longer contributing to making the board game better?
Play testing and feedback these days take place on Tabletop Simulator (TTS), with Discord as the voice channel. Feedback for the games I’m play testing generally comes as verbal comments and suggestions. Some other designers I’ve played with request the completion of an online survey. As long as there aren’t too many questions and it’s brief, say 45 seconds, that’s fine.
Building Trust
The first time I received feedback from two members of the Vancouver Play test Group was after I had been participating for a couple of months and had played some of their games. I noticed something interesting when they were giving feedback.
The quality of their feedback changed starkly during the fifteen minutes or so that they spent talking about my early game prototype. At first they were quite cautious and gave tempered comments, the kind most people wouldn’t even consider criticism: something along the lines of “This is a good start” or “It seems you put a lot of time into this.” I simply responded with an enthusiastic “Thank you,” and asked for more.
At one point, I realized that they were being quite cautious and reserved, so I said, “There’s really nothing you could say to hurt my feelings. I’m open to any and all feedback.” I actually said some variation of that two or three times, and then the conversation became more productive.
Still cautious, but less reserved, they tested the waters. I now heard things like, “I don’t understand what my role is here,” or “These symbols don’t make sense to me.” I took this feedback as a good sign. I found it useful and wanted to encourage more.
I told them that I agreed with their suggestions and could see what they were referring to. I reiterated their comments and suggestions to confirm I had heard them correctly. This listening technique proved quite useful: at least once I reiterated what I thought I had heard, only to realize from their response that I had not heard correctly. I go more deeply into listening skills when I outline various forms of consensus as a group process.
They heard me reaffirming their perspectives on my first attempt at a board game that involved someone other than me. They became even bolder, and I learned and grew more. I now heard comments like, “You really need a central focus.” My responses to their feedback built the trust necessary for them to offer more generous feedback.
Willing and Able
Once they realized I was not going to be defensive and was willing and able to really hear what they were saying, their suggestions became deeper and more profound. The two keys here are “willing” to hear feedback, and “able” to hear feedback.
“Willing” to hear feedback means having a mind, a thought process, that’s open to other perspectives. That amenable mindset looks like asking for help. I remember a time in my early twenties when I asked someone for feedback on my resume. That someone was my boss, and it was my not-at-all-subtle way of saying I was moving on. But he did give feedback. I reached for that old story because it leads nicely into the second feature: being “able” to hear.
“Able” to hear includes being able to set aside one’s ego and see the situation from the other person’s perspective. It’s related to empathy, but from a vulnerable position. In my immature two-week-notice example, I wasn’t “able” to hear feedback: I kept defending each of my written choices. That defensive response contradicts being “able” to receive feedback.
Diminishing Returns
After many more play test sessions, I started to get feedback of a different sort. Instead of comments about missing elements or a player’s experience, I started to hear suggestions about changing this or that symbol. Comments about graphic design, even though I had stated that the art was all placeholder art.
There would be suggestions about how “it could be different,” or “where I could take this idea.” Basically, the comments and suggestions were no longer about the game itself, but about the potential for a “spinoff.” At that point I knew it was time to take play testing to the next level.
Instead of play testing with game developers, I’ll now move on to play testing with game players, and then eventually with anyone (if the target market is a group distinct from people who ordinarily play games for fun).
The concept of “can it be better” is quite elusive. Is it worth the price, in time and effort, to make a board game incrementally better? Is the opportunity cost of a delayed release worth the value of a slightly better game?
How I Know
So, I look for two key markers to decide whether a board game has had enough play testing and is ready to move on to the next step:
- the feedback is no longer about the game we’re testing, but instead about the potential to make other games
- the development moves from addressing deficiencies, red flags, and problems to perpetually trying to make the game better (without a clear idea of what that “better” looks like)
How do you know when you’ve done enough play testing for your board game design?