Steve54 wrote:
Personally I think batreps are the most important final stage in development and list tweaking after they have been properly set up and guided by reasoned discussion and an element of maths. Maths as the be all and end all is relatively worthless IMO as there are so many factors that a straight spreadsheet/graph doesn't take into account.
mordoten wrote:
So testing out ideas in actual games is not a part of any game development process? I'm pretty sure games are developed by actually trying them out, not only by comparing spreadsheets and discussing theories. The "feel" for a unit is as important as its statistical value in a mathematical equation.
Or else there would be no beta-test of games being done before releasing them.
So I would say battle reports play an important role in changing stats around, because they give the players the chance to actually experience the ideas being thrown around, and they let others who aren't participating read how it went.
I don't really agree with this, and I'll explain why.
Play testing is great to get a sense of the "feel" of a formation or rule if you've never tried it before. It's also very important in game development as a way to try to "break" the game by testing for edge cases and exploits. Play testing should be done systematically by a large group of people to try to get some kind of useful data. There is definitely a place for play testing. You can't write rules in a vacuum or based on math. They have to be enjoyable when they hit the table and that means people need to try them.
However, what we're doing here isn't professional or scientific; it's play testing for fun. We're doing a handful of battles for each proposed change, the people playing vary greatly in skill level, and they're voluntarily testing these changes because they probably already like or hate them. The performance of the formations in question could swing all over the place based on dice and poor decisions. If the change is small (and almost everything being proposed is a small change), then the formation will probably not "feel" any different. Or else it will feel however the play tester wants it to feel.
For small changes like we're making, discussion and list building and math are usually more critical than games for evaluating the impact of what you're changing. If you change a weapon's strength or a unit's armor by a pip, or change the cost by 25 points, how does it compare mathematically to similar units? We can remove the variability of dice and see whether we've accidentally created a monster or a dud. Does our points cost make list building too restrictive? Does it open it up to abuse? We can sit and make a dozen army lists and see whether anything looks off before any models hit the table. And from experience, just by looking at it and talking it over: did we make a mistake somewhere? Many sets of eyes can help find something we missed when we proposed the change.
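To make the "remove the variability of dice" point concrete, here is a minimal sketch of the kind of expected-value comparison I mean. All the numbers (shots, to-hit, saves, costs) are made up for illustration and aren't taken from any real army list; the point is only that a one-pip change can be compared on paper before anyone rolls a die.

```python
def expected_hits(shots: int, to_hit: int, save: int) -> float:
    """Expected unsaved hits on a D6 system: hit on to_hit+, target saves on save+."""
    p_hit = (7 - to_hit) / 6          # chance each shot hits
    p_fail_save = 1 - (7 - save) / 6  # chance the target fails its save
    return shots * p_hit * p_fail_save

# Baseline unit vs. the same unit with a one-pip to-hit improvement.
base = expected_hits(shots=4, to_hit=5, save=5)   # 4 * (2/6) * (4/6) ~= 0.889
tweak = expected_hits(shots=4, to_hit=4, save=5)  # 4 * (3/6) * (4/6) ~= 1.333

# Fold in a hypothetical 25-point cost increase to compare per-point efficiency.
base_cost, tweak_cost = 250, 275
print(f"base:  {base:.3f} hits, {100 * base / base_cost:.3f} hits per 100 pts")
print(f"tweak: {tweak:.3f} hits, {100 * tweak / tweak_cost:.3f} hits per 100 pts")
```

A table like this for every comparable formation takes minutes to build in a spreadsheet, and it immediately shows whether the pip change outpaces the points change, with no dice luck involved.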
Play testing the way we do it here is still a good thing. It has merit, especially when you want to test a radical change or when you suspect you've created a loophole that can be exploited. But what we're talking about here is more like a popularity contest where the barrier to entry is writing a little bit about a game you played using the formation. That is a good thing on its own: it shows the community is invested and likes the change. Consider it icing on the cake once consensus has basically been reached on your proposed rules. But it isn't performance data.