If you haven’t read Jeff Atwood’s post on Boyd’s Law of Iteration, you should. The gist is, speed of iteration beats quality of iteration. We can sometimes have a very slow iteration time on the engineering team here, and our tools suffer. On the other hand, I think our design team has done a bang-up job getting feedback and iteration with internal and external playtesters, and you can tell because our game is fun as hell.
Iteration is king. But more than that, you need to really explore. Sid Meier has a design rule called “Double it or cut it by half”:
…the probability of success is often directly related to the number of times a team can turn the crank on the loop of developing an idea, play-testing the results, and then adjusting based on feedback. As the number of times a team can go through this cycle is finite, developers should not waste time with small changes. Instead, when making gameplay adjustments, developers should aim for significant changes that will provoke a tangible response.
If a unit seems too weak, don’t lower its cost by 5%; instead, double its strength. If players feel overwhelmed by too many upgrades, try removing half of them… The point is not that the new values are likely to be correct – the goal is to stake out more design territory with each successive iteration.
Imagine the design space of a new game to be an undiscovered world. The designers may have a vague notion of what exists beyond the horizon, but without experimentation and testing, these assumptions remain purely theoretical. Thus, each radical change opens up a new piece of land for the team to consider before settling down for the final product.
-From Soren Johnson’s blog post, originally read in Game Developer Magazine
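As a toy sketch of that rule (the function and feedback labels here are hypothetical, not from Meier or Johnson): instead of nudging a value by a few percent, each playtest iteration makes a drastic change, so the feedback you get back is unambiguous.

```python
# Toy illustration of "double it or cut it by half": every tuning pass
# moves the value by a factor of two, never by a small percentage.
def next_guess(value, feedback):
    """feedback is a playtest verdict: 'too_weak', 'too_strong', or 'ok'."""
    if feedback == "too_weak":
        return value * 2   # stake out territory on the high side
    if feedback == "too_strong":
        return value / 2   # ...or on the low side
    return value           # feels right: leave it alone

# Tuning a unit's strength across successive playtests:
strength = 10
strength = next_guess(strength, "too_weak")    # jump to 20
strength = next_guess(strength, "too_strong")  # back to 10.0
```

The point, as the quote says, is not that 20 is correct; it is that one playtest at 20 tells you far more than three playtests at 10.5.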
Especially in tools, we often don’t realize our iterations are just polish on something that doesn’t work in the first place. You need to be able to make big, sweeping changes, especially early on, when creating or refactoring a tool. The only way to do this is to make sure your data model is solid and complete, so you can rewrite the entire tool in two weeks and break as little as possible. This is the most effective way to discover new workflows and make those really big, order-of-magnitude improvements.
But much harder is being allowed to make those changes in the first place, whether because the data model is too brittle or because management is averse to risk. The only way to get there, then, is to take risks but always deliver on your word. Eventually you’ll earn enough trust to be allowed to make those sweeping changes, and enough skill to do them well enough that you can fix that broken data model at the same time.