Automating these test cases -- along with the rest of the “test pyramid” (technical unit tests, functional integration tests, interface contract tests, user acceptance tests) -- provides an efficient and reliable option to verify that a code change works as intended without breaking anything. Automated tests are a safety net, providing the team with confidence and courage.
Skipping modeling and design completely
Favoring working software over documentation does not mean “skip all modeling and design activities and only write code.” What you are trying to avoid is spending endless hours undertaking the speculative task of creating detailed diagrams and specifications. After all, the only way to know if the model/design is right is to test it by writing code.
But if you have to solve a really hard problem, use any and all means necessary to figure it out. A low-fidelity model/design can be brain-tested on the story’s test cases, and different designs can be explored rapidly. You might even want to time-box this activity based on the story size to start: for example, five minutes for a one-point story to review the basic flow and touchpoints, 15 minutes for a two-point story to see whether there’s hidden complexity, and so on.
Your model/design should speak to the story’s benefits and give you a jump-start on the solution, which should be tested in code. Use your judgment as to how much design, at what fidelity, using what methods, and for how long, for each story; don’t feel like you “can’t” model or design because you’re “doing agile.”
If something is painful, do it more often. The pain will incentivize you to automate it.
Treat machines as cattle, not pets, by automating infrastructure using tools like Ansible, Chef, Puppet, and so on. Make running tests and deploying software automatic or at least push-button triggered. Get the infrastructure out of the way; make it invisible by incorporating it as another part of the code base and/or using self-service platforms like AWS. Cycle time -- the time required to process a code change into a production release -- can be drastically reduced by automation, which allows for faster feedback cycles and, in turn, accelerates learning. Accelerated learning leads to higher-quality software delivered more frequently.
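The "push-button" idea can be sketched as a tiny pipeline script; the stage names and commands here are placeholders, not the API of Ansible, Chef, or any real tool:

```python
# Hypothetical push-button pipeline: run the tests, deploy only on green.
# test_cmd and deploy_cmd are illustrative command lists, not real tooling.
import subprocess

def run_pipeline(test_cmd, deploy_cmd, runner=subprocess.run):
    """Run tests, then deploy only if they pass. Returns the stage reached."""
    if runner(test_cmd).returncode != 0:
        return "tests-failed"      # stop the line: never deploy on red tests
    if runner(deploy_cmd).returncode != 0:
        return "deploy-failed"
    return "deployed"
```

The `runner` parameter exists so the pipeline itself can be unit-tested with a stub; in practice a CI server would play this role, turning every commit into a fast, repeatable cycle.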
Adopting “best practices”
There is no such thing as a universal “best” practice. What works well for one team may not work well for another, even at the same company, even on the same project. Everything we build is a unique snowflake of design and circumstances; every team is a unique combination of personalities, skills, and environment. Read about practices that have been effective for others, and give them a trial run if they seem applicable, but don’t automatically adopt them because some authority says they’re “best.” Someone else’s “best” practices may be your team’s straitjacket.