In part II of our interview with SOASTA's Dan Bartow, we get his thoughts on why load testing often gets neglected, advice on test tool selection, the challenge of load testing for mobile apps, SOASTA's plans for 2011, a fight between flying sharks and flying crocodiles, and more. If you missed our previous segment, you can read Part I here. Enjoy!
uTest: True or false: Load and performance testing is often one of the most neglected phases of software quality. Please explain why this is (or is not) the case.
DB: True! The time allotted for QA in the software development lifecycle has always been the first thing to get squeezed when a project falls behind. Traditional methodologies such as Waterfall essentially go from requirements to development and finally to QA at the tail end, so when development activities run late, the only thing left to cut while still delivering the product on time is the QA cycle. Performance still isn't part of many project plans today (that's almost a separate topic in itself), but when it is, it usually gets only a slice of the QA time, which is already too short in most cases.
Now we live in an agile development world, and while agile functional QA is catching up, we still don't have agile performance testing as an industry standard. The reason is that the dominant product in lab performance testing, HP LoadRunner, requires you to write code for your performance tests that is more complex than the actual web application code you're testing. If you have to write your test cases in C, and it takes two weeks to script an end-to-end scenario against a finished web app, then you have dead weight in your dev lifecycle. As a result of these weaknesses, companies have lost confidence in the value of performance testing their apps. The way to restore that confidence is with a modern testing tool and a modern approach to testing.
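For readers who haven't seen one, here is a minimal sketch of the kind of C vuser script LoadRunner expects for even a simple two-step scenario. The URL, form fields, and transaction names are placeholders, and a real end-to-end script would run far longer than this:

```c
/* Minimal LoadRunner-style vuser Action in C.
   example.com, the credentials, and the transaction names are
   placeholders; a real end-to-end script is much longer. */

Action()
{
    /* Time the home page load as a named transaction */
    lr_start_transaction("load_home_page");

    web_url("home",
        "URL=http://www.example.com/",
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        LAST);

    lr_end_transaction("load_home_page", LR_AUTO);

    /* Simulate a user pausing before the next step */
    lr_think_time(5);

    /* Submit a login form as a second transaction */
    lr_start_transaction("login");

    web_submit_data("login",
        "Action=http://www.example.com/login",
        "Method=POST",
        "Mode=HTML",
        ITEMDATA,
        "Name=username", "Value=test_user", ENDITEM,
        "Name=password", "Value=secret", ENDITEM,
        LAST);

    lr_end_transaction("login", LR_AUTO);

    return 0;
}
```

Multiply that by every page, form, and correlation rule in a full user journey and it's easy to see how scripting one scenario stretches into weeks.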
uTest: How important is tool selection when it comes to load and performance testing? Are testing failures a result of this or something else, like personnel?
DB: Tool selection is very important to overall success, although testing failures can stem from people, processes, and/or technology. You need the right tools for the job and the right people to use them, within a process that's set up for success. Just as QA isn't a one-size-fits-all shoe, neither is performance testing. Personally, though, I think most testing failures are a leadership and execution problem, not a problem with the tools or processes being used. Quality comes from the top down. The companies delivering the highest-quality offerings are the ones that build quality in from the CEO all the way through the company. Probably every tester reading this knows what it's like to be a QA Engineer at a company that doesn't seem to actually care about quality. How ironic! I said tool selection was very important, but I don't really want to focus on that here because tools are just tools. Before you worry about whether you have the right tools, spend time making sure you have the right attitudes and the right players on your team. Once you have a good team that's pushing the capabilities of your toolset, you've got a foundation for success and can start driving higher.
uTest: How does the expansion of mobile apps and devices impact load testing? Is this a game-changer? Or something current load testing is well suited for?