Google Test Director James Whittaker recently concluded his fantastic “How Google Tests Software” series. We covered Part I a few weeks back, but I wanted to redirect your attention to his latest post, which deals with the size and scope of their various projects.
Despite the perception that Google’s testing is highly complex and indecipherable to us mere mortals, we find the opposite to be true. As Whittaker explains, Google classifies its tests by “emphasizing scope over form.” Plainly stated, testing comes in three sizes at Google: small, medium and large. Here’s his explanation:
Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run, but TEs often pick these tests up when they are trying to diagnose a particular failure. For small tests the focus is on typical functional issues such as data corruption, error conditions and off-by-one errors. The question a small test attempts to answer is: does this code do what it is supposed to do?
Medium Tests can be automated or manual and involve two or more features and specifically cover the interaction between those features. I’ve heard any number of SETs describe this as “testing a function and its nearest neighbors.” SETs drive the development of these tests early in the product cycle as individual features are completed, and SWEs are heavily involved in writing, debugging and maintaining the actual tests. If a test fails or breaks, the developer takes care of it autonomously. Later in the development cycle TEs may perform medium tests either manually (in the event the test is difficult or prohibitively expensive to automate) or with automation. The question a medium test attempts to answer is: does a set of near-neighbor functions interoperate with each other the way they are supposed to?
Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features, but large tests tend to be more results driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests and everything from automation to exploratory testing can be the vehicle to accomplish it. The question a large test attempts to answer is: does the product operate the way a user would expect?
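To make the small/medium distinction concrete, here’s a minimal sketch using Python’s built-in unittest module. The functions and test names are purely illustrative (not from any Google codebase): a small test exercises one function in isolation, including its error and edge cases, while a medium test covers the interaction between a function and its “nearest neighbor.”

```python
import unittest

# Two hypothetical "near neighbor" functions, for illustration only.
def parse_price(text):
    """Parse a price string like '$1,250' into an integer number of dollars."""
    return int(text.strip().lstrip("$").replace(",", ""))

def apply_discount(price, percent):
    """Return the price after a percentage discount, rounded down."""
    return price * (100 - percent) // 100

class SmallTest(unittest.TestCase):
    """Small test: exercises a single function, including error conditions."""

    def test_parses_formatted_price(self):
        self.assertEqual(parse_price("$1,250"), 1250)

    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_price("not a price")

class MediumTest(unittest.TestCase):
    """Medium test: covers the interaction between the two neighbors."""

    def test_discount_applied_to_parsed_price(self):
        self.assertEqual(apply_discount(parse_price("$200"), 25), 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

A large test wouldn’t fit in a snippet like this: it would drive the product end to end the way a user would, often mixing automation with exploratory testing.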
Simple enough, yes? So if you’re in charge of testing at a growing company – and wish to follow in the footsteps of giants – you would be wise to read the entire series, starting here.
As a supplement to this series, here’s Whittaker’s last uTest appearance – a webinar titled “More Bang For Your Testing Buck” (after the jump). Enjoy!