Quality Assurance is a Process, Not a Department

File this one under the “how-the-hell-have-we-not-blogged-about-this-yet” category. A few weeks ago, BATS Global Markets Inc. made its IPO debut. Seconds after trading began, a software bug “disrupted trading” of the stock (i.e., it plunged from $16 to under a penny), prompting the company to cancel its offering. You can read more about it here.

Anyway, the incident spurred Forbes writer Ken Goldstein to re-examine the meaning of quality assurance. There’s a lot of gold in this article, but here are a few of my favorite nuggets (emphasis added):

There is no argument that we live in a world of staggering speed, where competitors race to meet customer needs and time to market matters. Innovation is always factored by the ticking clock, who gets the jump and the competitive advantage, when a cost center becomes a profit center. Information compounds on our desktops, the team with analysis paralysis most often loses to the nimble risk takers–but all this means is that in product development, the role of Quality Assurance (QA) has never been more critical.

Notice that he does not say the QA department has never been more critical. He says the role of QA. This is an important distinction which he goes on to clarify:

Here is the way I like to think about quality in product development: Quality Assurance is a Process, not a Department….

…Of course every great development company will have a final step in the process called Quality Control or Quality Assurance, but it is my sense that the QA formal group is there to be the standard-bearer for Quality and rally the company around it, putting a final go or no-go procedure in place before the world gets its hands on a product, but not accepting proxy status for an otherwise poor process. A QA department is not a dumping ground, not a remote server where code is parked as a step function or convenient checkpoint in a perfunctory release approval, not a cynical target of blame. QA is the proxy for the customer, not management, and as such must have a voice that is shared throughout a company. If a Decision Maker chooses not to listen to either the process or a warning from fully objective and independent QA stewards, you get what you get.

Continue Reading

This Will Only Take a Second: United Nations Debates Time Change

In the software business, it’s all about precision, as even the slightest coding mistake can lead to catastrophic failure. This lesson is clearly not lost on the folks over at the United Nations telecommunications agency, who are meeting as we speak to decide whether to abolish the leap second. That’s right, the leap second.

The Sydney Morning Herald explains how this relates to software testing:

Unlike the better-known leap year, which adds a day to February in a familiar four-year cycle, the leap second is tacked on once every few years to synchronise atomic clocks – the world’s scientific timekeepers – with Earth’s rotational cycle, which, sadly, does not run quite like clockwork. The next one is scheduled for June 30 (do not bother to adjust your watch).

The United States is the primary proponent for doing away with the leap second, arguing that these sporadic adjustments, if botched or overlooked, could lead to major foul-ups if electronic systems that depend on the precise time – including computer and cellphone networks, air traffic control and financial trading markets – do not agree on the time.

Abolishing the leap second “removes one potential source of catastrophic failure for the world’s computer networks,” said Geoff Chester, a spokesman for the US Naval Observatory, America’s primary timekeeper. “That one second becomes a problem if you don’t take it into account.”
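To make the failure mode concrete: UTC occasionally contains a 61st second, written 23:59:60, and plenty of software flatly refuses to believe it exists. Here’s a minimal Python sketch (the clamping workaround at the end is just one illustrative strategy, not a recommendation):

```python
from datetime import datetime, timedelta

# A leap-second timestamp like the one scheduled for June 30.
leap_stamp = "2012-06-30T23:59:60Z"

try:
    # Python's datetime has no concept of second 60, so a naive parse
    # of this perfectly legal UTC timestamp blows up.
    datetime.strptime(leap_stamp, "%Y-%m-%dT%H:%M:%SZ")
except ValueError as err:
    print(f"naive parser choked on the leap second: {err}")

def parse_utc(stamp: str) -> datetime:
    """Parse a UTC timestamp, folding second :60 into the next minute."""
    if stamp.endswith(":60Z"):
        # Clamp :60 back to :59, then add back the second we removed.
        clamped = stamp[:-4] + ":59Z"
        return datetime.strptime(clamped, "%Y-%m-%dT%H:%M:%SZ") + timedelta(seconds=1)
    return datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ")

print(parse_utc(leap_stamp))  # 2012-07-01 00:00:00
```

Every system that timestamps events, from trading engines to cell networks, has to pick some such strategy, and the systems that pick none at all are the “catastrophic failure” Chester is worried about.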

By now, you’re probably wondering what the “debate” is all about. Is anyone voting in favor of catastrophic failure? On the other hand, how can a unit of time be abolished, even if it’s only a second? The story continues:

Continue Reading

Missile-Firing Predator Drones + Virus = Bad News

We recently wrote about the need for security testing on medical equipment, but it looks like an even larger virus threat has come to light – on U.S. Predator and Reaper drone weapons systems.

While an unofficial source said they suspect the virus is benign, they also added, “But we just don’t know.” The thought of an attack drone being hacked is chilling, to say the least. Jalopnik has a nice write-up of some of the drones’ historic missions (and the virus), but this seems to reinforce the hypothesis that the United States is entering a “Code War”.

Here’s the crux:

The virus, first detected nearly two weeks ago by the military’s Host-Based Security System, has not prevented pilots at Creech Air Force Base in Nevada from flying their missions overseas. Nor have there been any confirmed incidents of classified information being lost or sent to an outside source. But the virus has resisted multiple efforts to remove it from Creech’s computers, network security specialists say. And the infection underscores the ongoing security risks in what has become the U.S. military’s most important weapons system.

“We keep wiping it off, and it keeps coming back,” says a source familiar with the network infection, one of three that told Danger Room about the virus. “We think it’s benign. But we just don’t know.”

For those interested, we have a new whitepaper on Software Security Testing.

Code of Silence: When To Keep Your Software Bugs a Secret

Chances are, if your software bugs make it onto the front page of The Consumerist, then you’ve probably done some highly questionable things — either in product design or execution. Last week it was Apple (i.e., Location-Gate). This week it’s USAA Bank. Here’s a brief, firsthand summary of the latter company’s unresolved issues:

Apparently their website is full of bugs: it shows inaccurate ledger balances, shows debits as credits and vice versa, and apparently just makes up balances. I was informed that they are aware of the bug and have been for several months, and that they have to fix all affected accounts manually. Worse, they have chosen not to notify anybody of the problem, and they have chosen to continue using their new money manager software rather than rolling back to their previous software until this bug is fixed.

The representative told me that they expect to have these issues fixed by May 20th, and that the problem could be affecting anybody west of the Central time zone. If USAA members are having trouble with their accounts, they will need to call and have their accounts added to IT ticket # 4347297 to have them fixed.
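A sign flip like “debits shown as credits” is exactly the kind of defect a basic invariant test catches long before customers do. Here’s a minimal sketch in Python, with invented account and transaction types standing in for the real thing:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float  # positive dollar amount (real banking code would use Decimal)
    kind: str      # "debit" or "credit"

def apply_transaction(balance: float, tx: Transaction) -> float:
    # The code under test: a sign flip here is precisely the USAA-style bug.
    if tx.kind == "debit":
        return balance - tx.amount
    return balance + tx.amount

def test_debits_reduce_balance():
    assert apply_transaction(100.00, Transaction(25.00, "debit")) < 100.00, \
        "a debit must reduce the ledger balance"

def test_credits_increase_balance():
    assert apply_transaction(100.00, Transaction(25.00, "credit")) > 100.00, \
        "a credit must increase the ledger balance"

test_debits_reduce_balance()
test_credits_increase_balance()
print("ledger invariants hold")
```

Two one-line assertions, and the “debits as credits” bug never ships.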

Should USAA Bank have notified its customers of a bug that renders their application useless? The answer here should be self-evident to all our faithful readers. But in other circumstances, the answer is not so cut and dried. With that in mind, here are three instances when you should keep your software bugs a secret.

Continue Reading

It’s a Software Crisis!

At least that’s what they were saying in the late 1960s. By chance, I stumbled across a brief Wikipedia entry today on that precise term (“software crisis”) and suddenly realized that very little has changed in 50-odd years. Take a look at this excerpt and see what I mean:

Software crisis was a term used in the early days of computing science. The term was used to describe the impact of rapid increases in computer power and the complexity of the problems which could be tackled. In essence, it refers to the difficulty of writing correct, understandable, and verifiable computer programs. The roots of the software crisis are complexity, expectations, and change.

The term “software crisis” was coined by F. L. Bauer at the first NATO Software Engineering Conference in 1968 at Garmisch, Germany.[1] An early use of the term is in Edsger Dijkstra’s 1972 ACM Turing Award Lecture[2]:

The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem. – Edsger Dijkstra, The Humble Programmer (EWD340), Communications of the ACM

One wonders what Edsger would have thought about the explosion of smartphones, which was just getting started when he passed away in 2002. In any event, the entry goes on to summarize the various forms in which the software crisis “manifested” itself, including:

Continue Reading

The Software Testing Quote of the Year

“As soon as they get the bugs out, it’s going to work a lot better.”

Brilliant! This software testing gem comes from Kim McMillan, an engineer for the city of Tigard, Oregon. Earlier this month, the state’s Department of Transportation launched a new software “experiment” to reduce traffic congestion on the South Pacific Highway.

Here was the idea: a software application would collect data from 21 traffic lights along the route. Once compiled, that information would be used to adjust the signals based on actual traffic flow, rather than fixed timers. Great idea, poor execution. OregonLive.com explains:

One problem: A bug in testing the software has caused a few lights to malfunction and switch to “flash mode” — that’s when the signal simply flashes a red or yellow light, stacking up traffic behind it. The state transportation department has asked drivers to call if they see a light stuck in flash mode.

Transportation officials expect to finish the software update and testing by the end of May.

If they don’t have a mobile phone handy, I suppose the alternative would be to just floor it and hope for the best?

In any event, this highlights the benefit of testing in a staging environment, as opposed to a live site. Wouldn’t you say?
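For the curious, here is roughly what “signals based on traffic flow, as opposed to fixed timers” means in code. This is a toy sketch with invented names and numbers, not ODOT’s actual algorithm:

```python
FIXED_GREEN_SECONDS = 30  # fixed timer: same green time no matter the demand

def adaptive_green_seconds(queue_lengths: dict[str, int],
                           cycle_seconds: int = 120,
                           min_green: int = 10) -> dict[str, int]:
    """Split one signal cycle among approaches in proportion to queued cars."""
    total = sum(queue_lengths.values())
    if total == 0:
        # No demand data: fall back to an even, fixed-timer-like split.
        share = cycle_seconds // len(queue_lengths)
        return {approach: share for approach in queue_lengths}
    return {
        approach: max(min_green, round(cycle_seconds * count / total))
        for approach, count in queue_lengths.items()
    }

# Sensors report 40 cars queued on the highway, 8 on the cross street.
print(adaptive_green_seconds({"highway": 40, "cross_street": 8}))
# {'highway': 100, 'cross_street': 20}
```

Get the data collection or the fallback logic wrong, though, and you land in “flash mode”, which is precisely why you rehearse this in staging first.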

Should There Be a Tax On Software Bugs?

Tax season is underway here in The States, as people square away their expenses, deductions and exemptions from the past year. Nothing new about that. But if a popular tech author had his way, software publishers would be in for a big surprise come April 15th: a tax on software bugs.

David Rice makes the case for such a policy in his book Geekonomics. Rice, a former vulnerability analyst for the NSA and a cryptologic officer in the US Navy, has been in the headlines of late amid rumors that he is Apple’s new Director of Global Security.

So what’s his reasoning for a tax on software bugs? Here’s ComputerWorld with the premise:

Ideas Rice has floated include the notion of a Pigovian tax designed to correct the current “broken” market outcome in the software industry. That is to say, end users pay the price for shoddy software through attacks, bolted-on security solutions, and the never-ending patching process. If security-related vulnerabilities were somehow taxed, the cost burden would shift from the consumer of software to the software manufacturer. That’s the idea; however, many industry experts don’t think it would work.

“It’s a horrible idea,” says John Pescatore, an analyst at the research firm Gartner. “It’s as silly as the senator who proposed making buffer overflows illegal years ago,” he says.

“Basically, market forces are already at work. Look at the market share of IIS and Internet Explorer today compared to years ago. Every company has the ability to choose a software provider and to highly weight lack of vulnerabilities or patch histories or whatever,” Pescatore says.

Another idea mentioned in Rice’s book is the notion of liability and tort reform to make it easier to sue software makers for the damages created by faulty software. The idea, again, is to shift the costs of damages caused by shoddy software to the manufacturer.
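For a sense of the economics, here is a toy back-of-the-envelope model of the cost shift Rice is describing. Every number below is invented purely for illustration:

```python
SHIPPED_VULNS = 50            # vulnerabilities shipped in a release (hypothetical)
USER_COST_PER_VULN = 10_000   # patching, bolted-on security, cleanup ($, hypothetical)
TAX_PER_VULN = 8_000          # hypothetical per-vulnerability Pigovian tax ($)

# Status quo: users absorb the whole cost, the vendor pays nothing.
user_cost_now = SHIPPED_VULNS * USER_COST_PER_VULN      # $500,000
vendor_cost_now = 0

# Under the tax: every vulnerability caught in QA before release
# saves the vendor TAX_PER_VULN, so QA becomes a direct cost saver.
vendor_cost_taxed = SHIPPED_VULNS * TAX_PER_VULN        # $400,000

print(f"users bear today:       ${user_cost_now:,}")
print(f"vendor bears today:     ${vendor_cost_now:,}")
print(f"vendor bears under tax: ${vendor_cost_taxed:,}")
```

Whether any legislature could set that per-bug rate sensibly is, of course, exactly what Pescatore is objecting to.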

Continue Reading

It’s 5:00 PM on Friday: Do You Know Where Your Software Bugs Are?

If you work on one of those QA or Dev teams that have weekends off (rare, I know), bug tracking can be exceedingly difficult. With two days away from the office, it’s especially easy for low-priority bugs to get lost in the mix. Despite some remarkable advances in bug-tracking software, many companies still lose track of their software bugs. Does this sound familiar?

Of course, that’s just one of the many problems with bug-tracking software. Here’s a nice 3-point summary of the situation from the UserMetrix blog, in a post aptly titled The Problems With Bug Tracking:

Triaging becomes increasingly difficult. With bug tracking alone, it is a bit of a black art to determine the severity of an issue or the importance of a new feature. Particularly when extremely vocal users are involved – of course their problem is of utmost importance to them, but is it representative of a large body of users suffering in silence? Without collecting data and making some measurements, how can you really be sure that you have correctly triaged an issue?

How can you encourage people to send feedback? Let’s imagine you are one of the users of your product. You are hammering the software like crazy – desperately trying to meet a deadline. Then – bam. The application implodes. Right at the same time, your boss rolls in – “Hey Milton, are you done yet?” Arrrggggghhh! The people who built that particular application are not Milton’s favorite people at this point in time. In fact, Milton blames them for missing his deadline. Never mind that Milton has just spent the last two weeks browsing reddit. Do you really think Milton is in the right frame of mind to help you out by sending in a bug report? Are you really going to get the information you need out of Milton to make your software better?
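The fix the UserMetrix folks are hinting at is measurement. Here’s a minimal sketch, with invented fields and weights, of what triage can look like once usage data flows in alongside the bug reports:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    affected_users: int  # distinct users who hit this, per usage telemetry
    crash: bool          # did it take the application down?
    reports_filed: int   # manually filed reports (the "vocal user" signal)

def triage_score(issue: Issue) -> float:
    # Weight measured breadth and severity far above report volume.
    return issue.affected_users * (3.0 if issue.crash else 1.0) + issue.reports_filed

backlog = [
    Issue("Export crashes on large files", affected_users=1200, crash=True, reports_filed=2),
    Issue("Toolbar icon misaligned", affected_users=40, crash=False, reports_filed=25),
]

for issue in sorted(backlog, key=triage_score, reverse=True):
    print(f"{triage_score(issue):>8.1f}  {issue.title}")
```

The crash silently hitting 1,200 users outranks the cosmetic bug with 25 angry reports, which is the whole point: the Miltons of the world rarely file tickets.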

Continue Reading

How Do You Defy The Final Wishes Of Half A Million British Organ Donors?

No, that’s not the start of a riddle or a joke. In fact, it’s the all-too-serious ramification of a 12-year-old software glitch afflicting Britain’s Driver and Vehicle Licensing Agency (DVLA) and affecting the final wishes of countless families.

It seems that a software bug dating back to 1999 has incorrectly recorded the donor preferences of 444,031 people. And while the circumstances surrounding this glitch are serious, the episode underscores an important point: in modern society, even things that we don’t associate with technology are supported and driven by software, and are thus susceptible to defects. Organizations, whether private companies or public agencies, have a responsibility to thoroughly test their software and secure their data.

The cost of betraying that responsibility is the lost trust of that organization’s user base. And that trust, once lost, is difficult to win back. A bit more about this story after the jump:

Continue Reading