War Stories from the Static Analysis Trenches
on Oct 13, 2010 in Coding, QA by Andrew Darnell
I have always had a soft spot for automated analysis tools. I fundamentally believe that for a reasonable subset of problems, where the rules can be expressed relatively mechanically, these tools should be able to diagnose a good number of issues that would otherwise take a long time to identify manually.
Note the use of the word ‘diagnose’ here… Very different from automatically fixing.
Understanding the totality of the code, and divining the intention of the programmer, is a large step beyond current research; for a large enough subset of programs it is a fundamentally hard (read: impossible) problem. Still, handling the simple cases automatically frees up more time to focus on the trickier ones.
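To make the idea of a “mechanically expressible rule” concrete, here is a minimal sketch of a toy checker (entirely my own illustration, unrelated to Coverity's tooling) that diagnoses one narrow pattern: comparing against `None` with `==` instead of `is`. Note that it only reports the issue; it makes no attempt to fix it.

```python
import ast

# Illustrative message format; the rule name and wording are made up.
RULE_MSG = "comparison to None should use 'is' (line {line})"

def check_none_comparison(source: str) -> list[str]:
    """Diagnose (not fix) uses of `== None` / `!= None` in Python source."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Compare):
            for op, comparator in zip(node.ops, node.comparators):
                if (isinstance(op, (ast.Eq, ast.NotEq))
                        and isinstance(comparator, ast.Constant)
                        and comparator.value is None):
                    findings.append(RULE_MSG.format(line=node.lineno))
    return findings

if __name__ == "__main__":
    sample = "if result == None:\n    pass\n"
    for finding in check_none_comparison(sample):
        print(finding)
```

The rule is purely syntactic, which is exactly why it is cheap and reliable; divining whether the programmer *meant* `==` here would require the kind of intent-reading that remains out of reach.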
Here is a fascinating article from the trenches of a company working on static analysis tools that talks about some of these very issues and trade-offs.
It identifies a bunch of social engineering issues as well as some interesting technical challenges, and posits that paying attention to the social issues is more important than addressing the technical ones.
I have seen how effective it can be to encourage process improvement with metrics and tool-supplied data that highlight particular areas needing attention.
Two classic quotes…
A misunderstood explanation means the error is ignored or, worse, transmuted into a false positive.
…it’s not uncommon for tool improvement to be viewed as “bad” or at least a problem.
A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World -
How Coverity built a bug-finding tool, and a business, around the unlimited supply of bugs in software systems.