I just authorized it to analyze RSC, which is written in C++ (first link in .sig below). Let's see what it finds. I'm sure there will be some "obvious" things, but I'm more interested in how "deep" it goes.
EDIT: That was fast! It analyzed 745 files in 0.336 seconds and returned 7 types of warnings:
object used after being freed (looks legitimate, but somehow the code must muddle through!)
possible nullptr dereference (2 occurrences in 1 file)
possible memory leaks (109 occurrences in 21 files--I hope most of these are spurious!)
divide by zero (deliberate, to test the ability to handle SIGFPE)
expression will always evaluate to false (19 occurrences in 10 files)
expression will always evaluate to true (11 occurrences in 7 files)
unreachable code (5 occurrences in 4 files)
I need to investigate most of them. It also made two suggestions:
use empty() instead of size() == 0 to check for an empty string (sure)
operator new and operator delete should be implemented in pairs (22 occurrences in 21 files: likely a false alarm)
I just downloaded and installed PVS-Studio. It claims to be integrated with VS and should therefore be easier to use than Coverity, which doesn't seem nearly so straightforward. I'll let you know what happens. I skimmed some of the documentation during the install and saw that it supports MISRA, which will probably result in lots of drool.
Coverity does have a VS integration, but not one that makes VS/Coverity a single integral unit - it is more like a front end, not unlike e.g. the Jira plugin.
A full-depth analysis is similar to running a build on a backend build server (in practice, you would often run it on the build server), storing the analysis results in a central database, usually common to the entire company. A quite extensive web interface to the database lets you classify and triage "defects", assign responsibilities for followup, generate logs and charts etc. This interface cannot be integrated into VS - which makes a sort of sense when you see how complex the management can be. (E.g. for each project in the database, setting up access rights for each role - there are about a dozen - associating users with roles, setting up summary reporting etc. This goes far beyond a development environment. Compare it to Jira: you don't have the full functionality of Jira inside your IDE, either.)
Running the complete analysis and loading the results into the database is so resource consuming that you don't want to do that after every ten edits. There is a lightweight work mode: you can import to your desktop PC a snapshot of the last full analysis, usually made on the same code that you check out from your VCS. You make your edits in VS, and from VS you activate an "incremental" analysis which only considers those lines differing from the Coverity snapshot. Defects are reported similarly to compilation errors, in a VS window pane, with all the standard navigation facilities etc. The report may be quite extensive; it may contain a deep trace back to the root of the defect. You can correct it and repeat the analysis to see if you got rid of the report, without leaving VS. The resource requirements are comparable to lint analysis, i.e. it is so fast that you really don't worry about it.
For better or worse: This is a completely local operation. Nothing is committed to the Coverity database. If you clean up defects introduced through your recent code editing, they leave no trace in the database. They will not appear in any defect counts and will not go through any central triaging. It will just help your subsequent code commit to be "clean"(er). The VS integration provides a quick "between commits" analysis. It is not a standalone option but a supplement to the centralized full analysis, to make both your code commit and the Coverity database cleaner.
It is certainly true that Coverity is not geared towards the hobby programmer. I came to think of an age-old term: "programming-in-the-large" (vs. programming-in-the-small) - it is definitely for "large" programming, where you analyze the ten million code lines of your subsystem in a nightly build. In such a scenario, the lightweight incremental desktop analysis, integrated in VS, is most definitely valuable. For the small business/hobbyist running everything on that desktop PC, the infrastructure is probably too heavy.
I never tried the free, open source, cloud based offering, but suspect that it amounts to Synopsys (the Coverity vendor) running the infrastructure for you: You commit your code to Git and invoke your "nightly Coverity build". After downloading analysis results, you can continue your VS editing, with VS integrated incremental analyses along the way. The infrastructure is still there, but you are not responsible for managing it. Coverity also integrates with a handful of other IDEs/editors as well, like Eclipse and Emacs, but the list is not very long.
I will not claim that Coverity is the "best" at identifying defects - but I certainly would like to see someone set up a thorough test to compare it to the others. What makes me love it is the support it provides to me as a programmer to help me find the real source of the problem. It is as far from "Error 101: Something wrong" as you can possibly get; it takes me by the hand and leads me all the way along the path, pointing out every detail. Another tool that identifies 5% more issues, but just says "Error 345 at line 2164", and that is it, will never win my heart.
Thanks for all the info. I downloaded the version that's free for open-source projects but have to figure out where to install the files and how to build from the command line. I've only done builds using the VS menu and don't know what magic command has to be used. It looks like you have to build first and then give them a link to a compressed file for analysis. I'm certainly not looking for something that constantly runs in the background, nor would I expect it to provide me with a tracking tool.
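For anyone else stuck at the same point: the Coverity Scan workflow is usually to wrap your normal build command in Coverity's capture tool and upload the resulting directory as an archive. A sketch from memory (the solution name here is made up, and the exact flags may differ from the current Scan instructions for your project):

```
# Capture the build with Coverity's build wrapper; it records the
# compilations into the cov-int directory:
cov-build --dir cov-int msbuild MySolution.sln /p:Configuration=Release

# Package the captured output and upload the archive via the Scan web form:
tar czf myproject.tgz cov-int
```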
You might be able to save yourself some time and just look for Google-submitted bugs in MS's GitHub repos. The only reason I can see for them not having called out a rival like that is if they're keeping quiet while their vulnerability team checks which of the bugs it found can be exploited.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
To follow up on my previous post, many of them were false alarms, but understandably so:
object used after being freed: good find!
possible nullptr dereference: looks spurious
possible memory leaks: didn't realize that each relevant constructor assigns the object to an owner
divide by zero: deliberate, to test the ability to handle SIGFPE
expression will always evaluate to false: because there is no override of a virtual framework function
expression will always evaluate to true: because there is no override of a virtual framework function
unreachable code: each one a break after a return
use empty() instead of size() == 0 to check for an empty string: alright
operator new and operator delete should be implemented in pairs: didn't realize that internal memory management frameworks use a header that allows a superclass to return the memory to the correct pool or heap
I love it when Coverity tells me that "if X is negative in function abc, and then the true branch is taken in function def, and then ... then the pointer will be null when you dereference it". I've seen it go to a depth of seven function call levels in my code, and that is probably nowhere close to a record breaker.
Coverity will, however, do "nothing" in 0.336 seconds. It is rather heavy, even more so in its RAM requirements than in CPU. And for commercial use, it is far from free. Yet, lots of people consider it something like a gold standard.
So I'd be very curious to see a shootout between Coverity and DeepCode! (but I wish DeepCode would get C# support before that!)
I have to look into it. If a free version that isn't a toy is available, I'll try it. I'd also found an outfit in Germany with a similar tool that looked to be very good. The reason I developed this[^] was because it was a challenge and I wasn't willing to pay for a commercial version. But I'd really like to see what they would find.
I don't think they provide any toy version; you must have a license key in any case.
For free, open source projects, you may (at least under a given set of circumstances) get a license for free, but they require you to use a cloud service, so that you cannot run away with it. See Coverity Scan[^]
The possible memory leaks (you gave a count of 109 in your first post) are an indication of why I have come to favor automatic garbage collection. You do not mention dangling pointers and double freeing - maybe your code is so disciplined that you don't experience them; they can certainly create nasty bugs.
Many years ago, we had a basic General Object Dispenser, GOD, and we didn't free objects explicitly but sent them home to GOD.
(and before you ask: I created GOD)
divide by zero: deliberate, to test the ability to handle SIGFPE
In my student days, the Computing Center newsletter brought an article about the shocking number of divide by zero faults on the great Univac mainframe - it was something like a million a day. The next issue brought a note from the Mechanical Engineering dept: Some of their matrix operations most certainly did divisions by matrix elements with value zero (i.e. uninitialized values), but the following operations would never use those partial results. Identifying which values were not relevant and skipping the divide for those would be far more complex and time consuming than using a tight loop over all elements and simply accepting the divide faults.
So the essential question is: Can you flag this code in a/the faults database as intentional, so that you won't see it reported the next time around? (I hate cluttering up code with thousands of lint directives! The database approach is far better - but far more complex/expensive.)
expression will always evaluate to true
Although the detail explanation is different: Embedded code in particular is overcrowded with "while (1)" (I tried to introduce "for ever", "ever" being a #define expanding to "(;;)", but the response was certainly negative. Real Programmers write "while (1)"!)
I suspect that code checkers treat "while (1)" as a special case that is not reported. If not, being able to flag it as intentional is an absolute must!
The leaks were all false alarms, though some probably exist elsewhere. Double freeing is a problem, so a pointer must be cleared after being passed to delete. The introduction of unique_ptr and its kin has greatly reduced the risks. If only there was time to retrofit all that legacy code!
I use a form of garbage collection that can run intermittently rather than frequently, which is important when a system is heavily loaded[^].
Thanks for the tests... I might give it a try too.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.