Use a collection of source code weakness analyzers (static analysis tools) to look for vulnerabilities (HP Fortify, Coverity, SWAMP's tool set, etc.). Vulnerability density (# hits/KSLOC) can then hint at the overall quality of the code. This isn't a new idea, of course, but it still seems to be one of the more prominent ones under discussion in venues such as the NIST 2016 forum on security metrics. This is challenging for the census, because so many languages are involved, but it's possible.
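Here's a rough sketch of the density computation (a hedged illustration only; the finding count and SLOC figure below are made up, and real tools disagree about what counts as a "hit"):

```python
# Vulnerability density: findings per thousand source lines of code
# (KSLOC). The example numbers are illustrative, not real scan data.

def vulnerability_density(findings: int, sloc: int) -> float:
    """Return findings per KSLOC."""
    if sloc <= 0:
        raise ValueError("SLOC must be positive")
    return findings / (sloc / 1000.0)

# e.g., 42 findings in 180,000 SLOC -> 0.23 hits/KSLOC
print(f"{vulnerability_density(42, 180_000):.2f} hits/KSLOC")
```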
Use tools to identify where the source code weakness analyzers gave up or are likely to miss things. Sadly, the proprietary tool-makers have some incentive not to reveal where their tools give up, and in any case it's often hard to report (they have to approximate). I don't know of any production-quality tool that really does this; suggestions welcome.
Use tools to examine quality-related issues; these can hint at potential problems, and might also point to areas where the source code weakness analyzers are likely to give up (since they can identify especially complex code). There are, of course, tools that do this.
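As one hedged example of the kind of signal such tools compute, here's a sketch that approximates cyclomatic complexity for Python functions using only the standard library. The branch-node list and the threshold of 10 are illustrative assumptions, not any particular tool's algorithm:

```python
# Rough sketch: approximate cyclomatic complexity of each Python
# function as 1 + the number of branch points, and flag functions
# above an arbitrary threshold. Real metrics tools are more careful
# about which constructs count as branches.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.ExceptHandler, ast.BoolOp)

def approx_complexity(func: ast.FunctionDef) -> int:
    # Counts branch points anywhere in the function body, including
    # nested functions (a simplification).
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(func))

def flag_complex_functions(source: str, threshold: int = 10):
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            score = approx_complexity(node)
            if score >= threshold:
                yield node.name, score

if __name__ == "__main__":
    import sys
    with open(sys.argv[1]) as f:
        for name, score in flag_complex_functions(f.read()):
            print(f"{name}: approx complexity {score}")
```

The point is just that "especially complex" is cheap to estimate, so it can serve as a proxy for where deeper analyzers may struggle.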
Use dynamic analysis tools (e.g., fuzzers). The problem here, of course, is that this is not only compute-intensive but also labor-intensive, since an execution environment and harness have to be set up for each project. I don't think this makes sense for the census at this time.
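To make that setup cost concrete, here's a minimal sketch of a single fuzz harness using Google's atheris fuzzer for Python. `parse_record` is a hypothetical stand-in for project code; a harness like this (plus a working execution environment) would be needed per project, which is the labor cost noted above:

```python
import sys
import atheris

@atheris.instrument_func
def parse_record(data: bytes) -> None:
    # Deliberately buggy stand-in for project code under test.
    if data.startswith(b"REC:") and b"\xff" in data:
        raise ValueError("parser confused by 0xFF byte")

def TestOneInput(data: bytes) -> None:
    parse_record(data)  # uncaught exceptions are reported as findings

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```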
There's an interesting list of ways to get OSS project metadata in the discussion about what OMB should ask for.
Other approaches: