
How to Evaluate Open Source Projects?

Article Source: Amanda McPherson's Blog
February 18, 2009, 12:27 pm


If you're in the open source world, you probably don't need a lot of convincing about the high-quality software that results from the open source development model. Mass collaboration coupled with vociferous peer review makes for better code and products. It just does. No matter how much of a monopoly might exist today, this collaboration cannot be duplicated within the proprietary software model.

But there remain companies and organizations that still need convincing. Not because open source software holds any secrets (in fact, just the opposite is true, given its transparency), but because adoption of new technologies is a process, not a destination. It will always be that way, and that is a good thing for all of us. Peer review. Code scrutiny. These will continue to make all software better.

To this end, tools that help other developers utilize open source programs are extremely important.

Today, Coverity is releasing application architecture diagrams from over 2,500 open source projects, showing the key components that make up a given software project. This visual presentation of an application's architecture and related data provides a fascinating and detailed portrait of the software analyzed and can be a great tool in evaluating what the software can do. Today's release from Coverity exemplifies what transparency in software development can produce.

As an aside, this announcement only makes me wish that we could provide similar analysis to our government legislation. There is a strong push to bring the same transparency and participation ethos of the open source world to government. Let's hope in a few years I can write about a similar project being applied to our federal, state and local bills.

Coverity's SCAN, the software behind this big release of data, was originally a part of the Department of Homeland Security's Open Source Hardening Project. The data provides a clear map for navigating the inner workings of an OSS project, as well as a clear path to developing similar functionality.

Back in 2006, Jon Corbet of LWN.net reported on Coverity's initial defect survey results using an early version of SCAN. The company claimed: "The LAMP stack (Linux, Apache, MySQL, and Perl/PHP/Python) showed significantly better software quality above the baseline, with an average of 0.290 defects per thousand lines of code compared to an average of 0.434 for the 32 open source software projects analyzed." Corbet noted that some of the results didn't immediately square with the number of security advisories released, and comments pointed out the unclear nature of the definition of a "defect."
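For readers unfamiliar with the metric, the figure being compared here is defect density: reported defects divided by thousands of lines of code (KLOC). Below is a minimal sketch of that arithmetic in Python; the project names and defect counts are hypothetical, chosen only so the resulting densities match the two averages quoted above. Only the 0.290 and 0.434 figures come from Coverity's announcement.

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Hypothetical projects and counts, for illustration only.
# 580 defects in 2M lines -> 0.290/KLOC; 868 in 2M -> 0.434/KLOC.
projects = {
    "example-lamp-component": (580, 2_000_000),
    "example-other-project":  (868, 2_000_000),
}

BASELINE = 0.434  # average across the 32 projects analyzed, per Coverity

for name, (defects, loc) in projects.items():
    density = defect_density(defects, loc)
    delta = density - BASELINE
    print(f"{name}: {density:.3f} defects/KLOC ({delta:+.3f} vs. baseline)")
```

Note that a raw density like this says nothing about defect severity, which is one reason the numbers didn't always track the volume of security advisories.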

SCAN has progressed significantly over the past three years, and today's announcement focuses on architecture diagrams, not defects. The data will be released under a Creative Commons license and is available on Coverity's SCAN site.
