Open source software (OSS) has undoubtedly broken into the mainstream, with projects like OpenOffice.org, Mozilla Foundation's Firefox browser and Samba all garnering their fair share of the headlines in 2005.
But as open source projects pile up on sites like SourceForge.net, the nature of the community development model means some projects will inevitably fall by the wayside, even as many companies continue to use the technology in their software stacks.
In August, the Business Readiness Rating (BRR) initiative was launched to sort through projects so that the everyday IT guy wouldn't have to. With more than 100,000 OSS projects currently available on SourceForge and similar repositories like Codehaus, Tigris, Java.net and OpenSymphony -- the list goes on -- members of the group believe their initiative could prove very valuable in 2006.
Michael Goulde, an analyst with Cambridge, Mass.-based Forrester Research Inc. and a member of the BRR steering committee, recently gave SearchOpenSource.com a progress report on the group. Although it is still in its infancy, the group shows promise as a standard for IT professionals who might otherwise need to conduct an OSS evaluation process of their own.
What's up with the BRR? Who's involved and where do you stand with it?
Michael Goulde: There are universities like Carnegie Mellon, and several entities [including O'Reilly CodeZoo, SpikeSource and Intel] that are involved with open source software.
What aspects appealed most strongly to you about this particular project?
Goulde: The reason I was interested is the methodology they had come up with -- this is not a methodology created out of whole cloth, but one similar to the existing methodology we use to evaluate the many categories of commercial and proprietary products. We share information on the strengths and weaknesses of applications using a community-based evaluation tool.
Another thing that attracted me to it: working with our own [Forrester] clients, we get a lot of requests about specific open source projects. So it made perfect sense that open source users be involved in their own community effort to evaluate software, and that they have a vehicle for sharing and collaborating around the independent evaluations that almost every one of them was already conducting.
See, to me it's important that users of open source have a set of best practices and guidelines to go by. They can then communicate within a framework, evaluate software and build up a knowledge base that can be shared with others -- users who could not take part in the evaluation process before due to budget or time constraints.
There is also an economic benefit, because [when this is completed] not every company will have to go through the entire process for each application it wants to look at or evaluate. Oftentimes at many companies the manpower just isn't there.
Who will be able to best utilize what BRR will offer?
Goulde: It's for any developer who needs to evaluate open source projects. There are multiple levels of evaluation; in some ways it is identical to evaluating commercial software, and in other areas it is very different.
The technology aspects -- does it have the functionality I need, the necessary documentation -- these are the things the average developer cares about when looking at a piece of software they are going to use.
How can BRR put some of the more apprehensive developers' minds at ease if they do not fully understand OSS?
Goulde: [BRR] also addresses the various levels of approval normally necessary for commercial software. With commercial companies, there is some kind of measure in place to judge a software provider and its product, but that is not necessarily the case with an open source project, which can disappear if support drops out. With [BRR], there is a methodology in place to help ensure that the supplier of the open source application, so to speak, will be there in the future.
Has this come far enough along that the steering committee can say what the next step might be?
Goulde: Well, one piece is to continue to evolve the methodology. The question still is, 'Is this workable?' There are some aspects, for instance, where companies and IT developer shops feel they need to build their own unique content.
We still need a lot of developer and IT management input to validate our approach, but after that we will start to build up a collection of evaluations. Then there is market adoption, getting developers and IT managers to use it and publicize their use.