What you are describing is known in the security world as 'security through obscurity'. It is a principle many government and intelligence organizations operate by, the NSA being a prime example of that mindset: if no one knows about your potential vulnerabilities or weaknesses, then you are safer.
What concerns me about 'security through obscurity' is that very assumption: that no one knows about your vulnerabilities or weaknesses. I think that assumption is a fallacy. First, reverse engineering code has become easier and easier, with automated tools that do much of the work for you. Second, a large percentage of attacks and compromises are launched internally by employees, who are often privy to large amounts of information about an application or system. Closing the source code does not protect you from these internal perpetrators.
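To illustrate how little protection a closed binary offers, here is a minimal sketch of the kind of automated analysis mentioned above: a few lines of Python that mimic the Unix `strings` utility and pull readable text, including any hardcoded credential, straight out of compiled code. The `binary` variable and the embedded `api_password` value are hypothetical stand-ins, not data from any real product.

```python
import re

def extract_strings(data: bytes, min_len: int = 6):
    """Return printable ASCII runs of at least min_len bytes,
    mimicking the Unix `strings` utility."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Stand-in for a compiled, closed-source binary that embeds a
# hardcoded credential (hypothetical example data).
binary = b"\x7fELF\x02\x01\x00" + b"api_password=s3cret!" + b"\x00\xff\x90"

for s in extract_strings(binary):
    print(s)  # the embedded credential falls right out
```

This is the most trivial possible analysis; real disassemblers and decompilers go far deeper, which is why hiding the source buys so little.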
Finally, crackers have proven more than adequate at compromising closed operating systems and applications (Microsoft Windows, for example). Therefore, I don't think obfuscating source code provides any greater security.