What you are describing is known in the security world as 'security through obscurity'. It is a principle that many government and intelligence organizations operate by, the NSA being a prime example of that mindset: if no one knows about your potential vulnerabilities or weaknesses, then you are safer.
The big thing that concerns me about 'security through obscurity' is that very assumption -- that no one knows about your vulnerabilities or weaknesses. I think this assumption is a fallacy. First, reverse engineering code has become easier and easier, with automated tools available that do the work for you. Second, a large percentage of attacks and compromises are launched internally by employees, who are often privy to large amounts of information about an application or system. Closing the source code does not protect you from these internal perpetrators.
Finally, crackers have proven more than capable of compromising closed operating systems and applications (Microsoft Windows, for example). Therefore, I don't think obscuring source code provides any greater security.
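To illustrate the reverse-engineering point: even without source code, hardcoded secrets in a compiled binary are trivially recoverable. Here is a minimal sketch of what a tool like `strings` automates; the binary bytes and the credential are invented for the example:

```python
# Sketch of what tools like `strings` automate: recovering readable
# text (e.g. a hardcoded credential) from compiled binary data.
# The bytes below are a made-up stand-in for a real executable.
import re

# Pretend this is a compiled program whose author assumed no one
# would ever see the source.
binary = b"\x7fELF\x02\x01\x00\x13admin_password=s3cret!\x00\x9a\xfe"

# Extract runs of 4+ printable ASCII characters, as `strings` does.
found = [s.decode() for s in re.findall(rb"[ -~]{4,}", binary)]
print(found)  # → ['admin_password=s3cret!']
```

The "obscured" secret falls out in two lines of code, without any knowledge of the program's source.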
This was first published in August 2006