Castle walls may stop assaults from the outside, but they do nothing to halt attacks from mutinous inhabitants
inside the keep. Of course, if the castle walls are not maintained, getting in from the outside gets easier and easier over time.
In this tip, I look at common mistakes made in physical IT infrastructure security and security maintenance. In part one, I covered problems with installations and the hard-perimeter, soft-center security approach. There are other areas to consider, but I find these particular ones to be frequently overlooked or mishandled.
Forgetting the physical
Many tightly secured systems have major weaknesses in the physical realm. Poorly secured physical facilities can allow external and internal attackers access to critical systems. Always ensure hosts are stored in a secure, locked environment. It is no good securing your host if an attacker can steal the physical asset itself, or some component of it, such as its hard disk drives.
Additionally, make sure any console or management station is automatically logged out or locked if left unattended. The TMOUT environment variable for the Bash shell is useful for this. If the console needs to be left logged in, a tool like vlock will assist by allowing you to lock the console in its logged-in state.
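As a sketch, TMOUT can be set system-wide in /etc/profile; the 600-second value here is an arbitrary example, so pick a timeout that suits your environment:

```shell
# /etc/profile fragment: terminate idle Bash sessions after 10 minutes.
# The value (600 seconds) is an example only.
TMOUT=600
readonly TMOUT   # stop users from unsetting or changing the timeout
export TMOUT
```

If a console must stay logged in, running `vlock` locks the current virtual console, and `vlock -a` locks all virtual consoles, until the user's password is re-entered.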
Another frequently overlooked physical weakness is the potential to restart the host. By physically rebooting the host, an attacker may be able to manipulate it; for instance, an attacker could boot into single-user mode and change the root password. There are a number of ways to prevent this, such as adding a BIOS-level password or password-protecting the LILO or GRUB boot loader. The trade-off is that, if the host is remote from you when it is rebooted, someone may need to intervene manually to input the password(s). This risk to availability needs to be balanced against the physical risks being mitigated.
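As an illustration, GRUB legacy accepts an MD5-hashed password in its configuration file, and LILO supports a boot password; the hash and password below are placeholders for values you generate yourself:

```shell
# Generate a hash for GRUB legacy (prompts for the password):
#   grub-md5-crypt
#
# /boot/grub/menu.lst fragment -- paste the generated hash here:
#   password --md5 <hash-from-grub-md5-crypt>
#
# /etc/lilo.conf fragment -- with "restricted", the password is only
# required when extra boot parameters (such as "single") are supplied:
#   password = "<your-password>"
#   restricted
# Remember to re-run /sbin/lilo after editing lilo.conf.
```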
Appreciate the power of entropy
Not many security controls are "set and forget"; most require regular maintenance and tweaking, because the threats against your hosts don't stay static. New threats appear, and old threats morph and change. If you apply security controls but neglect maintenance and review, you can unknowingly leave your hosts vulnerable. Here are some ways to prevent potential problems.
Apply patches and updates, especially security-related ones, to hosts on a regular basis.
Updates should be scheduled according to the criticality of the vulnerability being addressed. To assist with this, many security companies publish severity ratings for vulnerabilities; see the Common Vulnerability Scoring System (CVSS). Take this rating and apply your knowledge of your local environment to determine whether you are vulnerable and, if so, to what extent. The rating, adjusted for your environment, then determines the priority of any patch or fix.
Don't assume that, because a software patch isn't available, you must live with the vulnerability. Seek out workarounds or alternative controls that you can apply, in place of the software patch, until it becomes available. For example, an Intrusion Prevention System (IPS) can be configured to detect a particular vulnerability and block attempts to exploit it.
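The exact update commands depend on your distribution. As a hypothetical Debian-style sketch (yum plays the same role on Red Hat-style systems), a cron entry can surface pending updates for review without applying them automatically:

```shell
# /etc/cron.d/patch-check fragment (illustrative): refresh the package
# lists nightly and mail root a simulated upgrade report to review.
# "apt-get -s upgrade" only simulates; nothing is installed.
0 2 * * * root apt-get update -qq && apt-get -s upgrade 2>&1 | mail -s "Pending updates" root
```

Reviewing the simulated report before running the real upgrade lets you apply the criticality-based prioritization described above, rather than patching blindly.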
Monitor security information resources regularly.
Make a habit of checking the SANS Web site, vulnerability mailing lists, announcement lists for the products and tools you use, and security company Web sites and forums.
As you are reading, compile a list of the vulnerabilities that apply to you on a daily or weekly basis. Then, analyze and rate them according to the threat they pose to your environment, as discussed in part one of this series. Finally, act on them accordingly.
Management tools allow you to remotely and consistently configure your hosts, services and applications. They also allow you to ensure that configurations, users and groups, permissions and other settings are correct on your hosts by regularly resetting them to a centrally defined default. This helps counter the inevitable effect that entropy has on configurations, and can limit the duration of any exposure caused by a misconfiguration or unauthorized change.
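Even without a full management suite, the same idea can be sketched with a checksum baseline: record hashes of critical configuration files while the host is in a known-good state, then re-check them on a schedule. The file list and baseline path below are examples, not a prescribed standard:

```shell
#!/bin/sh
# Sketch of configuration-drift detection using a checksum baseline.
# The monitored files and baseline location are illustrative.
BASELINE=${BASELINE:-/var/lib/config-baseline.md5}

# Run once, on a known-good host, to record the baseline:
record_baseline() {
    md5sum /etc/passwd /etc/group /etc/ssh/sshd_config > "$BASELINE"
}

# Run regularly (e.g. from cron); prints a warning and exits non-zero
# if any file's checksum no longer matches the baseline.
check_drift() {
    md5sum --quiet -c "$BASELINE"
}
```

A non-zero exit from the check is the trigger to investigate: either the change was authorized (so re-record the baseline) or you may be looking at tampering.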
Regular testing is critical to ensuring continued security.
Regular testing can detect new vulnerabilities and identify any that have emerged as a result of entropy or change. Testing processes may include scanning with open source tools like Nessus or their commercial equivalents; in-house or outsourced penetration testing of applications and infrastructure; and code reviews of applications.
I recommend testing after major changes, or on a regular schedule with the frequency dependent on the criticality of the host.
Lastly, and most importantly, it is not enough to just find vulnerabilities; you also need to fix them. In order to do this, ownership for the remediation of the vulnerability needs to be clear, and an agreed workflow put in place for remediation. This should include a clear picture of the financial cost of fixing the vulnerability, weighed against the potential business impact of the vulnerability. Always remember to express the risks in business terms. After all, it is the business that ultimately has to pay for any remediation.