Last week President Obama issued an Executive Order on Improving Critical Infrastructure Cybersecurity and a Presidential Policy Directive on Critical Infrastructure Security and Resilience (PPD-21). Unfortunately, these are a watered-down rehash of previous proposals. Without legislation, they are directives to the executive branch and mere suggestions to industry.
What I found interesting was the report, Raising the Bar on Cybersecurity, released by the Center for Strategic and International Studies. The report identified measures that reduce the risk from targeted attacks by 85%: application whitelisting, rapid patching of applications and operating systems, and minimizing the number of users with administrative privileges.
I agree. These measures are effective. If these controls had been in place at the Department of Energy, they might have prevented the intrusion and subsequent loss of data described below. But first I would add two additional controls to the list: eliminating outdated, unsupported software and hardware, and annual attack and penetration (A&P) testing.
But, there are challenges.
Whitelisting is time-consuming and can be difficult to implement. You need locked-down desktops, standards for server builds, and network configuration profiles already in place; otherwise, whitelisting may be circumvented, especially by determined outside attackers and disgruntled internal users.
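The core idea behind whitelisting is simple even if deployment is not: allow a program to run only if its hash appears on an approved list, so a tampered or unknown binary is rejected even if its file name looks legitimate. A minimal sketch of that check, assuming a hypothetical allowlist of SHA-256 digests (file names here are illustrative, not from any product):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def is_whitelisted(path: Path, allowlist: set[str]) -> bool:
    """Allow execution only if the binary's hash is on the approved list."""
    return sha256_of(path) in allowlist


if __name__ == "__main__":
    import tempfile

    # Illustrative usage: approve one known binary, then tamper with it.
    tool = Path(tempfile.gettempdir()) / "approved_tool"
    tool.write_bytes(b"#!/bin/sh\necho ok\n")
    allowlist = {sha256_of(tool)}

    print(is_whitelisted(tool, allowlist))   # True: contents match the approved hash
    tool.write_bytes(b"#!/bin/sh\necho pwned\n")  # same name, modified contents
    print(is_whitelisted(tool, allowlist))   # False: hash no longer on the list
```

Real whitelisting products hook program execution in the OS; the maintenance burden the paragraph above describes comes from keeping that allowlist current across every patch and software rollout.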
Patching is always a good thing, but every patch should be tested before it is applied. This is particularly challenging for the industrial control systems associated with critical infrastructure, where taking systems offline to test and apply patches is rarely an option.
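One way to keep the test-before-deploy discipline manageable is to maintain a baseline of patch levels that have passed testing and flag any production host that drifts from it. A minimal sketch, assuming a hypothetical baseline manifest (the package names and version strings are made up for illustration):

```python
def patch_drift(installed: dict[str, str],
                tested: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Report packages whose installed version differs from the tested baseline.

    Returns {package: (installed_version, tested_version)} for every mismatch,
    including packages that were never tested at all (baseline '<none>').
    """
    drift = {}
    for pkg, ver in installed.items():
        baseline = tested.get(pkg, "<none>")
        if ver != baseline:
            drift[pkg] = (ver, baseline)
    return drift


# Illustrative data only -- not real package versions.
installed = {"openssl": "1.0.1e", "bash": "4.2.45", "customd": "0.9"}
tested = {"openssl": "1.0.1e", "bash": "4.2.37"}

# bash has drifted ahead of the tested baseline; customd was never tested.
print(patch_drift(installed, tested))
```

In practice the `installed` map would come from the platform's package manager; the point of the sketch is that "patched" and "tested" are two different states, and the gap between them is what needs managing.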
Reducing administrative privileges is a good idea noted in most InfoSec improvement programs, and it is one of the first things covered in a security auditor’s review. A tool like Unix’s sudo is a good way to assign specific privileges to specific users, and versions of, or equivalents to, sudo are available for other operating systems.
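For example, a sudoers entry can grant one administrator exactly one privileged action instead of blanket root access (the user name and command below are illustrative):

```
# /etc/sudoers.d/webadmin -- always edit with visudo, never directly.
# Allow user 'webadmin' to restart the web server as root, and nothing else.
webadmin ALL=(root) /usr/sbin/service apache2 restart
```

Every use of the privilege is also logged by sudo, which gives the auditor mentioned above an evidence trail that plain root logins do not.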
Outdated software and hardware are often the source of vulnerabilities because patches and updates are no longer provided. Unfortunately, some entities do not have the resources to eliminate old software and hardware, so they must identify alternate means of providing adequate security.
Finally, annual A&P testing may be expensive, but for organizations at risk, especially of losing intellectual property, exposing personally identifiable information (PII), or taking a hit to corporate reputation, the expense is well justified. A&P testing performed by qualified third parties is a “no-harm, no-foul” way of ensuring that networks and systems are properly protected. It also demonstrates that a modicum of due diligence has been performed, which may help forestall or lessen the impact of a lawsuit following a breach.
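At its simplest, the reconnaissance phase of an A&P test just checks which TCP ports on a host accept connections; real engagements go far beyond this. A toy sketch of that first step (the host and port list are whatever you point it at, and you should only scan systems you are authorized to test):

```python
import socket


def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports


if __name__ == "__main__":
    # Illustrative usage against the local machine only.
    print(scan_ports("127.0.0.1", [22, 80, 443]))
```

A qualified third-party tester brings far more than a port list, but even this toy version illustrates the point of the exercise: finding what is exposed before an attacker does.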
So, why doesn’t every organization jump on the A&P testing bandwagon? Well, in addition to the expense, some IT directors are reluctant to employ A&P testing because of the risk of exposing poor network design or configuration errors to executives or the board of directors. These folks, however, should be thinking about how to protect their shareholders’ interests, not their careers.
Implementing these five measures will not only raise the bar for protecting organizational assets but also reduce the effort required to maintain an efficient and effective IT environment.
The Washington Free Beacon reported on February 4, 2013: “Computer networks at the Energy Department were attacked by sophisticated hackers in a major cyber incident two weeks ago and personal information on several hundred employees was compromised by the intruders.” A total of 14 computer servers and 20 workstations at the headquarters were penetrated during the attack.
This article, and others in the recent past, all raise the same issue: inadequate security measures stemming from (pick one or more) improperly trained administrators, inexperienced security staff, budgetary constraints, and “institutional hubris”. Government has a responsibility to protect the information entrusted to it by its citizens. Yet the government, in all branches, has failed in this endeavor and will likely continue to fail until it wakes up to reality and gets smarter than those attempting to compromise its systems.
Mandatory security testing and training must be implemented at all levels of IT and operations throughout the government. Wherever sensitive information is involved, training must be held. I am not talking about awareness training; I am talking about training administrators, IT managers, and security staff on what to look for, how to properly program and configure systems, and, most importantly, how to test those systems, including how to properly conduct attack and penetration tests.
Do not rely on hiring people with long strings of certifications behind their names. In many cases they are merely cert collectors who have no clue what the certs really mean, other than that the more certs you have, the better your chances of getting a job. Establish real training programs. Work with groups such as GIAC (Global Information Assurance Certification), whose programs REQUIRE a practical exercise before a cert can be awarded. NSA relies on GIAC-certified individuals; why shouldn’t the rest of government?
Finally, forget sending trained staff away to conferences. Not only are the conferences largely a waste of time (it seems only controversial, contrarian views are desired as talk topics these days), but you will leave your networks and systems in the hands of those less qualified to deal with crises should the inevitable happen. Everyone likes to go to conferences, if for no other reason than to collect suitcases full of vendor-supplied swag, but the best training spend is on real training as supplied by organizations such as the SANS Institute.
CSO magazine, analyzing this story, provided a number of sources that support the same conclusion.