This blog is a continuation of our blog series on the CIS Critical Controls.
We’ve now passed the halfway point in the CIS Critical Controls. The 11th deals with Secure Configurations for Network Devices. When we say network devices, we’re referring to firewalls, routers, switches, and network IDS setups specifically, but many of these concepts can and should be applied to DHCP/DNS appliances, NAC enforcement appliances, and other solutions, too. The goal is to harden these critical network infrastructure devices against compromise, and to establish and maintain visibility into changes that occur on them—whether those changes are made by legitimate administrators or by an adversary.
The first three of the seven sub-controls state:
- 11.1: Compare firewall, router, and switch configuration against standard secure configurations defined for each type of network device in use in the organization. The security configuration of such devices should be documented, reviewed, and approved by an organization change control board. Any deviations from the standard configuration or updates to the standard configuration should be documented and approved in a change control system.
- 11.2: All new configuration rules beyond a baseline-hardened configuration that allow traffic to flow through network security devices, such as firewalls and network-based IPS, should be documented and recorded in a configuration management system, with a specific business reason for each change, a specific individual’s name responsible for that business need, and an expected duration of the need.
- 11.3: Use automated tools to verify standard device configurations and detect changes. All alterations to such files should be logged and automatically reported to security personnel.
If we distill these concepts, the idea is that we need to begin from a baseline-hardened configuration, then apply strict change management operational and detective controls to any modifications. Easy enough to say, but how do we do that?
Baseline hardening for network devices can be established either by using guides from the vendor (if available) or by utilizing an open, peer-reviewed framework such as the CIS Benchmarks or the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs). Vendor guides may offer quicker, more prescriptive advice tailored to your platform, but they may not be comprehensive or free of bias, so we don’t typically recommend them as primary guides. Both the CIS Benchmarks and the DISA STIGs are free; we find that the CIS Benchmarks are often easier to approach for many organizations, since the guides are published as PDFs and are more human-readable. If you’re a current InsightVM or Nexpose customer, you can configure credentialed device scanning and reporting against the CIS Benchmarks to report which settings may be out of compliance. Other vulnerability management solutions may also be able to scan against these templates; doing so makes it that much easier to reach and maintain these baselines, and to ensure you receive alerts if a configuration change contradicts them.
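To make the comparison step concrete, here’s a minimal sketch of diffing a device’s running configuration against an approved baseline. This is an illustration of the idea, not any scanner’s actual logic: it treats configs as flat sets of statements, whereas real network configs are hierarchical (indentation carries meaning on many platforms), so production tools parse section context.

```python
# Minimal sketch: compare a running config against an approved baseline,
# statement by statement. Suitable only for illustrating the concept.

def normalize(config: str) -> set[str]:
    """Strip comments and blank lines; return the set of config statements."""
    lines = set()
    for raw in config.splitlines():
        line = raw.strip()
        if line and not line.startswith("!"):  # '!' is a comment on many NOSes
            lines.add(line)
    return lines

def find_deviations(baseline: str, running: str) -> dict[str, set[str]]:
    """Return baseline statements missing from the device, plus unapproved extras."""
    base, run = normalize(baseline), normalize(running)
    return {"missing": base - run, "unexpected": run - base}
```

Either bucket of the result is a deviation that should map back to an approved change-control ticket; anything that doesn’t is worth an alert.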
Speaking of changes, the second facet of these first three sub-controls deals with change management: those wonderful tickets that ask for the whos, whats, whens, wheres, whys, hows, and what-ifs (back-out plans) of any significant change. If you have any compliance objectives at all, you probably have a CM ticketing and approval system. While these systems are often grumbled about, they provide a good support structure for stability and predictability in your day-to-day operations. A well-tuned CM system is as low-friction as possible and gives network and system admins a reliable ledger of all changes; this lets you virtually wind back the clock and figure out whether an unintended interaction of one or more changes has caused an operational issue. A “sliding scale” approach, where a reviewer can determine that a small change requires only minimal information while a large and complex change may require more detailed plans and meetings, is often a better way to tune than a one-size-fits-all CM detail requirement.
The last facet of these first three sub-controls has to do with change detection. Many network configuration manager platforms offer the ability to alert if a change is detected from a previous configuration, and those alerts can be reconciled against a change management system. Such checks and balances can keep network admins honest and offer a detective control against attackers adding accounts or modifying configurations to their advantage (provided the attacker neglects to disable the conduit to the network config management platform, of course). Open-source solutions such as RANCID can provide change detection in this manner as well.
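At its core, this style of change detection amounts to snapshotting each device’s configuration, comparing it against the last-known-good copy, and emitting a diff for reconciliation against CM tickets. The sketch below illustrates that idea only; it is not RANCID’s implementation, and fetching configs over SSH or SNMP is assumed to happen elsewhere.

```python
# Hash-based change detection: fingerprint each config snapshot, and when
# the fingerprint moves, produce a unified diff for a human (or SIEM) to
# reconcile against change-management records.
import difflib
import hashlib

def fingerprint(config: str) -> str:
    """Stable digest of a config; in a real deployment, volatile lines
    (timestamps, counters) should be stripped before hashing."""
    return hashlib.sha256(config.encode()).hexdigest()

def detect_change(previous: str, current: str) -> list[str]:
    """Return a unified diff if the config changed, else an empty list."""
    if fingerprint(previous) == fingerprint(current):
        return []
    return list(difflib.unified_diff(
        previous.splitlines(), current.splitlines(),
        fromfile="last-known-good", tofile="current", lineterm=""))
```

Any non-empty diff that can’t be matched to an approved CM ticket is exactly the detective signal this sub-control asks for.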
Sub-control 11.4 is relatively straightforward: Manage network devices using two-factor authentication and encrypted sessions. Many network infrastructure devices can now integrate directly with multi-factor authentication solutions. If your 2FA platform of choice doesn’t directly integrate, consider restricting administration to geographically disparate or independently hosted administrative “jump stations” and implementing 2FA on those stations. We’ll talk more about those in 11.6. In the meantime, the second part of this sub-control covers the use of encrypted sessions: this means no telnet. Never. Not anywhere. Technically you could tunnel it through an IPsec tunnel, but that’s far more work than simply enabling SSH v2, testing it, making sure SSH v1 has been disabled because of its several known security flaws, and then disabling telnet. Seriously, if you do nothing else in this guide, disable telnet everywhere on your network after testing SSH v2.
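As a quick illustration of auditing for leftover telnet, here’s a line-oriented check of an IOS-style config. The `line vty` / `transport input` syntax is Cisco-specific and other platforms differ, so treat this as a sketch of the audit logic rather than a universal tool.

```python
# Flag any vty line block that could still accept telnet: a block is safe
# only if it carries an explicit "transport input ssh". A block with
# "transport input all"/"telnet", or with no transport statement at all
# (the permissive default on older images), is flagged.

def vty_allows_telnet(config: str) -> bool:
    """Return True if any vty block in the config could accept telnet."""
    in_vty = False
    ssh_only = False
    for raw in config.splitlines():
        if raw.startswith("line vty"):
            if in_vty and not ssh_only:   # previous block never locked down
                return True
            in_vty, ssh_only = True, False
        elif in_vty and raw.startswith(" "):
            stmt = raw.strip()
            if stmt == "transport input ssh":
                ssh_only = True
            elif stmt.startswith("transport input"):
                return True               # "all", "telnet", etc.
        elif in_vty:                      # block ended without ssh-only
            if not ssh_only:
                return True
            in_vty = False
    return in_vty and not ssh_only
```

Run against every device backup in your config repository, a check like this makes “no telnet, never, not anywhere” continuously verifiable instead of a one-time cleanup.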
Sub-control 11.5 is also fairly clear: Install the latest stable version of any security-related updates on all network devices. Surprisingly, when we perform evaluations of customer environments, we see a lot of network infrastructure devices treated with an “if it ain’t broke, don’t patch it” philosophy. These devices are often one or two hops away from laptops and other mobile systems that enter and leave the network frequently (sometimes several times a day), and they can present a broad and rich attack surface if not hardened and patched regularly. The days of considering attack surfaces only at your outer boundaries are long gone. The problem is that many of these network infrastructure devices require downtime to properly patch and test, and in a substantially complex environment that can overwhelm the number of network engineers available to perform the work. A mixture of high-availability configurations for failover, combined with automation for both patching and post-patch testing, can go a long way toward moving past this critical security maturity level. That said, if you don’t already have this capability and architecture, this can be one of the hardest sub-controls to meet. It should not be considered optional or nice-to-have; patching all network devices is essential to risk mitigation and proper defense-in-depth.
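One small automation that helps here is a version-floor check run against an inventory export: flag any device whose OS version falls below the approved minimum for its platform. The version strings and floors below are illustrative; real NOS version schemes often need vendor-specific parsing.

```python
# Flag devices running below an approved minimum OS version.
# Dotted-numeric versions only; vendor suffixes stop the parse.

def parse_version(v: str) -> tuple[int, ...]:
    """'15.2.7' -> (15, 2, 7); a non-numeric field ends parsing."""
    parts = []
    for field in v.split("."):
        if not field.isdigit():
            break
        parts.append(int(field))
    return tuple(parts)

def below_floor(running: str, minimum: str) -> bool:
    """True if the running version is older than the approved floor."""
    return parse_version(running) < parse_version(minimum)
```

Feeding the flagged list into your ticketing system turns “patch the network gear” from an aspiration into a scheduled queue of maintenance windows.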
The second-to-last sub-control is 11.6: Network engineers shall use a dedicated machine for all administrative tasks or tasks requiring elevated access. This machine shall be isolated from the organization's primary network and not be allowed internet access. This machine shall not be used for reading email, composing documents, or surfing the internet.
The aim of this control is to limit the likelihood of an attacker compromising a network engineer’s “daily-driver” machine and riding into the firewall, router, or switch via an admin channel. By setting up a secured, limited-functionality jump station, you create a choke point where detective and preventive controls make it much more difficult for an attacker to pass through undetected. Technologies such as endpoint protection agents, session recording, multi-factor authentication, file integrity monitoring, and good workstation hardening can strengthen this, along with stringent limitations on where network devices will accept incoming SSH sessions from (because you disabled telnet, right?) and where they’ll allow connections to. If you disallow connectivity from that jump station out to the internet, email, or document-hosting platforms that may carry infection vectors, you significantly limit the risk of compromise from that angle. It’s worth noting again that you shouldn’t create just one jump station, as it would represent a single point of failure. Create a few on geographically disparate or independently hosted machines. If you’re like us and enjoy having a backup to your backup, you can also leave an aux or console port enabled, but make sure you set up and test alerting so that it gets everyone’s attention if that port is ever used; it should be a very rare case and always an emergency.
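The source-restriction idea boils down to an allowlist decision. On real gear this is enforced with a vty access-class or ACL rather than host-side code, but the logic is simple enough to sketch; the subnets below are made-up examples standing in for your jump-station ranges.

```python
# Decide whether a management connection source is a sanctioned jump
# station. Subnets are hypothetical placeholders for illustration.
import ipaddress

JUMP_STATION_NETS = [
    ipaddress.ip_network("10.99.1.0/28"),   # primary-site jump hosts (example)
    ipaddress.ip_network("10.200.1.0/28"),  # DR-site jump hosts (example)
]

def mgmt_source_allowed(src_ip: str) -> bool:
    """True only if the source address sits inside an approved jump-station net."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in JUMP_STATION_NETS)
```

The same allowlist, applied in log review, also gives you a cheap detective control: any SSH attempt to a network device from outside these ranges is automatically suspicious.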
The last sub-control further defines how segmented your administrative connectivity should be from other business channels. 11.7 states: Manage the network infrastructure across network connections that are separated from the business use of that network, relying on separate VLANs or, preferably, on entirely different physical connectivity for management sessions for network devices. If you’re going to embark on an internal network segmentation initiative, the easiest and most predictable network to start with is usually your network infrastructure device administration segment. These devices rarely change location, nor do they enter and leave your network at semi-random intervals. This network is also a great place to try out additional detective alerts, such as getting everyone’s attention if there’s an unscheduled network port scan, a failed login to any device, or a new device added to the segment. Network administration, backup, and printer networks are favorite places for attackers to hide and attempt lateral movement through your environment; because of their static and predictable nature, they should be the first place where you can apply much stricter control without impacting business operations.
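Because the management segment’s population is static, the “new device appeared” alert is almost trivial to build: compare the MAC addresses currently seen (say, from switch MAC or ARP tables) against an approved inventory and flag newcomers. Collection of the live table is assumed to happen elsewhere, and the addresses here are hypothetical.

```python
# Flag MAC addresses observed on the management segment that aren't in
# the approved inventory. Normalizes case and separator style so that
# "DE-AD-BE-EF-00-01" and "de:ad:be:ef:00:01" compare equal.

def unknown_devices(approved: set[str], observed: set[str]) -> set[str]:
    """Return observed MACs absent from the approved inventory."""
    def norm(macs: set[str]) -> set[str]:
        return {m.lower().replace("-", ":") for m in macs}
    return norm(observed) - norm(approved)
```

On a segment this quiet, even a single unexpected entry is a high-fidelity alert worth paging on.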