
This is a continuation of our CIS Critical Controls blog series. Need help addressing these controls? See why SANS listed Rapid7 as the top solution provider addressing the CIS top 20 controls.

If you’ve ever driven on a major metropolitan highway system, you’ve seen it: The flow of traffic is completely engineered. Routes are optimized to allow travelers to reach their destinations as quickly as possible. Traffic laws specify who is allowed in which lanes and at what speeds—carpool lanes, slow lanes, truck lanes, and so on. There are special rules in place to control the passage of hazardous cargo and oversized vehicles. Toll booth signage directs traffic based on payment and vehicle type. And all of this was defined by a group of civil engineers long before the first cubic yard of concrete was poured.

Engineering like this optimizes for efficiency and prioritizes safety. The same can be done when designing computing systems and considering how data is transported across networks.

The key principle behind Critical Control 9 is the management of ports, protocols, and services (PPS) on the devices that are part of your network. This means that all PPS in use within your infrastructure must be defined, tracked, and controlled, and that any corrections should be made within a reasonable timeframe. The initial focus should be on critical assets, then evolve to encompass your infrastructure in its entirety. By maintaining knowledge of what is running and eliminating extraneous means of communication, organizations reduce their attack surface and give attackers fewer areas in which to ply their trade.

The control encourages you to examine and eliminate unnecessary PPS on each system. Going back to our road analogy, there is no reason for there to be an aircraft runway on your highway, is there? The same is true of your web server. Do you need to have FTP enabled on it? How about SMTP? Unless you are running your mail server or an open FTP service on your externally-facing web server, which I DO NOT recommend, the answer should be a resounding no. Eliminate those services whenever they’re not necessary. Hardening guidelines for these systems are a great starting point and can be deployed and monitored through configuration management tools.
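As a minimal sketch of that kind of review, the Python snippet below (assuming the third-party psutil library is available; the approved-ports list is purely hypothetical) enumerates the TCP ports a host is actually listening on and flags anything outside the approved set:

```python
# Sketch: flag listening services that are not on an approved list.
# Assumes the third-party psutil package; the APPROVED set is a hypothetical example.
import psutil

APPROVED = {22: "ssh", 443: "https"}  # example whitelist for a web server

def unapproved_listeners():
    findings = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN:
            continue
        port = conn.laddr.port
        if port not in APPROVED:
            proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            findings.append((port, proc))
    return findings

if __name__ == "__main__":
    for port, proc in unapproved_listeners():
        print(f"Unapproved listener: port {port} ({proc}) - review and disable if unnecessary")
```

Anything flagged is either a candidate for removal or something that belongs in your documented baseline.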

In the case of off-the-shelf software that you will be configuring to run on your network, the testing phase is a good opportunity to make sure you are not over-exposing yourself to risk. Most server software comes with instructions that tell you which ports are required to run the system and let you configure things like communications between applications and databases. Add this information to your documentation of the system. This helps with developing firewall rules and with keeping your data protection program and disaster recovery/business continuity plans up to date with the information they need to succeed.
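One lightweight way to capture that documentation is as structured data that can feed both your system records and your firewall rule requests. A sketch, using entirely made-up hosts, ports, and sources:

```python
# Sketch: record the ports/protocols a system requires, per its vendor documentation.
# Hosts, ports, and sources here are hypothetical examples.
REQUIRED_PPS = {
    "app-server-01": [
        {"port": 443, "proto": "tcp", "purpose": "HTTPS from load balancer", "source": "10.0.1.0/24"},
        {"port": 5432, "proto": "tcp", "purpose": "PostgreSQL to db-server-01", "source": "10.0.2.10/32"},
    ],
}

def firewall_rule_summary(inventory):
    """Turn the documented requirements into human-readable rule requests."""
    for host, entries in inventory.items():
        for e in entries:
            yield f"{host}: allow {e['proto']}/{e['port']} from {e['source']} ({e['purpose']})"

for line in firewall_rule_summary(REQUIRED_PPS):
    print(line)
```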

Prior to installation, perform a baseline port scan of the hardened system using your vulnerability scanner (or other freely available tools, such as port scanners and packet capture utilities). Once the system has been installed, perform another port scan and compare the results. Any ports or services required by the system that weren’t already mentioned in the configuration instructions should surface at this time.
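A rough illustration of that before-and-after comparison, using only Python’s standard library (the target address and port range are example values):

```python
# Sketch: scan a host's TCP ports before and after installation, then diff the results.
# The target host and port range are example values only.
import socket

def open_tcp_ports(host, ports, timeout=0.5):
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.add(port)
    return found

# Run once against the hardened image, then again after the software install.
baseline = open_tcp_ports("192.0.2.10", range(1, 1025))
# ... install and configure the application, then rescan ...
post_install = open_tcp_ports("192.0.2.10", range(1, 1025))

for port in sorted(post_install - baseline):
    print(f"New port opened by the install: {port} - confirm it is documented and required")
```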

Leverage host-based firewalls on your servers, with whitelists configured to allow ONLY the required communications between components of the system (e.g., database connections or administrative access from specific IP spaces). Workstations can also use this technology to the same end. All other, non-essential communications should be denied, as they open the system up to additional risk.
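As an illustration only, a whitelist for a database server might be expressed as data and rendered into iptables-style rules like this (the sources and ports are hypothetical; adapt the output to whatever host firewall you actually run):

```python
# Sketch: render a host firewall whitelist as iptables-style commands.
# Sources and ports are hypothetical; adapt to your firewall of choice.
ALLOWED = [
    ("10.0.1.0/24", 5432, "application tier to PostgreSQL"),
    ("10.0.9.5/32", 22, "administrative SSH from the jump host"),
]

def render_rules(allowed):
    rules = []
    for source, port, purpose in allowed:
        rules.append(f"iptables -A INPUT -p tcp -s {source} --dport {port} -j ACCEPT  # {purpose}")
    # Everything not explicitly allowed above is dropped.
    rules.append("iptables -A INPUT -p tcp -j DROP")
    return rules

print("\n".join(render_rules(ALLOWED)))
```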

Perform port scans of your infrastructure to understand and control exposure. Developing a baseline should be one of the first things you do. This activity should take place not only on a system-by-system basis but also across the landscape as a whole. When a discrepancy from the known and approved baseline is discovered, your setup should alert the appropriate stakeholders so they can investigate the activity and validate its business purpose.
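Conceptually, the comparison-and-alert step can be as simple as the sketch below, which diffs an observed scan against the approved baseline (both data sets here are illustrative; in practice they would come from your scanner) and reports anything unexpected:

```python
# Sketch: compare observed open ports to the approved baseline and flag drift.
# Both data sets are illustrative; in practice they would come from your scanner.
APPROVED_BASELINE = {
    "web-01": {80, 443},
    "db-01": {5432},
}

OBSERVED = {
    "web-01": {80, 443, 21},   # 21/tcp was not approved
    "db-01": {5432},
}

def baseline_drift(approved, observed):
    for host, ports in observed.items():
        unexpected = ports - approved.get(host, set())
        if unexpected:
            yield host, sorted(unexpected)

for host, ports in baseline_drift(APPROVED_BASELINE, OBSERVED):
    # In a real workflow this would notify the appropriate stakeholders.
    print(f"ALERT: {host} has unapproved open ports {ports} - validate the business purpose")
```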

Hit your external IP space by performing port scans against the entire range of external IPs you have been assigned. Discovering hosts that shouldn’t be internet-facing can save you a lot of heartache down the line! A number of organizations only scan specific external IP addresses as part of their vulnerability management programs, and there is always a chance that a host has been placed in the external space by accident. Finding these hosts and moving them onto a VLAN in your internal private IP space is an important part of risk reduction.
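A very rough discovery sketch over an external range (the CIDR and probe ports are placeholders, and you should only ever scan address space you own):

```python
# Sketch: walk an external CIDR and note hosts answering on a few common ports.
# The network and probe ports are placeholders; scan only address space you own.
import ipaddress
import socket

EXTERNAL_RANGE = ipaddress.ip_network("203.0.113.0/28")  # documentation range used as an example
PROBE_PORTS = (22, 80, 443, 3389)

def responding_hosts(network, ports, timeout=0.5):
    for ip in network.hosts():
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(ip), port)) == 0:
                    yield str(ip), port
                    break  # one open port is enough to flag the host

for ip, port in responding_hosts(EXTERNAL_RANGE, PROBE_PORTS):
    print(f"Internet-facing host found: {ip} (port {port} open) - confirm it belongs here")
```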

Separate critical services onto individual host machines. We mentioned SMTP and FTP earlier, but it goes deeper than that. While you may be leveraging your domain controller for DHCP, you certainly should not be piling any additional critical services onto these boxes. If at all possible, physical segregation is ideal, but in complex modern computing and operational environments this may not be feasible. Regardless of the means of segmentation, enhance the security of the hosts by locking them down to only the required services. For critical services such as DNS, DHCP, and database servers, this keeps the attack surface to a minimum and denies attackers multiple lines of advance toward the crown jewels.

Use application firewalls and place them in front of any critical servers. This helps ensure that only the appropriate traffic is permitted to access the application.

Additional information on all of these options can be found in the other controls within the series, specifically, Critical Control 11: Secure Configuration for Network Devices, and Critical Control 12: Boundary Defense.

Once again, the Limitation and Control of Network Ports, Protocols and Services comes down to knowing your environment, having a clear understanding of what is necessary, and maintaining that understanding through documentation, observation, testing, and validation.

Safe driving!

Like what you see? Check out our next post in this series, “CIS Critical Control 10: Data Recovery Capability.”