In 2011, I wrote a blog post, “Fighting the Advanced Attacker: 9 Security Controls You Should Add To Your Network Right Now.” I wanted to revisit the topic with today’s technologies in mind to see if the suggested controls would still work or if some adjustments were necessary.
Following (in italics) is my original post along with some highlighted (and non-italicized) 2016 updates. I think you’ll find it interesting to see how the landscape has changed, and how it has stayed the same. (Spoiler alert: We’ve added a tenth item to the list.)
Defending against “advanced” threats is hard. That doesn’t mean you should throw your hands up and simply not try. But you also don’t need to run out and plunk down money for the latest high-tech products to combat advanced attackers, modern malware, and misbehaving employees.
Below are our recommendations for the bare minimum steps that modern organizations should be following. As with nearly everything in our industry, the list below isn’t a silver bullet. However, it represents the foundation of a defensible network — and a network that helps responders detect and recover when things go wrong.
2016 update: Defending against advanced threats is still hard. While there are some good solutions out there, you still need to make sure that your network and systems are configured to take advantage of your technology investments. The list below still stands, with some tweaks.
It’s simple: Web servers don’t need to web surf. There is no business justification for allowing your web server to initiate a connection to an IP in China. Instead, allow servers to connect only to specific hosts/ports (e.g., a web service call to a business partner). Keep in mind that some well-known sites (e.g., twitter.com and blogspot.com) may be used as command and control channels. As such, filter and log accordingly.
2016 update: Even in 2016 — with our emphasis on advanced attacks that use exotic command and control methods — a lot of malware still uses non-standard ports. Most enterprise networks restrict outbound traffic to 80 and 443 (to allow web surfing). All other ports are not allowed. However, we are still seeing a lot of malware that connects to oddball ports (e.g., 7878). As such, this control holds true in 2016. Restrict all outbound traffic from your servers and user endpoints, and log the exceptions. (Side note: I realize this doesn’t do much for remote endpoints — but I’ll cover that in the new #10.)
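The “log the exceptions” step above can be sketched in a few lines. This is a minimal illustration, not a production tool: the record format (source IP, destination IP, destination port) and the sample data are hypothetical, and real egress logs would come from your firewall or netflow collector.

```python
# Hedged sketch: flag outbound connections on ports other than 80/443.
# The record format and sample values here are hypothetical.
ALLOWED_PORTS = {80, 443}

def flag_oddball_egress(records):
    """Return outbound records whose destination port is not allow-listed."""
    return [r for r in records if r["dport"] not in ALLOWED_PORTS]

records = [
    {"ts": "2016-01-14 09:00", "src": "10.0.0.5", "dst": "203.0.113.9", "dport": 443},
    {"ts": "2016-01-14 09:01", "src": "10.0.0.7", "dst": "198.51.100.2", "dport": 7878},
]
suspicious = flag_oddball_egress(records)  # the port-7878 connection stands out
```

In practice the same rule lives in your egress firewall policy; the point is that the “oddball port” check is trivial once the traffic is actually being logged.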
It’s been said that DNS is the linchpin of the Internet. It’s arguably the most basic and underappreciated human-to-technology interface. It’s no different for malware. When you suspect that a device has been compromised on your network, it’s important to be able to see what the suspected device has been up to. The DNS logs of a compromised machine will quickly allow responders to identify other machines that may also be infected.
2016 update: DNS is still a big part of how malware works. Once a machine is infected, the malware typically phones home to its command and control (C2) domain. One approach that several security tools use is to look for DNS lookups for newly registered or unregistered domains. An attacker can only use a domain for so long before the good guys notice, resulting in confiscation or blacklisting of the domain. Attackers will sometimes register a domain before or at the moment of attack, which explains the focus on “young” domain names.
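The “young domain” check described above reduces to comparing a lookup date against a registration date. A minimal sketch follows, assuming the registration dates have already been fetched (in practice they would come from WHOIS data or a threat-intel feed); the domains are made up for illustration.

```python
from datetime import date, timedelta

def young_domains(lookups, registrations, max_age_days=30):
    """Flag looked-up domains registered within max_age_days of the lookup.

    lookups: list of (domain, lookup_date); registrations: domain -> reg_date.
    Registration data is assumed to come from WHOIS or a threat-intel feed.
    """
    flagged = []
    for domain, lookup_day in lookups:
        reg_day = registrations.get(domain)
        if reg_day is not None and (lookup_day - reg_day) <= timedelta(days=max_age_days):
            flagged.append(domain)
    return flagged

# Hypothetical DNS log entries and registration dates:
lookups = [("intranet.example.com", date(2016, 1, 14)),
           ("x9f2-update.example.net", date(2016, 1, 14))]
registrations = {"intranet.example.com": date(2009, 3, 2),
                 "x9f2-update.example.net": date(2016, 1, 12)}
```

Here the domain registered two days before the lookup gets flagged; the long-established one does not.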
A responder’s worst nightmare is learning of an anomaly on the network and finding out that the IP address it originated from is a DHCP address that has been leased out several times since. By logging the date, time, hostname, and the IP that was assigned, responders will be able to look back in time and figure out which machine generated the traffic.
2016 update: This one still holds true. For example, if you discover a log entry that only identifies the device by its Windows name, how will you know the IP address it was assigned at the time? Many DHCP logs will record the assigned IP as well as the host name. One additional point to note: ensure that you log the assigned IP addresses of VPN users.
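The lookup responders need here is simple: given an IP and a timestamp, which host held the lease at that moment? A sketch, assuming hypothetical lease records (your DHCP server’s actual log format will differ):

```python
from datetime import datetime

# Hypothetical lease records: (IP, hostname, lease start, lease end).
leases = [
    ("10.0.0.42", "WKSTN-ALICE", datetime(2016, 1, 14, 8, 0), datetime(2016, 1, 14, 12, 0)),
    ("10.0.0.42", "WKSTN-BOB",   datetime(2016, 1, 14, 12, 0), datetime(2016, 1, 14, 18, 0)),
]

def host_for(ip, when, leases):
    """Which host held this IP at the moment the anomaly was observed?"""
    for lease_ip, host, start, end in leases:
        if lease_ip == ip and start <= when < end:
            return host
    return None  # no lease on record -- a finding in itself
```

If this query returns nothing, that is worth investigating too: either the logs don’t reach back far enough, or the device never got its address from your DHCP server.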
It is possible to tunnel data out of even the most restrictive networks using a wide array of protocols. When an IP on the Internet is implicated in an incident, responders need to understand whether any other hosts on the network communicated with that IP. A quick search of a firewall log will show all traffic to the IP. And here’s a tip to save some disk space: Full packet captures are not necessary. Only the date, time, source/destination IP, and source/destination ports are required.
2016 update: The main point to note here is that though disk space was cheap in 2011, it is even cheaper now. The more you can log, the better. It is frustrating to see that a host is talking to an IP address that you know is bad but you have no idea what was said. Was that the initial C2 traffic or was that your entire database being exfiltrated? Getting the payload here is key.
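The “quick search of a firewall log” mentioned above is just a filter on the destination column. A minimal sketch, using a made-up CSV-style log with exactly the fields the original post calls for (date, time, source/destination IP, source/destination ports):

```python
import csv
import io

# Hypothetical firewall log: date, time, src IP, src port, dst IP, dst port.
FIREWALL_LOG = """\
2016-01-14,09:00:01,10.0.0.5,54321,203.0.113.9,443
2016-01-14,09:02:17,10.0.0.7,49152,198.51.100.2,7878
2016-01-14,09:05:44,10.0.0.5,54400,198.51.100.2,443
"""

def connections_to(bad_ip, log_text):
    """Every logged connection whose destination matches the implicated IP."""
    return [row for row in csv.reader(io.StringIO(log_text)) if row[4] == bad_ip]

hits = connections_to("198.51.100.2", FIREWALL_LOG)  # two internal hosts talked to it
```

As the 2016 update notes, this tells you *who* talked to the bad IP, but without payload capture it cannot tell you *what* was said.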
According to Mandiant, attackers are increasingly using legitimate access methods (VPN, RDP, OWA, etc.) to gain unauthorized access to victim organizations. And why wouldn’t they? Anti-malware controls such as AV, IDS/IPS, and anomaly or behavioral detection systems can detect the presence of malware. Malware is the “smoking gun” that is left over for the responder to reverse engineer, fingerprint, and investigate. The bad guys may use malware in the initial stages of the attack, but that quickly changes. By logging all access (successes and failures), you ensure that you have an accurate audit trail of what users are connecting to.
2016 update: This still holds true. The M-Trends® 2015 report by Mandiant (a FireEye Company) is 28 pages long, and the word “credential” is mentioned 20 times. Attackers are using credentials to persist access and move laterally through the network. Furthermore, PowerShell and WMI are being used by attackers because…well…they are just so powerful and helpful. The ability to trace where an account went through your environment is key.
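Tracing where an account went reduces to filtering your (centrally collected) authentication events by account name and reading them in order. A sketch with hypothetical event records; real sources would be Windows security logs, VPN logs, and so on:

```python
# Hypothetical auth events: (timestamp, account, host, outcome).
auth_events = [
    ("2016-01-14 09:10", "svc-backup", "FILESRV01", "success"),
    ("2016-01-14 09:12", "svc-backup", "DC01", "failure"),
    ("2016-01-14 09:13", "svc-backup", "DC01", "success"),
    ("2016-01-14 10:02", "alice", "WKSTN-ALICE", "success"),
]

def account_trail(account, events):
    """Ordered list of (timestamp, host, outcome) for one account."""
    return [(ts, host, outcome) for ts, user, host, outcome in events
            if user == account]

trail = account_trail("svc-backup", auth_events)
```

A service account suddenly authenticating to a domain controller, as in this sample, is exactly the kind of lateral movement this audit trail exists to surface.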
When investigating a possible or confirmed incident, the analyst needs as much data as possible in one place so that it can be correlated. Central logging is also an old school “no brainer” security consideration to prevent local logs from being tampered with on the compromised device.
2016 update: Today, it’s important to have a central log that provides responders with a trusted “fallback” enclave that is cordoned off from everything else. So even if you were to “lose the ship,” your security enclave (and the logs you are throwing at it) would be protected from tampering.
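Shipping events off-host as they happen is the mechanism that makes this work. A minimal Python sketch using the standard library’s syslog handler; the collector address is a placeholder, and in production it would point at the hardened logging enclave described above:

```python
import logging
import logging.handlers

# Placeholder collector address -- in production this points at the
# locked-down logging enclave, reachable only on the syslog port.
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
logger = logging.getLogger("security")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Events are duplicated to the central collector as they occur, so a
# compromised host cannot quietly rewrite its own history afterward.
logger.info("auth success user=alice host=WKSTN-ALICE")
```

The same idea applies whatever the transport (rsyslog, a SIEM agent, etc.): the copy that matters is the one the attacker cannot reach.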
Mandiant’s M-Trends report found that 83% of malware used ports 80 and 443 to establish a command and control (C2) channel to the attacker’s server. Your proxy is in a fantastic position to log a lot of details about outbound HTTP (and if you break SSL, HTTPS) connections. Here, make sure you log date, time, client IP, requested URL, browser agent, etc. Get it all.
2016 update: In #4 above, I mentioned doing a full packet capture. That’s easy for me to type, but not exactly easy to do without some serious infrastructure. If that seems like too big an ask, consider logging what you do allow: HTTP/S traffic. Get the full GET and POST payloads. Lots of malware uses HTTP or “HTTP-ish” C2 methods. Proxy servers are a perfect tool to grab this.
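One payoff of logging the browser agent, as urged above, is that malware often announces itself with a User-Agent string no real browser uses. A sketch of that hunt, over hypothetical proxy entries (the hostnames and agent strings are made up):

```python
from collections import Counter

# Hypothetical proxy entries with a few of the fields worth keeping.
proxy_log = [
    {"client": "10.0.0.5", "url": "http://example.com/",             "agent": "Mozilla/5.0"},
    {"client": "10.0.0.7", "url": "http://badhost.example/gate.php", "agent": "updater/1.0"},
    {"client": "10.0.0.8", "url": "http://example.com/news",         "agent": "Mozilla/5.0"},
]

def rare_agents(entries, threshold=1):
    """Requests whose User-Agent appears at most `threshold` times overall."""
    counts = Counter(e["agent"] for e in entries)
    return [e for e in entries if counts[e["agent"]] <= threshold]

outliers = rare_agents(proxy_log)  # the lone "updater/1.0" request surfaces
```

On a real network you would baseline agents over weeks, not three entries, but the principle is the same: if you logged it, you can hunt in it.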
Breaking client SSL connections used to be rare because it was hard. However, we have found an increase in the number of customers who are man-in-the-middling client connections. Yes, attackers can use advanced techniques like custom crypto or obfuscated commands embedded in HTML comments. However, that is harder for the attacker to pull off. SSL is easy; it’s usually a check box.
You should at least make it hard on the bad guys. Once client SSL is being inspected at the proxy, identify those devices that are not being proxied. This may indicate misconfigured machines, rogue devices not playing by the rules, or a process running on a machine that isn’t using the local system settings. All are suspect and warrant investigation.
2016 update: Not a huge change here. SSL is still being used — but as we’re finding, sometimes the attackers don’t even have to try that hard. In some cases, attackers are using (and reusing) the same self-signed or rogue SSL certificates within their C2 infrastructures. As such, many security solutions (e.g., Bro IDS) log SSL server certificate details (like OU, Issuer, etc.) that are available during the SSL negotiation. This is a great way to spot odd SSL servers that your endpoints are connecting to.
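Spotting those reused self-signed certificates comes down to comparing the subject and issuer fields captured during the SSL handshake. A sketch that works on the dict shape Python’s `ssl.SSLSocket.getpeercert()` returns; the certificate values below are invented for illustration:

```python
def cert_summary(cert):
    """Summarize a getpeercert()-style dict; flag self-signed certificates."""
    subject = dict(field[0] for field in cert["subject"])
    issuer = dict(field[0] for field in cert["issuer"])
    return {
        "cn": subject.get("commonName"),
        "issuer_cn": issuer.get("commonName"),
        "self_signed": subject == issuer,  # subject == issuer: worth a look
    }

# Invented example of a rogue self-signed certificate:
rogue = {
    "subject": ((("commonName", "update-server"),),),
    "issuer":  ((("commonName", "update-server"),),),
}
summary = cert_summary(rogue)
```

Tools like Bro IDS log these same fields passively for every negotiation they see; once the issuer and subject are in your logs, matching a known-bad certificate across your whole estate is a single query.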
The final recommendation we have is to test your assumptions. You assume that your proxy will filter HTTP traffic, that your DLP solution will catch things at the email gateway, and that the IDS will catch C2 traffic. You assume that all packets out of the firewall are logged and that correlating events will be easy.
2016 update: Let’s reword this to the new reality: “Test your assumptions about your network and your personnel.” A lot of security professionals assume that their technologies are great and their users are hopeless. Both are wrong. If technology were the be-all and end-all, would we even be talking about any of this?
There is no magic “secure” checkbox that you can just enable. Our datacenters, which are full of very expensive blinking lights, certainly do some really cool things. But we need more. Just as you test for (and then remediate) exploitable vulnerabilities in your technical infrastructure, you should be doing the same for your users. Servers and workstations require ongoing maintenance and security attention. Users aren’t any different. Continuous security awareness and training has a material impact on your security posture. Nearly every organization values its technology and its personnel. As such, end-user risk management should be part of any mature security program.
This 2016 addition is a big one — and it underscores the impact “always-on connectivity” has had on organizations of all sizes in the past few years.
Many of the controls I introduced in 2011 work on the assumption that the assets you’re trying to protect are always located within your network perimeter. As we all know, assets have been steadily moving outside of the “castle walls.” A few, some, or all of your personnel may be remote. Activities like working from home, visiting client sites, and checking email at Starbucks essentially mean one thing: employees are doing their jobs outside the safety net of many of your security controls. In reality, a lot of your data could be living within applications that you have little control over. I’ll spare you a bunch of well-known advice about picking the right cloud vendor and focus on some related items instead:
Looking for a “user remediation” solution? Our Anti-Phishing Training Suite is a great way to introduce security awareness training to your employees. Our SaaS-based solution combines simulated phishing attacks and brief, interactive education modules, providing a thoughtful, engaging approach to effective risk reduction.
Posted by Trevor Hawthorn on 01.14.16