
Turning the Tables on the Network Intruder

E. Allen Smythe, Correspondent, TechBeacon
Photo by Kindel Media on Pexels
 

Spy novels often feature operatives who have long been trying to infiltrate an organization: waiting, learning, picking their mark, never rash, staying patient and strategic until they finally see their opportunity, make their play, and achieve a breakthrough. The play works perfectly; they have gained trust and are now "on the inside." The stakes rise significantly for the operative, who must evade being uncovered while carrying out the mission. A top operative may remain undetected, coming and going without raising eyebrows, but even the best can make a misstep. When a single error is all it takes, the tables tilt in favor of the defender, not the infiltrator.

Let's apply this to information security.

An extreme, but nonetheless real, example illustrates this well. The moral of the story could well be: "You only need to catch them once!"

A world-class red team was conducting an operation against an organization that, at the time, was growing rapidly and had reasonable security visibility, but was still building out components and improving its detection and response capabilities.

After a significant reconnaissance phase and a few fruitless avenues of attack, the red team determined the software version of an externally facing system, built a lab replica of it, and developed a zero-day exploit tailor-made for that target. At a time of their choosing, the red team launched the zero-day at the externally facing system and compromised it. The red team had a good day, and the defenders had just had their first bad day (though they didn't know it yet). This is where the advantage shifted, however: the red team now had to evade detection systems they didn't even know existed.

Meanwhile, the defenders were doing their best with what they had. They had been successful every day up to this point, fending off millions of attempts (including some from this attacker). Quite separately, the defenders had developed a "leetspeak" search that alerted on hacker leetspeak patterns (character substitutions such as "3" for "e" and "1" for "l") appearing in logs. It was never expected to work against a red team; it was designed to catch script kiddies, proof-of-concept exploits, and commodity attackers.
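As a rough illustration only (not the defenders' actual tooling), a minimal version of such a search might look like the following Python sketch; the patterns, log directory, and file naming are assumptions made purely for the example:

    import re
    from pathlib import Path

    # Illustrative leetspeak patterns; a real deployment would tune these
    # against its own log corpus to keep false positives manageable.
    LEETSPEAK_PATTERNS = [
        re.compile(r"31337"),                       # "eleet"
        re.compile(r"pwn(ed|3d)?", re.IGNORECASE),  # "pwned" / "pwn3d"
        re.compile(r"h4x|hax0r|0wn", re.IGNORECASE),
    ]

    def scan_logs(log_dir: str) -> list[str]:
        """Return log lines that match any leetspeak pattern."""
        hits = []
        for log_file in Path(log_dir).glob("*.log"):
            for line in log_file.read_text(errors="replace").splitlines():
                if any(p.search(line) for p in LEETSPEAK_PATTERNS):
                    hits.append(f"{log_file.name}: {line}")
        return hits

    if __name__ == "__main__":
        # Run on a schedule (e.g., hourly via cron) against the log repository.
        for hit in scan_logs("/var/log/app"):
            print("ALERT:", hit)

Searches like this are cheap to write and cheap to run, and as the story shows, they can pay off disproportionately.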

In this case, the red team had named a particular element of their exploit something along the lines of "pwnedby31337", perhaps forgetting to rename it from the lab version to some innocuous-looking name that would have easily gone unnoticed. When this element inevitably appeared in logs, it was more than enough to trigger the leetspeak-search alert, which ran every hour against the log repository. In short, the red team had made one small operational security (OPSEC) blunder that led to their detection; they had their first bad day while the blue team had one very good day.

While this may appear far-fetched, or simply a case of a lucky blue team, the reality is that many attackers have suboptimal OPSEC. Red teams are skilled in offense but aren't primarily focused on defense and detection evasion, and even those with excellent OPSEC can still make mistakes and overlook details.

The good news is that taking advantage of this does not always require heavy or expensive lifting. For network evidence alone, there are free, open-source tools (Zeek and Suricata, for example) with thriving communities that constantly churn out improvements and detection content for significant exploits. Furthermore, bespoke detection content can be created specifically for the environment being defended.

There are two basic elements that enable this detection:

First, we need the evidence/data/visibility. Call it whatever makes sense to you, but the essence is that we can't detect what we can't see. We need raw logs, not just more alerts that aren't tuned to what we're looking for and that lack the context to help us understand them.

Second, we need search logic. This broadly falls into two types. The first involves specific indicators ("We are looking for this particular thing"); examples include searching for known APT toolsets, as well as the real-life leetspeak search described above.
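As a minimal sketch of indicator-driven search logic (the domains and the log format below are hypothetical, invented purely for illustration), consider matching DNS query logs against a list of known-bad domains:

    # Known-bad indicators; these domains are hypothetical placeholders.
    KNOWN_BAD_DOMAINS = {
        "c2.example-bad.net",
        "update.example-apt.org",
    }

    def check_dns_log(path: str) -> list[str]:
        """Flag DNS log lines whose queried domain is a known-bad indicator.

        Assumes a whitespace-delimited log with the queried domain in the
        third field; adapt the parsing to whatever your resolver emits.
        """
        alerts = []
        with open(path, errors="replace") as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 3 and fields[2].rstrip(".").lower() in KNOWN_BAD_DOMAINS:
                    alerts.append(line.strip())
        return alerts

In practice, the indicator set would come from threat-intelligence feeds or incident reports and would need to be refreshed regularly to stay useful.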

The second involves anomalous behavior ("We are looking for weird things"). To look for anomalous behavior, defenders must first know what is normal in the environment. This is much easier said than done, but it's not hopeless. Again, an attacker needs only one bad day to give the defenders an advantage, and attackers tend to be noisy once they gain a foothold and try to learn about the environment. They need to discover, for instance, where key data is kept and which users have privileged access, and they tend to learn this by scanning: brute-forcing, running tools such as nmap or BloodHound, or enumerating LDAP. Then, they strike.

Examples of abnormal activity may include such red flags as:

  • A Secure Shell (SSH) session originating from the laptop of someone in HR,
  • Remote Desktop Protocol (RDP) access to a domain controller from a machine outside the sysadmin group, or
  • Brute-force scanning from an internal desktop machine (a detection sketch for this case follows the list).
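As a sketch of how that last red flag might be caught (the CSV column names and the threshold below are assumptions for illustration, not a prescribed format), counting each internal host's distinct-destination fan-out is a simple way to surface scan-like behavior:

    import csv
    from collections import defaultdict

    # A host contacting an unusually large number of distinct (destination IP,
    # destination port) pairs looks scan-like. The threshold is illustrative;
    # a real deployment would baseline it against normal behavior per host role.
    FANOUT_THRESHOLD = 100

    def find_scanners(conn_csv: str) -> dict[str, int]:
        """Map source IPs to their distinct-destination fan-out, if excessive.

        Assumes a CSV of connection records with src_ip, dst_ip, and dst_port
        columns; Zeek conn.log or NetFlow exports can be reduced to this shape.
        """
        fanout = defaultdict(set)
        with open(conn_csv, newline="") as f:
            for row in csv.DictReader(f):
                fanout[row["src_ip"]].add((row["dst_ip"], row["dst_port"]))
        return {src: len(dsts) for src, dsts in fanout.items()
                if len(dsts) >= FANOUT_THRESHOLD}

A desktop touching a hundred distinct host/port pairs in a short window is exactly the kind of "weird thing" this type of search surfaces.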

It is often said that defense is harder than offense. This is mostly true, but only in the phase where most people focus: "left of boom," before the compromise. After a compromise, however, the tables are turned and the defenders take the advantage. It behooves organizations not only to protect against compromise but, in parallel, to assume compromise and work to detect it.

Fundamentally, this means that organizations should collect artifacts/evidence/logs at intelligent levels of abstraction that scale.

None of this is easy, but without the data, the old refrain applies: "no logs, no crime!"
