Prediction of (Cyber)warfare — Security Teams Under Attack!

Written by jeremy.trinka | Published 2018/06/30
Tech Story Tags: security | cybersecurity | malware | incident-response | technology


Photo by Henry Hustava on Unsplash

It happens more often than I would like to admit, but I do catch myself daydreaming. Typically it is of a semi-cyberpunk landscape, not too far into the future — maybe five to ten years down the road.

It is always of a transitional time, where those hot tech-terms reverberating in the halls of the Valley have finally been assimilated into the mainstream. A place where someone finally found a use for VR headsets and chatbots. Cryptocurrency has overtaken fiat with the release of ‘Bitcoin 2: Blockchain Boogaloo’. Fluorescent bomber jackets and camouflage cargo pants are no longer faux pas in the workplace.

As a result, I can’t help but speculate on future threats, attacks, and breaches. What will be the big headlines of this future world? Which compromises will have the most impact? This article is just that — musings on recent events, and having some fun conjuring up what big headlines might grace our news feeds.

The thought of truly malicious actors targeting security teams (pentesters, red-teamers, bug hunters) is an interesting one to speculate on. We have all seen at least one episode of Mr. Robot by now, right? A security team, contracted to better secure a client’s infrastructure, getting compromised? A step further, compromising their clients as a result? With well-known breaches such as Target, Wendy’s, or more recently Ticketmaster occurring as a result of third-party compromise, is the idea so far-fetched?

Come with me down the rabbit hole. Let’s pretend to be conspiracy theorists for just a few minutes, and see what kind of oddities we might conjure up.

It’ll be fun, I promise.

Operating Systems as a Target

Security teams have an OS distro of choice which contains the tools necessary to conduct their testing. Whether it be Kali Linux, BlackArch, PwnPi, BackBox, or one of myriad others, the purpose is to expedite setup, standardize tooling, and eliminate redundancy. Most likely the OS is open source and available to the public, unless the organization supplies modified templates to its employees.

Could the OS itself be a target to attack? Let’s start with a recap on some past events that would support the idea.

In February of 2016, the group behind the popular distro Linux Mint determined that the ISO file made available on their website was compromised. Based on their analysis, the ISO had been modified to contain a backdoor, and the actor had hacked their website to point to the modified download file. This breach always fascinated me, as it showed the attacker’s desire to cut to the chase and compromise the OS at the source. An attack on the supply chain.

Okay, this example was from a while ago. Let’s talk about more recent events.

On 6/29/18, it was announced that the popular Gentoo Linux distro’s GitHub mirror repository was hacked, and portions of the code were modified to include malware. Altering the code base of an OS could have devastating effects on any information processed within it. Further, the lower level you go, the harder the tampering is to detect.

Typically the term “rootkit” describes malware running at such a low level that it is difficult to identify or remove without wiping the box altogether. In this case, the malware is so firmly rooted in the OS code base that even a fresh format to the same OS wouldn’t rid you of the infection.

What if a popular security testing distro were to become compromised, similarly to the Linux Mint or Gentoo hack, and went unnoticed? A malicious actor could be at the helm of a red team exercise with an unsuspecting tester doing the hard work. The results would be catastrophic.

Backdooring Legitimate Applications Affects Everyone

Operating systems aside, hiding tools to spy on individuals within legitimate applications is becoming more prevalent. This month, the ACLU discussed malicious software updates and the tainting of legitimate software for intelligence purposes.

Much of the article is aimed at software developers, warning them to be wary of law enforcement requests for modifications that would spy on user activity. However, the idea of backdooring software is not a new one, and it may not even require access to the underlying code.

If you have any experience performing offensive testing, “Backdoor Factory”, or simply BDF, is a tool you may have used to get this done. The tool places supplied shellcode into the code caves of an executable without compromising its usability; the code then executes at run time with the end user none the wiser. The implanted files lend themselves nicely to watering hole or social engineering attacks.
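BDF itself does considerably more than this, but its first step, locating code caves (runs of padding bytes large enough to hold shellcode), can be sketched in a few lines of Python. This is a hypothetical illustration, not BDF’s actual code:

```python
import re

def find_code_caves(data: bytes, min_size: int = 64):
    """Return (offset, length) pairs for runs of null bytes at least
    min_size bytes long: candidate 'code caves' in a binary."""
    pattern = rb"\x00{%d,}" % min_size
    return [(m.start(), m.end() - m.start()) for m in re.finditer(pattern, data)]

# A toy 'binary': 32 bytes of code, a 100-byte cave, 16 more bytes of code
blob = b"\x90" * 32 + b"\x00" * 100 + b"\xc3" * 16
print(find_code_caves(blob))  # [(32, 100)]
```

A real implementation would parse the executable’s section headers and patch the entry point, but the underlying idea is just this: find slack space, then fill it.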

From here you may ask, “there exists a way to help prevent this, right?” Well, yes. Two methods immediately come to mind — code signing and validating checksums after download. The unfortunate reality is that they may not be as effective as they seem.
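Checksum validation, at least, is cheap to perform. A minimal sketch in Python (the file name and its contents here are stand-ins for illustration):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with a stand-in 'installer'; in practice, compare against the
# hash the vendor publishes, ideally fetched over a separate channel.
with open("installer.bin", "wb") as f:
    f.write(b"fake installer contents")

published = hashlib.sha256(b"fake installer contents").hexdigest()
if sha256_of("installer.bin") != published:
    raise SystemExit("checksum mismatch: do not run this installer")
print("checksum OK")
```

Note the caveat baked into the comment: a hash fetched from the same page as the download only proves the file survived transit, not that the vendor’s site is intact.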

On the topic of code signing, take a look at this post on r/netsec from earlier this month. If you don’t frequent FileZilla’s support forums, there was heated discussion around their setup installer showing signs of carrying malware, with a trail of posts furthering community suspicion. The fact that non-signed processes were being called was pointed out specifically.

My aim is not to sound accusatory towards FileZilla; I will leave that to everyone else. Their admin claims the installer’s behavior is normal. At a minimum, however, it demonstrates that software thought to be legitimate does not always call signed processes, lowering the overall expectation and often undermining this mitigation.

What if malware actually was hidden in the installer after all? Let’s imagine WinSCP, PSExec (or any Sysinternals tool), even PuTTY in place of FileZilla. How about actual malicious code injected into a Metasploit module? I would be hard pressed to find anyone in security who hasn’t used all of the above at least once.

Looking back to CCleaner’s compromise last year, APT groups have signed their modified binaries with legitimate certs to compromise the supply chain and fly under the radar. Attackers will go to great lengths to remain undetected. If the executable is compromised and uploaded to a hacked vendor’s site, who is to say the checksum isn’t compromised as well? In the Linux Mint hack, the attackers had the website owned. What was to stop them from changing the published hash to validate the new ISO?

Poisoning the Well and Abusing Inexperience

Let’s re-examine the focus of this post — security teams.

Few outside groups have the level of access to, and trust within, an organization’s network that a third-party security team does, and more often than not the tools used are open source. In most cases this is fantastic, as the community as a whole contributes greatly to their progress. It opens the role of code review to everyone by providing transparency, as the source is usually public via GitHub or GitLab.

Code, tools, scripts, exploits, and research are not always published in a way that permits contribution, though. How else have testers gotten their tools? They can be sourced from many places: Twitter, blogs, IRC channels, forums, Telegram, Discord, or really any medium of communication.

When tools are brought into an organization during a test, how thoroughly are they vetted? Probably pretty well at the most reputable firms. What about the others? I am reminded of the old acronym RTFM (and its variant RTFC), and why they originated in the first place.

Let’s shift gears for a moment. Those who have immersed themselves into hacker culture have undoubtedly come across proof-of-concept code published with a stealthy backdoor to scare the living daylights out of script-kiddies. These n00bs were usually looking to snag a quick, free keylogger to spy on a friend/significant other/family member, or even break into someone’s Facebook account, without examining the contents of the code.

Hacker culture and the cybersecurity industry are tightly intertwined; the digital equivalent of yin and yang. They coexist in a chaotic kind of harmony, frequently lending tools and proofs-of-concept to one another.

With code possibly coming from any of a multitude of sources, is it so crazy to think a backdoor could weasel its way into a less-experienced pentester’s toolkit? Someone fresh out of school, and anxious to get an easy exploit off in a network? Could this theoretical vetting process be bypassed with a blind copy-paste?

I would argue, it happens in every other job role. Ask Jan from accounting about that time payroll got screwed up because an Excel formula she found online (and didn’t fully understand) skewed the numbers. It happens.

Taking negligence out of the picture entirely: a technique dubbed “typosquatting” was demonstrated by a researcher back in 2016. The test amounted to using common typos to push simulated malicious packages to hosts via the Python pip utility. It was a fascinating and creative approach to distributing illegitimate software to unsuspecting users.

In response to the experiment, I will say this —

“Let s/he who hath never typoed cast the first stone!”
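Joking aside, the defensive counterpart to the experiment is easy to prototype: flag requested package names that are suspiciously close to, but not exactly, a well-known package. A rough sketch (the “popular” list here is an arbitrary sample, not an official registry):

```python
import difflib

POPULAR_PACKAGES = ["requests", "numpy", "django", "urllib3", "setuptools"]

def typo_suspects(name: str, cutoff: float = 0.8):
    """Return popular packages this name is suspiciously similar to,
    or an empty list if the name is an exact (legitimate) match."""
    if name in POPULAR_PACKAGES:
        return []
    return difflib.get_close_matches(name, POPULAR_PACKAGES, n=1, cutoff=cutoff)

print(typo_suspects("reqeusts"))  # ['requests']
print(typo_suspects("requests"))  # []
```

Package indexes now run similar heuristics server-side, but a check like this in a build pipeline costs almost nothing.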

Code Flaws in Post-Exploitation Frameworks

Popular post-exploitation frameworks have contained their own sets of issues in the past. In some cases, these were remote code execution flaws which could lead to the compromise of the command-and-control (C2) infrastructure. Code flaws happen to everyone, and even the most highly skilled and well-recognized are not infallible.

In the wise words of Daft Punk,

“We are human, after all.”

When it comes to post-exploitation tools, they can be thought of as the tester’s ‘scalpel’ used to cut open the network or host after that first puncture. Just as a scalpel is nothing more than an exceedingly sharp piece of metal, post-exploitation frameworks are ultimately just software designed to meet the specific needs of security teams. Scalpels may contain imperfections and need to be sanitized before surgery, and this kind of software is no different.

In this instance with PowerShell Empire, an RCE existed which could compromise a C2 setup. harmj0y went to great lengths to explain the issue, pinpointed the offending code, and thanked the individuals who reported it. The level of transparency in the post is truly admirable.

There is no denying that flaws in security software stand out from those in typical software — they receive extreme scrutiny from cybersecurity folks and the general technology crowd alike. And not just distributed software: any security-related slip-up at a security-focused organization can be a death sentence — ask Hacking Team!

The unfortunate truth is that a focus on meeting cybersecurity needs does not change the fact that software is software, regardless of purpose or application. It is therefore subject to the same pitfalls.

PowerShell Empire is not alone. Cobalt Strike, a commonly used commercial product for red team infrastructure, has been subject to its own coding flaws in the past. This blog post describes a directory traversal attack against its team server software, and even notes that it was being actively exploited at the time.
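The class of bug is simple to reason about: any server that maps client-supplied names onto its filesystem needs a guard along these lines. This is a generic sketch of the mitigation, not Cobalt Strike’s actual code:

```python
import os

def safe_join(base: str, requested: str) -> str:
    """Resolve a client-supplied relative path under base, refusing
    any result that escapes base via '..' segments."""
    base = os.path.abspath(base)
    target = os.path.abspath(os.path.join(base, requested))
    if not target.startswith(base + os.sep):
        raise ValueError("path traversal attempt: %r" % requested)
    return target

print(safe_join("/srv/teamserver/downloads", "report.txt"))
try:
    safe_join("/srv/teamserver/downloads", "../../../etc/passwd")
except ValueError as err:
    print("blocked:", err)
```

Normalizing first and then comparing prefixes is the key step; checking the raw string for “..” alone misses encoded and nested variants.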

In regards to exposure, here is an article written by Tenable about identifying Empire listeners via Shodan, available for all the Internet to see. Granted, the listeners discussed are probably not serving a legitimate purpose, but I would still bet a couple of entry-level red-teamers are behind at least a handful.

With that in mind, I certainly hope that every team using these applications understands the impact of exposing them, unrestricted, to the wild, and is practicing what they preach by limiting exposure of their infrastructure and applying proper patch management procedures.

I sincerely pray it happens before an APT group takes notice and recruits its clients into a botnet, at least.

Wrapping up

In retrospect, this article reads a lot like a spooky campfire story for cybersecurity professionals. Something to fire up your OCD, and cause an after-hours check-in to make sure that server is properly configured. Maybe trigger the need to grep through some newly acquired scripts for something that looks like suspicious Base64, IP addresses, or domain names.
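That grep can even be automated. A crude first-pass triage script (the patterns and thresholds are arbitrary starting points, not a substitute for real review) might look like:

```python
import re

BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def suspicious_strings(text: str) -> dict:
    """Flag long Base64-looking blobs and hard-coded IPv4 addresses."""
    return {"base64": BASE64_RUN.findall(text), "ips": IPV4.findall(text)}

# A made-up snippet of a downloaded script
sample = 'payload = "' + "QUFB" * 12 + '"\nc2_host = "203.0.113.7"'
hits = suspicious_strings(sample)
print(hits["ips"])          # ['203.0.113.7']
print(len(hits["base64"]))  # 1
```

A hit is not proof of malice (plenty of legitimate scripts embed encoded blobs), but it tells you exactly where to start reading.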

At the very least, hopefully it is a reminder to always be on guard, and that everyone is a target. No exceptions.

Looking at recent compromises, the trend for advanced threats will likely lean towards supply chain compromise and leveraging third parties to enter a network. As application whitelisting, next-generation endpoint detection software, and other network defense systems become the norm, traditional means of entering networks may become more difficult. This will not stop advanced threats, but rather push them towards new (and arguably more sophisticated) means of getting to data.

Furthermore, implicit trust in any third party could be a fateful mistake, and continue clogging our headlines a couple of years down the road. Performing risk analysis before opening an interconnection between your data and any third party is a must. Every connection between parties opens new avenues which could be exploited. We need to remember that due diligence is key — again, no exceptions.

With the emergence of new technologies happening at an exponentially higher rate than any time in the past, new threats will undoubtedly surface, and new stories will cycle into prime-time. It kind of makes you wonder — what other wild theories can we come up with?

Thanks for reading! My goal is to help individuals and organizations tackle complex cybersecurity challenges, and bridge policy into operations. Comments or critiques? Reach me on LinkedIn, Twitter, email — jeremy.trinka[at]gmail[dot]com, or reply below.


Published by HackerNoon on 2018/06/30