There is (and always has been) a debate about the ethics and impact of releasing Proof-of-Concept (PoC) exploits for identified vulnerabilities and open-source tools related to red-teaming. The debate, which has really devolved into an argument, is complex, nuanced, and (in full honesty) has multiple contexts that can be applied to it. However, it has become exceptionally binary. We now have the “Pro” side and the “Anti” side… The fact that this is where the line in the sand is being drawn is, well, almost ignorant.
Let’s look at the situation as it stands now. Recently a vulnerability was found in ConnectWise ScreenConnect, on the server side. It was an authentication bypass (well, sort of). The method of exploitation was extremely simple. ConnectWise pushed out patches for on-prem systems and mitigated the vulnerability in the cloud service while they rolled the patch out internally.
While this was happening, security researchers (as they are wont to do) were looking into the vulnerability and how to exploit it. This is nothing new, and this type of security research is a good thing in the end. We saw a few companies indicate that they had a working PoC for exploiting one of the vulnerabilities. Most were withholding details of the exploit due to its simplicity and a desire to allow people to patch and/or mitigate the flaw. Most… there was one that simply released their PoC for exploiting this very simple flaw and pushed it out to the world.
Now, the next part gets complicated. Not long after this release, the vulnerability began to be exploited in the wild. As correlation does not equal causation, I will not say that the exploitation was in any way related to the PoC release, but I cannot rule that possibility out either. It is this piece that reignited an argument that has been going on for years.
Oh, and, before we go too much further, I am being intentionally vague about the ScreenConnect vulnerability because I am writing a more detailed article about that after a conversation with John Hammond from Huntress (not to be confused with Richard Hammond from The Grand Tour).
Ok, so we are all caught up on the timeline. Let’s take a look at some items that are “givens”. A given is something that is either demonstrably factual or axiomatically factual. Both are useful in understanding a landscape; in this case, the threat landscape as it relates to the target environment. Oh, and some definitions too!
1 – Threat Actors (I will call them Threat Thespians moving forward). For the purposes of this article Threat Thespians are the groups that carry out an attack regardless of the motivation of the attack.
2 – Initial Access Brokers (IABs). These are bad-guy security researchers who look for ways to exploit either identified vulnerabilities or actual 0-days. They may also run phishing campaigns to harvest credentials. Both items (exploits or credential dumps) are sold on markets for use by the Threat Thespians.
3 – PoC, Proof of Concept. An example of how to exploit a vulnerability, without a malicious payload. These are often published by security researchers for use in Red-Teaming, or to help build detection rules to assist Blue-Teams.
4 – OST, Open-Source Tools. Also typically released by security researchers, these tools can assist both Blue and Red-Teams, if used properly.
Now for some “Given” items:
1 – Threat Thespians are clever. They are always looking for ways to make money and, through what look shockingly like “combined arms” efforts, are always finding new ways to make a buck off some flaw (even flaws in human nature). This term does not apply to Nation-State Thespians.
2 – IABs are more of a threat than most realize. IABs are not the people who are going to dump ransomware into your environment. They are the people who are going to find a way to get into your environment and sell it to the ransomware groups. They typically run in stealth mode: they want to identify the attack path and sell it without being caught, because once they are caught, their profitable entry vector loses value.
3 – Both Threat Thespians and IABs work on a principle of Least Effort. You might hear this said as “hackers are lazy”, but this is not true. They are very industrious, but they also want to get the most money for their efforts.
4 – Security Researchers are both awesome and necessary. There is not really another way to put this. The work they do is essential to furthering cybersecurity progress.
5 – All software and hardware products have vulnerabilities. They all do. Some are going to be harder to find than others, but they are all there.
6 – Not all Developers/Vendors care about security in the same way or at the same level. This axiomatic fact is best demonstrated with Log4J. Some vendors knew about the flaw and did little to prevent exploitation; others had mitigations already in place before Log4Shell was a thing.
7 – Very, very, very (yes, three verys) few organizations can patch overnight, even with a CVSS 10 vulnerability on the KEV (CISA’s Known Exploited Vulnerabilities catalog). It can be complicated to get things patched due to downtime and significant financial impact. The IT and security teams might want to patch, and the patch might be queued up, but I can tell you from experience, you might not be given an opportunity to push a patch out for a couple of weeks, and then only a 2-hour window to get it all done.
Ok, so with these items (our context) let’s dive into the argument.
Threat Thespians are never going to stop what they are doing. Cybercrime is a money printer; why would they stop? Some cybercrime organizations include their own registered, “legitimate” development operations, IAB groups, cybercrime intelligence groups, and more. They are monsters. They are not unbeatable; they are just persistent and always on the hunt (like sharks). IABs are also not going to stop, as their livelihood is connected to finding the next entry point and selling it in a marketplace. This feeds into the ecosystem of cybercrime and allows it to not only survive but thrive. But these two items are not the only things that feed the cybercrime ecosystem. On the defender side we have two major factors: software and hardware always have flaws and vulnerabilities, and far too many companies operate on the principle of least effort when it comes to cybersecurity. So, we have at least four major factors feeding into cybercrime and creating a target-rich environment.
It is into this that we inject the security researcher. Security researchers will look at a potential issue and work on it (often without sleep) until they find a way in. This can result in the identification of a vulnerability, which is disclosed to the vendor/manufacturer. This disclosure is often done in a responsible manner. By “responsible” I mean they contact the vendor/manufacturer, let them know about the flaw, give them time to confirm it, develop a patch, alert their customers about the flaw and the availability of the patch, and, after all of this is done, coordinate a public release.
This last part is important, and this was not always how it went. I have known many researchers who reached out, with nothing but good intentions, only to end up talking to a lawyer about possible lawsuits. Other times the company just ignored the communication and sent out threatening letters if the researcher talked about it. Where we are today, with many companies willing to talk about vulnerabilities and flaws as well as offer bug bounties, is night and day compared to the way it was.
Again, this describes most, if not the vast majority, of security researchers. There are some who do not follow this, though, and will push out information on an identified vulnerability, along with a PoC exploit, without any responsible disclosure. There are also some who see the public disclosure of a vulnerability and immediately work on and publish a proof of concept before there is a chance to remediate the vulnerability.
This type of behavior could cause companies to shift back to the bad old days of lawyering up first and/or ignoring communication. After all, why offer bug bounties, a responsible disclosure path, or talk to any researchers if someone is just going to push it out anyway? Remember, to a business, security is not a priority; they operate on least effort. The cost-benefit scale will tip away from working with researchers, and it will just be more cost effective to litigate. If they are already not doing the right thing to save on cost, do you really think they won’t go back to pretending the problem does not exist? This is a nuance that many miss in the conversation, but it is a massively important one.
Now let’s add another bit of context and nuance that is often missed. Threat Thespians and IABs keep track of vulnerability disclosures and changes in regulation. This is an axiomatic and demonstrable fact, based on observation of attack frequency and communications seized during takedown events. Some of their intel is good enough to know the budget and patching cycles of their preferred target types (or actual targets). With this information and OSINT performed on a target, they know their window of opportunity to attack an organization if there is a regulatory change or a vulnerability disclosed in a product the target(s) have. This tells them how long they have to identify and develop an exploit for a given vulnerability, or to exploit weak security that might no longer be available after a “regulatory forced” mitigation.
Into this let’s inject the less-than-understanding researcher. They see a vulnerability, and in their understandable need to know and build, they develop a PoC exploit. They release it within days of the disclosure. Threat Thespians and IABs are known to follow many researchers, and they see it. This small act has potentially saved the Threat Thespians and/or IABs a lot of time (which is money). The companies have not had time to patch and are open to all sorts of attacks.
This is the world we live in.
Now… before those of you who think it is ok to release as you like get out your pitchforks: none of what I am saying absolves vendors/manufacturers and companies of their, at times, grossly negligent failure to employ proper preventative security practices. They own a huge portion of the current threat landscape and of why we have such a target-rich environment. Of that there is little question in my mind.
However, when one of these companies is popped, they are not the ones who end up getting fucked. It is their customers and clients. Normal people who have very little knowledge or understanding of what is happening in the background. They just know that their information is now publicly available on some site; their social security number, credit card numbers, and/or other personal information was stolen. Worse, they may find out their bank account was emptied, and they now have to argue not just with the bank, but perhaps with utilities, landlords, and mortgage companies. Still worse than that, what if the organization popped was a hospital with a loved one in its care, and they are affected?
I am all for security research, and I do believe that by keeping companies honest through continual prodding we can move the needle. I am not for releasing PoCs or OSTs without some thought given to the larger potential impact, not to companies, but to the people at the end who may be hurt by it.
Just my .57 cents on the topic, but I do hope that by building more context and nuance the community can come to understand some of the responsibility and ownership it has.