This is the case I see with a recent attack vector discovered by researchers at Uptycs. The Uptycs team found that someone had planted a fake proof of concept (PoC) on GitHub for CVE-2023-35829, a high-severity vulnerability in the Linux kernel. The PoC was forked 25 times before it was taken down, meaning others had actively copied it for their own use. The account responsible, ChrisSanders22, has been suspended, but it also hosted a PoC for a different vulnerability on GitHub. Additionally, it appears the ChrisSanders22 account forked the original PoC into another profile, which is still online as of this writing.
The PoC in question attempts to hide its malicious intent by appearing to be a kernel-level process, but it downloads and executes a malicious bash script in the process. A make command run as part of the PoC causes the target to execute the code snippet that creates the malware, a file named kworker. That code creates a backdoor with information-stealing capabilities. It also adds the attacker's SSH keys to the SSH authorized keys file (.ssh/authorized_keys). Persistence is achieved by appending the $HOME/.local/kworker path to $HOME/.bashrc, so the payload runs each time the victim opens an interactive shell.
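To make the mechanics concrete, here is a minimal, harmless sketch of that persistence pattern, not the actual malware. It writes everything into a scratch directory standing in for the victim's $HOME; the payload body and the SSH public key are placeholders, and only the kworker path comes from the Uptycs write-up.

```shell
# Illustrative sketch of the persistence pattern described above.
# All writes go to a throwaway scratch directory, never the real $HOME.
SANDBOX="$(mktemp -d)"                  # stands in for the victim's $HOME
PAYLOAD="$SANDBOX/.local/kworker"       # payload path named in the write-up
PLACEHOLDER_KEY="ssh-ed25519 AAAAexamplekey attacker@example"  # made-up key

mkdir -p "$SANDBOX/.local" "$SANDBOX/.ssh"

# Step 1: the make step drops the payload file (here just a stub script).
printf '#!/bin/sh\n# payload body would go here\n' > "$PAYLOAD"
chmod +x "$PAYLOAD"

# Step 2: persistence -- the payload path is appended to .bashrc, so it
# runs every time the victim opens an interactive shell.
echo "$PAYLOAD" >> "$SANDBOX/.bashrc"

# Step 3: backdoor access -- the attacker's public key is appended to
# authorized_keys, allowing passwordless SSH logins to the victim account.
echo "$PLACEHOLDER_KEY" >> "$SANDBOX/.ssh/authorized_keys"
```

Nothing here is exotic: two appended lines in well-known dotfiles are the whole foothold, which is also why they make easy indicators to check for.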
The use of fake PoCs is not new. The tactic tends to come in waves, like other attack methods, and is on the rise again. Other fake security researcher accounts have been identified on GitHub in the past few months, showing a renewed focus on compromising researcher systems and potentially gaining a foothold in a larger security environment.
There is good news, though: many security researchers follow the best practice of only executing PoC code (or other code like it) in an isolated environment. This is not just to prevent infection of their main work system, but also to let them monitor processes and communications coming from the test machine. If you are not one of these researchers, or the organization you work for/with has not budgeted for this expense, we highly recommend changing that to prevent a larger-scale compromise. Cybersecurity engineers and security researchers are often given much more control over the systems they use, even for regular work, which makes executing any unknown code on this type of system a very bad idea. Still, with the right organizational changes, attack vectors like this can be prevented or at least minimized.
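For anyone who may have run this PoC, a quick triage check can be sketched from the indicators described above. This is my own minimal sketch, not an official detection script: it only knows about the kworker payload path and the .bashrc persistence entry, and it leaves authorized_keys review as a manual step, since a script cannot know which keys are legitimate.

```shell
# Hedged triage sketch: check a home directory for the two indicators
# described in this article (the kworker payload and its .bashrc entry).
# Prints the indicators found, or "clean" if neither is present.
check_kworker_iocs() {
    home_dir="$1"
    found=""
    # Indicator 1: the dropped payload file itself.
    if [ -e "$home_dir/.local/kworker" ]; then
        found="$found payload"
    fi
    # Indicator 2: the persistence line appended to .bashrc.
    if [ -f "$home_dir/.bashrc" ] && grep -q '\.local/kworker' "$home_dir/.bashrc"; then
        found="$found bashrc-persistence"
    fi
    # Note: also review .ssh/authorized_keys manually for unknown keys;
    # that cannot be automated without a known-good baseline.
    echo "${found:-clean}"
}

# Demo against a scratch directory seeded with both indicators.
demo="$(mktemp -d)"
mkdir -p "$demo/.local"
touch "$demo/.local/kworker"
echo "$demo/.local/kworker" >> "$demo/.bashrc"
check_kworker_iocs "$demo"   # reports both indicators
```

In a real incident you would run this against the actual home directory of the account that executed the PoC, and treat any hit as grounds for a full rebuild rather than spot cleanup.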