Friday, 04 February 2022 10:23

First Provable SHA-1 Collision Happened Five Years Ago, Yet SHA-1 Is Still an Option

On February 23rd, 2017, Google published research on its security blog showing that a SHA-1 collision was practical. It proved that the aging cryptographic hash standard was no longer a safe or secure method. Google showed that it could produce two different files that hash to the same SHA-1 value, causing a collision and getting around some of the file-hashing checks in place at the time. The problem is that SHA-1 hashing is still in use today by many tools.
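
You can see the collision for yourself: Google published the two colliding PDFs at shattered.io. A minimal Python sketch, assuming those sample URLs are still live, hashes both files with SHA-1 and SHA-256:

```python
import hashlib
import urllib.request

# The two colliding PDFs from Google's SHAttered release. These URLs
# are assumed to still be live; use local copies if they are not.
URLS = [
    "https://shattered.io/static/shattered-1.pdf",
    "https://shattered.io/static/shattered-2.pdf",
]

digests = []
for url in URLS:
    data = urllib.request.urlopen(url).read()
    digests.append((hashlib.sha1(data).hexdigest(),
                    hashlib.sha256(data).hexdigest()))

(sha1_a, sha256_a), (sha1_b, sha256_b) = digests
print("SHA-1 digests match:  ", sha1_a == sha1_b)      # True  -> collision
print("SHA-256 digests match:", sha256_a == sha256_b)  # False -> different files
```

That one result is what defeats any tool that treats a SHA-1 digest as a unique fingerprint for a file.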

The research Google released five years ago targeted a standard that was already roughly 20 years old, one whose computational complexity led many to believe it would never be a practical target. Google proved otherwise, but with great effort. The original release states that the team performed roughly nine quintillion SHA-1 computations, an estimated 6,500 years of CPU computation and 110 years of GPU computation. That is not how long the attack actually took; those figures are single-processor equivalents, scaled by the number of CPUs and GPUs in the computation cluster. It is still a huge amount of processor time and out of reach for many attack groups, but it is not out of reach for criminal organizations and state-level attackers.
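
Some back-of-the-envelope arithmetic, using only the figures above, shows why the parallel-hardware framing matters; the 1,000-GPU cluster below is a hypothetical for illustration, not anything Google described:

```python
# Rough arithmetic on the published SHAttered totals. The split of the
# ~9.2 quintillion hashes across the two attack phases is simplified here.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

total_sha1_calls = 9.2e18   # ~2^63.1 SHA-1 computations
cpu_years = 6_500           # single-CPU equivalent (first phase)
gpu_years = 110             # single-GPU equivalent (second phase)

# Implied per-CPU throughput if one CPU had done all the work alone.
cpu_rate = total_sha1_calls / (cpu_years * SECONDS_PER_YEAR)
print(f"Implied per-CPU rate: ~{cpu_rate / 1e6:.0f} MH/s")

# A hypothetical 1,000-GPU cluster turns 110 GPU-years into weeks.
days = gpu_years * SECONDS_PER_YEAR / 1_000 / 86_400
print(f"110 GPU-years on 1,000 GPUs: ~{days:.0f} days")
```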

Even some crypto-mining setups could probably be repurposed for this kind of computation, especially since Google released the code used to create the collision 90 days after the publication. Five years is a long time in the computing and security world. GPU and CPU capabilities increase every year, so the time and the number of processors the original attack required have likely dropped. That means the attack may now be within reach of even more threat actors.

So again, why is SHA-1 still in such widespread use? We see it used by several security tools to positively identify a file and add it to an exception list. We see it used as part of encryption routines and even to sign SSL certificates (a quick check for the latter is sketched below). Why would these remain in place if there were a risk of large-scale compromise? The reason, as always, is the level of effort and complexity involved in making the change. Every VPN, every SSL certificate, every file-scanning or antimalware solution using SHA-1 would have to not just reengineer how it functions at a core level, but also push those changes out to existing customers and services.
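
Checking your own exposure is not hard. As a minimal sketch, here is one way to see which hash algorithm signed a server's certificate, using Python's standard ssl module and the third-party cryptography package; the hostname is a placeholder, and this inspects only the leaf certificate, not the full chain:

```python
import ssl

from cryptography import x509  # third-party 'cryptography' package


def signature_hash(hostname: str, port: int = 443) -> str:
    """Return the name of the hash algorithm that signed the leaf cert."""
    pem = ssl.get_server_certificate((hostname, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    algo = cert.signature_hash_algorithm
    return algo.name if algo else "none"  # e.g. Ed25519 certs report none


host = "example.com"  # placeholder; substitute a host you care about
name = signature_hash(host)
verdict = "WEAK - replace it" if name in ("sha1", "md5") else "ok"
print(f"{host}: certificate signed with {name} ({verdict})")
```

Public CAs stopped issuing SHA-1 certificates years ago, but internal CAs and long-lived appliance certificates are exactly where a check like this tends to turn something up.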

These types of core changes require time and money. If the time and money needed outweigh the perceived risk, no change gets made. Right now, vendors have decided that the chance of this type of attack being used successfully is small enough that they can put off the change until later, so SHA-1 stays around as a widely available option. Meanwhile, the computing power and time needed to produce a collision keep dropping, making it more likely each year that this becomes part of a real attack. It is the retirement of MD5 all over again.

It also represents a fundamental flaw in how security operates. Leaving a significant vulnerability in play because the cost of removing it is high is not the right way to think about security. The "bad guys" are always evolving their techniques and tactics. I would not be surprised to see this come into play in the next year, considering how quickly the workforce moved to remote work. Being able to subvert an encrypted connection via a collision, or bypass antimalware controls between a remote worker and critical infrastructure, would be a nice addition to an attacker's toolkit.

For now, the internet may be safe from this style of attack simply because there are easier ways to compromise an environment (user account compromise, for one). However, that does not mean it will not be used as part of a more advanced attack pattern in the near future.
