Asset management –
The first item on the list for ensuring a secure workplace and infrastructure is knowing what you have. If you do not have a good asset inventory (hardware and software), you are already operating at a deficit. After all, how can you protect something you do not even know exists? There are several solutions available for this, each with its own pros and cons, and picking one depends on what you want to get out of it. If all you are looking for is an asset list, there are compact options that leverage API calls to your cloud services (including Microsoft Azure AD, Google Workspace, AWS, etc.). These solutions report on endpoints as they exist in those platforms and can provide a wide range of data, including where you might have gaps in toolset or access coverage. Although the API-based solutions are good at generating a list of devices that have access to cloud platforms, they often fall short of telling you what is installed on those devices; you will need a second tool to perform that function. If you are looking to cut down on the number of tools you must monitor, look for a hybrid solution. These often combine an installed agent with API calls so you get a better picture of your organization: the agent-based side allows for a solid software inventory, while the API calls give you information about your cloud services (including whether a device is checking into your cloud-based security tools).
If you also have a castle to worry about, make sure your choice of asset management software or service includes an option to collect from on-prem systems. After all, you still need to know what you have inside the house so you can maintain that as well. You would be surprised how many times I have encountered a group of vulnerable devices only to be told, “Oh, those should have been decommissioned a while ago.” The fact that they were left in the environment, not kept up to date, and no longer being checked on is a problem a good asset inventory system could have prevented.
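The coverage-gap idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the record shapes are assumptions standing in for what your directory service and endpoint tool would actually return, and a real inventory would be fed by those APIs.

```python
# Minimal sketch: merge device lists from two sources into one inventory
# and surface coverage gaps. The dict shapes here are illustrative
# assumptions, not a real product's data model.

def merge_inventory(directory_devices, edr_devices):
    """Combine directory and endpoint-tool device lists, keyed by hostname,
    tracking which source knows about each host."""
    inventory = {}
    for d in directory_devices:
        inventory[d["hostname"]] = {"in_directory": True, "in_edr": False}
    for d in edr_devices:
        entry = inventory.setdefault(
            d["hostname"], {"in_directory": False, "in_edr": False})
        entry["in_edr"] = True
    return inventory

def coverage_gaps(inventory):
    """Return hosts that one tool knows about but the other does not --
    the 'should have been decommissioned' candidates and unmanaged devices."""
    return [h for h, e in inventory.items()
            if not (e["in_directory"] and e["in_edr"])]
```

Even a toy version like this makes the point: the interesting output is not the list itself but the disagreement between sources.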
Patching and updating -
This is one you would think is a no-brainer. Sadly, it is not. While most organizations have some level of patching, it is typically a minimal effort (Microsoft's Patch Tuesday). The reality of patching is that software is never in a static state. Bugs, flaws, and security issues are found all the time, and software developers are almost always at work fixing one thing or another. This means patches are available just about every day, and the longer you wait to push them, the longer your assets are exposed. By updating as often as possible, you continue to take away avenues of attack on your organization. Planning these update windows is important and does take some rethinking about risk. The perception that attackers go after the pot of gold right out of the gate is an outdated one. Typical attack chains start with an exposed user endpoint: someone takes a device (laptop, phone, tablet) outside the protection of the castle walls, an attacker chances on or targets it, and compromises the device and/or the user's credentials. Patching user endpoints should become part of daily life (part of the culture of security); user endpoints, mobile or otherwise, should be patched daily. Your patching solution should also be able to patch more than the operating system. Third-party apps are very commonly targeted as part of an attack simply because they are left unpatched. Adobe products are the bane of many security teams because Adobe does not make it easy to keep them up to date; you end up jumping through flaming hoops just to build a deployable package for the latest version of a product. Attackers know this: between being complicated to keep updated and having some significant flaws, Adobe still ranks high on attackers' favorite lists.
On the server side of things, your patching should still be as frequent as possible. Maybe not daily (although that would be ideal), but as often as you can without significant business impact. Keep the software on servers limited to only the applications needed for each server to perform its function; a SQL server containing PCI data with MS Office, Flash, and old versions of Adobe Reader installed is a bad thing. Servers should also run the latest operating system version that the installed application can support. Where a vendor is reluctant to move to newer operating systems, or keeps just inside the end-of-support/end-of-life dates, you will still need to keep up with all available patches, but you will want to add some extra security controls (and possibly look for another vendor or solution).
In both cases, your patching solution should cover operating system patches for both on-prem and off-site endpoints. It should be flexible enough to maintain different patching cycles to accommodate any change control processes you have in place, and it should be able to reach all endpoints without the need to punch holes in your firewall if you have one.
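The different cadences described above (daily for user endpoints, as-often-as-possible for servers) boil down to a simple policy check. The sketch below assumes illustrative asset records and cadence values; real patch management tools track this for you, but the logic is the same.

```python
from datetime import date, timedelta

# Assumed cadence policy: user endpoints patched daily, servers weekly.
# These numbers and the asset record fields are illustrative, not a
# recommendation or a real tool's schema.
CADENCE_DAYS = {"endpoint": 1, "server": 7}

def overdue_assets(assets, today):
    """Return names of assets whose last patch date exceeds the
    maximum age allowed for their class."""
    overdue = []
    for a in assets:
        max_age = timedelta(days=CADENCE_DAYS[a["class"]])
        if today - a["last_patched"] > max_age:
            overdue.append(a["name"])
    return overdue
```

Running a check like this on your inventory each morning turns "patch often" from an aspiration into a measurable list of exceptions.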
Vulnerability/Risk Management -
Risk management can be one of the most misunderstood functions in a secure environment. To far too many, it is simply scanning and patching. Executives and business owners fall into a pattern of number chasing: they look only at the total number of vulnerabilities in an environment and measure progress by that figure. To say this is backwards is a bit of an understatement. You are probably never going to be vulnerability free; security researchers and threat actors will ensure that. There needs to be a fundamental shift in how risks and vulnerabilities are treated. Simply reporting on and chasing counts of critical and high vulnerabilities is not enough. You also need to set your scans in cadence with your patching and updating policies: if you are patching daily, scan daily, and so on. Your vulnerability scanner should support both agent-based and appliance-based scans. If you have a cloud presence (AWS, GCP, Azure), the solution needs to support comprehensive scanning of those areas (including containers if they are in use). You want as much visibility into your environment as you can get. You should even scan assets that you might not directly own but are responsible for (hosted websites, etc.).
Once your scans are complete, identify any items that are or can be addressed by your patching tools; those become part of your automated remediation cycle. Everything else you need to plan out. Your focus should be on risk, not just on high and critical severities. If you identify a single system, or a few systems, with a lot of remotely exploitable vulnerabilities, even if they are user endpoints, those are your immediate remediation targets. The next focus is groups of systems where a single change or implementation can remove many vulnerabilities regardless of criticality (threat actors go after medium and low vulnerabilities all the time). After that, you are probably left with more individual action items. Plan these out from the top down, looking for vulnerabilities that are easy to exploit remotely or that already have exploit frameworks available. Just about every vulnerability scanner can give you this information in its dashboards and reports. Get used to exporting your scan data and working it in a spreadsheet, or importing it into a database if you are more comfortable with that. There are also asset management and ticketing systems that automatically ingest this information so it can be tracked, if your organization is complex enough to warrant that.
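The prioritization described above can be sketched as a simple re-scoring pass over exported scan data. The field names and the weighting are assumptions for illustration; map them onto whatever your scanner's export actually provides, and tune the weights to your own risk appetite.

```python
# Illustrative sketch: re-rank findings so that remote exploitability and
# public exploit code outweigh the scanner's raw severity label.
# Record fields (cvss, remote, exploit_available) are assumed, not a
# specific scanner's schema; the +3 weights are arbitrary examples.

def risk_score(finding):
    score = finding["cvss"]
    if finding.get("remote"):
        score += 3        # remotely exploitable: bump priority
    if finding.get("exploit_available"):
        score += 3        # known exploit framework: bump again
    return score

def prioritize(findings):
    """Return findings sorted by contextual risk, highest first."""
    return sorted(findings, key=risk_score, reverse=True)
```

Note how a medium-severity but remotely exploitable finding with public exploit code can outrank a critical one that requires local access, which is exactly the shift away from number chasing.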
Access and Account management –
As we mentioned above, a security incident often starts with an attacker popping a user endpoint and compromising the user's account. This lets them dwell in an environment and get the lay of the land while looking for opportunities to pivot to a more important system or compromise a more privileged account. Service and application accounts are favorite targets, as they often “require” admin-level permissions to function. API accounts are another target, as far too often API connections and access are not logged accurately enough to really identify who or what connected or made changes.
Depending on how your organization is set up, this can be one of the larger expenses. Leveraging a Single Sign-On (SSO) or MFA service when you do not have one baked into your account management system can get costly. Where you have options built in, such as Microsoft 365 or Google Workspace, you can have an easier time managing access. You can also get fancy here, adding device and user certificates to validate that an access attempt is not only from who it claims to be but also from a trusted device (useful if you are not using a full domain structure). Multi-factor authentication is a must and should be enforced for all accounts regardless of access privileges. If you are using a cloud service that does not integrate with your access control service, you should enable it there as well. Basically, turn on and require multi-factor authentication everywhere you can.
This leads to access privilege: user and even admin accounts should only have access to what they need for their day-to-day activities. That access should also be logged and monitored for out-of-pattern items or attempts to access things the account is not supposed to. Any out-of-pattern or elevated access (even for IT operations and security teams) should be planned, documented, and approved by more than one person. Enforcing this is difficult, but once you do, it becomes the norm and part of the culture, making it much easier to maintain. Where many organizations have problems is with vendors: they do not like being told they have to ask permission to work on their own software or devices. They are also a huge attack vector. Not all that long ago, Oracle's customer support list for their point-of-sale software was compromised; because many of their customers were not forcing approval on support connections, this led to a few breaches. The same thing happened to Visual One, with the same results. Remember, even if a vendor account or vendor access is compromised, you are the one left holding the bag if there is an incident. Ultimately, access controls all fall into your wheelhouse, so you need access control policies, software, services, and approval processes in place to help reduce the risk of an incident.
This applies to all your cloud services as well (Salesforce, Slack, etc.). It might be easier to just give someone admin permissions, but it represents a big risk in terms of compromise and potential data loss.
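The MFA-everywhere and least-privilege points above can be turned into a periodic audit. This is a toy sketch under assumed record shapes; in practice these fields would come from your identity provider's API or a user export, and the "justified" flag from your approval process.

```python
# Hypothetical audit sketch: flag accounts that lack MFA or that hold
# admin rights without a documented justification. Account fields
# (mfa_enabled, is_admin, admin_justified) are illustrative assumptions.

def audit_accounts(accounts):
    """Return (user, issue) pairs for every policy violation found."""
    issues = []
    for a in accounts:
        if not a.get("mfa_enabled"):
            issues.append((a["user"], "mfa_disabled"))
        if a.get("is_admin") and not a.get("admin_justified"):
            issues.append((a["user"], "unjustified_admin"))
    return issues
```

Run against each cloud service separately, this also catches the "it was easier to just make them admin" accounts before an attacker does.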
Endpoint Protection –
After you identify the endpoints, patch them, and scan them for vulnerabilities, you also need to make sure they are protected from malware and intrusion. This is probably the easiest part of the process to check off, but it can be the one that is overlooked in terms of efficacy. In many environments, simply having an anti-malware solution is considered enough. However, just having something often leaves massive security holes. Tracking the evolution of malware and the sophistication of attacks shows a massive increase in the obfuscation techniques used. This means traditional anti-malware solutions are not going to be enough. You need something that can follow a process all the way through the execution stages (at rest, pre-execution, execution, etc.). You also need something that can monitor the behavior of those processes and binaries to detect and block activity that is out of pattern or that fits an attack pattern (like ransomware). It should be able to identify and block drive-by attacks hosted on websites and scripts that attempt to execute, and it should have some functionality to block potentially malicious traffic when detected. Most importantly, it should be able to isolate the endpoint from all organization resources regardless of where it sits (BYOD, remote worker, inside an office) while allowing forensic data to be collected about the potential incident. It must do all of this without slowing down the device it is on or preventing day-to-day authorized activity. You should also make sure your anti-malware tool can be installed on as many types of devices as you have (Windows, macOS, Android, iOS, virtual servers, containers, etc.).
If this all sounds like a tall order, it is. The good news is that there are several companies out there that can fit this bill. You will need to manage them, but you should be doing that anyway.
Monitor your cloud assets and services -
Just as you should be securing your endpoints, you also need to secure your cloud services. Zoom, Slack, Salesforce, etc. are all targets that can give attackers access into your environment. You need to keep an eye on who is logging into them, when, from where, and what they are doing in those services. Remember, an end user working remotely on a laptop is a big target; if they are compromised, a logical next step is to see what access they have in your cloud services. By having proper access control, you can limit what someone can do, but it is better to know when there is suspicious behavior in the first place. There are tools that can monitor these services in near real time and give you a bit of warning when something out of pattern happens. They can often also provide forensic details if you do have a larger security incident. The best in breed of these services can also monitor your organization and provide guidance on how to improve your security.
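A toy version of the out-of-pattern login check described above might look like the following. The working hours and country list are assumptions for illustration; real monitoring tools build a per-user behavioral baseline rather than fixed thresholds like these.

```python
# Illustrative anomaly check: flag sign-ins outside working hours or from
# countries the organization does not operate in. The thresholds and the
# event record fields are assumptions, not any monitoring product's logic.

WORK_HOURS = range(7, 20)          # assumed 07:00-19:59 local time
EXPECTED_COUNTRIES = {"US", "CA"}  # assumed operating countries

def suspicious_logins(events):
    """Return (user, reasons) pairs for each login that breaks a rule."""
    flagged = []
    for e in events:
        reasons = []
        if e["hour"] not in WORK_HOURS:
            reasons.append("off_hours")
        if e["country"] not in EXPECTED_COUNTRIES:
            reasons.append("unexpected_country")
        if reasons:
            flagged.append((e["user"], reasons))
    return flagged
```

The value of even a crude rule like this is the early warning: a 3 a.m. login from an unexpected country is worth a look long before any data moves.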
Putting it all in one place -
So far, we have talked about a lot of services, and it would be challenging at best to monitor each of them separately. It can be done, but in a small organization you would likely spend more time moving from service to service ensuring the safety of your business than actually doing business (this is one of the reasons so many small and medium-sized businesses have little to no security). This is where tools like Security Orchestration, Automation, and Response (SOAR) and Security Information and Event Management (SIEM) come into play. They ingest the information from all these services and correlate events with other items in the background (once properly configured). For example, your cloud monitoring software notices a login from someone well after normal working hours and from an IP address in a different country. This alert also corresponds with a malware alert in your anti-malware tool and with a known vulnerability on the device. The SIEM lumps all this data together to provide context to alerts and events so that you see actionable items in detail rather than having to constantly scan through all your services. If something is detected that fits the rules you have set up, you get an alert.
SOAR, on the other hand, can have automated responses built into its identified alerts. In the same situation above, you could set SOAR to automatically disable the user account and lock down the originally infected endpoint. Once done, you and your designated security team would get an alert so you can take over. The response rules can be simple or very complex, but they offer not only a view into your environment but also a way to get the ball rolling in the event of an incident.
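The correlate-then-respond flow above can be sketched as follows. Everything here is illustrative: the alert shapes, the two-source rule, the time window, and the action names stand in for what a real SIEM/SOAR pair would do with properly configured playbooks.

```python
from collections import defaultdict

# Toy correlation sketch: if alerts about the same user arrive from two or
# more different sources within a short window, emit the containment
# actions a SOAR playbook might run. Alert fields, the window, and the
# action names are all assumptions for illustration.

def correlate(alerts, window_minutes=30):
    """Return {user: actions} for users with multi-source alerts
    inside the time window."""
    by_user = defaultdict(list)
    for a in alerts:
        by_user[a["user"]].append(a)
    incidents = {}
    for user, items in by_user.items():
        items.sort(key=lambda a: a["minute"])
        sources = {a["source"] for a in items}
        span = items[-1]["minute"] - items[0]["minute"]
        if len(sources) >= 2 and span <= window_minutes:
            incidents[user] = ["disable_account",
                               "isolate_endpoint",
                               "notify_team"]
    return incidents
```

A single off-hours login alert is noise; the same user also tripping the anti-malware tool minutes later is an incident, and that contextual grouping is the whole point of funneling everything into one place.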
Getting it all working –
With everything on the to-do list, where do you start? Many would say you protect your servers and services first, and there is good logic to support this. However, things have changed in the way attackers look at a target. They are actively looking for the remote workforce: people who travel or operate 100% outside the office walls. These are the big targets, so starting at the user endpoint level is a solid first move. The first thing to do is know what you have so you know what to protect. As you build your inventory, roll out vulnerability management, patching, and anti-malware solutions. From there you can tighten up your access controls and ensure multi-factor authentication is in place. This should happen in conjunction with enabling monitoring for cloud services and connections. The last item is to funnel all that data into an alerting system, or an alerting and response system.
While you are looking for the right tools to fit each role, do not worry too much about tool overlap. Too much time is spent on this topic, and it is largely wasted effort. You should ensure that you do not have multiple tools doing the exact same thing, but some overlap is fine; in fact, it can be an excellent way of cross-checking data between sources. In smaller organizations, managed services become an important part of your operation, and many can provide tools that fall into these categories. In working with any managed service, make sure they are not taking a cookie-cutter approach. Every business, group, and organization is different. If a service provider cannot, or is not willing to, understand security in the context of your organization, then they might not be the right choice.
Ensuring security in any business is hard work; it takes a lot to get things set up, deployed, and then managed. Proper security can be costly and really has no true return on investment. You can make up calculations around it in terms of what a breach or ransomware incident would cost, but most of those are speculative numbers at best. Trying to put a number to it in these terms often makes security efforts look like nothing more than a hole you pour money down (it isn't). To make matters worse, when everything is working correctly, your security tools fade into the background, making some wonder what they are even paying for.
Covid-19 forever changed the way we live and do business; it also changed the threat landscape, and not for the better. To combat this change, there needs to be a fundamental shift in how we think about security, from the smallest business to the largest enterprise. It is not easy, but it is certainly doable with the right tools and an understanding of how the term “secure infrastructure” has changed.