Cloudy Horizons in Enterprise Security Architecture

Abstract

Enterprises are rapidly migrating towards cloud-based services and “anywhere/anytime access” working architectures.

This article focuses on security architecture recommendations, and the associated supporting technologies, for effectively supporting these changes.

Whilst specific technology and products are listed, they are just examples and the actual technology or service utilised is sometimes less important than the way it is implemented. Within security, good design, implementation and effective ongoing management can often trump bleeding edge feature sets.

Introduction

Security is not a product; it is an ongoing process which, in the real world, does not exist in a vacuum. As such, it is most cost-effectively approached from a risk management point of view.

Given that security is a continuous process with constantly moving goalposts, companies wanting a somewhat definable quantitative metric for the “minimum bar” can look to what has been dubbed “HD Moore’s Law”, which states that “Casual Attacker power grows at the rate of Metasploit”.

Hence your environment’s security controls should, at a bare minimum, keep parity with the common capabilities of Metasploit in order to mitigate intrusions from the more common casual attacks.

HD Moore’s law: 
“Casual Attacker power grows at the rate of Metasploit” 
Joshua Corman.

HD Moore’s Law is a pun on the better-known Moore’s Law in computing. It was coined by Joshua Corman; HD Moore himself is the original author of Metasploit, the most popular automated attack platform.

So whilst continually patching vulnerabilities and putting out the day-to-day fires, it is equally important to continually review the overall strategy and operating architecture to make sure it remains a cost-effective approach given current (and predicted future) technological trends and threat landscapes.

The following recommendations are described in further detail below:

  1. End-to-End Encryption
    1.1 Encryption in transit
    1.2 Encryption at rest
  2. Adjusting to the new perimeter
    2.1 Secure the service
    2.2 Secure the data
    2.3 Secure the endpoint
  3. Secure and Scalable Authentication, Authorisation and Accounting
    3.1 SSO, Federation and Token Based Authentication
    3.2 Multifactor Authentication
  4. Continuous Security Monitoring, real-time anomaly detection and alerting
  5. When and How to Implement BYOD Securely
  6. Inventory/ Configuration Management/ Cloud Orchestration Platform
  7. Rapid and Secure SDLC (Software Development Life Cycle)

1. End-to-End Encryption

1.1 Encryption in transit

Secure TLS encryption for ALL websites/services should be mandatory, not only for security (confidentiality, integrity and authentication) but also at a business functionality and user experience level.

Google’s search engine and most browsers are starting to penalise websites without secure TLS encryption (lower ranking in the search engine, and warnings within the browser). As well, since Apple iOS 9 all newly submitted apps must use TLS exclusively and support IPv6.

Administrators can easily verify their public-facing services’ certificate and cipher configurations by checking they are rated “C” or higher (ideally “A”) at https://www.ssllabs.com/ssltest/

To mitigate against SSL stripping, or users incorrectly clicking through the dangerous certificate warnings associated with MITM tactics, all web services should utilise the HSTS (HTTP Strict Transport Security) header, which informs users’ browsers to never allow the service to be accessed over plain-text HTTP or allow users to click through certificate warnings.
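As a concrete illustration, here is a minimal sketch of emitting the HSTS header from a Python WSGI application. The app and the header value are illustrative only; a year-long max-age with includeSubDomains is a common starting point, but the policy should suit your environment.

```python
# Minimal sketch: adding an HSTS header to every response in a
# Python WSGI application. The app and header value are illustrative.

HSTS_VALUE = "max-age=31536000; includeSubDomains"  # one year

def app(environ, start_response):
    """A trivial WSGI app that always sends Strict-Transport-Security."""
    headers = [
        ("Content-Type", "text/plain"),
        # Instructs browsers to only ever use HTTPS for this host and
        # to hard-fail (no click-through) on certificate errors.
        ("Strict-Transport-Security", HSTS_VALUE),
    ]
    start_response("200 OK", headers)
    return [b"hello over HTTPS\n"]
```

In practice the same header is usually set once in the web server or CDN configuration rather than per-application, but the effect is identical.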

Consider presenting any manually managed internet-facing web services through a SaaS-based reverse proxy/security CDN such as CloudFlare or Incapsula, which can dynamically manage/update the TLS configuration for the riskiest portion of the connection. These services also support IPv6, as well as providing additional WAF, threat detection and DDoS mitigation capabilities.

As TLS use increases it’s worth investigating some form of internal automation around certificate and TLS cipher/protocol management, as existing manual methods obviously won’t scale effectively. Netflix have open sourced their TLS certificate management platform (Lemur), and AWS have also recently released a certificate management service, AWS Certificate Manager (ACM).
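Whatever platform is chosen, the core of such automation is unglamorous checks like the one below: a stdlib-only Python sketch that flags certificates approaching expiry so they can be renewed before causing an outage. The renewal window and function name are illustrative, not from any particular product.

```python
# Illustrative sketch of a check a certificate-management job might
# run: flag certificates nearing expiry so they can be renewed early.
import ssl
import time

RENEW_WINDOW_DAYS = 30  # illustrative threshold

def needs_renewal(not_after, now=None):
    """not_after is the certificate's notAfter field, in the format
    returned by ssl.SSLSocket.getpeercert()['notAfter']."""
    expiry = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return expiry - now < RENEW_WINDOW_DAYS * 24 * 3600
```

A scheduled job could run this across an inventory of endpoints and raise a ticket (or trigger an automated renewal via Lemur/ACM style tooling) for anything that returns True.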

1.2 Encryption at Rest

Most organisations are migrating away from desktops to more portable devices like laptops/tablets, with the potential for locally stored corporate data. Turning on BitLocker or similar FDE (full disk encryption) is a no-brainer to secure any data on Windows devices.

BitLocker itself is native, free and easily centrally managed via Group Policy; its implementation and use can be generally transparent to the end user providing their device has a TPM 1.2 chip or higher. If more detailed encryption status auditing/reporting is needed for compliance requirements, then Microsoft MBAM or another third-party product might be considered.

If a company is looking at buying a shiny new MDM for mobile devices and they haven’t already FDE’d their corporate laptop fleet, they’re likely doing risk management wrong.

Companies utilising Apple OSX laptops should consider implementing a suitable EPM (end point management) product capable of centrally enforcing FDE encryption requirements and key escrow. Such products include AirWatch, Microsoft EMS, Jamf and MaaS360.

Until fully homomorphic encryption becomes more feasible, the minimum requirement companies should be aiming for when storing or processing data with cloud service providers is the ability to encrypt the data in transit to/from the cloud and “at rest” in the cloud, preferably whilst maintaining effective control over the encryption keys. Controlling the encryption keys for “at rest” data in the cloud provides both current content security and a simple mechanism for future data removal/disposal when decommissioning or migrating to another provider or service: simply destroying the master encryption keys can mitigate the requirement to scrub/overwrite the data on exit.

Overlaying self-managed encryption in the cloud needs to be weighed against the native security capabilities of the cloud platform, given the risk associated with the increased complexity of management.

Central encryption KMS (key management services) are available on most major platforms, including AWS KMS, and should certainly be considered.

2. Adjusting to the new perimeter

Some people will say that perimeters are disappearing; others might argue they are just moving. Either way, traditional concepts of chokepoint-based network perimeter security are changing as many organisations migrate towards internet-based anywhere/anytime access architectures. As such, focus should shift towards securing the service, securing the data, securing the endpoint and, as discussed above, making sure all communications between endpoints and services are securely encrypted.

Google’s “BeyondCorp” architecture is a fantastic reference architecture for modern enterprises interested in both enhanced security and mobility:

http://research.google.com/pubs/pub43231.html

2.1 Secure the Service

Preference should be given, when possible, to SaaS/PaaS or serverless code-based services where the supplier is responsible for security patching and maintenance of the underlying infrastructure (Office 365, Google Apps, AWS Lambda etc.). This allows the business to focus more on the application logic and the relevant business processes the system is there to support.

There can be some confusion around what is and isn’t a PaaS service, and also where the demarcation points of responsibility lie for a service’s security management, such as security patching.

For example, many people historically perceived the earlier versions of AWS Elastic Beanstalk as a PaaS service (when it is arguably more a set of automation tools built on their IaaS platform), and Elastic Beanstalk environments at the time would not natively patch/update themselves, leaving some customers blissfully unaware they were running vulnerable systems they needed to patch themselves.

So make sure to validate where the various demarcation points of responsibility lie for a service’s security management, and if necessary implement your own additional processes or controls. (AWS Elastic Beanstalk has more recently implemented automated patching capabilities for new patch and minor platform versions.)

Migrating to SaaS/PaaS-based services to outsource the underlying infrastructure management and security patching requirements is great, but SaaS/PaaS services still come with their own set of issues, including how to implement all the necessary security policies/controls around the way your users interact with cloud services. Depending on the native capabilities of the service, you may also need to consider a Cloud Access Security Broker (CASB) to effectively meet your requirements.

Mature organisations will already be leveraging advanced WAFs (web application firewalls) to protect their critical web services. They should now be reviewing RASP (runtime application self-protection) solutions, which provide an additional layer of security linked to, or built directly into, the application runtime. RASP is capable of effectively controlling application execution to detect and mitigate attacks in real time. By being integrated into the application runtime, the protection mechanism has more context around what is going on, and can more effectively determine what is and isn’t malicious activity.

Developers can code RASP style controls directly into their apps themselves or leverage commercial products from the likes of Contrast Security or HP etc.
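A toy sketch of the do-it-yourself flavour of this idea is below: a Python decorator that inspects a query at call time and blocks patterns suggestive of SQL injection. Real RASP products hook the runtime far more deeply; the function names and patterns here are purely illustrative.

```python
# Toy RASP-style control: inspect a query at call time and block
# patterns that suggest SQL injection. Illustrative only - commercial
# RASP instruments the runtime itself rather than wrapping functions.
import functools
import re

SUSPICIOUS = re.compile(r"(--|;|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)

class BlockedRequest(Exception):
    pass

def rasp_guard(func):
    @functools.wraps(func)
    def wrapper(query, *args, **kwargs):
        if SUSPICIOUS.search(query):
            # A real system would also raise an alert/security event here.
            raise BlockedRequest(f"blocked suspicious query: {query!r}")
        return func(query, *args, **kwargs)
    return wrapper

@rasp_guard
def run_query(query):
    return f"executed: {query}"
```

Because the check runs with full knowledge of the actual query string at execution time, it has more context than a network WAF inspecting encrypted or encoded traffic upstream.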

2.2 Secure the Data

Make sure to limit and control the data exit points from your environment. For example, block portable USB drives if they aren’t required, or enforce encryption on them if they are (BitLocker Group Policy). Use URL/content filtering services, endpoint DLP and cloud broker controls to manage access to cloud storage and sharing services.

DLP systems which purely use inspection and pattern recognition at various choke points within the network and endpoint will continue to fight an uphill battle against data leaks.

Microsoft-based enterprises wanting to control data on endpoints, and the ways it is disseminated, might consider reviewing the newish Azure RMS (Rights Management Services). Azure RMS uses encryption, identity and authorisation policies to help secure your files and emails, and works across multiple devices and platforms. Whilst an interesting and intuitive approach, the product has yet to be widely deployed or tested by a lot of enterprises, so you’ll currently need to make your own assessment.

2.3 Secure The EndPoint

Primarily due to rapid malware mutation, traditional signature-based anti-virus is definitely at the end of its effective half-life and continues to decrease in efficacy (to even consider it 45% effective would be generous). It has become critical that organisations take the inverse approach and migrate to an “application whitelisting” system to more effectively mitigate infections.

Application whitelisting essentially “whitelists” all known and trusted executable code and only allows that to run, blocking any unknown or potentially malicious code in its tracks.
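Conceptually, the hash-rule case reduces to a set-membership check, as in this stdlib-only Python sketch. Products such as AppLocker and Device Guard also support publisher-certificate and path rules; only the hash rule is illustrated here, and the sample binary bytes are invented.

```python
# Conceptual sketch of hash-based application whitelisting: only
# executables whose SHA-256 digest appears in an approved set may run.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_allowed(executable_bytes: bytes, whitelist: set) -> bool:
    """Return True only if the code's hash is on the whitelist."""
    return sha256_of(executable_bytes) in whitelist

# Build the whitelist from known-good binaries (illustrative data).
trusted_build = b"\x7fELF...known-good-binary..."
whitelist = {sha256_of(trusted_build)}
```

The operational challenge is entirely in maintaining the trusted set, which is why centralised management (EPM, Group Policy, vendor consoles) matters more than the check itself.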

Apple’s OSX has included this capability (Gatekeeper) natively since OSX Mountain Lion. In an enterprise environment you might just require some form of EPM to manage the settings.

Available natively for enterprise Windows platforms there is AppLocker, which has been around for quite a while. For Windows 10 endpoints there is the newer “Device Guard”, which appears to be a very promising security solution with better security and more scalable management capabilities. Device Guard is still quite new and not yet commonly deployed (or thoroughly independently tested), but the concepts and approach seem well thought out and definitely worthy of consideration.

There are also multiple third-party application whitelisting products available for both platforms, such as Bit9 and Lumension (and plenty more). These not only include application whitelisting but also enhanced EDR (endpoint detection and response) capabilities. EDR is a rapidly growing field, and endpoint products such as CrowdStrike, Tanium and Confer are all great tools to tie into the continuous monitoring and automated remediation approach discussed later.

Windows 10 and its new Edge browser provide multiple new security features/mitigations, several of which were previously only available as part of the EMET (Enhanced Mitigation Experience Toolkit) add-on. It’s recommended that all Windows devices which can be upgraded to Windows 10 are upgraded as soon as reasonably possible.

Companies might consider migrating away from Windows OS for their user endpoint platform to less commonly attacked operating systems such as Apple OSX, Linux, IOS, Android or Chromebook. Large companies such as Google and IBM have cited improved security and reduced technical support costs as some of their key drivers and successes with their use of OSX and other platforms.

Discussing the associated attack vectors or the benefits of one OS over another is outside the scope of this article. For better or worse, the majority of large enterprises do predominately utilise Windows. In the end, all operating systems will always have some vulnerabilities, and not all security issues are directly related to malicious code (e.g. malicious users/inappropriate use). As such, platform selection can come down to what your organisation is currently most comfortable with and capable of effectively managing within your own environment; otherwise it’s arguably a cat-and-mouse game of security through obscurity.

Perhaps the key trend here is that, as many business applications become more browser-centric, there are fewer dependencies on the underlying operating system to consume these services, theoretically giving companies and users more freedom to pick and choose their preferred platform. Your security architecture needs to consider this diversity in order to effectively maintain end-to-end security.

Investigate SaaS-based URL and content filtering services such as Zscaler, which can provide effective controls for endpoints regardless of whether they are “on” or “off” the corporate network.

3. Secure and Scalable Authentication, Authorisation and Accounting

3.1 SSO, Federation and Token Based Authentication

The sprawl and diversity of systems used by businesses these days have required an adjustment in authentication/authorisation methods to maintain a secure, manageable (scalable) environment.

Individual credentials for each different system or service are not scalable/manageable into the future, and common internal enterprise authentication systems such as native Active Directory (e.g. Kerberos) aren’t really functional/secure across the public internet.

The use of tokenised, federated authentication/authorisation systems provides enhancements in security, scale and manageability, and reduces the number of passwords users need to memorise.

SAML and OAuth have become somewhat the lingua franca between authentication/authorisation systems, providing seamless web-based SSO for many external applications/services. Before purchasing or subscribing to any new business application/service, it is highly recommended to validate that it supports the requisite federated auth protocols so that it can be easily integrated.
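The common shape underneath these protocols is a signed claims token that relying services verify without ever seeing a password. This stdlib-only Python sketch shows that sign-and-verify shape (real SAML assertions and OAuth/JWT tokens carry far richer structure; the key and function names here are illustrative).

```python
# Minimal sketch of token-based authentication: the identity provider
# signs a claims payload with HMAC, and any relying service holding
# the shared key can verify it without a per-service password.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-shared-key"  # illustrative only; manage real keys properly

def issue_token(claims: dict) -> str:
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str):
    """Return the claims if the signature is valid, else None."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(payload.encode()))
```

Any tampering with the payload or signature fails verification, which is what lets many services trust one identity provider.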

Organisations should establish a documented standard around federated authentication, so new services can be integrated and consumed efficiently.

3.2 Multifactor Authentication

Multi-factor authentication has long been a staple for securing access to sensitive systems. It’s likely most organisations will increase their use of appropriately secured PKI client certificates as a secondary authentication method, and as a way to validate that users are accessing the service from an appropriately secured corporate device. Client certificate private keys should be non-exportable and secured in hardware TPM chips (virtual smartcards) to provide some mitigation against key extraction attacks if the device is compromised by an attacker.

The use of PKI/client certificate authentication for customer-consumed services is currently still somewhat debatable, mostly due to the complexity of remote implementation/management and the associated user experience (these issues are mostly mitigated in corporate-managed SOE environments). In the meantime, perhaps prefer regular federated authentication plus OTP/SMS etc. if secondary authentication is required for external customers/unmanaged remote devices.

FIDO Alliance-compliant authentication devices/systems have been increasing in popularity and acceptance, and this is likely to continue into the future. Their focus on ease of use, and their cryptographic mitigations against credential phishing attacks, are of particular note.

It can be highly useful if all staff have their mobile phone number registered as part of the business’s on-boarding process. For example, in the event of suspicious activity being detected on a user’s account, the active session could be interrupted and challenged for an OTP code sent to the staff member’s mobile phone. This is a form of adaptive authentication, which can be tied in with the business’s continuous monitoring and automated mitigation approach discussed below.
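The OTP side of such a challenge is standardised: RFC 6238 TOTP, as used by authenticator apps, can be generated and checked with the Python stdlib alone, as this sketch shows (the delivery channel — SMS, push, app — is out of scope here).

```python
# Sketch of RFC 6238 TOTP generation/verification using only the
# Python stdlib. The 30-second step and 6 digits are the RFC defaults.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_otp(secret: bytes, submitted: str, for_time=None) -> bool:
    return hmac.compare_digest(totp(secret, for_time), submitted)
```

A production verifier would also accept one step of clock skew either side and rate-limit attempts; both are deliberately omitted to keep the sketch short.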

4. Continuous Security Monitoring, real-time anomaly detection and alerting

As the threat landscape is constantly evolving, security monitoring needs to be a continual process, dynamically updated to detect new threats and anomalies. SIEM solutions which take existing infrastructure logs and system data in real time, correlate them and alert on any anomalies can meet this need.

There are pros and cons to whether you utilise an outsourced managed SIEM service (such as Symantec MSSP, AlertLogic, Dell SecureWorks etc.) or operate your own in-house SIEM system. An in-house SIEM obviously has better context/understanding of your own internal environment, but also requires constant work from experienced tuners to be of any use.

Managed SIEM, on the other hand, outsources much of the management overhead but tends to come with less in-depth understanding of your internal infrastructure and business workflows. On the flip side, managed SIEM services can sometimes have better “global” context (inherent knowledge of more IOCs, and experience from the wider variety of companies/environments they support).

The next step from continuous monitoring is some level of automated mitigation and remediation, sometimes referred to as “next generation SIEM”. Netflix’s open-sourced FIDO (Fully Integrated Defence Operation) is an example of this, and there are other commercial/open source products on the market. Several other major vendors have started dipping their toes into the same water: Microsoft, for example, with their advanced threat analytics services, Cloud App Security and Office 365 “advanced security management”, which can automatically suspend a user account when certain anomalous activities are identified.
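To make the correlate-then-respond idea concrete, here is a toy Python sketch: a stream of auth events is correlated per user, and any account tripping a simple failed-login rule is queued for automated suspension. The thresholds, event shape and function name are illustrative, not from any vendor product.

```python
# Toy "next generation SIEM" sketch: correlate auth events and decide
# which accounts an automated responder would suspend. Thresholds and
# event shapes are illustrative only.
from collections import defaultdict

FAIL_THRESHOLD = 5      # failures allowed inside the window
WINDOW_SECONDS = 60     # sliding window size

def process_events(events):
    """events: iterable of (timestamp, user, outcome) tuples.
    Returns the set of users an automated responder would suspend."""
    failures = defaultdict(list)
    suspended = set()
    for ts, user, outcome in events:
        if outcome != "failure":
            continue
        failures[user].append(ts)
        # keep only failures inside the sliding window
        failures[user] = [t for t in failures[user] if ts - t < WINDOW_SECONDS]
        if len(failures[user]) >= FAIL_THRESHOLD:
            suspended.add(user)
    return suspended
```

Real systems add many more signals (geolocation, device, time-of-day baselines) and usually trigger an adaptive-authentication challenge before a hard suspension.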

Continuous Testing

Continuous testing can be performed using automated tools, which, along with scheduled periodic manual penetration tests by experienced professionals, sets the minimum bar for internet-facing/customer components (HD Moore’s law).

More organisations should also review the option of implementing a crowd-sourced professional bug bounty service such as Synack, Bugcrowd or HackerOne, which provide incentives for external security testers to continually monitor/probe the environment and responsibly notify you when they detect a vulnerability.

5. When and how to implement BYOD securely

When it comes to BYOD there are a couple of branches in the decision tree to be aware of:

 Mobiles/ Tablets (specifically IOS and Android operating system)

  • The iOS and Android operating system architectures are reasonably mature and well suited to compartmentalising and securing corporate vs personal data when appropriate MDM/MAM controls are leveraged.
  • Lighter-weight MAM-only implementations can be useful in providing managed access to some enterprise apps without necessarily having to enrol a device into a full-blown MDM. This also gets around the “contractor/vendor single MDM” dilemma, whereby a device can only be enrolled in a single authoritative MDM at any given point in time, and contract employees’ phones might already be enrolled in their own organisation’s MDM.
  • Microsoft’s “Outlook Mobile App”: whilst there are some valid concerns with the way it proxies authentication, much like a lightweight MAM some organisations will still likely find it a useful way to provide staff personal devices with a managed mobile mail/calendar client which has both a good, consistent user experience and the capability for selective remote wipe, without requiring staff to enrol their whole personal device into a full-blown MDM.

Laptops (specifically Windows/ OSX operating systems)

  • For companies with lots of “sensitive” intellectual property (IP) or compliance requirements, “secure” BYOD (as in personally owned) for laptops can currently still only be achieved using some form of VDI model (Citrix, AWS WorkSpaces, MS RDP etc.). The primary reason is that there are too many questions around data control/ownership on remote personal devices, and technological limitations around centrally controlling that data. The tech is evolving though, and it’s likely these issues will be mastered in the future.
  • For companies without sensitive “IP” or strong compliance requirements, the argument could be made that BYOD laptops and native access “may” be acceptable, providing there is some form of integrated EPM (end point management).

6. Inventory/ Configuration Management/ Orchestration Platforms

A fundamental principle of operating at scale is: “don’t provide any services for free within the business”. Application/service value to the business can be more accurately calculated when its specific operating costs are known, versus hidden within a general IT budget allocation. Accurate and detailed on-charge billing provides a positive incentive for business units to operate in a more efficient and, as a useful byproduct, more secure manner.

Reducing an enterprise’s attack surface will significantly improve its operational security. The most comprehensive method for achieving this is to make sure legacy or under-utilised applications or services are actively removed/decommissioned once they are no longer critical to the business.

As mentioned, a simple and effective incentive to drive this process is to implement detailed itemised billing for all resources and services, on-charged down to the business unit level. Business units tend to be more proactive about decommissioning legacy or underperforming systems once they perceive a dollar value associated with their ongoing maintenance, rather than the operating cost being hidden in an overall IT budget.

The elasticity of AWS/Azure and other cloud service providers allows applications/services that are not in frequent use to be kept in a powered-down state and just spun up on demand, providing both a reduced attack surface and a more cost-effective operating model.
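The decision logic behind such a power-down job can be very simple, as in this Python sketch: given per-instance last-activity timestamps, pick which running instances to stop. The actual cloud API call (an EC2/Azure stop request) is deliberately out of scope, and the field names and idle threshold are invented for illustration.

```python
# Sketch of "power down what isn't used": choose which running
# instances an orchestration job should stop. Field names and the
# idle threshold are illustrative.
IDLE_LIMIT_SECONDS = 7 * 24 * 3600  # a week without activity

def instances_to_stop(instances, now):
    """instances: list of dicts with 'id', 'state', 'last_activity'."""
    return [
        inst["id"]
        for inst in instances
        if inst["state"] == "running"
        and now - inst["last_activity"] > IDLE_LIMIT_SECONDS
    ]
```

A scheduled job would feed the returned IDs to the provider's stop API, shrinking both the bill and the attack surface between uses.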

Detailed native billing report capabilities exist in most cloud platforms, which can be leveraged initially for itemised billing and usage attribution. As the use of multiple cloud services (AWS, Azure, Box, Zscaler etc.) increases, keeping an accurate inventory of all services, data and ownership/attribution information also increases in complexity. The business might find it useful to leverage a third-party product such as ServiceNow to provide more comprehensive lifecycle management of the infrastructure/services, and an integrated ITSM, inventory/CMDB and billing platform.
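At its core, itemised on-charging is an aggregation over tagged usage records, as this Python sketch shows. The record fields loosely mirror the tag-based billing exports most cloud platforms produce, but the field names here are illustrative.

```python
# Sketch of itemised on-charge billing from tagged usage records: sum
# cost per business-unit tag so each unit sees what its services cost.
from collections import defaultdict

def costs_by_unit(records):
    """records: iterable of dicts with 'business_unit' and 'cost'."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec.get("business_unit", "untagged")] += rec["cost"]
    return dict(totals)
```

Surfacing "untagged" as its own line item is a useful forcing function: it makes unattributed spend visible, which drives the tagging discipline the inventory/CMDB depends on.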

Effectively managing Windows corporate devices (laptops) across the internet presents some challenges, things such as Active Directory group policy and even computer account credential synchronisation generally don’t work natively across the internet.

Microsoft SCCM IBCM (Internet Based Client Management), Puppet and some other third-party EPM products can provide differing degrees of endpoint management capabilities, such as managed updates/patches, application deployments and some scripting capabilities, natively across the internet. This may be sufficient for some organisations depending on their operating model.

“Always-on” client VPN services such as Cisco AnyConnect or Microsoft DirectAccess (and many others) can securely extend the corporate network across the internet to all devices regardless of their physical location. This allows existing native Active Directory controls to be applied, but the approach comes with its own set of issues in both security and performance, so client VPNs might best be seen as an interim workaround until more native internet-based management solutions are available.

7. Rapid and Secure SDLC (Software Development Life Cycle)

An entire article could be dedicated to this topic (coming soon); in the interim some high-level points are included below:

  • Integrating automated security testing (SAST/DAST) into the CI/CD (continuous integration/continuous deployment) process
  • Leveraging RASP (runtime application self-protection) for better application security and easier integrated security testing within your CI/CD process
  • Empowering developers with more self-service capabilities, including within IAM
  • Consensus-based commits
  • Leveraging ORMs (object-relational mappers) for secure database interactions
  • Secure credential storage/access
  • Integrating vulnerability reports/remediation work into developers’ existing bug/feature tracking systems (don’t just email them a 70-page PDF report with identified vulns they need to remediate)
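On the secure-database-interaction point: the property an ORM gives you under the hood is parameterised queries, which keep user input as data rather than executable SQL. This self-contained sqlite3 demo illustrates that property without a full ORM (table and data are invented for the example).

```python
# Parameterised queries (which ORMs generate under the hood) keep user
# input as data rather than executable SQL. sqlite3 keeps the demo
# self-contained; table and rows are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'dev')")

def find_user(name):
    # The ? placeholder means a hostile value like "' OR '1'='1" is
    # treated as a literal string, never spliced into the SQL text.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

Contrast with string concatenation ("... WHERE name = '" + name + "'"), where the same hostile input would rewrite the query itself.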

Conclusion

The overarching approach to enterprise security should continue to be based on risk management.

Businesses however need to adjust some of their historical architectures and supporting processes in order to cope with the more rapid iteration of change and diversity associated with increased consumption of cloud and the newer ways of working.

Hopefully some of the approaches discussed above can assist organisations in increasing security, reducing opex/capex expenditure and improving the end-user experience.