Security Think Tank: Seven steps to edge security


Compute and storage decentralisation is a pattern that modern organisations are considering in their roadmaps with growing interest. It helps them cope with the limited latency tolerance and bandwidth availability typical of 5G applications and emerging IoT (internet of things) architectures, and provides the compute that enables AI (artificial intelligence) and data analytics in distributed environments, such as smart buildings, battlefield tech, connected and autonomous vehicles, or even intelligent lamp posts used for emergency alerting.

The change is driven by several factors: the successful adoption of cloud services, the increasing reliance on data-intensive service delivery, and regulatory and policy requirements for local data processing. Large cloud service providers are already targeting these spaces with dedicated solutions, including AWS Wavelength, Microsoft Azure Edge Zones and Google’s Global Mobile Edge Cloud.

Such an environment will see an explosion of new devices of great diversity, connected in a variety of ways. Getting this distributed model right requires a considered approach to security and introduces an additional set of challenges that security experts need to address, including:

  • Effective protection of data, which is fuelling innovative approaches, such as programmable privacy and confidential computing.
  • Management of risk for a variety of platforms, such as vulnerability management.
  • The security of the lightweight virtualised environments which are being deployed to meet the performance requirements.
  • The physical security of devices deployed in uncontrolled (and therefore, by definition, hostile) premises.

In the light of these challenges, security professionals should consider seven key principles:

Don’t reinvent the wheel. As the principles of security largely remain unchanged, it is always helpful to start with threat modelling to define what new attack vectors the distributed architecture will introduce and what remediation is required. Authoritative sources such as the MITRE ATT&CK framework are valuable for this task. It is also worth borrowing ideas from established models, for example zero-trust environments.

Authentication is key. A healthy approach in decentralised models is to treat all local access as taking place within an untrusted environment and, as such, to require identification, authentication and authorisation. A distributed environment will rely on federated identity models, so these patterns must be implemented cleanly and securely across the entire ecosystem.
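The idea of requiring a verifiable identity assertion at every edge node can be sketched as follows. This is a simplified illustration, not a production design: the token format, the `issue_token`/`verify_token` helpers and the shared `IDP_KEY` are hypothetical, and a real federated deployment would use an asymmetric scheme (e.g. signed JWTs) so edge nodes hold only the identity provider’s public key.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical secret provisioned by the federated identity provider.
IDP_KEY = b"example-shared-secret"

def issue_token(subject: str, ttl: int = 300) -> str:
    """Issue a short-lived, signed identity assertion (HMAC sketch)."""
    claims = {"sub": subject, "exp": time.time() + ttl}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str):
    """Return the claims if the token is authentic and unexpired, else None."""
    try:
        payload, sig = token.rsplit(".", 1)
    except ValueError:
        return None  # malformed token
    expected = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # signature mismatch: reject
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return None  # expired assertion
    return claims
```

The constant-time `hmac.compare_digest` check matters here: edge nodes in hostile locations are exactly where timing side channels on signature comparison become practical.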

Increase tracking focus. Moving away from centralised environments, where trust lies implicitly with the infrastructure provider, spreads that trust across myriad device and infrastructure operators, making it impossible to establish a reliable model of supply chain assessment. In this scenario, distributed ledger technologies (blockchains) can support an operating model in which any transaction between untrusted entities is tracked and stored securely, providing a capability that augments trust-based security.

Automate where possible. To manage the complexity of the distributed environment and the variety of devices in place, service providers should rely heavily on automation of device enrolment, identity governance, patching, vulnerability management, security monitoring and incident response. This can be supported by continuous monitoring and performance evaluation and improvement of the automation approach.

However, the processes need to be tested and reviewed in order to avoid automated tasks allowing vulnerabilities to be passed to the edge of the network. For example, firmware updates, patches and binaries need to be digitally signed, the update communication needs to be secure, and so on.
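The signed-update requirement above can be sketched in a few lines. The `sign_firmware` and `apply_update` names are hypothetical, and for a self-contained example an HMAC stands in for the asymmetric signature a real build pipeline would use (so devices would hold only a public verification key, not a signing secret).

```python
import hashlib
import hmac

# Stand-in for a vendor's signing key; real pipelines use an asymmetric
# private key kept off the device entirely.
SIGNING_KEY = b"hypothetical-build-pipeline-key"

def sign_firmware(image: bytes) -> bytes:
    """Produced by the build pipeline when the image is released."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, signature: bytes) -> bool:
    """Run on the edge device: flash the image only if the signature verifies."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # tampered or unsigned image: refuse the update
    # ... write the verified image to flash (not shown) ...
    return True
```

Note that verification happens on the device itself, so a compromised update channel alone cannot push modified binaries to the edge.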

Protect your data. Key management is particularly important in a scenario where not all assets are easily accessible, so regular key rotation, secure key storage and reliance on secrets vaults should all be considered.
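The bookkeeping behind key rotation can be sketched as a versioned key store: new data is always encrypted under the current key, while older versions remain readable until their ciphertexts are re-encrypted and the key is retired. The `KeyStore` class is a hypothetical illustration; a secrets vault would provide this (plus access control and audit) as a service.

```python
import secrets

class KeyStore:
    """Minimal sketch of versioned key rotation for edge data encryption."""

    def __init__(self):
        self._keys = {}      # version -> 256-bit key material
        self._current = 0
        self.rotate()        # provision the first key

    def rotate(self) -> int:
        """Generate a fresh key and make it the active version."""
        self._current += 1
        self._keys[self._current] = secrets.token_bytes(32)
        return self._current

    def current_key(self):
        """(version, key) pair to use for any new encryption."""
        return self._current, self._keys[self._current]

    def key_for(self, version: int) -> bytes:
        """Look up an older key to decrypt existing data (KeyError if retired)."""
        return self._keys[version]

    def retire(self, version: int) -> None:
        """Destroy a superseded key once its ciphertexts are re-encrypted."""
        if version == self._current:
            raise ValueError("cannot retire the active key")
        del self._keys[version]
```

Retiring old key material promptly limits what an attacker gains from physically extracting a device’s storage.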

Focus on monitoring and damage control. The physical security of edge assets is impossible to guarantee most of the time, so the focus should shift to acting decisively on the back of monitoring capabilities. For example, tampering should lead to the removal of local data (or encryption keys), with the compromised device taken offline, quarantined or run with limited functionality. This is significantly easier to control using the frameworks put in place by the major hyperscale cloud suppliers.
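The tamper response described above amounts to a simple, automatic state transition on the device. A minimal sketch, with a hypothetical `EdgeDevice` class standing in for real device-management agent logic:

```python
class EdgeDevice:
    """Sketch of automated damage control on a tamper signal."""

    def __init__(self, device_id: str, data_key: bytes):
        self.device_id = device_id
        self._data_key = data_key   # key protecting locally stored data
        self.state = "online"

    def on_tamper_detected(self) -> None:
        """Triggered by the monitoring pipeline when tampering is observed."""
        self._data_key = None       # destroy the key: local data unrecoverable
        self.state = "quarantined"  # take the device out of service
        # ... report the event to the central monitoring platform (not shown) ...
```

Destroying only the encryption key, rather than wiping all storage, makes the response fast enough to complete before an attacker can intervene.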

Boost your security architecture. Last but not least, decentralised, hybrid, multi-supplier environments and IoT devices need a strong security architecture to be managed securely. This includes establishing an enterprise-wide security architecture framework (such as SABSA), a set of security principles, design patterns and non-functional requirements to drive architectural decisions, and a standard collection of security controls (based on NIST 800-53, for example) that allows the effectiveness of risk mitigation to be measured. These are fundamental steps towards a robust, uniform and shared security architecture governance model.

Organisations in the consumer, business services and industrial sectors looking to evolve their datacentre model should consider each of the above factors in building their security strategies. Although the distributed model has been around for only a few years and is still evolving, the expectation is that it will be the prevalent architecture of the future. It’s time to prepare now.

Silvano Sogus and Alan Taberham are cyber security experts at PA Consulting.
