Yes, containers are great, but look at the security risks




Containers have revolutionized the development process, serving as a cornerstone for DevOps initiatives, but containers carry complex security risks that are not always obvious. Companies that fail to mitigate these risks leave themselves open to attack.

In this article, we outline how containers contribute to rapid development, the unique security risks they bring into the picture – and what companies can do to secure containerized workloads by moving beyond DevOps to DevSecOps.

Why did containers catch on so quickly?

Containers are, in many ways, an evolution of virtualization. The goal was to speed up the development process, paving the way for faster development, testing, and deployment – in a manner far lighter than using a full-fledged virtual machine.

At the root of the problem is application compatibility: applications often require specific versions of libraries, which can conflict with the requirements of other applications. Containers solve this problem, and they fit neatly into the development process and the management infrastructure that drives it.

Containers do their job by taking virtualization to the next level. Where virtualization abstracts the hardware layer, containers abstract the operating system layer, essentially virtualizing the role of the OS. Containerization works by packaging an application into a "container" that includes all the libraries the application needs to work, while each app behaves as if it has its own OS, unaware of the others.

In practical terms, containers are quite simple: a container image is defined by a text file describing every element an instance should include. This simplicity, and the lightweight nature of containers, makes it easy to apply automation (orchestration) tools across the development lifecycle.
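To make the "container as a text file" idea concrete, here is a minimal sketch. The Dockerfile contents, image tag, and library versions below are purely illustrative, and the parser only handles the directives used in this example:

```python
# A minimal, hypothetical Dockerfile: a base OS layer, the libraries the
# app needs, and the command to run. The whole environment is just text.
DOCKERFILE = """\
FROM python:3.11-slim
RUN pip install flask==2.3.2 requests==2.31.0
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
"""

def summarize(dockerfile: str) -> dict:
    """Extract the base image and pinned libraries from a Dockerfile."""
    base, libs = None, []
    for line in dockerfile.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "FROM":
            base = parts[1]
        elif parts[0] == "RUN" and parts[1] == "pip":
            libs += [p for p in parts[2:] if "==" in p]
    return {"base": base, "libraries": libs}

print(summarize(DOCKERFILE))
# {'base': 'python:3.11-slim', 'libraries': ['flask==2.3.2', 'requests==2.31.0']}
```

Because the entire environment specification is plain text, it can be versioned, diffed, and fed to orchestration tooling like any other source file.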

DevOps for the win… but security matters too

Containers have the potential to significantly increase development efficiency – which serves as the key to unlocking DevOps. This is probably the main reason containers have been so widely adopted: Gartner estimates that by 2023, 70% of organizations will be running containerized workloads.

The process of developing, testing, and deploying applications used to be fraught with obstacles, including constant back-and-forth between developers and the teams overseeing infrastructure. Today, thanks to containers, developers can build and test in an environment that works, then ship the finished code along with a specification that defines that environment.

Operations teams simply apply this specification to create a matching, ready-to-use environment. "But it works on my machine…" never helped solve a problem – and today it is an excuse developers no longer need, because there are no environment discrepancies to debug.

So, yes, DevOps means faster development. But there is one missing element: security. This is why we hear increasingly about DevSecOps, which evolved from DevOps as developers noticed that the DevOps model alone does not adequately address security concerns.

Containers introduce a range of security risks

Containers simplify the development process but add complexity to the security picture. When you pack a complete operating environment tightly into a container and then distribute it widely, you extend the attack surface and open the door to a variety of attack vectors. Any vulnerable library packaged into a container spreads that vulnerability across countless workloads.

There are several risks. One is the "supply chain attack," where a malicious actor compromises your application by tampering with the packages or components it is built from, rather than attacking the application itself. Teams overseeing development therefore need to vet not only the application they are building, but every library their container configuration pulls in as a dependency.
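One common defense against tampered dependencies is hash pinning: record a cryptographic digest of each vetted artifact, and refuse anything whose contents no longer match (pip's hash-checking mode works on this principle). The package name, contents, and lockfile below are hypothetical; this is a conceptual sketch, not a real package verifier:

```python
import hashlib

# Hypothetical lockfile: each vetted dependency artifact mapped to the
# SHA-256 digest recorded when it was originally reviewed.
TRUSTED = {
    "somepkg-1.0.0.tar.gz": hashlib.sha256(b"original contents").hexdigest(),
}

def verify(name: str, payload: bytes) -> bool:
    """Reject any package whose contents no longer match its vetted digest."""
    expected = TRUSTED.get(name)
    if expected is None:
        return False  # unknown artifacts are rejected, not trusted by default
    return hashlib.sha256(payload).hexdigest() == expected

print(verify("somepkg-1.0.0.tar.gz", b"original contents"))  # True
print(verify("somepkg-1.0.0.tar.gz", b"tampered contents"))  # False
```

The key design choice is fail-closed behavior: an artifact that is unknown, or whose digest has drifted, never makes it into the image build.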

Container security risks also extend to the tooling that enables containers – everything from Docker itself to orchestration tools such as Kubernetes – as these tools need to be monitored and secured too. You should not, for example, allow sysadmins to run Docker containers as root. Similarly, you need to keep a close watch on your container registries so that they are not compromised.
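Checks like "does this container run as root?" are easy to automate in a build pipeline. Below is a simplified linter for two of the misconfigurations mentioned above; the rules and sample Dockerfiles are illustrative, and real scanners (Hadolint, Trivy, and the like) cover far more cases:

```python
def audit_dockerfile(text: str) -> list:
    """Flag two common misconfigurations: running as root (no non-root
    USER directive) and unpinned ':latest' base images."""
    findings = []
    lines = [l.split() for l in text.splitlines() if l.strip()]
    has_nonroot_user = any(
        l[0] == "USER" and len(l) > 1 and l[1] != "root" for l in lines
    )
    if not has_nonroot_user:
        findings.append("container runs as root (no non-root USER set)")
    for l in lines:
        if l[0] == "FROM" and (":" not in l[1] or l[1].endswith(":latest")):
            findings.append(f"unpinned base image: {l[1]}")
    return findings

bad = "FROM ubuntu:latest\nRUN apt-get update\n"
good = "FROM ubuntu:22.04\nUSER appuser\n"
print(audit_dockerfile(bad))   # two findings
print(audit_dockerfile(good))  # []
```

Wiring a check like this into CI means a risky image specification is rejected before it ever reaches a registry.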

Kernel security is at the core of container security

Some container-related security risks are less visible than others. Every container requires access to a kernel – after all, containers are just an advanced form of process isolation. But it is easy to miss that all containers depend on the same kernel, no matter how well the applications inside the containers are isolated from one another.

The kernel that apps see inside a container is the same kernel the host relies on to run. This raises a couple of issues. If the host kernel supporting the containers is vulnerable to exploitation, that vulnerability can be exploited by launching an attack from an app inside a container.

So sharing one kernel across all the containers on a host means that a flawed kernel needs to be patched quickly, or every container can be affected by its vulnerabilities.
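You can see the shared kernel directly. The snippet below prints the kernel release the current process sees; run it on a host and then inside any container on that host, and the value is identical, because the container does not virtualize the kernel:

```python
import platform

# Containers isolate processes, filesystems, and networks, but not the
# kernel: this call returns the *host's* kernel release even when run
# inside a container on that host.
release = platform.release()
print(f"kernel release seen by this process: {release}")
```

This is why a single kernel CVE matters to every containerized workload on the machine at once.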

Yet again, it comes down to patching

Keeping the host's kernel up to date is therefore an important step toward safe, secure container operation. And it is not just the kernel that needs patching: patches must also be applied to every library a container draws in. But, as we know, that is easier said than done – probably a factor in why organizations do so poorly at it. One study found that 75% of containers analyzed had a vulnerability classified as high or critical risk.
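Conceptually, what image scanners do is compare the libraries baked into an image against an advisory feed. Here is a toy version of that comparison; the advisory entries, version thresholds, and package list are all hypothetical, and real tools such as pip-audit or Trivy use proper version-range semantics rather than this simplified "at or below" rule:

```python
# Hypothetical advisory feed: package -> (highest vulnerable version, severity).
ADVISORIES = {
    "libfoo": ("1.26.4", "high"),
    "libbar": ("5.3.1", "critical"),
}

def parse_version(v: str) -> tuple:
    """Turn '1.26.0' into (1, 26, 0) for tuple-wise comparison."""
    return tuple(int(x) for x in v.split("."))

def audit(installed: dict) -> list:
    """Return (package, severity) for every installed library at or below
    the highest known-vulnerable version in the advisory feed."""
    hits = []
    for pkg, version in installed.items():
        if pkg in ADVISORIES:
            vuln_max, severity = ADVISORIES[pkg]
            if parse_version(version) <= parse_version(vuln_max):
                hits.append((pkg, severity))
    return hits

image_libs = {"libfoo": "1.26.0", "libbar": "6.0", "libbaz": "2.3.2"}
print(audit(image_libs))  # [('libfoo', 'high')]
```

The laborious part the article describes is exactly this bookkeeping, done continuously across every image and every newly disclosed vulnerability.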

These vulnerabilities can, for example, lead to breakout attacks, where an attacker relies on a flawed library inside a container to execute code outside the container. By breaking out of a container, an intruder can eventually reach their intended target – an application on the host system, or another container.

Maintaining secure libraries inside containers can be a real headache: one needs to keep track of what has been patched and what has not, as well as newly disclosed vulnerabilities. The process is laborious, and it also requires expertise that your organization must acquire if it does not already have it.

Given the value of regular, consistent patching, these factors alone do not explain the hit-and-miss patching routines we see. But – especially where the OS kernel is concerned – the required reboots and the related disruption of maintenance windows significantly delay patching. Live kernel patching helps alleviate this problem, but it has not yet been adopted by all organizations.

Always include security goals in your container operations

It is common for sophisticated technologies to introduce new complexities into security, and new tools usually lead to new exploits. This is true for containers as well – and while it does not diminish the overall value of running your workloads in containers, it does mean you need to be aware of the risks containers pose.

Start by educating your developers and sysadmins about common container security flaws and the best practices for mitigating them. Patching is another important aspect. As always, taking the right steps to mitigate cybersecurity flaws will help protect your organization – and help your team benefit from cutting-edge technologies without the sleepless nights.

