Uncovering problems in segmented applications
Container problems with security and monitoring
Docker and Kubernetes are becoming ever more important in modern software development. However, the very properties that make them attractive also raise questions about monitoring and security.
Container environments are in constant flux and in constant operation, which makes them difficult to monitor.
Docker and solutions like Kubernetes have clearly hit a nerve in agile software development. The use of containers for delivering applications and services is becoming increasingly popular. Even ambitious home users now handle Docker images as a matter of course to install server applications on a PC or their NAS.
Containerization as the basis of current IT concepts
Container solutions are technical answers to a paradigm shift in IT that has been recognizable for years. This started with agile process models in development, which rely heavily on automation. The concept of DevOps today enables teams to build and scale their own infrastructure with “Infrastructure as Code” and “Configuration as Code”. Instead of updating individual components or libraries, new image files are created and automatically rolled out as a whole.
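To make the immutable-image idea concrete, here is a minimal sketch: instead of patching a running server, a Dockerfile like the one below is rebuilt and the resulting image is rolled out as a whole. The base image, file names and port are illustrative assumptions, not taken from the article.

```dockerfile
# Sketch of an immutable application image (names and versions are illustrative).
# Updates are never applied in place: the image is rebuilt and redeployed as a whole.
FROM python:3.12-slim

WORKDIR /app

# Dependencies are pinned in requirements.txt, so every build is reproducible.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Run as an unprivileged user rather than root.
RUN useradd --create-home appuser
USER appuser

EXPOSE 8000
CMD ["python", "app.py"]
```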
The splitting of extensive IT projects into smaller work packages, implemented in sprints, is accompanied by a corresponding splitting of the IT structures. Instead of monolithic applications, work is done on microservices that are assembled into the overall application in orchestration environments.
The advantages are obvious: application tests can be carried out even before deployment, in an environment that is no different from the production environment. In multi-cloud environments, containers can be moved with little effort. And potentially, each service can run in its own container.
More fragments mean more threats
However, it is precisely this fragmentation of the infrastructure that also raises questions with regard to IT security. Current infrastructures are a network of containers, cloud environments and network connections. This makes both checking for vulnerabilities and closing security gaps much more demanding. Potential threats to each individual container arise from:
- Application-level DDoS attacks and cross-site scripting attacks on publicly accessible containers.
- Compromise of the container, which can begin as early as the development process. Too often, developers assume that the code libraries they use are safe, yet third-party libraries in particular can prove risky. The same applies to external images, which should never be used unchecked. Dangers also arise here from the code base and from incorrectly set permissions that grant too many privileges (see the hardening sketch after this list).
- Container breakouts, in which the isolation layer of the container is overcome. This risks unauthorized access to other containers, the host system or even the data center.
- Vulnerabilities in the framework itself, i.e. vulnerabilities in the Docker environment or orchestration solutions such as Kubernetes.
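As a hedged illustration of the privilege problem mentioned above, the following docker run invocation restricts a container to a sensible minimum. The image name is a placeholder, and this flag set is one reasonable baseline, not an exhaustive hardening guide.

```sh
# Sketch: run a container with reduced privileges (image name is a placeholder).
# --user         : do not run as root inside the container
# --cap-drop     : drop all Linux capabilities
# --security-opt : block privilege escalation via setuid binaries
# --read-only    : mount the root filesystem read-only
docker run \
  --user 1000:1000 \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  example/app:1.0
```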
As a direct consequence of such vulnerabilities, mechanisms could be introduced unnoticed that force a container to consume an unusually large share of system resources and thereby impair the performance of other containers or environments. In addition, there may be gaps in the communication protocols used for data exchange.
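One common countermeasure is to cap each container's resource consumption at the orchestration level. The following Kubernetes pod spec is a minimal sketch; the names and the concrete values are illustrative assumptions.

```yaml
# Sketch: capping a container's resource draw in Kubernetes (names/values illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: example/app:1.0
      resources:
        requests:          # what the scheduler reserves for the container
          cpu: "250m"
          memory: "128Mi"
        limits:            # hard ceiling; exceeding the memory limit gets the container OOM-killed
          cpu: "500m"
          memory: "256Mi"
```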
Detecting and closing vulnerabilities in container environments is made even more difficult by the fact that individual containers often have a short lifetime.
Docker and Kubernetes challenge monitoring
However, closing security gaps is only one aspect of using containerization. After all, the goal is not just secure applications, but applications that work and are resilient. The performance and availability of every application within a container must be checked, just as CPU and memory consumption must be checked against defined maximum values. Request rates, data throughput and error rates provide important information about the health of individual containers and of the overall application.
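In Kubernetes, availability checks of this kind are typically expressed as probes. The snippet below is a minimal sketch; the endpoint paths and the port are assumptions for illustration.

```yaml
# Sketch: health checking a container via Kubernetes probes (paths/port illustrative).
livenessProbe:            # restart the container if it stops responding
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:           # remove the pod from load balancing while it is not ready
  httpGet:
    path: /ready
    port: 8000
  periodSeconds: 5
```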
Classic application performance monitoring or infrastructure monitoring inevitably reaches its limits in an increasingly complex IT structure: the more fragmented the environment, the harder it becomes to determine operational key figures (work metrics) and resource-related values (resource metrics) through purely manual monitoring. Due to the chosen structure, a whole series of things typically has to be watched in parallel: the containers themselves, the respective cloud platform, individual services and the Kubernetes environment, for example to obtain information about failed starts or pod restarts. These tasks are hardly manageable with classic monitoring approaches.
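Even at the command line, spotting failed starts and restarts quickly becomes a manual routine, which illustrates why automation is needed. The commands below are standard kubectl calls; the namespace is a placeholder.

```sh
# Sketch: manually checking for failed starts and pod restarts (namespace is a placeholder).
# The RESTARTS column reveals crash-looping containers.
kubectl get pods --namespace my-app

# Recent warning events, e.g. image pull failures or failed probes.
kubectl get events --namespace my-app --field-selector type=Warning

# Detailed state and restart history of a single pod.
kubectl describe pod <pod-name> --namespace my-app
```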
Secure operation, but with the right tools
For the reasons outlined, secure and stable operation of container environments and modern infrastructure is best ensured by well-coordinated monitoring solutions. Monitoring can and should begin as part of the development process, because it can already deliver important results there, uncovering weak points in the performance of the overall application.
On the one hand, a comprehensive monitoring solution for containerized environments takes security into account. "Static Application Security Testing" (SAST) can be carried out at an early stage, when code does not yet have to be fully executable, in order to detect risks in containers and the libraries used. "Dynamic Application Security Testing" (DAST) prevents vulnerable code from being deployed in the first place.
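In practice, such checks are often wired into the CI pipeline so that a build fails before a vulnerable image can be deployed. The following GitHub Actions step is a hedged sketch using the open-source scanner Trivy; the image name is a placeholder, and this is one possible setup, not the article's prescribed tooling.

```yaml
# Sketch: failing the pipeline when an image contains serious vulnerabilities.
# (Image name is a placeholder; Trivy is one example of an open-source scanner.)
- name: Scan container image
  run: |
    trivy image \
      --exit-code 1 \
      --severity HIGH,CRITICAL \
      example/app:${{ github.sha }}
```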
Precisely because complex environments involve a large number of containers and services, and a correspondingly large number of metrics, monitoring must be clearly structured and automated so that DevOps and DevSecOps teams can quickly identify potential problems. Only then can they act in time to prevent major failures or outright downtime.
Machine learning methods serve monitoring well because they not only notice deviations from previously defined thresholds that describe a "healthy" system, but can also point out possible problems based on their own forecasts, for example gradually increasing latencies for database queries that the team would otherwise not have noticed at all.
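To make the idea tangible, here is a deliberately simple Python sketch of threshold-free anomaly detection on a latency series using a rolling forecast. Real monitoring products use far more sophisticated models; the window size, tolerance and sample data here are invented for illustration.

```python
# Sketch: flagging creeping latency growth without a fixed threshold.
# A rolling mean serves as a naive "forecast"; points far above it count as anomalies.
# Window size, tolerance and the sample data are illustrative assumptions.
from statistics import mean, stdev

def detect_anomalies(latencies_ms, window=20, tolerance=3.0):
    """Return indices where the value deviates strongly from the rolling forecast."""
    anomalies = []
    for i in range(window, len(latencies_ms)):
        history = latencies_ms[i - window:i]
        forecast = mean(history)          # naive forecast: recent average
        spread = stdev(history) or 1.0    # avoid division by zero on flat series
        if (latencies_ms[i] - forecast) / spread > tolerance:
            anomalies.append(i)
    return anomalies

# Simulated query latencies: stable around 40 ms, then slowly creeping upward.
series = [40.0 + (i % 5) for i in range(60)] + [45.0 + i * 1.5 for i in range(20)]
print(detect_anomalies(series))
```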
Secure and stable operation of container environments is thus possible if monitoring covers both security and performance, automatically detects anomalies, and does not keep "real user monitoring" and "application performance monitoring" in separate data silos.
* Alexander Weber is Regional Vice President Central Europe at Datadog, the monitoring and security platform for cloud applications. Weber, who has almost 20 years of sales and management experience, is responsible for Datadog's operational business and growth in the core regions of DACH, Benelux and Eastern Europe. Before joining Datadog at the end of 2021, he was Regional Vice President for Germany at IT security provider Tanium. Prior to that, he was Director of Sales at ServiceNow. Earlier stops in his career include Juniper Networks, CA Technologies and Dell. In his position at Datadog, Weber reports directly to Patrik Svanström, Vice President EMEA Sales.