App development: Using Kubernetes in production efficiently and securely
In app development, microservices have become the prevailing approach. While most organizations are technically well-positioned, they often underestimate the organizational impact of operating microservices at scale – especially in the areas of culture, complexity, and security.
In container clusters, companies can easily lose track.
Today, companies must react ever faster to changing requirements and deliver new solutions; the year 2020 made that plain. Agile development methods for applications based on microservices provide the necessary flexibility. Many of these were introduced before the pandemic, but the momentum has since accelerated significantly.
According to a survey of NGINX users, the share of companies that already use microservices in production rose from 40 to 60 percent last year. Containers are used more than twice as often as other modern app technologies.
Kubernetes is the de facto standard for managing containerized apps: in a 2020 study by the Cloud Native Computing Foundation (CNCF), 91 percent of respondents reported using Kubernetes, 83 percent of them in production.
Challenges in three areas
When introducing Kubernetes, many companies prepare well for the significant architectural changes. However, they are often surprised by the organizational impact of operating modern app technologies at scale. Significant challenges usually arise in three areas:
• Culture
Even when app teams use modern approaches such as agile development and DevOps, they usually remain subject to Conway's law. According to it, "organizations which design systems [...] are constrained to produce designs which are copies of the communication structures of these organizations."
In other words, distributed teams develop distributed applications. The teams usually work independently of each other but share resources. This structure lets them move quickly, but it also promotes the formation of silos, which often leads to poor communication, security vulnerabilities, tool sprawl, inconsistent automation, and disputes between teams.
• Complexity
Implementing enterprise-grade microservices requires a number of critical components that provide visibility, security, and traffic management. Typically, teams use infrastructure platforms, cloud-native services, and open source tools. Although these meet many requirements, they can also add considerable complexity.
To make matters worse, different teams within a company often choose different strategies for the same tasks, or they continue to use outdated processes and tools despite the changed requirements for deploying and operating modern microservices-based applications.
The CNCF Cloud Native Interactive Landscape illustrates how complex the infrastructure needed to support microservices can become. This multitude of technologies often means that companies become dependent on the chosen infrastructure, shadow IT emerges, tools spread uncontrolled, and the people entrusted with maintaining the infrastructure are overwhelmed.
• Security
The security requirements of cloud-native and traditional applications differ significantly, so established strategies such as perimeter protection are impractical with Kubernetes. The sheer size and distributed nature of containerized apps greatly expand the potential attack surface, and dependence on external SaaS applications gives cybercriminals far more opportunities to inject malware or exfiltrate information.
The challenges already described in the areas of culture and complexity – in particular the uncontrolled spread of tools – also directly affect the security and resilience of modern apps. Using different tools to solve the same problem is not only inefficient but also a major problem for SecOps teams, because they have to configure and secure each component correctly – or block the use of certain tools.
An efficient and secure solution
As with most organizational challenges, the solution is a combination of technologies and processes. Since Kubernetes is an open source technology, there are numerous ways to implement it. While some companies develop custom solutions, others value the flexibility, predictability, and support of managed offerings.
These include Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), Microsoft Azure Kubernetes Service (AKS), Red Hat OpenShift Container Platform, and Rancher, now under the leadership of SUSE. Such Kubernetes platforms usually make it easier to get started, as they offer many required services from a single source. As a rule, however, they do not include the functions necessary for production use at scale.
In particular, they often lack the up-to-date network and security functions needed to ensure stability and availability. For production use of Kubernetes, companies should therefore add three components:
1. A scalable ingress-egress layer to route traffic into and out of the cluster
An Ingress controller is a specialized load balancer that abstracts away the complexity of Kubernetes networking and bridges the gap between services inside and outside a Kubernetes cluster. Such a component is ready for production use if it offers high resilience, fast scalability, and self-service – including advanced health checks, Prometheus metrics, dynamic reconfiguration, and role-based access control.
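For illustration, here is a minimal sketch of an Ingress resource, assuming an NGINX-based Ingress controller is installed in the cluster; the hostname, Secret, and Service names are placeholders:

```yaml
# Minimal Ingress routing external traffic to an in-cluster Service.
# Assumes an NGINX-based Ingress controller (ingressClassName: nginx);
# hostname, Secret, and Service names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: webapp-tls      # TLS certificate stored as a Kubernetes Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp        # in-cluster Service that receives the traffic
            port:
              number: 80
```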
2. Built-in security to protect against threats across the cluster
While general safeguards are sometimes sufficient outside the cluster, fine-grained measures are required within it. Depending on the complexity, there are three places where a flexible Web Application Firewall (WAF) can be deployed: on the Ingress controller, as a proxy per service, and as a proxy per pod. A high degree of flexibility is needed to apply strict controls to sensitive applications such as billing and lighter controls to lower-risk applications.
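The "proxy per pod" variant can be sketched as a sidecar deployment. This is a schematic example only; the WAF image, ports, and names below are hypothetical placeholders rather than a specific product:

```yaml
# Schematic "WAF proxy per pod": a sidecar container sits in front of
# the application container. Image names and ports are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing
spec:
  replicas: 2
  selector:
    matchLabels:
      app: billing
  template:
    metadata:
      labels:
        app: billing
    spec:
      containers:
      - name: waf-sidecar
        image: registry.example.com/waf-proxy:1.0   # hypothetical WAF proxy
        ports:
        - containerPort: 8080   # the Service targets this port, so all
                                # traffic passes the WAF before the app
      - name: app
        image: registry.example.com/billing:1.0     # app listens only on localhost
```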
3. A scalable east-west traffic layer to optimize traffic within the cluster
This component is needed once Kubernetes applications exceed the level of complexity and scale that basic tools can handle. Companies then need a service mesh: an orchestration tool that provides even finer-grained security and traffic management for application services within the cluster. A service mesh is typically responsible for routing between containerized applications; it provisions and enforces autonomous service-to-service policies for mutual TLS (mTLS) and provides insight into the availability and security of the apps.
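To illustrate how little configuration such a policy can require, here is a sketch using Istio as one example of a service mesh (other meshes offer comparable mechanisms); the namespace name is illustrative:

```yaml
# Enforce mutual TLS for all workloads in a namespace (Istio example).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production   # illustrative namespace
spec:
  mtls:
    mode: STRICT          # reject any plaintext service-to-service traffic
```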
Careful selection matters
When choosing these components, companies should pay attention to portability and visibility. Platform-agnostic components reduce complexity and improve security because teams have fewer tools to use and secure. In addition, workloads are easier to relocate as business needs change.
The importance of visibility and monitoring cannot be overstated. Integrations with tools such as Grafana and Prometheus create a unified view of the infrastructure, so teams detect issues before they affect customers. Kubernetes-based applications can thus run in production securely and efficiently.
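As a minimal sketch of such an integration, assuming Prometheus runs inside the cluster (production setups often use the Prometheus Operator instead), a scrape configuration can discover pods via the Kubernetes API and collect metrics only from pods that opt in via the common prometheus.io/scrape annotation:

```yaml
# Excerpt from prometheus.yml: discover pods through the Kubernetes API
# and scrape only those annotated with prometheus.io/scrape: "true".
scrape_configs:
- job_name: kubernetes-pods
  kubernetes_sd_configs:
  - role: pod               # service discovery via the Kubernetes API
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"           # keep only pods that opt in to scraping
```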
* Steffan Henke is Technical Solutions Engineer, NGINX, at F5.