From DevSecOps to Automation: Monitoring and Observability Trends 2022
The market for Application Performance Management (APM) is one of the fastest-growing in the IT industry. But which trends will shape monitoring in the future? We take a look at the most important developments.
Modern monitoring and observability approaches are intended to make the increasing complexity in modern application landscapes manageable.
Dealing with the increased complexity of ever more heterogeneous and distributed application landscapes is currently one of the most important factors shaping the APM space. Data growth, security requirements, and a lack of standards place new demands on monitoring and observability solutions, to which vendors and users alike have to find answers.
Against this background, five key trends can be observed:
Trend 1: APM goes Security
The growing link between APM and cybersecurity is not an entirely new phenomenon. Not least because of the Log4Shell incident, however, the question of how to identify and fix critical security vulnerabilities as quickly as possible has gained new urgency. More and more APM and monitoring vendors are therefore integrating security features into their products that detect vulnerabilities in the libraries used as early as the development process.
To this end, the artifacts are automatically analyzed at runtime with each new deployment and compared against online vulnerability databases, giving the dev team real-time feedback on possible security vulnerabilities. This immediate feedback supports a quick fix of critical problems without extra loops through a separate security team, in the spirit of a DevSecOps approach.
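The deployment-time check described above can be sketched in a few lines. This is a minimal illustration with a hypothetical in-memory vulnerability database; real tools query live feeds such as OSV or the NVD instead.

```python
# Hypothetical vulnerability database: package name -> affected versions.
# Real scanners pull this data from feeds such as OSV or the NVD.
VULN_DB = {
    "log4j-core": ["2.14.0", "2.14.1"],  # e.g. Log4Shell-affected releases
}

def scan_artifact(dependencies):
    """Return the pinned dependencies that are known to be vulnerable."""
    findings = []
    for name, version in dependencies.items():
        if version in VULN_DB.get(name, []):
            findings.append((name, version))
    return findings

# Feedback for the dev team on each new deployment
deps = {"log4j-core": "2.14.1", "guava": "31.0"}
print(scan_artifact(deps))  # -> [('log4j-core', '2.14.1')]
```

In a real pipeline this check would run as a CI/CD step and fail the deployment, or at least alert the team, when findings are non-empty.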
Trend 2: Smart storage instead of data explosion
Fueled by the trend towards microservice architectures, experts estimate that data volumes in monitoring are currently doubling every year. Where monoliths used to generate log data and metrics in just a few places, today we increasingly encounter microservice worlds with distributed multi-cloud systems, each of which produces its own log information, metrics, and traces, leading to an exponential increase in the amount of data to be stored.
Against this background, there is a growing need for intelligent storage solutions that not only store this data but can also standardize, unify, and correlate the different data types originating from different cloud platforms. When a problem occurs, all the relevant data can then be used in the APM analysis to pinpoint the cause, instead of leaving unlinked data that offers no real value for troubleshooting and only drives up storage costs.
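The kind of correlation meant here can be illustrated with a toy example: log lines and trace spans from different sources are linked via a shared trace ID. The record layouts below are hypothetical; real observability backends first normalize vendor-specific formats before correlating.

```python
from collections import defaultdict

# Hypothetical telemetry from two different sources
logs = [
    {"trace_id": "a1", "msg": "payment failed"},
    {"trace_id": "b2", "msg": "login ok"},
]
traces = [
    {"trace_id": "a1", "service": "checkout", "latency_ms": 950},
]

def correlate(logs, traces):
    """Group log lines and trace spans that belong to the same request."""
    merged = defaultdict(lambda: {"logs": [], "spans": []})
    for entry in logs:
        merged[entry["trace_id"]]["logs"].append(entry["msg"])
    for span in traces:
        merged[span["trace_id"]]["spans"].append(span["service"])
    return dict(merged)

result = correlate(logs, traces)
# All data for request "a1" is now linked for root-cause analysis
print(result["a1"])
```

Linked this way, a single failed request carries its logs and spans together, which is exactly what makes the stored data useful in an analysis rather than a cost driver.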
Trend 3: Automate or Die
The rise of multi-cloud approaches and the growing autonomy of development teams in choosing their technologies lead to increasingly heterogeneous application landscapes. In globally distributed applications, we encounter ever broader technology stacks, which makes it more and more difficult for an ops team to find and understand errors manually.
The only way to counter this increased complexity is through greater automation. Tool providers are therefore increasingly relying on intelligent self-remediation or self-healing mechanisms that not only detect problems but also automatically initiate countermeasures via predefined events and a corresponding coupling of the toolchain (e.g. deployment toolchains, CI/CD pipelines), such as provisioning additional resources at peak loads. Such event-driven automation logic, combined with machine learning or AI components, will continue to gain importance in the coming years.
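The event-to-countermeasure coupling can be sketched as a small handler registry. The event names and the scaling action below are made up for illustration; in practice the handler would call a cloud API or trigger a CI/CD pipeline rather than a local function.

```python
# Registry mapping monitoring events to remediation handlers
REMEDIATIONS = {}

def on_event(name):
    """Register a remediation handler for a named monitoring event."""
    def register(fn):
        REMEDIATIONS[name] = fn
        return fn
    return register

@on_event("cpu_peak_load")  # hypothetical event name
def scale_out(payload):
    # In practice: call the cloud API / deployment toolchain here
    return f"scaling {payload['service']} to {payload['replicas'] + 2} replicas"

def handle(event):
    """Dispatch an incoming event; unknown events go to the ops team."""
    handler = REMEDIATIONS.get(event["name"])
    return handler(event["payload"]) if handler else "escalate to ops"

print(handle({"name": "cpu_peak_load",
              "payload": {"service": "checkout", "replicas": 3}}))
# -> scaling checkout to 5 replicas
```

The interesting part in real systems is what feeds this dispatcher: anomaly detection or AI components decide when an event fires, and the registry decides what happens next without a human in the loop.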
Trend 4: End User Centricity
A central component of APM is the end-user experience, and its importance keeps growing. End customers' expectations of digital user experiences have risen significantly over the last two pandemic years. This creates the need to think about the digital services provided more from the customer's perspective.
For APM, this means evaluating faults end to end, understanding their impact on the end users' digital experience and, on that basis, taking the measures needed to remedy them. In addition, information about user behavior provides important insights into sensitive points of the customer journey that help safeguard business processes.
Trend 5: OpenTelemetry
Especially against the background of increasingly complex, distributed application landscapes and diverse data types, the question of uniform standards for telemetry data such as logs, metrics, and traces is playing an increasingly important role. According to Gartner, seven out of ten providers of cloud-native applications will have implemented OpenTelemetry (OTel) by 2025. OTel is a vendor-neutral open-source standard for collecting and transmitting telemetry data so that it can be read and processed across systems.
Although OpenTelemetry is still under development, providers and users of observability solutions should already be engaging with the new standard and starting proofs of concept (PoCs) to identify suitable implementation approaches: for example, should you rely on agents that collect metrics automatically, or is manual instrumentation in the code the better choice?
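To make the "manual instrumentation" option concrete, here is a deliberately simplified stand-in: the developer opens spans explicitly around units of work. This toy context manager only mimics the idea; the actual OpenTelemetry Python SDK provides `tracer.start_as_current_span()` for this purpose and exports the spans to a backend.

```python
import time
from contextlib import contextmanager

spans = []  # collected telemetry; a real SDK would export this to a backend

@contextmanager
def span(name):
    """Toy span: records the name and duration of a unit of work."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append({"name": name,
                      "duration_ms": (time.perf_counter() - start) * 1000})

with span("checkout"):       # developer marks the unit of work explicitly
    with span("db_query"):   # nested step within the same request
        time.sleep(0.01)     # placeholder for real work

print([s["name"] for s in spans])  # -> ['db_query', 'checkout']
```

Agent-based auto-instrumentation produces comparable spans without touching the code, at the price of less control over what exactly is measured; a PoC is the right place to weigh the two approaches.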
With questions like these, now is the right time to gain experience and set the strategy for the future. Because one thing seems clear: OpenTelemetry will likely be a very important building block when it comes to collecting metrics.
Monitoring becomes smart
After years of strong growth and short development cycles, the monitoring and observability market seems to be entering a new phase of maturity, whose main goal is to make the increasing complexity of modern application landscapes manageable. Automation, intelligence in data processing, and the linking of data with business requirements play a key role in this.
In the future, APM solutions will differ less in how they collect data than in how intelligently they correlate it and thus make it usable. Open standards such as OpenTelemetry are becoming an important component, especially in highly distributed systems, to ensure consistent monitoring across technology and tool boundaries.
* Stephanie Köhnlein is Senior APM Consultant at iteratec GmbH.