How to ensure the availability of your own services: Modern applications on the test bench
Applications are increasingly based on a complex web of APIs, cloud services and Internet networks. It is crucial that their availability is ensured at all times.
Especially with API-based applications, it is difficult to see what lies beneath the surface.
A comparison of modern applications with icebergs hits the nail on the head: only a tiny part of the whole, the tip of the iceberg, is visible to the user. This is usually where all communication with the user takes place, whether it is input, interaction or feedback from the app.
The rest, a much larger part of the application, remains hidden from the user. Users nevertheless know that the app consists of more components than the visible tip and draw their own conclusions about the application. If the overall package of user experience and the application's reputation is right, those conclusions are, as desired, positive.
The actual scope of the underlying applications becomes particularly clear when you consider mundane processes such as placing an item in a shopping cart. What is only of peripheral importance to the user typically requires countless actions that call REST-based API services and span different platforms.
Most of these services and platforms are distributed across the Internet. They include messaging services such as Twilio for cloud communications, e-commerce websites that process payments via Stripe, or delivery services that use Google Maps for geolocation.
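How such a fan-out looks in practice can be sketched roughly as follows. The sketch is purely illustrative: the service names, actions and payloads are hypothetical stand-ins, not the real Twilio, Stripe or Google Maps APIs.

```python
# Purely illustrative: service names, actions and payloads are hypothetical
# stand-ins, not the real Twilio, Stripe or Google Maps APIs.

def call_api(service, action, payload):
    """Stand-in for an HTTP request to a remote REST service."""
    return {"service": service, "action": action, "status": "ok", "echo": payload}

def add_to_cart(user_id, item_id):
    """One 'trivial' user action fans out to several distributed services."""
    return [
        call_api("inventory", "reserve", {"item": item_id}),      # stock platform
        call_api("payments", "authorize", {"user": user_id}),     # payment provider
        call_api("geo", "estimate_delivery", {"user": user_id}),  # geolocation service
        call_api("notify", "cart_updated", {"user": user_id}),    # messaging service
    ]

results = add_to_cart("u-42", "sku-123")
print(len(results))  # four remote calls behind one click
```

Each of these calls typically travels over the public Internet to a different provider, which is exactly why the overall transaction is only as available as its weakest dependency.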
For all these services to interact smoothly and give the customer an optimal digital experience, the overall performance of the applications must be optimized. This requires a better understanding of how APIs behave and an optimization of their reachability across Internet and cloud provider networks.
Transparency and visibility are the key to solving problems
Problems with online service providers of any kind are, first of all, an annoyance that can occur at any time. During an outage, customers can no longer access the desired services, and companies risk a significant loss of trust if the problem takes long to resolve.
Especially when problems arise that leave a workflow incomplete, it must be clarified quickly how full functionality can be restored. Given the complexity of modern systems, it is not always easy to pinpoint exactly where the cause of the failure lies.
A first step would be to rely on conventional network and application monitoring tools as a precaution. However, since these do not offer the transparency and flexibility needed for targeted troubleshooting, their scope is limited: packet capture and flow analyzers, for example, do not work outside your own digital environment.
The possible causes of a failure therefore cannot be fully clarified. Service providers can see that a problem exists and that their own service is unreachable, and also that the problem is not caused by disruptions within their own infrastructure.
At this stage, no progress has been made on the issue. If third parties are now brought in to solve the problem, the necessary countermeasures are often delayed even further: the right contact persons first have to be reached and briefed on the situation.
In addition, it often takes extra persuasion before it is clearly acknowledged that the problem lies at the corresponding source and was not caused within the service provider's own infrastructure. Providing this evidence requires additional time and resources. During these laborious steps, every minute that your own service remains unavailable means reputational damage and lost revenue.
Another challenge is the inconsistency of delivery paths. In the cloud, there is never a stable state. For example, if you depend on a third-party API hosted in Ireland, there is no guarantee that it will still be hosted there tomorrow. Data centers are set up, relocated or disappear entirely. All of this can directly affect the functionality of the application, which means ever more and better tools are needed to solve problems in a timely and efficient manner.
Browser synthetics are a critical tool for testing applications and evaluating the user experience across entire application workflows. However, there are cases where even a single browser request triggers multiple backend API interactions that cannot be traced directly from the user's perspective.
An example of this is filling out an order form on an e-commerce website. The application makes a number of different API calls: before the user finally receives an order confirmation, stock checks, payment processing and organizational steps, such as creating an order number, are carried out in the background. These backend services are invisible to the customer. Errors or problems in them therefore go unnoticed by the user, yet they have a direct impact on the service and thus on customer satisfaction.
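The effect can be illustrated with a small sketch, assuming a hypothetical order workflow in which the payment step silently fails: the user only sees a generic error message, while the operator needs per-step visibility to pinpoint the cause.

```python
# Hypothetical backend workflow behind a single "place order" click.
# The failing payment step is simulated here; in reality it could be any
# third-party API that times out or misbehaves.

def check_stock(item_id):
    return {"ok": True}

def charge_payment(amount):
    return {"ok": False, "error": "gateway timeout"}  # hidden backend failure

def create_order_number():
    return {"ok": True, "order_no": "A-1001"}

def place_order(item_id, amount):
    steps = {"stock": check_stock(item_id)}
    if steps["stock"]["ok"]:
        steps["payment"] = charge_payment(amount)
    if steps.get("payment", {}).get("ok"):
        steps["order"] = create_order_number()
    # The user only sees the aggregate outcome, not which step failed.
    user_view = "confirmation" if "order" in steps else "something went wrong"
    return user_view, steps

view, steps = place_order("sku-123", 19.99)
print(view)  # the user's entire insight into the failure
```

A browser synthetic would only capture `view`; the per-step record in `steps` is what backend-oriented testing has to supply.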
For application operators, a product-owner testing approach makes the most sense here. It assesses not only front-end interactions but also includes the core applications in the test activities. In this way, the data flows across the network, all the way to service and cloud providers, can be traced and problems identified in a targeted manner. An even more comprehensive approach is adaptive API monitoring.
Adaptive API monitoring paves the way for the future
Compared to the product-owner testing approach, adaptive API monitoring addresses the needs of application operators even more deeply. To account for dependencies, API calls can be executed sequentially, conditionally or iteratively. This type of monitoring thus provides a highly flexible synthetic testing framework that emulates the interactions of backend applications with remote API endpoints.
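What sequential, conditional and iterative calls mean in such a framework can be sketched as follows. This is a minimal illustration under assumed semantics, not the API of any real monitoring product; `fake_endpoint` simulates the remote services.

```python
import time

# Minimal sketch of an adaptive synthetic API test. The endpoints and their
# behavior are simulated; no real monitoring product's API is used.

def fake_endpoint(name, attempt):
    """Simulated remote APIs: 'auth' always succeeds, 'orders' succeeds on retry."""
    if name == "orders" and attempt < 2:
        return {"status": 503}
    if name == "auth":
        return {"status": 200, "token": "t-abc"}
    return {"status": 200, "items": 3}

def timed_call(name, attempt=1):
    start = time.perf_counter()
    resp = fake_endpoint(name, attempt)
    resp["latency_ms"] = round((time.perf_counter() - start) * 1000, 3)
    return resp

def run_transaction(max_retries=3):
    results = {"auth": timed_call("auth")}       # sequential: step 1 runs first
    if results["auth"]["status"] != 200:         # conditional: step 2 depends on step 1
        return results
    for attempt in range(1, max_retries + 1):    # iterative: retry until healthy
        orders = timed_call("orders", attempt)
        if orders["status"] == 200:
            break
    results["orders"] = orders
    return results

report = run_transaction()
print(report["auth"]["status"], report["orders"]["status"])
```

Because each step records its own status and latency, a transient 503 on the second endpoint shows up in the report even though the transaction as a whole eventually succeeds.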
API monitoring tests are location-independent: they can be run remotely from external resources or from instances located within the hosting environment. An advantage of the latter approach is that the specific network paths and the performance between the application and the API endpoints can also be monitored.
Given the strong focus on web applications and the increasing complexity of interdependent APIs in the online environment, powerful monitoring options are an absolute must for decision-makers and managers. Product owners must be able to track the performance of their own services dynamically. This also means always keeping the interplay between individual functions in view in order to prevent failures as far as possible.
Likewise, it must be possible to validate the logic of complex workflows at any time so that, in the event of a problem, the cause can be determined beyond doubt. After all, only comprehensive process visibility and the insights it provides can keep the hidden side of the iceberg under control, so that users and customers always enjoy a flawless digital experience and ultimately remain loyal to the application.
* Joe Dougherty is Product Manager at ThousandEyes.