The previous part of our blog post series ended with the decoupled user interface component tm-ui-v1 for TicketMonster. This component runs as a single Cloud Foundry app and forwards user actions to the endpoints of TicketMonster. Nevertheless, the monolith still contains its own UI as part of the deployment; it handles 50% of the traffic and serves as a fallback in case of problems with tm-ui-v1.
This blog post focuses on removing the legacy UI from TicketMonster in a fearless manner. To do so, we first want to understand whether the newly introduced tm-ui-v1 impacts users in a negative way. If the new front-facing component of TicketMonster behaves as intended and causes no performance deficit, the next step is to disconnect the old UI and remove it from the code base. This step creates a thinner backend version of TicketMonster (backend-v1) that will be introduced using a canary release mechanism.
To follow this part of the blog post series, we need three projects available on GitHub:
Compare old vs new User Interface
At the end of the previous blog post, we made use of an HTTP request header to differentiate between user calls directed to the monolithic application and user calls routed through the new user interface tm-ui-v1. In more detail, a script in the installed Apigee Edge API Proxy adds the request header X-Canary with either the value tm-ui-v1 or ticket-monster. Based on this request attribute, Dynatrace makes it simple to compare, for example, the response time of the two request routes using custom charts.
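To make this tagging step concrete, here is a minimal sketch of the logic. In an actual Apigee Edge JavaScript policy the header would be attached via `context.setVariable("request.header.X-Canary", ...)`; the logic below is kept as a standalone function so it can run outside Apigee, and the function name is illustrative, not from the original script:

```javascript
// Sketch of the X-Canary tagging step. In an Apigee Edge JavaScript policy
// the header would be set with:
//   context.setVariable("request.header.X-Canary", value);
// Here the mapping is a plain function for testability. Route names follow
// the blog post; the function name itself is an assumption.
function tagRequest(headers, route) {
  var tagged = Object.assign({}, headers); // do not mutate the original headers
  // Stamp the marker so Dynatrace can chart the two request paths separately.
  tagged["X-Canary"] = route === "tm-ui-v1" ? "tm-ui-v1" : "ticket-monster";
  return tagged;
}
```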
A chart for TicketMonster’s traffic reveals a picture as shown below. There, we can see that the response time of the orange line, which represents the traffic through tm-ui-v1, is above the cyan line, which depicts hits to TicketMonster’s legacy UI. In other words, users experience a higher response time when using the new UI. We should be aware of this issue, but it is inevitable when extending the service flow by an additional component: the extra hop does not come for free. Consequently, the additional cost should be considered in absolute numbers, and there we can see that the worst response time was 25 ms, a value that is acceptable for a common web application.
Before moving on, we want to quickly touch upon a concern that may arise regarding visibility. Is it still possible to trace a request even though we decoupled tm-ui-v1 from TicketMonster? Don’t worry, Dynatrace automatically captures all requests end-to-end and can still trace each step across services, down to the code level. Just check out the PurePath of a request from tm-ui-v1 and you will get a picture as shown below. This screenshot depicts tm-ui-v1 as the request initiator that calls an endpoint of ticket-monster, which executes the method getAll. Moreover, we also see the tables that get queried. All in all, full end-to-end visibility of a service request is ensured.
Clean up TicketMonster’s Code Base
As we are confident that the old UI of TicketMonster has become obsolete, it is time to remove the legacy code. In a real-world scenario, you should create a branch in the code repository of your monolithic application, since this lets you merge upcoming features, which still get pushed into the monolith’s code base, into the code base of the backend version without a UI. For the sake of simplicity, and since TicketMonster does not receive any new functionality, we copy the monolith to a new folder named backend-v1.
To clean up the new backend version of TicketMonster, we can delete the client part of the application (i.e., the package ./src/main/webapp/) that we previously extracted to tm-ui-v1. As a result, our monolith becomes more compact, as we got rid of the entire UI code. To deploy backend-v1 to PCF, I refer to the instructions summarized on GitHub in backend-v1. According to these steps, backend-v1 uses the same MySQL service instance as ticket-monster, resulting in two applications feeding the same database.
To put a client-facing component in front of backend-v1, an additional version of tm-ui-v* is required that forwards its traffic to backend-v1. As we did with tm-ui-v1, it is necessary to set the proxy and reverse proxy in httpd.conf to the target backend (see below). Afterwards, we change the version of this user interface from v1 to v2 before conducting the deployment steps summarized in tm-ui-v2.
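The relevant httpd.conf lines might look like the following sketch. The backend host is a placeholder (use the actual PCF route of backend-v1), and the /rest path prefix is an assumption about where TicketMonster’s REST endpoints live:

```apache
# Sketch only: forward the UI's API calls to backend-v1.
# "backend-v1.example.com" is a placeholder for the actual PCF route,
# and the /rest prefix assumes TicketMonster's REST endpoints are served there.
ProxyPass        "/rest" "http://backend-v1.example.com/rest"
ProxyPassReverse "/rest" "http://backend-v1.example.com/rest"
```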
Based on the result of the previous steps, cf apps should now list the additional applications backend-v1 and tm-ui-v2. If you open tm-ui-v2 in a browser and used the source code from the project tm-ui-v2, you should see the remark “This UI hits the BACKEND”. This remark is added to the index.html of this UI version to differentiate between tm-ui-v1 and tm-ui-v2 from a client-side perspective. To differentiate the traffic from a server-side perspective, and to release the backend version combined with its user interface in a controlled manner, let’s continue expanding our skills regarding canary releases.
Dark Launch and Canary Release as Deployment Strategy
At this step of our journey, backend-v1 and tm-ui-v2 take no load, while ticket-monster and tm-ui-v1 deal with the entire user traffic. In other words, our new backend-v1 and tm-ui-v2 components have been deployed to production but have not been released to anyone. This allows us to perform an informal dark launch and to roll out the new deployment, for example, to an internal user group or to a subset of users from a selected region.
Since we installed an Apigee Edge API Proxy in the previous blog post, let’s use this proxy to roll out the new backend-v1 and tm-ui-v2 gradually. For instance, let internal users be our test group, who work with the new deployment first. Those internal users can be identified by their IP address, an HTTP header, the browser version, etc. For this example, any request with the HTTP request header X-Dark-Launch set to ‘internal’ will be routed to the new backend-v1 and tm-ui-v2 services. The Apigee Edge script for this rule looks as follows:
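As a sketch of what such a rule could look like, the routing decision is expressed here as a pure function so it can be tested outside Apigee. In an actual Apigee Edge JavaScript policy, the header would be read with `context.getVariable("request.header.X-Dark-Launch")` and the chosen target applied via `context.setVariable("target.url", ...)`; the host names below are placeholders, not the real PCF routes:

```javascript
// Routing decision for the dark launch, extracted as a pure function.
// Inside an Apigee Edge JavaScript policy the header would come from
//   context.getVariable("request.header.X-Dark-Launch")
// and the result would be applied via context.setVariable("target.url", ...).
// Host names are placeholders for the actual PCF routes.
function chooseTarget(darkLaunchHeader) {
  if (darkLaunchHeader === "internal") {
    // Internal testers are routed to the dark-launched deployment.
    return "https://tm-ui-v2.example.com";
  }
  // Everyone else keeps hitting the current production UI.
  return "https://tm-ui-v1.example.com";
}
```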
To test the above rule and get routed to our dark-launched components, a script in the load-generation project continuously clicks through TicketMonster while the HTTP request header X-Dark-Launch is set to ‘internal’. Furthermore, we can use JMeter to create additional load on TicketMonster. At a certain point in time, we then activate the above rule to redirect user calls from tm-ui-v1 to tm-ui-v2 depending on X-Dark-Launch.
The result of turning on the rule is depicted by the Dynatrace custom chart below. According to this chart, tm-ui-v1 (yellow line) took the entire load of TicketMonster at the beginning of the time frame. Around 1:12 pm the rule was activated. This brought the dark-launched tm-ui-v2 and backend-v1 (green line) into play and transferred the load of a simulated user group from tm-ui-v1 over to the new components.
We have now redirected a dedicated user group to the dark-launched services. From here we can start a deployment to our customer base by doing a canary release. For instance, we route 1% of live traffic to our new components and slowly increase the share to 5%, 10%, 50%, etc., as long as we observe no adverse effects. Here’s an extension of the above Apigee Edge routing rule that canaries the v2 traffic at 25%:
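A sketch of such an extension, again written as a pure function with an injectable random draw so it can be tested deterministically. The percentage and host names are illustrative; in Apigee the chosen target would be applied via `context.setVariable("target.url", ...)`:

```javascript
// Canary routing sketch: internal users always get v2; for everyone else a
// random draw sends `canaryPercent` percent of requests to v2. The `rand`
// parameter is injectable for testing and defaults to Math.random().
// Host names are placeholders for the actual PCF routes.
function chooseCanaryTarget(darkLaunchHeader, canaryPercent, rand) {
  rand = rand === undefined ? Math.random() : rand;
  if (darkLaunchHeader === "internal" || rand * 100 < canaryPercent) {
    return "https://tm-ui-v2.example.com"; // dark-launched / canary deployment
  }
  return "https://tm-ui-v1.example.com";   // current production UI
}
```

Raising the canary share then only means changing `canaryPercent` (e.g., from 25 to 50) in the deployed script.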
To see that this mechanism works, consider the above screenshot (Deploying the new backend version of TicketMonster) around 1:20 pm. At this point in time, we can see that the traffic of tm-ui-v1 decreased by 25% and was taken over by tm-ui-v2. Since neither the internal user group nor any TicketMonster customer complained about the new UI, I manually raised the traffic share of tm-ui-v2 by an additional 25% in the Apigee Edge script.
Worried about controlling a deployment with just a few lines of code? So far this has been a pragmatic approach, but stay tuned to learn more about Apigee Edge and how to use it in a more convenient manner.
Since TicketMonster’s client-facing part is successfully decoupled and is being rolled out to customers step by step, we can now start breaking up the business logic of the backend service. Hence, the next blog post will go into more detail about extracting the first “real” microservice that deals with a specific domain – a bounded context – of the monolithic application.