24 May
Synchronize an asynchronous flow with IBM IIB
During an integration process, it often happens that a consumer (client application) requires a synchronous flow. In other words, it is a requirement to guarantee that the communication (the complete round trip of the message) is synchronous. This can easily be provided using the default components of IIB. Using communication protocols that are synchronous by nature (such as SOAP) and implementing all functionality inside the same message flow will help to provide a synchronous service. However, it is not always as simple as that. It will happen that a database or another back-end service has to be consumed to enrich data. It is also possible that the architecture dictates that you have to consume internal services. Even this can be implemented synchronously, although it comes with a risk: the risk of blocking the flow of messages.
Risk of synchronous flows
As long as the functionality provided is guaranteed to be delivered fast and without risk of blocking, a synchronous flow does not hold any dangers. But when implementing complex procedures, and especially when consuming (external) back-end services, the efficiency of the exposed services is put at risk. Consider the following examples:
The ESB exposes one SOAP service which is consumed by one client. The client sends an XML message based on “XSD schema A” and requires a response based on “XSD schema B”. No direct issues can be discovered in this basic synchronous service. The client is responsible for creating a connection to the SOAP service and for waiting for a response from the service exposed by the ESB. The SOAP service, in its turn, is responsible for providing a response (an error or a success message with the required information). The client waits until the response to the previous request has arrived before sending a new request.
In this example, the performance of the service might suffer its first issues. Imagine this SOAP service is now consumed by two client applications but is only a single-threaded service. This means that when both “Client 01” and “Client 02” send a request, the second request has to wait until the first one is resolved and responded to before it can be processed. The easiest solution is to raise the number of threads available for this service, so that no delays occur because the SOAP service can process several requests at the same time. Although raising the number of threads should not be taken lightly, it is a possible solution.
In this example, you become dependent on the response time of a database and the speed of the available network. Mind that the DB is not part of the ESB, and thus not part of your responsibilities as an integration developer to maintain. You will be consuming a view, a table, a stored procedure, … Just to make it more realistic, we will add extra complexity by saying the SOAP service exposes several functions. The transformation makes sure the correct format is built to communicate with the DB, and provides a response understandable by the clients (mind that “Client n” stands for an unknown number of clients). The “Consume DB” component calls the DB and the required function. At this point, the efficiency depends on whether the DB responds fast. Imagine that one of the functions is a heavy stored procedure which takes time to run. This results in requests that time out, so clients have to resend their requests. The same might occur with an unstable network, … The previous solution might seem useful here but, as n clients might be a very high number, it is not recommended to raise the number of threads far enough to avoid the time-outs: reserving those threads for this service means other services might get blocked due to the lack of free threads on the machine. So you would only be moving the block to a different location.
This example is similar to the previous one. One SOAP service exposes multiple functions. Based on the functionality in the “Consume BE” component, the correct back end is consumed, whether this back end is an internal or external service, a DB, a SOAP service, … As long as your ESB service is single-threaded, you run the risk that one of the back-end services will block your service while it waits for the response, and your consumers will receive time-outs. As already mentioned, raising the number of threads only moves the block to a different process on the machine.
Possible solution
The title deliberately states “possible solution”, as there might be several possible solutions. The solution provided here is one I have been using for a while, and it has always delivered a high-quality result. An extra advantage is that this solution only uses default IIB components and requires only a limited amount of code to be written.
The main idea is to split the service that the ESB is exposing into 2 levels:
- 1st level: front side = responsible for the communication with the client application.
- 2nd level: back side = responsible for the functionalities provided by the ESB (Transformation, enrichments, routing, …)
Very often, the 2nd level is split again into an extra level which is only occupied with consuming extra (external/internal) services. Splitting the service into these 2 levels allows the first level to handle all the requests without having to worry about processing. This also allows the first level to continue accepting requests while the back-end flows deal with the processing. The front end only routes the request to the required process and stores the connection in a sort of connection pool. The connection, which is maintained by the client application, is reused when the back-end service delivers the required response to the front-end service.
Mind that when splitting the service into several levels, theoretically the service is no longer synchronous. This is because, inside the ESB, you will most often use MQ instead of SOAP services, and MQ is asynchronous by nature. Nevertheless, it is possible to keep the entire process synchronous from the client’s point of view.
The components used
There are 3 crucial components for this to work:
- MQOutput node
- MQHeader node
- MQGet node
Each of these nodes will require a specific configuration for this to work.
On the MQOutput node, set “New correlation ID”. This is required so that each message sent out has a unique correlation ID. Without it, conflicts will occur, as the MQGet node relies on this unique ID to match the correct message with the correct thread. Based on the screenshot of the message flow above, with this configuration the MQOutput node writes a message to the configured queue and also sends a copy onwards to the MQHeader node. Mind that the MQOutput node does not propagate the MQMD header it sent; instead, it places the resulting header information in the LocalEnvironment.
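For illustration only, the logic that the MQHeader node is configured to perform at this point can be sketched in ESQL. The WrittenDestination field names follow the standard IIB LocalEnvironment structure, but the Compute module and its name are hypothetical; the author’s actual flow uses an MQHeader node configured through its properties rather than code:

```esql
-- Hypothetical sketch of what the MQHeader node achieves here: take the
-- MsgId that the MQOutput node recorded in the LocalEnvironment (it does
-- not propagate the MQMD it actually sent) and turn it into the CorrelId
-- that the MQGet node will match responses on.
CREATE COMPUTE MODULE PrepareCorrelationForMQGet
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		SET OutputRoot = InputRoot;
		-- Copy the sent message ID into the correlation ID of a clean MQMD
		SET OutputRoot.MQMD.CorrelId =
			InputLocalEnvironment.WrittenDestination.MQ.DestinationData.msgId;
		RETURN TRUE;
	END;
END MODULE;
```

This makes explicit why the “New correlation ID” setting matters: every request leaves with a unique identifier, and that identifier is what ties the eventual response back to the waiting thread.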
By default, the MQGet node requires an MQMD header to provide the information it needs to correlate the response with the request. Although the MQGet node can be configured to find the required information at a custom location, I prefer to keep the MQGet node default and use an MQHeader node to recreate the MQMD header. This saves you from modifying the MQGet node, as the responsibility for providing a clean header now lies with the node whose purpose is to create clean MQ headers.
Based on the information it received from the MQHeader node (the MQMD header), the MQGet node starts listening on the queue provided in its configuration. When a message is posted on this queue, the MQGet node compares its correlation ID with the correlation ID in the message received from the MQHeader node. When there is a match, the MQGet node fetches that message from the queue and puts the response back onto the initial thread, allowing the first-level flow to provide the response to the client application.
The back side really depends on which processes are used. For example, when it only runs a local process and does not call any other services using a protocol other than MQ, no special actions are required: you can simply provide an MQOutput node to reply to the queue the MQGet node is listening on. However, when other services are consumed using protocols other than MQ, mind that the MQMD header can be lost. Therefore, the required variables from this MQMD header are often saved into the (flow-global) Environment tree.
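As a sketch, saving the relevant MQMD fields before a non-MQ call could look like the ESQL below. The storage location matches the convention described in the next paragraph; the exact set of fields you need to keep is an assumption and depends on your reply logic:

```esql
-- Sketch: before calling a non-MQ back end (e.g. an HTTP/SOAP request),
-- save the MQMD fields needed later for the reply. The Environment tree
-- survives across the nodes of the flow, unlike the propagated MQMD.
SET Environment.Variables.MQMDHeader.MsgId       = InputRoot.MQMD.MsgId;
SET Environment.Variables.MQMDHeader.CorrelId    = InputRoot.MQMD.CorrelId;
SET Environment.Variables.MQMDHeader.ReplyToQ    = InputRoot.MQMD.ReplyToQ;
SET Environment.Variables.MQMDHeader.ReplyToQMgr = InputRoot.MQMD.ReplyToQMgr;
```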
I prefer to always place it in the same location, just to be consistent in my coding and to keep it reusable: Environment.Variables.MQMDHeader. Mind that from here on you can forget about this information until you reply to the front side. Just before providing the response, you might want to add a new MQHeader node, for the same reason as with the MQGet node: keep the MQOutput or MQReply node default and use an MQHeader node to recreate the MQMD header. This saves you from modifying the reply node, as the responsibility for providing a clean header now lies with the node whose purpose is to create clean MQ headers.
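If you were to rebuild the reply header in a Compute node rather than an MQHeader node, the restore could look like the sketch below. Note that, by the usual MQ request/reply convention, a reply carries the request’s MsgId as its CorrelId; check which field your front-side MQGet node is actually configured to match on before relying on this:

```esql
-- Sketch: just before the reply node on the back side, rebuild the MQMD
-- from the values saved earlier in the Environment tree.
SET OutputRoot = InputRoot;
-- By MQ convention the reply's CorrelId is the request's MsgId; adjust
-- this to whatever the front-side MQGet node is configured to match.
SET OutputRoot.MQMD.CorrelId = Environment.Variables.MQMDHeader.MsgId;
SET OutputRoot.MQMD.MsgType  = MQMT_REPLY;
```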
Using this setup allows you to build a completely asynchronous service which is consumed in a synchronous way by the client application. It makes it possible to build loosely coupled functionalities in a service that is exposed as a synchronous service.
Although this is basic functionality in IIB, my experience has shown that many developers have trouble implementing and understanding this concept. Therefore, I have tried to describe in an easily readable way how it works and how it can be implemented.