Dynamics 365 for Operations integration performance tuning tips

by nmaenpaa

There is surprisingly little documentation published about performance optimization for Dynamics 365 for Operations integrations. Lately I have been working with a customer that has very high-volume integrations, and I'd like to share some tips that have helped us achieve the needed results.

Our customer is currently running AX2012 in Azure, and they have high inbound transaction volumes from several interfaces. We are helping them upgrade to Dynamics 365 for Operations, and obviously they would like to get at least similar performance from the new version as they have been used to in AX2012.

I must admit that at first we had some unpleasant surprises when we started our performance testing with a few simple inbound interfaces. I created a data project and imported very simple files with a few thousand records. It seemed to take ages to get them processed, even with a Sandbox tier 3 environment. We ordered a Sandbox tier 5 environment but still didn't get better results. But after some digging into the characteristics of Dynamics 365 for Operations in the public cloud, we managed to get results that satisfied both us and our customer. After some adjustments we were able to handle loads that resemble their current production loads.

The first thing we noticed was that data import to a onebox development environment seemed to be significantly faster than import to a sandbox environment with an Azure SQL database. Actually this makes sense, since the development box has a local SQL Server and therefore almost no latency between the AOS and the database. In distributed environments like Sandbox tier 2 and higher, the round trip from the AOS to Azure SQL always takes a couple of milliseconds. Multiply this by tens of thousands of records and it starts to add up. How do we deal with this characteristic? There are two approaches: split the communication into multiple simultaneous threads, or reduce the number of round trips between the AOS and the database.
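To put the latency into perspective, here is a rough back-of-the-envelope calculation. The numbers are assumptions for illustration only, not measured values from any specific environment:

```python
# Rough estimate of how per-record round trips add up with network latency.
records = 50_000           # assumed number of inbound records
round_trip_ms = 2          # assumed AOS <-> Azure SQL round-trip latency
latency_seconds = records * round_trip_ms / 1000
print(f"{records} records x {round_trip_ms} ms = {latency_seconds:.0f} s on latency alone")
# -> 100 s of pure waiting, before any actual processing work is counted
```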

Approach 1: Multi-threading

We started testing by submitting multiple messages at a time to the recurring data job. Depending on the number of available batch threads (and, behind that, processor cores) we were able to enhance processing speed significantly. Our first test case was a single file with 5,000 records, about 7 megabytes in size. Processing it took about 10 minutes. Then we submitted 13 files with 5,000 records each to the job, and processing them took about 20 minutes. Quite nice. Then we played around with the file size and threading a bit more, and split our test material into over 130 files with 500 records in each file. The result was very pleasant: everything was processed in 12 minutes, practically the same time as it took to process the initial single file of 5,000 records. Now we had achieved the same processing time that our customer has in their AX2012 production environment today.
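For reference, this is roughly the kind of splitting logic we used to prepare the test material. It is only a minimal sketch, assuming a CSV source file with a header row; the chunk size and file naming are arbitrary:

```python
import csv
from pathlib import Path

def split_csv(source: Path, out_dir: Path, chunk_size: int = 500) -> list[Path]:
    """Split a CSV file into smaller files of at most chunk_size data rows,
    repeating the header row in every chunk."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with source.open(newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)

    chunks = []
    for i in range(0, len(rows), chunk_size):
        chunk_path = out_dir / f"{source.stem}_{i // chunk_size:04d}.csv"
        with chunk_path.open("w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(rows[i:i + chunk_size])
        chunks.append(chunk_path)
    return chunks

# Example: split a 65,000-record file into 130 files of 500 records each.
# chunks = split_csv(Path("testmaterial.csv"), Path("chunks"), chunk_size=500)
```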

One could ask why the material must be split into smaller files before submitting it to the recurring data job. Couldn't this be handled in Dynamics 365 for Operations? The answer is that it should be, but at least at the time of writing it doesn't really seem to work. In the System administration - Data management - Framework parameters form you can go to "Configure entity execution parameters" and set the threading logic at entity level: the number of rows per split and the number of threads to use. But for some reason this approach is not working at the time of writing. Based on our tests, this kind of multi-threading works only if a user manually imports the file in the data project and selects Import options - Import in batch. But if the file comes through an integration endpoint to the D365O queue, multi-threading won't happen.

[Screenshot: Configure entity execution parameters form]

I will demonstrate this with two different examples using the standard Payment methods (RetailTenderTypeEntity) entity:

Example 1 - manual import and batch processing: Open the data project, add the sample file, and select Import options - Import in batch. Then go to Execution history - View execution log. You can see that the processing was split into multiple pieces. Processing time was less than 30 seconds.

[Screenshot: Example 1 - data project (stagingperftest01.png)]

[Screenshot: Example 1 - execution log showing the split (executionhistory01.png)]


Example 2 - import using the recurring data job: Send the file to the recurring data job (using the integration platform). Go to the data project - Execution history - View execution log. You can see that the processing wasn't split into multiple pieces. Also, processing time was over 2 minutes.

[Screenshot: Example 2 - data project (stagingperftest02.png)]

[Screenshot: Example 2 - execution log without splitting (executionhistory02.png)]

Because of these results we ended up splitting the file in the integration platform. Now we could see that multi-threaded processing actually happened, and the processing took 34 seconds in total (go to the data project - Manage - Manage recurring data job - Manage messages):

[Screenshot: Manage messages showing multi-threaded processing (processingstatus01.png)]
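For completeness, here is a minimal sketch of how the split files can be submitted to the recurring data job from an integration platform or a script. It assumes the standard recurring integrations enqueue endpoint, a CSV source format matching the data project, and an Azure AD access token you have already acquired; the environment URL, activity ID and entity name are placeholders:

```python
import requests
from pathlib import Path

D365_URL = "https://yourenvironment.cloudax.dynamics.com"  # placeholder environment URL
ACTIVITY_ID = "<recurring data job activity ID>"           # from Manage recurring data job
ENTITY_NAME = "Payment methods"                            # entity used in the data project

def enqueue_file(path: Path, access_token: str) -> str:
    """Submit one file to the recurring data job's enqueue endpoint."""
    response = requests.post(
        f"{D365_URL}/api/connector/enqueue/{ACTIVITY_ID}",
        params={"entity": ENTITY_NAME},
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "text/csv",
        },
        data=path.read_bytes(),
    )
    response.raise_for_status()
    return response.text  # message ID of the queued message

# Submit every chunk as its own message so the batch framework
# can process them in parallel threads.
# for chunk in chunks:
#     enqueue_file(chunk, access_token)
```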

One thing to consider when splitting the files is that there is a bug in number sequence assignment when composite entities are combined with parallel processing. Its description can be found in LCS with ID 3813737. Because of this bug, multiple records may get the same number, which leads to conflicts. In the issue information, Microsoft has stated that they won't fix it, and that the workaround is to not submit messages to the data job simultaneously. So if you want to assign a unique primary key for your composite entity, you must either abandon parallel processing or generate that key before the file is submitted to the data job.
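A minimal sketch of that pre-assignment workaround, assuming the records are available as dictionaries in the integration platform before splitting. The field name DocumentNumber is hypothetical and depends on your own composite entity:

```python
import uuid

def assign_document_numbers(records: list[dict], prefix: str = "INT") -> None:
    """Give every record a unique key before the file is split and submitted,
    so parallel processing never relies on the D365O number sequence."""
    for record in records:
        # DocumentNumber is a hypothetical field name; use the natural key
        # field of your own composite entity here.
        record["DocumentNumber"] = f"{prefix}-{uuid.uuid4().hex[:12].upper()}"
```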

Approach 2: Set-based processing

Even with multi-threading, each inbound record still means one round trip to the database. Can't we use set-based processing instead of this row-by-row execution? Actually we can, but there are some limitations and it definitely is not a viable solution for all scenarios.

[Screenshot: Set-based processing column on the Data entities form (setbasedprocessing.png)]

On the Data entities form there is a column called "Set-based processing" that can be used to enable this functionality at entity level. It means that instead of iterating over the staging table and inserting records one by one into the target table, D365O moves the whole set in a single SQL statement. This practically eliminates any latency issues, and you can import thousands of records in just a few seconds. The obvious trade-off is that individual rows are no longer handled individually, so you can't apply any row-level logic during the inbound operation. But if you have to import journal lines or other transactional data "as is", you can achieve incredible performance with this option. Note, however, that set-based processing doesn't work with composite entities, and, as discussed, row-level processing logic is not possible.

Using the same sample file that we used previously, we now got a processing time of only 4 seconds:

[Screenshot: Processing status with set-based processing (processingstatus02.png)]
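To illustrate why the difference is so dramatic, the sketch below contrasts the two patterns with plain SQL over ODBC. This is not how D365O implements it internally; the connection string and table names are made up, and the point is simply that the set-based version pays the network latency once instead of once per record:

```python
import pyodbc

conn = pyodbc.connect("DSN=TargetDb")  # made-up connection for illustration
cur = conn.cursor()

def copy_row_by_row(rows):
    # One round trip per record: with a few milliseconds of latency each,
    # tens of thousands of records turn into minutes of waiting.
    for rec_id, name in rows:
        cur.execute("INSERT INTO TargetTable (RecId, Name) VALUES (?, ?)", rec_id, name)
    conn.commit()

def copy_set_based():
    # One statement moves the whole staging table: latency is paid only once,
    # regardless of the number of records.
    cur.execute("INSERT INTO TargetTable (RecId, Name) SELECT RecId, Name FROM StagingTable")
    conn.commit()
```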

Conclusion

If you want to fully utilize the power of your Dynamics 365 for Operations environment, some knowledge and planning is needed. You have to find out which approach works best for each scenario. It took us a little while to find the optimal solutions, but at the end of the day we made our customer happy, and that made us happy too.

