Deploying Dataverse Reference Data using CI/CD Pipelines

Previously I walked through how we can commit, build and deploy a Power Platform solution using Azure DevOps pipelines. That covers Apps, Flows and Dataverse schema; however, there are other assets we may want to create in our development environment and push through the same process, such as document templates and reference data.

Most application data should not be synchronised across environments. Transactional data such as client details, payments and orders should never be held in non-production environments. However, reference data, often surfaced in drop-downs and lookups, may drive workflow logic. If we have logic that operates on chosen values, then we can consider those values to be part of the application. We should author them in our dev environment and promote them through our application lifecycle as if they were code.

An example of such data could be a list of countries that, when a choice is made, determines the tax or shipping to be applied to an order. Both the list of countries and the associated tax rates are reference data that we want our developers to author, protect and deploy.

Microsoft calls this Configuration Data and provides a tool to transfer data from one Dynamics environment to another: the Configuration Migration tool.

It is a desktop application that lets developers define a schema of the Dataverse entities and fields to extract data from, then publish that data to other environments.

However, building upon our previous work, we can automate this process with an Azure DevOps pipeline. Here's how.

1) Create a schema file to define the data to be migrated
This step must be done manually using the Configuration Migration tool. Choose the entities and fields, then save the schema as an XML file.
NB - this file will need to be updated whenever you want to synchronise new entities or fields.

2) Create a Repo to hold the schema file and the extracted data
I chose to keep this in a separate repo from my solution files. The two are likely to have different deployment cadences and I wanted to keep them logically separate.
Commit the schema into the repo.

3) Create a Commit Pipeline to extract the data from the development environment and commit it to the Repo
I re-used and modified the YAML from my existing Solution Commit and Build pipeline, though in this case there are no build tasks. I added tasks to export the data and then to extract the files from the resulting zip file.
The full YAML is on GitHub; a condensed sketch of the key tasks is below.
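
To make this concrete, here is a minimal sketch of the export-and-commit steps, assuming the Power Platform Build Tools extension is installed. The service connection name, the $(DevEnvironmentUrl) variable and the file paths are placeholders, and the exact task input names may vary between extension versions.

```yaml
steps:
- checkout: self
  persistCredentials: true            # lets the git push step below authenticate

- task: PowerPlatformToolInstaller@2  # makes the Power Platform tasks available

# Export the data defined by the schema file from the dev environment
- task: PowerPlatformExportData@2
  inputs:
    authenticationType: 'PowerPlatformSPN'
    PowerPlatformSPN: 'Dev-ServiceConnection'    # placeholder service connection
    Environment: '$(DevEnvironmentUrl)'          # placeholder, e.g. https://myorg-dev.crm.dynamics.com
    SchemaFile: '$(Build.SourcesDirectory)/schema.xml'
    DataFile: '$(Build.ArtifactStagingDirectory)/data.zip'

# Unzip the exported package so individual files can be diffed and committed
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: '$(Build.ArtifactStagingDirectory)/data.zip'
    destinationFolder: '$(Build.SourcesDirectory)/data'
    cleanDestinationFolder: true

# Commit the extracted files back to the repo
- script: |
    git config user.email "pipeline@example.com"
    git config user.name "Automated Pipeline"
    git add data
    git commit -m "Update reference data from dev" || echo "No changes to commit"
    git push origin HEAD:$(Build.SourceBranchName)
  workingDirectory: '$(Build.SourcesDirectory)'
```

Note that the build service identity needs permission to push to the repo for the final step to succeed.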

Assuming we already have some data in our development environment, we can now test the pipeline - the result should be a new folder in our repo containing the extracted data files.

4) Create a Release Pipeline to deploy the data to our Test and UAT environments
Similarly, I re-used the YAML from my existing Solution Release pipeline. In this case we need to re-pack the files checked out from source control into a zip file and then import the data into our downstream environments.
Again, the full YAML is on GitHub, with a sketch of the key tasks below.
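
As before, this is a minimal sketch of the re-pack and import steps; 'Test-ServiceConnection' and $(TestEnvironmentUrl) are placeholders for the target environment, and the same caveat about task input names applies.

```yaml
steps:
- task: PowerPlatformToolInstaller@2

# Re-pack the checked-out data files into the zip format the import task expects
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)/data'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/data.zip'
    replaceExistingArchive: true

# Import the data package into the downstream environment
- task: PowerPlatformImportData@2
  inputs:
    authenticationType: 'PowerPlatformSPN'
    PowerPlatformSPN: 'Test-ServiceConnection'   # placeholder service connection
    Environment: '$(TestEnvironmentUrl)'         # placeholder for the target environment
    DataFile: '$(Build.ArtifactStagingDirectory)/data.zip'
```

The same steps can be repeated per stage, swapping the service connection and environment URL for Test, UAT and Production.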

And that's it. We can now create reference data (or configuration data, if you prefer) in our development environment, commit it to source control and then deploy it to the Test, UAT and Production environments automatically.

Hope this helps!
