This post covers how to connect the Azure Custom Vision Service with Microsoft Flow, enabling the use of predictions within Canvas and Model-driven Apps, the Common Data Service and Power BI. I’m a huge user and fan of Azure Cognitive Services (there are many) and Microsoft Flow, so being able to use Flow as the platform that connects an app to these services can offer tremendous value. Check out how to do this right now!

Think of all the places users directly add images:

  • Camera Control on a Canvas App
  • Upload Tool on a Canvas App
  • Notes on Model-Driven Apps
  • Camera Control on Model-Driven Apps

You can embed the functionality to take the file they upload and send it via API to your Custom Vision Model, which is a trained model based on your business case (Classification or Object Detection), returning back predictions. You can then report on those predictions and use them to make more informed decisions.

Note: There is a ‘Custom Vision’ connector which is ready to use in Microsoft Flow; however, it is currently experiencing issues Microsoft are looking at resolving, so this post will detail how to connect to your Custom Vision model using the HTTP connector.


Get your Custom Vision service set up in the Azure Portal. You can do this by logging into the portal, selecting the plus icon on the left panel (‘Create a Resource’) and typing ‘Custom Vision’. Set up all the fields, selecting the free tier for testing purposes if you’re following along with this post.

Make sure you have an instance of the Power Platform available. This example uses a model-driven app (sales via Dynamics 365) and I also use the Common Data Service, but these are not required to implement the actions in principle. You do need a triggering action though, which can equally be a Canvas App etc, but the setup would be slightly different from this example. You can sign up for a free trial if you need one.

Let’s get Started – Setting up and Publishing your Custom Vision Model

  • Navigate to the Custom Vision Homepage and create a new Project.
  • Select ‘Classification’ as the type and ‘Single tag per image’.
  • Create your Project

Now start adding your images:

  • Select ‘Add Image’ – you can select more than one at once, and doing so will make this whole process easier as you will be uploading and tagging in batches.
  • Upload a batch and tag the images like in the example below. Do this for at least 15 images. I used cats and dogs in this example.

  • Once you have all your images uploaded and tagged, select ‘Train’. Depending on the number of images, you might be waiting a bit of time, so go make some coffee.


Once training completes, you should see the page below; when it does, hit ‘Publish’.


Once you have published, click ‘Prediction URL’ to display the endpoints and header information you’ll need for your connection.

Creating your Flow

Now it’s time to create your Flow. See the outline of the example below.

  • When a record is created: Trigger for a new Note in the CDS
  • Attachment Variable Creation: Initialise and Set the Variable to store the Attachment in
  • HTTP Action: POST to the API Cognitive Service, Custom Vision
  • Parse JSON: Read the response back from the API
  • Apply to Each (on the JSON returned) – Create a new Prediction Record: Take the response data and add it to the CDS

The Flow structure I’m using is fairly basic, so I won’t go into depth here about using the HTTP action, as I covered it in a previous post you can review. You can get your endpoint and the two header details by clicking ‘Prediction URL’ in the Settings area of the Custom Vision site.
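As a rough sketch of what the HTTP action is doing under the hood (the endpoint URL and key here are placeholders — substitute the values shown on your own ‘Prediction URL’ page):

```python
import requests


def prediction_headers(prediction_key: str) -> dict:
    """The two headers the Custom Vision prediction endpoint expects
    when you POST raw image bytes."""
    return {
        "Prediction-Key": prediction_key,
        "Content-Type": "application/octet-stream",
    }


def classify_image(endpoint: str, prediction_key: str, image_bytes: bytes) -> dict:
    """POST the image bytes to the prediction endpoint and return the
    JSON response. 'endpoint' is the image-classification URL copied
    from the 'Prediction URL' page of your Custom Vision project."""
    response = requests.post(
        endpoint,
        headers=prediction_headers(prediction_key),
        data=image_bytes,
    )
    response.raise_for_status()
    return response.json()
```

In the Flow HTTP action these map directly onto the URI, Headers and Body fields.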

The complication is taking the image file and ensuring it is stored as a string (no need to mess around converting it), as this is what you need to include in the body of your HTTP request. The strange thing is that ‘Document’ is actually stored as a text value in the CDS, so I’m not sure why we can’t just pass it through directly; I’m guessing it’s a different data type underneath.
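For context on why a text value can carry an image at all: the note’s ‘Document’ field holds the file as a Base64-encoded string, so (as a sketch) recovering the raw bytes the prediction API consumes is a single decode step:

```python
import base64


def attachment_to_bytes(document_body: str) -> bytes:
    """The CDS note 'Document' field stores the attachment as a
    Base64-encoded string; decode it to get the raw image bytes."""
    return base64.b64decode(document_body)
```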

The last piece is ‘Parse JSON’. You can refer to the documentation for the official JSON schema of the API response, or you can run your Flow up to the HTTP request (with no further actions), grab the response body from the run, and use that to generate a schema.
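To give a feel for what ‘Parse JSON’ is unpacking, here is a response in the documented shape for a classification prediction (the values are made up for illustration), plus a helper that picks out the highest-probability tag:

```python
import json

# Illustrative response body from the prediction endpoint: a list of
# predictions, each with a probability, a tag ID and a tag name.
sample = json.loads("""
{
  "id": "00000000-0000-0000-0000-000000000000",
  "predictions": [
    {"probability": 0.97, "tagId": "tag-1", "tagName": "Dog"},
    {"probability": 0.03, "tagId": "tag-2", "tagName": "Cat"}
  ]
}
""")


def top_prediction(body: dict) -> dict:
    """Return the prediction with the highest probability."""
    return max(body["predictions"], key=lambda p: p["probability"])


print(top_prediction(sample)["tagName"])  # -> Dog
```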

Getting your Results

You want to do something with your results. In my example below, I created a ‘Prediction’ entity with a set of fields such as Prediction (floating point type; important, as the JSON type is number), Tag ID (string) and Tag (string, but this could equally be an option set or another entity depending on your data model). I then dump the prediction information into this CDS entity for easy reporting.
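The ‘Apply to Each’ step amounts to a simple mapping from each prediction in the response to a record with those fields. As a sketch (the record field names here are illustrative, not the actual CDS schema names):

```python
def predictions_to_records(predictions: list[dict]) -> list[dict]:
    """Map each API prediction onto the fields of the 'Prediction'
    entity: Prediction (float), Tag ID (string), Tag (string)."""
    return [
        {
            "prediction": float(p["probability"]),  # float, matching the JSON number type
            "tag_id": p["tagId"],
            "tag": p["tagName"],
        }
        for p in predictions
    ]
```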

You can see this in the screenshot below:

Now that it’s in the Common Data Service, I can report on it via Power BI, but it’s also worth thinking about other uses for this data, such as taking the prediction and displaying it to the user.

And there you have it: connecting a Custom Vision model to Power Platform app services via Microsoft Flow. Any questions, please reach out to me on Twitter @dynamiccrmcat or leave a comment below!

