I am very interested in automating tasks when possible, especially when the technology needs only a little exploration to use correctly, makes the process less error-prone, and adds value (in this case, having up-to-date data).

I was working with some colleagues to put together a Power BI dashboard that consisted of a number of data sets we wanted to join. Specifically, we had data in Qualtrics from surveys that students were taking at the beginning and end of an orientation course. Previously, we were downloading that data into a CSV file, manipulating it, and then using that in Power BI. I thought that there must be a better way to do this.

It just so happened that our campus IT had recently enabled Microsoft Flow as a part of our Office 365 subscription. Using HTTP requests, I was able to query the Qualtrics API to access our survey results in JSON format and store those responses in our SQL database. With a data gateway connecting both Flow and Power BI to that SQL server, it was then trivial to hook up relationships in Power BI and make the reports significantly more powerful.
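For readers curious what the Qualtrics side of this looks like outside of Flow, here is a rough sketch of the same export in Python. It follows the three-step pattern of the Qualtrics v3 response-export API (start a job, poll it, download the file); the datacenter hostname, survey ID, and token below are placeholders, and error handling is omitted for brevity.

```python
import json
import time
import urllib.request

BASE = "https://yourdatacenter.qualtrics.com/API/v3"  # placeholder datacenter
TOKEN = "YOUR_API_TOKEN"                              # placeholder API token


def export_url(survey_id):
    """Endpoint for the three-step response-export workflow."""
    return f"{BASE}/surveys/{survey_id}/export-responses"


def _call(url, payload=None):
    """POST the payload (or GET if None) with the API token header; parse JSON."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url,
        data=data,
        headers={"X-API-TOKEN": TOKEN, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def fetch_responses(survey_id):
    # 1. Ask Qualtrics to start building a JSON export of the responses.
    progress_id = _call(export_url(survey_id),
                        {"format": "json"})["result"]["progressId"]
    # 2. Poll until the export job reports it is complete.
    while True:
        status = _call(f"{export_url(survey_id)}/{progress_id}")
        if status["result"]["status"] == "complete":
            file_id = status["result"]["fileId"]
            break
        time.sleep(1)
    # 3. Download the finished file (a zip archive containing the JSON).
    req = urllib.request.Request(f"{export_url(survey_id)}/{file_id}/file",
                                 headers={"X-API-TOKEN": TOKEN})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

In Flow, each of those three calls becomes its own HTTP action, with a Delay action standing in for the polling loop; from there the parsed responses can be written row by row into SQL.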


Microsoft Flow uses something called the Workflow Definition Language (WDL), also used in Azure Logic Apps, anytime you need to do something more complex than what a built-in Flow step provides. WDL is highly JSON-centric, so it is mostly intuitive if you are familiar with manipulating JSON data – but it is still something new to learn.
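As a flavor of what those expressions look like, here are two illustrative ones using common WDL functions such as body(), json(), item(), and formatDateTime(). The action name 'HTTP' and the field names are placeholders, not the actual Flow from this project:

```
Parse an HTTP action's output as JSON and drill into a nested field:
  json(body('HTTP'))?['result']?['elements']

Inside an "Apply to each" loop, reformat one item's date field:
  formatDateTime(item()?['recordedDate'], 'yyyy-MM-dd')
```

The ?[] operator is WDL's null-safe property access, which matters a lot when survey responses have optional fields.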

Using Flow can be a little arduous when you’re debugging (or just learning how data is returned in a particular step). The UI is a little slow, and to test even a single step, you need to save and run the entire Flow. Certain functions (especially anything to do with arrays, it seems) run quite slowly even after turning on concurrency.

Finally, making changes to your survey becomes even harder. We try to lock the survey down, but if anything has to change in Qualtrics, I now have to account for it in three places: Flow, SQL, and Power BI.


I still have a number of Flows running every day. The alternative to using Flow would be to use a similar service or, what’s probably more common, to have a bunch of Python or Perl scripts sitting on a server somewhere in your network. The challenge with the “bunch of scripts” method is that those scripts tend to get lost, forgotten, or abandoned over time. Flow allows you to create Team Flows, which are shared among multiple people – a good fit for teams working on a project together.

I’m beginning to think that the best answer for this type of automation (for IT departments) really lies in a combination of DevOps software like Ansible and automation software like Huginn. It’s an area that I want to continue to explore.