In part one of this two-part automation series, we covered how to automate dashboard creation using Power BI and share it via the mobile-friendly Power BI app. This blog is part two; we’ll discuss the benefits and limitations of the Power BI API, and how to access it using PowerShell.
* This blog has been updated from its original version in 2016, Automation Series 1, Part 2: Powershell and Power BI.
Power BI is popular for good reason. It’s user-friendly, free for modest use, and quickly produces useful real-time charts that are easy to share and view on a mobile device. It’s also part of Microsoft’s ecosystem. You’ll see interesting work being done with Power BI online, including building PowerShell modules.
But it does have some limitations, namely the amount of online storage it affords and the necessity of uploading live data to Power BI in order to consume it. API rate limits can also become a constraint under heavy load.
Working with the Power BI API
Let’s take a look at the Power BI API and identify how we are going to use it. At a high level, our process looks like this:
- Find or create the Power BI data source we want to manipulate.
- Create or update the Power BI table within that data source.
- Load the incident data into the table.
In this scenario, we are using a lab environment. The data set is relatively small, so rather than complicate things by tracking what has and hasn’t been updated on each load, we simply clear and refill the entire table on each run of the script.
Let’s look at our PowerShell code. For this script, we are using the PowerBIPS PowerShell module.
# Import the PowerBIPS Powershell Module: https://github.com/DevScope/powerbi-powershell-modules
Import-Module ".\Modules\PowerBIPS" -Force
Import-Module smlets
#
# Helper Functions
#
#captures debug statements globally
function Debug {
    param($text="", $action="write")
    # $action is accepted so the calls below can pass -action; this simple version always writes output
    Write-Output $text
}
#region PowerBI Settings
#region username and password
# Lab credentials shown inline for illustration; use a secure credential store in production
$user = "username"
$pass = "password" | ConvertTo-SecureString -AsPlainText -Force
#endregion
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $pass
$authToken = Get-PBIAuthToken -ClientId "9a49ceec-57d9-4dba-85c6-c74a7362c233" -Credential $cred
#endregion
#
# API CALLS
#
$class = Get-SCSMClass WorkItem.Incident$
$incidents = Get-SCSMObject -Class $class | Select-Object Id, Title, Classification, Status, CreatedDate, LastModified
#
# TRANSFORM
#
$allIncidents = @()
foreach($incident in $incidents){
    $obj = @{}
    $obj.Id = $incident.Id
    $obj.Title = $incident.Title
    $obj.Classification = $incident.Classification.DisplayName
    $obj.Status = $incident.Status.DisplayName
    $obj.Created = Get-Date($incident.CreatedDate) -Format "yyyy-MM-dd"
    $obj.Modified = Get-Date($incident.LastModified) -Format "yyyy-MM-dd"
    $allIncidents += New-Object -TypeName PSObject -Property $obj
}
#
# POWERBI CONFIGURATION AND UPLOAD
#
# If the dataset cannot be found, create a new one with this schema
$incidentTable = @{
    name    = "IncidentData"
    columns = @(
        @{ name = "Id";             dataType = "String" }
        @{ name = "Title";          dataType = "String" }
        @{ name = "Classification"; dataType = "String" }
        @{ name = "Status";         dataType = "String" }
        @{ name = "Created";        dataType = "Datetime" }
        @{ name = "Modified";       dataType = "Datetime" }
    )
}
$datasetName = "SCSMDataSet"
$dataSetSchema = @{
    name   = $datasetName
    tables = @($incidentTable)
}
# Test the existence of the dataset
Debug -text ("Working on {0}" -f "Checking DataSet") -action "write"
if (!(Test-PBIDataSet -authToken $authToken -name $datasetName) -and $allIncidents.Count -gt 0)
{
    #create the dataset
    Debug -text ("Working on {0}" -f "Creating DataSet") -action "write"
    $pbiDataSet = New-PBIDataSet -authToken $authToken -dataSet $dataSetSchema #-verbose
    Debug -text ("Writing {0} [{1}]" -f "DataSet Values", $allIncidents.Count) -action "write"
    $allIncidents | Out-PowerBI -AuthToken $authToken -dataSetName $datasetName -tableName $incidentTable.name -verbose
}
else
{
    $pbiDataSet = Get-PBIDataSet -authToken $authToken -name $datasetName
    #clear the dataset and update the schema if necessary
    Debug -text ("Working on {0}" -f "Clearing DataSet") -action "write"
    Clear-PBITableRows -authToken $authToken -dataSetName $datasetName -tableName $incidentTable.name -verbose
    Debug -text ("Working on {0}" -f "Updating DataSet Schema") -action "write"
    Update-PBITableSchema -authToken $authToken -dataSetId $pbiDataSet.id -table $incidentTable -verbose
    Debug -text ("Writing {0} [{1}]" -f "DataSet Values", $allIncidents.Count) -action "write"
    $allIncidents | Out-PowerBI -AuthToken $authToken -dataSetName $datasetName -tableName $incidentTable.name -verbose
}
#region OUTPUT
$allIncidents | Export-Csv IncidentsTable.csv -NoTypeInformation
#endregion
Authenticating with Power BI
Looking at our code above, let’s first cover how we authenticate with Power BI.
If you are familiar with using credentials in PowerShell, this is pretty straightforward. We convert the password to a secure string, build a credential object from it and the username, and pass that credential to the PowerBIPS ‘Get-PBIAuthToken’ cmdlet along with the ClientId parameter. Now we have a valid authentication token.
Before we continue, let’s quickly walk through how you get the Client ID that is required for you to pull your authentication token. Most of the required steps are found in these instructions from Microsoft.
But one step they fail to mention that will affect most users: you first need to make sure you’re logged into Power BI with your Azure user ID before adding the client application and retrieving your client ID.
Create the Data Source and Table Schema
Next, let’s look at the code required to create the data source and Table Schema:
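Here is that portion of the full script again, for reference:

```powershell
# Table schema: a hashtable with a name and an array of column definitions
$incidentTable = @{
    name    = "IncidentData"
    columns = @(
        @{ name = "Id";             dataType = "String" }
        @{ name = "Title";          dataType = "String" }
        @{ name = "Classification"; dataType = "String" }
        @{ name = "Status";         dataType = "String" }
        @{ name = "Created";        dataType = "Datetime" }
        @{ name = "Modified";       dataType = "Datetime" }
    )
}

# Dataset schema: a hashtable with a name and an array of tables
$datasetName   = "SCSMDataSet"
$dataSetSchema = @{
    name   = $datasetName
    tables = @($incidentTable)
}
```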
As you can see in the example, the table is a hashtable with name and columns properties. The name is a simple string, and columns is an array of column hashtables, each with a name and dataType property. This is pretty straightforward, but be aware that only a limited set of data types is available (Int64, Double, Boolean, Datetime, and String).
For the dataset itself, it’s the same: the dataset variable is a hashtable with a name and tables property. The name is a string, and the tables property is an array of table hashes.
Creating or Updating the Dataset and Tables
Finally, let’s look at the code for creating or updating the dataset and the tables:
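This is the create-or-update portion of the full script above, reproduced here:

```powershell
# Test the existence of the dataset
Debug -text ("Working on {0}" -f "Checking DataSet") -action "write"
if (!(Test-PBIDataSet -authToken $authToken -name $datasetName) -and $allIncidents.Count -gt 0)
{
    #create the dataset
    Debug -text ("Working on {0}" -f "Creating DataSet") -action "write"
    $pbiDataSet = New-PBIDataSet -authToken $authToken -dataSet $dataSetSchema
    Debug -text ("Writing {0} [{1}]" -f "DataSet Values", $allIncidents.Count) -action "write"
    $allIncidents | Out-PowerBI -AuthToken $authToken -dataSetName $datasetName -tableName $incidentTable.name -verbose
}
else
{
    $pbiDataSet = Get-PBIDataSet -authToken $authToken -name $datasetName
    #clear the dataset and update the schema if necessary
    Debug -text ("Working on {0}" -f "Clearing DataSet") -action "write"
    Clear-PBITableRows -authToken $authToken -dataSetName $datasetName -tableName $incidentTable.name -verbose
    Debug -text ("Working on {0}" -f "Updating DataSet Schema") -action "write"
    Update-PBITableSchema -authToken $authToken -dataSetId $pbiDataSet.id -table $incidentTable -verbose
    Debug -text ("Writing {0} [{1}]" -f "DataSet Values", $allIncidents.Count) -action "write"
    $allIncidents | Out-PowerBI -AuthToken $authToken -dataSetName $datasetName -tableName $incidentTable.name -verbose
}
```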
Remove Debug Entries
This is actually quite simple, but it looks more complicated due to the debug entries throughout each step. If we were to remove those entries, it would compress down to this:
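With the debug calls stripped out, the same logic reduces to:

```powershell
# Create the dataset and load it, or clear/update the existing table and reload it
if (!(Test-PBIDataSet -authToken $authToken -name $datasetName) -and $allIncidents.Count -gt 0)
{
    $pbiDataSet = New-PBIDataSet -authToken $authToken -dataSet $dataSetSchema
    $allIncidents | Out-PowerBI -AuthToken $authToken -dataSetName $datasetName -tableName $incidentTable.name
}
else
{
    $pbiDataSet = Get-PBIDataSet -authToken $authToken -name $datasetName
    Clear-PBITableRows -authToken $authToken -dataSetName $datasetName -tableName $incidentTable.name
    Update-PBITableSchema -authToken $authToken -dataSetId $pbiDataSet.id -table $incidentTable
    $allIncidents | Out-PowerBI -AuthToken $authToken -dataSetName $datasetName -tableName $incidentTable.name
}
```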
New Data Source in Power BI that Contains the Data Tables
Working through this, we are calling the ‘Test-PBIDataSet’ cmdlet in order to determine if the dataset already exists. If it does, then we are skipping over to the ELSE statement to update the table rather than create it. If it doesn’t exist yet, we are creating a new dataset using our table schema information, and then piping all of our data into the ‘Out-PowerBI’ cmdlet and targeting the new dataset and the correct table name we want to load the data into.
If we are updating the table instead, we get the dataset information and save it to a variable. In this instance we are then clearing the existing data from the table (to avoid duplicate data) and then while it’s empty, we update the table schema in case we added new columns or changed names or data types. After that process completes, we pipe our data into the ‘Out-PowerBI’ cmdlet in order to load our data into the table.
And now we have a new data source in Power BI that contains our data tables. From here we can move into the Power BI web application and use this new dataset to create our reports.
In the next entry in this series, Automation Series 1, Part 3: Power BI Reporting, we will focus on how we can build a few simple graphs and create a dashboard.
Find out more about the Power BI API. We’d love to hear your questions and comments in the Cireson Community.