Azure DevOps: ARM template for SendGrid API Connection for Logic Apps

I had to automate some Logic Apps deployments containing a SendGrid API connection. Since I couldn’t find a ready-to-use ARM template for the SendGrid API connection, I reverse-engineered an existing connection to generate the JSON.

Here is the ARM template for the SendGrid Connector:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "sendgridConnectionName": {
      "type": "string"
    },
    "sendgridApiKey": {
      "type": "string"
    }
  },
  "variables": {
    "location": "[resourceGroup().location]",
    "subscriptionId": "[subscription().subscriptionId]",
    "sendgridApiId": "[concat('/subscriptions/', variables('subscriptionId'), '/providers/Microsoft.Web/locations/', variables('location'),'/managedApis/sendgrid')]"
  },
  "resources": [
    {
      "type": "Microsoft.Web/connections",
      "apiVersion": "2016-06-01",
      "name": "[parameters('sendgridConnectionName')]",
      "location": "[variables('location')]",
      "properties": {
        "displayName": "[parameters('sendgridConnectionName')]",
        "customParameterValues": {},
        "api": {
          "id": "[variables('sendgridApiId')]"
        },
        "parameterValues": {
          "apiKey": "[parameters('sendgridApiKey')]"
        }
      }
    }
  ],
  "outputs": {}
}
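
To deploy the template, you can use the Az PowerShell module, for example from a pipeline step. This is a minimal sketch, assuming the template is saved as sendgrid-connection.json; the resource group name and key value are placeholders:

#Deploy the SendGrid connection template (resource group, file name and key value are examples)
New-AzResourceGroupDeployment -ResourceGroupName "my-logicapps-rg" -TemplateFile ".\sendgrid-connection.json" -sendgridConnectionName "sendgrid" -sendgridApiKey "<your SendGrid API key>"

In a real pipeline the API key should of course come from a secret variable or a key vault rather than being passed in clear text.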

API Connections are used to connect Logic Apps or Azure Functions to SaaS services. An API connection contains the information provided when configuring access to a SaaS service. The SendGrid connection provider lets you send email and manage recipient lists.
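
After deployment, you can quickly check that the connection resource exists with the Az PowerShell module; the resource group name below is a placeholder:

#List the API connections in the resource group
Get-AzResource -ResourceGroupName "my-logicapps-rg" -ResourceType "Microsoft.Web/connections"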

Azure Private Link – a new feature for Enhanced Security

Azure is getting even more secure with the release of Azure Private Link.

Azure Private Link provides private connectivity from a virtual network to Azure services, customer-owned services, or Microsoft partner services.

This means you can, for example, consume services such as storage accounts and databases from within a VNet without exposing the data to the Internet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Private Link keeps the traffic on the Microsoft global network.

The configuration is straightforward. In the networking settings of the resource, select Private endpoint as the connectivity method and create a new endpoint.
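
If you prefer scripting over the portal, the same setup can be sketched with the Azure CLI. This is only a rough example, assuming an existing VNet and subnet and a storage account as the target service; all names and the resource ID are placeholders:

#The subnet must have private endpoint network policies disabled first
az network vnet subnet update --resource-group "my-rg" --vnet-name "my-vnet" --name "my-subnet" --disable-private-endpoint-network-policies true

#Create the private endpoint against the storage account (blob sub-resource)
$storageId = "/subscriptions/<subscriptionId>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
az network private-endpoint create --resource-group "my-rg" --name "mystorage-pe" --vnet-name "my-vnet" --subnet "my-subnet" --private-connection-resource-id $storageId --group-ids "blob" --connection-name "mystorage-pe-connection"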

Note: at the time of this writing, Azure Private Link is available only in US regions.

Check the availability for other regions here.

Trying to configure Azure Private Link in a region where the feature is not available will generate this message:

Azure DevOps: Cosmos DB MongoError document does not contain shard key

When creating documents in an Azure Cosmos DB database for the MongoDB API, you might get the error message MongoError: document does not contain shard key.

The issue occurs for partitioned collections that were created via the Azure CLI, because of the way the partitionKey path is stored in the collection settings.

Reproduction steps

1. From the Azure Portal, create a Cosmos DB account with the Azure Cosmos DB for MongoDB API.

Account Name: xyzreprodbaccount

API: Azure Cosmos DB for MongoDB API

2. Create a database.

Database name: xyzreprodatabase

3. Create a collection with a shard key.

Collection id: collection1

Shard key: tid

4. Create a new document and save it.

a. Browse to Data Explorer > Database > Collection > Documents.

b. Click on New Document.

c. In the editor type in the following text:

{
     "id" : "1",
     "tid": "aaa"
}

d. Click Save.

e. Observe that the document has been created.

5. Create another collection using the Azure CLI.

You can also create the collection from the Azure Cloud Shell (https://shell.azure.com/). Use the following command:

$partitionKeyPath = '/tid'

az cosmosdb collection create --resource-group-name XYZ-repro --name xyzreprodbaccount --db-name xyzreprodatabase --collection-name collection2 --throughput 400 --partition-key-path $partitionKeyPath 

6. From the Azure Portal, browse the collection2 created at step 5 and insert a new document.

a. Browse to Data Explorer > Database > Collection > Documents.

b. Click on New Document.

c. In the editor type in the following text:

{
     "id" : "1",
     "tid": "aaa"
}

d. Click Save.

Expected result: the document is saved.

Actual result: the error message “Command insert failed: document does not contain shard key” is shown.

When saving a document from an application, the error message is “MongoError: document does not contain shard key”.

Problem analysis

Looking at the collection properties using the command

az cosmosdb collection show --resource-group-name XYZ-repro --name xyzreprodbaccount --db-name xyzreprodatabase --collection-name collection1

we can see that for the first collection created from the Azure Portal, the path for the partitionKey is stored as:

"partitionKey": {
      "kind": "Hash",
      "paths": [
        "/'$v'/tid/'$v'"
      ]
    } 

While for the second collection created via Azure CLI, the path for the partitionKey is stored as:

"partitionKey": {
      "kind": "Hash",
      "paths": [
        "/tid"
      ]
    } 
Workaround

When creating collections via the Azure CLI, specify the partition key as /'$v'/tid/'$v'

Here is the Azure CLI command:

$partitionKeyPath = '/tid'
$Bugfix_partitionKeyPath = '/''$v''' + $partitionKeyPath + '/''$v'''

az cosmosdb collection create --resource-group-name XYZ-repro --name xyzreprodbaccount --db-name xyzreprodatabase --collection-name collection2 --throughput 400 --partition-key-path $Bugfix_partitionKeyPath 

Azure DevOps: list all collections from all Azure Cosmos DB accounts

While creating an Azure Pipeline to back up all Azure Cosmos databases in a subscription, I had to list all collections from all databases, so I wrote the script below.

Feel free to adapt it in order to meet your needs! Enjoy!

#Retrieve Cosmos DB database accounts
$databaseAccounts = Get-AzResource -ResourceType "Microsoft.DocumentDb/databaseAccounts"

foreach($databaseAccount in $databaseAccounts){
    Write-Host "#RG:"  $databaseAccount.ResourceGroupName -ForegroundColor Green
    Write-Host "#   databaseAccountName:" $databaseAccount.Name -ForegroundColor Green
    
    #Retrieve databases
    $databases = az cosmosdb database list --resource-group-name $databaseAccount.ResourceGroupName --name $databaseAccount.Name
    $databasesConverted = $databases | ConvertFrom-Json
    foreach($database in $databasesConverted){
        $databaseName = $database.id
        Write-Host "#     databaseName:" $databaseName -ForegroundColor Green
        
        #Retrieve collections
        $allcollections = ""
        $collections = az cosmosdb collection list --resource-group-name $databaseAccount.ResourceGroupName --name $databaseAccount.Name --db-name $databaseName
        $collectionsConverted = $collections | ConvertFrom-Json
        foreach($collection in $collectionsConverted){
            $collectionName = $collection.id
            Write-Host "#       collectionName: " $collectionName -ForegroundColor Green
        }
    }
}
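
Note that the script mixes the Az PowerShell module (Get-AzResource) with the Azure CLI (az cosmosdb), so you need to be signed in to both and working in the same subscription before running it, for example:

#Sign in to Az PowerShell and the Azure CLI and select the same subscription
Connect-AzAccount
Set-AzContext -Subscription "<subscriptionId>"
az login
az account set --subscription "<subscriptionId>"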

Azure DevOps: whitelist the Azure Pipeline IP in the Cosmos DB firewall. How to add the Azure DevOps hosted agent IP address to a Cosmos DB firewall.

I am currently working on the Azure backup strategy for one of our customers. While Azure takes regular backups of the Cosmos databases, they would not help in case of an application failure that corrupts the data, because Azure would back up the already corrupted data.

The solution is to store our own backups. We use an Azure Pipeline which takes a daily backup of our databases.

The problem is that the databases are IP-restricted behind a firewall, and the Azure Pipeline fails because the Azure DevOps hosted agent cannot pass through it.

The solution is to add a step in our Azure Pipeline which adds the Azure DevOps agent IP address to the database whitelist and removes it again at the end.

PowerShell script:

#Function to add current IP to database account
Function add-ip-databaseaccount ($customIP, $databaseResourceGroup, $databaseAccount){
    Write-Host "  " $customIP "is not allowed. Adding it now.." -ForegroundColor Green
    $databaseAccountIpRangeFilter = (Get-AzResource -Name $databaseAccount -ExpandProperties).Properties.ipRangeFilter
    $databaseAccountIpRangeFilter = $databaseAccountIpRangeFilter + "," + $customIP
    $databaseAccountProperties = @{"databaseAccountOfferType"="Standard"; "ipRangeFilter"=$databaseAccountIpRangeFilter}
    Set-AzResource -ResourceType "Microsoft.DocumentDb/databaseAccounts" -ResourceGroupName $databaseResourceGroup -Name $databaseAccount -Properties $databaseAccountProperties -Force
    Get-AzResource -Name $databaseAccount -ExpandProperties
    #Although the IP is added to the allowed list, it takes some time until the change takes effect
    Start-Sleep -Seconds 600
}

#Function to remove current IP from database account
Function remove-ip-databaseaccount ($customIP, $databaseResourceGroup, $databaseAccount){
    $databaseAccountIpRangeFilter = (Get-AzResource -Name $databaseAccount -ExpandProperties).Properties.ipRangeFilter
    $databaseAccountIpRangeFilter = $databaseAccountIpRangeFilter.Split(',')
    Write-Host "Checking firewall settings for database $databaseAccount :"
    if ($customIP -in $databaseAccountIpRangeFilter) {
        Write-Host "  " $customIP "is allowed. Removing it now.." -ForegroundColor Green
        [System.Collections.ArrayList]$ArrayList = [array]$databaseAccountIpRangeFilter
        $ArrayList.Remove($ArrayList[$ArrayList.IndexOf($customIP)])
        $databaseAccountIpRangeFilter = $ArrayList -join ','
        $databaseAccountProperties = @{"databaseAccountOfferType"="Standard"; "ipRangeFilter"=$databaseAccountIpRangeFilter}
        Set-AzResource -ResourceType "Microsoft.DocumentDb/databaseAccounts" -ResourceGroupName $databaseResourceGroup -Name $databaseAccount -Properties $databaseAccountProperties -Force
        Get-AzResource -Name $databaseAccount -ExpandProperties
    }
    else {
        Write-Host "  " $customIP "was already not allowed." -ForegroundColor Green
    }
}

#######################################################################

#Retrieve the current IP
$ipinfo = Invoke-RestMethod http://ipinfo.io/json
$myip = $ipinfo.ip

#Check if the database account has IP filtering active
#($dbResourceGroup, $dbAccount, $dbName and $dbCollections are expected to be defined earlier, e.g. as pipeline variables)
$dbAccountIpRangeFilter = (Get-AzResource -Name $dbAccount -ExpandProperties).Properties.ipRangeFilter

#If IP filtering is in place
if($dbAccountIpRangeFilter)
{
    Write-Host "IP filtering is active for " $dbAccount
    $dbAccountIpRangeFilter = (Get-AzResource -Name $dbAccount -ExpandProperties).Properties.ipRangeFilter
    $dbAccountIpRangeFilterArray = $dbAccountIpRangeFilter.Split(',')
    Write-Host "Checking firewall settings for db $dbAccount :"

    if ($myip -in $dbAccountIpRangeFilterArray) {
        Write-Host "  " $myip "is already allowed." -ForegroundColor Green
        backup-db $dbResourceGroup $dbAccount $dbName $dbCollections
    }
    else{
        add-ip-databaseaccount $myIp $dbResourceGroup $dbAccount
        backup-db $dbResourceGroup $dbAccount $dbName $dbCollections
        remove-ip-databaseaccount $myIp $dbResourceGroup $dbAccount
    }
}
else
{
    Write-Host "IP filtering is not active for " $dbAccount
    backup-db $dbResourceGroup $dbAccount $dbName $dbCollections
}

I will try to explain the backup-db function in a separate blog post.
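
Until then, here is a hypothetical stub matching the way backup-db is called above, so the rest of the script can be tested; the real export logic is not shown here:

#Hypothetical placeholder for the real backup function (to be described in the follow-up post)
Function backup-db ($dbResourceGroup, $dbAccount, $dbName, $dbCollections){
    Write-Host "Backing up database" $dbName "from account" $dbAccount -ForegroundColor Green
    foreach($collection in $dbCollections){
        Write-Host "  would back up collection" $collection -ForegroundColor Green
        #TODO: export the collection here (for example with mongodump or the Cosmos DB data migration tool)
    }
}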

Microsoft Azure Portal App for Windows, iPhone and Android

Microsoft has released a preview version of the Azure Portal app for Windows. I have been using it for some time now and it works quite well. You get rid of the browser, while the functionality remains the same as in the Azure Portal.

Download for Windows: https://portal.azure.com/App/Download

When on the go, I recommend the Microsoft Azure app for iOS or Android.

App Store Download: https://itunes.apple.com/app/microsoft-azure/id1219013620?ls=1&mt=8

Google Play Download: https://play.google.com/store/apps/details?id=com.microsoft.azure

Azure Serverless Architectures: host a static website in Azure Storage

Azure Storage general-purpose v2 accounts allow you to serve static content (HTML, CSS, JavaScript, and image files) directly from a storage container named $web. Hosting in Azure Storage lets you build serverless architectures with Azure Functions and other PaaS services.

When you enable static website hosting on your storage account, you select the name of your default document and optionally provide a path to a custom 404 page. When the feature is enabled, a container named $web is created if it doesn’t already exist.

You can enable the static website feature using the following PowerShell script, which calls the Azure CLI:

#Parameters

param(
    [Parameter(Mandatory=$True, HelpMessage="Please specify the storage account name")][String]$Storage_Name_Web,
    [Parameter(Mandatory=$True, HelpMessage="Please specify the subscription ID")][String]$subscriptionId
)

#Connect to Azure
Connect-AzAccount

#Change the context to the Azure subscription
Set-AzContext -Subscription $subscriptionId

# Enable static website
az account set --subscription $subscriptionId
az storage blob service-properties update --account-name $Storage_Name_Web --static-website --404-document "404.html" --index-document "index.html"

Or you can enable the static website feature from the Azure Portal:

To test the website, you can upload a sample html file to the blob storage. From the storage account, go to the blobs section and expand the $web container.

Upload a sample HTML file named “index.html”.

In my case, index.html references a picture:

<html>
  <body>
    <img src="blog.bmp">
  </body>
</html>
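
If you prefer not to use the portal for the upload, a small Azure CLI sketch works as well. It assumes the storage account name from the script above and that your account is allowed to write to the storage account (otherwise pass an account key or use --auth-mode login):

#Upload index.html to the $web container; quote the container name so PowerShell does not expand $web
az storage blob upload --account-name $Storage_Name_Web --container-name '$web' --file "index.html" --name "index.html" --content-type "text/html"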

Now when you browse the static website endpoint, in my case https://ranariwebstorage.z6.web.core.windows.net, you get index.html:
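
If you are not sure what the endpoint URL is, you can read it from the storage account properties, for example:

#Print the static website (web) endpoint of the storage account
az storage account show --name $Storage_Name_Web --query "primaryEndpoints.web" --output tsv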