Azure Stack TP3 Stability (Reboot the XRP VM)


If you have been deploying and using Azure Stack TP3, you may have noticed that after a few days the portal starts responding more slowly, and in my experience after closer to a week it stops working altogether. This will vary depending on what you're doing and on your hardware. Looking at the guest VMs, you will notice that the XRP VM is consuming all of its memory. While you could give the machine more memory, this messes with the expected infrastructure sizing, and eventually it will consume whatever memory you give it. This will hopefully be addressed soon. In the meantime, a simple workaround is to reboot the XRP VM, and why do anything manually when you can script it? This very simple script creates a scheduled task that runs on Sunday night at 1 am. The task stops and starts the XRP VM and then triggers the existing ColdStartMachine task, which makes sure all the Azure Stack services are running.

[powershell]
# Run on the host server as the AzureStackAdmin user
$AzureStackAdminPassword = 'YOURPASSWORD'

$Action = New-ScheduledTaskAction -Execute 'Powershell.exe' -Argument '-command "Get-VM MAS-Xrp01 | Stop-VM -Force; Get-VM MAS-Xrp01 | Start-VM; Sleep 180; Stop-ScheduledTask ColdStartMachine; Start-ScheduledTask ColdStartMachine"'

$Trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 1am

Register-ScheduledTask -Action $Action -Trigger $Trigger -TaskName "XRPReboot" -Description "Restart XRP VM weekly" -RunLevel Highest -User "$env:USERDOMAIN\$env:USERNAME" -Password $AzureStackAdminPassword
[/powershell]
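If you want to sanity-check the registration, the ScheduledTasks module can show the task's next run time:

[powershell]
# Optional check: confirm the task registered and see when it will next fire
Get-ScheduledTask -TaskName 'XRPReboot' | Get-ScheduledTaskInfo |
    Select-Object TaskName, LastRunTime, LastTaskResult, NextRunTime
[/powershell]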


How to calculate Azure VHD used space


One of the most hotly debated topics in the Azure world is estimating how much storage is currently used by deployed VMs. I have intentionally used the word "used" rather than "allocated" because when VHDs are stored in a Standard Storage Account they behave like "thin" (dynamically expanding) disks: you aren't really using all the allocated space, and the Azure portal confirms this. The image below shows how an Azure VHD of 127 GB, used as an OS disk, is seen from inside a Windows VM.

The next image shows how the Azure portal calculates the used space for the same disk.

For reporting and billing reasons, you may need to gather this information for all VMs deployed in a specific subscription.

This article will show how to retrieve this information for VHDs stored in both Standard and Premium Storage Accounts.

A little Azure theory...

Standard Storage Account:

When a new Azure Storage Account is created, some hidden tables are created by default. One of these is "$MetricsCapacityBlob", which holds blob capacity values.

Note: There are other hidden tables which contain further information about a Storage Account, such as its transactions.

Premium Storage Account:

From the Microsoft web site: "Billing for a premium storage disk/blob depends on the provisioned size of the disk/blob. Azure maps the provisioned size (rounded up) to the nearest premium storage disk option as specified in the table given in the Scalability and Performance Targets when using Premium Storage section. Each disk will map to one of the supported provisioned sizes and will be billed accordingly. Billing for any provisioned disk is prorated hourly using the monthly price for the Premium Storage offer. For example, if you provisioned a P10 disk and deleted it after 20 hours, you are billed for the P10 offering prorated to 20 hours. This is regardless of the amount of actual data written to the disk or the IOPS/throughput used."

From a reporting point of view, this means that the size of a deployed VHD matches its allocated space, and you're billed for its full size "regardless of the amount of actual data written to the disk or the IOPS/throughput used".
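As a quick illustration of that proration, here is the arithmetic as a sketch; the monthly price below is a made-up placeholder, not a real rate:

[powershell]
# Hypothetical figures only: look up real Premium Storage prices for your region
$p10MonthlyPrice = 20.00        # assumed monthly price for a P10 (128 GB) disk
$hoursInMonth    = 30 * 24      # proration basis; Azure prorates hourly
$hoursUsed       = 20           # e.g. a P10 disk deleted after 20 hours
$billed = $p10MonthlyPrice / $hoursInMonth * $hoursUsed
"Billed amount: {0:N2}" -f $billed
[/powershell]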

Before starting to write PowerShell code, you need to prepare your workstation to run the Azure storage report:

  • If your OS is older than Windows Server 2016 or Windows 10, download and install PowerShell 5.0 from here
  • Install the ReportHTML module from the PowerShell Gallery: open a PowerShell console as administrator and execute the following code

[powershell]

Install-Module -Name ReportHTML

[/powershell]

Let's begin writing some PowerShell code.

Note: Most of the functions below come from the "Get Billable Size of Windows Azure Blobs (w/Snapshots) in a Container or Account" script developed by the Windows Azure Product Team. Their code has been updated to work with the latest Azure PowerShell module and adapted to the purpose of this script.

Open a PowerShell editor and create a new file called Module-Azure.ps1.

This file will contain all the functions invoked by the main script.

[powershell]
function global:Connect-Azure {
    Login-AzureRmAccount

    $subName = Get-AzureRmSubscription |
        select SubscriptionName |
        Out-GridView -Title "Select a subscription" -OutputMode Single |
        select -ExpandProperty SubscriptionName

    Select-AzureRmSubscription -SubscriptionName $subName
    $global:azureSubscription = Get-AzureRmSubscription -SubscriptionName $subName
}

function global:Calculate-BlobSpace {
    param(
        # The name of the storage account to enumerate.
        [Parameter(Mandatory = $true)]
        [string]$StorageAccountName,

        # The name of the storage container to enumerate.
        [Parameter(Mandatory = $false)]
        [ValidateNotNullOrEmpty()]
        [string]$ContainerName,

        # The name of the storage account resource group.
        [Parameter(Mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [string]$StorageAccountRGName
    )

    # Modify the Write-Verbose behavior to turn the messages on globally for this session.
    $VerbosePreference = "Continue"

    $storageAccount = Get-AzureRmStorageAccount -ResourceGroupName $StorageAccountRGName -Name $StorageAccountName -ErrorAction SilentlyContinue
    if ($storageAccount -eq $null) {
        throw "The storage account specified does not exist in this subscription."
    }

    # Instantiate a storage context for the storage account.
    $storagePrimaryKey = ((Get-AzureRmStorageAccountKey -ResourceGroupName $StorageAccountRGName -Name $StorageAccountName)[0]).Value
    $storageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $storagePrimaryKey

    # Get a list of containers to process.
    $containers = New-Object System.Collections.ArrayList
    if ($ContainerName.Length -ne 0) {
        Get-AzureStorageContainer -Context $storageContext -Name $ContainerName -ErrorAction SilentlyContinue |
            ForEach-Object { $containers.Add($_) } | Out-Null
    }
    else {
        Get-AzureStorageContainer -Context $storageContext -ErrorAction SilentlyContinue |
            ForEach-Object { $containers.Add($_) } | Out-Null
    }

    # Calculate the total size across all containers.
    $sizeInBytes = 0
    if ($containers.Count -gt 0) {
        foreach ($container in $containers) {
            $result = Get-ContainerBytes $container.CloudBlobContainer
            $sizeInBytes += $result.containerSize
            Write-Verbose ("Container '{0}' with {1} blobs has a size of {2:F2}MB." -f $container.CloudBlobContainer.Name, $result.blobCount, ($result.containerSize / 1MB))
        }
        $sizeInGB = [math]::Round($sizeInBytes / 1GB)
        return $sizeInGB
    }
    else {
        Write-Warning "No containers found to process in storage account '$StorageAccountName'."
        $sizeInGB = 0
        return $sizeInGB
    }
}

function global:Get-BlobBytes {
    param (
        [Parameter(Mandatory = $true)]
        [Microsoft.WindowsAzure.Commands.Common.Storage.ResourceModel.AzureStorageBlob]$Blob
    )

    # Base + blob name.
    $blobSizeInBytes = 124 + $Blob.Name.Length * 2

    # Get the size of the metadata.
    $metadataEnumerator = $Blob.ICloudBlob.Metadata.GetEnumerator()
    while ($metadataEnumerator.MoveNext()) {
        $blobSizeInBytes += 3 + $metadataEnumerator.Current.Key.Length + $metadataEnumerator.Current.Value.Length
    }

    if ($Blob.BlobType -eq [Microsoft.WindowsAzure.Storage.Blob.BlobType]::BlockBlob) {
        # Block blobs: sum the committed block sizes.
        $blobSizeInBytes += 8
        $Blob.ICloudBlob.DownloadBlockList() |
            ForEach-Object { $blobSizeInBytes += $_.Length + $_.Name.Length }
    }
    else {
        # Page blobs: sum only the written page ranges.
        $Blob.ICloudBlob.GetPageRanges() |
            ForEach-Object { $blobSizeInBytes += 12 + $_.EndOffset - $_.StartOffset }
    }

    return $blobSizeInBytes
}

function global:Get-ContainerBytes {
    param (
        [Parameter(Mandatory = $true)]
        [Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer]$Container
    )

    # Base + name of the container.
    $containerSizeInBytes = 48 + $Container.Name.Length * 2

    # Get the size of the metadata.
    $metadataEnumerator = $Container.Metadata.GetEnumerator()
    while ($metadataEnumerator.MoveNext()) {
        $containerSizeInBytes += 3 + $metadataEnumerator.Current.Key.Length + $metadataEnumerator.Current.Value.Length
    }

    # Get the size of the shared access policies.
    $containerSizeInBytes += $Container.Permission.SharedAccessPolicies.Count * 512

    # Calculate the size of all blobs.
    $blobCount = 0
    $blobs = Get-AzureStorageBlob -Context $storageContext -Container $Container.Name
    foreach ($blobItem in $blobs) {
        $containerSizeInBytes += Get-BlobBytes $blobItem
        $blobCount++
    }

    return @{ "containerSize" = $containerSizeInBytes; "blobCount" = $blobCount }
}

function global:ListBlobCapacity([System.Array]$arr, $StgAccountName, $stgAccountRGName) {
    $Delimiter = ','
    $Today = Get-Date

    $storageAccountKey = ((Get-AzureRmStorageAccountKey -ResourceGroupName $stgAccountRGName -Name $StgAccountName)[0]).Value
    $StorageCtx = New-AzureStorageContext -StorageAccountName $StgAccountName -StorageAccountKey $storageAccountKey
    $metrics = Get-AzureStorageServiceMetricsProperty -Context $StorageCtx -ServiceType "Blob" -MetricsType Hour -ErrorAction "SilentlyContinue"

    # If the storage account has monitoring turned on, get the capacity for the configured number of retention days.
    if ($metrics.MetricsLevel -ne "None") {
        $RetentionDays = $metrics.RetentionDays
        if ($RetentionDays -eq $null -or $RetentionDays -eq '') { $RetentionDays = 0 }
        $table = GetTableReference $StgAccountName $storageAccountKey '$MetricsCapacityBlob'
        # Loop over the retained days.
        for ($d = $RetentionDays; $d -ge 0; $d = $d - 1) {
            $date = (Get-Date $Today.AddDays(-$d) -Format 'yyyyMMdd')
            $partitionKey = $date + "T0000"
            $result = $table.Execute([Microsoft.WindowsAzure.Storage.Table.TableOperation]::Retrieve($partitionKey, "data"))
            if ($result.HttpStatusCode -eq "200") {
                $arr += CreateRowObject $StgAccountName (Get-Date $Today.AddDays(-$d)).ToString("d")
            }
        }
    }
    return $arr
}

function global:GetBlobsCurrentCapacity($StgAccountName, $stgAccountRGName) {
    $Delimiter = ','
    $Today = Get-Date

    $storageAccountKey = ((Get-AzureRmStorageAccountKey -ResourceGroupName $stgAccountRGName -Name $StgAccountName)[0]).Value
    $StorageCtx = New-AzureStorageContext -StorageAccountName $StgAccountName -StorageAccountKey $storageAccountKey
    $metrics = Get-AzureStorageServiceMetricsProperty -Context $StorageCtx -ServiceType "Blob" -MetricsType Hour -ErrorAction "SilentlyContinue"

    # If the storage account has monitoring turned on, read yesterday's capacity row.
    if ($metrics.MetricsLevel -ne "None") {
        $table = GetTableReference $StgAccountName $storageAccountKey '$MetricsCapacityBlob'
        $date = (Get-Date $Today.AddDays(-1) -Format 'yyyyMMdd')
        $partitionKey = $date + "T0000"
        $result = $table.Execute([Microsoft.WindowsAzure.Storage.Table.TableOperation]::Retrieve($partitionKey, "data"))
        if ($result.HttpStatusCode -eq "200") {
            $rowObj = CreateRowObject $StgAccountName (Get-Date $Today.AddDays(-1)).ToString("d")
        }
    }
    return $rowObj
}

# Set up access to the Azure table $TableName.
function global:GetTableReference($StgAccountName, $StorageAccountKey, $TableName) {
    $accountCredentials = New-Object "Microsoft.WindowsAzure.Storage.Auth.StorageCredentials" $StgAccountName, $StorageAccountKey
    $storageAccount = New-Object "Microsoft.WindowsAzure.Storage.CloudStorageAccount" $accountCredentials, $true
    $tableClient = $storageAccount.CreateCloudTableClient()
    $table = $tableClient.GetTableReference($TableName)
    return $table
}

function global:CreateRowObject($StgAccountName, $DateTime) {
    $row = New-Object System.Object
    $row | Add-Member -Type NoteProperty -Name "StorageAccountName" -Value $StgAccountName
    $row | Add-Member -Type NoteProperty -Name "DateTime" -Value $DateTime
    foreach ($key in $result.Result.Properties.Keys) {
        $val = $result.Result.Properties[$key].PropertyAsObject
        if ($Delimiter -eq ",") { $val = $val -replace ",", "." }
        $row | Add-Member -Type NoteProperty -Name $key -Value $val
    }
    return $row
}

function global:Get-PremiumBlobGBSize {
    param (
        [Microsoft.WindowsAzure.Commands.Common.Storage.ResourceModel.AzureStorageBlob]$blobObj
    )

    # Premium blobs are billed on provisioned size, so the blob length is the allocated space.
    $blobGBSize = [math]::Truncate($blobObj.Length / 1GB)
    return $blobGBSize
}

[/powershell]

Some comments:

  • All functions have been declared as global so that they can be invoked from the main script when required
  • Connect-Azure: allows you to select the Azure subscription against which the reporting script will run, and establishes the connection
  • Calculate-BlobSpace: invoked by the main script; returns the sum of the space used by the VHDs in a given Standard Storage Account (a usage sketch follows below)
  • Get-PremiumBlobGBSize: invoked by the main script; returns the allocated space in GB of a single blob in a Premium Storage Account, which the main script sums per account
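Once Module-Azure.ps1 is saved, you can exercise the functions on their own. A minimal usage sketch (the storage account and resource group names are placeholders):

[powershell]
# Hypothetical usage: dot-source the module, connect, then measure one account
. .\Module-Azure.ps1
Connect-Azure
$usedGB = Calculate-BlobSpace -StorageAccountName 'mystandardsa' -StorageAccountRGName 'my-rg'
Write-Host "Used space for mystandardsa: $usedGB GB"
[/powershell]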

Now save Module-Azure.ps1, and in the same directory, create a new PowerShell file called "Generate-AzureReport.ps1". This will be the main file, which will invoke the Module-Azure functions.

Open Generate-AzureReport.ps1 with a PowerShell editor and paste the following code:


[powershell]

$ScriptDir = $PSScriptRoot

Write-Host "Current script directory is $ScriptDir"

Set-Location -Path $ScriptDir

.\module-azure.ps1

Connect-Azure

if (!(Get-Module ReportHTML)) {
    if (!(Get-Module ReportHTML -ListAvailable)) {
        Write-Host "Please install the ReportHTML module from the PowerShell Gallery"
    }
    else {
        Write-Host "Importing ReportHTML module"
        Import-Module ReportHTML
    }
}
else {
    Write-Host "ReportHTML module is already loaded"
}

$subname = $azureSubscription.SubscriptionName

$billingReportFolder = "C:\temp\billing"

if ( !(test-path $billingReportFolder) ) { New-Item $billingReportFolder -ItemType Directory }

# Analyzing Standard Storage Account consumption from the hidden Azure Storage table $MetricsCapacityBlob

$sa = Find-AzureRmResource -ResourceType Microsoft.Storage/storageAccounts | Where-Object {$_.Sku.tier -ne "Premium" }

$saConsumptions = @()

foreach ($saItem in $sa) {
    $blobObj = GetBlobsCurrentCapacity -StgAccountName $saItem.Name -stgAccountRGName $saItem.ResourceGroupName
    $blobCapacityGB = [math]::Truncate($blobObj.Capacity / 1GB)

    $blobSpaceItem = '' | select StorageAccountName, Allocated_GB
    $blobSpaceItem.StorageAccountName = $saItem.Name
    $blobSpaceItem.Allocated_GB = $blobCapacityGB

    $saConsumptions += $blobSpaceItem
}

$saPremium = Find-AzureRmResource -ResourceType Microsoft.Storage/storageAccounts | Where-Object {$_.Sku.tier -eq "Premium" }

$saPremiumUsage = @()

foreach ($saPremiumItem in $saPremium) {
    $storageAccountKey = ((Get-AzureRmStorageAccountKey -ResourceGroupName $saPremiumItem.ResourceGroupName -Name $saPremiumItem.Name)[0]).Value
    $StorageCtx = New-AzureStorageContext -StorageAccountName $saPremiumItem.Name -StorageAccountKey $storageAccountKey
    $containers = Get-AzureStorageContainer -Context $StorageCtx

    $saPremiumUsageItem = '' | select StorageAccountName, Allocated_GB
    $saPremiumUsageItem.StorageAccountName = $saPremiumItem.Name
    $saPremiumUsageItem.Allocated_GB = 0

    foreach ($container in $containers) {
        $blobs = Get-AzureStorageBlob -Context $StorageCtx -Container $container.Name
        foreach ($blobItem in $blobs) {
            $blobsize = Get-PremiumBlobGBSize ($blobItem)
            $saPremiumUsageItem.Allocated_GB = $saPremiumUsageItem.Allocated_GB + $blobsize
        }
    }

    $saPremiumUsage += $saPremiumUsageItem
}

# Calculate Totals

$saConsumptionsTotal = 0

foreach ($saConsumptionsItem in $saConsumptions) {
    $saConsumptionsTotal = $saConsumptionsTotal + $saConsumptionsItem.Allocated_GB
}

$saPremiumUsageTotal = 0

foreach ($saPremiumUsageItem in $saPremiumUsage) {
    $saPremiumUsageTotal = $saPremiumUsageTotal + $saPremiumUsageItem.Allocated_GB
}

# Generate Reports

$Rpt = @()

$TitleText = "Azure Usage Report "

$Rpt += Get-HTMLOpenPage -TitleText $TitleText -LeftLogoName "sample"

##

$Rpt += Get-HtmlContentOpen -HeaderText "Standard Storage Accounts Consumptions (GBs)"

$saConsumptionsTableStyle = Set-TableRowColor ($saConsumptions | Sort-Object -Property StorageAccountName) -Alternating

$Rpt += Get-HTMLContentTable ($saConsumptionsTableStyle) -Fixed

$Rpt += Get-HtmlContentClose

##

$Rpt += Get-HtmlContentOpen -HeaderText "Total of Standard Storage space allocated on Azure"

$Rpt += Get-HTMLContentText -Heading "Total (GB)" -Detail "$saConsumptionsTotal"

$Rpt += Get-HtmlContentClose

##

if ( $saPremiumUsage -ne $null) {

$Rpt += Get-HtmlContentOpen -HeaderText "Premium Storage Accounts Consumptions (GBs)"

$saPremiumUsageTableStyle = Set-TableRowColor ($saPremiumUsage | Sort-Object -Property StorageAccountName) -Alternating

$Rpt += Get-HTMLContentTable ($saPremiumUsageTableStyle) -Fixed

$Rpt += Get-HtmlContentClose

}

##

$Rpt += Get-HtmlContentOpen -HeaderText "Total of Premium Storage space allocated on Azure"

$Rpt += Get-HTMLContentText -Heading "Total (GB)" -Detail "$saPremiumUsageTotal"

$Rpt += Get-HtmlContentClose

##

$Rpt += Get-HTMLClosePage

$date = Get-Date -Format yyyy.MM.dd.hh.mm

$reportName = $subname + "_" + $date

Write-Host "Output folder is: C:\temp\Billing"

Write-Host "Report file name is : " $reportName

$file = Save-HTMLReport -ReportContent $rpt -ShowReport -ReportPath "C:\temp\Billing" -ReportName $reportName

[/powershell]

Save it.

Some comments:

  • Module-Azure.ps1 is executed first, making its functions available
  • The Connect-Azure function, declared as global in Module-Azure, is then invoked
  • The script checks whether the ReportHTML module is installed and imports it
  • All Standard Storage Accounts available in the selected subscription are retrieved
  • The space allocated by VHDs stored in each Standard Storage Account is calculated
  • All Premium Storage Accounts available in the selected subscription are retrieved
  • The space allocated by VHDs stored in each Premium Storage Account is calculated
  • Totals across all Standard and all Premium Storage Accounts are calculated
  • The report is formatted in HTML using the ReportHTML functions
  • The report is saved in the default location and opened in the default browser

It's time to run the script and get some reports!

From a PowerShell editor or a PowerShell console, run Generate-AzureReport.ps1.

Provide Azure credentials

Select target Azure subscription and click on OK button

Sample of Azure Report

Sample of PowerShell output console

Note:

  • "Output folder" is the folder path where the report has been saved
  • "Report file name" is the name of the report file

Thanks for your patience. Any feedback is appreciated.

Moving VHDs from one Storage Account to Another (Part 2) - Updated 2017 08 18


This article will show how to automatically copy VHDs from a source storage account to a new one, without hardcoding values, and then how to create a new VM with the disks in the new Storage Account, reusing the same values as the original VM. The first thing is to create a PowerShell module file that holds all the functions invoked by the main script.

Ideally, this module can be reused for other purposes, and new functions can be added to it according to your needs.

Open your preferred PowerShell editor and create a new file called "Module-Azure.ps1".

Note: all functions will be declared as global in order to be available to other scripts.

The first function to be added is called Connect-Azure and it will simplify Azure connection activities.

[powershell]
function global:Connect-Azure {
    Login-AzureRmAccount
    $global:subName = (Get-AzureRmSubscription | select SubscriptionName | Out-GridView -Title "Select a subscription" -OutputMode Single).SubscriptionName
    Select-AzureRmSubscription -SubscriptionName $subName
}
[/powershell]

The above function, using the Out-GridView cmdlet, will show all Azure subscriptions associated with your account and allow you to select the one against which to execute the script.

The second function to be added is called CopyVHDs. It takes care of copying all VHDs from the selected source Storage Account to the selected destination Storage Account.

[powershell]
function global:CopyVHDs {
    param (
        $sourceSAItem,
        $destinationSAItem
    )

    $sourceSA = Get-AzureRmStorageAccount -ResourceGroupName $sourceSAItem.ResourceGroupName -Name $sourceSAItem.StorageAccountName
    $sourceSAContainerName = "vhds"
    $sourceSAKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $sourceSAItem.ResourceGroupName -Name $sourceSAItem.StorageAccountName)[0].Value
    $sourceSAContext = New-AzureStorageContext -StorageAccountName $sourceSAItem.StorageAccountName -StorageAccountKey $sourceSAKey
    $blobItems = Get-AzureStorageBlob -Context $sourceSAContext -Container $sourceSAContainerName

    $destinationSAKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $destinationSAItem.ResourceGroupName -Name $destinationSAItem.StorageAccountName)[0].Value
    $destinationContainerName = "vhds"
    $destinationSAContext = New-AzureStorageContext -StorageAccountName $destinationSAItem.StorageAccountName -StorageAccountKey $destinationSAKey

    foreach ($blobItem in $blobItems) {
        # Copy the blob
        Write-Host "Copying " $blobItem.Name " from " $sourceSAItem.StorageAccountName " to " $destinationSAItem.StorageAccountName
        $blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainerName -DestContext $destinationSAContext -SrcBlob $blobItem.Name -Context $sourceSAContext -SrcContainer $sourceSAContainerName

        # Poll until the asynchronous copy completes
        $blobCopyStatus = Get-AzureStorageBlob -Blob $blobItem.Name -Container $destinationContainerName -Context $destinationSAContext | Get-AzureStorageBlobCopyState
        [int]$i = 0
        while ($blobCopyStatus.Status -ne "Success") {
            Start-Sleep -Seconds 180
            $i = $i + 1
            $blobCopyStatus = Get-AzureStorageBlob -Blob $blobItem.Name -Container $destinationContainerName -Context $destinationSAContext | Get-AzureStorageBlobCopyState
            Write-Host "Blob copy status is " $blobCopyStatus.Status
            Write-Host "Bytes Copied: " $blobCopyStatus.BytesCopied
            Write-Host "Total Bytes: " $blobCopyStatus.TotalBytes
            Write-Host "Cycle Number $i"
        }
        Write-Host "Blob " $blobItem.Name " copied"
    }

    return $true
}
[/powershell]


This function basically executes the same commands that were shown in the first article. The difference, of course, is that it takes as input two objects which contain the information required to copy the VHDs between the two Storage Accounts. A couple of notes:

  • Because the number of VHDs to copy is unknown in advance, a foreach loop iterates over all VHDs to be copied
  • To minimize side effects, that foreach contains a while loop which ensures each copy has actually completed before control is returned (an alternative using -WaitForComplete is sketched below)
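As an aside, Get-AzureStorageBlobCopyState also has a -WaitForComplete switch, so the polling loop could be replaced with a single blocking call; a sketch using the same variable names as the function above:

[powershell]
# Alternative to the polling loop: block until the copy completes
Get-AzureStorageBlob -Blob $blobItem.Name -Container $destinationContainerName -Context $destinationSAContext |
    Get-AzureStorageBlobCopyState -WaitForComplete
[/powershell]

The explicit loop does have the advantage of printing progress every cycle, which is useful for large VHDs.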

The third function to be added is called Create-AzureVMFromVHDs. It takes care of creating a new VM using the existing VHDs. To provide a PoC of what can be achieved, the following assumptions have been made:

  • The new VM will be deployed in an existing VNet / subnet
  • The new VM will have the same size as the original VM
  • The new VM will be deployed in a new Resource Group
  • The new VM will be deployed in the same location as the (destination) Storage Account where the VHDs have been copied
  • The new VM will have the same credentials as the source one
  • The new VM will be assigned a new dynamic public IP
  • All VHDs copied from the source Storage Account (which were attached to the source VM) will be attached to the new VM

[powershell]
function global:Create-AzureVMFromVHDs {
    param (
        $destinationVNETItem,
        $destinationSubnetItem,
        $destinationSAItem,
        $sourceVMItem
    )

    $destinationSA = Get-AzureRmStorageAccount -Name $destinationSAItem.StorageAccountName -ResourceGroupName $destinationSAItem.ResourceGroupName
    $Location = $destinationSA.PrimaryLocation

    $destinationVMItem = '' | select Name, ResourceGroupName
    $destinationVMItem.Name = ($sourceVMItem.Name + "02").ToLower()
    $destinationVMItem.ResourceGroupName = ($sourceVMItem.ResourceGroupName + "02").ToLower()

    $InterfaceName = $destinationVMItem.Name + "-nic"
    $destinationResourceGroup = New-AzureRmResourceGroup -Location $Location -Name $destinationVMItem.ResourceGroupName

    $sourceVM = Get-AzureRmVM -Name $sourceVMItem.Name -ResourceGroupName $sourceVMItem.ResourceGroupName
    $VMSize = $sourceVM.HardwareProfile.VmSize
    $sourceVHDs = $sourceVM.StorageProfile.DataDisks
    $OSDiskName = $sourceVM.StorageProfile.OsDisk.Name
    $publicIPName = $destinationVMItem.Name + "-pip"

    # The destination OS disk URI is the source URI with the storage account name swapped
    $sourceVMOSDiskUri = $sourceVM.StorageProfile.OsDisk.Vhd.Uri
    $OSDiskUri = $sourceVMOSDiskUri.Replace($sourceSAItem.StorageAccountName, $destinationSAItem.StorageAccountName)

    # Network script
    $VNet = Get-AzureRMVirtualNetwork -Name $destinationVNETItem.Name -ResourceGroupName $destinationVNETItem.ResourceGroupName
    $Subnet = Get-AzureRMVirtualNetworkSubnetConfig -Name $destinationSubnetItem.Name -VirtualNetwork $VNet

    # Public IP script
    $publicIP = New-AzureRmPublicIpAddress -Name $publicIPName -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $Location -AllocationMethod Dynamic

    # Create the interface
    $Interface = New-AzureRMNetworkInterface -Name $InterfaceName -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $Location -SubnetId $Subnet.Id -PublicIpAddressId $publicIP.Id

    # Compute script
    $VirtualMachine = New-AzureRMVMConfig -VMName $destinationVMItem.Name -VMSize $VMSize
    $VirtualMachine = Add-AzureRMVMNetworkInterface -VM $VirtualMachine -Id $Interface.Id
    $VirtualMachine = Set-AzureRMVMOSDisk -VM $VirtualMachine -Name $OSDiskName -VhdUri $OSDiskUri -CreateOption Attach -Windows
    $VirtualMachine = Set-AzureRmVMBootDiagnostics -VM $VirtualMachine -Disable

    # Adding data disks
    if ($sourceVHDs.Length -gt 0) {
        Write-Host "Found Data disks"
        foreach ($sourceVHD in $sourceVHDs) {
            $destinationDataDiskUri = ($sourceVHD.Vhd.Uri).Replace($sourceSAItem.StorageAccountName, $destinationSAItem.StorageAccountName)
            $VirtualMachine = Add-AzureRmVMDataDisk -VM $VirtualMachine -Name $sourceVHD.Name -VhdUri $destinationDataDiskUri -Lun $sourceVHD.Lun -Caching $sourceVHD.Caching -CreateOption Attach
        }
    }
    else {
        Write-Host "No Data disk found"
    }

    # Create the VM in Azure
    New-AzureRMVM -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $Location -VM $VirtualMachine

    Write-Host "VM created. Well Done !!"
}
[/powershell]

A couple of notes:

  • The URIs of the VHDs copied to the destination Storage Account are calculated by replacing the source Storage Account name with the destination Storage Account name in each URI (a quick illustration follows below)
  • The destination VHDs will be attached in the same order (same LUNs) as the source VHDs
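A quick illustration of that URI rewrite, using hypothetical storage account names:

[powershell]
# Hypothetical example of the URI rewrite
$exampleUri     = 'https://sourcesa01.blob.core.windows.net/vhds/myvm-osdisk.vhd'
$destinationUri = $exampleUri.Replace('sourcesa01', 'destsa01')
# $destinationUri is now https://destsa01.blob.core.windows.net/vhds/myvm-osdisk.vhd
[/powershell]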

Module-Azure.ps1 should have a structure like this:

Now it's time to create another file, called Move-VM.ps1, which should be stored in the same folder as Module-Azure.ps1.

Note: if you want to store it in a different folder, update the path in the line that executes Module-Azure.ps1.

Paste following code:

[powershell]
$ScriptDir = $PSScriptRoot

Write-Host "Current script directory is $ScriptDir"

Set-Location -Path $ScriptDir

.\Module-Azure.ps1

Connect-Azure

$vmItem = Get-AzureRmVM | select ResourceGroupName,Name | Out-GridView -Title "Select VM" -OutputMode Single

$sourceSAItem = Get-AzureRmStorageAccount | select StorageAccountName,ResourceGroupName | Out-GridView -Title "Select Source Storage Account" -OutputMode Single

$destinationSAItem = Get-AzureRmStorageAccount | select StorageAccountName,ResourceGroupName | Out-GridView -Title "Select Destination Storage Account" -OutputMode Single

# Stop VM

Write-Host "Stopping VM " $vmItem.Name

get-azurermvm -name $vmItem.Name -ResourceGroupName $vmItem.ResourceGroupName | stop-azurermvm

Write-Host "Stopped VM " $vmItem.Name

CopyVHDs -sourceSAItem $sourceSAItem -destinationSAItem $destinationSAItem

$destinationVNETItem = Get-AzureRmVirtualNetwork | select Name,ResourceGroupName | Out-GridView -Title "Select Destination VNET" -OutputMode Single

$destinationVNET = Get-AzureRmVirtualNetwork -Name $destinationVNETItem.Name -ResourceGroupName $destinationVNETItem.ResourceGroupName

$destinationSubnetItem = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $destinationVNET | select Name,AddressPrefix | Out-GridView -Title "Select Destination Subnet" -OutputMode Single

Create-AzureVMFromVHDs -destinationVNETItem $destinationVNETItem -destinationSubnetItem $destinationSubnetItem -destinationSAItem $destinationSAItem -sourceVMItem $vmItem

[/powershell]

Comments:

  • Module-Azure.ps1 is executed, making its functions available
  • The Connect-Azure function (declared in Module-Azure) is invoked; this is possible because it has been declared as global
  • A subset of information about the source VM, the source Storage Account and the destination Storage Account is retrieved; it will be used later
  • The source VM is stopped
  • The CopyVHDs function (declared in Module-Azure) is invoked, passing the three previously retrieved parameters
  • The VNET and subnet to which the new VM will be attached are retrieved
  • The Create-AzureVMFromVHDs function (declared in Module-Azure) is invoked, passing the already retrieved parameters

The following screenshots show an execution of the Move-VM script:

Select Azure subscription

Select source VM

Select source Storage Account

Select Destination Storage Account

Confirm to stop VM

Select destination VNET

Select destination Subnet

Output sample #1

Output sample #2

Source VM Resource Group

Destination VM RG

Destination Storage Account RG

Source VHDs

Destination VHDs

Thanks for your patience. Any feedback is appreciated.

Note: The above script has been tested with Azure PS 3.7.0 (March 2017).

Starting from Azure PS 4.x, Get-AzureRmSubscription returns an array of objects with the following properties: Name, Id, TenantId and State.

The Connect-Azure function uses the SubscriptionName property, which is no longer available. This is the reason why some people saw an empty window.

The Connect-Azure function should be modified as follows to work with Azure PS 4.x:

[powershell]

function global:Connect-Azure {
    Login-AzureRmAccount
    $global:subName = (Get-AzureRmSubscription | select Name | Out-GridView -Title "Select a subscription" -OutputMode Single).Name
    Select-AzureRmSubscription -SubscriptionName $subName
}

[/powershell]
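To check which version of the AzureRM module you have before deciding which variant to use:

[powershell]
Get-Module -ListAvailable -Name AzureRM | Select-Object Name, Version
[/powershell]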

Publishing Microsoft Azure Stack TP3 on the Internet via NAT


As you may know, Azure Stack TP3 is here. This blog will outline how to publish your Azure Stack instance on the internet, using NAT rules to redirect the public IP addresses to the internal "external" IPs. Our group published another article on how to do this for TP2, and this is the updated version for TP3.

Starting Point

This article assumes you have a host ready for installation, with the TP3 VHDx loaded onto it, and that you are familiar with the Azure Stack installation process. The code in this article is extracted from a larger process but should be enough to get you through the process end to end.

Azure Stack Installation

First things first, I like to install a few other tools to help me edit code and access the portal; this is not required.

[powershell]
iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex
choco install notepadplusplus -y
choco install googlechrome -y --ignore-checksums
choco install visualstudiocode -y
choco install beyondcompare -y
choco install baretail -y
choco install powergui -y --ignore-checksums
[/powershell]

Next, you want to open up this file: C:\clouddeployment\setup\DeploySingleNode.ps1

Editing these values allows you to create different internal naming and external address space. As you can see, the ExternalDomainFQDN is made up of the region and the external suffix.

This is a lot easier now that the domain parameters are all read from the same place; there is no need to hunt down domain names in files.

[powershell]
$AdminPassword = 'SuperSecret!' | ConvertTo-SecureString -AsPlainText -Force
$AadAdminPass = 'SuperSecret!' | ConvertTo-SecureString -AsPlainText -Force
$aadCred = New-Object PSCredential('stackadmin@poc.xxxxx.com', $AadAdminPass)

. c:\clouddeployment\setup\InstallAzureStackPOC.ps1 -AzureEnvironment "AzureCloud" `
    -AdminPassword $AdminPassword `
    -PublicVLanId 97 `
    -NATIPv4Subnet '172.20.51.0/24' `
    -NATIPv4Address '172.20.51.51' `
    -NATIPv4DefaultGateway '172.20.51.1' `
    -InfraAzureDirectoryTenantAdminCredential $aadCred `
    -InfraAzureDirectoryTenantName 'poc.xxxxx.com' `
    -EnvironmentDNS '172.20.11.21'
[/powershell]

Remember to only have one NIC enabled. We also had slightly less than the minimum space required for the OS disk, so we simply edit the XML file at C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml and change the value of the node Role.PrivateInfo.ValidationRequirements.MinimumSizeOfSystemDiskGB (a sketch follows after the next snippet). The rest is over to the TP3 installation. So far our experience of TP3 is that it is much more stable to install, with just the occasional rerun using

[powershell]InstallAzureStackPOC.ps1 -rerun[/powershell]
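For the OS disk size tweak mentioned above, here is a minimal sketch; it assumes the XML element is reachable by the node name given earlier, and the value chosen is just an example:

[powershell]
# Hypothetical sketch: relax the minimum OS disk size validation
$oneNodeRolePath = 'C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml'
[xml]$oneNodeRole = Get-Content $oneNodeRolePath
$node = $oneNodeRole.SelectSingleNode('//MinimumSizeOfSystemDiskGB')
$node.InnerText = '170'   # assumed value; set to just under your actual disk size
$oneNodeRole.Save($oneNodeRolePath)
[/powershell]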

Once the installation completes, obviously check that you can access the portal. I use Chrome as it asks a lot fewer questions to confirm the portal is running. We use a JSON file, defined by a larger automation script, to deploy these NAT rules. Here I will simply share a portion of the resulting JSON file that is saved to C:\CloudDeployment\Setup\StackRecord.json.

[xml]
{
  "Region": "SV5",
  "ExternalDomain": "AS01.poc.xxxxx.com",
  "nr_Table": "192.168.102.2:80,443:172.20.51.133:3x.7x.xx5.133",
  "nr_Queue": "192.168.102.3:80,443:172.20.51.134:3x.7x.xx5.134",
  "nr_blob": "192.168.102.4:80,443:172.20.51.135:3x.7x.xx5.135",
  "nr_adfs": "192.168.102.5:80,443:172.20.51.136:3x.7x.xx5.136",
  "nr_graph": "192.168.102.6:80,443:172.20.51.137:3x.7x.xx5.137",
  "nr_api": "192.168.102.7:443:172.20.51.138:3x.7x.xx5.138",
  "nr_portal": "192.168.102.8:13011,30015,13001,13010,13021,13020,443,13003,13026,12648,12650,12499,12495,12647,12646,12649:172.20.51.139:3x.7x.xx5.139",
  "nr_publicapi": "192.168.102.9:443:172.20.51.140:3x.7x.xx5.140",
  "nr_publicportal": "192.168.102.10:13011,30015,13001,13010,13021,13020,443,13003,12495,12649:172.20.51.141:3x.7x.xx5.141",
  "nr_crl": "192.168.102.11:80:172.20.51.142:3x.7x.xx5.142",
  "nr_extensions": "192.168.102.12:443,12490,12491,12498:172.20.51.143:3x.7x.xx5.143"
}
[/xml]

This is used by the following script, also saved to the setup folder:

[powershell]
param (
    $StackBuildJSONPath = 'C:\CloudDeployment\Setup\StackRecord.json'
)

$server = 'mas-bgpnat01'
$StackBuild = Get-Content $StackBuildJSONPath | ConvertFrom-Json

[scriptblock]$ScriptBlockAddExternal = {
    param($ExIp)
    $NatSetup = Get-NetNat
    Write-Verbose "Adding External Address $ExIp"
    Add-NetNatExternalAddress -NatName $NatSetup.Name -IPAddress $ExIp -PortStart 80 -PortEnd 63356
}

[scriptblock]$ScriptblockAddPorts = {
    param(
        $ExIp,
        $natport,
        $InternalIp
    )
    Write-Verbose "Adding NAT Mapping $($ExIp):$($natport)->$($InternalIp):$($natport)"
    Add-NetNatStaticMapping -NatName $NatSetup.Name -Protocol TCP -ExternalIPAddress $ExIp -InternalIPAddress $InternalIp -ExternalPort $natport -InternalPort $natport
}

# Parse the nr_* entries ("internalIP:ports:natIP:publicIP") into rule objects
$NatRules = @()
$NatRuleNames = ($StackBuild | Get-Member | ? { $_.Name -like "nr_*" }).Name
foreach ($NATName in $NatRuleNames) {
    $NatRule = '' | select Name, Internal, External, Ports
    $NatRule.Name = $NATName.Replace('nr_', '')
    $rules = $StackBuild.($NATName).split(':')
    $NatRule.Internal = $rules[0]
    $NatRule.External = $rules[2]
    $NatRule.Ports = $rules[1]
    $NatRules += $NatRule
}

$session = New-PSSession -ComputerName $server

foreach ($NatRule in $NatRules) {
    Invoke-Command -Session $session -ScriptBlock $ScriptBlockAddExternal -ArgumentList $NatRule.External
    $NatPorts = $NatRule.Ports.Split(',').Trim()
    foreach ($NatPort in $NatPorts) {
        Invoke-Command -Session $session -ScriptBlock $ScriptblockAddPorts -ArgumentList $NatRule.External, $NatPort, $NatRule.Internal
    }
}

Remove-PSSession $session
[/powershell]

Next, you need to publish your DNS records. You can do this by hand if you know your NAT mappings; as a reference, you can open up the DNS server on MAS-DC01.

However, here are some scripts I have created to help automate this process. I normally run this from another machine, but I have edited it to run in the context of the Azure Stack host. First, we need a couple of reference files.

DNSMappings C:\clouddeployment\setup\DNSMapping.json

[xml]
[
  { "Name": "nr_Table", "A": "*", "Subdomain": "table", "Zone": "RegionZone.DomainZone" },
  { "Name": "nr_Queue", "A": "*", "Subdomain": "queue", "Zone": "RegionZone.DomainZone" },
  { "Name": "nr_blob", "A": "*", "Subdomain": "blob", "Zone": "RegionZone.DomainZone" },
  { "Name": "nr_adfs", "A": "adfs", "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_graph", "A": "graph", "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_api", "A": "api", "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_portal", "A": "portal", "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_publicapi", "A": "publicapi", "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_publicportal", "A": "publicportal", "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_crl", "A": "crl", "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_extensions", "A": "*", "Subdomain": "vault", "Zone": "RegionZone.DomainZone" },
  { "Name": "nr_extensions", "A": "*", "Subdomain": "vaultcore", "Zone": "RegionZone.DomainZone" }
]
[/xml]

ExternalMapping C:\clouddeployment\setup\ExternalMapping.json. This is a smaller file containing only the NAT mappings referenced in this example.

[xml]
[
  { "External": "3x.7x.2xx.133", "Internal": "172.20.51.133" },
  { "External": "3x.7x.2xx.134", "Internal": "172.20.51.134" },
  { "External": "3x.7x.2xx.135", "Internal": "172.20.51.135" },
  { "External": "3x.7x.2xx.136", "Internal": "172.20.51.136" },
  { "External": "3x.7x.2xx.137", "Internal": "172.20.51.137" },
  { "External": "3x.7x.2xx.138", "Internal": "172.20.51.138" },
  { "External": "3x.7x.2xx.139", "Internal": "172.20.51.139" },
  { "External": "3x.7x.2xx.140", "Internal": "172.20.51.140" },
  { "External": "3x.7x.2xx.141", "Internal": "172.20.51.141" },
  { "External": "3x.7x.2xx.142", "Internal": "172.20.51.142" },
  { "External": "3x.7x.2xx.143", "Internal": "172.20.51.143" }
]
[/xml]

Bringing it all together with this script:

[powershell]
Param (
    $StackJSONPath = 'c:\clouddeployment\setup\StackRecord.json'
)

$stackRecord = Get-Content $StackJSONPath | ConvertFrom-Json
$DNSMappings = Get-Content c:\clouddeployment\setup\DNSMapping.json | ConvertFrom-Json
$ExternalMapping = Get-Content c:\clouddeployment\setup\ExternalMapping.json | ConvertFrom-Json

# Build one record object per mapping, resolving the public IP and zone names
$DNSRecords = @()
foreach ($DNSMapping in $DNSMappings) {
    $DNSRecord = '' | select Name, A, IP, Subdomain, Domain
    $DNS = $stackRecord.($DNSMapping.Name).split(':')
    $DNSRecord.IP = ($ExternalMapping | ? { $_.Internal -eq $DNS[2] }).External
    $DNSRecord.Name = $DNSMapping.Name
    $DNSRecord.A = $DNSMapping.A
    $DNSRecord.Subdomain = $DNSMapping.Subdomain.Replace("RegionZone", $stackRecord.Region.ToLower()).Replace("DomainZone", $stackRecord.ExternalDomain.ToLower())
    $DNSRecord.Domain = $DNSMapping.Zone.Replace("RegionZone", $stackRecord.Region.ToLower()).Replace("DomainZone", $stackRecord.ExternalDomain.ToLower())
    $DNSRecords += $DNSRecord
}
# Here you can use this array to do what you need; two examples follow

# CSV host file for import
$DNSRecords | select A, IP, Subdomain, Domain | ConvertTo-Csv -NoTypeInformation | Set-Content c:\clouddeployment\setup\DNSRecords.csv

$SubDomains = $DNSRecords | group Subdomain
foreach ($SubDomain in ($SubDomains | Where { $_.Name -ne '' })) {
    Write-Output ("Records for " + $SubDomain.Name)
    foreach ($record in $SubDomain.Group) {
        # Initialize
        $resourceAName = $record.A
        $PublicIP = $record.IP
        $resourceSubDomainName = $record.Subdomain
        $zoneName = $record.Domain
        $resourceName = $resourceAName + "." + $resourceSubDomainName + "." + $zoneName

        Write-Output ("Record for $resourceName")
        # Create individual DNS records here
    }
}
[/powershell]

The array will give you the records you need to create.
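For instance, the "create individual DNS records here" placeholder could call the DnsServer module against the stack's domain controller. A hedged sketch (the server name and zone layout are assumptions based on the defaults above):

[powershell]
# Hypothetical sketch: create each A record on MAS-DC01's DNS service
foreach ($record in $DNSRecords) {
    $recordName = "$($record.A).$($record.Subdomain)"
    Add-DnsServerResourceRecordA -ComputerName 'mas-dc01' -ZoneName $record.Domain -Name $recordName -IPv4Address $record.IP
}
[/powershell]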

All things being equal and a little bit of luck...

To access this external Azure Stack instance via PowerShell you will need a few details and IDs. Most of this is easy enough; however, to get your $EnvironmentID from the deployment host, open C:\EceStore\ and find your deployment XML (approx. 573 KB). Inside this file, search for 'DeploymentGuid'; this is your Environment ID. Or you can run this code on the host (you may need to change the $DeploymentFile parameter):

[powershell]
param (
    $DeploymentFile = 'C:\EceStore\403314e1-d945-9558-fad2-42ba21985248\80e0921f-56b5-17d3-29f5-cd41bf862787'
)

[xml]$DeploymentStore = Get-Content $DeploymentFile | Out-String
$InfraRole = $DeploymentStore.CustomerConfiguration.Role.Roles.Role | ? Id -eq Infrastructure
$BareMetalInfo = $InfraRole.Roles.Role | ? Id -eq BareMetal | Select -ExpandProperty PublicInfo
$PublicInfoRoles = $DeploymentStore.CustomerConfiguration.Role.Roles.Role.Roles.Role | Select Id, PublicInfo | Where-Object PublicInfo -ne $null
$DeploymentDeets = @{
    DeploymentGuid = $BareMetalInfo.DeploymentGuid;
    IdentityApplications = ($PublicInfoRoles.PublicInfo | ? IdentityApplications -ne $null | Select -ExpandProperty IdentityApplications | Select -ExpandProperty IdentityApplication | Select Name, ResourceId);
    VIPs = ($PublicInfoRoles.PublicInfo | ? Vips -ne $null | Select -ExpandProperty Vips | Select -ExpandProperty Vip);
}
$DeploymentDeets.DeploymentGuid
[/powershell]

Plug all the details into this connection script to access your stack instance. Well-commented code, credit to Chris Speers.

[powershell]
# Random per install
$EnvironmentID = 'xxxxxxxx-xxxx-4e03-aac2-6c2e2f0a517a'
# The DNS domain used for the install
$StackDomain = 'sv5.as01.poc.xxxxx.com'
# The AAD domain name (e.g. bobsdomain.onmicrosoft.com)
$AADDomainName = 'poc.xxxxx.com'
# The AAD tenant ID
$AADTenantID = 'poc.xxxxx.com'
# The username to be used
$AADUserName = 'stackadmin@poc.xxxxx.com'
# The password to be used
$AADPassword = 'SuperSecret!' | ConvertTo-SecureString -Force -AsPlainText
# The credential to be used. Alternatively could use Get-Credential
$AADCredential = New-Object PSCredential($AADUserName, $AADPassword)
# The AAD application resource URI
$ApiAADResourceID = "https://api.$StackDomain/$EnvironmentID"
# The ARM endpoint
$StackARMUri = "https://api.$StackDomain/"
# The gallery endpoint
$StackGalleryUri = "https://portal.$($StackDomain):30016/"
# The OAuth redirect URI
$AadAuthUri = "https://login.windows.net/$AADTenantID/"
# The MS Graph API endpoint
$GraphApiEndpoint = "graph.$($StackDomain)"

$ResourceManager = "https://api.$($StackDomain)/$($EnvironmentID)"
$Portal = "https://portal.$($StackDomain)/$($EnvironmentID)"
$PublicPortal = "https://publicportal.$($StackDomain)/$($EnvironmentID)"
$Policy = "https://policy.$($StackDomain)/$($EnvironmentID)"
$Monitoring = "https://monitoring.$($StackDomain)/$($EnvironmentID)"

# Add the Azure Stack environment
Get-AzureRmEnvironment -Name 'Azure Stack AS01' | Remove-AzureRmEnvironment
Add-AzureRmEnvironment -Name "Azure Stack AS01" `
    -ActiveDirectoryEndpoint $AadAuthUri `
    -ActiveDirectoryServiceEndpointResourceId $ApiAADResourceID `
    -ResourceManagerEndpoint $StackARMUri `
    -GalleryEndpoint $StackGalleryUri `
    -GraphEndpoint $GraphApiEndpoint

# Add the environment to the context using the credential
$env = Get-AzureRmEnvironment -Name 'Azure Stack AS01'
Add-AzureRmAccount -Environment $env -Credential $AADCredential -Verbose
Login-AzureRmAccount -EnvironmentName 'Azure Stack AS01'

Get-AzureRmContext
Write-Output "ResourceManager"
Write-Output $ResourceManager
Write-Output "`nPortal"
Write-Output $Portal
Write-Output "`nPublicPortal"
Write-Output $PublicPortal
Write-Output "`nPolicy"
Write-Output $Policy
Write-Output "`nMonitoring"
Write-Output $Monitoring
[/powershell]

Returning something like this.

Thanks for reading. Hopefully this helped you in some way.


ExpressRoute Migration from ASM to ARM and legacy ASM Virtual Networks


I recently ran into an issue where an ExpressRoute had been migrated from Classic (ASM) to the new portal (ARM), while legacy Classic Virtual Networks (VNets) were still in operation. These VNets refused to be deleted by either portal or PowerShell. Disconnecting the old VNet's gateway through the Classic portal would report success, but it would stay connected.

There’s no option to disconnect an ASM gateway in the ARM portal, only a delete option. Gave this a shot and predictably, this was the result:

C:\Users\will.van.allen\AppData\Local\Microsoft\Windows\INetCache\Content.Word\FailedDeleteGW.PNG

Ok, let’s go to PowerShell and look for that obstinate link. Running Get-AzureDedicatedCircuitLink resulted in the following error:

PS C:\> Get-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

Get-AzureDedicatedCircuitLink : InternalError: The server encountered an internal error. Please retry the request.
At line:1 char:1
+ Get-AzureDedicatedCircuitLink -ServiceKey xxxxxx-xxxx-xxxx-xxxx-xxx...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Get-AzureDedicatedCircuitLink], CloudException
    + FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ExpressRoute.GetAzureDedicatedCircuitLinkCommand

I couldn’t even find the link. Not only was modifying the circuit an issue, but reads were failing, too.

This turned out to be a simple setting change. When the ExpressRoute was migrated while Classic VNets were still present, a final step of enabling the circuit for both deployment models was needed. Take a look at the culprit setting here, in the output of Get-AzureRMExpressRouteCircuit:

"serviceProviderProperties": {
    "serviceProviderName": "equinix",
    "peeringLocation": "Silicon Valley",
    "bandwidthInMbps": 1000
},
"circuitProvisioningState": "Disabled",
"allowClassicOperations": false,
"gatewayManagerEtag": "",
"serviceKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"serviceProviderProvisioningState": "Provisioned"

AllowClassicOperations set to "false" blocks ASM operations from any access, including a simple "get" on the ExpressRoute circuit. Granting access is straightforward:

# Get details of the ExpressRoute circuit

$ckt = Get-AzureRmExpressRouteCircuit -Name "DemoCkt" -ResourceGroupName "DemoRG"

# Set "Allow Classic Operations" to TRUE

$ckt.AllowClassicOperations = $true

# Persist the change back to the circuit

Set-AzureRmExpressRouteCircuit -ExpressRouteCircuit $ckt

More info on this here.

But we still weren’t finished. I could now get a successful response from this:

get-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

However this still failed:

Remove-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

So reads worked, but not writes. Ah, I remembered the ARM portal lock feature, and sure enough, a Read-Only lock on the Resource Group was being inherited by the ExpressRoute (more about those here). Once the lock was removed, voila, I could remove the stubborn VNets no problem.
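If you would rather find and remove the lock with PowerShell than in the portal, something like this should work (the resource group name is assumed for illustration):

# Find locks on the Resource Group holding the ExpressRoute circuit

$lock = Get-AzureRmResourceLock -ResourceGroupName "DemoRG"

# Remove the Read-Only lock by its ID

Remove-AzureRmResourceLock -LockId $lock.LockId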

# Remove the Circuit Link for the Vnet

Remove-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

# Disconnect the gateway

Set-AzureVNetGateway -Disconnect -VNetName $Vnet -LocalNetworkSiteName <LocalNetworksitename>

# Delete the gateway

Remove-AzureVNetGateway -VNetName $Vnet

There’s still no command to remove a single Vnet, you have to use the portal (either will work) or you can use PowerShell to edit the NetworkConfig.xml file, then import it.

Once our legacy VNets were cleaned up, I re-enabled the Read-Only lock on the ExpressRoute.

In summary, nothing was "broken", just an overlooked setting. I would recommend cleaning up your ASM/Classic VNets before migrating your ExpressRoute; it's much easier and cleaner. But if you must leave some legacy virtual networks in place, remember to set the ExpressRoute "allowClassicOperations" setting to "true" after the migration is complete.

And don’t forget those pesky ARM Resource Group locks.