Azure

Moving VHDs from one Storage Account to Another (Part 2) - Updated 2017 08 18

redundancy_banner.jpg

This article shows how to automatically copy VHDs from a source Storage Account to a new one without hardcoding values, and then how to create a new VM that uses the disks in the new Storage Account while reusing the settings of the original VM. The first step is to create a PowerShell module file that holds all the functions invoked by the main script.

Ideally, this module could be reused for other purposes and new functions should be added according to your needs.

Open your preferred PowerShell editor and create a new file called "Module-Azure.ps1"

Note: all functions will be declared as global so that they are available to other scripts

The first function to be added is called Connect-Azure and it will simplify Azure connection activities.

[powershell]
function global:Connect-Azure {
    Login-AzureRmAccount

    $global:subName = (Get-AzureRmSubscription | select SubscriptionName | Out-GridView -Title "Select a subscription" -OutputMode Single).SubscriptionName

    Select-AzureRmSubscription -SubscriptionName $subName
}
[/powershell]

The function above uses the Out-GridView cmdlet to show all Azure subscriptions associated with your account and lets you select the one against which the script will run.

The second function to be added is called CopyVHDs. It takes care of copying all VHDs from the selected source Storage Account to the selected destination Storage Account.

[powershell]
function global:CopyVHDs {
    param (
        $sourceSAItem,
        $destinationSAItem
    )

    $sourceSA = Get-AzureRmStorageAccount -ResourceGroupName $sourceSAItem.ResourceGroupName -Name $sourceSAItem.StorageAccountName

    $sourceSAContainerName = "vhds"

    $sourceSAKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $sourceSAItem.ResourceGroupName -Name $sourceSAItem.StorageAccountName)[0].Value

    $sourceSAContext = New-AzureStorageContext -StorageAccountName $sourceSAItem.StorageAccountName -StorageAccountKey $sourceSAKey

    $blobItems = Get-AzureStorageBlob -Context $sourceSAContext -Container $sourceSAContainerName

    $destinationSAKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $destinationSAItem.ResourceGroupName -Name $destinationSAItem.StorageAccountName)[0].Value

    $destinationContainerName = "vhds"

    $destinationSAContext = New-AzureStorageContext -StorageAccountName $destinationSAItem.StorageAccountName -StorageAccountKey $destinationSAKey

    foreach ( $blobItem in $blobItems) {

        # Copy the blob
        Write-Host "Copying " $blobItem.Name " from " $sourceSAItem.StorageAccountName " to " $destinationSAItem.StorageAccountName

        $blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainerName -DestContext $destinationSAContext -SrcBlob $blobItem.Name -Context $sourceSAContext -SrcContainer $sourceSAContainerName

        $blobCopyStatus = Get-AzureStorageBlob -Blob $blobItem.Name -Container $destinationContainerName -Context $destinationSAContext | Get-AzureStorageBlobCopyState

        [int] $i = 0

        while ( $blobCopyStatus.Status -ne "Success") {
            Start-Sleep -Seconds 180

            $i = $i + 1

            $blobCopyStatus = Get-AzureStorageBlob -Blob $blobItem.Name -Container $destinationContainerName -Context $destinationSAContext | Get-AzureStorageBlobCopyState

            Write-Host "Blob copy status is " $blobCopyStatus.Status
            Write-Host "Bytes Copied: " $blobCopyStatus.BytesCopied
            Write-Host "Total Bytes: " $blobCopyStatus.TotalBytes

            Write-Host "Cycle Number $i"
        }

        Write-Host "Blob " $blobItem.Name " copied"
    }

    return $true
}
[/powershell]

 

This function basically executes the same commands that were shown in the first article. The difference, of course, is that it takes as input two objects which contain the information required to copy VHDs between the two Storage Accounts. A couple of notes:

  • Because the number of VHDs to copy is not known in advance, a foreach loop iterates over all the VHDs to be copied
  • To minimize side effects, the aforementioned foreach contains a while loop that ensures the copy activity has really completed before returning control

The third function to be added is called Create-AzureVMFromVHDs. It takes care of creating a new VM using the existing VHDs. To provide a PoC of what can be achieved, the following assumptions have been made:

  • The new VM will be deployed in an existing VNET / subnet
  • The new VM will have the same size as the original VM
  • The new VM will be deployed in a new Resource Group
  • The new VM will be deployed in the same location as the (destination) Storage Account to which the VHDs have been copied
  • The new VM will have the same credentials as the source one
  • The new VM will be assigned a new dynamic public IP
  • All VHDs copied from the source Storage Account (which were attached to the source VM) will be attached to the new VM

[powershell]
function global:Create-AzureVMFromVHDs {
    param (
        $destinationVNETItem,
        $destinationSubnetItem,
        $destinationSAItem,
        $sourceVMItem
    )

    # Note: $sourceSAItem is not passed as a parameter; it is resolved from the caller's scope (set in Move-VM.ps1)

    $destinationSA = Get-AzureRmStorageAccount -Name $destinationSAItem.StorageAccountName -ResourceGroupName $destinationSAItem.ResourceGroupName

    $Location = $destinationSA.PrimaryLocation

    $destinationVMItem = '' | select name,ResourceGroupName

    $destinationVMItem.name = ($sourceVMItem.Name + "02").ToLower()

    $destinationVMItem.ResourceGroupName = ($sourceVMItem.ResourceGroupName + "02").ToLower()

    $InterfaceName = $destinationVMItem.name + "-nic"

    $destinationResourceGroup = New-AzureRmResourceGroup -Location $Location -Name $destinationVMItem.ResourceGroupName

    $sourceVM = Get-AzureRmVM -Name $sourceVMItem.Name -ResourceGroupName $sourceVMItem.ResourceGroupName

    $VMSize = $sourceVM.HardwareProfile.VmSize

    $sourceVHDs = $sourceVM.StorageProfile.DataDisks

    $OSDiskName = $sourceVM.StorageProfile.OsDisk.Name

    $publicIPName = $destinationVMItem.name + "-pip"

    $sourceVMOSDiskUri = $sourceVM.StorageProfile.OsDisk.Vhd.Uri

    $OSDiskUri = $sourceVMOSDiskUri.Replace($sourceSAItem.StorageAccountName,$destinationSAItem.StorageAccountName)

    # Network script
    $VNet = Get-AzureRMVirtualNetwork -Name $destinationVNETItem.Name -ResourceGroupName $destinationVNETItem.ResourceGroupName
    $Subnet = Get-AzureRMVirtualNetworkSubnetConfig -Name $destinationSubnetItem.Name -VirtualNetwork $VNet

    # Public IP script
    $publicIP = New-AzureRmPublicIpAddress -Name $publicIPName -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $Location -AllocationMethod Dynamic

    # Create the interface
    $Interface = New-AzureRMNetworkInterface -Name $InterfaceName -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $Location -SubnetId $Subnet.Id -PublicIpAddressId $publicIP.Id

    # Compute script
    $VirtualMachine = New-AzureRMVMConfig -VMName $destinationVMItem.name -VMSize $VMSize

    $VirtualMachine = Add-AzureRMVMNetworkInterface -VM $VirtualMachine -Id $Interface.Id
    $VirtualMachine = Set-AzureRMVMOSDisk -VM $VirtualMachine -Name $OSDiskName -VhdUri $OSDiskUri -CreateOption Attach -Windows

    $VirtualMachine = Set-AzureRmVMBootDiagnostics -VM $VirtualMachine -Disable

    # Adding data disks
    if ( $sourceVHDs.Length -gt 0) {
        Write-Host "Found Data disks"

        foreach ($sourceVHD in $sourceVHDs) {
            $destinationDataDiskUri = ($sourceVHD.Vhd.Uri).Replace($sourceSAItem.StorageAccountName,$destinationSAItem.StorageAccountName)

            $VirtualMachine = Add-AzureRmVMDataDisk -VM $VirtualMachine -Name $sourceVHD.Name -VhdUri $destinationDataDiskUri -Lun $sourceVHD.Lun -Caching $sourceVHD.Caching -CreateOption Attach
        }
    } else {
        Write-Host "No Data disk found"
    }

    # Create the VM in Azure
    New-AzureRMVM -ResourceGroupName $destinationVMItem.ResourceGroupName -Location $Location -VM $VirtualMachine

    Write-Host "VM created. Well Done !!"
}
[/powershell]

A couple of notes:

  • The URIs of the VHDs copied to the destination Storage Account are calculated by replacing the source Storage Account name with the destination Storage Account name in the original URIs (see the example below)
  • The destination VHDs will be attached in the same order (LUN) as the source VHDs
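To illustrate the first point, here is a minimal example of the URI rewrite with hypothetical storage account names; only the account name changes, while the container and blob names stay the same:

[powershell]
# Hypothetical URIs, for illustration only
$sourceVMOSDiskUri = "https://sourcesa01.blob.core.windows.net/vhds/myvm-osdisk.vhd"

# Same Replace() logic used inside Create-AzureVMFromVHDs
$OSDiskUri = $sourceVMOSDiskUri.Replace("sourcesa01", "destinationsa01")

# Result: https://destinationsa01.blob.core.windows.net/vhds/myvm-osdisk.vhd
$OSDiskUri
[/powershell]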

Module-Azure.ps1 should have a structure like this:

Now it's time to create another file called Move-VM.ps1, which should be stored in the same folder as Module-Azure.ps1.

Note: if you want to store it in a different folder, update line 7 accordingly

Paste the following code:

[powershell] $ScriptDir = $PSScriptRoot

Write-Host "Current script directory is $ScriptDir"

Set-Location -Path $ScriptDir

.\Module-Azure.ps1

Connect-Azure

$vmItem = Get-AzureRmVM | select ResourceGroupName,Name | Out-GridView -Title "Select VM" -OutputMode Single

$sourceSAItem = Get-AzureRmStorageAccount | select StorageAccountName,ResourceGroupName | Out-GridView -Title "Select Source Storage Account" -OutputMode Single

$destinationSAItem = Get-AzureRmStorageAccount | select StorageAccountName,ResourceGroupName | Out-GridView -Title "Select Destination Storage Account" -OutputMode Single

# Stop VM

Write-Host "Stopping VM " $vmItem.Name

get-azurermvm -name $vmItem.Name -ResourceGroupName $vmItem.ResourceGroupName | stop-azurermvm

Write-Host "Stopped VM " $vmItem.Name

CopyVHDs -sourceSAItem $sourceSAItem -destinationSAItem $destinationSAItem

$destinationVNETItem = Get-AzureRmVirtualNetwork | select Name,ResourceGroupName | Out-GridView -Title "Select Destination VNET" -OutputMode Single

$destinationVNET = Get-AzureRmVirtualNetwork -Name $destinationVNETItem.Name -ResourceGroupName $destinationVNETItem.ResourceGroupName

$destinationSubnetItem = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $destinationVNET | select Name,AddressPrefix | Out-GridView -Title "Select Destination Subnet" -OutputMode Single

Create-AzureVMFromVHDs -destinationVNETItem $destinationVNETItem -destinationSubnetItem $destinationSubnetItem -destinationSAItem $destinationSAItem -sourceVMItem $vmItem

[/powershell]

Comments:

  • Line 7: the Module-Azure.ps1 script is invoked, which loads the global functions
  • Line 9: the Connect-Azure function (declared in Module-Azure) is invoked. This is possible because it has been declared as global
  • From Line 11 to Line 15: a subset of the source VM, source Storage Account and destination Storage Account info is retrieved. It will be used later
  • Lines 19-23: the source VM is stopped
  • Line 25: the CopyVHDs function (declared in Module-Azure) is invoked. This is possible because it has been declared as global. Note that we're just passing the two previously retrieved Storage Account parameters
  • From Line 27 to Line 31: the VNET and subnet to which the new VM will be attached are retrieved
  • Line 33: the Create-AzureVMFromVHDs function (declared in Module-Azure) is invoked. This is possible because it has been declared as global. Note that we're just passing the previously retrieved parameters

The following screenshots show an execution of the Move-VM script:

Select Azure subscription

Select source VM

Select source Storage Account

Select Destination Storage Account

Confirm to stop VM

Select destination VNET

Select destination Subnet

Output sample #1

Output sample #2

Source VM Resource Group

Destination VM RG

Destination Storage Account RG

Source VHDs

Destination VHDs

Thanks for your patience.  Any feedback is  appreciated

Note: the above script has been tested with Azure PS 3.7.0 (March 2017).

Starting from Azure PS 4.x, the Get-AzureRmSubscription cmdlet returns an array of objects with the following properties: Name, Id, TenantId and State.

The Connect-Azure function uses the SubscriptionName property, which is no longer available. This is why some people saw an empty window.

The Connect-Azure function should be modified as follows to work with Azure PS 4.x:

[powershell]
function global:Connect-Azure {
    Login-AzureRmAccount

    $global:subName = (Get-AzureRmSubscription | select Name | Out-GridView -Title "Select a subscription" -OutputMode Single).Name

    Select-AzureRmSubscription -SubscriptionName $subName
}
[/powershell]

ExpressRoute Migration from ASM to ARM and legacy ASM Virtual Networks

word-image9.png

I recently ran into an issue where an ExpressRoute had been migrated from Classic (ASM) to the new portal (ARM), however legacy Classic Virtual Networks (VNets) were still in operation. These VNets refused to be deleted through either portal or via PowerShell. Disconnecting the old VNet's Gateway through the Classic portal would show success, but it would stay connected.

There’s no option to disconnect an ASM gateway in the ARM portal, only a delete option. Gave this a shot and predictably, this was the result:

FailedDeleteGW.PNG

Ok, let’s go to PowerShell and look for that obstinate link. Running Get-AzureDedicatedCircuitLink resulted in the following error:

PS C:\> get-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

get-AzureDedicatedCircuitLink : InternalError: The server encountered an internal error. Please retry the request.

At line:1 char:1

+ get-AzureDedicatedCircuitLink -ServiceKey xxxxxx-xxxx-xxxx-xxxx-xxx...

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo          : CloseError: (:) [Get-AzureDedicatedCircuitLink], CloudException

+ FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ExpressRoute.GetAzureDedicatedCircuitLinkCommand

I couldn’t even find the link. Not only was modifying the circuit an issue, but reads were failing, too.

Turned out to be a simple setting change. When the ExpressRoute was migrated, as there were still Classic VNets, a final step of enabling the circuit for both deployment models was needed. Take a look at the culprit setting here, after running Get-AzureRMExpressRouteCircuit:

"serviceProviderProperties": {

"serviceProviderName": "equinix",

"peeringLocation": "Silicon Valley",

"bandwidthInMbps": 1000

},

"circuitProvisioningState": "Disabled",

"allowClassicOperations": false,

"gatewayManagerEtag": "",

"serviceKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",

"serviceProviderProvisioningState": "Provisioned"

AllowClassicOperations set to “false” blocks ASM operations from any access, including a simple “get” from the ExpressRoute circuit. Granting access is straightforward:

# Get details of the ExpressRoute circuit

$ckt = Get-AzureRmExpressRouteCircuit -Name "DemoCkt" -ResourceGroupName "DemoRG"

# Set "Allow Classic Operations" to TRUE

$ckt.AllowClassicOperations = $true

# Persist the change back to the circuit so it takes effect

Set-AzureRmExpressRouteCircuit -ExpressRouteCircuit $ckt

More info on this here.

But we still weren’t finished. I could now get a successful response from this:

get-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

However this still failed:

Remove-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

So reads worked, but modifications still failed. Ah, I remembered the ARM portal lock feature, and sure enough, a Read-Only lock on the Resource Group had been inherited by the ExpressRoute (more about those here). Once the lock was removed, voila, I could remove the stubborn VNets no problem.
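If you prefer to handle the lock from PowerShell instead of the portal, something along these lines should work (a minimal sketch; the resource group and lock names are hypothetical):

# Find the Read-Only lock on the resource group that holds the ExpressRoute circuit

$lock = Get-AzureRmResourceLock -ResourceGroupName "DemoRG" | Where-Object { $_.Properties.level -eq "ReadOnly" }

# Remove it so the ASM circuit links can be modified

Remove-AzureRmResourceLock -LockId $lock.LockId -Force

# ...clean up the legacy VNets (below)...

# Re-create the lock once the cleanup is done

New-AzureRmResourceLock -LockName "ReadOnlyLock" -LockLevel ReadOnly -ResourceGroupName "DemoRG" -Force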

# Remove the Circuit Link for the Vnet

Remove-AzureDedicatedCircuitLink -ServiceKey $ServiceKey -VNetName $Vnet

# Disconnect the gateway

Set-AzureVNetGateway -Disconnect -VNetName $Vnet -LocalNetworkSiteName <LocalNetworkSiteName>

# Delete the gateway

Remove-AzureVNetGateway -VNetName $Vnet

There's still no command to remove a single VNet; you have to use the portal (either one will work), or you can use PowerShell to edit the NetworkConfig.xml file and then import it (a sketch of that approach follows).
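For reference, a minimal sketch of that export/edit/import approach with the classic (ASM) cmdlets could look like this; the file path is hypothetical and the XML edit assumes the standard NetworkConfiguration schema:

# Export the current classic network configuration

$configPath = "C:\temp\NetworkConfig.xml"   # hypothetical path

Get-AzureVNetConfig -ExportToFile $configPath

# Remove the <VirtualNetworkSite> element for the VNet to delete

[xml]$netCfg = Get-Content $configPath

$site = $netCfg.NetworkConfiguration.VirtualNetworkConfiguration.VirtualNetworkSites.VirtualNetworkSite | Where-Object { $_.name -eq $Vnet }

$site.ParentNode.RemoveChild($site) | Out-Null

$netCfg.Save($configPath)

# Import the edited configuration back into the subscription

Set-AzureVNetConfig -ConfigurationPath $configPath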

Once our legacy VNets were cleaned up, I re-enabled the Read-Only lock on the ExpressRoute.

In summary, nothing was "broken", just an overlooked setting. I would recommend cleaning up your ASM/Classic VNets before migrating your ExpressRoute; it's so much easier and cleaner. But if you must leave some legacy virtual networks in place, remember to set the ExpressRoute "allowClassicOperations" setting to "true" after the migration is complete.

And don’t forget those pesky ARM Resource Group locks.

"Sitecore on Azure PaaS" - Geo-redundant Azure PaaS based Sitecore Reference Architecture

This blog gives an overview of the "Sitecore on Azure PaaS" reference architecture and how it can be built on the complete stack of Azure PaaS-based services with geo-redundancy.

Sitecore Hosting Model

As we can see from the picture below, Sitecore can be hosted on-premises, in IaaS, PaaS and SaaS. From Sitecore version 8.2 Update-1 onwards, the Sitecore Experience Platform supports the Microsoft Azure App Service. This means that you are now able to deploy scalable Sitecore solutions on the modern Microsoft Azure PaaS infrastructure.

We will be covering the PaaS Hosting Model (shown as 3rd pillar below) in this blog.

Implementation guidance

  1. Web Apps - An App Service Web App runs in a single region, accessible to web and mobile browsers. A content management system like Sitecore provides the tooling to manage and deploy content to the website.
  2. SQL Database - A SQL Database stores and serves data about the site.
  3. Application Insights - Application Insights provides health and performance monitoring, and diagnostics.
  4. Content Delivery Network - A content delivery network serves static content such as images, script, and CSS, and reduces the load on the web app servers.
  5. Redis Cache - Redis Cache enables very fast queries, and improves scalability by reducing the load on the main database.
  6. Traffic Manager - Geo-routes incoming traffic to your app for better performance and availability.
  7. Azure Search - Cloud search service for web and mobile development.

Publishing guidance

  1. Content Management Database (CM) - This is a centralized DB to which content from all regions is posted. The content is then pushed to the master CD server.
  2. Content Delivery Database (CD) - This serves up the content for all regions. The master lies in region 1 while the slaves lie in the other two regions. The content is replicated from master to slaves using SQL Active Geo-Replication (see the sketch after this list). This database will also be indexed by Azure Search.
  3. Content Management Web Site - Content can be published from any of the three regions, but it is always published through the centralized CM server, which resides in region 1.
  4. Content Delivery Web App - The Content Delivery web site is hosted in all three regions and serves up the content with low latency for all three user bases with the help of Traffic Manager.
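As a reference, enabling SQL Active Geo-Replication for the CD database can be scripted roughly as follows; the server, database and resource group names below are placeholders, not part of the reference architecture:

[powershell]
# Create a readable secondary of the CD database in another region
$primary = Get-AzureRmSqlDatabase -ResourceGroupName "sitecore-rg-region1" -ServerName "sitecore-sql-region1" -DatabaseName "SitecoreCD"

$primary | New-AzureRmSqlDatabaseSecondary -PartnerResourceGroupName "sitecore-rg-region2" -PartnerServerName "sitecore-sql-region2" -AllowConnections "All"
[/powershell]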

Fail-over guidance

Azure Traffic Manager is a key component of the fail-over.

  1. Create and publish a REST API endpoint which returns a 200 OK response code when the region is healthy. The API can be programmed to check the state of the CM Web App, CD Web App, and SQL, and to return a response other than 200 if any of them is unhealthy.
  2. The API endpoint is registered with Traffic Manager as the health probe, together with a TTL. Traffic Manager will redirect traffic to region 1 or 3 if region 2 is not healthy. This is called a full-stack fail-over: if any component of a region is down, web traffic is diverted to another region (see the sketch below).
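For illustration, registering such a health endpoint with Traffic Manager could look roughly like this in PowerShell; the profile name, DNS name, resource group and the /api/health path are assumptions, not part of the reference architecture:

[powershell]
# Create a Traffic Manager profile that probes the health API in each region
New-AzureRmTrafficManagerProfile -Name "sitecore-tm" -ResourceGroupName "sitecore-global-rg" -TrafficRoutingMethod Performance -RelativeDnsName "sitecore-demo" -Ttl 30 -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/api/health"

# Register the CD Web App of one region as an endpoint; repeat for the other two regions
# ($cdWebAppRegion1 is assumed to hold the Web App object, e.g. from Get-AzureRmWebApp)
New-AzureRmTrafficManagerEndpoint -Name "cd-region1" -ProfileName "sitecore-tm" -ResourceGroupName "sitecore-global-rg" -Type AzureEndpoints -TargetResourceId $cdWebAppRegion1.Id -EndpointStatus Enabled
[/powershell]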
Hope this gives a high-level overview of what a logical architecture might look like if you are planning to deploy Sitecore using Azure PaaS-based services.

Real-World DevOps with Octopus, Part 1

octopus.png

So, like me, you're thinking of dipping your toes in the new DevOps revolution. You've picked an app to start with and spun up an Octopus server. Now what? There are plenty of tutorials about Octopus Deploy that show how to use all of Octopus's features and also how to integrate with TFS Build, but I have yet to find a good tutorial that shows best practices for a real-world setup of an Octopus project. If you have an application that consists of anything more complicated than an Azure WebApp, you'll need to think a little harder about a consistent strategy for managing and configuring your deployment pipeline. My hope is that this can be one of those guides. As a disclaimer, I am not a DevOps or Octopus expert. I have, however, slogged through the bowels of Octopus trying to get two medium-complexity applications continuously deployed using a Visual Studio Online build and Octopus Deploy. My first foray, though functional, was a disaster to configure and maintain. But I learned a lot in the process. While configuring the second application, I applied what I previously learned and I am much happier with the result.

This first part of the series will lay some foundational guidance around configuring a deployment project. It may not be groundbreaking, but it is an important step for the future installments. So, without further ado, on with the show…

The Application

My application is hosted completely in Azure and my deployment, obviously, is very Azure centric. Having said that, it should be trivial to adapt some of this guidance for on-premise or other cloud providers.

My application consists of:

  • SQL Server with multiple databases
  • Key Vault
  • Service Bus
  • Azure Service Fabric containing multiple stateless micro-services and WebAPIs
  • Azure WebApp front end

The Service Fabric micro-services are the heart of the system and they communicate with each other via Service Bus queues.

The WebApp is the front-end portal to the system. It talks to some of the micro-services using their WebAPI endpoints. In hindsight, it would have been easier to host the website as an ASP.NET Core site in the fabric cluster, but unfortunately Core wasn't fully baked yet when we started this project. So, alas, we live with the extra complexity.

Variables

The variable system in Octopus is extremely powerful. The capabilities of variable expansion continue to surprise me. Just when I think I'm going to break it using a hare-brained scheme, it effortlessly carries on, bending to my will. Good job, Octopus team! But, as my Uncle Ben always says, "with great power comes great responsibility" (sorry).

I'm going to assume you already have a cursory understanding of the variable system in Octopus. If not, please read their great documentation and then come back. All set? Good.

Variable Sets

The first hard lesson I learned was to use variable sets right from the beginning. It is tempting to shove all of your variables in the project itself, and that’s exactly how I started. This is probably fine at first, though hard to manage when your variable count grows large. But, you will soon come to a point where one of two things will happen:

  1. Your variable count grows so large that it’s hard to maintain and conceptualize.
  2. You want to split your project in half or add a new project, and you want to share the variables between the related projects.

Personally, I hit the latter. "Well," I thought, "I'll just move all my variables into a variable set that I can share between my projects." Not so fast, mon frère! You see, there is no UI feature that allows you to move a variable from a project to a variable set, nor from a variable set to another variable set. So, you're stuck with recreating all of your variables by hand, or using the Octopus REST APIs to copy from one to the other. The latter works fine, until you hit sensitive variables. You cannot retrieve sensitive variable values using the UI or the REST API, so you're stuck with entering them again from the sticky note on your monitor (shame on you!). This is why deciding on a variable set scheme is crucial right up front.

Ok, so we’re all agreed that you should create variable sets right away. But, you ask, should I create just one big one? Well, if you just create one variable set, you’ve solved issue #2, but not #1. Your variable set can still get pretty long and while Octopus does sort the variables by name, it can still be difficult to find the variable you want when the page seems to scroll indefinitely. So, I recommend creating a set of variable sets. While it is a bit more work to set everything up just right, trust me when I say, you will thank me later.

You can use any segregation scheme you wish, but I used these criteria for my variable sets:

  1. Resource Level - These variable sets contain infrastructure-level variables that have no concept of the applications that run on top of them. For instance, a SQL Server variable set may contain the name of the SQL Server instance, the admin login and password, but not any of the application-level database information (especially in my case, where each micro-service uses an isolated database). Another example would be an Active Directory set that contains common things like your TenantId, Tenant Name, etc., but not any AAD application variables that you may want to create. The idea here is that, like all infrastructure, you should be able to configure it once and never change it again.
  2. Application Level - These variable sets contain variables that pertain to a logical application, service or component. You may have only one of these, or multiple, depending on your solution. This is where all the magic happens and where you will spend most of your time tweaking as your application changes. Things like app.config settings, AAD Applications, database names and connection strings, etc. live in these sets. You may have variables in these sets that pertain to several different resource types, but that's OK. The point is to group all of the variables pertaining to Component A into a single variable set so you know exactly where to go to change them.
  3. Project Level - Granted, variables in the project itself are not technically a variable set, but it is useful to think of them as such. These variables should be kept to an absolute minimum since they cannot be shared by other projects. They should contain only any overrides that you may need, or wrappers around step output variables (more on this in a future post).

Now that you have a handful of variable sets, it's important to name them appropriately. I used the scheme <ProjectGroup>.<Resource|Component>. Being a C# guy, I like periods instead of spaces, but that may just be me. At the end of the day it doesn't really matter, since to Octopus, set and variable names are just strings. The <ProjectGroup> part is optional if you only have one solution running on your Octopus server, but is crucial as soon as you want to onboard a completely unrelated solution and want to keep any semblance of sanity.

In the end, the naming and segregation scheme is completely up to you. The most important thing is that you decide on a scheme and stick to it. It takes much more effort to adapt to a scheme later than to do it up front.

One last convention that I tried to follow with variable sets is to keep the environment scoping of variables to a minimum within variable sets. This seems like it wouldn't be a problem, and may not be for your situation, but if you wind up with multiple Octopus projects with different lifecycles sharing a variable set, it can become problematic. For example, if you are naming your websites differently in each environment (say with a -DEV suffix or something), the answer is NOT to create scoped versions of the website name in the set. The answer IS to use expansion (see further down for this). Anything that must be scoped to environments should either utilize clever expansions or be put in the project-level variables. The only exception I make to this rule is for sensitive data that needs to be shared with multiple Octopus projects and must be different for each environment. The SQL admin password is a good example of this. In that case, it is beneficial to store it as a scoped variable in the variable set, but you must remember this if you ever change the lifecycle of a project or add a new project with a different lifecycle.

Variable Names

Like variable sets, variables should follow a strict naming scheme. To optimize for the UI sorting, I picked <Resource>.<OptionalSubResource>.<Name>. This helps keep related variables together when viewing the UI. As an example, this is roughly what my variable sets look like for my SQL related variables:

  • MyProjectGroup.Database variable set (screenshot: Variable Set for MyProjectGroup.Database)
  • MyProjectGroup.MyApplication variable set (screenshot: Variable Set for MyProjectGroup.MyApplication)

Variable Expansion

Variable expansion is one of the features that sets Octopus apart from, say, Visual Studio Online Release Management. In VSO you can do most anything else, but the VSO variable system is absolutely dwarfed by Octopus's. I'll assume you understand the basics of variable expansion and dive right into my usage of it. My goal was to strike a good balance between adhering to the DRY no-duplication principle and having enough extension points in the variable system to change things without having to do large overhauls. To that end, I wind up having a decent number of variables that simply reference another variable. But defining them up front means that I just need to change the variable value rather than creating a new variable and updating all the places in my code/deployment scripts that use the old variable. Make enough changes in your variables and you'll begin to see how useful this is.

The typical way to use variable expansions is to build things like connection strings with them. For example, if you have a database connection string, you could build the connection string by hand, stick it in a single variable and mark the whole thing as sensitive (since it has a password). But now you're stuck if the server or database name changes. Instead, something like this:

  • SQL.Name = MyDatabaseServer
  • SQL.Database.MyApplication.Name = MyApplication
  • SQL.Database.MyApplication.Password = ********
  • SQL.Database.MyApplication.Username = MyApplicationUser
  • SQL.Database.MyApplication.ConnectionString = Server=tcp:#{SQL.Name}.database.windows.net,1433; Initial Catalog=#{SQL.Database.MyApplication.Name}; Persist Security Info=False; User ID=#{SQL.Database.MyApplication.Username}; Password=#{SQL.Database.MyApplication.Password}; MultipleActiveResultSets=False; Encrypt=True; TrustServerCertificate=False; Connection Timeout=30;

The cool thing is that Octopus is smart enough to know that the password fragment is sensitive and will replace it with stars whenever the connection string's expanded value appears in the logs or deployment output. Score one for Octopus!

Another use for variable expansion is putting optional environment suffixes (like -DEV) on the names of resources. I'll get into this in Part 2, but the keen-eyed among you may have already spotted it in the screenshots.

Project Setup

Once you get your variable system up and running (I know it took a while), it’s time to create your project. Again, I’ll assume you know or have read about the basics, so I’ll only point out a few nuggets.

Don't forget to reference all of your many variable sets in your project. Also, if you add a new variable set in the future, don't forget to go into your project and add it there. I know it sounds silly to mention this, but trust me, you'll forget. Ask me how I know...

One of the questions that I had, and to some extent still have, is whether you should break apart your system into multiple projects or a single large project. I have yet to find a compelling argument either way, except to say that Octopus's guidance is to have a single project, and that the approach of multiple projects is only a holdover from previous versions that couldn't handle multiple steps in a project. While I somewhat agree with this, it is important to understand the tradeoffs of each approach. For the record, I have tried them both and I would tend towards a single project purely for simplicity purposes. I will make one caveat: if you're planning on deploying your infrastructure as part of your pipeline, consider separating that into its own project. I'll talk more about infrastructure deployment in Part 2.

Single Project

Single projects are generally much easier to manage and maintain. You have to think a little less about making sure your variables are in order and that all the components are on the same version. However, it does mean that you cannot easily rev your components individually. That's not strictly true, because you can set your project to skip steps where the package hasn't changed, but it does mean that you can't easily see at a glance which components have changed from version to version.

Having said that, this is still my preferred configuration since I find it much easier to maintain. Especially when you factor in certain common steps like gathering secret keys that can only be retrieved via powershell script (and thus are not part of your variable sets), such as storage account connection strings or AAD application IDs.

Multiple Projects

Having multiple projects does give you a clearer view of which versions of your components are where. It also allows you to move up or down with each component. While upgrading specific pieces of your application can be accomplished by careful management of your packages, it is still difficult to roll back a specific component while leaving the rest alone using a single project. You can accomplish this by creating a new release and customizing which packages to use, but man is that annoying!

The other downside of multiple projects is that it is difficult, if not impossible, to manage the timing of deployments. If Component B needs to be deployed only after Component A, there is no way to do it using multiple projects in an automated fashion. You would have to manually publish them in the right order and wait for any dependencies to finish before moving on to the next component. Since I'm looking for a Continuous Deployment-style pipeline, this is a deal-breaker for me.

In the end, I understand why there is no clear-cut guidance about which approach to use. It really depends on your application. If you have a simple application where all the components are meant to rev together, you should probably pick a single project. If any of your components are designed and expected to rev independently, or you need very fine-grained control over the releases you create, multiple projects might be the right fit.

Build

In my case, I used the VSO build system. All in all, building and packaging your solution is pretty straightforward. There are really only a few places where you have to make changes.

I’m using GitVersion to automatically increment the build number. I’m also having it apply the version number to my AssemblyInfo files so all of my assemblies match versions.

The next step is to extract the version number from GitVersion and put it in a build variable so it can be handed off to Octopus to use as the release version. This is convenient because GitVersion uses SemVer, which Octopus understands. So, all the releases created from my CI build are automatically understood by Octopus to be pre-release. Here is the PowerShell for that task:

[powershell]
$UtcDateTime = (Get-Date).ToUniversalTime()
$FormattedDateTime = (Get-Date -Date $UtcDateTime -Format "yyyyMMdd-HHmmss")
$CI_Version = "$env:GITVERSION_MAJORMINORPATCH-ci-$FormattedDateTime"
Write-Host "CI Version: $CI_Version"
Write-Host ("##vso[task.setvariable variable=CI_Version;]$CI_Version")
[/powershell]

I used OctoPack to package my projects into NuGet packages. To get OctoPack to package your solution, simply add “/p:RunOctoPack=true /p:OctoPackPackageVersion=$(CI_Version) /p:OctoPackPublishPackageToFileShare=$(build.artifactstagingdirectory)\deployment” to the MSBuild arguments of the Visual Studio Build task. Alternatively, you can run a standard Package NuGet task.

Last, but not least, you need to push your packages to your Octopus server. There is an Octopus extension for VSO that does this very nicely, and I recommend using it to communicate with the Octopus server. If you don't have your project set to automatically create a release when a new package is detected, you'll also need to add a create release task. In that task, I use the same CI_Version variable for the Release Number parameter.
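If you prefer to script this outside of the VSO extension, the Octopus CLI (octo.exe) can do the same job; here is a rough sketch, with placeholder server URL, API key and package name:

[powershell]
# Push the NuGet package produced by OctoPack to the Octopus built-in feed
& octo.exe push --package ".\deployment\MyApplication.Web.$env:CI_Version.nupkg" --server "https://my-octopus.example.com" --apiKey "API-XXXXXXXXXXXXXXXX"

# Optionally create the release for the same version
& octo.exe create-release --project "MyApplication" --version $env:CI_Version --server "https://my-octopus.example.com" --apiKey "API-XXXXXXXXXXXXXXXX"
[/powershell]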

The only question to ask when it comes to your build is the same question asked when creating your Octopus project: should you create one build or multiple? I would argue that the answer is probably the same as for the project setup. If you need or want granular control over the packages you create, you'll have to create multiple build definitions targeted at each component of your application. Unfortunately, VSO does not have any way to customize your build based on which files actually changed in your changesets, therefore a new package version of each component will be created for each build, even if nothing changed in it. For most projects this is acceptable. If it is not, the nearest I have come is to create multiple VSO build definitions, one for each component. In the build triggers tab, I added path filters for all of the projects that affect that component. Make sure that you include any dependencies that your component has. The downside of this is that it can be awfully brittle. You have to be careful to add new path filters for any new dependencies that are added to your projects. In the end, I found it not worth the hassle.

Wrap Up

Hopefully this gave you a good foundation for your Octopus deployment. There wasn't much that was juicy here, and much of it seems tedious and unnecessary, but I guarantee this up-front work will pay off greatly in the future as your application evolves, requiring your build and deploy to evolve with it.

In the rest of the series, I will dig a little deeper into the especially tricky components of my sample application and some of the strange and sometimes hacky things I had to do to get Octopus to play nice with them. This will include deploying my Azure infrastructure, creating/updating an Azure Active Directory application dynamically, and deploying a Service Fabric cluster.

Overview of ASR for multi-tier applications using SQL AlwaysOn

ASR-300x171.png

Lately I have been working with Azure Site Recovery. It provides some useful tools for orchestrating the failover of your on-premises physical servers or virtual machines to Azure or another secondary location, and then failing them back to their original location. ASR can also simplify your Disaster Recovery plan by using Azure as a secondary site instead of requiring a secondary datacenter. One thing that is not totally clear when first looking into ASR is how to handle multi-tier applications that use databases. A lot of the examples I looked at showed a whole application being added to an ASR recovery plan and failed over together. While this could work if you have an extremely simple application or you are just doing some testing, this usually isn't the recommended way to do it.

I am going to give an overview of how you would use ASR for disaster recovery for an application that leverages SQL Server AlwaysOn with a secondary site in Azure. A typical large, multi-tier application has a number of web and app tier servers and data stored in a database. This approach involves running services on dedicated servers, grouping these servers together into web, app and database tiers, and then scaling out these groups of servers as needed. Typically an application leveraging Microsoft technologies will also be using Active Directory for identity. Making this all work takes a bit more planning than just setting up a recovery plan in Azure Site Recovery.

ASRSQLON1

Identity:

The first thing to consider is whether you need Active Directory for your secondary site and, if so, how to set it up. Most likely you are using Active Directory to manage users, computers, service accounts, etc., and you will need these to keep working once the application has failed over. If this is a dev environment, or you only have one domain controller that could fail over with your application, then you can use ASR to fail over the domain controller along with your application. This would work, but it could cause disruptions or outages to your primary site if anything else relies on this domain controller. If this environment has a number of applications and is running an Active Directory forest, it is recommended to set up an additional domain controller for the secondary site in Azure.

Azure Requirements:

For any Azure Site Recovery installation you will need to set up a Recovery Services vault along with a storage account and an Azure virtual network. The storage account and VNet must be in the same region as the Recovery Services vault (see the sketch below).
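For reference, creating these prerequisites from PowerShell might look roughly like this; the resource names and region are placeholders:

[powershell]
# All three resources must live in the same region
$location = "East US"
$rgName   = "asr-demo-rg"

New-AzureRmResourceGroup -Name $rgName -Location $location

# Recovery Services vault
New-AzureRmRecoveryServicesVault -Name "asr-demo-vault" -ResourceGroupName $rgName -Location $location

# Storage account used for replication
New-AzureRmStorageAccount -ResourceGroupName $rgName -Name "asrdemostorage01" -SkuName Standard_LRS -Location $location

# Virtual network that failed-over VMs will be connected to
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.10.0.0/24"
New-AzureRmVirtualNetwork -Name "asr-demo-vnet" -ResourceGroupName $rgName -Location $location -AddressPrefix "10.10.0.0/16" -Subnet $subnet
[/powershell]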

Setup DR for SQL:

The next thing is to set up DR for your SQL Server instance. Site Recovery natively supports SQL AlwaysOn through the classic portal, but this support is not yet available through the new Azure portal. It allows you to select an availability group as the source and a new, separate virtual machine running SQL Server in Azure as the target.

First, you will need to set up a SQL AlwaysOn availability group in Azure. Then you will also need to set up another virtual machine in Azure running the same version of SQL Server. ASR will use this separate SQL VM as the replication target. You can then add your availability group to your recovery plan and select the SQL virtual machine, and that machine will be used as the replication target for your availability group. When failover happens, the availability group becomes primary on the virtual machine you set up as the target.

DNS:

For applications that are internet facing, it is recommended to use Traffic Manager to point to your public IP once you fail over, as below.

  • Public DNS - Source: public DNS name (e.g. mysite.mycompany.com); Target: Traffic Manager (mysite.trafficmanager.net)
  • On-premise DNS - Source: mysiteonprem.mycompany.com; Target: public IP of the on-premise site

For internal applications you can just change your DNS entries on failover to point to the secondary site, as below.

  • On-premise DNS - Source: internal URL (e.g. https://mysite.mycompany.com); Target: site name (e.g. https://webtiervmname)

 

Create Recovery Plan:

Finally, you would finish setting up your recovery plan by adding your web and app tiers to the plan. When adding these machines to your recovery plan, be sure to add them to the correct VNet; this VNet needs to be routable to your SQL tier. ASR has the concept of groups in the recovery plan, and each group fails over separately. Each tier of your application should be placed in a separate group, in the order in which you would like the tiers to come back up after failing over. For example, Group 1 would include SQL so that it comes up first, Group 2 would include your application tier, and Group 3 would come up last with your web tier.

 

This has been a quick overview of the things you need to consider when using Azure Site Recovery for Disaster Recovery with a SQL AlwaysOn application. For more details on how to set up Azure Site Recovery, see the following links.

 

Azure Site Recovery documentation

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-overview

 

Azure Site Recovery and SQL Server

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-sql