PowerShell

Generating HTML Reports in PowerShell - Part 1

HTMLReport.jpg

The problem

PowerShell is an amazing tool for gathering, collecting, slicing, grouping, filtering and collating data.  However, trying to show that information, or several sets of it, on one report is not as easy.  A few years ago we built our own solution: a set of HTML reporting functions.  I have been using these functions for years to help myself, my team and customers deliver PowerShell data to people who just need the details and not a CSV file or a code snippet.  We've now decided to make these available to the rest of you.

The solution

This is developed as a PowerShell module.  It is now available through the PowerShell Gallery and can be installed from PowerShell using Install-Module -Name ReportHTML.  Alternatively it can be downloaded from GitHub (ReportHTML) and deployed to an appropriate module directory, for example 'C:\Users\User\Documents\WindowsPowerShell\Modules\'.  It can be run ad-hoc or with your scheduled PowerShell report jobs.  There are several ways to build up a report. In this post we will build five example reports based around an array of Azure VMs.
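If you have PowerShellGet available, grabbing the module from the gallery is a one-liner. A minimal sketch (only the module name is taken from above):

[powershell]
# Install from the PowerShell Gallery (you may be prompted to trust the repository)
Install-Module -Name ReportHTML

# Load the module and list the reporting functions it exports
Import-Module ReportHTML
Get-Command -Module ReportHTML
[/powershell]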

The report at a glance

This screenshot shows a report with patching results grouped into two sections, successful and failed patches. ReportingSS1

This screenshot shows a report with multiple collapsible sections and an array of functions from within a file displayed in an open section. ReportingSS2

Here is a more complete report, created at the end of this series in More, Part 5, just to give you an idea of what is possible.  I provide the script and this sample in the link above.  Please note there is JavaScript to enable hiding of sections, which does generate a warning in your browser; I explain this later in this post. SystemReportScreenShot

This Blog Series

There are lots of different ways to leverage this module, including changing the logos, highlighting rows, creating different sections with code loops and much more.  This first article works through five examples to get you started.  Here are two versions of the same code detailed below, or you can create your own script with the code snippets as we work through the examples. You will need to save the file to a local folder or provide a report output path parameter. - A version without all the comments: Report-AzureVMsExamples_Part1 - A version with all the comments: Report-AzureVMsExamples_Part1WithComments

Reporting functions summary

There are a handful of functions that generate HTML code; you string this code together and then save the content as a file.  This code was originally borrowed from Alan Renouf's vSphere healthcheck report and adapted by Andrew Storrs and myself for a more dynamic reporting style, able to create reports on the fly with minimal effort. In addition, once built these reports can be scheduled to run, dropped on a file share or emailed.  I will outline the main functions and then build a report collecting information about virtual machines from an Azure subscription, walking through several examples of how to use the functions to generate different types of reports.

  • Get-HtmlOpen
  • Get-HtmlClose
  • Get-HtmlContentOpen
  • Get-HtmlContentClose
  • Get-HtmlContentTable
  • Get-HtmlContentText
  • Set-TableRowColor
  • New-HTMLPieChartObject
  • New-HTMLPieChart
  • New-HTMLBarChartObject
  • New-HTMLBarChart
  • Get-HTMLColumn1of2
  • Get-HTMLColumn2of2
  • Get-HTMLColumnClose

Let's get started

First let's create the header section. This contains some parameters (report output path and report name), loads the ReportHTML module, and checks for an Azure account, displaying a login prompt if one isn't present.

[powershell]
param ( $ReportOutputPath )

Import-Module ReportHtml
Get-Command -Module ReportHtml

$ReportName = "Azure VMs Report"

if (!$ReportOutputPath)
{
    $ReportOutputPath = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
}

# See if we already have a session; if we don't, display a login prompt
if (!$AzureRMAccount.Context.Tenant)
{
    $AzureRMAccount = Add-AzureRmAccount
}
[/powershell]

Building a recordset

We will need a recordset to work with.  I am going to take some code Barry Shilmover shared here and add resource group name as a property to build an array of VMs.

[powershell]
# Get array of VMs from ARM
$RMVMs = Get-AzureRmVM

$RMVMArray = @() ; $TotalVMs = $RMVMs.Count; $i = 1

# Loop through VMs
foreach ($vm in $RMVMs)
{
    # Tracking progress
    Write-Progress -PercentComplete ($i / $TotalVMs * 100) -Activity "Building VM array" -CurrentOperation ($vm.Name + " in resource group " + $vm.ResourceGroupName)

    # Get VM Status (for Power State)
    $vmStatus = Get-AzureRmVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status

    # Generate Array
    $RMVMArray += New-Object PSObject -Property @{
        # Collect Properties
        ResourceGroup = $vm.ResourceGroupName;
        Name          = $vm.Name;
        PowerState    = (Get-Culture).TextInfo.ToTitleCase(($vmStatus.Statuses)[1].Code.Split("/")[1]);
        Location      = $vm.Location;
        Tags          = $vm.Tags;
        Size          = $vm.HardwareProfile.VmSize;
        ImageSKU      = $vm.StorageProfile.ImageReference.Sku;
        OSType        = $vm.StorageProfile.OsDisk.OsType;
        OSDiskSizeGB  = $vm.StorageProfile.OsDisk.DiskSizeGB;
        DataDiskCount = $vm.StorageProfile.DataDisks.Count;
        DataDisks     = $vm.StorageProfile.DataDisks;
    }
    $i++
}
[/powershell]

Testing the report

We are just going to write a short function to generate and invoke the report file.

[powershell]
Function Test-Report
{
    param ($TestName)
    $rptFile = Join-Path $ReportOutputPath ($ReportName.Replace(" ","") + "-$TestName" + ".mht")
    $rpt | Set-Content -Path $rptFile -Force
    Invoke-Item $rptFile
    sleep 1
}
[/powershell]

The process of building the report

Let's run this line to see what happens and what output we get.

[powershell] Get-HtmlContentOpen -HeaderText "Virtual Machines" [/powershell]

You should see this HTML as output. Each function generates HTML code, incorporating the parameters you pass it. We are going to collect all this code into an array variable called $rpt.

[html]
<div class="section">
    <div class="header">
        <a name="Virtual Machines">Virtual Machines</a>
    </div>
    <div class="content" style="background-color:#ffffff;">
[/html]

Building a Basic Report (Example 1)

Let's generate a very quick report before we dive into some of the other features and functions.  We want to display the VM array in a table.

[powershell]
####### Example 1 #######
# Create an empty array for HTML strings
$rpt = @()

# Note: from here on we always append to the $rpt array variable.
# First, let's add the HTML header information including the report title
$rpt += Get-HtmlOpen -TitleText $ReportName

# This content open function adds a section header
$rpt += Get-HtmlContentOpen -HeaderText "Virtual Machines"

# This creates an HTML table of whatever array you pass into the function
$rpt += Get-HtmlContentTable $RMVMArray

# This content close function closes the section
$rpt += Get-HtmlContentClose

# This HTML close adds the HTML footer
$rpt += Get-HtmlClose

# Now let's test what we have
Test-Report -TestName Example1
[/powershell]

RptExample1

Allow Blocked Content

Depending on your browser settings you may receive a warning asking if you want to 'Allow blocked content'.  This is the JavaScript function used to optionally hide sections of the report. You can click allow blocked content or change your IE settings. ReportBlockContent

Order and grouping data (Example 2)

Here we will select specific columns from the array, with the column we are grouping by first in the select statement.

[powershell]
####### Example 2 ########
$rpt = @()
$rpt += Get-HtmlOpen -TitleText $ReportName
$rpt += Get-HtmlContentOpen -HeaderText "Virtual Machines"

# Here we are going to filter the recordset, reorder the columns and group the results by location.
# The value you group by must be first in the select statement
$rpt += Get-HtmlContentTable ($RMVMArray | select Location, ResourceGroup, Name, Size, PowerState, DataDiskCount, ImageSKU) -GroupBy Location
$rpt += Get-HtmlContentClose
$rpt += Get-HtmlClose

Test-Report -TestName Example2
[/powershell]

ReportExample2

Creating more sections and hiding them (Example 3)

Let's create a summary section and a section about VM size counts, and we will hide two sections.

[powershell]
####### Example 3 ########
$rpt = @()
$rpt += Get-HtmlOpen -TitleText $ReportName

# Adding the summary section
$rpt += Get-HtmlContentOpen -HeaderText "Summary Information"
$rpt += Get-HtmlContentText -Heading "Total VMs" -Detail ($RMVMArray.Count)
$rpt += Get-HtmlContentText -Heading "VM Power State" -Detail ("Running " + ($RMVMArray | ? {$_.PowerState -eq 'Running'} | measure).Count + " / Deallocated " + ($RMVMArray | ? {$_.PowerState -eq 'Deallocated'} | measure).Count)
$rpt += Get-HtmlContentText -Heading "Total Data Disks" -Detail $RMVMArray.DataDisks.Count
$rpt += Get-HtmlContentClose

# Adding the VM size section. Note the -IsHidden switch
$rpt += Get-HtmlContentOpen -HeaderText "VM Size Summary" -IsHidden
$rpt += Get-HtmlContentTable ($RMVMArray | group Size | select Name, Count | sort Count -Descending) -Fixed
$rpt += Get-HtmlContentClose

# Note I have also added the -IsHidden switch here
$rpt += Get-HtmlContentOpen -HeaderText "Virtual Machines" -IsHidden
$rpt += Get-HtmlContentTable ($RMVMArray | select Location, ResourceGroup, Name, Size, PowerState, DataDiskCount, ImageSKU) -GroupBy Location
$rpt += Get-HtmlContentClose
$rpt += Get-HtmlClose

Test-Report -TestName Example3
[/powershell]

ReportExample3

Looping with foreach and section background shading (Example 4)

We are going to group the recordset by location and add a foreach loop.

[powershell]
####### Example 4 ########
$rpt = @()
$rpt += Get-HtmlOpen -TitleText $ReportName
$rpt += Get-HtmlContentOpen -HeaderText "Summary Information"
$rpt += Get-HtmlContentText -Heading "Total VMs" -Detail ($RMVMArray.Count)
$rpt += Get-HtmlContentText -Heading "VM Power State" -Detail ("Running " + ($RMVMArray | ? {$_.PowerState -eq 'Running'} | measure).Count + " / Deallocated " + ($RMVMArray | ? {$_.PowerState -eq 'Deallocated'} | measure).Count)
$rpt += Get-HtmlContentText -Heading "Total Data Disks" -Detail $RMVMArray.DataDisks.Count
$rpt += Get-HtmlContentClose
$rpt += Get-HtmlContentOpen -HeaderText "VM Size Summary" -IsHidden
$rpt += Get-HtmlContentTable ($RMVMArray | group Size | select Name, Count | sort Count -Descending) -Fixed
$rpt += Get-HtmlContentClose

# We are introducing -BackgroundShade 2 so that we can clearly see the sections.
# This helps with larger reports when there are many levels to the sections
$rpt += Get-HtmlContentOpen -HeaderText "Virtual Machines by location" -IsHidden -BackgroundShade 2

# Adding the foreach loop for the grouped recordset
foreach ($Group in ($RMVMArray | select Location, ResourceGroup, Name, Size, PowerState, DataDiskCount, ImageSKU | group Location))
{
    # For every group that exists for a location we will create an HTML section. I have also set -BackgroundShade to 1
    $rpt += Get-HtmlContentOpen -HeaderText ("Virtual Machines for location '" + $Group.Name + "'") -IsHidden -BackgroundShade 1

    # Each recordset may have different data in the columns and therefore different table column widths.
    # We would like it to look the same, so we use the -Fixed switch to produce evenly spaced columns
    $rpt += Get-HtmlContentTable ($Group.Group | select ResourceGroup, Name, Size, PowerState, DataDiskCount, ImageSKU) -Fixed
    $rpt += Get-HtmlContentClose
}
$rpt += Get-HtmlContentClose
$rpt += Get-HtmlClose

Test-Report -TestName Example4
[/powershell]

ReportExample4

Filtering Sections based on Conditions (Example 5)

This will cover adding some IF conditions to the syntax to display a section or not.

[powershell]
####### Example 5 ########
$rpt = @()
$rpt += Get-HtmlOpen -TitleText ($ReportName + " Example 5")
$rpt += Get-HtmlContentOpen -HeaderText "Summary Information" -BackgroundShade 1
$rpt += Get-HtmlContentText -Heading "Total VMs" -Detail ($RMVMArray.Count)
$rpt += Get-HtmlContentText -Heading "VM Power State" -Detail ("Running " + ($RMVMArray | ? {$_.PowerState -eq 'Running'} | measure).Count + " / Deallocated " + ($RMVMArray | ? {$_.PowerState -eq 'Deallocated'} | measure).Count)
$rpt += Get-HtmlContentText -Heading "Total Data Disks" -Detail $RMVMArray.DataDisks.Count
$rpt += Get-HtmlContentClose
$rpt += Get-HtmlContentOpen -HeaderText "VM Size Summary" -IsHidden -BackgroundShade 1
$rpt += Get-HtmlContentTable ($RMVMArray | group Size | select Name, Count | sort Count -Descending) -Fixed
$rpt += Get-HtmlContentClose
$rpt += Get-HtmlContentOpen -HeaderText "Virtual Machines by location" -BackgroundShade 3
foreach ($Group in ($RMVMArray | select Location, ResourceGroup, Name, Size, PowerState, DataDiskCount, ImageSKU | group Location))
{
    # Here we are creating a group to use for the IF conditions, so we can create sections for VMs by power state, Running or Deallocated
    $PowerState = $Group.Group | group PowerState
    $rpt += Get-HtmlContentOpen -HeaderText ("Virtual Machines for location '" + $Group.Name + "' - " + $Group.Group.Count + " VMs") -IsHidden -BackgroundShade 2

    # If there are VMs in the Running group, create a section for them
    if ($PowerState | ? {$_.Name -eq 'Running'})
    {
        $rpt += Get-HtmlContentOpen -HeaderText ("Running Virtual Machines") -BackgroundShade 1
        $rpt += Get-HtmlContentTable ($Group.Group | where {$_.PowerState -eq "Running"} | select ResourceGroup, Name, Size, DataDiskCount, ImageSKU) -Fixed
        $rpt += Get-HtmlContentClose
    }

    # If there are VMs in the Deallocated group, create a section for them
    if ($PowerState | ? {$_.Name -eq 'Deallocated'})
    {
        $rpt += Get-HtmlContentOpen -HeaderText ("Deallocated") -BackgroundShade 1 -IsHidden
        $rpt += Get-HtmlContentTable ($Group.Group | where {$_.PowerState -eq "Deallocated"} | select ResourceGroup, Name, Size, DataDiskCount, ImageSKU) -Fixed
        $rpt += Get-HtmlContentClose
    }
    $rpt += Get-HtmlContentClose
}
$rpt += Get-HtmlContentClose
$rpt += Get-HtmlClose

Test-Report -TestName Example5
[/powershell]

HTML Reporting Example 5

Summary

I hope you have had success working through these examples and can find a use for this code.  Part 2 in this series will move into some more techniques and reporting functions.  Please share any questions or issues you have executing this content.

Part 1 | Part 2 | Part 3 | Part 4 | More

Azure, Azure Active Directory, and PowerShell. The Hard Way

poshoauth.png

In my opinion, a fundamental shift for Windows IT professionals occurred with the release of Exchange 2007.  This established PowerShell as the tool for managing and configuring Microsoft enterprise products and systems going forward.  I seem to remember hearing a story at the time that a mandate was established for every enterprisey product going forward; each GUI action would have a corresponding PowerShell execution.  If anyone remembers the Exchange 2007 console, you could see that in action.  I won’t bother corroborating this story, because the end results are self-evident.  I can’t stress how important this was.  Engineers and administrators with development and advanced scripting skills were spared the further indignity of committing crimes against Win32 and COM+ across a hodgepodge of usually awful languages.  For Windows administrators for whom automation and scripting only meant batch files, a clear path forward was presented.

PowerShell and Leaky Abstractions

For roughly two years now, the scope of my work has been mostly comprised of Azure integration and automation.  Azure proved to be no exception to the PowerShell new world order. I entered with wide-eyed optimism and quickly discovered a great deal of things, usually of a more advanced nature, that could not be done in the portal and purportedly only via PowerShell. As I continue to receive product briefings, I have developed a bit of a pedantic pet peeve.  PowerShell is always front and center in the presentations when referencing management, configuration, and automation.  However, I continue to see a general hand wave given as to the underlying technologies (e.g. WMI/CIM, REST API) and requirements.  I absolutely understand the intent; PowerShell has always been meant to provide a truly powerful environment in a manner that is highly accessible and friendly to the IT professional.  It has been a resounding success in that regard.  A general concern I have, though, is that of too much abstraction.  There is a direct correlation between your frustration level and the gap in your understanding of what is going on when an inevitable edge case is hit and the abstraction leaks.

Getting Back to the Point

All of that is a really long preface to the actual point of this post. I’ve never been a fan of the Azure Cmdlets for a number of reasons, for most of which I don’t necessarily impugn the decisions made by Microsoft. To be honest, I think both Switch-AzureMode (for those that remember) and the rapid release cadence that has introduced many understandably unavoidable breaking changes have really prejudiced me; as a result I tend to use the REST API almost exclusively. The fact is, modern systems, and especially all of the micro-service architectures being touted, are all powered by REST API. In the case of the Microsoft cloud, with only a few notable exceptions, authentication and authorization are handled via Azure Active Directory. It behooves the engineer or developer focused on Microsoft technologies to have at least a cursory understanding.  Azure Active Directory, Azure, and Office 365 are intrinsically linked; every Azure and/or Office 365 subscription is linked with an Azure AD tenant as the primary identity provider. The modern web has adopted OAuth as an authorization standard, and Azure AD can greatly streamline the authorization of web applications and API. The management and other API surfaces of Azure (and Azure Stack) and Office 365 have always taken advantage of this. The term you’ve likely heard thrown around is Bearer Token; that is more accurately described as an authorization header on the HTTP request containing a JWT (JSON Web Token).  My largest issue with Azure and PowerShell automation has been the necessity to jump through hoops to simply obtain that token via PowerShell.  In 2016 a somewhat disingenuously named Cmdlet, Get-AzureStackToken, in the AzureRM.AzureStackAdmin module finally appeared.  I’m certain a large portion of the potential reading audience has used a tool like Fiddler, Postman, or even more recently resources.azure.com to either inspect or interact with these services.  Those who have can feel free to skip straight to where this applies to PowerShell.

There are two types of applications you can create within Azure AD, each of which is identified by a unique Client Id and valid redirect URI(s); these are the most relevant properties we'll focus on.

Web Applications

  • Web applications in Azure Active Directory are OAuth2 confidential clients and likely the most appropriate option for modern (web) use cases.

  • Tokens are obtained on behalf of a user using the OAuth2 authorization grant flow. An authorization code or id token will be supplied to the specified redirect URI.

  • If needed, client credentials (a rolling secret key) can be used to obtain tokens on behalf of the user or on its own from the web application itself.

Native Applications

  • Native applications in Azure Active Directory are OAuth2 public clients (e.g. an application on a desktop or mobile device).

  • These applications can obtain a token directly (with managed organizational accounts) or use the authorization grant flow, but application level permissions are not applicable.

Getting to the PowerShell

I will focus primarily on the Native application type as it is most relevant to PowerShell. Most of the content will use Cmdlets from a module that will be available with this post.   The module is heavily derived/inspired by the ADAL libraries, has no external dependencies, and accepts a friendly PSCredential (with the appropriate rights) for any user authentication.  The Azure Cmdlets use a Native application with a Client Id of 1950a258-227b-4e31-a9cf-717495945fc2 and a redirect URI of urn:ietf:wg:oauth:2.0:oob (the prescribed default for native applications).   We’ll use this for our first attempt at obtaining a token for use against Azure Resource Manager or the legacy Service Management API.  A peculiar detail of Azure management is that this is one of the scenarios where a token is fungible across disparate endpoints. I always use https://management.core.windows.net as my audience regardless of whether I will be working with ARM or SM; a token obtained from that audience will work the same as one from https://management.azure.com.

If all you would like is a snippet to obtain a token using the Azure Cmdlets, I’ll offer you a chance to bail out now:


$Resource='https://management.core.windows.net'
$PoshClientId="1950a258-227b-4e31-a9cf-717495945fc2"
$TenantId="yourdomain.com"
$UserName="username@$TenantId"
$Password="asecurepassword"|ConvertTo-SecureString -AsPlainText -Force
$Credential=New-Object pscredential($UserName,$Password)
Get-AzureStackToken -Resource $Resource -AadTenantId $TenantId -ClientId $PoshClientId -Credential $Credential -Authority "https://login.microsoftonline.com/$TenantId" 

A good deal of the functionality around provisioning applications and service principals has come to the Azure Cmdlets.  You can now create applications, create service principals from those applications, and create role assignments for the service principals. To create an application, in this case one that would own a subscription, you would write something like this:


$ApplicationSecret="ASuperSecretPassword!"
$TenantId='e05b8b95-8c85-49af-9867-f8ac0a257778'
$SubscriptionId='bc3661fe-08f5-4b87-8529-9190f94c163e'
$AppDisplayName='The Subscription Owning Web App'
$HomePage='https://azurefieldnotes.com'
$IdentifierUris=@('https://whereeveryouwant.com')
$NewWebApp=New-AzureRmADApplication -DisplayName $AppDisplayName -HomePage $HomePage `
    -IdentifierUris $IdentifierUris -StartDate (Get-Date) -EndDate (Get-Date).AddYears(1) `
    -Password $ApplicationSecret
$WebAppServicePrincipal=New-AzureRmADServicePrincipal -ApplicationId $NewWebApp.ApplicationId
$NewRoleAssignment=New-AzureRmRoleAssignment -ObjectId $WebAppServicePrincipal.Id -RoleDefinitionName 'owner' -Scope "/subscriptions/$SubscriptionId"
$ServicePrincipalCred=New-Object PScredential($NewWebApp.ApplicationId,($ApplicationSecret|ConvertTo-SecureString -AsPlainText -Force))
Add-AzureRmAccount -Credential $ServicePrincipalCred -TenantId $TenantId -ServicePrincipal 

For those that stuck around, let’s take a look at obtaining JWT(s), inspecting them, and putting them to use.

I added a method for decoding the tokens, so we will have a look at the access token.  A JWT is comprised of a header, payload, and signature.  I will leave explaining the claims within the payload to identity experts.
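If you would like to peek inside a token without the module, the decoding is straightforward; here is a minimal sketch, assuming the raw JWT string is in $AccessToken:


# A JWT is three base64url-encoded segments separated by dots: header.payload.signature
$Parts=$AccessToken.Split('.')

# Normalize base64url to base64 and restore the padding
$Payload=$Parts[1].Replace('-','+').Replace('_','/')
switch ($Payload.Length % 4) { 2 { $Payload+='==' } 3 { $Payload+='=' } }

# Decode the payload and view the claims
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($Payload))|ConvertFrom-Json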

Now that we have a token, let's use it for something useful; in this case we will ask Azure (ARM) for our associated subscriptions.
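A minimal sketch of that call, assuming the token response from the module is in $AuthResult (the api-version is one that was current at the time of writing):


$Headers=@{Authorization="Bearer $($AuthResult.access_token)"}
$Subscriptions=Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2016-06-01" -Headers $Headers
$Subscriptions.value|Select-Object subscriptionId, displayName, state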

Examining the OAuth2 Flow

If you are not interested in what is going on behind the scenes, feel free to skip ahead.  Each tenant exposes a standard set of endpoints, and I will not discuss the v2.0 endpoint as I do not have enough experience using it.  There are two endpoints in particular to make note of, https://login.microsoftonline.com/{tenantid}/oauth2/authorize and https://login.microsoftonline.com/{tenantid}/oauth2/token, where {tenantid} represents the tenant id (GUID or domain name), e.g. yourcompany.com, or common for multi-tenant applications.  Azure AD obviously supports federation, and directing traffic to the appropriate authorization endpoint is guided by a user realm detection API of various versions at https://login.microsoftonline.com/common/UserRealm.  If we inspect the result for a fully managed Azure AD account we see general tenant detail.
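You can inspect this yourself with a simple GET; a small sketch, where the UPN is a placeholder and api-version=1.0 is the commonly used version:


$Upn='someuser@yourcompany.com'
Invoke-RestMethod -Uri "https://login.microsoftonline.com/common/UserRealm/$($Upn)?api-version=1.0"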

If we take a look at a federated user we will see a little difference, the AuthURL property.

userrealm federated

This shows us the location of our federated authentication endpoint. The token will actually be requested via a SAML user assertion that is received from an STS, in this case ADFS.

The OAuth specification uses the request parameter collection for token and authorization code responses. A username and password combination can be used to directly request a token in the fully managed (public client) scenario.

A POST request can go directly to the Token endpoint with the following parameters (a sketch follows the list):

  • client_id: The Application Id
  • resource: The Resource URI to access
  • grant_type: password
  • username: The username
  • password: The password
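Wired up with Invoke-RestMethod, that flow looks roughly like this sketch (tenant, username and password are placeholders; the client id is the well-known Azure Cmdlets one from earlier):


$TenantId='yourdomain.com'
$TokenUri="https://login.microsoftonline.com/$TenantId/oauth2/token"
$Body=@{
    client_id='1950a258-227b-4e31-a9cf-717495945fc2'
    resource='https://management.core.windows.net/'
    grant_type='password'
    username="username@$TenantId"
    password='asecurepassword'
}
# Invoke-RestMethod form-encodes a hashtable body on POST
$TokenResponse=Invoke-RestMethod -Uri $TokenUri -Method Post -Body $Body
$TokenResponse.access_token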

The ADFS/WSTrust flow entails sending a SOAP request to the WSTrust endpoint to authenticate, and using that response to create the assertion that is exchanged for an access token.  Through user realm detection we can find the ADFS username/password endpoint.  A SOAP envelope can be sent to that endpoint to receive a security token response containing the assertions needed.

A POST request is sent to the Username/Password endpoint for ADFS with the following envelope, with notable values encased in {}:

<s:Envelope xmlns:s='http://www.w3.org/2003/05/soap-envelope' 
    xmlns:a='http://www.w3.org/2005/08/addressing' 
    xmlns:u='http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd'>
    <s:Header>
        <a:Action s:mustUnderstand='1'>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</a:Action>
        <a:messageID>urn:uuid:{Unique Identifier for the Request}</a:messageID>
        <a:ReplyTo>
            <a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address>
        </a:ReplyTo>        <!-- The Username Password WSTrust Endpoint -->
        <a:To s:mustUnderstand='1'>{Username/Password Uri}</a:To>
        <o:Security s:mustUnderstand='1' 
            xmlns:o='http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd'>            <!-- The token length requested -->
            <u:Timestamp u:Id='_0'>
                <u:Created>{Token Start Time}</u:Created>
                <u:Expires>{Token Expiry Time}</u:Expires>
            </u:Timestamp>            <!-- The username and password used -->
            <o:UsernameToken u:Id='uuid-{Unique Identifier for the Request}'>
                <o:Username>{UserName to Authenticate}</o:Username>
                <o:Password>{Password to Authenticate}</o:Password>
            </o:UsernameToken>
        </o:Security>
    </s:Header>
    <s:Body>
        <trust:RequestSecurityToken xmlns:trust='http://docs.oasis-open.org/ws-sx/ws-trust/200512'>
            <wsp:AppliesTo xmlns:wsp='http://schemas.xmlsoap.org/ws/2004/09/policy'>
                <a:EndpointReference>
                    <a:Address>urn:federation:MicrosoftOnline</a:Address>
                </a:EndpointReference>
            </wsp:AppliesTo>
            <trust:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</trust:KeyType>
            <trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
        </trust:RequestSecurityToken>
    </s:Body>
</s:Envelope>

The token response is inspected for SAML assertion types (urn:oasis:names:tc:SAML:1.0:assertion or urn:oasis:names:tc:SAML:2.0:assertion) to find the matching token used for the OAuth token request.

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" 
    xmlns:a="http://www.w3.org/2005/08/addressing" 
    xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
    <s:Header>
        <a:Action s:mustUnderstand="1">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</a:Action>
        <o:Security s:mustUnderstand="1" 
            xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
            <u:Timestamp u:Id="_0">
                <u:Created>2016-01-03T01:34:41.640Z</u:Created>
                <u:Expires>2016-01-03T01:39:41.640Z</u:Expires>
            </u:Timestamp>
        </o:Security>
    </s:Header>
    <s:Body>
        <trust:RequestSecurityTokenResponseCollection xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">            <!-- Our Desired Token Response -->
            <trust:RequestSecurityTokenResponse>
                <trust:Lifetime>
                    <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2016-01-03T01:34:41.622Z</wsu:Created>
                    <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2016-01-03T02:34:41.622Z</wsu:Expires>
                </trust:Lifetime>
                <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
                    <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
                        <wsa:Address>urn:federation:MicrosoftOnline</wsa:Address>
                    </wsa:EndpointReference>
                </wsp:AppliesTo>
                <trust:RequestedSecurityToken>                    <!-- The Assertion -->
                    <saml:Assertion MajorVersion="1" MinorVersion="1" AssertionID="_e3b09f2a-8b57-4350-b1e1-20a8f07b3d3b" Issuer="http://adfs.howtopimpacloud.com/adfs/services/trust" IssueInstant="2016-08-03T01:34:41.640Z" 
                        xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion">
                        <saml:Conditions NotBefore="2016-01-03T01:34:41.622Z" NotOnOrAfter="2016-01-03T02:34:41.622Z">
                            <saml:AudienceRestrictionCondition>
                                <saml:Audience>urn:federation:MicrosoftOnline</saml:Audience>
                            </saml:AudienceRestrictionCondition>
                        </saml:Conditions>
                        <saml:AttributeStatement>
                            <saml:Subject>
                                <saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">130WEAH65kG8zfGrZFNlBQ==</saml:NameIdentifier>
                                <saml:SubjectConfirmation>
                                    <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
                                </saml:SubjectConfirmation>
                            </saml:Subject>
                            <saml:Attribute AttributeName="UPN" AttributeNamespace="http://schemas.xmlsoap.org/claims">
                                <saml:AttributeValue>chris@howtopimpacloud.com</saml:AttributeValue>
                            </saml:Attribute>
                            <saml:Attribute AttributeName="ImmutableID" AttributeNamespace="http://schemas.microsoft.com/LiveID/Federation/2008/05">
                                <saml:AttributeValue>130WEAH65kG8zfGrZEFlBQ==</saml:AttributeValue>
                            </saml:Attribute>
                        </saml:AttributeStatement>
                        <saml:AuthenticationStatement AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password" AuthenticationInstant="2016-08-03T01:34:41.607Z">
                            <saml:Subject>
                                <saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">130WEAH65kG8sfGrZENlBQ==</saml:NameIdentifier>
                                <saml:SubjectConfirmation>
                                    <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
                                </saml:SubjectConfirmation>
                            </saml:Subject>
                        </saml:AuthenticationStatement>
                        <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                            <ds:SignedInfo>
                                <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
                                <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" />
                                <ds:Reference URI="#_e3b09f2a-8b57-4350-b1e1-20a8f07b3d3b">
                                    <ds:Transforms>
                                        <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
                                        <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
                                    </ds:Transforms>
                                    <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
                                    <ds:DigestValue>itvzbQhlzA8CIZsMneHVR15FJlY=</ds:DigestValue>
                                </ds:Reference>
                            </ds:SignedInfo>
                            <ds:SignatureValue>gBCGUmhQrJxVpCxVsy2L1qh1kMklVVMoILvYJ5a8NOlezNUx3JNlEP7wZ389uxumP3sL7waKYfNUyVjmEpPkpqxdxrxVu5h1BDBK9WqzOICnFkt6JPx42+cyAhj3T7Nudeg8CP5A9ewRCLZu2jVd/JEHXQ8TvELH56oD5RUldzm0seb8ruxbaMKDjYFuE7X9U5sCMMuglU3WZDC3v6aqmUxpSd9Kelhddleu33XEBv7CQNw84JCud3B+CC7dUwtGxwv11Mk/P0t1fGbfs+I6aSMTecKq9YmscqP9tB8ZouD42jhjhYysOQSdulStmUi6gVzQz+c2l2taa5Amd+JCPg==</ds:SignatureValue>
                            <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
                                <X509Data>
                                    <X509Certificate>MIIC4DCDAcigAwIBAgIQaYQ6QyYqcrBBmOHSGy0E1DANBgkqhkiG9w0BAQsFADArMSkwJwYDVQQDEyBBREZTIFNpZ25pbmcgLSBhZGZzLmNpLmF2YWhjLmNvbTAgFw0xNjA2MDQwNjA4MDdaGA8yMTE2MDUxMTA2MDgwN1owKzEpMCcGA1UEAxMgQURGUyBTaWduaW5nIC0gYWRmcy5jaS5hdmFoYy5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDH9J6/oWYAR8Y98QnacNouKyIBdtZbosEz0HyJVyrxVqKq2AsPvCEO3WFm9Gmt/xQN9PuLidZpgICAe8Ukuv4h/NldgmgtD64mObFNuEM5pzAPRXUv6FWlVE4fnUpIiD1gC0bbQ7Tzv/cVgfUChCDpFu3ePDTs/tv07ee22jXtoyT3N7tsbIX47xBMKgF9ItN9Oyqi0JyQHZghVQ1ebNOMH3/zNdl0WcZ+Pl+osD3iufoH6H+qC9XY09B5YOWy8fJoqf+HFeSWZCHH5vJJfsPTsSilvLHCpMGlrMFaTBKqmv+m9Z3FtbzOcnKHS5PJVAymqLctkH+HbFzaDblaSRhhAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAFB0E2Cj+O24aPM61JsCXLIAB28q4h4qLxMwV+ypYjFxxcQ5GzgqaPJ7BARCnW1gm3PyvNfUut9RYrT9wTJlBVY9WDBoX33jsS87riMj+JONXJ7lG/zAozxs0xIiW+PNlFdOt7xyvYstrFgPJS1E05jhiZ2PR8MS20uSlMNkVPinpz4seyyMQeM+1GbpbDE1EwwtEVKgatJN7t6nAn9mw8cHIk1et7CYOGeWCnMA9EljzNiD8wEwsG51aKfuvGrPK8Q8N/G89SPgstpe0Te5+EtWT6latXfpCwdNWxvinH49SKKa25l1VoLLNwKiQF6vK1Iw0F7dP7QkO5YdE7/MTDU=</X509Certificate>
                                </X509Data>
                            </KeyInfo>
                        </ds:Signature>
                    </saml:Assertion>
                </trust:RequestedSecurityToken>
                <trust:RequestedAttachedReference>
                    <o:SecurityTokenReference k:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1" 
                        xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" 
                        xmlns:k="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
                        <o:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">_e3b09f2a-8b57-4350-b1e1-20a8f07b3d3b</o:KeyIdentifier>
                    </o:SecurityTokenReference>
                </trust:RequestedAttachedReference>
                <trust:RequestedUnattachedReference>
                    <o:SecurityTokenReference k:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1" 
                        xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" 
                        xmlns:k="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
                        <o:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.0#SAMLAssertionID">_e3b09f2a-8b57-4350-b1e1-20a8f07b3d3b</o:KeyIdentifier>
                    </o:SecurityTokenReference>
                </trust:RequestedUnattachedReference>
                <trust:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</trust:TokenType>
                <trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
                <trust:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</trust:KeyType>
            </trust:RequestSecurityTokenResponse>
        </trust:RequestSecurityTokenResponseCollection>
    </s:Body>
</s:Envelope>
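Before the assertion can be exchanged, it has to be pulled out of that response and base64 encoded. A rough sketch of that step, assuming the raw SOAP response above is in $SoapResponse:


# Find the first SAML assertion in the RSTR and encode the raw XML for the OAuth request
[xml]$Rstr=$SoapResponse
$Assertion=$Rstr.GetElementsByTagName('saml:Assertion')|Select-Object -First 1
$EncodedAssertion=[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($Assertion.OuterXml))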

A POST request is sent to the Token endpoint with the following parameters (sketched below):

  • client_id: The Application Id
  • resource: The Resource URI to access
  • assertion: The base64 encoded SAML token
  • grant_type: urn:ietf:params:oauth:grant-type:saml1_1-bearer or urn:ietf:params:oauth:grant-type:saml2-bearer
  • scope: openid
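As a sketch, reusing the encoded assertion from the previous step (client id, resource, and $TokenUri are the same placeholders as before):


$Body=@{
    client_id='1950a258-227b-4e31-a9cf-717495945fc2'
    resource='https://management.core.windows.net/'
    assertion=$EncodedAssertion
    grant_type='urn:ietf:params:oauth:grant-type:saml1_1-bearer'
    scope='openid'
}
$TokenResponse=Invoke-RestMethod -Uri $TokenUri -Method Post -Body $Body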

A GET request is sent to the Authorize endpoint with some similar query parameters:

  • client_id: The Application Id
  • redirect_uri: The location within the application to handle the authorization code
  • response_type: code
  • prompt: login, consent, or admin_consent
  • scope: optional scope for access (app uri or openid scope)

The endpoint should redirect you to the appropriate login screen via user realm detection.  Once the user login is completed, the code is added to the redirect address as either query parameters (default) or a form POST.  Once the code is retrieved it can be exchanged for a token. A POST request is sent to the Token endpoint as demonstrated before with some slightly different parameters:

  • client_id: The Application Id
  • resource: The Resource URI to access
  • code: The authorization code
  • grant_type: authorization_code
  • scope: previous scope
  • client_secret: required if confidential client
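A sketch of that exchange; note Azure AD also expects the redirect_uri that was used to obtain the code (values here are placeholders from the examples in this post):


$Body=@{
    client_id=$NewClientId
    resource='https://management.core.windows.net/'
    code=$AuthCode
    grant_type='authorization_code'
    redirect_uri='https://itdoesnotmatter/'
}
$TokenResponse=Invoke-RestMethod -Uri $TokenUri -Method Post -Body $Body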

Tying it All Together

To try to show some value for your reading time, let's explore how this can be used as the solutions you support and deploy become more tightly integrated with the Microsoft cloud.  We'll start by creating a new Native application in the legacy portal.

appnative1
appnative2

I used https://itdoesnotmatter here, but you might as well follow the guidance of using urn:ietf:wg:oauth:2.0:oob.  We will now grant permissions to Azure Active Directory and Azure Service Management (for ARM too).

ADPermissions
ADServiceMgmt

I will avoid discussing configuring the application to be multi-tenant, as the processes I outline are identical; it is simply a matter of the targeted tenant.  You should end up with something looking like this.

Native

Let's now try to get a token for our new application and put it to use.  This should look exactly the same as retrieving the previous token.


$AuthResult=Get-AzureADUserToken -Resource 'https://management.core.windows.net/' -ClientId $NewClientId -Credential $Credential -TenantId sendthewolf.com

nativefirstattempt

Epic failure!  Unfortunately we run into a common annoyance: the application must be consented to interactively, and I do not know of any tooling that exists to make this easy.  I added a function to make it a little easier; it supports an AdminConsent switch to approve the application for all users within the tenant.  We can then step through the consent process to receive an authorization code.


$AuthCode=Approve-AzureADApplication -ClientId $NewClientId -RedirectUri 'https://itdoesnotmatter/' -TenantId sendthewolf.com -AdminConsent

Approve
Approve App

Once the authorization code is obtained it can be exchanged for a token, for which I provided another function.  That token can now be used in the exact same manner as the Azure Cmdlet application.


$TokenResult=Get-AzureADAccessTokenFromCode 'https://management.core.windows.net/' -ClientId $NewClientId -RedirectUri 'https://itdoesnotmatter/' -TenantId sendthewolf.com -AuthorizationCode $AuthCode

Authorize2

If you want to handle some Azure Active Directory objects, we can target a different audience and execute actions appropriate to the account's privilege level.   In the following example we will create a new user.


$GraphBaseUri="https://graph.windows.net/"
$GraphUriBuilder=New-Object System.UriBuilder($GraphBaseUri)
$GraphUriBuilder.Path="$TenantId/users"
$GraphUriBuilder.Query="api-version=1.6"
$NewUserJSON=@"
{
    "accountEnabled": true, 
    "displayName": "Johnny Law", 
    "mailNickName" : "thelaw", 
    "passwordProfile": { 
        "password": "Password1234!", 
        "forceChangePasswordNextLogin": false 
    }, 
    "userPrincipalName": "johnny.law@$TenantId" 
}
"@
$AuthResult=Get-AzureADUserToken -Resource $GraphBaseUri -ClientId $NewClientId -Credential $Credential -TenantId $TenantId
$AuthHeaders=@{Authorization="Bearer $($AuthResult.access_token)"}
$NewUser=Invoke-RestMethod -Uri $GraphUriBuilder.Uri -Method Post -Headers $AuthHeaders -Body $NewUserJSON -ContentType "application/json"

If we want to continue the “fun” with Office 365, we can apply the exact same approach with the Office 365 SharePoint Online application permissions.  In the interest of moving along, and with no regard for constraining access, we will configure the permissions in the following manner.

sharepoint

We’ll now do some querying of the Office 365 SharePoint video API with some more script.


$SharepointUri='https://yourdomain.sharepoint.com/'
$SpUriBuilder=New-Object System.UriBuilder($SharepointUri)
$SpUriBuilder.Path="_api/VideoService.Discover"
$AuthResult=Get-AzureADUserToken -Resource $SharepointUri -ClientId $NewClientId -Credential $Credential
$Headers=@{Authorization="Bearer $($AuthResult.access_token)";Accept="application/json";}
$VideoDisco=Invoke-RestMethod -Uri $SpUriBuilder.Uri -Headers $Headers
$VideoDisco|Format-List
$VideoChannelId="306488ae-5562-4d3e-a19f-fdb367928b96"
$VideoPortalUrl=$VideoDisco.VideoPortalUrl
$ChannelUrlBuilder=New-Object System.UriBuilder($VideoPortalUrl)
$ChannelUrlBuilder.Path+="/_api/VideoService/Channels"
$ChannelOData=Invoke-RestMethod -Uri $ChannelUrlBuilder.Uri -Headers $Headers
$ChannelRoot=$ChannelUrlBuilder.Path
foreach ($Channel in $ChannelOData.Value)
{  
    $VideoUriBuilder=New-Object System.UriBuilder($Channel.'odata.id')
    $VideoUriBuilder.Path+="/Videos"
    Invoke-RestMethod -Uri $VideoUriBuilder.Uri -Headers $Headers|Select-Object -ExpandProperty value
}

We should see some output that looks like this:

spvideos

I’ve had Enough! Please Just Show me the Code.

For those who have endured or even skipped straight here, I present the following module for any use you dare apply.  The standard liability waiver applies, and it is presented primarily for educational purposes.  It came from a need to access the assortment of Microsoft cloud APIs in environments where we could not always ensure the plethora of correct Cmdlets were installed.  Initially, being a .Net guy, I just wrapped standard use cases around ADAL .Net.  I really wanted to make sure that I understood OAuth and OpenId Connect authorization flows as they relate to Azure Active Directory.  The entire theme of this lengthy tome is to emphasize the importance of having a relatively advanced understanding of these concepts.  Regardless of your milieu, if it has a significant Microsoft component, the demand to both integrate and support the integration(s) of numerous offerings will only grow larger.  The module is primarily targeted at the Native Client application type; however, there is support for the client secret and implicit authorization flows.  There are also a few utility methods that are exposed, as they may have some diagnostic or other use.  The module exposes the following methods, all of which support Get-Help:

  • Approve-AzureADApplication
    • Approves an Azure AD Application interactively and returns the authorization code
  • ConvertFrom-EncodedJWT
    • Converts an encoded JWT to an object representation
  • Get-AzureADAccessTokenFromCode
    • Retrieves an access token from a consent authorization code
  • Get-AzureADClientToken
    • Retrieves an access token as an OAuth confidential client
  • Get-AzureADUserToken
    • Retrieves an access token as an OAuth public client
  • Get-AzureADImplicitFlowToken
    • Retrieves an access token interactively for a web application with OAuth implicit flow enabled
  • Get-AzureADOpenIdConfiguration
    • Retrieves the OpenId Connect configuration for the specified application
  • Get-AzureADUserRealm
    • Retrieves the aggregate user realm data for the specified user principal name(s)
  • Get-WSTrustUserRealmDetails
    • Retrieves the WSFederation details for a given user principal name

Get it here: Azure AD Module

I hope you find it useful and remember not to fear doing things the hard way every so often.

Simulating an Azure storage account failure

redundancy_banner.jpg

Storage Redundancy in the Cloud

Redundancy and failover is always an important factor when designing and deploying applications.  As we start to build out applications in the cloud, we have seen major disruptions due to a single Azure storage account being used across an entire application.  Logically this makes sense as a single container for all items, but when considering redundancy this becomes a single point of failure.  Just because it is in the cloud doesn't mean it's redundant.

While Azure storage is generally something you can consider stable, this is IT, and anything that can happen, will happen.  We have seen developers and administrators accidentally delete accounts, and Azure has had outages which include storage account failures and, in some cases (although far less common), data loss.  At this point I should mention that deployment and use of RBAC might have prevented some of these accidental deletions, but not in every case.

An Example Application

In this example, let's consider you are building an application with two web front end servers using IaaS VMs that you would like to ensure is as redundant as possible.  We could use an Azure load balancer and deploy two VMs into an availability set, as shown below.  While the load balancer handles traffic and the availability set handles fault and upgrade domains for the VMs, these VMs are still all present on a single storage account that is outside of the availability set protection. If you lose the storage account, both of the VMs will fail and your application will go offline.

Capture1_thumb.jpg

Adding Redundancy

If we take this design and add a second storage account for one of the IaaS VMs, we can eliminate several scenarios where the application might go offline.  There are several options for redundancy in storage which you can evaluate depending on your needs, budget and performance.  There are also limits to the number of storage accounts you can provision, and management could become more complex.

Capture2.jpg

This recommendation focuses on a single storage account going 'offline' for whatever reason. As you scale up to larger applications you may want to have two or more storage accounts supporting multiple VMs. To reduce the number of storage accounts you could consider striping storage accounts across multiple load balanced application tiers. It's worth noting that this will help protect against accidental deletion, but even having two storage accounts may not protect you against Azure failures. There is no guarantee that a second storage account won't be on the same hardware, or otherwise within the same failure envelope as the first. It's better than nothing, but if you need a higher RTO/RPO, you need to look at a proper active/active configuration in separate regions.

While LRS is the recommended strategy for VHDs to increase performance and reduce costs, you may want to consider more resilient options and use ZRS or GRS storage replication for at least one of the storage accounts. Note that there are limitations to using ZRS and GRS for VMs, specifically around performance and corruption when disk striping. You may even consider deploying more VMs in another availability set in another region, depending on the application's requirements.

Simulating Storage Account Failure 

As an administrator, if you are trying to test a redundant application, there is no ‘offline’ option to allow us to test storage failure.  There are some options to simulate this: if you break the lease to the blob you can simulate a hard stop, but this is potentially destructive to the OS. You can find more information about breaking a lease at this Microsoft Reference. You could stop and start the VMs yourself, but as the application scales and grows more complex you can again introduce human error, and you still have to go and start them all again anyway. To help with this administrative task, I have modified a script for stopping and starting VMs.

Script and Source

Starting from an existing script, I made some minor modifications and combined code from Darren Robinson (here), which utilizes RamblingCookieMonster's Invoke-Parallel function, into a single script.

This script allows the administrator to specify a resource group and a storage account; the script will find all VMs on that storage account in that resource group and shut down the VMs gracefully.  Invoke-Parallel allows these tasks to run at the same time, saving time.  You can then conduct application testing. Once testing is complete, you can use the same script to start the VMs again. The heart of the logic is sketched below; the full script follows.
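Stripped of the parallelism and logging, the core of the script boils down to something like this sketch (assuming the AzureRM module, an authenticated session, and unmanaged disks with VHD URIs):

[powershell]
# Find VMs in the resource group whose OS disk lives on the target storage account
$vms = Get-AzureRmVM -ResourceGroupName $ResourceGroup
$targets = $vms | Where-Object { $_.StorageProfile.OsDisk.Vhd.Uri -like "https://$StorageAccount.blob*" }

foreach ($vm in $targets)
{
    if ($Power -eq 'Stop') { Stop-AzureRmVM -ResourceGroupName $ResourceGroup -Name $vm.Name -Force }
    else { Start-AzureRmVM -ResourceGroupName $ResourceGroup -Name $vm.Name }
}
[/powershell]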

The script Change-VMStateByStorageAccount will ask for your Azure credentials if they aren't present in your PowerShell session. The script itself requires three parameters, as follows: ResourceGroup, StorageAccount, and Power (Stop or Start).

Example to stop all VMs

[powershell] Change-VMStateByStorageAccount -ResourceGroup "MyResourceGroup" -StorageAccount "StorageAccount01" -Power "Stop" [/powershell]

PowerShell Code

[powershell]
Param(
    [Parameter(Mandatory=$true)]
    [String] $ResourceGroup,
    [Parameter(Mandatory=$true)]
    [String] $StorageAccount,
    [Parameter(Mandatory=$true)]
    [String] $Power
)

$StorageSuffix = "blob.core.windows.net"

if (!$Power)          { Write-Host "No powerstate specified. Use -Power start|stop" }
if (!$ResourceGroup)  { Write-Host "No Azure Resource Group specified. Use -ResourceGroup 'ResourceGroupName'" }
if (!$StorageAccount) { Write-Host "No Azure Storage Account name specified. Use -StorageAccount 'storageaccount'" }

function Invoke-Parallel
{
    [cmdletbinding(DefaultParameterSetName='ScriptBlock')]
    Param (
        [Parameter(Mandatory=$false,position=0,ParameterSetName='ScriptBlock')]
        [System.Management.Automation.ScriptBlock]$ScriptBlock,

        [Parameter(Mandatory=$false,ParameterSetName='ScriptFile')]
        [ValidateScript({Test-Path $_ -PathType leaf})]
        $ScriptFile,

        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        [Alias('CN','__Server','IPAddress','Server','ComputerName')]
        [PSObject]$InputObject,

        [PSObject]$Parameter,

        [switch]$ImportVariables,

        [switch]$ImportModules,

        [int]$Throttle = 20,

        [int]$SleepTimer = 200,

        [int]$RunspaceTimeout = 0,

        [switch]$NoCloseOnTimeout = $false,

        [int]$MaxQueue,

        [ValidateScript({Test-Path (Split-Path $_ -Parent)})]
        [string]$LogFile = "C:\temp\log.log",

        [switch] $Quiet = $false
    )

    Begin
    {
        # No max queue specified? Estimate one.
        # We use the script scope to resolve an odd PowerShell 2 issue where MaxQueue isn't seen later in the function
        if ( -not $PSBoundParameters.ContainsKey('MaxQueue'))
        {
            if ($RunspaceTimeout -ne 0) { $script:MaxQueue = $Throttle }
            else { $script:MaxQueue = $Throttle * 3 }
        }
        else
        {
            $script:MaxQueue = $MaxQueue
        }

        Write-Verbose "Throttle: '$throttle' SleepTimer '$sleepTimer' runSpaceTimeout '$runspaceTimeout' maxQueue '$maxQueue' logFile '$logFile'"

        # If they want to import variables or modules, create a clean runspace, get loaded items, use those to exclude items
        if ($ImportVariables -or $ImportModules)
        {
            $StandardUserEnv = [powershell]::Create().AddScript({

                # Get modules and snapins in this clean runspace
                $Modules = Get-Module | Select -ExpandProperty Name
                $Snapins = Get-PSSnapin | Select -ExpandProperty Name

                # Get variables in this clean runspace
                # Called last to get vars like $? into session
                $Variables = Get-Variable | Select -ExpandProperty Name

                # Return a hashtable where we can access each
                @{
                    Variables = $Variables
                    Modules   = $Modules
                    Snapins   = $Snapins
                }
            }).Invoke()[0]

            if ($ImportVariables)
            {
                # Exclude common parameters, bound parameters, and automatic variables
                Function _temp { [cmdletbinding()] param() }
                $VariablesToExclude = @( (Get-Command _temp | Select -ExpandProperty Parameters).Keys + $PSBoundParameters.Keys + $StandardUserEnv.Variables )
                Write-Verbose "Excluding variables $( ($VariablesToExclude | Sort) -join ", ")"

                # We don't use 'Get-Variable -Exclude', because it uses regexps.
                # One of the variables that we pass is '$?'.
                # There could be other variables with such problems.
                # Scope 2 required if we move to a real module
                $UserVariables = @( Get-Variable | Where { -not ($VariablesToExclude -contains $_.Name) } )
                Write-Verbose "Found variables to import: $( ($UserVariables | Select -ExpandProperty Name | Sort) -join ", " | Out-String).`n"
            }

            if ($ImportModules)
            {
                $UserModules = @( Get-Module | Where { $StandardUserEnv.Modules -notcontains $_.Name -and (Test-Path $_.Path -ErrorAction SilentlyContinue) } | Select -ExpandProperty Path )
                $UserSnapins = @( Get-PSSnapin | Select -ExpandProperty Name | Where { $StandardUserEnv.Snapins -notcontains $_ } )
            }
        }

        #region functions

        Function Get-RunspaceData
        {
            [cmdletbinding()]
            param( [switch]$Wait )

            # Loop through runspaces
            # If $Wait is specified, keep looping until all complete
            Do
            {
                # Set more to false for tracking completion
                $more = $false

                # Progress bar if we have inputobject count (bound parameter)
                if (-not $Quiet)
                {
                    Write-Progress -Activity "Running Query" -Status "Starting threads" `
                        -CurrentOperation "$startedCount threads defined - $totalCount input objects - $script:completedCount input objects processed" `
                        -PercentComplete $( Try { $script:completedCount / $totalCount * 100 } Catch { 0 } )
                }

                # Run through each runspace
                Foreach ($runspace in $runspaces)
                {
                    # Get the duration - inaccurate
                    $currentdate = Get-Date
                    $runtime = $currentdate - $runspace.startTime
                    $runMin = [math]::Round( $runtime.totalminutes, 2 )

                    # Set up log object
                    $log = "" | Select Date, Action, Runtime, Status, Details
                    $log.Action = "Removing:'$($runspace.object)'"
                    $log.Date = $currentdate
                    $log.Runtime = "$runMin minutes"

                    # If runspace completed, end invoke, dispose, recycle, counter++
                    If ($runspace.Runspace.isCompleted)
                    {
                        $script:completedCount++

                        # Check if there were errors
                        if ($runspace.powershell.Streams.Error.Count -gt 0)
                        {
                            # Set the logging info and move the file to completed
                            $log.status = "CompletedWithErrors"
                            Write-Verbose ($log | ConvertTo-Csv -Delimiter ";" -NoTypeInformation)[1]
                            foreach ($ErrorRecord in $runspace.powershell.Streams.Error)
                            {
                                Write-Error -ErrorRecord $ErrorRecord
                            }
                        }
                        else
                        {
                            # Add logging details and cleanup
                            $log.status = "Completed"
                            Write-Verbose ($log | ConvertTo-Csv -Delimiter ";" -NoTypeInformation)[1]
                        }

                        # Everything is logged, clean up the runspace
                        $runspace.powershell.EndInvoke($runspace.Runspace)
                        $runspace.powershell.dispose()
                        $runspace.Runspace = $null
                        $runspace.powershell = $null
                    }

                    # If runtime exceeds max, dispose the runspace
                    ElseIf ( $runspaceTimeout -ne 0 -and $runtime.totalseconds -gt $runspaceTimeout)
                    {
                        $script:completedCount++
                        $timedOutTasks = $true

                        # Add logging details and cleanup
                        $log.status = "TimedOut"
                        Write-Verbose ($log | ConvertTo-Csv -Delimiter ";" -NoTypeInformation)[1]
                        Write-Error "Runspace timed out at $($runtime.totalseconds) seconds for the object:`n$($runspace.object | out-string)"

                        # Depending on how it hangs, we could still get stuck here as dispose calls a synchronous method on the powershell instance
                        if (!$noCloseOnTimeout) { $runspace.powershell.dispose() }
                        $runspace.Runspace = $null
                        $runspace.powershell = $null
                        $completedCount++
                    }

                    # If runspace isn't null set more to true
                    ElseIf ($runspace.Runspace -ne $null)
                    {
                        $log = $null
                        $more = $true
                    }

                    # Log the results if a log file was indicated
                    if ($logFile -and $log)
                    {
                        ($log | ConvertTo-Csv -Delimiter ";" -NoTypeInformation)[1] | Out-File $LogFile -Append
                    }
                }

                # Clean out unused runspace jobs
                $temphash = $runspaces.clone()
                $temphash | Where { $_.runspace -eq $Null } | ForEach { $Runspaces.remove($_) }

                # Sleep for a bit if we will loop again
                if ($PSBoundParameters['Wait']) { Start-Sleep -Milliseconds $SleepTimer }

                # Loop again only if -Wait parameter and there are more runspaces to process
            } while ($more -and $PSBoundParameters['Wait'])

            # End of runspace function
        }

        #endregion functions

        #region Init

        if ($PSCmdlet.ParameterSetName -eq 'ScriptFile')
        {
            $ScriptBlock = [scriptblock]::Create( $(Get-Content $ScriptFile | Out-String) )
        }
        elseif ($PSCmdlet.ParameterSetName -eq 'ScriptBlock')
        {
            # Start building parameter names for the param block
            [string[]]$ParamsToAdd = '$_'
            if ( $PSBoundParameters.ContainsKey('Parameter') )
            {
                $ParamsToAdd += '$Parameter'
            }

            $UsingVariableData = $Null

            # This code enables $Using support through the AST.
            # This is entirely from Boe Prox, and his https://github.com/proxb/PoshRSJob module; all credit to Boe!

            if ($PSVersionTable.PSVersion.Major -gt 2)
            {
                # Extract using references
                $UsingVariables = $ScriptBlock.ast.FindAll({$args[0] -is [System.Management.Automation.Language.UsingExpressionAst]},$True)

                If ($UsingVariables)
                {
                    $List = New-Object 'System.Collections.Generic.List`1[System.Management.Automation.Language.VariableExpressionAst]'
                    ForEach ($Ast in $UsingVariables)
                    {
                        [void]$list.Add($Ast.SubExpression)
                    }

                    $UsingVar = $UsingVariables | Group SubExpression | ForEach { $_.Group | Select -First 1 }

                    # Extract the name, value, and create replacements for each
                    $UsingVariableData = ForEach ($Var in $UsingVar)
                    {
                        Try
                        {
                            $Value = Get-Variable -Name $Var.SubExpression.VariablePath.UserPath -ErrorAction Stop
                            [pscustomobject]@{
                                Name       = $Var.SubExpression.Extent.Text
                                Value      = $Value.Value
                                NewName    = ('$__using_{0}' -f $Var.SubExpression.VariablePath.UserPath)
                                NewVarName = ('__using_{0}' -f $Var.SubExpression.VariablePath.UserPath)
                            }
                        }
                        Catch
                        {
                            Write-Error "$($Var.SubExpression.Extent.Text) is not a valid Using: variable!"
                        }
                    }
                    $ParamsToAdd += $UsingVariableData | Select -ExpandProperty NewName -Unique

                    $NewParams = $UsingVariableData.NewName -join ', '
                    $Tuple = [Tuple]::Create($list, $NewParams)
                    $bindingFlags = [Reflection.BindingFlags]"Default,NonPublic,Instance"
                    $GetWithInputHandlingForInvokeCommandImpl = ($ScriptBlock.ast.GetType().GetMethod('GetWithInputHandlingForInvokeCommandImpl',$bindingFlags))

                    $StringScriptBlock = $GetWithInputHandlingForInvokeCommandImpl.Invoke($ScriptBlock.ast,@($Tuple))

                    $ScriptBlock = [scriptblock]::Create($StringScriptBlock)

                    Write-Verbose $StringScriptBlock
                }
            }

            $ScriptBlock = $ExecutionContext.InvokeCommand.NewScriptBlock("param($($ParamsToAdd -Join ", "))`r`n" + $Scriptblock.ToString())
        }
        else
        {
            Throw "Must provide ScriptBlock or ScriptFile"; Break
        }

Write-Debug "`$ScriptBlock: $($ScriptBlock | Out-String)" Write-Verbose "Creating runspace pool and session states"

#If specified, add variables and modules/snapins to session state $sessionstate = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault() if ($ImportVariables) { if($UserVariables.count -gt 0) { foreach($Variable in $UserVariables) { $sessionstate.Variables.Add( (New-Object -TypeName System.Management.Automation.Runspaces.SessionStateVariableEntry -ArgumentList $Variable.Name, $Variable.Value, $null) ) } } } if ($ImportModules) { if($UserModules.count -gt 0) { foreach($ModulePath in $UserModules) { $sessionstate.ImportPSModule($ModulePath) } } if($UserSnapins.count -gt 0) { foreach($PSSnapin in $UserSnapins) { [void]$sessionstate.ImportPSSnapIn($PSSnapin, [ref]$null) } } }

#Create runspace pool $runspacepool = [runspacefactory]::CreateRunspacePool(1, $Throttle, $sessionstate, $Host) $runspacepool.Open()

Write-Verbose "Creating empty collection to hold runspace jobs" $Script:runspaces = New-Object System.Collections.ArrayList

#If inputObject is bound get a total count and set bound to true $bound = $PSBoundParameters.keys -contains "InputObject" if(-not $bound) { [System.Collections.ArrayList]$allObjects = @() }

#Set up log file if specified if( $LogFile ){ New-Item -ItemType file -path $logFile -force | Out-Null ("" | Select Date, Action, Runtime, Status, Details | ConvertTo-Csv -NoTypeInformation -Delimiter ";")[0] | Out-File $LogFile }

#write initial log entry $log = "" | Select Date, Action, Runtime, Status, Details $log.Date = Get-Date $log.Action = "Batch processing started" $log.Runtime = $null $log.Status = "Started" $log.Details = $null if($logFile) { ($log | convertto-csv -Delimiter ";" -NoTypeInformation)[1] | Out-File $LogFile -Append }

$timedOutTasks = $false

#endregion INIT }

Process {

    #add piped objects to all objects or set all objects to bound input object parameter
    if ($bound) {
        $allObjects = $InputObject
    }
    Else {
        [void]$allObjects.add( $InputObject )
    }
}

End {

    #Use Try/Finally to catch Ctrl+C and clean up.
    Try {
        #counts for progress
        $totalCount = $allObjects.count
        $script:completedCount = 0
        $startedCount = 0

        foreach ($object in $allObjects) {

            #region add scripts to runspace pool

            #Create the powershell instance, set verbose if needed, supply the scriptblock and parameters
            $powershell = [powershell]::Create()

            if ($VerbosePreference -eq 'Continue') {
                [void]$PowerShell.AddScript({ $VerbosePreference = 'Continue' })
            }

            [void]$PowerShell.AddScript($ScriptBlock).AddArgument($object)

            if ($parameter) {
                [void]$PowerShell.AddArgument($parameter)
            }

            # $Using support from Boe Prox
            if ($UsingVariableData) {
                Foreach ($UsingVariable in $UsingVariableData) {
                    Write-Verbose "Adding $($UsingVariable.Name) with value: $($UsingVariable.Value)"
                    [void]$PowerShell.AddArgument($UsingVariable.Value)
                }
            }

            #Add the runspace into the powershell instance
            $powershell.RunspacePool = $runspacepool

            #Create a temporary collection for each runspace
            $temp = "" | Select-Object PowerShell, StartTime, object, Runspace
            $temp.PowerShell = $powershell
            $temp.StartTime = Get-Date
            $temp.object = $object

            #Save the handle output when calling BeginInvoke() that will be used later to end the runspace
            $temp.Runspace = $powershell.BeginInvoke()
            $startedCount++

            #Add the temp tracking info to $runspaces collection
            Write-Verbose ( "Adding {0} to collection at {1}" -f $temp.object, $temp.starttime.tostring() )
            $runspaces.Add($temp) | Out-Null

            #loop through existing runspaces one time
            Get-RunspaceData

            #If we have more running than max queue (used to control timeout accuracy)
            #Script scope resolves odd PowerShell 2 issue
            $firstRun = $true
            while ($runspaces.count -ge $Script:MaxQueue) {

                #give verbose output
                if ($firstRun) {
                    Write-Verbose "$($runspaces.count) items running - exceeded $Script:MaxQueue limit."
                }
                $firstRun = $false

                #run Get-RunspaceData and sleep for a short while
                Get-RunspaceData
                Start-Sleep -Milliseconds $sleepTimer
            }

            #endregion add scripts to runspace pool
        }

        Write-Verbose ( "Finish processing the remaining runspace jobs: {0}" -f ( @($runspaces | Where { $_.Runspace -ne $Null }).Count ) )
        Get-RunspaceData -Wait

        if (-not $quiet) {
            Write-Progress -Activity "Running Query" -Status "Starting threads" -Completed
        }
    }
    Finally {
        #Close the runspace pool, unless we specified no close on timeout and something timed out
        if ( ($timedOutTasks -eq $false) -or ( ($timedOutTasks -eq $true) -and ($noCloseOnTimeout -eq $false) ) ) {
            Write-Verbose "Closing the runspace pool"
            $runspacepool.close()
        }

        #collect garbage
        [gc]::Collect()
    }
}
}

$StorageAccountName = $StorageAccount.ToLower()

# See if we already have a session. If we do, don't re-authenticate.
if (!$AzureRMAccount.Context.Tenant) {
    $AzureRMAccount = Add-AzureRmAccount
}

$SubscriptionName = Get-AzureRmSubscription | Sort SubscriptionName | Select SubscriptionName
$TenantId = $AzureRMAccount.Context.Tenant.TenantId

Select-AzureRmSubscription -TenantId $TenantId
Write-Host "Enumerating VMs from AzureRM in Resource Group '$ResourceGroup' from '$StorageAccountName'"

$StorageVMs = Get-AzureRmVM | Where { $_.StorageProfile.OsDisk.Vhd.Uri -like "*$StorageAccountName.$storageSuffix*" }
$vmrunninglist = @()
$vmstoppedlist = @()

Foreach ($vmonstore in $StorageVMs) {
    $vmstatus = Get-AzureRmVM -ResourceGroupName $ResourceGroup -Name $vmonstore.Name -Status
    $PowerState = (Get-Culture).TextInfo.ToTitleCase(($vmstatus.Statuses)[1].Code.Split("/")[1])

    Write-Host "VM: '$($vmonstore.Name)' is $PowerState"
    if ($PowerState -eq 'Running') { $vmrunninglist += $vmonstore.Name }
    if ($PowerState -eq 'Deallocated') { $vmstoppedlist += $vmonstore.Name }
}

if ($Power -eq 'start') {
    Write-Host "Starting VMs $vmstoppedlist in Resource Group $ResourceGroup"
    $vmstoppedlist | Invoke-Parallel -ImportVariables -NoCloseOnTimeout -ScriptBlock {
        Start-AzureRmVM -ResourceGroupName $ResourceGroup -Name $_ -Verbose
    }
}

if ($Power -eq 'stop') {
    Write-Host "Stopping VMs $vmrunninglist in Resource Group $ResourceGroup"
    $vmrunninglist | Invoke-Parallel -ImportVariables -NoCloseOnTimeout -ScriptBlock {
        Stop-AzureRmVM -ResourceGroupName $ResourceGroup -Name $_ -Verbose -Force
    }
}
[/powershell]
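For reference, here is what a hypothetical invocation of the finished script might look like. The script file name is an assumption, and the -ResourceGroup, -StorageAccount and -Power parameter names are inferred from the variables referenced in the body above:

[powershell]# Hypothetical invocation - script name and parameter names are assumptions
.\Set-AzureRmVmPowerState.ps1 -ResourceGroup 'MyResourceGroup' -StorageAccount 'mystorageaccount' -Power 'stop'[/powershell]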

Moving VHDs from one Storage Account to Another (Part 1)

powershell.jpg

We are often asked to review a customer's Azure environment. Part of that includes reviewing specific applications for stability, availability, scalability and overall design. As customers move to Azure, they have to unlearn some of the best practices they've implemented in their on-premises environments. The cloud is different. Many customers' first experience with Azure is an IaaS experience, and even there, things are different. Features like Availability Sets, Storage Accounts, and Upgrade and Fault Domains come into play. Decisions about these features can have long-lasting effects on applications and environments. One relatively common scenario we see is that while deploying systems for an application, the Storage Account is ignored and all VMs of a specific role (or even all instances of an application) are placed in a single Storage Account. In fact, we see Availability Sets and Storage Accounts map 1 to 1 fairly regularly. We won't go into all of the issues you may run into with a single Storage Account (that's a whole other blog post), but will focus on the fact that it can become a single point of failure. If all the VMs of a specific role reside in a single Storage Account and something happens to that Storage Account, no more VMs. Availability Set be damned. Your VMs (and application) are offline.

Let's see how you can use PowerShell to move VHDs (OS and/or data disks) from one Storage Account to another.  Let's get started...

Once you have your PowerShell console up, you will have to log into Azure:

[powershell]Login-AzureRmAccount[/powershell]

Note that this may fail if you have never used your computer to connect to Azure using PowerShell. If that is the case, you will need to download the PublishSettings file from Azure and import it on your computer. Instructions on how to do this are found here.

At this point, you will be presented with an Azure login dialog, enter in your credentials and your PowerShell session will be connected to and authenticated against Azure.

Next, you'll need the Resource Group that the VM (and therefore VHD) resides in. You can find that in the Azure Portal by simply navigating to the Virtual Machines Blade:

ResourceGroupViaVMBlade

Let's store that in a variable:

[powershell]$RGname = "Default-Web-WestUS"[/powershell]

We'll also need the name of the VM:

[powershell]$vmname = "VM1"[/powershell]

Before we can move the VHD to the second Storage Account, the VM has to be turned off (we can't move the disk while it's being accessed). As you can see in the screenshot above, my VM is already stopped. The following command will stop the VM if it happens to be running:

[powershell]get-azurermvm -name $vmname -ResourceGroupName $RGname | stop-azurermvm[/powershell]
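Note that stop-azurermvm prompts for confirmation by default. If you are running this unattended, Stop-AzureRmVM supports a -Force switch (the same switch the start/stop script earlier in this post uses) to suppress the prompt:

[powershell]# Stop the VM without the confirmation prompt
get-azurermvm -name $vmname -ResourceGroupName $RGname | stop-azurermvm -Force[/powershell]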

The last VM-specific property we need is the name of the VHD. This can be found in the Storage Account Blade (Storage Accounts -> sourceStorageAccount -> Blobs -> ContainerName):

StorageAccounts

Blobs

Blob

Notice that the Container name is vhds, which is the default name when creating your first VM in the Storage Account. Yours might be different, so make a note of it.

Let's store the name of the VHD in a variable:

[powershell]$vhdName = "VM12016621122342.vhd"[/powershell]

Next, we'll need some information about the source and destination Storage Accounts:

  • Names of the Storage Accounts
  • Access Keys for the Storage Accounts
  • Names of the Containers storing the VHDs

The names of the Storage Accounts and the names of the Containers were shown in the previous screenshots. The Access Keys for the Storage Accounts are stored in the Settings blade for the Storage Account:

AccessKeys

Note: Keep these keys secret, as they can grant anyone access to your Storage Account. Microsoft recommends keeping one key for connections and regenerating the other. Be aware that if you regenerate a key, any application connecting to the Storage Account with that key will need to be updated with the newly regenerated value.
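You can also pull the keys with PowerShell rather than copying them out of the portal. A minimal sketch using the AzureRM module (the resource group name is an assumption, and the shape of the output changed across AzureRM versions, so inspect what your version returns):

[powershell]# Retrieve the keys for the source Storage Account
# ($RGname is assumed to be the resource group that holds the Storage Account)
Get-AzureRmStorageAccountKey -ResourceGroupName $RGname -Name "storageaccountmigration1"[/powershell]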

Now that we have this information, we can set up the next group of variables.  For the source Storage Account:

[powershell]$sourceSAName = "storageaccountmigration1"

$sourceSAKey = "InsertKeyHere"

$sourceSAContainerName = "vhds" [/powershell]

And for the destination Storage Account:

[powershell]$destinationSAName = "storageaccountmigration2"

$destinationSAKey = "InsertKeyHere"

$destinationContainerName = "vhds" [/powershell]

A Storage Context is needed for each of the Storage Accounts. These contexts will be used when copying the VHDs between the two Blob Storage locations (more information about the Storage Context commands can be found here):

[powershell]$sourceContext = New-AzureStorageContext -StorageAccountName $sourceSAName -StorageAccountKey $sourceSAKey

$destinationContext = New-AzureStorageContext -StorageAccountName $destinationSAName -StorageAccountKey $destinationSAKey [/powershell]
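With the contexts in hand, you can also confirm the VHD's blob name from PowerShell instead of the portal; a quick sketch using the variables defined above:

[powershell]# List the blobs in the source container to confirm the VHD name
Get-AzureStorageBlob -Container $sourceSAContainerName -Context $sourceContext | Select Name, Length[/powershell]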

Note: This script assumes that the destination container has already been created. If you need to create this container, you can either use the Portal, or the following PowerShell command:

[powershell]$destinationContainerName = "destinationvhds"

New-AzureStorageContainer -Name $destinationContainerName -Context $destinationContext[/powershell]
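If you're not sure whether the container already exists, a defensive variation (a sketch, not something the walkthrough requires) is to test for it first and create it only when the lookup comes back empty:

[powershell]# Create the destination container only if it doesn't already exist
if (-not (Get-AzureStorageContainer -Name $destinationContainerName -Context $destinationContext -ErrorAction SilentlyContinue)) {
    New-AzureStorageContainer -Name $destinationContainerName -Context $destinationContext
}[/powershell]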

The final step is to start the blob copy. For this, we use the Start-AzureStorageBlobCopy command.

[powershell]$blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainerName -DestContext $destinationContext -SrcBlob $vhdName -Context $sourceContext -SrcContainer $sourceSAContainerName [/powershell]

You will notice that executing this cmdlet returns the prompt after a few seconds. The blob copy is not actually done; it's just running in the background. The following command will show you its current status:

[powershell]

($blobCopy | Get-AzureStorageBlobCopyState).Status [/powershell]

The status will be Pending until the copy is completed. Alternatively, the following loop will show you the actual progress of the copy (refreshed every second):

[powershell]$TotalBytes = ($blobCopy | Get-AzureStorageBlobCopyState).TotalBytes

cls

while(($blobCopy | Get-AzureStorageBlobCopyState).Status -eq "Pending")

{

Start-Sleep 1

$BytesCopied = ($blobCopy | Get-AzureStorageBlobCopyState).BytesCopied

$PercentCopied = [math]::Round($BytesCopied/$TotalBytes * 100,2)

Write-Progress -Activity "Blob Copy in Progress" -Status "$PercentCopied% Complete:" -PercentComplete $PercentCopied

}[/powershell]

Progress

Once the copy is completed, the progress bar will disappear and you will be returned to the PowerShell prompt.

And the full script:

[powershell]# Login to Azure

Login-AzureRmAccount

# Set Resource Group Name

$RGname= "Default-Web-WestUS"

# Set VM Name

$vmname = "VM1"

# Stop VM

get-azurermvm -name $vmname -ResourceGroupName $RGname | stop-azurermvm

# Set name of VHD blob to copy

$vhdName = "VM12016621122342.vhd"

# Source Storage Account Information

$sourceSAName = "storageaccountmigration1"

$sourceSAKey = "1UzJEeop8MW/jOE5eX9ejilO1x6gwxxcMGIVdO36uchtwL128h3LzGQAt1CpFxs03E5FlGveCNkwhpvxQTCTTA=="

$sourceSAContainerName = "vhds"

# Destination Storage Account Information

$destinationSAName = "storageaccountmigration2"

$destinationSAKey = "dN6rMnqeUxkBkzpeOLS5wns6UJcL2zjGIj7cTGZ8if0ZNumyvrdDytW9LuiW6Qc/knkeoeTg+ejrFrHsmqzb4w=="

$destinationContainerName = "vhds"

# Source Storage Account Context

$sourceContext = New-AzureStorageContext -StorageAccountName $sourceSAName -StorageAccountKey $sourceSAKey

# Destination Storage Account Context

$destinationContext = New-AzureStorageContext -StorageAccountName $destinationSAName -StorageAccountKey $destinationSAKey

# Copy the blob

$blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainerName -DestContext $destinationContext -SrcBlob $vhdName -Context $sourceContext -SrcContainer $sourceSAContainerName [/powershell]

Note that you can also use the AzCopy command to perform some of these actions.
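For example, a single-blob copy with the classic Windows AzCopy utility might look like the sketch below. The account names, container name and VHD name are the placeholders from this post, the keys are placeholders, and you should double-check the switch syntax against your installed AzCopy version, as it changed substantially in later releases:

[powershell]AzCopy /Source:https://storageaccountmigration1.blob.core.windows.net/vhds `
       /Dest:https://storageaccountmigration2.blob.core.windows.net/vhds `
       /SourceKey:InsertKeyHere /DestKey:InsertKeyHere `
       /Pattern:VM12016621122342.vhd[/powershell]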

We've shown you how straightforward it is to move your VHDs from one Storage Account to another. In Part 2 (coming soon) we will look at two things: first, we'll automate the script to remove the hardcoded variables and make it simpler to select each of the properties needed for the VHD move; second, we'll look at actually creating the new VM with the disks in the new Storage Account.

Exporting Azure Resource Manager VM properties with PowerShell

powershell.jpg

On a recent project, I needed a list of all the VMs running in a subscription, along with some of each VM's properties. We had an Excel spreadsheet with all the VMs and properties, but going through that was a real pain.  So, I wrote a basic PowerShell script to collect the information I needed and figured I would share it. The script is pretty straightforward (the full script is at the end of the post) and does the following:

  • Logs into Azure
  • Gets all the Virtual Machines
  • Gets specific properties of each VM
  • Generates a Grid View with all the selected properties

Logging into Azure

Logging into Azure from PowerShell is a simple command:

[powershell]Login-AzureRmAccount[/powershell]

Note that this may fail if you have never used your computer to connect to Azure using PowerShell. If that is the case, you will need to download the PublishSettings file from Azure and import it on your computer. Instructions on how to do this are found here.

Once that command is executed, you'll be presented with the Azure Login page (as shown below). Simply log into Azure using your Azure credentials and your PowerShell session will be authenticated with your Azure account.

AzureLoginPage

If the login is successful, you'll get something similar to the screenshot below:

SuccessfulLogin

Get Virtual Machines

Next, let's get all the VMs in our subscription using the Get-AzureRmVM command. Once you run that command, you'll see a ton of information scroll by on the screen. To make it simpler, let's store the output in a variable:

[powershell]$RMVMs = Get-AzureRmVM[/powershell]

We'll use this object in the next section...

Get VM Properties

One of the most important properties we'll use is the name of the VM. If you look at the text that flew by earlier (or type $RMVMs to see it again), you'll notice all the properties that are available to you. One of them is Name:

VMName

We can display just the names of the VMs in the $RMVMs object by appending .Name on the end:

[powershell]$RMVMs.Name[/powershell]

The output will return just the names:

[powershell]VM1

VM2[/powershell]

Nested properties, such as OSType, are just as straightforward to get:

[powershell]$RMVMs.storageprofile.osdisk.ostype[/powershell]

Will return:

[powershell]Windows

Windows[/powershell]

Feel free to look through the available properties.  Next, we will display them in a Grid View.
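A quick way to explore what's on these objects is the built-in Get-Member cmdlet:

[powershell]# Explore the properties available on a VM object
$RMVMs[0] | Get-Member -MemberType Property[/powershell]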

Display in a Grid View

For the Grid View, we need an array with all the properties that we want to collect.  First, we need an empty array that we'll call $RMVMArray:

[powershell]$RMVMArray = @()[/powershell]

Next, we'll loop through each of the VMs in the $RMVMs object:

[powershell]foreach ($vm in $RMVMs) { ... }[/powershell]

And add some properties we want to see:

[powershell]foreach ($vm in $RMVMs) {
    # Generate Array
    $RMVMArray += New-Object PSObject -Property @{
        Name     = $vm.Name;
        Location = $vm.Location;
        OSType   = $vm.StorageProfile.OsDisk.OsType;
    }
}[/powershell]

Finally, we can display the Grid View:

[powershell]$RMVMArray | Out-Gridview[/powershell]

You'll get a Grid View similar to the one below:

GridView
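As an aside, the same Grid View can be produced without building the intermediate array, by using a calculated property on Select-Object; a minimal sketch:

[powershell]# Equivalent one-pipeline version using a calculated property
$RMVMs | Select-Object Name, Location,
    @{Name = 'OSType'; Expression = { $_.StorageProfile.OsDisk.OsType }} |
    Out-GridView[/powershell]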

Pulling it all together

Now that we have each of the pieces, let's pull them into a script that we can simply run every time we want a list of the VMs in our Subscription and their properties. In my case, I called the script GetRMVMProperties.ps1.  And the full code:

[powershell]# Log in to Azure
Login-AzureRmAccount

# Make sure there is at least one VM in the Subscription
$RMVMs = Get-AzureRmVM
if ($RMVMs.Count -eq 0) { Write-Warning "No VMs found in the subscription"; return }

# Create array to contain all the VMs in the subscription
$RMVMArray = @()

# Loop through VMs
foreach ($vm in $RMVMs) {
    # Get VM Status (for Power State)
    $vmStatus = Get-AzureRmVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status

    # Generate Array
    $RMVMArray += New-Object PSObject -Property @{
        # Collect Properties
        Name          = $vm.Name;
        PowerState    = (Get-Culture).TextInfo.ToTitleCase(($vmStatus.Statuses)[1].Code.Split("/")[1]);
        Location      = $vm.Location;
        Tags          = $vm.Tags;
        Size          = $vm.HardwareProfile.VmSize;
        ImageSKU      = $vm.StorageProfile.ImageReference.Sku;
        OSType        = $vm.StorageProfile.OsDisk.OsType;
        OSDiskSizeGB  = $vm.StorageProfile.OsDisk.DiskSizeGB;
        DataDiskCount = $vm.StorageProfile.DataDisks.Count;
        DataDisks     = $vm.StorageProfile.DataDisks;
    }
}

# Gridview output (assumes a single subscription; see Next Steps for handling several)
$Subscriptions = Get-AzureRmSubscription
$title = "VMs in the '{0}' Subscription" -f $Subscriptions.SubscriptionName
$RMVMArray | Sort-Object -Property Name | Out-GridView -Title $title[/powershell]

Next Steps

It actually took considerably longer to write this blog post than to write the script. The customer I wrote this script for has multiple subscriptions, so that's the next step: modify the script to automatically step through each subscription and generate a Grid View for each. A rough sketch of that loop is below.
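Here is a hedged starting point for that modification. The SubscriptionId and SubscriptionName property names match older AzureRM releases (newer versions renamed them to Id and Name), and the property list is trimmed for brevity:

[powershell]# Loop through every subscription the account can see (a sketch, not production code)
foreach ($sub in Get-AzureRmSubscription) {
    Select-AzureRmSubscription -SubscriptionId $sub.SubscriptionId | Out-Null

    $RMVMArray = @()
    foreach ($vm in Get-AzureRmVM) {
        $RMVMArray += New-Object PSObject -Property @{ Name = $vm.Name; Location = $vm.Location }
    }

    $title = "VMs in the '{0}' Subscription" -f $sub.SubscriptionName
    $RMVMArray | Sort-Object -Property Name | Out-GridView -Title $title
}[/powershell]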