Azure Stack

What Azure Stack is and what it isn't.


Boss: Management has been asking for that viability report about Azure Stack.
You: The release has been delayed a few times and it's been taking longer than I thought.
Boss: If they ask again, what should I tell them?
You: It's coming. Tell them it's a think piece about a high-level cloud organization struggling with its own limitations in delivering hybrid cloud in the harsh face of ever-evolving business demand.

The current deal

I'm guessing you've been running your own private cloud for a few years now, you've heard Azure Stack is almost here, and you're wondering if this is the solution to all your problems. Well, you could be right. Previously, you would start by ordering hardware from your favorite hardware vendor along with your choice of networking gear, or perhaps hope to save some money by reusing switches from the last project you finished. This is no longer the case. Over the last few years, as Azure Stack has gone from a rumor to a product, we have seen a lot of reactions to Microsoft's announcements about what Azure Stack is and what it isn't.

If you were thinking Azure Stack would be Windows Server 2016 running Hyper-V with some System Center components, a flashy Azure skin on top, ARM bolted on, and deployed in six hours with PDT v2, you'd be completely wrong. Mostly. It's true Hyper-V is there under the covers, and there is something called the ECE engine orchestrating and deploying it all. However, you should not be thinking about Azure Stack as individual software components you have to install and configure. What you're buying is 'Azure' in your own data center, and that requires a level of control and abstraction; it requires a change in the way we think about delivering service. Asked another way: is your organization ready to consume and deliver Azure-as-a-Service to itself?

Jumping over some hurdles

As the Stack vision gained clarity and more information was released, it became clear that traditional IT private cloud administration has a limited place in the Azure Stack story. This gradual release of specifications and functionality to conference rooms full of people received a mixed response. There was no cheering or applause; generally, it was met with confusion, mutterings throughout the room, and an outburst of confrontational commentary in online forums.

Originally it seemed like you could bring your own hardware; then there was a certain hardware requirement; and now there are certified vendors you have to purchase an Azure Stack stamp from (which, FYI, doesn't include Microsoft). On top of that (I'm sure to the dismay of every storage vendor), Storage Spaces Direct is currently the only storage option, meaning you cannot reuse that expensive SAN sitting in your data center humming away with all the deduplication, replication, pointer and snapshot technology you have come to rely on.

Whether this was down to a lack of understanding, a lack of maturity, or poor communication of a greater Microsoft vision, the reasons are up to you. You may feel this was unfair, not what you thought you wanted, or not the journey you had signed up for. Instead of complaining, I'd suggest you 'pray they don't alter the deal any further'. All that said, if you feel like you have been given the run-around, take a breath, step back from what you expected, and realize this is the new, agile Microsoft we demanded. Accept this is the new face of a vendor attempting to adapt, drive fundamental change, and evolve IT for the better. Have a look at the now almost fully formed Azure Stack offer and be prepared to jump over some hurdles to trust Azure Stack and call it your own cloud.

The black box

The first hurdle is understanding that Azure Stack is designed to be a black-box appliance. As an organization considering purchasing Stack, you should look at all the Azure Stack vendors, SKUs and offerings provided, and possibly look to build a new alliance. You are not buying hardware; you're buying an appliance and the service to support that appliance. The idea that you will be able to set up Azure Stack, log in and start moving sliders to change the cloud profile is something you will have to let go of. You cannot choose your oversubscription level. You cannot create your own custom VM sizes; VM sizes will be a limited subset based on existing Azure SKUs. You cannot log into the console of an Azure Stack node and browse around just to see how it works.

You can't walk into an Azure data center and log in to the console of an Azure node to check on the disk usage of your VMs, and this solution is no different. The entire system is secured, and even your Azure Stack vendor doesn't have the keys, so save your breath and don't ask. The service you are buying is Azure in your data center. What you should be asking is what services the vendor can provide: what else will I get when I purchase a shiny new Azure Stack stamp from you?

Cloud consistency

The true consistency hurdle: understanding what 'Azure consistency' really means. While many vendors have hybrid cloud stories, there is a big difference between a modern hybrid cloud and a traditional virtualized environment running in more than one place. Azure obviously has a massive amount of iron behind it, as well as multiple teams of developers creating resource providers and adding features to existing services. A substantial amount of time has been put into creating as-a-service offerings, and it seems almost every day something else pops up in the alerts informing you of new features and services.

For various reasons, the Azure codebase couldn't simply be scaled down and offered to clients as an on-premise solution. The key piece of this hybrid puzzle is ARM. We are told Stack uses the same code base for ARM as Azure, while many of the underlying services are seemingly having to be recreated in a way that can be consumed through ARM on Stack. Having Azure Stack does not mean you have every service available on-premise, nor should you expect to in time. Take Service Bus on Stack: there are questions about it on forums that are left unanswered, and there is currently no official announcement about Service Bus on Stack. It also does not mean that as you see a new feature released in Azure you will be able to go to Stack and deploy the same thing immediately.

Azure Stack will lag behind Azure, and currently the story for true feature parity is to apply policies that limit your Azure subscription to the level of Azure Stack. While you don't have to do this, you need to be aware of what is and isn't available and what is consistent between Azure and Azure Stack. However, just like Office 365, once you're on board you will receive access to new features as they are provided, without extra work. Microsoft's current goal is for Azure Stack to be around three months behind Azure in feature parity for the services Stack provides (currently it's lagging by around a year). Through Stack's patch and update (PUP) mechanism, you (or your Stack vendor) will be able to execute an update and receive new Stack services.
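As a sketch of that parity-limiting idea, Azure Policy can deny resource types outside an approved subset. The resource-type list below is purely illustrative (not an official parity list), and the names and scope are hypothetical; New-AzureRmPolicyDefinition and New-AzureRmPolicyAssignment are the standard AzureRM policy cmdlets.

[powershell]
# Illustrative only: deny anything outside a Stack-like subset of resource types
$PolicyRule = @"
{
  "if": {
    "not": {
      "field": "type",
      "in": [
        "Microsoft.Compute/virtualMachines",
        "Microsoft.Storage/storageAccounts",
        "Microsoft.Network/virtualNetworks"
      ]
    }
  },
  "then": { "effect": "deny" }
}
"@

$SubscriptionId = '<your Azure subscription id>'
$Definition = New-AzureRmPolicyDefinition -Name 'StackParity' -Policy $PolicyRule
New-AzureRmPolicyAssignment -Name 'StackParity' -PolicyDefinition $Definition `
    -Scope "/subscriptions/$SubscriptionId"
[/powershell]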

Expectations

The expectation hurdle: what are you expecting from Azure Stack? What is the problem or use case you are trying to solve? Perhaps it's a pitch for app modernization, except generally this is a massive task with limited ROI; the app is running, leave it where it is. Maybe you think this is a chance to migrate your existing VMs into Azure Stack in a classic lift and shift, consolidating old systems onto new. You may want to re-think this if it is the primary reason. How about the cloud-burst pitch, or using Stack as a DR or HA site? While many of these use cases make for a great presentation, you need to really walk through your specific use case and understand the gap you are trying to fill.

Microsoft's early guidance leaned toward Azure Stack being used by service providers and in extreme edge cases where customers could not completely use public Azure. However, while the idea that everything would move to the public cloud may work for a few consumers, unfortunately it is not the story for most. There are some scenarios where your current private cloud is probably going to do the job better than Azure Stack can today.

Gartner and other big think groups now seem to be mirroring these opinions, predicting most enterprise clients will have some on-premise cloud. Re-enter the hybrid cloud story. More and more organizations have reasons for running workloads on-premise that the public cloud may never solve: data sovereignty, latency issues, disconnected and highly secure environments, manned space flight to Mars, connectivity to locally positioned IoT devices; the list goes on. Every organization will need to evaluate the use case for Stack and check that it is fit for purpose.

A crisis of self

The existential hurdle, where everyone thinks their scenario is special. In this brave new world of cattle, not pets, you are not a special unique snowflake. Don't be offended; this doesn't mean they don't care about you, in fact quite the opposite. The public cloud has shown us that the economy of scale is real. They care enough to put a large amount of effort into ensuring Azure Stack will run your workloads as they run in Azure, and that these workloads can survive a variety of hardware failures. Azure Stack has a surprisingly large amount of redundancy built in, and the list of systemic failures it can sustain is impressive. The more nodes (and eventually regions and scale units) you have supporting Stack, the more resilient your on-premise Azure service becomes. This comes at a price, literally: why have one when you can have two at twice the cost? As architects, we have spent days and weeks designing and implementing redundant infrastructure, and months testing and maintaining it. Realize you are buying a service, a redundant cloud platform. Where would you prefer to spend your time? Trust the platform and you can focus on something else.

These choices about how Stack will be offered have been introduced (in my opinion) to save us from ourselves, and to truly bring the as-a-service story and consumption-based IT to the business. We had freedom with System Center 2012 and Windows Azure Pack, and those layers of complexity, if you could master them, provided great rewards. Unfortunately, there is a flip side to that coin: many such clouds, while very functional, have fallen short of their original dream.

The things you consider great about consuming Azure are now going to be available to you, but at a cost. The pitch is that the wizard will remain behind the curtain, and if you trust him he will keep your system running so you can focus on other things: moving up the layers of the maturity models and building out new, modern, highly resilient microservice applications. Perhaps venturing into the new frontier of DevOps is something you want to explore, utilizing continuous integration and continuous delivery to release code three or four times every day on Stack. There is still plenty of work to be done; it's just going to require a different set of skills to execute.

The 'Easy' button

As someone who has worked with the early releases since TP1, I can say it has been a long journey to get here, and Stack has come a long way. After the initial announcement of three vendors offering solutions, follow-up announcements show more vendors stepping into this space, striving to supply you a little piece of Azure you can call your own.

Azure Stack's goal is to be a turn-key appliance, but that does not mean it's an easy button. If your organization is truly a dynamic business on the maturity model, you may not even know you're consuming Stack. However, for most of us, deploying and integrating Stack into the organization will surely come with a new set of problems that you will need to work through with your staff, Stack vendor, end users and Microsoft.

It is a journey

With the shift in Microsoft's direction, releasing code and features more often, getting feedback from customers, and utilizing votes captured through UserVoice, Yammer, surveys and other avenues, more testing has been pushed back onto the community and customers. Microsoft teams are working directly with interested parties to develop technology that solves real business problems.

We are all now on a cloud journey of some sort, and Azure Stack will be no different. You can be as involved as you want to be: on the bleeding edge cutting your teeth on private releases, logging bugs and explaining what you actually need, or sitting back and waiting a few months for a well-tested GA release. As we have moved to the cloud our thinking has had to change; we do things differently, and this is another big step in that journey. If you are preparing to take the leap to Azure Stack, I believe the best advice can be summed up in a single quote: "Free your mind".

How to shut down and start an Azure Stack system.


For a couple of months I've been meaning to write about how to shut down an Azure Stack integrated system 'the right way'. Why? Because I had to turn off an instance a couple of months ago due to planned utility maintenance at the location hosting the appliance (it hosts pilot/demo kit only, so no need for generators), and I didn't want any issues with tenant workloads or S2D. Anyway, I don't need to detail the process now, as Microsoft have recently updated their documentation detailing the process (get it here).

Azure Stack TP3 Stability (Reboot the XRP VM)


If you have been deploying and using Azure Stack TP3, you may have noticed that after a few days the portal starts behaving slower, and in my experience after closer to a week it stops working altogether. This will vary depending on what you're doing and your hardware. Looking at the VM guests, you will notice that the XRP VM is consuming all its memory. While you could give the machine more memory, this messes with the expected infrastructure sizing, and eventually it will consume whatever memory you give it. This will hopefully be addressed soon. However, a simple workaround is to reboot the XRP VM, and why do anything manually when you can script it? This very simple script creates a scheduled task that runs on Sunday night at 1 am. The task stops and starts the XRP VM and then triggers the existing ColdStartMachine task, which makes sure all the Azure Stack services are running.
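To confirm you are actually hitting this issue before scheduling the reboot, you can compare the XRP VM's assigned memory with its current demand from the host. A quick sketch (MAS-Xrp01 is the TP3 single-node VM name; the calculated property names are my own):

[powershell]
# Show assigned memory vs. current demand for the XRP VM, in GB
Get-VM MAS-Xrp01 |
    Select-Object Name, State,
        @{N = 'AssignedGB'; E = { [math]::Round($_.MemoryAssigned / 1GB, 1) }},
        @{N = 'DemandGB';   E = { [math]::Round($_.MemoryDemand / 1GB, 1) }}
[/powershell]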

[powershell]
# Run on the host server as the AzureStackAdmin user
$AzureStackAdminPassword = 'YOURPASSWORD'

$Action = New-ScheduledTaskAction -Execute 'Powershell.exe' `
    -Argument '-command "Get-VM MAS-Xrp01 | Stop-VM -Force; Get-VM MAS-Xrp01 | Start-VM; Sleep 180; Stop-ScheduledTask ColdStartMachine; Start-ScheduledTask ColdStartMachine"'
$Trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 1am
Register-ScheduledTask -Action $Action -Trigger $Trigger -TaskName 'XRPReboot' `
    -Description 'Restart XRP VM weekly' -RunLevel Highest `
    -User "$env:USERDOMAIN\$env:USERNAME" -Password $AzureStackAdminPassword
[/powershell]


Publishing Microsoft Azure Stack TP3 on the Internet via NAT


As you may know, Azure Stack TP3 is here. This blog outlines how to publish your Azure Stack instance on the internet, using NAT rules to redirect your public IP addresses to the instance's internal 'external' IPs. Our group published another article on how to do this for TP2, and this is the updated version for TP3.

Starting point

This article assumes you have a host ready for installation, with the TP3 VHDX loaded onto the host, and that you are familiar with the Azure Stack installation process. The code in this article is extracted from a larger process, but it should be enough to get you through end to end.

Azure Stack installation

First things first, I like to install a few extra tools to help me edit code and access the portal; this is not required.

[powershell]
iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex
choco install notepadplusplus -y
choco install googlechrome -y --ignore-checksums
choco install visualstudiocode -y
choco install beyondcompare -y
choco install baretail -y
choco install powergui -y --ignore-checksums
[/powershell]

Next, you want to open up the file C:\clouddeployment\setup\DeploySingleNode.ps1.

Editing these values allows you to create different internal naming and external address space. As you can see, the ExternalDomainFQDN is made up of the region and the external suffix.

This is a lot easier now that the domain parameters are all drawn from the same place; there is no need to hunt down domain names in files.

[powershell]
$AdminPassword = 'SuperSecret!' | ConvertTo-SecureString -AsPlainText -Force
$AadAdminPass = 'SuperSecret!' | ConvertTo-SecureString -AsPlainText -Force
$aadCred = New-Object PSCredential('stackadmin@poc.xxxxx.com', $AadAdminPass)

. c:\clouddeployment\setup\InstallAzureStackPOC.ps1 -AzureEnvironment 'AzureCloud' `
    -AdminPassword $AdminPassword `
    -PublicVLanId 97 `
    -NATIPv4Subnet '172.20.51.0/24' `
    -NATIPv4Address '172.20.51.51' `
    -NATIPv4DefaultGateway '172.20.51.1' `
    -InfraAzureDirectoryTenantAdminCredential $aadCred `
    -InfraAzureDirectoryTenantName 'poc.xxxxx.com' `
    -EnvironmentDNS '172.20.11.21'
[/powershell]

Remember to have only one NIC enabled. We also had slightly less than the minimum space required for the OS disk, so we simply edited the XML file at C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml and changed the value of the node Role.PrivateInfo.ValidationRequirements.MinimumSizeOfSystemDiskGB. The rest is over to the TP3 installation. So far our experience is that TP3 is much more stable to install, with just the occasional rerun using

[powershell]InstallAzureStackPOC.ps1 -rerun[/powershell]
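If you would rather script the OS-disk size tweak than hand-edit the XML, here is a minimal sketch. It assumes the file's root element is Role, per the node path given above, and 120 is just an example value in GB:

[powershell]
# Sketch: lower the minimum OS-disk size check before running the installer
$XmlPath = 'C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml'
[xml]$Role = Get-Content $XmlPath
$Role.Role.PrivateInfo.ValidationRequirements.MinimumSizeOfSystemDiskGB = '120'
$Role.Save($XmlPath)
[/powershell]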

Once the installation completes, obviously check that you can access the portal. I use Chrome, as it asks a lot fewer questions to confirm the portal is running. We use a JSON file, defined by a larger automation script, to deploy these NAT rules. Here I will simply share a portion of the resulting JSON file, which is saved to C:\CloudDeployment\Setup\StackRecord.json.

[code]
{
  "Region": "SV5",
  "ExternalDomain": "AS01.poc.xxxxx.com",
  "nr_Table": "192.168.102.2:80,443:172.20.51.133:3x.7x.xx5.133",
  "nr_Queue": "192.168.102.3:80,443:172.20.51.134:3x.7x.xx5.134",
  "nr_blob": "192.168.102.4:80,443:172.20.51.135:3x.7x.xx5.135",
  "nr_adfs": "192.168.102.5:80,443:172.20.51.136:3x.7x.xx5.136",
  "nr_graph": "192.168.102.6:80,443:172.20.51.137:3x.7x.xx5.137",
  "nr_api": "192.168.102.7:443:172.20.51.138:3x.7x.xx5.138",
  "nr_portal": "192.168.102.8:13011,30015,13001,13010,13021,13020,443,13003,13026,12648,12650,12499,12495,12647,12646,12649:172.20.51.139:3x.7x.xx5.139",
  "nr_publicapi": "192.168.102.9:443:172.20.51.140:3x.7x.xx5.140",
  "nr_publicportal": "192.168.102.10:13011,30015,13001,13010,13021,13020,443,13003,12495,12649:172.20.51.141:3x.7x.xx5.141",
  "nr_crl": "192.168.102.11:80:172.20.51.142:3x.7x.xx5.142",
  "nr_extensions": "192.168.102.12:443,12490,12491,12498:172.20.51.143:3x.7x.xx5.143"
}
[/code]

This is consumed by the following script, also saved to the setup folder.

[powershell]
param (
    $StackBuildJSONPath = 'C:\CloudDeployment\Setup\StackRecord.json'
)

$server = 'mas-bgpnat01'
$StackBuild = Get-Content $StackBuildJSONPath | ConvertFrom-Json

# Adds an external address to the NAT on the BGPNAT VM
[scriptblock]$ScriptBlockAddExternal = {
    param($ExIp)
    $NatSetup = Get-NetNat
    Write-Verbose "Adding External Address $ExIp"
    Add-NetNatExternalAddress -NatName $NatSetup.Name -IPAddress $ExIp -PortStart 80 -PortEnd 63356
}

# Adds a static port mapping ($NatSetup persists in the session from the first script block)
[scriptblock]$ScriptblockAddPorts = {
    param(
        $ExIp,
        $NatPort,
        $InternalIp
    )
    Write-Verbose "Adding NAT Mapping $($ExIp):$($NatPort)->$($InternalIp):$($NatPort)"
    Add-NetNatStaticMapping -NatName $NatSetup.Name -Protocol TCP -ExternalIPAddress $ExIp -InternalIPAddress $InternalIp -ExternalPort $NatPort -InternalPort $NatPort
}

# Parse the nr_* entries (Internal:Ports:External:PublicIP) into rule objects
$NatRules = @()
$NatRuleNames = ($StackBuild | Get-Member | ? { $_.Name -like 'nr_*' }).Name
foreach ($NATName in $NatRuleNames) {
    $NatRule = '' | select Name, Internal, External, Ports
    $NatRule.Name = $NATName.Replace('nr_', '')
    $rules = $StackBuild.($NATName).split(':')
    $NatRule.Internal = $rules[0]
    $NatRule.External = $rules[2]
    $NatRule.Ports = $rules[1]
    $NatRules += $NatRule
}

$session = New-PSSession -ComputerName $server

foreach ($NatRule in $NatRules) {
    Invoke-Command -Session $session -ScriptBlock $ScriptBlockAddExternal -ArgumentList $NatRule.External
    $NatPorts = $NatRule.Ports.Split(',').Trim()
    foreach ($NatPort in $NatPorts) {
        Invoke-Command -Session $session -ScriptBlock $ScriptblockAddPorts -ArgumentList $NatRule.External, $NatPort, $NatRule.Internal
    }
}

Remove-PSSession $session
[/powershell]

Next, you need to publish your DNS records. You can do this by hand if you know your NAT mappings; as a reference, you can open up the DNS server on MAS-DC01.

However, here are some scripts I have created to help automate this process. I normally run this from another machine, but I have edited it to run in the context of the Azure Stack host. First, we need a couple of reference files.

DNSMappings C:\clouddeployment\setup\DNSMapping.json

[code]
[
  { "Name": "nr_Table",        "A": "*",            "Subdomain": "table",      "Zone": "RegionZone.DomainZone" },
  { "Name": "nr_Queue",        "A": "*",            "Subdomain": "queue",      "Zone": "RegionZone.DomainZone" },
  { "Name": "nr_blob",         "A": "*",            "Subdomain": "blob",       "Zone": "RegionZone.DomainZone" },
  { "Name": "nr_adfs",         "A": "adfs",         "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_graph",        "A": "graph",        "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_api",          "A": "api",          "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_portal",       "A": "portal",       "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_publicapi",    "A": "publicapi",    "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_publicportal", "A": "publicportal", "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_crl",          "A": "crl",          "Subdomain": "RegionZone", "Zone": "DomainZone" },
  { "Name": "nr_extensions",   "A": "*",            "Subdomain": "vault",      "Zone": "RegionZone.DomainZone" },
  { "Name": "nr_extensions",   "A": "*",            "Subdomain": "vaultcore",  "Zone": "RegionZone.DomainZone" }
]
[/code]

ExternalMapping C:\clouddeployment\setup\ExternalMapping.json. This is a smaller set containing only the NAT mappings referenced in this example.

[code]
[
  { "External": "3x.7x.2xx.133", "Internal": "172.20.51.133" },
  { "External": "3x.7x.2xx.134", "Internal": "172.20.51.134" },
  { "External": "3x.7x.2xx.135", "Internal": "172.20.51.135" },
  { "External": "3x.7x.2xx.136", "Internal": "172.20.51.136" },
  { "External": "3x.7x.2xx.137", "Internal": "172.20.51.137" },
  { "External": "3x.7x.2xx.138", "Internal": "172.20.51.138" },
  { "External": "3x.7x.2xx.139", "Internal": "172.20.51.139" },
  { "External": "3x.7x.2xx.140", "Internal": "172.20.51.140" },
  { "External": "3x.7x.2xx.141", "Internal": "172.20.51.141" },
  { "External": "3x.7x.2xx.142", "Internal": "172.20.51.142" },
  { "External": "3x.7x.2xx.143", "Internal": "172.20.51.143" }
]
[/code]

Bringing it all together with this script:

[powershell]
param (
    $StackJSONPath = 'c:\clouddeployment\setup\StackRecord.json'
)

$stackRecord = Get-Content $StackJSONPath | ConvertFrom-Json
$DNSMappings = Get-Content c:\clouddeployment\setup\DNSMapping.json | ConvertFrom-Json
$ExternalMapping = Get-Content c:\clouddeployment\setup\ExternalMapping.json | ConvertFrom-Json

# Build a record object for each mapping, resolving the external IP and zone names
$DNSRecords = @()
foreach ($DNSMapping in $DNSMappings) {
    $DNSRecord = '' | select Name, A, IP, Subdomain, Domain
    $DNS = $stackRecord.($DNSMapping.Name).split(':')
    $DNSRecord.IP = ($ExternalMapping | ? { $_.Internal -eq $DNS[2] }).External
    $DNSRecord.Name = $DNSMapping.Name
    $DNSRecord.A = $DNSMapping.A
    $DNSRecord.Subdomain = $DNSMapping.Subdomain.Replace('RegionZone', $stackRecord.Region.ToLower()).Replace('DomainZone', $stackRecord.ExternalDomain.ToLower())
    $DNSRecord.Domain = $DNSMapping.Zone.Replace('RegionZone', $stackRecord.Region.ToLower()).Replace('DomainZone', $stackRecord.ExternalDomain.ToLower())
    $DNSRecords += $DNSRecord
}
# Here you can use this array to do what you need; two examples follow

# CSV host file for import
$DNSRecords | select A, IP, Subdomain, Domain | ConvertTo-Csv -NoTypeInformation | Set-Content c:\clouddeployment\setup\DNSRecords.csv

$SubDomains = $DNSRecords | group Subdomain
foreach ($SubDomain in ($SubDomains | Where { $_.Name -ne '' })) {
    Write-Output ("Records for " + $SubDomain.Name)
    foreach ($record in $SubDomain.Group) {
        # Initialize
        $resourceAName = $record.A
        $PublicIP = $record.IP
        $resourceSubDomainName = $record.Subdomain
        $zoneName = $record.Domain
        $resourceName = $resourceAName + "." + $resourceSubDomainName + "." + $zoneName

        Write-Output ("Record for $resourceName")
        # Create individual DNS records here
    }
}
[/powershell]

The array will give you the records you need to create.
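Where the 'create individual DNS records here' placeholder sits in the loop, you could create the records directly with the DnsServer module. A sketch, assuming it runs on (or remotes to) the DNS server such as MAS-DC01 and that the target zones already exist:

[powershell]
# Hypothetical example: create one A record per loop entry,
# e.g. *.table.sv5.as01.poc.xxxxx.com -> its public IP
Add-DnsServerResourceRecordA -ZoneName $zoneName `
    -Name "$resourceAName.$resourceSubDomainName" `
    -IPv4Address $PublicIP
[/powershell]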

All things being equal and a little bit of luck...

To access this external Azure Stack instance via PowerShell, you will need a few details and IDs. Most of this is easy enough; however, to get your $EnvironmentID from the deployment host, open C:\EceStore\ and find your deployment XML (approx. 573 KB). Inside this file, search for 'DeploymentGuid'; this is your environment ID. Or you can run this code on the host (you may need to change the $DeploymentFile parameter):

[powershell]
param (
    $DeploymentFile = 'C:\EceStore\403314e1-d945-9558-fad2-42ba21985248\80e0921f-56b5-17d3-29f5-cd41bf862787'
)

[Xml]$DeploymentStore = Get-Content $DeploymentFile | Out-String
$InfraRole = $DeploymentStore.CustomerConfiguration.Role.Roles.Role | ? Id -eq Infrastructure
$BareMetalInfo = $InfraRole.Roles.Role | ? Id -eq BareMetal | Select -ExpandProperty PublicInfo
$PublicInfoRoles = $DeploymentStore.CustomerConfiguration.Role.Roles.Role.Roles.Role | Select Id, PublicInfo | Where-Object PublicInfo -ne $null
$DeploymentDeets = @{
    DeploymentGuid = $BareMetalInfo.DeploymentGuid
    IdentityApplications = ($PublicInfoRoles.PublicInfo | ? IdentityApplications -ne $null | Select -ExpandProperty IdentityApplications | Select -ExpandProperty IdentityApplication | Select Name, ResourceId)
    VIPs = ($PublicInfoRoles.PublicInfo | ? Vips -ne $null | Select -ExpandProperty Vips | Select -ExpandProperty Vip)
}
$DeploymentDeets.DeploymentGuid
[/powershell]

Plug all the details into this connection script to access your Stack instance. Well-commented code, credit to Chris Speers.

[powershell]
# Random per install
$EnvironmentID = 'xxxxxxxx-xxxx-4e03-aac2-6c2e2f0a517a'
# The DNS domain used for the install
$StackDomain = 'sv5.as01.poc.xxxxx.com'
# The AAD domain name (e.g. bobsdomain.onmicrosoft.com)
$AADDomainName = 'poc.xxxxx.com'
# The AAD tenant ID
$AADTenantID = 'poc.xxxxx.com'
# The username to be used
$AADUserName = 'stackadmin@poc.xxxxx.com'
# The password to be used
$AADPassword = 'SuperSecret!' | ConvertTo-SecureString -Force -AsPlainText
# The credential to be used. Alternatively, use Get-Credential
$AADCredential = New-Object PSCredential($AADUserName, $AADPassword)
# The AAD application resource URI
$ApiAADResourceID = "https://api.$StackDomain/$EnvironmentID"
# The ARM endpoint
$StackARMUri = "https://api.$StackDomain/"
# The gallery endpoint
$StackGalleryUri = "https://portal.$($StackDomain):30016/"
# The OAuth redirect URI
$AadAuthUri = "https://login.windows.net/$AADTenantID/"
# The MS Graph API endpoint
$GraphApiEndpoint = "graph.$($StackDomain)"

$ResourceManager = "https://api.$($StackDomain)/$($EnvironmentID)"
$Portal = "https://portal.$($StackDomain)/$($EnvironmentID)"
$PublicPortal = "https://publicportal.$($StackDomain)/$($EnvironmentID)"
$Policy = "https://policy.$($StackDomain)/$($EnvironmentID)"
$Monitoring = "https://monitoring.$($StackDomain)/$($EnvironmentID)"

# Add the Azure Stack environment, removing any stale definition first
Get-AzureRmEnvironment -Name 'Azure Stack AS01' | Remove-AzureRmEnvironment
Add-AzureRmEnvironment -Name 'Azure Stack AS01' `
    -ActiveDirectoryEndpoint $AadAuthUri `
    -ActiveDirectoryServiceEndpointResourceId $ApiAADResourceID `
    -ResourceManagerEndpoint $StackARMUri `
    -GalleryEndpoint $StackGalleryUri `
    -GraphEndpoint $GraphApiEndpoint

# Add the environment to the context using the credential
$env = Get-AzureRmEnvironment -Name 'Azure Stack AS01'
Add-AzureRmAccount -Environment $env -Credential $AADCredential -Verbose
Login-AzureRmAccount -EnvironmentName 'Azure Stack AS01'

Get-AzureRmContext
Write-Output "ResourceManager"
Write-Output $ResourceManager
Write-Output "`nPortal"
Write-Output $Portal
Write-Output "`nPublicPortal"
Write-Output $PublicPortal
Write-Output "`nPolicy"
Write-Output $Policy
Write-Output "`nMonitoring"
Write-Output $Monitoring
[/powershell]
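Once logged in, a quick sanity check confirms that ARM on Stack is answering. A sketch: the resource group name is arbitrary, and the location should match the region name from the install (SV5 in this example):

[powershell]
# List resource providers visible through the Stack ARM endpoint,
# then create a throwaway resource group as a write test
Get-AzureRmResourceProvider | Select-Object ProviderNamespace, RegistrationState
New-AzureRmResourceGroup -Name 'ConnectivityTest' -Location 'SV5'
[/powershell]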

Returning something like this.

Thanks for reading. Hopefully this helped you in some way.