
Reading Notes #361










Books



TED Talks: The Official TED Guide to Public Speaking

Author: Chris J. Anderson

Fantastic book that covers a lot of topics related to presenting. It smartly covers the before, the during, and even the after. There are no recipes here, and that is exactly what makes this book so great. A must.













Reading Notes #354





Books

Extreme Ownership: How U.S. Navy SEALs Lead and Win (Jocko Willink, Leif Babin) - Very interesting book. Yes, it contains a lot of battle details, and at first I was not sure, but then everything "fell" into place once you understand what each story was "demonstrating." It also contains more business-focused examples. Everything is very clear and well explained in plain English.










What happens when you mix Asp.Net Core, different versions of Docker and Azure for the first time

For a project I have, I wanted to validate whether containers were easier to use compared to regular code with services in Azure. I needed to refresh my Docker skills, so I decided to do what I thought would be a simple test: create an ASP.NET Core website in a container and access it on my machine.

This post is about my journey to finally achieve this goal; as you may guess, it didn't work on the first attempt.

The Goal


One reason I was looking at containers is that they are supposed to work everywhere, right? Well, yes, but sometimes with a little effort. The goal here is to be able to run the same container on my main PC, my Surface, a Linux VM and, of course, in Azure.

The context


I have a different setup on my main machine and on my Surface. On my PC, I'm using VirtualBox for my VMs, so I'm not running Docker for Windows but Docker Toolbox. This flavor (an older version) of Docker will create a VM in VirtualBox instead of Hyper-V. I couldn't use Docker for Windows like on my Surface, because the two virtualization tools don't run side by side.

I also wanted to use only tools available on each of these platforms, so I decided not to use the Visual Studio IDE (the big one). Moreover, I wanted to understand what was happening, so I didn't want too much magic involved. Visual Studio is a fantastic tool and I love it. :)

Installing Docker


I needed to install Docker on my Surface. I downloaded Docker Community Edition (CE), and because Hyper-V was already installed, everything ran smoothly. On Windows, you need to share the "C" drive from the Docker settings. However, I was getting a strange "bug" when trying to share mine. It was asking me to log in with AzureAD and then ignoring my request, leaving the shared drive unchecked.

Thanks to my new friend Tom Chantler, I didn't search for too long. See, the thing is I'm using an AzureAD account to log in, and something is not working right at the moment. As explained in Tom's post, Sharing your C drive with Docker for Windows when using Azure Active Directory, to work around this situation, I only had to create a new user account with the exact same name as my AzureAD account, but without the AzureAD prefix (ex: AzureAD\FBoucher became FBoucher). Once that was done, I could share the drive without any issue.

Let's get started with the documentation


The HelloWorld container worked like a charm, so I was ready to create my ASP.NET Core website. My reflex was to go to docs.docker.com and follow the instructions from Create a Dockerfile for an ASP.NET Core application. I was probably doing something wrong, because it didn't work. So I decided to start from scratch and do every step manually... I always learn more that way.

Let's start at the beginning


Before moving everything into a container, we need a web application. This can easily be done from the terminal/command prompt with the following commands:

dotnet new mvc -o dotnetcoredockerappservicedemo

cd dotnetcoredockerappservicedemo

dotnet restore

dotnet publish -c release -o app/ .

Here we create a new folder with a website using the mvc template. We then go into that new folder and restore the NuGet packages. To test the website locally, simply use dotnet run. And finally, we build and publish the application into the subfolder app.
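If you want to double-check the site before containerizing it, a quick local test could look like this (a small sketch; port 5000 is Kestrel's default for ASP.NET Core 2.1 and may differ on your setup):

# run the website locally, from the project folder
dotnet run

# from a second terminal (or a browser), validate that it answers
curl http://localhost:5000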


Moving to Docker


Now that we have our app, it's time to containerize it. We need to add some Docker instructions in a dockerfile. Add a new file named dockerfile (no extension) to the root folder and copy/paste these commands:

# dockerfile

FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY /app /app
ENTRYPOINT [ "dotnet", "dotnetcoredockerappservicedemo.dll" ]

To start Docker with Docker Toolbox, just launch the Docker Quickstart Terminal.
These instructions specify how to build our container. First, it will download the image microsoft/dotnet:2.1-aspnetcore-runtime. We specify the work directory, then copy the app folder to the app folder inside the container. Finally, we specify the entry point of our application, telling it to start with dotnet.
Like Git and its .gitignore file, Docker has the same thing with .dockerignore (no extension). Add that file into your folder to ignore the bin and obj folders.


# .dockerignore
bin\
obj\

Now that the instructions for building our container are complete, we can build it. Execute the following command:

docker build -t dotnetcoredockerappservicedemo .

This will build dotnetcoredockerappservicedemo from the current folder.

Running Docker container locally


Everything is in place; the only thing missing is to run it. To run it locally, just use this command:

docker run -p 8181:80 dotnetcoredockerappservicedemo

On my machine, port 80 is always in use, so I remapped port 80 to 8181; feel free to change it at your convenience. The website will be available at localhost:8181.
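To quickly confirm the container is up, a couple of commands can help (using curl here is my assumption; a browser works just as well):

# list the running containers to confirm yours started
docker ps

# request only the headers; an HTTP 200 means the site is up
curl -I http://localhost:8181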

If you are running Docker Toolbox (the older version of Docker), you need the IP of your VM instead of localhost. To get it, run:

docker-machine ip

Running in the cloud


To run our container in Azure, you first need to publish it to the cloud, either to Docker Hub or to a private registry in Azure. I decided to go with Azure. First, we need to create a registry, then publish our container.

az group create --name dotnetcoredockerappservicedemo --location eastus

az acr create --resource-group dotnetcoredockerappservicedemo --name frankContainerDemo01 --sku Basic --admin-enabled true

az acr credential show -n frankContainerDemo01

The last command, az acr credential show, provides the information needed to tag our container with our repository name and also gives us the credentials to be able to push. Of course, you could also go to portal.azure.com and get the information from the registry's Access keys blade.

docker tag dotnetcoredockerappservicedemo frankcontainerdemo01.azurecr.io/dotnetcoredockerappservicedemo:v1

Let's connect our Docker client to our registry, and then push (upload) our container to Azure.


# The https:// is important...

docker login https://frankcontainerdemo01.azurecr.io -u frankContainerDemo01 -p <Password_Retrieved>

docker push frankcontainerdemo01.azurecr.io/dotnetcoredockerappservicedemo:v1


Great, the container is in Azure. Now let's create a quick web app to see it. We could also use Azure Container Instances (ACI), which would be only one command, but because the demo is a website, it wouldn't make sense to use ACI for that.
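For the curious, that single ACI command would look something like this (a sketch only; the container name and DNS label are assumptions, and the label must be unique within the region):

az container create -g dotnetcoredockerappservicedemo -n demoaci --image frankcontainerdemo01.azurecr.io/dotnetcoredockerappservicedemo:v1 --registry-username frankContainerDemo01 --registry-password <Password_Retrieved> --dns-name-label frankcontainerdemo01 --ports 80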

To get an App Service, we need a service plan, and then we will create an "empty" web app, specifying the runtime without providing any code/binary/container. I wasn't able to create a web app from a private Azure registry in one command, so this is why I'm doing it in two.

az appservice plan create --name demoplan --resource-group dotnetcoredockerappservicedemo --sku S1 --is-linux

az webapp create -g dotnetcoredockerappservicedemo -p demoplan -n frankdockerdemo --runtime "DOTNETCORE|2.1"

On Windows, I got the following error message: '2.1' is not recognized as an internal or external command, operable program or batch file. The PowerShell stop-parsing symbol "--%" solves the problem: az --% webapp create -g dotnetcoredockerappservicedemo -p demoplan -n frankdockerdemo --runtime "DOTNETCORE|2.1"

If you check the website right now, you should see a page saying that the site is up but empty. Let's update the container settings with our registry and container information.

az webapp config container set -n frankdockerdemo -g dotnetcoredockerappservicedemo --docker-custom-image-name frankcontainerdemo01.azurecr.io/dotnetcoredockerappservicedemo:v1 --docker-registry-server-url https://frankcontainerdemo01.azurecr.io --docker-registry-server-user frankContainerDemo01 --docker-registry-server-password <Password_Retrieved>

It works, of course!


Conclusion


It's only four steps: create the .NET Core application, package it into a Docker container, publish the container into our Azure registry, and create an App Service based on that container. However, because all these technologies are cross-platform, you sometimes get tiny differences between the platforms, and those can become time-consuming. It was a great little project that turned out to be a lot more than expected, but I learned so much!

I'm very happy with the result... expect more Docker in the future!


In a video, please!


I also have a video of this post if you prefer.





How to create a static website on Azure Storage

I have been waiting for this feature for so long! I know, it's not a major feature, but it fills an important gap in the Azure offering. We can now create static websites in Azure Blob Storage (as I'm writing this post, the service is still in preview). In this post, I will explain why I think it's really good news and show how to create and publish a static website.

Why It's Awesome News


The cloud is the perfect place when you need to build something huge very quickly. It's also an excellent solution when the number of resources you need varies a lot. Because Azure is a service, it will provide you with as many resources as you would like in a few minutes. And when you are done with the resources, you stop paying for them; it's really great like that!
However, if the only thing you need is to host a little something like a blog or a small website for an event or some temporary publicity, Azure was not the best place for it. I mean, yes, of course, you could build a service and host many little websites on it (Scott Hanselman has excellent posts about that, like this one), but it always felt a bit overkill for most users. Some people kept an "old style" hosting provider just for that. I mean, it's fine, it works... but with Azure Storage, it will be really reliable, and at a lower cost! Let's see how we can create one.

Create a Static Website


To get the static website feature, you need to create an Azure Blob Storage account the same way you created them before; however, it needs to be of kind General Purpose V2 (GPV2). Today, if you install the Azure CLI Storage-extension Preview, you can use it to create one, or simply go to portal.azure.com. Let's use the portal since it's more visual.

Once the storage is created, open it. In the left menu of the storage blade, click on the Static website (preview) option. That will open the configuration page for our static website. First, click the Enabled button, then enter the initial/index document name (ex: index.html). Finally, click the Save button at the top of the blade.

The shell for our website is now created, and a new Azure Blob Storage container named $web has been created. The primary and secondary endpoints should now be displayed (ex: https://frankdemo.z13.web.core.windows.net/). If you test this URL, you will see a message saying that the content doesn't exist... and that's normal.
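You can verify the endpoint from any terminal; as long as the $web container is empty, a 404 is the expected answer (the URL below is my demo endpoint; yours will differ):

curl -I https://frankdemo.z13.web.core.windows.net/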


Create some content


This is the part where it all depends on your needs. You may already have some HTML pages ready, you may want to code them all yourself, or the website may already exist. For this post, I will create a brand-new blog using a static website generator named Wyam (if you would like to see how to do it with Jekyll, another generator, I use it in the video).
To create a new template with Wyam, use the following command in a command prompt. That will create a new website in the subfolder output.
wyam --recipe Blog --theme CleanBlog

Publish to Azure


It's now time to upload our content to Azure Blob Storage. The easiest way is probably directly from the portal. To upload a file, click on the $web container, then the Upload button. From the new form, select the file and upload it.

The main problem with this method is that it only works one file at a time... and a website usually has many files.
A more efficient way would be to use Azure Storage Explorer or some script. Azure Storage Explorer doesn't yet support Azure Storage static websites, but it will soon. So that leads us to scripts or command lines.

AzCopy


I really like AzCopy as it's very efficient and easy to use. Unfortunately, as I'm writing this post, AzCopy doesn't support Azure Storage static websites either. I tried to upload all the content from the output folder (and subfolders) with a command like this, but it fails.
azcopy --source ./output --destination https://frankdemo.blob.core.windows.net/$web --dest-key fec1acb473aa47cba3aa77fa6ca0c5fdfec1acb473aa47cba3aa77fa6ca0c5fd== --recursive

Azure CLI


An Azure CLI extension preview is also available. As I mentioned previously, the extension gives you the possibility to create a static website or update its configuration. To upload files you have two options: the batch upload is more efficient, of course, but the file-by-file option also works. Thanks to Carl-Hugo (@CarlHugoM) for your help with those commands.


az storage blob upload-batch -s "./output" -d '$web' --account-key fec1acb473aa47cba3aa77fa6ca0c5fdfec1acb473aa47cba3aa77fa6ca0c5fd== --account-name frankdemo

az storage blob upload -f "./output/index.html" -c '$web' -n index.html --account-key fec1acb473aa47cba3aa77fa6ca0c5fdfec1acb473aa47cba3aa77fa6ca0c5fd== --account-name frankdemo

Visual Studio Code Azure Storage Extension

I finally tried the Visual Studio Code Azure Storage extension. After installing it, you need to add a user setting (Ctrl + ,): add "azureStorage.preview.staticWebsites": true to your configuration. Now you just need to click on the extension, select your Azure Blob storage from your subscription, and right-click to upload a folder.

Depending on the number of files and their sizes, it will take a moment. VSCode will notify you when it's done. You will then be able to go back to your browser and refresh your website to see the result.


Conclusion


I'm very happy to see this feature because it fills a need that was not really covered yet by the Microsoft offering. Right now, it's an early preview, so even if the service is very stable, not all the tools support it, but that's only temporary. Right now you can set your custom domain name; however, HTTPS is not supported.
So what do we do with it? Should we wait or jump right in? Well, as the best practices imply, when a feature is in preview, don't put your core business on it yet. If you are just looking to build a personal website or a little promo site, then... enjoy!

In video, please!


I also have a video of this post if you prefer.








How to Deploy your Azure Functions Faster and Easily with Zip Push

Azure Functions are great. I used to do a lot of "csx" versions (the C# scripted version), but more recently I switched to the compiled version, and I definitely love it! However, I was looking for a way to keep my deployment short and sweet, because sometimes I don't have time to set up a "big" CI/CD pipeline, or simply because sometimes I'm not the one doing the deployment... In those cases, I need a simple script that will deploy everything! In this post, I will share with you how you can deploy everything with one easy script.

The Context


In this demo, I will deploy a simple C# (full .NET Framework) Azure Function. I will create the Azure Function App and storage using an Azure Resource Manager (ARM) template and deploy with a method named Zip push or ZipDeploy. All the code, scripts, and templates are available on my GitHub.

The Azure Functions Code


The Azure Function doesn't have to be special, and it can be in any language supported by Azure Functions. Simply to show you everything, here is the code of my function.


namespace AzFunctionZipDeploy
{
    public static class Function1
    {
        [FunctionName("GetTopRunner")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            string top = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "top", true) == 0)
                .Value;

            if (top == null)
            {
                dynamic data = await req.Content.ReadAsAsync<object>();
                top = data?.top;
            }

            return top == null
                ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a number to get your top x runner on the query string or in the request body")
                : req.CreateResponse(HttpStatusCode.OK, new { message = $"Hello, here is your Top {top} runners", runners = A.ListOf(int.Parse(top)) });
        }
    }

    class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int Age { get; set; }
    }
}

It's a really simple function that returns a list of Person objects generated on the fly. The list will contain as many persons as the number passed as a parameter. I'm using the very useful GenFu library, from my buddies the ASP.NET Monsters.

The only thing we need to do is to create a compressed (zip) file that contains everything our project requires.


In this case, that's the project file (AzFunction-ZipDeploy.csproj), the function's code (Function1.cs), the host configuration (host.json) and the local settings of our function (local.settings.json).
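If you prefer the command line for the packaging, something like this should produce the same archive (a sketch using the zip utility; on Windows, PowerShell's Compress-Archive is an equivalent):

# create the archive in the subfolder zip, as expected by the deployment script
zip -r ./zip/AzFunction-ZipDeploy.zip AzFunction-ZipDeploy.csproj Function1.cs host.json local.settings.json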

The ARM template


For this demo, we need one Azure Function App. I will use a template that is part of the Azure Quickstart Templates. A quick look at the azuredeploy.parameters.json file and we see that the only parameter we really need to set is the name of our application.


{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "appName": {
        "value": "zipdeploydemo"
        }
    }
}

To be able to ZipDeploy, we need to add one application setting to let the Kudu interface know that we need its help to compile our code. To do that, let's open azuredeploy.json and go to the appSettings section. We need to add a new setting named SCM_DO_BUILD_DURING_DEPLOYMENT and set it to true. After adding it, the section should look like this (see the last one... that's our new one):


"appSettings": [
    {
    "name": "AzureWebJobsDashboard",
    "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2015-05-01-preview').key1)]"
    },
    {
    "name": "AzureWebJobsStorage",
    "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2015-05-01-preview').key1)]"
    },
    {
    "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
    "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2015-05-01-preview').key1)]"
    },
    {
    "name": "WEBSITE_CONTENTSHARE",
    "value": "[toLower(variables('functionAppName'))]"
    },
    {
    "name": "FUNCTIONS_EXTENSION_VERSION",
    "value": "~1"
    },
    {
    "name": "WEBSITE_NODE_DEFAULT_VERSION",
    "value": "6.5.0"
    },
    {
    "name": "SCM_DO_BUILD_DURING_DEPLOYMENT",
    "value": true
    }
]

The Deployment Script


Now that all the pieces are ready, it's time to put them together in one script. In fact, only the last two commands are required; everything else is just there to make the script easier to re-use. Check out my previous post, 5 Simple Steps to Get a Clean ARM Template, to learn more about the best practices related to ARM templates. So let's see that script; it's pretty simple.

    # script to Create an Azure Gramophone-PoC Solution

    resourceGroupName=$1
    resourceGroupLocation=$2

    templateFilePath="./arm/azuredeploy.json"
    parameterFilePath="./arm/azuredeploy.parameters.json"

    dateToken=`date '+%Y%m%d%H%M'`
    deploymentName="FrankDemo"$dateToken

    # az login

    # You can select a specific subscription if you do not want to use the default
    # az account set -s SUBSCRIPTION_ID

    if [ "$(az group exists -g $resourceGroupName)" = "false" ]; then
        echo "---> Creating the Resourcegroup: " $resourceGroupName
        az group create -g $resourceGroupName -l $resourceGroupLocation
    else
        echo "---> Resourcegroup:" $resourceGroupName "already exists."
    fi

    az group deployment create --name $deploymentName --resource-group $resourceGroupName --template-file $templateFilePath --parameters $parameterFilePath --verbose

    echo "---> Deploying Function Code"
    az functionapp deployment source config-zip -g $resourceGroupName -n zipdeploydemo --src "./zip/AzFunction-ZipDeploy.zip"

    echo "---> done <--- code="">

The only "new" thing is the last command functionapp deployment source config-zip. That where we specify to the Azure Function App to look to --src to get our source. Because I'm running it locally, the path is pointing to a local folder. However, you could execute this command also in the CloudShell, and that would become a URI... to an Azure Blob Storage by example.

Deploy and Test


If you didn't notice yet, I wrote my script in bash with Azure CLI. That's because I want my script to be compatible with all platforms. Of course, you could have done it in PowerShell or anything else that can call the REST API.

To deploy, just execute the script, passing the resource group name and its location.

    ./Deploy-AZ-Gramophone.sh cloud5mins eastus


To get the function URL, go to the Azure portal (portal.azure.com) and click on the Function App that we just deployed. Click on the function, GetTopRunner in this case, then click the </> Get function URL button.


Use that URL in Postman and pass a value for the parameter top to see that the deployment was successful.


In Video Please


If you prefer, I also have a video version of this post.



~Enjoy!

5 Simple Steps to Get a Clean ARM Template

You have a solution that is already deployed in Azure, and you would like to reproduce it. You know that an Azure Resource Manager (ARM) template could help you do that; unfortunately, you don't know how to get started. In this post, I will share with you the best practices and how I implement them while working on ARM templates.

How to Get your ARM Template


Of course, you could build your ARM template from scratch. However, there are many quickstart templates available on GitHub. Even better, you can get Azure to generate the template for you!

If you're building a new solution, go to the Azure portal (portal.azure.com) and start creating your resource as usual. But stop just before clicking on the Create button. Instead, click on the link beside it named Download template and parameters. That will open a new blade where you will be able to download the template, the parameters file, and a few scripts in different languages to deploy it.


If your solution is already deployed, you still have a way to get the template. Again, from the Azure portal, go to the resource group of your solution. In the left options panel, click on Automation script.


Step 1 - Use Git


Once you have your ARM template and a parameters file, move them into a folder and initialize a Git repository. Even if it's only a local one, this will give you an infinite supply of Ctrl-Z. By doing multiple commits along your journey to a better and cleaner template, you will always have the option to go back to when your template was "functional".

A fantastic tool to edit ARM template is Visual Studio Code. It's free, it supports natively Git, and you can install great extensions to help you.

Step 2 - Validate, Validate, Validate, then Commit


Before committing anything, validate your template against your resource group; it only takes a few seconds and catches most errors before a real deployment does:

az group deployment validate --resource-group cloud5mins --template-file .\template.json --parameters .\parameters.json
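Once the template validates, commit. It's plain Git, nothing specific to ARM; a minimal loop could look like this:

git add template.json parameters.json
git commit -m "Valid template"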

Step 3 - Reduce the Number of Parameters


Nobody likes tons of questions, and too many parameters are exactly like too many questions. So reduce them as much as possible. We cannot just delete those unwanted parameters, because they still provide important information; instead, move them into the variables section.

You can do that in different ways; let me share mine. I start with the parameters file and bubble up any parameter that I would like to keep. Next, I cut/paste all the unwanted parameters into a new file. Then I use the multi-cursor selection of VSCode to clean them up in two clicks.

Once all parameters are "converted" into variables, copy them into the variables section of the ARM template. You will need to delete their parameter equivalents from the top of the template.

Now that we have a clean list of parameters and variables, we must fix the references to the converted parameters. To do that, replace all parameters() references with variables().

For example, this:

parameters('networkInterfaceName')

will become that:

variables('networkInterfaceName')

Now that we have a more respectable list of parameters, we must make sure that what we expect from them is clear. To do that, we have two simple features at our disposal. The first one is, of course, the name; use a complete and clear name, and resist the temptation to shorten everything or use too many acronyms. The second is the metadata description. This information will be displayed to users through the portal as tooltips.

    "adminUsername": {
        "type": "string",
        "metadata": {
            "description": "Name of Administrator user on the VM"
        }
    }

Step 4 - Use Unique Strings


When you deploy in Azure, some names are global and by definition need to be unique. This is why adding a suffix or a unique identifier to your names is a good practice. An excellent way to get an identifier is to use the function uniqueString(). This function creates a 64-bit hash based on the information passed as parameters.

"suffix": "[uniqueString(resourceGroup().id, resourceGroup().location)]"

In the example just above, we pass the identifier of the resource group and its location. It means that every time you deploy in the same resource group at the same location, the suffix will be the same. However, if your solution is deployed in multiple locations (for disaster recovery, or another scenario), the suffix will have a different value.

To use it, let's say the name of a virtual machine was passed as a parameter. Then we will create a variable and concatenate the parameter and our suffix.

"VMName": "[toLower(concat(parameters('virtualMachineName'), variables('suffix')))]",

Then instead of using the parameter inside your ARM template, you will be using this new variable.

Step 5 - Use Variables


One of the great strengths of ARM templates is that we can use them over and over. This is why we want to avoid anything that has a static name or value. Templates generated from the Azure portal are a snapshot of one particular instance. The best way to stay structured and avoid fixed names is to leverage variables.

When you use an ARM template generated from a "live", already deployed solution, the template will contain a lot of very specific information about that instance (comments, resource IDs, states, etc.). When you are building a generic template, don't hesitate to delete those.
Let's see some examples.


"RGName": "[toLower(resourceGroup().name)]",
"VMName": "[toLower(concat(parameters('virtualMachineName'), variables('suffix')))]",

"virtualNetworkName": "[concat(variables('RGName'), '-vnet')]",
"networkInterfaceName": "[toLower(concat(variables('VMName'),'-nic-', variables('suffix')))]",
"networkSecurityGroupName": "[toLower(concat(variables('VMName'),'-nsg-', variables('suffix')))]",

"diagnosticsStorageAccountName": "[substring(concat(variables('RGName'), 'diag', variables('suffix')), 0, 24)]",

You may wonder why we need the first variable, RGName, since the resource group name is already available through the resourceGroup() function. Some resource names, like the Azure Blob storage's, must contain only lowercase characters. By making a variable, we avoid repeating toLower() every time.

You can concatenate two or more variables and/or strings with the "very popular" function concat(). Sometimes, the name built from all those strings is too long. You can trim it with the function substring(stringToParse, startIndex, length). In this case, Azure Blob storage requires a name with a maximum of 24 characters.

To learn more about all the available functions and how to use them, visit the Azure Resource Manager template functions page in the Microsoft documentation.

Step 6 - Create "T-Shirt Size" or smart options


The best way to build a good template is to think like the people who will use it. A developer may not know the difference between a Standard_D2s_v3, a Standard_F8 or a Standard_H8, but will clearly know if he needs a medium, a large, or a web development VM.

That means that we will create a parameter with only specific allowed values, and based on that simple selection we will make more specific, technical decisions. See the declaration of the following parameter.


    "EnvironmentSize": {
        "type": "string",
        "defaultValue": "medium",
        "allowedValues": [
            "medium",
            "large"
        ],
        "metadata": {
            "description": "Medium for regular development. Large for huge memory usage"
        }
    }

This parameter will only allow the two strings "medium" and "large"; anything else will return a validation error. If nothing is passed, the default value will be "medium". And finally, a metadata description makes sure the purpose of the parameter is clear and well defined.

Then you define your variable (ex: TS-Size) as an object with two properties, or as many as you have allowed values. Each of these properties can itself have many other properties.

"TS-Size":{
    "medium":{
        "VMSize": "Standard_D2s_v3",
        "maxScale": 1
    },
    "large":{
        "VMSize": "Standard_D8s_v3",
        "maxScale": 2
    }
}

Then to use it, we just need to chain the variable and the parameter. Notice how we have nested square brackets... This will use the TS-Size.medium.VMSize value by default.

"vmSize": "[variables('TS-Size')[parameters('EnvironmentSize')].VMSize]"

I hope you find those tips as useful as I do. If you have other suggestions or recommendations, don't hesitate to add them in the comments section or reach out to me.

The full ARM template is available at : https://gist.github.com/FBoucher/adea0acd95f86e5838cf812c010564cf

In Video Please!


If you prefer, I also have a video version of that post.





Don't install your software yourself

I don't know about you, but I don't like losing time. This is why a few years ago I started using scripts to install all the software I need on my computer. Got a new laptop? You just need to execute the script, go grab a coffee, and when you're back all your favorite (and required) software is installed. On Linux, you could use apt-get, and on Windows, my current favorite is Chocolatey. Recently I needed to use more virtual machines (VMs) in the cloud, and I decided that I should try using a Chocolatey script during the deployment. This way, once the VM is created, the software I need is already installed! This post is all about my journey to get there; all scripts, issues and workarounds will be explained.

The Goal


Creating a new VM on-premises, applying the OS updates and installing all the tools you need (like the Visual Studio IDE) takes hours... This solution should be done in under 10 minutes (~7 min in my case).
Once the VM is available, it should have Visual Studio 2017 Enterprise, VSCode, Git and Node.js installed. In fact, I would like to use the same Chocolatey script I use regularly.
# Install Chocolatey
Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Install Software
choco install visualstudiocode -y
choco install git -y 
choco install nodejs-lts  -y

(Available on gist.github)

The Tools


In this post I will use Azure CLI, because it works in any environment. However, PowerShell could also be used; only a few commands would be different. The VM will be deployed with an Azure Resource Manager (ARM) template. To create and edit the ARM template I like to use VSCode; you don't need it, but it's so much easier with it! I use two extensions.
The first one, Azure Resource Manager Snippets, helps by generating the skeleton for our needs. In a JSON file you just need to type arm and voila! You have a long list of ARM templates!


The second is Azure Resource Manager Tools. This extension provides language support for ARM and some validation. Very useful...


Creating the ARM Template


To get started, create a new JSON file. Then type arm and select the first option to get an empty skeleton. Then add an extra line in resources and type arm again. This time, scroll until you see arm-vm-windows.


A multi-cursor will let you edit the name of your VM everywhere in the file in one shot. Hit Tab to navigate automatically to the userName, and Tab again to go to the password.

Now we have a functional ARM template that we could deploy. However, let's add a few things first.

Searching the Image SKUs by Code


One of my favorite VM images for a DevBox is the one that includes Visual Studio pre-installed. One thing to know: those images are only deployable in an MSDN subscription. To specify which image you want to use, you need to pass a publisher, an offer, and a SKU.
Here is how to find them with Azure CLI commands:
# List all the Publishers that contain VisualStudio (It's case sensitive)
az vm image list-publishers --location eastus --output table --query "[?contains(name,'VisualStudio')]"

# List all offers for the Publisher MicrosoftVisualStudio
az vm image list-offers --location eastus --publisher MicrosoftVisualStudio  --output table

# List all available SKUs for the Publisher MicrosoftVisualStudio with the Offer VisualStudio
az vm image list-skus --location eastus --publisher MicrosoftVisualStudio --offer VisualStudio --output table


Now that all the information is found, search in the ARM template and replace the current values with the ones found. In my case, here are the new values.

"imageReference": {
                    "publisher": "MicrosoftVisualStudio",
                    "offer": "VisualStudio",
                    "sku": "VS-2017-Ent-Win10-N",
                    "version": "latest"
                }

Adding our Custom Script


Great, now we have a VM with Visual Studio, but our applications are still not installed. That will be done by adding the Custom Script Extension for Windows to our template. On its documentation page, a sample schema is ready to be used.
The last node of your template is currently another extension. For the purpose of this blog post, let's remove it. You should have something like this.


We will copy/paste the snippet from the documentation page and change a few little things: change the type (thanks to our VSCode extension for that catch) and update the dependencies to reflect our demo.

To use the extension, your script needs to be available online. It could be in a blob storage (with some security) or just publicly available. In this case, the script is publicly available from my gist.github page. I created a variable in the variables section that contains the raw URL of my script, and a reference to that variable is used in the fileUris.

The extension will download the script and then execute it locally. Change the commandToExecute to call our script with an unrestricted execution policy.

You have a time window of ~30 minutes to execute your script. If it takes longer than that, your deployment will fail.

{
        "apiVersion": "2015-06-15",
        "type": "extensions",
        "name": "config-app",
        "location": "[resourceGroup().location]",
        "dependsOn": [
            "[concat('Microsoft.Compute/virtualMachines/', 'FrankDevBox')]"
        ],
        "tags": {
            "displayName": "config-app"
        },
        "properties": {
            "publisher": "Microsoft.Compute",
            "type": "CustomScriptExtension",
            "typeHandlerVersion": "1.9",
            "autoUpgradeMinorVersion": true,
            "settings": {
                "fileUris": [
                    "varaiables('scriptURL')]"
                ]
            },
            "protectedSettings": {
                "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ', './SimpleDevBox.ps1')]"
            }
        }
    }

The ARM Template


It's finally time to deploy our VM.

    # First, we need a Resource Group
    az group create --name frankDemo --location eastus

    # ALWAYS, always validate first... you will save a lot of time
    az group deployment validate --resource-group frankDemo --template-file /home/frank/Dev/DevBox/FrankDevBox.json

    #Finally deploy. This script should take between 5 to 10 minutes
    az group deployment create --name FrankDevBoxDemo --resource-group frankDemo --template-file /home/frank/Dev/DevBox/FrankDevBox.json --verbose

What's Next?!


We created one template; you could make it better.

Deploy from anywhere


By moving the computerName, adminUsername, adminPassword, and the script URL into the parameters section, you could then put the template in a public place like GitHub and use the one-click deploy!

Directly from the GitHub page, or from anywhere, you just need to build a URL from two parts: https://portal.azure.com/#create/Microsoft.Template/uri/ and the URL-encoded address of your template.

If my template is available at https://raw.githubusercontent.com/FBoucher/SimpleDevBox/master/azure-deploy.json, then the full URL becomes:
https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FFBoucher%2FSimpleDevBox%2Fmaster%2Fazure-deploy.json

Clicking that URL will bring you to the Azure Portal (portal.azure.com) in a customized form to deploy your template.
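If you don't want to encode the URL by hand, a one-liner can do it (a sketch using jq; any URL encoder gives the same result):

echo -n 'https://raw.githubusercontent.com/FBoucher/SimpleDevBox/master/azure-deploy.json' | jq -sRr @uri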


It cannot be easier! You can see mine on GitHub.

Auto shutdown


It's very easy to forget to turn off those VMs. Whether you are paying for them or using the limited MSDN credits, it's a really good practice to shut them down. Why not do it automatically?
That can very simply be done by adding a new resource to the template.

{
        "name": "[concat('autoshutdown-', 'FrankDevBox')]",
        "type": "Microsoft.DevTestLab/schedules",
        "apiVersion": "2017-04-26-preview",
        "location": "[resourceGroup().location]",
        "properties": {
            "status": "Enabled",
            "taskType": "ComputeVmShutdownTask",
            "dailyRecurrence": {
                "time": "19:00"
            },
            "timeZoneId": "UTC",
            "targetResourceId": "[resourceId('Microsoft.Compute/virtualMachines', 'FrankDevBox')]",
            "notificationSettings": {
                "status": "Enabled",
                "emailRecipient": "frank@frankysnotes.com",
                "notificationLocale": "en",
                "timeInMinutes": "30"
            }
        },
        "dependsOn": [
            "[concat('Microsoft.Compute/virtualMachines/', 'FrankDevBox')]"
        ]
    }


In Video Please!


If you prefer, I also have a video version of that post.

How to Create an Azure VM with Chocolatey


~Enjoy!





How to copy files between Azure subscriptions from Windows, Linux, OS X, or the cloud

(in French: here)

Copy, Download or Upload from-to any combination of Windows, Linux, OS X, or the cloud


Data is and will always be our primary concern. Whether shaped as text files, images, VM VHDs or any other form, at some point in time our data will need to be moved. I have already written about it previously, and the content of that post is still valuable today, but I wanted to share new options and cover all the platforms (meaning Linux, Windows and OS X).

Scenarios


Here are a few scenarios where you would want to move data:
  • Your Microsoft Azure trial is ending, and you wish to keep all the data.
  • You are creating a new web application, and all those images need to be moved to the Azure subscription.
  • You have a Virtual Machine that you would like to move to the cloud or to a different subscription.
  • ...

AZCopy


AzCopy is a fantastic command-line tool for copying data to and from Microsoft Azure Blob, File, and Table storage. For a long time, AzCopy was only available to Windows users; however, recently a second version built with the .NET Core Framework became available for Mac and Linux users too. The commands are very similar, but not exactly the same.

AzCopy on Windows


In its simplest expression, an AzCopy command looks like this:
AzCopy /Source:<source> /Dest:<destination> [Options]
If you have previously installed an Azure SDK on your machine, you already have it. By default, AzCopy is installed to %ProgramFiles(x86)%\Microsoft SDKs\Azure\AzCopy (64-bit Windows) or %ProgramFiles%\Microsoft SDKs\Azure\AzCopy (32-bit Windows).

If you only need AzCopy, for a server for example, you can download the latest version of AzCopy.
Let's see some frequent usage. First, let's say you need to move all those images from your server to an Azure Blob storage.
AzCopy /Source:C:\MyWebApp\images /Dest:https://frankysnotes.blob.core.windows.net/blog /DestKey:4YvvYDTg3UUpky8Rj5bDG4KO/R1FdtssxVnunsEd/4rAS04V2LkO0F8mXbddAv39WtCo5LW6JyvfhA== /S

Then, copying those images to another subscription is very easy.
AzCopy /Source:https://frankysnotes.blob.core.windows.net/blog /Dest:https://frankshare.blob.core.windows.net/imagesbackup /SourceKey:4YvvYDTg3UUpky8Rj5bDG4KO/R1FdtssxVnunsEd/4rAS04V2LkO0F8mXbddAv39WtCo5LW6JyvfhA== /DestKey:EwXpZ2uZ3zrjEbpBGDfsefWkj3G2QY5fJcb6kMqV2A0+2TsGno+mk9vEXc5Uw1XiouvAiTS7Kr5OGzA== /S

AzCopy Parameters


These examples were simple, but AzCopy is a very powerful tool. I invite you to type one of the following commands to discover more about using AzCopy:
  • For detailed command-line help for AzCopy: AzCopy /?
  • For command-line examples: AzCopy /?:Samples

AzCopy on Linux


Before you can install AzCopy, you will need to install .NET Core. This is done very simply with a few commands.
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get update
sudo apt-get install dotnet-sdk-2.0.2
Then to install it, you just need to get it with a wget command, extract the archive, and execute the install script.
wget -O azcopy.tar.gz https://aka.ms/downloadazcopyprlinux 
tar -xf azcopy.tar.gz 
sudo ./install.sh
In its simplest expression, the .NET Core version of an AzCopy command looks like this:
azcopy --source <source> --destination <destination> [Options]
It is very similar to the original version, but parameters use -- and - instead of /, and where a : was required, it's now a simple space.

Uploading to Azure


Here is an example that copies a single file, GlobalDevopsBootcamp.jpg, to an Azure Blob storage. We pass the full local path of the file to --source, the destination is the full URI, and finally the destination blob storage key. Of course, you could also use a SAS token if you prefer.
azcopy \
--source /home/frank/demo/GlobalDevopsBootcamp.jpg \
--destination https://frankysnotes.blob.core.windows.net/blog/GlobalDevopsBootcamp.jpg \
--dest-key 4YvvYDTg3UUpky8Rj5bDG4KO/R1FdtssxVnunsEd/4rAS04V2LkO0F8mXbddAv39WtCo5LW6JyvfhA== 

Copying Between Azure Subscriptions


To copy the image to a second Azure subscription, we use the same command; the source is now an Azure Storage URI, and we pass both the source and the destination keys:
azcopy \
--source https://frankysnotes.blob.core.windows.net/blog/GlobalDevopsBootcamp.jpg \
--destination https://frankshare.blob.core.windows.net/imagesbackup/GlobalDevopsBootcamp.jpg \
--source-key 4YvvYDTg3UUpky8Rj5bDG4KO/R1FdtssxVnunsEd/4rAS04V2LkO0F8mXbddAv39WtCo5LW6JyvfhA== \
--dest-key EwXpZ2uZ3zrjEbpBGDfsefWkj3G2QY5fJcb6kMqV2A0+2TsGno+mk9vEXc5Uw1XiouvAiTS7Kr5OGzA== 

Azure CLI


Azure CLI is a set of cross-platform commands for the Azure platform. It provides tools to manipulate all Azure components, but this post will focus on the Azure Storage features.

There are two versions of the Azure Command-Line Interface (CLI) currently available:

  • Azure CLI 2.0: written in Python, compatible only with the Resource Manager deployment model.
  • Azure CLI 1.0: written in Node.js, compatible with both the classic and Resource Manager deployment models.

Azure CLI 1.0 is deprecated and should only be used to support the Azure Service Management (ASM) model with "classic" resources.

Installing Azure CLI


Let's start by installing Azure CLI. Of course, you could download an installer, but since everything is evolving very fast, why not get it from the Node Package Manager (npm)? The installation is the same; you just need to specify the version if you absolutely need Azure CLI 1.0.

sudo npm install azure-cli -g


To keep with the previous scenario, let's try to copy all images to a blob storage. Unfortunately, Azure CLI doesn't offer the same flexibility as AzCopy, and you must upload the files one by one. However, to upload all images from a folder, we can easily put the command in a loop.

for f in Documents/images/*.jpg
do
   azure storage blob upload -a frankysnotes -k YoMjXMDe+694FGgOaN0oaRdOF6s1ktMgkB6pBx2vnAr8AOXm3HTF7tT0NQWvGrWnWj5m4X1U0HIPUIAA==  $f blogimages
done


In the previous command, -a was the account name, and -k was the access key. These two pieces of information can easily be found in the Azure portal. From the portal (https://portal.azure.com), select the storage account, then in the right panel click on Access keys.

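If you prefer to stay in the terminal, the same keys can be retrieved from the command line (shown here with the Azure CLI 2.0 syntax; the resource group name is a placeholder):

az storage account keys list --account-name frankysnotes --resource-group <resource_group> --output table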

Copying a file (ex: a VM disk, aka VHD) from one storage account to another one, in a different subscription or region, is really easy. This time we will use the command azure storage blob copy start, and the -a and -k parameters relate to our destination.

azure storage blob copy start 'https://frankysnotes.blob.core.windows.net/blogimages/20151011_151451.MOV' imagesbackup -k EwXpZ2uZ3zrjEbpBGDfsefWkj3GnuFdPCt2QY5fJcb6kMqV2A0+2TsGno+mk9vEXc5Uw1XiouvAiTS7Kr5OGzA== -a frankshare

The nice thing about this command is that it's asynchronous. To see the status of your copy, just execute the command azure storage blob copy show:

azure storage blob copy show -a frankshare -k YoMjXMDe+694FGgOaN0oPaRdOF6s1ktMgkB6pBx2vnAr8AOXm3HTF7tT0NQVxsqhWvGrWnWj5m4X1U0HIPUIAA== imagesbackup 20151011_151451.MOV




Azure CLI 2.0 (Windows, Linux, OS X, Docker, Cloud Shell)


The Azure CLI 2.0 is Azure's new command-line interface, optimized for managing and administering Azure resources, which works against the Azure Resource Manager. Like the previous version, it works perfectly on Windows, Linux, OS X and Docker, but also from the Cloud Shell!


Cloud Shell is available right from the Azure portal, without any plugin.

Uploading to Azure


The commands are the same as in the previous version, except that the executable is now named az. Here is an example that uploads a single file into an Azure Blob storage.

az storage blob upload --file /home/frank/demo/CloudIsStrong.jpg \
--account-name frankysnotes \
--container-name blogimages --name CloudIsStrong.jpg \
--account-key 4YvvYDTg3UUpky8Rj5bDG4KO/R1FdtssxVnunsEd/4rAS04V2LkO0F8mXbddAv39WtCo5LW6JyvfhA==

Copying Between Subscriptions


Let's now copy the file to another Azure subscription. A thing to be aware of: --account-name and --account-key refer to the destination, even though it's not explicitly specified.
az storage blob copy start \
--source-account-name frankysnotes  --source-account-key 4YvvYDTg3UUpky8Rj5bDG4KO/R1FdtssxVnunsEd/4rAS04V2LkO0F8mXbddAv39WtCo5LW6JyvfhA== \
--source-container blogimages --source-blob CloudIsStrong.jpg   \
--account-name frankshare  --account-key EwXpZ2uZ3zrjEbpBGDfsefWkj3G2QY5fJcb6kMqV2A0+2TsGno+mk9vEXc5Uw1XiouvAiTS7Kr5OGzA== \
--destination-container imagesbackup  \
--destination-blob CloudIsStrong.jpg 
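Like its predecessor, this copy is asynchronous. With Azure CLI 2.0, the status can be checked by querying the copy properties of the destination blob (a sketch reusing the same demo names):

az storage blob show --container-name imagesbackup --name CloudIsStrong.jpg --account-name frankshare --account-key EwXpZ2uZ3zrjEbpBGDfsefWkj3G2QY5fJcb6kMqV2A0+2TsGno+mk9vEXc5Uw1XiouvAiTS7Kr5OGzA== --query properties.copy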

In Video Please!


If you prefer, I also have a video version of that post.



One More Thing


Sometimes we don't need to script things, and a graphical interface is much better. For this kind of situation, the must-have is Azure Storage Explorer. It does a lot: upload, download, and manage blobs, files, queues, tables, and Cosmos DB entities. And it works on Windows, macOS, and Linux!


It's just the beginning


This post was just an introduction to two very powerful tools. I strongly suggest reading the official documentation to learn more. Use the comments to share all your questions and suggestions.


Automating Docker Deployment with Azure Resource Manager

Recently, I had to build a solution where Docker containers were appropriate. The idea behind containers is that once they are built, you just have to run them. While that's true, my journey was not exactly that. Nothing dramatic, only a few gotchas that I will explain while sharing my journey.

The Goal

The solution is classic, and this post will focus on a single virtual machine (VM). The Linux VM needs a container that automatically runs when the VM starts. Some files are first downloaded from a secure location (Azure Blob storage), then passed to the container. The solution is deployed using Azure Resource Manager (ARM). For simplicity, I will use an Nginx container, but the same principle applies to any container. Here is the diagram of the solution.


The Solution

I decided to use Azure CLI to deploy, since this will also be the command line used inside the Linux VM, but Azure PowerShell could do the same. We will be deploying an ARM template containing a Linux VM and two VM extensions: DockerExtension and CustomScriptForLinux. Once the VM is provisioned, a bash script will be downloaded by the CustomScriptForLinux extension from the secure Azure Blob storage myprojectsafe, then executed.