
How to re-open a repository cloned in a volume, from Visual Studio Code

I love Dev Containers; I use them for most of my development. One of my favorite options is to clone a repository directly into a Docker volume.


It takes a few seconds, and you can work on your code without installing the SDKs or languages that your current machine doesn't have. Marvelous!

Ideally, at the end of your session, you push your code to another repository (ex: GitHub). However, sometimes I forget, or I am interrupted and start working on something else, and my changes are not pushed.

How do you re-open that environment?! In this post, I want to share two ways that I use.

Open Recent

The first method is to use the history of the editor! For example, here in Visual Studio Code, select the File menu and Open Recent.


If you haven't opened too many files since you used that dev container, it should be present, as displayed in the image. It should look like: <Name of the repository> in a unique [Dev Container].

Make sure Docker is already running and select it. Voila! In a snap, you are back in the dev environment with your last changes waiting for you.

Open the Container

There are a few different ways to do the next solution; I will share the one I consider the easiest for people who are not Docker experts.

First, if it's not already installed in your VSCode, add the Docker extension (identifier: ms-azuretools.vscode-docker). Then, from this new extension, in the top section named Containers, search for your container. Its name should start with "vsc" (for Visual Studio Code), followed by a hyphen and the name of the repository you cloned. Right-click on it and select Start. After a few seconds, the container should have a little green triangle on its side and be ready to continue.
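If you prefer the command line, the same thing can be done with the Docker CLI. A minimal sketch, assuming Docker is already running (the container name is illustrative; yours will differ):

# list all containers, running or not, created by VS Code
docker ps -a --filter "name=vsc-"

# start the one matching your repository
docker start vsc-myrepo-1234abcd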


The next step consists of attaching the container to VSCode. Once more, from the Docker extension, right-click on the container and select Attach Visual Studio Code.


This will open a new VSCode window. We are mostly done, but there is one last step: you will notice that the file explorer is empty. No worries, we will fix everything now. The terminal should be open in the home folder of the root user. Let's open our project folder by executing the command:

cd /workspaces/<repository-name>

Then the final command is to re-open VSCode in this folder and let the Dev Container do its magic. Execute the command:

code . -r

(The -r is to reuse the same VSCode window. It's optional; if not provided, a new VSCode instance will open.)


And voila! The Dev Container is just as it was before.


If you know other ways to achieve this, leave a comment or reach out, I'm always happy to learn more.


~ frank



How to simplify a Docker run command

I recently wanted to create animated GIFs from videos. The idea was to get video previews in a very lightweight file. After a quick search online, I found FFmpeg, a fantastic multimedia framework to manipulate media. There are also a few wrappers that exist in different languages (ex: C#, JavaScript), but you still need to install FFmpeg locally, and I didn't want that. In fact, I wanted two things: a simple solution that doesn't require any local installation, and something in the cloud. In this post, I want to share how I achieved the first one.

All the code and the container are available on GitHub and Docker Hub.

First Contact

The ffmpeg framework is very powerful and can do so many things; therefore, it's normal that it has a ton of possible parameters and extensions. After some time spent in the documentation and a few trials and errors, I found how to do exactly what I needed by calling it this way:

ffmpeg -r 60 -i $INPUTFILE -loop 0 -vf scale=320:-1 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > $OUTPUTFILE

This will create a five-second animated GIF from a video. It speeds up the video and lowers the framerate of the GIF to keep the output lightweight. Here is an example.

Hello World episode 5 seconds preview
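For reference, here is my reading of the main parameters, based on the ffmpeg documentation (double-check the official docs before relying on it):

# -r 60             read the input at 60 fps, which speeds the video up
# -i $INPUTFILE     the source video
# -loop 0           make the resulting GIF loop forever
# -vf scale=320:-1  resize to 320 pixels wide, keeping the aspect ratio
# -c:v gif -f gif   encode and output in the GIF format
# -ss 00:00:00.500  skip the first half-second of the video
# -r 10 -t 5        output at 10 fps, for a duration of 5 seconds
# - > $OUTPUTFILE   write to stdout, redirected into the output file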

This is great, but it is not very friendly. How can someone who only creates a video once in a while be expected to remember all those parameters?! And even harder, when the video is vertical, some parameters have different values. It was time to simplify, and here is how I did it. Note that I'm a Docker beginner; if you think there is a simpler or better way to do some steps, let me know, and let's learn together.

The Plan

The plan is simple: execute a simple Docker command like docker run fboucher/aciffmpeg -i NotInTheSky.mp4 and generate a video preview. To build our ephemeral container, we will start with something lightweight like Alpine, install ffmpeg, and add a script that will be executed when the container runs. That sounds like an excellent plan; let's do it!

Writing the Script

The script is simple, but I learned a few things writing it, which is why it's included in this post. The goal was simple: execute the ffmpeg command using some values from the parameters: the file path, and whether the video is vertical. Here is the script:

#!/bin/sh

while getopts ":i:v" opt; do
  case $opt in
    i) inputFile="$OPTARG"
    ;;
    v) isVertical=true
    ;;
    \?) echo "Invalid option -$OPTARG" >&2
    exit 1
    ;;
  esac

  case $OPTARG in
    -*) echo "Option $opt needs a valid argument"
    exit 1
    ;;
  esac
done

if [ -z "$isVertical" ]; then isVertical=false; fi

# used for bash 
#IFS='.'
#read -a filePart <<< "$inputFile"
#outputFile="${filePart[0]}.gif"

# used for dash 
filename=$(echo "$inputFile" | cut -d "." -f 1)
outputFile="$filename.gif"

if $isVertical
then
  # vertical video: constrain the height, let the width follow
  ffmpeg -r 60 -i "$inputFile" -loop 0 -vf scale=-1:320 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > "$outputFile"
else
  # horizontal video: constrain the width, let the height follow
  ffmpeg -r 60 -i "$inputFile" -loop 0 -vf scale=320:-1 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > "$outputFile"
fi

Things I Learned: Parameters Without Values

The script needs to be as friendly as possible; therefore, any unnecessary information should be removed. Most videos will be horizontal, so let's make the parameter optional. However, I don't want users to have to specify a value, like script.sh -i myvideo.mp4 -v true, but instead script.sh -i myvideo.mp4 -v. This is very simple to do, once you know it. On the first line of code, where I get the parameters with getopts ":i:v", note that there is no ":" after the "v". This specifies that we are not expecting any value.

Things I Learned: Bash and Dash

As mentioned earlier, the container will be built from Alpine. And Alpine doesn't have bash, but instead uses dash as its shell. It's mostly the same, but there are some differences. The first one is the shebang (aka "#!/bin/sh" on the first line). And the second was the string manipulation. To generate a new file with the same name but a different extension, the script splits the file name at the ".". This can be done with the IFS ... read ... <<< commands (commented out in the script) in bash, but in dash this gives syntax error: unexpected redirection, because there is no <<< in dash. Instead, you need to use the command cut -d "." -f 1 (where -d specifies the character to use as the delimiter, and -f returns only that field).
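Here is a quick way to see cut in action; it's runnable in both bash and dash (the file name is just an example):

echo "myvideo.mp4" | cut -d "." -f 1
# output: myvideo

Note that -f 1 keeps only the first field, so a name like my.video.mp4 would be truncated to my; good enough for this script, but worth knowing.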

Building the image

It's now time to connect all the dots in the Dockerfile.

FROM alpine:3.13
LABEL Name=aciffmpeg Version=0.0.2
RUN apk add ffmpeg
COPY ./src/myscript.sh /
RUN chmod +x /myscript.sh
ENTRYPOINT ["/myscript.sh"]

The file is not extremely complex, but let's go through it line by line.

  • We start FROM Alpine version 3.13 and apply a LABEL.
  • RUN will execute the command to install ffmpeg. apk is the default utility on Alpine to install apps, just like apt on Ubuntu.
  • COPY copies the script from our local machine into the container, at the root.
  • The second RUN command makes sure the script is executable.
  • Finally, ENTRYPOINT configures the container to run as an executable, in this case as the script. All parameters passed to Docker will be passed to the script.

The only things left now are to build, tag, and push it to Docker Hub.

docker build -t fboucher/aciffmpeg .

docker tag  0f42a672d000 fboucher/aciffmpeg:2.0

docker push fboucher/aciffmpeg:2.0

The Simplified version

And now, to create a preview of any video, you just need to map a volume, specify the file path, and optionally mention if the video is vertical.

On Linux/WSL, the command would look like this:

docker run -v /mnt/c/dev/test:/video fboucher/aciffmpeg -i /video/sample.mp4 -v

And on PowerShell like that:

docker run -v c:/dev/test:/video fboucher/aciffmpeg -i /video/sample.mp4 -v

I learned a lot about Docker doing this project, and now I have a very useful tool. What tools have you built using containers that simplify your life or work?

Video Version

I recorded a video version if you are interested. 


~frank



Learning how to Build, Secure, and Deploy your Azure Static Web App in C#

Recently I participated in a series of videos about Azure Static Web Apps: Azure Tips and Tricks: Static Web Apps, on Microsoft Channel 9. The series is perfect to get started and covers multiple scenarios with different JavaScript frameworks and C#. In this post, I wanted to regroup the four videos related to .Net Blazor. I also added the GitHub links as part of the references at the end.

How to create a web app in C# with Blazor & Azure Static Web Apps

In this video, we start from scratch. We will build and deploy a brand new static website with .Net Blazor.



How to add a C# API to your Blazor web app

Now that you built your web app with C# and Blazor, what about adding a serverless C# API to it? Have a look!



How to secure your C# API with Azure Static Web Apps

Prevent unwanted users from accessing your C# API by configuring authentication and authorization in your Blazor Azure Static Web Apps.



I hope those videos will help you to get started. If you have questions and/or comments don't hesitate to reach out (comments, DM, GitHub issues), it's always a pleasure.

How CI/CD and preview branches work with Azure Static Web Apps

In this video, I wanted to show one of the great features of Azure Static Web Apps: the creation of pre-production environments. Using the CI/CD workflow, you can preview your pull request changes before they reach production, leveraging the automatically created pre-production environments!




How to Create a Continuous Integration Continuous Deployment (CI-CD) Solution for a Docker Project


I'm not a Docker master, but I understand that it's very useful, and I like to use it from time to time in some projects. Another thing I like is DevOps and automation, and in one of my projects, that was missing. In the previous setup, the container was built and published to Docker Hub with the date as a tag. Nice, but not very easy to know which versions are "stable" and which ones are "in progress".

This post is about how I built a continuous integration and continuous deployment solution for my Docker project. All the code is on GitHub and Docker Hub. I'm sharing my journey so others can enjoy that automation without spending a weekend building it.

The Goal

By the end of this build, there will be two GitHub Actions to build and publish different versions of the application on Docker Hub.

The release version: every time a release is published on GitHub, a container tagged with the matching version number will be built and published (ex: myapp:v1).

The beta version: at every push to my branch on GitHub, a container will be published with a specific tag. The tag will match the draft release version number with -beta appended (ex: myapp:v2-beta).

In this post, the application is a Node.js Twitch chatbot. The type of application is not important; the post focuses on the delivery.

Publishing the release version

Every time a release is published on GitHub, the workflow will be triggered. It will first retrieve the "release version", then build and tag the container with it, and finally publish (aka push) it to Docker Hub. Because a "release" is also a "stable" version, it will also update the container tag latest.

Let's look at the full YAML definition of the GitHub Action and I will break it down after.

name: Release Docker Image CI

on:
  release:
    types: [published]
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set outputs
      id: vars
      run: echo ::set-output name=RELEASE_VERSION::$(echo ${GITHUB_REF:10})
    - name: Publish to Registry
      uses: elgohr/Publish-Docker-Github-Action@master
      with:
        name: ${{secrets.DOCKER_USER}}/cloudbot
        username: ${{ secrets.DOCKER_USER }}
        password: ${{ secrets.DOCKER_PASSWORD }}
        tags: "latest,${{ steps.vars.outputs.RELEASE_VERSION }}"

To limit how many times the workflow is triggered, I used on: release and the type published; adjust as you like.

The next interesting part is the lines in the step vars.

- name: Set outputs
    id: vars
    run: echo ::set-output name=RELEASE_VERSION::$(echo ${GITHUB_REF:10})

Here I use the environment variable GITHUB_REF (stripped of its first 10 characters, which contain "refs/tags/") to initialize a local variable RELEASE_VERSION. The value is available from the outputs of that step, like on the last line of the YAML.

tags: "latest,${{ steps.vars.outputs.RELEASE_VERSION }}"

From the step identified by the id vars, I retrieve the value of RELEASE_VERSION from its outputs.

In this GitHub Action, I used elgohr/Publish-Docker-Github-Action@master because it was simple and was doing what I needed. You can execute docker commands directly if you prefer, or use the official Docker GitHub Actions.

There are many options available from the GitHub marketplace.
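For example, if you prefer the direct docker commands, the equivalent of that publish step would be something like this sketch (names, tags, and variables are illustrative):

docker login -u $DOCKER_USER -p $DOCKER_PASSWORD

# build once, apply both the version tag and "latest"
docker build -t $DOCKER_USER/cloudbot:$RELEASE_VERSION -t $DOCKER_USER/cloudbot:latest .

docker push $DOCKER_USER/cloudbot:$RELEASE_VERSION
docker push $DOCKER_USER/cloudbot:latest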

Publishing the beta version

Every time a push is done on GitHub, the workflow will be triggered. It will first retrieve the "release version" of the most recent release in draft mode. The workflow will append -beta to the retrieved version and use this to tag the container. Finally, it publishes (aka pushes) it to Docker Hub.

Once more, here is the full YAML; I will break it down after.

name: Build Docker Images
on: [push]
jobs:
  build:
    name: cloudbot-beta
    runs-on: ubuntu-latest
    steps:
    - id: last_release
      uses: InsonusK/get-latest-release@v1.0.1
      with:
          myToken: ${{ github.token }}
          exclude_types: "release, prerelease"
          view_top: 1  
    - uses: actions/checkout@v2
    - name: Publish to Registry
      uses: elgohr/Publish-Docker-Github-Action@master
      with:
        name: ${{secrets.DOCKER_USER}}/cloudbot
        username: ${{ secrets.DOCKER_USER }}
        password: ${{ secrets.DOCKER_PASSWORD }}
        tags: "${{ steps.last_release.outputs.tag_name }}-beta"

Here the difficulty was that I wanted to create a tag from a "future" version. I decided to use draft releases, because those are not visible to everyone; therefore, they represent the future.

If your last release is version 1 (v1.0), to make this workflow possible you will need to create a new release and save it as a draft.



Like in the release workflow, I need to retrieve the version. Because drafts are only visible to some people, we need access. This is easily done by using the github.token, which is created automatically when the GitHub Action starts.

Then using the step InsonusK/get-latest-release we will retrieve the version.

- id: last_release
    uses: InsonusK/get-latest-release@v1.0.1
    with:
        myToken: ${{ github.token }}
        exclude_types: "release, prerelease"
        view_top: 1  

This time when passing the value for the tag we will concatenate "-beta" to it.

tags: "${{ steps.last_release.outputs.tag_name }}-beta"

Wrapping Up

And voila, a very simple and easy-to-implement CI-CD for a container project. There are many different options; I'm looking forward to learning how you did yours!

How to configure a secured custom domain on an Azure Function or website

I wanted to create this tutorial for a long time: how to map a naked domain on an Azure resource. It looks complicated, but once you know what to do, it's actually quite simple. In this post, I will share the three simple steps to do exactly this.

Step 1: Add Custom Domain


The first step is to map a domain to the application. The method I will explain uses a "www" domain (ex: www.fboucher.dev). To map a naked domain directly (ex: fboucher.dev), you would need to buy a wildcard certificate. However, I will show you in step three how to work around this issue by using DNS rules.

From the Azure portal, open the Azure Function or App Service. From the left menu, search for "custom" and click the Custom domains option. In this panel, click the Add custom domain button and enter your www domain.



Click the validate button and follow the instructions to make the connection between the App Service and your domain provider.

Step 2: Adding a Certificate


Now that your custom domain is mapped, let's fix the "not secure" warning by adding a certificate. From the Azure portal, return to the App blade. Repeat the previous search for "custom", and select the TLS/SSL settings option. Click Private Key Certificates, then the Create App Service Managed Certificate button. Select the domain previously added, and save. It will take a few moments to create the certificate.



Go back to the Custom domains blade and click the Add binding button. Select the domain and certificate, don't forget to select the SNI SSL option, and click the Add Binding button.




Step 3: Create the DNS Rules

Create an account on cloudflare.com and add a site for your domain. We will need to customize the DNS and create some Page Rules.



On cloudflare.com, note the two nameserver addresses. Go to the original name provider (in my case, GoDaddy) and replace the nameservers with the values found on Cloudflare.



Create a rule that will redirect all the incoming traffic from the naked domain to the www domain. From the options at the top, click Page Rules, then click the Create Page Rule button.



In the field If the URL matches, enter the naked domain followed by /*. That will match everything coming from that URL.

For the settings, select Forwarding URL and 301 - Permanent Redirect. Then the destination URL should be https://www. with your domain and /$1.
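With my example domain, the completed rule would look like this (values illustrative):

If the URL matches: fboucher.dev/*
Settings: Forwarding URL, 301 - Permanent Redirect
Destination URL: https://www.fboucher.dev/$1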




References

🔗 Map an existing custom DNS name to Azure App Service: https://c5m.ca/customDomain 

🔗 Secure a custom DNS name with a TLS/SSL binding in Azure App Service: https://c5m.ca/tls-ssl

How to Manage Azure Resources

Whether you are running Linux, macOS, or Windows, and whether you are on the road or in the comfort of your office, there are many different ways to manage your Azure resources. In this video, I show you five ways to do it and explain the pros and cons of each:

  1. Azure Portal
  2. Azure PowerShell
  3. Azure CLI
  4. Azure Mobile App
  5. Azure Cloud Shell

There is also an excellent Microsoft Learn module that will teach you how to use the Azure portal efficiently.


Simplify your deployment with nested Azure Resource Manager (ARM) templates


Most solutions, if not all, are composed of multiple parts: backend, frontend, services, APIs, etc. Because all parts could have a different life cycle, it's important to be able to deploy them individually. However, sometimes we would like to deploy everything at once. It's exactly the scenario I had in a project I'm working on, with one backend and one frontend.

In this post, I will explain how I use nested Azure Resource Manager (ARM) templates and conditions to let users decide if they want to deploy only the backend or the backend with a frontend of their choice. All the code is available on GitHub and, if you prefer, a video version is available below.
(This post is also available in French)

The Context


The project used in this post is my open-source, budget-friendly Azure URL Shortener. As mentioned previously, the project is composed of two parts. The backend leverages serverless Azure Functions; it's a perfect match in this case because it will only run when someone clicks a link. The second part is a frontend, and it's totally optional. Because the Azure Functions are HTTP triggers, they act as an API; therefore, they can be called from anything able to do an HTTP call. Both are very easily deployable using an ARM template, by a PowerShell or CLI command, or by a one-click button directly from GitHub.

The Goal


At the end of this post, we will be able, in one click, to deploy just the Azure Functions, or to deploy them with a frontend of our choice (I only have one right now, but more will come). To do this, we will modify the "backend" ARM template using a condition and nest the ARM template responsible for the frontend deployment.

The ARM templates are available here in their [initial](https://github.com/FBoucher/AzUrlShortener/tree/master/tutorials/optional-arm/before) and [final](https://github.com/FBoucher/AzUrlShortener/tree/master/tutorials/optional-arm/after) versions.

Adding New Inputs


We will nest the ARM templates; this means that our backend template (azureDeploy.json) will call the frontend template (adminBlazorWebsite-deployAzure.json). Therefore, we need to add all the required information to azureDeploy.json to make sure it's able to deploy adminBlazorWebsite-deployAzure.json successfully. Looking at the parameters required by the second template, we only need two values: AdminEMail and AdminPassword. All the others can be generated, or we already have them.

We will also need another parameter that will act as our selection option. So let's add a parameter named frontend and allow only two values: none and adminBlazorWebsite. If the value is none, we only deploy the Azure Function. When the value is adminBlazorWebsite, we will deploy the Azure Function, of course, but we will also deploy an admin website to go with it.

Following best practices, we add clear descriptions and add those three parameters to the parameters section of the ARM template:

"frontend": {
    "type": "string",
    "allowedValues": [
        "none",
        "adminBlazorWebsite"
    ],
    "defaultValue": "adminBlazorWebsite",
    "metadata": {
        "description": "Select the frontend that will be deploy. Select 'none', if you don't want any. Frontend available: adminBlazorWebsite, none. "
    }
},
"frontend-AdminEMail": {
    "type": "string",
    "defaultValue": "",
    "metadata": {
        "description": "(Required only if frontend = adminBlazorWebsite) The EMail use to connect into the admin Blazor Website."
    }
},
"frontend-AdminPassword": {
    "type": "securestring",
    "defaultValue": "",
    "metadata": {
        "description": "(Required only if frontend = adminBlazorWebsite) Password use to connect into the admin Blazor Website."
    }
}

Nested Templates


Let's assume for now that we always deploy the website when we deploy the Azure Function, to keep things simple. What we need now is to use a nested ARM template; that is when you deploy an ARM template from inside another ARM template. This is done with a Microsoft.Resources/deployments node. Let's look at the code:

{
    "name": "FrontendDeployment",
    "type": "Microsoft.Resources/deployments",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', variables('funcAppName'))]",
        "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('funcAppName'), 'web')]"
    ],
    "resourceGroup": "[resourceGroup().name]",
    "apiVersion": "2019-10-01",
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[variables('frontendInfo')[parameters('frontend')].armTemplateUrl]"
        },
        "parameters": {
            "basename": {
                "value" : "[concat('adm', parameters('baseName'))]"
            },
            "AdminEMail": {
                "value" : "[parameters('frontend-AdminEMail')]"
            },
            "AdminPassword": {
                "value" : "[parameters('frontend-AdminPassword')]"
            },
            "AzureFunctionUrlListUrl": {
                "value" : "[concat('https://', reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0], '/api/UrlList?code=', listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default)]"
            },
            "AzureFunctionUrlShortenerUrl": {
                "value" : "[concat('https://', reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0], '/api/UrlShortener?code=', listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default)]"
            },
            "GitHubURL": {
                "value" : "[parameters('GitHubURL')]"
            },
            "GitHubBranch": {
                "value" : "[parameters('GitHubBranch')]"
            },
            "ExpireOn": {
                "value" : "[parameters('ExpireOn')]"
            },
            "OwnerName": {
                "value" : "[parameters('OwnerName')]"
            }

        }
    }
}

If we examine this node, we have the classics: name, type, dependsOn, resourceGroup, apiVersion. Here we really want the Azure Functions to be fully deployed, so we need the FunctionApp to be created AND the GitHub sync to be complete; this is why there is also a dependency on Microsoft.Web/sites/sourcecontrols.

In the properties, we pass the mode as Incremental, as it will leave unchanged the resources that exist in the resource group but aren't specified in the template.

Learn more about the Azure Resource Manager deployment modes here as they are very powerful.

The second property is templateLink. This is really important, as it's the URL to the other ARM template. That URI must not be a local file or a file that is only available on your local network. You must provide a URI value that is downloadable over HTTP or HTTPS. In this case, it's a variable that contains the GitHub URL where the template is available.

Finally, we have the parameters, and this is how we pass the values to the second template. Let's skip those where I just pass the parameter value from the caller to the called, and focus on basename, AzureFunctionUrlListUrl, and AzureFunctionUrlShortenerUrl.

For basename, I just add a prefix to the received basename parameter; this way the resource names will be different, but we can still see the "connection". That's purely optional; you could have added this value as a parameter of azureDeploy.json. I prefer keeping the parameters to a minimum, as I think it simplifies the deployment for the users.

Finally, for AzureFunctionUrlListUrl and AzureFunctionUrlShortenerUrl, I needed to retrieve the URLs of the Azure Functions with their security tokens, because they are secured. I do that by concatenating different parts:

  • Beginning of the URL: 'https://'
  • A reference to the Function App, returning the value of its hostname: reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0]
  • The targeted Function, in this case UrlList, and the beginning of the querystring to pass the code (aka the security token): '/api/UrlList?code='
  • The listkeys function, used to retrieve the default Function key: listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default

Conditional parts


Now that the second ARM template can be deployed, let's add a condition so it gets deployed only when we desire. To do this, it's very simple: we need to add a condition property.

{
    "name": "FrontendDeployment",
    "type": "Microsoft.Resources/deployments",
    "condition": "[not(equals(parameters('frontend'), 'none'))]",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', variables('funcAppName'))]",
        "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('funcAppName'), 'web')]"
    ]
}

In this case, if the value of the parameter is different from none, the nested template will be deployed. When a condition ends up being "false", the entire resource is ignored during the deployment. How simple or complex your conditions are... that's your choice!
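As a usage example, here is what selecting a frontend could look like from the Azure CLI (a sketch; the deployment and resource group names are assumptions, and the template file is the azureDeploy.json mentioned above):

az group deployment create -n demo -g MyUrlShortener --template-file azureDeploy.json --parameters frontend=adminBlazorWebsite frontend-AdminEMail=admin@example.com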

Happy deployment. :)




How I Built a Budget-Friendly URL Shortener, Easy to Deploy and Customize


Available in French here

I don't know about you, but I share links/URLs very often. And a lot of the time it's from videos, so the links need to be short and easy to remember. Something like https://c5m.ca/project is better than a random string (aka a GUID). And this is how I started a project to build a URL shortener. I wanted it to be budget-friendly, easy to deploy, and customizable.

In this post, I will share how I built it, how you can use it, and how you can help!

Azure Url Shortener

How I built it, with the community


This tool was built during live stream coding sessions on Twitch (all videos are available in my YouTube archive). It's composed of two parts: a serverless backend leveraging Azure Functions & Azure Storage, and a frontend of your choice.

The backend is composed of a few Azure Functions that act as an on-demand HTTP API. They only consume resources when they are called. They are written in .Net Core, C# to be specific. When publishing this post, there are four functions:

  • UrlShortener: To create a short URL.
  • UrlRedirect: That's the one called when a short link is used. An Azure Function Proxy forwards all calls to the root.
  • UrlClickStats: Returns the statistics for a specific URL.
  • UrlList: Returns the list of all URLs created.

All the information, like the long URL, short URL, and click count, is saved in an Azure Storage Table.
And that's it. Super light, very cost-efficient. If you are curious about the price, I'll put references in the footnotes.

The frontend could be anything that can make HTTP requests. Right now in the project, I explain how to use a tool called Postman, and there is also a very simple interface that you can easily deploy.
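To give an idea, here is a sketch of what a raw HTTP call could look like with curl (the exact payload properties are illustrative; check the GitHub repository for the real contract):

curl -X POST "https://<your-function-app>.azurewebsites.net/api/UrlShortener?code=<function-key>" \
     -H "Content-Type: application/json" \
     -d '{"url": "https://github.com/FBoucher/AzUrlShortener", "vanity": "project"}'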



This simple interface is of course protected and gives you the options to see all URLs and create new ones.

How YOU can use it


All the code is available on GitHub, and it's deployable with a one-click button!

Deploy to Azure

This will deploy the backend in your Azure subscription in a few minutes. If you don't own an Azure subscription already, you can create your free Azure account today.

Then you will probably want an interface to create your precious URLs. Once more, in the GitHub repository, there is a list of admin interfaces available and ready to be used. The Admin Blazor Website is currently the most friendly and can also be deployed in one click.

How You can help and participate


Right now, there is really only one interface (and some instructions on how to use Postman to do the HTTP calls). But AzUrlShortener is an open-source project, meaning you can participate. Here are some suggestions:

  • Build a new interface (in the language of your choice)
  • Improve current interface(s) with
    • logos
    • designs
    • Better UI 🙂
  • Register bugs in GitHub
  • Make feature requests
  • Help with documentation/ translation

What's Next


Definitely come see the GitHub repo https://github.com/FBoucher/AzUrlShortener and click those deploy buttons. On my side, I will continue to add more features and make it better. See you there!


Video version






How to know how much your application is consuming on Azure

Are you worried about your billing when deploying a new application? Or afraid that you will receive an invoice from your cloud provider at the end of the month that puts you in a bad situation? You are not alone, and this is why you should definitely look at the Azure Cost Manager.



In the video below, I explain how to use the Azure Cost Manager to see the actual cost and forecast for a specific application in the cloud, and how to build a dashboard with your favorite information. Note that I said "cloud" because the Azure Cost Manager can monitor both Azure and AWS resources. That's perfect for when we are "in" the portal; this is also why I will show how you can create budgets and set alerts, so you can track your resources automatically.





~

Deploy to Azure Directly From the Repository with GitHub Actions


You've heard about the new GitHub Actions. Or maybe you haven't, but you would like to add continuous integration and continuous deployment (CI-CD) to your web application. In this post, I will show you how to add a CI-CD pipeline to deploy automatically to Azure using GitHub Actions.

What are GitHub Actions


GitHub Actions are automated workflows to do things. One of those things could be a CI-CD. Using a workflow, you could decide to compile and execute some unit tests at every push or pull request (PR). Another workflow could deploy that application.

In this article, I will deploy a .Net Core application to Azure. However, you can use any language you would like and deploy anywhere you like... I just needed to pick one :)

Now, let's get started.

Step 1 - The Code.


We need some code in a GitHub repo. Create a GitHub repo, clone it locally, and add your app in it. I created mine with dotnet new blazorserver -n cloud5minsdemo -o src. Then commit and push.

Step 2 - Define the workflow


We've got the code; now it's time to define our workflow. I will be providing all the code snippets required for the scenario covered in this post, but there are tons of templates ready to be used, available directly from your GitHub repository! Let's have a look. From your repository, click on the Actions tab, and voila!


When I wrote this post, a lot of the available templates assumed the Azure resources already existed and you were adding a CI-CD to the mix to automate your deployment. That's great, but in my case, I was building a brand new website, so those didn't fit my needs. This is why I created my own template. The workflow I created was inspired by Azure/webapps-deploy. And there is a lot of information also available on Deploy to App Service using GitHub Actions.

Let's add our template to our solution. GitHub will look in the folder .github/workflows/ at the root of the repository. There, create a file with the extension .yml.

Here is the code of my dotnet.yml; as with any YAML file, the secret is in the indentation, as it is whitespace-sensitive:

on: [push,pull_request]

env:
  AZURE_WEBAPP_NAME: cloud5minsdemo   # set this to your application's name
  AZURE_GROUP_NAME: cloud5mins2

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:

    # checkout the repo
    - uses: actions/checkout@master
    
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version:  3.0.101


    # dotnet build and publish
    - name: Build with dotnet
      run: dotnet build ./src --configuration Release
    - name: dotnet publish 
      run: |
        dotnet publish ./src -c Release -o myapp 

    - uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}

    - run: |
        az group create -n ${{ env.AZURE_GROUP_NAME }} -l eastus 
        az group deployment create -n ghaction -g ${{ env.AZURE_GROUP_NAME }} --template-file deployment/azuredeploy.json --parameters myWebAppName=${{ env.AZURE_WEBAPP_NAME }}


    # deploy web app using Azure credentials
    - name: 'Azure webapp deploy'
      uses: azure/webapps-deploy@v1
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        package: './myapp' 

    # Azure logout 
    - name: logout
      run: |
        az logout


The Agent


There is a lot in there; let's start with the first line. The on: defines the trigger; in this case, the workflow will be triggered at every push or PR.

The env: section is where you can declare variables. It's totally optional, but I think it helps when templates are more complex, or simply to reuse them easily.

Then comes the jobs: definition. In this case, we will use the latest version of Ubuntu as our build agent. Of course, in a production environment, you should be more specific and select the OS that matches your needs. This job will have multiple steps, defined in the, you guessed it, steps: section.

We specify a branch to work with and set up our agent by:

uses: actions/setup-dotnet@v1
with:
  dotnet-version: 3.0.101

This is because I have a .Net Core project. For a Node.js project, it would be:

uses: actions/setup-node@v1
with:
  node-version: 10.x

And it would be a better idea to set the version as an environment variable to be able to change it quickly.

The next two instructions are really .Net Core focused, as they build and package the application into a folder, myapp. Of course, in this section you could execute some unit tests or any other validation that you may find useful.

The next section may be less obvious.

- uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}

Access and Secrets


For our GitHub Action to be able to create resources and deploy the code, it needs to have access. The azure/login@v1 action will let the workflow log in, using a service principal. In other words, we will create an identity in the Azure Active Directory with enough permissions to do what we need.

Let's examine the following Azure CLI command:

`az ad sp create-for-rbac --name "c5m-Frankdemo" --role contributor --scopes /subscriptions/{subscription-id} --sdk-auth`

This will create a service principal named "c5m-Frankdemo" with the role "contributor" on the specified subscription. The contributor role can do mostly anything except grant permissions.

Because no resources existed already, the GitHub Action requires more permissions. If you create the resource group outside of the CI-CD, you could limit the access to only this specific resource group, using this command instead:

`az ad sp create-for-rbac --name "c5m-Frankdemo" --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} --sdk-auth`

The Azure CLI command will return a JSON object. We will copy-paste this JSON into a GitHub secret. GitHub secrets are encrypted and allow you to store sensitive information, such as access tokens, in your repository. To access them, go to the Settings of the repository and select Secrets from the left menu.
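The returned JSON should look similar to this abbreviated sketch (placeholder values; a few endpoint properties are omitted):

{
  "clientId": "<GUID>",
  "clientSecret": "<secret>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>",
  ...
}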


Click the Add a new secret button, and type AZURE_CREDENTIALS as the name. It could be anything, as long as you use that value in the YAML file describing the workflow. Put the JSON, including the curly brackets, in the Value textbox and click the save button.

Provisioning the Azure Resources


Now that the workflow has access, we can execute some Azure CLI commands; let's see what was missing:

- run: |
    az group create -n ${{ env.AZURE_GROUP_NAME }} -l eastus 
    az group deployment create -n ghaction -g ${{ env.AZURE_GROUP_NAME }} --template-file deployment/azuredeploy.json --parameters myWebAppName=${{ env.AZURE_WEBAPP_NAME }}

The first command creates an Azure resource group, where all the resources will be created. The second one deploys the website using an Azure Resource Manager (ARM) template. The --template-file deployment/azuredeploy.json part tells us the template is a file named azuredeploy.json located in the folder deployment. Notice that the application name is passed to a parameter myWebAppName, using the environment variable.

An ARM template is simply a flat file that looks a lot like a JSON document. You can use any text editor; I like doing mine with Visual Studio Code and two extensions: Azure Resource Manager Snippets, and Azure Resource Manager (ARM) Tools. With those tools, I can build ARM templates very efficiently. For this template, we need a service plan and a web app. Here is what the template looks like.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "myWebAppName": {
           "type": "string",
           "metadata": {
                "description": "WebAppName"
            }
        }
    },
    "variables": {},
    "resources": [
        {
            "name": "[parameters('myWebAppName')]",
            "type": "Microsoft.Web/sites",
            "apiVersion": "2016-08-01",
            "location": "[resourceGroup().location]",
            "tags": {
                "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/frankdemoplan')]": "Resource",
                "displayName": "[parameters('myWebAppName')]"
            },
            "dependsOn": [
                "[resourceId('Microsoft.Web/serverfarms', 'frankdemoplan')]"
            ],
            "properties": {
                "name": "[parameters('myWebAppName')]",
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'frankdemoplan')]"
            }
        },
        {
            "name": "frankdemoplan",
            "type": "Microsoft.Web/serverfarms",
            "apiVersion": "2018-02-01",
            "location": "[resourceGroup().location]",
            "sku": {
                "name": "F1",
                "capacity": 1
            },
            "tags": {
                "displayName": "frankdemoplan"
            },
            "properties": {
                "name": "frankdemoplan"
            }
        }
    ],
    "outputs": {},
    "functions": []
}

This template is simple; it only contains the two required resources: a service plan and a web app. To learn more about ARM templates, you can read my other post or check out this excellent introduction in the documentation.

Once the template is created, save it in its folder (deployment, in this case).

The deployment


There are only two last steps to the YAML file: the deployment and logout. Let's have a quick look at the deployment.

# deploy web app using Azure credentials
- name: 'Azure webapp deploy'
  uses: azure/webapps-deploy@v1
  with:
    app-name: ${{ env.AZURE_WEBAPP_NAME }}
    package: './myapp' 


Now that we are sure the resources exist in Azure, we can deploy the code. This is done with azure/webapps-deploy@v1, which takes the package generated by dotnet publish from the myapp folder. Since we are already authenticated, there is no need to specify anything at this point.

Everything is ready for the deployment. You just need to commit and push (into master) and the GitHub Action will be triggered. You can follow the deployment by going into the Actions tab.



After a few minutes, the website should be available in Azure. This post only shows a very simple build and deployment, but you can do so many things with those GitHub Actions, like executing tasks or packaging a container... I would love to know how you use them. Leave a comment or reach out on social media.


If you prefer, I also did a video of this post:



~

Let the adventure begin!


It's coming! I'm not talking about winter here, but about Microsoft Ignite, one of the biggest Microsoft events for the DevOps and Devs among us. This 4-day-long event is a fantastic opportunity to get up to date with your favorite technology, learn the best practices, get certified, and meet tons of experts to talk about your projects!



This year it's very special for me, because I have the great pleasure to be part of the adventure: I will be presenting two sessions that are part of the Learning Paths.

A Learning Path is a series of connected learning modules that include sessions, hands-on experiences, technical workshops, certifications, and expert connections. Learning Paths work together to build upon what you've learned, to provide a comprehensive set of skills to help you reach your goals.

Figuring out Azure Functions (AFUN95)
Tailwind Traders is curious about the concept behind “serverless” computing – the idea that they can run small pieces of code in the cloud, without having to worry about the underlying infrastructure. In this session, we cover the world of Azure Functions, starting with an explanation of the servers behind serverless, exploring the languages and integrations available, and ending with a demo of when to use Logic Apps and Microsoft Flow.

Options for building and running your app in the cloud (APPS10)
See how Tailwind Traders avoided a single point of failure using cloud services to deploy their company website to multiple regions. We cover all the options they considered, explain how and why they made their decisions, then dive into the components of their implementation. In this session, see how they used Microsoft technologies like Visual Studio Code, Azure Portal, and Azure CLI to build a secure application that runs and scales on Linux and Windows VMs and Azure Web Apps with a companion phone app.

But wait, there is more...


For the second time, after Microsoft Ignite "the event", another event starts: a tour! This year, Microsoft Ignite The Tour will visit thirty (30) cities! All continents, except Antarctica (why?! 😉), will be visited. Check the complete list of cities on the website and mark the dates on your calendar!

I will also be presenting many different sessions during this tour. While I'm not doing all of them, I'll be present on many occasions.

Looking forward to meeting you


So if you are planning to go to one of those events and would like to meet to talk about your project, show some bugs, ask questions, or just to chat: reach out! It's ALWAYS a pleasure, and I'll bring some stickers and some special swag1 with me...

Drop me a message on Twitter, LinkedIn, YouTube, Instagram, and Facebook.


1: Follow me on Twitter to know more about this ;)

Cleaning your mess in the cloud automatically



We all do it. We create resources in the cloud for a demo or a presentation and forget about them. Then at the end of the month, we receive a bigger invoice than expected, and it's panic.

This is why I thought about AzSubscriptionCleaner. It's an open-source project that can be deployed in your subscription very easily. The goal is to have it deployed in one click directly from GitHub.

The tool can be deployed in two versions, using Azure Automation or Azure Functions. Based on a schedule, it executes a query to find all resources with a tag expireOn whose value is older than now(), and deletes them.
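For example, you could tag a whole resource group from the Azure CLI when creating your demo; a sketch (the group name and date format are illustrative, check the project's documentation for the expected format):

az group update -n MyDemoGroup --set tags.expireOn='2019-12-31'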

I wrote two blog posts, paired with YouTube videos, that explain how the tools were built.

Azure Automation


Read the complete post on Dev.to: Keep your Azure Subscription Clean Automatically

Video:


Azure Functions


Read the complete post on Dev.to: Use Azure Function to Clean-up your Mess, Automatically

Video:


GitHub Repo


This is an open-source project: github.com/FBoucher/AzSubcriptionCleaner. You are welcome to look at the code, clone the repository, ask for more features, or do a pull request to add new ones!

~

How to Deploy your Azure Function Automatically with ARM template (4 different ways)

It's so nice to be able to add serverless components to our solutions to make them better in a snap. But how do we manage them? In this post, I will explain how to create an Azure Resource Manager (ARM) template to deploy any Azure Function, and show how I used this structure to deploy an open-source project I've been working on these days.

Part 1 - The ARM template

An ARM template is a JSON file that describes our architecture. To deploy an Azure Function, we need at least three resources: a FunctionApp, a service plan, and a storage account.


The FunctionApp is, of course, our function. The service plan could be set as dynamic, or describe the type of resource that will be used by your function. The storage account is where our code lives.


In the previous image, you can see how those components interact with each other. Inside the Function, we will have a list of properties. One of those properties will be the runtime; for example, in the AZUnzipEverything demo, it will be dotnet. Another property will be the connection string to our storage account, which is also part of our ARM template. Since that resource doesn't exist yet, we will need to use dynamic code.

The Function node will contain a sub-resource of type sourcecontrols. This is where we will specify where our code is, so it can be cloned to Azure.

Building ARM for a Simple Function


Let's see a template for a simple Azure Function that doesn't require any dependency, and we will examine it after.

You can use any text editor to edit your ARM template. However, VSCode bundled with the extensions Azure Resource Manager Tools and Azure Resource Manager Snippets is particularly efficient.
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2018-07-01",
            "name": "storageFunc",
            "location": "[resourceGroup().location]",
            "tags": {
                "displayName": "storageFunc"
            },
            "sku": {
                "name": "Standard_LRS"
            },
            "kind": "StorageV2"
        },
        {
            "type": "Microsoft.Web/serverfarms",
            "apiVersion": "2018-02-01",
            "name": "servicePlan",
            "location": "[resourceGroup().location]",
            "sku": {
                "name": "Y1",
                "tier": "Dynamic"
            },
            "properties": {
                "name": "servicePlan",
                "computeMode": "Dynamic"
            },
            "tags": {
                "displayName": "servicePlan"
            }
        },
         {
              "apiVersion": "2015-08-01",
              "type": "Microsoft.Web/sites",
              "name": "functionApp",
              "location": "[resourceGroup().location]",
              "kind": "functionapp",
              "dependsOn": [
                "[resourceId('Microsoft.Web/serverfarms', 'servicePlan')]",
                "[resourceId('Microsoft.Storage/storageAccounts', 'storageFunc')]"
              ],
              "properties": {
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'servicePlan')]",
                "siteConfig": {
                  "appSettings": [
                    {
                      "name": "AzureWebJobsDashboard",
                      "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', 'storageFunc', ';AccountKey=', listKeys('storageFunc','2015-05-01-preview').key1)]"
                    },
                    {
                      "name": "AzureWebJobsStorage",
                      "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', 'storageFunc', ';AccountKey=', listKeys('storageFunc','2015-05-01-preview').key1)]"
                    },
                    {
                      "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
                      "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', 'storageFunc', ';AccountKey=', listKeys('storageFunc','2015-05-01-preview').key1)]"
                    },
                    {
                      "name": "WEBSITE_CONTENTSHARE",
                      "value": "storageFunc"
                    },
                    {
                      "name": "FUNCTIONS_EXTENSION_VERSION",
                      "value": "~2"
                    },
                    {
                      "name": "FUNCTIONS_WORKER_RUNTIME",
                      "value": "dotnet"
                    }
                  ]
                }
              },
              "resources": [
                  {
                      "apiVersion": "2015-08-01",
                      "name": "web",
                      "type": "sourcecontrols",
                      "dependsOn": [
                        "[resourceId('Microsoft.Web/sites/', 'functionApp')]"
                      ],
                      "properties": {
                          "RepoUrl": "https://github.com/FBoucher/AzUnzipEverything.git",
                          "branch": "master",
                          "publishRunbook": true,
                          "IsManualIntegration": true
                      }
                 }
              ]
            }
        
    ],
    "outputs": {}
}

The Storage Account


The first resource listed in the template is the storage account. There is nothing specific about it.

The Service Plan


The service plan is the second resource in the list. It's important to notice that to be able to use the Dynamic SKU, the apiVersion needs to be at least "2018-02-01". Then you specify the SKU:

    "sku": {
        "name": "Y1",
        "tier": "Dynamic"
    }

Of course, you can use another SKU if you prefer.

The Function App


This is the final resource added to the mix, and this is where all the pieces come together. It's important to notice that the order in which the resources are listed is not considered by Azure while deploying (it's only for us ;) ). To let Azure know the deployment order, you need to add dependencies:

"dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms', 'servicePlan')]",
    "[resourceId('Microsoft.Storage/storageAccounts', 'storageFunc')]"
]

This way, the Azure Function will be created after the service plan and the storage account are available. Then, in the properties, we will be able to build the connection string to the blob storage using a reference:

{
    "name": "AzureWebJobsDashboard",
    "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', 'storageFunc', ';AccountKey=', listKeys('storageFunc','2015-05-01-preview').key1)]"
}

The last piece of the puzzle is the sub-resource sourcecontrols inside the FunctionApp. This defines where Azure should clone the code from, and from which branch:

"resources": [
    {
        "apiVersion": "2015-08-01",
        "name": "web",
        "type": "sourcecontrols",
        "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', 'functionApp')]"
        ],
        "properties": {
            "RepoUrl": "https://github.com/FBoucher/AzUnzipEverything.git",
            "branch": "master",
            "publishRunbook": true,
            "IsManualIntegration": true
        }
    }
]

To be sure that everything is fully automatic, the properties publishRunbook and IsManualIntegration must be set to true. Otherwise, you will need to do a synchronization between your Git (in this case on GitHub) and the Git in Azure.

There is excellent documentation that explains many different scenarios in Automate resource deployment for your function app in Azure Functions.

Azure Unzip Everything


To deploy the project AzUnzipEverything, available on GitHub, I needed one more Azure Storage account with pre-defined containers (folders).


Of course, all the source code of both the Azure Function and the ARM template is available on GitHub, but let me highlight how the containers are defined from an ARM template:

"resources": [
    {
        "type": "blobServices/containers",
        "apiVersion": "2018-07-01",
        "name": "[concat('default/', 'input-files')]",
        "dependsOn": [
            "storageFiles"
        ],
        "properties": {
            "publicAccess": "Blob"
        }
    }
]

Just like with sourcecontrols, we need to add a list of sub-resources to our storage account. The name MUST start with 'default/'.

Part 2 - Four Deployment Options

Now that we have a template that describes our needs, we just need to deploy it. There are multiple ways to do it; let's see four of them.

Deploy from the Azure Portal


Navigate to the Azure portal (https://portal.azure.com) from your favorite browser and search for "deploy a custom template" directly in the search bar located at the top of the screen (in the middle), or go to https://portal.azure.com/#create/Microsoft.Template. Once in the Custom deployment page, click on the link Build your own template in the editor. From there, you can copy-paste or upload your ARM template. You need to save it to see the real deployment form.


Deploy with a script


Whether in PowerShell or in Azure CLI, you can easily deploy your template with these commands.

In Azure CLI

# create resource group
az group create -n AzUnzipEverything -l eastus

# deploy it
az group deployment create -n cloud5mins -g AzUnzipEverything --template-file "deployment\deployAzure.json" --parameters "deployment\deployAzure.parameters.json"  

In PowerShell

# create resource group
New-AzResourceGroup -Name AzUnzipEverything -Location eastus

# deploy it
New-AzResourceGroupDeployment -ResourceGroupName  AzUnzipEverything -TemplateFile deployment\deployAzure.json

Deploy to Azure Button


One of the best ways to help people deploy your solution in their Azure subscription is the Deploy to Azure button.



You need to create an image link (in HTML or Markdown) pointing to a special destination built in two parts.

The first one is a link to the Azure Portal:

https://portal.azure.com/#create/Microsoft.Template/uri/

And the second one is the location of your ARM template:

https%3A%2F%2Fraw.githubusercontent.com%2FFBoucher%2FAzUnzipEverything%2Fmaster%2Fdeployment%2FdeployAzure.json

However, this URL needs to be encoded. There are plenty of encoders online, but you can also do it from the terminal with the following command (a big thanks to @BrettMiller_IT, who showed me this trick during one of my live streams):

[System.Web.HttpUtility]::UrlEncode("https://raw.githubusercontent.com/FBoucher/Not-a-Dog-Workshop/master/deployment/deployAzure.json")

Clicking the button will bring the user to the same page on the Azure portal, but in their subscription.
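Put together in Markdown, the button could look like this (using the image commonly used for the official button):

[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FFBoucher%2FAzUnzipEverything%2Fmaster%2Fdeployment%2FdeployAzure.json)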

Azure DevOps Pipeline

From the Azure DevOps portal (https://dev.azure.com), select your project and create a new Release Pipeline. Click on the + Add an artifact button to connect your Git repository.



Once it's added, you need to add a task to the current job. Click on the link 1 job, 0 task. Now you just need to specify your Azure subscription, the name of the resource group, and the location of your ARM template inside your repository. To make the deployment automatic with each push to the repository, click the little lightning bolt and enable the Continuous deployment trigger.


Wrapping-up

Voila, you now have four different ways to deploy your Azure Function automatically. But don't take my word for it: try it yourself! If you need more details, you can visit the project on GitHub or watch this video where I demo the content of this post.


The Dog-Not-a-Dog Workshop


I recently presented a workshop at the TOHack to get started with Azure. The goal was to try different Azure services and see how we could augment an existing website using a serverless function and artificial intelligence.
(Aussi disponible en français)


During this workshop, a website is deployed automatically from GitHub. Then, by adding an Azure Function and using the Vision API of Azure Cognitive Services, the final solution is able to detect whether uploaded pictures are dogs or not, and keeps our image folder "clean". We call that application: the automatic Not a Dog application.

The step-by-step instructions, with the code, can be found on GitHub - Not-a-Dog-Workshop. The workshop can be done in about 45-60 minutes.

I also did a video that is available on my YouTube channel:



If you have questions or you are blocked, it will be a pleasure to help you.


Be more Productive by using Inline Code in your Azure Logic App

In a project using Azure Logic Apps that I am working on, I needed to manipulate strings. I could create APIs or Azure Functions, but the code is very simple and doesn't use any external libraries. In this post, I will show you how to use the new Inline Code action to execute your code snippet directly inside your Logic App.

Quick Context


The Logic App will read a file from my OneDrive (it would also work with Dropbox, Box, etc.). Here is an example of the file:

Nice tutorial that explains how to build, using postman, an efficient API.[cloud.azure.postman.tools]

The goal is to extract tags, contained between the square brackets, from the text.

Logic App: Get File Content


From the Azure Portal, create a new Logic App by clicking the big green "+" button in the top left corner and searching for Logic App.

For this demo, I will use the Interval as a trigger because I will execute the Logic App manually.

The first step will be a Get File Content action from the OneDrive connector. Once you authorized Azure to access your OneDrive folder, select the file you want to read. For me, it's /dev/simpleNote.txt

Integration Account


To access the workflowContext, the Azure Logic App requires an Integration Account. The next step is to create one. Save the current Logic App, and click on the big "+" button in the top right corner. This time, search for integration. Select Integration Account, and complete the form to create it.


We now need to assign it to our Logic App. From the Logic App blade, in the options list select Workflow Settings. Then select your integration account, and don't forget to save!

Logic App: Inline Code


To add the action at the end of your workflow, click the New step button. Search for Inline Code, and select the action Execute JavaScript Code.


Before copy-pasting the code into the new Inline Code action let's have a quick look.

var note = "" + workflowContext.actions.Get_file_content.outputs.body;
var posTag = note.lastIndexOf("[") + 1;
var cleanNote = {};

if (posTag > 0) {
    cleanNote.tags = note.substring(posTag, note.length - 1);
    cleanNote.msg = note.substring(0, posTag - 1);
}
return cleanNote;

On the first line, we assign the variable note the content of the Get_file_content outputs. We access it using the workflowContext. This context has access to the trigger and the actions. To find the name of an action, replace the spaces with the underscore character "_".
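For the sample note shown earlier, the snippet would return something like this:

{
  "tags": "cloud.azure.postman.tools",
  "msg": "Nice tutorial that explains how to build, using postman, an efficient API."
}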


You can also switch to Code View, and see the name of all components from the JSON code.

Logic App: Use Inline Code Result


Of course, you can use the output of your Inline Code with other steps. You just need to use the Result from the dynamic content menu.


If for some reason the dynamic content list doesn't contain your Inline Code, you can always add the code directly @body('Cleaning_Note')?['body'].


Your Logic App should now look like this:


Verdict


The Inline Code action is very promising. Right now, it's limited to JavaScript and cannot access variables or loops from the workflow. However, for simple code that doesn't require any references, it's easier to maintain and deploy. You can learn more about what exactly is covered or not here.
And it works, as this result shows.


Do you prefer watching instead of reading?


I also have a video of this post if you prefer.


