
It's Time To Ditch your USB Keys

On day 2 of GitKon 2023, I presented a short, beginner-friendly introduction to Git without using the command line. Too many people are still using USB keys today to share files and collaborate on documents. When asked why they don't use Git, the answer is most likely that it's too complicated, too technical, and too much work.

Here is the good news: it doesn't need to be! This video explains why Git is for everyone and shares simple tips to make the "how" accessible.

It's now available on-demand. πŸ™


Database to go! The perfect database for developers

When building a new project, we don't need a big database that scales and has lots of data, but we do still need some kind of data source. Of course, it is possible to fake it and have some hardcoded value returned by an API but that takes time to create and it's not a database. In this post, I want to share a solution to have a portable, self-healing, disposable, disconnected database that doesn't require any installation.

The solution? Put the database in a container! It doesn't matter what database you are planning to use or on which OS you are developing. Most databases will have an official image available on Docker Hub and Docker runs on all platforms. If you feel uncomfortable with containers, have no fear, this post is beginner-friendly.

This post is part of a series where I share my experience while building a Dungeon crawler game. The code can be found on GitHub.


The Plan

Have a database ready at the "press of a button". By "ready", I mean up and running, with data in it, and accessible to all developer tools.

Preparation for the Database

We need a script to create the database schema and some data. There are many ways to achieve this. A beginner-friendly way is to create an empty database and use a tool like Azure Data Studio to help create the SQL scripts. Doing it this way will validate that the script works.

The Docker command to create the database's container will change a little depending on the database you are using, but here is what a MySQL one looks like:

docker run --name some-mysql -e MYSQL_ROOT_PASSWORD='rootPassword' -p 3306:3306 -d mysql 

Where some-mysql is the name you want to assign to your container, rootPassword is the password to be set for the MySQL root user and -d means that the container will run detached. The -p option is used to map the port 3306 of the container to the port 3306 of the host. This is required to be able to connect to the database from the host.
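Before going further, you can quickly confirm the server is up; a small sketch, assuming the container name and password used above.

# confirm the some-mysql container is running
docker ps --filter "name=some-mysql"

# open an interactive MySQL shell inside the container (enter rootPassword when prompted)
docker exec -it some-mysql mysql -uroot -p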

output of the docker run command


Now, a MySQL server is running inside the container. To connect to the server with Azure Data Studio, use the MySQL extension for Azure Data Studio. Microsoft has a QuickStart: Use Azure Data Studio to connect and query MySQL if needed. Create a new connection in Azure Data Studio, then create a database (ex: 2d6db).

Create a new connection in Azure Data Studio

You can use the MySQL command-line tool if you prefer, but Azure Data Studio offers a lot of help when you are not that familiar with SQL. You can even use the Copilot extension and ask it to write the SQL statement for you. It's pretty good!

If you want to learn more about this, check the Open at Microsoft episode: Copilot is now in Azure Data Studio and this is how it can help you! to see it in action.

It's fantastic for generating a first draft of the CREATE statements and for writing queries.

Copilot writing SQL

Let's create two SQL scripts. The first one will be to create the schema with all the tables. The idea here is to write the script and execute it to validate the results. Here is an example creating only one table to keep the post simple.

-- schema.sql

CREATE TABLE IF NOT EXISTS 2d6db.rooms (
  id int NOT NULL AUTO_INCREMENT,
  roll int DEFAULT 0,
  level int DEFAULT 1,
  size varchar(10) DEFAULT NULL,
  room_type varchar(255) DEFAULT NULL,
  description varchar(255) DEFAULT NULL,
  encounter varchar(255) DEFAULT NULL,
  exits varchar(255) DEFAULT NULL,
  is_unique bool DEFAULT false,
  PRIMARY KEY (id)
);

Now that there are tables in the database, let's fill them with seed data. For this, the second SQL script will contain INSERT statements to populate the tables. We don't need all the data, only what will be useful when developing. Think about creating data that covers all types or scenarios; it's a development database, so it should contain data that helps you code.

-- data.sql

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (2,1,'Empty space', 'small','There is nothing in this small space', 'Archways',false);

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (3,1,'Strange Text', 'small','This narrow room connects the corridors and has no furniture. On the wall though...', 'Archways',false);

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (4,1,'Grakada Mural', 'small','There is a large mural of Grakada here. Her old faces smiles...', 'Archways',true);
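If you prefer the command line over Azure Data Studio to run these scripts, you can feed them to the MySQL client inside the running container. This is just a sketch, assuming the container name, root password, and database created earlier (some-mysql, rootPassword, 2d6db); adjust the script paths to wherever you saved them.

# create the tables (the 2d6db database must already exist)
docker exec -i some-mysql mysql -uroot -prootPassword < schema.sql

# insert the seed data
docker exec -i some-mysql mysql -uroot -prootPassword < data.sql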

Note: You can now stop the database's container with the command: docker stop some-mysql. We don't need it anymore.

Putting All the Pieces Together

This is when the magic starts to show up. Using a Docker Compose file, we will start the database container and execute the two SQL scripts to create and populate the database.

# docker-compose.yml

services:
  
  2d6server:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_DATABASE: '2d6db'
      MYSQL_ROOT_PASSWORD: rootPassword
    ports:
       - "3306:3306"
    volumes:
      - "../database/scripts/schema.sql:/docker-entrypoint-initdb.d/1.sql"
      - "../database/scripts/data.sql:/docker-entrypoint-initdb.d/2.sql"

A docker-compose.yml file is written in YAML and is usually used to start multiple containers at once, but it doesn't have to. In this scenario, the file defines a single container named 2d6server, using the same MySQL image and configuration as the previous Docker command. The last section, volumes, is new. It maps the path where the SQL scripts are located to /docker-entrypoint-initdb.d inside the container. When MySQL starts, it executes the files in that specific folder in alphabetical order. This is why the scripts are renamed 1.sql and 2.sql, as the tables must be created first.

To get the database up and ready, we execute docker compose up.


# start the database
docker compose -f /path_to_your_file/docker-compose.yml up -d 

# stop the database
docker compose -f /path_to_your_file/docker-compose.yml down

By default, the command looks for a docker-compose.yml file. If your file has a different name, use the -f argument to specify it. Optionally, to free the prompt, you can pass the -d argument to run in detached mode (it applies to up).
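To confirm that the initialization scripts ran, you can query the freshly started container. A quick sketch, reusing the service name, password, and database from the compose file above; run it from the folder containing docker-compose.yml, or add the same -f argument.

# check that the service is up
docker compose ps

# count the seeded rows in the rooms table
docker compose exec 2d6server mysql -uroot -prootPassword -e "SELECT COUNT(*) FROM 2d6db.rooms;"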

Docker Compose commands

When you are done coding and you don't need the database anymore, execute the docker compose down command to free up your computer. Compared to when the server is installed locally, a container will leave no trace; your computer is not "polluted".

When you need to update the database, edit the SQL scripts first. When the scripts are ready, recreate the container to get the database refreshed: a plain restart is not enough, because the initialization scripts only run when the container starts with an empty data directory, as shown below.
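Here is what that refresh could look like, reusing the same -f argument as before:

# recreate the container so the scripts in docker-entrypoint-initdb.d run again
docker compose -f /path_to_your_file/docker-compose.yml down
docker compose -f /path_to_your_file/docker-compose.yml up -d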

To Conclude

Now, you only need to execute one simple command to get a fresh database, whenever you want. Developers no longer need to have a database server installed and configured locally. And you don't need to worry about deleting or modifying data, like when using a shared database. After cloning the repository, all developers will have everything they need to start coding.

In a future post, I will share how I used Azure Data API Builder to generate a complete API on top of the database using the same Docker Compose method.

Video version!

If you prefer watching instead of reading, here is the video version of this post!

My Ubuntu laptop finally works with my docking station!

Targus USB-C Docking Station

I've been running Ubuntu on my personal laptop, a Dell Inspiron 13. I love it. It's slim, performant, and I can code, play, read, and stream without any difficulty. Yet when I tried to use my bigger PC monitors by connecting my laptop to a docking station, I had a lot of trouble. I tried two docking stations, but neither worked with Ubuntu. Some solutions found online suggested rebuilding my kernel, but I didn't want to do that; it felt too extreme for something that should be trivial.

This post is about how I finally got my docking station to work with Ubuntu 22.04.

My current docking station is a Targus USB-C Docking Station and it works perfectly with other Windows devices. While doing some cleaning on my desk, a logo on the dock caught my attention: the DisplayLink logo. And that was the beginning of the end of my problem with that docking station.


DisplayLink logo on a Targus USB-C Docking Station


After a quick search (aka Googling with DuckDuckGo), I found exactly what I was looking for on the Synaptics website: a solution for older Linux versions!

Following the instructions in How to install DisplayLink Software - Ubuntu, I was able to download the Synaptics Ubuntu drivers and install them in a few minutes. After a reboot... it worked!

I hope it will help others with the same problem. 

How to edit a JSON object inside an Azure Logic App

I use Azure Logic Apps in many of my solutions; I find them so convenient for integrating different systems. Recently one of them was failing, and by looking at the error message, The property value exceeds the maximum allowed size, I knew what was wrong. I was trying to save a JSON object into a Storage table, but one property was too long. For this particular case I didn't need the value contained in that property, so the plan was to delete it. At first, I thought it wasn't possible to edit a variable of type object in a Logic App, but it is!

In this post I will show how to use the Compose action and the setProperty expression to perform data operations in Azure Logic Apps.

The Context

The first thing to know is that the Compose action does not update the current object but creates a new one. The JSON object used here is quite short to keep the demo simple.

{
	"firstname":"Frank",
	"lastname":"Boucher",
	"alias":"fboucheros",
	"bio":"With many years of experience in the IT industry, FranΓ§ois (Frank) Boucher is a trusted Microsoft Azure professional whose expertise and bilingual service are relied upon in large Canadian markets (Ottawa and Montreal) as well as internationally. Among his many accolades, Francois has been awarded four times Microsoft Azure MVP status, named a Microsoft Azure Advisor, and Microsoft Azure P-Seller. Frank created the “Cloud 5 minutes” show. Where every second week, a new episode that answers a different technical question, is published both in French and English (cloud5mins.com). "
}

The Details

The Logic App receives a JSON object from the request body. This is transformed into a Person object. To empty the bio property, the Compose action will be used.

Inside the Compose action, use the context menu to find setProperty in the Expression section. The expression setProperty takes three parameters: the object, the property name, and the edited value. In this demo the goal is to empty the property, therefore an empty string is assigned, like this:

  setProperty(variables('Person'),'bio','')

The edited object is accessible from the output of the Compose action, and this is what will be returned in the Response action.

The Result

By adding a single action, it's possible to edit a JSON object in a Logic App without using inline code or external tools. A demo wouldn't be complete without an end-to-end run, so here is the result of an HTTP POST to the Logic App passing the Person JSON, along with the returned result.

Video version

If you prefer, I also have a video version of this post.



~ Frank



How to re-open a repository cloned in a volume, from Visual Studio Code

I love Dev Containers, I use them a lot for most of my development. One of my favorite options is to clone a repository directly in a docker volume.


It takes a few seconds and you can work on your code without installing any SDKs or language that your current machine doesn't have. Marvelous!

Ideally, at the end of your session, you push your code to another repository (ex: GitHub). However, sometimes I forget, or I am interrupted and start working on something else, and my changes are not pushed.

How do you re-open that environment?! In this post, I want to share two ways that I use.

Open Recent

The first method is to use the history of the editor! For example, here in Visual Studio Code, select the File menu and Open Recent.


If you haven't opened too many folders since you last used that dev container, it should be present, as displayed in the image. It should look like: <Name of the repository> in a unique [Dev Container].

Make sure Docker is already running, then select it. Voila, in a snap you are back in the dev environment with your last changes waiting for you.

Open the Container

There are a few different ways to do this next solution; I will share the one I consider the easiest for people who are not Docker experts.

First, if it's not already present in your VSCode, add the Docker extension (identifier: ms-azuretools.vscode-docker). Then, from this new extension, in the top section named Containers, search for your container. Its name should start with "vsc" (for Visual Studio Code), then a hyphen and the name of the repository you cloned. Right-click on it and select Start. After a few seconds, the container should have a little green triangle on its side and be ready to continue.
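If you are comfortable in a terminal, the same thing can be done with two Docker commands. A small sketch; the exact container name is generated by VSCode, so copy yours from the first command (the one shown here is made up).

# list the containers created by "Clone Repository in Container Volume" (names start with vsc-)
docker ps -a --filter "name=vsc-"

# start the one matching your repository, using the name or ID from the list above
docker start vsc-myrepository-1a2b3c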


The next step consists of attaching the container to VSCode. Once more, from the Docker extension, right-click on the container and select Attach Visual Studio Code.


This will open a new VSCode window; we are mostly done, but there is one last step. You will notice that the file explorer is empty. No worries, we will fix that now. The terminal should be open in the home folder of the root user. Let's open our project folder by executing the command:

cd /workspaces/<repository-name>

Then the final command is to re-open VSCode in this folder and let the Dev Container do its magic. Execute the command:

code . -r

(The -r is to re-use the same VSCode window. It's optional; if not provided, a new VSCode instance will open.)


And voila! The Dev Container is just as it was before.


If you know other ways to achieve this, leave a comment or reach out, I'm always happy to learn more.


~ frank



How to simplify a Docker run command

I recently wanted to create animated GIFs from videos. The idea was to get video previews in a very lightweight file. After a quick search online, I found FFmpeg, a fantastic multimedia framework to manipulate media. There are also a few wrappers that exist in different languages (ex: C#, JavaScript), but you still need to install FFmpeg locally, and I didn't want that. In fact, I wanted two things: a simple solution that doesn't require any local installation, and something in the cloud. In this post, I want to share how I achieved the first one.

All the code and the container are available on GitHub and Docker Hub.

First Contact

The ffmpeg framework is very powerful and can do so many things; therefore, it's normal that it has a ton of possible parameters and extensions. After some time spent in the documentation and a few trials and errors, I found how to do exactly what I needed by calling it this way:

ffmpeg -r 60 -i $INPUTFILE -loop 0 -vf scale=320:-1 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > $OUTPUTFILE

This will create a five second animated GIF from a video. It speeds up the video and lowers the framerate of the GIF to keep the output lightweight. Here is an example.

Hello World episode 5 seconds preview

This is great, but it is not very friendly. How can someone who only creates a video once in a while be expected to remember all those parameters?! Even harder, when the video is vertical, some parameters have different values. It was time to simplify, and here is how I did it. Note that I'm a Docker beginner; if you think there is a simpler or better way to do some steps, let me know, and let's learn together.

The Plan

The plan is simple: execute a simple Docker command like docker run fboucher/aciffmpeg -i NotInTheSky.mp4 and generate a video preview. To build our ephemeral container we will start with something lightweight like alpine, install ffmpeg and add a script that would be executed as the container runs. That sounds like an excellent plan, let's do it!

Writing the Script

The script is simple, but I learned a few things writing it; this is why it's included in this post. The goal was simple: execute the ffmpeg command using values from two parameters: the file path, and whether the video is vertical. Here is the script:

#!/bin/sh

while getopts ":i:v" opt; do
  case $opt in
    i) inputFile="$OPTARG"
    ;;
    v) isVertical=true
    ;;
    \?) echo "Invalid option -$OPTARG" >&2
    exit 1
    ;;
  esac

  case $OPTARG in
    -*) echo "Option $opt needs a valid argument"
    exit 1
    ;;
  esac
done

if [ -z "$isVertical" ]; then isVertical=false; fi

# used for bash 
#IFS='.'
#read -a filePart <<< "$inputFile"
#outputFile="${filePart[0]}.gif"

# used for dash 
filename=$(echo "$inputFile" | cut -d "." -f 1)
outputFile="$filename.gif"

if $isVertical
then
  ffmpeg -r 60 -i $inputFile -loop 0 -vf scale=-1:320 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > $outputFile
else
  ffmpeg -r 60 -i $inputFile -loop 0 -vf scale=320:-1 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > $outputFile
fi

Things I learned: Parameter without values

The script needs to be as friendly as possible, therefore any unnecessary information should be removed. Most videos will be horizontal, so let's make that parameter optional. However, I don't want users to have to specify a value, like script.sh -i myvideo.mp4 -v true, but instead simply script.sh -i myvideo.mp4 -v. This is very simple to do, once you know it. On the first line of code, where I get the parameters with getopts ":i:v", note that there is no ":" after the "v". This specifies that we are not expecting any value for that option.
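Here is a tiny standalone sketch of that getopts detail, outside of the real script: i expects a value (the ":" after it), while v is just a flag.

#!/bin/sh
# "i:" means -i takes a value; "v" alone means -v takes none
while getopts ":i:v" opt; do
  case $opt in
    i) echo "input file: $OPTARG" ;;
    v) echo "vertical flag set" ;;
  esac
done
# ./demo.sh -i myvideo.mp4 -v   -> prints both lines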

Things I Learned: Bash and Dash

As mentioned earlier, the container will be built from Alpine. Alpine doesn't have bash but instead uses dash as its shell. It's mostly the same, but there are some differences. The first one is the shebang (aka "#!/bin/sh" on the first line). The second is string manipulation. To generate a new file with the same name but a different extension, the script splits the file name at the ".". In bash this can be done with the IFS ... read ... <<< commands (commented out in the script), but in dash this gives syntax error: unexpected redirection, because there is no <<< in dash. Instead, you need to use cut -d "." -f 1 (where -d specifies the character to use as the delimiter, and -f returns only that field).
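A quick sketch of that difference, runnable in dash; the parameter-expansion line at the end is an extra POSIX option, not something the original script uses.

#!/bin/sh
inputFile="myvideo.mp4"

# dash-friendly: cut the name at the "." and keep the first field
filename=$(echo "$inputFile" | cut -d "." -f 1)
echo "$filename.gif"        # myvideo.gif

# POSIX parameter expansion also works in dash: strip the shortest ".*" suffix
echo "${inputFile%.*}.gif"  # myvideo.gif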

Building the image

It's now time to connect all the dots in the dockerfile.

FROM alpine:3.13
LABEL Name=aciffmpeg Version=0.0.2
RUN apk add ffmpeg
COPY ./src/myscript.sh /
RUN chmod +x /myscript.sh
ENTRYPOINT ["/myscript.sh"]

The file is not extremely complex, but let's walk through it line by line.

  • We start FROM Alpine version 3.13 and apply a LABEL.
  • RUN executes the command to install ffmpeg. apk is the default utility on Alpine to install apps, just like apt on Ubuntu.
  • COPY copies the script from our local machine into the container, at the root.
  • The second RUN command makes sure the script is executable.
  • Finally, ENTRYPOINT configures the container to run as an executable, in this case running the script. All parameters passed to docker run will be passed to the script.

The only things left now are to build, tag, and push it on Docker Hub.

docker build -t fboucher/aciffmpeg .

docker tag  0f42a672d000 fboucher/aciffmpeg:2.0

docker push fboucher/aciffmpeg:2.0

The Simplified version

And now to create a preview of any video you just need to map a volume and specify the file path and optionally mention if the video is vertical.

On Linux/ WSL the command would look like this:

docker run -v /mnt/c/dev/test:/video fboucher/aciffmpeg -i /video/sample.mp4 -v

And on PowerShell like that:

docker run -v c:/dev/test:/video fboucher/aciffmpeg -i /video/sample.mp4 -v

I learned a lot about Docker doing that project and now I have a very useful tool. What are the tools you built using containers that simplify your life or work?

Video Version

I recorded a video version if you are interested. 


~frank



Learning how to Build, Secure, and Deploy your Azure Static Web App in C#

Recently I participated in a series of videos about Azure Static Web Apps: Azure Tips and Tricks: Static Web Apps on Microsoft Channel 9. The series is perfect to get started and covers multiple scenarios in different JavaScript frameworks and C#. In this post, I wanted to group the four videos related to .NET Blazor. I also added the GitHub links as part of the references at the end.

How to create a web app in C# with Blazor & Azure Static Web Apps

In this video, we start from scratch. We will build and deploy a brand new static website with .Net Blazor.



How to add a C# API to your Blazor web app

Now that you built your web app with C# and Blazor, what about adding a serverless C# API to it? Have a look!



How to secure your C# API with Azure Static Web Apps

Prevent unwanted users from accessing your C# API by configuring authentication and authorization in your Blazor Azure Static Web App.



I hope those videos will help you to get started. If you have questions and/or comments don't hesitate to reach out (comments, DM, GitHub issues), it's always a pleasure.

How CI/CD and preview branches work with Azure Static Web Apps

In this video, I wanted to show one of the great features of Azure Static Web Apps: the creation of pre-production environments. Using the CI/CD workflow, you can preview your pull request changes before they are in production, leveraging the automatic creation of pre-production environments!




How to Create a Continuous Integration Continuous Deployment (CI-CD) Solution for a Docker Project


I'm not a Docker master, but I understand that it's very useful and I like to use it from time to time in some projects. Another thing I like is DevOps and automation, and in one of my projects, that was missing. In the previous setup, the container was built and published to Docker Hub with the date as a tag. Nice, but not very easy to know which versions are "stable" and which ones are "in progress".

This post is about how I built a continuous integration and continuous deployment solution for my Docker project. All the code is on GitHub and Docker Hub. I am sharing my journey so others can enjoy that automation and not spend a weekend building it.

The Goal

By the end of this build, there will be two GitHub Actions to build and publish different versions of the application on Docker Hub.

The release version: every time a release is published on GitHub, a container tagged with the matching version number will be built and published (ex: myapp:v1).

The beta version: at every push to my branch on GitHub, a container will be published with a specific tag. The tag will match the draft release version number with -beta appended (ex: myapp:v2-beta).

In this post, the application is a Node.js Twitch chatbot. The type of application is not important; the post focuses on the delivery.

Publishing the release version

Every time a release is published on GitHub, the workflow will be triggered. It will first retrieve the "release version", then build and tag the container with it, and finally publish (aka push) it to Docker Hub. Because a "release" is also a "stable" version, it will also update the container's latest tag.

Let's look at the full YAML definition of the GitHub Action and I will break it down after.

name: Release Docker Image CI

on:
  release:
    types: [published]
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set outputs
      id: vars
      run: echo ::set-output name=RELEASE_VERSION::$(echo ${GITHUB_REF:10})
    - name: Publish to Registry
      uses: elgohr/Publish-Docker-Github-Action@master
      with:
        name: ${{secrets.DOCKER_USER}}/cloudbot
        username: ${{ secrets.DOCKER_USER }}
        password: ${{ secrets.DOCKER_PASSWORD }}
        tags: "latest,${{ steps.vars.outputs.RELEASE_VERSION }}"

To limit how many times the workflow is triggered, I used on: release with types: [published]; adjust as you like.

The next interesting part is the lines in the step vars.

- name: Set outputs
    id: vars
    run: echo ::set-output name=RELEASE_VERSION::$(echo ${GITHUB_REF:10})

Here I use the environment variable GITHUB_REF, stripped of its first 10 characters ("refs/tags/"), to initialize a local variable RELEASE_VERSION. The value is then available from the outputs of that step, as on the last line of the YAML.
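Here is a small shell sketch of what that substring expansion does (the tag value is just an example):

# for a tag push, GITHUB_REF looks like "refs/tags/v1.0.0";
# "refs/tags/" is exactly 10 characters, so everything from index 10 is the version
GITHUB_REF="refs/tags/v1.0.0"
echo "${GITHUB_REF:10}"   # v1.0.0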

tags: "latest,${{ steps.vars.outputs.RELEASE_VERSION }}"

From the step identified by the id vars, I retrieve the value of RELEASE_VERSION from its outputs.

In this GitHub Action, I used elgohr/Publish-Docker-Github-Action@master because it was simple and did what I needed. You can execute docker commands directly if you prefer, as sketched below, or use the Docker GitHub Actions.
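For reference, here is a hedged sketch of what the plain docker commands could look like; the image name and variables mirror the workflow above and are assumptions, not the exact commands the action runs.

# log in to Docker Hub, then build, tag, and push both the version and latest tags
docker login -u "$DOCKER_USER" -p "$DOCKER_PASSWORD"
docker build -t "$DOCKER_USER/cloudbot:$RELEASE_VERSION" -t "$DOCKER_USER/cloudbot:latest" .
docker push "$DOCKER_USER/cloudbot:$RELEASE_VERSION"
docker push "$DOCKER_USER/cloudbot:latest"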

There are many options available from the GitHub marketplace.

Publishing the beta version

Every time a push is done on GitHub, the workflow will be triggered. It will first retrieve the "release version" of the most recent release in draft mode. The workflow will append -beta to the retrieved version and use this to tag the container. Finally, it will publish (aka push) it to Docker Hub.

Once more, here is the full YAML; I will break it down after.

name: Build Docker Images
on: [push]
jobs:
  build:
    name: cloudbot-beta
    runs-on: ubuntu-latest
    steps:
    - id: last_release
      uses: InsonusK/get-latest-release@v1.0.1
      with:
          myToken: ${{ github.token }}
          exclude_types: "release, prerelease"
          view_top: 1  
    - uses: actions/checkout@v2
    - name: Publish to Registry
      uses: elgohr/Publish-Docker-Github-Action@master
      with:
        name: ${{secrets.DOCKER_USER}}/cloudbot
        username: ${{ secrets.DOCKER_USER }}
        password: ${{ secrets.DOCKER_PASSWORD }}
        tags: "${{ steps.last_release.outputs.tag_name }}-beta"

Here the difficulty was that I wanted to create a tag from a "future" version. I decided to use Draft Releases because those are not visible to everyone; therefore, they can play the role of the future version.

If your last release is version 1 (v1.0), to make this workflow possible you will need to create a new release and save it in Draft.



Like in the Release workflow, I need to retrieve the version. Because drafts are only visible to some people, we need to get access. This is easily done by using github.token, which is created automatically when the GitHub Action starts.

Then, using the action InsonusK/get-latest-release, we retrieve the version.

- id: last_release
    uses: InsonusK/get-latest-release@v1.0.1
    with:
        myToken: ${{ github.token }}
        exclude_types: "release, prerelease"
        view_top: 1  

This time when passing the value for the tag we will concatenate "-beta" to it.

tags: "${{ steps.last_release.outputs.tag_name }}-beta"

Wrapping Up

And voila, a very simple and easy-to-implement CI-CD for a container project. There are many different options; I'm looking forward to learning how you did yours!

How to configure a secured custom domain on an Azure Function or website

I've wanted to create this tutorial for a long time: how to map a naked domain to an Azure resource. It looks complicated, but once you know what to do, it's actually quite simple. In this post, I will share the three simple steps to do exactly this.

Step 1: Add Custom Domain


The first step is to map a domain to the application. The method I will explain uses a "www" domain (ex: www.fboucher.dev). To map a naked domain directly (ex: fboucher.dev), you would need to buy a wildcard certificate. However, I will show you in step three how to work around this by using DNS rules.

From the Azure portal, open the Azure Function or App Service. From the left menu, search for "custom" and click the Custom domains option. In this panel, click the Add custom domain button and enter your www domain.



Click the validate button and follow the instructions to make the connection between the App Service and your domain provider.

Step 2: Adding a Certificate


Now that your custom domain is mapped, let's fix the "not secure" warning by adding a certificate. From the Azure portal, return to the App blade. Repeat the previous search for "custom" and select the TLS/SSL settings option. Click Private Key Certificates, then the Create App Service Managed Certificate button. Select the domain previously added and save. It will take a few moments to create the certificate.



Go back to the Custom domains blade and click the Add binding button. Select the domain and certificate, don't forget to select the SNI SSL option, and click the Add Binding button.
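If you prefer the command line, the Azure CLI can do roughly the same three operations. This is only a sketch: the resource group and app names are placeholders, and you should double-check the parameters (especially the managed-certificate command) against the current az documentation.

# map the www hostname to the app
az webapp config hostname add --resource-group <rg> --webapp-name <app> --hostname www.fboucher.dev

# create a free App Service managed certificate for that hostname (assuming a recent az version)
az webapp config ssl create --resource-group <rg> --name <app> --hostname www.fboucher.dev

# bind the certificate with SNI, using the thumbprint returned by the previous command
az webapp config ssl bind --resource-group <rg> --name <app> --certificate-thumbprint <thumbprint> --ssl-type SNI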




Step 3: Create the DNS Rules

Create an account in cloudflare.com and add a site for your domain. We will need to customize the DNS and create some Page Rules.



On cloudflare.com, note the two nameserver addresses. Go to the original domain provider (in my case GoDaddy) and replace the nameservers with the values found on Cloudflare.



Create a rule that will redirect all incoming traffic from the naked domain to the www domain. In the options at the top, click Page Rules (B), then click the Create Page Rule button.



In the If the URL matches field, enter the naked domain followed by /*. That will match everything coming from that URL.

For the settings, select Forwarding URL and 301 - Permanent Redirect. The destination URL should be https://www. followed by your domain and /$1.




References

πŸ”— Map an existing custom DNS name to Azure App Service: https://c5m.ca/customDomain 

πŸ”— Secure a custom DNS name with a TLS/SSL binding in Azure App Service: https://c5m.ca/tls-ssl

How to Manage Azure Resources

Whether you are running on Linux, Mac, or Windows, and whether you are on the road or in the comfort of your office, there are many different ways to manage your Azure resources. In this video I will show you five ways to do it and explain the pros and cons of each:

  1. Azure Portal
  2. Azure PowerShell
  3. Azure CLI
  4. Azure Mobile App
  5. Azure Cloud Shell

There is also an excellent Microsoft Learn module that will teach you how to use the Azure portal efficiently.


Simplify your deployment with nested Azure Resource Manager (ARM) templates


Most solutions, if not all, are composed of multiple parts: backend, frontend, services, APIs, etc. Because each part can have a different life cycle, it's important to be able to deploy them individually. However, sometimes we would like to deploy everything at once. That's exactly the scenario I had in a project I'm working on, with one backend and one frontend.

In this post, I will explain how I use nested Azure Resource Manager (ARM) templates and conditions to let users decide if they want to deploy only the backend or the backend with a frontend of their choice. All the code is available on GitHub and, if you prefer, a video version is available below.
(This post is also available in French.)

The Context


The project used in this post is my open-source, budget-friendly Azure URL Shortener. As mentioned previously, the project is composed of two parts. The backend leverages serverless Azure Functions, a perfect match in this case because it only runs when someone clicks a link. The second part is a frontend, and it's totally optional. Because the Azure Functions are HTTP triggers, they act as an API; therefore, they can be called from anything able to make an HTTP call. Both are very easily deployable using an ARM template, with a PowerShell or CLI command, or with a one-click button directly from GitHub.

The Goal


By the end of this post, we will be able, in one click, to deploy just the Azure Functions or to deploy them with a frontend of our choice (I only have one right now, but more will come). To do this, we will modify the "backend" ARM template using a condition and nest the ARM template responsible for the frontend deployment.

The ARM templates are available here in their [initial](https://github.com/FBoucher/AzUrlShortener/tree/master/tutorials/optional-arm/before) and [final](https://github.com/FBoucher/AzUrlShortener/tree/master/tutorials/optional-arm/before/after) versions.

Adding New Inputs


We will nest the ARM templates; this means that our backend template (azureDeploy.json) will call the frontend template (adminBlazorWebsite-deployAzure.json). Therefore, we need to add all the required information to azureDeploy.json to make sure it can deploy adminBlazorWebsite-deployAzure.json successfully. Looking at the parameters required for the second template, we only need two values: AdminEMail and AdminPassword. All the others can be generated, or we already have them.

We will also need another parameter that will act as our selection option. So let's add a parameter named frontend that allows only two values: none and adminBlazorWebsite. If the value is none, we only deploy the Azure Functions. When the value is adminBlazorWebsite, we deploy the Azure Functions, of course, but we also deploy an admin website to go with them.

Following best practices, we add clear descriptions and add those three parameters to the parameters section of the ARM template:

"frontend": {
    "type": "string",
    "allowedValues": [
        "none",
        "adminBlazorWebsite"
    ],
    "defaultValue": "adminBlazorWebsite",
    "metadata": {
        "description": "Select the frontend that will be deploy. Select 'none', if you don't want any. Frontend available: adminBlazorWebsite, none. "
    }
},
"frontend-AdminEMail": {
    "type": "string",
    "defaultValue": "",
    "metadata": {
        "description": "(Required only if frontend = adminBlazorWebsite) The EMail use to connect into the admin Blazor Website."
    }
},
"frontend-AdminPassword": {
    "type": "securestring",
    "defaultValue": "",
    "metadata": {
        "description": "(Required only if frontend = adminBlazorWebsite) Password use to connect into the admin Blazor Website."
    }
}
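To give an idea of how these parameters are used at deployment time, here is a hedged CLI sketch; the resource group is a placeholder, the template's other parameters are omitted, and the frontend template must be reachable over HTTPS (see the templateLink explanation below).

# deploy the backend only
az deployment group create --resource-group <rg> --template-file azureDeploy.json --parameters frontend=none

# deploy the backend plus the admin Blazor website
az deployment group create --resource-group <rg> --template-file azureDeploy.json --parameters frontend=adminBlazorWebsite frontend-AdminEMail=admin@example.com frontend-AdminPassword=<a-strong-password>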

Nested Templates


Let's assume for now that we always deploy the website when we deploy the Azure Functions, to keep things simple. What we need now is a nested ARM template, which is when you deploy an ARM template from inside another ARM template. This is done with a Microsoft.Resources/deployments node. Let's look at the code:

{
    "name": "FrontendDeployment",
    "type": "Microsoft.Resources/deployments",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', variables('funcAppName'))]",
        "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('funcAppName'), 'web')]"
    ],
    "resourceGroup": "[resourceGroup().name]",
    "apiVersion": "2019-10-01",
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[variables('frontendInfo')[parameters('frontend')].armTemplateUrl]"
        },
        "parameters": {
            "basename": {
                "value" : "[concat('adm', parameters('baseName'))]"
            },
            "AdminEMail": {
                "value" : "[parameters('frontend-AdminEMail')]"
            },
            "AdminPassword": {
                "value" : "[parameters('frontend-AdminPassword')]"
            },
            "AzureFunctionUrlListUrl": {
                "value" : "[concat('https://', reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0], '/api/UrlList?code=', listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default)]"
            },
            "AzureFunctionUrlShortenerUrl": {
                "value" : "[concat('https://', reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0], '/api/UrlShortener?code=', listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default)]"
            },
            "GitHubURL": {
                "value" : "[parameters('GitHubURL')]"
            },
            "GitHubBranch": {
                "value" : "[parameters('GitHubBranch')]"
            },
            "ExpireOn": {
                "value" : "[parameters('ExpireOn')]"
            },
            "OwnerName": {
                "value" : "[parameters('OwnerName')]"
            }

        }
    }
}

If we examine this node, we have the classics: name, type, dependsOn, resourceGroup, apiVersion. Here we really want the Azure Functions to be fully deployed, so we need the Function App to be created AND the GitHub sync to be complete; this is why there is also a dependency on Microsoft.Web/sites/sourcecontrols.

In properties we will pass the mode as Incremental as it will leave unchanged resources that exist in the resource group but aren't specified in the template.

Learn more about the Azure Resource Manager deployment modes here as they are very powerful.

The second property is templateLink. This is really important, as it's the URL to the other ARM template. That URI must not be a local file or a file that is only available on your local network. You must provide a URI value that is downloadable over HTTP or HTTPS. In this case, it's a variable that contains the GitHub URL where the template is available.

Finally, we have the parameters, and this is how we pass the values to the second template. Let's skip those where I just pass the parameter value from the caller to the called, and focus on basename, AzureFunctionUrlListUrl, and AzureFunctionUrlShortenerUrl.

For basename, I just add a prefix to the received basename parameter; this way the resource names will be different, but we can still see the "connection". That's purely optional: you could have added this value as a parameter to azureDeploy.json, but I prefer keeping the parameters to a minimum, as I think it simplifies the deployment for the users.

Finally, for AzureFunctionUrlListUrl and AzureFunctionUrlShortenerUrl, I needed to retrieve the URLs of the Azure Functions with their security tokens, because they are secured. I do that by concatenating different parts:

  • Beginning of the URL: 'https://'
  • Reference to the Function App, returning its hostname: reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0]
  • The targeted Function (UrlList in this case) and the start of the querystring that passes the code (aka the security token): '/api/UrlList?code='
  • The listkeys function, used to retrieve the default Function key: listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default

Conditional parts


Now that the second ARM template can be deployed, let's add a condition so it only gets deployed when we want. Doing this is very simple: we need to add a condition property.

{
    "name": "FrontendDeployment",
    "type": "Microsoft.Resources/deployments",
    "condition": "[not(equals(parameters('frontend'), 'none'))]",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', variables('funcAppName'))]",
        "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('funcAppName'), 'web')]"
    ]
}

In this case, if the value of the parameter is different than none, the nested template will be deployed. When a condition ends up being "false", the entire resource is ignored during the deployment. How simple or complex your conditions are... that's your choice!

Happy deployment. :)




How I Built a Budget-friendly URL Shortener, Easy to Deploy and Customize


Available in French here

I don't know about you, but I share links/URLs very often. And a lot of the time it's from videos, so they need to be short and easy to remember. Something like https://c5m.ca/project is better than a random string (aka a GUID). And this is how I started a project to build a URL shortener. I wanted it to be budget-friendly, easy to deploy, and customizable.

In this post, I will share how I build it, how you can use it, and how you can help!

Azure Url Shortener

How I build it, with the community


This tool was built during live-stream coding sessions on Twitch (all videos are available in my YouTube archive). It's composed of two parts: a serverless backend leveraging Azure Functions & Azure Storage, and a frontend of your choice.

The backend is composed of a few Azure Functions that act as an on-demand HTTP API. They only consume resources when they are called. They are written in .NET Core, C# to be specific. At the time of publishing this post, there are four functions:

  • UrlShortener: To create a short URL.
  • UrlRedirect: That's the one called when a short link is used. An Azure Function Proxy forwards all calls to the root.
  • UrlClickStats: Returns the statistics for a specific URL.
  • UrlList: Returns the list of all URLs created.

All the information, like long URL, short URL, and click count, is saved in an Azure Storage table.
And that's it. Super light, very cost-efficient. If you are curious about the price, I'll put references in the footnotes.

The frontend could be anything that can make HTTP requests. Right now, in the project, I explain how to use a tool called Postman, and there is also a very simple interface that you can easily deploy.



This simple interface is of course protected and gives you the options to see all URLs and create new ones.

How YOU can use it


All the code is available on GitHub, and it's deployable with a one-click button!

Deploy to Azure

This will deploy the backend in your Azure subscription in a few minutes. If you don't own an Azure subscription already, you can create your free Azure account today.

Then you will probably want an interface to create your precious URLs. Once more, in the GitHub repository there is a list of available admin interfaces ready to be used. The Admin Blazor Website is currently the most user-friendly and can also be deployed in one click.

How You can help and participate


Right now, there is really only one interface (and some instructions on how to use Postman to do the HTTP calls). But AzUrlShortener is an open-source project, meaning you can participate. Here are some suggestions:

  • Build a new interface (in the language of your choice)
  • Improve current interface(s) with
    • logos
    • designs
    • Better UI πŸ™‚
  • Register bugs in GitHub
  • Make feature requests
  • Help with documentation/translation

What's Next


Definitely come see the GitHub repo https://github.com/FBoucher/AzUrlShortener, click those deploy buttons. On my side, I will continue to add more features and make it better. See you there!


Video version





References

How to know how much your application is consuming on Azure

Are you worried about your billing when deploying a new application? Or afraid that you will receive an invoice from your cloud provider at the end of the month that puts you in a bad situation? You are not alone, and this is why you should definitely look at the Azure Cost Manager.



In the video below, I explain how to use the Azure Cost Manager to see the actual cost and forecast for a specific application in the cloud, and how to build a dashboard with your favorite information. Note that I said "cloud" because Azure Cost Manager can monitor both Azure and AWS resources. That's perfect for when we are "in" the portal, and that's also why I will show how you can create budgets and set alerts, so you can track your resources automatically.





~

Deploy to Azure Directly From the Repository with GitHub Actions


You've heard about the new GitHub Actions. Or maybe you didn't, but you would like to add continuous integration, continuous deployment (CI-CD) to your web application. In this post, I will show you how to add a CI-CD pipeline to deploy automatically to Azure using GitHub Actions.

What are GitHub Actions


GitHub Actions are automated workflows to do things. One of these could be a CI-CD. Using a workflow you could decide to compile and execute some unit tests at every push or pull request (PR). Another workflow could be that you deploy that application.

In this article, I will deploy a .NET Core application to Azure. However, you can use any language you would like and deploy anywhere you like... I just needed to pick one :)

Now, let's get started.

Step 1 - The Code.


We need some code in a GitHub repo. Create a GitHub repo, clone it locally, and add your app in it. I created mine with dotnet new blazorserver -n cloud5minsdemo -o src. Then commit and push.

Step 2 - Define the workflow


We've got the code; now it's time to define our workflow. I will be providing all the code snippets required for the scenario covered in this post, but there are tons of templates ready to be used, available directly from your GitHub repository! Let's have a look. From your repository, click on the Actions tab, and voila!


When I wrote this post, a lot of the available templates assumed the Azure resources already existed and that you were adding a CI-CD to the mix to automate your deployment. That's great, but in my case, I was building a brand new website, so those didn't fit my needs. This is why I created my own template. The workflow I created was inspired by Azure/webapps-deploy. There is also a lot of information available on Deploy to App Service using GitHub Actions.

Let's add our template to our solution. GitHub looks in the .github/workflows/ folder at the root of the repository. Create a file there with the .yml extension.

Here is the code for my dotnet.yml; as with any YAML file, the secret is in the indentation, as it is whitespace sensitive:

on: [push,pull_request]

env:
  AZURE_WEBAPP_NAME: cloud5minsdemo   # set this to your application's name
  AZURE_GROUP_NAME: cloud5mins2

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:

    # checkout the repo
    - uses: actions/checkout@master
    
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version:  3.0.101


    # dotnet build and publish
    - name: Build with dotnet
      run: dotnet build ./src --configuration Release
    - name: dotnet publish 
      run: |
        dotnet publish ./src -c Release -o myapp 

    - uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}

    - run: |
        az group create -n ${{ env.AZURE_GROUP_NAME }} -l eastus 
        az group deployment create -n ghaction -g ${{ env.AZURE_GROUP_NAME }} --template-file deployment/azuredepoy.json


    # deploy web app using Azure credentials
    - name: 'Azure webapp deploy'
      uses: azure/webapps-deploy@v1
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        package: './myapp' 

    # Azure logout 
    - name: logout
      run: |
        az logout


The Agent


There is a lot in there; let's start with the first line. The on: defines the trigger; in this case, the workflow will be triggered at every push or PR.

The env: section is where you can declare variables. It's totally optional, but I think it helps when templates are more complex, or simply to reuse them easily.

Then comes the jobs: definition. In this case, we will use the latest version of Ubuntu as our build agent. Of course, in a production environment, you should be more specific and select the OS that matches your needs. This job will have multiple steps defined in the, you guessed it, steps: section.

We specify a branch to work with and set up our agent by:

uses: actions/setup-dotnet@v1
with:
  dotnet-version: 3.0.101

This is because I have a .NET Core project. For a Node.js project, it would be:

uses: actions/setup-node@v1
with:
  node-version: 10.x

And it would be a better idea to set the version as an environment variable to be able to change it quickly.

The next two instructions are really .NET Core focused, as they build and package the application into a folder named myapp. Of course, in this section you could execute some unit tests or any other validation that you may find useful.

The next section may be less obvious.

- uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}

Access and Secrets


For our GitHub Action to be able to create resources and deploy the code, it needs access. The azure/login@v1 action lets the workflow log in using a Service Principal. In other words, we will create an identity in Azure Active Directory with enough permissions to do what we need.

Let's examine the following Azure CLI command:

`az ad sp create-for-rbac --name "c5m-Frankdemo" --role contributor --scopes /subscriptions/{subscription-id} --sdk-auth`

This will create a Service Principal named "c5m-Frankdemo" with the role "contributor" on the subscription specified. The role contributor can do mostly anything except granting permission.

Because no resources exist yet, the GitHub Action requires broader permissions. If you create the resource group outside of the CI-CD, you could limit the access to only this specific resource group, using this command instead:

`az ad sp create-for-rbac --name "c5m-Frankdemo" --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} --sdk-auth`

The Azure CLI command will return a JSON document. We will copy-paste this JSON into a GitHub secret. GitHub secrets are encrypted and allow you to store sensitive information, such as access tokens, in your repository. To access them, go to the Settings of the repository and select Secrets from the left menu.


Click the Add a new secret button and type AZURE_CREDENTIALS as the name. It could be anything, as long as you use that name in the YAML file describing the workflow. Put the JSON, including the curly brackets, in the Value textbox and click the Save button.
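If you prefer staying in the terminal, the GitHub CLI can create the secret too; a small sketch, assuming the Service Principal JSON was saved to a file named azure-credentials.json (a name made up for this example).

# create (or update) the AZURE_CREDENTIALS secret from the JSON file
gh secret set AZURE_CREDENTIALS < azure-credentials.json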

Provisioning the Azure Resources


Now that the workflow has access, we can execute some Azure CLI commands. Let's see what's missing:

- run: |
    az group create -n ${{ env.AZURE_GROUP_NAME }} -l eastus 
    az group deployment create -n ghaction -g ${{ env.AZURE_GROUP_NAME }} --template-file deployment/azuredepoy.json --parameters myWebAppName=${{ env.AZURE_WEBAPP_NAME }}

The first command will create an Azure resource group, where all the resources will be created. The second one will deploy the website using an Azure Resource Manager (ARM) template. The --template-file argument points to the template file located in the deployment folder. Notice that the application name is passed to the parameter myWebAppName, using the environment variable.

An ARM template is simply a flat file that looks a lot like a JSON document. You can use any text editor; I like writing mine with Visual Studio Code and two extensions: Azure Resource Manager Snippets, and Azure Resource Manager (ARM) Tools. With those tools I can build ARM templates very efficiently. For this template, we need a service plan and a web app. Here is what the template looks like.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "myWebAppName": {
           "type": "string",
           "metadata": {
                "description": "WebAppName"
            }
        }
    },
    "variables": {},
    "resources": [
        {
            "name": "[parameters('myWebAppName')]",
            "type": "Microsoft.Web/sites",
            "apiVersion": "2016-08-01",
            "location": "[resourceGroup().location]",
            "tags": {
                "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/frankdemoplan')]": "Resource",
                "displayName": "[parameters('myWebAppName')]"
            },
            "dependsOn": [
                "[resourceId('Microsoft.Web/serverfarms', 'frankdemoplan')]"
            ],
            "properties": {
                "name": "[parameters('myWebAppName')]",
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'frankdemoplan')]"
            }
        },
        {
            "name": "frankdemoplan",
            "type": "Microsoft.Web/serverfarms",
            "apiVersion": "2018-02-01",
            "location": "[resourceGroup().location]",
            "sku": {
                "name": "F1",
                "capacity": 1
            },
            "tags": {
                "displayName": "frankdemoplan"
            },
            "properties": {
                "name": "frankdemoplan"
            }
        }
    ],
    "outputs": {},
    "functions": []
}

This template is simple; it only contains the two required resources: a service plan and a web app. To learn more about ARM templates, you can read my other post or check out this excellent introduction in the documentation.

Once the template is created, save it in the deployment folder.

The deployment


There are only two last steps to the YAML file: the deployment and logout. Let's have a quick look at the deployment.

# deploy web app using Azure credentials
- name: 'Azure webapp deploy'
  uses: azure/webapps-deploy@v1
  with:
    app-name: ${{ env.AZURE_WEBAPP_NAME }}
    package: './myapp' 


Now that we are sure the resources exist in Azure, we can deploy the code. This is done with azure/webapps-deploy@v1, which takes the package generated by dotnet in the myapp folder. Since we are already authenticated, there is no need to specify anything else at this point.

Everything is ready for the deployment. You just need to commit and push (into master) and the GitHub Action will be triggered. You can follow the deployment by going into the Actions tab.



After a few minutes, the website should be available in Azure. This post only shows a very simple build and deployment, but you can do so many things with those GitHub Actions, like executing tasks or packaging a container... I would love to know how you use them. Leave a comment or reach out on social media.


If you prefer, I also did a video of this post:



~

Let the adventure begin!


It's coming! I'm not talking about winter here, but about Microsoft Ignite, one of the biggest Microsoft events for the DevOps and Devs among us. This 4-day-long event is a fantastic opportunity to get up to date with your favorite technology, learn best practices, get certified, and meet tons of experts to talk about your projects!



This year is very special for me because I have the great pleasure of being part of the adventure: I will be presenting two sessions that are part of the Learning Paths.

A Learning Path is a series of connected learning modules that include sessions, hands-on experiences, technical workshops, certifications, and expert connections. Learning Paths work together to build upon what you've learned and provide a comprehensive set of skills to help you reach your goals.

Figuring out Azure Functions (AFUN95)
Tailwind Traders is curious about the concept behind “serverless” computing – the idea that they can run small pieces of code in the cloud, without having to worry about the underlying infrastructure. In this session, we cover the world of Azure Functions, starting with an explanation of the servers behind serverless, exploring the languages and integrations available, and ending with a demo of when to use Logic Apps and Microsoft Flow.

Options for building and running your app in the cloud (APPS10)
See how Tailwind Traders avoided a single point of failure using cloud services to deploy their company website to multiple regions. We cover all the options they considered, explain how and why they made their decisions, then dive into the components of their implementation. In this session, see how they used Microsoft technologies like Visual Studio Code, Azure Portal, and Azure CLI to build a secure application that runs and scales on Linux and Windows VMs and Azure Web Apps with a companion phone app.

But wait, there is more...


For the second time, after Microsoft Ignite "the event", another event will start: a tour! This year, Microsoft Ignite The Tour will visit thirty (30) cities! All continents, except for Antarctica (why?! πŸ˜‰), will be visited. Check the complete list of cities on the website and mark the dates on your calendar!

I will also be presenting many different sessions during this tour. While I'm not doing all of them, I'll be present on many occasions.

Looking forward to meeting you


So if you are planning to go to one of those events and would like to meet to talk about your project, show some bugs, ask questions, or just chat, reach out! It's ALWAYS a pleasure, and I'll bring some stickers and some special swag1 with me...

Drop me a message on Twitter, LinkedIn, YouTube, Instagram, and Facebook.


1: Follow me on Twitter to know more about this ;)