
Converting a Blazor WASM to FluentUI Blazor server

TL;DR: This post walks through migrating a Blazor WebAssembly project to FluentUI Blazor server, highlighting key improvements in UI, authentication, and containerization using Azure Container Apps and .NET Aspire.

(👓 Version en français ici)

Why Migrate?

The migration to FluentUI Blazor server brought three major benefits:

  • 🎨 Modern UI with FluentUI components
  • 🔒 Simplified authentication using Azure Container Apps
  • 🚀 Better development experience with .NET Aspire

In this post, I'm sharing my journey while migrating a Blazor WebAssembly (WASM) project to a FluentUI Blazor server project. The goal was to use the new FluentUI Blazor components library, take advantage of .NET Aspire and be able to execute the project in a container.

Recently, I've been working on the migration of AzUrlShortener. Upgrading SDKs and packages, improving the security, and changing the architecture of the solution. This post is part of a series of posts where I share a few interesting details, tips, and tricks I learned while working on this project.

AzUrlShortener is an open-source project: a simple URL shortener that I built a few years ago. The goal was simple: having a budget-friendly solution to share short URLs that would be secure, easy to use, and where the data would stay mine. Each instance is hosted in Azure and consists of an API (Azure Function), a Blazor WebAssembly website (Azure Static Web App), and data storage (Azure Storage table).

This post is part of a series about modernizing the AzUrlShortener project.

Migration Strategy: Fresh Start vs. Refactor

When migrating an existing project, you have two options: editing the existing project to reshape it into the new type, or creating a new project and copy-pasting pieces of code from the old one into the new one. In this case, I chose to create a new project and copy-paste the code. This way, I could keep the old project as a backup in case something went wrong.

Creating a New Project

As mentioned earlier, I wanted to use the new FluentUI Blazor components library. It's an open-source project that provides a set of components for building web applications using the Fluent Design System. It makes it easy to create beautiful, responsive, and consistent user interfaces. To create a new project, we can use the available template.

dotnet new fluentblazor -n Cloud5mins.ShortenerTools.TinyBlazorAdmin

Dark Mode & Theming Support 🌙

The one thing I do to all my FluentUI Blazor projects is add a settings page. This page allows the user to change the theme and accent color of the application. I should create a template to save time, but until then, here is the required code to add the settings page.

Settings.razor

Let's start by creating that new page, called Settings.razor. It contains two selects: one for the theme (dark or light) and one for the accent color.

@page "/settings"

@using Microsoft.FluentUI.AspNetCore.Components.Extensions

@rendermode InteractiveServer

<FluentDesignTheme @bind-Mode="@Mode"
				   @bind-OfficeColor="@OfficeColor"
				   StorageName="theme" />

<h3>Settings</h3>

<div>
	<FluentStack Orientation="Orientation.Vertical">
		<FluentSelect   Label="Theme" Width="150px"
						Items="@(Enum.GetValues<DesignThemeModes>())"
						@bind-SelectedOption="@Mode" />
		<FluentSelect   Label="Color"
						Items="@(Enum.GetValues<OfficeColor>().Select(i => (OfficeColor?)i))"
			Height="200px" Width="250px" @bind-SelectedOption="@OfficeColor">
			<OptionTemplate>
				<FluentStack>
					<FluentIcon Value="@(new Icons.Filled.Size20.RectangleLandscape())" Color="Color.Custom"
						CustomColor="@(@context.ToAttributeValue() != "default" ? context.ToAttributeValue() : "#036ac4" )" />
					<FluentLabel>@context</FluentLabel>
				</FluentStack>
			</OptionTemplate>
		</FluentSelect>
	</FluentStack>
</div>

@code {
    public DesignThemeModes Mode { get; set; }
    public OfficeColor? OfficeColor { get; set; }
}

App.razor

In the App itself, we need to add some JavaScript and the loading theme component. Just before the closing </body> tag, we need to add the following code:

<!-- Set the default theme -->

<script src="_content/Microsoft.FluentUI.AspNetCore.Components/js/loading-theme.js" type="text/javascript"></script>

<loading-theme storage-name="theme"></loading-theme>

Imports.razor

I noticed some warnings in the code about missing using directives. To fix that, find the line that references Components.Icons in _Imports.razor and replace it with the following. The Icons alias should resolve the problem.

@using Icons = Microsoft.FluentUI.AspNetCore.Components.Icons

MainLayout.razor

The main layout is our base page by default. We need to add Mode and OfficeColor there to make them accessible to the entire application.

@code {
    public DesignThemeModes Mode { get; set; }
    public OfficeColor? OfficeColor { get; set; }
}
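The @code block above only declares the properties; for them to actually drive the theme, the layout also needs a FluentDesignTheme component bound to them. A minimal sketch, assuming the same "theme" storage name used on the Settings page:

<FluentDesignTheme @bind-Mode="@Mode"
                   @bind-OfficeColor="@OfficeColor"
                   StorageName="theme" />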

NavMenu.razor

The only thing left is to be able to easily access this new page. This can be done simply by adding an option in the navigation menu.

<FluentNavLink Href="/settings" Match="NavLinkMatch.All" Icon="@(new Icons.Regular.Size20.TextBulletListSquareSettings())">Settings</FluentNavLink>

Test it

And voilà! You should now have a settings page that allows you to change the theme and color of the application. This is all great and it's not really related to the migration, but it's a nice addition to have. Dark mode for the win!

The migration

The migration itself had many little pieces but wasn't that complex. The project is part of a .NET Aspire solution, so I added the project to the solution with dotnet sln add ./Cloud5mins.ShortenerTools.TinyBlazorAdmin. Then I added references to Cloud5mins.ShortenerTools.Core (the class library with all the models and services) and to the ServiceDefaults project that was generated when we added Aspire to the solution.

The next logical step was to add our Blazor project to the orchestrator with those lines in the Program.cs of the AppHost project.

builder.AddProject<Projects.Cloud5mins_ShortenerTools_TinyBlazorAdmin>("admin")
	.WithExternalHttpEndpoints()
	.WithReference(manAPI);

This makes sure the TinyBlazorAdmin project starts with a reference to the API and has accessible endpoints. Without .WithExternalHttpEndpoints(), the project wouldn't be reachable once deployed to Azure.

To use the capability of .NET Aspire to orchestrate the different projects, we need to replace our previous HttpClient creation in the Program.cs of the TinyBlazorAdmin by the following code:

builder.Services.AddHttpClient<UrlManagerClient>(client => 
{
    client.BaseAddress = new Uri("https+http://api");
});

This makes sure the UrlManagerClient receives an HttpClient using the correct address and port when calling the API. Let's have a look at the UrlManagerClient class and one of the methods used to call the API.

public class UrlManagerClient(HttpClient httpClient)
{

	public async Task<IQueryable<ShortUrlEntity>?> GetUrls()
    {
		IQueryable<ShortUrlEntity> urlList = null;
		try{
			using var response = await httpClient.GetAsync("/api/UrlList");
			if(response.IsSuccessStatusCode){
				var urls = await response.Content.ReadFromJsonAsync<ListResponse>();
				urlList = urls!.UrlList.AsQueryable<ShortUrlEntity>();
			}
		}
		catch(Exception ex){
			Console.WriteLine(ex.Message);
		}
        
		return urlList;
    }
	// ...
}

As the code shows, the HttpClient is injected through the constructor and used to call the API. The GetUrls method is a simple GET request that returns a list of ShortUrlEntity. The API is the one created in a previous post: How to use Azure Storage Table with .NET Aspire and a Minimal API, and all the classes are part of the Cloud5mins.ShortenerTools.Core project.
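For illustration, here is a hedged sketch of how a Razor component could consume this client; the injection and OnInitializedAsync pattern is standard Blazor, but the exact markup of UrlManager.razor may differ:

@inject UrlManagerClient UrlManager

@code {
    private IQueryable<ShortUrlEntity>? urls;

    protected override async Task OnInitializedAsync()
    {
        // The typed HttpClient registered in Program.cs is resolved by DI.
        urls = await UrlManager.GetUrls();
    }
}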

The URL Grid

Part of the migration was also to replace the Syncfusion grid with the new FluentUI Blazor grid. Not that Syncfusion controls aren't great, quite the contrary, but because the AzUrlShortener project has moved to a different owner, I think it is better to use components that require no licenses.

For this initial iteration, the Syncfusion grid is replaced by the FluentUI Blazor grid. In a future iteration, the Syncfusion Charts component will also be replaced. Thank you Syncfusion for the community license used in this project.

The code of UrlManager.razor changed quite a lot, as the two grids differ a bit in their naming and usage. The sorting required a bit more code because the column names are not the same as the property names. For example, the "Vanity" column is in fact the RowKey property of the ShortUrlEntity class. To be able to sort that column, we need to create a GridSort object that is used in the TemplateColumn definition.

Definition of the column in the grid:

<TemplateColumn Title="Vanity" Width="150px" Sortable="true" SortBy="@sortByVanities">
    <FluentAnchor Href="@context!.ShortUrl" Target="_blank" Appearance="Appearance.Hypertext">@context!.RowKey</FluentAnchor>
</TemplateColumn>

Definition of the GridSort object:

GridSort<ShortUrlEntity> sortByVanities = GridSort<ShortUrlEntity>.ByAscending(p => p.RowKey);

One big improvement that could be done in the future would be to use the virtualized grid. Virtualization is a great way to improve the performance of the grid when dealing with large amounts of data, as it only renders the rows that are visible on the screen. I show how to use it in a previous post: How use a Blazor QuickGrid with GraphQL.
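As a hedged illustration (the parameter names follow the QuickGrid-based FluentDataGrid API), enabling virtualization is mostly a matter of two parameters on the grid, along with a fixed row height:

<FluentDataGrid Items="@urls" Virtualize="true" ItemSize="46">
    <!-- same column definitions as before -->
</FluentDataGrid>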

Removing the fake popup div

One of the FluentUI Blazor components is the FluentDialog. This component is used to display a popup window and helps keep the code more structured and clean. Instead of having <div> elements acting as a fake popup in the code, we can use FluentDialog (through the DialogService) and it will be rendered as a proper popup.

var dialog = await DialogService.ShowDialogAsync<NewUrlDialog>(shortUrlRequest, new DialogParameters()
	{
		Title = "Create a new Short Url",
		PreventDismissOnOverlayClick = true,
		PreventScroll = true
	});
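After the call, the dialog result can be awaited to know whether the user confirmed or dismissed the dialog. A minimal sketch, assuming NewUrlDialog returns the request object as its dialog data (the exact type name here is an assumption):

var result = await dialog.Result;
if (!result.Cancelled && result.Data is ShortUrlRequest newUrl)
{
    // Call the API through UrlManagerClient to create the short URL,
    // then refresh the grid.
}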




Replacing the Authentication

Instead of having to implement the authentication in the Blazor project, we will be using a feature of Azure Container Apps that requires no code changes! You don't need to change a single line of code to secure your application deployed on Azure Container Apps (ACA). Instead, your application is automatically protected simply by enabling the authentication feature, called EasyAuth.

Once the solution is deployed to Azure, the TinyBlazorAdmin will be installed in a container app named "admin". To secure it, navigate to the Azure Portal and select the Container App you want to secure; in this case, the "admin" container app. From the left menu, select Authentication and click Add identity provider.

You can choose between multiple providers, but let's use Microsoft since it's deployed in Azure and you are already logged in. Once Microsoft is chosen, you will see many configuration options. Select the recommended client secret expiration (e.g., 180 days). You can keep all the other default settings. Click Add. After a few seconds, you should see a notification in the top right corner that the identity provider was added successfully.

Voila! Your app now has authentication. Next time you navigate to the app, you will be prompted to log in with your Microsoft account. Notice that your entire app is protected. No page is accessible without authentication.

Conclusion

The migration from Blazor WebAssembly to FluentUI Blazor Server has been a successful journey that brought several meaningful improvements to the project:

  • Enhanced user interface with modern FluentUI components
  • Cleaner, more maintainable code structure
  • Simplified authentication using Azure Container Apps' EasyAuth
  • Improved local development experience with .NET Aspire orchestration

The end result is a modern, containerized application that's both easier to maintain and more pleasant to use. The addition of dark mode support and theming capabilities is a great improvement to the user experience.

Want to Learn more?

To learn more about Azure Container Apps, I strongly suggest the repository Getting Started .NET on Azure Container Apps. It contains many step-by-step tutorials (with videos) to learn how to use Azure Container Apps with .NET.

Have questions about the migration process or want to share your own experiences with FluentUI Blazor? Feel free to reach out to me on @fboucheros.bsky.social or open an issue on the AzUrlShortener GitHub repository. I'd love to hear your thoughts!



Making AI smarter with an MCP server that manages short URLs

Have you ever wanted to give your AI assistants access to your own custom tools and data? That's exactly what Model Context Protocol (MCP) allows us to do, and I've been experimenting with it lately.

(Version française ici)

I read a lot recently about Model Context Protocol (MCP) and how it is changing the way AI interacts with external systems. I was curious to see how it works and how I can use it in my own projects. There are many tutorials available online, but one of my favorites was written by James Montemagno: Build a Model Context Protocol (MCP) server in C#. This post isn't a tutorial, but rather a summary of my experience and what I learned along the way while building a real MCP server that manages short URLs.

MCP doesn't change AI itself; it's a protocol that helps your AI model interact with external resources: APIs, databases, etc. The protocol simplifies the way AI can access an external system, and it allows the AI to discover the available tools from those resources. Recently I was working on a project that manages short URLs, and I thought it would be a great opportunity to build an MCP server for it. I wanted to see how easy it is to build and then use it in VS Code with GitHub Copilot Chat.

Code: All the code of this post is available in the branch exp/mcp-server of the AzUrlShortener repo on GitHub.

Setting Up: Adding an MCP Server to a .NET Aspire Solution

The AzUrlShortener is a web solution that uses .NET Aspire, so the first thing I did was create a new project using the command:

dotnet new web -n Cloud5mins.ShortenerTools.MCPServer -o ./mcpserver

Required Dependencies

To transform this into an MCP server, I added these essential NuGet packages:

  • Microsoft.Extensions.Hosting
  • ModelContextProtocol.AspNetCore

Since this project is part of a .NET Aspire solution, I also added references to:

  • The ServiceDefaults project (for consistent service configuration)
  • The ShortenerTools.Core project (where the business logic lives)

Integrating with Aspire

Next, I needed to integrate the MCP server into the AppHost project, which defines all services in our solution. Here's how I added it to the existing services:

var manAPI = builder.AddProject<Projects.Cloud5mins_ShortenerTools_Api>("api")
						.WithReference(strTables)
						.WaitFor(strTables)
						.WithEnvironment("CustomDomain",customDomain)
						.WithEnvironment("DefaultRedirectUrl",defaultRedirectUrl);

builder.AddProject<Projects.Cloud5mins_ShortenerTools_TinyBlazorAdmin>("admin")
		.WithExternalHttpEndpoints()
		.WithReference(manAPI);

// 👇👇👇 new code for MCP Server
builder.AddProject<Projects.Cloud5mins_ShortenerTools_MCPServer>("mcp")
		.WithReference(manAPI)
		.WithExternalHttpEndpoints();

Notice how I added the MCP server with a reference to the manAPI - this is crucial as it needs access to the URL management API.

Configuring the MCP Server

To complete the setup, I needed to configure the dependency injection in the program.cs file of the MCPServer project. The key part was specifying the BaseAddress of the httpClient:

var builder = WebApplication.CreateBuilder(args);       
builder.Logging.AddConsole(consoleLogOptions =>
{
    // Configure all logs to go to stderr
    consoleLogOptions.LogToStandardErrorThreshold = LogLevel.Trace;
});
builder.Services.AddMcpServer()
    .WithTools<UrlShortenerTool>();

builder.AddServiceDefaults();

builder.Services.AddHttpClient<UrlManagerClient>(client => 
            {
                client.BaseAddress = new Uri("https+http://api");
            });
            
var app = builder.Build();

app.MapMcp();

app.Run();

That's all that was needed! Thanks to .NET Aspire, integrating the MCP server was straightforward. When you run the solution, the MCP server starts alongside the other projects and is available at http://localhost:{some port}/sse. The /sse part of the endpoint stands for Server-Sent Events and is critical: it's the URL that AI assistants will use to discover the available tools.

Implementing the MCP Server Tools

Looking at the code above, two key lines make everything work:

  1. builder.Services.AddMcpServer().WithTools<UrlShortenerTool>(); - registers the MCP server and specifies which tools will be available
  2. app.MapMcp(); - maps the MCP server to the ASP.NET Core pipeline

Defining Tools with Attributes

The UrlShortenerTool class contains all the methods that will be exposed to AI assistants. Let's examine the ListUrl method:

[McpServerTool, Description("Provide a list of all short URLs.")]
public List<ShortUrlEntity> ListUrl()
{
	var urlList = _urlManager.GetUrls().Result.ToList<ShortUrlEntity>();
	return urlList;
}

The [McpServerTool] attribute marks this method as a tool the AI can use. I prefer keeping tool definitions simple, delegating the actual implementation to the UrlManagerClient that's injected through the primary constructor: UrlShortenerTool(UrlManagerClient urlManager).
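For context, here is a hedged sketch of how the tool class shape ties the primary constructor injection and the attributes together; the [Description] attribute comes from System.ComponentModel, and [McpServerTool] from the preview ModelContextProtocol SDK, so exact namespaces may still change:

using System.ComponentModel;

public class UrlShortenerTool(UrlManagerClient urlManager)
{
    // The injected client is captured from the primary constructor.
    private readonly UrlManagerClient _urlManager = urlManager;

    [McpServerTool, Description("Provide a list of all short URLs.")]
    public List<ShortUrlEntity> ListUrl()
        => _urlManager.GetUrls().Result?.ToList() ?? new();

    // Other tools delegate to the client in the same way.
}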

The URL Manager Client

The UrlManagerClient follows standard HttpClient patterns. It receives the pre-configured httpClient in its constructor and uses it to communicate with the API:

public class UrlManagerClient(HttpClient httpClient)
{
	public async Task<IQueryable<ShortUrlEntity>?> GetUrls()
    {
		IQueryable<ShortUrlEntity> urlList = null;
		try{
			using var response = await httpClient.GetAsync("/api/UrlList");
			if(response.IsSuccessStatusCode){
				var urls = await response.Content.ReadFromJsonAsync<ListResponse>();
				urlList = urls!.UrlList.AsQueryable<ShortUrlEntity>();
			}
		}
		catch(Exception ex){
			Console.WriteLine(ex.Message);
		}
        
		return urlList;
    }

	// other methods to manage short URLs
}

This separation of concerns keeps the code clean - tools handle the MCP interface, while the client handles the API communication.

Using the MCP Server with GitHub Copilot Chat

Now for the exciting part - connecting your MCP server to GitHub Copilot Chat! This is where you'll see your custom tools in action.

Configuring Copilot to Use Your MCP Server

Once the server is running (either deployed in Azure or locally), follow these steps:

  1. Open GitHub Copilot Chat in VS Code
  2. Change the mode to Agent by clicking the dropdown in the chat panel
  3. Click the Select Tools... button, then Add More Tools
Set GitHub Copilot mode to Agent

Selecting the Connection Type

GitHub Copilot supports several ways to connect to MCP servers:

All MCP Server types

There are multiple options available - you could have your server in a container or run it via command line. For our scenario, we'll use HTTP.

Note: At the time of writing this post, I needed to use the HTTP URL of the MCP server rather than HTTPS. You can get this URL from the Aspire dashboard by clicking on the resource and checking the available Endpoints.

After selecting your connection type, Copilot will display the configuration file, which you can modify anytime.

GitHub Copilot Chat Configuration

Interacting with Your Custom Tools

Now comes the fun part! You can interact with your MCP server in two ways:

  1. Natural language queries: Ask questions like "How many short URLs do I have?"
  2. Direct tool references: Use the pound sign to call specific tools: "With #azShortURL list all URLs"

The azShortURL is the name we gave to our MCP server in the configuration.
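For reference, a hedged sketch of what the resulting VS Code configuration could look like; the exact file shape may have evolved since this was written, and the port placeholder matches the one shown earlier:

{
  "servers": {
    "azShortURL": {
      "type": "sse",
      "url": "http://localhost:{some port}/sse"
    }
  }
}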

GitHub Copilot question and response example


Key Learnings and Future Directions

Building this MCP server for AzUrlShortener taught me several valuable lessons:

What Worked Well

  • Integration with .NET Aspire was remarkably straightforward
  • The attribute-based approach to defining tools is clean and intuitive
  • The separation of tool definitions from implementation logic keeps the code maintainable

Challenges and Considerations

  • The csharp-SDK is only a few weeks old and still in preview
  • OAuth authentication isn't defined yet (though it's being actively worked on)
  • Documentation is present but evolving rapidly as the technology matures, so some features may not be fully documented yet

For the AzUrlShortener project specifically, I'm keeping this MCP server implementation in the experimental branch mcp-server until I can properly secure it. However, I'm already envisioning numerous other scenarios where MCP servers could add great value.

If you're interested in exploring this technology, I encourage you to:

  • Check out the GitHub repo
  • Fork it and create your own MCP server
  • Experiment with different tools and capabilities

Join the Community

If you have questions or want to share your experiences with others, I invite you to join the Azure AI Community Discord server:

Join Azure AI Community Discord

The MCP ecosystem is growing rapidly, and it's an exciting time to be part of this community!


~Frank


Migrating AzUrlShortener from Azure Static WebApp to Azure Container Apps

It's already been two years since I stopped working on the AzUrlShortener project. Not that I didn't like it, but I was busy with other projects. Recently, the opportunity to work on it again came up, and I jumped on it. So many things changed in two years, and I was excited to see how I could improve the solution's developer experience and modernize the user interface and architecture.

This post is the first of a series where I share a few interesting details, tips, and tricks I learned while working on this project.


AzUrlShortener is an Open source project that consists of a simple URL shortener. The goal was simple: having a budget-friendly solution to share short URLs that would be secure, easy to use, and where the data would stay ours. Each instance is hosted in Azure and used to consist of an API (Azure Function), a Blazor WebAssembly website (Azure Static Web App), and Data Storage (Azure Storage table).

 

Key Changes at a Glance

  • Migration from Azure Static Web Apps to Azure Container Apps
  • Upgrade to .NET 9.0 and integration with .NET Aspire
  • Enhanced security with separated API responsibilities
  • Simplified deployment using Azure Developer CLI (azd)
  • Modern UI with FluentUI Blazor

Upgrading SDKs and packages

As mentioned earlier, a lot changed in two years. The first thing I did was upgrade the SDKs to .NET 9.0. That was a breeze: I changed the target framework in the project file, then used dotnet outdated to list all the packages that needed to be updated, and dotnet outdated -u to update them all.
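In case you haven't used it before, dotnet-outdated is a global .NET tool; a hedged example of the commands involved:

dotnet tool install --global dotnet-outdated-tool
dotnet outdated      # list outdated packages
dotnet outdated -u   # update them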

And while we were at it, since I started using .NET Aspire, I couldn't resist using it as much as possible. It simplifies the development cycle so much, and the code is much cleaner. So I added that to the mix. It added two more projects to the solution, but now the entire solution is defined in C# in the AppHost project (the Aspire orchestrator).

So now the solution looks like this:

src/
├── Cloud5mins.ShortenerTools.Api               # Internal management API
├── Cloud5mins.ShortenerTools.AppHost           # .NET Aspire orchestrator
├── Cloud5mins.ShortenerTools.Core              # Shared business logic
├── Cloud5mins.ShortenerTools.FunctionsLight    # Public redirect API
├── Cloud5mins.ShortenerTools.ServiceDefaults   # Common service configurations
└── Cloud5mins.ShortenerTools.TinyBlazorAdmin   # Frontend application


Changes to Improve Security

Security should come first, and I wanted to make sure that the solution was as secure as possible. I decided to split the API into two parts. The first part is the API that redirects the users, and it can only do that. The second part is the internal API to manage all the URLs and other data.

I decided to migrate the solution to use Azure Container Apps and have it running in two containers: the TinyBlazorAdmin and the Api. With Azure Container Apps, I can use Microsoft Entra, without any line of code, to secure TinyBlazorAdmin. The API will only be accessible from the TinyBlazorAdmin and won't be exposed to the Internet. As a bonus, since TinyBlazorAdmin and the API are now running inside containers, you could also decide to run them locally.

The storage access also got a security upgrade. Instead of using a connection string, I am using a Managed Identity to access the Azure Storage Table. This is a much more secure way to access Azure resources, and thanks to .NET Aspire, it is also very easy to implement.
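As a hedged sketch of how this can look in the AppHost (using the Aspire Azure Storage hosting integration; the resource name mirrors the strTables reference shown in the AppHost snippet earlier in this series):

// AppHost Program.cs (sketch)
var storage = builder.AddAzureStorage("storage");
var strTables = storage.AddTables("strTables");

// Projects that need the table storage simply reference the resource;
// when deployed, access is granted to each app's managed identity
// instead of flowing a connection string around.
builder.AddProject<Projects.Cloud5mins_ShortenerTools_Api>("api")
       .WithReference(strTables)
       .WaitFor(strTables);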


Architecture

The architecture is changing a little. The API is now split in two: FunctionsLight and API. The two APIs use services from Core to avoid code duplication. The TinyBlazorAdmin runs in a container and is secured by Microsoft Entra. API is also running in a container and is not exposed to the Internet. And Azure Storage Table is still our faithful data source.


Previous Architecture

  • Azure Function (API)
  • Azure Storage (Function Code)
  • Azure Static Web App (Blazor WebAssembly)
  • Azure Storage Table (Data)
  • Application Insights


New Architecture

  • Container registry (Docker images)
  • Container Apps Environment
    • Container App / Function: FunctionsLight (public redirect-only API)
    • Container App: Internal API (protected management interface)
    • Container App: TinyBlazorAdmin (secured Blazor website)
  • Azure Storage Table (Data)
  • Managed Identity
  • Log Analytics


AzUrlShortener Global Diagram

Deployment

The deployment is also changing. Instead of using a button from my GitHub repo, we will be using the Azure Developer CLI (azd) or a GitHub Action from your own repo (aka fork). The deployment takes about 10 minutes and is done with one simple command: azd up. The entire solution still has Infrastructure as Code (IaC), now using Bicep instead of ARM.

Here is a video about it.




Were there any challenges?

While working, there were a few challenges or "detours" that slowed the progress of the migration a little, but most of them were due to decisions made to improve the solution. Let's take a look at the first one in the next post: Franky's Notes: Converting a Blazor WASM to FluentUI Blazor server.


Want to Learn more?

To learn more about Azure Container Apps, I strongly suggest the repository Getting Started .NET on Azure Container Apps. It contains many step-by-step tutorials (with videos) to learn how to use Azure Container Apps with .NET.


Visual Countdown Days Until [a date]

During the holidays, I embarked on a fun project to create a visual countdown for important dates. Inspired by howmanysleeps and hometime from veebch, I wanted to build a countdown that didn't rely on Google Calendar. Instead, I used a Raspberry Pi Pico and some custom code to achieve this.

💾 You can find the full code on GitHub


Raspberry Pi pico and the light using custom colors

What It Is

This project consists of two main parts:

  • Python code for the Raspberry Pi Pico
  • A .NET website to update the configuration, allowing you to set:
    • The important date
    • Two custom colors or random ones
    • The RGB values for the custom colors


screenshot of the configuration website

What You Need

How to Deploy the Configuration Website

After cloning the repo, navigate to the src/NextEvent/ folder and use the Azure Developer CLI to initialize the project:

azd init

Enter a meaningful name for your resource group in Azure. To deploy, use the deployment command:

azd up

Specify the Azure subscription and location when prompted. After a few minutes, everything should be deployed. You can access the URL from the output in the terminal or retrieve it from the Azure Portal.

How to Set Up the Raspberry Pi Pico

Edit the config.py file to add your Wi-Fi information and update the number of lights on your light strip.

You can use Thonny to copy the Python code to the device. Copy both main.py and config.py to the Raspberry Pi Pico.

How It Works

  • The website creates a JSON file and saves it in a publicly accessible Azure storage.
  • When the Pi is powered on, it will:
    • Turn all the lights of the strip green, one by one
    • Change the color of the entire light strip a few times, then turn it off
    • Try to connect to the Wi-Fi
    • Retrieve the timezone, current date, and settings from the JSON file
    • If the important date is within 24 days, the countdown will be displayed using random colors or the specified colors.
    • If the date has passed, the light strip will display a breathing effect with a random color of the day.

The Code on the Raspberry Pi Pico

The main code for the Raspberry Pi Pico is written in Python. Here's a brief overview of what it does:

  1. Connect to Wi-Fi: The connect_to_wifi function connects the Raspberry Pi Pico to the specified Wi-Fi network.
  2. Get Timezone and Local Time: The get_timezone and get_local_time functions fetch the current timezone and local time using online APIs.
  3. Fetch Light Settings: The get_light_settings function retrieves the important date and RGB colors from the JSON file stored in Azure.
  4. Calculate Sleeps Until Special Day: The sleeps_until_special_day function calculates the number of days until the important date.
  5. Control the LED Strip: The progress function controls the LED strip, displaying the countdown or a breathing effect based on the current date and settings.

The Configuration Website

The configuration website is built in C#. It's a Blazor Server web app, and I used .NET Aspire to make it easy to run locally. The UI uses FluentUI Blazor, so it looks pretty without effort.

The website allows you to update the settings for the Raspberry Pi Pico. You can set the important date, choose custom colors, and save these settings to a JSON file in Azure storage.

Little Extra

The website is deployed to Azure Container Apps with minimum scaling set to zero to save on costs. This may cause a slight delay when loading the site for the first time, but it works just fine and returns to "dormant" mode after a while.

I hope you enjoyed reading about my holiday project! It was a fun and educational experience, and I look forward to working on more projects like this in the future.

What's Next?

Currently the project does a 24-day countdown (inspired by the advent calendar). I would like to add a feature to allow the user to set the number of days for the countdown. I would also like to add the possibility to set the color for the breathing effect (or keep it random) when the important date has passed. And lastly, I would like to add the time of day when the light strip should turn on and off, because we all have different schedules 😉.

Last thoughts

I really enjoyed doing this project. It was a fun way to learn more about the Raspberry Pi Pico, MicroPython (I didn't even know it was a thing), and FluentUI Blazor. I hope you enjoyed reading about it and that it inspires you to create your own fun projects. If you have any questions or suggestions, feel free to reach out; I'm fboucheros on most socials.

~Frank

It turns out, it's not difficult to remove all passwords from our Docker Compose files

I used to hardcode my passwords in my demos and code samples. I know it's not a good practice, but it's just for demo purposes, it cannot be that dramatic, right? I know there are proper ways to manage sensitive information, but this is only temporary! And it must be complicated to remove all the passwords from a deployment... It turns out, IT IS NOT difficult at all, and doing it prevents serious threats.

In this post, I will share how to remove all passwords from a docker-compose file using environment variables. It's quick to set up and easy to remember. For production deployments, it's better to use secrets, because environment variables will be visible in logs. That said, for demos, debugging, and testing, it's nice to see those values. The code is available on GitHub. This deployment was used for my talks during Azure Developers .NET Days: Auto-Generate and Host Data API Builder on Azure Static Web Apps and The most minimal API code of all... none.

The Before Picture

For this deployment, I used a docker-compose file to deploy a SQL Server in a first container and Data API Builder (DAB) in a second one. When the database container starts, a script runs to create the database tables and populate them.

services:

  dab:
    image: "mcr.microsoft.com/azure-databases/data-api-builder:latest"
    container_name: trekapi
    restart: on-failure
    volumes:
      - "./startrek.json:/App/dab-config.json"
    ports:
      - "5000:5000"
    depends_on:
      - sqlDatabase

  sqlDatabase:
    image: mcr.microsoft.com/mssql/server
    container_name: trekdb
    hostname: sqltrek
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "1rootP@ssword"
    ports:
      - "1433:1433"
    volumes:
      - ./startrek.sql:/startrek.sql
    entrypoint:
      - /bin/bash
      - -c
      - |
        /opt/mssql/bin/sqlservr & sleep 30
        /opt/mssql-tools/bin/sqlcmd -U sa -P "1rootP@ssword" -d master -i /startrek.sql
        sleep infinity

As we can see, the password is in clear text twice: in the configuration of the database container and in the parameter for sqlcmd when populating the database. Same thing for the DAB configuration file; here is the data-source node, where the password is in clear text in the connection string.

"data-source": {
 	"database-type": "mssql",
	"connection-string": "Server=localhost;Database=trek;User ID=sa;Password=myPassword!;",
	"options": {
		"set-session-context": false
	}
}

First Pass: Environment Variables

The easiest password instance to remove was in the sqlcmd command. When defining the container, an environment variable was used... why not reuse it! To reference that variable from inside the entrypoint command of a docker-compose file, you use the syntax $$VAR_NAME; the doubled dollar sign tells Docker Compose not to interpolate the value itself and to pass $VAR_NAME to the shell running inside the container. I used the name of the environment variable, MSSQL_SA_PASSWORD, to replace the hardcoded password.

/opt/mssql-tools/bin/sqlcmd -U sa -P $$MSSQL_SA_PASSWORD -d master -i /startrek.sql

Second Pass: .env File

That's great, but the value is still hardcoded where we assign the environment variable. Here comes the environment file: a plain text file that holds values as key-value pairs. The file is not committed to the repository, and it's used to store sensitive information. The file is read by Docker Compose and the values are injected. Here is the final docker-compose file:

services:

  dab:
    image: "mcr.microsoft.com/azure-databases/data-api-builder:latest"
    container_name: trekapi
    restart: on-failure
    env_file:
      - .env
    environment:
      MY_CONN_STRING: "Server=host.docker.internal;Initial Catalog=trek;User ID=sa;Password=${SA_PWD};TrustServerCertificate=True"
    volumes:
      - "./startrek.json:/App/dab-config.json"
    ports:
      - "5000:5000"
    depends_on:
      - sqlDatabase

  sqlDatabase:
    image: mcr.microsoft.com/mssql/server
    container_name: trekdb
    hostname: sqltrek
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: ${SA_PWD}
    env_file:
      - .env
    ports:
      - "1433:1433"
    volumes:
      - ./startrek.sql:/startrek.sql
    entrypoint:
      - /bin/bash
      - -c
      - |
        /opt/mssql/bin/sqlservr & sleep 30
        /opt/mssql-tools/bin/sqlcmd -U sa -P $$MSSQL_SA_PASSWORD -d master -i /startrek.sql
        sleep infinity

Note the env_file directive in the service definitions; .env is the name of the file used. The ${SA_PWD} syntax tells Docker Compose to look for SA_PWD in the .env file. Here is what the file looks like:

SA_PWD=This!s@very$trongP@ssw0rd

Conclusion

Simple and quick. There is no reason to still have passwords in clear text in docker-compose files anymore, even for a quick demo! Of course, for a production deployment there are stronger ways to manage sensitive information, but for a demo it's perfect, and it's secure.

During the Microsoft Build keynote on day 2, Julia Liuson and John Lambert talked about how threat actors are not only looking for the big fish, but are also scanning simple demos and old pieces of code, looking for passwords, keys, and sensitive information.

How to Deploy a .NET isolated Azure Function using Zip Deploy in One-Click

In this post, I will share a few things that need our attention when deploying a .NET isolated Azure Function from GitHub to Azure using the Zip Deploy method. This method is great for fast deployments and when your artefacts are zipped in a package.

Note: The complete code for this post is available on GitHub.


Understanding Zip Push/Zip Deploy

Zip Push allows us to deploy a compressed package, such as a zip file, directly to Azure. It could be part of a continuous integration and continuous deployment (CI/CD) pipeline or, like in this example, replace it. This approach is particularly useful when you want to ensure your artefacts remain unchanged across different environments or when aiming for the fastest deployment experience.

While CI-CD is excellent for keeping your code up-to-date, zip deployment offers the advantage of speed and consistency. It eliminates the need for compilation, leading to quicker uploads and deployments.


Preparing Your Package

It's crucial to package the code with all necessary dependencies. Nothing fetches external packages during the deployment; the zip file is decompressed and that's it. The best way to ensure you have everything you need is to publish your code to a folder, then go into that folder and zip all the files.

dotnet publish -c Release -o ./out

Don't zip the folder, it won't work as expected.

Don't zip the publish folder, it won't work

You need to go inside the folder and select all the files and zip them to create your deployment artefact.

From inside the publish folder, zip all the files
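For example, on Windows this can be done with PowerShell (a hedged example; any zip tool works, as long as the files themselves, and not their parent folder, end up at the root of the archive):

Compress-Archive -Path .\out\* -DestinationPath .\deployment-package.zip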

The next step is to make your artefact available online. There are many ways to do this, but for this post we are using a GitHub release. From the GitHub repository, create a new release, upload the zipped file created earlier, and publish it. Note the URL of the zipped file from the release.


Preparing The ARM Template

For this one-click deployment, we need an Azure Resource Manager (ARM) template. This is a document that describes the resources we want to deploy to Azure. To deploy the zipped file into the Azure Function, two particularities require our attention.

Here are the relevant sections of the template.

[...]
"resources": [
    {
        "apiVersion": "2022-03-01",
        "name": "[variables('funcAppName')]",
        "type": "Microsoft.Web/sites",
        "kind": "functionapp",
        "location": "[resourceGroup().location]",
        "properties": {
            "name": "[variables('funcAppName')]",
            "siteConfig": {
                "appSettings": [
                    {
                        "name": "FUNCTIONS_WORKER_RUNTIME",
                        "value": "dotnet-isolated"
                    },
                    {
                        "name": "WEBSITE_RUN_FROM_PACKAGE",
                        "value": "1"
                    },
                    [...]

Here we define a Windows Azure Function, and WEBSITE_RUN_FROM_PACKAGE needs to be set to 1. WEBSITE_RUN_FROM_PACKAGE is the setting that tells Azure to use the zip file as the deployment artefact.

Then, to specify where the zip file is located, we need to add an extension to the Azure Function.

    {
      "type": "Microsoft.Web/sites/extensions",
      "apiVersion": "2021-02-01",
      "name": "[format('{0}/ZipDeploy', variables('funcAppName'))]",
      "properties": {
        "packageUri": "https://github.com/FBoucher/ZipDeploy-AzFunc/releases/download/v1/ZipDeploy-package-v1.zip",
        "appOffline": true
      },
      "dependsOn": [
        "[concat('Microsoft.Web/sites/', variables('funcAppName'))]"
      ]
    }

The packageUri property is the URL of the zipped file from the GitHub release. Note the dependsOn property that ensures the Azure Function is created before the extension is added. The complete ARM template is available in the GitHub repository.


One-click Deployment

When you have your artefact and the ARM template uploaded to your GitHub repository, you can create a one-click deployment button. This button will take the user to the Azure portal and pre-fill the deployment form with the information from the ARM template. Here is an example of the button for markdown.

[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FFBoucher%2FZipDeploy-AzFunc%2Fmain%2Fdeployment%2Fazuredeploy.json)

The button has three parts: the first is the image displayed on the button, the second is the link to the Azure portal, and the third is the URL of the ARM template. The URL of the ARM template is the raw URL of the file in the GitHub repository, and it needs to be URL encoded. The encoding can be done with a tool like URL Encode/Decode.

Final Thoughts

Zip deployment is a powerful tool in your Azure arsenal, by itself or as part of a more complex CI/CD pipeline. It's a great way to make it easier for people to deploy your solution in their Azure subscription without having to clone or fork the repository.


Video version

If you prefer, there is also a video version of this post.


Problem in my local paradise: Func CLI doesn't upgrade

Last Friday, I encountered an issue while trying to run my Azure Function locally using VS Code. Despite having installed the Azure Functions extension and the Azure Functions Core Tools, I was unable to execute the func start command without encountering an error saying that no functions could be found.

In this post, I will share the various troubleshooting steps I took, what didn’t work, and how I ultimately resolved the issue. Spoiler alert: everything is now working correctly.


The Problem

My Azure Function is a .NET 8 isolated HTTP trigger. When I attempted to execute the func start command, it failed to find any functions. After a quick look at the documentation, I discovered that version 4 of the Core Tools is required for the isolated process model. However, I had already installed version 4 via the update popup in VS Code.

VS Code tool update

Something was wrong. I tried func --version and it returned 3.x.xx. Weird... and this is how I knew there was a problem.


Failed attempts

Following the Azure Functions Core Tools documentation, I found that there were multiple ways to install the Core Tools. Because that laptop was on Windows 11, I started by downloading the func-cli-x64.msi installer and running it. It didn't work; version 3 was still there.

I tried to install the Core Tools v4 using NPM: npm install -g azure-functions-core-tools@4. It didn't work.

I tried to uninstall version 3 with npm uninstall -g azure-functions-core-tools. I also tried using the command palette in VS Code.

VS Code uninstall Core Tool

Still, nothing changed; version 3 was still there.


The Solution

What worked was using the Chocolatey command choco uninstall azure-functions-core-tools to uninstall version 3. Somehow, it must have been installed in a different location, or some "config" got lost at some point (it's a developer laptop after all), and the other methods (npm, MSI, VS Code) couldn't see that version 3 was installed.

After that, I installed version 4 using NPM: npm install -g azure-functions-core-tools@4. And it worked! func --version returned 4.0.5571 and the func start command found my function.

I wrote this quick post hoping that it can help someone else, as I cannot be the only one with this problem.


~Frank

It's Time To Ditch your USB Keys

On day 2 of GitKon 2023, I presented a short beginner-friendly introduction to Git without using any "command lines". Too many are still using USB keys today to share files and collaborate on documents. When asked why they don't use Git, the answer is most likely that it's too complicated, too technical, and too much work.

Here is the good news: it doesn't need to be! This video explains why and how Git is for everyone, and shares simple tips to make the "how" accessible.

It's now available on-demand. 🐙


Database to go! The perfect database for developer

When building a new project, we don't need a big database that scales and holds lots of data, but we do still need some kind of data source. Of course, it is possible to fake it and have some hardcoded values returned by an API, but that takes time to create and it's not a database. In this post, I want to share a solution for a portable, self-healing, disposable, disconnected database that doesn't require any installation.

The solution? Put the database in a container! It doesn't matter what database you are planning to use or on which OS you are developing. Most databases will have an official image available on Docker Hub and Docker runs on all platforms. If you feel uncomfortable with containers, have no fear, this post is beginner-friendly.

This post is part of a series where I share my experience while building a Dungeon crawler game. The code can be found on GitHub.


The Plan

Have a database ready at the "press of a button". By "ready", I mean up and running, with data in it, and accessible to all developer tools.

Preparation for the Database

We need a script to create the database schema and some data. There are many ways to achieve this. A beginner-friendly way is to create an empty database and use a tool like Azure Data Studio to help create the SQL scripts. Doing it this way will validate that the script works.

The Docker command to create the database's container will change a little depending on the database you are using, but here is what a MySQL one looks like:

docker run --name some-mysql -e MYSQL_ROOT_PASSWORD='rootPassword' -p 3306:3306 -d mysql 

Here some-mysql is the name you want to assign to your container, rootPassword is the password for the MySQL root user, and -d means that the container will run detached. The -p option maps port 3306 of the container to port 3306 of the host; this is required to be able to connect to the database from the host.

output of the docker run command


Now a MySQL server is running inside the container. To connect to the server with Azure Data Studio, use the MySQL extension for Azure Data Studio. Microsoft has a quickstart, Use Azure Data Studio to connect and query MySQL, if needed. Create a new connection in Azure Data Studio, then create a database (ex: 2d6db).

Create a new connection in Azure Data Studio

You can use the MySQL command-line tool if you prefer, but Azure Data Studio offers a lot of help when you are not that familiar with SQL. You can even use the Copilot extension and ask it to write the SQL statement for you. It's pretty good!

If you want to learn more about this, check the Open at Microsoft episode: Copilot is now in Azure Data Studio and this is how it can help you! to see it in action.

It's fantastic for generating a first draft of the CREATE statements and for writing queries.

Copilot writing SQL

Let's create two SQL scripts. The first one will be to create the schema with all the tables. The idea here is to write the script and execute it to validate the results. Here is an example creating only one table to keep the post simple.

-- schema.sql

CREATE TABLE IF NOT EXISTS 2d6db.rooms (
  id int NOT NULL AUTO_INCREMENT,
  roll int DEFAULT 0,
  level int DEFAULT 1,
  size varchar(10) DEFAULT NULL,
  room_type varchar(255) DEFAULT NULL,
  description varchar(255) DEFAULT NULL,
  encounter varchar(255) DEFAULT NULL,
  exits varchar(255) DEFAULT NULL,
  is_unique bool DEFAULT false,
  PRIMARY KEY (id)
);

Now that there are tables in the database, let's fill them with seed data. For this, the second SQL script contains insert statements to populate the tables. We don't need all the data, only what will be useful when developing. Think about creating data that covers all types or scenarios; it's a development database, so it should contain data that helps you code.

-- data.sql

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (2,1,'Empty space', 'small','There is nothing in this small space', 'Archways',false);

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (3,1,'Strange Text', 'small','This narrow room connects the corridors and has no furniture. On the wall though...', 'Archways',false);

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (4,1,'Grakada Mural', 'small','There is a large mural of Grakada here. Her old faces smiles...', 'Archways',true);

Note: You can now stop the database's container with the command: docker stop some-mysql. We don't need it anymore.

Putting All the Pieces Together

This is when the magic starts to show up. Using a Docker Compose file, we will start the database container and execute the two SQL scripts to create and populate the database.

# docker-compose.yml

services:
  
  2d6server:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_DATABASE: '2d6db'
      MYSQL_ROOT_PASSWORD: rootPassword
    ports:
       - "3306:3306"
    volumes:
      - "../database/scripts/schema.sql:/docker-entrypoint-initdb.d/1.sql"
      - "../database/scripts/data.sql:/docker-entrypoint-initdb.d/2.sql"

The docker-compose.yml file is written in YAML and is usually used to start multiple containers at once, but it doesn't have to. In this scenario, the file defines a single container named 2d6server, using the same MySQL image and configuration as the previous Docker command. The last directive, volumes, is new: it maps the path where the SQL scripts are located to /docker-entrypoint-initdb.d inside the container. When MySQL starts, it executes the files in that specific folder in alphabetical order. This is why the scripts are mapped as 1.sql and 2.sql; the tables must be created first.

To get the database up and ready, we execute docker compose up.


# start the database
docker compose -f /path_to_your_file/docker-compose.yml up -d 

# stop the database
docker compose -f /path_to_your_file/docker-compose.yml down

By default, the command looks for a docker-compose.yml file. If your file has a different name, use the -f argument to specify the filename. Optionally, to free the prompt, you can pass the -d argument to the up command to run in detached mode.

Docker Compose commands

When you are done coding and you don't need the database anymore, execute the docker compose down command to free up your computer. Compared to when the server is installed locally, a container will leave no trace; your computer is not "polluted".

When you need to update the database, edit the SQL scripts first. When the scripts are ready, run docker compose down followed by docker compose up to get a refreshed database; the init scripts only run when the container is created, so a simple restart won't pick up the changes.

To Conclude

Now, you only need to execute one simple command to get a fresh database whenever you want. Developers don't need a database server installed and configured locally. And you don't need to worry about deleting or modifying data, like when using a shared database. After cloning the repository, all developers will have everything they need to start coding.

In a future post, I will share how I used Azure Data API Builder to generate a complete API on top of the database using the same docker compose method.

Video version!

If you prefer watching instead of reading, here is the video version of this post!

My Ubuntu laptop finally works with my docking station!

Targus USB-C Docking Station

I've been running Ubuntu on my personal laptop, a Dell Inspiron 13. I love it: it's slim, performant, and I can code, play, read, and stream without any difficulty. Yet when I tried to use my bigger PC monitors by connecting the laptop to a docking station, I had a lot of trouble. I tried two docking stations, but neither of them worked with Ubuntu. Some solutions found online suggested rebuilding my kernel, but I didn't want to do that; it felt too extreme for something that should be trivial.

This post is about how I finally got my docking station to work with Ubuntu 22.04.

My current docking station is a Targus USB-C Docking Station, and it works perfectly with other Windows devices. While doing some cleaning on my desk, a logo on the dock caught my attention: the DisplayLink logo. And that was the beginning of the end of my problems with that docking station.


DisplayLink logo on a Targus USB-C Docking Station


After a quick search (aka Googling with DuckDuckGo), I found exactly what I was looking for on the Synaptics website: a solution for older Linux versions!

Following the instructions in How to install DisplayLink Software - Ubuntu, I was able to download the Synaptics Ubuntu drivers and install them in a few minutes. After a reboot... it worked!

I hope it will help others with the same problem. 

How to edit a JSON object inside an Azure Logic App

I use Azure Logic Apps in many of my solutions; I find them so convenient for integrating different systems. Recently one of them was failing, and by looking at the error message, The property value exceeds the maximum allowed size, I knew what was wrong. I was trying to save a JSON object into a Storage table, but one property was too long. For this particular case I didn't need the value contained in that property, so the plan was to delete it. At first, I thought it wasn't possible to edit a variable of type object in a Logic App, but it is!

In this post, I will show how to use the Compose action and setProperty to perform data operations in Azure Logic Apps.

The Context

The first thing to know is that the Compose action does not update the current object but creates a new one. In this demo, the JSON object used is quite short to keep things simple.

{
	"firstname":"Frank",
	"lastname":"Boucher",
	"alias":"fboucheros",
	"bio":"With many years of experience in the IT industry, François (Frank) Boucher is a trusted Microsoft Azure professional whose expertise and bilingual service are relied upon in large Canadian markets (Ottawa and Montreal) as well as internationally. Among his many accolades, Francois has been awarded four times Microsoft Azure MVP status, named a Microsoft Azure Advisor, and Microsoft Azure P-Seller. Frank created the “Cloud 5 minutes” show. Where every second week, a new episode that answers a different technical question, is published both in French and English (cloud5mins.com). "
}

The Details

The Logic App receives a JSON object from the request body. This is transformed into a Person object. To empty the bio property, the Compose action will be used.

Inside the Compose action, use the context menu to find setProperty in the Expression section. The setProperty expression takes three parameters: the object, the property name, and the edited value. In this demo the goal was to empty the property, therefore an empty string is assigned, like this:

  setProperty(variables('Person'),'bio','').

The edited object is accessible from the output of the Compose action, and this is what will be returned in the Response action.
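With the sample object from earlier, the output of the Compose action is simply the same Person object with an emptied bio:

{
	"firstname":"Frank",
	"lastname":"Boucher",
	"alias":"fboucheros",
	"bio":""
}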

The Result

By adding a single action, it's possible to edit a JSON object in a Logic App without using inline code or external tools. A demo wouldn't be complete without an end-to-end run, so here is the result of an HTTP POST to the Logic App passing the Person JSON, and the returned result.

Video version

If you prefer, I also have a video version of this post.



~ Frank



How to re-open a repository cloned in a volume, from Visual Studio Code

I love Dev Containers; I use them for most of my development. One of my favorite options is to clone a repository directly into a Docker volume.


It takes a few seconds and you can work on your code without installing any SDKs or language that your current machine doesn't have. Marvelous!

Ideally, at the end of your session, you push your code to a remote repository (e.g., GitHub). However, sometimes I forget, or I get interrupted and start working on something else, and my changes are not pushed.

How do you re-open that environment?! In this post, I want to share two ways that I use.

Open Recent

The first method is to use the history of the editor! For example, here in Visual Studio Code, select the File menu and Open Recent.


If you haven't opened too many projects since you used that dev container, it should still be listed, as displayed in the image. It should look like: <Name of the repository> in a unique [Dev Container].

Make sure Docker is already running, then select it. Voila, in a snap you are back in the dev environment with your last changes waiting for you.

Open the Container

There are a few different ways to do this next one; I will share the one I consider the easiest for people who are not Docker experts.

First, if it's not already installed in your VSCode, add the Docker extension (identifier: ms-azuretools.vscode-docker). Then, in the extension's top section named Containers, search for your container. Its name should start with "vsc" (for Visual Studio Code), followed by a hyphen and the name of the repository you cloned. Right-click on it and select Start. After a few seconds, the container should show a little green triangle on its side and be ready to go.
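
If you prefer the command line, a rough equivalent with the Docker CLI would be the following (the container name below is only an illustration; use the one listed on your machine):

# List all containers, running or not, whose name starts with "vsc"
docker ps -a --filter "name=vsc-"

# Start the one matching your repository
docker start vsc-myrepository-a1b2c3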


The next step is to attach the container to VSCode. Once more, from the Docker extension, right-click on the container and select Attach Visual Studio Code.


This will open a new VSCode window. We are mostly done, but there is one last step. You will notice that the file explorer is empty; no worries, this last step fixes that. The terminal should open in the home folder of the root user. Let's open our project folder by executing the command:

cd /workspaces/<repository-name>

Then the final command is to re-open VSCode in this folder and let the Dev Container do its magic. Execute the command:

code . -r

(The -r flag re-uses the same VSCode window. It's optional; if not provided, a new VSCode instance will open.)


And voila! The Dev Container is just as it was before.


If you know other ways to achieve this, leave a comment or reach out, I'm always happy to learn more.


~ frank



How to simplify a Docker run command

I recently wanted to create animated GIFs from videos. The idea was to get video previews in a very lightweight file. After a quick search online, I found FFMPEG, a fantastic multimedia framework to manipulate media. There are also a few wrappers in different languages (ex: C#, JavaScript), but you still need to install FFMPEG locally, and I didn't want that. In fact, I wanted two things: a simple solution that doesn't require any local installation, and something that runs in the cloud. In this post, I want to share how I achieved the first one.

All the code and the container are available on GitHub and Docker Hub.

First Contact

The ffmpeg framework is very powerful and can do so many things; therefore it's normal that it has a ton of possible parameters and extensions. After some time spent in the documentation and a bit of trial and error, I found how to do exactly what I needed by calling it this way:

ffmpeg -r 60 -i $INPUTFILE -loop 0 -vf scale=320:-1 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > $OUTPUTFILE

This will create a five-second animated GIF from a video. It speeds up the video and lowers the framerate of the GIF to keep the output lightweight. Here is an example.

Hello World episode 5 seconds preview

This is great, but it is not very friendly. How can someone who only creates a video once in a while be expected to remember all those parameters?! And it gets even harder: when the video is vertical, some parameters have different values. It was time to simplify, and here is how I did it. Note that I'm a Docker beginner, so if you think there is a simpler or better way to do some of these steps, let me know, and let's learn together.

The Plan

The plan is simple: execute a simple Docker command like docker run fboucher/aciffmpeg -i NotInTheSky.mp4 and get a video preview. To build our ephemeral container, we will start with something lightweight like Alpine, install ffmpeg, and add a script that will be executed when the container runs. That sounds like an excellent plan; let's do it!

Writing the Script

The script is simple, but I learned a few things writing it, which is why it's included in this post. The goal was simple: execute the ffmpeg command using values from the parameters: the file path, and whether the video is vertical. Here is the script:

#!/bin/sh

# Parse options: -i <input file> and -v (flag only, no value: the video is vertical)
while getopts ":i:v" opt; do
  case $opt in
    i) inputFile="$OPTARG"
    ;;
    v) isVertical=true
    ;;
    \?) echo "Invalid option -$OPTARG" >&2
    exit 1
    ;;
  esac

  case $OPTARG in
    -*) echo "Option $opt needs a valid argument"
    exit 1
    ;;
  esac
done

# Default to horizontal when -v was not provided
if [ -z "$isVertical" ]; then isVertical=false; fi

# used for bash 
#IFS='.'
#read -a filePart <<< "$inputFile"
#outputFile="${filePart[0]}.gif"

# used for dash 
filename=$(echo "$inputFile" | cut -d "." -f 1)
outputFile="$filename.gif"

# Vertical videos: fix the height at 320; horizontal videos: fix the width at 320
if $isVertical
then
  ffmpeg -r 60 -i $inputFile -loop 0 -vf scale=-1:320 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > $outputFile
else
  ffmpeg -r 60 -i $inputFile -loop 0 -vf scale=320:-1 -c:v gif -f gif -ss 00:00:00.500 -r 10 -t 5 - > $outputFile
fi

Things I Learned: Parameters Without Values

The script needs to be as friendly as possible, therefore any unnecessary information should be removed. Most videos will be horizontal, so let's make the -v parameter optional. However, I don't want users to have to specify a value like script.sh -i myvideo.mp4 -v true, but simply script.sh -i myvideo.mp4 -v. This is very simple to do, once you know it. On the first line of code where we get the parameters, getopts ":i:v", note that there is no ":" after the "v". This specifies that we are not expecting a value for this option.

Things I Learned: Bash and Dash

As mentioned earlier, the container will be built from Alpine, and Alpine doesn't have bash but instead uses dash as its shell. It's mostly the same, but there are some differences. The first one is the shebang (aka "#!/bin/sh" on the first line). The second one is string manipulation. To generate a new file with the same name but a different extension, the script splits the file name at the ".". In bash this can be done with the IFS ... read ... <<< commands (commented out in the script), but in dash it fails with syntax error: unexpected redirection, because there is no <<< in dash. Instead, you need to use the command cut -d "." -f 1 (where -d specifies the character to use as the delimiter, and -f returns only that field).
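
To make the difference concrete, here is a tiny standalone sketch of the two approaches (the file name is just an example):

#!/bin/sh
inputFile="sample.mp4"

# bash only: fails on dash with "syntax error: unexpected redirection"
# IFS='.' read -a filePart <<< "$inputFile"
# outputFile="${filePart[0]}.gif"

# dash friendly: split on "." and keep the first field
filename=$(echo "$inputFile" | cut -d "." -f 1)   # sample
outputFile="$filename.gif"                        # sample.gif
echo "$outputFile"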

Building the image

It's now time to connect all the dots in the dockerfile.

FROM alpine:3.13
LABEL Name=aciffmpeg Version=0.0.2
RUN apk add ffmpeg
COPY ./src/myscript.sh /
RUN chmod +x /myscript.sh
ENTRYPOINT ["/myscript.sh"]

The file is not extremely complex, but let's go through it line by line.

  • We start FROM Alpine version 3.13 and apply a LABEL.
  • RUN executes the command to install ffmpeg. apk is the default package manager on Alpine, just like apt on Ubuntu.
  • COPY copies the script from our local machine into the container, at the root.
  • The second RUN command makes sure the script is executable.
  • Finally, ENTRYPOINT configures the container to run as an executable, in this case the script. All parameters passed to docker run will be passed to the script.

The only things left now are to build the image, tag it, and push it to Docker Hub.

docker build -t fboucher/aciffmpeg .

docker tag  0f42a672d000 fboucher/aciffmpeg:2.0

docker push fboucher/aciffmpeg:2.0

The Simplified version

And now, to create a preview of any video, you just need to map a volume, specify the file path, and optionally mention if the video is vertical.

On Linux/WSL, the command would look like this:

docker run -v /mnt/c/dev/test:/video fboucher/aciffmpeg -i /video/sample.mp4 -v

And on PowerShell like that:

docker run -v c/dev/test:/video fboucher/aciffmpeg -i /video/sample.mp4 -v

I learned a lot about Docker doing this project, and now I have a very useful tool. What tools have you built using containers that simplify your life or work?

Video Version

I recorded a video version if you are interested. 


~frank