
AI Vision: Turning Your Videos into Comedy Gold (or Cringe)

I've spent most of my career building software in C# and .NET, and I had only used Python in IoT projects. So when I wanted to build a fun project, an app that uses AI to roast videos, I knew it was the perfect opportunity to finally dig into Python web development.

The question was: where do I start? I hopped into a brainstorming session with Reka's AI chat and asked about options for building web apps in Python. It mentioned Flask, and I remembered friends talking about it being lightweight and perfect for getting started. That sounded right.

In this post, I share how I built "Roast My Life," a Flask app using the Reka Vision API.

The Vision (Pun Intended)

The app needed three core things:

  1. List videos: Show me what videos are in my collection
  2. Upload videos: Let me add new ones via URL
  3. Roast a video: Send a selected video to an AI and get back some hilarious commentary

See it in action

Part 1: Getting Started with Environment Setup

The first hurdle was always going to be environment setup. I'm serious about keeping my Python projects isolated, so I did the standard dance:

Before even touching dependencies, I scaffolded a super bare-bones Flask app. One thing I enjoy about C# is that all dependencies are brought in in one shot, so I like doing the same with my Python projects using a requirements.txt file instead of installing things ad hoc (pip install flask, then freezing later).

Dropping that file in first means the setup snippet below is deterministic. When you run pip install -r requirements.txt, Flask spins up using the exact versions I tested with, and you won't accidentally grab a breaking major update.
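For reference, here's a minimal requirements.txt for this app. The version pins below are illustrative (pin whatever versions you actually tested with); the point is that pinning is what makes the install deterministic:

flask==3.0.3
requests==2.32.3
python-dotenv==1.0.1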

Here's the shell dance that activates the virtual environment and installs everything:

cd roast_my_life/workshop
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
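If you're on Windows, the activation line is slightly different:

.venv\Scripts\activate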

Then came the configuration. We need an API key, and I don't want to hardcode it, so I created a .env file to store my API credentials:

API_KEY=YOUR_REKA_API_KEY
BASE_URL=https://vision-agent.api.reka.ai

To get that API key, I visited the Reka Platform and grabbed a free one. Seriously, a free key for playing with AI vision APIs? I was in.

With python app.py, I fired up the Flask development server and opened http://127.0.0.1:5000 in my browser. The UI was there, but... it was dead. Nothing worked.
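For python app.py to behave that way, the bottom of app.py needs the usual Flask entry point; a minimal sketch:

if __name__ == '__main__':
    # Start the Flask development server on http://127.0.0.1:5000
    app.run(debug=True)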

Perfect. Time to build.

The Backend: Flask Routing and API Integration

Coming from ASP.NET Core's controller-based routing and Blazor, Flask's decorator-based approach felt just like home. All the code goes in the app.py file, and each route is defined with a simple decorator. But first things first: loading configuration from the .env file using python-dotenv:

from flask import Flask, request, jsonify
import requests
import os
from dotenv import load_dotenv

app = Flask(__name__)

# Load environment variables (like appsettings.json)
load_dotenv()
api_key = os.environ.get('API_KEY')
base_url = os.environ.get('BASE_URL')

All the imported packages are the same ones that need to be in requirements.txt. And we retrieve the API key and base URL from environment variables, just like in .NET Core.

Now, to be able to get roasted we need first to upload a video to the Reka Vision API. Here's the code—I'll go over some details after.


@app.route('/api/upload_video', methods=['POST'])
def upload_video():
    """Upload a video to Reka Vision API"""
    data = request.get_json() or {}
    video_name = data.get('video_name', '').strip()
    video_url = data.get('video_url', '').strip()
    
    if not video_name or not video_url:
        return jsonify({"error": "Both video_name and video_url are required"}), 400
    
    if not api_key:
        return jsonify({"error": "API key not configured"}), 500
    
    try:
        response = requests.post(
            f"{base_url.rstrip('/')}/videos/upload",
            headers={"X-Api-Key": api_key},
            data={
                'video_name': video_name,
                'index': 'true',  # Required: tells Reka to process the video
                'video_url': video_url
            },
            timeout=30
        )
        
        response_data = response.json() if response.ok else {}
        
        if response.ok:
            video_id = response_data.get('video_id', 'unknown')
            return jsonify({
                "success": True,
                "video_id": video_id,
                "message": "Video uploaded successfully"
            })
        else:
            error_msg = response_data.get('error', f"HTTP {response.status_code}")
            return jsonify({"success": False, "error": error_msg}), response.status_code
            
    except requests.Timeout:
        return jsonify({"success": False, "error": "Request timed out"}), 504
    except Exception as e:
        return jsonify({"success": False, "error": f"Upload failed: {str(e)}"}), 500

Once the information from the frontend is validated, we make a POST request to the Reka Vision API's /videos/upload endpoint. The parameters are sent as form data, and we include the API key in the headers for authentication. Here I was using URLs to upload videos, but you can also upload local files by adjusting the request accordingly (see the sketch below). As you can see, it's pretty straightforward, and the documentation from Reka made it easy to understand what was needed.
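As a rough sketch of that local-file variant, assuming the endpoint accepts a multipart file field (the field name below is my assumption, so double-check Reka's documentation):

with open('my_video.mp4', 'rb') as video_file:
    response = requests.post(
        f"{base_url.rstrip('/')}/videos/upload",
        headers={"X-Api-Key": api_key},
        data={'video_name': video_name, 'index': 'true'},
        files={'file': video_file},  # assumed field name, not verified
        timeout=120  # a local upload can take longer than URL ingestion
    )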

The Magic: Sending Roast Requests to Reka Vision API

Here's where things get interesting. Once a video is uploaded, we can ask the AI to analyze it and generate content. The Reka Vision API supports conversational queries about video content:

from typing import Any, Dict  # needed for the Dict/Any type hints below

def call_reka_vision_qa(video_id: str) -> Dict[str, Any]:
    """Call the Reka Video QA API to generate a roast"""
    
    headers = {'X-Api-Key': api_key} if api_key else {}
    
    payload = {
        "video_id": video_id,
        "messages": [
            {
                "role": "user",
                "content": "Write a funny and gentle roast about the person, or the voice in this video. Reply in markdown format."
            }
        ]
    }
    
    try:
        resp = requests.post(
            f"{base_url}/qa/chat",
            headers=headers,
            json=payload,
            timeout=30
        )
        
        data = resp.json() if resp.ok else {"error": f"HTTP {resp.status_code}"}
        
        if not resp.ok and 'error' not in data:
            data['error'] = f"HTTP {resp.status_code} calling chat endpoint"
        
        return data
        
    except requests.Timeout:
        return {"error": "Request to chat API timed out"}
    except Exception as e:
        return {"error": f"Chat API call failed: {e}"}

Here we pass the video ID and a prompt asking for a "funny and gentle roast." The API responds with AI-generated content, which we then send back to the frontend for display. I give the AI more "freedom" by asking it to reply in markdown format, which makes the output more engaging.
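To close the loop, here is roughly how that helper is exposed to the frontend. I haven't detailed the response schema here, so this sketch passes the API's JSON through as-is and lets the frontend pick out the markdown:

@app.route('/api/roast/<video_id>', methods=['POST'])
def roast_video(video_id):
    """Ask Reka to roast the selected video and relay the answer"""
    result = call_reka_vision_qa(video_id)
    if 'error' in result:
        return jsonify({"success": False, "error": result['error']}), 502
    # Forward the raw response; the frontend renders the markdown
    return jsonify({"success": True, "roast": result})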

Try It Yourself!

The complete project is available on GitHub: reka-ai/api-examples-python

What Makes the Reka Vision API So Nice to Use

What really stood out to me was how approachable the Reka Vision API is. You don't need any special SDK—just the requests library making standard HTTP calls. And honestly, it doesn't matter what language you're used to; an HTTP call is pretty much always simple to do. Whether you're coming from .NET, Python, JavaScript, or anything else, you're just sending JSON and getting JSON back.

Authentication is refreshingly straightforward: just pop your API key in the header and you're good to go. No complex SDKs, no multi-step authentication flows, no wrestling with binary data streams. The conversational interface lets you ask questions in natural language, and you get back structured JSON responses with clear fields.

One thing worth noting: in this example, the videos are pre-uploaded and indexed, which means the responses come back fast. But here's the impressive part—the AI actually looks at the video content. It's not just reading a transcript or metadata; it's genuinely analyzing the visual elements. That's what makes the roasts so spot-on and contextual.

Final Thoughts

The Reka Vision API itself deserves credit for making video AI accessible. No complicated SDKs, no multi-GB model downloads, no GPU requirements. Just simple HTTP requests and powerful AI capabilities. I'm not saying I'm switching to Python full-time, but expect to see me sharing more Python projects in the future!

References and Resources


Using AI with .NET 10 Scripts: What Worked, What Didn’t, and Lessons Learned

I wanted to kick the tires on the upcoming .NET 10 C# script experience and see how far I could get calling Reka’s Research LLM from a single file, no project scaffolding, no .csproj. This isn’t a benchmark; it’s a practical tour to compare ergonomics, setup, and the little gotchas you hit along the way. I’ll share what worked, what didn’t, and a few notes you might find useful if you try the same.

All the sample code (and a bit more) is here: reka-ai/api-examples-dotnet · csharp10-script. The scripts run a small “top 3 restaurants” prompt so you can validate everything quickly.

We’ll make the same request in three ways:

  1. OpenAI SDK
  2. Microsoft.Extensions.AI for OpenAI
  3. Raw HttpClient

What you need

The C# "script" feature used below ships with the upcoming .NET 10 and is currently available in preview. If you prefer not to install a preview SDK, you can run everything inside the provided Dev Container or on GitHub Codespaces. I include a .devcontainer folder with everything set up in the repo.

Set up your API key

We are talking about APIs here, so of course, you need an API key. The good news is that it's free to sign up with Reka and get one! It's a 2-click process; more details are in the repo. The API key is then stored in a .env file, and each script loads environment variables using DotNetEnv.Env.Load(), so your key is picked up automatically. I went this way instead of using dotnet user-secrets because that's how it would be done in a CI/CD pipeline or a quick script.
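Concretely, the top of each script looks roughly like this (the version pin and the REKA_API_KEY variable name are illustrative; match them to your .env):

#:package DotNetEnv@3.1.0

// Load the .env file so the key is exposed as an environment variable
DotNetEnv.Env.Load();
var REKA_API_KEY = Environment.GetEnvironmentVariable("REKA_API_KEY")
    ?? throw new InvalidOperationException("REKA_API_KEY not found in .env");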

Run the demos

From the csharp10-script folder, run any of these scripts; each line is an alternative:

dotnet run 1-try-reka-openai.cs
dotnet run 2-try-reka-ms-ext.cs
dotnet run 3-try-reka-http.cs

You should see a short list of restaurant suggestions.

Prompt result: 3 restaurants


OpenAI SDK with a custom endpoint

Reka's API uses the OpenAI format; therefore, I thought of using the OpenAI NuGet package. To reference a package in a script, you use the #:package [package name]@[package version] directive at the top of the file. Here is an example:

#:package OpenAI@2.3.0

// ...

var baseUrl = "http://api.reka.ai/v1";

var openAiClient = new OpenAIClient(new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
    Endpoint = new Uri(baseUrl)
});

var client = openAiClient.GetChatClient("reka-flash-research");

string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";

var completion = await client.CompleteChatAsync(
    new List<ChatMessage>
    {
        new UserChatMessage(prompt)
    }
);

var generatedText = completion.Value.Content[0].Text;

Console.WriteLine($" Result: \n{generatedText}");

The rest of the code is straightforward. You create a chat client, specify the Reka API URL, select the model, and then you send a prompt. And it works just as expected. However, not everything was perfect; before I share more about that part, let's talk about Microsoft.Extensions.AI.

Microsoft Extensions AI for OpenAI

Another common way to use LLMs in .NET is to use one of the Microsoft.Extensions.AI NuGet packages. In our case, Microsoft.Extensions.AI.OpenAI was used.

#:package Microsoft.Extensions.AI.OpenAI@9.8.0-preview.1.25412.6

// ....

var baseUrl = "http://api.reka.ai/v1";

IChatClient client = new ChatClient("reka-flash-research", new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
    Endpoint = new Uri(baseUrl)
}).AsIChatClient();

string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";

Console.WriteLine(await client.GetResponseAsync(prompt));

As you can see, the code is very similar. Create a chat client, set the URL, the model, and add your prompt, and it works just as well.

That's two ways to use Reka API with different SDKs, but maybe you would prefer to go "SDKless", let's see how to do that.

Raw HttpClient calling the REST API

Without any SDK to help, there are a few more lines of code to write, but it's still pretty straightforward. Let's see the code:

using System.Text;       // needed for StringContent encoding
using System.Text.Json;  // needed for JsonSerializer and JsonDocument

using var httpClient = new HttpClient();

var baseUrl = "http://api.reka.ai/v1/chat/completions";

var requestPayload = new
{
    model = "reka-flash-research",
    messages = new[]
            {
                new
                {
                    role = "user",
                    content = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in New York city"
                }
            }
};

// Serialize the anonymous payload object into the JSON the API expects
var jsonPayload = JsonSerializer.Serialize(requestPayload);

using var request = new HttpRequestMessage(HttpMethod.Post, baseUrl);
request.Headers.Add("Authorization", $"Bearer {REKA_API_KEY}");
request.Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");

var response = await httpClient.SendAsync(request);

var responseContent = await response.Content.ReadAsStringAsync();

var jsonDocument = JsonDocument.Parse(responseContent);

var contentString = jsonDocument.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString();

Console.WriteLine(contentString);

So you create an HttpClient, prepare a request with the right headers and payload, send it, get the response, and parse the JSON to extract the text. In this case, you have to know the JSON structure of the response, but it follows the OpenAI format.
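For reference, the fragment of the OpenAI-style response that the code above navigates looks like this (most fields omitted):

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "1. ... 2. ... 3. ..."
      }
    }
  ]
}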

What did I learn from this experiment?

I used VS Code while trying the script functionality. One thing that surprised me was that I didn't get any IntelliSense or autocompletion. I tried disabling the DevKit extension and changing the OmniSharp settings, but no luck. My guess is that it's because the feature is in preview, and it will work just fine in November 2025 when .NET 10 is released.

In this light environment, I encountered some issues where, for some reason, I couldn't use an https endpoint, so I had to use http. In the raw HttpClient script, I also hit some errors with Reflection not being available. It could be related to the preview or something else; I didn't investigate further.

For the most part, everything worked as expected. You can use C# code to quickly execute some tasks without any project scaffolding. It's a great way to try out the Reka API and see how it works.

What's Next?

While writing those scripts, I encountered multiple issues that aren't related to .NET but more about the SDKs when trying to do more advanced functionalities like optimization of the query and formatting the response output. Since it goes beyond the scope of this post, I will share my findings in a follow-up post. Stay tuned!

Video version

Here is a video version of this post


Learn more

Get an AI assistant to do your research

Introducing Reka Research: Your AI Research Assistant

An AI Assistant searching in documents
Meet Reka Research, a powerful AI agent that can search the web and analyze your files to answer complex questions in minutes. Whether you're staying up to date with AI news, screening resumes, or researching technical topics, Reka Research does the heavy lifting for you.


What Makes Reka Research Special?

Reka Research stands out in three key areas:

  • Top Performance: Best in class results on research benchmarks
  • Fast Results: Get thorough answers in 1-3 minutes
  • Full Transparency: See exactly how the AI reached its conclusions. All steps are visible.

Smart Web Search That Actually Works

Ever wished you could ask someone to research the latest AI developments while you focus on other work? That's exactly what Reka Research does.

Watch how it works:

In this demo, Jess and Sharath show how Reka Research can automatically gather the most important AI news from the past week. The AI visits multiple websites, takes notes, and presents a clean summary with sources. You can even restrict searches to specific domains or set limits on how many sites to check.

File Search for Your Private Documents

Sometimes the information you need isn't on the web - it's in your company's documents, meeting notes, or file archives. Reka Research can search through thousands of private files to find exactly what you're looking for.

See it in action:  

In this example, Jess and Sharath show how HR teams can use Reka Research to quickly screen resumes. Instead of manually reviewing hundreds of applications, the AI finds candidates who meet specific requirements (like having a computer science degree and 3+ years of backend experience) in seconds!

Writing Better Prompts Gets Better Results

Like any AI tool, Reka Research works best when you know how to ask the right questions. The key is being specific about what you want and providing context.

Learn the techniques:

Jess and Yi share practical tips for getting the most out of Reka Research. Instead of asking "summarize meeting minutes," try "summarize April meeting minutes about public participation." The more specific you are, the better your results will be.

Ready to Try Reka Research?

Reka Research is currently available for everyone! Try it via the playground, or directly through the API. Whether you're researching competitors, analyzing documents, or staying current with industry trends, it can save you hours of work.

Want to learn more and connect with other users? Join our Discord community where you can:

  • Get tips from other Reka Research users
  • Share your use cases and success stories
  • Stay updated on new features and releases
  • Ask questions directly to our team

Join our Discord Community

Ready to transform how you research? Visit reka.ai to learn more about Reka Research and our other AI tools.

Reading Notes #582

It is time to share new reading notes. It is a habit I started a long time ago where I share a list of all the articles, blog posts, and books that catch my interest during the week. 

 If you think you may have interesting content, share it!
Fridge with wings, realistic rendering 


 

Cloud

Data


Programming


~Frank

Database to go! The perfect database for developer

When building a new project, we don't need a big database that scales and holds lots of data, but we do still need some kind of data source. Of course, it is possible to fake it and have some hardcoded values returned by an API, but that takes time to create and it's not a database. In this post, I want to share a solution to have a portable, self-healing, disposable, disconnected database that doesn't require any installation.

The solution? Put the database in a container! It doesn't matter what database you are planning to use or on which OS you are developing. Most databases will have an official image available on Docker Hub and Docker runs on all platforms. If you feel uncomfortable with containers, have no fear, this post is beginner-friendly.

This post is part of a series where I share my experience while building a Dungeon crawler game. The code can be found on GitHub.


The Plan

Have a database ready at the "press of a button". By "ready", I mean up and running, with data in it, and accessible to all developer tools.

Preparation for the Database

We need a script to create the database schema and some data. There are many ways to achieve this. A beginner-friendly way is to create an empty database and use a tool like Azure Data Studio to help create the SQL scripts. Doing it this way will validate that the script works.

The Docker command to create the database's container will change a little depending on the database you are using, but here's what a MySQL one looks like:

docker run --name some-mysql -e MYSQL_ROOT_PASSWORD='rootPassword' -p 3306:3306 -d mysql 

Where some-mysql is the name you want to assign to your container, rootPassword is the password to be set for the MySQL root user, and -d means that the container will run detached. The -p option maps port 3306 of the container to port 3306 of the host; this is required to be able to connect to the database from the host.

output of the docker run command


Now, a MySQL server is running inside the container. To connect to the server with Azure Data Studio, use the MySQL extension for Azure Data Studio. Microsoft has a QuickStart: Use Azure Data Studio to connect and query MySQL if needed. Create a new connection in Azure Data Studio, then create a database (ex: 2d6db).

Create a new connection in Azure Data Studio

You can use the MySQL command-line tool if you prefer (see the command below), but Azure Data Studio offers a lot of help when you are not that familiar with SQL. You can even use the Copilot extension and ask it to write the SQL statements for you. It's pretty good!

If you want to learn more about this, check the Open at Microsoft episode: Copilot is now in Azure Data Studio and this is how it can help you! to see it in action.

It's fantastic to generate a first draft of the create statements and to make queries.

Copilot writing SQL
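And if you did choose the MySQL command-line tool, you don't even need MySQL installed on your machine; you can open a shell directly inside the running container:

docker exec -it some-mysql mysql -uroot -p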

Let's create two SQL scripts. The first one creates the schema with all the tables. The idea here is to write the script and execute it to validate the results. Here is an example creating only one table, to keep the post simple.

-- schema.sql

CREATE TABLE IF NOT EXISTS 2d6db.rooms (
  id int NOT NULL AUTO_INCREMENT,
  roll int DEFAULT 0,
  level int DEFAULT 1,
  size varchar(10) DEFAULT NULL,
  room_type varchar(255) DEFAULT NULL,
  description varchar(255) DEFAULT NULL,
  encounter varchar(255) DEFAULT NULL,
  exits varchar(255) DEFAULT NULL,
  is_unique bool DEFAULT false,
  PRIMARY KEY (id)
);

Now that there are tables in the database, let's fill them with seed data. For this, the second SQL script will contain insert statements to populate the tables. We don't need all the data, only what will be useful when developing. Think about creating data that covers all your types and scenarios; it's a development database, so it should contain data that helps you code.

-- data.sql

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (2,1,'Empty space', 'small','There is nothing in this small space', 'Archways',false);

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (3,1,'Strange Text', 'small','This narrow room connects the corridors and has no furniture. On the wall though...', 'Archways',false);

INSERT INTO 2d6db.rooms(roll, level, room_type, size, description, exits, is_unique)
VALUES (4,1,'Grakada Mural', 'small','There is a large mural of Grakada here. Her old faces smiles...', 'Archways',true);

Note: You can now stop the database's container with the command: docker stop some-mysql. We don't need it anymore.

Putting All the Pieces Together

This is when the magic starts to show up. Using a Docker Compose file, we will start the database container and execute the two SQL scripts to create and populate the database.

# docker-compose.yml

services:
  
  2d6server:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_DATABASE: '2d6db'
      MYSQL_ROOT_PASSWORD: rootPassword
    ports:
       - "3306:3306"
    volumes:
      - "../database/scripts/schema.sql:/docker-entrypoint-initdb.d/1.sql"
      - "../database/scripts/data.sql:/docker-entrypoint-initdb.d/2.sql"

The docker-compose.yml file is written in YAML. Compose files are usually used to start multiple containers at once, but they don't need to. In this scenario, the file defines a single container named 2d6server, with the same MySQL image and configuration as the previous Docker command. The last section, volumes, is new. It maps the path where the SQL scripts are located to /docker-entrypoint-initdb.d inside the container. When MySQL starts, it executes the files in that specific folder in alphabetical order. This is why the scripts are renamed 1.sql and 2.sql, as the tables must be created first.

To get the database up and ready, we execute docker compose up.


# start the database
docker compose -f /path_to_your_file/docker-compose.yml up -d 

# stop the database
docker compose -f /path_to_your_file/docker-compose.yml down

By default, the command looks for a docker-compose.yml file. If your file has a different name, use the -f argument to specify the filename. Optionally, when starting, you can pass the -d argument to run in detached mode and free the prompt.

Docker Compose commands

When you are done coding and you don't need the database anymore, execute the docker compose down command to free up your computer. Compared to when the server is installed locally, a container will leave no trace; your computer is not "polluted".

When you need to update the database, edit the SQL scripts first. When the scripts are ready, take the stack down and back up again to get the database refreshed; the initialization scripts only run when the database is created, so a simple restart won't pick up the changes.
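In practice, refreshing looks like this:

# re-create the database with the updated scripts
docker compose -f /path_to_your_file/docker-compose.yml down
docker compose -f /path_to_your_file/docker-compose.yml up -d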

To Conclude

Now you only need to execute one simple command to get a fresh database, whenever you want. Developers no longer need a database server installed and configured locally. And you don't need to worry about deleting or modifying data, like when using a shared database. After cloning the repository, all developers will have everything they need to start coding.

In a future post, I will share how I used Azure Data API Builder to generate a complete API on top of the database using the same docker compose method.

Video version!

If you prefer watching instead of reading, here is the video version of this post!

Reading Notes #570

It is time to share new reading notes. It is a habit I started a long time ago where I share a list of all the articles, blog posts, and books that catch my interest during the week. 


If you think you may have interesting content, share it!

 

Programming

Open Source

Low Code

Miscellaneous


~Enjoy!

Reading Notes #562


It is time to share new reading notes. It is a habit I started a long time ago where I share a list of all the articles, blog posts, and books that catch my interest during the week. 

If you think you may have interesting content, share it!

Cloud

Programming

Open Source

  • How to write a perfect README for your GitHub project (Marc Seitz) - This is a nice and simple guide to make sure your README helps visitors land well on your project.

  • 5 Ways OpenTelemetry Can Reduce Costs (Morgan McLean) - Great tools from the open-source community to help save money.

  • Open at Microsoft – OmniBOR (Aeva Black) - Did you know there is a tool that can help you see the security flaws of your dependencies? Learn more about this project in this post and video.

  • Dapr (AugustaUd) - Learn more about Dapr with this series of 3 videos that each contain short demos. With a very active community, there is no doubt that this OSS project is healthy.

Low Code

Miscellaneous

~Frank

Reading Notes #498


Good Monday! It's already time to share my reading notes of the week: a list of all the articles and blog posts that caught my interest during the week.

If you think you may have interesting content, share it!

Cloud

Programming

Miscellaneous


~frank

How to configure a secured custom domain on an Azure Function or website

I wanted to create this tutorial for a long time: how to map a naked domain on an Azure resource. It looks complicated, but once you know what to do, it's actually pretty simple. In this post, I will share the three simple steps to do exactly this.

Step 1: Add Custom Domain


The first step is to map a domain on the application. The method I will explain uses a "www" domain (ex: www.fboucher.dev). To map a naked domain directly (ex: fboucher.dev) you would need to buy a wildcard certificate. However, I will show you in step three how to work around this issue by using DNS rules.

From the Azure portal, open the Azure Function or App Service. From the left menu, search for "custom" and click the Custom domains option. In this panel, click the Add custom domain button and enter your www domain.



Click the validate button and follow the instructions to make the connection between the App Service and your domain provider.

Step 2: Adding a Certificate


Now that your custom domain is mapped, let's fix the "not secure" warning by adding a certificate. From the Azure portal, return to the App blade. Repeat the previous search for "custom", and select the TLS/SSL settings option. Click Private Key Certificates, then the Create App Service Managed Certificate button. Select the domain previously added, and save. It will take a few moments to create the certificate.



Go back to the Custom domains blade and click the Add binding button. Select the domain and certificate, don't forget to select the SNI SSL option, and click the Add Binding button.




Step 3: Create the DNS Rules

Create an account in cloudflare.com and add a site for your domain. We will need to customize the DNS and create some Page Rules.



On cloudflare.com, note the two nameserver addresses. Go to the original name provider (in my case GoDaddy) and replace the nameservers with the values found on Cloudflare.



Create a rule that will redirect all the incoming traffic from the naked domain to the www domain. In the options at the top, click Page Rules (B), then click the Create Page Rule button.



In the If the URL matches field, enter the naked domain followed by /*. That will match everything coming from that URL.

For the settings, select Forwarding URL and 301 - Permanent Redirect. The destination URL should be https://www. with your domain, followed by /$1.
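Using the www.fboucher.dev example from step one, the rule would look like this:

If the URL matches: fboucher.dev/*
Settings: Forwarding URL, 301 - Permanent Redirect
Destination URL: https://www.fboucher.dev/$1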




References

🔗 Map an existing custom DNS name to Azure App Service: https://c5m.ca/customDomain 

🔗 Secure a custom DNS name with a TLS/SSL binding in Azure App Service: https://c5m.ca/tls-ssl

Summary of Week 34

At the beginning of every weekend, I will share a recap of the week and, at the same time, a summary of my streams. Those videos are at least two hours long, so I thought a short summary could be useful to let you know if the topic interests you. Watch only the summary, or relax and enjoy the longer version; that's up to you!

   




~

Recap/ Summary week #33

At the beginning of every weekend, I will share a recap of the week and, at the same time, a summary of my streams. Those videos are at least two hours long, so I thought a short summary could be useful to let you know if a topic interests you. Watch only the summary, or relax and enjoy the longer version; that's up to you!

   

  • Coding Python: Deploy Django and Postgres Apps to Azure with VS Code - c5m.ca/aaa-ep25
  • Stream 119 - How easy can we make the deployment of TinyBlazorAdmin -  c5m.ca/stream-ep119
  • Stream 120 - Celebrating 500 followers and working on the Chat bot - c5m.ca/stream-ep120

~

How to Manage Azure Resources

Whether you are running Linux, macOS, or Windows, and whether you are on the road or in the comfort of your office, there are many different ways to manage your Azure resources. In this video, I show you five ways to do it and explain the pros and cons of each:

  1. Azure Portal
  2. Azure PowerShell
  3. Azure CLI
  4. Azure Mobile App
  5. Azure Cloud Shell

There is also an excellent Microsoft Learn module that will teach you how to use the Azure portal efficiently.

References

Recap / Summary of the week #28

Every Friday, starting now, I will share a recap of the week and at the same time a summary of my streams. Those videos are at least two hours long, so I thought a short summary could be useful to let you know if a topic interests you.


Useful Links:


~☁️

Reading Notes #428


Frank in kayak smiling at the camera
Every Monday, I share my "reading notes". Those are a curated list of all the articles, blog posts, podcast episodes, and books that caught my interest during the week.

It's a mix of current news and what I consumed.

If you think you may have interesting content, share it!

Cloud


Programming


Miscellaneous

  • APIs in the 2020s Panel (.NET Rocks!) - A virtual panel of awesome speakers that talked about API, REST, GraphQL, oData and so more. Lovely episode.
  • 471: How to Say No Without Saying No, with Lois Frankel (Coaching for Leaders) - Saying No... while staying open. Really interesting topic. I put Lois Frankel's Nice Girls Don't Speak Up or Stand Out on my to-read list; that book may be written with women in mind, but I think it is really interesting.
  • Leveraging Our Emotional Goals (Developer Tea) - An interesting episode that talks about goals and what we need to do (or not) to achieve them.
  • 203: Updating Open Source Projects (https://castbox.fm/episode/203%3A-Updating-Open-Source-Projects-id2117504-id267802246) - As I just created the first release of one of my open-source projects, I found the topic more than interesting... Thank you, great show.



Books

Cover of the Book The Impossible first.
  The Impossible First: From Fire to Ice—Crossing Antarctica Alone

  Author: Colin O'Brady

An incredible adventure, yes, across the globe, but more importantly, beyond personal limits. I found this book very inspiring. I felt like I was following him across Antarctica, in the blizzard or in those deep moments. Great memoir, great adventure.





~enjoy!

 

Simplify your deployment with nested Azure Resource Manager (ARM) templates


Most solutions, if not all, are composed of multiple parts: backend, frontend, services, APIs, etc. Because all parts could have a different life cycle, it's important to be able to deploy them individually. However, sometimes we would like to deploy everything at once. It's exactly the scenario I had in a project I'm working on, with one backend and one frontend.

In this post, I will explain how I use nested Azure Resource Manager (ARM) templates and conditions to let the user decide whether to deploy only the backend, or the backend with a frontend of their choice. All the code is available on GitHub and, if you prefer, a video version is available below.
(This post is also available in French)

The Context


The project used in this post is my open-source, budget-friendly Azure URL Shortener. As mentioned previously, the project is composed of two parts. The backend leverages Microsoft's serverless Azure Functions; it's a perfect match in this case because it will only run when someone clicks a link. The second part is a frontend, and it's totally optional. Because the Azure Functions are HTTP triggers, they act as an API and can therefore be called from anything able to make an HTTP call. Both are very easily deployable using an ARM template, by a PowerShell or CLI command, or by a one-click button directly from GitHub.

The Goal


At the end of this post, we will be able, in one click, to deploy just the Azure Functions or to deploy them with a frontend of our choice (I only have one right now, but more will come). To do this, we will modify the "backend" ARM template using a condition and nest the ARM template responsible for the frontend deployment.

The ARM templates are available in their [initial](https://github.com/FBoucher/AzUrlShortener/tree/master/tutorials/optional-arm/before) and [final](https://github.com/FBoucher/AzUrlShortener/tree/master/tutorials/optional-arm/before/after) versions.

Adding New Inputs


We will nest the ARM templates, meaning that our backend template (azureDeploy.json) will call the frontend template (adminBlazorWebsite-deployAzure.json). Therefore, we need to add all the required information to azureDeploy.json to make sure it's able to deploy adminBlazorWebsite-deployAzure.json successfully. Looking at the parameters required for the second template, we only need two values: AdminEMail and AdminPassword. All the others can be generated, or we already have them.

We will also need another parameter that will act as our selection option. So let's add a parameter named frontend that allows only two values: none and adminBlazorWebsite. If the value is none, we only deploy the Azure Functions. When the value is adminBlazorWebsite, we deploy the Azure Functions, of course, but we also deploy an admin website to go with them.

Following best practices, we add clear descriptions and add those three parameters to the parameters section of the ARM template:

"frontend": {
    "type": "string",
    "allowedValues": [
        "none",
        "adminBlazorWebsite"
    ],
    "defaultValue": "adminBlazorWebsite",
    "metadata": {
        "description": "Select the frontend that will be deploy. Select 'none', if you don't want any. Frontend available: adminBlazorWebsite, none. "
    }
},
"frontend-AdminEMail": {
    "type": "string",
    "defaultValue": "",
    "metadata": {
        "description": "(Required only if frontend = adminBlazorWebsite) The EMail use to connect into the admin Blazor Website."
    }
},
"frontend-AdminPassword": {
    "type": "securestring",
    "defaultValue": "",
    "metadata": {
        "description": "(Required only if frontend = adminBlazorWebsite) Password use to connect into the admin Blazor Website."
    }
}

Nested Templates


Let's assume for now that we always deploy the website when we deploy the Azure Functions, to keep things simple. What we need now is a nested ARM template; that's when you deploy an ARM template from inside another ARM template. This is done with a Microsoft.Resources/deployments node. Let's look at the code:

{
    "name": "FrontendDeployment",
    "type": "Microsoft.Resources/deployments",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', variables('funcAppName'))]",
        "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('funcAppName'), 'web')]"
    ],
    "resourceGroup": "[resourceGroup().name]",
    "apiVersion": "2019-10-01",
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[variables('frontendInfo')[parameters('frontend')].armTemplateUrl]"
        },
        "parameters": {
            "basename": {
                "value" : "[concat('adm', parameters('baseName'))]"
            },
            "AdminEMail": {
                "value" : "[parameters('frontend-AdminEMail')]"
            },
            "AdminPassword": {
                "value" : "[parameters('frontend-AdminPassword')]"
            },
            "AzureFunctionUrlListUrl": {
                "value" : "[concat('https://', reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0], '/api/UrlList?code=', listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default)]"
            },
            "AzureFunctionUrlShortenerUrl": {
                "value" : "[concat('https://', reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0], '/api/UrlShortener?code=', listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default)]"
            },
            "GitHubURL": {
                "value" : "[parameters('GitHubURL')]"
            },
            "GitHubBranch": {
                "value" : "[parameters('GitHubBranch')]"
            },
            "ExpireOn": {
                "value" : "[parameters('ExpireOn')]"
            },
            "OwnerName": {
                "value" : "[parameters('OwnerName')]"
            }

        }
    }
}

If we examine this node, we have the classics: name, type, dependsOn, resourceGroup, apiVersion. Here we really want the Azure Functions to be fully deployed, so we need the FunctionApp to be created AND the GitHub sync to be complete; this is why there is also a dependency on Microsoft.Web/sites/sourcecontrols.

In properties we will pass the mode as Incremental as it will leave unchanged resources that exist in the resource group but aren't specified in the template.

Learn more about the Azure Resource Manager deployment modes here as they are very powerful.

The second property is templateLink. This is really important as it's the URL to the other ARM template. That URI must not be a local file or a file that is only available on your local network. You must provide a URI value that is downloadable over HTTP or HTTPS. In this case, it's a variable that contains the GitHub URL where the template is available.
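For context, the frontendInfo variable used by templateLink maps each frontend option to the URL of its ARM template. Here is a sketch of what it could look like; the placeholder URL and the empty none entry are my assumptions, the real values live in azureDeploy.json:

"variables": {
    "frontendInfo": {
        "none": {
            "armTemplateUrl": ""
        },
        "adminBlazorWebsite": {
            "armTemplateUrl": "https://raw.githubusercontent.com/<owner>/<repo>/master/adminBlazorWebsite-deployAzure.json"
        }
    }
}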

Finally, we have the parameters, and this is how we pass the values to the second template. Let's skip those where I just pass the parameter value from the caller to the called, and focus on basename, AzureFunctionUrlListUrl, and AzureFunctionUrlShortenerUrl.

For basename, I just add a prefix to the basename parameter received; this way the resource names will be different, but we can still see the "connection". That's purely optional; you could have added this value as a parameter to azureDeploy.json, but I prefer keeping the parameters to a minimum, as I think it simplifies the deployment for the users.

Finally, for AzureFunctionUrlListUrl and AzureFunctionUrlShortenerUrl, I needed to retrieve the URLs of the Azure Functions with their security token, because they are secured. I do that by concatenating different parts:

Component by component:

  • Beginning of the URL: 'https://'
  • Reference to the Function App, returning the value of its hostname: reference(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '2018-02-01').hostNames[0]
  • The Function targeted, in this case UrlList, plus the start of the querystring to pass the code (aka the security token): '/api/UrlList?code='
  • The new listkeys function to retrieve the default Function key: listkeys(concat(resourceId('Microsoft.Web/sites/', variables('funcAppName')), '/host/default/'),'2016-08-01').functionKeys.default

Conditional parts


Now that the second ARM template can be deployed, let's add a condition so it only gets deployed when we want it to. This is very simple: we need to add a condition property.

{
    "name": "FrontendDeployment",
    "type": "Microsoft.Resources/deployments",
    "condition": "[not(equals(parameters('frontend'), 'none'))]",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', variables('funcAppName'))]",
        "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('funcAppName'), 'web')]"
    ]
}

In this case, if the value of the parameter is different than none, the nested template will be deployed. When a condition ends up being false, the entire resource is ignored during the deployment. How simple or complex your conditions are... that's your choice!

Happy deployment. :)




How I Build a Budget-friendly URL Shortener Easy to Deploy and Customized


Available in French here

I don't know about you, but I share links/URLs very often. And a lot of the time it's in videos, so they need to be short and easy to remember. Something like https://c5m.ca/project is better than a random string (aka a GUID). And this is how I started a project to build a URL shortener. I wanted it to be budget-friendly, easy to deploy, and customizable.

In this post, I will share how I build it, how you can use it, and how you can help!

Azure Url Shortener

How I build it, with the community


This tool was built during live coding sessions streamed on Twitch (all videos are available in my YouTube archive). It's composed of two parts: a serverless backend leveraging Azure Functions & Azure Storage, and a frontend of your choice.

The backend is composed of a few Azure Functions that act as an on-demand HTTP API. They only consume resources when they are called. They are written in .NET Core, C# to be specific. When publishing this post, there are four functions:

  • UrlShortener: To create a short URL.
  • UrlRedirect: That's the one called when a short link is used. An Azure Function Proxy forwards all calls from the root to it.
  • UrlClickStats: Return the statistic for a specific URL.
  • UrlList: Return the list of all URLs created.

All the information, like the long URL, short URL, and click count, is saved in an Azure Storage Table.
And that's it. Super light, very cost-efficient. If you are curious about the price, I'll put references in the footnotes.

The frontend could be anything that can make HTTP requests. Right now in the project, I explain how to use a tool called Postman, and there is also a very simple interface, already done, that you can easily deploy.



This simple interface is of course protected, and it gives you options to see all URLs and create new ones.

How YOU can use it


All the code is available on GitHub, and it's deployable with a one-click button!

Deploy to Azure

This will deploy the backend in your Azure subscription in a few minutes. If you don't own an Azure subscription already, you can create your free Azure account today.

Then you will probably want an interface to create your precious URLs. Once more, in the GitHub repository there is a list of available admin interfaces ready to be used. The Admin Blazor Website is currently the most friendly and can also be deployed in one click.

How You can help and participate


Right now, there is really only one interface (and some instructions on how to use Postman to make the HTTP calls). But AzUrlShortener is an open-source project, meaning you can participate. Here are some suggestions:

  • Build a new interface (in the language of your choice)
  • Improve current interface(s) with
    • logos
    • designs
    • Better UI 🙂
  • Register bugs in GitHub
  • Make feature requests
  • Help with documentation/translation

What's Next


Definitely come see the GitHub repo https://github.com/FBoucher/AzUrlShortener, click those deploy buttons. On my side, I will continue to add more features and make it better. See you there!


Video version





References

How to know how much your application is consuming on Azure

Are you worried about your billing when deploying a new application? Or afraid that you will receive an invoice from your cloud provider at the end of the month that puts you in a bad situation? You are not alone, and this is why you should definitely look at Azure Cost Management.



In the video below, I explain how to use Azure Cost Management to see the actual cost and forecast for a specific application in the cloud, and to build a dashboard with your favorite information. Note that I said "cloud" because Azure Cost Management can monitor both Azure and AWS resources. That's perfect for when we are "in" the portal, and that's why I will also show how you can create budgets and set alerts, so you can track your resources automatically.



Useful Links


~

Reading Notes #413


Cloud

Programming

Miscellaneous

☁️

Reading Notes #407


Every Monday, I share my reading notes. Those are the articles, blog posts, podcast episodes, and books that caught my interest during the week. It's a mix of current news and what I consumed.

Cloud

  • Generating Images with Azure Functions (Aaron Powell) - Brilliant usage of Azure Functions, and it's the first one I've seen in F#! All the code is available on GitHub, definitely worth the detour.

Programming

  • Use MongoDB in Your C# ASP.NET Apps (Terje Kolderup) - This is a very complete tutorial that shows all the code and explains step by step how to add, configure, and use MongoDB.

Podcasts


Miscellaneous

  • Is that position available for remote? (Mark Downie) - A very interesting post that, I think, explains well the 'behind the scenes' of the responses we can get when asking the remote question.
~