
Be more Productive by using Inline Code in your Azure Logic App

In an Azure Logic Apps project I am working on, I needed to manipulate strings. I could have created an API or an Azure Function, but the code is very simple and doesn't use any external libraries. In this post, I will show you how to use the new Inline Code action to execute your code snippet directly inside your Logic App.

Quick Context


The Logic App will read a file from my OneDrive (it will also work with Dropbox, Box, etc.). Here is an example of the file:

Nice tutorial that explains how to build, using postman, an efficient API.[cloud.azure.postman.tools]

The goal is to extract tags, contained between the square brackets, from the text.

Logic App: Get File Content


From the Azure Portal, create a new Logic App by clicking the big green "+" button in the top left corner and searching for Logic App.

For this demo, I will use the Interval as a trigger because I will execute the Logic App manually.

The first step will be a Get File Content action from the OneDrive connector. Once you have authorized Azure to access your OneDrive folder, select the file you want to read. For me, it's /dev/simpleNote.txt

Integration Account


To access the workflowContext, the Azure Logic App requires an Integration Account, so the next step is to create one. Save the current Logic App, and click on the big "+" button in the top right corner. This time, search for integration. Select Integration Account, and complete the form to create it.


We now need to assign it to our Logic App. From the Logic App blade, in the options list select Workflow Settings. Then select your integration account, and don't forget to save!

Logic App: Inline Code


To add the action at the end of your workflow, click the New step button. Search for Inline Code, and select the action Execute JavaScript Code.


Before copy-pasting the code into the new Inline Code action, let's have a quick look at it.

var note = "" + workflowContext.actions.Get_file_content.outputs.body;
var posTag = note.lastIndexOf("[") + 1;
var cleanNote = {};

if (posTag > 0) {
    cleanNote.tags = note.substring(posTag, note.length - 1);
    cleanNote.msg = note.substring(0, posTag - 1);
}
return cleanNote;

On the first line, we assign the content of the Get_file_content outputs to a variable named note. We access it through the workflowContext, which gives access to the trigger and the actions. To find the name of an action, take its name from the designer and replace the spaces with the underscore character "_".
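
To make that a little more concrete, here is a minimal sketch of the kind of values you can read from the workflowContext (the action name comes from this demo; adjust it to your own workflow):

// "Get file content" in the designer becomes Get_file_content in code
var fileContent = workflowContext.actions.Get_file_content.outputs.body;
// the trigger's outputs are also available
var triggerOutputs = workflowContext.trigger.outputs;
return { length: ("" + fileContent).length };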


You can also switch to Code View and see the names of all the components in the JSON code.
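
For example, the Inline Code action of this demo shows up in Code View roughly like this (trimmed; the exact shape may differ slightly in your workflow):

"Cleaning_Note": {
    "type": "JavaScriptCode",
    "inputs": {
        "code": "var note = \"\" + workflowContext.actions.Get_file_content.outputs.body; ..."
    },
    "runAfter": {
        "Get_file_content": [ "Succeeded" ]
    }
}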

Logic App: Use Inline Code Result


Of course, you can use the output of your Inline Code with other steps. You just need to use the Result from the dynamic content menu.


If for some reason the dynamic content list doesn't contain your Inline Code result, you can always add the expression directly: @body('Cleaning_Note')?['body'].


Your Logic App should now look like this:


Verdict


The Inline Code action is very promising. Right now, it's limited to JavaScript and cannot access variables or loops. However, for simple code that doesn't require any references, it's easier to maintain and deploy. You can learn more about exactly what is and isn't covered here.
And it works, as this result shows.


Do you prefer watching instead of reading?


I also have a video of this post if you prefer.





How to make your deployments successful every time

You are done with your code, and you are ready to deploy it to Azure. You execute your PowerShell or Bash script and BOOM! You get the error message saying that this name is already taken. In this post, I will show you a simple way to look like a boss and make your deployments work every time.

____ with given name ____ already exists.

The tricks others use


You could try to add a digit at the end of the resource name (ex: demo-app1, demo-app2, demo-app123...), but that's not really professional. You could create a random string and append it to the name. Yes, that will work... once. If you try to redeploy your resources, that value will change, so it will never be the same.
The solution would be to have a unique string that stays constant for our environment.

The solution


The solution is to use the uniqueString() function that is part of the Azure Resource Manager (ARM) template language. If we look in the documentation, uniqueString creates a deterministic hash string based on the values provided as parameters. Let's see a quick example with an ARM template that deploys a website named demo-app.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "variables": {
        "webAppName": "demo-app"
    },
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2015-08-01",
            "name": "[variables('webAppName')]",
            "location": "[resourceGroup().location]",
            "tags": {
                "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/frankdemo-plan')]": "Resource",
                "displayName": "[variables('webAppName')]"
            },
            "dependsOn": [
                "Microsoft.Web/serverfarms/frankdemo-plan"
            ],
            "properties": {
                "name": "[variables('webAppName')]",
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms/', 'frankdemo-plan')]"
            }
        },
        {
            "type": "Microsoft.Web/serverfarms",
            "apiVersion": "2016-09-01",
            "name": "frankdemo-plan",
            "location": "[resourceGroup().location]",
            "sku": {
                "name": "F1",
                "capacity": 1
            },
            "tags": {
                "displayName": "frankdemo-plan"
            },
            "properties": {
                "name": "frankdemo-plan"
            }
        }
    ],
    "outputs": {}
}

If you try to deploy this template, you will have an error because the name demo-app is already taken... no surprise here.

Let's create a new variable named suffix, using the resource group Id and location as values. Then we just need to append this value to our name with the concat() function.

    "variables": {
        "suffix": "[uniqueString(resourceGroup().id, resourceGroup().location)]",
        "webAppName": "[concat('demo-app', variables('suffix'))]"
    }

It's that simple! Now, every time you deploy, a unique string will be added to your resource name. That string will always be the same for a given Resource Group and Location.

Because some resource types are more restrictive than others, you may need to adapt your new name. Maybe the name of your resource plus this thirteen-character hash will be too long... No problem, you can easily make it shorter and all lowercase just by using substring() and toLower().

 "parameters": {},
    "variables": {
        "suffix": "[substring(toLower(uniqueString(resourceGroup().id, resourceGroup().location)),0,5)]",
        "webAppName": "[concat('demo-app', variables('suffix'))]"
    }

Voila! Now, by using an ARM template, you can deploy and redeploy without any problem, reproducing the same solution you built. To learn more about ARM templates, you can jump into the documentation, where you will find samples, step-by-step tutorials, and more.


If you have a specific question about ARM templates or if you would like to see more tips like this one, don't hesitate to ask in the comments section or reach out on social media!

In a video, please!


I also have a video of this post if you prefer.




Image by StartupStockPhotos from Pixabay

Reading Notes #377

Cloud


Programming


Miscellaneous


Books



Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones (James Clear) - An excellent book that is very pleasant to read. I really appreciated the way things are broken into tiny pieces. I don't think this book reinvented molecular physics, but by cutting and dissecting our habits that way, it's hard to think you can fail. It makes it easy to get started right now; I even started new habits before finishing the book!







    Deploy automatically a static website into an Azure Blob storage with Azure DevOps Pipeline

    Static websites are lightning fast, and running them inside an Azure Blob Storage instead of a WebApp is incredibly economical (less than $1/month). Does that mean you need to do everything manually? Absolutely not! In a previous post, I explained how to automatically generate your static website using a Build Pipeline inside Azure DevOps. In this post, let's complete the CI-CD by creating a Release Pipeline to deploy it.

    The Azure Resource Manager (ARM) Template


    First things first. If we want our release pipeline to deploy our website to Azure, we first need to be sure our resources are available "up there." The best way to do this is by using an Azure Resource Manager (ARM) template. I will use the same project started in the previous post; feel free to adapt it to your structure or copy it from there.

    Create a new file named deploy.json in the deployment folder. We need a simple storage account.

    {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "StorageName": {
                "type":"string",
                "defaultValue": "cloudenfrancaisv2",
                "maxLength": 24
            }
        },
        "variables": {},
        "resources": [
            {
                "type": "Microsoft.Storage/storageAccounts",
                "apiVersion": "2018-07-01",
                "name": "[parameters('StorageName')]",
                "location": "[resourceGroup().location]",
                "tags": {
                    "displayName": "[parameters('StorageName')]"
                },
                "sku": {
                    "name": "Standard_LRS"
                },
                "kind": "StorageV2"
            }
        ],
        "outputs": {}
    }
    

    I used a parameter (StorageName) to define the name of the storage account. This way, I could have multiple pipelines deploying into different storage accounts.

    Now, to make the ARM template accessible to the release pipeline, we also need to publish it. The easiest way to do that is to add another CopyFiles task to our azure-pipeline.yml. Add this task just before the PublishBuildArtifacts task.

    - task: CopyFiles@2
      displayName: 'Copy deployment content'
      inputs:
        SourceFolder: '$(Build.SourcesDirectory)/deployment'
        contents: '**\*'
        targetFolder: $(Build.ArtifactStagingDirectory)/deployment
        cleanTargetFolder: true
    

    Once you commit and push these changes, it will trigger a build. When done, the ARM template will be available, and we will be able to start working on the release pipeline.

    The Release Pipeline


    Navigate to the DevOps project created in the previous post. This time, create a new Release Pipeline. When asked, select an empty template; we will pick the tasks we need manually.

    First, we need to define the trigger and where our artifacts are. Click on Add an artifact, on the left of the screen. Select the build pipeline, and let's use the latest version of the artifact for our deployment.

    To get continuous deployment, you need to enable it by clicking on the lightning bolt on the artifact and setting the trigger to Enabled.

    Now let's select our tasks. Click on the "+" sign to add new tasks. We need three of these: Azure Resource Group Deployment, Azure CLI, and Azure File Copy.



    Task 1 - Azure Resource Group Deployment


    The first one will be an Azure Resource Group Deployment. It will be used to deploy our ARM template and make sure that the resources are available in Azure.

    To configure the ARM deployment, we need to select the Azure subscription and authorize the pipeline to access it. Then you will need to specify the name of the resource group you will be deploying into, its location, and finally point to the linked ARM template.
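
    For reference, here is roughly what this task looks like in YAML form. It's only a sketch: the service connection, resource group, location, and template path below are placeholders to adapt to your environment.

    - task: AzureResourceGroupDeployment@2
      displayName: 'Deploy the ARM template'
      inputs:
        azureSubscription: 'my-azure-service-connection'   # placeholder: your service connection
        action: 'Create Or Update Resource Group'
        resourceGroupName: 'frankdemo-rg'                  # placeholder: your resource group
        location: 'East US'
        templateLocation: 'Linked artifact'
        csmFile: '$(System.DefaultWorkingDirectory)/drop/drop/deployment/deploy.json'  # placeholder: path to deploy.json in your artifacts
        deploymentMode: 'Incremental'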


    Task 2 - Azure CLI


    The second one is an Azure CLI task. As I am writing this post, it's not possible to enable the static website property of a storage account from an ARM template. Therefore, we will execute an Azure CLI command to change that configuration. Once you have picked the Azure subscription, select Inline script and enter this Azure CLI command:

    az storage blob service-properties update --account-name wyamfrankdemo --static-website  --index-document index.html
    

    This will enable the static website property of the storage account named wyamfrankdemo, and set the default document to index.html.

    Task 3 - Azure File Copy


    The last task is an Azure File Copy to copy all our files from $(System.DefaultWorkingDirectory)/drop/drop/outpout to the $web container (in our Azure Blob storage). The container must be named $web; that's the name used by Azure for static websites.
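
    For reference, a rough YAML sketch of the same task would look like this (the service connection is a placeholder, and input names can vary between versions of the task):

    - task: AzureFileCopy@3
      displayName: 'Copy website to the $web container'
      inputs:
        SourcePath: '$(System.DefaultWorkingDirectory)/drop/drop/outpout'
        azureSubscription: 'my-azure-service-connection'   # placeholder: your service connection
        Destination: 'AzureBlob'
        storage: 'wyamfrankdemo'
        ContainerName: '$web'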

    Wrapping up


    Once you are done configuring the Release Pipeline, it's time to save and run it. After only a minute or two (this demo is pretty small), the blog should be available in Azure. To find your endpoint (aka URL), you can go to portal.azure.com and look at the static website property of the blob storage we just created.

    In a video, please!


    I also have a video of this post if you prefer.





    How to use Azure DevOps Pipeline and Cake to generate a static website

    I have this little website, a blog that doesn't consume much bandwidth, and I was looking to optimize it. Since Azure Blob storage is such an inexpensive resource, I thought it would be the perfect fit. I could use a static website generator to transform my markdown files into a nice-looking blog and publish that to Azure! Using an Azure DevOps pipeline, I could do all of that automatically at every "git push", without having anything installed on my machine... meaning I could write a new blog post from anywhere and still be able to update my blog.

    In this post, I will explain all the steps required to create a continuous integration and continuous deployment process to deploy a static website into Azure.

    The Goal


    The idea here is to have a folder on a local machine tracked by git. At every push, we want that change to trigger our CI-CD process. The Build Pipeline will generate the static website. The Release Pipeline will create our Azure resources and publish those artifacts.

    The Static Website


    In this post, I'm using Wyam.io as the static website generator. However, it doesn't really matter; there are tons of excellent generators available: Jekyll, Hugo, Hexo, etc. I selected Wyam because it is written in .NET and, if I eventually want to dig deeper, it will be easier for me.

    All those generators follow the same pattern: you have an input folder with all your posts and images, and an output folder that contains the generated result. You don't need to track the content of your output folder, so it is a good practice to modify the .gitignore file accordingly. As an example, here is what mine looks like.

    output/
    
    tool/
    tools/
    
    wwwroot/
    
    config.wyam.dll
    config.wyam.hash
    config.wyam.packages.xml
    

    Build Pipeline


    The build pipeline will generate our website for us. Therefore, it needs to have the generator installed. A great tool for this kind of task is Cake. What I like about that tool is that it is cross-platform, so I can use it without worrying about which OS it will run on.

    The Azure pipeline is defined in an azure-pipeline.yml file. Installing Cake should definitely be one of our first steps. To learn how to do that, navigate to the Get Started page of Cake's website, where it's explained that we need to execute a build.ps1 or build.sh script (depending on your build setup). That will install Cake and execute the build.cake file. Those bootstrapper files can be found in the GitHub repository mentioned on the website.

    On the Wyam website, in the deployment section of the documentation, you will find a sample for our required build.cake file. It looks like this:

    #tool nuget:?package=Wyam&version=2.2.0
    #addin nuget:?package=Cake.Wyam&version=2.1.3
    
    var target = Argument("target", "Build");
    
    Task("Build")
        .Does(() =>
        {
            Wyam( new WyamSettings {
                Recipe = "Blog",
                Theme = "CleanBlog"
            });        
        });
    
    Task("Preview")
        .Does(() =>
        {
            Wyam(new WyamSettings
            {
                Recipe = "Blog",
                Theme = "CleanBlog",
                Preview = true,
                Watch = true
            });        
        });
    
    RunTarget(target);
    

    The first lines install the required NuGet packages (you should definitely specify the versions). Then the script defines some tasks and runs the generation command. Create that file at the root of the website folder.

    Now let's have a look at the azure-pipeline.yml file.

    trigger:
    - master
    
    variables:
      DOTNET_SDK_VERSION: '2.1.401'
    
    pool:
      vmImage: 'VS2017-Win2016'
    
    steps:
    - task: DotNetCoreInstaller@0
      displayName: 'Use .NET Core SDK $(DOTNET_SDK_VERSION)'
      inputs:
        version: '$(DOTNET_SDK_VERSION)'
    
    - powershell: ./build.ps1
      displayName: 'Execute Cake PowerShell Bootstrapper'
    
    - task: CopyFiles@2
      displayName: 'Copy generated content'
      inputs:
        SourceFolder: '$(Build.SourcesDirectory)/output'
        contents: '**\*'
        targetFolder: $(Build.ArtifactStagingDirectory)/outpout
        cleanTargetFolder: true
    
    - task: PublishBuildArtifacts@1
      displayName: 'Publish Artifact'
      inputs:
        PathtoPublish: '$(build.artifactstagingdirectory)'
    

    The first line specifies the pipeline trigger. In our case, we will look at the master branch. Then I declare a variable to hold the .NET Core SDK version. That way, it will be easier to maintain the script in the future.

    The pool section specifies what kind of build agent is used. Here I'm using a Windows one, but I could have used Linux too (all the components are cross-platform).

    Then comes the list of steps. The first one installs .NET Core. The second step is a PowerShell command that executes our build.ps1 file. At this stage, the static website is generated into an output subfolder. The last two steps copy the content of the output folder into the ArtifactStagingDirectory and then publish it. This way, the Release Pipeline can access the artifacts.

    There is detailed information about all the commands for a YAML Azure Pipeline file in the documentation. Create your own or copy-paste this one into a new azure-pipeline.yml file under a subfolder named deployment. Once your file is created, commit and push it to GitHub or any repository.

    Navigate to Azure DevOps (dev.azure.com). Open your project, or create a new one. Now, from the left menu, click on Pipelines (the rocket icon) to create a new one. If you are using an external repository, like me, you will need to authorize Azure DevOps to access your repo.

    To configure the pipeline, since we have already created the azure-pipeline.yml file, select the Existing Azure Pipelines YAML file option and point it to our file in the deployment folder.



    It will open our YAML file. If you wish, you can update it. Run it by clicking the blue Run button in the top-right corner. Your build pipeline is done. Now, every time you push changes to your repository, that build will be triggered and generate the static website.

    (Next post in the series - Deploy automatically a static website into an Azure Blob storage with Azure DevOps Pipeline)


    In a video, please!

    I also have a video of this post if you prefer.







    How to Unzip Automatically your Files with Azure Function v2

    I published a video that explains how to unzip files without any code by using Logic Apps. However, that solution didn't work for bigger files or different archive types. This post explains how you can use an Azure Function to cover those situations. This first iteration supports "zip" files; all the code is available on my GitHub.

    Prerequisites


    To create the Azure Function, I will use the excellent Azure Functions extension for Visual Studio Code. You don't "need" it; however, it makes things very easy.


    You can easily install the extension from inside Visual Studio Code by clicking on the Extensions button in the left menu. You will also need to install the Azure Functions Core Tools.

    Creating the Function


    Once the extension is installed, you will find a new button in the left menu. It opens a new section with four options: Create New Project, Create Function, Deploy to Function App, and Refresh.


    Click on the first option, Create New Project. Select a local folder and a language; for this demo, I will use C#. This will create a few files and folders. Now let's create our function. From the extension menu, select the second option, Create Function. Create a Blob trigger named UnzipThis in the folder we just created, and select (or create) the Resource Group, Storage Account, and location in your subscription. After a few seconds, another question will pop up asking for the name of the container that our blob trigger monitors. For this demo, input-files is used.

    Once the function is created you will see this warning message.


    What that means is that, to be able to debug locally, we will need to set AzureWebJobsStorage to UseDevelopmentStorage=true in the local.settings.json file. It will look like this:

    {
        "IsEncrypted": false,
        "Values": {
            "AzureWebJobsStorage": "UseDevelopmentStorage=true",
            "FUNCTIONS_WORKER_RUNTIME": "dotnet",
            "unziptools_STORAGE": "DefaultEndpointsProtocol=https;AccountName=unziptools;AccountKey=XXXXXXXXX;EndpointSuffix=core.windows.net",
        }
    }

    Open the file UnzipThis.cs; this is our function. On the first line of the function, you can see that the Blob trigger is defined.

    [BlobTrigger("input-files/{name}", Connection = "cloud5mins_storage")]Stream myBlob

    The binding is attached to the container named input-files, from the storage account reachable by the connection "cloud5mins_storage". The real connectionString is in the local.settings.json file.
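
    For that binding to resolve locally, the Values section of local.settings.json needs an entry whose key matches the Connection name, something like this (the connection string value is a placeholder):

        "cloud5mins_storage": "DefaultEndpointsProtocol=https;AccountName=<source-account>;AccountKey=<key>;EndpointSuffix=core.windows.net"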

    Now, let's put the code we need for our demo:

    [FunctionName("Unzipthis")]
    public static async Task Run([BlobTrigger("input-files/{name}", Connection = "cloud5mins_storage")]CloudBlockBlob myBlob, string name, ILogger log)
    {
        log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name}");
    
        string destinationStorage = Environment.GetEnvironmentVariable("destinationStorage");
        string destinationContainer = Environment.GetEnvironmentVariable("destinationContainer");
    
        try{
            if(name.Split('.').Last().ToLower() == "zip"){
    
                CloudStorageAccount storageAccount = CloudStorageAccount.Parse(destinationStorage);
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobClient.GetContainerReference(destinationContainer);
                
                using(MemoryStream blobMemStream = new MemoryStream()){
    
                    await myBlob.DownloadToStreamAsync(blobMemStream);
    
                    using(ZipArchive archive = new ZipArchive(blobMemStream))
                    {
                        foreach (ZipArchiveEntry entry in archive.Entries)
                        {
                            log.LogInformation($"Now processing {entry.FullName}");
    
                            // Replace anything that is not a digit, a letter, or "-" with "-"; Azure Storage is picky about valid characters
                            string validName = Regex.Replace(entry.Name,@"[^a-zA-Z0-9\-]","-").ToLower();
    
                            CloudBlockBlob blockBlob = container.GetBlockBlobReference(validName);
                            using (var fileStream = entry.Open())
                            {
                                await blockBlob.UploadFromStreamAsync(fileStream);
                            }
                        }
                    }
                }
            }
        }
        catch(Exception ex){
            log.LogInformation($"Error! Something went wrong: {ex.Message}");
    
        }            
    }

    UPDATED: Thanks to Stefano Tedeschi who found a bug and suggested a fix.

    The source of our compressed file is defined in the trigger. To define the destination, destinationStorage and destinationContainer are used. Their values are saved in local.settings.json. Then, because this function only supports .zip files, a little validation is required.

    Next, we create an archive instance using the System.IO.Compression library, and we create references to the destination storage account, blob client, and container. It's not possible to use a second (output) binding here, because one archive file produces a variable number of extracted files. The bindings are static; therefore, we need to use the regular storage API.

    Then, for every file (aka entry) in the archive, the code uploads it to the destination storage.

    Deploying


    To deploy the function, from the Azure Function extension click on the third option: Deploy to Function App. Select your subscription and Function App name.

    Now we need to configure our settings in Azure. By default, the local settings are NOT uploaded. Once again, the extension is handy.


    Under the subscription, expand the freshly deployed Function App AzUnzipEverything, and right-click on Application Settings. Use Add New Setting to create cloud5mins_storage, destinationStorage, and destinationContainer.

    The function is now deployed and the settings are set. Now we only need to create the blob storage containers, and we will be able to test the function. You can easily do that directly from the Azure portal (portal.azure.com).
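
    If you prefer the command line over the portal, the containers can also be created with the Azure CLI. A rough sketch (the account and container names are placeholders for the accounts referenced by cloud5mins_storage and destinationStorage):

    az storage container create --name input-files --account-name <source-storage-account>
    az storage container create --name <destination-container> --account-name <destination-storage-account>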

    You are now ready to upload a file into the input-files container.

    Let's Code Together


    This first iteration only supports "zip" files. All the code is available on GitHub; feel free to use it. If you would like to see or add support for other archive types, join me on GitHub!

    In a video, please!


    I also have a video of this post if you prefer.



    I also have an extended version where I introduce the Visual Studio Code extension for Azure Functions in more depth and explain more details about Azure Functions V2.

    My journey as a technical YouTuber


    Sixty-five videos and one year later
    It's already been one year since I started sharing videos on YouTube. I've been blogging for many years and wanted to try something new, and creating videos seemed like the next logical step. That's how I decided to start sharing short videos answering technical questions about Microsoft Azure on YouTube. In this post, I will explain what I learned during the first year of my journey.

    The Beginning

    I started my YouTube channel in French. During the first two months, I published one video per week. I learned a lot during this period: how to prepare my code snippets, my files, and my screen. The biggest takeaway was that preparation is the key. The more prepared you are, the more efficient you will be in front of the camera. I also discovered that, compared to writing a blog post, you need to pay more attention, way earlier in the process, to the tiny details.

    It may sound like a cliché, but it was challenging to watch myself. But it's the best way to improve. You need to accept those mistakes and promise yourself that next time, you will do better. It's normal and okay not to be perfect.

    Let's be bilingual

    One thing I was looking forward to was being able to share my videos with all my clients, friends, and community members. However, I couldn't, because while some are perfectly comfortable in French (and only French), others only understand English. That's why I decided to record all my videos in both French and English. I knew that would mean doubling the workload, so I adapted my schedule to publish every second week.

    Doing all those videos, I found my beat, my style, the things I like and dislike. I also spent a lot of time learning the YouTube platform through books and by watching other YouTubers. It's such an incredibly resourceful community!

    A New Camera

    The software I was using, and that I'm still using today (Camtasia from TechSmith), can record both your screen and your webcam. It keeps all the tracks separated, and this is great because you can change the size and position of the webcam inset in post-production.

    My challenge was that I wanted to record both in 1080p (full HD), because during my intros and conclusions, when the webcam input fills the screen, I want good image quality. However, the software (or more likely my PC) did not let me record both the screen and the webcam input in 1080p.
    To upgrade the quality of my recordings, I decided to use my DSLR instead of my webcam. Of course, it means more work in post-production because now I need to synchronize the two video sources, but now everything is in 1080p.

    The famous algorithm

    The more I studied my statistics and read the comments (thank you all for your constructive comments, by the way), the more I understood that having all my videos in French and English on the same channel was not the best idea.

    First, for the YouTube algorithm, it was very confusing because it couldn't identify the videos' language. Therefore, it was harder for it to do its job and make recommendations.

    Secondly, and it didn't cross my mind at the beginning because I understand both languages, for a subscriber who understands only one of the languages, it was very confusing and not very pleasant to have all those "other", useless videos in the feed.

    After a very long (and hard) reflection, I decided to create a second channel and "move" all my French content to that channel. I say "move", but you cannot move content on YouTube, and that was the main reason why I was hesitating to split my channel. By creating a new channel and re-uploading my older (aka French) videos, I was losing all my stats (views, subscribers) and references.

    The effect on the English channel was immediate, as the views and subscribers exploded a week later. On the other side, my brand new French channel didn't really catch up. Nevertheless, I intend to continue creating French content. ;)

    Today's Status

    As this post gets published, I have a cumulative total of sixty-five videos online and more than a thousand subscribers. Every second week, usually on Thursday, I publish two videos on the same topic: one in French on fr.cloud5mins.com and one in English on cloud5mins.com. I have a fantastic, super supportive community, asking questions and suggesting topics.



    I'm very thankful for all that positive feedback. To help me, you can of course subscribe to my YouTube channel, or become one of my patrons at patreon.com/fboucheros

    What's Next

    I'm definitely continuing to create videos. I also plan to start streaming on Twitch, where I will build cloud solutions.

    See you soon,


    Reading Notes #343

    Cloud


    Programming


    Reading Notes #342

    Connector

    Cloud


    Programming

    • Writing a Blazor App (David Pine) - This tutorial shows how to build a simple Blazor app... and it's NOT the hello-world or todo app.

    Miscellaneous



    Reading Notes #332


    Cloud


    Programming


    Books

    • Invisible Ink: A Practical Guide to Building Stories That Resonate (Brian McDonald) - We all know it: a story is the element that gives that little extra to our posts and videos. This short book explains how to really make an effective one, focusing on the non-visual things...
      Really interesting.

      ISBN 0984178627 (ISBN13: 9780984178629)

    Reading Notes #330


    Cloud


    Programming


    Data

    • Apply a Filter to a Slicer (Mike Carlo) - Sooooo useful. If you don't know how to do it (yet), just watch the video; you'll thank me later.

    Miscellaneous



    Reading Notes #329


    Suggestion of the week



    Cloud



    Programming



    Books

    Jab, Jab, Jab, Right Hook: How to Tell Your Story in a Noisy Social World (Gary Vaynerchuk) - Great book for all of us who are trying to tell something or pass a message on social media... This is a must.




    Miscellaneous


    How to Automatically Generate Video Sub-Title in Another Language

    I recently started a French YouTube channel. Quickly, I got a message asking me to add English subtitles, and also a suggestion to leverage Azure Logic Apps and some Cognitive Services to help me with that task. I really liked the idea, so I gave it a shot. I recorded myself, and in twenty minutes I was done. Even though it was not the success I was hoping for, the application works perfectly. It's just that speaking French with a lot of English technical words was a little bit too hard for the Video Indexer. However, if you speak only one language in your video, this solution will work perfectly. In this post, I will show you how to create that Logic App with Azure Video Indexer and Cognitive Services.

    The Idea


    Once a video is dropped into a OneDrive folder (or any file system accessible from Azure), a Logic App gets triggered, uploads the file to Azure Video Indexer, generates a Video Text Track (VTT) file, and saves this new file in another folder. A second Logic App gets started, uses the Translator Text API from Azure Cognitive Services to translate the VTT file, and saves it into the final folder.

    GenerateSubTitle


    The Generation


    Before getting started, you will need to create your Video Indexer API. To do this, log in to the Video Indexer developer portal, and subscribe to the Video Indexer APIs - Production in the Products tab. You should then get your API keys.


    ApiKeyPage

    For more detail on the subscription, refer to the documentation. To see the names, parameters, and code samples for all the methods available in your new API, click on the APIs tab.

    APIDetails

    Now let's create our first Logic App. I always prefer to start with a blank template, but take whatever fits you. Any online file system trigger will do; in this case, I'm using When a file is created from OneDrive. I had some issues with the trigger: it was not always fired by a new file. I tried the When a file is modified trigger, but it didn't solve the problem. If you think you know what I was doing wrong, feel free to leave a comment :).

    The first real action is to upload the file to Azure Video Indexer. We can do that very easily by using the Upload video and index method, passing the name and content from the trigger.

    Of course, the longer the video, the longer the process, so we will need to wait. A way to do that is by adding a waiting loop. We will use the Get processing state method from the Video Indexer and loop until the status is processed. To slow down your loop, just add a wait action and set it to three or five minutes.

    When the file is completely processed, it's time to retrieve the VTT file. This is done in two simple steps. First, we get the URL by calling the Get the transcript URL method; then, with a simple HTTP GET, we download the file. The last thing we need to do is save it in a folder where our second Logic App will be watching for new drops.

    In the visual designer, the Logic App should look like this.

    LogicAppGenerateVTT


    The Translation


    The second Logic App is very short. Once again, it gets triggered by a new file in our OneDrive folder. Then it's time to call the Translator Text API from Azure Cognitive Services. Thanks to the great Logic App interface, it's very intuitive to fill in all the parameters for our call. Once we get the translation, we need to save it into our final destination.

    The Logic App should look like this.

    LogicAppTranslate


    Conclusion


    It was much easier than I expected. I really like implementing these integration projects with Logic Apps; it's so easy to "plug" all those APIs together with this interface. And yes, like I mentioned in the introduction, the result was not "great". I ran tests with videos purely in English (even with my accent) or purely in French (no mix), and the results were really good. So I think the problem is really the fact that I mix French and English. I could improve the Indexer by spending time providing files so the service could better understand my "Franglish". However, for twenty minutes of work, I'm really impressed by the way it turned out. If you have ideas on how to improve this solution, or if you have questions, feel free to leave a comment. You can also watch my French YouTube video.

    All the code is available online on Github - Cloud5VideoHelper.




    Reading Notes #296


    Cloud


    Databases


    Miscellaneous




    Reading Notes #295


    Cloud


    Programming


    Databases




    Reading Notes #266

    Suggestion of the week


    Cloud


    Programming


    Databases

    • SQL Database Query Editor available in Azure Portal (Ninar Nuemah) - I was looking for this ever since the old query tool was removed. I will probably continue to use SQL Server Management Studio or VS Code, but what a time saver when you are investigating a problem... Open a blade right from the Azure portal and voila!

    Miscellaneous

    • MVP API Intro (Daron Yöndem) - I love it! I already have a few ideas in mind, and I'm curious to see what you guys will do.


    Reading Notes #257

    Suggestion of the week

    • Microsoft Connect(); 2016 Recap (Joseph Hill) - Three full days of great content. However, if you are like me, you didn't have that much free time. Fortunately, all the presentations are available as video on demand. Even though the keynote was really good, it is still 2h30 long, so if you need a summary, this post is the place to start.

    Cloud


    Programming


    Miscellaneous



    Reading Notes #256

    Cloud

    Programming

    Miscellaneous



    Reading Notes #256


    Cloud



    Programming



    Miscellaneous