
Reading Notes #536 (Book edition)

Good first Monday of 2023!
As a way to close the Reading Notes of 2022, I changed today's post a little by sharing the books I read during the year. I didn't include the novels and thrillers, as I'm not sure they would be relevant in this kind of post. However, if you are interested, find me at goodreads.com/fboucheros.

Let's make this new year a better one, and as usual, I'm always happy to receive your blog posts and recommendations.

Books

Listen Like You Mean It: Reclaiming the Lost Art of True Connection  
Author: Ximena Vengoechea

I liked the stories and examples in this book. It's 'easy' to say 'listen carefully', but demonstrating how an opportunity was missed, and how to correct it, is far more complex. Nice book.



Making Numbers Count: The Art and Science of Communicating Numbers
Authors: Chip Heath, Karla Starr

I really enjoyed reading this book. Data is in the middle of everything, and when communicating that information, it's often translated into numbers. To have an impact, the way we communicate those numbers is crucial. A book to read, and probably to read again after a while to refresh our memory.




Switch: How to Change Things When Change Is Hard
Authors: Chip Heath, Dan Heath

This book oversimplifies a really complex topic, but I believe that was important to make each point clear. A nice read; it will help you be less afraid of making changes.

Author: Cal Newport

A nice book that mixes great advice with more obvious tips. Nevertheless, it was interesting to revisit what I already knew and to learn new things. With so many distractions around us, focusing is important.






Slaying the Dragon: A Secret History of Dungeons & Dragons 
Author: Ben Riggs

I'm a big fan of D&D, and this book is a great read. It is a history of the game and the people behind it, and an excellent read for anyone who is interested in the game or its history. I had no idea about all the "drama", and honestly, I didn't know what it takes to create those books/games either.

It Doesn't Have to Be Crazy at Work

Authors: Jason Fried, David Heinemeier Hansson

The important message here is that we can achieve a lot more with less. The book shares many examples and breaks them down into simple principles. I recommend it to anyone who is looking to improve their work-life balance.





Can't Hurt Me: Master Your Mind and Defy the Odds
Author: David Goggins

The mind is incredible. Goggins pushed it not to eleven... but to twelve! We all encounter difficulties in our lives, and the lesson here is that the real limit is way past what we think. Great book, and he just published a new one.




Anti-Time Management: Reclaim Your Time and Revolutionize Your Results with the Power of Time Tipping
Author: Richie Norton

The first time I heard of this book was on a podcast, and it seemed very promising. Even though I didn't fall in love with it, the ideas are interesting and definitely worth the time to learn.





~Frank


Reading Notes #464


Every Monday, I share my "reading notes". This is a curated list of all the articles, blog posts, and books that caught my attention during the week. It's a mix of current news and what I consumed.

Cloud


Programming


Miscellaneous



~Frank


Reading Notes #461

Every Monday, I share my "reading notes". This is a curated list of all the articles, blog posts, and books that caught my attention during the week. It's a mix of current news and what I consumed.

The suggestions of the week

Cloud

• What Is Azure Functions (Mahesh Chand) - A great, simple post that explains what an Azure Function is and how to create, debug, and deploy one.

Programming

Podcast

• Épisode 11 - La famine en Ukraine [Episode 11 - The Famine in Ukraine] (Les Pires Moments de l'Histoire) - OMG! I have nothing else to say. I knew that part of history was dark and complex... but I knew nothing. Great episode.
• 633 - How to Use Rejection to Your Advantage (Modern Mentor) - In our lives, we will encounter many nos and rejections. This episode talks about how you can use those detours to learn more and transform them into opportunities.

Miscellaneous


If you think you have interesting content, share it!

~Frank


PowerBI and Microsoft Azure Consumption

Recently, I needed to check and compare Azure consumption for a client. What a repetitive task: download the csv files from the Azure billing portal, open them in Excel, then clean/merge/customize them… Wouldn't it be great if this could be done easily? Well, it can! Power BI is the perfect tool for that (and a lot more). In this post, I will explain how I created my Power Query and solved the different problems I encountered along the way.

The Goal


I want Power BI to read (dynamically) all the csv files in a folder and update all my charts and graphs, so I can share them easily with my clients.

The Tools


To create Power Queries, you can use the new Power BI Desktop, available online for free, or Excel. With Excel 2016, the Power Query editor tools are included; for previous versions, you need to install the Microsoft Power Query for Excel add-in. In both cases, many tutorials explain how to get started with these tools (see the references at the end of this post).

The Problem


Creating our query should be pretty straightforward, since we can create a Power Query by selecting a folder as a source.

(Screenshot: automatic csv import from a folder)

The problem is that our file contains three types of records: Provisioning Status, Statement, and Daily Usage. These "tables" are very different and don't have the same number of columns. This is why, when we try to merge them, we get errors.
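
Roughly, each billing csv file is laid out like this (a simplified sketch; the Statement and Daily Usage headers come from the queries below, and the Provisioning Status columns are omitted because they are not used here):

   Provisioning Status
   <provisioning header row and rows>

   Statement
   Billing Period,Meter Category,Meter Sub-category,...,Rate,Value
   <statement rows>

   Daily Usage
   Usage Date,Meter Category,Meter Id,...,Service Info 1,Service Info 2
   <daily usage rows>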

(Screenshots: the failing "expand all" step and the resulting import errors)

The Solution


The way to solve this problem is to create a function that parses one file to extract one recordset, and then call that function for every file in the folder.

Note:
The simplest way to get started is to work with one file, then convert it to a function. The way to do that is to replace the path of the csv file with a variable that will be passed as a parameter: (filePath) => (see the sketch just below this note).
To keep the code as simple as possible, I kept the function and the looping query separated, but they can be merged into only one query.
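
As a minimal sketch of that conversion (the file name sample.csv is a placeholder, not one of the real billing files):

 // -- Before: a query hard-coded to one file ---------------------
 let
    Source = Table.FromList(Lines.FromBinary(File.Contents("C:\Azure_Consumption_demo\CSV_v2\sample.csv")), Splitter.SplitByNothing())
 in
    Source

 // -- After: the same logic wrapped as a function; the path is now a parameter
 (filePath) =>
 let
    Source = Table.FromList(Lines.FromBinary(File.Contents(filePath)), Splitter.SplitByNothing())
 in
    Source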

      Extract “Daily Usage”


Here are the queries to extract the Daily Usage (the third recordset) from the csv file, with some description of the code.
 // -- fcnCleanOneCSV_v2 ----------------------------------------

 (filePath) =>
 let
    // Load the raw file content as a one-column table (one row per line)
    fnRawFileContents = (fullpath as text) as table =>
       let
          Value = Table.FromList(Lines.FromBinary(File.Contents(fullpath)), Splitter.SplitByNothing())
       in
          Value,

    Source = fnRawFileContents(filePath),
    // Locate the row that marks the beginning of the Daily Usage recordset
    #"Daily Usage Row" = Table.SelectRows(Source, each Text.Contains([Column1], "Daily Usage")),
    #"DailyPosition" = Table.PositionOf(Source, #"Daily Usage Row"{0}),
    // Keep only the rows after the marker, then promote the first one as headers
    #"TopRemoved" = Table.Skip(Source, (DailyPosition + 1)),
    #"Result" = Table.PromoteHeaders(TopRemoved)
 in
    Result
The first part loads the content of the file as a one-column table. Then DailyPosition stores the position where the Daily Usage data starts. This value is used in Table.Skip(Source, (DailyPosition + 1)) to keep only the rows after it; since Daily Usage is the last recordset in the file, this works perfectly.
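To see the skip-and-promote mechanics in isolation, here is a tiny self-contained sketch (the rows are made up):

 // -- Toy example of the skip-and-promote mechanics --------------
 let
    // A one-column table simulating the raw file content
    Source = Table.FromList({"Provisioning Status", "...", "Daily Usage", "Usage Date,Meter Category", "2016-01-01,Storage"}, Splitter.SplitByNothing()),
    // Locate the marker row, then its position in the table
    MarkerRow = Table.SelectRows(Source, each Text.Contains([Column1], "Daily Usage")),
    MarkerPosition = Table.PositionOf(Source, MarkerRow{0}),
    // Keep only the rows after the marker; the first one becomes the header
    Result = Table.PromoteHeaders(Table.Skip(Source, MarkerPosition + 1))
 in
    Result

Note that at this point each row is still a single comma-separated string, which is why the looping query below has to split the column afterwards.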
 //== Process Folder CSV_v2 for Daily Usage ==============================

 let
    // Get all the files in the folder and build the full path of each one
    Source = Folder.Files("C:\Azure_Consumption_demo\CSV_v2\"),
    MergedColumns = Table.CombineColumns(Source,{"Folder Path", "Name"},Combiner.CombineTextByDelimiter("", QuoteStyle.None),"Merged"),
    RemovedOtherColumns = Table.SelectColumns(MergedColumns,{"Merged"}),
    // Call the parsing function on every file
    #"Results" = Table.AddColumn(RemovedOtherColumns , "GetCsvs", each fcnCleanOneCSV_v2([Merged])),
    #"Removed Columns" = Table.RemoveColumns(Results,{"Merged"}),
    // The parsed tables still have a single comma-joined column; expand it
    #"Expanded GetCsvs" = Table.ExpandTableColumn(#"Removed Columns", "GetCsvs", {"Usage Date,Meter Category,Meter Id,Meter Sub-category,Meter Name,Meter Region,Unit,Consumed Quantity,Resource Location,Consumed Service,Resource Group,Instance Id,Tags,Additional Info,Service Info 1,Service Info 2"}, {"Usage Date,Meter Category,Meter Id,Meter Sub-category,Meter Name,Meter Region,Unit,Consumed Quantity,Resource Location,Consumed Service,Resource Group,Instance Id,Tags,Additional Info,Service Info 1,Service Info 2"}),

    // Split that single column into the sixteen real columns
    #"Demoted Headers" = Table.DemoteHeaders(#"Expanded GetCsvs"),
    #"Split Column by Delimiter" = Table.SplitColumn(#"Demoted Headers","Column1",Splitter.SplitTextByDelimiter(","),{"Column1.1", "Column1.2", "Column1.3", "Column1.4", "Column1.5", "Column1.6", "Column1.7", "Column1.8", "Column1.9", "Column1.10", "Column1.11", "Column1.12", "Column1.13", "Column1.14", "Column1.15", "Column1.16"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"Column1.1", type text}, {"Column1.2", type text}, {"Column1.3", type text}, {"Column1.4", type text}, {"Column1.5", type text}, {"Column1.6", type text}, {"Column1.7", type text}, {"Column1.8", type text}, {"Column1.9", type text}, {"Column1.10", type text}, {"Column1.11", type text}, {"Column1.12", type text}, {"Column1.13", type text}, {"Column1.14", type text}, {"Column1.15", type text}, {"Column1.16", type text}}),
    #"Promoted Headers" = Table.PromoteHeaders(#"Changed Type"),
    // Clean-up: set the final column types and remove the double quotes
    #"Changed Type1" = Table.TransformColumnTypes(#"Promoted Headers",{{"Usage Date", type date}, {"Meter Region", type text}}),
    #"Replaced Value" = Table.ReplaceValue(#"Changed Type1","""","",Replacer.ReplaceText,{"Meter Category", "Meter Id", "Meter Sub-category", "Meter Name", "Meter Region", "Unit", "Resource Location", "Consumed Service", "Instance Id", "Tags", "Additional Info", "Service Info 1", "Service Info 2"}),
    #"Changed Type2" = Table.TransformColumnTypes(#"Replaced Value",{{"Consumed Quantity", type number}})
 in
   #"Changed Type2"
First, we get all the files in the folder, then combine columns to get the full path of each file. Each path is then passed to the function defined previously. With the command Table.SplitColumn, we rebuild the result as a table with multiple columns.
The rest of the query cleans up the result by changing the columns' types and removing undesired characters.


      Extract “Statement”


To get the Statement recordset, it's the same thing, except that we use Table.Range, since the rows we are looking for are between Provisioning Status and Daily Usage.
//== fcnGetStatement ==========================================

(filePath) =>
let
   // Load the raw file content as a one-column table (one row per line)
   fnRawFileContents = (fullpath as text) as table =>
      let
         Value = Table.FromList(Lines.FromBinary(File.Contents(fullpath)), Splitter.SplitByNothing())
      in
         Value,

    Source = fnRawFileContents(filePath),
    // Positions of the two markers that surround the Statement recordset
    #"Daily Usage Row" = Table.SelectRows(Source, each Text.Contains([Column1], "Daily Usage")),
    #"DailyPosition" = Table.PositionOf(Source, #"Daily Usage Row"{0}),
    #"Statement Row" = Table.SelectRows(Source, each Text.Contains([Column1], "Statement")),
    #"StatementPosition" = Table.PositionOf(Source, #"Statement Row"{0}),
    // Keep only the rows between the two markers, then promote the headers
    #"SelectedRows" = Table.Range(Source, (StatementPosition + 1), (DailyPosition - StatementPosition) - 2),
    #"Result" = Table.PromoteHeaders(SelectedRows)
in
    Result
      And once again we loop through every file and do some clean-up.
//== Query Statements ========================================

let
    // Get all the files in the folder and build the full path of each one
    Source = Folder.Files("C:\Azure_Consumption_demo\CSV_v2\"),
    MergedColumns = Table.CombineColumns(Source,{"Folder Path", "Name"},Combiner.CombineTextByDelimiter("", QuoteStyle.None),"Merged"),
    RemovedOtherColumns = Table.SelectColumns(MergedColumns,{"Merged"}),
    // Call the Statement parsing function on every file
    #"Results" = Table.AddColumn(RemovedOtherColumns , "GetCsvs", each fcnGetStatement([Merged])),
    #"Removed Columns" = Table.RemoveColumns(Results,{"Merged"}),
    #"Expanded GetCsvs" = Table.ExpandTableColumn(#"Removed Columns", "GetCsvs", {"Billing Period,Meter Category,Meter Sub-category,Meter Name,Meter Region,SKU,Unit,Consumed Quantity,Included Quantity,Within Commitment,Overage Quantity,Currency,Overage,Commitment Rate,Rate,Value"}, {"Billing Period,Meter Category,Meter Sub-category,Meter Name,Meter Region,SKU,Unit,Consumed Quantity,Included Quantity,Within Commitment,Overage Quantity,Currency,Overage,Commitment Rate,Rate,Value"}),

    // Split the comma-joined column into the real columns and remove the double quotes
    #"Demoted Headers" = Table.DemoteHeaders(#"Expanded GetCsvs"),
    #"Split Column by Delimiter" = Table.SplitColumn(#"Demoted Headers","Column1",Splitter.SplitTextByDelimiter(",", QuoteStyle.Csv),{"Column1.1", "Column1.2", "Column1.3", "Column1.4", "Column1.5", "Column1.6", "Column1.7", "Column1.8", "Column1.9", "Column1.10", "Column1.11", "Column1.12", "Column1.13", "Column1.14", "Column1.15", "Column1.16"}),
    #"Promoted Headers" = Table.PromoteHeaders(#"Split Column by Delimiter"),
    #"Replaced Value" = Table.ReplaceValue(#"Promoted Headers","""","",Replacer.ReplaceText,{"Meter Category", "Meter Sub-category", "Meter Name", "Meter Region", "SKU", "Unit"})
in
    #"Replaced Value"

Once all that is done… the fun can begin!
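
For example, here is a minimal sketch of a first analysis: the total consumption per meter category, ready to feed a chart. It assumes the Daily Usage loop query above was saved under the name DailyUsage (a name I made up for this sketch).

 //== Example: total consumption per Meter Category (sketch) =====

 let
    // Assumes the "Process Folder CSV_v2 for Daily Usage" query was saved as DailyUsage
    Source = DailyUsage,
    // Sum the consumed quantity for each meter category
    Grouped = Table.Group(Source, {"Meter Category"}, {{"Total Consumed", each List.Sum([Consumed Quantity]), type number}}),
    // Biggest consumers first
    Sorted = Table.Sort(Grouped, {{"Total Consumed", Order.Descending}})
 in
    Sorted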




      References