In this edition, we explore modern development's evolving landscape. From Microsoft's .NET Aspire simplifying distributed applications to AI security considerations, Git workflow optimizations, and backlog management strategies, there's something here to spark your next breakthrough.
The tech world never sleeps, and neither does innovation. Let's explore what caught my attention this week and might just spark your next big idea or solve that problem you've been wrestling with.
Identity and Access Management for .NET (Khalid Abuhakmeh) - This package looks very interesting for adding multiple handlers to an HTTP client. The first question that pops into my mind is why this isn't already in .NET; I think it should be. I'll definitely give it a try.
AI Injection Attacks (ericlaw) - Great post about the current risks when using AI and how we should do our best to protect important information.
How to Get Things Done, Stay Focused, and Be More Productive (The Mel Robbins Podcast) - This compelling episode (available in audio and video) takes a fresh approach to productivity. Having read their books, I found the conversation particularly engaging and highly recommend it.
Welcome to another edition of my weekly reading notes! This week's collection brings together some fascinating developments across the tech landscape. From the intricacies of building cross-platform .NET tools to impressive AI breakthroughs like Warp's stellar performance on SWE-bench, there's plenty to explore. I've also discovered some thought-provoking content about leadership, product management, and the art of meaningful communication. Whether you're interested in the latest AI tools, looking for career insights, or simply want to stay current with industry trends, this week's selection has something valuable for every developer and tech professional.
Programming
Using and authoring .NET tools (Andrew Lock) - Interesting post that shares the behind-the-scenes work of building a tool for multiple targets and the challenges that it represents. It also covers the new capabilities coming in .NET 10.
Design at GitHub with Diana Mounter (.NET Rocks!) - Very interesting discussion about so many things: career, the balance between design and engineering, GitHub, and so much more.
How to Lead with Value with Dr. Morgan Depenbusch - I really enjoyed this episode about the little things we can do to shift the way we interact with others.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
I wanted to kick the tires on the upcoming .NET 10 C# script experience and see how far I could get calling Reka’s Research LLM from a single file, no project scaffolding, no .csproj. This isn’t a benchmark; it’s a practical tour to compare ergonomics, setup, and the little gotchas you hit along the way. I’ll share what worked, what didn’t, and a few notes you might find useful if you try the same.
All the sample code (and a bit more) is here: reka-ai/api-examples-dotnet · csharp10-script. The scripts run a small “top 3 restaurants” prompt so you can validate everything quickly.
We’ll make the same request in three ways:
OpenAI SDK
Microsoft.Extensions.AI for OpenAI
Raw HttpClient
What you need
The C# "script" feature used below ships with the upcoming .NET 10 and is currently available in preview. If you prefer not to install a preview SDK, you can run everything inside the provided Dev Container or on GitHub Codespaces. I include a .devcontainer folder with everything set up in the repo.
Set up your API key
We are talking about APIs here, so of course, you need an API key. The good news is that it's free to sign up with Reka and get one! It's a 2-click process; more details are in the repo. The API key is then stored in a .env file, and each script loads environment variables using DotNetEnv.Env.Load(), so your key is picked up automatically. I went this way instead of using dotnet user-secrets because I thought it's how it would be done in a CI/CD pipeline or a quick script.
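For reference, here's a minimal sketch of that setup at the top of a script (the package version and variable name are assumptions; see the repo for the exact code):

#:package DotNetEnv@3.1.1

// Load variables from the .env file sitting next to the script
DotNetEnv.Env.Load();

// Fail fast if the key is missing
var REKA_API_KEY = Environment.GetEnvironmentVariable("REKA_API_KEY")
    ?? throw new InvalidOperationException("REKA_API_KEY is not set.");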
Run the demos
From the csharp10-script folder, run any of these scripts. Each line is an alternative:
dotnet run 1-try-reka-openai.cs
dotnet run 2-try-reka-ms-ext.cs
dotnet run 3-try-reka-http.cs
You should see a short list of restaurant suggestions.
OpenAI SDK with a custom endpoint
Reka's API uses the OpenAI format; therefore, I thought of using the OpenAI NuGet package. To reference a package in a script, you use the #:package [package name]@[package version] directive at the top of the file. Here is an example:
#:package OpenAI@2.3.0

using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// ... (REKA_API_KEY is loaded from the .env file, as shown above)

var baseUrl = "http://api.reka.ai/v1";

// Point the OpenAI client at Reka's OpenAI-compatible endpoint
var openAiClient = new OpenAIClient(new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
    Endpoint = new Uri(baseUrl)
});

// Pick the Reka model, then use the chat client as usual
var client = openAiClient.GetChatClient("reka-flash-research");

string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";

var completion = await client.CompleteChatAsync(
    new List<ChatMessage>
    {
        new UserChatMessage(prompt)
    }
);

var generatedText = completion.Value.Content[0].Text;
Console.WriteLine($" Result: \n{generatedText}");
The rest of the code is straightforward. You create a chat client, specify the Reka API URL, select the model, and then send a prompt. It works just as expected. However, not everything was perfect; before I share more about that part, let's talk about Microsoft.Extensions.AI.
Microsoft.Extensions.AI for OpenAI
Another common way to use LLMs in .NET is to use one of the Microsoft.Extensions.AI NuGet packages. In our case, Microsoft.Extensions.AI.OpenAI was used.
#:package Microsoft.Extensions.AI.OpenAI@9.8.0-preview.1.25412.6

using System.ClientModel;
using Microsoft.Extensions.AI;
using OpenAI;
using OpenAI.Chat;

// .... (REKA_API_KEY is loaded from the .env file, as shown above)

var baseUrl = "http://api.reka.ai/v1";

// Wrap the OpenAI ChatClient in Microsoft.Extensions.AI's IChatClient abstraction
IChatClient client = new ChatClient("reka-flash-research", new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
    Endpoint = new Uri(baseUrl)
}).AsIChatClient();

string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";
Console.WriteLine(await client.GetResponseAsync(prompt));
As you can see, the code is very similar. Create a chat client, set the URL and the model, add your prompt, and it works just as well.
That's two ways to use the Reka API with different SDKs, but maybe you would prefer to go "SDKless". Let's see how to do that.
Raw HttpClient calling the REST API
Without any SDK to help, there are a few more lines of code to write, but it's still pretty straightforward. Let's see the code:
using System.Text;
using System.Text.Json;

// REKA_API_KEY is loaded from the .env file, as shown above

using var httpClient = new HttpClient();
var baseUrl = "http://api.reka.ai/v1/chat/completions";

// Build the request body following the OpenAI chat completions format
var requestPayload = new
{
    model = "reka-flash-research",
    messages = new[]
    {
        new
        {
            role = "user",
            content = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in New York city"
        }
    }
};

// Serialize the anonymous object to JSON
var jsonPayload = JsonSerializer.Serialize(requestPayload);

using var request = new HttpRequestMessage(HttpMethod.Post, baseUrl);
request.Headers.Add("Authorization", $"Bearer {REKA_API_KEY}");
request.Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");

var response = await httpClient.SendAsync(request);
var responseContent = await response.Content.ReadAsStringAsync();

// Navigate the OpenAI-style response: choices[0].message.content
var jsonDocument = JsonDocument.Parse(responseContent);
var contentString = jsonDocument.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString();

Console.WriteLine(contentString);
So you create an HttpClient, prepare a request with the right headers and payload, send it, get the response, and parse the JSON to extract the text. In this case, you have to know the JSON structure of the response, but it follows the OpenAI format.
What did I learn from this experiment?
I used VS Code while trying the script functionality. One thing that surprised me was that I didn't get any IntelliSense or autocompletion. I tried disabling the DevKit extension and changing the OmniSharp settings, but no luck. My guess is that it's because the feature is still in preview, and it will work just fine in November 2025 when .NET 10 is released.
In this light environment, I encountered some issues where, for some reason, I couldn't use an https endpoint, so I had to use http. In the raw HttpClient script, I also had some errors because Reflection wasn't available. It could be related to the preview or something else; I didn't investigate further.
For the most part, everything worked as expected. You can use C# code to quickly execute some tasks without any project scaffolding. It's a great way to try out the Reka API and see how it works.
What's Next?
While writing those scripts, I encountered multiple issues that aren't related to .NET but rather to the SDKs, when trying more advanced functionality like optimizing the query and formatting the response output. Since that goes beyond the scope of this post, I will share my findings in a follow-up post. Stay tuned!
Here are my reading notes for the week: a mix of AI research and evaluation, .NET and Linux troubleshooting, testing framework changes, and JavaScript/TypeScript perspectives, plus a few podcast episodes on C#, work design, and software modernization that I found worthwhile.
AI
Introducing Research-Eval: A Benchmark for Search-Augmented LLMs (Reka Team) - One thing that has fascinated me since the beginning of this AI trend is how people test and measure the efficiency of those models. This post goes into the details and shares the benchmark (open source) and the results. Very interesting!
Converting an xUnit test project to TUnit (Andrew Lock) - Like Andrew said in this post, changing your test framework is a big deal, but I will definitely consider TUnit for my next project. A very interesting post.
C# 14 with Dustin Campbell (.NET Rocks!) - Nice episode talking about C# and more precisely things that are related to Razor Pages. Always nice to listen to Carl and Richard.
How work design can reignite tremendous results (Modern Mentor) - Two Modern Mentor episodes this week, I love those shorter, concentrated episodes. This one focuses on ideas to help leaders redesign how work gets done.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
This week's post collects concise links and takeaways across .NET, AI, Docker, open source security, DevOps, and broader developer topics: from the .NET Conf call for content and Copilot prompts to Docker MCP tooling, container debugging tips, running .NET as WASM, and a fresh look at the 10x engineer idea.
Running .NET in the browser without Blazor (Andrew Lock) - I never thought about it, but it's true: we can execute .NET code as WASM without Blazor. This tutorial shows you all the code.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
This week’s notes cover GenAI vs agentic AI, fresh Docker and Aspire news, how to run WordPress in containers, and building apps with React and .NET. Plus a few podcasts worth a listen.
Does it Make Sense to Run WordPress in Docker? (Lukas Mauser) - Looking at different options to run WordPress? Check out this blog post. All the code to do it in a Docker container is shared, and it also details the reasons why you should, or shouldn't, do it.
This week's reading notes cover a variety of insightful topics, from enhancing your development environment with dev containers on Windows to prioritizing open-source bugs effectively. You'll also find helpful posts on integrating MFA into your login process, exploring RavenDB's vector search capabilities, and understanding the differences between Ask Mode and Agent Mode in Visual Studio.
Why You Should Incorporate MFA into Your Login Process (Suzanne Scacca) - You think the answer is simple? Think again. Nice post that explains the difference between 2FA and MFA, and why you should or should not implement one of them.
Aspire Dashboard (Joseph Guadagno) - Great deep dive into the Aspire dashboard; learn about all the features packed inside it.
Open Source
How I Prioritize OSS Bugs (jeremydmiller) - A very instructive post on a real-life issue. It's harder than people think to prioritize. And it may help you write better bug reports...
MCP server integration in Visual Studio (Mark Downie) - Great update! The security aspects are very important, and there's a good example of using GitHub to store your tokens. Love it!
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
Creating a Landing Page in Blazor (Héctor Pérez) - Nice tutorial. Not sure I would put the landing page at the root, but other than that, everything is great.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
In a recent post, I shared how to set up a CI/CD pipeline for a .NET Aspire project on GitLab. The pipeline includes unit tests, security scanning, and secret detection, and if any of those fail, the pipeline would fail. Great, but what about code coverage for the unit tests? The pipeline included code coverage commands, but the coverage was not visible in the GitLab interface. Let's fix that.
One thing I initially thought was that the regex used to extract the coverage was incorrect. The regex used in the pipeline was:
coverage: '/Total\s*\|\s*(\d+(?:\.\d+)?)%/'
That regex came directly from the GitLab documentation, so I thought it should work correctly. However, coverage still wasn't visible in the GitLab interface.
So with the help of GitHub Copilot, we wrote a few commands to validate:
That the coverage.cobertura.xml was in a consistent location (instead of being in a folder with a GUID name)
That the coverage.cobertura.xml file was in a valid format
What exactly the regex was looking for
Everything checked out fine, so why was the coverage not visible?
The Solution
It turns out that the coverage command with the regex expression scans the console output, not the coverage.cobertura.xml file. Aha! One solution was to install a dotnet tool to change where the test results are persisted: to the console instead of the XML file. But I preferred keeping the .NET environment unchanged.
The solution I ended up implementing was executing a grep command to extract the coverage from the coverage.cobertura.xml file and then echoing it to the console. Here's what it looks like:
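The exact commands aren't reproduced here, but here's a minimal sketch of the idea, assuming the Cobertura report lands somewhere under a known folder and GNU grep and awk are available in the job image:

test:
  script:
    - dotnet test --collect:"XPlat Code Coverage" --results-directory ./coverage
    # Find the report, pull the line-rate ratio (0..1) out of the XML,
    # convert it to a percentage, and echo it in a "Total | XX%" shape
    # that the coverage regex below can match on the console output.
    - FILE=$(find ./coverage -name coverage.cobertura.xml | head -1)
    - RATE=$(grep -oP 'line-rate="\K[0-9.]+' "$FILE" | head -1)
    - COVERAGE=$(awk -v r="$RATE" 'BEGIN { printf "%.2f", r * 100 }')
    - echo "Total | ${COVERAGE}%"
  coverage: '/Total\s*\|\s*(\d+(?:\.\d+)?)%/'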
I hope this helps others save time when setting up code coverage for their .NET projects on GitLab. The key insight is that GitLab's coverage regex works on console output, not on the files (XML or other formats).
If you have any questions or suggestions, feel free to reach out!
This week's collection of interesting articles and resources covers AI development, DevOps practices, and open source tools. From GitHub Copilot customization to local AI deployments and containerization best practices, here are the highlights worth your attention.
Top 5 MCP Server Best Practices (Ivan Pedrazas) - MCP servers are a very hot topic. Thinking about writing your own? Here are five best practices to make sure you will be successful.
Containerize Your Apps with Ask Gordon (Steve Buchanan) - I already have Docker Desktop on my Windows PC, so I should definitely give Gordon a try. More to come!
DevOps
Local Deploy with Bicep (Sam Cogan) - A perfect short story that explains why the hell Bicep can now deploy locally and how to do it.
Open Source
Introducing OpenCLI (Patrik Svensson) - A standard that describes a CLI so both humans and agents can understand how it works. Love it!
This week, we're exploring a wide range of topics, from .NET 10 previews and A/B testing to the latest in Azure development and AI. Plus, a selection of insightful podcast episodes to keep you informed and inspired.
Docker Model Runner (DevOps and Docker Talk: Cloud Native Interviews and Tooling) - I tried the new model feature of Docker and had many questions. All of them were answered during this episode.
Michael Washington: The Nature Of Data - Episode 353 (Azure & DevOps Podcast) - Interesting discussion about data, and a bit more about a really cool project, Michael's Data warehouse, because sometimes we need something that runs locally.
Welcome to the 655th Reading Notes. This edition explores embedding Python in .NET, working with stacked git branches, and an introduction to cloud-native. Plus, a quick tip for the Azure Portal and using local AI for code reviews.
Introduction to Cloud Native Computing (TNS Staff) - Very complete and interesting article that is the perfect starting point for cloud-native applications, covering what it is, the strategy, the architecture, everything.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
Welcome to another edition of my reading notes! This week, I’ve gathered a selection of insightful articles and resources covering topics like AI, cloud security, open source, and developer productivity. Whether you’re interested in best practices, new tools, or thought-provoking perspectives, there’s something here for everyone.
Dive in and enjoy the highlights!
Suggestion of the week
Copilot, The Good Parts: Efficiency (Rob Conery) - I love that post, it's so true! There are good and bad ways to use any tool. And I personally would really like to see Rob build his stuff. Let him know if you think like me.
Fantastic Alert Messages Using SweetAlert (Héctor Pérez) - A great component to manage our alerts in C# instead of JavaScript. That makes the code easier to test when everything is in the same language.
You DON’T Need Microservices for Serverless! (Derek Comartin) - A great post that explains coupling and the difference between monoliths and microservices.
Open Source
How to convince your boss to sponsor Open Web Docs (Patrick Brosset) - Open source is important! And to contribute, it doesn't have to be code. Nice post that shares ideas and explains a few things about OSS.
Local code review with Docker and smollm2 before pushing to git (Gerardo Lopez) - This is a great idea! Definitely a good way to avoid the shame of a broken push and quickly validate that your code looks okay. It's also a great way to experiment with git hooks.
Welcome to Reading Notes #653 another packed edition of insights, tools, and updates from the tech world! This week's roundup dives into legendary engineering wisdom, AI controversies, and the latest innovations in Docker, Azure, and VS Code. Whether you're exploring MCP, refining your scripting skills, or gearing up for the newest Azure Developer CLI release, there's something here for every developer.
Let’s get into it!
Cloud
Azure Developer CLI (azd) - June 2025 (Kristen Womack) - Love that tool, and these are great updates: so many new features and improvements in this version. I'm very much looking forward to trying them all.
AI
Publishing AI models to Docker Hub (Kevin Wittek) - Running models locally is something a lot of people are looking forward to, so this is good news. Can't wait to try it!
This week, we explore a variety of topics, from database containerization and AI security risks to the evolving landscape of gaming devices and cloud technologies. We also look at the shift towards security-first development and the integration of .NET Aspire with SQL Server for integration testing.
Let's dive in!
Suggestion of the week
GitHub MCP Exploited: Accessing private repositories via MCP (Marco Milanta, Luca Beurer-Kellner) - AI tools are very powerful but also pretty new in our lives. It's important to stay up to date and understand the risks. Not to be scared, but to see the potential flaws and how to avoid them.
Beyond DevSecOps: The Rise of Security-First Development (Industry Perspectives) - DevSecOps was a wake-up call, and now we need to build our apps security-first. That seems to make sense, right? Have a read of this post to dig deeper into this idea and understand its foundation.
Testing has always been one of those tasks that developers know is essential but often find tedious. When I decided to add comprehensive unit tests to my NoteBookmark project, I thought: why not make this an experiment in AI-assisted development? What followed was a fascinating 4-hour journey that resulted in 88 unit tests, a complete CI/CD pipeline, and some valuable insights about working with AI coding assistants.
NoteBookmark is a .NET application built with C# that helps users manage and organize their reading notes and bookmarks. The project includes an API, a Blazor frontend, and uses Azure services for storage. You can check out the complete project on GitHub.
The Challenge: Starting from Zero
I'll be honest - it had been a while since I'd written comprehensive unit tests. Rather than diving in myself, I decided to see how different AI models would approach this task. My initial request was deliberately vague: "add a test project" without any other specifications.
Looking back, I realize I should have been more specific about which parts of the code I wanted covered. This would have made the review process easier and given me better control over the scope. But sometimes, the best learning comes from letting the AI surprise you.
The Great AI Model Comparison
GPT-4.1: Competent but Quiet
GPT-4.1 delivered decent results, but the experience felt somewhat mechanical. The code it generated was functional, but I found myself wanting more context. The explanations were minimal, and I often had to ask follow-up questions to understand the reasoning behind certain test approaches.
Gemini: The False Start
My experience with Gemini was... strange. Perhaps it was a glitch or an off day, but most of what was generated simply didn't work. I didn't persist with this model for long, as debugging AI-generated code that fundamentally doesn't function defeats the purpose of the exercise. Note that at the time of this writing, Gemini was still in preview, so I expect it to improve over time.
Claude Sonnet: The Clear Winner
This is where the magic happened. Claude Sonnet became my co-pilot of choice for this project. What set it apart wasn't just the quality of the code (though that was excellent), but the quality of the conversation. It felt like having a thoughtful colleague thinking out loud with me.
The explanations were clear and educational. When Claude suggested a particular testing approach, it would explain why. When it encountered a complex scenario, it would walk through its reasoning. I tried different versions of Claude Sonnet but didn't notice significant differences in results - they were all consistently good.
The Development Process: A 4-Hour Journey
Hour 1-2: Getting to Compilation
The first iteration couldn't compile. This wasn't surprising given the complexity of the codebase and the vague initial request. But here's where the AI collaboration really shined. Instead of manually debugging everything myself, I worked with Copilot to identify and fix issues iteratively.
We went through several rounds of:
Identify compilation errors
Discuss the best approach to fix them
Let the AI implement the fixes
Review and refine
After about 2 hours, we had a test project with 88 unit tests that compiled successfully. The AI had chosen xUnit as the testing framework, which I was happy with - it's a solid choice that I might not have picked myself, since I was rusty on the current .NET testing landscape.
Hour 2.5-3.5: Making Tests Pass
Getting the tests to compile was one thing; getting them to pass was another challenge entirely. This phase taught me a lot about both my codebase and xUnit features I wasn't familiar with.
I relied heavily on the /explain feature during this phase. When tests failed, I'd ask Claude to explain what was happening and why. This was invaluable for understanding not just the immediate fix, but the underlying testing concepts.
One of those moments was learning about [InlineData(true)] and other xUnit data attributes. These weren't features I was familiar with, and having them explained in context made them immediately useful.
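If you haven't used them either, here's a small illustrative example of the pattern (the class and property names are invented for this post, not taken from NoteBookmark):

using Xunit;

public class NoteTests
{
    // A [Theory] runs once per [InlineData] row, so one test method
    // covers both the true and false cases without duplication.
    [Theory]
    [InlineData(true)]
    [InlineData(false)]
    public void IsPublished_RoundTrips(bool isPublished)
    {
        var note = new Note { IsPublished = isPublished };

        Assert.Equal(isPublished, note.IsPublished);
    }
}

public class Note
{
    public bool IsPublished { get; set; }
}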
Hour 3.5-4: Structure and Style
Once all tests were passing, I spent time ensuring I understood each test and requesting structural changes to match my preferences. This phase was crucial for taking ownership of the code. Just because AI wrote it doesn't mean it should remain a black box. Let's repeat this: Understanding the code is essential; just because AI wrote it doesn't mean it's good.
Beyond Testing: CI/CD Integration
With the tests complete, I asked Copilot to create a GitHub Actions workflow to run tests on every push to the main and v-next branches, plus PR reviews. Initially, it started modifying my existing workflow that takes care of the Azure deployment. I wanted a separate workflow for testing, so I interrupted it (it's nice that I wasn't "forced" to wait) and asked it to create a new one instead. The result was the running-unit-tests.yml workflow that worked perfectly on the first try.
This was genuinely surprising. CI/CD configurations often require tweaking, but the generated workflow handled all of the following (a rough sketch of its shape follows the list):
Multi-version .NET setup
Dependency restoration
Building and testing
Test result reporting
Code coverage analysis
Artifact uploading
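For illustration only, here's a hypothetical sketch of what a workflow like running-unit-tests.yml could look like (action versions, the .NET version, and step details are assumptions, not the actual generated file):

name: Run Unit Tests

on:
  push:
    branches: [main, v-next]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the .NET SDK; the actual workflow handled multiple versions
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '9.0.x'
      # Restore, build, and test with coverage collection
      - run: dotnet restore
      - run: dotnet build --no-restore
      - run: dotnet test --no-build --collect:"XPlat Code Coverage"
      # Upload the results so they can be reported on PRs
      - uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: '**/TestResults/**'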
The PR Enhancement Adventure
Here's where things got interesting. When I asked Copilot to enhance the workflow to show test results in PRs, it started adding components, then paused and asked if it could delete the current version and start from scratch.
I said yes, and I'm glad I did. The rebuilt version created beautiful PR comments showing:
Test results summary
Code coverage reports (which I didn't ask for but appreciated)
Detailed breakdowns.
The Finishing Touches
No project is complete without proper status indicators. I added a test status badge to the README, giving anyone visiting the repository immediate visibility into the project's health.
Key Takeaways
What Worked Well
AI as a Learning Partner: Having Copilot explain testing concepts and xUnit features was like having a patient teacher
Iterative Refinement: The back-and-forth process felt natural and productive
Comprehensive Solutions: The AI didn't just write tests; it created a complete testing infrastructure
Quality Over Speed: While it took 4 hours, the result was thorough and well-structured
What I'd Do Differently
Be More Specific Initially: Starting with clearer scope would have streamlined the process
Set Testing Priorities: Identifying critical paths first would have been valuable
Plan for Visual Test Reports: Thinking about test result visualization from the start
Lessons About AI Collaboration
Model Choice Matters: The difference between AI models was significant
Conversation Quality Matters: Clear explanations make the collaboration more valuable
Trust but Verify: Understanding every piece of generated code is crucial
Embrace Iteration: The best results come from multiple refinement cycles
The Bigger Picture
This experiment reinforced my belief that AI coding assistants are most powerful when they're true collaborators rather than code generators. The value wasn't just in the 88 tests that were written, but in the learning that happened along the way.
For developers hesitant about AI assistance in testing: this isn't about replacing your testing skills, it's about augmenting them. The AI handles the boilerplate and suggests patterns, but you bring the domain knowledge and quality judgment.
Conclusion
Would I do this again? Absolutely. The combination of comprehensive test coverage, learning opportunities, and time efficiency made this a clear win. The 4 hours invested created not just tests, but a complete testing infrastructure that will pay dividends throughout the project's lifecycle.
If you're considering AI-assisted testing for your own projects, my advice is simple: start the conversation, be prepared to iterate, and don't be afraid to ask "why" at every step. The goal isn't just working code - it's understanding and owning that code.
The complete test suite and CI/CD pipeline are available in the NoteBookmark repository if you want to see the results of this AI collaboration in action.
Welcome to another edition of my reading notes! This week brings some fascinating insights into AI's real-world impact, exciting developments in .NET and containerization, plus practical tools for improving our development workflows.
From local AI-powered code reviews to Docker security hardening and the upcoming .NET 10 features, there's plenty to explore.
AI
The promise that wasn’t kept (Salma Alam-Naylor) - Interesting thoughts about AI and its impact on our work, but also on our lives and the planet.
Inside GitHub: How we hardened our SAML implementation (Greg Ose, Taylor Reis) - Very interesting post that pulls back the curtain a little so we can see behind the scenes how this widely used authentication system works and has been updated.
Enhance productivity with AI + Remote Dev (Brigit Murtaugh, Christof Marti, Josh Spicer, Olivia Guzzardo McVicker) - I love dev container environments, they are so useful! And I also use remote development when I'm not on my dev device; so easy. Happy to see that Copilot will be right there with me.
It's time for another edition of Reading Notes! This week brings exciting developments in the open source world, with major announcements from Microsoft making WSL and VS Code's AI features open source. We've also got updates on Azure Container Apps, .NET Aspire, and some great insights on developer productivity tools.
Let's dive into these interesting reads that caught my attention this week.
Cloud
Happy 5th Birthday Bicep! (Sam Cogan) - What?! Five years already! That's incredible. I remember all the discussions about how to make things better, and honestly, Bicep is a big success. Congrats to the team!
Have I Been Pwned 2.0 is Now Live! (Troy Hunt) - New look, new merch, and confetti, all without API breaking changes! Learn all about this major update in this post.
What's new in .NET Aspire 9.3 (David Pine) - Wow! How can so many great new features be added in a single version?! Aspire is a must for all .NET developers.
Accelerate Your .NET Upgrades with GitHub Copilot (McKenna Barlow) - That's the tool I've been waiting on for ages! Adding Copilot to the extension is the smartest move they could make. I'm going to update an app right away; I'll share more later.
Open Source
Edit is now open source (Christopher Nguyen) - That's great news! I installed it halfway through reading the post, and it's great! Fast, simple, and tiny!! Love it!
Agent mode for every developer (Katie Savage) - Great news for everyone, as agent mode becomes available in so many different editors. This post also contains videos showing some scenarios.
Podcasts
Reimagining the Windows Terminal with Warp's Zach Lloyd (Scott Hanselman) - A very interesting talk with the CEO of Warp that answers so many questions I had about this very different terminal. Really interesting episode (and terminal too, BTW).
The experience is enough (Salma Alam-Naylor) - Whether we like it or not, we are social creatures. We all need to stop hiding behind our screens and get out there!
Automating deployments is something I always enjoy. However, it's true that it often takes more time than a simple "right-click deploy." Plus, you may need to know different technologies and scripting languages.
But what if there was a tool that could help you write everything you need—Infrastructure as Code (IaC) files, scripts to copy files, and scripts to populate a database? In this post, we'll explore how the Azure Developer CLI (azd) can make deployments much easier.
What do we want to do?
Our goal: Deploy the 2D6 Dungeon App to Azure Container Apps.
This .NET Aspire solution includes:
A frontend
A data API
A database
The Problem
In a previous post, we showed how azd up can easily deploy web apps to Azure.
If we try the same command for this solution, the deployment will be successful—but incomplete:
The .NET Blazor frontend is deployed perfectly.
However, the app fails when trying to access data.
Looking at the logs, we see the database wasn't created or populated, and the API container fails to start.
Let's look more closely at these issues.
The Database
When running the solution locally, Aspire creates a MySQL container and executes SQL scripts to create and populate the tables. This is specified in the AppHost project:
var mysql = builder.AddMySql("sqlsvr2d6")
                   .WithLifetime(ContainerLifetime.Persistent);

var db2d6 = mysql.AddDatabase("db2d6");

mysql.WithInitBindMount(source: "../../database/scripts", isReadOnly: false);
When MySQL starts, it looks for SQL files in a specific folder and executes them. Locally, this works because the bind mount is mapped to a local folder with the files.
However, when deployed to Azure:
The mounts are created in Azure Storage Files
The files are missing!
The Data API
This project uses Data API Builder (dab). Based on a single config file, a full data API is built and hosted in a container.
Locally, Aspire creates a DAB container and reads the JSON config file to create the API. This is specified in the AppHost project:
var dab = builder.AddDataAPIBuilder("dab", ["../../database/dab-config.json"])
                 .WithReference(db2d6)
                 .WaitFor(db2d6);
But once again, when deployed to Azure, the file is missing. The DAB container starts but fails to find the config file.
The Solution
The solution is simple: the SQL scripts and DAB config file need to be uploaded into Azure Storage Files during deployment.
You can do this by adding a post-provision hook in the azure.yaml file to execute a script that uploads the files. See an example of a post-provision hook in this post.
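As a rough sketch, assuming a hypothetical upload script, such a hook in azure.yaml could look like this:

hooks:
  postprovision:
    shell: sh
    # Upload the SQL scripts and the DAB config file to Azure Storage Files
    run: ./hooks/upload-files.sh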
Alternatively, you can leverage azd alpha features: azd.operations and infraSynth.
azd.operations extends the provisioning providers and will upload the files for us.
infraSynth generates the IaC files for the entire solution.
💡Note: These features are in alpha and subject to change.
Each azd alpha feature can be turned on individually. To see all features:
azd config list-alpha
To activate the features we need:
azd config set alpha.azd.operations on
azd config set alpha.infraSynth on
Let's Try It
Once the azd.operations feature is activated, any azd up will now upload the files into Azure. If you check the database, you'll see that the db2d6 database was created and populated. Yay!
However, the DAB API will still fail to start. Why? Because, currently, DAB looks for a file, not a folder, when it starts. This can be fixed by modifying the IaC files.
One Last Step: Synthesize the IaC Files
First, let's synthesize the IaC files. These Bicep files describe the required infrastructure for our solution.
With the infraSynth feature activated, run:
azd infra synth
You'll now see a new infra folder under the AppHost project, with YAML files matching the container names. Each file contains the details for creating a container.
Open the dab.tmpl.yaml file to see the DAB API configuration. Look for the volumeMounts section. To help DAB find its config file, add subPath: dab-config.json to make the binding more specific:
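As a hedged sketch, the updated section could look like this (the volume name and mount path are assumptions; keep whatever azd generated in your file):

volumeMounts:
  - volumeName: dab
    # Mount only the config file rather than the whole file share
    mountPath: /App/dab-config.json
    subPath: dab-config.json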
You can also specify the scaling minimum and maximum number of replicas if you wish.
Now that the IaC files are created, azd will use them. If you run azd up again, the execution time will be much faster—azd deployment is incremental and only does "what changed."
The Final Result
The solution is now fully deployed:
The database is there with the data
The API works as expected
You can use your application!
Bonus: Deploying with CI/CD
Want to deploy with CI/CD? First, generate the GitHub Action (or Azure DevOps) workflow with:
azd pipeline config
Then, add a step to activate the alpha feature before the provisioning step in the azure-dev.yml file generated by the previous command.
- name: Extends provisioning providers with azd operations
  run: azd config set alpha.azd.operations on
With these changes, and assuming the infra files are included in the repo, the deployment will work on the first try.
Conclusion
It's exciting to see how tools like azd are shaping the future of development and deployment. Not only do they make the developer's life easier today by automating complex tasks, but they also ensure you're ready for production with all the necessary Infrastructure as Code (IaC) files in place. The journey from code to cloud has never been smoother!
If you have any questions or feedback, I'm always happy to help—just reach out on your favorite social media platform.