This edition explores practical insights on leveraging GitHub Copilot for enhanced .NET testing, the rise of AI-driven documentation solutions, and the importance of security in coding agents. From dissecting Docker’s MCP servers to debating the merits of Minimal APIs, we cover a mix of .NET updates, developer workflows, and emerging best practices. Whether you’re refining build processes, optimizing codebases, or staying ahead of security trends, these notes offer a curated selection of ideas to spark your next project or refactor.
Ask AI from Anywhere: No GUI, No Heavy Clients, No Friction (Frank Boucher) - A cool little tool that you can call from any terminal (yes, it works via SSH too!) to ask an AI a question or have it do web research for you. The post and the video explain how it works and where to find the code.
Programming
Reinventing how .NET Builds and Ships (Again) (Matt Mitchell) - This post is not a 30-second read; it's a detailed story that explains all the facts about how that great new version 10 is built and shipped.
Ever wished you could ask AI from anywhere without needing an interface? Imagine just typing ? and your question in any terminal the moment it pops into your head, and getting the answer right away! In this post, I explain how I wrote a tiny shell script that turns this idea into reality, transforming the terminal into a universal AI client. You can query Reka, OpenAI, or a local Ollama model from any editor, tab, or pipeline—no GUI, no heavy clients, no friction.
Small, lightweight, and surprisingly powerful: once you make it part of your workflow, it becomes indispensable.
There is almost always a terminal within reach—embedded in your editor, sitting in a spare tab, or already where you live while building, debugging, and piping data around. So why break your flow to open a separate chat UI? I wanted to just type a single character (?) plus my question and get an answer right there. No window hopping. No heavy client.
How It Works
The trick is delightfully small: send a single JSON POST request to whichever AI provider you feel like (Reka, OpenAI, Ollama locally, etc.):
# Example: Reka
curl https://api.reka.ai/v1/chat \
  -H "X-Api-Key: <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {
            "role": "user",
            "content": "What is the origin of Thanksgiving?"
          }
        ],
        "model": "reka-core",
        "stream": false
      }'
# Example: Ollama local
curl http://127.0.0.1:11434/api/chat \
  -d '{
        "model": "llama3",
        "messages": [
          {
            "role": "user",
            "content": "What is the origin of Thanksgiving?"
          }
        ],
        "stream": false
      }'
Once we get the response, we extract the answer field from it. A thin shell wrapper turns that into a universal “ask” verb for your terminal. Add a short alias (?) and you have the most minimalist AI client imaginable.
Let's go into the details
Let me walk you through the core script step-by-step using reka-chat.sh, so you can customize it the way you like. Maybe this is a good moment to mention that Reka has a free tier that's more than enough for this. Go grab your key—after all, it's free!
The script (reka-chat.sh) does four things:
Captures your question
Loads an API key from ~/.config/reka/api_key
Sends a JSON payload to the chat endpoint with curl
Extracts the answer using jq for clean plain text
1. Capture Your Question
This part of the script is a pure laziness hack. I wanted to save keystrokes by not requiring quotes when passing a question as an argument. So ? What is 32C in F works just as well as ? "What is 32C in F".
if [ $# -eq 0 ]; then
  if [ ! -t 0 ]; then
    # No arguments: read the question from stdin (so you can pipe into it)
    QUERY="$(cat)"
  else
    # No arguments and no piped input: nothing to ask
    exit 1
  fi
else
  # Join all arguments into one question, so no quotes are needed
  QUERY="$*"
fi
2. Load Your API Key
If you're running Ollama locally, you don't need any key, but for all other AI providers you do. I store mine in a locked-down file at ~/.config/reka/api_key, then read and trim trailing whitespace like this:
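A minimal sketch of steps 2 and 3 (the full script in the repo is more robust):

# Step 2: load the API key and strip any trailing whitespace/newline
API_KEY="$(tr -d '[:space:]' < "$HOME/.config/reka/api_key")"

# Step 3: send the question as a JSON payload (jq -Rs safely JSON-encodes $QUERY)
RESPONSE="$(curl -s https://api.reka.ai/v1/chat \
  -H "X-Api-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
        \"model\": \"reka-core\",
        \"stream\": false,
        \"messages\": [{\"role\": \"user\", \"content\": $(printf '%s' "$QUERY" | jq -Rs .)}]
      }")"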
Finally, we parse the JSON response with jq to pull out just the answer text. If jq isn't installed, we display the raw response, but a formatted answer is much nicer. If you are customizing for another provider, you may need to adjust the JSON path here. You can add echo "$RESPONSE" >> data_sample.json to the script to log raw responses for tinkering.
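As a sketch, that final step could look like this (the .responses[0].message.content path is an assumption about Reka's response shape; adjust it for your provider):

# Step 4: pretty output when jq is available, raw response otherwise
if command -v jq >/dev/null 2>&1; then
  printf '%s\n' "$RESPONSE" | jq -r '.responses[0].message.content'
else
  printf '%s\n' "$RESPONSE"
fi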
Now that we have the script, make it executable with chmod +x reka-chat.sh, and let's add an alias to your shell config to make it super easy to use. Add one line to your .zshrc or .bashrc that looks like this:
alias \?="$REKA_CHAT_SCRIPT"
Because ? is a special character in the shell, we escape it with a backslash. After adding this line, reload your shell configuration with source ~/.zshrc or source ~/.bashrc, and you are all set!
The Result
Now you can ask questions directly from your terminal. Wanna know the origin of Thanksgiving? Ask it like this:
? What is the origin of Thanksgiving
And if you want to keep the quotes, you do you!
Extra: Web research
I couldn't stop there! Reka also supports web research, which means it can fetch and read web pages to provide more informed answers. Following the same pattern described previously, I wrote a similar script called reka-research.sh that sends a request to Reka's research endpoint. This obviously takes a bit more time to answer, as it's making different web queries and processing them, but the results are often worth the wait—and they are up to date! I used the alias ?? for this one.
On the GitHub repository, you can find both scripts (reka-chat.sh and reka-research.sh) along with a script to create the aliases automatically. Feel free to customize them to fit your workflow and preferred AI provider. Enjoy the newfound superpower of instant AI access right from your terminal!
What's Next?
With this setup, the possibilities are endless. Reka supports questions related to audio and video, which could be interesting to explore next. The project is open source, so feel free to contribute or suggest improvements. You can also join the Reka community on Discord to share your experiences and learn from others.
Ever wished you could ask a question and have the answer come only from a handful of trusted documentation sites—no random blogs, no stale forum posts? That’s exactly what the Check-In Doc MCP Server does. It’s a lightweight Model Context Protocol (MCP) server you can run locally (or host) to funnel questions to selected documentation domains and get a clean AI-generated answer back.
What It Is
The project (GitHub: https://github.com/fboucher/check-in-doc-mcp) is a Dockerized MCP server that:
Accepts a user question.
Calls the Reka AI Research API with constraints (only allowed domains).
Returns a synthesized answer based on live documentation retrieval.
You control which sites are searchable by passing a comma‑separated list of domains (e.g. docs.reka.ai,docs.github.com). That keeps results focused, reliable, and relevant.
What Is the Reka AI Research API?
Reka AI’s Research API lets you blend language model reasoning with targeted, on‑the‑fly web/document retrieval. Instead of a model hallucinating an answer from static training data, it can:
Perform limited domain‑scoped web searches.
Pull fresh snippets.
Integrate them into a structured response.
In this project, we use the research feature with a web_search block specifying:
allowed_domains: Only the documentation sites you trust.
max_uses: Caps how many retrieval calls it makes per query (controls cost & latency).
Details used here:
Model: reka-flash-research
Endpoint: http://api.reka.ai/v1/chat/completions
Auth: Bearer API key (generated from the Reka dashboard: https://link.reka.ai/free)
How It Works Internally
The core logic lives in ResearchService (src/Domain/ResearchService.cs). Simplified flow:
Initialization
Stores the API key + array of allowed domains, sets model & endpoint, logs a safe startup message.
Build Request Payload
The CheckInDoc(string question) method creates a JSON payload:
var requestPayload = new {
    model,
    messages = new[] { new { role = "user", content = question } },
    research = new {
        web_search = new {
            allowed_domains = allowedDomains,
            max_uses = 4
        }
    }
};
Send Request
Creates an HttpRequestMessage (POST), adds an Authorization: Bearer <APIKEY> header, and sends the JSON to Reka.
Parse Response
Deserializes into a RekaResponse domain object, returns the first answer string.
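Stitched together, steps 3 and 4 might look roughly like this (a sketch only; RekaResponse and its property names are stand-ins for the repo's actual domain types in src/Domain/ResearchService.cs):

using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

// Sketch of the send/parse flow; names are illustrative, not the repo's exact code
var json = JsonSerializer.Serialize(requestPayload);
using var request = new HttpRequestMessage(HttpMethod.Post, endpoint)
{
    Content = new StringContent(json, Encoding.UTF8, "application/json")
};
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

using var response = await httpClient.SendAsync(request);
var body = await response.Content.ReadAsStringAsync();
var rekaResponse = JsonSerializer.Deserialize<RekaResponse>(body);

// Return the first answer string (property names are illustrative)
return rekaResponse?.Responses?[0]?.Message?.Content ?? string.Empty;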
Adding It to VS Code (MCP Extension)
You can run it as a Docker-based MCP server. Two simple approaches:
Option 1: Via “Add MCP Server” UI
In VS Code (with MCP extension), click Add MCP Server.
Choose type: Docker image.
Image name: fboucher/check-in-doc-mcp.
Enter allowed domains and your Reka API key when prompted.
Option 2: Via mcp.json (Recommended)
Alternatively, you can manually configure it in your mcp.json file. This will make sure your API key isn't displayed in plain text. Add or merge this configuration:
{
  "servers": {
    "check-in-docs": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "ALLOWED_DOMAINS=${input:allowed_domains}",
        "-e",
        "APIKEY=${input:apikey}",
        "fboucher/check-in-doc-mcp"
      ]
    }
  },
  "inputs": [
    {
      "id": "allowed_domains",
      "type": "promptString",
      "description": "Enter the comma-separated list of documentation domains to allow (e.g. docs.reka.ai,docs.github.com):"
    },
    {
      "id": "apikey",
      "type": "promptString",
      "password": true,
      "description": "Enter your Reka Platform API key:"
    }
  ]
}
How to Use It
To use it, ask to "check in doc" something, or call the SearchInDoc tool directly in your MCP-enabled environment. Just ask a question, and it will search only the specified documentation domains.
Final Thoughts
It’s intentionally simple—no giant orchestration layer. Just a clean bridge between a question, curated domains, and a research-enabled model. Sometimes that’s all you need to get focused, trustworthy answers.
If this sparks an idea, clone it and adapt away. If you improve it (citations, richer error handling, multi-turn context)—send a PR!
I've spent most of my career building software in C# and .NET, and have only used Python in IoT projects. When I wanted to build a fun project, an app that uses AI to roast videos, I knew it was the perfect opportunity to finally dig into Python web development.
The question was: where do I start? I hopped into a brainstorming session with Reka's AI chat and asked about options for building web apps in Python. It mentioned Flask, and I remembered friends talking about it being lightweight and perfect for getting started. That sounded right.
In this post, I share how I built "Roast My Life," a Flask app using the Reka Vision API.
The Vision (Pun Intended)
The app needed three core things:
List videos: Show me what videos are in my collection
Upload videos: Let me add new ones via URL
Roast a video: Send a selected video to an AI and get back some hilarious commentary
See it in action
Part 1: Getting Started with Environment Setup
The first hurdle was always going to be environment setup. I'm serious about keeping my Python projects isolated, so I did the standard dance:
Before even touching dependencies, I scaffolded a super bare-bones Flask app. One thing I enjoy from C# is that all dependencies are brought in with one shot, so I like doing the same with my Python projects, using requirements.txt instead of installing things ad hoc (pip install flask and then freezing later).
Dropping that file in first means the setup snippet below is deterministic. When you run pip install -r requirements.txt, Flask spins up using the exact versions I tested with, and you won't accidentally grab a breaking major update.
Here's the shell dance that activates the virtual environment and installs everything:
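(The snippet below is the standard sequence, shown as a sketch; adjust the activate line for your shell. The .env values are placeholders since the app reads its configuration from that file.)

# Create an isolated environment and activate it
python3 -m venv .venv
source .venv/bin/activate

# Install the pinned dependencies in one shot
pip install -r requirements.txt

# The app reads its settings from a .env file (placeholder values shown)
printf 'API_KEY=<your-reka-api-key>\nBASE_URL=<reka-vision-api-url>\n' > .env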
To get that API key, I visited the Reka Platform and grabbed a free one. Seriously, a free key for playing with AI vision APIs? I was in.
With python app.py, I fired up the Flask development server and opened http://127.0.0.1:5000 in my browser. The UI was there, but... it was dead. Nothing worked.
Perfect. Time to build.
The Backend: Flask Routing and API Integration
Coming from ASP.NET Core's controller-based routing and Blazor, Flask's decorator-based approach felt right at home. All the code goes in the app.py file, and each route is defined with a simple decorator. But first things first: loading configuration from the .env file using python-dotenv:
from flask import Flask, request, jsonify
import requests
import os
from typing import Any, Dict  # used by the QA helper further down
from dotenv import load_dotenv

app = Flask(__name__)

# Load environment variables (like appsettings.json)
load_dotenv()
api_key = os.environ.get('API_KEY')
base_url = os.environ.get('BASE_URL')
All the imported packages are the same ones that need to be listed in requirements.txt. We retrieve the API key and base URL from environment variables, just like in .NET Core.
Now, to be able to get roasted we need first to upload a video to the Reka Vision API. Here's the code—I'll go over some details after.
@app.route('/api/upload_video', methods=['POST'])
def upload_video():
    """Upload a video to Reka Vision API"""
    data = request.get_json() or {}
    video_name = data.get('video_name', '').strip()
    video_url = data.get('video_url', '').strip()

    if not video_name or not video_url:
        return jsonify({"error": "Both video_name and video_url are required"}), 400
    if not api_key:
        return jsonify({"error": "API key not configured"}), 500

    try:
        response = requests.post(
            f"{base_url.rstrip('/')}/videos/upload",
            headers={"X-Api-Key": api_key},
            data={
                'video_name': video_name,
                'index': 'true',  # Required: tells Reka to process the video
                'video_url': video_url
            },
            timeout=30
        )
        response_data = response.json() if response.ok else {}

        if response.ok:
            video_id = response_data.get('video_id', 'unknown')
            return jsonify({
                "success": True,
                "video_id": video_id,
                "message": "Video uploaded successfully"
            })
        else:
            error_msg = response_data.get('error', f"HTTP {response.status_code}")
            return jsonify({"success": False, "error": error_msg}), response.status_code
    except requests.Timeout:
        return jsonify({"success": False, "error": "Request timed out"}), 504
    except Exception as e:
        return jsonify({"success": False, "error": f"Upload failed: {str(e)}"}), 500
Once the information from the frontend is validated, we make a POST request to the Reka Vision API's /videos/upload endpoint. The parameters are sent as form data, and we include the API key in the headers for authentication. Here I was using URLs to upload videos, but you can also upload local files by adjusting the request accordingly, as sketched below. As you can see, it's pretty straightforward, and the documentation from Reka made it easy to understand what was needed.
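For example, a local-file variant could look something like this (a sketch; the multipart field name is an assumption, so check Reka's documentation):

# Hypothetical sketch: upload a local file instead of a URL
with open("my_video.mp4", "rb") as f:
    response = requests.post(
        f"{base_url.rstrip('/')}/videos/upload",
        headers={"X-Api-Key": api_key},
        data={'video_name': 'my_video', 'index': 'true'},
        files={'file': f},  # field name is an assumption
        timeout=60
    )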
The Magic: Sending Roast Requests to Reka Vision API
Here's where things get interesting. Once a video is uploaded, we can ask the AI to analyze it and generate content. The Reka Vision API supports conversational queries about video content:
def call_reka_vision_qa(video_id: str) -> Dict[str, Any]:
    """Call the Reka Video QA API to generate a roast"""
    headers = {'X-Api-Key': api_key} if api_key else {}
    payload = {
        "video_id": video_id,
        "messages": [
            {
                "role": "user",
                "content": "Write a funny and gentle roast about the person, or the voice in this video. Reply in markdown format."
            }
        ]
    }
    try:
        resp = requests.post(
            f"{base_url}/qa/chat",
            headers=headers,
            json=payload,
            timeout=30
        )
        data = resp.json() if resp.ok else {"error": f"HTTP {resp.status_code}"}
        if not resp.ok and 'error' not in data:
            data['error'] = f"HTTP {resp.status_code} calling chat endpoint"
        return data
    except requests.Timeout:
        return {"error": "Request to chat API timed out"}
    except Exception as e:
        return {"error": f"Chat API call failed: {e}"}
Here we pass the video ID and a prompt asking for a "funny and gentle roast." The API responds with AI-generated content, which we then send back to the frontend for display. I try to give the AI more "freedom" by asking it to reply in markdown format, which makes the output more engaging.
What really stood out to me was how approachable the Reka Vision API is. You don't need any special SDK—just the requests library making standard HTTP calls. And honestly, it doesn't matter what language you're used to; an HTTP call is pretty much always simple to do. Whether you're coming from .NET, Python, JavaScript, or anything else, you're just sending JSON and getting JSON back.
Authentication is refreshingly straightforward: just pop your API key in the header and you're good to go. No complex SDKs, no multi-step authentication flows, no wrestling with binary data streams. The conversational interface lets you ask questions in natural language, and you get back structured JSON responses with clear fields.
One thing worth noting: in this example, the videos are pre-uploaded and indexed, which means the responses come back fast. But here's the impressive part—the AI actually looks at the video content. It's not just reading a transcript or metadata; it's genuinely analyzing the visual elements. That's what makes the roasts so spot-on and contextual.
Final Thoughts
The Reka Vision API itself deserves credit for making video AI accessible. No complicated SDKs, no multi-GB model downloads, no GPU requirements. Just simple HTTP requests and powerful AI capabilities. I'm not saying I'm switching to Python full-time, but expect to see me sharing more Python projects in the future!
This edition dives into the intersection of AI and development, exploring tools like GitHub Copilot’s AGENTS.md and the MCP Toolkit for automations, alongside .NET 10.0’s performance gains and OpenAI’s recent updates. Whether you’re optimizing serverless APIs with AWS Lambda or mastering the Web Animation API, this post highlights breakthroughs in code efficiency, model customization, and cloud innovation. Dive into these thought-provoking reads to stay ahead in a rapidly changing world.
Add MCP Servers to Claude Code with MCP Toolkit (Ajeet Singh Raina) - Very cool way to create automations. I have used a lot of low-code and different alternatives to create automation workflows in the past, but this MCP server interactivity is pretty awesome.
Low-Rank Adaptation (LoRA) Explained (Ignasi Lopez Luna) - Nice post that shows how we can specialize a model ourselves for a very narrow, specific topic.
The Web Animation API (Christian Nwamba) - It's the first time I've read about this Web Animation API; pretty cool, even if we need to be careful. I think the precision it offers could be very interesting for some animations.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
This week’s post explores the intersection of AI, cloud, and DevOps, featuring updates on Microsoft’s Logic Apps integration, practical .NET tools for system automation, and strategies to enhance documentation for AI-driven workflows. Whether you’re refining enterprise security practices with NuGet’s Trusted Publishing or diving into the ethical nuances of AI through vector databases, this post offers a blend of technical deep dives and thought-provoking discussions. Don’t miss the podcast highlights, from DevOps innovation to the business impact of employee well-being, perfect for developers, architects, and curious minds alike. Let’s connect the dots in a world where code, creativity, and collaboration drive progress.
Prompt Files and Instructions Files Explained - .NET Blog (Wendy Breiding) - One question I often hear is how can I tell Copilot the standard and conventions my enterprise uses. Well, now I can send them to this post. Well done!
John Bristowe: The Latest from Octopus Deploy - Episode 368 (Azure & DevOps Podcast) - Loved this episode; I don't know a lot about Octopus. It discusses deployment pipelines, AI integration in DevOps, and the evolution from manual weekend deployments to automated, reliable workflows.
The business case for employee well-being (Modern Mentor) - Really enjoyed this episode on how workplace well-being isn't just about perks, but about building better work systems that boost both results and employee experience.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
I wanted to kick the tires on the upcoming .NET 10 C# script experience and see how far I could get calling Reka’s Research LLM from a single file, no project scaffolding, no .csproj. This isn’t a benchmark; it’s a practical tour to compare ergonomics, setup, and the little gotchas you hit along the way. I’ll share what worked, what didn’t, and a few notes you might find useful if you try the same.
All the sample code (and a bit more) is here: reka-ai/api-examples-dotnet · csharp10-script. The scripts run a small “top 3 restaurants” prompt so you can validate everything quickly.
We’ll make the same request in three ways:
OpenAI SDK
Microsoft.Extensions.AI for OpenAI
Raw HttpClient
What you need
The C# "script" feature used below ships with the upcoming .NET 10 and is currently available in preview. If you prefer not to install a preview SDK, you can run everything inside the provided Dev Container or on GitHub Codespaces. I include a .devcontainer folder with everything set up in the repo.
Set up your API key
We are talking about APIs here, so of course you need an API key. The good news is that it's free to sign up with Reka and get one! It's a 2-click process; more details are in the repo. The API key is then stored in a .env file, and each script loads environment variables using DotNetEnv.Env.Load(), so your key is picked up automatically. I went this way instead of using dotnet user-secrets because I thought it would be the way it would be done in a CI/CD pipeline or a quick script.
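As a sketch, the top of each script does something along these lines (the package version and the variable name are assumptions; the repo scripts are the reference):

#:package DotNetEnv@3.1.0

// Load the .env file so the key is available as an environment variable
DotNetEnv.Env.Load();
var REKA_API_KEY = Environment.GetEnvironmentVariable("REKA_API_KEY")
    ?? throw new InvalidOperationException("REKA_API_KEY is not set");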
Run the demos
From the csharp10-script folder, run any of these scripts. Each line is an alternative:
dotnet run 1-try-reka-openai.cs
dotnet run 2-try-reka-ms-ext.cs
dotnet run 3-try-reka-http.cs
You should see a short list of restaurant suggestions.
OpenAI SDK with a custom endpoint
Reka's API uses the OpenAI format; therefore, I thought of using the OpenAI NuGet package. To reference a package in a script, you use the #:package [package name]@[package version] directive at the top of the file. Here is an example:
#:package OpenAI@2.3.0
// ...
var baseUrl = "http://api.reka.ai/v1";
var openAiClient = new OpenAIClient(new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
    Endpoint = new Uri(baseUrl)
});
var client = openAiClient.GetChatClient("reka-flash-research");

string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";
var completion = await client.CompleteChatAsync(
    new List<ChatMessage>
    {
        new UserChatMessage(prompt)
    }
);
var generatedText = completion.Value.Content[0].Text;
Console.WriteLine($" Result: \n{generatedText}");
The rest of the code is pretty straightforward. You create a chat client, specify the Reka API URL, select the model, and then send a prompt. It works just as expected. However, not everything was perfect; before I share more about that part, let's talk about Microsoft.Extensions.AI.
Microsoft Extensions AI for OpenAI
Another common way to use LLMs in .NET is to use one of the Microsoft.Extensions.AI NuGet packages. In our case, Microsoft.Extensions.AI.OpenAI was used.
#:package Microsoft.Extensions.AI.OpenAI@9.8.0-preview.1.25412.6
// ....
var baseUrl = "http://api.reka.ai/v1";
IChatClient client = new ChatClient("reka-flash-research", new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
    Endpoint = new Uri(baseUrl)
}).AsIChatClient();

string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";
Console.WriteLine(await client.GetResponseAsync(prompt));
As you can see, the code is very similar. Create a chat client, set the URL, the model, and add your prompt, and it works just as well.
That's two ways to use the Reka API with different SDKs, but maybe you would prefer to go "SDKless"; let's see how to do that.
Raw HttpClient calling the REST API
Without any SDK to help, there are a few more lines of code to write, but it's still pretty straightforward. Let's see the code:
using var httpClient = new HttpClient();
var baseUrl = "http://api.reka.ai/v1/chat/completions";

var requestPayload = new
{
    model = "reka-flash-research",
    messages = new[]
    {
        new
        {
            role = "user",
            content = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in New York city"
        }
    }
};

// Serialize the anonymous payload to JSON before sending
var jsonPayload = JsonSerializer.Serialize(requestPayload);

using var request = new HttpRequestMessage(HttpMethod.Post, baseUrl);
request.Headers.Add("Authorization", $"Bearer {REKA_API_KEY}");
request.Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");

var response = await httpClient.SendAsync(request);
var responseContent = await response.Content.ReadAsStringAsync();

var jsonDocument = JsonDocument.Parse(responseContent);
var contentString = jsonDocument.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString();
Console.WriteLine(contentString);
So you create an HttpClient, prepare a request with the right headers and payload, send it, get the response, and parse the JSON to extract the text. In this case, you have to know the JSON structure of the response, but it follows the OpenAI format.
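For reference, the part of the OpenAI-format response that the code above navigates looks roughly like this (trimmed to the fields used):

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Here are three romantic restaurants..."
      }
    }
  ]
}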
What did I learn from this experiment?
I used VS Code while trying the script functionality. One thing that surprised me was that I didn't get any IntelliSense or autocompletion. I tried disabling the DevKit extension and changing the OmniSharp settings, but no luck. My guess is that it's because the feature is in preview, and it will work just fine in November 2025 when .NET 10 is released.
In this light environment, I encountered some issues where, for some reason, I couldn't use an https endpoint, so I had to use http. In the raw HttpClient script, I had some errors about Reflection not being available. It could be related to the preview or something else; I didn't investigate further.
For the most part, everything worked as expected. You can use C# code to quickly execute some tasks without any project scaffolding. It's a great way to try out the Reka API and see how it works.
What's Next?
While writing those scripts, I encountered multiple issues that aren't related to .NET but more about the SDKs when trying to do more advanced functionalities like optimization of the query and formatting the response output. Since it goes beyond the scope of this post, I will share my findings in a follow-up post. Stay tuned!
This week's post collects concise links and takeaways across .NET, AI, Docker, open source security, DevOps, and broader developer topics: from the .NET Conf call for content and Copilot prompts to Docker MCP tooling, container debugging tips, running .NET as WASM, and a fresh look at the 10x engineer idea.
Running .NET in the browser without Blazor (Andrew Lock) - I never thought about it, but it's true: we can execute .NET code as WASM without Blazor. This tutorial shows you all the code.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
This week, we're exploring a wide range of topics, from .NET 10 previews and A/B testing to the latest in Azure development and AI. Plus, a selection of insightful podcast episodes to keep you informed and inspired.
Docker Model Runner (DevOps and Docker Talk: Cloud Native Interviews and Tooling) - I tried the new model feature of Docker and had many questions. All of them were answered during this episode.
Michael Washington: The Nature Of Data - Episode 353 (Azure & DevOps Podcast) - Interesting discussion about data, and a bit more about a really cool project, Michael's Data warehouse, because sometimes we need something that runs locally.
Recently, someone asked me an interesting question: "Can GitHub Copilot or AI help me convert an application from one language to another?" My answer was a definitive yes! AI can not only help you write code in a new language, but it can also improve team collaboration and bridge the knowledge gap between developers who know different programming languages.
To demonstrate this capability, I decided to convert a COBOL application to Java—a perfect test case since I don't know either language well, which means I really needed Copilot to do the heavy lifting. All the code is available on GitHub.
The first step was setting up a proper development environment. I used a dev container and asked Copilot to help me build it. I also asked for recommendations on the best VS Code extensions for Java development. Within just a few minutes, I had a fully configured environment ready for Java development.
Choosing the Right Copilot Agent
When working with GitHub Copilot for code conversion, you have different modes to choose from:
Ask: Great for general questions (like asking about Java extensions)
Edit: Perfect for simple document editing (like modifying the generated code)
Agent: The powerhouse for complex tasks involving multiple files, imports, and structural changes
For code conversion projects, the Agent is your best friend. It can look at different source files, understand project structure, edit code, and even create new files on your behalf.
The Conversion Process
I used Claude 3.5 Sonnet for this conversion. Here's the simple prompt I used:
"Convert this hello business COBOL application into Java"
Copilot didn't just convert the code; it also provided detailed information about how to execute the Java application, which was invaluable since I had no Java experience.
The results varied depending on the AI model used (Claude, GPT, Gemini, etc.), but the core functionality remained consistent across different attempts. Since the original application was simple, I converted it multiple times using different prompts and models to test the consistency. Sometimes it generated a single file, other times it created multiple files: a main application and an Employee class (which wasn't in my original COBOL version). Sometimes it updated the Makefile to allow compilation and execution using make, while other times it provided instructions to use javac and java commands directly.
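For example, when it went the direct-command route, the instructions boiled down to something like this (file and class names are illustrative, not the repo's exact output):

# Compile the generated sources into out/ and run the main class
javac -d out src/EmployeeReport.java
java -cp out EmployeeReport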
This variability is expected with generative AI: results will differ between runs, but the core functionality remains reliable.
Real-World Challenges
Of course, the conversion wasn't perfect on the first try. For example, I encountered runtime errors when executing the application. The issue was with the data format—the original file used a flat file format with fixed length records (19 characters per record) and no line breaks.
I went back to Copilot, highlighted the error message from the terminal, and provided additional context about the 19-character record format. This iterative approach is key to successful AI-assisted conversion.
"It's not working as expected, check the error in #terminalSelection. The records have fixed length of 19 characters without line breaks. Adjust the code to handle this format"
The Results
After the iterative improvements, my Java application successfully:
Compiled without errors
Processed all employee records
Generated a report with employee data
Calculated total salary (a nice addition that wasn't in the original)
While the output format wasn't identical to the original COBOL version (missing leading zeros, different line spacing), the core functionality was preserved.
Video Demonstration
Watch the complete conversion process in action:
Best Practices for AI-Assisted Code Conversion
Based on this experience, here are my recommendations:
1. Start with Small Pieces
Don't try to convert thousands of lines at once. Break your conversion into manageable modules or functions.
2. Set Up Project Standards
Consider creating a .github folder at your project root with an instructions.md file containing:
Best practices for your target language
Patterns and tools to use
Specific versions and frameworks
Enterprise standards to follow
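A minimal sketch of what that instructions.md could contain (contents are illustrative, not from the repo):

# Conversion instructions
- Target language: Java 21, built with Maven
- Testing: JUnit 5
- Style: Google Java Style
- Keep the original record layouts and report formats identical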
3. Stay Involved in the Process
You're not just a spectator - you're an active participant. Review the changes, test the output, and provide feedback when things don't work as expected.
4. Iterate and Improve
Don't expect perfection on the first try. In my case, the converted application worked but produced slightly different output formatting. This is normal and expected; after all, you are converting between two different languages with different conventions and styles.
Can AI Really Help with Code Conversion?
Absolutely, yes! GitHub Copilot can significantly:
Speed up the conversion process
Help with syntax and language specific patterns
Provide guidance on running and compiling the target language
Bridge knowledge gaps between team members
Generate supporting files and documentation
However, remember that it's generative AI: results will vary between runs, and you shouldn't expect identical output every time.
Final Thoughts
GitHub Copilot is definitely a tool you need in your toolkit for conversion projects. It won't replace the need for human oversight and testing, but it will dramatically accelerate the process and help teams collaborate more effectively across different programming languages.
The key is to approach it as a collaborative process where AI does the heavy lifting while you provide guidance, context, and quality assurance. Start small, iterate often, and don't be afraid to ask for clarification or corrections when the output isn't quite right.
Have you tried using AI for code conversion? I'd love to hear about your experiences in the comments below! Visit c5m.ca/copilot to get started with GitHub Copilot.
Welcome to the 655th Reading Notes. This edition explores embedding Python in .NET, working with stacked git branches, and an introduction to cloud-native. Plus, a quick tip for the Azure Portal and using local AI for code reviews.
Introduction to Cloud Native Computing (TNS Staff) - Very complete and interesting article that is the perfect starting point for getting into cloud-native applications, covering what it is, the strategy, the architecture, everything.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
Testing has always been one of those tasks that developers know is essential but often find tedious. When I decided to add comprehensive unit tests to my NoteBookmark project, I thought: why not make this an experiment in AI-assisted development? What followed was a fascinating 4-hour journey that resulted in 88 unit tests, a complete CI/CD pipeline, and some valuable insights about working with AI coding assistants.
NoteBookmark is a .NET application built with C# that helps users manage and organize their reading notes and bookmarks. The project includes an API, a Blazor frontend, and uses Azure services for storage. You can check out the complete project on GitHub.
The Challenge: Starting from Zero
I'll be honest - it had been a while since I'd written comprehensive unit tests. Rather than diving in myself, I decided to see how different AI models would approach this task. My initial request was deliberately vague: "add a test project" without any other specifications.
Looking back, I realize I should have been more specific about which parts of the code I wanted covered. This would have made the review process easier and given me better control over the scope. But sometimes, the best learning comes from letting the AI surprise you.
The Great AI Model Comparison
GPT-4.1: Competent but Quiet
GPT-4.1 delivered decent results, but the experience felt somewhat mechanical. The code it generated was functional, but I found myself wanting more context. The explanations were minimal, and I often had to ask follow-up questions to understand the reasoning behind certain test approaches.
Gemini: The False Start
My experience with Gemini was... strange. Perhaps it was a glitch or an off day, but most of what was generated simply didn't work. I didn't persist with this model for long, as debugging AI-generated code that fundamentally doesn't function defeats the purpose of the exercise. Note that at the time of this writing, Gemini was still in preview, so I expect it to improve over time.
Claude Sonnet: The Clear Winner
This is where the magic happened. Claude Sonnet became my co-pilot of choice for this project. What set it apart wasn't just the quality of the code (though that was excellent), but the quality of the conversation. It felt like having a thoughtful colleague thinking out loud with me.
The explanations were clear and educational. When Claude suggested a particular testing approach, it would explain why. When it encountered a complex scenario, it would walk through its reasoning. I tried different versions of Claude Sonnet but didn't notice significant differences in results - they were all consistently good.
The Development Process: A 4-Hour Journey
Hour 1-2: Getting to Compilation
The first iteration couldn't compile. This wasn't surprising given the complexity of the codebase and the vague initial request. But here's where the AI collaboration really shined. Instead of manually debugging everything myself, I worked with Copilot to identify and fix issues iteratively.
We went through several rounds of:
Identify compilation errors
Discuss the best approach to fix them
Let the AI implement the fixes
Review and refine
After about 2 hours, we had a test project with 88 unit tests that compiled successfully. The AI had chosen xUnit as the testing framework, which I was happy with - it's a solid choice that I might not have picked myself if I was rusty on the current .NET testing landscape.
Hour 2.5-3.5: Making Tests Pass
Getting the tests to compile was one thing; getting them to pass was another challenge entirely. This phase taught me a lot about both my codebase and xUnit features I wasn't familiar with.
I relied heavily on the /explain feature during this phase. When tests failed, I'd ask Claude to explain what was happening and why. This was invaluable for understanding not just the immediate fix, but the underlying testing concepts.
One of those moments was learning about [InlineData(true)] and other xUnit data attributes. These weren't features I was familiar with, and having them explained in context made them immediately useful.
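For context, here's a tiny illustration of the pattern (hypothetical types, not NoteBookmark's actual tests):

using Xunit;

// Hypothetical model, only here to make the example self-contained
public record Note(string Title, bool IsRead);

public class NoteTests
{
    // [Theory] + [InlineData] runs the same test once per data row
    [Theory]
    [InlineData(true)]
    [InlineData(false)]
    public void Constructor_KeepsReadFlag(bool isRead)
    {
        var note = new Note("Reading Notes", isRead);
        Assert.Equal(isRead, note.IsRead);
    }
}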
Hour 3.5-4: Structure and Style
Once all tests were passing, I spent time ensuring I understood each test and requesting structural changes to match my preferences. This phase was crucial for taking ownership of the code. Just because AI wrote it doesn't mean it should remain a black box. Let's repeat this: Understanding the code is essential; just because AI wrote it doesn't mean it's good.
Beyond Testing: CI/CD Integration
With the tests complete, I asked Copilot to create a GitHub Actions workflow to run tests on every push to the main and v-next branches, plus PR reviews. Initially, it started modifying my existing workflow that takes care of the Azure deployment. I wanted a separate workflow for testing, so I interrupted it (it's nice that I wasn't "forced" to wait) and asked it to create a new one instead. The result was the running-unit-tests.yml workflow that worked perfectly on the first try.
This was genuinely surprising. CI/CD configurations often require tweaking, but the generated workflow handled:
Multi-version .NET setup
Dependency restoration
Building and testing
Test result reporting
Code coverage analysis
Artifact uploading
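For reference, a trimmed sketch of such a workflow (the actual running-unit-tests.yml in the repo is the source of truth; the versions here are assumptions):

name: Running Unit Tests

on:
  push:
    branches: [ main, v-next ]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet restore
      - run: dotnet build --no-restore
      - run: dotnet test --no-build --collect:"XPlat Code Coverage"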
The PR Enhancement Adventure
Here's where things got interesting. When I asked Copilot to enhance the workflow to show test results in PRs, it started adding components, then paused and asked if it could delete the current version and start from scratch.
I said yes, and I'm glad I did. The rebuilt version created beautiful PR comments showing:
Test results summary
Code coverage reports (which I didn't ask for but appreciated)
Detailed breakdowns
The Finishing Touches
No project is complete without proper status indicators. I added a test status badge to the README, giving anyone visiting the repository immediate visibility into the project's health.
Key Takeaways
What Worked Well
AI as a Learning Partner: Having Copilot explain testing concepts and xUnit features was like having a patient teacher
Iterative Refinement: The back-and-forth process felt natural and productive
Comprehensive Solutions: The AI didn't just write tests; it created a complete testing infrastructure
Quality Over Speed: While it took 4 hours, the result was thorough and well-structured
What I'd Do Differently
Be More Specific Initially: Starting with clearer scope would have streamlined the process
Set Testing Priorities: Identifying critical paths first would have been valuable
Plan for Visual Test Reports: Thinking about test result visualization from the start
Lessons About AI Collaboration
Model Choice Matters: The difference between AI models was significant
Conversation Quality Matters: Clear explanations make the collaboration more valuable
Trust but Verify: Understanding every piece of generated code is crucial
Embrace Iteration: The best results come from multiple refinement cycles
The Bigger Picture
This experiment reinforced my belief that AI coding assistants are most powerful when they're true collaborators rather than code generators. The value wasn't just in the 88 tests that were written, but in the learning that happened along the way.
For developers hesitant about AI assistance in testing: this isn't about replacing your testing skills, it's about augmenting them. The AI handles the boilerplate and suggests patterns, but you bring the domain knowledge and quality judgment.
Conclusion
Would I do this again? Absolutely. The combination of comprehensive test coverage, learning opportunities, and time efficiency made this a clear win. The 4 hours invested created not just tests, but a complete testing infrastructure that will pay dividends throughout the project's lifecycle.
If you're considering AI-assisted testing for your own projects, my advice is simple: start the conversation, be prepared to iterate, and don't be afraid to ask "why" at every step. The goal isn't just working code - it's understanding and owning that code.
The complete test suite and CI/CD pipeline are available in the NoteBookmark repository if you want to see the results of this AI collaboration in action.
Welcome to another edition of my reading notes! This week brings some fascinating insights into AI's real-world impact, exciting developments in .NET and containerization, plus practical tools for improving our development workflows.
From local AI-powered code reviews to Docker security hardening and the upcoming .NET 10 features, there's plenty to explore.
AI
The promise that wasn’t kept (Salma Alam-Naylor) - Interesting thoughts about AI and its impact on our work, but also on our life and the planet.
Inside GitHub: How we hardened our SAML implementation (Greg Ose, Taylor Reis) - Very interesting post that pulls back the curtain a little so we can see behind the scenes how this widely used authentication system works and has been updated.
Enhance productivity with AI + Remote Dev (Brigit Murtaugh, Christof Marti, Josh Spicer, Olivia Guzzardo McVicker) - I love dev container environments; they are so useful! And I also use the remote ones when I'm not on my dev device, so easy. Happy to see that Copilot will be right there with me.
This edition of Reading Notes covers the synergies between C# and Rust, open-source software licensing challenges, and securing the software supply chain. It also features a beginner’s guide to programming C# in Visual Studio Code. Finally, we delve into the evaluation process of AI models for GitHub Copilot.
Enjoy the read!
Programming
Why Every C# Developer Should Explore Rust (Chris Woodruff) - Even though I've heard of Rust, I don't think I've ever written a single line. This post gives a brief list of ideas where we could make a C# and Rust combo, and where to get started. Hmmm.
.NET OSS Projects: Better to Re-license or Die? (Aaron Stannard) - What are the options when a free OSS project changes its license? Nice post that provides an overview of a recent change like this.
How we evaluate AI models and LLMs for GitHub Copilot (Connor Adams, Klint Finley) - This is an interesting post that shares the methods used to test the different models before they can be included. Testing a model is different from testing a regular piece of code: what are they looking for, and what's important?
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
This edition of Reading Notes covers new AI tools, WSL updates, .NET 9 features, and debugging tips with GitHub Copilot. Plus, insightful podcasts on .NET Aspire, productivity tools, and frontend engineering.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
For this week's reading notes, I have some exciting blog posts and podcast episodes, covering topics including .NET scaffolding, Visual Studio updates, the Builder Pattern in C#, and OpenAPI in .NET 9. Plus, tips on validating identity with GitHub, improving Azure Identity, and podcast highlights on GitHub Universe and presentation skills.
The Builder Pattern in C# [2024] - This is a very nice post that clearly explains, with code, different scenarios where that pattern would make sense.
OpenAPI document generation in .NET 9 (Mike Kistler) - This is a great new feature in .NET 9 when building APIs. The post shares how to update existing APIs to get it. I'm looking forward to migrating my stuff.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.