This edition explores practical insights on leveraging GitHub Copilot for enhanced .NET testing, the rise of AI-driven documentation solutions, and the importance of security in coding agents. From dissecting Docker’s MCP servers to debating the merits of Minimal APIs, we cover a mix of .NET updates, developer workflows, and emerging best practices. Whether you’re refining build processes, optimizing codebases, or staying ahead of security trends, these notes offer a curated selection of ideas to spark your next project or refactor.
Ask AI from Anywhere: No GUI, No Heavy Clients, No Friction (Frank Boucher) - A cool little tool that you can call from any terminal (yes, it works via ssh too!) to ask AI a question or have it do web research for you. The post and the video explain how it works and where to find the code.
Programming
Reinventing how .NET Builds and Ships (Again) (Matt Mitchell) - This post is not a 30-second read; it's a detailed story that explains all the facts and how the build for that great new version 10 came together.
Ever wished you could ask AI from anywhere without needing an interface? Imagine just typing ? and your question in any terminal the moment it pops into your head, and getting the answer right away! In this post, I explain how I wrote a tiny shell script that turns this idea into reality, transforming the terminal into a universal AI client. You can query Reka, OpenAI, or a local Ollama model from any editor, tab, or pipeline—no GUI, no heavy clients, no friction.
Small, lightweight, and surprisingly powerful: once you make it part of your workflow, it becomes indispensable.
There is almost always a terminal within reach—embedded in your editor, sitting in a spare tab, or already where you live while building, debugging, and piping data around. So why break your flow to open a separate chat UI? I wanted to just type a single character (?) plus my question and get an answer right there. No window hopping. No heavy client.
How It Works
The trick is delightfully small: send a single JSON POST request to whichever AI provider you feel like (Reka, OpenAI, Ollama locally, etc.):
# Example: Reka
curl https://api.reka.ai/v1/chat \
  -H "X-Api-Key: <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What is the origin of thanksgiving?"
      }
    ],
    "model": "reka-core",
    "stream": false
  }'

# Example: Ollama local
curl http://127.0.0.1:11434/api/chat \
  -d '{
    "model": "llama3",
    "messages": [
      {
        "role": "user",
        "content": "What is the origin of thanksgiving?"
      }
    ],
    "stream": false
  }'
Once we get the response, we extract the answer field from it. A thin shell wrapper turns that into a universal “ask” verb for your terminal. Add a short alias (?) and you have the most minimalist AI client imaginable.
Let's go into the details
Let me walk you through the core script step-by-step using reka-chat.sh, so you can customize it the way you like. Maybe this is a good moment to mention that Reka has a free tier that's more than enough for this. Go grab your key—after all, it's free!
The script (reka-chat.sh) does four things:
Captures your question
Loads an API key from ~/.config/reka/api_key
Sends a JSON payload to the chat endpoint with curl.
Extracts the answer using jq for clean plain text.
1. Capture Your Question
This part of the script is a pure laziness hack. I wanted to save keystrokes by not requiring quotes when passing a question as an argument. So ? What is 32C in F works just as well as ? "What is 32C in F".
# Use the arguments if any were given, otherwise read from stdin
if [ $# -eq 0 ]; then
  if [ ! -t 0 ]; then
    # Nothing on the command line, but something is piped in: read the question from stdin
    QUERY="$(cat)"
  else
    # No question at all: nothing to do
    exit 1
  fi
else
  # Everything typed after the alias becomes the question (no quotes needed)
  QUERY="$*"
fi
2. Load Your API Key
If you're running Ollama locally you don't need any key, but for all other AI providers you do. I store mine in a locked-down file at ~/.config/reka/api_key, then read and trim trailing whitespace like this:
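Something like this does the job (a minimal sketch; the file path matches the one above, and the variable names are mine):

API_KEY_FILE="$HOME/.config/reka/api_key"
if [ ! -f "$API_KEY_FILE" ]; then
  echo "Missing API key file: $API_KEY_FILE" >&2
  exit 1
fi
# Read the key and strip any trailing whitespace or newline
API_KEY="$(tr -d '[:space:]' < "$API_KEY_FILE")"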
3. Send the Request and Extract the Answer
The script then sends the JSON payload to the chat endpoint with curl and, finally, parses the JSON response with jq to pull out just the answer text. If jq isn't installed, the raw response is displayed instead, but a formatted answer is much nicer. If you are customizing the script for another provider, you may need to adjust the JSON path. You can also add echo "$RESPONSE" >> data_sample.json to the script to log raw responses for tinkering.
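Here's a minimal sketch of those two steps, assuming the Reka endpoint shown earlier (the jq path below is a guess for illustration; check a logged response and adjust it for your provider):

# 3. Send the JSON payload with curl
# Note: a question containing double quotes would need proper escaping here
PAYLOAD="{\"model\": \"reka-core\", \"stream\": false, \"messages\": [{\"role\": \"user\", \"content\": \"$QUERY\"}]}"
RESPONSE="$(curl -s https://api.reka.ai/v1/chat \
  -H "X-Api-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD")"
# echo "$RESPONSE" >> data_sample.json   # uncomment to log raw responses for tinkering

# 4. Extract the answer with jq, or fall back to the raw response
if command -v jq >/dev/null 2>&1; then
  echo "$RESPONSE" | jq -r '.responses[0].message.content'
else
  echo "$RESPONSE"
fi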
Now that we have the script, make it executable with chmod +x reka-chat.sh, and let's add an alias to your shell config to make it super easy to use. Add one line to your .zshrc or .bashrc that looks like this:
alias \?="$REKA_CHAT_SCRIPT"
Because ? is a special character in the shell, we escape it with a backslash. After adding this line, reload your shell configuration with source ~/.zshrc or source ~/.bashrc, and you are all set!
The Result
Now you can ask questions directly from your terminal. Want to know the origin of Thanksgiving? Ask it like this:
? What is the origin of Thanksgiving
And if you prefer to keep the quotes, you do you!
Extra: Web research
I couldn't stop there! Reka also supports web research, which means it can fetch and read web pages to provide more informed answers. Following the same pattern described previously, I wrote a similar script called reka-research.sh that sends a request to Reka's research endpoint. This obviously takes a bit more time to answer, as it's making different web queries and processing them, but the results are often worth the wait—and they are up to date! I used the alias ?? for this one.
On the GitHub repository, you can find both scripts (reka-chat.sh and reka-research.sh) along with a script to create the aliases automatically. Feel free to customize them to fit your workflow and preferred AI provider. Enjoy the newfound superpower of instant AI access right from your terminal!
What's Next?
With this setup, the possibilities are endless. Reka supports questions related to audio and video, which could be interesting to explore next. The project is open source, so feel free to contribute or suggest improvements. You can also join the Reka community on Discord to share your experiences and learn from others.
Here’s a compact roundup of links and highlights I found interesting this week. You’ll find updates on Git, Chrome DevTools tips, C# 14 and .NET 10 coverage, Blazor upgrade notes, a practical Copilot + Visual Studio guide, plus a few useful tools and AI announcements. Enjoy, and tell me which item you want me to explore next.
Highlights from Git 2.52 (Taylor Blau) - I'm probably repeating myself, but I'm always surprised by how little I know about this fantastic tool. This post shares the latest updates; it's all very impressive.
Introducing C# 14 - .NET Blog (Bill Wagner) - You would think that at version 14, there's not much to add or change? Well, you couldn't be more wrong; be prepared for a long and detailed post.
Blazor and .NET 10: Breaking Changes, Fixes, and New Features (Simon Foster) - There's always a little risk when upgrading to a new version, and of course .NET 10 could bring some. In this post, the author shares a problem he had and the link where he found the breaking changes. Good read.
Release v3.2.0 · basecamp/omarchy (basecamp team) - Nice! A lot of fixes and some nice new features: Ghostty is the new default (I was already using it), new key bindings (looking forward to trying the workspace-related ones), and a new project named: try... Ooh! I love it!
Miscellaneous
ZoomIt v9.21 | Microsoft Community Hub (Alex Mihaiuc) - If you are on Windows, please do yourself a favour and try ZoomIt. Since I switched to Mac and Linux, it's the thing that I miss the most.
Ever wished you could ask a question and have the answer come only from a handful of trusted documentation sites—no random blogs, no stale forum posts? That’s exactly what the Check-In Doc MCP Server does. It’s a lightweight Model Context Protocol (MCP) server you can run locally (or host) to funnel questions to selected documentation domains and get a clean AI-generated answer back.
What It Is
The project (GitHub: https://github.com/fboucher/check-in-doc-mcp) is a Dockerized MCP server that:
Accepts a user question.
Calls the Reka AI Research API with constraints (only allowed domains).
Returns a synthesized answer based on live documentation retrieval.
You control which sites are searchable by passing a comma-separated list of domains (e.g. docs.reka.ai,docs.github.com). That keeps results focused, reliable, and relevant.
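For example, running the container directly looks roughly like this (a sketch; the environment variable names match the mcp.json configuration shown later, and the domains and key are placeholders):

docker run -i --rm \
  -e ALLOWED_DOMAINS="docs.reka.ai,docs.github.com" \
  -e APIKEY="<your-reka-api-key>" \
  fboucher/check-in-doc-mcp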
What Is the Reka AI Research API?
Reka AI’s Research API lets you blend language model reasoning with targeted, on‑the‑fly web/document retrieval. Instead of a model hallucinating an answer from static training data, it can:
Perform limited domain‑scoped web searches.
Pull fresh snippets.
Integrate them into a structured response.
In this project, we use the research feature with a web_search block specifying:
allowed_domains: Only the documentation sites you trust.
max_uses: Caps how many retrieval calls it makes per query (controls cost & latency).
Details used here:
Model: reka-flash-research
Endpoint: https://api.reka.ai/v1/chat/completions
Auth: Bearer API key (generated from the Reka dashboard: https://link.reka.ai/free)
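Put together, the raw request the server makes looks roughly like this (a sketch assembled from the payload built in the code below; the question and domain here are just examples):

curl https://api.reka.ai/v1/chat/completions \
  -H "Authorization: Bearer <APIKEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "reka-flash-research",
    "messages": [{ "role": "user", "content": "How do I configure a fine-grained token?" }],
    "research": {
      "web_search": {
        "allowed_domains": ["docs.github.com"],
        "max_uses": 4
      }
    }
  }'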
How It Works Internally
The core logic lives in ResearchService (src/Domain/ResearchService.cs). Simplified flow:
Initialization
Stores the API key + array of allowed domains, sets model & endpoint, logs a safe startup message.
Build Request Payload
The CheckInDoc(string question) method creates a JSON payload:
var requestPayload = new {
    model,
    messages = new[] { new { role = "user", content = question } },
    research = new {
        web_search = new {
            allowed_domains = allowedDomains,
            max_uses = 4
        }
    }
};
Send Request
Creates an HttpRequestMessage (POST), adds the Authorization: Bearer <APIKEY> header, and sends the JSON to Reka.
Parse Response
Deserializes into a RekaResponse domain object, returns the first answer string.
Adding It to VS Code (MCP Extension)
You can run it as a Docker-based MCP server. Two simple approaches:
Option 1: Via “Add MCP Server” UI
In VS Code (with MCP extension), click Add MCP Server.
Choose type: Docker image.
Image name: fboucher/check-in-doc-mcp.
Enter allowed domains and your Reka API key when prompted.
Option 2: Via mcp.json (Recommended)
Alternatively, you can manually configure it in your mcp.json file. This will make sure your API key isn't displayed in plain text. Add or merge this configuration:
{
"servers": {
"check-in-docs": {
"type": "stdio",
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"ALLOWED_DOMAINS=${input:allowed_domains}
",
"-e",
"APIKEY=${input:apikey}",
"fboucher/check-in-doc-mcp"
]
}
},
"inputs": [
{
"id": "allowed_domains",
"type": "promptString",
"description": "Enter the comma-separated list of documentation domains to allow (e.g. docs.reka.ai,docs.github.com):"
},
{
"id": "apikey",
"type": "promptString",
"password": true,
"description": "Enter your Reka Platform API key:"
}
]
}
How to Use It
To use it, ask Check In Doc something, or use the SearchInDoc tool directly in your MCP-enabled environment. Just ask a question, and it will search only the specified documentation domains.
Final Thoughts
It’s intentionally simple—no giant orchestration layer. Just a clean bridge between a question, curated domains, and a research-enabled model. Sometimes that’s all you need to get focused, trustworthy answers.
If this sparks an idea, clone it and adapt away. If you improve it (citations, richer error handling, multi-turn context)—send a PR!
This week: Cake v6.0.0 is out, Docker Desktop adds helpful debugging tools, and .NET 10 brings a ton of changes worth exploring. Plus some thoughts on working with AI coding assistants and a great cybersecurity podcast.
Cake v6.0.0 released - Great news! I will have to upgrade my pipeline. Hopefully, the upgrade will be smooth.
Docker Desktop 4.50 Release (Deanna Sparks) - Nice update, and oh wow! I'm looking forward to trying that debug feature; that's great news.
Programming
Get Ready for .NET Conf 2025! (Jon Galloway) - A 3-day event all about .NET. This post shares what is happening each day, so it's a good way to plan your listening.
Announcing .NET 10 - .NET Blog (.NET team) - There are so many changes. This post lists a lot of the things that change with .NET 10, and I'm really looking forward to watching the videos and reading more specific articles to see demos and try it myself.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
I've spent most of my career building software in C# and .NET, and only used Python in IoT projects. When I wanted to build a fun project, an app that uses AI to roast videos, I knew it was the perfect opportunity to finally dig into Python web development.
The question was: where do I start? I hopped into a brainstorming session with Reka's AI chat and asked about options for building web apps in Python. It mentioned Flask, and I remembered friends talking about it being lightweight and perfect for getting started. That sounded right.
In this post, I share how I built "Roast My Life," a Flask app using the Reka Vision API.
The Vision (Pun Intended)
The app needed three core things:
List videos: Show me what videos are in my collection
Upload videos: Let me add new ones via URL
Roast a video: Send a selected video to an AI and get back some hilarious commentary
See it in action
Part 1: Getting Started - Environment Setup
The first hurdle was always going to be environment setup. I'm serious about keeping my Python projects isolated, so I did the standard dance.
Before even touching dependencies, I scaffolded a super bare-bones Flask app. One thing I enjoy in C# is that all dependencies are restored in one shot, so I like doing the same with my Python projects using a requirements.txt instead of installing things ad hoc (pip install flask, then freezing later).
Dropping that file in first means the setup snippet below is deterministic. When you run pip install -r requirements.txt, Flask spins up using the exact versions I tested with, and you won't accidentally grab a breaking major update.
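For this project the file stays tiny (the pinned versions below are just illustrative; pin whatever versions you actually tested with):

flask==3.0.3
requests==2.32.3
python-dotenv==1.0.1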
Here's the shell dance that activates the virtual environment and installs everything:
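(Roughly like this, assuming the virtual environment lives in a .venv folder at the project root:)

# Create and activate an isolated virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install everything listed in requirements.txt in one shot
pip install -r requirements.txt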
To get that API key, I visited the Reka Platform and grabbed a free one. Seriously, a free key for playing with AI vision APIs? I was in.
With python app.py, I fired up the Flask development server and opened http://127.0.0.1:5000 in my browser. The UI was there, but... it was dead. Nothing worked.
Perfect. Time to build.
The Backend: Flask Routing and API Integration
Coming from ASP.NET Core's controller-based routing and Blazor, Flask's decorator-based approach felt just like home. All the code goes in the app.py file, and each route is defined with a simple decorator. But first things first: loading configuration from the .env file using python-dotenv:
from flask import Flask, request, jsonify
import requests
import os
from dotenv import load_dotenv
app = Flask(__name__)
# Load environment variables (like appsettings.json)
load_dotenv()
api_key = os.environ.get('API_KEY')
base_url = os.environ.get('BASE_URL')
All the imported packages are the same ones that need to be listed in requirements.txt. We then retrieve the API key and base URL from environment variables, just like in .NET Core.
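For reference, the .env file only needs two entries (the names match what the code reads; the values below are placeholders, so use your own key and the base URL from the Reka Vision documentation):

API_KEY=<your-reka-api-key>
BASE_URL=<reka-vision-api-base-url>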
Now, to be able to get roasted, we first need to upload a video to the Reka Vision API. Here's the code—I'll go over some details after.
@app.route('/api/upload_video', methods=['POST'])
def upload_video():
    """Upload a video to Reka Vision API"""
    data = request.get_json() or {}
    video_name = data.get('video_name', '').strip()
    video_url = data.get('video_url', '').strip()

    if not video_name or not video_url:
        return jsonify({"error": "Both video_name and video_url are required"}), 400
    if not api_key:
        return jsonify({"error": "API key not configured"}), 500

    try:
        response = requests.post(
            f"{base_url.rstrip('/')}/videos/upload",
            headers={"X-Api-Key": api_key},
            data={
                'video_name': video_name,
                'index': 'true',  # Required: tells Reka to process the video
                'video_url': video_url
            },
            timeout=30
        )
        response_data = response.json() if response.ok else {}

        if response.ok:
            video_id = response_data.get('video_id', 'unknown')
            return jsonify({
                "success": True,
                "video_id": video_id,
                "message": "Video uploaded successfully"
            })
        else:
            error_msg = response_data.get('error', f"HTTP {response.status_code}")
            return jsonify({"success": False, "error": error_msg}), response.status_code
    except requests.Timeout:
        return jsonify({"success": False, "error": "Request timed out"}), 504
    except Exception as e:
        return jsonify({"success": False, "error": f"Upload failed: {str(e)}"}), 500
Once the information from the frontend is validated, we make a POST request to the Reka Vision API's /videos/upload endpoint. The parameters are sent as form data, and we include the API key in the headers for authentication. Here I was using URLs to upload videos, but you can also upload local files by adjusting the request accordingly. As you can see, it's pretty straightforward, and the documentation from Reka made it easy to understand what was needed.
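If you want to poke at that endpoint outside the app, the same upload can be reproduced with plain curl (a sketch; the base URL and video values are placeholders matching what the route above sends):

curl "$BASE_URL/videos/upload" \
  -H "X-Api-Key: $API_KEY" \
  -d "video_name=my-demo-video" \
  -d "index=true" \
  -d "video_url=https://example.com/video.mp4"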
The Magic: Sending Roast Requests to Reka Vision API
Here's where things get interesting. Once a video is uploaded, we can ask the AI to analyze it and generate content. The Reka Vision API supports conversational queries about video content:
from typing import Any, Dict  # needed for the type hints below

def call_reka_vision_qa(video_id: str) -> Dict[str, Any]:
    """Call the Reka Video QA API to generate a roast"""
    headers = {'X-Api-Key': api_key} if api_key else {}
    payload = {
        "video_id": video_id,
        "messages": [
            {
                "role": "user",
                "content": "Write a funny and gentle roast about the person, or the voice in this video. Reply in markdown format."
            }
        ]
    }
    try:
        resp = requests.post(
            f"{base_url}/qa/chat",
            headers=headers,
            json=payload,
            timeout=30
        )
        data = resp.json() if resp.ok else {"error": f"HTTP {resp.status_code}"}
        if not resp.ok and 'error' not in data:
            data['error'] = f"HTTP {resp.status_code} calling chat endpoint"
        return data
    except requests.Timeout:
        return {"error": "Request to chat API timed out"}
    except Exception as e:
        return {"error": f"Chat API call failed: {e}"}
Here we pass the video ID and a prompt asking for a "funny and gentle roast." The API responds with AI-generated content, which we can then send back to the frontend for display. I try to give more "freedom" to the AI by asking it to reply in markdown format, which makes the output more engaging.
What really stood out to me was how approachable the Reka Vision API is. You don't need any special SDK—just the requests library making standard HTTP calls. And honestly, it doesn't matter what language you're used to; an HTTP call is pretty much always simple to do. Whether you're coming from .NET, Python, JavaScript, or anything else, you're just sending JSON and getting JSON back.
Authentication is refreshingly straightforward: just pop your API key in the header and you're good to go. No complex SDKs, no multi-step authentication flows, no wrestling with binary data streams. The conversational interface lets you ask questions in natural language, and you get back structured JSON responses with clear fields.
One thing worth noting: in this example, the videos are pre-uploaded and indexed, which means the responses come back fast. But here's the impressive part—the AI actually looks at the video content. It's not just reading a transcript or metadata; it's genuinely analyzing the visual elements. That's what makes the roasts so spot-on and contextual.
Final Thoughts
The Reka Vision API itself deserves credit for making video AI accessible. No complicated SDKs, no multi-GB model downloads, no GPU requirements. Just simple HTTP requests and powerful AI capabilities. I'm not saying I'm switching to Python full-time, but expect to see me sharing more Python projects in the future!
This week’s notes focus on where AI meets everyday development: Copilot and Azure for tighter, faster workflows, a thoughtful overhaul of Aspire’s deploy CLI, and a hands‑on look at building MCP servers in C#. Security threads through it all with practical DevSecOps and Shadow IT reminders plus podcast picks on teaching, acronyms, and tackling imposter syndrome.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
Welcome to this week's reading notes! We're exploring topics that might seem refreshingly old-school at first: developer experience improvements, cross-platform packaging, logging strategies, and even podcast succession planning. You know, the kind of practical stuff that actually keeps our projects running. Though I should mention there's a conversation about local AI models tucked in there too, because apparently it's still 2025. Sometimes the best way forward is making sure our foundations are solid.
482: 1 Day Apps (Merge Conflict) - My first thought was 1 Day? Yeah right! lol. Cool episode sharing their journey building those types of apps.
Local AI Models with Joe Finney (.NET Rocks!) - Nice episode chatting about local AI. Why would you do that, what are your options, yada yada.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
From debugging Docker builds to refining your .NET setup, this week’s Reading Notes delivers a sharp mix of practical dev tips and forward-looking tech insights. We revisit jQuery’s place in today’s web stack, explore AI-enhancing MCP servers, and spotlight open-source projects shaping tomorrow’s tools. Plus, PowerToys gets a sleek upgrade to streamline your Windows workflow.
Let’s check out the ideas and updates that keep your skills fresh and your systems humming.
Programming
A quick look at Dev Tunnels (Mike Irving) - This post shares a very cool feature where you can make your local host available to someone else to reproduce a bug or try an API without having to deploy. Pretty cool.
6 Steps for Setting Up a New .NET Project the Right Way (Milan Jovanović) - This nice post provides some simple best practices. It helps to keep the code consistent and efficient when the solution contains multiple projects or when multiple developers are involved.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
This edition dives into the intersection of AI and development, exploring tools like GitHub Copilot’s AGENTS.md and the MCP Toolkit for automations, alongside .NET 10.0’s performance gains and OpenAI’s recent updates. Whether you’re optimizing serverless APIs with AWS Lambda or mastering the Web Animation API, this post highlights breakthroughs in code efficiency, model customization, and cloud innovation. Dive into these thought-provoking reads to stay ahead in a rapidly changing world.
Add MCP Servers to Claude Code with MCP Toolkit (Ajeet Singh Raina) - Very cool way to create automations. I have used a lot of low-code and different alternatives to create automation workflows in the past, but this MCP server interactivity is pretty awesome.
Low-Rank Adaptation (LoRA) Explained (Ignasi Lopez Luna) - Nice post that shows how we can specialize a model ourselves for a very narrow, specific topic
The Web Animation API (Christian Nwamba) - It's the first time I've read about this Web Animation API. Pretty cool, even if we need to be careful; I think the precision it offers could be very interesting for some animations.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
This week explores the synergy between Dapr and WebAssembly for modern microservices, highlights the transformative potential of Wasm 3.0 for .NET developers, and delves into best practices for structuring Rust web services. In the AI realm, we examine the emergence of developer-friendly AI frameworks like Microsoft’s Agent Framework and Google’s Jules Tools, which bridge AI capabilities directly into terminals and workflows. We also look at AI obfuscation techniques and their implications, alongside updates on Perplexity’s free Comet AI browser and its new background assistant. Whether you’re building scalable systems, optimizing code, or integrating cutting-edge AI tools, this post offers a snapshot of trends shaping tech today.
Obfuscation for AI: How it Works, Best Practices, and Metrics – PreEmptive (preemptive) - It's been a while since I last used obfuscation in my code because I'm doing demos most of the time, but it has always fascinated me. I feel I should try it with AI; it looks very interesting and powerful. This post shares a lot about what's possible, the risks, and so much more.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.