
Ask AI from Anywhere: No GUI, No Heavy Clients, No Friction

Ever wished you could ask AI from anywhere without needing an interface? Imagine just typing ? and your question in any terminal the moment it pops into your head, and getting the answer right away! In this post, I explain how I wrote a tiny shell script that turns this idea into reality, transforming the terminal into a universal AI client. You can query Reka, OpenAI, or a local Ollama model from any editor, tab, or pipeline—no GUI, no heavy clients, no friction.

Small, lightweight, and surprisingly powerful: once you make it part of your workflow, it becomes indispensable.

💡 All the code scripts are available at: https://github.com/reka-ai/terminal-tools


The Core Idea

There is almost always a terminal within reach—embedded in your editor, sitting in a spare tab, or already where you live while building, debugging, and piping data around. So why break your flow to open a separate chat UI? I wanted to just type a single character (?) plus my question and get an answer right there. No window hopping. No heavy client.

How It Works

The trick is delightfully small: send a single JSON POST request to whichever AI provider you feel like (Reka, OpenAI, Ollama locally, etc.):

# Example: Reka
curl https://api.reka.ai/v1/chat \
     -H "X-Api-Key: <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of Thanksgiving?"
             }
           ],
           "model": "reka-core",
           "stream": false
         }'
# Example: Ollama local
curl http://127.0.0.1:11434/api/chat \
     -d '{
           "model": "llama3",
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of Thanksgiving?"
             }
           ],
           "stream": false
         }'

Once we get the response, we extract the answer field from it. A thin shell wrapper turns that into a universal “ask” verb for your terminal. Add a short alias (?) and you have the most minimalist AI client imaginable.

Let's Go Into the Details

Let me walk you through the core script step-by-step using reka-chat.sh, so you can customize it the way you like. Maybe this is a good moment to mention that Reka has a free tier that's more than enough for this. Go grab your key—after all, it's free!

The script (reka-chat.sh) does four things:

  1. Captures your question
  2. Loads an API key from ~/.config/reka/api_key
  3. Sends a JSON payload to the chat endpoint with curl
  4. Extracts the answer using jq for clean plain text

1. Capture Your Question

This part of the script is a pure laziness hack. I wanted to save keystrokes by not requiring quotes when passing a question as an argument. So ? What is 32C in F works just as well as ? "What is 32C in F".

# No arguments? Read the question from stdin (handy for piping);
# otherwise treat every argument as part of the question, no quotes needed.
if [ $# -eq 0 ]; then
    if [ ! -t 0 ]; then
        QUERY="$(cat)"
    else
        exit 1
    fi
else
    QUERY="$*"
fi

2. Load Your API Key

If you're running Ollama locally you don't need any key, but for all other AI providers you do. I store mine in a locked-down file at ~/.config/reka/api_key, then read it and strip any whitespace like this:

API_KEY_FILE="$HOME/.config/reka/api_key"
API_KEY=$(cat "$API_KEY_FILE" | tr -d '[:space:]')
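
If you haven't created that file yet, here's a quick way to set it up with owner-only permissions (the path matches what the script reads; the key value is obviously yours):

# key file the script reads; chmod 600 keeps it readable by you only
mkdir -p ~/.config/reka
printf '%s' 'YOUR_REKA_API_KEY' > ~/.config/reka/api_key   # paste your real key here
chmod 600 ~/.config/reka/api_key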

3. Send The JSON Payload

Building the JSON payload is the heart of the script: it brings together the API_ENDPOINT, the API_KEY, and, obviously, our QUERY. Here's how I do it for Reka:

RESPONSE=$(curl -s -X POST "$API_ENDPOINT" \
     -H "X-Api-Key: $API_KEY" \
     -H "Content-Type: application/json" \
     -d "{
  \"messages\": [
    {
      \"role\": \"user\",
      \"content\": $(echo "$QUERY" | jq -R -s .)
    }
  ],
  \"model\": \"reka-core\",
  \"stream\": false
}")

4. Extract The Answer

Finally, we parse the JSON response with jq to pull out just the answer text. If jq isn't installed we display the raw response, but a formatted answer is much nicer. If you are customizing for another provider, you may need to adjust the JSON path here. You can add echo "$RESPONSE" >> data_sample.json to the script to log raw responses for tinkering.

With Reka, the response looks like this:

{
    "id": "cb7c371b-3a7b-48d2-829d-70ffacf565c6",
    "model": "reka-core",
    "usage": {
        "input_tokens": 16,
        "output_tokens": 460,
        "reasoning_tokens": 0
    },
    "responses": [
        {
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": " The origin of Thanksgiving ..."
            }
        }
    ]
}

The value we are looking for and want to display is the `content` field inside `responses[0].message`. Using `jq`, we do:

echo "$RESPONSE" | jq -r '.responses[0].message.content // .error // "Error: Unexpected response format"'

Putting It All Together

Now that we have the script, make it executable with chmod +x reka-chat.sh, and let's add an alias to your shell config to make it super easy to use. Add one line to your .zshrc or .bashrc that looks like this:

alias \?="/path/to/reka-chat.sh"

Because ? is a special character in the shell, we escape it with a backslash. After adding this line, reload your shell configuration with source ~/.zshrc or source ~/.bashrc, and you are all set!

The Result

Now you can ask questions directly from your terminal. Wanna know the origin of Thanksgiving? Ask it like this:

? What is the origin of Thanksgiving

And if you'd rather keep the quotes, you do you!
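
Because the script also reads stdin when no arguments are given, you can pipe a whole blob of text in as the question. A quick sketch (build-error.log is just a stand-in for whatever you want to feed it):

# With no arguments, the script takes everything on stdin as the question
{ echo "Explain this shell error:"; cat build-error.log; } | ?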

Extra: Web research

I couldn't stop there! Reka also supports web research, which means it can fetch and read web pages to provide more informed answers. Following the same pattern described previously, I wrote a similar script called reka-research.sh that sends a request to Reka's research endpoint. This obviously takes a bit more time to answer, as it's making different web queries and processing them, but the results are often worth the wait—and they are up to date! I used the alias ?? for this one.
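
Setting up the second alias follows the exact same pattern as the first; only the script path changes (placeholder path shown):

alias \?\?="/path/to/reka-research.sh"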

On the GitHub repository, you can find both scripts (reka-chat.sh and reka-research.sh) along with a script to create the aliases automatically. Feel free to customize them to fit your workflow and preferred AI provider. Enjoy the newfound superpower of instant AI access right from your terminal!

What's Next?

With this setup, the possibilities are endless. Reka supports questions related to audio and video, which could be interesting to explore next. The project is open source, so feel free to contribute or suggest improvements. You can also join the Reka community on Discord to share your experiences and learn from others.


Check-In Doc MCP Server: A Handy Way to Search Only the Docs You Trust

Ever wished you could ask a question and have the answer come only from a handful of trusted documentation sites—no random blogs, no stale forum posts? That’s exactly what the Check-In Doc MCP Server does. It’s a lightweight Model Context Protocol (MCP) server you can run locally (or host) to funnel questions to selected documentation domains and get a clean AI-generated answer back.

What It Is

The project (GitHub: https://github.com/fboucher/check-in-doc-mcp) is a Dockerized MCP server that:

  • Accepts a user question.
  • Calls the Reka AI Research API with constraints (only allowed domains).
  • Returns a synthesized answer based on live documentation retrieval.

You control which sites are searchable by passing a comma‑separated list of domains (e.g. docs.reka.ai,docs.github.com). That keeps results focused, reliable, and relevant.

What Is the Reka AI Research API?

Reka AI’s Research API lets you blend language model reasoning with targeted, on‑the‑fly web/document retrieval. Instead of a model hallucinating an answer from static training data, it can:

  • Perform limited domain‑scoped web searches.
  • Pull fresh snippets.
  • Integrate them into a structured response.

In this project, we use the research feature with a web_search block specifying:

  • allowed_domains: Only the documentation sites you trust.
  • max_uses: Caps how many retrieval calls it makes per query (controls cost & latency).

Details used here:

  • Model: reka-flash-research
  • Endpoint: https://api.reka.ai/v1/chat/completions
  • Auth: Bearer API key (generated from the Reka dashboard: https://link.reka.ai/free)
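
To make this concrete, here's roughly what the raw request looks like as a curl call, using the same payload shape ResearchService builds below (the question and domains are just examples):

curl -s -X POST "https://api.reka.ai/v1/chat/completions" \
  -H "Authorization: Bearer $REKA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "reka-flash-research",
    "messages": [
      { "role": "user", "content": "How do I create a short URL?" }
    ],
    "research": {
      "web_search": {
        "allowed_domains": ["docs.reka.ai", "docs.github.com"],
        "max_uses": 4
      }
    }
  }'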

How It Works Internally

The core logic lives in ResearchService (src/Domain/ResearchService.cs). Simplified flow:

  1. Initialization
    Stores the API key + array of allowed domains, sets model & endpoint, logs a safe startup message.

  2. Build Request Payload
    The CheckInDoc(string question) method creates a JSON payload:

    var requestPayload = new {
      model,
      messages = new[] { new { role = "user", content = question } },
      research = new {
        web_search = new {
          allowed_domains = allowedDomains,
          max_uses = 4
        }
      }
    };
    
  3. Send Request
    Creates an HttpRequestMessage (POST), adds Authorization: Bearer <APIKEY>, and sends the JSON to Reka.

  4. Parse Response
    Deserializes into a RekaResponse domain object, returns the first answer string.

Adding It to VS Code (MCP Extension)

You can run it as a Docker-based MCP server. Two simple approaches:

Option 1: Via “Add MCP Server” UI

  1. In VS Code (with MCP extension), click Add MCP Server.
  2. Choose type: Docker image.
  3. Image name: fboucher/check-in-doc-mcp.
  4. Enter allowed domains and your Reka API key when prompted.
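
Either way, if you want to smoke-test the container outside VS Code first, the equivalent docker run (same image and environment variables the mcp.json below passes) would be:

docker run -i --rm \
  -e ALLOWED_DOMAINS="docs.reka.ai,docs.github.com" \
  -e APIKEY="<your-reka-api-key>" \
  fboucher/check-in-doc-mcp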

Option 2: Via mcp.json (Recommended)

Alternatively, you can manually configure it in your mcp.json file. This will make sure your API key isn't displayed in plain text. Add or merge this configuration:

{
  "servers": {
    "check-in-docs": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "ALLOWED_DOMAINS=${input:allowed_domains}
        ",
        "-e",
        "APIKEY=${input:apikey}",
        "fboucher/check-in-doc-mcp"
      ]
    }
  },
  "inputs": [
    {
      "id": "allowed_domains",
      "type": "promptString",
      "description": "Enter the comma-separated list of documentation domains to allow (e.g. docs.reka.ai,docs.github.com):"
    },
    {
      "id": "apikey",
      "type": "promptString",
      "password": true,
      "description": "Enter your Reka Platform API key:"
    }
  ]
}

How to Use It

To use it, ask to "check in doc" something, or call the SearchInDoc tool directly in your MCP-enabled environment. Just ask a question, and it will search only the specified documentation domains.

Final Thoughts

It’s intentionally simple—no giant orchestration layer. Just a clean bridge between a question, curated domains, and a research-enabled model. Sometimes that’s all you need to get focused, trustworthy answers.

If this sparks an idea, clone it and adapt away. If you improve it (citations, richer error handling, multi-turn context)—send a PR!

Reading Notes #671

From debugging Docker builds to refining your .NET setup, this week’s Reading Notes delivers a sharp mix of practical dev tips and forward-looking tech insights. We revisit jQuery’s place in today’s web stack, explore AI-enhancing MCP servers, and spotlight open-source projects shaping tomorrow’s tools. Plus, PowerToys gets a sleek upgrade to streamline your Windows workflow. 

Let’s check out the ideas and updates that keep your skills fresh and your systems humming.

Programming

AI

Miscellaneous

Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.

If you have interesting content, share it!

~frank

Reading Notes #665

In this edition, we explore modern development's evolving landscape. From Microsoft's .NET Aspire simplifying distributed applications to AI security considerations, Git workflow optimizations, and backlog management strategies, there's something here to spark your next breakthrough.


The tech world never sleeps, and neither does innovation. Let's explore what caught my attention this week and might just spark your next big idea or solve that problem you've been wrestling with.

Programming


AI


Open Source


Podcast


~frank

Reading Notes #661

This week's post collects concise links and takeaways across .NET, AI, Docker, open-source security, DevOps, and broader developer topics: from the .NET Conf call for content and Copilot prompts to Docker MCP tooling, container debugging tips, running .NET as WASM, and a fresh look at the 10x engineer idea.


Suggestion of the week

AI

Open Source

DevOps

Programming

Miscellaneous


Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.

If you have interesting content, share it!

~frank

Reading Notes #659

This week's reading notes cover a variety of insightful topics, from enhancing your development environment with dev containers on Windows to prioritizing open-source bugs effectively. You'll also find helpful posts on integrating MFA into your login process, exploring RavenDB's vector search capabilities, and understanding the differences between Ask Mode and Agent Mode in Visual Studio.

Happy reading!

A wild turkey in my driveway!?

Suggestion of the week


Databases


Programming

  • Why You Should Incorporate MFA into Your Login Process (Suzanne Scacca) - You think the answer is simple? Think again. A nice post that explains the difference between 2FA and MFA, and why you should or shouldn't implement one of them.

  • Aspire Dashboard (Joseph Guadagno) - A great deep dive into the Aspire dashboard; learn about all the features packed inside it.


Open Source

  • How I Prioritize OSS Bugs (jeremydmiller) - A very instructive post on a real-life issue. It's harder than people think to prioritize. And it may help you write better bug reports...

AI


Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week. 

If you have interesting content, share it! 

~frank

Reading Notes #657

a rocky path ends at the shore of a lake
This week's collection of interesting articles and resources covers AI development, DevOps practices, and open source tools. From GitHub Copilot customization to local AI deployments and containerization best practices, here are the highlights worth your attention.

AI

DevOps

  • Local Deploy with Bicep (Sam Cogan) - A perfect short story; Sam explains why the hell Bicep can now deploy locally, and how to do it.

Open Source

  • Introducing OpenCLI (Patrik Svensson) - A standard that describes CLI so both humans and agents can understand how it works. Love it!

~frank


Reading Notes #655

Welcome to the 655th Reading Notes. This edition explores embedding Python in .NET, working with stacked git branches, and an introduction to cloud-native. Plus, a quick tip for the Azure Portal and using local AI for code reviews. 

a kayak on the water with a tree at the horizon

Open Source

Programming

Cloud

AI


Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week. 

If you have interesting content, share it!


~frank


Reading Notes #654

Welcome to another edition of my reading notes! This week, I’ve gathered a selection of insightful articles and resources covering topics like AI, cloud security, open source, and developer productivity. Whether you’re interested in best practices, new tools, or thought-provoking perspectives, there’s something here for everyone. 

Dive in and enjoy the highlights!

Suggestion of the week

  • Copilot, The Good Parts: Efficiency (Rob Conery) - I love that post; it's so true! There are good and bad ways to use any tool. And I personally would really like to see Rob build his stuff. Let him know if you think like me.

Programming

Open Source

Databases

Miscellaneous


~frank


Reading Notes #650

It's time for another edition of Reading Notes! This week brings exciting developments in the open source world, with major announcements from Microsoft making WSL and VS Code's AI features open source. We've also got updates on Azure Container Apps, .NET Aspire, and some great insights on developer productivity tools.
 
Let's dive into these interesting reads that caught my attention this week.

Cloud

Programming

Open Source

AI

  • Agent mode for every developer (Katie Savage) - Great news for everyone, as agent mode becomes available in so many different editors. This post also contains videos showing some scenarios.

Podcasts

Miscellaneous

  • The experience is enough (Salma Alam-Naylor) - Whether we like it or not, we are social creatures. We all need to stop hiding behind our screens and get out there!

~frank

Reading Notes #648

Welcome to this week's reading notes! Dive into the latest on Microsoft's new AI chat template, explore Docker's MCP CLI, learn about integrating AI into .NET applications, and discover how to automate .NET MAUI library publishing with GitHub Actions. 

Whether you're interested in AI advancements, programming techniques, or DevOps practices, there's something valuable waiting for you below.

MsDevMtl Meetup - Uno


Suggestion of the week

  • Exploring the new AI chat template (Andrew Lock) - This new .NET template looks pretty useful and has a lot of components already baked in. This post explains them and shows how to customize the template for your own usage.

Programming


DevOps


AI


Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week. 

If you have interesting content, share it!

~Frank


Making AI smarter with an MCP server that manages short URLs

Have you ever wanted to give your AI assistants access to your own custom tools and data? That's exactly what Model Context Protocol (MCP) allows us to do, and I've been experimenting with it lately.

(Version française ici)

I've read a lot recently about Model Context Protocol (MCP) and how it is changing the way AI interacts with external systems. I was curious to see how it works and how I could use it in my own projects. There are many tutorials available online, but one of my favorites was written by James Montemagno: Build a Model Context Protocol (MCP) server in C#. This post isn't a tutorial, but rather a summary of my experience and what I learned along the way while building a real MCP server that manages short URLs.

MCP doesn't change the AI itself; it's a protocol that helps your AI model interact with external resources: APIs, databases, etc. The protocol simplifies the way AI can access an external system, and it lets the AI discover the available tools from those resources. Recently I was working on a project that manages short URLs, and I thought it would be a great opportunity to build an MCP server around it. I wanted to see how easy it is to build, and then use it in VS Code with GitHub Copilot Chat.

Code: All the code of this post is available in the branch exp/mcp-server of the AzUrlShortener repo on GitHub.

Setting Up: Adding an MCP Server to a .NET Aspire Solution

The AzUrlShortener is a web solution that uses .NET Aspire, so the first thing I did was create a new project using the command:

dotnet new web -n Cloud5mins.ShortenerTools.MCPServer -o ./mcpserver

Required Dependencies

To transform this into an MCP server, I added these essential NuGet packages:

  • Microsoft.Extensions.Hosting
  • ModelContextProtocol.AspNetCore
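
From the solution folder, that's two quick commands (the project path assumes the -o ./mcpserver output folder from the dotnet new command above):

dotnet add ./mcpserver package Microsoft.Extensions.Hosting
dotnet add ./mcpserver package ModelContextProtocol.AspNetCore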

Since this project is part of a .NET Aspire solution, I also added references to:

  • The ServiceDefaults project (for consistent service configuration)
  • The ShortenerTools.Core project (where the business logic lives)

Integrating with Aspire

Next, I needed to integrate the MCP server into the AppHost project, which defines all services in our solution. Here's how I added it to the existing services:

var manAPI = builder.AddProject<Projects.Cloud5mins_ShortenerTools_Api>("api")
						.WithReference(strTables)
						.WaitFor(strTables)
						.WithEnvironment("CustomDomain",customDomain)
						.WithEnvironment("DefaultRedirectUrl",defaultRedirectUrl);

builder.AddProject<Projects.Cloud5mins_ShortenerTools_TinyBlazorAdmin>("admin")
		.WithExternalHttpEndpoints()
		.WithReference(manAPI);

// 👇👇👇 new code for MCP Server
builder.AddProject<Projects.Cloud5mins_ShortenerTools_MCPServer>("mcp")
		.WithReference(manAPI)
		.WithExternalHttpEndpoints();

Notice how I added the MCP server with a reference to the manAPI - this is crucial as it needs access to the URL management API.

Configuring the MCP Server

To complete the setup, I needed to configure the dependency injection in the program.cs file of the MCPServer project. The key part was specifying the BaseAddress of the httpClient:

var builder = WebApplication.CreateBuilder(args);       
builder.Logging.AddConsole(consoleLogOptions =>
{
    // Configure all logs to go to stderr
    consoleLogOptions.LogToStandardErrorThreshold = LogLevel.Trace;
});
builder.Services.AddMcpServer()
    .WithTools<UrlShortenerTool>();

builder.AddServiceDefaults();

builder.Services.AddHttpClient<UrlManagerClient>(client => 
            {
                client.BaseAddress = new Uri("https+http://api");
            });
            
var app = builder.Build();

app.MapMcp();

app.Run();

That's all that was needed! Thanks to .NET Aspire, integrating the MCP server was straightforward. When you run the solution, the MCP server starts alongside the other projects and is available at http://localhost:{some port}/sse. The /sse part of the endpoint stands for Server-Sent Events and is critical: it's the URL that AI assistants will use to discover the available tools.
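
A quick way to confirm the server is up is to hit that endpoint directly and watch the event stream; a minimal check (the port below is a placeholder, grab the real one from the Aspire dashboard):

# -N turns off curl's buffering so the server-sent events show up as they arrive
curl -N http://localhost:5000/sse   # replace 5000 with your assigned port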

Implementing the MCP Server Tools

Looking at the code above, two key lines make everything work:

  1. builder.Services.AddMcpServer().WithTools<UrlShortenerTool>(); - registers the MCP server and specifies which tools will be available
  2. app.MapMcp(); - maps the MCP server to the ASP.NET Core pipeline

Defining Tools with Attributes

The UrlShortenerTool class contains all the methods that will be exposed to AI assistants. Let's examine the ListUrl method:

[McpServerTool, Description("Provide a list of all short URLs.")]
public List<ShortUrlEntity> ListUrl()
{
	var urlList = _urlManager.GetUrls().Result.ToList<ShortUrlEntity>();
	return urlList;
}

The [McpServerTool] attribute marks this method as a tool the AI can use. I prefer keeping tool definitions simple, delegating the actual implementation to the UrlManager class that's injected in the constructor: UrlShortenerTool(UrlManagerClient urlManager).

The URL Manager Client

The UrlManagerClient follows standard HttpClient patterns. It receives the pre-configured httpClient in its constructor and uses it to communicate with the API:

public class UrlManagerClient(HttpClient httpClient)
{
	public async Task<IQueryable<ShortUrlEntity>?> GetUrls()
    {
		IQueryable<ShortUrlEntity> urlList = null;
		try{
			using var response = await httpClient.GetAsync("/api/UrlList");
			if(response.IsSuccessStatusCode){
				var urls = await response.Content.ReadFromJsonAsync<ListResponse>();
				urlList = urls!.UrlList.AsQueryable<ShortUrlEntity>();
			}
		}
		catch(Exception ex){
			Console.WriteLine(ex.Message);
		}
        
		return urlList;
    }

	// other methods to manage short URLs
}

This separation of concerns keeps the code clean - tools handle the MCP interface, while the client handles the API communication.

Using the MCP Server with GitHub Copilot Chat

Now for the exciting part - connecting your MCP server to GitHub Copilot Chat! This is where you'll see your custom tools in action.

Configuring Copilot to Use Your MCP Server

Once the server is running (either deployed in Azure or locally), follow these steps:

  1. Open GitHub Copilot Chat in VS Code
  2. Change the mode to Agent by clicking the dropdown in the chat panel
  3. Click the Select Tools... button, then Add More Tools
Set GitHub Copilot mode to Agent

Selecting the Connection Type

GitHub Copilot supports several ways to connect to MCP servers:

All MCP Server types

There are multiple options available - you could have your server in a container or run it via command line. For our scenario, we'll use HTTP.

Note: At the time of writing this post, I needed to use the HTTP URL of the MCP server rather than HTTPS. You can get this URL from the Aspire dashboard by clicking on the resource and checking the available Endpoints.

After selecting your connection type, Copilot will display the configuration file, which you can modify anytime.

GitHub Copilot Chat Configuration

Interacting with Your Custom Tools

Now comes the fun part! You can interact with your MCP server in two ways:

  1. Natural language queries: Ask questions like "How many short URLs do I have?"
  2. Direct tool references: Use the pound sign to call specific tools: "With #azShortURL list all URLs"

The azShortURL is the name we gave to our MCP server in the configuration.

GitHub Copilot question and response example


Key Learnings and Future Directions

Building this MCP server for AzUrlShortener taught me several valuable lessons:

What Worked Well

  • Integration with .NET Aspire was remarkably straightforward
  • The attribute-based approach to defining tools is clean and intuitive
  • The separation of tool definitions from implementation logic keeps the code maintainable

Challenges and Considerations

  • The csharp-SDK is only a few weeks old and still in preview
  • OAuth authentication isn't defined yet (though it's being actively worked on)
  • Documentation is present but evolving rapidly as the technology matures, so some features may not be fully documented yet

For the AzUrlShortener project specifically, I'm keeping this MCP server implementation in the experimental branch mcp-server until I can properly secure it. However, I'm already envisioning numerous other scenarios where MCP servers could add great value.

If you're interested in exploring this technology, I encourage you to:

  • Check out the GitHub repo
  • Fork it and create your own MCP server
  • Experiment with different tools and capabilities

Join the Community

If you have questions or want to share your experiences with others, I invite you to join the Azure AI Community Discord server:

Join Azure AI Community Discord

The MCP ecosystem is growing rapidly, and it's an exciting time to be part of this community!


~Frank


Reading Notes #643

In this week's Reading Notes, we explore a diverse range of updates and insights from the tech world. From the latest features in the Azure SDK and Developer CLI to an introduction to .NET Aspire and its innovative approach to Infrastructure as Code, there's plenty to catch up on.

Jump into discussions on AI productivity, free Azure SQL tiers, and even a refreshing podcast on stress-free living. 


Let's get started!


Cloud

Databases

AI

Programming

Podcasts


Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week. 


If you have interesting content, share it!

~Frank

Reading Notes #636

Welcome back to another edition of my Reading Notes! This week, we've got some great content lined up, including insights on cloud development with Azure Developer CLI, tips for promoting your open source projects, updates on .NET, and more. Dive in to explore a wealth of knowledge and stay updated with the latest trends and developments.

Cloud

Programming

AI

Miscellaneous


Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week. 

If you have interesting content, share it!

~Frank


Reading Notes #633

This edition of Reading Notes covers the synergies between C# and Rust, open-source software licensing challenges, and securing the software supply chain. It also features a beginner's guide to programming C# in Visual Studio Code. Finally, we delve into the evaluation process of AI models for GitHub Copilot.

Enjoy the read!

man looking at the camera with a frozen lake behind him

Programming

Miscellaneous

  • How we evaluate AI models and LLMs for GitHub Copilot (Connor Adams, Klint Finley) - This is an interesting post that shares the methods used to test the different models before they can be included. Testing a model is different from testing a regular piece of code: what are they looking for, and what's important?

Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.

If you have interesting content, share it!

~frank

Visual Countdown Days Until [a date]

During the holidays, I embarked on a fun project to create a visual countdown for important dates. Inspired by howmanysleeps and hometime from veebch, I wanted to build a countdown that didn't rely on Google Calendar. Instead, I used a Raspberry Pi Pico and some custom code to achieve this.

💾 You can find the full code on GitHub


Raspberry Pi pico and the light using custom colors

What It Is

This project consists of two main parts:

  • Python code for the Raspberry Pi Pico
  • A .NET website to update the configuration, allowing you to set:
    • The important date
    • Two custom colors or random ones
    • The RGB values for the custom colors


screenshot of the configuration website

What You Need

How to Deploy the Configuration Website

After cloning the repo, navigate to the src/NextEvent/ folder and use the Azure Developer CLI to initialize the project:

azd init

Enter a meaningful name for your resource group in Azure. To deploy, use the deployment command:

azd up

Specify the Azure subscription and location when prompted. After a few minutes, everything should be deployed. You can access the URL from the output in the terminal or retrieve it from the Azure Portal.

How to Set Up the Raspberry Pi Pico

Edit the config.py file to add your Wi-Fi information and update the number of lights on your light strip.

You can use Thonny to copy the Python code to the device. Copy both main.py and config.py to the Raspberry Pi Pico.
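
If you prefer the command line to Thonny, the mpremote tool can copy the files too. A minimal sketch, assuming a single Pico connected over USB:

# install once, then copy both files to the board
pip install mpremote
mpremote cp main.py :main.py
mpremote cp config.py :config.py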

How It Works

  • The website creates a JSON file and saves it in a publicly accessible Azure storage.
  • When the Pi is powered on, it will:
    • Turn all the lights of the strip green, one by one
    • Change the color of the entire light strip a few times, then turn it off
    • Try to connect to the Wi-Fi
    • Retrieve the timezone, current date, and settings from the JSON file
    • If the important date is within 24 days, the countdown will be displayed using random colors or the specified colors.
    • If the date has passed, the light strip will display a breathing effect with a random color of the day.

The Code on the Raspberry Pi Pico

The main code for the Raspberry Pi Pico is written in Python. Here's a brief overview of what it does:

  1. Connect to Wi-Fi: The connect_to_wifi function connects the Raspberry Pi Pico to the specified Wi-Fi network.
  2. Get Timezone and Local Time: The get_timezone and get_local_time functions fetch the current timezone and local time using online APIs.
  3. Fetch Light Settings: The get_light_settings function retrieves the important date and RGB colors from the JSON file stored in Azure.
  4. Calculate Sleeps Until Special Day: The sleeps_until_special_day function calculates the number of days until the important date.
  5. Control the LED Strip: The progress function controls the LED strip, displaying the countdown or a breathing effect based on the current date and settings.

The Configuration Website

The configuration website is built in C#. It's a Blazor server webapp, and I used .NET Aspire to make it easy to run it locally. The UI uses FluentUI-Blazor so it looks pretty, without effort. 

The website allows you to update the settings for the Raspberry Pi Pico. You can set the important date, choose custom colors, and save these settings to a JSON file in Azure storage.

Little Extra

The website is deployed in Azure Container App with a minimum scaling to zero to save on costs. This may cause a slight delay when loading the site for the first time, but it will work just fine and return to "dormant" mode after a while.

I hope you enjoyed reading about my holiday project! It was a fun and educational experience, and I look forward to working on more projects like this in the future.

What's Next?

Currently the project does a 24-day countdown (inspired by the advent calendar). I would like to add a feature to allow the user to set the number of days for the countdown. I would also like to add the possibility to set the color for the breathing effect (or keep it random) when the important date has passed. And lastly, I would like to add the time of day when the light strip should turn on and off, because we all have different schedules 😉.

Last thoughts

I really enjoyed doing this project. It was a fun way to learn more about the Raspberry Pi Pico, MicroPython (I didn't even know it was a thing), and FluentUI Blazor. I hope you enjoyed reading about it and that it inspires you to create your own fun projects. If you have any questions or suggestions, feel free to reach out; I'm fboucheros on most socials.

~Frank

Reading Notes #627


This week, I stumbled upon some fascinating reads. From the announcement of .NET 9 and its incredible versatility to an intriguing new type of failover for Azure Storage, there's plenty to explore. Discover how to get .NET 9 running on your Raspberry Pi, check out the latest Blazorise update, and delve into the power of GitHub Models in .NET with Semantic Kernel. Plus, don't miss out on the introduction of GitHub Copilot for Azure and a new season of AI-related sessions in Visual Studio. And for my fellow open-source enthusiasts, the .NET Aspire Community Toolkit is a game-changer. 

Dive in and let's geek out together! 🌟

Suggestion of the week

  • Announcing .NET 9 - .NET Blog (.NET Team) - You can build anything with C# (aka .NET) and I love it! It runs everywhere, it's open source, it's fast, and it's free!

Cloud

Programming

  • Install and use Microsoft Dot NET 9 with the Raspberry Pi (Pete Codes) - C# everywhere! I love it! I do have some code that runs on a Pi as a mini server, but I need to have a look for an IoT library that could be used.

  • Blazorise v1.7 (Mladen Macanović) - New version of a nice looking CSS Framework for our Blazor website with more features and better performance.

AI

Data

Open Source

Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
If you have interesting content, share it!


~ Frank


Reading Notes #626

Kick off your week with this curated list of must-read tech articles. From .NET modernisation patterns and new C# 13 LINQ methods to open-source contributions and thought-provoking reads, there's plenty to explore. 


Enjoy!

Programming

Open-Source

Miscellaneous

Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.

If you have interesting content, share it!


~ Frank

Reading Notes #625

Welcome to another edition of Reading Notes! This week, dive into the latest updates on Azure DevOps, Docker best practices, System.Text.Json enhancements in .NET 9, AI innovations from GitHub Universe, and more. 

Enjoy your reading!

Cloud

Programming

AI

  • GitHub Spark (Devon Rifkin, Terkel Gjervig Nielsen, Cole Bemis, Alice Li) - Fascinating news from GitHub Universe. A new spin on the low-code app, but with code. Looking forward to trying it and seeing what I can build with it.

  • GitHub Copilot in Windows Terminal (Christopher Nguyen) - There it is: Copilot making its entrance into our beloved Terminal. It's only in the Canary version for now, but I'm sure it will help many of us when we're not sure what command to use, or what the bash/PowerShell equivalent is.

Miscellaneous


Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.

If you have interesting content, share it!


~ Frank