Reading Notes #688

I'm always on the lookout for innovative ideas to streamline my development workflow. This week, I stumbled upon some fascinating reads, among them an article about building an AI-powered pull request agent with the GitHub Copilot SDK, and another demonstrating the secure use of OpenClaw in Docker sandboxes.


AI

Programming

~frank


Reading Notes #687

Welcome to this new Reading Notes post, a collection of interesting articles and resources I've been absorbing lately! This week's roundup dives into a variety of topics, from practical storage solutions and leveraging AI for code upgrades to exploring the intersection of AI and business value. Get ready for a diverse mix of tech insights and management reflections.


Programming

AI

Books



The Making of a Manager: What to Do When Everyone Looks to You
(Julie Zhuo) - Most management books are written by seasoned managers, people with a lot of experience who already have the "manager" mindset well established. This book feels different: more accessible, closer to a conversational tone. Julie shares her stories of becoming a manager and the advice she learned along the way. I think it's a good book to get started on this topic, especially if you are new to the position or thinking about it, to understand and be better equipped for the new challenges coming your way.


Miscellaneous

~frank



Reading Notes #686

This week's Reading Notes is packed with AI insights, open-source discoveries, programming tips, and podcast episodes that will leave you eager to dive in. From Ralph Wiggum's coding secrets to the dangers of one-shot glamour, we've got it all covered. So grab your favourite beverage, settle in, and get ready to level up your tech game!

AI


Open Source


Programming


Podcast

~frank


Reading Notes #685

This week brings a collection of interesting articles, blog posts, and insights from the world of technology, programming, and AI. From the latest developments in Claude Code and AI models for coding to discussions on the security of AI assistants and the future of the craft of programming, there's something for everyone in this edition of Reading Notes.

Enjoy!


AI

Programming

  • Is the craft dead? (Scott Hanselman) - Good question! What do you think? Is it still there? I'm personally sure it still is.

Miscellaneous

~frank

Reading Notes #684

Balancing cloud innovation with AI practicality, this week’s notes blend Azure updates, .NET’s AI roadmap, and clever Python hacks. A sharp reminder on burnout prevention anchors the mix, while creative teams and DevOps culture inspire fresh perspectives. From Docker model runners to Git worktrees, every corner here offers actionable insights or a spark of curiosity, no clichés, just tools and truths for developers navigating the stormy seas of tech.


Suggestion of the week

Cloud

AI

Podcasts

Miscellaneous

~frank

Reading Notes #683

A lot of good stuff crossed my radar this week. From Aspire’s continued evolution and local AI workflows with Ollama, to smarter, more contextual help in GitHub Copilot, the theme is clear: better tools, used more intentionally. I also bookmarked a few thoughtful pieces on leadership and communication that are worth slowing down for. Plenty here to explore, whether you’re deep in code or thinking about how teams actually work.

Meetup MsDevMtl

Programming

AI

Open Source

  • The end of the curl bug-bounty (Daniel Stenberg) - I didn't know about this effort, and of course it's sad to learn about it only now that it's ending, but I'm glad such programs exist.

Miscellaneous

  • Why I Still Write Code as an Engineering Manager (James Sturtevant) - There is still hope, everyone! But more seriously, an inspiring post that managers should read.

  • The Art of the Oner (Golnaz) - Another great post from Golnaz, this time about how to help a message land: how and why one-take videos ("oners") help when presenting, and the effort they represent.

Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.

If you have interesting content, share it!

~frank



Automatically Create AI Clips with This n8n Template

I'm excited to share that my new n8n template has been approved and is now available for everyone to use! This template automates the process of creating AI-generated video clips from YouTube videos and sending notifications directly to your inbox.

French version of this post here

Try the template here: https://link.reka.ai/n8n-template-api


What Does This Template Do?

If you've ever wanted to automatically create short clips from long YouTube videos, this template is for you. It watches a YouTube channel of your choice, and whenever a new video is published, it uses AI to generate engaging short clips perfect for social media. You get notified by email when your clip is ready to download.

How It Works

The workflow is straightforward and runs completely on autopilot:

  1. Monitor YouTube channels - The template watches the RSS feed of any YouTube channel you specify. When a new video appears, the automation kicks off.

  2. Request AI clip generation - Using Reka's Vision API, the workflow sends the video for AI processing. You have full control over the output:

    • Write a custom prompt to guide the AI on what kind of clip to create
    • Choose whether to include captions
    • Set minimum and maximum clip duration
  3. Smart status checking - When the clips are ready, you receive a success email with your download link. As a safety feature, if the job takes too long, you'll get an error notification instead.

Getting Started is Easy

The best part? You can install this template with just one click from the n8n Templates page. No complex setup required!

After installation, you'll just need two quick things:

  • A free Reka AI API key (get yours from Reka)
  • A Gmail account (or use any email provider you like)

That's it! The template comes ready to use. Simply add your YouTube channel RSS feed, connect your API key, and you're ready to start generating clips automatically. The whole setup takes just a few minutes.

If you run into any questions or want to share what you've built, join the Reka Discord community. I'd love to hear how you're using this template!

Show Me

In this short video, I show you how to get the template into your n8n instance and how to configure it.

Happy clipping!

Reading Notes #682

This week’s Reading Notes bring together programming tips, AI experiments, and cloud updates. Learn to build Python CLI tools. Untangle GitHub issue workflows. Try running AI models locally. Catch up on Azure news. And explore ideas around privacy and cloud architecture. Short reads. Useful takeaways.


Programming

AI

Miscellaneous

~frank

Writing My First Custom n8n Node: A Step-by-Step Guide

Recently, I decided to create a custom node for n8n, the workflow automation tool I've been using. I'm not an expert in Node.js development, but I wanted to understand how n8n nodes work under the hood. This blog post shares my journey and the steps that actually worked for me.

French version here

Why I Did This

Before starting this project, I was curious about how n8n nodes are built. The best way to learn something is by doing it, so I decided to create a simple custom node following n8n's official tutorial. Now that I understand the basics, I'm planning to build a more complex node featuring AI Vision capabilities, but that's for another blog post!

The Challenge

I started with the official n8n tutorial: Build a declarative-style node. While the tutorial is well-written, I ran into some issues along the way. The steps didn't work exactly as described, so I had to figure out what was missing. This post documents what actually worked for me, in case you're facing similar challenges. I already have an n8n instance running in a container. In Step 8, I'll explain how I run a second instance for development purposes.

Prerequisites

Before you start, you'll need:

  • Node.js and npm - I used Node.js version 24.12.0
  • Basic understanding of JavaScript/TypeScript - you don't need to be an expert

Step 1: Fixing the Missing Prerequisites

I didn't have Node.js installed on my machine, so my first step was getting that sorted out. Instead of installing Node.js directly, I used nvm (Node Version Manager), which makes it easy to manage different Node.js versions. Installation details are available on the nvm GitHub repository. Once nvm was set up, I installed Node.js version 24.12.0.

Most of the time, I use VS Code as my code editor. I created a new profile and used the template for Node.js development to get the right extensions and settings.

Step 2: Cloning the Starter Repository

n8n provides an n8n-nodes-starter repository on GitHub that includes all the basic files and dependencies you need. You can clone it or use it as a template for your own project. Since this was just a "learning exercise" for me, I cloned the repository directly:

git clone https://github.com/n8n-io/n8n-nodes-starter
cd n8n-nodes-starter

Step 3: Getting Started with the Tutorial

I won't repeat the entire tutorial here; it's clear enough, but I'll highlight some details along the way that I found useful.

The tutorial makes you create a "NasaPics" node and provides a logo for it. That's great, but I suggest you use your own logo images and have light and dark versions. Add both images in a new icons folder (at the same level as the nodes and credentials folders). Having two versions of the logo will make your node look better, whatever theme the user is using in n8n (light or dark). The tutorial only adds the logo in NasaPics.node.ts, but I found that adding it also in the credentials file NasaPicsApi.credentials.ts makes the node look more consistent.

Replace or add the logo line with this, and add Icon to the import statement at the top of the file:

icon: Icon = { light: 'file:MyLogo-dark.svg', dark: 'file:MyLogo-light.svg' };

Note: the darker logo should be used in light mode, and vice versa.
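For reference, here's roughly what the starter project looks like once the icons folder is added (the MyLogo file names are just my examples):

```
n8n-nodes-starter/
├── credentials/
│   └── NasaPicsApi.credentials.ts
├── icons/
│   ├── MyLogo-light.svg
│   └── MyLogo-dark.svg
└── nodes/
    └── NasaPics/
        └── NasaPics.node.ts
```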

Step 4: Following the Tutorial (With Adjustments)

Here's where things got interesting. I followed the official tutorial to create the node files, but I had to make some adjustments that weren't mentioned in the documentation.

Adjustment 1: Making the Node Usable as a Tool

In the NasaPics.node.ts file, I added this line just before the properties array:

requestDefaults: {
      baseURL: 'https://api.nasa.gov',
      headers: {
         Accept: 'application/json',
         'Content-Type': 'application/json',
      },
   },
   usableAsTool: true, // <-- Added this line
   properties: [
      // Resources and operations will go here

This setting allows the node to be used as a tool within n8n workflows and also fixes warnings from the lint tool.

Adjustment 2: Securing the API Key Field

In the NasaPicsApi.credentials.ts file, I added a typeOptions to make the API key field a password field. This ensures the API key is hidden when users enter it, which is a security best practice.

properties: INodeProperties[] = [
   {
      displayName: 'API Key',
      name: 'apiKey',
      type: 'string',
      typeOptions: { password: true }, // <-- Added this line
      default: '',
   },
];

A Note on Errors

I noticed there were some other errors showing up in the credentials file. If you read the error message, you'll see that it's complaining about missing test properties. To fix this, I added a test property at the end of the class that implements ICredentialTestRequest. I also added the interface import at the top of the file.

authenticate: IAuthenticateGeneric = {
   type: 'generic',
   properties: {
      qs: {
         api_key: '={{$credentials.apiKey}}',
      },
   },
};

// Add this at the end of the class
test: ICredentialTestRequest = {
   request: {
      baseURL: 'https://api.nasa.gov/',
      url: '/user',
      method: 'GET',
   },
};

Step 5: Building and Linking the Package

Once I had all my files ready, it was time to build the node. From the root of my node project folder, I ran:

npm i
npm run build
npm link

During the build process, pay attention to the package name that gets generated. In my case, it was n8n-nodes-nasapics. You'll need this name in the next steps.

> n8n-nodes-nasapics@0.1.0 build
> n8n-node build

┌   n8n-node build 
│
◓  Building TypeScript files│
◇  TypeScript build successful
│
◇  Copied static files
│
└  ✓ Build successful

Step 6: Setting Up the n8n Custom Folder

n8n looks for custom nodes in a specific location: ~/.n8n/custom/. If this folder doesn't exist, you need to create it:

mkdir -p ~/.n8n/custom
cd ~/.n8n/custom

Then initialize a new npm package in this folder: run npm init and press Enter to accept all the defaults (or run npm init -y to skip the prompts).

Step 7: Linking Your Node to n8n

Now comes the magic part: linking your custom node so n8n can find it. Replace n8n-nodes-nasapics with whatever your package name is. From the ~/.n8n/custom folder, run:

npm link n8n-nodes-nasapics

Step 8: Running n8n

This is where my setup differs from the standard tutorial. As mentioned at the beginning, I already have an instance of n8n running in a container and didn't want to install it. So I decided to run a second container using a different port. Here's the command I used:

docker run -d --name n8n-DEV -p 5680:5678 \
  -e N8N_COMMUNITY_PACKAGES_ENABLED=true \
  -v ~/.n8n/custom/node_modules/n8n-nodes-nasapics:/home/node/.n8n/custom/node_modules/n8n-nodes-nasapics \
  n8nio/n8n

Let me break down what this command does:

  • -d: Runs the container in detached mode (in the background)
  • --name n8n-DEV: Names the container for easy reference
  • -p 5680:5678: Maps port 5678 from the container to port 5680 on my machine so it doesn't conflict with my existing n8n instance
  • -e N8N_COMMUNITY_PACKAGES_ENABLED=true: Enables community packages — you need this to use custom nodes
  • -v: Mounts my custom node folder into the container, which lets me try my custom node without having to publish it.
  • n8nio/n8n: The official n8n container image

If you're running n8n directly on your machine (not in a container), you can simply start it.

Step 9: Testing Your Node

Once n8n-DEV is running, open your browser and navigate to it. Create a new workflow and search for your node. In my case, I searched for "NasaPics" and my custom node appeared!

To test it:

  1. Add your node to the workflow
  2. Configure the credentials with a NASA API key (you can get one for free at api.nasa.gov)
  3. Execute the node
  4. Check if the data is retrieved correctly

Updating Your Node

During development, you'll likely need to make changes to your node's code. Once done, rebuild with npm run build and restart the n8n container with docker restart n8n-DEV to see the changes.

What's Next?

Now that I understand the basics of building custom n8n nodes, I'm ready to tackle something more ambitious. My next project will be creating a node that uses AI Vision capabilities. Spoiler alert: It's done and I'll be sharing the details in an upcoming blog post!

If you're interested in creating your own custom nodes, I encourage you to give it a try. Start with something simple, like I did, and build from there. Don't be afraid to experiment and make mistakes - that's how we learn!

Resources

Reading Notes #681

This week’s reads blend cutting-edge tech with practical insights, like how Aspire elevates JavaScript to a first-class citizen in modern development, or why AI’s push toward typed languages might just be the future. From building a self-hosted model registry to uncovering AI’s surprising role in video production (who knew Adobe had a sound AI gem?), there’s plenty to unpack. And if data-driven wardrobe experiments count as quirky, this week’s got you covered too.

Programming

  • Aspire for JavaScript developers (David Pine) - JavaScript and all its frameworks are now first-class citizens in Aspire. This post explains what that means and what the benefits are for developers.

AI

Miscellaneous

~frank


Reading Notes #680

In this edition of Reading Notes, I’m sharing articles about the evolving tech landscape, exploring WebAssembly’s potential through Blazor, uncovering the simplicity of .NET’s file-based apps, and reflecting on how 2025 reshaped software development. From podcasts dissecting 2026’s challenges to a heartfelt tech community milestone, this round-up blends cutting-edge tools with practical wisdom, proving innovation thrives in unexpected corners.


Ready to geek out? Let’s roll.

DevOps

Programming

  • File-based apps - .NET - Amazing source of information, all in one place. I used to call it projectless, but from now on, it's file-based.

AI

Podcasts

Miscellaneous

~Frank


Exposing Home Container with Traefik and Cloudflare Tunnel

I love the cloud; in fact, most people probably know me because of the content I share about it. But sometimes our apps don't need scaling or redundancy. Sometimes we just want to host them somewhere.

(post en français ici)

It was the holidays, and during my time off I worked on a few small personal projects. I packaged them in containers so they're easy to deploy anywhere. I deployed them on a mini-PC that I have at home, and it is great... as long as I stay home. But what if I'd like to access them from elsewhere (ex: my in-laws' house)?

I set up a nice Cloudflare tunnel to a Traefik container that proxies the traffic to the correct container based on the prefix or second-level domain. So dev.c5m.ca goes to container X and test.c5m.ca goes to container Y. In this post, I wanted to share how I did it (and also have it somewhere for me in case I need to do it again 😉). It's simple once you know how all the pieces work together.

generated by Microsoft designer

The Setup

The architecture is straightforward: Cloudflare Tunnel creates a secure connection from my home network to Cloudflare's edge, and Traefik acts as a reverse proxy that dynamically routes incoming requests to the appropriate container based on the subdomain. This way, I can access multiple services through different subdomains without exposing my home network directly to the internet.

Step 1: Cloudflare Tunnel

First, assuming you already own a domain name, you'll need to create a Cloudflare tunnel. You can do this through the Cloudflare dashboard under Zero Trust → Networks → Tunnels. Once created, you'll get a tunnel token that you'll use in the configuration.

Here's my cloudflare-docker-compose.yaml:

name: cloudflare-tunnel

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}
    command: ["tunnel", "--no-autoupdate", "run", "--token", "${TUNNEL_TOKEN}"]

The tunnel token is stored in a .env file for security. The --no-autoupdate flag prevents the container from trying to update itself automatically, which is useful in a controlled environment.
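The .env file sits next to the compose file and holds a single line; something like this (the value is a placeholder — use the token Cloudflare gave you when creating the tunnel):

```
TUNNEL_TOKEN=your-tunnel-token-here
```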

Step 2: DNS Configuration

In the Cloudflare dashboard, create a CNAME record with a wildcard name. For example, for my domain c5m.ca, the record name is * so it matches any subdomain (*.c5m.ca), and its target is the tunnel (something like <tunnel-id>.cfargotunnel.com).

Step 3: Traefik Configuration

Traefik is the reverse proxy that will route traffic to your containers. I have two configuration files: one for Traefik itself and one for the Docker Compose setup.

Here's my traefik.yaml:

global:
  checkNewVersion: false
  sendAnonymousUsage: false

api:
  dashboard: false #true
  insecure: true

entryPoints:
  web:
    address: :8082
  websecure:
    address: :8043

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false 

I've configured two entry points: web on port 8082 (HTTP) and websecure on port 8043 (HTTPS). I did it that way because the default ports 80 and 443 were already taken. The Docker provider watches for containers with Traefik labels and automatically configures routing. exposedByDefault: false means containers won't be exposed unless explicitly enabled with labels. You won't have to change the Traefik config to add more containers; it's all dynamic.

And here's the traefik-docker-compose.yaml:

name: traefik

services:
  traefik:
    image: "traefik:v3.4"
    container_name: "traefik-app"
    restart: unless-stopped
    networks:
      - proxy

    ports:
      - "8888:8080" # Dashboard port
      - "8082:8082"
      - "8043:8043" # remap 443
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./config/traefik.yaml:/etc/traefik/traefik.yaml:ro"

networks:
  proxy:
    name: proxy

The key points here:

  • Traefik is connected to a Docker network called proxy that will be shared with other containers. You can name it however you like.
  • Port 8888 maps to Traefik's dashboard (currently disabled in the config)
  • Ports 8082 and 8043 are exposed for HTTP and HTTPS traffic
  • The Docker socket is mounted read-only so Traefik can discover containers
  • The configuration file is mounted from ./config/traefik.yaml

Step 4: Configuring Services

Now, any container you want to expose through Traefik needs to:

  1. Be on the same proxy network
  2. Have Traefik labels configured

Here's a simple example with an nginx container (nginx-docker-compose.yaml):

name: "test-tools"

services:
  nginx:
    image: "nginx:latest"
    container_name: "nginx-test"
    restart: unless-stopped
    networks:
      - proxy
    volumes:
      - "./html:/usr/share/nginx/html:ro"
      
    labels:
      - traefik.enable=true
      - traefik.http.routers.nginxtest.rule=Host(`test.c5m.ca`) 
      - traefik.http.routers.nginxtest.entrypoints=web

networks:
  proxy:
    external: true

The labels tell Traefik:

  • traefik.enable=true: This container should be exposed
  • nginxtest is the unique name for routing this container.
  • traefik.http.routers.nginxtest.rule=Host(...): Route requests for test.c5m.ca to this container
  • traefik.http.routers.nginxtest.entrypoints=web: Use the web entry point (port 8082)

Bonus: A More Complex Example

For a more realistic scenario, here's how I expose the 2D6 Dungeon App. Below is a simplified version of my 2d6-docker-compose.yaml, a multi-container application:

name: 2d6-dungeon

services:
  database:
    container_name: 2d6_db
    ports:
      - "${MYSQL_PORT:-3306}:3306"
    networks:
      - proxy
    ...

  dab:
    container_name: 2d6_dab
    ...
    depends_on:
      database:
        condition: service_healthy
    ports:
      - "${DAB_PORT:-5000}:5000"
    networks:
      - proxy

  webapp:
    container_name: 2d6_app
    depends_on:
      - dab
    environment:
      ConnectionStrings__dab: http://dab:5000
      services__dab__http__0: http://dab:5000

    labels:
      - traefik.enable=true
      - traefik.http.routers.twodsix.rule=Host(`2d6.c5m.ca`)
      - traefik.http.routers.twodsix.entrypoints=web,websecure
      - traefik.http.services.twodsix.loadbalancer.server.port=${WEBAPP_PORT:-8080}

    networks:
      - proxy

    ports:
      - "${WEBAPP_PORT:-8080}:${WEBAPP_PORT:-8080}"

networks:
  proxy:
    external: true

This example shows:

  • Multiple services working together (database, API, web app)
  • Only the webapp is exposed through Traefik (the database and API are internal)
  • The webapp uses both web and websecure entry points
  • An important note: containers that are part of the same network can reach each other using their internal ports (ex: 5000 for DAB, 3306 for MySQL)
  • The external network is the proxy created previously

Cloudflare Tunnel Configuration

In your Cloudflare dashboard, you'll need to configure the tunnel to route traffic to Traefik. Create a public hostname that points to http://<local-ip>:8082, using the local IP of your server (something like 192.168.1.123). You can use wildcards like *.c5m.ca to route all subdomains to Traefik, which will then handle the routing based on the hostname.
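As an aside, if you ever switch to a locally-managed tunnel (configured by file instead of the dashboard), the same routing can be expressed with cloudflared ingress rules. This is only a sketch under that assumption; the tunnel ID, credentials path, and IP are placeholders:

```yaml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  # Send every subdomain of c5m.ca to Traefik's HTTP entry point
  - hostname: "*.c5m.ca"
    service: http://192.168.1.123:8082
  # A catch-all rule is required as the last entry
  - service: http_status:404
```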

Wrapping Up

That's it! Once everything is set up:

  1. The Cloudflare tunnel creates a secure connection from your home to Cloudflare
  2. Traffic comes in through Cloudflare and gets routed to Traefik
  3. Traefik reads the hostname and routes to the appropriate container
  4. Each service can be accessed via its own subdomain
  5. Only the containers with the Traefik labels are accessible from outside my network
  6. It's dynamic! Any new container with the labels will be routed without changing the config in Traefik or Cloudflare

It's a simple setup that works great for personal projects. The best part is that you don't need to expose any ports on your router or deal with dynamic DNS; Cloudflare handles all of that.

Next step will be to add some authentication and authorization (ex: using Keycloak), but that's for another post. For now, this gives me a way to access my home-hosted services from anywhere, and I thought it could be useful to share.

From Hours to Minutes: AI That Finds Tech Events for You

TL;DR

I built an AI research agent that actually browses the live web and finds tech events, no search loops, no retry logic, no hallucinations. Just ask a question and get structured JSON back with the reasoning steps included. The secret? An API that handles multi-step research automatically. Built with .NET/Blazor in a weekend. Watch the video | Get the code | Free API key
(version française)

Happy New Year! I wanted to share something I recently presented at the AI Agents Conference 2025: how to build intelligent research assistants that can search the live web and return structured, reliable results.

Coming back from the holidays, I'm reminded of a universal problem: information overload. Whether it's finding relevant tech conferences, catching up on industry news, or wading through piles of documentation that accumulated during time off, we all need tools that can quickly search and synthesize information for us. That's what Reka Research does: it's an agentic AI that browses the web (or your private documents), answers complex questions, and turns hours of research into minutes. I built a practical demo to show this in action: an Event Finder that searches the live internet for upcoming tech conferences.

The full presentation is available on YouTube if you want to follow along: How to Build Agentic Web Research Assistants

The Problem: Finding Events Isn't Just a Simple Search

Let me paint a picture. You want to find upcoming tech conferences about AI in your area. You need specific information: the event name, start and end dates, location, and most importantly, the registration URL.

A simple web search or basic LLM query falls short because:

  • You might get outdated information
  • The first search result rarely contains all required details
  • You need to cross-reference multiple sources
  • Without structure, the data is hard to use in an application

This is where Reka's Research API shines. It doesn't just search; it reasons through multiple steps, aggregates information, and returns structured, grounded results.

Event finder interface

The Solution: Multi-Step Research That Actually Works

The core innovation here is multi-step grounding. Instead of making a single query and hoping for the best, the Research API acts like a diligent human researcher:

  1. It makes an initial search based on your query
  2. Checks what information is missing
  3. Performs additional targeted searches
  4. Aggregates and validates the data
  5. Returns a complete, structured response

As a developer, you simply send your question, and the API handles the complex iteration. No need to build your own search loops or retry logic.

How It Works: The Developer Experience

Here's what surprised me most: the simplicity. You define your data structure, ask a question, and the API handles all the complex research orchestration. No retry logic, no search loop management.

The key is structured output. Instead of parsing messy text, you tell the API exactly what JSON schema you want:

public class TechEvent
{
    public string? Name { get; set; }
    public DateTime? StartDate { get; set; }
    public DateTime? EndDate { get; set; }
    public string? City { get; set; }
    public string? Country { get; set; }
    public string? Url { get; set; }
}

Then you send your query with the schema, and it returns perfectly structured data every time. The API uses OpenAI-compatible format, so if you've worked with ChatGPT's API, this feels instantly familiar.
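To make the structured-output idea concrete, here's a small sketch in Python (chosen for brevity; the actual demo is .NET/Blazor). The schema shape, field names, and model name below are my assumptions for illustration, not Reka's exact API surface. The point is simply that you attach a JSON schema to an OpenAI-compatible request and get plain JSON back that you can parse directly:

```python
import json

# Hypothetical JSON Schema mirroring the TechEvent class above.
tech_event_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "start_date": {"type": "string"},
        "end_date": {"type": "string"},
        "city": {"type": "string"},
        "country": {"type": "string"},
        "url": {"type": "string"},
    },
    "required": ["name", "start_date", "url"],
}

# OpenAI-compatible chat payload requesting structured output.
payload = {
    "model": "reka-research",  # assumed model name, for illustration only
    "messages": [
        {"role": "user", "content": "Find upcoming AI conferences in Montreal."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "tech_event", "schema": tech_event_schema},
    },
}

# Because the response conforms to the schema, parsing is trivial.
sample_response = '{"name": "AI Conf", "start_date": "2025-03-01", "url": "https://example.com"}'
event = json.loads(sample_response)
print(event["name"])  # AI Conf
```

The payload would then be POSTed to Reka's endpoint with your API key, exactly as you would with any OpenAI-compatible client.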

The real magic? You also get back the reasoning steps, the actual web searches it performed and how it arrived at the answer. Perfect for debugging and understanding the agent's thought process.

I walk through the complete implementation, including domain filtering, location-aware search, and handling the async research calls in the video. The full source code is on GitHub if you want to dive deeper.


Try It Yourself

The complete source code is on GitHub. Clone it, grab a free API key, and you'll have it running in under 5 minutes.

I'm curious what you'll build with this. Research agents that monitor news? Product comparison tools? Documentation synthesizers? The API works for any web research task. If you build something, tag me. I'd love to see it.

Happy New Year! 🎉

Reading Notes #679

Exploring the intersection of AI and code this week, I stumbled on a treasure trove of practical insights, from building AI agents in n8n to Meta’s groundbreaking SAM Audio model. The blend of low-code tools, IDE integrations, and deep dives into .NET profiling shows how innovation is bridging creativity and technical rigor. Whether you’re automating workflows or decoding audio separation, there’s something here to spark curiosity.


AI

Programming

~frank


Building an AI-Powered YouTube Clipper using n8n

My colleague Annie loves clipping videos from her favorite creators. You know that feeling when you catch a great moment and turn it into a perfect short? That's her jam. But she kept running into this frustrating problem: by the time she saw a new video and got around to clipping it, everyone else had already done it. She was always late to the party.

When she told me about this, I thought, "What if we could automatically clip videos the moment they're published?" That way, she'd have her clips ready to post while the content is still fresh.

So I put my experience with integration tools to work and built something for her—and for anyone else who has this same problem. And you know what? I'm pretty excited to share it with you.

French version here: Automatiser le clipping vidéo YouTube avec l'IA et n8n

What I Created

I put together an open-source n8n template that automatically clips YouTube videos using AI. Here's how it works:

  1. It watches for new videos from your favorite YouTube channel
  2. Sends the video to Reka's AI to create clips automatically
  3. Checks when the clips are ready and sends you an email with the download link

The whole thing runs on n8n (it's a free automation platform), and it uses Reka's Clips API to do the AI magic. Best part? It's completely free to use and set up.

How It Actually Works

I built this using two n8n workflows that work together:

Workflow 1: Submit Reel Creation


This one's the watcher. It monitors a YouTube channel's RSS feed, and the moment a new video drops, it springs into action:

  • Grabs the video URL
  • Sends it to Reka's API with instructions like "Create an engaging short video highlighting the best moments"
  • Gets back a job ID so we can track the progress
  • Saves everything to an n8n data table

The cool thing is you can customize how the clips are made. Want vertical videos for TikTok? Done. Need subtitles? Got it. You can set the clip length anywhere from 0 to 30 seconds. It's all in the JSON configuration.

{
  "video_urls": ["{{ $json.link }}"],
  "prompt": "Create an engaging short video highlighting the best moments",
  "generation_config": {
    "template": "moments",
    "num_generations": 1,
    "min_duration_seconds": 0,
    "max_duration_seconds": 30
  },
  "rendering_config": {
    "subtitles": true,
    "aspect_ratio": "9:16"
  }
}
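If you want to poke at the Clips API by hand before wiring it into n8n, you can build that same payload from the shell. Here's a quick sketch, assuming jq is installed; the video URL is just a placeholder:

```shell
# Build the same clip-request payload outside n8n (sketch; the URL is a placeholder)
VIDEO_URL="https://www.youtube.com/watch?v=XXXXXXXXXXX"
PROMPT="Create an engaging short video highlighting the best moments"

# jq -n builds JSON from scratch and safely escapes the variables
PAYLOAD=$(jq -n \
  --arg url "$VIDEO_URL" \
  --arg prompt "$PROMPT" \
  '{
    video_urls: [$url],
    prompt: $prompt,
    generation_config: {
      template: "moments",
      num_generations: 1,
      min_duration_seconds: 0,
      max_duration_seconds: 30
    },
    rendering_config: { subtitles: true, aspect_ratio: "9:16" }
  }')

echo "$PAYLOAD"
```

From there, you would POST that payload to Reka's Clips API with your key, exactly like the n8n HTTP node does.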

Workflow 2: Check Reel Status


This one's the patient checker. Since AI takes time to analyze a video and create clips (could be several minutes depending on the video length), we need to check in periodically:

  • Looks at all the pending jobs in our data table
  • Asks Reka's API "Hey, is this one done yet?"
  • When a clip is ready, sends you an email with the download link
  • Marks the job as complete so we don't check it again

I set mine to check every 15-30 minutes. No need to spam the API—good things take time! 😉
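Conceptually, each scheduled run does something like this sketch, where check_status stands in for the real HTTP call to Reka (n8n's schedule trigger replaces any loop):

```shell
# Sketch of the polling logic; check_status is a stand-in for the real Reka API call
check_status() {
    # In the real workflow this is an HTTP GET to Reka using the job ID;
    # here we simply pretend the job finished.
    echo "completed"
}

process_pending_job() {
    job_id="$1"
    status=$(check_status "$job_id")
    if [ "$status" = "completed" ]; then
        echo "Job $job_id done: send the email, mark it complete"
    else
        echo "Job $job_id still $status: check again next run"
    fi
}

process_pending_job "job-123"
```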

Setting It Up (It's Easier Than You Think)

When I was helping Annie set this up (you can watch the full walkthrough below), we got it working in just a few minutes. Here's what you need to do:

Step 1: Create Your Data Table

In n8n, create a new data table. Here's a pro tip I learned the hard way: don't name it "videos"—use something like "clip_jobs" or "reel_records" instead. Trust me on this one; it'll save you some headaches.

Your table needs four columns (all strings):

  • video_title - The name of the video
  • video_url - The YouTube URL
  • job_id - The ID Reka gives us to track the clip
  • job_status - Where we are in the process (queued, processing, completed, etc.)
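To make that concrete, here's what a hypothetical row in the table could look like once a job is submitted (the job ID and URL are made up):

```json
{
  "video_title": "My Latest Video",
  "video_url": "https://www.youtube.com/watch?v=XXXXXXXXXXX",
  "job_id": "cb7c371b-0000-0000-0000-000000000000",
  "job_status": "processing"
}
```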

Step 2: Import the Workflows

Download the two JSON files from the GitHub repo and import them into n8n. They'll show up with some errors at first—that's totally normal! We need to configure them.

Step 3: Configure "Submit Reel Creation"

  1. RSS Feed Trigger: Replace my YouTube channel ID with the one you want to monitor. You can find any channel's ID in their channel URL.

  2. API Key: Head to platform.reka.ai and grab your free API key. Pop it into the Bearer Auth field. Give it a memorable name like "Reka API key" so you know what it is later.

  3. Clip Settings: This is where you tell the AI what kind of clips you want. The default settings create one vertical video (9:16 aspect ratio) up to 30 seconds long with subtitles. But you can change anything:

    • The prompt ("Create an engaging short video highlighting the best moments")
    • Duration limits
    • Aspect ratio (square, vertical, horizontal—your choice)
    • Whether to include subtitles
  4. Data Table: Connect it to that table you created in Step 1.

Step 4: Configure "Check Reel Status"

  1. Trigger: Start with the manual trigger while you're testing. Once everything works, switch it to a schedule trigger (I recommend every 15-30 minutes).

  2. API Key: Same deal as before—add your Reka API key.

  3. Email: Update the email node with your email address. You can customize the subject and body if you want, but the default works great.

  4. Data Table: Make sure all the data table nodes point to your table from Step 1.

Watching It Work

When Annie and I tested it live, that moment when the first clip job came through with a "queued" status? That was exciting. Then checking back and seeing "completed"? Even better. And when that email arrived with the download link? Perfect.

The clips Reka AI creates are actually really good. It analyzes the entire video, finds the best key moments (or whatever your prompt asks for), adds subtitles, and packages it all up in a format ready for social media.

Wrap Up

This tool works great whether you're a clipper enthusiast or a content creator looking to generate clips for your own channel. Once you set it up, it just runs. New video drops at 3 AM? Your clip is already processing. You wake up to a download link in your inbox.

It's open source and free to use. Take it, customize it, make it your own. And if you come up with improvements or have ideas, I'd love to hear about them. Share your updates on GitHub or join the conversation in the Reka Community Discord.

Watch the Full Setup

I recorded the entire setup process with Annie (she was testing it for the first time). You can see every step, every click, and yes, even the little mistakes we made along the way. That's real learning right there.


Get Started

Ready to try it? Here's everything you need:

🔗 n8n template / GitHub: https://link.reka.ai/n8n-clip
🔗 Reka API key: https://link.reka.ai/free (renewable & free)


~frank



Reading Notes #678

In the ever-evolving tech landscape, this week’s reading notes blend cutting-edge tools with timeless insights. From Python’s growing role in .NET ecosystems to hands-on experiments with AI-powered data ingestion, there’s plenty to explore. Meanwhile, reflections on community, confidence, and finding our “second place” in a fast-paced world add a human touch. Jump into how developers are pushing boundaries, embracing new editors, and learning that growth starts with choosing courage, even when it’s scary.


Programming

Podcasts

Miscellaneous

  • All good things must come to an end (Salma Alam Maylor) - I totally understand, but it is sad news to see her leave the streaming business. She is amazing, and I'm sure she'll still rock whatever she does.
~frank


Reading Notes #677

This week I'm looking at some interesting .NET stuff like Typemock's architecture and how Copilot Studio uses WebAssembly to boost performance. There's also a good reminder about why setting up CI/CD early (when your app is tiny) saves you tons of headaches later. Plus, I found a couple of great podcast episodes on building modern SaaS products and what actually makes a personal brand different from just having a reputation.

The tough salad resisting the snow

Programming

DevOps

AI

Podcast

~frank

Reading Notes #676

This week's Reading Notes explores practical insights on leveraging GitHub Copilot for enhanced .NET testing, the rise of AI-driven documentation solutions, and the importance of security in coding agents. From dissecting Docker’s MCP servers to debating the merits of Minimal APIs, we cover a mix of .NET updates, developer workflows, and emerging best practices. Whether you’re refining build processes, optimizing codebases, or staying ahead of security trends, these notes offer a curated selection of ideas to spark your next project or refactor.



Let’s unpack what’s new and impactful in tech!

AI

Programming

~frank


Ask AI from Anywhere: No GUI, No Heavy Clients, No Friction

Ever wished you could ask AI from anywhere without needing an interface? Imagine just typing ? and your question in any terminal the moment it pops into your head, and getting the answer right away! In this post, I explain how I wrote a tiny shell script that turns this idea into reality, transforming the terminal into a universal AI client. You can query Reka, OpenAI, or a local Ollama model from any editor, tab, or pipeline—no GUI, no heavy clients, no friction.

Small, lightweight, and surprisingly powerful: once you make it part of your workflow, it becomes indispensable.

💡 All the code scripts are available at: https://github.com/reka-ai/terminal-tools


The Core Idea

There is almost always a terminal within reach—embedded in your editor, sitting in a spare tab, or already where you live while building, debugging, and piping data around. So why break your flow to open a separate chat UI? I wanted to just type a single character (?) plus my question and get an answer right there. No window hopping. No heavy client.

How It Works

The trick is delightfully small: send a single JSON POST request to whichever AI provider you feel like (Reka, OpenAI, Ollama locally, etc.):

# Example: Reka
curl https://api.reka.ai/v1/chat \
     -H "X-Api-Key: <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of thanksgiving?"
             }
           ],
           "model": "reka-core",
           "stream": false
         }'

# Example: Ollama local
curl http://127.0.0.1:11434/api/chat \
     -d '{
           "model": "llama3",
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of thanksgiving?"
             }
           ],
           "stream": false
         }'

Once we get the response, we extract the answer field from it. A thin shell wrapper turns that into a universal “ask” verb for your terminal. Add a short alias (?) and you have the most minimalist AI client imaginable.

Let's go into the details

Let me walk you through the core script step-by-step using reka-chat.sh, so you can customize it the way you like. Maybe this is a good moment to mention that Reka has a free tier that's more than enough for this. Go grab your key—after all, it's free!

The script (reka-chat.sh) does four things:

  1. Captures your question
  2. Loads an API key from ~/.config/reka/api_key
  3. Sends a JSON payload to the chat endpoint with curl.
  4. Extracts the answer using jq for clean plain text.

1. Capture Your Question

This part of the script is a pure laziness hack. I wanted to save keystrokes by not requiring quotes when passing a question as an argument. So ? What is 32C in F works just as well as ? "What is 32C in F".

# No arguments? Read the question from stdin so you can pipe text in.
if [ $# -eq 0 ]; then
    if [ ! -t 0 ]; then
        QUERY="$(cat)"
    else
        exit 1
    fi
else
    # Join all arguments into a single question (no quotes required)
    QUERY="$*"
fi

2. Load Your API Key

If you're running Ollama locally you don't need any key, but for all other AI providers you do. I store mine in a locked-down file at ~/.config/reka/api_key, then read it and strip any whitespace like this:

API_KEY_FILE="$HOME/.config/reka/api_key"
API_KEY=$(tr -d '[:space:]' < "$API_KEY_FILE")

3. Send The JSON Payload

Building the JSON payload is the heart of the script, including the API_ENDPOINT, API_KEY, and obviously our QUERY. Here’s how I do it for Reka:

RESPONSE=$(curl -s -X POST "$API_ENDPOINT" \
     -H "X-Api-Key: $API_KEY" \
     -H "Content-Type: application/json" \
     -d "{
  \"messages\": [
    {
      \"role\": \"user\",
      \"content\": $(echo "$QUERY" | jq -R -s .)
    }
  ],
  \"model\": \"reka-core\",
  \"stream\": false
}")

4. Extract The Answer

Finally, we parse the JSON response with jq to pull out just the answer text. If jq isn't installed, we fall back to printing the raw response, but a formatted answer is much nicer. If you are customizing for another provider, you may need to adjust the JSON path here. You can add echo "$RESPONSE" >> data_sample.json to the script to log raw responses for tinkering.

With Reka, the response looks like this:

{
    "id": "cb7c371b-3a7b-48d2-829d-70ffacf565c6",
    "model": "reka-core",
    "usage": {
        "input_tokens": 16,
        "output_tokens": 460,
        "reasoning_tokens": 0
    },
    "responses": [
        {
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": " The origin of Thanksgiving ..."
            }
        }
    ]
}
The value we are looking for and want to display is the `content` field inside `responses[0].message`. Using `jq`, we do:
echo "$RESPONSE" | jq -r '.responses[0].message.content // .error // "Error: Unexpected response format"'
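The fallback for machines without jq can be written as a small guard. Here's a sketch using a hard-coded sample response shaped like the one above:

```shell
# Sample response shaped like Reka's (hard-coded for illustration)
RESPONSE='{"responses":[{"message":{"role":"assistant","content":"The origin of Thanksgiving ..."}}]}'

# Use jq for a clean answer when available, otherwise print the raw JSON
if command -v jq >/dev/null 2>&1; then
    ANSWER=$(echo "$RESPONSE" | jq -r '.responses[0].message.content // .error // "Error: Unexpected response format"')
else
    ANSWER="$RESPONSE"
fi
echo "$ANSWER"
```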

Putting It All Together

Now that we have the script, make it executable with chmod +x reka-chat.sh, and let's add an alias to your shell config to make it super easy to use. Add one line to your .zshrc or .bashrc that looks like this:

alias \?="$REKA_CHAT_SCRIPT"

Because ? is a special character in the shell, we escape it with a backslash. After adding this line, reload your shell configuration with source ~/.zshrc or source ~/.bashrc, and you are all set!

The Result

Now you can ask questions directly from your terminal. Want to know the origin of Thanksgiving? Ask it like this:

? What is the origin of Thanksgiving

And if you want to keep the quotes, you do you!

Extra: Web research

I couldn't stop there! Reka also supports web research, which means it can fetch and read web pages to provide more informed answers. Following the same pattern described previously, I wrote a similar script called reka-research.sh that sends a request to Reka's research endpoint. This obviously takes a bit more time to answer, as it's making different web queries and processing them, but the results are often worth the wait—and they are up to date! I used the alias ?? for this one.

On the GitHub repository, you can find both scripts (reka-chat.sh and reka-research.sh) along with a script to create the aliases automatically. Feel free to customize them to fit your workflow and preferred AI provider. Enjoy the newfound superpower of instant AI access right from your terminal!

What's Next?

With this setup, the possibilities are endless. Reka supports questions related to audio and video, which could be interesting to explore next. The project is open source, so feel free to contribute or suggest improvements. You can also join the Reka community on Discord to share your experiences and learn from others.


Resources