Exposing Home Containers with Traefik and Cloudflare Tunnel

I love the cloud; in fact, most people probably know me because of the content I share about it. But sometimes our apps don't need scaling or redundancy. Sometimes we just want to host them somewhere.

(French version here)

It was the holidays, and during my time off I worked on a few small personal projects. I packaged them in containers so they're easy to deploy anywhere. I deployed them on a mini-PC I have at home, and it's great... as long as I stay home. But what if I want to access them from somewhere else (e.g., my in-laws' house)?

I set up a Cloudflare tunnel to a Traefik container that proxies the traffic to the correct container based on the subdomain: dev.c5m.ca goes to container X, and test.c5m.ca goes to container Y. In this post, I wanted to share how I did it (and also have it somewhere for me in case I need to do it again 😉). It's simple once you know how all the pieces fit together.

Image generated by Microsoft Designer

The Setup

The architecture is straightforward: Cloudflare Tunnel creates a secure connection from my home network to Cloudflare's edge, and Traefik acts as a reverse proxy that dynamically routes incoming requests to the appropriate container based on the subdomain. This way, I can access multiple services through different subdomains without exposing my home network directly to the internet.
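
In other words, a request follows this path:

[Browser] → [Cloudflare edge] → (tunnel) → [cloudflared] → [Traefik :8082] → [container picked by hostname]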

Step 1: Cloudflare Tunnel

First, assuming you already own a domain name, you'll need to create a Cloudflare tunnel. You can do this through the Cloudflare dashboard under Zero Trust → Networks → Tunnels. Once created, you'll get a tunnel token that you'll use in the configuration.

Here's my cloudflare-docker-compose.yaml:

name: cloudflare-tunnel

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}
    command: ["tunnel", "--no-autoupdate", "run", "--token", "${TUNNEL_TOKEN}"]

The tunnel token is stored in a .env file for security. The --no-autoupdate flag prevents the container from trying to update itself automatically, which is useful in a controlled environment.
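
If you're not familiar with it, the .env file is just a key-value file sitting next to the compose file. A minimal sketch, with a placeholder instead of a real token:

# .env (same folder as cloudflare-docker-compose.yaml)
TUNNEL_TOKEN=<your-tunnel-token>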

Step 2: DNS Configuration

In the Cloudflare dashboard, create a wildcard CNAME record. For example, for my domain c5m.ca, the record name is *.c5m.ca.
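
For reference, the record looks something like this; the target is the address Cloudflare assigns to your tunnel (placeholder tunnel ID below):

Type    Name    Target                          Proxy status
CNAME   *       <TUNNEL-ID>.cfargotunnel.com    Proxied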

Step 3: Traefik Configuration

Traefik is the reverse proxy that will route traffic to your containers. I have two configuration files: one for Traefik itself and one for the Docker Compose setup.

Here's my traefik.yaml:

global:
  checkNewVersion: false
  sendAnonymousUsage: false

api:
  dashboard: false # set to true to enable the dashboard
  insecure: true

entryPoints:
  web:
    address: :8082
  websecure:
    address: :8043

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false 

I've configured two entry points: web on port 8082 (HTTP) and websecure on port 8043 (HTTPS). I did it that way because the defaults, 80 and 443, were already taken on my server. The Docker provider watches for containers with Traefik labels and automatically configures routing. exposedByDefault: false means containers won't be exposed unless explicitly enabled with labels. You won't have to change the Traefik config to add more containers; it's all dynamic.

And here's the traefik-docker-compose.yaml:

name: traefik

services:
  traefik:
    image: "traefik:v3.4"
    container_name: "traefik-app"
    restart: unless-stopped
    networks:
      - proxy

    ports:
      - "8888:8080" # Dashboard port
      - "8082:8082"
      - "8043:8043" # remap 443
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./config/traefik.yaml:/etc/traefik/traefik.yaml:ro"

networks:
  proxy:
    name: proxy

The key points here:

  • Traefik is connected to a Docker network called proxy that will be shared with other containers. You can name it whatever you like.
  • Port 8888 maps to Traefik's dashboard (currently disabled in the config)
  • Ports 8082 and 8043 are exposed for HTTP and HTTPS traffic
  • The Docker socket is mounted read-only so Traefik can discover containers
  • The configuration file is mounted from ./config/traefik.yaml
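
With those two files in place, bringing everything up is the usual Compose workflow; something like this (start Traefik first, since its compose file creates the proxy network the other stacks reference as external):

docker compose -f traefik-docker-compose.yaml up -d
docker compose -f cloudflare-docker-compose.yaml up -d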

Step 4: Configuring Services

Now, any container you want to expose through Traefik needs to:

  1. Be on the same proxy network
  2. Have Traefik labels configured

Here's a simple example with an nginx container (nginx-docker-compose.yaml):

name: "test-tools"

services:
  nginx:
    image: "nginx:latest"
    container_name: "nginx-test"
    restart: unless-stopped
    networks:
      - proxy
    volumes:
      - "./html:/usr/share/nginx/html:ro"
      
    labels:
      - traefik.enable=true
      - traefik.http.routers.nginxtest.rule=Host(`test.c5m.ca`) 
      - traefik.http.routers.nginxtest.entrypoints=web

networks:
  proxy:
    external: true

The labels tell Traefik:

  • traefik.enable=true: This container should be exposed
  • nginxtest is the router name; it just needs to be unique across your containers.
  • traefik.http.routers.nginxtest.rule=Host(...): Route requests for test.c5m.ca to this container
  • traefik.http.routers.nginxtest.entrypoints=web: Use the web entry point (port 8082)
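
Before testing through the tunnel, you can sanity-check the routing locally. A quick curl from the Docker host, sending the expected Host header to Traefik's web entry point, should return the nginx page:

curl -H "Host: test.c5m.ca" http://localhost:8082/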

Bonus: A More Complex Example

For a more realistic scenario, here's how I expose my 2D6 Dungeon App. Below is a simplified version of my 2d6-docker-compose.yaml, which includes a multi-container application:

name: 2d6-dungeon

services:
  database:
    container_name: 2d6_db
    ports:
      - "${MYSQL_PORT:-3306}:3306"
    networks:
      - proxy
    ...

  dab:
    container_name: 2d6_dab
    ...
    depends_on:
      database:
        condition: service_healthy
    ports:
      - "${DAB_PORT:-5000}:5000"
    networks:
      - proxy

  webapp:
    container_name: 2d6_app
    depends_on:
      - dab
    environment:
      ConnectionStrings__dab: http://dab:5000
      services__dab__http__0: http://dab:5000

    labels:
      - traefik.enable=true
      - traefik.http.routers.twodsix.rule=Host(`2d6.c5m.ca`)
      - traefik.http.routers.twodsix.entrypoints=web,websecure
      - traefik.http.services.twodsix.loadbalancer.server.port=${WEBAPP_PORT:-8080}

    networks:
      - proxy

    ports:
      - "${WEBAPP_PORT:-8080}:${WEBAPP_PORT:-8080}"

networks:
  proxy:
    external: true

This example shows:

  • Multiple services working together (database, API, web app)
  • Only the webapp is exposed through Traefik (the database and API are internal)
  • The webapp uses both web and websecure entry points
  • Containers on the same network can reach each other on their internal ports (e.g., 5000 for DAB, 3306 for MySQL)
  • The external network is the proxy network created earlier

Cloudflare Tunnel Configuration

In your Cloudflare dashboard, you'll need to configure the tunnel to route traffic to Traefik. Create a public hostname that points to http://<local-ip>:8082, using the local IP of your server (something like 192.168.1.123). You can use a wildcard like *.c5m.ca to route all subdomains to Traefik, which will then handle the routing based on the hostname.
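
I manage my tunnel from the dashboard, but for reference, the equivalent locally-managed cloudflared configuration (config.yml) would look roughly like this; the tunnel ID, credentials path, and IP are placeholders:

tunnel: <TUNNEL-ID>
credentials-file: /etc/cloudflared/<TUNNEL-ID>.json

ingress:
  # Send every subdomain to Traefik's HTTP entry point
  - hostname: "*.c5m.ca"
    service: http://192.168.1.123:8082
  # Catch-all rule required by cloudflared
  - service: http_status:404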

Wrapping Up

That's it! Once everything is set up:

  1. The Cloudflare tunnel creates a secure connection from your home to Cloudflare
  2. Traffic comes in through Cloudflare and gets routed to Traefik
  3. Traefik reads the hostname and routes to the appropriate container
  4. Each service can be accessed via its own subdomain
  5. Only the containers with Traefik labels are accessible from outside my network
  6. It's dynamic! Any new container with the labels will be routed without changing the Traefik or Cloudflare config

It's a simple setup that works great for personal projects. The best part is that you don't need to expose any ports on your router or deal with dynamic DNS; Cloudflare handles all of that.

The next step will be to add some authentication and authorization (e.g., using Keycloak), but that's for another post. For now, this gives me a way to access my home-hosted services from anywhere, and I thought it could be useful to share.

From Hours to Minutes: AI That Finds Tech Events for You

TL;DR

I built an AI research agent that actually browses the live web and finds tech events: no search loops, no retry logic, no hallucinations. Just ask a question and get structured JSON back with the reasoning steps included. The secret? An API that handles multi-step research automatically. Built with .NET/Blazor in a weekend. Watch the video | Get the code | Free API key
(French version)

Happy New Year! I wanted to share something I recently presented at the AI Agents Conference 2025: how to build intelligent research assistants that can search the live web and return structured, reliable results.

Coming back from the holidays, I'm reminded of a universal problem: information overload. Whether it's finding relevant tech conferences, catching up on industry news, or wading through piles of documentation that accumulated during time off, we all need tools that can quickly search and synthesize information for us. That's what Reka Research does: it's an agentic AI that browses the web (or your private documents), answers complex questions, and turns hours of research into minutes. I built a practical demo to show this in action: an Event Finder that searches the live internet for upcoming tech conferences.

The full presentation is available on YouTube if you want to follow along: How to Build Agentic Web Research Assistants

The Problem: Finding Events Isn't Just a Simple Search

Let me paint a picture. You want to find upcoming tech conferences about AI in your area. You need specific information: the event name, start and end dates, location, and most importantly, the registration URL.

A simple web search or basic LLM query falls short because:

  • You might get outdated information
  • The first search result rarely contains all required details
  • You need to cross-reference multiple sources
  • Without structure, the data is hard to use in an application

This is where Reka's Research API shines. It doesn't just search; it reasons through multiple steps, aggregates information, and returns structured, grounded results.

Event finder interface

The Solution: Multi-Step Research That Actually Works

The core innovation here is multi-step grounding. Instead of making a single query and hoping for the best, the Research API acts like a diligent human researcher:

  1. It makes an initial search based on your query
  2. Checks what information is missing
  3. Performs additional targeted searches
  4. Aggregates and validates the data
  5. Returns a complete, structured response

As a developer, you simply send your question, and the API handles the complex iteration. No need to build your own search loops or retry logic.

How It Works: The Developer Experience

Here's what surprised me most: the simplicity. You define your data structure, ask a question, and the API handles all the complex research orchestration. No retry logic, no search loop management.

The key is structured output. Instead of parsing messy text, you tell the API exactly what JSON schema you want:

public class TechEvent
{
    public string? Name { get; set; }
    public DateTime? StartDate { get; set; }
    public DateTime? EndDate { get; set; }
    public string? City { get; set; }
    public string? Country { get; set; }
    public string? Url { get; set; }
}

Then you send your query with the schema, and it returns perfectly structured data every time. The API uses OpenAI-compatible format, so if you've worked with ChatGPT's API, this feels instantly familiar.
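
To make that concrete, here's a minimal sketch of such a call against an OpenAI-compatible chat completions endpoint. The endpoint URL, model name, and schema details below are placeholders, not the real values; check Reka's docs and the GitHub repo for those:

// Minimal sketch: endpoint URL and model name are placeholders.
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

var endpoint = "https://api.example.com/v1/chat/completions"; // placeholder; use Reka's real endpoint

var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("REKA_API_KEY"));

var payload = new
{
    model = "<research-model>", // placeholder model name
    messages = new[]
    {
        new { role = "user", content = "Find upcoming AI conferences in Montreal: name, dates, city, country, registration URL." }
    },
    // OpenAI-compatible structured output: ask for JSON matching TechEvent
    response_format = new
    {
        type = "json_schema",
        json_schema = new
        {
            name = "tech_events",
            schema = new { /* JSON Schema mirroring the TechEvent class */ }
        }
    }
};

var response = await http.PostAsync(
    endpoint,
    new StringContent(JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json"));

// The assistant message content is JSON you can deserialize into TechEvent objects.
Console.WriteLine(await response.Content.ReadAsStringAsync());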

The real magic? You also get back the reasoning steps, the actual web searches it performed and how it arrived at the answer. Perfect for debugging and understanding the agent's thought process.

In the video, I walk through the complete implementation, including domain filtering, location-aware search, and handling the async research calls. The full source code is on GitHub if you want to dive deeper.


Try It Yourself

The complete source code is on GitHub. Clone it, grab a free API key, and you'll have it running in under 5 minutes.

I'm curious what you'll build with this. Research agents that monitor news? Product comparison tools? Documentation synthesizers? The API works for any web research task. If you build something, tag me; I'd love to see it.

Happy New Year! 🎉

Reading Notes #679

Exploring the intersection of AI and code this week, I stumbled on a treasure trove of practical insights, from building AI agents in n8n to Meta’s groundbreaking SAM Audio model. The blend of low-code tools, IDE integrations, and deep dives into .NET profiling shows how innovation is bridging creativity and technical rigor. Whether you’re automating workflows or decoding audio separation, there’s something here to spark curiosity and inspire some hands-on coding.


AI

Programming

~frank