This week’s reading notes cover a wide range of topics, from local AI workflows and Docker agent fleets to data privacy, SQL tips, and developer tooling updates. There’s also an interesting look at how AI may be reshaping platforms like GitHub, alongside practical articles and podcasts packed with ideas for developers and tech enthusiasts alike.
Ghostty Is Leaving GitHub (Mitchell Hashimoto) - I didn't realize it was that bad! It's true that I spend less time there, but did AI cause all those outages (by generating peaks of traffic)?
Docker AI, what’s new with MCP, Agents, Sandboxes, and more (DevOps and Docker Talk: Cloud Native Interviews and Tooling) - Michael Irwin from Docker is on this episode and they go through alllll the recent releases and some major upcoming stuff. Really interesting episode.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
I was wearing a t-shirt with a partial Reka logo at the edge of the frame. I never said the word "Reka" in that segment. The model caught the logo, connected it to the topic I was discussing, and mentioned it unprompted in the output it generated.
That is not a transcript trick. The model was watching.
At the AI Agents Conference 2026, I gave a talk called "Apps That See" — six live demos showing how to build applications that understand images and video. Every project is open source and ready to clone. This post walks through each one so you have enough context to pick it up, run it, and adapt it to something useful in your own work.
Vision AI Is Accessible Now
Not long ago, working with visual AI meant GPU clusters, specialized teams, and weeks of training. Today a compressed 4B model like Qwen or Gemma 3 runs on a regular laptop and handles image description well enough to prototype. Step up to a 7B model like Reka Edge and the quality improves meaningfully. It also runs locally: a gaming PC with a decent GPU is enough. No server required.
For tasks that need more power, cloud APIs give you faster results without local hardware requirements. The tradeoff is that your images and video go to a third-party provider. For corridor cameras or stock photos that is usually acceptable. For private or sensitive content, local is the better default.
The practical pattern: start local to build and test, then decide whether the task actually requires cloud.
What You Can Build With This
Accessibility: Describe a scene in real time for visually impaired users, or identify objects on demand.
Content creation: Extract structure from a video and turn it into a blog post, caption set, or highlight reel.
Productivity: Search through thousands of videos for a specific object or topic, even when the title gives no indication of the content.
Automation: Trigger actions only when specific visual conditions are met, such as an unrecognized person entering a room.
Fun: Most developers' first contact with AI is building something for themselves, and that is a perfectly valid starting point.
Demo 1: Caption This — Generate a Prompt from Any Image
If you work with image generation models, you end up with a lot of images to test and compare. Writing the text prompt that would reproduce a specific image is tedious. This tool does it for you: give it an image, get back a prompt you can use to regenerate something similar.
The demo uses an HTTP client extension in VS Code to call the API directly, no SDK. Pass an image, ask for a plain-text prompt that would recreate it. One prompt detail that improved results noticeably: adding "no markdown" to the instruction.
POST https://api.reka.ai/v1/chat
Content-Type: application/json

{
  "model": "reka-flash",
  "messages": [{
    "role": "user",
    "content": [
      { "type": "image_url", "image_url": { "url": "https://..." } },
      { "type": "text", "text": "Write a prompt in plain text, no markdown, that would generate the exact same image." }
    ]
  }]
}
One thing to know when testing this across different models: some accept an image URL directly, others require the image as a base64-encoded string. Same task, same prompt, different input contract. If you plan to swap models in your app, account for this difference from the start.
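To make that concrete, here is a minimal sketch of how an app can abstract the difference; the helper and the model list are hypothetical, and each provider's docs define the exact contract:

import base64

def image_part(image_ref: str, wants_base64: bool) -> dict:
    """Build the image entry of a chat message for either input contract."""
    if not wants_base64:
        # Contract A: the model accepts a plain URL
        return {"type": "image_url", "image_url": {"url": image_ref}}
    # Contract B: the model wants the bytes inlined as a base64 data URL
    with open(image_ref, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}}

# Pick the contract per model instead of hard-coding one format.
NEEDS_BASE64 = {"some-local-model"}  # hypothetical; fill in per backend
part = image_part("photo.jpg", wants_base64="some-local-model" in NEEDS_BASE64)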
Demo 2: Media Library — Compare Vision Models Side by Side
This is a web app that connects to multiple vision backends and lets you switch between them at runtime. The motivation: benchmark Reka Edge, whether running locally, reached via OpenRouter, or called directly through the Reka API, against other models on real tasks.
Object detection surfaces the biggest portability problem. Some models return bounding boxes in an HTML-style bracket format with pixel coordinates. Others use a 2D box structure with a different coordinate scheme. If you code against one format and then swap models, your rendering breaks. There is no standard here — handle the differences at the application layer, not the model layer.
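As an illustration of that application-layer handling, here is a small normalizer; both input shapes are assumptions standing in for the formats you will actually meet, so adapt the parsing per model:

import re

def parse_boxes(raw, image_w, image_h):
    """Normalize model output to pixel-space (x1, y1, x2, y2) boxes."""
    boxes = []
    if isinstance(raw, str):
        # Assumed shape 1: bracketed pixel coordinates embedded in text,
        # e.g. "[120, 45, 310, 200]"
        for m in re.finditer(r"\[(\d+),\s*(\d+),\s*(\d+),\s*(\d+)\]", raw):
            boxes.append(tuple(int(v) for v in m.groups()))
    else:
        # Assumed shape 2: structured entries on a 0-1000 normalized scale
        for item in raw:
            y1, x1, y2, x2 = item["box_2d"]
            boxes.append((x1 * image_w // 1000, y1 * image_h // 1000,
                          x2 * image_w // 1000, y2 * image_h // 1000))
    return boxes

The rendering code then only ever sees pixel boxes, regardless of which backend produced them.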
The app uses the OpenAI API format as the common interface across all backends. Any model with a compatible endpoint can be swapped in with minimal changes. It does not eliminate the per-model quirks, but it reduces the friction of switching to a configuration change rather than a rewrite.
Video input is supported too, though far fewer models handle it than images. Of the models tested, Reka Edge is the standout for video — the others either reject it or behave inconsistently.
Demo 3: Video2Blog — Turn a Video into a Structured Post
I built this for myself. I do a lot of tutorial videos and I wanted a tool that would turn a recording into a structured blog post without me having to write one from scratch.
The tool sends the video to a vision model with a detailed prompt: target structure, tone, format, and an instruction to flag moments where a screenshot would add value. The model returns timestamps — it cannot extract frames itself, but it tells you exactly where to look, and you pull them locally with ffmpeg.
That creates one architectural quirk worth knowing: the video lives in two places. ffmpeg needs it locally to extract frames. The hosted model needs it uploaded to analyze content. For a one-evening project it works well enough, and I use it often enough that it has paid for itself many times over.
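The local half of that split is small. A sketch of the frame extraction, assuming the model returns timestamps as strings like "00:03:25" (standard ffmpeg flags; the file names are placeholders):

import subprocess

def grab_frame(video_path: str, timestamp: str, out_path: str) -> None:
    """Extract a single frame at the given timestamp with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-ss", timestamp, "-i", video_path,
         "-frames:v", "1", "-y", out_path],
        check=True,
    )

# Timestamps suggested by the model for screenshots
for i, ts in enumerate(["00:01:12", "00:03:25"]):
    grab_frame("talk.mp4", ts, f"screenshot_{i}.png")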
After the first draft, you stay in a conversation loop: change the tone, translate to French, swap a timestamp, restructure a section. The model holds context and iterates with you until the result is what you want.
Demo 4: Video Analyzer — Search and Query Your Video Library
Most video search runs on titles, descriptions, and transcribed audio. This demo searches by what is actually visible on screen.
The app pre-indexes a video library by sending each video through a vision model ahead of time. When a query arrives, the heavy work is already done. A search for "robot arm" returns the right video — a clip of a robotic arm animation. It also returns a false positive: fast-moving hands apparently looked close enough to fool the model. Useful, not perfect, and worth designing around in your UX.
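A rough sketch of that pre-indexing pattern; describe_video is a hypothetical stand-in for one call to your vision backend, and a real app would likely match more cleverly than plain substring search:

import json, pathlib

def build_index(library_dir: str, describe_video) -> None:
    """Run every video through the model once and cache the descriptions."""
    index = {}
    for video in pathlib.Path(library_dir).glob("*.mp4"):
        index[video.name] = describe_video(video)  # slow, but done ahead of time
    pathlib.Path("index.json").write_text(json.dumps(index, indent=2))

def search(query: str) -> list[str]:
    """Cheap at query time: scan the cached descriptions, not the videos."""
    index = json.loads(pathlib.Path("index.json").read_text())
    return [name for name, text in index.items() if query.lower() in text.lower()]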
The Q&A feature goes further. You pick a video and ask a specific question. "What database was used?" returned MySQL — and noted it was running in a Docker container. The model identified that from watching the screen, not from audio. No transcript needed.
From there, you can generate study materials from any recorded session. The demo produces a multiple-choice quiz with answer options, correct answers, and explanations. The model is doing comprehension, not transcription.
Demo 5: Roast My Life — What the Model Actually Sees
I never mentioned the pictures on my wall. The model did.
In a video about Python and AI, the model's generated blog post made a remark about the artwork hanging behind me. I had said nothing about it. The model noticed, mentioned it, and moved on as if it were obvious.
Then there was the t-shirt moment described at the top of this post. A partial logo, half out of frame, no mention of it anywhere in the audio — and the model connected it to the topic anyway.
This demo is named Roast My Life because the model ends up commenting on things you never intended to share. But the real point is what it reveals: a vision model is not a smarter transcript. It is watching. The larger models do this particularly well, and once you see it, it changes how you think about what these tools can do — and what they will pick up without you asking.
Demo 6: N8N Automation — No-Code Video Clipping Pipeline
Vision AI does not always need custom code. This demo wires everything together in N8N, a visual workflow tool, with no programming required.
The trigger is a new video published to YouTube. The workflow finds an engaging clip, reformats it from horizontal to vertical, adds captions in a specific style (all lowercase, specific colors — chosen to be obviously distinct from any default), and sends an email with the finished clip attached. The whole thing runs automatically.
For developers, this pattern is worth knowing even if you code everything else. Many real business workflows have a vision AI step that fits cleanly into a larger automation, and a no-code tool is often the fastest way to ship it.
Watch the Full Talk
The demos above are the written version. The live version, with the actual code running, models responding in real time, and a few things going sideways in interesting ways, is on YouTube.
All the Code
The demos span Python, C#, raw HTTP, Go, and N8N. Vision AI is not tied to a specific stack — if your environment can make an HTTP request, it can call a vision model.
This week's collection highlights the rapid evolution of AI agents, exploring their asynchronous capabilities, deployment journeys, and their impact on DevOps and video editing. On the programming front, we explore new Git features and API versioning with OpenAPI in .NET 10. We also dive into some fascinating podcast discussions ranging from the GUI vs. CLI debate to generational perspectives in the workplace.
510: AI Agents: Claws, Copilot, GUI vs CLI Debate (Merge Conflict) - In this episode, James is put in the spotlight and needs to talk about this phone situation. Honestly, an interesting discussion, and they finally end up talking about AI agents and the CLI versus GUI debate.
Can AI Agents Safely Become DevOps Engineers? (Agentic DevOps : AI + Infra Ops) - I read and consume a lot of AI content as a developer; it's interesting to see the DevOps side waking up and building AI agents that focus on DevOps.
Chet Husk: .NET Tooling - Episode 399 (AI DevOps Podcast) - I never thought about the competition between the human and the machine when it comes to consuming the output of a CLI. This was a very interesting episode about performance and priorities.
A mix of thoughtful perspectives and practical updates this week. From evolving AI tools and model selection guidance to changes in developer workflows and tooling, there’s plenty to reflect on. Add in insights on streaming and a strong push toward more secure environments, and you get a well-rounded set of reads worth your time.
Changes to GitHub Copilot Individual plans (Joe Binder) - Big changes for Copilot that will probably affect your workflow. This post shares the details and the reasoning behind the disruption; it's all for a good cause.
It's Time for a Visual Studio Upgrade - This post compares the old and new versions of the Visual Studio IDE and shares details about the most impactful changes.
Miscellaneous
Livestreaming Before It Was Cool (Golnaz) - Curious to learn more about streaming options, from the different platforms to the tools, and the pros and cons of each? This post is for you, and on top of that, you get the Microsoft story.
A fast-moving mix this week: AI tooling, ARM readiness, Docker sandboxes, and real-world lessons from agents. Practical insights across .NET, DevOps, and local-first workflows.
Our Favorite Agent Setups (Agentic DevOps) - Nice discussion that goes through many AI harnesses, agents, models, and what they are playing with right now. OpenClaw, OpenCode, Claude Code, Copilot, and all of it.
I'm always on the lookout for innovative ways to enhance my coding experience, and this week's Reading Notes are filled with exciting discoveries! From cutting-edge UI libraries to secure sandbox environments for AI agents, I've curated a selection of articles that showcase the latest programming trends and technologies.
Whether you're interested in harnessing the power of Docker sandboxes or exploring the potential of smart glasses integration, there's something on this list for everyone.
Running an AI model as a one-shot script is useful, but it forces you to restart the model every time you need a result. Setting it up as a service lets any application send requests to it continuously, without reloading the model. This guide shows how to serve Reka Edge using vLLM and an open-source plugin, then connect a web app to it for image description and object detection.
You need a machine with a GPU and either Linux, macOS, or Windows (with WSL). I use uv, a fast Python package and project manager; pip + venv works too if you prefer.
Clone the vLLM Reka Plugin
Reka models require a dedicated plugin to run under vLLM. Not all models need this extra step, but Reka's architecture does. Clone the plugin repository and enter the directory:
git clone https://github.com/reka-ai/vllm-reka
cd vllm-reka
The repository contains the plugin code and a serve.sh script you will use to start the service.
Download the Reka Edge Model
Before starting the service, you need the model weights locally. Install the Hugging Face Hub CLI and use it to pull the reka-edge-2603 model into your project directory:
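For example, with the Hugging Face CLI (exact syntax may vary with your CLI version; adjust the target folder to wherever you want serve.sh to find the model):

pip install -U "huggingface_hub[cli]"
huggingface-cli download RekaAI/reka-edge-2603 --local-dir ./models/reka-edge-2603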
This is a large model, so make sure you have enough disk space and a stable connection.
Start the Service
Once the model is downloaded, start the vLLM service using the serve.sh script included in the plugin:
uv run bash serve.sh ./models/reka-edge-2603
The script accepts environment variables to configure which model to load and how much GPU memory to allocate. If your GPU cannot fit the model at default settings, open serve.sh and adjust the variables at the top. The repository README lists the available options. The service takes a few seconds to load the model weights, then starts listening for HTTP requests.
As an example with an NVIDIA GeForce RTX 5070, here are the settings I used to run the model:
With the backend running, time to start the Media Library app. Clone the repository, jump into the directory, and run it with Docker:
git clone https://github.com/fboucher/media-library
cd media-library
docker compose up --build -d
Open http://localhost:8080 in your browser, then add a new connection with these settings:
Name: local (or any label you want)
IP address: your machine's local network IP (e.g. 192.168.x.x)
API key: leave blank or enter anything — no key is required for a local connection
Model: reka-edge-2603
Click Test to confirm the connection, then save it.
Try It: Image Description and Object Detection
Select an image in the app and choose your local connection, then click Fill with AI. The app sends the image to your vLLM service, and the model returns a natural language description. You can watch the request hit your backend in the terminal where the service is running.
Reka Edge also supports object detection. Type a prompt asking the model to locate a specific feature (ex: "face") and the model returns bounding-box coordinates. The app renders these as red boxes overlaid on the image. This works for any region you can describe in a prompt.
Switch to the Reka Cloud API
If your local GPU is too slow for production use, you can point the app at the Reka APIs instead. Add a new connection in the app and set the base URL to the Reka API endpoint. Get your API key from platform.reka.ai. OpenRouter is another option if you prefer a unified API across providers.
The model name stays the same (reka-edge-2603), so switching between local and cloud is just a matter of selecting a different connection in the app. The cloud API is noticeably faster because Reka's servers are more powerful than a local GPU (at least mine :) ). During development, use the local service to avoid burning credits; switch to the API for speed when you need it.
What You Can Build
The service you just set up accepts any image or video via HTTP — point a script at a folder and you have a batch pipeline for descriptions, tags, or bounding boxes. Swap the prompt and you change what it extracts. The workflow is the same whether you are running locally or through the API.
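As a sketch of such a batch pipeline, assuming the service exposes the default vLLM OpenAI-compatible endpoint on port 8000 (check what serve.sh prints at startup) and using the model name from earlier:

import base64
import pathlib
from openai import OpenAI  # the vLLM service speaks the OpenAI-compatible API

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

def describe(image_path: pathlib.Path) -> str:
    data = base64.b64encode(image_path.read_bytes()).decode("ascii")
    resp = client.chat.completions.create(
        model="reka-edge-2603",
        messages=[{"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{data}"}},
            {"type": "text", "text": "Describe this image in one sentence."},
        ]}],
    )
    return resp.choices[0].message.content

for img in sorted(pathlib.Path("./photos").glob("*.jpg")):
    print(img.name, "->", describe(img))

Swap the text prompt to turn the same loop into a tagging or object-detection pipeline.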
The tech landscape is constantly evolving, and keeping up with the latest developments can be overwhelming. From AI-powered tools like Ollama and OpenClaw, to new ways of programming with Aspire Docs and Azure CLI, it seems like there's always something new to explore. In this edition of Reading Notes, I'll share some of the interesting things that caught my eye recently, from AI advancements to developer tools and beyond.
NDI Tools: The Unsung Hero of Video Production (Golnaz) - I started using NDI tools for my stream back in 2020, when I was bringing one of my friends into my stream. Amazing, and you must read this post.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, podcasts and books that catch my interest during the week.
This week’s notes bounce between terminals, copilots, and the shifting shape of AI tools in our daily work. From real-world experiments in large .NET projects to small quality-of-life improvements that just make coding smoother, there’s a lot to chew on. A few links stood out more than expected and might change how you approach your setup or your workflow.
You no longer have to wait for Copilot to finish thinking (Bart Wullems) - Amazing! I love it! Another scenario where this is useful is when you press the wrong new-line key combination, and it sends your request instead of adding a new line.
My 'Grill Me' Skill Went Viral (Matt Pocock) - After trying Grill Me, I knew I couldn't do without it. It is a great skill, as it forces us to really flesh out our ideas.
AI keeps changing how we build, think, and even feel about software. This batch of posts & episodes mixes practical agent skills, vibe coding, and faster shipping with a bit of reflection on the old internet and why it still sticks with us.
AI
5 Agent Skills I Use Every Day (Matt Pocock) - This post shares really useful and impactful skills and explains how they work.
9 Ways AI Coding Has Rewired My Brain (Matt Pocock) - An interesting post that not only describes an efficient way to code with AI but also to stay relevant.
Podcasts
Your Images are Out of Date (probably) - The Silent Rebuilds problem (DevOps and Docker Talk: Cloud Native Interviews and Tooling) - Very interesting episode. I guess I never realized how true it is that as soon as you download your images, they are already outdated. This episode talks about the concept of silent rebuilds and tools to help us solve that issue.
503: Welcome to Tiny Tool Town (Merge Conflict) - With a name like Tiny Tool Town, my head always goes to Looney Tunes. No idea why, but this episode is not about that. It's about a collection of open source tools named Tiny Tool Town, and they also talk about different models in GitHub Copilot.
Building Software using Squad with Brady Gaster (.NET Rocks!) - Turn your Copilot up to 11 with Squad. Carl and Richard talk to Brady Gaster about Squad, a tool for creating an AI development team using GitHub Copilot.
Daniel Ward: AI Agents - Episode 393 (Azure & DevOps Podcast) - In this episode, they talk about the different AI tools used by developers and DevOps people, and the trends.
Everything Is a Graph (Even Your Dad Jokes) with Roi Lipman (Screaming in the Cloud) - Nice episode about different databases and, most notably, graph databases. Very interesting to learn more about the explosion of database types.
Miscellaneous
Kill Your Ego, Ship Your Work (Golnaz) - Great advice from Golnaz in this post. This is why so many people want to work with her.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, podcasts and books that catch my interest during the week.
Reka just released Reka Edge, a compact but powerful vision-language model that runs entirely on your own machine. No API keys, no cloud, no data leaving your computer. I work at Reka and putting together this tutorial was genuinely fun; I hope you enjoy running it as much as I did.
In three steps, you'll go from zero to asking an AI what's in any image or video.
What You'll Need
A machine with enough RAM to run a 7B parameter model (~16 GB recommended)
Git
uv, a fast Python package manager. Install it with:
curl -LsSf https://astral.sh/uv/install.sh | sh
This works on macOS, Linux, and Windows (WSL). If you're on Windows without WSL, grab the Windows installer instead.
Step 1: Get the Model and Inference Code
Clone the Reka Edge repository from Hugging Face. This includes both the model weights and the inference code:
git clone https://huggingface.co/RekaAI/reka-edge-2603
cd reka-edge-2603
Step 2: Fetch the Large Files
Hugging Face stores large files (model weights and images) using Git LFS. After cloning, these files exist on disk but are only small pointer files, not the actual content.
First, make sure Git LFS is installed. The command varies by platform:
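For example (pick the line that matches your system; run git lfs install once after installing):

brew install git-lfs          # macOS (Homebrew)
sudo apt-get install git-lfs  # Debian/Ubuntu (WSL included)
git lfs install               # one-time setup, any platform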
Then pull all large files, including model weights and media samples:
git lfs pull
Grab a coffee while it downloads; the model weights are several GB.
Step 3: Ask the Model About an Image or Video
To analyze an image, use the sample included in the media/ folder:
uv run example.py \
--image ./media/hamburger.jpg \
--prompt "What is in this image?"
Or pass a video with --video:
uv run example.py \
--video ./media/many_penguins.mp4 \
--prompt "What is in this?"
The model will load, process your input, and print a description, all locally, all private.
Try different prompts to unlock more:
"Describe this scene in detail."
"What text is visible in this image?"
"Is there anything unusual or unexpected here?"
What's Actually Happening?
You don't need this to use the model, but if you're anything like me and can't help wondering what's going on under the hood, here's the magic behind example.py:
1. It picks the best hardware available.
The script checks whether your machine has a GPU (CUDA for Nvidia, Metal for Apple Silicon) and uses it automatically. If neither is available, it falls back to the CPU. This affects speed, not quality.
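In PyTorch terms, that check looks roughly like this (a sketch, not the exact code from example.py):

import torch

if torch.cuda.is_available():             # Nvidia GPU
    device = "cuda"
elif torch.backends.mps.is_available():   # Apple Silicon (Metal)
    device = "mps"
else:                                      # CPU fallback: slower, same output quality
    device = "cpu"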
2. It loads the model into memory.
The 7 billion parameter model is read from the folder you cloned. This is the "weights": billions of numbers that encode everything the model has learned. Loading takes ~30 seconds depending on your hardware.
# Load the image processor and the model weights from the cloned folder
processor = AutoProcessor.from_pretrained(args.model, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(args.model, ...).eval()
3. It packages your input into a structured message.
Your image (or video) and your text prompt are wrapped together into a conversation-style format, the same way a chat message works, except one part is visual instead of text.
4. It converts everything into numbers.
The processor translates your image into a grid of numerical patches and your prompt into tokens (small chunks of text, each mapped to a number). The model only understands numbers, so this step bridges the gap.
5. The model generates a response, token by token.
Starting from your input, the model predicts the most likely next word, then the next, up to 256 tokens. It stops when it hits a natural end-of-response marker.
6. It converts the numbers back into text and prints it.
The token IDs are decoded back into human-readable words and printed to your terminal. No internet involved at any point.
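Put together, steps 3 to 6 look roughly like this; it is a sketch based on the usual transformers chat API rather than a copy of example.py, so argument names may differ:

# Continues from the processor and model loaded above
messages = [{"role": "user", "content": [
    {"type": "image", "image": "./media/hamburger.jpg"},
    {"type": "text", "text": "What is in this image?"},
]}]

# Steps 3-4: wrap the prompt and image, then convert everything to tensors
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True,
    tokenize=True, return_dict=True, return_tensors="pt",
).to(model.device)

# Step 5: generate up to 256 new tokens
output_ids = model.generate(**inputs, max_new_tokens=256)

# Step 6: decode only the newly generated tokens back into text
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(processor.decode(new_tokens, skip_special_tokens=True))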
If you prefer watching and reading, here is the video version:
That's Pretty Cool, Right?
A single script. No API key. No cloud. You just ran a 7 billion parameter vision-language model entirely on your own machine, and it works whether you're on a Mac, Linux, or Windows with WSL, which is what I was using when I wrote this.
This works great as a one-off script: drop in a file, ask a question, get an answer. But what if you wanted to build something on top of it? A web app, a tool that watches a folder, or anything that needs to talk to the model repeatedly?
That's exactly what the next post is about. I'll show you how to wrap Edge as a local API, so instead of running a script, you have a service running on your machine that any app can plug into. Same model, same privacy, but now it's a proper building block.
Another week, another batch of interesting reads. This edition covers AI video experiments, extending coding agents with .NET skills, open source contributions, and a few podcast episodes worth adding to your queue.
OpenWebUI + Model Runner: Zero-Config Local AI (Ignasi Lopez Luna) - I'm a fan of OpenWebUI, and seeing this synergy between projects is awesome! This post is a must if you are thinking of running some AI locally; it will save you some pain.
The Rise of The Claw with OpenClaw's Peter Steinberger (Hanselminutes with Scott Hanselman) - Finally! The OpenClaw episode we all needed. Peter has been interviewed many times, and many people have written about OpenClaw, but this episode was exactly what I needed.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, podcasts and books that catch my interest during the week.
I'm always on the lookout for innovative ideas to streamline my development workflow. This week, I stumbled upon some fascinating reads that caught my eye, among them an article about building an AI-powered pull request agent using the GitHub Copilot SDK, and another demonstrating the secure use of OpenClaw in Docker sandboxes.
Run OpenClaw Securely in Docker Sandboxes (Oleg Selajev) - Now that looks much safer! I just need to decide what I would connect it to, and I could finally try it. Nice tutorial.
What's new with GitHub Copilot coding agent (Andrea Griffiths) - Oh wow, I didn't know we could continue our session from the cloud to the CLI and the other way around. Pretty cool!
Cleaner switch expressions with pattern matching in C# (Bart Wullems) - My first reaction when I read the code in this post was: Wait, what? That works?! So, I guess I also forgot that C# has evolved; it's so much clearer.
Welcome to this new Reading Notes post, a collection of interesting articles and resources I've been absorbing lately! This week's roundup dives into a variety of topics, from practical storage solutions and leveraging AI for code upgrades to exploring the intersection of AI and business value. Get ready for a diverse mix of tech insights and management reflections.
Programming
Upgrading old code with Copilot (Mark Downie) - In this post, Mark shows one area where AI is really making a huge difference.
So, you want to run OpenClaw? (Jim Gumbley) - An interesting post that gives us hints on where to start looking for more information about how to improve protection.
Books
The Making of a Manager: What to Do When Everyone Looks to You (Julie Zhuo) - Most management books are written by advanced managers, people with a lot of experience who already have the "manager" mindset well established in their heads. This book feels different, more accessible, closer to a conversational tone. In this book, Julie shares her stories of becoming a manager and the advice she learned along the way. I think it's a good book to get started on this topic, especially if you are new to that position or thinking about it, to understand and be better equipped for the new challenges coming your way.
This week's Reading Notes is packed with AI insights, open-source discoveries, programming tips, and podcast episodes that will leave you eager to dive in. From Ralph Wiggum's coding secrets to the dangers of one-shot glamour, we've got it all covered. So grab your favourite beverage, settle in, and get ready to level up your tech game!
499: Going Full Ralph, CLI, & GitHub Copilot SDK?!?! (Merge Conflict) - Really fun episode to listen to: the explanation of the Ralph loop pattern, and hearing Frank panicking (sorry Frank) about all the emojis. A very instructive episode.
This week's collection brings interesting articles, blog posts, and insights from the world of technology, programming, and AI. From the latest developments in Claude Code and AI models for coding to discussions on the security of AI assistants and the future of the craft of programming, there's something for everyone in this edition of Reading Notes.
Balancing cloud innovation with AI practicality, this week’s notes blend Azure updates, .NET’s AI roadmap, and clever Python hacks. A sharp reminder on burnout prevention anchors the mix, while creative teams and DevOps culture inspire fresh perspectives. From Docker model runners to Git worktrees, every corner here offers actionable insights or a spark of curiosity, no clichés, just tools and truths for developers navigating the stormy seas of tech.
493: Git's most powerful but underutilized tool (Merge Conflict) - Great episode where I finally understood what Git worktrees are and how to use them in VS Code. Frank is also hilarious in this episode with all the emojis.
How a Creative Team Learned to Love Azure DevOps (Golnaz) - More teams should follow this example and bend tools so they truly help them. We shouldn't accept the status quo. It's also great to see the synergy between different teams.
A lot of good stuff crossed my radar this week. From Aspire’s continued evolution and local AI workflows with Ollama, to smarter, more contextual help in GitHub Copilot, the theme is clear: better tools, used more intentionally. I also bookmarked a few thoughtful pieces on leadership and communication that are worth slowing down for. Plenty here to explore, whether you’re deep in code or thinking about how teams actually work.
The end of the curl bug-bounty (Daniel Stenberg) - I didn't know about this effort, and of course it's sad to only learn about it now that it's ending, but I'm glad those programs exist.
The Art of the Oner (Golnaz) - Another great post from Golnaz, this time about how to help the message land: how and why one-take shots help when presenting, and the effort they represent.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
This week’s Reading Notes bring together programming tips, AI experiments, and cloud updates. Learn to build Python CLI tools. Untangle GitHub issue workflows. Try running AI models locally. Catch up on Azure news. And explore ideas around privacy and cloud architecture. Short reads. Useful takeaways.
5 Minimal API myths and the real truth (David Grace) - Nice post. I really like minimal APIs, and it's true that people are asking themselves a lot of questions; here you will find many great answers.
Who Decides Who Doesn’t Deserve Privacy? (Troy Hunt) - A very interesting post about privacy. We should also think about all that when designing systems and applications.
This week’s reads blend cutting-edge tech with practical insights, like how Aspire elevates JavaScript to a first-class citizen in modern development, or why AI’s push toward typed languages might just be the future. From building a self-hosted model registry to uncovering AI’s surprising role in video production (who knew Adobe had a sound AI gem?), there’s plenty to unpack. And if data-driven wardrobe experiments count as quirky, this week’s got you covered too.
Programming
Aspire for JavaScript developers (David Pine) - JavaScript and all its frameworks are now first-class citizens in Aspire. This post explains what that means and what the benefits are for developers.
AI
Why AI is pushing developers toward typed languages (Cassidy Williams) - This post traces the evolution of which languages AI tends to use. I would agree, as I've noticed TypeScript showing up in more and more blog posts and videos online.
I tracked everything I wore in 2025. Was it worth it? - I never thought about data related to my wardrobe, but playing with data and trying to understand what it means and how it can be used is always fun. Thanks for sharing!