
Apps That See: Bringing Vision AI to Your Projects

I was wearing a t-shirt with a partial Reka logo at the edge of the frame. I never said the word "Reka" in that segment. The model caught the logo, connected it to the topic I was discussing, and mentioned it unprompted in the output it generated.

That is not a transcript trick. The model was watching.

At the AI Agents Conference 2026, I gave a talk called "Apps That See" — six live demos showing how to build applications that understand images and video. Every project is open source and ready to clone. This post walks through each one so you have enough context to pick it up, run it, and adapt it to something useful in your own work.

Vision AI Is Accessible Now

Not long ago, working with visual AI meant GPU clusters, specialized teams, and weeks of training. Today a compressed 4B model like Qwen or Gemma 3 runs on a regular laptop and handles image description well enough to prototype. Step up to a 7B model like Reka Edge and the quality improves meaningfully. It also runs locally: a gaming PC with a decent GPU is enough. No server required.

For tasks that need more power, cloud APIs give you faster results without local hardware requirements. The tradeoff is that your images and video go to a third-party provider. For corridor cameras or stock photos that is usually acceptable. For private or sensitive content, local is the better default.

The practical pattern: start local to build and test, then decide whether the task actually requires cloud.
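
To make the local-first pattern concrete, here is a minimal Python sketch. It assumes a local runtime such as Ollama exposing an OpenAI-compatible endpoint on port 11434 with a vision-capable model already pulled; the endpoint and model name are illustrative, not prescriptive.

import base64
from openai import OpenAI

# Local OpenAI-compatible server (assumption: Ollama-style, port 11434).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

# Local models typically want the image inlined as a base64 data URI.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="qwen2.5vl",  # any locally installed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            {"type": "text", "text": "Describe this image in one paragraph."},
        ],
    }],
)
print(response.choices[0].message.content)

Point the same code at a cloud endpoint later and only the base_url, API key, and model name change.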

What You Can Build With This

  • Accessibility: Describe a scene in real time for visually impaired users, or identify objects on demand.
  • Content creation: Extract structure from a video and turn it into a blog post, caption set, or highlight reel.
  • Productivity: Search through thousands of videos for a specific object or topic, even when the title gives no indication of the content.
  • Automation: Trigger actions only when specific visual conditions are met, such as an unrecognized person entering a room.
  • Fun: Most developers' first contact with AI is building something for themselves, and that is a perfectly valid starting point.


Demo 1: Caption This — Generate a Prompt from Any Image


Source: fboucher/caption-this

If you work with image generation models, you end up with a lot of images to test and compare. Writing the text prompt that would reproduce a specific image is tedious. This tool does it for you: give it an image, get back a prompt you can use to regenerate something similar.

The demo uses an HTTP client extension in VS Code to call the API directly, no SDK. Pass an image, ask for a plain-text prompt that would recreate it. One prompt detail that improved results noticeably: explicitly telling the model "no markdown" in the instruction.

POST https://api.reka.ai/v1/chat
Content-Type: application/json

{
  "model": "reka-flash",
  "messages": [{
    "role": "user",
    "content": [
      { "type": "image_url", "image_url": { "url": "https://..." } },
      { "type": "text", "text": "Write a prompt in plain text, no markdown, that would generate the exact same image." }
    ]
  }]
}

One thing to know when testing this across different models: some accept an image URL directly, others require the image as a base64-encoded string. Same task, same prompt, different input contract. If you plan to swap models in your app, account for this difference from the start.
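
A small helper makes that difference a one-line decision instead of scattered special cases. This is a sketch, not any project's actual code; the two contracts shown are the URL form and the base64 data URI form described above.

import base64
import mimetypes

def image_part(source: str, inline: bool) -> dict:
    # Build the image part of an OpenAI-style chat message.
    if not inline:
        # Contract 1: the model fetches the image from a public URL.
        return {"type": "image_url", "image_url": {"url": source}}
    # Contract 2: the model wants the bytes inlined as a base64 data URI.
    mime = mimetypes.guess_type(source)[0] or "image/jpeg"
    with open(source, "rb") as f:
        data = base64.b64encode(f.read()).decode()
    return {"type": "image_url",
            "image_url": {"url": f"data:{mime};base64,{data}"}}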

Demo 2: Media Library — Compare Vision Models Side by Side


Source: fboucher/media-library

This is a web app that connects to multiple vision backends and lets you switch between them at runtime. The motivation: benchmark Reka Edge — running locally, through OpenRouter, or directly through the Reka API — against other models on real tasks.

Object detection surfaces the biggest portability problem. Some models return bounding boxes in an HTML-style bracket format with pixel coordinates. Others use a 2D box structure with a different coordinate scheme. If you code against one format and then swap models, your rendering breaks. There is no standard here — handle the differences at the application layer, not the model layer.
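
As an illustration of handling it at the application layer, here is a sketch that normalizes two hypothetical output styles into one pixel-coordinate structure. The tag names and coordinate scales vary by model, so treat both patterns as assumptions to adapt, not as any model's documented format.

import re

def parse_boxes(text: str, width: int, height: int) -> list[dict]:
    boxes = []
    # Style A (assumed): HTML-style tags with absolute pixel coordinates,
    # e.g. <box>12,34,240,310</box>
    for m in re.finditer(r"<box>(\d+),(\d+),(\d+),(\d+)</box>", text):
        x1, y1, x2, y2 = map(int, m.groups())
        boxes.append({"x1": x1, "y1": y1, "x2": x2, "y2": y2})
    # Style B (assumed): a 2D list on a normalized 0-1000 grid,
    # e.g. [120, 80, 560, 430] read as (ymin, xmin, ymax, xmax)
    for m in re.finditer(r"\[(\d+),\s*(\d+),\s*(\d+),\s*(\d+)\]", text):
        ymin, xmin, ymax, xmax = map(int, m.groups())
        boxes.append({"x1": xmin * width // 1000,
                      "y1": ymin * height // 1000,
                      "x2": xmax * width // 1000,
                      "y2": ymax * height // 1000})
    return boxes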

The app uses the OpenAI API format as the common interface across all backends. Any model with a compatible endpoint can be swapped in with minimal changes. It does not eliminate the per-model quirks, but it reduces the friction of switching to a configuration change rather than a rewrite.
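
The swap-by-configuration idea can be as small as a dictionary of endpoints. A sketch, with the endpoint URLs and model names as placeholders rather than the app's real settings:

from openai import OpenAI

# Every backend speaks the OpenAI chat format; switching is a lookup.
BACKENDS = {
    "local": {"base_url": "http://localhost:11434/v1",
              "api_key": "not-needed", "model": "qwen2.5vl"},
    "cloud": {"base_url": "https://openrouter.ai/api/v1",
              "api_key": "YOUR_KEY", "model": "your/model-slug"},
}

def client_for(name: str) -> tuple[OpenAI, str]:
    cfg = BACKENDS[name]
    return OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"]), cfg["model"]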

Video input is supported too, though far fewer models handle it than images. Of the models tested, Reka Edge is the standout for video — the others either reject it or behave inconsistently.

Demo 3: Video2Blog — Turn a Video into a Structured Post


Source: fboucher/video2blog

I built this for myself. I do a lot of tutorial videos and I wanted a tool that would turn a recording into a structured blog post without me having to write one from scratch.

The tool sends the video to a vision model with a detailed prompt: target structure, tone, format, and an instruction to flag moments where a screenshot would add value. The model returns timestamps — it cannot extract frames itself, but it tells you exactly where to look, and you pull them locally with ffmpeg.
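
Pulling the flagged frames is one ffmpeg call per timestamp. A minimal sketch, assuming the timestamps come back in HH:MM:SS form; the file names are illustrative.

import subprocess

def extract_frame(video: str, timestamp: str, out_jpg: str) -> None:
    # -ss before -i seeks fast; -frames:v 1 grabs a single frame.
    subprocess.run(
        ["ffmpeg", "-ss", timestamp, "-i", video,
         "-frames:v", "1", "-q:v", "2", out_jpg],
        check=True,
    )

extract_frame("tutorial.mp4", "00:03:27", "screenshot-01.jpg")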

That creates one architectural quirk worth knowing: the video lives in two places. ffmpeg needs it locally to extract frames. The hosted model needs it uploaded to analyze content. For a one-evening project it works well enough, and I use it often enough that it has paid for itself many times over.

After the first draft, you stay in a conversation loop: change the tone, translate to French, swap a timestamp, restructure a section. The model holds context and iterates with you until the result is what you want.

Demo 4: Video Analyzer — Search and Query Your Video Library


Source: reka-ai/api-examples-dotnet

Most video search runs on titles, descriptions, and transcribed audio. This demo searches by what is actually visible on screen.

The app pre-indexes a video library by sending each video through a vision model ahead of time. When a query arrives, the heavy work is already done. A search for "robot arm" returns the right video — a clip of a robotic arm animation. It also returns a false positive: fast-moving hands apparently looked close enough to fool the model. Useful, not perfect, and worth designing around in your UX.
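
The shape of the pre-indexing step is simple enough to sketch. Here describe_video is a stand-in for whichever vision API call you use, and the JSON file plus substring match are deliberately naive; a real app would use embeddings for fuzzier matching.

import json
import pathlib

def build_index(library_dir: str, describe_video) -> None:
    # Slow path, run once ahead of time: one model call per video.
    index = {p.name: describe_video(p)
             for p in pathlib.Path(library_dir).glob("*.mp4")}
    pathlib.Path("index.json").write_text(json.dumps(index))

def search(query: str) -> list[str]:
    # Fast path, run per query: no model call at all.
    index = json.loads(pathlib.Path("index.json").read_text())
    return [name for name, desc in index.items()
            if query.lower() in desc.lower()]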

The Q&A feature goes further. You pick a video and ask a specific question. "What database was used?" returned MySQL — and noted it was running in a Docker container. The model identified that from watching the screen, not from audio. No transcript needed.

From there, you can generate study materials from any recorded session. The demo produces a multiple-choice quiz with answer options, correct answers, and explanations. The model is doing comprehension, not transcription.

Demo 5: Roast My Life — What the Model Actually Sees


Source: reka-ai/api-examples-python

I never mentioned the pictures on my wall. The model did.

In a video about Python and AI, the model's generated blog post made a remark about the artwork hanging behind me. I had said nothing about it. The model noticed, mentioned it, and moved on as if it were obvious.

Then there was the t-shirt moment described at the top of this post. A partial logo, half out of frame, no mention of it anywhere in the audio — and the model connected it to the topic anyway.

This demo is named Roast My Life because the model ends up commenting on things you never intended to share. But the real point is what it reveals: a vision model is not a smarter transcript. It is watching. The larger models do this particularly well, and once you see it, it changes how you think about what these tools can do — and what they will pick up without you asking.

Demo 6: N8N Automation — No-Code Video Clipping Pipeline


Source: N8N Reka Vision integration

Vision AI does not always need custom code. This demo wires everything together in N8N, a visual workflow tool, with no programming required.

The trigger is a new video published to YouTube. The workflow finds an engaging clip, reformats it from horizontal to vertical, adds captions in a specific style (all lowercase, specific colors — chosen to be obviously distinct from any default), and sends an email with the finished clip attached. The whole thing runs automatically.

For developers, this pattern is worth knowing even if you code everything else. Many real business workflows have a vision AI step that fits cleanly into a larger automation, and a no-code tool is often the fastest way to ship it.


Watch the Full Talk

The demos above are the written version. The live version, with the actual code running, models responding in real time, and a few things going sideways in interesting ways, is on YouTube.


All the Code

The demos span Python, C#, raw HTTP, Go, and N8N. Vision AI is not tied to a specific stack — if your environment can make an HTTP request, it can call a vision model.

All projects:
  • fboucher/caption-this
  • fboucher/media-library
  • fboucher/video2blog
  • reka-ai/api-examples-dotnet
  • reka-ai/api-examples-python
  • N8N Reka Vision integration


Reading Notes #483


Already Tuesday! Time for a new Reading Notes post: a list of all the articles, blog posts, and books that caught my interest during the week. It's a mix of current news and what I consumed.

This week, the Cloud Summit (https://azuresummit.live), an 11-day free conference focused on Azure, is happening. There is surely a session that will catch your interest.

If you think you have interesting content, share it!


Cloud


Programming


Miscellaneous


~Frank


Reading Notes #281

Cloud


Programming


Databases


Miscellaneous



What is an AzureCon View Party?


First what is AzureCon?


In less than a week, Microsoft is hosting a great event called AzureCon. This event is a virtual conference focused on Microsoft Azure, happening entirely online. Even better, it will be available to watch live for free! The lineup has been published, and four great speakers will share the latest news about Azure with us.


What is a View Party?

A View Party is the chance to watch the same live content as everyone else, but in a group. It's an opportunity to ask your questions while the event is happening and get answers from MVPs or other viewers.

Where are those View Parties?

By the time I'm writing this post, I don't know all of them, but sharing is good: if you know of a view party happening in your area, share the info in the comment section. You could also send me an e-mail, and I will update this post. I will be in Ottawa, looking forward to meeting you there!
  • Montreal
    MsDevMtl Community
2000 McGill College, 5th floor, Montréal, QC
    Meetup
  • Ottawa
    Ottawa IT Community
100 Queen Street, Suite 500, Ottawa, ON
    Meetup


Microsoft MVP Virtual Conference, one more day to go



Yesterday was the first day of the #MVPvConf 2015. I already wrote about this event, presented by MVPs, for everyone.

I really enjoyed switching from one track to the other, following whatever best fit my interests. I even had some really tough decisions to make because two sessions were scheduled at the same time! But wait; there is more!


Today is Day 2, and many great presentations will be available.

Developer track with topics like: Roslyn, Windows 10, ASP.NET, Azure, Cross-Platform...

IT Pro track with topics like: DevOps, System Center, Hyper-V, Migration, Office 365...

Consumer track with topics like: Pivot Table, Data, Windows 10, Cortana, OneNote, Security...

LATAM track with topics like: Power BI, Exchange, Enterprise Mobility Suite (EMS), Office 365...

Brazil track with topics like: Azure Active Directory, Hybrid Cloud, SQL Server...


Build your own agenda, and come join us!




The first ever Microsoft MVP Virtual Conference

Did you hear about that great free event that Microsoft and the MVPs are putting on in May? On the 14th and 15th (yes, two days!), join Microsoft MVPs from the Americas region as they share their knowledge and real-world expertise during a free event, the MVP Virtual Conference.

by MVPs, for everyone


Gigantic event

The MVP Virtual Conference will showcase 95 sessions of content for IT Pros, Developers, and Consumer experts, designed to help you navigate life in a mobile-first, cloud-first world. Microsoft's Corporate Vice President of Developer Platform, Steve Guggenheimer, will be on hand to deliver the opening keynote address.
 

Still not sure if you will find what you are looking for?

The conference will have 5 tracks:
  • IT Pro English
  • Dev English
  • Consumer English
  • Portuguese mixed sessions
  • Spanish mixed sessions
There is something for everyone! Learn from the best and brightest MVPs in the tech world today and develop some great skills!

Join Me!

Be sure to register quickly to hold your spot and tell your friends & colleagues.



The conference will be widely covered on social media; you can join the conversation by following @MVPAward and using the hashtag #MVPvConf.