I'm excited to share that my new n8n template has been approved and is now available for everyone to use! This template automates the process of creating AI-generated video clips from YouTube videos and sending notifications directly to your inbox.
If you've ever wanted to automatically create short clips from long YouTube videos, this template is for you. It watches a YouTube channel of your choice, and whenever a new video is published, it uses AI to generate engaging short clips perfect for social media. You get notified by email when your clip is ready to download.
How It Works
The workflow is straightforward and runs completely on autopilot:
Monitor YouTube channels - The template watches the RSS feed of any YouTube channel you specify. When a new video appears, the automation kicks off.
Request AI clip generation - Using Reka's Vision API, the workflow sends the video for AI processing. You have full control over the output:
Write a custom prompt to guide the AI on what kind of clip to create
Choose whether to include captions
Set minimum and maximum clip duration
Smart status checking - When the clips are ready, you receive a success email with your download link. As a safety feature, if the job takes too long, you'll get an error notification instead.
Getting Started is Easy
The best part? You can install this template with just one click from the n8n Templates page. No complex setup required!
After installation, you'll just need two quick things:
A free Reka API key (grab one at platform.reka.ai)
A Gmail account (or use any email provider you like)
That's it! The template comes ready to use. Simply add your YouTube channel RSS feed, connect your API key, and you're ready to start generating clips automatically. The whole setup takes just a few minutes.
If you run into any questions or want to share what you've built, join the Reka Discord community. I'd love to hear how you're using this template!
Show Me
In this short video I show you how to get that template into your n8n and how to configure it.
I built an AI research agent that actually browses the live web and finds tech events, no search loops, no retry logic, no hallucinations. Just ask a question and get structured JSON back with the reasoning steps included. The secret? An API that handles multi-step research automatically. Built with .NET/Blazor in a weekend. Watch the video | Get the code | Free API key (French version)
Happy New Year! I wanted to share something I recently presented at the AI Agents Conference 2025: how to build intelligent research assistants that can search the live web and return structured, reliable results.
Coming back from the holidays, I'm reminded of a universal problem: information overload. Whether it's finding relevant tech conferences, catching up on industry news, or wading through piles of documentation that accumulated during time off, we all need tools that can quickly search and synthesize information for us. That's what Reka Research does: it's an agentic AI that browses the web (or your private documents), answers complex questions, and turns hours of research into minutes. I built a practical demo to show this in action: an Event Finder that searches the live internet for upcoming tech conferences.
The Problem: Finding Events Isn't Just a Simple Search
Let me paint a picture. You want to find upcoming tech conferences about AI in your area. You need specific information: the event name, start and end dates, location, and most importantly, the registration URL.
A simple web search or basic LLM query falls short because:
You might get outdated information
The first search result rarely contains all required details
You need to cross-reference multiple sources
Without structure, the data is hard to use in an application
This is where Reka's Research API shines. It doesn't just search, it reasons through multiple steps, aggregates information, and returns structured, grounded results.
The Solution: Multi-Step Research That Actually Works
The core innovation here is multi-step grounding. Instead of making a single query and hoping for the best, the Research API acts like a diligent human researcher:
It makes an initial search based on your query
Checks what information is missing
Performs additional targeted searches
Aggregates and validates the data
Returns a complete, structured response
As a developer, you simply send your question, and the API handles the complex iteration. No need to build your own search loops or retry logic.
How It Works: The Developer Experience
Here's what surprised me most: the simplicity. You define your data structure, ask a question, and the API handles all the complex research orchestration. No retry logic, no search loop management.
The key is structured output. Instead of parsing messy text, you tell the API exactly what JSON schema you want:
public class TechEvent
{
    public string? Name { get; set; }
    public DateTime? StartDate { get; set; }
    public DateTime? EndDate { get; set; }
    public string? City { get; set; }
    public string? Country { get; set; }
    public string? Url { get; set; }
}
Then you send your query along with the schema, and the response comes back as structured data matching your definition. The API uses an OpenAI-compatible format, so if you've worked with ChatGPT's API, this feels instantly familiar.
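To make that concrete, here's a minimal sketch of such a request in C# using plain HttpClient. I'm assuming the OpenAI-compatible chat completions endpoint and the reka-flash-research model used elsewhere in my samples; the response_format block follows the OpenAI json_schema convention, so double-check the docs for the exact field names:

// Sketch only; needs using System.Text; and using System.Text.Json; at the top
var apiKey = Environment.GetEnvironmentVariable("REKA_API_KEY");
using var http = new HttpClient();

// Ask for output that matches the TechEvent shape defined above
var payload = new
{
    model = "reka-flash-research",
    messages = new[]
    {
        new { role = "user", content = "Find 3 upcoming AI conferences in Montreal with name, dates, city, country, and registration URL." }
    },
    // Field names below follow the OpenAI json_schema convention (assumption)
    response_format = new
    {
        type = "json_schema",
        json_schema = new
        {
            name = "tech_events",
            schema = new
            {
                type = "object",
                properties = new
                {
                    Name = new { type = "string" },
                    StartDate = new { type = "string" },
                    EndDate = new { type = "string" },
                    City = new { type = "string" },
                    Country = new { type = "string" },
                    Url = new { type = "string" }
                }
            }
        }
    }
};

using var request = new HttpRequestMessage(HttpMethod.Post, "http://api.reka.ai/v1/chat/completions");
request.Headers.Add("Authorization", $"Bearer {apiKey}");
request.Content = new StringContent(JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json");

var response = await http.SendAsync(request);
Console.WriteLine(await response.Content.ReadAsStringAsync());

The answer lands in the usual choices[0].message.content spot, ready to deserialize into a List<TechEvent>.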
The real magic? You also get back the reasoning steps, the actual web searches it performed and how it arrived at the answer. Perfect for debugging and understanding the agent's thought process.
In the video, I walk through the complete implementation, including domain filtering, location-aware search, and handling the async research calls. The full source code is on GitHub if you want to dive deeper.
I'm curious what you'll build with this. Research agents that monitor news? Product comparison tools? Documentation synthesizers? The API works for any web research task. If you build something, tag me. I'd love to see it.
My colleague Annie loves clipping videos from her favorite creators. You know that feeling when you catch a great moment and turn it into a perfect short? That's her jam. But she kept running into this frustrating problem: by the time she saw a new video and got around to clipping it, everyone else had already done it. She was always late to the party.
When she told me about this, I thought, "What if we could automatically clip videos the moment they're published?" That way, she'd have her clips ready to post while the content is still fresh.
So I put my experience with integration tools to work and built something for her—and for anyone else who has this same problem. And you know what? I'm pretty excited to share it with you.
I put together an open-source n8n template that automatically clips YouTube videos using AI. Here's how it works:
It watches for new videos from your favorite YouTube channel
Sends the video to Reka's AI to create clips automatically
Checks when the clips are ready and sends you an email with the download link
The whole thing runs on n8n (it's a free automation platform), and it uses Reka's Clips API to do the AI magic. Best part? It's completely free to use and set up.
How It Actually Works
I built this using two n8n workflows that work together:
Workflow 1: Submit Reel Creation
This one's the watcher. It monitors a YouTube channel's RSS feed, and the moment a new video drops, it springs into action:
Grabs the video URL
Sends it to Reka's API with instructions like "Create an engaging short video highlighting the best moments"
Gets back a job ID so we can track the progress
Saves everything to an n8n data table
The cool thing is you can customize how the clips are made. Want vertical videos for TikTok? Done. Need subtitles? Got it. You can set the clip length anywhere from 0 to 30 seconds. It's all in the JSON configuration.
{
  "video_urls": ["{{ $json.link }}"],
  "prompt": "Create an engaging short video highlighting the best moments",
  "generation_config": {
    "template": "moments",
    "num_generations": 1,
    "min_duration_seconds": 0,
    "max_duration_seconds": 30
  },
  "rendering_config": {
    "subtitles": true,
    "aspect_ratio": "9:16"
  }
}
Workflow 2: Check Reel Status
This one's the patient checker. Since AI takes time to analyze a video and create clips (could be several minutes depending on the video length), we need to check in periodically:
Looks at all the pending jobs in our data table
Asks Reka's API "Hey, is this one done yet?"
When a clip is ready, sends you an email with the download link
Marks the job as complete so we don't check it again
I set mine to check every 15-30 minutes. No need to spam the API—good things take time! 😉
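Under the hood, that check is just one authenticated request per pending job. A sketch of the idea (the endpoint path here is a placeholder, not the documented Clips API route; copy the real URL from the template's HTTP Request node):

# Placeholder path for illustration only; use the URL from the template's HTTP Request node
curl "https://api.reka.ai/v1/clips/<JOB_ID>" \
  -H "X-Api-Key: <API_KEY>"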
Setting It Up (It's Easier Than You Think)
When I was helping Annie set this up (you can watch the full walkthrough below), we got it working in just a few minutes. Here's what you need to do:
Step 1: Create Your Data Table
In n8n, create a new data table. Here's a pro tip I learned the hard way: don't name it "videos"—use something like "clip_jobs" or "reel_records" instead. Trust me on this one; it'll save you some headaches.
Your table needs four columns (all strings):
video_title - The name of the video
video_url - The YouTube URL
job_id - The ID Reka gives us to track the clip
job_status - Where we are in the process (queued, processing, completed, etc.)
Step 2: Import the Workflows
Download the two JSON files from the GitHub repo and import them into n8n. They'll show up with some errors at first—that's totally normal! We need to configure them.
Step 3: Configure "Submit Reel Creation"
RSS Feed Trigger: Replace my YouTube channel ID with the one you want to monitor. You can find any channel's ID in their channel URL (the feed URL format is shown after this list).
API Key: Head to platform.reka.ai and grab your free API key. Pop it into the Bearer Auth field. Give it a memorable name like "Reka API key" so you know what it is later.
Clip Settings: This is where you tell the AI what kind of clips you want. The default settings create one vertical video (9:16 aspect ratio) up to 30 seconds long with subtitles. But you can change anything:
The prompt ("Create an engaging short video highlighting the best moments")
Duration limits
Aspect ratio (square, vertical, horizontal—your choice)
Whether to include subtitles
Data Table: Connect it to that table you created in Step 1.
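For reference, YouTube exposes a standard RSS feed for every channel; the trigger just needs this URL with your channel ID filled in (CHANNEL_ID is the placeholder):

https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID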
Step 4: Configure "Check Reel Status"
Trigger: Start with the manual trigger while you're testing. Once everything works, switch it to a schedule trigger (I recommend every 15-30 minutes).
API Key: Same deal as before—add your Reka API key.
Email: Update the email node with your email address. You can customize the subject and body if you want, but the default works great.
Data Table: Make sure all the data table nodes point to your table from Step 1.
Watching It Work
When Annie and I tested it live, that moment when the first clip job came through with a "queued" status? That was exciting. Then checking back and seeing "completed"? Even better. And when that email arrived with the download link? Perfect.
The clips Reka AI creates are actually really good. It analyzes the entire video, finds the best key moments (or whatever your prompt asks for), adds subtitles, and packages it all up in a format ready for social media.
Wrap Up
This tool works great whether you're a clipper enthusiast or a content creator looking to generate clips for your own channel. Once you set it up, it just runs. New video drops at 3 AM? Your clip is already processing. You wake up to a download link in your inbox.
It's open source and free to use. Take it, customize it, make it your own. And if you come up with improvements or have ideas, I'd love to hear about them. Share your updates on GitHub or join the conversation in the Reka Community Discord.
Watch the Full Setup
I recorded the entire setup process with Annie (she was testing it for the first time). You can see every step, every click, and yes, even the little mistakes we made along the way. That's real learning right there.
Get Started
Ready to try it? Here's everything you need:
🔗 n8n template / GitHub: https://link.reka.ai/n8n-clip
🔗 Reka API key: https://link.reka.ai/free (renewable & free)
Ever wished you could ask AI from anywhere without needing an interface? Imagine just typing ? and your question in any terminal the moment it pops into your head, and getting the answer right away! In this post, I explain how I wrote a tiny shell script that turns this idea into reality, transforming the terminal into a universal AI client. You can query Reka, OpenAI, or a local Ollama model from any editor, tab, or pipeline—no GUI, no heavy clients, no friction.
Small, lightweight, and surprisingly powerful: once you make it part of your workflow, it becomes indispensable.
There is almost always a terminal within reach—embedded in your editor, sitting in a spare tab, or already where you live while building, debugging, and piping data around. So why break your flow to open a separate chat UI? I wanted to just type a single character (?) plus my question and get an answer right there. No window hopping. No heavy client.
How It Works
The trick is delightfully small: send a single JSON POST request to whichever AI provider you feel like (Reka, OpenAI, Ollama locally, etc.):
# Example: Reka
curl https://api.reka.ai/v1/chat \
  -H "X-Api-Key: <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What is the origin of thanksgiving?"
      }
    ],
    "model": "reka-core",
    "stream": false
  }'

# Example: Ollama local
curl http://127.0.0.1:11434/api/chat \
  -d '{
    "model": "llama3",
    "messages": [
      {
        "role": "user",
        "content": "What is the origin of thanksgiving?"
      }
    ],
    "stream": false
  }'
Once we get the response, we extract the answer field from it. A thin shell wrapper turns that into a universal “ask” verb for your terminal. Add a short alias (?) and you have the most minimalist AI client imaginable.
Let's go into the details
Let me walk you through the core script step-by-step using reka-chat.sh, so you can customize it the way you like. Maybe this is a good moment to mention that Reka has a free tier that's more than enough for this. Go grab your key—after all, it's free!
The script (reka-chat.sh) does four things:
Captures your question
Loads an API key from ~/.config/reka/api_key
Sends a JSON payload to the chat endpoint with curl.
Extracts the answer using jq for clean plain text.
1. Capture Your Question
This part of the script is a pure laziness hack. I wanted to save keystrokes by not requiring quotes when passing a question as an argument. So ? What is 32C in F works just as well as ? "What is 32C in F".
# No arguments: read the question from stdin (so you can pipe into it);
# otherwise, join all arguments into one question, no quotes needed.
if [ $# -eq 0 ]; then
  if [ ! -t 0 ]; then
    QUERY="$(cat)"
  else
    echo "Usage: reka-chat.sh <question>" >&2
    exit 1
  fi
else
  QUERY="$*"
fi
2. Load Your API Key
If you're running Ollama locally you don't need any key, but for all other AI providers you do. I store mine in a locked-down file at ~/.config/reka/api_key, then read and trim trailing whitespace like this:
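A minimal sketch of that read-and-trim step (assuming the key lives in ~/.config/reka/api_key as above):

API_KEY_FILE="$HOME/.config/reka/api_key"
# Read the key and strip any stray whitespace/newline
API_KEY="$(tr -d '[:space:]' < "$API_KEY_FILE")"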
Finally, we parse the JSON response with jq to pull out just the answer text. If jq isn't installed we display the raw response, but a formatted answer is much nicer. If you are customizing for another provider, you may need to adjust the JSON path here. You can add echo "$RESPONSE" >> data_sample.json to the script to log raw responses for tinkering.
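Putting steps 3 and 4 together, here's a compact sketch of the send-and-parse core. The jq path assumes an OpenAI-style response shape; adjust it for your provider (this is where that data_sample.json logging trick pays off):

# Build the JSON payload safely; jq escapes the question for us
PAYLOAD="$(jq -n --arg q "$QUERY" \
  '{model: "reka-core", stream: false, messages: [{role: "user", content: $q}]}')"

RESPONSE="$(curl -s https://api.reka.ai/v1/chat \
  -H "X-Api-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD")"

# Extract just the answer text; adjust the path to your provider's response shape
echo "$RESPONSE" | jq -r '.choices[0].message.content'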
Now that we have the script, make it executable with chmod +x reka-chat.sh, and let's add an alias to your shell config to make it super easy to use. Add one line to your .zshrc or .bashrc that looks like this:
alias \?="$REKA_CHAT_SCRIPT"
Because ? is a special character in the shell, we escape it with a backslash. After adding this line, reload your shell configuration with source ~/.zshrc or source ~/.bashrc, and you are all set!
The Result
Now you can ask questions directly from your terminal. Want to know the origin of Thanksgiving? Ask it like this:
? What is the origin of Thanksgiving
And if you want to keep the quotes, you do you!
Extra: Web research
I couldn't stop there! Reka also supports web research, which means it can fetch and read web pages to provide more informed answers. Following the same pattern described previously, I wrote a similar script called reka-research.sh that sends a request to Reka's research endpoint. This obviously takes a bit more time to answer, as it's making different web queries and processing them, but the results are often worth the wait—and they are up to date! I used the alias ?? for this one.
On the GitHub repository, you can find both scripts (reka-chat.sh and reka-research.sh) along with a script to create the aliases automatically. Feel free to customize them to fit your workflow and preferred AI provider. Enjoy the newfound superpower of instant AI access right from your terminal!
What's Next?
With this setup, the possibilities are endless. Reka supports questions related to audio and video, which could be interesting to explore next. The project is open source, so feel free to contribute or suggest improvements. You can also join the Reka community on Discord to share your experiences and learn from others.
Ever wished you could ask a question and have the answer come only from a handful of trusted documentation sites—no random blogs, no stale forum posts? That’s exactly what the Check-In Doc MCP Server does. It’s a lightweight Model Context Protocol (MCP) server you can run locally (or host) to funnel questions to selected documentation domains and get a clean AI-generated answer back.
What It Is
The project (GitHub: https://github.com/fboucher/check-in-doc-mcp) is a Dockerized MCP server that:
Accepts a user question.
Calls the Reka AI Research API with constraints (only allowed domains).
Returns a synthesized answer based on live documentation retrieval.
You control which sites are searchable by passing a comma‑separated list of domains (e.g. docs.reka.ai,docs.github.com). That keeps results focused, reliable, and relevant.
What Is the Reka AI Research API?
Reka AI’s Research API lets you blend language model reasoning with targeted, on‑the‑fly web/document retrieval. Instead of a model hallucinating an answer from static training data, it can:
Perform limited domain‑scoped web searches.
Pull fresh snippets.
Integrate them into a structured response.
In this project, we use the research feature with a web_search block specifying:
allowed_domains: Only the documentation sites you trust.
max_uses: Caps how many retrieval calls it makes per query (controls cost & latency).
Details used here:
Model: reka-flash-research
Endpoint: http://api.reka.ai/v1/chat/completions
Auth: Bearer API key (generated from the Reka dashboard: https://link.reka.ai/free)
How It Works Internally
The core logic lives in ResearchService (src/Domain/ResearchService.cs). Simplified flow:
Initialization
Stores the API key + array of allowed domains, sets model & endpoint, logs a safe startup message.
Build Request Payload
The CheckInDoc(string question) method creates a JSON payload:
var requestPayload = new {
    model,
    messages = new[] { new { role = "user", content = question } },
    research = new {
        web_search = new {
            allowed_domains = allowedDomains,
            max_uses = 4
        }
    }
};
Send Request
Creates an HttpRequestMessage (POST), adds Authorization: Bearer <APIKEY>, and sends the JSON to Reka.
Parse Response
Deserializes into a RekaResponse domain object, returns the first answer string.
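In code, steps 3 and 4 come down to a few lines. A simplified sketch of that send-and-parse (the property names on RekaResponse are illustrative; the real shape lives in src/Domain):

using var request = new HttpRequestMessage(HttpMethod.Post, endpoint);
request.Headers.Add("Authorization", $"Bearer {apiKey}");
request.Content = new StringContent(JsonSerializer.Serialize(requestPayload), Encoding.UTF8, "application/json");

var response = await httpClient.SendAsync(request);
var body = await response.Content.ReadAsStringAsync();

// Map the JSON onto the domain object and return the first answer
// (property names are illustrative; see src/Domain for the real shape)
var parsed = JsonSerializer.Deserialize<RekaResponse>(body);
return parsed?.Choices?.FirstOrDefault()?.Message?.Content ?? string.Empty;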
Adding It to VS Code (MCP Extension)
You can run it as a Docker-based MCP server. Two simple approaches:
Option 1: Via “Add MCP Server” UI
In VS Code (with MCP extension), click Add MCP Server.
Choose type: Docker image.
Image name: fboucher/check-in-doc-mcp.
Enter allowed domains and your Reka API key when prompted.
Option 2: Via mcp.json (Recommended)
Alternatively, you can manually configure it in your mcp.json file. This will make sure your API key isn't displayed in plain text. Add or merge this configuration:
{
  "servers": {
    "check-in-docs": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "ALLOWED_DOMAINS=${input:allowed_domains}",
        "-e",
        "APIKEY=${input:apikey}",
        "fboucher/check-in-doc-mcp"
      ]
    }
  },
  "inputs": [
    {
      "id": "allowed_domains",
      "type": "promptString",
      "description": "Enter the comma-separated list of documentation domains to allow (e.g. docs.reka.ai,docs.github.com):"
    },
    {
      "id": "apikey",
      "type": "promptString",
      "password": true,
      "description": "Enter your Reka Platform API key:"
    }
  ]
}
How to Use It
Once configured, just ask your MCP-enabled environment to check in the docs for something. Ask a question, and the tool will search only the specified documentation domains and answer from what it finds.
Final Thoughts
It’s intentionally simple—no giant orchestration layer. Just a clean bridge between a question, curated domains, and a research-enabled model. Sometimes that’s all you need to get focused, trustworthy answers.
If this sparks an idea, clone it and adapt away. If you improve it (citations, richer error handling, multi-turn context)—send a PR!
I've spent most of my career building software in C# and .NET, and only used Python in IoT projects. When I wanted to build a fun project (an app that uses AI to roast videos), I knew it was the perfect opportunity to finally dig into Python web development.
The question was: where do I start? I hopped into a brainstorming session with Reka's AI chat and asked about options for building web apps in Python. It mentioned Flask, and I remembered friends talking about it being lightweight and perfect for getting started. That sounded right.
In this post, I share how I built "Roast My Life," a Flask app using the Reka Vision API.
The Vision (Pun Intended)
The app needed three core things:
List videos: Show me what videos are in my collection
Upload videos: Let me add new ones via URL
Roast a video: Send a selected video to an AI and get back some hilarious commentary
See it in action
Part 1: Getting Started with Environment Setup
The first hurdle was always going to be environment setup. I'm serious about keeping my Python projects isolated, so I did the standard dance:
Before even touching dependencies, I scaffolded a super bare-bones Flask app. One thing I enjoy from C# is that all dependencies are brought in in one shot, so I like doing the same with my Python projects: a requirements.txt instead of installing things ad‑hoc (pip install flask, then freezing later).
Dropping that file in first means the setup snippet below is deterministic. When you run pip install -r requirements.txt, Flask spins up using the exact versions I tested with, and you won't accidentally grab a breaking major update.
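For this app the requirements.txt stays tiny; these three packages cover the imports you'll see below (pin whatever versions you test with):

flask
requests
python-dotenv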
Here's the shell dance that activates the virtual environment and installs everything:
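# Create an isolated environment (.venv is just my folder name of choice),
# activate it, and install the pinned dependencies in one shot
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Drop your Reka credentials in a .env file (loaded by python-dotenv);
# placeholders here, copy the real base URL from the Reka Vision docs
echo "API_KEY=your-reka-api-key" > .env
echo "BASE_URL=<reka-vision-base-url>" >> .env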
To get that API key, I visited the Reka Platform and grabbed a free one. Seriously, a free key for playing with AI vision APIs? I was in.
With python app.py, I fired up the Flask development server and opened http://127.0.0.1:5000 in my browser. The UI was there, but... it was dead. Nothing worked.
Perfect. Time to build.
The Backend: Flask Routing and API Integration
Coming from ASP.NET Core's controller-based routing and Blazor, Flask's decorator-based approach felt right at home. All the code goes in the app.py file, and each route is defined with a simple decorator. But first things first: loading configuration from the .env file using python-dotenv:
from flask import Flask, request, jsonify
import requests
import os
from dotenv import load_dotenv
app = Flask(__name__)
# Load environment variables (like appsettings.json)
load_dotenv()
api_key = os.environ.get('API_KEY')
base_url = os.environ.get('BASE_URL')
All the imported packages are the same ones that need to be in requirements.txt. And we retrieve the API key and base URL from environment variables, just like in .NET Core.
Now, to be able to get roasted we need first to upload a video to the Reka Vision API. Here's the code—I'll go over some details after.
@app.route('/api/upload_video', methods=['POST'])
def upload_video():
    """Upload a video to Reka Vision API"""
    data = request.get_json() or {}
    video_name = data.get('video_name', '').strip()
    video_url = data.get('video_url', '').strip()

    if not video_name or not video_url:
        return jsonify({"error": "Both video_name and video_url are required"}), 400
    if not api_key:
        return jsonify({"error": "API key not configured"}), 500

    try:
        response = requests.post(
            f"{base_url.rstrip('/')}/videos/upload",
            headers={"X-Api-Key": api_key},
            data={
                'video_name': video_name,
                'index': 'true',  # Required: tells Reka to process the video
                'video_url': video_url
            },
            timeout=30
        )
        response_data = response.json() if response.ok else {}

        if response.ok:
            video_id = response_data.get('video_id', 'unknown')
            return jsonify({
                "success": True,
                "video_id": video_id,
                "message": "Video uploaded successfully"
            })
        else:
            error_msg = response_data.get('error', f"HTTP {response.status_code}")
            return jsonify({"success": False, "error": error_msg}), response.status_code
    except requests.Timeout:
        return jsonify({"success": False, "error": "Request timed out"}), 504
    except Exception as e:
        return jsonify({"success": False, "error": f"Upload failed: {str(e)}"}), 500
Once the information from the frontend is validated, we make a POST request to the Reka Vision API's /videos/upload endpoint. The parameters are sent as form data, and we include the API key in the headers for authentication. Here I was using URLs to upload videos, but you can also upload local files by adjusting the request accordingly. As you can see, it's pretty straightforward, and the documentation from Reka made it easy to understand what was needed.
The Magic: Sending Roast Requests to Reka Vision API
Here's where things get interesting. Once a video is uploaded, we can ask the AI to analyze it and generate content. The Reka Vision API supports conversational queries about video content:
from typing import Any, Dict  # add with the other imports at the top of app.py

def call_reka_vision_qa(video_id: str) -> Dict[str, Any]:
    """Call the Reka Video QA API to generate a roast"""
    headers = {'X-Api-Key': api_key} if api_key else {}
    payload = {
        "video_id": video_id,
        "messages": [
            {
                "role": "user",
                "content": "Write a funny and gentle roast about the person, or the voice in this video. Reply in markdown format."
            }
        ]
    }
    try:
        resp = requests.post(
            f"{base_url}/qa/chat",
            headers=headers,
            json=payload,
            timeout=30
        )
        data = resp.json() if resp.ok else {"error": f"HTTP {resp.status_code}"}
        if not resp.ok and 'error' not in data:
            data['error'] = f"HTTP {resp.status_code} calling chat endpoint"
        return data
    except requests.Timeout:
        return {"error": "Request to chat API timed out"}
    except Exception as e:
        return {"error": f"Chat API call failed: {e}"}
Here we pass the video ID and a prompt asking for a "funny and gentle roast." The API responds with AI-generated content, which we can then send back to the frontend for display. I try to give more "freedom" to the AI by asking it to reply in markdown format, which makes the output more engaging.
What really stood out to me was how approachable the Reka Vision API is. You don't need any special SDK—just the requests library making standard HTTP calls. And honestly, it doesn't matter what language you're used to; an HTTP call is pretty much always simple to do. Whether you're coming from .NET, Python, JavaScript, or anything else, you're just sending JSON and getting JSON back.
Authentication is refreshingly straightforward: just pop your API key in the header and you're good to go. No complex SDKs, no multi-step authentication flows, no wrestling with binary data streams. The conversational interface lets you ask questions in natural language, and you get back structured JSON responses with clear fields.
One thing worth noting: in this example, the videos are pre-uploaded and indexed, which means the responses come back fast. But here's the impressive part—the AI actually looks at the video content. It's not just reading a transcript or metadata; it's genuinely analyzing the visual elements. That's what makes the roasts so spot-on and contextual.
Final Thoughts
The Reka Vision API itself deserves credit for making video AI accessible. No complicated SDKs, no multi-GB model downloads, no GPU requirements. Just simple HTTP requests and powerful AI capabilities. I'm not saying I'm switching to Python full-time, but expect to see me sharing more Python projects in the future!
This week covers Microsoft’s open-source Agent Framework for agentic AI, prompt-injection risks and mitigations, and the causes of language model hallucinations. It also highlights NuGet package security updates, Azure SQL Data API Builder improvements, Reka’s new Parallel Thinking feature, and the latest in AI benchmarking.
MCP Prompt-Injection: The Trust Paradox in AI (Saurabh Davala, Sundeep Gottipati) - A good post to start learning about the real danger, become aware of the potential risks, and see what we can do about them.
Why language models hallucinate - Very interesting post that explains why we still have hallucinations and how they happen.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
I wanted to kick the tires on the upcoming .NET 10 C# script experience and see how far I could get calling Reka’s Research LLM from a single file, no project scaffolding, no .csproj. This isn’t a benchmark; it’s a practical tour to compare ergonomics, setup, and the little gotchas you hit along the way. I’ll share what worked, what didn’t, and a few notes you might find useful if you try the same.
All the sample code (and a bit more) is here: reka-ai/api-examples-dotnet · csharp10-script. The scripts run a small “top 3 restaurants” prompt so you can validate everything quickly.
We’ll make the same request in three ways:
OpenAI SDK
Microsoft.Extensions.AI for OpenAI
Raw HttpClient
What you need
The C# "script" feature used below ships with the upcoming .NET 10 and is currently available in preview. If you prefer not to install a preview SDK, you can run everything inside the provided Dev Container or on GitHub Codespaces. I include a .devcontainer folder with everything set up in the repo.
Set up your API key
We are talking about APIs here, so of course, you need an API key. The good news is that it's free to sign up with Reka and get one! It's a 2-click process, more details in the repo. The API key is then stored in a .env file, and each script loads environment variables using DotNetEnv.Env.Load(), so your key is picked up automatically. I went this way instead of using dotnet user-secrets because I thought it would be the way it would be done in a CI/CD pipeline or a quick script.
Run the demos
From the csharp10-script folder, run any of these scripts; each line is an alternative:
dotnet run 1-try-reka-openai.cs
dotnet run 2-try-reka-ms-ext.cs
dotnet run 3-try-reka-http.cs
You should see a short list of restaurant suggestions.
OpenAI SDK with a custom endpoint
Reka's API uses the OpenAI format; therefore, I thought of using the OpenAI NuGet package. To reference a package in a script, you use the #:package [package name]@[package version] directive at the top of the file. Here is an example:
#:package OpenAI@2.3.0
// ...
var baseUrl = "http://api.reka.ai/v1";
var openAiClient = new OpenAIClient(new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
Endpoint = new Uri(baseUrl)
});
var client = openAiClient.GetChatClient("reka-flash-research");
string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";
var completion = await client.CompleteChatAsync(
new List<ChatMessage>
{
new UserChatMessage(prompt)
}
);
var generatedText = completion.Value.Content[0].Text;
Console.WriteLine($" Result: \n{generatedText}");
The rest of the code is straightforward. You create a chat client, specify the Reka API URL, select the model, and then send a prompt. And it works just as expected. However, not everything was perfect; before I share more about that part, let's talk about Microsoft.Extensions.AI.
Microsoft Extensions AI for OpenAI
Another common way to use LLMs in .NET is one of the Microsoft.Extensions.AI NuGet packages. In our case, Microsoft.Extensions.AI.OpenAI was used.
#:package Microsoft.Extensions.AI.OpenAI@9.8.0-preview.1.25412.6
// ....
var baseUrl = "http://api.reka.ai/v1";
IChatClient client = new ChatClient("reka-flash-research", new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
Endpoint = new Uri(baseUrl)
}).AsIChatClient();
string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";
Console.WriteLine(await client.GetResponseAsync(prompt));
As you can see, the code is very similar. Create a chat client, set the URL, the model, and add your prompt, and it works just as well.
That's two ways to use the Reka API with different SDKs, but maybe you would prefer to go "SDKless"; let's see how to do that.
Raw HttpClient calling the REST API
Without any SDK to help, there are a few more lines of code to write, but it's still pretty straightforward. Let's see the code:
using var httpClient = new HttpClient();
var baseUrl = "http://api.reka.ai/v1/chat/completions";
var requestPayload = new
{
model = "reka-flash-research",
messages = new[]
{
new
{
role = "user",
content = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in New York city"
}
}
};
// Serialize the anonymous payload to JSON before sending
// (needs using System.Text.Json; at the top of the script)
var jsonPayload = JsonSerializer.Serialize(requestPayload);
using var request = new HttpRequestMessage(HttpMethod.Post, baseUrl);
request.Headers.Add("Authorization", $"Bearer {REKA_API_KEY}");
request.Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");
var response = await httpClient.SendAsync(request);
var responseContent = await response.Content.ReadAsStringAsync();
var jsonDocument = JsonDocument.Parse(responseContent);
var contentString = jsonDocument.RootElement
.GetProperty("choices")[0]
.GetProperty("message")
.GetProperty("content")
.GetString();
Console.WriteLine(contentString);
So you create an HttpClient, prepare a request with the right headers and payload, send it, get the response, and parse the JSON to extract the text. In this case, you have to know the JSON structure of the response, but it follows the OpenAI format.
What did I learn from this experiment?
I used VS Code while trying the script functionality. One thing that surprised me was that I didn't get any IntelliSense or autocompletion. I tried disabling the DevKit extension and changing the OmniSharp setting, but no luck. My guess is that it's because the feature is in preview, and it will work just fine in November 2025 when .NET 10 is released.
In this light environment, I encountered some issues where, for some reason, I couldn't use an https endpoint, so I had to use http. In the raw HttpClient script, I got errors about Reflection not being available. It could be related to the preview or something else; I didn't investigate further.
For the most part, everything worked as expected. You can use C# code to quickly execute some tasks without any project scaffolding. It's a great way to try out the Reka API and see how it works.
What's Next?
While writing those scripts, I encountered multiple issues that aren't related to .NET but more about the SDKs when trying to do more advanced functionalities like optimization of the query and formatting the response output. Since it goes beyond the scope of this post, I will share my findings in a follow-up post. Stay tuned!
Introducing Reka Research: Your AI Research Assistant
Meet Reka Research, a powerful AI agent that can search the web and analyze your files to answer complex questions in minutes. Whether you're staying up to date with AI news, screening resumes, or researching technical topics, Reka Research does the heavy lifting for you.
What Makes Reka Research Special?
Reka Research stands out in three key areas:
Top Performance: Best in class results on research benchmarks
Fast Results: Get thorough answers in 1-3 minutes
Full Transparency: See exactly how the AI reached its conclusions. All steps are visible.
Smart Web Search That Actually Works
Ever wished you could ask someone to research the latest AI developments while you focus on other work? That's exactly what Reka Research does.
Watch how it works:
In this demo, Jess and Sharath show how Reka Research can automatically gather the most important AI news from the past week. The AI visits multiple websites, takes notes, and presents a clean summary with sources. You can even restrict searches to specific domains or set limits on how many sites to check.
File Search for Your Private Documents
Sometimes the information you need isn't on the web - it's in your company's documents, meeting notes, or file archives. Reka Research can search through thousands of private files to find exactly what you're looking for.
See it in action:
In this example, Jess and Sharath show how HR teams can use Reka Research to quickly screen resumes. Instead of manually reviewing hundreds of applications, the AI finds candidates who meet specific requirements (like having a computer science degree and 3+ years of backend experience) in seconds!
Writing Better Prompts Gets Better Results
Like any AI tool, Reka Research works best when you know how to ask the right questions. The key is being specific about what you want and providing context.
Learn the techniques:
Jess and Yi share practical tips for getting the most out of Reka Research. Instead of asking "summarize meeting minutes," try "summarize April meeting minutes about public participation." The more specific you are, the better your results will be.
Ready to Try Reka Research?
Reka Research is currently available for everyone! Try it via the playground, or directly through the API. Whether you're researching competitors, analyzing documents, or staying current with industry trends, it can save you hours of work.
Want to learn more and connect with other users? Join our Discord community where you can: