Ask AI from Anywhere: No GUI, No Heavy Clients, No Friction

Ever wished you could ask AI from anywhere without needing an interface? Imagine just typing ? and your question in any terminal the moment it pops into your head, and getting the answer right away! In this post, I explain how I wrote a tiny shell script that turns this idea into reality, transforming the terminal into a universal AI client. You can query Reka, OpenAI, or a local Ollama model from any editor, tab, or pipeline—no GUI, no heavy clients, no friction.

Small, lightweight, and surprisingly powerful: once you make it part of your workflow, it becomes indispensable.

💡 All the code scripts are available at: https://github.com/reka-ai/terminal-tools


The Core Idea

There is almost always a terminal within reach—embedded in your editor, sitting in a spare tab, or already where you live while building, debugging, and piping data around. So why break your flow to open a separate chat UI? I wanted to just type a single character (?) plus my question and get an answer right there. No window hopping. No heavy client.

How It Works

The trick is delightfully small: send a single JSON POST request to whichever AI provider you feel like (Reka, OpenAI, Ollama locally, etc.):

# Example: Reka
curl https://api.reka.ai/v1/chat \
     -H "X-Api-Key: <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of thanksgiving?"
             }
           ],
           "model": "reka-core",
           "stream": false
         }'

# Example: Ollama local
curl http://127.0.0.1:11434/api/chat \
     -d '{
           "model": "llama3",
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of thanksgiving?"
             }
           ],
           "stream": false
         }'

Once we get the response, we extract the answer field from it. A thin shell wrapper turns that into a universal “ask” verb for your terminal. Add a short alias (?) and you have the most minimalist AI client imaginable.

Let's go into the details

Let me walk you through the core script step-by-step using reka-chat.sh, so you can customize it the way you like. Maybe this is a good moment to mention that Reka has a free tier that's more than enough for this. Go grab your key—after all, it's free!

The script (reka-chat.sh) does four things:

  1. Captures your question.
  2. Loads an API key from ~/.config/reka/api_key.
  3. Sends a JSON payload to the chat endpoint with curl.
  4. Extracts the answer using jq for clean plain text.

1. Capture Your Question

This part of the script is a pure laziness hack. I wanted to save keystrokes by not requiring quotes when passing a question as an argument. So ? What is 32C in F works just as well as ? "What is 32C in F".

# Grab the question: from the arguments if given, otherwise from stdin (so piping works too)
if [ $# -eq 0 ]; then
    if [ ! -t 0 ]; then
        QUERY="$(cat)"
    else
        exit 1
    fi
else
    QUERY="$*"
fi

2. Load Your API Key

If you're running Ollama locally you don't need a key, but every other AI provider requires one. I store mine in a locked-down file at ~/.config/reka/api_key, then read it and strip any whitespace like this:

API_KEY_FILE="$HOME/.config/reka/api_key"
API_KEY=$(tr -d '[:space:]' < "$API_KEY_FILE")
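
If you haven't created that file yet, something like this sets it up with owner-only permissions (a quick sketch; replace the placeholder with your actual key):

mkdir -p ~/.config/reka
printf '%s\n' 'YOUR_REKA_API_KEY' > ~/.config/reka/api_key
chmod 600 ~/.config/reka/api_key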

3. Send The JSON Payload

Building and sending the JSON payload is the heart of the script: it combines the API_ENDPOINT, the API_KEY, and of course our QUERY. Here's how I do it for Reka:

RESPONSE=$(curl -s -X POST "$API_ENDPOINT" \
     -H "X-Api-Key: $API_KEY" \
     -H "Content-Type: application/json" \
     -d "{
  \"messages\": [
    {
      \"role\": \"user\",
      \"content\": $(echo "$QUERY" | jq -R -s .)
    }
  ],
  \"model\": \"reka-core\",
  \"stream\": false
}")

4. Extract The Answer

Finally, we parse the JSON response with jq to pull out just the answer text. If jq isn't installed we display the raw response, but a formatted answer is much nicer. If you are customizing for another provider, you may need to adjust the JSON path here. You can add echo "$RESPONSE" >> data_sample.json to the script to log raw responses for tinkering.

With Reka, the response looks like this:

{
    "id": "cb7c371b-3a7b-48d2-829d-70ffacf565c6",
    "model": "reka-core",
    "usage": {
        "input_tokens": 16,
        "output_tokens": 460,
        "reasoning_tokens": 0
    },
    "responses": [
        {
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": " The origin of Thanksgiving ..."
            }
        }
    ]
}

The value we are looking for and want to display is the `content` field inside `responses[0].message`. Using `jq`, we do:

echo "$RESPONSE" | jq -r '.responses[0].message.content // .error // "Error: Unexpected response format"'

Putting It All Together

Now that we have the script, make it executable with chmod +x reka-chat.sh, and let's add an alias to your shell config to make it super easy to use. Add one line to your .zshrc or .bashrc that looks like this:

alias \?="$REKA_CHAT_SCRIPT"

Because ? is a special character in the shell, we escape it with a backslash. After adding this line, reload your shell configuration with source ~/.zshrc or source ~/.bashrc, and you are all set!

The Result

Now you can ask questions directly from your terminal. Wanna know the origin of Thanksgiving? Ask it like this:

? What is the origin of Thanksgiving

And if you want to keep the quotes, you do you!

Extra: Web research

I couldn't stop there! Reka also supports web research, which means it can fetch and read web pages to provide more informed answers. Following the same pattern described previously, I wrote a similar script called reka-research.sh that sends a request to Reka's research endpoint. This obviously takes a bit more time to answer, as it's making different web queries and processing them, but the results are often worth the wait—and they are up to date! I used the alias ?? for this one.
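
For reference, the pair of aliases ends up looking something like this (the paths are placeholders; point them at wherever you keep the scripts):

alias \?="$HOME/terminal-tools/reka-chat.sh"
alias \?\?="$HOME/terminal-tools/reka-research.sh"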

On the GitHub repository, you can find both scripts (reka-chat.sh and reka-research.sh) along with a script to create the aliases automatically. Feel free to customize them to fit your workflow and preferred AI provider. Enjoy the newfound superpower of instant AI access right from your terminal!

What's Next?

With this setup, the possibilities are endless. Reka supports questions related to audio and video, which could be interesting to explore next. The project is open source, so feel free to contribute or suggest improvements. You can also join the Reka community on Discord to share your experiences and learn from others.


Resources




Reading Notes #675

Here’s a compact roundup of links and highlights I found interesting this week. You’ll find updates on Git, Chrome DevTools tips, C# 14 and .NET 10 coverage, Blazor upgrade notes, a practical Copilot + Visual Studio guide, plus a few useful tools and AI announcements. Enjoy, and tell me which item you want me to explore next.


AI


Programming


Miscellaneous

  • ZoomIt v9.21 | Microsoft Community Hub (Alex Mihaiuc) - If you are on Windows, please do yourself a favour and try ZoomIt. Since I switched to Mac and Linux, it's the thing I miss the most.

Check-In Doc MCP Server: A Handy Way to Search Only the Docs You Trust

Ever wished you could ask a question and have the answer come only from a handful of trusted documentation sites—no random blogs, no stale forum posts? That’s exactly what the Check-In Doc MCP Server does. It’s a lightweight Model Context Protocol (MCP) server you can run locally (or host) to funnel questions to selected documentation domains and get a clean AI-generated answer back.

What It Is

The project (GitHub: https://github.com/fboucher/check-in-doc-mcp) is a Dockerized MCP server that:

  • Accepts a user question.
  • Calls the Reka AI Research API with constraints (only allowed domains).
  • Returns a synthesized answer based on live documentation retrieval.

You control which sites are searchable by passing a comma‑separated list of domains (e.g. docs.reka.ai,docs.github.com). That keeps results focused, reliable, and relevant.
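
If you want to poke at the container outside of an editor, you can also start it by hand. This is a sketch based on the environment variables the server expects (the same ones the mcp.json below passes in); it will sit waiting for MCP messages on stdin, which is normal for a stdio server:

docker run -i --rm \
  -e ALLOWED_DOMAINS="docs.reka.ai,docs.github.com" \
  -e APIKEY="YOUR_REKA_API_KEY" \
  fboucher/check-in-doc-mcp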

What Is the Reka AI Research API?

Reka AI’s Research API lets you blend language model reasoning with targeted, on‑the‑fly web/document retrieval. Instead of a model hallucinating an answer from static training data, it can:

  • Perform limited domain‑scoped web searches.
  • Pull fresh snippets.
  • Integrate them into a structured response.

In this project, we use the research feature with a web_search block specifying:

  • allowed_domains: Only the documentation sites you trust.
  • max_uses: Caps how many retrieval calls it makes per query (controls cost & latency).

Details used here:

  • Model: reka-flash-research
  • Endpoint: https://api.reka.ai/v1/chat/completions
  • Auth: Bearer API key (generated from the Reka dashboard: https://link.reka.ai/free)
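
Put together, the raw HTTP request looks roughly like this (a hand-rolled curl sketch mirroring the payload the C# service builds below; swap in your own key and domains):

curl -s https://api.reka.ai/v1/chat/completions \
  -H "Authorization: Bearer $APIKEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "reka-flash-research",
    "messages": [
      {"role": "user", "content": "How do I create an API key?"}
    ],
    "research": {
      "web_search": {
        "allowed_domains": ["docs.reka.ai"],
        "max_uses": 4
      }
    }
  }'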

How It Works Internally

The core logic lives in ResearchService (src/Domain/ResearchService.cs). Simplified flow:

  1. Initialization
    Stores the API key + array of allowed domains, sets model & endpoint, logs a safe startup message.

  2. Build Request Payload
    The CheckInDoc(string question) method creates a JSON payload:

    var requestPayload = new {
      model,
      messages = new[] { new { role = "user", content = question } },
      research = new {
        web_search = new {
          allowed_domains = allowedDomains,
          max_uses = 4
        }
      }
    };
    
  3. Send Request
    Creates an HttpRequestMessage (POST), adds the Authorization: Bearer <APIKEY> header, and sends the JSON to Reka.

  4. Parse Response
    Deserializes the body into a RekaResponse domain object and returns the first answer string.

Adding It to VS Code (MCP Extension)

You can run it as a Docker-based MCP server. Two simple approaches:

Option 1: Via “Add MCP Server” UI

  1. In VS Code (with MCP extension), click Add MCP Server.
  2. Choose type: Docker image.
  3. Image name: fboucher/check-in-doc-mcp.
  4. Enter allowed domains and your Reka API key when prompted.

Option 2: Via mcp.json (Recommended)

Alternatively, you can manually configure it in your mcp.json file. This will make sure your API key isn't displayed in plain text. Add or merge this configuration:

{
  "servers": {
    "check-in-docs": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "ALLOWED_DOMAINS=${input:allowed_domains}
        ",
        "-e",
        "APIKEY=${input:apikey}",
        "fboucher/check-in-doc-mcp"
      ]
    }
  },
  "inputs": [
    {
      "id": "allowed_domains",
      "type": "promptString",
      "description": "Enter the comma-separated list of documentation domains to allow (e.g. docs.reka.ai,docs.github.com):"
    },
    {
      "id": "apikey",
      "type": "promptString",
      "password": true,
      "description": "Enter your Reka Platform API key:"
    }
  ]
}

How to Use It

To use it, ask to "Check In Doc" something, or use the SearchInDoc tool directly in your MCP-enabled environment. Just ask a question, and it will search only the specified documentation domains.

Final Thoughts

It’s intentionally simple—no giant orchestration layer. Just a clean bridge between a question, curated domains, and a research-enabled model. Sometimes that’s all you need to get focused, trustworthy answers.

If this sparks an idea, clone it and adapt away. If you improve it (citations, richer error handling, multi-turn context)—send a PR!

Watch a quick demo


Links & References