AI Setup
Beetroot can transform clipboard text and images using AI -- translate, summarize, fix grammar, read text from photos, describe images, extract data, and more. You bring your own API key (BYOK). Beetroot never stores or transmits your data except to the AI provider you choose, and only when you explicitly ask for a transform. All AI API calls run in native Rust -- no browser JavaScript, no CORS issues.
Version: 1.6.5 Last updated: 2026-04-04
Supported Providers
| Provider | Speed | Cost | Best for |
|---|---|---|---|
| OpenAI | Fast | Paid | General transforms, widely used |
| Google Gemini | Fast | Free tier available | Budget-friendly option |
| Anthropic | Fast | Paid | High-quality rewrites |
| DeepSeek | Moderate | Very cheap | Deep reasoning tasks |
| Local LLM | Varies | Free | Full privacy, no internet needed |
Cloud Provider Setup
OpenAI
Models: gpt-5.4-nano (fast, cheapest) or gpt-5.4-mini (smarter, 2x faster than previous generation)
- Go to platform.openai.com and sign in or create an account.
- Navigate to API keys and create a new key. It starts with sk-.
- In Beetroot, open Settings > AI.
- Select OpenAI as your provider.
- Paste your API key.
- Choose a model -- gpt-5.4-nano is recommended to start (fast and cheap).
- Click Test to verify the connection.
- Click Save.
If you previously used gpt-5-nano or gpt-5-mini, Beetroot auto-migrates to gpt-5.4 models on launch.
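If you want to sanity-check your key outside Beetroot, a quick way (assuming you have curl installed; the OPENAI_API_KEY variable name here is just a convention, not something Beetroot reads) is to list the models the key can access -- a 401 response means the key is wrong:

```shell
# List models visible to your OpenAI key; an auth error means the key is bad.
if [ -n "${OPENAI_API_KEY:-}" ]; then
  curl -sS https://api.openai.com/v1/models \
    -H "Authorization: Bearer $OPENAI_API_KEY"
else
  echo "Set OPENAI_API_KEY to run this check" >&2
fi
```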
Google Gemini
Models: gemini-2.5-flash-lite (fastest) or gemini-2.5-flash (better reasoning)
- Go to aistudio.google.com and sign in with your Google account.
- Click "Get API key" and create a key. It starts with
AIza. - In Beetroot, open Settings > AI.
- Select Google Gemini as your provider.
- Paste your API key.
- Choose a model -- gemini-2.5-flash-lite is the fastest option.
- Click Test to verify.
- Click Save.
Gemini offers a generous free tier, making it a great starting point if you want to try AI transforms without spending anything.
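To check a Gemini key outside Beetroot (assuming curl is installed; GEMINI_API_KEY is an example variable name), you can list the available models -- Gemini authenticates with a key query parameter rather than a header:

```shell
# List Gemini models; an invalid key returns a JSON error instead of a model list.
if [ -n "${GEMINI_API_KEY:-}" ]; then
  curl -sS "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"
else
  echo "Set GEMINI_API_KEY to run this check" >&2
fi
```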
Anthropic
Models: claude-haiku-4-5 (fastest) or claude-sonnet-4-6 (best balance of speed and quality)
- Go to console.anthropic.com and sign in or create an account.
- Navigate to API keys and create one. It starts with sk-ant-.
- In Beetroot, open Settings > AI.
- Select Anthropic as your provider.
- Paste your API key.
- Choose a model -- claude-haiku-4-5 is recommended for quick transforms.
- Click Test to verify.
- Click Save.
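Anthropic uses its own headers instead of a Bearer token, which is a common source of "invalid key" confusion. A quick check outside Beetroot (assuming curl; ANTHROPIC_API_KEY is an example variable name) is:

```shell
# List models visible to your Anthropic key. Note the x-api-key and
# anthropic-version headers -- a Bearer header will not work here.
if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
  curl -sS https://api.anthropic.com/v1/models \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01"
else
  echo "Set ANTHROPIC_API_KEY to run this check" >&2
fi
```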
DeepSeek
Models: deepseek-chat (everyday tasks) or deepseek-reasoner (deep reasoning, chain-of-thought)
- Go to platform.deepseek.com and sign in or create an account.
- Create an API key in your dashboard.
- In Beetroot, open Settings > AI.
- Select DeepSeek as your provider.
- Paste your API key.
- Choose a model -- deepseek-chat is good for most tasks; deepseek-reasoner is for complex analysis.
- Click Test to verify.
- Click Save.
DeepSeek offers very competitive pricing. The deepseek-reasoner model shows its thinking process, which is automatically cleaned from the output.
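DeepSeek's API is OpenAI-compatible, so a key check outside Beetroot looks like the OpenAI one with a different base URL (assuming curl; DEEPSEEK_API_KEY is an example variable name):

```shell
# List models visible to your DeepSeek key (OpenAI-compatible endpoint).
if [ -n "${DEEPSEEK_API_KEY:-}" ]; then
  curl -sS https://api.deepseek.com/models \
    -H "Authorization: Bearer $DEEPSEEK_API_KEY"
else
  echo "Set DEEPSEEK_API_KEY to run this check" >&2
fi
```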
Local LLM Setup (No Internet Required)
Run AI transforms entirely on your computer with no API key and no data leaving your machine.
Option 1: Ollama
Ollama is the easiest way to run local models.
- Download and install Ollama from ollama.com.
- Open a terminal and pull a model: ollama pull llama3.2
- Ollama runs automatically in the background on port 11434.
- In Beetroot, open Settings > AI.
- Select Local LLM as your provider.
- Choose the Ollama preset -- the endpoint fills in automatically.
- Click Test -- Beetroot will connect and show a dropdown of your installed models.
- Select your model from the dropdown.
- Click Save.
Recommended Ollama models for text:
- llama3.2 -- Good general-purpose model, runs on most hardware
- mistral -- Fast and capable
- gemma2 -- Google's open model, good for text tasks
Recommended Ollama models for vision:
- llava -- General-purpose vision model
- bakllava -- Better at detailed descriptions
- moondream -- Lightweight, fast
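If Beetroot's Test button fails, you can check Ollama directly from a terminal (assuming curl is installed). Ollama's /api/tags endpoint lists the models you have pulled:

```shell
# Confirm the Ollama server is reachable and list installed models.
if curl -fsS http://127.0.0.1:11434/api/tags; then
  echo
else
  echo "Ollama does not appear to be running on port 11434" >&2
fi
```

If this prints a JSON list of models but Beetroot still fails, double-check the endpoint URL in Settings > AI.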
Option 2: LM Studio
LM Studio provides a graphical interface for managing and running local models.
- Download and install LM Studio from lmstudio.ai.
- Download a model through the LM Studio interface.
- Start the local server (LM Studio runs on port 1234 by default).
- In Beetroot, open Settings > AI.
- Select Local LLM as your provider.
- Choose the LM Studio preset.
- Click Test to verify the connection. The loaded model is detected automatically.
- Click Save.
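LM Studio's local server speaks the OpenAI API format, so you can check it the same way (assuming curl is installed and the server is on its default port):

```shell
# Ask LM Studio's OpenAI-compatible server which model is loaded.
if curl -fsS http://127.0.0.1:1234/v1/models; then
  echo
else
  echo "LM Studio server is not running on port 1234" >&2
fi
```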
Option 3: Custom Endpoint
Any server that speaks the OpenAI-compatible API format works with Beetroot.
- Start your model server.
- In Beetroot Settings > AI, select Local LLM.
- Choose Custom and enter your endpoint URL (e.g., http://127.0.0.1:8080/v1/chat/completions).
- Enter the model name.
- Click Test and Save.
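"OpenAI-compatible" means the server accepts a POST with a model name and a messages array and returns a chat completion. A minimal request you can run against your own server looks like this (the endpoint and model name below are placeholders; substitute your own):

```shell
# Minimal OpenAI-compatible chat completion request against a local server.
ENDPOINT="http://127.0.0.1:8080/v1/chat/completions"
curl -sS "$ENDPOINT" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-local-model",
    "messages": [{"role": "user", "content": "Say hello"}]
  }' || echo "No server responded at $ENDPOINT" >&2
```

If this returns a JSON object with a choices array, Beetroot's Test button should succeed against the same URL.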
Model Recommendations
| Goal | Provider | Model | Why |
|---|---|---|---|
| Cheapest cloud option | Google Gemini | gemini-2.5-flash-lite | Generous free tier |
| Best quality | Anthropic | claude-sonnet-4-6 | Excellent rewrites and translations |
| Fastest | OpenAI | gpt-5.4-nano | Lowest latency |
| Deep analysis | DeepSeek | deepseek-reasoner | Chain-of-thought reasoning |
| Best vision (cloud) | Google Gemini | gemini-2.5-flash | Fast, well-structured OCR output |
| Best vision (local) | Local LLM | Qwen 3.5 4B (LM Studio) | Accurate OCR, runs on most hardware |
| Full privacy | Local LLM | llama3.2 (Ollama) | Nothing leaves your machine |
| Offline use | Local LLM | Any Ollama model | Works without internet |
AI Vision Transforms
Starting in v1.6.5, Beetroot can analyze images from your clipboard history using AI vision models. Right-click any image clip → AI, and choose a vision prompt.
Five built-in vision prompts:
| Prompt | What it does |
|---|---|
| Read Text | OCR from photos, screenshots, handwritten notes |
| Describe Image | Get a text description of what's in the image |
| Extract Data | Pull structured data from tables, receipts, prescriptions |
| Summarize Image | Quick summary of visual content |
| Translate Image Text | Translate text found in images |
You can also create custom vision prompts in Settings > AI.
Vision-capable providers:
- Cloud: OpenAI GPT-5.4, Anthropic Claude (Haiku/Sonnet), Google Gemini 2.5, DeepSeek
- Local: Ollama (llava, bakllava, moondream), LM Studio (any vision model)
Tip: Even small local vision models (4B parameters) can read handwritten text accurately. If privacy matters, use Ollama or LM Studio -- no data leaves your machine.
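Under the hood, a local vision transform is just a request with the image attached as base64. You can reproduce it against Ollama yourself (assuming curl, a pulled llava model, and an image file -- photo.png here is a placeholder name):

```shell
# Send an image to a local Ollama vision model for OCR.
# Ollama's /api/generate accepts base64 images in the "images" array.
if [ -f photo.png ]; then
  IMG=$(base64 < photo.png | tr -d '\n')
  curl -sS http://127.0.0.1:11434/api/generate \
    -d "{\"model\": \"llava\", \"prompt\": \"Read the text in this image.\", \"images\": [\"$IMG\"], \"stream\": false}"
else
  echo "Place an image named photo.png in the current directory first" >&2
fi
```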
Background AI Processing
AI transforms (both text and vision) run in the background. Click a prompt, the menu closes instantly, and you can keep working. When the result is ready:
- If Beetroot is visible, the new clip appears at the top of the list.
- If Beetroot is hidden, a native Windows notification pops up. Click it to bring Beetroot to the front.
You can queue multiple transforms -- they run one by one.
Using AI Transforms
Once a provider is set up, there are several ways to transform text and images:
Method 1: Transform Panel
- Select a text clip in the list.
- Press Alt+T (or right-click > Transform).
- The Transform panel shows built-in text transforms (UPPERCASE, lowercase, Title Case, Trim whitespace, Remove spaces, Single line, Sort lines, Remove duplicates) and your AI prompts below. Use the search box to filter.
- Click an AI prompt. The text is sent to your provider.
- The transformed result is saved as a new clip in your history.
Method 2: Quick Access from Context Menu
- Right-click a text clip.
- At the bottom of the context menu, you'll see your Quick Access prompts (up to 5).
- Click one to transform the text immediately.
To enable Quick Access on a prompt, go to Settings > AI and check the "Quick Access" box next to the prompts you use most often. Up to 5 prompts can be Quick Access at once.
Method 3: Vision Transforms (Image Clips)
- Right-click an image clip.
- Hover over the AI submenu.
- Choose a vision prompt (Read Text, Describe Image, etc.).
- The image is sent to your AI provider. The result is saved as a new text clip.
Built-in Prompts
Beetroot comes with 10 ready-to-use AI prompts:
| Prompt | What it does |
|---|---|
| Fix Grammar | Corrects grammatical errors without changing meaning |
| Any to English | Detects the language and translates to English |
| Summarize | Condenses text into 2-3 key sentences |
| Make Professional | Rewrites in a clear, business-appropriate tone |
| Format as Code | Applies proper code indentation and formatting |
| Bullet Points | Converts text into a bulleted list |
| Simplify | Rewrites in plain, simple language |
| Make Shorter | Condenses to roughly half the length |
| Explain This | Explains in simple terms for anyone |
| Extract Key Data | Extracts names, dates, numbers, and URLs |
You can also create your own custom prompts (up to 20 total, including built-ins) in Settings > AI.
Troubleshooting
| Problem | Solution |
|---|---|
| "Set API key in Settings" | You have not entered an API key for the selected provider. Go to Settings > AI. |
| "API key is invalid" | Double-check your key. Make sure it matches the selected provider. |
| "Request timed out (30s)" | The provider took too long. Try again or switch to a faster model. |
| "Empty response" | The AI returned nothing. Try a different prompt or model. |
| "Text too long for AI transform" | Clips longer than 50,000 characters cannot be processed. Copy a shorter section. |
| Local LLM "Failed" | Make sure your model server (Ollama or LM Studio) is running and the endpoint is correct. |
Privacy
- Beetroot only sends data to an AI provider when you explicitly request a transform.
- For text transforms, only the specific clip text is sent. For vision transforms, only the specific image is sent.
- Your full clipboard history is never transmitted.
- If you use a Local LLM, no data leaves your machine at all.
- All AI API calls are made from native Rust code -- no browser JavaScript, no third-party servers in between.
- API keys are stored locally on your computer and are never transmitted anywhere except to their respective provider.