# How to Send Web Pages to Ollama with Share2Agent

Process any web page with a local Ollama model -- summarize articles, translate documentation, extract key points, or analyze content. Share2Agent sends the page to a small webhook receiver that calls Ollama's API and saves the result.
## Prerequisites

- Ollama installed and running (ollama.com)
- A model pulled (e.g., `ollama pull llama3.2`)
- Share2Agent Chrome extension installed
- Python 3.10+
## Step 1: Create the Webhook Receiver

This script receives pages from Share2Agent, sends the content to Ollama with your comment as the prompt, and saves both the original page and the LLM response.

Save this as `ollama_receiver.py`:
```python
#!/usr/bin/env python3
"""Share2Agent → Ollama webhook receiver."""
import json
import re
import urllib.request
from datetime import datetime
from http.server import HTTPServer, BaseHTTPRequestHandler
from pathlib import Path

PORT = 9876
OUTPUT_DIR = Path.home() / "share2agent-ollama"
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3.2"
DEFAULT_PROMPT = "Summarize this article in 3-5 bullet points."

OUTPUT_DIR.mkdir(parents=True, exist_ok=True)


def call_ollama(prompt: str, content: str) -> str:
    """Send the prompt and page content to Ollama, return the response text."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": f"{prompt}\n\n---\n\n{content}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]


class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length))
        title = data.get("title", "untitled")
        content = data.get("content", "")
        comment = data.get("comment", "").strip()
        prompt = comment if comment else DEFAULT_PROMPT

        print(f"Processing: {title[:60]}...")
        result = call_ollama(prompt, content)

        # Save result; sanitize the title so it is safe to use in a filename
        ts = datetime.now().strftime("%Y-%m-%d-%H%M")
        slug = re.sub(r"[^a-z0-9]+", "-", title[:40].lower()).strip("-")
        out = OUTPUT_DIR / f"{ts}-{slug}.md"
        out.write_text(
            f"# {title}\n\n"
            f"**Prompt:** {prompt}\n"
            f"**Source:** {data.get('url', '')}\n\n"
            f"---\n\n{result}\n"
        )
        print(f"Saved: {out.name}")

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(json.dumps({"status": "ok"}).encode())

    def do_OPTIONS(self):
        # CORS preflight, so the browser extension is allowed to POST
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        self.end_headers()


if __name__ == "__main__":
    print(f"Ollama receiver on :{PORT} (model: {MODEL})")
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```

## Step 2: Run the Receiver
```
python3 -u ollama_receiver.py
```

Make sure Ollama is running (`ollama serve` or the Ollama app).
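Before wiring up the extension, you can exercise the receiver by hand. This sketch builds a JSON body with the fields the receiver reads (`title`, `content`, `comment`, `url`) and posts it to the webhook; the real Share2Agent payload may carry additional fields, so treat this as a minimal approximation.

```python
import json
import urllib.request


def build_payload(title: str, content: str, comment: str = "") -> bytes:
    """Build a JSON body shaped like what the receiver expects."""
    return json.dumps({
        "title": title,
        "content": content,
        "comment": comment,
        "url": "http://example.com/test",  # placeholder source URL
    }).encode()


def post_to_receiver(body: bytes, receiver: str = "http://localhost:9876") -> dict:
    """POST a payload to the local receiver and return its JSON reply."""
    req = urllib.request.Request(
        receiver, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=180) as resp:
        return json.loads(resp.read())


# With the receiver and Ollama running, this triggers a full round trip:
# post_to_receiver(build_payload("Test page", "Some article text",
#                                "Summarize in 3 bullets"))
```

If everything is wired up, the receiver prints `Processing: Test page...` and writes a markdown file to the output directory.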
## Step 3: Configure Share2Agent

- Click the Share2Agent extension icon in Chrome.
- Open Settings.
- Set the Webhook URL to `http://localhost:9876`.
- Save.
## Step 4: Process a Page

- Navigate to any article or documentation page.
- Click the Share2Agent icon.
- In the comment field, type your instruction, for example:
  - Summarize in 3 bullets
  - Translate to Spanish
  - Extract all code examples
  - List the pros and cons mentioned
- Click Share.

If you leave the comment empty, the receiver uses the default prompt ("Summarize this article in 3-5 bullet points").
The result is saved to `~/share2agent-ollama/`:

```
~/share2agent-ollama/2026-03-28-1430-understanding-rust-lifetimes.md
```
## Customization

**Change the model:** Edit the `MODEL` variable. Use `ollama list` to see available models.

**Change the default prompt:** Edit `DEFAULT_PROMPT` to set a different fallback behavior.

**Adjust the timeout:** For long documents or slow models, increase the `timeout=120` value in `urlopen`.

**Stream responses:** Set `"stream": True` in the Ollama payload and read chunks incrementally for real-time output.
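As a sketch of the streaming variant: with `"stream": True`, Ollama's `/api/generate` endpoint returns one JSON object per line, each carrying a `response` fragment, with `"done": true` on the final object. A small helper (the name `read_stream` is ours) can assemble them:

```python
import json


def read_stream(lines) -> str:
    """Assemble Ollama's newline-delimited JSON stream into the full response.

    `lines` is any iterable of JSON lines, e.g. the file-like object
    returned by urllib.request.urlopen on a streaming request.
    """
    parts = []
    for line in lines:
        if not line.strip():
            continue  # skip blank keep-alive lines
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))  # print here for live output
        if chunk.get("done"):
            break
    return "".join(parts)
```

In the receiver, you would drop `"stream": False` from the payload and replace the single `json.loads(resp.read())` call with `read_stream(resp)`.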
## What's Next?
- Add a web UI -- extend the receiver with a simple HTML page that shows processed results in real time.
- Route by comment keyword -- use different models or prompts based on the comment (e.g., "translate" uses a multilingual model, "code" uses a coding model).
- Chain with other tools -- save Ollama's output to a directory that another AI tool (Aider, Cursor, Windsurf) watches, creating a two-stage pipeline.
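The keyword-routing idea can be sketched as a small lookup on the comment. The routing table below is illustrative: the keywords and model names are our examples, not part of the receiver -- substitute whatever `ollama list` shows on your machine.

```python
# Hypothetical routing table: first matching keyword wins.
ROUTES = [
    ("translate", "aya"),        # example multilingual model
    ("code", "qwen2.5-coder"),   # example coding model
]
DEFAULT_MODEL = "llama3.2"


def pick_model(comment: str) -> str:
    """Choose a model based on keywords in the user's comment."""
    lowered = comment.lower()
    for keyword, model in ROUTES:
        if keyword in lowered:
            return model
    return DEFAULT_MODEL
```

In the receiver, you would call `pick_model(comment)` and pass the result into the Ollama payload instead of the fixed `MODEL` constant.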