#!/usr/bin/env bash
# o-o Living Document — self-updating via LLM agent
# To update: bash vendor-comparison-vector-dbs.o-o.html [--agent claude] [--model sonnet]
# To read: open vendor-comparison-vector-dbs.o-o.html in any browser
: << 'OO_HTML'
Vector Databases Compared
← Back to index

Vector Databases Compared

As of Version 0

Overview

Vector databases are purpose-built systems for storing, indexing, and querying high-dimensional vector embeddings at scale. As AI applications — particularly retrieval-augmented generation (RAG), semantic search, and recommendation systems — have moved into production, choosing the right vector database has become a critical infrastructure decision. [s1]

The landscape in 2026 spans fully managed services like Pinecone, open-source solutions like Milvus, Weaviate, and Qdrant, lightweight embedded options like Chroma, and PostgreSQL extensions like pgvector. Each occupies a distinct niche: Pinecone optimizes for operational simplicity, Milvus for billion-scale throughput, Qdrant for raw query performance, Weaviate for hybrid search, Chroma for prototyping speed, and pgvector for teams already committed to PostgreSQL. [s2] [s3]

This comparison evaluates these vendors across features, performance benchmarks, pricing, and production readiness to help ML engineers, backend developers, and architects make an informed choice.

Vendor Profiles

Pinecone

Pinecone is a fully managed, serverless vector database that abstracts away all infrastructure management. It offers automatic scaling, real-time indexing, and strong SLA guarantees, making it the go-to choice for teams that want production vector search without operational overhead. Pinecone supports metadata filtering with operators like $eq, $lte, and others, and integrates natively with LangChain and LlamaIndex. [s1] [s4]
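
Pinecone's filter language expresses metadata constraints as nested operator dictionaries. The sketch below builds one such filter (the index and field names are hypothetical; the `$eq`/`$lte` operator syntax follows Pinecone's documented filter language, and the commented-out query requires an API key and a live index):

```python
# Hypothetical product-search filter using Pinecone-style operators.
filter_expr = {
    "$and": [
        {"category": {"$eq": "electronics"}},  # exact metadata match
        {"price": {"$lte": 500}},              # numeric upper bound
    ]
}

# With credentials and a real index, the query would look roughly like:
# from pinecone import Pinecone
# pc = Pinecone(api_key="...")
# index = pc.Index("products")
# results = index.query(vector=query_embedding, top_k=5,
#                       filter=filter_expr, include_metadata=True)

print(filter_expr["$and"][0]["category"]["$eq"])  # → electronics
```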

Pinecone reports 7ms p99 latency and has been proven at billions of vectors in production workloads. Its serverless architecture eliminates capacity planning, but the trade-off is vendor lock-in and higher costs at scale compared to self-hosted alternatives. In late 2025, Pinecone introduced Dedicated Read Nodes (DRN) for predictable performance at high throughput, a "cascading search" pipeline with re-ranking and a proprietary sparse vector embedding model, and enterprise security features including RBAC, audit logs, and AWS PrivateLink. [s5] [s13]

Weaviate

Weaviate is an open-source vector database with a strong focus on hybrid search — combining dense vector similarity with keyword matching and structured metadata filtering. It uses a GraphQL-first API (REST is also available) and supports automatic vectorization through pluggable modules like text2vec-openai. Built-in generative search (RAG) capabilities make it unique among vector databases. In early 2026, Weaviate launched three AI-powered agents — Query Agent (natural language search), Transformation Agent (automated data enrichment), and Personalization Agent (per-user result tailoring) — along with a generally available embedding service. [s2] [s4] [s14]
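
The core idea behind hybrid search can be sketched as score fusion: an alpha parameter blends a normalized vector-similarity score with a normalized keyword (BM25) score. This is a conceptual toy, not Weaviate's exact fusion internals, but it mirrors the documented behavior where alpha=1.0 is pure vector search and alpha=0.0 is pure keyword search:

```python
# Conceptual alpha-weighted fusion of vector and BM25 scores.
def hybrid_score(vector_score: float, bm25_score: float, alpha: float = 0.5) -> float:
    return alpha * vector_score + (1 - alpha) * bm25_score

# A doc that matches query keywords exactly but is semantically weaker:
keyword_heavy = hybrid_score(vector_score=0.35, bm25_score=0.90)
# A doc that is semantically close but shares few exact terms:
semantic_heavy = hybrid_score(vector_score=0.90, bm25_score=0.10)
print(keyword_heavy, semantic_heavy)  # 0.625 0.5 — keyword match wins at alpha=0.5
```

Tuning alpha per query class (e.g. higher for conversational queries, lower for SKU lookups) is a common deployment pattern.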

Weaviate achieves sub-100ms latency on 768-dimensional embeddings and is well-suited for applications that need both semantic and keyword relevance. It runs efficiently below 50 million vectors but requires significantly more memory and compute above that threshold. In late 2025, Weaviate rebranded its cloud tiers — Serverless became "Shared Cloud" and Enterprise became "Dedicated Cloud" — with pricing now starting at $25/month for Standard SLA. [s5] [s6] [s14]

Qdrant

Qdrant is a high-performance, open-source vector database written in Rust, designed for real-time applications with strong filtering requirements. Its advanced query planning engine avoids the common filtered search pitfalls (speed degradation, accuracy collapse) that affect competitors. HNSW indexing is configurable (e.g., m=16, ef_construct=100), and on-disk vector storage is supported for cost-effective large dataset handling. Version 1.16 (early 2026) added the ACORN algorithm for higher-accuracy filtered HNSW queries (97.2% vs 53.3% without it at 4% selectivity) and Inline Storage, which stores quantized vectors directly in HNSW nodes for 10x QPS improvement on disk-based workloads. [s4] [s7] [s15]
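
The filtered-search pitfall is easy to demonstrate with a toy brute-force example (illustrative only, not Qdrant's implementation): post-filtering a top-k vector result can return far fewer than k hits when the filter is highly selective, which is exactly the failure mode filter-aware approaches like ACORN are designed to avoid:

```python
import random

random.seed(0)
# 10,000 points, 4% of which are "red" (matching the 4% selectivity
# figure cited in the text above).
points = [{"id": i, "score": random.random(),
           "color": "red" if i % 25 == 0 else "blue"}
          for i in range(10_000)]

k = 10
top_k = sorted(points, key=lambda p: p["score"], reverse=True)[:k]

# Post-filter: search first, filter afterwards — starved of results.
post_filtered = [p for p in top_k if p["color"] == "red"]

# Pre-filter: filter first, then search — always finds k true matches.
pre_filtered = sorted((p for p in points if p["color"] == "red"),
                      key=lambda p: p["score"], reverse=True)[:k]

print(len(post_filtered), len(pre_filtered))  # post-filter returns far fewer hits
```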

Qdrant achieves the lowest latencies in most benchmark scenarios according to its published benchmarks, with 1ms p99 on smaller datasets. Its free tier — 1 GB of vector storage forever, no credit card required — is the most generous among managed offerings. The Rust foundation provides memory safety and a compact deployment footprint suitable for edge scenarios. In January 2026, Qdrant added vendor-agnostic GPU-accelerated indexing (via Vulkan API), delivering up to 10x faster index builds than CPU-only methods. Enterprise features now include SSO, RBAC, and tiered multitenancy. [s5] [s7] [s15] [s16]

Milvus

Milvus is a cloud-native, open-source vector database backed by Zilliz, designed from the ground up for billion-scale deployments. It separates storage and compute, supports GPU acceleration, and offers the widest range of indexing algorithms including IVF_FLAT, HNSW, and product quantization (PQ). SDKs are available for Python, Java, Go, and more. [s2] [s4]
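
IVF-style indexing, one of the algorithm families listed above, can be sketched in a few lines (a toy illustration, not Milvus's code): vectors are bucketed by nearest centroid at build time, and a query scans only the `nprobe` closest buckets instead of the whole collection:

```python
import math
import random

random.seed(1)
dim, n, n_lists = 4, 1000, 8
data = [[random.random() for _ in range(dim)] for _ in range(n)]
centroids = random.sample(data, n_lists)  # stand-in for trained k-means centroids

# Build: assign each vector to the inverted list of its nearest centroid.
buckets = {i: [] for i in range(n_lists)}
for vid, v in enumerate(data):
    nearest = min(range(n_lists), key=lambda c: math.dist(v, centroids[c]))
    buckets[nearest].append(vid)

def ivf_search(q, nprobe=2, k=5):
    # Query: probe only the nprobe closest lists, then rank candidates.
    probe = sorted(range(n_lists), key=lambda c: math.dist(q, centroids[c]))[:nprobe]
    cands = [vid for c in probe for vid in buckets[c]]
    return sorted(cands, key=lambda vid: math.dist(q, data[vid]))[:k]

print(ivf_search([0.5] * dim))  # ids of the k approximate nearest neighbours
```

Raising `nprobe` trades query speed for recall, which is the central tuning knob for IVF indexes.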

With over 42,000 GitHub stars (as of early 2026), Milvus is the most popular open-source vector database, used in production by NVIDIA, Salesforce, eBay, Airbnb, and DoorDash. Milvus 2.6, released in mid-2025, introduced RaBitQ 1-bit quantization that reduces memory usage by 72% while delivering 4x throughput improvements, built-in embedding functions for OpenAI, AWS Bedrock, and Vertex AI, and full-text search with 3–4x higher throughput than Elasticsearch at equivalent recall. The trade-off is operational complexity — production Milvus typically requires Kubernetes expertise. Managed hosting is available through Zilliz Cloud. [s3] [s5] [s8] [s17]

Chroma

Chroma is an open-source (Apache 2.0) embedded vector database purpose-built for developer experience. Its Python-first, NumPy-like API requires zero configuration and runs within your application process, eliminating network latency. A 2025 Rust rewrite delivered 4x performance improvements over the original Python implementation. Chroma 1.5.0 (February 2026) added collection forking, Chroma Sync for real-time replication, and up to 70% higher data throughput through base64 vector encoding and continued Rust optimization. [s5] [s9] [s18]

Chroma supports vector, full-text, regex, and metadata search out of the box. It is ideal for prototyping, MVPs, and applications under 10 million vectors, but is not designed for production scale beyond that. Chroma Cloud offers managed hosting with $5 of free credits to start. [s9] [s10]

pgvector + pgvectorscale

pgvector is a PostgreSQL extension that adds vector similarity search to one of the world's most popular relational databases. Combined with pgvectorscale (which adds DiskANN indexing and Statistical Binary Quantization), it offers competitive performance at a fraction of the cost of dedicated solutions — Instacart migrated from Elasticsearch to pgvector in 2025, achieving 80% cost savings. pgvector 0.8.0 (released late 2025) introduced iterative index scans that solve the "overfiltering" problem in filtered vector queries, delivering up to 9x faster query processing and 100x more relevant results for filtered searches compared to previous versions. [s5] [s11] [s19]
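
A typical pgvector setup is a few SQL statements, shown below as strings (table and column names are hypothetical; `<=>` is pgvector's cosine-distance operator and `vector_cosine_ops` its HNSW operator class — with a live database these would run through psycopg or any PostgreSQL client):

```python
ddl = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    body      text,
    tenant_id int,
    embedding vector(1536)
);
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);
"""

# The unified-transactions advantage: one query mixes vector ranking
# with ordinary relational predicates.
knn_query = """
SELECT id, body
FROM documents
WHERE tenant_id = %(tenant)s
ORDER BY embedding <=> %(query_embedding)s
LIMIT 10;
"""

print("USING hnsw" in ddl, "<=>" in knn_query)  # True True
```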

pgvectorscale achieves 471 QPS at 99% recall on 50 million vectors, and benchmarks show 28x lower p95 latency and 16x higher throughput than Pinecone's storage-optimized index at 25% the cost. The key advantage is unified transactions — vectors and relational data live in one system. Scaling beyond 100 million vectors hits architectural limits. [s8] [s11]

Feature Comparison

The following table summarizes key features across all six vendors.

Feature | Pinecone | Weaviate | Qdrant | Milvus | Chroma | pgvector
License | Proprietary | BSD-3 | Apache 2.0 | Apache 2.0 | Apache 2.0 | PostgreSQL
Deployment | Managed only | OSS + Cloud | OSS + Cloud | OSS + Zilliz Cloud | Embedded + Cloud | PG extension
Primary Language | N/A (SaaS) | Go | Rust | Go / C++ | Rust (v2) | C
Index Types | Proprietary | HNSW | HNSW | IVF, HNSW, PQ, RaBitQ, GPU | HNSW | IVFFlat, HNSW, DiskANN
Hybrid Search | Sparse + dense + reranking | Vector + BM25 | Vector + payload | Vector + full-text + scalar | Vector + full-text | Vector + SQL
Max Practical Scale | Billions | ~50M efficient | ~50M efficient | Billions | ~10M | ~100M
SDK Languages | Python, JS, Java, Go | Python, JS, Java, Go | Python, JS, Rust, Go | Python, Java, Go, JS | Python, JS | Any PG client
GPU Support | No | No | Yes (indexing) | Yes | No | No

Performance Benchmarks

Performance varies significantly with dataset size, dimensionality, hardware, and recall targets. The numbers below aggregate findings from multiple independent benchmarks including Qdrant's benchmark suite (which uses ann-benchmarks datasets), VectorDBBench by Zilliz, and community comparisons. All figures should be treated as relative guides rather than absolutes. [s7] [s8]

1M Vectors, 1536 Dimensions

At a common RAG-scale workload — 1 million vectors at 1,536 dimensions, the output size of OpenAI's widely used embedding models — the following results have been reported: [s4]

Metric | Pinecone | Weaviate | Qdrant | Milvus
Query Latency (p50) | 20 ms | 15 ms | 8 ms | 12 ms
Query Latency (p99) | 50 ms | 40 ms | 25 ms | 35 ms
Queries / Second | 500 | 800 | 1,500 | 1,200
Index Build Time | ~5 min | ~8 min | ~4 min | ~6 min

Large-Scale Observations

At 50 million vectors, pgvectorscale achieves 471 QPS at 99% recall — 11.4x the throughput of Qdrant (41 QPS) at the same recall level on the same benchmark. At this scale, pgvectorscale also delivers 28x lower p95 latency and 16x higher throughput than Pinecone's storage-optimized (s1) index, at roughly 25% the cost. [s8] [s11]

Milvus dominates at extreme scale. Milvus 2.6's RaBitQ 1-bit quantization compresses indexes to 1/32 of FP32 size; benchmarks on 1M 768-dimensional vectors show 864 QPS with IVF_RABITQ (3x higher than IVF_FLAT) at 94.7% recall. With multi-node configurations and GPU acceleration, Milvus sustains approximately 120,000 inserts per second. Qdrant 1.16's Inline Storage mode achieved 211 QPS vs 20 QPS (10x improvement) on low-RAM disk-based systems with comparable accuracy to in-memory setups. [s7] [s8] [s15] [s17]
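
The arithmetic behind 1-bit quantization is simple enough to sketch (a toy sign-bit version, not the RaBitQ algorithm itself): each float32 dimension collapses to one bit — a 32x size reduction, matching the 1/32 index size cited above — and distance becomes a cheap Hamming distance between bit strings:

```python
def binarize(vec):
    # Keep only the sign of each dimension, packed into one integer.
    bits = 0
    for x in vec:
        bits = (bits << 1) | (1 if x > 0 else 0)
    return bits

def hamming(a, b):
    # Number of differing bits between two packed vectors.
    return bin(a ^ b).count("1")

dim = 768
v1 = [0.3, -1.2, 0.7, -0.1] * (dim // 4)
v2 = [0.2, -0.9, -0.4, 0.5] * (dim // 4)

fp32_bytes = dim * 4      # 3072 bytes per vector as float32
binary_bytes = dim // 8   # 96 bytes per vector as sign bits
print(fp32_bytes // binary_bytes)               # 32x compression
print(hamming(binarize(v1), binarize(v2)))      # 384 differing bits
```

Production systems typically re-rank the binary candidates against full-precision vectors to recover accuracy.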

What Matters More Than Benchmarks

Tail latency (p99) matters more than median. A system with 10ms median but 500ms p99 feels slower to users than one with 20ms median and 50ms p99. Additionally, filtered search performance — how well the database handles queries with metadata constraints — varies dramatically between vendors and is often poorly represented in standard benchmarks. Qdrant's ACORN algorithm (v1.16) achieves 97.2% accuracy on filtered HNSW queries at 4% selectivity, compared to 53.3% without it. pgvector 0.8.0's iterative index scans solve the "overfiltering" problem that previously caused filtered queries to return incomplete results. [s7] [s8] [s15] [s19]
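
Checking your own tail latency takes a few lines of stdlib Python. The synthetic sample below shows the pattern the paragraph warns about: a flattering median hiding a tail that dominates user-perceived latency:

```python
import statistics

# Synthetic latencies (ms): mostly fast, with a slow tail.
latencies = [10] * 980 + [500] * 20

cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
p50, p99 = cuts[49], cuts[98]
print(p50, p99)  # median looks great; the tail does not
```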

Pricing

Pricing models have shifted from per-pod billing to serverless consumption-based pricing. This is advantageous for low-traffic workloads but can become expensive at high query volumes. [s12]

Vendor | Free Tier | Paid Starting At | Pricing Model
Pinecone | Yes (limited) | $0.33/GB storage + read/write units | Consumption-based (serverless)
Weaviate Cloud | Free sandbox | $25/month (Shared Cloud) | Dimensions stored + SLA tier
Qdrant Cloud | 1 GB forever | $25/month; Hybrid $99/month | Storage + compute
Zilliz (Milvus) | 5 GB storage | ~$0.15/CU/hour; Serverless from $89 | Compute Units or serverless
Chroma Cloud | $5 free credits | Usage-based after credits | Consumption-based
pgvector | Free (extension) | Your PostgreSQL costs only | Infrastructure cost

For datasets under 50 million vectors, managed SaaS (Pinecone, Weaviate Cloud) is often cheaper than self-hosting when accounting for the hidden cost of DevOps, monitoring, and on-call. At high query volumes (1,000+ QPS), Pinecone's consumption-based read units can scale costs linearly — teams at this scale should evaluate Pinecone's Dedicated Read Nodes (DRN) for predictable pricing, or self-hosted Qdrant or Milvus for cost control. Pinecone now offers annual commit discounts for Standard and Enterprise plans. [s5] [s12] [s13]
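
A rough storage-cost estimate is straightforward arithmetic. The sketch below uses the $0.33/GB figure from the pricing table and covers raw vector bytes only — read/write units are billed separately and usually dominate at high QPS, and billed storage also includes metadata and index overhead, so treat this as a lower bound:

```python
def storage_gb(n_vectors: int, dims: int, bytes_per_dim: int = 4) -> float:
    # Raw float32 payload size in GiB (excludes metadata and index overhead).
    return n_vectors * dims * bytes_per_dim / 1024**3

gb = storage_gb(10_000_000, 1536)  # 10M vectors at 1536 dims
monthly = gb * 0.33                # illustrative $/GB rate from the table
print(f"{gb:.1f} GB -> ${monthly:.2f}/month storage")
```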

pgvector stands out as the most cost-effective option for teams already running PostgreSQL. Instacart's 2025 migration from Elasticsearch to pgvector reportedly achieved 80% cost savings on storage and indexing while simplifying their architecture. [s11]

Use Case Recommendations

The right vector database depends on your team's engineering resources, scale requirements, and existing infrastructure. The following recommendations are based on common deployment patterns.

Prototyping and MVPs

Chroma is the fastest path from idea to working prototype. Its embedded architecture eliminates infrastructure setup, and the Python-first API feels native to ML workflows. For early-stage projects that may need to scale later, starting with Chroma and migrating to a production database is a well-trodden path. [s5] [s9]

PostgreSQL-Native Teams

pgvector + pgvectorscale is the obvious choice when your data already lives in PostgreSQL. You get vector search alongside relational queries, ACID transactions, and your existing backup and monitoring infrastructure. It handles up to ~100 million vectors before hitting architectural limits — a range that covers the large majority of real-world AI workloads. [s5] [s11]

Production RAG Systems

Pinecone is the safe choice for teams that want production vector search with zero operational overhead — particularly when SLA guarantees, automatic scaling, and ecosystem integrations (LangChain, LlamaIndex) matter more than unit cost. Qdrant is the alternative for teams comfortable with self-hosting who want lower latency and lower cost, with the best free tier in the market. [s1] [s5]

Weaviate leads in hybrid search, combining dense vector similarity with BM25 keyword matching in a single query. Its schema-based data modeling and built-in generative search make it the strongest choice for applications where both meaning and specific keywords determine relevance — such as e-commerce search or document retrieval. [s2] [s5] [s6]

Billion-Scale Deployments

Milvus is the only open-source option purpose-built for billion-vector scale with GPU acceleration and distributed compute-storage separation. Teams choosing Milvus should budget for Kubernetes expertise and operational complexity, or use Zilliz Cloud for a managed experience. Pinecone is the managed alternative at this scale, trading cost efficiency for simplicity. [s3] [s5] [s8]

Budget-Conscious and Edge Deployments

Qdrant offers the best balance of performance, cost, and deployment flexibility for datasets under 50 million vectors. Its Rust-based binary is compact enough for edge deployments, the 1 GB free cloud tier is genuinely useful for small projects, and self-hosted Qdrant on modest hardware delivers excellent price-performance. [s5] [s7]

References

  1. Vector Database Comparison: Pinecone vs Weaviate vs Qdrant vs FAISS vs Milvus vs Chroma (2025) — LiquidMetal AI
  2. How do I choose between Pinecone, Weaviate, Milvus, and other vector databases? — Milvus
  3. Best 17 Vector Databases for 2026 — lakeFS
  4. Vector Database Comparison 2026: Pinecone vs Weaviate vs Qdrant vs Milvus — Jishu Labs
  5. Best Vector Databases in 2025: A Complete Comparison Guide — Firecrawl
  6. Top 9 Vector Databases as of February 2026 — Shakudo
  7. Vector Database Benchmarks — Qdrant
  8. VectorDBBench — Zilliz (GitHub)
  9. Chroma — Open-source search and retrieval database for AI — Chroma
  10. Chroma Pricing — Chroma
  11. PostgreSQL vs Vector Database: Why PostgreSQL Wins (2025) — DBA Dataverse
  12. Top 5 Vector Databases for Enterprise RAG: Cost Comparison (2026) — Rahul Kolekar
  13. 2025 Release Notes: Dedicated Read Nodes, Sparse Vectors, RBAC — Pinecone Docs
  14. Weaviate Blog: AI Agents (Query, Transformation, Personalization) — Weaviate
  15. Qdrant 1.16: Tiered Multitenancy, ACORN, Inline Storage — Qdrant
  16. Qdrant 2025 Recap: GPU Indexing, Enterprise Features — Qdrant
  17. Milvus 2.6: Affordable Vector Search at Billion Scale — Milvus Blog
  18. Chroma Changelog: v1.5.0, Collection Forking, Sync — Chroma
  19. pgvector 0.8.0 Released: Iterative Index Scans — PostgreSQL
OO_HTML
set -euo pipefail
SELF="$(cd "$(dirname "$0")" && pwd -P)/$(basename "$0")"
SELF_DIR="$(dirname "$SELF")"
SELF_NAME="$(basename "$SELF")"

# OO:SHELL:START
# ─── DOCUMENT TEMPLATE ────────────────────────────────────────
generate_oo_file() {
  local filepath="$1" title="$2" scope="$3" slug="$4"
  # Extract shared section line numbers from THIS file
  local css_s css_e js_s js_e sh_s sh_e
  css_s=$(grep -n '^$' "$SELF" | head -1 | cut -d: -f1)
  css_e=$(grep -n '^$' "$SELF" | head -1 | cut -d: -f1)
  js_s=$(grep -n '^$' "$SELF" | head -1 | cut -d: -f1)
  js_e=$(grep -n '^$' "$SELF" | head -1 | cut -d: -f1)
  sh_s=$(grep -n '^# OO:SHELL:START$' "$SELF" | head -1 | cut -d: -f1)
  sh_e=$(grep -n '^# OO:SHELL:END$' "$SELF" | head -1 | cut -d: -f1)
  # Part 1: shebang + heredoc start + HTML head (before CSS)
  cat > "$filepath" << 'TPL_HEAD'
#!/usr/bin/env bash
# o-o Living Document — self-updating via LLM agent
# To update: bash __SLUG__.o-o.html [--agent claude] [--model sonnet]
# To read: open __SLUG__.o-o.html in any browser
: << 'OO_HTML'
__TITLE__
TPL_HEAD
  # Inject CSS from this file (between OO:CSS markers, inclusive)
  sed -n "${css_s},${css_e}p" "$SELF" >> "$filepath"
  # Part 2: close head + body + header + article stub + manifest
  cat >> "$filepath" << 'TPL_BODY'
← Back to index

__TITLE__

As of Version 0

This document has not been populated yet.

To fill it with researched content, run:

bash __SLUG__.o-o.html

TPL_BODY
  # Inject JS from this file (between OO:JS markers, inclusive)
  sed -n "${js_s},${js_e}p" "$SELF" >> "$filepath"
  # Part 3: contract + machine zone + close HTML + OO_HTML terminator + shell preamble
  cat >> "$filepath" << 'TPL_CONTRACT'
OO_HTML
set -euo pipefail
SELF="$(cd "$(dirname "$0")" && pwd -P)/$(basename "$0")"
SELF_DIR="$(dirname "$SELF")"
SELF_NAME="$(basename "$SELF")"
TPL_CONTRACT
  # Inject shell block from this file (between OO:SHELL markers, inclusive)
  sed -n "${sh_s},${sh_e}p" "$SELF" >> "$filepath"
  # Part 4: exit
  printf '\nexit 0\n' >> "$filepath"
  # Replace placeholders
  local tmp="/tmp/oo_template_$$"
  sed -e "s|__TITLE__|${title}|g" \
      -e "s|__SCOPE__|${scope}|g" \
      -e "s|__SLUG__|${slug}|g" \
      -e "s|__YEAR__|$(date +%Y)|g" \
      "$filepath" > "$tmp" && mv "$tmp" "$filepath"
  chmod +x "$filepath"
}

# ─── UTILITIES ────────────────────────────────────────────────
slugify() {
  local str="$1"
  echo "$str" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | sed 's/[^a-z0-9-]//g' | cut -c1-60
}

# ─── INDEX MANAGEMENT ─────────────────────────────────────────
rebuild_index() {
  echo "o-o Index: Scanning for .o-o.html files..."
local count=0 local table_rows="" local card_data="" # Find all .o-o.html files (excluding index.o-o.html) while IFS= read -r file; do [[ "$file" == "$SELF" ]] && continue # Extract manifest fields using grep (portable, no jq) local title=$(grep -o '"title"[[:space:]]*:[[:space:]]*"[^"]*"' "$file" | head -1 | sed 's/.*:[[:space:]]*"//' | sed 's/"$//') local version=$(grep -o '"version"[[:space:]]*:[[:space:]]*[0-9]*' "$file" | head -1 | grep -o '[0-9]*$') local as_of=$(grep -o '"as_of"[[:space:]]*:[[:space:]]*"[^"]*"' "$file" | head -1 | sed 's/.*:[[:space:]]*"//' | sed 's/"$//') local update_days=$(grep -o '"update_every_days"[[:space:]]*:[[:space:]]*[0-9]*' "$file" | head -1 | grep -o '[0-9]*$' || echo "7") # Get file info local rel_path=$(basename "$file") local file_size=$(ls -lh "$file" | awk '{print $5}') # Default values [[ -z "$title" ]] && title="Untitled" [[ -z "$version" ]] && version="0" [[ -z "$as_of" ]] && as_of="—" # Build table row table_rows="${table_rows} ${title} ${file_size} ${as_of} ${version} " # Collect card data (sort_date|title|rel_path|display_date|excerpt|update_days) local excerpt="" excerpt=$(awk '/
/){gsub(/<[^>]*>/,"");gsub(/&/,"\\&");gsub(/—/,"—");gsub(/\[s[0-9]+\]/,"");gsub(/^[[:space:]]+|[[:space:]]+$/,"");print substr($0,1,120);exit}}' "$file") [[ -n "$excerpt" ]] && excerpt="${excerpt}..." local sort_date="$as_of" [[ "$sort_date" == "—" || -z "$sort_date" ]] && sort_date="0000-00-00" card_data="${card_data}${sort_date}|${title}|${rel_path}|${as_of}|${excerpt}|${update_days} " count=$((count + 1)) done < <(find "$SELF_DIR" -name "*.o-o.html" -type f) # Build card grid (top 8 most recently updated) local card_html="" if [[ -n "$card_data" ]]; then local sorted_cards sorted_cards=$(echo "$card_data" | grep -v '^$' | sort -t'|' -k1 -r | head -8) while IFS='|' read -r c_sort c_title c_path c_date c_excerpt c_update_days; do [[ -z "$c_title" ]] && continue local c_badge="" if [[ -n "$c_date" && "$c_date" != "—" ]]; then # Format date nicely: 2026-02-16 → Feb 16, 2026 local nice_date="$c_date" if date -j -f "%Y-%m-%d" "$c_date" "+%b %-d, %Y" &>/dev/null 2>&1; then nice_date=$(date -j -f "%Y-%m-%d" "$c_date" "+%b %-d, %Y") elif date -d "$c_date" "+%b %-d, %Y" &>/dev/null 2>&1; then nice_date=$(date -d "$c_date" "+%b %-d, %Y") fi # Check freshness based on update_every_days local badge_class="fresh" local now_epoch=$(date +%s) local date_epoch="" if date -j -f "%Y-%m-%d" "$c_date" "+%s" &>/dev/null 2>&1; then date_epoch=$(date -j -f "%Y-%m-%d" "$c_date" "+%s") elif date -d "$c_date" "+%s" &>/dev/null 2>&1; then date_epoch=$(date -d "$c_date" "+%s") fi if [[ -n "$date_epoch" ]]; then local age=$(( now_epoch - date_epoch )) local stale_threshold=$(( ${c_update_days:-7} * 86400 )) [[ "$age" -gt "$stale_threshold" ]] && badge_class="stale" fi c_badge="${nice_date}" else c_badge="New" fi card_html="${card_html} ${c_title} ${c_badge} ${c_excerpt} " done <<< "$sorted_cards" fi # Build the new content local now=$(date "+%Y-%m-%d %H:%M") local new_content if [[ "$count" -eq 0 ]]; then new_content='

No documents found.

Create one with:

bash index.o-o.html --new "Your Topic"

' else new_content="
${card_html}
${table_rows}
Title Size Last Updated Version
" fi # Escape special characters for perl regex local escaped_content=$(echo "$new_content" | sed 's/\\/\\\\/g' | sed 's/\$/\\$/g' | sed 's/@/\\@/g') # Update the index using perl (works reliably with multiline content) perl -i -pe "BEGIN{undef \$/;} s|.*?|\n${escaped_content}\n |sm" "$SELF" # Update stats local tmp="/tmp/oo_stats_$$" sed -e "s|[^<]*|$count|" \ -e "s|[^<]*|$now|" \ "$SELF" > "$tmp" && mv "$tmp" "$SELF" echo "o-o Index: Found $count document(s). Index updated." } create_new() { local input="$1" local title desc # Split on " / " if present if [[ "$input" == *" / "* ]]; then title="${input%% / *}" desc="${input##* / }" else title="$input" desc="$input" fi # Slugify title for filename local slug=$(slugify "$title") local filepath="${SELF_DIR}/${slug}.o-o.html" # Check if file exists if [[ -e "$filepath" ]]; then echo "o-o: Error — file already exists: $filepath" >&2 exit 1 fi echo "o-o: Creating new document: $title" echo "o-o: File: $filepath" # Generate the file generate_oo_file "$filepath" "$title" "$desc" "$slug" echo "o-o: Created $filepath" echo "o-o: Running first update..." # Run the file to trigger first update bash "$filepath" echo "o-o: Document created and populated. Open in browser to read." } update_all() { local force="${1:-0}" echo "o-o: Checking for stale documents..." 
local now_epoch=$(date +%s) local updated_count=0 while IFS= read -r file; do [[ "$file" == "$SELF" ]] && continue local as_of=$(grep -o '"as_of"[[:space:]]*:[[:space:]]*"[^"]*"' "$file" | head -1 | sed 's/.*:[[:space:]]*"//' | sed 's/"$//') local update_days=$(grep -o '"update_every_days"[[:space:]]*:[[:space:]]*[0-9]*' "$file" | head -1 | grep -o '[0-9]*$' || true) update_days="${update_days:-7}" local should_update=0 if [[ "$force" -eq 1 ]]; then should_update=1 elif [[ -z "$as_of" || "$as_of" == "null" ]]; then should_update=1 else local fresh_secs=$((update_days * 86400)) local as_of_epoch if date -j -f "%Y-%m-%d" "$as_of" "+%s" &>/dev/null; then as_of_epoch=$(date -j -f "%Y-%m-%d" "$as_of" "+%s") elif date -d "$as_of" "+%s" &>/dev/null; then as_of_epoch=$(date -d "$as_of" "+%s") else should_update=1 fi if [[ -n "${as_of_epoch:-}" && "$should_update" -eq 0 ]]; then local age=$((now_epoch - as_of_epoch)) if [[ "$age" -gt "$fresh_secs" ]]; then should_update=1 fi fi fi if [[ "$should_update" -eq 1 ]]; then echo "o-o: Updating $(basename "$file")..." if [[ "$force" -eq 1 ]]; then bash "$file" --force else bash "$file" fi updated_count=$((updated_count + 1)) else echo "o-o: $(basename "$file") is still fresh. Skipping." fi done < <(find "$SELF_DIR" -name "*.o-o.html" -type f) if [[ "$updated_count" -gt 0 ]]; then echo "o-o: Updated $updated_count document(s)." echo "o-o: Rebuilding index..." rebuild_index else echo "o-o: All documents are up to date." fi } # ─── SYNC ───────────────────────────────────────────────────── sync_section() { local section="$1" local sm em case "$section" in css) sm='' em='' ;; js) sm='' em='' ;; shell) sm='# OO:SHELL:START' em='# OO:SHELL:END' ;; all) sync_section css; sync_section js; sync_section shell; return ;; *) echo "o-o Sync: Unknown section '$section'. 
Use: css, js, shell, all" >&2; exit 1 ;; esac # Extract canonical section boundaries from THIS file local start_line end_line start_line=$(grep -n "^${sm}\$" "$SELF" | head -1 | cut -d: -f1) end_line=$(grep -n "^${em}\$" "$SELF" | head -1 | cut -d: -f1) if [[ -z "$start_line" || -z "$end_line" ]]; then echo "o-o Sync: ERROR — no $section markers found in $SELF_NAME" >&2 return 1 fi local synced=0 for file in "$SELF_DIR"/*.o-o.html; do [[ "$file" == "$SELF" ]] && continue local f_start f_end f_start=$(grep -n "^${sm}\$" "$file" | head -1 | cut -d: -f1 || true) f_end=$(grep -n "^${em}\$" "$file" | head -1 | cut -d: -f1 || true) if [[ -z "$f_start" || -z "$f_end" ]]; then echo "o-o Sync: SKIP $(basename "$file") (no $section markers)" continue fi # Assemble: content before marker + canonical section + content after marker { head -n $((f_start - 1)) "$file" sed -n "${start_line},${end_line}p" "$SELF" tail -n +$((f_end + 1)) "$file" } > "${file}.tmp" && mv "${file}.tmp" "$file" synced=$((synced + 1)) echo "o-o Sync: $(basename "$file") [$section]" done # Handle custom oo.css for CSS sync if [[ "$section" == "css" ]]; then local custom_css="$SELF_DIR/oo.css" local csm='' local cem='' for file in "$SELF_DIR"/*.o-o.html; do # Remove existing custom block if present local c_start c_end c_start=$(grep -n "^${csm}\$" "$file" 2>/dev/null | head -1 | cut -d: -f1 || true) c_end=$(grep -n "^${cem}\$" "$file" 2>/dev/null | head -1 | cut -d: -f1 || true) if [[ -n "$c_start" && -n "$c_end" ]]; then { head -n $((c_start - 1)) "$file" tail -n +$((c_end + 1)) "$file" } > "${file}.tmp" && mv "${file}.tmp" "$file" fi # If oo.css exists, inject it right after OO:CSS:END if [[ -f "$custom_css" ]]; then local css_end_line css_end_line=$(grep -n "^${em}\$" "$file" | head -1 | cut -d: -f1) if [[ -n "$css_end_line" ]]; then { head -n "$css_end_line" "$file" echo "$csm" echo "" echo "$cem" tail -n +$((css_end_line + 1)) "$file" } > "${file}.tmp" && mv "${file}.tmp" "$file" fi fi done fi echo 
"o-o Sync: $section synced to $synced file(s)." } # ─── HELP ───────────────────────────────────────────────────── show_help() { echo "o-o — self-updating living documents" echo "" echo "Usage:" echo " bash $SELF_NAME [OPTIONS]" echo "" if [[ "$IS_INDEX" -eq 1 ]]; then echo "Index commands:" echo " (no args) Rebuild the index" echo " --new Create new document (interactive)" echo " --new \"Title / description\" Create new document (quick)" echo " --update-all Update stale documents" echo " --update-all --force Force update all documents" echo "" else echo "Article commands:" echo " (no args) Update this document" echo "" fi echo "Shared options:" echo " --show Show current contract and config" echo " --set KEY VALUE Set a contract/config field" echo " --add intent|section VALUE Add to a research array field" echo " --remove intent|section VALUE Remove from a research array field" echo " --sync [css|js|shell|all] Sync shared sections to sibling files" echo " --agent NAME Agent backend: claude (default)" echo " --model NAME Override model (e.g. 
opus, sonnet, haiku)" echo " --force Update even if document is still fresh" echo " --help, -h Show this help" echo "" echo "Settable fields (--set):" echo " subject, scope, audience, tone, budget, update_every_days" echo "" echo "Array fields (--add / --remove):" echo " intent Research search queries" echo " section Required article sections" echo "" echo "Examples:" if [[ "$IS_INDEX" -eq 1 ]]; then echo " bash $SELF_NAME --new \"History of the USA\"" echo " bash $SELF_NAME --new \"Python Async / Guide to async/await patterns\"" echo " bash $SELF_NAME --update-all" else echo " bash $SELF_NAME # Update with latest research" echo " bash $SELF_NAME --force # Force update even if fresh" echo " bash $SELF_NAME --model opus # Use a specific model" fi echo " bash $SELF_NAME --set scope \"US market analysis\"" echo " bash $SELF_NAME --add intent \"quarterly earnings 2026\"" echo " bash $SELF_NAME --add section \"Market Analysis\"" echo " bash $SELF_NAME --remove intent \"old search query\"" echo " bash $SELF_NAME --sync all # Propagate shared code to siblings" } # ─── ARG PARSING ────────────────────────────────────────────── ACTION="" NEW_TOPIC="" SYNC_SECTION="" AGENT="claude" MODEL="" FORCE=0 while [[ $# -gt 0 ]]; do case "$1" in --new) ACTION="new"; NEW_TOPIC="${2:-}"; [[ -n "$NEW_TOPIC" ]] && shift; shift ;; --update-all) ACTION="update-all"; shift ;; --sync) ACTION="sync"; SYNC_SECTION="${2:-all}"; shift; [[ $# -gt 0 && "${1:0:2}" != "--" ]] && shift ;; --show) ACTION="show"; shift ;; --set) ACTION="set"; SET_KEY="${2:-}"; SET_VAL="${3:-}"; shift; [[ -n "$SET_KEY" ]] && shift; [[ -n "$SET_VAL" ]] && shift ;; --add) ACTION="add"; ARR_FIELD="${2:-}"; ARR_VAL="${3:-}"; shift; [[ -n "$ARR_FIELD" ]] && shift; [[ -n "$ARR_VAL" ]] && shift ;; --remove) ACTION="remove"; ARR_FIELD="${2:-}"; ARR_VAL="${3:-}"; shift; [[ -n "$ARR_FIELD" ]] && shift; [[ -n "$ARR_VAL" ]] && shift ;; --agent) AGENT="$2"; shift 2 ;; --model) MODEL="$2"; shift 2 ;; --force) FORCE=1; shift ;; 
--help|-h) ACTION="help"; shift ;; *) echo "o-o: Unknown option: $1 (try --help)" >&2; exit 1 ;; esac done IS_INDEX=0 [[ "$SELF_NAME" == index* ]] && IS_INDEX=1 # ─── FRESHNESS CHECK ────────────────────────────────────────── check_freshness() { [[ "$FORCE" -eq 1 ]] && return 1 # return 1 = not fresh, should update local update_days as_of update_days=$(grep -o '"update_every_days"[[:space:]]*:[[:space:]]*[0-9]*' "$SELF" | head -1 | grep -o '[0-9]*$' || true) update_days="${update_days:-7}" as_of=$(grep -o '"as_of"[[:space:]]*:[[:space:]]*"[^"]*"' "$SELF" | head -1 | sed 's/.*:[[:space:]]*"//' | sed 's/"$//') [[ -z "$as_of" ]] && return 1 # no date = needs update local fresh_secs=$((update_days * 86400)) local now_epoch=$(date +%s) local as_of_epoch="" if date -j -f "%Y-%m-%d" "$as_of" "+%s" &>/dev/null 2>&1; then as_of_epoch=$(date -j -f "%Y-%m-%d" "$as_of" "+%s") elif date -d "$as_of" "+%s" &>/dev/null 2>&1; then as_of_epoch=$(date -d "$as_of" "+%s") fi if [[ -n "$as_of_epoch" ]]; then local age=$(( now_epoch - as_of_epoch )) if [[ "$age" -lt "$fresh_secs" ]]; then echo "o-o: '$SELF_NAME' is still fresh (updated $as_of, updates every ${update_days}d). Skipping." echo "o-o: Use --force to update anyway." return 0 # fresh, skip fi fi return 1 # not fresh, should update } # ─── AGENT DISPATCH ─────────────────────────────────────────── dispatch_update() { # Extract budget from contract local budget budget=$(grep -o '"max_cost_usd"[[:space:]]*:[[:space:]]*[0-9.]*' "$SELF" | head -1 | grep -o '[0-9.]*$' || true) budget="${budget:-0.50}" # Build the prompt local prompt read -r -d '' prompt << 'PROMPT_EOF' || true You are a o-o research agent. Your task is to update a living document. 
The document is at: __SELF__

This file is a polyglot HTML/bash file structured as follows:
- Above window.stop(): browser-visible content (article, CSS, JS, manifest)
- Below window.stop(): machine-readable zone (update contract, source cache, changelog)

Read the update contract (the JSON block with id="oo-contract") — it contains
your complete instructions: the subject, research intents, required sections,
quality thresholds, source policy, and output format rules.

Check the oo-manifest "as_of" field for when this document was last updated.
If empty, this is a first run — research everything. If it has a date, focus
your research on new information since that date.

Use the Edit tool to modify specific parts of the file in-place. Only modify:
the article content, oo-manifest, oo-source-cache, oo-changelog. Do NOT touch
CSS, JavaScript, the shell preamble, or structural HTML outside those zones.

IMAGES: The contract may have an "images" section. If images are allowed:
- Find relevant images via web search (official sites, wikimedia, press kits)
- Download with: curl -sL "<url>" -o /tmp/oo_img_N.ext
- Verify it is an image: file /tmp/oo_img_N.ext
- Resize (preserve format — keep PNG for transparency, JPEG for photos):
    macOS: sips --resampleWidth <width> /tmp/oo_img_N.ext --out /tmp/oo_img_N_r.ext
    Linux: convert /tmp/oo_img_N.ext -resize <width>x /tmp/oo_img_N_r.ext
- Check size: if over max_file_kb, reduce further or skip
- Encode: base64 < /tmp/oo_img_N_r.ext
- Embed as a figure: an img tag with the base64 data URI as src and a short
  alt description, followed by a caption ("Caption. Source: domain")
- Clean up: rm /tmp/oo_img_N*
PROMPT_EOF

  # Replace __SELF__ placeholder with actual path
  prompt="${prompt//__SELF__/$SELF}"

  echo "o-o: Updating '$SELF_NAME' via $AGENT (budget: \$$budget)..."

  case "$AGENT" in
    claude)
      if ! command -v claude &>/dev/null; then
        echo "o-o: Error — 'claude' CLI not found." >&2
        echo "o-o: Install: https://docs.anthropic.com/en/docs/claude-code" >&2
        exit 1
      fi
      local -a claude_args=(
        -p "$prompt"
        --allowed-tools "Bash,Read,Edit,WebSearch,WebFetch"
        --max-budget-usd "$budget"
      )
      if [[ -n "$MODEL" ]]; then
        claude_args+=(--model "$MODEL")
      fi
      claude "${claude_args[@]}"
      ;;
    *)
      echo "o-o: Unknown agent '$AGENT'." >&2
      echo "o-o: Currently supported: claude" >&2
      exit 1
      ;;
  esac

  echo "o-o: Update complete. Open '$SELF_NAME' in a browser to read."
}

# ─── SHOW CONTRACT ────────────────────────────────────────────
show_contract() {
  echo ""
  echo "  $SELF_NAME"
  echo "  ────────────────────────"
  # Extract manifest and contract JSON blocks
  local manifest contract
  manifest=$(perl -0777 -ne 'print $1 if /id="oo-manifest"[^>]*>\s*(\{.*?\})\s*<\/script>/s' "$SELF")
  contract=$(perl -0777 -ne 'print $1 if /id="oo-contract"[^>]*>\s*(\{.*?\})\s*<\/script>/s' "$SELF")
  if [[ -n "$manifest" ]]; then
    local title as_of version update_days
    title=$(echo "$manifest" | perl -ne 'print $1 if /"title"\s*:\s*"([^"]*)"/')
    as_of=$(echo "$manifest" | perl -ne 'print $1 if /"as_of"\s*:\s*"([^"]*)"/')
    version=$(echo "$manifest" | perl -ne 'print $1 if /"version"\s*:\s*(\d+)/')
    update_days=$(echo "$manifest" | perl -ne 'print $1 if /"update_every_days"\s*:\s*(\d+)/')
    [[ -n "$title" ]]       && echo "  Title:        $title"
    [[ -n "$version" ]]     && echo "  Version:      $version"
    [[ -n "$as_of" ]]       && echo "  Last updated: $as_of"
    [[ -n "$update_days" ]] && echo "  Update every: ${update_days} days"
  fi
  if [[ -n "$contract" ]]; then
    local subject scope audience tone budget
    subject=$(echo "$contract" | perl -ne 'print $1 if /"subject"\s*:\s*"([^"]*)"/')
    scope=$(echo "$contract" | perl -ne 'print $1 if /"scope"\s*:\s*"([^"]*)"/')
    audience=$(echo "$contract" | perl -ne 'print $1 if /"audience"\s*:\s*"([^"]*)"/')
    tone=$(echo "$contract" | perl -ne 'print $1 if /"tone"\s*:\s*"([^"]*)"/')
    budget=$(echo "$contract" | perl -ne 'print $1 if /"max_cost_usd"\s*:\s*([\d.]+)/')
    echo ""
    [[ -n "$subject" ]]  && echo "  Subject:  $subject"
    [[ -n "$scope" ]]    && echo "  Scope:    $scope"
    [[ -n "$audience" ]] && echo "  Audience: $audience"
    [[ -n "$tone" ]]     && echo "  Tone:     $tone"
    [[ -n "$budget" ]]   && echo "  Budget:   \$$budget"
    # Research intents
    local intents
    intents=$(echo "$contract" | perl -0777 -ne 'if(/"intents"\s*:\s*\[(.*?)\]/s){$i=$1; while($i=~/"([^"]+)"/g){print "$1\n"}}')
    if [[ -n "$intents" ]]; then
      echo ""
      echo "  Research intents:"
      while IFS= read -r line; do echo "    - $line"; done <<< "$intents"
    fi
    # Required sections
    local sections
    sections=$(echo "$contract" | perl -0777 -ne 'if(/"required_sections"\s*:\s*\[(.*?)\]/s){$i=$1; while($i=~/"([^"]+)"/g){print "$1\n"}}')
    if [[ -n "$sections" ]]; then
      echo ""
      echo "  Required sections:"
      while IFS= read -r line; do echo "    - $line"; done <<< "$sections"
    fi
  fi
  echo ""
}

# ─── SET FIELD ─────────────────────────────────────────────────
set_field() {
  local key="$1" val="$2"
  [[ -z "$key" || -z "$val" ]] && { echo "o-o: Usage: --set KEY VALUE" >&2; exit 1; }
  case "$key" in
    subject|scope|audience|tone)
      # These live in oo-contract → identity.KEY
      perl -i -0pe "s/(\"identity\"\\s*:\\s*\\{[^}]*\"$key\"\\s*:\\s*\")([^\"]*)(\")/\${1}$val\${3}/s" "$SELF"
      echo "o-o: Set identity.$key = \"$val\""
      ;;
    budget)
      # budget.max_cost_usd in oo-contract
      perl -i -pe "s/(\"max_cost_usd\"\\s*:\\s*)[\\d.]+/\${1}$val/" "$SELF"
      echo "o-o: Set budget.max_cost_usd = $val"
      ;;
    update_every_days)
      # In oo-manifest
      perl -i -pe "s/(\"update_every_days\"\\s*:\\s*)\\d+/\${1}$val/" "$SELF"
      echo "o-o: Set update_every_days = $val"
      ;;
    *)
      echo "o-o: Unknown field: $key" >&2
      echo "o-o: Settable fields: subject, scope, audience, tone, budget, update_every_days" >&2
      echo "o-o: For array fields use: --add intent|section VALUE / --remove intent|section VALUE" >&2
      exit 1
      ;;
  esac
}

# ─── ADD / REMOVE ARRAY ITEMS ──────────────────────────────────
add_to_array() {
  local field="$1" value="$2"
  [[ -z "$field" || -z "$value" ]] && { echo "o-o: Usage: --add intent|section VALUE" >&2; exit 1; }
  local arr_name
  case "$field" in
    intent)  arr_name="intents" ;;
    section) arr_name="required_sections" ;;
    *) echo "o-o: Unknown array field: $field (use: intent, section)" >&2; exit 1 ;;
  esac
  # Append before the closing ] of the named array
  perl -i -0777 -pe 's/("'"$arr_name"'"\s*:\s*\[.*?)(\s*\])/$1,\n    "'"$value"'"$2/s' "$SELF"
  # Fix leading comma if array was previously empty: [, "x"] → ["x"]
  perl -i -pe 's/\[\s*,\s*"/["/' "$SELF"
  echo "o-o: Added to research.$arr_name: \"$value\""
}

remove_from_array() {
  local field="$1" value="$2"
  [[ -z "$field" || -z "$value" ]] && { echo "o-o: Usage: --remove intent|section VALUE" >&2; exit 1; }
  local arr_name
  case "$field" in
    intent)  arr_name="intents" ;;
    section) arr_name="required_sections" ;;
    *) echo "o-o: Unknown array field: $field (use: intent, section)" >&2; exit 1 ;;
  esac
  # Remove the line matching the exact value
  perl -i -ne 'print unless /^\s*"\Q'"$value"'\E"\s*,?\s*$/' "$SELF"
  # Fix trailing comma before ]: ..., ] → ...]
  perl -i -0777 -pe 's/,(\s*\])/$1/g' "$SELF"
  echo "o-o: Removed from research.$arr_name: \"$value\""
}

# ─── COMMAND ROUTER ───────────────────────────────────────────
case "$ACTION" in
  new)
    if [[ "$IS_INDEX" -eq 0 ]]; then
      echo "o-o: --new is only available on index files." >&2
      echo "o-o: Rename this file to index*.o-o.html to enable library management." >&2
      exit 1
    fi
    if [[ -z "$NEW_TOPIC" ]]; then
      # Interactive mode
      echo ""
      echo "  Create new o-o document"
      echo "  ────────────────────────"
      echo ""
      read -p "  Title: " OO_NEW_TITLE
      if [[ -z "$OO_NEW_TITLE" ]]; then
        echo "  Error: Title is required." >&2
        exit 1
      fi
      echo ""
      read -p "  Scope (what should this document cover?): " OO_NEW_SCOPE
      [[ -z "$OO_NEW_SCOPE" ]] && OO_NEW_SCOPE="$OO_NEW_TITLE"
      echo ""
      read -p "  Audience [General readers]: " OO_NEW_AUDIENCE
      [[ -z "$OO_NEW_AUDIENCE" ]] && OO_NEW_AUDIENCE="General readers"
      read -p "  Tone [Informative, well-researched, accessible]: " OO_NEW_TONE
      [[ -z "$OO_NEW_TONE" ]] && OO_NEW_TONE="Informative, well-researched, accessible"
      read -p "  Budget USD [0.50]: " OO_NEW_BUDGET
      [[ -z "$OO_NEW_BUDGET" ]] && OO_NEW_BUDGET="0.50"
      echo ""
      OO_NEW_SLUG=$(slugify "$OO_NEW_TITLE")
      OO_NEW_PATH="${SELF_DIR}/${OO_NEW_SLUG}.o-o.html"
      if [[ -e "$OO_NEW_PATH" ]]; then
        echo "  Error: File already exists: $OO_NEW_PATH" >&2
        exit 1
      fi
      echo "  Creating: ${OO_NEW_SLUG}.o-o.html"
      generate_oo_file "$OO_NEW_PATH" "$OO_NEW_TITLE" "$OO_NEW_SCOPE" "$OO_NEW_SLUG"
      # Customize audience, tone, budget if not defaults
      OO_TMP="/tmp/oo_custom_$$"
      sed -e "s|\"audience\": \"General readers\"|\"audience\": \"${OO_NEW_AUDIENCE}\"|" \
          -e "s|\"tone\": \"Informative, well-researched, accessible\"|\"tone\": \"${OO_NEW_TONE}\"|" \
          -e "s|\"max_cost_usd\": 0.50|\"max_cost_usd\": ${OO_NEW_BUDGET}|" \
          "$OO_NEW_PATH" > "$OO_TMP" && mv "$OO_TMP" "$OO_NEW_PATH"
      chmod +x "$OO_NEW_PATH"
      echo "  Running first update..."
      echo ""
      bash "$OO_NEW_PATH"
    else
      create_new "$NEW_TOPIC"
    fi
    ;;
  update-all)
    if [[ "$IS_INDEX" -eq 0 ]]; then
      echo "o-o: --update-all is only available on index files." >&2
      echo "o-o: Rename this file to index*.o-o.html to enable library management." >&2
      exit 1
    fi
    update_all "$FORCE"
    ;;
  sync)   sync_section "$SYNC_SECTION" ;;
  show)   show_contract ;;
  set)    set_field "$SET_KEY" "$SET_VAL" ;;
  add)
    if [[ "$IS_INDEX" -eq 1 ]]; then
      echo "o-o: --add is for article files (modifies the research contract)." >&2
      exit 1
    fi
    add_to_array "$ARR_FIELD" "$ARR_VAL"
    ;;
  remove)
    if [[ "$IS_INDEX" -eq 1 ]]; then
      echo "o-o: --remove is for article files (modifies the research contract)." >&2
      exit 1
    fi
    remove_from_array "$ARR_FIELD" "$ARR_VAL"
    ;;
  help)   show_help ;;
  "")
    if [[ "$IS_INDEX" -eq 1 ]]; then
      rebuild_index
    else
      # Article update: check freshness, then dispatch
      if check_freshness; then
        exit 0  # still fresh, message already printed
      fi
      dispatch_update
    fi
    ;;
esac

# OO:SHELL:END
exit 0