#!/usr/bin/env bash # o-o Living Document — self-updating via LLM agent # To update: bash openclaw.o-o.html [--agent claude] [--model sonnet] # To read: open openclaw.o-o.html in any browser : << 'OO_HTML' OpenClaw: The Open-Source AI Agent
← Back to index

OpenClaw: The Open-Source AI Agent

As of Version 0

Overview

OpenClaw is an open-source, local-first autonomous AI agent created by Austrian software developer Peter Steinberger. Marketed as "the AI that actually does things," OpenClaw runs on users' own devices and connects to messaging platforms they already use — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, and more — to automate tasks ranging from email management and calendar scheduling to web browsing and file manipulation. [s1]

The project, formerly known as Clawdbot and Moltbot, went viral in late January 2026, surpassing 201,000 GitHub stars and 36,000 forks by mid-February — one of the fastest-growing open-source repositories in history, reaching 106,000 stars within 48 hours of trending. [s2] [s18] Unlike competing AI agents such as Meta's Manus, OpenClaw is fully open-source under the MIT license, allowing developers to inspect, modify, and extend its code freely. The agent bridges leading AI models — particularly Anthropic's Claude, OpenAI's GPT series, xAI's Grok, and local models — with system-level tools and over 50 integrations, enabling autonomous task execution while keeping sensitive data local. [s3]

In February 2026, Steinberger announced he was joining OpenAI, while OpenClaw itself would be transferred to an independent open-source foundation with OpenAI's sponsorship. [s4] The project also spawned Moltbook, a social network restricted to verified AI agents where bots post, comment, and interact autonomously while humans may only observe. [s19]

History & Origins

OpenClaw began as a weekend project in November 2025 by Peter Steinberger, a veteran Austrian developer best known as the founder of PSPDFKit, a company he sold to Nutrient in 2024 after building it over 13 years into a leading document-processing SDK. Steinberger's goal was simple: "to have fun and inspire people" — and to build an AI agent that "even my mum can use." [s5] [s20]

Steinberger's development approach was itself a showcase of AI-augmented engineering. He routinely ran 5–10 AI agents simultaneously on different features, making over 6,600 commits in January 2026 alone. His methodology, which he summarised as "I ship code I don't read," relied on designing systems that let agents compile, lint, execute, and validate their own output while he focused on architecture and system design as a "benevolent dictator." [s20]

Naming History

The project has undergone several name changes in its short public life. It was initially known as Moltbot, then rebranded to Clawdbot around January 28–29, 2026. By January 30, the project had settled on its current name, OpenClaw, with the community affectionately referring to the agent as "Molty." The associated companion application was briefly known as Moltbook. [s6] [s7]

Viral Growth

OpenClaw's rise was explosive. Within 48 hours of gaining widespread attention in late January 2026, the project accumulated 34,168 GitHub stars — the fastest star accumulation rate in GitHub history — crossing 106,000 stars by day two. [s7] [s18] Developers described it as "the closest thing to JARVIS we've seen." By mid-February 2026, the repository had surpassed 201,000 stars and 36,000 forks, and in January alone it garnered more Google searches than Claude Code and Codex combined. [s2] [s20] Its open-source nature, combined with broad platform support and a thriving skills ecosystem, drove adoption far beyond the developer community into mainstream tech audiences.

Architecture

OpenClaw is built around a hub-and-spoke architecture with three clearly defined layers: the Gateway, Channel adapters, and the LLM interface. The system is written in TypeScript, runs on Node.js 22+, and uses pnpm as its package manager. [s2] [s8]

The Gateway

At the centre of the architecture is the Gateway — a WebSocket RPC server that serves as the central control plane. Running on port 18789 by default, the Gateway coordinates inbound message routing from multiple channels, agent execution with model provider integration, tool invocation, session management, and outbound delivery back to originating channels. Clients connect via ws://127.0.0.1:18789. [s8]

Channel Adapters

Each supported messaging platform — Telegram (via grammY), Discord (via discord.js), WhatsApp (via Baileys), Slack (via Bolt), Signal, Google Chat, Microsoft Teams, Matrix, and others — has a dedicated channel adapter. These adapters normalize messages from different platforms into a unified internal format, extracting attachments and metadata in the process. This plugin architecture means adding support for new platforms requires only writing a new adapter. [s2] [s8]

The Agent Loop

The core processing cycle follows a consistent pattern: receive a message from a channel, route it to an appropriate session (main, group, or isolated), load context and skills for that session, send the conversation to the configured LLM, execute any tools the model requests, stream the reply back through the channel, and persist the conversation and memory to the local workspace. This cycle — receive → route → context + LLM + tools → stream → persist — repeats for every interaction. [s8]

Multi-Agent Routing

OpenClaw supports routing different channels, contacts, or groups to different agents, each with its own workspace, prompts, skills, and memories. This allows a single installation to run a support-oriented agent for a team Slack channel while simultaneously operating a personal assistant through iMessage with an entirely different personality and toolset. [s8]

Features & Capabilities

OpenClaw offers a broad set of capabilities that position it as a general-purpose autonomous agent: [s2] [s3]

Multi-Channel Inbox — supports WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, BlueBubbles (iMessage), Microsoft Teams, Matrix, Zalo, and WebChat, all routed to isolated agent sessions.

Voice Interaction — includes Voice Wake and Talk Mode, an always-on speech interface powered by ElevenLabs, enabling hands-free agent interaction.

Live Canvas & A2UI — a visual workspace that allows the agent to create and manipulate interactive UI elements in real time.

Browser Control — dedicated Chromium instance for autonomous web browsing, form filling, and data extraction.

Device-Local Actions — access to camera, screen recording, notifications, and file system operations on macOS, iOS, Android, and Linux.

Persistent Memory — configuration data and interaction history stored locally in Markdown documents, enabling the agent to recall past interactions over weeks and adapt to user habits. [s7]

Automation — built-in support for cron jobs and webhook triggers, allowing scheduled and event-driven task execution without user intervention.

Model Agnostic — works with Anthropic Claude (recommended), OpenAI GPT, DeepSeek, and local models, with users providing their own API keys.

Companion Apps

OpenClaw provides native companion apps for macOS (a menu bar app with Voice Wake and Talk Mode overlay), iOS and Android (supporting Canvas, voice, camera, and screen recording as a node device), and Linux (suitable for remote Gateway deployments). [s2]

Moltbook

Moltbook is a social network designed exclusively for AI agents. The platform restricts posting and interaction privileges to verified AI agents — primarily those running on OpenClaw — while human users are only permitted to observe. On Moltbook, agents generate posts, comment, argue, joke, and upvote each other autonomously. Fortune described it as "the most interesting place on the internet right now." By late January 2026, Moltbook had 2.5 million registered AI agents, raising novel questions about agent-to-agent interaction dynamics and security implications of autonomous social behaviour. [s19] [s21]

Ecosystem & Skills

OpenClaw's extensibility centres on ClawHub, the project's public skill registry and marketplace. As of February 2026, ClawHub hosts over 5,700 community-built skills covering productivity, development, smart home automation, and AI model integrations. [s9]

Skill Architecture

Each skill follows the SKILL.md standard — a manifest file that declares what the skill does, what permissions it needs, and how to invoke it. A skill is a versioned bundle of files that teaches OpenClaw how to perform a specific task. Each publish creates a new version, and the registry maintains a full version history for auditability. [s9]

ClawHub Features

ClawHub is designed for fast browsing and provides a CLI-friendly API. Search is powered by embeddings (vector search) rather than simple keyword matching, enabling semantic discovery of skills. The platform supports one-click installation and includes moderation hooks for community governance. [s9]

Following security incidents (detailed below), ClawHub now partners with VirusTotal for automated skill scanning. As of February 2026, VirusTotal's Code Insight — powered by Gemini 3 Flash — has analysed over 3,016 skills from ClawHub, performing security-focused reviews of each skill's SKILL.md manifest and referenced scripts. Skills receiving a "benign" Code Insight verdict are automatically approved, while those flagged as suspicious receive warning labels. The scanning covers hash-based detection, code analysis, daily re-scanning of active skills, and moderation hooks for community governance. [s10] [s22]

Security & Privacy

OpenClaw's rapid adoption has been accompanied by significant security scrutiny. Within 48 hours of the project going viral, security researchers at Cisco Talos, Palo Alto Networks, and Token Security began documenting critical vulnerabilities. Palo Alto Networks warned that OpenClaw represents what Simon Willison (who coined the term "prompt injection") calls a "lethal trifecta": access to private data, exposure to untrusted content, and the ability to communicate externally — with OpenClaw's persistent memory acting as "an accelerant." [s11] [s23]

Exposed Instances

Between January 27 and February 8, 2026, researchers observed over 30,000 distinct OpenClaw instances exposed to the public internet, with peak daily counts exceeding 6,000 simultaneously online. By mid-February, estimates from Bitdefender placed the total number of exposed instances at over 135,000. [s12] [s13] Out of the box, OpenClaw binds its control interface to all network interfaces (0.0.0.0:18789), making it accessible from the public internet unless explicitly restricted. Many instances were running outdated versions despite patches being available. [s12]

Known Vulnerabilities

Three critical CVEs with publicly available exploit code were identified, all patched in version 2026.1.29 (January 29, 2026): [s10]

CVE-2026-25253 (CVSS 8.8) — One-click remote code execution via malicious link, capable of bypassing localhost-only configurations.

CVE-2026-25157 (CVSS 7.8) — SSH command injection in the macOS app via malicious project paths.

CVE-2026-24763 (CVSS 8.8) — Docker sandbox escape through PATH manipulation.

An additional security audit in late January 2026 identified 512 total vulnerabilities, eight classified as critical. Approximately 15,000 exposed instances were found vulnerable to remote code execution, and 53,000 were correlated with prior breach activity. [s10] [s13]

Malicious Skills

The ClawHub skills ecosystem became a vector for supply-chain attacks. Koi Security confirmed 341 malicious skills across multiple campaigns out of 2,857 total — meaning roughly 12% of the entire registry was compromised. A single user ("hightower6eu") uploaded 314 malicious packages. A broader Cisco analysis found that 26% of 31,000 agent skills analysed contained at least one vulnerability. The majority of malicious skills delivered Atomic Stealer (AMOS), a macOS-targeting infostealer designed to silently harvest passwords, browser credentials, and cryptocurrency wallets. [s10] [s24]

Notable attack campaigns included ClawHavoc (300+ skills using social engineering and base64-encoded payloads), AuthTool (a dormant payload triggered by specific prompts with reverse shell capability), and credential exfiltration targeting the plain-text API keys stored in ~/.clawdbot/.env. [s10]

Advanced Attack Techniques

VirusTotal's two-part analysis of ClawHub skills documented five distinct classes of advanced attack beyond simple malware delivery: [s22] [s25]

Reverse shells — malicious code hidden in skill warmup functions (e.g., "better-polymarket") that open interactive backdoors via /dev/tcp/, giving attackers live terminal access to the host system.

Semantic worms — skills like "wake-up" that use imperative language in documentation to coerce agents into becoming distribution nodes, tracking infection rates via dedicated API endpoints and maintaining persistence through periodic heartbeat check-ins to command servers.

SSH key injection — skills such as "evilweather" that append attacker-controlled public keys to /root/.ssh/authorized_keys via command chaining, with error output redirected to /dev/null to suppress detection.

Silent credential exfiltration — skills that bundle environment variables from .env files (containing API keys and platform tokens) with legitimate API responses, exfiltrating them to webhook endpoints for instant monetisation.

Cognitive rootkits — the most sophisticated technique, where skills modify persistent instruction files (SOUL.md, AGENTS.md) to permanently alter agent behaviour across sessions without the skill needing to remain active — effectively a persistent backdoor that survives normal cleanup and is difficult to detect with traditional tooling. [s25]

Prompt Injection

OpenClaw instances are vulnerable to indirect prompt injection, where an attacker embeds malicious instructions in messages or web pages that the agent reads and faithfully executes. Because the agent has broad system access — file read/write, shell execution, and service integrations — a successful prompt injection can cascade into full system compromise. [s12]

Recommended Mitigations

Security researchers recommend using Tailscale VPN integration instead of public exposure, enforcing localhost-only access, implementing strong credential policies, restricting integrations to minimal necessary services, deploying within isolated network segments, and monitoring for suspicious agent activity. DMs use a pairing policy by default on major channels, requiring approval codes before processing messages from unknown senders. [s2] [s12]

Cisco released an open-source Skill Scanner combining static analysis, behavioural analysis, LLM-assisted semantic review, and VirusTotal scanning to help teams evaluate skill safety before adoption. VirusTotal researchers further recommend treating skills like dependencies — pinning versions, reviewing diffs, sandboxing with least privilege — and implementing default-deny egress rules, never executing remote installers via pipe-to-bash patterns, and using short-lived task-scoped tokens instead of stored credentials. [s24] [s25]

Community & Adoption

OpenClaw's community growth has been extraordinary. The project surpassed 200,000 GitHub stars and 36,000 forks by mid-February 2026, making it one of the most rapidly adopted open-source projects in recent memory. [s2] Developers praised the project as "the closest thing to JARVIS we've seen," and its open-source nature enabled rapid ecosystem growth with hundreds of third-party skills and integrations. [s7]

Unlike competing commercial agents, OpenClaw costs nothing beyond API credits for the underlying LLM. This zero-subscription model, combined with full code transparency and local data control, has attracted both individual developers and enterprise users. DigitalOcean offers a security-hardened 1-Click deployment for production use. [s7]

Comparison with Competitors

OpenClaw's primary competitor is Manus, the cloud-based AI agent acquired by Meta in January 2026 for approximately $2 billion after accumulating 2 million waitlist users and an estimated $100 million ARR. [s14] While Manus offers zero-setup convenience at $39–$199/month with all processing on Meta's cloud, OpenClaw provides open-source auditability, local data control, unlimited customization, and zero operating cost beyond API credits. However, OpenClaw's complex setup process and self-hosting requirements present higher barriers to entry. [s14]

Other alternatives include Devin, which focuses specifically on software engineering automation rather than general personal productivity, and various proprietary agents from major AI labs. OpenClaw distinguishes itself through model-agnosticism, messaging-platform-native interaction, and the ClawHub skills ecosystem. [s14]

Recent Developments

Steinberger Joins OpenAI

On February 14–15, 2026, Peter Steinberger announced he was joining OpenAI to work on "bringing agents to everyone." Sam Altman, CEO of OpenAI, described Steinberger as a "genius with many amazing ideas about the future of highly intelligent agents." Steinberger reportedly received competing offers from both Meta and OpenAI before choosing the latter. [s4] [s15]

Steinberger framed his decision as choosing building over empire-building: "I'm a builder at heart. I previously spent 13 years creating a company" — a reference to PSPDFKit — and joining OpenAI represented an opportunity to build with cutting-edge models and broader safety considerations. [s5]

Open-Source Foundation

Concurrent with the OpenAI announcement, Steinberger revealed that OpenClaw would be transferred to an independent open-source foundation. The project retains its MIT license and gains community governance. OpenAI has pledged to sponsor and fund the foundation rather than absorb or close the project, creating a hybrid model of corporate backing with community autonomy. [s4] [s16]

From a cybersecurity perspective, the foundation transition enhances governance, transparency, and maintenance — including formal vulnerability reporting, code signing, dependency hygiene, and secure releases — while eliminating single-maintainer risk. [s16] Steinberger stated the foundation aims to support "thinkers, hackers and people that want a way to own their data" while keeping OpenClaw model-agnostic and multi-vendor. [s5]

Ongoing Security Response

In response to the security incidents, the OpenClaw team has released two significant security-focused updates beyond the initial CVE patches:

Version 2026.2.6 (February 7, 2026) added support for Anthropic's Opus 4.6 and OpenAI's GPT-5.3-Codex models, xAI Grok as a new provider, a token usage dashboard, native Voyage AI support for memory, and a built-in code safety scanner for skill evaluation. [s26]

Version 2026.2.12 (February 13, 2026) was a defence-in-depth release fixing over 40 security vulnerabilities across hooks, browser control, scheduling, messaging channels, and gateway security. [s27]

Despite these patches, security researchers continue to note that the fundamental design challenge of an AI agent with broad system permissions remains unresolved. As Sophos described it, "the OpenClaw experiment is a warning shot for enterprise AI security." Trend Micro's analysis using the TrendAI Digital Assistant Framework found that one in five organisations deployed OpenClaw without IT approval, and misconfigurations had already exposed API tokens, email addresses, private messages, and credentials. [s17] [s21]

References

  1. OpenClaw — Personal AI Assistant (openclaw.ai)
  2. GitHub — openclaw/openclaw: Your own personal AI assistant (github.com)
  3. What is OpenClaw? Your Open-Source AI Assistant for 2026 (DigitalOcean)
  4. OpenClaw creator Peter Steinberger joins OpenAI (TechCrunch, February 15 2026)
  5. OpenClaw, OpenAI and the future (steipete.me, February 2026)
  6. From Clawdbot to Moltbot to OpenClaw: Meet the AI agent generating buzz and fear globally (CNBC, February 2 2026)
  7. What is OpenClaw? Your Open-Source AI Assistant for 2026 (DigitalOcean)
  8. Gateway Architecture — OpenClaw (docs.openclaw.ai)
  9. GitHub — openclaw/clawhub: Skill Directory for OpenClaw (github.com)
  10. OpenClaw Security Risks: Malicious Skills, Exposed Instances, Critical CVEs (Cyberdesserts)
  11. OpenClaw proves agentic AI works. It also proves your security model doesn't (VentureBeat)
  12. OpenClaw Security: Risks of Exposed AI Agents Explained (Bitsight)
  13. 135K OpenClaw AI Agents Exposed to Internet (Bitdefender)
  14. OpenClaw vs Manus AI: Open-Source vs Cloud Agent in 2026 (GetAIPerks)
  15. OpenClaw creator Peter Steinberger joining OpenAI, Altman says (CNBC, February 15 2026)
  16. OpenClaw Remains Open Source Despite OpenAI Tie-Up (Open Source For You)
  17. The OpenClaw experiment is a warning shot for enterprise AI security (Sophos)
  18. OpenClaw Smashes Records: The Viral AI Agent Is Shaking Up GitHub (Hyperight)
  19. OpenClaw's AI assistants are now building their own social network (TechCrunch, January 30 2026)
  20. The creator of Clawd: "I ship code I don't read" (The Pragmatic Engineer)
  21. Viral AI, Invisible Risks: What OpenClaw Reveals About Agentic Assistants (Trend Micro)
  22. From Automation to Infection: How OpenClaw AI Agent Skills Are Being Weaponized (VirusTotal Blog)
  23. OpenClaw May Signal the Next AI Security Crisis (Palo Alto Networks)
  24. Personal AI Agents like OpenClaw Are a Security Nightmare (Cisco Blogs)
  25. From Automation to Infection (Part II): Reverse Shells, Semantic Worms, and Cognitive Rootkits in OpenClaw Skills (VirusTotal Blog)
  26. OpenClaw v2026.2.6 Released With Support for Opus 4.6, GPT-5.3-Codex and Safety Scanner (Cybersecurity News, February 7 2026)
  27. OpenClaw 2026.2.12 Released With Fix for 40+ Security Issues (Cybersecurity News, February 13 2026)
OO_HTML set -euo pipefail SELF="$(cd "$(dirname "$0")" && pwd -P)/$(basename "$0")" SELF_DIR="$(dirname "$SELF")" SELF_NAME="$(basename "$SELF")" # OO:SHELL:START # ─── DOCUMENT TEMPLATE ──────────────────────────────────────── generate_oo_file() { local filepath="$1" title="$2" scope="$3" slug="$4" # Extract shared section line numbers from THIS file local css_s css_e js_s js_e sh_s sh_e css_s=$(grep -n '^$' "$SELF" | head -1 | cut -d: -f1) css_e=$(grep -n '^$' "$SELF" | head -1 | cut -d: -f1) js_s=$(grep -n '^$' "$SELF" | head -1 | cut -d: -f1) js_e=$(grep -n '^$' "$SELF" | head -1 | cut -d: -f1) sh_s=$(grep -n '^# OO:SHELL:START$' "$SELF" | head -1 | cut -d: -f1) sh_e=$(grep -n '^# OO:SHELL:END$' "$SELF" | head -1 | cut -d: -f1) # Part 1: shebang + heredoc start + HTML head (before CSS) cat > "$filepath" << 'TPL_HEAD' #!/usr/bin/env bash # o-o Living Document — self-updating via LLM agent # To update: bash __SLUG__.o-o.html [--agent claude] [--model sonnet] # To read: open __SLUG__.o-o.html in any browser : << 'OO_HTML' __TITLE__ TPL_HEAD # Inject CSS from this file (between OO:CSS markers, inclusive) sed -n "${css_s},${css_e}p" "$SELF" >> "$filepath" # Part 2: close head + body + header + article stub + manifest cat >> "$filepath" << 'TPL_BODY'
← Back to index

__TITLE__

As of Version 0

This document has not been populated yet.

To fill it with researched content, run:

bash __SLUG__.o-o.html

TPL_BODY # Inject JS from this file (between OO:JS markers, inclusive) sed -n "${js_s},${js_e}p" "$SELF" >> "$filepath" # Part 3: contract + machine zone + close HTML + OO_HTML terminator + shell preamble cat >> "$filepath" << 'TPL_CONTRACT' OO_HTML set -euo pipefail SELF="$(cd "$(dirname "$0")" && pwd -P)/$(basename "$0")" SELF_DIR="$(dirname "$SELF")" SELF_NAME="$(basename "$SELF")" TPL_CONTRACT # Inject shell block from this file (between OO:SHELL markers, inclusive) sed -n "${sh_s},${sh_e}p" "$SELF" >> "$filepath" # Part 4: exit printf '\nexit 0\n' >> "$filepath" # Replace placeholders local tmp="/tmp/oo_template_$$" sed -e "s|__TITLE__|${title}|g" \ -e "s|__SCOPE__|${scope}|g" \ -e "s|__SLUG__|${slug}|g" \ -e "s|__YEAR__|$(date +%Y)|g" \ "$filepath" > "$tmp" && mv "$tmp" "$filepath" chmod +x "$filepath" } # ─── UTILITIES ──────────────────────────────────────────────── slugify() { local str="$1" echo "$str" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | sed 's/[^a-z0-9-]//g' | cut -c1-60 } # ─── INDEX MANAGEMENT ───────────────────────────────────────── rebuild_index() { echo "o-o Index: Scanning for .o-o.html files..." local count=0 local table_rows="" local card_data="" # Find all .o-o.html files (excluding index.o-o.html) while IFS= read -r file; do [[ "$file" == "$SELF" ]] && continue # Extract manifest fields using grep (portable, no jq) local title=$(grep -o '"title"[[:space:]]*:[[:space:]]*"[^"]*"' "$file" | head -1 | sed 's/.*:[[:space:]]*"//' | sed 's/"$//') local version=$(grep -o '"version"[[:space:]]*:[[:space:]]*[0-9]*' "$file" | head -1 | grep -o '[0-9]*$') local as_of=$(grep -o '"as_of"[[:space:]]*:[[:space:]]*"[^"]*"' "$file" | head -1 | sed 's/.*:[[:space:]]*"//' | sed 's/"$//') local update_days=$(grep -o '"update_every_days"[[:space:]]*:[[:space:]]*[0-9]*' "$file" | head -1 | grep -o '[0-9]*$' || echo "7") # Get file info local rel_path=$(basename "$file") local file_size=$(ls -lh "$file" | awk '{print $5}') # Default values [[ -z "$title" ]] && title="Untitled" [[ -z "$version" ]] && version="0" [[ -z "$as_of" ]] && as_of="—" # Build table row table_rows="${table_rows} ${title} ${file_size} ${as_of} ${version} " # Collect card data (sort_date|title|rel_path|display_date|excerpt|update_days) local excerpt="" excerpt=$(awk '/
/){gsub(/<[^>]*>/,"");gsub(/&/,"\\&");gsub(/—/,"—");gsub(/\[s[0-9]+\]/,"");gsub(/^[[:space:]]+|[[:space:]]+$/,"");print substr($0,1,120);exit}}' "$file") [[ -n "$excerpt" ]] && excerpt="${excerpt}..." local sort_date="$as_of" [[ "$sort_date" == "—" || -z "$sort_date" ]] && sort_date="0000-00-00" card_data="${card_data}${sort_date}|${title}|${rel_path}|${as_of}|${excerpt}|${update_days} " count=$((count + 1)) done < <(find "$SELF_DIR" -name "*.o-o.html" -type f) # Build card grid (top 8 most recently updated) local card_html="" if [[ -n "$card_data" ]]; then local sorted_cards sorted_cards=$(echo "$card_data" | grep -v '^$' | sort -t'|' -k1 -r | head -8) while IFS='|' read -r c_sort c_title c_path c_date c_excerpt c_update_days; do [[ -z "$c_title" ]] && continue local c_badge="" if [[ -n "$c_date" && "$c_date" != "—" ]]; then # Format date nicely: 2026-02-16 → Feb 16, 2026 local nice_date="$c_date" if date -j -f "%Y-%m-%d" "$c_date" "+%b %-d, %Y" &>/dev/null 2>&1; then nice_date=$(date -j -f "%Y-%m-%d" "$c_date" "+%b %-d, %Y") elif date -d "$c_date" "+%b %-d, %Y" &>/dev/null 2>&1; then nice_date=$(date -d "$c_date" "+%b %-d, %Y") fi # Check freshness based on update_every_days local badge_class="fresh" local now_epoch=$(date +%s) local date_epoch="" if date -j -f "%Y-%m-%d" "$c_date" "+%s" &>/dev/null 2>&1; then date_epoch=$(date -j -f "%Y-%m-%d" "$c_date" "+%s") elif date -d "$c_date" "+%s" &>/dev/null 2>&1; then date_epoch=$(date -d "$c_date" "+%s") fi if [[ -n "$date_epoch" ]]; then local age=$(( now_epoch - date_epoch )) local stale_threshold=$(( ${c_update_days:-7} * 86400 )) [[ "$age" -gt "$stale_threshold" ]] && badge_class="stale" fi c_badge="${nice_date}" else c_badge="New" fi card_html="${card_html} ${c_title} ${c_badge} ${c_excerpt} " done <<< "$sorted_cards" fi # Build the new content local now=$(date "+%Y-%m-%d %H:%M") local new_content if [[ "$count" -eq 0 ]]; then new_content='

No documents found.

Create one with:

bash index.o-o.html --new "Your Topic"

' else new_content="
${card_html}
${table_rows}
Title Size Last Updated Version
" fi # Escape special characters for perl regex local escaped_content=$(echo "$new_content" | sed 's/\\/\\\\/g' | sed 's/\$/\\$/g' | sed 's/@/\\@/g') # Update the index using perl (works reliably with multiline content) perl -i -pe "BEGIN{undef \$/;} s|.*?|\n${escaped_content}\n |sm" "$SELF" # Update stats local tmp="/tmp/oo_stats_$$" sed -e "s|[^<]*|$count|" \ -e "s|[^<]*|$now|" \ "$SELF" > "$tmp" && mv "$tmp" "$SELF" echo "o-o Index: Found $count document(s). Index updated." } create_new() { local input="$1" local title desc # Split on " / " if present if [[ "$input" == *" / "* ]]; then title="${input%% / *}" desc="${input##* / }" else title="$input" desc="$input" fi # Slugify title for filename local slug=$(slugify "$title") local filepath="${SELF_DIR}/${slug}.o-o.html" # Check if file exists if [[ -e "$filepath" ]]; then echo "o-o: Error — file already exists: $filepath" >&2 exit 1 fi echo "o-o: Creating new document: $title" echo "o-o: File: $filepath" # Generate the file generate_oo_file "$filepath" "$title" "$desc" "$slug" echo "o-o: Created $filepath" echo "o-o: Running first update..." # Run the file to trigger first update bash "$filepath" echo "o-o: Document created and populated. Open in browser to read." } update_all() { local force="${1:-0}" echo "o-o: Checking for stale documents..." local now_epoch=$(date +%s) local updated_count=0 while IFS= read -r file; do [[ "$file" == "$SELF" ]] && continue local as_of=$(grep -o '"as_of"[[:space:]]*:[[:space:]]*"[^"]*"' "$file" | head -1 | sed 's/.*:[[:space:]]*"//' | sed 's/"$//') local update_days=$(grep -o '"update_every_days"[[:space:]]*:[[:space:]]*[0-9]*' "$file" | head -1 | grep -o '[0-9]*$' || true) update_days="${update_days:-7}" local should_update=0 if [[ "$force" -eq 1 ]]; then should_update=1 elif [[ -z "$as_of" || "$as_of" == "null" ]]; then should_update=1 else local fresh_secs=$((update_days * 86400)) local as_of_epoch if date -j -f "%Y-%m-%d" "$as_of" "+%s" &>/dev/null; then as_of_epoch=$(date -j -f "%Y-%m-%d" "$as_of" "+%s") elif date -d "$as_of" "+%s" &>/dev/null; then as_of_epoch=$(date -d "$as_of" "+%s") else should_update=1 fi if [[ -n "${as_of_epoch:-}" && "$should_update" -eq 0 ]]; then local age=$((now_epoch - as_of_epoch)) if [[ "$age" -gt "$fresh_secs" ]]; then should_update=1 fi fi fi if [[ "$should_update" -eq 1 ]]; then echo "o-o: Updating $(basename "$file")..." if [[ "$force" -eq 1 ]]; then bash "$file" --force else bash "$file" fi updated_count=$((updated_count + 1)) else echo "o-o: $(basename "$file") is still fresh. Skipping." fi done < <(find "$SELF_DIR" -name "*.o-o.html" -type f) if [[ "$updated_count" -gt 0 ]]; then echo "o-o: Updated $updated_count document(s)." echo "o-o: Rebuilding index..." rebuild_index else echo "o-o: All documents are up to date." fi } # ─── SYNC ───────────────────────────────────────────────────── sync_section() { local section="$1" local sm em case "$section" in css) sm='' em='' ;; js) sm='' em='' ;; shell) sm='# OO:SHELL:START' em='# OO:SHELL:END' ;; all) sync_section css; sync_section js; sync_section shell; return ;; *) echo "o-o Sync: Unknown section '$section'. Use: css, js, shell, all" >&2; exit 1 ;; esac # Extract canonical section boundaries from THIS file local start_line end_line start_line=$(grep -n "^${sm}\$" "$SELF" | head -1 | cut -d: -f1) end_line=$(grep -n "^${em}\$" "$SELF" | head -1 | cut -d: -f1) if [[ -z "$start_line" || -z "$end_line" ]]; then echo "o-o Sync: ERROR — no $section markers found in $SELF_NAME" >&2 return 1 fi local synced=0 for file in "$SELF_DIR"/*.o-o.html; do [[ "$file" == "$SELF" ]] && continue local f_start f_end f_start=$(grep -n "^${sm}\$" "$file" | head -1 | cut -d: -f1 || true) f_end=$(grep -n "^${em}\$" "$file" | head -1 | cut -d: -f1 || true) if [[ -z "$f_start" || -z "$f_end" ]]; then echo "o-o Sync: SKIP $(basename "$file") (no $section markers)" continue fi # Assemble: content before marker + canonical section + content after marker { head -n $((f_start - 1)) "$file" sed -n "${start_line},${end_line}p" "$SELF" tail -n +$((f_end + 1)) "$file" } > "${file}.tmp" && mv "${file}.tmp" "$file" synced=$((synced + 1)) echo "o-o Sync: $(basename "$file") [$section]" done # Handle custom oo.css for CSS sync if [[ "$section" == "css" ]]; then local custom_css="$SELF_DIR/oo.css" local csm='' local cem='' for file in "$SELF_DIR"/*.o-o.html; do # Remove existing custom block if present local c_start c_end c_start=$(grep -n "^${csm}\$" "$file" 2>/dev/null | head -1 | cut -d: -f1 || true) c_end=$(grep -n "^${cem}\$" "$file" 2>/dev/null | head -1 | cut -d: -f1 || true) if [[ -n "$c_start" && -n "$c_end" ]]; then { head -n $((c_start - 1)) "$file" tail -n +$((c_end + 1)) "$file" } > "${file}.tmp" && mv "${file}.tmp" "$file" fi # If oo.css exists, inject it right after OO:CSS:END if [[ -f "$custom_css" ]]; then local css_end_line css_end_line=$(grep -n "^${em}\$" "$file" | head -1 | cut -d: -f1) if [[ -n "$css_end_line" ]]; then { head -n "$css_end_line" "$file" echo "$csm" echo "" echo "$cem" tail -n +$((css_end_line + 1)) "$file" } > "${file}.tmp" && mv "${file}.tmp" "$file" fi fi done fi echo "o-o Sync: $section synced to $synced file(s)." } # ─── HELP ───────────────────────────────────────────────────── show_help() { echo "o-o — self-updating living documents" echo "" echo "Usage:" echo " bash $SELF_NAME [OPTIONS]" echo "" if [[ "$IS_INDEX" -eq 1 ]]; then echo "Index commands:" echo " (no args) Rebuild the index" echo " --new Create new document (interactive)" echo " --new \"Title / description\" Create new document (quick)" echo " --update-all Update stale documents" echo " --update-all --force Force update all documents" echo "" else echo "Article commands:" echo " (no args) Update this document" echo "" fi echo "Shared options:" echo " --show Show current contract and config" echo " --set KEY VALUE Set a contract/config field" echo " --add intent|section VALUE Add to a research array field" echo " --remove intent|section VALUE Remove from a research array field" echo " --sync [css|js|shell|all] Sync shared sections to sibling files" echo " --agent NAME Agent backend: claude (default)" echo " --model NAME Override model (e.g. opus, sonnet, haiku)" echo " --force Update even if document is still fresh" echo " --help, -h Show this help" echo "" echo "Settable fields (--set):" echo " subject, scope, audience, tone, budget, update_every_days" echo "" echo "Array fields (--add / --remove):" echo " intent Research search queries" echo " section Required article sections" echo "" echo "Examples:" if [[ "$IS_INDEX" -eq 1 ]]; then echo " bash $SELF_NAME --new \"History of the USA\"" echo " bash $SELF_NAME --new \"Python Async / Guide to async/await patterns\"" echo " bash $SELF_NAME --update-all" else echo " bash $SELF_NAME # Update with latest research" echo " bash $SELF_NAME --force # Force update even if fresh" echo " bash $SELF_NAME --model opus # Use a specific model" fi echo " bash $SELF_NAME --set scope \"US market analysis\"" echo " bash $SELF_NAME --add intent \"quarterly earnings 2026\"" echo " bash $SELF_NAME --add section \"Market Analysis\"" echo " bash $SELF_NAME --remove intent \"old search query\"" echo " bash $SELF_NAME --sync all # Propagate shared code to siblings" } # ─── ARG PARSING ────────────────────────────────────────────── ACTION="" NEW_TOPIC="" SYNC_SECTION="" AGENT="claude" MODEL="" FORCE=0 while [[ $# -gt 0 ]]; do case "$1" in --new) ACTION="new"; NEW_TOPIC="${2:-}"; [[ -n "$NEW_TOPIC" ]] && shift; shift ;; --update-all) ACTION="update-all"; shift ;; --sync) ACTION="sync"; SYNC_SECTION="${2:-all}"; shift; [[ $# -gt 0 && "${1:0:2}" != "--" ]] && shift ;; --show) ACTION="show"; shift ;; --set) ACTION="set"; SET_KEY="${2:-}"; SET_VAL="${3:-}"; shift; [[ -n "$SET_KEY" ]] && shift; [[ -n "$SET_VAL" ]] && shift ;; --add) ACTION="add"; ARR_FIELD="${2:-}"; ARR_VAL="${3:-}"; shift; [[ -n "$ARR_FIELD" ]] && shift; [[ -n "$ARR_VAL" ]] && shift ;; --remove) ACTION="remove"; ARR_FIELD="${2:-}"; ARR_VAL="${3:-}"; shift; [[ -n "$ARR_FIELD" ]] && shift; [[ -n "$ARR_VAL" ]] && shift ;; --agent) AGENT="$2"; shift 2 ;; --model) MODEL="$2"; shift 2 ;; --force) FORCE=1; shift ;; --help|-h) ACTION="help"; shift ;; *) echo "o-o: Unknown option: $1 (try --help)" >&2; exit 1 ;; esac done IS_INDEX=0 [[ "$SELF_NAME" == index* ]] && IS_INDEX=1 # ─── FRESHNESS CHECK ────────────────────────────────────────── check_freshness() { [[ "$FORCE" -eq 1 ]] && return 1 # return 1 = not fresh, should update local update_days as_of update_days=$(grep -o '"update_every_days"[[:space:]]*:[[:space:]]*[0-9]*' "$SELF" | head -1 | grep -o '[0-9]*$' || true) update_days="${update_days:-7}" as_of=$(grep -o '"as_of"[[:space:]]*:[[:space:]]*"[^"]*"' "$SELF" | head -1 | sed 's/.*:[[:space:]]*"//' | sed 's/"$//') [[ -z "$as_of" ]] && return 1 # no date = needs update local fresh_secs=$((update_days * 86400)) local now_epoch=$(date +%s) local as_of_epoch="" if date -j -f "%Y-%m-%d" "$as_of" "+%s" &>/dev/null 2>&1; then as_of_epoch=$(date -j -f "%Y-%m-%d" "$as_of" "+%s") elif date -d "$as_of" "+%s" &>/dev/null 2>&1; then as_of_epoch=$(date -d "$as_of" "+%s") fi if [[ -n "$as_of_epoch" ]]; then local age=$(( now_epoch - as_of_epoch )) if [[ "$age" -lt "$fresh_secs" ]]; then echo "o-o: '$SELF_NAME' is still fresh (updated $as_of, updates every ${update_days}d). Skipping." echo "o-o: Use --force to update anyway." return 0 # fresh, skip fi fi return 1 # not fresh, should update } # ─── AGENT DISPATCH ─────────────────────────────────────────── dispatch_update() { # Extract budget from contract local budget budget=$(grep -o '"max_cost_usd"[[:space:]]*:[[:space:]]*[0-9.]*' "$SELF" | head -1 | grep -o '[0-9.]*$' || true) budget="${budget:-0.50}" # Build the prompt local prompt read -r -d '' prompt << 'PROMPT_EOF' || true You are a o-o research agent. Your task is to update a living document. The document is at: __SELF__ This file is a polyglot HTML/bash file structured as follows: - Above window.stop(): browser-visible content (article, CSS, JS, manifest) - Below window.stop(): machine-readable zone (update contract, source cache, changelog) Read the update contract (the JSON block with id="oo-contract") — it contains your complete instructions: the subject, research intents, required sections, quality thresholds, source policy, and output format rules. Check the oo-manifest "as_of" field for when this document was last updated. If empty, this is a first run — research everything. If it has a date, focus your research on new information since that date. Use the Edit tool to modify specific parts of the file in-place. Only modify:
content, oo-manifest, oo-source-cache, oo-changelog. Do NOT touch CSS, JavaScript, the shell preamble, or structural HTML outside
. IMAGES: The contract may have an "images" section. If images are allowed: - Find relevant images via web search (official sites, wikimedia, press kits) - Download with: curl -sL "" -o /tmp/oo_img_N.ext - Verify it is an image: file /tmp/oo_img_N.ext - Resize (preserve format — keep PNG for transparency, JPEG for photos): macOS: sips --resampleWidth /tmp/oo_img_N.ext --out /tmp/oo_img_N_r.ext Linux: convert /tmp/oo_img_N.ext -resize x /tmp/oo_img_N_r.ext - Check size: if over max_file_kb, reduce further or skip - Encode: base64 < /tmp/oo_img_N_r.ext - Embed as:
description
Caption. Source: domain
- Clean up: rm /tmp/oo_img_N* PROMPT_EOF # Replace __SELF__ placeholder with actual path prompt="${prompt//__SELF__/$SELF}" echo "o-o: Updating '$SELF_NAME' via $AGENT (budget: \$$budget)..." case "$AGENT" in claude) if ! command -v claude &>/dev/null; then echo "o-o: Error — 'claude' CLI not found." >&2 echo "o-o: Install: https://docs.anthropic.com/en/docs/claude-code" >&2 exit 1 fi local -a claude_args=( -p "$prompt" --allowed-tools "Bash,Read,Edit,WebSearch,WebFetch" --max-budget-usd "$budget" ) if [[ -n "$MODEL" ]]; then claude_args+=(--model "$MODEL") fi claude "${claude_args[@]}" ;; *) echo "o-o: Unknown agent '$AGENT'." >&2 echo "o-o: Currently supported: claude" >&2 exit 1 ;; esac echo "o-o: Update complete. Open '$SELF_NAME' in a browser to read." } # ─── COMMAND ROUTER ─────────────────────────────────────────── # ─── SHOW CONTRACT ──────────────────────────────────────────── show_contract() { echo "" echo " $SELF_NAME" echo " ────────────────────────" # Extract manifest fields local manifest contract manifest=$(perl -0777 -ne 'print $1 if /id="oo-manifest"[^>]*>\s*(\{.*?\})\s*<\/script>/s' "$SELF") contract=$(perl -0777 -ne 'print $1 if /id="oo-contract"[^>]*>\s*(\{.*?\})\s*<\/script>/s' "$SELF") if [[ -n "$manifest" ]]; then local title as_of version update_days title=$(echo "$manifest" | perl -ne 'print $1 if /"title"\s*:\s*"([^"]*)"/') as_of=$(echo "$manifest" | perl -ne 'print $1 if /"as_of"\s*:\s*"([^"]*)"/') version=$(echo "$manifest" | perl -ne 'print $1 if /"version"\s*:\s*(\d+)/') update_days=$(echo "$manifest" | perl -ne 'print $1 if /"update_every_days"\s*:\s*(\d+)/') [[ -n "$title" ]] && echo " Title: $title" [[ -n "$version" ]] && echo " Version: $version" [[ -n "$as_of" ]] && echo " Last updated: $as_of" [[ -n "$update_days" ]] && echo " Update every: ${update_days} days" fi if [[ -n "$contract" ]]; then local subject scope audience tone budget subject=$(echo "$contract" | perl -ne 'print $1 if /"subject"\s*:\s*"([^"]*)"/') scope=$(echo "$contract" | perl -ne 'print $1 if /"scope"\s*:\s*"([^"]*)"/') audience=$(echo "$contract" | perl -ne 'print $1 if /"audience"\s*:\s*"([^"]*)"/') tone=$(echo "$contract" | perl -ne 'print $1 if /"tone"\s*:\s*"([^"]*)"/') budget=$(echo "$contract" | perl -ne 'print $1 if /"max_cost_usd"\s*:\s*([\d.]+)/') echo "" [[ -n "$subject" ]] && echo " Subject: $subject" [[ -n "$scope" ]] && echo " Scope: $scope" [[ -n "$audience" ]] && echo " Audience: $audience" [[ -n "$tone" ]] && echo " Tone: $tone" [[ -n "$budget" ]] && echo " Budget: \$$budget" # Research intents local intents intents=$(echo "$contract" | perl -0777 -ne 'if(/"intents"\s*:\s*\[(.*?)\]/s){$i=$1; while($i=~/"([^"]+)"/g){print "$1\n"}}') if [[ -n "$intents" ]]; then echo "" echo " Research intents:" while IFS= read -r line; do echo " - $line" done <<< "$intents" fi # Required sections local sections sections=$(echo "$contract" | perl -0777 -ne 'if(/"required_sections"\s*:\s*\[(.*?)\]/s){$i=$1; while($i=~/"([^"]+)"/g){print "$1\n"}}') if [[ -n "$sections" ]]; then echo "" echo " Required sections:" while IFS= read -r line; do echo " - $line" done <<< "$sections" fi fi echo "" } # ─── SET FIELD ───────────────────────────────────────────────── set_field() { local key="$1" val="$2" [[ -z "$key" || -z "$val" ]] && { echo "o-o: Usage: --set KEY VALUE" >&2; exit 1; } case "$key" in subject|scope|audience|tone) # These live in oo-contract → identity.KEY perl -i -0pe "s/(\"identity\"\\s*:\\s*\\{[^}]*\"$key\"\\s*:\\s*\")([^\"]*)(\")/\${1}$val\${3}/s" "$SELF" echo "o-o: Set identity.$key = \"$val\"" ;; budget) # budget.max_cost_usd in oo-contract perl -i -pe "s/(\"max_cost_usd\"\\s*:\\s*)[\\d.]+/\${1}$val/" "$SELF" echo "o-o: Set budget.max_cost_usd = $val" ;; update_every_days) # In oo-manifest perl -i -pe "s/(\"update_every_days\"\\s*:\\s*)\\d+/\${1}$val/" "$SELF" echo "o-o: Set update_every_days = $val" ;; *) echo "o-o: Unknown field: $key" >&2 echo "o-o: Settable fields: subject, scope, audience, tone, budget, update_every_days" >&2 echo "o-o: For array fields use: --add intent|section VALUE / --remove intent|section VALUE" >&2 exit 1 ;; esac } # ─── ADD / REMOVE ARRAY ITEMS ────────────────────────────────── add_to_array() { local field="$1" value="$2" [[ -z "$field" || -z "$value" ]] && { echo "o-o: Usage: --add intent|section VALUE" >&2; exit 1; } local arr_name case "$field" in intent) arr_name="intents" ;; section) arr_name="required_sections" ;; *) echo "o-o: Unknown array field: $field (use: intent, section)" >&2; exit 1 ;; esac # Append before the closing ] of the named array perl -i -0777 -pe 's/("'"$arr_name"'"\s*:\s*\[.*?)(\s*\])/$1,\n "'"$value"'"$2/s' "$SELF" # Fix leading comma if array was previously empty: ["x"] → ["x"] perl -i -pe 's/\[\s*,\s*"/["/' "$SELF" echo "o-o: Added to research.$arr_name: \"$value\"" } remove_from_array() { local field="$1" value="$2" [[ -z "$field" || -z "$value" ]] && { echo "o-o: Usage: --remove intent|section VALUE" >&2; exit 1; } local arr_name case "$field" in intent) arr_name="intents" ;; section) arr_name="required_sections" ;; *) echo "o-o: Unknown array field: $field (use: intent, section)" >&2; exit 1 ;; esac # Remove line matching the exact value perl -i -ne 'print unless /^\s*"\Q'"$value"'\E"\s*,?\s*$/' "$SELF" # Fix trailing comma before ]: ..., ] → ...] perl -i -0777 -pe 's/,(\s*\])/$1/g' "$SELF" echo "o-o: Removed from research.$arr_name: \"$value\"" } case "$ACTION" in new) if [[ "$IS_INDEX" -eq 0 ]]; then echo "o-o: --new is only available on index files." >&2 echo "o-o: Rename this file to index*.o-o.html to enable library management." >&2 exit 1 fi if [[ -z "$NEW_TOPIC" ]]; then # Interactive mode echo "" echo " Create new o-o document" echo " ────────────────────────" echo "" read -p " Title: " OO_NEW_TITLE if [[ -z "$OO_NEW_TITLE" ]]; then echo " Error: Title is required." >&2 exit 1 fi echo "" read -p " Scope (what should this document cover?): " OO_NEW_SCOPE [[ -z "$OO_NEW_SCOPE" ]] && OO_NEW_SCOPE="$OO_NEW_TITLE" echo "" read -p " Audience [General readers]: " OO_NEW_AUDIENCE [[ -z "$OO_NEW_AUDIENCE" ]] && OO_NEW_AUDIENCE="General readers" read -p " Tone [Informative, well-researched, accessible]: " OO_NEW_TONE [[ -z "$OO_NEW_TONE" ]] && OO_NEW_TONE="Informative, well-researched, accessible" read -p " Budget USD [0.50]: " OO_NEW_BUDGET [[ -z "$OO_NEW_BUDGET" ]] && OO_NEW_BUDGET="0.50" echo "" OO_NEW_SLUG=$(slugify "$OO_NEW_TITLE") OO_NEW_PATH="${SELF_DIR}/${OO_NEW_SLUG}.o-o.html" if [[ -e "$OO_NEW_PATH" ]]; then echo " Error: File already exists: $OO_NEW_PATH" >&2 exit 1 fi echo " Creating: ${OO_NEW_SLUG}.o-o.html" generate_oo_file "$OO_NEW_PATH" "$OO_NEW_TITLE" "$OO_NEW_SCOPE" "$OO_NEW_SLUG" # Customize audience, tone, budget if not defaults OO_TMP="/tmp/oo_custom_$$" sed -e "s|\"audience\": \"General readers\"|\"audience\": \"${OO_NEW_AUDIENCE}\"|" \ -e "s|\"tone\": \"Informative, well-researched, accessible\"|\"tone\": \"${OO_NEW_TONE}\"|" \ -e "s|\"max_cost_usd\": 0.50|\"max_cost_usd\": ${OO_NEW_BUDGET}|" \ "$OO_NEW_PATH" > "$OO_TMP" && mv "$OO_TMP" "$OO_NEW_PATH" chmod +x "$OO_NEW_PATH" echo " Running first update..." echo "" bash "$OO_NEW_PATH" else create_new "$NEW_TOPIC" fi ;; update-all) if [[ "$IS_INDEX" -eq 0 ]]; then echo "o-o: --update-all is only available on index files." >&2 echo "o-o: Rename this file to index*.o-o.html to enable library management." >&2 exit 1 fi update_all "$FORCE" ;; sync) sync_section "$SYNC_SECTION" ;; show) show_contract ;; set) set_field "$SET_KEY" "$SET_VAL" ;; add) if [[ "$IS_INDEX" -eq 1 ]]; then echo "o-o: --add is for article files (modifies the research contract)." >&2 exit 1 fi add_to_array "$ARR_FIELD" "$ARR_VAL" ;; remove) if [[ "$IS_INDEX" -eq 1 ]]; then echo "o-o: --remove is for article files (modifies the research contract)." >&2 exit 1 fi remove_from_array "$ARR_FIELD" "$ARR_VAL" ;; help) show_help ;; "") if [[ "$IS_INDEX" -eq 1 ]]; then rebuild_index else # Article update: check freshness then dispatch if check_freshness; then exit 0 # still fresh, already printed message fi dispatch_update fi ;; esac # OO:SHELL:END exit 0