Commit 6821f865 authored by Andrey Filippov

Add attic/CLAUDE/bin scripts and Gemini integration

- ask_gemini.sh: non-interactive Gemini CLI wrapper (-f for context
  files, -m flash/pro model shorthand, auto @-reference for PNG/PDF)
- dual-opinion.sh: extended with -g/-G flags for optional Gemini third
  agent; produces 4-section "Triple Opinion Synthesis" when enabled
- ask_claude.sh, claude_daemon.sh, dual-opinion.sh, export scripts,
  backfill_center_ims.py: force-tracked (attic/ is gitignored globally,
  git add -f used to persist these helper scripts in the repo)
- gemini.md: Gemini team integration instructions

Gemini CLI setup: npm install -g @google/gemini-cli, GEMINI_API_KEY in
~/.bashrc. Default model gemini-2.5-flash; Pro requires billing enabled
on the Google Cloud project backing the API key.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
parent 1cece288
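The -m model shorthand described in the commit message can be sketched as a small resolver. This is an illustrative sketch of the mapping stated above; the function name is not part of ask_gemini.sh itself:

```shell
# resolve_model: maps the -m shorthand from the commit message to full model
# IDs. The flash/pro table comes from the message above; anything else is
# assumed to be a full model ID and passes through unchanged.
resolve_model() {
  case "${1:-flash}" in
    flash) echo "gemini-2.5-flash" ;;
    pro)   echo "gemini-2.5-pro" ;;
    *)     echo "$1" ;;
  esac
}

resolve_model pro               # gemini-2.5-pro
resolve_model gemini-2.0-flash  # passes through unchanged
```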
#!/usr/bin/env bash
# ask_claude.sh — Submit a question to the Claude daemon and wait for the response.
#
# Designed to be called by Codex (or any tool in a restricted sandbox) that can write
# files but cannot directly execute the claude binary.
#
# The Claude daemon (claude_daemon.sh) must be running.
#
# Usage:
# ask_claude.sh "question"
# ask_claude.sh -q question_file.md
# ask_claude.sh -f context.java -f other.java "question"
#
# Options:
# -f FILE Embed file contents in the prompt (repeatable)
# -q FILE Read question text from a file
# -t SECS Timeout waiting for response (default: 180)
# -o FILE Write response to FILE in addition to stdout
# -h Show help
#
# Exit codes:
# 0 success
# 1 bad arguments
# 2 daemon timeout (no response within -t seconds)
# 3 daemon returned an error response
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
WORKSPACE_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
INBOX_DIR="${CLAUDE_DAEMON_INBOX:-$WORKSPACE_ROOT/attic/CLAUDE/daemon/inbox}"
CONTEXT_FILES=()
QUESTION=""
QUESTION_FILE=""
TIMEOUT_SEC=180
OUTPUT_FILE=""
while getopts ":f:q:t:o:h" opt; do
    case "$opt" in
        f) CONTEXT_FILES+=("$OPTARG") ;;
        q) QUESTION_FILE="$OPTARG" ;;
        t) TIMEOUT_SEC="$OPTARG" ;;
        o) OUTPUT_FILE="$OPTARG" ;;
        h) awk 'NR==1{next} /^#/{gsub(/^# ?/,""); print; next} {exit}' "$0"; exit 0 ;;
        *) echo "Unknown flag: -${OPTARG:-}" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
# ── Build question ────────────────────────────────────────────────────────────
if [[ -n "$QUESTION_FILE" ]]; then
    [[ -f "$QUESTION_FILE" ]] || { echo "ERROR: question file not found: $QUESTION_FILE" >&2; exit 1; }
    QUESTION="$(cat "$QUESTION_FILE")"
elif [[ $# -gt 0 ]]; then
    QUESTION="$*"
else
    echo "ERROR: provide a question via -q FILE or as a positional argument." >&2; exit 1
fi
# ── Build prompt ──────────────────────────────────────────────────────────────
PROMPT="$QUESTION"
for f in "${CONTEXT_FILES[@]}"; do
    if [[ -f "$f" ]]; then
        PROMPT+=$'\n\n'"--- FILE: $f ---"$'\n'"$(cat "$f")"$'\n'"--- END FILE ---"
    else
        echo "WARNING: context file not found: $f" >&2
    fi
done
# ── Submit request ────────────────────────────────────────────────────────────
mkdir -p "$INBOX_DIR"
TS="$(date +%Y%m%d_%H%M%S)_$$"
REQ_FILE="$INBOX_DIR/${TS}.request"
RESP_FILE="$INBOX_DIR/${TS}.response"
ERR_FILE="$INBOX_DIR/${TS}.error"
echo "$PROMPT" > "$REQ_FILE"
# ── Poll for response ─────────────────────────────────────────────────────────
ELAPSED=0
while true; do
    if [[ -f "$RESP_FILE" ]]; then
        # Check if daemon reported an error
        if [[ -f "$ERR_FILE" ]] || grep -q '^DAEMON ERROR:' "$RESP_FILE" 2>/dev/null; then
            echo "ERROR: Claude daemon reported a failure." >&2
            cat "$RESP_FILE" >&2
            [[ -f "$ERR_FILE" ]] && cat "$ERR_FILE" >&2
            exit 3
        fi
        if [[ -n "$OUTPUT_FILE" ]]; then
            cp "$RESP_FILE" "$OUTPUT_FILE"
        fi
        cat "$RESP_FILE"
        exit 0
    fi
    if [[ $ELAPSED -ge $TIMEOUT_SEC ]]; then
        echo "ERROR: timeout after ${TIMEOUT_SEC}s — is claude_daemon.sh running?" >&2
        rm -f "$REQ_FILE"
        exit 2
    fi
    sleep 2
    ELAPSED=$((ELAPSED + 2))
done
#!/usr/bin/env bash
# ask_gemini.sh — Submit a question to Gemini CLI (non-interactive).
#
# Usage:
# ask_gemini.sh "question"
# ask_gemini.sh -q question_file.md
# ask_gemini.sh -f context.java -f diagram.png "question"
#
# Options:
# -f FILE Attach a context file (repeatable).
# Text files (.java .c .py .md .txt .xml etc.) are embedded inline.
# Binary/multimodal files (.png .jpg .pdf .gif) use Gemini @ reference.
# -q FILE Read question text from a file.
# -m MODEL Model override. Accepts short names:
# flash (default) → gemini-2.5-flash
# pro → gemini-2.5-pro
# or full model IDs like gemini-2.0-flash.
# -o FILE Write response to FILE in addition to stdout.
# -h Show this help.
#
# Environment:
# GEMINI_API_KEY Required — set in ~/.bashrc.
# GEMINI_BIN Override path to gemini binary.
set -euo pipefail
# ── Locate binary ─────────────────────────────────────────────────────────────
GEMINI_BIN="${GEMINI_BIN:-}"
if [[ -z "$GEMINI_BIN" ]]; then
    if command -v gemini &>/dev/null; then
        GEMINI_BIN="$(command -v gemini)"
    else
        # Try nvm-installed path
        NVM_GEMINI="$(ls ~/.nvm/versions/node/*/bin/gemini 2>/dev/null | tail -1 || true)"
        [[ -n "$NVM_GEMINI" ]] && GEMINI_BIN="$NVM_GEMINI"
    fi
fi
[[ -n "$GEMINI_BIN" ]] || { echo "ERROR: gemini not found. Run: npm install -g @google/gemini-cli" >&2; exit 1; }
# ── Defaults ──────────────────────────────────────────────────────────────────
DEFAULT_MODEL="gemini-2.5-flash"
CONTEXT_FILES=()
QUESTION=""
QUESTION_FILE=""
MODEL=""
OUTPUT_FILE=""
# ── Parse args ────────────────────────────────────────────────────────────────
while getopts ":f:q:m:o:h" opt; do
    case "$opt" in
        f) CONTEXT_FILES+=("$OPTARG") ;;
        q) QUESTION_FILE="$OPTARG" ;;
        m) MODEL="$OPTARG" ;;
        o) OUTPUT_FILE="$OPTARG" ;;
        h) awk 'NR==1{next} /^#/{gsub(/^# ?/,""); print; next} {exit}' "$0"; exit 0 ;;
        *) echo "Unknown flag: -${OPTARG:-}" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
# ── Question ──────────────────────────────────────────────────────────────────
if [[ -n "$QUESTION_FILE" ]]; then
    [[ -f "$QUESTION_FILE" ]] || { echo "ERROR: question file not found: $QUESTION_FILE" >&2; exit 1; }
    QUESTION="$(cat "$QUESTION_FILE")"
elif [[ $# -gt 0 ]]; then
    QUESTION="$*"
else
    echo "ERROR: provide a question via -q FILE or as a positional argument." >&2
    exit 1
fi
# ── Resolve model ─────────────────────────────────────────────────────────────
case "${MODEL:-flash}" in
    flash)     MODEL="gemini-2.5-flash" ;;
    pro)       MODEL="gemini-2.5-pro" ;;
    flash-2.0) MODEL="gemini-2.0-flash" ;;
    *)         : ;;  # pass through full model IDs unchanged
esac
# ── Build prompt ──────────────────────────────────────────────────────────────
# Text files → inline. Binary/multimodal → @path reference in prompt.
MULTIMODAL_EXTS="png jpg jpeg gif pdf bmp tiff tif webp"
PROMPT="$QUESTION"
MULTIMODAL_REFS=""
for f in "${CONTEXT_FILES[@]}"; do
    if [[ ! -f "$f" ]]; then
        echo "WARNING: context file not found: $f" >&2
        continue
    fi
    ext="${f##*.}"
    ext="${ext,,}"  # lowercase
    is_binary=0
    for e in $MULTIMODAL_EXTS; do
        [[ "$ext" == "$e" ]] && { is_binary=1; break; }
    done
    if [[ $is_binary -eq 1 ]]; then
        MULTIMODAL_REFS+=" @${f}"
    else
        PROMPT+=$'\n\n'"--- FILE: $f ---"$'\n'"$(cat "$f")"$'\n'"--- END FILE ---"
    fi
done
# Prepend any @-references to the prompt text
if [[ -n "$MULTIMODAL_REFS" ]]; then
    PROMPT="${MULTIMODAL_REFS} ${PROMPT}"
fi
# ── Invoke Gemini ─────────────────────────────────────────────────────────────
if [[ -n "$OUTPUT_FILE" ]]; then
    "$GEMINI_BIN" -m "$MODEL" -p "$PROMPT" 2>&1 | tee "$OUTPUT_FILE"
else
    "$GEMINI_BIN" -m "$MODEL" -p "$PROMPT" 2>&1
fi
#!/usr/bin/env python3
"""
Backfill *-CENTER-ims.corr-xml files from the corresponding physical scene IMS files.
For each <models_dir>/<ts>-CENTER/ directory that is missing its IMS file, copy
<models_dir>/<ts>/<ts>-ims.corr-xml → <models_dir>/<ts>-CENTER/<ts>-CENTER-ims.corr-xml
The content is copied as-is (camera LLA from the reference scene is close enough
for testing; createCenterClt() will overwrite with a proper averaged value on the
next full run).
Usage:
python3 backfill_center_ims.py [<models_root>...]
Defaults to the /home/elphel/lwir16-proc/LV/models/models_* directories if no arguments are given.
"""
import sys
import os
import shutil
import glob
IMS_SUFFIX = "-ims.corr-xml"
def backfill(models_root):
    center_dirs = sorted(glob.glob(os.path.join(models_root, "*-CENTER")))
    if not center_dirs:
        print(f"  No *-CENTER directories found under {models_root}")
        return 0, 0
    copied = skipped = 0
    for center_dir in center_dirs:
        center_name = os.path.basename(center_dir)  # e.g. 1773135527_803834-CENTER
        ts = center_name.replace("-CENTER", "")     # e.g. 1773135527_803834
        src = os.path.join(models_root, ts, ts + IMS_SUFFIX)
        dst = os.path.join(center_dir, center_name + IMS_SUFFIX)
        if os.path.exists(dst):
            print(f"  SKIP {dst} (already exists)")
            skipped += 1
            continue
        if not os.path.exists(src):
            print(f"  MISS {src} (source not found, skipping)")
            skipped += 1
            continue
        shutil.copy2(src, dst)
        print(f"  COPY {src}")
        print(f"    -> {dst}")
        copied += 1
    return copied, skipped

def main():
    roots = sys.argv[1:] if len(sys.argv) > 1 else \
        sorted(glob.glob("/home/elphel/lwir16-proc/LV/models/models_*"))
    total_copied = total_skipped = 0
    for root in roots:
        print(f"\n{root}")
        c, s = backfill(root)
        total_copied += c
        total_skipped += s
    print(f"\nDone: {total_copied} copied, {total_skipped} skipped.")

if __name__ == "__main__":
    main()
#!/usr/bin/env bash
# claude_daemon.sh — Watches INBOX_DIR for *.request files, runs claude -p, writes *.response.
#
# Intended to run as a persistent background process (or systemd user service) so that
# Codex and other tools running in restricted sandboxes can query Claude without needing
# direct access to the claude binary.
#
# Request protocol:
# Write a plain-text prompt to INBOX_DIR/<id>.request
# Poll for INBOX_DIR/<id>.response (response text) or INBOX_DIR/<id>.error (failure)
#
# Environment variables:
# CLAUDE_DAEMON_INBOX inbox directory (default: attic/CLAUDE/daemon/inbox)
# CLAUDE_DAEMON_LOG log file path (default: attic/CLAUDE/daemon/daemon.log)
# CLAUDE_DAEMON_MODEL claude model (default: sonnet)
# CLAUDE_BIN claude binary (default: /home/elphel/.local/bin/claude)
#
# Usage:
# claude_daemon.sh # run in foreground
# claude_daemon.sh --background # fork to background, write PID to daemon.pid
# claude_daemon.sh --stop # stop a backgrounded instance
# claude_daemon.sh --status # show status
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
WORKSPACE_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
DAEMON_DIR="$WORKSPACE_ROOT/attic/CLAUDE/daemon"
INBOX_DIR="${CLAUDE_DAEMON_INBOX:-$DAEMON_DIR/inbox}"
ARCHIVE_DIR="$DAEMON_DIR/archive"
LOG_FILE="${CLAUDE_DAEMON_LOG:-$DAEMON_DIR/daemon.log}"
PID_FILE="$DAEMON_DIR/daemon.pid"
CLAUDE_BIN="${CLAUDE_BIN:-/home/elphel/.local/bin/claude}"
CLAUDE_MODEL="${CLAUDE_DAEMON_MODEL:-sonnet}"
POLL_INTERVAL=1 # seconds between inbox scans
mkdir -p "$INBOX_DIR" "$ARCHIVE_DIR" "$(dirname "$LOG_FILE")"
# ── Helpers ──────────────────────────────────────────────────────────────────
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"; }
handle_request() {
    local req_file="$1"
    local base="${req_file%.request}"
    local resp_file="${base}.response"
    local err_file="${base}.error"
    local name
    name="$(basename "$req_file")"
    log "→ $name"
    if cd "$WORKSPACE_ROOT" && "$CLAUDE_BIN" \
            -p "$(cat "$req_file")" \
            --no-session-persistence \
            --model "$CLAUDE_MODEL" \
            > "$resp_file" 2>"$err_file"; then
        local size
        size="$(wc -c < "$resp_file")"
        log "  ✓ ${name%.request}.response ($size bytes)"
        [[ -s "$err_file" ]] || rm -f "$err_file"
    else
        log "  ✗ claude exited non-zero — see ${name%.request}.error"
        echo "DAEMON ERROR: claude exited non-zero. Stderr in $(basename "$err_file")" >> "$resp_file"
    fi
    mv "$req_file" "$ARCHIVE_DIR/"
}
run_loop() {
    log "Claude daemon started (pid $$). Inbox: $INBOX_DIR"
    log "Model: $CLAUDE_MODEL | Poll: ${POLL_INTERVAL}s"
    # Handle any leftover requests from a previous run
    for req in "$INBOX_DIR"/*.request; do
        [[ -f "$req" ]] && handle_request "$req"
    done
    while true; do
        for req in "$INBOX_DIR"/*.request; do
            [[ -f "$req" ]] && handle_request "$req"
        done
        sleep "$POLL_INTERVAL"
    done
}
# ── CLI ───────────────────────────────────────────────────────────────────────
case "${1:-}" in
    --background)
        if [[ -f "$PID_FILE" ]]; then
            OLD_PID="$(cat "$PID_FILE")"
            if kill -0 "$OLD_PID" 2>/dev/null; then
                echo "Daemon already running (pid $OLD_PID)."
                exit 0
            fi
        fi
        nohup bash "$0" >> "$LOG_FILE" 2>&1 &
        echo $! > "$PID_FILE"
        echo "Claude daemon started in background (pid $(cat "$PID_FILE"))."
        echo "Log: $LOG_FILE"
        ;;
    --stop)
        if [[ -f "$PID_FILE" ]]; then
            PID="$(cat "$PID_FILE")"
            if kill -0 "$PID" 2>/dev/null; then
                kill "$PID"
                rm -f "$PID_FILE"
                echo "Daemon stopped (pid $PID)."
            else
                echo "PID file exists but process $PID is not running."
                rm -f "$PID_FILE"
            fi
        else
            echo "No PID file found. Daemon may not be running."
        fi
        ;;
    --status)
        if [[ -f "$PID_FILE" ]]; then
            PID="$(cat "$PID_FILE")"
            if kill -0 "$PID" 2>/dev/null; then
                echo "Daemon running (pid $PID)."
                echo "Log: $LOG_FILE"
                echo "Inbox: $INBOX_DIR ($(ls "$INBOX_DIR"/*.request 2>/dev/null | wc -l) pending)"
            else
                echo "Daemon NOT running (stale PID file: $PID)."
            fi
        else
            echo "Daemon NOT running (no PID file)."
        fi
        ;;
    ""|--foreground)
        run_loop
        ;;
    *)
        echo "Usage: claude_daemon.sh [--background|--stop|--status|--foreground]" >&2
        exit 1
        ;;
esac
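The request/response file protocol described in the daemon header can be exercised by hand. A minimal sketch using a temporary inbox; since no daemon is running here, the daemon's side of the exchange is simulated with a canned reply:

```shell
# Simulate one round-trip of the daemon's file protocol in a temp inbox.
INBOX="$(mktemp -d)"
id="demo_$$"

# Client side: drop a plain-text prompt as <id>.request
printf 'What does set -euo pipefail do?\n' > "$INBOX/$id.request"

# Daemon side (simulated): normally claude -p writes <id>.response;
# here we write a canned reply so the round-trip is observable.
printf 'It enables strict error handling in bash.\n' > "$INBOX/$id.response"

# Client side: poll until the response appears, then print it
while [ ! -f "$INBOX/$id.response" ]; do sleep 1; done
cat "$INBOX/$id.response"
```

With the real daemon running, only the client-side steps are needed; claude_daemon.sh produces the `.response` (or `.error`) file itself.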
#!/usr/bin/env python3
"""
export_codex_session.py — Convert a Codex session JSONL to readable markdown.
Codex stores sessions automatically at ~/.codex/sessions/YYYY/MM/DD/rollout-*.jsonl
Usage (manual):
export_codex_session.py # export most recent session
export_codex_session.py <path/to/session.jsonl>
export_codex_session.py --all # export all sessions not yet exported
Output goes to attic/CLAUDE/session-logs/ as codex_YYYYMMDD_<session-id-prefix>.md
Same session = same filename, so re-running overwrites cleanly.
"""
import json
import os
import sys
from datetime import datetime
from pathlib import Path
CODEX_SESSIONS_ROOT = Path.home() / '.codex' / 'sessions'
# ── Conversion ────────────────────────────────────────────────────────────────
def convert(jsonl_path: Path) -> tuple[str, str, str]:
    """Return (markdown_text, session_id, date_str)."""
    entries = []
    with open(jsonl_path, encoding='utf-8', errors='replace') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                entries.append(json.loads(line))
            except json.JSONDecodeError:
                pass
    session_id = ''
    cwd = ''
    start_ts = None
    turns = []  # list of (role, time_str, text, metadata_dict)
    current_turn_meta = {}
    for e in entries:
        etype = e.get('type', '')
        ts_raw = e.get('timestamp', '')
        try:
            time_str = ts_raw[11:19]  # "HH:MM:SS" from ISO timestamp
        except Exception:
            time_str = ''
        if ts_raw and start_ts is None:
            start_ts = ts_raw
        if etype == 'session_meta':
            p = e.get('payload', {})
            session_id = p.get('id', session_id)
            cwd = p.get('cwd', cwd)
        elif etype == 'turn_context':
            p = e.get('payload', {})
            current_turn_meta = {
                'model': p.get('model', ''),
                'effort': p.get('effort', ''),
            }
        elif etype == 'event_msg':
            p = e.get('payload', {})
            msg_type = p.get('type', '')
            if msg_type == 'user_message':
                text = p.get('message', '').strip()
                if text:
                    turns.append(('user', time_str, text, dict(current_turn_meta)))
            elif msg_type == 'agent_message':
                text = p.get('message', '').strip()
                if text:
                    turns.append(('assistant', time_str, text, dict(current_turn_meta)))
            elif msg_type == 'task_complete':
                pass  # no completion metadata worth showing currently
    # ── Render markdown ───────────────────────────────────────────────────────
    if start_ts:
        date_str = start_ts[:10]
        dt_display = start_ts[:16].replace('T', ' ')
    else:
        date_str = datetime.now().strftime('%Y-%m-%d')
        dt_display = date_str
    lines = [
        f'# Codex session — {dt_display}',
        '',
        f'Session: `{session_id}`  ',
        f'Working dir: `{cwd}`  ',
        f'Source: `{jsonl_path}`',
        f'Exported: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}',
        '',
        '---',
        '',
    ]
    for role, ts, text, meta in turns:
        if role == 'user':
            lines.append(f'## You [{ts}]')
            lines.append('')
            lines.append(text)
            lines.append('')
        else:
            model_tag = f' _{meta.get("model", "")} effort={meta.get("effort", "")}_' if meta.get('model') else ''
            lines.append(f'## Codex [{ts}]{model_tag}')
            lines.append('')
            lines.append(text)
            lines.append('')
            lines.append('---')
            lines.append('')
    return '\n'.join(lines), session_id, date_str
def find_all_sessions() -> list[Path]:
    return sorted(CODEX_SESSIONS_ROOT.rglob('*.jsonl'), key=lambda p: p.stat().st_mtime)

def export_one(jsonl_path: Path, log_dir: Path) -> Path:
    md, session_id, date_str = convert(jsonl_path)
    date_compact = date_str.replace('-', '')
    sid_short = (session_id or jsonl_path.stem)[:8]
    out_path = log_dir / f'codex_{date_compact}_{sid_short}.md'
    out_path.write_text(md, encoding='utf-8')
    return out_path

# ── Entry point ───────────────────────────────────────────────────────────────
def main():
    project_dir = Path(os.environ.get('CLAUDE_PROJECT_DIR',
                                      os.environ.get('PWD', '/home/elphel/git/imagej-elphel')))
    log_dir = project_dir / 'attic' / 'CLAUDE' / 'session-logs'
    log_dir.mkdir(parents=True, exist_ok=True)
    args = sys.argv[1:]
    if args and args[0] == '--all':
        sessions = find_all_sessions()
        exported = 0
        for p in sessions:
            try:
                out = export_one(p, log_dir)
                print(f'  → {out.name}', file=sys.stderr)
                exported += 1
            except Exception as exc:
                print(f'  ✗ {p.name}: {exc}', file=sys.stderr)
        print(f'export_codex_session: {exported}/{len(sessions)} exported → {log_dir}', file=sys.stderr)
        return
    if args and not args[0].startswith('-'):
        jsonl_path = Path(args[0])
    else:
        sessions = find_all_sessions()
        if not sessions:
            print('export_codex_session: no Codex session files found', file=sys.stderr)
            sys.exit(1)
        jsonl_path = sessions[-1]
    if not jsonl_path.exists():
        print(f'export_codex_session: not found: {jsonl_path}', file=sys.stderr)
        sys.exit(1)
    try:
        out_path = export_one(jsonl_path, log_dir)
        size = out_path.stat().st_size
        print(f'export_codex_session: {size} bytes → {out_path}', file=sys.stderr)
    except Exception as exc:
        print(f'export_codex_session: failed: {exc}', file=sys.stderr)
        sys.exit(1)

if __name__ == '__main__':
    main()
#!/usr/bin/env python3
"""
export_session.py — Convert a Claude Code session JSONL to readable markdown.
Called automatically by the Claude Code Stop hook after each turn, or manually.
Output is written to attic/CLAUDE/session-logs/ and named by date + session-id prefix,
so repeated calls during the same session overwrite the same file.
Usage (manual):
export_session.py # export most recent session
export_session.py <path/to/session.jsonl>
Stop hook (stdin receives event JSON from Claude Code):
{"type":"stop","session_id":"...","transcript_path":"..."}
"""
import json
import os
import sys
from datetime import datetime
from pathlib import Path
# ── Text extraction ───────────────────────────────────────────────────────────
def extract_text(content) -> str:
    """Extract readable text from a message content field (str or list of blocks)."""
    if isinstance(content, str):
        return content
    if not isinstance(content, list):
        return ''
    parts = []
    for block in content:
        if not isinstance(block, dict):
            continue
        bt = block.get('type', '')
        if bt == 'text':
            parts.append(block.get('text', ''))
        # thinking blocks: omit by default (too verbose, internal reasoning)
    return '\n'.join(parts)

def summarise_tool(block: dict) -> str:
    name = block.get('name', '?')
    inp = block.get('input', {})
    if name == 'Bash':
        cmd = inp.get('command', '').strip()
        desc = inp.get('description', '')
        short = cmd[:160].replace('\n', ' ')
        suffix = '…' if len(cmd) > 160 else ''
        detail = f'`{short}{suffix}`'
        if desc:
            detail += f' _{desc}_'
        return f'- **Bash** {detail}'
    if name in ('Read', 'Write', 'Edit'):
        path = inp.get('file_path', inp.get('path', '?'))
        return f'- **{name}** `{path}`'
    if name == 'Grep':
        pat = inp.get('pattern', '?')[:80]
        loc = inp.get('path', '')
        return f'- **Grep** `{pat}`' + (f' in `{loc}`' if loc else '')
    if name == 'Glob':
        pat = inp.get('pattern', '?')
        return f'- **Glob** `{pat}`'
    if name == 'Agent':
        desc = inp.get('description', inp.get('prompt', '')[:60])
        return f'- **Agent** _{desc}_'
    # Generic fallback
    keys = [f'{k}={str(v)[:40]}' for k, v in list(inp.items())[:3]]
    return f'- **{name}** {", ".join(keys)}'
# ── Conversion ────────────────────────────────────────────────────────────────
def convert(jsonl_path: Path) -> tuple[str, str]:
    """Return (markdown_text, session_id)."""
    entries = []
    with open(jsonl_path, encoding='utf-8', errors='replace') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                entries.append(json.loads(line))
            except json.JSONDecodeError:
                pass
    session_id = ''
    start_ts = None
    turns = []
    for e in entries:
        t = e.get('type', '')
        if t == 'permission-mode':
            session_id = e.get('sessionId', session_id)
        elif t in ('user', 'assistant'):
            ts_ms = e.get('timestamp')
            if ts_ms and start_ts is None:
                start_ts = ts_ms
            try:
                if isinstance(ts_ms, (int, float)):
                    time_str = datetime.fromtimestamp(ts_ms / 1000).strftime('%H:%M:%S')
                elif isinstance(ts_ms, str):
                    # ISO format: "2026-04-16T01:59:43.686Z"
                    time_str = ts_ms[11:19]
                else:
                    time_str = ''
            except Exception:
                time_str = ''
            msg = e.get('message', {})
            content = msg.get('content', '')
            role = t  # 'user' or 'assistant'
            if role == 'user':
                text = extract_text(content).strip()
                if text:
                    turns.append(('user', time_str, text, []))
            else:
                blocks = content if isinstance(content, list) else []
                text_parts = []
                tool_lines = []
                for block in blocks:
                    if not isinstance(block, dict):
                        continue
                    bt = block.get('type', '')
                    if bt == 'text':
                        text_parts.append(block.get('text', ''))
                    elif bt == 'tool_use':
                        tool_lines.append(summarise_tool(block))
                text = '\n'.join(text_parts).strip()
                turns.append(('assistant', time_str, text, tool_lines))
    # ── Render markdown ───────────────────────────────────────────────────────
    if start_ts:
        try:
            if isinstance(start_ts, (int, float)):
                date_str = datetime.fromtimestamp(start_ts / 1000).strftime('%Y-%m-%d %H:%M')
            else:
                date_str = str(start_ts)[:16].replace('T', ' ')
        except Exception:
            date_str = datetime.now().strftime('%Y-%m-%d')
    else:
        date_str = datetime.now().strftime('%Y-%m-%d')
    lines = [
        f'# Claude session — {date_str}',
        '',
        f'Session: `{session_id}`  ',
        f'Source: `{jsonl_path}`',
        f'Exported: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}',
        '',
        '---',
        '',
    ]
    for role, ts, text, tools in turns:
        if role == 'user':
            lines.append(f'## You [{ts}]')
            lines.append('')
            lines.append(text)
            lines.append('')
        else:
            lines.append(f'## Claude [{ts}]')
            lines.append('')
            if text:
                lines.append(text)
                lines.append('')
            if tools:
                lines.append('**Tools:**')
                lines.extend(tools)
                lines.append('')
            lines.append('---')
            lines.append('')
    return '\n'.join(lines), session_id
# ── Entry point ───────────────────────────────────────────────────────────────
def find_latest_jsonl(project_dir: Path) -> Path | None:
    slug = str(project_dir).lstrip('/').replace('/', '-')
    storage = Path.home() / '.claude' / 'projects' / slug
    if not storage.exists():
        # Try with a leading dash prepended (Claude Code convention)
        storage = Path.home() / '.claude' / 'projects' / f'-{slug.lstrip("-")}'
    if not storage.exists():
        return None
    candidates = [p for p in storage.glob('*.jsonl') if p.stat().st_size > 0]
    if not candidates:
        return None
    return max(candidates, key=lambda p: p.stat().st_mtime)

def main():
    project_dir = Path(os.environ.get('CLAUDE_PROJECT_DIR',
                                      os.environ.get('PWD', '/home/elphel/git/imagej-elphel')))
    log_dir = project_dir / 'attic' / 'CLAUDE' / 'session-logs'
    log_dir.mkdir(parents=True, exist_ok=True)
    jsonl_path = None
    # 1. Explicit path argument
    if len(sys.argv) > 1:
        jsonl_path = Path(sys.argv[1])
    # 2. Stop-hook stdin event
    if jsonl_path is None and not sys.stdin.isatty():
        try:
            data = json.load(sys.stdin)
            tp = data.get('transcript_path') or data.get('transcriptPath')
            if tp:
                jsonl_path = Path(tp)
            elif 'session_id' in data or 'sessionId' in data:
                sid = data.get('session_id') or data.get('sessionId')
                # Locate by session id
                projects_root = Path.home() / '.claude' / 'projects'
                for p in projects_root.rglob(f'{sid}.jsonl'):
                    jsonl_path = p
                    break
        except Exception:
            pass
    # 3. Most recent JSONL for the current project
    if jsonl_path is None or not jsonl_path.exists():
        jsonl_path = find_latest_jsonl(project_dir)
    if jsonl_path is None or not jsonl_path.exists():
        print('export_session: no session JSONL found', file=sys.stderr)
        sys.exit(1)
    try:
        md, session_id = convert(jsonl_path)
    except Exception as exc:
        print(f'export_session: conversion failed: {exc}', file=sys.stderr)
        sys.exit(1)
    # Name by date + first 8 chars of session id so same-session calls overwrite
    date_str = datetime.now().strftime('%Y%m%d')
    sid_short = (session_id or jsonl_path.stem)[:8]
    out_path = log_dir / f'claude_{date_str}_{sid_short}.md'
    out_path.write_text(md, encoding='utf-8')
    print(f'export_session: {len(md)} chars → {out_path}', file=sys.stderr)

if __name__ == '__main__':
    main()
Here is a summary designed to be used with an agent to implement the "Gemini Team Integration" script.
------------------------------
## Instructions for Team Integration: Adding Gemini to the Workflow

## 1. Goal
Integrate Gemini (2.5 Flash/Pro) into the existing imagej-elphel collaboration environment. Gemini must be accessible via the CLI using the GEMINI_API_KEY from ~/.bashrc. It will participate in the "summary" script to find consensus and unique recommendations.
## 2. Software Requirements
* Gemini CLI: Install via npm install -g @google/gemini-cli.
* Environment: Ensure export GEMINI_API_KEY="xxx" is active in the shell.
## 3. Model Logic (Flash vs. Pro)
To minimize human effort, the script should toggle models based on the task:
* Flash (Default): Use for quick code checks, unit test results, and standard console output analysis. It is fast and uses fewer tokens/credits.
* Pro (Deep Research/Multimodal): Use for "Consensus Summaries," reading complex PCB diagrams, or analyzing large sections of Xilinx Zynq documentation.
* CLI Implementation: Use the --model flag.
* Example: gemini --model gemini-2.5-pro "Task description"
## 4. Implementation Task
Write a bash script (e.g., agent_consulter.sh) that automates the following "Consultation" loop:
1. Input: Takes the tail of the current session or a specific file path as input.
2. Multimodal Handling: If a file path is a PDF or PNG (circuit diagram), it should use the Pro model with the @ reference.
3. The Summary Loop:
* Pipe the context to the Gemini CLI.
* Prompt Strategy: "Review the provided dialogue/code from Codex and Claude. Output a 3-section summary: 1) Points of universal agreement. 2) Unique insights from Codex/Claude. 3) Your (Gemini) unique recommendations based on Zynq/ImageJ architecture."
4. Eclipse Integration: Ensure the script outputs the summary to a file that the human can view in Eclipse without switching windows.
## 5. Specific Constraints
* The script must operate entirely within the CLI.
* No "Paste this into a browser" prompts.
* If an image/PDF needs analysis (like the camera board diagram), the script should handle the file path directly.
------------------------------
This approach allows you to stay in Konsole and keep your Eclipse view for code while agents perform tasks. By defaulting to Flash for minor tasks and calling Pro for the "Summary," speed and intelligence are balanced.
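The Flash-vs-Pro toggle from section 3 can be sketched as an extension-based picker. The model IDs are assumed from ask_gemini.sh elsewhere in this commit, and the function name is illustrative, not part of any shipped script:

```shell
# pick_model: route multimodal attachments (PCB diagrams, PDFs) to Pro and
# plain text/code to Flash, per the Flash-vs-Pro logic above.
# Model IDs assumed from ask_gemini.sh in this commit; name is illustrative.
pick_model() {
  ext="$(printf '%s' "${1##*.}" | tr '[:upper:]' '[:lower:]')"
  case "$ext" in
    png|jpg|jpeg|gif|pdf) echo "gemini-2.5-pro" ;;
    *)                    echo "gemini-2.5-flash" ;;
  esac
}

pick_model camera_board.PDF   # gemini-2.5-pro
pick_model UnitTestLog.txt    # gemini-2.5-flash
```

A consultation script would then invoke `gemini --model "$(pick_model "$file")" ...` so the human never has to choose the model by hand.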