feat(cron): per-job workdir for project-aware cron runs (#15110)

Cron jobs can now specify a per-job working directory. When set, the job
runs as if launched from that directory: AGENTS.md / CLAUDE.md /
.cursorrules from that dir are injected into the system prompt, and the
terminal / file / code-exec tools use it as their cwd (via TERMINAL_CWD).
When unset, old behaviour is preserved (no project context files, tools
use the scheduler's cwd).

Requested by @bluthcy.

## Mechanism

- cron/jobs.py: create_job / update_job accept 'workdir'; validated to
  be an absolute existing directory at create/update time.
- cron/scheduler.py run_job: if job.workdir is set, point TERMINAL_CWD
  at it and flip skip_context_files to False before building the agent.
  Restored in finally on every exit path.
- cron/scheduler.py tick: workdir jobs run sequentially (outside the
  thread pool) because TERMINAL_CWD is process-global. Workdir-less jobs
  still run in the parallel pool unchanged.
- tools/cronjob_tools.py + hermes_cli/cron.py + hermes_cli/main.py:
  expose 'workdir' via the cronjob tool and 'hermes cron create/edit
  --workdir ...'. Empty string on edit clears the field.

## Validation

- tests/cron/test_cron_workdir.py (21 tests): normalize, create, update,
  JSON round-trip via cronjob tool, tick partition (workdir jobs run on
  the main thread, not the pool), run_job env toggle + restore in finally.
- Full targeted suite (tests/cron/, test_cronjob_tools.py, test_cron.py,
  test_config_cwd_bridge.py, test_worktree.py): 314/314 passed.
- Live smoke: hermes cron create --workdir $(pwd) works; relative path
  rejected; list shows 'Workdir:'; edit --workdir '' clears.
Teknium 2026-04-24 05:07:01 -07:00 committed by GitHub
parent 0e235947b9
commit 852c7f3be3
7 changed files with 551 additions and 9 deletions


@@ -371,6 +371,39 @@ def save_jobs(jobs: List[Dict[str, Any]]):
raise
def _normalize_workdir(workdir: Optional[str]) -> Optional[str]:
"""Normalize and validate a cron job workdir.
Rules:
- Empty / None -> None (feature off, preserves old behaviour).
- ``~`` is expanded. Relative paths are rejected: cron jobs run detached
from any shell cwd, so relative paths have no stable meaning.
- The path must exist and be a directory at create/update time. We do
NOT re-check at run time (a user might briefly unmount the dir; the
scheduler will just fall back to old behaviour with a logged warning).
Returns the absolute path string, or None when disabled.
Raises ValueError on invalid input.
"""
if workdir is None:
return None
raw = str(workdir).strip()
if not raw:
return None
expanded = Path(raw).expanduser()
if not expanded.is_absolute():
raise ValueError(
f"Cron workdir must be an absolute path (got {raw!r}). "
f"Cron jobs run detached from any shell cwd, so relative paths are ambiguous."
)
resolved = expanded.resolve()
if not resolved.exists():
raise ValueError(f"Cron workdir does not exist: {resolved}")
if not resolved.is_dir():
raise ValueError(f"Cron workdir is not a directory: {resolved}")
return str(resolved)
def create_job(
prompt: str,
schedule: str,
@@ -385,6 +418,7 @@ def create_job(
base_url: Optional[str] = None,
script: Optional[str] = None,
enabled_toolsets: Optional[List[str]] = None,
workdir: Optional[str] = None,
) -> Dict[str, Any]:
"""
Create a new cron job.
@@ -407,6 +441,12 @@ def create_job(
enabled_toolsets: Optional list of toolset names to restrict the agent to.
When set, only tools from these toolsets are loaded, reducing
token overhead. When omitted, all default tools are loaded.
workdir: Optional absolute path. When set, the job runs as if launched
from that directory: AGENTS.md / CLAUDE.md / .cursorrules from
that directory are injected into the system prompt, and the
terminal/file/code_exec tools use it as their working directory
(via TERMINAL_CWD). When unset, the old behaviour is preserved
(no context files injected, tools use the scheduler's cwd).
Returns:
The created job dict
@@ -439,6 +479,7 @@ def create_job(
normalized_script = normalized_script or None
normalized_toolsets = [str(t).strip() for t in enabled_toolsets if str(t).strip()] if enabled_toolsets else None
normalized_toolsets = normalized_toolsets or None
normalized_workdir = _normalize_workdir(workdir)
label_source = (prompt or (normalized_skills[0] if normalized_skills else None)) or "cron job"
job = {
@@ -471,6 +512,7 @@ def create_job(
"deliver": deliver,
"origin": origin, # Tracks where job was created for "origin" delivery
"enabled_toolsets": normalized_toolsets,
"workdir": normalized_workdir,
}
jobs = load_jobs()
@@ -504,6 +546,15 @@ def update_job(job_id: str, updates: Dict[str, Any]) -> Optional[Dict[str, Any]]
if job["id"] != job_id:
continue
# Validate / normalize workdir if present in updates. Empty string or
# None both mean "clear the field" (restore old behaviour).
if "workdir" in updates:
_wd = updates["workdir"]
if _wd in (None, "", False):
updates["workdir"] = None
else:
updates["workdir"] = _normalize_workdir(_wd)
updated = _apply_skill_fields({**job, **updates})
schedule_changed = "schedule" in updates


@@ -795,6 +795,30 @@ def run_job(job: dict) -> tuple[bool, str, str, Optional[str]]:
chat_name=origin.get("chat_name", "") if origin else "",
)
# Per-job working directory. When set (and validated at create/update
# time), we point TERMINAL_CWD at it so:
# - build_context_files_prompt() picks up AGENTS.md / CLAUDE.md /
# .cursorrules from the job's project dir, AND
# - the terminal, file, and code-exec tools run commands from there.
#
# tick() serializes workdir-jobs outside the parallel pool, so mutating
# os.environ["TERMINAL_CWD"] here is safe for those jobs. For workdir-less
# jobs we leave TERMINAL_CWD untouched — preserves the original behaviour
# (skip_context_files=True, tools use whatever cwd the scheduler has).
_job_workdir = (job.get("workdir") or "").strip() or None
if _job_workdir and not Path(_job_workdir).is_dir():
# Directory was removed between create-time validation and now. Log
# and drop back to old behaviour rather than crashing the job.
logger.warning(
"Job '%s': configured workdir %r no longer exists — running without it",
job_id, _job_workdir,
)
_job_workdir = None
_prior_terminal_cwd = os.environ.get("TERMINAL_CWD", "_UNSET_")
if _job_workdir:
os.environ["TERMINAL_CWD"] = _job_workdir
logger.info("Job '%s': using workdir %s", job_id, _job_workdir)
try:
# Re-read .env and config.yaml fresh every run so provider/key
# changes take effect without a gateway restart.
@@ -920,7 +944,10 @@ def run_job(job: dict) -> tuple[bool, str, str, Optional[str]]:
enabled_toolsets=_resolve_cron_enabled_toolsets(job, _cfg),
disabled_toolsets=["cronjob", "messaging", "clarify"],
quiet_mode=True,
# When a workdir is configured, inject AGENTS.md / CLAUDE.md /
# .cursorrules from that directory; otherwise preserve the old
# behaviour (don't inject SOUL.md/AGENTS.md from the scheduler cwd).
skip_context_files=not bool(_job_workdir),
skip_memory=True, # Cron system prompts would corrupt user representations
platform="cron",
session_id=_cron_session_id,
@@ -1059,6 +1086,14 @@ def run_job(job: dict) -> tuple[bool, str, str, Optional[str]]:
return False, output, "", error_msg
finally:
# Restore TERMINAL_CWD to whatever it was before this job ran. We
# only ever mutate it when the job has a workdir; see the setup block
# at the top of run_job for the serialization guarantee.
if _job_workdir:
if _prior_terminal_cwd == "_UNSET_":
os.environ.pop("TERMINAL_CWD", None)
else:
os.environ["TERMINAL_CWD"] = _prior_terminal_cwd
# Clean up ContextVar session/delivery state for this job.
clear_session_vars(_ctx_tokens)
if _session_db:
@@ -1186,14 +1221,28 @@ def tick(verbose: bool = True, adapters=None, loop=None) -> int:
mark_job_run(job["id"], False, str(e))
return False
# Partition due jobs: those with a per-job workdir mutate
# os.environ["TERMINAL_CWD"] inside run_job, which is process-global —
# so they MUST run sequentially to avoid corrupting each other. Jobs
# without a workdir leave env untouched and stay parallel-safe.
workdir_jobs = [j for j in due_jobs if (j.get("workdir") or "").strip()]
parallel_jobs = [j for j in due_jobs if not (j.get("workdir") or "").strip()]
_results: list = []
# Sequential pass for workdir jobs.
for job in workdir_jobs:
_ctx = contextvars.copy_context()
_results.append(_ctx.run(_process_job, job))
# Parallel pass for the rest — same behaviour as before.
if parallel_jobs:
with concurrent.futures.ThreadPoolExecutor(max_workers=_max_workers) as _tick_pool:
_futures = []
for job in parallel_jobs:
_ctx = contextvars.copy_context()
_futures.append(_tick_pool.submit(_ctx.run, _process_job, job))
_results.extend(f.result() for f in _futures)
return sum(_results)
finally:


@@ -93,6 +93,9 @@ def cron_list(show_all: bool = False):
script = job.get("script")
if script:
print(f" Script: {script}")
workdir = job.get("workdir")
if workdir:
print(f" Workdir: {workdir}")
# Execution history
last_status = job.get("last_status")
@@ -168,6 +171,7 @@ def cron_create(args):
skill=getattr(args, "skill", None),
skills=_normalize_skills(getattr(args, "skill", None), getattr(args, "skills", None)),
script=getattr(args, "script", None),
workdir=getattr(args, "workdir", None),
)
if not result.get("success"):
print(color(f"Failed to create job: {result.get('error', 'unknown error')}", Colors.RED))
@@ -180,6 +184,8 @@ def cron_create(args):
job_data = result.get("job", {})
if job_data.get("script"):
print(f" Script: {job_data['script']}")
if job_data.get("workdir"):
print(f" Workdir: {job_data['workdir']}")
print(f" Next run: {result['next_run_at']}")
return 0
@@ -218,6 +224,7 @@ def cron_edit(args):
repeat=getattr(args, "repeat", None),
skills=final_skills,
script=getattr(args, "script", None),
workdir=getattr(args, "workdir", None),
)
if not result.get("success"):
print(color(f"Failed to update job: {result.get('error', 'unknown error')}", Colors.RED))
@@ -233,6 +240,8 @@ def cron_edit(args):
print(" Skills: none")
if updated.get("script"):
print(f" Script: {updated['script']}")
if updated.get("workdir"):
print(f" Workdir: {updated['workdir']}")
return 0


@@ -7429,6 +7429,10 @@ For more help on a command:
"--script",
help="Path to a Python script whose stdout is injected into the prompt each run",
)
cron_create.add_argument(
"--workdir",
help="Absolute path for the job to run from. Injects AGENTS.md / CLAUDE.md / .cursorrules from that directory and uses it as the cwd for terminal/file/code_exec tools. Omit to preserve old behaviour (no project context files).",
)
# cron edit
cron_edit = cron_subparsers.add_parser(
@@ -7467,6 +7471,10 @@ For more help on a command:
"--script",
help="Path to a Python script whose stdout is injected into the prompt each run. Pass empty string to clear.",
)
cron_edit.add_argument(
"--workdir",
help="Absolute path for the job to run from (injects AGENTS.md etc. and sets terminal cwd). Pass empty string to clear.",
)
# lifecycle actions
cron_pause = cron_subparsers.add_parser("pause", help="Pause a scheduled job")


@@ -0,0 +1,380 @@
"""Tests for per-job workdir support in cron jobs.
Covers:
- jobs.create_job: param plumbing, validation, default-None preserved
- jobs._normalize_workdir: absolute / relative / missing / file-not-dir
- jobs.update_job: set, clear, re-validate
- tools.cronjob_tools.cronjob: create + update JSON round-trip, schema
includes workdir, _format_job exposes it when set
- scheduler.tick(): partitions workdir jobs off the thread pool, restores
TERMINAL_CWD in finally, honours the env override during run_job
"""
from __future__ import annotations
import json
from pathlib import Path
import pytest
@pytest.fixture()
def tmp_cron_dir(tmp_path, monkeypatch):
"""Isolate cron job storage into a temp dir so tests don't stomp on real jobs."""
monkeypatch.setattr("cron.jobs.CRON_DIR", tmp_path / "cron")
monkeypatch.setattr("cron.jobs.JOBS_FILE", tmp_path / "cron" / "jobs.json")
monkeypatch.setattr("cron.jobs.OUTPUT_DIR", tmp_path / "cron" / "output")
return tmp_path
# ---------------------------------------------------------------------------
# jobs._normalize_workdir
# ---------------------------------------------------------------------------
class TestNormalizeWorkdir:
def test_none_returns_none(self):
from cron.jobs import _normalize_workdir
assert _normalize_workdir(None) is None
def test_empty_string_returns_none(self):
from cron.jobs import _normalize_workdir
assert _normalize_workdir("") is None
assert _normalize_workdir(" ") is None
def test_absolute_existing_dir_returns_resolved_str(self, tmp_path):
from cron.jobs import _normalize_workdir
result = _normalize_workdir(str(tmp_path))
assert result == str(tmp_path.resolve())
def test_tilde_expands(self, tmp_path, monkeypatch):
from cron.jobs import _normalize_workdir
monkeypatch.setenv("HOME", str(tmp_path))
result = _normalize_workdir("~")
assert result == str(tmp_path.resolve())
def test_relative_path_rejected(self):
from cron.jobs import _normalize_workdir
with pytest.raises(ValueError, match="absolute path"):
_normalize_workdir("some/relative/path")
def test_missing_dir_rejected(self, tmp_path):
from cron.jobs import _normalize_workdir
missing = tmp_path / "does-not-exist"
with pytest.raises(ValueError, match="does not exist"):
_normalize_workdir(str(missing))
def test_file_not_dir_rejected(self, tmp_path):
from cron.jobs import _normalize_workdir
f = tmp_path / "file.txt"
f.write_text("hi")
with pytest.raises(ValueError, match="not a directory"):
_normalize_workdir(str(f))
# ---------------------------------------------------------------------------
# jobs.create_job and update_job
# ---------------------------------------------------------------------------
class TestCreateJobWorkdir:
def test_workdir_stored_when_set(self, tmp_cron_dir):
from cron.jobs import create_job, get_job
job = create_job(
prompt="hello",
schedule="every 1h",
workdir=str(tmp_cron_dir),
)
stored = get_job(job["id"])
assert stored["workdir"] == str(tmp_cron_dir.resolve())
def test_workdir_none_preserves_old_behaviour(self, tmp_cron_dir):
from cron.jobs import create_job, get_job
job = create_job(prompt="hello", schedule="every 1h")
stored = get_job(job["id"])
# Field is present on the dict but None — downstream code checks
# truthiness to decide whether the feature is active.
assert stored.get("workdir") is None
def test_create_rejects_invalid_workdir(self, tmp_cron_dir):
from cron.jobs import create_job
with pytest.raises(ValueError):
create_job(
prompt="hello",
schedule="every 1h",
workdir="not/absolute",
)
class TestUpdateJobWorkdir:
def test_set_workdir_via_update(self, tmp_cron_dir):
from cron.jobs import create_job, get_job, update_job
job = create_job(prompt="x", schedule="every 1h")
update_job(job["id"], {"workdir": str(tmp_cron_dir)})
assert get_job(job["id"])["workdir"] == str(tmp_cron_dir.resolve())
def test_clear_workdir_with_none(self, tmp_cron_dir):
from cron.jobs import create_job, get_job, update_job
job = create_job(
prompt="x", schedule="every 1h", workdir=str(tmp_cron_dir)
)
update_job(job["id"], {"workdir": None})
assert get_job(job["id"])["workdir"] is None
def test_clear_workdir_with_empty_string(self, tmp_cron_dir):
from cron.jobs import create_job, get_job, update_job
job = create_job(
prompt="x", schedule="every 1h", workdir=str(tmp_cron_dir)
)
update_job(job["id"], {"workdir": ""})
assert get_job(job["id"])["workdir"] is None
def test_update_rejects_invalid_workdir(self, tmp_cron_dir):
from cron.jobs import create_job, update_job
job = create_job(prompt="x", schedule="every 1h")
with pytest.raises(ValueError):
update_job(job["id"], {"workdir": "nope/relative"})
# ---------------------------------------------------------------------------
# tools.cronjob_tools: end-to-end JSON round-trip
# ---------------------------------------------------------------------------
class TestCronjobToolWorkdir:
def test_create_with_workdir_json_roundtrip(self, tmp_cron_dir):
from tools.cronjob_tools import cronjob
result = json.loads(
cronjob(
action="create",
prompt="hi",
schedule="every 1h",
workdir=str(tmp_cron_dir),
)
)
assert result["success"] is True
assert result["job"]["workdir"] == str(tmp_cron_dir.resolve())
def test_create_without_workdir_hides_field_in_format(self, tmp_cron_dir):
from tools.cronjob_tools import cronjob
result = json.loads(
cronjob(
action="create",
prompt="hi",
schedule="every 1h",
)
)
assert result["success"] is True
# _format_job omits the field when unset — reduces noise in agent output.
assert "workdir" not in result["job"]
def test_update_clears_workdir_with_empty_string(self, tmp_cron_dir):
from tools.cronjob_tools import cronjob
created = json.loads(
cronjob(
action="create",
prompt="hi",
schedule="every 1h",
workdir=str(tmp_cron_dir),
)
)
job_id = created["job_id"]
updated = json.loads(
cronjob(action="update", job_id=job_id, workdir="")
)
assert updated["success"] is True
assert "workdir" not in updated["job"]
def test_schema_advertises_workdir(self):
from tools.cronjob_tools import CRONJOB_SCHEMA
assert "workdir" in CRONJOB_SCHEMA["parameters"]["properties"]
desc = CRONJOB_SCHEMA["parameters"]["properties"]["workdir"]["description"]
assert "absolute" in desc.lower()
# ---------------------------------------------------------------------------
# scheduler.tick(): workdir partition
# ---------------------------------------------------------------------------
class TestTickWorkdirPartition:
"""
tick() must run workdir jobs sequentially (outside the ThreadPoolExecutor)
because run_job mutates os.environ["TERMINAL_CWD"], which is process-global.
We verify the partition without booting the real scheduler by patching the
pieces tick() calls.
"""
def test_workdir_jobs_run_sequentially(self, tmp_path, monkeypatch):
import cron.scheduler as sched
# Two "jobs" — one with workdir, one without. get_due_jobs returns both.
workdir_job = {"id": "a", "name": "A", "workdir": str(tmp_path)}
parallel_job = {"id": "b", "name": "B", "workdir": None}
monkeypatch.setattr(sched, "get_due_jobs", lambda: [workdir_job, parallel_job])
monkeypatch.setattr(sched, "advance_next_run", lambda *_a, **_kw: None)
# Record call order / thread context.
import threading
calls: list[tuple[str, str]] = []
def fake_run_job(job):
# Return a minimal tuple matching run_job's signature.
calls.append((job["id"], threading.current_thread().name))
return True, "output", "response", None
monkeypatch.setattr(sched, "run_job", fake_run_job)
monkeypatch.setattr(sched, "save_job_output", lambda _jid, _o: None)
monkeypatch.setattr(sched, "mark_job_run", lambda *_a, **_kw: None)
monkeypatch.setattr(
sched, "_deliver_result", lambda *_a, **_kw: None
)
n = sched.tick(verbose=False)
assert n == 2
ids = [c[0] for c in calls]
# Workdir jobs always come before parallel jobs.
assert ids.index("a") < ids.index("b")
# The workdir job must run on the main thread (sequential pass).
main_thread_name = threading.current_thread().name
workdir_thread_name = next(t for jid, t in calls if jid == "a")
assert workdir_thread_name == main_thread_name
# ---------------------------------------------------------------------------
# scheduler.run_job: TERMINAL_CWD + skip_context_files wiring
# ---------------------------------------------------------------------------
class TestRunJobTerminalCwd:
"""
run_job sets TERMINAL_CWD + flips skip_context_files=False when workdir
is set, and restores the prior TERMINAL_CWD in finally even on error.
We stub AIAgent so no real API call happens.
"""
@staticmethod
def _install_stubs(monkeypatch, observed: dict):
"""Patch enough of run_job's deps that it executes without real creds."""
import os
import sys
import cron.scheduler as sched
class FakeAgent:
def __init__(self, **kwargs):
observed["skip_context_files"] = kwargs.get("skip_context_files")
observed["terminal_cwd_during_init"] = os.environ.get(
"TERMINAL_CWD", "_UNSET_"
)
def run_conversation(self, *_a, **_kw):
observed["terminal_cwd_during_run"] = os.environ.get(
"TERMINAL_CWD", "_UNSET_"
)
return {"final_response": "done", "messages": []}
def get_activity_summary(self):
return {"seconds_since_activity": 0.0}
fake_mod = type(sys)("run_agent")
fake_mod.AIAgent = FakeAgent
monkeypatch.setitem(sys.modules, "run_agent", fake_mod)
# Bypass the real provider resolver — it reads ~/.hermes and credentials.
from hermes_cli import runtime_provider as _rtp
monkeypatch.setattr(
_rtp,
"resolve_runtime_provider",
lambda **_kw: {
"provider": "test",
"api_key": "k",
"base_url": "http://test.local",
"api_mode": "chat_completions",
},
)
# Stub scheduler helpers that would otherwise hit the filesystem / config.
monkeypatch.setattr(sched, "_build_job_prompt", lambda job, prerun_script=None: "hi")
monkeypatch.setattr(sched, "_resolve_origin", lambda job: None)
monkeypatch.setattr(sched, "_resolve_delivery_target", lambda job: None)
monkeypatch.setattr(sched, "_resolve_cron_enabled_toolsets", lambda job, cfg: None)
# Unlimited inactivity so the poll loop returns immediately.
monkeypatch.setenv("HERMES_CRON_TIMEOUT", "0")
# run_job calls load_dotenv(~/.hermes/.env, override=True), which will
# happily clobber TERMINAL_CWD out from under us if the real user .env
# has TERMINAL_CWD set (common on dev boxes). Stub it out.
import dotenv
monkeypatch.setattr(dotenv, "load_dotenv", lambda *_a, **_kw: True)
def test_workdir_sets_and_restores_terminal_cwd(
self, tmp_path, monkeypatch
):
import os
import cron.scheduler as sched
# Make sure the test's TERMINAL_CWD starts at a known non-workdir value.
# Use monkeypatch.setenv so it's restored on teardown regardless of
# whatever other tests in this xdist worker have left behind.
monkeypatch.setenv("TERMINAL_CWD", "/original/cwd")
observed: dict = {}
self._install_stubs(monkeypatch, observed)
job = {
"id": "abc",
"name": "wd-job",
"workdir": str(tmp_path),
"schedule_display": "manual",
}
success, _output, response, error = sched.run_job(job)
assert success is True, f"run_job failed: error={error!r} response={response!r}"
# AIAgent was built with skip_context_files=False (feature ON).
assert observed["skip_context_files"] is False
# TERMINAL_CWD was pointing at the job workdir while the agent ran.
assert observed["terminal_cwd_during_init"] == str(tmp_path.resolve())
assert observed["terminal_cwd_during_run"] == str(tmp_path.resolve())
# And it was restored to the original value in finally.
assert os.environ["TERMINAL_CWD"] == "/original/cwd"
def test_no_workdir_leaves_terminal_cwd_untouched(self, monkeypatch):
"""When workdir is absent, run_job must not touch TERMINAL_CWD at all —
whatever value was present before the call should be present after.
We don't assert on the *content* of TERMINAL_CWD (other tests in the
same xdist worker may leave it set to something like '.'); we just
check it's unchanged by run_job.
"""
import os
import cron.scheduler as sched
# Pin TERMINAL_CWD to a sentinel via monkeypatch so we control both
# the before-value and the after-value regardless of cross-test state.
monkeypatch.setenv("TERMINAL_CWD", "/cron-test-sentinel")
before = os.environ["TERMINAL_CWD"]
observed: dict = {}
self._install_stubs(monkeypatch, observed)
job = {
"id": "xyz",
"name": "no-wd-job",
"workdir": None,
"schedule_display": "manual",
}
success, *_ = sched.run_job(job)
assert success is True
# Feature is OFF — skip_context_files stays True.
assert observed["skip_context_files"] is True
# TERMINAL_CWD saw the same value during init as it had before.
assert observed["terminal_cwd_during_init"] == before
# And after run_job completes, it's still the sentinel (nothing
# overwrote or cleared it).
assert os.environ["TERMINAL_CWD"] == before


@@ -217,6 +217,8 @@ def _format_job(job: Dict[str, Any]) -> Dict[str, Any]:
result["script"] = job["script"]
if job.get("enabled_toolsets"):
result["enabled_toolsets"] = job["enabled_toolsets"]
if job.get("workdir"):
result["workdir"] = job["workdir"]
return result
@@ -237,6 +239,7 @@ def cronjob(
reason: Optional[str] = None,
script: Optional[str] = None,
enabled_toolsets: Optional[List[str]] = None,
workdir: Optional[str] = None,
task_id: str = None,
) -> str:
"""Unified cron job management tool."""
@@ -275,6 +278,7 @@ def cronjob(
base_url=_normalize_optional_job_value(base_url, strip_trailing_slash=True),
script=_normalize_optional_job_value(script),
enabled_toolsets=enabled_toolsets or None,
workdir=_normalize_optional_job_value(workdir),
) )
return json.dumps(
{
@@ -366,6 +370,10 @@ def cronjob(
updates["script"] = _normalize_optional_job_value(script) if script else None
if enabled_toolsets is not None:
updates["enabled_toolsets"] = enabled_toolsets or None
if workdir is not None:
# Empty string clears the field (restores old behaviour);
# otherwise pass raw — update_job() validates / normalizes.
updates["workdir"] = _normalize_optional_job_value(workdir) or None
if repeat is not None:
# Normalize: treat 0 or negative as None (infinite)
normalized_repeat = None if repeat <= 0 else repeat
@@ -470,6 +478,10 @@ Important safety rule: cron-run sessions should not recursively schedule more cr
"items": {"type": "string"},
"description": "Optional list of toolset names to restrict the job's agent to (e.g. [\"web\", \"terminal\", \"file\", \"delegation\"]). When set, only tools from these toolsets are loaded, significantly reducing input token overhead. When omitted, all default tools are loaded. Infer from the job's prompt — e.g. use \"web\" if it calls web_search, \"terminal\" if it runs scripts, \"file\" if it reads files, \"delegation\" if it calls delegate_task. On update, pass an empty array to clear."
},
"workdir": {
"type": "string",
"description": "Optional absolute path to run the job from. When set, AGENTS.md / CLAUDE.md / .cursorrules from that directory are injected into the system prompt, and the terminal/file/code_exec tools use it as their working directory — useful for running a job inside a specific project repo. Must be an absolute path that exists. When unset (default), preserves the original behaviour: no project context files, tools use the scheduler's cwd. On update, pass an empty string to clear. Jobs with workdir run sequentially (not parallel) to keep per-job directories isolated."
},
},
"required": ["action"]
}
@@ -515,6 +527,7 @@ registry.register(
reason=args.get("reason"),
script=args.get("script"),
enabled_toolsets=args.get("enabled_toolsets"),
workdir=args.get("workdir"),
task_id=kw.get("task_id"),
))(),
check_fn=check_cronjob_requirements,


@@ -86,6 +86,38 @@ cronjob(
This is useful when you want a scheduled agent to inherit reusable workflows without stuffing the full skill text into the cron prompt itself.
## Running a job inside a project directory
Cron jobs default to running detached from any repo — no `AGENTS.md`, `CLAUDE.md`, or `.cursorrules` is loaded, and the terminal / file / code-exec tools run from whatever working directory the gateway started in. Pass `--workdir` (CLI) or `workdir=` (tool call) to change that:
```bash
# Standalone CLI
hermes cron create --schedule "every 1d at 09:00" \
--workdir /home/me/projects/acme \
--prompt "Audit open PRs, summarize CI health, and post to #eng"
```
```python
# From a chat, via the cronjob tool
cronjob(
action="create",
schedule="every 1d at 09:00",
workdir="/home/me/projects/acme",
prompt="Audit open PRs, summarize CI health, and post to #eng",
)
```
When `workdir` is set:
- `AGENTS.md`, `CLAUDE.md`, and `.cursorrules` from that directory are injected into the system prompt (same discovery order as the interactive CLI)
- `terminal`, `read_file`, `write_file`, `patch`, `search_files`, and `execute_code` all use that directory as their working directory (via `TERMINAL_CWD`)
- The path must be an absolute directory that exists — relative paths and missing directories are rejected at create / update time
- Pass `--workdir ""` (or `workdir=""` via the tool) on edit to clear it and restore the old behaviour
:::note Serialization
Jobs with a `workdir` run sequentially on the scheduler tick, not in the parallel pool. This is deliberate — `TERMINAL_CWD` is process-global, so two workdir jobs running at the same time would corrupt each other's cwd. Workdir-less jobs still run in parallel as before.
:::
## Editing jobs
You do not need to delete and recreate jobs just to change them.