
OpenClaw Cron Jobs: Schedule Automated Tasks and Reports

OpenClaw’s heartbeat system checks conditions on a rolling interval. Cron does something different: it fires a specific task at an exact time you choose. “Summarize overnight updates at 7 AM” is a cron job. “Check my inbox every 30 minutes” is a heartbeat. Mixing them up is the most common scheduling mistake in new deployments.

A typical OpenClaw setup runs anywhere from a handful to a dozen cron jobs, handling everything from daily standup preparation to monthly cost audits. The setup takes five minutes per job. The part that takes longer is deciding what deserves a fixed schedule versus a heartbeat check, and getting the payload specific enough that the agent produces useful output instead of generic summaries.

This guide walks through cron syntax, the three schedule types, session targeting, and three production configurations you can copy directly. If you have not set up OpenClaw yet, start with our setup guide first.


How OpenClaw Cron Works

The cron system is built into the OpenClaw Gateway. It persists jobs to ~/.openclaw/cron/jobs.json, survives restarts, and runs independently of whether you are chatting with your agent. When a job fires, the Gateway either injects an event into your main session or spins up an isolated session dedicated to that job.

Three schedule types cover different timing needs:

  • at runs once at a specific time. Use it for one-shot reminders or deployment checks. The job auto-deletes after execution by default.
  • every runs on a fixed interval in milliseconds. Use it for “every 30 minutes” or “every 6 hours” patterns where exact clock time does not matter.
  • cron uses standard 5-field cron expressions for precise scheduling. This is what you want for “9 AM every weekday” or “first Monday of the month.”

Most scheduled tasks use the cron type. The every type overlaps with heartbeats, and in practice heartbeats handle interval-based work better because they share conversational context. Reserve every for cases where you need interval timing without heartbeat behavior.

Cron Expression Syntax

If you have used crontab on Linux, this is the same format. Five fields, left to right:

minute  hour  day-of-month  month  day-of-week
  0      9        *           *        1-5

That expression means “at 9:00 AM, Monday through Friday.” Here are the expressions for common schedules:

Schedule                      Expression      Notes
Every day at 9 AM             0 9 * * *       Runs once daily
Weekdays at 8:30 AM           30 8 * * 1-5    Monday through Friday
Every Monday at 10 AM         0 10 * * 1      Weekly
First day of month at 6 AM    0 6 1 * *       Monthly
Every 15 minutes              */15 * * * *    Quarter-hour intervals
Fridays at 5 PM               0 17 * * 5      End-of-week reports
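To make the five-field semantics concrete, here is a minimal pure-Python matcher. This is a simplified sketch, not part of OpenClaw: it handles `*`, steps, ranges, and lists, but ignores cron's special OR rule when both day-of-month and day-of-week are restricted.

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Check one cron field ('*', '*/15', '1-5', '0', '1,3,5') against a value."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, dt: datetime) -> bool:
    """True if a 5-field cron expression fires at the given minute."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            # cron counts days with 0 = Sunday; Python's weekday() uses 0 = Monday
            and field_matches(dow, (dt.weekday() + 1) % 7))

# "Weekdays at 8:30 AM" -- April 13, 2026 is a Monday, April 12 a Sunday
print(cron_matches("30 8 * * 1-5", datetime(2026, 4, 13, 8, 30)))  # True
print(cron_matches("30 8 * * 1-5", datetime(2026, 4, 12, 8, 30)))  # False
```

Walking an expression through a matcher like this is a quick sanity check when an expression does not fire when you expect.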

One detail the official docs mention but most guides skip: OpenClaw applies a deterministic stagger of up to 5 minutes for recurring top-of-hour expressions. If you schedule three jobs at 0 9 * * *, they will not all fire at exactly 9:00. The Gateway spreads them out to avoid spiking your API costs in a single burst. If you need exact timing (for a client-facing report, say), pass the --exact flag when creating the job.
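The exact stagger algorithm is not documented here, but any deterministic hash of the job identifier produces the behavior described: a stable per-job offset under 5 minutes, so the same job always fires at the same staggered time. An illustrative sketch (the function name and hashing choice are ours, not OpenClaw's):

```python
import hashlib

def stagger_seconds(job_id: str, max_stagger: int = 300) -> int:
    """Derive a stable offset in [0, max_stagger) seconds from a job id.

    Illustrative only: the Gateway's actual algorithm is unspecified here,
    but any deterministic hash gives the same effect -- each job keeps a
    fixed offset, and a group of jobs spreads across the window.
    """
    digest = hashlib.sha256(job_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % max_stagger

for job in ("standup-prep", "morning-briefing", "nightly-digest"):
    print(job, stagger_seconds(job))
```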

Timezone behavior matters. Cron expressions use your Gateway host’s local timezone by default. If your OpenClaw runs on a VPS set to UTC and you want 9 AM Eastern, you must either set the timezone explicitly with --tz "America/New_York" or change the VPS system timezone. Timezone mismatches cause more “my cron never fires” issues than any other configuration error.
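You can see the mismatch concretely with Python's zoneinfo: a `0 9 * * *` job on a UTC host fires at 09:00 UTC, which during daylight saving time lands at 5 AM Eastern, four hours earlier than you probably intended.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A "0 9 * * *" job on a UTC host fires at 09:00 UTC.
fire_utc = datetime(2026, 4, 16, 9, 0, tzinfo=timezone.utc)

# In April, New York observes EDT (UTC-4), so 09:00 UTC is 5:00 AM Eastern.
local = fire_utc.astimezone(ZoneInfo("America/New_York"))
print(local.strftime("%H:%M %Z"))  # 05:00 EDT
```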

Use crontab.guru to validate expressions before deploying them.

Main Session vs Isolated Session

Every cron job targets a session type, and picking the wrong one is the second most common mistake after timezone issues.

Main session jobs inject a system event into your agent’s primary conversation. The agent processes it with full context: your conversation history, preferences, memory files, everything. Use main session when the task needs awareness of what you have been doing. A reminder to follow up on a conversation from earlier today belongs in the main session.

Isolated session jobs spin up a dedicated session with no conversation history. Each run starts clean. This is the default and the right choice for most scheduled tasks. A daily briefing does not need to know what you asked your agent about yesterday afternoon. It needs to check your calendar, pull updates, and format a summary.

# Main session: reminder that needs conversation context
openclaw cron add \
  --name "Follow-up reminder" \
  --at "2026-04-01T16:00:00Z" \
  --session main \
  --system-event "Check if the proposal discussion from this morning needs a follow-up email." \
  --wake now

# Isolated session: daily briefing (no context needed)
openclaw cron add \
  --name "Morning briefing" \
  --cron "0 7 * * 1-5" \
  --tz "America/New_York" \
  --session isolated \
  --message "Pull my Google Calendar events for today. List external meetings with attendees and prep notes. Flag any conflicts." \
  --announce \
  --channel slack \
  --to "channel:C1234567890"

You can also use named persistent sessions with --session session:weekly-review. Unlike standard isolated sessions, named sessions maintain context across runs. Your Monday analytics cron can reference what it found last Monday. This is particularly useful for jobs that build up state over time, like a running project log.

Delivery and Output

When an isolated cron job finishes, what happens to the output? Three delivery modes:

Announce (default) sends the output through your configured channel: Slack, Telegram, WhatsApp, or Discord. The agent’s response gets chunked and formatted for the target platform. This is what you want for briefings, reports, and alerts.

Webhook sends an HTTP POST to a URL you specify. Use this for piping cron output into other systems: a dashboard, a logging service, or a custom API.

None runs the job silently. The output exists in the run log but goes nowhere. Use this for background maintenance tasks like memory cleanup or data preparation that do not need human attention.

# Deliver to a Telegram forum topic
openclaw cron add \
  --name "Nightly digest" \
  --cron "0 22 * * *" \
  --tz "America/Los_Angeles" \
  --session isolated \
  --message "Summarize today's completed tasks and open blockers." \
  --announce \
  --channel telegram \
  --to "-1001234567890:topic:123"
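On the receiving side of webhook delivery, any endpoint that accepts an HTTP POST will do. Here is a minimal Python receiver; the payload schema is an assumption (the handler just parses whatever JSON body arrives), so adapt it to what the Gateway actually sends.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CronWebhookHandler(BaseHTTPRequestHandler):
    """Minimal endpoint for cron webhook deliveries.

    The POST payload schema is an assumption here; this parses any
    JSON body and acknowledges receipt with a 200.
    """

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("cron webhook received:", payload)
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # drop default per-request logging to keep stdout clean

# To serve: HTTPServer(("127.0.0.1", 8080), CronWebhookHandler).serve_forever()
```

Point `--webhook` style delivery at this address and pipe the parsed payload into your dashboard or logging system.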

Cron vs Heartbeat: When to Use Each

We covered the heartbeat system in depth in our heartbeat scheduling guide. Here is the short version of when to pick which.

Use cron when the task must happen at a specific time. Daily standup prep at 8:45 AM. Weekly analytics digest on Friday at 4 PM. Monthly billing report on the 1st. The trigger is the clock, not a condition.

Use heartbeat when the task is condition-based monitoring. Check if new emails arrived. See if disk space dropped below 10%. Look for overdue tasks. The trigger is a state change, not a time.

Use both together for the best coverage. Your heartbeat monitors cron health, re-triggering jobs that missed their window. Your cron handles the fixed-schedule work that heartbeats are not designed for.

              Cron                                    Heartbeat
Trigger       Clock time                              Interval check
Session       Isolated (default)                      Main (default)
Context       Clean slate per run                     Full conversation history
Best for      Reports, briefings, scheduled actions   Monitoring, alerts, maintenance
Cost pattern  Predictable (runs at set times)         Variable (runs whether needed or not)

An important distinction: do not schedule cron jobs for tasks where “roughly every N hours” is good enough. A heartbeat checking analytics data every 60 minutes costs less and produces the same practical result as a cron running 24 times a day, because the heartbeat stays silent when nothing changed.

Three Production Recipes

These are production-ready configurations you can adapt to your own setup. Each includes the CLI command and notes on why specific choices were made.

Recipe 1: Daily Standup Prep (Weekdays at 8:45 AM)

This job runs 15 minutes before standup and posts a summary to Slack. It checks completed tasks, open blockers, and calendar conflicts for the day.

openclaw cron add \
  --name "Standup prep" \
  --cron "45 8 * * 1-5" \
  --tz "America/New_York" \
  --session isolated \
  --message "Prepare standup summary: 1) List tasks completed since yesterday's standup from memory files. 2) List open blockers or items waiting on others. 3) Check today's calendar for meetings that conflict with deep work blocks. Format as bullet points under three headers: Done, Blocked, Today's Schedule." \
  --model "anthropic/claude-haiku" \
  --announce \
  --channel slack \
  --to "channel:C-STANDUP"

Claude Haiku is the right model here because the task is structured extraction, not reasoning. The model reads memory files, formats bullets, and delivers. Haiku handles it at a fraction of the cost of a full model. Each run costs roughly $0.002 to $0.005.

The payload is specific. “Prepare standup summary” alone would produce a rambling paragraph. The numbered instructions with explicit output format (“bullet points under three headers”) consistently produce clean, scannable output.

Recipe 2: Weekly Analytics Digest (Fridays at 4 PM)

openclaw cron add \
  --name "Weekly analytics" \
  --cron "0 16 * * 5" \
  --tz "America/New_York" \
  --session "session:analytics-weekly" \
  --message "Weekly analytics digest: 1) Fetch this week's website traffic summary. 2) Compare to last week's numbers (check your previous run notes in this session). 3) Highlight the top 3 performing pages and any page with traffic drop exceeding 20%. 4) Note one actionable recommendation based on the data. Keep the total summary under 300 words." \
  --model "anthropic/claude-sonnet-4-6" \
  --thinking low \
  --announce \
  --channel slack \
  --to "channel:C-ANALYTICS"

This uses a named persistent session (session:analytics-weekly) so each Friday’s run can reference prior weeks. The agent builds a running comparison over time without you maintaining a separate data store.

Claude Sonnet 4.6 is the right choice here because the task requires judgment: identifying meaningful traffic changes versus noise, and generating a recommendation. Haiku would produce the numbers but miss the interpretation.

The “under 300 words” constraint prevents the agent from writing a dissertation when a concise digest is what you want before the weekend.

Recipe 3: Monthly Billing Audit (1st of Month at 6 AM)

openclaw cron add \
  --name "Monthly billing audit" \
  --cron "0 6 1 * *" \
  --tz "America/New_York" \
  --session isolated \
  --message "Monthly billing audit: 1) Calculate total API token spend across all agents for the previous month by reading the cost logs. 2) Break down by agent and by model. 3) Flag any agent whose spend increased more than 25% month-over-month. 4) Flag any single cron job costing more than $5/month. 5) Recommend specific cost optimizations: model downgrades for simple tasks, interval increases for over-frequent jobs, or lightContext flags for jobs not using conversation history. Format as a table followed by recommendations." \
  --model "anthropic/claude-sonnet-4-6" \
  --thinking high \
  --announce \
  --channel telegram \
  --to "-1001234567890"

This fires at 6 AM on the first of the month, giving you a cost report before the workday starts. The --thinking high flag enables extended reasoning because cost optimization recommendations require weighing trade-offs between model capability and budget.

This runs against all agents, not just the one executing the cron. The payload instructs the agent to read cost logs across the deployment, which works because the cron job has file system access to the shared log directory.

Managing Cron Jobs

After creating jobs, these commands handle the lifecycle:

# List all jobs with status
openclaw cron list

# Check scheduler health
openclaw cron status

# Edit an existing job's prompt
openclaw cron edit <jobId> --message "Updated instructions here"

# Change the model for a job
openclaw cron edit <jobId> --model "anthropic/claude-haiku"

# Manually trigger a job (for testing)
openclaw cron run <jobId>

# View recent run history
openclaw cron runs --id <jobId> --limit 10

# Delete a job
openclaw cron remove <jobId>

When testing a new cron job, use openclaw cron run <jobId> to trigger it immediately instead of waiting for the schedule. Once the output looks right, let the schedule take over.

Cost Estimation

Every cron run is a model inference call. The cost depends on the model, context size, and whether you use lightContext. Here are rough estimates:

Job Type            Model               Tokens Per Run   Cost Per Run   Monthly Cost (Daily)
Simple extraction   Claude Haiku        2,000-5,000      $0.002         $0.06
Structured report   Claude Sonnet 4.6   10,000-25,000    $0.04          $1.20
Deep analysis       Claude Opus 4.6     25,000-60,000    $0.30          $9.00
Simple extraction   GPT-5.4             2,000-5,000      $0.003         $0.09

The two highest-impact cost controls:

  1. Use the cheapest model that handles the task. Standup prep and memory cleanup do not need Opus. Drop to Haiku and cut costs by 95%.
  2. Add lightContext: true to jobs that do not reference your bootstrap files (soul.md, agents.md). This reduces the input token overhead from roughly 100K to under 5K tokens per run.

A common mistake: scheduling an every-5-minute job with a full-context model. At $0.04 per run, that is $0.48/hour, $11.52/day, $345/month for a single job. Always estimate monthly cost before creating interval-based cron jobs.
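The arithmetic is worth automating before you create any interval job. A small estimator (an illustrative helper, not part of the CLI):

```python
def monthly_cost(cost_per_run: float, interval_minutes: int,
                 hours_per_day: float = 24, days: int = 30) -> float:
    """Estimate monthly spend for an interval-based cron job."""
    runs_per_day = hours_per_day * 60 / interval_minutes
    return cost_per_run * runs_per_day * days

# The every-5-minute, full-context job from the text: 288 runs/day.
print(round(monthly_cost(0.04, 5), 2))   # 345.6

# The same cadence on Haiku at $0.002/run:
print(round(monthly_cost(0.002, 5), 2))  # 17.28
```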

Error Handling and Retries

OpenClaw handles failures automatically with exponential backoff. For one-shot jobs, it retries up to 3 times: 30 seconds, then 1 minute, then 5 minutes. For recurring jobs, it applies backoff across runs: 30s, 1m, 5m, 15m, 60m between retries. The backoff resets after the next successful run.
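The retry schedule can be expressed as a small lookup. This sketch mirrors the delays described above; the helper itself is illustrative, not the Gateway's actual code:

```python
from typing import Optional

ONE_SHOT_BACKOFF = [30, 60, 300]              # 30s, 1m, 5m -- then give up
RECURRING_BACKOFF = [30, 60, 300, 900, 3600]  # 30s, 1m, 5m, 15m, 60m

def retry_delay(consecutive_failures: int, recurring: bool) -> Optional[int]:
    """Seconds to wait before the next retry, or None when retries stop."""
    if recurring:
        # Recurring jobs keep backing off, capped at the last step;
        # the counter resets after the next successful run.
        idx = min(consecutive_failures, len(RECURRING_BACKOFF)) - 1
        return RECURRING_BACKOFF[idx]
    if consecutive_failures > len(ONE_SHOT_BACKOFF):
        return None  # one-shot jobs retry at most 3 times
    return ONE_SHOT_BACKOFF[consecutive_failures - 1]

print(retry_delay(1, recurring=False))  # 30
print(retry_delay(4, recurring=False))  # None
print(retry_delay(7, recurring=True))   # 3600
```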

Permanent errors (authentication failures, invalid configuration) disable the job immediately rather than retrying.

Check run history when something seems off:

openclaw cron runs --id <jobId> --limit 5

The most common failure cause is a vague payload. “Check on things” produces unreliable behavior that often errors out when the agent cannot determine a concrete action. Specific instructions with numbered steps and explicit output formats fail far less often.

Frequently Asked Questions

What is the difference between openclaw cron and heartbeat?

Cron fires at exact times you specify (9 AM Monday, first of the month). Heartbeat fires on a rolling interval (every 30 minutes) and evaluates conditions. Use cron for scheduled reports and actions. Use heartbeat for monitoring and alerts. See our heartbeat guide for the full comparison.

How do I set up a daily cron job in OpenClaw?

Run openclaw cron add --name "Job name" --cron "0 9 * * *" --session isolated --message "Your task instructions". That creates a job at 9 AM daily. Add --tz "America/New_York" to set the timezone and --announce --channel slack --to "channel:ID" to route output somewhere.

How much do OpenClaw cron jobs cost per month?

It depends on the model and frequency. A daily Haiku job costs about $0.06/month. A daily Sonnet job costs about $1.20/month. A daily Opus job costs about $9/month. The fastest way to overspend is running high-frequency interval jobs with expensive models. Estimate before you deploy.

Can I use a cheaper model just for cron jobs?

Yes. Pass --model "anthropic/claude-haiku" when creating or editing a job. This overrides the agent’s default model for that specific cron run. Your normal conversations keep using whatever model you configured.

Where are cron jobs stored?

Jobs persist at ~/.openclaw/cron/jobs.json. Run history lives at ~/.openclaw/cron/runs/<jobId>.jsonl. Both survive Gateway restarts. You can manually edit jobs.json, but only when the Gateway is stopped.

How do I debug a cron job that is not running?

Check in this order: 1) Is the Gateway running? (openclaw daemon status) 2) Is the timezone correct? VPS defaults to UTC. 3) Is the cron expression valid? Test at crontab.guru. 4) Check run history for errors: openclaw cron runs --id <jobId> --limit 5. 5) Was the job disabled by a permanent error? Re-enable it after fixing the config.

Can cron jobs deliver messages to Slack, Telegram, or Discord?

Yes. Add --announce --channel slack --to "channel:C1234567890" when creating the job. Supported channels include Slack, Telegram (including forum topics), WhatsApp, and Discord. Each platform has its own target format.

How many cron jobs can I run simultaneously?

There is no hard limit. The practical constraint is your token budget and the maxConcurrentRuns setting (default: 1, meaning jobs queue rather than overlap). Five to twenty jobs is manageable without performance issues. Beyond that, review your cost estimates carefully.


Key Takeaways

  • Cron handles exact-time scheduling (daily briefings, weekly reports, monthly audits). Heartbeat handles interval-based monitoring. Use both together for complete coverage.
  • Make payloads specific: numbered steps, explicit output format, word limits. Vague instructions produce unreliable results.
  • Use the cheapest model that handles the task. Haiku at $0.002 per run handles structured extraction. Save Opus for jobs requiring judgment.
  • Set the timezone explicitly with --tz on every job. Timezone mismatches are the number one cause of “my cron never fires.”
  • Test jobs with openclaw cron run <jobId> before relying on the schedule. Fix the output format while you can see results immediately.

Last Updated: Apr 16, 2026
