
Scheduler dashboard explained

How to read Dailybot’s scheduler dashboard—upcoming runs, history, statuses, filters, and troubleshooting—so ops and managers catch failures before they become silent gaps.

how-it-works Ops Manager 5 min read

Scheduled work is easy to forget about until something does not happen. The scheduler dashboard in Dailybot exists so ops and managers can see time as a first-class dimension: what should have fired, what did fire, and what needs attention. This walkthrough focuses on how to read the UI and use it for practical troubleshooting.

Main views: upcoming, history, and active schedules

The upcoming runs view answers the planning question: “What will execute next?” You should see a timeline or list ordered by next execution time, with enough context (workflow name, scope, timezone) to sanity-check overlap. If two heavy jobs stack at the same minute, that is visible here before it becomes a rate-limit problem.
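
That overlap check can be automated. Below is a minimal sketch, assuming the upcoming-runs view can be exported as (workflow name, next run time) pairs — the names, times, and data shape are illustrative, not DailyBot's actual data model:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical export of the upcoming-runs list.
upcoming = [
    ("weekly-rollup", datetime(2024, 5, 3, 9, 0)),
    ("cs-digest", datetime(2024, 5, 3, 9, 0)),
    ("standup-reminder", datetime(2024, 5, 3, 9, 30)),
]

def find_stacked_runs(rows):
    """Group upcoming runs by minute; return minutes where more than one job fires."""
    by_minute = defaultdict(list)
    for name, when in rows:
        by_minute[when.replace(second=0, microsecond=0)].append(name)
    return {minute: names for minute, names in by_minute.items() if len(names) > 1}

print(find_stacked_runs(upcoming))
# Flags the 09:00 minute, where weekly-rollup and cs-digest collide.
```

Two jobs in the same minute is not automatically a problem; it matters when both are heavy or hit the same rate-limited integration.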

Run history is the audit trail: past executions with timestamps, duration hints, and outcome. This is where you confirm whether last night’s digest actually ran or whether only the Slack message succeeded while a downstream step did not.

Active schedules (sometimes grouped in a separate tab or panel) lists the definitions still enrolled: cron expressions, owning workflow, and enable/disable toggles. When someone says “turn off the Friday rollup,” this is where you verify the right schedule—not a similarly named test job.

Reading the timeline

A good timeline encodes three stories at once: what ran (checkmarks and logs), what is next (forward queue), and what failed (errors or alerts). Scan forward for gaps: if “next run” jumps unexpectedly far, investigate pauses, maintenance windows, or disabled workflows.
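
Gap scanning can also be scripted against exported run history. A sketch, assuming you know each job's expected cadence — the timestamps and the five-minute slack are illustrative choices:

```python
from datetime import datetime, timedelta

def find_gaps(run_times, expected_interval, slack=timedelta(minutes=5)):
    """Flag consecutive runs whose spacing exceeds the expected interval plus slack."""
    gaps = []
    for prev, curr in zip(run_times, run_times[1:]):
        if curr - prev > expected_interval + slack:
            gaps.append((prev, curr))
    return gaps

# Hypothetical daily job: the May 3 run is missing from history.
history = [
    datetime(2024, 5, 1, 9, 0),
    datetime(2024, 5, 2, 9, 0),
    datetime(2024, 5, 4, 9, 0),
]
print(find_gaps(history, timedelta(days=1)))
```

The slack parameter absorbs normal scheduler jitter so only genuine gaps are flagged.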

For each entry, note scope: org-wide versus team-scoped jobs behave differently when membership changes. A run tied to a deprecated team may show as skipped until you reassign ownership.

Filtering: agent, team, workflow

Filters turn a busy dashboard into signal. Typical dimensions include:

  • Workflow or automation name — isolate one playbook.
  • Team or channel scope — see only customer-success digests.
  • Agent or integration — separate human check-ins from agent-driven jobs.

Use filters when debugging a single complaint (“my reminder never arrived”) without losing global context for unrelated schedules.
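
The filter dimensions above compose with AND semantics: each filter you set narrows the result further. A sketch of that logic over hypothetical run rows — the field names and status strings mirror this article, not DailyBot's API:

```python
def filter_runs(runs, workflow=None, team=None, agent=None):
    """Keep only runs matching every filter that is set (None = filter not applied)."""
    def matches(run):
        return ((workflow is None or run["workflow"] == workflow)
                and (team is None or run["team"] == team)
                and (agent is None or run["agent"] == agent))
    return [r for r in runs if matches(r)]

# Illustrative run history rows.
runs = [
    {"workflow": "digest", "team": "cs", "agent": "bot", "status": "completed"},
    {"workflow": "digest", "team": "sales", "agent": "bot", "status": "failed"},
    {"workflow": "standup", "team": "cs", "agent": None, "status": "skipped"},
]
print(filter_runs(runs, team="cs"))  # only the customer-success rows
```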

Run statuses decoded

Completed means the scheduler handed off to the automation runner and the workflow reported success end-to-end—still spot-check outputs occasionally, because “green” can hide logic bugs.

Skipped often ties to conditions: the clock fired, but prerequisites were not met (empty cohort, feature flag off, or “no new data since last run”). Skipped is not inherently bad; it becomes a problem only when you expected work to happen.

Failed indicates an error: integration timeout, permission change, malformed payload, or exceeded quota. Open the run detail for error text and correlation IDs your support team can use.

Retrying shows transient-failure handling: the platform may back off and try again. Watch for jobs stuck retrying—usually a sign of a persistent credential or configuration issue, not a flaky network blip.
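
The backoff pattern is worth understanding even if the platform handles it for you. A sketch of capped exponential backoff with jitter — the base delay, factor, and cap are illustrative constants, not DailyBot's actual retry policy:

```python
import random

def backoff_delays(base=30, factor=2, max_attempts=5, max_delay=900):
    """Return per-attempt retry delays in seconds: exponential growth,
    capped at max_delay, plus up to 10% jitter to spread retry load."""
    delays = []
    for attempt in range(max_attempts):
        delay = min(base * factor ** attempt, max_delay)
        delays.append(delay + random.uniform(0, delay * 0.1))
    return delays

print(backoff_delays())  # roughly 30, 60, 120, 240, 480 seconds, each with jitter
```

The key observable: a healthy retry sequence grows its gaps and then stops. A job retrying at a fixed rhythm forever is the "stuck retrying" smell described above.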

Troubleshooting missed or failed runs

Start from history, not from chat. Confirm whether a run exists at all. If there is no row, the schedule may be disabled, the workflow deleted, or the timezone shifted. If there is a skipped row, read the reason before re-running manually.

For failed rows, fix root cause first (token refresh, scope change), then use retry if the product offers it; blind retries without fixes waste quotas and obscure logs.
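
The triage order above (no row, skipped row, failed row) can be written down as a decision function. A sketch, where the row shape and status strings follow the statuses described earlier but are an assumption, not DailyBot's API:

```python
def triage(run):
    """Map a history row (or its absence, run=None) to a first troubleshooting step."""
    if run is None:
        return "check whether the schedule is disabled, deleted, or timezone-shifted"
    status = run["status"]
    if status == "skipped":
        return "read the skip reason before re-running: " + run.get("reason", "n/a")
    if status in ("failed", "retrying"):
        return "fix root cause before retrying: " + run.get("error", "see run detail")
    return "run completed; spot-check outputs"
```

Encoding the order matters: checking for a missing row first prevents the common mistake of re-running a workflow whose schedule was silently disabled.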

Document expected cadence per critical job (owner, business purpose, escalation path). The dashboard is most powerful when paired with a one-page ops note: “If X does not post by 09:05 UTC, check Y.”
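
That ops note can become an automated watchdog. A minimal sketch: the 09:05 UTC deadline echoes the example above, and the function name and inputs are hypothetical — adjust per job:

```python
from datetime import datetime, timezone, time

def overdue(last_run, deadline_utc=time(9, 5), now=None):
    """True if today's expected run has not appeared by the deadline (UTC).
    last_run is the most recent run timestamp from history, or None."""
    now = now or datetime.now(timezone.utc)
    past_deadline = now.timetz() >= deadline_utc.replace(tzinfo=timezone.utc)
    ran_today = last_run is not None and last_run.date() == now.date()
    return past_deadline and not ran_today
```

Run a check like this from any cron or monitoring tool you already trust, and route a true result to the escalation path named in the ops note.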

Habits for managers and ops together

Managers should know where to look and who owns schedules; ops should keep definitions tidy (no duplicate cron jobs doing the same thing). A five-minute weekly scan of failures prevents month-end surprises when everyone assumed automation had been running.

Used consistently, the scheduler dashboard turns invisible background jobs into observable infrastructure—the difference between hoping the bot remembered and knowing it did.

FAQ

What is the scheduler dashboard for?
It is the operational view of scheduled and recent automation runs—what is next, what already ran, and what failed—so teams can audit timing and fix issues without digging through chat logs alone.
How do I find a missed run?
Use run history with filters for timeframe, workflow, team, or agent; compare expected schedule to actual statuses; then open failed or skipped entries for error context and retry options if available.
What does skipped mean versus failed?
Skipped usually means conditions were not met or the run was intentionally bypassed; failed means an error occurred during execution—each needs different troubleshooting steps.