On-Call and Incident Alerts for Small Teams

The problem

Your monitoring tool fires. Datadog says a service is down. Sentry caught an unhandled exception in production. UptimeRobot’s health check failed. The alert lands in a Slack channel with 400 unread messages. Nobody sees it for 45 minutes because the person who should have seen it is in a meeting, and everyone else assumed someone else was watching.

For a 50-person company, PagerDuty solves this. Escalation policies, rotation schedules, acknowledgment workflows, incident timelines. But PagerDuty starts at $21 per user per month. A 15-person team pays $315 a month before anyone sends a single alert. For a startup that ships one product and has three engineers who take turns being on-call, that is a hard budget line to justify.

The alternative most small teams actually use: a Slack channel, a verbal agreement about who checks it, and the hope that the right person is looking at the right time. It works until it doesn’t. And when it doesn’t work, the failure is silent: the alert fired, it landed in Slack, nobody noticed, and the incident lasted an extra hour.

The gap is real. PagerDuty is too much tooling and too much money for a team of 5 to 30 people. A noisy Slack channel nobody watches is too little. What small teams actually need is something in between: a push notification on the phone of whoever is on-call, the moment a monitoring tool fires, with a flat per-workspace price that does not punish the team for growing.

The general shape of the fix

The pattern is the same regardless of which monitoring tool fires the alert:

  1. Monitoring tool detects the problem (Datadog, Sentry, UptimeRobot, BetterStack, Checkly, Pingdom, Grafana, New Relic, or anything that can send a webhook or a Zapier trigger).
  2. Monitoring tool sends a webhook or triggers a Zapier Zap with the alert payload (severity, service name, description, link to the incident).
  3. API Alerts receives the event and pushes a notification to every phone in the workspace that has the mobile app installed. Whoever is on-call, whoever is watching, whoever is near their phone, they all get the buzz at the same time.
  4. The notification includes a tap-through link to the incident in the original tool, so triage starts on the detail page, not on a dashboard home screen.

Split critical and non-critical monitors into separate channels. A prod-down channel for “service is down, wake someone up.” A prod-warn channel for elevated latency or soft degradation. A noise channel for the broad monitoring stream. Each channel is its own feed in the mobile app, so when you open the notification you land on a clean stream of that one severity level, not a mixed pile you have to triage before triaging.
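To make the split concrete: the only thing that changes between severities is the "channel" field in the event payload. A sketch using the field names from the integration examples below; the channel names, messages, and links are illustrative:

{
  "channel": "prod-down",
  "message": "checkout-service is DOWN",
  "tags": ["monitor", "error"],
  "link": "https://example.com/incident/123"
}

{
  "channel": "prod-warn",
  "message": "checkout-service p95 latency above 800ms",
  "tags": ["monitor", "warning"],
  "link": "https://example.com/incident/124"
}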

This is the layer between your monitoring tools and whoever is on-call tonight. It does not replace Datadog or Sentry (they detect the problems). It replaces the manual glue that relies on someone watching Slack, with a push notification that lands on the phone of every engineer in the workspace.

Implementation by monitoring tool

Datadog

Datadog supports webhook integrations natively. Create a webhook that posts to your API Alerts channel endpoint when a monitor triggers.

Setup:

  1. In Datadog, go to Integrations > Webhooks.
  2. Add a new webhook pointing to your API Alerts channel’s API endpoint.
  3. Configure the payload template:
{
  "channel": "incidents",
  "message": "$EVENT_TITLE",
  "tags": ["datadog", "$ALERT_TYPE"],
  "link": "$LINK"
}
  4. Attach the webhook to your Datadog monitors, as shown below. When a monitor triggers, a push notification lands on every phone in the workspace.
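Attaching means mentioning the webhook by name in the monitor's notification message. The @webhook- mention and the conditional block are standard Datadog monitor syntax; the webhook name here assumes you called it api-alerts-incidents in step 2:

{{#is_alert}}
@webhook-api-alerts-incidents
{{/is_alert}}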

Channel suggestion: send error events to a prod-critical channel every on-call engineer stays subscribed to. Send warning and recovery events to a separate prod-warn channel, so the critical feed stays clean and engineers not on rotation can leave the warn channel until their shift comes back around.

Sentry

Sentry can send webhooks on issue creation, state changes, and error spikes. Use Sentry’s webhook integration or its Zapier integration for more granular control.

Webhook setup:

  1. In Sentry, go to Settings > Integrations > Webhooks.
  2. Add your API Alerts channel endpoint as the callback URL.
  3. Select the event types you want to forward (Issue Created, Issue Resolved, Error, etc.).

Zapier setup (more flexible):

  1. Create a Zap with Sentry as the trigger (New Issue, or New Event by Filter).
  2. Add API Alerts as the action (Send Event).
  3. Map the Sentry issue title to the API Alerts message, add tags for severity, and include the Sentry issue link (see the sketch after these steps).
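What that mapping produces on the API Alerts side, sketched with illustrative values (a real Zap fills the message and link from the Sentry trigger fields):

POST https://api.apialerts.com/event
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

{
  "channel": "prod-critical",
  "message": "TypeError: Cannot read properties of undefined",
  "tags": ["sentry", "error"],
  "link": "https://acme.sentry.io/issues/123456/"
}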

Channel suggestion: send new error or fatal issues to a prod-critical channel every on-call engineer stays subscribed to. Send resolved issues to a separate prod-resolved channel you check as a follow-up. Sentry’s built-in alert rules can pre-filter so only genuinely critical issues reach API Alerts at all.

UptimeRobot

UptimeRobot monitors URLs and ports and fires alerts when they go down or come back up. It supports webhooks natively.

Setup:

  1. In UptimeRobot, go to My Settings > Alert Contacts.
  2. Add a new Webhook alert contact.
  3. Set the URL to your API Alerts channel endpoint with the payload:
POST https://api.apialerts.com/event
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

{
  "channel": "uptime",
  "message": "*monitorFriendlyName* is *alertTypeFriendlyName*",
  "tags": ["uptime", "*alertType*"]
}
  4. Attach this alert contact to your monitors.

Channel suggestion: send down alerts to an uptime-down channel and recovery alerts to a separate uptime-resolved channel you scan when you get a chance. If you monitor 10+ URLs, keep uptime events out of the application-error channels so neither feed drowns the other.

BetterStack (formerly Better Uptime)

BetterStack has native webhook support and a Zapier integration. For simple alerting, the webhook is enough.

Webhook setup:

  1. In BetterStack, go to Integrations > Webhooks.
  2. Add your API Alerts channel endpoint.
  3. BetterStack sends a JSON payload with incident details (name, URL, status, started_at); the sketch below shows the event shape you are mapping toward.
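The exact incoming payload varies with BetterStack's plan and API version, so treat this as the target shape rather than a drop-in template. The event you want arriving in API Alerts, with illustrative values:

{
  "channel": "uptime-down",
  "message": "https://example.com is down",
  "tags": ["betterstack", "down"],
  "link": "https://uptime.betterstack.com/incidents/123"
}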

Channel suggestion: same as UptimeRobot. Down events go to an uptime-down channel, recoveries to a separate uptime-resolved channel.

Checkly

Checkly runs API checks and browser checks. It supports webhook alert channels.

Setup:

  1. In Checkly, go to Alert Settings > Alert Channels.
  2. Add a Webhook channel with your API Alerts endpoint.
  3. Configure the payload template to include the check name, result, and link, along the lines of the sketch below.
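A sketch of that template. The {{ }} placeholders follow Checkly's webhook template variables; verify the exact names in Checkly's template editor before saving:

{
  "channel": "uptime-down",
  "message": "{{ALERT_TITLE}}",
  "tags": ["checkly", "{{ALERT_TYPE}}"],
  "link": "{{RESULT_LINK}}"
}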

Pingdom

Pingdom supports webhook integrations through its alerting system.

Setup:

  1. In Pingdom, go to Alerting > Integrations.
  2. Add a new Webhook integration pointing to your API Alerts channel endpoint.
  3. Attach it to the checks you want to forward to API Alerts.

Grafana

Grafana’s alerting system supports webhook contact points natively. If you already have Grafana dashboards with alert rules, adding API Alerts as a contact point takes two minutes.

Setup:

  1. In Grafana, go to Alerting > Contact Points.
  2. Add a new contact point of type Webhook.
  3. Set the URL to your API Alerts channel endpoint with the appropriate headers (sketched after these steps).
  4. Assign the contact point to your alert rules or notification policies.
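A sketch of the contact point settings. The authorization fields below exist in recent Grafana versions; labels may differ slightly in older releases:

Type: Webhook
URL: https://api.apialerts.com/event
Authorization header scheme: Bearer
Authorization header credentials: YOUR_API_KEY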

New Relic

New Relic supports webhook notification channels for alert policies.

Setup:

  1. In New Relic, go to Alerts > Notification Channels.
  2. Add a Webhook channel with your API Alerts endpoint (payload sketched below).
  3. Attach it to the alert policies you want to route.
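Legacy New Relic webhook channels accept a custom JSON payload. A sketch; the $-placeholders follow the legacy custom-payload variables, and the newer workflows UI uses a different template syntax, so confirm the names against your account:

{
  "channel": "prod-critical",
  "message": "$CONDITION_NAME",
  "tags": ["newrelic", "$CURRENT_STATE"],
  "link": "$INCIDENT_URL"
}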

Any tool with webhook support

The pattern is the same for any monitoring tool that can send a webhook: point the webhook at your API Alerts channel endpoint and format the payload with a message and optional tags. If the tool supports Zapier, use the API Alerts Zapier integration for a no-code setup.
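For reference, the bare request in full, using the same endpoint and headers as the UptimeRobot example above (channel, message, and link values are illustrative):

POST https://api.apialerts.com/event
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

{
  "channel": "incidents",
  "message": "checkout-service failed its health check",
  "tags": ["custom-monitor"],
  "link": "https://status.example.com/incidents/42"
}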

Why this is different from just using Slack

Most small teams already have their monitoring tools posting to a Slack channel. That works until:

  • The channel gets noisy and critical alerts drown in warnings
  • The on-call person is away from Slack (commuting, sleeping, in a meeting with notifications muted)
  • Nobody knows whether the person who should have seen the alert actually saw it

API Alerts pushes the alert directly to every phone in the workspace the moment it fires. The mobile app is the on-call mechanism: a clean feed per channel, push notifications that bypass the Slack noise, and a tap-through to the incident detail. Flat per-workspace pricing means adding the fifth or fifteenth engineer to the on-call rotation does not change your bill.

Two ways to connect:

  • Webhooks for connecting any monitoring tool that can send HTTP requests
  • Zapier for no-code connections to 8,000+ apps, including most monitoring tools

Get started in five minutes. Create a free workspace, set up a channel for incidents, point your monitoring tool’s webhook at it, install the API Alerts mobile app, and let the on-call rotation carry it in their pocket.