5 Events Every Backend Should Be Alerting On
March 19, 2026
Your backend probably logs everything and alerts on nothing. Logs are great for debugging after the fact, but they won't tell you the moment something breaks. You find out when a user complains, or when you happen to check a dashboard.
Here are five things worth getting a push notification for, and how to set them up.
1. Payment failures
A failed payment means someone tried to give you money and couldn't. Every minute you don't know about it is a minute the customer is frustrated.
import { ApiAlerts } from 'apialerts-js'

const alerts = new ApiAlerts(process.env.API_ALERTS_KEY)

async function handlePayment(order) {
  try {
    await chargeCustomer(order)
  } catch (err) {
    // Alert at the point of failure, with a link straight to the payment
    await alerts.send({
      message: `Payment failed: ${order.id} - ${err.message}`,
      channel: 'payments',
      link: `https://dashboard.stripe.com/payments/${order.paymentId}`,
      tags: ['payment', 'error'],
    })
    throw err
  }
}
Don’t wait for your Stripe webhook to tell you. Alert at the point of failure in your own code. You’ll know before Stripe’s retry logic even kicks in.
For a deeper dive on this, see our guide on alerting on failed Stripe webhooks.
2. Failed deployments
A deployment that fails silently is worse than one that fails loudly. Your team thinks the new code is live, but the old version is still running.
If you’re using GitHub Actions:
- name: Deploy
  run: ./deploy.sh

- name: Notify
  if: success() || failure()
  uses: apialerts/notify-action@v2
  with:
    api_key: ${{ secrets.API_ALERTS_KEY }}
    channel: 'releases'
    message: ${{ job.status == 'success' && 'Deployed to production' || 'Deploy failed' }}
    tags: 'deploy,production'
    link: ${{ job.status == 'success' && 'https://your-app.com' || format('{0}/{1}/actions/runs/{2}', github.server_url, github.repository, github.run_id) }}
The if: success() || failure() is important. You want to know either way. On success, the link goes to your app. On failure, it goes straight to the CI run.
We covered this in detail for mobile builds in our CI/CD build alerts guide.
3. Cron job failures
Cron jobs are the most neglected part of any backend. They run in the background, and when they fail, nothing happens. No error page. No user complaint. Just silence, until someone notices the nightly backup hasn’t run in two weeks.
# Alert on failure only
0 3 * * * /usr/local/bin/backup.sh || apialerts send -m "Nightly backup failed on $(hostname)" -c ops -g cron,backup
Or alert on both success and failure for jobs where silence is ambiguous:
# Did it run, or did it not run at all?
0 3 * * * /usr/local/bin/backup.sh && apialerts send -m "Backup complete" -c ops || apialerts send -m "Backup failed" -c ops -g error
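If your scheduled jobs run inside Python rather than plain cron, the same idea can be wrapped in a decorator. A minimal sketch, with a `send` callback standing in for whatever alert client you use (the decorator and its names are illustrative, not part of any SDK):

```python
import functools
import socket

def alert_on_failure(send, channel='ops', tags=('cron',)):
    """Decorator: if the wrapped job raises, send a one-line alert, then re-raise."""
    def wrap(job):
        @functools.wraps(job)
        def run(*args, **kwargs):
            try:
                return job(*args, **kwargs)
            except Exception as err:
                # One line of context: which job, which host, what went wrong
                send(
                    message=f'{job.__name__} failed on {socket.gethostname()}: {err}',
                    channel=channel,
                    tags=list(tags),
                )
                raise
        return run
    return wrap

# Usage: pass your alert client's send method; a recording list here for illustration
sent = []

@alert_on_failure(send=lambda **event: sent.append(event), channel='ops')
def nightly_backup():
    raise RuntimeError('disk full')

try:
    nightly_backup()
except RuntimeError:
    pass
# sent now holds one alert describing the failure
```

Because the decorator re-raises, the job still exits non-zero, so any existing monitoring keeps working alongside the alert.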
We wrote a full guide on cron job failure alerting with examples in Python and GitHub Actions.
4. Authentication anomalies
Failed login attempts, password resets, and account lockouts are security-relevant events. You don’t need a full SIEM to stay aware. A simple alert on unusual patterns is enough.
from apialerts import ApiAlerts

client = ApiAlerts('your-api-key')

def handle_login(email, password):
    user = authenticate(email, password)
    if not user:
        failed_attempts = increment_failed_attempts(email)
        if failed_attempts == 5:
            client.send(
                message=f'Account locked: {email} ({failed_attempts} failed attempts)',
                channel='security',
                tags=['auth', 'lockout'],
            )
        return None
    return user
You’re not building an intrusion detection system. You’re just making sure you know when something suspicious happens so you can check it.
5. Quota and resource limits
Running out of disk space, hitting API rate limits, or exhausting database connections are the kind of failures that cascade. By the time your app crashes, the root cause is already hours old.
package main

import (
    "fmt"
    "os"

    "github.com/apialerts/apialerts-go"
)

var client = apialerts.New("your-api-key")
var diskAlertSent bool

func checkDiskSpace() {
    usage := getDiskUsage()
    hostname, _ := os.Hostname()
    if usage > 90 && !diskAlertSent {
        client.Send(apialerts.Event{
            Message: fmt.Sprintf("Disk usage at %d%% on %s", usage, hostname),
            Channel: "ops",
            Tags:    []string{"disk", "warning"},
        })
        // Remember we've alerted so repeated checks don't spam the channel
        diskAlertSent = true
    } else if usage < 85 {
        // Re-arm only once usage drops well below the threshold (hysteresis)
        diskAlertSent = false
    }
}
Alert at the warning threshold (90%), not the failure threshold (100%). By the time you’re at 100%, it’s too late to prevent downtime.
Other resource limits worth alerting on:
- Database connection pool: connections above 80% of the pool maximum
- API rate limits: approaching a third-party API's quota
- Memory usage: sustained high memory before the OOM killer arrives
- Queue depth: background job queues growing faster than they drain
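All of these follow the same shape as the disk check: a numeric reading, a warning threshold, and a lower re-arm threshold so the alert fires once per incident rather than on every check. A minimal sketch of that hysteresis pattern (the names and the `send` callback are illustrative; plug in your alert client):

```python
def check_threshold(name, value, warn_at, clear_at, armed, send):
    """Fire one alert when value crosses warn_at; re-arm only after it
    drops below clear_at, so a reading hovering near the line doesn't spam."""
    if value >= warn_at and armed.get(name, True):
        send(message=f'{name} at {value} (warning threshold {warn_at})', channel='ops')
        armed[name] = False  # suppress further alerts until re-armed
    elif value < clear_at:
        armed[name] = True

# Usage with a recording stand-in for a real alert client
sent = []
armed = {}
for reading in (70, 85, 92, 95, 91, 80, 93):  # simulated queue-depth samples
    check_threshold('queue depth', reading, warn_at=90, clear_at=85,
                    armed=armed, send=lambda **e: sent.append(e))
# Two incidents: 92 fires, 95 and 91 are suppressed, 80 re-arms, 93 fires again
```

The gap between `warn_at` and `clear_at` is the important part: without it, a metric oscillating around the threshold sends an alert on every check.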
The pattern
Every example above follows the same structure:
- Detect the event in your existing code
- Send a one-line alert with context (what happened, where, and a link to investigate)
- Route it to the right channel so the right person sees it
The alert should take less code than the log statement you’d write anyway. If setting up alerting feels heavy, you’re overengineering it.
Getting started
Pick one event from this list, whichever keeps you up at night, and add alerting today. You can set up API Alerts in under 5 minutes with any of our SDKs: