Device Alerts

Anava's Device Alert System provides structured error reporting and health monitoring across your entire camera fleet. Every device continuously reports its status, enabling proactive maintenance and rapid issue resolution.

Key Capabilities

| Feature | Description |
|---|---|
| Real-time Alerts | Immediate notification of device issues |
| Severity Classification | CRITICAL, ERROR, WARNING, INFO levels |
| Fleet Dashboard | Aggregate view of all device health |
| Alert History | Full audit trail of device events |
| Auto-resolution Tracking | Know when issues self-heal |

Alert Severity Levels

Severity Definitions

| Level | Response Time | Examples |
|---|---|---|
| CRITICAL | Immediate | Device offline, MQTT hijack attempt, certificate expired |
| ERROR | Within 1 hour | Stream failure, persistent auth failure, disk full |
| WARNING | Within 24 hours | High memory usage, repeated retries, config drift |
| INFO | No action needed | Successful restart, config applied, version update |
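The response-time targets above can be encoded as a small SLA lookup, for example to flag unacknowledged alerts that have exceeded their window. This is an illustrative sketch, not part of any Anava SDK; the `sla_breached` helper and its signature are our own.

```python
from datetime import datetime, timedelta, timezone

# Response-time targets from the severity table above (illustrative mapping).
SLA_WINDOWS = {
    "CRITICAL": timedelta(0),       # immediate
    "ERROR": timedelta(hours=1),
    "WARNING": timedelta(hours=24),
    "INFO": None,                   # no action needed
}

def sla_breached(severity, raised_at, now=None):
    """Return True if an unacknowledged alert has exceeded its response window."""
    window = SLA_WINDOWS.get(severity)
    if window is None:
        return False                # INFO alerts never breach
    now = now or datetime.now(timezone.utc)
    return now - raised_at > window
```

A scheduler can run this check periodically and promote breached alerts to the Escalated state.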

Alert Categories

Connectivity Alerts

| Code | Severity | Description |
|---|---|---|
| CONN_001 | CRITICAL | MQTT connection lost (> 5 minutes) |
| CONN_002 | ERROR | MQTT connection unstable (> 3 reconnects/hour) |
| CONN_003 | WARNING | Network latency high (> 500 ms) |
| CONN_004 | INFO | Connection restored |
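On the monitoring side, the CONN_001 and CONN_002 conditions amount to applying the table's thresholds to heartbeat and reconnect timestamps. A sketch, assuming you track those events yourself (the function below is illustrative, not an Anava API):

```python
from datetime import datetime, timedelta

OFFLINE_THRESHOLD = timedelta(minutes=5)   # CONN_001: connection lost > 5 min
RECONNECT_LIMIT = 3                        # CONN_002: > 3 reconnects/hour

def connectivity_alert(last_seen, reconnects, now):
    """Classify connectivity state using the thresholds above.

    last_seen  -- datetime of the last successful MQTT heartbeat
    reconnects -- list of datetimes when the device reconnected
    now        -- current time
    """
    if now - last_seen > OFFLINE_THRESHOLD:
        return "CONN_001"
    recent = [t for t in reconnects if now - t <= timedelta(hours=1)]
    if len(recent) > RECONNECT_LIMIT:
        return "CONN_002"
    return None
```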

Security Alerts

| Code | Severity | Description |
|---|---|---|
| SEC_001 | CRITICAL | Certificate validation failed |
| SEC_002 | CRITICAL | Unauthorized broker connection attempt |
| SEC_003 | ERROR | Certificate expiring (< 7 days) |
| SEC_004 | WARNING | Multiple failed auth attempts |

Configuration Alerts

| Code | Severity | Description |
|---|---|---|
| CFG_001 | CRITICAL | Critical configuration drift detected |
| CFG_002 | WARNING | Configuration healed automatically |
| CFG_003 | WARNING | Configuration conflict (repeated drift) |
| CFG_004 | INFO | Configuration updated successfully |

Resource Alerts

| Code | Severity | Description |
|---|---|---|
| RES_001 | ERROR | Memory usage critical (> 90%) |
| RES_002 | WARNING | Memory usage high (> 80%) |
| RES_003 | WARNING | Storage space low (< 100 MB) |
| RES_004 | INFO | Resource usage normalized |
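The resource cut-offs are simple local checks. This sketch mirrors the table's thresholds for memory and storage (illustrative only; metric collection is up to you):

```python
def resource_alerts(memory_pct, free_storage_mb):
    """Map raw resource metrics to the alert codes in the table above."""
    alerts = []
    if memory_pct > 90:
        alerts.append(("RES_001", "ERROR"))      # memory usage critical
    elif memory_pct > 80:
        alerts.append(("RES_002", "WARNING"))    # memory usage high
    if free_storage_mb < 100:
        alerts.append(("RES_003", "WARNING"))    # storage space low
    return alerts
```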

Alert Payload Structure

Every alert follows a consistent JSON structure:

```json
{
  "alertId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "deviceId": "ACCC8EF12345",
  "groupId": "warehouse-east",
  "code": "CONN_001",
  "severity": "CRITICAL",
  "category": "connectivity",
  "message": "MQTT connection lost for 5 minutes",
  "timestamp": "2025-12-19T10:30:00Z",
  "context": {
    "lastConnected": "2025-12-19T10:25:00Z",
    "brokerHost": "mqtt.anava.ai",
    "reconnectAttempts": 12
  },
  "resolved": false,
  "resolvedAt": null
}
```
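A consumer can deserialize this payload into a typed structure before routing it. A minimal sketch; the `DeviceAlert` dataclass below is our own model, not an Anava SDK type:

```python
import json
from dataclasses import dataclass, field

@dataclass
class DeviceAlert:
    alertId: str
    deviceId: str
    code: str
    severity: str
    category: str
    message: str
    timestamp: str
    resolved: bool
    context: dict = field(default_factory=dict)

def parse_alert(raw):
    """Parse the JSON payload shown above, ignoring fields we don't model."""
    data = json.loads(raw)
    keys = DeviceAlert.__dataclass_fields__.keys()
    return DeviceAlert(**{k: v for k, v in data.items() if k in keys})
```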

Fleet Dashboard

The Anava Console provides a fleet-wide view of device health:

```
┌─────────────────────────────────────────────────────────────┐
│ Fleet Health Overview                                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Total Devices: 247        Online: 243 (98.4%)              │
│                                                             │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐     │
│  │ CRITICAL │  │  ERROR   │  │ WARNING  │  │   INFO   │     │
│  │    2     │  │    5     │  │    12    │  │    34    │     │
│  │    🔴    │  │    🟠    │  │    🟡    │  │    🔵    │     │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘     │
│                                                             │
│  Recent Alerts:                                             │
│  • 10:30  CRITICAL  CONN_001  - Lobby Camera 1 offline      │
│  • 10:28  ERROR     SEC_003   - Parking Cam cert expiring   │
│  • 10:15  WARNING   CFG_002   - Dock 3 config healed        │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

Alert Lifecycle

Alert States

| State | Description | Actions Available |
|---|---|---|
| Active | New alert, requires attention | Acknowledge, Resolve, Snooze |
| Acknowledged | User aware, working on it | Resolve, Add Note |
| Escalated | SLA breached, needs attention | Acknowledge, Resolve |
| Resolved | Issue fixed | View History, Reopen |
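The states and transitions in the table can be captured as a small state machine, useful for validating updates before sending them to the API. A sketch derived from the "Actions Available" column (our own encoding, not published Anava code):

```python
# Allowed transitions, read off the table above.
# (Snooze keeps an alert Active, so it is not a state change.)
TRANSITIONS = {
    "Active": {"Acknowledged", "Resolved", "Escalated"},  # Escalated on SLA breach
    "Acknowledged": {"Resolved"},
    "Escalated": {"Acknowledged", "Resolved"},
    "Resolved": {"Active"},                               # Reopen
}

def transition(state, target):
    """Return the new state, or raise if the move is not allowed."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state} to {target}")
    return target
```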

Notification Channels

Alerts can be delivered through multiple channels:

| Channel | CRITICAL | ERROR | WARNING | INFO |
|---|---|---|---|---|
| Console Dashboard | ✓ | ✓ | ✓ | ✓ |
| Email | ✓ | ✓ | Optional | - |
| Slack/Teams | ✓ | ✓ | Optional | - |
| SMS | ✓ | Optional | - | - |
| Webhook | ✓ | ✓ | ✓ | ✓ |

Configure notifications in: Console → Settings → Notifications

Alert Rules

Create custom alert rules to filter or escalate specific conditions:

```yaml
# Example: Escalate offline cameras in critical areas
rule:
  name: "Critical Area Offline"
  condition:
    code: "CONN_001"
    group: ["entrance", "server-room", "executive"]
  action:
    escalate_to: "security-team"
    notify: ["sms", "email"]
    priority: "P1"
```
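A rule like this matches on the alert's code and group before its action fires. A minimal evaluator sketch; the field names follow the YAML above, but the evaluator itself is illustrative, not how the Anava backend is implemented:

```python
def rule_matches(rule, alert):
    """Check an alert dict against a rule's condition block."""
    cond = rule["condition"]
    if "code" in cond and alert["code"] != cond["code"]:
        return False
    if "group" in cond and alert["groupId"] not in cond["group"]:
        return False
    return True
```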

API Access

Query alerts programmatically:

```bash
# Get active alerts
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.anava.ai/v1/alerts?status=active"

# Get alerts for a device
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.anava.ai/v1/devices/ACCC8EF12345/alerts"

# Acknowledge an alert
curl -X POST -H "Authorization: Bearer $TOKEN" \
  "https://api.anava.ai/v1/alerts/a1b2c3d4/acknowledge"
```
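The same calls from Python, here just constructing the authenticated requests with the standard library (the endpoints are those shown above; the `build_request` helper is our own, and you can send the result with `urllib.request.urlopen` or any HTTP client):

```python
import urllib.request

API_BASE = "https://api.anava.ai/v1"

def build_request(path, token, method="GET"):
    """Construct an authenticated request for the alerts API."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        method=method,
        headers={"Authorization": f"Bearer {token}"},
    )

# Mirroring the curl calls above:
active = build_request("/alerts?status=active", "TOKEN")
device = build_request("/devices/ACCC8EF12345/alerts", "TOKEN")
ack = build_request("/alerts/a1b2c3d4/acknowledge", "TOKEN", method="POST")
```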

Best Practices

  1. Set up escalation policies - Don't let critical alerts go unnoticed
  2. Use alert groups - Organize devices by location/function for targeted notifications
  3. Review weekly - Check for patterns in warnings before they become errors
  4. Configure quiet hours - Reduce notification fatigue for non-critical alerts
  5. Integrate with existing tools - Use webhooks to connect to your incident management system
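Practice 4 (quiet hours) can be enforced at the notification layer. A sketch that pages only for CRITICAL alerts overnight; the 22:00-07:00 window, the daytime WARNING cut-off, and the helper itself are illustrative assumptions, not Anava settings:

```python
SEVERITY_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}

def should_notify(severity, hour, quiet_start=22, quiet_end=7):
    """During quiet hours (22:00-07:00 here), only CRITICAL alerts page anyone;
    during the day, notify for WARNING and above."""
    in_quiet = hour >= quiet_start or hour < quiet_end
    if in_quiet:
        return severity == "CRITICAL"
    return SEVERITY_RANK[severity] >= SEVERITY_RANK["WARNING"]
```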

Last updated: December 2025