Settings Glossary
This glossary provides definitions for all settings across Anava Skills, Detections, and Groups.
A
Active (Detection)
Type: Boolean Default: true Location: Detection Settings
Whether the Detection is enabled. When false, the Detection does not process events.
Active Monitoring
Type: Object Location: Detection Settings
Configuration for continuous frame capture and temporal analysis. See Learning Mode.
Analysis Schedule
Type: String Default: "24/7" Location: Detection Settings
When the Detection should be active. Values:
- `24/7` - Always active
- `business_hours` - Camera-defined business hours
- Custom cron expression
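A minimal sketch of the three accepted forms, assuming a camelCase `analysisSchedule` key and a standard five-field cron dialect (neither is confirmed by this glossary):

```typescript
// Hypothetical Analysis Schedule values; the key name and cron dialect are assumptions.
const alwaysActive  = { analysisSchedule: "24/7" };           // default: always active
const businessHours = { analysisSchedule: "business_hours" }; // camera-defined business hours
const weekdaysOnly  = { analysisSchedule: "0 8-18 * * 1-5" }; // custom cron, here hours 8-18, Mon-Fri
```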
Author
Type: String Location: Skill Settings
Creator of the skill. For documentation and attribution.
B
Batch Size
Type: Number (1-10) Default: 1 Location: Active Monitoring Settings
Number of frames sent together for analysis. Higher values provide more context but increase latency.
C
Category
Type: String Location: Skill Settings
Organizational category for the skill. Common values:
- Security
- Safety
- Operations
- Compliance
Confidence Threshold
Type: Number (0-100) Default: 0 Location: Detection Settings
Minimum confidence percentage required to emit ONVIF events. Higher values reduce false positives but may miss valid Detections.
D
Description
Type: String Location: Skill, Profile, Group Settings
Human-readable description of purpose and function.
Draw Bounding Box
Type: Boolean Default: false Location: Object Settings
When true, detected objects are highlighted with bounding boxes in stored images.
E
Enabled (Object)
Type: Boolean Default: true Location: Object Settings
Whether to detect this object type. Disabled objects are not analyzed.
Enabled (Question)
Type: Boolean Default: true Location: Question Settings
Whether this question is active. Disabled questions are not answered.
Enabled (Active Monitoring)
Type: Boolean Default: false Location: Active Monitoring Settings
Whether continuous frame capture is enabled for the profile.
F
Full Analysis Model
Type: String Default: (system default) Location: Detection Settings
AI model used for the full analysis phase. Leave empty to use the system default.
Full Analysis System Prompt
Type: String Location: Skill Settings
System context provided to AI for full analysis. Establishes role and guidelines.
Full Analysis User Prompt
Type: String Location: Skill Settings
Instructions for full analysis. Defines what AI should analyze and how.
G
Group ID
Type: String Location: Group Settings
Unique identifier for the group. Auto-generated.
I
ID (Trigger)
Type: String Location: Trigger Settings
Specific scenario ID for perimeter triggers. References AOAS scenario.
Interval Ms
Type: Number (1000-60000) Default: 5000 Location: Active Monitoring Settings
Milliseconds between frame captures during active monitoring.
M
Max Duration Sec
Type: Number (30-600) Default: 60 Location: Active Monitoring Settings
Maximum duration in seconds for active monitoring window.
Max Images
Type: Number (5-50) Default: 10 Location: Active Monitoring Settings
Maximum number of images to capture during active monitoring.
N
Name
Type: String Location: Skill, Profile, Group, Object, Question Settings
Unique identifier. Used in references and displays.
O
Object
Type: Array Location: Skill Settings
List of objects to detect. Each object has its own configuration.
P
Port
Type: Number (1-8) Location: Trigger Settings
I/O port number for DigitalInput or Manual triggers.
Pre-Filter Criteria
Type: String Location: Skill Settings
Description of what makes an image worth analyzing. Guides AI pre-filtering.
Pre-Filter Model
Type: String Default: (system default) Location: Detection Settings
AI model used for the pre-filter phase. Typically a faster, cheaper model.
Pre-Filter System Prompt
Type: String Location: Skill Settings
System context for pre-filter analysis.
Pre-Filter User Prompt
Type: String Location: Skill Settings
Instructions for the pre-filter. Determines whether an image proceeds to full analysis.
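Taken together, the prompt-related Skill settings might be filled in as in the sketch below; the camelCase key names and the prompt wording are illustrative assumptions, only the setting names and their roles come from this glossary.

```typescript
// Illustrative prompt-related Skill settings; key names and wording are assumptions.
const skillPrompts = {
  preFilterSystemPrompt: "You are a pre-screening assistant for security camera images.",
  preFilterUserPrompt: "Does this image warrant a closer look?",
  preFilterCriteria: "A person or vehicle is visible in the scene.",
  fullAnalysisSystemPrompt: "You are a security analyst reviewing camera images.",
  fullAnalysisUserPrompt: "Describe who is present and what they are doing.",
  responseCriteria: "Answer each configured question; keep answers concise.",
};
```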
Profile (Trigger)
Type: String Location: Trigger Settings
AOAS profile or scenario name for Object/Perimeter triggers. Refers to the camera-side analytics profile, not the Anava Detection.
Push Notification
Type: Boolean Default: false Location: Object, Question Settings
When true, a Detection triggers a mobile push notification.
Q
Question
Type: Array Location: Skill Settings
List of questions for AI to answer about each image.
Question ID
Type: Number Location: Question Settings
Unique identifier for the question within the skill.
Question Text
Type: String Location: Question Settings
The question the AI will answer about the image.
Question Type
Type: Enum Location: Question Settings
Data type of the answer:
- `bool` - True/false
- `int` - Integer number
- `string` - Free text
- `set` - Multiple choice
- `varchar(50)` - Short text
- `varchar(500)` - Long text
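A hypothetical question definition combining the Question settings defined in this glossary; the camelCase key names are assumptions, the setting names are not.

```typescript
// Illustrative Question entry; key names are assumptions.
const hardHatQuestion = {
  questionId: 1,
  questionText: "Is anyone not wearing a hard hat?",
  questionType: "bool",   // answer is true/false
  enabled: true,
  stateful: true,         // generate ONVIF true/false events
  pushNotification: true, // send a mobile push notification on detection
  rapidEligible: false,   // never let the pre-filter skip full analysis
};
```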
R
Rapid Eligible
Type: Boolean Default: false Location: Object, Question Settings
When true, a confident pre-filter detection can skip full analysis.
Resolution Profile
Type: Enum Default: BALANCED Location: Active Monitoring Settings
Image capture resolution:
- `TINY` - 228×128
- `LOW` - 455×256
- `BALANCED` - 640×360
- `HIGH` - 854×480
- `HD_720` - 1280×720
- `FULL_HD_1080` - 1920×1080
- `ULTRA` - 2560×1440
- `CUSTOM` - User-defined
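An illustrative Active Monitoring block using the defaults listed in this glossary; the key names are assumptions.

```typescript
// Sketch of Active Monitoring settings with glossary defaults; key names are assumptions.
const activeMonitoring = {
  enabled: true,                 // default is false
  intervalMs: 5000,              // capture a frame every 5 seconds
  maxDurationSec: 60,            // cap the monitoring window at one minute
  maxImages: 10,                 // capture at most ten images
  batchSize: 1,                  // frames sent together per analysis
  useSingle: false,              // batch mode rather than single-frame mode
  resolutionProfile: "BALANCED", // 640×360 capture resolution
};
```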
Response Criteria
Type: String Location: Skill Settings
Expected response format and structure for AI output.
S
Skill ID
Type: String Location: Detection Settings
Reference to the Skill this Detection uses for analysis.
Stateful
Type: Boolean Default: false Location: Object, Question Settings
When true, generates ONVIF true/false events for VMS integration.
Status (Group)
Type: Enum Default: active Location: Group Settings
Group status:
- `active` - Group is operational
- `inactive` - Group is disabled
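A hypothetical Group combining the Group settings in this glossary; the key names and example values are assumptions.

```typescript
// Illustrative Group definition; key names and values are assumptions.
const dockGroup = {
  groupId: "grp-001",           // auto-generated in practice
  name: "Loading Dock Cameras",
  description: "Detections covering the north loading dock.",
  status: "active",
  tags: ["security", "after-hours"],
  version: "1.0.0",
};
```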
T
Tags
Type: Array of Strings Location: Group Settings
Organizational tags for grouping and filtering.
Talkdown Buffer Ms
Type: Number Default: 30000 Location: Detection Settings
Minimum milliseconds between TTS announcements. Prevents rapid repeats.
Talkdown Enabled
Type: Boolean Default: false Location: Object, Question Settings
When true, a Detection can trigger a text-to-speech announcement.
Talkdown Guidance
Type: String Location: Object, Question Settings
Instructions for generating TTS message content.
Talkdown Priority
Type: Number (1-10) Default: 5 Location: Object, Question Settings
Priority ordering when multiple TTS events occur. Higher number = higher priority.
Talkdown Rule
Type: Object Location: Question Settings
Condition that must be met to trigger TTS. Contains operator and value.
Talkdown Style
Type: String Location: Object, Question Settings
Voice tone and style for TTS announcement.
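As a sketch, the talkdown settings for a single question might look like the following; the key names and the `operator`/`value` shape of the rule are assumptions based on the descriptions above.

```typescript
// Illustrative per-question talkdown settings; key names and rule shape are assumptions.
const talkdownSettings = {
  talkdownEnabled: true,
  talkdownPriority: 8,                               // 1-10, higher wins when events overlap
  talkdownStyle: "calm but firm",
  talkdownGuidance: "State that the area is closed and ask the person to leave.",
  talkdownRule: { operator: "equals", value: true }, // fire TTS when the answer is true
};
```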
Temporal Prompt Template
Type: String Location: Skill Settings
Template for temporal analysis across multiple frames during Active Monitoring.
Trigger
Type: Object Location: Detection Settings
Event source configuration. See Trigger Types.
Trigger Deep Analysis
Type: Boolean Default: false Location: Object, Question Settings
When true, a Detection always runs full analysis regardless of the pre-filter result.
TTS Config
Type: Object Location: Detection Settings
Text-to-speech configuration including model and voice.
TTS Model
Type: String Location: TTS Config
TTS generation model.
TTS Voice
Type: String Default: Kore Location: TTS Config
Voice selection for announcements:
- `Kore` - Authoritative, clear
- `Charon` - Calm, professional
- `Aoede` - Friendly, warm
- `Puck` - Energetic
- `Fenrir` - Deep, commanding
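A minimal TTS Config sketch; the key names and the model identifier are placeholders, only the voice names come from the list above.

```typescript
// Illustrative TTS Config; key names and the model identifier are assumptions.
const ttsConfig = {
  ttsModel: "default-tts-model", // placeholder identifier
  ttsVoice: "Kore",              // authoritative, clear (default)
};
```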
Type (Trigger)
Type: Enum Location: Trigger Settings
Trigger source:
- `None` - Detection disabled
- `Manual` - Virtual input
- `DigitalInput` - Physical I/O
- `Schedule` - Time-based
- `Motion` - Legacy VMD
- `VMD4` - AXIS VMD4 motion detection
- `Object` - AOAS detection (legacy)
- `ObjectAnalytics` - AXIS Object Analytics
- `Perimeter` - AOAS line/zone
- `PerimeterDefender` - Advanced perimeter
- `Pulse` - Camera-based interval scheduling
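A few hypothetical trigger configurations built from the Trigger settings in this glossary (Type, Port, Profile, ID); the key names and example profile names are assumptions.

```typescript
// Illustrative Trigger objects; key names and example values are assumptions.
const manualTrigger    = { type: "Manual", port: 1 };                           // virtual input on port 1
const ioTrigger        = { type: "DigitalInput", port: 2 };                     // physical I/O port 2
const aoaTrigger       = { type: "ObjectAnalytics", profile: "Scenario 1" };    // camera-side analytics profile
const perimeterTrigger = { type: "Perimeter", profile: "Fence Line", id: "1" }; // references an AOAS scenario
```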
U
Use Single
Type: Boolean Default: false Location: Active Monitoring Settings
When true, uses single frame mode instead of batch.
V
Version
Type: String Location: Skill, Group Settings
Version identifier for tracking changes.
View Area
Type: Number (1-8) Default: 1 Location: Detection Settings
Camera stream/view to capture. For multi-sensor cameras or PTZ presets.
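Putting the Detection-level settings together, a full Detection might be sketched as below; the key names, ID formats, and the choice to leave the models empty are illustrative assumptions.

```typescript
// End-to-end sketch of Detection settings from this glossary; key names are assumptions.
const loiteringDetection = {
  active: true,
  skillId: "skill-loitering",                                  // the Skill used for analysis
  trigger: { type: "ObjectAnalytics", profile: "Scenario 1" }, // see Trigger Types
  analysisSchedule: "24/7",
  confidenceThreshold: 70,              // emit ONVIF events only at >= 70% confidence
  preFilterModel: "",                   // empty = system default
  fullAnalysisModel: "",                // empty = system default
  viewArea: 1,
  talkdownBufferMs: 30000,              // at most one TTS announcement per 30 seconds
  ttsConfig: { ttsVoice: "Kore" },
  activeMonitoring: { enabled: false },
};
```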
Related Topics
- Settings Reference - Screen-by-screen reference
- Skills Guide - Complete Skills documentation
- Detections Guide - Complete Detections documentation