Overview
IncidentFox provides 7 specialized log analysis tools that work across any log backend (CloudWatch, Elasticsearch, Coralogix, Splunk, etc.). These tools help identify patterns, anomalies, and correlations in log data.
Tools Available
| Tool | Description |
|---|---|
| log_get_statistics | Get log volume, error rates, and distribution stats |
| log_sample | Sample logs for pattern discovery |
| log_search_pattern | Search for specific patterns using regex |
| log_around_timestamp | Get context around a specific event |
| log_correlate_events | Correlate events across services |
| log_extract_signatures | Identify recurring error patterns |
| log_detect_anomalies | Find unusual log patterns |
log_get_statistics
Get a statistical overview of log data:
- Total log volume
- Error rate percentage
- Log level distribution
- Top error messages
- Throughput over time
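The kind of summary this tool reports can be sketched over raw records. A minimal Python illustration — the record shape and output field names are assumptions for the sketch, not the tool's actual schema:

```python
from collections import Counter

def log_statistics(records):
    """Summarize log volume, error rate, level distribution, and top errors."""
    levels = Counter(r["level"] for r in records)
    total = len(records)
    errors = levels.get("ERROR", 0)
    top_errors = Counter(
        r["message"] for r in records if r["level"] == "ERROR"
    ).most_common(3)
    return {
        "total_volume": total,
        "error_rate_pct": round(100 * errors / total, 1) if total else 0.0,
        "level_distribution": dict(levels),
        "top_errors": top_errors,
    }

logs = [
    {"level": "INFO", "message": "request ok"},
    {"level": "ERROR", "message": "db timeout"},
    {"level": "ERROR", "message": "db timeout"},
    {"level": "WARN", "message": "slow query"},
]
stats = log_statistics(logs)  # 4 records, 50% errors, "db timeout" on top
```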
log_sample
Sample logs to understand patterns without pulling overwhelming amounts of data:
- Initial investigation to understand error types
- Pattern discovery before targeted searches
- Representative data for analysis
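A minimal sketch of what sampling looks like, assuming records are plain dicts; the seeded RNG just makes the sample reproducible, and the tool's real sampling strategy may differ:

```python
import random

def sample_logs(records, k, seed=0):
    """Return a reproducible random sample of at most k log records."""
    rng = random.Random(seed)
    return rng.sample(records, min(k, len(records)))

logs = [{"id": i, "message": f"event {i}"} for i in range(1000)]
subset = sample_logs(logs, 10)  # 10 representative records out of 1000
```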
log_search_pattern
Search for specific patterns using regex:
- Full regex syntax
- Case-insensitive matching
- Multi-line patterns
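The matching behavior can be illustrated with standard regex semantics. A standalone sketch — the helper and log lines are illustrative, not the tool's implementation:

```python
import re

def search_pattern(lines, pattern, flags=re.IGNORECASE | re.MULTILINE):
    """Return lines matching a regex, case-insensitively by default."""
    rx = re.compile(pattern, flags)
    return [ln for ln in lines if rx.search(ln)]

lines = [
    "2024-05-01 ERROR Connection refused to db-1:5432",
    "2024-05-01 INFO request served in 12ms",
    "2024-05-01 error connection TIMEOUT to cache-2",
]
# Case-insensitive: matches both "ERROR Connection" and "error connection".
hits = search_pattern(lines, r"error\s+connection")
```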
log_around_timestamp
Get context around a specific event:
- Logs from the target service
- Related logs from dependent services
- System events in the timeframe
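Conceptually this is a time-window filter around the event. A sketch, assuming each record carries a parsed timestamp (field names are illustrative):

```python
from datetime import datetime, timedelta

def around_timestamp(records, ts, window_s=60):
    """Return records within +/- window_s seconds of ts, sorted by time."""
    lo, hi = ts - timedelta(seconds=window_s), ts + timedelta(seconds=window_s)
    return sorted(
        (r for r in records if lo <= r["time"] <= hi),
        key=lambda r: r["time"],
    )

t0 = datetime(2024, 5, 1, 12, 0, 0)
records = [
    {"time": t0 - timedelta(seconds=90), "message": "too early"},
    {"time": t0 - timedelta(seconds=30), "message": "deploy started"},
    {"time": t0 + timedelta(seconds=5), "message": "errors spiked"},
]
ctx = around_timestamp(records, t0)  # the two records inside the 60s window
```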
log_correlate_events
Correlate events across services using trace IDs or request IDs:
- Timeline of events across services
- Latency breakdown by service
- Error propagation path
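The core of correlation is grouping by a shared ID and ordering by time. A sketch under the assumption that each record carries a `trace_id` and a sortable timestamp:

```python
from collections import defaultdict

def correlate_by_trace(records):
    """Group log records by trace ID and order each timeline by timestamp."""
    timelines = defaultdict(list)
    for r in records:
        timelines[r["trace_id"]].append(r)
    for events in timelines.values():
        events.sort(key=lambda r: r["ts"])
    return dict(timelines)

records = [
    {"trace_id": "t1", "ts": 2, "service": "api", "message": "500 returned"},
    {"trace_id": "t1", "ts": 1, "service": "db", "message": "query timeout"},
    {"trace_id": "t2", "ts": 1, "service": "api", "message": "200 ok"},
]
timelines = correlate_by_trace(records)
# t1 shows the propagation path: db timeout first, then the api 500.
```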
log_extract_signatures
Identify recurring error patterns automatically:
- Clusters similar log messages
- Extracts common patterns (parameterized)
- Ranks by frequency and impact
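One common way to cluster messages into signatures is to parameterize the variable parts (numbers, hex IDs) before counting. A sketch of that idea — not necessarily the tool's actual algorithm:

```python
import re
from collections import Counter

def extract_signatures(messages):
    """Cluster messages by replacing IDs and numbers with placeholders,
    then rank the resulting signatures by frequency."""
    def normalize(msg):
        msg = re.sub(r"\b[0-9a-f]{8,}\b", "<id>", msg)  # hex request IDs
        msg = re.sub(r"\d+", "<num>", msg)              # remaining numbers
        return msg
    return Counter(normalize(m) for m in messages).most_common()

msgs = [
    "timeout after 30s on request 1a2b3c4d5e6f",
    "timeout after 45s on request 9f8e7d6c5b4a",
    "user 42 not found",
]
sigs = extract_signatures(msgs)
# Two distinct timeouts collapse into one signature with count 2.
```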
log_detect_anomalies
Find unusual patterns in log data:
- Unusual log volume spikes/drops
- New error types not seen before
- Abnormal patterns in log messages
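Volume anomalies are often flagged as deviation from a baseline. A simple z-score sketch over per-bucket counts — the tool's detection logic may be more sophisticated:

```python
from statistics import mean, stdev

def detect_volume_anomalies(counts, threshold=2.0):
    """Flag time buckets whose log volume deviates more than `threshold`
    standard deviations from the mean volume."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat volume: nothing anomalous
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

counts = [100, 98, 103, 99, 101, 100, 420, 102]  # per-minute log volume
spikes = detect_volume_anomalies(counts)         # flags the 420 bucket
```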
Configuration
Backend Selection
Configure which log backend to use.
Sampling Settings
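The concrete settings are not reproduced here; a hypothetical configuration sketch, with key names that are illustrative assumptions rather than IncidentFox's documented schema:

```python
# Hypothetical IncidentFox configuration; every key name below is an
# illustrative assumption, not the tool's documented schema.
config = {
    "log_backend": {
        "type": "cloudwatch",       # or "elasticsearch", "coralogix", "splunk"
        "region": "us-east-1",
        "log_group": "/app/production",
    },
    "sampling": {
        "max_lines_per_call": 500,  # cap how much each tool call retrieves
        "strategy": "random",       # e.g. random vs. head/tail sampling
    },
}
```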
Use Cases
Error Investigation
- Start with log_get_statistics to understand volume
- Use log_sample to see representative errors
- Apply log_extract_signatures to identify patterns
- Drill down with log_search_pattern
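The steps above can be sketched as a single workflow. The `call(tool, **params)` interface and all return shapes below are assumptions for the sketch, not IncidentFox's documented API:

```python
def investigate_errors(call, service, window="1h"):
    """Error-investigation workflow: stats -> sample -> signatures -> search.
    `call` is a hypothetical client taking a tool name and keyword params."""
    stats = call("log_get_statistics", service=service, window=window)
    if stats["error_rate_pct"] < 1.0:
        return []  # error rate looks normal; stop early
    sample = call("log_sample", service=service, window=window, limit=50)
    signatures = call("log_extract_signatures", logs=sample)
    # Drill into the most frequent signature with a targeted search.
    top = signatures[0]["pattern"]
    return call("log_search_pattern", service=service, pattern=top)
```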
Incident Timeline
- Identify incident start with log_detect_anomalies
- Get context with log_around_timestamp
- Trace across services with log_correlate_events
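This timeline workflow can be sketched the same way, again assuming a hypothetical `call(tool, **params)` interface and illustrative return shapes:

```python
def build_incident_timeline(call, service):
    """Timeline workflow: find the anomaly, get context, trace across services.
    `call` is a hypothetical client taking a tool name and keyword params."""
    anomalies = call("log_detect_anomalies", service=service, window="24h")
    start = anomalies[0]["timestamp"]  # first anomalous bucket marks onset
    context = call("log_around_timestamp", service=service, timestamp=start)
    trace_id = context[0]["trace_id"]  # pivot on the first event's trace
    return call("log_correlate_events", trace_id=trace_id)
```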
Proactive Monitoring
- Run log_detect_anomalies to find new issues
- Extract signatures to track recurring problems
- Correlate with deployment events
Best Practices
Time Ranges
Start with narrow time ranges and expand if needed:
- Initial investigation: 1 hour
- Pattern analysis: 24 hours
- Trend analysis: 7 days
Filtering
Use service/component filters to reduce noise.
Correlation IDs
Ensure your services log correlation IDs for effective tracing:
- Trace ID (OpenTelemetry)
- Request ID
- Session ID
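As one example, in Python's standard `logging` module a filter can stamp every record with a trace ID so the formatter emits it on each line (the field and logger names here are illustrative):

```python
import logging

class TraceIdFilter(logging.Filter):
    """Attach a trace ID to every LogRecord so downstream tools can
    correlate log lines across services."""
    def __init__(self, trace_id):
        super().__init__()
        self.trace_id = trace_id

    def filter(self, record):
        record.trace_id = self.trace_id
        return True

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(levelname)s trace=%(trace_id)s %(message)s")
)
logger.addHandler(handler)
logger.addFilter(TraceIdFilter("4bf92f3577b34da6"))
logger.warning("payment retry scheduled")
# emits: WARNING trace=4bf92f3577b34da6 payment retry scheduled
```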

