Quickstart Guide

Get AgentXogs running in just a few minutes.

Prerequisites

  • Python 3.10+
  • uv package manager
  • Access to log sources you want to analyze

Installation

  1. Clone the repository
git clone https://github.com/tph-kds/AgentXogs.git
cd AgentXogs
  2. Install dependencies
uv sync
  3. Verify installation
# Verify project setup
python src/setup/verify_setup_project.py
# Or with uv:
uv run src/setup/verify_setup_project.py

# Verify dependencies
python src/setup/verify_dependencies.py
# Or with uv:
uv run src/setup/verify_dependencies.py

Pipeline Usage

Run Full Pipeline Script

# Using the shell script
./scripts/pipeline.sh

# Or directly with Python
uv run src/agentX/pipeline/pipeline.py

Pipeline Options

# Custom config file
uv run src/agentX/pipeline/pipeline.py --config custom-config.json
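A custom config file uses the same keys as the default config.json shown below under Configuration. As an illustration, a hypothetical custom-config.json that narrows the run to a smaller staging window might look like this (whether unspecified keys fall back to the defaults depends on the pipeline's config loading, so check pipeline.py before relying on partial files):

```
{
  "project": "agentx-logs",
  "time_range": "1h",
  "max_logs": 5000,
  "environment": "staging",
  "output_dir": "output-staging"
}
```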

Configuration

Using Default Configuration

The system uses config.json for default settings:
{
  "project": "agentx-logs",
  "time_range": "24h",
  "max_logs": 10000,
  "environment": "production",
  "log_sources_file": "src/agentX/config/log_sources.yaml",
  "log_patterns_file": "src/agentX/config/log_patterns.yaml",
  "baseline_metrics_file": "src/agentX/config/baseline_metrics.json",
  "anomaly_thresholds_file": "src/agentX/config/anomaly_thresholds.yaml",
  "output_dir": "output"
}

Configure Log Sources

Edit src/agentX/config/log_sources.yaml:
sources:
  - type: filesystem
    path: /var/log
    name: app-logs
  - type: elasticsearch
    host: es.example.com
    port: 9200
    index: "auth-service-*"
    name: elasticsearch-logs
  - type: s3
    bucket: my-logs-bucket
    prefix: logs/
    region: us-east-1
  - type: cloudwatch
    log_group: /aws/ec2/my-app
    region: us-east-1
  - type: loki
    url: http://loki:3100
    query: "{job=\"my-app\"}"

Viewing Results

After running the pipeline, check the output/ directory:
# View the summary report
cat output/summary.md

# View detected anomalies
python -m json.tool output/anomalies.json

# View recommendations
python -m json.tool output/recommendations.json

# View metrics
python -m json.tool output/metrics.json
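The output files can also be inspected programmatically. Here is a minimal sketch that reports the top-level shape of an output file; the exact schema of anomalies.json is produced by the pipeline, so inspect a real run before building on it:

```python
import json
from pathlib import Path

def load_report(path: str):
    """Load a pipeline output file and print its top-level shape."""
    data = json.loads(Path(path).read_text())
    if isinstance(data, list):
        print(f"{path}: {len(data)} entries")
    else:
        print(f"{path}: keys = {sorted(data)}")
    return data

# Only attempt the load if the pipeline has already produced output
if Path("output/anomalies.json").exists():
    anomalies = load_report("output/anomalies.json")
```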

Sample Data

The project includes sample data for testing:
# Run with sample metadata
uv run main.py --time-range 24h --environment production --use-sample-data
Sample files are located in metadata/:
  • sample_logs.json - Sample raw logs
  • sample_parsed_events.json - Sample parsed events
  • sample_metrics.json - Sample metrics

Pipeline Stages

The analysis pipeline runs through these stages:
  1. Log Source Discovery - Identifies available log sources
  2. Fetch Logs - Retrieves raw logs from sources
  3. Parse Logs - Normalizes log formats
  4. Aggregate Metrics - Computes statistics
  5. Detect Anomalies - Identifies issues
  6. Generate Hypotheses - Proposes likely root causes
  7. Create Summary - Produces a human-readable report
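The stages above can be sketched as a sequential pipeline over a shared context. This is an illustrative outline only, not the actual implementation; the real stage functions and their signatures live in src/agentX/pipeline/pipeline.py:

```python
from typing import Any, Callable

# Hypothetical stage functions: each takes the accumulated context
# dict and returns an updated one. Bodies are placeholders.
def discover_sources(ctx): ctx["sources"] = ["app-logs"]; return ctx
def fetch_logs(ctx): ctx["raw_logs"] = []; return ctx
def parse_logs(ctx): ctx["events"] = []; return ctx
def aggregate_metrics(ctx): ctx["metrics"] = {}; return ctx
def detect_anomalies(ctx): ctx["anomalies"] = []; return ctx
def generate_hypotheses(ctx): ctx["hypotheses"] = []; return ctx
def create_summary(ctx): ctx["summary"] = "# Summary"; return ctx

STAGES: list[Callable[[dict], dict]] = [
    discover_sources, fetch_logs, parse_logs, aggregate_metrics,
    detect_anomalies, generate_hypotheses, create_summary,
]

def run_pipeline(config: dict) -> dict:
    """Run every stage in order, threading the context through."""
    ctx: dict[str, Any] = {"config": config}
    for stage in STAGES:
        ctx = stage(ctx)
    return ctx
```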

Troubleshooting

No logs found

Ensure your log sources are correctly configured in src/agentX/config/log_sources.yaml.

Permission denied

Make sure the user running the script has read access to log directories.
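A quick way to check read access from the same user that runs the script, sketched here with the /var/log path from the sample log-source configuration:

```python
import os

# os.access checks the current user's permissions; for directories,
# reading entries also requires execute (traverse) permission.
path = "/var/log"
readable = os.access(path, os.R_OK)
print(f"{path} readable: {readable}")
```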

Out of memory

Reduce max_logs in config.json or use a smaller time range.
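You can edit config.json by hand, or lower the limits with a short snippet like the one below. This assumes config.json sits in the current directory and is valid JSON; the specific values are just examples:

```python
import json
from pathlib import Path

def shrink_config(path: str, max_logs: int = 2000, time_range: str = "6h") -> dict:
    """Lower max_logs and time_range in a config file and write it back."""
    p = Path(path)
    cfg = json.loads(p.read_text())
    cfg["max_logs"] = max_logs      # down from the default 10000
    cfg["time_range"] = time_range  # narrower window than the default 24h
    p.write_text(json.dumps(cfg, indent=2))
    return cfg

if Path("config.json").exists():
    shrink_config("config.json")
```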

Dependencies not found

Run uv sync to reinstall all dependencies.

Next Steps

Pipeline Reference

Understand the analysis pipeline.

Core Concepts

Learn how the system works.

Skills Reference

Explore all available skills.

Configuration

Configure log sources.

CLI Version

Switch to CLI version for interactive mode.