Runtime Verification

/camel-verify — Runtime verification feedback loop

Overview

/camel-verify is the runtime verification orchestrator that validates that generated integrations actually work. Through a five-phase feedback loop with error classification and automated fixes, the AI ensures your integration builds, starts, passes behavioral tests, and handles failures gracefully.

The output is a verified, working integration ready for deployment.

When to Use

Invoke /camel-verify when you:

  • Want to validate a generated integration works at runtime
  • Encounter build failures, runtime errors, or test failures
  • Need to troubleshoot an existing integration
  • Want automated diagnosis and fixing of common issues

Auto-invocation: After /camel-execute completes, the AI automatically invokes /camel-verify. You can also invoke it standalone for troubleshooting.

The Five-Phase Loop

The verification process runs five phases in sequence:

  1. Environment Setup - Start and verify external dependencies
  2. Build - Compile and resolve dependencies
  3. Start Integration - Launch Camel and verify routes start
  4. Behavioral Testing - Run Citrus tests against real services
  5. Report Generation - Summarize results and provide insights

If any phase fails, the AI classifies the error, fixes it, and retries (up to 15 attempts per phase).
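At a high level, the loop behaves like the illustrative Java sketch below. This is not the actual implementation: the Phase and ErrorCategory names, the helper methods, and their stubbed bodies are hypothetical, shown only to make the control flow concrete.

// Illustrative sketch of the verification loop -- not the real implementation.
enum Phase { ENVIRONMENT, BUILD, START, TEST, REPORT }
enum ErrorCategory { BUILD_ERROR, RUNTIME_ERROR, TEST_ERROR, ENVIRONMENT_ERROR }

class VerifyLoop {
    static final int MAX_ATTEMPTS = 15;                       // retry budget per phase

    void verify() {
        for (Phase phase : Phase.values()) {
            // attempt counts how many times this phase has run and failed
            for (int attempt = 1; !runPhase(phase); attempt++) {
                if (attempt >= MAX_ATTEMPTS) {
                    report("Phase " + phase + " still failing after " + MAX_ATTEMPTS + " attempts");
                    return;                                   // give up and report
                }
                ErrorCategory category = classify(lastErrorOutput());
                applyFix(category);                           // route the error to the matching fixer
                sleepSeconds(backoffSeconds(attempt));        // wait before retrying (see below)
            }
        }
        report("All phases passed");
    }

    // Hypothetical helper stubs, included only so the sketch compiles:
    boolean runPhase(Phase p) { return true; }
    String lastErrorOutput() { return ""; }
    ErrorCategory classify(String log) { return ErrorCategory.BUILD_ERROR; }
    void applyFix(ErrorCategory c) { }
    int backoffSeconds(int attempt) { return 5; }
    void sleepSeconds(int s) {
        try { Thread.sleep(s * 1000L); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
    void report(String msg) { System.out.println(msg); }
}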

Error Classification System

The AI classifies every error into one of four categories:

1. BUILD_ERROR

Indicators:

  • Maven compilation errors
  • Missing dependencies
  • YAML syntax errors
  • Java source errors

Fix Strategy:

  • Add missing dependencies to pom.xml
  • Repair YAML with camel-validate skill
  • Update Java code (rare for generated code)
  • Resolve version conflicts with BOM

Routed To: Build system, dependency manager, camel-validate skill

2. RUNTIME_ERROR

Indicators:

  • Routes fail to start
  • Connection refused errors
  • Port conflicts
  • Configuration errors

Fix Strategy:

  • Restart services
  • Update configuration properties
  • Change ports
  • Create missing resources (databases, topics)

Routed To: camel-implement skill (route fixes), configuration manager

3. TEST_ERROR

Indicators:

  • Test failures
  • Assertion errors
  • Timeouts in tests
  • Missing test data

Fix Strategy:

  • Regenerate routes if logic wrong
  • Fix test code if assumptions wrong
  • Increase timeouts
  • Add test data setup

Routed To: camel-implement (route logic), camel-test (test code)

4. ENVIRONMENT_ERROR

Indicators:

  • Docker not running
  • Container failures
  • Port conflicts
  • Image pull errors

Fix Strategy:

  • Start Docker daemon
  • Pull missing images
  • Restart containers
  • Change conflicting ports

Routed To: Docker/environment manager
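To make the routing concrete, here is a minimal Java sketch of pattern-based classification. The match strings are simplified examples drawn from typical Maven, Camel, and Docker output; they are not the tool's actual rules.

// Minimal classification sketch -- the patterns are illustrative, not exhaustive.
class ErrorClassifier {
    enum ErrorCategory { BUILD_ERROR, RUNTIME_ERROR, TEST_ERROR, ENVIRONMENT_ERROR }

    static ErrorCategory classify(String log) {
        String l = log.toLowerCase();
        if (l.contains("compilation failure") || l.contains("could not resolve dependencies")) {
            return ErrorCategory.BUILD_ERROR;        // Maven build and dependency problems
        }
        if (l.contains("cannot connect to the docker daemon") || l.contains("pull access denied")) {
            return ErrorCategory.ENVIRONMENT_ERROR;  // Docker daemon and image problems
        }
        if (l.contains("assertionerror") || (l.contains("tests run") && l.contains("failures"))) {
            return ErrorCategory.TEST_ERROR;         // failing assertions or test summaries
        }
        if (l.contains("connection refused") || l.contains("failed to start route")) {
            return ErrorCategory.RUNTIME_ERROR;      // routes or endpoints failing at startup
        }
        return ErrorCategory.RUNTIME_ERROR;          // default bucket when nothing matches
    }
}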

Retry Budget

Each phase has a retry budget of 15 attempts.

Retry Strategy:

Attempt 1: Try original approach
Attempt 2: Apply simple fix (restart service)
Attempt 3: Apply configuration fix
Attempt 4: Apply code fix
Attempt 5: Apply environment fix
...
Attempt 15: Last attempt
  → If still failing, give up and report

Exponential Backoff:

Between retries, the AI waits:

  • Attempts 1-3: 5 seconds
  • Attempts 4-7: 10 seconds
  • Attempts 8-12: 30 seconds
  • Attempts 13-15: 60 seconds

This gives external services time to recover.
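The stepped schedule above maps directly to a small lookup function, sketched here in Java (the class and method names are hypothetical):

// Illustrative sketch of the wait schedule; mirrors the tiers listed above.
class Backoff {
    static int backoffSeconds(int attempt) {
        if (attempt <= 3)  return 5;    // attempts 1-3
        if (attempt <= 7)  return 10;   // attempts 4-7
        if (attempt <= 12) return 30;   // attempts 8-12
        return 60;                      // attempts 13-15
    }
}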

Standalone Invocation

You can invoke /camel-verify standalone for troubleshooting:

Use Case 1: After Manual Code Changes

(You manually edit a route)

/camel-verify

→ Runs the full five-phase verification
→ Reports whether your changes broke anything

Use Case 2: Environment Issues

You: My integration used to work but now fails to start

/camel-verify

→ Phase 1: Checks environment (finds PostgreSQL container stopped)
→ Fix: Restarts PostgreSQL
→ Phases 2-5: Verify everything works again

Use Case 3: New Test Scenarios

(You add a new Citrus test)

/camel-verify --phase=4

→ Skips phases 1-3 (already verified)
→ Runs only Phase 4 (Behavioral Testing)
→ Reports test results

Environment-in-the-Loop Concept

/camel-verify is “environment-in-the-loop” verification: it doesn’t just check code; it actually runs the integration with real databases, message brokers, and HTTP endpoints.

Why This Matters:

  • Catches real issues: Code might compile but fail at runtime
  • Validates integrations: Endpoints actually connect, messages actually flow
  • Tests behavior: Not just unit tests, but full integration tests
  • Prevents surprises: Find issues now, not in production

Contrast with traditional testing:

  • Unit tests: Mock everything (no environment)
  • Integration tests: Run against real services (environment-in-the-loop)

Camel-Kit uses integration tests for verification because integrations are, by definition, about connecting systems.
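For concreteness, a behavioral test in this style might look like the minimal Citrus sketch below (Citrus JUnit 5 support; the endpoint URL, path, and payload are hypothetical placeholders, not what the camel-test skill actually generates):

import org.citrusframework.TestCaseRunner;
import org.citrusframework.annotations.CitrusResource;
import org.citrusframework.annotations.CitrusTest;
import org.citrusframework.junit.jupiter.CitrusSupport;
import org.junit.jupiter.api.Test;
import org.springframework.http.HttpStatus;

import static org.citrusframework.http.actions.HttpActionBuilder.http;

@CitrusSupport
class OrderRouteIT {

    @Test
    @CitrusTest
    void orderFlowsThroughRoute(@CitrusResource TestCaseRunner runner) {
        // Send a real HTTP request to the running integration -- no mocks.
        runner.when(http().client("http://localhost:8080")
                .send()
                .post("/orders")
                .message()
                .contentType("application/json")
                .body("{ \"id\": 1, \"item\": \"widget\" }"));

        // Assert that the integration actually responds with 200 OK.
        runner.then(http().client("http://localhost:8080")
                .receive()
                .response(HttpStatus.OK));
    }
}

Because the request travels over a real socket to a real route, a stopped database or a misconfigured endpoint fails this test immediately, which is exactly what environment-in-the-loop verification is for.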

Graceful Degradation

If tools are unavailable, the AI adapts:

No Docker

Warning: Docker not available. Skipping environment setup.

Assuming external services are already running:
- PostgreSQL at localhost:5432
- Kafka at localhost:9092

Proceeding to build phase...

No Maven Wrapper

Warning: ./mvnw not found. Using system Maven...

mvn clean package

No Citrus Tests

Warning: No Citrus tests found in src/test/java/

Skipping behavioral testing phase.

Note: Without tests, we cannot verify integration behavior.
Consider generating tests with the /camel-test skill.

The AI continues with available tools, warning about limitations.
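The underlying availability checks can be as simple as probing the command line, as in this illustrative Java sketch (the class and method names are hypothetical; the real checks are internal to the AI):

import java.io.File;
import java.io.IOException;

class ToolProbe {
    // True if `docker info` exits successfully, i.e. the daemon is reachable.
    static boolean dockerAvailable() {
        try {
            Process p = new ProcessBuilder("docker", "info").start();
            return p.waitFor() == 0;
        } catch (IOException | InterruptedException e) {
            return false;
        }
    }

    // Prefer the project's Maven wrapper; fall back to system Maven.
    static String mavenCommand(File projectDir) {
        File mvnw = new File(projectDir, "mvnw");
        return mvnw.canExecute() ? "./mvnw" : "mvn";
    }
}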

Summary

/camel-verify validates integrations through a five-phase feedback loop:

  1. Environment Setup - Start and verify external dependencies
  2. Build - Compile and resolve dependencies
  3. Start Integration - Launch Camel and verify routes start
  4. Behavioral Testing - Run Citrus tests against real services
  5. Report Generation - Summarize results and provide insights

Key Features:

  • Error Classification - BUILD, RUNTIME, TEST, ENVIRONMENT errors
  • Auto-Fixes - Up to 15 retry attempts per phase, with intelligent fixes
  • Fix Routing - Errors routed to appropriate skills for fixes
  • Environment-in-the-Loop - Tests against real databases and brokers
  • Graceful Degradation - Works even when tools unavailable
  • Standalone Invocation - Use for troubleshooting existing integrations

The result is confidence that your integration actually works, not just compiles.

This completes the full pipeline: Design → Plan → Execute → Verify.