LLM
Core reference

LLM Core: errors

The errors module defines a standardized exception hierarchy for all Large Language Model interactions within the jazzmine framework. Because different providers use different underlying transport libraries (e.g., httpx for OpenAI, boto3 for Bedrock, or subprocess for Local models), they raise different native exceptions. This module wraps those disparate failures into a consistent set of errors that the agent can handle programmatically.

1. Behavior and Context

In the jazzmine architecture, LLM Errors act as the "Signal System" for the orchestration layer.

  • Abstraction: Providers (like AnthropicLLM) are responsible for catching their specific library errors and re-raising them as one of the classes defined here.
  • Retry Integration: High-level components like the Agent or ToolOrchestrator catch these errors to decide whether to retry a request (e.g., on a timeout) or abort (e.g., on an invalid request).
  • Telemetry: When an LLM call fails, the exception message is captured and stored in the LLMCallRecord and the final TurnTrace.
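The retry/abort decision described above can be sketched as a simple classification helper. This is illustrative only: the exception classes below are local stand-ins mirroring the jazzmine hierarchy (so the snippet is self-contained), and should_retry is a hypothetical name, not a real jazzmine API.

```python
# Stand-ins mirroring the jazzmine hierarchy so the snippet runs standalone.
class LLMError(Exception): pass
class LLMTimeoutError(LLMError): pass
class LLMRateLimitError(LLMError): pass
class LLMConnectionError(LLMError): pass
class LLMInvalidRequestError(LLMError): pass
class LLMInternalError(LLMError): pass

# Transient failures are worth retrying; malformed requests are not.
RETRYABLE = (LLMTimeoutError, LLMRateLimitError, LLMConnectionError, LLMInternalError)

def should_retry(exc: Exception) -> bool:
    """Hypothetical helper: True if the orchestrator should retry the call."""
    return isinstance(exc, RETRYABLE)
```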

2. Purpose

  • Uniformity: Providing a single point of truth for LLM-related failure states.
  • Resiliency: Enabling granular retry logic based on the type of error (retry on LLMRateLimitError vs. fail on LLMInvalidRequestError).
  • Clean API: Shielding the rest of the framework from implementation details of specific AI SDKs.

3. High-Level API (Hierarchy)

All errors in this module inherit from the base LLMError.

```text
Exception
└── LLMError
    ├── LLMTimeoutError
    ├── LLMRateLimitError
    ├── LLMConnectionError
    ├── LLMInvalidRequestError
    └── LLMInternalError
```

```python
from jazzmine.core.llm.errors import LLMRateLimitError, LLMTimeoutError, LLMError

try:
    response = await llm.agenerate(messages)
except LLMRateLimitError as e:
    print(f"We are being throttled: {e}. Backing off...")
except LLMTimeoutError:
    print("The model took too long to respond.")
except LLMError as e:
    print(f"A general LLM failure occurred: {e}")
```

4. Detailed Class Descriptions

LLMError

Purpose: The base class for all exceptions in the LLM module.

  • Usage: Use this in a try...except block if you want to catch any failure that occurs during an LLM interaction, regardless of the cause.

LLMTimeoutError

Purpose: Raised when a request to the provider exceeds the configured timeout duration.

  • Context: Commonly occurs during periods of high network congestion or when requesting a very large number of tokens from a slow model.

LLMRateLimitError

Purpose: Raised when the provider returns a "Too Many Requests" signal (HTTP 429).

  • Context: Indicates that the API quota or the rate limit for the specific model has been exceeded.
  • Action: Usually warrants an exponential backoff retry strategy.
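Such a strategy can be sketched with the standard "exponential backoff with full jitter" formula. Note that backoff_delay is an illustrative helper under assumed defaults, not part of jazzmine:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Delay before retry `attempt` (0-based): uniform in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Full jitter spreads simultaneous retries apart, which matters when many agent turns hit the same 429 at once.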

LLMConnectionError

Purpose: Raised when a low-level network failure occurs.

  • Context: Includes DNS resolution failures, lost internet connectivity, or the provider's server closing the connection unexpectedly.

LLMInvalidRequestError

Purpose: Raised when the request is malformed or rejected by the provider's business logic (HTTP 400).

  • Context: This occurs if the parameters (like temperature) are out of range, if the API key is invalid, or if the prompt triggers the provider's safety/content filters.

LLMInternalError

Purpose: Raised when the provider's server encounters an internal error (HTTP 5xx).

  • Context: Indicates an issue on the AI provider's side. Like rate limits, these are often temporary and may succeed upon retry.
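Since the module is essentially an exception hierarchy, the class definitions themselves are likely little more than docstringed subclasses. A plausible sketch (not the verified jazzmine source):

```python
class LLMError(Exception):
    """Base class for all LLM-related failures."""

class LLMTimeoutError(LLMError):
    """The provider did not respond within the configured timeout."""

class LLMRateLimitError(LLMError):
    """The provider returned a 'Too Many Requests' signal (HTTP 429)."""

class LLMConnectionError(LLMError):
    """A low-level network failure occurred."""

class LLMInvalidRequestError(LLMError):
    """The request was malformed or rejected by the provider (HTTP 400)."""

class LLMInternalError(LLMError):
    """The provider reported a server-side error (HTTP 5xx)."""
```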

5. Remarks

  • Provider Responsibility: Every provider class inheriting from BaseLLM is strictly required to map its library's native errors to this hierarchy.
  • Traceability: Every exception carries a string message (usually the raw error from the provider), which is invaluable for debugging "black box" model failures.
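The provider-side mapping might look like the following sketch. The translate_* helpers are hypothetical names, the error classes are redefined locally so the snippet is self-contained, and real providers would match their own SDK's exception types rather than the builtins used here:

```python
# Local stand-ins for the jazzmine error classes so the sketch runs standalone.
class LLMError(Exception): pass
class LLMTimeoutError(LLMError): pass
class LLMRateLimitError(LLMError): pass
class LLMConnectionError(LLMError): pass
class LLMInvalidRequestError(LLMError): pass
class LLMInternalError(LLMError): pass

def translate_status(status: int, message: str) -> LLMError:
    """Map an HTTP status code from a provider response onto the hierarchy."""
    if status == 429:
        return LLMRateLimitError(message)
    if 400 <= status < 500:
        return LLMInvalidRequestError(message)
    if status >= 500:
        return LLMInternalError(message)
    return LLMError(message)

def translate_exception(exc: Exception) -> LLMError:
    """Map native transport exceptions (Python builtins here) onto the hierarchy."""
    if isinstance(exc, TimeoutError):  # check first: TimeoutError subclasses OSError
        return LLMTimeoutError(str(exc))
    if isinstance(exc, (ConnectionError, OSError)):
        return LLMConnectionError(str(exc))
    return LLMError(str(exc))
```

Keeping the raw provider message as the exception's string argument preserves the traceability property noted above.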