# Anthropic Provider
The Anthropic provider enables integration with Claude models including Claude Sonnet 4.5, Claude Haiku 4.5, Claude Opus 4.1, and the Claude 3.x family. It offers advanced reasoning capabilities, extended context windows, extended thinking mode, and strong performance on complex tasks.
## Configuration

### Basic Setup
Configure Anthropic in your agent:
```ruby
class AnthropicAgent < ApplicationAgent
  generate_with :anthropic, model: "claude-sonnet-4-5-20250929"

  # @return [ActiveAgent::Generation]
  def ask
    prompt(message: params[:message])
  end
end
```

### Basic Usage Example

```ruby
response = AnthropicAgent.with(
  message: "What is the Model Context Protocol?"
).ask.generate_now
```
### Configuration File

Set up Anthropic credentials in `config/active_agent.yml`:
```yaml
anthropic: &anthropic
  service: "Anthropic"
  access_token: <%= Rails.application.credentials.dig(:anthropic, :access_token) %>
```

### Environment Variables

Alternatively, use environment variables:

```bash
ANTHROPIC_API_KEY=your-api-key
```

## Supported Models
Anthropic provides access to the Claude model family. For the complete list of available models, see Anthropic's Models Overview.
### Claude 4.x Family (Latest)
| Feature | Claude Sonnet 4.5 | Claude Haiku 4.5 | Claude Opus 4.1 |
|---|---|---|---|
| Description | Smartest model for complex agents and coding | Fastest model with near-frontier intelligence | Exceptional model for specialized reasoning |
| Pricing | $3/MTok input, $15/MTok output | $1/MTok input, $5/MTok output | $15/MTok input, $75/MTok output |
| Extended Thinking | ✓ | ✓ | ✓ |
| Priority Tier | ✓ | ✓ | ✓ |
| Latency | Fast | Fastest | Moderate |
| Context Window | 200K tokens (1M tokens in beta) | 200K tokens | 200K tokens |
| Max Output | 64K tokens | 64K tokens | 32K tokens |
| Knowledge Cutoff | Jan 2025 | Feb 2025 | Jan 2025 |
| Training Data | Jul 2025 | Jul 2025 | Mar 2025 |
Recommended model identifiers:
- `claude-sonnet-4-5` - Best for complex reasoning and coding tasks
- `claude-haiku-4-5` - Best for speed with high intelligence
- `claude-opus-4-1` - Best for specialized reasoning tasks requiring deep analysis
### Claude 3.5 Family

- `claude-3-5-sonnet-latest` - Most intelligent Claude 3.5 model
- `claude-3-5-sonnet-20241022` - Specific version for reproducibility
### Claude 3 Family

- `claude-3-opus-latest` - Most capable Claude 3 model
- `claude-3-sonnet-20240229` - Balanced performance and cost
- `claude-3-haiku-20240307` - Fastest and most cost-effective
## Provider-Specific Parameters

### Required Parameters

- `model` - Model identifier (e.g., `"claude-3-5-sonnet-latest"`)
- `max_tokens` - Maximum number of tokens to generate (default: 4096, minimum: 1)
### Sampling Parameters

- `temperature` - Controls randomness (0.0 to 1.0; default varies by model)
- `top_p` - Nucleus sampling parameter (0.0 to 1.0)
- `top_k` - Top-k sampling parameter (integer ≥ 0)
- `stop_sequences` - Array of strings that stop generation
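The sampling parameters above are passed straight through `generate_with`; a minimal sketch (the agent name, values, and stop sequence are illustrative):

```ruby
class CreativeWritingAgent < ApplicationAgent
  generate_with :anthropic,
    model: "claude-sonnet-4-5-20250929",
    temperature: 0.9,            # higher randomness for creative output
    stop_sequences: ["\n\n---"]  # stop when the draft separator appears

  def draft
    prompt(message: params[:brief])
  end
end
```

Note that Anthropic's API documentation recommends adjusting either `temperature` or `top_p`, but not both.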
### System & Instructions

- `system` - System message to guide Claude's behavior
- `instructions` - Alias for `system` (for common format compatibility)
### Tools & Functions

- `tools` - Array of tool definitions for function calling
- `tool_choice` - Control which tools can be used (`"auto"`, `"any"`, or a specific tool)
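A sketch of passing explicit tool definitions, assuming the `tools` array is forwarded in Anthropic's native JSON-schema format (the `get_weather` tool is hypothetical):

```ruby
class WeatherAgent < ApplicationAgent
  generate_with :anthropic,
    model: "claude-sonnet-4-5-20250929",
    tools: [
      {
        name: "get_weather",
        description: "Look up the current weather for a city",
        input_schema: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"]
        }
      }
    ],
    tool_choice: { type: "auto" } # let Claude decide when to call the tool
end
```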
### Metadata & Tracking

- `metadata` - Custom metadata for request tracking

```ruby
generate_with :anthropic, metadata: { user_id: -> { Current.user&.id } }
```
### Advanced Features

- `thinking` - Enable Claude's extended thinking mode for complex reasoning
- `context_management` - Configure context window management
- `service_tier` - Select service tier (`"auto"`, `"standard_only"`)
- `mcp_servers` - Array of MCP server definitions (max 20)
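A sketch of enabling extended thinking, assuming the `thinking` option takes Anthropic's native parameter shape (`budget_tokens` must be less than `max_tokens`):

```ruby
class AnalysisAgent < ApplicationAgent
  generate_with :anthropic,
    model: "claude-sonnet-4-5-20250929",
    max_tokens: 16_000,
    thinking: { type: "enabled", budget_tokens: 8_000 }
end
```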
### Client Configuration

- `api_key` - Anthropic API key (also accepts `access_token`)
- `base_url` - API endpoint URL (default: `"https://api.anthropic.com"`)
- `timeout` - Request timeout in seconds (default: 600.0)
- `max_retries` - Maximum retry attempts (default: 2)
- `anthropic_beta` - Enable beta features via header
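These client options compose like any other `generate_with` option; a sketch routing requests through an internal proxy with tighter timeouts (the proxy URL is illustrative):

```ruby
class ProxiedAgent < ApplicationAgent
  generate_with :anthropic,
    model: "claude-haiku-4-5",
    base_url: "https://llm-proxy.internal.example.com", # hypothetical gateway
    timeout: 120.0,   # fail faster than the 600s default
    max_retries: 3
end
```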
### Response Format

- `response_format` - Control output format (see Emulated JSON Object Support below)
### Streaming

- `stream` - Enable streaming responses (boolean, default: false)
## Emulated JSON Object Support

While Anthropic does not natively support structured response formats like OpenAI's `json_object` mode, ActiveAgent provides emulated support through a prompt engineering technique.

When you specify `response_format: { type: "json_object" }`, the framework:
1. Adds a lead-in assistant message containing `"Here is the JSON requested:\n{"` to prime Claude to output JSON
2. Receives Claude's response, which continues from the opening brace
3. Reconstructs the complete JSON by prepending the `{` character
4. Removes the lead-in message from the message stack for clean conversation history
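The reconstruction step can be illustrated in plain Ruby, with no API call; the continuation string below stands in for a hypothetical model response:

```ruby
require "json"

# The lead-in assistant message the framework sends to prime Claude.
lead_in = "Here is the JSON requested:\n{"

# Hypothetical continuation Claude returns, picking up after the "{".
continuation = '"colors": ["red", "blue", "yellow"]}'

# Reconstruct the complete document by restoring the opening brace;
# the lead-in itself is then dropped from the conversation history.
full_json = "{" + continuation
parsed = JSON.parse(full_json)
# parsed => {"colors"=>["red", "blue", "yellow"]}
```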
### Usage Example
```ruby
class DataExtractionAgent < ApplicationAgent
  generate_with :anthropic, model: "claude-haiku-4-5"

  def extract_colors
    prompt(
      "Return a JSON object with three primary colors in an array named 'colors'.",
      response_format: { type: "json_object" }
    )
  end
end
```

```ruby
response = DataExtractionAgent.extract_colors.generate_now
colors = response.message.parsed_json # Parsed JSON hash
# => { colors: ["red", "blue", "yellow"] }
```

### Best Practices
- Be explicit in your prompt: Ask Claude to "return a JSON object" or "respond with valid JSON"
- Specify the schema: Describe the expected structure in your prompt for better results
- Validate the output: While Claude is reliable, always validate parsed JSON in production
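The validation advice can be as simple as checking the parsed structure before trusting it; a sketch where the expected shape (a `colors` array of strings) is illustrative:

```ruby
require "json"

# Stand-in for response.message.content in a real agent.
raw = '{"colors": ["red", "blue", "yellow"]}'

data = JSON.parse(raw)

# Reject anything that is not a Hash with a non-empty array of strings.
valid = data.is_a?(Hash) &&
        data["colors"].is_a?(Array) &&
        !data["colors"].empty? &&
        data["colors"].all? { |c| c.is_a?(String) }
```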
### Limitations
Unlike OpenAI's native JSON mode:
- No schema enforcement: Claude is not forced to conform to a specific schema
- Prompt-dependent reliability: Success depends on clear prompt instructions
- No strict mode: Cannot guarantee specific field requirements
For applications requiring guaranteed schema conformance, consider using the Structured Output feature with providers that support native JSON schema validation.
## Constitutional AI
Claude is trained with Constitutional AI, making it particularly good at:
- Following ethical guidelines
- Refusing harmful requests
- Providing balanced perspectives
- Being helpful, harmless, and honest
## Error Handling
Handle Anthropic-specific errors:
```ruby
class ResilientAgent < ApplicationAgent
  generate_with :anthropic,
    model: "claude-3-5-sonnet-latest",
    max_retries: 3

  rescue_from Anthropic::RateLimitError do |error|
    Rails.logger.warn "Rate limited: #{error.message}"
    # `retry` is not valid inside a rescue_from block; back off here and
    # rely on the client-level max_retries above, or re-enqueue the work.
    sleep(error.retry_after || 60)
  end

  rescue_from Anthropic::APIError do |error|
    Rails.logger.error "Anthropic error: #{error.message}"
    fallback_to_cached_response
  end
end
```

## Related Documentation
- Providers Overview - Compare all available providers
- Getting Started - Complete setup guide
- Configuration - Environment-specific settings
- Tools - Function calling and MCP integration
- Messages - Work with multimodal content
- Structured Output - JSON response formatting
- Error Handling - Retry strategies and error handling
- Testing - Test Anthropic integrations
- Anthropic API Documentation - Official Anthropic docs