Generation Provider

Generation Providers are the backbone of the Active Agent framework, allowing seamless integration with various AI services. They provide a consistent interface for prompting and generating responses, making it easy to switch between different providers without changing the core logic of your application.
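
For example, switching an agent from OpenAI to Anthropic can be as small as changing its generate_with call; the rest of the agent stays the same. A minimal sketch (the class name and model are illustrative):

ruby
class SupportAgent < ApplicationAgent
  layout "agent"
  # Previously: generate_with :openai, model: "gpt-4o-mini"
  generate_with :anthropic, model: "claude-3-5-sonnet-20241022"
end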

Available Providers

You can use the following generation providers with Active Agent: OpenAI, Anthropic, OpenRouter, and Ollama. Each is configured the same way, with generate_with:

ruby
class OpenAIAgent < ApplicationAgent
  layout "agent"
  generate_with :openai, model: "gpt-4o-mini", instructions: "You're a basic OpenAI agent."
end

ruby
class AnthropicAgent < ActiveAgent::Base
  generate_with :anthropic
end

ruby
class OpenRouterAgent < ApplicationAgent
  layout "agent"
  generate_with :open_router, model: "qwen/qwen3-30b-a3b:free", instructions: "You're a basic Open Router agent."
end

ruby
class OllamaAgent < ApplicationAgent
  layout "agent"
  generate_with :ollama, model: "gemma3:latest", instructions: "You're a basic Ollama agent."
end
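
Whichever provider an agent is configured with, the calling code stays the same. A minimal sketch using the prompt_context entry point covered later in this guide (the message text is illustrative):

ruby
# The invocation is identical regardless of the configured provider.
response = OpenAIAgent.with(message: "Say hello.").prompt_context.generate_now
response.message.content # => the generated text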

Response

Generation providers handle the request-response cycle for generating responses based on the provided prompts. They process the prompt context, including messages, actions, and parameters, and return the generated response.
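
As a sketch of that cycle: build the prompt context, then generate inline or, if your version of Active Agent supports background generation, enqueue it. The generate_later call below is an assumption modeled on Action Mailer's deliver_later, so verify it against your version:

ruby
generation = ApplicationAgent.with(message: "Hello").prompt_context

# Perform the request inline and receive a Response object back:
response = generation.generate_now

# Assumed API: enqueue the generation as a background job instead.
# generation.generate_later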

Response Object

The ActiveAgent::GenerationProvider::Response class encapsulates the result of a generation request, providing access to both the processed response and debugging information.

Attributes

  • message - The generated response message from the AI provider
  • prompt - The complete prompt object used for generation, including updated context, messages, and parameters
  • raw_response - The unprocessed response data from the AI provider, useful for debugging and accessing provider-specific metadata
  • usage - Token usage statistics reported by the provider, when available

Example Usage

ruby
response = ApplicationAgent.with(message: "Hello").prompt_context.generate_now

# Access response content
content = response.message.content

# Access response role
role = response.message.role

# Access full prompt context
messages = response.prompt.messages

# Access usage statistics (if available)
usage = response.usage

Response Example

activeagent/test/generation_provider_examples_test.rb:75

ruby
# Response object
#<ActiveAgent::GenerationProvider::Response:0x3ea8
  @message=#<ActiveAgent::ActionPrompt::Message:0x3ebc
    @action_id=nil,
    @action_name=nil,
    @action_requested=false,
    @charset="UTF-8",
    @content="Hello! How can I assist you today?",
    @role=:assistant>
  @prompt=#<ActiveAgent::ActionPrompt::Prompt:0x3ed0 ...>
  @content_type="text/plain"
  @raw_response={...}>

# Message content
response.message.content # => "Hello! How can I assist you today?"

The response object ensures you have full visibility into both the input prompt context and the raw provider response, making it easy to debug generation issues or access provider-specific response metadata.
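
For example, token counts and other provider metadata often appear only in the raw payload. A sketch assuming an OpenAI-style response hash (the key names are illustrative and vary by provider):

ruby
raw = response.raw_response

# Hypothetical OpenAI-style keys; other providers structure their payloads differently.
raw["model"]                          # => "gpt-4o-mini"
raw.dig("usage", "prompt_tokens")     # tokens consumed by the prompt
raw.dig("usage", "completion_tokens") # tokens in the generated message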

Provider Configuration

You can configure generation providers with custom settings:

Model and Temperature Configuration

ruby
class AnthropicConfigAgent < ActiveAgent::Base
  generate_with :anthropic,
    model: "claude-3-5-sonnet-20241022",
    temperature: 0.7
end

ruby
class OpenRouterConfigAgent < ActiveAgent::Base
  generate_with :open_router,
    model: "anthropic/claude-3-5-sonnet",
    temperature: 0.5
end

Custom Host Configuration

For Azure OpenAI or other custom endpoints:

ruby
class CustomHostAgent < ActiveAgent::Base
  generate_with :openai,
    host: "https://your-azure-openai-resource.openai.azure.com",
    api_key: "your-api-key",
    model: "gpt-4"
end
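
In a real application you would typically avoid hardcoding credentials. A sketch that reads the host and key from the environment instead (the variable names are illustrative):

ruby
class CustomHostEnvAgent < ActiveAgent::Base
  generate_with :openai,
    host: ENV["AZURE_OPENAI_ENDPOINT"],
    api_key: ENV.fetch("AZURE_OPENAI_API_KEY"),
    model: "gpt-4"
end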