
AI Configuration

DIBOP uses AI to power features like the Orchestration Composer and AI-assisted field mapping. This page explains how AI is configured, which models are available, and how to manage AI usage.


Where AI Is Used

DIBOP's AI capabilities appear in several features:

| Feature | How AI Is Used |
|---|---|
| AI Composer | Generates orchestration steps from natural-language descriptions |
| Field Mapping | Suggests Canonical Data Model mappings from sample JSON data |
| Connector Builder | Extracts connector configuration from API documentation URLs |

AI is used to assist and accelerate your work -- it does not make autonomous decisions or execute actions without your review and approval.


Available AI Engines

DIBOP can be configured to use different AI engines depending on your requirements:

Claude (Anthropic)

| Setting | Value |
|---|---|
| Provider | Anthropic |
| Default Model | Claude Sonnet |
| Strengths | Strong reasoning, excellent at code generation and data mapping |
| Data Handling | Data is sent to Anthropic's API; see Anthropic's data policy |

Azure OpenAI

| Setting | Value |
|---|---|
| Provider | Microsoft Azure |
| Available Models | GPT-4o, GPT-4, GPT-3.5 Turbo |
| Strengths | Azure data boundary compliance, enterprise agreements |
| Data Handling | Data stays within your Azure region |

OpenAI (Direct)

| Setting | Value |
|---|---|
| Provider | OpenAI |
| Available Models | GPT-4o, GPT-4, GPT-3.5 Turbo |
| Strengths | Access to latest models |
| Data Handling | Data is sent to OpenAI's API; see OpenAI's data policy |

Configuring the AI Engine

Platform Admin Configuration

Platform Admins configure which AI engines are available to enterprises:

  1. Navigate to Platform Config > AI Configuration
  2. Configure one or more AI providers:
    • Enter API keys or connection details
    • Select which models are available
    • Set default model for new enterprises
  3. Save changes
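The platform-level steps above can be sketched as a configuration record with a simple validation pass. The field names below are illustrative assumptions for this sketch, not DIBOP's actual configuration schema:

```python
# Illustrative platform-level AI provider configuration.
# Field names are assumptions, not DIBOP's actual schema.
provider_config = {
    "providers": {
        "anthropic": {"api_key": "sk-ant-...", "models": ["claude-sonnet", "claude-opus"]},
        "azure_openai": {"endpoint": "https://myorg.openai.azure.com", "models": ["gpt-4o"]},
    },
    "default_model": "claude-sonnet",  # default for new enterprises
}

def validate_config(cfg):
    """Return a list of validation errors (empty if the config is valid)."""
    errors = []
    if not cfg.get("providers"):
        errors.append("at least one provider must be configured")
    all_models = [m for p in cfg.get("providers", {}).values() for m in p.get("models", [])]
    if cfg.get("default_model") not in all_models:
        errors.append("default_model must be one of the enabled models")
    return errors
```

Validating before saving catches the most common misconfiguration: a default model that no enabled provider actually offers.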

Enterprise Admin Configuration

Enterprise Admins can select their preferred AI engine from the options the Platform Admin has enabled:

  1. Navigate to SETTINGS > Enterprise Settings
  2. Scroll to the AI Configuration section
  3. Select the preferred AI engine and model
  4. Optionally configure:
    • Token limit per request: Maximum tokens per AI request (controls cost and response length)
    • AI features enabled: Toggle individual AI features on or off
  5. Save changes
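The key constraint in the steps above is that an enterprise's selection must come from the set the Platform Admin enabled. A minimal sketch of that check, with illustrative names and defaults:

```python
# Hypothetical sketch: enterprise AI settings constrained to the models
# the Platform Admin has enabled. Names and defaults are illustrative.
PLATFORM_ENABLED = {"claude-sonnet", "claude-opus", "gpt-4o"}

def select_engine(model, token_limit=2000, features=None):
    """Build an enterprise AI settings record, rejecting disallowed models."""
    if model not in PLATFORM_ENABLED:
        raise ValueError(f"model {model!r} is not enabled by the Platform Admin")
    return {
        "model": model,
        "token_limit_per_request": token_limit,
        "features": features or {
            "composer": True,
            "field_mapping": True,
            "connector_builder": True,
        },
    }
```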

Model Selection

Different models offer different trade-offs:

| Model | Speed | Quality | Cost |
|---|---|---|---|
| Claude Sonnet | Fast | High | Medium |
| Claude Opus | Moderate | Highest | High |
| GPT-4o | Fast | High | Medium |
| GPT-4 | Moderate | High | High |
| GPT-3.5 Turbo | Very fast | Good | Low |

Recommendation

For most use cases, a mid-tier model (Claude Sonnet or GPT-4o) provides the best balance of quality and speed. Use a higher-tier model only for complex orchestrations or critical field mapping tasks.


Token Limits and Costs

What Are Tokens?

AI models process text as "tokens" -- roughly 4 characters or 0.75 words per token. Both the input (your description, context, system schemas) and the output (generated orchestration, suggested mappings) consume tokens.
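The heuristics above (~4 characters or ~0.75 words per token) can be turned into a rough planning estimate. Real tokenizers vary by model, so this is only an approximation:

```python
# Rough token estimate from the ~4 chars / ~0.75 words per token heuristics.
# Actual tokenizers are model-specific; use this only as a planning aid.
def estimate_tokens(text):
    by_chars = len(text) / 4
    by_words = len(text.split()) / 0.75
    return round((by_chars + by_words) / 2)
```

Averaging the two heuristics smooths out texts with unusually long or short words.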

Token Limits

| Setting | Description | Default |
|---|---|---|
| Max input tokens | Maximum context sent to the AI model | 4,000 |
| Max output tokens | Maximum length of the AI's response | 2,000 |
| Monthly token budget | Total tokens your enterprise can use per month | Varies by plan |

Cost Tracking

View your AI usage from SETTINGS > Enterprise Settings > AI Usage:

  • Tokens used this month (input + output)
  • Number of AI requests
  • Estimated cost (based on the model's per-token pricing)
  • Monthly budget remaining
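The estimated-cost figure is just tokens multiplied by the model's per-token price. A sketch of that calculation, with placeholder prices (not actual provider rates):

```python
# Sketch of the estimated-cost calculation. Prices are placeholders,
# not actual provider rates; output tokens typically cost more than input.
PRICE_PER_1K = {"input": 0.003, "output": 0.015}  # assumed USD per 1,000 tokens

def estimate_cost(input_tokens, output_tokens):
    return round(
        input_tokens / 1000 * PRICE_PER_1K["input"]
        + output_tokens / 1000 * PRICE_PER_1K["output"],
        4,
    )
```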

Data Privacy

What Data Is Sent to the AI

When you use an AI feature, DIBOP sends:

  • Your natural-language description (AI Composer)
  • Sample JSON data (field mapping)
  • Connector schemas and operation definitions (for context)
  • The Canonical Data Model schema (for mapping accuracy)

DIBOP does not send:

  • Your credentials or API keys
  • Production data or customer records
  • Execution logs or API call logs
  • User account information
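The send/don't-send lists above imply an allow-list approach: only explicitly permitted fields ever reach the AI request. A minimal sketch, with assumed field names:

```python
# Sketch of an allow-list filter for AI request payloads. Field names are
# assumptions for illustration. An allow-list is safer than a block list:
# anything not explicitly permitted is dropped by default.
ALLOWED_FIELDS = {"description", "sample_json", "connector_schema", "cdm_schema"}

def build_ai_payload(context):
    """Keep only allow-listed fields from the request context."""
    return {k: v for k, v in context.items() if k in ALLOWED_FIELDS}
```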

Data Handling by Provider

| Provider | Data Retention | Training Use |
|---|---|---|
| Anthropic (Claude) | Not retained after processing | Not used for training |
| Azure OpenAI | Per your Azure agreement | Not used for training (enterprise tier) |
| OpenAI (Direct) | Per OpenAI's data policy | Check your agreement |

Azure Data Boundary

If your enterprise requires that data never leaves a specific geographic region, use Azure OpenAI configured in your preferred Azure region. This ensures AI processing stays within your data boundary.
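A region-pinned configuration might look like the sketch below. The endpoint format follows Azure OpenAI's resource URL convention; the field names and the boundary check are illustrative assumptions:

```python
# Hypothetical sketch: pinning AI processing to a region by using a
# region-scoped Azure OpenAI resource. Names and values are placeholders.
azure_config = {
    "provider": "azure_openai",
    "endpoint": "https://my-resource.openai.azure.com",  # resource created in the chosen region
    "region": "westeurope",
    "deployment": "gpt-4o",
}

def within_boundary(cfg, allowed_regions):
    """Check that AI processing stays inside the approved regions."""
    return cfg.get("provider") == "azure_openai" and cfg.get("region") in allowed_regions
```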


Disabling AI Features

If your enterprise policy prohibits sending data to external AI providers, you can disable AI features entirely:

  1. Navigate to SETTINGS > Enterprise Settings > AI Configuration
  2. Toggle AI Features to Disabled
  3. Save

When AI is disabled:

  • The AI Composer shows a message that AI is unavailable; you can still build orchestrations manually
  • Field mapping does not offer AI suggestions; manual mapping is still available
  • The Connector Builder does not offer "Import from Docs"; manual configuration is still available
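The pattern in the list above is graceful degradation: every feature keeps its manual path, and AI assistance is purely additive. A sketch of that gating, with illustrative feature names:

```python
# Sketch of graceful degradation when AI is disabled: the manual path
# is always available; AI assistance is additive. Names are illustrative.
FEATURES = ["composer", "field_mapping", "connector_builder"]

def available_modes(ai_enabled):
    """Map each feature to its available modes for the current enterprise."""
    return {f: (["manual", "ai"] if ai_enabled else ["manual"]) for f in FEATURES}
```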

Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| AI requests failing | API key expired or invalid | Update the API key in Platform Config |
| Slow AI responses | Model overloaded or network latency | Try a faster model or check network connectivity |
| Poor quality output | Description too vague or insufficient context | Provide more specific descriptions and select relevant systems |
| Token limit exceeded | Description or context too long | Simplify the input or increase the token limit |
| AI features not available | Disabled by enterprise or platform admin | Check AI Configuration settings |
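For the "Token limit exceeded" row, one simple mitigation is truncating the context to fit the input budget before sending. A sketch using the ~4 characters-per-token heuristic; the limit values are the defaults listed above and the ratio is an assumption:

```python
# Sketch of a pre-send truncation for the "token limit exceeded" case.
# Uses the ~4 chars/token heuristic; real tokenizers are model-specific.
def fit_to_budget(text, max_input_tokens=4000, chars_per_token=4):
    max_chars = max_input_tokens * chars_per_token
    return text if len(text) <= max_chars else text[:max_chars]
```

Truncation is a blunt instrument; for better results, trim the least relevant context (for example, unused system schemas) before falling back to a hard cut.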

Next Steps