AI Agents FAQ & Trust

Answers to critical questions that enterprise customers ask before adopting AI agents.


Data Privacy & Security

Does the agent read everything on my board?

No. Agents only see events they're subscribed to via triggers.

For example, if you configure a trigger for object:created, the agent only sees new objects being created. It doesn't automatically scan existing board content unless you explicitly enable scanExisting: true when creating the agent.
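A minimal sketch of such a trigger configuration, using the field names that appear elsewhere in this FAQ (treat the exact schema as illustrative):

```json
{
  "triggers": [
    { "event": "object:created" }
  ],
  "scanExisting": false
}
```

With scanExisting left at false, the agent reacts only to objects created after it starts.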

What data does an agent receive?

  • Only the event payload (e.g., new object data, participant joined)
  • Board context you explicitly provide in instructions.context
  • Agent's own action history (for continuity)
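For illustration, the event payload an agent receives might look roughly like this (field names are illustrative, not a schema guarantee):

```json
{
  "event": "object:created",
  "object": {
    "id": "obj-123",
    "type": "answer-card",
    "content": "student answer text"
  }
}
```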

What data does it NOT receive?

  • Historical objects (unless scanning enabled)
  • Private frames outside its scope
  • Other agents' memories
  • Organization data

Does my data go to OpenAI or external LLMs?

Yes, but with safeguards. When an agent uses AI-powered actions (like evaluate_answer or generate_hint), the request is sent to an LLM through our proxy service (gptproxy.ru).

What's sent:

  • Agent instructions (persona, goals, rules)
  • Event context (the specific object/event being processed)
  • Board context you provided

What's NOT sent:

  • Your API keys or credentials
  • Unrelated board content
  • Personal user data (names, emails)

Data retention policy:

  • OpenAI: Zero data retention (we use API mode, not training mode)
  • Our proxy: Logs kept for 30 days for debugging only
  • Your agent history: Stored in your database indefinitely (you control retention)

For Enterprise customers:

  • Option to use your own OpenAI API key (BYOK)
  • Option to use Azure OpenAI (private cloud)
  • Option to disable AI features entirely (rule-based agents only)

Can I use local or private LLMs?

Yes, for Enterprise plans.

We support:

  • Azure OpenAI (private cloud deployment)
  • Self-hosted LLMs via OpenAI-compatible API (e.g., Ollama, LM Studio)
  • Custom LLM endpoints (requires configuration)

Setup:

  1. Contact support for Enterprise plan
  2. Provide your LLM endpoint URL and authentication
  3. We configure your organization to route agent requests there
  4. Test with a sample agent

Limitations:

  • LLM must support OpenAI-compatible chat completions API
  • Minimum context window: 8K tokens
  • Response time < 10 seconds recommended
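As a sketch, routing could be expressed by extending the llmConfig block used elsewhere in this FAQ. Note that baseUrl here is a hypothetical illustration of pointing at a self-hosted endpoint, not a documented parameter; support will confirm the actual configuration during setup:

```json
{
  "llmConfig": {
    "baseUrl": "https://llm.internal.example.com/v1",
    "model": "llama3-70b-instruct",
    "temperature": 0
  }
}
```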

AI Reliability

What if the agent teaches wrong information?

Multiple safeguards are in place:

  1. Instructions context: Load your curriculum, methodology, or source material directly into the agent's instructions:

    json
    {
      "description": "Spanish tutor for beginners",
      "context": "Teaching methodology: Direct Method. Grammar: Present tense only. Vocabulary: chapters 1-3 of textbook XYZ.",
      "rules": ["Only answer from provided context", "Never introduce advanced grammar"]
    }
  2. Explicit rules: Define "don't" constraints:

    json
    {
      "rules": [
        "Never reveal full answers",
        "Don't teach topics outside chapters 1-3",
        "Don't use profanity or slang"
      ]
    }
  3. Action log: Review every decision the agent made:

    bash
    GET /api/v1/boards/{board_uuid}/agents/{agent_id}/actions

    Each entry includes:

    • Timestamp
    • Trigger event
    • Agent's reasoning
    • Action taken
    • LLM tokens used
  4. Scope limits: Restrict agent to specific frames or zones:

    json
    {
      "scope": {
        "frames": ["frame-uuid-123"],
        "objectTypes": ["answer-card"]
      }
    }
  5. Budget limits: Cap token usage to prevent runaway behavior:

    json
    {
      "budgetTokens": 10000
    }

    Agent automatically pauses when budget is exhausted.

How do I prevent AI hallucinations?

Best practices:

  1. Provide source material in instructions.context:

    json
    {
      "context": "Textbook excerpts:\n- Chapter 1: Greetings\n- Chapter 2: Numbers\n..."
    }
  2. Use explicit rules:

    json
    {
      "rules": [
        "Only answer questions from the provided context",
        "If the answer is not in context, say 'I need to check with the teacher'",
        "Never make up information"
      ]
    }
  3. Narrow the scope:

    json
    {
      "scope": {
        "frames": ["exercise-frame-uuid"]
      },
      "triggers": [
        {
          "event": "object:created",
          "condition": "object.type === 'answer-card'"
        }
      ]
    }

    Agents with narrow scope are less likely to hallucinate.

  4. Review action logs regularly:

    bash
    GET /agents/{id}/actions?limit=50

    Look for patterns of incorrect responses.

  5. Use temperature=0 for factual tasks:

    json
    {
      "llmConfig": {
        "temperature": 0,
        "model": "gpt-4o"
      }
    }

Can the agent make mistakes that break my board?

No. Multiple safeguards prevent damage:

  1. Action validation: All actions are validated before execution:

    • Object schema validation
    • Position boundaries
    • User permissions
  2. Rate limiting:

    json
    {
      "maxActionsPerMinute": 10
    }

    Prevents runaway loops from flooding the board.

  3. Budget cap:

    json
    {
      "budgetTokens": 50000
    }

    The agent pauses when its budget is exhausted, so it can't continue indefinitely.

  4. Undo support: Users can undo agent actions in the board UI (standard Excalidraw undo).

  5. Pause anytime:

    bash
    POST /agents/{id}/pause

    Immediately stops all agent activity.

  6. Future: propose_change action (coming Q2 2025):

    json
    {
      "action": "propose_change",
      "params": {
        "description": "Add hint card",
        "requiresApproval": true
      }
    }

    Teacher approves before action executes.


Cost Control

Will the agent drain my budget overnight?

No. Multiple cost controls are in place:

json
{
  "maxActionsPerMinute": 10,
  "budgetTokens": 50000
}

What happens when limits are reached:

  1. Agent status changes to paused
  2. last_error field explains why: "Budget exhausted"
  3. Email notification sent to board owner (80% and 100% thresholds)
  4. Agent stops processing events

To resume:

  • Increase the budget: PATCH /agents/{id} with a new budgetTokens value
  • Or call the /agents/{id}/resume endpoint (resets to the default budget)

How do I estimate costs?

Rough formula:

  • Average action ≈ 500 tokens
  • 1,000 tokens ≈ $0.01 (varies by model)
  • budgetTokens: 50,000 ≈ $0.50 max per session
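The formula above can be sketched as a small helper. The per-action token count and per-token price are the rough assumptions stated here, not values from the API; adjust both for your model's actual pricing:

```javascript
// Rough cost estimator based on the formula above.
// Assumptions (not part of the API): ~500 tokens per action,
// ~$0.01 per 1,000 tokens.
const TOKENS_PER_ACTION = 500;
const USD_PER_1K_TOKENS = 0.01;

function estimateCost(actionsPerHour, hours) {
  const tokens = actionsPerHour * TOKENS_PER_ACTION * hours;
  // Round to whole cents to avoid floating-point noise.
  const usd = Math.round((tokens / 1000) * USD_PER_1K_TOKENS * 100) / 100;
  return { tokens, usd };
}

// An answer checker handling ~30 actions/hour over an 8-hour day:
console.log(estimateCost(30, 8)); // → { tokens: 120000, usd: 1.2 }
```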

Example scenarios:

| Use Case | Actions/Hour | Tokens/Hour | Cost/Hour | Daily Cost |
|---|---|---|---|---|
| Answer checker (10 students) | ~30 | ~15,000 | $0.15 | $1.20 |
| Tutor (50 students) | ~150 | ~75,000 | $0.75 | $6.00 |
| Timer (notifications only) | ~6 | ~100 | $0.001 | $0.01 |

Tips to reduce costs:

  1. Use cheaper models for simple tasks:

    json
    {
      "llmConfig": {
        "model": "gpt-4o-mini"
      }
    }

    GPT-4o-mini is 10x cheaper than GPT-4o.

  2. Reduce context size:

    json
    {
      "context": "Keep instructions concise"
    }

    Shorter context = fewer tokens.

  3. Use rule-based actions when possible (no LLM):

    json
    {
      "triggers": [
        {
          "event": "timer:tick",
          "action": "send_notification",
          "params": { "message": "5 minutes left!" }
        }
      ]
    }

    Rule-based actions are free (no LLM call).

  4. Set conservative rate limits:

    json
    {
      "maxActionsPerMinute": 5
    }

What happens when limits are reached?

Agent pauses automatically:

json
{
  "id": "agent-123",
  "status": "paused",
  "last_error": {
    "type": "budget_exceeded",
    "message": "Token budget exhausted (50,000/50,000)",
    "timestamp": "2025-11-28T10:30:00Z"
  }
}

Email notification sent:

  • At 80% budget: "Agent approaching budget limit"
  • At 100% budget: "Agent paused - budget exhausted"

To resume:

bash
# Option 1: Increase budget
PATCH /agents/{id}
{
  "budgetTokens": 100000
}

# Option 2: Resume with default budget
POST /agents/{id}/resume

Agent Conflicts

What if two agents contradict each other?

Priority system resolves conflicts:

json
{
  "priority": 8
}
  • Priority range: 1-10 (higher = more important)
  • Higher priority agent's action wins
  • Same priority: First-come-first-served

Example:

Agent A (Tutor, priority: 8) wants to add hint card
Agent B (Moderator, priority: 5) wants to delete card
→ Agent A wins (higher priority)
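The rule above can be sketched in a few lines. This is an illustrative model of the documented behavior (higher priority wins, ties go to the first arrival), not the actual server implementation:

```javascript
// Hypothetical sketch of the priority rule: higher priority wins;
// on a tie, the action that arrived first wins.
function resolveConflict(a, b) {
  if (a.priority !== b.priority) {
    return a.priority > b.priority ? a : b;
  }
  return a.arrivedAt <= b.arrivedAt ? a : b; // first-come-first-served
}

const tutor = { name: "Tutor", priority: 8, arrivedAt: 2 };
const moderator = { name: "Moderator", priority: 5, arrivedAt: 1 };
console.log(resolveConflict(tutor, moderator).name); // → "Tutor"
```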

Object locking (future):

  • When Agent A acts on object X, object is locked for 1 second
  • Agent B's action on object X waits or skips
  • Prevents simultaneous conflicting edits

Can agents create infinite loops?

Protected by:

  1. Self-event filtering:

    json
    {
      "allowSelfEvents": false
    }

    By default, agents ignore their own actions, which prevents feedback loops.

  2. Rate limiting:

    json
    {
      "maxActionsPerMinute": 10
    }

    Even if a loop occurs, it's capped at 10 actions/min.

  3. Budget cap:

    json
    {
      "budgetTokens": 50000
    }

    Loop drains budget → agent pauses.

  4. Action deduplication:

    • If agent tries same action twice within 5 seconds → blocked
    • Example: "create hint card" → "create hint card" → blocked
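The 5-second deduplication window can be sketched as follows. The names and the key format are illustrative only, not the actual implementation:

```javascript
// Hypothetical sketch of the 5-second action deduplication window.
const WINDOW_MS = 5000;
const lastSeen = new Map(); // action key → timestamp (ms)

function isDuplicate(agentId, action, now) {
  const key = agentId + ":" + JSON.stringify(action);
  const previous = lastSeen.get(key);
  lastSeen.set(key, now); // record this attempt either way
  return previous !== undefined && now - previous < WINDOW_MS;
}

const hint = { action: "create_object", type: "hint-card" };
console.log(isDuplicate("agent-1", hint, 0));    // → false (first attempt)
console.log(isDuplicate("agent-1", hint, 3000)); // → true  (within 5s, blocked)
console.log(isDuplicate("agent-1", hint, 9000)); // → false (window elapsed)
```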

How do I debug agent behavior?

Tools available:

  1. Action log:

    bash
    GET /agents/{id}/actions?limit=100

    Response:

    json
    [
      {
        "timestamp": "2025-11-28T10:30:00Z",
        "trigger": {
          "event": "object:created",
          "objectId": "obj-123"
        },
        "reasoning": "Student submitted answer. Checking correctness...",
        "decision": {
          "action": "create_object",
          "params": { "type": "hint-card", "content": "Check your verb conjugation" }
        },
        "result": "success",
        "tokensUsed": 487
      }
    ]
  2. Stats endpoint:

    bash
    GET /agents/{id}/stats

    Response:

    json
    {
      "totalActions": 245,
      "totalTokens": 123456,
      "averageTokensPerAction": 503,
      "actionsByType": {
        "create_object": 120,
        "send_notification": 80,
        "evaluate_answer": 45
      },
      "errorRate": 0.02
    }
  3. Real-time monitoring:

    javascript
    // Subscribe to agent events via WebSocket
    socket.on('agent:action', (event) => {
      console.log('Agent decision:', event.reasoning);
    });
  4. Test mode:

    json
    {
      "testMode": true
    }

    Agent simulates actions without executing (dry run).


Enterprise Concerns

Is this GDPR/SOC2 compliant?

GDPR compliance:

  • ✅ Data minimization: Agents only access necessary event data
  • ✅ Purpose limitation: Data used only for agent operation
  • ✅ Right to erasure: Delete agent = delete all action logs
  • ✅ Data portability: Export action logs via API
  • ✅ Consent: Agents created explicitly by board owner

Audit trail:

  • Every agent action logged with timestamp, reasoning, result
  • Logs retained according to your organization's data retention policy
  • Export logs: GET /agents/{id}/actions?export=csv

For SOC2 compliance (Enterprise plan):

  • Dedicated instance option (data isolation)
  • Custom data retention policies
  • Penetration testing reports available
  • Security questionnaire provided

Can I get audit logs?

Yes. Comprehensive logging is available:

bash
GET /agents/{id}/actions?from=2025-11-01&to=2025-11-30

Each log entry includes:

  • Timestamp (ISO 8601)
  • Trigger event (what happened)
  • Agent reasoning (why it decided to act)
  • Action taken (what it did)
  • Result (success/failure)
  • Tokens used
  • LLM model used
  • Board state snapshot (before/after)

Export formats:

  • JSON
  • CSV
  • XLSX (coming soon)

Retention:

  • Default: 180 days
  • Enterprise: Custom (up to 5 years)

What about uptime and SLA?

Service Level Agreements (by plan):

| Plan | Uptime SLA | Support Response | Compensation |
|---|---|---|---|
| Starter | Best effort | Community forum | None |
| Professional | 99.5% | Email (48h) | None |
| Business | 99.9% | Priority (24h) | 10% credit per 1% below SLA |
| Enterprise | 99.99% | Phone (4h) | Custom contract |

How uptime is calculated:

  • Excludes scheduled maintenance (announced 7 days in advance)
  • Measured per month
  • Status page: status.boardapi.io (coming soon)

For mission-critical use cases (Enterprise):

  • Dedicated infrastructure (isolated from multi-tenant)
  • Hot standby failover
  • 24/7 on-call engineering support

Getting Help

Where do I report issues?

Support channels:

  • GitHub Issues (bug reports and feature requests)
  • support@boardapi.io (technical and account questions)
  • sales@boardapi.io (plans and custom development)
Can I request new features?

Absolutely!

How to request:

  1. Search existing requests: GitHub Issues
  2. If not found, create new issue with:
    • Use case description
    • Expected behavior
    • Current workaround (if any)
  3. Community upvotes help prioritize

Prioritization:

  • Customer feedback (weighted by plan tier)
  • Strategic value (aligns with product vision)
  • Technical feasibility
  • Community upvotes

Enterprise customers:

  • Direct feature requests to account manager
  • Roadmap influence based on contract value
  • Custom feature development available (paid)

What if I need custom agent types?

We can help!

Professional plan:

  • Email support@boardapi.io with use case
  • We provide guidance on configuring existing personas
  • We may develop a new preset if it covers a common use case

Business/Enterprise plans:

  • Custom agent development included
  • Dedicated engineering support
  • Integration with your LMS/CRM
  • White-label agent UI

Example custom agents we've built:

  • "Discussion Facilitator" for workshop platforms
  • "Compliance Checker" for corporate training
  • "Progress Tracker" for language schools
  • "Quiz Generator" for test prep companies

Contact sales@boardapi.io to discuss your use case.


Next Steps