AI Agents FAQ & Trust
Answers to critical questions that enterprise customers ask before adopting AI agents.
Data Privacy & Security
Does the agent read everything on my board?
No. Agents only see events they're subscribed to via triggers.
For example, if you configure a trigger for `object:created`, the agent only sees new objects being created. It doesn't automatically scan existing board content unless you explicitly enable `scanExisting: true` when creating the agent.
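For illustration, a trigger-only configuration might look like the sketch below. Field names mirror the examples elsewhere in this FAQ; treat it as a sketch rather than an exact create-agent schema.
```javascript
// Sketch of a trigger-only agent config (illustrative; field names follow
// the examples in this FAQ rather than a definitive schema).
const agentConfig = {
  description: "Answer checker",
  triggers: [
    { event: "object:created", condition: "object.type === 'answer-card'" }
  ],
  scanExisting: false // existing board content stays invisible unless this is true
};
```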
What data does an agent receive?
- Only the event payload (e.g., new object data, participant joined)
- Board context you explicitly provide in `instructions.context`
- Agent's own action history (for continuity)
What data does it NOT receive?
- Historical objects (unless scanning enabled)
- Private frames outside its scope
- Other agents' memories
- Organization data
Does my data go to OpenAI or external LLMs?
Yes, but with safeguards. When an agent uses AI-powered actions (like evaluate_answer or generate_hint), the request is sent to an LLM through our proxy service (gptproxy.ru).
What's sent:
- Agent instructions (persona, goals, rules)
- Event context (the specific object/event being processed)
- Board context you provided
What's NOT sent:
- Your API keys or credentials
- Unrelated board content
- Personal user data (names, emails)
Data retention policy:
- OpenAI: Zero data retention (we use API mode, not training mode)
- Our proxy: Logs kept for 30 days for debugging only
- Your agent history: Stored in your database indefinitely (you control retention)
For Enterprise customers:
- Option to use your own OpenAI API key (BYOK)
- Option to use Azure OpenAI (private cloud)
- Option to disable AI features entirely (rule-based agents only)
Can I use local or private LLMs?
Yes, for Enterprise plans.
We support:
- Azure OpenAI (private cloud deployment)
- Self-hosted LLMs via OpenAI-compatible API (e.g., Ollama, LM Studio)
- Custom LLM endpoints (requires configuration)
Setup:
- Contact support for Enterprise plan
- Provide your LLM endpoint URL and authentication
- We configure your organization to route agent requests there (see the sketch after the limitations below)
- Test with a sample agent
Limitations:
- LLM must support OpenAI-compatible chat completions API
- Minimum context window: 8K tokens
- Response time < 10 seconds recommended
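As a rough illustration of the self-hosted option, the routing might be described with a configuration like the sketch below. The `endpoint` and `apiKey` fields are hypothetical, not a documented schema; in practice our team sets up the routing for you during onboarding.
```javascript
// Hypothetical sketch of a self-hosted LLM configuration (Enterprise).
// "endpoint" and "apiKey" are illustrative field names, not a documented schema;
// the actual routing is configured by our team during setup.
const llmConfig = {
  model: "llama3",                                                  // any model your endpoint serves
  endpoint: "https://llm.internal.example.com/v1/chat/completions", // OpenAI-compatible API
  apiKey: process.env.PRIVATE_LLM_API_KEY                           // keep credentials out of source
};
```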
AI Reliability
What if the agent teaches wrong information?
Multiple safeguards are in place:
Instructions context: Load your curriculum, methodology, or source material directly into the agent's instructions:
json{ "description": "Spanish tutor for beginners", "context": "Teaching methodology: Direct Method. Grammar: Present tense only. Vocabulary: chapters 1-3 of textbook XYZ.", "rules": ["Only answer from provided context", "Never introduce advanced grammar"] }Explicit rules: Define "don't" constraints:
json{ "rules": [ "Never reveal full answers", "Don't teach topics outside chapters 1-3", "Don't use profanity or slang" ] }Action log: Review every decision the agent made:
```bash
GET /api/v1/boards/{board_uuid}/agents/{agent_id}/actions
```
Each entry includes:
- Timestamp
- Trigger event
- Agent's reasoning
- Action taken
- LLM tokens used
Scope limits: Restrict agent to specific frames or zones:
json{ "scope": { "frames": ["frame-uuid-123"], "objectTypes": ["answer-card"] } }Budget limits: Cap token usage to prevent runaway behavior:
json{ "budgetTokens": 10000 }Agent automatically pauses when budget is exhausted.
How do I prevent AI hallucinations?
Best practices:
Provide source material in `instructions.context`:
```json
{ "context": "Textbook excerpts:\n- Chapter 1: Greetings\n- Chapter 2: Numbers\n..." }
```
Use explicit rules:
json{ "rules": [ "Only answer questions from the provided context", "If the answer is not in context, say 'I need to check with the teacher'", "Never make up information" ] }Narrow the scope:
json{ "scope": { "frames": ["exercise-frame-uuid"] }, "triggers": [ { "event": "object:created", "condition": "object.type === 'answer-card'" } ] }Agents with narrow scope are less likely to hallucinate.
Review action logs regularly:
```bash
GET /agents/{id}/actions?limit=50
```
Look for patterns of incorrect responses.
Use temperature=0 for factual tasks:
json{ "llmConfig": { "temperature": 0, "model": "gpt-4o" } }
Can the agent make mistakes that break my board?
No, multiple safeguards prevent damage:
Action validation: All actions are validated before execution:
- Object schema validation
- Position boundaries
- User permissions
Rate limiting:
json{ "maxActionsPerMinute": 10 }Prevents runaway loops from flooding the board.
Budget cap:
json{ "budgetTokens": 50000 }Agent pauses when budget exhausted, can't continue indefinitely.
Undo support: Users can undo agent actions in the board UI (standard Excalidraw undo).
Pause anytime:
```bash
POST /agents/{id}/pause
```
Immediately stops all agent activity.
Future: propose_change action (coming Q2 2025):
json{ "action": "propose_change", "params": { "description": "Add hint card", "requiresApproval": true } }Teacher approves before action executes.
Cost Control
Will the agent drain my budget overnight?
No, multiple cost controls:
```json
{
  "maxActionsPerMinute": 10,
  "budgetTokens": 50000
}
```
What happens when limits are reached:
- Agent status changes to `paused`
- `last_error` field explains why: "Budget exhausted"
- Email notification sent to board owner (80% and 100% thresholds)
- Agent stops processing events
To resume:
- Increase budget: `PATCH /agents/{id}` with new `budgetTokens`
- Or call the `/agents/{id}/resume` endpoint (resets to default budget)
How do I estimate costs?
Rough formula:
```text
Average action ≈ 500 tokens
1,000 tokens ≈ $0.01 (varies by model)
budgetTokens: 50,000 ≈ $0.50 max per session
```
Example scenarios (a helper that automates this arithmetic follows the table):
| Use Case | Actions/Hour | Tokens/Hour | Cost/Hour | Daily Cost |
|---|---|---|---|---|
| Answer checker (10 students) | ~30 | ~15,000 | $0.15 | $1.20 |
| Tutor (50 students) | ~150 | ~75,000 | $0.75 | $6.00 |
| Timer (notifications only) | ~6 | ~100 | $0.001 | $0.01 |
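If it helps, the same arithmetic can be wrapped in a small helper; the per-action and per-1,000-token figures are the rough assumptions above, not guarantees.
```javascript
// Back-of-the-envelope cost estimate using the rough formula above.
// Assumes ~500 tokens per action and ~$0.01 per 1,000 tokens (varies by model).
function estimateHourlyCost(actionsPerHour, tokensPerAction = 500, usdPer1kTokens = 0.01) {
  const tokensPerHour = actionsPerHour * tokensPerAction;
  return (tokensPerHour / 1000) * usdPer1kTokens;
}

console.log(estimateHourlyCost(30));  // ≈ $0.15/hour (answer checker scenario)
console.log(estimateHourlyCost(150)); // ≈ $0.75/hour (tutor scenario)
```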
Tips to reduce costs:
Use cheaper models for simple tasks:
json{ "llmConfig": { "model": "gpt-4o-mini" } }GPT-4o-mini is 10x cheaper than GPT-4o.
Reduce context size:
json{ "context": "Keep instructions concise" }Shorter context = fewer tokens.
Use rule-based actions when possible (no LLM):
json{ "triggers": [ { "event": "timer:tick", "action": "send_notification", "params": { "message": "5 minutes left!" } } ] }Rule-based actions are free (no LLM call).
Set conservative rate limits:
json{ "maxActionsPerMinute": 5 }
What happens when limits are reached?
Agent pauses automatically:
```json
{
  "id": "agent-123",
  "status": "paused",
  "last_error": {
    "type": "budget_exceeded",
    "message": "Token budget exhausted (50,000/50,000)",
    "timestamp": "2025-11-28T10:30:00Z"
  }
}
```
Email notification sent:
- At 80% budget: "Agent approaching budget limit"
- At 100% budget: "Agent paused - budget exhausted"
To resume:
```bash
# Option 1: Increase budget
PATCH /agents/{id}
{
  "budgetTokens": 100000
}

# Option 2: Resume with default budget
POST /agents/{id}/resume
```
Agent Conflicts
What if two agents contradict each other?
Priority system resolves conflicts:
```json
{ "priority": 8 }
```
- Priority range: 1-10 (higher = more important)
- Higher priority agent's action wins
- Same priority: First-come-first-served
Example:
```text
Agent A (Tutor, priority: 8) wants to add hint card
Agent B (Moderator, priority: 5) wants to delete card
→ Agent A wins (higher priority)
```
Object locking (future):
- When Agent A acts on object X, object is locked for 1 second
- Agent B's action on object X waits or skips
- Prevents simultaneous conflicting edits
Can agents create infinite loops?
Protected by:
Self-event filtering:
json{ "allowSelfEvents": false }Default: Agents ignore their own actions. Prevents feedback loops.
Rate limiting:
json{ "maxActionsPerMinute": 10 }Even if loop occurs, capped at 10 actions/min.
Budget cap:
json{ "budgetTokens": 50000 }Loop drains budget → agent pauses.
Action deduplication:
- If agent tries same action twice within 5 seconds → blocked
- Example: "create hint card" → "create hint card" → blocked
How do I debug agent behavior?
Tools available:
Action log:
```bash
GET /agents/{id}/actions?limit=100
```
Response:
json[ { "timestamp": "2025-11-28T10:30:00Z", "trigger": { "event": "object:created", "objectId": "obj-123" }, "reasoning": "Student submitted answer. Checking correctness...", "decision": { "action": "create_object", "params": { "type": "hint-card", "content": "Check your verb conjugation" } }, "result": "success", "tokensUsed": 487 } ]Stats endpoint:
```bash
GET /agents/{id}/stats
```
Response:
json{ "totalActions": 245, "totalTokens": 123456, "averageTokensPerAction": 503, "actionsByType": { "create_object": 120, "send_notification": 80, "evaluate_answer": 45 }, "errorRate": 0.02 }Real-time monitoring:
```javascript
// Subscribe to agent events via WebSocket
socket.on('agent:action', (event) => {
  console.log('Agent decision:', event.reasoning);
});
```
Test mode:
json{ "testMode": true }Agent simulates actions without executing (dry run).
Enterprise Concerns
Is this GDPR/SOC2 compliant?
GDPR compliance:
- ✅ Data minimization: Agents only access necessary event data
- ✅ Purpose limitation: Data used only for agent operation
- ✅ Right to erasure: Delete agent = delete all action logs
- ✅ Data portability: Export action logs via API
- ✅ Consent: Agents created explicitly by board owner
Audit trail:
- Every agent action logged with timestamp, reasoning, result
- Logs retained according to your organization's data retention policy
- Export logs: `GET /agents/{id}/actions?export=csv`
For SOC2 compliance (Enterprise plan):
- Dedicated instance option (data isolation)
- Custom data retention policies
- Penetration testing reports available
- Security questionnaire provided
Can I get audit logs?
Yes, comprehensive logging:
```bash
GET /agents/{id}/actions?from=2025-11-01&to=2025-11-30
```
Each log entry includes:
- Timestamp (ISO 8601)
- Trigger event (what happened)
- Agent reasoning (why it decided to act)
- Action taken (what it did)
- Result (success/failure)
- Tokens used
- LLM model used
- Board state snapshot (before/after)
Export formats:
- JSON
- CSV
- XLSX (coming soon)
Retention:
- Default: 180 days
- Enterprise: Custom (up to 5 years)
What about uptime and SLA?
Service Level Agreements (by plan):
| Plan | Uptime SLA | Support Response | Compensation |
|---|---|---|---|
| Starter | Best effort | Community forum | None |
| Professional | 99.5% | Email (48h) | None |
| Business | 99.9% | Priority (24h) | 10% credit per 1% below SLA |
| Enterprise | 99.99% | Phone (4h) | Custom contract |
How uptime is calculated:
- Excludes scheduled maintenance (announced 7 days in advance)
- Measured per month
- Status page: status.boardapi.io (coming soon)
For mission-critical use cases (Enterprise):
- Dedicated infrastructure (isolated from multi-tenant)
- Hot standby failover
- 24/7 on-call engineering support
Getting Help
Where do I report issues?
Support channels:
GitHub Issues: github.com/boardapi/feedback
- Bug reports
- Feature requests
- Public discussions
Email support:
- Professional: support@boardapi.io (48h response)
- Business: priority@boardapi.io (24h response)
- Enterprise: dedicated account manager
Community Discord: discord.gg/boardapi (coming soon)
- Real-time chat
- Community help
- Beta feature discussions
Can I request new features?
Absolutely!
How to request:
- Search existing requests: GitHub Issues
- If not found, create new issue with:
- Use case description
- Expected behavior
- Current workaround (if any)
- Community upvotes help prioritize
Prioritization:
- Customer feedback (weighted by plan tier)
- Strategic value (aligns with product vision)
- Technical feasibility
- Community upvotes
Enterprise customers:
- Direct feature requests to account manager
- Roadmap influence based on contract value
- Custom feature development available (paid)
What if I need custom agent types?
We can help!
Professional plan:
- Email support@boardapi.io with use case
- We provide guidance on configuring existing personas
- May develop new preset if common use case
Business/Enterprise plans:
- Custom agent development included
- Dedicated engineering support
- Integration with your LMS/CRM
- White-label agent UI
Example custom agents we've built:
- "Discussion Facilitator" for workshop platforms
- "Compliance Checker" for corporate training
- "Progress Tracker" for language schools
- "Quiz Generator" for test prep companies
Contact sales@boardapi.io to discuss your use case.
Next Steps
- Getting Started - Create your first agent
- Agent Presets - Ready-to-use agent templates
- API Reference - Complete API documentation
- Best Practices - Tips for effective agents