Chat API
The Chat API powers ainative-business’s conversational interface. Conversations are scoped to agent runtimes and optionally linked to projects. Assistant responses to user messages are streamed back as Server-Sent Events with real-time deltas, permission requests, and structured questions from the executing agent.
Quick Start
Create a conversation, send a message with SSE streaming, handle a permission request, and discover available models:
// 1. Create a conversation linked to a project
const conversation: Conversation = await fetch('/api/chat/conversations', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
runtimeId: 'claude-code',
projectId: 'proj-8f3a-4b2c',
title: 'Debug auth flow',
modelId: 'sonnet',
}),
}).then(r => r.json());
// → { id: "conv-d4e2-7b1a", runtimeId: "claude-code", status: "active", ... }
// 2. Send a message and stream the response via SSE
const res: Response = await fetch(`/api/chat/conversations/${conversation.id}/messages`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ content: 'Why is the login endpoint returning 403?' }),
});
const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';
let pendingPermission: string | null = null;
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // SSE lines can split across chunks; keep the trailing partial line buffered
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop()!;
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const event = JSON.parse(line.slice(6));
    switch (event.type) {
      case 'delta':
        process.stdout.write(event.content);
        break;
      case 'permission_request':
        // Agent wants to read a file; save the requestId to respond
        pendingPermission = event.requestId;
        console.log(`\nPermission: ${event.toolName}(${JSON.stringify(event.toolInput)})`);
        break;
      case 'done':
        console.log(`\nMessage ID: ${event.messageId}`);
        break;
    }
  }
}
// 3. If the agent requested permission, approve it
if (pendingPermission) {
await fetch(`/api/chat/conversations/${conversation.id}/respond`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
requestId: pendingPermission,
behavior: 'allow',
alwaysAllow: true,
}),
});
}
// 4. Discover available models for the model picker
const models: ChatModelOption[] = await fetch('/api/chat/models').then(r => r.json());
models.forEach(m => console.log(`${m.label} (${m.provider}) — ${m.costLabel}`));
Base URL
/api/chat
Endpoints
List Conversations
GET /api/chat/conversations
Retrieve conversations, optionally filtered by status and project and capped by a result limit. Results are ordered most recent first.
Query Parameters
| Param | Type | Req | Description |
|---|---|---|---|
| status | enum | — | Filter by conversation status: active or archived |
| projectId | string | — | Filter conversations by project UUID |
| limit | number | — | Maximum number of conversations to return |
Response 200 — Array of conversation objects
Conversation Object
| Field | Type | Req | Description |
|---|---|---|---|
| id | string (UUID) | * | Conversation identifier |
| projectId | string (UUID) | — | Associated project |
| title | string | — | Conversation title |
| runtimeId | enum | * | Agent runtime: claude-code or openai-codex-app-server |
| modelId | string | — | Model used for responses (e.g., haiku, sonnet, gpt-5.4) |
| status | enum | * | active or archived |
| createdAt | ISO 8601 | * | Creation timestamp |
| updatedAt | ISO 8601 | * | Last modification timestamp |
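The examples in this document annotate responses with a Conversation type. A sketch of that shape, derived from the table above (the type names themselves are ours, not part of the API):

```typescript
// Sketch of the Conversation object from the table above.
// Type names (RuntimeId, ConversationStatus, Conversation) are our own.
type RuntimeId = 'claude-code' | 'openai-codex-app-server';
type ConversationStatus = 'active' | 'archived';

interface Conversation {
  id: string;           // UUID
  projectId?: string;   // UUID of the associated project
  title?: string;
  runtimeId: RuntimeId;
  modelId?: string;     // e.g. 'haiku', 'sonnet'
  status: ConversationStatus;
  createdAt: string;    // ISO 8601
  updatedAt: string;    // ISO 8601
}
```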
Fetch recent active conversations for a project — useful for displaying a conversation sidebar:
// Fetch active conversations for a project
const conversations: Conversation[] = await fetch(
'/api/chat/conversations?status=active&projectId=proj-8f3a-4b2c&limit=20'
).then(r => r.json());
conversations.forEach(c => {
const age: number = Math.round((Date.now() - new Date(c.updatedAt).getTime()) / 3600000);
console.log(`${c.title || 'Untitled'} (${c.runtimeId}) — ${age}h ago`);
});
Example response:
[
{
"id": "conv-d4e2-7b1a",
"projectId": "proj-8f3a-4b2c",
"title": "Debug auth flow",
"runtimeId": "claude-code",
"modelId": "sonnet",
"status": "active",
"createdAt": "2026-04-03T10:00:00.000Z",
"updatedAt": "2026-04-03T10:45:00.000Z"
}
]
Create Conversation
POST /api/chat/conversations
Start a new conversation. Requires an agent runtime. Optionally link to a project (triggers an automatic environment scan) and select a model.
Request Body
| Field | Type | Req | Description |
|---|---|---|---|
| runtimeId | enum | * | Agent runtime: claude-code or openai-codex-app-server |
| projectId | string (UUID) | — | Project to associate with (triggers auto environment scan) |
| title | string | — | Conversation title |
| modelId | string | — | Model ID for responses |
Response 201 Created — The created conversation object
Errors: 400 — Missing or invalid runtimeId
Start a conversation linked to a project — the agent receives the project’s working directory and environment context automatically:
// Create a conversation with a specific model
const conversation: Conversation = await fetch('/api/chat/conversations', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
runtimeId: 'claude-code',
projectId: 'proj-8f3a-4b2c',
title: 'Debug auth flow',
modelId: 'sonnet',
}),
}).then(r => r.json());
console.log(conversation.id); // "conv-d4e2-7b1a"
console.log(conversation.status); // "active"
Example response:
{
"id": "conv-d4e2-7b1a",
"projectId": "proj-8f3a-4b2c",
"title": "Debug auth flow",
"runtimeId": "claude-code",
"modelId": "sonnet",
"status": "active",
"createdAt": "2026-04-03T10:00:00.000Z",
"updatedAt": "2026-04-03T10:00:00.000Z"
}
Get Conversation
GET /api/chat/conversations/{id}
Retrieve a single conversation with its message count.
Response 200 — Conversation object with messageCount field
Additional Fields
| Field | Type | Req | Description |
|---|---|---|---|
| messageCount | number | * | Total messages in the conversation |
Errors: 404 — Conversation not found
// Get conversation with message count
const conv: Conversation & { messageCount: number } = await fetch('/api/chat/conversations/conv-d4e2-7b1a')
.then(r => r.json());
console.log(`${conv.title}: ${conv.messageCount} messages`);
Update Conversation
PATCH /api/chat/conversations/{id}
Update the conversation's title, status, model, or runtime.
Request Body (all fields optional)
| Field | Type | Req | Description |
|---|---|---|---|
| title | string | — | Updated conversation title |
| status | enum | — | New status: active or archived |
| modelId | string | — | Change model for future messages |
| runtimeId | enum | — | Change agent runtime: claude-code or openai-codex-app-server |
Errors: 400 — Invalid status value, 404 — Not found
Archive a completed conversation or switch to a different model mid-conversation:
// Switch to a faster model for quick follow-up questions
await fetch('/api/chat/conversations/conv-d4e2-7b1a', {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ modelId: 'haiku' }),
});
Delete Conversation
DELETE /api/chat/conversations/{id}
Permanently delete a conversation and all its messages.
Response 204 No Content
Errors: 404 — Conversation not found
// Permanently delete a conversation and its messages
await fetch('/api/chat/conversations/conv-d4e2-7b1a', { method: 'DELETE' });
Get Messages
GET /api/chat/conversations/{id}/messages
Fetch message history for a conversation. Supports cursor-based pagination for reconnection scenarios.
Query Parameters
| Param | Type | Req | Description |
|---|---|---|---|
| after | string | — | Message ID cursor — return messages after this ID |
| limit | number | — | Maximum number of messages to return |
Response 200 — Array of message objects
Errors: 404 — Conversation not found
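The message objects themselves are not fully specified in this section; the pagination example below relies only on an id field. A minimal assumed shape (the role, content, and createdAt fields are hypothetical):

```typescript
// Assumed minimal Message shape. Only `id` is guaranteed by the
// pagination example; the other fields are hypothetical.
interface Message {
  id: string;
  role: 'user' | 'assistant';
  content: string;
  createdAt: string; // ISO 8601
}
```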
Fetch message history with pagination — use the after cursor to resume from where you left off:
// Fetch the last 50 messages
const messages: Message[] = await fetch(
'/api/chat/conversations/conv-d4e2-7b1a/messages?limit=50'
).then(r => r.json());
// Use cursor-based pagination to load more
if (messages.length === 50) {
const lastId: string = messages[messages.length - 1].id;
const older: Message[] = await fetch(
`/api/chat/conversations/conv-d4e2-7b1a/messages?after=${lastId}&limit=50`
).then(r => r.json());
}
Send Message (SSE Stream)
POST /api/chat/conversations/{id}/messages
Send a user message and receive the assistant response as a Server-Sent Events stream. Supports @-mentions to inject entity context. The stream emits deltas, status updates, permission requests, and a final done event.
Request Body (POST)
| Field | Type | Req | Description |
|---|---|---|---|
| content | string | * | User message text |
| mentions | object[] | — | Array of @-mention references to inject as context |
Response — text/event-stream with JSON event objects
Stream Event Types
| Event | Description |
|---|---|
| delta | Incremental text content from the assistant |
| status | Phase update (e.g., thinking, tool_use) with a human-readable message |
| permission_request | Agent is requesting permission to use a tool — includes requestId, toolName, toolInput |
| question | Agent is asking structured questions — includes requestId and questions array |
| screenshot | Screenshot attachment with documentId, thumbnailUrl, dimensions |
| done | Stream complete — includes final messageId and quickAccess entity links |
| error | Error message — stream terminates after this event |
Errors: 400 — Missing or invalid content, 404 — Conversation not found
Send a message and handle all SSE event types — the stream contains text deltas, status updates, permission requests, and a final completion event:
// Send a message and process the SSE stream
const res: Response = await fetch('/api/chat/conversations/conv-d4e2-7b1a/messages', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ content: 'What tasks are running right now?' }),
});
const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // SSE lines can split across chunks; keep the trailing partial line buffered
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop()!;
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const event = JSON.parse(line.slice(6));
    switch (event.type) {
      case 'delta':
        process.stdout.write(event.content);
        break;
      case 'status':
        console.log(`[status] ${event.phase}: ${event.message}`);
        break;
      case 'permission_request':
        console.log(`[permission] ${event.toolName}: ${JSON.stringify(event.toolInput)}`);
        // Respond via the /respond endpoint
        break;
      case 'done':
        console.log(`\nComplete: ${event.messageId}`);
        break;
      case 'error':
        console.error(`Error: ${event.message}`);
        break;
    }
  }
}
Example stream events:
data: {"type":"status","phase":"thinking","message":"Analyzing the request..."}
data: {"type":"delta","content":"Let me check the running tasks. "}
data: {"type":"status","phase":"tool_use","message":"Querying tasks API..."}
data: {"type":"delta","content":"There are 3 tasks currently running:\n\n1. **Analyze Q4 revenue trends** — started 5 minutes ago\n2. **Code review for auth module** — started 2 minutes ago\n3. **Generate test fixtures** — started 1 minute ago"}
data: {"type":"done","messageId":"msg-a8f3-4c2e","quickAccess":[{"entityType":"task","entityId":"task-9d4e-a1b2","label":"Analyze Q4 revenue trends"}]}
Respond to Permission Request
POST /api/chat/conversations/{id}/respond
Allow or deny a pending permission or question request from an active chat turn. Resolves the in-memory promise that blocks the agent SDK's tool callback. Optionally save an always-allow rule.
Request Body
| Field | Type | Req | Description |
|---|---|---|---|
| requestId | string | * | ID of the pending permission request |
| behavior | enum | * | allow or deny |
| messageId | string | — | Message ID to update status in the UI |
| updatedInput | object | — | Modified tool input (only applied on allow) |
| message | string | — | Message back to the agent (used on deny) |
| alwaysAllow | boolean | — | Persist as a permanent permission rule |
| permissionPattern | string | — | Pattern for the always-allow rule |
| toolName | string | — | Tool name for auto-building the permission pattern |
| toolInput | object | — | Tool input for auto-building the permission pattern |
Response 200 — { "ok": true, "stale": false }
The stale field is true if the in-memory request had already expired (timeout, HMR restart). The DB and UI are still updated regardless.
Errors: 400 — Missing requestId or behavior, 500 — Failed to resolve
Allow a tool use request and save it as a permanent rule so the agent won’t ask again:
// Approve a permission request and save a permanent rule
await fetch('/api/chat/conversations/conv-d4e2-7b1a/respond', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
requestId: 'req-b7c3-2e4f',
behavior: 'allow',
alwaysAllow: true,
toolName: 'Read',
toolInput: { file_path: '/src/auth/login.ts' },
}),
});
// Deny a permission with an explanation
await fetch('/api/chat/conversations/conv-d4e2-7b1a/respond', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
requestId: 'req-c8d4-3f5a',
behavior: 'deny',
message: 'Do not modify production config files',
}),
});
Activate Skill
POST /api/chat/conversations/{id}/skills/activate
Activate a skill for a conversation. By default uses replace mode, which swaps out any currently active skill; add mode stacks the new skill alongside active ones. Returns a conflict object when replacing would cause tool-name collisions and force is not set.
Request Body
| Field | Type | Req | Description |
|---|---|---|---|
| skillId | string | * | ID of the skill to activate |
| mode | enum | — | replace (default) — swap current skill, or add — stack alongside existing skills |
| force | boolean | — | Activate even when tool-name conflicts exist (default false) |
Success Response (200)
| Field | Type | Req | Description |
|---|---|---|---|
| activatedSkillId | string | * | The skill that is now active |
| activeSkillIds | string[] | * | Full list of currently active skill IDs |
| skillName | string | * | Display name of the activated skill |
| note | string | — | Optional informational note |
Conflict Response (200 — requiresConfirmation)
| Field | Type | Req | Description |
|---|---|---|---|
| requiresConfirmation | boolean | * | Always true when conflicts are returned |
| conflicts | Array | * | List of conflicting tool names |
| hint | string | — | Suggested resolution message |
Activate a skill in replace mode — if a skill is already active it will be swapped out:
// Activate a skill for a conversation
const result = await fetch('/api/chat/conversations/conv-d4e2-7b1a/skills/activate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ skillId: 'data-analyst', mode: 'replace' }),
}).then((r) => r.json());
if (result.requiresConfirmation) {
console.log('Conflicts detected:', result.conflicts);
// Re-send with force: true to override
} else {
console.log(`Active skill: ${result.skillName}`);
console.log(`Active skill IDs: ${result.activeSkillIds.join(', ')}`);
}
Example success response:
{
"activatedSkillId": "data-analyst",
"activeSkillIds": ["data-analyst"],
"skillName": "Data Analyst"
}
Errors: 400 — Validation failure or logic error (e.g. unknown skill), 404 — Conversation or skill not found
Deactivate Skill
POST /api/chat/conversations/{id}/skills/deactivate
Deactivate all skills for a conversation. Clears both the active skill and the full active-skill list. Idempotent: safe to call when no skill is active.
Response 200 — { "previousSkillId": string | null }
The previousSkillId field returns the skill that was active before deactivation, or null if none was active.
Errors: 404 — Conversation not found
// Deactivate all skills for a conversation
const result: { previousSkillId: string | null } = await fetch(
'/api/chat/conversations/conv-d4e2-7b1a/skills/deactivate',
{ method: 'POST' }
).then((r) => r.json());
if (result.previousSkillId) {
console.log(`Deactivated: ${result.previousSkillId}`);
} else {
console.log('No skill was active');
}
Export Conversation
POST /api/chat/export
Export a conversation as a Markdown document. The document is saved to disk and registered in the documents table so it can be referenced in future messages or downloaded. Returns the new document ID and filename.
Request Body
| Field | Type | Req | Description |
|---|---|---|---|
| title | string | * | Document title (1-200 chars) |
| markdown | string | * | Full conversation content in Markdown format |
| conversationId | string or null | — | Conversation to associate with the exported document |
Response 201 Created — { "id": string, "filename": string }
Errors: 400 — Missing or invalid title or markdown
Export a rendered conversation to a reusable document — the returned id can be attached to future tasks or messages as context:
// Export the current conversation as a Markdown document
const result: { id: string; filename: string } = await fetch('/api/chat/export', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
title: 'Auth Debug Session — 2026-04-03',
markdown: '# Auth Debug Session\n\nWe investigated the 403 error on /login...\n\n## Root Cause\n...',
conversationId: 'conv-d4e2-7b1a',
}),
}).then((r) => r.json());
console.log(`Document ID: ${result.id}`);
console.log(`Filename: ${result.filename}`);
Example response:
{
"id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"filename": "1743674400000-Auth_Debug_Session___2026-04-03.md"
}
Search Files
GET /api/chat/files/search
Search files in the active project's working directory for @-file mention autocomplete. Results are filtered using git ls-files to respect .gitignore. The working directory is resolved server-side from the projectId — clients never supply a path directly.
Query Parameters
| Param | Type | Req | Description |
|---|---|---|---|
| q | string | — | File name or path fragment to search for. Empty returns recent files. |
| projectId | string | — | Project ID — resolves the search root to the project working directory |
| limit | number | — | Maximum results to return (1-50, default 20) |
Response 200 — { "results": FileResult[] }
FileResult
| Field | Type | Req | Description |
|---|---|---|---|
| path | string | * | File path relative to the working directory |
| name | string | * | File name (basename) |
Search for files to power @-file mention autocomplete in the chat input:
// Power @-file mention autocomplete
const { results }: { results: Array<{ path: string; name: string }> } = await fetch(
'/api/chat/files/search?q=auth&projectId=proj-8f3a-4b2c&limit=10'
).then((r) => r.json());
results.forEach((f) => console.log(`@${f.path}`));
Example response:
{
"results": [
{ "path": "src/auth/login.ts", "name": "login.ts" },
{ "path": "src/auth/middleware.ts", "name": "middleware.ts" },
{ "path": "tests/auth.test.ts", "name": "auth.test.ts" }
]
}
Search Entities
GET /api/chat/entities/search
Search across all entity types (projects, tasks, workflows, documents, schedules, tables, profiles) for @-mention autocomplete. Returns results from all types in parallel, capped per type.
Query Parameters
| Param | Type | Req | Description |
|---|---|---|---|
| q | string | — | Search query — LIKE match against entity names. Empty returns recent entities. |
| limit | number | — | Maximum total results (default 20, max 30) |
Response 200 — { "results": EntityResult[] }
EntityResult
| Field | Type | Req | Description |
|---|---|---|---|
| entityType | enum | * | project, task, workflow, document, schedule, table, or profile |
| entityId | string | * | Entity identifier |
| label | string | * | Display name |
| status | string | — | Entity status or domain (for profiles) |
| description | string | — | Truncated description (max 120 chars) |
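The EntityResult rows can be modeled as the following sketch (field names from the table above; the type names mirror the ones used in the examples):

```typescript
// EntityResult as described in the table above.
type EntityType =
  | 'project' | 'task' | 'workflow' | 'document'
  | 'schedule' | 'table' | 'profile';

interface EntityResult {
  entityType: EntityType;
  entityId: string;
  label: string;
  status?: string;       // entity status, or domain for profiles
  description?: string;  // truncated to 120 chars
}
```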
Search entities for @-mention autocomplete — returns matches from all entity types:
// Power @-mention autocomplete with entity search
const { results }: { results: EntityResult[] } = await fetch('/api/chat/entities/search?q=revenue&limit=10')
.then(r => r.json());
// Group results by entity type for the autocomplete dropdown
const grouped = Object.groupBy(results, r => r.entityType);
for (const [type, items] of Object.entries(grouped)) {
console.log(`${type}:`);
items.forEach(item => console.log(` @${item.label} (${item.entityId})`));
}
Example response:
{
"results": [
{
"entityType": "task",
"entityId": "task-9d4e-a1b2",
"label": "Analyze Q4 revenue trends",
"status": "completed",
"description": "Review revenue data and produce a summary report with charts"
},
{
"entityType": "document",
"entityId": "doc-revenue-q4-csv",
"label": "revenue-q4-2025.csv",
"status": "processed"
}
]
]
List Models
GET /api/chat/models
Return available chat models discovered from configured SDKs. Falls back to a hardcoded catalog if SDKs are unreachable.
Response 200 — Array of model objects
ChatModelOption
| Field | Type | Req | Description |
|---|---|---|---|
| id | string | * | Model identifier (e.g., haiku, sonnet, gpt-5.4) |
| label | string | * | Display name |
| provider | enum | * | anthropic, openai, or ollama |
| tier | string | * | Performance tier: Fast, Balanced, or Best |
| costLabel | string | * | Relative cost: $, $$, $$$, or Free |
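The ChatModelOption type used in the examples maps onto this sketch of the table above:

```typescript
// ChatModelOption as described in the table above.
interface ChatModelOption {
  id: string;        // e.g. 'haiku', 'sonnet', 'gpt-5.4'
  label: string;
  provider: 'anthropic' | 'openai' | 'ollama';
  tier: 'Fast' | 'Balanced' | 'Best';
  costLabel: '$' | '$$' | '$$$' | 'Free';
}
```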
Fetch available models to populate a model selector — shows capabilities and pricing tier:
// Build a model picker grouped by provider
const models: ChatModelOption[] = await fetch('/api/chat/models').then(r => r.json());
const byProvider = Object.groupBy(models, m => m.provider);
for (const [provider, providerModels] of Object.entries(byProvider)) {
console.log(`${provider}:`);
providerModels.forEach(m => {
console.log(` ${m.label} [${m.tier}] ${m.costLabel}`);
});
}
Example response:
[
{ "id": "haiku", "label": "Haiku", "provider": "anthropic", "tier": "Fast", "costLabel": "$" },
{ "id": "sonnet", "label": "Sonnet", "provider": "anthropic", "tier": "Balanced", "costLabel": "$$" },
{ "id": "opus", "label": "Opus", "provider": "anthropic", "tier": "Best", "costLabel": "$$$" },
{ "id": "gpt-5.4-mini", "label": "GPT-5.4 Mini", "provider": "openai", "tier": "Fast", "costLabel": "$" },
{ "id": "gpt-5.3-codex", "label": "Codex 5.3", "provider": "openai", "tier": "Balanced", "costLabel": "$$" },
{ "id": "gpt-5.4", "label": "GPT-5.4", "provider": "openai", "tier": "Best", "costLabel": "$$$" }
]
Suggested Prompts
GET /api/chat/suggested-prompts
Return context-aware prompt categories with expandable sub-prompts for the chat input.
Response 200 — Array of prompt categories
PromptCategory
| Field | Type | Req | Description |
|---|---|---|---|
| id | string | * | Category identifier |
| label | string | * | Category display name |
| icon | string | * | Lucide icon name |
| prompts | SuggestedPrompt[] | * | Array of prompts in this category |
SuggestedPrompt
| Field | Type | Req | Description |
|---|---|---|---|
| label | string | * | Short display text (~40 chars) |
| prompt | string | * | Full detailed prompt text |
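The two tables above combine into the PromptCategory type the example below relies on (a sketch; the nesting follows the prompts field):

```typescript
// PromptCategory / SuggestedPrompt as described in the tables above.
interface SuggestedPrompt {
  label: string;   // short display text (~40 chars)
  prompt: string;  // full detailed prompt text
}

interface PromptCategory {
  id: string;
  label: string;
  icon: string;    // Lucide icon name
  prompts: SuggestedPrompt[];
}
```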
Fetch suggested prompts to display quick-action buttons in the chat input area:
// Populate the chat input with suggested prompt categories
const categories: PromptCategory[] = await fetch('/api/chat/suggested-prompts')
.then(r => r.json());
categories.forEach(cat => {
console.log(`${cat.label} (${cat.icon}):`);
cat.prompts.forEach(p => console.log(` ${p.label}`));
});
Stream Event Reference
The Send Message endpoint emits these SSE event types:
| Event Type | Key Fields | Description |
|---|---|---|
| delta | content | Incremental assistant text |
| status | phase, message | Phase transition (thinking, tool_use, etc.) |
| permission_request | requestId, toolName, toolInput | Agent needs tool approval |
| question | requestId, questions[] | Agent asking structured questions |
| screenshot | documentId, thumbnailUrl, width, height | Screenshot attachment |
| done | messageId, quickAccess[] | Stream complete with entity links |
| error | message | Terminal error — stream closes |
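Taken together, the stream can be typed as a discriminated union with a small per-line parser. This is a sketch based on the table above; the union and helper names are ours, and the toolInput/questions payloads are typed loosely because their exact shapes are not specified here:

```typescript
// Discriminated union over the SSE event types listed above,
// plus a parser for a single `data: ...` line. Names are our own.
type ChatStreamEvent =
  | { type: 'delta'; content: string }
  | { type: 'status'; phase: string; message: string }
  | { type: 'permission_request'; requestId: string; toolName: string; toolInput: Record<string, unknown> }
  | { type: 'question'; requestId: string; questions: unknown[] }
  | { type: 'screenshot'; documentId: string; thumbnailUrl: string; width: number; height: number }
  | { type: 'done'; messageId: string; quickAccess: Array<{ entityType: string; entityId: string; label: string }> }
  | { type: 'error'; message: string };

// Parse one SSE line; returns null for comments, blank lines, etc.
function parseSseLine(line: string): ChatStreamEvent | null {
  if (!line.startsWith('data: ')) return null;
  return JSON.parse(line.slice(6)) as ChatStreamEvent;
}
```

Switching on the returned event's type field then narrows the payload, so handlers for each event get the correct fields without casts.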
Default Models
| Provider | Model ID | Label | Tier | Cost |
|---|---|---|---|---|
| Anthropic | haiku | Haiku | Fast | $ |
| Anthropic | sonnet | Sonnet | Balanced | $$ |
| Anthropic | opus | Opus | Best | $$$ |
| OpenAI | gpt-5.4-mini | GPT-5.4 Mini | Fast | $ |
| OpenAI | gpt-5.3-codex | Codex 5.3 | Balanced | $$ |
| OpenAI | gpt-5.4 | GPT-5.4 | Best | $$$ |