Documentation

Everything you need to integrate FullAI into your application.

Quick Start

1. Get your API Key

Sign up at fullai.com/signup and create an API key from your dashboard.

2. Make your first request

Using curl:

curl https://fullai.com/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "fullai-1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

3. Use with OpenAI SDK

FullAI is OpenAI-compatible: point any OpenAI SDK at FullAI's base URL and use your FullAI API key:

from openai import OpenAI

client = OpenAI(
    base_url="https://fullai.com/api/v1",
    api_key="sk-full_your_key_here"
)

response = client.chat.completions.create(
    model="fullai-1",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

API Reference

Base URL

https://fullai.com/api/v1

Authentication

Include your API key in the Authorization header:

Authorization: Bearer sk-full_your_key_here
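If you prefer not to pull in an SDK, the header can be attached with Python's standard library alone. A minimal sketch (the key below is a placeholder, and the helper names are ours):

```python
import json
import urllib.request

API_KEY = "sk-full_your_key_here"  # placeholder -- substitute your real key

def auth_headers(api_key: str) -> dict:
    """Build the headers FullAI expects on every request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def chat(messages: list, model: str = "fullai-1") -> dict:
    """POST a chat completion request (requires network access)."""
    req = urllib.request.Request(
        "https://fullai.com/api/v1/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers=auth_headers(API_KEY),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A missing or malformed header is rejected with a 401 (see Error Handling below).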

POST /chat/completions

Create a chat completion.

Request Body

{
  "model": "fullai-1",           // Required: Model ID
  "messages": [                   // Required: Array of messages
    {
      "role": "system",           // "system", "user", or "assistant"
      "content": "You are helpful"
    },
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "temperature": 0.7,             // Optional: 0-2, default 0.7
  "max_tokens": 4096,             // Optional: Max tokens to generate
  "stream": false,                // Optional: Enable streaming
  "top_p": 1,                     // Optional: Nucleus sampling
  "stop": ["\n"]                  // Optional: Stop sequences
}
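A helper that assembles this body, filling in the documented defaults and omitting unset optionals, might look like the following sketch (the helper name is ours, not part of any SDK):

```python
def build_request_body(
    messages,
    model="fullai-1",
    temperature=0.7,   # documented default
    max_tokens=None,
    stream=False,
    top_p=1,
    stop=None,
):
    """Assemble a /chat/completions request body, leaving out unset optionals."""
    body = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "stream": stream,
        "top_p": top_p,
    }
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if stop is not None:
        body["stop"] = stop
    return body

body = build_request_body([{"role": "user", "content": "Hello!"}], stop=["\n"])
```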

Response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699000000,
  "model": "fullai-1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 15,
    "total_tokens": 25
  }
}
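Reading the assistant's reply and the token usage out of that structure is a matter of indexing; a short sketch using the sample response above:

```python
# Sample response body, as documented above
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1699000000,
    "model": "fullai-1",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant",
                        "content": "Hello! How can I help you today?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 10, "completion_tokens": 15, "total_tokens": 25},
}

reply = response["choices"][0]["message"]["content"]
finish = response["choices"][0]["finish_reason"]   # "stop", or why generation ended
used = response["usage"]["total_tokens"]           # counts toward your daily quota
```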
GET /models

List available models.

Response

{
  "object": "list",
  "data": [
    {"id": "fullai-1", "object": "model", "owned_by": "fullai"},
    {"id": "fullai-fast", "object": "model", "owned_by": "fullai"},
    {"id": "fullai-large", "object": "model", "owned_by": "fullai"}
  ]
}
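The model IDs can be pulled straight out of this list response; a sketch using the sample payload above:

```python
# Sample /models response, as documented above
models_response = {
    "object": "list",
    "data": [
        {"id": "fullai-1", "object": "model", "owned_by": "fullai"},
        {"id": "fullai-fast", "object": "model", "owned_by": "fullai"},
        {"id": "fullai-large", "object": "model", "owned_by": "fullai"},
    ],
}

model_ids = [m["id"] for m in models_response["data"]]
```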

Models

fullai-1

Flagship

Our most capable model. Best for complex reasoning, coding, analysis, and creative tasks.

Context Window: 128K tokens
Max Output: 8K tokens

fullai-fast

Fast

Optimized for speed. Perfect for real-time applications, chatbots, and high-volume tasks.

Context Window: 128K tokens
Max Output: 8K tokens

fullai-large

Latest

Our newest and largest model. Pushing the boundaries of AI capabilities.

Context Window: 128K tokens
Max Output: 8K tokens

Streaming

Enable streaming to receive partial responses as they are generated:

curl https://fullai.com/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "fullai-1",
    "messages": [{"role": "user", "content": "Tell me a story"}],
    "stream": true
  }'

Streamed responses use Server-Sent Events (SSE):

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Once"}}]}

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" upon"}}]}

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" a"}}]}

data: [DONE]
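A streaming client has to strip the `data: ` prefix, stop at `[DONE]`, and join the `delta` fragments. A minimal parser sketch, fed the event lines above (real streams may also carry blank keep-alive lines, which this skips):

```python
import json

def accumulate_sse(lines):
    """Join the delta content fragments from a stream of SSE 'data:' lines."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

events = [
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Once"}}]}',
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" upon"}}]}',
    'data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" a"}}]}',
    "data: [DONE]",
]
```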

Rate Limits

Tier        Requests/min   Tokens/day
Free        10             10,000
Starter     60             100,000
Pro         120            1,000,000
Enterprise  1,000+         Unlimited
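Requests over the per-minute limit are answered with a 429, so clients typically retry with exponential backoff. A sketch (the delay schedule is our choice, not mandated by FullAI):

```python
import time

def backoff_delays(retries=5, base=1.0, cap=30.0):
    """Exponential backoff schedule: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return [min(base * (2 ** i), cap) for i in range(retries)]

def call_with_retry(do_request, retries=5):
    """Retry `do_request` whenever it signals a 429, sleeping between attempts.

    `do_request` is any callable returning (status_code, body).
    """
    for delay in backoff_delays(retries):
        status, body = do_request()
        if status != 429:
            return status, body
        time.sleep(delay)
    return do_request()  # final attempt; result returned as-is
```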

Error Handling

Errors are returned in a standard format:

{
  "error": {
    "message": "Invalid API key",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}

Common Error Codes

401  Invalid or missing API key
429  Rate limit exceeded
400  Invalid request body
500  Internal server error
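Client code can key off both the HTTP status and the `message` in the error body; retrying only transient failures. A sketch (the retryable/non-retryable split is our classification, not FullAI's):

```python
# Statuses worth retrying: rate limits and transient server errors.
RETRYABLE = {429, 500}

def classify_error(status, body):
    """Return (message, retryable) for an error response body."""
    err = body.get("error", {})
    message = err.get("message", "unknown error")
    return message, status in RETRYABLE

msg, retry = classify_error(401, {"error": {
    "message": "Invalid API key",
    "type": "invalid_request_error",
    "code": "invalid_api_key",
}})
```

A 401 or 400 means the request itself must be fixed; retrying it unchanged will fail again.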