POST /v1/chat/completions
from openai import OpenAI
client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

completion = client.chat.completions.create(
    model="gpt-5.4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(completion.choices[0].message)
{
  "id": "chatcmpl-DNA27oKtBUL8TmbGpBM3B3zhWgYfZ",
  "object": "chat.completion",
  "created": 1774412483,
  "model": "gpt-4.1-nano-2025-04-14",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Four",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 29,
    "completion_tokens": 2,
    "total_tokens": 31,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default",
  "system_fingerprint": "fp_490a4ad033"
}

Overview

The chat completions endpoint is the most widely used API for interacting with large language models. It accepts a conversation composed of multiple messages and returns the model's response. CometAPI routes this endpoint to multiple providers through a single unified interface, including OpenAI, Anthropic Claude (via a compatibility layer), Google Gemini, and more. To switch models, change only the model parameter.
This endpoint follows the OpenAI Chat Completions format. Most OpenAI-compatible SDKs and tools work with CometAPI once their base_url is set to https://api.cometapi.com/v1.

Important notes

Model-specific behavior — Different models may support different parameter subsets, and the response fields they return can vary slightly. For example, reasoning_effort applies only to reasoning models (o-series, GPT-5.1+), and some models may not support logprobs or n > 1.
Response pass-through — CometAPI passes model responses through without modifying their content (only normalizing the format when routing across providers), so you receive output consistent with the original API.
OpenAI Pro models — For OpenAI Pro series models (e.g., o1-pro), use the responses endpoint instead.

Message roles

Role        Description
system      Sets the assistant's behavior and personality. Placed at the start of the conversation.
developer   Replaces system in newer models (o1+). Provides instructions the model should follow regardless of user input.
user        A message from the end user.
assistant   A previous model response, used to maintain conversation history.
tool        The result of a tool/function call. Must include a tool_call_id matching the original tool call.

For newer models (GPT-4.1, GPT-5 series, o-series), prefer developer over system for instruction messages. Both work, but developer provides stronger instruction-following behavior.

Multimodal input

Many models accept images and audio in addition to text. Use the array form of content to send multimodal messages:
{
  "role": "user",
  "content": [
    {"type": "text", "text": "Describe this image"},
    {
      "type": "image_url",
      "image_url": {
        "url": "https://example.com/image.png",
        "detail": "high"
      }
    }
  ]
}
The detail parameter controls the depth of image analysis:
  • low — faster, uses fewer tokens (fixed cost)
  • high — detailed analysis, consumes more tokens
  • auto — let the model decide (default)
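
As a sketch, the array-form content above can be assembled with a small helper; build_image_message is an illustrative name, not part of the OpenAI SDK:

```python
# Build a multimodal user message mixing text and an image URL.
# build_image_message is a hypothetical helper, not an SDK function.
def build_image_message(prompt: str, image_url: str, detail: str = "auto") -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url, "detail": detail}},
        ],
    }

message = build_image_message(
    "Describe this image", "https://example.com/image.png", detail="high"
)
# Pass [message] as `messages` to client.chat.completions.create(...).
```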

Streaming

When stream is set to true, the response is delivered as Server-Sent Events (SSE). Each event contains a chat.completion.chunk object carrying incremental content:
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
To include token usage statistics in a streaming response, set stream_options.include_usage to true. The usage data appears in the final chunk before [DONE].
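
Client-side, the full reply is recovered by concatenating the content deltas. The dicts below mirror the SSE events shown above, standing in for the chunk objects the SDK yields when stream=True:

```python
# Sample chunks shaped like the chat.completion.chunk events above.
chunks = [
    {"choices": [{"index": 0, "delta": {"role": "assistant"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "Hello"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "!"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]},
]

# Concatenate the incremental content deltas; chunks without content
# (the role announcement and the final stop chunk) contribute nothing.
reply = "".join(c["choices"][0]["delta"].get("content", "") for c in chunks)
print(reply)  # Hello!
```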

Structured outputs

Use response_format to force the model to return valid JSON conforming to a specific schema:
{
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "result",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "answer": {"type": "string"},
          "confidence": {"type": "number"}
        },
        "required": ["answer", "confidence"],
        "additionalProperties": false
      }
    }
  }
}
JSON Schema mode (json_schema) guarantees the output conforms exactly to your schema. JSON Object mode (json_object) only guarantees the output is valid JSON; it does not enforce its structure.
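
Since json_schema mode guarantees schema-conformant output, the assistant content can be parsed directly. The raw string below is a stand-in for completion.choices[0].message.content from a live call:

```python
import json

# Stand-in for the assistant content returned under the schema above.
raw = '{"answer": "Paris", "confidence": 0.97}'

result = json.loads(raw)  # strict mode guarantees this parse succeeds
assert set(result) == {"answer", "confidence"}  # exactly the required keys
```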

Tool/function calling

Let the model call external functions by providing tool definitions:
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string", "description": "City name"}
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
When the model decides to call a tool, the response has finish_reason: "tool_calls" and the message.tool_calls array contains the function name and arguments. You then execute the function and send the result back as a tool message with the matching tool_call_id.
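
The round trip can be sketched as follows. The assistant_message dict mirrors the tool-call response shape described above, and get_weather is a hypothetical local implementation:

```python
import json

# Hypothetical local stand-in for the weather lookup the model requested.
def get_weather(location: str) -> dict:
    return {"location": location, "temp_c": 21}

# Sample assistant message, shaped like completion.choices[0].message
# when finish_reason is "tool_calls".
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"location": "Tokyo"}'},
        }
    ],
}

call = assistant_message["tool_calls"][0]
args = json.loads(call["function"]["arguments"])  # parse the JSON arguments
result = get_weather(**args)                       # run the real function

# Send the result back as a `tool` message with the matching tool_call_id.
tool_message = {
    "role": "tool",
    "tool_call_id": call["id"],
    "content": json.dumps(result),
}
```

Appending assistant_message and tool_message to the conversation and calling the endpoint again lets the model compose its final answer from the tool result.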

Response fields

Field                     Description
id                        Unique completion identifier (e.g., chatcmpl-abc123).
object                    Always chat.completion.
model                     The model that generated the response (may include a version suffix).
choices                   Array of completion choices (usually 1 unless n > 1).
choices[].message         The assistant's response message, containing role, content, and optionally tool_calls.
choices[].finish_reason   Why the model stopped: stop, length, tool_calls, or content_filter.
usage                     Token usage breakdown: prompt_tokens, completion_tokens, total_tokens, plus more detailed sub-counts.
system_fingerprint        Fingerprint of the backend configuration, for debugging reproducibility.

Cross-provider notes

Parameter          OpenAI GPT           Claude (via compat)   Gemini (via compat)
temperature        0–2                  0–1                   0–2
top_p              0–1                  0–1                   0–1
n                  1–128                1 only                1–8
stop               up to 4              up to 4               up to 5
tools
response_format    ✅ (json_schema)
logprobs
reasoning_effort   o-series, GPT-5.1+                         ❌ (use thinking natively)
  • max_tokens — Legacy parameter. Works with most models, but deprecated for newer OpenAI models.
  • max_completion_tokens — Recommended for GPT-4.1, GPT-5 series, and o-series models. Required for reasoning models, since it covers both output tokens and reasoning tokens.
When routing to a different provider, CometAPI handles the parameter mapping automatically.
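
For illustration, a request body using the recommended token cap might look like this (the model name and prompt are arbitrary):

```python
# Request body using max_completion_tokens rather than the legacy max_tokens.
payload = {
    "model": "gpt-5.4",
    "messages": [
        {"role": "developer", "content": "Answer in one sentence."},
        {"role": "user", "content": "What are server-sent events?"},
    ],
    # On reasoning models this caps visible output + reasoning tokens together.
    "max_completion_tokens": 128,
}
# Equivalent SDK call: client.chat.completions.create(**payload)
```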
  • system — Legacy instruction role. Works with all models.
  • developer — Introduced with the o1 models. Provides stronger instruction following on newer models; falls back to system behavior on older ones.
For new projects targeting GPT-4.1+ or o-series models, use developer.

FAQ

How do I handle rate limits?

When you hit 429 Too Many Requests, implement exponential backoff:
import time
import random
from openai import OpenAI, RateLimitError

client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

def chat_with_retry(messages, max_retries=3):
    for i in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-5.4",
                messages=messages,
            )
        except RateLimitError:
            if i < max_retries - 1:
                wait_time = (2 ** i) + random.random()
                time.sleep(wait_time)
            else:
                raise

How do I maintain conversation context?

Include the full conversation history in the messages array:
messages = [
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a high-level programming language..."},
    {"role": "user", "content": "What are its main advantages?"},
]
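
To extend the history turn by turn, append each assistant reply before adding the next user message. The reply string below is a placeholder for completion.choices[0].message.content from a live call:

```python
# Grow the conversation turn by turn.
messages = [
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Python?"},
]

# With a live client:
# reply = client.chat.completions.create(model="gpt-5.4", messages=messages).choices[0].message.content
reply = "Python is a high-level programming language..."  # placeholder reply

messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "What are its main advantages?"})
```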

What does finish_reason mean?

Value            Meaning
stop             Natural completion, or a stop sequence was hit.
length           The max_tokens or max_completion_tokens limit was reached.
tool_calls       The model made one or more tool/function calls.
content_filter   The output was filtered due to content policy.
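
A minimal dispatcher over these values might look like the sketch below; the advice strings are illustrative, not SDK output:

```python
# Branch on finish_reason to catch truncation, tool calls, or filtering.
def check_finish(choice: dict) -> str:
    reason = choice["finish_reason"]
    if reason == "length":
        return "truncated: raise max_completion_tokens and retry"
    if reason == "tool_calls":
        return "execute the requested tools, then continue the conversation"
    if reason == "content_filter":
        return "output filtered by content policy"
    return "complete"  # reason == "stop"

print(check_finish({"finish_reason": "stop"}))  # complete
```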

How do I control costs?

  1. Limit output length with max_completion_tokens.
  2. Pick cost-effective models (e.g., gpt-5.4-mini or gpt-5.4-nano for simpler tasks).
  3. Keep prompts concise; avoid redundant context.
  4. Monitor token usage via the usage response field.
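
As a sketch of point 4, spend can be estimated from the usage field; the per-million-token prices below are hypothetical placeholders, not CometAPI rates:

```python
# Track token usage from the `usage` response field. The sample dict
# mirrors the usage block in the response example above.
usage = {"prompt_tokens": 29, "completion_tokens": 2, "total_tokens": 31}

PROMPT_PRICE_PER_M = 0.40       # hypothetical $ per 1M input tokens
COMPLETION_PRICE_PER_M = 1.60   # hypothetical $ per 1M output tokens

cost = (usage["prompt_tokens"] * PROMPT_PRICE_PER_M
        + usage["completion_tokens"] * COMPLETION_PRICE_PER_M) / 1_000_000

# Sanity check: total_tokens is the sum of the two components.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```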

Authorization

Authorization
string
header
Required

Bearer token authentication. Use your CometAPI key.

Body

application/json
model
string
Default: gpt-5.4
Required

Model ID to use for this request. See the Models page for current options.

Example:

"gpt-4.1"

messages
object[]
Required

A list of messages forming the conversation. Each message has a role (system, user, assistant, or developer) and content (text string or multimodal content array).

stream
boolean

If true, partial response tokens are delivered incrementally via server-sent events (SSE). The stream ends with a data: [DONE] message.

temperature
number
Default: 1

Sampling temperature between 0 and 2. Higher values (e.g., 0.8) produce more random output; lower values (e.g., 0.2) make output more focused and deterministic. Recommended to adjust this or top_p, but not both.

Required range: 0 <= x <= 2
top_p
number
Default: 1

Nucleus sampling parameter. The model considers only the tokens whose cumulative probability reaches top_p. For example, 0.1 means only the top 10% probability tokens are considered. Recommended to adjust this or temperature, but not both.

Required range: 0 <= x <= 1
n
integer
Default: 1

Number of completion choices to generate for each input message. Defaults to 1.

stop
string

Up to 4 sequences where the API will stop generating further tokens. Can be a string or an array of strings.

max_tokens
integer

Maximum number of tokens to generate in the completion. The total of input + output tokens is capped by the model's context length.

presence_penalty
number
Default: 0

Number between -2.0 and 2.0. Positive values penalize tokens based on whether they have already appeared, encouraging the model to explore new topics.

Required range: -2 <= x <= 2
frequency_penalty
number
Default: 0

Number between -2.0 and 2.0. Positive values penalize tokens proportionally to how often they have appeared, reducing verbatim repetition.

Required range: -2 <= x <= 2
logit_bias
object

A JSON object mapping token IDs to bias values from -100 to 100. The bias is added to the model's logits before sampling. Values between -1 and 1 subtly adjust likelihood; -100 or 100 effectively ban or force selection of a token.

user
string

A unique identifier for your end-user. Helps with abuse detection and monitoring.

max_completion_tokens
integer

An upper bound for the number of tokens to generate, including visible output tokens and reasoning tokens. Use this instead of max_tokens for GPT-4.1+, GPT-5 series, and o-series models.

response_format
object

Specifies the output format. Use {"type": "json_object"} for JSON mode, or {"type": "json_schema", "json_schema": {...}} for strict structured output.

tools
object[]

A list of tools the model may call. Currently supports function type tools.

tool_choice
Default: auto

Controls how the model selects tools. auto (default): model decides. none: no tools. required: must call a tool.

logprobs
boolean
Default: false

Whether to return log probabilities of the output tokens.

top_logprobs
integer

Number of most likely tokens to return at each position (0-20). Requires logprobs to be true.

Required range: 0 <= x <= 20
reasoning_effort
enum<string>

Controls the reasoning effort for o-series and GPT-5.1+ models.

Available options:
low,
medium,
high
stream_options
object

Options for streaming. Only valid when stream is true.

service_tier
enum<string>

Specifies the processing tier.

Available options:
auto,
default,
flex,
priority

Response

200 - application/json

Successful chat completion response.

id
string

Unique completion identifier.

Example:

"chatcmpl-abc123"

object
enum<string>
Available options:
chat.completion
Example:

"chat.completion"

created
integer

Unix timestamp of creation.

Example:

1774412483

model
string

The model used (may include version suffix).

Example:

"gpt-5.4-2025-07-16"

choices
object[]

Array of completion choices.

usage
object
service_tier
string
Example:

"default"

system_fingerprint
string | null
Example:

"fp_490a4ad033"