POST /v1/responses
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

response = client.responses.create(
    model="gpt-5.4",
    input="Tell me a three sentence bedtime story about a unicorn.",
)

print(response.output_text)
{
  "id": "resp_0a153ae8201f73bc0069a7e8044cc481",
  "object": "response",
  "created_at": 1772611588,
  "status": "completed",
  "background": false,
  "completed_at": 1772611589,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": null,
  "model": "gpt-4.1-nano",
  "output": [
    {
      "id": "msg_0a153ae8201f73bc0069a7e8049a8881",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "text": "Four."
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "prompt_cache_key": null,
  "prompt_cache_retention": null,
  "reasoning": {
    "effort": null,
    "summary": null
  },
  "safety_identifier": null,
  "service_tier": "auto",
  "store": true,
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 19,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 9,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 28
  },
  "user": null,
  "metadata": {}
}

The Responses API extends Chat Completions with stateful conversations, built-in tools, multimodal file inputs, and reasoning control. It is the recommended endpoint for OpenAI o-series reasoning models, GPT-5 series, and Codex models.
Different model providers support different request parameters and return different response fields; not every parameter documented on this page works with every model on CometAPI.

Use stateful conversations

Chain responses together using previous_response_id instead of managing message history yourself:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

# First turn
response = client.responses.create(
    model="gpt-5.4",
    input="What is quantum computing?",
)

# Second turn — previous context is included automatically
follow_up = client.responses.create(
    model="gpt-5.4",
    input="Can you explain that more simply?",
    previous_response_id=response.id,
)

print(follow_up.output_text)

Use built-in tools

The Responses API includes platform-provided tools that require no configuration:
  • web_search_preview: Search the web for real-time information
  • file_search: Search through uploaded files
  • code_interpreter: Execute Python code in a sandbox
To enable a built-in tool, add it to the tools array:
response = client.responses.create(
    model="gpt-5.4",
    input="Find the current price of Bitcoin",
    tools=[{"type": "web_search_preview"}],
)

print(response.output_text)

Call custom functions

Define functions the model can invoke with structured arguments:
response = client.responses.create(
    model="gpt-5.4",
    input="What's the weather in Tokyo?",
    tools=[{
        "type": "function",
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }],
)
When the model calls a function, the response output array contains a function_call item with the function name and parsed arguments. Execute the function and send the result back in a follow-up request.
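The sketch below shows that round-trip, reusing the client and response objects from the example above. The get_weather helper is hypothetical, and the chaining pattern (a function_call_output item sent with previous_response_id) follows the standard OpenAI Responses function-calling flow:
import json

def get_weather(location: str) -> str:
    # Hypothetical stand-in for a real weather lookup.
    return f"22°C and sunny in {location}"

for item in response.output:
    if item.type == "function_call" and item.name == "get_weather":
        args = json.loads(item.arguments)  # arguments arrive as a JSON string
        result = get_weather(args["location"])

        # Send the result back, chaining on the previous response so the
        # model sees its own function call plus your output.
        follow_up = client.responses.create(
            model="gpt-5.4",
            previous_response_id=response.id,
            input=[{
                "type": "function_call_output",
                "call_id": item.call_id,
                "output": result,
            }],
        )
        print(follow_up.output_text)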

Request structured output

To force JSON output matching a specific schema, use the text.format parameter:
response = client.responses.create(
    model="gpt-5.4",
    input="List 3 programming languages with their main use cases",
    text={
        "format": {
            "type": "json_schema",
            "name": "languages",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "languages": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "use_case": {"type": "string"}
                            },
                            "required": ["name", "use_case"],
                            "additionalProperties": False
                        }
                    }
                },
                "required": ["languages"],
                "additionalProperties": False
            }
        }
    },
)
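Because strict mode guarantees the output matches the schema, you can parse output_text directly; a minimal sketch using the response above:
import json

data = json.loads(response.output_text)  # valid JSON under strict schema mode
for lang in data["languages"]:
    print(f"{lang['name']}: {lang['use_case']}")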

Configure reasoning

For o-series and GPT-5 models, control reasoning depth with reasoning.effort:
response = client.responses.create(
    model="o3",
    input="Solve this step by step: if f(x) = x^3 - 6x^2 + 11x - 6, find all roots.",
    reasoning={"effort": "high"},  # "low", "medium", or "high"
)

print(response.output_text)
Higher reasoning effort produces more thorough answers but uses more tokens. Use "low" for simple queries and "high" for complex multi-step problems.

Stream responses

To receive incremental output, set stream to true. The API sends server-sent events (SSE) in this order:
  1. response.created — Response object initialized
  2. response.in_progress — Generation started
  3. response.output_item.added — New output item (message or tool call)
  4. response.content_part.added — Content part started
  5. response.output_text.delta — Text chunk (contains delta field)
  6. response.output_text.done — Text generation complete for this content part
  7. response.content_part.done — Content part finished
  8. response.output_item.done — Output item finished
  9. response.completed — Full response with usage data
Stream a response with the Python SDK:
stream = client.responses.create(
    model="gpt-5.4",
    input="Write a haiku about coding",
    stream=True,
)

for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="")
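
The response.completed event carries the full Response object, so the same loop can also capture the final usage numbers; a sketch, assuming the stream above:
for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="")
    elif event.type == "response.completed":
        # The completed event wraps the finished Response object.
        print(f"\n\nTotal tokens: {event.response.usage.total_tokens}")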

For in-depth guides on each capability, see the OpenAI documentation: Text · Images · PDF files · Structured Outputs · Function Calling · Conversation State · Built-in Tools · Reasoning

Authorizations

Authorization
string
header
required

Bearer token authentication. Use your CometAPI key.
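
If you are not using an SDK, set the header yourself. A minimal sketch with Python's requests library (the output indexing assumes a plain text response shaped like the example at the top of this page):
import requests

resp = requests.post(
    "https://api.cometapi.com/v1/responses",
    headers={
        "Authorization": "Bearer <COMETAPI_KEY>",  # standard Bearer auth
        "Content-Type": "application/json",
    },
    json={"model": "gpt-5.4", "input": "Hello!"},
)
resp.raise_for_status()
print(resp.json()["output"][0]["content"][0]["text"])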

Body

application/json
model
string
required

Model ID to use for this request. See the Models page for current options.

Example:

"gpt-5.4"

input
required

Text, image, or file inputs to the model, used to generate a response. Can be a simple string for text-only input, or an array of input items for multimodal content (images, files) and multi-turn conversations.
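
A sketch of the array form with one text part and one image part (the image URL is a placeholder; the item shapes follow the OpenAI Responses input format):
response = client.responses.create(
    model="gpt-5.4",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What is in this image?"},
            {"type": "input_image", "image_url": "https://example.com/photo.jpg"},
        ],
    }],
)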

instructions
string

A system (or developer) message inserted into the model's context. When used with previous_response_id, instructions from the previous response are not carried over — this makes it easy to swap system messages between turns.
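
For example, a chained follow-up turn can carry a fresh system message; a minimal sketch, assuming a prior response object:
follow_up = client.responses.create(
    model="gpt-5.4",
    previous_response_id=response.id,
    instructions="Answer in the style of a pirate.",  # replaces, not appends to, the prior instructions
    input="Summarize your last answer.",
)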

background
boolean
default:false

Whether to run the model response in the background. Background responses do not return output directly — you retrieve the result later via the response ID.
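
A sketch of the background flow, polling by ID with the SDK's retrieve method:
import time

response = client.responses.create(
    model="gpt-5.4",
    input="Write a detailed report on renewable energy trends.",
    background=True,
)

# Poll until the background run leaves the queued/in-progress states.
while response.status in ("queued", "in_progress"):
    time.sleep(2)
    response = client.responses.retrieve(response.id)

print(response.output_text)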

context_management
object[]

Context management configuration for this request. Controls how the model manages context when the conversation exceeds the context window.

conversation

The conversation this response belongs to. Items from the conversation are prepended to input for context. Input and output items are automatically added to the conversation after the response completes. Cannot be used with previous_response_id.

include
enum<string>[]

Additional output data to include in the response. Use this to request extra information that is not included by default.

Available options:
web_search_call.action.sources,
code_interpreter_call.outputs,
computer_call_output.output.image_url,
file_search_call.results,
message.input_image.image_url,
message.output_text.logprobs,
reasoning.encrypted_content
max_output_tokens
integer

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

max_tool_calls
integer

The maximum number of total calls to built-in tools that can be processed in a response. This limit applies across all built-in tool calls, not per individual tool. Any further tool call attempts by the model will be ignored.

metadata
object

Set of up to 16 key-value pairs that can be attached to the response. Useful for storing additional information in a structured format. Keys have a maximum length of 64 characters; values have a maximum length of 512 characters.

parallel_tool_calls
boolean
default:true

Whether to allow the model to run tool calls in parallel.

previous_response_id
string

The unique ID of a previous response. Use this to create multi-turn conversations without manually managing conversation state. Cannot be used with conversation.

prompt
object

Reference to a prompt template and its variables.
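
A sketch of the expected shape, following the OpenAI prompt-template format (the template ID and variable name here are placeholders):
response = client.responses.create(
    model="gpt-5.4",
    prompt={
        "id": "pmpt_abc123",             # placeholder template ID
        "variables": {"city": "Tokyo"},  # placeholder template variable
    },
)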

prompt_cache_key
string

A key used to cache responses for similar requests, helping optimize cache hit rates. Replaces the deprecated user field for caching purposes.

prompt_cache_retention
enum<string>

The retention policy for the prompt cache. Set to 24h to keep cached prefixes active for up to 24 hours.

Available options:
in-memory,
24h
reasoning
object

Configuration options for reasoning models (o-series and gpt-5). Controls the depth of reasoning before generating a response.

safety_identifier
string

A stable identifier for your end-users, used to help detect policy violations. Should be a hashed username or email — do not send identifying information directly.

Maximum string length: 64
service_tier
enum<string>

Specifies the processing tier for the request. When set, the response will include the actual service_tier used.

  • auto: Uses the tier configured in project settings (default behavior).
  • default: Standard pricing and performance.
  • flex: Flexible processing with potential cost savings.
  • priority: Priority processing with faster response times.
Available options:
auto,
default,
flex,
priority
store
boolean
default:true

Whether to store the generated response for later retrieval via API.

stream
boolean
default:false

If set to true, the response data will be streamed to the client as it is generated using server-sent events (SSE). Events include response.created, response.output_text.delta, response.completed, and more.

stream_options
object

Options for streaming responses. Only set this when stream is true.

temperature
number
default:1

Sampling temperature between 0 and 2. Higher values (e.g., 0.8) increase randomness; lower values (e.g., 0.2) make output more focused and deterministic. We recommend adjusting either this or top_p, but not both.

Required range: 0 <= x <= 2
text
object

Configuration for text output. Use this to request structured JSON output via JSON mode or JSON Schema.

tool_choice
default:auto

Controls how the model selects which tool(s) to call.

  • auto (default): The model decides whether and which tools to call.
  • none: The model will not call any tools.
  • required: The model must call at least one tool.
  • An object specifying a particular tool to use.
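For example, to force a call to a specific function (reusing the get_weather definition from earlier):
response = client.responses.create(
    model="gpt-5.4",
    input="What's the weather in Tokyo?",
    tools=[{
        "type": "function",
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }],
    tool_choice={"type": "function", "name": "get_weather"},  # must call this tool
)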
tools
object[]

An array of tools the model may call while generating a response. CometAPI supports three categories:

  • Built-in tools: Platform-provided tools like web_search_preview and file_search.
  • Function calls: Custom functions you define, enabling the model to call your own code with structured arguments.
  • MCP tools: Integrations with third-party systems via MCP servers.
top_logprobs
integer

Number of most likely tokens to return at each position (0–20), each with an associated log probability. To receive logprobs in the response, you must also add message.output_text.logprobs to the include parameter.

Required range: 0 <= x <= 20
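A sketch combining the two parameters:
response = client.responses.create(
    model="gpt-5.4",
    input="Say hello",
    top_logprobs=2,
    include=["message.output_text.logprobs"],
)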
top_p
number
default:1

Nucleus sampling parameter. The model considers tokens with top_p cumulative probability mass. For example, 0.1 means only the top 10% probability tokens are considered. We recommend adjusting either this or temperature, but not both.

Required range: 0 <= x <= 1
truncation
enum<string>
default:disabled

The truncation strategy for handling inputs that exceed the model's context window.

  • auto: The model truncates the input by dropping items from the beginning of the conversation to fit.
  • disabled (default): The request fails with a 400 error if the input exceeds the context window.
Available options:
auto,
disabled
user
string
deprecated

Deprecated. Use safety_identifier and prompt_cache_key instead. A stable identifier for your end-user.

Response

200 - application/json

The generated Response object.

id
string

Unique identifier for the response.

Example:

"resp_0a153ae8201f73bc0069a7e8044cc481"

object
enum<string>

The object type, always response.

Available options:
response
Example:

"response"

created_at
integer

Unix timestamp (in seconds) of when the response was created.

Example:

1772611588

status
enum<string>

The status of the response.

Available options:
completed,
in_progress,
failed,
cancelled,
queued
Example:

"completed"

background
boolean

Whether the response was run in the background.

Example:

false

completed_at
integer | null

Unix timestamp of when the response was completed, or null if still in progress.

Example:

1772611589

error
object

Error information if the response failed, or null on success.

incomplete_details
object

Details about why the response is incomplete, if applicable.

instructions
string | null

The system instructions used for this response.

max_output_tokens
integer | null

The maximum output token limit that was applied.

model
string

The model used for the response.

Example:

"gpt-4.1-nano"

output
object[]

An array of output items generated by the model. Each item can be a message, function call, or other output type.

output_text
string

A convenience field containing the concatenated text output from all output message items.

parallel_tool_calls
boolean

Whether parallel tool calls were enabled.

previous_response_id
string | null

The ID of the previous response, if this is a multi-turn conversation.

reasoning
object

The reasoning configuration that was used.

service_tier
string

The service tier actually used to process the request.

Example:

"default"

store
boolean

Whether the response was stored.

temperature
number

The temperature value used.

Example:

1

text
object

The text configuration used.

tool_choice

The tool choice setting used.

tools
object[]

The tools that were available for this response.

top_p
number

The top_p value used.

Example:

1

truncation
string

The truncation strategy used.

usage
object

Token usage statistics for this response.

user
string | null

The user identifier, if provided.

metadata
object

The metadata attached to this response.

content_filters
array | null

Content filter results applied to the response, if any.

frequency_penalty
number
default:0

The frequency penalty applied to the request.

max_tool_calls
integer | null

Maximum number of tool calls allowed, if set.

presence_penalty
number
default:0

The presence penalty applied to the request.

prompt_cache_key
string | null

Cache key for prompt caching, if applicable.

prompt_cache_retention
string | null

Prompt cache retention policy, if applicable.

safety_identifier
string | null

Safety system identifier for the response, if applicable.

top_logprobs
integer
default:0

Number of top log probabilities returned per token position.