Use CometAPI's POST /v1/responses to generate advanced model output with multimodal inputs, stateful chat, built-in tools, and function calling.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

response = client.responses.create(
    model="gpt-5.4",
    input="Tell me a three sentence bedtime story about a unicorn.",
)

print(response.output_text)

{
"id": "resp_0a153ae8201f73bc0069a7e8044cc481",
"object": "response",
"created_at": 1772611588,
"status": "completed",
"background": false,
"completed_at": 1772611589,
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4.1-nano",
"output": [
{
"id": "msg_0a153ae8201f73bc0069a7e8049a8881",
"type": "message",
"status": "completed",
"content": [
{
"type": "output_text",
"annotations": [],
"text": "Four."
}
],
"role": "assistant"
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"prompt_cache_key": null,
"prompt_cache_retention": null,
"reasoning": {
"effort": null,
"summary": null
},
"safety_identifier": null,
"service_tier": "auto",
"store": true,
"temperature": 1,
"text": {
"format": {
"type": "text"
},
"verbosity": "medium"
},
"tool_choice": "auto",
"tools": [],
"top_p": 1,
"truncation": "disabled",
"usage": {
"input_tokens": 19,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 9,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 28
},
"user": null,
"metadata": {}
}

You can chain responses together using previous_response_id or conversation.
You can use built-in tools such as web_search_preview, file_search, and code_interpreter.
You can request output that conforms to a JSON Schema via the text.format parameter.
You can set the depth of reasoning with the reasoning.effort parameter.
You can stream responses over SSE with events such as response.output_text.delta and response.completed.
The response object includes the request parameters again (temperature, tools, reasoning, etc.).

When stream is set to true, the API sends server-sent events (SSE) in the following order:

response.created - the response object is initialized
response.in_progress - generation has started
response.output_item.added - a new output item (message or tool call)
response.content_part.added - a content part begins
response.output_text.delta - a text chunk (the delta field carries the text fragment)
response.output_text.done - text generation for this content part is complete
response.content_part.done - the content part ends
response.output_item.done - the output item ends
response.completed - the full response, including usage data

Bearer token authentication. Use your CometAPI key.
Model ID to use for this request. See the Models page for current options.
"gpt-5.4"
Text, image, or file inputs to the model, used to generate a response. Can be a simple string for text-only input, or an array of input items for multimodal content (images, files) and multi-turn conversations.
A system (or developer) message inserted into the model's context. When used with previous_response_id, instructions from the previous response are not carried over — this makes it easy to swap system messages between turns.
Whether to run the model response in the background. Background responses do not return output directly — you retrieve the result later via the response ID.
Context management configuration for this request. Controls how the model manages context when the conversation exceeds the context window.
The conversation this response belongs to. Items from the conversation are prepended to input for context. Input and output items are automatically added to the conversation after the response completes. Cannot be used with previous_response_id.
Additional output data to include in the response. Use this to request extra information that is not included by default.
web_search_call.action.sources, code_interpreter_call.outputs, computer_call_output.output.image_url, file_search_call.results, message.input_image.image_url, message.output_text.logprobs, reasoning.encrypted_content

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This limit applies across all built-in tool calls, not per individual tool. Any further tool call attempts by the model will be ignored.
Set of up to 16 key-value pairs that can be attached to the response. Useful for storing additional information in a structured format. Keys have a maximum length of 64 characters; values have a maximum length of 512 characters.
Whether to allow the model to run tool calls in parallel.
The unique ID of a previous response. Use this to create multi-turn conversations without manually managing conversation state. Cannot be used with conversation.
Reference to a prompt template and its variables.
A key used to cache responses for similar requests, helping optimize cache hit rates. Replaces the deprecated user field for caching purposes.
The retention policy for the prompt cache. Set to 24h to keep cached prefixes active for up to 24 hours.
in-memory, 24h

Configuration options for reasoning models (o-series and gpt-5). Controls the depth of reasoning before generating a response.
A stable identifier for your end-users, used to help detect policy violations. Should be a hashed username or email — do not send identifying information directly.
64

Specifies the processing tier for the request. When set, the response will include the actual service_tier used.
auto: Uses the tier configured in project settings (default behavior).
default: Standard pricing and performance.
flex: Flexible processing with potential cost savings.
priority: Priority processing with faster response times.
auto, default, flex, priority

Whether to store the generated response for later retrieval via API.
If set to true, the response data will be streamed to the client as it is generated using server-sent events (SSE). Events include response.created, response.output_text.delta, response.completed, and more.
Options for streaming responses. Only set this when stream is true.
Sampling temperature between 0 and 2. Higher values (e.g., 0.8) increase randomness; lower values (e.g., 0.2) make output more focused and deterministic. We recommend adjusting either this or top_p, but not both.
0 <= x <= 2

Configuration for text output. Use this to request structured JSON output via JSON mode or JSON Schema.
Controls how the model selects which tool(s) to call.
auto (default): The model decides whether and which tools to call.
none: The model will not call any tools.
required: The model must call at least one tool.

An array of tools the model may call while generating a response. CometAPI supports three categories:
web_search_preview and file_search.

Number of most likely tokens to return at each position (0-20), each with an associated log probability. Must include message.output_text.logprobs in the include parameter to receive logprobs.
0 <= x <= 20

Nucleus sampling parameter. The model considers tokens with top_p cumulative probability mass. For example, 0.1 means only the top 10% probability tokens are considered. We recommend adjusting either this or temperature, but not both.
0 <= x <= 1

The truncation strategy for handling inputs that exceed the model's context window.
auto: The model truncates the input by dropping items from the beginning of the conversation to fit.
disabled (default): The request fails with a 400 error if the input exceeds the context window.
auto, disabled

Deprecated. Use safety_identifier and prompt_cache_key instead. A stable identifier for your end-user.
The generated Response object.
Unique identifier for the response.
"resp_0a153ae8201f73bc0069a7e8044cc481"
The object type, always response.
response
"response"
Unix timestamp (in seconds) of when the response was created.
1772611588
The status of the response.
completed, in_progress, failed, cancelled, queued
"completed"
Whether the response was run in the background.
false
Unix timestamp of when the response was completed, or null if still in progress.
1772611589
Error information if the response failed, or null on success.
Details about why the response is incomplete, if applicable.
The system instructions used for this response.
The maximum output token limit that was applied.
The model used for the response.
"gpt-4.1-nano"
An array of output items generated by the model. Each item can be a message, function call, or other output type.
A convenience field containing the concatenated text output from all output message items.
Whether parallel tool calls were enabled.
The ID of the previous response, if this is a multi-turn conversation.
The reasoning configuration that was used.
The service tier actually used to process the request.
"default"
Whether the response was stored.
The temperature value used.
1
The text configuration used.
The tool choice setting used.
The tools that were available for this response.
The top_p value used.
1
The truncation strategy used.
Token usage statistics for this response.
The user identifier, if provided.
The metadata attached to this response.
Content filter results applied to the response, if any.
The frequency penalty applied to the request.
Maximum number of tool calls allowed, if set.
The presence penalty applied to the request.
Cache key for prompt caching, if applicable.
Prompt cache retention policy, if applicable.
Safety system identifier for the response, if applicable.
Number of top log probabilities returned per token position.