Use CometAPI's POST /v1/responses to generate advanced model outputs with multimodal input, stateful chat, built-in tools, and function calling.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

response = client.responses.create(
    model="gpt-5.4",
    input="Tell me a three sentence bedtime story about a unicorn.",
)

print(response.output_text)

Example response:

{
"id": "resp_0a153ae8201f73bc0069a7e8044cc481",
"object": "response",
"created_at": 1772611588,
"status": "completed",
"background": false,
"completed_at": 1772611589,
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4.1-nano",
"output": [
{
"id": "msg_0a153ae8201f73bc0069a7e8049a8881",
"type": "message",
"status": "completed",
"content": [
{
"type": "output_text",
"annotations": [],
"text": "Four."
}
],
"role": "assistant"
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"prompt_cache_key": null,
"prompt_cache_retention": null,
"reasoning": {
"effort": null,
"summary": null
},
"safety_identifier": null,
"service_tier": "auto",
"store": true,
"temperature": 1,
"text": {
"format": {
"type": "text"
},
"verbosity": "medium"
},
"tool_choice": "auto",
"tools": [],
"top_p": 1,
"truncation": "disabled",
"usage": {
"input_tokens": 19,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 9,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 28
},
"user": null,
"metadata": {}
}

The endpoint supports:

- Stateful conversations via previous_response_id or conversation, with no need to manage the message history yourself.
- Built-in tools such as web_search_preview, file_search, code_interpreter, and more, with no setup required.
- Structured output via text.format.
- Reasoning control via reasoning.effort for o-series and gpt-5 models.
- Streaming via server-sent events (response.output_text.delta, response.completed, and others).
- Echoed request parameters (temperature, tools, reasoning) in the response, for transparency.

When stream is set to true, the API sends server-sent events (SSE) in this order:

- response.created — the response object has been initialized
- response.in_progress — generation has started
- response.output_item.added — a new output item was added (a message or a tool call)
- response.content_part.added — a content part has started
- response.output_text.delta — a text chunk (carries the delta field with the text fragment)
- response.output_text.done — text generation for this content part is complete
- response.content_part.done — the content part has ended
- response.output_item.done — the output item has ended
- response.completed — the full response, including usage data

Bearer token authentication. Use your CometAPI key.
Model ID to use for this request. See the Models page for current options.
"gpt-5.4"
Text, image, or file inputs to the model, used to generate a response. Can be a simple string for text-only input, or an array of input items for multimodal content (images, files) and multi-turn conversations.
A system (or developer) message inserted into the model's context. When used with previous_response_id, instructions from the previous response are not carried over — this makes it easy to swap system messages between turns.
Whether to run the model response in the background. Background responses do not return output directly — you retrieve the result later via the response ID.
Context management configuration for this request. Controls how the model manages context when the conversation exceeds the context window.
The conversation this response belongs to. Items from the conversation are prepended to input for context. Input and output items are automatically added to the conversation after the response completes. Cannot be used with previous_response_id.
Additional output data to include in the response. Use this to request extra information that is not included by default.
web_search_call.action.sources, code_interpreter_call.outputs, computer_call_output.output.image_url, file_search_call.results, message.input_image.image_url, message.output_text.logprobs, reasoning.encrypted_content

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This limit applies across all built-in tool calls, not per individual tool. Any further tool call attempts by the model will be ignored.
Set of up to 16 key-value pairs that can be attached to the response. Useful for storing additional information in a structured format. Keys have a maximum length of 64 characters; values have a maximum length of 512 characters.
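A minimal sketch of attaching metadata to a request body and checking the documented limits client-side (the key names here are hypothetical):

```python
# Request body with metadata attached (hypothetical key names).
request = {
    "model": "gpt-5.4",
    "input": "Summarize our launch plan.",
    "metadata": {"customer_id": "cus_123", "experiment": "nightly-eval"},
}

# Client-side check of the documented limits: at most 16 pairs,
# keys up to 64 characters, values up to 512 characters.
assert len(request["metadata"]) <= 16
assert all(len(k) <= 64 and len(v) <= 512 for k, v in request["metadata"].items())
```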
Whether to allow the model to run tool calls in parallel.
The unique ID of a previous response. Use this to create multi-turn conversations without manually managing conversation state. Cannot be used with conversation.
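A sketch of a two-turn exchange chained with previous_response_id. Request bodies are shown as plain dicts you would POST to /v1/responses; the response id reuses the sample id from the example above:

```python
import json

# First turn: an ordinary request body.
first_turn = {
    "model": "gpt-5.4",
    "input": "Pick a random city.",
}

# Suppose the first response came back with this id (the sample id above):
first_response_id = "resp_0a153ae8201f73bc0069a7e8044cc481"

# Second turn: reference the previous response instead of resending the
# whole message history. Note it cannot be combined with "conversation".
second_turn = {
    "model": "gpt-5.4",
    "previous_response_id": first_response_id,
    "input": "What country is it in?",
}

print(json.dumps(second_turn, indent=2))
```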
Reference to a prompt template and its variables.
A key used to cache responses for similar requests, helping optimize cache hit rates. Replaces the deprecated user field for caching purposes.
The retention policy for the prompt cache. Set to 24h to keep cached prefixes active for up to 24 hours.
in-memory, 24h

Configuration options for reasoning models (o-series and gpt-5). Controls the depth of reasoning before generating a response.
A stable identifier for your end-users, used to help detect policy violations. Should be a hashed username or email — do not send identifying information directly.
64

Specifies the processing tier for the request. When set, the response will include the actual service_tier used.
- auto: uses the tier configured in project settings (default behavior).
- default: standard pricing and performance.
- flex: flexible processing with potential cost savings.
- priority: priority processing with faster response times.

auto, default, flex, priority

Whether to store the generated response for later retrieval via API.
If set to true, the response data will be streamed to the client as it is generated using server-sent events (SSE). Events include response.created, response.output_text.delta, response.completed, and more.
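The client-side fold over these events can be sketched as below. The Event stand-in and the sample deltas are illustrative; a live call with stream=True yields typed event objects from the SDK exposing the same .type and .delta fields:

```python
from dataclasses import dataclass

# Stand-in for SSE events (illustrative only; the real SDK yields typed
# event objects with the same .type and .delta attributes).
@dataclass
class Event:
    type: str
    delta: str = ""

def collect_text(events):
    """Fold response.output_text.delta events into the full output text."""
    chunks = []
    for event in events:
        if event.type == "response.output_text.delta":
            chunks.append(event.delta)
        elif event.type == "response.completed":
            break
    return "".join(chunks)

# With a live stream you would write something like:
#   stream = client.responses.create(model="gpt-5.4", input=..., stream=True)
#   text = collect_text(stream)
events = [
    Event("response.created"),
    Event("response.output_text.delta", "Once upon "),
    Event("response.output_text.delta", "a time."),
    Event("response.output_text.done"),
    Event("response.completed"),
]
print(collect_text(events))  # → Once upon a time.
```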
Options for streaming responses. Only set this when stream is true.
Sampling temperature between 0 and 2. Higher values (e.g., 0.8) increase randomness; lower values (e.g., 0.2) make output more focused and deterministic. We recommend adjusting either this or top_p, but not both.
0 <= x <= 2

Configuration for text output. Use this to request structured JSON output via JSON mode or JSON Schema.
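A sketch of requesting structured output through text.format, assuming the json_schema format shape used by the OpenAI-compatible Responses API; the calendar_event schema is illustrative:

```python
# Request body sketch: constrain output to a JSON Schema via text.format.
# The "calendar_event" schema below is a made-up example.
request = {
    "model": "gpt-5.4",
    "input": "Extract the event: Alice and Bob meet on Friday at noon.",
    "text": {
        "format": {
            "type": "json_schema",
            "name": "calendar_event",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "attendees": {"type": "array", "items": {"type": "string"}},
                    "day": {"type": "string"},
                },
                "required": ["title", "attendees", "day"],
                "additionalProperties": False,
            },
        }
    },
}
```

With strict schemas, the model's text output is guaranteed to parse as JSON matching the declared shape.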
Controls how the model selects which tool(s) to call.
- auto (default): the model decides whether and which tools to call.
- none: the model will not call any tools.
- required: the model must call at least one tool.

An array of tools the model may call while generating a response. CometAPI supports three categories:
web_search_preview and file_search.

Number of most likely tokens to return at each position (0–20), each with an associated log probability. Must include message.output_text.logprobs in the include parameter to receive logprobs.
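A sketch of populating the tools array described above with one custom function and one built-in tool. It assumes the flat function-tool shape of the OpenAI-compatible Responses API; get_weather is a hypothetical function:

```python
# One custom function tool plus one built-in tool. In the Responses API,
# function tools carry name/parameters at the top level (not nested
# under a "function" key as in Chat Completions).
tools = [
    {
        "type": "function",
        "name": "get_weather",  # hypothetical function
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
    {"type": "web_search_preview"},  # built-in tools need only a type
]

request = {
    "model": "gpt-5.4",
    "input": "What's the weather in Paris?",
    "tools": tools,
    "tool_choice": "auto",
}
```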
0 <= x <= 20

Nucleus sampling parameter. The model considers tokens with top_p cumulative probability mass. For example, 0.1 means only the top 10% probability tokens are considered. We recommend adjusting either this or temperature, but not both.
0 <= x <= 1

The truncation strategy for handling inputs that exceed the model's context window.
- auto: the model truncates the input by dropping items from the beginning of the conversation to fit.
- disabled (default): the request fails with a 400 error if the input exceeds the context window.

auto, disabled

Deprecated. Use safety_identifier and prompt_cache_key instead. A stable identifier for your end-user.
The generated Response object.
Unique identifier for the response.
"resp_0a153ae8201f73bc0069a7e8044cc481"
The object type, always response.
response "response"
Unix timestamp (in seconds) of when the response was created.
1772611588
The status of the response.
completed, in_progress, failed, cancelled, queued "completed"
Whether the response was run in the background.
false
Unix timestamp of when the response was completed, or null if still in progress.
1772611589
Error information if the response failed, or null on success.
Details about why the response is incomplete, if applicable.
The system instructions used for this response.
The maximum output token limit that was applied.
The model used for the response.
"gpt-4.1-nano"
An array of output items generated by the model. Each item can be a message, function call, or other output type.
A convenience field containing the concatenated text output from all output message items.
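The output_text convenience field can be reproduced from the output array itself; a sketch over the sample response shown earlier:

```python
# Sample response trimmed to the fields this sketch needs.
response = {
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [
                {"type": "output_text", "annotations": [], "text": "Four."}
            ],
        }
    ]
}

def concat_output_text(resp):
    """Concatenate every output_text part across all message items."""
    return "".join(
        part["text"]
        for item in resp["output"]
        if item["type"] == "message"
        for part in item["content"]
        if part["type"] == "output_text"
    )

print(concat_output_text(response))  # → Four.
```

Iterating the output array directly remains necessary when the response mixes messages with function calls or other item types.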
Whether parallel tool calls were enabled.
The ID of the previous response, if this is a multi-turn conversation.
The reasoning configuration that was used.
The service tier actually used to process the request.
"default"
Whether the response was stored.
The temperature value used.
1
The text configuration used.
The tool choice setting used.
The tools that were available for this response.
The top_p value used.
1
The truncation strategy used.
Token usage statistics for this response.
The user identifier, if provided.
The metadata attached to this response.
Content filter results applied to the response, if any.
The frequency penalty applied to the request.
Maximum number of tool calls allowed, if set.
The presence penalty applied to the request.
Cache key for prompt caching, if applicable.
Prompt cache retention policy, if applicable.
Safety system identifier for the response, if applicable.
Number of top log probabilities returned per token position.