OpenAI Compatible Endpoint

gpt-4o-image generates images

POST
https://api.cometapi.com/v1/chat/completions
Use gpt-4o-image to generate images through the OpenAI-compatible chat completions endpoint.

Request

Header Params
Authorization (string, required). Example: {{api-key}}
Body Params (application/json)
model (string, required): ID of the model to use. For more information on which models are available for the Chat API, see the model endpoint compatibility table: https://platform.openai.com/docs/models/model-endpoint-compatibility
messages (array of objects, required): The messages to generate a chat completion for, in the chat format: https://platform.openai.com/docs/guides/text?api-mode=chat
  role (string, optional): The role of the message author.
  content (string, optional): The content of the message.
stream (boolean, optional): If set, partial message deltas will be sent, as in ChatGPT. Tokens are sent as data-only server-sent events as they become available, and the stream is terminated by a data: [DONE] message. For sample code, see the OpenAI Cookbook. (A streaming sketch follows the example request body below.)
temperature (number, optional): What sampling temperature to use, between 0 and 2. Higher values (e.g. 0.8) make the output more random, while lower values (e.g. 0.2) make it more focused and deterministic. We generally recommend altering this or top_p, but not both.
top_p (number, optional): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
n (integer, optional): How many chat completion choices to generate for each input message.
stop (string, optional): Up to 4 sequences where the API will stop generating further tokens.
max_tokens (integer, optional): The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
presence_penalty (number, optional): A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See more about frequency and presence penalties: https://platform.openai.com/docs/api-reference/parameter-details
frequency_penalty (number, optional): A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See more about frequency and presence penalties: https://platform.openai.com/docs/api-reference/parameter-details
logit_bias (object or null, optional): Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model before sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
user (string, optional): A unique identifier representing your end user, which can help OpenAI monitor and detect abuse. Learn more: https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids
Examples
{
  "model": "gpt-4o-image",
  "messages": [
    {
      "role": "user",
      "content": "Generate a cute kitten sitting on a cloud, cartoon style"
    }
  ]
}
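
The example body above uses only the required fields. The sketch below shows how the optional temperature and stream parameters could be exercised against the same endpoint using Python's requests library. It is an illustrative example rather than official CometAPI sample code, and it assumes the endpoint emits OpenAI-style server-sent events when stream is enabled, as described above.

Python
import json
import requests

API_KEY = "{{api-key}}"  # placeholder: substitute your CometAPI key

payload = {
    "model": "gpt-4o-image",
    "messages": [
        {"role": "user", "content": "Generate a cute kitten sitting on a cloud, cartoon style"}
    ],
    "temperature": 0.7,   # optional: lower values give more deterministic output
    "stream": True,       # optional: receive partial message deltas as SSE
}

with requests.post(
    "https://api.cometapi.com/v1/chat/completions",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
    stream=True,
    timeout=600,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # Each event line looks like "data: {...}"; "data: [DONE]" ends the stream.
        if not line or not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)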

Request samples

cURL
curl --location --request POST 'https://api.cometapi.com/v1/chat/completions' \
--header 'Authorization: {{api-key}}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-4o-image",
    "messages": [
        {
            "role": "user",
            "content": "Generate a cute kitten sitting on a cloud, cartoon style"
        }
    ]
}'
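
For reference, the same request can also be issued with the official OpenAI Python SDK pointed at the CometAPI base URL. This is an illustrative sketch that assumes the endpoint is a drop-in replacement for the OpenAI Chat API; the SDK usage is not shown in the documentation above.

Python
from openai import OpenAI

# Assumes the endpoint behaves as an OpenAI-compatible chat completions API.
client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="{{api-key}}",  # placeholder: substitute your CometAPI key
)

response = client.chat.completions.create(
    model="gpt-4o-image",
    messages=[
        {"role": "user", "content": "Generate a cute kitten sitting on a cloud, cartoon style"}
    ],
)

print(response.choices[0].message.content)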

Responses

200 Successful Response (application/json)
Body
id (string, required): A unique identifier for the chat completion.
object (string, required): The object type, which is always chat.completion.
created (integer, required): The Unix timestamp (in seconds) of when the chat completion was created.
model (string, required): The model used for the chat completion.
system_fingerprint (string, required): This fingerprint represents the backend configuration that the model runs with.
choices (array of objects, required): A list of chat completion choices.
  index (integer, optional): The index of the choice in the list of choices.
  message (object, optional): A chat completion message generated by the model.
  finish_reason (string, optional): The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or insufficient_system_resource if the request was interrupted due to insufficient resources in the inference system.
usage (object, required): Usage statistics for the completion request.
  completion_tokens (integer, required): Number of tokens in the generated completion.
  completion_tokens_details (object, required): Breakdown of tokens used in the completion.
  prompt_tokens (integer, required): Number of tokens in the prompt. It equals prompt_cache_hit_tokens + prompt_cache_miss_tokens.
  prompt_tokens_details (object, required): Breakdown of tokens used in the prompt.
  total_tokens (integer, required): Total number of tokens used in the request (prompt + completion).
Example
{
  "id": "chatcmpl-89DrEm9HUEuAk2JUN4tmIi5TEwVam",
  "object": "chat.completion",
  "created": 1744696747,
  "model": "gpt-4o-image",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "```\n{\n  \"prompt\": \"A cute kitten sitting on a cloud, in a cartoon style.\",\n  \"ratio\": \"1:1\"\n}\n```\n\n>🕐 Queuing.\n\n>⚡ Generating..\n\n>🏃‍ Progress 14...37...61...74..[100](https://videos.openai.com/vg-assets/assets%2Ftask_01jrw0e6vwez5sh16h25kn0xn2%2Fsrc_0.png?st=2025-04-15T04%3A40%3A06Z&se=2025-04-21T05%3A40%3A06Z&sks=b&skt=2025-04-15T04%3A40%3A06Z&ske=2025-04-21T05%3A40%3A06Z&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skoid=aa5ddad1-c91a-4f0a-9aca-e20682cc8969&skv=2019-02-02&sv=2018-11-09&sr=b&sp=r&spr=https%2Chttp&sig=tVdhsKmI14ibIK%2BKuwhSrCphnrSBZX0d%2FXtqjLVTF08%3D&az=oaivgprodscus)\n\n> ✅ Generation complete\n\n\n![gen_01jrw0e7j2fc2rp7evwe19y0q7](https://filesystem.site/cdn/20250415/CK1wClwp1so6YTPImkPlJpZD8KlrEM.png)"
      },
      "finish_reason": "stop"
    }
  ],
  "data": null,
  "usage": {
    "prompt_tokens": 21,
    "completion_tokens": 373,
    "total_tokens": 394,
    "prompt_tokens_details": {
      "CachedCreationTokens": 0,
      "text_tokens": 14,
      "cached_tokens_details": {}
    },
    "completion_tokens_details": {}
  },
  "error": {}
}
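
As the example shows, the generated image is returned inside the assistant message as a markdown image link rather than as a separate data field. A small helper like the one below can pull the image URL out of choices[0].message.content. This is an illustrative sketch; the function name and regex are not part of the API.

Python
import re

def extract_image_urls(content: str) -> list[str]:
    """Return the URLs of markdown image links, e.g. ![alt](https://...png)."""
    return re.findall(r"!\[[^\]]*\]\((https?://[^)\s]+)\)", content)

# Usage with a parsed response body (hypothetical variable names):
# content = response_json["choices"][0]["message"]["content"]
# urls = extract_image_urls(content)   # e.g. ["https://filesystem.site/cdn/.../....png"]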