Confusion about use

1. What models should I use for text-to-image generation?

CometAPI supports many image models, covering almost all mainstream image models globally, including:
DALL-E
Midjourney
Ideogram
Stable Diffusion
Flux
Replicate
Kling
For the specific supported models, please refer to our model list: https://api.cometapi.com/pricing
For the specific calling methods, see our API documentation: https://api.cometapi.com/doc
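
Below is a minimal sketch of a text-to-image call, assuming the OpenAI-compatible images endpoint with base URL https://api.cometapi.com/v1 and that dall-e-3 is enabled for your key; treat the base URL and model name as assumptions and confirm the exact values against the model list and API documentation above.

```python
# Minimal sketch (assumptions: OpenAI-compatible images endpoint at
# https://api.cometapi.com/v1, and "dall-e-3" available on your key).
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                        # your CometAPI key
    base_url="https://api.cometapi.com/v1",  # assumed OpenAI-compatible base URL
)

result = client.images.generate(
    model="dall-e-3",                        # any image model from the pricing page
    prompt="A watercolor painting of a lighthouse at sunrise",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)                    # URL of the generated image
```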

2. What models should I use for text-to-video generation?#

cometapi supports many video models, covering almost all mainstream video models globally, including:
Luma
Runway
Pika
MiniMax
Kelin
Sora
Yes, you read that right, we even have Sora which OpenAI is still beta testing. cometapi can be considered the most comprehensive large model aggregation platform, and you're welcome to use it.
For specific supported models, please refer to our model list:
https://api.cometapi.com/pricing
For specific calling methods, see our API documentation:
View link: https://api.cometapi.com/doc
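
The exact endpoints differ per video model family (see the per-model pages in the API documentation), but most follow a submit-then-poll pattern. The sketch below is purely illustrative: the /v1/video/submit and /v1/video/status paths, the payload fields, and the task_id/status names are placeholders, not documented CometAPI routes.

```python
# Illustrative sketch only: submit a video task, then poll for its result.
# The paths and field names below are PLACEHOLDERS, not real CometAPI routes;
# use the endpoint and schema documented for your chosen video model.
import time
import requests

API_KEY = "sk-..."  # your CometAPI key
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
BASE = "https://api.cometapi.com"  # assumed base URL

# 1) Submit a generation task (placeholder path and payload).
submit = requests.post(
    f"{BASE}/v1/video/submit",
    headers=HEADERS,
    json={"model": "example-video-model", "prompt": "A drone shot over a rocky coastline"},
)
task_id = submit.json().get("task_id")  # placeholder field name

# 2) Poll until the task finishes (placeholder path and status values).
while True:
    status = requests.get(f"{BASE}/v1/video/status/{task_id}", headers=HEADERS).json()
    if status.get("status") in ("succeeded", "failed"):
        print(status)
        break
    time.sleep(10)
```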

3. Can I use o1? The full version of o1?

Supported range: the full o1 series, including o1-preview and o1-mini.
Usage method: call the models directly by the names in the model list, or try the o1-all and o1-pro-all web models.
CometAPI is the most comprehensive large model aggregation platform on the web, with full support for the o1 series.
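
For reference, here is a minimal sketch of an o1 call through the OpenAI-compatible chat completions endpoint, assuming the base URL https://api.cometapi.com/v1 and that o1-preview is enabled on your key:

```python
# Minimal sketch (assumptions: OpenAI-compatible chat completions endpoint,
# base URL https://api.cometapi.com/v1, o1-preview enabled on your key).
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="https://api.cometapi.com/v1")

resp = client.chat.completions.create(
    model="o1-preview",  # or o1-mini / o1-all / o1-pro-all, per the model list
    messages=[{"role": "user", "content": "Explain the birthday paradox in two sentences."}],
)
print(resp.choices[0].message.content)
```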

4. Does CometAPI support text embeddings?

Yes! CometAPI can call embedding models and fully supports the related functions.
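
A minimal sketch of an embeddings call, assuming the OpenAI-compatible embeddings endpoint and that text-embedding-3-small is available on your key (check the model list for the exact embedding model names):

```python
# Minimal sketch (assumptions: OpenAI-compatible embeddings endpoint,
# "text-embedding-3-small" available on your key).
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="https://api.cometapi.com/v1")

resp = client.embeddings.create(
    model="text-embedding-3-small",  # any embedding model from the model list
    input=["CometAPI aggregates many large models behind one API."],
)
print(len(resp.data[0].embedding))   # dimensionality of the returned vector
```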

5. Meaning of API error return codes

For the common error codes and their meanings, please check the Error Codes & Handling page at https://api.cometapi.com/doc.
Below are explanations for some of the codes:
400: Request error.
401: Unauthorized; please check your API key.
403: Access forbidden.
404: Resource not found, possibly a Base URL configuration error.
429: Too many requests; rate limiting was triggered.
500: Internal server error.
503: Service unavailable; the server may be busy.
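
As a rough guide, 429/500/503 are usually worth retrying with backoff, while 400/401/403/404 indicate a problem with the request or configuration. Below is a minimal sketch of that policy; the /v1/chat/completions path and the gpt-4o-mini model name are assumptions, and the post_with_retry helper is just an illustration, not part of CometAPI.

```python
# Minimal sketch: retry only on 429/500/503 with exponential backoff; treat
# 400/401/403/404 as request or configuration errors and raise immediately.
# The endpoint path and model name are assumptions, not official guidance.
import time
import requests

API_KEY = "sk-..."  # your CometAPI key

def post_with_retry(url, payload, retries=3):
    """POST a JSON payload, retrying on rate limits and server-side errors."""
    for attempt in range(retries):
        resp = requests.post(
            url,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,
            timeout=60,
        )
        if resp.status_code in (429, 500, 503):
            time.sleep(2 ** attempt)   # back off, then retry
            continue
        resp.raise_for_status()        # 400/401/403/404 are not retried
        return resp.json()
    resp.raise_for_status()            # out of retries: surface the last error

result = post_with_retry(
    "https://api.cometapi.com/v1/chat/completions",
    {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]},
)
print(result["choices"][0]["message"]["content"])
```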

6. Does the interface support OpenAI fine-tuning?

Fine-tuning is not supported.
Reason: CometAPI is a large model API aggregation platform, and fine-tuning requires a fixed, dedicated account, so fine-tuning services cannot be provided.

7. Is integration with open-source software supported?

Yes! CometAPI fully supports integration with the following tools:
ChatBox
Dify
Cline
Please refer to the corresponding tool's help documentation.

8. Does Claude support MCP? Does it support the v1/messages interface?

Currently, this interface (v1/messages) is not supported.
Support is planned for a subsequent version, so please stay tuned!

9. Can I use a .env file to store API keys?

Yes! Just be sure to also set the Base URL so that it matches the model configuration.
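
A minimal sketch using the python-dotenv package; the variable names COMETAPI_KEY and COMETAPI_BASE_URL are illustrative, not names required by CometAPI:

```python
# .env file (illustrative variable names, not required by CometAPI):
#   COMETAPI_KEY=sk-...
#   COMETAPI_BASE_URL=https://api.cometapi.com/v1
import os

from dotenv import load_dotenv   # pip install python-dotenv
from openai import OpenAI

load_dotenv()                    # read the .env file into the environment

client = OpenAI(
    api_key=os.environ["COMETAPI_KEY"],
    base_url=os.environ["COMETAPI_BASE_URL"],  # keep the Base URL in .env too
)
```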

10. The sample code uses OpenAI; can I change it to use Claude 3.7?

Supported! In the sample code, you only need to:
1. Modify the model parameter.
2. Fill in the complete Claude model name, and you can then call the Claude series models directly (see the sketch below).
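
A minimal sketch of the change, assuming the OpenAI-compatible base URL https://api.cometapi.com/v1; the model name claude-3-7-sonnet-20250219 is an assumption, so copy the exact Claude model name from the model list at https://api.cometapi.com/pricing.

```python
# Minimal sketch: the same OpenAI-style call with only the model changed.
# "claude-3-7-sonnet-20250219" is an assumed name; copy the exact Claude
# model name from the model list before running this.
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="https://api.cometapi.com/v1")

resp = client.chat.completions.create(
    model="claude-3-7-sonnet-20250219",  # the only change from the OpenAI sample
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in one line."}],
)
print(resp.choices[0].message.content)
```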

11. How do I upload videos to the large model?

Sorry, video upload is not currently supported.

12. What if I forget my password? How do I recover it?

1. On the login page, click the "Reset Password" button.
2. Enter the email address you used during registration, as prompted.
3. Check your email and follow the instructions in it to complete the password reset.
Modified at 2025-07-11 10:21:43