Create translation

POST
https://api.cometapi.com/v1/audio/translations

POST /v1/audio/translations

This endpoint translates the supplied audio into English text. Upload an audio file together with the parameters below, and the model returns the translation in the requested output format.

Request Body

file (file): The audio file object (not the file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
model (text): ID of the model to use. Only whisper-1 (powered by the open-source Whisper V2 model) is currently available.
prompt (text): Optional text to guide the model's style or to continue a previous audio segment. The prompt should be in English.
response_format (text): The format of the output, one of: json, text, srt, verbose_json, or vtt.
temperature (number): The sampling temperature, between 0 and 1. Higher values such as 0.8 make the output more random, while lower values such as 0.2 make it more focused and deterministic. If set to 0, the model uses log probability to automatically increase the temperature until certain thresholds are hit.
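
For reference, here is a minimal Python sketch of the same request. It assumes the requests package, an API key exposed as a COMETAPI_KEY environment variable, and a local file speech_de.mp3; the variable name and file path are placeholders, not part of the official docs.

Python
import os
import requests

URL = "https://api.cometapi.com/v1/audio/translations"
headers = {"Authorization": f"Bearer {os.environ['COMETAPI_KEY']}"}  # placeholder env var

data = {
    "model": "whisper-1",        # only model currently available
    "response_format": "json",   # json | text | srt | verbose_json | vtt
    "temperature": "0",          # 0 lets the model adjust temperature automatically
}

# speech_de.mp3 is a placeholder path for the audio to translate
with open("speech_de.mp3", "rb") as audio:
    files = {"file": ("speech_de.mp3", audio, "audio/mpeg")}
    resp = requests.post(URL, headers=headers, data=data, files=files)

resp.raise_for_status()
print(resp.json()["text"])       # English translation of the audio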

Request

Header Params
Authorization (string, required): Bearer authentication with your CometAPI key, e.g. Authorization: Bearer {{api-key}}.

Body Params (multipart/form-data)
The fields described under Request Body above, sent as multipart form data.

Responses

🟢 200 Create translation
Content type: text/plain
Body: the translated text, returned as a JSON object or as plain text/subtitles depending on response_format.
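
As with the underlying Whisper API, json and verbose_json return a JSON body, while text, srt, and vtt come back as plain text. The sketch below (same placeholder key variable and audio file as above) requests srt subtitles and writes them to disk.

Python
import os
import requests

# Same placeholder key variable and audio file as in the earlier sketch
headers = {"Authorization": f"Bearer {os.environ['COMETAPI_KEY']}"}
data = {"model": "whisper-1", "response_format": "srt"}   # subtitles instead of JSON

with open("speech_de.mp3", "rb") as audio:
    resp = requests.post(
        "https://api.cometapi.com/v1/audio/translations",
        headers=headers,
        data=data,
        files={"file": audio},
    )
resp.raise_for_status()

# srt (like text and vtt) comes back as plain text, so write it out directly
with open("translation.srt", "w", encoding="utf-8") as out:
    out.write(resp.text)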

Request Example
Shell
curl --location --request POST 'https://api.cometapi.com/v1/audio/translations' \
--header 'Authorization: Bearer {{api-key}}' \
--form 'file=@"/path/to/audio.mp3"' \
--form 'model="whisper-1"' \
--form 'prompt=""' \
--form 'response_format="json"' \
--form 'temperature="0"'
Response Example
{
    "text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
}
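
Because this route mirrors OpenAI's /v1/audio/translations API, it should also be callable through the official openai Python SDK by pointing the client's base URL at CometAPI. This is a sketch under that assumption; the API key and file path below are placeholders.

Python
from openai import OpenAI

client = OpenAI(
    api_key="<your CometAPI key>",              # placeholder
    base_url="https://api.cometapi.com/v1",     # route requests to CometAPI instead of api.openai.com
)

with open("speech_de.mp3", "rb") as audio:      # placeholder audio file
    translation = client.audio.translations.create(
        model="whisper-1",
        file=audio,
    )

print(translation.text)  # e.g. "Hello, my name is Wolfgang and I come from Germany. ..."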
Modified at 2025-05-12 07:49:53