POST /v1/embeddings
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<COMETAPI_KEY>",
)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The food was delicious and the waiter was friendly.",
)

print(response.data[0].embedding[:5])  # First 5 dimensions
print(f"Dimensions: {len(response.data[0].embedding)}")
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [
        -0.0021,
        -0.0491,
        0.0209,
        0.0314,
        -0.0453
      ]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 2,
    "total_tokens": 2
  }
}

Overview

The Embeddings API generates vector representations of text that capture semantic meaning. These vectors can be used for semantic search, clustering, classification, anomaly detection, and retrieval-augmented generation (RAG). CometAPI supports embedding models from multiple providers. Pass one or more text strings and receive back numerical vectors that you can store in a vector database or use directly for similarity calculations.
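For the similarity calculations mentioned above, cosine similarity is the standard choice. A minimal sketch in plain Python (the toy 3-dimensional vectors stand in for real embeddings, which have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two equal-length vectors:
    # dot(a, b) / (||a|| * ||b||). Values near 1.0 indicate similar meaning.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings.
print(cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # identical -> 1.0
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # orthogonal -> 0.0
```

In production you would typically delegate this to a vector database rather than compute it in application code.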

Available Models

Model                  | Dimensions         | Max Tokens | Best For
text-embedding-3-large | 3,072 (adjustable) | 8,191      | Highest quality embeddings
text-embedding-3-small | 1,536 (adjustable) | 8,191      | Cost-effective, fast
text-embedding-ada-002 | 1,536 (fixed)      | 8,191      | Legacy compatibility
See the model list for all available embedding models and pricing.

Important Notes

Reducing Dimensions — The text-embedding-3-* models support the dimensions parameter, allowing you to shorten the embedding vector without significant loss of accuracy. This can reduce storage costs by up to 75% while retaining most of the semantic information.
Batch Input — You can embed multiple texts in a single request by passing an array of strings to the input parameter. This is significantly more efficient than making individual requests for each text.
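With batch input, each object in the response's data array carries an index pointing back into the input array, so results can be paired with their source texts even after sorting or filtering. A minimal sketch using plain dicts shaped like the JSON response above (the sample vectors are made up):

```python
def pair_embeddings(texts, data):
    # `data` mirrors response.data: one object per input, each with an
    # "index" field pointing back into the original `texts` list.
    by_index = sorted(data, key=lambda item: item["index"])
    return [(texts[item["index"]], item["embedding"]) for item in by_index]

texts = ["first sentence", "second sentence"]
data = [
    {"object": "embedding", "index": 1, "embedding": [0.2, 0.3]},
    {"object": "embedding", "index": 0, "embedding": [0.1, 0.4]},
]
print(pair_embeddings(texts, data)[0][0])  # "first sentence"
```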

Authorizations

Authorization
string
header
required

Bearer token authentication. Use your CometAPI key.

Body

application/json
model
string
required

The embedding model to use. See the Models page for current embedding model IDs.

Example:

"text-embedding-3-small"

input
required

The text to embed. Can be a single string, an array of strings, or an array of token arrays. Each input must not exceed the model's maximum token limit (8,191 tokens for text-embedding-3-* models).

encoding_format
enum<string>
default:float

The format of the returned embedding vectors. float returns an array of floating-point numbers. base64 returns a base64-encoded string representation, which can reduce response size for large batches.

Available options:
float,
base64
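With base64 encoding, each embedding field arrives as a base64 string of packed 32-bit floats rather than a JSON array. A decoding sketch — the little-endian float32 layout is an assumption based on the OpenAI-compatible wire format, not something this page specifies:

```python
import base64
import struct

def decode_embedding(b64_string):
    # Decode base64 to raw bytes, then unpack as little-endian float32
    # (4 bytes per dimension).
    raw = base64.b64decode(b64_string)
    count = len(raw) // 4
    return list(struct.unpack(f"<{count}f", raw))

# Round-trip a toy 3-dimensional vector to show the format.
encoded = base64.b64encode(struct.pack("<3f", 0.25, -0.5, 1.0)).decode()
print(decode_embedding(encoded))  # [0.25, -0.5, 1.0]
```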
dimensions
integer

The number of dimensions for the output embedding vector. Only supported by text-embedding-3-* models. Reducing dimensions can lower storage costs while maintaining most of the embedding's utility.

Required range: x >= 1
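For the text-embedding-3-* models, shortening via the dimensions parameter is roughly equivalent to truncating the full vector and re-normalizing it to unit length — which, under that assumption, you can also do client-side on embeddings you have already stored. A sketch (takes a plain list of floats):

```python
import math

def shorten_embedding(embedding, dimensions):
    # Keep the first `dimensions` values, then L2-normalize so cosine
    # similarities remain comparable across vectors.
    truncated = embedding[:dimensions]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

vec = shorten_embedding([0.3, 0.4, 0.1, 0.2], 2)
print(vec)  # unit-length 2-dimensional vector
```

Requesting the reduced size server-side via dimensions is still preferable when possible, since it also shrinks the response payload.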
user
string

A unique identifier for your end-user, which can help monitor and detect abuse.

Response

200 - application/json

A list of embedding vectors for the input text(s).

object
enum<string>

The object type, always list.

Available options:
list
Example:

"list"

data
object[]

An array of embedding objects, one per input text. When multiple inputs are provided, results are returned in the same order as the input.

model
string

The model used to generate the embeddings.

Example:

"text-embedding-3-small"

usage
object

Token usage statistics for this request.