Use the Anthropic Messages API through CometAPI to access Claude models with extended thinking, prompt caching, tool use, web search/fetch, streaming, and effort control.
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.cometapi.com",
    api_key="<COMETAPI_KEY>",
)

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="You are a helpful assistant.",
    messages=[
        {"role": "user", "content": "Hello, world"}
    ],
)
print(message.content[0].text)

The response body has this shape:

{
  "id": "<string>",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "<string>",
      "thinking": "<string>",
      "signature": "<string>",
      "id": "<string>",
      "name": "<string>",
      "input": {}
    }
  ],
  "model": "<string>",
  "stop_reason": "end_turn",
  "stop_sequence": "<string>",
  "usage": {
    "input_tokens": 123,
    "output_tokens": 123,
    "cache_creation_input_tokens": 123,
    "cache_read_input_tokens": 123,
    "cache_creation": {
      "ephemeral_5m_input_tokens": 123,
      "ephemeral_1h_input_tokens": 123
    }
  }
}

import anthropic

client = anthropic.Anthropic(
    base_url="https://api.cometapi.com",
    api_key="<COMETAPI_KEY>",
)

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)

print(message.content[0].text)
Both x-api-key and Authorization: Bearer are accepted for authentication. The official Anthropic SDKs send x-api-key by default.

Extended thinking is enabled with the thinking parameter. The response then includes thinking content blocks that show Claude's internal reasoning before the final answer.
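The two header styles can be sketched with a raw HTTP client. The endpoint path and header names below follow the Anthropic convention; treat this as an illustrative sketch and verify the details against CometAPI's reference.

```python
# Two equivalent ways to authenticate a raw POST to the Messages endpoint.
# Header names follow the Anthropic convention (illustrative sketch).
API_KEY = "<COMETAPI_KEY>"

headers_x_api_key = {
    "x-api-key": API_KEY,                  # default style used by the official SDKs
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}

headers_bearer = {
    "Authorization": f"Bearer {API_KEY}",  # also accepted
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}

# Either dict can then be used with, e.g.:
# requests.post("https://api.cometapi.com/v1/messages", headers=headers_bearer, json=payload)
```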
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000,
    },
    messages=[
        {"role": "user", "content": "Prove that there are infinitely many primes."}
    ],
)

for block in message.content:
    if block.type == "thinking":
        print(f"Thinking: {block.thinking[:200]}...")
    elif block.type == "text":
        print(f"Answer: {block.text}")
The minimum budget_tokens is 1,024. Thinking tokens count toward your max_tokens limit, so set max_tokens high enough to accommodate both the thinking and the response.

Prompt caching: add cache_control to the content blocks that should be cached:
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are an expert code reviewer. [Long detailed instructions...]",
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Review this code..."}],
)
Cache effectiveness is reported in the response's usage object:
- cache_creation_input_tokens: tokens written to the cache (billed at a higher rate)
- cache_read_input_tokens: tokens read from the cache (billed at a reduced rate)
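As a rough illustration of why those two rates matter, here is a sketch of the effective input cost of a cached prompt. The multipliers (1.25x for cache writes, 0.1x for cache reads, relative to the base input rate) are assumptions based on Anthropic's published 5-minute-cache pricing; verify them against CometAPI's own pricing.

```python
# Illustrative cost sketch: effective input-token cost with prompt caching.
# The multipliers are assumptions (Anthropic's published 5m-cache pricing);
# check CometAPI's pricing for the actual rates.
CACHE_WRITE_MULT = 1.25  # cache writes cost more up front
CACHE_READ_MULT = 0.10   # cache reads are heavily discounted

def effective_input_cost(usage: dict, base_rate_per_token: float) -> float:
    """Input cost of one request, given the usage dict from the response."""
    return base_rate_per_token * (
        usage.get("input_tokens", 0)
        + CACHE_WRITE_MULT * usage.get("cache_creation_input_tokens", 0)
        + CACHE_READ_MULT * usage.get("cache_read_input_tokens", 0)
    )

# First call writes the cache; later calls read it.
first = {"input_tokens": 50, "cache_creation_input_tokens": 2000, "cache_read_input_tokens": 0}
later = {"input_tokens": 50, "cache_creation_input_tokens": 0, "cache_read_input_tokens": 2000}

rate = 3e-6  # hypothetical: $3 per million input tokens
print(effective_input_cost(first, rate))  # cache write: higher cost on the first call
print(effective_input_cost(later, rate))  # cache read: the cached portion is ~10x cheaper
```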
Streaming: set stream: true to receive the response incrementally. Events arrive in this order:
- message_start: message metadata and initial usage
- content_block_start: marks the start of each content block
- content_block_delta: incremental text chunks (text_delta)
- content_block_stop: marks the end of each content block
- message_delta: final stop_reason and complete usage
- message_stop: signals the end of the stream

with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="")
Effort control: adjust how much work the model puts into a response via output_config.effort:
message = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=4096,
    messages=[
        {"role": "user", "content": "Summarize this briefly."}
    ],
    output_config={"effort": "low"},  # "low", "medium", or "high"
)
Web fetch: give Claude access to a specific URL with the web_fetch server tool:

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Analyze the content at https://arxiv.org/abs/1512.03385"}
    ],
    tools=[
        {"type": "web_fetch_20250910", "name": "web_fetch", "max_uses": 5}
    ],
)
Web search: let Claude search the web with the web_search server tool:

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What are the latest developments in AI?"}
    ],
    tools=[
        {"type": "web_search_20250305", "name": "web_search", "max_uses": 5}
    ],
)
Example response:

{
  "id": "msg_bdrk_01UjHdmSztrL7QYYm7CKBDFB",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello!"
    }
  ],
  "model": "claude-sonnet-4-6",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 19,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0,
    "cache_creation": {
      "ephemeral_5m_input_tokens": 0,
      "ephemeral_1h_input_tokens": 0
    },
    "output_tokens": 4
  }
}
| Feature | Anthropic Messages (/v1/messages) | OpenAI-Compatible (/v1/chat/completions) |
|---|---|---|
| Extended thinking | thinking parameter with budget_tokens | Not available |
| Prompt caching | cache_control on content blocks | Not available |
| Effort control | output_config.effort | Not available |
| Web fetch/search | Server tools (web_fetch, web_search) | Not available |
| Authentication header | x-api-key or Bearer | Bearer only |
| Response format | Anthropic format (content blocks) | OpenAI format (choices, message) |
| Models | Claude only | Multi-provider (GPT, Claude, Gemini, etc.) |
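To make the response-format row concrete, here is an illustrative converter from the Anthropic content-blocks shape to the OpenAI choices shape. The field names follow the two formats as described above; the finish_reason mapping is an assumption, and only text blocks are handled, so treat this as a sketch rather than a complete mapping:

```python
def anthropic_to_openai(msg: dict) -> dict:
    """Map an Anthropic Messages response dict to an OpenAI-style
    chat.completions dict (illustrative sketch; text blocks only)."""
    text = "".join(b["text"] for b in msg["content"] if b.get("type") == "text")
    # Assumed mapping between the two vocabularies for stop reasons.
    finish_map = {"end_turn": "stop", "max_tokens": "length", "tool_use": "tool_calls"}
    return {
        "id": msg["id"],
        "object": "chat.completion",
        "model": msg["model"],
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": finish_map.get(msg["stop_reason"], msg["stop_reason"]),
        }],
        "usage": {
            "prompt_tokens": msg["usage"]["input_tokens"],
            "completion_tokens": msg["usage"]["output_tokens"],
            "total_tokens": msg["usage"]["input_tokens"] + msg["usage"]["output_tokens"],
        },
    }

sample = {
    "id": "msg_123", "model": "claude-sonnet-4-6", "stop_reason": "end_turn",
    "content": [{"type": "text", "text": "Hello!"}],
    "usage": {"input_tokens": 19, "output_tokens": 4},
}
print(anthropic_to_openai(sample)["choices"][0]["message"]["content"])  # -> Hello!
```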
Your CometAPI key passed via the x-api-key header. Authorization: Bearer <key> is also supported.
The Anthropic API version to use. Defaults to 2023-06-01.
"2023-06-01"
Comma-separated list of beta features to enable. Examples: max-tokens-3-5-sonnet-2024-07-15, pdfs-2024-09-25, output-128k-2025-02-19.
The Claude model to use. See the Models page for current Claude model IDs.
"claude-sonnet-4-6"
The conversation messages. Must alternate between user and assistant roles. Each message's content can be a string or an array of content blocks (text, image, document, tool_use, tool_result). There is a limit of 100,000 messages per request.
The maximum number of tokens to generate. The model may stop before reaching this limit. When using thinking, the thinking tokens count towards this limit.
x >= 1
System prompt providing context and instructions to Claude. Can be a plain string or an array of content blocks (useful for prompt caching).
Controls randomness in the response. Range: 0.0–1.0. Use lower values for analytical tasks and higher values for creative tasks. Defaults to 1.0.
0 <= x <= 1
Nucleus sampling threshold. Only tokens with cumulative probability up to this value are considered. Range: 0.0–1.0. Use either temperature or top_p, not both.
0 <= x <= 1
Only sample from the top K most probable tokens. Recommended for advanced use cases only.
x >= 0
If true, stream the response incrementally using Server-Sent Events (SSE). Events include message_start, content_block_start, content_block_delta, content_block_stop, message_delta, and message_stop.
Custom strings that cause the model to stop generating when encountered. The stop sequence is not included in the response.
Enable extended thinking — Claude's step-by-step reasoning process. When enabled, the response includes thinking content blocks before the answer. Requires a minimum budget_tokens of 1,024.
Tools the model may use. Supports client-defined functions, web search (web_search_20250305), web fetch (web_fetch_20250910), code execution (code_execution_20250522), and more.
Controls how the model uses tools.
Request metadata for tracking and analytics.
Configuration for output behavior.
The service tier to use. auto tries priority capacity first, standard_only uses only standard capacity.
auto, standard_only
Successful response. When stream is true, the response is a stream of SSE events.
Unique identifier for this message (e.g., msg_01XFDUDYJgAACzvnptvVoYEL).
Always message.
message
Always assistant.
assistant
The response content blocks. May include text, thinking, tool_use, and other block types.
The specific model version that generated this response (e.g., claude-sonnet-4-6).
Why the model stopped generating.
end_turn, max_tokens, stop_sequence, tool_use, pause_turn
The stop sequence that caused the model to stop, if applicable.
Token usage statistics.