| Status Code & Message | Explanation |
| --- | --- |
| 400 Bad Request | Check your request format; this is usually a client-side error. |
| 401 Invalid Token | API key verification failed. Try switching models to test whether your API key is correct; if other models work normally, contact the administrator for follow-up. |
| 403 Token Group XXX Has Been Disabled | Usually a token permission issue. If the error persists after creating and using a new token, contact the administrator to investigate. For example, O1 series models do not support the `system` parameter. |
| 404 Not Found | Check that the Base URL is filled in correctly; try adding `/v1` or a trailing slash `/`. |
| 413 Request Entity Too Large | The prompt may be too long. Shorten it and try again to confirm that a shorter prompt can be called normally. |
| 429 Current Group Upstream Load Is Saturated | OpenAI rate-limits individual accounts; 429 means a backend account's concurrent usage is too high and has hit that limit. Please retry the call. |
| 500 Internal Server Error | Internal server error, either on the proxy server or the OpenAI server, and unrelated to the user. Retry; if it recurs repeatedly, contact the administrator. |
| 503 No Available Channel for Model XXXX Under Current Group NNN | A backend management issue on the proxy platform. Contact the administrator to add this model, then retry the call. |
| 504 Gateway Timeout | The gateway failed to get a response from the upstream server within the allotted time. Retry; if it recurs repeatedly, contact the administrator. |
| 524 Connection Timeout | The server did not complete the request within the allotted time, possibly due to congestion on the cometapi channel. Retry; if it recurs repeatedly, contact the administrator. |
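The table suggests a simple split: 400/401/403/404/413 mean the request itself must be fixed, while 429/500/503/504/524 are transient and worth retrying. A minimal client-side sketch of that policy, using only the Python standard library (the URL, headers, and retry parameters below are placeholders, not part of this platform's documentation):

```python
import time
import urllib.request
import urllib.error

# Codes the table above treats as transient: a plain retry may succeed.
RETRYABLE = {429, 500, 503, 504, 524}

def is_retryable(status: int) -> bool:
    """True if the table recommends simply trying the call again."""
    return status in RETRYABLE

def call_with_retries(url: str, data: bytes, headers: dict,
                      max_attempts: int = 3, base_delay: float = 1.0) -> bytes:
    """POST with exponential backoff on the retryable codes above.

    url/headers are caller-supplied placeholders; 400/401/403/404/413
    are raised immediately, since retrying cannot fix the request.
    """
    for attempt in range(max_attempts):
        req = urllib.request.Request(url, data=data, headers=headers,
                                     method="POST")
        try:
            with urllib.request.urlopen(req, timeout=60) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if not is_retryable(e.code) or attempt == max_attempts - 1:
                raise  # client-side error, or retries exhausted
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

This keeps retry pressure low (a concern for the 429 case, where the backend account is already saturated) while surfacing client-side errors immediately so the request format, token, or Base URL can be corrected.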