OpenAI API error: "Not supported in the v1/completions endpoint"

Key takeaways:

  • The “Not supported in the v1/completions endpoint” error arises when incompatible models (like GPT-3.5 or GPT-4 chat models) are called using the wrong API endpoint.

  • The /v1/completions endpoint is compatible with completion models such as davinci-002 but does not support chat models like GPT-3.5-turbo or GPT-4.

  • To use GPT-3.5 and GPT-4 chat models, switch to the /v1/chat/completions endpoint and call openai.chat.completions.create() with the messages parameter.

Running into the “OpenAI API error: Not supported in the v1/completions endpoint” can feel like hitting a brick wall in your project. This error pops up when you’re trying to get the API to do something it simply can’t, much like asking a calculator to make you a cup of coffee. It’s frustrating, time-consuming, and halts your progress. But don’t worry—there’s a straightforward fix. We’ll break down why this error happens and guide you step-by-step to solve it, so you can get back to building your AI application without the headache.

What’s causing the “Not supported in the v1/completions endpoint” error?

Imagine trying to tune into a radio station that doesn’t broadcast on your frequency—no matter how much you fiddle with the dial, you won’t hear a thing. This error happens when you request a model or feature from the OpenAI API that’s not available through the v1/completions endpoint. The Completion endpoint supports specific models designed for text completion tasks. If you try to use a model meant for something else, like chat or embeddings, the API throws up its hands and gives you this error.
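To make the mismatch concrete, here is a minimal sketch (pure Python, not part of the openai library) that mimics the server-side compatibility check behind this error. The function name, message wording, and model lists are illustrative, not exhaustive:

```python
# Hypothetical mirror of the API's endpoint/model compatibility check.
CHAT_ONLY_MODELS = {"gpt-3.5-turbo", "gpt-4", "gpt-4o"}  # served by /v1/chat/completions
COMPLETION_MODELS = {"davinci-002", "babbage-002"}       # served by /v1/completions

def check_completions_request(model: str) -> None:
    """Raise, as the API does, when a chat model is sent to /v1/completions."""
    if model in CHAT_ONLY_MODELS:
        raise ValueError(
            f"This is a chat model and not supported in the v1/completions endpoint: {model}"
        )

check_completions_request("davinci-002")   # accepted: a completion model
try:
    check_completions_request("gpt-4o")    # rejected: reproduces the error
except ValueError as err:
    print(err)
```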

The usual suspect is a chat-only model, such as gpt-3.5-turbo or gpt-4o, being passed to the completions endpoint. Let’s say you’re using Python with the openai library, and you write the following code:

import openai

openai.api_key = 'ADD YOUR API KEY HERE'

response = openai.completions.create(
    model='gpt-4o',  # chat model -- not accepted by /v1/completions
    prompt='Explain what is Educative in simple terms.',
    max_tokens=150
)
print(response.choices[0].text)

Why does this error occur? In essence, gpt-4o is a chat model and isn’t supported by the /v1/completions endpoint. The table below shows which model groups each endpoint supports:

API Endpoint            Model Group
--------------------    ----------------
/v1/chat/completions    GPT-4, GPT-3.5
/v1/completions         Davinci, Babbage

If we change the model name from gpt-4o to davinci-002, the code runs without hitting this error:

import openai

openai.api_key = 'ADD YOUR API KEY HERE'

response = openai.completions.create(
    model='davinci-002',
    prompt='Explain what is Educative in simple terms.',
    max_tokens=150
)
print(response.choices[0].text)

It works, right? We’ve fixed the error. But what if we want to use newer models like GPT-3.5 or GPT-4? In practice, the output of Davinci models isn’t on par with these chat models. To use them, we switch from the completions endpoint to the chat.completions endpoint and adjust the request accordingly. Here’s the fixed code:

import openai

openai.api_key = 'ADD YOUR API KEY HERE'

# Define the prompt
prompt = "Explain what is Educative."

# Create a completion using the chat completions API
response = openai.chat.completions.create(  # chat models go through /v1/chat/completions
    model="gpt-4",
    messages=[
        {"role": "user", "content": prompt}
    ],
    max_tokens=150
)

# Extract the assistant's reply
print(response.choices[0].message.content.strip())

So what did we change?

  1. Switched to the correct method: We replaced openai.completions.create() with openai.chat.completions.create(). This method is designed specifically for chat models like gpt-3.5-turbo and gpt-4.

  2. Updated the parameters: Replaced prompt with messages: Chat models expect a conversation history provided in the messages parameter, which is a list of dictionaries. Each dictionary represents a message with a role (user, assistant, or system) and content.

  3. Modified how we access the response: Instead of response.choices[0].text, we now use response.choices[0].message.content.strip() to get the assistant’s reply.
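The shape difference in step 3 is easier to see on plain dictionaries that imitate the two response payloads (the data below is simplified and illustrative, not real API output):

```python
# Simplified payloads modeled on the two endpoints' JSON responses.
completion_response = {
    "choices": [{"text": "Educative is an online learning platform."}]
}
chat_response = {
    "choices": [{"message": {"role": "assistant",
                             "content": "Educative is an online learning platform. "}}]
}

# /v1/completions: the generated text sits directly on the choice.
print(completion_response["choices"][0]["text"])

# /v1/chat/completions: the reply is nested inside a message object.
print(chat_response["choices"][0]["message"]["content"].strip())
```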

By switching to openai.chat.completions.create() and structuring our input as a conversation, we’re communicating with the GPT-4 model in the way it expects. This aligns our request with the API’s requirements for chat models, eliminating the error and allowing us to harness the full capabilities of GPT-4.

Conclusion

By making these changes, we’ve sidestepped the error and can continue building our AI application without unnecessary delays. Remember, when working with the OpenAI API:

  • Use the correct endpoint and method: Match the model type with the appropriate endpoint (Completion vs. ChatCompletion).

  • Structure your input properly: Use prompt for completion models and messages for chat models.

  • Access the response correctly: The way you extract the assistant’s reply differs between endpoints.
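Those three rules can be folded into one hypothetical helper (the function name and the model-prefix check are our own convention, not part of the openai SDK) that picks the right endpoint and payload shape from the model name:

```python
def build_request(model: str, prompt: str, max_tokens: int = 150) -> dict:
    """Return the endpoint and payload matching the model type (illustrative)."""
    if model.startswith(("gpt-3.5", "gpt-4")):  # assumption: chat models use these prefixes
        return {
            "endpoint": "/v1/chat/completions",
            "payload": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "max_tokens": max_tokens,
            },
        }
    return {
        "endpoint": "/v1/completions",
        "payload": {"model": model, "prompt": prompt, "max_tokens": max_tokens},
    }

print(build_request("gpt-4", "Explain what is Educative.")["endpoint"])
# → /v1/chat/completions
```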

Understanding these nuances ensures smooth sailing in your AI development journey. If you’re working with the OpenAI API and want to become a pro at building powerful NLP applications, our "Using OpenAI API for Natural Language Processing in Python" course is exactly what you need! Learn how to seamlessly integrate OpenAI into your Python projects, handle common API errors, and unlock the full potential of natural language processing.

Frequently asked questions



What models are compatible with the OpenAI v1/completions endpoint?

The /v1/completions endpoint supports completion models such as davinci-002 and babbage-002, which are designed specifically for text completion tasks.


How can I update my OpenAI API calls to use GPT-4 models effectively?

To use GPT-4 models, switch to the openai.chat.completions.create() method, which targets the /v1/chat/completions endpoint. Structure your input using the messages parameter to provide the necessary conversation context, ensuring that your requests align with the chat models’ expectations.

