The OpenAI API allows developers to integrate advanced natural language processing capabilities into applications, enabling human-like text generation based on user input.
The chat completions API supports a variety of tasks, including text classification, generation, and transformation, and responds to user messages with context-aware output.
Key request parameters include `model`, `messages`, `max_tokens`, `temperature`, and more. The API returns a JSON object that includes a unique ID and an array of `choices`, each containing valuable information about the generated response.
OpenAI's ChatGPT API is a service that allows developers to integrate natural language processing capabilities into applications, enabling interactions where the AI generates human-like text based on the input it receives. This API is designed to handle a wide range of conversational tasks, such as answering questions and providing recommendations, all based on the context supplied to it.
The chat completions API serves various functions on text, such as classification, generation, transformation, completing incomplete text, providing factual responses, and more. It requires an input message from the user along with its assigned role, then returns the output.
Let's examine the chat completions API in more detail, reviewing the request and response parameters.
Let’s see some essential request parameters for this API in the table below:
| Fields | Format | Type | Description |
|---|---|---|---|
| `model` | String | Required | This is the ID of the model that the chat completions endpoint will use. |
| `messages` | Object | Required | This provides the context for generating responses. It is an array of message objects, each with a `role` field (such as `system`, `user`, or `assistant`) and a `content` field holding the message text. |
| `max_tokens` | Integer | Optional | This is the maximum number of tokens to generate in the chat completion. |
| `temperature` | Float | Optional | This is the sampling temperature to employ, ranging from `0` to `2`. Higher values make the output more random, while lower values make it more focused and deterministic. |
| `top_p` | Float | Optional | Nucleus sampling is an alternative to temperature sampling in which the model evaluates the outcomes of tokens with top p probability mass. So 0.1 indicates that only the top 10% probability mass tokens will be evaluated. Default value: `1`. |
| `response_format` | Object | Optional | This is an object that specifies the format of the output. This parameter is compatible with the newer models. Setting it to `{ "type": "json_object" }` enables JSON mode, which constrains the model to generate valid JSON. |
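As a sketch, the request parameters above can be assembled into a request body like the following. All values, messages, and the model name are illustrative choices, not prescriptions:

```python
# Sketch of a chat completions request body using the parameters above.
# The values here are illustrative.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Classify this review: 'Great product!'"},
    ],
    "max_tokens": 50,                            # cap on generated tokens
    "temperature": 0.7,                          # sampling temperature
    "top_p": 1,                                  # nucleus sampling (default)
    "response_format": {"type": "json_object"},  # ask for JSON output
}

print(sorted(request_body))
```

Only `model` and `messages` are required; the optional fields can be omitted to fall back to their defaults.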
The response is a JSON object. Some essential attributes are given below:
| Fields | Format | Description |
|---|---|---|
| `id` | String | This is a unique ID for the chat completion. |
| `choices` | Array | It is an array of objects. Every object contains valuable information about the response. The size of the array equals the `n` parameter provided in the request. |

The `n` request parameter determines the number of chat completion choices to generate for each input message.
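To see how `id` and `choices` fit together, here is a hand-written stand-in for a response where `n` was set to `2` (this is an illustrative dictionary, not a real API reply):

```python
# Hand-written stand-in for a chat completions response with n=2.
response = {
    "id": "chatcmpl-abc123",  # unique ID for this completion
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "First answer."}, "finish_reason": "stop"},
        {"index": 1, "message": {"role": "assistant", "content": "Second answer."}, "finish_reason": "stop"},
    ],
}

# One choice per requested completion: len(choices) == n
print(response["id"], len(response["choices"]))
```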
Among the response parameters, the `choices` parameter contains the API-generated output. Let's look at the `choices` array to understand its structure and contents.
| Fields | Format | Description |
|---|---|---|
| `finish_reason` | String | This provides the reason the model stopped generating tokens: `stop` (a natural stop point or a stop sequence was reached), `length` (the `max_tokens` limit was hit), or `content_filter` (content was omitted due to a moderation flag). |
| `index` | Integer | This gives the index of the choice in a list of choices. |
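A common use of `finish_reason` is detecting truncated output. The sketch below runs on hand-written stand-in choices rather than a live response:

```python
# Stand-in choices list; in practice this comes from the response's choices.
choices = [
    {"index": 0, "finish_reason": "stop"},    # finished naturally
    {"index": 1, "finish_reason": "length"},  # hit the max_tokens limit
]

# Collect the indices of choices that were cut off by the token limit.
truncated = [c["index"] for c in choices if c["finish_reason"] == "length"]
print(truncated)  # [1]
```

A `length` finish reason usually means `max_tokens` should be raised or the prompt shortened.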
This API's input is an array of messages, each pairing a role (such as `system` or `user`) with text content; the output message has the same shape, pairing the `assistant` role with the generated text. By crafting the messages in this array carefully, we can direct the model to take specific actions: a well-designed prompt yields favorable output.
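Because input and output share the same role/content shape, the model's reply can be appended to the same array to continue a conversation. A minimal sketch, with illustrative message text:

```python
# Build the input array: each element pairs a role with text content.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize nucleus sampling in one line."},
]

# The model's reply comes back in the same role/content shape, so it can
# be appended before the next request (the content here is illustrative).
messages.append({"role": "assistant", "content": "It samples from the top-p probability mass."})

print([m["role"] for m in messages])  # ['system', 'user', 'assistant']
```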
Let's utilize the chat completions API to generate content about "Large language models." In the code widget below, we'll employ the `gpt-4o-mini` model for this task, with a `temperature` value of `0.8`.
Note: Before running the code, generate an API key at https://platform.openai.com/api-keys and replace `{{SECRET_KEY}}` with the generated API key in the code below.
```python
from openai import OpenAI

client = OpenAI(api_key="{{SECRET_KEY}}")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Write a tagline about large language models."}
    ],
    temperature=0.8
)

print(response.choices[0].message.content)
```
The code initializes an OpenAI client using a provided API key, then sends a request to generate a chat completion. It specifies the `gpt-4o-mini` model, includes both system and user messages, and sets the `temperature` parameter to `0.8`. Finally, it prints the generated text from the response.